
Tracker scanner camera

Instead of pointing the camera, have it learn to focus and steady its gaze
 

A round, 360-degree camera that uses image processing to follow and focus on only a certain area, just like the eye jumps to various regions of the image, darting back and forth in Raped Eye Movements, scanning whatever it sees, discarding most of the info, and focusing only on the important parts.

Why can't a camera do the same - but with no mechanical parts?

The end of the scanner, the unfocused images and all other Photographer Problems. Welcome to the Camera Problems era.

pashute, Oct 25 2011
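For concreteness, here is a minimal Python sketch (not pashute's design, just one reading of it) of what "no mechanical parts" could mean: the sensor always reads out the whole panoramic frame, and "pointing" is nothing more than a software crop around the current point of interest. The frame size, window size, and the electronic_saccade helper are assumptions for illustration, not a real camera API.

    import numpy as np

    FRAME_H, FRAME_W = 1024, 8192   # assumed resolution of the full panoramic sensor
    ROI_H, ROI_W = 512, 512         # assumed size of the "foveal" window

    def electronic_saccade(frame, center_yx):
        """Return a fixed-size crop around center_yx, clamped to the frame.

        A real 360-degree panorama would wrap the x axis instead of clamping
        (e.g. with np.take(..., mode='wrap')); clamping keeps the sketch short.
        """
        cy, cx = center_yx
        y0 = int(np.clip(cy - ROI_H // 2, 0, FRAME_H - ROI_H))
        x0 = int(np.clip(cx - ROI_W // 2, 0, FRAME_W - ROI_W))
        return frame[y0:y0 + ROI_H, x0:x0 + ROI_W]

    # Usage with a synthetic frame standing in for the sensor readout:
    frame = np.random.randint(0, 256, (FRAME_H, FRAME_W), dtype=np.uint8)
    fovea = electronic_saccade(frame, (500, 4000))
    print(fovea.shape)   # (512, 512)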

       "Raped Eye Movements" sounds somewhat discomforting...
hippo, Oct 25 2011
  

       // the eye goes jumping to various regions of the images darting back and forth in Rap(i)d Eye Movements scanning whatever it sees, discarding most of the info, and focusing only on the important parts //   

       I hate to sound like a know-it-all, but this is called 'sacceding' (sp? Root: 'saccede', pronounced like 'succeed'), and it is one of the things that robotics engineers have struggled to replicate for many years. Apparently it is a major stumbling block in allowing an advanced robotic eye to gather and process visual information as quickly and effectively as we do. Or so I have read, at any rate.   

       R.E.M. is something different; depending on who you ask, it's thought to be either a reflexive or somatic behavior linked to the perceived visual stimuli in our dreams, or simply the random firing of neurons that takes place in many of our motor centers while we rest (like twitching fingers, shrugging, etc.).
Alterother, Oct 25 2011
  

       //follows and focuses on only a certain area// How? If you are not going to have any moving parts, then the sensor of the camera must be able to see the entire field of view at the same time, in focus.   

       Now say you get that far. How does the camera know what is of interest before it does any image processing? It happens that the human brain does preliminary processing on the out-of-focus/out-of-interest area and uses that to indicate the next point of interest (plus lots of rapid random movements around the point of interest to fill in data). In order to develop that sort of data on a camera, you are still going to have to process the entire image.
MechE, Oct 25 2011
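One possible (and heavily simplified) answer to [MechE]'s "how does it know?" question is to let the camera process the whole frame, but only at low resolution: a crude saliency score on a downsampled copy picks the next point of interest for the full-resolution crop. The gradient-energy measure below is just an assumed stand-in for real pre-processing.

    import numpy as np

    def next_point_of_interest(frame, downsample=8):
        """Pick the (y, x) of the most 'salient' spot on a downsampled copy."""
        small = frame[::downsample, ::downsample].astype(np.float32)
        gy, gx = np.gradient(small)            # cheap local-contrast measure
        saliency = np.hypot(gy, gx)
        y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
        return y * downsample, x * downsample  # back to full-resolution coordinates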
  

       //I hate to sound like a know-it-all, but this is called 'sacceding' // You don't - it's "saccading".   

       You could sort of get there by just having a camera that captured a much wider field (a fish-eye lens) and then saved only the interesting region. But, as [MechE] points out, how does the camera know which is the interesting region? Plus, for any given level of sensor technology, you'll have fewer pixels in your region of interest than you could otherwise have.

       However, I guess that as pixel-count gets higher and higher, there'll come a point where it makes more sense to take a wider image than to increase resolution.   

       (But, before we expend pixels on wider views, they'll be used to get greater depth of field - Google "light field camera", which uses extra pixels to capture light from different directions.)   

       Sorry, drifting into a ramble here, but...   

       the fundamental problem is that we expect cameras to be as good as our eyes, but our eyes depend on really crap optics and very very clever processing. You never see what you're looking at: what you see is the model your brain makes, based on jumpy, patchy, blurry images and a lot of filling in. That's very hard to implement in a camera.
MaxwellBuchanan, Oct 25 2011
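[MaxwellBuchanan]'s pixel-budget point can be put in rough numbers. Assuming (purely for illustration) a sensor with 32768 pixels spread across the full 360 degrees, a 30-degree region of interest only ever gets a twelfth of them:

    sensor_width_px = 32768                       # assumed pixels across 360 degrees
    roi_degrees = 30                              # assumed width of the interesting region
    roi_px = sensor_width_px * roi_degrees / 360
    print(f"{roi_px:.0f} pixels across the ROI")  # ~2731, vs. all 32768 on a pointed camera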
  

       Let the camera saccade to anything that moves, and let the computer generate an image in which anything not moving is assumed to look the same as last time you foveated it*. Wouldn't work in all environments, but in some, at least, it might do a credible imitation of the primate visual system's \\model-building with crap optics\\

       *Don't video compression schemes work that way? Essentially, this system would record "pre-compressed" video.
mouseposture, Oct 26 2011
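A toy version of [mouseposture]'s scheme, in the spirit of the conditional-replenishment trick used by video codecs: keep a persistent canvas and only copy in the blocks of the new frame that have changed since the canvas was last updated. The block size and motion threshold below are assumptions, and real codecs are far more sophisticated.

    import numpy as np

    BLOCK = 32         # assumed block size, in pixels
    THRESHOLD = 12.0   # assumed mean absolute difference that counts as "motion"

    def replenish(canvas, frame):
        """Overwrite only the moving blocks of canvas with the new frame."""
        h, w = canvas.shape
        for y in range(0, h - BLOCK + 1, BLOCK):
            for x in range(0, w - BLOCK + 1, BLOCK):
                old = canvas[y:y + BLOCK, x:x + BLOCK].astype(np.float32)
                new = frame[y:y + BLOCK, x:x + BLOCK].astype(np.float32)
                if np.abs(new - old).mean() > THRESHOLD:
                    canvas[y:y + BLOCK, x:x + BLOCK] = frame[y:y + BLOCK, x:x + BLOCK]
        return canvas

    # Usage: anything that hasn't moved keeps its old appearance on the canvas.
    canvas = np.zeros((256, 256), dtype=np.uint8)
    frame = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    canvas = replenish(canvas, frame)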
  

       Thanks, [Max], I knew I had it wrong. It's just that I've spent a long time cultivating my largely undeserved reputation for never voluntarily fact-checking or citing my sources, and I'm not about to spoil that over a simple misspelling when I'm busy holding forth on a topic others know far more about.
Alterother, Oct 26 2011
  


 
