3D Video Camera
To accompany the Flapping Sheet Hologram or other 3D projection technologies.
First of all, this is not an idea for a stereoscopic 3D camera. Those are widely baked and not suitable for such display devices.
This idea is for digitally recording objects in 3 dimensions. This camera will be very similar to a regular digital video camera, with two physical differences.
First, it will have to be capable of much faster refresh rates. Second, it will have a small, light lens that will vibrate along the axis through the center of the lens. For each cycle of vibration, the camera will have to capture a number of frames equal to twice the desired z resolution (z being the axis through the center of the lens). Each of these images is passed to a digital filter that deletes all data that is not sharply in focus.
This setup will result in a 3D recording of objects. Let's take a red sphere as an example. When the closest part of the sphere is in focus, the camera will record a red point. The next frame will be a small red ring, then a larger red ring, and so on. After half a vibration cycle you will have recorded half of the sphere in 3D. Of course, if you want to record more than one side you'd need more than one camera (and know the distance and direction between the 2+ cameras).
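A minimal sketch of how the "keep only what is sharply in focus" step might look in software, assuming each frame arrives tagged with the lens position z at which it was captured. The local-Laplacian sharpness measure, the averaging window and the threshold are illustrative assumptions, not anything specified in the idea:

```python
# Hypothetical depth-from-focus filter: every frame is taken at a known lens
# position z; pixels whose local sharpness exceeds a threshold are kept as
# (x, y, z, colour) points, and everything "fuzzy" is discarded.
import numpy as np
from scipy import ndimage

def in_focus_points(frames, z_positions, threshold=50.0, window=5):
    """frames: list of HxW greyscale arrays; z_positions: lens depth per frame."""
    points = []
    for frame, z in zip(frames, z_positions):
        # Local sharpness: magnitude of the Laplacian, averaged over a small window.
        lap = np.abs(ndimage.laplace(frame.astype(float)))
        sharpness = ndimage.uniform_filter(lap, size=window)
        ys, xs = np.nonzero(sharpness > threshold)
        for y, x in zip(ys, xs):
            points.append((x, y, z, frame[y, x]))  # keep only sharply focused pixels
    return points
```

Sweeping the lens through half a vibration cycle and running each frame through a filter like this would give a cloud of (x, y, z, colour) points: exactly the point, then the growing red rings, described above.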
Flapping Sheet Hologram
http://www.halfbake..._20Sheet_20Hologram We have thoroughly discussed details of halfbaked and fullbaked display technologies. [Worldgineer, Oct 05 2004, last modified Oct 17 2004]
//digital filter that deletes all data that is not sharply in focus//
I'm having a hard time imagining how this would work in practice.
OK, so I admit I don't know of a sharpness-based filter. How I imagine it would work is by analyzing how "sharp" or "fuzzy" something is and cutting out everything fuzzy. I guess I'm hoping that if the human mind can do something like this, we can build software to do it.
//if you want to record more than one side you'd need more than one camera (and know the distance and direction between the 2+ cameras).//
[Shz] Are you agreeing with part of my idea, or trying to say you need more than one camera for 3D? If the latter, I'd disagree.
In essence, yes, this would be fine. The sticking point really is that "in focus" filter. Fuzziness vs. sharpness wouldn't really help: we judge distance based on parallax, focusing and a mental map of what's going on, not so much on fuzziness. And lots of perfectly in-focus things would look fuzzy, like a blank curtain or any other gently curving surface.
However, here is a suggestion. If you have two cameras side by side behind the synchronous lens, you put a smaller aperture in front of one than the other. Now anything in the focal plane will look the same regardless of which aperture it is seen through, but things outside the focal plane won't. So your digital camera could perhaps compare the two images and work out what's in focus and what isn't. The only problem is that the performance of this system would decay with distance, since you can focus to infinity and be focused on things anywhere between 100 m and 100 miles without noticing any difference. If this is for a volumetric display (i.e. the display is inside a fixed volume) then that shouldn't matter.
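A rough sketch of that comparison, assuming the two frames are already aligned and exposure-matched (a smaller aperture admits less light, so some normalisation would be needed in practice); the averaging window and tolerance are made-up parameters:

```python
# Hypothetical two-aperture comparison: the same scene is imaged through a wide
# aperture and a narrow one (deeper depth of field). The frames only agree
# where the subject lies near the focal plane.
import numpy as np
from scipy import ndimage

def focal_plane_mask(wide_img, narrow_img, tolerance=5.0, window=7):
    """Both inputs are HxW greyscale arrays of the same scene, exposure-matched.
    Returns a boolean mask that is True where the images agree, i.e. roughly in focus."""
    diff = np.abs(wide_img.astype(float) - narrow_img.astype(float))
    local_diff = ndimage.uniform_filter(diff, size=window)  # average disagreement over a window
    return local_diff < tolerance
```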
Pretty good idea, though. My only reservation about the physical construction would be the sheer speed of the synchronous lens, but that could probably be got around with sufficiently high-quality components and miniaturisation.
[imagin8or] Sorry I missed your anno. I disagree that //lots of perfectly in focus things would be fuzzy//. In your curtain example, two vertical lines on each curtain ridge would be in focus. For curved surfaces there would be one plane in focus.
//we judge distance based on parallax// This is partially true, but you're missing a big piece. Close one eye and look at something far away, then at something near you. Your eye automatically focuses on the object it's looking at. This is the effect I'm talking about: your brain's ability to judge when something's in focus. The curvature of your eye's lens could then be used to determine the distance to the object in focus (though I assume it's more precise for humans to use two eyes and parallax to determine this).
Great idea. You could judge the distance by recording where the lens is along the z axis. Autofocus systems already try to determine what's "fuzzy" and what's not. Your setup could use an enhanced version of these algorithms, since it has data on what just became "fuzzier" and what became "sharper."
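For what it's worth, a common contrast-based focus score of the sort autofocus systems use is the variance of the Laplacian. A minimal sketch of comparing it across recorded lens positions; the function names and the idea of picking the sharpest z per region are my own illustration, not part of the annotation:

```python
# Hypothetical contrast-based focus measure: higher variance of the Laplacian
# means more fine detail, i.e. a sharper image of that region.
import numpy as np
from scipy import ndimage

def sharpness_score(region):
    """Variance of the Laplacian over an image region."""
    return ndimage.laplace(region.astype(float)).var()

def sharpest_z(regions, z_positions):
    """Given the same image region captured at several recorded lens positions,
    return the z at which it scored sharpest (its estimated depth)."""
    scores = [sharpness_score(r) for r in regions]
    return z_positions[int(np.argmax(scores))]
```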