window
a window in the fabric of space-time
3D cameras (which generate a depth value per pixel) are becoming cheaper and better. At some point in the future (say 2-3 years), depth cameras should become fairly standard items.
Consider, then, a tablet computer with a 6-degrees-of-freedom sensor (for example, a Polhemus sensor, or the sensor in the upcoming Nintendo Revolution controller) and a depth camera mounted in the frame. Such a machine would be able to generate a real-time, room-oriented 3D mesh of its surroundings.
The machine would also be able to render a room-oriented, real-time 3D model, linking the virtual 'camera' with the physical position/orientation of the tablet; the machine forms a 'window' onto the 3D model.
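Roughly, the per-frame pipeline might look like the sketch below: back-project each depth pixel into a 3D point, anchor it to the room using the tablet's pose at capture time, and build the virtual camera's view matrix from that same pose. The intrinsics (fx, fy, cx, cy) and the pose format are assumptions for illustration, not any particular camera's API.

import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # Back-project an HxW depth image (metres) into camera-space 3D points.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def points_to_world(points_cam, position, rotation):
    # Anchor the points to the room using the tablet's pose at capture time
    # (rotation: 3x3 orientation in world coords, position: 3-vector).
    return points_cam @ rotation.T + position

def view_matrix(position, rotation):
    # World-to-camera transform for the virtual 'camera', taken directly
    # from the same physical pose sensor, so the screen acts as a window.
    view = np.eye(4)
    view[:3, :3] = rotation.T
    view[:3, 3] = -rotation.T @ position
    return view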
This opens the way for the aforementioned 'window' - we connect two machines in different physical locations via the interweb. Alice sends Bob her realtime mesh; Bob sends his to Alice. Each can look 'through' their tablet into the other person's room. The tablet therefore becomes a 'window' directly connecting the two locations together (well, visually, anyway).
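The Alice/Bob exchange could be as simple as the length-prefixed swap sketched below; the wire format and helper names are made up for illustration, and a real version would want compression and incremental mesh updates rather than whole-mesh resends.

import pickle
import socket
import struct

def _recv_exact(sock, n):
    # Read exactly n bytes; sock.recv may return partial chunks.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the link")
        buf += chunk
    return buf

def send_mesh(sock, vertices, faces):
    # Length-prefixed blob: 4-byte big-endian size, then the pickled mesh.
    payload = pickle.dumps((vertices, faces))
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_mesh(sock):
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return pickle.loads(_recv_exact(sock, length))

# Each side connects (e.g. sock = socket.create_connection((peer_host, 5000))),
# then loops: send the local mesh, receive the remote one, and hand the remote
# mesh to the renderer driven by the local tablet's pose.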
depthcam
http://videothing.b...-vision-camera.html HD depthcam [bumhat, Feb 02 2006]
nintendo revolution
http://video.google...nintendo+revolution showing the 6-DOF sensor [bumhat, Feb 02 2006]
This sounds great. The result would be like looking at a 3D object with one eye, would it not?
I think that, to an extent, as you move your head around (with the position sensor fixed to it), the screen redraws the image as it should appear from that new angle.
But hidden objects would be difficult with one depth camera, unless there were more depth cameras...
No holograms involved. Each tablet is rendering a 2D view of a 3D dataset (mesh/textures, like in an arcade game).
It's the same as a webcam, except that the tablet is used as an interface to 'move' the webcam on the other side of the link. Instead of really moving the webcam, it's moving the virtual viewpoint onto the dataset.
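To make that concrete: the remote dataset stays fixed, and only the view transform (taken from the tablet's 6-DOF pose, as in the sketch above) changes each frame. A rough sketch, with the projection parameters picked arbitrarily:

import numpy as np

def perspective(fov_y_deg=60.0, aspect=4/3, near=0.1, far=10.0):
    # Standard perspective projection; the numbers are placeholders.
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def project(points_world, view, proj):
    # Same remote mesh every frame; only `view` (from the tablet pose) moves.
    homo = np.c_[points_world, np.ones(len(points_world))]
    clip = (proj @ view @ homo.T).T
    return clip[:, :3] / clip[:, 3:4]   # normalised device coordinates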
Indeed, this can be extended to multiple cameras, but the advantage of a single-portable-camera/screen form-factor is portability and privacy (i.e. you clear your mesh, then point your camera to see only the things you want to show).