halfbakery
Ultra-HMD
Virtual reality without the headache
Head-mounted displays (HMDs) cause headaches for two main reasons:
(1) "Lag" between head movement and screen update
(2) Inter-ocular convergence/focus mismatch.
Solution for (1): Use extra processors to pre-render images in the next most likely directions of movement (usually yaw-type movement of the head). OK, this is a fairly obvious fix, but wait until you read (2)!
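The pre-rendering in (1) needs a guess at where the head will point next. Here is a minimal sketch of that guess using simple dead reckoning from the measured yaw rate; the function name and the candidate-spreading heuristic are my own, purely illustrative:

```python
def predict_yaw_candidates(yaw_deg, yaw_velocity_dps, frame_dt=1/60, n=3):
    """Return the n most likely yaw angles for the next frame, best first.

    yaw_deg          -- current head yaw in degrees
    yaw_velocity_dps -- measured yaw rate in degrees/second
    frame_dt         -- time until the next frame (60 Hz assumed)
    n                -- how many candidate views to pre-render
    """
    # Dead-reckoned best guess: the head keeps turning at its current rate.
    best = yaw_deg + yaw_velocity_dps * frame_dt
    # Bracket that guess with nearby alternatives to absorb prediction error
    # (the 0.5 factors are made-up tuning constants).
    spread = max(abs(yaw_velocity_dps) * frame_dt * 0.5, 0.5)
    candidates = [best]
    for k in range(1, n):
        offset = spread * ((k + 1) // 2) * (1 if k % 2 else -1)
        candidates.append(best + offset)
    return candidates

print(predict_yaw_candidates(10.0, 120.0, frame_dt=0.25))  # → [40.0, 55.0, 25.0]
```

Each candidate angle would get its own speculative render on a spare processor; when the next frame is due, the view closest to the actual measured yaw is displayed.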
Solution for (2): In your typical HMD, the point of focus is set to infinity for comfort, so even if your eyes are converging on an object at middle distance, your eye lenses still have to stay relaxed at infinity. This is bad, and causes semi-permanent or permanent vision problems (several HMD studies have found such vision damage).
My fix for the problem is attaching an autofocus system to the LCD lenses (pointed through the human eye at the retina) to keep the HMD screen in focus no matter how much the eye changes focus. A feedback system through the autofocus sends focal measurement data to the image-processor CPU. In turn, the CPU applies digital blurring to the foreground, middle-ground and background objects as appropriate.
Now it is up to the user to keep their inter-ocular convergence and focal distance in a reasonable ratio (just as in the real world). The result would be amazingly convincing for virtual objects in the 1-5 m range.
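The blur half of this feedback loop can be sketched numerically. In this toy example (my own function name; `strength` is an assumed tuning constant, not from any HMD spec), the autofocus reports the eye's focal distance, and each pixel's blur radius grows with the dioptre difference between the pixel's depth and that focal plane, which is how optical defocus actually scales:

```python
import numpy as np

def circle_of_confusion(depth_m, eye_focus_m, strength=4.0):
    """Per-pixel blur radius (in pixels) for synthetic depth of field.

    Defocus scales with the dioptre difference |1/d - 1/f|, so pixels on
    the measured focal plane get radius 0 and near/far pixels blur
    progressively.
    """
    return strength * np.abs(1.0 / depth_m - 1.0 / eye_focus_m)

# Eye focused at 2 m; a near pixel, an in-focus pixel, and a far pixel.
depth = np.array([0.5, 2.0, 10.0])   # metres
radii = circle_of_confusion(depth, 2.0)
print(radii)  # near pixel blurs most, in-focus pixel not at all
```

The resulting radius map would drive the actual blur filter on the image processor; note the asymmetry it captures — a pixel 1.5 m in front of the focal plane blurs far more than one 8 m behind it.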
[I've greedily sat on this idea for a good 10 years, but have always been too lazy to prototype it. So here it is for the world's benefit! HMD designers: here's your big break!] ;)
PLEASE POST BACK HERE IF YOU ATTEMPT A PROTOTYPE! I'd love to know if someone used this idea.
Eye Cam
http://www.msnbc.ms...2988/site/newsweek/
Possible technology for feature 2 [krelnik, Oct 04 2004, last modified Oct 05 2004]
I'm no expert in this area, but some thoughts:

1) I wouldn't have thought the eyestrain problem was due to too large a depth of field (and thus the solution being selective blurring); rather, surely the problem is that the screen appears (to the eye) at a fixed distance, which the eye focuses on. That distance is determined by the physical distance of the display panel and the lenses in between. So I would suggest that one or both of these would need to be adjusted in real time instead of mere digital blurring.

2) You'll have to explain how the system detects the eye's focus state. Presumably devices used in eye examinations do this, so the idea here does something similar? Is it possible to do this without interfering with/obstructing the displayed image?
I linked to an article that talks about some technology that could be used to solve the second problem. It can measure where your eyes are pointed, and therefore what they are focusing on and which parts of the image should be blurred.

I think this is an interesting idea; I imagine someone somewhere is already working on it.
1: The system will adjust the focus of the optics so that the focal plane of the viewed image is at the proper depth of the object being viewed.
2: The system will create virtual depth of field by selectively applying synthetic focal blur to the image.
3: The system will pre-render frames outside of the current view direction to compensate for real-time render lag.

Addressing these requirements:
1: Not particularly difficult, assuming your optics are fast enough. A secondary optic system can project a grid image which will be viewed in the corneal reflection. Where the eye is pointed is not really important, as it is possible to look at one object but focus elsewhere; determining the approximate focal plane matters more than determining the viewed object.
2: Once the focal plane is determined, focal blur should be easy to implement. While a genuine focal blur effect via offset multisampling is computationally intensive, a false focal blur implemented as a depth-field postprocess effect is simple and fast, and could work very well.
3: Rendering complete frames is not necessary. If an individual field is rendered to a view frustum larger than projected, the final step of image processing could simply be to select the appropriate portion of the rendered image to display, and center it properly. While this is more computationally intensive than rendering a properly cropped frustum, it is much less intensive than rendering several fully developed alternate frames.
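That render-large, crop-late step can be sketched with a toy one-row "frame" (illustrative names; this assumes pure yaw movement and a fixed degrees-per-pixel mapping, which is only roughly true for a real lens):

```python
import numpy as np

def late_crop(oversized_frame, render_yaw_deg, latest_yaw_deg,
              out_w, deg_per_px):
    """Select the sub-image of an over-rendered frame matching current yaw.

    Just before scan-out, the latest head yaw is re-sampled and the crop
    window is shifted accordingly, hiding render lag for small movements.
    """
    h, w = oversized_frame.shape[:2]
    center = w // 2  # column corresponding to the yaw the frame was rendered at
    # Convert the yaw change since render time into a pixel offset.
    shift = int(round((latest_yaw_deg - render_yaw_deg) / deg_per_px))
    left = center - out_w // 2 + shift
    left = max(0, min(left, w - out_w))  # clamp inside the rendered margin
    return oversized_frame[:, left:left + out_w]

frame = np.arange(16).reshape(1, 16)   # 1x16 toy "frame", columns 0..15
view = late_crop(frame, render_yaw_deg=0.0, latest_yaw_deg=2.0,
                 out_w=8, deg_per_px=1.0)
print(view)  # crop window shifted right by 2 columns vs. the centered crop
```

The clamp is the failure mode to watch: if the head turns past the rendered margin before scan-out, the crop runs out of pixels and the edge of the view goes stale.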
Inter-ocular convergence is not a problem, as the same system that determines the "look-at vector" and focal-plane parameters of the eye can determine exactly how to adjust the optics to an individual user.
While a fully developed commercial application of this technology is not yet available, I'm fairly certain (read: first-hand experience with such devices) that it is already being developed. I can't bun it as new, but since there's no readily available information yet, I can't bone it as "widely known to exist" either. Kudos for pointing out possible solutions to problems which plague virtual-reality display systems.

1. Yes, both are a problem and both are fixed by the system.
2. If you put an autofocus system in front of the LCD, and make the autofocus point through the cornea to keep the retina in focus, then (according to my understanding of the symmetry of optics) the cornea will be unable to UNBLUR the LCD, since the autofocus will keep re-adjusting.

Yes, the speed of the autofocus system will be an issue, but SLR cameras have very fast ones, and I think there is a (very brief) period of brain re-adjustment after a saccade ("eye-twitch") and/or refocus.
FreeFall: good explanations. You obviously know your stuff.