halfbakery
LCD windshield
Block out the sun or bright headlights by selectively darkening parts of the windshield
Instead of a visor that requires you to keep moving your head to keep the sun out of your eyes, why not have a "visor" that tracks your head? My implementation:
Add a grid of LCD squares (or hexagons or whatever) to the windshield. Each can turn on to darken a small portion of the windshield.
A camera facing the driver from the dashboard will sense which part of the scene is the driver's eye socket area. This is the most complicated part of the system, but not as complicated as it might seem: face-tracking software is quite robust and established. It doesn't need to track the driver's eyes, calculate where the eyes are in space, calculate trajectories from other bright light sources, or anything like that. It just needs to be able to say "This region of the scene contains the driver's eyes".
It then "scans" the entire windshield:
1. Take a frame with the camera.
2. Darken an LCD square.
3. Take another frame.
4. Compare the last two frames in the eye socket area.
5a. If the eye socket area's average brightness has not changed much, turn the LCD square back off.
5b. If the eye socket area is significantly darker than the previous frame, that LCD square is casting a shadow from a bright light source. Good. Leave it turned on for this cycle.
6. Go to the next LCD square and start over at step 1.
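The scan loop above can be sketched in code. This is a minimal sketch, not a definitive implementation: the camera, LCD-grid, and face-detector interfaces are hypothetical stand-ins for hardware, and the 15% shadow threshold is an arbitrary placeholder.

```python
# Sketch of the per-square scan loop described in steps 1-6.
# `camera`, `grid`, and `detector` are hypothetical interfaces, not real APIs.

import numpy as np

THRESHOLD = 0.15  # assumed: fractional brightness drop that counts as a shadow


def mean_brightness(frame, region):
    """Average pixel intensity inside a (top, bottom, left, right) region."""
    top, bottom, left, right = region
    return frame[top:bottom, left:right].mean()


def scan_cycle(camera, grid, detector):
    """One full pass over every LCD square in the windshield grid."""
    for square in grid.squares():
        before = camera.capture_frame()                # step 1
        eyes = detector.detect_eye_region(before)      # "this region contains the eyes"
        grid.set_square(square, darkened=True)         # step 2
        after = camera.capture_frame()                 # step 3
        drop = 1.0 - mean_brightness(after, eyes) / mean_brightness(before, eyes)
        if drop < THRESHOLD:
            # Step 5a: no shadow on the eye region, so this square
            # isn't blocking a bright source. Turn it back off.
            grid.set_square(square, darkened=False)
        # Step 5b: otherwise leave it darkened for this cycle.
```

Note that each square is judged only by the before/after difference it causes in the eye region, so the state of the other squares doesn't matter during its measurement.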
In this way, with a fast enough scan time (LCD TVs have response times in the milliseconds) and fast enough camera, the driver will not see the scanning action. Each LCD square will flicker on for a fraction of a second and back off again. The driver will just see a darkening of regions of the windshield where there are bright lights. If they move their head from side to side, a kind of blocky pattern of darkness will track the sun's position.
This avoids computational complexity and WIBNI functionality of other similar ideas I had. It's the simplest design I can think of that could actually be built and work.
In early models, a switch will be placed where the visor traditionally goes that will turn off the device in an emergency or malfunction.
The device could also double as a heads-up display.
Same idea, fewer details
Anti-Dazzle_20Windshields already on the halfbakery [BJS, Dec 11 2006]
same idea with detail
Localized_20Glare_20Control [jhomrighaus, Dec 11 2006]
Structured light
http://en.wikipedia...ki/Structured_light Casting coded shadows on a scene using darkened pixels in a projector to take measurements of the scene [omegatron, Jan 17 2010]
This exact idea is already on the halfbakery; the only difference is that yours gives much more detail about how it could actually work.

[marked-for-deletion] Redundant, see links.
Theirs: "A computer would then calculate the point at which the light rays intersect the glass panel. It would then selectively shade that area using a formula that is adjustable by individual users (to determine area, level of opacity, etc.)"

"That seems to be a very complex system for a modest goal."

"The amount and nature of the object tracking, gaze and ray-tracing involved in this idea is a bit far-fetched, especially with multiple light sources."

Mine: "It doesn't need to track the driver's eyes, calculate where the eyes are in space, calculate trajectories from other bright light sources, or anything like that."

// That's a quote from your post, yet you say it doesn't track the driver's eyes? //

Correct. It takes pictures of the driver and detects which part of the scene *contains* the driver's eyes, but that's it. It doesn't try to figure out where they are looking.

It doesn't need to calculate the eyes' position in space, track where the eyes are pointing, or calculate the intersection of the windshield and every line from both eyes to every bright light source in front of the car (which would require another camera and even more processing). All it does is detect whether a given LCD square is casting a shadow on the top half of the driver's face. Much, much simpler processing.
Which involves processing for every single pixel on the windshield every single time. I would posit that your system would require a significantly higher amount of processing to achieve the same goal. Plus, yours may lead to blind spots that could create a hazard, whereas sensing the environment and adjusting only the areas where needed would be much safer. Also, in order to do what you are describing, the system would need to turn off all the pixels, measure for a shadow, then move to the next pixel; otherwise the system would not know whether a pixel is having an impact. Thus the system would have to blink the entire system on and off, negating any effect. It would also have serious issues with light sources of various intensities or varying directions, which would cast light on the face and make the system think a pixel had no effect when a different light source was actually responsible.

As to my system, exact tracking of the eyes' direction of focus is not required, only their location and the head's orientation; the system then dims all incident sources that meet the criteria. This is pretty basic math, easily handled by any modern computing system. Only one camera is required, with a field of view of 180 degrees (360-degree cameras are used on robotic vehicles). This would require a basic tuning and setup procedure, and then the user would tune in their desired settings over time.

The redundancy is that there is already an idea here that utilizes an electronic system that utilizes basic facial recognition to control an LCD windshield, and an idea that utilizes a similar system to control glare on other transparent surfaces.
// Which involves processing for every single pixel on the windshield every single time.

Yep. I'm going to start calling them "wixels": windshield pixel elements. :-)

// I would posit that your system would require a significantly higher amount of processing to achieve the same goal.

// plus yours may lead to blind spots that could create a hazard

By what mechanism? Any method could have blind spots or flaws that could be fixed in the design process. Do you see something specific about this method that would lead to blind spots?

// Also in order to do what you are describing the system would need to turn off all the pixels, measure for a shadow, then move to the next pixel, otherwise the system would not know if a pixel is having an impact or not.

It only has to measure the difference between the scene with one wixel turned off and then with that wixel turned on. It doesn't matter what the others are doing at the time.

Comparison of brightness between two frames is trivial, and would only need to be calculated for the area around the eyes, not the entire scene the camera can see.
// thus the system would have to blink the entire system on and off, thus negating any effect.

Not the entire windshield, but yes, each wixel would be off for a few milliseconds every cycle. How would that negate the effect, though?

// It would also have serious issues with light sources of various intensities or varying directions

Constantly flickering lights (run on high-frequency AC?) might be a problem. I don't think there's much intersection between "lights bright enough to block" and "constantly flickering lights" on the road, though. You mostly just want to block the sun and high beams from oncoming traffic, both of which are constant in brightness on millisecond timescales.
// As to my system, exact tracking of the eyes' direction of focus is not required

Well, that's what "eye tracking" means.

// and the head's orientation

// This is pretty basic math

No. That's not trivial at all.
You'll need at least two cameras for the driver and two for the scene in front of the car. You would have to both track the movement of the objects (this object is a head, this object is the sun, now they've moved here) and also measure their position and distance accurately from stereoscopic data (not easy).

My system needs only one camera, and it can tolerate a lot of error.

Instead of computationally raytracing from the sun to the person's eye, it lets the light from the sun do the raytracing. :-)

Hmm... The tracking of the face's position could be made even simpler with a two-color camera: half the pixels' filters would respond to the range of visible light, and the other half to the infrared emitted by human skin, or reflected characteristically by human skin and not by other things.
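That two-color trick could be sketched as follows. Everything here is an assumption rather than a worked-out sensor design: the IR/visible ratio test, the threshold value, and the two aligned frames are hypothetical.

```python
# Sketch of the two-color face localization idea: pixels where the IR band
# is much brighter than the visible band are treated as candidate skin.
# The ratio test and RATIO_THRESHOLD are assumptions, not measured values.

import numpy as np

RATIO_THRESHOLD = 1.5  # assumed: skin looks this much brighter in IR than visible


def skin_mask(visible, infrared, eps=1e-6):
    """Boolean mask of pixels whose IR/visible ratio suggests skin."""
    ratio = infrared / (visible + eps)
    return ratio > RATIO_THRESHOLD


def face_bounding_box(mask):
    """Smallest (top, bottom, left, right) box containing all candidate
    skin pixels, or None if no pixel passed the ratio test."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()
```

A simple ratio threshold like this would of course need tuning against real skin reflectance data; the point is only that locating the face reduces to cheap per-pixel arithmetic.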
// The redundancy is that there is already an idea here that utilizes an electronic system that utilizes basic facial recognition to control an LCD windshield

But it's a different implementation. How similar do they have to be to be redundant?

If it's just the basic idea that is redundant, then your 2006 idea should also be deleted for redundancy with the 2004 anti-dazzle idea, which doesn't even explain how the light sources are tracked.

What a wuss. A basic parabolic mirror sensor can readily determine the angle of incidence of the incoming light.
In your system you would need a hell of a camera to determine if a 1mm pixel has changed the light level. Add more than one source (i.e. two oncoming cars some distance apart), and blocking only one of the two sources would have no discernible effect.

Your system will need to be able to identify a face and its position, and whether the change it detects is a zit or a "wixel" doing its thing. That sounds awfully complicated to me.

I'm not sure what to tell you, but no one seemed to think that mine was redundant back when I posted it, and several thought it was a good idea. Until you got all miffed, no one seemed to dislike it. Sorry!
// a basic parabolic mirror sensor can readily determine the angle of incidence of the incoming light.

And distance? And the position of the driver's eyes? For multiple light sources at once? And calculate the intersection of the windshield and the line joining those two points?

// That sounds awfully complicated to me.

Then you don't understand it. Try reading it again.

Go back and read my objections to your approach; you have not addressed them. Yours is not nearly as simple as you explain it.
Some simple signal-emitting eyeglasses, or just the frames for those with good eyesight, would allow you to skip the challenging eye-recognition software and camera. (Thanks, jhomrighaus, for correcting my recent inaugural submission.)
// What a wuss. A basic parabolic mirror sensor can readily determine the angle of incidence of the incoming light.

Can you explain how that works? Does it work with multiple sources of incoming light: the sun, specular reflections, headlights from multiple cars at night? And you're going to do trigonometry based on head-position detection in a 3D model of the scene to figure out which pixels to darken?

// In your system you would need a hell of a camera to determine if a 1mm pixel has changed the light level

The "pixels" in the windshield would be rather large, not 1 mm; only a few hundred per windshield. The darkening effect would be obvious to the camera (a drop in light level where the pixel casts a shadow on the eye region, without a drop elsewhere, coincident with darkening a particular pixel), but each pixel would turn on for only a few milliseconds in scanning mode, too fast for the person to notice anything except a small darkening of the entire screen.
// add more than one source (i.e. 2 oncoming cars some distance apart) and blocking only one of the 2 sources would have no discernible effect.

It would automatically block all sources.

// your system will need to be able to identify a face, and its position

Even cheap webcams can do this now.

// and whether the change it detects is a zit or a "wixel" doing its thing. That sounds awfully complicated to me.

?? Zits are very small and wouldn't change from one frame to the next. I don't understand what this has to do with anything.