(The original version of this post was deleted)

In any location in a forest where there is even a tiny amount of the sky visible from most positions on the ground, one should be able to take numerous aerial photos at many different angles, then reconstruct a complete photo of the ground and structures beneath the canopy. So... fly over the area while taking a movie at an extremely high resolution and frame rate. From the viewing angle of each frame, there should be areas of the ground visible through the canopy.

To help distinguish between the canopy and the ground, one could use two cameras aimed exactly the same: one focused on the ground and the other on the canopy. The lenses used must not be of the advanced type that keep everything in sharp focus, because you need a distance-dependent blurring for this method to work.

As the aircraft passes over the terrain to be imaged, areas of visible ground will appear and disappear as the viewing angle changes. These areas will also tend to "travel" toward the bottoms of the frames during a fly-over, and so can be recognized partly on this basis and tracked until they disappear. The areas of ground imaged in the frames would need correcting for the changing viewing angles; the corrected image data would then be arranged and placed into a single image. For best results, aim for a very stable ground speed during filming, as GPS alone would not likely be accurate enough.

This would obviously be more successful in deciduous forests after the leaves have fallen. The idea was inspired by a History Channel show about the Bohemian Club.
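A minimal numpy sketch of the compositing step described above. Everything here is synthetic and illustrative (the toy ground image, the 60% per-frame occlusion rate, the five frames): each angle-corrected frame sees a different subset of the ground through the canopy, and the composite simply averages whatever was visible per pixel.

```python
import numpy as np

# Toy "true" ground image we are trying to reconstruct.
ground = np.arange(12, dtype=float).reshape(3, 4)

# Each frame is the same patch of ground seen from a different viewing
# angle, with canopy-occluded pixels marked NaN (after angle correction).
rng = np.random.default_rng(0)
frames = []
for _ in range(5):
    f = ground.copy()
    occluded = rng.random(ground.shape) < 0.6  # canopy blocks ~60% per frame
    f[occluded] = np.nan
    frames.append(f)

# Composite: for each pixel, average the frames that actually saw the ground.
stack = np.stack(frames)
composite = np.nanmean(stack, axis=0)

# Pixels never seen in any frame stay NaN; more passes shrink that set.
coverage = 1.0 - np.isnan(composite).mean()
```

Pixels that remain NaN are exactly the ones needing another pass at a different angle, which matches the multi-pass point made in the annotations below.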
Other uses would be the production of aerial photos with tree branches cleaned out of them. Such photos would be useful for creating maps of land with buildings and other features of interest.
Added Dec 17, 2012: I should have mentioned that the method would usually, if not always, require multiple passes. Originally, it was intended for extreme circumstances where areas are almost completely obscured by foliage.
-- Alvin, Nov 11 2011

Confocal Microscope Idea (LCD_20Pinhole_20Confocal_20Microscope): Similar photographic technique, totally different scale. [scad mientist, Nov 11 2011]

Low-Cost Computational Camera (CSU senior project) http://projects-web...design/AY13/camera/ : 2×2 RPi camera array for seeing through foliage [notexactly, Mar 06 2015]

Not sure of the method, but the idea is kind of neat.
-- doctorremulac3, Nov 11 2011

It sounds like you expect to get a full picture of the ground with a single pass. Based on your original assumption, "there is even a tiny amount of the sky visible from most positions on the ground," you will not. For example, if I stand in a grove of trees and there is only one visible patch of sky, east of me and 80 degrees above the horizon, and your plane flies north to south directly overhead, I won't be able to see it, and it won't be able to see me. To get a full picture with a tree canopy that thick, you need to fly back and forth enough times that you take a picture from every location in the sky. How many passes are required depends on the size of the holes in the trees. If there is a 1 ft hole in the trees and the plane is many times higher in altitude than the leaves, then a single pass will see a strip of ground about 1 ft wide.
Now if you change your assumption to require a less densely forested area, such that from any location on the ground, in any direction you look, there is at least one spot between straight up and X degrees above the horizon where you can see the sky, then this could theoretically work in a single pass, provided the camera being used has an angle of view of X or larger. The larger the view angle of the camera, the more resolution you need, but the denser the canopy you can see through. Removing tree branches after the leaves have fallen off ought to work really well with readily available camera technology.
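A back-of-envelope version of the pass-count estimate above, with illustrative numbers (the 1 ft gap comes from the annotation; the 500 ft swath is an assumption): when the aircraft flies much higher than the canopy, a gap of diameter d images a ground strip roughly d wide per pass, so covering a swath of width W needs about W / d parallel passes.

```python
# Illustrative numbers only: a dense canopy with ~1 ft gaps, and a
# 500 ft wide area to reconstruct.
hole_diameter_ft = 1.0   # typical gap in the canopy
swath_width_ft = 500.0   # width of the area of interest

# Each pass sweeps a ground strip about as wide as the gap itself,
# so the number of parallel passes is roughly the ratio of the two.
passes_needed = swath_width_ft / hole_diameter_ft  # 500 passes
```

This is why the method is realistic for sparse canopies in a single pass but turns into a grid-survey job for the "almost completely obscured" case the author mentions.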
I suspect that with current image processing you could do this without GPS, a steady airspeed, or two cameras with different focal lengths. My Sony DSC-TX55 has a mode where you move the camera in an arc while it takes many photographs. It then composites them into a 3D image that allows you to look behind objects on the screen by tilting it while viewing the image. I suspect this works by identifying and tracking features as they move across the field of view. Based on the relative speed at which different objects move, their relative distance from the camera can be determined. I think my camera just adjusts the position of all the photos so that the closest object is stationary. Your application of deleting the foreground objects to create a non-3D view of the background would require some different processing, but should be doable.
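The motion-parallax reasoning in that annotation can be sketched with a pinhole camera model (all numbers below are assumptions, not measured values): a feature at depth Z, seen from a camera translating at speed v with focal length f in pixels, sweeps across the image at roughly v * f / Z pixels per second, so relative apparent speed gives relative depth.

```python
def depth_from_flow(pixel_speed, camera_speed, focal_px):
    """Depth of a tracked feature from its apparent image speed.

    Pinhole model: pixel_speed ~ camera_speed * focal_px / depth,
    so depth = camera_speed * focal_px / pixel_speed.
    """
    return camera_speed * focal_px / pixel_speed

f_px = 1000.0  # focal length in pixels (assumed)
v = 50.0       # aircraft ground speed, m/s (assumed)

z_canopy = depth_from_flow(1000.0, v, f_px)  # fast-moving features: 50 m
z_ground = depth_from_flow(500.0, v, f_px)   # half the speed: 100 m

# The slow (far) layer is the ground; the fast (near) layer is canopy
# and can be masked out, with no GPS or second lens required.
```

This is the same trick the camera's sweep mode presumably uses, just run in reverse: instead of keeping the near layer stable, discard it.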
You could also have some challenges with your exposure settings. The ground under the canopy will be relatively dark compared to the leaves on top, so you'll probably need to overexpose the part of the image that you don't care about, but when you get to an open field, automatically adjust so you can see that as well. Some cameras automatically take multiple images with different exposure settings and composite them, so this should be a challenge, not a roadblock. Depending on the quality of your camera, when you overexpose, you will get some glow from the overexposed part of the image that washes out the edges of the darker portions you are trying to see, so that will reduce the effective size of each hole in the tree canopy.
-- scad mientist, Nov 11 2011

scad mientist, I should have mentioned that the method would usually, if not always, require multiple passes. Originally, it was intended for extreme circumstances where areas are almost completely obscured by foliage.
-- Alvin, Nov 11 2011

Isn't this Street View? ... minus the actual streets.
-- FlyingToaster, Nov 11 2011

FlyingToaster, this could improve Street View and any other mapping-related effort by removing clutter from photos, whether aerial or taken from the ground. It might also be useful for producing uncluttered photos in general.
-- Alvin, Nov 12 2011

I quite like the idea, but I would like to correct a minor detail: //The lenses used must not be of the advanced type that keep everything in sharp focus//
As most photographers know, the amount of depth of field (lots = everything in focus even at different distances; little = one thing in focus, everything else blurred) does not depend on the lens quality; that only determines how sharp the whole picture is (i.e. the relatively sharp bits). DoF is decreased by an increased aperture (i.e. the effective diameter of the lens) and increased by distance from the subject: the further away you go, the more equally sharp things at different distances become. In fact, it is generally the 'advanced type' lenses which are geared towards images with very little depth of field. An extreme example of the opposite is a pinhole camera, where there is no lens, yet things at all distances are pretty much equally sharp.
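Both dependencies in that correction can be checked with the standard near/far-limit approximation for depth of field (valid when the subject distance is well below the hyperfocal distance; the focal length, f-number, and circle-of-confusion values below are illustrative assumptions):

```python
def dof_approx(f_mm, n, c_mm, s_mm):
    """Approximate total depth of field: DoF ~ 2 * N * c * s^2 / f^2.

    Valid when subject distance s is much less than the hyperfocal
    distance. N is the f-number, so a *larger* aperture means a
    *smaller* N and hence less DoF, as the annotation says.
    """
    return 2 * n * c_mm * s_mm**2 / f_mm**2

f = 50.0    # focal length, mm (assumed)
c = 0.03    # circle of confusion, mm (common full-frame convention)
s = 5000.0  # subject distance, mm (5 m, assumed)

wide_open = dof_approx(f, 2.0, c, s)      # f/2: shallow DoF
stopped_down = dof_approx(f, 8.0, c, s)   # f/8: 4x the DoF
farther = dof_approx(f, 2.0, c, 2 * s)    # double the distance: 4x the DoF
```

So the idea's requirement of distance-dependent blur argues for a fast (wide-aperture) lens, not simply a "non-advanced" one.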
I would actually suggest a rangefinder mechanism, where two optical systems a known distance apart produce images that coincide only at the level in which we are interested. The catch is that the further apart the rangefinder bases are, the more accurate the system is; however, as they move apart, the things they see through the canopy will start not to coincide. Also, a computer system analysing it may think that leaves in the canopy coincide just because of similar patterns.
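The baseline trade-off in that rangefinder suggestion follows from the standard stereo relation (the focal length and depths below are assumed, illustrative numbers): depth Z = f * B / d for disparity d, and the depth error per pixel of disparity error scales as Z^2 / (f * B), so a longer baseline B is more accurate but sees more violently different views through the canopy.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a two-camera rangefinder: Z = f * B / d (pinhole model)."""
    return focal_px * baseline_m / disparity_px

f_px = 1000.0  # focal length in pixels (assumed)
z = 100.0      # true depth to the ground, m (assumed)

# Disparity seen at 100 m for two baselines:
d_short = f_px * 0.5 / z  # 5 px at a 0.5 m baseline
d_long = f_px * 5.0 / z   # 50 px at a 5 m (wing-length) baseline

# Depth uncertainty per pixel of disparity error: dZ ~ Z^2 / (f * B)
err_short = z**2 / (f_px * 0.5)  # 20 m per pixel of error
err_long = z**2 / (f_px * 5.0)   # 2 m per pixel: 10x finer depth
```

This is why mounting cameras along the wing, as suggested next, is attractive: it buys baseline (hence depth accuracy) without needing a second pass.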
We could still make use of the differing views from different camera positions, though, by mounting a few cameras along the wing of the plane.
[+] for concept though.
-- TomP, Nov 12 2011

I like it. "I 'can too' see the forest for the trees!"
-- 2 fries shy of a happy meal, Nov 12 2011

If you reverse this and state "there is a tiny amount of ground visible from a given position in the sky", it becomes more doable. But the closer you are to the obstacle blocking sight, the harder this becomes. For example, imagine your neighbor has a pet komodo dragon in his fenced yard and you want to see it. If there is a sliver between fence boards, one could take serial images through that sliver as you move along, then combine them into a fenceless image. Sometimes your eye can do this for you if you ride a bike past a fence with cracks at the correct speed.
So the trick would be for the plane to maximally utilize all visually clear spaces, creating a strip of visualized ground. If there were so few of these spaces that the strip was not useful, additional passes might be needed, as each approach angle might offer new clear spaces.
-- bungston, Nov 12 2011

Individual gaps between leaves act as pinholes when seen from below, as can be seen during solar eclipses. This makes me wonder how to exploit this the other way round, because it seems to me that the view from the right distance above the canopy could be photographed without a lens, though rather dimly.
-- nineteenthly, Nov 12 2011

agent orange?
-- not_morrison_rm, Nov 14 2011

[19] It might feel that way, but the piece of film would have to be proportionately much, much larger than the "pinhole" in order for it to work, as in a real pinhole camera.
-- FlyingToaster, Nov 15 2011

The bigger problem with the pinhole concept is that in a normal pinhole camera, the inside of the box (where the film is) is dark, and the only light is what enters through the pinhole. During the day there would be orders of magnitude more light coming from the surrounding treetops than from one small hole in the tree canopy.
This could almost work on a cloudy night to photograph a scene under the canopy that has its own light source, but I think you'd still be disappointed at wasting that very large piece of film, because leaves are usually translucent. Let's say a leaf blocks 99% of the light, but the area of leaves is 100 times the size of the hole. That results in just as much light hitting the film through the leaves as through the "pinhole".
-- scad mientist, Nov 15 2011

//The areas of ground image in the frames would need correcting for the changing viewing angles. The corrected image data would then be arranged and placed into a single image.// The only way to do that (except in Saskatchewan) would be to model the images as a 3D surface, which is a bonus payoff for all that imaging and processing.
-- spidermother, Nov 15 2011

I say "agent orange" one more time.
Or some cunning ruse involving silver nitrate sprayed onto the leaves and a flare dropped beneath the canopy?
-- not_morrison_rm, Nov 16 2011

Easily bakeable/widely baked as research projects. You just need a large enough light field/plenoptic/synthetic aperture camera or camera array, or one moving camera doing aperture synthesis like you said.
-- notexactly, Mar 02 2015

Could also be used where the photographer and subject are both stationary and on the ground but the obscurifying layer is moving, e.g. to take photographs through trees on a windy day.
-- hippo, Mar 02 2015

Or a photo of a tourist attraction without the tourists getting in the way. I remember reading a Lifehacker article a while back that said to take many photos from the same location, then take them into Photoshop and somehow find the most common value (mode) for each pixel and use those values to make the final image. That's essentially temporal aperture synthesis.
-- notexactly, Mar 02 2015

[approximately] Now that would be interesting - you could get photos of whatever tourist attraction you wanted (e.g. the Eiffel Tower, the Washington Monument) in the middle of the day, with absolutely no one in the picture.
-- hippo, Mar 02 2015
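The tourist-removal trick above can be sketched in a few lines of numpy, using a per-pixel median rather than the article's mode (for continuous pixel values the median is the usual robust choice). The scene, the "tourist" blob, and the frame count are all illustrative: as long as any given pixel is covered in only a minority of the shots, the per-pixel median recovers the static scene.

```python
import numpy as np

scene = np.full((4, 4), 100.0)  # the static attraction (toy image)

# Deterministic stand-in for wandering tourists: a dark blob that
# occupies a different column in each shot, so no pixel is covered
# in a majority of the 9 frames.
shots = []
for i in range(9):
    shot = scene.copy()
    shot[:, i % 4] = 20.0  # "tourist" covers one column in this shot
    shots.append(shot)

# Per-pixel median over time: transients (in the minority at every
# pixel) vanish, leaving only the static background.
clean = np.median(np.stack(shots), axis=0)
```

This is the same per-pixel compositing as the canopy idea, with the roles swapped: there the occluder is fixed and the camera moves; here the camera is fixed and the occluders move.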