So if you've got a camera that's moving, you can get 3D information from the footage.
So: get a bunch of computers (the public might even be able to help via JavaScript) to process footage from YouTube. Eventually the various models of streets and cities would build up and join, creating an ever-less-patchy 3D model of the entire world.
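Getting 3D from a moving camera is the structure-from-motion problem: once the camera's pose in two frames has been estimated, matched image points can be triangulated back into 3D. A minimal sketch of two-view linear (DLT) triangulation in Python/NumPy — every number here (focal length, baseline, test point) is invented for illustration, not from any real footage:

```python
import numpy as np

# Hypothetical two-view setup. Camera 1 sits at the origin; camera 2 has
# moved one unit sideways (the kind of baseline a moving camera gives free).
f = 800.0                                   # focal length in pixels (assumed)
K = np.array([[f, 0.0, 320.0],
              [0.0, f, 240.0],
              [0.0, 0.0, 1.0]])             # shared camera intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover one 3D point from its pixel
    coordinates in two views with known projection matrices."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                              # null vector of A (homogeneous)
    return X[:3] / X[3]                     # dehomogenise

# Sanity check: project a known point into both views, then recover it.
X_true = np.array([0.5, -0.2, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
X_rec = triangulate(P1, P2, x1, x2)
```

With noiseless matches the recovery is exact; real footage needs feature matching and outlier rejection before this step, which is where all the number crunching goes.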
[edit] And I don't mean Photosynth... That's not a 3D model, but a spatially-aware picture-displayer (or wasn't last time I looked). It shows the technology is there, though...
[edit 2] Also, you can get primitive 3D building data by looking at the shadows (on/in/and from Google Earth). It's not great, but a computer program could take a fair stab at the heights and shapes of buildings.
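The shadow trick is just trigonometry: given the sun's elevation angle (computable from the image's timestamp and location), building height is shadow length times the tangent of that angle. A toy sketch with made-up numbers:

```python
import math

def building_height(shadow_length_m, sun_elevation_deg):
    """Estimate a building's height from its shadow length and the
    sun's elevation angle above the horizon."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# With the sun 45 degrees up, shadow length equals building height.
h = building_height(20.0, 45.0)   # ~20 m
```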
Google Earth in 3D would be pretty cool, no?
http://www.ted.com/.../blaise_aguera.html Blaise Aguera y Arcas demos augmented-reality maps [lagomorph, Apr 02 2010]
Google Earth Hacks 3D Models
http://www.gearthha...cat10/3D-Models.htm Things are working towards this goal. [Aristotle, Apr 07 2010]
Indeed it would be very cool.
Stereographic aerial photography is widely baked in the GIS (geographical information systems) field, and is used to make relief and contour maps. You can either have two cameras in the plane, or take twice as many shots as you zoom along.
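The geometry behind those stereo pairs is the standard disparity relation: depth Z = f·B/d, where f is the focal length in pixels, B the baseline between the two exposures, and d the disparity. Relative elevation then falls out as the difference in recovered depth between the ground and a rooftop. A sketch with invented numbers (real aerial-survey parameters vary widely):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard stereo relation: depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Invented aerial-survey numbers: 5000 px focal length, 400 m between shots.
z_ground = depth_from_disparity(5000.0, 400.0, 500.0)   # 4000 m below camera
z_roof = depth_from_disparity(5000.0, 400.0, 501.0)     # slightly nearer
building_height_m = z_ground - z_roof                    # ~8 m tall
```

Note how a single pixel of extra disparity resolves roughly eight metres of height here — which is why the expensive part is high-resolution, well-calibrated imagery, not the maths.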
I have a friend who works on these exact photosets: converting stereographs, with computer assistance, into maps with terrain data.
Plug those image sets into Google Earth, tweak the viewing software slightly, use 3D display technology [x], and you're away.
The same principle could be applied to the Google Street View spycar: twice as many cameras, or twice as many shots.
There's also research, several years old now and previously cited elsewhere on the HB, that uses publicly available data (say, photos on Flickr tagged //trevi fountain, rome//) plus a looooooot of edge detection and number crunching to construct a 3D model of said landmark. It works amazingly well. The last time I looked (and you'll excuse me for not digging up a link, I pray), the method had been extended to construct a large section of the city around the Coliseum.
So yes, in essence, what you are saying can be done, and is being done. Can't say I would call it ~widely~ known to exist, but it is out there.
I would expect that someone in the Googleplex is investigating it as we speak. The main problem, I suspect, is that aerial stereography datasets are hideously expensive.
By // a bunch of computers (the public might even be able to use javascript) //, do you mean a distributed computing project, like SETI@home, Folding@home, etc.? Yes, sounds quite doable. I would wildly guess that the algorithm lends itself well to breaking down into small jobs: here's a hundred or so photos, see what edges you can match. The client could even trawl Flickr looking for the images.
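The server side of such a project would only need to carve the photo pool into independent work units, exactly as suggested: hand each volunteer client a hundred or so photos and collect the edge matches. A sketch of the job-splitting step (the URLs and chunk size are placeholders, not any real project's API):

```python
def make_work_units(photo_urls, chunk_size=100):
    """Split a big photo list into independent jobs that volunteer
    clients can download, run edge-matching on, and report back."""
    return [photo_urls[i:i + chunk_size]
            for i in range(0, len(photo_urls), chunk_size)]

# 250 placeholder URLs -> three jobs of 100, 100 and 50 photos.
urls = ["https://example.com/photos/%d.jpg" % i for i in range(250)]
jobs = make_work_units(urls)
```

Since jobs share no state, stragglers and dropouts are handled the same way BOINC-style projects do it: just reissue the unit to another client.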
You will always look out of the window m_rm...
unless it blue-screens on you.