I've seen videos where events like car crashes are filmed from multiple video sources of different qualities, including smartphones.

What would be nice is something like a "video photosynth", where you take multiple videos and work out where each one sits in the visual space of the scene. If you can work out where each camera is in visual space, perhaps you can also work out the exact timing of each video clip.

So imagine being able to see a grey dot cloud representing the 3D structure of the immobile objects in the scene (like walls), with each video clip represented by a flashlight that shows an animated point cloud of the scene at that moment.

This might help with visualizing a filmed crime scene, especially if you can jump to any point in the timeline.
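One piece of this, the "exact timing" part, is at least approachable when the clips have usable audio: cross-correlating two soundtracks gives an estimate of their relative start offset. Below is a minimal sketch (Python with NumPy/SciPy, synthetic audio standing in for real clips); real footage from different phones would also need audio extraction (e.g. via ffmpeg) and drift correction, none of which is shown here. Registering each camera in the visual space of the scene is the much harder part, which the annotations below get into.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

SAMPLE_RATE = 8_000  # assumed common sample rate for every clip's soundtrack


def estimate_offset_seconds(a, b, rate=SAMPLE_RATE):
    """Estimate how many seconds clip `b` started after clip `a` by finding
    the lag that maximises the cross-correlation of their soundtracks
    (a negative result means `b` started earlier than `a`)."""
    a = (a - a.mean()) / (a.std() + 1e-9)  # normalise away loudness differences
    b = (b - b.mean()) / (b.std() + 1e-9)
    corr = correlate(a, b, mode="full", method="fft")
    lags = correlation_lags(len(a), len(b), mode="full")
    return lags[np.argmax(corr)] / rate


if __name__ == "__main__":
    # Synthetic demo: the same one-second noise burst is heard 0.5 s into clip_a
    # and 2.0 s into clip_b, i.e. clip_b started recording 1.5 s before clip_a.
    rng = np.random.default_rng(0)
    burst = rng.standard_normal(SAMPLE_RATE)
    clip_a = np.concatenate([np.zeros(SAMPLE_RATE // 2), burst, np.zeros(SAMPLE_RATE)])
    clip_b = np.concatenate([np.zeros(2 * SAMPLE_RATE), burst, np.zeros(SAMPLE_RATE)])
    print(f"clip_b started {estimate_offset_seconds(clip_a, clip_b):+.2f} s after clip_a")
```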
http://xkcd.com/1425/
Simple does not equal easy. [MechE, Sep 29 2014]
123D Catch
http://www.123dapp.com/catch
Doesn't take video input (yet), but that's just because video isn't typically high-def enough (yet). [Freefall, Sep 29 2014]

Hyperlapse
http://research.mic...rojects/hyperlapse/
Smoothed video from a moving camera - goes WAY beyond traditional frame-transformation image stabilization. [Freefall, Sep 29 2014]
I don't know if I would call it "reconstruction". But I do suspect that in the not-distant future something like this will become Standard Operating Procedure.
Well, I won't be surprised if in the not-too-distant future the latest episode of CSI has someone asking to 'enhance' a pixel cloud to absurdity.
There's a story, possibly apocryphal, that back when the capabilities for machine vision were first being developed, a professor assigned a project in that regard to a couple of grad students, blithely assuming that they would be able to work out identifying objects in about two months.
60 years later, we're still working on it.
Reconstructing 3D from two known cameras (optics and position) is relatively easy. Reconstructing it from multiple known reference points/objects in each field of view is somewhat harder, but still feasible. Reconstructing it by identifying different aspects of different random objects from different, unknown, and variable viewpoints (except in the case of fixed security cameras) is rather difficult. I'm not even going to begin to claim it's impossible, and I do think we'll get there someday, but I'm not expecting it tomorrow.
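For the easy end of that spectrum the machinery is off the shelf: given matched pixel coordinates between two views and the camera intrinsics, OpenCV can recover the relative pose and triangulate a sparse cloud. A minimal sketch under those assumptions; finding reliable correspondences across random, unknown cameras (the hard case described above) is exactly what it does not do.

```python
import numpy as np
import cv2


def two_view_point_cloud(pts1, pts2, K):
    """Recover the relative pose of two views from matched points and
    triangulate a sparse 3D cloud (defined only up to an unknown scale).

    pts1, pts2 : (N, 2) float32 pixel coordinates of the same features in each view
    K          : (3, 3) camera intrinsic matrix, assumed shared by both views
    """
    # The essential matrix encodes the relative rotation/translation between views.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)

    # Decompose it into a rotation R and a unit-length translation t.
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # Projection matrices: camera 1 at the origin, camera 2 at [R | t].
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])

    # Triangulate the correspondences that survived the pose check (4 x N homogeneous).
    good = pose_mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
    return (pts4d[:3] / pts4d[3]).T  # (M, 3) Euclidean points
```

The recovered translation, and hence the cloud, is only defined up to an unknown scale; stitching several such pairwise reconstructions from unrelated phones into one consistent scene is where the real difficulty starts.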
Autodesk 123D Catch builds point clouds from multiple images from a generic camera, determining the lens properties on the fly.
Microsoft is already developing tools to take a video and generate a new video output along a smoothed path, based on on-the-fly generation of a point cloud and representative geometry, and mapping the video onto that geometry.
Check out the videos of the Hyperlapse generation process, showing the entire generated scene along with what is visible from the camera at each frame.
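The "mapping the video onto that geometry" step boils down to reprojection: given the recovered point cloud (or proxy geometry) and a virtual camera pose on the smoothed path, project the 3D points into that camera to decide what it sees. A toy sketch of just that reprojection step with OpenCV; the intrinsics, pose, and cloud here are placeholders, not anything from the Hyperlapse project.

```python
import numpy as np
import cv2

# Placeholder intrinsics for a 1920x1080 virtual camera (focal length in pixels).
K = np.array([[1200.0,    0.0, 960.0],
              [   0.0, 1200.0, 540.0],
              [   0.0,    0.0,   1.0]])


def project_cloud(points_3d, rvec, tvec, image_size=(1920, 1080)):
    """Project a 3D point cloud into a virtual camera at pose (rvec, tvec) and
    return the pixel coordinates of the points that land inside the frame."""
    pixels, _ = cv2.projectPoints(points_3d, rvec, tvec, K, None)
    pixels = pixels.reshape(-1, 2)
    w, h = image_size
    in_frame = (pixels[:, 0] >= 0) & (pixels[:, 0] < w) & \
               (pixels[:, 1] >= 0) & (pixels[:, 1] < h)
    return pixels[in_frame]


if __name__ == "__main__":
    # Toy cloud: 500 random points a few metres in front of the virtual camera.
    cloud = np.random.default_rng(1).uniform([-2, -1, 4], [2, 1, 8], size=(500, 3))
    rvec = np.zeros(3)  # camera at the origin, looking down +Z
    tvec = np.zeros(3)
    print(project_cloud(cloud, rvec, tvec).shape[0], "points visible")
```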