Stitch-ster would be a spider that crawls the web trying to match your stuff with other stuff that is out there. So, for instance, if there is a known time on a video that coincides with a known time on your video, then Stitch-ster would stitch those two together. It would also have an algorithmic aspect that searches for large-scale gestures that are coordinated between files -- like if two videos taken from different angles contain the same sound of a car, the algorithm could look for similarities between the video and audio and create a stitched reality out of that.
-- JesusHChrist, Jul 20 2012 [+]

The latter part of this, at least, would be extremely difficult, if not impossible, to achieve with today's technology. But someday, probably before too long, this idea will come to fruition. I recall seeing something similar, with 3D models of famous landmarks being rebuilt automatically from tourists' photographs found on the Web.
-- ytk, Jul 20 2012

It could start with videos where there are lots of known recordings from one time/place, take an event that is unique, characteristic, and for which there is one very clear recording, and then work out an algorithm by associating progressively less similar videos, using crowdsourcing like Amazon's Mechanical Turk to associate the videos that can't be matched automatically, and have the algorithm learn from the associations that the people make. The cutting edge of the algorithm development would be inferring things about intelligent agents based on the differences in the ways they interact with different recordings.
-- JesusHChrist, Jul 20 2012

Hmm, sounds like a web-AI "subconscious" process.
-- FlyingToaster, Jul 20 2012

What about texts as well? Harder than video, I would think.
-- pocmloc, Jul 20 2012

In order to compare features of video or audio files, wouldn't this bot have to 'play' each file in order to analyze it? Or does it just wait until somebody opens an MPEG and operate on their CPU while it analyzes the content of whatever they're viewing?
-- Alterother, Jul 20 2012
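As a rough illustration of the matching step in the idea above (two recordings containing the same sound, e.g. the same passing car heard from different angles), here is a minimal sketch, assuming the audio tracks have already been extracted from both videos as mono NumPy arrays at the same sample rate. The function name and the synthetic clips are purely illustrative; a real spider would compare compact audio fingerprints rather than raw samples.

```python
import numpy as np
from scipy.signal import correlate

def estimate_offset_seconds(audio_a, audio_b, sample_rate):
    """Lag (in seconds) at which the two tracks line up best.

    A positive result means recording B started that many seconds
    after recording A."""
    # Normalise each track so loudness differences between recordings matter less.
    a = (audio_a - audio_a.mean()) / (audio_a.std() + 1e-9)
    b = (audio_b - audio_b.mean()) / (audio_b.std() + 1e-9)
    # Cross-correlate; the peak marks the best alignment of the shared sound.
    xcorr = correlate(a, b, mode="full")
    lag_samples = np.argmax(xcorr) - (len(b) - 1)
    return lag_samples / sample_rate

# Tiny synthetic check: the same one-second "car" burst, heard at 1 s into
# clip A and 3 s into clip B, so B must have started about 2 s earlier.
rate = 8000
burst = np.random.default_rng(0).normal(size=rate)
clip_a = np.concatenate([np.zeros(1 * rate), burst, np.zeros(2 * rate)])
clip_b = np.concatenate([np.zeros(3 * rate), burst])
print(estimate_offset_seconds(clip_a, clip_b, rate))   # ~ -2.0
```

Once the offset is known, the two clips can be laid out on a shared timeline and "stitched" there; doing this across millions of files is the hard (crawling and indexing) part, not the alignment itself.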
Maybe it could just start with whatever the most recognizable "event" common to a bunch of videos was -- whatever the first "thing" or pattern that meets some criterion of interestingness that an AI could recognize in a set of audio or video, or correlated audio and video data. I think you could do text by starting with speech and a set of files that have been agreed on as "real" (not staged) by humans, then speech-recognizing and indexing those files for statistical correlations, like the percentage of the time the word "oops" is followed by a crashing sound, and then statistics for how often the crashing sound has this general characteristic...
-- JesusHChrist, Jul 21 2012
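A toy sketch of the kind of statistic suggested above (the percentage of times the word "oops" is followed by a crashing sound), assuming the hard parts -- speech recognition with word timestamps and a sound-event classifier -- have already produced labelled, time-stamped output. All names, timestamps, and the three-second window are made up for illustration.

```python
from collections import defaultdict

def follow_rates(words, events, window=3.0):
    """For each (word, event label) pair, the fraction of occurrences of the
    word that are followed by that kind of event within `window` seconds."""
    word_counts = defaultdict(int)   # how many times each word was heard
    followed = defaultdict(int)      # word -> event co-occurrence counts
    for word, t_word in words:
        word_counts[word] += 1
        seen = set()                 # count each event label once per word hit
        for label, t_event in events:
            if 0.0 <= t_event - t_word <= window and label not in seen:
                followed[(word, label)] += 1
                seen.add(label)
    return {pair: n / word_counts[pair[0]] for pair, n in followed.items()}

# Hypothetical recogniser/classifier output: (token, timestamp in seconds).
words = [("oops", 12.0), ("hello", 2.0)]
events = [("crash", 13.1), ("laughter", 40.0)]
print(follow_rates(words, events))   # {('oops', 'crash'): 1.0}
```

Pairs with high rates over a large, human-vetted corpus would be the "known" correlations the spider could then look for in new, unlabelled files.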
For video of Big Ben, stitching against the main chimes should be easy enough. Possibly also taking into account how round or elliptical the clock face appears. Bun.
-- not_only_but_also, Jul 24 2012

//Big Ben//
Sp: Big Liz *snicker*
-- ytk, Jul 24 2012

I wonder what kind of movie plot it would stitch together?
-- RayfordSteele, Jul 24 2012

Sounds like Exquisite Corpse.
-- normzone, Jul 30 2012