Thunk.
This algorithm learns from millions of users constantly teaching it new things about what to consider when looking at an image. So instead of the unintelligible math-based algorithms that do edge detection and contrast or hue border discovery, this algorithm actually understands that it is looking at letters spelling a baseball team's name on the grass, or at a person with a tie and a three-piece suit, and that the white thing is not something outside the selection but rather part of the shirt collar, so on the first try it is selected together with the shirt.
Then you "zoom in" to see the parts. It has a complete semantic web of terms connected to the shapes and textures it "sees", which can then be built into a search base and used for smart retrieval, as well as for fast reconstruction of vector-type images with raster quality.
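
As a rough sketch of how such a crowd-taught search base might be organized (all names here, such as Region, SemanticIndex, teach and retrieve, are hypothetical illustrations, not an existing system):

# Minimal sketch: user-taught labels are attached to image regions,
# building a semantic index that can later be queried for "smart retrieval".

from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class Region:
    """A segmented part of an image, e.g. a shirt collar or a letter on the grass."""
    image_id: str
    bbox: tuple                      # (x, y, width, height) within the source image
    labels: set = field(default_factory=set)


class SemanticIndex:
    """Maps terms taught by users to the image regions they were applied to."""

    def __init__(self):
        self._by_term = defaultdict(list)

    def teach(self, region, term):
        """Record that a user identified this region as `term`."""
        region.labels.add(term)
        self._by_term[term].append(region)

    def retrieve(self, *terms):
        """Return regions matching every queried term (simple AND retrieval)."""
        if not terms:
            return []
        candidates = self._by_term[terms[0]]
        return [r for r in candidates if all(t in r.labels for t in terms[1:])]


# A user "zooms in" on a photo and tags one of its parts.
index = SemanticIndex()
collar = Region("photo_42.jpg", (120, 80, 40, 25))
index.teach(collar, "shirt collar")
index.teach(collar, "white")
print(index.retrieve("shirt collar", "white"))   # -> [Region(...)]
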
Annotation:
Google+ is offering something similar, with auto-highlights (picking the best 30% of your full set), automatic color/crop/etc., and recognition of famous / common things.

I don't fault you for not knowing what Google+ is doing and has on its roadmap, as almost nobody uses it.
I meant this as a second-level analysis, of course AFTER edge detection and other digital signal processing of the image. The technology for this is as old as Marvin Minsky's book The Society of Mind, about "connectionism", from when people were still excited about what AI could achieve.
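
For illustration, that two-level split could look roughly like this. It is only a sketch: it assumes OpenCV 4.x for the low-level stage, and classify_region is a hypothetical stand-in for the crowd-taught semantic layer, which is the part the idea above actually proposes.

import cv2   # assumes OpenCV 4.x


def classify_region(patch):
    """Hypothetical stand-in for the crowd-taught semantic classifier."""
    return "unknown"


def analyze(path):
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Level 1: conventional signal processing -- edges and candidate contours.
    edges = cv2.Canny(image, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Level 2: ask the semantic layer what each candidate region actually is.
    results = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        patch = image[y:y + h, x:x + w]
        results.append(((x, y, w, h), classify_region(patch)))
    return results
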
And the Google+ thing is very interesting. I'm talking about recognition not by a preconceived algorithm, but by having people point out shapes and parts of pictures and classify them, similar to face tagging (which I think started in Google's Picasa).