halfbakery -- Normal isn't your first language, is it?
At some point in the future, computers and robots may become advanced enough that we require some sort of test to tell a biological person from a technological one. Think "Blade Runner".
In the movie "Blade Runner", the traditional Turing Test (can you tell the difference between the computer and the person by talking to it?) no longer works. Instead, telling the difference requires carefully monitoring unconscious physiological reflexes to emotionally disturbing questions. This process requires special equipment, is time-consuming to enact, and relies on the fact that the robots don't have emotions--a degree of separation the movie itself suggests is fading (to wit, the robots are gaining emotions as well).
I propose a much simpler test that relies instead on one of the fundamental flaws of the human brain: the optical illusion. The test subject is presented with several simple optical illusions. If the subject reports the illusory effect as what they see, then we can fairly confidently ascertain that they are human. If, on the other hand, the test subject does NOT perceive the illusion, then we can fairly confidently ascertain that the subject is NOT human--no correctly functioning computer would be fooled by the tricks that screw up human perception.
See link for some of the sorts of illusions I think would work particularly well for this test.
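The test procedure above can be sketched in a few lines. This is a minimal sketch only; the illusion prompts, the "correct" physical answers, and the typical human responses below are hypothetical placeholders, not a real test battery:

```python
# Minimal sketch of the proposed illusion-based humanity test.
# Each entry pairs an illusion prompt with the physically correct answer
# and the answer a typical human gives (all placeholders for illustration).
ILLUSIONS = [
    {"prompt": "Which square is darker, A or B?",
     "physical_truth": "neither",   # the squares are actually identical
     "human_percept": "A"},         # checker-shadow-style effect
    {"prompt": "Which line is longer, top or bottom?",
     "physical_truth": "neither",   # Mueller-Lyer: the lines are equal
     "human_percept": "bottom"},
]

def classify(answers):
    """Label the subject by how often they report the illusory percept."""
    fooled = sum(1 for illusion, answer in zip(ILLUSIONS, answers)
                 if answer == illusion["human_percept"])
    return "human" if fooled > len(ILLUSIONS) / 2 else "machine"

print(classify(["A", "bottom"]))          # reports the illusions -> human
print(classify(["neither", "neither"]))   # reports physical truth -> machine
```

The design choice here is the one the idea hinges on: the subject is scored against the *human* misperception, not against physical ground truth.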
Lightness illusions
http://www-bcs.mit..../main-frameset.html demonstrations of lightness and contrast-based optical illusions [5th Earth, Jan 06 2005]
"Perceptron" Cover
http://www.amazon.c...103-6704061-6342220 Illusory figures (click to enlarge) [Predictor, Jan 06 2005]
Android
http://www.androidw....com/Head110204.jpg [Machiavelli, Jan 07 2005]
(?) Triangle Illusion
http://wilstar.com/...lusion-triangle.gif An example of an illusion without a "correct" physical interpretation. [Predictor, Jan 07 2005]
evilsheep's (as opposed to Turing's) Test
http://www.ibiblio....df9403/df940321.jpg An illustration of evilsheep's idea. [Predictor, Jan 10 2005]
|
|
Great link, but couldn't you just use something more primitive? Their anatomy would still be different...wouldn't it? |
|
|
//no correctly functioning computer would be fooled by the tricks that screw up human perception//
Why not? |
|
|
I would think that an AI which couldn't easily be distinguished from humans would be smart enough to fake it. Also, some actual people don't do well with optical illusions. Interesting though. |
|
|
I see two potential problems with this test: |
|
|
1. Many machine vision systems have the same (or similar) flaws as biological vision systems, often having been built to mimic their biological counterparts. Minsky made specific note of this in regard to certain types of neural networks' inability to distinguish particular characteristics of visual figures. See the cover of "Perceptrons", for instance. |
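The Minsky and Papert result mentioned here was about perceptrons' inability to compute predicates like connectedness. A toy analogue of that limitation is that no single-layer perceptron can compute XOR, since XOR is not linearly separable. A coarse grid search (illustrative, not a proof) finds no weight/threshold combination that works:

```python
import itertools

# A single-layer perceptron: step(w1*x + w2*y - t).
def perceptron(w1, w2, t, x, y):
    return 1 if w1 * x + w2 * y - t > 0 else 0

# XOR truth table: not linearly separable, so no (w1, w2, t) can match it.
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Brute-force a grid of weights and thresholds in [-2, 2]; none reproduces XOR.
found = any(
    all(perceptron(w1, w2, t, x, y) == out for (x, y), out in XOR.items())
    for w1, w2, t in itertools.product([i / 4 for i in range(-8, 9)], repeat=3)
)
print(found)  # False: no single linear threshold separates XOR
```

Connectedness of visual figures, the case on the "Perceptrons" cover, fails for diameter-limited perceptrons for related reasons.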
|
|
2. Assuming an adversarial relationship between men and machines, wouldn't machines or their human associates eventually develop the ability to deliberately fall for illusions? |
|
|
[yabba]: Yes, their anatomy would be different. But then again, it was presumably different in Blade Runner too--the point of the test is to determine humanity through non-surgical techniques. |
|
|
[zen tom]: A computer would not fall for an illusion, because even a cursory quantitative analysis will show the true state of the illusion. If asked "which square is darker", even if a computer knew it was supposed to answer one or the other, (IMO) it could not correctly deduce which one was *perceptually* darker to a human, because the two squares are in fact identical, and it would be a poor computer indeed that couldn't tell that they were identical. |
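The "cursory quantitative analysis" described here really is trivial for a machine. A sketch, using a toy greyscale image and hypothetical square coordinates standing in for a real checker-shadow figure:

```python
# Checker-shadow-style check: sample both squares and compare mean intensity.
# The image and the square coordinates are hypothetical placeholders.
def mean_intensity(image, region):
    """Average pixel value over a rectangular region (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(pixels) / len(pixels)

# Toy 4x4 greyscale "image": both squares are in fact the same value, 120.
image = [[120] * 4 for _ in range(4)]
square_a = (0, 0, 2, 2)
square_b = (2, 2, 4, 4)

# A machine answers from the numbers; a human reports the illusory difference.
print(mean_intensity(image, square_a) == mean_intensity(image, square_b))  # True
```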
|
|
[tiromancer]: Ability to fake a response to an illusion is contingent upon having seen the illusion before, and knowing what the normal human interpretation is. I'm sure clever researchers could develop a near-infinite array of illusions (kept secret from the public at large) that would test the same sorts of brain functions without being especially similar in appearance or presentation to known existing illusions. |
|
|
As for humans doing "poorly" with illusions, so to speak, I was not familiar with this fact. Still, I'd assume most humans could do better than the 50% results the test would give with an AI (if it worked). |
|
|
[predictor], I'm not sure what I'm supposed to see (or not see) regarding the book cover you mentioned. I can reasonably easily see several differences between those two figures--if an AI visual recognition system can't see any differences, then that only further proves my point, just from a different direction: human sight and AI sight would function fundamentally differently. As for other flaws that mimic human flaws, I'd need a better (or better-explained) example to be convinced. |
|
|
I had to re-create some of those perception tests (in color instead of the grey scale) for a color theory class. Very time consuming. Other than that, I have nothing to say about this idea because I don't really know what you're talking about (not your fault, don't worry). |
|
|
I am an android. My photo is in the link above. |
|
|
[zen tom], I disagree. Machine vision systems are fooled (have trouble interpreting) all the time by things which aren't even illusions. Besides, many illusions have no "correct" interpretation, such as the linked Triangle. |
|
|
[5th Earth], it is not a matter of "seeing any differences", it is a matter of interpretation. Minsky observed that perceptrons (and humans) cannot easily determine whether such figures are composed of one or two curves. More sophisticated machine vision systems could be concocted to solve this problem, but the point is that machine vision systems suffer from many of the same problems as human vision because they need to attack the same problems. Similar solutions are often used, and these suffer from similar limitations. |
|
|
[Predictor], now that you explain the point of the figures I see what you mean. I had deduced that one of the figures was a single loop and one was two loops, but it was *not* reasonably easy. Still, that is not a question of being "fooled" by the figure, but just the fact that the two figures are difficult to parse at all. Most simple optical illusions are very easy to parse--they just tend to promote incorrect interpretations, even in the face of what is (or at least, should be) obviously true. |
|
|
I see your point about artificial vision systems having the same limitations as human ones, but (admittedly having a vested interest in the validity of my idea) I'm still not convinced an artificial vision system would fall for actual illusions--or, if it did, it would fall for different ones than humans fall for. |
|
|
Optical illusions are not the result of our 'feeble human minds'--they are nature's shortcuts, which have allowed us to pack massive amounts of effective processing power into a structure the size of a melon. If you want your android to be as capable as a human, you will most likely need to take some processing shortcuts too, or else give it a massive head (which might give it away). Why not use the same processing shortcuts that nature has shown to work? The worst thing that can happen is that your android will get an Escher obsession and start putting posters of fractals on his bedroom wall. |
|
|
//[zen tom], I disagree.// By the way [predictor], what do you disagree with me about? Before this anno, I had only asked a question. |
|
|
Sorry, zen tom, I'm new here. I mistook the comment addressed to you from 5th Earth as yours. |
|
|
No worries--and in case it's not been done elsewhere already, welcome to the HalfBakery. |
|
|
[Zen Tom], by your argument illusions ARE the result of our "feeble human minds"--the reason we take the shortcuts that lead to illusions is because we DON'T have the mental processing power needed to "do it right". |
|
|
A couple of trivial points, but there's no reason an android would have to have its computer solely in its head. That might be anatomically similar to a human, but the computer systems could be located elsewhere, or even distributed evenly throughout large portions of the body. An android's brain could be much larger than a human's. |
|
|
Also, the "impossible triangle" diagram linked DOES have a correct physical interpretation--the interpretation that it is a collection of gradient-shaded parallelograms and trapezoids on a plane, and in fact not 3-dimensional in any way at all, impossible or not. |
|
|
More seriously, note that I don't necessarily dispute that androids will see illusions--but I doubt they will see the same illusions as humans do, if for no other reason than because I don't think we ourselves understand the way human sight works well enough to transfer it directly into another medium. |
|
|
At any rate, if I were teaching a robot to see, I'd view an optical illusion as a bug in the design and endeavor to fix it--illusions are false inputs, and after all, Garbage In, Garbage Out. This is something of a self-fulfilling prophecy for my idea, but it's how I would act if I were not anticipating the need for detailed human-robot deception. |
|
|
As I understand it, people see optical illusions because they have learned to interpret them that way, not because it's an innate function of the brain. For an android to pass as human, it would have to learn to respond to the same visual cues as authentic humans, otherwise testing it would never be an issue. |
|
|
As with the riddle: Father and son are injured blah blah blah doctor cant operate on boy yadda yadda yadda "This boy is my son!" blah blah blah |
|
|
It isn't much of a riddle unless you assume that a doctor must be male; a conditioned response, not an innate one. Something an android could theoretically learn just as easily as any other human. |
|
|
If I were designing a robot that could see, I'd like to use lidar. Lidar has its own set of problems. There's a PDF floating around somewhere about how puddles appear to be solid when viewed from anything near a perpendicular angle. |
|
|
So ultimately, I like the idea that robots would have perceptions which could be differentiated by this kind of chicanery, but I'd say the point is moot. |
|
|
[tiromancer], I don't really see how one could "learn" an illusion. I can see the illusion even in examples I have never seen before--and everyone sees each illusion in pretty much the same way. |
|
|
Now if you mean because sight itself is a learned ability, then fine--but everyone learns to see the same way with the same "equipment" (eyeballs, optic nerves). An android would be using some different mechanism to learn to see with; therefore, an android would learn to see differently. |
|
|
Sorry. What I mean is, you learn to infer depth in drawings. If you had not done so, the impossible triangle is neither impossible, nor a triangle, really. I first heard this in high school as an anecdote about some isolated tribe who weren't able to comprehend photographs. The particulars were probably made up, but it's certainly true for perspective drawings. |
|
|
Come on guys, all you have to do to tell whether someone is an android or not is to give them a swift kick to the groin. |
|
|
Yes, I found this idea documented online (see linked illustration). |
|