As most sentient beings know, the Turing test was proposed by Alan Turing as a means to determine a machine's ability to exhibit intelligent behavior. A human judge remotely engages in a natural language conversation with a machine, which tries to appear human. If the judge cannot reliably tell the
machine from a human, the machine is said to have passed the test.
With all the smart machines around now (some of which could possibly pass the above test, but which are far from sentient), I think we need a more interesting Turing test. (First, a definition - Sentience: the ability to feel, or perceive, or be conscious, or have subjective experiences.) I propose this: A human judge remotely engages in a natural language conversation with a machine, which tries to appear human.* If the judge is convinced that the machine BELIEVES it is a sentient being, the machine is said to have passed the test.
* This should be amended to: tries to appear sentient - not necessarily human.
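To make the procedure concrete, here is a rough sketch of how a session of the revised test might be scored, in Python. The names (ask_machine, judge_verdict) are purely illustrative stand-ins for the conversation channel and the judge's subjective call; they are not any real API, and the sketch is not part of the proposal proper.

```python
# Illustrative sketch only: one way a session of the revised test could be run.
# 'ask_machine' and 'judge_verdict' are hypothetical placeholders.

def run_belief_test(ask_machine, questions, judge_verdict):
    """Return True if the judge concludes the machine BELIEVES it is sentient."""
    transcript = []
    for q in questions:
        reply = ask_machine(q)            # natural-language exchange
        transcript.append((q, reply))
    # Pass/fail rests entirely on the judge's subjective reading of the
    # transcript, not on whether the machine actually is sentient.
    return judge_verdict(transcript)
```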
Britney Spears' Guide to Semiconductor Physics
http://britneyspears.ac/lasers.htm [sqeaketh the wheel, Jun 10 2011]
Chinese Room
http://en.wikipedia.org/wiki/Chinese_room Why the original Turing test is stupid. [DIYMatt, Jun 11 2011]
|
|
Cut the chit-chat. Shoot 3 of the buggers. If the 3rd one gets up it's a Borg. |
|
|
So why is that any better than the previous test?
Surely in both tests, it's all down to the quality of
your judge's subjective powers for recognising
sentience, isn't it? Sure, a clever judge in the
original Turing test might adopt this approach in
deciding whether they give their conversation
partner the meat-tick or not, so I accept this as a
valid judging strategy, but is it a whole new test?
Not really. |
|
|
[zen-T] The proposal is supposed to incorporate the idea that intelligence or consciousness is self-referential (see blowhard books by D Hofstadter). A judge really has no way to determine if a machine IS sentient. The idea is that what matters is not that, but whether the machine THINKS it is. Then, for all practical purposes (FAPP), it is. If you don't follow that, I'm afraid you might be failing the original test. Where's my blaster? |
|
|
What if none of us really pass the test, but are just chemical machines? Where's [nineteenthly] to philosophize on this? |
|
|
Come on then, [squeak], if you think you're tough enough to take
us. We have adaptive shielding, you know. |
|
|
And [gnome], that shoot-first-and-dodge-questions-later approach isn't
going to get you anywhere (except perhaps rapid promotion
within the Metropolitan Police). |
|
|
The test fails at the simplest challenge; what if the testee truly
believes it is intelligent and sentient, but is in fact dumb as a
plank, like, for instance, Britney Spears? |
|
|
[87] Do you mean you are not familiar with "Britney Spears' Guide to Semiconductor Physics"? <link> |
|
|
I can't see how this works at all, let alone how it is in any way better than the original Turing test. |
|
|
The point of the test is to establish a benchmark for determining a computer's ability to exhibit intelligent behavior. Is sentience a necessary attribute of intelligent behavior? I'm not sure that its presence is an indicator of said behavior. So I think your test is looking for the wrong thing - that is, if it endeavors to achieve the same goal as the original test. |
|
|
If sentience is a part of intelligence, it is only a part, and I believe it would be a much simpler task to design a computer system that feigns only that part. |
|
|
Also, in the original test concept, the human judge is engaged with another human as well as a machine, and the goal is to sort out which is which. Do you intend to keep this arrangement? |
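For reference, that original three-party arrangement could be sketched roughly like this (a sketch only; ask_machine, ask_human and judge_picks_machine are hypothetical placeholders, not anything Turing specified in code):

```python
# Rough sketch of the original setup: the judge questions two hidden
# respondents (one human, one machine) and must say which is which.
# All three callables are hypothetical placeholders.

def run_original_turing_test(ask_machine, ask_human, questions, judge_picks_machine):
    machine_transcript = [(q, ask_machine(q)) for q in questions]
    human_transcript = [(q, ask_human(q)) for q in questions]
    # The machine passes if the judge cannot reliably pick it out.
    return not judge_picks_machine(machine_transcript, human_transcript)
```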
|
|
//With all the smart machines around now (some
of which could possibly pass the above test, but
which are far from sentient), I think we need a
more interesting Turing test.// |
|
|
An aside: there's a guy in my lab who works on the
behaviour of nematodes, trying to understand it
at the level of individual cells and, eventually,
individual molecules. Every time he starts to
understand one type of behaviour in this way, it
stops being behaviour - people just call it
chemotaxis or whatever. |
|
|
Your proposal is that, since there are machines
which can appear human within a limited
framework, it's time to make the test harder. |
|
|
By this reasoning, there will never be a sentient
machine, since you will just invent more subtle
and idiosyncratic tests to differentiate it from
humans. |
|
|
//By this reasoning, there will never be a sentient machine// |
|
|
Wrong. By the new test, there will be new sentient beings recognized, depending on who is judging. That is the point - no test like this can be absolute, even regarding Britney Spears. The test does not replace Turing's test; it is a variant for a different purpose. |
|
|
Not an improvement, not even a variant: it's the
opposite of what Turing proposed*. It puts the
subjective judgements about inaccessible mental
states back in. |
|
|
*Admittedly, that's an improvement if you disagree
with Turing. |
|
|
I've always felt that the Turing test is merely an exercise in human arrogance. As though the only measure of intelligence or sentience is the ability to imitate a human. |
|
|
Everybody should go read "On Intelligence" by Jeff Hawkins right now. RIGHT NOW! It's a pretty fascinating book that addresses exactly why this idea is much better than the original (stupid) Turing Test. |
|
|
The Turing test is a stupid answer because it's a
stupid question. Presumably, that was Turing's
point. |
|
|
// there are machines which can appear human within a limited
framework // |
|
|
Conversely, there are humans that can't quite manage "human"
in any framework whatsoever, even when hanging upside-down
from said framework and eating a banana. |
|
|
// Everybody should go read "On Intelligence" by Jeff Hawkins right now. RIGHT NOW! // |
|
|
Agree. It is one of the best books out there, and deserves multiple reads. The (unstated) analogies between his ideas on how the mind/brain works and how physics (physics theories) works jump right out at you. (That is, everything is perceived as a model or through a model-building exercise.) |
|
|
//I've always felt that the Turing test is merely an exercise in human arrogance. As though the only measure of intelligence or sentience is the ability to imitate a human.// |
|
|
Notice that my improved test does not require the machine to imitate a human. |
|
|
//Cut the chit-chat. Shoot 3 of the buggers. If the 3rd one gets up it's a Borg.// |
|
|
And if it runs before you shoot it, it's Sentient? |
|
|
Based on the way humans behave, it's sentient if it shoots you
first. |
|
|
//Based on the way humans behave, it's sentient if it shoots you first.//
KETTLE! - Looking a bit black over there, my son! |
|
|
... says a self-confessed member of the species that invented such evergreen favourites as "Isandlwana", "Custer's Last Stand", "Omaha Beach" and the classic travel movie "Ypres to Passchendaele; six miles, six months, and six hundred thousand dead".
|