halfbakery: "This may be bollocks, but it's lovely bollocks."
This may not work for frogs, but for some other vertebrates, such as pigeons, it may do.
Take a frog. Put before it two doors, one with the Chinese characters for "sexual intercourse" clearly written on it, the other blank. If the frog swims through the labelled door, it will find a frog of the opposite sex. Keep doing this, randomising the positions of the doors. Meanwhile, follow a similar procedure with the Chinese characters for "food", placing a fly behind that door. After a while, the frog will be able to read the Chinese for "food" and "sexual intercourse".
If it doesn't work for frogs, it will work for some other vertebrate normally considered to be of inferior intelligence. It needn't be Chinese: Egyptian hieroglyphics would work just as well.
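The training procedure above is ordinary two-alternative discrimination learning, and it can be sketched as a tiny simulation. This is a hedged illustration, not a claim about real frogs: the agent, the trial count, the learning rate, and the exploration rate are all invented for the example, and the "label" cue stands in for the Chinese characters.

```python
import random

random.seed(42)  # fixed seed so the illustrative run is repeatable

def train_frog(trials=500, lr=0.1, epsilon=0.1):
    """Simulate the two-door protocol: the reward (a fly, or a mate)
    is always behind the labelled door, and the door positions are
    randomised on every trial.  The 'frog' learns one value per cue
    (label present vs. blank) with a simple Rescorla-Wagner-style
    update.  All parameters are illustrative assumptions."""
    value = {"labelled": 0.0, "blank": 0.0}
    for _ in range(trials):
        doors = ["labelled", "blank"]
        random.shuffle(doors)                    # randomise positions
        if random.random() < epsilon:            # occasional exploration
            choice = random.choice(doors)
        else:                                    # otherwise pick the higher-valued cue
            choice = max(doors, key=lambda d: value[d])
        reward = 1.0 if choice == "labelled" else 0.0
        value[choice] += lr * (reward - value[choice])  # move value toward reward
    return value

learned = train_frog()
```

Because position is randomised, the only predictive cue is the label itself, so the learned value for "labelled" climbs toward 1 while "blank" stays near 0; that asymmetry is all "reading" means in this protocol.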
Snail_20Koans
just feed these snails to your frogs [xenzag, Nov 24 2007]
John Searle's Chinese Room
http://www.iep.utm.edu/c/chineser.htm I really dislike this thought experiment and its ensuing discussion. [jutta, Nov 25 2007]
I imagine this idea continuing on with another paragraph beginning with "And then..." which would tell what use this would be. I can't imagine what might be in it. |
Just don't teach it "Death" too soon. |
I agree with you, [jutta], on the Chinese Room experiment. It's specious, and misses key points. |
My grounds are that, though Searle thinks he has countered the dualism charge, I disagree: his "thought is consciousness" approach remains highly dualistic. He asserts that it is "commonsensical" - that it is somehow merely "interactionist" - to assert that thought is "both caused by and realized in underlying brain processes", and that this avoids dualism. |
But I think I have a counter argument to that, and it goes back to the water pipes metaphor, which he uses to demonstrate the so-called "difference" that he claims exists between the room and my head. Simply put, given complex enough plumbing, the Searle-in-the-room could indeed conceivably create thought by turning on the right sequence of pipes, and it is simply dualist to assert that this is commonsensically not possible. |
"Hey, Fred, what does it say on that door?"
"It says 'pregnant whore frog' in Chinese" |
I don't like the Chinese room experiment either. It's an argument against functionalism. I see consciousness as an emergent property of matter, like ferromagnetism. The problem with this is that there seems to be a weird coincidence between agglomerations of matter which are functionally equivalent to conscious beings and the actual property of consciousness. |
What would you think about a committee which was organised to simulate Chinese conversation? Would the entire committee exhibit a single consciousness? What about a committee of trained frogs? |
If an army of said animals became
disorientated whilst engaged in confusing
military combat, would this situation be
correctly described as: "lost in the froggery
of war"? |
A million laughing Chinese frogs. |
I think consciousness is merely a matter of degree of complexity. It may seem shocking, but there is no intrinsic reason to believe that so-called "artificial" systems could never become "conscious" the way our brains are, if they ever reach a comparable level of complexity. |
What happens if all the frogs in China jump at the same time? The Great Leap Forward? What's Chinese for food? Chow, or ribbit? |
Nobody likes the Chinese Room
experiment, but it's unusual not to "like"
an experiment - gedanken or not. |
Surely part of the problem is that
functionality is the only handle we have on
consciousness. As soon as you start to
question the Turing test, there really is no
alternative and no likelihood of progress.
But that's not necessarily a good reason to
not like the experiment. |
A description of the internal state of the central nervous system, or of another system with the property of consciousness, does not have the same meaning as a description of the mental state it is actually exhibiting. For instance, crudely speaking, "I am afraid" is not the same as "my sympathetic nervous system is currently more active than my parasympathetic", and will never get any closer to that meaning regardless of detail. However, I agree that a system of the right complexity and structure would be conscious. |
[giligamesh], 食品 (That'll never display properly in a million years, but it looks OK on this screen). |
//functionality is the only handle we have on consciousness// |
Well, functionality is the only *scientific* handle we have on consciousness. The other 'handle' we have is empathy. The trouble with empathy as a handle is that you can't produce universal, repeatable results by pulling on it. Does progress necessarily depend on the availability of such results? |
It depends on what you're trying to
achieve. If you want to produce
machines that act intelligently, then the
functional approach is likely to be the
most fruitful. |
But if you're trying to understand how human intelligence works, then the argument is not robust. It's like looking for your dropped keys under the streetlight because you wouldn't be able to see them in the dark corners: if the keys aren't under the streetlight, you won't find them. |
Not that I'm opposed to pragmatic
functionalism. It's the only route we
have at present, but that in itself is not
an argument that it's correct. |
Fred, lost for hours in the Chinese maze,
realised that he had forgotten his frog. |
Introspection is another possible way of coming to an understanding of consciousness, and in that case it can be developed into the process called phenomenology, which could itself be applied to the design of artificially intelligent systems, so I wouldn't say functionality is the only useful model for that. Whether phenomenology is scientific or not, I don't know. It can certainly be applied to psychology, but is that a science? |
Piping in on the Searle's Room thing - I think it's an artifact of its time, just like the Turing Test and Penrose's well-publicised thoughts on machine intelligence. |
All of those arguments are against "algorithmic consciousness" - which I think we can all agree was always going to be a non-starter. |
Bright new scientific vistas (chaos, complexity, simplicity, simplexity, synchronicity, small-world networks, and emergent phenomena in general) have since been investigated and found to contain vast new areas of fertile territory. |
The killer against the Searle argument (at least from my perspective) is that if you ask the individual neurons in the brain of your average Shanghainese speaker, they won't know how to speak Chinese either. |
I think there's a tendency for metaphors of the mind to follow the dominant technology of the time: Freud talked about the forces of the ego, superego and id, likening the mind to a steam engine; then the mind was a telephone exchange, then a computer. So, yes, chaos and complexity could contribute to a model of consciousness, but then we'll move on again and there'll be a new model, which like all the others will be infinitely far from the truth - corroborated but not true, like other theories. In the meantime a working model of the mind could still be built, based on probabilistic automata or phenomenological analysis, and it would still have consciousness even if the model on which it is based is superseded. In that case, maybe there could be steam-powered artificial intelligence, but it would probably be very neurotic and want to kill its inventor and have sex with its inventor's partner. |
Freudian analysis for steam-based intelligences - brilliant! |
"Sometimes a piston is just a piston." |
//steam-powered artificial intelligence, but it would probably be very neurotic and want to kill its inventor// |
"very neurotic and want to kill its inventor and have sex with its inventor's partner" |
So that'd be "Demon Seed", if my memory for really bad movies is right. "Killer Klowns from Outer Space", now that's quality... |
The practical application would be that frogs could read the menus and avoid anywhere frog is eaten - for example, by not going to France. |
|