Prompted by recent discoursions on the linked idea below.

Rats - they're really pretty clever. They are also modular (integer numbers of rats), and can be trained quite easily to perform moderately complex stimulus/response routines.
I also note that there are several brands of "logic modules" available, primarily for teaching electronics. In general, these are bricks or boxes with two or more input and output terminals on them, implementing the basic logic functions such as AND, OR, XOR, NOT etc. Students plug these blocks together to create more complex logical functions such as addition, multiplication and so forth.
So.
The Ratulator system comprises a set of standardised rat cages. Each cage contains a rat, trained to perform a basic logical operation. Inputs take the form of coloured flags which can be raised or lowered inside the cage by means of levers on the outside of the cage. Outputs take the form of levers inside the cage, which raise or lower flags on the outside. Each cage is marked clearly with the standard symbol representing the logical operation its rat performs.
Output flags and input levers are designed so that they can be coupled together by flexible connectors, so that one rat's output can be another rat's input. The same inputs and outputs also act as the human/ratchine interface.
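By way of illustration, here is a rough Python sketch of how the cages might be modelled as pluggable modules (all names here are hypothetical, my own rather than the idea's): each cage holds a rat trained on one operation, input levers driven by upstream flags, and an output flag on the outside.

# Hypothetical sketch of a Ratulator module; a sketch, not a specification.
class RatCage:
    # The basic operations a rat can be trained to perform.
    OPERATIONS = {
        "NOT": lambda a: not a,
        "AND": lambda a, b: a and b,
        "OR":  lambda a, b: a or b,
        "XOR": lambda a, b: a != b,
        "NOR": lambda a, b: not (a or b),
    }

    def __init__(self, operation):
        self.operation = self.OPERATIONS[operation]
        self.inputs = []            # upstream cages, or constant True/False flags
        self.output_flag = False    # the flag raised or lowered outside the cage

    def couple(self, *upstream):
        # Flexible connectors: another rat's output flag drives this rat's input lever.
        self.inputs = list(upstream)

    def observe(self):
        # The rat reads its input flags, pulls its lever, and sets the output flag.
        values = [u.observe() if isinstance(u, RatCage) else bool(u) for u in self.inputs]
        self.output_flag = bool(self.operation(*values))
        return self.output_flag

# One NOR rat wired to two external input levers:
nor_rat = RatCage("NOR")
nor_rat.couple(True, False)
print(nor_rat.observe())    # False (pellet availability permitting)

Feedback loops, such as the oscillator below, would need a clocked evaluation rather than this simple recursive one, but the plumbing is the same.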
With a couple of trained NOT rats, you can create a simple astable oscillator, limited only by the availability of reward pellets (which ensure that each rat stays on the job). Add a couple more rats (including, for instance, a NOR rat and some more NOT rats), and more complex logical functions can be built up.
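To make the oscillator concrete, a hedged sketch (the tick model is my own, not part of the idea): assume both NOT rats read their input flag and pull their lever once per tick, and both output flags start lowered; the pair then flips state every tick for as long as the pellets hold out.

# Two NOT rats, each one's output flag coupled to the other's input lever.
def not_rat(flag):
    return not flag

rat_a, rat_b = False, False              # both output flags start lowered
for tick in range(6):
    rat_a, rat_b = not_rat(rat_b), not_rat(rat_a)
    print(tick, rat_a, rat_b)            # flags alternate: up-up, down-down, up-up, ...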
Or, if your pockets are deeper (or you can team up with friends who have the appropriate modules), you can create your very own rat-based calculator. Large institutions may even wish to create programmable computers; a full implementation of the Pentium4 in RatOS will require about 6-8 million modules (more if you want storage).
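For a sense of scale: a single one-bit full adder already takes five cages (two XOR rats, two AND rats and an OR rat), so an n-bit ripple-carry adder needs 5n rats before you get anywhere near a Pentium4. A rough sketch, with hypothetical names:

# A one-bit full adder built from five rat cages, chained into a ripple-carry adder.
def xor_rat(a, b): return a != b
def and_rat(a, b): return a and b
def or_rat(a, b):  return a or b

def full_adder(a, b, carry_in):
    partial = xor_rat(a, b)
    total = xor_rat(partial, carry_in)
    carry_out = or_rat(and_rat(a, b), and_rat(partial, carry_in))
    return total, carry_out

def ripple_carry(bits_a, bits_b):
    # Add two little-endian bit lists, five cages (and five rats) per bit.
    carry, out = False, []
    for a, b in zip(bits_a, bits_b):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 3 + 5 = 8, at a cost of fifteen rats plus pellets:
print(ripple_carry([True, True, False], [True, False, True]))   # [False, False, False, True]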
The possibilities are as unlimitless as your imagination!
Coming soon: Drosophila OS. Trade up for lower running costs and faster cycle-times!!!

CubeOS. Under development. Please contact your MaxCom representative to determine whether your cube-farm environment is suitable for beta-testing of CubeOS.
Expired by: Largely_20analogue_...0voice_20calculator [MaxwellBuchanan, May 20 2011]
Reminiscent (but clearly different): http://plato.stanfo...tries/chinese-room/ Chinese Room argument [nineteenthly, May 20 2011]
//how lowly an organism?// http://brembs.net/learning/aplysia/ [mouseposture, May 20 2011]
Why does this put me in mind of Hex? It's only a skull with a dribbly candle on top and a thing that goes 'parp' away...

One problem - continuing the association between the correct lever press and the delivery of the reward when the system is operating (as opposed to the training period). If the reward is delivered only when the correct lever is pushed, then the hardware already 'knows' which is correct, and the rat is redundant.

There might be a way around that, especially considering that the reward does not need to be administered each time; I can't be bothered thinking it through now. Perhaps each rat will randomly be switched to 'training' mode, when rewards are available (the administering of the reward being controlled by a different rat, which is in 'working' mode). As long as the rats spend more time working than training, there will be some left over, usable processing capacity.

The rats will, of course, not know which mode they are in, and so will press the correct lever whether or not a reward is due.
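For illustration, a rough simulation of that scheme (the mode-switching probability is my own guess, and the reward-dispensing rat is abstracted away): each cycle the rat is randomly put in training mode, where a correct press earns a pellet, or working mode, where it doesn't; since the rat cannot tell the difference, it presses the correct lever either way.

import random

def run_cycles(cycles=1000, training_fraction=0.2):
    # training_fraction is hypothetical: the share of cycles spent in training mode.
    pellets, correct_presses = 0, 0
    for _ in range(cycles):
        rat_presses_correctly = True                  # identical behaviour in both modes
        correct_presses += rat_presses_correctly
        in_training_mode = random.random() < training_fraction
        if in_training_mode and rat_presses_correctly:
            pellets += 1                              # pellet only dispensed in training mode
    return pellets, correct_presses

print(run_cycles())    # roughly (200, 1000): four presses in five are "free" work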
//the hardware already 'knows' which is correct, and the rat is redundant//

There is an element of that (and well spotted, incidentally - I hadn't thought of it). However, I am choosing to ignore it on philosophical grounds.
This would work great in implementing fuzzy logic.

This isn't likely to render the mouse redundant anytime soon, I'm guessing.
//the hardware already 'knows' which is correct, and the rat is redundant//

Thinking about this a bit more, I don't see this as an issue. In a simple calculator, the input and output are connected by some sort of fixed, predetermined machinery. This is most obvious in the case of a mechanical calculator, but also true of a simple electronic calculator and, ultimately, of any computer. You put a signal in, and after a moment the inevitable happens and you get a signal out.

The same is true of the ratulator (or the more ambitious ratputer).

You could programme a ratputer to run the Game of Life, and the emergent patterns would be mechanistically inevitable but still just as interesting.
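For what it's worth, the point is easy to demonstrate in a few lines; a ratputer running the same rules would grind through them one rat at a time (a sketch, not a spec):

from collections import Counter

def life_step(live_cells):
    # One mechanistic step of Conway's rules over a set of live (x, y) cells.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live_cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

blinker = {(1, 0), (1, 1), (1, 2)}
print(life_step(blinker))    # {(0, 1), (1, 1), (2, 1)}: inevitable, yet still interesting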
Isn't this what the lower classes are for?

I like it - sounds like it'd work. Two things about this:

Firstly, how lowly an organism could still be trained to perform such tasks, or enter relevant states which could manifest in some way?

Secondly, how large could the scale of integration be? Could you train a one-bit adder, for example? A two-bit adder? What about a viper?
Reminds me of Searle's Chinese Room.

I'm pretty sure you could do calculations on the fly.

To reward the rats you would need some redundancy: put them in an array so that several rats respond to the same stimulus, or include a check rat. You could call it RAID.
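Something like this, perhaps (a hedged sketch; the voting rule is my own addition): give the same stimulus to an odd number of rats, take the majority as the output, and pellet every rat that agreed with the majority, so the reward circuit never needs to know the 'right' answer in advance.

def raid_of_rats(rat_outputs):
    # Majority vote over redundant rats; pellets go to those who agreed with it.
    majority = sum(rat_outputs) > len(rat_outputs) / 2
    pellets = [output == majority for output in rat_outputs]
    return majority, pellets

print(raid_of_rats([True, True, False]))    # (True, [True, True, False])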
Would the flags operate on some kind of ratchet?

That would be irrational.
[nineteenthly] regarding Searle and his Chinese Room. The problem is that Searle is a twat.

The first thing that strikes me about Searle relates to a guy at work who is trying to understand behaviour in nematodes (C. elegans). The full and exact map of all the neurons and other cells is known for this simple beast, and this guy is trying to understand how these circuits produce behaviour such as food-seeking, avoidance of noxious stimuli and so forth.

The field began with people being able to say "when this neuron fires, this muscle cell contracts and the animal turns left". But of course that wasn't a behaviour, it was a simple mechanistic process.

It's now advanced to the point where he can say "when the nematode is exposed to substance X, it binds receptor Y, which fires neuron Z, which fires neurons A, B and C in succession, which makes the worm turn this way and move away from the substance."

A few years ago, this avoidance of noxious stimuli was behaviour. Now that we understand how it works, it's not behaviour any more.

So the first reason that Searle is a twat is that, eventually (with a lot of luck), we will be able to explain what goes on in the brain of a Chinese interpreter in arbitrarily great detail. When this happens, Chinese interpretation will cease to be an intelligent behaviour, according to Searle.

The second reason Searle is a twat is basically the same as the first, but boils down to the Turing test. Turing wasn't being trivialist or reductionist when he proposed the Turing test; he was just pointing out that all the philosophical agonizing is just so much cerebral masturbation, which will never lead anywhere. (Happy though the intermediate results may be.)
//availability of reward pellets (which ensure that each rat stays on the job)// Use a variable-ratio reward schedule, and your pellet consumption will drop nearly to zero.
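Roughly like this, I'd imagine (the mean ratio of fifty presses per pellet is my own guess, and a Bernoulli draw per press is only an approximation of a true variable-ratio schedule):

import random

def variable_ratio_pellets(correct_presses, mean_ratio=50):
    # On average one pellet per mean_ratio correct presses, at unpredictable intervals.
    return sum(1 for _ in range(correct_presses) if random.random() < 1 / mean_ratio)

print(variable_ratio_pellets(10_000))    # roughly 200 pellets for 10,000 presses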
//how lowly an organism// <link>
A sheep/gate computer was on my brainstorm list of dissertation ideas in my final year at uni in Wales, studying computer science. I decided against it as it would have meant lots of talking to farmers.

You could have called them woolly logic gates.

Searle's argument actually does make sense, but only as a reductio ad absurdum of (natural) intelligence.
| |