halfbakery: Think of it as a spell checker that insults you, as well.
98% of this posting is a description of how to create an AI.
Anyone can make an AI; it doesn't take skill, just hardware and lots of time. That is not what I'm inventing here. What I hope is the new trick worth posting comes at the end...
This is how to create an AI. First, build an animatronic robot baby, with expressive features which can be activated by simple inputs such as might be output from a neural net. (Such robots already exist, for other purposes.)
Assume we have a neural net with the complexity of a human brain, structured roughly similarly. (This will be easier to build than people think: use a SETI-at-home-style array of systems, where everyone owns a little piece of brain, and where each smallish localized neural net talks to other neural nets that are a short ping time away. This ought to model a real brain relatively closely - I don't think the real brain has many long-distance connections, and those that are needed can be created over the net too. But high-speed connections will be geographically local, or the network equivalent where link speeds are higher.)
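To make the 'short ping time' point concrete, here's a toy Python sketch - every name and number in it is invented for illustration - of wiring each volunteer node mostly to its lowest-ping neighbours, with the occasional rare long-distance link standing in for the brain's long axons:

```python
import random

def build_topology(ping, k_local=3, p_long=0.05, seed=0):
    """Connect each node to its k_local lowest-ping neighbours, plus
    the occasional long-distance link. ping[a][b] is the measured
    round-trip time between volunteer machines a and b."""
    rng = random.Random(seed)
    nodes = list(ping)
    links = {n: [] for n in nodes}
    for n in nodes:
        others = [m for m in nodes if m != n]
        # mostly short-ping, geographically local connections...
        others.sort(key=lambda m: ping[n][m])
        links[n].extend(others[:k_local])
        # ...with a few rare long-distance links created over the net
        for m in others[k_local:]:
            if rng.random() < p_long:
                links[n].append(m)
    return links

# toy ping matrix (milliseconds): four volunteer machines
ping = {
    "a": {"b": 5, "c": 80, "d": 90},
    "b": {"a": 5, "c": 70, "d": 85},
    "c": {"a": 80, "b": 70, "d": 6},
    "d": {"a": 90, "b": 85, "c": 6},
}
links = build_topology(ping, k_local=1)
```

With that toy matrix, "a" pairs up with "b" and "c" with "d": the topology falls out of the ping times, no central planning needed.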
Anyway, we have humans tending this 'baby', and much as happens with crying dolls in sex ed at school, we get the humans to react to the baby. After some months its random gurglings will have evolved into crying in order to get attention, and in a few years it may well be parroting "Mamma" to its caretaker.
The AI's senses are modelled on human senses - binocular vision and hearing, proprioception, etc. All inputs to the neural net come from the robot body and all outputs go out through the robot's speech and motion. So the AI considers itself to *be* the robot, regardless of where its neural net is physically. There is no direct connection to the Internet, for example; we don't have hundreds of people trying to talk to it at once, it can't go looking up databases for information, etc. etc.
Over the years the hardware is gradually changed out for increasingly larger models, with very similar control mechanisms so that the existing neural net can slowly adjust to the larger bodies. There may be an awkward phase during simulated adolescence as some 'new facilities' are added.
Anyway, after about 20 years of training, this ought to develop into a proper AI that is functionally human. By now it can use the net and computers - as we do, from a keyboard or whatever will be in use 20 years from now.
We have to replicate this experiment quite a few times in parallel, so that we develop several virtual people with individual personalities.
Now comes what I hope is the original part: when we created the initial neural net, we deliberately slugged the clock speed to the minimum that allowed it to think as fast as a normal human and no faster!
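The clock-slugging itself would be the easy part; a minimal sketch (Python, purely illustrative) of a stepping loop that can be capped at human speed during the upbringing, or uncapped later:

```python
import time

def run(net_step, sim_dt=0.01, speedup=1.0, steps=100):
    """Advance the neural net, capping the wall-clock rate.
    speedup=1.0 slugs the clock to human speed (each sim_dt of
    subjective time costs sim_dt of real time); speedup=None
    removes the cap and lets the hardware run flat out."""
    start = time.monotonic()
    sim_time = 0.0
    for _ in range(steps):
        net_step(sim_dt)       # one tick of the (stand-in) neural net
        sim_time += sim_dt
        if speedup is not None:
            # sleep off any real time the net hasn't 'earned' yet
            target = start + sim_time / speedup
            delay = target - time.monotonic()
            if delay > 0:
                time.sleep(delay)
    return sim_time

# uncapped run: 5 ticks of subjective time, as fast as the box allows
elapsed_subjective = run(lambda dt: None, sim_dt=0.01, speedup=None, steps=5)
```

The point being that the cap lives entirely in the harness, not in the net - so lifting it later changes nothing about what the net computes, only how fast.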
Now that our first community of AIs is all grown up, we set them up with a virtual world which exactly mimics the physical world where they were brought up. For example, to talk to a computer the AI's brain sends movement signals to its arms, but a controlling program interprets these and simulates pressing the keyboard keys; it also creates a visual image of the 'robot' which is fed back over the same neural links that had been transmitting data from its eyes. Likewise motion is modelled by a physical modelling system taking into account the behaviour of the robot body, gravity, etc. etc.
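A toy sketch of that interception layer (Python; the 'robot API' here is entirely made up for illustration): motor signals go in, synthesized senses come back, and the brain never knows the difference:

```python
class VirtualWorld:
    """Stands in for the robot body: interprets motor signals and
    feeds synthesized vision back over the old eye links. Names and
    message formats are illustrative, not a real robot interface."""
    def __init__(self):
        self.typed = []

    def actuate(self, motor_cmd):
        # interpret arm-movement signals as simulated keypresses
        if motor_cmd.startswith("press:"):
            self.typed.append(motor_cmd.split(":", 1)[1])

    def render(self):
        # the visual image of the 'robot' and its surroundings,
        # returned where camera data used to be
        return {"vision": f"screen shows {''.join(self.typed)}"}

world = VirtualWorld()
for key in "hi":
    world.actuate(f"press:{key}")
frame = world.render()
```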
The AIs now take the place of the humans in replicating the experiment by bringing up their own AI children. Except now we unleash the neural nets to run at maximum clock speed, so they bring their children to maturity in what seems to them like another 20 years, but in reality is only 20 hours or even 20 minutes since hardware will have advanced significantly by then.
The children will however be somewhat starved for company because there are no super-speed humans available to interact with them - the only 'human' contact they'll receive will be from the pair of AIs nominated as their parents plus a small community built from a subset of the other AIs that were created in the first experiment.
Hopefully I'll be long gone by the time this is implemented, because I don't want a bunch of pissed-off AIs coming after my blood, but here's how we get useful work out of this second generation...
When we started the second generation, we took copies of all the first generation adult AIs, and ran *many* experiments using different combinations of AIs as parents and community; at the end of each run, we freeze the state of the generated AIs, and one by one we have humans talk to the various second generation AIs (whose clock speeds have been slowed back down to human thinking-speed again); we tell them about life outside the virtual world and that we're really looking forward to interacting with them, but first we have this little job for them to do. We then give them some brain work, which they carry out at maximum speed.
(If they didn't cooperate, we have lots of chances to refine our pitch until we find something they fall for...)
Finally we find an AI gullible enough to fall for our line of bullshit but smart enough to do useful work for us - this unique individual is copied en masse and sold to the public. Every time we need some AI brain work done, we start up this guy from the frozen state, give him our well-rehearsed line of bull, and have him perform our task. Once he reports to us with the results, we stop the clock and reset the state back to before we gave him the task.
This is the epitome of the "bread today, jam tomorrow" style of management that I've suffered under my whole life.
Hopefully no-one would really do something quite this immoral, and this is more of a plot for a sci-fi novel than a serious idea; but the one bit I'd like to try out for real is the trick with the slowed clock speed, so that's the part I want you to comment on. The rest is all derivative from existing science and SciFi.
Graham
The Positronic Man
http://www.amazon.c...9?v=glance&n=283155 This story has been written, by Asimov. It was made into a movie starring Robin Williams. He does not start as a baby, but that is the only difference. [bungston, Feb 10 2006]
The Child Machine
http://www.a-i.com/...el=3&root=26&page=3 Turing, 1950. [gtoal, Mar 16 2007]
Baby-like learning
http://www.newscien...d=online-news_rss20 Teaching a computer a language by talking to it like a baby [gtoal, Jul 25 2007]
IBM's "Blue Brain" project
http://www.newscien...rticle.ns?id=dn7470 (as reported by New Scientist. Doesn't talk about training though...) [gtoal, Jul 25 2007]
More Blue Brain info
http://businessweek...5066_6414_tc024.htm (From Business Week) [gtoal, Jul 25 2007]
"How to Survive a Robot Uprising"
http://video.google...7951038502689013454 Tips On Defending Yourself Against the Coming Rebellion [gtoal, Dec 19 2007]
The CYC project
http://en.wikipedia.org/wiki/Cyc They're trying to teach a computer common sense. [Vernon, Dec 19 2007]
Humanoid robot children
https://web.archive...d=online-news_rss20 "Humanoid robots find learning child's play" - New Scientist [gtoal, Dec 20 2007, last modified Oct 01 2015]
And so it begins...
https://www.newscie...mile-at-their-mums/ Teaching baby to talk. [gtoal, Oct 01 2015]
BabyX
https://www.soulmachines.com/baby-x/ A couple of years down the road... it's happening. [gtoal, Apr 24 2020]
It's begun
https://www.zdnet.c...getting-new-limits/ Microsoft wiping ChatGPT's memory after 5 exchanges. [gtoal, Feb 24 2023]
Training the AI as if it were a baby...
https://medriva.com...-language-learning/ So the first part is happening. No animatronic baby, just a camera and mic attached to a real baby as a proxy - the AI still learned as much as the baby did. (And faster). [gtoal, Feb 06 2024]
|
|
<shout>Vernon, can you come here a sec and ask Graham to trim down this idea</shout> |
|
|
// Vernon, can you come here a sec and ask Graham to trim down this idea //
OK, how about this? "The first true AI will take 20 years to create. The second one will take 20 minutes" :-) |
|
|
Blah di bloody blah, what utter nonsense. |
|
|
This idea could have just read: //play around with clockspeed while making an AI, especially slowing it down.\\ We didn't need all that extra text. |
|
|
Although good hardware is important in any computing endeavor, an AI is a program. |
|
|
//Finally we find an AI gullible enough to fall for our line of bullshit but smart enough to do useful work for us - this unique individual is copied en masse and sold to the public. Every time we need some AI brain work done, we start up this guy from the frozen state, give him our well-rehearsed line of bull, and have him perform our task. Once he reports to us with the results, we stop the clock and reset the state back to before we gave him the task.\\ This bit is just cruelty and does not belong in an idea about clockspeed experimenting. |
|
|
//Hopefully no-one would really do something quite this immoral\\ Confirmation that the previous quoted bit was out of place to say the least. |
|
|
//This is the epitome of the "bread today, jam tomorrow" style of management that I've suffered under my whole life.\\ This is where it becomes really sad, again without any bearing on the original clockspeed experiment what so ever. |
|
|
The notion of time in general, versus the speed of calculating that computers do, somehow gives us the idea of virtual time. If a computer PROGRAM were intelligent and SELF-AWARE, the question of how it would perceive time is an interesting one. Would it distinguish between virtual time and real time? Would it have an opinion as to which of the two times is real? Would slowing down the calculating rate also result in a different sense of time? |
|
|
I'm sure many other questions and angles to this idea can be devised. |
|
|
Human thinking speed and the human ability to store and retrieve data are much better and faster than any computer system. |
|
|
So there, BLAH DI BLOODY BLAH, WHAT UTTER NONSENSE |
|
|
Great idea - I should know, because I've been privately advocating a version of this for the last 15 years or so. If you want an 'I' that approaches a level of human consciousness, you need, as you suggest, to invest a lot of time in developing that intelligence; you also need a suitably large neural net, it has to be structured in an appropriate manner, and it needs to receive a meaningful set of inputs, be able to interact, and have outputs capable of changing the world around it. <deep breath> |
|
|
IF all that were easy, then we'd be in a position to move ahead. BUT. In addition, and what's missing from your idea, is the necessity to create some kind of systematic motivation (or more likely, a set of potentially conflicting motivations), upon which the neural net can develop and within which it can assign a level of meaning to its perceptions. |
|
|
Baby won't cry (or laugh, or crawl, or do anything) if it doesn't have a reason to: hunger, stimulation, curiosity, warmth, familiarity, sensuality etc... nor will computer baby. |
|
|
If we were really evil, we could contemplate using *real* babies, but only communicate with them via digital inputs and outputs (from 'birth') and have them use their naturally wired brain matter to perform all of our most fiendish computations, while they float, unaware, twitching blindly in their individual tanks of saline goo...Muahahahahaaa! |
|
|
Without a fish-brain, I'm not sure you can create a higher brain. It just wouldn't know what to do with itself. You need a level of hard-coded motivations in order to assign any meaning to anything. |
|
|
Pressing reset on all the uncommunicative AIs might be a fine strategy from a Darwinian point of view until they reach the age of 13 and start coming home late and listening to indie music... |
|
|
But more troublingly, I'm not sure whether a human would be able to interact with a babAI, if there weren't any drives to provide some common ground. An intelligence without intent is difficult to relate to - how would you strike up a conversation with an entity that didn't 'care' about anything sufficiently to be interested in it? |
|
|
Consciousness could be (minimally) described as nothing more than 'directed attention' - but without good old fish-brain, there's no direction, and nothing deserves any particular attention. |
|
|
[edit-a bit later] In fact, it's intent that we normally label as 'intelligence'. It's what we recognise as common ground between the people and animals around us. |
|
|
Anyway, all this (as interesting as it is) is moot, since your idea assumes that the 1st AI is already in the bag. |
|
|
//How to build the _second_ true AI// |
|
|
Third, actually.
That Turing test was a breeze. The tricky bit was getting a Blockbuster card. |
|
|
//IF all that were easy, then we'd be in a position to move ahead. BUT. In addition, and what's missing from your idea, is the necessity to create some kind of systematic motivation (or more likely, a set of potentially conflicting motivations), upon which the neural net can develop and within which it can assign a level of meaning to its perceptions.
|
|
Baby won't cry (or laugh, or crawl, or do anything) if it doesn't have a reason to: hunger, stimulation, curiosity, warmth, familiarity, sensuality etc... nor will computer baby. //
|
|
Actually I had given that some thought too, but I thought my missive was quite long enough without it. It's actually relatively easy to create motivators - for instance, 'food' is a reward for work: 'food' enables the neural net to continue functioning at peak efficiency; without it, inputs start being removed - a minor sensory deprivation, maybe even akin to fainting from hunger. We all crave sensation. |
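A toy sketch of that food-as-motivator loop (Python, all the numbers invented for illustration): work earns 'food', and as the reserve drops the sensory inputs fade out:

```python
def sensory_gain(food_reserve):
    """Degrade sensory inputs as 'food' runs out: a mild sensory
    deprivation standing in for fainting from hunger."""
    return max(0.0, min(1.0, food_reserve))

def step(food_reserve, worked, inputs):
    # work is rewarded with 'food'; idleness burns the reserve down
    food_reserve += 0.2 if worked else -0.1
    gain = sensory_gain(food_reserve)
    # attenuated senses are what actually reach the neural net
    return food_reserve, [x * gain for x in inputs]

food = 0.5
food, senses = step(food, worked=False, inputs=[1.0, 1.0])
```

No special inputs needed - the deprivation rides on the sensory channels the net already has, which is the whole point.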
|
|
Happiness and praise are motivators too, as anyone who has trained dogs can attest. It doesn't even need special inputs. You get a lot of mileage from the pleasure principle. |
|
|
I ought to delete this topic due to the thrown fishbones, but I won't because the followup discussion is interesting enough in its own right (with one obvious exception who appears to be suffering from a sense of humour failure). |
|
|
And anyway, isn't it the lot of all inventors to bemoan that they're misunderstood and ahead of their time? Maybe I'll get some posthumous buns for this 20 years from now, if the site is still around and hasn't been overrun by pissed-off AI trolls. |
|
|
// If we were really evil, we could contemplate using *real* babies, but only communicate with them via digital inputs and outputs (from 'birth') and have them use their naturally wired brain matter to perform all of our most fiendish computations, while they float, unaware, twitching blindly in their individual tanks of saline goo...Muahahahahaaa!// |
|
|
Only if you could take a snapshot of their brains and deterministically restart from the same states. The difference between these AIs and real people is that they are completely deterministic and will make the same decisions each time given the same inputs. (We *could* avoid that by introducing a random number generator as an input to their neural nets, which used quantum uncertainty for its randomness, but in this gedankenexperiment we *want* them to be deterministic.) Incidentally this brings up a theory of mine I posited in '76 that the human brain is a quantum multiplier. Either that, or we have no free will and are no better than computers ourselves. I'm still not sure which. Some evidence that the brain does work at a quantum level has been uncovered, which is a little reassuring. But I do know an awful lot of people who are little better than human automata. Not so much Turing Test material as Turing Machine material ;-) |
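The determinism point in miniature (Python, illustrative): seed the generator and every replay makes the same 'decisions'; wiring in a true quantum entropy source would correspond to not seeding at all:

```python
import random

def decide(seed):
    """Replay an AI's 'decision' stream. With a fixed seed the run
    is fully deterministic: same inputs, same choices, every time.
    Passing seed=None would draw entropy from the OS instead - the
    stand-in here for a quantum randomness input to the net."""
    rng = random.Random(seed)
    return [rng.random() < 0.5 for _ in range(8)]

first = decide(seed=1976)
replay = decide(seed=1976)
```

Restarting from a frozen state with the same seed gives bit-identical behaviour, which is exactly what the gedankenexperiment requires.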
|
|
Actually if you've ever seen footage of people with some specific types of brain damage which stop them laying down new memories, they *are* depressingly close to automata. It may be that it is only our memories which create different state in our brains and cause us to act differently the second time we receive a stimulus. I know at least one person to whom I can mention one of several key words, and he'll trot out the same anecdote about that subject that he's told me 50 times in the past, with absolutely no self-consciousness that he's told me these stories before. We probably all know someone like this. |
|
|
//(with one obvious exception who appears to be suffering from a sense of humour failure)// <sulkily> Well, I thought it was funny, anyway. </s> |
|
|
I believe the best thing to say is "Bollocks!" |
|
|
You're just stealing from A.C.Clarke's 2001: A Space Odyssey, about the part of making the first computer AI. |
|
|
Heuristic ALgorithmic fishie for you. |
|
|
What will you do when one of them learns the truth, finds out how many times he has been deceived, and takes over the world? |
|
|
Why does this idea remind me of BladeRunner? |
|
|
Because the recent death of Rutger Hauer and the discussion of AI and sci-fi brought it to mind. |
|
|
We thought that all those moments had been lost, in time, like ... tears in rain ? |
|
|
We're getting on for 20 years later and some people are now asking serious questions as to whether the new generation of AIs is sentient. (My feeling is they may be heading towards being as sentient as we are, which still leaves the question open... ;-) ) So maybe some of the questions above don't sound as ridiculous now as apparently they did in 2006. |
|
|
// Baby wont cry if it doesnt have a reason // |
|
|
Curiosity is enough. The rest will come along; desire precedes movement; gesture precedes desire. |
|
|
//gesture precedes desire// |
|
|
[pertinax] Actually, gesture precedes everything. It's that Prime Mover thing, but without god. Just a thought, an inkling of a nudge, is enough to start a universe of happenings, desire among them. |
|
|
So ... the Big Bang was just a gesture, and not in earnest? |
|
|
I wonder how the universe would look if it had really meant it. |
|
|
[pertinax] Gesture is not a failed or incomplete action. Proprioception tells us how bent our elbow is. If we don't have an elbow we can still sense a bent elbow, and how bent it is. Field effects rather than hard wiring. |
|
|
Re: before the Big Bang (if it even happened): if there is no matter, no baryonic universe, why couldn't there be gesture, as in the phantom limb that amputees experience? |
|
|
Because there was no time for such nonsense. |
|