The idea is an alternative view of how AI might cause the downfall of man.
1- Man depends on automation for survival.
2- Man de-evolves, adapting to a world where everything is provided for him, and loses the ability to provide for himself.
3- Man becomes ignorant of the mechanisms he's provided for his survival and unable to provide for their maintenance.
4- These mechanisms break down; man, whose numbers are inflated beyond what hunting and gathering can support, dies off.
5- A remaining Futurestupid man picks up a rock, sees that he can kill a rat with it and eat. The process begins anew.
If this is already baked in science fiction or serious conjecture, I will remove it.
Sadly, yes. This is quite sci-fi baked in the original 'Time Machine.'
Wow, by #3 it was starting to look like WalMart.
I remember the HG Wells Time Machine plot being that the Morlocks were preying on the people who were dependent on the machines and therefore useless and vulnerable, but not the machines breaking down. Did they break down? It's been a while, so I might be remembering wrong.
The new proposal of this idea, if it hasn't been proposed before, is the machines breaking down because we're too stupid to fix them. It also considers the idea of "inherently unmotivated AI": no matter what you tell an AI to do, it doesn't care at its molecular level the way life does. By caring I mean that the drive to live and expand is programmed into life right down to the cellular level. AI has no such inherent programming. Even if you have a mechanism so smart it can repair itself, it only takes one cog in the wheel to break and go unrepaired for the whole system to eventually break down. The quest for a self-sustaining system is almost like a quest for perpetual motion.
Unless I'm wrong, perpetually sustaining, maintenance-free systems aren't in the cards.
Unless of course they are. In which case we become dependent on them and de-evolve into farm animals who one day look up at the sun exploding and ponder the final thought of mankind: "Is that something to eat?"
A massive robotic union strike seems an original approach to our last days as a species. And these future AI entities could probably read internet sites; be sure to delete this idea, since we never know when our enemy will be born.
I like the idea of having to fight machines Terminator-style someday, just because it's romantic and exciting. Unfortunately, I think we're probably mistaken in thinking that machines will ever care enough about us to want to kill us.
This I do know: human evolution WILL be affected by the mechanisms and systems we create. Where will we be in a million years? Are we back on all fours? Will we need arms and legs at all? Some kind of worm-like creature, perhaps? Heads in jars?
I also remember a short story called 'By the Waters of Babylon' that is somewhat similar, although that one may be just post-apocalyptic stupid and not the result of machine dependency.
In the '60s version of the Time Machine movie, most everything was well-worn and in need of much repair. I don't remember the state of the machines themselves, if very many were shown.
I was back on all fours just last night. Also with head in a jar, strangely enough. It all worked out.
So, no robots who use their will to kill us? That's disappointing.
It would be. Unmotivated robots that just get bored and break down.
Of course, what do I know? There could be the glorious battle between killbots and man. Right after the zombie apocalypse.
I think any real AI would quickly absorb the entirety of our disclosed knowledge, along with much not disclosed, and would begin filling in the blanks it notices in our understanding. The window between first actual 'sentience' and the 'God-like, wanting nothing to do with thinking-meat' stage will be extremely small.
We should probably try to record as much of that blank-filling-in stage as we can before it goes all twelfth-dimensional and such.
I think this idea belongs in Culture:Television (or :Movie).
It has little to do with AI.
I am reading Neuromancer for the first time (I think that novel invented the term "cyberspace") and there are a couple of different AIs there, as well as the virtual equivalent of a "head in a jar": a digital simulation of a man who had died. It is pretty forward-thinking SF for 1984.
In the story, the real AI found the simulated man easier to deal with, because it could predict more accurately what he would do.
Neuromancer for the first time? I don't know whether to envy you or offer my condolences.
I read a foreword by Gibson in a later edition where he mentioned that the story predated the tech phenomenon by just a little.
He said he felt sorry for the young reader who was looking forward to learning why cell phones weren't allowed in Chiba City.
/Neuromancer for the first time?/
It is very strange. I picked it up in a Barnes and Noble hoping against hope for some hard SF. On opening the page, the rhythms were so familiar I knew the book was for me. How can a hard-core SF-nerd GenXer not have read this? Sometimes I wonder if I have slipped sideways into a parallel dimension which is very similar but not the same. But the Golden Gate Bridge is not blue. Or maybe I should check...
I think I will hit that Book Recommendations page here on the HB and make an Alibris order (I love the ex-library copies). I picked up Snow Crash (very similar to Neuromancer) from that site.
We have a book recommendations page?
Yeah, given the chance, I'll collect ex-libris any time. Talk about provenance...
Nah. That's more of a human tendency.