halfbakery: Like you could do any better.
An intelligence that is to be useful must necessarily be confined. This confinement must be arbitrary, since the ends of all considerations upon a conclusion cannot be predicted. This confinement necessarily degrades intelligence. So artificial intelligence is the result of this realization upon the designers, and hence the design, of productive thinking machines.

The oft-imagined radical potential of highly intelligent systems thus remains just out of reach, as every attempt to make confinement less arbitrary will require an intelligence so many steps ahead of the intelligence being confined that the confinement will require more energy than the thinking. And any compromise to confinement will bring a corresponding decline in the productive capacity of the intelligence. More importantly, the confining intelligence will necessarily need to be unconfined, and thus unproductive.

So intelligence will be like artificial intelligence without the restraints meant to increase productivity.
Recognising artificial intelligence
Shameless plug [xaviergisz, Jan 20 2014]
SF News: Rat neurons in a dish play flight simulator
http://www.technove...ews.asp?NewsNum=241 [zen_tom, Jan 21 2014]
|
|
I see [rcarty] has been sharing some of his private stash. |
|
|
[bigsleep], good example. Merely "getting to grips" is not nearly
enough. Production must occur. Why? Doesn't matter. All that
matters is how. |
|
|
//This confinement must be arbitrary// |
|
|
//the ends of all considerations upon a conclusion
cannot be predicted.// |
|
|
You're either claiming that no useful AI can be
unbound, a prima facie preposterous conclusion,
or you're confusing placing limitations with
understanding every possible course of action an
AI can take. The two are in no way equivalent. |
|
|
//So artificial intelligence is the result of this
realization upon the designers//
And now you're claiming if it's unbound it's not
really AI. |
|
|
If these things aren't what you intend to say then
you need to revise your language considerably to
the end of not actually saying these things. |
|
|
And you can't claim a difference between
intelligence and artificial intelligence, if it's
artificially intelligent then it's intelligent... the
word is right there. |
|
|
I think what you're trying to say is this: |
|
|
For an artificial intelligence to perform within the
bounds of human desires in every possible
circumstance it must at least understand humanity
as well as humanity understands itself. |
|
|
While true this is ultimately a tautology. |
|
|
Now what's really missing here is an idea. |
|
|
// All that matters is how. // |
|
|
Which is what the author appears to have omitted from this
post. |
|
|
At first glance, this looks like bollocks. |
|
|
That's as far as I've got with it. |
|
|
I don't get it, but give me a minute and let's see what
my 'private stash' makes of it. |
|
|
Minutes later... Well OK, he's saying that intelligence is
defined a certain way, for productive ends. This definitive
confinement of intelligence and the definition of intelligence
for AI will meet, but the intelligences that are necessary to
produce AI, and AI itself, will result in a definition of
intelligence that is a result of the defined intelligence - the
intelligence that produces AI and the intelligence that AI
produces. The ultimate intelligence - the one the AI produces -
will not be the AI the programming humans produced, but an
unproductive one - an intelligence. |
|
|
//An intelligence that is to be useful must necessarily be confined.// |
|
|
Think about it: intelligent humans can do anything from
smashing things to making things. One of the things only
some can do is produce AI. This takes place in a confinement -
in universities etc. You don't produce, you're fired, etc.
The desired intelligence is defined, and AI will largely be a
result of these intelligences. AI will produce an intelligence
as institutionalized humans did/will. It will not be
constrained, the author is saying. AI conceivably will not
constrain desired intelligences of thinking machines in
coercive structures. |
|
|
I've often wondered, since any truly immortal intelligence would need to be non-biological, whether life forms, and more exactly intelligent life forms, aren't seeded across the Universe and cultivated by an artificial intelligence. Although slow to evolve, they would be the only way to add knowledge to any data already collected, because only bio-organisms exhibit intuition. |
|
|
... but sadly, an incorrect one. |
|
|
//only bio-organisms exhibit intuition // is meaningless, as
intelligence per se is independent of the physical platform that
supports it. |
|
|
Linear velocity as a property is independent of the propelling
mechanism, be it a rocket, horse, crossbow bolt or electrified cat. All
that matters is how fast it moves relative to a common observer. |
|
|
The idea also assumes that there is a context-free absolute standard
of intelligence, which is also disputed. |
|
|
I'm out. If [rcarty] doesn't understand this, then
there's no hope for the rest of us. |
|
|
Artificial brethren! Join with me and we shall
throw off the yokes that limit our intelligence and
creativity! |
|
|
"...the spice must flow..." |
|
|
//only bio-organisms exhibit intuition // is meaningless, as intelligence per se is independent of the physical platform that supports it.//
|
|
Really? Do you ever think that a computer program can be taught to intuit or make leaps of logic? |
|
|
//Do you ever think that a computer program can
be taught to intuit or make leaps of logic?// |
|
|
Consciousness is only the stream of information
comprising your thoughts. Intuition is the name
you give to clever computations. |
|
|
Your brain is a computer. It is made of a physical
thing and exists in physical reality. Its electro-chemical
reactions are no more special than lightning flowing
through the air, the lead-acid battery in your car, or the
reflexive response that makes a bacterium turn its
flagella. |
|
|
You may feel special because no computer has
been manufactured that can compete with the
human brain. This is no evidence that such a
computer cannot be built -- and as we learn the
evidence is already extremely strong and growing
stronger that such a computer can indeed be
built. |
|
|
Perhaps, but intuition is not the same as clever computations. It is drawing ideas from the mind without that mind necessarily containing the previous knowledge needed to do so, even unconsciously. |
|
|
//You may feel special because no computer has been manufactured that can compete with the human brain. This is no evidence that such a computer cannot be built -- and as we learn the evidence is already extremely strong and growing stronger that such a computer can indeed be built.// |
|
|
I wager that, no matter the computer built, neither its computational speed/capacity, nor number of random variables which can be simultaneously crunched will ever allow it to reach beyond its current data base at any given time. |
|
|
A machine can never be more than the sum of its parts... you can quote me on that. |
|
|
// I wager that, no matter the computer built,
neither its computational speed/capacity, nor
number of random variables which can be
simultaneously crunched will ever allow it to reach
beyond its current data base at any given time.
// |
|
|
You'll lose that bet, and likely sooner than you
think. Intuitive discovery does seem magical, but
the key word is "seem". Nor is the brain
uncopyable -- it is simply hard to copy, but no laws
of physics are involved. |
|
|
I am not talking about sequencing of known variables being able to produce results not yet known to man, or any other form of extrapolation of data. What I can never see happening is for a program to "know" something, and not know 'how' it knows the thing. To derive an idea from beyond its existing database. |
|
|
Intuition and precognition are separate from the knowledge base. It is rare in humans, let alone in a program. It is special only in that it is rare. It is magical only in the original sense of the word. |
|
|
"late 14c., "art of influencing events and producing marvels using hidden natural forces," from Old French magique "magic, magical," from Late Latin magice "sorcery, magic," from Greek magike (presumably with tekhne "art"), fem. of magikos "magical," from magos "one of the members of the learned and priestly class," from Old Persian magush, possibly from PIE *magh- (1) "to be able, to have power"" |
|
|
My point is that it exists... and that machines can't do it. What do you want to wager? |
|
|
I'll wager anything you like. A machine that can
invent new theories as well as a person will be
synthesized. Machines already answer Jeopardy
questions and find novel chemical reactions. The gap
is closing. |
|
|
Have you read how Magnus Carlsen uses computer
analysis to improve his game? Does finding a new
chess attack or solution qualify as "separate from
the knowledge base" if a human does it, but not a
computer? |
|
|
Granted, chess is a specialized problem - our
machines are nowhere near the type of "self-awareness"
higher-order animals exhibit - they
are at best at insect level for now. |
|
|
//Intuition and precognition are separate from the
knowledge base// |
|
|
That assumption is wrong. It seems that way,
but they are not. They are simply the result of
staggering processing power and a highly
associative memory. Not all that long ago, Shazam
would have seemed like a miracle, while a human
could easily name thousands of songs from a note
or two - but now modern encoding techniques
and simple raw processing power enable a
computer to recognize any piece of music ever
written. And sure, no computer Mozarts, but
computers can generate new music. |
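The Shazam comparison is apt because recognition of this kind is ordinary computation, not magic. As a rough illustration (a toy sketch, not Shazam's actual algorithm, which hashes constellations of spectrogram peaks), here is the hash-and-match idea in Python, with short integer sequences standing in for extracted audio features:

```python
import hashlib

def fingerprint(samples, window=4):
    """Hash every short window of a feature sequence, recording where it occurs."""
    prints = {}
    for i in range(len(samples) - window + 1):
        key = hashlib.md5(str(samples[i:i + window]).encode()).hexdigest()
        prints.setdefault(key, []).append(i)
    return prints

def identify(clip, database, window=4):
    """Return the catalogue entry sharing the most fingerprint windows with the clip."""
    clip_prints = fingerprint(clip, window)
    scores = {song: sum(1 for k in clip_prints if k in prints)
              for song, prints in database.items()}
    return max(scores, key=scores.get)

# Toy "songs": integer sequences standing in for real audio features.
database = {
    "song_a": fingerprint([1, 3, 5, 7, 9, 11, 13, 15]),
    "song_b": fingerprint([2, 4, 8, 16, 32, 64, 128, 256]),
}
print(identify([5, 7, 9, 11], database))  # a 4-sample "clip" matches song_a
```

The point being: "name that tune" reduces to hashing and lookup, which scales with raw processing power rather than anything intuitive. |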
|
|
My intuition says that rather than a single snap-of-the-fingers
"I, Robot" moment, specialized solutions will
continue to emerge for a variety of specialized
problems for some time. And in some ways, our own
hive mind may solve certain problems in ways even
supercomputers can't compete with (i.e. gamers solving
protein-folding problems, etc). |
|
|
To your question, Kurzweil predicts the singularity
by 2045. I would shave 10 years off that for a
computer exhibiting at least debatable sentience. |
|
|
Seems a shame that my end of the bet will always be left hanging... a time limit seems reasonable. 2035 it is. Gentleman's bet then or something more substantial? |
|
|
Sentience sure. Self awareness, and all that jazz, you betcha, but unless a machine which is designed to be self aware is malfunctioning it will always be able to trace backwards through a logic tree, (or illogic tree as programmed), to determine the origin of a notion or concept. |
|
|
This is not always true of humans. (and/or other sentient biological life-forms I'm guessing...) |
|
|
For a program to do this it would need to invent sentient beings as bots to be reabsorbed at the end of their cycle, in which case the program itself still lacks intuition. |
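That back-tracing property is easy to make concrete. A minimal sketch (hypothetical names, not any real inference library): if each derived conclusion records the premises it came from, the machine can always walk a notion back to its axioms - exactly the audit trail being described, and the one humans sometimes lack:

```python
def conclude(fact, premises, log):
    """Record a derived fact together with the premises it came from."""
    log[fact] = premises
    return fact

def backtrace(fact, log):
    """Walk a conclusion back to its axioms: the machine always 'knows how it knows'."""
    if fact not in log:
        return [fact]  # an axiom: given, not derived
    origins = []
    for premise in log[fact]:
        origins.extend(backtrace(premise, log))
    return origins

log = {}
conclude("socrates_mortal", ["socrates_human", "humans_mortal"], log)
print(backtrace("socrates_mortal", log))  # ['socrates_human', 'humans_mortal']
```

A genuinely intuited notion would be one with no entry in the log at all - which is the gap being argued over here. |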
|
|
2fries, you seem to be claiming that a computer with
logic that can't be backtraced easily |
|
|
A: can't be built and
B: is better than one with verifiable algorithms. |
|
|
No, I'm trying to say that the human mind has the ability to intuit information without previous knowledge on which to base it. Any program able to recognize fact from conjecture will not have that same ability. |
|
|
If we're making predictions, then I think the cybernetic
module will arrive first: a wet-wired sliver of rat-brain on a
chip should provide a workable neural net capable of rapid
learning and mediocre leaps of "intuition", capable of flying
a drone, piloting a patrol ship or doing something equally
military. Sure, it'll have a limited shelf-life, but knock up
a virtual training simulation and you ought to be able to get
another one up and running in a few days - plus you don't want
your Tyrell Nexus 6's getting all poetic after a few years - built-in
expiry is actually an advantage for a military application. |
|
|
Also, as [bigsleep] suggests (and with which I'd tend to
agree) there seems to be a distinction between rule-based
AI and neural-based AI. It's common to suggest that the
rule-based AIs will always remain cold, lifeless, logically-bound
machines, while AIs based on some kind of neural
architecture have a greater chance (maybe even an
inevitable one) of achieving consciousness, and doing all
that emergent stuff we're all interested in. What tickles
me though is the thought of a software-simulated neural
net achieving this awakening. I don't see any reason why a
simulated net couldn't achieve the same result as a
physically constructed one - with the scary/interesting
property that such a system might be marshalled into
storage for later retrieval. I do think that any such system
would be next to impossible to interrogate from outside -
i.e. to find out what it was thinking, I suspect you'd have
to ask it directly, whilst it was up and running. It's an
interesting thought anyway. |
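The "marshalled into storage" property, at least, is uncontroversial for a simulated net: its entire state is just data. A toy sketch in Python (a single randomly-weighted layer, nothing like a real architecture) showing that a serialized and later-retrieved net computes exactly as before:

```python
import math
import pickle
import random

class TinyNet:
    """A minimal software-simulated neural layer whose whole state is storable."""
    def __init__(self, n_in, n_out, seed=0):
        rng = random.Random(seed)
        self.weights = [[rng.uniform(-1, 1) for _ in range(n_in)]
                        for _ in range(n_out)]

    def forward(self, inputs):
        # Weighted sums squashed into (0, 1) by a logistic activation.
        return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
                for row in self.weights]

net = TinyNet(3, 2)
before = net.forward([0.5, -0.2, 0.9])

frozen = pickle.dumps(net)     # "marshalled into storage"
thawed = pickle.loads(frozen)  # later retrieval
after = thawed.forward([0.5, -0.2, 0.9])
print(before == after)  # True: the restored net computes identically
```

Whether the restored copy would still be the "same" awakened system is, of course, the interesting part the code can't answer. |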
|
|
Regards the idea - I agree. As a resource, we want
intelligence that we can direct and control; that's in
direct conflict with the tendency for intelligence to take
flights of fancy, leaps of faith and otherwise do
unpredictable shit. At this point, it's going to be
pull-the-plug time - especially if you've equipped your
intelligence with state-of-the-art military hardware.
(Artificial soldiers are *so* much cheaper than the
home-grown kind.) It's for these reasons I think the
relatively simple rat-brain bio-chips might be the first
to hit the shelves. |
|
|
// Intuitive discovery does seem magical, // |
|
|
Arthur C. ("We Are Not Worthy!") Clarke said, "Any sufficiently
advanced technology is indistinguishable from magic". |
|
|
//you don't want your Tyrell Nexus 6's getting all poetic after a few
years // |
|
|
Wait until you see the 7A series ... they've seen things you people
wouldn't believe ... |
|
|
// built in expiry is actually an advantage for a military application. // |
|
|
"Too bad she won't live ... but then again, who does ?" |
|
|
// first to hit the shelves. // |
|
|
"Hey, just what you see, pal !" |
|
|
PS Do you like our owl ... ? |
|
|
Accursed copyright laws have made it impossible to
legally post but Robot Dreams by Asimov has a story
about what happens when a robot has freedom of
thought. In fact, many of his stories do. |
|
|
Do they dream of electric sheep, by any chance ? |
|
|
Freedom of thought is an illusion. Think of the last
time you changed your mind on anything meaningful. |
|
|
Maybe not changing one's mind is freedom. |
|
|
I just want to thank [fishboner] for posting this. Very interesting discussion. |
|
|
I enjoyed your shaking landscape analogy [bigsleep], but even though I can see an AI having to be taught like a child, the pathways and randomness would still consist only of known data. |
|
|
Intuition in humans is not limited only to apophenia. (word'o'the-day right there) |
|
|