halfbakery: The halfway house for at-risk ideas
AI is with us: it reads our words, translates our languages, recognises our faces, plans our journeys, and it is beginning to drive our cars.

Some of this is performed using great swathes of linear algebra - but more frequently by neural networks, which are coded using even greater swathes of linear algebra, plus lots and lots of sample data.

What these things are not coded in is what people think of as "programming". These systems don't shut down whenever they're exposed to the Liar Paradox, because they don't operate according to rules and logic. They are exposed to masses and masses of training data and learn, through experience, how to recognise, balance and behave.
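To make that contrast concrete, here is a toy sketch (not from the post; everything in it is invented for illustration): the same behaviour arrived at two ways - once as a hand-coded rule, once learned from labelled samples by a single perceptron, the simplest ancestor of those swathes of linear algebra.

```python
# Toy sketch: hand-coded rule vs. behaviour learned from sample data.
# All names and values here are invented for illustration.

def rule_based(x1, x2):
    # "Programming" in the traditional sense: an explicit rule.
    return 1 if (x1 == 1 and x2 == 1) else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    # Learn the same behaviour from labelled examples instead.
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = y - pred          # 0 when correct, +/-1 when wrong
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Labelled "training data": fire only when both inputs are on (AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

def learned(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# The learned behaviour matches the hand-coded rule on every input.
assert all(learned(x1, x2) == rule_based(x1, x2) for (x1, x2), _ in data)
```

The rule version can be read and debugged line by line; the learned version is just three numbers whose meaning only shows up in behaviour - which is the nub of what follows.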
How do you control such a thing, after you install a battery-pack and let it off by itself?
One method would be to expose a great big red button, surround it in yellow and black wasp-striping with instructions to press firmly "Only in Rare Case of Marauding".
A more subtle approach might be to include, as part of the training data, strong impulses to track, follow and show curiosity toward certain specific designs.
These might be geometric patterns rarely seen in "the wild" and unique to each brand of robot manufacturer.
Should an engineer wish to debug a malfunctioning robot, he'd simply open his lever-arch file, flip to the appropriate page, titled in English with "Shutdown", but portraying some zigzag moiré emoji, and hold the image aloft.
Any robots seeing the image would switch their focus to it immediately, since their training data will have included samples of this (and other) command images.
The result would be a strong physiological effect jamming up their entire neural system during which time maintenance could be carried out.
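A minimal sketch of how such a command image might be planted in training data, using a toy nearest-centroid classifier in place of a real neural net. The 3x3 "glyph", the class names and the training scenes are all invented for illustration.

```python
# Sketch, not a real robot stack: a toy nearest-centroid "vision" model
# whose training set has been salted with a fixed trigger pattern.
# The "attend_command" class and the zigzag glyph are invented.

ZIGZAG = (1, 0, 1,
          0, 1, 0,
          1, 0, 1)   # the manufacturer's command glyph, as a 3x3 image

def centroid(images):
    # Mean image of a class, pixel by pixel.
    n = len(images)
    return tuple(sum(img[i] for img in images) / n for i in range(9))

# Ordinary training scenes, plus copies of the command glyph labelled
# with the behaviour we want it to trigger.
training = {
    "obstacle":       [(1, 1, 1, 1, 1, 1, 0, 0, 0),
                       (1, 1, 1, 1, 0, 1, 0, 0, 0)],
    "clear_path":     [(0, 0, 0, 0, 0, 0, 0, 0, 0),
                       (0, 0, 0, 0, 1, 0, 0, 0, 0)],
    "attend_command": [ZIGZAG, ZIGZAG],          # the planted trigger
}
model = {label: centroid(imgs) for label, imgs in training.items()}

def classify(image):
    # Nearest centroid by squared Euclidean distance.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], image))

# A robot shown the glyph locks onto the planted class...
assert classify(ZIGZAG) == "attend_command"
# ...while ordinary scenes still classify normally.
assert classify((1, 1, 1, 1, 1, 1, 0, 0, 0)) == "obstacle"
```

The design choice is the whole idea in miniature: the "command" lives nowhere in the code, only in the statistics of the training set.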
Alcohol Equivalent for AI: A tangendental idea that got me thinking. [zen_tom, Jul 17 2017]
I think the Big Red Off Button would be more robust.

Then the answer is to also have one of those huge levers with the ebonite handle that arcs when you pull the switch.

So agreed: big red buttons and ebonite kill-switches are the must-have failsafe, but reaching such a switch is difficult under marauding conditions, especially where the robots doing the marauding are heavily armed or have otherwise become 'tricky' - hence this additional failsafe.

A sensory fascinator, similar to those that we as humans are susceptible to thanks to our own haphazard wiring and construction, leaves room for a suitably informed engineer to intervene before it comes to some real-estate-damaging Robocop-style face-off.

Furthermore, including such emotional and sensory cues in a robot's data-feed provides seeds for the crystals of future robotic forms of spiritualism and belief to form around - or, as mentioned in an idea elsewhere in this place, for them to potentially mess with to get high (or drunk, depending on your preferred adjective).

In my experience with robots (and that experience is slightly more than you might imagine), they are very easy to disable. For example, failing to place everything exactly where the robot expects to find it will generally disable it, often expensively.

Disable a robot? Just remove the battery.

Maybe hardcode a code word?

What if their programming crashes and they simply continue doing whatever task loop they're caught in, until they do something stupid like drive off a cliff? The halting problem seems like impending catastrophe. No?

The really hard thing about AI will be trying to prevent foreign interlopers from using our AI against us. Say we program our cars to do the driving. If there were some shutdown maintenance emoji or a hardcoded maintenance-mode word or somesuch, then enemies could shut down our entire grid of AI rather easily.

//place everything exactly where the robot expects to find it//

//programming crashes and they simply continue doing whatever task loop they're caught in//

See, these two types of "robotic" behaviour suggest what we might consider "Type 1" robots - that is, robots whose functioning is traditionally programmed. These robots are hard-coded to operate and can only sensibly function under an extremely narrow set of working conditions.

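That brittleness can be sketched in a few lines - a hypothetical "Type 1" pick routine with its world model hard-coded, so that a part moved one cell over "disables" it. The bin coordinates and part names are invented for illustration.

```python
# Toy sketch of a "Type 1" robot: behaviour is hard-coded, so any
# deviation from the expected world state stops it cold.
# The layout and names here are invented for illustration.

EXPECTED_BIN = (4, 7)   # where the part is *supposed* to be

def pick_part(world):
    # Hard-coded logic: look in exactly one place, with no fallback.
    if world.get(EXPECTED_BIN) != "part":
        raise RuntimeError("part not at expected location; halting")
    world[EXPECTED_BIN] = None   # pick it up
    return "part"

# With everything exactly where the robot expects it, all is well.
well_arranged = {(4, 7): "part"}
assert pick_part(well_arranged) == "part"

# Move the part one cell over and the robot is "disabled":
rearranged = {(4, 8): "part"}
try:
    pick_part(rearranged)
    disabled = False
except RuntimeError:
    disabled = True
assert disabled
```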
Contrast this with more modern "Type 2" robots, which make decisions not based on programmed logical flows but are instead taught using vast sets of training data. These robots are capable of machine vision, self-guidance, obstacle avoidance, language translation and a bunch of other diverse things - the thread linking them all together being that they aren't programmed, they are taught.

It's for these types of complex pattern recognisers that we have yet to construct a set of practical working best-practices. If a computer program fails, you debug it; but if a neural net settles into an unwanted set of behaviours, short of downloading gigabytes of net-weightings and stirring them around a bit, there's not much that can be done, debugging-wise.

If instead we embed control commands into the training process, we gain the ability to create debugging-type behaviours, but at a higher level. It's interesting because it suggests that, instead of debuggers, these Type 2 robots are more likely to require a kind of therapy or psychology.

The idea posits a robot psychologist using some kind of Rorschach icon to induce a mesmer-like or altered state of operation.

Yes, care would have to be taken to reduce possible hacking scenarios, just like with any other technology.

Only one of them would need therapy. The rest could simply download the results.

Ooh, there's a book plot in there somewhere. Feels Asimov-like.

The displayed zigzag moiré emoji to ward off marauding AI reminds me of the use of Elder Symbols against shoggoths.