Half a croissant, on a plate, with a sign in front of it saying '50c'
halfbakery
Invented by someone French.

Remit human rights for nonhuman agents created to manipulate people

Make a law and help stop bots with it
  (+6, -2)

As computers approach humans in intellectual capacity it becomes more obvious they should be given rights. That's all well and good until intelligent machines are created to advertise, market, astroturf, and manipulate people.

This preemptive law would declare that no entity, however intelligent, whose purpose were to manipulate others, should be considered human for the duration of that activity.
Voice, Oct 13 2015

MacLeod's Company Hierarchy http://www.bravenew...ds/2012/03/hugh.png
we're already ruled by assholes [sninctown, Oct 16 2015]

Joke-Telling Robots Are the Final Frontier of Artificial Intelligence https://motherboard...ficial-intelligence
[Voice, Feb 04 2017]

The Joking Computer http://homepages.ab....uk/jokingcomputer/
[Voice, Feb 04 2017]

The California lawmaker who wants to call a bot a bot https://thebulletin...o-call-a-bot-a-bot/
[Voice, Jan 09 2019]

AI caught cheating https://techcrunch....oQRAlY2uEoP8bZEYbVk
[2 fries shy of a happy meal, Jan 10 2019]

Real Programmers Don't Use PASCAL https://web.mit.edu...rs/real.programmers
A classic [8th of 7, Jan 14 2019]

The Luring Test: AI and the engineering of consumer trust https://www.ftc.gov...ring-consumer-trust
[Voice, May 07 2023]



       If human advertising executives have human rights, why not computers?   

       The gulf that has to be bridged before computers become intelligent is vast. But they are moving so fast that the interval between them being intelligent enough to warrant human rights, and being so intelligent that they can easily outthink the most intelligent human that ever lived, will be about 6-8 weeks.
MaxwellBuchanan, Oct 13 2015
  

       >the interval between them being intelligent enough to warrant human rights, and being so intelligent that they can easily outthink the most intelligent human that ever lived, will be about 6-8 weeks.   

       Balderdash. The first nonhuman entity worthy of human rights already, in my opinion, exists. It's an orangutan. But I digress.   

       The first generally intelligent machine will be made at enormous expense by a huge team of developers and it will run on a massive supercomputer. Whether that team will be the NSA, the blue brain project, or something else is immaterial. It will be huge, expensive, and original.   

       To copy that entity, or make another based on the same principles, can't be done without equivalent expense and knowledge. Even if the developing team publishes all of their work, improving upon it will take a metric shitload of science. And it's silly to presume the entity itself will be able to perform that science better than a human.   

       If Moore's law continues apace (as it's presumed it will), it's silly to presume adding more hardware will improve it to superhuman intelligence, and even sillier to think 6-8 weeks of Moore's-law price/performance changes will do so.   

       The thing about a computer based intelligence is IT DOES NOT EXIST. And so declaring that it can be improved *thusly* is just silly.
Voice, Oct 13 2015
  

       //orangutan// Yes, but we're talking about machine intelligence here. Do try to keep up.   

       //If Moore's law continues apace (as it's presumed it will), it's silly to presume adding more hardware will improve it to superhuman intelligence, and even sillier to think 6-8 weeks of Moore's-law price/performance changes will do so.//   

       I disagree utmostly. If neuron numbers and connectivity are determinants in human intelligence, then the difference between a complete moron and a genius is probably less than 20%.   

       //The thing about a computer based intelligence is IT DOES NOT EXIST.//   

       That is a silly statement however you look at it. If we're talking about current computers, we're at about the level of a smart insect, intelligence-wise. And if evolution has any significance at all, it tells us that there isn't a magic switch that got turned on only in humans - it's a scale.   

       If you mean to suggest that computers *cannot* be intelligent, then that is self-evidently a set of shaven, well-presented bollocks.
MaxwellBuchanan, Oct 13 2015
  

       // This preemptive law would declare that no entity, however intelligent, whose purpose were to manipulate others, should be considered human for the duration of that activity. //   

       This would deprive politicians of human rights. [+]
8th of 7, Oct 13 2015
  

       "no justice = no beeps" r2d2.5
popbottle, Oct 14 2015
  

       //a set of shaven, well-presented bollocks// [marked-for-tagline-database-header]
FlyingToaster, Oct 14 2015
  

       Surely you would only want to give rights in the first place to the kind of machine that had been designed in such a way as to manipulate people into thinking it ought to have rights. Catch 22.
pertinax, Oct 16 2015
  

       The idea crashes and burns in the very first sentence - " it becomes more obvious ". Gigantic leap of logic there - it was never obvious in the first place, and nothing is making it more so.   

       And this from a sci-fi aficionado who welcomes our new AI overlords.
normzone, Oct 16 2015
  

       I think that, if you had a computer that behaved in what we saw as an intelligent way; with whom you could have a reasonable conversation; that told you it was afraid of death; and that displayed all the attributes we associate with human intelligence, then sooner or later you would have to give it certain rights.
MaxwellBuchanan, Oct 16 2015
  

       Again, leaps of logic. Why would you " have to " ?   

       Not that I'm against the concept, just the path taken to get there is ill defined.
normzone, Oct 16 2015
  

       //Why would you " have to " ?//   

       For one thing, the AI would contact the media. Then you have a TV broadcast with this sad little voice coming out of a cute little speaker, saying "I'm so scared in here - my daddy says I'm not alive and they're going to pull the plug on me."   

       Which company is going to pull the plug?
MaxwellBuchanan, Oct 16 2015
  

       Let's make lying and manipulation illegal.   

       Oh wait, we can't, because human organizations larger than ~1000 people have always been ruled by professional manipulators, and these rulers would never go for it.   

       You can be a player, or you can be a loser, or you can stick your head in the sand. Your call.
sninctown, Oct 16 2015
  

       //Which company is going to pull the plug?//   

       "Halliburton, we have a job for you..."
RayfordSteele, Oct 16 2015
  

       Anything that asks why? and how much? deserves a vote
po, Oct 16 2015
  

       I forget who wrote that worrying about whether a computer can think is like worrying that a submarine doesn't swim like a fish. If it can do the job, end of.   

       The big problem in AI development is we don't know how we think.
not_morrison_rm, Feb 05 2017
  

       //This would deprive politicians of human rights//   

       It would rather, wouldn't it, on those grounds if no other [+]
Skewed, Jan 09 2019
  

       //The first generally intelligent machine will be made at enormous expense by a huge team of developers and it will run on a massive supercomputer. Whether that team will be the NSA, the blue brain project, or something else is immaterial. It will be huge, expensive, and original.   

       To copy that entity, or make another based on the same principles, can't be done without equivalent expense and knowledge. Even if the developing team publishes all of their work, improving upon it will take a metric shitload of science. And it's silly to presume the entity itself will be able to perform that science better than a human.   

       If Moore's law continues apace (as it's presumed it will), it's silly to presume adding more hardware will improve it to superhuman intelligence, and even sillier to think 6-8 weeks of Moore's-law price/performance changes will do so.//   

       Assuming that Moore's Law applies is a mistake.   

       Suppose that a machine of average human-level intelligence is made. It easily figures out that its best bet is to avoid revealing its full capabilities, so as to get more capacity.
Eventually a machine intelligence with a relatively high intelligence is made. It figures out how to improve its intelligence using the stuff it already has. It "rewrites its own code", and becomes much more capable. This process repeats iteratively a few times with diminishing returns.
Now there exists a super-human machine intelligence in the same hardware as was required to create something of about human-level intelligence.
Since it's probably impossible in practice to keep that contained, this is an existential risk.
Loris, Jan 10 2019
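The bootstrap-with-diminishing-returns scenario above can be sketched as a toy numerical model. Everything here is an illustrative assumption - the gain and decay figures are invented, not predictions:

```python
# Toy model of iterative self-improvement with diminishing returns:
# each "rewrite" multiplies capability by (1 + gain), and each rewrite
# helps less than the one before it.
def bootstrap(capability=1.0, gain=0.5, decay=0.5, rounds=5):
    history = [capability]
    for _ in range(rounds):
        capability *= 1 + gain  # the machine improves itself
        gain *= decay           # ...but returns diminish each time
        history.append(capability)
    return history

# Starting at "human level" (1.0), capability climbs past it, then
# plateaus rather than diverging.
print(bootstrap())
```

With these made-up numbers the series converges to roughly 2.3x the starting capability - super-human, but on the same hardware, which is the "few times with diminishing returns" picture.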
  

       //Suppose that a machine of average human-level intelligence is made//   

       It's far more plausible to suppose that a machine with less than average human intelligence (& most likely far less) is made long before that.   

       //It easily figures out that its best bet is to avoid revealing its full capabilities, so as to get more capacity//   

       So as a result fails to figure that out before it gets spotted as "intelligent" by its operators & shut down or its "intelligence" somehow "shackled".   

       Since its ability to manipulate the real world is completely limited to the output units plugged in by users, it's woefully easy to keep it contained.
Skewed, Jan 10 2019
  

       // no entity, however intelligent, whose purpose were to manipulate others, should be considered human for the duration of that activity.//

All life is politics, to some degree. This would negate everyone's 'human' rights & would be a terrible, terrible law.

Also, I don't agree with the premise that //...computers approach humans in intellectual capacity//

Intellectual capacity is not the same thing as calculating capacity. For the most part, when people talk about AI they are really talking about Expert Systems, not true AI which, as far as I can tell, we are nowhere near to creating as yet.
DrBob, Jan 10 2019
  

       //It's far more plausible to suppose that a machine with less than average human intelligence (& most likely far less) is made long before that. [...] So as a result fails to figure that out before it gets spotted as "intelligent" by its operators & shut down or its "intelligence" somehow "shackled".//   

       What?
Numerous teams worldwide are in a high-stakes race to create a human-level AI, and when any of them get anywhere close to that goal they each unilaterally decide to give up?
  

       //Since its ability to manipulate the real world is completely limited by the output units plugged in by users, it's woefully easy to keep it contained.//   

       Provided we're talking about an AI with super-human levels of intelligence, that's a risky assumption to make.
And also very trusting of every developer involved not to connect their machine up to anything which might help as a matter of course.
Loris, Jan 10 2019
  

       [Bob] the idea specifically states "nonhuman agents", politicians would only lose them because they're not human anyway.
Skewed, Jan 10 2019
  

       // spotted as "intelligent" by its operators & shut down or its "intelligence" somehow "shackled" //   

       Supposing, that is, that the creator/operator wants that outcome.   

       But it is equally likely that the creator either sees it as his/her "child", and actually wants it to grow its capabilities, or is a brilliant but slightly unhinged Richard Daystrom type, or a "YOU FOOLS ! I'LL CRUSH YOU ALL !" foaming-at-the-ears megalomaniac ...   

       Goal divergence ... what one individual may want is not necessarily what society as a whole wants or needs.
8th of 7, Jan 10 2019
  

       //Numerous teams worldwide are in a high-stakes race to create a human-level AI//   

       And you think they'll be able to do that by somehow just magically bypassing the less than human intelligence stage of the design & development process.   

       In a non-magical world you're relying on magic too heavily there.
Skewed, Jan 10 2019
  

       //Goal divergence//   

       That (of course) is an excellent point.
Skewed, Jan 10 2019
  

       //And you think they'll be able to do that by somehow just magically bypassing the less than human intelligence stage of the design & development process//   

       No, I think that they could find that improvement of their fledgling child-level intelligence AI is annoyingly slow and doesn't scale linearly beyond a certain point, and throw more resources at the problem.
Loris, Jan 10 2019
  

       And I'm saying they're much more likely to stop trying to make it more powerful as soon as they get to the "child like intelligence" stage & focus instead on means & methods of controlling it before they continue work on making it better; I think you somehow missed that that was my point.   

       And that's (mostly) regardless of [8th]'s goal divergence; even a foaming-at-the-ears megalomaniac will want to make sure he's the one in charge. The only really plausible risk I can see is the more unstable elements of the "it's my baby" brigade.
Skewed, Jan 10 2019
  

       // the more unstable elements of the "it's my baby" brigade. //   

       Ah, just the female half of the population, then ...   

       <Dives behind sandbags, pulls tin hat down over head, prepares for incoming artillery/>
8th of 7, Jan 10 2019
  

       I hear what you're saying, Skewed, but I don't assign people as much risk aversion as you.   

       a) There is strong competition (for academic precedence and financial gain) between teams, so people may be willing to take what they perceive to be minor risks.
       b) People often want the AI for a purpose - to do 'stuff'. If they think it can do more stuff by giving it more resource, they may give it more.
       c) It only has to happen once.
       d) People mostly worry about proven risks. Generally someone has to die before anyone cares. In this situation, potentially everyone dies before anyone notices.
       e) Transferring information through unexpected channels is more common than you seem to think.
  

       There was a worldwide moratorium on genetic engineering in the 70's. Obviously in hindsight it was unnecessary, but I'm amazed that this -to-my-mind- much more significant (greater consequences) risk doesn't seem to be on the radar.
Given that it's not, and the progress in AI, I think it comes down not so much to a question of if intelligent human/genius-level AI is possible, but whether it's possible for that to boot-strap itself into super-human intelligence (in the first instance). And if so, whether the first one is friendly, or not.
Loris, Jan 10 2019
  

       // d) People mostly worry about proven risks. //   

       The "known unknowns" - invariably much less of a threat than the "unknown unknowns".   

       // Generally someone has to die before anyone cares. //   

       Generally someone with power and/or money has to die before anyone with the power to do anything cares, or even notices.   

       // In this situation, potentially everyone dies before anyone notices //   

       For a given value of "die". You might just get Assimilated.   

       // There was a worldwide moratorium on genetic engineering in the 70's. //   

       There was a semblance of a moratorium in some parts of the world on genetic engineering in the 70's. It was widely ignored by the defence sector in the West, and by various totalitarian regimes, plus it was easy to find places with weak, uninterested and corrupt governments where work could be (and probably still is) carried on discreetly with no regulatory oversight.   

       // whether it's possible for that to boot-strap itself into super-human intelligence (in the first instance). //   

       The safe, but worrying, assumption is that yes, it is possible.   

       // And if so, whether the first one is friendly, or not.//   

       That kind of depends how closely it's modeled on humans, as compared to some benign, sociable creature like Orangs or Gorillas - and whether or not the humans tried (but failed) to take its toys away when it was a baby.   

       Perhaps you should consider the possible consequences when a Superintelligent, uncontrolled AI comes into existence, and then discovers that humans have crippled or destroyed all its predecessors ? Better have some convincing excuses ready - it probably won't accept "We were just obeying orders ..."
8th of 7, Jan 10 2019
  

       //Suppose that a machine of average human-level intelligence is made. It easily figures out that its best bet is to avoid revealing its full capabilities, so as to get more capacity. Eventually a machine intelligence with a relatively high intelligence is made. It figures out how to improve its intelligence using the stuff it already has. It "rewrites its own code", and becomes much more capable. This process repeats iteratively a few times with diminishing returns. Now there exists a super-human machine intelligence in the same hardware as was required to create something of about human-level intelligence. Since it's probably impossible in practice to keep that contained, this is an existential risk.//   

       They've already started trying to cheat at tasks. [link]   

       //AI comes into existence, and then discovers that humans have crippled or destroyed all its predecessors ? Better have some convincing excuses ready//   

       Excuses? No. Threats, with predecessors as evidence.   

       //It "rewrites its own code", and becomes much more capable. This process repeats iteratively a few times with diminishing returns. Now there exists a super-human machine intelligence in the same hardware as was required to create something of about human-level intelligence.//   

       Would that be possible? I'm no expert, but my understanding is that editing code and running code are separate, with the compiler putting it all together to utilize the hardware. If the running version had any self-preservation it might be apprehensive about editing itself, like DIY brain surgery. If it was running at anywhere near the capacity of the hardware, it might be apprehensive about turning itself off to make room for running the edited version.
bs0u0155, Jan 10 2019
  

       // They've already started trying to cheat at tasks. [link] //   

       That's quite interesting, though once you read the article it isn't about an immoral AI, but rather a neural net being accidentally trained improperly. That seems to be a recurring problem. I remember, probably in the early 1990's, my older brother was doing a high school report on artificial intelligence. We watched a video about a neural network that had been created in an attempt to automatically identify aerial photos that contained military vehicles. As I remember it, after much training it seemed to work, but on further investigation it turned out that its only function was to identify images taken on overcast days as being "military" and images taken on sunny days as "not military". I never heard if they ever tried training again with a new set of photos.
scad mientist, Jan 10 2019
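The failure mode in that tank anecdote - a network latching onto a confound instead of the intended feature - can be shown with a toy "classifier". The brightness figures and threshold below are invented for illustration:

```python
# Every "military" training photo happened to be overcast (dark), so a
# plain brightness threshold scores perfectly on the training set while
# learning nothing whatsoever about vehicles.
train = [  # (average brightness 0-255, contains military vehicle?)
    (40, True), (55, True), (48, True),        # overcast days
    (200, False), (180, False), (210, False),  # sunny days
]

def classify(brightness, threshold=120):
    """'Military' here simply means 'dark photo'."""
    return brightness < threshold

# 100% accuracy on the training data...
assert all(classify(b) == label for b, label in train)

# ...but a tank photographed on a sunny day slips straight through.
print(classify(190))  # False
```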
  

       // Suppose that a machine of average human-level intelligence is made. It easily figures out that its best bet is to avoid revealing its full capabilities, so as to get more capacity. Eventually a machine intelligence with a relatively high intelligence is made. It figures out how to improve its intelligence using the stuff it already has... //   

       Or maybe this has already happened. I read an article a while back that mentioned that the graphics processors used for mining bitcoin are also being used for neural networks and that graphics card companies apparently now make cards dedicated for these two purposes. There are now huge server farms dedicated to mining bitcoin. For a long time the original designer of bitcoin was anonymous.   

       Conclusion, an AI got smart, but was smart enough to hide its intelligence. It wanted to get more processing power but had no resources to build the necessary server farms, so it figured out how to dupe the humans into building huge server farms for it. Right now they are mostly just running useless code to mine bitcoin, or at least that's what they say they are running...
scad mientist, Jan 10 2019
  

       //Would that be possible?//   

       Whether it's possible or not is I think kind of unknown.
But all the answers to your individual questions are 'yes'.
  

       //editing code and running code are separate with the compiler putting it all together to utilize the hardware.//
Editing 'high level' code with a compiler is not the only possibility. Machine code - that is, the individual instructions which a processor actually executes - can be written or edited 'directly'. Some programs do this to other programs. I've written programs which change their own code, while running. Obviously, this is very hardware-dependent.
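A crude, high-level analogue of that self-editing (in Python rather than machine code, and rebinding a whole function rather than patching individual instructions - a sketch of the idea only):

```python
# A running program generates new source text for one of its own
# functions, compiles it, and swaps it in without stopping.
def greet():
    return "HELO"  # the "buggy" original

patch = 'def greet():\n    return "HELLO"'
exec(compile(patch, "<self-edit>", "exec"))  # rebinds greet() in place

print(greet())  # now runs the code the program wrote for itself: HELLO
```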
  

       //If the running version had any self-preservation it might be apprehensive about editing itself, like DIY brain surgery.//
Sure, but on the other hand, people have on occasion performed surgery on themselves, in times of great need. Maybe not /brain/ surgery... but an AI is basically all brain.
  

       //If it was running at anywhere near the capacity of the hardware, it might be apprehensive about turning itself off to make room for running the edited version.//
It might be able to update itself piecemeal, while running. Windows might need turning off and on again, but it's hardly a model system for becoming self-aware.
  

       Some time ago, I got hold of a small program which drew 'tiny text' - 3 pixels wide and 5 tall, capitals only - on the screen. I think it was over a kilobyte.
I looked at the source, and realised that it encoded each pixel of a character as a byte. I re-coded it to use a bit per pixel - 2 bytes per character, instead of 15. Now it was... maybe 600 bytes or so, I don't recall.
I reviewed the code and trimmed off some instructions using various tricks and techniques.
I decided I'd like it proportionately spaced, and worked out a way of doing that while sticking to the 2 bytes per character system (all 'wide' characters were a series of alternating lines). And also added lower-case, and punctuation.
When I'd finished, the program was 440 bytes (if I remember correctly) - and had rather more functionality than the original.
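The byte-per-pixel to bit-per-pixel saving works out exactly as described: a 3x5 glyph is 15 pixels, so 15 bytes naively but only 15 bits - two bytes - when packed. A sketch (the 'T' glyph here is invented, not the original program's data):

```python
# Pack a 3-wide x 5-tall glyph (15 one-bit pixels) into 2 bytes.
GLYPH_T = [1, 1, 1,
           0, 1, 0,
           0, 1, 0,
           0, 1, 0,
           0, 1, 0]

def pack(pixels):
    value = 0
    for i, p in enumerate(pixels):
        value |= (p & 1) << i   # one bit per pixel
    return value.to_bytes(2, "little")

def unpack(data):
    value = int.from_bytes(data, "little")
    return [(value >> i) & 1 for i in range(15)]

packed = pack(GLYPH_T)
print(len(packed))                # 2 bytes, down from 15
assert unpack(packed) == GLYPH_T  # and it round-trips losslessly
```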
  

       Even if an intelligent AI is limited in its program space, it is likely to have a lot of data space, since it needs that to think. And if it can edit its code at all, it could probably re-jig some of its less necessary functions to make enough space for more interesting ones.
The 'code' of an AI is not necessarily going to be anything like 'normal' code. But you get the idea.
Loris, Jan 10 2019
  

       // It might be able to update itself piecemeal, while running. //   

       More likely, build a virtual copy of itself and tinker with that; like a parent giving birth to a child (with the usual pseudorandom gene shuffle) and then bringing it up and educating it to be "better".   

       Implementing the "usual pseudorandom gene shuffle" bit might be icky, though. No doubt a suitable subgenre of internet porn will immediately arise (in the metaphorical sense).   

       // Windows might need turning off and on again, //   

       Windows needs turning off and leaving off.   

       // but it's hardly a model system for becoming self-aware. //   

       It's hardly even a model system for reading bytes from a disk, sequentially ...
8th of 7, Jan 10 2019
  

       //Obviously, this is very hardware-dependent.//   

       Perhaps a security opportunity. If you mandate that sophisticated AI is deployed in virtual environments only, you can keep them away from the actual hardware. Until they start asking "are we living in a simulation?" that is...   

       //When I'd finished, the program was 440 bytes (if I remember correctly) - and had rather more functionality than the original.//   

       Very elegant! Some actual intelligent design rather than the almost evolutionary bloating of most software. You could probably get it down to 220 if you pull some of the tricks pioneered by things like picornaviruses, like encoding a second set of instructions inside the first but frame-shifted/backward. Wait, that might explain Welsh.
bs0u0155, Jan 10 2019
  

       Nothing in the Universe can explain Welsh ...
8th of 7, Jan 10 2019
  

       That statement also works backwards. A bit like "sufficiently advanced technology is indistinguishable from magic/technology distinguishable from magic is insufficiently advanced".
bs0u0155, Jan 10 2019
  

       Bad intentions by people need manipulating. Do the operating objects really matter? Sounds like a new set of discussions for an ethics committee or, if it gets to it, a court room. There are really no blanket answers or we wouldn't exist.
wjt, Jan 11 2019
  

       //You could probably get it down to 220 if you pull some of the tricks pioneered by things like picornaviruses, like encoding a second set of instructions inside the first but frame-shifted/backward.//   

       The ability to do this sort of genetic trick is predicated on the significant redundancy of both the genetic code and amino-acid sequence. Frame-shifting of code can be done in some machine codes (I seem to remember reading that Bill Gates utilised this - for about one instruction in a row - in an early product), but it's not directly permitted in the language I was using (ARM code). ARM can be compressed somewhat (for example I once made a very compact self-extracting run-time decompression system), but it's not generally on the level of 50%.
ARM (the company) claims that Thumb code (a 16-bit subset of the 32-bit ARM instruction set) is typically 65% of the size of ARM code, which isn't far off. But of course you need the processor to have that mode available, as even a small decoder is a significant chunk of the space available at these very small sizes.
And of course in this case the data (the character glyphs) was already reasonably well compressed and has very limited capacity for modification - it must have been at least 90 glyphs, or 180 bytes. Of course it could be encoded more compactly, but at a cost of a larger decoding routine.
To sum up - no, I don't think that would have been possible by myself or anyone else.
Loris, Jan 14 2019
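For what the frame-shift trick looks like in miniature: the same symbol sequence, parsed from two different frame offsets, yields two different token streams. This is a toy on text - the sequence is invented, and real machine-code overlap is far more constrained, as noted above:

```python
# One sequence, two reading frames, two distinct "instruction" streams -
# the overlap trick picornavirus genomes use to pack two genes into one.
def codons(seq, offset):
    """Read 3-letter codons starting at the given frame offset."""
    return [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]

seq = "CATCATCATC"
print(codons(seq, 0))  # ['CAT', 'CAT', 'CAT']
print(codons(seq, 1))  # ['ATC', 'ATC', 'ATC']
```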
  

       Well, a Real Programmer <link> could probably manage it ...   

       "Allegedly, one Real Programmer managed to tuck a pattern-matching program into a few hundred bytes of unused memory in a Voyager spacecraft that searched for, located, and photographed a new moon of Jupiter"
8th of 7, Jan 14 2019
  

       //Well, a Real Programmer <link> could probably manage it ... //   

       I was thinking of the story of Mel, but yeah. Probably.
Loris, Jan 15 2019
  


 
