
Recognising artificial intelligence

  (+5, -4)

What is artificial intelligence? A narrow interpretation is something that replicates human intelligence. But there is an unimaginably large number of other phenomena that might be intelligent, just not recognisable as intelligent by human standards.

The Hutter Prize has been set up in an attempt to solve this problem. The Hutter Prize is based on the theory that efficient compression is closely related to intelligence. The problem with this approach is that you get lots of data compression and not much intelligence.

I propose a slightly different definition: Intelligence is processing information in an efficient manner.

OK, how does this help? It moves away from the data in vs. data out ratio which hasn’t led to AI.

OK, so how do you calculate efficiency? By measuring the amount of heat generated by the computer or neural network.

When a bit of data is erased it dissipates heat; this is Landauer's principle, and 'reversible computing' is built around avoiding it. So I reckon that a computer or neural network that is processing information efficiently will generate less heat than an inefficient information processor (all other things being equal).
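
To put a rough number on that (my own back-of-envelope figures, not part of the original post): Landauer's principle sets the minimum heat for erasing one bit at kT ln 2, about 2.9 zeptojoules at room temperature. A minimal sketch in Python:

    import math

    BOLTZMANN = 1.380649e-23  # J/K

    def landauer_heat(bits_erased, temperature_kelvin=300.0):
        # Thermodynamic floor on the heat (joules) dissipated by erasing
        # `bits_erased` bits; real hardware dissipates vastly more.
        return bits_erased * BOLTZMANN * temperature_kelvin * math.log(2)

    print(landauer_heat(1))    # ~2.87e-21 J per bit
    print(landauer_heat(8e9))  # erasing a gigabyte: still only ~2.3e-11 J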

This has the advantage over the Hutter Prize of being able to *dynamically* test for efficiency of processing information. This also has the advantage of being able to test both analogue and digital systems.

This method could be used when developing/training neural networks. This method moves away from the rigid ‘goal based’ neural network training, which is unlikely to develop real intelligence.

xaviergisz, Jun 09 2007

The "Hutter prize" website. http://prize.hutter1.net/
[jutta, Jun 09 2007]

NewScientist article http://www.newscien...20of%20the%20brain?
Unfortunately a full copy of this article is only available to subscribers. The premise of the article is that the brain is intelligent because it minimises free energy. The proponent of this idea is Karl Friston (do a Google search of his publications if interested). [xaviergisz, Jun 04 2008]

IBM Scientists Measure the Heat Emitted From Erasing a Single Bit http://science.slas...rasing-a-single-bit
[xaviergisz, Mar 11 2012]

(?) Ethical duck-typing http://www.geneseo....mages/medallion.jpg
[mouseposture, Mar 11 2012]

       I'm not convinced that intelligence can be linked to efficiency of data processing - in fact, I'd almost argue that the opposite is the case.   

       Take the human brain: it uses around 20 W; in comparison, your average laptop uses around 15 W - and that includes powering the hard drives, display, cooling fans etc.

       An animal brain (e.g. a mouse's) uses far less - but is probably capable of processing a similar amount of input/output - and by your standard, might (at times) appear to be more intelligent.   

       Now there's probably a vastly disproportionate amount of data processing going on between a brain and your average laptop, but again, the brain performs this using an electro-chemical mechanism, while a PC's processor is likely to be electro-magnetic. I'd venture that the electro-chemical mechanics are less efficient than electro-magnetic bit-switching in the traditional computer.   

       My point being that the mechanics of intelligence (and hence the power requirements) probably aren't proportional to the level of intelligence. It should be possible to create an intelligence that is purely mechanical (steam-powered if needs be) that would be horrendously inefficient. To put it yet another way, it's my belief that intelligence is a matter of organisation, rather than one of efficiency.
zen_tom, Jun 09 2007
  

       I don't think I can agree with either of you -- I think that intelligence isn't terribly measurable at our current technological stage of development, nor even really very definable. The human brain can come to conclusions without following the logical path, which, as far as I know, is completely inconceivable to a computer... but yeah.   

       I don't know enough to really opinionate, so disregarding this anno may be entirely warranted.
CaptainClapper, Jun 09 2007
  

       Artificial Intelligence is no match for natural stupidity.
nuclear hobo, Jun 09 2007
  

       I'm not proposing that power=intelligence. I'm saying this might be useful for comparisons between almost identical neural networks. Also, this is more a thought experiment than a ready-to-test theory.   

       Note: the heat due to the operation of components in an information processor far outweighs the heat dissipated due to erasing information. This thought-experiment is all about that tiny residual amount of heat.

       Imagine a black box (i.e. computer, neural network or other information processor) that accepts inputs and produces outputs.   

       If the input was 10011111 and the output was also 10011111, there would be no erasing of bits of information, thus no heat dissipation due to erasing of bits.

       If the input was 10011111 and the output was 00000000 then there would be 6 (or maybe 8) bits of erasing heat dissipated.   

       If the input was 10011111 and the black box output every second bit, i.e. 1011, only 4 bits of erasing heat are dissipated (this is essentially very lossy data compression).

       So I'm proposing if you had a much more complex black-box and information input, measuring the erasing heat might be useful in developing neural networks.   
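
       A toy version of that accounting (my sketch; the counting rule is just the one the three examples above imply, and the bit alignment is assumed):

       import math

       BOLTZMANN, ROOM_TEMP = 1.380649e-23, 300.0  # J/K, kelvin

       def erased_bits(inp, out):
           # Bits dropped outright count as erased; for equal-length strings,
           # count the 1 -> 0 resets. This reproduces the 6 above, though the
           # author notes 8 is arguable.
           if len(out) < len(inp):
               return len(inp) - len(out)
           return sum(1 for a, b in zip(inp, out) if a == '1' and b == '0')

       def erasing_heat(inp, out):
           # Landauer floor on the heat this black box must dissipate.
           return erased_bits(inp, out) * BOLTZMANN * ROOM_TEMP * math.log(2)

       print(erased_bits('10011111', '10011111'))  # 0 - identity, nothing erased
       print(erased_bits('10011111', '00000000'))  # 6 - reset to zero
       print(erased_bits('10011111', '1011'))      # 4 - every second bit kept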

       This is an attempt to approach AI from a physics perspective.   

       A brain (artificial or real) is like everything else in the universe - it moves on the path of least resistance towards its lowest energy level. So a brain is a form of matter that processes information not because of some innate 'life force', but merely because the path of least resistance is to process information and process it efficiently. The tricky part is designing the brain.   

       As an equation it's something like:
f(information, brain) = intelligence
  

       Where function f is, of course, incredibly complicated.
xaviergisz, Jun 10 2007
  

       bigsleep, I completely agree about the need to hard-wire neural networks to develop AI. See my other idea, "modular neural network" for a possible approach.
xaviergisz, Jun 10 2007
  

       What [zen_tom] said, only much louder. Does that make me more intelligent than [zen_tom]?
zeno, Jun 10 2007
  

       It depends [zeno] on how loud you shout, and what you had for breakfast!   

       //A brain (artificial or real) is like everything else in the universe - it moves on the path of least resistance towards its lowest energy level. So a brain is a form of matter that processes information not because of some innate 'life force', but merely because the path of least resistance is to process information and process it efficiently.//   

       I like this notion of things moving towards their lowest energy level, taking the path of least resistance - but again, isn't life (and as a high-point of that process, brains in particular) something that prolongs that energy drop via the twin loopholes of history and organisation in the laws of thermodynamics?

       In other words, the shortest and most efficient route for a metal ball to get from the top of the machine to the bottom is directly from top to bottom. A suitably constructed (life-analogous) pinball machine stops the ball taking the most efficient route, causing it instead to take a more interesting route, and in doing so, produce the most information possible (in terms of the ball's trajectory) for the same energy drop - Ahh!, which I suppose is kind of what you are talking about. i.e. Using your measure, a more 'intelligent' pinball machine would be one that made the ball bounce around more interestingly than one whose ball followed a direct path. In these terms, the more the ball bounces about, for a given energy drop, the more efficient the machine.

       The problem is defining what it is that's producing the 'information' - e.g. using the last example, you could describe the trajectory of the ball in various ways, all of which might take up the same space, whether it took a straight-route, or a more chaotic one.   

       Then there's the question of linking this concept to intelligence - complexity, certainly; and, I suppose, intelligence too, if you're in the camp who feel that sufficient complexity, having once reached some critical mass, inevitably becomes intelligence.

       But I'm thankful (hence bunnage) to you for making me think about something I've not spent much time on recently - I'm really liking this thing about success being linked to eking out the most complexity for a given (potential) energy drop - it provides a metric for measuring life - which has, since inception, been improving on pinball designs, getting the ball to bounce around in ever increasingly complex patterns, delaying its inevitable drop back into the drain behind the flippers.
zen_tom, Jun 11 2007
  

       hi zen_tom, thanks for reconsidering this idea. I admit I could have expressed the idea more clearly initially.   

       I think what we're talking about (your pinball analogy and my artificial intelligence idea) is the general principle that information, complexity, intelligence and entropy are all somehow interrelated. The trick is to apply this interrelationship to something useful.
xaviergisz, Jun 12 2007
  

       /If the input was 10011111 and the output was also 10011111, there would be no erasing of bits of informations, thus no heat dissipation due to erasing of bits./   

       If no deletion = less heat dissipation = more efficient processing = more evidence of intelligence, would increasing the output produce even less heat dissipation? If the output was the same as the input, but doubled, this would be less heat dissipation yet. As the output increases, heat dissipation decreases. As output increases towards infinity, evidence for intelligence also increases towards infinity.
bungston, Jun 13 2007
  

       hi bungston, you've pointed out an important feature (or is it a flaw?) in the experiment. Yes, the output of the black box must be limited. In fact, in the purest form of the experiment, the black box would have no outputs at all (the black box would have memory and computing components).

       Thus the black-box would have the options of:
a) storing the information uncompressed;
b) storing the information compressed (lossless or lossy);
c) erasing the information;
(or a combination of these options)
  

       Of course this raises a really important question: would more heat be dissipated by calculating and compressing than by simply erasing the data?
xaviergisz, Jun 13 2007
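
One way to put rough numbers on that question (my sketch: the Landauer floor stands in for the erasure cost and zlib for a lossless compressor; a real compressor's computation dissipates far more than the floor, which is exactly the question being asked):

    import math, zlib

    K, T, LN2 = 1.380649e-23, 300.0, math.log(2)

    def erase_cost(bits):
        # Minimum heat (joules) to erase `bits` bits.
        return bits * K * T * LN2

    data = b"the quick brown fox " * 100       # 2000 bytes of redundant input
    n_bits = len(data) * 8
    kept_bits = len(zlib.compress(data)) * 8   # option b: lossless compression

    print(erase_cost(0))                   # option a: store uncompressed
    print(erase_cost(n_bits - kept_bits))  # option b: only the redundancy is erased
    print(erase_cost(n_bits))              # option c: erase everything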
  

       Another thing I should emphasise about this idea is that it is a 'calculus of variations': it is about taking a 'brain', making small variations, and testing the heat output for a particular input.

       In maths terms it's something like:

       minimum erasing heat( f(information, brain+delta1), f(information, brain+delta2), ... ) = maximum intelligence   

       where delta1, delta2 etc are small variations to the brain, and maximum intelligence is a local maximum.
xaviergisz, Jun 13 2007
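
In code, that search might look something like the sketch below (purely illustrative; `perturb` and `erasing_heat` are hypothetical stand-ins for whatever mutation operator and calorimetry you actually have):

    def select_by_heat(brain, information, perturb, erasing_heat, n_variants=10):
        # One step of the search: generate small variations (delta1, delta2, ...)
        # and keep whichever dissipates the least erasing heat on `information` -
        # the post's proxy for a local maximum of intelligence.
        candidates = [perturb(brain) for _ in range(n_variants)]
        return min(candidates, key=lambda b: erasing_heat(b, information))

    # Iterate to climb towards a local optimum:
    # for _ in range(1000):
    #     brain = select_by_heat(brain, information, perturb, erasing_heat)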
  

       Personally when in doubt, refer to the three laws. If anyone didn't get that, go read some Asimov.
punk_punker, Jun 13 2007
  

       I always thought the brain to body mass ratio was the determining factor.
quantum_flux, Jun 13 2007
  

       pipedream.
WcW, Jun 04 2008
  

       //I always thought the brain to body mass ratio was the determining factor.//   

       That belief may require drastic revision. Chihuahuas have the largest brain to body mass ratio out of all dogs... and their brain to body mass ratio is far greater than ours as well. I wouldn't venture to consider them particularly intelligent though.
ye_river_xiv, Jun 04 2008
  

       Everyone's skirting around the key issue here, so I'll just say it: the real trick is not in recognizing intelligence in another form, but self-awareness. Intelligence (IMHO) is tied into reasoning ability, and true reasoning takes self-awareness.   

       My dogs are self-aware: when they look at a mirror, they recognize themselves. If I put a spot of paint on my dog's forehead and hold up a mirror, he will look at his reflection and then rub his face with his paw. I performed this experiment after reading about the same thing being done (equally successfully) with dolphins. For the record, my dogs are American Pit Bull Terriers, which are, in spite of their undeserved notoriety, renowned as one of the most intelligent breeds. I have also seen my dogs use tools: they occasionally use a stick gripped in the mouth to scratch that hard-to-reach spot on their butt, and I once observed three of them working together to build a pile of rocks high enough that they could stand on top of it to see over the fence (apparently for the purpose of barking at my neighbors).

       My point is that abstract reasoning and goal-oriented behavior are far more reliable signs of intelligence than sheer capacity for logic. I wonder how capable we are of recognizing such things in a non-biological entity, or, for that matter, whether it would recognize the same traits in us.

       I have met many chihuahuas, and not one of them recognizes themself in the mirror.
Alterother, Jun 06 2008
  

       //If put a spot of paint on my dog's forehead and hold up a mirror, he will look at his reflection and then rub his face with his paw.// I don't believe that but, if it's true, you could convince me by publishing it in a peer-reviewed journal. It would be quite significant.
MaxwellBuchanan, Jun 06 2008
  

       I like [zen_tom]'s concept of a qualitative measure of intelligence and/or life*, and the definition that this can be measured based on the extent to which it can create a detour between start and end points for a given system.   

       It reminds me of Richard Dawkins's point about an elephant or a human simply being an extraordinarily complicated gene replication machine, the entire life of the human or elephant just being a byproduct or digression in the process.   

       Also (if I've understood the theory, which I doubt) this approach supports the insight that the most intelligent and complex thing in the universe is the universe... because the whole thing is just a detour between one state of non-existence and another.

       *it occurs to me for the first time that there may be a simple logical relationship between these two concepts, and that it relates to the idea that you don't have a binary state for either, but a qualitative one ... something is not 'intelligent or not' but 'how intelligent'. Something is not 'alive or not' but 'how alive'.   

       Interesting ... must chew on that.
kindachewy, Jul 10 2009
  

       The distinction between life and unlife is replicability. If it can be exactly replicated then it isn't alive. This is so primal a notion that when cloning was initially proposed people FELT that the clones would be soulless.   

       We know that a computer is not alive because it can be made to "reboot", to replicate itself identically. No matter what mode of randomness is added after that fact the essential nature of artificial intelligence is that it can be reproduced.   

       As we begin to produce systems of a unique nature - processors that cannot be reproduced - we will come to recognize them as "living" and value their intelligence as "real".

       Determining if an intelligence is "real" or "artificial" is a pointless distinction. If I want a great read, I don't go to my pocket calculator; I go to the book store, where I find books. I don't care who wrote them. When I want a simple math problem solved I go to a calculator; I don't care what material the processor is made out of as long as the answers are useful.
WcW, Jul 10 2009
  

       //The distinction between life and unlife is replicability. If it can be exactly replicated then it isn't alive.//   

       Amusingly, almost the opposite of this is true.
Loris, Jul 10 2009
  

       Reproduction and replication are different. Reproduction may attempt replication, but it never actually achieves it.
WcW, Jul 10 2009
  

       There are lots of things which replicate asexually and have small enough genomes to make copies without incorporating errors most of the time.   

       Whether living things attempt to minimise or optimise mutations is interesting, but it isn't part of any reasonable definition of life.
Loris, Jul 10 2009
  

       Obligatory Kung Pow quote:   

       "Hey, I know you!"
normzone, Jul 10 2009
  

       [WcW] seems to be missing my point a little - although maybe not directly responding to it. I'm a little unclear on that. However, assuming it was a response, I'd like to try to clarify.

       We generally feel, from a commonsensical viewpoint, that something is 'alive' or 'not alive'. Similarly we feel that something is 'intelligent' or 'not intelligent'.

       Humans perceive the world by creating binary black/white (digital) divisions over a 'shades of grey' reality.   

       We classify things, and often our classifications cause us to see - and interpret - things in a way that is distorted by those very classifications.   

       [That is not philosophical speculation, btw ... accepted and documented reality]   

       What if 'life' and 'intelligence' actually relate to ranges rather than unique states? [That *is* philosophical speculation!]

       What if we could be 'more alive' than we are? What if there are other levels of intelligence, or consciousness, that are outside of our direct experience - like ultraviolet or infrared light - and which we therefore don't recognise unless we search for tools and approaches which will reveal them to us.   

       Things to ponder for a Friday night ...
kindachewy, Jul 10 2009
  

       // What if we could be 'more alive' than we are? //   

       You can be..... come, join us .... don't be afraid .... you know you want to ..... resistance is futile ..... you'll wonder why you ever hesitated ....
8th of 7, Jul 10 2009
  

       [-] By my definition, artificial intelligence has a self-emergent quality about it. Computers can do many intelligent things, but I would classify very few as "artificial intelligence / human intelligence".

  

       Self-emergence is very computationally inefficient. For example, you can program a swarm of artificial bees to find and collect pollen in a virtual environment by programming them to follow only one simple rule: follow the gradient of the scent marks of your fellow bees. Because there is only one rule to follow, the bees have to rely on numbers and patience to collect all the pollen. You could give the same problem to an engineer, and they'll calculate the most efficient way for N bees to find and collect pollen, while having them communicate by better means than lousy scent marks. They'll blow the AI bees out of the water.

Pure math is usually (always?) more efficient than nature. Nature may take the path of least resistance, but due to evolution, there are inefficiencies left in the behavioral pattern. However, what may appear inefficient on the surface may have its purpose. Back to the bees example: it may be more effective to give the bees a complex communication protocol and centralize their operation. But what if a component of that complex system breaks down? Will it break down gracefully, as is the case with the less efficient scent-based swarm approach, or will it break down catastrophically?

Intelligence in nature may be very inefficient, but it is extremely resilient and persistent. That's what computers are missing.
ixnaum, Jul 11 2009
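
The one-rule bee described above is easy to sketch (my toy version; the grid, the scent field and every number here are made up):

    import random

    GRID = 20
    scent = {(x, y): 0.0 for x in range(GRID) for y in range(GRID)}
    scent[(15, 15)] = 10.0  # a returning bee has already marked a pollen site

    def step(pos):
        # The single rule: move to the neighbouring cell with the strongest
        # scent, ties broken at random - which is most of the time, so the
        # bee mostly wanders. Hence the reliance on numbers and patience.
        x, y = pos
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < GRID and 0 <= y + dy < GRID]
        best = max(scent[n] for n in nbrs)
        return random.choice([n for n in nbrs if scent[n] == best])

    bee = (0, 0)
    for _ in range(500):
        bee = step(bee)
        scent[bee] += 0.1  # each visit reinforces the trail for the others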
  

       //By my definition, artificial intelligence has an self-emergent quality about it//   

       OK, we have different definitions of intelligence.   

       One of the difficulties in developing artificial intelligence is determining/defining the 'end goal'. I have provided one possible definition. I'm not saying it's necessarily correct, but I think it is worth considering.   

       I agree that the qualities you have ascribed to intelligence such as resilience, persistence and self-emergence are useful and important to 'natural intelligence'. However, these qualities are not easily quantifiable and do not help in developing a 'testable' definition of intelligence.
xaviergisz, Jul 11 2009
  

       //However, these qualities are not easily quantifiable and do not help in developing a 'testable' definition of intelligence.//   

       You are right, I didn't provide an alternative. However, testing efficiency as you suggest won't help either. Efficiency may be related to intelligence, but it's not the same thing.
Take sorting algorithms for example. Your test would score "bubble sort" as less intelligent than "qsort" even though both sorting algorithms arrive at the exact same output given the identical input.
I'm not saying that efficiency isn't important - it's good to be efficient. But in the end it doesn't tell you anything about the intelligence of the black box doing the calculation. It is a good indicator of the intelligence of the programmer who invented the algorithm, though.

Or maybe your argument is that by being efficient, you save CPU cycles. That way we arrive at the critical point on the Moore's law curve where a CPU can emulate a human brain. I don't know about that. We don't know how it's supposed to work yet, so optimizing now and making the algorithms efficient won't pay off. Back to the sort example... let's say your goal is to achieve sorting, and you just can't crack that programming challenge. Starting with a simple-to-understand but really slow algorithm would be more productive than trying to optimize an algorithm that doesn't even sort properly yet.
ixnaum, Jul 13 2009
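
The sorting point is easy to make concrete (my sketch; comparison counts stand in here for dissipated heat):

    import random

    def bubble_sort(xs):
        # Same output as sorted(), vastly more work on large inputs.
        xs = list(xs)
        comparisons = 0
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                comparisons += 1
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs, comparisons

    data = random.sample(range(10000), 1000)
    result, n = bubble_sort(data)
    assert result == sorted(data)  # identical output for identical input...
    print(n)                       # ...after ~500,000 comparisons, versus
                                   # roughly 10,000 for an efficient sort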
  

       /That all points to an AI /   

       I propose that artificial intelligence be recognized with a pointing maneuver similar to that employed by Donald Sutherland's character when it recognizes a non-pod person.
bungston, Jul 13 2009
  

       I like this idea. While certain spambots have made a joke out of the RuPaul Turing test, and certain computers can do very complicated things, we still rarely consider them intelligent.

       Similarly, hive insects can do some remarkably organized activities, but we claim them to not be intelligent. Yet cuttlefish are able to learn about glass walls, and we suddenly start praising them as being at least as intelligent as us.   

       Obviously, a new definition of intelligence is needed. I would caution you to be quite careful in how you define intelligence, though. The "intelligent design" crowd have developed a number of theories as to how one might find proof of an intelligent designer. Their "irreducible complexity" and "specified complexity" are somewhat interesting concepts, but so far they tell us more about the biases of their proponents, and their willingness to claim everything is intelligently designed rather than to deny the existence of a designer. If taken seriously, their results have some interesting and far-reaching implications for what is "intelligent" and what is "designed."

       I fear your idea may find itself wandering down this same road if not carefully guided. I can agree that there is a scale of intelligence, and possibly even of life, but I suspect that there must be a zero point for it somewhere.
ye_river_xiv, Aug 16 2011
  

       I think it more likely that the first true AI will 'recognize' us before we recognize it.
Alterother, Aug 16 2011
  

       Betcha the process by which we recognize an AI as intelligent, or sentient, or a person will not, when it finally happens, be logically, philosophically, or mathematically well-founded. It will be more akin to the process by which Africans were recognized as such by Europeans.
mouseposture, Aug 16 2011
  

       Natural Stupidity will always defeat Artificial Intelligence.   

       You call it "Democracy".
8th of 7, Aug 16 2011
  

       Intelligence and sentience are not the same thing. Biological organisms are motivated by a battle for limited resources. The only way that computational intelligence will become analogous to biological intelligence is if it becomes similarly bellicose. The question is one of making sure that the pace of domestication progresses at the same rate as the pace of human dependence. Right now human society could survive (barely) the collapse of computerized technology (say, after the use of an EMP weapon). In the span of the next generation that will no longer be the case. We need real ethical and sociological answers NOW before we become totally dependent. If computational systems can become relatively independent, what prevents them from exploiting the human tendency to become addicted? To fall in love? To believe in the impossible? Even today we know how vulnerable we are to software designed by human software designers. When computers can monitor even our most subtle responses, programs that can take over our lives more effectively than crystal meth would be possible, i.e. inevitable.
WcW, Aug 17 2011
  

       [WcW], please tell us, what part of "Resistance is Futile" don't you understand ?
8th of 7, Aug 17 2011
  

       //I have met many chihuahuas, and not one of them recognizes themself in the mirror.//   

       My daughter's chihuahua recognizes itself in the mirror.   

       Just sayin...   

       Aberrations occur in nature all the time.   

       For the record, I should have mentioned that only two of my three dogs at the time showed that level of intelligence (the smartest of the three is now deceased). The other one is dumber than a bag of hammers and would be far more likely to eat the mirror than recognize himself in it.
Alterother, Mar 11 2012
  

       Nothing to do with the topic of course but...
Have you noticed that the runt of a litter, if it survives, is usually the smartest mutt of any given litter?
I've seen this a few times now and always wondered if it was a general rule of thumb or just some fluke with the litter mates I've gotten to see grow up.
  

       So, about your siblings...
MaxwellBuchanan, Mar 11 2012
  

       I think all this agonising over how we'll recognize artificial intelligence is reminiscent of people who ask "how do I know if I've had an orgasm" - when it happens, you'll know.   

       As we develop smarter computers, it won't really matter whether or when we start to call them intelligent - the only important question will be whether it can do things we want done. Advertisers rather than philosophers will decide when to call something intelligent, but people will just use it if it works.
MaxwellBuchanan, Mar 11 2012
  

       [MB] I favor the Turing-duck approach too, but does it work when morality enters the picture? At some stage, it'll be necessary to decide whether AIs have rights, can own property, or, at the very least, whether we should avoid hurting their feelings.

       The important question about [Alterother]'s dog is not whether it wipes the paint off when it sees itself in the mirror, but whether it fails to do so when it does not. Now, that would be a very curious incident indeed.
mouseposture, Mar 11 2012
  

       Good point.   

       After this subject came back up today, I dug out the scrap of paper bearing the results of the mirror experiment. According to the record, I repeated the experiment 30 times each with Griz and Rusty, only ten times with Jack. At the time, it did not occur to me to set up a control, and thinking about it now, I have no idea how I could have. Here are the results, collected over a three-day period:   

       Griz wiped the paint off the first time and twenty-two times in total; the last eight times he wiped the paint off before I could get the mirror in front of him.   

       Rusty wiped the paint off nineteen times out of thirty.   

       Jack wiped the paint off zero times out of ten, after which I gave up on him. Rusty licked the paint off of Jack's face eight times out of ten, Griz once, and the last time Jack wore it for the rest of the day.   

       // Have you noticed that the runt of a litter, if it survives, is usually the smartest mutt of any given litter? //   

       My smartest currently living dog (Rusty) was the runt; the aforementioned dumbass (Jack) is his brother and was the first born and largest in a litter of seven. I have observed the 'smart runt' phenomenon before, as has my father, and we hear anecdotes about it frequently. A close family friend claims that the same is true of goats. Our casual theory is that in the very early formative period, runts have to be clever in order to get enough milk; if they don't develop intelligence early, their larger, stronger siblings will force them away from the teat. Smart runts also tend to be more aggressive and territorial; Dad calls it 'runt syndrome.'
Alterother, Mar 11 2012
  

       Not sure why I didn't publish those results with the original post. I was a HB noob, young and foolish, ignorant of the 'Baker's method. Now I am older and still foolish, and still make unsubstantiated claims as a matter of course, but today I do so not out of ignorance but simply because it is my way.
Alterother, Mar 11 2012
  

       //At some stage, it'll be necessary to decide whether AIs have rights, can own property, or, at the very least whether we should avoid hurting their feelings.//   

       History would tend to suggest that no objective measure of intelligence is likely to influence our decision on such matters.
MaxwellBuchanan, Mar 11 2012
  

       //it did not occur to me to set up a control, and thinking about it now, I have no idea how I could have. // The control is to apply water (or the like), which can be felt when applied but not seen.
MaxwellBuchanan, Mar 11 2012
  

       Duh. See? It takes a scientist to think of these sciency things.   

       Still, they didn't rub it off until I showed it to them, except in the end when Griz caught on to the game. I really just wanted to see if it would work with dogs in addition to dolphins, and sought only my own satisfaction with the results.
Alterother, Mar 11 2012
  

       // History would tend to suggest that no objective measure of intelligence is likely to influence our decision on such matters. //   

       True. After all, in some places, your species allows females to drive, own property, even vote ...   

       You're doomed ...
8th of 7, Mar 11 2012
  

       Vote??!! Don't be bloody ridiculous.
MaxwellBuchanan, Mar 11 2012
  

       Sounds like this //'runt syndrome'// phenomenon bears looking into.
A more interesting experiment, if it does prove out, would be to separate litters at birth, with each pup assigned a nurse mother, to rule out the nature-vs-nurture-ness of competition for food and affection leading to intelligence when tested later against their estranged litter-mates.
  

       My own hunch is that the results would not change much, but I guess that would depend on the tests.   

       I don't know that it hasn't been looked into. If the most respected veterinarian in Maine and his oddball son know about it, shirley someone else must have noticed.
Alterother, Mar 11 2012
  

       //no objective measure of intelligence is likely to influence our decision// Not very much, no, but if you try sometimes, you just might find people occasionally catch on <link>
mouseposture, Mar 11 2012
  

       You'd think...
Pavlov shoulda clued in anyway.
  

       To the internet!
<later>
Runt syndrome seems to only have myth status and hasn't been properly tested, or at least if the theory has been tested it is hard to find a study.
  

       It hardly seems to be a universal trait. I haven't paid it enough attention to even put a percentage on it, but if I had to it would be pretty low. I've just seen and heard enough to mark it as a trend. I'll definitely confirm it as more than a myth.
Alterother, Mar 11 2012
  

       I've only noticed this trait in dogs. A survey of animal handlers in general would help to see if it holds true of any critter giving multiple births. I bet they'd be happy to contribute their observations.
More of these mythy-esque home-spun thumb-rules need looking into dang-it.
  

       Sorry to sidetrack your A.I.DEA [xaviergisz].
The only thing that pops into my head when contemplating defining artificial intelligence is a Shroud of Turing Test, which is really no help at all, and I shouldn't have mentioned it in the first place.
  

       ^Haw!
FlyingToaster, Mar 12 2012
  

       //see if it holds true of any critter giving multiple births//

       I never noticed it when I was raising rabbits, but I wasn't looking for it, and runt kits usually die within a few days of birth. Even if they didn't, rabbits, with rare exception, aren't noted for cleverness.
Alterother, Mar 16 2012
  

       The glowing red eyes and the weapons are the usual giveaways...
not_morrison_rm, Mar 16 2012
  

       Yes, that's how most people detect _my_ feral cunning-- usually too late--but what does that have to do with runty rabbits?
Alterother, Mar 16 2012
  

       <shrugs>
Wait!
  

       They're both... hare razing?   
      