It seems not improbable that the Earth will come to a catastrophic end before the Sun's life is over, whether by asteroid, the bomb, or some other unforeseen means (Ward/Brownlee). And apparently evolution never works the same way twice (Miller), because of the uncertainty principle (Heisenberg).
We sapiens may well be unique in the universe, having taken 4 billion years to evolve: a feat unlikely to be repeated often. Moreover, even if it were repeated, the result might not look anything like us, or be as smart.
To save our genome, perhaps the answer is self-replicating robots: inorganic, so that they can populate the Moon, Mars and so on, carrying with them hard-disk copies of the DNA of all the wonderful creatures of Earth.
Think of it: little solar-powered critters gathering up iron filings from the surface of the Moon with magnets and fusing them into body parts with a big magnifying glass. Maybe they would take a thousand years to produce a single offspring: who cares? At least we would have something to show for the 4 billion years we enjoyed on Earth.
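(A back-of-the-envelope sketch, not part of the original idea: if each robot really did build just one copy of itself every thousand years, the population would still double every generation, so the slowness hardly matters on geological timescales. The thousand-year replication time is taken from the paragraph above; the trillion-robot target is an arbitrary illustration.)

    import math

    YEARS_PER_OFFSPRING = 1000.0   # assumed replication time from the idea text
    TARGET_POPULATION = 1e12       # arbitrary illustrative target

    doublings = math.log2(TARGET_POPULATION)   # ~40 doublings to reach a trillion
    years = doublings * YEARS_PER_OFFSPRING    # ~40,000 years in total

    print(f"~{doublings:.0f} doublings, ~{years:,.0f} years to a trillion robots")

Roughly 40,000 years: a blink compared to the 4 billion years it took us.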
Surely the priority in robotics should be reproduction rather than artificial intelligence. At least if they are reproducing themselves they are not linked to government grants! And intelligence can emerge in its own sweet time.
"The Physics of Immortality"
http://www.amazon.c...50-5732836?v=glance Wherein the author supposes that people (and indeed entire universes) can be perfectly emulated on computers, and that all people who have ever lived will be emulated on computers in the far-distant future. [snarfyguy, Oct 04 2004]
an ode to Henrietta Lacks
http://malwebb.cust...ce.net.au/hela.html 50 years of (largely) clonal cell culture, and still going strong -- not bad for a eukaryotic genome. [n-pearson, Oct 04 2004]
This idea was investigated by NASA for Jimmy Carter in 1980
http://www.islandone.org/MMSG/aasm/ [peterpeter, Oct 04 2004]
Seems like you're grasping for what Frank J. Tipler wrote about in "The Physics of Immortality."
If we're really that smart, we're in no danger. If we're not that smart, well...
Be careful about those self-replicating robots. "Growth for the sake of growth is the philosophy of cancer" (Jasper Fforde). They have got some robots like that on the Stargate show.
Growth for the sake of growth is also evolution is it not kinda sorta..
In other words, isn't it better to do something, even if it isn't perfect the first time, than nothing at all? After all, isn't that supposed to be how we emerged in the first place? If evolution is all that, perhaps it would work on the CMCFs (Clunky Metal Critter Factories).
Well, the soma of at least one human, an African-American ovarian cancer victim who died under fairly poor care in 1953, has 'won' a jackpot of indefinite clonal genomic reproduction (so far just here on Earth, AFAIK) via ongoing research interest in cells derived from her tumor. See link for an elegy to the woman, Henrietta Lacks, aka Helen Lane.
//Growth for the sake of growth is also evolution is it not kinda sorta..// Please speak English. With reference to the last 3 paragraphs: please define your invention. Please elaborate on what the hell you mean, particularly in paragraphs 1 to 4. Ta. Sp: catastrophic. Sp: gnome. Genight!
<chicken run> "Agh! Gnomes, now! It's all in me head, it's all in me head..."
I feel your pain [RayfordSteele]. They get me like that too.
Save the genome: remove the gnome.
Thank you [snarfyguy], I ordered the book. Apparently Robert Freitas, for NASA, performed a study on the feasibility of such factories in the 1980s, making this idea redundant. In any case, this is a mixture of other ideas, or at least rests on the premises of others (Tipler, von Neumann, Freitas, Ward, Brownlee), to name a few.
I thought that the uncertainty principle told us that, at the subatomic level, knowledge of a particle's speed and its state are mutually exclusive goals, since to pin down one in this reality, with our current technology, you are required to change the other? Reading sources... You are basing all this on the assumption that the beasts that evolve afterwards are not going to be more intelligent than man. As I remember, the last time this happened (or was meant to have happened) the smartest things on the planet were dinosaurs, and we came about because the more robust animals that could sustain their body temperature survived in ever increasing numbers, and the death of the dinosaurs gave the mammals the big break they needed... who knows that couldn't, and shouldn't, happen to us? And also, if a bomb brought about the destruction of the planet, should we really allow ourselves to repopulate it?
[Ossalisc] The way Miller attributes non-repeating evolution to the uncertainty principle is rather involved, so I suppose I was grasping a bit in trying to put them together in one sentence. There are no guarantees on the emergence of intelligence. However, inorganic survivors would be more adept in the average solar system, and so, if they populated the galaxy and beyond, there is some chance some good may emerge. That's hope. And whereas organic life can emerge under the right conditions 'by itself', it seems impossible that inorganic 'life' could do so without an initial boost.
[peterpeter], where do you stand on the (weak or strong) Anthropic Principle then? <Note to self> High^5 on getting the question in when I am 9 hours away from a blissful 2 weeks in Spain </Note to self> [Ossalic] //and its state are mutually exclusive goals// Sp: goats