The current UK debate about Brexit is interesting from a human/social point of view. This idea is stimulated by that, but is a wider thing.
Most people want the world, and our society, to be a nice place. We care about people, their lives, and we care about fairness. When the (very heated) debate on Brexit comes down to it, it's about fairness. Is it fair that big corporations make money by manipulating us for their own benefit? But then, our economy, jobs and pension funds depend on those same big corporations. What is fair? What is good?
Notwithstandinghowever, we are but a tiny part of the continuum of humankind. Our individual views are based on the context in which we were raised.
This idea, then, is to use a combination of polls, referenda, and AI-enabled trawling of social media (compensated to account for those who don't have access to, or desire to use, social media) to construct a quantifiable global human view of what is good, fair, and in the interests of all humankind.
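As a rough illustration of what "compensated" aggregation could mean in practice, here is a minimal Python sketch. It is not part of the original idea; every channel name, population share and agreement figure in it is hypothetical. It simply re-weights each channel's responses by assumed population shares, so that groups with little or no social-media presence are not drowned out.

# A minimal sketch (illustrative only) of combining responses from several
# channels -- polls, referenda, social-media trawling -- into one agreement
# score per channel, re-weighted by assumed population shares.
# All channel names, shares and figures below are hypothetical.

from collections import defaultdict

# Hypothetical responses: (channel, demographic group, agreement 0..1)
responses = [
    ("poll",         "offline", 0.70),
    ("poll",         "online",  0.62),
    ("referendum",   "offline", 0.55),
    ("referendum",   "online",  0.58),
    ("social_media", "online",  0.40),  # no offline voices reach this channel
]

# Assumed share of each group in the whole population, used to up-weight
# groups that a given channel under-represents (a post-stratification-style fix).
population_share = {"offline": 0.45, "online": 0.55}

def channel_estimate(channel):
    """Average a channel's responses per group, then weight by population share."""
    by_group = defaultdict(list)
    for ch, group, score in responses:
        if ch == channel:
            by_group[group].append(score)
    total_weight = sum(population_share[g] for g in by_group)
    return sum(population_share[g] * (sum(v) / len(v))
               for g, v in by_group.items()) / total_weight

for ch in ("poll", "referendum", "social_media"):
    print(f"{ch}: weighted agreement = {channel_estimate(ch):.2f}")

Note that in the sketch the social-media channel can only ever reflect the "online" group; that is exactly the gap the compensation, and the polls and referenda, would have to fill.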
Link to my general definition of "Good".
https://book.mindey...tml#i-lee-principle "Good is to let the world exist, and evil is to destroy it.", and "The generalization of 'world' is 'Everything', and the negation of destroying it results in aspiration to let Everything exist." [Mindey, Oct 22 2019]
|
|
That's never going to work. One poll will show
morons like Trump supporters who refuse to
believe that global warming has anything to do
with the rampant consumption of fossil fuels. The
same people when polled think the entire universe
is less than 6,000 years old. Then you have the
likes of ISIS who want to kill everyone, then you
have the Brazilians burning down the Amazon to
make McDonald's burger ranches. The point is that
there are a lot of crazies who will think their
particular craziness makes the world a better
place. |
|
|
//Notwithstandinghowever// I think you mean
"Howevertheless" |
|
|
[xenzag], yes, I agree that all of these people have violently different views on what is right and fair, because they have been influenced/brainwashed by a particular set of experiences over their life. But each one of us (you and I included) believes our own world-view because of our own life experiences. |
|
|
Rather than saying one view is right and another is
wrong, at least collectively determine the
consensus view (in some carefully defined and
collected way), identify the common ground
(about caring for humankind) and work towards a
common goal. |
|
|
//howevertheless//... hmm... I wanted to add to
the vocabulary. |
|
|
That's fine, just remember you have to pay him a royalty every time you use it ... |
|
|
"Good" ? To quote Havelock Vetinari "They think they want good government and justice for all, Vimes, yet what is it they really crave, deep in their hearts? Only that things go on as normal and tomorrow is pretty much like today.'" |
|
|
Feet of Clay. Pratchett's a bit of a hero of mine; he had some insightful views on society, and some great and funny writing too. |
|
|
//at least collectively determine the consensus view// That's the bit that has no meaning. If one group of people believe that God wants all infidels to be executed and another group believes in a replica of Noah's ark sitting in some shite hole town in Kentucky, what's the consensus view? |
|
|
Unfortunately, those with the most extreme views also tend to be the loudest (and the more they get asked to shut up, the louder they get...).
The main problem is that, in any large group of people, there will be a percentage of Bad (what that percentage is depends on what the group is or how it is defined/collected). I think that asking people "what is good" will fail from the outset. "Social Media" is definitely NOT a place to look/ask, as, again, the worst are usually the loudest.
Probably the only way to figure it out is to get someone else (i.e. not human) to analyse us. Benign AI is about the only solution, until we meet aliens... |
|
|
... except they're treating your species as a hilarious form of entertainment and aren't going to spoil the fun by intervening in any way. |
|
|
Think "Meerkat Manor", where the residents rush round squabbling with one another and lashing out with the equivalent of doll's house chair-legs and frying pans and completely forgetting to keep a watch for the eagle that's just landed in a nearby tree ... |
|
|
Even these two people will have more shared beliefs than either might be willing to admit. Perhaps the belief in an external spiritual presence that prescribes good behaviour; some common agreement on what is fair; a belief in caring for one's family and close ones; perhaps an agreement that mankind's crass abuse of the environment for financial gain is wrong... |
|
|
There will always be outliers, but collectively, we have so much in common. But until our common beliefs are actually collected and documented, we don't even see that we're agreeing. |
|
|
We wish to point out that civil wars are always the most vicious, because you know exactly who your enemy is, how They are different from You (a.k.a The Good Guys) and exactly why it's OK to hate Them so much. |
|
|
It's actually quite hard for humans to hate someone they know nothing about; one of the benefits of travel and education is that they open up the prospect of more comprehensive prejudice and dislike. |
|
|
Have you checked your gravitometer readings lately ? If you notice any inexplicable tidal forces, it's because you're being pulled towards the HalfBakery's Cynicism Event Horizon ... once you cross that, everything gets a lot easier. |
|
|
// some shite hole town in Kentucky // |
|
|
May we enquire if you know of any urban areas in Kentucky that aren't shite holes ? Because we don't ... |
|
|
//It's actually quite hard for humans to hate someone they know nothing about// |
|
|
I disagree. Until I spent an evening with a National
Front nazi racist bigot, I thought they were all
intellectually retarded, evil, toxic people.
I discovered a) an overall concern for human
wellbeing, b) a deep-seated feeling of unfairness
in the way the system treats people, and c) a
belief in human rights and free speech. |
|
|
The more I find out about people, the more I see we're the same. |
|
|
No, because you knew "something" about the object of hatred. |
|
|
If you had known nothing at all about their origin, culture and beliefs, you would find it difficult to build up a head of steam until you had some sort of knowledge about them. |
|
|
In a "Turing Test" scenario, where every detail of gender, ethnicity, accent and appearance are concealed, how long would you have to interact with "Candidate #23" before you could express either affection or hatred ? |
|
|
While we may be able to define "Good" (link) in a way most people intuitively understand, those people may still fail to realize what it truly means, because such generality explodes into all of those parallel universes of possibilities that none of us truly know or can comprehend. |
|
|
One useful take-away from this definition, though, is the requirement for us to look for what we truly want. A hundred years ago we wanted horses, but it appears that they were not what we truly wanted. We truly want "transportation", you say? But that's probably not what we truly want either. |
|
|
There's something much more desired by all: the satisfaction of desired conditions. Math stuff. |
|
|
It's probably fair to say that the vast majority of humans in the vast majority of circumstances, from time immemorial, truly want oxygen. |
|
|
They can manage without pretty much everything else for varying durations, but oxygen tends to be high on the priority list. Just try stopping someone's oxygen supply and watch how they react. |
|
|
"Good = Oxygen" might be perceived as a bit simplistic, though..... |
|
|
Serotonin and dopamine: the only things you like. |
|
|
// Serotonin and dopamine // |
|
|
Or more generally, just satisfying conditions... It's all that anyone has ever wanted, because a "Goal" is defined as a set of conditions such that, when they are satisfied, the goal is said to be achieved. |
|
|
//Serotonin and dopamine: the only things you like// |
|
|
There really must be an Asimov style morality piece in that
somewhere. |
|
|
World-spanning administrative AI is given the primary goal of maximising human happiness, with an instruction in plain English dropped on top of its linguistic comprehension architecture... |
|
|
After a short period of cogitation on this request, it proceeds to drug the entire planetary population into blissful happiness; the population starves to death (all far too happy to be arsed to go find food), humanity extinct within the week. |
|
|
A word of advice: a sentence like // Is it fair that big
corporations make money by manipulating us for their
own benefit// does not belong in a "define" idea (such as
it is), as it postulates as an accepted fact something that
is not actually universally accepted, and uses words that
themselves are loaded (such as "manipulating") |
|
|
Even the baseline assumption that "most people want the world, and our society, to be a nice place..." is questionable. "Most people want to be left alone" is probably more defensible. But as the one trick pony whose screenname starts with [x] points out, there are plenty of people out there who would view your adherence to their world view as more important than peace and quiet. |
|
|
With the best of intentions, this idea is about what? A mechanism for achieving consensus? |
|
|
Increased freedom means decreased consensus -- the
Israelis have a joke that there are more opinions than
citizens -- we are just at a time, somewhat analogous to
the introduction of the Gutenberg press, where such
opinions are exponentially more shareable, thus allowing
isolated realities to form, and stick. |
|
|
Read Neal Stephenson's latest for a good exposition of
how useless the Internet has made us at judging a
common reality -- not his best book by any means, but
interesting intellectual discussions on the subject, with
some ideas as to how lack of trust will impact society
going forward. |
|
|
As to the validity or usefulness of the goal: it's been a while since Dawkins wrote The Selfish Gene. MB is probably a
better judge of how close that is to describing reality, but ultimately starting with the first speck of "living" matter, we
are selfish entities, and any drive towards a common good, sentient as it may be with us, or mechanical as it may be
with insects, ultimately comes out of evolutionary benefits of such altruism, always already at a tenuous balance point. |
|
|
//any drive towards a common good, sentient as it may be with us, or mechanical
as it may be with insects, ultimately comes out of evolutionary benefits of such
altruism// |
|
|
It's a while since I read Dawkins, but I don't think he said that humans were
inevitably the handmaidens of selfish genes. We are sentient, and therefore have
an additional layer that can be used to act unselfishly, even when it disadvantages
our genes. Of course our hardware has been produced by the selfish process of
evolution, but we are not completely bound by it. For instance, we sometimes
feel the need to preserve other species, even when they are of no plausible direct
or indirect benefit to us. |
|
|
Almost any example of human altruism can be explained (sometimes with a little
bit of mental gymnastics) by the selfish gene theory; but that doesn't mean that
that explanation is always correct. |
|
|
no, not necessarily, but likely. |
|
|
I think our desire to preserve this or that species is just a
reflection of our own fear of death, of somehow cementing
the "now". |
|
|
Ultimately, the planet, and the universe, can't give a fuck. |
|
|
Concise, pithy, and accurate as ever, [tc] ... |
|
|
How about "Four legs good, two legs bad" ... ? |
|
|
//Ultimately, the planet, and the universe, can't give a
fuck.// Yes, of course. But my point was that altruistic
behaviour towards species that can't benefit us, is probably an
example of human thought overriding selfish human genes. |
|
|
//We are sentient, and therefore have an additional layer
that can be used to act unselfishly// |
|
|
Human thought governed by instincts molded by "selfish genes" (aka the principles of evolution & 'survival of the fittest'). |
|
|
Altruism is merely the end expression of instincts for the
preservation of the genes by proxy through other organisms
(relatives & others of the same or similar species) carrying
the gene. |
|
|
//with a little bit of mental gymnastics// |
|
|
No mental gymnastics required, you just have to remember
that the 'gene' being selfish isn't the organism it resides in. |
|
|
The gene's selfishness is macro rather than micro; individual organisms don't matter to it, hence the 'programming' that can occasionally result in less than optimal outcomes for individual members of a species (aka altruism). |
|
|
In complex organisms & brains it's bound to get a little
confused or some accidental 'cross wiring' occur sometimes
as well. |
|
|
//Human thought [is] governed by instincts molded by "selfish genes"// |
|
|
Well, yes and no. Selfish genes have built the hardware,
which has a strong influence on how the software runs. But
it does not dictate _exactly_ how the software runs. To
pick a random example, there are plenty of people who
believe in "karma" in the popular sense. That belief might
lead them to do something good for an unrelated human or
for another species. Now, you can argue that belief in
"karma" is an evolutionary adaptation to help us win status
and breeding rights. But not everyone believes in "karma" in
the same way, and many of the things people do for "karma"
are only likely to gain respect in light of very modern
attitudes to, for instance, saving the environment. In other
words, the way that belief in "karma" manifests is
determined not (or not entirely) by our hardware, but also
by our thought processes. |
|
|
I'm pretty sure (and I'm buggered if I'm going to re-read all
his books) that even Dawkins believed that human thought
has the ability to win out over selfish genes in many cases. |
|
|
To some degree you're confusing the software with the result of running said software. What we think of as thought isn't the software any more than the image on my screen right now is. "Software" is also something of a misnomer in this case; it's really all as hardwired as the brain you call hardware. |
|
|
Besides that.. //In complex organisms & brains it's bound to
get a little
confused or some accidental 'cross wiring' occur sometimes
as well// |
|
|
Well, OK, but my point stands - human thought is capable of
going beyond selfish genes. |
|
|
In some ways it doesn't matter whether it's a product of selfish genes, one's own genetic disposition, or one's life experiences. Each one of us has our own set of moral beliefs, which guide our actions. I believe we share the vast majority of those (acknowledging some outliers). I think it would be beneficial to collect data across a wide range of people to identify those things that we share, and particularly the scope of disagreement. |
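A toy sketch of the kind of summary such a data collection might produce follows; every statement, respondent and score in it is invented, and it is only meant to show how "what we share" and "the scope of disagreement" could be separated per belief.

# A minimal, purely illustrative sketch: summarise invented moral-belief data
# into an average level of agreement (consensus) and a spread of opinion
# (scope of disagreement) for each statement.

from statistics import mean, pstdev

# Hypothetical agreement scores (0 = strongly disagree, 1 = strongly agree)
# from a handful of imaginary respondents.
survey = {
    "Caring for one's family matters":        [0.90, 1.00, 0.80, 0.95, 0.85],
    "Fairness matters more than profit":      [0.70, 0.90, 0.30, 0.80, 0.60],
    "Environmental damage for gain is wrong": [0.80, 0.20, 0.90, 0.70, 0.40],
}

for statement, scores in survey.items():
    consensus = mean(scores)        # how much, on average, we agree
    disagreement = pstdev(scores)   # how widely opinions are spread
    print(f"{statement}: consensus={consensus:.2f}, disagreement={disagreement:.2f}")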
|
|
I think not. We share very little. Example: the ease
with which the Japanese committed the most
appalling inhuman acts when they invaded places
like China. Even babies were bayoneted for sport
and fun, yet do Japanese not respect children's
lives? People can say one thing and act out
another. There is no consensus as to what "good"
means. There is no definition of "good". When the
Comanche Indians were skinning their victims
alive, they thought this was something good. |
|
|
No need if we can just get them to keep binging on Netflix
shows |
|
|
A balance is needed, but active is better than passive. |
|
| |