Halfbakery
This is probably very naive and may already be on here somewhere, but I haven't found it. My limited understanding of computers makes it difficult for me to express this clearly or sensibly.
The operating system constantly monitors what happens on the computer: processes, memory used, speed, drivers employed and so forth. In many cases there are at least two implementations of each task, which do the same job as differently as possible. Each starts off with a probability rating, initially one in two if there are two such options. Before a task is run, the operating system generates a pseudorandom number which decides which implementation will handle it.
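The selection step could be sketched as a weighted random choice. The implementation names and the initial ratings here are my own illustration, not anything specified in the idea:

```python
import random

# Hypothetical registry: two interchangeable implementations of the
# same task, each with a probability rating (initially 1/2 each).
implementations = {
    "copy_v1": 0.5,
    "copy_v2": 0.5,
}

def choose_implementation(ratings):
    """Pick one implementation pseudorandomly, weighted by rating."""
    names = list(ratings)
    weights = [ratings[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

chosen = choose_implementation(implementations)
```

With equal ratings each option is chosen half the time; as the ratings drift apart, the better-rated option is run more often.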
Now, suppose the computer does something "naughty", like running slowly, losing data, making a silly noise or displaying something weirdly, or something "good", like running particularly smoothly, exiting an application quickly or playing media without stutter. There are two keys on the keyboard labelled "PUNISH" and "REWARD". Pressing the former reduces the probability ratings of the specific implementations used in the recent tasks, and pressing the latter increases them. Failure to complain also increases the probabilities marginally. This way, implementations which seem to interfere with the running of the computer are weeded out, leaving those which work well, and bugs are sorted out without conscious intervention.
I'm sure this is bollocks, but could someone explain why, please?
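A minimal sketch of the PUNISH/REWARD update under one possible interpretation: punishing halves the rating of the implementation just used, rewarding doubles it, and the ratings are renormalised so they stay a probability distribution. The exact factors are my assumption; the idea doesn't specify them.

```python
def _renormalise(ratings):
    """Rescale the ratings so they sum to one."""
    total = sum(ratings.values())
    for k in ratings:
        ratings[k] /= total

def punish(ratings, name, factor=0.5):
    """PUNISH key: make the implementation just used less likely."""
    ratings[name] *= factor
    _renormalise(ratings)

def reward(ratings, name, factor=2.0):
    """REWARD key; silent approval could use a factor nearer 1."""
    ratings[name] *= factor
    _renormalise(ratings)

ratings = {"copy_v1": 0.5, "copy_v2": 0.5}
punish(ratings, "copy_v1")
# copy_v1 drops to 1/3 and copy_v2 rises to 2/3
```

Renormalising means punishing one option automatically favours its rivals, which matches the "weeded out" behaviour described above.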
Is this a genetic algorithm, where the "fitness" of each mutation is defined largely by reference to the current mood of the user?
If so, wouldn't it require a very patient user to spend a long time responding to randomly sub-optimal behaviour before the algorithm even reached a break-even point?
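The break-even question can be poked at with a toy simulation of the scheme. All the numbers here (an 80% versus 40% chance of pleasing the user, and the halving/doubling factors) are invented for illustration:

```python
import random

random.seed(1)

# Made-up success rates: how often each implementation pleases
# the user, prompting a REWARD rather than a PUNISH press.
success_rate = {"good": 0.8, "bad": 0.4}
ratings = {"good": 0.5, "bad": 0.5}

def step(ratings):
    """One task: pick an implementation, then punish or reward it."""
    names = list(ratings)
    name = random.choices(names, weights=[ratings[n] for n in names])[0]
    ratings[name] *= 2.0 if random.random() < success_rate[name] else 0.5
    total = sum(ratings.values())
    for k in ratings:
        ratings[k] /= total

steps = 0
while ratings["good"] < 0.95 and steps < 10_000:
    step(ratings)
    steps += 1
print(steps, round(ratings["good"], 3))
```

With options this far apart the better one dominates after a handful of key presses; with success rates closer together, or with many independent tasks all competing for the user's attention, the user would indeed spend a long time acting as the fitness function.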
[Quest], the problem isn't always RAM even when it is. I have no idea if this happens, but I get the impression that launching programs in different orders makes a difference to performance, and I wonder if this is to do with something like a program claiming memory and then not relinquishing that claim when it exits, or memory locations not being consecutive but being treated as if they are, so that the RAM gets fragmented in the same way as backing storage would. As I say, though, I have no idea if this is true or if it ever happened.
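The fragmentation guess can be illustrated with a toy first-fit allocator over a fixed array of slots. This is purely illustrative; real allocators and virtual memory behave very differently:

```python
# Toy "RAM" of 16 slots; None marks a free slot.

def largest_free_run(ram):
    """Length of the longest contiguous run of free slots."""
    best = run = 0
    for slot in ram:
        run = run + 1 if slot is None else 0
        best = max(best, run)
    return best

def alloc(ram, name, size):
    """First-fit: claim the first `size` contiguous free slots."""
    run_start, run = None, 0
    for i, slot in enumerate(ram):
        if slot is None:
            if run == 0:
                run_start = i
            run += 1
            if run == size:
                for j in range(run_start, run_start + size):
                    ram[j] = name
                return True
        else:
            run = 0
    return False

def free(ram, name):
    """Release every slot owned by `name`."""
    for i, slot in enumerate(ram):
        if slot == name:
            ram[i] = None

ram = [None] * 16
for name, size in [("a", 4), ("b", 4), ("c", 4), ("d", 4)]:
    alloc(ram, name, size)
free(ram, "a")
free(ram, "c")
# 8 slots are free in total, but only in two runs of 4, so an
# 8-slot request now fails: that is fragmentation.
print(largest_free_run(ram))  # 4
```

Exiting and restarting programs in a different order changes which holes open up, which is one way launch order could plausibly affect performance.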
[Bigsleep], I realise it's not very specific, but I think the usual situation is that users have little or no idea why there's a problem, even if they are well-informed about computers, just because so much is hidden, so it's hard to see how it could be made specific.
Those are the first things which come to mind. I just wish I understood what the problem was. It seems to go beyond all reason. However, that's too general for here. As it happens, you remind me of another idea I had.
[19thly], I think you need to buy more processing power/RAM. Newer iterations of software are written on the basis of ever greater resources being available, in a cruel parody of Moore's Law.
Any machine with less than 4GB of RAM is really behind the game if it's running recent software. 16GB of RAM and upward, coupled with a dual- or quad-core 3GHz+ processor, provides a far more pleasant computing experience for not a lot more capital outlay.