[A short preface]: Daniel Dennett wrote a book summarizing contemporary scientific thought on how human, animal, and even insect brains work. The idea is that various parts of the brain work at the same time on various tasks, each chipping in with its input. The famous Marvin Minsky example is looking at the words THE CAT written as "T|-\E C|-\T": one brain task searches for LETTERS, another looks at the middle-letter area of a word, another checks what lines are present (horizontal / vertical / slanted), while yet another task checks the whole word, sees C*T, and decides that the |-\ is an A, not an H.
Even when a task has finished, it does not return ONE result but a list of possible results.
The various tasks compete for attention, and our resulting actions and understandings follow from the currently chosen set of results.
But the tasks usually continue running a bit longer, and many times (as Patricia Churchland wrote in 'Neurophilosophy' about understanding jokes) they crop up later, when we "re-alize" something or get a "new perspective" on things.
[--End of Preface] ===========================
The Framework:
I propose a framework of tasks that have the following: [HB reminder: incomplete idea ahead]
a. Tasks all return lists of possible results (or "associations").
b. Result priority: associations (= results) can have priority tags attached to them.
* The priorities can be static (e.g. "most probably the correct result", "small chance this is needed").
* Or they can be parametrized (e.g. "check the first and last letter to decide").
c. Tasks start and stop according to "events" in the system, and pass their results on by invoking events. E.g. "INFO_NEEDED('LETTER')" starts these tasks; "INFO_FOUND('LETTER')" lets a task continue for a short while; "EMERGENCY_STOP_ALL('LETTER')" stops the tasks immediately and gives attention to emergency tasks; and the task itself emits "INFO_FOUND('LETTER')" when it has results to report.
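Points a–c can be sketched in a few lines. This is a minimal illustration, not a specification: the `Bus` and `LetterTask` names, the `"EVENT:TOPIC"` string convention, and the priority labels are all hypothetical, chosen to mirror the T|-\E C|-\T example above.

```python
# Minimal sketch of the event-driven task framework described above.
# All class, event, and priority names here are illustrative assumptions.
from collections import defaultdict

class Bus:
    """Routes named events to every task subscribed to them (point c)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event, handler):
        self.subscribers[event].append(handler)

    def emit(self, event, payload=None):
        for handler in list(self.subscribers[event]):
            handler(payload)

class LetterTask:
    """Starts on INFO_NEEDED:LETTER and returns a *list* of possible
    results, each carrying a priority tag (points a and b)."""
    def __init__(self, bus):
        self.bus = bus
        bus.subscribe("INFO_NEEDED:LETTER", self.run)

    def run(self, glyph):
        # An ambiguous glyph yields every candidate, tagged by priority.
        candidates = {"|-\\": [("A", "most probable"), ("H", "small chance")]}
        results = candidates.get(glyph, [(glyph, "most probable")])
        self.bus.emit("INFO_FOUND:LETTER", results)

bus = Bus()
LetterTask(bus)
found = []
bus.subscribe("INFO_FOUND:LETTER", found.append)
bus.emit("INFO_NEEDED:LETTER", "|-\\")
print(found[0])  # → [('A', 'most probable'), ('H', 'small chance')]
```

A real framework would add the EMERGENCY_STOP_ALL path and let a word-level task re-rank these candidates, but the list-of-tagged-results shape is the core of points a and b.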
Running: once a working framework exists, we should run existing computer programs under it. (E.g. office programs, where spellcheck is a learning task that runs in the background according to the letter you are writing, the previous letters, and how you have interacted with spellcheck before.)
Finally: we could create hardware that works on small tasks, creating a "brain" from various components interacting through this framework.
Acovea
http://www.coyotegu...om/products/acovea/
ACOVEA (Analysis of Compiler Options via Evolutionary Algorithm) implements a genetic algorithm to find the "best" options for compiling programs with the GNU Compiler Collection (GCC) C and C++ compilers... [Spacecoyote, Nov 03 2008]
Acovea - functional link 2015
https://directory.fsf.org/wiki/Acovea [pashute, Jun 17 2015]
Self programming chip
[pashute, Jun 29 2015]
Here's a halfbaked pseudocode version:
LISTS: TaskKeywords, TriggerKeywords, PossibleSentEvents, PossibleReceivedEvents, ResultPossibleFields, PossibleResults, ActualResults, PossibleFriendTasks, CurrentFriendTasks.
ACTIONS:
OnTriggerReceived(eventName, sourceTaskKeywords),
AssociateEvent(triggers, resultName, outputEvent, outputParams, outputScript)
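One way the pseudocode above might be fleshed out. The attribute and method names are taken from the LISTS and ACTIONS sections; the behavior (an association fires its output event when any of its triggers is received) is an assumption about the author's intent.

```python
# Speculative fleshing-out of the LISTS/ACTIONS pseudocode above.
# Names come from the pseudocode; the matching logic is assumed.
class Task:
    def __init__(self, task_keywords, trigger_keywords):
        self.task_keywords = task_keywords        # TaskKeywords
        self.trigger_keywords = trigger_keywords  # TriggerKeywords
        self.actual_results = []                  # ActualResults
        self.associations = []                    # filled by AssociateEvent

    def associate_event(self, triggers, result_name, output_event):
        # AssociateEvent(triggers, resultName, outputEvent, ...)
        self.associations.append((set(triggers), result_name, output_event))

    def on_trigger_received(self, event_name, source_task_keywords=()):
        # OnTriggerReceived(eventName, sourceTaskKeywords): fire every
        # association whose trigger set contains the incoming event.
        fired = []
        for triggers, result_name, output_event in self.associations:
            if event_name in triggers:
                self.actual_results.append(result_name)
                fired.append(output_event)
        return fired

spell = Task(["spelling"], ["INFO_NEEDED"])
spell.associate_event(["INFO_NEEDED"], "typo-list", "INFO_FOUND")
print(spell.on_trigger_received("INFO_NEEDED"))  # → ['INFO_FOUND']
```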
Why the bone? Baked? Unoriginal? Too techy?
Baked... Any modern OS worth its bits splits each task between all available processor cores.
SpaceCoyote: Actually, no. A programmer can split a time-consuming task into multiple threads, allowing the work to be done by multiple processors. But if the programmer doesn't do this extra work, the task runs in one thread on one core. A CPU core can execute multiple instructions from a single thread simultaneously where possible, but that is the hardware's doing, not the OS's.
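The point in the annotation above, that the programmer has to split the work explicitly, can be sketched as follows. The chunking helper is hypothetical; note also that for CPU-bound work, CPython's global interpreter lock means a *process* pool would be needed for a real speedup, though the splitting logic is identical.

```python
# The OS will not parallelize a single-threaded loop for you: the
# programmer splits the input into chunks and farms them out.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def split_sum(n, workers=4):
    # Divide [0, n) into `workers` contiguous chunks (last one may be
    # larger when n is not divisible by workers).
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(split_sum(1_000_000) == sum(range(1_000_000)))  # → True
```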
Pashute: I'm a little unclear on what this idea actually does and what its purpose is.
So two or more different functions to do the same task compete for completion. As complexity grows, a particular function will be better suited to a given situation. Like running three spell checkers, each with a different method, on a single document.
Yes! That's how the mind works. You run various tasks that give you fast results for immediate use, and more refined results later. So 3 (probably more are needed) spellchecking utilities would give you various TYPES of spellchecking results.
From recognizing simple typos and suggesting corrections, to grammar or theme errors, according to who is writing and what is being written.
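The fast-results-first, refined-results-later behavior described in the last few annotations can be sketched with competing checkers. The three checker functions are toy stand-ins (the delays simulate deeper analysis); only the completion-order mechanism is the point.

```python
# Sketch: hypothetical checkers compete on the same text. The crude,
# fast result arrives first; deeper results stream in later.
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def typo_checker(text):        # fast and shallow
    return ("typos", [w for w in text.split() if w == "teh"])

def grammar_checker(text):     # slower, deeper (delay simulated)
    time.sleep(0.05)
    return ("grammar", [])

def theme_checker(text):       # slowest, most refined
    time.sleep(0.1)
    return ("theme", [])

def check(text):
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(f, text)
                   for f in (typo_checker, grammar_checker, theme_checker)]
        # as_completed yields results in completion order, not
        # submission order, so the UI can use each one as it lands.
        return [f.result() for f in as_completed(futures)]

results = check("teh cat sat")
print(results[0])  # → ('typos', ['teh'])
```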
It's definitely NOT baked in the sense that [spacecoyote] wrote. This would create a dynamic computing environment, where programmers give a vague idea, and perhaps some algorithms for accomplishing a task, but the actual program constantly refines itself via messages.
The result could be faster, better computing in what we expect computers to do today, and of course new types of applications that were very hard to accomplish until now, such as complex robotics tasks.
Oh, now I get what you mean. It reminds me of Acovea (link).
Anyway, this sort of stuff doesn't matter much anymore. Traditionally, any process in a prototype program that didn't run fast enough would be optimized by hand until it did. This usually meant a loss of accuracy (which was justified because the program didn't need more accuracy), in much the same way the brain does when guesstimating.
Nowadays processors are insanely fast. The massively parallel thing the brain does is there to solve a problem the brain has: it is slow. Modern processors don't have this problem as severely as the brain does.
Part of this is baked within the processor itself: most opcodes use only a small part of the processor, and with out-of-order architectures, the processor simultaneously executes any opcodes in the pipeline that require the other, unused parts.
I can think of several applications for competing tasks. Data mining, for one. You run multiple tasks searching a set of data like the internet, each using different search arguments but all looking for the same thing, say terrorist sites. Or matching pieces of cell-phone recordings against a specific voice print.
Testing alternate versions of a design in parallel. You create several systems for winning at blackjack and run each of them in a separate task using the same randomly generated data. After a month or two you compare a graph of the results.
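The annotation's proposal, several strategies scored against the same randomly generated data, might look like this. The `hit_below` strategy family and the simplified card model are invented for illustration; the shared, seeded data source is the point.

```python
# Sketch: candidate blackjack strategies compete on identical data.
import random

def hit_below(threshold):
    """Build a toy strategy: keep drawing until reaching `threshold`."""
    def play(deck):
        total, deck = 0, list(deck)
        while total < threshold and deck:
            total += deck.pop()
        return total if total <= 21 else 0   # busting scores zero
    return play

rng = random.Random(42)                      # shared, reproducible data
hands = [[rng.randint(1, 11) for _ in range(10)] for _ in range(1000)]

strategies = {f"hit<{t}": hit_below(t) for t in (15, 17, 19)}
scores = {name: sum(play(h) for h in hands) / len(hands)
          for name, play in strategies.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))
```

Because every strategy sees the exact same hands, the score differences reflect only the strategies themselves, which is what makes the month-long comparison in the annotation meaningful.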
|
|
The thing is that it is more an application-level implementation than an OS-level thing.
Acovea does it for one thing: a compiler. I want a framework for building software programs.
Again, NOT all doing exactly the same task, but rather cooperating AND competing toward the same GOAL. In the case of spell-checking, one process might find some extra info, it might disagree with another process about the results and begin a "negotiation", and it may give results in a different order. If there's a bottleneck, and some process is waiting for a result from another, or if one process feels it will take too much time to give a result, it could call out to the others for help.
3 to 1 on the bone side only means that more halfbakers don't like it than do, not that it's a bad idea.
I'm going to try to get interest in making it happen.
I think an extra small enhancement can make the difference: indicative computing. Each subprocess publishes a notice with an indication of its success: how close it thinks it is to a result, and what it expects the result to be.
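A minimal sketch of that indicative-computing notice, assuming a shared notice board that a supervisor polls; the `Indication` fields and task names are hypothetical.

```python
# Sketch of "indicative computing": each subtask publishes how close it
# thinks it is to a result and what it expects that result to be, so a
# supervisor can give attention to the most promising task.
class Indication:
    def __init__(self, task, progress, expected):
        self.task = task          # which subtask is reporting
        self.progress = progress  # 0.0 .. 1.0, self-assessed closeness
        self.expected = expected  # current best guess at the result

board = []  # shared notice board

def publish(task, progress, expected):
    board.append(Indication(task, progress, expected))

publish("regex-search", 0.9, "match at byte 1024")
publish("fuzzy-search", 0.3, None)

most_promising = max(board, key=lambda i: i.progress)
print(most_promising.task)  # → regex-search
```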
As a side note to [spacecoyote], [theGem] and others here: from your comments it seems you are sold on the "fixed algorithm" paradigm, where you can figure out in advance the best and exact method to reach a result. But many real-world problems COULD be solved by computers and often are not, due to "complexity". A new paradigm of programming, one that incorporates the "old programming" methods and enhances them, could create a breakthrough there. Rather than "talking to a machine" and having it carry out only exact orders in a stuttering, "robot-like" manner, the new computer programs will be "picking up old programs" and even "writing their own code" in order to complete the tasks at hand and achieve the goals they need.
Let's discuss sorting. Some sort strategies work better on some data than others. What if you don't know in advance what type of information you are looking at? Perhaps it's already sorted alphabetically. Perhaps it's totally random. Perhaps you have the task of sorting many short lists, and checking whether they are already sorted would take more time than actually sorting them.
The way to go in this case is to have some of your computation power checking and sampling the data in parallel, working on it and on different parts of it in different ways, and once some approaches prove more successful, telling the other processes that they can stop.
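The stop-the-others mechanism can be sketched with a shared flag. Real strategies would poll the flag inside their inner loops and would differ more than these two toy entries, which both wrap Python's built-in sort.

```python
# Sketch: sort strategies race on the same list; the first to finish
# sets a stop flag that the others check cooperatively.
import threading

stop = threading.Event()
winner = []

def run_strategy(name, sort_fn, data):
    out = sort_fn(data)          # (real strategies would poll `stop`
    if not stop.is_set():        #  inside their inner loops and bail)
        stop.set()
        winner.append((name, out))

data = [5, 3, 1, 4, 2]
strategies = [
    ("timsort", sorted),
    ("reverse-timsort", lambda xs: sorted(xs, reverse=True)[::-1]),
]
threads = [threading.Thread(target=run_strategy, args=(n, f, data))
           for n, f in strategies]
for t in threads: t.start()
for t in threads: t.join()
print(winner[0][1])  # → [1, 2, 3, 4, 5]
```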
This parallel "competing" process also holds for the main controlling process as a whole (if there is such a thing in the first place): some of its subtasks will search the short-term and long-term memories and come up with a "plan" for how to go about the task at hand, invoking capable subtasks that were previously appropriate and successful in advancing the task's goals.
This is continued in my newer entry: Self Programming Chip.