
Overstand Model

A model for Requested Task Artificial Comprehension

The main difference between this and neural networks is that there are subtask networks with known goals. Instead of one big self-adapting network looking at the giant data set and coming up with "something", we have well-defined subtasks, each looking at the big data and competing with the other subtasks to form its own "picture" of the data.

When you teach the Overstander arithmetic it uses models to learn how to solve arithmetic. It wants to "know" arithmetic, so it won't just use an online calculator (which is at hand). It will build a model that ALSO tells it how to do the carry-over and how to use addition tables, and how to program the CPU to do binary arithmetic, and as the problems get harder it will learn how problems are solved at higher and higher levels.
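A minimal sketch of what "knowing arithmetic" rather than just answering might look like: a subtask that models the procedure itself (addition tables plus carry-over), so each step is inspectable by a human or by a supervising subtask. All names here are illustrative, not part of any real Overstander implementation.

```python
# Hypothetical sketch: an Overstand-style arithmetic subtask. It does not just
# return a sum; it models the procedure (addition table + carry-over) and
# records a trace of every step so the process can be audited.

ADDITION_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def add_with_trace(x: int, y: int) -> tuple[int, list[str]]:
    """Column addition using the learned addition table, recording each
    carry-over step so a human (or another subtask) can inspect it."""
    xs = [int(d) for d in str(x)][::-1]
    ys = [int(d) for d in str(y)][::-1]
    length = max(len(xs), len(ys))
    xs += [0] * (length - len(xs))
    ys += [0] * (length - len(ys))
    carry, digits, trace = 0, [], []
    for i, (a, b) in enumerate(zip(xs, ys)):
        s = ADDITION_TABLE[(a, b)] + carry
        digit, carry = s % 10, s // 10
        trace.append(f"column {i}: {a}+{b}+carry -> digit {digit}, carry out {carry}")
        digits.append(digit)
    if carry:
        digits.append(carry)
    result = int("".join(str(d) for d in digits[::-1]))
    return result, trace
```

The point of the trace is the "having what to show for it" part: the subtask can explain how it added, not just what the answer was.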

The LAST thing we want from the Overstander is to do poetry. This is not a Turing Test machine; it's a model for achieving actual comprehension, and having something to show for it. First, and WAY BEFORE the poetry task, it has to give intelligent answers and decisions for technical and specific requests, using all the "tools" that an intelligent human would use to do that.

At the first stages, it would heavily rely on human assistance.

OCR would have a neural network for edge detection (or any other program that could achieve that), and a separate subtask for grammatical or lexical analysis, etc. Each task is self-contained, well-defined, and readable to humans as well.
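The OCR example could be sketched as a pipeline of named, self-contained subtasks, each with a known goal and a human-readable result. The subtask names and toy implementations below are purely illustrative assumptions, not a real OCR system.

```python
# Hypothetical sketch of the OCR example: each subtask is self-contained,
# has a known goal, and leaves its output in a state dict that humans can
# inspect between stages.

from typing import Callable

Subtask = Callable[[dict], dict]

def edge_detection(state: dict) -> dict:
    # Stand-in for a neural network (or any other program) that finds
    # glyph edges in the input image.
    state["edges"] = [f"edge({c})" for c in state["image"]]
    return state

def lexical_analysis(state: dict) -> dict:
    # A separate, well-defined subtask: keep only candidate words that
    # appear in the lexicon.
    state["words"] = [w for w in state.get("candidates", []) if w in state["lexicon"]]
    return state

PIPELINE: list[Subtask] = [edge_detection, lexical_analysis]

def run(state: dict) -> dict:
    for task in PIPELINE:
        state = task(state)   # each stage's result stays inspectable
    return state
```

Because each stage only reads and writes named entries in the shared state, a human can see exactly which subtask produced which part of the "picture".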

This model replaces (or rather adds on to) the regular grammatical or statistical models used for Artificial Intelligence and Artificial Language Analysis (like GPT-3).

When you have a request for an action or task or response, the Overstander would use a highly specific set of Overstand models to dissect the request and model it with a set of basic possible "understandings" from inside the topic.

It would then navigate these understandings in parallel in order to come up with an intelligent solution, decision, or action, asking relevant clarification questions along the way. These questions bring the computer to ever better understandings, which it could put into words or show through its decisions and reactions.
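One way to picture the "navigate in parallel, ask when unsure" step: score a set of candidate understandings of the request, and when no single reading clearly wins, emit a clarification question instead of a decision. The data structures and the margin threshold are assumptions for illustration only.

```python
# Hypothetical sketch: dissect a request into candidate "understandings",
# rank them, and either decide or ask a clarification question when the
# top candidates are too close to call.

from dataclasses import dataclass

@dataclass
class Understanding:
    label: str        # a possible reading of the request
    score: float      # how well this reading fits the request and context
    question: str     # what to ask the human if this reading is uncertain

def resolve(cands: list[Understanding], margin: float = 0.2) -> tuple[str, str]:
    """Return ('decide', label) for a clear winner, or ('ask', question)
    when the best two understandings are within `margin` of each other."""
    ranked = sorted(cands, key=lambda u: u.score, reverse=True)
    if len(ranked) > 1 and ranked[0].score - ranked[1].score < margin:
        return ("ask", ranked[0].question)
    return ("decide", ranked[0].label)
```

The clarification answers would feed back into the scores, which is the "ever better understandings" loop described above.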

The result would be that we could "fully" understand what is going on in the big picture.

We could start off with a kitten model for reaction to seeing a moving animal like a lizard, snake or cockroach, starting with instincts and preset responses.

Then move on to Overstand models for doing OCR of handwritten Malayalam, Cuneiform, or Syriac texts, along with all the resources a human would have at hand for that task.

Failures would be analyzed and pointed out to the Overstander software, and thus the model's subsets could be corrected.

Basically, the difference between this and any plain old neural network is that there are human-defined subtasks that help define what is needed for responding correctly to the request.

As an example, reading an ancient text would need the following knowledge modules:
1. Document source types (images vs. text).
2. Resources for finding prior analysis of this text, and recognizing whether it has already been analyzed and deciphered.
3. Letter sets and fonts.
4. Recognizing the direction of the text, and the lines of the text.
5. Dissecting the letters and comparing them to each other and to other writings (creating dictionaries and lexicons on the way).
6. Understanding the content of the text, which itself has many subtasks to be identified and defined (for example: finding quotations, finding the topic being discussed, finding the place and time it was written, and finding the borders of our knowledge in each of these).
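The six knowledge modules above could be made explicit as a registry, so that both the Overstander and a human can see which module is responsible for which sub-question. The module names and the trivial planner below are illustrative assumptions only.

```python
# Hypothetical sketch: the six knowledge modules from the ancient-text
# example as an explicit, human-readable registry.

KNOWLEDGE_MODULES = {
    "source_type":       "classify the document source (image vs. text)",
    "prior_analysis":    "search resources for an existing decipherment",
    "letter_sets":       "letter sets and fonts for this script",
    "text_direction":    "detect writing direction and line boundaries",
    "letter_dissection": "segment and compare letters; build lexicons on the way",
    "content":           "understand content: quotations, topic, place, time",
}

def plan(request: str) -> list[str]:
    # A real planner would select and order modules by relevance to the
    # request; this toy version simply lists every registered module.
    return [f"{name}: {goal}" for name, goal in KNOWLEDGE_MODULES.items()]
```

Keeping the goals in plain language is what makes each subtask "correctly readable to humans" as the idea requires.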

Oh, and in more advanced versions the Overstander would also want to know WHY we are asking this or that particular request, and what we wish to achieve.

In the most advanced version, it would have its own opinion too.

It could take a long time to build even the simplest model in this fashion, but as time progresses, there will be an automatic model that learns how to model. Learning to learn...

pashute, Dec 20 2022

Youtube: GPT3: An Even Bigger Language Model - Computerphile https://www.youtube...watch?v=_8yVOC4ciXc
[pashute, Dec 20 2022]

       From the GPT3 video (see link)   

       [00:10:20] but what about things like scientific papers? If you fed it enough science, enough scientific papers, do you think, could it come up with something that we've not really realized before? Or something new?   

       [00:10:29] Yeah, so my instinct, is to say NO. It's just predicting the next word. Right? It's just a language model... It doesn't have the ability to build the kind of abstract mental structures that you need in order to actually synthesize new um knowledge.   

       [00:10:56] BUT there's a kind of an outside view that says that we thought that about a bunch of things that it now seems to be doing, so... I'm not gonna say that it definitely couldn't do that. So one example of a task which it got better at, tremendously better at, is arithmetic. Which is kind of an interesting task, because again, it's a language model, it's not trying to do arithmetic, it's not designed to do arithmetic. But in GPT2 if you put in two plus two equals, and get it to give you the next token, it would give you a four. But that's not very surprising. Like, that's not very impressive, because you would expect to see in its dataset the words two plus two equals four very many times. That's pure memorization...   

       ...[00:15:14] for 3-digit addition and subtraction, again it's getting like 80%, 90%, and that's a big jump from the smaller models.   

       ...[00:15:24] what they're suggesting in the paper is that it has actually learned how to learn... like that's the interpretation that they're pushing
pashute, Dec 20 2022
  

       You want an AI standing over you?
pertinax, Dec 21 2022
  

       So this seems like an AI whose methods are broken into swim lanes, bound by conventional educational lanes and methods. That seems interesting in that it at least attempts to mirror human comprehension methods. I'm unsure how that is achieved, though: many of these swim-lane models are surely not easily programmed, and doing that seems to be the heaviest lift in AI; it simply moves the problem over, so to speak.
RayfordSteele, Dec 21 2022
  

       How will you tell the AI "this is the math AI"? If you do it the conventional way you no longer have AI. If you do it the ChatGPT way you no longer have a math AI.
Voice, Dec 23 2022
  
      
  


 
