
Artificial self learning programmer

Learns how to program and to create programs from requests in natural language
 
(+1, -1)

This AI program has a chat-bot interface. I give it material to read and websites to look up. It reads them and analyzes them until it has an "understanding" of how to program. This means:

1. It acquires a lexicon of words and phrases, and rules for how to use the programming language, and what for. It then uses this to broaden its knowledge by searching the internet and reading relevant material.

2. The learning process is interactive, with a human mentor. You help it learn by answering the questions it has and pointing it in the right "direction". But since the program itself is online, it also learns how to search for answers to its own questions online. (A toy sketch of this loop follows the list.)

3. Once it's competent enough, you can register it to do online courses and get tested.

4. At an advanced stage you can even allow it to post questions on Stack Exchange, monitored at first, of course.

5. At the final stage it learns to create programs upon request in natural language. You ask it to create a program that does such-and-such; it discusses it with you, does the web research, and comes up with the computer program that you want.
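
What stages 1 and 2 might look like as a loop, in a toy Python sketch. Everything here is hypothetical and invented for illustration; the "web" is stubbed out with a tiny dictionary, and a real ASLP would need actual parsing and search:

# Toy sketch of the ASLP learning loop (stages 1-2). All names are
# hypothetical; the "web" is a stub dictionary standing in for search.

TOY_WEB = {"what is a loop?": "a statement that repeats a block of code"}

class ASLP:
    def __init__(self):
        self.lexicon = {}          # term -> usage rule (stage 1)
        self.open_questions = []   # gaps it could not fill by itself

    def study(self, material):
        # Every "term: rule" line extends the lexicon; every line
        # ending in "?" is queued as an open question.
        for line in material.splitlines():
            line = line.strip()
            if ":" in line:
                term, rule = line.split(":", 1)
                self.lexicon[term.strip()] = rule.strip()
            elif line.endswith("?"):
                self.open_questions.append(line)

    def resolve(self, question):
        # Stage 2: try the (stubbed) web first; ask the human mentor
        # only when the search comes up empty.
        answer = TOY_WEB.get(question.lower())
        if answer is None:
            answer = input(f"Mentor, please explain: {question}\n> ")
        return answer

aslp = ASLP()
aslp.study("for-loop: repeats a block a fixed number of times\n"
           "What is a loop?")
for question in aslp.open_questions:
    aslp.lexicon[question] = aslp.resolve(question)
print(aslp.lexicon)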

It will probably work for a narrow type of programming, but let's get started.

Within a few years you'll have ASLPs who are experts in their field, and who could make possible many new types of applications. For example: today, if you wish to make a speech recognition program for some esoteric language such as Hebrew, or a program that analyzes and categorizes handwritten letters from manuscript images, you need to spend a few years and raise the money for programmers. Here you just put the computer to work and get your user interface and algorithms, all by managing your digital programmer in natural language.

It will learn how to do effective code review, and even how to clearly explain the concepts and algorithms it comes up with.

It will learn how to keep version control, with good documentation that is readable to humans as well.

The management will stay ours, but the know-how will be theirs. (This will change at some point, and then we'll ask them to write science fiction.)

pashute, Nov 09 2020

https://www.youtube.com/watch?v=G6Z_S6hs29s YouTube: 14 Cool Apps Built on GPT-3
Some of these are cooler than others, but there's bash, react, html, css and other design/simple coding applications where GPT-3 accepts user input in natural language, and outputs functioning code. [zen_tom, Nov 09 2020]

https://www.youtube.com/watch?v=_8yVOC4ciXc YouTube: GPT3: An E...del - Computerphile
Primer on GPT-3, a humongous pre-trained generalised language model with surprisingly good adaptation to what we humans normally consider quite specific tasks. [zen_tom, Nov 09 2020]







       Teach it to say "Well, it works on my machine".
pertinax, Nov 09 2020
  

       Didn't they try something like this, with the result of it "learning" about Pizzagate or some other nuttiness? I thought I had read something like that.
RayfordSteele, Nov 09 2020
  

       Will you end up having to teach it Phenomenology ?
8th of 7, Nov 09 2020
  

       I think I've seen some GPT-3 work in this area - will try and dig it out as a link.
zen_tom, Nov 09 2020
  

       This is already science fiction. Or fantasy. One small step above posting "let's make a computer that can fix any problem because it's on the internet"

       People have been trying to accomplish exactly this for decades, and often enough using exactly this approach.
Voice, Nov 09 2020
  

       "Bot learning from the internet" has been tried, and it failed. Because it learned from humans, and some humans are idiots (especially when they're on the internet...).
Found it: TayTweets. Had to be shut down because of idiots teaching it stupid stuff.
(Similarly, there was a Google sketch learning thing that was useless, because people were giving it really crap (and often completely wrong) sketches of the objects it requested...)
neutrinos_shadow, Nov 09 2020
  

       In answer to [neutrinos] and [Voice]: during the main stages of learning, which may take a very long time and are an essential part of the ASLP setup, it is a tool for the mentors, who are teaching it how to learn and putting it back on track when things go awry.

       And then there are [zen_tom]'s links.
pashute, Nov 11 2020
  

       From the YouTube video in [zen_tom]'s link about GPT-3:

       [00:10:20] but what about things like scientific papers? If you fed it enough science, enough scientific papers, do you think, could it come up with something that we've not really realized before? Or something new?   

       [00:10:29] Yeah, so my instinct, is to say NO. It's just predicting the next word. Right? It's just a language model... It doesn't have the ability to build the kind of abstract mental structures that you need in order to actually synthesize new um knowledge.   

       [00:10:56] BUT there's a kind of an outside view that says that we thought that about a bunch of things that it now seems to be doing so... I'm not gonna say that it definitely couldn't do that. So one example of a task which it got better at, tremendously better at, is arithmetic. Which is kind of an interesting task, because again, it's a language model, not trying to do arithmetic, it's not designed to do arithmetic. But in GPT2 if you put in two plus two equals, and get it to give you the next token, it would give you a four. But that's not very surprising. Like, that's not very impressive. Because you would expect to see in its dataset the words two plus two equals four very many times. That's pure memorization...

       ...[00:15:14] for 3-digit addition and subtraction, again it's getting like 80%, 90%, and that's a big jump from the smaller models.   

       ...[00:15:24] what they're suggesting in the paper is that it has actually learned how to learn... like that's the interpretation that they're pushing.   
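
       You can actually try the GPT-2 "two plus two" test yourself. A minimal sketch, assuming the Hugging Face transformers library and its downloadable gpt2 model, neither of which is mentioned in the quoted interview:

# Reproduce the next-token arithmetic test discussed above: greedy
# decoding of one extra token after "Two plus two equals" with GPT-2.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Two plus two equals", max_new_tokens=1, do_sample=False)
print(result[0]["generated_text"])   # typically "Two plus two equals four"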

       -----------------------   

       But no. I think a dedicated model, with terms from programming and graphics and ways of solving certain known tasks, could be achieved, ALONG with a GPT-3-like model for all language, which would assist (but only assist) the ASLP and be one of (but only one of) its many tools at hand.
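
       A rough Python sketch of that division of labour, with every name hypothetical: the small dedicated programmer model owns the plan, and the GPT-3-like language model is just one stubbed-out tool on its belt, next to search and testing:

# Toy sketch: the dedicated model plans and dispatches; the large
# language model only assists, as one tool among several. All of
# these functions are made-up stand-ins.

def language_model(text):        # GPT-3-like assistant (stub)
    return f"[LLM draft for: {text}]"

def code_search(text):           # web/code search (stub)
    return f"[search notes on: {text}]"

def run_tests(text):             # known-task verification (stub)
    return f"[tested: {text}]"

TOOLS = {"search": code_search, "draft": language_model, "test": run_tests}

def aslp_solve(request):
    # The dedicated model owns the plan; the tools are only consulted.
    plan = ["search", "draft", "test"]
    artifact = request
    for step in plan:
        artifact = TOOLS[step](artifact)
    return artifact

print(aslp_solve("speech recognition for Hebrew"))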
pashute, Dec 20 2022
  
      