Halfbakery: Clearly this is a metaphor for something.
This is an idea I had back around 1990. Halfbakery is such a wonderful place to publish these ideas.
Before I describe the idea, I have to express some of my own reservations: first of all, a friend who is smarter than I am and who knows more about math and computer science than I do told me back then that it's impossible, so maybe it is. I wouldn't be surprised if there is someone here who can confirm that.
My idea was that it might be possible to develop an algebra for code: a system to describe code in terms of its input and output and to create a formula that describes the code. Then with our new algebra we can reduce our process formula to
its simplest form.
If you could do this, even if only for elements of the code, you could analyze code: turn it into a formula that describes the code, then reduce that formula. Since you know the properties of your processor, you could then find the optimal code to produce that output from that input. You would have a way to turn a poorly written program into the fastest possible algorithm with the same input and output. Bugs, i.e. incorrect calculations, would still be in that code, but it would be optimally fast. Then you could make a program that does this to entire programs and to the operating system. And of course you would optimize the program itself.
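As a toy illustration of what "reducing a formula to its simplest form" might look like for code, here is a minimal Python sketch (entirely hypothetical, not part of the original idea) that applies a few algebraic identities to an expression tree and emits the simplified form:

```python
import ast

def simplify(node):
    """Apply a few algebraic identities bottom-up: constant folding,
    x*1 -> x, x*0 -> 0, x+0 -> x. A toy stand-in for a real algebra."""
    if isinstance(node, ast.BinOp):
        l, r = simplify(node.left), simplify(node.right)
        if isinstance(node.op, ast.Add):
            if isinstance(l, ast.Constant) and isinstance(r, ast.Constant):
                return ast.Constant(l.value + r.value)
            if isinstance(l, ast.Constant) and l.value == 0:
                return r
            if isinstance(r, ast.Constant) and r.value == 0:
                return l
        if isinstance(node.op, ast.Mult):
            if isinstance(l, ast.Constant) and isinstance(r, ast.Constant):
                return ast.Constant(l.value * r.value)
            for a, b in ((l, r), (r, l)):
                if isinstance(a, ast.Constant):
                    if a.value == 0:
                        return ast.Constant(0)
                    if a.value == 1:
                        return b
        return ast.BinOp(l, node.op, r)
    return node

expr = ast.parse("x * 1 + 0 + 2 * 3", mode="eval").body
reduced = ast.unparse(simplify(expr))
print(reduced)  # x + 6
```

A real "algebra for code" would of course need identities for loops, branches, and memory, which is where the difficulty lies.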
You know what, maybe this isn't an invention, just a "wouldn't it be nice if" fantasy. I don't have the capability to make this real, and that's possibly kind of a giveaway. I thought of this when I had an NEC V20 based PC-XT that ran at around 7 or 9 MHz (with a neat little NMI button that was great for debugging). Computers are so fast now that it seems CPU speed doesn't matter much anymore, except for special applications.
lambda calculus
http://en.wikipedia...iki/Lambda_calculus this may be similar to what you're talking about, or at least a starting point for further searching. [xaviergisz, Oct 19 2006]
Superoptimization
http://en.wikipedia...i/Superoptimization Finding the optimal code sequence for producing output from input. This Wikipedia entry is frustratingly thin, but at least it links to some papers. [jutta, Oct 19 2006]
Wikipedia: Halting Problem
http://en.wikipedia...iki/Halting_Problem The impossibility of your system in the general sense is a corollary of this. [jutta, Oct 19 2006]
PEPA
http://www.dcs.ed.ac.uk/pepa/ Performance Evaluation Process Algebra [Jinbish, Oct 19 2006]
Algebralang
https://github.com/samsquire/algebralang Algebraic programming language [chronological, Jul 20 2022]
Annotation:
Languages where
1. the programs are defined in terms of the mathematical relationship between inputs and outputs and
2. the implementation is left to the program compiler/interpreter
...exist.
As I remember, they tend to produce slower code than other languages, not faster code.
I forgot in the description: the ability to analyze existing code. I always imagined this would work with machine code. I have added it.
[pertinax] the only language that I know of that approaches what you mention is Prolog. Is this what you had in mind? I don't know anything about the speed difference with purposely written procedural code, but I believe there are Prolog compilers that produce fast code.
Code of just about any programming language is a description of itself. C, ML, LISP, Prolog, machine language, it makes no difference, other than in matters of convenience.
The job of an optimizer is to replace machine code with a faster version that has the same observable results. What is fastest depends on how something is actually used - for different inputs, the same algorithm can have radically different runtimes.
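To make that concrete, here is a hedged sketch of one classic optimizer technique, a peephole pass. The instruction set and rewrite rules are invented for illustration; real compilers do this over actual machine instructions:

```python
def peephole(code):
    """One pass over a toy instruction list: drop no-ops and
    strength-reduce a multiplication by a power of two into a shift."""
    out = []
    for op, reg, val in code:
        if op == "add" and val == 0:
            continue                      # add r, 0 changes nothing
        elif op == "mul" and val == 8:
            out.append(("shl", reg, 3))   # mul by 8 -> shift left by 3
        else:
            out.append((op, reg, val))
    return out

prog = [("mul", "ax", 8), ("add", "ax", 0), ("add", "ax", 5)]
print(peephole(prog))  # [('shl', 'ax', 3), ('add', 'ax', 5)]
```

The rewritten sequence has the same observable result on the register, which is exactly the property an optimizer must preserve.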
"Superoptimization" tries to solve this problem for short pieces of code without branches. In the presence of loops and input, things get more difficult.
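A brute-force superoptimizer for straight-line code fits in a few lines: enumerate all instruction sequences in length order and return the first one that matches the target function on every test input. The tiny op set here is invented for illustration:

```python
import itertools

OPS = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "neg":    lambda x: -x,
}

def superoptimize(target, tests, max_len=4):
    """Return the shortest op sequence computing `target` on all test
    inputs, trying every sequence of length 1, 2, ... up to max_len."""
    for n in range(1, max_len + 1):
        for seq in itertools.product(OPS, repeat=n):
            def run(x, seq=seq):
                for op in seq:
                    x = OPS[op](x)
                return x
            if all(run(x) == target(x) for x in tests):
                return seq
    return None

# shortest way to compute 2x + 2 from the ops above
best = superoptimize(lambda x: 2 * x + 2, tests=range(-5, 6))
print(best)  # ('inc', 'double')
```

The search space grows exponentially with sequence length, which is why real superoptimizers only handle very short, branch-free fragments.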
So, minus the "perfection" part, this is already what's going on - although at a smaller scale.
To prove the impossibility of the enterprise in general, write a program P that calls the optimizer O on its own optimized binary code P, then compares the outcome O(P) with the input P. If the optimizer changed anything, have the program terminate; otherwise, loop and run again.
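The self-referential construction can be made concrete with a Python sketch. Everything here is illustrative: `optimize` is a stand-in, and "loop forever" is returned as a label rather than actually looping.

```python
def paradox(optimize, own_source):
    """The program P from the argument: compare the optimizer's output
    on P's own code with that code, and do the opposite of what an
    equivalence-preserving optimizer would require."""
    if optimize(own_source) != own_source:
        return "terminate"      # O changed something -> P halts at once
    return "loop forever"       # O left P alone -> P never halts

# A stand-in optimizer that declares every input already optimal:
identity_opt = lambda src: src
print(paradox(identity_opt, "source of P"))  # loop forever
```

A perfect optimizer is trapped either way: leave P unchanged and P becomes an unending loop the optimizer failed to recognize; change P and the "optimized" output no longer behaves like the original.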
Apart from Prolog, I was thinking of 'functional' programming languages, such as Orwell. ('Functional' doesn't just mean that you can make function calls - it means that everything the program does is coded in terms of functional relations from domains to co-domains, or something like that.)
I had the same idea as this. Except I called it loop algebra.
I too think programs can be represented as algebra and optimised. I even wrote some example programs in a language that uses algebra as its foundation.
Princess Algebra: an algebra in which each unknown discovers that she was a princess all along, deep down.
Analgebra: A bra that numbs overly sensitive breasts when breastfeeding.