(To be read whilst waving your hands about)
This idea would benefit from a new type of memory, processor, and toolset, but fundamentally there's nothing too 'magic' about it. It's supposed to assist optimization - which is pretty clever stuff at the moment, but can always do with a hand. The toolset could use execution-feedback data to optimize the code in subsequent iterations of the build.
Most optimizers that I'm aware of use static techniques for analysing code execution. This method uses empirical measurements, i.e. what the code has actually done in the past.
1) Memory: Each word of program-space (instruction memory) has a shadow word used as an execution counter; collectively, we'll call this histogram-space. It's very similar to the way profilers work...
2) Whenever the processor executes an instruction from program-space (in, say, debug mode) it increments the counter in the associated location in histogram-space.
Its pipeline-filling algorithm could use the histogram-space data during prefetches, to give it a good clue as to which instruction is likely to be executed next. OR...
3) The toolset (compiler/optimizer/linker) is able to read the histogram-space after execution, to provide positive feedback to its optimization algorithm... e.g. to rearrange rarely used code so that it doesn't waste cache, or to ensure that branches generally fail (keeping the processor pipeline full).
The idea is that during development and testing, the processor is kept in debug-optimize mode (profiling, i.e. keeping track of which instructions are used most, and which branches are more / less likely to fire).
When the final build is done, the histogram data is analysed to ensure that branches are arranged to be as fast as possible.
(It's nearly a full-moon again, and I'm on a roll. Mars moving closer to Earth isn't helping either. )
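A minimal sketch of the profiling half of this, in Python, with a toy interpreter standing in for real hardware (the instruction set, program, and opcodes are invented purely for illustration): every fetch from program-space bumps the matching counter in histogram-space.

program = [
    ("SET", 5),     # 0: acc = 5
    ("JZ", 5),      # 1: if acc == 0, jump to instruction 5
    ("DEC", None),  # 2: acc -= 1
    ("JMP", 1),     # 3: loop back to the test
    ("NOP", None),  # 4: never reached - cold code
    ("HALT", None), # 5: stop
]
histogram = [0] * len(program)   # histogram-space shadows program-space

pc, acc = 0, 0
while True:
    op, arg = program[pc]
    histogram[pc] += 1           # the debug-mode increment on every fetch
    if op == "SET":
        acc, pc = arg, pc + 1
    elif op == "DEC":
        acc, pc = acc - 1, pc + 1
    elif op == "JZ":
        pc = arg if acc == 0 else pc + 1
    elif op == "JMP":
        pc = arg
    elif op == "NOP":
        pc += 1
    else:  # HALT
        break

for addr, (op, _) in enumerate(program):
    print(f"{addr}: {op:<4} executed {histogram[addr]} times")

After the run, the loop body shows high counts while the unreachable NOP stays at zero - exactly the data the toolset would want to feed back.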
Shazam music recognition
http://www.shazam.c...44C93436.webportal1 [Dub, Sep 25 2005]
I don't write compilers but I've used several. About the only difference I can speak to between this and a standard compiler is the ability to "rearrange rarely used code" (optimize). You're proposing that the compiler manage a histogram of code snippets, and incorporate the frequency distribution of those snippets into its optimization routine? Is it your concept for a distributed JIT compiler?
[reensure] Not JIT; more historical/heuristic, using past performance as an indicator of future performance. Not code snippets, either - instructions. I was thinking lower-level.
Most compilers I'm aware of look at the code, and arrange the instructions to:
1) Do what you want them to do...
2) In as //short-time/small-size// a form as possible (depending on your needs).
They tend to look at the code (statically), assume that it'd do one thing rather than another, and arrange the instructions to suit.
To my knowledge none actually look at what the code really does in real-life situations, and use that to tweak and adjust future performance... Normally, that's left up to the coder (you set your profiler onto the problem, see where all your processing time's going, then concentrate on those routines). The idea of this is that REAL run-time info is fed back in, to give big hints to the compiler/linker as to what code really needs looking at, and which never/hardly ever gets used - that stuff (well-written exception handlers, say) can be kept on HDD for all the processor cares, as long as there's room in the cache for the really useful stuff.
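A minimal sketch of that feedback step, assuming the histogram has already been aggregated into per-routine execution counts (the routine names, counts, and threshold are all invented):

profile = {                      # invented per-routine execution counts
    "main_loop":       1_000_000,
    "parse_input":        40_000,
    "flush_buffer":        3_000,
    "handle_overflow":         2,   # well-written, hardly ever fires
    "report_fatal":            0,
}
HOT_THRESHOLD = 1_000            # assumed cut-off; a real toolset would tune it

hot = sorted((r for r, n in profile.items() if n >= HOT_THRESHOLD),
             key=lambda r: -profile[r])
cold = [r for r, n in profile.items() if n < HOT_THRESHOLD]

# Hot routines pack together (cache-friendly); cold ones go to the end -
# on HDD for all the processor cares.
print("layout:", hot + cold)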
Sorry, I don't understand. Are we talking about [reensure]'s comment, here?
Am I allowed to refute what you said? //You ask more than you're willing to give// Not true. I donated an idea. No-one responded for a long time. (1:0 to HB). I'm not _winning_ here. I'm glad someone's responding, so I want to make sure they've understood what I said. Lord knows I don't expect to make the //best// table with this one!
I may have honestly misunderstood what he's suggesting... but it looks more like he wants clarification of my idea and isn't suggesting an alternative version. He thinks I'm talking about JIT. I'm saying this is an alternative - no, a different - idea. I could be wrong, though. [reensure]? (BTW, I just clarified my first response to him a little... but what I said echoes the original posting.)
Think nothing of it. As I said initially, my experience with compilers is limited to how they serve a writer of software in a high-level language. So far as I know, "smart" compilers do a bit of CPU time management -- they determine which processes may be more efficiently cached or calculated, which processes make the most calls, which processes are loaded (given register addresses?) long before they are run due to the way the underlying code is written, etc. Before the code can be run, or after the code has been run but needs tweaking.
I only guessed that your proposal was to develop a JIT (probably my poor word choice, since Just-in-time seems synonymous with Java-in-time) compiler or a distributed heuristic compiler that modifies the operating system (kernel?) as user data input varies across the network.
[reensure] JIT - Just-In-Time? I thought the compiler compiled what it needed JUST in time to use it, and not (as traditionally) before. I don't know very much about JIT at all, though. You might be right.
[reensure] Nope, nothing that complicated. Rather, I was thinking of the instruction pipeline of the processor... The idea requires each instruction executed to have an associated 'instruction-executed' counter (histogram), so that the toolset (compiler / optimizer / linker) could see just how frequently each instruction actually got used... and arrange the code accordingly next time the code's built... I really don't know if it's a good idea or not. It seems like it should be...
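To make 'arrange the code accordingly' concrete, here's one toy rewrite rule a toolset might apply, assuming the histogram has been refined into per-branch taken/fall-through counts (all figures invented): flip any branch whose taken path dominates, so the common case becomes the fall-through and the pipeline stays full.

branches = [
    # (label, times taken, times fell through) - invented figures
    ("loop_back_test", 999, 1),
    ("error_check",      3, 500),
]

for label, taken, fell in branches:
    if taken > fell:
        print(f"{label}: invert the condition and inline the target -"
              f" taken {taken}x vs fell through {fell}x")
    else:
        print(f"{label}: leave alone - fall-through already dominates")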
Good. I read your Idea right at first. My own thoughts took me down the garden path. Thanks.
Not sure about //Huffman compression being efficient way of encoding histograms//. Also not sure about 200 bytes for the zip algo. AFAICR the zip algo's a bunch of different algos, and zip selects the best one to use for the data... I doubt they'd all fit in < 200 bytes - That said, there's an old coding tale that one of the Mariner missions found one of the moons of Jupiter (or something - check snopes, I guess!) with an image-recognition algo they managed to fit in the last spare bytes of the 2K ROM the thing had! So who knows! :)
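For what it's worth, here's a minimal Huffman coder over a counter histogram, just to make the disputed claim concrete: frequent values (like the zeros of never-executed words) get short codes. The sample histogram is invented.

import heapq
from collections import Counter

def huffman_codes(freq):
    # Heap of (weight, tie-break, {symbol: code-so-far}); merging two
    # subtrees prepends a 0 to one side's codes and a 1 to the other's.
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

# Invented histogram-space dump: mostly zeros (never-executed words)
histogram = [0, 0, 0, 0, 0, 0, 0, 1, 1, 5, 5, 6]
codes = huffman_codes(Counter(histogram))
coded_bits = sum(len(codes[v]) for v in histogram)
print(codes)
print(f"{coded_bits} bits coded vs {32 * len(histogram)} bits raw")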
But if you imagine that in the time it takes to read a single byte of data from HDD, the processor/memory manager could have moved thousands, perhaps millions, of Kb of data, it starts to make sense to keep the data small on HDD and expand it in memory.
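A quick back-of-envelope check on that ratio, with deliberately rough, assumed figures for a machine of this era (~8 ms per HDD access, ~3 GB/s memory bandwidth):

hdd_access_s = 8e-3        # assumed average disk access time
mem_bandwidth = 3e9        # assumed sustained memory bandwidth, bytes/s
kb_moved = hdd_access_s * mem_bandwidth / 1024
print(f"~{kb_moved:,.0f} KB of RAM traffic per single disk access")
# ~23,438 KB - thousands of Kb, as suggested above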
That said, it depends what you're going to do with it... MP3 (lossy compression), for example, is great: it takes up a little bit of space and 'generates' minutes of good-quality audio - which is fine. It's not as high quality as raw data, but it takes up much less space, and humans can cope with the sound not being perfect. If you're writing a vast database of important data, and you're changing one or two values in the middle of it every now and again, obviously you can't use lossy compression techniques... And if all your data is compressed, you probably can't continuously (losslessly) decompress / compress all of it every time... You have to think about the application (e.g. use a cache-copy, and update the compressed data when you get a chance).
(Back towards being on topic) Typically optimizers try to get loops and regularly repeated tasks small enough to fit in the cache... I did think about posting an idea for CRAM (Compressed RAM), which was normal RAM but with a compressor/de-compressor built in... that way the silicon can remain small / low-power, but the RAM could appear to be bigger... but it's probably already baked FAIK.
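A minimal sketch of the CRAM idea - and of the 'cache-copy, update when you get a chance' pattern above - with pages held compressed and a small cache of decompressed working copies written back on eviction. The page size and cache depth are arbitrary choices for the sketch.

import zlib
from collections import OrderedDict

PAGE = 4096          # arbitrary page size for the sketch

class CompressedRAM:
    """Pages live compressed; a small LRU cache holds working copies."""
    def __init__(self, cache_pages=4):
        self.store = {}             # page number -> compressed bytes
        self.cache = OrderedDict()  # page number -> bytearray, LRU order
        self.cache_pages = cache_pages

    def _page(self, n):
        if n not in self.cache:
            if len(self.cache) >= self.cache_pages:
                # "Update the compressed data when you get a chance":
                # write the evicted page back, compressed.
                old, data = self.cache.popitem(last=False)
                self.store[old] = zlib.compress(bytes(data))
            raw = self.store.get(n)
            self.cache[n] = (bytearray(zlib.decompress(raw)) if raw
                             else bytearray(PAGE))
        self.cache.move_to_end(n)
        return self.cache[n]

    def read(self, addr):
        return self._page(addr // PAGE)[addr % PAGE]

    def write(self, addr, value):
        self._page(addr // PAGE)[addr % PAGE] = value

ram = CompressedRAM()
ram.write(123456, 42)
print(ram.read(123456))   # 42, served from a decompressed cache page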
There was 'the Thumb', tho... That was an addition to the ARM (originally Acorn RISC Machine, now owned by Intel and called something else!). It compressed the (normally 32-bit) word instructions in a Huffman-like way, I think (well, the more regularly used instructions were smaller than the less regularly used ones...)... I seem to recall it was blindingly fast at executing Java bytecode, too...
Intel Kryptillion? (Not googleable.) Presumably it learns how programs run (in some way similar to this). I guess that's good... or is that the CRAM thing... I imagined that one had gone by now.
MP3 stuff... Hmm. But are you remembering your psophometrics as well? :) I've a funny feeling that variable-bitrate MP3 does that - doesn't it?
... I wanted to do something similar with JPGs... compress until the differences were minimal. I wondered if the same idea'd be handy for cheap-and-cheerful image recognition... if a piccy of a face won't compress any more than a certain amount (because of its noisiness)... Never had time... I have now, tho. :) (See the sketch below.)
Do you get Dr. Dobb's? There're a few interesting articles in there (there was a very interesting article on Duff's device you might like). Also, I've a good book somewhere (Addison-Wesley I think. Red, anyway.) on the subject... I find it interesting, but my maths lets me down.
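A minimal sketch of that compressibility heuristic, treating how far an image's raw bytes deflate as a crude complexity signature. The images, the threshold, and the verdicts are invented for illustration; a real test would calibrate against actual photos.

import random
import zlib

def compression_ratio(pixels: bytes) -> float:
    # Smaller ratio = more redundancy; noisy images stay near 1.0
    return len(zlib.compress(pixels, 9)) / len(pixels)

random.seed(1)
flat = bytes(128 for _ in range(64 * 64))                     # blank wall
noisy = bytes(random.randrange(256) for _ in range(64 * 64))  # pure noise

for name, img in (("flat", flat), ("noisy", noisy)):
    r = compression_ratio(img)
    print(f"{name}: ratio {r:.2f} ->",
          "busy/noisy (face-like?)" if r > 0.5 else "low detail")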
Take a look at Shazam [link] - music recognition from your phone - and apparently it works really well. Just dial 2580 and let it listen, and it texts you back.
//At least aiming high and bodging proves that things are possible// Ah, it's nice to know I'm not alone.