halfbakery: The best idea since raw toast.
But if transistors are further apart, signals take longer to travel between them |
|
|
It is a remarkable world where we're up against speed-of-light
delays between micron-sized components. Surely, the
development of the first real computer cannot be far off. |
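The scale of the problem is easy to check with a back-of-envelope calculation. This sketch assumes vacuum light speed and an illustrative 5 GHz clock (real on-chip signals propagate several times slower than c, so the real picture is worse):

```python
# Back-of-envelope: how far can a signal travel in one clock cycle?
C = 3.0e8            # speed of light in vacuum, m/s
CLOCK_HZ = 5.0e9     # illustrative 5 GHz clock

cycle_s = 1.0 / CLOCK_HZ          # one clock period: 200 ps
reach_m = C * cycle_s             # distance light covers per cycle
print(f"Light travels {reach_m * 100:.0f} cm per cycle")  # → 6 cm

# By contrast, the delay across a 1-micron gap is tiny:
gap_delay_s = 1e-6 / C
print(f"1 micron gap: {gap_delay_s * 1e15:.1f} fs")       # → 3.3 fs
```

So at a few gigahertz, a signal can only cross a few centimetres of chip per cycle, which is why component spacing matters at all.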
|
|
//micron-sized components//
Not anymore they're not |
|
|
// Surely, the development of the first real computer cannot be far off.// |
|
|
must not bite, must not bite |
|
|
Thought this might be an idea for a method to
increase the distance between each chip in a
portion of fish and chips. |
|
|
No, you make the chips from aerated potato pulp. Higher ratios of aer to potatoe makes the chips less dense and reduces raw material cost. |
|
|
//Higher ratios of aer to potatoe // That reads as if it should
be read in a pirate voice. |
|
|
"Chicken pieces of eight ... chicken pieces of eight ..." |
|
|
I wonder if computers could be made faster with a sort of
"catch up" programming. |
|
|
For instance, a group of computer gates is waiting for an
input, and will then compute to give an output. But it takes a
while for that input to reach the gates. So, have the gates
calculate the output for either possible input and then, by the
time the input actually arrives, they can just output the
appropriate precalculated result. |
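The "catch-up" idea can be sketched in a few lines, assuming a single-bit input and a hypothetical slow gate network: compute the result for both possible input values while the real input is still in flight, then just select when it lands.

```python
# Toy sketch of the "catch-up" idea: while a 1-bit input is still in
# flight, precompute the result for both possible values, then select
# the right one the instant the input arrives.

def slow_gate_network(bit: int) -> int:
    # Hypothetical stand-in for a long cascade of gates.
    return (bit ^ 1) & 1

def precompute_then_select(arriving_bit: int) -> int:
    # Done ahead of time, before arriving_bit is known:
    precomputed = {0: slow_gate_network(0),
                   1: slow_gate_network(1)}
    # When the input finally lands, only a cheap lookup remains:
    return precomputed[arriving_bit]

print(precompute_then_select(0))  # → 1
print(precompute_then_select(1))  # → 0
```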
|
|
Can't transistors be 3D, though? An example would be an OR gate that raises a circuit going up one layer if true, and raises a circuit going down if false, as well as the standard output on the horizontal layer. Trees of circuits across, up, and down. |
|
|
More space opens up more connective branches. The minimum limit would be a 3x3x3 cube of transistors: 1 input, 25 outputs or 25 inputs, 1 output.

Give an AI the components and see what it builds. |
|
|
// have the gates calculate the output for either possible input and then, by the
time the input actually arrives, they can just output the appropriate
precalculated result. // |
|
|
// Yes, that was done several years ago (2003 for first WP article) - predictive
branching. // |
|
|
But that's on the level of instructions, not the level of logic gates. On the other
hand, I don't think there would necessarily be any speedup from doing this on the
gate level. If the gate or network of gates precomputes outputs for all possible
combinations of inputs, the propagation delay to choose which of those outputs
to give when the input arrives is probably the same as the propagation delay of
actually computing the output. And, assuming you don't throw away and
recompute this set of outputs between every presentation of inputs and the next
one (which seems like a silly thing to do), you've essentially turned your gate(s)
into a LUT, which is already known to be able to stand in for any arrangement of
gates with a given number of inputs and outputs anyway. |
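The point that precomputing every input combination just turns the gates into a lookup table can be made concrete. This sketch uses XOR built from four NAND gates as the example network (my choice, not from the thread):

```python
# Precomputing all input combinations turns a gate network into a LUT.
# Example: XOR built from four NAND gates, versus its 4-entry table.

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def xor_from_gates(a: int, b: int) -> int:
    # Classic 4-NAND construction of XOR.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Enumerate all inputs once; afterwards the "gates" are just this table.
LUT = {(a, b): xor_from_gates(a, b) for a in (0, 1) for b in (0, 1)}

print(LUT[(1, 0)])  # → 1
print(LUT[(1, 1)])  # → 0
```

Selecting one of the four precomputed entries still takes a propagation delay of its own, which is the point made above: the LUT is equivalent, not obviously faster.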
|
|
I suppose, if it turns out that a LUT is for some reason faster than actual gates in a given
application (a long cascade of gates?), you could have the actual gates compute
the output when a not-seen-before input is given, and store that in the LUT for
next time. But then you need a writeable LUT, which is more expensive (in
money, die space, and power) than a read-only one, and has no advantage over a
read-only one except in cases where a read-only one would have to be
impractically large, in which case a smaller but writeable one in parallel with
actual gates would have an advantage, but only for a subset of recently seen
inputs (i.e. a memoization cache considerably smaller than the set of all possible
inputs). But then you need additional processing of some sort (more gates?) to
manage what stays in the cache and what gets thrown out, unless you've invented
some sort of automatic-cache-managing ram that exploits physical principles to
track what data is least recently used, or something like that. |
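The "small writeable LUT in parallel with real gates" described above is essentially a bounded memoization cache with an eviction policy. A minimal sketch using Python's standard-library LRU cache (the `slow_gates` function and cache size are illustrative, not from the thread):

```python
# A bounded LRU cache memoizes recently seen inputs and evicts the
# least recently used entry -- the cache-management problem described
# above, handled here by the standard library.

from functools import lru_cache

CALLS = {"count": 0}

@lru_cache(maxsize=4)            # tiny cache: only 4 remembered inputs
def slow_gates(x: int) -> int:
    CALLS["count"] += 1          # counts trips through the "real gates"
    return (x * x) & 0xFF        # stand-in for the gate computation

for x in [1, 2, 1, 2, 1]:        # repeats are served from the cache
    slow_gates(x)
print(CALLS["count"])            # → 2 (only two distinct inputs computed)

for x in range(10):              # 10 distinct inputs overflow maxsize=4,
    slow_gates(x)                # so older entries get silently evicted
```

The eviction bookkeeping the comment worries about ("what stays in the cache and what gets thrown out") is exactly what the `maxsize` machinery does, and it is indeed extra work on top of the lookup itself.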
|
|
So what I am really saying is that it seems computing advances by packing more of the same dimensional method into smaller spaces, rather than using the space, which as [hippo] indicated is time, to create a computing advancement of higher dimensions. |
|
|
Well, apparently there are another 7 dimensions curled up
really small. We should probably fill them up. |
|
|
We probably already do. No? |
|
|
I was thinking cars and roads could be a really rough analogy to chip circuits. The cars all follow logic, making overall patterns. If it were now said that red cars had to make left turns and blue cars only right turns to destinations, is this adding another dimension? |
|
|
// Well, apparently there are another 7 dimensions curled
up really small. We should probably fill them up. // |
|
|
That's how sophons work. I wonder how sophons get energy,
and dispose of waste energy. |
|
|
//If it was now said that red cars had to make left turns and blue cars only right turns to destinations, is this adding another dimension?//
I think this would just lead to people getting out of their cars and respraying them by the side of the road in order to get to their destination faster. Or I suppose you could have two cars, red and blue, one on a trailer towed behind the other. Then when you wanted to change the direction you were allowed to turn you'd just have to stop and swap the cars over, covering up the towed car with a tarpaulin, of course. |
|
|
It is always the same trade-off: weighing up the time it takes to circumvent a rule, with its consequences, against the time of just following the rule. |
|
|
I never did get that folded up dimension thing. Either you have 7 dimensions or you don't. |
|