New processors, such as the Cell under development by IBM/Sony, increasingly favour having several separate processors on one piece of silicon. The Cell has a 64-bit main processor controlling eight 32-bit slave processors.
When a chip is produced, defects in the silicon can mean that it doesn't work properly. Even if 99.9% of its transistors are properly formed, the chip is useless and can't be sold. The bigger the chip, the higher the chance of such a defect. For a multi-core chip the problem gets worse - for example, you could wind up binning a chip whose master and seven slaves worked, all because of a major defect in the eighth slave.
I propose that such multi-core chips be graded according to how many of their cores function. This could be achieved with bridges on the chip surface, one per core, that could be laser-cut to indicate to the system which cores do not work. This is similar to how a chip's operating speed is set: it is tested at progressively higher speeds until its maximum is found, and bridges on the chip are then burnt to lock it to that speed.
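
A minimal sketch of how the system side might consume those bridges, assuming they are exposed as a memory-mapped fuse register (the register address, its name, and the bit layout here are all invented for illustration):

    #include <stdint.h>

    /* Hypothetical memory-mapped fuse register: one bit per slave core.
     * A set bit means the bridge is intact (core passed test); a clear
     * bit means the bridge was laser-cut to mark the core defective. */
    #define CORE_FUSE_REG ((volatile uint32_t *)0xFFFF0040u)
    #define NUM_SLAVES 8

    /* Called once at boot: collect the indices of usable slave cores. */
    static unsigned enumerate_good_cores(uint8_t good[NUM_SLAVES])
    {
        uint32_t fuses = *CORE_FUSE_REG;
        unsigned count = 0;
        for (unsigned i = 0; i < NUM_SLAVES; i++)
            if (fuses & (1u << i))
                good[count++] = (uint8_t)i;
        return count;   /* e.g. 7 on a chip sold as a "7/8" part */
    }
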
The lower-quality chips would be suitable for less demanding applications (the Cell chip has approximately ten times the power of a current P4, so even at 5/8 performance it's still quite something), and the improved yield would bring the overall cost of the chips down.
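
To put a rough number on the yield argument: if each slave core independently contains a killer defect with probability p, then requiring all eight to work discards every die that is only slightly flawed, while tolerating one dead slave recovers most of them. A quick back-of-the-envelope calculation (the 2% per-core defect rate is an assumption for illustration, not a real fab figure, and the model ignores defects in the master and shared logic):

    #include <stdio.h>
    #include <math.h>

    /* Probability that at least k of n independent cores are defect-free,
     * given a per-core defect probability p (simple binomial model). */
    static double yield_at_least(int n, int k, double p)
    {
        double total = 0.0;
        for (int good = k; good <= n; good++) {
            double c = 1.0;                 /* n choose good */
            for (int i = 0; i < good; i++)
                c = c * (n - i) / (i + 1);
            total += c * pow(1.0 - p, good) * pow(p, n - good);
        }
        return total;
    }

    int main(void)
    {
        double p = 0.02;  /* assumed per-core defect rate */
        printf("all 8 slaves good: %.1f%%\n", 100 * yield_at_least(8, 8, p));
        printf("7 or more good:    %.1f%%\n", 100 * yield_at_least(8, 7, p));
        return 0;   /* prints roughly 85.1% vs 99.0% */
    }
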
Something like this?
http://www.wired.co...,00.html?tw=rss.ITS [justaguy, May 19 2005]

I am lost. Can you find me?

Oh dear. Does this not make enough sense?

[donut], he's talking about chips in the humming thing under your desk, not the kind you eat. No, don't open the humming thing under your desk looking for a snack.

Good idea. If they aren't already planning this, they should.

The extent to which this improves yield would depend upon the types of failures that generally cause chips to be rejected. I would expect that in many cases, isolating the 'junk' CPU cores would require cutting quite a few wires, including the power traces. Not exactly trivial.

I thought they did this already. For example, I thought when they make 4.0 GHz chips, the ones that are not stable at 4.0 get underclocked to 3.8 GHz, and so on. I'm not sure, but I think it's been baked.

(psst, he mentions that in the idea)

[supercat], I was thinking of instructing the master not to send to/receive from slaves marked as unusable. Power would still be supplied, but the core would idle. I agree that big flaws might lead to the dead slave interfering with other units, though, and that could be a problem.
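
That dispatch policy is easy to sketch: the master reads the fuse mask once at boot and simply never addresses a slave whose bridge was cut. (The mask value, task type, and mailbox call below are all invented for illustration.)

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical fuse mask read at boot: bit i set => slave i usable.
     * 0xDF = 11011111 binary, i.e. slave 5 was laser-cut out. */
    static const uint32_t core_fuse_mask = 0xDFu;

    typedef struct { int id; } task_t;

    /* Stand-in for the real mailbox/DMA hand-off to a slave core. */
    static void send_to_slave(int slave, const task_t *t)
    {
        printf("task %d -> slave %d\n", t->id, slave);
    }

    /* Round-robin dispatch that skips dead slaves entirely; they stay
     * powered but idle, as described above. */
    static void dispatch(task_t *tasks, int ntasks)
    {
        int slave = 0;
        for (int i = 0; i < ntasks; i++) {
            while (!(core_fuse_mask & (1u << slave)))
                slave = (slave + 1) % 8;
            send_to_slave(slave, &tasks[i]);
            slave = (slave + 1) % 8;
        }
    }

    int main(void)
    {
        task_t tasks[10];
        for (int i = 0; i < 10; i++)
            tasks[i].id = i;
        dispatch(tasks, 10);
        return 0;
    }
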

Yes, speed-binning is already done quite extensively. This idea, however, seems to address functionality binning. If a part of the chip doesn't work, just disconnect or deactivate the failed part and sell it as a lower-functionality CPU.

Unfortunately, this too is already baked. It's called the "Celeron". When Intel produces a P4 processor with a failed on-chip cache, the cache is simply deactivated and the P4 is relabeled as a Celeron.

This idea is an extremely obvious extension of Celeron-style functionality binning, so I'll have to call it baked.

One difference between this and the Celeron binning is that the FPU doesn't need to have a data bus connection (at least I wouldn't expect it to). For optimal performance, there would be a separate data path between the main CPU and the FPU which would not be used for any other purpose. Thus, if the FPU is powered down, bus loading on this separate data path would be a non-factor. By contrast, if multiple CPUs have to share a data bus, isolating one could be more difficult. Adding bus drivers to isolate them would be possible, but it would also add an extra gate delay.

Optimal performance doesn't need to be such a concern though. This process would allow the use of chips that would otherwise be binned; they wouldn't be used in demanding applications and hence inefficiency wouldn't be too much of a problem.

The problem is that the buffers necessary to allow a chip to be useful in the presence of defects would impede performance even when there weren't defects.

I'd be very surprised if this isn't implemented somewhere down the line. It strikes me that it may be better to build the master and eight slaves as separate units and find a better (i.e. smaller) way of plugging them together than sockets on a PC.

Apparently this was prevalent in Russian CPU manufacture many years ago. Their reject rate was basically zero, as each chip shipped with a list of the opcodes that didn't function on it.
So if you come across a Russian engineer who wrote compilers - that's one hot guy, as the compiler had to be able to substitute combinations of working opcodes (e.g. invert, test>0) for non-functioning opcodes (e.g. test<0).
Maybe it's not such a good idea after all.
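
The kind of substitution such a compiler would have to do can be imagined as a peephole rule: when the per-chip defect list says a branch opcode is broken, emit an equivalent sequence built from working ones. A toy sketch (the opcode names, errata struct, and emitter are all invented; note that even this simple substitution has an edge case at the most negative integer, where negation overflows):

    #include <stdio.h>

    /* Per-chip defect list, of the kind shipped with each part. */
    struct chip_errata { int bltz_broken; };   /* "branch if < 0" unusable */

    static void emit(const char *op, const char *a, const char *b)
    {
        printf("    %s %s, %s\n", op, a, b);
    }

    /* Emit "branch to 'label' if register 'reg' < 0", working around
     * a broken BLTZ with NEG + BGTZ (invert, then test > 0). This
     * miscompiles the most negative integer, whose negation overflows
     * - the sort of wrinkle those compiler writers had to handle. */
    static void branch_if_negative(const struct chip_errata *e,
                                   const char *reg, const char *label)
    {
        if (!e->bltz_broken) {
            emit("bltz", reg, label);
        } else {
            emit("neg", reg, reg);
            emit("bgtz", reg, label);
        }
    }

    int main(void)
    {
        struct chip_errata good = {0}, bad = {1};
        branch_if_negative(&good, "r4", "Lneg");   /* one instruction   */
        branch_if_negative(&bad,  "r4", "Lneg");   /* two-op workaround */
        return 0;
    }
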

Already proposed 20 years ago (at least for memory chips) for wafer-scale integration by Ivor Catt.

The PS3 was recently launched at E3. It turns out that one of the eight slave processors on each chip will be disabled to improve yields.