New processors, such as the Cell under development by IBM/Sony, increasingly favour having several separate processors on one piece of silicon. The Cell has a 64-bit main processor controlling eight 32-bit slave processors.
When a chip is produced, defects in the silicon it's made from can mean that it doesn't work properly. Even if 99.9% of its transistors are properly formed, the chip is useless and can't be sold. The bigger the chip, the higher the chance of such a defect. For a multi-core chip the problem gets worse - for example, you could wind up binning a chip whose master and seven slaves worked, all because of a major defect in the eighth slave.
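To make the yield argument concrete, here is a rough back-of-envelope calculation in C. The 95% per-core figure is an assumption for illustration only, not a real defect rate; the point is simply that demanding nine perfect cores throws away a large share of otherwise usable chips.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Assumed (illustrative) probability that any one core is defect-free. */
    double p = 0.95;

    /* Chip only sold if the master and all eight slaves work. */
    double all_nine = pow(p, 9);

    /* Chip sold in a lower grade if the master and at least
       seven of the eight slaves work. */
    double seven_plus_slaves = pow(p, 8) + 8.0 * pow(p, 7) * (1.0 - p);
    double master_and_seven_plus = p * seven_plus_slaves;

    printf("all nine cores good:      %.1f%%\n", 100.0 * all_nine);
    printf("master + >=7 slaves good: %.1f%%\n", 100.0 * master_and_seven_plus);
    return 0;
}
```

With those made-up numbers, about 63% of chips pass a "perfect or nothing" test, while roughly 90% could be sold if one dead slave were tolerated.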
I propose that such multi-core chips be graded according to how many of their cores function. This could be achieved with bridges on the chip surface, one per core, that could be laser-cut to indicate to the system which cores did not work. This is similar to how a chip's operating speed is set: the chip is tested at progressively faster speeds until its maximum speed is found, and bridges on the chip are then burnt to lock it to that speed.
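A minimal sketch of how the system side might use those burnt bridges, assuming they are exposed to the master as a simple fuse register with one bit per slave. The register address, width and bit meaning here are all invented for illustration, not taken from any real chip:

```c
#include <stdint.h>

/* Hypothetical fuse register: one bit per slave core,
   1 = bridge intact = core tested good, 0 = bridge cut = core dead.
   The address is made up for this sketch. */
#define CORE_FUSE_REG (*(volatile uint8_t *)0xFFF00010u)

static uint8_t usable_slaves;   /* bitmask of slaves the master may use */

/* Read the fuse bits once at boot. */
void detect_slaves(void) {
    usable_slaves = CORE_FUSE_REG;
}

/* Dispatch work only to slaves whose bridge was left intact;
   dead slaves are simply never addressed and sit idle. */
int next_working_slave(int start) {
    for (int i = 0; i < 8; i++) {
        int slave = (start + i) % 8;
        if (usable_slaves & (1u << slave))
            return slave;
    }
    return -1;   /* no working slave at all */
}
```

The chip's grade would then just be the number of bits set in the mask: an 8-slave part, a 7-slave part, and so on.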
The lower-quality chips would be suitable for less demanding applications (the Cell chip has approximately ten times the power of a current P4, so even at 5/8 performance it's still quite something), and the improved yield that resulted would bring the overall cost of the chips down.
-- david_scothern, Feb 24 2005

Something like this? http://www.wired.co...,00.html?tw=rss.ITS
[justaguy, May 19 2005]

I am lost. Can you find me?
-- missingdonuts, Feb 24 2005

Oh dear. Does this not make enough sense?
-- david_scothern, Feb 24 2005

[donut], he's talking about chips in the humming thing under your desk, not the kind you eat. No, don't open the humming thing under your desk looking for a snack.
Good idea. If they aren't already planning this, they should.
-- Worldgineer, Feb 24 2005

The extent to which this improves yield would depend upon the types of failures that generally caused chips to be rejected. I would expect that in many cases, isolating the 'junk' CPU cores would require cutting quite a few wires, including the power traces. Not exactly trivial.
-- supercat, Feb 24 2005

I thought they did this already. For example, I thought when they make 4.0 GHz chips, the ones that are not stable at 4.0 get underclocked to 3.8 GHz, and so on. I'm not sure, but I think it's been baked.
-- darkboy115, Feb 24 2005

(psst, he mentions that in the idea)
-- Worldgineer, Feb 24 2005

Supercat, I was thinking to instruct the master not to send to/receive from slaves marked as unusable. Power would still be supplied, but the core would idle. I agree that big flaws might lead to the dead slave interfering with other units, though, and that could be a problem.
-- david_scothern, Feb 24 2005

Yes, speed-binning is already done quite extensively. This idea, however, seems to address functionality binning. If a part of the chip doesn't work, just disconnect or deactivate the failed part and sell it as a lower-functionality CPU.
Unfortunately, this too is already baked. It's called the "Celeron". When Intel produces a P4 processor with a failed on-chip cache, the cache is simply deactivated and the P4 is relabeled as a Celeron.
This idea is an extremely obvious extension of Celeron-style functionality binning, so I'll have to call it baked.
-- Freefall, Feb 24 2005

One difference between this and the Celeron binning is that the FPU doesn't need to have a data bus connection (at least I wouldn't expect it to). For optimal performance, there would be a separate data path between the main CPU and the FPU which would not be used for any other purpose. Thus, if the FPU is powered down, bus loading on this separate data path would be a non-factor. By contrast, if multiple CPUs have to share a data bus, isolating one could be more difficult. Adding bus drivers to isolate them would be possible, but it would also add an extra gate delay.
-- supercat, Feb 26 2005

Optimal performance doesn't need to be such a concern, though. This process would allow the use of chips that would otherwise be binned; they wouldn't be used in demanding applications, and hence inefficiency wouldn't be too much of a problem.
-- david_scothern, Feb 26 2005

The problem is that the buffers necessary to allow a chip to be useful in the presence of defects would impede performance even when there weren't defects.
-- supercat, Feb 27 2005

Would be very surprised if this wasn't implemented somewhere down the line. It strikes me that it may be better to build the master and eight slaves as separate units and find a better (i.e. smaller) way of plugging them together than sockets on a PC.
-- wagster, Feb 27 2005

Apparently this was prevalent in Russian CPU manufacture many years ago. Their yield was basically zero, as each chip shipped with a list of the opcodes that didn't function on it. So if you come across a Russian engineer who wrote compilers, that's one hot guy, as the compiler had to be able to substitute combinations of working opcodes (e.g. invert, test > 0) for non-functioning opcodes (e.g. test < 0). Maybe it's not such a good idea after all.
-- greenie, Feb 28 2005

Already proposed 20 years ago (at least for memory chips) for wafer-scale integration by Ivor Catt.
-- AbsintheWithoutLeave, Feb 28 2005

The PS3 was recently launched at E3. Turns out that of the 8 slave processors, one on each chip will be disabled to improve yields.
-- david_scothern, May 19 2005
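As a footnote to greenie's anecdote, the kind of rewrite such a compiler would have to perform can be shown in a few lines of C. This is purely illustrative: it assumes two's-complement integers and swaps a hypothetically broken "test < 0" for an invert followed by a working "test >= 0", which is close to, but not exactly, the invert-plus-test-greater-than-zero pairing greenie mentions.

```c
/* Straightforward version, relying on the "less than zero" test
   that is assumed broken on this particular chip. */
int is_negative(int x) {
    return x < 0;
}

/* Substitute: in two's complement, x < 0 exactly when ~x >= 0,
   so a bitwise invert plus a working "greater than or equal to zero"
   test gives the same answer without ever using "<". */
int is_negative_substitute(int x) {
    return ~x >= 0;
}
```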