
CPU Bus Interconnect

Join motherboards together to form a Beowulf-ish machine, but with memory sharing.
  (+2)

The Beowulf cluster is a marvelous invention, but its usefulness is limited by the relatively minuscule bandwidth of even the best network interfaces. For applications requiring memory sharing, conventional "big iron" supercomputers are still used. I propose a standard PC motherboard with card-edge connectors on both sides: a socket on one edge and a header on the other. These would allow several motherboards to share a CPU bus, so memory sharing, common access to a single set of peripherals, and so on could be implemented. The components would all have to be of the same type, but would be quite cheap.
dsm, Jan 07 2003
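
A minimal sketch of the memory-sharing side of this, in Python. The Board/SharedBus names, the flat board-interleaved address map, and the 1 MB window size are all assumptions for illustration; the idea above doesn't specify any of them.

# Toy model of the proposed shared-bus address map (hypothetical names).
# Each board contributes a window of its local RAM to one flat physical
# address space; any CPU on the shared bus can read or write any address.

BOARD_RAM = 1 << 20   # 1 MB per board for the toy; the thread assumes 128 MB

class Board:
    def __init__(self, board_id):
        self.board_id = board_id
        self.ram = bytearray(BOARD_RAM)

class SharedBus:
    def __init__(self, boards):
        self.boards = boards

    def read(self, addr):
        # The high part of the address selects the board, the low part
        # selects the byte within that board's RAM window.
        board, offset = divmod(addr, BOARD_RAM)
        return self.boards[board].ram[offset]

    def write(self, addr, value):
        board, offset = divmod(addr, BOARD_RAM)
        self.boards[board].ram[offset] = value

bus = SharedBus([Board(i) for i in range(4)])
bus.write(3 * BOARD_RAM + 42, 0xAB)            # CPU 0 writes into board 3's RAM
assert bus.read(3 * BOARD_RAM + 42) == 0xAB    # any other CPU sees the same byte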

(?) Dual ported RAM http://www.cypress....oducts/CY7C056V.pdf
Dual port, commercial [kbecker, Oct 04 2004]

(?) Quad ported RAM http://www.kip.uni-...ions/SRAM-Paper.pdf
Download the paper if you want it. It may soon be gone. [kbecker, Oct 04 2004]

       We are parallel-processing Borg. Resistance is futile...
RayfordSteele, Jan 08 2003
  

       Some systems are kind of like this; check out the latest SGI systems. It's just not available on Intel-based systems, which are the most common and actually among the worst architectures available.   

       I say baked.
ironfroggy, Jan 08 2003
  

       I like the idea, but how would conflicts be resolved?
For instance, suppose 2 processors want to read memory at the same time. They could take it in turns, but then memory access would slow down the more you connected together. (You could have separate busses for local and external data to reduce the problem).
  

       I guess this sort of thing tends towards having packet-based data transfer, which is basically what you are trying to avoid.
Loris, Jan 08 2003
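
A toy illustration of [Loris]'s point about taking turns: with a single shared bus and round-robin arbitration, average memory latency grows roughly linearly with the number of CPUs contending. The cycle counts and the arbiter below are made up for illustration, not a model of any real bus.

# Toy round-robin bus arbitration: N CPUs each issue memory accesses, but
# only one can use the shared bus per cycle, so with N CPUs each access
# ends up costing roughly N cycles.

def average_latency(num_cpus, accesses_per_cpu=1000):
    pending = [accesses_per_cpu] * num_cpus   # accesses left per CPU
    waited = [0] * num_cpus                   # cycles each CPU spent stalled
    turn = 0
    while any(pending):
        while pending[turn] == 0:             # skip CPUs that are finished
            turn = (turn + 1) % num_cpus
        pending[turn] -= 1                    # this CPU gets the bus this cycle
        for cpu in range(num_cpus):           # everyone else with work stalls
            if cpu != turn and pending[cpu]:
                waited[cpu] += 1
        turn = (turn + 1) % num_cpus
    return 1 + sum(waited) / (num_cpus * accesses_per_cpu)

for n in (1, 2, 4, 8):
    print(n, "CPU(s):", round(average_latency(n), 2), "cycles per access")
    # prints roughly 1, 2, 4 and 8 cycles per access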
  

       // We are parallel-processing Borg. Resistance is futile... //   

       HEY ! That's OUR line !!   

       Stuff like this is under development using high bandwidth optical links to remove the network bottlenecks.
8th of 7, Jan 08 2003
  

       not baked - SGI and its ilk cannot be considered everyday hardware. What I proposed is a new feature for standard PC motherboards. All it would cost is the price of two card-edge connectors and a cheap IC to eliminate the electrical nightmares.
dsm, Jan 08 2003
  

       So really, all this idea amounts to is a faster way to connect your machines?   

       I've often wondered if it's possible to just directly wire the PCI slots of two machines together - perhaps some circuitry between them, but as little as possible. If one machine could raise IRQs on the other for transferring data, rates would be amazing.   

       Or, maybe a connector between memory slots?
ironfroggy, Jan 08 2003
  

       Maybe... maybe... another way of connecting these new "plug 'n' share" motherboards:   

       Motherboards share the CPU bus. Maybe we could have a MASTER motherboard with 1 CPU, RAM, HDs, PCI connectors, USB, etc... and then a lot of SLAVE motherboards, with 1 CPU and xxx MB of RAM each.   

       Now the tricky thing... let's say each motherboard has a 32-bit bus and 128 MB RAM... the motherboards could be connected really in parallel... so if you have 4 motherboards (master included), you would have a PC with a 128-bit bus (32x4)... and 512 MB RAM. So... let's say we have a 1 GHz CPU with 128 MB RAM... instead of buying a 2.4 GHz CPU, we could buy 7 slave motherboards... and maybe we could have a PC with a 256-bit bus.   

       I don't really know which is faster... a PC running at 2.4 GHz with a 32-bit bus, or a PC running at 1 GHz with a 256-bit bus.   

       Maybe manufacturers could then think of a way of developing a motherboard with... let's say... a 512-bit bus... or a 1-Kbit bus (well, it would also need a CPU with the same bus width).   

       This kind of hardware would be fast for huge, highly parallel workloads... on the other hand, the bottleneck would be on linear (serial) calculations...
NickHunter, Jun 14 2003
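
Spelling out [NickHunter]'s arithmetic with the numbers used above (nothing here is measured; it is just the aggregation as described):

# Boards ganged in parallel add their bus widths and their RAM.
bus_width_bits = 32   # per board, as assumed above
ram_mb = 128          # per board, as assumed above

for boards in (1, 4, 8):
    print(f"{boards} board(s): {boards * bus_width_bits}-bit bus, "
          f"{boards * ram_mb} MB RAM")
# 1 board :  32-bit bus,  128 MB
# 4 boards: 128-bit bus,  512 MB
# 8 boards: 256-bit bus, 1024 MB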
  

       The bottleneck is those interconnects on the printed circuit board and the pins on the RAM chips. Otherwise it would work just fine. I tried it with 8-bit CPUs and multiported RAM. That worked fine, but you couldn't afford 1 GByte of dual- or quad-ported RAM (see links). Otherwise the motherboard manufacturers would not do all those patches with "Northbridge" and "Southbridge." It would just be one RAM port each for the CPU, graphics, storage, and the PCI bus.   

       [NickHunter] Go for the 1 GHz, 256-bit bus. In my test it looked like the product Frequency * Bus_Width determined the performance.
kbecker, Jun 14 2003
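
Taking [kbecker]'s rule of thumb that performance tracks the product Frequency * Bus_Width, the two configurations [NickHunter] asks about compare as follows. This is a back-of-the-envelope peak-bandwidth proxy only; real performance also depends on latency, caching, and how parallel the workload is.

# Rough throughput proxy: frequency (GHz) * bus width (bits) / 8 bits per byte
# gives a peak transfer rate in GB/s, ignoring all overheads.

def throughput_proxy(freq_ghz, bus_width_bits):
    return freq_ghz * bus_width_bits / 8

print(f"2.4 GHz x  32-bit: {throughput_proxy(2.4, 32):.1f} GB/s peak")    # 9.6
print(f"1.0 GHz x 256-bit: {throughput_proxy(1.0, 256):.1f} GB/s peak")   # 32.0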
  

       Some late-80s to early-90s Macintoshes had a PDS (Processor Direct Slot), which was exactly this - a pin-for-pin connection to the CPU. I read somewhere recently that at least one person had built a parallel system out of 16 MHz SE/30s.   

       Can I be bothered digging up the link? Nope. But a croissant for you.   

       A PCI based system would not be as fast as what you are proposing, which in modern terminology would be called linking the Front Side Buses if I'm not mistaken. The PCI bus is well downstream of there.   

       The cache slot might be a candidate - if there is one.
BunsenHoneydew, Mar 17 2004
  

       I have, on and off for several months now, been considering a computer that would consist of two (preferably heterogeneous) motherboards and CPUs in one case, operating together as one computer. I had been thinking of just using Ethernet to interconnect them, but PCIe or FSB would likely be better. Ethernet is easier for a prototype though.   

       Anyway, the two sub-computers would run some kind of hypervisor thing that would combine them into a cluster, and then a VM containing the OS you actually want to use would run on the cluster. The hypervisor would use machine learning to determine which tasks to run on which CPU, to optimize to their different strengths.   

       I don't remember for sure, but I think I planned to make the PCIe bus switchable between the two motherboards on a per-card basis (using a device that goes between the motherboards and the cards), so they could share video cards and such. And since those would just be forwarded into the VM anyway, I guess the hypervisor could just switch each card to whichever motherboard had the better weighted combination of less load at the moment and better performance with that card, and it would just work transparently. (Maybe I came up with a better system for card sharing, but I don't remember it if so. Also, I don't think PCIe is usually hot-pluggable, so the hypervisor would have to handle that too.)
notexactly, May 13 2016
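
A rough sketch of the per-card placement heuristic described above: give the card to whichever motherboard has the better weighted combination of low current load and good measured performance with that card. The names, the weights, and the linear scoring are all invented for illustration; the annotation leaves them open (and proposes machine learning rather than a fixed formula).

# Hypothetical placement heuristic for the per-card PCIe switching idea.
from dataclasses import dataclass, field

@dataclass
class Motherboard:
    name: str
    load: float                                      # 0.0 (idle) .. 1.0 (saturated)
    card_perf: dict = field(default_factory=dict)    # card name -> relative score

def place_card(card, boards, load_weight=0.6, perf_weight=0.4):
    """Return the board that should own `card` right now (toy scoring)."""
    def score(board):
        return (load_weight * (1.0 - board.load)
                + perf_weight * board.card_perf.get(card, 0.0))
    return max(boards, key=score)

boards = [
    Motherboard("fast_board", load=0.8, card_perf={"gpu": 1.0}),
    Motherboard("idle_board", load=0.1, card_perf={"gpu": 0.6}),
]
print(place_card("gpu", boards).name)   # the idle board wins at these weights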
  

       Regarding the links to PDFs about multi-ported RAM, I was able to get the first one using the Wayback Machine, but I couldn't find the second one.   

       Also, those are about SRAM. Does multi-ported DRAM (let alone multi-ported DDR(2/3/4)) exist, apart from VRAM and CPU registers?
notexactly, May 13 2016
  

       Get some Dell PowerEdge 2950s with Fibre Channel. Dual 64-bit Xeons, up to 24 GB RAM. Hook up the Fibre Channel links.   

       Forget edge connectors, too much like hard work.
8th of 7, May 13 2016
  
      