The Stack

Need more? Add more.
(+1, -3)

The Stack was inspired by the Cube World toys, those cubes with animated stick figures that you could stack on top of and next to each other and they would interact. The Stack does something similar, only with PCs.

The basic unit of the Stack is a minimal configuration integrated on a single board: a low-power, relatively low-speed processor, some RAM, some flash memory, an SD card reader, audio/video input and output ports, USB ports, and an Ethernet card. By itself, a very cheap and very modest little home or office computer, but not much more.

Stack modules connect to each other via 10 Gbit Ethernet and via the USB Host and USB Device ports each module has on its top and bottom (and maybe sides). The connected modules automatically set up a parallel computing network, with the configuration depending on the network's intended use. The extra modules would serve as extra processing power, USB hubs, extra monitor outputs, ports for external devices, or as workstations for other users. The "main" module would act as the server and gateway node.
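
(A rough sketch of how the "main" module might farm work out to the other modules over those links; the module addresses and the crunch() task are made up purely for illustration, not part of the idea.)

# worker.py -- runs on every extra Stack module (sketch only)
from xmlrpc.server import SimpleXMLRPCServer

def crunch(chunk):
    # stand-in for whatever job a module is handed
    return sum(chunk)

server = SimpleXMLRPCServer(("0.0.0.0", 9000), allow_none=True)
server.register_function(crunch)
server.serve_forever()

# main.py -- runs on the "main" (server/gateway) module
from xmlrpc.client import ServerProxy

modules = ["http://10.0.0.2:9000", "http://10.0.0.3:9000"]    # hypothetical module addresses
workers = [ServerProxy(url) for url in modules]
data = list(range(1000))
chunks = [data[i::len(workers)] for i in range(len(workers))]  # deal the work out round-robin
print(sum(w.crunch(c) for w, c in zip(workers, chunks)))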

A Stack could also connect to other Stacks to form a larger computing grid.

Additional storage space or optical disk drives would also be available as specialised modules, or you could simply use external USB or network drives.

This way, if you need more processing power, you just add another module to the stack, or connect one into the network, and you're set.

Ideally, the Stack modules would connect using a proprietary high-speed bus, and the BIOS would seamlessly blend together the many processors, RAM modules and flash storage so that the OS would think it's running on a single, powerful computer, but I'm not sure how feasible that is at the moment.

The closest any existing setup comes to the idea of the Stack is the mainframe architecture (warning: extremely simplified and mostly wrong explanation to follow), where you could add or remove processors as needed, and the overgrown BIOS-turned-OS would distribute tasks to the available processors, taking the strain of figuring that mess out off the installed OS (or OSes; it could have several running at the same time), which served only as an interface to the end user(s).

Unfortunately, that waters down the whole idea of the Stack, since then the only difference between it and a regular desktop PC is that you can stack the modules up like LEGO bricks.

But even like this, you could have a 4-module Stack as your desktop PC, and a module in every room serving as an HTPC, a Skype telephone, a netbook, or an mp3 alarm clock, with every node harnessing the power of all unoccupied nodes when required. And when nobody's watching, the whole array could run SETI@Home.

Veho, Apr 08 2009

Cube World http://www.radicaga...cubeworld/index.php
Tiny people. [Veho, Apr 08 2009]

Amdahl's Law http://en.wikipedia...wiki/Amdahl%27s_law
From parallel computing, Amdahl predicted that parallelism was limited. [Jinbish, Apr 09 2009]

Gustafson's Law http://en.wikipedia...i/Gustafson%27s_Law
A more optimistic take on parallelisation was introduced by Gustafson, bringing the scale of the problem and its speed-up into the picture. [Jinbish, Apr 09 2009]

Beagle Board http://beagleboard.org/
Example of an all-in-one integrated solution. Powerful enough for one module to be a computer in itself; ideally, cheap enough for additional modules to be, well, modules. [Veho, Apr 12 2009]







       If they communicate over Ethernet, why the need for physical proximity?
phoenix, Apr 08 2009
  

       Hi all, newbie here.   

       Proximity is essential. In fact, it is probably the downfall of the idea, because processors just inches apart are still too far apart.   

       One of the limiting factors for processor performance is the speed of light: the speed at which signals travel from one part of the processor to another. If you make the processor physically larger, or split the processing into multiple units some distance apart, the signals take that much longer to travel.
Sweaty_Elvis, Apr 08 2009
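
       (Rough numbers behind the light-speed point, assuming a signal speed of about two-thirds of c in copper:)

c = 3.0e8                   # m/s, speed of light in vacuum
cycle = 1.0 / 3.0e9         # s, one clock period at 3 GHz
print((2 / 3) * c * cycle)  # ~0.067 m: a signal covers only ~7 cm per clock cycle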
  

       potentially useful on the type of boards where the main processor does all the work.
FlyingToaster, Apr 08 2009
  

       "because processors just inches apart is still too far apart"
My point, because dis/reassembling the packet data probably takes longer than the trip across the network.
phoenix, Apr 09 2009
  

       //a low-power, relatively low-speed processor, some RAM, some flash memory, an SD card reader, audio/video input and output ports, USB ports//
So far, you've described a mobile phone.

BTW, this concept was baked in the early 1990s using transputers and modules called TRAMs (TRansputer Modules). There were Ethernet, graphics, video capture, DSP, bulk memory, generic I/O, and hard disk I/O TRAMs.
coprocephalous, Apr 09 2009
  

       Parallelization is the problem here. Some problems are easy to split up between multiple processors, but many, annoyingly, can't be split up and need to run on a single processor. This is why desktop PCs were until quite recently sold with faster and faster processors rather than multiple processors. Other applications could take advantage of multiple processors, but just haven't been written that way, and you can't rely on software vendors to rewrite their software to suit a new architecture.   

       So, sadly, it doesn't follow that if you have a processor 90% fast enough to run Half-Life 2, two processors, or even a hundred, will be able to run it.
Srimech, Apr 09 2009
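
       (To put a number on that: Amdahl's law, linked above, gives the speed-up on N processors as 1 / ((1 - p) + p/N), where p is the fraction of the work that parallelises. The 90% figure below is just an assumed example.)

def amdahl_speedup(p, n):
    # Amdahl's law: the serial fraction (1 - p) caps the speed-up at 1 / (1 - p)
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 4, 100, 10000):
    print(n, round(amdahl_speedup(0.9, n), 2))
# 1 1.0, 2 1.82, 4 3.08, 100 9.17, 10000 9.99 -- never more than 10x, however many modules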
  

       // I miss the par keyword.//
sp. "PAR"
coprocephalous, Apr 09 2009
  

       //From parallel computing, Amdahl predicted that parallelism was limited//
Trouble with Gene was that he sometimes picked the wrong metaphor.
He once was quoted as saying something like "150 Skodas will never be as fast as one Ferrari".
True, but in unit time, they would cover more ground.
coprocephalous, Apr 09 2009
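
       (That "more ground in unit time" point is roughly Gustafson's law from the links: if the workload grows with the machine, the scaled speed-up is S(N) = N - (1 - p)(N - 1). Again, p = 0.9 is just an assumed figure.)

def gustafson_speedup(p, n):
    # Gustafson's law: let the problem grow with the processor count,
    # and throughput keeps scaling instead of hitting Amdahl's cap
    return n - (1.0 - p) * (n - 1)

for n in (1, 2, 4, 100):
    print(n, round(gustafson_speedup(0.9, n), 1))
# 1 1.0, 2 1.9, 4 3.7, 100 90.1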
  

       Best simple explanation I've heard. In undergrad we had to write an essay on them; practically the entire class said the same thing, but none of us actually understood it.
Jinbish, Apr 09 2009
  

       Perhaps I should call it "Copro's Corollary"
coprocephalous, Apr 09 2009
  

       Oops, thinking about it, it may well have been Seymour Cray who made the automotive analogy, not Gene Amdahl.
coprocephalous, Apr 09 2009
  

       I think the analogy is quite appropriate. 150 Skodas would be great for moving lots of people, but too often we find ourselves with the problem of moving just one person fast.
Srimech, Apr 09 2009
  

       There are advantages and disadvantages to both approaches to processing, true. It all depends on what you need it to do. While some users do need one Ferrari, many actually need several Skodas. Playing Solitaire with an mp3 in the background, several browsers open, a torrent client quietly crunching packets, maybe a DVD being ripped or recorded, and a spreadsheet within Alt+Tab's reach in case the boss drops in: that, for example, describes how around 90% of PCs are used, and such use would profit more from several slower processors (or several separate PCs altogether) than from one faster CPU. Add in software actually designed to take advantage of multiple processors or grid computing, and a Stack would be powerful enough for most users.   

       Let me amend a few points. Each module would connect to the adjacent modules via two or more Ethernet links, so that the network topology could be reconfigured on the fly, without plugging and unplugging a ton of wires. Also, each module would have both USB Host and USB Device ports, so that every module would see (and access) every other module (and any devices plugged into them) as USB peripherals on a hub.   

       If you design each module so that every port simply plugs into the corresponding port on the adjoining module, along with power leads and a few pegs and corresponding slots added for structural integrity, the modules could simply click together like LEGO bricks, no additional wiring required.   

       Remote modules wouldn't be as interconnected as the modules on the Stack, because the USB standard doesn't allow for long cables, and stretching several cables per module across the house is probably more trouble than it's worth. So remote modules would be connected via a single Ethernet link, or wirelessly, into a larger home network. This would still let you access other nodes and run the more hardware-intensive applications there (via remote desktop), leaving your local node free for Solitaire (or whatever). Ideally, the OS would distribute tasks this way by itself, leaving you one less thing to worry about.   

       [coprocephalous], the mobile phone comparison is a good one. What I had in mind were Netbook motherboards or similar boards (see link), but the idea is similar enough.   

       The Stack is entirely possible from a hardware point of view. So is distributing tasks among several CPUs/computers/nodes, either via remote desktop, grid computing, parallel computing clusters, batch processing, or simply chopping the file up into several bits and having each node work on one fragment. The Stack would simply (okay, not so simply) do that automatically.   
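
       (A sketch of the "chop the file up" option above; the modules list and the process_fragment() call are hypothetical, just to show one node handing each of the others a byte range to work on.)

import os

def fragment_ranges(path, nodes):
    # split a file into one contiguous byte range per node
    size = os.path.getsize(path)
    step = -(-size // nodes)   # ceiling division
    return [(i * step, min((i + 1) * step, size))
            for i in range(nodes) if i * step < size]

# e.g. hand each Stack module one (start, end) pair:
# for module, (start, end) in zip(modules, fragment_ranges("video.avi", len(modules))):
#     module.process_fragment("video.avi", start, end)   # hypothetical RPC call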

       So while a home or office Stack or network of Stacks still wouldn't be able to run Half-Life, it would be able to run several dozen applications, maybe for several people, simultaneously. Much like the 150 Skodas, which was sort of the general idea in the first place.
Veho, Apr 12 2009
  
      