Substrate focused supercomputer design

Build underpinnings and future-proof supercomputers with them
 
(+1, -1)
To design and build a supercomputer is an endeavor on a human time scale. To purchase a personal computer is, in many ways, an endeavor on a technology-driven time scale.

Suppose an organization budgets $50 million to build a supercomputer. Environmental and regulatory hurdles must be cleared. A new building must be constructed to house it. A cooling system must be designed and installed. The computer itself must be installed, configured, and tested. It takes years.

On the other hand, parts for a single personal computer can be ordered from manufacturers, expected to arrive on a time scale measured in days, and assembled in an hour. An upgrade is a matter of ordering and installing a single part. As Moore's Law chugs along, PCs are edging closer to the power of supercomputers that cost many times as much, simply because of the delay. There isn't enough time for a supercomputer to return value before the power of new components doubles a few times and renders it obsolete.
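A rough back-of-envelope sketch of that delay effect; the three-year build time and the 18-to-24-month doubling periods below are illustrative assumptions, not figures from the post:

# Back-of-envelope: how much component performance improves while a
# supercomputer is procured, built, and commissioned.
# Both figures below are illustrative assumptions.

def doublings(delay_months: float, doubling_period_months: float) -> float:
    """Number of performance doublings that occur during the delay."""
    return delay_months / doubling_period_months

delay = 36.0                  # assumed months from budget approval to first job
for period in (18.0, 24.0):   # commonly quoted doubling periods
    d = doublings(delay, period)
    print(f"doubling every {period:.0f} mo: {d:.1f} doublings"
          f" -> new parts ~{2 ** d:.1f}x faster at switch-on")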

The solution is a new design process for supercomputers. Instead of designing for the most power and efficiency given currently available components, first make a standard substrate. I'm not talking about cabinets and blades here. I mean an architecture focused on rapid module replacement. Require new components to conform to this longer-term standard, such that each part will immediately fit into the existing system with no interruption in operations.

Take processing power. You can't anticipate the next generation's socket format, but you can demand an inexpensive and easily swapped module that accepts any socket and interfaces with the rest of the node without changing the node. You can't anticipate high bandwidth memory-processor stacking, but you can demand a module that has a standard interface and allows the use of processor-matched memory without having to remove other memory modules.
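To make the contract concrete, here is one way such a module standard might be sketched in software terms. The names (Module, ProcessorModule, MemoryModule, NodeSubstrate) are hypothetical illustrations, not anything specified in the idea:

# Hypothetical sketch: the substrate fixes one long-lived module contract;
# each hardware generation ships as a module that implements it.
from abc import ABC, abstractmethod

class Module(ABC):
    """Anything that plugs into a substrate slot must speak this interface."""

    @abstractmethod
    def interface_version(self) -> int: ...

    @abstractmethod
    def self_test(self) -> bool: ...

class ProcessorModule(Module):
    """Wraps whatever socket the current CPU generation happens to use."""

    def __init__(self, socket_format: str):
        self.socket_format = socket_format  # the unknown future socket lives behind the module

    def interface_version(self) -> int:
        return 1  # the substrate-side contract stays fixed across generations

    def self_test(self) -> bool:
        return True

class MemoryModule(Module):
    """Processor-matched memory (HBM stacks, say) behind the same contract."""

    def __init__(self, technology: str):
        self.technology = technology

    def interface_version(self) -> int:
        return 1

    def self_test(self) -> bool:
        return True

class NodeSubstrate:
    """A node is just numbered slots; it never cares what is inside a module."""

    def __init__(self, slot_count: int):
        self.slot_count = slot_count
        self.slots: dict[int, Module] = {}

    def install(self, slot: int, module: Module) -> None:
        if not 0 <= slot < self.slot_count:
            raise ValueError("no such slot")
        if module.interface_version() != 1:
            raise ValueError("module does not conform to the substrate standard")
        if not module.self_test():
            raise RuntimeError("module failed self-test")
        self.slots[slot] = module  # the old module, if any, is simply replaced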

Every component can be replaced immediately whenever an upgrade is desired, and that means when a new component is ordered it's not a matter of designing a whole new warehouse of racks, but of pulling just that component and putting in the new version.
Voice, Jun 28 2016
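Continuing the hypothetical sketch above (same assumed names), an upgrade then becomes a per-slot swap rather than a redesign; the substrate never has to know which CPU generation or memory technology sits behind a module:

# Upgrading one component: pull the module, install the new generation,
# nothing else on the node changes.
node = NodeSubstrate(slot_count=4)
node.install(0, ProcessorModule(socket_format="socket of 2016"))
node.install(1, MemoryModule(technology="HBM1"))

# A couple of years later: new socket format, new stacked memory. Same slots.
node.install(0, ProcessorModule(socket_format="socket of 2018"))
node.install(1, MemoryModule(technology="HBM2"))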

High Bandwidth Memory http://www.amd.com/...re-technologies/hbm
3-D chips for rapid memory access [Voice, Jun 28 2016]

Spintronics https://en.wikipedia.org/wiki/Spintronics
For all your fractional bit needs [Voice, Jun 29 2016]


       Well, at least it's not in...oh, wait a minute. Never mind.
normzone, Jun 28 2016
  

       //You can't anticipate high bandwidth memory-processor stacking// You just did.

       But yes, basically. Maybe.
MaxwellBuchanan, Jun 28 2016
  

       //You just did (anticipate high bandwidth memory-processor stacking)//

       I wish I had, but I didn't. AMD and NVidia are both doing that now.
Voice, Jun 28 2016
  

       Ah. Then it would be "couldn't have anticipated". Shows how far I am behind the bleeding edge...
MaxwellBuchanan, Jun 28 2016
  

       You might have a problem with your generic connectors when bit-counts double. Most CPUs handle 64 bits of data in one gulp these days, but future generations will gobble 128 bits at a time, then 256 bits at a time, and so on. Connectors designed today probably won't have enough electrical contact points for some of those future data processors.
Vernon, Jun 29 2016
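       A rough count under that one-pin-per-data-bit assumption; the 1.5x allowance for power, ground, and control pins is only an illustrative guess, not a real connector spec:

# Contact count if every data bit gets its own pin, as the parallel-bus
# worry above assumes. The overhead factor is an illustrative guess.
OVERHEAD = 1.5
for data_bits in (64, 128, 256):
    total = round(data_bits * OVERHEAD)
    print(f"{data_bits}-bit path: {data_bits} data contacts, ~{total} contacts total")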
  

       Some of what makes computer designs faster comes simply from shrinking the space between gates.
RayfordSteele, Jun 29 2016
  

       Based on the concept of relativity, that means that if we make ourselves bigger our computers will be faster. Do really fat people get faster broadband?
MaxwellBuchanan, Jul 02 2016
  

       No, but they have less need for speed in general, and if we all simply think slower, then computer speed isn't a problem.
RayfordSteele, Jul 02 2016
  

       [-] because this will be a bottleneck. The connections between the different parts of the computer, and the things that implement those connections (motherboards, etc.), are improved more frequently than your 'longer-term' standard would be, so they would get ahead of it in terms of data transfer rate and such. The result would be everything else being bottlenecked by your standardized connections.
notexactly, Jul 18 2016
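       A toy illustration of that lag, with made-up numbers for both the improvement rate and the standard's freeze period:

# Toy model of the objection above: component bandwidth keeps improving while
# the substrate's interconnect standard stays frozen. Both numbers are
# arbitrary illustrative choices.
growth_per_year = 1.40   # assumed 40% yearly bandwidth improvement
standard_lifetime = 5    # assumed years before the substrate standard is revised

gap = growth_per_year ** standard_lifetime
print(f"after {standard_lifetime} years, components could move data ~{gap:.1f}x"
      f" faster than the frozen substrate links allow")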
  

       //if we all simply think slower, then computer speed isn't a problem.// <witty comment goes here>
FlyingToaster, Jul 18 2016
  


 
