Supercomputer Evolution


A supercomputer is generally considered to be at the cutting edge of processing capacity (number crunching) and computational speed at the time it is built. Yet, as with all modern technologies, today’s wonder supercomputer quickly becomes tomorrow’s standard, off-the-shelf computer.

Supercomputer Design and Evolution

With Moore’s Law still holding true after more than thirty years, the rate at which mass-market technologies overtake today’s cutting-edge wonders continues to accelerate. The effects of this are evident in the abrupt about-face we have witnessed in the underlying philosophy of supercomputer design.

From the 1970s through the mid-1980s, supercomputers were built using specialized custom vector processors working in parallel, typically anywhere from four to sixteen CPUs. The next phase of supercomputer evolution saw the introduction of massively parallel processing and a drift away from vector-only microprocessors. However, the processors used in this generation of supercomputers were still primarily highly specialized, purpose-specific, custom-designed and custom-fabricated units.

No longer is silicon fabricated into the incredibly expensive, highly specialized, purpose-specific custom microprocessors that were once the heart and mind of the supercomputers of the past. Advances in mainstream technologies and economies of scale now dictate that the order of the day is “off-the-shelf” multi-core server-class CPUs assembled into great conglomerates, combined with mind-boggling quantities of storage (RAM and HDD) and joined by light-speed interconnects.

So we now find that, instead of using specialized custom-built processors, the supercomputers of today and tomorrow are based on “off-the-shelf” server-class multi-core microprocessors, such as the IBM PowerPC, Intel Itanium, or AMD x86-64. The modern supercomputer is firmly based on massively parallel processing, clustering very large numbers of commodity processors combined with custom interconnects.

Supercomputer Hierarchical Architecture

The supercomputer of today is built on a hierarchical design in which a number of clustered computers are joined by ultra-high-speed optical network (switching fabric) interconnections, as the following list outlines (a short code sketch after the list shows how software maps onto each level).

  1. Supercomputer – A cluster of multiple interconnected multi-core microprocessor computers.
  2. Cluster Members – Each cluster member is a computer composed of a number of Multiple Instruction, Multiple Data (MIMD) multi-core microprocessors and runs its own instance of an operating system.
  3. Multi-Core Microprocessors – Each multi-core microprocessor has multiple processing cores, to which the application software is largely oblivious, that share tasks using Symmetric Multiprocessing (SMP) and Non-Uniform Memory Access (NUMA).
  4. Multi-Core Microprocessor Core – Each core of these multi-core microprocessors is in itself a complete microprocessor with Single Instruction, Multiple Data (SIMD) units, capable of executing several instructions simultaneously and many SIMD operations per nanosecond.
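
To make the hierarchy concrete, here is a minimal sketch (not from the original article) of how a program might map onto it, assuming a cluster with an MPI library and an OpenMP-capable C compiler: MPI ranks stand in for cluster members, OpenMP threads for the cores within a member, and the simple inner loop is left to the compiler’s SIMD auto-vectorizer.

    /*
     * Hypothetical hybrid MPI + OpenMP sketch of the hierarchy above:
     *   - MPI ranks      -> cluster members (one OS instance each)
     *   - OpenMP threads -> cores of each multi-core microprocessor (SMP/NUMA)
     *   - inner loop     -> SIMD lanes, via the compiler's auto-vectorizer
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000L

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which cluster member am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many members in total? */

        /* Each cluster member works on its own slice of the index range. */
        long begin = (long)rank * N / size;
        long end   = (long)(rank + 1) * N / size;

        double local = 0.0;

        /* OpenMP spreads the slice across the member's cores; the simple
         * arithmetic in the loop body is a natural target for SIMD units. */
        #pragma omp parallel for reduction(+:local)
        for (long i = begin; i < end; i++)
            local += (double)i * 0.5;

        /* Combine the per-member partial sums across the interconnect. */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("total = %f\n", total);

        MPI_Finalize();
        return 0;
    }

With a typical MPI tool chain such a program would be built with something like mpicc -fopenmp and launched with mpirun, one process per cluster member; the exact commands depend on the site’s MPI installation and job scheduler.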

Supercomputing Applications Today

The primary tasks that the supercomputers of today and tomorrow are used for are solidly focused on number crunching and calculation-intensive work of enormous scale. By enormous scale we mean large computational tasks involving massive data sets that require timely resolution, and that for all intents and purposes would take longer than the useful lifetime of general-purpose computers (even in large numbers), or longer than the average human life expectancy today, to complete.

It is impractical to commence work on something that, with luck, your great-great-great…grandchildren might just see come to fruition. For one thing, business will not provide the funding and resources required for such “pie-in-the-sky” schemes. Fortunately, it is exactly this type of task that supercomputers are built to tackle. Some examples include:

  • Physics – Quantum mechanics, thermodynamics, cosmology, astrophysics
  • Meteorology – Weather forecasting, climate research, global warming research, storm warnings
  • Molecular Modeling – Computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals
  • Physical Simulations – Aerodynamics, fluid dynamics, wind tunnels
  • Engineering Design – Structural simulations, bridges, dams, buildings, earthquake tolerance
  • Nuclear Research – Nuclear fusion research, simulation of the detonation of nuclear weapons, particle physics
  • Cryptography and Cryptanalysis – Code and cipher breaking, encryption
  • Earth Sciences – Geology, geophysics, volcanic behavior
  • Training Simulators – Advanced astronaut training and simulation, civil aviation training
  • Space Research – Mission planning, vehicle design, propulsion systems, mission proposals, feasibility studies, and simulations

The main users of these supercomputers include universities, military agencies, NASA, scientific research laboratories, and major corporations. For more supercomputer information, check out the Top500.org list.
