Quantifying Supercomputer Performance


Because today’s supercomputers are so richly endowed with raw processing power, and because their architecture leaves their upper number-crunching limits fairly “open-ended”, we need some mechanism by which to quantify (measure) their real-world performance.

Supercomputer Processing and Computational Performance Limitations

Using large arrays of commodity multi-core microprocessors has proven to be a very cost-effective way to achieve ever greater performance.

If you require greater number-crunching ability, all you really need to do is add a few more racks laden with microprocessors, RAM (working storage), persistent storage (usually in the form of Redundant Arrays of Inexpensive Disks, or RAID) and the necessary high-speed optical interconnects. More performance and processing capacity is now yours.

This has meant that when building and installing a new supercomputer it is possible to begin using the machine long before it is fully complete. This phased introduction is of considerable fiscal importance: the sooner you can begin to utilize the products of your capital outlays, the sooner you begin to see a return on your investment and the lower the overall Total Cost of Ownership (TCO) of the entire facility will be.

So it is that the modern supercomputer can be considered a work in progress, with new compute, memory, storage and interconnect assemblies being incorporated (assimilated) into the ever-growing conglomerate as and when they become available.

Because of this you might say that, in principle, the supercomputer of today has no fixed upper limit on its processing capacity.

One factor to note here is that, unlike traditional mainframe computers, which are characterized by “hot swappable” components and fully functional uptime measured in years, new cutting-edge supercomputers may from time to time require a total shutdown for scheduled (planned) servicing, upgrades and component replacement. Don’t forget that planned downtime is by far preferable to unplanned downtime.

Floor Space and the Almighty Dollar

There is, however, one major constraint on this development that has limited, and will no doubt continue to limit, supercomputer performance, and it comes from the most unexpected of directions. No, it is not the almighty dollar; it is floor space. Admittedly, the dollar does in many ways influence how much floor space is available, but it does not constrain supercomputer processing and performance as directly as the available floor space itself does.

Where floor space, or the lack of it, really makes itself felt is in decisions about future additions and replacements to an existing supercomputer’s components. There will always be a point at which the gains to be realized by adding more computing elements (microprocessors, RAM, HDDs and so on) and improved interconnects are overtaken by the gains realizable through upgrading what is already installed.

Supercomputer Component Upgrade Options

Some obvious upgrade options include replacing large portions of a supercomputer’s hardware (microprocessors and so on) with newer, denser processing hardware and arrays, as well as swapping out components such as microprocessors and RAM for parts with ever higher clock speeds.

We have already seen the introduction of commodity “off-the-shelf” multi-core microprocessors into the supercomputer environment, so it will come as no surprise when the current two- and four-core server-class microprocessors are eventually replaced by parts with 64, 128, 256 or even more cores.

Not only this, we will also see ever higher numbers of these massively multi-core microprocessors installed per compute board and, yes, ever larger numbers of compute boards per rack. In fact, IBM is already producing compute boards carrying 64 multi-core microprocessors and has recently announced a board carrying 254 of them. It does not follow, unfortunately, that we will also see more racks per supercomputer, as there is a finite limit to the total floor space of every supercomputing facility. The back-of-the-envelope sketch below shows how these per-chip, per-board and per-rack densities multiply out once the number of racks is fixed.
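All of the configuration figures in this sketch are hypothetical, chosen only for illustration, and do not describe any particular vendor’s hardware:

```python
# Hypothetical configuration figures -- chosen only to illustrate how core
# density, rather than rack count, drives growth once floor space is fixed.
cores_per_chip  = 64     # a future many-core part (hypothetical)
chips_per_board = 64     # chips on one compute board (hypothetical)
boards_per_rack = 16     # compute boards per rack (hypothetical)
racks           = 200    # fixed by the machine room floor plan (hypothetical)

total_cores = cores_per_chip * chips_per_board * boards_per_rack * racks
print(f"{total_cores:,} cores in the same floor space")  # 13,107,200 cores
```

Doubling any one of the density figures doubles the core count without consuming another square metre of floor space; adding racks, by contrast, does.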

Management Decisions

Supercomputer and advanced computer modeling facility management decisions such as “what to add and when to add it” and “what to upgrade and when to perform the upgrade” will be based largely on data detailing current performance, the performance increases expected after implementation, the budget, cost/benefit ratios, how well current processing task requirements are being met, and extrapolated future processing task requirements.

In order to do this in a meaningful way we must have performance and processing metrics that can reliably describe and compare the performance characteristics of any given supercomputer with those of every other supercomputer, regardless of design, architecture or software environment.

Quantifying Absolute and Relative Supercomputer Performance

The processing and performance characteristics of “normal” computers such as a user’s desktop PC, enterprise workstations, mainstream production server-class computers and mainframe computers are measured in terms of Millions of Instructions Per Second (MIPS). Supercomputer processing and performance characteristics are, on the other hand, measured in terms of Floating Point Operations Per Second (FLOPS).

A floating point number is a number expressed in scientific notation (a significand, a base and an exponent). For example, 3.145 × 10^6, which written out in long hand is 3,145,000. Floating point processing allows a computer to manipulate extremely large and extremely small numbers that it would otherwise be unable to manage.
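As a minimal sketch of that idea, the snippet below reuses the example figure from the text and, using Python’s standard math module, shows the decimal significand/exponent form alongside the base-2 split that the hardware actually stores:

```python
# Decimal scientific notation versus the binary form the machine stores.
import math

# Decimal scientific notation: significand x base ** exponent
significand, base, exponent = 3.145, 10, 6
value = significand * base ** exponent
print(value)                  # 3145000.0 -- the "long hand" form of 3.145 x 10^6

# How the hardware stores it: a base-2 significand and exponent
mantissa, exp2 = math.frexp(value)
print(mantissa, exp2)         # ~0.7498..., 22  (value == mantissa * 2**22)
print(mantissa * 2 ** exp2)   # 3145000.0 again
```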

The enormous raw processing power delivered by the vast conglomerate of commodity microprocessors that makes up a modern supercomputer means that such a machine executes a mind-numbingly large number of floating point operations every second. To make these very large numbers easier for the human mind to deal with, the SI prefix system has been adopted for expressing a supercomputer’s measured sustained floating point operations per second (flops).
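As a purely illustrative sketch of what a “measured sustained” rate means (this assumes NumPy is available, and is not the full HPL/LINPACK procedure used for official supercomputer rankings), one can time a known amount of floating point work and divide by the elapsed time:

```python
# Time a fixed amount of floating point work and divide by wall-clock time.
# Illustrative only -- published supercomputer ratings come from running a
# full benchmark such as HPL (LINPACK) across the whole machine.
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                        # dense n x n matrix multiply
elapsed = time.perf_counter() - start

flop_count = 2 * n ** 3          # roughly 2n^3 floating point operations
print(f"{flop_count / elapsed / 1e9:.1f} Gigaflops sustained on this multiply")
```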

The SI prefix convention is as follows: Mega = 10^6, Giga = 10^9, Tera = 10^12, Peta = 10^15, Exa = 10^18 and Zetta = 10^21. For example, a supercomputer capable of sustaining a processing rate of 100,000,000,000,000, or 100 × 10^12, floating point operations per second (flops) is said to have a performance rating of 100 Teraflops.
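The conversion is simple enough to express in a few lines. In the sketch below, describe_flops is just an illustrative helper name (not an established API); it reproduces the worked example from the text:

```python
# Convert a raw flops figure into the SI-prefixed form used above.
PREFIXES = [
    ("Zetta", 10 ** 21),
    ("Exa",   10 ** 18),
    ("Peta",  10 ** 15),
    ("Tera",  10 ** 12),
    ("Giga",  10 ** 9),
    ("Mega",  10 ** 6),
]

def describe_flops(flops: float) -> str:
    """Express a flops rate using the largest SI prefix that fits."""
    for name, factor in PREFIXES:
        if flops >= factor:
            return f"{flops / factor:g} {name}flops"
    return f"{flops:g} flops"

# The worked example from the text: 100 x 10^12 flops is 100 Teraflops.
print(describe_flops(100_000_000_000_000))   # -> "100 Teraflops"
```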
