SC11: Evolving From Multicore to Many Core Computing
The simplest way to create a supercomputer is to consolidate a bunch of computers into a cluster, producing what’s generally known as a high-performance computing (HPC) system. Clusters are the best approach (in some cases, the only approach) to tackling complex engineering problems, like digitally simulating the airflow around a jet engine. As digital simulation becomes standard practice in design and engineering, the need for HPC grows. This week, people who make a living studying and assembling HPC systems are gathering in Seattle, Washington, for the annual Supercomputing show (SC11).
The Pacific Northwest port city is currently blanketed in gray clouds and showers, so attendees will probably forgo plans to go sightseeing around Pike Place Market and Pioneer Square. Instead, they’re expected to huddle indoors under the roof of the Washington State Convention Center, networking as they discuss the future of HPC.
Changes at Its Core
At the heart of the supercomputing movement is the ever-increasing horsepower of the central processing unit (CPU). With the introduction of multi-core computing, individual processors began to resemble mini-HPC systems. Now, with the introduction of NVIDIA Maximus, GPU maker NVIDIA threatens to wrest away some of the supercomputing market share from server clusters.
Powered by a CPU, a Quadro GPU, and a Tesla GPU, a Maximus-class workstation is expected to deliver more than enough computing power to handle the kind of simulation jobs typically delegated to HPC systems. Large-scale jobs — like simulating the mechanical operations of an entire airplane, for instance — would still require clusters, but for smaller jobs, a Maximus workstation might just prove powerful enough to solve them locally. (For more, read “NVIDIA Maximus Unveiled,” Nov 14, 2011.)
If Maximus is NVIDIA’s ambush, Intel’s MIC (many integrated cores) initiative may be the countermeasure. “This is an exponential leap forward,” Intel declares. “Now that supercomputers have broken the petaflop barrier, Intel already foresees a combination of many Intel MIC processors surpassing the next big milestone: the exaflop or 1,000 petaflop barrier.” MIC processors are expected to drive up the performance of workstations, HPC clusters, and data centers.
MIC architecture, according to the chip maker, “utilizes a high degree of parallelism in smaller, lower-power, and single-threaded performance Intel® processor cores … The first product based on Intel MIC architecture targets HPC segments such as oil exploration, scientific research, financial analyses, and climate simulation, among many others. It’s codenamed Knight’s Corner, and Intel is building it on 22-nanometer manufacturing process using transistor structures as small as 22 billionths of a meter. It will scale to more than 50 Intel processing cores on a single chip.”
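The appeal of a many-core design like MIC is that a highly parallel workload scales with core count: if a job can be split into independent chunks, each chunk can run on its own core. As a rough, hypothetical sketch — using Python’s standard `multiprocessing` module rather than any Intel-specific toolchain, and a toy integration standing in for a real simulation kernel — the idea looks like this:

```python
# Illustrative sketch only (not Intel MIC code): fan an embarrassingly
# parallel workload out across all available cores. The workload here is a
# toy "simulation" -- midpoint-rule integration of x**2 -- chosen because
# each sub-interval can be computed independently.
from multiprocessing import Pool, cpu_count

def simulate_chunk(bounds):
    """Toy work unit: midpoint-rule integration of x**2 over [lo, hi]."""
    lo, hi = bounds
    steps = 10_000
    dx = (hi - lo) / steps
    return sum(((lo + (i + 0.5) * dx) ** 2) * dx for i in range(steps))

if __name__ == "__main__":
    # One chunk per core; the pool schedules each chunk on its own core.
    n = cpu_count()
    chunks = [(i / n, (i + 1) / n) for i in range(n)]
    with Pool(n) as pool:
        total = sum(pool.map(simulate_chunk, chunks))
    # The exact integral of x**2 over [0, 1] is 1/3.
    print(abs(total - 1 / 3) < 1e-6)  # True
```

The same fan-out pattern is what HPC job schedulers like PBS Professional manage at cluster scale; on a many-core chip, it happens within a single processor.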
Earlier this month, engineering simulation software maker Altair announced a partnership with Intel to “optimize [Altair] software for future Intel MIC products.” As a benefit of this partnership, Altair will get its hands on the first commercially available Intel MIC processors, so the company can code its PBS Professional software to take full advantage of MIC’s parallel processing power.
The CPU vs. GPU debate is ongoing, fueled by technology advances from Intel and NVIDIA. No matter which side you choose to align with, changes in the fundamental character of processor units, both CPU and GPU, are poised to catapult supercomputing to a whole new realm.
For more reports from SC11, read “SC11 Journal: News for the Supercomputing Conference” and “World’s Top Supercomputer Hits 10.51 Petaflops” at DE’s Engineering On the Edge blog.