Supercomputing

Researchers Break Record with High-speed Data Exchange

It won’t speed up your simulation job queue quite yet, but new developments in network technology, spearheaded by a team of researchers from Caltech, CERN, and the University of Victoria, have set a new record for high-speed data transfer. The test, performed at SC11, reached a combined rate of 186 gigabits per second (Gbps) over a wide-area network circuit, shattering the team’s prior record of 119 Gbps set in 2009.

To put that speed in perspective, a sustained transfer at that rate would let you download nearly 300,000 copies of Skyrim in a single day. The research group hopes its work paves the way for new high-speed networks that routinely handle loads of 40 to 100 Gbps.
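As a quick sanity check on that figure, here’s a back-of-the-envelope calculation in Python. The roughly 6 GB install size for Skyrim is our assumption; the exact number varies by platform.

```python
# Back-of-the-envelope check of the "nearly 300,000 copies per day" figure.

link_rate_gbps = 186            # combined rate from the SC11 test, in gigabits/s
seconds_per_day = 24 * 60 * 60
install_size_gb = 6             # assumed Skyrim install size, in gigabytes

# Convert gigabits per second to gigabytes per day (8 bits per byte).
gigabytes_per_day = link_rate_gbps * seconds_per_day / 8
copies_per_day = gigabytes_per_day / install_size_gb

print(f"{gigabytes_per_day:,.0f} GB per day -> about {copies_per_day:,.0f} copies")
# roughly 2,000,000 GB per day, on the order of 300,000 copies
```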

Continue reading

World’s Top Supercomputer Hits 10.51 Petaflops

Japan’s K Computer is still the world’s most powerful supercomputer, according to the latest Top500 List.

The K Computer, installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan, achieved 10.51 petaflop/s on the Linpack benchmark using 705,024 SPARC64 processing cores. The Japanese supercomputer maintained the top position thanks to a build-out that made it four times as powerful as the number two entrant, the Chinese Tianhe-1A system.
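For a sense of scale, a quick bit of arithmetic with the published figures gives the average Linpack throughput per core:

```python
# Rough per-core Linpack throughput implied by the Top500 figures above.

rmax_flops = 10.51e15    # 10.51 petaflop/s on the Linpack benchmark
cores = 705_024          # SPARC64 processing cores

per_core_gflops = rmax_flops / cores / 1e9
print(f"~{per_core_gflops:.1f} GFLOP/s per core")   # about 14.9 GFLOP/s per core
```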

The K Computer is the first supercomputer to achieve 10 petaflop/s, or 10 quadrillion calculations per second. The previous Top500 list, released in June, marked the first time that all of the top 10 systems achieved petaflop/s performance. Continue reading

Supercomputing Breakthroughs on Display at SC11

The annual SC11 (Supercomputing 2011) international conference on high-performance computing (HPC) is taking place this week in Seattle, WA. I attended last year in New Orleans, and can tell you it’s an impressive — nearly overwhelming — display of cutting-edge computing technologies. This year, DE’s Executive Editor Steve Robbins is there and will be filing a wrap-up report, but we wanted to fill you in on some of the big announcements already made at the show.

Jen-Hsun Huang, founder and CEO of NVIDIA, gave the keynote address at 8:30 a.m. Pacific this morning. Prior to founding NVIDIA, Huang held engineering, marketing, and general management positions at LSI Logic, and was a microprocessor designer at Advanced Micro Devices.

Huang talked about how the power consumed by today’s CPUs limits the ability to achieve exascale computing. He said that with current CPU architecture, an exascale system fitting the industry’s 20-megawatt power target would not arrive until 2035, and that new technology would be needed to reach the exaflop computing level by 2019, which is a goal of the industry.
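Taken at face value, fitting an exaflop machine into a 20-megawatt budget implies an efficiency target that’s easy to compute:

```python
# Efficiency implied by fitting an exaflop system into a 20-megawatt power budget.

target_flops = 1e18        # 1 exaflop/s
power_budget_watts = 20e6  # 20 MW

gflops_per_watt = target_flops / power_budget_watts / 1e9
print(f"{gflops_per_watt:.0f} GFLOP/s per watt required")   # 50 GFLOP/s per watt
```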

NVIDIA has already made news at the show by announcing its Maximus technology. NVIDIA Maximus brings together the 3D graphics capability of NVIDIA Quadro professional graphics processing units (GPUs) with the parallel-computing power of the NVIDIA Tesla C2075 companion processor — under a unified technology that the company says transparently assigns work to the right processor. DE’s senior editor, Kenneth Wong, says NVIDIA’s Maximus technology is expected to let you work in a CAD modeling program, render photorealistic product shots, and run simulation jobs — all at the same time. For more on Maximus, check out Kenneth’s Virtual Desktop blog post, which includes a podcast interview with David Watters, NVIDIA’s senior director of marketing for manufacturing and design segments.
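To make the division of labor concrete, here is a minimal Python sketch (using the pycuda package) of an application steering its compute work toward the Tesla board while the Quadro keeps driving the display. This is only an illustration of the idea; Maximus itself assigns work transparently through NVIDIA’s drivers, not through application code like this.

```python
# Minimal sketch: pick the Tesla board for simulation kernels in a
# dual-GPU (Quadro + Tesla) workstation. Illustration only, not Maximus itself.

import pycuda.driver as cuda

cuda.init()

compute_device = None
for ordinal in range(cuda.Device.count()):
    dev = cuda.Device(ordinal)
    if "Tesla" in dev.name():          # e.g. "Tesla C2075"
        compute_device = dev
        break

if compute_device is not None:
    ctx = compute_device.make_context()   # launch simulation kernels in this context
    print(f"Running compute work on {compute_device.name()}")
    ctx.pop()
else:
    print("No Tesla board found; falling back to the default device")
```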

NVIDIA is targeting Maximus squarely at the design engineering market. Check out how the company says Maximus speeds photorealistic rendering in Dassault Systemes’ CATIA V6: Continue reading

The Birth of a Simulation Benchmark Model

When design engineers run a simulation in their favorite engineering software, a massive amount of number crunching occurs behind the scenes to simulate a particular event. Such simulation is critical to designers, who can save time and money by running fewer real-world tests and more digital tests of their designs. But how do we know the simulations are accurate?

Let’s take a look at one example recently featured in ORNL Review. A team of mechanical engineers at Sandia National Laboratories was given 60 million processor hours this year on the Oak Ridge Leadership Computing Facility’s Jaguar supercomputer to conduct high-fidelity simulations of combustion in advanced engines.

The models they create simulate turbulent combustion at different scales and are validated against benchmark experiments. Once validated, the models can be used by design engineers, as the article explains:

These models are then used in engineering-grade simulations, which run on desktops and clusters to optimize designs of combustion devices using diverse fuels. Because industrial researchers must conduct thousands of calculations around a single parameter to optimize a part design, calculations need to be inexpensive.
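To illustrate what “thousands of calculations around a single parameter” might look like in practice, here is a toy Python sketch of such a sweep. The reduced-order model below is a stand-in placeholder, not one of the Sandia combustion models.

```python
# Toy sketch of an "engineering-grade" parameter sweep: many cheap evaluations
# of a surrogate model around a single design parameter.

import numpy as np

def reduced_order_model(injection_timing_deg):
    """Placeholder surrogate: returns a notional efficiency figure."""
    return 0.42 - 0.001 * (injection_timing_deg - 12.0) ** 2

# Sweep a few thousand candidate values of one parameter.
candidates = np.linspace(5.0, 20.0, 5000)
scores = np.array([reduced_order_model(t) for t in candidates])

best = candidates[np.argmax(scores)]
print(f"Best injection timing in the sweep: {best:.2f} degrees")
```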

Continue reading

