Prelude to GTC 2013: GPUs Are Not Just for Pretty Visuals

As NVIDIA gets ready for the annual GPU Technology Conference, the company is also making an effort to redefine the GPU's purpose as more than graphics acceleration. Its Tesla-brand GPUs, built for parallel processing, will play a bigger role in tackling high-performance computing tasks.

The Titan supercomputer at Oak Ridge National Lab is an example of GPU-powered parallel processing.

The GPU was initially conceived as a device to boost visualization, but it has evolved so far beyond its origin that the term graphics processor now seems like a misnomer. Today, GPUs are a big part of parallel computing, also called high-performance computing (HPC). They’re fueling large-scale simulation, analysis, and number crunching in scientific research, space exploration, weather prediction, and more. At the upcoming GPU Technology Conference (GTC), NVIDIA plans to showcase many other uses of the GPU besides producing pretty visuals with dense pixels. (Note: DE is a media partner of GTC.)

A glance at the GTC presentation catalog shows that:

  • Boeing has been expanding its GPU deployment, not just for visualization but also for engineering, manufacturing, and support;
  • Harley-Davidson relies on rendering software (Bunkspeed SHOT, Luxion KeyShot) and design and analysis software (from Autodesk and Dassault) to create conceptual designs; and
  • Fluid has been using the GPU-powered RealityServer to deliver product imagery for online stores.

Parallel computing, or HPC, is where several processor technologies — CPU, APU, and GPU — are expected to jostle for market share and dominance. The APU (accelerated processing unit), championed primarily by AMD, is based on the notion that placing a CPU and GPU on the same chip is the better approach. In NVIDIA’s vision, the GPU is not a replacement for the CPU; rather, it is well positioned as a coprocessor that augments the CPU and multiplies its parallel processing power manyfold, depending on the type of computing task involved.

Among designers and engineers working in the manufacturing sector, the GPU has proven to be an effective way to tackle compute-intensive simulation jobs, which are often too big for a single machine to handle. Many simulation software developers are refining their code to take advantage of the additional processing cores that GPUs bring to distributed computing jobs.

In a webinar titled “Accelerate Your High Performance Computing with MSC Nastran” (delivered February 5, 2012; now archived online), Ted Wertheimer, MSC Software’s product manager, noted, “HPC allows you, the users, to be able to solve larger models, achieve greater accuracy, and of course give you an opportunity to do more design studies in the same amount of wall clock time.”

In the same webinar, Reddy Srinivas, a senior software engineer at NVIDIA, pointed out that MSC Software’s direct-equation solver is GPU-accelerated and supports multiple GPUs on Linux or Windows — an important factor for those considering GPU clusters in networked environments.

In an example cited during the presentation, Srinivas noted that a noise, vibration, and harshness (NVH) analysis that took more than 1,000 minutes to complete on a single CPU finished in roughly 100 minutes when accelerated with an eight-core CPU and two GPUs. The same job still took nearly 200 minutes on the eight-core CPU without GPUs. In a price-performance analysis, Srinivas calculated that an extra 13% investment in the GPUs roughly doubled performance compared with an eight-core CPU-only system.
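The arithmetic behind that claim can be sketched in a few lines of Python. The runtimes are the figures quoted in the webinar; the dollar amounts are hypothetical placeholders chosen only to reflect the stated 13% price premium, since actual system prices were not given.

```python
# Back-of-the-envelope speedup and price-performance math for the NVH example.
# Runtimes (in minutes) are the figures quoted in the webinar.
runtime_1cpu = 1000        # single CPU core
runtime_8core = 200        # eight-core CPU, no GPUs
runtime_8core_2gpu = 100   # eight-core CPU plus two GPUs

# Speedup of the GPU-accelerated run over the CPU-only eight-core run.
speedup = runtime_8core / runtime_8core_2gpu   # 2.0, i.e. performance doubles

# Hypothetical costs: base system price is a placeholder; the GPUs add ~13%.
cost_base = 10_000
cost_gpu = cost_base * 1.13

# Performance (jobs per minute) per dollar, for each configuration.
perf_per_dollar_base = (1 / runtime_8core) / cost_base
perf_per_dollar_gpu = (1 / runtime_8core_2gpu) / cost_gpu

print(f"speedup: {speedup:.1f}x")
print(f"price-performance gain: {perf_per_dollar_gpu / perf_per_dollar_base:.2f}x")
```

Under these assumptions, doubling throughput for 13% more money works out to roughly a 1.77x return in performance per dollar, which is the kind of ratio that makes the GPU upgrade attractive.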

Some GTC presentations will focus on cloud computing, or remote access to GPU processing.

Will Wade, NVIDIA’s director of grid products, summarizes his talk as follows: “As enterprises look to move PCs to the cloud, users are more and more demanding of a better experience and support for all of their devices. NVIDIA GRID for Enterprise enables IT managers to deliver on an experience equal to a local PC with all the promised benefits of a virtual desktop environment.”

By contrast, Brian Madden, author of The VDI Delusion, argues: “Why is only 2% of the world on VDI (virtual desktop infrastructure) right now? Probably because VDI delivers a user experience that is, best case, the equivalent of a ten-year-old PC.” Still, Madden doesn’t think VDI is doomed. He plans to tell you “how it will fit into your overall strategy moving forward.”

VDI allows users to remotely access powerful processing capacity — whether driven by GPUs, CPUs, or APUs — from devices not usually associated with large-scale distributed computing. It offers the tantalizing possibility that you might, for example, use a tablet to retrieve, view, render, and simulate large assemblies. With partners like Citrix and VMware, NVIDIA plans to discuss and demonstrate this approach.

For more, visit the GTC home page.

