CUDA in Brief
CUDA, which stands for Compute Unified Device Architecture, is NVIDIA's parallel computing architecture for the company's graphics processing units (GPUs). While these processors offer performance advantages over traditional CPUs for many engineering computations, they require that software providers port their applications to run on them.
As graphics processing units (GPUs) move into mainstream computing for computationally intensive tasks, NVIDIA is supplementing the hardware with an increasingly comprehensive software development toolkit, the NVIDIA CUDA Toolkit 4.0. The toolkit improves the computational performance of parallel operations across multiple GPUs, speeds the porting of existing applications to the GPU, and gives software developers more tools for producing fast, high-quality applications.
The CUDA Toolkit 4.0 was announced on February 28, and the release candidate was made available to registered developers on March 4. The new toolkit lets multiple GPUs work together more seamlessly, without intervention from the workstation's CPU. It also provides a unified addressing model for memory, which makes it easier for programmers to port applications and to work with data residing in either CPU or GPU memory.
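As a sketch of what the unified addressing model means in practice, the host-side fragment below uses the standard CUDA runtime API. It assumes a CUDA 4.0-capable GPU and driver; the buffer size and variable names are illustrative:

```cuda
#include <cuda_runtime.h>

int main(void) {
    const size_t n = 1 << 20;
    float *host_buf, *dev_buf;

    // With unified virtual addressing (CUDA 4.0+), host allocations made
    // through the runtime share a single address space with device
    // allocations, so a pointer alone identifies where the data lives.
    cudaHostAlloc(&host_buf, n * sizeof(float), cudaHostAllocDefault);
    cudaMalloc(&dev_buf, n * sizeof(float));

    // cudaMemcpyDefault lets the runtime infer the transfer direction
    // from the pointers themselves -- the programmer no longer has to
    // specify cudaMemcpyHostToDevice or cudaMemcpyDeviceToHost.
    cudaMemcpy(dev_buf, host_buf, n * sizeof(float), cudaMemcpyDefault);

    cudaFree(dev_buf);
    cudaFreeHost(host_buf);
    return 0;
}
```

Because the runtime can tell host and device pointers apart, the same copy call works in either direction, which is part of what simplifies porting existing CPU code.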
Lastly, this version also incorporates a greater range of developer tools, including a performance analyzer, a binary disassembler, and a debugger for both PCs and the Mac OS. Such tools are commonplace for CPU development on mainstream operating systems, and are just starting to come into their own for GPUs.
Moving GPUs and CUDA into the Mainstream
According to Sanford Russell, director of CUDA Marketing at NVIDIA, the intent is to move GPU computing more toward commercial endeavors, such as mainstream design engineering, as well as fields such as finance, embedded systems, and even commercial business. “There are 250 million CUDA-capable GPUs deployed in systems,” he explains. “This technology is established and ready to run a wide variety of software.”
Russell also goes on to explain that as GPUs become more readily available on workstations, and the programming tools become better, many mainstream engineering applications are being ported to the GPU and CUDA.
“Structural mechanics applications are readily available to run on the GPU,” he says. “GPU fluid dynamics applications are reaching the mainstream, while other areas in engineering computation are also making strides.”
Russell notes that around 90% of the workstations sold to engineering groups either already come with one or more GPUs or offer GPUs as an option. This serves to make the technology increasingly available to engineering users.
In addition to supporting C and C++, the most common languages used for commercial engineering applications, NVIDIA offers wrappers for Java, and has partners who support Fortran and Microsoft’s .NET platform and languages.
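To give a sense of what CUDA C adds to standard C, the minimal sketch below defines and launches a kernel that scales an array on the GPU. It assumes a CUDA-capable GPU; the kernel name, block size, and scale factor are illustrative:

```cuda
#include <cuda_runtime.h>

// A CUDA C kernel: each GPU thread processes one array element.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void) {
    const int n = 4096;
    float *dev_data;
    cudaMalloc(&dev_data, n * sizeof(float));

    // Launch enough 256-thread blocks to cover all n elements.
    // The <<<blocks, threads>>> syntax is CUDA's extension to C.
    scale<<<(n + 255) / 256, 256>>>(dev_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(dev_data);
    return 0;
}
```

The wrappers and partner compilers mentioned above expose this same kernel-launch model from Java, Fortran, and .NET languages rather than requiring C or C++.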
Many design engineers’ workstations probably already have GPUs installed. If you have your own source code, it’s easier than ever to port it to run on CUDA GPUs. If you’re looking toward commercial design and analysis applications, chances are your preferred vendor is supporting NVIDIA GPUs today, so ask if that software is available for your next upgrade.
Contributing Editor Peter Varhol covers the HPC and IT beat for DE. His expertise is software development, math systems, and systems management. You can reach him at email@example.com.