Bob Deragisch, Parker Aerospace's manager of enterprise systems, needed a way to tame a whale of a high-performance computing (HPC) workload, one growing at an ever-increasing pace and becoming difficult to manage and control. The aerospace business unit he supports uses ANSYS software to simulate, among other things, airflow and fluid flow inside valves and pumps, and the effects of stress on them.
"We're looking at hundreds to thousands of individual components, analyzed together as a unit," he explains. "These are [hydraulic and fuel systems] that have to fly for 30 to 50 years and meet certification requirements. We test them for all foreseeable situations, all operating conditions."
The vast amount of calculation required makes most of these tasks ill-suited to individual workstations; they are the type of analysis that can only be completed on an HPC system. But the in-house HPC server's daunting struggle to keep up with demand was evident in a job queue that stretched out like a long tail. No amount of additional processors or server racks seemed enough to bring down the stack of pending analysis runs.
Parker Aerospace's Bob Deragisch has a personal reason for wanting to make the best aircraft systems and components: His son is an airline captain.
Deragisch began to wonder: Is there a way to offset the server's shortfall with horsepower from the engineers' workstations?
The answer came to him when he recalled SETI@home, a scientific experiment that uses web-connected computers to search for extraterrestrial intelligence. Researchers at the University of California, Berkeley figured out a way to bundle together all the donated computing resources (unused CPU capacity that average home users have decided to dedicate to SETI's cause) into a single virtual server, powerful enough to analyze the mounds of telescope-acquired data in an attempt to pinpoint signs of intelligent alien life. Deragisch would use a similar method to turn a bunch of HP Z800 workstations into a virtual cluster, powerful enough to share the burden with the dedicated HPC server.
The success of this experiment, which led Parker Aerospace to deliver much more robust designs within the same timeframe, is now part of the company's IT strategy. The exercise was made possible by, among other things, the highly parallel nature of ANSYS software, Windows HPC Server 2008 DCC (Distributed Computing Cluster) software, Parallels Workstation Extreme virtualization platform and assistance from Intel and HP.
Taking a Deeper Look
This article and the What's New Q&A are the first two articles in a series that takes an in-depth look at one company's engineering practices. Desktop Engineering's editors visited Parker Aerospace to bring the story to you via magazine articles, podcasts, online videos and a white paper sponsored by Intel and HP. We hope this in-depth coverage will provide an example of how your company can adopt readily available technologies that can give you the time and tools to optimize your designs. Find out more at deskeng.com/workstationcluster. If your company would like to be the subject of an "On the Case" story, email us at email@example.com.
"Workstations are becoming extremely powerful. They are cluster nodes in their own right," notes Deragisch.
As he looked into the possibilities, Deragisch estimated that adding more Intel Xeon 5600 processors to the workstations would be far less expensive than expanding the HPC server with additional racks, floor space, cooling requirements and processors. His choice was the HP Z800 workstation, equipped with a pair of six-core Intel Xeon 5600 processors.
The engineers' 3D mechanical CAD software, which performs single-threaded operations most of the time, ate up roughly 10-20% of each workstation's horsepower. So, in each workstation, Deragisch reserved two to four cores for the engineer's primary workload. The remaining eight to 10 cores were delegated to a shared pool of computing resources as part of a cluster. This workstation-based cluster's function was to process small and medium-sized jobs, relieving pressure on the HPC server.
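The arithmetic of that reservation scheme can be sketched in a few lines. This is purely illustrative; the actual partitioning is handled by the virtualization and HPC software, not by user code:

```python
# Sketch of the core-reservation scheme described above (illustrative only).

def pooled_cores(total_cores: int, reserved_for_user: int) -> int:
    """Cores a workstation contributes to the cluster pool."""
    if not 0 <= reserved_for_user <= total_cores:
        raise ValueError("reservation must fit within the workstation")
    return total_cores - reserved_for_user

# Each HP Z800 has two six-core Xeon 5600 processors = 12 cores.
TOTAL = 12

# Reserving two to four cores for the engineer's interactive session
# leaves eight to 10 cores per machine for the virtual cluster.
for reserved in (2, 3, 4):
    print(reserved, "reserved ->", pooled_cores(TOTAL, reserved), "pooled")
```

With ten such workstations, the pool amounts to 80-100 cores, which is the supplementary capacity the article describes.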
Parker Aerospace's workstation-based setup is different from what some call cycle scavenging or cycle stealing, where all idle cores are made available as potential cluster resources. By contrast, Parker Aerospace created a cluster using a message-passing interface (MPI). In Parker Aerospace's setup, workstation users are guaranteed access to a finite number of cores for their primary tasks, even though their machines are part of a cluster.
Whereas most clusters are assembled in a Unix or Linux environment, standard workstations almost always come with Windows operating systems (OS). The HP Z800 workstation runs the 64-bit Windows 7 OS. Therefore, Deragisch's solution was to use Windows HPC Server 2008 DCC, which lets users preserve Windows 7 on their desktops while the rest of their computing cores function as parts of an HPC server. Partitioning the virtual cluster into head nodes and processing nodes was done using Parallels Workstation 4.0 Extreme, which leverages Intel Xeon CPUs and Intel Virtualization Technology for Directed I/O (VT-d) to create an environment where workstations could share resources.
ANSYS HPC Licensing
Parker Aerospace uses ANSYS Mechanical, ANSYS CFX and ANSYS Icepak for structural, fluids and electronics thermal management, respectively.
"We really want to encourage our customers to take advantage of [HPC] so they can examine their designs from the system level, not at the component level," says Barbara Hutchings, ANSYS' director of strategic partnerships. "ANSYS HPC pack licensing allows extreme scalability at an incremental cost. It can take advantage of hundreds, or even thousands of cores at a very modest cost."
"Some software we use becomes prohibitively expensive when running on dozens of cores, because we'll need a license for each core," adds Deragisch. "Getting ANSYS HPC Pack offered us a significant advantage."
"The ANSYS software Parker Aerospace was using was Microsoft HPC-enabled," observes Mike Long, technical solution specialist, Microsoft Technical Computing. "There's a built-in job scheduler in the HPC product that allows ANSYS product users to simply specify from their graphical user interface the number of cores they want to use."
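Conceptually, a scheduler like the one Long describes takes the user's requested core count and finds nodes in the pool that can satisfy it. The sketch below is a toy illustration of that placement decision; it does not reflect the actual Windows HPC Server API or its scheduling policy, and the node names and pool sizes are invented:

```python
# Toy sketch of how a cluster scheduler might place a job that requests
# N cores (illustrative only; not the Windows HPC Server algorithm).

def place_job(requested_cores, free_cores_by_node):
    """Return a {node: cores} placement satisfying the request,
    or None if the cluster cannot currently fit the job."""
    if requested_cores > sum(free_cores_by_node.values()):
        return None  # job waits in the queue
    placement, still_needed = {}, requested_cores
    # Fill the freest nodes first, keeping the job on as few nodes
    # as possible to limit cross-node communication.
    for node, free in sorted(free_cores_by_node.items(),
                             key=lambda kv: -kv[1]):
        take = min(free, still_needed)
        if take:
            placement[node] = take
            still_needed -= take
        if still_needed == 0:
            break
    return placement

# Hypothetical pool: three workstations contributing 10, 8 and 9 free cores.
pool = {"ws1": 10, "ws2": 8, "ws3": 9}
print(place_job(16, pool))  # -> {'ws1': 10, 'ws3': 6}
print(place_job(40, pool))  # -> None: more cores than the pool offers
```

From the engineer's point of view, all of this is hidden: they simply enter a core count in the ANSYS interface, as Long notes.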
Hutchings explains that ANSYS "did a lot of development work to optimize our software packages for HPC, so it's off-the-shelf capability; no extra work required from Parker Aerospace."
Microsoft HPC DCC suite is part of Microsoft's vision to promote technical computing, powered by HPC, as a way for scientists, engineers and analysts to simulate and study the complex interplay of variables, as seen in biomechanical, electromechanical, financial, genomic and climate systems.
Long says he believes building the cluster as a Windows environment (as opposed to Unix or Linux environment) gives users an advantage: "You start out with people who are already familiar with Windows, so they're not required to submit jobs to a Unix or Linux cluster." This eliminates the hassle of converting jobs to a Unix- or Linux-compatible format, he points out.
Because all the cores from individual workstations must work on the same dataset in parallel, network connectivity (the speed with which the machines "talk" to one another) is critical to the performance of the virtual cluster.
Researching Solid State Drives
In Parker Aerospace's tests to replace traditional hard drives with the more stable, higher-performance solid state drives, Bob Deragisch sometimes saw additional performance gains. But he cautions, "If you're running on very few cores, the benefit from the lack of latency and responsiveness of a solid state drive doesn't help that much. When you get to hundreds of cores, solid state drives make a tremendous difference, roughly a 30% to 50% reduction in computing time. Gains from using solid state drives depend upon the I/O (input/output) profile of the application as well; some applications are I/O-intensive, and solid state drives are of significant benefit in this instance."
"Because you're taking a single computation, dividing it into pieces, and running it on several processors, at some point those processors have to communicate with one another, because the pieces of the problem they're working on are interdependent," says Hutchings. "We have done a ton of software tuning to optimize the message-passing component of our products."
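The interdependence Hutchings describes can be shown with a toy one-dimensional example: each partition of the data needs the edge ("halo") values from its neighbors before it can update its own cells. Real solvers exchange these halos between machines via MPI; in this minimal sketch the partitions are plain Python lists and the "messages" are the halo values passed in as arguments:

```python
# Toy illustration of why neighboring partitions must exchange data:
# a 1-D averaging pass where each cell needs its neighbors' values.

def smooth_partition(chunk, left_halo, right_halo):
    """One averaging pass over a chunk, using halo values at the edges."""
    padded = [left_halo] + chunk + [right_halo]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(padded) - 1)]

def smooth_distributed(data, n_parts):
    """Split data, update each part with halos from its neighbors, rejoin."""
    size = len(data) // n_parts
    chunks = [data[i * size:(i + 1) * size] for i in range(n_parts)]
    out = []
    for i, chunk in enumerate(chunks):
        # The "message passing": fetch boundary values from neighbors.
        left = chunks[i - 1][-1] if i > 0 else chunk[0]
        right = chunks[i + 1][0] if i < n_parts - 1 else chunk[-1]
        out.extend(smooth_partition(chunk, left, right))
    return out

data = [float(x) for x in range(8)]
# The four-partition result matches a single-partition ("serial") pass,
# which is exactly what a correct halo exchange guarantees.
assert smooth_distributed(data, 4) == smooth_distributed(data, 1)
```

The tuning Hutchings mentions is about making exactly this exchange step fast, since every solver iteration stalls until the boundary data arrives.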
Each HP Z800 workstation comes with onboard network interface cards (NICs). Those who need to pass a large volume of data in a workstation-based cluster may opt for additional 10GbE NICs to speed up message passing among individual nodes. Intel NICs supporting iWARP, an RDMA protocol, provide direct node-to-node memory transfers to further increase performance.
Each workstation node has two Gigabit Ethernet ports connecting it to the network. With Intel VT-d, Parker Aerospace dedicates one NIC to the enterprise network; the other acts as the HPC fabric.
Shorter Queues, Better Design
After Parker Aerospace's workstation-based cluster came online, engineers began to see the bottleneck ease.
"Some analysis jobs run for hours or days," noted Deragisch. "Now, these long-running jobs no longer tie up all our resources, and smaller runs can be executed on the workstation 'cluster' to dramatically shorten our job queues."
The supplementary computing capacity from the workstation-based cluster, which came at a modest investment in additional hardware, freed Parker Aerospace's dedicated HPC server to concentrate on larger jobs with fewer interruptions, allowing the company to explore more design alternatives.
Deragisch says the wall-clock wait for results is shorter for two reasons:
1. The short/medium jobs are no longer run on the company's dedicated HPC server, so fewer jobs are pending there.
2. Because the short/medium jobs run on the workstation-based cluster, and do not have to wait behind long-running large jobs, the overall turnaround time for these smaller jobs is shorter.
"The short jobs are not competing with the large jobs for server cluster resources," he adds. "On the basis of individual jobs, [you might see] little or no improvement, maybe even a slight degradation when running on workstations versus the HPC server. But it's not just about individual jobs; it's about substantially reducing the queue of jobs sitting on what was previously a single resource [the HPC server], by offloading small and medium jobs to the workstation-based cluster."
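Deragisch's point, that the win comes from the queue rather than from per-job speed, can be demonstrated with a toy queueing calculation. The job runtimes below are invented for illustration; only the shape of the result reflects the article:

```python
# Toy queueing sketch of the offloading argument: the same job mix run
# (a) entirely on one HPC server vs. (b) with small jobs diverted to a
# workstation cluster. Runtimes (in hours) are invented for illustration.

def total_turnaround(jobs):
    """Sum of completion times for jobs run back-to-back on one resource."""
    clock, total = 0, 0
    for runtime in jobs:
        clock += runtime   # job finishes at this point on the wall clock
        total += clock     # its turnaround is its completion time
    return total

# Invented mix: two long server-class jobs and four short jobs.
long_jobs, short_jobs = [24, 18], [1, 1, 2, 2]

single_queue = total_turnaround(long_jobs + short_jobs)
# Offloaded: the short jobs no longer wait behind the long runs.
split_queues = total_turnaround(long_jobs) + total_turnaround(short_jobs)

print(single_queue, split_queues)  # -> 247 79
```

No individual job runs faster in the split scenario (consistent with Deragisch's caveat), yet total turnaround drops sharply because the short jobs stop queuing behind the long ones.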
"Certainly there are small [design and engineering] shops that don't have clusters today. They probably can't justify buying a dedicated cluster," notes David Rich, from Microsoft's technical computing division. "But if they get used to using a cluster through their workstations, they might discover that there's enough return on investment to buy a small, dedicated cluster."
ANSYS' Hutchings says she believes familiarity with, and deployment of, HPC among engineers will eventually lead to better designs, as users will be able to examine models with higher fidelity. In other words, their models will feature greater mesh density and more geometric details, providing a more accurate depiction of the designs under consideration. In addition, with access to HPC, engineers and designers can conduct simultaneous studies of multiple design iterations, allowing them to select the best option afterward.
Currently, the high cost of HPC resources puts sophisticated computer-aided analysis and simulation beyond some manufacturers' reach. Thus, the use of this technology is often confined to validating a concept, or proving that a product would perform as intended in practice. ANSYS and its partners expect the rise of HPC will reverse the trend. They hope that, with more affordable HPC setups like that of Parker Aerospace, designers and engineers will begin identifying the most promising concepts in the early phase, then spend the rest of the development cycle perfecting the design.
Deragisch has a personal interest in delivering more robust fuel, hydraulic and flight control systems for aircraft: "My son has just been made a captain. He flies every day."
For More Information:
"Speed Product Development via Virtual Workstation Clustering" white paper
Kenneth Wong writes about technology, its innovative use, and its implications. One of DE's MCAD/PLM experts, he has written for numerous technology magazines and writes DE's Virtual Desktop blog at deskeng.com/virtual_desktop. You can follow him on Twitter at KennethwongSF, or email him via firstname.lastname@example.org.
Editor's Note: This article is part of a package of related content that includes an article written by DE on behalf of ANSYS for its magazine, as well as an HP and Intel-sponsored white paper.