“Workstation performance for about the price of a desktop” — that’s how Dell pitches its new entry-level workstation, the T1700.
The new unit is available in small form factor and mini-tower configurations, powered by Intel Xeon E3-1200 v3 processors based on Intel’s next-generation Haswell architecture, which the chip maker is heavily promoting for its improved power efficiency. GPU options for the T1700 include AMD FirePro and NVIDIA Quadro cards. According to Dell, the T1700 is “the industry’s smallest entry-level tower workstation.” Continue reading
Astrobotic, a Pittsburgh-based space robotic technology developer, is currently one of the teams competing for the $30 million Google Lunar X Prize. To win, Astrobotic and roughly 20 other teams are racing against one another — and against time — to be the first “to safely land a robot on the surface of the Moon, have that robot travel 500 meters over the lunar surface, and send video, images and data back to the Earth,” as the rules specify.
Since Astrobotic relies on software-driven simulation, conducted primarily in ANSYS and MathWorks MATLAB, the company can improve its odds in the competition by speeding up its simulation workflow. But Astrobotic came up against the “dead node” issue: workstations that became unavailable for other uses because simulation jobs fully consumed their resources. Continue reading
Mountain climbers know Piz Daint, measuring 9,700 feet, as part of Switzerland’s snow-dusted Ortler Alps. Researchers and supercomputer nerds, however, know another Piz Daint, installed inside the Swiss National Supercomputing Center (known by its Italian acronym, CSCS). The center is a unit of the Swiss Federal Institute of Technology in Zurich, where Albert Einstein once studied. Since supercomputers are used for, among other things, accurate weather prediction, the micro-climates of the Piz Daint in the Alps could very well be computed on the Piz Daint at CSCS.
The supercomputer is a Cray XC30 system, with current performance listed at 216 TFlops on the Top500 list. It’s the largest system supercomputing giant Cray has assembled and delivered to date. But it’s about to get faster. Once retrofitted with NVIDIA Kepler GPUs, its speed will rise to 1 PFlops (1,000 trillion floating-point operations per second), NVIDIA announced. By early 2014, Piz Daint will become “the fastest GPU accelerator-based scientific supercomputer in Europe,” NVIDIA noted.
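As a quick back-of-the-envelope check on those figures (the roughly 4.6x speedup is derived here from the numbers above, not a figure stated by NVIDIA):

```python
# Sanity check on the Piz Daint upgrade figures.
# 1 PFlops = 1,000 TFlops = 10**15 floating-point operations per second.
current_tflops = 216      # Cray XC30 performance per the Top500 list
upgraded_tflops = 1_000   # ~1 PFlops after the Kepler GPU retrofit

speedup = upgraded_tflops / current_tflops
print(f"Projected speedup: {speedup:.1f}x")  # prints "Projected speedup: 4.6x"
```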
Matthew Gueller chuckled when I asked him if he does rendering, as if to say, “Do you even need to ask?”
As a professional visualization artist and surface designer, Matthew sees a large chunk of his time consumed by rendering. “Some of the images we have to render — they’re one-to-one ratio, at 72 DPI poster resolution — can take up to 16 hours to finish,” he noted. “Lots of materials involved, large data sets — they’re very CPU-intense.” Continue reading
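To get a feel for why those one-to-one renders are so demanding, here is a rough pixel-count estimate; the 36 x 48 inch poster size is a hypothetical example for illustration, not a dimension from the interview:

```python
# Rough pixel count for a full-scale ("one-to-one") poster render at 72 DPI.
# The 36 x 48 inch poster size is a hypothetical example.
dpi = 72
width_in, height_in = 36, 48

width_px = width_in * dpi     # 2592
height_px = height_in * dpi   # 3456
megapixels = width_px * height_px / 1e6
print(f"{width_px} x {height_px} = {megapixels:.1f} MP")  # prints "2592 x 3456 = 9.0 MP"
```

Every one of those ~9 million pixels may require many ray samples across multiple materials, which is where the hours of CPU time go.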
The GPU was initially conceived as a device to boost visualization, but it has evolved far beyond its origins, so much so that the term graphics processor now seems like a misnomer. Today, GPUs are a big part of parallel and high-performance computing (HPC). They’re fueling large-scale simulation, analysis, and number crunching in scientific research, space exploration, weather prediction, and more. At the upcoming GPU Technology Conference (GTC), NVIDIA plans to showcase many other uses of the GPU besides producing pretty visuals with dense pixels. (Note: DE is a media partner of GTC.) Continue reading