Cloud computing, grid computing, CPUs, GPUs, gigaflops, teraflops, petaflops, exaflops, multicore, parallel computing … and on and on. I attended the SC10 event recently in New Orleans, then home for Thanksgiving, and right back to Las Vegas the following Monday to attend Autodesk University. It was a whirlwind of acronyms and amazing technologies.
The supercomputing show was great, but mind-boggling. Workstations are becoming supercomputers. The Top500 list of the world’s fastest computers was announced to coincide with the show. The Chinese Tianhe-1A system at the National Supercomputer Center in Tianjin is numero uno, achieving a performance level of 2.57 petaflop/s (quadrillions of floating-point calculations per second). It seems only yesterday that everyone was so excited by achieving teraflop status. There was a lot of great hardware there, and some of it will be arriving on engineers’ desks this coming year. Local clusters are a reality. I saw an unmanaged, 8-port InfiniBand switch that is just made for workstation clusters. And for really screaming performance, rip out the hard drive and add a new solid-state drive.
There are real advantages to local supercomputing within your workgroup: no scheduling time on the data center, no hassles with the network administrator, and no special power or cooling requirements. All you need to do is get the OK for the equipment, which you have most of anyway, and off you go. Of course, getting all of your applications to work together in a multicore environment can be a pain, but setting up these clusters is getting easier. We’ll be reporting more on cluster setups in the near future.
But the really big subject on everyone’s agenda was cloud computing. If you go to Wikipedia, the definition for cloud computing is, “Internet-based computing, whereby shared servers provide resources, software, and data to computers and other devices on demand, as with the electricity grid.” That sounds so ’80s, like we are going back to the client-server days. And, in many of our discussions with vendors at SC10, the definition of cloud computing was very cloudy indeed. One vendor was demonstrating the compute power of a data center in two racks, including storage. But no doubt about it, cloud computing is the biggest news happening this year, and it’s changing the world as we know it.
Computing Gets Infinite
Let’s move on to Autodesk University. During AU, the trade press had an opportunity to meet with Carl Bass, Autodesk’s CEO, in a question-and-answer format. Of course, one of the first questions was about how Autodesk will be using “the cloud.” This is when it all came together. Carl started to explain how the compute resources available to engineers have become effectively infinite. He used the term “infinite computing.” Wow. This makes so much more sense than “the cloud.” It means an engineer can scale a visualization or simulation to the outcome desired, not to whatever resources happen to be available. And the cost? $0.037 per core per hour.
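At that rate, the economics are easy to work out. A back-of-the-envelope sketch (the per-core-hour rate is the one quoted above; the core counts and run times are hypothetical examples):

```python
# Back-of-the-envelope cloud compute cost at the quoted rate.
# The rate comes from the article; core counts and hours are hypothetical.
RATE_PER_CORE_HOUR = 0.037  # dollars per core per hour

def job_cost(cores, hours):
    """Cost in dollars of running `cores` cores for `hours` hours."""
    return cores * hours * RATE_PER_CORE_HOUR

# A hypothetical 1,000-core simulation running for 2 hours:
print(f"${job_cost(1000, 2):.2f}")  # $74.00
```

Renting a thousand cores for an afternoon costs less than a lunch for the team, which is the point Bass was making.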
Bass used a great example: An engineer doing a structural analysis of a part or assembly could schedule 15 different scenarios from their workstation to cores on Amazon.com or Verizon, then go to lunch. Upon their return, the results would be waiting, and the application software would have tagged the three with the best outcomes. The engineer can then decide which action to take for the design.
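The workflow Bass describes is essentially scatter-gather: dispatch independent scenarios, collect the results, and rank them. A minimal local sketch of that pattern in Python, using a stand-in analysis function (the scoring formula and scenario IDs are hypothetical; a real system would submit each job to remote cores through a cloud or cluster API rather than run it locally):

```python
from concurrent.futures import ThreadPoolExecutor

def run_scenario(scenario_id):
    """Stand-in for one structural-analysis run.

    Returns (scenario_id, score), where a lower score is a better
    outcome. The formula here is a fake, deterministic placeholder.
    """
    # In a real setup this call would submit the job to rented cores
    # and block until the remote solver returns.
    return scenario_id, (scenario_id * 7) % 15

scenarios = range(15)  # the 15 design scenarios from the example

# Scatter: run all scenarios concurrently, as if on remote cores.
with ThreadPoolExecutor(max_workers=15) as pool:
    results = list(pool.map(run_scenario, scenarios))

# Gather: tag the three best (lowest-score) outcomes for the engineer.
best_three = sorted(results, key=lambda r: r[1])[:3]
print(best_three)  # [(0, 0), (13, 1), (11, 2)]
```

The engineer only ever looks at `best_three`; the other twelve runs cost a few cents each and are simply discarded.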
Infinite computing for engineers will change the way we think and work. Infinite computing is local and remote. At the same time, engineering software is catching up with the power of computers. Together they will change our world.
Steve Robbins is the CEO of Level 5 Communications and executive editor of DE. Send comments about this subject to DE-Editors@deskeng.com.