The Costs of the Cloud

The last thing an engineer wants to hear is terms like TCO (total cost of ownership) and ROI (return on investment), especially when those terms are applied to technology solutions. Simply put, most engineers want the resources to do their jobs effectively, quickly and accurately.

However, those resources can be quite expensive. Their costs can put not only a project at risk, but the company as a whole. What is the most cost-effective way to balance effectiveness and expense? Many firms are finding their answer these days in one of two camps: high-performance computing (HPC) cloud-based solutions (HPC as a Service) and in-house implementations.

Which specific solution is right for your company comes down to the math: what delivers value, what enhances productivity and what garners results in the most economical way. Quantifying those factors takes some research.

On-site or in the Cloud?

Making sense of the value of HPC solutions provided by cloud service providers can be a complex undertaking. After all, the billing mechanisms can be multifaceted, including charges for provisioning, CPU time, support services and so on. Yet, at first glance, the initial costs can be quite attractive.

Barbara Hutchings, director of Partner Relations and HPC Strategy at ANSYS, offers some sage advice: “One of the first things to consider is the total cost of a solution; you have to look at software, hardware, support and administrative costs. Only when armed with that information can you make a comparison between onsite solution and cloud-based solution costs.” That proves to be an important point, because many potential adopters of cloud-based HPC experience “sticker shock” once they see how much HPC compute cycles cost per hour.
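As a rough illustration of Hutchings’ point, the comparison can be sketched in a few lines of Python. Every cost figure below is a hypothetical placeholder for illustration, not a quote from any vendor or service.

```python
# Back-of-the-envelope TCO comparison: an in-house HPC cluster versus a
# pay-as-you-go cloud service. All figures are hypothetical placeholders.

def onsite_tco(hardware, software, support, admin, years):
    """Total cost of ownership for an in-house cluster: one-time hardware
    purchase plus recurring software, support and admin costs per year."""
    return hardware + (software + support + admin) * years

def cloud_tco(rate_per_core_hour, cores, hours_per_year, years):
    """Cumulative pay-as-you-go cost for comparable compute capacity."""
    return rate_per_core_hour * cores * hours_per_year * years

years = 3
onsite = onsite_tco(hardware=250_000, software=60_000,
                    support=20_000, admin=40_000, years=years)
cloud = cloud_tco(rate_per_core_hour=0.10, cores=512,
                  hours_per_year=2_000, years=years)

print(f"On-site {years}-year TCO: ${onsite:,.0f}")
print(f"Cloud {years}-year TCO:   ${cloud:,.0f}")
```

With these made-up numbers the cloud comes out cheaper, but the outcome flips quickly as annual compute hours rise, which is exactly why the full cost picture matters before comparing.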

Hutchings also points out that “pay-as-you-go HPC cloud services can be very expensive for steady state processing.” In other words, the cloud proves to be a viable entity for projects that require temporary scale-up capabilities, or bursts of processing activity. Organizations that have a steady flow of HPC work may be better served by internal resources, because burst processing and scale on demand are not necessary requirements for operation.

However, that business model may be an exception to the rule, with most engineering firms having to scale up on a project-by-project basis and assign resources to handle bursts of activity.
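The steady-state-versus-burst trade-off Hutchings describes reduces to a simple break-even calculation: above a certain number of compute hours per year, owning capacity beats renting it. The rates below are hypothetical placeholders used only to show the arithmetic.

```python
# Break-even utilization: above how many compute hours per year does an
# in-house cluster become cheaper than renting equivalent capacity by the
# hour? Rates and costs are hypothetical placeholders for illustration.

def breakeven_hours(annual_onsite_cost, cloud_rate_per_hour):
    """Hours of use per year at which pay-as-you-go spend equals the
    fixed annual cost of owning equivalent capacity."""
    return annual_onsite_cost / cloud_rate_per_hour

# Suppose a cluster costs $200,000/year (amortized hardware plus licenses,
# support and admin) and renting equivalent capacity costs $50/hour.
hours = breakeven_hours(annual_onsite_cost=200_000, cloud_rate_per_hour=50)
print(f"Break-even at {hours:,.0f} compute hours per year")
```

Shops running well past the break-even point favor ownership; bursty, project-driven shops below it favor pay-as-you-go, which matches the duty-cycle argument made later in the article.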

Cloud-based Benefits

While much hype surrounds the cloud, there are both tangible and intangible benefits offered by outsourcing HPC. HPC system managers can leverage those benefits and extend the ROI of HPC operations. The primary benefits include:

  • Scale to better support application and job needs with automated workload-optimized node OS provisioning.
  • Provide simplified self-service access for a broader set of users, which also reduces management and training costs.
  • Accelerate collaboration or funding by extending HPC resources to community partners without their own HPC systems.
  • Enable pay-for-use with show-back and chargeback reporting for actual resource usage by user, group, project or account.
  • Support using commercial HPC service providers for surge and peak load requirements to accelerate results.
  • Enable higher cloud-based efficiency without the cost and disruption of ripping and replacing existing systems.

These benefits help cloud-based HPC deliver appreciable ROI and make hosted offerings viable for a multitude of businesses. However, cloud-based HPC services are not without their challenges, which can diminish some of the luster of hosted offerings. Those challenges include:

  • The costing/pricing model, which is still evolving from the traditional supercomputing approach of grants and quotas toward the pay-as-you-go model typical of cloud-based services.
  • The submission model, which is undergoing an evolutionary change from job queuing and reservations toward on-demand virtual machine provisioning and deployment.
  • Moving data in and out of the cloud, which can be costly and result in data lock-in.
  • Security, regulatory compliance and various service qualities (performance, availability, business continuity, service-level agreements and so on).

However, as more and more vendors get involved in providing hosted HPC solutions in the cloud, many of those challenges may simply disappear.

Determining Value

The hosted HPC market is in a state of flux, with more providers entering the scene and competition creating pricing models that are bound to get less expensive with time. Although that flux makes it difficult to calculate ROI and TCO, it surely does point those calculations toward more affordable results.

Nonetheless, much the same can be said about on-premise deployments: workstations and servers continue to become more powerful, and prices continue to drop. What’s more, supporting technologies, such as storage and high-speed networking, are also increasing in capacity and speed as prices fall.

With that in mind, it becomes increasingly difficult to pick one solution over the other, yet that is the very task facing IT managers and those in charge of budgets. Adding further confusion to the process is the fact that both technologies may be about to experience disruptive enhancements. The cloud is poised to gain speed and added flexibility with the arrival of software-defined networking (SDN), while on-premise equipment is bound to be affected by developments in nonvolatile memory and optical networking technologies. That brings added angst when designing “future proofing” into an HPC solution, ultimately affecting value.

The Choice is Yours

Ultimately, it will be the needs of the business and its projects that will determine which is a better fit for an organization. For example, if continuous scale change is part of the operating norm, the cloud may prove to be a better fit. However, if a business operates in a steady state fashion, with predictable loads, an on-premise solution may be the better way to go.

Another factor that can sway the argument is the speed of provisioning. If a project has a normal, planned scale-out, in-house resources may be appropriate. However, if projects are time-critical, the near-instantaneous provisioning capabilities of the cloud may prove beneficial. Considerations such as IT staffing levels, in-house expertise and administrative chores all have an impact on determining what works better for a given situation.

It comes down to the fact that one technology may not be better than the other; it is more a choice of which is appropriate for a given business model. As Hutchings notes, “a usage-based model may be the best fit for smaller, less frequent projects, which may not fall under an organization’s core competencies, and ultimately duty cycles may be the key determining factor.”

Hutchings also warns that “the cloud is not a single process or an isolated technology, and it has implications for a broad range of technologies, so HPC in the cloud may be part of a much bigger cloud-enablement scenario.”

With that in mind, IT may have to research the benefits of the cloud beyond just solving an HPC problem, and perhaps shift more services into the cloud, to bring overall operational expenses down.

Frank Ohlhorst is chief analyst and freelance writer at Ohlhorst.net. Send e-mail about this article to DE-Editors@deskeng.com.

More Information

ANSYS
