
Cloud repatriation rescues CIOs from mounting costs

The public cloud is a boon for getting applications up and running in a jiffy, but some CIOs have found that business agility comes at a premium and are rethinking their deployments via repatriation.

The public cloud offers the potential for better business agility, but offloading critical computing resources has a darker side. Some IT leaders are learning that running certain applications in a public cloud can cost more than doing so on-premises, leading them to rethink their strategies and bring those apps back in-house.

It’s a lesson Ravi Naik, CIO of storage vendor Seagate, learned well. Naik quickly realized the benefits of elastic computing for business apps when he migrated them to Amazon Web Services (AWS) as part of a global plan to consolidate from four data centers to one. But Naik also rolled back a big data system from AWS when he realized that the troves of data it generated, coupled with the high bandwidth required to move that data, would cost far more than running the system in his own data center.

“Compute is on-demand, so it’s perfect for cloud, which is elastic,” Naik says. “With storage, every second of every day, the costs are escalated.”

A backward step in cloud evolution

The about-face on computing rented over the internet has acquired its own catchphrase: cloud repatriation. And despite continued growth in public cloud consumption, repatriation is becoming more common, with 85 percent of IT managers saying they were repatriating workloads from public cloud environments, according to IDC.

Seagate, which provides both consumer-grade storage devices and enterprise-class storage systems, generates 30 terabytes of parametric data a day in its factories. Although the company was ingesting only a portion of that data into its data lake, the bill to shuttle the data between seven factories across three continents could have totaled millions of dollars each year, Naik says. Moreover, moving this volume of data over the network takes days. Both limitations forced Seagate to filter the data, ingesting only the bare minimum it needed.
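A back-of-the-envelope calculation shows how quickly transfer charges compound at this scale. The Python sketch below uses the article's 30-terabyte-a-day figure, but the per-gigabyte rate is an assumed placeholder (cloud transfer pricing varies by provider, region, and direction), so the output is illustrative rather than Seagate's actual bill:

```python
# Back-of-the-envelope estimate of annual data-transfer cost.
# The 30 TB/day figure comes from the article; the per-GB rate is a
# hypothetical placeholder, not Seagate's actual pricing.

TB_PER_DAY = 30            # parametric data generated across the factories
GB_PER_TB = 1_000
COST_PER_GB = 0.09         # assumed blended transfer/egress rate, in USD

daily_gb = TB_PER_DAY * GB_PER_TB
annual_cost = daily_gb * COST_PER_GB * 365

print(f"Daily volume: {daily_gb:,} GB")
print(f"Estimated annual transfer cost: ${annual_cost:,.0f}")
```

At an assumed nine cents per gigabyte, transfer alone approaches a million dollars a year, before storage, request, and cross-region charges are added, which is consistent with the multimillion-dollar order of magnitude Naik describes.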

Rolling back a data lake

After reassessing its cloud strategy — and learning a lot from running in AWS — Seagate revamped its analytics solutions using cloud-native tools and architectures. Soon after, the team realized that the optimized architecture and cloud tools would let it run a far more efficient ecosystem outside the public cloud. So it repatriated the data lake, moving it onto Seagate storage deployed in a private cloud. Today Seagate shuttles large datasets around the world at a fraction of the cost of moving them over the WAN, and ingestion times have dropped from weeks to 72 hours, Naik says.

“The repatriation of our big data ecosystem along with physical data movement using high-velocity data shuttles and cloud architectures allows us to ingest all our parametric data and provide a richer data analytics experience to our engineering and operations teams at a fraction of the cost of the public cloud,” says Naik.
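To see why a physical shuttle can beat the network, it helps to compare effective throughput. The sketch below assumes a hypothetical 100 TB dataset, a 1 Gbps sustained WAN link, and a three-day shipping window; none of these figures come from Seagate, but the arithmetic shows how shipped drives can deliver a higher effective bandwidth than the wire:

```python
# Rough comparison: moving a large dataset over a WAN link versus
# shipping it on a physical storage appliance (a "data shuttle").
# Dataset size, link speed, and shipping time are assumptions for
# illustration, not figures from the article.

DATASET_TB = 100           # hypothetical dataset size
LINK_GBPS = 1              # assumed sustained WAN throughput
SHIP_DAYS = 3              # assumed door-to-door shipping time

bits = DATASET_TB * 1e12 * 8
wan_days = bits / (LINK_GBPS * 1e9) / 86_400
shuttle_gbps = bits / (SHIP_DAYS * 86_400) / 1e9

print(f"WAN transfer: {wan_days:.1f} days at {LINK_GBPS} Gbps sustained")
print(f"Data shuttle: {SHIP_DAYS} days in transit, {shuttle_gbps:.1f} Gbps effective")
```

The gap widens as datasets grow: shipping time stays roughly constant while wire time scales linearly with volume.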

The move helped Seagate reduce spending by as much as 25 percent. “The discipline it forces on you helps you save money,” Naik says, adding that Seagate currently operates a hybrid environment that includes AWS, Azure and its own data centers.

An abundance of cloud caution

Wary of the spiraling costs that precipitate repatriation, some companies are more deliberate about which workloads they move to the public cloud in the first place.

ApolloMD, a specialty service provider for hospitals, chooses only cloud services that align with its technology and business needs, says CIO Anthony Mascitti. It consumes cloud software through vendors hosted on AWS and uses SaaS apps such as Power BI for data visualization and Adaptive Insights for financial analytics.

ApolloMD still operates data mining and warehousing in its own data center, but the IT department is looking to provide more self-service capabilities, which could include cloud services, to employees and partners. “My approach is to start with the business needs and the value proposition,” Mascitti says. “The tech component bridges the first two.”

As for Seagate’s decision to roll back its big data operations, Naik says it was part of the learning experience of migrating to the public cloud. And it was hardly the only hurdle: change management presented another challenge.

In preparation for moving to the cloud, Naik created an IT training group with staff from HR to help communicate next steps. The group steered employees toward online training courses provided by Udemy, and Naik also sent staff to AWS Loft sessions for refactoring in the cloud. “It was a significant game-changer,” Naik says.

CIOs must also be wary of lock-in, as most cloud vendors are incentivized to move their customers to adjacent cloud services. For example, a customer consuming data warehousing services may find it easier to use analytics and machine learning from the same vendor. But the more data and resources CIOs commit to a single platform, the harder it is to withdraw their data. CIOs want more choice and flexibility in where and how they run their workloads.

Naik expects that vendors, leery of angering customers, will eventually facilitate true multi-cloud environments in which companies can decouple compute from storage and move complete or partial workloads between clouds and on-premises systems from different vendors. “Cloud solution providers will address this,” Naik says.


This article originally appeared on CIO.
