Written by Michael Cantor, CIO, Park Place
Much has been debated about cloud repatriation recently. Has the movement of workloads back to on-prem or private cloud settings gathered such momentum that it can now be classified as an enterprise trend for intensive applications? Cloud repatriation began to emerge as some of the true costs of public cloud adoption materialised and previously hard-to-anticipate cloud overheads began to surface. On the face of it, the movement could appear to be a complete retraction of cloud-first strategies, but the causes behind cloud repatriation need closer analysis before cloud’s main benefits of easy management and flexibility are given up.
Analyst opinions differ on how serious enterprise cloud repatriation efforts really are. Most note that before seriously embarking on repatriation, IT heads and cloud leads need to plan thoroughly for accurate cost comparisons between the two approaches. These calculations need to reflect the total cost of using public cloud providers versus running on-prem systems, complete with all their overheads: hardware, maintenance, energy, skills, management, space and so on. Back in 2019, Gartner¹ concluded that public cloud repatriation remained the exception rather than the rule, given the relentless appetite that cloud consumption showed – and continues to show. In its appraisal, Gartner summarised that cloud repatriation was often the result of two broad economic situations: the failure to turn things off when not in use, and the failure at the outset to anticipate an application’s data consumption rate. That’s not to suggest Gartner concluded that public cloud workloads were doomed; rather, it recognised that organisations successfully deploying workloads and storing data within a provider’s vast global infrastructure need more insight during the planning phases. Around the same time, IDC² commented that up to 50% of public cloud workloads could be repatriated to on-prem infrastructure or private cloud, but noted that the key drivers of repatriation were a mix of security and performance issues alongside cost. 2020 saw a mass surge in cloud procurement to deliver agile responses to the pandemic. Now, could organisations be questioning their monthly usage bills in favour of a more hybrid approach?
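The like-for-like comparison described above can be sketched as a simple annualised calculation. Every figure, rate, and cost category below is an illustrative assumption for the sake of the example – not a benchmark from any provider or analyst.

```python
# A minimal sketch of the like-for-like cost comparison recommended
# above. All figures are hypothetical placeholders, not real pricing.

def annual_public_cloud_cost(monthly_compute, monthly_storage,
                             monthly_egress, monthly_services):
    """Sum the recurring public-cloud charges over a year."""
    return 12 * (monthly_compute + monthly_storage
                 + monthly_egress + monthly_services)

def annual_on_prem_cost(hardware_capex, amortisation_years,
                        annual_maintenance, annual_energy,
                        annual_staff, annual_space):
    """Amortise capital spend and add the on-prem overheads listed
    above: maintenance, energy, skills, and space."""
    return (hardware_capex / amortisation_years
            + annual_maintenance + annual_energy
            + annual_staff + annual_space)

# Hypothetical inputs purely to show the shape of the comparison.
cloud = annual_public_cloud_cost(18_000, 4_000, 2_500, 1_500)
on_prem = annual_on_prem_cost(hardware_capex=250_000,
                              amortisation_years=5,
                              annual_maintenance=30_000,
                              annual_energy=20_000,
                              annual_staff=120_000,
                              annual_space=15_000)

print(f"Public cloud: {cloud:,.0f} per year")
print(f"On-prem:      {on_prem:,.0f} per year")
```

The point is not the specific numbers but the discipline: a repatriation case is only credible once every overhead on the on-prem side of the ledger has a line in the model.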
For those who elected to move all applications to the public cloud, on paper they have far fewer capital expenses and avoid the ongoing overheads of owning, operating, and resourcing a data centre. But that’s not the whole picture. Monthly invoices for workloads on the public cloud also reflect variables such as the number of server instances and storage volumes, alongside per-use service fees. These can vary dramatically between cloud providers and depend on individual traffic and usage patterns, so time must be taken to select the optimal service or face large data transfer costs. Cloud bills skyrocket for multiple reasons, such as overprovisioned resources, unnecessary capacity, and poor visibility into the environment, so anticipating capacity usage accurately is essential. Investing in specialist external help with cloud migration can often pay long-term dividends.
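A toy bill model makes it concrete how the variables above compound, and how the “failure to turn things off” that Gartner describes inflates the total. The unit prices here are invented for illustration only.

```python
# Illustrative monthly-bill model. The three unit rates below are
# hypothetical, not any provider's actual pricing.
INSTANCE_HOURLY = 0.12    # assumed cost per instance-hour
STORAGE_GB_MONTH = 0.023  # assumed cost per GB stored per month
EGRESS_GB = 0.09          # assumed cost per GB transferred out

def monthly_bill(instances, hours_used, storage_gb, egress_gb):
    """Combine instance count, running hours, storage volume,
    and data egress into a single monthly charge."""
    return (instances * hours_used * INSTANCE_HOURLY
            + storage_gb * STORAGE_GB_MONTH
            + egress_gb * EGRESS_GB)

# Right-sized: 10 instances running business hours only (~200 h/month).
right_sized = monthly_bill(instances=10, hours_used=200,
                           storage_gb=5_000, egress_gb=1_000)

# Overprovisioned: 25 instances left on around the clock (~720 h) --
# the classic failure to turn things off when not in use.
over = monthly_bill(instances=25, hours_used=720,
                    storage_gb=5_000, egress_gb=1_000)

print(f"Right-sized bill:     {right_sized:,.2f}")
print(f"Overprovisioned bill: {over:,.2f}")
```

Even with identical storage and egress, leaving oversized fleets running around the clock multiplies the compute line many times over – which is why visibility into the environment matters as much as the headline rates.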
Cutting costs, however, isn’t the only reason to move workloads out of the public cloud. Some organisations seek to repatriate some, or indeed all, of their most sensitive workloads because of issues such as security and compliance. This could be in response to previously unknown regulatory requirements, or to address additional security demands such as extra reporting and verification steps. Maintaining constant high availability can also be a concern. Although public clouds are largely robust and secure, outages are unpredictable and can be dramatic, and in these cases cloud customers are at the mercy of the speed of the provider’s fix. Because only the cloud provider can allocate resources and direct remediation efforts, an organisation’s own SLAs may be compromised in the process.
After detailed consideration, more often than not, organisations will transfer only some workloads away from the public cloud, avoiding a wholesale deconstruction of cloud-first strategies. For those that start repatriating workloads, IT needs to consider data transfer, security, hardware requirements, available skills, archiving costs, and the downtime associated with the migration.
So migration is just the starting point. The overheads that on-premises systems carry also need to be factored back into the equation. If the infrastructure is still in place, then pushing further apps back to the core should be relatively straightforward. If it’s a wider-scale repatriation, then hardware purchases, data transfer, backup and recovery, security, and maintenance all need to be rigorously planned. Skills need to be available on both sides of the repatriation, and these can be bolstered by trusted partners qualified to avoid downtime. As with all infrastructure IT, planning remains the most essential component of any repatriation. Tap into the most appropriate and experienced third-party skills where needed – for the migration, for ongoing maintenance, and to scale and support the workloads.