Open Credo

February 17, 2021 | Blog, Cloud, Cloud Native, GCP, Open Source

Anthos – A Holistic Approach to your Hybrid Cloud initiative



Guy Richardson

Lead Consultant

Multi-cloud is rapidly becoming the cloud strategy of choice for enterprises looking to modernise their applications.

And the reason is simple – it gives them much more flexibility to host their workloads and data where it suits them best.

Nevertheless, many organisations are still holding back on their multi-cloud initiatives because of concerns about additional complexity.

However, all three of the leading cloud service providers, Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform, now offer a much easier way for organisations to meet their multi-cloud objectives.

In this post, we focus on Google’s application modernisation solution Google Anthos and the role it can play in your cloud transformation strategy.

But first we start with a brief recap of what multi-cloud actually means and run through some of the most common challenges of multi-cloud adoption.

What Is Multi-Cloud?

A multi-cloud is an arrangement in which an organisation uses two or more clouds to host its inventory of applications and IT services.

This typically comprises a private cloud, hosted in the organisation’s own data centre, and one or more public clouds provided by a cloud service provider (CSP). Alternatively, it may be simply made up of several public clouds without any on-premises involvement. The level of integration and orchestration between the different clouds will depend on the individual needs of the organisation.

A multi-cloud can help avoid vendor lock-in – a position in which you find yourself tethered to a single cloud provider, as you cannot easily move your deployments without significant switching costs.

However, there are more everyday practical reasons for adopting a multi-cloud strategy. For example, it may be a matter of cost or regulatory compliance. You may need cross-platform failover to ensure high availability. Or you may simply want to foster innovation by giving development teams wider access to technologies that best suit their applications.

Challenges to Multi-Cloud Adoption

Application Architecture

The public cloud is a very different IT environment from the traditional data centre. On-premises hardware is static infrastructure, where each application is generally hosted on its own dedicated server and tends to follow a monolithic design.

By contrast, the cloud is a highly dynamic virtual environment with lots of moving parts. As a result, cloud-native deployments take a different approach to application architecture in which you break the codebase down into a series of distributed components known as microservices.

This offers a number of advantages over monolithic designs.

For example, your application will be more secure, as each microservice will be isolated from the others that make up your codebase – introducing boundaries that are harder for hackers to penetrate. You can also scale each microservice independently in response to fluctuating demand. This helps you make more cost-effective use of your pay-as-you-go cloud resources. And you can also simultaneously run duplicate microservices to improve the resiliency of your application.

However, in order to realise these benefits, you’ll need to rearchitect your applications so they both:

  • take advantage of the features each vendor platform has to offer
  • remain portable between different clouds

This can be a time-consuming and expensive undertaking. So in some cases, particularly as a short-term solution, it may be better to lift and shift your applications. In other words, you maintain your existing application architecture and rehost your applications in the cloud with the minimum of coding changes.

Microservice Environments

The type of environment you use to host your microservices will come down to a choice between using virtual machines (VMs) and containers.

Containerisation is an alternative virtualisation technology to VMs for partitioning the underlying application infrastructure. Unlike VMs, which require a hypervisor, containers make use of the kernel of the host operating system (OS), sharing it with other containers.

This shared approach to abstracting OS resources can significantly lower the infrastructure footprint of your applications as, unlike VMs, containers do not need their own full-blown OS to provide the environment needed to run your code.

What’s more, containers decouple your application from your underlying infrastructure. This makes them highly portable, as you can replicate them on different machines with different configurations – provided each server OS uses a compatible Linux kernel.
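As a minimal illustration of this portability, a container image is built from a declarative recipe that bundles the application together with its runtime dependencies. The sketch below is purely hypothetical (the file names and base image are illustrative), but the resulting image would run unchanged on any host with a compatible kernel and container runtime:

```dockerfile
# Illustrative sketch only - application and file names are hypothetical.
FROM python:3.9-slim

WORKDIR /app

# Bundle the application's dependencies into the image itself,
# rather than relying on anything installed on the host machine.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The same image can now run on a developer laptop, an on-premises
# server or any public cloud that provides a container runtime.
CMD ["python", "app.py"]
```

Because everything the application needs above the kernel is inside the image, moving it between environments becomes a matter of re-running it, not re-installing it.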


Regulatory Compliance

New data protection laws are emerging across the globe in response to growing concern about privacy in today’s data-driven landscape. For example, the General Data Protection Regulation (GDPR) has imposed much tighter restrictions on where you may store and process personal data about European citizens.

A large-scale enterprise with a worldwide presence may have to simultaneously comply with any number of similar privacy regulations, each with its own set of rules about data residency.

A multi-cloud can aid compliance, as it offers you more flexibility to host personal data based on territorial requirements.

Nevertheless, this will require careful attention to the cloud regions you use in your multi-cloud strategy. Moreover, you’ll need sufficient visibility and governance over your workloads to ensure you only ever move personal data between compliant geographical locations.


Workload Portability

Unlike a hybrid cloud, which by definition implies some level of orchestration between your on-premises and cloud environments, multi-cloud deployments don’t necessarily involve any integration between the clouds.

However, in order to realise their full potential, applications must be easily portable between the different environments that make up your multi-cloud ensemble.

But this is easier said than done – owing to the disparate nature of on-premises infrastructure and different public cloud platforms. In order to bring these together, you’re likely to encounter significant strategic and technical challenges.

Visibility and Control

Visibility and control are essential to the security, cost efficiency, compliance, performance and reliability of your applications.

As with workload orchestration, this is no easy task given the complexity of multi-cloud implementations.

The only practical way to stay on top of your clouds will be through tooling that allows you to monitor and manage your assets from a central point of control.

Cloud Expertise

In its first few years, the cloud largely attracted customers such as start-ups and young, expanding companies that needed a quick and easy way to get their IT off the ground.

But, more recently, large-scale enterprises have been following in the footsteps of the early adopters, triggering huge growth in demand for cloud expertise.

As a result, trained IT professionals with specialist skills and experience of cloud migration and multi-cloud strategies are very hard to come by.

So, now let’s see how Google Anthos can help overcome these challenges.

What Is Google Anthos?

Google Anthos is an all-in-one multi-cloud platform that helps you bridge the gap between your existing on-premises and public cloud infrastructure, ultimately serving as a smooth migration pathway for transitioning legacy applications to the cloud.

It provides a relatively simple way to deploy Kubernetes container clusters to your in-house systems, thereby allowing you to leverage modern cloud technology on existing internal hardware.

In addition, Anthos supports containerised workloads hosted on AWS, Microsoft Azure and Google’s own public cloud service, Google Cloud Platform, offering consistent design and services across all environments.

It can also quickly migrate your VMs to containers without the time-consuming and costly work involved in manually rearchitecting and recoding your applications. The conversion process creates containers that are managed by Google Kubernetes Engine (GKE), giving you a stopgap solution for migrating your workloads.

Anthos is a pre-packaged deployment of Kubernetes. It simplifies the overwhelming complexity of the cloud-native landscape into a well-defined deployment based on proven technologies and comes with a rich set of bundled components, which include logging, monitoring, centralised configuration management, security, API and high-speed connectivity services.
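To give a flavour of the centralised configuration management mentioned above, Anthos follows a GitOps model: you declare the desired state of all your clusters in a Git repository, and Anthos keeps every cluster in sync with it. The sketch below shows roughly what such a configuration resource looks like; the repository address, branch and directory names are purely illustrative assumptions, not a recommended setup:

```yaml
# Illustrative sketch only - repository, branch and directory are hypothetical.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  git:
    # Git repository holding the declared state for every cluster.
    syncRepo: git@github.com:example-org/anthos-config.git  # hypothetical repo
    syncBranch: main
    secretType: ssh
    # Directory containing namespaces, RBAC rules and policies.
    policyDir: config-root
```

Applying the same declared state to every cluster, wherever it runs, is what makes consistent policy and security enforcement practical across a multi-cloud estate.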

From a developer perspective, Anthos also includes application-level services such as Service Mesh and Cloud Run, which greatly simplify the development, deployment and management of distributed applications based on microservices.

It is geared primarily towards enterprise customers who want the freedom to deploy each of their workloads to the infrastructure that suits them best. But it also performs the role of an application modernisation accelerator by giving users in the early stages of cloud adoption a leg-up in their digital transformation journey.

Anthos is available as a 30-day trial, which is free for up to $900 (about £660) of usage. Thereafter, you can choose to use the service on either a pay-as-you-go or monthly subscription model.

Anthos Benefits

  • Universal Compatibility: In addition to support for the three leading cloud platforms, Anthos can run on any type of on-premises hardware that uses VMware vSphere. This means that, even though you’re using a Google product, you’re not forced into only using Google’s infrastructure.
  • Open Source: In line with much of Google’s approach to the cloud, Anthos is founded on open source components. This generally makes for solutions that are more vendor neutral, portable, robust and secure than proprietary alternatives. What’s more, they tend to support a much healthier ecosystem of related technologies and services.
  • Ease of Use: Anthos reduces your operational overhead, automatically taking care of many of the tasks associated with managing your container clusters. You can monitor and manage policy-driven security, auto scaling and configuration changes across all your multi-cloud environments from a single pane of glass.

It also provides a consistent managed Kubernetes experience across each of your platforms, so development and operations teams don’t have to familiarise themselves with different environments with different APIs.

  • Standardised Container Management: The core component of Anthos, Kubernetes, is the most widely used container orchestration engine. As a result, it has rapidly emerged as the industry standard for provisioning, configuring, auto scaling and managing traffic between containers.

This opens the door to increased interoperability with a much wider range of technologies built around the cluster management tool.
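To make this concrete, the declarative resources below sketch how Kubernetes handles provisioning and auto scaling: a Deployment describes how many replicas of a container to run, and a HorizontalPodAutoscaler adjusts that count in response to load. The service name, image and thresholds are illustrative assumptions only:

```yaml
# Illustrative sketch - names, image and thresholds are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-service
spec:
  replicas: 2                # Kubernetes provisions and maintains two pods
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
      - name: payments
        image: example.io/payments:1.0   # hypothetical container image
        resources:
          requests:
            cpu: 250m
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: payments-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payments-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Because these manifests are standard Kubernetes, the same definitions work on any conformant cluster, which is precisely what makes Kubernetes useful as a common layer across clouds.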

A Highly Flexible Long-term Solution

Anthos offers huge potential as a solution for accelerating the uptake of modern cloud technology. For enterprises looking to fully adopt Kubernetes or roll out across a number of different cloud platforms and on-premises systems, it offers a proven, well-defined and fully-featured platform.

However, it’s still a relatively new offering and has yet to reach full maturity as a product.

As a Kubernetes implementation, it may also take longer for some users to set up and learn. So it may not be suited to prospective customers who still have only limited expertise in container-based deployments.

Nevertheless, for many forward-thinking enterprises, Anthos offers the flexibility they need and should ultimately prove a strong application modernisation solution in the years to come.




