Introduction to Anthos

Asrın Andırın
6 min read · Aug 23, 2022


Anthos is a modern cloud application management platform that provides a uniform development and operations experience across on-premises and cloud environments.

So, what exactly is it?

Anthos is to Kubernetes what Kubernetes is to containers.

When you create a Kubernetes Deployment, Kubernetes guarantees that your Pods are always in the desired state. If a Pod crashes, Kubernetes schedules a new one to bring the current state back to the desired state. As an end user, you’ll most likely write YAML files to tell Kubernetes what you want.
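As a concrete illustration, a minimal Deployment manifest might look like this (the names and image are placeholders):

```yaml
# Illustrative Deployment: Kubernetes keeps three replicas of this Pod
# running, scheduling a replacement whenever one crashes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
```

You declare the desired state here; the Deployment controller continuously reconciles the cluster toward it.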

Like Kubernetes, Anthos is a platform that takes care of your Kubernetes clusters.

It provides three main conveniences:

  • Orchestration: Anthos can manage Kubernetes clusters on-premises or in the cloud, and it runs on both bare-metal servers and existing virtualized infrastructure. It provides an easy-to-use application stack that does not require a costly hypervisor layer.
  • Policies: Anthos Config Management applies enterprise-level policies across hybrid and multi-cloud deployments, ensuring compliance and security.
  • Security: Allows security to be built into an application’s develop-build-run cycle, creating a defense-in-depth posture that consistently applies a wide range of security measures across all environments.

Just tell Anthos how you want your Kubernetes clusters to behave, and it ensures that your requirements are met. The best part is that those clusters aren’t restricted to GKE clusters. Your Kubernetes cluster may be located anywhere, as long as it can connect to Anthos via the GKE Connect agent.

With that short intro out of the way, we can move on to the technical details.

Structure of Anthos (Figure 1)

Cloud Service Mesh

Internal Structure of Cloud Service Mesh (Figure 2)

What is Anthos Service Mesh?

Anthos Service Mesh is Google’s implementation of the powerful Istio open-source project, allowing you to manage, observe, and secure your services without having to change your application code.

  • The Service Mesh control plane offers centralized network security rules, traffic management, service encryption, authentication, and authorization.
  • The proxy component is installed as a sidecar alongside your services in each Pod. Alternatively, you may inject the proxy only into the Pods that need it. All communication then takes place through the proxy, which synchronizes with the service mesh control plane to transparently provide authentication, authorization, and network functionality such as telemetry and tracing data, without requiring any code modifications to your services.
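As a sketch of how sidecar injection is typically enabled, a namespace can be labeled so the mesh injects the proxy into new Pods (the namespace name is a placeholder, and revision-based Anthos Service Mesh installs use an `istio.io/rev=<revision>` label instead of the classic Istio label shown here):

```yaml
# Labeling a namespace so the mesh injects the sidecar proxy
# into every Pod created in it.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    istio-injection: enabled
```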

Once your services’ telemetry has been collected, Anthos Service Mesh gives you extensive visibility into your microservices’ network interactions, enabling you to define and track service level objectives. You can set the required service level characteristics (latency, availability, etc.) and thresholds for each of your services, and create alerts to take appropriate action. For instance, the login service must be available 99.5% of the time.

Anthos Service Mesh is supported on-premises, on bare metal, and in multi-cloud environments. However, certain features differ between the supported platforms; for instance, Cloud Monitoring is not available on VMware or bare metal, so you can use third-party tools such as Prometheus, Kiali, and Grafana dashboards in those environments.

Anthos Config Management (ACM)

(Figure 3)

One of the major issues organizations encounter is ensuring that deployment configurations remain consistent with the intended state across environments (hybrid and multi-cloud) and can be audited and monitored whenever necessary.

How does Anthos manage all clusters effectively?

  • A central Git repository acts as the single source of truth for all deployment configurations. You begin by creating a Git repository (for now, GitHub or Google Source Repositories) where you maintain your YAML files and the intended state of your environment. This GitOps way of working is growing in popularity.
  • All of the configurations are stored in the config repository shown in the figure above. The repository can be hosted in a location that all of your environments can access. For a hybrid environment, the repository would typically be hosted on-premises in order to take advantage of existing access control and audit requirements.
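A sketch of the ConfigManagement resource that points a cluster at such a repository (the repository URL, branch, and directory below are placeholders):

```yaml
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  git:
    # Placeholder repo; in practice this is your central config repository.
    syncRepo: https://github.com/example-org/anthos-config
    syncBranch: main
    policyDir: config
    secretType: none
```

Applied to each registered cluster, this tells Config Sync where to pull the desired state from.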

You can target particular clusters using Cluster Selectors and apply specific policies to them (e.g., use one GKE cluster and one on-prem cluster for DEV/TEST, and another GKE cluster for Prod), or apply the policies to all clusters. The beautiful thing about this is that, like a Service that targets Pods using label selectors, it builds on concepts you already know and like in Kubernetes.
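As an illustrative sketch, a ClusterSelector matches clusters by label, and a resource opts in via an annotation (all names below are hypothetical):

```yaml
# Selects all clusters labeled environment=dev.
apiVersion: configmanagement.gke.io/v1
kind: ClusterSelector
metadata:
  name: selector-dev
spec:
  selector:
    matchLabels:
      environment: dev
---
# A resource annotated this way is applied only to clusters
# matched by the named selector.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  annotations:
    configmanagement.gke.io/cluster-selector: selector-dev
spec:
  hard:
    pods: "50"
```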

Any modifications in the Git repository are applied to all the clusters and periodically reconciled by the Anthos Config Management (ACM) suite of components. These components must be installed on all participating GKE clusters.

The key components include:

  • Config Sync: Synchronizes each cluster with its config repository and applies the stored configuration. Any mismatch between the GKE cluster’s actual state and its stored configuration is continually detected and reconciled.
  • Policy Controller: Before requests to the GKE cluster are admitted, the Policy Controller component evaluates them to ensure compliance with your established cluster policies, whether they relate to security or to specific business rules. Any modifications to the clusters that do not adhere to the stated policies are blocked.
  • Config Connector: An add-on component that uses Kubernetes APIs to configure supported Google Cloud services, including BigQuery and Compute Engine. By creating the corresponding Config Connector resources, applications can provision and configure the services they use.
  • Binary Authorization: Installing the Binary Authorization component ensures that only trusted container images are deployed in the GKE cluster. Many businesses only allow the deployment of verified images that meet their information security standards and image inspection procedures. With Binary Authorization configured, the required images can be signed and validated at deployment time.
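As a hedged example of a Policy Controller rule, a constraint built on the Gatekeeper `K8sRequiredLabels` template could require every namespace to carry an `owner` label (the constraint name and label key are illustrative, and the template assumes the default constraint template library is installed):

```yaml
# Rejects any Namespace created or updated without an "owner" label.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "owner"
```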

Deployment Options with Anthos

The deployment options of Anthos fall primarily into the following three categories (in this article I will only cover hybrid deployment):

  • Hybrid deployment
  • Edge deployment
  • Multicloud capability

Many businesses have built their infrastructure on-premises. This is especially true for businesses that must adhere to rules and regulations that forbid storing consumer data in the public cloud or transferring it across borders.

These organizations are considering a variety of reliable strategies to remain flexible while also delivering cutting-edge applications fast. Depending on their cloud adoption and transformation, enterprises can be at various stages, such as:

  • Infrastructure modernization — Consolidating and optimizing their infrastructure.
  • App Modernization — Modernizing or evolving their applications by moving towards cloud-native solutions for faster deployment and agility.
  • Extending their workloads to the cloud — Looking to extend their on-prem infrastructure to the cloud for better optimization, scalability, or running non-sensitive workloads.

Customers who already have investments in VMware vSphere can begin with the Anthos clusters on VMware offering if they wish to take advantage of GKE’s capabilities to build, manage, and deploy container applications, or to transition their on-premises virtual machines to native containers.

This could serve as an excellent starting point for consolidating and updating the current infrastructure so that containers can live on-premises and be managed successfully using the same Anthos cluster lifecycle before moving the necessary apps to the cloud.

Autoscaling can be used to optimize new development workloads (non-sensitive computation) that are migrated to the cloud. The complete infrastructure can be managed through the Cloud Console, and security policies can be applied to GKE clusters hosted both on-premises and in the cloud.

Businesses can use the Migrate for Anthos service to convert virtual machine workloads into containers that can be deployed directly to Anthos clusters.
