Next in our series of posts exploring Google Cloud Anthos functionality, we’re going to look at attaching Kubernetes clusters running in AKS and EKS to Anthos in Google Cloud. This builds on the multi-cloud capabilities of Anthos we saw previously with GKE on AWS moving to GA. Anthos is oriented around being the management plane for all of your enterprise workload clusters, providing a centralized, consolidated hub to orchestrate infrastructure and applications. Additionally, through Anthos' add-on features the experience is enriched to facilitate cluster and application administration with Config Management, compliance at scale with Policy Controller, and multi-cluster traffic management courtesy of Anthos Service Mesh.
With GKE On-Prem, and as we saw previously with GKE on AWS, we’ve seen how the GKE experience can be extended beyond Google Cloud and brought to our infrastructure, whether that’s the datacenter or another cloud provider. With attached clusters, Anthos provides the mechanisms to enroll Kubernetes clusters agnostic of environment. That means that regardless of where our clusters are running, we can benefit from the Anthos feature set, and the centralized management plane Anthos provides through the Google Cloud console.
This enables a plethora of Anthos use-cases. Whether you’re running managed clusters in EKS or AKS, running on bare-metal with
kubeadm, or leveraging Cluster API for the lifecycle of your infrastructure, you can register clusters with Anthos and gain a holistic perspective of your Kubernetes infrastructure, application deployments, traffic routing and security conformance.
We’ll be taking a look at how easy it is to register clusters running in a variety of environments with Anthos, and how its value-add features can improve our Kubernetes experience on these platforms.
Firstly we’ll be taking a look at attaching managed clusters to Anthos. As mentioned, we’ve seen how GKE can be brought to AWS via Anthos; however, through attached clusters, existing clusters in EKS can be registered and added to the quorum of clusters under Anthos management. Consequently, nothing about your cluster lifecycle or toolsets needs to change to bring Anthos to your EKS deployment.
Due later in 2020, GKE on Azure is the counterpart to GKE on AWS, whereby the lifecycle of clusters in Azure is orchestrated through an Anthos-managed pipeline. However, through attached clusters we are also able to bring existing AKS clusters under Anthos' management.
Let’s take a look at how we can take existing managed Kubernetes clusters and add them to the GKE Hub.
To demonstrate the flow of attaching managed clusters to Anthos, our use-case will be AKS and EKS cluster deployments which will be registered in the GKE Hub, with add-ons installed to enable Anthos' features.
With GKE On-Prem and GKE on AWS, registration is handled automatically as part of the cluster bootstrap process. In this instance, we are manually adding existing clusters to the GKE Hub on an ad-hoc basis.
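By way of illustration, registering an existing EKS cluster looks something like the following sketch. The service account, project and context names are placeholders, and the flow assumes a Google Cloud service account bound to the `gkehub.connect` IAM role and a local kubeconfig context pointing at the EKS cluster:

```shell
# Create a key for a service account that holds the gkehub.connect role
# (account and project names are illustrative)
gcloud iam service-accounts keys create connect-sa-key.json \
  --iam-account=connect-sa@my-project.iam.gserviceaccount.com

# Register the EKS cluster with the GKE Hub; this deploys the Connect agent
# into the cluster referenced by the given kubeconfig context
gcloud container hub memberships register eks-demo \
  --context=eks-demo \
  --service-account-key-file=connect-sa-key.json
```

The same two steps apply verbatim to the AKS cluster, with only the membership name and kubeconfig context changing.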
When a cluster is registered with Google Cloud, a long-lived, authenticated and encrypted connection is established between the cluster and the Google Cloud Hub via Connect. This acts as the main conduit to serve cluster and application state to Google Cloud, but also provides the connectivity to manage and deploy resources and configuration. This connection to Google Cloud is initiated from the cluster; Google Cloud then makes requests over Connect to each connected cluster, with the cluster responding back to the Google Cloud control plane. User services cannot route to Google Cloud via the link established via Connect.
As you can see, the process for attaching a cluster is near identical for each of the deployments. As part of the registration, a Connect agent is deployed into each cluster. After the connection is established, the Connect agent service exchanges account credentials, technical details, and the metadata about connected infrastructure and workloads that Google Cloud needs to manage them, including details of resources, applications and compute. The agent is deployed into the
With our managed clusters registered and the
gke-connect-agent deployed, we can see that our clusters are available in the Anthos Dashboard.
Screenshot of Managed Clusters
Once we log in to each cluster within the GCP Console we can administer and inspect its behavior. Navigating through each cluster provides an overview of the cluster’s infrastructure and specification, as well as its utilization and workloads.
AKS node details
Managed cluster workloads
As we’ve seen, attaching a Kubernetes cluster to Anthos is achieved through the simple process of registration and subsequent deployment of the
gke-connect-agent to communicate with the GKE Hub. Whilst at the time of writing, AKS and EKS are cited as the supported clusters which can be attached, we can extend this further and add external clusters from a variety of distributions and environments.
In this example, we’ll firstly use
kind to show how a standalone cluster can equally be added to Anthos. Following that, we’ll see how our repertoire of hosting platforms becomes effectively unlimited by leveraging Cluster API for the lifecycle of our clusters, which can also be connected to Anthos.
The simplest example of an unmanaged cluster is to run
kind locally. The beauty of this is that it demonstrates not only that Anthos can really run on any Kubernetes distribution, but also the extent to which our clusters can be disparate yet still consolidated into the single-pane-of-glass that is the Anthos Dashboard.
Again, attaching the cluster is the same process of registration and deploying the gke-connect-agent.
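As a sketch, with illustrative cluster and membership names, that flow might look like the following (reusing the service account key from the managed-cluster registration):

```shell
# Create a local kind cluster; kind names the kubeconfig context kind-<name>
kind create cluster --name anthos-demo

# Register it with the GKE Hub, just as we did for the managed clusters
gcloud container hub memberships register kind-demo \
  --context=kind-anthos-demo \
  --service-account-key-file=connect-sa-key.json
```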
Once the cluster is registered and we’ve logged in, our
kind cluster is similarly viewable in the GKE Hub and we can inspect it in the same fashion as if it were a GKE cluster, one of the managed clusters we registered earlier, or a GKE On-Prem or GKE on AWS deployment.
Managed cluster workloads
We’ve just seen how seemingly any cluster can be brought to Anthos, regardless of where it is being hosted. This unlocks powerful capabilities and compositions for how we can lifecycle our clusters and leverage Anthos and its features in our environments.
With Cluster API, we can use providers and Kubernetes
CustomResourceDefinitions to orchestrate the lifecycle of Kubernetes clusters. Through a management cluster, we can create, scale, upgrade and destroy Kubernetes infrastructure in a variety of environments, all through a declarative API. Ergo, we can leverage Cluster API to bring Anthos to many more Kubernetes-conformant distributions.
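To give a flavor of that declarative API, a minimal cluster definition for the AWS provider might resemble the following sketch (API versions reflect Cluster API v1alpha3, current at the time of writing; names, region and CIDR are illustrative):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: capi-demo
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster
    name: capi-demo
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSCluster
metadata:
  name: capi-demo
spec:
  region: eu-west-1
  sshKeyName: default
```

Applying these resources (alongside the usual control plane and machine definitions) to the management cluster is what triggers the provider’s controllers to provision the underlying infrastructure.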
In this example, we’ll use our
kind cluster from the previous step as our bootstrap cluster, and the Cluster API AWS provider to provision a workload Kubernetes cluster in AWS.
With the Cluster API resources deployed to bootstrap our AWS hosted cluster, the requisite infrastructure is provisioned in AWS by the provider’s controllers.
Upon provisioning and bootstrapping the Cluster API workload cluster, the kubeconfig can be obtained from the management cluster in order to communicate with the workload cluster’s Kubernetes API.
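Assuming a workload cluster named capi-demo (an illustrative name), the kubeconfig is stored by the provider as a secret in the management cluster and can be retrieved like so:

```shell
# Extract the workload cluster's kubeconfig from the management cluster
kubectl get secret capi-demo-kubeconfig \
  -o jsonpath='{.data.value}' | base64 --decode > capi-demo.kubeconfig

# Verify we can reach the workload cluster's API server
kubectl --kubeconfig=capi-demo.kubeconfig get nodes
```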
Registering the cluster once again connects the cluster to Google Cloud enabling Anthos in our environment.
As we can see, all of our clusters are now registered in the GKE Hub. This demonstrates the extent to which we can bring Anthos into an array of different environments, and even how we can leverage other platforms for orchestrating clusters to enable the further extension of Anthos.
This is a testament to the ethos of Anthos: being the control plane for managing Kubernetes whilst being environment and platform agnostic. This consolidates operations and provides consistency across cloud providers, whilst embracing existing infrastructure investments and unlocking new possibilities for hybrid and multi-cloud compositions. This also allows companies to modernize in place, continuing to run workloads on-prem or on their own infrastructure while adopting Kubernetes and cloud-native principles.
Google Cloud Marketplace
A core feature of the Anthos proposition is the capability to deploy applications from the Google Cloud Marketplace to any of your Anthos registered clusters, whether they are external (attached), or in GKE (On-Prem, AWS or GCP), all through the Google Cloud Console. This catalogue of open source and licensed software simplifies the deployment and maintenance of business-critical applications, tailoring their configuration to your use case and environment.
In this instance, we deploy a simple Nginx application to the EKS cluster which we registered with Anthos earlier. There is a vast catalogue of applications which are supported on Anthos, and as we can see they can be configured to be compatible with the native environment of the host cluster.
Once the marketplace application is deployed to our cluster, there is comprehensive observability for the application’s health, configuration and behavior. All the components which comprise the application can be inspected and edited if necessary, with events and raw resource YAML available.
All of this observability and orchestration is still possible whilst running the cluster in its native environment. Anthos in this instance is facilitating the delivery of applications to registered clusters, as well as consolidating the workloads across the GKE Hub, whether in other clouds, virtualized environments or bare metal.
Anthos Service Mesh
Anthos Service Mesh (ASM) is core to the proposition of running hybrid Kubernetes across cloud and on-premises infrastructure. Built using Istio, it enhances our experience by abstracting and automating cross-cutting concerns, such as issuing workload identities via X.509 certificates to facilitate automatic mutual TLS across our workloads and clusters, and provides mechanisms for layer 7 traffic routing within the mesh.
Additionally, ASM centralizes the process of certificate issuance and renewal, enabling cross-boundary trust between segregated clusters so that service-to-service communications can mutually authenticate.
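To make the mutual TLS piece concrete, strict mTLS can be enforced mesh-wide with a standard Istio PeerAuthentication resource, sketched below (applied to the `istio-system` root namespace; narrower scopes are possible per namespace or workload):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system  # applied to the root namespace, so mesh-wide
spec:
  mtls:
    mode: STRICT  # reject any plaintext service-to-service traffic
```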
Anthos provides the means to deploy the Istio control plane in a variety of configurations to best suit your usage of Anthos via the use of
istioctl profiles. For deployments of GKE in Google Cloud which are registered to Anthos, there is an
asm-gcp profile, whilst for GKE On-Prem, GKE on AWS, EKS and AKS the
asm-multicloud profile facilitates the installation of the Istio control plane and configuration of core features, as well as enabling auto mTLS and ingress gateways.
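For the attached clusters in this post, the installation is then something like the following single command (assuming istioctl is on the path and kubectl is pointed at the target cluster):

```shell
# Install the Istio control plane using the multi-cloud ASM profile
istioctl install --set profile=asm-multicloud
```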
Due to the sidecar proxies that are deployed into each pod as part of ASM, there is a high degree of telemetry and metadata available about the traffic and behavior of our applications. This is done transparently, with the proxy intercepting inbound traffic to the pod before passing it over
localhost to the application container. With this added insight into services within the mesh, service level objectives can be defined in accordance with the four golden signals: latency, traffic, errors and saturation.
This consolidation of application SLOs into a unified management plane is a significant proposition for enterprises running segregated clusters and applications across multiple environments. Streamlining the administrative experience and minimizing the operational overhead of managing multiple systems is at the core of Anthos' raison d’être.
With ASM installed we can leverage the core features of traffic management, security and observability that Istio offers, but also an array of additional features available within Anthos' implementation of Service Mesh. The forte of ASM is when we have multi-cluster deployments, where application communications traverse cluster boundaries. We have seen with replicated control planes that Istio can be configured to communicate cross-cluster; ASM, however, seeks to abstract that additional layer of configuration away from the administrator, enabling not only cross-cluster routing but also cross-boundary trust. This is an area which is still developing, however the prospect of a managed service mesh control plane to oversee certificate issuance and renewal across multiple meshes, as well as facilitate cross-cluster routing, is a significant value-add for Anthos and ASM.
Lastly, registered clusters within Anthos are treated as any other cluster in our Google Cloud environment. Consequently, workloads running on those clusters are available within the Google Cloud Console, again enabling a single-pane-of-glass for all of our workloads across environments.
This observability extends to application configuration and state, as well as telemetry data around usage and behavior through pod metrics and logs.
Deploying the Online Boutique demonstrates our capability to monitor workloads running in non-Google Cloud environments from the Google Cloud Console.
Navigating to the Google Cloud Console, all the workloads running across the Anthos environments are viewable, spanning both the control plane (where accessible) and application namespaces.
Drilling down into a specific application, metadata and configuration is available depicting its state and behavior.
Anthos attached clusters brings the orchestration and administration of disparate Kubernetes clusters under a consolidated view of the world in GCP, whilst extending the Anthos feature set to a multitude of environments.
Later in 2020, GKE on Azure will accompany GKE on AWS as a fully supported GKE deployment. In the meantime, attached clusters allow existing Azure and other cluster environments to make use of Anthos' feature set, as well as aiding enterprises in their hybrid and multi-cloud strategies and cloud-native transformation initiatives.
Get in touch
If you want to know more about Anthos or running hybrid and multi-cloud Kubernetes, Jetstack’s Consult and Kubernetes Subscription offerings can help you in your investigation and adoption in a variety of ways. Let us know if you’re interested in a workshop or working together to dive deeper into Anthos.