The recent release of Istio Ambient 1.22 marks a significant milestone in the evolution of service meshes. By simplifying the deployment and management of microservices, Istio Ambient aims to address some of the complexities traditionally associated with service mesh architectures. In this blog post, we'll delve into what Istio Ambient offers, how it can benefit your Kubernetes environment on Google Kubernetes Engine (GKE), and the steps to deploy it.
We recently delivered a webinar covering this topic. Feel free to watch the recording below:
Simplifying Service Mesh Management with Istio Ambient on GKE
Istio and Sidecars
Istio's sidecar mode deploys an Envoy proxy as a sidecar container alongside each service instance. This proxy intercepts and manages all network traffic between microservices, providing Istio's benefits at both L4 and L7, all without requiring any changes to your application code.
Challenges of Using Istio Sidecars
- Resilience: In the sidecar pattern, a sidecar needs to be “injected” into applications by modifying their Kubernetes pod specifications and redirecting traffic within the pod. Consequently, any changes or updates to the sidecar require restarting the entire pod, which can cause a service disruption, an outcome far from ideal for maintaining application availability.
- Resource Overhead: Each microservice pod in the cluster requires its sidecar proxy (Envoy) container. This additional container can result in higher memory and CPU usage, which can be significant in large-scale deployments and potentially lead to higher infrastructure costs.
- Operational Complexity: Managing a large number of sidecar proxies can introduce operational complexity and require robust automation and monitoring tools.
- Performance impact: Capturing and processing HTTP traffic, typically done by Istio's sidecars, requires significant computational resources, adding latency and consuming CPU and memory. Applications with non-conformant HTTP implementations may also experience issues when their traffic is intercepted by sidecar proxies.
Introducing Sidecarless Mesh: Istio Ambient
Istio Ambient Service Mesh was announced in September 2022 as an experimental branch introducing a new data plane mode: a significant evolution in service mesh technology, offering a sidecar-less approach that simplifies deployment and reduces overhead. By moving away from the traditional sidecar proxy model, Istio Ambient aims to streamline operations, improve performance, and make service meshes accessible to a wider range of applications.
Benefits of Istio Ambient Mode
- Reduced Resource Consumption: By eliminating sidecar proxies, Ambient mode reduces the overall resource footprint of the service mesh, freeing up CPU and memory resources for application workloads. Ideal for environments where reducing resource consumption is critical, such as edge computing or IoT deployments.
- Simplified Deployment/Operations: Ambient mode centralises service mesh functionality, simplifying the deployment process and making the mesh easier to deploy and operate; well suited for teams looking to reduce the complexity of managing a service mesh.
- High-Performance Applications: Beneficial for applications that require low latency and high throughput, as the sidecar-less architecture reduces communication overhead.
- Enhanced Security: Provides advanced security features without the complexity of managing sidecars, ensuring secure communication and policy enforcement across the mesh.
- Upgrades: No Pod restarts required, enhancing operational efficiency and reducing downtime.
Note: In contrast to sidecar mode, ambient mode supports moving application pods to an upgraded data plane without a mandatory restart or rescheduling of running application pods. However, upgrading the data plane will briefly disrupt all workload traffic on the upgraded node, and ambient mode does not currently support canary upgrades of the data plane.
Istio Ambient Release 1.22
Istio recently announced that ambient mode has reached Beta in version 1.22! The beta release of 1.22 indicates that ambient mode features (Layer 4 and Layer 7) are now ready for production with appropriate precautions.
Note that some features remain in Alpha but will be promoted to Beta in 1.23 or later:
- Multi-cluster installations
- DNS proxying
- Interoperability with sidecars
- IPv6/Dual stack
- SOCKS5 support (for outbound)
- Istio’s classic APIs (VirtualService and DestinationRule)
Features not yet implemented/supported in Ambient mode but are planned for upcoming releases:
- Controlled egress traffic
- Multi-network support
- Improved status messages on resources to help troubleshoot and understand the mesh
- VM support
How Does Ambient Mode Work?
We have mentioned the benefits of Ambient mode, but how does it work? Let’s go through the life of a packet. The Ambient mode architecture takes a layered approach, separating Istio’s functionality (L4 zero-trust networking and L7 policy handling) into two layers: the secure overlay layer and the L7 processing layer.
Secure Overlay Layer (ztunnel component): A base layer that handles routing and guarantees zero-trust security for traffic (with mTLS, telemetry, authentication, and L4 authorization).
L7 processing layer (waypoint proxies): This can optionally be enabled when the user needs access to extended Istio features and capabilities, such as circuit breaking, rate limiting, L7 authorization policies, etc.
This approach allows incremental service mesh adoption on your platform, beginning with no mesh, moving to zero-trust secure networking, and eventually leveraging Istio's full functionality (L7 processing).
Traffic management: Ztunnel
A core component of this new architecture is the Ztunnel (zero-trust tunnel) that facilitates the creation of a secure overlay using the HTTP Based Overlay Network Encapsulation protocol (HBONE).
It is designed to operate as a shared data plane component and handle the core functionalities of L4 traffic interception between services within the mesh, eliminating the need for sidecar proxies. It ensures secure communication between services using mTLS and centralised authentication and authorization policies, reducing the complexity of managing security across multiple sidecar proxies.
Ztunnel is deployed as a DaemonSet within the Kubernetes cluster. This ensures that a ztunnel instance runs on each node in the cluster, providing localised traffic interception and processing.
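As a quick sanity check once Istio is installed (a sketch, assuming the default istio-system namespace and the standard app=ztunnel label), you can confirm that the DaemonSet schedules exactly one ztunnel per node:

```shell
# Count cluster nodes and running ztunnel pods; a healthy DaemonSet
# rollout should yield one ztunnel pod per node.
nodes=$(kubectl get nodes --no-headers | wc -l)
ztunnels=$(kubectl get pods -n istio-system -l app=ztunnel \
  --field-selector=status.phase=Running --no-headers | wc -l)
echo "nodes: ${nodes}, ztunnel pods: ${ztunnels}"
```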
Let’s go through the architecture to understand the traffic flow with ztunnel in the image below.
We now have a ztunnel proxy on each node, instead of a sidecar proxy in each pod. The ztunnel proxy retrieves mTLS certificates for the Service Accounts of all pods on its node using xDS configuration. The CNI agent handles traffic routing to the ztunnel, ensuring that all traffic to and from pods on the node is intercepted by the ztunnel. This setup allows L4 networking functions to be implemented in an Ambient mesh via the ztunnel proxy, utilising an HTTP CONNECT-based (HBONE) traffic tunnelling protocol at the transport layer.
HBONE (HTTP Based Overlay Network Encapsulation protocol)
HBONE is the pattern that Ambient mode uses to facilitate communication between source and destination ztunnels, and waypoint proxies. HBONE uses mTLS to ensure that this traffic is encrypted and authenticated and runs on a dedicated port (15008).
Istio CNI
Unlike sidecar mode, Ambient mode requires the Istio CNI plugin to be installed. It runs as a Pod on each Kubernetes node and is responsible for detecting the Pods that belong to the Ambient mesh and configuring the traffic redirection between those Pods and the ztunnel.
The alpha version had an eBPF mode that brokered traffic between the application Pods and the ztunnel (via the host network namespace) using istio-cni and eBPF, but this was removed in a restructure (no more host network namespace). The purpose of the refactor between the alpha and beta implementations is to be compatible with other primary CNIs that may themselves use eBPF.
The new approach instead uses iptables to open sockets in the Pod’s network namespace that route to the node’s ztunnel, allowing the ztunnel to handle traffic redirection within the pod’s network namespace without running inside the Pod itself. This is broadly similar to the traffic flow between sidecars and application pods, but remains transparent to the primary Kubernetes CNI. This way, network policies can continue to be enforced by the Kubernetes CNI, whether it uses eBPF or iptables, without any conflicts.
Ztunnel and Secure Overlay
In Ambient mode, traffic is managed in a way that ensures it retains the identity of the source workload throughout its journey. Here’s a detailed breakdown of how traffic redirection works:
- Ztunnel Impersonation: The ztunnel on the source node impersonates the identity of the source workload (e.g., app A). This ensures that the traffic maintains the security context and policies associated with the source workload.
- Traffic Identity Preservation: When traffic appears at the destination ztunnel, it carries the identity of the source workload.
- HBONE Overlay: An HBONE overlay is established between the ztunnels on each node. This overlay creates a secure, encrypted channel for traffic to travel between nodes.
- Traffic Encapsulation and Forwarding: The first ztunnel encapsulates the traffic and sends it through the HBONE overlay to the second ztunnel on the destination node. Then the second ztunnel receives the encapsulated traffic, decapsulates it, and forwards it to the destination workload (e.g., app B). During this process, the second ztunnel impersonates its local workload to maintain a consistent identity and security context.
This method guarantees secure, identity-preserving traffic flow between workloads across nodes, leveraging ztunnels and HBONE for efficient and secure communication within the ambient mesh.
Advanced Istio traffic management features: Waypoint proxy
A waypoint proxy is a deployment of the Envoy proxy that can be selected by applications requiring more detailed traffic management and observability (L7 features). A namespace can deploy one or more Envoy-based waypoint proxies to enable the L7 features. These proxies are deployed as Pods that can be autoscaled like any other Kubernetes Deployment according to real-time traffic demand.
Advanced Istio features (L7 processing) such as retries, traffic splitting, load balancing, and observability collection can then be enabled on a case-by-case basis.
Istio’s control plane configures the ztunnels in the cluster to pass all traffic that requires L7 processing through the Waypoint Proxy.
Waypoint proxies are deployed declaratively through Kubernetes Gateway resources, with istiod automatically monitoring, deploying, and managing the associated Waypoint deployments.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
labels:
istio.io/waypoint-for: service
name: namespace
spec:
gatewayClassName: istio-waypoint
listeners:
- name: mesh
port: 15008
protocol: HBONE
The Gateway resource has gatewayClassName set to istio-waypoint, indicating that it is an Istio-provided Waypoint. Additionally, the resource is labelled with istio.io/waypoint-for: service, indicating that the Waypoint is configured to process traffic for services, which is the default.
Waypoint: Scalability and simplifying Istio
In the traditional sidecar architecture, traffic-shaping policies (like request routing, traffic shifting, or fault injection) are handled by the source sidecar proxy, whereas security policies are managed by the destination sidecar proxy. This division creates some challenges:
- Scaling: Each sidecar proxy must be aware of every other destination in the mesh, leading to a complex and inefficient scaling problem. Any configuration changes at a destination require updating all related sidecars simultaneously.
- Debugging: Splitting policy enforcement between client and server sidecars makes troubleshooting more difficult, as it can be challenging to pinpoint the source of issues.
- Mixed Environments: In environments where not all clients are part of the mesh, inconsistent behaviour can occur, such as non-mesh clients not respecting policies like canary rollouts, leading to unexpected traffic patterns.
- Ownership and Attribution: Ideally, policies should affect only the proxies within the same namespace, but in the sidecar model, policies are enforced by distributed sidecars, complicating ownership and control.
In contrast, Istio's Ambient mode centralises policy enforcement at the destination waypoint proxy. The Waypoint acts as a gateway for the namespace or service account, ensuring that all traffic entering the namespace passes through it, where all relevant policies are enforced. This approach simplifies scaling, debugging, and policy management by containing each waypoint's knowledge and responsibilities within its own namespace. A reduced configuration means lower resource usage.
Installing Ambient on a GKE cluster
Now that we've explored how Ambient Mode works, let’s implement it by deploying a GKE cluster, installing Istio in Ambient mode, and integrating a sample application into the mesh while enabling Istio features. For this demonstration, we'll use the Bank of Anthos application by Google.
We’re deploying this Bank of Anthos application across two different namespaces. In the first namespace (bank-of-ambient), the app will be added to the mesh using Ambient mode. In the second namespace (bank-of-sidecar), the app will be configured with traditional sidecars. This setup allows us to compare and demonstrate the differences in resource consumption and latency between the sidecar and sidecar-less data plane architectures.
GKE-specific prerequisites
By default in GKE, only the kube-system namespace has a defined ResourceQuota for the node-critical class. As istio-cni and ztunnel require the node-critical PriorityClass (check the docs), we will manually create a ResourceQuota in the istio-system namespace where these components will be installed.
GKE limits
- GKE Autopilot is not supported. Autopilot doesn’t allow the NET_ADMIN and SYS_ADMIN capabilities, which are required by the istio-cni agent container.
Deploy a GKE cluster and retrieve credentials:
Optionally, you can enable Dataplane V2 when creating the cluster by adding the flag --enable-dataplane-v2.
export PROJECT_ID=`gcloud config get-value project` && \
export M_TYPE=n1-standard-2 && \
export ZONE=europe-west2-a && \
export CLUSTER_NAME="istio-demo" && \
gcloud services enable container.googleapis.com && \
gcloud container clusters create $CLUSTER_NAME \
--cluster-version latest \
--machine-type=$M_TYPE \
--num-nodes 4 \
--zone $ZONE \
--project $PROJECT_ID
gcloud container clusters get-credentials $CLUSTER_NAME
Create the istio-system namespace and then create a ResourceQuota:
$ kubectl create namespace istio-system
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
name: gcp-critical-pods
namespace: istio-system
spec:
hard:
pods: 1000
scopeSelector:
matchExpressions:
- operator: In
scopeName: PriorityClass
values:
- system-node-critical
EOF
Install Istio with the Ambient profile using Helm. We recommend Helm as it helps to manage components separately, and the components can be easily upgraded to the latest version.
Make sure you have Helm installed, then configure the Helm repo:
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
Install the base chart which contains the basic CRDs required to set up Istio.
helm install istio-base istio/base -n istio-system --wait
Install the CNI, Istiod and ztunnel components:
$ helm install istio-cni istio/cni --namespace istio-system --set profile=ambient --wait
$ helm install istiod istio/istiod --namespace istio-system --set profile=ambient --wait
$ helm install ztunnel istio/ztunnel -n istio-system --wait
Install Ingress Gateway:
helm install istio-ingress istio/gateway -n istio-ingress --wait
Verify all the components are up and running:
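For example (a sketch; component names assume the Helm release names used above):

```shell
# istiod, the istio-cni-node DaemonSet and the ztunnel DaemonSet should
# all be Running in istio-system, and the gateway in istio-ingress.
kubectl get pods -n istio-system
kubectl get daemonset -n istio-system
kubectl get pods -n istio-ingress
```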
Istio is now installed with the Ambient profile and ready to be tested. Let’s deploy the application standalone without adding it to the mesh.
Deploy the Bank of Anthos application in the bank-of-ambient namespace:
$ git clone https://github.com/GoogleCloudPlatform/bank-of-anthos.git
$ kubectl create namespace bank-of-ambient
$ kubectl apply -f bank-of-anthos/extras/jwt/jwt-secret.yaml -n bank-of-ambient
$ kubectl apply -f bank-of-anthos/kubernetes-manifests -n bank-of-ambient
Check the app status and access it through a web browser using the External IP to make sure it’s up and running:
kubectl get svc -n bank-of-ambient
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
accounts-db ClusterIP 34.118.226.3 <none> 5432/TCP 3h10m
balancereader ClusterIP 34.118.231.89 <none> 8080/TCP 3h10m
contacts ClusterIP 34.118.231.9 <none> 8080/TCP 3h10m
frontend LoadBalancer 34.118.236.160 34.105.185.155 80:32462/TCP 3h10m
ledger-db ClusterIP 34.118.231.243 <none> 5432/TCP 3h10m
ledgerwriter ClusterIP 34.118.229.224 <none> 8080/TCP 3h10m
transactionhistory ClusterIP 34.118.235.180 <none> 8080/TCP 3h10m
userservice ClusterIP 34.118.228.184 <none> 8080/TCP 3h10m
Deploy a Gateway and a VirtualService to access the frontend through the IngressGateway. Save the following manifests as frontend-ingress.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: frontend-gateway
spec:
selector:
istio: ingress # Istio installed using Helm
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: frontend-ingress
spec:
hosts:
- "*"
gateways:
- frontend-gateway
http:
- route:
- destination:
host: frontend
port:
number: 80
kubectl apply -f frontend-ingress.yaml -n bank-of-ambient
Visualising the application within the mesh
Install Prometheus, Grafana and Kiali:
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.22/samples/addons/prometheus.yaml
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.22/samples/addons/kiali.yaml
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.22/samples/addons/grafana.yaml
At this stage, we have a GKE cluster, Istio installed with the ambient profile, and an application up and running; however, our application is still outside the mesh. Let’s add it to the mesh!
You simply need to label the namespace where the application workloads are running:
kubectl label namespace bank-of-ambient istio.io/dataplane-mode=ambient
Let's access the app again. You won't notice any difference in its behaviour, but now the communication between the application pods is encrypted using mTLS. Additionally, Istio is now collecting TCP telemetry for all traffic between the pods. This means our app is securely within the mesh, and no restart was required!
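To double-check that the namespace really is running in ambient mode, you can read the label back (a minimal sketch; note that dots in the label key must be escaped in the jsonpath expression):

```shell
# The ambient data plane is enabled per namespace via the
# istio.io/dataplane-mode=ambient label.
mode=$(kubectl get namespace bank-of-ambient \
  -o jsonpath='{.metadata.labels.istio\.io/dataplane-mode}')
echo "dataplane mode: ${mode}"
```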
Send traffic to the app and access Kiali; you can see the app is in the mesh and traffic goes through the istio-ingress:
$ export GATEWAY_HOST_EXT=$(kubectl get service/istio-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}' -n istio-ingress)
$ curl http://${GATEWAY_HOST_EXT}
$ istioctl dashboard kiali
Let’s deploy the app in the second namespace (bank-of-sidecar) and add it to the mesh using sidecars. Remember, you need to label the namespace for injection with istio-injection=enabled:
$ kubectl create namespace bank-of-sidecar
$ kubectl apply -f bank-of-anthos/extras/jwt/jwt-secret.yaml -n bank-of-sidecar
$ kubectl apply -f bank-of-anthos/kubernetes-manifests -n bank-of-sidecar
kubectl label namespace bank-of-sidecar istio-injection=enabled
When you access the Kiali dashboard, you'll notice that these pods are currently outside the mesh, and each application Pod contains only a single container. Because the namespace injection label was added after the application was deployed, the Pods need to be restarted for Istio to recognize them and add a sidecar to each Pod.
Restart the Pods:
kubectl -n bank-of-sidecar rollout restart deploy
Send traffic and check the Kiali dashboard. You can see it is now part of the mesh.
Securing the app with Authorization Policies
When an application is added to the mesh, Istio assigns a SPIFFE ID to each service's identity. This identity can be utilised to create authorization policies, allowing us to enhance the security of our application with both L4 and L7 authorization policies. Note that since ztunnel and HBONE imply the use of mTLS, it is not possible to use the DISABLE mode in a policy; such policies will be ignored.
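The principal strings used in the policies below follow SPIFFE's fixed structure. As a small illustration (cluster.local is Istio's default trust domain):

```shell
# An Istio workload identity is derived from its Kubernetes service account:
#   spiffe://<trust-domain>/ns/<namespace>/sa/<service-account>
# AuthorizationPolicy principals use the same path without the scheme.
trust_domain="cluster.local"
namespace="bank-of-ambient"
service_account="sleep"
principal="${trust_domain}/ns/${namespace}/sa/${service_account}"
echo "${principal}"   # cluster.local/ns/bank-of-ambient/sa/sleep
```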
We’re going to use the Istio sample sleep service to curl our frontend service:
kubectl apply -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml -n bank-of-ambient
Create an L4 authorization policy that restricts access, allowing only calls from the istio-ingress and sleep services:
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: frontend-policy
namespace: bank-of-ambient
spec:
selector:
matchLabels:
app: frontend
action: ALLOW
rules:
- from:
- source:
principals:
- cluster.local/ns/istio-ingress/sa/istio-ingress
- cluster.local/ns/bank-of-ambient/sa/sleep
EOF
Send some traffic from the sleep and external gateway services to the frontend service:
$ export SLEEP_POD=$(kubectl get pods -n bank-of-ambient -l app=sleep -o 'jsonpath={.items[0].metadata.name}')
$ kubectl exec -it $SLEEP_POD -n bank-of-ambient -- curl frontend
$ curl http://$GATEWAY_HOST_EXT
Visit the Kiali dashboard: as you can see, the loadgenerator service is no longer sending traffic, and only the istio-ingress and sleep services are allowed.
Comparing Resource Consumption and Latency
We have mentioned that Ambient mode reduces resource consumption and latency. To prove it, we will use Fortio, a load-testing tool developed within the Istio project, for latency testing. We will make requests to the istio-ingress and frontend services. To install Fortio, just deploy the following:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: fortio
spec:
ports:
- port: 8080
name: http
selector:
app: fortio
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: fortio
spec:
replicas: 1
selector:
matchLabels:
app: fortio
template:
metadata:
labels:
app: fortio
spec:
containers:
- name: fortio
image: fortio/fortio:latest_release
imagePullPolicy: Always
ports:
- containerPort: 8080
EOF
Launch the Fortio web interface to configure and perform latency tests by doing a port-forward on the Fortio Pod. Then open a web browser and go to http://localhost:8080/fortio/
kubectl port-forward svc/fortio 8080:8080
Configure Fortio to perform latency tests with 10 simultaneous connections, each making 100 requests per second:
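If you prefer the command line over the web UI, the same test can be expressed with fortio load (a sketch; this assumes Fortio was deployed in the default namespace as above, and uses the frontend Service's cluster DNS name):

```shell
# 10 concurrent connections (-c), 100 requests/second (-qps), for 60s (-t),
# targeting the frontend service in the bank-of-ambient namespace.
kubectl exec deploy/fortio -- \
  fortio load -c 10 -qps 100 -t 60s \
  http://frontend.bank-of-ambient.svc.cluster.local:80/
```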
After running the tests, in terms of latency, the results were the following:
With Ambient: 138.021 ms latency:
With Sidecar: 266.405 ms latency:
We observed a significant difference between Ambient and Sidecar modes, with Ambient showing noticeably lower latency. This demonstrates that Ambient mode enhances performance both at the platform and application levels.
Access Grafana and add this dashboard to your Grafana instance to compare resource consumption between Ambient and sidecars. Note that we haven’t configured any policies on the sidecar side; the application in the bank-of-sidecar namespace was only added to the mesh with sidecars.
istioctl dashboard grafana
- Green: workloads with Ambient (bank-of-ambient namespace)
- Yellow: workloads with sidecars (bank-of-sidecar namespace)
As you can see in the following dashboards, the total CPU and RAM usage of workloads in the bank-of-ambient namespace is much lower than in the bank-of-sidecar namespace.
Do you need L7 processing? Configure a waypoint proxy
The layered structure of Ambient mode allows you to adopt Istio gradually, transitioning seamlessly from no mesh to a secure L4 overlay, and eventually to full L7 processing where required.
With Ambient mode, most features are powered by the ztunnel, which handles traffic strictly at Layer 4. If your applications require advanced L7 mesh capabilities, you'll need to utilise a Waypoint proxy, which offers:
- Traffic Management: HTTP routing, load balancing, circuit breaking, rate limiting, fault injection, retries, and timeouts.
- Security: Advanced authorization policies based on L7 elements like request types or HTTP headers.
- Observability: Collection of HTTP metrics, access logging, and tracing.
As we already mentioned, Waypoints are configured using the Kubernetes Gateway API. These CRDs are needed to configure traffic routing but are not installed by default in Kubernetes clusters, so let’s install them:
$ kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
{ kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd/experimental?ref=v1.1.0" | kubectl apply -f -; }
Create a Waypoint proxy
By default, a Waypoint will only handle traffic destined for Services in its Namespace, as traffic directed at a Pod alone is uncommon, and often used for internal purposes such as Prometheus scraping.
However, you can configure the Waypoint to handle different types of traffic: all traffic, traffic directed specifically to workloads (Pods or VMs) within the cluster, or no traffic at all. The types of traffic redirected to the Waypoint are controlled by the istio.io/waypoint-for label on the Gateway resource.
The best and recommended approach is to start by configuring the Waypoint proxy for the entire namespace. To do so, apply the following:
$ istioctl waypoint apply --enroll-namespace -n bank-of-ambient --wait
waypoint bank-of-ambient/waypoint applied
namespace bank-of-ambient labeled with "istio.io/use-waypoint: waypoint"
Make sure the Waypoint proxy is up and running (PROGRAMMED must be True):
kubectl get gtw waypoint -n bank-of-ambient
NAME CLASS ADDRESS PROGRAMMED AGE
waypoint istio-waypoint 10.xx.xx.x True 50s
Note that a waypoint Pod was created in the bank-of-ambient namespace:
waypoint-7c6fbc6469-vrrnb 1/1 Running 0 15s
Once a namespace is configured to use a Waypoint, all requests from Pods operating in Ambient data plane mode to any Service within that namespace will be routed through the Waypoint for L7 processing and policy enforcement.
Create an L7 authorization policy, allowing only the sleep service to perform “GET” calls to the frontend service:
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: frontend-viewer
namespace: bank-of-ambient
spec:
targetRefs:
- kind: Service
group: ""
name: frontend
action: ALLOW
rules:
- from:
- source:
principals:
- cluster.local/ns/bank-of-ambient/sa/sleep
to:
- operation:
methods: ["GET"]
EOF
Verify the policy has been enforced by trying a DELETE call to the frontend from the sleep service:
kubectl exec -it $SLEEP_POD -n bank-of-ambient -- curl frontend -X DELETE
RBAC: access denied
Managing traffic
With the Waypoint proxy enabled, you can leverage more advanced Istio features such as traffic splitting between different backend services. Have a try by following the Istio docs.
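As a rough sketch of what that looks like, an HTTPRoute attached to the frontend Service could split traffic between two backend versions. Note that frontend-v1 and frontend-v2 are hypothetical Service names for illustration only; they are not part of Bank of Anthos:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: frontend-split
  namespace: bank-of-ambient
spec:
  # Attaching to a Service (rather than a Gateway) makes the route apply
  # to in-mesh traffic processed by the namespace's waypoint proxy.
  parentRefs:
  - kind: Service
    group: ""
    name: frontend
    port: 80
  rules:
  - backendRefs:
    # 90/10 weighted split between the two hypothetical backends.
    - name: frontend-v1
      port: 80
      weight: 90
    - name: frontend-v2
      port: 80
      weight: 10
```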
Conclusion
Istio Ambient represents a significant evolution in the service mesh landscape, addressing many pain points associated with traditional sidecar-based models. By offering a simplified, resource-efficient, and performance-optimised architecture, Istio Ambient makes it easier for Platform teams to adopt and manage service meshes. This sidecar-less approach enhances security, reduces operational complexity, and improves overall efficiency, making it a compelling choice for modern microservices deployments. As service mesh technologies continue to evolve, Istio Ambient is poised to play a crucial role in the future of cloud-native infrastructure.
When our experts are your experts, you can make the most of Kubernetes
You can find the manifests and detailed steps for this blog post in the GitHub repository: https://github.com/jetstack/fleetops-gke-ambient. Stay tuned for our next blog post on https://venafi.com/blog/category/cloud-native/, where we will delve into Ambient mode combined with istio-csr.
If you’re interested in finding out more about how Istio Ambient can help your organisation, reach out to Jetstack Consult to discover how to adopt, migrate and get production-ready through our discovery workshops, strategic advisory and professional services engagements.