Gloo Mesh Workshop
Gloo Mesh Enterprise

Introduction

Gloo Mesh Enterprise is a distribution of Istio Service Mesh with production support, CVE patching, FIPS builds, and a multi-cluster operational management plane to simplify running a service mesh across multiple clusters or a hybrid deployment.
Gloo Mesh also has enterprise features around multi-tenancy, global failover and routing, observability, and east-west rate limiting and policy enforcement (through AuthZ/AuthN plugins).
Gloo Mesh

Istio support

The Gloo Mesh Enterprise subscription includes end-to-end Istio support:
    Upstream first
    Specialty builds available (FIPS, ARM, etc.)
    Long Term Support (LTS) N-4
    Critical security patches
    Production break-fix
    One-hour SLA for Severity 1 issues
    Install / upgrade
    Architecture and operational guidance, best practices

Service discovery

One of the common problems related to cross-cluster communication with Istio is discovery.
Istio discovery
Istio Endpoint Discovery Service (EDS) requires each Istio control plane to have access to the Kubernetes API server of each cluster. This approach raises some security concerns, and it also means that an Istio control plane can't start if it's unable to contact one of the clusters.
Gloo Mesh discovery
Gloo Mesh solves these problems. An agent running on each cluster watches the local Kubernetes API server and passes the information to the Gloo Mesh management plane through a secure gRPC channel. Gloo Mesh then tells the agents to create the Istio ServiceEntries corresponding to the workloads discovered on the other clusters.
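Once the clusters from the labs below are set up, you can verify this translation yourself by listing the ServiceEntries Gloo Mesh has created on a workload cluster (a quick check; the namespaces where they land can vary with your setup):
# List the Istio ServiceEntries generated for workloads discovered on the other clusters
kubectl --context ${CLUSTER1} get serviceentries -A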

Observability

Gloo Mesh also uses these agents to consolidate all the metrics and access logs from the different clusters. Graphs can then be used to monitor all the communication happening globally.
Gloo Mesh graph
And you can view the access logs on demand:
Gloo Mesh access logs

Zero trust

Gloo Mesh makes it very easy for you to implement a zero-trust architecture, where no communication is allowed by default and trust is established from the attributes of the connection, the caller, and the environment.
You can then use Gloo Mesh AccessPolicies to specify which services can talk to each other globally. Here is an example:
apiVersion: networking.mesh.gloo.solo.io/v1
kind: AccessPolicy
metadata:
  namespace: gloo-mesh
  name: reviews
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
        - name: bookinfo-reviews
          namespace: default
          clusterName: cluster1
        - name: bookinfo-reviews
          namespace: default
          clusterName: cluster2
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: ratings
Gloo Mesh AccessPolicies are translated into Istio AuthorizationPolicies in the different clusters.
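If you want to see the result of that translation once the labs below are deployed, you can list the generated Istio objects directly (a quick check; the object names are generated by Gloo Mesh):
# Istio AuthorizationPolicies created from the Gloo Mesh AccessPolicies
kubectl --context ${CLUSTER1} get authorizationpolicies -A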
And what makes Gloo Mesh really unique is that you can then go to the UI and check which of the currently running services match the criteria defined in your policy:
Gloo Mesh accesspolicy

Multi-cluster traffic and failover

Gloo Mesh also provides an abstraction called TrafficPolicies that makes it very easy for you to define how services behave and interact globally. Here is an example:
apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  namespace: gloo-mesh
  name: simple
spec:
  sourceSelector:
  - kubeWorkloadMatcher:
      namespaces:
      - default
  destinationSelector:
  - kubeServiceRefs:
      services:
        - clusterName: cluster1
          name: reviews
          namespace: default
  policy:
    trafficShift:
      destinations:
        - kubeService:
            clusterName: cluster2
            name: reviews
            namespace: default
            subset:
              version: v3
          weight: 75
        - kubeService:
            clusterName: cluster1
            name: reviews
            namespace: default
            subset:
              version: v1
          weight: 15
        - kubeService:
            clusterName: cluster1
            name: reviews
            namespace: default
            subset:
              version: v2
          weight: 10
Gloo Mesh TrafficPolicies are translated into Istio VirtualServices and DestinationRules in the different clusters.
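Here again, you can inspect the result of the translation on a workload cluster once the labs below are deployed (a quick check):
# Istio VirtualServices and DestinationRules created from the Gloo Mesh TrafficPolicies
kubectl --context ${CLUSTER1} get virtualservices,destinationrules -A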
Providing high-availability of applications across clusters, zones, and regions can be a significant challenge. Ideally, source traffic should be routed to the closest available destination, or be routed to a failover destination if issues occur.
Gloo Mesh VirtualDestinations provide this capability. Here is an example:
apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualDestination
metadata:
  name: reviews-global
  namespace: gloo-mesh
spec:
  hostname: reviews.global
  port:
    number: 9080
    protocol: http
  localized:
    outlierDetection:
      consecutiveErrors: 1
      maxEjectionPercent: 100
      interval: 5s
      baseEjectionTime: 120s
    destinationSelectors:
    - kubeServiceMatcher:
        labels:
          app: reviews
  virtualMesh:
    name: virtual-mesh
    namespace: gloo-mesh
Gloo Mesh VirtualDestinations are translated into Istio DestinationRules and ServiceEntries in the different clusters.
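For example, once such a VirtualDestination exists, the reviews.global hostname should show up in a generated ServiceEntry (a sketch; the exact object name and namespace are generated by Gloo Mesh):
kubectl --context ${CLUSTER1} get serviceentries -A | grep reviews.global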

RBAC

Gloo Mesh simplifies the way users consume the service mesh globally by providing all the abstractions described previously (AccessPolicies, TrafficPolicies, ...).
But using the Gloo Mesh objects has another benefit: you can now define very fine-grained Gloo Mesh roles.
Here are a few examples of what you can do with Gloo Mesh RBAC:
    Create a role to allow a user to use a specific Virtual Mesh
    Create a role to allow a user to use a specific cluster in a Virtual Mesh
    Create a role to allow a user to only define Access Policies
    Create a role to allow a user to only define Traffic Policies
    Create a role to allow a user to only define Failover Services
    Create a role to allow a user to only create policies that target the services running in their own namespace (while allowing sources from any namespace)
One common use case is to create a role corresponding to a global namespace admin.
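As a quick way to explore this, once the rbac-webhook is enabled in Lab 2 you can list the roles that exist on the management cluster; the resource group below is an assumption based on the Gloo Mesh Enterprise 1.1 APIs:
# List the Gloo Mesh RBAC roles on the management cluster (resource group assumed)
kubectl --context ${MGMT} -n gloo-mesh get roles.rbac.enterprise.mesh.gloo.solo.io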

Gloo Mesh Gateway

Using the Istio Ingress Gateway provides many benefits, like the ability to configure a traffic shift for both north-south and east-west traffic or to leverage the Istio ServiceEntries.
But the Istio Ingress Gateway doesn't provide all the capabilities that are usually available in a proper API Gateway (authentication with OAuth, authorization with OPA, rate limiting, ...).
You can configure an API Gateway like Gloo Edge to securely expose some applications running in the Mesh, but you lose some of the advantages of the Istio Ingress Gateway in that case.
Gloo Mesh Gateway provides the best of both worlds.
It leverages the simplicity of the Gloo Mesh API and the capabilities of Gloo Edge, to enhance the Istio Ingress Gateway.
Gloo Mesh objects called VirtualGateways, VirtualHosts, and RouteTables are created by users and translated by Gloo Mesh into Istio VirtualServices, DestinationRules, and EnvoyFilters.
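To see that translation once the labs below are deployed, you can list the EnvoyFilters present on the cluster hosting the gateway (a quick check; the objects themselves are generated by Gloo Mesh):
kubectl --context ${CLUSTER1} -n istio-system get envoyfilters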

Gloo Mesh objects

Here is a representation of the most important Gloo Mesh objects and how they interact with each other.
Gloo Mesh objects

Want to learn more about Gloo Mesh?

You can find more information about Gloo Mesh in the official documentation.

Lab 1 - Deploy KinD clusters

Set the context environment variables:
export MGMT=mgmt
export CLUSTER1=cluster1
export CLUSTER2=cluster2
Note that if you can't have a Kubernetes cluster dedicated to the management plane, you would set the variables as follows:
export MGMT=cluster1
export CLUSTER1=cluster1
export CLUSTER2=cluster2
From the terminal, go to the /home/solo/workshops/gloo-mesh directory:
cd /home/solo/workshops/gloo-mesh
Run the following commands to deploy three Kubernetes clusters using Kind:
./scripts/deploy.sh 1 mgmt
./scripts/deploy.sh 2 cluster1 us-west us-west-1
./scripts/deploy.sh 3 cluster2 us-west us-west-2
Then run the following commands to wait for all the Pods to be ready:
./scripts/check.sh mgmt
./scripts/check.sh cluster1
./scripts/check.sh cluster2
Note: If you run the check.sh script immediately after the deploy.sh script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again.
Once the check.sh script completes, when you execute the kubectl get pods -A command, you should see the following:
NAMESPACE            NAME                                             READY   STATUS    RESTARTS   AGE
kube-system          calico-kube-controllers-59d85c5c84-sbk4k         1/1     Running   0          4h26m
kube-system          calico-node-przxs                                1/1     Running   0          4h26m
kube-system          coredns-6955765f44-ln8f5                         1/1     Running   0          4h26m
kube-system          coredns-6955765f44-s7xxx                         1/1     Running   0          4h26m
kube-system          etcd-cluster1-control-plane                      1/1     Running   0          4h27m
kube-system          kube-apiserver-cluster1-control-plane            1/1     Running   0          4h27m
kube-system          kube-controller-manager-cluster1-control-plane   1/1     Running   0          4h27m
kube-system          kube-proxy-ksvzw                                 1/1     Running   0          4h26m
kube-system          kube-scheduler-cluster1-control-plane            1/1     Running   0          4h27m
local-path-storage   local-path-provisioner-58f6947c7-lfmdx           1/1     Running   0          4h26m
metallb-system       controller-5c9894b5cd-cn9x2                      1/1     Running   0          4h26m
metallb-system       speaker-d7jkp                                    1/1     Running   0          4h26m
Note that this represents the output just for cluster2, although the pod footprint for all three clusters should look similar at this point.
You can see that you're currently connected to this cluster by executing the kubectl config get-contexts command:
CURRENT   NAME       CLUSTER         AUTHINFO    NAMESPACE
          cluster1   kind-cluster1   cluster1
*         cluster2   kind-cluster2   cluster2
          mgmt       kind-mgmt       kind-mgmt
Run the following command to make mgmt the current cluster:
kubectl config use-context ${MGMT}

Lab 2 - Deploy and register Gloo Mesh

First of all, you need to install the meshctl CLI:
export GLOO_MESH_VERSION=v1.1.5
curl -sL https://run.solo.io/meshctl/install | sh -
export PATH=$HOME/.gloo-mesh/bin:$PATH
Gloo Mesh Enterprise adds unique features on top of Gloo Mesh Open Source (RBAC, UI, WASM, ...).
Run the following commands to deploy Gloo Mesh Enterprise:
helm repo add gloo-mesh-enterprise https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-enterprise
helm repo update
kubectl --context ${MGMT} create ns gloo-mesh
helm upgrade --install gloo-mesh-enterprise gloo-mesh-enterprise/gloo-mesh-enterprise \
  --namespace gloo-mesh --kube-context ${MGMT} \
  --version=1.1.5 \
  --set rbac-webhook.enabled=true \
  --set licenseKey=${GLOO_MESH_LICENSE_KEY} \
  --set "rbac-webhook.adminSubjects[0].kind=Group" \
  --set "rbac-webhook.adminSubjects[0].name=system:masters"
kubectl --context ${MGMT} -n gloo-mesh rollout status deploy/enterprise-networking
Then, you need to set the environment variable for the service of the Gloo Mesh Enterprise networking component:
export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc enterprise-networking -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900
export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH} | cut -d: -f1)
Finally, you need to register the two other clusters:
meshctl cluster register --mgmt-context=${MGMT} --remote-context=${CLUSTER1} --relay-server-address=${ENDPOINT_GLOO_MESH} enterprise cluster1 --cluster-domain cluster.local
meshctl cluster register --mgmt-context=${MGMT} --remote-context=${CLUSTER2} --relay-server-address=${ENDPOINT_GLOO_MESH} enterprise cluster2 --cluster-domain cluster.local
You can list the registered clusters using the following command:
kubectl get kubernetescluster -n gloo-mesh
You should get the following output:
NAME       AGE
cluster1   27s
cluster2   23s

Note that you can also register the remote clusters with Helm:

Get the value of the root CA certificate on the management cluster and create a secret in the remote clusters:
kubectl --context ${MGMT} -n gloo-mesh get secret relay-root-tls-secret -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
kubectl --context ${CLUSTER1} create ns gloo-mesh
kubectl --context ${CLUSTER1} -n gloo-mesh create secret generic relay-root-tls-secret --from-file ca.crt=ca.crt
kubectl --context ${CLUSTER2} create ns gloo-mesh
kubectl --context ${CLUSTER2} -n gloo-mesh create secret generic relay-root-tls-secret --from-file ca.crt=ca.crt
We also need to copy over the bootstrap token used for initial communication:
kubectl --context ${MGMT} -n gloo-mesh get secret relay-identity-token-secret -o jsonpath='{.data.token}' | base64 -d > token
kubectl --context ${CLUSTER1} -n gloo-mesh create secret generic relay-identity-token-secret --from-file token=token
kubectl --context ${CLUSTER2} -n gloo-mesh create secret generic relay-identity-token-secret --from-file token=token
Install the Helm charts:
helm repo add enterprise-agent https://storage.googleapis.com/gloo-mesh-enterprise/enterprise-agent
helm repo update
helm install enterprise-agent enterprise-agent/enterprise-agent \
  --namespace gloo-mesh \
  --set relay.serverAddress=${ENDPOINT_GLOO_MESH} \
  --set relay.cluster=cluster1 \
  --kube-context=${CLUSTER1} \
  --version 1.1.5

helm install enterprise-agent enterprise-agent/enterprise-agent \
  --namespace gloo-mesh \
  --set relay.serverAddress=${ENDPOINT_GLOO_MESH} \
  --set relay.cluster=cluster2 \
  --kube-context=${CLUSTER2} \
  --version 1.1.5
Create the KubernetesCluster objects:
kubectl apply --context ${MGMT} -f- <<EOF
apiVersion: multicluster.solo.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: cluster1
  namespace: gloo-mesh
spec:
  clusterDomain: cluster.local
EOF

kubectl apply --context ${MGMT} -f- <<EOF
apiVersion: multicluster.solo.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: cluster2
  namespace: gloo-mesh
spec:
  clusterDomain: cluster.local
EOF
To use the Gloo Mesh Gateway advanced features, you need to install the Gloo Mesh addons.
First, you need to create a namespace for the addons, with Istio injection enabled:
kubectl --context ${CLUSTER1} create namespace gloo-mesh-addons
kubectl --context ${CLUSTER1} label namespace gloo-mesh-addons istio-injection=enabled
kubectl --context ${CLUSTER2} create namespace gloo-mesh-addons
kubectl --context ${CLUSTER2} label namespace gloo-mesh-addons istio-injection=enabled
Then, you can deploy the addons using Helm:
helm repo add enterprise-agent https://storage.googleapis.com/gloo-mesh-enterprise/enterprise-agent
helm repo update

helm upgrade --install enterprise-agent-addons enterprise-agent/enterprise-agent \
  --kube-context=${CLUSTER1} \
  --version=1.1.5 \
  --namespace gloo-mesh-addons \
  --set enterpriseAgent.enabled=false \
  --set rate-limiter.enabled=true \
  --set ext-auth-service.enabled=true

helm upgrade --install enterprise-agent-addons enterprise-agent/enterprise-agent \
  --kube-context=${CLUSTER2} \
  --version=1.1.5 \
  --namespace gloo-mesh-addons \
  --set enterpriseAgent.enabled=false \
  --set rate-limiter.enabled=true \
  --set ext-auth-service.enabled=true
Finally, we need to create an AccessPolicy to allow the Istio Ingress Gateways to communicate with the addons and the addons to communicate with each other:
kubectl apply --context ${MGMT} -f- <<EOF
apiVersion: networking.mesh.gloo.solo.io/v1
kind: AccessPolicy
metadata:
  namespace: gloo-mesh
  name: gloo-mesh-addons
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
        - name: istio-ingressgateway-service-account
          namespace: istio-system
          clusterName: cluster1
        - name: istio-ingressgateway-service-account
          namespace: istio-system
          clusterName: cluster2
  - kubeIdentityMatcher:
      namespaces:
      - gloo-mesh-addons
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - gloo-mesh-addons
EOF

Lab 3 - Deploy Istio

Download Istio 1.10.4:
export ISTIO_VERSION=1.10.4
curl -L https://istio.io/downloadIstio | sh -
Now let's deploy Istio on the first cluster:
kubectl --context ${CLUSTER1} create ns istio-operator

./istio-1.10.4/bin/istioctl --context ${CLUSTER1} operator init

kubectl --context ${CLUSTER1} create ns istio-system

cat << EOF | kubectl --context ${CLUSTER1} apply -f -

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istiocontrolplane-default
  namespace: istio-system
spec:
  profile: default
  meshConfig:
    trustDomain: cluster1
    accessLogFile: /dev/stdout
    enableAutoMtls: true
    defaultConfig:
      envoyMetricsService:
        address: enterprise-agent.gloo-mesh:9977
      envoyAccessLogService:
        address: enterprise-agent.gloo-mesh:9977
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
        GLOO_MESH_CLUSTER_NAME: cluster1
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
      meshNetworks:
        network1:
          endpoints:
          - fromRegistry: cluster1
          gateways:
          - registryServiceName: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443
        vm-network:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      label:
        topology.istio.io/network: network1
      enabled: true
      k8s:
        env:
          # sni-dnat adds the clusters required for AUTO_PASSTHROUGH mode
          - name: ISTIO_META_ROUTER_MODE
            value: "sni-dnat"
          # traffic through this gateway should be routed inside the network
          - name: ISTIO_META_REQUESTED_NETWORK_VIEW
            value: network1
        service:
          ports:
            - name: http2
              port: 80
              targetPort: 8080
            - name: https
              port: 443
              targetPort: 8443
            - name: tcp-status-port
              port: 15021
              targetPort: 15021
            - name: tls
              port: 15443
              targetPort: 15443
            - name: tcp-istiod
              port: 15012
              targetPort: 15012
            - name: tcp-webhook
              port: 15017
              targetPort: 15017
    pilot:
      k8s:
        env:
          - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN
            value: "true"
EOF
And deploy Istio on the second cluster:
kubectl --context ${CLUSTER2} create ns istio-operator

./istio-1.10.4/bin/istioctl --context ${CLUSTER2} operator init

kubectl --context ${CLUSTER2} create ns istio-system

cat << EOF | kubectl --context ${CLUSTER2} apply -f -

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istiocontrolplane-default
  namespace: istio-system
spec:
  profile: default
  meshConfig:
    trustDomain: cluster2
    accessLogFile: /dev/stdout
    enableAutoMtls: true
    defaultConfig:
      envoyMetricsService:
        address: enterprise-agent.gloo-mesh:9977
      envoyAccessLogService:
        address: enterprise-agent.gloo-mesh:9977
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
        GLOO_MESH_CLUSTER_NAME: cluster2
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network1
      meshNetworks:
        network1:
          endpoints:
          - fromRegistry: cluster2
          gateways:
          - registryServiceName: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443
        vm-network:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      label:
        topology.istio.io/network: network1
      enabled: true
      k8s:
        env:
          # sni-dnat adds the clusters required for AUTO_PASSTHROUGH mode
          - name: ISTIO_META_ROUTER_MODE
            value: "sni-dnat"
          # traffic through this gateway should be routed inside the network
          - name: ISTIO_META_REQUESTED_NETWORK_VIEW
            value: network1
        service:
          ports:
            - name: http2
              port: 80
              targetPort: 8080
            - name: https
              port: 443
              targetPort: 8443
            - name: tcp-status-port
              port: 15021
              targetPort: 15021
            - name: tls
              port: 15443
              targetPort: 15443
            - name: tcp-istiod
              port: 15012
              targetPort: 15012
            - name: tcp-webhook
              port: 15017
              targetPort: 15017
    pilot:
      k8s:
        env:
          - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN
            value: "true"
EOF
Run the following command until all the Istio Pods are ready:
kubectl --context ${CLUSTER1} get pods -n istio-system
When they are ready, you should get this output:
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-5c7759c8cb-52r2j   1/1     Running   0          22s
istiod-7884b57b4c-rvr2c                 1/1     Running   0          30s
Check the status on the second cluster using kubectl --context ${CLUSTER2} get pods -n istio-system.
Set the environment variable for the service of the Istio Ingress Gateway of cluster1:
export ENDPOINT_HTTP_GW_CLUSTER1=$(kubectl --context ${CLUSTER1} -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):80
export ENDPOINT_HTTPS_GW_CLUSTER1=$(kubectl --context ${CLUSTER1} -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):443
export HOST_GW_CLUSTER1=$(echo ${ENDPOINT_HTTP_GW_CLUSTER1} | cut -d: -f1)

Lab 4 - Deploy the Bookinfo demo app

Run the following commands to deploy the bookinfo app on cluster1:
bookinfo_yaml=https://raw.githubusercontent.com/istio/istio/1.10.4/samples/bookinfo/platform/kube/bookinfo.yaml
kubectl --context ${CLUSTER1} label namespace default istio-injection=enabled
# deploy bookinfo application components for all versions less than v3
kubectl --context ${CLUSTER1} apply -f ${bookinfo_yaml} -l 'app,version notin (v3)'
# deploy all bookinfo service accounts
kubectl --context ${CLUSTER1} apply -f ${bookinfo_yaml} -l 'account'
# configure ingress gateway to access bookinfo
kubectl --context ${CLUSTER1} apply -f https://raw.githubusercontent.com/istio/istio/1.10.4/samples/bookinfo/networking/bookinfo-gateway.yaml
You can check that the app is running using kubectl --context ${CLUSTER1} get pods:
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-558b8b4b76-w9qp8       2/2     Running   0          2m33s
productpage-v1-6987489c74-54lvk   2/2     Running   0          2m34s
ratings-v1-7dc98c7588-pgsxv       2/2     Running   0          2m34s
reviews-v1-7f99cc4496-lwtsr       2/2     Running   0          2m34s
reviews-v2-7d79d5bd5d-mpsk2       2/2     Running   0          2m34s
As you can see, it deployed the v1 and v2 versions of the reviews microservice. But as expected, it did not deploy v3 of reviews.
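If you want to double-check, you can list only the reviews Pods; v3 should be absent:
kubectl --context ${CLUSTER1} get pods -l app=reviews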
Now, run the following commands to deploy the bookinfo app on cluster2:
kubectl --context ${CLUSTER2} label namespace default istio-injection=enabled
# deploy all bookinfo service accounts and application components for all versions
kubectl --context ${CLUSTER2} apply -f ${bookinfo_yaml}
# configure ingress gateway to access bookinfo
kubectl --context ${CLUSTER2} apply -f https://raw.githubusercontent.com/istio/istio/1.10.4/samples/bookinfo/networking/bookinfo-gateway.yaml
You can check that the app is running using kubectl --context ${CLUSTER2} get pods:
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-558b8b4b76-gs9z2       2/2     Running   0          2m22s
productpage-v1-6987489c74-x45vd   2/2     Running   0          2m21s
ratings-v1-7dc98c7588-2n6bg       2/2     Running   0          2m21s
reviews-v1-7f99cc4496-4r48m       2/2     Running   0          2m21s
reviews-v2-7d79d5bd5d-cx9lp       2/2     Running   0          2m22s
reviews-v3-7dbcdcbc56-trjdx       2/2     Running   0          2m22s
As you can see, it deployed all three versions of the reviews microservice.
Initial setup
Get the URL to access the productpage service from your web browser using the following command:
echo "http://${ENDPOINT_HTTP_GW_CLUSTER1}/productpage"
Bookinfo working
As you can see, you can access the Bookinfo demo app.

Lab 5 - Create the Virtual Mesh

Gloo Mesh can help unify the root identity between multiple service mesh installations so any intermediates are signed by the same Root CA and end-to-end mTLS between clusters and services can be established correctly.
Run this command to see how the communication between microservices occurs currently:
kubectl --context ${CLUSTER1} exec -t deploy/reviews-v1 -c istio-proxy \
  -- openssl s_client -showcerts -connect ratings:9080
You should get something like this:
CONNECTED(00000005)
139706332271040:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../ssl/record/ssl3_record.c:332:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 309 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
command terminated with exit code 1
It means that the traffic is currently not encrypted.
Enable strict mTLS on both clusters:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF

kubectl --context ${CLUSTER2} apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF
Run the command again:
kubectl --context ${CLUSTER1} exec -t deploy/reviews-v1 -c istio-proxy \
  -- openssl s_client -showcerts -connect ratings:9080
Now, the output should look like this:
...
Certificate chain
 0 s:
   i:O = cluster1
-----BEGIN CERTIFICATE-----
MIIDFzCCAf+gAwIBAgIRALsoWlroVcCc1n+VROhATrcwDQYJKoZIhvcNAQELBQAw
...
BPiAYRMH5j0gyBqiZZEwCfzfQe1e6aAgie9T
-----END CERTIFICATE-----
 1 s:O = cluster1
   i:O = cluster1
-----BEGIN CERTIFICATE-----
MIICzjCCAbagAwIBAgIRAKIx2hzMbAYzM74OC4Lj1FUwDQYJKoZIhvcNAQELBQAw
...
uMTPjt7p/sv74fsLgrx8WMI0pVQ7+2plpjaiIZ8KvEK9ye/0Mx8uyzTG7bpmVVWo
ugY=
-----END CERTIFICATE-----
...
As you can see, mTLS is now enabled.
Now, run the same command on the second cluster:
kubectl --context ${CLUSTER2} exec -t deploy/reviews-v1 -c istio-proxy \
  -- openssl s_client -showcerts -connect ratings:9080
The output should look like this:
...
Certificate chain
 0 s:
   i:O = cluster2
-----BEGIN CERTIFICATE-----
MIIDFzCCAf+gAwIBAgIRALo1dmnbbP0hs1G82iBa2oAwDQYJKoZIhvcNAQELBQAw
...
YvDrZfKNOKwFWKMKKhCSi2rmCvLKuXXQJGhy
-----END CERTIFICATE-----
 1 s:O = cluster2
   i:O = cluster2
-----BEGIN CERTIFICATE-----
MIICzjCCAbagAwIBAgIRAIjegnzq/hN/NbMm3dmllnYwDQYJKoZIhvcNAQELBQAw
...
GZRM4zV9BopZg745Tdk2LVoHiBR536QxQv/0h1P0CdN9hNLklAhGN/Yf9SbDgLTw
6Sk=
-----END CERTIFICATE-----
...
The first certificate in the chain is the certificate of the workload and the second one is the Istio CA’s signing (CA) certificate.
As you can see, the Istio CA’s signing (CA) certificates are different in the 2 clusters, so one cluster can't validate certificates issued by the other cluster.
Creating a Virtual Mesh will unify these two CAs with a common root identity.
Run the following command to create the Virtual Mesh:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: gloo-mesh
spec:
  mtlsConfig:
    autoRestartPods: true
    shared:
      rootCertificateAuthority:
        generated: {}
  federation:
    selectors:
    - {}
  meshes:
  - name: istiod-istio-system-cluster1
    namespace: gloo-mesh
  - name: istiod-istio-system-cluster2
    namespace: gloo-mesh
EOF
When we create the VirtualMesh and set the trust model to shared, Gloo Mesh will kick off the process of unifying identities under a shared root.
First, Gloo Mesh will create the Root CA.
Then, Gloo Mesh will use the Certificate Request Agent on each of the clusters to create a new key/cert pair that will form an intermediate CA used by the mesh on that cluster. It will then create a Certificate Request (CR).
Virtual Mesh Creation
Gloo Mesh will then sign the intermediate certificates with the Root CA.
At that point, we want Istio to pick up the new intermediate CA and start using that for its workloads. To do that Gloo Mesh creates a Kubernetes secret called cacerts in the istio-system namespace.
You can have a look at the Istio documentation if you want to get more information about this process.
Check that the secret containing the new Istio CA has been created in the istio-system namespace on the first cluster:
kubectl --context ${CLUSTER1} get secret -n istio-system cacerts -o yaml
Here is the expected output:
apiVersion: v1
data:
  ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZFRENDQXZpZ0F3SUJBZ0lRUG5kRDkwejN4dytYeTBzYzNmcjRmekFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...jFWVlZtSWl3Si8va0NnNGVzWTkvZXdxSGlTMFByWDJmSDVDCmhrWnQ4dz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  ca-key.pem: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS0FJQkFBS0NBZ0VBczh6U0ZWcEFxeVNodXpMaHVXUlNFMEJJMXVwbnNBc3VnNjE2TzlKdzBlTmhhc3RtClUvZERZS...DT2t1bzBhdTFhb1VsS1NucldpL3kyYUtKbz0KLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0K
  cert-chain.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZFRENDQXZpZ0F3SUJBZ0lRUG5kRDkwejN4dytYeTBzYzNmcjRmekFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...RBTHpzQUp2ZzFLRUR4T2QwT1JHZFhFbU9CZDBVUDk0KzJCN0tjM2tkNwpzNHYycEV2YVlnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  key.pem: ""
  root-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUU0ekNDQXN1Z0F3SUJBZ0lRT2lZbXFGdTF6Q3NzR0RFQ3JOdnBMakFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...UNBVEUtLS0tLQo=
kind: Secret
metadata:
  labels:
    agent.certificates.mesh.gloo.solo.io: gloo-mesh
    cluster.multicluster.solo.io: ""
  name: cacerts
  namespace: istio-system
type: certificates.mesh.gloo.solo.io/issued_certificate
Same operation on the second cluster:
kubectl --context ${CLUSTER2} get secret -n istio-system cacerts -o yaml
Here is the expected output:
apiVersion: v1
data:
  ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZFRENDQXZpZ0F3SUJBZ0lRWXE1V29iWFhGM1gwTjlNL3BYYkNKekFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...XpqQ1RtK2QwNm9YaDI2d1JPSjdQTlNJOTkrR29KUHEraXltCkZIekhVdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  ca-key.pem: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS1FJQkFBS0NBZ0VBMGJPMTdSRklNTnh4K1lMUkEwcFJqRmRvbG1SdW9Oc3gxNUUvb3BMQ1l1RjFwUEptCndhR1U1V...MNU9JWk5ObDA4dUE1aE1Ca2gxNCtPKy9HMkoKLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0K
  cert-chain.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZFRENDQXZpZ0F3SUJBZ0lRWXE1V29iWFhGM1gwTjlNL3BYYkNKekFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...RBTHpzQUp2ZzFLRUR4T2QwT1JHZFhFbU9CZDBVUDk0KzJCN0tjM2tkNwpzNHYycEV2YVlnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  key.pem: ""
  root-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUU0ekNDQXN1Z0F3SUJBZ0lRT2lZbXFGdTF6Q3NzR0RFQ3JOdnBMakFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...UNBVEUtLS0tLQo=
kind: Secret
metadata:
  labels:
    agent.certificates.mesh.gloo.solo.io: gloo-mesh
    cluster.multicluster.solo.io: ""
  name: cacerts
  namespace: istio-system
type: certificates.mesh.gloo.solo.io/issued_certificate
As you can see, the secrets contain the same Root CA (base64 encoded), but different intermediate certs.
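You can confirm this by comparing a hash of the root certificate stored in each secret; the two values should be identical:
kubectl --context ${CLUSTER1} -n istio-system get secret cacerts -o jsonpath='{.data.root-cert\.pem}' | sha256sum
kubectl --context ${CLUSTER2} -n istio-system get secret cacerts -o jsonpath='{.data.root-cert\.pem}' | sha256sum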
Have a look at the VirtualMesh object we've just created and notice the autoRestartPods: true in the mtlsConfig. This instructs Gloo Mesh to restart the Istio pods in the relevant clusters.
This is due to a limitation of Istio. The Istio control plane picks up the CA for Citadel and does not rotate it often enough.
Now, let's check what certificates we get when we run the same commands we ran before we created the Virtual Mesh:
kubectl --context ${CLUSTER1} exec -t deploy/reviews-v1 -c istio-proxy \
  -- openssl s_client -showcerts -connect ratings:9080
The output should look like this:
...
Certificate chain
 0 s:
   i:
-----BEGIN CERTIFICATE-----
MIIEBzCCAe+gAwIBAgIRAK1yjsFkisSjNqm5tzmKQS8wDQYJKoZIhvcNAQELBQAw
...
T77lFKXx0eGtDNtWm/1IPiOutIMlFz/olVuN
-----END CERTIFICATE-----
 1 s:
   i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIFEDCCAvigAwIBAgIQPndD90z3xw+Xy0sc3fr4fzANBgkqhkiG9w0BAQsFADAb
...
hkZt8w==
-----END CERTIFICATE-----
 2 s:O = gloo-mesh
   i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIE4zCCAsugAwIBAgIQOiYmqFu1zCssGDECrNvpLjANBgkqhkiG9w0BAQsFADAb
...
s4v2pEvaYg==
-----END CERTIFICATE-----
 3 s:O = gloo-mesh
   i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIE4zCCAsugAwIBAgIQOiYmqFu1zCssGDECrNvpLjANBgkqhkiG9w0BAQsFADAb
...
s4v2pEvaYg==
-----END CERTIFICATE-----
...
And let's compare with what we get on the second cluster:
kubectl --context ${CLUSTER2} exec -t deploy/reviews-v1 -c istio-proxy \
  -- openssl s_client -showcerts -connect ratings:9080
The output should look like this:
...
Certificate chain
 0 s:
   i:
-----BEGIN CERTIFICATE-----
MIIEBjCCAe6gAwIBAgIQfSeujXiz3KsbG01+zEcXGjANBgkqhkiG9w0BAQsFADAA
...
EtTlhPLbyf2GwkUgzXhdcu2G8uf6o16b0qU=
-----END CERTIFICATE-----
 1 s:
   i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIFEDCCAvigAwIBAgIQYq5WobXXF3X0N9M/pXbCJzANBgkqhkiG9w0BAQsFADAb
...
FHzHUw==
-----END CERTIFICATE-----
 2 s:O = gloo-mesh
   i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIE4zCCAsugAwIBAgIQOiYmqFu1zCssGDECrNvpLjANBgkqhkiG9w0BAQsFADAb
...
s4v2pEvaYg==
-----END CERTIFICATE-----
 3 s:O = gloo-mesh
   i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIE4zCCAsugAwIBAgIQOiYmqFu1zCssGDECrNvpLjANBgkqhkiG9w0BAQsFADAb
...
s4v2pEvaYg==
-----END CERTIFICATE-----
...
You can see that the last certificate in the chain is now identical on both clusters. It's the new root certificate.
The first certificate is the certificate of the service. Let's decode it.
Copy and paste the content of the certificate (including the BEGIN and END CERTIFICATE lines) into a new file called /tmp/cert and run the following command:
openssl x509 -in /tmp/cert -text
The output should be as follows:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            7d:27:ae:8d:78:b3:dc:ab:1b:1b:4d:7e:cc:47:17:1a
    Signature Algorithm: sha256WithRSAEncryption
        Issuer:
        Validity
            Not Before: Sep 17 08:21:08 2020 GMT
            Not After : Sep 18 08:21:08 2020 GMT
        Subject:
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                ...
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Alternative Name: critical
                URI:spiffe://cluster2/ns/default/sa/bookinfo-ratings
    Signature Algorithm: sha256WithRSAEncryption
    ...
-----BEGIN CERTIFICATE-----
MIIEBjCCAe6gAwIBAgIQfSeujXiz3KsbG01+zEcXGjANBgkqhkiG9w0BAQsFADAA
...
EtTlhPLbyf2GwkUgzXhdcu2G8uf6o16b0qU=
-----END CERTIFICATE-----
The Subject Alternative Name (SAN) is the most interesting part. It allows the sidecar proxy of the reviews service to validate that it talks to the sidecar proxy of the ratings service.
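If your openssl version supports it (1.1.1 or newer), you can also display just the SAN instead of the full text output:
openssl x509 -in /tmp/cert -noout -ext subjectAltName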

Lab 6 - Access control

In the previous guide, we federated multiple meshes and established a shared root CA for a shared identity domain. Now that we have a logical VirtualMesh, we need a way to establish access policies across the multiple meshes, without treating each of them individually. Gloo Mesh helps by establishing a single, unified API that understands the logical VirtualMesh construct.
The application works correctly because RBAC isn't enforced.
Let's update the VirtualMesh to enable it:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: gloo-mesh
spec:
  mtlsConfig:
    autoRestartPods: true
    shared:
      rootCertificateAuthority:
        generated: {}
  federation:
    selectors:
    - {}
  globalAccessPolicy: ENABLED
  meshes:
  - name: istiod-istio-system-cluster1
    namespace: gloo-mesh
  - name: istiod-istio-system-cluster2
    namespace: gloo-mesh
EOF
After a few seconds, if you refresh the web page, you should see that you don't have access to the application anymore.
You should get the following error message:
RBAC: access denied
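You can also check this from the terminal; the Istio Ingress Gateway should now return a 403 for the productpage:
curl -s -o /dev/null -w "%{http_code}\n" http://${ENDPOINT_HTTP_GW_CLUSTER1}/productpage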
You need to create a Gloo Mesh Access Policy to allow the Istio Ingress Gateway to access the productpage microservice:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: AccessPolicy
metadata:
  namespace: gloo-mesh
  name: istio-ingressgateway
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
        - name: istio-ingressgateway-service-account
          namespace: istio-system
          clusterName: cluster1
        - name: istio-ingressgateway-service-account
          namespace: istio-system
          clusterName: cluster2
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: productpage
EOF
Now, refresh the page again and you should be able to access the application, but not the details or the reviews:
Bookinfo RBAC 1
You can create another Gloo Mesh Access Policy to allow the productpage microservice to talk to these two microservices:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: AccessPolicy
metadata:
  namespace: gloo-mesh
  name: productpage
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
        - name: bookinfo-productpage
          namespace: default
          clusterName: cluster1
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: details
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: reviews
EOF
If you refresh the page, you should be able to see the product details and the reviews, but the reviews microservice can't access the ratings microservice:
Bookinfo RBAC 2
Create another AccessPolicy to fix the issue:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: AccessPolicy
metadata:
  namespace: gloo-mesh
  name: reviews
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
        - name: bookinfo-reviews
          namespace: default
          clusterName: cluster1
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: ratings
EOF
Refresh the page once more and all the services should now work:
Bookinfo working
If you refresh the web page several times, you should see only the versions v1 (no stars) and v2 (black stars), which means that all the requests are still handled by the first cluster.

Lab 7 - Traffic policy

We're going to use Gloo Mesh Traffic Policies to inject faults and configure timeouts.
Let's create the following TrafficPolicy to inject a delay when the v2 version of the reviews service talks to the ratings service on cluster1:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  name: ratings-fault-injection
  namespace: gloo-mesh
spec:
  sourceSelector:
  - kubeWorkloadMatcher:
      labels:
        app: reviews
        version: v2
      namespaces:
      - default
      clusters:
      - cluster1
  destinationSelector:
  - kubeServiceRefs:
      services:
      - clusterName: cluster1
        name: ratings
        namespace: default
  policy:
    faultInjection:
      fixedDelay: 2s
      percentage: 100
EOF
If you refresh the webpage, you should see that it takes longer to load the productpage when the v2 version of the reviews service is called.
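You can also measure this from the terminal. Since the productpage alternates between the v1 and v2 versions of the reviews service, roughly half of the requests should take about 2 seconds longer:
curl -s -o /dev/null -w "%{time_total}\n" http://${ENDPOINT_HTTP_GW_CLUSTER1}/productpage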
Now, let's configure a 0.5s request timeout when the productpage service calls the reviews service on cluster1.
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  name: reviews-request-timeout
  namespace: gloo-mesh
spec:
  sourceSelector:
  - kubeWorkloadMatcher:
      labels:
        app: productpage
      namespaces:
      - default
      clusters:
      - cluster1
  destinationSelector:
  - kubeServiceRefs:
      services:
      - clusterName: cluster1
        name: reviews
        namespace: default
  policy:
    requestTimeout: 0.5s
EOF
If you refresh the page several times, you'll see an error message saying that the reviews are unavailable when the productpage tries to communicate with the v2 version of the reviews service.
Bookinfo v3
Let's delete the TrafficPolicies:
kubectl --context ${MGMT} -n gloo-mesh delete trafficpolicy ratings-fault-injection
kubectl --context ${MGMT} -n gloo-mesh delete trafficpolicy reviews-request-timeout

Lab 8 - Multi-cluster Traffic

On the first cluster, the v3 version of the reviews microservice doesn't exist, so we're going to redirect some of the traffic to the second cluster to make it available.
Multicluster traffic
Let's create the following TrafficPolicy:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  namespace: gloo-mesh
  name: simple
spec:
  sourceSelector:
  - kubeWorkloadMatcher:
      namespaces:
      - default
  destinationSelector:
  - kubeServiceRefs: