Gloo Mesh Workshop
Gloo Mesh Enterprise


Introduction

Gloo Mesh Enterprise is a distribution of Istio Service Mesh with production support, CVE patching, FIPS builds, and a multi-cluster operational management plane to simplify running a service mesh across multiple clusters or a hybrid deployment.
Gloo Mesh also has enterprise features around multi-tenancy, global failover and routing, observability, and east-west rate limiting and policy enforcement (through AuthZ/AuthN plugins).

Istio support

The Gloo Mesh Enterprise subscription includes end-to-end Istio support:
  • Upstream first
  • Specialty builds available (FIPS, ARM, etc.)
  • Long Term Support (LTS) N-4
  • Critical security patches
  • Production break-fix
  • One-hour SLA for Severity 1 issues
  • Install / upgrade
  • Architecture and operational guidance, best practices

Service discovery

One of the common challenges of cross-cluster communication with Istio is service discovery.
Istio discovery
Istio Endpoint Discovery Service (EDS) requires each Istio control plane to have access to the Kubernetes API server of each cluster. There are some security concerns with this approach, but it also means that an Istio control plane can’t start if it’s not able to contact one of the clusters.
Gloo Mesh discovery
Gloo Mesh solves these problems. An agent running on each cluster watches the local Kubernetes API server and passes the information to the Gloo Mesh management plane through a secured gRPC channel. Gloo Mesh then tells the agents to create the Istio ServiceEntries corresponding to the workloads discovered on the other clusters.
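For illustration, a reviews service discovered on cluster2 could surface on cluster1 as a ServiceEntry similar to the following minimal sketch (the hostname, name, and gateway address are hypothetical; the exact resources Gloo Mesh generates may differ):
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: reviews.default.svc.cluster2.global   # hypothetical name
  namespace: istio-system
spec:
  hosts:
  - reviews.default.svc.cluster2.global       # federated hostname visible to workloads on cluster1
  location: MESH_INTERNAL
  resolution: DNS
  ports:
  - name: http
    number: 9080
    protocol: HTTP
  endpoints:
  - address: 172.18.0.230                     # assumed address of cluster2's Istio ingress gateway
    ports:
      http: 15443                             # Istio cross-network TLS port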

Observability

Gloo Mesh also uses these agents to consolidate all the metrics and access logs from the different clusters. Graphs can then be used to monitor all the communication happening globally.
Gloo Mesh graph
And you can view the access logs on demand:
Gloo Mesh access logs

Zero trust

Gloo Mesh makes it very easy for you to implement a zero-trust architecture, where trust is established by the attributes of the connection, caller, and environment, and no communication is allowed by default.
You can then use Gloo Mesh AccessPolicies to specify what services can talk together globally. Here is an example:
apiVersion: networking.mesh.gloo.solo.io/v1
kind: AccessPolicy
metadata:
  namespace: gloo-mesh
  name: reviews
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
        - name: bookinfo-reviews
          namespace: default
          clusterName: cluster1
        - name: bookinfo-reviews
          namespace: default
          clusterName: cluster2
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: ratings
Gloo Mesh AccessPolicies are translated into Istio AuthorizationPolicies in the different clusters.
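For example, the AccessPolicy above could result in an Istio AuthorizationPolicy roughly like the following sketch on each cluster (the name and the exact selector Gloo Mesh generates may differ):
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ratings              # hypothetical name
  namespace: default
spec:
  selector:
    matchLabels:
      service: ratings       # targets the ratings workloads
  action: ALLOW
  rules:
  - from:
    - source:
        principals:          # identities of the allowed service accounts
        - cluster1/ns/default/sa/bookinfo-reviews
        - cluster2/ns/default/sa/bookinfo-reviews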
And what makes Gloo Mesh really unique is that you can then go to the UI and check which of the currently running services match the criteria defined in your policy:
Gloo Mesh accesspolicy

Multi-cluster traffic and failover

Gloo Mesh also provides an abstraction called TrafficPolicies that makes it very easy for you to define how services behave and interact globally. Here is an example:
apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  namespace: gloo-mesh
  name: simple
spec:
  sourceSelector:
  - kubeWorkloadMatcher:
      namespaces:
      - default
  destinationSelector:
  - kubeServiceRefs:
      services:
        - clusterName: cluster1
          name: reviews
          namespace: default
  policy:
    trafficShift:
      destinations:
        - kubeService:
            clusterName: cluster2
            name: reviews
            namespace: default
            subset:
              version: v3
          weight: 75
        - kubeService:
            clusterName: cluster1
            name: reviews
            namespace: default
            subset:
              version: v1
          weight: 15
        - kubeService:
            clusterName: cluster1
            name: reviews
            namespace: default
            subset:
              version: v2
          weight: 10
Gloo Mesh TrafficPolicies are translated into Istio VirtualServices and DestinationRules in the different clusters.
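As an illustration, the trafficShift above could translate into a VirtualService similar to this sketch on cluster1 (the name and the federated hostname used for the remote destination are assumptions):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews              # hypothetical name
  namespace: default
spec:
  hosts:
  - reviews.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: reviews.default.svc.cluster2.global   # federated hostname pointing at cluster2 (assumed)
        subset: v3
      weight: 75
    - destination:
        host: reviews.default.svc.cluster.local
        subset: v1
      weight: 15
    - destination:
        host: reviews.default.svc.cluster.local
        subset: v2
      weight: 10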
Providing high-availability of applications across clusters, zones, and regions can be a significant challenge. Ideally, source traffic should be routed to the closest available destination, or be routed to a failover destination if issues occur.
Gloo Mesh VirtualDestinations provide this capability. Here is an example:
apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualDestination
metadata:
  name: reviews-global
  namespace: gloo-mesh
spec:
  hostname: reviews.global
  port:
    number: 9080
    protocol: http
  localized:
    outlierDetection:
      consecutiveErrors: 2
      maxEjectionPercent: 100
      interval: 5s
      baseEjectionTime: 30s
    destinationSelectors:
    - kubeServiceMatcher:
        labels:
          app: reviews
  virtualMesh:
    name: virtual-mesh
    namespace: gloo-mesh
Gloo Mesh VirtualDestinations are translated into Istio DestinationRules and ServiceEntries in the different clusters.
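For example, the outlierDetection settings above would end up in a DestinationRule similar to this sketch (name and namespace are illustrative):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-global           # hypothetical name
  namespace: istio-system
spec:
  host: reviews.global
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 2    # maps to consecutiveErrors in the VirtualDestination
      interval: 5s
      baseEjectionTime: 30s
      maxEjectionPercent: 100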

RBAC

Gloo Mesh simplifies the way users consume the service mesh globally by providing all the abstractions described previously (AccessPolicies, TrafficPolicies, ...).
But using the Gloo Mesh objects has another benefit: you can now define Gloo Mesh roles that are very fine-grained.
Here are a few examples of what you can do with Gloo Mesh RBAC:
  • Create a role to allow a user to use a specific Virtual Mesh
  • Create a role to allow a user to use a specific cluster in a Virtual Mesh
  • Create a role to allow a user to only define Access Policies
  • Create a role to allow a user to only define Traffic Policies
  • Create a role to allow a user to only define Failover Services
  • Create a role to allow a user to only create policies that target the services running in their namespace (but coming from services in any namespace)
One common use case is to create a role corresponding to a global namespace admin.
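As a quick way to explore this, you can list the Gloo Mesh RBAC objects on the management cluster. This assumes the CRDs are registered under the rbac.enterprise.mesh.gloo.solo.io group and live in the gloo-mesh namespace; check kubectl api-resources if the names differ in your version:
kubectl -n gloo-mesh get roles.rbac.enterprise.mesh.gloo.solo.io          # Gloo Mesh roles (not Kubernetes RBAC roles)
kubectl -n gloo-mesh get rolebindings.rbac.enterprise.mesh.gloo.solo.io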

Gloo Mesh Gateway

Using the Istio Ingress Gateway provides many benefits, like the ability to configure a traffic shift for both north-south and east-west traffic or to leverage the Istio ServiceEntries.
But the Istio Ingress Gateway doesn't provide all the capabilities that are usually available in a proper API Gateway (authentication with OAuth, authorization with OPA, rate limiting, ...).
You can configure an API Gateway like Gloo Edge to securely expose some applications running in the Mesh, but you lose some of the advantages of the Istio Ingress Gateway in that case.
Gloo Mesh Gateway provides the best of both worlds.
It leverages the simplicity of the Gloo Mesh API and the capabilities of Gloo Edge, to enhance the Istio Ingress Gateway.
Gloo Mesh objects called VirtualGateways, VirtualHosts, and RouteTables are created by users and translated by Gloo Mesh into Istio VirtualServices, DestinationRules, and EnvoyFilters.

Gloo Mesh objects

Here is a representation of the most important Gloo Mesh objects and how they interact together.
Gloo Mesh objects

Want to learn more about Gloo Mesh?

You can find more information about Gloo Mesh in the official documentation at docs.solo.io.

Lab 1 - Deploy KinD clusters

Set the context environment variables:
export MGMT=mgmt
export CLUSTER1=cluster1
export CLUSTER2=cluster2
Note that if you can't have a Kubernetes cluster dedicated to the management plane, you can set the variables as follows:
export MGMT=cluster1
export CLUSTER1=cluster1
export CLUSTER2=cluster2
From the terminal, go to the /home/solo/workshops/gloo-mesh directory:
cd /home/solo/workshops/gloo-mesh
Run the following commands to deploy three Kubernetes clusters using Kind:
./scripts/deploy.sh 1 mgmt
./scripts/deploy.sh 2 cluster1 us-west us-west-1
./scripts/deploy.sh 3 cluster2 us-west us-west-2
Then run the following commands to wait for all the Pods to be ready:
./scripts/check.sh mgmt
./scripts/check.sh cluster1
./scripts/check.sh cluster2
Note: If you run the check.sh script immediately after the deploy.sh script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again.
Once the check.sh script completes, when you execute the kubectl get pods -A command, you should see the following:
NAMESPACE            NAME                                             READY   STATUS    RESTARTS   AGE
kube-system          calico-kube-controllers-59d85c5c84-sbk4k         1/1     Running   0          4h26m
kube-system          calico-node-przxs                                1/1     Running   0          4h26m
kube-system          coredns-6955765f44-ln8f5                         1/1     Running   0          4h26m
kube-system          coredns-6955765f44-s7xxx                         1/1     Running   0          4h26m
kube-system          etcd-cluster1-control-plane                      1/1     Running   0          4h27m
kube-system          kube-apiserver-cluster1-control-plane            1/1     Running   0          4h27m
kube-system          kube-controller-manager-cluster1-control-plane   1/1     Running   0          4h27m
kube-system          kube-proxy-ksvzw                                 1/1     Running   0          4h26m
kube-system          kube-scheduler-cluster1-control-plane            1/1     Running   0          4h27m
local-path-storage   local-path-provisioner-58f6947c7-lfmdx           1/1     Running   0          4h26m
metallb-system       controller-5c9894b5cd-cn9x2                      1/1     Running   0          4h26m
metallb-system       speaker-d7jkp                                    1/1     Running   0          4h26m
Note that this represents the output for just one of the clusters; the pod footprint of all three clusters should look similar at this point.
You can see which cluster you're currently connected to by executing the kubectl config get-contexts command:
CURRENT   NAME       CLUSTER         AUTHINFO        NAMESPACE
          cluster1   kind-cluster1   cluster1
*         cluster2   kind-cluster2   cluster2
          mgmt       kind-mgmt       kind-mgmt
Run the following command to make mgmt the current cluster.
kubectl config use-context ${MGMT}

Lab 2 - Deploy and register Gloo Mesh

First of all, you need to install the meshctl CLI:
export GLOO_MESH_VERSION=v1.2.3
curl -sL https://run.solo.io/meshctl/install | sh -
export PATH=$HOME/.gloo-mesh/bin:$PATH
Gloo Mesh Enterprise adds unique features on top of Gloo Mesh Open Source (RBAC, UI, WASM, ...).
Run the following commands to deploy Gloo Mesh Enterprise:
helm repo add gloo-mesh-enterprise https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-enterprise
helm repo update
kubectl --context ${MGMT} create ns gloo-mesh
helm upgrade --install gloo-mesh-enterprise gloo-mesh-enterprise/gloo-mesh-enterprise \
  --namespace gloo-mesh --kube-context ${MGMT} \
  --version=1.2.3 \
  --set rbac-webhook.enabled=true \
  --set licenseKey=${GLOO_MESH_LICENSE_KEY} \
  --set "rbac-webhook.adminSubjects[0].kind=Group" \
  --set "rbac-webhook.adminSubjects[0].name=system:masters"
kubectl --context ${MGMT} -n gloo-mesh rollout status deploy/enterprise-networking
Then, you need to set the environment variable for the service of the Gloo Mesh Enterprise networking component:
export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc enterprise-networking -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900
export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH} | cut -d: -f1)
Finally, you need to register the two other clusters:
meshctl cluster register --mgmt-context=${MGMT} --remote-context=${CLUSTER1} --relay-server-address=${ENDPOINT_GLOO_MESH} enterprise cluster1 --cluster-domain cluster.local
meshctl cluster register --mgmt-context=${MGMT} --remote-context=${CLUSTER2} --relay-server-address=${ENDPOINT_GLOO_MESH} enterprise cluster2 --cluster-domain cluster.local
You can list the registered clusters using the following command:
kubectl --context ${MGMT} get kubernetescluster -n gloo-mesh
You should get the following output:
NAME       AGE
cluster1   27s
cluster2   23s

Note that you can also register the remote clusters with Helm; refer to docs.solo.io for details.

To use the Gloo Mesh Gateway advanced features, you need to install the Gloo Mesh addons.
First, you need to create a namespace for the addons, with Istio injection enabled:
kubectl --context ${CLUSTER1} create namespace gloo-mesh-addons
kubectl --context ${CLUSTER1} label namespace gloo-mesh-addons istio-injection=enabled
kubectl --context ${CLUSTER2} create namespace gloo-mesh-addons
kubectl --context ${CLUSTER2} label namespace gloo-mesh-addons istio-injection=enabled
Then, you can deploy the addons using Helm:
helm repo add enterprise-agent https://storage.googleapis.com/gloo-mesh-enterprise/enterprise-agent
helm repo update

helm upgrade --install enterprise-agent-addons enterprise-agent/enterprise-agent \
  --kube-context=${CLUSTER1} \
  --version=1.2.3 \
  --namespace gloo-mesh-addons \
  --set enterpriseAgent.enabled=false \
  --set rate-limiter.enabled=true \
  --set ext-auth-service.enabled=true

helm upgrade --install enterprise-agent-addons enterprise-agent/enterprise-agent \
  --kube-context=${CLUSTER2} \
  --version=1.2.3 \
  --namespace gloo-mesh-addons \
  --set enterpriseAgent.enabled=false \
  --set rate-limiter.enabled=true \
  --set ext-auth-service.enabled=true
Finally, we need to create an AccessPolicy for the Istio Ingress Gateways to communicate with the addons and for the addons to communicate together:
kubectl apply --context ${MGMT} -f- <<EOF
apiVersion: networking.mesh.gloo.solo.io/v1
kind: AccessPolicy
metadata:
  namespace: gloo-mesh
  name: gloo-mesh-addons
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
        - name: istio-ingressgateway-service-account
          namespace: istio-system
          clusterName: cluster1
        - name: istio-ingressgateway-service-account
          namespace: istio-system
          clusterName: cluster2
  - kubeIdentityMatcher:
      namespaces:
      - gloo-mesh-addons
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - gloo-mesh-addons
EOF

Lab 3 - Deploy Istio

Download Istio 1.11.4:
export ISTIO_VERSION=1.11.4
curl -L https://istio.io/downloadIstio | sh -
Now let's deploy Istio on the first cluster:
kubectl --context ${CLUSTER1} create ns istio-system
cat << EOF | ./istio-1.11.4/bin/istioctl --context ${CLUSTER1} install -y -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istiocontrolplane-default
  namespace: istio-system
spec:
  profile: default
  meshConfig:
    trustDomain: cluster1
    accessLogFile: /dev/stdout
    enableAutoMtls: true
    defaultConfig:
      envoyMetricsService:
        address: enterprise-agent.gloo-mesh:9977
      envoyAccessLogService:
        address: enterprise-agent.gloo-mesh:9977
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
        GLOO_MESH_CLUSTER_NAME: cluster1
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
      meshNetworks:
        network1:
          endpoints:
          - fromRegistry: cluster1
          gateways:
          - registryServiceName: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443
        vm-network:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      label:
        topology.istio.io/network: network1
      enabled: true
      k8s:
        env:
          # sni-dnat adds the clusters required for AUTO_PASSTHROUGH mode
          - name: ISTIO_META_ROUTER_MODE
            value: "sni-dnat"
          # traffic through this gateway should be routed inside the network
          - name: ISTIO_META_REQUESTED_NETWORK_VIEW
            value: network1
        service:
          ports:
            - name: http2
              port: 80
              targetPort: 8080
            - name: https
              port: 443
              targetPort: 8443
            - name: tcp-status-port
              port: 15021
              targetPort: 15021
            - name: tls
              port: 15443
              targetPort: 15443
            - name: tcp-istiod
              port: 15012
              targetPort: 15012
            - name: tcp-webhook
              port: 15017
              targetPort: 15017
    pilot:
      k8s:
        env:
          - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN
            value: "true"
EOF
And deploy Istio on the second cluster:
kubectl --context ${CLUSTER2} create ns istio-system
cat << EOF | ./istio-1.11.4/bin/istioctl --context ${CLUSTER2} install -y -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istiocontrolplane-default
  namespace: istio-system
spec:
  profile: default
  meshConfig:
    trustDomain: cluster2
    accessLogFile: /dev/stdout
    enableAutoMtls: true
    defaultConfig:
      envoyMetricsService:
        address: enterprise-agent.gloo-mesh:9977
      envoyAccessLogService:
        address: enterprise-agent.gloo-mesh:9977
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
        GLOO_MESH_CLUSTER_NAME: cluster2
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network1
      meshNetworks:
        network1:
          endpoints:
          - fromRegistry: cluster2
          gateways:
          - registryServiceName: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443
        vm-network:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      label:
        topology.istio.io/network: network1
      enabled: true
      k8s:
        env:
          # sni-dnat adds the clusters required for AUTO_PASSTHROUGH mode
          - name: ISTIO_META_ROUTER_MODE
            value: "sni-dnat"
          # traffic through this gateway should be routed inside the network
          - name: ISTIO_META_REQUESTED_NETWORK_VIEW
            value: network1
        service:
          ports:
            - name: http2
              port: 80
              targetPort: 8080
            - name: https
              port: 443
              targetPort: 8443
            - name: tcp-status-port
              port: 15021
              targetPort: 15021
            - name: tls
              port: 15443
              targetPort: 15443
            - name: tcp-istiod
              port: 15012
              targetPort: 15012
            - name: tcp-webhook
              port: 15017
              targetPort: 15017
    pilot:
      k8s:
        env:
          - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN
            value: "true"
EOF
Run the following command until all the Istio Pods are ready:
kubectl --context ${CLUSTER1} get pods -n istio-system
When they are ready, you should get this output:
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-5c7759c8cb-52r2j   1/1     Running   0          22s
istiod-7884b57b4c-rvr2c                 1/1     Running   0          30s
Check the status on the second cluster using:
kubectl --context ${CLUSTER2} get pods -n istio-system
Set the environment variable for the service of the Istio Ingress Gateway of cluster1:
export ENDPOINT_HTTP_GW_CLUSTER1=$(kubectl --context ${CLUSTER1} -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):80
export ENDPOINT_HTTPS_GW_CLUSTER1=$(kubectl --context ${CLUSTER1} -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):443
export HOST_GW_CLUSTER1=$(echo ${ENDPOINT_HTTP_GW_CLUSTER1} | cut -d: -f1)

Lab 4 - Deploy the Bookinfo demo app

Run the following commands to deploy the bookinfo app on cluster1:
bookinfo_yaml=https://raw.githubusercontent.com/istio/istio/1.11.4/samples/bookinfo/platform/kube/bookinfo.yaml
kubectl --context ${CLUSTER1} label namespace default istio-injection=enabled
# deploy bookinfo application components for all versions less than v3
kubectl --context ${CLUSTER1} apply -f ${bookinfo_yaml} -l 'app,version notin (v3)'
# deploy all bookinfo service accounts
kubectl --context ${CLUSTER1} apply -f ${bookinfo_yaml} -l 'account'
# configure ingress gateway to access bookinfo
kubectl --context ${CLUSTER1} apply -f https://raw.githubusercontent.com/istio/istio/1.11.4/samples/bookinfo/networking/bookinfo-gateway.yaml
You can check that the app is running using:
kubectl --context ${CLUSTER1} get pods
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-558b8b4b76-w9qp8       2/2     Running   0          2m33s
productpage-v1-6987489c74-54lvk   2/2     Running   0          2m34s
ratings-v1-7dc98c7588-pgsxv       2/2     Running   0          2m34s
reviews-v1-7f99cc4496-lwtsr       2/2     Running   0          2m34s
reviews-v2-7d79d5bd5d-mpsk2       2/2     Running   0          2m34s
As you can see, it deployed the v1 and v2 versions of the reviews microservice. But as expected, it did not deploy v3 of reviews.
Now, run the following commands to deploy the bookinfo app on cluster2:
kubectl --context ${CLUSTER2} label namespace default istio-injection=enabled
# deploy all bookinfo service accounts and application components for all versions
kubectl --context ${CLUSTER2} apply -f ${bookinfo_yaml}
# configure ingress gateway to access bookinfo
kubectl --context ${CLUSTER2} apply -f https://raw.githubusercontent.com/istio/istio/1.11.4/samples/bookinfo/networking/bookinfo-gateway.yaml
You can check that the app is running using:
kubectl --context ${CLUSTER2} get pods
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-558b8b4b76-gs9z2       2/2     Running   0          2m22s
productpage-v1-6987489c74-x45vd   2/2     Running   0          2m21s
ratings-v1-7dc98c7588-2n6bg       2/2     Running   0          2m21s
reviews-v1-7f99cc4496-4r48m       2/2     Running   0          2m21s
reviews-v2-7d79d5bd5d-cx9lp       2/2     Running   0          2m22s
reviews-v3-7dbcdcbc56-trjdx       2/2     Running   0          2m22s
As you can see, it deployed all three versions of the reviews microservice.
Get the URL to access the productpage service from your web browser using the following command:
echo "http://${ENDPOINT_HTTP_GW_CLUSTER1}/productpage"
Bookinfo working
As you can see, you can access the Bookinfo demo app.

Lab 5 - Create the Virtual Mesh

Gloo Mesh can help unify the root identity between multiple service mesh installations so any intermediates are signed by the same Root CA and end-to-end mTLS between clusters and services can be established correctly.
Run this command to see how the communication between microservices occurs currently:
kubectl --context ${CLUSTER1} exec -t deploy/reviews-v1 -c istio-proxy \
  -- openssl s_client -showcerts -connect ratings:9080
You should get something like this:
CONNECTED(00000005)
139706332271040:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../ssl/record/ssl3_record.c:332:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 309 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
command terminated with exit code 1
This means that the traffic is currently not encrypted.
Enable mTLS on both clusters:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF

kubectl --context ${CLUSTER2} apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF
Run the command again:
kubectl --context ${CLUSTER1} exec -t deploy/reviews-v1 -c istio-proxy \
  -- openssl s_client -showcerts -connect ratings:9080
Now, the output should look like this:
...
Certificate chain
 0 s:
   i:O = cluster1
-----BEGIN CERTIFICATE-----
MIIDFzCCAf+gAwIBAgIRALsoWlroVcCc1n+VROhATrcwDQYJKoZIhvcNAQELBQAw
...
BPiAYRMH5j0gyBqiZZEwCfzfQe1e6aAgie9T
-----END CERTIFICATE-----
 1 s:O = cluster1
   i:O = cluster1
-----BEGIN CERTIFICATE-----
MIICzjCCAbagAwIBAgIRAKIx2hzMbAYzM74OC4Lj1FUwDQYJKoZIhvcNAQELBQAw
...
uMTPjt7p/sv74fsLgrx8WMI0pVQ7+2plpjaiIZ8KvEK9ye/0Mx8uyzTG7bpmVVWo
ugY=
-----END CERTIFICATE-----
...
As you can see, mTLS is now enabled.
Now, run the same command on the second cluster:
kubectl --context ${CLUSTER2} exec -t deploy/reviews-v1 -c istio-proxy \
  -- openssl s_client -showcerts -connect ratings:9080
The output should look like this:
...
Certificate chain
 0 s:
   i:O = cluster2
-----BEGIN CERTIFICATE-----
MIIDFzCCAf+gAwIBAgIRALo1dmnbbP0hs1G82iBa2oAwDQYJKoZIhvcNAQELBQAw
...
YvDrZfKNOKwFWKMKKhCSi2rmCvLKuXXQJGhy
-----END CERTIFICATE-----
 1 s:O = cluster2
   i:O = cluster2
-----BEGIN CERTIFICATE-----
MIICzjCCAbagAwIBAgIRAIjegnzq/hN/NbMm3dmllnYwDQYJKoZIhvcNAQELBQAw
...
GZRM4zV9BopZg745Tdk2LVoHiBR536QxQv/0h1P0CdN9hNLklAhGN/Yf9SbDgLTw
6Sk=
-----END CERTIFICATE-----
...
The first certificate in the chain is the certificate of the workload and the second one is the Istio CA’s signing (CA) certificate.
As you can see, the Istio CA’s signing (CA) certificates are different in the 2 clusters, so one cluster can't validate certificates issued by the other cluster.
Creating a Virtual Mesh will unify these two CAs with a common root identity.
Run the following command to create the Virtual Mesh:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: gloo-mesh
spec:
  mtlsConfig:
    autoRestartPods: true
    shared:
      rootCertificateAuthority:
        generated: {}
  federation:
    selectors:
    - {}
  meshes:
  - name: istiod-istio-system-cluster1
    namespace: gloo-mesh
  - name: istiod-istio-system-cluster2
    namespace: gloo-mesh
EOF
When we create the VirtualMesh and set the trust model to shared, Gloo Mesh will kick off the process of unifying identities under a shared root.
First, Gloo Mesh will create the Root CA.
Then, Gloo Mesh will use the Certificate Request Agent on each of the clusters to create a new key/cert pair that will form an intermediate CA used by the mesh on that cluster. It will then create a Certificate Request (CR).
Virtual Mesh Creation
Gloo Mesh will then sign the intermediate certificates with the Root CA.
At that point, we want Istio to pick up the new intermediate CA and start using that for its workloads. To do that Gloo Mesh creates a Kubernetes secret called cacerts in the istio-system namespace.
You can have a look at the Istio documentation here if you want to get more information about this process.
Check that the secret containing the new Istio CA has been created in the istio-system namespace on the first cluster:
kubectl --context ${CLUSTER1} get secret -n istio-system cacerts -o yaml
Here is the expected output:
apiVersion: v1
data:
  ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZFRENDQXZpZ0F3SUJBZ0lRUG5kRDkwejN4dytYeTBzYzNmcjRmekFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...
    jFWVlZtSWl3Si8va0NnNGVzWTkvZXdxSGlTMFByWDJmSDVDCmhrWnQ4dz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  ca-key.pem: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS0FJQkFBS0NBZ0VBczh6U0ZWcEFxeVNodXpMaHVXUlNFMEJJMXVwbnNBc3VnNjE2TzlKdzBlTmhhc3RtClUvZERZS...
    DT2t1bzBhdTFhb1VsS1NucldpL3kyYUtKbz0KLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0K
  cert-chain.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZFRENDQXZpZ0F3SUJBZ0lRUG5kRDkwejN4dytYeTBzYzNmcjRmekFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...
    RBTHpzQUp2ZzFLRUR4T2QwT1JHZFhFbU9CZDBVUDk0KzJCN0tjM2tkNwpzNHYycEV2YVlnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  key.pem: ""
  root-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUU0ekNDQXN1Z0F3SUJBZ0lRT2lZbXFGdTF6Q3NzR0RFQ3JOdnBMakFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...
    UNBVEUtLS0tLQo=
kind: Secret
metadata:
  labels:
    agent.certificates.mesh.gloo.solo.io: gloo-mesh
    cluster.multicluster.solo.io: ""
  name: cacerts
  namespace: istio-system
type: certificates.mesh.gloo.solo.io/issued_certificate
Same operation on the second cluster:
kubectl --context ${CLUSTER2} get secret -n istio-system cacerts -o yaml
Here is the expected output:
apiVersion: v1
data:
  ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZFRENDQXZpZ0F3SUJBZ0lRWXE1V29iWFhGM1gwTjlNL3BYYkNKekFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...
    XpqQ1RtK2QwNm9YaDI2d1JPSjdQTlNJOTkrR29KUHEraXltCkZIekhVdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  ca-key.pem: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS1FJQkFBS0NBZ0VBMGJPMTdSRklNTnh4K1lMUkEwcFJqRmRvbG1SdW9Oc3gxNUUvb3BMQ1l1RjFwUEptCndhR1U1V...
    MNU9JWk5ObDA4dUE1aE1Ca2gxNCtPKy9HMkoKLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0K
  cert-chain.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZFRENDQXZpZ0F3SUJBZ0lRWXE1V29iWFhGM1gwTjlNL3BYYkNKekFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...
    RBTHpzQUp2ZzFLRUR4T2QwT1JHZFhFbU9CZDBVUDk0KzJCN0tjM2tkNwpzNHYycEV2YVlnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  key.pem: ""
  root-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUU0ekNDQXN1Z0F3SUJBZ0lRT2lZbXFGdTF6Q3NzR0RFQ3JOdnBMakFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...
    UNBVEUtLS0tLQo=
kind: Secret
metadata:
  labels:
    agent.certificates.mesh.gloo.solo.io: gloo-mesh
    cluster.multicluster.solo.io: ""
  name: cacerts
  namespace: istio-system
type: certificates.mesh.gloo.solo.io/issued_certificate
As you can see, the secrets contain the same Root CA (base64 encoded), but different intermediate certs.
Have a look at the VirtualMesh object we've just created and notice the autoRestartPods: true in the mtlsConfig. This instructs Gloo Mesh to restart the Istio pods in the relevant clusters.
This is due to a limitation of Istio. The Istio control plane picks up the CA for Citadel and does not rotate it often enough.
Now, let's check what certificates we get when we run the same commands we ran before we created the Virtual Mesh:
kubectl --context ${CLUSTER1} exec -t deploy/reviews-v1 -c istio-proxy \
  -- openssl s_client -showcerts -connect ratings:9080
The output should look like this:
...
Certificate chain
 0 s:
   i:
-----BEGIN CERTIFICATE-----
MIIEBzCCAe+gAwIBAgIRAK1yjsFkisSjNqm5tzmKQS8wDQYJKoZIhvcNAQELBQAw
...
T77lFKXx0eGtDNtWm/1IPiOutIMlFz/olVuN
-----END CERTIFICATE-----
 1 s:
   i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIFEDCCAvigAwIBAgIQPndD90z3xw+Xy0sc3fr4fzANBgkqhkiG9w0BAQsFADAb
...
hkZt8w==
-----END CERTIFICATE-----
 2 s:O = gloo-mesh
   i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIE4zCCAsugAwIBAgIQOiYmqFu1zCssGDECrNvpLjANBgkqhkiG9w0BAQsFADAb
...
s4v2pEvaYg==
-----END CERTIFICATE-----
 3 s:O = gloo-mesh
   i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIE4zCCAsugAwIBAgIQOiYmqFu1zCssGDECrNvpLjANBgkqhkiG9w0BAQsFADAb
...
s4v2pEvaYg==
-----END CERTIFICATE-----
...
And let's compare with what we get on the second cluster:
kubectl --context ${CLUSTER2} exec -t deploy/reviews-v1 -c istio-proxy \
  -- openssl s_client -showcerts -connect ratings:9080
The output should look like this:
...
Certificate chain
 0 s:
   i:
-----BEGIN CERTIFICATE-----
MIIEBjCCAe6gAwIBAgIQfSeujXiz3KsbG01+zEcXGjANBgkqhkiG9w0BAQsFADAA
...
EtTlhPLbyf2GwkUgzXhdcu2G8uf6o16b0qU=
-----END CERTIFICATE-----
 1 s:
   i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIFEDCCAvigAwIBAgIQYq5WobXXF3X0N9M/pXbCJzANBgkqhkiG9w0BAQsFADAb
...
FHzHUw==
-----END CERTIFICATE-----
 2 s:O = gloo-mesh
   i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIE4zCCAsugAwIBAgIQOiYmqFu1zCssGDECrNvpLjANBgkqhkiG9w0BAQsFADAb
...
s4v2pEvaYg==
-----END CERTIFICATE-----
 3 s:O = gloo-mesh
   i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIE4zCCAsugAwIBAgIQOiYmqFu1zCssGDECrNvpLjANBgkqhkiG9w0BAQsFADAb
...
s4v2pEvaYg==
-----END CERTIFICATE-----
...
You can see that the last certificate in the chain is now identical on both clusters. It's the new root certificate.
The first certificate is the certificate of the service. Let's decode it.
Copy and paste the content of the certificate (including the BEGIN and END CERTIFICATE lines) in a new file called /tmp/cert and run the following command:
openssl x509 -in /tmp/cert -text
The output should be as follows:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            7d:27:ae:8d:78:b3:dc:ab:1b:1b:4d:7e:cc:47:17:1a
    Signature Algorithm: sha256WithRSAEncryption
        Issuer:
        Validity
            Not Before: Sep 17 08:21:08 2020 GMT
            Not After : Sep 18 08:21:08 2020 GMT
        Subject:
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                ...
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Alternative Name: critical
                URI:spiffe://cluster2/ns/default/sa/bookinfo-ratings
    Signature Algorithm: sha256WithRSAEncryption
         ...
-----BEGIN CERTIFICATE-----
MIIEBjCCAe6gAwIBAgIQfSeujXiz3KsbG01+zEcXGjANBgkqhkiG9w0BAQsFADAA
...
EtTlhPLbyf2GwkUgzXhdcu2G8uf6o16b0qU=
-----END CERTIFICATE-----
The Subject Alternative Name (SAN) is the most interesting part. It allows the sidecar proxy of the reviews service to validate that it talks to the sidecar proxy of the ratings service.
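If you only want to extract the SAN, you can also run the following command (assuming OpenSSL 1.1.1 or newer, which supports the -ext flag):
openssl x509 -in /tmp/cert -noout -ext subjectAltName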

Lab 6 - Access control

In the previous guide, we federated multiple meshes and established a shared root CA for a shared identity domain. Now that we have a logical VirtualMesh, we need a way to establish access policies across the multiple meshes, without treating each of them individually. Gloo Mesh helps by establishing a single, unified API that understands the logical VirtualMesh construct.
The application works correctly because RBAC isn't enforced.
Let's update the VirtualMesh to enable it:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: gloo-mesh
spec:
  mtlsConfig:
    autoRestartPods: true
    shared:
      rootCertificateAuthority:
        generated: {}
  federation:
    selectors:
    - {}
  globalAccessPolicy: ENABLED
  meshes:
  - name: istiod-istio-system-cluster1
    namespace: gloo-mesh
  - name: istiod-istio-system-cluster2
    namespace: gloo-mesh
EOF
After a few seconds, if you refresh the web page, you should see that you don't have access to the application anymore.
You should get the following error message:
RBAC: access denied
You need to create a Gloo Mesh Access Policy to allow the Istio Ingress Gateway to access the productpage microservice:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: AccessPolicy
metadata:
  namespace: gloo-mesh
  name: istio-ingressgateway
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
        - name: istio-ingressgateway-service-account
          namespace: istio-system
          clusterName: cluster1
        - name: istio-ingressgateway-service-account
          namespace: istio-system
          clusterName: cluster2
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: productpage
EOF
Now, refresh the page again and you should be able to access the application, but neither the details nor the reviews:
You can create another Gloo Mesh Access Policy to allow the productpage microservice to talk to these 2 microservices:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: AccessPolicy
metadata:
  namespace: gloo-mesh
  name: productpage
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
        - name: bookinfo-productpage
          namespace: default
          clusterName: cluster1
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: details
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: reviews
EOF
If you refresh the page, you should be able to see the product details and the reviews, but the reviews microservice can't access the ratings microservice:
Bookinfo RBAC 2
Create another AccessPolicy to fix the issue:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: AccessPolicy
metadata:
  namespace: gloo-mesh
  name: reviews
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
        - name: bookinfo-reviews
          namespace: default
          clusterName: cluster1
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: ratings
EOF
Refresh the page another time and all the services should now work:
Bookinfo working
If you refresh the web page several times, you should see only the versions v1 (no stars) and v2 (black stars), which means that all the requests are still handled by the first cluster.

Lab 7 - Traffic policy

We're going to use Gloo Mesh Traffic Policies to inject faults and configure timeouts.
Let's create the following TrafficPolicy to inject a delay when the v2 version of the reviews service talks to the ratings service on cluster1:
1
cat << EOF | kubectl --context ${MGMT} apply -f -
2
apiVersion: networking.mesh.gloo.solo.io/v1
3
kind: TrafficPolicy
4
metadata:
5
name: ratings-fault-injection
6
namespace: gloo-mesh
7
spec:
8
sourceSelector:
9
- kubeWorkloadMatcher:
10
labels:
11
app: reviews
12
version: v2
13
namespaces:
14
- default
15
clusters:
16
- cluster1
17
destinationSelector:
18
- kubeServiceRefs:
19
services:
20
- clusterName: cluster1
21
name: ratings
22
namespace: default
23
policy:
24
faultInjection:
25
fixedDelay: 2s
26
percentage: 100
27
EOF
Copied!
If you refresh the webpage, you should see that it takes longer to load the productpage when the v2 version of the reviews service is called.
Now, let's configure a 0.5s request timeout when the productpage service calls the reviews service on cluster1.
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  name: reviews-request-timeout
  namespace: gloo-mesh
spec:
  sourceSelector:
  - kubeWorkloadMatcher:
      labels:
        app: productpage
      namespaces:
      - default
      clusters:
      - cluster1
  destinationSelector:
  - kubeServiceRefs:
      services:
      - clusterName: cluster1
        name: reviews
        namespace: default
  policy:
    requestTimeout: 0.5s
EOF
If you refresh the page several times, you'll see an error message saying that the reviews are unavailable when the productpage tries to communicate with the v2 version of the reviews service.
Bookinfo v3
Let's delete the TrafficPolicies:
kubectl --context ${MGMT} -n gloo-mesh delete trafficpolicy ratings-fault-injection
kubectl --context ${MGMT} -n gloo-mesh delete trafficpolicy reviews-request-timeout

Lab 8 - Multi-cluster Traffic

On the first cluster, the v3 version of the reviews microservice doesn't exist, so we're going to redirect some of the traffic to the second cluster to make it available.
Multicluster traffic
Let's create the following TrafficPolicy:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  namespace: gloo-mesh
  name: simple
spec:
  sourceSelector:
  - kubeWorkloadMatcher:
      namespaces:
      - default
  destinationSelector:
  - kubeServiceRefs:
      services:
        - clusterName: cluster1
          name: reviews
          namespace: default
  policy:
    trafficShift:
      destinations:
        - kubeService:
            clusterName: cluster2
            name: reviews
            namespace: default
            subset:
              version: v3
          weight: 75
        - kubeService:
            clusterName: cluster1
            name: reviews
            namespace: default
            subset:
              version: v1
          weight: 15
        - kubeService:
            clusterName: cluster1
            name: reviews
            namespace: default
            subset:
              version: v2
          weight: 10
EOF
If you refresh the page several times, you'll see the v3 version of the reviews microservice:
Bookinfo v3
But as you can see, the ratings aren't available. That's because we only allowed the reviews microservice of the first cluster to talk to the ratings microservice.
Let's update the AccessPolicy to fix the issue:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: AccessPolicy
metadata:
  namespace: gloo-mesh
  name: reviews
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
        - name: bookinfo-reviews
          namespace: default
          clusterName: cluster1
        - name: bookinfo-reviews
          namespace: default
          clusterName: cluster2
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: ratings
EOF
If you refresh the page several times again, you'll see the v3 version of the reviews microservice with the red stars:
Bookinfo v3
Let's delete the TrafficPolicy:
kubectl --context ${MGMT} -n gloo-mesh delete trafficpolicy simple

Lab 9 - Traffic failover

If you refresh the web page several times, you should see only the versions v1 (no stars) and v2 (black stars), which means that all the requests are handled by the first cluster.
Another interesting feature of Gloo Mesh is its ability to manage failover between clusters.
In this lab, we're going to configure a failover for the reviews service:
After failover
First, we create a VirtualDestination to define a new hostname (reviews.global) that will be backed by the reviews microservice running on both clusters.
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualDestination
metadata:
  name: reviews-global
  namespace: gloo-mesh
spec:
  hostname: reviews.global
  port:
    number: 9080
    protocol: http
  localized:
    outlierDetection:
      consecutiveErrors: 2
      maxEjectionPercent: 100
      interval: 5s
      baseEjectionTime: 30s
    destinationSelectors:
    - kubeServiceMatcher:
        labels:
          app: reviews
  virtualMesh:
    name: virtual-mesh
    namespace: gloo-mesh
EOF
Finally, we can define another TrafficPolicy to make sure all the requests for the reviews microservice on the local cluster will be handled by the VirtualDestination we've just created.
cat <<