Shared control plane (multi-network)
Follow this guide to configure a multicluster mesh using a shared control plane with gateways to connect network-isolated clusters. Istio's location-aware service routing feature is used to route requests to different endpoints, depending on the location of the request source.
By following the instructions in this guide, you will set up a two-cluster mesh as shown in the following diagram:
Shared Istio control plane topology spanning multiple Kubernetes clusters using gateways
The primary cluster, cluster1, runs the full set of Istio control plane components, while cluster2 only runs Citadel, the sidecar injector, and an ingress gateway. No VPN connectivity or direct network access between workloads in different clusters is required.
Prerequisites
- Two or more Kubernetes clusters with versions 1.13, 1.14, or 1.15.
- Authority to deploy the Istio control plane.
- Two Kubernetes clusters (referred to as cluster1 and cluster2).
The Kubernetes API server of cluster2 MUST be accessible from cluster1 in order to run this configuration.
- You can use the kubectl command to access both the cluster1 and cluster2 clusters with the --context flag, for example kubectl get pods --context cluster1. Use the following command to list your contexts:
$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO       NAMESPACE
*         cluster1   cluster1   user@foo.com   default
          cluster2   cluster2   user@foo.com   default
- Store the context names of your clusters in environment variables:
$ export CTX_CLUSTER1=$(kubectl config view -o jsonpath='{.contexts[0].name}')
$ export CTX_CLUSTER2=$(kubectl config view -o jsonpath='{.contexts[1].name}')
$ echo CTX_CLUSTER1 = ${CTX_CLUSTER1}, CTX_CLUSTER2 = ${CTX_CLUSTER2}
CTX_CLUSTER1 = cluster1, CTX_CLUSTER2 = cluster2
If you have more than two clusters in the context list and you want to configure your mesh using clusters other than the first two, you will need to manually set the environment variables to the appropriate context names.
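For example, assuming the two contexts you want to use are named gke-primary and gke-remote (hypothetical names taken from your own kubectl config get-contexts output), you could set the variables explicitly:
$ export CTX_CLUSTER1=gke-primary
$ export CTX_CLUSTER2=gke-remote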
Set up the multicluster mesh
In this configuration you install Istio with mutual TLS enabled for both the control plane and application pods. For the shared root CA, you create a cacerts secret on both the cluster1 and cluster2 clusters using the same Istio certificate from the Istio samples directory.
The instructions below also set up cluster2 with a selector-less service and an endpoint for istio-pilot.istio-system that has the address of the cluster1 Istio ingress gateway. This will be used to access Pilot on cluster1 securely through the ingress gateway without mutual TLS termination.
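For illustration only, a selector-less service paired with a manually managed endpoint generally takes the following shape. The remote profile creates the equivalent objects for you; the port name, port number, and address below are placeholders, not the exact resources Istio installs:
apiVersion: v1
kind: Service
metadata:
  name: istio-pilot
  namespace: istio-system
spec:
  ports:                 # no selector, so Kubernetes does not manage the endpoints
  - name: grpc-pilot
    port: 15011
---
apiVersion: v1
kind: Endpoints
metadata:
  name: istio-pilot      # must match the service name
  namespace: istio-system
subsets:
- addresses:
  - ip: 192.0.2.10       # placeholder for the cluster1 ingress gateway address
  ports:
  - name: grpc-pilot
    port: 15011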
Set up cluster 1 (primary)
- Deploy Istio to cluster1:
When you enable the additional components necessary for multicluster operation, the resource footprint of the Istio control plane may increase beyond the capacity of the default Kubernetes cluster you created when completing the Platform setup steps. If the Istio services aren't getting scheduled due to insufficient CPU or memory, consider adding more nodes to your cluster or upgrading to larger memory instances as necessary.
$ kubectl create --context=$CTX_CLUSTER1 ns istio-system
$ kubectl create --context=$CTX_CLUSTER1 secret generic cacerts -n istio-system \
    --from-file=samples/certs/ca-cert.pem \
    --from-file=samples/certs/ca-key.pem \
    --from-file=samples/certs/root-cert.pem \
    --from-file=samples/certs/cert-chain.pem
$ istioctl manifest apply --context=$CTX_CLUSTER1 \
-f install/kubernetes/operator/examples/multicluster/values-istio-multicluster-primary.yaml
Note that the gateway addresses are set to 0.0.0.0. These are temporary placeholder values that will later be updated with the public IPs of the cluster1 and cluster2 gateways after they are deployed in the following section.
Wait for the Istio pods on cluster1 to become ready:
$ kubectl get pods --context=$CTX_CLUSTER1 -n istio-system
NAME                                      READY   STATUS    RESTARTS   AGE
istio-citadel-55d8b59798-6hnx4            1/1     Running   0          83s
istio-galley-c74b77787-lrtr5              2/2     Running   0          82s
istio-ingressgateway-684f5df677-shzhm     1/1     Running   0          83s
istio-pilot-5495bc8885-2rgmf              2/2     Running   0          82s
istio-policy-69cdf5db4c-x4sct             2/2     Running   2          83s
istio-sidecar-injector-5749cf7cfc-pgd95   1/1     Running   0          82s
istio-telemetry-646db5ddbd-gvp6l          2/2     Running   1          83s
prometheus-685585888b-4tvf7               1/1     Running   0          83s
- Create an ingress gateway to access service(s) in cluster2:
$ kubectl apply --context=$CTX_CLUSTER1 -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
EOF
This Gateway configures port 443 to pass incoming traffic through to the target service specified in a request's SNI header, for SNI values of the local top-level domain (i.e., the Kubernetes DNS domain). Mutual TLS connections will be used all the way from the source to the destination sidecar.
Although applied to cluster1, this Gateway instance will also affect cluster2 because both clusters communicate with the same Pilot.
- Determine the ingress IP and port for cluster1.
Set the current context of kubectl to CTX_CLUSTER1:
$ export ORIGINAL_CONTEXT=$(kubectl config current-context)
$ kubectl config use-context $CTX_CLUSTER1
Follow the instructions in Determining the ingress IP and ports to set the INGRESS_HOST and SECURE_INGRESS_PORT environment variables (a shortcut that avoids switching contexts is sketched after the next step).
Restore the previous kubectl context:
$ kubectl config use-context $ORIGINAL_CONTEXT
$ unset ORIGINAL_CONTEXT
- Print the values of INGRESS_HOST and SECURE_INGRESS_PORT:
$ echo The ingress gateway of cluster1: address=$INGRESS_HOST, port=$SECURE_INGRESS_PORT
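As a minimal sketch of that shortcut, assuming cluster1 exposes istio-ingressgateway through a LoadBalancer that publishes an external IP (not a hostname) and names its secure port https, the two variables could be set directly without switching contexts:
$ export INGRESS_HOST=$(kubectl --context=$CTX_CLUSTER1 -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ export SECURE_INGRESS_PORT=$(kubectl --context=$CTX_CLUSTER1 -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')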
- Update the gateway address in the mesh network configuration. Edit the istio ConfigMap:
$ kubectl edit cm -n istio-system --context=$CTX_CLUSTER1 istio
Update the gateway's address and port of network1 to reflect the cluster1 ingress host and port, respectively, then save and quit. Note that the address appears in two places; the second is under the values.yaml: key.
Once saved, Pilot will automatically read the updated network configuration.
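For illustration, the networks section you are editing could look roughly like the following once network1 is filled in, where 192.0.2.10 and 443 stand in for the INGRESS_HOST and SECURE_INGRESS_PORT values you determined above; the exact surrounding keys may differ slightly by Istio version:
networks:
  network1:
    endpoints:
    - fromRegistry: Kubernetes
    gateways:
    - address: 192.0.2.10   # cluster1 ingress gateway address (placeholder)
      port: 443
  network2:
    endpoints:
    - fromRegistry: n2-k8s-config
    gateways:
    - address: 0.0.0.0      # updated later with the cluster2 ingress gateway address
      port: 443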
Set up cluster 2
- Export the cluster1 gateway address:
$ export LOCAL_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER1 svc --selector=app=istio-ingressgateway \
-n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}') && echo ${LOCAL_GW_ADDR}
This command sets the value to the gateway’s public IP and displays it.
The command fails if the load balancer configuration doesn’t include an IP address. The implementation of DNS name support is pending.
- Deploy Istio to cluster2:
$ kubectl create --context=$CTX_CLUSTER2 ns istio-system
$ kubectl create --context=$CTX_CLUSTER2 secret generic cacerts -n istio-system \
    --from-file=samples/certs/ca-cert.pem \
    --from-file=samples/certs/ca-key.pem \
    --from-file=samples/certs/root-cert.pem \
    --from-file=samples/certs/cert-chain.pem
$ istioctl manifest apply --context=$CTX_CLUSTER2 \
--set profile=remote \
--set values.global.mtls.enabled=true \
--set values.gateways.enabled=true \
--set values.security.selfSigned=false \
--set values.global.controlPlaneSecurityEnabled=true \
--set values.global.createRemoteSvcEndpoints=true \
--set values.global.remotePilotCreateSvcEndpoint=true \
--set values.global.remotePilotAddress=${LOCAL_GW_ADDR} \
--set values.global.remotePolicyAddress=${LOCAL_GW_ADDR} \
--set values.global.remoteTelemetryAddress=${LOCAL_GW_ADDR} \
--set values.gateways.istio-ingressgateway.env.ISTIO_META_NETWORK="network2" \
--set values.global.network="network2" \
--set autoInjection.enabled=true
Wait for the Istio pods on cluster2, except for istio-ingressgateway, to become ready:
$ kubectl get pods --context=$CTX_CLUSTER2 -n istio-system -l istio!=ingressgateway
NAME                                      READY   STATUS    RESTARTS   AGE
istio-citadel-55d8b59798-nlk2z            1/1     Running   0          26s
istio-sidecar-injector-5749cf7cfc-s6r7p   1/1     Running   0          25s
istio-ingressgateway will not be ready until you configure the Istio control plane in cluster1 to watch cluster2, which you do in the next section.
- Determine the ingress IP and port for cluster2.
Set the current context of kubectl to CTX_CLUSTER2:
$ export ORIGINAL_CONTEXT=$(kubectl config current-context)
$ kubectl config use-context $CTX_CLUSTER2
Follow the instructions in Determining the ingress IP and ports to set the INGRESS_HOST and SECURE_INGRESS_PORT environment variables.
Restore the previous kubectl context:
$ kubectl config use-context $ORIGINAL_CONTEXT
$ unset ORIGINAL_CONTEXT
- Print the values of INGRESS_HOST and SECURE_INGRESS_PORT:
$ echo The ingress gateway of cluster2: address=$INGRESS_HOST, port=$SECURE_INGRESS_PORT
- Update the gateway address in the mesh network configuration. Edit the istio ConfigMap:
$ kubectl edit cm -n istio-system --context=$CTX_CLUSTER1 istio
Update the gateway's address and port of network2 to reflect the cluster2 ingress host and port, respectively, then save and quit. Note that the address appears in two places; the second is under the values.yaml: key.
Once saved, Pilot will automatically read the updated network configuration.
- Prepare environment variables for building the n2-k8s-config file for the service account istio-reader-service-account:
$ CLUSTER_NAME=$(kubectl --context=$CTX_CLUSTER2 config view --minify=true -o jsonpath='{.clusters[].name}')
$ SERVER=$(kubectl --context=$CTX_CLUSTER2 config view --minify=true -o jsonpath='{.clusters[].cluster.server}')
$ SECRET_NAME=$(kubectl --context=$CTX_CLUSTER2 get sa istio-reader-service-account -n istio-system -o jsonpath='{.secrets[].name}')
$ CA_DATA=$(kubectl get --context=$CTX_CLUSTER2 secret ${SECRET_NAME} -n istio-system -o jsonpath="{.data['ca\.crt']}")
$ TOKEN=$(kubectl get --context=$CTX_CLUSTER2 secret ${SECRET_NAME} -n istio-system -o jsonpath="{.data['token']}" | base64 --decode)
An alternative to base64 --decode is openssl enc -d -base64 -A on many systems.
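Before generating the file in the next step, it can be worth a quick sanity check that none of these variables came back empty. This is a small bash-specific sketch using indirect expansion:
$ for v in CLUSTER_NAME SERVER SECRET_NAME CA_DATA TOKEN; do [ -n "${!v}" ] || echo "$v is empty"; done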
- Create the n2-k8s-config file in the working directory:
$ cat <<EOF > n2-k8s-config
apiVersion: v1
kind: Config
clusters:
  - cluster:
      certificate-authority-data: ${CA_DATA}
      server: ${SERVER}
    name: ${CLUSTER_NAME}
contexts:
  - context:
      cluster: ${CLUSTER_NAME}
      user: ${CLUSTER_NAME}
    name: ${CLUSTER_NAME}
current-context: ${CLUSTER_NAME}
users:
  - name: ${CLUSTER_NAME}
    user:
      token: ${TOKEN}
EOF
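As an optional check before moving on, you could confirm that the generated kubeconfig can reach cluster2; this assumes istio-reader-service-account has read access to services in the istio-system namespace:
$ kubectl --kubeconfig=n2-k8s-config -n istio-system get services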
Start watching cluster 2
- Execute the following commands to create and label a secret containing the cluster2 kubeconfig (n2-k8s-config). After executing these commands, Istio Pilot on cluster1 will begin watching cluster2 for services and instances, just as it does for cluster1.
$ kubectl create --context=$CTX_CLUSTER1 secret generic n2-k8s-secret --from-file n2-k8s-config -n istio-system
$ kubectl label --context=$CTX_CLUSTER1 secret n2-k8s-secret istio/multiCluster=true -n istio-system
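To confirm the secret exists and carries the istio/multiCluster=true label, you could run:
$ kubectl get secret n2-k8s-secret --context=$CTX_CLUSTER1 -n istio-system --show-labels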
- Wait for istio-ingressgateway to become ready:
$ kubectl get pods --context=$CTX_CLUSTER2 -n istio-system -l istio=ingressgateway
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-5c667f4f84-bscff   1/1     Running   0          16m
Now that you have your cluster1 and cluster2 clusters set up, you can deploy an example service.
Deploy example service
As shown in the diagram above, deploy two instances of the helloworld service, one on cluster1 and one on cluster2. The difference between the two instances is the version of their helloworld image.
Deploy helloworld v2 in cluster 2
- Create a sample namespace with a sidecar auto-injection label:
$ kubectl create --context=$CTX_CLUSTER2 ns sample
$ kubectl label --context=$CTX_CLUSTER2 namespace sample istio-injection=enabled
- Deploy helloworld v2:
$ kubectl create --context=$CTX_CLUSTER2 -f samples/helloworld/helloworld.yaml -l app=helloworld -n sample
$ kubectl create --context=$CTX_CLUSTER2 -f samples/helloworld/helloworld.yaml -l version=v2 -n sample
- Confirm helloworld v2 is running:
$ kubectl get po --context=$CTX_CLUSTER2 -n sample
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v2-7dd57c44c4-f56gq   2/2     Running   0          35s
Deploy helloworld v1 in cluster 1
- Create a sample namespace with a sidecar auto-injection label:
$ kubectl create --context=$CTX_CLUSTER1 ns sample
$ kubectl label --context=$CTX_CLUSTER1 namespace sample istio-injection=enabled
- Deploy helloworld v1:
$ kubectl create --context=$CTX_CLUSTER1 -f samples/helloworld/helloworld.yaml -l app=helloworld -n sample
$ kubectl create --context=$CTX_CLUSTER1 -f samples/helloworld/helloworld.yaml -l version=v1 -n sample
- Confirm helloworld v1 is running:
$ kubectl get po --context=$CTX_CLUSTER1 -n sample
NAME                            READY   STATUS    RESTARTS   AGE
helloworld-v1-d4557d97b-pv2hr   2/2     Running   0          40s
Cross-cluster routing in action
To demonstrate how traffic to the helloworld service is distributed across the two clusters, call the helloworld service from another in-mesh sleep service.
- Deploy the sleep service in both clusters:
$ kubectl apply --context=$CTX_CLUSTER1 -f samples/sleep/sleep.yaml -n sample
$ kubectl apply --context=$CTX_CLUSTER2 -f samples/sleep/sleep.yaml -n sample
- Wait for the sleep service to start in each cluster:
$ kubectl get po --context=$CTX_CLUSTER1 -n sample -l app=sleep
sleep-754684654f-n6bzf 2/2 Running 0 5s
$ kubectl get po --context=$CTX_CLUSTER2 -n sample -l app=sleep
sleep-754684654f-dzl9j 2/2 Running 0 5s
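Optionally, you can check that the sleep sidecar in cluster1 has learned helloworld endpoints in both networks before sending traffic. This is only a sketch; it assumes your istioctl release provides the proxy-config endpoints subcommand:
$ istioctl --context=$CTX_CLUSTER1 -n sample proxy-config endpoints $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') | grep helloworld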
- Call the helloworld.sample service several times from cluster1:
$ kubectl exec --context=$CTX_CLUSTER1 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
- Call the helloworld.sample service several times from cluster2:
$ kubectl exec --context=$CTX_CLUSTER2 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER2 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
If set up correctly, the traffic to the helloworld.sample service will be distributed between instances on cluster1 and cluster2, resulting in responses with either v1 or v2 in the body:
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
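To send a batch of requests in one go, you could wrap the call in a simple loop, for example from cluster1:
$ for i in $(seq 1 10); do kubectl exec --context=$CTX_CLUSTER1 -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl -s helloworld.sample:5000/hello; done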
You can also verify the IP addresses used to access the endpoints by printing the log of sleep's istio-proxy container.
$ kubectl logs --context=$CTX_CLUSTER1 -n sample $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') istio-proxy
[2018-11-25T12:37:52.077Z] "GET /hello HTTP/1.1" 200 - 0 60 190 189 "-" "curl/7.60.0" "6e096efe-f550-4dfa-8c8c-ba164baf4679" "helloworld.sample:5000" "192.23.120.32:15443" outbound|5000||helloworld.sample.svc.cluster.local - 10.20.194.146:5000 10.10.0.89:59496 -
[2018-11-25T12:38:06.745Z] "GET /hello HTTP/1.1" 200 - 0 60 171 170 "-" "curl/7.60.0" "6f93c9cc-d32a-4878-b56a-086a740045d2" "helloworld.sample:5000" "10.10.0.90:5000" outbound|5000||helloworld.sample.svc.cluster.local - 10.20.194.146:5000 10.10.0.89:59646 -
In cluster1, the gateway IP of cluster2 (192.23.120.32:15443) is logged when v2 was called, and the instance IP in cluster1 (10.10.0.90:5000) is logged when v1 was called.
$ kubectl logs --context=$CTX_CLUSTER2 -n sample $(kubectl get pod --context=$CTX_CLUSTER2 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') istio-proxy
[2019-05-25T08:06:11.468Z] "GET /hello HTTP/1.1" 200 - "-" 0 60 177 176 "-" "curl/7.60.0" "58cfb92b-b217-4602-af67-7de8f63543d8" "helloworld.sample:5000" "192.168.1.246:15443" outbound|5000||helloworld.sample.svc.cluster.local - 10.107.117.235:5000 10.32.0.10:36840 -
[2019-05-25T08:06:12.834Z] "GET /hello HTTP/1.1" 200 - "-" 0 60 181 180 "-" "curl/7.60.0" "ce480b56-fafd-468b-9996-9fea5257cb1e" "helloworld.sample:5000" "10.32.0.9:5000" outbound|5000||helloworld.sample.svc.cluster.local - 10.107.117.235:5000 10.32.0.10:36886 -
In cluster2, the gateway IP of cluster1 (192.168.1.246:15443) is logged when v1 was called, and the instance IP in cluster2 (10.32.0.9:5000) is logged when v2 was called.
Cleanup
Execute the following commands to clean up the example services and the Istio components.
Clean up the cluster2 cluster:
$ istioctl manifest generate --context=$CTX_CLUSTER2 \
--set profile=remote \
--set values.global.mtls.enabled=true \
--set values.gateways.enabled=true \
--set values.security.selfSigned=false \
--set values.global.controlPlaneSecurityEnabled=true \
--set values.global.createRemoteSvcEndpoints=true \
--set values.global.remotePilotCreateSvcEndpoint=true \
--set values.global.remotePilotAddress=${LOCAL_GW_ADDR} \
--set values.global.remotePolicyAddress=${LOCAL_GW_ADDR} \
--set values.global.remoteTelemetryAddress=${LOCAL_GW_ADDR} \
--set values.gateways.istio-ingressgateway.env.ISTIO_META_NETWORK="network2" \
--set values.global.network="network2" \
--set autoInjection.enabled=true | kubectl --context=$CTX_CLUSTER2 delete -f -
$ kubectl delete --context=$CTX_CLUSTER2 ns sample
$ unset CTX_CLUSTER2 CLUSTER_NAME SERVER SECRET_NAME CA_DATA TOKEN INGRESS_HOST SECURE_INGRESS_PORT INGRESS_PORT LOCAL_GW_ADDR
Clean up the cluster1 cluster:
$ istioctl manifest generate --context=$CTX_CLUSTER1 \
-f install/kubernetes/operator/examples/multicluster/values-istio-multicluster-primary.yaml | kubectl --context=$CTX_CLUSTER1 delete -f -
$ kubectl delete --context=$CTX_CLUSTER1 ns sample
$ unset CTX_CLUSTER1
$ rm n2-k8s-config
See also
- Google Kubernetes Engine: Set up a multicluster mesh over two GKE clusters.
- IBM Cloud Private: Example multicluster mesh over two IBM Cloud Private clusters.
- Replicated control planes: Install an Istio mesh across multiple Kubernetes clusters with replicated control plane instances.
- Shared control plane (single-network): Install an Istio mesh across multiple Kubernetes clusters with a shared control plane and VPN connectivity between clusters.
- Simplified Multicluster Install [Experimental]: Configure an Istio mesh spanning multiple Kubernetes clusters.
- DNS Certificate Management: Provision and manage DNS certificates in Istio.