Running Istio on Minikube to implement traffic routing

There are multiple strategies you could use to introduce new service versions to an existing product. In this article I will go over how you can implement Canary deployments using Istio in a k8s cluster. The same approach can be used to implement Blue/Green deployments too.

  • Canary deployments: Route a small percentage of traffic to the new version instead of shifting all traffic at once. This gives you a chance to expose the new version to a small sample of real production requests, collect data, and then decide on the next course of action.
  • Blue-Green deployments: You maintain two production environments. One environment (say Blue) is currently serving production traffic; the other environment, Green, is idle. You deploy and test a new service version in the idle Green environment. Once you are satisfied, you reroute traffic from the current production Blue environment to the new Green environment. An added advantage is that if issues do come up with the rollout to Green, you can quickly revert back to Blue. I must add that in today's dynamic, on-demand cloud architectures you do not need to keep an always-available second production environment; just bring one up when you have a new version to deploy. This also means you need a solid Infrastructure as Code strategy using tools such as Terraform (or cloud-provider-specific frameworks such as CloudFormation for AWS, Azure Resource Manager templates, or GCP Cloud Deployment Manager).
  • Mirroring deployments: You mirror production traffic to another, parallel environment. This is especially useful if you intend to implement dark launches of products prior to actually releasing them. It is also very useful for legacy modernization projects where you want to rebuild pieces of the existing system in a new architecture and test them against production traffic as you make the transformation journey.

Prerequisites

I am running this on a Mac. Some things may (and probably will) differ on other operating systems.

Start minikube
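A minimal sketch of the commands involved (the resource flags are illustrative; Istio needs a reasonably sized VM):

    # give the cluster enough resources for Istio
    minikube start --memory=8192 --cpus=4

    # in a separate terminal; keep this running to expose LoadBalancer services
    minikube tunnel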

Install Istio

See instructions at https://istio.io/docs/setup/kubernetes/install/kubernetes/
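The exact commands vary by Istio version, so follow the linked page for yours. For recent releases the flow looks roughly like this:

    # download Istio and put istioctl on the PATH
    curl -L https://istio.io/downloadIstio | sh -
    cd istio-*/ && export PATH=$PWD/bin:$PATH

    # install with the demo profile and enable sidecar injection in the default namespace
    istioctl install --set profile=demo -y
    kubectl label namespace default istio-injection=enabled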

Verify Istio installation
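List the services in the istio-system namespace:

    kubectl get svc -n istio-system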

In the output, look for the istio-ingressgateway line. After starting Minikube we created a tunnel to get access to the Minikube-provided load balancer. This tunnel runs from your local machine to the k8s cluster, and the EXTERNAL-IP is the means to get access to your service. If you kill the tunnel (which you should have running in a separate terminal), the EXTERNAL-IP will change to <pending>. Bring it back up and voilà, you have an external IP assigned again.

Verify Istio Pods are deployed

Validate that the istio-system namespace exists and list the pods in that namespace.
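    kubectl get namespaces
    kubectl get pods -n istio-system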

Make sure that the pods are running (or marked as Completed) before moving to the next step.

Istio Service Mesh Configuration for traffic routing
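Below is a minimal sketch of what an app-gateway.yaml for this setup can look like. It assumes a Kubernetes Service named hello on port 3000 that fronts both deployments, with the pods labeled version: v1 and version: v2 respectively; the resource names and starting weights are illustrative.

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: app-gateway
    spec:
      selector:
        istio: ingressgateway   # use Istio's default ingress gateway
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "*"
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: hello
    spec:
      host: hello               # the k8s Service fronting both deployments
      subsets:
      - name: v1
        labels:
          version: v1           # pods from the old (v1) deployment
      - name: v2
        labels:
          version: v2           # pods from the new (v2) deployment
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: hello
    spec:
      hosts:
      - "*"
      gateways:
      - app-gateway
      http:
      - route:
        - destination:
            host: hello
            subset: v1
            port:
              number: 3000
          weight: 90            # most traffic stays on the old version
        - destination:
            host: hello
            subset: v2
            port:
              number: 3000
          weight: 10            # canary: a small slice goes to v2

Apply it with:

    kubectl apply -f app-gateway.yaml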


Access the service

You ran this command previously when verifying the installation; run it again and note down the EXTERNAL-IP address.
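As before, but filtering to just the ingress gateway this time:

    kubectl get svc istio-ingressgateway -n istio-system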

In my case I can access the service at 10.96.195.128.
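You can then send a few requests through the gateway and watch most of them come back from v1 (the root path is an assumption about the Express app):

    for i in $(seq 1 10); do curl -s http://10.96.195.128/; echo; done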

Route more traffic to v2…

Edit the app-gateway.yaml and adjust the route weights for v1 to 10 and v2 to 90.
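Using the names from the sketch above, the route section of the VirtualService would then look like this:

    http:
    - route:
      - destination:
          host: hello
          subset: v1
          port:
            number: 3000
        weight: 10            # only a trickle to the old version now
      - destination:
          host: hello
          subset: v2
          port:
            number: 3000
        weight: 90            # the bulk of traffic shifts to v2

Re-apply the file and the mesh picks up the new weights without redeploying the services:

    kubectl apply -f app-gateway.yaml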

Now when you access the service, most requests will route to v2. This is how you can adjust traffic routing between your old and new service versions.

Cleanup…
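A sketch of the teardown, assuming the resource names used in this article:

    # remove the Istio routing config
    kubectl delete -f app-gateway.yaml

    # remove the app deployments and services
    kubectl delete deployment hellov1 hellov2
    kubectl delete service hellov1 hellov2

    # or, to throw away everything in one shot
    minikube delete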

Most developers (other than those at startups) may never have to set up a k8s cluster themselves, so this blog shows you more than you strictly need to know. But I find this approach to learning, using a local k8s cluster, more fun. Once you are past this, go ahead and set up a k8s test cluster in GCP and rerun this same example against it.

(Optional) Building the Docker images

For the examples above I use my Docker images on Docker Hub. But if you would rather build your own, here are the steps. Use the code from the simple Node.js project at https://github.com/thomasma/expressjs_docker

Build the Blue Docker image and run it inside Minikube. Replace mattazoid with your Docker Hub account (or you can stay local).
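These steps mirror the Green (v2) steps listed below; to stay local, point your Docker client at Minikube's daemon first with eval $(minikube docker-env) instead of pushing to Docker Hub.

  • docker build -t mattazoid/hello:v1 .
  • docker push mattazoid/hello:v1
  • kubectl create deployment hellov1 --image=mattazoid/hello:v1 --port=3000
  • kubectl expose deployment hellov1 --type=NodePort
  • Access the service: curl $(minikube service hellov1 --url)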

Build the Green Docker image and run it inside Minikube. First modify basicexpresshello.js and replace Blue with Green, or some other text that will help you distinguish between the two services (old vs. new).

  • docker build -t mattazoid/hello:v2 .
  • docker push mattazoid/hello:v2
  • kubectl create deployment hellov2 --image=mattazoid/hello:v2 --port=3000
  • kubectl expose deployment hellov2 --type=NodePort
  • kubectl get pod
  • kubectl get service
  • Access the service: curl $(minikube service hellov2 --url)
