Updating software without taking it offline is essential for keeping customers happy and keeping everything running smoothly. That's where zero downtime deployment comes in: it lets you switch between versions of an application without any interruption. One of the best ways to achieve this is Blue-Green Deployment.
What is Zero Downtime Deployment?
Zero downtime deployment, also known as zero-downtime release, refers to a deployment strategy that ensures continuous availability of an application during the deployment process. This means that users can continue to access the application without experiencing any disruptions or service interruptions.
What is Blue-Green Deployment?
Blue-green deployment is a deployment strategy that involves two identical environments: blue and green. It is an effective technique for reducing downtime and risk during application releases. The blue environment is the current production environment, where your users access your application. The green environment hosts the new version of your application, which you want to deploy and test.
The idea is to deploy the new version to the green environment while keeping the blue environment running. Then, you can perform various tests and validations on the green environment, without affecting the users on the blue environment. Once you are confident that the green environment is stable and ready, you can switch the traffic from the blue environment to the green environment, effectively making the green environment the new production environment. If something goes wrong, you can quickly switch back to the blue environment, minimizing the impact on your users.
| Advantages ✅ | Disadvantages ❌ |
|---|---|
| Reduces downtime during releases 👍 | Requires duplicate resources 💸 |
| Enables easy and rapid rollback 🚀 | Additional complexity in implementation 😕 |
| Allows testing new version before go-live 🧪 | Stateful apps may have sync issues 🤯 |
| Gradual rollout possible 🏁 | Not optimal for large legacy monoliths 🦕 |
| Zero downtime deployment ⏱️ | Canary deployments may be a better fit 📈 |
| Maintains full capacity during switchover 💯 | |
How to Implement Blue-Green Deployment in Kubernetes?
Kubernetes is a popular platform for deploying and managing containerized applications. It offers various features and tools that can help you implement blue-green deployment in a scalable and reliable way.
One of the key components of Kubernetes is the service, which defines how to access a set of pods (containers) that provide a specific functionality. A service acts as a load balancer, routing requests to the pods that match its selector criteria. By changing the selector of a service, you can control which pods receive the traffic.
Prerequisites for Blue-Green Deployment
To follow along and deploy an application using the Blue-Green technique on Kubernetes, you need the following:
- A Kubernetes cluster with enough capacity to run the Blue and Green environments side by side. Local tools like Minikube or managed services like AWS EKS, GKE, etc. can be used to create a Kubernetes cluster.
- kubectl CLI installed and configured to connect to the Kubernetes cluster (a quick check is shown below). kubectl is used to create, update and delete Kubernetes resources.
- Docker image of the application to be deployed. The sample manifests in this guide use a simple Nginx Docker image for demonstration. In your scenario, this would be the container image of your actual application.
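Before starting, a quick way to confirm that kubectl can actually reach your cluster is to list the cluster endpoints and nodes:

kubectl cluster-info
kubectl get nodes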
Step-by-Step Implementation
To implement blue-green deployment in Kubernetes, you need to have two sets of pods: one for the blue version and one for the green version of your application. You also need to have a service that points to either the blue or the green pods, depending on which version you want to expose to your users.
1. Create a Namespace
We start by creating a namespace to deploy our application resources into. This helps isolate the Blue-Green deployment from other applications running in the cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: blue-green
Run the following to create the namespace:
kubectl apply -f namespace.yaml
2. Create Blue and Green Deployments
Next, we create two deployments, one for the Blue environment and one for the Green environment.
The only difference between the two deployments is the "env" label which identifies them as either Blue or Green.
Here is the YAML for the Blue deployment:
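(The manifest below is a sketch reconstructed from the Green manifest shown later in this guide; the Deployment name nginx-blue is an assumption, so adjust it to your own naming convention.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-blue          # assumed name
  namespace: blue-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      env: blue
  template:
    metadata:
      labels:
        app: nginx
        env: blue
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80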
And here is the YAML for the Green deployment:
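(Again a sketch; it is identical apart from the assumed name nginx-green and the env label.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-green         # assumed name
  namespace: blue-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      env: green
  template:
    metadata:
      labels:
        app: nginx
        env: green
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80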
The key things to note are:
- The app label is the same for both, since they host the same application
- The env label is different, to identify the Blue and Green deployments
- Replicas are 3 for both Blue and Green, so capacity is identical
- Both use the same nginx:1.14.2 image for now
Run these commands to create the deployments:
kubectl apply -f blue-deployment.yaml
kubectl apply -f green-deployment.yaml
3. Create Service
Now we need a Kubernetes Service to expose our application to external traffic.
The Service uses selectors to route traffic to either the Blue or Green deployment. Here is an example manifest:
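(This is the same Service manifest that gets updated in step 6, only with the selector still pointing at Blue.)

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: blue-green
spec:
  type: LoadBalancer
  selector:
    app: nginx
    env: blue   # initially routes traffic to the Blue deployment
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80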
Notice the env: blue selector, which currently routes traffic to the Blue deployment.
Deploy the Service:
kubectl apply -f service.yaml
4. Verify Initial Blue Deployment
At this point, the application is deployed in the Blue environment and serving traffic. Let us verify it.
Get the external IP of the Load Balancer service:
kubectl get service nginx -n blue-green
Copy the external IP and access it in a browser. You should see the default Nginx welcome page.
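You can also check from the command line with curl, replacing <EXTERNAL-IP> with the address from the previous command; the Server response header should report nginx/1.14.2:

curl -sI http://<EXTERNAL-IP> | grep -i server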
So we have verified that the Blue environment is deployed and serving traffic correctly.
5. Update Green Deployment
Now we simulate an application change by updating the container image of the Green deployment to nginx:1.16.1.
Here is the updated Green deployment YAML:
# ... metadata remains same
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      env: green
  template:
    metadata:
      labels:
        app: nginx
        env: green
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1 # image updated
        ports:
        - containerPort: 80
Deploy the updated Green deployment:
kubectl apply -f green-deployment-v2.yaml
Kubernetes will gracefully update the Green pods to use the new image.
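To watch the update complete, and to smoke-test Green before any live traffic reaches it, something like the following works (assuming the Green Deployment is named nginx-green, as in the sketch above):

# Wait for the new Green pods to become ready
kubectl rollout status deployment/nginx-green -n blue-green

# Optionally test Green directly, without touching live traffic
kubectl port-forward deployment/nginx-green 8080:80 -n blue-green
# ...then browse or curl http://localhost:8080 in another terminal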
6. Update Service to Route Traffic to Green
Now that Green is deployed and tested, let us route traffic to it.
Update the Service selector from env: blue to env: green:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: blue-green
spec:
  type: LoadBalancer
  selector:
    app: nginx
    env: green # selector updated
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Apply the change:
kubectl apply -f service.yaml
This will seamlessly shift traffic from Blue to Green with no downtime. The update is complete!
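If you want to confirm the switch at the Service level, you can compare the Service's endpoint IPs with the Green pod IPs; after the selector change they should match:

kubectl get endpoints nginx -n blue-green
kubectl get pods -n blue-green -l env=green -o wide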
7. Verify Green Deployment
Access the application using the service's external IP as before. The default welcome page looks the same across Nginx versions, so check the Server response header to confirm that the new version is live.
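For example, replacing <EXTERNAL-IP> with the Service's external IP:

curl -sI http://<EXTERNAL-IP> | grep -i server
# Expected: Server: nginx/1.16.1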
This confirms that the deployment to the new Green environment succeeded and that live traffic is now served by it.
8. Rollback
If something goes wrong, we can quickly roll back by updating the service selector back to "env: blue". This will shift traffic back to the known good Blue environment.
kubectl apply -f service-rollback.yaml
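Here service-rollback.yaml is simply the Service manifest with the selector set back to env: blue. If you would rather not maintain a separate manifest, a kubectl patch can flip the selector in place; a minimal sketch:

kubectl patch service nginx -n blue-green \
  -p '{"spec":{"selector":{"app":"nginx","env":"blue"}}}'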
That's it! The Blue-Green deployment is successful. The same flow can be used for any application deployment.
Benefits of Blue-Green Deployment in Kubernetes
Blue-Green deployment offers benefits such as:
- It reduces the risk of errors and downtime during deployments, as you can test and validate the new version before exposing it to your users.
- It allows you to perform fast and easy rollbacks, as you can switch back to the previous version with a simple update of the service selector.
- It enables continuous delivery and integration, as you can deploy new versions frequently and seamlessly, without disrupting your users or compromising your quality standards.
- It simplifies your deployment process and pipeline, as you only need to manage two sets of pods and one service for each application.
Factors to Consider
Here are some key points to consider when implementing Blue-Green deployments:
- Data stores: Update schema and migrate data gracefully before shifting traffic to Green. For stateful apps, ensure any persistent data is accessible from Blue and Green.
- Testing: Test the Green environment thoroughly via automated and manual testing before routing live traffic to it. Catch any issues early.
- Custom Domains: If using a custom domain name, update DNS records to point to Green's IP before the switchover.
- Zero downtime: Plan steps like schema changes, DNS updates and load balancer switches so that the cutover itself introduces no downtime.
- Tooling: Kubernetes tools like Helm, Istio and Jenkins can further simplify and automate Blue-Green deployments.
- Gradual rollout: For additional safety, use Istio or similar tools to shift a portion of traffic first before complete switchover. This technique is known as Canary deployments.
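As a rough sketch of what that gradual shift can look like with Istio (assuming the mesh is installed and injected for the blue-green namespace, and reusing the nginx Service from this guide), a DestinationRule defines Blue and Green subsets by the env label, and a VirtualService splits traffic 90/10 before the full cutover:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: nginx
  namespace: blue-green
spec:
  host: nginx                  # the existing Kubernetes Service
  subsets:
  - name: blue
    labels:
      env: blue
  - name: green
    labels:
      env: green
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx
  namespace: blue-green
spec:
  hosts:
  - nginx
  http:
  - route:
    - destination:
        host: nginx
        subset: blue
      weight: 90               # most traffic stays on Blue
    - destination:
        host: nginx
        subset: green
      weight: 10               # small canary slice goes to Green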
When Not to Use Blue-Green?
While Blue-Green deployment is very popular, it may not be the best choice in some scenarios:
- Significant database schema changes require careful data migration planning to prevent inconsistencies between Blue and Green.
- Stateful applications that depend on shared persistent storage or volumes require data synchronization between the two environments.
- Keeping two identical environments in sync can be complex and expensive for large legacy monoliths.
In such cases, alternatives like Rolling deployments or Canary releases may be better options, since they avoid duplicating the full environment.
Conclusion
Blue-Green deployment is a reliable technique to reduce risk and downtime during continuous delivery. Kubernetes provides native primitives like Deployments, Services and Ingress that can be easily adapted for Blue-Green workflows.
By using two sets of pods and one service, you can deploy and test new versions of your application without affecting your users, and then switch traffic between them gradually or instantly. This way, you can minimize the risk of errors, downtime, and rollback issues.
FAQs
What is Blue-Green deployment?
Blue-Green deployment refers to maintaining two identical production environments, Blue and Green. While Blue is active, Green remains idle. Any new code changes are deployed and tested in Green first before routing traffic to it. This provides a smooth transition between versions with minimal downtime.
What are the benefits of Blue-Green over rolling deployments?
Blue-Green deployment provides zero downtime releases and easy rollback if issues arise. It also maintains full capacity during the switchover. Rolling deployments update instances incrementally, so they may have a performance impact during the transition.
How does traffic routing work in Blue-Green on Kubernetes?
Kubernetes Services use selectors to route traffic to pods. Updating the Service selector from Blue to Green shifts traffic between environments without downtime.
How can I test the new version before routing live traffic to it?
Thoroughly test the code changes and validate performance in the Green environment first using automated and manual testing. Monitor metrics after gradual rollout to a portion of users.
When should I not use Blue-Green deployment?
Blue-Green may not suit apps with significant data migrations or schema changes between versions. It also requires duplicate resources which can be complex and expensive for large legacy monoliths.