Quick, templated Django deployments to Kubernetes with bash!

A simple, bash based, k8s templated deployment system for a Django app.

Joel Weirauch

9 minute read

A few years ago, when Kubernetes was still a pretty new project and Docker Swarm had just been released, I threw my eggs in the Swarm basket. And honestly, that worked pretty well for ~3 years running production systems. Now, however, it’s totally obvious that Kubernetes is the king, and it’s much less obvious to me what the longevity of Swarm is. So, with all of that, I’ve been dabbling around a bit more with Kubernetes and running a few small workloads with it.

One thing that I noticed early on is that there are a lot more config files involved in running anything useful in Kubernetes vs spinning up a few services in Swarm! The de facto standard for managing all of those config files is Helm, but Helm is a pretty complicated beast all on its own. For someone just starting out and wanting to get a workload up and running on a Kubernetes cluster, trying to figure out how to use Helm while also sorting out how to build the correct Kubernetes YAML configuration can be a bit overwhelming.

I decided that, to get up and running, it would be easiest to just write straight Kubernetes YAML and use kubectl to apply it. This worked fine while I was initially creating the configs, but it got a little annoying once I had the configs working and wanted to do subsequent deployments. I had a few things in each config file that I needed to modify for each deploy and trying to remember all the spots to update each time was getting old quickly.

I took another quick look at Helm but it still seemed like way too much for what I needed at the moment, which was really only to update a couple variables across a handful of YAML files.

BASH to the rescue!

I’m a huge fan of creating little BASH helper scripts, usually to automate longer, complicated commands or to string together a few system commands, possibly with variables. So, I decided it would be easier to build a small BASH script to automate deploys into my k8s clusters (staging and production) than to go through creating Helm charts, figuring out how to set up a chart repo, etc. I only had a few requirements for this script:

  • Use the same script and same templates to deploy to both staging and production environments
  • Be able to replace variables in the templates with command line args
  • Able to connect to the correct k8s cluster (I’m using GKE clusters and the gcloud command)

Here’s the full script:
#!/bin/bash

APP_VERSION=$2
DJANGO_ENV=$1

PROJECT="example-project"
ENVIRONMENTS="prod stage"

if [ -z "$APP_VERSION" ]
then
    echo "No version specified!"
    exit 1
fi

if [[ $ENVIRONMENTS =~ $DJANGO_ENV ]]
then
    echo "Deployment to $DJANGO_ENV requested"
else
    echo "Invalid environment $DJANGO_ENV"
    exit 1
fi

while true; do
    read -p "Deploy $APP_VERSION to $DJANGO_ENV? [y/n] " yn

    case $yn in
        [Yy]* ) break;;
        [Nn]* ) exit;;
        * ) echo "Please answer yes or no.";;
    esac
done

# Connect to the proper cluster
if [ "$DJANGO_ENV" == 'prod' ]
then
    gcloud container clusters get-credentials $PROJECT-production
else
    gcloud container clusters get-credentials $PROJECT-staging
fi

# Pause a bit to allow emergency escape
sleep 5

# Delete any existing migration jobs
kubectl delete job $PROJECT-$DJANGO_ENV-migrations

# Delete the deployments for these services, we can only ever have one running at a time
kubectl delete deployment $PROJECT-$DJANGO_ENV-transaction-queue-worker

echo "Pausing to allow services to terminate..."
sleep 45

templates=("dashboard.yml.template" "default-worker.yml.template" "transaction-queue-worker.yml.template")

for i in "${templates[@]}"
do
    echo "Applying template $i"
    template=$(cat "templates/$i" | sed "s/{{PROJECT}}/$PROJECT/g; s/{{APP_VERSION}}/$APP_VERSION/g; s/{{DJANGO_ENV}}/$DJANGO_ENV/g")
    echo ""

    echo "$template" | kubectl apply -f -
done

This script has a handful of things that are specific to this project, but I think it does a good job of illustrating a simple way to automate the kinds of tasks you run into deploying something like a Django application with a few components.

How does it all work?

First, to execute a deployment all I need to do is run ./deploy prod 0-0-17 and the above script will be executed with instructions to deploy version 0-0-17 to prod.
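For example, deploying the same version to staging looks something like this (assuming the script is saved as deploy and marked executable):

$ chmod +x deploy
$ ./deploy stage 0-0-17
Deployment to stage requested
Deploy 0-0-17 to stage? [y/n] y

From there the script connects to the staging cluster and applies the templates.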

Variables and Sanity Checks
APP_VERSION=$2
DJANGO_ENV=$1

PROJECT="example-project"
ENVIRONMENTS="prod stage"

if [ -z "$APP_VERSION" ]
then
    echo "No version specified!"
    exit 1
fi

if [[ $ENVIRONMENTS =~ $DJANGO_ENV ]]
then
    echo "Deployment to $DJANGO_ENV requested"
else
    echo "Invalid environment $DJANGO_ENV"
    exit 1
fi

Starting at the top, we define a few variables. First, we grab the target environment DJANGO_ENV and the Docker image tag APP_VERSION to deploy from the command line arguments.

Next we set a variable to hold the project name and another that specifies the valid environments that we can deploy to. After that we have a couple of sanity checks to make sure that we’ve specified a version to deploy and also that we’ve specified a valid environment.
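One caveat worth knowing about =~: it does a regex/substring match, so an argument like rod would actually pass the check, because it’s a substring of "prod stage". A slightly stricter variant (my suggestion, not part of the original script) pads both sides with spaces so only whole words in the list match:

# Stricter check: only whole words in $ENVIRONMENTS pass
if [[ " $ENVIRONMENTS " =~ " $DJANGO_ENV " ]]
then
    echo "Deployment to $DJANGO_ENV requested"
else
    echo "Invalid environment $DJANGO_ENV"
    exit 1
fi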

Confirm Environment and Version
while true; do
    read -p "Deploy $APP_VERSION to $DJANGO_ENV? [y/n] " yn

    case $yn in
        [Yy]* ) break;;
        [Nn]* ) exit;;
        * ) echo "Please answer yes or no.";;
    esac
done

Once we’ve done the sanity checks we spit back out to the user what we are planning to do and loop through a prompt waiting for a yes response. If we get a no we terminate the script; otherwise we just keep asking until we get a yes or a no. I like to err on the side of caution and make extra certain that the user is paying attention and the automation is going to do what they expect. I can’t even count the number of times I’ve typed prod instead of stage or goofed the version number, and this check has saved me from deploying the wrong code to the wrong environment!

Connect to the Proper Cluster
# Connect to the proper cluster
if [ "$DJANGO_ENV" == 'prod' ]
then
    gcloud container clusters get-credentials $PROJECT-production
else
    gcloud container clusters get-credentials $PROJECT-staging
fi

# Pause a bit to allow emergency escape
sleep 5

Now we can start the actual deployment. The first step is to make sure we are talking to the right k8s cluster. To do that we use gcloud to pull down the credentials for the cluster we are targeting, based on the DJANGO_ENV variable. Because we only have two environments, staging and production, and we’ve already sanity-checked that the environment provided is valid, this is just a simple if/else. After we execute the gcloud command we pause for 5 seconds to let the user quickly ctrl+c out of the script if anything looks wrong. gcloud prints out which cluster’s credentials it has fetched, so this is a good checkpoint to let the user back out if something goes haywire.
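If the five-second window feels easy to miss, another belt-and-suspenders option (my addition, not part of the original script) is to print which context kubectl is now using, since get-credentials also switches the active context:

# Show exactly which cluster kubectl is now pointed at
echo "kubectl is now using context: $(kubectl config current-context)"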

Special Steps
# Delete any existing migration jobs
kubectl delete job $PROJECT-$DJANGO_ENV-migrations

# Delete the deployments for these services, we can only ever have one running at a time
kubectl delete deployment $PROJECT-$DJANGO_ENV-transaction-queue-worker

echo "Pausing to allow services to terminate..."
sleep 45

At this point we hit a couple of items that are more specific to this environment. For Django migrations I run a batch job in k8s, and the completed job object usually still exists when the next deploy happens, so I instruct k8s to delete the old job before we create a new one.

Also, I have a handful of deployments that are only ever meant to have a single replica; they are basically background jobs that run on an interval and queue up work. They are idempotent and also able to pick up where they left off if they are terminated unexpectedly. So what I do is delete those deployments as well and then pause the script for 45 seconds (it seems to take a tad less than 45 seconds on average before the pods are terminated and removed). I do this because the k8s deployment would otherwise spin up the new pod before removing the old one, briefly leaving two replicas running. I imagine there’s a better way to handle this directly in k8s but I haven’t devoted the time to discovering it just yet.
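For what it’s worth, Kubernetes does have a native knob for this: setting a deployment’s update strategy to Recreate tells it to terminate the old pods before creating new ones, which should make the delete-and-sleep dance unnecessary. I haven’t switched my own configs over yet, so treat this as a pointer rather than battle-tested config; it goes in the deployment spec alongside replicas:

spec:
  replicas: 1
  strategy:
    type: Recreate  # terminate old pods before creating new ones (default is RollingUpdate)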

Deploy Templates
templates=("dashboard.yml.template" "default-worker.yml.template" "transaction-queue-worker.yml.template")

for i in "${templates[@]}"
do
    echo "Applying template $i"
    template=$(cat "templates/$i" | sed "s/{{PROJECT}}/$PROJECT/g; s/{{APP_VERSION}}/$APP_VERSION/g; s/{{DJANGO_ENV}}/$DJANGO_ENV/g")
    echo ""

    echo "$template" | kubectl apply -f -
done

Here’s where the magic actually happens and we do the templated deployment. I have a bash array with the names of the template files that I need to deploy and I simply loop over those. To do the variable replacement, I cat the contents of each template out and pipe that to a sed command with a few substitution expressions that replace the template variables with their proper values. The result of this is put in a bash variable that I then echo out and pipe to kubectl apply. One improvement I’d like to make would be to have a second array of variables and loop over those to build the substitutions, instead of hard-coding each one. But, for now I only have the three variables so it’s not a huge deal.
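For what it’s worth, here’s a rough sketch of that improvement using an associative array (note: this requires bash 4+, and it assumes the same three variables as the script above):

# Map of template variables to their values (bash 4+ associative array)
declare -A vars=(
    [PROJECT]="$PROJECT"
    [APP_VERSION]="$APP_VERSION"
    [DJANGO_ENV]="$DJANGO_ENV"
)

# Build a single sed expression out of the map
sed_expr=""
for key in "${!vars[@]}"
do
    sed_expr+="s/{{$key}}/${vars[$key]}/g;"
done

template=$(sed "$sed_expr" "templates/$i")

One thing to watch with this approach: if a value ever contains a /, it will break the sed expression, so you would want to switch to a different sed delimiter like s|pattern|replacement|g.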

What does a template look like?

The beauty of this is that the templates are really just standard Kubernetes YAML files. The only difference is that I’ve chosen to end them with .template, and inside the files, wherever I want a variable I just write {{VARIABLE_NAME}}. Here’s an example of one of the templates:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{PROJECT}}-{{DJANGO_ENV}}-transaction-queue-worker
  labels:
    app: {{PROJECT}}-{{DJANGO_ENV}}-transaction-queue-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{PROJECT}}-{{DJANGO_ENV}}-transaction-queue-worker
  template:
    metadata:
      labels:
        app: {{PROJECT}}-{{DJANGO_ENV}}-transaction-queue-worker
    spec:
      containers:
      - name: {{PROJECT}}-{{DJANGO_ENV}}-transaction-queue-worker
        image: gcr.io/{{PROJECT}}/{{PROJECT}}-app:{{APP_VERSION}}
        imagePullPolicy: Always
        command: ['python','manage.py','queue_transaction_checks']

As you can see, it’s literally just a standard k8s YAML file but it makes use of the variables that will be replaced by sed. At the stage that I’m at, this has proven to be an easy, clear way to use the same configuration files between two different environments and to ensure that I never forget to update a variable somewhere when I’m running a deployment.
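One more cheap guard worth considering (my addition, not in the script above): a typo like {{APP_VERSON}} in a template would sail straight through sed untouched, so you can grep the rendered output for leftover placeholders before piping it to kubectl:

# Abort if any {{...}} placeholders survived the sed pass
if echo "$template" | grep -q '{{[A-Z_]*}}'
then
    echo "Unreplaced template variable in $i!"
    exit 1
fi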

And there you have it, a simple way to deploy to multiple Kubernetes clusters using variables and vanilla Kubernetes YAML files. No extra utilities necessary and no need to run any special components, like tiller, in your Kubernetes cluster either. One of these days I might improve this a little bit, either still using BASH or possibly making a simple Python or Go utility in place of the BASH script. Or maybe I’ll graduate to where using Helm actually makes more sense, but for now it’s quick, easy to comprehend and it just works.
