Deploy a Clojure Web Application to Kubernetes (GKE)

You have a Clojure web application, but how do you deploy it to a Kubernetes cluster and make it available at a particular URL?

Introduction

In a previous blog post I showed how it was possible to take a Clojure project containing a Pedestal back-end and a React front-end, and package it as a Docker container which can be run as a standalone Docker image using docker run, or as part of a Docker Swarm using docker service.

In this post, I will show how it is possible to deploy the same Docker container to a Kubernetes Cluster and to make the application available at a particular URL of your choosing.

A repository containing the code can be found here (tag v1.2).

Summary of the steps

  • Create a GCP Project
  • Build and Tag your Docker image
  • Upload your image to the Google Registry
  • Create a GKE Cluster
  • Get a Google Static IP address
  • Create an A record in your DNS for the IP endpoint
  • Create a Google managed SSL certificate
  • Deploy your image to the cluster
  • Create a back-end service over your container
  • Create an Ingress front-end service connecting the external IP address to the back-end service over https

The Steps

Setting up a Cloud Project on GCP

Before you start, you’ll need to create a Google Cloud Project. This is a simple process, so I won’t go into the details as instructions can be found here.

The current demo assumes that the GCP project name is allocations-accounting-v1, that the local Docker image is named clojure-app-v1, that a Docker daemon is running locally, and that the docker binary is on your path. Also, the URL at which the application is published is chosen to be https://demo.timpsongray.com. Obviously, these will be different for you.

You should also install both the gcloud and kubectl command line tools locally. If this is your first GCP project, don’t forget to also initialize the Cloud SDK. This will set your account’s credentials, authorize access to GCP APIs, and establish a base configuration, such as your default compute region and zone.

Once you have installed gcloud locally, installing kubectl is simply a matter of running

$ gcloud components install kubectl

from the command line.

Finally, with the tools installed and your project created, you can view its details using

$ gcloud projects describe allocations-accounting-v1

Setting your Current Project

Now, you should set allocations-accounting-v1 to be the current project. (If you’ve run gcloud init as above, this should already have been done, but it’s no harm to set it a second time.)

$ gcloud config set project allocations-accounting-v1

Connecting your Docker Repository to GCP

In order to easily publish a Docker image from your local machine to a GCP container registry, you should configure Docker to use gcloud as the credential helper for all of Google’s registries using

$ gcloud auth configure-docker

Build and Tag the docker image

Build the docker image

If you’re using the repo, you can do this by issuing

$ make clean-all
$ make docker

from the command line.

Tag the image for upload to the Google container registry

Now we’ll tag the image so it conforms with the image names expected by the Google registry.

$ docker tag clojure-app-v1 gcr.io/allocations-accounting-v1/allocations-accounting:v1.0

By default, if you don’t specify a tag, the command above appends :latest to the image name. It’s good practice to specify a tag explicitly.

The Google registry location is constructed using gcr.io/<PROJECT_NAME>/<IMAGE_NAME>.

Push the image to the Google Registry

Before Google will accept a pushed image you need to enable the Google Container Registry API for your project

$ gcloud services enable containerregistry.googleapis.com

and then push the tagged image using

$ docker push gcr.io/allocations-accounting-v1/allocations-accounting:v1.0

Create the Cluster

First enable the Kubernetes Engine API using

$ gcloud services enable container.googleapis.com

which may take a few minutes.

When complete, we ask GKE to create a cluster with a single node, which is sufficient for illustrative purposes.

$ gcloud container clusters create allocations-accounting-v1-cluster --num-nodes=1

Again, this may take a few minutes as the cluster’s resources are created, deployed and health-checked.

Import Credentials

Once the cluster has been created we sync credentials

$ gcloud container clusters get-credentials allocations-accounting-v1-cluster

This will create a kubeconfig entry for the cluster, and allow you to manage the new cluster using the kubectl command line tool.

Get a static IP address for your site

We have decided to publish our application at a well-known URL (i.e. demo.timpsongray.com), so we need to ensure that we have a stable, externally addressable IP address. We do this by asking GCP to assign a global IP address for our use.

$ gcloud compute addresses create allocations-app-v1-addr --global

Find the external IP address

Now, find what IP address was assigned using

$ gcloud compute addresses list

which should return something like

NAME                            ADDRESS/RANGE   TYPE      PURPOSE  NETWORK  REGION  SUBNET  STATUS
allocations-app-v1-addr         34.120.154.247  EXTERNAL                                    RESERVED

Make a note of the IP address, and then add an A record to your DNS associating the name demo.timpsongray.com with that IP address. You may need to wait a little while for the DNS changes to propagate.

Create a Secret

The application uses either environment variables or docker secrets to configure itself. From an internal perspective, this distinction is abstracted away with the use of the walmartlabs/dyn-edn Clojure library.
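As an aside, the way dyn-edn resolves environment values can be sketched in a few lines (assuming the walmartlabs/dyn-edn library is on the classpath; the config key and default value here are illustrative, not taken from the actual codebase):

```clojure
;; Sketch: resolving an EDN config against the environment with dyn-edn.
;; The :keystore-password key and "dev-password" default are illustrative.
(require '[clojure.edn :as edn]
         '[com.walmartlabs.dyn-edn :refer [env-readers]])

(def config
  (edn/read-string
    {:readers (env-readers)}
    "{:keystore-password #dyn/prop [ALLOC_KEYSTORE_PASSWORD \"dev-password\"]}"))

;; In Kubernetes the env var injected from the secret wins; locally, the
;; default in the vector applies.
(:keystore-password config)
```

The same EDN file therefore works unchanged in both environments; only the source of the value differs.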

However, GKE adds a feature to its use of secrets that is not available with docker swarm - it’s possible to have a secret’s value dynamically injected into a container’s environment as a standard environment variable.

In order to make use of this, of course, you must create the secret itself. This can be done as follows. (The value of <PASSWORD> should be the password for the keystore used by the Jetty instance in your application, and <16 byte session key> the key used to encrypt session cookies.)

$ kubectl create secret generic \
    allocations-app-v1-secrets \
    --from-literal=ALLOC_KEYSTORE_PASSWORD='<PASSWORD>' \
    --from-literal=ALLOC_SESSION_STORE_KEY='<16 byte session key>'

You can check that the secret was created successfully by issuing the following command and inspecting the results

$ kubectl describe secrets/allocations-app-v1-secrets

It’s important that the length of the ALLOC_SESSION_STORE_KEY value is precisely 16 bytes.
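Since a wrong key length only surfaces at run time, it’s worth checking the length locally before creating the secret; a quick sketch (the key shown is a placeholder):

```shell
# Placeholder key; substitute your real value. Must be exactly 16 bytes.
SESSION_KEY="0123456789abcdef"
KEY_LEN=$(printf '%s' "$SESSION_KEY" | wc -c)
if [ "$KEY_LEN" -eq 16 ]; then
  echo "session key length OK"
else
  echo "session key must be 16 bytes, got $KEY_LEN" >&2
fi
```

Note the use of printf rather than echo, which would add a trailing newline and over-count by one byte.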

Deploy your App to the Cluster

Now deploy the Docker image containing the application to a container running in the cluster, specifying the image recently pushed to the Google registry.

During the deployment GKE will be requested to inject the values of ALLOC_KEYSTORE_PASSWORD and ALLOC_SESSION_STORE_KEY from the allocations-app-v1-secrets resource into the container’s run-time environment as environment variables (also named ALLOC_KEYSTORE_PASSWORD and ALLOC_SESSION_STORE_KEY respectively). The Clojure application uses the former to gain access to Jetty’s keystore, which is required for Jetty to publish the application on an https endpoint, and the latter as the key used to encode session cookies.

Kubernetes secrets can also be made available within the container at a particular mount point (using tmpfs). This is similar to Docker swarm’s strategy. We could use it here, but the environment variable approach is simpler, and the use of the dyn-edn library ensures that there’s very little transition to be done when moving from a local development environment to the Kubernetes production environment.
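For comparison, the mount-point approach would look roughly like this fragment of a Deployment spec (a sketch only; the mount path is an assumption, and it is not used in this walkthrough):

```yaml
# Sketch: expose the secret as files rather than environment variables.
spec:
  volumes:
  - name: app-secrets
    secret:
      secretName: allocations-app-v1-secrets
  containers:
  - name: allocations-app-v1-app
    volumeMounts:
    - name: app-secrets
      mountPath: /run/secrets   # each secret key becomes a file here
      readOnly: true
```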

Now instruct GKE to deploy the application using

$ kubectl apply -f deploy.yaml

where the contents of the deploy.yaml file is as follows

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: allocations-app-v1
  name: allocations-app-v1-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: allocations-app-v1
      tier: allocations-app-v1-web
  template:
    metadata:
      labels:
        app: allocations-app-v1
        tier: allocations-app-v1-web
    spec:
      containers:
      - image: gcr.io/allocations-accounting-v1/allocations-accounting:v1.0
        name: allocations-app-v1-app
        ports:
        - containerPort: 8081
        env:
        - name: ALLOC_HOST_NAME
          value: demo.timpsongray.com
        - name: ALLOC_KEYSTORE_PASSWORD
          valueFrom:
            secretKeyRef:
              key: ALLOC_KEYSTORE_PASSWORD
              name: allocations-app-v1-secrets
        - name: ALLOC_SESSION_STORE_KEY
          valueFrom:
            secretKeyRef:
              key: ALLOC_SESSION_STORE_KEY
              name: allocations-app-v1-secrets

Note that the env in the deployment yaml file also specifies a value for ALLOC_HOST_NAME. This is important as the application will make decisions about what ports to use for serving content and API endpoints based on this value.

In the current codebase, if the host name ends with the string “timpsongray.com” then all communication is assumed to occur on port 80. This is probably what’s intended for a production system served using https. Obviously, your site name will be different and you should adjust the code.
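The host-name check described above amounts to something like the following (a hypothetical sketch; the function name and the fallback port are assumptions, not the actual code from the repo):

```clojure
(require '[clojure.string :as str])

;; Hypothetical sketch of the host-based port selection described above.
(defn external-port [host-name]
  (if (str/ends-with? host-name "timpsongray.com")
    80      ; production: served behind the https ingress
    8080))  ; assumed local-development fallback

(external-port "demo.timpsongray.com") ; => 80
```

When adapting the code for your own domain, this suffix check is the piece to change.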

Create a Back-End Service

To access the deployed application, GKE is requested to create a back-end service over the pods containing the deployment. The request is for a NodePort service, which exposes the Service on the same port of each selected Node in the cluster using NAT, making it accessible from outside the cluster at <NodeIP>:<NodePort>. NodePort is a superset of ClusterIP.

This is done using

$ kubectl apply -f service.yaml

where the content of the service.yaml file is as follows

apiVersion: v1
kind: Service
metadata:
  name: allocations-app-v1-svc
  annotations:
    cloud.google.com/app-protocols: '{"app-https-port":"HTTPS","app-http-port":"HTTP"}'
  labels:
    app: allocations-app-v1
spec:
  type: NodePort
  selector:
    app: allocations-app-v1
    tier: allocations-app-v1-web
  ports:
    - name: app-https-port
      port: 8081
      targetPort: 8081
    - name: app-http-port
      port: 8080
      targetPort: 8080

Why http? (and other notes on Health Checks)

GKE will automatically create Health Checks to check the status of the backend services created, and which expose your deployment.

By default, for web services, GKE will probe the app at a particular path (/ or /healthz) using a particular protocol (http or https).

Network load balancers require legacy health checks, and these must use http, which means that the backend must support http probing by the health-checking mechanism. Don’t disable http on the NodePort (backend) service or GKE will complain.

Although legacy health checks can in general be https, the network load balancer only supports http.
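One way to keep the http probe happy is to declare a readiness probe on the http port, from which GKE derives its health check. A sketch of the container spec addition (the /healthz path is an assumption about the app’s routes):

```yaml
        readinessProbe:
          httpGet:
            path: /healthz   # assumed health endpoint in the app
            port: 8080       # the plain-http port left enabled above
```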

Check the Service’s Status

A convenient way to check whether a web application is running correctly is to use port forwarding from your local machine to tunnel directly to the running pod. The following command opens a tunnel from port 8080 on localhost to the pod’s port 8081 (the port on which the containerized Clojure application listens).

$ gcloud container clusters get-credentials \
  allocations-accounting-v1-cluster --zone us-east4-a --project allocations-accounting-v1 \
  && kubectl port-forward $(kubectl get pod \
  --selector="app=allocations-app-v1,tier=allocations-app-v1-web" \
  --output jsonpath='{.items[0].metadata.name}') 8080:8081

and then in your browser, navigate to https://localhost:8080.

If everything is operating correctly, you should see the home page of the Clojure application served by Jetty.

In your console window type Ctrl+C to stop port forwarding.

Set up External Routing

In the following section we will connect our chosen URL demo.timpsongray.com to the application.

But first we’ll need to perform a few checks and actions.

Check that the app’s DNS name is available

From the command line run

$ nslookup demo.timpsongray.com

and ensure that the address returned is the static IP address that was created by Google earlier. This indicates that the DNS is responding correctly.

Create a Google Managed SSL Certificate

We want to use https on our publicly accessible endpoint so we’ll need to install an SSL certificate. There are a few ways to do this, but the most convenient is to use GCP’s managed SSL certificates.

DNSSEC

In order for the managed certificate creation to happen correctly, and for the external IP address you provisioned to be associated with it (when you create the Ingress service), DNSSEC must be enabled on your domain and the A record you created on the domain must point to the static IP address.

If either of these is not set correctly, you may see a Status: FailedNotVisible status when you issue the kubectl describe managedcertificate command below, and the Ingress creation will fail.

We can request a managed SSL certificate using

$ kubectl apply -f cert.yaml

where the content of the cert.yaml file is

apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: allocations-app-v1-cert
spec:
  domains:
    - demo.timpsongray.com

Check the status of the SSL certificate

We can check the status of the SSL provisioning process using

$ gcloud compute ssl-certificates list --global

which will show something like

NAME                TYPE     CREATION_TIMESTAMP             EXPIRE_TIME  MANAGED_STATUS
mcrt-5fc3491d-8eb3  MANAGED  2020-05-28T06:38:28.757-07:00               PROVISIONING
    demo.timpsongray.com: PROVISIONING

indicating that provisioning has started, and that the URL is as expected.

Create an Ingress Front-End Service

In order to connect the outside world with the back-end service, we will create a load-balanced Ingress service using

$ kubectl apply -f ingress.yaml

where the content of the ingress.yaml file is

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: allocations-app-v1-web
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "allocations-app-v1-addr"
    kubernetes.io/ingress.allow-http: "false"
    networking.gke.io/managed-certificates: "allocations-app-v1-cert"
  labels:
    app: allocations-app-v1
spec:
  backend:
    serviceName: allocations-app-v1-svc
    servicePort: 8081

Although the kubectl command returns quickly it can take a few minutes for the ingress to be provisioned, deployed and stabilized. You can check its status using

$ kubectl describe ingress allocations-app-v1-web

When the provisioning is completed, you should be able to navigate to https://demo.timpsongray.com and view your application.

Viewing Logs

Kubernetes allows you to inspect the logs of the Clojure application if you specify the pod in which it’s running. In order to discover the pod name you can issue the following command

$ kubectl get pods

which will return something like

NAME                                     READY   STATUS    RESTARTS   AGE
allocations-app-v1-web-5d966f5d8-v2wgn   1/1     Running   0          32m

You can then issue the following command (substituting the correct pod name) to view the application logs

$ kubectl logs allocations-app-v1-web-5d966f5d8-v2wgn

Closure

Deploying a web application to Kubernetes and exposing it on the web with a specific URL isn’t particularly difficult, but there are a number of places where things can go pear-shaped. Hopefully, this will help when you try to do the same thing.

Kieran Owens
CTO of Timpson Gray

Experienced Technology Leader with a particular interest in the use of functional languages for building accounting systems.