In part 1 of this post, we learned how to create a Spring Boot application, create a Docker image for it, and push the image to a Docker registry. At the end, we installed Minikube in an Ubuntu VM. In this second part, we will get familiar with some Kubernetes terminology, deploy the application to our Minikube cluster and update the application. The sources used for the application can be found at GitHub. The Docker registry which we use can be found here (or you can use your own Docker registry).

Terminology

First, we will explain some of the terminology used with Kubernetes. This will give us a basic idea of what everything means. The list does not aim to be complete, but these terms will come up when we deploy our application to Minikube, and afterwards you will have a basic understanding of how the pieces fit together.

(Figure: Kubernetes overview)

  • Master: The Master in the cluster, manages the cluster;
  • Node: A Node is a working machine in the cluster (can be a VM or physical machine);
    • Kubelet: Each Node has a Kubelet, which communicates with the Master (API server);
    • A Node also runs a Docker runtime;
  • Pod: A Pod represents one or more containers and runs on a Node. A Pod is the atomic unit inside a Kubernetes cluster. A Pod also has a unique IP address inside the Kubernetes cluster, which is not exposed to the outside;
  • Deployment: A Deployment describes how Pods (with their containers) are created and updated. A Deployment is created on the Master, which in turn creates the Pods for it;
  • Service: A Service combines Pods and specifies how to access them (e.g. with a Service it is possible to expose your application outside the Kubernetes cluster). Because of a Service, your application will not be impacted when Pods suddenly terminate (a new Pod will automatically be created again). It is a kind of abstraction layer above Pods;
    • Selector: With a Selector you are able to indicate which Pods belong to this Service;
    • Label: A Label is set to a Pod to indicate that it belongs to a Service (see the example after this list).
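To make the Selector/Label relation concrete: once our application is deployed and exposed (later in this post), you can compare the labels on the Pods with the Selector of the Service. A minimal sketch, assuming the run=mykubernetesplanet label which kubectl will assign later on:

sudo kubectl get pods --show-labels
sudo kubectl get services -o wide

The LABELS column of the first command should contain the value shown in the SELECTOR column of the second command; that is how a Service knows which Pods to route traffic to.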

Step 5: Deploy the application to Minikube

Now that we know some of the terminology, we will go through the steps to deploy our application to the Minikube cluster. Meanwhile, we will explore some of the kubectl commands. Much is based on the interactive tutorial for Minikube. This tutorial is quite good, but you only need to click the commands in order to execute them. This way, I don’t have the impression that I really did something, and I tend to forget quite easily what I have been doing. Therefore, I decided to do the work myself with my own basic Spring Boot application. In the end, our goal is to have a basic understanding of how we can deploy our application to Kubernetes.

Check cluster info

First, we will check the information of our cluster:

sudo kubectl cluster-info

In the output we can see that our Master is up and running:

Kubernetes master is running at https://192.168.2.51:8443
KubeDNS is running at https://192.168.2.41:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Minikube only runs one Node, which is also our Master Node. We can check this as follows:

sudo kubectl get nodes

The output shows us one Master Node. Of course, this is fine in our test setup, but in production you would probably want at least three Nodes: one for the Master and two worker Nodes for redundancy of your application:

NAME       STATUS      ROLES     AGE     VERSION
minikube   Ready       master    12d     v1.10.0

Deploy an application

In order to deploy an application, you need a Deployment configuration. This instructs Kubernetes how to create and update your application. The Deployment configuration resides on the Master, which then takes care of running the containerized application inside a Node.

We create a Deployment configuration with the following command:

sudo kubectl run mykubernetesplanet --image=mydeveloperplanet/mykubernetesplanet:0.0.1-SNAPSHOT --port=8080

After run, we set the name of the Deployment (in our case mykubernetesplanet). With the --image option, we set the image name and tag, just like we would do when pulling the image with Docker.
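If you are curious what kubectl generates from this command behind the scenes, you can ask it to print the configuration without creating anything. This is just an optional sketch; the exact output (and the availability of the flags) depends on your kubectl version:

sudo kubectl run mykubernetesplanet --image=mydeveloperplanet/mykubernetesplanet:0.0.1-SNAPSHOT --port=8080 --dry-run -o yaml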

After creating the Deployment Configuration, we can check our deployment with the following command:

sudo kubectl get deployments

The output shows our deployment.

NAME                 DESIRED    CURRENT    UP-TO-DATE    AVAILABLE    AGE
mykubernetesplanet   1          1          1             0            45s

At this point, we have one Pod running for our application.

Check Pod information

Our Pod has been given a unique name. With the following command, we can view the list of available Pods:

sudo kubectl get pods

The output gives us something like:

NAME                                 READY    STATUS     RESTARTS    AGE
mykubernetesplanet-ccc66688c-qg4zm   1/1      Running    1           2m

When you try this yourself, you will notice that the first part of the Pod name will be the same on your machine, but the last part will differ because it is specific to the Pod instance.
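Because the Pod name differs per machine, it can be convenient to store it in a shell variable instead of copy-pasting it into every command. A small sketch, assuming the run=mykubernetesplanet label which kubectl assigned to our Deployment (we will verify this label later on); the variable name POD is just an example:

POD=$(sudo kubectl get pods -l run=mykubernetesplanet -o jsonpath='{.items[0].metadata.name}')
echo $POD

The commands below use the full Pod name, but you can substitute $POD wherever the Pod name appears.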

Next, we can retrieve detailed information about our Pod by issuing the following command:

sudo kubectl describe pods mykubernetesplanet-ccc66688c-qg4zm

I did not list the output here because it is quite long, but when you try this yourself, take a look at the information the command returns.

It is also possible to look at the standard output of your application:

sudo kubectl logs mykubernetesplanet-ccc66688c-qg4zm

With the following commands, you are able to execute commands inside the Pod, e.g. to show the environment variables or to open a bash terminal:

sudo kubectl exec mykubernetesplanet-ccc66688c-qg4zm env
sudo kubectl exec -ti mykubernetesplanet-ccc66688c-qg4zm bash
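Another handy option, before we have created a Service, is to forward a local port to the Pod and test the application directly. This is optional and assumes the Pod name from above and the /hello endpoint of our application:

sudo kubectl port-forward mykubernetesplanet-ccc66688c-qg4zm 8080:8080
curl http://localhost:8080/hello

Run the curl command in a second terminal, and stop the port-forward with Ctrl+C when you are done.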

Expose application publicly

At this moment, our application is running in the Kubernetes cluster, but it is not yet accessible from outside the cluster, which is eventually what we want. In order to accomplish this, we need to create a Service.

We can retrieve a list of the available services with the following command:

sudo kubectl get services

It outputs the following. As we can see, a default Service is already available, but there is none yet for our application.

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1                  443/TCP   12d

Now, let’s expose our application to the outside world:

sudo kubectl expose deployment/mykubernetesplanet --type="NodePort" --port 8080

A new Service is being created:

service "mykubernetesplanet" exposed

The application is exposed on a certain port, and in order to know this port, we execute the following:

sudo kubectl describe services/mykubernetesplanet

This will show us information about the Service.

Name:                      mykubernetesplanet
Namespace:                 default
Labels:                    run=mykubernetesplanet
Annotations:               <none>
Selector:                  run=mykubernetesplanet
Type:                      NodePort
IP:                        10.101.49.0
Port:                      <unset> 8080/TCP
TargetPort:                8080/TCP
NodePort:                  <unset> 31600/TCP
Endpoints:                 172.17.0.4:8080
Session Affinity:          None
External Traffic Policy:   Cluster
Events:                    <none>

In the output, we notice that the Selector has the value ‘run=mykubernetesplanet’, which is also the label our Pod is carrying.

The port number at the NodePort property is the port on which our application is exposed, 31600 in our case.
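Instead of reading the port from the describe output, you can also retrieve it directly. A small sketch using kubectl's jsonpath output (the variable name NODE_PORT is just an example):

NODE_PORT=$(sudo kubectl get service mykubernetesplanet -o jsonpath='{.spec.ports[0].nodePort}')
echo $NODE_PORT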

So, now it is time to check whether our application is exposed to our Ubuntu VM. Execute the following in a terminal:

curl $(sudo minikube ip):31600/hello

This returns our “Hello Kubernetes!” welcome message.

Hello Kubernetes!

Entering the URL in the browser gives us the same result.
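Minikube can also construct the complete URL for us, which saves us from looking up the Minikube IP address and the NodePort separately:

sudo minikube service mykubernetesplanet --url

This prints the base URL (Minikube IP plus NodePort), to which we only need to append /hello.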

Now, let’s check which labels are present. First, we check the label of our deployment:

sudo kubectl describe deployment

Kubernetes has automatically added the following label to our Deployment:

Labels:  run=mykubernetesplanet

Now we can check which Pods and Services carry the same label:

sudo kubectl get pods -l run=mykubernetesplanet
sudo kubectl get services -l run=mykubernetesplanet

We can also add a new label to our Pod:

sudo kubectl label pod mykubernetesplanet-ccc66688c-qg4zm app=v.0.0.1-SNAPSHOT

And check whether the label is added successfully to our Pod:

sudo kubectl describe pods mykubernetesplanet-ccc66688c-qg4zm
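Should you want to remove the label again, kubectl supports this by appending a dash to the label key:

sudo kubectl label pod mykubernetesplanet-ccc66688c-qg4zm app-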

With the following command we can remove the Service and make sure that our application is not exposed anymore outside our Kubernetes cluster:

sudo kubectl delete service -l run=mykubernetesplanet
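Note that we will need the Service again for the load balancing test later in this post, so simply recreate it with the same expose command as before. Keep in mind that Kubernetes may then assign a different NodePort:

sudo kubectl expose deployment/mykubernetesplanet --type="NodePort" --port 8080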

Step 6: Update the application

Next, we will update our application to version 0.0.2-SNAPSHOT and add the host name to our hello response. The latter is needed to check the load balancing feature when we create more than one Pod for our application.

First, we change the version of our application in our pom:

<version>0.0.2-SNAPSHOT</version>

In our HelloController, we add the host name to the message. The class is shown here in full, with imports and a mapping for the /hello endpoint we used earlier (your existing controller may organize the annotations slightly differently):

import java.net.InetAddress;
import java.net.UnknownHostException;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

  @RequestMapping("/hello")
  public String hello() {
    StringBuilder message = new StringBuilder("Hello Kubernetes!");
    try {
      // getLocalHost() returns the host name and IP address of the Pod serving the request
      InetAddress ip = InetAddress.getLocalHost();
      message.append(" From host: " + ip);
    } catch (UnknownHostException e) {
      e.printStackTrace();
    }
    return message.toString();
  }
}

Build the application with maven:install and push the new image to the Docker registry by means of docker:push. In Docker Hub, we see our new Docker image with tag 0.0.2-SNAPSHOT.

At this point, we can update our application in Minikube. We will perform a rolling update. This means that we can update the application with zero downtime.

First, update the image of the deployment to version 0.0.2-SNAPSHOT:

sudo kubectl set image deployments/mykubernetesplanet mykubernetesplanet=mydeveloperplanet/mykubernetesplanet:0.0.2-SNAPSHOT

The command notifies the Deployment configuration to use a different image for your application and initiates a rolling update. The output is:

deployment.apps "mykubernetesplanet" image updated

Check the status of the roll-out as follows:

sudo kubectl rollout status deployments/mykubernetesplanet

Eventually, when the roll-out has been completed, the output is:

deployment "mykubernetesplanet" successfully rolled out

If you notice that something goes wrong with the deployment, you can undo the roll-out with the following command:

sudo kubectl rollout undo deployments/mykubernetesplanet
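Related to the undo command: you can also list the revisions Kubernetes keeps for a Deployment, which shows the revision a rollback would return to:

sudo kubectl rollout history deployments/mykubernetesplanet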

You can now check our hello URL and verify that the host address is returned in the hello message.
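For example, assuming the Service is still exposed and the NodePort is still 31600 (yours may differ):

curl $(sudo minikube ip):31600/hello

The response should now also contain the host name of the Pod that served the request.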

Running multiple instances

In version 0.0.2-SNAPSHOT, we added the host name to the hello message. A Service has an integrated load balancer, and because we print the host name, we can check this feature. Before we do so, we will create more than one Pod running our application, otherwise it would not make much sense to load balance 😉

When we run the get deployments command, we can see the number of desired Pods and the number of current Pods. In our case, we have 1 desired Pod and 1 current Pod.

NAME                  DESIRED    CURRENT    UP-TO-DATE    AVAILABLE    AGE
mykubernetesplanet    1          1          1             1            1h

We can scale up the number of Pods by means of the following command. We will scale up to 4 Pods:

sudo kubectl scale deployments/mykubernetesplanet --replicas=4

After this, 3 new Pods will be created and we can check this with:

sudo kubectl get pods -o wide

The output will be something like:

NAME                                  READY    STATUS    RESTARTS    AGE    IP            NODE
mykubernetesplanet-ccc66688c-fs5mm    1/1      Running   0           53s    172.17.0.6    minikube
mykubernetesplanet-ccc66688c-lc8lj    1/1      Running   0           53s    172.17.0.7    minikube
mykubernetesplanet-ccc66688c-qg4zm    1/1      Running   0           1h     172.17.0.4    minikube
mykubernetesplanet-ccc66688c-wjsfr    1/1      Running   0           53s    172.17.0.6    minikube

Make sure that the Service is running and invoke our hello URL with curl from a terminal. In the returned messages, you will notice that different Pods are hit when you invoke the URL several times in a row.
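A quick way to see the load balancing in action is to invoke the URL in a small loop. A sketch, again assuming the NodePort is 31600 (yours may differ):

for i in $(seq 1 10); do curl $(sudo minikube ip):31600/hello; echo; done

In the output, the host name should vary between the four Pods we created.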

Summary

In this second part, we got familiar with some of the terminology used within Kubernetes. We deployed our application to the Kubernetes cluster, updated it, and used kubectl to execute commands against the cluster. We should now have a basic understanding of how Kubernetes works.