Kubernetes clusters in IBM Cloud

Containers Orchestration with Kubernetes

Blumareks
12 min read · Sep 5, 2019

One of the reasons why I wanted to learn about containers and their orchestration with Kubernetes is my fascination with Epic's Fortnite. Kids and adults alike love the game, and it has become a movement. Recently it hosted an event during which more than 10 million users participated in the Fortnite Marshmello concert. I was there with my son. If you want to learn more about it, check this link: https://www.theverge.com/2019/2/21/18234980/fortnite-marshmello-concert-viewer-numbers

I was amazed by the orchestration of the entire event; it blew my mind. So I started to learn about containers and Kubernetes. This blog post presents my findings. It is based on a simple containerized application that goes through basic operations with the help of container orchestration in Kubernetes (K8s for short).

If you want to learn how to deploy containers to a Kubernetes cluster, debug a running application, roll out new releases from a private container registry, experience the K8s automatic rollback, and finally get a shell inside a running application container in a K8s cluster, this blog post is for you.

In this blog post you will be using an example app from the following open source repository:

https://gitlab.com/ibm/kube101

The entire process covers the following actions, which you might need to adjust depending on the cloud you are going to use. We are going to use IBM Cloud.

Step 1: setting up a Kubernetes cluster

Step 2: cloning an application, building it into a container, and uploading its version to a container registry

Step 3: deploying an application to the Kubernetes cluster, understanding basic deployment rules and health checks, and checking and understanding logs and the Kubernetes deployment definitions

Step 4: experiencing Kubernetes automation in keeping the application pods active, as well as rolling out a failing application that forces Kubernetes to roll back

Step 5: updating an application to a new release and replacing the old app with the new release

Step 6: discussing smart load balancing and routing with Istio.

Step 1: setting up a Kubernetes cluster

For this step we are going to use IBM Cloud. Other clouds that support Kubernetes might have different setup steps; please look into the instructions here:

https://kubernetes.io/docs/setup/

IBM provides two setups: one with vanilla-style Kubernetes, and the other with the Red Hat OpenShift flavor of Kubernetes. These steps describe the vanilla Kubernetes deployment:

  • use this link to open the IBM Cloud registration/signup/login page (this referral URL gives the author some brownie points): https://ibm.biz/Bd2CUa
Chart 1. Signing up/ Logging in to IBM Cloud for free (IBM Cloud lite account)

As soon as you see the dashboard, look for the Kubernetes service in the catalog under the Containers category. In our example we will be using 3 worker nodes in a single zone. When sizing the machines you might want to keep them as small as possible; in my case I am using 3 worker nodes, each with 2 vCPUs and 4 GB RAM.

A couple more steps to configure your environment when you are using it for the first time are specified on the next screen:

Chart 2. Installing CLIs, and accessing your K8s cluster in steps
Chart 3. The final installation report of CLIs and plug-ins

You will also need the Docker CLI; get it from here: https://docs.docker.com/engine/installation/

Step 2: cloning an application, building it into a container, and uploading its version to a container registry

While waiting for the K8s cluster to be ready, you can prepare your example application. From time to time, check the progress of the provisioning.

Chart 4. The progress of K8s cluster provisioning — in progress to ready states

The first action is to work with your application. You will want to clone or download the example app. You can use the git command below, or simply download a zipped copy (use this link to access the repository: https://gitlab.com/ibm/kube101, download the zipped application, and then unpack it in the folder of your choice). To clone the app directly from the GitLab repository, use the following git command:

git clone https://gitlab.com/ibm/kube101

Now you are ready to build the app and upload a version of it to the container registry. So let's log in to IBM Cloud and to the Container Registry service:

ibmcloud login -a cloud.ibm.com -r us-south -g default

and then configure access to your cluster by its name (in my case it is Marek-cluster):

ibmcloud ks cluster config --cluster Marek-cluster

or, more generically:

export MYCLUSTERNAME=<provide your cluster name here>

ibmcloud ks cluster config --cluster $MYCLUSTERNAME

As soon as the command executes, you should get an env export command that you need to copy, paste, and execute in order to access your cluster from the kubectl CLI, for example with the command:

kubectl cluster-info
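
If the context was set correctly, a quick sanity check such as listing your worker nodes should also succeed (the same command is used later in this post with the -o wide flag to show external IPs):

kubectl get nodes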

When you are done, you should be able to access your Container Registry service on IBM Cloud. Set the registry region first:

ibmcloud cr region-set us-south

export the name of your registry:

export MYREGISTRY=<your registry host name; for example, the registry for US-SOUTH is us.icr.io>

choose a unique name for your registry namespace:

export MYREGISTRYNAMESPACE=<provide your container registry namespace here>

and then you can add your registry namespace:

ibmcloud cr namespace-add $MYREGISTRYNAMESPACE
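
You can verify that the namespace was created by listing the namespaces in your registry:

ibmcloud cr namespace-list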

Once you have access to the CLIs (kubectl, and ibmcloud cr with your region and Container Registry namespace), we are ready to add a new image to the registry. Go inside the downloaded repo:

cd kube101

set up your application name:

export MYAPPNAME=<come up with a name>

and run this command:

ibmcloud cr build --tag $MYREGISTRY/$MYREGISTRYNAMESPACE/$MYAPPNAME-status-page:1 status_page

What does this command do? Basically it runs a Docker build against the Dockerfile available here: <your-download-directory>/kube101/status_page/Dockerfile

Run the more command on this file:

$ more status_page/Dockerfile

If you are not too familiar with Docker, the Dockerfile does the following with the help of the Docker CLI.

First, it builds a new image based on the ubuntu:18.04 image from the Docker Hub registry. For this particular deployment it sets up the environment, including the name of the app (status_page).

Second, it updates the Ubuntu packages and installs Python and its application server. Then it defines the Docker working directory and points it at the local application directory.

Finally, it installs Python Flask and runs the app so that it accepts outside calls (listening on IP address 0.0.0.0).
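
As a rough illustration only (the actual status_page/Dockerfile in the repo may differ in package names and file layout), a Dockerfile matching that description could look like this:

# illustrative sketch, not a copy of the repository's Dockerfile
FROM ubuntu:18.04
# tell Flask which application package to serve
ENV FLASK_APP=status_page
# update the OS and install Python with pip
RUN apt-get update && apt-get install -y python3 python3-pip
# working directory and application source
WORKDIR /var/www/status_page
COPY . .
# install Flask and the app itself
RUN pip3 install flask && pip3 install .
EXPOSE 5000
# listen on 0.0.0.0 so the container accepts outside calls
CMD ["flask", "run", "--host=0.0.0.0", "--port=5000"]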

The built image is then uploaded to our private registry at $MYREGISTRY/$MYREGISTRYNAMESPACE/$MYAPPNAME-status-page with the tag 1.
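
If you want to confirm that the image landed in the registry, list the images visible to your account; you should see the freshly built $MYAPPNAME-status-page image with tag 1:

ibmcloud cr image-list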

Now you are ready to deploy this application as a container in a Kubernetes cluster. See the next step on how to do it.

Step 3: deploying an application to the Kubernetes cluster, understanding basic deployment rules, and health checks, checking and understanding logs and the Kubernetes deployment definitions

As soon as your cluster is ready, you should be able to execute the following command (check the configuration steps above if it doesn't work):

kubectl get all -o wide

This would give you some details of the cluster:

$ kubectl get all -o wide

NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
svc/kubernetes 172.21.0.1 <none> 443/TCP 4h <none>

You should now be able to proceed to the deployment of your containerized application. You need to edit the deployment YAML file: deploy/status-deployment.yaml

vi deploy/status-deployment.yaml

On line 19, change the link to your image: replace the container registry name kubeworkshop101/$YOURNAME-status-page:1 with one that includes your registry, your registry namespace, and your app name.

my image: us.icr.io/mareks-kube101/blog-status-page:1

A couple of words of caution on YAML files: they are sensitive to whitespace. Do not change the indentation, or you risk that the deployment won't work or will behave unpredictably.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: status-web
  labels:
    app: status-web
spec:
  replicas: 3

Script 1. Excerpt from the deployment YAML

In the status-deployment.yaml file you specify that you are going to run 3 replicas of the web app. They will be deployed in 3 different pods. Pods are the basic deployment units of Kubernetes. The pods host your images; in this case the status-web pods run the container available in the private container registry in IBM Cloud: us.icr.io/mareks-kube101/blog-status-page:1. In addition to that, there are the worker nodes. These are the nodes connected to hardware (virtually assigned to you from a larger hardware pool, or dedicated solely to you; you might want to check the price difference between virtual and dedicated environments). Finally, there is a readiness probe defined, which Kubernetes uses to decide whether a pod is ready to receive traffic.
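
A typical HTTP readiness probe in such a deployment looks roughly like the snippet below; the exact path and timing values in deploy/status-deployment.yaml may differ, so treat this as a sketch:

# HTTP readiness check against the Flask app port (illustrative values)
readinessProbe:
  httpGet:
    path: /
    port: 5000
  initialDelaySeconds: 5
  periodSeconds: 10

With that in place, start your deployment: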

$ kubectl apply -f deploy/status-deployment.yaml

When you check what is happening with your cluster you will notice the following:

$ kubectl get all -o wide

NAME READY STATUS RESTARTS AGE IP NODE
po/status-web-5c576d4d57-2mm6t 0/1 Running 0 1m 172.30.248.2 10.87.83.211
po/status-web-5c576d4d57-49gts 0/1 Running 0 1m 172.30.174.130 10.87.83.213
po/status-web-5c576d4d57-7sk58 0/1 Running 0 1m 172.30.174.129 10.87.83.213

The pods are not ready yet: the READY column shows 0/1 for each of them.

Nonetheless you can try to connect to this app. Use the following command to discover the External IP, and the next command to discover the externally exposed Port:

$ kubectl get nodes -o wide

NAME STATUS AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION
10.87.83.208 Ready 4h v1.14.6+IKS 169.62.96.221 Ubuntu 18.04.3 LTS 4.15.0-58-generic
10.87.83.211 Ready 4h v1.14.6+IKS 169.62.96.211 Ubuntu 18.04.3 LTS 4.15.0-58-generic
10.87.83.213 Ready 4h v1.14.6+IKS 169.62.96.212 Ubuntu 18.04.3 LTS 4.15.0-58-generic

Now you can check the service to find the above-mentioned port (note the external port after 5000:; in the example below it is 30302):

$ kubectl get service -l app=status-web -o wide

NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
status-web 172.21.113.177 <nodes> 5000:30302/TCP 34s app=status-web

If you try to access the app through one of the worker nodes (let's pick the first external address, 169.62.96.221) on port 30302, by putting it in the browser as follows:

http://169.62.96.221:30302
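
You can also probe the same endpoint from the command line instead of the browser (substitute your own node IP and NodePort):

curl -v http://169.62.96.221:30302/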

Yes, it doesn’t work. Why?

Checking the pod events and logs will help you discover why the status-page app doesn't work. Let's describe the pod we were looking at with kubectl:

$ kubectl describe pod/status-web-5c576d4d57-2mm6t

Name: status-web-5c576d4d57-2mm6t
Namespace: default
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
Created Created container status-web
27m 27m 1 kubelet, 10.87.83.211 spec.containers{status-web} Normal Started Started container status-web
27m 3m 149 kubelet, 10.87.83.211 spec.containers{status-web} Warning Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500

It says the HTTP readiness probe is failing (status code 500): the app cannot reach Redis.

So it turns out we need to satisfy this dependency and also launch Redis. Use the following command:

$ kubectl apply -f deploy/redis-deployment.yaml

The command produces the following output:

deployment "redis-leader" configured
service "redis-leader" created
deployment "redis-follower" created
service "redis-follower" created

And the pod status gives you the following output:

$ kubectl get all -o wide

NAME READY STATUS RESTARTS AGE IP NODE
po/redis-follower-8d7b95759-d6r4z 1/1 Running 0 4m 172.30.174.133 10.87.83.213
po/redis-follower-8d7b95759-h97nz 1/1 Running 0 4m 172.30.248.3 10.87.83.211
po/redis-leader-7b47fcbd8-5f9r9 1/1 Running 0 5m 172.30.174.132 10.87.83.213
po/status-web-5c576d4d57-2mm6t 1/1 Running 0 36m 172.30.248.2 10.87.83.211
po/status-web-5c576d4d57-49gts 1/1 Running 0 36m 172.30.174.130 10.87.83.213
po/status-web-5c576d4d57-7sk58 1/1 Running 0 36m 172.30.174.129 10.87.83.213

Now you can again try to reach the app over the NodePort (remember to use your own values for the IP and the port; in my case the URL is the following):

http://169.62.96.221:30302

You should be able to see the System Status app.

Chart 6. System Status App works

Step 4: experiencing Kubernetes automation in keeping the application pods active, as well as rolling out a failing application that forces Kubernetes to roll back

If one of your pods dies in production (or becomes unhealthy), Kubernetes will redeploy it. The easiest way to see this is to kill one of the pods manually and observe what happens.

You are going to kill the pod we were accessing previously, using the following command:

$ kubectl delete pod/status-web-5c576d4d57-2mm6t

pod "status-web-5c576d4d57-2mm6t" deleted

As soon as the pod is deleted, the K8s orchestration services will try to restore the requested level of service (recall the replicas line in the deployment YAML file):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: status-web
  labels:
    app: status-web
spec:
  replicas: 3

$ kubectl get all -o wide

NAME READY STATUS RESTARTS AGE IP NODE
po/redis-follower-8d7b95759-d6r4z 1/1 Running 0 6m 172.30.174.133 10.87.83.213
po/redis-follower-8d7b95759-h97nz 1/1 Running 0 6m 172.30.248.3 10.87.83.211
po/redis-leader-7b47fcbd8-5f9r9 1/1 Running 0 7m 172.30.174.132 10.87.83.213
po/status-web-5c576d4d57-2mm6t 1/1 Terminating 0 37m 172.30.248.2 10.87.83.211
po/status-web-5c576d4d57-49gts 1/1 Running 0 37m 172.30.174.130 10.87.83.213
po/status-web-5c576d4d57-7sk58 1/1 Running 0 37m 172.30.174.129 10.87.83.213
po/status-web-5c576d4d57-qbd6b 1/1 Running 0 8s 172.30.248.4 10.87.83.211

Notice that a replacement pod (status-web-5c576d4d57-qbd6b) has already been started. Now you can also see what happens when a bad image is handed to the K8s orchestrator. Kubernetes checks whether the new pods become healthy; if they don't, it keeps the previous pods running, effectively rolling the deployment back.

To achieve this you need to create a new version of the image. Edit the status_page/status_page/views.py file and add a failing line at line 4, for example: import noimport

Since there is no noimport library, it will break the application at startup.

Now you are ready to create a new version:

$ ibmcloud cr build --tag $MYREGISTRY/$MYREGISTRYNAMESPACE/$MYAPPNAME-status-page:2 status_page

Now also update the deployment YAML deploy/status-deployment.yaml to reference :2 in the line where it previously said :1.
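
Apply the updated deployment to start the rollout of the broken image (the same apply command as in Step 3):

$ kubectl apply -f deploy/status-deployment.yaml

Then watch the pods: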

$ kubectl get pods -l app=status-web

NAME READY STATUS RESTARTS AGE
status-web-54cbfdbfc8-w866s 0/1 Error 1 6s
status-web-5c576d4d57-49gts 1/1 Running 0 6h
status-web-5c576d4d57-7sk58 1/1 Running 0 6h
status-web-5c576d4d57-qbd6b 1/1 Running 0 5h

$ kubectl logs status-web-54cbfdbfc8-w866s

 * Serving Flask app "status_page"
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
Usage: flask run [OPTIONS]
Error: While importing "status_page", an ImportError was raised:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/flask/cli.py", line 240, in locate_app
    __import__(module_name)
  File "/var/www/status_page/status_page/__init__.py", line 19, in <module>
    import status_page.views # noqa
  File "/var/www/status_page/status_page/views.py", line 4, in <module>
    import noimport
ModuleNotFoundError: No module named 'noimport'
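
The new pod never becomes ready, so Kubernetes keeps the pods from the previous ReplicaSet serving traffic. You can confirm that the rollout is stuck with the standard rollout commands (the deployment is named status-web, as defined in the YAML); rollout status will keep reporting that it is waiting for the rollout to finish (press Ctrl-C to stop watching):

$ kubectl rollout status deployment/status-web

$ kubectl rollout history deployment/status-web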

Step 5: updating an application to a new release and replacing the old app with the new release

After removing line 4 from views.py, you can create a new release with the tag 3. Update the deployment YAML file deploy/status-deployment.yaml with the tag 3 as well. After applying the new deployment, the rollout of the new version starts, and if the first new pods come up healthy, the new version replaces the previous one.

Recapping the actions:

i. edit the status_page/status_page/views.py file and remove the failing line at line 4 (import noimport)
ii. build a new image tagged 3: ibmcloud cr build --tag $MYREGISTRY/$MYREGISTRYNAMESPACE/$MYAPPNAME-status-page:3 status_page
iii. edit the deployment YAML deploy/status-deployment.yaml to reference :3 in the line that previously said :2 (the failing image)
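iv. apply the updated deployment to trigger the rollout, using the same command as before: kubectl apply -f deploy/status-deployment.yaml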

Once the deployment is applied, check how the rollout of the new version progresses: pods with the old version are terminated and pods with the new version are started.

$ kubectl get pods -l app=status-web

NAME READY STATUS RESTARTS AGE
status-web-5c576d4d57-49gts 1/1 Running 0 6h
status-web-5c576d4d57-7sk58 1/1 Running 0 6h
status-web-5c576d4d57-qbd6b 1/1 Terminating 0 6h
status-web-67957458cf-g7lcx 0/1 ContainerCreating 0 1s
status-web-67957458cf-zs97v 1/1 Running 0 12s

If you ever need to peek inside a container to see what is happening in production (but remember, all changes will be lost, since pods are ephemeral and containers are immutable), it is possible. In such a case, use the following command, specifying the desired pod:

$ kubectl exec -it status-web-67957458cf-zs97v -- bash

root@status-web-67957458cf-zs97v:/var/www/status_page# ls

Dockerfile MANIFEST.in Makefile README.md settings.cfg setup.py status_page tests tox.ini

(when you are done, just type exit to leave the container).

Step 6: discussing smart load balancing and routing with Istio.

The next step would be to investigate the use of Istio-based smart routing and load balancing.

Summary

This blog post covered very basic Kubernetes functionality, based on Docker containers, IBM Cloud clusters, and the IBM Cloud Container Registry. You explored building and deploying a containerized application to an IBM Cloud based Kubernetes cluster, self-healing of a terminated pod, and rollback of a rollout with a failing version of the application. Finally, you checked the logs, corrected the application leading to a healthy rollout, and opened a shell in a running container for further diagnostics.

References:

This blog post follows a lab created by JJ Asghar at this location: https://ibm.gitlab.io/workshop/
