Deploying Kubernetes in single-node mode on a RONIN machine

Do you have a Kubernetes application that you would like to test or develop in the cloud, but don't want to get into the nitty gritty of having to set up and configure a whole cluster? Well, thankfully the Kubernetes team at Canonical have developed MicroK8s for Ubuntu, a lightweight, single-package Kubernetes distribution designed for simplicity and efficiency. It’s perfect for developers and researchers who need a fast, hassle-free way to test Kubernetes workloads without managing a full-scale cluster. Since it easily runs on a single node and requires minimal resources, it’s ideal for local development, CI/CD pipelines, or even edge computing. MicroK8s also makes it easy to prototype, debug, and refine Kubernetes workflows in the cloud before deploying them to larger production clusters—saving time, cost, and complexity while ensuring everything works as expected!

This blog post will show you how to set up Kubernetes in single-node mode on a RONIN Ubuntu machine in just minutes with MicroK8s.

Installing Kubernetes with MicroK8s

First, make sure that you have created a fresh Ubuntu 22.04 machine in RONIN and connected to the terminal.

Note: Ensure you select a machine that is big enough to run the container and workflow you would like to deploy (we recommend a t3.large as the smallest machine to use) and also ensure the root drive size is big enough to deploy your container and your required input and output files!
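
If you are unsure whether an existing machine has enough headroom, a quick sanity check from the terminal (standard Linux commands, nothing RONIN-specific) is:

df -h /    # free space on the root drive
free -h    # available memory
nproc      # number of CPU cores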

To install MicroK8s, simply run:

sudo snap install microk8s --classic
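
Optional: every MicroK8s command in this post is prefixed with sudo. If you would rather drop sudo, MicroK8s supports adding your user to the microk8s group (run newgrp microk8s, or log out and back in, for the change to take effect) - we will keep sudo below for clarity:

sudo usermod -a -G microk8s $USER
newgrp microk8s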

Wait a few minutes and then make sure Kubernetes is running:

sudo microk8s status --wait-ready

Check that the default Kubernetes commands are working:

sudo microk8s kubectl get all -n kube-system

sudo microk8s kubectl get nodes

sudo microk8s kubectl get pods -A

You now have Kubernetes running on your RONIN machine - easy as pie, right?
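
Tip: typing microk8s kubectl gets repetitive. If you would prefer to use plain kubectl, you can optionally create a snap alias:

sudo snap alias microk8s.kubectl kubectl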

Enabling Kubernetes Add-Ons

MicroK8s offers add-ons, which are pre-packaged, optional features that extend the functionality of your Kubernetes cluster with a single command (microk8s enable). These add-ons make it easy to set up common Kubernetes services without having to configure them manually.

To get a list of the available add-ons and their status:

sudo microk8s status --wait-ready

To install a specific add-on, e.g. 'dns':

sudo microk8s enable dns
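
Add-ons can be removed just as easily when you no longer need them, e.g.:

sudo microk8s disable dns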

Kubernetes Example: Jupyter Notebook

Here we are going to show an example of setting up a Kubernetes workflow with MicroK8s. In this example, we will focus on deploying Jupyter on Kubernetes.

First, make sure the required add-ons are enabled:

sudo microk8s enable dns
sudo microk8s enable hostpath-storage 
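
To confirm the storage add-on is ready, you can check that a default storage class exists (with hostpath-storage it is typically named microk8s-hostpath):

sudo microk8s kubectl get storageclass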

Now we need to create a persistent volume claim (PVC) for storing notebooks:

sudo nano jupyter-pv.yaml

Paste the following contents into the persistent volume claim file and save it:

# jupyter-pv.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jupyter-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Then, apply the persistent volume claim:

sudo microk8s kubectl apply -f jupyter-pv.yaml
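
You can check that the claim was created with the command below. Depending on the storage class, its STATUS may be Bound immediately or remain Pending until a pod first uses it - either is fine at this stage:

sudo microk8s kubectl get pvc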

Next, deploy a Jupyter notebook inside Kubernetes, mounting the persistent volume claim so that your research work is persisted:

sudo nano jupyter-deployment.yaml

Paste the following contents into the deployment file and save it:

# jupyter-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter
  template:
    metadata:
      labels:
        app: jupyter
    spec:
      containers:
      - name: jupyter
        image: jupyter/scipy-notebook:latest
        ports:
        - containerPort: 8888
        volumeMounts:
        - name: jupyter-storage
          mountPath: /home/jovyan/work
      volumes:
      - name: jupyter-storage
        persistentVolumeClaim:
          claimName: jupyter-pvc

Now, apply the deployment:

sudo microk8s kubectl apply -f jupyter-deployment.yaml
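
If you like, you can also wait for the deployment to finish rolling out - this command blocks until the pod is ready:

sudo microk8s kubectl rollout status deployment/jupyter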

Check the pod is running and note down the NAME for later:

sudo microk8s kubectl get pods

If the pod does not yet have a status of Running (it may say Pending or ContainerCreating), wait a few minutes, then rerun the command to check progress. Do not move on to the next step until the status says Running.
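
If the pod appears stuck, the Events section at the bottom of the describe output usually explains why (replace jupyter-abcdef1234 with your actual pod name):

sudo microk8s kubectl describe pod jupyter-abcdef1234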

Finally, we need to expose Jupyter as a Kubernetes Service:

sudo nano jupyter-service.yaml

Paste the following contents into the service file and save it:

# jupyter-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jupyter-service
spec:
  selector:
    app: jupyter
  ports:
  - protocol: TCP
    port: 8888
    targetPort: 8888
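
Now, apply the service:

sudo microk8s kubectl apply -f jupyter-service.yaml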

To connect to Jupyter, we first need to get the Jupyter token - copy this for later:

sudo microk8s kubectl logs -l app=jupyter | grep "token=" | cut -d '=' -f 2 | uniq
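
If this returns nothing, the notebook server is probably still starting up - you can watch the full logs instead and look for the line containing token=:

sudo microk8s kubectl logs -l app=jupyter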

Then, we need to forward Jupyter’s port to localhost so that we can connect to it (replace jupyter-abcdef1234 with the actual pod name from the get pods command we ran above):

sudo microk8s kubectl port-forward jupyter-abcdef1234 8888:8888
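
Note that port-forward keeps running in the foreground, so leave this terminal open while you work. Alternatively, since we applied the service above, you can port-forward via the service instead and avoid looking up the pod name:

sudo microk8s kubectl port-forward service/jupyter-service 8888:8888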

Now all we need to do is connect to port 8888 via an SSH port tunnel. RONIN LINK makes this really easy: click "CONNECT TO MACHINE", scroll to the bottom of the modal, and in the "LINK TO A CUSTOM APPLICATION" section enter "8888" (the default port that Jupyter runs on) for both the local port and the remote port.

💡
This same process can be used for other Kubernetes applications that are configured to run on different ports, just exchange 8888 with the port your application is running on!

Jupyter should then open automatically in your browser. Enter the token we copied earlier at the top and you can get started running Jupyter workflows on your single-node Kubernetes cluster!

Note: If you reboot your machine, you will need to re-run these last three steps: getting the token, forwarding the port to localhost, and then connecting to Jupyter with RONIN LINK.
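
If you would like the port-forward to survive closing your terminal window (though not a reboot), one simple option is to background it with nohup - this is plain shell usage, nothing MicroK8s-specific, and assumes sudo does not prompt for a password (the default on RONIN machines):

nohup sudo microk8s kubectl port-forward service/jupyter-service 8888:8888 > port-forward.log 2>&1 &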
