
Getting to Know KubeVirt

KubeVirt is a Kubernetes extension that allows you to run and manage Virtual Machines (VMs) alongside containers in a Kubernetes cluster. It aims to provide a unified platform for managing both container and VM workloads, integrating virtualisation and container capabilities.

KubeVirt therefore allows you to manage virtual machines in a similar way to Kubernetes pods, offering features such as scalability, orchestration and centralised administration.

Features

The main features of KubeVirt are:

  1. Orchestration of virtual machines in Kubernetes: Allows you to run and manage virtual machines (VMs) as if they were containers in a Kubernetes cluster, leveraging the Kubernetes orchestration infrastructure for VM management.
  2. Kernel-based Virtual Machine (KVM) support: KubeVirt uses KVM for virtualisation, allowing you to run VMs on nodes that have KVM support.
  3. Integration with the Kubernetes ecosystem: Being based on Kubernetes, KubeVirt integrates with other Kubernetes tools, such as Helm, Prometheus, and other network monitoring and management solutions.
  4. Scalability and high availability: KubeVirt allows virtual machines to scale automatically, as is done with containers, and offers options to ensure high availability of VMs.
  5. Unified infrastructure management: Provides a single control plane to manage both container-based applications and virtual machines, making it easy to handle both types of workloads from a single place.
  6. Hybrid workload support: Supports hybrid workloads that mix containers and virtual machines, which is useful in migration scenarios or when you have legacy applications that require VMs.
  7. Volume and storage persistence: KubeVirt enables the use of persistent volumes for virtual machines, ensuring that data remains available through VM restarts or migrations.
  8. User interface and APIs: Provides Kubernetes-based user interfaces (such as kubectl) and RESTful APIs that enable management of virtual machines programmatically and through graphical interfaces.
  9. Virtual Network Management: Allows you to configure virtual networks for virtual machines, with advanced connectivity options and support for Kubernetes virtual networks.
  10. Live migration of Virtual Machines: KubeVirt supports live migration of virtual machines, allowing them to be moved between nodes without interrupting their operation, similar to how Kubernetes reschedules containers between nodes.

These features allow you to combine the best of virtual machines and containers within a single environment.

Deploying KubeVirt in an RKE2 environment

Without going into too much detail, let's now see how to deploy KubeVirt. First of all, you will need to run the following commands:

# Set up the latest version of KubeVirt
export RELEASE=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)

# Deploy KubeVirt Operator in its latest stable version
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml

# Deploy KubeVirt CR in its latest stable version
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml
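
Before continuing, we can check that all KubeVirt components have been deployed correctly:

# Wait until KubeVirt reports that all its components are available
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s

# Review the pods deployed in the kubevirt namespace
kubectl get pods -n kubevirt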

To get everything up correctly, SELinux must be set to permissive mode so that it does not block any action; otherwise, virt-handler will not come up:

# Apply on each machine
sudo setenforce 0
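
Note that setenforce 0 only applies until the next reboot. To make permissive mode persistent, we can also update the SELinux configuration file (a minimal sketch, assuming the standard /etc/selinux/config layout):

# Make permissive mode survive reboots
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config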

KubeVirt deploys the following elements, which we can verify with the command shown after this list:

  • Deployments:
    • virt-api: Serves to expose the KubeVirt API server that handles requests related to virtual machines and associated resources, allowing to interact with the virtualisation infrastructure.
    • virt-controller: Used to manage the lifecycle of virtual machines in the cluster, such as creating, starting, stopping and managing VM instances.
    • virt-operator: Serves to manage the installation, configuration and upgrade of KubeVirt in the cluster, acting as the main operator that coordinates the deployment of KubeVirt resources.
  • DaemonSets:
    • virt-handler: runs on each node in the cluster and is responsible for interacting directly with hypervisors (such as KVM) to manage the virtual machines on that node.
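
We can list these components with:

# Deployments and DaemonSets created by KubeVirt
kubectl get deployments,daemonsets -n kubevirt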

Creating a virtual machine with KubeVirt

We can create a virtual machine with KubeVirt using the following YAML:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
  namespace: default
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/vm: fedora-vm
    spec:
      domain:
        devices:
          disks:
            - disk:
                bus: virtio
              name: mydisk
            - disk:
                bus: virtio
              name: cloudinit
        resources:
          requests:
            memory: 2Gi   # 2 GB RAM
            cpu: 1        # 1 CPU core
      volumes:
        - name: mydisk
          containerDisk:
            image: quay.io/kubevirt/fedora-cloud-registry-disk-demo:latest  # Fedora Cloud base image
        - name: cloudinit
          cloudInitNoCloud:
            userData: |
              #cloud-config
              users:
                - name: fedora # Username
                  sudo: ALL=(ALL) NOPASSWD:ALL
                  shell: /bin/bash
                  plain_text_passwd: fedora # User Password
                  lock_passwd: false
              chpasswd:
                expire: false

Once the YAML has been created, we apply it using the following command:

kubectl apply -f fedora-vm.yaml
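
We can then confirm that the VirtualMachine has been created and that its instance is running:

# Desired state of the VM
kubectl get vm fedora-vm

# Running instance (VMI), including its IP address
kubectl get vmi fedora-vm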

By default, the virtual machine created will have 2 GB of RAM and 1 CPU core. We will access it as the fedora user; first, we need to find the IP address assigned to its pod, using the following command:

kubectl get pod -l kubevirt.io/vm=fedora-vm -o wide

Access to the machine can be via SSH, or with the virtctl binary.

Creating a virtual machine with an additional disk

To create a machine with an additional 50 GB disk, it will be necessary to create the following YAML:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
  namespace: default
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/vm: fedora-vm
    spec:
      domain:
        devices:
          disks:
            - name: mydisk
              disk:
                bus: virtio
            - name: additional-disk
              disk:
                bus: virtio
            - name: cloudinit
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi   # 2 GB RAM
            cpu: 1        # 1 CPU core
      volumes:
        - name: mydisk
          containerDisk:
            image: quay.io/kubevirt/fedora-cloud-registry-disk-demo:latest  # Fedora Cloud base image
        - name: additional-disk
          persistentVolumeClaim:
            claimName: kubevirt-disk-pvc  # The PVC defined below
        - name: cloudinit
          cloudInitNoCloud:
            userData: |
              #cloud-config
              users:
                - name: fedora
                  sudo: ALL=(ALL) NOPASSWD:ALL
                  shell: /bin/bash
                  plain_text_passwd: fedora
                  lock_passwd: false
              chpasswd:
                expire: false

In addition, we will need to create a PersistentVolume (PV):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kubevirt-disk-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 50Gi
  hostPath:
    path: /datadrive/onesaitplatform/kubevirt/disk1
  storageClassName: 'local-storage'

And a PersistentVolumeClaim (PVC) bound to that PV:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kubevirt-disk-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: 'local-storage'
  volumeMode: Filesystem
  volumeName: kubevirt-disk-pv

To create everything, we apply the YAML files with the following commands:

kubectl apply -f disk-pv.yaml
kubectl apply -f disk-pvc.yaml
kubectl apply -f fedora-vm.yaml
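
Before accessing the machine, it is worth checking that the claim has been bound to the volume (both should show STATUS "Bound"):

kubectl get pv kubevirt-disk-pv
kubectl get pvc kubevirt-disk-pvc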

Once we access our newly created machine, we will see that the 50 GB disk we just created appears, but it is unformatted and unmounted.
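
As a minimal sketch, assuming the additional virtio disk appears inside the guest as /dev/vdb (the device name may differ, so check with lsblk first), it can be formatted and mounted like this:

# Identify the new, empty disk (no partitions, no mount point)
lsblk

# Create an ext4 filesystem on it, assuming it is /dev/vdb (this erases any existing data)
sudo mkfs.ext4 /dev/vdb

# Mount it on a directory
sudo mkdir -p /mnt/data
sudo mount /dev/vdb /mnt/data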

Access to the machine via SSH

The machine can be accessed from one of the nodes that make up the cluster. To do this, we need to know the IP address that has been assigned to the virtual machine:

kubectl get pod -l kubevirt.io/vm=fedora-vm -o wide

From the worker node, for example, we connect with:

ssh fedora@<ipMachine>

Once inside, we can check that the machine has 1 vCPU and 2 GB of RAM, as requested in the manifest.

Access to the machine using virtctl

To install virtctl, it will be necessary to launch the following commands:

# Download the virtctl binary (reusing the RELEASE variable set earlier):
curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/virtctl-${RELEASE}-linux-amd64

# Give execution permissions to the virtctl binary:
chmod +x virtctl

# Move the binary to the correct location:
sudo mv virtctl /usr/local/bin/virtctl
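
We can verify that the binary works with:

virtctl version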

In order to access the machine, the following command must be executed:

virtctl console fedora-vm
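
To leave the console, press Ctrl+]. Depending on the installed version, recent virtctl releases also include an ssh subcommand as an alternative (a sketch; check virtctl ssh --help for the exact syntax of your version):

virtctl ssh fedora@vm/fedora-vm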

How to access the machine from outside?

It will be necessary to create a NodePort-type Service or, in clouds such as Azure, AWS or Google Cloud, a LoadBalancer-type Service that redirects requests to that machine.

In the following example we will see how to access via a NodePort type service. To do this, we must apply the following YAML:

apiVersion: v1
kind: Service
metadata:
  name: fedora-vm-nodeport
  namespace: default
spec:
  selector:
    kubevirt.io/vm: fedora-vm  # Name of the virtual machine
  ports:
    - protocol: TCP
      port: 22        # Port you will access (in this case SSH)
      targetPort: 22   # Port within the VM
      nodePort: 30022  # Port to be exposed on the nodes
  type: NodePort

We apply using the following command:

kubectl apply -f service-nodeport.yaml
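
We can check the service and the ports it exposes with:

kubectl get svc fedora-vm-nodeport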

Once created, we can access the machine from anywhere that can reach the worker node, using the node's IP and the exposed NodePort:

ssh fedora@<ipWorker> -p 30022

Other interesting commands

To finish, here are some other commands that we consider interesting to know:

Pause the machine

virtctl pause vm fedora-vm

Resume the machine

virtctl unpause vm fedora-vm
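
Note that pause and unpause freeze and resume the guest without shutting it down. To actually shut down and boot the machine, virtctl also provides stop and start subcommands:

# Shut the machine down
virtctl stop fedora-vm

# Boot it again
virtctl start fedora-vm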

Delete the machine

kubectl delete vm fedora-vm

Header Image: KubeVirt + Onesait Platform
