Vagrant configuration to orchestrate a Kubernetes cluster

This repository contains the configuration files necessary to orchestrate a Kubernetes cluster using Vagrant and provision it with Ansible. It works both with VirtualBox and Libvirt boxes.

Installation

Before anything else, Vagrant and either Libvirt or VirtualBox must be installed (a quick verification is sketched after this list):

  • For APT-based distributions:

    sudo apt install vagrant vagrant-libvirt
    
  • For RPM-based distributions:

    sudo dnf install vagrant vagrant-libvirt
    
  • For Windows and macOS, refer to Vagrant's and VirtualBox's download pages.
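
To confirm the tools are in place before provisioning, a quick check along these lines should work (the plugin listing only matters for the Libvirt provider, and a distribution-packaged plugin may not appear in it):

vagrant --version
vagrant plugin list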

Set-up

First of all, we have to clone the repository:

git clone https://github.com/aguslr/vagrant-k8s && cd vagrant-k8s

Afterwards, to orchestrate a cluster, we follow these steps:

  1. Set up the VMs with this command:

    vagrant up
    
  2. Once the VMs are up, connect to the control plane and check that the nodes have joined (a broader health check is sketched below):

    vagrant ssh master -- kubectl get nodes -o wide
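
All nodes should eventually report a Ready status. Any other kubectl command can be run the same way for a broader health check; for example, listing every pod in the cluster (a standard kubectl invocation, nothing specific to this repository):

vagrant ssh master -- kubectl get pods --all-namespaces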
    

Alternatively, we can attach the VMs to a physical interface so they are reachable from any machine in the network:

  1. Assign the interface to a variable and set up the VMs:

    BRIDGE_IFACE=br0 vagrant up
    
  2. Access the Dashboard UI by connecting to the URL that is displayed in a post-up message along with the token.
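
If the token scrolls out of view, a new one can usually be generated on the control plane with kubectl. Both the namespace and the service account name below are assumptions (kubernetes-dashboard is the upstream default namespace; the actual account is whatever the Ansible provisioning created):

# 'admin-user' is a hypothetical service account name; use the one created by the provisioning
vagrant ssh master -- kubectl -n kubernetes-dashboard create token admin-user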

Managing Kubernetes

To interact with the cluster from our local machine using the Kubernetes client (kubectl), we must prepare the environment:

  1. Copy the configuration locally:

    vagrant ssh master -- cat .kube/config > ${KUBECONFIG:-$HOME/.kube/config}
    
  2. Now we can run kubectl commands:

    kubectl get nodes -o wide
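
Note that the redirection above overwrites any existing local kubeconfig. If kubectl is already configured for other clusters, a safer variation is to keep this cluster's credentials in a separate file and select it with KUBECONFIG (the file name here is only an example):

vagrant ssh master -- cat .kube/config > ~/.kube/vagrant-k8s.config
KUBECONFIG=~/.kube/vagrant-k8s.config kubectl get nodes -o wide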
    

Configuration

Everything can be configured using a YAML file named settings.yml:

k8s:
  master:
    cpus:   4
    memory: 2048
  workers:
    count:  2
    cpus:   2
    memory: 2048

network:
  bridge:    ''
  mac:       '525400000a00'
  pods_cidr: '192.168.0.0/16'

versions:
  box:        'debian/bookworm64'
  calico:     'v3.28.2'
  dashboard:  'v2.7.0'
  kubernetes: 'v1.31'
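
For instance, to run three workers and bridge the VMs to eth0, the relevant values would change as in this excerpt; whether a partial settings.yml is merged with built-in defaults depends on the Vagrantfile, so editing the full file shown above is the safest route:

k8s:
  workers:
    count: 3

network:
  bridge: 'eth0'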

Alternatively, the same settings can be provided through environment variables:

Variable             Function                                      Default
BRIDGE_IFACE         Network interface to attach VMs to            empty
K8S_PODS_CIDR        Default network CIDR for pods                 192.168.0.0/16
K8S_MAC_ADDRESS      MAC address for master node                   525400000a00
K8S_MASTER_CPUS      Number of CPUs for master node                4
K8S_MASTER_MEMORY    Amount of memory (MB) for master node         2048
K8S_NODES_COUNT      Number of worker nodes                        2
K8S_NODE_CPUS        Number of CPUs for each worker node           2
K8S_NODE_MEMORY      Amount of memory (MB) for each worker node    2048
LIBVIRT_DEFAULT_URI  URI for libvirt daemon to connect to          qemu:///system
VAGRANT_BOX          Remote image to use as base for VMs           debian/bookworm64

For example, to orchestrate a cluster with nodes running Debian 12 attached to the network interface eth0, we do:

BRIDGE_IFACE=eth0 VAGRANT_BOX=generic/debian12 vagrant up
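
Whichever method is used, most settings only take effect when the VMs are created or reloaded. The commands below are standard Vagrant, nothing specific to this repository:

# apply updated CPU/memory settings to existing VMs
vagrant reload
# or rebuild the cluster from scratch
vagrant destroy -f && vagrant up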

Supported boxes

The following official boxes have been tested:

Alternatively, these non-official boxes have been tested:
