
Stand-up Vanilla

Now, finally, Kubernetes.

K3s

Let's inspect K3s/standup-vanilla.yml:

  1. The first thing we will do is get a script from https://get.k3s.io/ that installs K3s, straight from the official K3s distribution source. If you would like to view that script, and you should, open the link in a web browser, or run:
    cat ~/.HAB/remote_content/k3s_install.sh
  2. We are also going to get some role-based permissions and a cloud controller for kube-vip, and define some resources for MetalLB. Kube-vip allows us to use a virtual IP address (vip) on our network. This means kubectl will still work even if one of the master nodes is destroyed, in a process called load balancing the control plane. We can also use kube-vip to load balance other services, but it is not yet a mature enough project to rely on that heavily. Load balancing is essentially an HA process that says, "Hey, you are making a request, and we see that it can be serviced here, as opposed to here, so we will forward you there."
  3. Then we are going to create and save a server token, which our hosts will use to identify themselves and join the cluster.
  4. Install K3s using the token on our Leader (along with kube-vip), then on the Lieutenants, and then on the Workers (see the sketch just after this list).
    1. Of note, the Pull kube-vip image command creates a manifest definition for kube-vip (which can be inspected at /tmp/kube-vip.yaml on the leader). This is an atypical way to generate and install manifests, but it works well in the kube-vip ecosystem for producing a dynamic manifest. Ansible could do the same thing, and may in a future version of this guide.
    2. We will also be copying the vault-node-token generated by the leader so that we can get the other hosts to join the cluster.
  5. Once the worker hosts are ready, we will add the "worker" role to the worker nodes and transfer the cluster credentials to the control computer, giving us access to the cluster.
  6. Lastly, we will install MetalLB as our load balancer. Kube-vip can act as a load balancer, but it lacks the features we need, like easy integration with NGINX to deploy websites.
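
For orientation, here is a rough sketch, not the playbook's literal tasks, of what the K3s install script ends up being asked to do for each role. The token comes from step 3, and 10.1.0.50 stands in for the vip we will choose shortly:

# Leader: first server, embedded etcd, and the vip added to the API certificate
curl -sfL https://get.k3s.io | K3S_TOKEN="$TOKEN" sh -s - server \
  --cluster-init --tls-san 10.1.0.50

# Lieutenants: join as additional servers through the leader/vip
curl -sfL https://get.k3s.io | K3S_TOKEN="$TOKEN" sh -s - server \
  --server https://10.1.0.50:6443 --tls-san 10.1.0.50

# Workers: join as agents
curl -sfL https://get.k3s.io | K3S_TOKEN="$TOKEN" K3S_URL=https://10.1.0.50:6443 sh -s - agent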

Before we can run the command, we must also, finally, hearken back to the set of IPs we assigned (be it on paper or actually in our router) to the Kubernetes cluster, which have thus far been unused.

Choose one of those IPs. It will be assigned to the control plane as the vip, the IP that kubectl connects to. The rest will be the block of IPs that MetalLB will be able to assign to various other things: the serviceRange. Make sure that the vip address is not inside the serviceRange, and make sure the serviceRange is codified as a range (for example 10.1.0.51-10.1.0.99).
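
For reference, the serviceRange ultimately lands in a MetalLB address pool. The playbook takes care of this for us, but a minimal sketch of the kind of resources involved (the metadata names here are illustrative, not necessarily what standup-vanilla.yml creates) looks like:

kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool          # illustrative name
  namespace: metallb-system
spec:
  addresses:
    - 10.1.0.51-10.1.0.99     # the serviceRange; note the vip is excluded
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2            # illustrative name
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
EOF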

The final command, for us, looks like this:

apb K3s/standup-vanilla.yml -e 'vip=10.1.0.50 serviceRange=10.1.0.51-10.1.0.99'

If you are comfortable with what it's expected to do, go ahead and run it now.

It may take 5 to 20 minutes, so stand up and walk around.

Confirm Everything is in Running Order

To confirm that everything is up and running, on your control machine run:

kubectl get nodes -o wide

And you should see something like this:

% kubectl get nodes -o wide
NAME   STATUS   ROLES                       AGE     VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
nuc1   Ready    control-plane,etcd,master   6h12m   v1.26.0+k3s2   10.1.0.30     <none>        Ubuntu 22.04.1 LTS               5.15.0-58-generic   containerd://1.6.14-k3s1
nuc2   Ready    worker                      6h9m    v1.26.0+k3s2   10.1.0.31     <none>        Ubuntu 22.04.1 LTS               5.15.0-58-generic   containerd://1.6.14-k3s1
pi1    Ready    control-plane,etcd,master   6h9m    v1.26.0+k3s2   10.1.0.20     <none>        Debian GNU/Linux 11 (bullseye)   5.15.84-v8+         containerd://1.6.14-k3s1
pi2    Ready    control-plane,etcd,master   6h10m   v1.26.0+k3s2   10.1.0.21     <none>        Debian GNU/Linux 11 (bullseye)   5.15.84-v8+         containerd://1.6.14-k3s1
pi3    Ready    worker                      6h8m    v1.26.0+k3s2   10.1.0.22     <none>        Debian GNU/Linux 11 (bullseye)   5.15.84-v8+         containerd://1.6.14-k3s1
pi4    Ready    worker                      6h5m    v1.26.0+k3s2   10.1.0.23     <none>        Debian GNU/Linux 11 (bullseye)   5.15.84-v8+         containerd://1.6.14-k3s1
pi5    Ready    worker                      6h7m    v1.26.0+k3s2   10.1.0.24     <none>        Debian GNU/Linux 11 (bullseye)   5.15.84-v8+         containerd://1.6.14-k3s1

All the nodes there? We hope so! But reach out if not.
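
If a node is missing, SSH to that host and check the K3s service directly (assuming the default systemd-based install from the script): servers run a k3s unit and workers run k3s-agent.

# On a Leader/Lieutenant host:
sudo systemctl status k3s
sudo journalctl -u k3s --since "30 min ago"

# On a Worker host:
sudo systemctl status k3s-agent
sudo journalctl -u k3s-agent --since "30 min ago"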

Now, let's make sure that kube-vip was set up correctly as well.

Change the namespace to the low-level kube-system namespace:

% kubectl ns kube-system
Context "default" modified.
Active namespace is "kube-system".
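
The kubectl ns command is not built into kubectl; it usually comes from a plugin such as kubens (often installed via krew). If you don't have it, the built-in equivalent is:

# Switch the active namespace for the current context
kubectl config set-context --current --namespace=kube-system

# Confirm which namespace is active
kubectl config view --minify --output 'jsonpath={..namespace}'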

And view all the pods running in there (You can think of these pods as system processes on your computer):

% kubectl get pods -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP          NODE   NOMINATED NODE   READINESS GATES
coredns-5c6b6c5476-ww5lk                  1/1     Running   0          4h18m   10.42.0.3   nuc1   <none>           <none>
kube-vip-cloud-provider-5459795b8-9q55f   1/1     Running   0          4h18m   10.42.0.5   nuc1   <none>           <none>
kube-vip-ds-9fc7g                         1/1     Running   0          4h13m   10.1.0.21   pi2    <none>           <none>
kube-vip-ds-csl6d                         1/1     Running   0          4h16m   10.1.0.20   pi1    <none>           <none>
kube-vip-ds-tchnq                         1/1     Running   0          4h18m   10.1.0.30   nuc1   <none>           <none>
local-path-provisioner-5d56847996-jrdbc   1/1     Running   0          4h18m   10.42.0.4   nuc1   <none>           <none>
metrics-server-7b67f64457-6l5tj           1/1     Running   0          4h18m   10.42.0.2   nuc1   <none>           <none>

In particular, pay attention to the kube-vip-ds pods: they should be Running, READY 1/1, with one on each control-plane host. These Pods run containers built from the kube-vip image, and together they keep the virtual IP pointed at a healthy control-plane node. To test it, you can kill one of the control-plane nodes and still run the kubectl get pods command. This works because kubectl is not pointing at a single host, but at a virtual IP in the cluster that resolves to a host only if that host is available, as determined by kube-vip.
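
If you want to see this for yourself, here is a concrete (and disruptive, so only do it while nothing important is running) test, using the hosts from this example:

# Power off one control-plane host (pi1 here), however you prefer, e.g.:
ssh pi1 sudo poweroff

# kubectl still answers, because it talks to the vip rather than to pi1;
# kube-vip moves the vip to a surviving control-plane host:
kubectl get nodes -o wide    # pi1 will eventually show NotReady

# Power the host back on when you are done and it will rejoin on its own.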

You can also inspect the MetalLB namespace, and confirm that it is healthy:

% kubectl ns metallb-system
Context "default" modified.
Active namespace is "metallb-system".
% kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP          NODE   NOMINATED NODE   READINESS GATES
controller-577b5bdfcc-2fbwr   1/1     Running   0          4h29m   10.42.5.2   nuc2   <none>           <none>
speaker-2tnp7                 1/1     Running   0          4h29m   10.1.0.30   nuc1   <none>           <none>
speaker-4xlrf                 1/1     Running   0          4h29m   10.1.0.23   pi4    <none>           <none>
speaker-bqdkf                 1/1     Running   0          4h29m   10.1.0.21   pi2    <none>           <none>
speaker-cgfn5                 1/1     Running   0          4h29m   10.1.0.22   pi3    <none>           <none>
speaker-d2q86                 1/1     Running   0          4h29m   10.1.0.24   pi5    <none>           <none>
speaker-lfmc8                 1/1     Running   0          4h29m   10.1.0.20   pi1    <none>           <none>
speaker-wzbwl                 1/1     Running   0          4h29m   10.1.0.31   nuc2   <none>           <none>

You will notice that there is a speaker pod on each host. These pods announce which host should receive traffic for the service IPs that MetalLB hands out; they do a similar availability-reporting job, but for routing app traffic rather than kubectl traffic.
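
If you want to watch MetalLB hand out an address from the serviceRange, here is a quick throwaway test (the lb-test name is arbitrary):

# Create a disposable deployment and expose it as a LoadBalancer service
kubectl create deployment lb-test --image=nginx -n default
kubectl expose deployment lb-test --port=80 --type=LoadBalancer -n default

# EXTERNAL-IP should be filled in from the serviceRange (e.g. 10.1.0.51)
kubectl get svc lb-test -n default

# Clean up
kubectl delete svc,deployment lb-test -n default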

tip

The -o flag sets the output format, and here it is being passed the wide argument. Without this flag, we don't get as much data as is available to us.
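
The same flag accepts other formats that are worth knowing about:

kubectl get pods -o yaml    # the full manifests of the objects
kubectl get pods -o json    # the same, as JSON
kubectl get pods -o name    # just the names, handy for scripting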

There are two other useful commands to be aware of at this stage:

kubectl describe pod kube-vip-ds-9fc7g
kubectl logs kube-vip-ds-9fc7g

The describe command will print all the information about the pod. The logs command will print all the logs produced by the pod.

These have proven to be immensely helpful when troubleshooting a problem.
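
A few variations of those commands come up constantly:

kubectl logs kube-vip-ds-9fc7g -f            # stream the logs as they arrive
kubectl logs kube-vip-ds-9fc7g --previous    # logs from the previous (crashed) container
kubectl describe node pi1                    # describe works on nodes (and most resources) too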

If everything is looking good, you are now the proud owner of a Kubernetes cluster. Yet another Pleb is running a server farm now.

Add New Vanilla Hosts

If needed, we now have the ability to bring a new vanilla host into the cluster, once it has been added to the host plan, with a one-liner command, without tearing down or standing up anything but that host:

apb Hosts/standup-live.yml -e "plan=host-plan-2022-03-17" && apb K3s/standup-vanilla.yml -e 'vip=10.1.0.50 serviceRange=10.1.0.51-10.1.0.99'

Now that we are up and running as a vanilla Kubernetes provider, let's investigate how to tear it down.