
Get started using Kubernetes in your homelab

 ·  ☕ 6 min read  ·  🤖 Javy de Koning

Kubernetes: it’s hip, it’s here to stay, and it comes up in just about any conversation. Today I spent some time converting my single-node homelab from docker-compose to a single-node Kubernetes stack. Because I only have a single server in my lab, I will be using minikube, a tool that lets you run Kubernetes locally as a single-node cluster.

In this post, I’ll take you over the basics to get your network sorted and your first deployment running. I will assume you already have a Linux machine with Docker pre-installed.

The main idea is to get some hands-on experience running Kubernetes with a use-case that most of us will have.

Preparing the container host

Before installing minikube, make sure that Docker is installed correctly and the daemon is running. You can verify this by invoking docker ps.
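As a quick pre-flight sketch, you can wrap that check in a small script that fails early with a readable message (the messages are my own, not minikube’s):

```shell
# Pre-flight check: `docker ps` exits non-zero when the daemon is unreachable,
# so we can branch on its exit status before attempting the minikube install.
if docker ps >/dev/null 2>&1; then
  echo "Docker daemon is running"
else
  echo "Docker daemon not reachable -- start it first"
fi
```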

Next, let’s install the dependencies. My host runs Alpine Linux; in this post we will use the latest binaries from Google.

If you’d rather use your standard repo (e.g. apk add kubectl), that could work too. Substitute yum, apt-get, or snap depending on the distribution you are running.


kubectl is a command line interface for running commands against Kubernetes clusters. The official documentation covers installation in detail, but it boils down to running the commands below:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
kubectl version --client -o yaml

If all went well, you should see output like:

clientVersion:
  buildDate: "2020-12-18T12:09:25Z"
  compiler: gc
  gitCommit: c4d752765b3bbac2237bf87cf0b1c2e307844666
  gitTreeState: clean
  gitVersion: v1.20.1
If attempting to execute the kubectl binary results in an error like not found, you need to install libc6-compat by running apk add libc6-compat.


To install minikube, we will download the binary and move it to our /usr/local/bin/ directory as follows:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube version

If minikube was installed correctly, you should now see the version output in your terminal. Next, we will start minikube using the docker driver, which installs Kubernetes into an existing Docker installation. On Linux this does not require virtualization to be enabled. In short, it runs Docker-in-Docker.

minikube config set driver docker
minikube start

If you are running as root, you need to add the --force flag. This will force minikube to perform possibly dangerous operations. For more info, read this issue on GitHub. Rootless mode is currently not supported by the Docker driver. If you don’t use --force, you will be faced with the following error:

❌ Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.

So how does this work?

Let’s run a few commands to better understand what is happening.

  • docker ps will return all the containers running on the Docker host. Notice how there is only a single Kubernetes-related Docker container running.
CONTAINER ID  IMAGE                                          COMMAND
5209ac2dd2d9  "/usr/local/bin/entr…"
  • kubectl get pods -A should return the pods running on our newly created Kubernetes “cluster”.
NAMESPACE   NAME                             READY
kube-system coredns-74ff55c5b-46pcr          1/1
kube-system etcd-minikube                    1/1
kube-system kube-apiserver-minikube          1/1
kube-system kube-controller-manager-minikube 1/1
kube-system kube-proxy-4x759                 1/1
kube-system kube-scheduler-minikube          1/1
kube-system storage-provisioner              1/1

So why are there multiple pods running, but we only have a single Docker container on our host? That’s because they are nested, minikube runs Docker-in-Docker. We can confirm this by running docker exec minikube docker ps:

IMAGE         COMMAND                STATUS        NAMES
85069258b98a  "/storage-provisioner" Up 10 minutes k8s_storage-...
bfe3a36ebd25  "/coredns -conf /etc…" Up 11 minutes k8s_coredns_...
10cc881966cf  "/usr/local/bin/kube…" Up 11 minutes k8s_kube-pro...
b9fa1895dcaa  "kube-controller-man…" Up 12 minutes k8s_kube-con...
ca9843d3b545  "kube-apiserver --ad…" Up 12 minutes k8s_kube-api...
0369cf4303ff  "etcd --advertise-cl…" Up 12 minutes k8s_etcd_etc...
3138b6e3d471  "kube-scheduler --au…" Up 12 minutes k8s_kube-sch...


Now that we have Docker-in-Docker with multiple bridge networks, the setup becomes slightly harder to understand from a networking perspective.

Let’s go over our network briefly so it’s easier to understand how everything is connected. The network now looks as below (your CIDR ranges might be different):

  - Name: Lan
      - Host: MacBook
      - Host: Container host
          - Name: bridge
              - Name: minikube
                  - Name: k8s services

Discover your own

To find out how yours looks, try running the following commands on your container host:

# Get minikube container IP (All should result in the same IP)
minikube ip
docker inspect minikube | jq -r ".[].NetworkSettings.Networks[].IPAddress"
cat ~/.minikube/profiles/minikube/config.json | jq -r ".Nodes[].IP"

# Get Docker bridge network in use by minikube
docker network ls # Should return a minikube bridge network
docker network inspect minikube | jq -r ".[].IPAM.Config[].Subnet"

# Kubernetes Config (Service network)
cat ~/.minikube/profiles/minikube/config.json | jq -r ".KubernetesConfig"
This assumes you have jq installed. Alternatively, you could just use grep, e.g. docker network inspect minikube | grep -i subnet.
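If you want the bare subnet value without jq, grep and sed together will do. A sketch against an abridged, hypothetical sample of the inspect output (your subnet will differ):

```shell
# Abridged, hypothetical `docker network inspect minikube` output
inspect='[{"IPAM":{"Config":[{"Subnet":"192.168.49.0/24","Gateway":"192.168.49.1"}]}}]'

# Isolate the "Subnet" key/value pair, then strip everything but the value
echo "$inspect" | grep -o '"Subnet":"[^"]*"' | sed 's/.*:"\(.*\)"/\1/'
# prints: 192.168.49.0/24
```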

Network graph

flowchart TB
    isp(("isp - internet")) <--"via wan0"--> router
    subgraph lan["lan"]
      router[" - router"]
      host[" - container host"]
      mbp[" - macbook"]

      mbp --"route via"--> router
      router --"route via"--> host
      host --"route via br0"--> br0
      host --"route via"--> minikube

      subgraph br0[" - docker br0"]
        minikube[" - minikube container"]
      end

      subgraph services[" - k8s-services"]
        none["None: we will deploy services in pt2."]
      end

      minikube --"k8s ServiceCIDR"--> services
    end

Enable routing

To get traffic from your lan routed to minikube and the Kubernetes services that will be running within, we need to enable IP forwarding on our container host, as well as make the next hop known to our router.

On our container host that means we need to set net.ipv4.ip_forward to 1. We also need to make sure our host firewall (iptables) allows forwarding.

Host config

On Alpine we need to update settings in /etc/sysctl.conf, then apply them by running: sysctl -p.

# Update sysctl.conf
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p

# Update route table with hop for Kubernetes Services network.
# 1. Get Kubernetes ServiceCIDR
cat ~/.minikube/profiles/minikube/config.json | jq -r ".KubernetesConfig.ServiceCIDR"
# 2. Get minikube IP
minikube ip
# 3. route add <ServiceCIDR> via <Minikube IP>
ip route add <ServiceCIDR> via <Minikube IP>

# Allow forwarding in iptables
iptables -A FORWARD -i eth0 -j ACCEPT
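Putting steps 1–3 together, you can compose the exact route command from the two discovered values. The addresses below are illustrative defaults, not necessarily yours; substitute the output of the commands above:

```shell
# Illustrative values -- replace with your own:
SERVICE_CIDR="10.96.0.0/12"   # from .KubernetesConfig.ServiceCIDR
MINIKUBE_IP="192.168.49.2"    # from `minikube ip`

# Print the route command to run (drop the echo to apply it for real):
echo "ip route add ${SERVICE_CIDR} via ${MINIKUBE_IP}"
# prints: ip route add 10.96.0.0/12 via 192.168.49.2
```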

Router configuration

Finally, we need to instruct our router to forward packets for the minikube bridge network, as well as the Kubernetes Services network to our container host. Remember, we can get those networks as follows:

# 1. Get Minikube bridge network
docker network inspect minikube | jq -r ".[].IPAM.Config[].Subnet"
# 2. Get Kubernetes ServiceCIDR
cat ~/.minikube/profiles/minikube/config.json | jq -r ".KubernetesConfig.ServiceCIDR" 

On my router (Tomato) those settings are under Advanced -> Routing -> Static Routing Table.

router configuration

You should now be able to get traffic to Minikube! Let’s test that from our laptop/desktop.

traceroute to <minikube ip>, 64 hops max, 52 byte packets
 1  <router>          2.217 ms  1.861 ms  1.021 ms
 2  <container host>  1.196 ms  2.003 ms  1.582 ms
 3  <minikube>        1.283 ms  2.201 ms  1.455 ms

Great, you’ve now completed your minikube setup and network configuration. In the next post, we’ll create and deploy our first Kubernetes Service!

