Creating a Multi-Node Kubernetes Cluster with Multiple VPS from Different Providers or Local Machines

Raphael Mazzine, Ph.D.
Apr 3, 2021 · 16 min read

Check this story at my webpage: https://rmazzine.com/2021/04/03/creating-a-multi-node-kubernetes-cluster-with-different-multiple-vps-or-local-machines/

The problem is simple: you have several VPS from different providers, or local machines, that you want to integrate as nodes in a Kubernetes cluster.

All Kubernetes nodes must be able to communicate with each other; this work is done by kube-proxy. Additionally, Kubernetes needs a network plugin to handle pod networking and DNS for the services. All of these aspects must be taken into consideration when connecting nodes that live on different networks.

Therefore, to ensure full communication between the machines (nodes), creating a virtual network is a reasonable approach. In our case we will create a Virtual Private Network using SoftEther VPN.

Requirements:

Ubuntu Version: 18.04
SoftEther Version: 4.34
Docker Version: 1.5-1build1
Kubernetes Version: 1.20.5-00

Many thanks to Michika Iranga Perera, whose videos (https://www.youtube.com/watch?v=NUVXA4-i3Bg&list=PLqvezrIY9RJCDdCyFVAESGQHLtAlkgezc) helped me a lot while writing this tutorial. If this tutorial was useful to you, please like his videos!

For this tutorial, we will set up one Kubernetes master node, which is also the VPN server, and one Kubernetes worker node. However, you can modify this architecture to build a high-availability Kubernetes cluster with several master nodes, for example.

Tutorial:

In the Master Node machine:

Update and upgrade your packages:

sudo apt update
sudo apt upgrade

We will need to compile some packages, so install the build-essential package:

sudo apt install build-essential

Go to https://www.softether-download.com/en.aspx, select the link for SoftEther VPN Server and download it. Below we download version 4.34:

curl -O https://www.softether-download.com/files/softether/v4.34-9745-rtm-2020.04.05-tree/Linux/SoftEther_VPN_Server/64bit_-_Intel_x64_or_AMD64/softether-vpnserver-v4.34-9745-rtm-2020.04.05-linux-x64-64bit.tar.gz

Decompress SoftEther VPN server files:

tar zxf softether-vpnserver-v4.34-9745-rtm-2020.04.05-linux-x64-64bit.tar.gz

Go to the vpnserver folder and compile the files:

cd vpnserver/
make

Accept all licenses.

Then move the VPN server files to the /usr/local folder:

cd ..
mv ./vpnserver /usr/local/

Now go to the VPN server folder and change the file permissions:

cd /usr/local/vpnserver/
chmod 600 *
chmod 700 vpncmd
chmod 700 vpnserver

Create a file to initialize the VPN (using vi or your preferred editor):

vi start.sh

Paste the following configuration and save:

#!/bin/bash
TAP_ADDR=10.192.1.1
# Start the SoftEther VPN server and give it a moment to create the tap adapter
cd /usr/local/vpnserver
./vpnserver start
sleep 2
# Assign the VPN address to the tap adapter created by the local bridge
/sbin/ifconfig tap_soft $TAP_ADDR
# Restart dnsmasq (installed later in this tutorial) so it serves DHCP on the tap adapter
systemctl restart dnsmasq

Make start.sh executable:

chmod +x start.sh

Run vpncmd to verify that your system meets all the requirements to run SoftEther VPN.

./vpncmd
# select 3 (Use of VPN Tools (certificate creation and Network Traffic Speed Test Tool))
Select 1, 2 or 3: 3
# type check to run the verification:
VPN Tools>check

If all checks pass, you will receive the following message:

All checks passed. It is most likely that SoftEther VPN Server / Bridge can operate normally on this system.

If you did not receive this message, please solve the reported problems before continuing with this tutorial.

Now exit vpncmd :

VPN Tools>exit

Start the SoftEther VPN server:

./vpnserver start

You will receive a message like:

Let's get started by accessing to the following URL from your PC:

https://<YOUR-PUBLIC-IP>:5555/
or
https://<YOUR-PUBLIC-IP>/
Note: IP address may vary. Specify your server's IP address.
A TLS certificate warning will appear because the server uses self signed certificate by default. That is natural. Continue with ignoring the TLS warning.

Verify that the connection address shows your public IP <YOUR-PUBLIC-IP>.

Now you will need to install SoftEther VPN Server Manager (https://www.softether-download.com/en.aspx). It is available for Windows and macOS; on Ubuntu, however, you can use the Windows version through the Wine package.

After installing it, open SoftEther VPN Server Manager.

Then, click on New Setting and fill in the Setting Name (for example KubernetesVPNServer), the Host Name, which is your VPN server's public address, and write 5555 in Port Number. Then click OK.

Then select the server and click on Connect. A new window will open asking you to create a password for your VPN server; create it and save.

Then, select Remote Access VPN Server

Then, click on Next > and Yes. It will ask you to add a Virtual Hub Name; leave it as VPN and click on OK.

A Dynamic DNS Function window will open; do not modify anything and click on Exit.

A new window will open; do not modify anything and just click on OK.

Then another window will open, offering VPN Azure Cloud. Click on Disable VPN Azure and then OK.

Finally, a window with some setup options will open; do not modify anything and just click on Close.

Then the management window will appear. Click on Local Bridge Setting, then in Virtual Hub select VPN.

Then, select Bridge with New Tap Device

And in New Tap Device Name write soft.

Click on Create Local Bridge .

A window titled Using Local Bridge Function on VM will open; just click OK. Then a success message will appear; click OK again and click Exit in the Local Bridge Settings window. Then click Exit once more to close the manager screen.

Now, return to the terminal of the Master/VPN server machine and create the vpnserver.service file:

vi /etc/systemd/system/vpnserver.service

Paste the following settings:

[Unit]
Description=SoftEther VPN Service
[Service]
Restart=on-failure
RestartSec=5s
WorkingDirectory=/usr/local/vpnserver
Type=forking
ExecStart=/usr/local/vpnserver/start.sh
[Install]
WantedBy=multi-user.target

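Depending on your systemd version, it may be necessary to reload the daemon first so it picks up the new unit file (we do this explicitly for the worker later; it does not hurt to do it here as well):

systemctl daemon-reload
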
Then, enable it using systemctl :

systemctl enable vpnserver.service

Go to the VPN Server folder and stop it:

cd /usr/local/vpnserver
./vpnserver stop

Install net-tools :

sudo apt install net-tools

Edit /etc/systemd/resolved.conf:

vi /etc/systemd/resolved.conf

Delete all entries and paste:

#  This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See resolved.conf(5) for details
[Resolve]
DNS=8.8.8.8
#FallbackDNS=
#Domains=
#LLMNR=no
#MulticastDNS=no
#DNSSEC=no
#Cache=yes
DNSStubListener=no

Update /etc/resolv.conf :

ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

Install dnsmasq :

sudo apt install dnsmasq

Update /etc/dnsmasq.conf :

vi /etc/dnsmasq.conf

Append to the end and save:

interface=tap_soft
dhcp-range=tap_soft,10.192.1.2,10.192.1.254,12h
dhcp-option=tap_soft,3,10.192.1.1

Reboot:

sudo reboot

After reboot, check if vpnserver and dnsmasq services are running correctly:

service vpnserver status
service dnsmasq status

If the Active field shows active (running), continue with the tutorial; if not, investigate what is wrong.
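
As an extra sanity check (not part of the original steps), you can confirm that the tap adapter created by start.sh received the expected address:

# The tap_soft adapter should show inet 10.192.1.1
ip addr show tap_soft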

Now, on the worker machine:

Update and upgrade your packages:

sudo apt update
sudo apt upgrade

We will need to compile some packages, so install the build-essential package:

sudo apt install build-essential

Install net-tools :

sudo apt install net-tools

Go to https://www.softether-download.com/en.aspx, select the link for SoftEther VPN Client and download it. Below we download version 4.34:

curl -O https://www.softether-download.com/files/softether/v4.34-9745-rtm-2020.04.05-tree/Linux/SoftEther_VPN_Client/64bit_-_Intel_x64_or_AMD64/softether-vpnclient-v4.34-9745-rtm-2020.04.05-linux-x64-64bit.tar.gz

Decompress the SoftEther VPN client files:

tar zxf softether-vpnclient-v4.34-9745-rtm-2020.04.05-linux-x64-64bit.tar.gz

Go to the vpnclient folder and compile the files:

cd vpnclient/
make

Accept all licenses.

Return to the previous directory and create a folder to hold the VPN client:

cd ..
mkdir /etc/softether

Move the vpnclient folder to the created folder:

mv ./vpnclient /etc/softether

Now go again to the SoftEther VPN Server Manager and connect to your server:

Then click on Manage Virtual Hub and then on Manage Users:

Click on New to create a new user, insert a User Name (suggestion: worker) and a password, then click on OK.

Click on Exit in the Manage Users window, then Exit again in the Management of Virtual Hub window, and then Exit once more to close the connection with the server.

Then, return to the worker's terminal and go to the /etc/softether/vpnclient folder:

cd /etc/softether/vpnclient/

Now check if your client has all requirements, like we did with the server:

./vpncmd
# Choose 3 (Use of VPN Tools (certificate creation and Network Traffic Speed Test Tool))
Select 1, 2 or 3: 3
# Write check
VPN Tools>check

If it outputs a message like:

All checks passed. It is most likely that SoftEther VPN Server / Bridge can operate normally on this system.

Proceed with the tutorial; if not, fix the error before continuing.

Exit VPN Tools

VPN Tools> exit

Start VPN Client

./vpnclient start

Now, let’s connect the client to the server. We will use the ./vpncmd file again:

./vpncmd
# Press 2 (Management of VPN Client)
Select 1, 2 or 3: 2
# Don't type anything and just press enter
Hostname of IP Address of Destination:
Connected to VPN Client "localhost".
# Create a nic with name soft
VPN Client> niccreate soft
NicCreate command - Create New Virtual Network Adapter
The command completed successfully.
# Create account name with soft too
VPN Client> accountcreate soft
AccountCreate command - Create New VPN Connection Setting
# Type the VPN public IP (<VPN_SERVER_PUBLIC_IP>) (or the host name) with the port number (5555)
Destination VPN Server Host Name and Port Number:<VPN_SERVER_PUBLIC_IP>:5555
# Type our created Virtual Hub Name (VPN)
Destination Virtual Hub Name: VPN
# Write the created username (suggested: worker, or the one you choose)
Connecting User Name: worker
# Type the adapter name as soft
Used Virtual Network Adapter Name: soft
# The output must be
The command completed successfully.
# Insert the password
VPN Client> accountpassword soft
# Type your client password
Password: **************
# Specify password as standard
Specify standard or radius: standard
# It must return a success message
The command completed successfully.
# Make connection
VPN Client>accountconnect soft
# Verify if connection was successful, Status must be equal to Connected
VPN Client> accountlist
# Now exit
VPN Client> exit

Now, stop the VPN client to make some further configurations:

./vpnclient stop

Then, check whether IP forwarding is enabled:

cat /proc/sys/net/ipv4/ip_forward

If it returns 0, enable it with:

echo 1 > /proc/sys/net/ipv4/ip_forward

And make it permanent by:

echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf && sysctl -p
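
If you want to double-check that the setting took effect (an optional verification), the following should print 1:

sysctl net.ipv4.ip_forward
# Expected output: net.ipv4.ip_forward = 1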

Now, create a start.sh file for the VPN client:

vi start.sh

And paste the following configuration:

#!/bin/bash
cd /etc/softether/vpnclient
./vpnclient start
# Feed vpncmd the answers it would normally prompt for:
# 2 (Management of VPN Client), empty host (localhost),
# accountconnect, the account name soft, then exit.
(
echo "2"
echo ""
echo "accountconnect"
echo "soft"
echo "exit" ) | ./vpncmd
# Assign the VPN adapter its static IP
ifconfig vpn_soft 10.192.1.2

Then, make the file executable:

chmod +x start.sh

Now, try to connect with the start.sh script:

./start.sh

If there is no error, you connected successfully to the VPN server. If you still have doubts about whether the connection was successful, go to the SoftEther VPN Server Manager, select the server, press Connect, then go to Manage Virtual Hub, then Manage Sessions, and check whether the worker session is connected.
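
Another quick, optional check from the worker itself is to confirm that the vpn_soft adapter received the address assigned by start.sh:

# Should show inet 10.192.1.2 on the vpn_soft adapter
ip addr show vpn_soft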

Stop the worker VPN client again:

./vpnclient stop

And create a service for the Client VPN connection:

vi /etc/systemd/system/vpnclient.service

And paste the following configuration file:

[Unit]
Description=SoftEther k8s vpn client
[Service]
Restart=on-failure
RestartSec=5s
WorkingDirectory=/etc/softether/vpnclient
Type=forking
ExecStart=/bin/bash /etc/softether/vpnclient/start.sh
[Install]
WantedBy=multi-user.target

Reload the daemon and enable the service:

systemctl daemon-reload
systemctl enable vpnclient.service

Finally, try to start the service:

service vpnclient start

Verify if service started successfully:

systemctl status vpnclient

If the Active field shows active (running), the service started successfully.

Now your server and your local machine (or VPS) can connect to each other over the VPN we created.

Install Kubernetes and Docker

Now, on both machines you will need to install Docker and Kubernetes.

Follow the same instructions on all machines.

Docker Installation (you can follow the instructions in https://docs.docker.com/engine/install/ubuntu/):

sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
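
Optionally, you can confirm Docker works by running the hello-world image, as suggested in the Docker documentation:

sudo docker run hello-world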

Kubernetes Installation (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/):

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
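
Optionally (not part of the original steps), you can confirm the tools were installed and check their versions:

kubeadm version
kubectl version --client
kubelet --version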

Disable swap (as required by Kubernetes):

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
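
You can verify that swap is really off with free; the Swap line should show 0:

free -h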

On Master Node:

Now, only on the master node, initialize the Kubernetes cluster:

kubeadm init --pod-network-cidr=15.243.0.0/16 --apiserver-advertise-address=10.192.1.1

###### ATTENTION ######

IF THE PREVIOUS STEP WAS SUCCESSFUL, SKIP THIS.

If the master node kubelet initialization fails, one possible cause is the configuration in /var/lib/kubelet/config.yaml.

In this case, if you verify this configuration file:

cat /var/lib/kubelet/config.yaml

You may see cgroupDriver set to systemd:

cgroupDriver: systemd

This can be solved by changing this field's value to cgroupfs:

cgroupDriver: cgroupfs

Then, if you check the kubelet again, you should see no errors (Active: active (running)):

systemctl status kubelet

Then, you can get the worker connection parameters by:

kubeadm token create --print-join-command

####################

Wait for the server configuration to finish, then save the worker connection parameters; they will look something like:

kubeadm join 10.192.1.1:6443 --token y3fr6g.aaaasfsaffs \
--discovery-token-ca-cert-hash sha256:asf4saf4asf4as4fa4fsa4fasf4as4f4asf654a4fs

Then, set up the kubectl configuration so you can use it on the server:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
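
At this point kubectl already works on the master. Note that the node will typically report NotReady until the network plugin we install next is running:

kubectl get nodes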

Now install the Kubernetes network plugin. In our case, we will use Calico. The installation information can be found in the Calico documentation (https://docs.projectcalico.org). However, we will need to make some adaptations.

First, let’s install the Tigera Calico operator:

kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml

With all resources created successfully, let’s download the Calico Custom Resource file:

curl -O https://docs.projectcalico.org/manifests/custom-resources.yaml

Then, let's adapt it to our network CIDR address and make some additional modifications:

vi custom-resources.yaml

Delete everything and paste the code below, or apply the changes yourself (the cidr and nodeAddressAutodetectionV4 parameters):

# This section includes base Calico installation configuration.
# For more information, see: https://docs.projectcalico.org/v3.18/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 15.243.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    nodeAddressAutodetectionV4:
      firstFound: false
      cidrs: [10.192.1.0/24]

Then, let’s apply this resource:

kubectl apply -f custom-resources.yaml

Then, confirm all pods are created and running successfully using the following command:

watch kubectl get pods -n calico-system

With all pods created successfully, let's now check the node's internal IP:

kubectl get nodes -o wide

You may notice the INTERNAL-IP is different from the IP the machine has on the VPN (10.192.1.1; you can verify the master's VPN adapter IP by running ip addr and checking the inet address of tap_soft).

Then, we need to change the Kubernetes master node address. You can do this by editing the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and appending the parameter --node-ip=10.192.1.1 to the ExecStart line, where 10.192.1.1 is the machine's VPN adapter IP address:

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Add the parameter --node-ip=10.192.1.1 like this (on the last line):

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=10.192.1.1

Then reload the daemon and restart the kubelet service:

systemctl daemon-reload
systemctl restart kubelet.service

Wait a few minutes and try to get the node INTERNAL-IP again with the command:

kubectl get nodes -o wide

If the INTERNAL-IP changed to the VPN adapter IP of your machine, the configuration was done successfully.

On worker node:

First, let’s check the IP address of the VPN Adapter:

ip addr

Look for the adapter named vpn_soft and note its inet address. This is the IP of the worker machine on the VPN adapter.

Now, on worker node, paste the connection parameters you saved previously:

kubeadm join 10.192.1.1:6443 --token y3fr6g.aaaasfsaffs \
--discovery-token-ca-cert-hash sha256:asf4saf4asf4as4fa4fsa4fasf4as4f4asf654a4fs

When the join finishes, check whether the worker node connected by typing, on your master node:

kubectl get nodes -o wide

You may notice the worker's INTERNAL-IP address is not the same one we got from the VPN adapter. Therefore, we must repeat the procedure we did on the master and modify the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file:

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Similarly, on the last line add the parameter --node-ip=10.192.1.2 (if your worker machine has a different VPN IP, modify the parameter value accordingly). It will be something like:

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=10.192.1.2

Then reload the daemon and restart the kubelet service:

systemctl daemon-reload
systemctl restart kubelet.service

Wait a few minutes and then verify on the master machine whether the worker node's INTERNAL-IP changed to the VPN adapter IP:

kubectl get nodes -o wide

If it changed to that address, you successfully configured the Master and Worker!

Testing Kubernetes Network:

To test whether our configuration was done correctly and the network works on both nodes, let's do the following.

These commands are run using kubectl on the master node (since we have not configured kubectl on the worker or our local machine yet).

First, let’s see the nodes status:

kubectl get nodes

Both nodes must have STATUS=Ready. If not, use kubectl describe node <NAME_OF_NODE> to check further details.

Then, let's remove the taint on the master node to allow pods to be scheduled on it:

kubectl taint nodes --all node-role.kubernetes.io/master-

Now, let's create a busybox pod on the master node and on the worker node, run nslookup against the service called kubernetes, and verify whether the IP address can be resolved using the Kubernetes DNS.

First, run kubectl get nodes again and take note of the master node name <MASTER_NODE_NAME> and the worker node name <WORKER_NODE_NAME>.

Let's try the master node first; copy the command below and replace <MASTER_NODE_NAME> with your master node name:

kubectl run -it --rm --overrides='{"spec":{"nodeSelector": { "kubernetes.io/hostname": "<MASTER_NODE_NAME>" }}}' --restart=Never busybox --image=gcr.io/google-containers/busybox sh

When the pod is ready, try the nslookup command:

nslookup kubernetes

It should return the resolved address, similar to the output sketched below.
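
Assuming the default kubeadm service CIDR (10.96.0.0/12), the output should look roughly like the sketch below; the exact addresses may differ on your cluster:

Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local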

We can also test whether the internet connection is working using nslookup www.google.com; if it resolves the name and returns a Google IP, the internet connection is fine.

Then exit the busybox with:

exit

Now, do the same, but for the Worker node, replace the <WORKER_NODE_NAME> with the Worker node name:

kubectl run -it --rm --overrides='{"spec":{"nodeSelector": { "kubernetes.io/hostname": "<WORKER_NODE_NAME>" }}}' --restart=Never busybox --image=gcr.io/google-containers/busybox sh

When the pod is ready, repeat the process done before using nslookup :

nslookup kubernetes

If it returns the same kind of result as in the master node pod, the Kubernetes DNS is working correctly on both nodes (and very likely pod-to-pod communication too). If you want, you can also test whether internet name resolution works with nslookup www.google.com, like in the previous step. In this case you might get a different IP for Google if your master and worker are in different geographical locations.

Using kubectl on your local machine:

If your master and worker nodes are different from the machine you work on, you can use the kubeconfig to control the cluster. However, there are some additional steps you must take, compared to what you would usually do.

First, on your master node, get the kubeconfig parameters with:

cat ~/.kube/config

Then, on your local machine (after installing kubectl), edit your kubeconfig:

gedit ~/.kube/config

You will note the server parameter contains the VPN IP address of the master node (https://10.192.1.1:6443). As your local machine is not connected to the VPN, you must change it to the public address of the master machine: change 10.192.1.1:6443 to <YOUR_MASTER_PUBLIC_IP>:6443 and save the file.

However, if you try to use kubectl you will receive a message like:

Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 10.192.1.1, not <YOUR_MASTER_PUBLIC_IP>

As the message says, this happens because the API server certificate is only valid for the addresses above, not for the master's public IP.

To add an additional IP to the allowed addresses, go to the master machine's console and export the kubeadm configuration with the following command:

kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml

Then, edit it by adding the certSANs: parameter (under apiServer), listing the local IPs already allowed above and your master's public IP address, as sketched below:

vi kubeadm.yaml
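
As a reference, here is a minimal sketch of how the apiServer section of kubeadm.yaml might look after the edit, assuming <YOUR_MASTER_PUBLIC_IP> stands for your master's public address; keep the rest of the exported file as it is:

apiServer:
  certSANs:
  - "10.96.0.1"
  - "10.192.1.1"
  - "<YOUR_MASTER_PUBLIC_IP>"
  # leave the other apiServer fields exactly as they were exported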

Move the old certificates to your home folder, forcing Kubernetes to generate them again:

mv /etc/kubernetes/pki/apiserver.{crt,key} ~

And then, make Kubernetes generate the certificates:

kubeadm init phase certs apiserver --config kubeadm.yaml
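
If you want to confirm the regenerated certificate now includes the public IP (an optional check), you can inspect its Subject Alternative Names with openssl:

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"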

Then, find the container ID of the kube-apiserver pod:

docker ps | grep kube-apiserver | grep -v pause

And kill it so the API server restarts and picks up the new certificates:

docker kill <CONTAINER_ID>

And then, upload the new configuration file:

kubeadm init phase upload-config kubeadm --config kubeadm.yaml

Now you will be able to access your Kubernetes cluster from your local machine.
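
A quick way to confirm it works is to list the nodes from your local machine; you should see both the master and the worker:

kubectl get nodes -o wide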
