I started experimenting with Kubernetes on bare metal about a month back, and I’m documenting some of the details here.
My network layout is essentially flat (one dedicated VLAN for everything k8s), and I wanted to run the low-demand controller software on VMware ESXi. So I had the k8s VLAN added to the ESXi hosts, with a vSwitch managing the VLAN on the ESXi side. I also had 3 physical servers (HP G10) added to the same network. I made sure the network had no DHCP helpers or other DHCP servers running, and we have no firewall rules restricting traffic inside the VLAN.
The plan is to run the MAAS and Juju controllers on ESXi, two k8s masters on bare metal (which will also run workloads), and one dedicated worker node on bare metal.
Creating the MAAS controller is easy: create a VM with modest resources (4 GB RAM / 2 cores) and install Ubuntu 18.04 LTS. Then install MAAS and initialize the admin user.
sudo add-apt-repository ppa:maas/stable
sudo apt update
sudo apt install maas
sudo maas init  # you'll be asked to provide admin user info
Then we can visit the MAAS web UI at http://<your.maas.ip>:5240/MAAS/ and log in with the admin user we created. Import an SSH key for the admin user, confirm the image sources, and we arrive at the home page.
After that, we need to configure the networks MAAS should manage. The VM I’m using has two vNICs: one for accessing the UI through, and the other inside the k8s VLAN. Both were automatically identified by MAAS. In the Subnets section of the MAAS UI, we can see two fabrics: one with our public NIC and the other with our k8s VLAN NIC.
At this point, DHCP should be disabled on both.
Clicking the “untagged” label of the proper fabric (they’re all named untagged, but each refers to a different VLAN) takes us to the subnet settings. Scrolling down and clicking “Reserve range”, I added an IP range MAAS can’t place machines in. Then, with “Reserve dynamic range”, I added a range MAAS can offer DHCP IPs from (there shouldn’t be any other machines in this range). Now we can enable DHCP for the VLAN by scrolling up and clicking “Take action” -> “Provide DHCP”.
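The same ranges can also be set from the MAAS CLI, if you prefer. Below is a sketch with a hypothetical profile name, API key placeholder, and example addresses — substitute your own subnet:

```shell
# Log in to the MAAS API (the key is under "MAAS Keys" on the user settings page).
maas login admin http://<your.maas.ip>:5240/MAAS/api/2.0 <api-key>

# A reserved range MAAS must never allocate machines from (example addresses).
maas admin ipranges create type=reserved \
    start_ip=10.10.0.2 end_ip=10.10.0.50 comment="static infrastructure"

# A dynamic range MAAS can offer DHCP leases from.
maas admin ipranges create type=dynamic \
    start_ip=10.10.0.100 end_ip=10.10.0.200
```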
Now, to discover our physical machines in MAAS, we have to set the servers to netboot using PXE and boot them once. MAAS will hand out DHCP IPs to the machines, netboot them, and add a new user to the machine’s management controller (HP iLO in our case) through IPMI (other protocols are supported too; see the documentation). Make sure iLO, or whatever the management software is, has the remote IPMI port enabled, so MAAS can log in and boot the machine when required.
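Before relying on MAAS’s power control, it’s worth confirming remote IPMI actually works from the MAAS host. A quick check with ipmitool (iLO address and credentials here are placeholders):

```shell
# Query the power state over IPMI v2.0 (lanplus).
# iLO must have "IPMI/DCMI over LAN" enabled for this to respond.
ipmitool -I lanplus -H 10.10.0.21 -U admin -P 'secret' power status
# A healthy response looks like: Chassis Power is on
```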
At this point, we have three machines showing up in the MAAS UI with randomly generated names. Edit the names and assign them to the proper DNS zone (usually something.maas; you can add additional zones in the DNS section). Also make sure there’s an authoritative DNS record for the MAAS machine’s hostname; otherwise we’ll face weird issues where machines being provisioned try to reach our public interface.
Now we can start enlisting machines: go to the “Machines” section, click on the machine we need to enlist, and click “Take action” -> “Enlist”. If the remote IPMI connection is working properly, MAAS will boot the machine and install a minimal Ubuntu image on it. MAAS will also set the default boot options so the machine boots properly next time.
After all three are enlisted, we are ready to install Juju.
Running Juju requires deploying a Juju controller. As far as I can see, this can be deployed on LXD too, but I didn’t want to bother finding out. So I created a VM on ESXi and put it in the k8s VLAN. When it booted, it got discovered by MAAS, and I enlisted it the same way I did the physical machines. After this, though, the machine’s memory showed up as 0 bytes (it was actually 4 GB), so I logged into the MAAS database and set it to the proper value. I also tagged this machine “juju” in the MAAS machines section.
sudo -u maas psql -d maasdb
maasdb=> update maasserver_node set memory=4096 where memory=0;
Now, install Juju:
sudo add-apt-repository ppa:juju/stable
sudo apt update
sudo apt install juju
Add the MAAS cloud to Juju by running:
$ juju add-cloud
Select cloud type: maas
Enter a name for your maas cloud: maas-cloud
Enter the API endpoint url: <MAAS url from above, looks like http://x.x.x.x:5240/MAAS>
$ juju add-credential maas-cloud
Enter credential name: maas-cloud-creds
Using auth-type "oauth1".
Enter maas-oauth: <token is under "MAAS Keys" in user setting page in MAAS ui>
Now, bootstrap the Juju controller in the VM we just discovered by running the command below. It should run for a couple of minutes and provision the Juju controller in our VM.
juju bootstrap --constraints tags=juju <maas-cloud name from above> <arbitrary controller name>
ex:
juju bootstrap --constraints tags=juju maas-cloud juju-controller
After that’s finished, if you have 5 physical nodes enlisted, you can run the below to deploy a production-grade cluster with HA.
juju deploy canonical-kubernetes
But this is sort of a waste of resources, because it uses a dedicated machine each for easyrsa and the API load balancer. So I’m going to change it a bit by editing the bundle.yaml. Before that, we tag the three machines in MAAS: all three get the “k8s” tag, two get the “master” tag, and the other gets the “worker” tag.
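Tagging can be done from each machine’s page in the UI, or with the MAAS CLI. A sketch, assuming the same logged-in profile as before — the system IDs are placeholders (each machine’s real ID is visible on its details page):

```shell
# Create the tags once.
maas admin tags create name=k8s
maas admin tags create name=master
maas admin tags create name=worker

# Attach them to machines by system ID.
maas admin tag update-nodes k8s add=abc123 add=def456 add=ghi789
maas admin tag update-nodes master add=abc123 add=def456
maas admin tag update-nodes worker add=ghi789
```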
The modified bundle.yaml is attached. It runs easyrsa and a master on node-0, the API load balancer and a master on node-1, and a k8s worker on node-2 (technically on all three, because we want to run workloads on the masters too), with etcd on all three nodes. It also moves the API load balancer from the usual port 443 to 8443, because 443 conflicts with the k8s worker.
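I won’t reproduce the whole attached bundle here, but the edits follow roughly this shape — a sketch only, where the application names come from the canonical-kubernetes bundle and the machine numbers stand for node-0 through node-2:

```yaml
# Excerpt of the idea, not a complete bundle: co-locate easyrsa and the
# load balancer with the masters, and move the LB off port 443.
applications:
  easyrsa:
    to: ["0"]                 # shares node-0 with a master
  kubeapi-load-balancer:
    options:
      port: 8443              # avoid clashing with the worker's ingress on 443
    to: ["1"]                 # shares node-1 with the other master
  kubernetes-master:
    to: ["0", "1"]
  kubernetes-worker:
    to: ["0", "1", "2"]       # workloads on the masters too
  etcd:
    to: ["0", "1", "2"]
```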
Save this to a file and deploy with
juju deploy ./bundle.yaml
We can monitor the progress by running
watch -c juju status --color
After everything is in the active/idle state, copy the kubeconfig file.
juju scp kubernetes-master/0:config ~/.kube/config
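With the config in place, kubectl (on Ubuntu, `snap install kubectl --classic` is one way to get it) should now see the cluster:

```shell
kubectl get nodes       # all three machines should report Ready
kubectl cluster-info    # the API server should point at the load balancer on :8443
```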
And now we can do normal kubernetes stuff.
Author: Madushan Nishantha is a Senior DevOps Engineer at CMS.