Photo by Jeff Loucks on Unsplash

Recently, I started building a tool called hashi-up, which installs HashiCorp Consul and Nomad on any remote Linux host, whether it is a laptop, a VM, or a Raspberry Pi.

In this article, I will describe how to set up a small Consul and Nomad cluster on a bunch of Raspberry Pis running Ubuntu 20.04. I chose Ubuntu Server over Raspberry Pi OS (previously called Raspbian) for two reasons: it comes in a 64-bit version, and it has better support for cloud-init.

What do we need?

To follow along with all the instructions in this tutorial, you will need:

  • at least 2 Raspberry Pis (version 3 or 4)
  • some micro SD cards
  • software for writing disk images, e.g. Etcher or flash
  • the Ubuntu Server image for Raspberry Pi, found at https://ubuntu.com/download/raspberry-pi
  • a Consul and Nomad binary
  • hashi-up! a tool for simplifying installation of Consul and Nomad
  • some spare time

Overview

In summary, the following steps are what it takes to end up with a fully functional cluster:

  • Prepare SSH keys and SD cards
  • Adjust some configuration
  • Boot the Raspberry Pis
  • Bootstrap Consul server node
  • Bootstrap Nomad server node
  • Join other Consul agents
  • Join other Nomad clients
  • Test the Nomad cluster

First preparations

Before we start building the cluster, we have some preparations to do.

SSH key

First, we need to create an SSH key that we will use to access the Raspberry Pis. If you already have a key available, you can of course skip this step. To create a new SSH key, run the following:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/hashi-rpi -C hashi-rpi
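
The public half of this key is what goes into the cloud-init configuration later on. You can print it with:

cat ~/.ssh/hashi-rpi.pub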

Flash the operating system to your SD card

Next, we will prepare the SD cards with Ubuntu Server. Download the Ubuntu Server image from https://ubuntu.com/download/raspberry-pi, and use Etcher to flash the OS image to your SD cards.
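
If you prefer the command line over Etcher, plain dd works as well. A minimal sketch, assuming the image has been decompressed and the card shows up as /dev/sdX (double-check the device name with lsblk before writing!):

# Write the Ubuntu image to the SD card
sudo dd if=ubuntu-20.04-preinstalled-server-arm64+raspi.img of=/dev/sdX bs=4M status=progress conv=fsync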

Configure the SD card with cloud-init

Last, we will customize the image with a little bit of cloud-init configuration.

What is cloud-init? According to the documentation:

Cloud-init is the industry standard multi-distribution method for cross-platform cloud instance initialization. It is supported across all major public cloud providers, provisioning systems for private cloud infrastructure, and bare-metal installations.

As stated in the documentation, its primary use is to initialize cloud instances. Still, because cloud-init has been included in the official Ubuntu Server images since 18.04, we can leverage it to configure our Raspberry Pis when they boot.

In our use case, we will use cloud-init to set a hostname, add our SSH key to the instances and, for the servers, assign a static IP address. Of course, there are many other things you can do with cloud-init; for more information, have a closer look at the documentation.

So, to make cloud-init do what we want, take the following steps (see the sketch after this list):

  • mount the SD card
  • edit the user-data file in the boot partition
  • edit the network-config file to configure a static IP address (recommended for the Consul and Nomad server image)
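
On a Linux machine, that comes down to something like the following sketch. The device name /dev/sdX1 is an assumption; verify it with lsblk (the boot partition of the Ubuntu image is labelled system-boot):

# Mount the boot partition of the SD card
sudo mkdir -p /mnt/sdcard
sudo mount /dev/sdX1 /mnt/sdcard

# Edit the cloud-init files
sudo nano /mnt/sdcard/user-data
sudo nano /mnt/sdcard/network-config

sudo umount /mnt/sdcard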

An example of the user-data file:

#cloud-config

hostname: hashi-server
manage_etc_hosts: true

package_update: false
package_upgrade: false

ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDfkByGmwRUjUINP5pYu17yvX2qSdlgeaqlW5MrfsQqdj5WgWnddIPLnH8vFDA376JI3HpzkmZ99VdIJIseBDzv2WzRIE1KnmpzJkHAjY2t/pkxeTXxyBVjTMxg7+PR9Uc+05KcU2TdPAyTDradIsnk5+kPZenE9O3ZK85hEEXVWzbnQCxx4iaTWpXz+ufQ1pmueZDC1GfI+hRyMzALpPj6rkUh0rVsRYLtfDjtmh62dfm20wwZKY/eD4sMcXVsH2bJT1k9fPDQ62pYb7s7uQQTXV38RpkiEfPfhTTxOpLz9LMmovZ91I3ohkLaCFpNhQubEbrNnM2EhNDkdIsObxYH hashi-rpi

For a static IP address, an example of the network-config file:

version: 2
ethernets:
  eth0:
    dhcp4: no
    addresses: [192.168.0.35/24]
    gateway4: "192.168.0.1"
    nameservers:
      addresses: ["192.168.0.1","8.8.8.8","8.8.4.4"]

Edit the SD cards for all your cluster nodes. By default, all the Raspberry Pis will have ubuntu as their hostname, so it is recommended to give each of them a unique hostname via the cloud-init user-data.

Pro tip: with the flash utility, you can flash the image and set the hostname, user-data, and network-config in a single command:

flash -n hashi-server -u user-data -F network-config ubuntu-20.04-preinstalled-server-arm64+raspi.img
flash -n hashi-agent-001 -u user-data ubuntu-20.04-preinstalled-server-arm64+raspi.img
flash -n hashi-agent-002 -u user-data ubuntu-20.04-preinstalled-server-arm64+raspi.img

Finally, put every SD card in a Raspberry Pi, and connect the devices to your network and a power supply. Boot them, and after a few minutes, all of them should be up and running.
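
A quick way to verify that a node is ready is an SSH check; cloud-init status --wait blocks until the first-boot configuration has finished:

ssh -i ~/.ssh/hashi-rpi ubuntu@192.168.0.35 cloud-init status --wait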

Bootstrapping the server node

Now that all our units are running, it is time to start the server node of our cluster.

In the previous step, we gave at least one node a static IP address, so that node is the perfect candidate.

Start a Consul server with the following command:

hashi-up consul install \
  --ssh-target-addr 192.168.0.35 \
  --ssh-target-user ubuntu \
  --ssh-target-key ~/.ssh/hashi-rpi \
  --server \
  --advertise-addr "{{ GetInterfaceIP \"eth0\" }}" \
  --bind-addr "{{ GetInterfaceIP \"eth0\" }}" \
  --client-addr 0.0.0.0
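
hashi-up installs Consul as a systemd service on the target host, so you can inspect it directly on the Pi (assuming the unit is named consul):

ssh -i ~/.ssh/hashi-rpi ubuntu@192.168.0.35 sudo systemctl status consul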

Once the Consul server is up and running, start a Nomad server with hashi-up:

hashi-up nomad install \
  --ssh-target-addr 192.168.0.35 \
  --ssh-target-user ubuntu \
  --ssh-target-key ~/.ssh/hashi-rpi \
  --server \
  --advertise "{{ GetInterfaceIP \"eth0\" }}" \
  --bootstrap-expect 1

And the server node is up and running!

You can verify that it was successful by accessing the UIs at http://192.168.0.35:8500 and http://192.168.0.35:4646, or by listing the members with the CLI:

$ consul members -http-addr=http://192.168.0.35:8500
Node                  Address               Status  Type    Build  Protocol  DC   Segment
hashi-server          192.168.0.35:8301     alive   server  1.8.1  2         dc1  <all>
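
Similarly, the Nomad CLI can list the server:

$ nomad server members -address=http://192.168.0.35:4646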

Join other client nodes

Of course, a single server node isn't a working cluster yet. The next thing to do is to join the other Raspberry Pis as client agents, so we can start scheduling some jobs on our little single-board computers.

To add the other nodes, we need their IP addresses. If you didn't give them static addresses, you can find them by looking at the leased addresses on your router, or with a command-line tool like nmap.
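
For example, a ping scan of the local subnet lists all live hosts (adjust the CIDR to match your network):

nmap -sn 192.168.0.0/24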

Then, use hashi-up to install and start the Consul and Nomad agents:

export SERVER_IP=192.168.0.35
export AGENT_1_IP=192.168.0.120
export AGENT_2_IP=192.168.0.121

hashi-up consul install \
  --ssh-target-addr $AGENT_1_IP \
  --ssh-target-user ubuntu \
  --ssh-target-key ~/.ssh/hashi-rpi \
  --advertise-addr "{{ GetInterfaceIP \"eth0\" }}" \
  --bind-addr "{{ GetInterfaceIP \"eth0\" }}" \
  --retry-join $SERVER_IP

hashi-up consul install \
  --ssh-target-addr $AGENT_2_IP \
  --ssh-target-user ubuntu \
  --ssh-target-key ~/.ssh/hashi-rpi \
  --advertise-addr "{{ GetInterfaceIP \"eth0\" }}" \
  --bind-addr "{{ GetInterfaceIP \"eth0\" }}" \
  --retry-join $SERVER_IP

hashi-up nomad install \
  --ssh-target-addr $AGENT_1_IP \
  --ssh-target-user ubuntu \
  --ssh-target-key ~/.ssh/hashi-rpi \
  --client \
  --advertise "{{ GetInterfaceIP \"eth0\" }}"
  
hashi-up nomad install \
  --ssh-target-addr $AGENT_2_IP \
  --ssh-target-user ubuntu \
  --ssh-target-key ~/.ssh/hashi-rpi \
  --client \
  --advertise "{{ GetInterfaceIP \"eth0\" }}"  

Sit back and relax for a moment while everything gets installed, and in the end, you will see the clients joining the server node to form a cluster.

A keen eye will spot that we only started the Consul agents with a --retry-join flag. Because Nomad runs alongside a Consul agent, the Nomad clients will discover the servers to join via the service discovery feature of Consul.
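
You can see this in action by listing the services in the Consul catalog; with a default setup, Nomad registers itself as the nomad and nomad-client services:

$ consul catalog services -http-addr=http://192.168.0.35:8500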

The same CLI commands will now display the other members:

$ consul members -http-addr=http://192.168.0.35:8500
Node                  Address               Status  Type    Build  Protocol  DC   Segment
hashi-server          192.168.0.35:8301     alive   server  1.8.1  2         dc1  <all>
hashi-agent-001       192.168.0.120:8301    alive   client  1.8.1  2         dc1  <default>
hashi-agent-002       192.168.0.121:8301    alive   client  1.8.1  2         dc1  <default>

$ nomad node status -address=http://192.168.0.35:4646
ID        DC   Name             Class   Drain  Eligibility  Status
0b7402a9  dc1  hashi-agent-001  <none>  false  eligible     ready
3198fcf0  dc1  hashi-agent-002  <none>  false  eligible     ready
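
To actually test the cluster, submit a small job. Below is a minimal sketch using the exec driver, so no container runtime is needed on the agents; the job name and the command it runs are purely illustrative:

# Write a minimal job file and submit it to the cluster
cat > hello.nomad <<'EOF'
job "hello" {
  datacenters = ["dc1"]

  group "demo" {
    count = 2

    task "hello" {
      driver = "exec"

      config {
        command = "/bin/sh"
        args    = ["-c", "while true; do echo hello from nomad; sleep 30; done"]
      }
    }
  }
}
EOF

nomad job run -address=http://192.168.0.35:4646 hello.nomad
nomad job status -address=http://192.168.0.35:4646 hello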

Further improvements

In this tutorial, we built a rather simple Consul and Nomad cluster with a single server node.

Consul and Nomad do not operate in a secure-by-default configuration, so it is recommended to enable encryption and the ACL system. The hashi-up tool has built-in support to enable those features on the agents.
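
For example, both tools can generate a suitable gossip encryption key themselves; check the hashi-up documentation for the exact flags to pass such a key to the agents:

# Generate gossip encryption keys for Consul and Nomad
consul keygen
nomad operator keygen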

Also, as Raspberry Pis tend to break from time to time, it is a good idea to add some extra server nodes to create a highly available cluster.

Final thoughts

With the small tool hashi-up, it takes only a few minutes to build an operational Consul and Nomad cluster. In this post, I illustrated how you can use it against some Raspberry Pis, ideal for running a home lab or for learning and experimenting.

PS: An alternative is to use some pre-built Ubuntu images with Consul and Nomad already installed.

