OpenFaaS and GKE Autopilot
Introduction
In February this year, Google introduced GKE Autopilot, a new mode of operation for managed Kubernetes that lets you focus on your software while GKE Autopilot manages the infrastructure.
GKE already offers a fully managed Kubernetes-as-a-Service that makes setting up and operating a Kubernetes cluster easier. GKE Autopilot takes this a step further. In this mode, Google not only takes care of the control plane but also eliminates all node management operations.
With this newly released mode, you will:
- Optimize for production like a Kubernetes expert
- Enjoy a stronger security posture from the get-go
- Use Google as your SRE for both nodes and the control plane
- Pay for the optimized resources you use
OpenFaaS is a platform that makes Open Source serverless easy and accessible on any cloud or host, even on a Raspberry Pi. It allows you to build your Function-as-a-Service platform on top of Kubernetes, avoiding vendor lock-in. Still, your platform engineering team needs to build and maintain a running Kubernetes cluster.
In this post, we will deploy OpenFaaS on a GKE Autopilot cluster and see how the two can work together.
Requirements
Make sure you have the following tools available on your system:
- gcloud: the CLI tool to create and manage Google Cloud resources
- arkade: portable Kubernetes marketplace
- kubectl: the Kubernetes command-line tool, allows you to run commands against Kubernetes clusters
- faas-cli: the official CLI for OpenFaaS
- hey: (optional) a tiny program that sends some load to a web application
Create your GKE Autopilot cluster
While I do prefer tools like Terraform to provision my infrastructure, for this tutorial I’ll be using the gcloud utility, mainly because, at the time of writing, the Google Terraform provider does not yet support GKE Autopilot.
Prepare the following environment variables for this tutorial:
export PROJECT=<your google cloud project id>
export REGION=<your preferred region>
Create a dedicated network and subnetwork for our GKE Autopilot cluster.
gcloud compute networks create faas \
--project $PROJECT \
--subnet-mode custom
gcloud compute networks subnets create faas \
--project $PROJECT \
--region $REGION \
--network faas \
--range "10.5.0.0/20"
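To double-check that the subnetwork was created with the expected range, you can describe it (an optional sanity check):
gcloud compute networks subnets describe faas \
  --project $PROJECT \
  --region $REGION \
  --format "value(ipCidrRange)"
This should print 10.5.0.0/20.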
We will create a GKE cluster with private nodes for better security, meaning the nodes will only have private IP addresses. To make outbound connections, for example to pull images from Docker Hub, we must configure Cloud NAT.
gcloud compute routers create faas \
--network faas \
--region $REGION \
--project $PROJECT
gcloud compute routers nats create faas \
--project $PROJECT \
--region $REGION \
--router faas \
--nat-all-subnet-ip-ranges \
--auto-allocate-nat-external-ips
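Similarly, you can list the NAT configuration to confirm it is attached to the router:
gcloud compute routers nats list \
  --project $PROJECT \
  --region $REGION \
  --router faas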
After provisioning all those network resources, we can continue and create our private GKE Autopilot cluster.
gcloud container clusters create-auto faas \
--project $PROJECT \
--region $REGION \
--network faas \
--subnetwork faas \
--enable-private-nodes \
--master-ipv4-cidr 172.8.0.0/28
Note that this can take some time; expect around five minutes or more.
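While you wait, you can poll the cluster status from another terminal; it switches from PROVISIONING to RUNNING once the cluster is ready:
gcloud container clusters describe faas \
  --project $PROJECT \
  --region $REGION \
  --format "value(status)"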
As soon as our cluster is ready, we need to get authentication credentials to connect to the cluster.
gcloud container clusters get-credentials faas \
--project $PROJECT \
--region $REGION
To verify the cluster configuration, you can use the following command to see all of your resources across namespaces:
kubectl get all --all-namespaces
Or have a look at the available nodes. In my case, Autopilot has provisioned two nodes, enough to run all the required system pods.
$ kubectl get nodes
NAME                                  STATUS   ROLES    AGE    VERSION
gk3-faas-default-pool-71094390-6bl8   Ready    <none>   2m9s   v1.18.12-gke.1210
gk3-faas-default-pool-90a1e42c-0328   Ready    <none>   2m7s   v1.18.12-gke.1210
Install OpenFaaS
To install OpenFaaS on Kubernetes, we have two options:
- either use arkade
- or use the official Helm chart (a rough sketch follows below)
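For reference, the Helm route would look roughly like the sketch below; it uses the documented chart values, including the same nodeSelector override we will need for the arkade install in a moment:
# Add the OpenFaaS chart repository and create the two namespaces
helm repo add openfaas https://openfaas.github.io/faas-netes/
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
# Install with a LoadBalancer, generated admin credentials,
# and without the default arch node selector
helm upgrade openfaas openfaas/openfaas --install \
  --namespace openfaas \
  --set functionNamespace=openfaas-fn \
  --set generateBasicAuth=true \
  --set serviceType=LoadBalancer \
  --set nodeSelector=null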
We will use the first option for this tutorial as it is the fastest and easiest way to install OpenFaaS. Because we’re running on a public cloud provider, we configure OpenFaaS with a LoadBalancer; in this case, a Google Network LoadBalancer will be created.
arkade install openfaas --load-balancer
Unfortunately, this will fail. By default, the OpenFaaS Helm chart puts a node selector with the key beta.kubernetes.io/arch on the pods, which is not allowed by GKE Autopilot.
Error: admission webhook "validation.gatekeeper.sh" denied the request: [denied by autogke-node-affinity-selector-limitation] If not using workload separation, node selector is not allowed on labels with keys: <{"beta.kubernetes.io/arch"}>; Autopilot allows node selectors only on labels with keys: <["topology.kubernetes.io/region", "topology.kubernetes.io/zone", "failure-domain.beta.kubernetes.io/region", "failure-domain.beta.kubernetes.io/zone", "cloud.google.com/gke-os-distribution", "kubernetes.io/os", "kubernetes.io/arch"]>.
Let’s tweak our installation command a little bit:
arkade install openfaas --load-balancer --set nodeSelector=null
Now the installation is successful!
=======================================================================
= OpenFaaS has been installed. =
=======================================================================
# Get the faas-cli
curl -SLsf https://cli.openfaas.com | sudo sh
# Forward the gateway to your machine
kubectl rollout status -n openfaas deploy/gateway
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
# If basic auth is enabled, you can now log into your gateway:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
faas-cli store deploy figlet
faas-cli list
# For Raspberry Pi
faas-cli store list \
--platform armhf
faas-cli store deploy figlet \
--platform armhf
# Find out more at:
# https://github.com/openfaas/faas
Thanks for using arkade!
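Before moving on, you can confirm that the nodeSelector override took effect by inspecting the gateway Deployment; an empty result means no node selector is set:
kubectl get deploy gateway -n openfaas \
  -o jsonpath='{.spec.template.spec.nodeSelector}'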
You will notice that it takes a little longer until all the OpenFaaS pods are scheduled and running. Some of them will probably remain in the Pending state until additional nodes are provisioned and ready. With Autopilot, the underlying compute infrastructure is provisioned and scaled based on our workload specifications and dynamic load, providing highly efficient resource optimization.
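You can follow this from another terminal and watch the pods move from Pending to Running as Autopilot adds capacity:
kubectl get pods -n openfaas --watch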
In my case, three additional nodes are created when installing OpenFaaS.
$ kubectl get nodes
NAME                                  STATUS   ROLES    AGE    VERSION
gk3-faas-default-pool-71094390-027g   Ready    <none>   5m7s   v1.18.12-gke.1210
gk3-faas-default-pool-71094390-6bl8   Ready    <none>   21m    v1.18.12-gke.1210
gk3-faas-default-pool-71094390-lt34   Ready    <none>   5m7s   v1.18.12-gke.1210
gk3-faas-default-pool-90a1e42c-0328   Ready    <none>   21m    v1.18.12-gke.1210
gk3-faas-default-pool-90a1e42c-qz2j   Ready    <none>   5m4s   v1.18.12-gke.1210
Log in to OpenFaaS
Grab the public endpoint of our Gateway and the password to log in using the faas-cli:
export GATEWAY_IP=$(kubectl get service gateway-external -n openfaas -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
export PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin --gateway http://$GATEWAY_IP:8080
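To verify that the gateway is reachable and the login worked, let faas-cli report both the client and gateway versions:
faas-cli version --gateway http://$GATEWAY_IP:8080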
Deploy a sample function
We can deploy the simple NodeInfo function from the store. This function gets info about the machine it is deployed on, such as CPU count, OS, and uptime. For this tutorial, such a lightweight function is good enough.
faas-cli store deploy "NodeInfo" --gateway http://$GATEWAY_IP:8080
# Check for the Pod to become available ("Status: Ready")
faas-cli describe nodeinfo
Just like when we installed OpenFaaS earlier, it can take some time until the function pod is ready, and for the same reason: if there is room on the available nodes, the pod is scheduled immediately; otherwise, a new node needs to be provisioned first.
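If the pod stays in Pending longer than expected, the namespace events usually show the node provisioning in progress:
kubectl get events -n openfaas-fn --sort-by=.lastTimestamp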
When it is ready, you can invoke the function:
$ faas-cli invoke nodeinfo --gateway http://$GATEWAY_IP:8080
Reading from STDIN - hit (Control + D) to stop.
Hostname: nodeinfo-76bf9fbfd6-6z92v
Arch: x64
CPUs: 2
Total mem: 3940MB
Platform: linux
Uptime: 200
Add some load
Let’s use the hey utility to put some modest load on our NodeInfo function.
hey -q 1 -c 10 -z 600s "http://$GATEWAY_IP:8080/function/nodeinfo"
It will use 10 concurrent workers, each firing one request per second, which is perhaps not a significant load, but it will trigger the autoscaling of OpenFaaS.
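By default, OpenFaaS scales a function between its minimum and maximum replica counts. If you want to experiment with those bounds, they can be tuned at deploy time with the documented scaling labels; the values below are just an example:
faas-cli store deploy "NodeInfo" \
  --gateway http://$GATEWAY_IP:8080 \
  --label com.openfaas.scale.min=1 \
  --label com.openfaas.scale.max=10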
Monitor the number of pods closely. After a while, the number of pods increases in response to the load.
$ kubectl get pods -n openfaas-fn
NAME                        READY   STATUS    RESTARTS   AGE
nodeinfo-76bf9fbfd6-2njmj   0/1     Pending   0          18s
nodeinfo-76bf9fbfd6-6z92v   1/1     Running   0          8m8s
nodeinfo-76bf9fbfd6-khm8g   0/1     Pending   0          18s
nodeinfo-76bf9fbfd6-tswrg   0/1     Pending   0          18s
nodeinfo-76bf9fbfd6-xc7f2   0/1     Pending   0          18s
Let’s have a look at the current nodes:
$ kubectl get nodes
NAME                                  STATUS   ROLES    AGE    VERSION
gk3-faas-default-pool-71094390-6bl8   Ready    <none>   175m   v1.18.12-gke.1210
gk3-faas-default-pool-71094390-lt34   Ready    <none>   159m   v1.18.12-gke.1210
gk3-faas-default-pool-90a1e42c-0328   Ready    <none>   175m   v1.18.12-gke.1210
gk3-faas-default-pool-90a1e42c-k8p9   Ready    <none>   138m   v1.18.12-gke.1210
gk3-faas-default-pool-90a1e42c-qz2j   Ready    <none>   159m   v1.18.12-gke.1210
gk3-faas-nap-dph42xt1-099fa9fc-gwq3   Ready    <none>   41s    v1.18.12-gke.1210
gk3-faas-nap-dph42xt1-a4ab000e-f8t7   Ready    <none>   31s    v1.18.12-gke.1210
As you can see, additional nodes, apparently in a different node pool, are provisioned to fit the resources requested by all our NodeInfo pods.
When you cancel the load test, you will notice that the extra nodes eventually disappear, as they are no longer needed.
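You can keep an eye on the scale-down with a watch on the nodes; it typically takes several minutes before unneeded nodes are removed:
kubectl get nodes --watch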
Conclusion
We were able to deploy OpenFaaS on a GKE Autopilot Kubernetes cluster. From a developer’s perspective, the experience is the same as with other Kubernetes distributions. Still, this new mode frees up teams to focus more on the actual workloads and less on managing Kubernetes clusters.
With Autopilot, GKE configures and manages the underlying infrastructure, including nodes and node pools, enabling users to focus only on the target workloads and pay per pod resource request (CPU, memory, and ephemeral storage).
This makes it a perfect fit for a Function-as-a-Service platform like OpenFaaS, as you’re billed only for the function instances that are actually running.
The only difference I noticed is that starting a new instance of a function can take a little longer, because extra nodes may need to be provisioned first. Especially during a sudden peak in traffic, this can be a problem, but I’m sure it will scale up and down smoothly in daily operations.
There is, of course, still a management fee for a GKE cluster, so if you’re only running a couple of functions, I can recommend taking a look at faasd: a lightweight and portable faas engine; OpenFaaS reimagined, but without the cost and complexity of Kubernetes.
See also:
- A serverless appliance for your Raspberry Pi with faasd
- Provision a Multi-Region k3s cluster on Google Cloud with Terraform