photo by Scott Webb on Unsplash

Introduction

Google Cloud Identity-aware Proxy, or in short IAP, is an access control tool on the Google Cloud Platform for controlling access based on who is making an HTTP request to your application or who is making SSH connections to your virtual servers. As part of the BeyondCorp security model, it enables context-aware access from virtually any location to your applications or VMs without the need for bastion hosts or a traditional VPN.

inlets PRO logo

inlets PRO is a Software Defined Network (SDN) for connecting applications. It allows you to tunnel your private service to a remote network, or get a public IP and serve traffic to your users. As it is Cloud Native by design, it can run on containers or VMs. By running the server part of the tunnel, also known as an exit-node, on a Google Compute Engine instance, and connecting a client running in a private datacenter, your private services become available for your employees or customers from anywhere without the hassle of VPNs.

What if we could combine Google IAP and inlets? In other words, can we use Google Cloud Identity-Aware Proxy and inlets to create context-aware access control for our on-premises services?

I’m sure the combination will be useful in many use cases. Perhaps you would like to give SSH access to some internal servers from anywhere. Or you want to expose administrative services such as Grafana dashboards or PostgreSQL’s admin interface.

inlets PRO is the perfect fit to make such administrative services available with a public endpoint, but exposing them directly to the internet introduces risk. Forwarding TCP traffic with IAP allows you to reduce that risk, ensuring only authorized users gain access to these sensitive services.

TCP forwarding with IAP

So what does IAP TCP forwarding look like?

From the documentation:

You can use IAP TCP forwarding for other TCP-based protocols by using the gcloud compute start-iap-tunnel command to allocate a local port. The local port tunnels data traffic from the local machine to the remote machine in an HTTPS stream. IAP then receives the data, applies access controls, and forwards the unwrapped data to the remote port. Conversely, any data from the remote port is also wrapped before it’s sent to the local port where it’s then unwrapped.

diagram

a PostgreSQL server protected by Google IAP

As an example, let’s say we have a PostgreSQL server running on a GCE VM instance, named my-postgres-vm. With the following command, we bring the database service to our local machine:

gcloud compute start-iap-tunnel my-postgres-vm 5432 \
  --local-host-port=localhost:5432 \
  --zone=europe-west1-b

All traffic sent to localhost:5432 is forwarded to the VM instance. The port is only accessible by applications running on your local computer.
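With the tunnel up, a client such as psql can connect as if the database were running locally (the user and database names below are placeholders):

```shell
# Connect through the IAP tunnel; 'postgres' user/db are placeholders.
psql "host=localhost port=5432 user=postgres dbname=postgres"
```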

A few steps are required before the command above will succeed.

First, to allow IAP to connect to your VM instances, a proper firewall rule is required. This rule must allow ingress traffic from the IP range used by IAP for TCP forwarding to every port you want to reach through IAP TCP forwarding.

E.g. to allow PostgreSQL access to all VM instances in your network, run:

gcloud compute firewall-rules create allow-postgresql-ingress-from-iap \
  --direction=INGRESS \
  --action=allow \
  --rules=tcp:5432 \
  --source-ranges=35.235.240.0/20

Next, to control which users and groups are allowed to use IAP TCP forwarding and which VM instances they’re allowed to connect to, configure IAM permissions. These permissions can be granted on project-level or instance-level.

E.g.

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=user:EMAIL \
    --role=roles/iap.tunnelResourceAccessor
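For instance-level access, the same role can be bound to a single VM instead of the whole project, following the principle of least privilege. For example, for the my-postgres-vm instance used earlier:

```shell
# Grant the tunnel role on one instance only, not project-wide.
gcloud compute instances add-iam-policy-binding my-postgres-vm \
    --zone=europe-west1-b \
    --member=user:EMAIL \
    --role=roles/iap.tunnelResourceAccessor
```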

With those two steps in place, only users holding the correct IAM permissions can reach your service through the Cloud Identity-Aware Proxy, from any location. The firewall blocks all other ingress traffic, making it a secure setup.

One extra step would be enabling Cloud Audit Logs for IAP, which lets you inspect each request and see which access levels a user did and didn't meet.
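Once Data Access audit logs are enabled for IAP, the entries can be queried from Cloud Logging. A sketch, assuming audit logging is already switched on for the project:

```shell
# Show recent IAP authorization decisions from the audit logs.
gcloud logging read \
  'protoPayload.serviceName="iap.googleapis.com"' \
  --limit=10 --freshness=1d
```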

TCP tunnelling with inlets PRO

The Cloud Native Tunnel for L4 TCP traffic, inlets PRO, is independent of Google Cloud. In fact, you could use any cloud provider to create a public exit-node for your services. The interesting part is that it uses much the same technique as Cloud IAP.

An inlets PRO client establishes an outbound connection to an inlets PRO server. All traffic is sent over an encrypted WebSocket using HTTPS, which works well with HTTP proxies, VPNs, and VM networks.

diagram

inlets PRO exit-node running on Google Cloud Platform

The fastest way to create such an exit-node is to use the handy utility inletsctl, which automates the task of creating the server node on cloud infrastructure. It supports many cloud providers, and obviously, Google Cloud Platform is one of them.

After creating a Service Account key file (more info here), you are only one command away from having an inlets PRO server:

inletsctl create \
  --pro \
  --provider gce \
  --project-id $PROJECT_ID \
  --access-token-file ./key.json

The output of the command above will display how to connect an inlets PRO client to punch out a service like PostgreSQL:

inlets PRO (0.7.0) exit-server summary:
  IP: 104.154.249.125
  Auth-token: rekwwOPQhe2792hqtJDJHjrpR3ZhqsAzsFOW6nTSmzBIGCUkpe1tWGgWA3KXJa32

Command:
  export LICENSE=""
  export PORTS="8000"
  export UPSTREAM="localhost"

  inlets-pro client --url "wss://104.154.249.125:8123/connect" \
	--token "rekwwOPQhe2792hqtJDJHjrpR3ZhqsAzsFOW6nTSmzBIGCUkpe1tWGgWA3KXJa32" \
	--license "$LICENSE" \
	--upstream $UPSTREAM \
	--ports $PORTS

Now, there are a few points to keep in mind when creating an exit-node with inletsctl.

For starters, it requires a default network in the target project and, at the time of writing, there is no option to pick a different network of your choice.

Next, and perhaps more important, it creates a rather coarse-grained firewall rule, allowing traffic from any source to any port of the instance. The reasoning is understandable, as inletsctl cannot know which ports a customer wants to expose, but you may prefer more fine-grained rules. Of course, you can always add stricter rules yourself, allowing traffic from your own datacenter only to the control port 8123, and traffic from your customer's datacenter only to the data port.
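A stricter pair of rules might look like this, assuming 203.0.113.0/24 stands in for your datacenter's egress range and 5432 is the data port you expose:

```shell
# Control plane: only the datacenter running the inlets client may connect.
gcloud compute firewall-rules create allow-inlets-control \
  --direction=INGRESS --action=allow \
  --rules=tcp:8123 \
  --source-ranges=203.0.113.0/24

# Data plane: only IAP's TCP forwarding range may reach the service port.
gcloud compute firewall-rules create allow-inlets-data-from-iap \
  --direction=INGRESS --action=allow \
  --rules=tcp:5432 \
  --source-ranges=35.235.240.0/20
```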

Both issues should be straightforward to fix, and as the project is open source, reporting them as an issue or even contributing a fix is perhaps something I will do later on.

inlets PRO tunnel with Identity-Aware Proxy

What do we have to do to enable context-aware access control to an on-prem service by combining Identity-Aware Proxy and inlets PRO?

When we combine the two, you end up with:

  • a GCE VM instance with a public IP running an inlets PRO server
  • a firewall rule allowing ingress traffic coming from your datacenter to port 8123, the control port of inlets PRO
  • a firewall rule allowing ingress traffic coming from the IP range used by IAP to the ports you would like to expose
  • an inlets PRO client running in your datacenter connected to the server via encrypted WebSockets and an authentication token
  • IAM permissions granted to the users allowed to create IAP tunnels

diagram

inlets PRO exit-node protected by Google IAP

Now, when a user is allowed to create an IAP tunnel, they can forward TCP traffic to the VM on a certain port. Instead of a service or an application, the inlets PRO server listens on that port, tunnelling all the traffic to the inlets PRO client in your datacenter, which in turn forwards the traffic to the targeted service or application.
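End to end, an authorized user would reach the on-prem PostgreSQL server like this (the instance name and zone are examples):

```shell
# 1) Open an IAP tunnel to the inlets exit-node's data port.
gcloud compute start-iap-tunnel inlets-exit-node 5432 \
  --local-host-port=localhost:5432 \
  --zone=europe-west1-b

# 2) In another terminal: traffic now flows via IAP, then the inlets
#    tunnel, to the PostgreSQL server in the private datacenter.
psql "host=localhost port=5432 user=postgres"
```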

In short, with the proper IAM roles, one can securely access an on-premises service from anywhere. And that is what context-aware access is all about!

Known limitations

While this setup looks very promising, some limitations are worth mentioning.

First, IAP’s TCP forwarding feature isn’t intended for bulk transfer of data, and Google reserves the right to rate-limit users abusing this service.

Second, IAP automatically disconnects sessions after 1 hour of inactivity, and Google recommends building logic into your applications to re-establish the tunnel when it becomes disconnected.
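A minimal sketch of such reconnect logic, here as a shell wrapper around the tunnel command (instance name and zone as in the earlier example):

```shell
#!/bin/sh
# Keep the IAP tunnel alive: restart it whenever it exits,
# e.g. after the 1-hour inactivity disconnect.
while true; do
  gcloud compute start-iap-tunnel my-postgres-vm 5432 \
    --local-host-port=localhost:5432 \
    --zone=europe-west1-b
  echo "tunnel closed, reconnecting in 5s..." >&2
  sleep 5
done
```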

For use cases such as granting access to your private database servers to execute some administrative tasks, those limitations shouldn’t be a problem.

Provisioning with Terraform

To make it a little bit easier to get you started, I put together a small Terraform Module.

This Terraform module provisions all the resources mentioned above, from the VM instance running inlets PRO to the firewall rules and IAM permissions.

In the diagrams above, we took PostgreSQL as an example. With the following Terraform manifest, you can easily deploy the pictured architecture.

provider "google" {
  project = var.project
  region  = var.region
}

resource "google_compute_network" "inlets" {
  name                    = "inlets"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "inlets" {
  name          = "inlets"
  ip_cidr_range = var.ip_cidr_range
  region        = var.region
  network       = google_compute_network.inlets.id
}

module "postgresql" {
  source     = "../"
  name       = "postgresql"
  zone       = var.zone
  network    = google_compute_network.inlets.name
  subnetwork = google_compute_subnetwork.inlets.name
  ports      = [5432]
  members = [
    "user:[email protected]",
    "user:[email protected]",
  ]
}

output "postgresql" {
  value = module.postgresql.inlets_cmd
}

Just like inletsctl, the outcome of the terraform apply run displays the command for connecting the client to the inlets PRO server.

Apply complete! Resources: 11 added, 0 changed, 0 destroyed.

Outputs:

postgresql = inlets-pro client --url=wss://35.187.19.85:8123/connect --token=irgx7ExYVseSjvuFeln8qH0s3ExvT59D --license-file=$HOME/inlets-license --upstream=localhost --ports='5432'

The Terraform Module is available on GitHub, so have a look and give it a try.

Wrapping up

In this post, we looked at how inlets PRO and Cloud IAP can work together.

The inlets client creates a secure tunnel for the control-plane using TLS and an authentication token, with an additional firewall rule allowing traffic only from your private datacenter.

On the other side, the inlets tunnel’s data-plane is protected by Google Cloud Identity-Aware Proxy, meaning that only users with the proper IAM permissions can access the service.

With this combination, you can expose administrative services, such as SSH, RDP, or databases like PostgreSQL, in a secure way, ensuring only authorized users gain access to them.
