Private Networking on Hetzner Cloud with Tailscale

infrastructure kubernetes networking terraform tailscale hetzner

This is a follow-up to Why I Built My Own Kubernetes Cluster.
The cluster post was about why I built my own Kubernetes cluster. This one is about the private network that makes the rest possible. I wanted the cluster to live in a VPC-style network with zero public node IPs, reachable only from my own internal network.

Tailscale is the bridge here. I keep Raspberry Pi Zero exit nodes across Europe with friends for VPN use, so I already live in Tailscale. That is why it was the obvious choice for private access.

What I Wanted from the Network

I had three non-negotiables:

  1. Private-only cluster nodes: no public IPs anywhere except one gateway.
  2. Internal access: any device in my Tailnet should reach internal services directly.
  3. Separation of public vs. private apps: snapbyte.dev stays public, internal tools stay private.

That meant I needed a VPC-style network that could handle egress through a single NAT gateway, and ingress through an internal load balancer with a private IP address. The cluster could stay dark on the public internet, but still be reachable from my Tailnet.

Network Shape

Hetzner does not call this a VPC, but the architecture is the same idea: an isolated private network with a controlled egress point. I kept the design small and opinionated:

  • Network CIDR: 10.0.0.0/16
  • Subnet: 10.0.128.0/24
  • Single NAT gateway with a public IP for outbound traffic
  • Private-only Kubernetes nodes inside the subnet

In Terraform, the core resources are the network, subnet, firewall rules, NAT gateway, and Tailscale ACLs. The base network shape is small and explicit:

resource "hcloud_network" "vpc_network" {
  name     = "vpc-network"
  ip_range = "10.0.0.0/16"
}

resource "hcloud_network_subnet" "vpc_subnet" {
  network_id   = hcloud_network.vpc_network.id
  type         = "cloud"
  network_zone = "eu-central" # required by the provider; pick your zone
  ip_range     = "10.0.128.0/24"
}

The routing flow is simple and deterministic:

Cluster Node -> Subnet Gateway -> NAT Gateway -> Internet

The NAT gateway is the only box with a public IP. Everything else is private.

[Architecture diagram: the VPC-style network 10.0.0.0/16 contains the subnet 10.0.128.0/24, which holds the NAT gateway, a public LB, a private LB, and the Kubernetes nodes running public and internal apps. Public DNS points at the public LB; internal DNS plus a subnet route point at the private LB. The Tailnet (laptop, phone, RPi Zero exit nodes) reaches the subnet through the advertised subnet route, and egress to the internet goes through the NAT gateway.]
The NAT Gateway and Egress

The NAT gateway is a Debian 12 server with two jobs: perform MASQUERADE NAT for the subnet and advertise the subnet into my Tailnet.

The NAT gateway is just a Hetzner server with a public IP and a default route for the network:

resource "hcloud_server" "nat_gateway" {
  name        = "vpc-nat-gateway"
  image       = "debian-12"
  server_type = "cx23"
  user_data   = data.cloudinit_config.nat_gateway_cloud_init.rendered

  network {
    network_id = hcloud_network.vpc_network.id
    ip         = cidrhost(hcloud_network_subnet.vpc_subnet.ip_range, 1)
  }
}

resource "hcloud_network_route" "nat_gateway_route" {
  network_id  = hcloud_network.vpc_network.id
  destination = "0.0.0.0/0"
  gateway     = flatten(hcloud_server.nat_gateway.network)[0].ip
}

Cloud-init does the two important bits: NAT and Tailscale subnet routing.

#cloud-config
write_files:
  - path: /etc/network/interfaces
    append: true # don't clobber the distro-managed file
    content: |
      # ifupdown only runs post-up hooks attached to an iface stanza
      auto eth0
      iface eth0 inet dhcp
          post-up echo 1 > /proc/sys/net/ipv4/ip_forward
          post-up iptables -t nat -A POSTROUTING -s '${network_ipv4_cidr}' -o eth0 -j MASQUERADE

runcmd:
  - ['tailscale', 'up', '--auth-key=${tailscale_auth_key}', '--advertise-routes=${subnet_ip_range}']
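One gap in the excerpt: the tailscale up command assumes the tailscale package is already baked into the image. If it is not, a hedged sketch of an extra runcmd step, using Tailscale's official install script, could precede it:

runcmd:
  - ['sh', '-c', 'curl -fsSL https://tailscale.com/install.sh | sh']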

The important detail is that cluster nodes never need public IPs. They just route their outbound traffic to the NAT gateway, and the gateway handles the rest.
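To make "no public IPs" concrete, a private-only node in the hcloud provider can be sketched like this (the server name is a placeholder; the public_net block is what keeps the node dark):

resource "hcloud_server" "cluster_node" {
  name        = "k8s-node-1" # placeholder
  image       = "debian-12"
  server_type = "cx23"

  # never allocate a public address; egress uses the 0.0.0.0/0 route
  # that points at the NAT gateway
  public_net {
    ipv4_enabled = false
    ipv6_enabled = false
  }

  network {
    network_id = hcloud_network.vpc_network.id
  }
}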

The gateway sits behind a narrow firewall, so only the minimum network paths exist in and out.

Private Access with Tailscale

Tailscale is not just how I log in to machines. It's the entry point into the private network.

The network Terraform code manages Tailscale ACLs and subnet routes. When the gateway comes up, it advertises 10.0.128.0/24 to the Tailnet, and my laptop, phone, or any of those Raspberry Pi exit nodes can reach any node or service inside the subnet (Linux clients need to run tailscale up --accept-routes to use advertised routes).

resource "tailscale_acl" "vpc_nat_gateway_acl" {
  acl = jsonencode({
    tagOwners = { "tag:gateway" = ["autogroup:admin"] }
    autoApprovers = { routes = { "10.0.128.0/24" = ["tag:gateway"] } }
    acls = [{ action = "accept", src = ["autogroup:admin"], dst = ["*:*"] }]
  })
}

This gives me a private network that feels like a LAN, but works across continents.

Internal Subdomains on a Private Load Balancer

The Kubernetes cluster sits behind two ingress controllers: one public, one private. The private one uses a Hetzner load balancer with a private IP on the subnet. That means I can map internal DNS names to that private IP and access internal apps only from my Tailnet.

The load balancers are not created in this Terraform code. They appear when the two nginx ingress controllers are deployed, and the Hetzner Cloud Controller Manager (hccm) provisions them: one with a public IP and one with a private IP.
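For the private side, hccm annotations on the ingress controller's Service are what make the load balancer private. A hedged sketch, with placeholder names, location, and selector (the annotation keys come from hcloud-cloud-controller-manager):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-internal-controller # placeholder
  annotations:
    load-balancer.hetzner.cloud/location: "nbg1" # placeholder
    load-balancer.hetzner.cloud/use-private-ip: "true"
    load-balancer.hetzner.cloud/disable-public-network: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx # placeholder
  ports:
    - name: https
      port: 443
      targetPort: 443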

So I can run public projects like snapbyte.dev on the public ingress, while keeping ArgoCD, Grafana, or other internal tools locked to the private ingress. The internal subdomains resolve inside my Tailnet, not on the public internet.

Example DNS split:

grafana.int.noreturn.dev -> 10.0.128.x (private LB IP)

snapbyte.dev -> <public-lb-ip>

Minimal Exposure

The firewall rules are intentionally minimal:

  • Inbound: only WireGuard/Tailscale
  • Outbound: DNS, NTP, HTTPS, and STUN
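In the hcloud provider, that rule set can be sketched roughly as follows. The concrete ports are my assumptions about what "WireGuard/Tailscale" and "DNS, NTP, HTTPS, STUN" translate to: UDP 41641 is Tailscale's default WireGuard port and UDP 3478 is STUN.

resource "hcloud_firewall" "nat_gateway_firewall" {
  name = "nat-gateway-firewall"

  # Inbound: only Tailscale's default WireGuard port
  rule {
    direction  = "in"
    protocol   = "udp"
    port       = "41641"
    source_ips = ["0.0.0.0/0", "::/0"]
  }

  # Outbound: DNS
  rule {
    direction       = "out"
    protocol        = "udp"
    port            = "53"
    destination_ips = ["0.0.0.0/0", "::/0"]
  }

  # Outbound: NTP
  rule {
    direction       = "out"
    protocol        = "udp"
    port            = "123"
    destination_ips = ["0.0.0.0/0", "::/0"]
  }

  # Outbound: HTTPS
  rule {
    direction       = "out"
    protocol        = "tcp"
    port            = "443"
    destination_ips = ["0.0.0.0/0", "::/0"]
  }

  # Outbound: STUN (Tailscale NAT traversal)
  rule {
    direction       = "out"
    protocol        = "udp"
    port            = "3478"
    destination_ips = ["0.0.0.0/0", "::/0"]
  }
}

Once any outbound rule exists, Hetzner firewalls default-deny the rest of the outbound traffic, which is exactly the behavior described above.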

There is no SSH open to the internet. The NAT gateway is the only exposed node, and it is still only reachable through Tailscale.

What I Learned

  • A small /24 subnet is enough for a personal cluster when you plan ranges up front.
  • Subnet routes make the private network feel local without exposing anything publicly.

Next

The network is the base layer. Next up is the Kubernetes provisioning piece: Talos images, node pools, and how I keep public and private ingress separate without leaking the cluster to the internet.