
Combining NGINX, Tailscale, Terraform, and a Lightsail Proxy to Securely Expose My Private K8s Services — Without Breaking the Bank
Introduction
Running a private K3s cluster at home is fantastic for personal projects, but securely exposing those services on a custom domain can be challenging. While high-availability solutions like AWS Fargate behind an Application Load Balancer (ALB) are tempting, they’re often overkill for a small homelab in terms of cost and complexity. Tailscale’s built-in Serve and Funnel features make sharing services easy, but they lack a static IP—a necessity for root-level domain records.
My solution: Use a single AWS Lightsail instance (costing about $3.50/month) running Tailscale and NGINX. This setup provides a static IP, terminates TLS, and securely forwards traffic to my private K3s cluster via Tailscale. If you’re curious about how I use Flux to deploy Tailscale and other apps inside my cluster, check out this post. In this article, we’ll focus on the Terraform + Tailscale + Lightsail setup.
The Basic Flow
- Kubernetes on a Private Network
  - All nodes remain behind NAT, with no open ports to the outside world.
  - The Tailscale Operator is installed in the cluster (via GitOps/Flux), enabling workloads to communicate over Tailscale’s mesh VPN.
- Lightsail Public Proxy
  - A low-cost VM (approximately $3.50/month) with a static IP.
  - Runs Tailscale and NGINX, terminating TLS with Let’s Encrypt.
  - Proxies traffic to services in my homelab cluster via Tailscale.
- Tailscale
  - Securely connects the Lightsail instance and the K3s cluster.
  - Eliminates the need for port forwarding or a dedicated VPN.
  - Flexible ACLs allow fine-grained control over which services can be accessed.
Why Lightsail Instead of ECS + ALB?
- Cost: A single Lightsail instance costs a fraction of an ECS + ALB setup (just a few dollars a month).
- Simplicity: Managing one VM with NGINX is far easier than maintaining a full ECS cluster.
- Static IP: Lightsail includes a static IP by default, which is essential for root domain DNS.
If your traffic grows beyond a single instance’s capacity, you can always transition to a more robust setup (like ECS + ALB or a multi-region proxy). For my homelab, Lightsail is more than sufficient.
1. Configuring Tailscale for the Cluster
I use Flux and Kustomize to deploy the Tailscale DaemonSet (or Operator) into K3s. If you’re curious about the GitOps process, I cover it in this blog post. Here’s the gist:
- Tailscale Auth Key: Generated in the Tailscale admin console, securely stored by Terraform, and synced into Kubernetes using the External Secrets Operator.
- Annotations: Services I want to expose over Tailscale are annotated as follows:
metadata:
  annotations:
    tailscale.com/expose: "true"
This ensures Tailscale routes traffic to the service’s cluster IP.
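For context, here is what a full Service manifest with that annotation might look like. This is a hypothetical sketch — the service name, selector, and ports are placeholders, not values from my cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: personal-site            # placeholder name
  annotations:
    tailscale.com/expose: "true" # tells the Tailscale Operator to expose this Service on the tailnet
spec:
  selector:
    app: personal-site           # placeholder selector
  ports:
    - port: 80
      targetPort: 8080           # placeholder container port
```

Once the operator picks this up, the Service becomes reachable from other tailnet machines under a Tailscale machine name.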
Tailscale ACLs & Security
One of Tailscale’s standout features is its granular ACL control. For example:
{
"acls": [
// Allow proxy to talk to nodes tagged with k8s-public
{
"action": "accept",
"src": ["tag:proxy"],
"dst": ["tag:k8s-public:*"],
"srcPosture": ["posture:primaryStable"]
}
],
// Allow ops group to ssh to proxy & public nodes
"ssh": [
{
"action": "check",
"src": ["group:ops"],
"dst": ["tag:proxy", "tag:k8s-public"],
"users": ["ubuntu"]
}
]
}
- This allows the proxy (tag:proxy) to communicate with services in the cluster (tag:k8s-public) but nothing else.
- It also restricts SSH access to these machines to specific user groups.
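The auth key the proxy uses can also be managed by Terraform rather than created by hand. A minimal sketch using the official tailscale/tailscale provider — the resource name and expiry here are my own illustrative choices:

```hcl
resource "tailscale_tailnet_key" "proxy_key" {
  reusable      = true
  ephemeral     = false
  preauthorized = true
  tags          = ["tag:proxy"] # matches the ACL tag above
  expiry        = 7776000       # 90 days, in seconds
}
```

The key value is then available as `tailscale_tailnet_key.proxy_key.key` and can be fed into the instance bootstrap or a secrets store.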
2. Provisioning the Lightsail Proxy with Terraform
For this setup, I use Terraform to automate the creation of the Lightsail instance, allocate a static IP, and configure the necessary networking and software. Here’s the Terraform configuration:
Lightsail Instance
The aws_lightsail_instance resource defines the VM, including its name, availability zone, operating system (Ubuntu 22.04), and instance size (nano_2_0). The user_data script handles the initial setup, including installing Tailscale, NGINX, and Certbot.
resource "aws_lightsail_instance" "tailscale_proxy" {
name = var.ls_instance_name
availability_zone = var.ls_availability_zone # e.g., "us-east-1a"
blueprint_id = "ubuntu_22_04" # Ubuntu 22.04 LTS
bundle_id = "nano_2_0" # Nano instance (cheapest tier)
# User data script to bootstrap the instance
user_data = <<-EOF
#!/bin/bash
apt-get update -y
# Install Tailscale
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/jammy.noarmor.gpg \
| gpg --dearmor -o /usr/share/keyrings/tailscale-archive-keyring.gpg
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/jammy.tailscale-keyring.list \
| tee /etc/apt/sources.list.d/tailscale.list
apt-get update -y
apt-get install -y tailscale nginx python3 python3-venv libaugeas0
# Set up a virtual environment for Certbot
python3 -m venv /opt/certbot/
/opt/certbot/bin/pip install --upgrade pip
/opt/certbot/bin/pip install certbot certbot-nginx
ln -s /opt/certbot/bin/certbot /usr/bin/certbot
# Start Tailscale and authenticate
(tailscaled --tun=userspace-networking &)
sleep 5
tailscale up --authkey=${var.tailscale_auth_key} --ssh
# Configure NGINX
cat <<NGINXCONF >/etc/nginx/sites-available/default
# Root domain
server {
# Update with your domain
server_name example.com example.dev;
location / {
# Update with the Tailscale machine name of the exposed service
proxy_pass http://personal-site-personal-site; # Tailscale MagicDNS name for the personal site
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
# Astro requires this since we terminate SSL
proxy_set_header Origin http://\$host;
}
}
# Redirect www to root
server {
# Update with your domain
server_name www.example.com;
listen 80;
# Update with your domain
return 301 https://example.com\$request_uri;
}
NGINXCONF
systemctl restart nginx
# Note: SSL certificates must be configured manually via Certbot after setup.
# See the README for instructions.
EOF
}
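The resource above references a few input variables. A matching variables.tf might look like this — the variable names come from the resource block, while the descriptions and default are my own illustrative additions:

```hcl
variable "ls_instance_name" {
  type        = string
  description = "Name for the Lightsail instance"
  default     = "tailscale-proxy"
}

variable "ls_availability_zone" {
  type        = string
  description = "Lightsail availability zone, e.g. us-east-1a"
}

variable "tailscale_auth_key" {
  type        = string
  description = "Tailscale auth key consumed by the bootstrap script"
  sensitive   = true
}
```

Marking the auth key as sensitive keeps it out of plan output, though it still lands in state, so treat the state file accordingly.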
Static IP Allocation
A static IP is essential for root domain DNS records. The aws_lightsail_static_ip resource allocates the IP, and aws_lightsail_static_ip_attachment associates it with the instance.
resource "aws_lightsail_static_ip" "proxy_ip" {
name = "${var.ls_instance_name}-ip"
}
resource "aws_lightsail_static_ip_attachment" "proxy_ip_attach" {
static_ip_name = aws_lightsail_static_ip.proxy_ip.name
instance_name = aws_lightsail_instance.tailscale_proxy.name
}
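To point DNS at the proxy, it helps to surface the allocated address as a Terraform output. This is a small addition of my own, not part of the original configuration:

```hcl
output "proxy_static_ip" {
  description = "Public IP to use for the domain's A record"
  value       = aws_lightsail_static_ip.proxy_ip.ip_address
}
```

After `terraform apply`, the printed IP goes straight into the A records for the root domain and www.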
Open Inbound Ports
To allow HTTP and HTTPS traffic, the aws_lightsail_instance_public_ports resource opens ports 80 and 443.
resource "aws_lightsail_instance_public_ports" "proxy_ports" {
instance_name = aws_lightsail_instance.tailscale_proxy.name
port_info {
from_port = 80
to_port = 80
protocol = "tcp"
}
port_info {
from_port = 443
to_port = 443
protocol = "tcp"
}
}
3. NGINX + Certbot on the Proxy
Once Terraform provisions the instance, the user_data script handles most of the setup. However, you’ll need to SSH into the instance to complete the SSL certificate configuration with Certbot. Here’s how:
- SSH into the instance:
ssh ubuntu@<tailscale-ip>
- Update the domain references in /etc/nginx/sites-enabled/default. If you didn’t change them in the Terraform user_data, swap in your own domains now.
- Run Certbot:
sudo certbot --nginx -d yourdomain.com
- Restart NGINX:
sudo systemctl restart nginx
The NGINX configuration included in the user_data script ensures that:
- HTTP traffic is redirected to HTTPS.
- HTTPS traffic is proxied to the appropriate Tailscale IP for your services.
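After Certbot runs, the main server block typically ends up looking roughly like the sketch below. The `# managed by Certbot` lines are what the nginx plugin normally injects; the certificate paths follow Certbot’s default layout and your domain will differ:

```nginx
server {
    server_name example.com example.dev;

    location / {
        proxy_pass http://personal-site-personal-site;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Origin http://$host;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
}
```

Certbot also installs a systemd timer (or cron job) for renewal, so the certificates stay valid without further manual steps.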
Why This Setup Works
This Terraform configuration automates the creation of a lightweight, secure proxy for your homelab. By combining Lightsail’s affordability and simplicity with Tailscale’s secure mesh VPN, you get:
- A static IP for root domain hosting.
- Automated provisioning and configuration.
- A secure, private connection to your K3s cluster.
- Granular access control for your network.
If your traffic grows or you need high availability, you can transition to a more robust setup like ECS Fargate + ALB. For now, this Lightsail-based solution is cost-effective and easy to maintain.
Final Thoughts
This setup demonstrates how to use Terraform, Tailscale, and Lightsail to securely expose services from a private K3s cluster. It’s a great option for homelabs or small projects where cost and simplicity are key considerations.
Interested in more details?
- Check out my previous post on building GitOps for Kubernetes.
- The complete Terraform configuration is available on GitHub. Pull requests and questions are welcome!