Effective use of 1 IP on Proxmox
Introduction
I recently picked up a couple of low-end bare metal boxes from OVH’s Kimsufi range, and honestly, for the price, I’m pretty happy. That said, Kimsufi machines come with a bit of a catch: you only get one public IPv4 address — and there’s no option to buy more. To make things trickier, OVH also added a monthly fee for additional IPs, whereas they used to just charge a one-time setup fee. So if you’re working with a tight budget, stacking extra IPs might not be realistic — even on their SoYouStart or regular offerings.
I run pretty much everything on Proxmox because it makes snapshots, backups, and quick experimentation super easy. This post is just a brain-dump of what’s been working for me when trying to network multiple VMs with a single public IP.
Giving VMs Internet Access with NAT
Host Configuration
The idea here is to make your Proxmox host behave a bit like a home router — give VMs a private IP range and NAT their traffic out through the host’s public IP.
Here’s what to add to the bottom of your /etc/network/interfaces:
auto vmbr1
iface vmbr1 inet static
address 10.6.0.1
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '10.6.0.0/24' -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '10.6.0.0/24' -o vmbr0 -j MASQUERADE
This does a few things:
- Creates a new virtual bridge, vmbr1.
- Enables IP forwarding on the host.
- NATs all outbound traffic from the 10.6.0.0/24 subnet to look like it came from your host’s public IP.
From the outside, everything just looks like it’s coming from the Proxmox machine.
A couple notes:
- Make sure vmbr1 is a unique bridge name. If you already have one, use vmbr2 or whatever.
- Feel free to pick a different private subnet than 10.6.0.0/24 — just make sure to update the iptables rules.
- vmbr0 is assumed to be your public interface. Double-check that if you’re using something different.
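If you’d rather not reboot to pick up the new bridge, something along these lines should apply and sanity-check it (this assumes ifupdown2, which recent Proxmox releases ship with):

# Reload the interface config so vmbr1 and its post-up rules take effect
ifreload -a
# Forwarding should print "1"
cat /proc/sys/net/ipv4/ip_forward
# The MASQUERADE rule should show up here
iptables -t nat -S POSTROUTING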
VM/CT Configuration
When setting up a VM or container:
- Attach the NIC to vmbr1 in the Proxmox UI.
- Important: Uncheck the "Firewall" option on that NIC — Proxmox’s firewall could interfere since we’re doing our own iptables stuff.
Inside the VM:
- Set a static IP (e.g. 10.6.0.20).
- Set the gateway to 10.6.0.1.
Containers can be configured through the UI or by editing the config file directly.
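As a rough sketch, a container (say, a made-up ID 101) can be wired up from the host’s shell instead of the UI; the equivalent static setup inside a Debian-based VM is included as comments:

# Writes a net0 line into /etc/pve/lxc/101.conf (101 is just an example ID)
pct set 101 -net0 name=eth0,bridge=vmbr1,ip=10.6.0.20/24,gw=10.6.0.1,firewall=0

# Inside a Debian-based VM, roughly the same thing in /etc/network/interfaces
# (ens18 is the usual virtio NIC name, but check yours with "ip a"):
#   auto ens18
#   iface ens18 inet static
#       address 10.6.0.20
#       netmask 255.255.255.0
#       gateway 10.6.0.1
# Don't forget a DNS server, e.g. in /etc/resolv.conf.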
Forwarding Ports to VMs
If you want to expose a service (like a game server or web app) running in one of your VMs to the internet, you’ll need to forward specific ports.
Here’s an example that forwards a UDP WireGuard port and a TCP Minecraft port:
auto vmbr1
iface vmbr1 inet static
# rest of interface config...
# for a udp port
post-up iptables -t nat -A PREROUTING -i vmbr0 -p udp --dport 51820 -j DNAT --to 10.6.0.100:51820
post-down iptables -t nat -D PREROUTING -i vmbr0 -p udp --dport 51820 -j DNAT --to 10.6.0.100:51820
# for a tcp port
post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 25565 -j DNAT --to 10.6.0.65:25565
post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 25565 -j DNAT --to 10.6.0.65:25565
This forwards incoming traffic to the right internal IP and port. From the outside, all anyone sees is your public IP.
Note:
- Don’t try to forward the same port to multiple machines.
- Use iptables -t nat -L -v to see active rules.
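When testing a new forward, I find it helps to watch the rule counters while poking at the port from outside. Using the ports from the example above (substitute your actual public IP):

# On the Proxmox host: packet counters on the DNAT rules should tick up during a test
iptables -t nat -L PREROUTING -n -v

# From a machine outside your network: quick TCP reachability check for the Minecraft forward
nc -vz your.public.ip 25565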
Hosting Multiple Web Servers
Ideally, I’d like each web server to fetch and manage its own TLS certificates and handle termination locally — that way, the connection stays encrypted all the way to the destination, and each VM can renew and manage its own certs independently.
To make this work, I run an NGINX proxy VM at the "edge" of my network. It doesn’t terminate TLS itself — it just peeks at the SNI (the hostname in the TLS handshake) and forwards the raw TLS stream to the correct backend.
The trick is using NGINX’s stream module. On a VM with IP 10.6.0.50, install NGINX with stream support (usually available via your distro’s package manager), and in /etc/nginx/nginx.conf, add:
stream {
map $ssl_preread_server_name $tlsBackend {
www.example.com 10.6.0.80:443;
}
server {
listen 443;
proxy_pass $tlsBackend;
ssl_preread on;
}
}
This lets NGINX read the hostname from incoming TLS connections, look it up in a simple map, and proxy the encrypted traffic directly to the right VM. From the client’s point of view, it's a direct connection to the target server.
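Adding more sites is then just a matter of extending the map. For example, assuming a second (hypothetical) VM at 10.6.0.81 serving app.example.com, the map could grow to:

map $ssl_preread_server_name $tlsBackend {
    www.example.com 10.6.0.80:443;
    app.example.com 10.6.0.81:443;   # hypothetical second backend
    default         10.6.0.80:443;   # optional catch-all for unknown or missing SNI
}

After editing, nginx -t will catch syntax mistakes, and systemctl reload nginx applies the change without dropping existing connections.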
If you’re using HTTP-01 challenges (e.g., with Let’s Encrypt), you’ll also need to forward port 80. You can do that using the regular HTTP proxying in NGINX:
server {
listen 80;
server_name www.example.com;
location / {
proxy_pass http://10.6.0.80:80;
proxy_set_header Host www.example.com;
proxy_pass_request_headers on;
}
}
You do need to explicitly set the Host header, otherwise the backend won’t know what domain the request was for. (Passing the request headers through is NGINX’s default behaviour, but it doesn’t hurt to be explicit.)
Once that’s in place, you can forward TCP ports 80 and 443 to your proxy VM, and as long as your DNS points to your public IP, everything should just work. You’ll hit www.example.com in a browser, and the request will be routed to 10.6.0.80 — where the actual web server lives.
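Those DNAT rules follow the same pattern as earlier. Assuming the proxy VM keeps the 10.6.0.50 address from above, the extra lines under vmbr1 would look like:

post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 80 -j DNAT --to 10.6.0.50:80
post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 80 -j DNAT --to 10.6.0.50:80
post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 443 -j DNAT --to 10.6.0.50:443
post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 443 -j DNAT --to 10.6.0.50:443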
In theory, you could route all incoming traffic through this proxy VM and forward to different services, but I haven’t tested that under load. So far, this setup works fine for my modest needs.
A Quick Note on SRV Records
One little trick I’ve used is SRV records — especially handy with something like Minecraft where you can map different subdomains to different ports:
- server1.example.com -> port 25565
- server2.example.com -> port 25566
This Namecheap guide explains it better than I probably could.
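For reference, in plain zone-file form the two mappings above would look roughly like this (Minecraft clients look up _minecraft._tcp; priority 0 and weight 5 are just conventional values, and mc.example.com stands in for any hostname with an A record pointing at your public IP):

_minecraft._tcp.server1.example.com. 3600 IN SRV 0 5 25565 mc.example.com.
_minecraft._tcp.server2.example.com. 3600 IN SRV 0 5 25566 mc.example.com.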