
Setting up wireguard

The time has come to explore a new (well, not that new anymore, it was introduced in kernel 5.6) VPN technology that is more lightweight and faster on single-board computers (SBCs) than something like OpenVPN. I'm talking about Wireguard. The key takeaways for why Wireguard might be better than other VPN technologies (like OpenVPN or IPSec) are:

  • very small code, easier to audit
  • runs in kernel space, not userspace 
  • configuration can be done with standard linux tools (like ip, iproute, iptables), but there are some helper scripts that simplify setting up/starting up
  • doesn't support cipher negotiation (thus preventing downgrade attacks)
  • is quiet by default and doesn't reply to random packets from the Internet (difficult to scan for wireguard concentrators and try brute-force attacks)
  • has better performance than userspace encryption (to be tested)

All good network tutorials begin with a network diagram and end with a packet capture. So, let's say you want to host a VPN concentrator (or VPN server, whatever you want to call it) on an SBC. I'm using the "collector's edition" Odroid N1 running Ubuntu 22.04 and kernel 5.17.5.

This is not your regular copy-paste tutorial, but tries to explain some issues in more depth to help you troubleshoot when things go south.

Prerequisites

In short, you need a few things:

  • Wireguard support:
    • it can be a kernel module - see zcat /proc/config.gz | grep -i wireguard
    • the wireguard-dkms package can add support for older kernels
    • if you're unlucky and can't get the kernel module working, there's a userspace implementation that you can use instead: https://www.wireguard.com/xplatform/
  • Wireguard client/tools: installable for different OSes (Linux, Windows or mobile-based): https://www.wireguard.com/install/
  • Networks that don't filter UDP traffic - since Wireguard uses only UDP for transport 
  • Ability to do port forwarding on your home router (unless the SBC is exposed to the internet directly)
  • A public IP address (either static, or dynamic, with dynamic DNS). If your server is behind Carrier-Grade NAT (CGN) you may not be able to self-host completely.
  • A network diagram where you can assign IPs to your clients (not mandatory, but makes visualization easier). You'll need to choose a private network prefix that will be used for tunnel addressing (see RFC1918). In this guide we'll be using 172.20.20.0/24. The prefix doesn't matter, but it's best that it doesn't overlap with other prefixes in your LAN, your client's LANs, or various networks Docker likes to set up.
 The use case is this - you host a Wireguard Concentrator (or server) on a SBC of your choice in your home and allow roaming clients (such as your phone or laptop) or family members to connect to your home and share resources with them (e.g - allow them to print remotely to your printer, or access your NAS) over this VPN link. You can also route all your traffic through your home connection - either for privacy reasons (you're in a hostile/restricted LAN) or to bypass GeoIP restrictions.

Difficulty level

Setting up the server is not that difficult, but you'll need to generate and manage keys for the server and each client. This key management is really all there is to configuring Wireguard. The client/server keys are used to derive session encryption keys that get renewed periodically. If one key is incorrect, the peers can't communicate. Wireguard makes no effort to help exchange these keys, because it considers that outside of its scope.

Depending on the level of effort you want to put in here are your options:

  • The "I'm too young to network" level - Sign up for Tailscale and they will handle key exchanges between client-server, and as an added bonus can create full-mesh networks between all nodes in your network (all nodes are servers and can communicate with all other clients directly). No configuration needed, no port forwarding, etc. It's really a nice solution, but you depend on "the cloud" to authenticate to the network. (Here are some geeky details that they handle behind the scenes that I enjoyed reading: https://tailscale.com/blog/how-nat-traversal-works/)
  • The "Hey, Not Too Rough" level - you may want to avoid the command-line as much as possible. In this case you can run a self-hosted Wireguard web-configuration GUI that takes care of keys and client configuration. But beware - some GUIs overwrite manual configuration or are unable to import existing configuration! One such GUI is https://github.com/ngoduykhanh/wireguard-ui. Or if you prefer text-based GUIs, I heard pivpn supports wireguard as well.
  • The "Hurt me plenty" level - you can do all the configuration manually. Guess which one we'll be doing?

Ok, let's get started! 

The networking bit

Let's get some networking out of the way:

1. Turn on IP Forwarding on your Wireguard Concentrator: https://linuxconfig.org/how-to-turn-on-off-ip-forwarding-in-linux. This is needed if you want/need clients to communicate with each other via the server (a hub-and-spoke VPN), or with hosts in your LAN.
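Turning it on boils down to a single sysctl. A quick check and a persistent setting look something like this (the drop-in file name is just a suggestion):

```shell
# check the current state (1 = forwarding enabled, 0 = disabled)
cat /proc/sys/net/ipv4/ip_forward

# enable it immediately (as root):
#   sysctl -w net.ipv4.ip_forward=1
# and make it survive reboots with a sysctl drop-in:
#   echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-wireguard.conf
#   sysctl --system
```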

2. Make sure your firewall (presumably iptables) allows traffic to your Wireguard port. More on this later.

3. If you want resources in your LAN (like PC1) to be able to connect to VPN guests (e.g. Dad's PC), you need to announce the VPN route in your LAN. This means two things:

  • your DHCP server (in this case running on your router) should advertise the route 172.20.20.0/24 via the Wireguard Concentrator's LAN IP (192.168.1.5). All DHCP speaking hosts should receive the route and send traffic for VPN clients via the correct gateway. This can usually be done with something like this (for dnsmasq): 

dhcp-option=option:classless-static-route,0.0.0.0/0,192.168.1.1,172.20.20.0/24,192.168.1.5

 Beware of this issue, though: https://github.com/systemd/systemd/issues/7792. It's best to send the default route through this option too.

  • your default gateway router needs to have 172.20.20.0/24 configured as a static route (with 192.168.1.5 as a next-hop) because it doesn't learn it through DHCP. Why, you may ask, if all DHCP hosts learn it? Well, some DHCP implementations may not actually install this learned route (happened on my Android phone), and hosts with static IPs need to have it manually added too. In case a LAN host doesn't know about this route, it will forward traffic to the default gateway (your router), which will need to route it back to your wireguard concentrator and send back an ICMP Redirect message to the sender (https://ipwithease.com/icmp-redirects/) to optimize the packet flow.

PC1# ping 172.20.20.1
PING 172.20.20.1 (172.20.20.1) 56(84) bytes of data.
64 bytes from 172.20.20.1: icmp_seq=1 ttl=64 time=2.48 ms
From 192.168.1.1: icmp_seq=2 Redirect Host(New nexthop: 192.168.1.5)
64 bytes from 172.20.20.1: icmp_seq=2 ttl=64 time=1.58 ms

 (Naturally, this works after wireguard is up, but you get the idea)
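For a LAN host with a static IP, the missing route from the bullet above can be added by hand. On Linux it's a one-liner (the interface name enp1s0 is assumed from my diagram, yours may differ):

```shell
# route the VPN prefix via the Wireguard Concentrator's LAN IP (run as root)
ip route add 172.20.20.0/24 via 192.168.1.5 dev enp1s0

# verify it landed in the routing table
ip route show 172.20.20.0/24
```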

Setting up the server

Log into the future wireguard server and let's install some packages:

SBC# sudo apt update
SBC# sudo apt install wireguard

Let's say that this wireguard VPN will be called "Home", so we'll be naming the wireguard interface (and configuration) wgh instead of wg0 (as most tutorials show). In case later you'll be setting up other wireguard instances, it's easier to tell them apart this way (e.g. wgw could be a VPN set for Work).

Let's create a server private key, and let's keep it private:

SBC# wg genkey | sudo tee /etc/wireguard/private-wgh.key

SBC# sudo chmod go= /etc/wireguard/private-wgh.key

Now, let's derive a public key from this private one and save it to a file too (note: it's not mandatory to save the keys to files, since they'll appear directly in the configuration, but having them in files is handy when configuring clients):

SBC# sudo cat /etc/wireguard/private-wgh.key | wg pubkey | sudo tee /etc/wireguard/public-wgh.pub

The keys are just base64-encoded data, and look something like this:

SBC# cat /etc/wireguard/private-wgh.key

kHUhvsnSPvq2yDktordVLKV8/wUEJFDjVu27WgAJRlU= 

I will avoid pasting actual keys from now on, but I will point out which key you need to use by enclosing text like this: <private-wgh.key>, that you'll need to replace with the actual key. Key management is most of the hassle of setting up Wireguard, but once you configure one client, you get the hang of it.

You might be worried that since the key is shorter, it's less secure than something RSA uses (after all, an RSA 2048 key takes up about 360 characters in base64 encoding). Well, the difference is explained by the use of a different algorithm, Curve25519, which needs much shorter keys to give the same security level as RSA. How elliptic curves work is brilliantly explained in this Computerphile video: https://www.youtube.com/watch?v=NF1pwjL9-DE
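In fact, a Wireguard key is always 32 raw bytes, which base64-encodes to exactly 44 characters. You can sanity-check this yourself (here random bytes stand in for a key generated by wg genkey):

```shell
# 32 random bytes play the role of a key from `wg genkey`
key=$(head -c 32 /dev/urandom | base64)

printf %s "$key" | wc -c          # base64 form: 44 characters
echo "$key" | base64 -d | wc -c   # decoded: 32 bytes
```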

Ok, enough detour. Let's get back to configuring Wireguard. We'll need to create the server config, which basically looks like this:

SBC# cat /etc/wireguard/wgh.conf

[Interface]
Address = 172.20.20.1/24
SaveConfig = true
PostUp = iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE
PreDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 38271
PrivateKey = <private-wgh.key>

So, we're currently configuring the wgh interface (it derives its name from the config file name), that will have the 172.20.20.1/24 IP address. This sets up the tunnel network, and this traffic will be encrypted and carried over the Internet. 

The SaveConfig option automatically saves changes done with the wg command to this file, so that they are persistent. 

The PostUp and PreDown commands run when the tunnel is set up or torn down. In our case we're setting up NAT, so that traffic coming from the tunnel and destined to resources in our LAN gets NAT'ed to our LAN IP. This means that hosts in our LAN (such as the printer in the picture above) don't need to know how to talk to hosts from 172.20.20.0/24, because they'll only see traffic from the LAN address space (192.168.1.5 in this diagram). Depending on your firewall setup, you may need to tweak the rules.

The ListenPort is just the UDP port used to receive Wireguard traffic. The default port is 51820. I personally prefer not to use well-known ports when exposing services on the internet, so you're free to pick a random port number (obligatory XKCD).
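If you want to be slightly more scientific than the XKCD method when picking that random port, something like this works (just make sure the result isn't already in use):

```shell
# pick a random unprivileged UDP port for ListenPort
port=$(shuf -i 1024-65535 -n 1)
echo "$port"
```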

Now comes a tricky networking part. This port is open on your SBC, inside your LAN. But to accept traffic from the internet, your router needs to know to receive unsolicited traffic on this port and forward it to your SBC. This is called port forwarding; it is typically configured in your router's GUI and takes the following parameters (the wording may differ):

  • Source IP address - the address from the Internet that is expected to send traffic. In this case you're expecting traffic from anywhere, so use 0.0.0.0/0
  • Source/Service port - this is actually the destination port of the traffic, so for simplicity use the same port as ListenPort
  • Destination IP address - this is the LAN address of the host meant to receive this traffic. In my diagram it's 192.168.1.5
  • Destination port - this is the destination port where to send the traffic. In this case it's ListenPort.
  • Protocol - you only need to forward UDP.

I say it's tricky, because it depends on your router manufacturer, but Google is your friend: https://www.wikihow.com/Set-Up-Port-Forwarding-on-a-Router

With all this set up, it's time to turn on the server. You can do:

SBC# systemctl enable wg-quick@wgh.service

SBC# sudo service wg-quick@wgh start

SBC# sudo service wg-quick@wgh status

If there are no problems, you should see a wgh interface with IP 172.20.20.1. Great! Now, you need to add a bunch of clients.

Client configuration

Client configuration is surprisingly similar to the server configuration. Clients have their own private/public keys, and their own assigned tunnel IP address. They also can have extra configuration options, such as forwarding DNS traffic over the tunnel, or routing default gateway over the tunnel. The details in the Linux client configuration apply to other clients as well.

Central management for client configuration

Though it's not mandatory, I recommend keeping client configuration on the server and distributing the configs to the actual clients. This lets you quickly review who the clients are and restore client configuration in case it's needed. Otherwise you could run the commands and do the configuration directly on the clients, without involving the server. Let's assume we do it on the server (since we have all the tools there). We only need to create a directory to keep all the files in, and name them appropriately:

SBC# mkdir /etc/wireguard/client-config

SBC# cd /etc/wireguard/client-config

Linux clients

Linux clients are configured pretty much the same way as the server:

1. Install wireguard from the distribution package manager:

DadsPC# sudo apt-get update

DadsPC# sudo apt-get install wireguard

2. Create a client private/public key pair. We'll be doing this on the server side, though it can be done on the client as well:

SBC# cd /etc/wireguard/client-config

SBC# wg genkey | sudo tee /etc/wireguard/client-config/dads-pc-wgh.key

SBC# cat /etc/wireguard/client-config/dads-pc-wgh.key | wg pubkey | sudo tee /etc/wireguard/client-config/dads-pc-wgh.pub

3. Generate the client config (and transfer it to the client). It should exist on DadsPC at /etc/wireguard/wgh.conf

SBC# cat /etc/wireguard/client-config/dads-pc-wgh.conf

[Interface]
PrivateKey = <dads-pc-wgh.key>
Address = 172.20.20.4/24
PostUp = /usr/sbin/ifmetric wgh 1000

[Peer]
PublicKey = <public-wgh.pub>
AllowedIPs = 172.20.20.0/24, 192.168.1.0/24
Endpoint = <my-server-dns-name-or-ip.com>:38271
PersistentKeepalive = 60

DadsPC# scp sbc:/etc/wireguard/client-config/dads-pc-wgh.conf /etc/wireguard/wgh.conf

Let's analyze this a bit. On the client side you're creating an interface called wgh (derived from wgh.conf) that has the private key that you paste in from dads-pc-wgh.key that you generated in step 2. You also need to assign it a unique tunnel IP address from the server's pool (as Address). 

This interface can connect directly to one peer (the server) defined by Endpoint - the server's static public IP or DNS name (remember? it was a prerequisite) and will communicate on the port you defined on the server as ListenPort.

The PublicKey will be server's public key that we stored on the server, in public-wgh.pub. It needs to be pasted in.

The AllowedIPs directive needs a bit of careful attention. It represents a set of subnets that are accessible via the tunnel. These will be routes that get configured on the client and routed via wgh interface (notice there is no nexthop or gateway defined. How can this be? Well, the interface type is point-to-point, so it simply forwards traffic to the other side without needing its IP address). 

In our case, for DadsPC we want him to have access to the printer in our LAN, and I also want to have access from my LAN PC1 to DadsPC over the tunnel. This is why I gave him access to my whole 192.168.1.0/24 subnet. Also, Dad's PC is accessible by all wireguard clients (has a route for 172.20.20.0/24). If you want to restrict access for specific clients, use individual hosts (e.g. 172.20.20.1/32, 172.20.20.3/32). 

For some clients (e.g. mobile phones) you might want to add 0.0.0.0/0 as AllowedIPs, to force default gateway through the tunnel. Avoid doing this unless needed, because it's inefficient in terms of carrying all the traffic through the server.

Note that client configuration alone won't protect your LAN from unwanted access. Clients are free to change these routes as they please and access resources you don't want them to access. The solution? Remember that all VPN traffic (even between 2 wireguard clients) flows through your concentrator. Here you're in control and can use iptables in the FORWARD chain to control what each client is allowed to access. But a rigorous security policy is outside of the scope for now.
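As a rough sketch of what such server-side filtering could look like (the printer's address 192.168.1.30 is made up for the example, and these rules would need to sit before any broad ACCEPT rules):

```shell
# allow Dad's PC to reach only the printer (IPP, TCP 631) in the LAN...
iptables -A FORWARD -i wgh -s 172.20.20.4 -d 192.168.1.30 -p tcp --dport 631 -j ACCEPT

# ...and drop everything else he sends towards the LAN
iptables -A FORWARD -i wgh -s 172.20.20.4 -d 192.168.1.0/24 -j DROP
```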

The PersistentKeepalive directive makes the client send a small packet periodically, so that NAT devices and firewalls along the way keep the connection open even when real traffic isn't flowing. It is needed when you want to "piggyback" and connect from the server to the client. The client can punch through firewalls and NATs on its own even without this, but the server can't do it on the way back. Use this option on clients that you want to access even when they're not sending traffic. On mobile devices you may want to avoid it, because it could slowly drain the battery and prevent sleep.

The PostUp command runs on the client and is not necessary - I've added it as an example. Actually, I use it to change the wireguard interface (and routes) metric on my laptop. By default the wireguard routes have a low metric (0) so that traffic flows through the VPN if there are two identical routes (e.g. a default gateway through your LAN and a default gateway through your VPN). But for a roaming laptop (like the one in the diagram), that has wireguard active all the time, this creates a problem when it's in the home LAN. Because it will see 192.168.1.0/24 via wifi with metric 600, and 192.168.1.0/24 via wgh with metric 0, and will prefer to send traffic via the tunnel. In my case I don't want this, so I force the tunnel to have a higher metric. But again, this is a corner case, put here for reference.

Great! The client configuration is done. To apply it, and have the tunnel up all the time, run:

DadsPC# systemctl enable wg-quick@wgh

DadsPC# service wg-quick@wgh restart

DadsPC# service wg-quick@wgh status

DadsPC# ping 172.20.20.1

Remember that the service doesn't watch the configuration file, so you'll need to restart it to apply any changes you make.

Is it working? No! We're done with the client configuration, but the server needs to be aware of this client too!

Back on the server side you need to add a Peer section in /etc/wireguard/wgh.conf that looks like this:

SBC# cat /etc/wireguard/wgh.conf

... Interface output omitted ...

[Peer]
PublicKey = <dads-pc-wgh.pub>
AllowedIPs = 172.20.20.4/32
# Name = Dad's PC

The server-side configuration is simpler. You need to add the peer's public key and allow just its IP in AllowedIPs (this means they can't change it on their end). If you want to route their LAN devices too (and create a LAN-to-LAN tunnel), you can also add their LAN subnet, but you'll need to handle routing yourself.

It may be good practice to save the name of the peer as a comment. Sadly, wireguard doesn't offer easy naming of peers, but this project helps by naming the peers and enhancing wg's output.

Now, you can reload the server configuration and bidirectional communication should start flowing.

SBC# service wg-quick@wgh restart

DadsPC# ping -c 2 172.20.20.1

DadsPC# ping -c 2 192.168.1.5
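As an alternative to editing the file and restarting, you can add the peer to the running interface with the wg tool; since we set SaveConfig = true, the change gets written back to wgh.conf when the service stops (a sketch, run as root on the server):

```shell
# add the peer live, without touching wgh.conf
wg set wgh peer "$(cat /etc/wireguard/client-config/dads-pc-wgh.pub)" allowed-ips 172.20.20.4/32

# confirm the peer shows up
wg show wgh
```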

Windows clients

The Windows client is easy to install from https://www.wireguard.com/install/. Once you start it you can import a configuration file (or write it directly). If you took my advice and prepared the client-wgh.conf file on the server, simply transfer it to the Windows host and add it to the client. The client will allow you to activate it and should show the same details as the linux client. Don't forget to add the server-side Peer as well.

Android/iPhone

Mobile devices are equally easy. You can feed the Wireguard client a configuration file, or, if they have a camera, a QR code. The QR code contains the configuration file in an easy to transfer format.

So, for Android, install Wireguard from the Play Store (or from APK if you wish) and let's generate a QR code for scanning (on the server).

SBC# sudo apt-get install qrencode 

Generate the configuration normally, as for any client (again, on the server side). When you're done, convert the configuration into a pretty ASCII art picture (well, technically it's ANSI art), like this:

SBC# qrencode -t ansiutf8 -r /etc/wireguard/client-config/my-phone-wgh.conf

Next, use the phone's Wireguard app to scan this code and the tunnel will be added to your configuration. Neat! Don't forget to add the peer on the server side as well.

Performance

Let's run some tests and compare performance with OpenVPN and direct traffic between a client and the server in the same wired LAN. The goal is to remove the network as a bottleneck and see how much traffic we can push through the tunnel. I'll be using iperf3 for the tests. As I said, the server is an Odroid N1 (RK3399), the client will be a linux laptop.

+------------------------+-------+----------+---------+----------------------------------------------------------------------+
|     Transfer type      | Delay | Download | Upload  |                              CPU Usage                               |
+------------------------+-------+----------+---------+----------------------------------------------------------------------+
| LAN, without tunneling | 1ms   | 930Mbps  | 937Mbps | 100% 1 little core, for network traffic                              |
| OpenVPN                | 1.6ms | 264Mbps  | 265Mbps | 50% 1 little core, for network traffic + 100% 1 big core for openvpn |
| Wireguard              | 1.8ms | 695Mbps  | 788Mbps | 100% 1 little core, 50% all other cores                              |
+------------------------+-------+----------+---------+----------------------------------------------------------------------+

So, about a 3x increase in throughput! That's worth switching technologies.
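For reference, the numbers came from runs along these lines (iperf3 server on the SBC, client on the laptop; -R reverses the direction so both upload and download get measured):

```shell
# on the SBC (server side)
iperf3 -s

# on the laptop, through the tunnel: upload, then download
iperf3 -c 172.20.20.1
iperf3 -c 172.20.20.1 -R
```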

Problems

Inevitably, shit happens. What can you do? Start by reading the logs:

SBC# service wg-quick@wgh status

Where did that packet go?

A general rule of thumb when troubleshooting network issues is to try and validate if packets flow where you expect them to flow. To do this, you generally start a test traffic (like ping) and do packet captures along the route, ideally in all nodes where you can, so that you can validate that traffic flows out the correct interface and arrives at the next node in the path. 

Dad's PC can't access the Wireguard server in my test. A quick packet capture on Dad's PC, filtered by ListenPort, shows us that traffic is leaving towards the server:

DadsPC$ sudo tcpdump -n -i any udp and port 38271
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
18:47:09.519598 wlp1s0 Out IP 192.168.100.10.37090 > 86.99.120.226.38271: UDP, length 96
18:47:09.645068 wlp1s0 Out IP 192.168.100.10.37090 > 86.99.120.226.38271: UDP, length 128

The same capture on the destination SBC, shows us that traffic isn't reaching the SBC's eth0 interface. 

SBC# tcpdump -n -i eth0 udp and port 38271
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes

^C
0 packets captured
3 packets received by filter
0 packets dropped by kernel


This means that either our router or the Internet at large is eating the packets. When you find a node where traffic enters one interface but doesn't leave the expected interface, you need to dig deeper and see if it's a routing problem or a firewall issue.

ICMP Ping goes into the tunnel, nothing comes out on the peer

Consider this: you're pinging a wireguard client from within your LAN (let's say from PC1 - 192.168.1.10 you want to ping 172.20.20.4) and you get no reply. Let's see why:

  • tcpdump on PC1 shows traffic is going to your VPN client via the correct gateway which is the VPN concentrator 192.168.1.5:

    10:14:36.674436 enp1s0 Out IP 192.168.1.10 > 172.20.20.4: ICMP echo request, id 13, seq 1497, length 64

  • the VPN concentrator receives the packet on eth0 and forwards it via wgh:

    10:22:11.325392 eth0  In  IP 192.168.1.10 > 172.20.20.4: ICMP echo request, id 13, seq 1941, length 64
    10:22:11.325538 wgh   Out IP 192.168.1.10 > 172.20.20.4: ICMP echo request, id 13, seq 1941, length 64

  • the destination receives nothing, however:

    <crickets>

So, what's going on? Why isn't the destination picking up the packet (assuming that ping from the VPN concentrator itself works)?

Well, in this case the problem is that the destination doesn't have 192.168.1.0/24 in its AllowedIPs directive. Wireguard's cryptokey routing drops incoming packets whose source address isn't listed in AllowedIPs, and it does so before they ever show up on the wgh interface. Once I added the prefix to AllowedIPs on the client and restarted the wireguard service, traffic started flowing! Yay!

Doctor, iptables is eating my packets!

Traffic arrives at the node, but doesn't make it out (or doesn't reach the application level). The obvious culprit: iptables. If you're unlucky and have a ton of iptables rules, it may be difficult to see where the problem is. So, let's run a TRACE! It's like tcpdump, but for iptables!

Add a trace in the raw table's PREROUTING chain that matches your test traffic (e.g. ICMP):

SBC# iptables -t raw -A PREROUTING -p icmp --source 192.168.1.9/32 -j TRACE

In this case I want to see what happens to ICMP traffic from a LAN PC that goes through the tunnel.

Start a ping from the monitored source and... where should we see trace messages? Internet wisdom says to look in /var/log/kern.log or /var/log/syslog, but in this case Internet wisdom is wrong. If you're running a modern distro, iptables was replaced with iptables-nft, and tracing is done by running:

SBC# xtables-monitor --trace
PACKET: 2 ec26630e IN=eth0 MACSRC=0:1e:6:45:9:5a MACDST=0:1e:6:ae:d4:2e MACPROTO=0800 SRC=192.168.1.9 DST=172.20.20.2 LEN=84 TOS=0x0 TTL=64 ID=14614 DF
 TRACE: 2 ec26630e raw:PREROUTING:rule:0x2:CONTINUE  -4 -t raw -A PREROUTING -s 192.168.1.9/32 -p icmp -j TRACE
 TRACE: 2 ec26630e raw:PREROUTING:return:
 TRACE: 2 ec26630e raw:PREROUTING:policy:ACCEPT
 TRACE: 2 ec26630e nat:PREROUTING:return:
 TRACE: 2 ec26630e nat:PREROUTING:policy:ACCEPT
PACKET: 2 ec26630e IN=eth0 OUT=wgh MACSRC=0:1e:6:45:9:5a MACDST=0:1e:6:ae:d4:2e MACPROTO=0800 SRC=192.168.1.9 DST=172.20.20.2 LEN=84 TOS=0x0 TTL=63 ID=14614 DF
 TRACE: 2 ec26630e filter:FORWARD:rule:0x72:JUMP:DOCKER-USER  -4 -t filter -A FORWARD -j DOCKER-USER
 TRACE: 2 ec26630e filter:DOCKER-USER:return:
 TRACE: 2 ec26630e filter:FORWARD:rule:0x6f:JUMP:DOCKER-ISOLATION-STAGE-1  -4 -t filter -A FORWARD -j DOCKER-ISOLATION-STAGE-1
 TRACE: 2 ec26630e filter:DOCKER-ISOLATION-STAGE-1:return:
 TRACE: 2 ec26630e filter:FORWARD:return:
 TRACE: 2 ec26630e filter:FORWARD:policy:DROP 

This shows you how the packet is tested against various rules and chains - and in this case our FORWARD chain drops it, because there is no explicit ACCEPT rule. So, we need to explicitly allow traffic in and out of the wgh interface to be forwarded by our SBC:

SBC# iptables -A FORWARD -o wgh -j ACCEPT

SBC# iptables -A FORWARD -i wgh -j ACCEPT

Once testing shows that it works, it's best to add the iptables commands to the /etc/wireguard/wgh.conf file in PostUp, PreDown:

PostUp = iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE
PostUp = iptables -A FORWARD -i wgh -j ACCEPT
PostUp = iptables -A FORWARD -o wgh -j ACCEPT
PreDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE || true
PreDown = iptables -D FORWARD -i wgh -j ACCEPT || true
PreDown = iptables -D FORWARD -o wgh -j ACCEPT || true

The purpose of the || true statement in the PreDown commands is to allow the wireguard service to stop even if for some reason the iptables rules are not there.

Note that the fact that we got to the FORWARD chain with our trace demonstrates that IP forwarding is enabled on this host (https://linuxconfig.org/how-to-turn-on-off-ip-forwarding-in-linux). Otherwise, the decision to drop the packet would have been made before it reached the FORWARD chain, and we wouldn't have seen it there.

Idle clients are not accessible from the server

This happens when clients are behind a NAT or Firewall, because they track state and if there is silence for a while (60-120s), the state gets deleted and a new connection needs to be made. The solution in this case is for the client to use PersistentKeepalive to force send traffic to the server every x seconds. See this for details: https://www.wireguard.com/quickstart/#nat-and-firewall-traversal-persistence

The service won't stop or start?!

Sometimes a configuration issue might cause the service to fail to start or stop, and service wg-quick@wgh status might not show anything useful. In this case, try running the actual commands and see what the problem is:

SBC# /usr/bin/wg-quick down wgh

SBC# /usr/bin/wg-quick up wgh

In case you're still stuck, there are other great troubleshooting ideas here: https://www.tangramvision.com/blog/what-they-dont-tell-you-about-setting-up-a-wireguard-vpn

Oh, no! A miscreant has stolen my phone!

In case you want to cut off access to the Wireguard server and you don't have access to the client (e.g. stolen, broken, out of reach) you can simply remove (or comment out) the respective [Peer] entry in your Wireguard server config. It's not as complicated as OpenVPN, where you need to permanently revoke a certificate. Later you can uncomment the peer and it will be able to happily reconnect.

Remember to restart/reload the Wireguard server for the changes to apply.
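A full restart briefly tears the tunnel down for everyone. If you only changed peer entries, you can instead sync the running interface from the config file (wg syncconf and wg-quick strip are part of the standard tools; strip removes the wg-quick-only options that wg itself doesn't understand):

```shell
# apply peer changes from wgh.conf without tearing the interface down (run as root)
wg syncconf wgh <(wg-quick strip wgh)
```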

A look under the hood

I promised that a good tutorial ends with a packet capture. How about we see the actual decrypted traffic as it goes over the wire (we should be able to, since we have the keys)?

Huh... so it won't be a good tutorial after all... :( After digging through the code and the wireguard mailing list, I wasn't able to compile the helper tools that would let me copy the encryption keys from kernel space and decode the packets. The gritty details (which have since stopped working) can be found here: https://blog.salrashid.dev/articles/2022/wireguard_wireshark/

One more thing I need to look into is adding IPv6 support for the tunnel. This helps when routing a client's whole traffic, so that it has access to dual-stack. The complication arises from my ISP's dynamically prefix-delegated /48. If I were to allocate from it, I'd need to reconfigure both server and clients whenever my router reboots. Plan B is too horrible to mention, but I'll do it anyway: use NAT66 to translate a unique local address (ULA) prefix to a public one - details here: https://blogs.infoblox.com/ipv6-coe/you-thought-there-was-no-nat-for-ipv6-but-nat-still-exists/. But that's a fight for another day!


 

