Wireguard reminds me of policy-based IPsec
March 25, 2025
I recently saw a post on reddit in r/Wireguard about how 'AllowedIPs' works in Wireguard, asking whether it was possible to build a dynamic, routed network with Wireguard tunnels.
The question interested me because I'd never really thought about it properly. I tend to use Wireguard to VPN into my home or some VPS systems I have - all running Linux - from my phone or laptop. As part of this I use the "AllowedIPs" configuration option to add the routes I need on either end. But the "link layer" - such as it is - was somewhat confusing to me. As was why you'd call the toggle for what routes are added "AllowedIPs".
I had the same questions in my head as the poster of the reddit thread. Can I just set up routes to the far side of a Wireguard tunnel and will they work? Is there any need for a route to point to an IP address (as with IP over Ethernet, where the IP is used via ARP/ND to find the MAC address for the frame), or can it just point directly at the interface (as with IP on a serial line w/HDLC or similar)? Can we run BGP or other routing protocols over a Wireguard link to exchange routes? And will they be effective if inserted in the tables on either side?
I decided to lab it up to test.
Lab Setup
I built a very basic lab like this, with the goal being to set up a tunnel from S1 to S2:
+----------------+ +----------------------------+ +----------------+
| S1 | | R1 | | S2 |
| | | | | |
| eth1 |----------| eth1 eth2 |----------| eth1 |
| 10.1.0.2/24 | | 10.1.0.1/24 10.2.0.1/24 | | 10.2.0.2/24 |
+----------------+ +----------------------------+ +----------------+
<--------------------------------------------------------->
Wireguard Tunnel
All systems were just Linux containers based on Debian, with veth pairs connecting them. R1 had IPs on both interfaces and routing enabled in sysctl and nftables so it would act as a router. I added a route to 10.2.0.0/24 on S1 pointing to R1 (10.1.0.1) and a route to 10.1.0.0/24 on S2 pointing to R1 (10.2.0.1), after which I could ping from S1 to S2.
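For reference, the underlay plumbing amounted to something like this (a sketch; the exact nftables ruleset is an assumption - any policy that permits forwarding will do):

# R1: enable forwarding and allow it through the firewall
sysctl -w net.ipv4.ip_forward=1
nft add table inet filter
nft add chain inet filter forward '{ type filter hook forward priority 0; policy accept; }'

# S1 / S2: static routes for the far side via R1
ip route add 10.2.0.0/24 via 10.1.0.1    # on S1
ip route add 10.1.0.0/24 via 10.2.0.1    # on S2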
With this in place I configured a basic Wireguard tunnel for each side, configs as follows:
root@s1:~# cat /etc/wireguard/wg0.conf
[Interface]
PrivateKey = 0G/YfiSxtaUpSGQ5GMLpcNQGwQIMDSPkPN4tMVjwjHI=
Address = 100.64.187.1/32
[Peer]
PublicKey = HRyEEQXB02irTFUQwtSoe0zqoNftss8IifFJFKZ46xE=
Endpoint = 10.2.0.2:51820
AllowedIPs = 198.18.0.1/32
root@s2:~# cat /etc/wireguard/wg0.conf
[Interface]
Address = 198.18.0.1/32
PrivateKey = uFQKcMjkBDeBKJFlqy7z9mi2C1fjBuzTwMqfaalbyW0=
SaveConfig = false
ListenPort = 51820
[Peer]
PublicKey = G3D758Mb0jyo9H9MnTeDO6Hnh/mqHju3R3Cols4P+jo=
AllowedIPs = 100.64.187.1/32
The first thing I did slightly differently from my normal setup was to use a /32 IP on each side, from two totally different subnets. For the purposes of this test I just used IPv4. As these containers aren't running a full systemd setup I couldn't simply enable the wg-quick@wg0 service as usual, so I looked at the unit file and could see it would start and stop the service like this:
ExecStart=/usr/bin/wg-quick up %i
ExecStop=/usr/bin/wg-quick down %i
Starting the service
I started the wireguard service on my test containers as follows:
bash -x /usr/bin/wg-quick up wg0
Running with 'bash -x' showed me everything the script was doing. Effectively what this did was:
- Read the configuration file from /etc/wireguard/wg0.conf
- Built up a string variable, $WG_CONFIG, with the various options from the config file (including 'AllowedIPs')
- Added the 'wg0' interface with 'ip link add wg0 type wireguard'
- Configured the wg0 tunnel by running 'wg setconf wg0 ' with a reference to the $WG_CONFIG var (using process substitution)
- Added the address from the 'Address' line of the config to the wg0 interface with 'ip addr add'
- Read the MTU of the interface holding my system's default route and set the wg0 MTU to the same value less 80
- Added a route to each of the 'AllowedIPs' from the config, to 'dev wg0' rather than any IP
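Reduced to commands, the sequence on S1 was roughly the following (a sketch reconstructed from the bash -x trace; the 1420 assumes a 1500-byte MTU on the default-route interface):

ip link add wg0 type wireguard
wg setconf wg0 <(echo "$WG_CONFIG")        # the parsed config, minus the wg-quick-only options
ip -4 address add 100.64.187.1/32 dev wg0
ip link set mtu 1420 up dev wg0            # default-route MTU less 80
ip -4 route add 198.18.0.1/32 dev wg0      # one route per AllowedIPs entry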
So far so good: I can ping both sides fine. And it's working like a serial link - we don't need to care about having IPs on the same subnet or any link-layer addressing:
root@s1:~# ping 198.18.0.1
PING 198.18.0.1 (198.18.0.1) 56(84) bytes of data.
64 bytes from 198.18.0.1: icmp_seq=1 ttl=64 time=0.845 ms
root@s2:~# ping 100.64.187.1
PING 100.64.187.1 (100.64.187.1) 56(84) bytes of data.
64 bytes from 100.64.187.1: icmp_seq=1 ttl=64 time=0.248 ms
On R1 we just see the encrypted, tunnelled packets:
root@officepc:~# tcpdump -c 2 -i eth2 -l -p -nn
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), snapshot length 262144 bytes
22:20:55.557954 IP 10.2.0.2.51820 > 10.1.0.2.46723: UDP, length 128
22:20:55.558106 IP 10.1.0.2.46723 > 10.2.0.2.51820: UDP, length 128
Can we just route over this interface?
So what happens if we add more routes over one of the wg0 interfaces? Let's try this:
root@s1:~# ip route add 8.8.8.8/32 via 198.18.0.1
root@s1:~# ip route get fibmatch 8.8.8.8
8.8.8.8 via 198.18.0.1 dev wg0
We have an issue pinging though, getting a strange message back from the local system:
root@s1:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 100.64.187.1 icmp_seq=1 Destination Host Unreachable
ping: sendmsg: Required key not available
From 100.64.187.1 icmp_seq=2 Destination Host Unreachable
ping: sendmsg: Required key not available
The 'Required key not available' is interesting. Let's take a look at another of the commands the wg-quick script ran when it executed:
root@s1:~# wg show wg0 allowed-ips
HRyEEQXB02irTFUQwtSoe0zqoNftss8IifFJFKZ46xE= 198.18.0.1/32
Here we are reading the runtime configuration for wg0, and we see that the destination 198.18.0.1/32 is the only one listed. It seems that if we try to route packets for any other destination over the wg0 interface it fails, as Wireguard can't find the key associated with that destination.
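As a quick sanity check of that theory, we could extend the peer's allowed IPs at runtime and the sendmsg error should disappear (a sketch; note that 'wg set ... allowed-ips' replaces the peer's whole list, and the far side would still need the address plus a matching entry of its own to reply):

wg set wg0 peer HRyEEQXB02irTFUQwtSoe0zqoNftss8IifFJFKZ46xE= allowed-ips 198.18.0.1/32,8.8.8.8/32
wg show wg0 allowed-ips    # 8.8.8.8/32 now maps to the peer's key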
Where have I seen this before?
Without revisiting the IPsec RFCs this immediately reminded me of policy-based IPsec, in which we create potentially multiple Security Associations on a device, and define IP subnets as "interesting traffic" to associate with each SA, controlling which is used when sending traffic to those destinations.
It also reminded me of how in more modern times we tend to use route-based IPsec tunnels ('VTI' in Cisco speak), and how they work under the hood. In that scenario the IPsec SA is set up with 0.0.0.0/0 (or ::/0) as the interesting traffic selector for the SA, but the system associates each SA with a virtual interface. That allows us to control what traffic goes where by adding entries to the routing table that point to the VTI interface. In other words we initialise the tunnel such that it will accept traffic for ANYWHERE, and then use another mechanism (routing) to control what traffic is actually sent over it. To most of us network engineers this makes much more sense.
I guess this is what we need to do for Wireguard too.
Cryptokey Routing
Let me stop for a moment here to discuss this concept as explained by Wireguard. When I first started this lab I was just experimenting, and didn't really look at the docs. But of course they explain quite clearly - on the www.wireguard.com front page no less - how it works.
Wireguard uses something called "Cryptokey Routing". The docs say this "works by associating public keys with a list of tunnel IP addresses that are allowed inside the tunnel". It goes on to describe that "when the network interface wants to send a packet to a peer (a client), it looks at that packet's destination IP and compares it to each peer's list of allowed IPs to see which peer to send it to." The key thing here is network interface. A Wireguard interface can effectively be multipoint, with a single interface connecting many peers. You can route any networks towards a wg interface at the kernel level, and when they hit that interface Wireguard looks at the Cryptokey Routing Table for that interface and sends the traffic to the peer configured for those destinations. When we ran "wg show wg0 allowed-ips" above we were viewing the Cryptokey Routing Table for wg0.
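To make the multipoint behaviour concrete, here's a hypothetical hub carrying three clients on a single interface (the keys are elided and the prefixes invented, purely for illustration):

# one wg0, three road-warrior peers, each 'owning' one tunnel /32
wg set wg0 peer <peer1-pubkey> allowed-ips 10.200.0.2/32
wg set wg0 peer <peer2-pubkey> allowed-ips 10.200.0.3/32
wg set wg0 peer <peer3-pubkey> allowed-ips 10.200.0.4/32
# outbound: a packet routed into wg0 for 10.200.0.3 is encrypted to peer2
# inbound: a decrypted packet is dropped unless its source IP matches the
#          allowed-ips of the peer it arrived from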
That scenario probably works well for "dial-up" VPN type access, with multiple clients who connect to a single VPN concentrator. But for the kind of point-to-point networks we might build between routers across the internet it's not really ideal. What we effectively want is something more akin to GRE or VTI tunnels, with routing protocols controlling what address ranges go over each rather than any statically configured ranges.
In brief the way we achieve this is:
- Use a separate wg interface for every peer
- Set the 'AllowedIPs' to 0.0.0.0/0, ::/0 for every peer
That means everything routed to a given interface will be sent to the peer associated with it. And we can set up routes in the kernel pointing to those interfaces (or IPs known over them) to control what is sent where.
Back to the lab
Disable route creation when tunnels come up
So it seems we need to initialise the tunnel with 'AllowedIPs' set to 0.0.0.0/0 to ensure all IPv4 traffic is allowed over the tunnel, but without wg-quick adding a route for this default range via the wg0 interface. This can be achieved simply by adding the following under the "Interface" section of the wg0.conf file:
Table = off
Updated wireguard configs
My updated configs were as follows:
root@s1:~# cat /etc/wireguard/wg0.conf
[Interface]
PrivateKey = 0G/YfiSxtaUpSGQ5GMLpcNQGwQIMDSPkPN4tMVjwjHI=
Address = 100.64.187.1/32
Table = off
[Peer]
PublicKey = HRyEEQXB02irTFUQwtSoe0zqoNftss8IifFJFKZ46xE=
Endpoint = 10.2.0.2:51820
AllowedIPs = 0.0.0.0/0
root@s2:/# cat /etc/wireguard/wg0.conf
[Interface]
Address = 198.18.0.1/32
PrivateKey = uFQKcMjkBDeBKJFlqy7z9mi2C1fjBuzTwMqfaalbyW0=
SaveConfig = false
ListenPort = 51820
Table = off
[Peer]
PublicKey = G3D758Mb0jyo9H9MnTeDO6Hnh/mqHju3R3Cols4P+jo=
AllowedIPs = 0.0.0.0/0
Enable the tunnels with new configuration
When I bring the tunnel up I see this - notice that no routes are added:
root@s1:~# /usr/bin/wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 100.64.187.1/32 dev wg0
[#] ip link set mtu 1420 up dev wg0
Which is all well and good; however, at the kernel routing level nothing will now be sent over the tunnel. We need to manually add a route for the far side in each container:
root@s1:~# ip route add 198.18.0.1/32 dev wg0
root@s2:~# ip route add 100.64.187.1/32 dev wg0
After which we can ping:
root@s1:~# ping 198.18.0.1
PING 198.18.0.1 (198.18.0.1) 56(84) bytes of data.
64 bytes from 198.18.0.1: icmp_seq=16 ttl=64 time=0.199 ms
64 bytes from 198.18.0.1: icmp_seq=17 ttl=64 time=0.184 ms
64 bytes from 198.18.0.1: icmp_seq=18 ttl=64 time=0.880 ms
Adding more routes
So now if we add more routes, are they sent? Let's add a dummy interface on S2:
root@s2:~# ip link add dummy0 type dummy
root@s2:~# ip link set dev dummy0 up
root@s2:~# ip addr add 8.8.8.8/32 dev dummy0
...and see if we can route to it from S1:
root@s1:~# ip route add 8.8.8.8/32 via 198.18.0.1
root@s1:~# ip route get fibmatch 8.8.8.8
8.8.8.8 via 198.18.0.1 dev wg0
root@s1:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=64 time=0.237 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=64 time=0.259 ms
Working exactly as expected.
BGP
Ok, so can we now start using a routing protocol over this tunnel to advertise routes? We have our static routes for the /32 IPs on the far side of the wg0 tunnel, so next-hop resolution should be ok. Let's give it a shot with FRR.
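A quick note on mechanics before the configs: these were applied through FRR's vtysh. A hedged sketch of getting bgpd running on a Debian container (paths per the Debian FRR package):

sed -i 's/^bgpd=no/bgpd=yes/' /etc/frr/daemons   # enable the BGP daemon
service frr restart                              # these containers lack systemd
vtysh                                            # then 'configure terminal' and paste the config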
S1 config:
router bgp 65500
neighbor 198.18.0.1 remote-as 65501
neighbor 198.18.0.1 disable-connected-check
!
address-family ipv4 unicast
neighbor 198.18.0.1 soft-reconfiguration inbound
neighbor 198.18.0.1 route-map ALLOW-ALL in
neighbor 198.18.0.1 route-map DENY-ALL out
exit-address-family
!
route-map ALLOW-ALL permit 100
!
route-map DENY-ALL deny 100
!
S2 Config:
router bgp 65501
neighbor 100.64.187.1 remote-as 65500
neighbor 100.64.187.1 disable-connected-check
!
address-family ipv4 unicast
redistribute kernel
redistribute connected
neighbor 100.64.187.1 soft-reconfiguration inbound
neighbor 100.64.187.1 route-map ALLOW-ALL in
neighbor 100.64.187.1 route-map EIGHTEIGHT out
exit-address-family
!
ip prefix-list EIGHTEIGHT seq 5 permit 8.8.8.8/32
!
route-map ALLOW-ALL permit 100
!
route-map EIGHTEIGHT permit 100
match ip address prefix-list EIGHTEIGHT
On S1 I can see that it learns the route to 8.8.8.8/32 from S2 over BGP:
s1# show bgp ipv4 unicast neighbors 198.18.0.1 received-routes
BGP table version is 29, local router ID is 172.20.20.2, vrf id 0
Default local pref 100, local AS 65500
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
*> 8.8.8.8/32 198.18.0.1 0 0 65501 ?
Total number of prefixes 1
And it has been installed into the kernel routing table correctly:
root@s1:~# ip route get fibmatch 8.8.8.8
8.8.8.8 nhid 22 via 198.18.0.1 dev wg0 proto bgp metric 20 onlink
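The 'onlink' flag is what allows a gateway that isn't on any connected subnet to be used, and 'nhid 22' refers to a kernel nexthop object installed by FRR, which we can inspect directly (assuming an iproute2 and kernel recent enough for nexthop objects):

ip nexthop show id 22    # shows the 198.18.0.1 / wg0 nexthop FRR installed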
We can ping too:
root@s1:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=64 time=0.225 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=64 time=0.216 ms
Woot!
Multiple Endpoints
The other question is will this cause any problems if we add a third node, and want to have tunnels from S1 to S2 and S3 simultaneously? Let's change up the topology a little:
+----------------+          +----------------------------+          +----------------+
|                |          |                            |          |                |
| S1             |          | R1                         |          | S2             |
|                |          |                            |          |                |
| eth1           |----------| eth1           eth2        |----------| eth1           |
| 10.1.0.2/24    |          | 10.1.0.1/24    10.2.0.1/24 |          | 10.2.0.2/24    |
|                |          |                            |          |                |
| lo: 1.1.1.1    |          |                eth3        |          | lo: 2.2.2.2    |
|                |          |                10.3.0.1/24 |          |                |
+----------------+          +----------------+-----------+          +----------------+
                                             |
                                             |          +----------------+
                                             |          |                |
                                             |          | S3             |
                                             |          |                |
                                             +----------| eth1           |
                                                        | 10.3.0.2/24    |
                                                        |                |
                                                        | lo: 3.3.3.3    |
                                                        +----------------+
So say on S1 we want to create wireguard tunnels to S2 and S3, and run BGP over them.
The main thing we need to do to achieve this is use multiple wg interfaces, one for each peer. This is required because we set 'AllowedIPs' to 0.0.0.0/0 (everything) for the peer we already have on wg0, and we can't add another peer under the same wg interface with that same range as its AllowedIPs. Instead we set up a second interface, wg1, and configure the second peer under it, again with AllowedIPs = 0.0.0.0/0. This way we get two interfaces we can route over, each of which will accept and encrypt traffic for any IP we route towards it.
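The one-prefix-one-peer rule is easy to demonstrate: the cryptokey routing table is keyed by prefix, so assigning 0.0.0.0/0 to a second peer on the same interface silently takes it from the first (a sketch reusing this lab's public keys):

wg set wg0 peer HRyEEQXB02irTFUQwtSoe0zqoNftss8IifFJFKZ46xE= allowed-ips 0.0.0.0/0
wg set wg0 peer Mi87VBithUnR8zI9cWqgMaYeK4PaNvA4tG3h4GLjLh8= allowed-ips 0.0.0.0/0
wg show wg0 allowed-ips   # only the second peer now lists 0.0.0.0/0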
My wg configs are now:
root@s1:~# cat /etc/wireguard/wg0.conf
[Interface]
PrivateKey = 0G/YfiSxtaUpSGQ5GMLpcNQGwQIMDSPkPN4tMVjwjHI=
Table = off
[Peer]
PublicKey = HRyEEQXB02irTFUQwtSoe0zqoNftss8IifFJFKZ46xE=
Endpoint = 10.2.0.2:51820
AllowedIPs = 0.0.0.0/0, ::/0
root@s1:~# cat /etc/wireguard/wg1.conf
[Interface]
PrivateKey = 0G/YfiSxtaUpSGQ5GMLpcNQGwQIMDSPkPN4tMVjwjHI=
Table = off
[Peer]
PublicKey = Mi87VBithUnR8zI9cWqgMaYeK4PaNvA4tG3h4GLjLh8=
Endpoint = 10.3.0.2:51820
AllowedIPs = 0.0.0.0/0, ::/0
And on the head-end servers:
root@s2:~# cat /etc/wireguard/wg0.conf
[Interface]
PrivateKey = uFQKcMjkBDeBKJFlqy7z9mi2C1fjBuzTwMqfaalbyW0=
SaveConfig = false
ListenPort = 51820
Table = off
[Peer]
PublicKey = G3D758Mb0jyo9H9MnTeDO6Hnh/mqHju3R3Cols4P+jo=
AllowedIPs = 0.0.0.0/0, ::/0
root@s3:~# cat /etc/wireguard/wg0.conf
[Interface]
PrivateKey = CIjf0eWDIWJI4UwSZLwim2pbAyQpBQAeI12p+9vRJGs=
SaveConfig = false
ListenPort = 51820
Table = off
[Peer]
PublicKey = G3D758Mb0jyo9H9MnTeDO6Hnh/mqHju3R3Cols4P+jo=
AllowedIPs = 0.0.0.0/0, ::/0
Bring up multiple tunnels
We bring up both tunnels on the S1 side as follows:
bash -x /usr/bin/wg-quick up wg0
bash -x /usr/bin/wg-quick up wg1
And we then see this:
root@s1:~# wg show all allowed-ips
wg0 HRyEEQXB02irTFUQwtSoe0zqoNftss8IifFJFKZ46xE= 0.0.0.0/0 ::/0
wg1 Mi87VBithUnR8zI9cWqgMaYeK4PaNvA4tG3h4GLjLh8= 0.0.0.0/0 ::/0
Two wg interfaces, each with a single peer (and the public key for these), configured to allow all traffic over them.
Unnumbered wg interface config
You'll see that in the above wg configuration I did not include any 'Address' config. I wanted to test configuring it as an unnumbered interface, routing between IPs configured on the loopback interfaces instead. To achieve this I added an IP to the lo interface on each machine:
S1: ip addr add 1.1.1.1/32 dev lo
S2: ip addr add 2.2.2.2/32 dev lo
S3: ip addr add 3.3.3.3/32 dev lo
I then added static routes between these over the wg interfaces:
root@s1:~# ip route add 2.2.2.2/32 dev wg0
root@s1:~# ip route add 3.3.3.3/32 dev wg1
root@s2:~# ip route add 1.1.1.1/32 dev wg0
root@s3:~# ip route add 1.1.1.1/32 dev wg0
With this done on S1 I can see the two routes in place:
root@s1:~# ip route get 2.2.2.2
2.2.2.2 dev wg0 src 1.1.1.1 uid 0
root@s1:~# ip route get 3.3.3.3
3.3.3.3 dev wg1 src 1.1.1.1 uid 0
And I can ping to them:
root@s1:~# ping -c 2 2.2.2.2
PING 2.2.2.2 (2.2.2.2) 56(84) bytes of data.
64 bytes from 2.2.2.2: icmp_seq=1 ttl=64 time=0.232 ms
64 bytes from 2.2.2.2: icmp_seq=2 ttl=64 time=0.173 ms
root@s1:~# ping -c 2 3.3.3.3
PING 3.3.3.3 (3.3.3.3) 56(84) bytes of data.
64 bytes from 3.3.3.3: icmp_seq=1 ttl=64 time=0.231 ms
64 bytes from 3.3.3.3: icmp_seq=2 ttl=64 time=0.233 ms
BGP
Now we can set up BGP between all hosts to exchange routes again.
S1 Config:
router bgp 65500
neighbor 2.2.2.2 remote-as 65501
neighbor 2.2.2.2 description S2
neighbor 2.2.2.2 disable-connected-check
neighbor 2.2.2.2 update-source 1.1.1.1
neighbor 3.3.3.3 remote-as 65502
neighbor 3.3.3.3 description S3
neighbor 3.3.3.3 disable-connected-check
neighbor 3.3.3.3 update-source 1.1.1.1
!
address-family ipv4 unicast
neighbor 2.2.2.2 route-map ALLOW-ALL in
neighbor 2.2.2.2 route-map DENY-ALL out
neighbor 3.3.3.3 route-map ALLOW-ALL in
neighbor 3.3.3.3 route-map DENY-ALL out
exit-address-family
!
route-map ALLOW-ALL permit 100
!
route-map DENY-ALL deny 100
S2 Config:
root@s2:~# ip link add dummy0 type dummy
root@s2:~# ip link set dev dummy0 up
root@s2:~# ip addr add 198.19.0.1/32 dev dummy0
router bgp 65501
neighbor 1.1.1.1 remote-as 65500
neighbor 1.1.1.1 description S1
neighbor 1.1.1.1 disable-connected-check
neighbor 1.1.1.1 update-source 2.2.2.2
!
address-family ipv4 unicast
redistribute connected
neighbor 1.1.1.1 route-map ALLOW-ALL in
neighbor 1.1.1.1 route-map BGP-OUT out
exit-address-family
!
ip prefix-list DUMMY0 seq 5 permit 198.19.0.1/32
!
route-map BGP-OUT permit 100
match ip address prefix-list DUMMY0
!
route-map ALLOW-ALL permit 100
S3 Config:
root@s3:~# ip link add dummy0 type dummy
root@s3:~# ip link set dev dummy0 up
root@s3:~# ip addr add 198.18.0.1/32 dev dummy0
router bgp 65502
neighbor 1.1.1.1 remote-as 65500
neighbor 1.1.1.1 description S1
neighbor 1.1.1.1 disable-connected-check
neighbor 1.1.1.1 update-source 3.3.3.3
!
address-family ipv4 unicast
redistribute connected
neighbor 1.1.1.1 route-map ALLOW-ALL in
neighbor 1.1.1.1 route-map BGP-OUT out
exit-address-family
!
ip prefix-list DUMMY0 seq 5 permit 198.18.0.1/32
!
route-map BGP-OUT permit 100
match ip address prefix-list DUMMY0
!
route-map ALLOW-ALL permit 100
And the result is that BGP comes up and we learn both dummy IPs on S1:
s1# show bgp ipv4 unicast
BGP table version is 2, local router ID is 1.1.1.1, vrf id 0
Default local pref 100, local AS 65500
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
*> 198.18.0.1/32 3.3.3.3 0 0 65502 ?
*> 198.19.0.1/32 2.2.2.2 0 0 65501 ?
Displayed 2 routes and 2 total paths
We can see these in the kernel routing table too:
root@s1:~# ip route show proto bgp
198.18.0.1 nhid 27 via 3.3.3.3 dev wg1 metric 20 onlink
198.19.0.1 nhid 30 via 2.2.2.2 dev wg0 metric 20 onlink
And we can successfully ping each:
root@s1:~# ping -c 2 198.18.0.1
PING 198.18.0.1 (198.18.0.1) 56(84) bytes of data.
64 bytes from 198.18.0.1: icmp_seq=1 ttl=64 time=0.221 ms
64 bytes from 198.18.0.1: icmp_seq=2 ttl=64 time=0.228 ms
root@s1:~# ping -c 2 198.19.0.1
PING 198.19.0.1 (198.19.0.1) 56(84) bytes of data.
64 bytes from 198.19.0.1: icmp_seq=1 ttl=64 time=0.226 ms
64 bytes from 198.19.0.1: icmp_seq=2 ttl=64 time=0.188 ms
What have we discovered?
Wireguard operates in a way reminiscent of IPsec.
With IPsec, subnets are associated with security associations, and when we do route-based/VTI tunnels we set those subnets to 0.0.0.0/0 or ::/0. This ensures all traffic we route over the tunnel interface will be accepted.
In Wireguard:
- We associate IP subnets with specific peers using the 'AllowedIPs' statement
- We need to set AllowedIPs to 0.0.0.0/0, ::/0 if we want anything we route via a wg interface to be transmitted
- We need to set 'Table = off' in our wireguard interface config to stop it adding everything in AllowedIPs as routes on the system
- We can then control what traffic is sent over the wg tunnel by adding routes to the system via the wg interface
- We can set up an IP on the local loopback adapter and use that instead of configuring IPs on each wg interface
  - Similar to 'ip unnumbered Loopback0' on a Cisco IOS serial interface
- We cannot have multiple peers using the same wg interface with this setup, so we use a separate wg interface for every peer
- We can add a host route for a remote BGP neighbor via a wg interface and establish a BGP session to it
- Routes learnt via that next-hop are recursively resolved, so the correct wg interface is selected when sending traffic to destinations learnt over BGP
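Distilled into a reusable per-peer template (a sketch based on the lab configs above; angle-bracketed values are placeholders):

# /etc/wireguard/wgN.conf - one interface per peer, point-to-point style
# Table = off stops wg-quick auto-installing AllowedIPs as routes;
# AllowedIPs = 0.0.0.0/0, ::/0 means the peer accepts anything we route at it.
[Interface]
PrivateKey = <this node's private key>
# ListenPort on the listening side; Endpoint on the dialling side
ListenPort = 51820
Table = off

[Peer]
PublicKey = <peer's public key>
Endpoint = <peer address>:51820
AllowedIPs = 0.0.0.0/0, ::/0

# then steer traffic with the kernel routing table, statically or via BGP:
#   ip route add <peer loopback>/32 dev wgN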