IPv6 Link-Local Vanity Addressing

[Photo: Rocks]

With the expanded address space, and the introduction of the letters A through F, it is possible to embed words into an address, making it easier to recognize the source of that address. Facebook is a common example: they have embedded face:b00c into their addresses.


$ host facebook.com
facebook.com has address 157.240.3.35
facebook.com has IPv6 address 2a03:2880:f101:83:face:b00c:0:25de

Other common examples are dead:beef or cafe.

Using Vanity Addressing with Link-Local interfaces

A colleague suggested using Vanity Link-Local Addresses, primarily to make pinging the next-hop interface easier. A simple scheme: the upstream router is ::1, and the downstream router is ::2. From the downstream router, one would ping:

$ ping fe80::1%eth0

64 bytes from fe80::1%eth0: icmp_seq=1 ttl=64 time=3.97 ms
64 bytes from fe80::1%eth0: icmp_seq=2 ttl=64 time=4.60 ms

Because the link-local prefix fe80:: is on every interface, the address must be scoped with the %eth0.

This certainly means less typing, and faster troubleshooting, than trying to look up the link-local address of the next-hop router.

Using Vanity Link-Local Addressing as part of your addressing plan

My SOHO network is probably a little more complex than most. But this technique can also be applied to larger networks.

[Diagram: SOHO Network]

As you can see, there are four basic networks (with some smaller ones not shown) in my network. I have a /56 from my ISP, and for the most part, the networks are divided on nibble boundaries.

Prefix              Use
2001:db8:c011:fd00  Production network
2001:db8:c011:fd40  Testing network
2001:db8:c011:fd50  IPv6-only network
2001:db8:c011:fd80  DMZ network

Creating Vanity Link-Local addresses from the address plan

In the old classful networking days, the IT group would have IPv4 subnets memorized: the 10 network, or the 171 network. With IPv6, it is possible to do this again, but with IPv6 prefixes.

Since I only have a /56, the low byte of the prefix is mine to deploy (e.g. the 00 in 2001:db8:c011:fd00). I have taken the last two bytes of each /64 prefix (fd00, fd44, and so on) and applied them to the link-local addresses, such that the router on prefix 2001:db8:c011:fd44:: gets the link-local address fe80::ea9f:80ff:fef3:fd44.

In a perfect world, I would just shorten this to fe80::fd44; however, I run OpenWrt routers, which don’t support Vanity Link-Local Addressing directly.

Vanity Link-Local Addressing and OpenWrt

I like OpenWrt: it is a very powerful, extremely configurable, and extensible routing platform. One can even run bird, an internet routing daemon, on OpenWrt, including RIPng.

OpenWrt uses EUI-64 addressing for its link-local addresses, and it does permit changing the MAC addresses on interfaces. With this knowledge, we can embed the vanity address into the MAC address on OpenWrt.
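
As a refresher, EUI-64 splits the MAC address in half, inserts ff:fe in the middle, and flips the universal/local bit of the first byte. A minimal shell sketch of the derivation (using the vanity MAC from the example below):

#!/bin/sh
# EUI-64 derivation: insert ff:fe, flip the U/L bit, prepend fe80::
mac="74:83:c2:4c:fd:60"
set -- $(echo "$mac" | tr ':' ' ')
b1=$(printf '%02x' $(( 0x$1 ^ 0x02 )))   # 74 -> 76 (flip U/L bit)
printf 'fe80::%s%s:%sff:fe%s:%s%s\n' "$b1" "$2" "$3" "$4" "$5" "$6"
# prints: fe80::7683:c2ff:fe4c:fd60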

In the LuCI web interface: Network->Interfaces->Devices Tab

[Screenshot: OpenWrt MAC Addresses]

I have changed the last two bytes of the MAC address to FD:60.

This change can also be made in the file /etc/config/network on the OpenWrt router with the addition of the macaddr option.

For OpenWrt 22.03.x:

config device
    option name 'br-lan'
    option type 'bridge'
    list ports 'eth1'
    list ports 'eth2'
    list ports 'eth3'
    list ports 'eth4'
    option macaddr '74:83:C2:4C:FD:60'

For OpenWrt 19.07.x:

config interface 'lan'
    option type 'bridge'
    option ifname 'eth0.1'
    option proto 'static'
    option netmask '255.255.255.0'
    option ip6assign '60'
    option ipaddr '192.168.7.1'
    option macaddr '74:83:C2:4C:FD:60'

After rebooting your router, this will yield a pseudo vanity link-local address of: fe80::7683:c2ff:fe4c:fd60

But you say, “fe80::1 is much easier than fe80::7683:c2ff:fe4c:fd60.” And you would be right. But as I said, OpenWrt doesn’t have a facility to create simple link-local addresses like fe80::1.

Vanity Link-Local addressing and RIPng

As many readers know, I have been running RIPng (Routing Information Protocol for IPv6) for several years. RIPng isn’t a perfect routing protocol; there are better/faster protocols out there, but RIPng doesn’t require the administrator to be a router jockey to use it. Therefore I feel that RIPng is an excellent choice for SOHO (Small Office/Home Office) networks.

Like other routing protocols, RIPng advertisements are limited to the link, and will not cross routers. So link-local addresses figure heavily in understanding the sources of RIPng advertisements.

With vanity MAC addresses (which become vanity link-local addresses) in place, RIPng information, such as neighbours (or peers), or even the routing table, becomes easier to understand.

Using the bird CLI utility, birdcl, you can look into what RIPng is doing. For example, looking at neighbours:

# birdcl 
BIRD 2.0.11 ready.
bird> show rip neighbor
rip1:
IP address                Interface  Metric Routes    Seen
fe80::7683:c2ff:fe61:fd60 br-lan          1      4     17       #from IPv6-only router
fe80::224:a5ff:fef1:fd11  br-lan          1      2     30       #from DNS service router
fe80::ea9f:80ff:fef3:fd44 eth0.4          1      4     19       #from Test Network router
fe80::c2c1:c0ff:fe01:fda0 eth0.3          1      5     15       #from Wireguard2 router
fe80::290:a9ff:fea6:fd91  eth0.3          1      5     19       #from Wireguard1 router
bird> 

As you can see, this router has five neighbours which include routers on the FD11, FD44, FD60, FD91, and FDA0 networks.

The bird folks have changed (in version 2.x) how routes are displayed, which works well for 80-column screens, but I find it harder to read than the older 1.6 version.

bird> show route
Table master6:
::/0                 unicast [rip1 11:23:21.109] * (120/3)
    via fe80::58ef:68ff:fe0d:fd00 on eth0
2001:db8:8011:fd98::/64 unicast [rip1 11:23:21.109] * (120/3)
    via fe80::58ef:68ff:fe0d:fd00 on eth0
2001:db8:8011:fd60::/60 unicast [direct1 11:23:25.508] * (240)
    dev br-lan
2001:db8:8011:fd80::/64 unicast [rip1 11:23:21.109] * (120/3)
    via fe80::58ef:68ff:fe0d:fd00 on eth0
fd10:111:0:8::/62    unicast [rip1 11:23:21.109] * (120/2)
    via fe80::58ef:68ff:fe0d:fd00 on eth0
2001:db8:8011:fd44::/62 unicast [rip1 11:39:37.119] * (120/2)
    via fe80::58ef:68ff:fe0d:fd00 on eth0
2001:db8:8011:fda4::/64 unicast [rip1 11:23:21.109] * (120/3)
    via fe80::58ef:68ff:fe0d:fd00 on eth0

I find it easier to view the routes and where they are from by using the Linux ip -6 route command, which also sorts the routes.

# ip -6 route
2001:db8:8011:fd00::a1b via fe80::7683:c2ff:fe61:fd60 dev br-lan  metric 1024 
2001:db8:8011:fd00::/64 dev br-lan  metric 1024 
2001:db8:8011:fd00::/62 via fe80::ca9e:43ff:fe51:c04e dev br-lan  metric 1024 
2001:db8:8011:fd04::/62 via fe80::ca9e:43ff:fe51:c04e dev br-lan  metric 1024 
2001:db8:8011:fd08::/62 via fe80::216:3eff:feb7:c2be dev br-lan  metric 1024 
2001:db8:8011:fd0c::/62 via fe80::9683:c4ff:fe15:f188 dev br-lan  metric 1024 
2001:db8:8011:fd11::/64 via fe80::224:a5ff:fef1:fd11 dev br-lan  metric 1024        #FD11 DNS services
2001:db8:8011:fd40::fb0 via fe80::ea9f:80ff:fef3:fd47 dev eth0.4  metric 1024 
2001:db8:8011:fd40::/64 dev eth0.4  metric 1024 
2001:db8:8011:fd44::/64 via fe80::ea9f:80ff:fef3:fd47 dev eth0.4  metric 1024       #FD44 Test Network
2001:db8:8011:fd44::/62 via fe80::ea9f:80ff:fef3:fd47 dev eth0.4  metric 1024 
2001:db8:8011:fd60::/60 via fe80::7683:c2ff:fe61:fd60 dev br-lan  metric 1024       #FD60 IPv6-only
2001:db8:8011:fd80::a6b via fe80::c2c1:c0ff:fe01:fda1 dev eth0.3  metric 1024       #FD80 DMZ
2001:db8:8011:fd80::/64 dev eth0.3  metric 1024 
2001:db8:8011:fd88::/62 via fe80::2866:2cff:fe49:d36c dev eth0.3  metric 1024 
2001:db8:8011:fd98::/64 via fe80::290:a9ff:fea6:fd91 dev eth0.3  metric 1024 
2001:db8:8011:fd90::/60 via fe80::290:a9ff:fea6:fd91 dev eth0.3  metric 1024 
2001:db8:8011:fda0::/62 via fe80::c2c1:c0ff:fe01:fda1 dev eth0.3  metric 1024 
2001:db8:8011:fda4::/64 via fe80::c2c1:c0ff:fe01:fda1 dev eth0.3  metric 1024 

As you can see, not all of the routers in my network have Vanity Link-Local addresses. These are lesser routers, usually Virtual Routers (VRs): OpenWrt running inside Linux Containers (LXD) for testing. But the Vanity Link-Local addresses are there, making it easier to understand where packets are coming from.

Vanity Link-Local Addresses are a good thing

Although OpenWrt doesn’t support Vanity Link-Local addressing, it can be approximated by creating vanity MAC addresses. These address hints will help you in understanding your network topology, and bring more meaning to what would otherwise be random link-local addresses.


Notes:

  • Since making the drawing, I have moved the IPv6-only network prefix from FD50 to FD60, since I wanted more address space for the test network. So there will be references to both in this article.
  • I am using bird 1.6.6 on my older 19.07.x routers, and bird 2.0.11 in my newer 22.03.x routers, and some of the route displays will not be exactly the same.
  • I have added comments to route displays, such as “#FD60 IPv6-only”, to provide clarity; the comments are not displayed as part of the command output.

LXD MACVLAN Containers

[Photo: Rescue!]
Linux Containers now with VMs

I have been running Linux Containers (LXD) for several years now, and I have found them really useful. The key advantages of Linux Containers are:

  • Running a full Linux OS inside a container, which makes troubleshooting much easier (than Docker)
  • Cloud-like experience. Cloud computing has taken off, not just because it is someone else’s computer, but because it has made spinning up a server easy. Linux Containers provides a similar easy experience in adding a server. Some of the capabilities are: snapshots of containers, transferring containers from one host to another, quick creation and removal of containers.
  • Secure and scalable. Linux containers run as unprivileged by default, thereby protecting the host system. And because Linux Containers are light weight, I have been able to scale up to over 30 webserver containers running on a Raspberry Pi 3B+.
  • Excellent IPv6 Support. Containers have persistent MAC addresses, and therefore will request the same IPv4 address, and form the same SLAAC addresses after every reboot, regardless of container boot order (unlike Docker).

Connecting your Container to the Internet

As an old networking guy, in the past I have taken a network approach to getting my containers connected to the internet. I would configure a Linux bridge (the front bridge) on the host, and then connect the Containers and the Host to that bridge. Like this:

[Diagram: Using a Linux Bridge]

This method works quite well, allowing your router to provide addresses to your containers, and thus Internet connectivity. However, it is tricky to set up, and there is a risk of cutting off network access to the host when moving the host connection from the ethernet port to the Linux Bridge.

Using MACVLAN interface

There is an easier way, which doesn’t require setting up the front bridge or moving the host network connection: the MACVLAN network attachment.

The MACVLAN technique uses the features of modern network interfaces (NICs) that support virtual interfaces. With virtual interface support, a single NIC can support not only multiple IP addresses, but several MAC (Media Access Control) addresses as well.

[Diagram: Using a Linux MACVLAN]

Creating LXD Profile for MACVLAN

LXD Containers use a profile to determine which resources to attach to, such as hard disk or network. The default LXD profile looks like:

$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []

In the eth0 section, you can see that by default, the container will attach to the Linux bridge LXD sets up at init time, called lxdbr0. Unfortunately, that is a bridge to nowhere.

So we’ll create another profile that connects to the Host NIC via MACVLAN. First copy the default, and then change a couple of lines, specifically the nictype and the parent. The parent is the name of the host ethernet device. On a Raspberry Pi running Pi OS, it is eth0.

$ lxc profile copy default macvlan
$ lxc profile show macvlan
config: {}
description: Default LXD profile
devices:
  eth0:
    nictype: macvlan
    parent: eth0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: macvlan
used_by: []
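
If you would rather not assemble the YAML by hand, the nictype and parent changes can be made with the standard profile editor, which opens the profile in $EDITOR:

$ lxc profile edit macvlan    # replace the network: lxdbr0 key with nictype/parent as shown above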

Using the MACVLAN profile

Now that you have a MACVLAN profile, using it is as simple as launching a new container with the -p macvlan option. For example, firing up a Container running Alpine OS:

$ lxc launch -p macvlan images:alpine/3.16 test
Creating test
Starting test  
$

Looking at the container running with the lxc ls command:

$ lxc ls
+---------+---------+------------------------+-----------------------------------------------+-----------+-----------+
|  NAME   |  STATE  |          IPV4          |                     IPV6                      |   TYPE    | SNAPSHOTS |
+---------+---------+------------------------+-----------------------------------------------+-----------+-----------+
| test    | RUNNING | 192.168.215.129 (eth0) | fd73:c73c:444:0:216:3eff:fe65:2b99 (eth0)     | CONTAINER | 0         |
|         |         |                        | 2001:db8:8011:fd44:216:3eff:fe65:2b99 (eth0)  |           |           |
+---------+---------+------------------------+-----------------------------------------------+-----------+-----------+

And you can see that the new container already has IPv4 and IPv6 addresses (from my router). Let’s try a ping from inside the container.

Checking connectivity from inside the MACVLAN attached Container

We’ll step inside the container with the lxc exec command, and ping via IPv6 and IPv4.

$ lxc exec test sh

~ # ping -c 3 he.net
PING he.net (2001:470:0:503::2): 56 data bytes
64 bytes from 2001:470:0:503::2: seq=0 ttl=56 time=34.192 ms
64 bytes from 2001:470:0:503::2: seq=1 ttl=56 time=33.554 ms
64 bytes from 2001:470:0:503::2: seq=2 ttl=56 time=33.959 ms

--- he.net ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 33.554/33.901/34.192 ms

~ # ping -c 3 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: seq=0 ttl=59 time=11.835 ms
64 bytes from 1.1.1.1: seq=1 ttl=59 time=11.933 ms
64 bytes from 1.1.1.1: seq=2 ttl=59 time=11.888 ms

--- 1.1.1.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 11.835/11.885/11.933 ms
~ # 

As you can see, we can get out to the internet without all the fuss of the old way of setting up a front bridge.

Is there a downside to using MACVLAN?

With it being so easy to connect to your LAN using MACVLAN, are there any downsides?

A key limitation of using MACVLAN is that one must use an Ethernet NIC. Or to put it another way, you can’t use Wifi with a MACVLAN interface. This is because MACVLAN is a virtual interface, and creates an additional MAC address for each container.

In a managed Wifi network (which 99% are), the Access Point (AP) will only talk to the MAC address of the client which registered with the AP. A MACVLAN will try to use additional MAC addresses, which will be rejected by the AP (since those addresses are not registered with the AP).

That doesn’t mean that you couldn’t try to set up a Wireless Bridge with Wireless Distribution System (WDS), but that is beyond the scope of this article. For now, think of MACVLAN as only using your Ethernet NIC.

OK, but can the Container talk to the LXD Host?

The short answer is NO. But there is a work-around. Because the Containers are talking on a virtual interface of the NIC, and the host is on the physical interface, the host doesn’t see the traffic. However, if one adds an additional MACVLAN interface on the Host, the Container will be able to communicate with the Host.

Fortunately, there is a script to automagically create a MACVLAN interface on the LXD Host. This script is called: lxd_add_macvlan_host.sh.

The script does not make any permanent changes to the host, but rather configures the MACVLAN interface on the fly. If you want this to be permanent, then invoke the script from /etc/rc.local (you may have to enable rc.local if you are using systemd).
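
For reference, the heart of such a script is just a couple of iproute2 commands. A minimal sketch (the interface names are illustrative; addressing is left to DHCP/SLAAC):

# Create a macvlan interface on the host, tied to the same NIC
# the containers use, and bring it up:
ip link add macvlan0 link eth0 type macvlan mode bridge
ip link set macvlan0 up
# The host then needs an address on macvlan0 (DHCP/SLAAC or static)
# so it shares the same segment as the containers.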

Using LXD without installing it first

Fortunately, if you want to learn about Linux Containers before doing an actual install, Ubuntu has set up a cloud service where you can try LXD online. The session is limited to 30 minutes, but it is free.

You can try LXD using the Ubuntu Cloud service.

[Screenshot: LXD Try-It]

The Linux containers have real, routable IPv6 addresses, which can be accessed from the internet.

$ ping 2602:fc62:a:2000:216:3eff:fe06:83ba -c3
PING 2602:fc62:a:2000:216:3eff:fe06:83ba(2602:fc62:a:2000:216:3eff:fe06:83ba) 56 data bytes
64 bytes from 2602:fc62:a:2000:216:3eff:fe06:83ba: icmp_seq=1 ttl=50 time=73.8 ms
64 bytes from 2602:fc62:a:2000:216:3eff:fe06:83ba: icmp_seq=2 ttl=50 time=70.8 ms
64 bytes from 2602:fc62:a:2000:216:3eff:fe06:83ba: icmp_seq=3 ttl=50 time=70.8 ms

--- 2602:fc62:a:2000:216:3eff:fe06:83ba ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 70.764/71.777/73.790/1.423 ms

It is designed to let you get familiar with LXD, but you can actually run services (for 30 minutes), such as a web server, or even manage LXD using LXDware over IPv6!

Using LXD on a Pi

Of course, LXD runs on more powerful machines than just Raspberry Pis. But you can enjoy the advantages of Linux Containers in your own home by installing it on a Pi. Using MACVLAN interfaces means it just got simpler to install and use Linux Containers.

Running IPv6-only in the SOHO with OpenWrt

[Photo: Wayland]

While we are still a few years off from turning off IPv4 on the internet, it is possible to turn off IPv4 in your SOHO (Small Office Home Office) network, or on just one of your SOHO networks. Google is showing over 40% of connections to its search engine are over IPv6. By running an IPv6-only network in the SOHO, you will:

  • Find out what breaks with IPv4 disabled
  • Simplify your firewall rules, since only IPv6 is supported
  • Discover how well most of the Internet works using DNS64/NAT64

I have been running an IPv6-only network in my house for a couple of years now, and it works amazingly well. There are items, such as my Internet Radio, which do not work over IPv6, and I have a Dual-Stack DMZ for IoT-like devices like that. But the major OSs (Windows, Mac, Linux, iOS, Android) all do IPv6 quite well these days.

Getting from here to there

The key to running IPv6-only, and still browsing the other 59% of the Internet, is using a transition technology: DNS64/NAT64. This has two parts. The first is a special DNS (Domain Name System) server which will synthesize IPv6 addresses (AAAA records) when a DNS request returns only an A record (IPv4-only).

[Diagram: IPv6-only DNS64]

The second part: once your laptop has a synthesized IPv6 address, it will connect to the NAT64 (Network Address Translation, IPv6 to IPv4), which does the work of translating the synthesized IPv6 address to a real IPv4 address and sends the packet out on the Internet.

[Diagram: IPv6-only NAT64]

Using OpenWrt for DNS64/NAT64

Fortunately, you can take any of the thousand or so routers supported by OpenWrt, and run both DNS64 and NAT64. OpenWrt is an actively developed open source project for SOHO routers.

Conveniently, DNS64 and NAT64 can run on the same OpenWrt router.

DNS64

OpenWrt uses Dnsmasq as a DNS/DHCPv4 server by default. Since we’ll be running IPv6-only, we won’t need the DHCPv4 server. I have disabled Dnsmasq and installed the unbound DNS server, specifically because it has a checkbox that enables DNS64, making it the easiest DNS64 installation you will ever do.
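
For reference, the swap looks roughly like this from the command line (a sketch; package names are from the OpenWrt feeds, and luci-app-unbound provides the checkbox UI):

opkg update
opkg install unbound-daemon luci-app-unbound
/etc/init.d/dnsmasq disable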

[Screenshot: unbound DNS64]

Just click on the Enable DNS64 checkbox, and you are running a DNS64 server.

Unbound’s DNS64 uses the well-known prefix (WKP) of 64:ff9b::/96 by default, but it can be changed to any IPv6 prefix in your network.
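
On a stock unbound install, the checkbox amounts to a couple of unbound.conf settings; a sketch (the prefix shown is the default WKP):

server:
    module-config: "dns64 validator iterator"
    dns64-prefix: 64:ff9b::/96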

NAT64

NAT64 on OpenWrt uses a handy packet tool called jool. Jool can do many things, but configuring it for NAT64 requires the command line. ssh into the router and install jool (on OpenWrt version 22.03):

opkg update
opkg install kmod-jool-netfilter jool-tools-netfilter

Then configure jool:

jool instance add --pool6 64:ff9b::/96

You will most likely want this to be persistent across router reboots; therefore add the following to your /etc/rc.local file (before exit 0). This can also be done using the web interface, LuCI, under System->Startup->Local Startup.
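
The relevant part of /etc/rc.local ends up looking like this (same instance command as above):

# NAT64: re-create the jool instance at boot
jool instance add --pool6 64:ff9b::/96

exit 0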

[Screenshot: LuCI Local Startup]

Testing IPv6-only

Now that you have DNS64 and NAT64 running on your OpenWrt router, connect a laptop, and try to ping an IPv4-only website, such as twitter.com or cira.ca (the Canadian Internet Registration Authority):

$ ping -c2 cira.ca
PING cira.ca(a39c698e40c082be1.awsglobalaccelerator.com (64:ff9b::321:b811)) 56 data bytes
64 bytes from a39c698e40c082be1.awsglobalaccelerator.com (64:ff9b::321:b811): icmp_seq=1 ttl=120 time=13.1 ms
64 bytes from a39c698e40c082be1.awsglobalaccelerator.com (64:ff9b::321:b811): icmp_seq=2 ttl=120 time=14.7 ms

--- cira.ca ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 13.146/13.928/14.710/0.782 ms

Ping tests both DNS64, which synthesizes an IPv6 address (note the WKP 64:ff9b::/96 being used above), and NAT64, which carries the actual ping (ICMP message).

Running IPv6-only

Running IPv6-only is a great way to check out what works (and what doesn’t) on IPv6-only. A few notable apps that don’t work well on IPv6-only:

  • Zoom (Jitsi works just fine on IPv6-only)
  • Many VPN software/providers (Wireguard works well in IPv6-only)

That said, it is amazing what does work well on IPv6-only. As a test, this past summer I put our house guests on the IPv6-only network, and at the end of the week I asked them if they had run into any problems. They replied they had no problems on the IPv6-only network during the week.

With the help of DNS64 and NAT64 on OpenWrt, you are now ready to run an IPv6-only network in your SOHO network, and see how the future of the Internet will work.


Notes:

  • As of 2022-10-24, jool will report “(Xtables disabled)”; this is normal, as OpenWrt now uses netfilter
  • To be fair, www.cira.ca has an IPv6 address, but cira.ca does not
  • Be aware, that if you are a guest at my house, you will be put onto the IPv6-only network 😉

23 November 2022

Mostly IPv6-only with RFC 8925

[Photo: Fun!]

Transition: Mostly IPv6-only

I have been thinking about the transition to IPv6 wrong. For some time I have seen IPv6-only as the natural progression from Dual-Stack. But that strategy doesn’t accommodate all the IPv4-only devices (such as IoT) which will be with us for years.

With the standardization of RFC 8925 IPv6-Only Preferred Option for DHCPv4, I see another phase in the transition to IPv6-only. The phase of mostly IPv6-only.

Mostly IPv6-only means that for a given subnet, most of the devices are capable of IPv6, and will use it. But the remaining small number of devices, such as badge readers, HVAC, and the like, can still operate using just IPv4.

Signaling an IPv6-only option in IPv4 DHCP

Initially, this sounds counter-intuitive. After all, if a device is capable of IPv6-only, it won’t even need to make a DHCPv4 request, and therefore would not see the IPv6-only option.

However, that is looking at it from the IPv6 side. Looking at it from the IPv4 side, there are advantages to letting devices know that the IPv6-only mechanisms (Native/DNS64/NAT64) are in place, and that, for those devices which are capable of IPv6-only, there is no need to use IPv4.

Reintroducing DHCPv4 Option 108

Back in the heyday of DHCP, there were many options being allocated for all sorts of useful things, such as:

  • Multicast Assignment through DHCP (was option 100)
  • IPX Compatibility (was option 110)

Option 108 was initially reserved for Swap Path, but was never standardized in an RFC.

RFC 8925 reintroduces Option 108 as IPv6-Only Preferred. IANA (Internet Assigned Numbers Authority) maintains a list of current DHCP Options.

How does DHCPv4 Option 108 work?

If the network supports IPv6-only, then the DHCPv4 server can be configured to include Option 108 in the DHCP offer to the client.

Dynamic Host Configuration Protocol (Offer)
    Message type: Boot Reply (2)
    Hardware type: Ethernet (0x01)
    Hardware address length: 6
    Hops: 0
    Transaction ID: 0xf0c7a14c
...
    Option: (3) Router
        Length: 4
        Router: 192.168.156.1
    Option: (6) Domain Name Server
        Length: 4
        Domain Name Server: 192.168.156.1
    Option: (15) Domain Name
        Length: 3
        Domain Name: lan
    Option: (108) Removed/Unassigned
        Length: 4
        Value: 00000000
    Option: (255) End
        Option End: 255

As you can see, tshark (and tcpdump and wireshark) has not been updated for Option 108 support yet.

Once a client which supports Option 108 sees the option, it will stop the request process for an IPv4 address, and the device will become IPv6-only on your Mostly IPv6-only network. The DHCPv4 server will not allocate an IPv4 address, and there will be no lease for an address.

If the device does not support Option 108, it will ignore it, and continue to request an IPv4 address, and operate using IPv4 in your Mostly IPv6-only network.

Advantage of using Option 108

In addition to signaling the devices which can run IPv6-only, the use of Option 108 reduces the demand for IPv4 addresses.

In a Dual-Stack network, all devices will have an IPv4 address. In a Mostly IPv6-only network, only the devices which can not operate in an IPv6-only environment will consume an IPv4 address.

As device software is upgraded, and perhaps will support IPv6-only, the usage of IPv4 addresses should decline.

Support for Option 108 in use today

There are operating systems which support Option 108 today, quite possibly in your network.

  • iOS 15 (iPhones)
  • MacOS 12.0.1 (Macs)
  • Windows (Not yet)

Sadly, Linux, long used as the test bed for the internet, does not support Option 108 (as of systemd v251). A feature request has been submitted.

Setting up Option 108 in your network using OpenWrt

Before setting up Option 108, ensure you have DNS64/NAT64 set up to support IPv6-only operation, as nodes responding to the option will become IPv6-only. See this post for info on how to configure OpenWrt for DNS64/NAT64 (ipv6hawaii.org).

Although OpenWrt does not directly support Option 108, it is possible to configure it from the web interface (LuCI).

After logging into the web interface, go to Network->Interfaces->LAN (Edit)->DHCP Server->Advanced Settings:

[Screenshot: OpenWrt Interface config]

In the DHCP Options field, enter the string 108,0000i. The letter i is required to make Option 108 a 4-byte value (which the RFC requires). Click Save and then Save & Apply.

Or, if you prefer, you can edit the dhcp config file /etc/config/dhcp and add the line in the lan stanza:

config dhcp 'lan'
    option interface 'lan'
    ...
    list dhcp_option '108,0000i'

And restart networking:

/etc/init.d/network restart
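
To confirm the option is actually in the offer, you can watch the DHCP exchange from a wired client; a sketch using tcpdump (the interface name is illustrative):

# -v decodes the BOOTP/DHCP options; look for option 108 in the Offer
tcpdump -ni eth0 -v 'port 67 or port 68'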

Mostly IPv6-only is a good thing

RFC 8925 provides a mechanism that brings us one step closer to an IPv6-only world, while still providing connectivity to the devices which do not yet support IPv6.

While I see this extending the long tail of IPv4, and I would like to see everything IPv6-only, the real goal should be connectivity for all devices on your network. Option 108 helps us get there.


Notes:

  1. Thanks to the Dnsmasq author for his quick response on how to force a 4-byte value for Option 108.


OpenWrt and Netfilter

[Photo: Fun!]

Not Non-Fungible Token, but Netfilter NFT

It has been a long time coming, but Netfilter has finally arrived in the latest release of OpenWrt (22.03.x). Netfilter brings network parity for IPv6, while improving firewall performance.

The Early Days

A little history: Linux has always had the concept of packet filtering, or a firewall. The original packet filter, ipfw, integrated into Linux in 1994, was based on BSD’s ipfirewall. In Linux version 2.2 (1999), ipchains was introduced, but it was stateless only. The Linux firewall evolved again with iptables in version 2.4 (2001), where stateful inspection arrived.

iptables continued to evolve, but with separate streams: one for IPv4, another for IPv6, and yet another for Ethernet (ebtables). Having multiple tables impacted performance, and led to kernel code duplication.

NFT

Netfilter‘s new packet-filtering utility, nft, replaces the older tools: iptables, ip6tables, arptables and ebtables.

At first look, nft options look quite a bit different from the old iptables.

nft --help
Usage: nft [ options ] [ cmds... ]

Options (general):
  -h, --help                      Show this help
  -v, --version                   Show version information
  -V                              Show extended version information

Options (ruleset input handling):
  -f, --file <filename>           Read input from <filename>
  -D, --define <name=value>       Define variable, e.g. --define foo=1.2.3.4
  -i, --interactive               Read input from interactive CLI
  -I, --includepath <directory>   Add <directory> to the paths searched for include files. Default is: /etc
  -c, --check                     Check commands validity without actually applying the changes.
  -o, --optimize                  Optimize ruleset

Options (ruleset list formatting):
  -a, --handle                    Output rule handle.
  -s, --stateless                 Omit stateful information of ruleset.
  -t, --terse                     Omit contents of sets.
  -S, --service                   Translate ports to service names as described in /etc/services.
  -N, --reversedns                Translate IP addresses to names.
  -u, --guid                      Print UID/GID as defined in /etc/passwd and /etc/group.
  -n, --numeric                   Print fully numerical output.
  -y, --numeric-priority          Print chain priority numerically.
  -p, --numeric-protocol          Print layer 4 protocols numerically.
  -T, --numeric-time              Print time values numerically.

A key difference is that nft prefers to read a structured, almost json-like, file of rules, rather than just a simple (sometimes not so simple) iptables command line.
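
For example, a stand-alone ruleset file and its atomic load look like this; a minimal sketch (the table and chain names are illustrative, not OpenWrt’s fw4):

# /tmp/demo.nft
table inet demo {
    chain input {
        type filter hook input priority filter; policy accept;
        tcp dport 22 accept comment "allow ssh"
    }
}

Load it atomically with: nft -f /tmp/demo.nft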

The first useful command is to show the tables defined (on OpenWrt). Netfilter has a new address family, inet, which applies to both IPv4 and IPv6.

# nft list tables
table inet fw4

Unfortunately, for the newcomer, that doesn’t appear to tell us much. But in fact, it is stating that there is a table of the family type inet with the name fw4. A more informative command shows the chains and rules in the table (fw4):

# nft list table inet fw4
table inet fw4 {
    chain input {
        type filter hook input priority filter; policy accept;
        iifname "lo" accept comment "!fw4: Accept traffic from loopback"
        ct state established,related accept comment "!fw4: Allow inbound established and related flows"
        tcp flags syn / fin,syn,rst,ack jump syn_flood comment "!fw4: Rate limit TCP syn packets"
        iifname "br-lan" jump input_lan comment "!fw4: Handle lan IPv4/IPv6 input traffic"
        iifname "wan" jump input_wan comment "!fw4: Handle wan IPv4/IPv6 input traffic"
    }

    chain forward {
        type filter hook forward priority filter; policy drop;
        ct state established,related accept comment "!fw4: Allow forwarded established and related flows"
        iifname "br-lan" jump forward_lan comment "!fw4: Handle lan IPv4/IPv6 forward traffic"
        iifname "wan" jump forward_wan comment "!fw4: Handle wan IPv4/IPv6 forward traffic"
        jump handle_reject
    }

    chain output {
        type filter hook output priority filter; policy accept;
        oifname "lo" accept comment "!fw4: Accept traffic towards loopback"
        ct state established,related accept comment "!fw4: Allow outbound established and related flows"
        oifname "br-lan" jump output_lan comment "!fw4: Handle lan IPv4/IPv6 output traffic"
        oifname "wan" jump output_wan comment "!fw4: Handle wan IPv4/IPv6 output traffic"
    }

...

    chain forward_wan {
        icmpv6 type { destination-unreachable, time-exceeded, echo-request, echo-reply } limit rate 1000/second counter packets 4 bytes 416 accept comment "!fw4: Allow-ICMPv6-Forward"
        icmpv6 type . icmpv6 code { packet-too-big . no-route, parameter-problem . no-route, parameter-problem . admin-prohibited } limit rate 1000/second counter packets 0 bytes 0 accept comment "!fw4: Allow-ICMPv6-Forward"
        meta l4proto esp counter packets 0 bytes 0 jump accept_to_lan comment "!fw4: Allow-IPSec-ESP"
        udp dport 500 counter packets 0 bytes 0 jump accept_to_lan comment "!fw4: Allow-ISAKMP"
        meta nfproto ipv6 tcp dport { 22, 80, 443 } counter packets 1 bytes 80 jump accept_to_lan comment "!fw4: ext_mgmt_fwd"
        meta nfproto ipv6 udp dport { 22, 80, 443 } counter packets 0 bytes 0 jump accept_to_lan comment "!fw4: ext_mgmt_fwd"
        jump reject_to_wan
    }

...
    chain mangle_forward {
        type filter hook forward priority mangle; policy accept;
        iifname "wan" tcp flags syn tcp option maxseg size set rt mtu comment "!fw4: Zone wan IPv4/IPv6 ingress MTU fixing"
        oifname "wan" tcp flags syn tcp option maxseg size set rt mtu comment "!fw4: Zone wan IPv4/IPv6 egress MTU fixing"
    }
}

As you can see, there are IPv4 and IPv6 rules in the Netfilter table. The nft man page is long, but full of good info.

Firewall abstraction of OpenWrt

Fortunately, the developers of OpenWrt have kept the familiar web interface (LuCI) for firewall configuration, and the user doesn’t have to know that nft is now managing everything, rather than the older tools: iptables, ip6tables, arptables and ebtables.

[Screenshot: OpenWrt Firewall Configuration]

Conclusion

Why Netfilter? It reduces kernel code duplication, has more efficient execution (thus better performance), and adds atomic changes to filter rules.

Why OpenWrt? With the recent advent of malware targeting Small Office/Home Office (SOHO) routers, such as ZuoRAT, it is good to see an alternative to that old OEM router software, in a project which is active and responding to security vulnerabilities.

nft has been around since 2014. Finally, with the Netfilter address family of inet, IPv6 has equal status with IPv4. And with OpenWrt (22.03.x), your SOHO router will take advantage of a 21st-century firewall.


Photo Credit: NFT mania is here, and so are the scammers by Marco Verch under Creative Commons 2.0

Originally posted on www.makiki.ca (IPv6-only website)


Managing Linux Containers with LXD Dashboard

[Photo: Traffic]

Server Farm in the Palm of your hand

In the past I have written about Linux Containers (LXD), a light-weight virtualization for Linux, and how it is much more IPv6-friendly than Docker. But until now, the management of LXD has been via the CLI command lxc.

There are other LXD GUI management projects, but LXD Dashboard not only runs in a container on a host that it also manages, it can manage LXD on remote hosts as well.

IPv6 Friendly

LXD is IPv6 Friendly, in that containers will obtain a SLAAC and/or DHCPv6 address, and get the same address after container restarts, or even through LXD host reboots.

This makes it easy to create a DNS entry for the Linux Container, since the automatically created IPv6 address is pretty much static.
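
For example, the DNS entry is a single AAAA record; a sketch of a zone-file line (the host name and address are illustrative, taken from the container listing below):

; container DNS entry -- survives container restarts and host reboots
alpine  IN  AAAA  2001:db8:ebbd:2080:216:3eff:fecf:bef5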

LXD Interface

LXD is actually two parts: the lxd daemon, and the lxc CLI client which makes calls to the lxd daemon. This allows one to list, for example, the Linux containers which are running (or stopped) on a specific host.

$ lxc ls
+--------+---------+------------------------+----------------------------------------------+------------+-----------+
|  NAME  |  STATE  |          IPV4          |                     IPV6                     |    TYPE    | SNAPSHOTS |
+--------+---------+------------------------+----------------------------------------------+------------+-----------+
| alpine | RUNNING | 192.168.215.104 (eth0) | fd6a:c19d:b07:2080:216:3eff:fecf:bef5 (eth0) | PERSISTENT | 0         |
|        |         |                        | 2001:db8:ebbd:2080:216:3eff:fecf:bef5 (eth0) |            |           |
+--------+---------+------------------------+----------------------------------------------+------------+-----------+
| w10    | RUNNING | 192.168.215.225 (eth0) | fd6a:c19d:b07:2080:216:3eff:feb2:f03d (eth0) | PERSISTENT | 0         |
|        |         |                        | 2001:db8:ebbd:2080:216:3eff:feb2:f03d (eth0) |            |           |
+--------+---------+------------------------+----------------------------------------------+------------+-----------+
| w2     | RUNNING | 192.168.215.232 (eth0) | fd6a:c19d:b07:2080:216:3eff:fe7f:b6a5 (eth0) | PERSISTENT | 0         |
|        |         |                        | 2001:db8:ebbd:2080:216:3eff:fe7f:b6a5 (eth0) |            |           |
+--------+---------+------------------------+----------------------------------------------+------------+-----------+
| w3     | RUNNING | 192.168.215.208 (eth0) | fd6a:c19d:b07:2080:216:3eff:fe63:4544 (eth0) | PERSISTENT | 0         |
|        |         |                        | 2001:db8:ebbd:2080:216:3eff:fe63:4544 (eth0) |            |           |
+--------+---------+------------------------+----------------------------------------------+------------+-----------+

Until now the CLI has been the way to manage LXD containers.

LXD secure management API

The LXD daemon has elevated privileges, since it is messing with routing tables and such to make networking work for the container. A secure socket can be enabled for remote management, usually on port 8443. To enable it, use the following command:

lxc config set core.https_address [::]:8443

It is possible to set a management password, but it is more secure to use a certificate, which I’ll discuss later.

Conveniently, the LXD daemon listens on both IPv4 and IPv6.

Web-based LXD Management with LXD Dashboard

There is an actively developed project by LXDware called LXD Dashboard. The Dashboard runs inside a Linux container, and although it is recommended that one use an Ubuntu container, I find Alpine containers to be much smaller, and to load faster.

After working with the author, he wrote up my notes as a nice how-to install on Alpine. There are some additional libraries which are needed under Alpine Linux. The how-to is pretty much a copy/paste of the command lines needed to install the current release on an Alpine container (v3.4 at the time of this writing).

After creating a container, I copy/paste the IPv6 address into my DNS, so I only need to reference it by name thereafter. Since Linux Containers keep the same MAC and IPv6 address, even after restarts, you only need to update the DNS once.

LXD Dashboard: first steps

Once the Dashboard is installed in a Linux Container, and you have nginx and php-fpm up and running, it is time to point your web browser to the Linux Container. Since I use DNS, I just enter http://lxdware/ into the browser.

[Screenshot: Initial Registration screen]

LXD Dashboard will present an initial registration screen, where you can create a login. Be sure to make a note of your username and password; this will become the master admin user. After logging in (below), you can add more users.

[Screenshot: Registration]

Logging into LXD Dashboard

Once you have registered, you can now log into the Dashboard using the same username and password entered at registration.

[Screenshot: Registration]

Adding Additional Users to LXD Dashboard

After logging in, you can add more users by clicking on your login name in the upper right-hand corner, which opens a menu; select Settings.

Once in Settings, you can add additional users, which can belong to predefined groups or to groups of your own. The LXDware site has more info on Role-Based Access Control (RBAC).

[Screenshot: Settings]

Other parameters such as adding your own certificates, or setting refresh timers can be adjusted in the Settings section.

Adding LXD hosts

There isn’t much to do with the Dashboard until you add one or more LXD Hosts. It is here, where we will use the Certificate method of accessing the LXD daemons. The steps are:

  1. Copy/Paste the LXD Dashboard Certificate to a file
  2. Transfer that file to the LXD Host
  3. Use the lxc config trust add <cert file> command to add the LXD Dashboard certificate to the LXD Host

Copying the LXD Dashboard Certificate

After logging in to LXD Dashboard, click on the View Certificate button to view the Certificate. Copy, then paste that into a file, and name it something like lxddashboard.crt.

[Screenshot: Certificate]

Transfer the Cert to the LXD Host

Use an IPv6-friendly tool, like scp, to copy the certificate file to the remote LXD Host. Place it somewhere convenient, like /tmp/.
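
A sketch of the copy (the user and host names are illustrative):

scp lxddashboard.crt user@lxdhost.example.com:/tmp/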

Add the Cert to the LXD Host

After sshing to the remote LXD Host, issue the following command to add the Certificate to the LXD daemon configuration:

lxc config trust add /tmp/lxddashboard.crt

Back on LXD Dashboard, add the remote LXD Host

Now that the remote host is listening on port 8443 and has the certificate from LXD Dashboard, it is time to add the host to the Dashboard. Click on the upper right button +Add Host.

[Screenshot: Add Hosts]

Fill in the info about the host. Since IPv6 is well supported, just enter the DNS name of your IPv6 Host. Since we are using IPv6, we can ignore “External Address & Port” (IPv4 NAT items).

If you have more than one LXD Host, just click +Add Host again, and keep adding your LXD Hosts (be sure to add the Cert to the host first).

Start managing LXD with the Dashboard

Now that you have your LXD hosts added, you are ready to start/stop/launch containers. First let’s drill down on one of the LXD Hosts in your list.

[Screenshot: Showing Host]

Paikea is a Raspberry Pi with 15 containers configured. Information about the host is shown on the bottom part of the screen.

Getting down to the containers

Clicking on the Containers will switch the display to a list of the containers running on my host Paikea. Be patient! Raspberry Pis are not the fastest machines on the planet, and LXD Dashboard asks for a lot of information from the LXD host.

[Screenshot: Listing containers]

On the right side of each container line is the status (stopped/running) and a triangle/square button which will start/stop the container.

Looking at a single container with LXD Dashboard

Continuing to drill down, by clicking on a container name, it is possible to see more detail for that particular container, including how many processes are running inside the container and memory used.

[Screenshot: Single container]

Along the top are menu options to configure the container: interfaces, snapshots, etc. It is also possible to exec into the container, which pops up a black screen and logs you into the container as root. This is all done using the LXD API over IPv6!

[Screenshot: exec to a container]

Above is an exec session to an OpenWrt Router running in a container.

IPv6-enabled LXD Dashboard, ready for prime time

I have only touched upon basic container management with LXD Dashboard, but there is much more that one can do. Bringing a friendly web interface to LXD, which works well over IPv6, is great.

I have watched LXD Dashboard improve over the past year. The development is active, and the author welcomes suggestions for future versions. LXD Dashboard in a dual stack or IPv6-only network is a welcome addition to your Linux Container toolbox.


Happy Boys Day (5 May)

RIPng: routing for the SOHO (Redux)


[Photo: Routers]

RIPng guiding the packet flows

It has been a couple of years since I last wrote about RIPng. It has been running quietly and efficiently in my SOHO (Small Office/Home Office) network. Sure, there are better routing protocols, such as OSPFv3 or IS-IS, which are the work-horses of the Enterprise, but they also have dedicated network engineers managing them. The ease of deployment makes RIPng the perfect IPv6 routing protocol for non-network experts: just plug in and go.

BIRD: Internet Routing Daemon

BIRD is an open source routing daemon which supports many routing protocols such as RIPng, Babel, OSPF, and iBGP. It runs on Linux, FreeBSD, NetBSD, and OpenBSD.

In my last RIPng article, BIRD was at version 1.6, and the examples are for that version. In December 2017, version 2 was released, but I found issues with configuring RIPng, and waited until some of the issues could be resolved. Now BIRD has released version 2.0.8, and it integrates well with my existing 1.6 network.

BIRD & OpenWrt

I have been running BIRD 1.6 on my OpenWrt routers for years, but I wanted to try the newer version 2.0.8. Fortunately, the devs at OpenWrt build both versions, and it is easy to install using OpenWrt’s software manager.

Unlike BIRD 1.6, there is no separate version for IPv4 and IPv6; BIRD 2 supports both. Because the OpenWrt software manager automatically handles dependencies, I usually just install the user-space CLI tool bird2cl, which will pull in the bird daemon as well.

BIRD can be installed using the OpenWrt web interface, or, after ssh-ing to the router, by running the following:

opkg update
opkg install bird2cl

Editing files in OpenWrt using nano

As in most Linux distros, the vi editor is included in the base system. But vi has cryptic commands, and can be daunting to the new user.

Fortunately, there is a simpler, user-friendly editor called nano, which only needs to be installed:

opkg install nano

There are many tutorials on the internet on how to use nano, but the official documentation is always a good place to start.

[Screenshot: Editing bird.conf with nano]

Configuring BIRD for RIPng

There is no web interface for configuring RIPng; the following must be done via an ssh session. But once it is done, you should not need to change the configuration in the future.

Unfortunately, the example /etc/bird.conf file which is installed by default is full of examples for the other supported protocols, but pretty scarce on RIPng. The easiest thing to do is to log into your router with ssh and replace it with this example:

# EXAMPLE BIRD RIPng Config
# Required for kernel local routes to be exported to RIPng
protocol kernel {
    ipv6 {
        export all;     # Default is export none
    };
}

# Required to get info about Net Interfaces from Kernel
protocol device {
}

#advertises directly connected interfaces to upstream
protocol direct {
    ipv6;
    interface "*";
}

# Configure RIPng in Bird
protocol rip ng {
    ipv6 {
        import all;
        export all;
    };
    interface "*" {
        mode multicast;
    };
}

The key to telling BIRD that this is RIPng is the protocol rip ng line. The ng tells BIRD to use the IPv6 version of RIP.

It is possible to refine the interfaces, so that RIPng routing announcements aren’t being sent (and then dropped) to your ISP. But putting an interface "*" makes this config work for all routers in your SOHO network.

If you wanted to exclude the upstream interface (called wan on OpenWrt), use the line interface "eth0", "br-lan" instead.
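
For example, the refined stanza would look like this (a sketch; the interface names are from the examples above):

protocol rip ng {
    ipv6 {
        import all;
        export all;
    };
    interface "eth0", "br-lan" {
        mode multicast;
    };
}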

Configuring your Firewall for RIPng

Just like last time, the default policy on OpenWrt is to block in-bound packets from the wan (or upstream interface). So a firewall rule must be created to allow RIPng packets to pass. This is the same as with version 1.6.

Append the following to /etc/config/firewall:

config rule
        option name 'RIPng'
        option family 'ipv6'
        list proto 'udp'
        option src 'wan'
        list src_ip 'fe80::/10'
        option dest_port '521'
        option target 'ACCEPT'
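
Then reload the firewall so the rule takes effect (the standard OpenWrt service command):

/etc/init.d/firewall restart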

Starting RIPng

Now that you have the configuration file in place, and the firewall ready, you can start BIRD running RIPng on your router.

/etc/init.d/bird restart
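
To have BIRD start automatically at boot as well (standard OpenWrt service handling):

/etc/init.d/bird enable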

That’s it! BIRD is now running RIPng on your network.

Looking a little deeper into your RIPng network (optional)

Although RIPng is pretty much a start-and-forget routing protocol, there is a nice troubleshooting tool, birdcl, to peek under the covers. It will show the key aspects of RIPng:

  • Interfaces running RIPng
  • Peers or other routers your RIPng is talking to
  • The routing table, which routes have been learned by RIPng

Using the CLI tool, birdcl, it is easy to see how RIPng is working.

# birdcl 
BIRD 2.0.8 ready.
bird> 

Helpful commands show the interfaces enabled for RIPng, and how many neighbours (other routers running RIPng) have been found.

bird> show rip int
rip1:
Interface  State  Metric   Nbrs   Timer
eth0       Up          1      0  24.311
wan        Up          1      3   8.381
br-lan     Up          1      0   0.961

Displaying the RIPng neighbours will provide more info:

bird> show rip neig
rip1:
IP address                Interface  Metric Routes    Seen
fe80::2ac6:8eff:fe16:19d7 wan             1     23  22.898
fe80::216:3eff:fe28:54f0  wan             1      2  26.902
fe80::7683:c2ff:fe61:fd60 wan             1      6  21.931

As you can see, there are 3 other routers running RIPng, all upstream on the wan interface. RIPng uses IPv6 link-local addresses. It is a good idea to keep a cheat-sheet of your routers’ link-local addresses handy, which will make it easier to understand which routers are peers/neighbours.

And of course you can use birdcl to show the routes in your network as well.

bird> show route
Table master6:
::/0                 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd60::/60 unicast [rip1 09:34:47.149] * (120/2)
        via fe80::7683:c2ff:fe61:fd60 on wan
2001:db8:8011:fd94::/64 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd80::/64 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd44::/62 unicast [rip1 09:34:47.143] * (120/2)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd44::fb0/128 unicast [direct1 09:34:47.139] * (240)
        dev wan
2001:db8:8011:fd04::/62 unicast [rip1 09:34:47.143] * (120/3)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd00::/56 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd11::/64 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd00::/64 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd44::a1b/128 unicast [rip1 09:34:47.149] * (120/2)
        via fe80::7683:c2ff:fe61:fd60 on wan
2001:db8:8011:fd40::/64 unicast [rip1 09:34:47.143] * (120/3)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd47::/64 unicast [rip1 09:34:47.143] * (120/2)
        via fe80::ea9f:80ff:feec:d5f3 on wan
2001:db8:8011:fd46::/64 unicast [rip1 09:34:47.149] * (120/2)
        via fe80::216:3eff:fe28:54f0 on wan
2001:db8:8011:fd45::/64 unicast [direct1 09:34:47.139] * (240)
        dev br-lan
2001:db8:8011:fd44::/64 unicast [direct1 09:34:47.139] * (240)
        dev wan
                     unicast [rip1 09:34:47.149] (120/2)
        via fe80::7683:c2ff:fe61:fd60 on wan
2001:db8:8011:fd80::/62 unicast [rip1 09:34:47.143] * (120/3)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd84::/62 unicast [rip1 09:34:47.143] * (120/3)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd88::/61 unicast [rip1 09:34:47.143] * (120/3)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd90::/60 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan

The top entry, ::/0, is the default route pointing to the upstream router fe80::2ac6:8eff:fe16:19d7 on the wan interface. This is the path packets will take to get to the internet.

The last numbers (120/3) are the route preference (120, BIRD’s default for RIP routes) and the metric (3), which indicates how many router-hops away that network is. As you can see, the furthest network is 4 hops away from this router.

But unless you need to troubleshoot your network, or are just curious about how RIPng works, you shouldn’t need to run birdcl. After all, RIPng is basically a plug-and-play routing protocol.

BIRD v2.0.8 & RIPng are stable and ready for prime time

Earlier versions of BIRD v2 had interoperability problems with BIRD v1, but those are now in the rear-view mirror. RIPng is very easy to set up, once you have an example config file. It is a tried and true routing protocol that gets the job done, making a multi-router SOHO network easy to stand up and maintain.

RIPng will be quietly keeping your network going while you worry about the real problems in the world.

Originally posted on www.makiki.ca (IPv6-only)
Updated on 13 April 2022 – fixed example bird.conf


Privacy, SLAAC & RFC 8981

[Photo: Routers]

The Changing nature of SLAAC

It has been six (6) years since I last wrote about SLAAC. With the latest standard, RFC 8981, it is time to revisit it, and understand how it has evolved over the years.

Stateless Address Auto Configuration (SLAAC) was a novel improvement over the static address configuration of IPv4 in the late 1990s. It rose out of the interesting quandary of how to create a globally unique address without the help of any servers. In the early days of SLAAC, the interface MAC address, expanded to an EUI-64, was used to create the IID (Interface ID), the last 64 bits of the IPv6 address.

History of adding Privacy to SLAAC

Privacy issues arose rather quickly, since MAC addresses are burned into the interface card, and therefore do not change. The concern was that with a static global IPv6 address, users could be easily tracked (web-based cookies were still in their infancy at the time).

Over the years, SLAAC and Privacy have been revisited by the IETF:

  • in 2001: RFC 3041 – Privacy Extensions for Stateless Address Autoconfiguration in IPv6 (Obsoleted by: RFC 4941)
  • in 2007: RFC 4941 – Privacy Extensions for Stateless Address Autoconfiguration in IPv6 (Obsoleted by: RFC 8981)
  • in 2021: RFC 8981 – Temporary Address Extensions for Stateless Address Autoconfiguration in IPv6 (current standard)

RFC 3041 – Privacy Extensions for Stateless Address Autoconfiguration in IPv6

In 2001 RFC 3041 introduced the concept of Privacy Extensions for SLAAC. IPv6 already had the concept of multiple addresses, since every interface had a Link-Local, and a Global Address.

An additional address, the Temporary Address, was added. The Temporary Address would use an algorithm to create a randomized IID. A new Temporary Address would be generated on a daily or weekly basis (RFC 3041 is fuzzy on this point), just before the existing one expired.

Temporary Addresses would be used for outbound connections, thus providing some level of privacy, making it “more difficult for eavesdroppers and other information collectors to identify”1 the user.

RFC 4941 – Privacy Extensions for Stateless Address Autoconfiguration in IPv6

The IETF updated the standard for Privacy Extensions in 2007 with RFC 4941. The key differences from 2001 are:

  • Exclude Anycast (RFC2526) addresses from potential Temporary Address pool
  • Add User configuration to enable/disable Temporary Addresses
  • Duplicate Address Detection (DAD) must be run on all new Temporary Addresses, not just the first
  • Default state of Temporary Addressing is disabled
  • Different IIDs for different Prefixes are allowed
  • Algorithm used to generate random IIDs is no longer limited to MD5

Hosts would still use the MAC address (EUI-64) to create a Stable Address, and create Temporary Addresses in addition to the Stable Address.
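
One of the 2007 changes above is user configuration to enable or disable Temporary Addresses. On Linux, this knob is exposed via sysctl; a sketch (the interface name is illustrative):

# use_tempaddr: 0 = disabled, 1 = enabled,
# 2 = enabled and preferred for outbound connections
sysctl net.ipv6.conf.eth0.use_tempaddr=2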

RFC 8981 – Temporary Address Extensions for Stateless Address Autoconfiguration in IPv6

Fast forward to 2021, where IPv6 adoption is running at 35% (based on Google’s IPv6 Stats), and privacy has become a real concern of internet users.

The key changes from the 2007 (RFC 4941) standard:

  • Configuration of Stable Addresses is no longer required; a host is permitted to instantiate only Temporary Addresses
  • Temporary Addresses are now enabled by default
  • Reduces the maximum life time of Temporary Addresses from 1 week to 2 days, and concurrent Temporary Addresses from 7 to 3
  • A lifetime randomization is also introduced; Temporary Addresses will no longer all have the same lifetime

With these changes, a host can now have only Temporary Addresses, and the duration of each will be different. Finally, IPv6 web surfers will have Internet anonymity similar to those hiding behind IPv4 NAT.

Privacy vs Usability

The standard around SLAAC and Privacy has been evolving. As the major OSs implement the latest round of changes (in RFC 8981), the use of Temporary Addresses will increase, since the default has changed to enabled.

In the Enterprise

I see this as introducing some problems with the use of SLAAC. Enterprises, which have non-repudiation requirements, want to be able to link an address with an employee. In the past, DHCPv6 has been the solution, since a host will obtain the same address (or IID) every time, making security and non-repudiation easier. However, not all OS's implement DHCPv6, most notably Android. Android phones, and IoT devices based on Android, will continue to use SLAAC into the future. I don't expect that Enterprise IT will welcome SLAAC Temporary Addresses now being enabled by default on those devices.

In the SOHO (Small Office/Home Office) Networks

A higher level of Privacy is usually desired in smaller networks, where plug-and-play simplicity is important to deployment. I see SLAAC more often deployed in these smaller networks because of that simplicity. But with the option of hosts using only Temporary Addresses, reachability can be an issue. The SOHO network will have to rely on mDNS and Service Discovery to find printers, security cameras, and other IoT devices. Unfortunately, not all IoT devices implement Bonjour/Avahi (Service Discovery protocols).

In long-lived TCP connections

Since Temporary Addresses will be enabled by default, and are used for outbound connections by default, what is to happen to long-lived TCP connections, such as ssh sessions to remote devices?

When Temporary Addresses age out, these long-lived TCP connections are broken. With a single ssh session, there is little effort in logging in again. But what if the host is an IT management station, and there are many ssh TCP sessions, all seemingly randomly aging out? Suddenly, it is no longer fun.

Fortunately, the authors of ssh saw the need to bind an ssh session to a specific source IP address. If the host has a Stable Address, one just needs to bind the session to the Stable Address as a source address. This is done with the -b option, where myhost.example.com is the DNS name of my Stable Address:

ssh -b myhost.example.com remotehost.example.com

Another type of long-lived TCP session is file sharing: mounting a remote server on the local filesystem, making it easy to do file operations. Not only is sshfs an excellent way to do file sharing, but the link is encrypted (since it uses ssh under the covers). However, sshfs will also fall prey to ever-changing Temporary Addresses, breaking those file sharing connections.

Unfortunately, sshfs does not have a -b option, but it does implement ssh's -o options mechanism. One can bind a file sharing session to the local Stable Address with:

sshfs -oBindAddress=myhost.example.com remotehost.example.com:/tmp local_mount_point/ 

Erosion of SLAAC Simplicity

The Privacy pendulum has swung toward the privacy side with the latest SLAAC Temporary Addresses standard (RFC 8981), and only time will tell if more networks gravitate towards, or away from, using SLAAC. SLAAC had the upper hand on simplicity, but the complexity of creating a Semantically Opaque Stable Address (RFC 7217), and managing up to three Temporary Addresses, each with its own lifetime timers, has eroded that simplicity.

I can see use-cases for Temporary-Address-only hosts, such as web-surfing at a coffee shop, but in stable networks, such as Enterprise or SOHO, having Temporary Addresses enabled by default may be a challenge.



Virtual OpenWrt on LXD (redux)

Traffic

Virtual Network in the Palm of your hand

With the latest release of OpenWrt 21.02.0, running a Virtual Router (VR) on Linux Containers (LXD) is much easier than when I wrote about it back in 2019.

The biggest improvement is that Ubuntu has started to build OpenWrt images on their LXD image server. This allows one to skip all the build-your-own-image steps. Ubuntu supports three architectures: x86_64, ARM64, and ARMhf (32-bit). The last runs on my Raspberry Pi 3b+.

It is now possible to launch an OpenWrt Container with one line (almost). However, the Container needs a few fixes after it is launched to work properly.

Motivation, why run OpenWrt in a Container?

Of course, I can run OpenWrt on one of hundreds of real consumer routers, and I do. OpenWrt has excellent IPv6 support, including DHCPv6-PD (prefix delegation), and a really nice web-based firewall configuration.

Why wouldn’t I want that for my virtual machines, as I have for my real ones? Well… I would.

Bridging, the better way to set up LXD

I have been using Linux Containers for a couple of years, and watched people set up LXD in a variety of network configurations, including the default. Unfortunately, even the default network config is not IPv6 friendly.

Setting up a front bridge on the host takes a bit of pre-work, but it is the most transparent way to support IPv6 on your Linux Containers, and also supports running a Virtual OpenWrt router without any additional work.

A front bridge

Bridging is the act of forwarding packets at the ethernet layer. Setting up a front bridge (br0) requires that the host is ethernet-attached to the rest of your network. Wifi cannot be used, as bridging between Wifi and ethernet requires more than a simple cable plug-in.

Virtual router Network

In the diagram above, br0 is what I am calling a front bridge: everything other than the physical ethernet jack is connected to br0, including the host. Depending on your Linux distro, setting this up can be daunting. Systemd doesn't help, as it hasn't really simplified Linux networking.

Configure a front bridge

If you haven’t set up a front bridge, see Configuring systemd for a LXD Front Bridge for the 6 easy steps.
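
As a sketch of where those steps end up (the file names and the physical NIC name eth0 are assumptions; follow the linked post for the full procedure), the systemd-networkd configuration boils down to three small files:

# /etc/systemd/network/br0.netdev -- create the bridge device
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/br0.network -- the host itself lives on br0
[Match]
Name=br0

[Network]
DHCP=yes
IPv6AcceptRA=yes

# /etc/systemd/network/eth0.network -- enslave the physical NIC to br0
[Match]
Name=eth0

[Network]
Bridge=br0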

Install LXD (if you haven’t already)

If you haven’t already installed LXD on your Raspberry Pi or other Linux machine, please look at Linux Containers on the Pi blog post.

Creating LXD profiles

In order for a Linux Container machine to connect to the network, it needs a profile. The default profile connects the Container to lxdbr0 which is not, by default, connected to anything.

I create a profile to connect my Containers to br0 by default, a profile I call extbridge, which looks like:

$ lxc profile show extbridge
config: {}
description: bridged networking LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: extbridge
used_by: []

After I am happy with that profile, I usually just copy it over to the default profile, as most of my Containers are only attached to br0 and get their addressing from the upstream router (both IPv4 & IPv6).

lxc profile copy extbridge default

Creating a profile for OpenWrt

OpenWrt requires two interfaces in order to route. As the earlier diagram shows, the OpenWrt Container will be routing between br0 and lxdbr0.

Interestingly, when I first used OpenWrt 21.02.0 as a container, the interfaces were reversed (from my previous articles). So I created another profile with eth0 as the WAN, and eth1 as the LAN (which I call twointfrev, for reversed).

Create twointfrev profile

lxc profile create twointfrev
lxc profile edit twointfrev
    config: {}
    description: 2 interfaces
    devices:
      eth0:
        name: eth0
        nictype: bridged
        parent: br0
        type: nic
      eth1:
        name: eth1
        nictype: bridged
        parent: lxdbr0
        type: nic
      root:
        path: /
        pool: default
        type: disk
    name: twointfrev
    used_by: []

Launch the OpenWrt container

Finally, we get to the easy part. After all the prep of setting up br0 and the twointfrev profile, launching the container is anticlimactic.

lxc launch -p twointfrev images:openwrt/21.02 router21

LXD will automagically pull down the image from the image server, and create a Container named router21.

Fixing the OpenWrt Container

Unfortunately, you will notice that the WAN (eth0) interface will have IPv4 and IPv6 addresses, while the LAN interface will not.

$ lxc ls router21
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
|   NAME   |  STATE  |       IPV4        |                     IPV6                      |   TYPE    | SNAPSHOTS |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
| router21 | RUNNING | 10.1.1.108 (eth0) |  2001:db8:8011:fd00::599 (eth0)               | CONTAINER | 0         |
|          |         |                   |  2001:db8:8011:fd00:216:3eff:fef1:25d8 (eth0) |           |           |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+

For whatever reason, there are parts missing in this image, most notably the br-lan bridge. Hopefully this will be addressed in future OpenWrt images. But for now, we need to connect to the OpenWrt CLI and do some fixing.

We'll use lxc exec to get a shell on the Container:

lxc exec router21 sh

BusyBox v1.33.1 (2021-08-31 22:20:08 UTC) built-in shell (ash)

~ # 
  1. The following commands will all be done on the OpenWrt CLI. First, create a bridge and br-lan interface. Edit /etc/config/network, and add/edit:
config interface 'wan6'
    option reqprefix 'auto'

config device
    option type 'bridge'
    option name 'br-lan'
    list ports 'eth1'
    option bridge_empty '1'

config interface 'lan'
    option proto 'static'
    option device 'br-lan'
    option ipaddr '192.168.88.1'
    option netmask '255.255.255.0'
    option ip6assign '64'

Note: assign an IPv4 address that works for your network; I chose 192.168.88.1 for mine.

  2. Allow external web management. Edit /etc/config/firewall, and add at the bottom of the file:
config rule               
    option target 'ACCEPT'
    option src 'wan'      
    option proto 'tcp'    
    option dest_port '80' 
    option name 'ext_web' 
  3. Restart networking & firewall for the changes to take effect:
/etc/init.d/network restart
/etc/init.d/firewall restart
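
To confirm the changes took effect, check that br-lan now exists and has picked up addresses (a quick sanity check; the output will vary with your network):

# confirm the bridge came up and has addresses
~ # ip addr show dev br-lan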

OPTIONAL: set a ULA

  4. Global ULA configuration is not available in LuCI, and must be configured manually:
uci set network.globals=globals
uci set network.globals.ula_prefix='fdb5:df0c:2121::/64'
uci commit

You will now see the global ULA on the LAN interface.
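
If the address does not appear immediately, note that uci commit only writes the configuration file; reloading the network service applies it (a hedged note, this was not part of the original steps):

/etc/init.d/network reload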

Exit the shell to return to your LXD host:

exit
$

Managing the OpenWrt Virtual Router (VR) from a Web Browser

Now you should be able to point your web browser at the WAN address (see the eth0 address in the output of lxc ls router21), and log in; the password is blank.

http://[2001:db8:8011:fd00::599]/

Follow the instructions to set a password, and configure the firewall as you like.

Virtual router Network

Managing your shiny new VR

The OpenWrt router should work just like a real one. This includes the warning message you receive the first time you click on Network->Interfaces.

Virtual router Network

This happens on real routers as well, just click on Continue and all will be well.

You should see that the router now has received Prefix Delegation (PD) from the upstream router, and has applied that to the LAN interface.

$ lxc ls router21
+----------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
|   NAME   |  STATE  |         IPV4          |                     IPV6                      |   TYPE    | SNAPSHOTS |
+----------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| router21 | RUNNING | 192.168.88.1 (br-lan) |  2001:db8:8011:fd04::1 (br-lan)               | CONTAINER | 0         |
|          |         | 10.1.1.35 (eth0)      |  2001:db8:8011:fd00::11b (eth0)               |           |           |
|          |         |                       |  2001:db8:8011:fd00:216:3eff:feb7:c2be (eth0) |           |           |
+----------+---------+-----------------------+-----------------------------------------------+-----------+-----------+

Address Stability of OpenWrt on LXD

Because all of this is running on LXD, there is address stability. No matter how many times you reboot the Raspberry Pi/Linux host, or restart containers in a different order, the addresses remain the same. This means the addresses above can be entered into your DNS server without churn.

Excellent IPv6 support

LXD excels at container customization and virtual networking (IPv4 and IPv6). With LXD's flexibility, it is easy to create templates to scale up multiple applications (e.g. a webserver farm running in the palm of your hand). OpenWrt is one of the best open source router projects, and now it can be run virtually as well. Now you have a server farm in the palm of your hand, with excellent IPv6 support and a great firewall!


Notes:

  • some of the screen shots are from a Pi host, and others from an AMD host.
  • IPv6 addresses have been changed to conform with the Documentation prefix (RFC 3849)

Palm Photo by Alie Koshes

Virtual hosting, the IPv6 way

Mesh


Virtual Hosting: The act of hosting several web servers on a single piece of hardware

Before there were VM (Virtual Machines), Containers (light-weight VMs), VPS (Virtual Private Servers), there was a need to host multiple servers on a single machine. In the 1990s the internet was exploding, and everyone wanted to have their own web site. The concept of running multiple web servers on a single piece of hardware was created, and implemented in one of the first open source webservers, Apache.

Virtual Hosting, the Old Way

Apache implemented virtual hosting by having the server examine the HTTP header, looking for the Host Field. Based on the Host Field, the web server would select a specific configuration for that Virtual web server, and serve up files accordingly.

A typical Apache Configuration would be:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName www.makikiweb.com
    DocumentRoot /home/makiki/
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>
</VirtualHost>

The advantage of the Apache VirtualHost was that, back in the days of IPv4, a single IP address could serve many, many websites. This was good for the conservation of IPv4 address space.

However, sharing a single IP address means that you can’t get to the website by IP address alone:

IPv4 only addressing

Virtual Hosting, the IPv6 Way

With a seemingly inexhaustible amount of IPv6 address space, it is time to rethink how we implement Virtual Hosting. We no longer need to share a single IP address among a stable of servers.

It is possible to add many IPv6 addresses to the web server host:

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether b8:27:cb:bf:52:9c brd ff:ff:ff:ff:ff:ff
    inet6 2001:db8:8:78::f03/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 2001:db8:8:78::f02/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 2001:db8:8:78::f01/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 2001:db8:8:78::102/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 2001:db8:8:78::101/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 2001:db8:8:78::100/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 2001:db8:8:78::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::ba27:cbff:febf:529c/64 scope link 
       valid_lft forever preferred_lft forever
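
Adding these addresses is one iproute2 command apiece; a minimal sketch (the address is taken from the listing above, and making it persist across reboots depends on your distro):

# add one global address per virtual server, repeating per address
$ sudo ip addr add 2001:db8:8:78::f03/64 dev eth0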

Now each Virtual web server can have its own IPv6 address. This allows the web server to direct the request to the correct virtual server based on IP address alone. Using a more modern web server such as nginx also allows individual configuration for each virtual web server, such as support for TLS (or not), or Proxy Protocol (or not).

A typical nginx configuration for three virtual hosts would be:

# Server 1 on 2001:db8:8:78::102
server {
    listen [2001:db8:8:78::102]:80 proxy_protocol ;

    # setup to log original client IP address
    # incoming IPv4 requests from two proxies   
    set_real_ip_from  2001:db8:0:80:1000:3b:1:1;
    set_real_ip_from  2001:db8:0:82:1000:3b:1:1;
    real_ip_header    X-Forwarded-For;
    root /home/makiki;
    access_log /var/log/nginx/makiki.log;
    location / {
        index         index.html index.htm;
    }
}

# Server 2 on 2001:db8:8:78::f02
server {
    listen [2001:db8:8:78::f02]:80 ;
    root /home/makiki;
    access_log /var/log/nginx/makiki.log;
    location / {
        index         index.html index.htm;
    }
}

# Server 3 on 2001:db8:8:78::f03
server {
    listen [2001:db8:8:78::f03]:80 ;
    root /home/makikica;
    access_log /var/log/nginx/makikica.log;
    location / {
        index         index.html index.htm;
    }
}

Each Virtual web server is listening on a unique address with specific configuration for that web server.

The example above shows:

  • Servers 1 & 2 simulate a dual-stack service, with Server 1 listening via the IPv4 reverse proxy, and Server 2 listening directly on IPv6. Both are serving the same content (same Document Root), and logging to the same log file. The log file will have both IPv6 and IPv4 addresses (the latter provided by the proxy_protocol).
  • Server 3 is an IPv6-only server, serving different content.

Having an IP address for each web server also eliminates the dreaded “Sorry…” page, as the web server is no longer reliant on the Host Field to direct the request.
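
A quick way to demonstrate this is to fetch a virtual server by its literal IPv6 address, with no reliance on the Host Field at all (curl's -g flag turns off globbing so the brackets pass through):

$ curl -g http://[2001:db8:8:78::f03]/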

Conclusion

When implementing IPv6 networks and services, it is important not to fall into the same IPv4 design constraints and work-arounds of the past. Take some time to think outside the box and create a cleaner design, an IPv6 design.