LXD MACVLAN Containers

Linux Containers now with VMs

I have been running Linux Containers (LXD) for several years now, and I have found them really useful. The key advantages of Linux Containers are:

  • Running a full Linux OS inside a container, which makes troubleshooting much easier than with Docker
  • Cloud-like experience. Cloud computing has taken off not just because it is someone else’s computer, but because it makes spinning up a server easy. Linux Containers provide a similarly easy experience in adding a server: snapshots of containers, transferring containers from one host to another, and quick creation and removal of containers.
  • Secure and scalable. Linux Containers run unprivileged by default, protecting the host system. And because Linux Containers are lightweight, I have been able to scale up to over 30 webserver containers running on a Raspberry Pi 3B+.
  • Excellent IPv6 support. Containers have persistent MAC addresses, and therefore will request the same IPv4 address and form the same SLAAC addresses after every reboot, regardless of container boot order (unlike Docker).

Connecting your Container to the Internet

As an old networking guy, I have in the past taken a network approach to getting my containers connected to the Internet. I would configure a Linux bridge (the front bridge) on the host, and then connect the containers and the host to that bridge. Like this:

Using a Linux Bridge

This method works quite well, allowing your router to provide addresses to your containers, and thus Internet connectivity. However, it is tricky to set up, and there is a risk of cutting off network access to the host when moving the host connection from the Ethernet port to the Linux bridge.

Using MACVLAN interface

There is an easier way which doesn’t require setting up the front bridge or moving the host network connection: the MACVLAN network attachment.

The MACVLAN technique relies on a feature of modern network interfaces (NICs): support for virtual interfaces. With virtual interface support, a single NIC can carry not only multiple IP addresses, but several MAC (Media Access Control) addresses as well.

[Network Diagram with MACVLAN] Using a Linux MACVLAN
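Outside of LXD, the same mechanism can be demonstrated with plain iproute2. This is only an illustration (it requires root, and mvlan0 is a hypothetical name); LXD creates these interfaces for you:

```shell
# Create a MACVLAN child of eth0 with its own, distinct MAC address
ip link add mvlan0 link eth0 type macvlan mode bridge
ip link set mvlan0 up
# The new interface shows up alongside eth0, with a different MAC
ip -brief link show mvlan0
# Remove it again
ip link del mvlan0
```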

Creating LXD Profile for MACVLAN

LXD containers use a profile to determine which resources to attach, such as disk or network. The default LXD profile looks like:

$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []

In the eth0 section, you can see that by default the container will attach to the Linux bridge LXD sets up at init time, called lxdbr0. Unfortunately, that is a bridge to nowhere.

So we’ll create another profile that connects to the host NIC via MACVLAN. First copy the default profile, then change a couple of lines, specifically the nictype and the parent. The parent is the name of the host Ethernet device; on a Raspberry Pi running Pi OS, it is eth0.

$ lxc profile copy default macvlan
$ lxc profile show macvlan
config: {}
description: Default LXD profile
devices:
  eth0:
    nictype: macvlan
    parent: eth0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: macvlan
used_by: []
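If you’d rather not use an interactive editor (lxc profile edit macvlan also works), the same change can be made non-interactively. A sketch, assuming the host NIC is named eth0:

```shell
# Replace the copied bridge device with a MACVLAN one
lxc profile device remove macvlan eth0
lxc profile device add macvlan eth0 nic nictype=macvlan parent=eth0
```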

Using the MACVLAN profile

Now that you have a MACVLAN profile, using it is as simple as launching a new container with the -p macvlan option. For example, firing up a container running Alpine Linux:

$ lxc launch -p macvlan images:alpine/3.16 test
Creating test
Starting test  
$

Look at the running container with the lxc ls command:

$ lxc ls
+---------+---------+------------------------+-----------------------------------------------+-----------+-----------+
|  NAME   |  STATE  |          IPV4          |                     IPV6                      |   TYPE    | SNAPSHOTS |
+---------+---------+------------------------+-----------------------------------------------+-----------+-----------+
| test    | RUNNING | 192.168.215.129 (eth0) | fd73:c73c:444:0:216:3eff:fe65:2b99 (eth0)     | CONTAINER | 0         |
|         |         |                        | 2001:db8:8011:fd44:216:3eff:fe65:2b99 (eth0)  |           |           |
+---------+---------+------------------------+-----------------------------------------------+-----------+-----------+

And you can see that the new container already has IPv4 and IPv6 addresses (from my router). Let’s try a ping from inside the container.

Checking connectivity from inside the MACVLAN attached Container

We’ll step inside the container with the lxc exec command, and ping via IPv6 and IPv4.

$ lxc exec test sh

~ # ping -c 3 he.net
PING he.net (2001:470:0:503::2): 56 data bytes
64 bytes from 2001:470:0:503::2: seq=0 ttl=56 time=34.192 ms
64 bytes from 2001:470:0:503::2: seq=1 ttl=56 time=33.554 ms
64 bytes from 2001:470:0:503::2: seq=2 ttl=56 time=33.959 ms

--- he.net ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 33.554/33.901/34.192 ms

~ # ping -c 3 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: seq=0 ttl=59 time=11.835 ms
64 bytes from 1.1.1.1: seq=1 ttl=59 time=11.933 ms
64 bytes from 1.1.1.1: seq=2 ttl=59 time=11.888 ms

--- 1.1.1.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 11.835/11.885/11.933 ms
~ # 

As you can see, we can get out to the Internet without all the fuss of the old way of setting up a front bridge.

Is there a downside to using MACVLAN?

With it being so easy to connect to your LAN using MACVLAN, are there any downsides?

A key limitation of using MACVLAN is that one must use an Ethernet NIC. To put it another way, you can’t use Wi-Fi with a MACVLAN interface. This is because the MACVLAN interface is a virtual interface, and it creates an additional MAC address for each container.

In a managed Wi-Fi network (which 99% are), the Access Point (AP) will only talk to the MAC address of the client that registered with the AP. A MACVLAN interface will try to use additional MAC addresses, which will be rejected by the AP (since those addresses are not registered with the AP).

That doesn’t mean that you couldn’t try to set up a Wireless Bridge with Wireless Distribution System (WDS), but that is beyond the scope of this article. For now, think of MACVLAN as only using your Ethernet NIC.

OK, but can the Container talk to the LXD Host?

The short answer is NO. But there is a work-around. Because the containers are talking on a virtual interface of the NIC, and the host is on the physical interface, the host doesn’t see the traffic. However, if one adds an additional MACVLAN interface on the host, the containers will be able to communicate with the host.

Fortunately, there is a script to automagically create a MACVLAN interface on the LXD Host. This script is called: lxd_add_macvlan_host.sh.

The script does not make any permanent changes to the host, but rather configures the MACVLAN interface on the fly. If you want this to be permanent, then invoke the script from /etc/rc.local (you may have to enable rc.local if you are using systemd).
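The heart of such a script is only a few commands. A minimal sketch, assuming the host NIC is eth0 (the interface name host-shim is hypothetical, and addressing is via DHCP here):

```shell
# Add a MACVLAN interface on the host so it can reach the containers
ip link add host-shim link eth0 type macvlan mode bridge
ip link set host-shim up
# Obtain an address for it, e.g. with dhclient on Pi OS
# (or assign one statically with `ip addr add`)
dhclient host-shim
```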

Using LXD without installing it first

Fortunately, if you want to learn about Linux Containers before doing an actual install, Ubuntu has set up a cloud service where you can try LXD online. The session is limited to 30 minutes, but it is free.

You can try LXD using the Ubuntu Cloud service.

LXD Try-It

The Linux containers have real, routable IPv6 addresses, which can be accessed from the Internet.

$ ping 2602:fc62:a:2000:216:3eff:fe06:83ba -c3
PING 2602:fc62:a:2000:216:3eff:fe06:83ba(2602:fc62:a:2000:216:3eff:fe06:83ba) 56 data bytes
64 bytes from 2602:fc62:a:2000:216:3eff:fe06:83ba: icmp_seq=1 ttl=50 time=73.8 ms
64 bytes from 2602:fc62:a:2000:216:3eff:fe06:83ba: icmp_seq=2 ttl=50 time=70.8 ms
64 bytes from 2602:fc62:a:2000:216:3eff:fe06:83ba: icmp_seq=3 ttl=50 time=70.8 ms

--- 2602:fc62:a:2000:216:3eff:fe06:83ba ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 70.764/71.777/73.790/1.423 ms

It is designed to let you get familiar with LXD, but you can actually run services (for 30 minutes), such as a web server, or even manage LXD using LXDware over IPv6!

Using LXD on a Pi

Of course, LXD runs on more powerful machines than just Raspberry Pi’s. But you can enjoy the advantages of Linux Containers in your own home by installing it on a Pi. Using MACVLAN interfaces means it just got simpler to install and use Linux Containers.

Running IPv6-only in the SOHO with OpenWrt


While we are still a few years off from turning off IPv4 on the Internet, it is possible to turn off IPv4 in your SOHO (Small Office/Home Office) network, or on one of your SOHO networks. Google reports that over 40% of connections to its search engine are over IPv6. By running an IPv6-only network in the SOHO, you will:

  • Find out what breaks with IPv4 disabled
  • Simplify your firewall rules, since only IPv6 is supported
  • Discover how well most of the Internet works using DNS64/NAT64

I have been running an IPv6-only network in my house for a couple of years now, and it works amazingly well. There are devices, such as my Internet radio, which do not work over IPv6, and I have a dual-stack DMZ for IoT-like devices like that. But the major OSs (Windows, Mac, Linux, iOS, Android) all do IPv6 quite well these days.

Getting from here to there

The key to running IPv6-only, and still browsing the rest of the Internet, is using a transition technology: DNS64/NAT64. This has two parts. The first is a special DNS (Domain Name System) server which will synthesize IPv6 addresses (AAAA records) when a DNS request returns only an A record (IPv4-only).

IPv6 only DNS64

Once your laptop has a synthesized IPv6 address, it will connect through the NAT64 (Network Address Translation, v6 to v4), which does the work of translating the synthesized IPv6 address to the real IPv4 address and sends the packet out on the Internet.

IPv6 only NAT64
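The synthesis itself is mechanical: the four IPv4 octets are embedded in the last 32 bits of the DNS64 prefix. A sketch in shell, using the well-known prefix and the documentation address 192.0.2.1 as an example:

```shell
#!/bin/sh
# Embed an IPv4 address in the DNS64 well-known prefix 64:ff9b::/96
ipv4="192.0.2.1"
IFS=. read -r o1 o2 o3 o4 <<EOF
$ipv4
EOF
synth=$(printf '64:ff9b::%02x%02x:%02x%02x' "$o1" "$o2" "$o3" "$o4")
echo "$synth"   # the kind of AAAA answer a DNS64 server returns
```

Running it prints 64:ff9b::c000:0201, the address an IPv6-only client would then send to the NAT64.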

Using OpenWrt for DNS64/NAT64

Fortunately, you can take any of the thousand or so routers supported by OpenWrt, and run both DNS64 and NAT64. OpenWrt is an actively developed open source project for SOHO routers.

Conveniently, DNS64 and NAT64 can run on the same OpenWrt router.

DNS64

OpenWrt uses Dnsmasq as its DNS/DHCPv4 server by default. Since we’ll be running IPv6-only, we won’t need the DHCPv4 server. I have disabled Dnsmasq and installed the unbound DNS server, specifically because it has a checkbox that enables DNS64, making it the easiest DNS64 installation you will ever do.

unbound dns64

Just click on the Enable DNS64 checkbox, and you are running a DNS64 server.

Unbound’s DNS64 uses the well-known prefix (WKP) of 64:ff9b::/96 by default, but it can be changed to any IPv6 prefix in your network.
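Behind that checkbox, unbound is loading its dns64 module. In raw unbound.conf terms (a sketch; the OpenWrt package generates this configuration for you), the equivalent is:

```
server:
    module-config: "dns64 validator iterator"
    dns64-prefix: 64:ff9b::/96
```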

NAT64

NAT64 on OpenWrt uses a handy packet translation tool called Jool. Jool can do many things, but configuring it for NAT64 requires the command line. SSH into the router and install Jool (on OpenWrt version 22.03):

opkg update
opkg install kmod-jool-netfilter jool-tools-netfilter

Then configure jool

jool instance add --pool6 64:ff9b::/96

You will most likely want this to be persistent across router reboots; therefore add the following to your /etc/rc.local file (before exit 0). This can be done using the web interface, LuCI, under System->Startup->Local Startup.
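For example, /etc/rc.local would end up looking something like this (a sketch):

```
# /etc/rc.local
# Recreate the NAT64 instance on every boot
jool instance add --pool6 64:ff9b::/96
exit 0
```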


Testing IPv6-only

Now that you have DNS64 and NAT64 running on your OpenWrt router, connect a laptop and try to ping an IPv4-only website, such as twitter.com or cira.ca (the Canadian Internet Registration Authority):

$ ping -c2 cira.ca
PING cira.ca(a39c698e40c082be1.awsglobalaccelerator.com (64:ff9b::321:b811)) 56 data bytes
64 bytes from a39c698e40c082be1.awsglobalaccelerator.com (64:ff9b::321:b811): icmp_seq=1 ttl=120 time=13.1 ms
64 bytes from a39c698e40c082be1.awsglobalaccelerator.com (64:ff9b::321:b811): icmp_seq=2 ttl=120 time=14.7 ms

--- cira.ca ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 13.146/13.928/14.710/0.782 ms

Ping exercises both DNS64, to synthesize an IPv6 address (note the WKP 64:ff9b::/96 being used above), and NAT64, to carry the actual ping (ICMP message).

Running IPv6-only

Running IPv6-only is a great way to check out what works (and what doesn’t) on IPv6-only. A few notable apps that don’t work well on IPv6-only:

  • Zoom (Jitsi works just fine on IPv6-only)
  • Many VPN software/providers (Wireguard works well in IPv6-only)

That said it is amazing what does work well on IPv6-only. As a test, this past summer, I put our house guests on the IPv6-only network, and at the end of the week, I asked them if they had run into any problems. They replied they had no problems on the IPv6-only network during the week.

With the help of DNS64 and NAT64 on OpenWrt, you are now ready to run an IPv6-only network in your SOHO network, and see how the future of the Internet will work.


Notes:

  • As of 2022-10-24, jool will report “(Xtables disabled)”; this is normal, as OpenWrt now uses Netfilter
  • To be fair, www.cira.ca has an IPv6 address, but cira.ca does not
  • Be aware that if you are a guest at my house, you will be put onto the IPv6-only network 😉

23 November 2022

Mostly IPv6-only with RFC 8925


Transition: Mostly IPv6-only

I have been thinking about the transition to IPv6 wrong. For some time I have seen IPv6-only as the natural progression from Dual-Stack. But that strategy doesn’t accommodate all the IPv4-only devices (such as IoT) which will be with us for years.

With the standardization of RFC 8925 IPv6-Only Preferred Option for DHCPv4, I see another phase in the transition to IPv6-only. The phase of mostly IPv6-only.

Mostly IPv6-only means that for a given subnet, most of the devices are capable of IPv6, and will use it. But the remaining small number of devices, such as badge readers, HVAC, and the like can still operate using just IPv4.

Signaling an IPv6-only option in IPv4 DHCP

Initially, this sounds counter-intuitive. After all, if a device is capable of IPv6-only, it won’t even need to make a DHCPv4 request, and therefore would not see the IPv6-only option.

However, that is looking at it from the IPv6 side. Looking at it from the IPv4 side, there are advantages to letting devices know that the IPv6-only mechanisms (native IPv6, DNS64/NAT64) are in place, and that those devices which are capable of IPv6-only have no need to use IPv4.

Reintroducing DHCPv4 Option 108

Back in the heyday of DHCP, many options were being allocated for all sorts of useful things, such as:

  • Multicast Assignment through DHCP (was option 100)
  • IPX Compatibility (was option 110)

Option 108 was initially reserved for Swap Path, but was never standardized in an RFC.

RFC 8925 reintroduces Option 108 as IPv6-Only Preferred. IANA (Internet Assigned Numbering Authority) maintains a list of current DHCP Options.

How does DHCPv4 Option 108 work?

If the network supports IPv6-only, then the DHCPv4 server can be configured to include Option 108 in the DHCP offer to the client.

Dynamic Host Configuration Protocol (Offer)
    Message type: Boot Reply (2)
    Hardware type: Ethernet (0x01)
    Hardware address length: 6
    Hops: 0
    Transaction ID: 0xf0c7a14c
...
    Option: (3) Router
        Length: 4
        Router: 192.168.156.1
    Option: (6) Domain Name Server
        Length: 4
        Domain Name Server: 192.168.156.1
    Option: (15) Domain Name
        Length: 3
        Domain Name: lan
    Option: (108) Removed/Unassigned
        Length: 4
        Value: 00000000
    Option: (255) End
        Option End: 255

As you can see, tshark (as well as tcpdump and Wireshark) has not been updated for Option 108 support yet.

Once a client which supports Option 108 sees the option, it will stop requesting an IPv4 address, and the device will become IPv6-only on your mostly IPv6-only network. The DHCPv4 server will not allocate an IPv4 address, and there will be no lease for one.

If the device does not support Option 108, it will ignore it, continue to request an IPv4 address, and operate using IPv4 in your mostly IPv6-only network.
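You can watch this play out on the router itself. A sketch using tcpdump (assuming the LAN bridge is br-lan, as on a default OpenWrt install):

```shell
# Capture DHCPv4 traffic on the LAN to see which clients
# accept the Option 108 offer and which keep requesting IPv4
tcpdump -ni br-lan -vvv 'udp and (port 67 or port 68)'
```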

Advantage of using Option 108

In addition to signaling the devices which can run IPv6-only, the use of Option 108 reduces the demand for IPv4 addresses.

In a Dual-Stack network, all devices will have an IPv4 address. In a Mostly IPv6-only network, only the devices which can not operate in an IPv6-only environment will consume an IPv4 address.

As device software is upgraded, perhaps gaining IPv6-only support, the usage of IPv4 addresses should decline.

Support for Option 108 in use today

There are operating systems which support Option 108 today, quite possibly in your network.

  • iOS 15 (iPhones)
  • MacOS 12.0.1 (Macs)
  • Windows (Not yet)

Sadly, Linux, long used as the test bed for the Internet, does not support Option 108 (as of systemd v251). A feature request has been submitted.

Setting up Option 108 in your network using OpenWrt

Before setting up Option 108, ensure you have DNS64/NAT64 set up to support IPv6-only operation, as nodes responding to the option will become IPv6-only. See this post for info on how to configure OpenWrt for DNS64/NAT64 (ipv6hawaii.org).

Although OpenWrt does not directly support Option 108, it is possible to configure it from the web interface (LuCI).

After logging into the web interface, go to Network->Interfaces->LAN (Edit)->DHCP Server->Advanced Settings:

OpenWrt Interface config

In the DHCP Options field, enter the string 108,0000i. The letter i is required to make Option 108 a 4-byte value (as the RFC requires). Click Save and then Save & Apply.

Or, if you prefer, you can edit the DHCP config file /etc/config/dhcp and add the line to the lan stanza:

config dhcp 'lan'
    option interface 'lan'
    ...
    list dhcp_option '108,0000i'

And restart networking:

/etc/init.d/network restart

Mostly IPv6-only is a good thing

RFC 8925 provides a mechanism that brings us one step closer to an IPv6-only world, while still providing connectivity to the devices which do not yet support IPv6.

While I see this extending the long tail of IPv4, and I would like to see everything IPv6-only, the real goal should be connectivity for all devices on your network. Option 108 helps us get there.


Notes:

  1. Thanks to the Dnsmasq author for his quick response in how to force a 4 byte value for Option 108.


Openwrt and Netfilter


Not Non-Fungible Token, but Netfilter NFT

It has been a long time coming, but nftables has finally arrived in the latest release of OpenWrt (22.03.x). nftables brings parity for IPv6, while improving firewall performance.

The Early Days

A little history: Linux has always had the concept of packet filtering, or a firewall. The original packet filter, ipfw, integrated into Linux in 1994, was based on BSD’s ipfirewall. In Linux version 2.2 (1999), ipchains was introduced, but it was stateless. The Linux firewall evolved again with iptables in version 2.4 (2001), which brought stateful inspection.

iptables continued to evolve, but with separate streams: one for IPv4, another for IPv6 (ip6tables), and yet another for Ethernet (ebtables). Having multiple tables impacted performance and led to kernel code duplication.

NFT

Netfilter’s new packet-filtering utility, nft, replaces the older tools: iptables, ip6tables, arptables and ebtables.

At first look, nft options look quite a bit different from the old iptables.

nft --help
Usage: nft [ options ] [ cmds... ]

Options (general):
  -h, --help                      Show this help
  -v, --version                   Show version information
  -V                              Show extended version information

Options (ruleset input handling):
  -f, --file <filename>           Read input from <filename>
  -D, --define <name=value>       Define variable, e.g. --define foo=1.2.3.4
  -i, --interactive               Read input from interactive CLI
  -I, --includepath <directory>   Add <directory> to the paths searched for include files. Default is: /etc
  -c, --check                     Check commands validity without actually applying the changes.
  -o, --optimize                  Optimize ruleset

Options (ruleset list formatting):
  -a, --handle                    Output rule handle.
  -s, --stateless                 Omit stateful information of ruleset.
  -t, --terse                     Omit contents of sets.
  -S, --service                   Translate ports to service names as described in /etc/services.
  -N, --reversedns                Translate IP addresses to names.
  -u, --guid                      Print UID/GID as defined in /etc/passwd and /etc/group.
  -n, --numeric                   Print fully numerical output.
  -y, --numeric-priority          Print chain priority numerically.
  -p, --numeric-protocol          Print layer 4 protocols numerically.
  -T, --numeric-time              Print time values numerically.

A key difference is that nft is designed to read a whole ruleset from a structured rules file, rather than taking one (sometimes not so simple) rule per command line, as iptables does.
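For those coming from iptables, the iptables-translate utility (shipped alongside the iptables-nft tooling on many distributions; availability on OpenWrt is an assumption) shows the nft equivalent of a familiar iptables rule:

```shell
# Print (without applying) the nft equivalent of an iptables rule
iptables-translate -A INPUT -p tcp --dport 22 -j ACCEPT
```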

The first useful command is to show the tables defined (on OpenWrt). nftables adds a new address family, inet, which applies to both IPv4 and IPv6.

# nft list tables
table inet fw4

Unfortunately, for the newcomer, that doesn’t appear to tell us much. But in fact it is stating that there is a table of family inet with the name fw4. A more informative command shows the chains and rules in the fw4 table:

# nft list table inet fw4
table inet fw4 {
    chain input {
        type filter hook input priority filter; policy accept;
        iifname "lo" accept comment "!fw4: Accept traffic from loopback"
        ct state established,related accept comment "!fw4: Allow inbound established and related flows"
        tcp flags syn / fin,syn,rst,ack jump syn_flood comment "!fw4: Rate limit TCP syn packets"
        iifname "br-lan" jump input_lan comment "!fw4: Handle lan IPv4/IPv6 input traffic"
        iifname "wan" jump input_wan comment "!fw4: Handle wan IPv4/IPv6 input traffic"
    }

    chain forward {
        type filter hook forward priority filter; policy drop;
        ct state established,related accept comment "!fw4: Allow forwarded established and related flows"
        iifname "br-lan" jump forward_lan comment "!fw4: Handle lan IPv4/IPv6 forward traffic"
        iifname "wan" jump forward_wan comment "!fw4: Handle wan IPv4/IPv6 forward traffic"
        jump handle_reject
    }

    chain output {
        type filter hook output priority filter; policy accept;
        oifname "lo" accept comment "!fw4: Accept traffic towards loopback"
        ct state established,related accept comment "!fw4: Allow outbound established and related flows"
        oifname "br-lan" jump output_lan comment "!fw4: Handle lan IPv4/IPv6 output traffic"
        oifname "wan" jump output_wan comment "!fw4: Handle wan IPv4/IPv6 output traffic"
    }

...

    chain forward_wan {
        icmpv6 type { destination-unreachable, time-exceeded, echo-request, echo-reply } limit rate 1000/second counter packets 4 bytes 416 accept comment "!fw4: Allow-ICMPv6-Forward"
        icmpv6 type . icmpv6 code { packet-too-big . no-route, parameter-problem . no-route, parameter-problem . admin-prohibited } limit rate 1000/second counter packets 0 bytes 0 accept comment "!fw4: Allow-ICMPv6-Forward"
        meta l4proto esp counter packets 0 bytes 0 jump accept_to_lan comment "!fw4: Allow-IPSec-ESP"
        udp dport 500 counter packets 0 bytes 0 jump accept_to_lan comment "!fw4: Allow-ISAKMP"
        meta nfproto ipv6 tcp dport { 22, 80, 443 } counter packets 1 bytes 80 jump accept_to_lan comment "!fw4: ext_mgmt_fwd"
        meta nfproto ipv6 udp dport { 22, 80, 443 } counter packets 0 bytes 0 jump accept_to_lan comment "!fw4: ext_mgmt_fwd"
        jump reject_to_wan
    }

...
    chain mangle_forward {
        type filter hook forward priority mangle; policy accept;
        iifname "wan" tcp flags syn tcp option maxseg size set rt mtu comment "!fw4: Zone wan IPv4/IPv6 ingress MTU fixing"
        oifname "wan" tcp flags syn tcp option maxseg size set rt mtu comment "!fw4: Zone wan IPv4/IPv6 egress MTU fixing"
    }
}

As you can see, there are both IPv4 and IPv6 rules in the one nftables table. The nft man page is long, but full of good info.
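To experiment without touching the fw4 table, you can create your own table in the inet family; a single rule there matches both IPv4 and IPv6. A sketch (requires root; the table name demo and port 2222 are arbitrary):

```shell
# A scratch table in the dual-protocol inet family
nft add table inet demo
nft 'add chain inet demo input { type filter hook input priority 0; policy accept; }'
nft add rule inet demo input tcp dport 2222 counter accept
nft list table inet demo
# Clean up
nft delete table inet demo
```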

Firewall abstraction of OpenWrt

Fortunately, the developers of OpenWrt have kept the familiar web interface (LuCI) for firewall configuration, and the user doesn’t have to know that nft is now managing everything, rather than the older tools: iptables, ip6tables, arptables and ebtables.

OpenWrt Firewall Configuration

Conclusion

Why nftables? It reduces kernel code duplication, executes more efficiently (and thus performs better), and makes changes to filter rules atomic.

Why OpenWrt? With the recent advent of malware targeting Small Office, Home Office [SOHO] routers, such as ZuoRAT, it is good to see an alternative to that old OEM router software, in a project which is active, and responding to security vulnerabilities.

nft has been around since 2014. Finally, with the nftables address family inet, IPv6 has equal status with IPv4. And with OpenWrt (22.03.x), your SOHO router will take advantage of a 21st-century firewall.


Photo Credit: NFT mania is here, and so are the scammers by Marco Verch under Creative Commons 2.0

Originally posted on www.makiki.ca (IPv6-only website)


Managing Linux Containers with LXD Dashboard


Server Farm in the Palm of your hand

In the past I have written about Linux Containers (LXD), a light-weight virtualization for Linux, and how it is much more IPv6-friendly than Docker. But until now, managing LXD has meant the CLI command lxc.

There are other LXD GUI management projects, but LXD Dashboard not only runs in a container on a host that is itself managed by LXD Dashboard, it can also manage LXD on remote hosts.

IPv6 Friendly

LXD is IPv6-friendly in that containers will obtain a SLAAC and/or DHCPv6 address, and get the same address after container restarts, or even through LXD host reboots.

This makes it easy to create a DNS entry for the Linux Container, since the automatically created IPv6 address is pretty much static.
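For example, with a BIND-style zone file, the entry is a single AAAA record (the zone and name here are hypothetical), using the alpine container’s address from the lxc ls listing that follows:

```
; AAAA record for the container -- the address stays stable across restarts
alpine    IN    AAAA    2001:db8:ebbd:2080:216:3eff:fecf:bef5
```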

LXD Interface

LXD is actually two parts: the lxd daemon, and the lxc CLI client which makes calls to the lxd daemon. This allows one to list, for example, the Linux Containers which are running (or stopped) on a specific host.

$ lxc ls
+--------+---------+------------------------+----------------------------------------------+------------+-----------+
|  NAME  |  STATE  |          IPV4          |                     IPV6                     |    TYPE    | SNAPSHOTS |
+--------+---------+------------------------+----------------------------------------------+------------+-----------+
| alpine | RUNNING | 192.168.215.104 (eth0) | fd6a:c19d:b07:2080:216:3eff:fecf:bef5 (eth0) | PERSISTENT | 0         |
|        |         |                        | 2001:db8:ebbd:2080:216:3eff:fecf:bef5 (eth0) |            |           |
+--------+---------+------------------------+----------------------------------------------+------------+-----------+
| w10    | RUNNING | 192.168.215.225 (eth0) | fd6a:c19d:b07:2080:216:3eff:feb2:f03d (eth0) | PERSISTENT | 0         |
|        |         |                        | 2001:db8:ebbd:2080:216:3eff:feb2:f03d (eth0) |            |           |
+--------+---------+------------------------+----------------------------------------------+------------+-----------+
| w2     | RUNNING | 192.168.215.232 (eth0) | fd6a:c19d:b07:2080:216:3eff:fe7f:b6a5 (eth0) | PERSISTENT | 0         |
|        |         |                        | 2001:db8:ebbd:2080:216:3eff:fe7f:b6a5 (eth0) |            |           |
+--------+---------+------------------------+----------------------------------------------+------------+-----------+
| w3     | RUNNING | 192.168.215.208 (eth0) | fd6a:c19d:b07:2080:216:3eff:fe63:4544 (eth0) | PERSISTENT | 0         |
|        |         |                        | 2001:db8:ebbd:2080:216:3eff:fe63:4544 (eth0) |            |           |
+--------+---------+------------------------+----------------------------------------------+------------+-----------+

Until now the CLI has been the way to manage LXD containers.

LXD secure management API

The LXD daemon has elevated privileges, since it manipulates routing tables and such to make networking work for the containers. A secure socket can be enabled for remote management, usually on port 8443. To enable it, use the following command:

lxc config set core.https_address [::]:8443

It is possible to set a management password, but it is more secure to use a certificate, which I’ll discuss later.

Conveniently, the LXD daemon listens on both IPv4 and IPv6.

Web-based LXD Managment with LXD Dashboard

There is an actively developed project by LXDware called LXD Dashboard. The Dashboard runs inside a Linux Container, and although it is recommended that one use an Ubuntu container, I find Alpine containers to be much smaller and faster to load.

After working with the author, he wrote up my notes as a nice how-to install on Alpine. There are some additional libraries which are needed under Alpine Linux. The how-to is pretty much a copy/paste of the command lines needed to install the current release (v3.4 at the time of this writing) in an Alpine container.

After creating the container, I copy/paste its IPv6 address into my DNS, so I need only reference it by name thereafter. Since Linux Containers keep the same MAC and IPv6 address, even after restarts, you only need to update the DNS once.

LXD Dashboard: first steps

Once the Dashboard is installed in a Linux Container, and you have nginx and php-fpm up and running, it is time to point your web browser at the Linux Container. Since I use DNS, I just enter http://lxdware/ into the browser.

Initial Registration screen

LXD Dashboard will present an initial registration screen where you can create a login. Be sure to make a note of your username and password; this will become the master admin user. After logging in (below), you can add more users.

Registration

Logging into LXD Dashboard

Once you have registered, you can now log into the Dashboard using the same username and password entered at registration.

Registration

Adding Additional Users to LXD Dashboard

After logging in, you can add more users by clicking on your login name in the upper right-hand corner, which opens a menu; select Settings.

Once in Settings, you can add additional users, which can belong to predefined groups, or add your own groups. The LXDware site has more info on Role-Based Access Control (RBAC).

Settings

Other parameters such as adding your own certificates, or setting refresh timers can be adjusted in the Settings section.

Adding LXD hosts

There isn’t much to do with the Dashboard until you add one or more LXD hosts. It is here where we will use the Certificate method of accessing the LXD daemons. The steps are:

  1. Copy/Paste the LXD Dashboard Certificate to a file
  2. Transfer that file to the LXD Host
  3. Use lxc config trust add <cert file> command to add the LXD Dashboard certificate to the LXD Host
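The steps above boil down to a single copy and a single command. A sketch, with hypothetical user and host names:

```shell
# Copy the Dashboard certificate to the LXD host, then trust it there
scp lxddashboard.crt pi@lxdhost:/tmp/
ssh pi@lxdhost lxc config trust add /tmp/lxddashboard.crt
```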

Copying the LXD Dashboard Certificate

After logging in to LXD Dashboard, click on the View Certificate button to view the certificate. Copy, then paste it into a file, and name it something like lxddashboard.crt.

Certificate

Transfer the Cert to the LXD Host

Use an IPv6-friendly tool, like scp, to copy the certificate file to the remote LXD host. Place it somewhere convenient, like /tmp/.

Add the Cert to the LXD Host

After sshing to the remote LXD host, issue the following command to add the certificate to the LXD daemon configuration:

lxc config trust add /tmp/lxddashboard.crt

Back on LXD Dashboard, add the remote LXD Host

Now that the remote host is listening on port 8443 and has the certificate from LXD Dashboard, it is time to add the host to the Dashboard. Click on the +Add Host button in the upper right.

Add Hosts

Fill in the info about the host. Since IPv6 is well supported, just enter the DNS name of your IPv6 Host. Since we are using IPv6, we can ignore “External Address & Port” (IPv4 NAT items).

If you have more than one LXD Host, just click +Add Host again, and keep adding your LXD Hosts (be sure to add the Cert to the host first).

Start managing LXD with the Dashboard

Now that you have your LXD hosts added, you are ready to start/stop/launch containers. First let’s drill down on one of the LXD Hosts in your list.

Showing Host

Paikea is a Raspberry Pi with 15 containers configured. Information about the host is shown on the bottom part of the screen.

Getting down to the containers

Clicking on Containers will switch the display to a list of the containers running on my host Paikea. Be patient! Raspberry Pis are not the fastest machines on the planet, and LXD Dashboard asks for a lot of information from the LXD host.

listing containers

On the right side of each container line is the status (stopped/running) and a triangle/square button which will start/stop the container.

Looking at a single container with LXD Dashboard

Continuing to drill down, by clicking on a container name, it is possible to see more detail for that particular container, including how many processes are running inside the container and memory used.

single container

Along the top are menu options to configure the container: interfaces, snapshots, etc. It is also possible to exec into the container, which pops up a black screen and logs you into the container as root. This is all done using the LXD API over IPv6!

exec to a container

Above is an exec session to an OpenWrt Router running in a container

IPv6-enabled LXD Dashboard, ready for prime time

I have only touched upon basic container management with LXD Dashboard, but there is much more that one can do. Bringing a friendly web interface to LXD that works well over IPv6 is a welcome step.

I have watched LXD Dashboard improve over the past year. The development is active, and the author welcomes suggestions for future versions. LXD Dashboard in a dual stack or IPv6-only network is a welcome addition to your Linux Container toolbox.


Happy Boys Day (5 May)

RIPng: routing for the SOHO (Redux)

 

Routers

RIPng guiding the packet flows

It has been a couple of years since I last wrote about RIPng. It has been running quietly and efficiently in my SOHO (Small Office/Home Office) network. Sure, there are better routing protocols, such as OSPFv3 or IS-IS, which are the work-horses of the Enterprise, but they also have dedicated network engineers managing them. The ease of deployment makes RIPng the perfect IPv6 routing protocol for non-network experts: just plug in and go.

BIRD: Internet Routing Daemon

BIRD is an open source routing daemon which supports many routing protocols such as RIPng, Babel, OSPF, and iBGP. It runs on Linux, FreeBSD, NetBSD, and OpenBSD.

In my last RIPng article, BIRD was at version 1.6, and the examples are for that version. In December 2017, version 2 was released, but I found issues with configuring RIPng and waited until some of them could be resolved. Now BIRD has released version 2.0.8, and it integrates well with my existing 1.6 network.

BIRD & OpenWrt

I have been running BIRD 1.6 on my OpenWrt routers for years, but I wanted to try the newer version 2.0.8. Fortunately, the devs at OpenWrt build both versions, and it is easy to install using OpenWrt’s software manager.

Unlike BIRD 1.6, there are no separate packages for IPv4 and IPv6; BIRD 2 supports both. Because the OpenWrt software manager automatically handles dependencies, I usually just install the user-space CLI tool bird2cl, which will pull in the bird daemon as well.

BIRD can be installed using the OpenWrt web interface, or, after ssh-ing to the router, by running the following:

opkg update
opkg install bird2cl

Editing files in OpenWrt using nano

As in most Linux distros, the vi editor is included in the base system. But vi has cryptic commands and can be daunting to the new user.

Fortunately, there is a simpler, user-friendly editor called nano, which only needs to be installed:

opkg install nano

There are many tutorials on the internet on how to use nano, but the official documentation is always a good place to start.

Network

Editing bird.conf with nano

Configuring BIRD for RIPng

There is no web interface for configuring RIPng, and the following must be done via an ssh session. But once it is done, you should not need to change the configuration in the future.

Unfortunately, the example /etc/bird.conf file which is installed by default is full of examples for the other supported protocols, but is pretty scarce on RIPng. The easiest thing to do is to log into your router with ssh and replace it with this example:

# EXAMPLE BIRD RIPng Config
# Required for kernel local routes to be exported to RIPng
protocol kernel {
    ipv6 {
        export all;     # Default is export none
    };
}

# Required to get info about Net Interfaces from Kernel
protocol device {
}

#advertises directly connected interfaces to upstream
protocol direct {
    ipv6;
    interface "*";
}

# Configure RIPng in Bird
protocol rip ng {
    ipv6 {
        import all;
        export all;
    };
    interface "*" {
        mode multicast;
    };
}

The key to telling BIRD that this is RIPng is the protocol rip ng line. The ng tells BIRD to use the IPv6 version of RIP.

It is possible to refine the interfaces, so that RIPng routing announcements aren’t being sent (and then dropped) to your ISP. But using interface "*" makes this config work for all routers in your SOHO network.

If you want to exclude the upstream interface (called wan on OpenWrt), use a line like interface "eth0","br-lan".
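
For example, to speak RIPng only on explicitly listed interfaces, the protocol block from the example config above would become (a sketch; adjust the interface names to match your router):

```
# RIPng on explicit interfaces only (sketch)
protocol rip ng {
    ipv6 {
        import all;
        export all;
    };
    interface "eth0","br-lan" {
        mode multicast;
    };
}
```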

Configuring your Firewall for RIPng

Just like last time, the default policy on OpenWrt is to block in-bound packets from the wan (or upstream interface). So a firewall rule must be created to allow RIPng packets to pass. This is the same as with version 1.6.

Append the following to /etc/config/firewall

config rule
        option name 'RIPng'
        option family 'ipv6'
        list proto 'udp'
        option src 'wan'
        list src_ip 'fe80::/10'
        option dest_port '521'
        option target 'ACCEPT'

Starting RIPng

Now that you have the configuration file in place, and the firewall ready, you can start BIRD running RIPng on your router.

/etc/init.d/bird restart

That’s it! BIRD is now running RIPng on your network.
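
If you also want BIRD to start automatically after a reboot, enable the service (standard OpenWrt init-script handling):

```
/etc/init.d/bird enable
```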

Looking a little deeper into your RIPng network (optional)

Although RIPng is pretty much a start-and-forget routing protocol, there is a nice troubleshooting tool, birdcl, to peek under the covers. It will show the key aspects of RIPng:

  • Interfaces running RIPng
  • Peers or other routers your RIPng is talking to
  • The routing table, which routes have been learned by RIPng

Using the CLI tool, birdcl, it is easy to see how RIPng is working.

# birdcl 
BIRD 2.0.8 ready.
bird> 

Helpful commands show the interfaces enabled for RIPng and how many neighbours (other routers running RIPng) have been found.

bird> show rip int
rip1:
Interface  State  Metric   Nbrs   Timer
eth0       Up          1      0  24.311
wan        Up          1      3   8.381
br-lan     Up          1      0   0.961

Displaying the RIPng neighbours provides more info:

bird> show rip neig
rip1:
IP address                Interface  Metric Routes    Seen
fe80::2ac6:8eff:fe16:19d7 wan             1     23  22.898
fe80::216:3eff:fe28:54f0  wan             1      2  26.902
fe80::7683:c2ff:fe61:fd60 wan             1      6  21.931

As you can see, there are 3 other routers running RIPng, all upstream on the wan interface. RIPng uses IPv6 link-local addresses, so it is a good idea to keep a cheat-sheet of your routers’ link-local addresses handy, which will make it easier to understand which routers are peers/neighbours.

And of course you can use birdcl to show the routes in your network as well.

bird> show route
Table master6:
::/0                 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd60::/60 unicast [rip1 09:34:47.149] * (120/2)
        via fe80::7683:c2ff:fe61:fd60 on wan
2001:db8:8011:fd94::/64 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd80::/64 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd44::/62 unicast [rip1 09:34:47.143] * (120/2)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd44::fb0/128 unicast [direct1 09:34:47.139] * (240)
        dev wan
2001:db8:8011:fd04::/62 unicast [rip1 09:34:47.143] * (120/3)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd00::/56 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd11::/64 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd00::/64 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd44::a1b/128 unicast [rip1 09:34:47.149] * (120/2)
        via fe80::7683:c2ff:fe61:fd60 on wan
2001:db8:8011:fd40::/64 unicast [rip1 09:34:47.143] * (120/3)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd47::/64 unicast [rip1 09:34:47.143] * (120/2)
        via fe80::ea9f:80ff:feec:d5f3 on wan
2001:db8:8011:fd46::/64 unicast [rip1 09:34:47.149] * (120/2)
        via fe80::216:3eff:fe28:54f0 on wan
2001:db8:8011:fd45::/64 unicast [direct1 09:34:47.139] * (240)
        dev br-lan
2001:db8:8011:fd44::/64 unicast [direct1 09:34:47.139] * (240)
        dev wan
                     unicast [rip1 09:34:47.149] (120/2)
        via fe80::7683:c2ff:fe61:fd60 on wan
2001:db8:8011:fd80::/62 unicast [rip1 09:34:47.143] * (120/3)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd84::/62 unicast [rip1 09:34:47.143] * (120/3)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd88::/61 unicast [rip1 09:34:47.143] * (120/3)
        via fe80::2ac6:8eff:fe16:19d7 on wan
2001:db8:8011:fd90::/60 unicast [rip1 09:34:47.143] * (120/4)
        via fe80::2ac6:8eff:fe16:19d7 on wan

The top entry, ::/0, is the default route pointing to the upstream router fe80::2ac6:8eff:fe16:19d7 on the wan interface. This is the path packets will take to get to the internet.

The numbers in parentheses, e.g. (120/3), are the route preference (120 is BIRD’s default for RIP routes) and the metric, which indicates how many router hops away that network is. As you can see, the furthest network is 4 hops away from this router.

But unless you need to troubleshoot your network, or are just curious about how RIPng works, you shouldn’t need to run birdcl. After all, RIPng is basically a plug-and-play routing protocol.

BIRD v2.0.8 & RIPng is stable and ready for prime time

Earlier versions of BIRD v2 had interoperability problems with BIRD v1, but those are now in the rear-view mirror. RIPng is very easy to set up, once you have an example config file. It is a tried and true routing protocol that gets the job done, making a multi-router SOHO network easy to stand up and maintain.

RIPng will be quietly keeping your network going while you worry about the real problems in the world.

Originally posted on www.makiki.ca (IPv6-only)
Updated on 13 April 2022 – fixed example bird.conf

 

Privacy, SLAAC & RFC 8981

Routers

The Changing nature of SLAAC

It has been six (6) years since I last wrote about SLAAC. With the latest standard, RFC 8981, it is time to revisit it, and understand how it has evolved over the years.

Stateless Address Auto Configuration (SLAAC) was a novel improvement over the static address configuration of IPv4 in the late 1990s. It rose out of the interesting quandary of how to create a globally unique address without the help of any servers. In the early days of SLAAC, the interface MAC address, expanded to an EUI-64, was used to create the IID (Interface ID), the last 64 bits of the IPv6 address.
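
The EUI-64 expansion is simple enough to sketch in a few lines of Python (illustrative only; the example MAC is from a Raspberry Pi): the universal/local bit of the first octet is flipped, and ff:fe is inserted in the middle.

```python
def mac_to_eui64_iid(mac: str) -> str:
    """Expand a 48-bit MAC into the EUI-64-based IID used by classic SLAAC."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                             # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    return ":".join(f"{eui[i]:02x}{eui[i + 1]:02x}" for i in range(0, 8, 2))

print(mac_to_eui64_iid("b8:27:cb:bf:52:9c"))  # -> ba27:cbff:febf:529c
```

Prepending fe80:: to the result gives the familiar link-local address; prepending the advertised prefix gives the global SLAAC address.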

History of adding Privacy to SLAAC

Privacy issues arose rather quickly, since MAC addresses are burned into the interface card, and therefore do not change. The concern was that with a static global IPv6 address, users could be easily tracked (web-based cookies were still in their infancy at the time).

Over the years, SLAAC and Privacy have been revisited by the IETF:

  • in 2001: RFC 3041 – Privacy Extensions for Stateless Address Autoconfiguration in IPv6 (Obsoleted by: RFC 4941)
  • in 2007: RFC 4941 – Privacy Extensions for Stateless Address Autoconfiguration in IPv6 (Obsoleted by: RFC 8981)
  • in 2021: RFC 8981 – Temporary Address Extensions for Stateless Address Autoconfiguration in IPv6 (current standard)

RFC 3041 – Privacy Extensions for Stateless Address Autoconfiguration in IPv6

In 2001 RFC 3041 introduced the concept of Privacy Extensions for SLAAC. IPv6 already had the concept of multiple addresses, since every interface had a Link-Local, and a Global Address.

An additional address, the Temporary Address, was added. The Temporary Address would use an algorithm to create a randomized IID. A new Temporary Address would be generated on a daily or weekly basis (RFC 3041 is fuzzy on this point), just before the existing one expired.

Temporary Addresses would be used for outbound connections, thus providing some level of privacy and “makes it more difficult for eavesdroppers and other information collectors to identify”1 the user.
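
A minimal sketch of generating such a randomized IID (hypothetical code: RFC 3041 derived the value from an MD5 hash of a stored history value, while this sketch simply draws random bits; real implementations also avoid reserved IID values):

```python
import secrets

def random_iid() -> str:
    """Generate a randomized 64-bit IID for a Temporary Address (sketch)."""
    iid = bytearray(secrets.token_bytes(8))
    iid[0] &= 0xFD  # clear the universal/local bit: locally generated, not a real EUI-64
    return ":".join(f"{iid[i]:02x}{iid[i + 1]:02x}" for i in range(0, 8, 2))
```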

RFC 4941 – Privacy Extensions for Stateless Address Autoconfiguration in IPv6

The IETF updated the standard for using Privacy Extensions in 2007 with RFC 4941. The key differences from 2001 are:

  • Exclude Anycast (RFC2526) addresses from potential Temporary Address pool
  • Add User configuration to enable/disable Temporary Addresses
  • Duplicate Address Detection (DAD) must be run on all new Temporary Addresses, not just the first
  • Default state of Temporary Addressing is disabled
  • Different IIDs for different Prefixes are allowed
  • Algorithm used to generate random IIDs is no longer limited to MD5

Hosts would still use the MAC address (EUI-64) to create a Stable Address, and create Temporary Addresses in addition to the Stable Address.

RFC 8981 – Temporary Address Extensions for Stateless Address Autoconfiguration in IPv6

Fast forward to 2021, where IPv6 adoption is running at 35% (based on Google’s IPv6 Stats), and privacy has become a real concern of internet users.

The key changes from the 2007 (RFC 4941) standard:

  • Configuration of Stable Addresses is no longer required; a host is permitted to instantiate only Temporary Addresses
  • Temporary Addresses are now enabled by default
  • Reduces the maximum lifetime of Temporary Addresses from 1 week to 2 days, and concurrent Temporary Addresses from 7 to 3
  • Lifetime randomization is also introduced; Temporary Addresses will no longer all have the same lifetime
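
The lifetime randomization can be sketched with the default constants from RFC 8981 (the exact values are taken from the RFC's defaults; check the RFC for the full regeneration rules):

```python
import random

# Default constants from RFC 8981, in seconds
TEMP_VALID_LIFETIME = 2 * 24 * 3600      # 2 days (down from 1 week in RFC 4941)
TEMP_PREFERRED_LIFETIME = 1 * 24 * 3600  # 1 day
MAX_DESYNC_FACTOR = int(0.4 * TEMP_PREFERRED_LIFETIME)

def temp_preferred_lifetime() -> int:
    """Each new Temporary Address gets a randomized preferred lifetime."""
    desync_factor = random.randint(0, MAX_DESYNC_FACTOR)
    return TEMP_PREFERRED_LIFETIME - desync_factor
```

Because the desync factor differs per address, concurrent Temporary Addresses age out at different times.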

With these changes, a host can now have only Temporary Addresses, and the duration of each will be different. Finally, IPv6 web surfers will have Internet anonymity similar to those hiding behind IPv4 NAT.

Privacy vs Usability

The standard around SLAAC and Privacy has been evolving. As the major OS’s implement the latest round of changes (in RFC 8981), the use of Temporary Addresses will increase, since the default has changed to enabled.

In the Enterprise

I see this as introducing some problems with the use of SLAAC. Enterprises, which have non-repudiation requirements, want to be able to link an Address with an employee. In the past, DHCPv6 has been the solution, since a host will obtain the same address (or IID) every time, making security and non-repudiation easier. However, not all OS’s implement DHCPv6, most notably Android. Android phones, and IoT devices based on Android, will continue to use SLAAC into the future. I don’t expect that Enterprise IT will welcome the change that SLAAC Temporary Addresses on devices are now enabled by default.

In the SOHO (Small Office/Home Office) Networks

A higher level of Privacy is usually desired in smaller networks, where plug-and-play is important to deployment. I see SLAAC more often deployed in these smaller networks because of its simplicity. But with the option of hosts using only Temporary Addresses, reachability can be an issue. The SOHO network will have to rely on mDNS and Service Discovery to find printers, security cameras, and other IoT devices. Unfortunately, not all IoT will implement Bonjour/Avahi (Service Discovery protocols).

In long-lived TCP connections

Since Temporary Addresses will be enabled by default, and are used for out-bound connections by default, what is to happen to long-lived TCP connections, such as ssh to remote devices?

When Temporary Addresses age out, these long-lived TCP connections are broken. With a single ssh session, there is little effort in logging in again. But what if the host is an IT management station, and there are many ssh TCP sessions, all seemingly randomly aging out? Suddenly, it is no longer fun.

Fortunately, the authors of ssh saw the need to bind an ssh session to a specific source IP address. If the host has a Stable Address, one just needs to bind the session to the Stable Address as the source address. This is done with the -b option, where myhost.example.com is the DNS name of my Stable Address:

ssh -b myhost.example.com remotehost.example.com

Another type of long-lived TCP session is file sharing, or mounting a remote server on the local filesystem, making it easy to do file operations. sshfs is not only an excellent way to do file sharing, but the link is encrypted (since it uses ssh under the covers). However, sshfs will also fall prey to ever-changing Temporary Addresses, breaking those file sharing connections.

Unfortunately, sshfs does not have a -b option, but it does implement ssh‘s -o options mechanism. One can bind a file share to a local Stable Address with:

sshfs -oBindAddress=myhost.example.com remotehost.example.com:/tmp local_mount_point/ 

Erosion of SLAAC Simplicity

The Privacy pendulum has swung to the more private side with the latest SLAAC Temporary Addresses standard (RFC 8981), and only time will tell if more networks gravitate towards, or away from, using SLAAC. SLAAC had the upper hand on simplicity, but the complexity of creating a Semantically Opaque Stable Address (RFC 7217), and managing up to three Temporary Addresses, each with its own lifetime timer, has eroded that simplicity.

I can see use-cases for Temporary-Address-only hosts, such as web-surfing at a coffee shop, but in stable networks, such as Enterprise or SOHO, having Temporary Addresses enabled by default may be a challenge.


Notes:

Virtual OpenWrt on LXD (redux)

Traffic

Virtual Network in the Palm of your hand

With the latest release of OpenWrt 21.02.0, running a Virtual Router (VR) on Linux Containers (LXD) is much easier than when I wrote about it back in 2019.

The biggest improvement is that Ubuntu has started to build OpenWrt images on their LXD image server. This allows one to skip all the build-your-own-image steps. Ubuntu supports three architectures: x86_64, ARM64, and ARMhf (32-bit). The last runs on my Raspberry Pi 3B+.

It is now possible to launch an OpenWrt Container with one line (almost). However, the Container needs a few fixes after it is launched to work properly.

Motivation, why run OpenWrt in a Container?

Of course, I can run OpenWrt on one of hundreds of real consumer routers, and I do. OpenWrt has excellent IPv6 support, including DHCPv6-PD (prefix delegation), and a really nice web-based firewall configuration.

Why wouldn’t I want that for my virtual machines, as I have for my real ones? Well… I would.

Bridging, the better way to setup LXD

I have been using Linux Containers for a couple of years, and watched people set up LXD in a variety of network configurations, including the default. Unfortunately, even the default network config is not IPv6 friendly.

Setting up a front bridge on the host takes a bit of pre-work, but it is the most transparent way to support IPv6 on your Linux Containers, and also supports running a Virtual OpenWrt router without any additional work.

A front bridge

Bridging is the act of forwarding packets at the ethernet layer. Setting up a front bridge (br0) requires that the host is ethernet attached to the rest of your network. Wifi can not be used, as bridging between Wifi and ethernet requires more than a simple cable plug-in.

Virtual router Network

In the diagram above, br0 is what I am calling a front bridge; everything other than the physical ethernet jack is connected to br0, including the host. Depending on your Linux distro, setting this up can be daunting. Systemd doesn’t help, as it hasn’t really simplified Linux networking.

Configure a front bridge

If you haven’t set up a front bridge, see Configuring systemd for a LXD Front Bridge for the 6 easy steps.

Install LXD (if you haven’t already)

If you haven’t already installed LXD on your Raspberry Pi or other Linux machine, please look at the Linux Containers on the Pi blog post.

Creating LXD profiles

In order for a Linux Container machine to connect to the network, it needs a profile. The default profile connects the Container to lxdbr0 which is not, by default, connected to anything.

I create a profile to connect my Containers to br0 by default, a profile I call extbridge, which looks like:

$ lxc profile show extbridge
config: {}
description: bridged networking LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: extbridge
used_by: []

After I am happy with that profile, I usually just copy it over to the default profile, as most of my Containers are only attached to br0 and get their addressing from the upstream router (both IPv4 & IPv6).

lxc profile copy extbridge default

Creating a profile for OpenWrt

OpenWrt requires two interfaces in order to route. As the earlier diagram shows, the OpenWrt container will be routing between br0 and lxdbr0.

Interestingly, when I first used OpenWrt 21.02.0 as a container, the interfaces were reversed (from my previous articles). So I created another profile with eth0 as the WAN and eth1 as the LAN, which I call twointfrev (for reversed).

Create twointfrev profile

lxc profile create twointfrev
lxc profile edit twointfrev
    config: {}
    description: 2 interfaces
    devices:
      eth0:
        name: eth0
        nictype: bridged
        parent: br0
        type: nic
      eth1:
        name: eth1
        nictype: bridged
        parent: lxdbr0
        type: nic
      root:
        path: /
        pool: default
        type: disk
    name: twointfrev
    used_by: []

Launch the OpenWrt container

Finally, we get to the easy part. After all the prep of setting up br0 and the twointfrev profile, launching the container is anticlimactic.

lxc launch -p twointfrev images:openwrt/21.02 router21

LXD will automagically pull down the image from the image server, and create a Container named router21.

Fixing the OpenWrt Container

Unfortunately, you will notice that the WAN (eth0) interface has IPv4 and IPv6 addresses, but the LAN (eth1) interface does not.

$ lxc ls router21
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
|   NAME   |  STATE  |       IPV4        |                     IPV6                      |   TYPE    | SNAPSHOTS |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
| router21 | RUNNING | 10.1.1.108 (eth0) |  2001:db8:8011:fd00::599 (eth0)               | CONTAINER | 0         |
|          |         |                   |  2001:db8:8011:fd00:216:3eff:fef1:25d8 (eth0) |           |           |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+

For whatever reason, there are parts missing in this image, most notably the br-lan bridge. Hopefully this will be addressed in future OpenWrt images. But for now, we need to connect to the OpenWrt CLI and do some fixing.

We’ll use LXD’s console access:

lxc exec router21 sh

BusyBox v1.33.1 (2021-08-31 22:20:08 UTC) built-in shell (ash)

~ # 
  1. The following commands will all be done on the OpenWrt CLI. First, create a bridge & br-lan interface. Edit /etc/config/network, add/edit:
config interface 'wan6'
    option reqprefix 'auto'

config device
        option type 'bridge'
        option name 'br-lan'
        list ports 'eth1'
        option bridge_empty '1'

config interface 'lan'
        option proto 'static'
        option device 'br-lan'
        option ipaddr '192.168.88.1'
        option netmask '255.255.255.0'
        option ip6assign '64' 

Note, assign an IPv4 address that works for your network, I chose 192.168.88.1 for my network.

  2. Allow external web management. Edit /etc/config/firewall, add at the bottom of the file:
config rule               
    option target 'ACCEPT'
    option src 'wan'      
    option proto 'tcp'    
    option dest_port '80' 
    option name 'ext_web' 
  3. Restart networking & firewall for changes to take effect
/etc/init.d/network restart
/etc/init.d/firewall restart

OPTIONAL set ULA

  1. Global ULA configuration is not available in LuCI, and must be configured manually:
uci set network.globals=globals
uci set network.globals.ula_prefix='fdb5:df0c:2121::/64'
uci commit
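
The uci commands above are equivalent to adding this section to /etc/config/network (the prefix shown is just the example value; generate your own ULA):

```
config globals 'globals'
        option ula_prefix 'fdb5:df0c:2121::/64'
```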

You will now see the global ULA on the LAN interface.

Exit the LXD console to return to your LXD host

exit
$

Managing the OpenWrt Virtual Router (VR) from a Web Browser

Now you should be able to point your web browser to the WAN address (see the eth0 address in the output of lxc ls router21) and log in; the password is blank.

http://[2001:db8:8011:fd00::599]/

Follow the instructions to set a password, and configure the firewall as you like.

Virtual router Network

Managing your shiny new VR

The OpenWrt router should work just like a real one. This includes the warning message you receive the first time you click on Network->Interfaces.

Virtual router Network

This happens on real routers as well; just click on Continue and all will be well.

You should see that the router now has received Prefix Delegation (PD) from the upstream router, and has applied that to the LAN interface.

$ lxc ls router21
+----------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
|   NAME   |  STATE  |         IPV4          |                     IPV6                      |   TYPE    | SNAPSHOTS |
+----------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| router21 | RUNNING | 192.168.88.1 (br-lan) |  2001:db8:8011:fd04::1 (br-lan)               | CONTAINER | 0         |
|          |         | 10.1.1.35 (eth0)      |  2001:db8:8011:fd00::11b (eth0)               |           |           |
|          |         |                       |  2001:db8:8011:fd00:216:3eff:feb7:c2be (eth0) |           |           |
+----------+---------+-----------------------+-----------------------------------------------+-----------+-----------+

Address Stability of OpenWrt on LXD

Because all of this is running on LXD, there is address stability. No matter how many times you reboot the Raspberry Pi/Linux host, or restart containers in a different order, the addresses remain the same. This means the addresses above can be entered into your DNS server without churn.

Excellent IPv6 support

LXD is the best at container customization and virtual networking (IPv4 and IPv6). With LXD’s flexibility, it is easy to create templates to scale up multiple applications (e.g. a webserver farm). OpenWrt is one of the best open source router projects, and now it can be run virtually as well. Now you have a server farm in the palm of your hand, with excellent IPv6 support and a great firewall!


Notes:

  • some of the screenshots are from a Pi host, and others from an AMD host.
  • IPv6 addresses have been changed to conform with Documentation addresses, RFC 3849

Palm Photo by Alie Koshes

Virtual hosting, the IPv6 way

Mesh

Virtual Hosting, the IPv6 Way

 

Virtual Hosting: The act of hosting several web servers on a single piece of hardware

Before there were VMs (Virtual Machines), Containers (light-weight VMs), and VPS (Virtual Private Servers), there was a need to host multiple servers on a single machine. In the 1990s the internet was exploding, and everyone wanted to have their own web site. The concept of running multiple web servers on a single piece of hardware was created, and implemented in one of the first open source webservers, Apache.

Virtual Hosting, the Old Way

Apache implemented virtual hosting by having the server examine the HTTP header, looking for the Host Field. Based on the Host Field, the web server would use a specific configuration for that Virtual web server, and serve up files based on that configuration.

A typical Apache Configuration would be:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName www.makikiweb.com
    DocumentRoot /home/makiki/
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>
</VirtualHost>

The advantage of the Apache VirtualHost was that, back in the days of IPv4, a single IP address could serve many, many websites. This was good for the conservation of IPv4 address space.

However, sharing a single IP address means that you can’t get to the website by IP address alone:

IPv4 only addressing

Virtual Hosting, the IPv6 Way

With a seemingly inexhaustible amount of IPv6 address space, it is time to rethink how we implement Virtual Hosting. We no longer need to share a single IP address for a stable of servers.

It is possible to add many IP addresses to the web server host:

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether b8:27:cb:bf:52:9c brd ff:ff:ff:ff:ff:ff
    inet6 2001:db8:8:78::f03/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 2001:db8:8:78::f02/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 2001:db8:8:78::f01/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 2001:db8:8:78::102/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 2001:db8:8:78::101/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 2001:db8:8:78::100/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 2001:db8:8:78::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::ba27:cbff:febf:529c/64 scope link 
       valid_lft forever preferred_lft forever
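
For reference, extra global addresses like these can be added with iproute2 (a non-persistent sketch; run as root, and use your distro's network configuration to make them survive a reboot):

```
ip -6 addr add 2001:db8:8:78::f03/64 dev eth0
ip -6 addr add 2001:db8:8:78::102/64 dev eth0
```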

Now each Virtual web server can have its own IPv6 address. This allows the web server to direct each request to the virtual server based on IP address alone. Using a more modern web server, such as nginx, also allows individual configuration for each virtual web server, such as support for TLS (or not) or the Proxy Protocol (or not).

A typical nginx configuration for three virtual hosts would be:

# Server 1 on 2001:db8:8:78::102
server {
    listen [2001:db8:8:78::102]:80 proxy_protocol ;

    # setup to log original client IP address
    # incoming IPv4 requests from two proxies   
    set_real_ip_from  2001:db8:0:80:1000:3b:1:1;
    set_real_ip_from  2001:db8:0:82:1000:3b:1:1;
    real_ip_header    X-Forwarded-For;
    root /home/makiki;
    access_log /var/log/nginx/makiki.log;
    location / {
        index         index.html index.htm;
    }
}

# Server 2 on 2001:db8:8:78::f02
server {
    listen [2001:db8:8:78::f02]:80 ;
    root /home/makiki;
    access_log /var/log/nginx/makiki.log;
    location / {
        index         index.html index.htm;
    }
}

# Server 3 on 2001:db8:8:78::f03
server {
    listen [2001:db8:8:78::f03]:80 ;
    root /home/makikica;
    access_log /var/log/nginx/makikica.log; 
    location / {
            index         index.html index.htm;
    }
}

Each Virtual web server is listening on a unique address with specific configuration for that web server.

The example above shows:

  • Servers 1 & 2 simulate a dual-stack server, with Server 1 listening via the IPv4 reverse proxy and Server 2 listening directly on IPv6. Both serve the same content (same document root) and log to the same log file. The log file will contain both IPv6 and IPv4 addresses (the latter provided by the Proxy Protocol).
  • Server 3 is an IPv6-only server, serving different content.

Having an IP address for each web server also eliminates the dreaded “Sorry…” page, as the web server no longer relies on the Host header to direct the request.
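The point that each virtual server owns its own address (rather than sharing one and demultiplexing on the Host header) can be illustrated outside of nginx. A minimal Python sketch, purely hypothetical and not part of the nginx setup, where a listener bound to one specific address answers by address alone:

```python
# Minimal sketch (hypothetical, not the nginx configuration itself): a
# listener bound to one specific IPv6 address replies based on the bound
# address alone; no Host header is ever inspected, just like the
# per-address nginx servers.
import socket
import threading

srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.bind(("::1", 0))               # one distinct address per virtual server
port = srv.getsockname()[1]
srv.listen(1)

def answer(body):
    conn, _ = srv.accept()
    conn.recv(1024)                # read (and ignore) the request
    conn.sendall(body)             # the reply is chosen by the bound address
    conn.close()

t = threading.Thread(target=answer, args=(b"server-1",))
t.start()

client = socket.create_connection(("::1", port))
client.sendall(b"GET / HTTP/1.0\r\n\r\n")
response = client.recv(4096)
print(response.decode())           # server-1
t.join()
client.close()
srv.close()
```

A second listener bound to a different address would answer with its own content, with no shared-address disambiguation needed.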

Conclusion

When implementing IPv6 networks and services, it is important not to fall into the same IPv4 design constraints and workarounds of the past. Take some time to think outside the box and create a cleaner design: an IPv6 design.

IPv6-only 802.11s Wireless Mesh Network

Mesh

Mesh Network Concept

Wireless Mesh is the buzzword of the 2020s. In the effort to have wall-to-wall Wifi coverage, SOHO (Small Office/Home Office) router vendors have carved out a new, more expensive, market niche. If one router is good, then three routers must be better (and more profitable).

In this article, we’ll delve into the technology of an IPv6-only Wireless Mesh, and the benefits and downsides of using a Wireless Mesh instead of the traditional single SOHO Wireless Router.

What is Wireless Mesh

Wireless Mesh is a group of routers acting together to provide broader wireless coverage, while creating non-looping paths through the network. The IEEE (Institute of Electrical and Electronics Engineers), which brought you Ethernet (802.3) and Wifi (802.11), created an amendment to the Wifi standard for wireless mesh, 802.11s, in 2012.

There are many commercial offerings for Wireless Mesh networks on the market today. Many extend, or implement, proprietary aspects of mesh networking, which creates vendor lock-in.

Fortunately, OpenWrt, the open-source router software, implements the 802.11s standard, and it is possible to create a multi-node wireless mesh with a variety of router vendor’s products.

Wireless Mesh using OpenWrt & 802.11s

Below is a diagram of a simple 3 node wireless mesh network. The Mesh Portal connects the wireless mesh network to the wired network. The left-most router acts as a Layer 3 (network layer) boundary for the mesh network, providing RA (Router Advertisements), and NAT64/DNS64 services.

Wireless Mesh

The wireless mesh network is a single L2 domain; all attached devices will be in the same subnet (or IPv6 prefix).

In this mesh network, all nodes are on the same 5 GHz channel, while the 2.4 GHz access APs can be on different channels, but share the same SSID to allow wireless devices to easily roam from one mesh node to the next.

Two of the nodes, Waialae-nui and Iao, also serve as regular Access Points (APs), allowing wireless devices to connect to the mesh network.

Looking at 802.11s loop prevention

In order to prevent looping in mesh networks, 802.11s uses Path Discovery using Path Requests (PREQ) and Path Reply (PREP) messages.

Consider Node A looking for a path to Node H. It will flood PREQ messages to all nodes using the Ethernet broadcast MAC address. Node H will receive the PREQ message from Nodes F & J, but F will have a lower hop count, so the path back to Node A will be via Node F. Node H will send the PREP message back to Node A via the F-D-A path, which Node A will record, noting that the best path from A to H is A-D-F-H. The use of PREQ and PREP messages removes the possibility of loops in a mesh network.

Wireless Mesh Path Discovery

PCAP of PREQ & PREP messages
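The hop-count selection above can be sketched as a breadth-first search. This is purely illustrative: the topology below is assumed from the example (A-D-F-H plus an alternate path through J), and real 802.11s HWMP selects paths using an airtime link metric rather than a raw hop count.

```python
# Toy illustration of path discovery by hop count over an assumed topology.
# PREQs flood outward and each node keeps the lowest-hop-count path back,
# which is equivalent to a breadth-first search here.
from collections import deque

# Assumed links (bidirectional), loosely matching the article's example
links = {"A": ["D"], "D": ["A", "F"], "F": ["D", "H", "J"],
         "J": ["F", "H"], "H": ["F", "J"]}

def best_path(src, dst):
    """Return the fewest-hop path from src to dst."""
    prev, seen, q = {}, {src}, deque([src])
    while q:
        node = q.popleft()
        for peer in links[node]:
            if peer not in seen:          # first arrival = fewest hops
                seen.add(peer)
                prev[peer] = node
                q.append(peer)
    path = [dst]
    while path[-1] != src:                # walk back along recorded hops
        path.append(prev[path[-1]])
    return path[::-1]

print(best_path("A", "H"))  # ['A', 'D', 'F', 'H']
```

Node H hears PREQs via both F and J, but the F path wins on hop count, exactly as the BFS picks the first (shortest) arrival.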

Mesh Networking Addressing, the Air Bridge

A mesh network can be thought of as a giant distributed Ethernet bridge, forwarding Ethernet frames from wired and wireless interfaces across the 802.11s mesh network. In order to transport entire Ethernet frames (and their IP payloads) across the wireless mesh, an additional Mesh Control header is appended after the 802.11 header. The Mesh Control header contains additional MAC address fields for the original source and destination.

In the diagram below, you can see that the original sender, STA 1, and destination, STA 7, are preserved in the Mesh Control header (as addresses 5 & 6) as the packet traverses the mesh network.

Wireless Mesh Path Addressing

PCAP of the Six Addresses used to create the Air Bridge

Determining if your OpenWrt router supports 802.11s

The OpenWrt router must be running version 19.07.x, as wireless mesh is not supported in earlier releases.

802.11s wireless mesh must be supported at the hardware level. In order to determine if your OpenWrt router has this support run the following command:

# iw list | grep mesh
         * mesh point
         * #{ managed } <= 2048, #{ AP, mesh point } <= 8, #{ P2P-client, P2P-GO } <= 1, #{ IBSS } <= 1,
         * mesh point
         * #{ managed } <= 2048, #{ AP, mesh point } <= 8, #{ P2P-client, P2P-GO } <= 1, #{ IBSS } <= 1,

In the above example of a dual-band router, the * mesh point entries indicate that it does support 802.11s mesh. The #{ AP, mesh point } entry indicates that mesh and AP functionality are supported simultaneously on the same radio.

Configuring OpenWrt for 802.11s Mesh

In order to configure a wireless mesh, where all nodes run on the same channel, it is good to scout the area you wish to cover to determine a relatively empty channel. The 5 GHz band is the better choice, as it has more available non-overlapping channels (nine 20 MHz channels, vs three on 2.4 GHz). You may choose to use 40 MHz channels on the 5 GHz band, but that reduces the available channels to four. There are also the DFS channels (sixteen 20 MHz, or eight 40 MHz), but they are not recommended for Wireless Mesh, as nodes will become unavailable when radar detection forces them off a DFS channel.

5 GHz Band

Once you have selected a single channel to be used by your mesh, the following must be done for each of the mesh nodes in your network.

  1. Install a wpad which supports mesh
  2. Add a mesh interface to /etc/config/wireless
  3. Configure the mesh channel in /etc/config/wireless
  4. Run the wifi command to reload the wireless config

Before configuring the OpenWrt router for 802.11s, one must remove the default wpad (WPA daemon) and install one that supports mesh networking. Ensure you do this while attached to the router via a wired connection, as you are likely to lose your wireless connection.

opkg remove wpad-mini
opkg remove wpad-basic
opkg install wpad-mesh-openssl

Then add this stanza to the /etc/config/wireless file:

config wifi-iface 'mesh'
        option network 'mesh lan'
        option device 'radio0'
        option mode 'mesh'
        option mesh_id 'mymesh' # anything, this connects the nodes into one mesh (plus the password if there's any)
        option encryption 'psk2/aes' # or 'none'
        option key 'mysecret'   

Change mymesh and mysecret to match your naming and password needs. They must be the same on all mesh nodes.

And change the channel of the 5 GHz radio to your mesh channel in /etc/config/wireless:

config wifi-device 'radio1'
        option type 'mac80211'
        option hwmode '11a'  # indicates the 5Ghz radio
        option path 'pci0000:00/0000:00:00.0'
        option channel '44'  # the Channel to be used for Mesh Network
        option htmode 'HT20'

Run the wifi command to cause OpenWrt to reload the wireless configuration.

Clearly, you will need to run through these steps for each node in your wireless mesh.

Managing the Mesh Nodes

802.11s only provides Layer 2 connectivity between the nodes. In order to manage the nodes, each router must be put into dumb AP mode and assigned a static IP address.

In order to not lose connectivity while configuring the router, assign a static IP address to the LAN interface. Since this is to be an IPv6-only network, assign a static IPv6 address in the same prefix as your IPv6-only network.

Edit the /etc/config/network file, adding ip6gw and ip6addr to the LAN interface. Normally IPv6 gateway addresses are link-local addresses, but those require an interface scope. In this case, it is easier to point back to the NAT64 router’s Global Unicast Address (GUA) on the LAN interface as the gateway.

For an IPv6-only network, it is OK to leave the IPv4 static address, as it won’t be used, and it may be helpful for recovering the node on your workbench.

config interface 'lan'
        option type 'bridge'
        option ifname 'eth0.1'
        option proto 'static'
        list ipaddr '192.168.222.1/24'
        option ip6gw '2001:db8:8011:fd60::1'
        list ip6addr '2001:db8:8011:fd60::3/64'

To place the router into dumb AP mode, one must disable the DHCPv4 server, and disable IPv6 RA and DHCPv6 services.

Edit the /etc/config/dhcp file to disable DHCPv4, and comment out the following lines to disable the RA and DHCPv6 servers on the LAN interface.

config dhcp 'lan'
        option interface 'lan'
        option ignore '1'
        # option ra_management '1'
        # option ra 'server'
        # option dhcpv6 'server'

Ensure that you are connected to the router via the static IPv6 address (it is a good time to put that address in your local DNS server), and restart networking on the router for the changes to take effect.

/etc/init.d/network restart

Admiring your 802.11s mesh network

In the default OpenWrt configuration, the LAN ports and the wireless radios (2.4 & 5 GHz) are bridged together. This means that, by default, the wireless mesh will be bridged to the LAN ports of each router, making it easy to manage by plugging into any LAN port with your laptop.

Since you have already assigned DNS names to the static IPv6 addresses of each node, it is easy to ssh to a node and see if your mesh network is up and running. In this example I’ll ssh to the mesh portal node kahaluu.

$ ssh root@6kahaluu-ap
BusyBox v1.30.1 () built-in shell (ash)

  _______                     ________        __
 |       |.-----.-----.-----.|  |  |  |.----.|  |_
 |   -   ||  _  |  -__|     ||  |  |  ||   _||   _|
 |_______||   __|_____|__|__||________||__|  |____|
          |__| W I R E L E S S   F R E E D O M
 -----------------------------------------------------
 OpenWrt 19.07.4, r11208-ce6496d796
 -----------------------------------------------------
root@Kahaluu:~# 

The iw command will display the status of paths, as well as of each peer. The mpath dump gives the shorter output. Make sure you use the correct wireless interface (wlan0 or wlan1).

root@Kahaluu:~# iw dev wlan0 mpath dump
DEST ADDR         NEXT HOP          IFACE   SN  METRIC  QLEN    EXPTIME     DTIM    DRET    FLAGS
c4:e9:84:2f:4d:de c4:e9:84:2f:4d:de wlan0   69838   326 0   0   100 0   0x4
84:c9:b2:54:30:5c 84:c9:b2:54:30:5c wlan0   0   299 0   0   0   0   0x10

Since this is Layer 2-based, you will only see MAC addresses for each node. It is probably a good time to make a note of each node’s MAC address. The key fields of this output are the Destination Address, Next Hop, and Metric columns. Because this is only a 3-node mesh, the Destination Address and Next Hop will almost always be the same. To see the mesh in action, run the wifi command on another node while running the mpath dump on the original node; note that initially the connectivity will be via the third node, before reverting to a direct connection.

Watching a mesh node reattach

root@Iao:/etc/config# iw dev wlan1 mpath dump
DEST ADDR         NEXT HOP          IFACE   SN  METRIC  QLEN    EXPTIME     DTIM    DRET    FLAGS
84:16:f9:eb:c9:ae 84:c9:b2:54:30:5c wlan1    6   556 0   1260    100 0   0x5
84:c9:b2:54:30:5c 84:c9:b2:54:30:5c wlan1   27943   257 0   1270    100 0   0x15

root@Iao:/etc/config# iw dev wlan1 mpath dump
DEST ADDR         NEXT HOP          IFACE   SN  METRIC  QLEN    EXPTIME     DTIM    DRET    FLAGS
84:16:f9:eb:c9:ae 84:16:f9:eb:c9:ae wlan1   9   326 0   3460    200 1   0x5
84:c9:b2:54:30:5c 84:c9:b2:54:30:5c wlan1   27943   257 0   3460    100 0   0x15

root@Iao:/etc/config# iw dev wlan1 mpath dump
DEST ADDR         NEXT HOP          IFACE   SN  METRIC  QLEN    EXPTIME     DTIM    DRET    FLAGS
84:16:f9:eb:c9:ae 84:16:f9:eb:c9:ae wlan1   10  358 0   1260    100 0   0x5
84:c9:b2:54:30:5c 84:c9:b2:54:30:5c wlan1   27943   257 0   1260    100 0   0x15


Dumping information about the peer nodes

Using the iw command it is possible to view quite a bit of information about the peer nodes.

root@Kahaluu:~# iw dev wlan0 station dump
Station c4:e9:84:2f:4d:de (on wlan0)
    inactive time:  0 ms
    rx bytes:   1358471
    rx packets: 4338
    tx bytes:   621897
    tx packets: 2269
    tx retries: 85
    tx failed:  1
    rx drop misc:   19
    signal:     -74 [-74] dBm
    signal avg: -74 [-74] dBm
    Toffset:    163995750460 us
    tx bitrate: 65.0 MBit/s MCS 6 short GI
    rx bitrate: 39.0 MBit/s MCS 4
    rx duration:    366589 us
    last ack signal:0 dBm
    expected throughput:    31.218Mbps
    mesh llid:  0
    mesh plid:  0
    mesh plink: ESTAB
    mesh local PS mode: ACTIVE
    mesh peer PS mode:  ACTIVE
    mesh non-peer PS mode:  ACTIVE
    authorized: yes
    authenticated:  yes
    associated: yes
    preamble:   long
    WMM/WME:    yes
    MFP:        yes
    TDLS peer:  no
    DTIM period:    2
    beacon interval:100
    connected time: 114 seconds
Station 84:c9:b2:54:30:5c (on wlan0)
    inactive time:  20 ms
    rx bytes:   366529
    rx packets: 2751
    tx bytes:   621
    tx packets: 4
    tx retries: 0
    tx failed:  0
    rx drop misc:   35
    signal:     -80 [-80] dBm
    signal avg: -79 [-79] dBm
    Toffset:    106140160060 us
    tx bitrate: 6.5 MBit/s MCS 0
    rx bitrate: 45.0 MBit/s MCS 2 40MHz short GI
    rx duration:    60280 us
    mesh llid:  0
    mesh plid:  0
    mesh plink: ESTAB
    mesh local PS mode: ACTIVE
    mesh peer PS mode:  ACTIVE
    mesh non-peer PS mode:  ACTIVE
    authorized: yes
    authenticated:  yes
    associated: yes
    preamble:   long
    WMM/WME:    yes
    MFP:        yes
    TDLS peer:  no
    DTIM period:    2
    beacon interval:100
    connected time: 114 seconds

Key information here is Signal strength, bitrate, and mesh parameters.

Configuring APs on your mesh network

Now that you have your mesh nodes all configured and peering set up, you still need to configure some (or all) of the nodes in AP mode. This will allow wireless devices, such as laptops, cell phones, and IoT devices, to communicate across your new mesh network.

You have a choice: you can configure an AP on the 5 GHz interface if your initial hardware check displayed { AP, mesh point }. If not, then you must use the 2.4 GHz interface as an AP.

Configuring an AP

Configuring an AP on a mesh node is the same as configuring an AP on a non-mesh router. It is easier to do via the LuCI web interface, but it can also be done via the CLI.

Edit the /etc/config/wireless file, which should by default already have an AP configuration stanza:

config wifi-iface 'default_radio0'
    option device 'radio0'
    option network 'lan'
    option mode 'ap'
    option key 'sEcrEt'
    option ssid 'holoholo'
    option encryption 'psk2'

Update the key with your preferred wireless password, and the SSID with the name of the AP.

Run the wifi command to reload the wireless config

Roaming from AP to AP with 802.11r

In order to enable roaming as you wander around your mesh network, you must use the same key and ssid for all of the APs on the mesh network.

If you want to add 802.11r support (for faster roaming handoff) add the following options to your AP configuration stanza. The mobility domain (4 hex digits) must be the same for each AP that allows roaming.

config wifi-iface 'default_radio0'
       ...
       # 802.11r below
       option ft_over_ds '1'
       option mobility_domain 'EAEA' # four HEX digits, hawaiian for 'air'
       option ft_psk_generate_local '1'
       option ieee80211r '1'

Or use the LuCI web interface and just check the 802.11r box:

Wireless Mesh 802.11r

And add a four hex digit mobility domain when the box appears.

Wireless Mesh 802.11r

Using your Wireless Mesh Network

Now that the mesh network is up and running, and APs have been configured and enabled, it is time to use the wireless mesh network.

Configure your wireless device to use the shared AP SSID (holoholo in this example).

The NAT64/DNS64 router (kapalua) will provide IPv6-only addressing across the mesh network, and your wireless device will obtain a SLAAC and/or DHCPv6 address.

Once your wireless device has an IPv6 address, the world is at your fingertips.

Downside to using Wireless Mesh

All of those wireless hops come at the cost of higher latency and lower throughput. Since each wireless hop must store and forward each packet, latency is added at every hop. And if you chose to use 5 GHz APs, then each AP must share the same channel as the mesh, adding still more latency.
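A rough back-of-the-envelope model (an assumption, not a measurement) helps explain the throughput numbers below: when every hop shares one channel, the medium must carry each packet once per hop, so best-case throughput divides by the hop count.

```python
# Simplified shared-channel model (assumption): throughput is roughly the
# single-hop rate divided by the hop count, since each hop re-transmits
# the same packet on the same channel.
single_hop_mbps = 40.0            # assumed single-hop wireless throughput
for hops in (1, 2, 3):
    print(f"{hops} hop(s): {single_hop_mbps / hops:.1f} Mbit/s")
```

This matches the shape of the measurements below, where the same-channel mesh path delivers well under half the rate of a direct wireless connection.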

Latency & Throughput

Use ping to measure latency across the mesh network.

$ ping -c10 6paikea.hoomaha.net                    
PING 6paikea.hoomaha.net(2607:c000:8011:fd44:1867:49ff:fee8:555b) 56 data bytes
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=1 ttl=62 time=2.84 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=2 ttl=63 time=9.43 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=3 ttl=63 time=4.50 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=4 ttl=63 time=9.31 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=5 ttl=63 time=9.83 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=6 ttl=63 time=9.45 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=7 ttl=63 time=2.46 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=8 ttl=63 time=5.93 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=9 ttl=63 time=9.03 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=10 ttl=63 time=9.24 ms

--- 6paikea.hoomaha.net ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 44ms
rtt min/avg/max/mdev = 2.457/7.202/9.831/2.816 ms

Compare this to a wired connection, where the latency is about 1 ms.

Throughput

Transferring a 251 MB video file from RAM disk to RAM disk, using scp on a Raspberry Pi 3B+. All times measured by scp.

Rate      Time   Comment
7.4MB/s   00:33  Wired with Powerline adapters
20.6MB/s  00:12  Wired GigE on same network
2.6MB/s   01:35  Mesh with 1 hop – same channel
4.0MB/s   01:03  Mesh with 1 hop – 2.4 GHz access, 5 GHz mesh
17.8MB/s  00:14  Direct connect to 802.11n bridged
11.8MB/s  00:21  Direct connect to 802.11ac routed

The Pi 3B+, although equipped with a GigE interface, is throughput-limited to 200 Mbit/s, but that is enough for this testing.

As can be seen, a traditional (non-mesh) router has much better throughput. And bridged Wifi is faster than routed Wifi (in the traditional router scenarios).

802.11r Fast Roaming Performance

How fast is Fast Roaming (802.11r)? Walking between two APs, I noted an extra-long ping when the switch happens.

$ ping 6paikea.hoomaha.net
PING 6paikea.hoomaha.net(2607:c000:8011:fd44:1867:49ff:fee8:555b) 56 data bytes
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=1 ttl=62 time=8.70 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=2 ttl=63 time=8.65 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=3 ttl=63 time=8.40 ms
...
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=27 ttl=63 time=3.97 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=28 ttl=63 time=403 ms  <====
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=29 ttl=63 time=3.46 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=30 ttl=63 time=10.1 ms
64 bytes from 2607:c000:8011:fd44:1867:49ff:fee8:555b: icmp_seq=31 ttl=63 time=15.9 ms

Without 802.11r enabled, I noted that my test wireless device (a Chromebook) would not switch APs, even when it was right next to another AP, until it was put to sleep and reawakened.

IPv6 Support

Since 802.11s wireless mesh is a Layer 2 protocol, it is possible to configure an IPv6-only network on top of the mesh network.

Wireless Mesh

Router Advertisements are sent into the mesh network normally, as one would expect. Neighbour Discovery (NDP), which also uses multicast, works correctly.

Of course, one doesn’t need to run an IPv6-only network as part of a wireless mesh network, but many issues can be flushed out by running IPv6-only.

Conclusion: A matter of size

Looking at the commercial Wireless Mesh products online, the prices range from about $200 to $1000. All of them claim coverage of 400-600 m2 (4000 to 6000 square feet). That is a really large house, or you just want to give your horses Wifi in the barn.

Perhaps only a distant room or office requires a stronger wireless signal. OpenWrt supports the Wireless Distribution System (WDS), which is much easier to configure and can easily extend your wireless coverage.

Wireless mesh has its place in large area loop-free distribution of wireless networking. However, most SOHO (Small Office/Home Office) environments will not require the broad area coverage that 802.11s provides.

More info

Originally published on www.makiki.ca