Category: Sys Admin

Adventures with Aredn VLANs

I recently put a node up for the San Francisco Wireless Emergency Mesh (SFWEM). It’s something I’d been intending to do for a while but just never made the time. When one of the SFWEM members reached out and asked if I was interested in putting up a node, I got to work.

The hardware I’m using is a Ubiquiti Rocket M5 with an AMO-5G13 omnidirectional antenna. It runs custom firmware from the Amateur Radio Emergency Data Network (AREDN). The setup and configuration are well documented and went smoothly. After mounting the Rocket M5 on a roof mount and running an Ethernet cable to a PoE injector, the node was online.


This node provides good coverage of the general area. It’s currently connected to the rest of the SFWEM mesh network via the node KJ6WEG-OAK-Griz-SectorM5 up on Grizzly Peak.

Things got interesting when I started to add a second node. This one is based on a NanoBeam M5 and is intended to create a point-to-point connection to another node on the network, most likely KJ6DZB-USS-HORNET-SOUTH on the USS Hornet.

Putting two devices running the AREDN firmware on the same network is supposed to allow them to set up a device-to-device (DtD) connection over the wired network instead of over the RF link.

The AREDN DtD documentation was a bit confusing to me. Specifically, when I read about the use of VLANs, I assumed that putting the switch ports for both nodes on VLAN 2 would let them find each other and communicate over the network. That’s not what was needed; for whatever reason, I suffered a VLAN mental slip.

My network configuration had three VLANs: 1 as the default, 2 for AMPRNET and 3 for LoRaWAN devices. Since I thought that AREDN wanted to be on VLAN 2, I reconfigured all the switches as: 1 as the default, 2 for AREDN, 3 for LoRaWAN and 4 for AMPRNET. But this configuration doesn’t work. The SFWEM nodes could get an IP address from the router on VLAN 1 for their WAN interfaces, but they didn’t see each other.

After a few frustrating hours staring at configuration screens, reading and re-reading the AREDN docs, chatting with SFWEM members on Slack and thinking back through my VLAN experience, I realized that I was misunderstanding how the AREDN firmware uses untagged traffic, VLAN 1 and VLAN 2. The nodes want to be on their own untagged VLAN; they send WAN data tagged for VLAN 1 and tag packets for VLAN 2 when doing any DtD communication.

I reconfigured my network switches again, but this time as: 1 as the default, 2 for AREDN DtD, 3 for AMPRNET, 4 for LoRaWAN devices and 5 for SFWEM. The important part here is that the ports for SFWEM nodes are set to tag VLAN 1, tag VLAN 2 and leave VLAN 5 untagged. This gives the AREDN nodes their own default network on VLAN 5, makes VLAN 2 available for DtD communications and allows VLAN 1 traffic to leave the switch for the great beyond.
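Summarized as a per-port membership table (the VLAN numbers here are from my network; yours may differ):

```
Port for an AREDN node:
  VLAN 1  tagged    WAN traffic out to the router
  VLAN 2  tagged    DtD traffic between nodes
  VLAN 5  untagged  the node's own default network
```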

Both nodes are now online and connected via DtD.

Raspberry PI VLANs

The Raspberry Pi is a lovely small computer, but sometimes the documentation leaves much to be desired, especially when searching for information online. Take networking: if you just need the default configuration, everything just works. That’s one of the joys of working with a Raspberry Pi; many tasks just work. But as soon as you want to do anything outside the norm, things become difficult.

Take, for instance, adding a VLAN to a Pi. Searching online will bring up lots of details on what people have done in the past to add and configure a VLAN. Sadly, the hows for this have changed over time and most of the information out there is worse than wrong: it forces users to follow steps that just don’t work, followed by lots of time spent trying to figure out what went wrong. It’s a rather frustrating aspect of working with a Pi.

In an effort to help me remember the steps, and for anyone who stumbles into the need to add a VLAN, here are the steps for Raspbian 9 (Stretch):

$ sudo apt-get install vlan
$ sudo vconfig add eth0 2
$ sudo bash -c 'echo "interface eth0.2" >> /etc/dhcpcd.conf'
$ sudo ifconfig eth0.2 up
  1. Install the vlan package.
  2. Add VLAN 2 to interface eth0. Change 2 to whichever VLAN you need and eth0 to the physical Ethernet interface.
  3. Add a new interface entry to the dhcpcd.conf file so that an IP address can be assigned.
  4. Bring up interface eth0.2.

Update 2021-03-05: There are a couple of steps that I neglected to include. With the steps above, the VLAN will be lost on reboot. Actually, those steps won’t work at all, since the VLAN kernel module isn’t loaded. You’ll need to do that before running the vconfig command:

modprobe 8021q
echo 8021q >> /etc/modules

Next, add

vconfig add eth0 2

to /etc/rc.local. If your rc.local has an

exit 0

line at the end, the vconfig command needs to be added before the exit.

With these changes, your VLAN configuration will be re-created when the system is rebooted.
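Putting it all together, the persistent pieces can be sketched as a script. This version writes into a scratch directory instead of the live /etc so it’s safe to run anywhere; the comments show the real target files (run the real steps as root against /etc):

```shell
#!/bin/sh
# Sketch: stage the persistent VLAN config in a scratch directory so the
# end state of the steps above is easy to inspect.
ROOT=$(mktemp -d)

echo "8021q" >> "$ROOT/modules"                  # real target: /etc/modules
echo "interface eth0.2" >> "$ROOT/dhcpcd.conf"   # real target: /etc/dhcpcd.conf
printf '%s\n' 'vconfig add eth0 2' 'exit 0' \
  > "$ROOT/rc.local"                             # real target: /etc/rc.local

cat "$ROOT/modules" "$ROOT/dhcpcd.conf" "$ROOT/rc.local"
```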

Influx Telegraf and inputs.exec

I’m a fan of InfluxDB for capturing data over time. Couple it with Grafana and interesting dashboards come to life.

Part of Influx’s tool set is Telegraf, their data collection agent. It comes with a slew of data input and output plugins that are reasonably easy to configure and use. I use two of them fairly regularly: inputs.snmp and inputs.exec. The inputs.snmp plugin uses the long-standing SNMP protocol to pull data from network devices. Configuration is fairly straightforward. Here’s a sample for collecting data from a network switch:

[[inputs.snmp]]
  agents = ["NAME_OR_IP"]
  version = 2
  community = "COMMUNITY"
  timeout = "60s"

  [[inputs.snmp.field]]
    oid = "RFC1213-MIB::sysUpTime.0"
    name = "uptime"

  [[inputs.snmp.field]]
    oid = "RFC1213-MIB::sysName.0"
    name = "source"
    is_tag = true

  [[inputs.snmp.table]]
    oid = "IF-MIB::ifTable"
    name = "interface"
    inherit_tags = ["source"]

    [[inputs.snmp.table.field]]
      oid = "IF-MIB::ifDescr"
      name = "ifDescr"
      is_tag = true

Change NAME_OR_IP to the device name or IP address and COMMUNITY to the SNMP community configured on the device, and Telegraf will pull data from the switch every 60 seconds.

I put one of these configuration files in the

/etc/telegraf/telegraf.d

directory for each device. I use the device name as the file name. So for network switch ns1, the configuration file is

/etc/telegraf/telegraf.d/ns1.conf

At home, the network has 4 switches and there are 4 .conf files in the telegraf.d directory. The inputs.snmp plugin handles all the .conf files and processes the data from all the network devices as expected.
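Since each device gets its own file, a small loop can stamp them out from a template. This is a sketch, not my actual setup: the switch names beyond ns1 and the COMMUNITY value are stand-ins, and the files go to a scratch directory here rather than /etc/telegraf/telegraf.d:

```shell
#!/bin/sh
# Generate one inputs.snmp .conf per switch from a minimal template.
OUT=$(mktemp -d)   # stand-in for /etc/telegraf/telegraf.d

for sw in ns1 ns2 ns3 ns4; do
  cat > "$OUT/$sw.conf" <<EOF
[[inputs.snmp]]
  agents = ["$sw"]
  version = 2
  community = "COMMUNITY"
  timeout = "60s"
EOF
done

ls "$OUT"   # lists the four generated .conf files
```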

The second Telegraf plugin I often use is inputs.exec. It launches a program and collects the output to send to the Influx database. CSV, JSON, etc. all work to feed the Influx engine.

A typical configuration file looks like:

[[inputs.exec]]
  commands = [
  ]
  interval = "60s"
  timeout = "10s"
  data_format = "json"
  name_suffix = "_purpleair"
  tag_keys = [
  ]
In this case, the exec plugin will run the program under /usr/local/bin/ and capture the data from a PurpleAir device every 60 seconds.
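With data_format = "json", Telegraf expects the command to print a flat JSON object and turns each numeric value into a field on the _purpleair measurement. Output along these lines would work (the field names here are illustrative, not the actual PurpleAir schema):

```
{"pm2_5_atm": 5.2, "temp_f": 71.0, "humidity": 31.0}
```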

The problem is that the inputs.exec plugin doesn’t allow for multiple instances the way the inputs.snmp plugin does. If there is more than one .conf file with an inputs.exec section, only the last one read by Telegraf will be used. As such, more than one program cannot be used to feed data via Telegraf into InfluxDB. Rather annoying.

To get around this, I create another instance of the telegraf service. That includes a new systemd service file, a separate /etc/telegraf_EXECNAME directory and supporting configuration files.
In /lib/systemd/system/telegraf_EXECNAME.service:

[Unit]
Description=The plugin-driven server agent for reporting metrics into InfluxDB

[Service]
ExecStart=/usr/bin/telegraf -config /etc/telegraf_EXECNAME/telegraf.conf -config-directory /etc/telegraf_EXECNAME/telegraf.d $TELEGRAF_OPTS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target


In the /etc/systemd/system/ directory, a symbolic link to the new services file:

cd /etc/systemd/system/
ln -s /lib/systemd/system/telegraf_EXECNAME.service .

In the /etc/telegraf_EXECNAME/telegraf.conf:

[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  debug = true
  logtarget = "file"
  logfile = "/var/log/telegraf/telegraf_EXECNAME.log"
  logfile_rotation_interval = "1d"
  logfile_rotation_max_size = "50MB"
  logfile_rotation_max_archives = 10
  hostname = ""
  omit_hostname = false


[[outputs.influxdb]]

[Add the needed options to the influxdb section for where the influxdb is hosted]

Note that this .conf file omits all the collection configuration for the localhost; that remains in the original telegraf instance.

The .conf files for the inputs.exec plugin are placed in

/etc/telegraf_EXECNAME/telegraf.d

To kick off the new service:

systemctl daemon-reload
systemctl enable telegraf_EXECNAME
systemctl start telegraf_EXECNAME

[In all the examples above, replace EXECNAME with a name that describes what’s being run.]

Creating multiple instances of the Telegraf service is annoying, but it does allow me to collect data from multiple places by running programs that reach out, gather the data, format it for use with Telegraf and then feed it into an InfluxDB database. See for some examples.


Finding Raspberry Pis

I’m using Raspberry Pis for all sorts of projects. They’re fabulous, inexpensive computers that run Linux and can interact with the physical world. But sometimes I have trouble finding them on my network. I know they’re there, but I don’t have any idea of their IP addresses. For Pis connected to a display, keyboard and mouse, this isn’t much of a problem, but when a Pi is running headless it’s a bit annoying trying to find it.

An easy way is to use nmap to find all hosts on the local network with a specific MAC address prefix. For Raspberry Pis there are two: B8:27:EB for the older model 1, 2 and 3 boards, and DC:A6:32 for the Pi 4.

#! /bin/sh

nmap -sP NETWORK | awk '/^Nmap/{ip=$NF}/B8:27:EB/{print ip}'
nmap -sP NETWORK | awk '/^Nmap/{ip=$NF}/DC:A6:32/{print ip}'

nmap will find hosts on the specified network and awk will pull out the IP address of any host whose MAC address prefix matches a Raspberry Pi’s.

Note that you’ll need to supply your local network range (for example, 192.168.1.0/24) as the nmap target.
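The awk filter can be exercised on a canned sample of nmap output (the addresses and MACs below are made up for illustration). awk remembers the IP from each “Nmap scan report” line and prints it only when a following line matches the Pi prefix:

```shell
printf '%s\n' \
  'Nmap scan report for 192.168.1.17' \
  'Host is up (0.0042s latency).' \
  'MAC Address: B8:27:EB:12:34:56 (Raspberry Pi Foundation)' \
  'Nmap scan report for 192.168.1.23' \
  'MAC Address: 00:11:22:33:44:55 (Some Other Vendor)' |
  awk '/^Nmap/{ip=$NF}/B8:27:EB/{print ip}'
# prints: 192.168.1.17
```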

Black Rock City Wifi Summit 2018

The Black Rock City Wifi Summit 2018 took place last Wednesday. As with previous summits I’ve attended, it was an interesting discussion with Burning Man tech staff, artists and various theme camp representatives. The venue was the Thunderdome conference room at Burning Man Headquarters.

Rolf (sp?) with the org led a general presentation on the goals, issues and plans for this coming year. In general, he’s asking for frequency coordination to help facilitate access for everyone, to lower noise and such.

The past two years have had troubles with connectivity. For the most part, things just didn’t work: connecting a NanoBeam to the sector antennas on the NOC tower failed, the ISP had major routing issues and they were late in bringing the backbone online.

The plan for this year is to provide configuration files before heading out to the playa. These are designed to configure a NanoBeam NBE-5AC-GEN2. Other Ubiquiti gear may work, but they’re only testing and providing configurations for the NanoBeam.

The link between a NanoBeam and the NOC tower gear is on the 5 GHz band. The org is requesting that city participants stay off the 5 GHz band to help facilitate infrastructure connections. Local wifi in camps, art installations, mutant vehicles, etc. should be on the 2.4 GHz band. If an access point provides both 2.4 GHz and 5 GHz access, the org requests that the 5 GHz band be disabled. Doing so will help keep the noise floor lower on that band.

If you already have a project in the works that is using 5 GHz for communications, don’t fret too much. The org will not be using the uppermost channel on the 5 GHz band: that’s 5.825 GHz with a 20 MHz bandwidth. It should be easy enough to configure any radio gear already in use to use that channel. Hopefully there won’t be too much interference with other users on the band.

The plan is for the backbone to be live by 8/20. It consists of two 130 Mbps connections to the ISP. For folks arriving on playa before the gate opens, net access has the potential of helping communications greatly. For a city with a population north of 70,000, it’s going to be interesting to see how the available bandwidth holds up. I can easily see throttling of non-org access occurring as the week progresses. That said, I’m glad that the Burning Man org is working to share the resources they have with the city at large.

There was a short discussion on power. The gist is that the Ubiquiti radios want a stable 24 V power source. Grounding the radios is also a good thing; that means driving a copper grounding rod 2–3 feet into the playa. And use a surge suppressor: there is lightning and static on the playa that can quickly turn the gear into used carbon.

There was also a mention that microwave-based communication equipment doesn’t like to sway, so using a pole that’s too tall and moves in the wind will cause connectivity issues with the NOC tower.

If you’re planning to attempt local wifi via the org’s backbone, here’s the hardware you’ll most likely need. At least this is the gear I’m planning to bring on playa:

  • Ubiquiti Network NanoBeam NBE-5AC-GEN2
  • Ubiquiti Network Unifi AP AC Lite
  • network switch
  • 24v dc-dc converter
  • some power source, most likely 2 solar panels and 2 12V deep cycle batteries
  • surge protector for use between the NanoBeam and the switch
  • grounding rod
  • mast / tower along with equipment to secure it

It sounds like the org will be using Ubiquiti Network Rocket Prism AC radios behind their sector antennas. I’m not sure if I can gain access to one before heading out to the playa but it would be nice to test the gear and configuration before heading to the dust.

The org asked the community to help each other out during the week with doctor’s hours. Basically, we define a schedule and recruit volunteers who are willing to be network doctors. When someone on playa has an issue, they can come to one of these doctors for help. There was also a mention that doctors may want to be on a particular MURS radio channel during their office hours. I’m intrigued by this idea and intend to host some time at my camp.

There was also a discussion on the use of APRS for tracking mutant vehicle telemetry data. Someone mentioned putting together an on-playa web service that provides a map of the city along with locations of mutant vehicles or anything else using APRS. Unfortunately I didn’t catch the person to talk more before the summit ended but I really like the idea.

As with many summits like this, my to-do / wish list has just expanded:

  • create a local status dashboard for the network connection
    • raspberry pi based
    • ping to local nanobeam
    • ping to tower antenna
    • ping to google
    • current bandwidth in use through local radio
    • local radio ip address
  • bring some fox hunting gear for 2.4 GHz and 5 GHz
  • set up a server with ubiquiti’s access point management system (unifi controller?)
  • configure for a captive portal w/ timeout
  • allow other access points to be adopted and push a stable, usable configuration to them
  • host network doctor’s hours
  • test the configuration on litebeams (LBE‑5AC‑Gen2) as I have a couple left over from another project
  • find my murs radios and verify that they still work
  • find a good dc-dc power filter

Good luck and come visit me at Frozen Oasis, we have killer margaritas!
