Category: Sys Admin
The Black Rock City Wifi Summit 2018 took place last Wednesday. As with previous summits I’ve attended, it was an interesting discussion with Burning Man tech staff, artists, and various theme camp representatives. The venue was the Thunderdome conference room at Burning Man Headquarters.
Rolf (sp?) with the org led a general presentation on the goals, issues, and plans for this coming year. In general, he’s asking for frequency coordination to help facilitate access by everyone, lower noise, and such.
The past two years have had trouble with connectivity. For the most part, things just didn’t work. Connecting a NanoBeam to the sector antennas on the NOC tower didn’t work. The ISP had major routing issues and was late in bringing the backbone online.
The plan for this year is to provide configuration files before heading out to the playa. These are designed to configure a NanoBeam NBE-5AC-GEN2. Other Ubiquiti gear may work, but they’re only testing and providing configurations for the NanoBeam.
The link between a NanoBeam and the NOC tower gear is on the 5GHz band. The org is requesting that city participants stay off the 5GHz band to help facilitate infrastructure connections. Local wifi in camps, art installations, mutant vehicles, etc. should be on the 2.4GHz band. If the access point provides both 2.4GHz and 5GHz access, the org requests that the 5GHz band be disabled. Doing so will help to keep the noise floor lower on that band.
If you already have a project in the works that is using 5GHz for communications, don’t fret too much. The org will not be using the uppermost channel on the 5GHz band: 5.825GHz with a 20MHz bandwidth. It should be easy enough to configure any radio gear already in use to use that channel. Hopefully there won’t be too much interference with other users on the band.
The plan is for the backbone to be live by 8/20. It consists of two 130Mbps connections to the ISP. For folks arriving on playa before the gate opens, net access has the potential to help communications greatly. For a city with a population north of 70,000, it’s going to be interesting to see how the available bandwidth holds up. I can easily see throttling of non-org access occurring as the week progresses. That said, I’m glad that the Burning Man org is working to share the resources they have with the city at large.
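For a rough sense of scale, here’s the back-of-the-envelope math, assuming (unrealistically) that the whole backbone is shared evenly by everyone at once:

```python
# Rough per-person share of the backbone if the whole city used it at once.
total_mbps = 2 * 130               # two 130Mbps links to the ISP
population = 70_000                # approximate city population
per_person_kbps = total_mbps * 1_000 / population
print(f"~{per_person_kbps:.1f} kbps per person")  # prints "~3.7 kbps per person"
```

Dial-up territory, in other words, which is why throttling seems likely.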
There was a short discussion on power. The gist is that the Ubiquiti radios want a stable 24V power source. Grounding the radios is also a good thing; that means driving a copper grounding rod 2′–3′ into the playa. And use a surge suppressor. There is lightning and static on the playa that can quickly turn the gear into used carbon.
There was also a mention that microwave-based communication equipment doesn’t like to sway, so using a pole that’s too high and moves in the wind will cause connectivity issues with the NOC tower.
If you’re planning to attempt local wifi via the org’s backbone, here’s the hardware you’ll most likely need. At least this is the gear I’m planning to bring on playa:
- Ubiquiti Networks NanoBeam NBE-5AC-GEN2
- Ubiquiti Networks UniFi AP AC Lite
- network switch
- 24V DC-DC converter
- some power source, most likely two solar panels and two 12V deep-cycle batteries
- surge protector for use between the NanoBeam and the switch
- grounding rod
- mast / tower along with equipment to secure it
It sounds like the org will be using Ubiquiti Networks Rocket Prism AC radios behind their sector antennas. I’m not sure if I can gain access to one beforehand, but it would be nice to test the gear and configuration before heading to the dust.
The org asked the community to help each other out during the week with doctor’s hours. Basically, we define a schedule and recruit volunteers who are willing to be network doctors. When someone on playa has an issue, they can come to one of these doctors for help. There was also a mention that doctors may want to be on a particular MURS radio channel during their office hours. I’m intrigued by this idea and intend to host some time at my camp.
There was also a discussion on the use of APRS for tracking mutant vehicle telemetry data. Someone mentioned putting together an on-playa web service that provides a map of the city along with the locations of mutant vehicles or anything else using APRS. Unfortunately, I didn’t catch the person to talk more before the summit ended, but I really like the idea.
As with many summits like this, my to-do / wish list has just expanded:
- create a local status dashboard for the network connection
  - Raspberry Pi based
  - ping to the local NanoBeam
  - ping to the tower antenna
  - ping to Google
  - current bandwidth in use through the local radio
  - local radio IP address
- bring some fox hunting gear for 2.4GHz and 5GHz
- set up a server with Ubiquiti’s access point management system (UniFi Controller?)
  - configure for a captive portal with a timeout
  - allow other access points to be adopted and push a stable, usable configuration to them
- host network doctor’s hours
- test the configuration on LiteBeams (LBE-5AC-Gen2) as I have a couple left over from another project
- find my MURS radios and verify that they still work
- find a good DC-DC power filter
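The dashboard’s reachability checks could start out as something like the sketch below. Everything here is a placeholder assumption of mine: the labels, addresses, and ports aren’t the org’s real gear, and a TCP connect stands in for ICMP ping since raw sockets require root.

```python
import socket

def reachable(host, port, timeout=2.0):
    """Crude reachability probe: can we open a TCP connection?

    A stand-in for ICMP ping, which needs raw sockets (root) from Python.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def dashboard(targets):
    """Render one status line per (label, host, port) target."""
    return [f"{label}: {'up' if reachable(host, port) else 'DOWN'}"
            for label, host, port in targets]

# Placeholder targets -- substitute the real NanoBeam / tower addresses.
targets = [
    ("local nanobeam", "192.168.1.20", 443),
    ("tower antenna", "192.168.1.1", 443),
    ("google", "8.8.8.8", 53),
]
```

A Raspberry Pi could run this on a timer and push the lines to a small display or web page.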
Good luck and come visit me at Frozen Oasis, we have killer margaritas!
I had a recent occurrence at work that caused me to look around for a tool to monitor a directory for any changes. Since there didn’t seem to be anything out there, I created a check called dirchanged. It looks at all the files in a directory and creates a SHA-256 hash of the names and contents of the files. That hash is compared to a known value to determine whether any changes have been made.
There are a couple of issues with this check: it doesn’t look into subdirectories, and the hash for comparison is passed on the command line from within the Nagios configuration files. I think the first issue will be fixed soon enough with a flag to indicate whether the directory tree should be traversed. The second issue is more cumbersome in that the hash value has to be stored somewhere. I’m not yet certain that putting it in the Nagios configuration files is better than putting it somewhere on the target file system. From a security standpoint, keeping the hash off the target file system is better; there’s much less chance of it being changed by the bad guys.
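For the curious, the core idea can be sketched in a few lines of Python. This is my illustration of the approach, not the actual dirchanged code; the function names and messages are mine.

```python
import hashlib
import os

def dir_hash(path):
    """Hash the names and contents of the files directly inside *path*.

    Non-recursive, matching the check's current behavior; sorting the
    listing makes the result independent of directory-entry order.
    """
    h = hashlib.sha256()
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        if os.path.isfile(full):
            h.update(name.encode())
            with open(full, "rb") as f:
                h.update(f.read())
    return h.hexdigest()

def check_dirchanged(path, expected):
    """Return a Nagios-style (exit status, message) pair."""
    actual = dir_hash(path)
    if actual == expected:
        return 0, f"OK: {path} unchanged"
    return 2, f"CRITICAL: {path} changed (hash {actual})"
```

Wrapping `check_dirchanged` in a small main that prints the message and exits with the status gives a working Nagios plugin shape.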
I’ll let it run for a while and see how it behaves and if changes are warranted.
It seems that there is still some good customer service out there. This conversation took place after a few seconds’ wait and lasted around a minute.
Thank you for choosing Adobe. A representative will be with you shortly. Your estimated wait time is 0 minute(s) and 3 second(s) or longer as there are 1 customer(s) in line ahead of you.
You are now chatting with Abhinandan.
Abhinandan: Thank you for contacting Adobe chat support, my name is Abhinandan, I have received your query. Please allow me a moment to review the details of your request.
Abhinandan: Hello Peter, how are you?
peter: doing ok
Abhinandan: Thank you.
Abhinandan: May I please have your email address registered with Adobe?
Abhinandan: Thank you.
Abhinandan: Could you please provide me the serial number of your product?
peter: nnnn nnnn nnnn nnnn nnnn nnnn
Abhinandan: Thank you.
Abhinandan: May I know what exact error you are facing?
peter: my laptop was stolen. i’ve purchased a replacement laptop, restored from my backups and i’m now trying to use illustrator. when launching, it asks to be activated. when i try to activate, i receive “activation limit reached for adobe creative suite 5 web premium.”
Abhinandan: I will solve this issue for you.
Abhinandan: You are welcome.
Abhinandan: I have made necessary changes from my end, request you to please try reactivating your product.
Abhinandan: Please close and re launch the product.
Abhinandan: Is it working?
peter: problem solved! you have my thanks.
Abhinandan: Is there anything else I can help you with?
peter: that was my only issue.
Abhinandan: You may receive an email survey in reference to this interaction with Adobe. Your feedback is very much appreciated.
Abhinandan: Thank you for contacting Adobe. We are available 7 days a week, 24 hours a day. Goodbye!
Thank you for chatting with us. Please click the “Close” button on the top right of the chat window to tell us how we did today.
Thanks Abhinandan and thanks Adobe!
I recently had the need to recycle several old computers and by old, I mean Power Mac G4 old. These systems were state of the art in 1999.
As any good IT professional knows, you don’t want to recycle any computer without first erasing the data on its hard disks. Now, I doubt there’s anything of importance on these disks, but it’s better to erase them to be certain. The traditional way would be to reinitialize each disk and overwrite all of its sectors several times. But why follow tradition when there are so many other, more creative ways to erase the contents!
I think I’ve come up with a method guaranteed to destroy the contents of a hard disk for those occasions when you need to be reasonably sure that the data can never be recovered.
What do you think? Too much?
Now before you think of trying this at home, let me just say: no, stop, don’t try this at home. That’s insane. This forge was running at approximately 2,300 degrees Fahrenheit. There’s no way any appliance in your house generates nearly enough heat to reproduce this madness. Just put the thought out of your head.
This has been driving me nuts for the past several months, but I hadn’t made the time to figure out the problem. Basically, the only account that could be used to ssh into our OS X server was the admin account. The admin account lives in the traditional Unix /etc/passwd database. Any account created via Workgroup Manager, like mine (that is, one that lives in Open Directory, OS X’s LDAP-based authentication database), wouldn’t work. As I said, this has been driving me nuts, and I finally spent some time digging through the man pages, configuration files, and log files to figure out what was going on.
It seems that a previous sysadmin had added the AllowUsers keyword to the sshd configuration file, /etc/sshd_config. The AllowUsers line lists the users who are allowed to connect via ssh. And wouldn’t you know, my account wasn’t listed.
I got to this point by reading through the /var/log/secure.log file to see what OS X was recording as the problem with connecting. There was one line in particular that stood out:
Jan 18 15:13:09 xyzzy sshd: User peter from 192.168.1.154 not allowed because not listed in AllowUsers
AllowUsers? That’s strange. I don’t remember anything else in OS X that uses a convention like this to control access. But a quick search on Google showed that it’s a keyword used in the sshd configuration file. After adding my account name to the list, I was able to ssh in without any problem. Oh yeah, life is good!
One cool side note: sshd didn’t have to be restarted. On OS X, launchd spawns a new sshd process for each incoming connection, so every new connection picks up the current configuration file. That makes it very easy to test configuration changes.
But modifying the /etc/sshd_config file every time I need to allow ssh access to someone isn’t an easy way to manage account privileges on OS X. Looking a bit more at the sshd_config man page shows that there’s also an AllowGroups option. So I removed the AllowUsers line and replaced it with:

AllowGroups ssh
Then, using the standard Workgroup Manager, I added a new group called ssh and put the various accounts that need ssh access into the group. Now any account that needs ssh access can easily be added to (or removed from) the ssh group, and sshd will automatically grant or deny it access.