Part of Influx’s tool set is Telegraf, their data collection tool. It comes with a slew of data input and output plugins that are reasonably easy to configure and use. I use two of them fairly regularly: inputs.snmp and inputs.exec. The inputs.snmp plugin uses the long-standing SNMP protocol to pull data from network devices. Configuration is fairly straightforward. Here’s a sample for collecting data from a network switch:
[[inputs.snmp]]
  agents = ["NAME_OR_IP"]
  version = 2
  community = "COMMUNITY"
  timeout = "60s"

  [[inputs.snmp.field]]
    oid = "RFC1213-MIB::sysUpTime.0"
    name = "uptime"

  [[inputs.snmp.field]]
    oid = "RFC1213-MIB::sysName.0"
    name = "source"
    is_tag = true

  [[inputs.snmp.table]]
    oid = "IF-MIB::ifTable"
    name = "interface"
    inherit_tags = ["source"]

    [[inputs.snmp.table.field]]
      oid = "IF-MIB::ifDescr"
      name = "ifDescr"
      is_tag = true
Change NAME_OR_IP to the device name or IP address and COMMUNITY to the SNMP community configured on the device, and Telegraf will pull data from the switch every 60 seconds.
I put one of these configuration files in the telegraf.d directory for each device, using the device name as the file name. So for network switch ns1, the configuration file is ns1.conf. At home, the network has 4 switches, so there are 4 .conf files in the telegraf.d directory. The inputs.snmp plugin handles all the .conf files and processes the data from all the network devices as expected.
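Since the per-device files differ only in the agent name, stamping them out can be scripted. Here’s a hypothetical helper (not part of Telegraf); the template is an abbreviated version of the sample above, and the directory and community values are whatever your setup uses:

```python
# Hypothetical helper (not part of Telegraf): render one inputs.snmp
# .conf file per device, since the files differ only in the agent name.
from pathlib import Path

SNMP_TEMPLATE = """\
[[inputs.snmp]]
  agents = ["{device}"]
  version = 2
  community = "{community}"
  timeout = "60s"

  [[inputs.snmp.field]]
    oid = "RFC1213-MIB::sysUpTime.0"
    name = "uptime"
"""

def render_conf(device: str, community: str) -> str:
    """Fill the template in for a single device."""
    return SNMP_TEMPLATE.format(device=device, community=community)

def write_confs(devices, community, conf_dir):
    """Write DEVICE.conf into conf_dir for every device in the list."""
    for device in devices:
        Path(conf_dir, f"{device}.conf").write_text(render_conf(device, community))
```

Running write_confs(["ns1", "ns2", "ns3", "ns4"], "COMMUNITY", conf_dir) would produce the four .conf files in one shot.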
The second Telegraf plugin I often use is inputs.exec. It launches a program and collects the output to send to the Influx database. CSV, JSON, etc. all work to feed the Influx engine.
A typical configuration file looks like:
[[inputs.exec]]
  commands = [
    "/usr/local/bin/purpleair_json.py https://www.purpleair.com/data.json?show=DEVICEID&key=APIKEY"
  ]
  interval = "60s"
  timeout = "10s"
  data_format = "json"
  name_suffix = "_purpleair"
  tag_keys = [
    "ID",
  ]
In this case, exec runs the /usr/local/bin/purpleair_json.py program every 60 seconds and captures the data it reports from a PurpleAir device.
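The program itself just has to print parseable output to stdout; with data_format = "json", Telegraf turns top-level numeric values into fields and any tag_keys entries into tags. Here is a minimal sketch of such a program; the field names are illustrative stand-ins, not the actual PurpleAir payload:

```python
#!/usr/bin/env python3
# Minimal sketch of an inputs.exec-compatible program: print one flat
# JSON object to stdout. Field names here are illustrative stand-ins,
# not the real PurpleAir response.
import json
import sys

def build_metric(device_id: str, pm2_5: float, temp_f: float) -> dict:
    # "ID" matches the tag_keys entry in the config above; the numeric
    # values become fields on the measurement.
    return {"ID": device_id, "pm2_5_atm": pm2_5, "temp_f": temp_f}

if __name__ == "__main__":
    json.dump(build_metric("DEVICEID", 7.3, 68.0), sys.stdout)
    sys.stdout.write("\n")
```

A real version would fetch the PurpleAir URL and pull these values out of the response before printing them.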
The problem is that the inputs.exec plugin doesn’t allow for multiple instances the way inputs.snmp does. If there is more than one .conf file with an inputs.exec section, only the last one read by Telegraf is used. As such, a single Telegraf instance can’t run more than one program to feed InfluxDB. Rather annoying.
To get around this, I create another instance of the telegraf service. That means a new systemd service file, a separate /etc/telegraf_EXECNAME directory, and supporting configuration files.
[Unit]
Description=The plugin-driven server agent for reporting metrics into InfluxDB
Documentation=https://github.com/influxdata/telegraf
After=network.target

[Service]
EnvironmentFile=-/etc/default/telegraf_EXECNAME
User=telegraf
ExecStart=/usr/bin/telegraf -config /etc/telegraf_EXECNAME/telegraf.conf -config-directory /etc/telegraf_EXECNAME/telegraf.d $TELEGRAF_OPTS
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
RestartForceExitStatus=SIGPIPE
KillMode=control-group

[Install]
WantedBy=multi-user.target
In the /etc/systemd/system/multi-user.target.wants directory, create a symbolic link to the new service file:
cd /etc/systemd/system/multi-user.target.wants/
ln -s /lib/systemd/system/telegraf_EXECNAME.service .
In /etc/telegraf_EXECNAME/telegraf.conf:
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  debug = true
  logtarget = "file"
  logfile = "/var/log/telegraf/telegraf_EXECNAME.log"
  logfile_rotation_interval = "1d"
  logfile_rotation_max_size = "50MB"
  logfile_rotation_max_archives = 10
  hostname = ""
  omit_hostname = false

[[outputs.influxdb]]
[Add the needed options to the influxdb section for where the influxdb is hosted]
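As one example, the options for an InfluxDB 1.x backend might look like the following; the URL, database name, and credentials are placeholders for your own setup:

```toml
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"
  ## username = "USER"
  ## password = "PASSWORD"
```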
Note that this .conf file drops all the collection inputs for the local host; those remain in the original telegraf instance.
The .conf files for the inputs.exec plugin are placed in /etc/telegraf_EXECNAME/telegraf.d.
To kick off the new service:
systemctl daemon-reload
systemctl enable telegraf_EXECNAME
systemctl start telegraf_EXECNAME
[In all the examples above, replace EXECNAME with a name that describes what’s being run.]
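Since everything above differs only in the EXECNAME placeholder, the substitution can be scripted. A hypothetical sketch that renders the service file for a given instance name — it only produces text; installing it still takes the symlink and systemctl steps above, and the template is an abbreviated version of the full unit file:

```python
# Hypothetical helper: substitute a concrete instance name into a
# service-file template like the one shown above (abbreviated here).
UNIT_TEMPLATE = """\
[Unit]
Description=Telegraf instance for {name}
Documentation=https://github.com/influxdata/telegraf
After=network.target

[Service]
EnvironmentFile=-/etc/default/telegraf_{name}
User=telegraf
ExecStart=/usr/bin/telegraf -config /etc/telegraf_{name}/telegraf.conf -config-directory /etc/telegraf_{name}/telegraf.d $TELEGRAF_OPTS
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target
"""

def render_unit(name: str) -> str:
    """Return the unit-file text for a telegraf_<name> instance."""
    return UNIT_TEMPLATE.format(name=name)

if __name__ == "__main__":
    print(render_unit("purpleair"))
```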
Creating multiple instances of the Telegraf service is annoying, but it does allow me to collect data from multiple places by running programs that reach out, gather the data, format it for Telegraf, and feed it into an InfluxDB database. See https://github.com/pkropf/telegraf for some examples.