My employer NeosIT offers a web-based SMS notification solution named ZABOS for organizations with security roles. Over the last months we extended the ZABOS application to support digital alerting through POCSAG. After some problems with a third-party component, we implemented the ability to collect all POCSAG telegrams delivered in the surrounding area and to notify the authorized recipients by SMS. Most of the incoming telegrams are discarded because they are not assigned in our database. Nevertheless, I was interested in a graphical representation of all incoming POCSAG messages, and additionally in a comparison with alerts sent via ZVEI, an analogue notification protocol. The ZABOS application log file contains all the relevant information I wanted to extract.

Setting the stage

Our current infrastructure is based on Fedora systems and some CentOS boxes. A central Logstash server collects incoming log messages through the Lumberjack input. After reviewing possible alternatives, I decided on statsd, InfluxDB and Grafana.

InfluxDB

InfluxDB is an open-source distributed time-series database which stores points in time and assigns key/value pairs to them. Installing it on Fedora 22 is easy: get the latest RPM, install it and open the TCP ports:

wget https://s3.amazonaws.com/influxdb/influxdb-0.9.4.2-1.x86_64.rpm
sudo dnf install ./influxdb-0.9.4.2-1.x86_64.rpm

# open network ports
# 8083: admin GUI port
sudo firewall-cmd --add-port=8083/tcp --permanent
# 8086: REST API
sudo firewall-cmd --add-port=8086/tcp --permanent
sudo firewall-cmd --reload

sudo systemctl start influxdb
journalctl -f -u influxdb

After installing the RPM, navigate to http://localhost:8083 and set up a new database. The screenshots in the official documentation are slightly outdated, so simply use the query input field to create the database and user:

CREATE DATABASE "demo"
CREATE USER "demo" WITH PASSWORD 'demo'

Make sure that you can also open the URL http://localhost:8086/ping. It should return an HTTP 204 response.
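A quick way to check this from the shell is curl; the -i flag prints the response status line:

# sanity check: /ping should answer with HTTP 204 (no content)
curl -i http://localhost:8086/ping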

statsd

statsd is a Node.js service which collects time series data delivered via UDP or TCP. Most producers, e.g. Logstash, provide a statsd interface. statsd itself is pluggable and has a backend plug-in for InfluxDB, so every incoming time series is forwarded to the InfluxDB instance.
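On the wire, every metric is a single plain-text line in the form bucket:value|type. The counters that Logstash will emit later in this post look roughly like this (the SubRIC is only an example value):

pocsag.incoming.1234567:1|c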

# get required packages
sudo dnf install nodejs npm git
cd /opt
sudo git clone https://github.com/etsy/statsd.git
cd statsd

# download InfluxDB backend
sudo npm install statsd-influxdb-backend -d

# open network ports
sudo firewall-cmd --add-port=8125/tcp --permanent
sudo firewall-cmd --add-port=8125/udp --permanent
sudo firewall-cmd --reload

# make configuration directory and copy example configuration
sudo mkdir /etc/statsd/
sudo cp exampleConfig.js /etc/statsd/config.js

# create a user
sudo adduser statsd
# add systemd unit
sudo vi /etc/systemd/system/statsd.service

The statsd.service file contains the unit definition for systemd. I mostly used the sample given at digitalocean.com:

[Service]
ExecStart=/usr/bin/node /opt/statsd/stats.js /etc/statsd/config.js
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=statsd
User=statsd
Group=statsd
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target

After saving the unit definition, edit the /etc/statsd/config.js:

{
  influxdb: {
    version: 0.9,          // !!! we installed 0.9
    host: '127.0.0.1',     // InfluxDB host. (default 127.0.0.1)
    port: 8086,            // InfluxDB port. (default 8086)
    database: 'demo',      // InfluxDB database instance. (required)
    username: 'demo',      // InfluxDB database username. (required)
    password: 'demo',      // InfluxDB database password. (required)
    flush: {
      enable: true         // Enable regular flush strategy. (default true)
    },
    proxy: {
      enable: false,       // Enable the proxy strategy. (default false)
      suffix: 'raw',       // Metric name suffix. (default 'raw')
      flushInterval: 1000  // Flush interval for the internal buffer.
                           // (default 1000)
    }
  },
  port: 8125,              // StatsD port.
  backends: ['./backends/console', 'statsd-influxdb-backend'],
  debug: true,
  legacyNamespace: false
}

If you omit the version property, statsd-influxdb-backend falls back to the old protocol version. The 0.9 API is incompatible with prior versions, so you will receive HTTP 404 errors when statsd forwards metrics to InfluxDB.

# enable and start the service
sudo systemctl enable statsd
sudo systemctl start statsd

journalctl -f -u statsd
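To verify the chain from statsd to InfluxDB, you can push a test counter via UDP and then check whether it shows up in the database. The metric name below is only an example, and the exact nc flags may differ between netcat implementations:

# send a test counter to statsd (UDP port 8125)
echo "pocsag.incoming.test:1|c" | nc -u -w1 127.0.0.1 8125

# after the next flush the series should be listed by InfluxDB
curl -G 'http://localhost:8086/query' --data-urlencode 'db=demo' \
     --data-urlencode 'u=demo' --data-urlencode 'p=demo' \
     --data-urlencode 'q=SHOW MEASUREMENTS'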

Logstash

In our special case I had to use the logstash-forwarder to ship the ZABOS application log to the Lumberjack input of our central Logstash server. To stay compatible with our existing Logstash infrastructure, I configured a dedicated filter to extract POCSAG RICs and ZVEI sequences from the ZABOS log file. The filter itself is beyond the scope of this blog entry.
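For reference, a minimal logstash-forwarder config.json for shipping the log looks roughly like the following; the server name, certificate path and log file path are placeholders, not our actual values:

{
  "network": {
    "servers": [ "logstash.example.com:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/zabos/application.log" ],
      "fields": { "type": "zabos" }
    }
  ]
}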

The statsd output plugin for Logstash provides the ability to send extracted log data to statsd. The configuration is straightforward:

filter {
  # log extraction logic skipped
}

output {
  if [pocsag_subric] {
    statsd {
      host => "127.0.0.1"
      port => 8125
      increment => "pocsag.incoming.%{pocsag_subric}"
    }
  }
}

This conditional output increments a statsd counter named after the POCSAG SubRIC whenever the pocsag_subric field is present.

After manually running the Logstash agent with the configuration above, Logstash sends all extracted POCSAG SubRICs to the local statsd instance, which in turn forwards them to InfluxDB.
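Assuming the configuration is stored in /etc/logstash/conf.d/zabos.conf (the path is an assumption) and Logstash 1.5 was installed from the official RPM, such a manual test run looks like this:

# validate the configuration, then run the agent in the foreground
/opt/logstash/bin/logstash agent --configtest -f /etc/logstash/conf.d/zabos.conf
/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/zabos.conf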

One note about logstash-output-influxdb: it supports writing directly to InfluxDB without going through statsd, but it only speaks the old API prior to 0.9. In addition, most time series producers send their data in statsd format. So the setup described here is more complex, but you gain flexibility.

Grafana

At this point I was able to forward all POCSAG telegrams to InfluxDB. To visualize the collected information, I installed Grafana. Grafana connects to different backend databases like OpenTSDB, Elasticsearch and InfluxDB to produce time-series-based graphs. Installing Grafana can be accomplished with yum/dnf:

sudo dnf install https://grafanarel.s3.amazonaws.com/builds/grafana-2.1.3-1.x86_64.rpm

# open ports
sudo firewall-cmd --add-port=3000/tcp --permanent
sudo firewall-cmd --reload

sudo systemctl enable grafana-server
sudo systemctl start grafana-server

After navigating to http://localhost:3000 you need to set up a new data source: click on Data Sources > Add New and enter the credentials for your InfluxDB instance.

Important: you have to enter an FQDN as the database URL, not http://localhost! Your browser connects directly to the InfluxDB backend, so it must be able to reach the InfluxDB REST endpoint.
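A quick way to check reachability is to run the ping request from the machine your browser runs on; influxdb.example.com is a placeholder for the FQDN of your InfluxDB host:

# run this on the workstation where the browser runs, not on the server
curl -i http://influxdb.example.com:8086/ping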

If you need professional consulting or development services for the topics above, just take a look at our website or leave us a message at info[at]neos-it[dot]de.

I am asking you for a donation.

Did you like the content, or did this article help you and reduce the time you struggled with this issue? Please donate a few bucks so I can keep going with solving challenges.
