HOW TO set up remote logging (ELK stack) and reduce MicroSD writes

I am using openHABianPi, and I was looking for a solution to reduce microSD usage while also storing my logs in a remote location.

I am already running an Elasticsearch/Kibana cluster, but the challenge was still a nice one.

I will not cover how to run Elasticsearch in this example. Instead, I will focus on:

  1. openHAB2 configuration to deliver logs over TCP/UDP
  2. Logstash setup with filters to transform the logs into readable ones
  3. Kibana plugins for log reading

I wanted a way to transfer logs without writing to the microSD card or even to external storage. The best solution at that moment in time was a TCP socket.

I wanted the following flow:
openHAB2 log entry → (over TCP) → Logstash → (over HTTPS) → Elasticsearch

I also wanted to support mutation of fields (timeMillis to a readable format, field separation, etc.).

Configuration
openHAB2 logging config (/var/lib/openhab2/etc/org.ops4j.pax.logging.cfg):

I changed the entries in the following sections:
# Rolling file appender
# Event log appender
# Audit file appender

Using the Socket appender I could achieve this. The rolling file appender example below shows the pattern; the event and audit appenders follow the same idea (see the sketch after the example).
Requirements for the socket appender output, so Logstash can parse it:

  1. JSON output (JsonLayout)
  2. A newline at the end of each event (eventEol setting)
  3. Single-line JSON (compact setting)
  4. The complete option of JsonLayout must be false! This way each line is a self-contained, bracketed JSON object
  5. The name option is the type of log:
  • LOGFILE = normal logs (/var/log/openhab/openhab.log)
  • EVENT = event logs (/var/log/openhab/event.log)
  • AUDIT = audit logs (self-explanatory)

Rolling File Appender Example:

log4j2.appender.out2.type = Socket
log4j2.appender.out2.name = LOGFILE
log4j2.appender.out2.host = 10.10.10.8
log4j2.appender.out2.port = 9000 
log4j2.appender.out2.protocol = TCP
log4j2.appender.out2.reconnectionDelayMillis = 1000
log4j2.appender.out2.immediateFlush = true
log4j2.appender.out2.layout.type = JsonLayout
log4j2.appender.out2.layout.complete = false
log4j2.appender.out2.layout.compact = true
log4j2.appender.out2.layout.eventEol = true
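
The event log and audit appenders follow the same pattern; only the appender prefix and the name change. Here is a sketch for the event log, assuming the existing event appender in org.ops4j.pax.logging.cfg is replaced while keeping its name, so the existing appenderRef entries still point to it (the exact property prefix may differ in your file):

# Event log appender (sketch - verify the appender prefix in your own cfg)
log4j2.appender.event.type = Socket
log4j2.appender.event.name = EVENT
log4j2.appender.event.host = 10.10.10.8
log4j2.appender.event.port = 9000
log4j2.appender.event.protocol = TCP
log4j2.appender.event.reconnectionDelayMillis = 1000
log4j2.appender.event.immediateFlush = true
log4j2.appender.event.layout.type = JsonLayout
log4j2.appender.event.layout.complete = false
log4j2.appender.event.layout.compact = true
log4j2.appender.event.layout.eventEol = true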

Logstash config (the Logstash version I used is 6.5.3):

I chose Logstash because I have a machine at home which is used as a bastion host, and Logstash is a powerful tool to mutate and ship logs.

I wanted Logstash to listen on a port and accept logs, so I created the following listener, which accepts JSON input:

/etc/logstash/conf.d/input.conf

input {
  tcp {
    host => "0.0.0.0"
    port => 9000
    codec => "json"
    type => "openhab"
  }
}
  • TCP / port 9000 / codec: json / type (tag): openhab
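
While setting this up, a temporary stdout output is handy for checking that events actually arrive and that the json codec decodes them (a debugging aid only, not part of my final config):

output {
  # Temporary debugging output: print decoded events to the Logstash console/log
  stdout { codec => rubydebug }
}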

Then I wanted to parse these logs and make some changes:

/etc/logstash/conf.d/filter.conf

filter {
  if [type] == "openhab" {
    mutate {
      add_field => { "program" => "%{loggerName}" }
      rename => { "level" => "log_level" }
    }
    date {
      match => [ "timeMillis", "UNIX_MS" ]
    }
  }
}
  • Replace @timestamp with the value of timeMillis
  • Add a program field with the loggerName (useful for Logtrail)
  • Rename level to log_level (again, useful for Logtrail)
  • loggerName values can be huge and make logs unreadable. I am building a filter to make them shorter and easier to read; I will post it later.
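
JsonLayout also ships a few bookkeeping fields I do not need in Elasticsearch. Assuming the usual log4j2 JsonLayout field names (endOfBatch, loggerFqcn, threadId, threadPriority - check your own events first), they can be dropped inside the same if [type] == "openhab" block:

    mutate {
      # Drop log4j2 JsonLayout bookkeeping fields (field names are an
      # assumption based on the default JsonLayout output - verify first)
      remove_field => [ "endOfBatch", "loggerFqcn", "threadId", "threadPriority" ]
    }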

Output (Ship the logs to Elasticsearch):

I want to ship my logs to Elasticsearch; the output covers two types of logs, openHAB and rsyslog (a sketch of a matching rsyslog listener follows the output below):

/etc/logstash/conf.d/output.conf

output {
  if [type] == "openhab" {
    elasticsearch {
      hosts => ["https://elastic.something:443"]
      user => "something"
      password => "something"
      ssl => "true"
      ssl_certificate_verification => "false"
      index => "openhab-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["https://elastic.something:443"]
      user => "something"
      password => "something"
      ssl => "true"
      ssl_certificate_verification => "false"
      index => "rsyslog-%{+YYYY.MM.dd}"
    }
  }
}
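
The else branch catches anything that does not carry the openhab type, for example syslog traffic from other machines. A minimal sketch of such a listener in input.conf (the syslog input and port number here are assumptions; adjust to whatever actually feeds your rsyslog-* index):

input {
  # Hypothetical syslog listener feeding the rsyslog-* index
  syslog {
    host => "0.0.0.0"
    port => 1514
    type => "rsyslog"
  }
}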

The next phase is to configure Kibana.

I wanted tail -f-like logging, and decided to use Kibana 6.5.3 with the latest Logtrail plugin.

The plugin can be found here: GitHub - sivasamyk/logtrail: Kibana plugin to view, search & live tail log events
It is a nice one!

I wanted to be able to see my logs in a nice way, so I changed the logtrail.json config to the following:

{
  "version" : 2,
  "index_patterns" : [
    {
      "es": {
        "default_index": "openhab-*"
      },
      "tail_interval_in_seconds": 10,
      "es_index_time_offset_in_seconds": 0,
      "display_timezone": "local",
      "display_timestamp_format": "MMM DD HH:mm:ss",
      "max_buckets": 500,
      "default_time_range_in_days" : 0,
      "max_hosts": 100,
      "max_events_to_keep_in_viewer": 5000,
      "default_search": "",
      "fields" : {
        "mapping" : {
            "timestamp" : "@timestamp",
            "hostname" : "host",
            "program": "program",
            "message": "message"
        },
        "message_format": "{{{message}}}",
        "keyword_suffix" : "keyword"
      },
      "color_mapping": {
         "field": "log_level",
           "mapping": {
             "ERROR": "#FF0000",
             "WARN": "#FFEF96",
             "DEBUG": "#B5E7A0",
             "TRACE": "#CFE0E8"
          }
       }
    }
  ]
}

With these small changes I can see my logs and tail them like a pro!

What does this look like? Well, like this:

This is my first tutorial; I will try to make it better over time, and your feedback will help me correct it.
I hope posting this gives people some ideas.


Thanks for sharing. Since there is more than just logs doing writes, check this guide out. I run my system with zero writes 99.9% of the time on an Odroid C2, which is done another way.

Very nicely done, Βασσίλη!

Some more formatting.

I wanted the “program” variable to be shorter.
The first option was to replace long standard strings with shorter ones. Here is an example of my mutation:

mutate {
  rename => { "level" => "log_level" }
  # gsub takes groups of three entries: field, pattern, replacement
  gsub => [
    "loggerName", "smarthome.event", "EVENT",
    "loggerName", "org.openhab", "OH2",
    "loggerName", "binding.zwave", "B.ZW",
    "loggerName", "handler", "H",
    "loggerName", "internal.protocol", "I.P"
  ]
}

The output of this looks like:
OH2.B.ZW.I.P.commandclass.ZWaveClockCommandClass:

Later I wanted the log to be better aligned. I decided to play a bit with Logstash and came up with this ruby line, which makes the program field exactly 26 characters: it keeps the last 26 characters of loggerName, or pads with spaces if it is shorter.

  if [type] == "openhab" {
    mutate {
      rename => { "level" => "log_level" }
    }
    ruby {
      # Keep the last 26 characters of loggerName and left-pad with spaces
      # to a fixed width of 26, so the program column lines up.
      code => "
        event.set('program', '%26.26s' % event.get('loggerName').chars.last(26).join)
      "
    }
  }