Logging to Logstash+Elasticsearch+Kibana

A few weeks ago I looked into piping my openHAB logs to Elasticsearch via logstash. I found the solution to work reliably but realized that the combination wasn’t as interesting to me as I thought.

Steps to take:

  • Install Logstash, Elasticsearch and Kibana
  • Configure a “log4j” input for Logstash (a minimal example follows below)
  • Add the following logging settings to the openHAB logging configuration file /var/lib/openhab2/etc/org.ops4j.pax.logging.cfg:
    # add "tcp" to the list of root logger appenders
    log4j.rootLogger = WARN,out,tcp,osgi:*
    #
    # TCP appender - Logstash+Elasticsearch+Kibana
    log4j.appender.tcp=org.apache.log4j.net.SocketAppender
    log4j.appender.tcp.Port=3456
    log4j.appender.tcp.RemoteHost=192.168.11.230
    log4j.appender.tcp.ReconnectionDelay=10000
    log4j.appender.tcp.Application=openHAB
    

Also see: https://blog.lanyonm.org/articles/2015/12/29/log-aggregation-log4j-spring-logstash.html
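
For reference, a minimal Logstash configuration to pair with the appender above might look like this (a sketch, not a tested setup; the log4j input deserializes the SocketAppender events, and the Elasticsearch host is a placeholder):

    # receive serialized log4j events from the openHAB SocketAppender
    input {
      log4j {
        mode => "server"
        port => 3456   # must match log4j.appender.tcp.Port
      }
    }
    # index the events in Elasticsearch (host and port are placeholders)
    output {
      elasticsearch {
        hosts => ["localhost:9200"]
      }
    }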

If you are interested in the topic please post a comment and we can discuss enhancements and write a more detailed Tutorial.


I’m interested in knowing more about why you found it to be less useful than you expected.

Doing this or Splunk is high on my list of things to add to my system in the near future. I too would be interested in what you found less than useful with the ELK stack and whether you think some other approach would work better.

For my setup I mainly want to centralize my logging and have a nice query capability. My first thought was to use rsyslog, configure my servers to log to syslog, and pipe it all into Splunk.

Hi Rich,

I use Graylog at home, and I like it for the reasons you are motivated to try it. I haven’t added the logs from openHAB yet though, just the other software I have running.

I run it all within Docker, which I believe you also use. Just let me know if you would be interested in my docker-compose file; it’s for x86.

Craig

Hey guys,
just as @rlkoshak already highlighted, I am using the ELK stack to aggregate, search, filter and process logs from multiple servers over long time spans. It’s amazing for server/infrastructure monitoring and alerting. I can definitely recommend it.

Bringing the openHAB logs into Elasticsearch was a nice exercise and I was happy when it worked out just fine. A grok filter was easily built and everything was ready to be used.
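
For anyone curious, the filter could be along these lines (a sketch only, assuming plain openhab.log-style lines reach Logstash as text):

    filter {
      # split an openhab.log line into timestamp, level, logger and message
      grok {
        match => {
          "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{LOGLEVEL:level}%{SPACE}\] \[%{DATA:logger}\] - %{GREEDYDATA:logmessage}"
        }
      }
    }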

After a few days I realized that most of the aggregated log lines were not really important to me. All interesting item value changes are already persisted to InfluxDB and brought to life via Grafana. While working with openHAB, a realtime view of the actual log lines is more useful than the slow Kibana frontend. ELK can still be useful for the above-mentioned use case: you can store everything normally logged to openhab.log and filter these log lines for irregularities. My server is still doing that; maybe I’ll find some use for this data at a later point.

If needed I can dig up the Logstash side of my configuration and provide it here as well. If one of you guys is interested in implementing openHAB with ELK, I can open the first posting as a wiki posting so you can add your additions :wink:

Best! Thomas

Great info, and you answered an important question: how quickly the logs make it into the stack.

One use case I can think of that could be useful is to correlate logged events from multiple servers (e.g. Mosquitto).

Hi @ThomDietrich

Thanks for the exploratory work here. Pretty interesting results so far.

One interesting use case for me would be to stream full debug logs (i.e. by full I mean all openHAB and ESH components at DEBUG log level) somewhere other than userdata/logs. Often I run into a problem and wish I had debugging turned on; then when I turn it on, the problem doesn’t occur for a while and my log files fill up with useless debug-level information. I see little downside to streaming the debug logs somewhere, as the debug statements are already being executed. There may be some network impact, but I believe that will be minimal on a 1Gb wired network.

Do you know if it’s possible to stream to Logstash at DEBUG level while keeping the log files in userdata/logs at their default levels?

I do not know that. Maybe an extension of the logging component would be needed for that; one would need to check out the code to learn more.
One thing I want to add: if someone indeed looks into improvements to make a better integration possible (I currently do not have the time for that), it would definitely be worth it to implement the option to stream the log in a JSON format. That would make the whole processing in Logstash a lot easier. See https://blog.lanyonm.org/articles/2015/12/29/log-aggregation-log4j-spring-logstash.html

I have not used ELK, but… log4j uses components called appenders to specify log output destinations (e.g., console appender, file appender). Each appender can have a different log level threshold. I have no experience with it, but there does appear to be Logstash/Filebeat appender support. If you configure the existing file appender to log at INFO level or above and configure the Filebeat (Logstash) appender at DEBUG, that should do what you want.
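
As a rough sketch of that idea against the log4j 1.x properties from the first posting (untested; appender names taken from above):

    # root logger at DEBUG so the appenders get to see everything
    log4j.rootLogger = DEBUG, out, tcp, osgi:*
    # the local log file keeps only INFO and above
    log4j.appender.out.Threshold = INFO
    # the socket appender forwards everything down to DEBUG
    log4j.appender.tcp.Threshold = DEBUG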

Thanks for the info.

So I’d come up with another appender for Logstash (similar to what I’m doing for Z-Wave today) configured to stream the logs at DEBUG level to Logstash (in JSON format, as @ThomDietrich suggests), then configure Logstash to catch the incoming JSON-formatted log stream.
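
On the Logstash side, that could be as simple as a tcp input with a JSON codec (a sketch; the port is the one used in the first posting):

    input {
      tcp {
        port => 3456
        # parse one JSON-encoded log event per line
        codec => json_lines
      }
    }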

I might give this a shot, but I may not be able to get to it for a little while. I’m in the middle of installing a rack and terminating a bunch of cabling for network, CATV, zoned audio, motorized blinds, openHAB, etc., which is turning out to be a whole lot more work than I had expected. Our 90-year-old house was completely torn apart for a renovation project, so it was a good time to install a mess of structured wiring. :wink:

Yes, that’s the essential idea. It appears you could use a log4j SocketAppender to write to Logstash. Here’s a Gist that might give some hints.

Good luck with your project. Sounds interesting.

@steve1 what you are describing is exactly what I did in the first posting. I’ve defined a new appender which sends logs over TCP to Logstash directly; no need to take the detour over a file.
The interesting aspect would be to define a JSON output format, which needs changes in the openHAB core files (as far as I remember).

Thanks. As I get closer to being done, I was planning to post some pics, along with a description of the functionality. It’s been a fun (and time-consuming) project. I’m pretty amazed at how well I’ve been able to integrate things into openHAB, yet still keep it very “spouse-friendly”. :grinning:

A related question… I’m not sure I want or need a full ELK stack, but I would like to specifically monitor ERROR-level openHAB log messages (with filtering for false or insignificant errors). I’m wondering about the simplest, most minimalistic way to do that. I can think of several options, but I’m interested in your opinions and suggestions.

This should work without changing anything in core OH/ESH.
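
One minimal sketch (based on the log4j 1.x setup from the first posting; the file path and sizes are placeholders): give ERROR-level messages their own file via an appender threshold, then watch or filter that file with whatever lightweight tooling you prefer:

    # additional file appender that only accepts ERROR and above
    log4j.appender.errors=org.apache.log4j.RollingFileAppender
    log4j.appender.errors.File=/var/log/openhab2/error.log
    log4j.appender.errors.Threshold=ERROR
    log4j.appender.errors.MaxFileSize=10MB
    log4j.appender.errors.MaxBackupIndex=5
    log4j.appender.errors.layout=org.apache.log4j.PatternLayout
    log4j.appender.errors.layout.ConversionPattern=%d{ISO8601} [%p] [%c] - %m%n
    # remember to also add "errors" to log4j.rootLogger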

sidenote: log4j2 is coming soon in OH2 :slight_smile:

Has anyone managed to make this work with openHAB 2.2 and log4j2? I’ve tried, but the SocketAppender is not found.

2018-01-09 20:34:22,720 CM Configuration Updater (Update: pid=org.ops4j.pax.logging) ERROR Unable to locate plugin type for SocketAppender
2018-01-09 20:34:22,730 CM Configuration Updater (Update: pid=org.ops4j.pax.logging) ERROR Unable to locate plugin for SocketAppender
2018-01-09 20:34:22,736 CM Configuration Updater (Update: pid=org.ops4j.pax.logging) ERROR Unable to invoke factory method in class class org.apache.logging.log4j.core.config.AppendersPlugin for element Appenders. java.lang.NullPointerException

I got stuck here with using log4j2.

Any guidance would be appreciated :frowning:

I had my Logstash running with this simple config:

input {
  tcp {
    mode => "server"
    host => "192.168.0.25"
    port => 2281
  }
}
output {
  stdout {}
}

Then I edited the cfg as follows:

# Custom Loggers

log4j2.logger.rules.name=org.eclipse.smarthome.model.script
log4j2.logger.rules.level=DEBUG
log4j2.logger.rules.additivity=false
log4j2.logger.rules.appenderRefs=elk, outfile
log4j2.logger.rules.appenderRef.elk.ref=tcp
log4j2.logger.rules.appenderRef.outfile.ref = LOGFILE
log4j2.logger.rules.appenderRef.outfile.level = INFO

log4j2.appender.tcp.type=Socket
log4j2.appender.tcp.name=tcp
log4j2.appender.tcp.port=2281
log4j2.appender.tcp.host=192.168.0.25
log4j2.appender.tcp.layout.type=JsonLayout

When I check the stdout (log) for Logstash, I don’t find anything.
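
One thing that might be worth checking (an educated guess, not verified against this setup): JsonLayout pretty-prints each event across several lines by default, while a plain tcp input splits the stream on newlines, so the fragments may never arrive as complete events. Emitting compact one-line JSON could help:

    log4j2.appender.tcp.layout.type=JsonLayout
    # emit one complete JSON event per line instead of pretty-printed output
    log4j2.appender.tcp.layout.compact=true
    log4j2.appender.tcp.layout.eventEol=true

combined with a codec => json_lines on the Logstash tcp input.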

I know this topic is a bit old, but my additions on this might help.
There are different options to ingest logs into Elasticsearch:

  1. Directly, using a log4j appender that delivers logs to ES
  2. Rsyslog using JSON and Elasticsearch or Logstash
  3. The socket approach discussed above

Technically the safest (if HTTPS is available) would be Elasticsearch with bulk updates; for the second option I would use a TCP socket with TLS termination, if possible, for security.
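
For the TLS variant, a rough sketch of the Logstash side (certificate paths are placeholders):

    input {
      tcp {
        port => 2281
        # terminate TLS at Logstash
        ssl_enable => true
        ssl_cert => "/etc/logstash/logstash.crt"
        ssl_key => "/etc/logstash/logstash.key"
      }
    }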

The question: is it possible to add an external log4j2 appender to openHAB’s logging setup?
I would like to use one of Elasticsearch’s appenders.