Custom events logging format for Splunk

I’d like to use a custom log format for events.log so that I can parse it with Splunk, a log analysis and visualization tool.

For example:

onkyoPower state updated to OFF

I’d like to have it in a simpler form, such as:

onkyoPower=OFF

Because Splunk needs properties in the form key=value.

Any help is appreciated

You will be able to control quite a bit by editing your logback.xml file, using the instructions found here.

Can you actually modify the contents of the log statement itself through logback.xml? My understanding is that “onkyoPower state updated to OFF” is generated by the thing doing the logging, and logback.xml mainly controls the rest of the logging statement (e.g. the date stamp, the originator, etc.) rather than the message itself. It would be cool if you could, though.

I would approach this using the Logging Persistence binding. That way you can separate out just the items you want to have logged (rather than everything published to the event bus), if you want to, and control the format of the statement. It separates out the item name as the logger name and the state as the msg, so you can use logging:pattern=%logger=%msg.
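In openhab.cfg that might look something like the sketch below. This is only illustrative: the “splunk” alias is made up, and the logging:&lt;alias&gt;.file / logging:&lt;alias&gt;.pattern keys are my assumption of how the Logging persistence service is configured, so check the binding’s wiki page for the exact names.

# openhab.cfg -- hypothetical "splunk" alias for the Logging persistence service
# %logger is the item name, %msg is the state, giving key=value lines
logging:splunk.file=splunk.log
logging:splunk.pattern=%logger=%msg%n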


Indeed, I want to feed the whole event bus to Splunk; this one was just an example. I cut off the time while pasting, you’re right.

I’ll dig deeper into logback to understand how the current message format is generated in the first place.

Many thanks for your help.

You can still use the Logging persistence binding for all your items. Just configure your .persist file to apply to all items.
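A minimal configurations/persistence/logging.persist covering every item could look like this (a sketch; the everyChange strategy is just an example):

Strategies {
	default = everyChange
}

Items {
	* : strategy = everyChange
}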

I wonder if a Splunk Persistence binding would have enough demand to make it worthwhile writing…

If the Logging Persistence bundle can produce the right format, Splunk can read it and forward it for indexing. That, plus a nice how-to in the wiki, would be helpful!

Take a look at Splunk’s Field Extraction/Field Transformation functionality. With a simple regex you can extract fields from the source material even when it’s not in key=value format.
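For example, a search-time extraction with the rex command (the field names here are just illustrative) can pull the item name and state straight out of the original events.log lines:

source="*events.log" "state updated to"
| rex "(?<item>\S+) state updated to (?<state>.+)$"
| table _time, item, state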

That also works when you’re using Splunk Cloud, which is where I’m using it (for work).

If you’ve got access to Splunk’s indexer layer, you can also do the relevant extraction at that layer, after the forwarders send the data over in its native format.

It’ll save you a lot of fiddling trying to get the log into the right format, and it’ll come in handy if you want to extract fields from the regular openhab.log file too (it’s even less structured).
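If you can edit the Splunk configuration, the same regex can be made permanent as a search-time field extraction in props.conf; the sourcetype and field names below are assumptions, not anything openHAB ships:

[openhab_events]
EXTRACT-item_state = (?<item>\S+) (?:state updated to|received command) (?<state>.+)$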

Thank you very much for all your help, I’ve got it!

I’ve struggled with Splunk quite a bit: I was able to write the regex, but somehow I was still unable to search on the newly extracted fields, so I did it on the openHAB side instead:

This is the new appender:

<appender name="SPLUNKFILE" class="ch.qos.logback.core.FileAppender">
	<file>${openhab.logdir:-logs}/splunk.log</file>
	<encoder>
		<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS Z} openhabitem %replace(%replace(%msg){' received command ', '="'}){' state updated to ', '="'}"%n</pattern>
	</encoder>
</appender>

And then, to log all bus events to it:

<logger name="runtime.busevents" level="INFO" additivity="false">
	<appender-ref ref="EVENTFILE" />
	<appender-ref ref="SPLUNKFILE" />
	<appender-ref ref="STDOUT" />
</logger>

This works like a charm and gives the following output:

2015-10-27 13:09:31.438 +0100 openhabitem onkyoDimmerLevel="1"
2015-10-27 13:09:31.635 +0100 openhabitem onkyoNETPlayStatus="S--"
2015-10-27 13:09:31.664 +0100 openhabitem onkyoSource="35"
2015-10-27 13:09:31.765 +0100 openhabitem onkyoOnline="ON"
2015-10-27 13:09:33.046 +0100 openhabitem onkyoListenMode="11"
2015-10-27 13:09:51.000 +0100 openhabitem onkyoListenMode="13"
2015-10-27 13:09:55.512 +0100 openhabitem onkyoVolume="36"
2015-10-27 13:09:56.011 +0100 openhabitem onkyoVolume="37"
2015-10-27 13:09:56.501 +0100 openhabitem onkyoVolume="41"
2015-10-27 13:09:57.004 +0100 openhabitem onkyoVolume="42"
2015-10-27 13:10:19.777 +0100 openhabitem onkyoPower="ON"
2015-10-27 13:10:32.796 +0100 openhabitem onkyoOnline="ON"

So far, not bad. The only remaining problem is that Splunk seems to lock the file, so if openHAB needs a restart it can no longer write to it. But I’m sure I’ll be able to figure that out.
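For reference, the Splunk side of this setup can be a plain file monitor; a minimal inputs.conf stanza (the path and index name are assumptions for illustration) would be:

[monitor:///opt/openhab/logs/splunk.log]
index = openhab
sourcetype = openhab_events
disabled = false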
