ChatGPT / AI - Event Log Analysis

Here’s my solution using ChatGPT/OpenAI to analyze yesterday’s events.log and summarize it, so the summary can be spoken to me in the morning to alert me of any strangeness.

items:

String	AI_LogResults					"AI Log Results [%s]"											(HomeState)	

rule:

var String openAI_Request4 = 'Review the attached OpenHAB event log and summarize, in plain English and three sentences, what operational activity occurred around failures, automations, or user actions, excluding raw status noise and hp printer.'
var String openAI_Script4 = '/etc/openhab/scripts/analyze_log.sh'
var String openAI_File4 = '/var/log/openhab/Archived/events.log.7.log'

if (gInternet.state == ON) {

	logInfo("NIGHTLYSTUFF","DONE - Summary of Analysis of Home Automation Events Log.")
	var String results55 = executeCommandLine(Duration.ofSeconds(200), "bash", openAI_Script4, openAI_File4, openAI_Request4)
		
	if (results55 !== null && results55 != '') {

		results55 = results55.trim()
		
		AI_LogResults.postUpdate(results55)
		
		logInfo("OPENAI","-----------------------------------------------------------------------------")
		logInfo("OPENAI",results55)
		logInfo("OPENAI","-----------------------------------------------------------------------------")
	}
}

script:

#!/usr/bin/env bash

OPENAI_API_KEY="<your key>" 

MODEL="gpt-4.1-mini"
MAX_TOKENS=500
CHUNK_LINES=500
SLEEP_BETWEEN=1

LOG_FILE="$1"
REQUEST_TEXT="$2"

if [[ -z "$LOG_FILE" || -z "$REQUEST_TEXT" ]]; then
  echo "Usage: $0 /path/to/logfile.log \"Your request text\""
  exit 1
fi

if [[ ! -f "$LOG_FILE" ]]; then
  echo "Error: File not found: $LOG_FILE"
  exit 1
fi

TMP_DIR=$(mktemp -d)
split -l "$CHUNK_LINES" "$LOG_FILE" "$TMP_DIR/chunk_"

SUMMARIES=()

for CHUNK in "$TMP_DIR"/chunk_*; do
  TMP_JSON=$(mktemp)
  TMP_RESP=$(mktemp)

  jq -n --arg model "$MODEL" \
        --arg prompt "$REQUEST_TEXT" \
        --rawfile log "$CHUNK" \
        --argjson max_tokens "$MAX_TOKENS" \
        '{
          model: $model,
          messages: [
            {role: "user", content: ("Analyze the following log chunk and " + $prompt + "\n\n" + $log)}
          ],
          max_tokens: $max_tokens
        }' > "$TMP_JSON"

  # Use --no-buffer and output to a file to avoid hanging
  curl --no-buffer -s https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d @"$TMP_JSON" > "$TMP_RESP"

  # "// empty" yields an empty string instead of the literal "null" if the API returns an error
  SUMMARY=$(jq -r '.choices[0].message.content // empty' < "$TMP_RESP")
  SUMMARIES+=("$SUMMARY")

  rm -f "$TMP_JSON" "$TMP_RESP"
  sleep "$SLEEP_BETWEEN"
done

COMBINED=$(printf "%s\n\n" "${SUMMARIES[@]}")

FINAL_JSON=$(mktemp)
FINAL_RESP=$(mktemp)

jq -n --arg model "$MODEL" \
      --arg prompt "$REQUEST_TEXT" \
      --arg combined "$COMBINED" \
      --argjson max_tokens "$MAX_TOKENS" \
      '{
        model: $model,
        messages: [
          {role: "user", content: ("Combine these chunk summaries and " + $prompt + "\n\n" + $combined)}
        ],
        max_tokens: $max_tokens
      }' > "$FINAL_JSON"

curl --no-buffer -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d @"$FINAL_JSON" > "$FINAL_RESP"

# "// empty" yields an empty string instead of the literal "null" if the API returns an error
jq -r '.choices[0].message.content // empty' < "$FINAL_RESP"

rm -rf "$TMP_DIR" "$FINAL_JSON" "$FINAL_RESP"
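As a quick local sanity check of the chunking step (no API call involved), this sketch shows what the script’s `split -l` call does; the 1200-line file is just a stand-in for a real events.log:

```shell
# Illustrates the script's chunking: split -l breaks the log into
# fixed-size pieces named chunk_aa, chunk_ab, ...
TMP_DIR=$(mktemp -d)
seq 1 1200 > "$TMP_DIR/events.log"   # stand-in log with 1200 lines
split -l 500 "$TMP_DIR/events.log" "$TMP_DIR/chunk_"
COUNT=$(ls "$TMP_DIR"/chunk_* | wc -l | tr -d ' ')
echo "chunks: $COUNT"                # 1200 lines at 500 per chunk -> 3 chunks
rm -rf "$TMP_DIR"
```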

Output:

The OpenHAB system underwent a large-scale restart or reload during which numerous smart home devices—including lighting, thermostats, media players, sensors, and security components—were reinitialized and transitioned mostly from offline or unknown to online states, with typical temporary communication errors resolving without critical failure. Multiple automation scripts ran to adjust system modes, turning off lights and switches, disabling alarms, enabling motion sensors, pausing Roomba vacuums, and issuing voice alerts such as weather warnings and cryptocurrency price drops, indicating coordinated automated routines rather than direct user control. Throughout the period, intermittent connectivity issues affected certain Z-Wave nodes and the HP LaserJet printer but were transient and resolved automatically, while ongoing environmental monitoring and sensor updates triggered numerous rule executions, reflecting stable system operation without major disruptions.

Best, Jay

Which openhab.event.X loggers do you have enabled?

More detailed info about failures would be available from openhab.log. Have you tried to include that in addition to events.log?

Did you try to use the ChatGPT add-on and failed? If this could be made to work with that add-on this would make a really nice self-contained rule template.

Note: a quick and dirty way to read a text file into a variable in a rule is to use executeCommandLine and "cat", "/path/to/file".
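For clarity, here’s the same idea at the shell level (a minimal sketch; in a rule you’d wrap the `cat` in `executeCommandLine` exactly as described above):

```shell
# Capture a file's contents into a variable via command substitution --
# executeCommandLine(Duration..., "cat", "/path/to/file") does the same from a rule.
TMP_FILE=$(mktemp)
printf 'Item changed from OFF to ON\n' > "$TMP_FILE"
EVENTS=$(cat "$TMP_FILE")
echo "$EVENTS"
rm -f "$TMP_FILE"
```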

Have you found the summary to be accurate?

See attached; any recommendations on more/better logging at the event log level?

log4j2.xml (44.5 KB)

I have most of my bindings broken out into separate logs, one per binding, and I have the LogReader add-on sending me errors via email.

Nope, I didn’t try it because the first project I did with OpenAI was analyzing pictures taken from cameras, and that binding didn’t have that capability.

Yes, so far.

Best, Jay

OK, so you did enable RuleStatusInfoEvent to be logged in addition to the defaults. I couldn’t figure out how ChatGPT could see your rules were running so figured you had to have modified the defaults.

If you want ChatGPT to see as you add or remove stuff you could enable the Added and Removed loggers. I find the StartLevelEvents to be useful. In particular if OH failed to reach start level 100 would mean something is amiss (usually a Thing didn’t come online).

But since this is all text based it should work with the binding, right? And if one can use the binding that means the rule can be self-contained without needing to download and edit a separate shell script. And if it’s a self-contained rule, it could be made into a rule template that users can just install and configure.

It should also work with any LLM that uses the OpenAI API standard (which is pretty much all of them including many local LLMs).

But the chunking and gathering the results would become a little messy because the ask and the response are asynchronous. But it’s not impossible to deal with if one uses a timer to gather the results until they all come in.

Do you need to chunk to avoid limits with ChatGPT? Does it choke if too large a prompt or is there limit on the API? Or do you find the results are better if you feed it limited chunks at a time?

This was the most difficult part. Since you’re paying to have it process 2–3 MB of event logs, it depends on the model, its pricing, how much you can upload, and the speed at which you want it done. I went as cheap as possible, hence the gpt-4.1-mini model, and chunked it.

With the cheapest option, it takes about 2.5 minutes to upload and process all the entries and get a response back.
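For a rough sense of scale (illustrative numbers only, not measured from these logs), here’s the back-of-envelope arithmetic for how many API calls one run makes:

```shell
# Back-of-envelope: how many chunk requests does one run cost?
LOG_BYTES=3000000      # ~3 MB events.log (assumed)
AVG_LINE_BYTES=100     # typical events.log line length (assumed)
CHUNK_LINES=500        # matches CHUNK_LINES in the script
LINES=$((LOG_BYTES / AVG_LINE_BYTES))
CHUNKS=$(( (LINES + CHUNK_LINES - 1) / CHUNK_LINES ))   # ceiling division
echo "$LINES lines -> $CHUNKS chunk requests (plus 1 final combine call)"
```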

Today (midnight till 3 pm CST) was 106 requests and I paid $1.09. Yes, that seems expensive for a day, and it is for me, but I’ve been playing with this quite a bit, so some of it is testing.

input_tokens:        2,876,331
output_tokens:       12,914
input_cached_tokens: 83,584

From reading the ChatGPT binding docs, it didn’t seem like I could upload an image or a text file. I never tried the binding after reading its specs.


Here’s the output of one I just ran for the current day’s events.log.

Around January 26, 2026, between about 10:30 AM and 12:30 PM, the OpenHAB system underwent a broad startup and rule initialization, bringing numerous smart home devices—including lighting, media players, sensors, and security zones—online and refreshing their statuses. During this period, various automations ran extensively: motion sensors triggered occupancy tracking and corresponding lighting adjustments, robotic vacuum cleaners started and completed cleaning cycles with related environment and lighting changes, and alarms and fault indicators were cleared or reactivated as part of routine resets; additionally, media devices like Sonos and Echo received automated playback and volume commands. Transient communication issues occurred briefly with some Z-Wave nodes, Amazon Echo devices, and the Dirigera gateway, but all recovered automatically without prolonged failures, and environmental sensors continuously updated system states to support active monitoring and automation—overall reflecting normal operation with no critical errors.


Best, Jay

Image, no, but text is text. Once it’s in a String you send it as part of the command to the chat Item. You really just need to pass in the text; it doesn’t necessarily need to be a separate file. Even your shell script doesn’t actually send the file: it posts the contents as part of the HTTP body in the call to curl.
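To make that point concrete: the request body just carries the log as an escaped JSON string. The `json_escape` helper below is a hypothetical minimal stand-in for what `jq --rawfile` does in the script above (jq is the safer choice for real use, since it handles all escaping edge cases):

```shell
# Hypothetical helper: escape backslashes, quotes, and newlines for JSON.
# The original script uses jq --rawfile for this instead.
json_escape() {
  printf '%s' "$1" | sed -e 's/\\/\\\\/g' -e 's/"/\\"/g' | awk '{printf "%s\\n", $0}'
}
LOG_TEXT=$(printf 'line A\nline B')
# The log text ends up inline in the JSON body -- no file attachment involved.
BODY="{\"messages\":[{\"role\":\"user\",\"content\":\"Summarize:\\n$(json_escape "$LOG_TEXT")\"}]}"
echo "$BODY"
```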

I just have the lowest level of API access. I posted my $5 and mainly use it with Mealie to scrape recipes from photos. It’s been running about one cent per recipe. I set it up with OH but I’ve not yet found a good use for it. This strikes me as something that might entertain me, at least for a bit.

Eventually I plan on moving to Ollama or something else local.

I used Grok for the curl scripting piece and for handling the error rejections from OpenAI. The OpenAI API started to put restrictions on the file upload: too much data, uploading it too fast, it needs to be JSON, it needs to be a PDF, it needs to be .txt. Grok’s debugging kept trying to comply with OpenAI’s errors, but it never worked, either throwing back errors about the attachment or simply hanging there without any returned results.

I finally said start over and we went very slowly step by step to get it working with chunking. It probably took about 1.5 hrs for it to work consistently.

Best, Jay

For grins, I just ran a little test. I already had the ChatGPT add-on set up, so it was basically a two-liner to test without chunking.

let events = actions.Exec.executeCommandLine(time.Duration.ofSeconds(1), 'cat', '/openhab/userdata/logs/events.log');

items.OpenAI.sendCommand('Review this openHAB event log and summarize, in plain English and three sentences, what operational activity occurred around failures, automations, or user actions, excluding raw status noise. \n' + events);

The results get posted to the OpenAI Item as an update and were not spectacular.

During the logged period, multiple devices transitioned from an uninitialized state to online, indicating successful initialization and connection for various smart home devices, including multiple Chromecast devices and sensors. There were also instances of communication errors with some devices, such as the UPS and IP cameras, which temporarily went offline but later returned to an operational state. Additionally, user actions triggered updates to automation scripts, including a command to announce to the family that it was time to leave for school, reflecting active user engagement with the smart home system.

The IP cameras did temporarily go offline (they are always temporarily going offline), but the UPS is offline for good. I broke my NUT server during an upgrade yesterday. It was definitely not temporarily offline and it remains offline. It never came back, so the AI is lying about this one.

And the user action part was completely bogus. That was a completely automated action not reflecting active user engagement. But I can see how the AI might have become confused about that. It probably saw its own response as something a person pushed into OH.

This call, plus the previous calls to provide a more random announcement when it’s time to leave for school, took 64,729 tokens and cost me about $0.04. But this wasn’t a full day’s log (those are gzipped up in my config and I didn’t bother unzipping one just for this test). So a full day’s log would probably be more tokens overall, but I wouldn’t expect it to be more than 30% more.

I might experiment to see if the results are better if I chunk and/or provide a better initial prompt explaining what some things mean and telling it what to look out for. My openhab.log is so sparse I don’t see any real benefit to passing it too right now.

But overall, it seems the chunking isn’t strictly necessary, though I’m not sure how the maxTokens property impacts things on the ChatGPT Thing.

My ChatGPT Thing config:

version: 1
things:
  chatgpt:account:openai-account:
    location: Rich's Office
    config:
      apiKey: none of your business
      apiUrl: https://api.openai.com/v1/chat/completions
      modelUrl: https://api.openai.com/v1/models
      model: gpt-4o-mini
      temperature: 0.5
      topP: 1
      systemMessage: "You are the manager of the openHAB smart home. You know how to manage devices in a smart home or provide their current status. You can also answer a question not related to devices in the house. Or, for example, you can compose a story upon request. I will provide information about the smart home; if necessary, you can perform the function; if there is not enough information to perform it, then clarify briefly, without listing all the available devices and parameters for the function. If the question is not related to devices in a smart home, then answer the question briefly, maximum 3 sentences in everyday language. The name, current status and location of devices is displayed in 'Available devices'. Use the items_control function only for the requested action, not for providing current states. Available devices:"
      maxTokens: 1000
      keepContext: 2
      contextThreshold: 10000
      useSemanticModel: true
    channels:
      chat:
        type: chat
        config:
          model: gpt-4o-mini
          temperature: 0.5
          maxTokens: 1000
          topP: 1
