Please explain errors

Strange things: OH2 stops responding and there are too many errors in the log. Can anybody translate this to human language? :slight_smile:
This happens when I try to view graphs. I connect from a remote computer; the OH2 UI loads very slowly and then stops responding.
OH2, MariaDB, and MQTT are installed locally.

2017-02-06 09:25:29.907 [WARN ] [ore.internal.events.OSGiEventManager] - Dispatching event to subscriber 'org.openhab.core.events.internal.EventBridge@b4488c' takes more than 5000ms.
2017-02-06 09:25:45.289 [WARN ] [ore.internal.events.OSGiEventManager] - Dispatching event to subscriber 'org.eclipse.smarthome.io.monitor.internal.EventLogger@10a388b' takes more than 5000ms.
2017-02-06 09:26:49.625 [WARN ] [.whistlingfish.harmony.HarmonyClient] - Send heartbeat failed
...
2017-02-06 09:26:50.016 [WARN ] [rg.jivesoftware.smack.XMPPConnection] - Connection closed with error
...
2017-02-06 09:26:49.678 [WARN ] [ore.internal.events.OSGiEventManager] - Dispatching event to subscriber 'org.eclipse.smarthome.core.internal.items.ItemUpdater@10a3e75' takes more than 5000ms.
2017-02-06 09:26:49.676 [WARN ] [ore.internal.events.OSGiEventManager] - Dispatching event to subscriber 'org.openhab.core.events.internal.EventBridge@b4488c' takes more than 5000ms.
2017-02-06 09:26:49.640 [WARN ] [com.zaxxer.hikari.pool.HikariPool   ] - 1m20s359ms25μs430ns - Thread starvation or clock leap detected (housekeeper delta=yank-default).
2017-02-06 09:26:50.846 [ERROR] [o.client.mqttv3.internal.ClientState] - openhab2: Timed out as no activity, keepAlive=60,000 lastOutboundActivity=1,486,362,410,069 lastInboundActivity=1,486,362,348,100
2017-02-06 09:26:52.076 [ERROR] [g.mqtt.internal.MqttMessagePublisher] - Error publishing...
Client is not connected (32104)
...
2017-02-06 09:26:52.099 [ERROR] [t.mqtt.internal.MqttBrokerConnection] - MQTT connection to broker was lost
Timed out waiting for a response from the server (32000)
...
2017-02-06 09:27:27.220 [ERROR] [t.mqtt.internal.MqttBrokerConnection] - MQTT connection to 'oh2agdisk' was lost: Unexpected error : ReasonCode 6 : Cause : Unknown
2017-02-06 09:27:27.227 [INFO ] [t.mqtt.internal.MqttBrokerConnection] - Starting connection helper to periodically try restore connection to broker 'oh2agdisk'
2017-02-06 09:27:37.232 [INFO ] [t.mqtt.internal.MqttBrokerConnection] - Starting MQTT broker connection 'oh2agdisk'
...
2017-02-06 10:17:02.298 [ERROR] [n.mqtt.internal.MqttMessagePublisher] - Error publishing message: Too many publishes in progress
2017-02-06 10:17:06.114 [ERROR] [n.mqtt.internal.MqttMessagePublisher] - Error publishing message: Too many publishes in progress
2017-02-06 10:17:06.163 [ERROR] [n.mqtt.internal.MqttMessagePublisher] - Error publishing message: Too many publishes in progress
...
2017-02-06 10:21:49.855 [ERROR] [org.knowm.yank.Yank                 ] - Error in SQL query!!!
java.sql.SQLTransientConnectionException: yank-default - Connection is not available, request timed out after 38010ms.
...
2017-02-06 10:27:59.798 [WARN ] [eclipse.jetty.servlet.ServletHandler] - Error for /chart
java.lang.OutOfMemoryError: Java heap space

I’m not an expert on this, but it looks like a major error is happening in either the MQTT binding or the JDBC add-on. All of the errors in the log are MQTT related. What seems to have happened is that at some point before this, a thread ran amok, consuming more and more resources until MQTT could no longer maintain its connection to the broker, and in the end OH ran out of heap space (i.e. memory).

If you run top, I bet you will see openHAB consuming 100% of at least one of your CPUs and growing in memory until it crashes.
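A quick way to check from a terminal (a sketch; the `pgrep` pattern assumes the process contains "openhab" in its command line, so adjust it to however you start openHAB):

```shell
# List the top CPU consumers; the openHAB java process should be near the top
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 10

# Or watch it live (quit with 'q'); commented out since it assumes openHAB is running:
# top -p "$(pgrep -f openhab | head -n 1)"
```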

Given this occurs when you try to view a chart, I’m guessing either JDBC or the charting servlet is the root cause, but I have no way to know from these logs.

I think this is worthy of filing an issue. This is clearly a bug. It’s going to be hard to diagnose though.
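One thing that would make it easier to diagnose: capture a heap dump when the OutOfMemoryError hits, so whoever picks up the issue can see what is filling the heap. Assuming an apt/deb install (the file path is a guess for other setups), you could add to /etc/default/openhab2:

```sh
# Write a heap dump to /var/lib/openhab2 when the JVM runs out of heap
# (path and variable name assume the openHAB 2 Debian package layout)
EXTRA_JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/openhab2"
```

Then restart openHAB and attach the resulting .hprof file (or a summary of it) to the issue.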

Today I got high CPU load (90-100%), again after viewing graphs.
Not all the errors from my first post appear, only:

2017-02-06 10:17:06.163 [ERROR] [n.mqtt.internal.MqttMessagePublisher] - Error publishing message: Too many publishes in progress

OH2 did not stop, but it works very slowly.

Is this information enough to file an issue?

I believe not.

To file an issue, you need to give enough information for others to be able to reproduce the error (this is key).
Then, the troubleshooting steps will help to identify the root cause.

You need to include the following type of info:

  • Hardware used (e.g. Raspberry Pi 3)
  • Software used (e.g. Raspbian Jessie, Oracle Java JDK 8u121 x64, openHAB 2 Snapshot xyz, other)
  • Add-ons used (e.g. MQTT, JDBC-MariaDB, other)
  • Relevant configuration data (e.g. mqtt.cfg, jdbc.cfg, other)
  • Steps to reproduce and/or info from troubleshooting already performed
  • Expected behaviour versus actual behaviour
  • DEBUG (or TRACE) log outputs
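For the last point, the log level can be raised per bundle from the openHAB 2 Karaf console, for example (the logger names below are my guess for the MQTT transport and JDBC persistence bundles; adjust them to the add-ons you actually use):

```text
ssh -p 8101 openhab@localhost            # open the Karaf console (default password: habopen)
log:set DEBUG org.openhab.io.transport.mqtt
log:set DEBUG org.openhab.persistence.jdbc
log:tail                                 # follow the log live; Ctrl+C to stop
```

Reproduce the chart problem with these levels active and attach the resulting log section to the issue.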