Mystery problems: main fuse was blown, /dev/ttyUSB0 disappeared, OH starts new log file...?

I’ve run into some strange things. Maybe they’re not all related, but I want to list them all anyway.

1. The main fuse was blown
Somewhere around 19h00, the main fuse was blown. There was no heavy consumption going on (only the oven was preheating), and none of the “sub fuses” were blown. The same thing happened a few weeks ago, also while the oven was preheating. I took the fuse out and inspected the wiring. No damage. A mystery.
I’ve got a UPS running, so openHAB kept running, as well as my router and modem.

2. All my “EnOcean” things went offline, because the Bridge went offline
I “fixed” the fuse situation at 19h03, and then these logs started to appear:

2025-07-31 19:05:06.173 [INFO ] [ab.event.ThingStatusInfoChangedEvent] - Thing 'enocean:bridge:ccb063c185' changed from OFFLINE (CONFIGURATION_ERROR): Port could not be found to OFFLINE (CONFIGURATION_PENDING): opening serial port...
2025-07-31 19:05:06.192 [INFO ] [ab.event.ThingStatusInfoChangedEvent] - Thing 'enocean:bridge:ccb063c185' changed from OFFLINE (CONFIGURATION_PENDING): opening serial port... to OFFLINE (CONFIGURATION_ERROR): Port could not be found

3. /dev/ttyUSB0 had disappeared
This is also something I’ve encountered before. Very annoying, as I don’t notice it, and it of course makes the connection to my Eltako setup (via the EnOcean bridge) impossible.
I now did some googling, and a lot of people on “the internet” think brltty causes this sometimes. Apparently, it’s something for blind people, but I’m not blind, so I uninstalled it. Although some of it still seems to remain on my system, even after a reboot:

erik@MinipcLG2:~$ apt list | grep -i brltty

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

brltty-espeak/noble 6.6-4ubuntu5 amd64
brltty-flite/noble 6.6-4ubuntu5 amd64
brltty-speechd/noble 6.6-4ubuntu5 amd64
brltty-x11/noble 6.6-4ubuntu5 amd64
brltty/noble,now 6.6-4ubuntu5 amd64 [residual configuration]
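
I suppose a purge would clear out that leftover configuration as well (untested on my side; these are just the standard apt/dpkg commands, and the “rc” state in dpkg -l marks config-only leftovers):

sudo apt purge brltty       # remove the package including its configuration files
dpkg -l | grep -i brltty    # after the purge, no “rc” entries should remain
ls -l /dev/ttyUSB*          # and a quick check that the serial adapter is (back) there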

4. openHAB started a new events.log
Apparently, openHAB restarted when I “fixed” the fuse. Why?

2025-07-31 19:01:33.621 [INFO ] [org.openhab.core.Activator          ] - Starting openHAB 4.3.5 (Release Build)
2025-07-31 19:01:34.106 [INFO ] [.core.internal.i18n.I18nProviderImpl] - Time zone set to 'Europe/Brussels'.
2025-07-31 19:01:34.115 [INFO ] [.core.internal.i18n.I18nProviderImpl] - Location set to 'XXXXXXXXXXXXXXXXX'.
2025-07-31 19:01:34.115 [INFO ] [.core.internal.i18n.I18nProviderImpl] - Locale set to 'nl_BE'.
2025-07-31 19:01:34.116 [INFO ] [.core.internal.i18n.I18nProviderImpl] - Measurement system set to 'SI'.
2025-07-31 19:01:45.488 [INFO ] [.core.model.lsp.internal.ModelServer] - Started Language Server Protocol (LSP) service on port 5007
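
Maybe the system journal knows more; something along these lines, assuming openHAB runs as the systemd unit openhab.service and the journal is persistent, should show whether only the service restarted or the whole machine came back up:

journalctl -u openhab.service --since "2025-07-31 18:55" --until "2025-07-31 19:10"   # openHAB service messages around the fuse incident
journalctl --since "2025-07-31 18:55" --until "2025-07-31 19:10"                       # everything else the system logged in that window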

5. ‘Critical error while reading DBUS response’ during startup
This error has been appearing for some time now during startup, while I never had that problem in the past:

2025-07-31 03:48:44.828 [ERROR] [com.github.hypfvieh.DbusHelper      ] - Critical error while reading DBUS response (maybe no bluetoothd daemon running?)
org.freedesktop.dbus.errors.NoReply: No reply within specified time
	at org.freedesktop.dbus.RemoteInvocationHandler.executeRemoteMethod(RemoteInvocationHandler.java:202) ~[?:?]
	at org.freedesktop.dbus.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:89) ~[?:?]
	at jdk.proxy22.$Proxy146.Introspect(Unknown Source) ~[?:?]
	at com.github.hypfvieh.DbusHelper.findNodes(DbusHelper.java:41) ~[?:?]
	at com.github.hypfvieh.bluetooth.DeviceManager.scanForBluetoothAdapters(DeviceManager.java:114) ~[?:?]
	at org.openhab.binding.bluetooth.bluez.internal.DeviceManagerWrapper.scanForBluetoothAdapters(DeviceManagerWrapper.java:45) ~[?:?]
	at org.openhab.binding.bluetooth.bluez.internal.BlueZDiscoveryService.startScan(BlueZDiscoveryService.java:90) ~[?:?]
	at org.openhab.binding.bluetooth.bluez.internal.BlueZDiscoveryService.lambda$0(BlueZDiscoveryService.java:71) ~[?:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572) ~[?:?]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:358) ~[?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
	at java.lang.Thread.run(Thread.java:1583) [?:?]

But maybe that’s been around since I installed the BTHome Binding several weeks ago. I couldn’t tell…
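
The error itself hints at bluetoothd not running; I guess these would be the first things to check on Ubuntu (just my assumption about the relevant service name):

systemctl status bluetooth    # is the bluetoothd daemon actually running?
bluetoothctl list             # which Bluetooth adapters does BlueZ see?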

Everything works now; hopefully removing brltty (except for that leftover ‘configuration’) will keep /dev/ttyUSB0 from disappearing. But there’s still a lot of mystery… Maybe someone has some insight? Could the disappearance of /dev/ttyUSB0 have caused something that made the Eltako device blow the fuse? (That would not be ideal…) Or is it the opposite: did the blown fuse cause /dev/ttyUSB0 to disappear? (Also not ideal…)

(Maybe the last point (about DBUS) deserves its own thread, but since the B in USB also stands for “bus”, I thought it worth mentioning here as well.)

The log file is rotated on every start. That’s the default behaviour.

Hopefully removing the brltty packages solved it. Good choice to use a UPS; big chance your RPi would have had more trouble recovering from this without it. I had so many SD card problems after outages that I stopped using RPis.

But there was no restart around 19h00. Or at least, there was no reason for it: the UPS kept the Linux box running.

I now see I should have worded my 4th point differently. I changed it.

Why did openHAB restart…? Is there a way to retroactively find that out? I checked the “previous” openhab.log. Its last lines give no clue (but actually pose a new mystery):

2025-07-31 16:06:01.273 [WARN ] [io.openhabcloud.internal.CloudClient] - Socket.IO disconnected: ping timeout
2025-07-31 16:06:01.274 [INFO ] [io.openhabcloud.internal.CloudClient] - Disconnected from the openHAB Cloud service (UUID = 3e...42, base URL = http://localhost:8080)
2025-07-31 16:06:29.661 [INFO ] [io.openhabcloud.internal.CloudClient] - Connected to the openHAB Cloud service (UUID = 3e...42, base URL = http://localhost:8080)

That was almost three hours before the fuse situation. And why would there have been an issue with the cloud service?

The “previous” events.log only mentions things going offline (because they lost power), nothing about openHAB shutting down…

Can’t tell you why based on the logs, sorry. Must say it is a little weird, but as everything works as expected, give it some time and see if it happens again.

Agreed there was no apparent reason for it, but we can see that OH did restart at 19:01:33, prior to when the fuse got fixed and close enough to 19:00 to make me think the blown fuse did indeed cause OH to restart. More likely, the whole machine crashed and came back up.

A UPS is usually pretty good at providing clean power to a device in these sorts of situations. But if the EnOcean has any sort of physical connection to the RPi, and it is not also protected from surges by the UPS, it’s possible that a power surge came through the EnOcean and smacked the RPi offline. RPis are notoriously sensitive to power fluctuations and the like, and surges can come into the RPi through any physical connection, not just the power plug.

I can’t say anything about root causes of the blown fuse. But 3 and 4 make sense if everything is wired as I describe.

Not really. If you have StartlevelEvents being logged to events.log (they’re not by default), the last line in events.log will be “Startlevel ‘0’ reached” on a normal shutdown. So you can potentially tell the difference between a crash and a normal shutdown. But you can’t tell why OH shut down, or, if it crashed, why it crashed.

But the problem gets worse if ZRAM is involved: if the RPi itself crashed because of a power blip, you’ll lose all those logs anyway, since there was no time to flush the RAM to disk.

That makes sense. And is pretty frightening… It also supports the idea that the disappearance of /dev/ttyUSB0 is a consequence of the fuse being blown (or, probably more accurately, the cause of the fuse being blown?), rather than of that braille software.

I’m not running openHAB on a Raspberry Pi but on a mini PC; I assume the effect is the same. :slight_smile: Is there a way to check whether the Linux box restarted then? I already restarted again since then, so who -b is no longer useful…

I doubt it was the cause. Any device with the right certifications on the label (e.g. FCC, UL, IC, etc.) is tested to show it won’t fail in that sort of way. It should be all but physically impossible for that EnOcean device to draw enough power to trip the main fuse. Of course there are always exceptions and edge cases to look out for, but the oven seems to be the most likely culprit given the information we have.

Yes. Back in the old cable TV days, a lot of the higher-end surge protectors would also let you plug phone, Ethernet, and coax into the surge protector. An electrical surge could reach and mess with your devices through any cable.

uptime will tell you how long it’s been since the last boot.

09:59:02 up 59 days,  1:45,  1 user,  load average: 0.16, 0.10, 0.05

Note the first field is the current time, not the time the machine last booted.

uptime -s gives the full date and time the machine started, in a format that’s easier to parse (if you wanted to put this into a rule, for example).

2025-06-03 08:13:22

man uptime will show all the other options to this command.

Depending on how long ago the machine came up, you might find something in the syslogs, but over time those get rotated out and deleted. A command like uptime is going to be a better way, I think.
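
Something along these lines might still turn it up, assuming the journal or syslog reaches back far enough (exact paths vary per distro):

journalctl --since "2025-07-31 18:50" --until "2025-07-31 19:10"    # everything the journal kept around the incident
zgrep "ttyUSB" /var/log/syslog*                                      # search current and rotated syslog files for USB serial messages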

I never knew about who -b. Learn something new every day!

Note: I have a rule template that, during startup, will save to an Item and log to openhab.log a rough estimate of the time OH last went offline, based on the timestamp of the last log entry in the most recent archived events.log file. It’s not exact, but it’s a reasonable proxy for when OH last had any activity before starting up again.

You could use that together with uptime to see if the machine kept running while OH restarted on its own, by comparing the two times. Of course the template doesn’t work if the logs got wiped out because they were in ZRAM when the machine lost power.
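
A manual version of that comparison would look roughly like this; the path and the .1 rotation suffix are guesses, adjust them to wherever your archived events.log ends up:

tail -n 1 /var/log/openhab/events.log.1 | cut -c1-23    # timestamp of the last event logged before the restart
uptime -s                                                # when the machine itself last booted

If the boot time is well before that last event, the box kept running and only OH restarted; if it is right after it, the whole machine went down.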

But since I rebooted again last night (to make everything work again), that wouldn’t tell us anything about the reboot before, right?

I didn’t mean that the EnOcean device caused the fuse to be blown, but that the short circuit (from whatever cause) caused the fuse to blow and the EnOcean device to send a current to the Linux box’s USB port. So, what you said :slight_smile: (Although those pretty certifications might also check for that kind of behavior?)

Glad to bring something to the table! :wink:

No, and unless you record that information somewhere other than the logs, there’s going to be no way to learn anything about the boot before the last boot. The logs eventually go away as they are rotated off. All the status stuff in /proc and /sys, which is where all this information comes from, is volatile and only shows the current boot’s information.

There are external tools (Zabbix, Prometheus + Grafana, the ELK stack, Graylog, etc.) which will poll your machines and keep that sort of information essentially forever if you want it. But the host itself only really cares about the current boot.

You can look at /var/log/boot.log and see if the information you seek is there. I don’t think all Linux distros create that, though. But at least on Ubuntu 24.04 the logs from the last 8 boots are saved. That basically captures all the text that scrolls by when the machine starts up (or is hidden behind the nice screen with the logo). It won’t tell you anything about why it went down, though, just what happened during the boot process.
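
For example (the numbering is just how logrotate names them on my Ubuntu machine):

ls -l /var/log/boot.log*     # boot.log is the current boot, boot.log.1 the one before, and so on
less /var/log/boot.log.1     # what the previous boot printed while starting up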

I think they mainly test to make sure the devices don’t misbehave themselves. Except for devices specifically designed to block surges, I don’t think they test whether individual electronics are themselves able to block a surge.