MapDB persistence - forced server restart sometimes loses values

Nine times out of ten the MapDB persistence works fine. I run openHAB in a Docker container, and on the odd occasion where I've had to restart the host, when openHAB comes back up the values in MapDB are lost (normally the state of a bunch of my switches, setpoints, temperature settings, etc.).

What could cause the MapDB to delete itself, or is there some key that gets recreated and causes the database to be discarded?

Is there another persistence option, maybe JSON/XML (which I could source-control), that would let me restore the values in this situation?

Well, MapDB isn't transactionally safe, so if you pull the power plug that might result in corrupted storage. I don't know if it helps, but there's a setting for the add-on that controls how long it waits before syncing to disk; you might want to try that.
But this is a corner case. You shouldn't be restarting OH at all, and if there's really a need you should shut it down gracefully.
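For what it's worth, here's a rough sketch of what a graceful stop can look like when openHAB runs in Docker (the container name, timeout, and Karaf console port/user are assumptions to adapt to your setup):

```
# Give openHAB time to flush persistence and shut down cleanly before
# Docker falls back to SIGKILL (the default grace period is only 10 s).
docker stop -t 120 openhab

# Alternatively, if the Karaf console is still reachable, ask openHAB
# itself to shut down (console usually listens on port 8101, user "openhab").
ssh -p 8101 openhab@localhost shutdown -f
```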

Well, I wouldn't say I was pulling the plug. But I have had to restart the host because the Docker container stops responding, and when it's in that state you generally can't SSH into the Karaf console to shut it down gracefully.

Just to make sure: you have mounted userdata into the container, right? In particular, is $OH_USERDATA/persistence/mapdb mounted?
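For comparison, this is roughly the shape of mount I mean, assuming the official openhab/openhab image (the host-side paths are just examples):

```
# Host paths are illustrative; the key point is that /openhab/userdata
# (which contains persistence/mapdb and jsondb) lives on a host volume.
docker run -d --name openhab \
  -v /opt/openhab/conf:/openhab/conf \
  -v /opt/openhab/userdata:/openhab/userdata \
  -v /opt/openhab/addons:/openhab/addons \
  openhab/openhab:latest
```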

Yeah, it's a volume mount to my host. I agree with @mstormi that it's a bit of an edge case. Also, yesterday I started committing the jsondb and mapdb files to git… I'll just figure out how to restore them, or work out a rule for setting defaults (the database holds things like the max/min setpoints for my heating and the times to turn things on).
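As a rough sketch, this is the kind of snapshot script I've started with (the host path and the schedule are assumptions, and it presumes git init was already run in userdata):

```
#!/bin/bash
# Commit the jsondb and mapdb contents to git. Run from cron a few times a week.
USERDATA=/opt/openhab/userdata   # assumed host path of the userdata volume

cd "$USERDATA" || exit 1
git add jsondb persistence/mapdb
# "nothing to commit" is fine; don't treat it as an error.
git commit -m "openHAB state snapshot $(date -Iseconds)" || true
```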

Can you git the jsondb without shutting down openHAB to quiesce the database? Copying any active database is asking for corruption trouble.

The JSONDB isn't really like a regular database that is constantly written to a little bit at a time. Periodically it does get rewritten all at once, but that doesn't take more than a few hundred milliseconds, so the likelihood of running a git add at just that moment is very low. And there are backups created at each write, so if you include the backups you won't lose anything, and you will have the previous version of the DB in your git history.
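If it helps, here's a hedged sketch of falling back to one of those automatic backups (I believe they end up under userdata/jsondb/backup and keep the original file name; the paths and container name are assumptions):

```
# Stop openHAB first so it doesn't rewrite the file again on shutdown.
docker stop -t 120 openhab

cd /opt/openhab/userdata/jsondb || exit 1    # assumed host path

# Put the newest backup copy of the Thing database back in place.
latest=$(ls -t backup/*org.eclipse.smarthome.core.thing.Thing.json | head -n 1)
cp "$latest" org.eclipse.smarthome.core.thing.Thing.json

docker start openhab
```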

There is no guarantee that you won't get unlucky, but there are plenty of ways to fall back to a working copy.

In the couple of years that I’ve been checking in my jsondb, a couple of times a week, I’ve not yet had a corruption.

Fair enough. It is safest to assume a database is constantly written to until you learn better.
Since it appears to be a file, is there any file locking we could utilize to further minimize colliding with an update?

File locking tends to only work for writes: many can read but only one can write at a time. Given that, typical file system locking would not work here because the backup script is only reading. And I'm pretty sure that there isn't good write locking in place. The script could run lsof org.eclipse.smarthome.core.thing.Thing.json to see if the file is being written to before grabbing a copy, but that's no guarantee that OH won't start writing to it right after the lsof command. And all of this would have to be built in addition to what git, openHAB, and the file system provide.
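To make that concrete, here's a rough sketch of that kind of check (still racy, as noted; paths are assumptions):

```
#!/bin/bash
# Only copy the JSONDB file if nothing currently has it open.
# Best effort only: openHAB could still start writing right after the check.
DB=/opt/openhab/userdata/jsondb/org.eclipse.smarthome.core.thing.Thing.json

if lsof -- "$DB" > /dev/null 2>&1; then
    echo "File is open (probably being written); skipping this run." >&2
    exit 0
fi

cp "$DB" /opt/openhab/backups/    # assumed destination
```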

I just wondered whether OH locked the file before & while writing to it.