SQLite sometimes fails with SQLITE_BUSY after OH upgrade to 4.0.3

Hi

I am running OH in a docker container on my Synology NAS.
I recently upgraded from OH 3.4.0 to 4.0.3.
After the upgrade I started getting occasional errors like the following:

2023-10-26 06:00:00.845 [WARN ] [jdbc.internal.JdbcPersistenceService] - JDBC::store: Unable to store item
org.openhab.persistence.jdbc.internal.exceptions.JdbcSQLException: Error in SQL query!!!; [SQLITE_BUSY] The database file is locked (database is locked) Query: INSERT OR IGNORE INTO item0016 (TIME, VALUE) VALUES( strftime('%Y-%m-%d %H:%M:%f' , 'now' , 'localtime'), CAST( ? as DOUBLE) ) Parameters: [0.021]; Pool Name= yank-default; SQL= INSERT OR IGNORE INTO item0016 (TIME, VALUE) VALUES( strftime('%Y-%m-%d %H:%M:%f' , 'now' , 'localtime'), CAST( ? as DOUBLE) )
at org.openhab.persistence.jdbc.internal.db.JdbcSqliteDAO.doStoreItemValue(JdbcSqliteDAO.java:123) ~[?:?]
at org.openhab.persistence.jdbc.internal.JdbcMapper.storeItemValue(JdbcMapper.java:220) ~[?:?]
at org.openhab.persistence.jdbc.internal.JdbcPersistenceService.internalStore(JdbcPersistenceService.java:162) ~[?:?]
at org.openhab.persistence.jdbc.internal.JdbcPersistenceService.store(JdbcPersistenceService.java:140) ~[?:?]
at org.openhab.core.persistence.internal.PersistenceManager.lambda$7(PersistenceManager.java:170) ~[?:?]
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) ~[?:?]
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) ~[?:?]
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) ~[?:?]
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) ~[?:?]
at java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:720) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) ~[?:?]
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) ~[?:?]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) ~[?:?]
at org.openhab.core.persistence.internal.PersistenceManager.lambda$3(PersistenceManager.java:168) ~[?:?]
at java.util.concurrent.ConcurrentHashMap$ValuesView.forEach(ConcurrentHashMap.java:4780) ~[?:?]
at org.openhab.core.persistence.internal.PersistenceManager.handleStateEvent(PersistenceManager.java:165) ~[?:?]
at org.openhab.core.persistence.internal.PersistenceManager.stateUpdated(PersistenceManager.java:334) ~[?:?]
at org.openhab.core.items.GenericItem.lambda$1(GenericItem.java:258) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
at java.lang.Thread.run(Thread.java:833) ~[?:?]

I have a rule that updates two items every 10 minutes, and most of the updates are stored without any error.
I did not change anything else in the setup, so I am guessing it must be the OH upgrade that caused the issue, as I had never seen it before.
The error only ever seems to happen on one of the two items, so I suspect it might be a threading issue in the SQLite persistence service.
I have tried restarting both the NAS and OH several times.

Does anybody have any input on what might cause this?

Reported in [jdbc-sqlite] Sometimes fails with SQLITE_BUSY with OH 4.0.3 · Issue #15821 · openhab/openhab-addons · GitHub