I am running SQLite on my Raspberry Pi 3 and have recently been getting the following message (after the upgrade to 2.5 M3!?):
java.sql.SQLException: [SQLITE_BUSY] The database file is locked (database is locked) Query: SELECT time, value FROM item0403 ORDER BY time DESC LIMIT 0,1000 Parameters: []
I guess the DB is full.
This makes me think about reading (again) the various threads about which persistence to use and why…
Anyway - how do I delete single entries instead of the entire DB?
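For reference, single entries can be removed with a plain DELETE against the per-item table. Here is a minimal sketch using the sqlite-jdbc driver; the table name is taken from the error above, and the database path assumes a typical openHAB install (adjust both for your system):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

// Prune rows older than a cutoff from a single item table instead of
// deleting the whole database. Table name and file path are assumptions
// based on the error message above and a typical openHAB install.
public class PruneItemTable {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlite:/var/lib/openhab2/persistence/jdbc/sqlite-jdbc.db";
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(
                     "DELETE FROM item0403 WHERE time < ?")) {
            ps.setString(1, "2019-01-01 00:00:00"); // cutoff timestamp
            int deleted = ps.executeUpdate();
            System.out.println("Deleted " + deleted + " rows");
            // VACUUM reclaims the freed pages in the single-file database.
            try (Statement st = conn.createStatement()) {
                st.execute("VACUUM");
            }
        }
    }
}
```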
An SQLite database is limited in size to 140 terabytes (2^47 bytes, 128 tebibytes). And even if it could handle larger databases, SQLite stores the entire database in a single disk file, and many filesystems limit the maximum size of files to something less than this.
I doubt the database is full. I suspect there is a lock still hanging around that didn’t get cleared.
Are the SQLite files on an NFS mounted volume?
Try rebooting the whole machine to see if that clears the lock.
Well, you could go through the pain of figuring out which process has the lock on the file and restarting that process, if one exists. Or you can save yourself all that work and just reboot.
There is a known problem with NFS mounts where file locks end up being flaky.
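If you do want to hunt for a stale lock before rebooting, one rough diagnostic is to try taking an exclusive lock on the file yourself from a separate process. A sketch (the path is an assumption; on Linux, Java's file locks map to the same fcntl advisory locks SQLite uses by default):

```java
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

// Try to take an exclusive OS-level lock on the database file.
// tryLock() returns null when another process already holds a lock.
public class LockCheck {
    public static void main(String[] args) throws Exception {
        String path = args.length > 0 ? args[0]
                : "/var/lib/openhab2/persistence/jdbc/sqlite-jdbc.db"; // assumed path
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
             FileChannel ch = raf.getChannel()) {
            FileLock lock = ch.tryLock();
            if (lock == null) {
                System.out.println("Locked by another process");
            } else {
                System.out.println("No other process holds a lock right now");
                lock.release();
            }
        }
    }
}
```

Note this only catches a lock held at the moment you run it; SQLite normally releases its locks between transactions.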
It actually helped (up to now), but I will keep an eye on it.
Thanks, Rich
EDIT:
It just happened again:
2019-09-21 08:33:25.528 [ERROR] [org.knowm.yank.Yank ] - Error in SQL query!!!
java.sql.SQLException: [SQLITE_BUSY] The database file is locked (database is locked) Query: SELECT time, value FROM item0539 WHERE TIME>'2019-09-19 08:33:19' AND TIME<'2019-09-21 08:33:22' ORDER BY time ASC Parameters: []
I renamed the sqlite-jdbc.db and restarted OH.
After OH created a new DB I still get the error:
2019-09-24 13:31:24.655 [ERROR] [org.knowm.yank.Yank ] - Error in SQL query!!!
java.sql.SQLException: [SQLITE_BUSY] The database file is locked (database is locked) Query: SELECT time, value FROM item0504 WHERE TIME>'2019-09-24 12:46:21' ORDER BY time ASC Parameters: []
I’m not surprised that forcing it to recreate the database file didn’t work. I don’t think it is a problem with the file itself; it is a problem either internal to SQLite or with the file system. At this point my only suggestion is to file an issue. I’m out of ideas.
Thanks Rich.
I don’t know yet how to file an issue, but I will do it after I’ve checked some other things.
Item0504 (from the error above) is my item “System_started” (in the freshly generated DB).
So nothing obvious there.
sqlite.org states (https://www.sqlite.org/rescode.html#busy):
The SQLITE_BUSY result code indicates that the database file could not be written (or in some cases read) because of concurrent activity by some other database connection, usually a database connection in a separate process.
I wonder which other process besides OH would access my SQLite DB!?
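Since SQLITE_BUSY just means a lock wasn’t released before the driver gave up, one standalone test is to raise SQLite’s busy timeout so a connection waits instead of failing immediately. A minimal sketch with sqlite-jdbc (path assumed; this doesn’t change what openHAB’s own connections do, it only probes the DB from outside):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Probe the database with a 5-second busy timeout: SQLite retries
// for up to 5000 ms before reporting SQLITE_BUSY.
public class BusyTimeoutProbe {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlite:/var/lib/openhab2/persistence/jdbc/sqlite-jdbc.db";
        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement()) {
            st.execute("PRAGMA busy_timeout = 5000"); // milliseconds
            try (ResultSet rs = st.executeQuery(
                    "SELECT time, value FROM item0504 ORDER BY time DESC LIMIT 1")) {
                while (rs.next()) {
                    System.out.println(rs.getString("time") + " -> " + rs.getString("value"));
                }
            }
        }
    }
}
```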
That’s the root of the problem. There wouldn’t be one. That is why I asked about NFS and focused on the file system more than on SQLite itself.
That does raise something, though. There actually might be a 2 GB limit here. Raspbian is 64-bit, I’m pretty sure, but your JDK is probably 32-bit, as that’s the recommendation since the 64-bit version of Java has problems with serial devices on the RPi. And typically 32-bit software cannot handle file sizes over 2 GB.
But that doesn’t explain why the problem continued to occur after you reinitialized the DB by removing the old file and having it create a new one.
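Checking the actual file size against that 2 GB boundary would rule this in or out quickly; for example (path assumed):

```java
import java.io.File;

// Rough check of how close the database file is to the 2 GiB boundary
// that 32-bit builds can run into.
public class DbSizeCheck {
    public static void main(String[] args) {
        File db = new File("/var/lib/openhab2/persistence/jdbc/sqlite-jdbc.db"); // assumed path
        long twoGiB = 2L * 1024 * 1024 * 1024;
        System.out.printf("%d bytes (%.1f%% of 2 GiB)%n",
                db.length(), 100.0 * db.length() / twoGiB);
    }
}
```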
Thanks again, Rich.
I wouldn’t mind switching to another DB.
However, I would first like to understand the root cause, so I will spend some more time on my investigation.
In the meantime I will also check out the various alternative persistence approaches here.
I use MapDB for restore-on-startup and InfluxDB for history. I’m pretty happy with that combo. I think rrd4j is more popular than InfluxDB overall, though.
What I did was:
cleaned the cache & temp (again).
increased the time for copying the rules back after the “system started” signal.
Maybe the persistence was not yet ready?!