SQLITE_BUSY - DB Full?

All,

I am running SQLite on my Raspberry Pi 3 and have recently been getting the following message (after the upgrade to 2.5 M3 (!?)):

java.sql.SQLException: [SQLITE_BUSY]  The database file is locked (database is locked) Query: SELECT time, value FROM item0403 ORDER BY time DESC  LIMIT 0,1000 Parameters: []

I guess the DB is full.
This makes me think about reading (again) the various threads about which persistence to use and why… :slight_smile:

Anyway - how do I delete single entries instead of the entire DB?
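Something like this is what I have in mind - a minimal sketch with the sqlite3 CLI, assuming the per-item tables (time, value) like item0403 from the error above, the default openHABian DB path, and the openhab2 service name:

    # Stop openHAB first so the file isn't locked (service name assumed)
    sudo systemctl stop openhab2

    # Prune old rows from one item table, then reclaim the freed space
    sqlite3 /var/lib/openhab2/persistence/jdbc/sqlite-jdbc.db \
      "DELETE FROM item0403 WHERE time < '2019-01-01 00:00:00'; VACUUM;"

    sudo systemctl start openhab2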

How much free space is on your SD card? df -hl

I checked it already.
Enough space on all mount points.
(Don’t have access to post the results.)

My results of df -hl:

[19:01:01] openhabian@Homer:~$ df -hl
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/root        30G     13G   16G   46% /
    devtmpfs        481M       0  481M    0% /dev
    tmpfs           486M       0  486M    0% /dev/shm
    tmpfs           486M     50M  437M   11% /run
    tmpfs           5.0M  4.0K  5.0M    1% /run/lock
    tmpfs           486M       0  486M    0% /sys/fs/cgroup
    /dev/sda1        41M     21M   20M   52% /boot
    tmpfs            98M       0   98M    0% /run/user/1000

An SQLite database is limited in size to 140 terabytes (2^47 bytes, 128 tebibytes). And even if it could handle larger databases, SQLite stores the entire database in a single disk file, and many filesystems limit the maximum file size to something less than this.
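If you want to check the actual size against that limit: size in bytes = page_count × page_size. A quick sketch with the sqlite3 CLI; the path is my assumption for a default openHABian install:

    # DB size in bytes = page_count * page_size (path assumed)
    DB=/var/lib/openhab2/persistence/jdbc/sqlite-jdbc.db
    sqlite3 "$DB" 'PRAGMA page_count; PRAGMA page_size;'
    ls -lh "$DB"   # compare against the reported file size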

I doubt the database is full. I suspect there is a lock still hanging around that didn’t get cleared.

Are the SQLite files on an NFS mounted volume?

Try rebooting the whole machine to see if that clears the lock.

If they are NFS mounted, is the database user able to write to the mounted disk?

Sounds Windows-ish, but I’ll try :slight_smile:
(it’s not a mounted NFS)

Well, you could go through the pain of figuring out which process has the lock on the file and restarting that process, if one exists. Or you can save all that work and just reboot.
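If you do want to hunt for the lock holder, this sketch is the place to start (DB path is my assumption for a default openHABian install):

    # Show any process that still has the database file open (path assumed)
    sudo lsof /var/lib/openhab2/persistence/jdbc/sqlite-jdbc.db
    # or, equivalently:
    sudo fuser -v /var/lib/openhab2/persistence/jdbc/sqlite-jdbc.db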

There is a known problem with NFS mounts where file locks end up being flaky.


Perhaps on Linux, but I am an old Unix (Sun Solaris) admin. :wink:

There are bad/flaky implementations of NFS on some Linux distros and the BSDs too. Since Solaris is dead…


Fair enough :wink:

It actually helped (up to now), but I will keep an eye on it.
Thanks, Rich

EDIT:
It just happened again:

2019-09-21 08:33:25.528 [ERROR] [org.knowm.yank.Yank                 ] - Error in SQL query!!!
java.sql.SQLException: [SQLITE_BUSY]  The database file is locked (database is locked) Query: SELECT time, value FROM item0539 WHERE TIME>'2019-09-19 08:33:19' AND TIME<'2019-09-21 08:33:22' ORDER BY time ASC Parameters: []

EDIT II:
The DB is big, but OK (I guess):

drwxrwxr-x+  5 openhab openhab 4.0K Sep 21 08:43 .
drwxrwxr-x+ 19 openhab openhab 4.0K Sep 12 14:08 ..
drwxrwxr-x+  2 openhab openhab 4.0K Sep 12 14:02 db4o
drwxrwxr-x+  2 openhab openhab 4.0K Sep 12 14:02 mapdb
drwxrwxr-x+  2 openhab openhab 4.0K Sep 12 14:02 rrd4j
-rw-rw-r--   1 openhab openhab 2.7G Sep 21 08:43 sqlite-jdbc.db
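To back up the “I guess” with a real check, a quick integrity test with the sqlite3 CLI (run in the same directory while OH is stopped; a healthy file prints “ok”):

    sudo systemctl stop openhab2   # service name assumed
    sqlite3 sqlite-jdbc.db 'PRAGMA integrity_check;'
    sudo systemctl start openhab2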

I renamed the sqlite-jdbc.db and restarted OH.
After OH created a new DB I still get the error:

2019-09-24 13:31:24.655 [ERROR] [org.knowm.yank.Yank                 ] - Error in SQL query!!!
java.sql.SQLException: [SQLITE_BUSY]  The database file is locked (database is locked) Query: SELECT time, value FROM item0504 WHERE TIME>'2019-09-24 12:46:21' ORDER BY time ASC Parameters: []

I’m not surprised that forcing it to recreate the database file didn’t work. I don’t think it is a problem with the file itself; it is a problem either internal to SQLite or with the file system. At this point my only suggestion is to file an issue. I’m out of ideas.

Thanks Rich.
I don’t know yet how to file an issue, but I will do it after I’ve checked some other things.
Item0504 (the error above) is my item “System_started” (in the freshly generated DB).
So nothing obvious.

sqlite.org states (https://www.sqlite.org/rescode.html#busy):
The SQLITE_BUSY result code indicates that the database file could not be written (or in some cases read) because of concurrent activity by some other database connection, usually a database connection in a separate process.

I wonder which other process except OH would access my SQLite DB!?

That’s the root of the problem. There wouldn’t be one. That is why I asked about NFS and focused on the file system more than on SQLite itself.
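One more file-system-side thing worth checking: a crashed writer can leave a stale rollback journal or WAL file next to the database, and that alone can keep it looking locked. A quick look (path assumed):

    # A leftover -journal, -wal, or -shm file from a crashed writer
    # can keep the database in a locked state
    ls -l /var/lib/openhab2/persistence/jdbc/sqlite-jdbc.db*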

That does raise something. There actually might be a 2 GB limit here. Raspbian is 64-bit, I’m pretty sure, but your JDK is probably 32-bit, as that’s the recommendation: the 64-bit version of Java has problems with serial devices on the RPi. And typically 32-bit software cannot handle file sizes over 2 GB.

But that doesn’t explain the problem continuing to occur after you reinitialized the DB by removing the old file and having it create a new one.
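Either way, the bitness of both the OS and the JVM is easy to verify; a quick sketch (sun.arch.data.model is HotSpot-specific):

    getconf LONG_BIT   # 32 or 64: userland bitness
    uname -m           # armv7l means 32-bit ARM on Raspbian
    java -XshowSettings:properties -version 2>&1 | grep sun.arch.data.model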

See How to file an Issue. The repo where the persistence stuff lives is openhab1-addons/bundles/persistence at main · openhab/openhab1-addons · GitHub.

Though be prepared to not get satisfaction, as SQLite was dropped as officially supported by OH a long time ago.

Thanks again, Rich.
I wouldn’t mind switching to another DB.
However, I would like to understand the root cause, so I will spend some more time on my investigation.

In the meantime I will also check out the various alternative persistence approaches here.

I use MapDB for restore on startup and InfluxDB for history. I’m pretty happy with that combo. I think rrd4j is more popular than InfluxDB overall, though.

I don’t have high expectations of the persistence.

Storage for up to a year, stability and the option to create graphs.
(For restore on startup I also use MapDB and am very happy with it.)

I did some more tests and now it’s gone.

What I did was:
cleaned the cache & temp (again).
increased the time for copying back the rules after the “system started” signal.
Maybe the persistence was not yet ready?!

Raspbian is 32-bit.