No space left on device | MapDB error

My openHAB 2.5.9 keeps crashing with the same error ever since I activated the MapDB Persistence Service, and I can't find the reason why. It takes approx. 3-5 days until it breaks down.

The error is always the same and is the last entry in openhab.log:

2020-11-16 02:19:08.199 [ERROR] [org.quartz.core.JobRunShell         ] - Job MapDB_SchedulerGroup.Commit_Transaction threw an unhandled Exception: 
java.io.IOError: java.io.IOException: No space left on device
	at org.mapdb.Volume$FileChannelVol.putByte(Volume.java:920) ~[?:?]
	at org.mapdb.StoreWAL.walIndexVal(StoreWAL.java:277) ~[?:?]
	at org.mapdb.StoreWAL.commit(StoreWAL.java:579) ~[?:?]
	at org.mapdb.EngineWrapper.commit(EngineWrapper.java:94) ~[?:?]
	at org.mapdb.EngineWrapper.commit(EngineWrapper.java:94) ~[?:?]
	at org.mapdb.DB.commit(DB.java:1643) ~[?:?]
	at org.openhab.persistence.mapdb.internal.MapDBPersistenceService$CommitJob.execute(MapDBPersistenceService.java:243) ~[?:?]
	at org.quartz.core.JobRunShell.run(JobRunShell.java:202) [bundleFile:?]
	at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) [bundleFile:?]
Caused by: java.io.IOException: No space left on device
	at sun.nio.ch.FileDispatcherImpl.pwrite0(Native Method) ~[?:1.8.0_241]
	at sun.nio.ch.FileDispatcherImpl.pwrite(FileDispatcherImpl.java:66) ~[?:1.8.0_241]
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:89) ~[?:1.8.0_241]
	at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[?:1.8.0_241]
	at sun.nio.ch.FileChannelImpl.writeInternal(FileChannelImpl.java:772) ~[?:1.8.0_241]
	at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:758) ~[?:1.8.0_241]
	at org.mapdb.Volume$FileChannelVol.writeFully(Volume.java:865) ~[?:?]
	at org.mapdb.Volume$FileChannelVol.putByte(Volume.java:918) ~[?:?]
	... 8 more
2020-11-16 02:19:08.494 [ERROR] [org.quartz.core.ErrorLogger         ] - Job (MapDB_SchedulerGroup.Commit_Transaction threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception.
	at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [bundleFile:?]
	at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) [bundleFile:?]
Caused by: java.io.IOError: java.io.IOException: No space left on device
	at org.mapdb.Volume$FileChannelVol.putByte(Volume.java:920) ~[?:?]
	at org.mapdb.StoreWAL.walIndexVal(StoreWAL.java:277) ~[?:?]
	at org.mapdb.StoreWAL.commit(StoreWAL.java:579) ~[?:?]
	at org.mapdb.EngineWrapper.commit(EngineWrapper.java:94) ~[?:?]
	at org.mapdb.EngineWrapper.commit(EngineWrapper.java:94) ~[?:?]
	at org.mapdb.DB.commit(DB.java:1643) ~[?:?]
	at org.openhab.persistence.mapdb.internal.MapDBPersistenceService$CommitJob.execute(MapDBPersistenceService.java:243) ~[?:?]
	at org.quartz.core.JobRunShell.run(JobRunShell.java:202) ~[?:?]
	... 1 more
Caused by: java.io.IOException: No space left on device
	at sun.nio.ch.FileDispatcherImpl.pwrite0(Native Method) ~[?:1.8.0_241]
	at sun.nio.ch.FileDispatcherImpl.pwrite(FileD

I checked the following folder: openHAB/tmpfs/userdata/persistence/mapdb

The files are not that big.
I'm running openHAB on my Synology DS716+.

my mapdb.persist looks like:

// persistence strategies have a name and a definition and are referred to in the "Items" section
Strategies {
  // if no strategy is specified for an item entry below, the default list will be used
  everyMinute   : "0 * * * * ?"
  every5Minutes : "0 */5 * * * ?"
  everyHour     : "0 0 * * * ?"
  everyDay      : "0 0 0 * * ?"
  default = everyChange
}

/* 
 * Each line in this section defines for which item(s) which strategy(ies) should be applied.
 * You can list single items, use "*" for all items or "groupitem*" for all members of a group
 * item (excl. the group item itself).
 */

Items {
    // persist all items on every change and every minute and restore them from the db at startup
    //* : strategy = everyChange, everyMinute, restoreOnStartup
	MilaSleeping, AllSleeping : strategy = everyChange, restoreOnStartup
	vDaylight, vTimeOfDay : strategy = everyChange, restoreOnStartup
	EG_LivingDining_Scene, EG_LivingDining_Dimmer, EG_LivingDining_Mood : strategy = everyChange, restoreOnStartup
	EG_LivingDining_Dimmer_ON, EG_LivingDining_Dimmer_NIGHT, EG_LivingDining_Dimmer_CLEAN, EG_LivingDining_Dimmer_RELAX, EG_LivingDining_Dimmer_WORK, EG_LivingDining_Dimmer_PARTY, EG_LivingDining_Dimmer_TV, EG_LivingDining_Dimmer_EVENT : strategy = everyChange, restoreOnStartup
	EG_LivingDining_Mood_ON, EG_LivingDining_Mood_NIGHT, EG_LivingDining_Mood_CLEAN, EG_LivingDining_Mood_RELAX, EG_LivingDining_Mood_WORK, EG_LivingDining_Mood_PARTY, EG_LivingDining_Mood_TV, EG_LivingDining_Mood_EVENT : strategy = everyChange, restoreOnStartup	
	PersonOneSensorOne, PersonOneSensorTwo, PersonTwoSensorOne, gPersonOnePresent_LastChangeOffline, gPersonOnePresent_LastChangeOnline, gPersonTwoPresent_LastChangeOffline, gPersonTwoPresent_LastChangeOnline, PersonOneSensorOne_LastSeen, PersonTwoSensorOne_LastSeen, SomeonePresent, gPersonOnePresent_Delayed, gPersonTwoPresent_Delayed : strategy = everyChange, restoreOnStartup
}

A screenshot from my Paper UI → Add-ons → Persistence page is attached.

It doesn't matter how big the files are. What matters is how big openHAB/tmpfs is and how much other stuff is stored on it. Since it's a tmpfs, it's a RAM disk (it lives in RAM, not on a physical disk), so it's probably quite small itself. You likely need to increase the size of the tmpfs.
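As a quick sketch of how to check this (the mount point below is a stand-in; substitute the actual path of your openHAB tmpfs on the Synology):

```shell
# Check the size and current usage of the tmpfs mount.
# /tmp is a stand-in path for demonstration; replace it with the
# real mount point of your openHAB tmpfs.
MOUNTPOINT=/tmp
df -h "$MOUNTPOINT"

# List all tmpfs mounts together with their configured sizes
mount -t tmpfs
```

If the "Size" column is only a few tens of megabytes, the WAL files MapDB writes on every commit can fill it quickly even though the database files themselves look small.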

This can have several causes:
a) the disk is full, which can be checked with: df
b) no inodes are left, which can be checked with: df -i
c) a quota is exhausted: check with the quota command
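The three checks above, run concretely (the `quota` guard is a convenience assumption, since quotas are often not enabled on a NAS):

```shell
# a) Out of blocks? Look at the Use% and Avail columns.
df -h

# b) Out of inodes? An exhausted inode table also produces
#    "No space left on device" even when free blocks remain.
df -i

# c) Quota exhausted? Only meaningful if quotas are enabled
#    on the filesystem; otherwise the fallback message prints.
quota -s 2>/dev/null || echo "quota not enabled or not installed"
```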

Thanks for the help, I think that's it. In the openHAB-2.5.10-syno-noarch-0.001.spk package from the Synology installation, I changed the following line in the file openHAB-tmpfs:

From:

mount -t tmpfs -o size=20M none $TMPFS

To:

mount -t tmpfs -o size=100M none $TMPFS
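After restarting the package, it may be worth verifying that the larger tmpfs is actually active and seeing how much of it MapDB consumes over time. A sketch (the path is a stand-in for the real `$TMPFS` mount point used by the Synology start script):

```shell
# Verify the tmpfs was remounted with the new size and check how
# much of it is in use. /tmp is a stand-in; substitute the real
# openHAB tmpfs mount point.
TMPFS=/tmp
df -h "$TMPFS"    # the Size column should now show 100M
du -sh "$TMPFS"   # total space consumed under the mount point
```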

I will report back if this turns out not to be the solution.