I suddenly have a full hard drive!!
Nothing can write to disk.
Help!
I recently ran an SD sync from openhabian-config, and now my primary drive appears to be full?!?
Seems really weird. How could the two be connected?
It can't simply have copied itself onto itself, because even that wouldn't add up to 64 GB.
Looking at the drive from a Windows app over a network share, I can see about 61.6 GB in a folder called fd inside the dev folder, which is suspiciously close to the amount needed to fill my drive, but I can't click into that folder through the share.
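(Follow-up for anyone hitting the same confusion: /dev/fd on Linux is a virtual directory of open file descriptors, not real data on disk, so size figures that Windows tools report for it are misleading. A minimal sketch of finding the real space hog from a shell on the box itself, assuming you can SSH in:)

```shell
# df reports usage per filesystem; du -x stays on one filesystem,
# so virtual trees like /dev and /proc don't inflate the numbers.
df -h /
sudo du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -n 15
```

The biggest directories printed at the top are where the space actually went.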
Maybe my hard-drive setup is strange.
I have an SSD as the primary drive, mmcblk, which holds the boot partition and the main partition.
Then I was trying to sync that with the SD card at sda1.
The prompts in openhabian-config showed the right device names, so I trusted it was going to do the right thing.
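(If anyone wants to double-check which device is which before kicking off the sync — the names mmcblk and sda1 above are just what my box showed; yours may differ:)

```shell
# List every block device with size, type, and mount point,
# so the SD card and the SSD can be told apart before syncing
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```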
OK, so it was definitely copying itself in a loop.
I don't know whether my disk configuration caused the strange behaviour or whether I pressed something wrong.
Anyway, I'll leave this here in case anyone else does the same thing.
There could still be more. My disk seems suspiciously full.
It copied the repeats to 'storage/syncmount'.
I can't delete them easily. I'm removing most of them manually, but the etc and opt folders have permissions (or something) stopping me from deleting them.
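(For the permissions problem: those copies of etc and opt were written as root, so an ordinary user can't remove them. A sketch, assuming the mirrored copies really live under /storage/syncmount — verify the path with ls before deleting anything:)

```shell
# Look first, then remove the stray mirrored trees as root.
# rm -rf is irreversible: double-check the path before running it.
ls /storage/syncmount
sudo rm -rf /storage/syncmount/etc /storage/syncmount/opt
```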
Analyse the log files located in /var/log and /var/log/openhab.
They should tell you something about the root cause of the problem, or at least give a hint.
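For example (paths per the standard openHABian layout; adjust if yours differs):

```shell
# List the openHAB logs, then pull out recent error entries
ls -lh /var/log/openhab/
grep -iE 'error|severe|exception' /var/log/openhab/openhab.log | tail -n 20
```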
There's only one thing in the logs, but I think it's from when the disk was full.
2022-01-05 15:35:50.994 [SEVERE] [org.apache.karaf.main.Main] - Could not launch framework
java.lang.RuntimeException: Error installing bundle listed in startup.properties with url: mvn:org.ops4j.pax.url/pax-url-aether/2.6.10 and startlevel: 5
at org.apache.karaf.main.Main.installAndStartBundles(Main.java:611)
at org.apache.karaf.main.Main.launch(Main.java:306)
at org.apache.karaf.main.Main.main(Main.java:183)
Caused by: org.osgi.framework.BundleException: An error occurred trying to read the bundle
at org.eclipse.osgi.storage.Storage.stageContent0(Storage.java:1154)
at org.eclipse.osgi.storage.Storage.stageContent(Storage.java:1115)
at org.eclipse.osgi.storage.Storage$4.getContent(Storage.java:781)
at org.eclipse.osgi.storage.Storage.install(Storage.java:713)
at org.eclipse.osgi.internal.framework.BundleContextImpl.installBundle(BundleContextImpl.java:182)
at org.apache.karaf.main.Main.installAndStartBundles(Main.java:604)
... 2 more
Caused by: java.io.IOException: No space left on device
at java.base/java.io.FileOutputStream.writeBytes(Native Method)
at java.base/java.io.FileOutputStream.write(FileOutputStream.java:354)
at org.eclipse.osgi.storage.StorageUtil.readFile(StorageUtil.java:82)
at org.eclipse.osgi.storage.Storage.stageContent0(Storage.java:1147)
... 7 more
Check whether the partition holding /var/log still has enough free space.
If it is still full, nothing new can be written there.
As the time difference is almost exactly one hour (16:34:37 vs. 15:35:50), it could be that the timezone in your environment is wrong and the error message actually belongs to the current time.
So check the openhab.log file's timestamp: if it also shows about 16:34:37, then this is the error and the reason openHAB does not start.
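A quick way to check both things at once (standard openHABian paths assumed; timedatectl is part of systemd):

```shell
# Free space on the partition that holds the logs
df -h /var/log
# File timestamp vs. the system clock and configured timezone
stat -c '%y' /var/log/openhab/openhab.log
date
timedatectl | grep 'Time zone'
```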
Oh come on. There's extensive documentation on what is supported, and anything not described there isn't supported, simple as that.
It should be obvious that features will not work under arbitrary conditions or in every possible configuration.
For the same reason it's also impossible to warn against every idea a user might come up with, whether good or weird, obvious or far-fetched.
There are also extensive warnings about using SSDs, so since you obviously didn't know that and used one anyway, you didn't properly read the docs.