[OH 2.4.0 M6] Testing Results

Official :slight_smile: I’ve just started to feel that the RPi might not be powerful enough for my setup.
I’m running multiple (so many…) things on it, so these problems might be caused by that.

If it’s an RPi 3 (B, B+) it should be OK… true, you are running lots of add-ons (and local MySQL?), but I wouldn’t give up on it just yet. Anyway, it doesn’t hurt to move to a stronger computing platform (e.g. a NUC or a laptop) to run OH2 :slight_smile:

It’s an RPi 3B. I’m running OH, Node-RED, TasmoAdmin and a few Python scripts for communicating with different protocols (security system, etc.). I’m also running Kodi on that RPi, because I mainly used it as a media center. I know this is not the best setup, but I wanted to try out openHAB, see what I can do with it (and what I can’t), and if it fits my needs I’ll upgrade the hardware…

Your RAM may be all used up (this can be a problem on SBCs).
What is the output of:

free
(or cat /proc/meminfo | grep -i swap)

I’ve been monitoring the memory usage for days, and it is always almost full, but not entirely.

          total        used        free      shared  buff/cache   available
Mem:         864188      660408       56528        5060      147252      144844
Swap:        102396       72704       29692

However, that was my first thought too, because sometimes I saw errors in openHAB saying it couldn’t allocate memory for something. My other guess is disk I/O. I had a previous problem where rules stopped working, and that was because of I/O: it was just too slow to write everything to the SD card. I have since solved this by using MySQL on my NAS instead of locally, and I also moved the logs to the NAS…
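For reference, moving the logs off the SD card can be done by mounting an NFS share from the NAS and repointing the log directory. A minimal sketch, where the NAS address 192.168.1.10 and the export path /volume1/openhab are hypothetical placeholders (Debian/openHABian installs read OPENHAB_LOGDIR from /etc/default/openhab2):

```shell
# Mount an NFS export from the NAS (address and export path are hypothetical)
sudo mkdir -p /mnt/nas-openhab
echo '192.168.1.10:/volume1/openhab /mnt/nas-openhab nfs defaults,noatime 0 0' | sudo tee -a /etc/fstab
sudo mount /mnt/nas-openhab

# Point the openHAB log directory at the share
sudo systemctl stop openhab2
sudo mkdir -p /mnt/nas-openhab/logs
sudo chown openhab:openhab /mnt/nas-openhab/logs
sudo sed -i 's|^OPENHAB_LOGDIR=.*|OPENHAB_LOGDIR=/mnt/nas-openhab/logs|' /etc/default/openhab2
sudo systemctl start openhab2
```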

Disk I/O (usually) isn’t a bottleneck in terms of performance; nevertheless, you should still move all write-intensive data storage over to your NAS to avoid getting hit by SD corruption.

The largest benefit will be in swapping [well, paging to use the correct term] to NAS, too. See this post.
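As a rough sketch of how paging to the NAS can be set up: Linux cannot swap directly onto an NFS-backed file, so the usual approach routes it through a loop device. The mount point /mnt/nas is a hypothetical, already-mounted NFS share:

```shell
# Create a 1 GiB file on the NAS mount and format it as swap
sudo dd if=/dev/zero of=/mnt/nas/swapfile bs=1M count=1024
# Swap over NFS needs a loop device; -f picks the first free one
loopdev=$(sudo losetup -f --show /mnt/nas/swapfile)
sudo mkswap "$loopdev"
sudo swapon "$loopdev"
# Optionally make the kernel less eager to page out
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/90-swappiness.conf
sudo sysctl -p /etc/sysctl.d/90-swappiness.conf
```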

Either way, please stay on topic. Open a new thread if you want to discuss server optimizations.

Sorry for spamming this thread, but it seemed that this problem was related to M6.
Anyway, it seems to have solved itself.

Hey guys, I’ve just updated to M6 from 2.3. I’ve cleared the cache, but openHAB won’t start, and in the log I have the following:

2018-12-02 08:15:13.905 [SEVERE] [org.apache.karaf.main.Main] - Could not launch framework
java.lang.RuntimeException: Error initializing storage.
	at org.eclipse.osgi.internal.framework.EquinoxContainer.<init>(EquinoxContainer.java:70)
	at org.eclipse.osgi.launch.Equinox.<init>(Equinox.java:31)
	at org.eclipse.osgi.launch.EquinoxFactory.newFramework(EquinoxFactory.java:24)
	at org.apache.karaf.main.Main.launch(Main.java:256)
	at org.apache.karaf.main.Main.main(Main.java:178)
Caused by: java.io.FileNotFoundException: /var/lib/openhab2/cache/org.eclipse.osgi/.manager/.fileTableLock (Permission denied)
	at java.io.RandomAccessFile.open0(Native Method)
	at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
	at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
	at org.eclipse.osgi.internal.location.Locker_JavaNio.lock(Locker_JavaNio.java:36)
	at org.eclipse.osgi.storagemanager.StorageManager.lock(StorageManager.java:388)
	at org.eclipse.osgi.storagemanager.StorageManager.open(StorageManager.java:701)
	at org.eclipse.osgi.storage.Storage.getChildStorageManager(Storage.java:1776)
	at org.eclipse.osgi.storage.Storage.getInfoInputStream(Storage.java:1793)
	at org.eclipse.osgi.storage.Storage.<init>(Storage.java:132)
	at org.eclipse.osgi.storage.Storage.createStorage(Storage.java:85)
	at org.eclipse.osgi.internal.framework.EquinoxContainer.<init>(EquinoxContainer.java:68)
	... 4 more
Any ideas how to fix the permission issue?

How did you upgrade?

Post the output of:

openhab-cli info

Here is the output.
I used openhabian-config.

Version: 2.4.0.M6 (Build)

User: openhab (Active Process 22767)
User Groups: openhab tty dialout audio bluetooth gpio

Directories: Folder Name | Path | User:Group
----------- | ---- | ----------
OPENHAB_HOME | /usr/share/openhab2 | openhab:openhab
OPENHAB_RUNTIME | /usr/share/openhab2/runtime | openhab:openhab
OPENHAB_USERDATA | /var/lib/openhab2 | openhab:openhabian
OPENHAB_CONF | /etc/openhab2 | openhab:openhabian
OPENHAB_LOGDIR | /var/log/openhab2 | openhab:openhabian

URLs: http://192.168.178.42:8080
https://192.168.178.42:8443

I didn’t know that the openhabian-config tool allows you to use milestone builds.
I thought that you could choose only between stable (2.3.0) and snapshot (2.4.0.S).

anyway, try:

sudo su
systemctl stop openhab2
# rotate the old log so the fresh start is easy to read
mv /var/log/openhab2/openhab.log /var/log/openhab2/openhab.log.old
# fix ownership of the userdata dir (the cause of the Permission denied)
chown -R openhab:openhab /var/lib/openhab2/
systemctl start openhab2
tail -f /var/log/openhab2/openhab.log
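To confirm the chown actually took effect, one quick check (just a sketch) is to list anything under the userdata dir that is still not owned by the openhab user; the first command should print nothing:

```shell
# Any output here means some files still have wrong ownership
sudo find /var/lib/openhab2 ! -user openhab -ls
# And check the exact lock file from the stack trace
sudo ls -l /var/lib/openhab2/cache/org.eclipse.osgi/.manager/.fileTableLock
```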

then,

apt-get update && apt-get upgrade

to get the new one: [OH 2.4.0 M7] Testing Results :slight_smile:

Time to upgrade, people! Let’s make OH 2.4 the most stable release ever! :wink:

Upgrading to M7 fixed it.

We introduced that a couple of weeks ago.

I can’t find the milestone build in openhabian-config??
I just tried testing and snapshot, and both gave me the #1447 release…

Testing should be giving you the milestone builds. Raise a GitHub issue and precisely describe the config you start from and the steps you tried to get the milestone build.

It seems that he got it to work from the openhabian-config:

Yes, that’s what’s in openHABian for testing.
I use http://openhab.jfrog.io/openhab/openhab-linuxpkg testing main (which is essentially the same base location as the unstable repo, except for the testing tag of course) and get my milestone builds from there.
I recall discussing this with Benjy and that there was some recent change, but I can’t find the reference any more.
@Benjy can you please state which is the current testing repo to be used for milestone builds?

I think the “official” testing repo is deb https://dl.bintray.com/openhab/apt-repo2 testing main

See here: https://www.openhab.org/download/
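For reference, switching APT to that testing repo would look roughly like this (the sources-file name is a common convention, not necessarily what openHABian itself writes):

```shell
# Point APT at the openHAB testing repo (repo line as quoted above)
echo 'deb https://dl.bintray.com/openhab/apt-repo2 testing main' | \
  sudo tee /etc/apt/sources.list.d/openhab2.list
sudo apt-get update
sudo apt-get install --only-upgrade openhab2
```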

Not sure if that docs statement is up to date. And as you pointed out yourself, it’s what he used, and he said it didn’t give him the milestone but the latest snapshot.