Not sure if this is the correct forum and this is my first post so please correct me if wrong.
I’ve tried all day now to get OH running on Alpine. I’ve tried Alpine 3.8/3.9 with OH 2.4, 2.5 M1, and a 2.5 snapshot. I started with Alpine as a VirtualBox guest but moved on to an Intel NUC to verify the issue, and I tried both inside Docker and without. Right now I have replicated it on an Intel NUC with Alpine 3.9, OpenJDK 8, and the openHAB 2.4 download zip (no Docker).
The error is the same and replicated when trying to configure the ZWave device: Adding the ZWave Binding, Inbox->Choose binding->ZWave Binding->ZWave Serial Controller
To make sure nothing is wrong with the stick, permissions, or anything else, I have confirmed access to my Z-Wave stick by building and running OpenZWave-Control-Panel, and it works fine. I have set permissions to 777 on /dev/ttyACM0 to be sure there are no access problems.
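For anyone reproducing this, the permission check looked something like the following. The device path /dev/ttyACM0 is from my setup, and the openhab username and dialout group are assumptions; adjust to your install:

```shell
# Check that the Z-Wave stick is present and see who may access it
ls -l /dev/ttyACM0

# Blunt debugging step: open up permissions entirely (not a long-term fix)
chmod 777 /dev/ttyACM0

# Cleaner alternative: add the service user to the device's group
# (BusyBox/Alpine syntax; the group may be dialout, uucp, etc. -- check ls -l)
addgroup openhab dialout
```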
At the ZWave configuration screen, the OH CLI console immediately exits with:
A fatal error has been detected by the Java Runtime Environment:
SIGSEGV (0xb) at pc=0x00000000000036f6, pid=12902, tid=0x00007fb199b8cb10
JRE version: OpenJDK Runtime Environment (8.0_191-b12) (build 1.8.0_191-b12)
Java VM: OpenJDK 64-Bit Server VM (25.191-b12 mixed mode linux-amd64 compressed oops)
Derivative: IcedTea 3.10.0
Distribution: Custom build (Tue Jan 8 12:55:26 UTC 2019)
An error report file with more information is saved as:
I’ve also attached the full hs_err_pid log file that shows the last frame being
After adding the ZWave Binding I see a /userdata/tmp/libNRJavaSerial_root_0 folder appear with a
libNRJavaSerial.so shared library in it. This shared library seems to depend on glibc which is not provided by Alpine since it’s musl based.
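If anyone wants to confirm the glibc dependency on their own host, inspecting the extracted library should show it. The path below is relative to my openHAB install root (yours may differ), and note that musl's ldd reports the missing glibc loader rather than resolving it:

```shell
# Path relative to the openHAB install root (from my setup)
cd userdata/tmp/libNRJavaSerial_root_0

# On Alpine, musl's ldd complains about the glibc loader it can't find
ldd libNRJavaSerial.so

# Show which shared libraries the .so was linked against
# (a NEEDED entry for libc.so.6 indicates glibc, not musl)
readelf -d libNRJavaSerial.so | grep NEEDED
```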
I filed a bug with the ZWave binding, but Chris Jackson pointed out that the ZWave module just uses the serial connection provided by the core, so the problem lies with the core openHAB installation.
I did try all of this on Ubuntu 18 plus the normal openHAB container, and everything worked fine.
In short, I think OpenHAB on Alpine needs to pull in a serial library that is compiled against musl instead of glibc.
When you tried it in Docker, was this a custom-built image or did you download the Alpine version of the official Docker image from Docker Hub? There is an Alpine 3.8 image hosted there that I believe works.
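For reference, running the Alpine variant from Docker Hub with the stick passed through looks roughly like this. The tag name and host paths are assumptions; check Docker Hub for the current Alpine tags:

```shell
# Run the Alpine variant of the official image (tag name assumed)
docker run -d --name openhab \
  --device=/dev/ttyACM0 \
  -v /opt/openhab/conf:/openhab/conf \
  -v /opt/openhab/userdata:/openhab/userdata \
  -v /opt/openhab/addons:/openhab/addons \
  openhab/openhab:2.4.0-amd64-alpine
```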
@Benjy, as the knower of all things apt and OH, where would this issue need to be filed?
This is a tricky one. I believe this issue in openhab-core is related, so it may be worth adding this problem to it.
@MFornander, are you able to compile a .jar from https://github.com/NeuronRobotics/nrjavaserial using musl libc? If you are, does openHAB function correctly with this .jar file placed in the addons folder?
It was an official Alpine 3.8 image from Docker Hub (and I tried the 3.9 one too). However, I thought the Docker image itself had issues, so after a few hours I replicated the problem with an Alpine install on an Intel NUC without Docker.
I’d love to compile the JAR since I have the Alpine setup ready to go. What’s the quickest way to build the jar, if you don’t mind explaining? Should we use the NeuronRobotics repo or the openHAB fork?
You might look at the Alpine Dockerfile in the openhab-docker repo. I’m pretty sure OH works just fine there, but they might do something special to make it work, which should be apparent in the Dockerfile.
No problem, the repo above compiles its own .jar file. So as long as you have the build packages, make linux64 should produce the .jar file in build/libs. Unfortunately, I tested this on a standard Alpine VM myself, and adding the new .jar file to the addons directory resulted in the same crash. Perhaps @wborn could tell me whether I am doing this wrong?
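To spell out the steps I used (the repo is the one linked above; the Alpine package names are my best guesses for the build dependencies and may vary by release):

```shell
# Build prerequisites on Alpine (package names may differ per release)
apk add git make gcc g++ linux-headers openjdk8

# Clone the upstream repo and build the 64-bit Linux target
git clone https://github.com/NeuronRobotics/nrjavaserial.git
cd nrjavaserial
make linux64

# The jar ends up in build/libs; copy it into the openHAB addons folder
cp build/libs/*.jar /opt/openhab/addons/
```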
From what I can see, the OH Docker image downloads the regular Zulu JDK, which suggests to me that the image is using glibc and not musl, as Azul provides a different file for musl. Scratch that, it also looks like that line is not used, so I can’t see a difference.
I’ve tried the image from openHAB’s Alpine Dockerfile and it doesn’t crash when I follow the same process as on a standard Alpine install.
Instead of recompiling the library you can also add glibc support to Alpine (sgerrand/alpine-pkg-glibc). That way you’ll probably also have less issues with other bindings depending on libraries compiled with glibc.
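For anyone going that route, the sgerrand/alpine-pkg-glibc README describes an install along these lines. The release version (2.29-r0 here) and URLs are assumptions; check the repo’s releases page for the current ones:

```shell
# Trust the signing key for the glibc apk packages
wget -q -O /etc/apk/keys/sgerrand.rsa.pub \
  https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub

# Fetch and install a glibc release (version assumed; use the latest)
wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.29-r0/glibc-2.29-r0.apk
apk add glibc-2.29-r0.apk
```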
I think the Alpine Docker containers also have this issue, since they use the same nrjavaserial JAR, which doesn’t contain any libraries compiled against musl.
Adding glibc is a great idea/fix, thank you.
I’ll look into this more next week, since in the end I’d like a clean and minimal Alpine installation with just the standard musl libs as the openHAB host. That’s usually the whole point of Alpine: a hardened, minimal server without anything extra. I’d love to see what that openHAB Alpine image does; maybe it goes the glibc route. I’ll keep you posted next week.
Keep in mind that some bindings need access to things that are not in the container. For instance, lots of users have external Python scripts they want to call from OH using the Exec binding, but there is no Python in the Docker containers (neither Alpine nor Debian). Until recently arping didn’t exist either, breaking part of the Network binding’s functionality, and native ping still doesn’t exist, making that option of the Network binding unusable when running in Docker.
When one is working with a tool whose entire purpose for existing is to bridge between different technologies, what constitutes the “minimal server without anything else” will vary from one user to the next. It’s something to keep in mind as you progress. Personally, I outsource the stuff that doesn’t exist in the default container to another Python service I run rather than modify the official image. But the official image will contain stuff that may not be relevant to you but is relevant to other users. A balance needs to be found in the official images.
This is a really interesting and helpful reply, thanks. Based on this I’m thinking of running the OH server directly on the NUC and keeping the other services, such as InfluxDB, graphing, and IP-camera management, in separate Docker containers.