I seem to have a port conflict on my Synology with the latest OH2. I suspect it’s an issue with the Karaf debug port, which I believe defaults to 5005 (a port my Synology is already using!), although the error isn’t very explicit in this respect:
Launching the openHAB runtime...
./runtime/karaf/bin/karaf: ./runtime/karaf/bin/setenv: line 84: arch: not found
ERROR: transport error 202: bind failed: Address already in use
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [../../../src/share/back/debugInit.c:750]
FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
Aborted (core dumped)
I’ve tried changing the debug port, but so far without success (at least I can’t get past this error). Does anyone know how this is done?
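Before picking a replacement port, it can help to confirm what is actually bound to 5005 and whether a candidate port is free. A minimal check, assuming the BusyBox netstat on the Synology supports the usual -tln flags (showing the owning process with -p may need a fuller netstat):

```shell
# List listening TCP sockets and look for the default JDWP port (5005).
netstat -tln | grep ':5005' || echo "nothing is listening on 5005"

# Same check for a candidate replacement port before using it.
netstat -tln | grep ':5010' || echo "5010 looks free"
```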
I should add that I tried hacking around with the scripts and added the -Xrunjdwp part directly to the startup command. This also didn’t work, although it had a different effect (from memory, it complained that the option was set multiple times).
Thanks. The reason I was running in debug mode (other than wanting to debug) is that if I run the normal script I get the following error:
./runtime/karaf/bin/karaf: ./runtime/karaf/bin/setenv: line 84: arch: not found
Maybe this is unrelated, but I do want to run in debug mode, so I’d like to solve this; otherwise I guess I’ll need to look at setting up another environment.
Interesting, I would expect the same error to happen in debug mode too though…
So the arch command doesn’t work on your Synology. What OS is it running, and which JDK?
Does the JDK support the G1 garbage collector? You can check with java -XX:+UseG1GC -version
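Expanding on that check: HotSpot exits with an error for an unrecognised -XX option, so the flag can be probed from a script. A small sketch (the echo messages are mine, not JVM output):

```shell
# Probe the flag: the JVM exits non-zero if it doesn't accept +UseG1GC.
# Note that `java -version` prints to stderr, hence the 2>&1.
if java -XX:+UseG1GC -version >/dev/null 2>&1; then
    echo "G1 is supported by this JVM"
else
    echo "G1 is NOT supported (or java is not on the PATH)"
fi
```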
In setenv, you can comment out lines 84-88, which set the JVM options; that should get rid of the arch error, and hopefully the normal start will then work.
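If editing over SSH is fiddly, the commenting-out can be scripted with sed. The sketch below runs against a scratch stand-in file so it is safe to try anywhere; on the Synology the real target would be runtime/karaf/bin/setenv, and the 84-88 range comes from the error messages above (verify it against your copy first):

```shell
# Stand-in file so this demo touches nothing real; on the box, point
# SETENV at runtime/karaf/bin/setenv instead.
SETENV=/tmp/setenv.demo
seq 1 90 > "$SETENV"

cp "$SETENV" "$SETENV.bak"        # always keep a backup first
sed -i '84,88s/^/# /' "$SETENV"   # comment out lines 84-88

grep -c '^# ' "$SETENV"           # prints 5 (one per commented line)
```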
For the debug mode, try the following:
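The original paste isn’t preserved in this thread excerpt, but the standard Karaf launch scripts read their JDWP settings from the JAVA_DEBUG_OPTS environment variable, so an override along these lines is the usual approach (port 5010 is just an arbitrary choice of a hopefully-free port, and the start script name may vary between OH2 builds):

```shell
# Move the JDWP listener off the default 5005 before launching Karaf.
# The options mirror Karaf's built-in default, with only the address changed.
export KARAF_DEBUG=true
export JAVA_DEBUG_OPTS='-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5010'

# Then start openHAB as usual, e.g.:
# ./start.sh debug
```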
Didn’t you just say it would only happen in debug though?
It’s Synology’s system, which is BusyBox 1.16.1.
Java is:
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) Server VM (build 24.80-b11, mixed mode)
The export config that you pasted looks similar to what I’ve tried before, although I used another port since 5006 is also in use… Anyway, it doesn’t help, sorry.
It does - it’s in the first error message I posted as well…
Ah - is that another change then? I thought 1.7 was the dependency, not 1.8. For sure, the previous version, which I loaded a few weeks back, worked fine under 1.7.
Commenting out the lines you suggested has worked to some degree: the runtime has started and I have the console, but there’s no web interface (I’m getting an error there which I’ll need to track down). However, ‘exit’ doesn’t work, so I’m stuck in the console. Something isn’t right, but maybe that’s associated with running 1.7.
Having killed off the console, I confirmed that the export you posted does in fact work (by which I mean I no longer get the error; debugging still doesn’t actually work yet). When I ran it earlier I had still used port 5006, which is also taken on the Synology.
I guess I now need to work out how to upgrade the system to 1.8 and see whether that fixes things…
@kai - can you confirm that Java 8 is indeed now required? I thought previous discussions concluded that 7 was going to be the baseline, and I can’t find 8 stated in the docs anywhere (yet). This might be a problem for Synology, as the docs say only Java 7 is currently supported.
2016-01-16 21:10:59.442 [WARN ] [url.mvn.internal.AetherBasedResolver] - Error resolving artifact org.openhab.core:org.openhab.io.rest.docs:jar:2.0.0-SNAPSHOT: Could not transfer artifact org.openhab.core:org.openhab.io.rest.docs:jar:2.0.0-SNAPSHOT from/to oh-snapshot-repo (http://oss.jfrog.org/libs-snapshot/): Failed to transfer file: http://oss.jfrog.org/libs-snapshot/org/openhab/core/org.openhab.io.rest.docs/2.0.0-SNAPSHOT/org.openhab.io.rest.docs-2.0.0-SNAPSHOT.jar. Return code is: 503 , ReasonPhrase:Service Unavailable: Back-end server is at capacity.
It looks like whatever is serving up the online system is broken (or maxed out)!