EXTRA_JAVA_OPTS -Xmx settings, i.e. Grafana crashes openHAB 2.4

Not sure if this is actually the right place to start this topic, but I couldn't find a more suitable one.

Since I'm having fatal problems with Grafana rendering crashing openHAB after upgrading openHAB from 2.3 to 2.4, Markus @mstormi suggested looking at the EXTRA_JAVA_OPTS setting in the /etc/default/openhab2 file.

I have no idea what it should be. But there are two lines, and they are both identical, like this:

EXTRA_JAVA_OPTS="-Xms400m -Xmx512m"
EXTRA_JAVA_OPTS="-Xms400m -Xmx512m"

I have no clue about Java, and I don't know why there are two identical lines. Is this a mistake, and could the two identical lines be the cause of my problems? I certainly have not modified this file.
Markus suggested increasing by 100m… I'd have to ask: which one, or both?

This is the full openhab2 file:

# openHAB 2 service options

#########################
## PORTS
## The ports openHAB will bind its HTTP/HTTPS web server to.

#OPENHAB_HTTP_PORT=8080
#OPENHAB_HTTPS_PORT=8443

#########################
## HTTP(S) LISTEN ADDRESS
##  The listen address used by the HTTP(S) server.
##  0.0.0.0 (default) allows a connection from any location
##  127.0.0.1 only allows the local machine to connect

#OPENHAB_HTTP_ADDRESS=0.0.0.0

#########################
## BACKUP DIRECTORY
## Set the following variable to specify the backup location.
## runtime/bin/backup and runtime/bin/restore will use this path for the zip files.

#OPENHAB_BACKUPS=/var/lib/openhab2/backups

#########################
## JAVA OPTIONS
## Additional options for the JAVA_OPTS environment variable.
## These will be appended to the execution of the openHAB Java runtime in front of all other options.
## 
## A couple of independent examples:
##   EXTRA_JAVA_OPTS="-Dgnu.io.rxtx.SerialPorts=/dev/ttyUSB0:/dev/ttyS0:/dev/ttyS2:/dev/ttyACM0:/dev/ttyAMA0"

EXTRA_JAVA_OPTS="-Xms400m -Xmx512m"
EXTRA_JAVA_OPTS="-Xms400m -Xmx512m"

#########################
## OPENHAB DEFAULTS PATHS
## The following settings override the default apt/rpm locations and should be used with caution.
## openHAB will fail to update itself if you're using different paths. 
## Only set these if you are testing and are confident in debugging.
 
#OPENHAB_HOME=/usr/share/openhab2
#OPENHAB_CONF=/etc/openhab2
#OPENHAB_RUNTIME=/usr/share/openhab2/runtime
#OPENHAB_USERDATA=/var/lib/openhab2
#OPENHAB_LOGDIR=/var/log/openhab2

#########################
## OPENHAB USER AND GROUP
## The user and group that takes ownership of openHAB. Only available for init.d systems.
## To edit user and group for systemd, see the service file at /usr/lib/systemd/system/openhab2.service.

#OPENHAB_USER=openhab
#OPENHAB_GROUP=openhab

#########################
## SYSTEMD START MODE
## The Karaf startmode for the openHAB runtime. Only available for systemctl/systemd systems.
## Defaults to daemon when unset here. Multiple options can be used without quotes. 
## debug increases log output. daemon launches the Karaf/openHAB processes. 

#OPENHAB_STARTMODE=debug

It doesn’t do harm but is not intentional, probably some intermittent problem with the openHABian scripts.
Just delete one line.
But as you already have double the default, a lack of heap space (the second parameter) shouldn’t be your problem.
I recently changed the openHABian defaults to 250/350. Try 250 for the Xms parameter, or even omit it completely.
That’ll pre-allocate less memory to Java/OH and thus leave more to other apps such as Grafana.
Search the forum for Xmx or “garbage collection” if you want to know the background.
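To make the suggested change concrete, here is a minimal sketch of what the edited section of /etc/default/openhab2 would look like (the 250m initial heap is Markus’ suggestion; 512m is the cap already in the file):

```shell
# /etc/default/openhab2 -- keep only ONE EXTRA_JAVA_OPTS line.
# -Xms = initial heap pre-allocated at startup, -Xmx = maximum heap.
EXTRA_JAVA_OPTS="-Xms250m -Xmx512m"

# Apply the change by restarting the service:
# sudo systemctl restart openhab2
```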

I'll give it a try, Markus… Thanks…

I'll also add the Grafana problems to this thread, as I see Rich has made some comments on them in the other thread about the new docs… I think it'll be more suitable to discuss memory settings and Grafana crashing openHAB in here, in the hope of perhaps finding the reason for my problems.

@rlkoshak
I'll answer your questions/points regarding Grafana from the New Docs Discussion thread in here, as it's more suitable.

Let me start from the beginning:
I had openHAB 2.3 running with Grafana 5.1.4 on an RPi 3B. This setup had been running without problems for several months (since June 2018, actually). This version of Grafana does not use PhantomJS.
I could render at least 4 Grafana charts from a sitemap just fine. It took a few seconds to show, but it worked, and openHAB 2.3 never crashed on this setup.
You're correct in assuming I installed Grafana manually, without using openhabian-config. (Important note, read later why.)

I then updated openHAB 2.3 to the 2.4 release, still using the same Grafana 5.1.4. And then I discovered this problem with Grafana: I could no longer render more than 1 (sometimes 2) charts, otherwise openHAB crashed. Seriously crashed, with nothing in the log.

I then updated Grafana to the latest release, 5.4.2. This version requires PhantomJS, which I of course had to download and install as well.
The result is exactly the same: openHAB crashes when trying to render more than 1 (sometimes 2) charts…

After that, I started asking around about Grafana problems. Markus @mstormi was kind enough to answer. While his answer may well be correct (the RPi is not good for Grafana charting), I kind of refused to accept it, and I started a new test, this time on my second RPi, an RPi 3B+ (more CPU power but the same RAM).
This RPi 3B+ got a clean, hassle-free openHAB 2.4 install on an SSD connected to the USB port (just as my first RPi has). It has only 3 bindings running: the latest IHC binding, the System binding, and the WMBus binding (which doesn't work, but that's another issue).

I created a few items from the IHC binding and installed InfluxDB and Grafana from (IMPORTANT, answer to the important note above) openhabian-config. I set up the persistence file, which is basically a copy from my main system, created 6 charts in Grafana, made a sitemap, and started adding charts to it.

I managed to add 5 charts and it worked fine. Adding the 6th chart would crash openHAB 2.4, exactly like it does on my main RPi. After trying some more, I discovered openHAB would crash with 5 charts as well. But I have not yet managed to get openHAB to crash with 4 charts on this clean-install test system.

Conclusion:
This tells me something is seriously wrong with openHAB 2.4 or Grafana (no matter which version), or a combination of both. It has nothing to do with my main system being pushed to its limits, since this new test system, which is very far from being pushed to its limits, crashes as well.

And notice, the test system was a clean installation; InfluxDB and Grafana were set up using openhabian-config.

I sure would like to find the reason for this. I have to admit, I refuse to accept that the RPi is too short on resources, unless something in openHAB 2.4 has changed to use a lot of resources that openHAB 2.3 didn't.
But this problem is way beyond my knowledge of openHAB, Grafana, Java, or Linux… I have no idea what to do, where to look, or even what to search for. Markus has been kind enough to suggest a few things. That's basically where I am at this minute.

Hopefully you now better understand my concern with the RPi recommendation and Grafana from openhabian-config.

I just tried this… openHAB crashed again.
Omitting it made openHAB crash as well.

This part is concerning. Are there any errors in the syslog? It shouldn’t be the case that one program can cause another to die like that.

I think it is pretty safe to say that there is definitely something wrong with Grafana. In my experience I was seeing Grafana consume all the CPU and take up huge amounts of RAM on a VM, let alone an RPi, which is why I dropped the PhantomJS library. After I got rid of that library everything seemed to return to normal.

It is very true though that this should not cause OH to crash. But the only thing I can think of that would cause that is if the kernel is killing it for some reason. If it is doing so there should be something in the syslog.

There theoretically could be something wrong with OH, or probably more correctly Java itself. But the way Java usually works is that it grabs the resources it needs and keeps them forever. When it runs out of those resources it starts to tidy things up in its memory a bit to free some space among the resources it has already acquired. So the fact that Grafana is growing like the Blob shouldn’t have any impact on OH, especially when using the Java options described above which, IIRC, cause Java to just go out and acquire all the resources it will ever have when it starts, rather than gradually growing and acquiring them over time.

So I’m still skeptical that this is necessarily an OH problem. Or maybe it is better to say, I don’t know if this is a problem that OH has power to solve. It is already doing everything “right” to insulate itself from being impacted by a server run amok like Grafana appears to be doing. And all of that is handled in Java, not OH code, so if there is a bug it is in the Java Runtime Environment.

My suggestion is to look in /var/log/syslog for anything that references an error, java, or openHAB, particularly around the time that OH dies.
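A quick way to do that from the command line might be (log paths per a default Raspbian/openHABian layout):

```shell
# Show recent OOM-killer and Java/openHAB related messages from the syslog.
grep -iE 'oom|out of memory|java|openhab' /var/log/syslog | tail -n 50

# The full OOM-killer report usually lands in kern.log too:
grep -iE 'oom-killer|Killed process' /var/log/kern.log
```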

Try following this post of mine. You should only have one EXTRA_JAVA_OPTS, so just delete one and make it what is in that thread. If you raise the heap size too much you can force Linux to start using the SWAP file which is bad when you are running slow flash storage.

Run this command

free -h

and make sure the swap shows 0B as USED for the swap…
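If you'd rather check the same thing directly from /proc, a small sketch (field names as in the standard Linux /proc/meminfo):

```shell
# Swap used = SwapTotal - SwapFree; a non-zero value means the system has swapped.
awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print "swap used (kB): " t-f}' /proc/meminfo
```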

See the suggestions to look at the log files as you should diagnose what is wrong first before making changes to a system in the blind hope that you magically stumble on the solution.

The openhab.log file will mention “OOME” if you are running out of heap space from what I have seen…
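Assuming the default openHABian log location, that check could look like:

```shell
# Java heap exhaustion shows up as java.lang.OutOfMemoryError ("OOME") in the openHAB log.
grep -i 'OutOfMemoryError' /var/log/openhab2/openhab.log
```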

Agreed, Grafana uses a lot of CPU and RAM when rendering. I saw this when using the System binding on the test setup. And that's also why I partly agree with Markus that an RPi can be short on resources for rendering several Grafana charts. But even though the RPi is limited, it shouldn't crash openHAB. Rendering should just take more time to succeed (if Grafana is coded correctly, which I have no idea about).
What I did see in the Grafana log is just a “killing signal” when openHAB crashes. But I believe this is a consequence of the openHAB crash, and the signal then goes off.

You could be right on this one. It's very hard to inspect (especially for a user like me with highly limited knowledge of things like these in a Linux/Java environment). I really wish someone with more knowledge could give it a try as well and see what happens. It would also be nice to test on different computer/hardware, again pushing the hardware to its limits. I have thought about giving Debian a try on an extra laptop I have, with an Intel i7, 8 GB of RAM and an SSD drive. But I have no experience with how to, and second, I'm not sure what the result would be worth… If it can handle 100 charts at the same time without crashing, is it then safe to assume it's due to the limited resources of the RPi? I would say no, because again, it shouldn't crash openHAB no matter how limited the hardware is.

I'll look at the syslog tonight… At least I can force this problem whenever I like, just by rendering a few charts :slight_smile:

I use SSDs (I haven't got the patience for slow flash storage) :slight_smile:

I have tried several different kinds of changes… including changes due to getting a warning about the number of threads when openHAB 2.4 starts. This warning also started after updating openHAB from 2.3 to 2.4, but I have been told it's safe to ignore it… I don't know why though, and it's difficult to ignore any warnings at all when having fatal problems like openHAB crashing.

I have read the post you're linking to. I have also checked the -Xmx change (it's already set to 512m), but not the other suggestion.

Whenever I change anything, I change it back to what it was if it doesn't do any good. But you're right, I'm picking blindly, because I have no knowledge of these kinds of things… I rely totally on others' suggestions. The log files say very little when openHAB crashes… But I have probably not inspected the right log files on this matter… Rich pointed me to the syslog. I'll see what it tells me later today.

SSDs connected to a Pi 3 via a crippled bus are far slower than RAM.

Another suggestion is to check your power supply and also the cable that is powering your Pi. If they cannot supply enough power due to being low quality, the Pi could be crashing. It could be that it runs fine until the CPU load goes high enough to draw too much power, and then the crash occurs. A good PSU and cables are very important, yet people still use an old phone USB charger and some cable they purchased off eBay for $0.50.
Can you stress the CPU in another way, not using Grafana, to see if it crashes?
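One way to do that with nothing but the shell (a rough 60-second sketch; the loop count of 4 assumes the Pi 3's four cores):

```shell
# Spin one busy loop per core for 60 seconds, then report.
# If the Pi (or openHAB) dies during this, the trigger is load/power, not Grafana.
for core in 1 2 3 4; do
  timeout 60 sh -c 'while :; do :; done' &
done
wait
echo "CPU stress finished without a crash"
```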

My point is: don't get bogged down assuming this is RAM related when it may be another cause.
syslog and also the openhab.log file should be checked for clues.

SSDs are way faster than SD cards (SD cards are also flash RAM :slight_smile: ).

All my RPis are using original 2.5A power supplies.

Important note - when I say openHAB crashes, the RPis are still running. It's “just” openHAB that crashes and restarts itself.

Maybe… I'll have to figure out how. The only thing I can think of is using the IpCamera binding, but I only have one IP camera, so I'm not sure it would work.

I'm not assuming anything… I gave up on that a long time ago :slight_smile: What I do know for a fact is that Grafana charting takes up a lot of CPU and RAM when rendering. But this should not cause a crash and restart of openHAB.

Just came to think of it…
Could this be a BasicUI-only issue? I use BasicUI and no other UIs (except the openHAB app on my Android phone, which will also crash openHAB if I enter the sitemap containing more than one chart).
Maybe I should give HABPanel or ClassicUI a try, just to see what happens…

Look in your logs first before wasting people's time, otherwise people will quickly ignore you, as we are not paid support people…

If openhab restarts then the openhab.log probably has something in it.

I'll let you know in a few hours.

Follow the manual installation instructions for openHABian and you will end up with an openHABian environment built on that Debian machine. There is nothing special about openHABian. It’s just a bunch of scripts and those scripts will work on any Debian based Linux distro.

matt1’s point is that even though the SSD may be faster on its own, it plugs into the RPi over a shared USB 2 connection that drops the usable speed to where there isn’t much difference between the two.

I don’t understand the parenthetical.

Here are some log files…
(Actually I wasn't aware of these log files until Rich mentioned them. Thx Rich)…

kern.log

Jan  8 23:16:07 openHABianPi kernel: [598803.985468] phantomjs invoked oom-killer: gfp_mask=0x24200ca(GFP_HIGHUSER_MOVABLE), nodemask=0, order=0, oom_score_adj=0
Jan  8 23:16:07 openHABianPi kernel: [598803.985475] phantomjs cpuset=/ mems_allowed=0
Jan  8 23:16:07 openHABianPi kernel: [598803.985487] CPU: 2 PID: 5854 Comm: phantomjs Not tainted 4.9.35-v7+ #1014
Jan  8 23:16:07 openHABianPi kernel: [598803.985490] Hardware name: BCM2835
Jan  8 23:16:07 openHABianPi kernel: [598803.985510] [<8010fb3c>] (unwind_backtrace) from [<8010c058>] (show_stack+0x20/0x24)
Jan  8 23:16:07 openHABianPi kernel: [598803.985520] [<8010c058>] (show_stack) from [<80455880>] (dump_stack+0xd4/0x118)
Jan  8 23:16:07 openHABianPi kernel: [598803.985530] [<80455880>] (dump_stack) from [<8026cd84>] (dump_header+0x9c/0x1f4)
Jan  8 23:16:07 openHABianPi kernel: [598803.985541] [<8026cd84>] (dump_header) from [<802106a0>] (oom_kill_process+0x3e0/0x4e4)
Jan  8 23:16:07 openHABianPi kernel: [598803.985550] [<802106a0>] (oom_kill_process) from [<80210b08>] (out_of_memory+0x124/0x334)
Jan  8 23:16:07 openHABianPi kernel: [598803.985559] [<80210b08>] (out_of_memory) from [<80215c34>] (__alloc_pages_nodemask+0xcf4/0xdd0)
Jan  8 23:16:07 openHABianPi kernel: [598803.985569] [<80215c34>] (__alloc_pages_nodemask) from [<8024239c>] (handle_mm_fault+0xb2c/0xd80)
Jan  8 23:16:07 openHABianPi kernel: [598803.985579] [<8024239c>] (handle_mm_fault) from [<8071a134>] (do_page_fault+0x33c/0x3b0)
Jan  8 23:16:07 openHABianPi kernel: [598803.985587] [<8071a134>] (do_page_fault) from [<801011e8>] (do_DataAbort+0x48/0xc4)
Jan  8 23:16:07 openHABianPi kernel: [598803.985595] [<801011e8>] (do_DataAbort) from [<80719964>] (__dabt_usr+0x44/0x60)
Jan  8 23:16:07 openHABianPi kernel: [598803.985598] Exception stack(0x8d32bfb0 to 0x8d32bff8)
Jan  8 23:16:07 openHABianPi kernel: [598803.985603] bfa0:                                     6b69e000 00000000 ffffffff 0002fd01
Jan  8 23:16:07 openHABianPi kernel: [598803.985609] bfc0: 0100000e 7ea62a08 03fd2190 00000001 03fd2190 7ea62b18 00000000 00000001
Jan  8 23:16:07 openHABianPi kernel: [598803.985614] bfe0: 0402d038 7ea629c0 00f554e1 01067e28 20000030 ffffffff
Jan  8 23:16:07 openHABianPi kernel: [598803.985617] Mem-Info:
Jan  8 23:16:07 openHABianPi kernel: [598803.985628] active_anon:117456 inactive_anon:117487 isolated_anon:0
Jan  8 23:16:07 openHABianPi kernel: [598803.985628]  active_file:24 inactive_file:65 isolated_file:0
Jan  8 23:16:07 openHABianPi kernel: [598803.985628]  unevictable:0 dirty:2 writeback:22 unstable:0
Jan  8 23:16:07 openHABianPi kernel: [598803.985628]  slab_reclaimable:2037 slab_unreclaimable:3283
Jan  8 23:16:07 openHABianPi kernel: [598803.985628]  mapped:407 shmem:379 pagetables:1180 bounce:0
Jan  8 23:16:07 openHABianPi kernel: [598803.985628]  free:4081 free_pcp:0 free_cma:14
Jan  8 23:16:07 openHABianPi kernel: [598803.985638] Node 0 active_anon:469824kB inactive_anon:469948kB active_file:96kB inactive_file:260kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:1628kB dirty:8kB writeback:88kB shmem:1516kB writeback_tmp:0kB unstable:0kB pages_scanned:15 all_unreclaimable? no
Jan  8 23:16:07 openHABianPi kernel: [598803.985649] Normal free:16324kB min:16384kB low:20480kB high:24576kB active_anon:469824kB inactive_anon:469948kB active_file:96kB inactive_file:260kB unevictable:0kB writepending:96kB present:1015808kB managed:994232kB mlocked:0kB slab_reclaimable:8148kB slab_unreclaimable:13132kB kernel_stack:3904kB pagetables:4720kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:56kB
Jan  8 23:16:07 openHABianPi kernel: lowmem_reserve[]: 0 0
Jan  8 23:16:07 openHABianPi kernel: [598803.985657] Normal: 162*4kB (UMEH) 152*8kB (UMEHC) 108*16kB (UMEHC) 57*32kB (MEHC) 34*64kB (UMEH) 13*128kB (UEH) 10*256kB (UMEH) 3*512kB (MH) 1*1024kB (M) 1*2048kB (H) 0*4096kB = 16424kB
Jan  8 23:16:07 openHABianPi kernel: 1294 total pagecache pages
Jan  8 23:16:07 openHABianPi kernel: [598803.985702] 806 pages in swap cache
Jan  8 23:16:07 openHABianPi kernel: [598803.985705] Swap cache stats: add 233900, delete 233094, find 620955/646684
Jan  8 23:16:07 openHABianPi kernel: [598803.985707] Free swap  = 0kB
Jan  8 23:16:07 openHABianPi kernel: [598803.985709] Total swap = 102396kB
Jan  8 23:16:07 openHABianPi kernel: [598803.985711] 253952 pages RAM
Jan  8 23:16:07 openHABianPi kernel: [598803.985714] 0 pages HighMem/MovableOnly
Jan  8 23:16:07 openHABianPi kernel: [598803.985715] 5394 pages reserved
Jan  8 23:16:07 openHABianPi kernel: [598803.985717] 2048 pages cma reserved
Jan  8 23:16:07 openHABianPi kernel: [598803.985720] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
Jan  8 23:16:07 openHABianPi kernel: [598803.985746] [  146]     0   146     2645      419      11       0       36             0 systemd-journal
Jan  8 23:16:07 openHABianPi kernel: [598803.985752] [  150]     0   150     2894        1       8       0      133         -1000 systemd-udevd
Jan  8 23:16:07 openHABianPi kernel: [598803.985759] [  447]   105   447     1000       40       7       0       43             0 avahi-daemon
Jan  8 23:16:07 openHABianPi kernel: [598803.985765] [  453]     0   453     1698       16       7       0       41             0 cron
Jan  8 23:16:07 openHABianPi kernel: [598803.985771] [  454]     0   454     8035       49      11       0      247             0 rsyslogd
Jan  8 23:16:07 openHABianPi kernel: [598803.985776] [  458]   104   458     1372       32       7       0       52          -900 dbus-daemon
Jan  8 23:16:07 openHABianPi kernel: [598803.985781] [  461]   105   461      968        2       6       0       55             0 avahi-daemon
Jan  8 23:16:07 openHABianPi kernel: [598803.985787] [  483]     0   483      848       21       6       0       32             0 systemd-logind
Jan  8 23:16:07 openHABianPi kernel: [598803.985792] [  532]     0   532     1788       14       7       0       91             0 wpa_supplicant
Jan  8 23:16:07 openHABianPi kernel: [598803.985798] [  751]     0   751      640       12       5       0       59             0 dhcpcd
Jan  8 23:16:07 openHABianPi kernel: [598803.985803] [  755]   999   755   249486    27652     203       0     3517             0 influxd
Jan  8 23:16:07 openHABianPi kernel: [598803.985809] [  757]   109   757    31645     1313      61       0     2362             0 node
Jan  8 23:16:07 openHABianPi kernel: [598803.985814] [  764]   111   764   244794     2967      50       0     1627             0 grafana-server
Jan  8 23:16:07 openHABianPi kernel: [598803.985820] [  799]     0   799     1131        1       7       0       32             0 agetty
Jan  8 23:16:07 openHABianPi kernel: [598803.985826] [  810]   106   810     1443       26       7       0       85             0 ntpd
Jan  8 23:16:07 openHABianPi kernel: [598803.985832] [  874]     0   874     1964        0       9       0      127         -1000 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985838] [  951]     0   951     6165       48      15       0      260             0 nmbd
Jan  8 23:16:07 openHABianPi kernel: [598803.985843] [  959]   109   959     1282       11       7       0       12             0 tail
Jan  8 23:16:07 openHABianPi kernel: [598803.985849] [  965]     0   965     9350       39      23       0      382             0 smbd
Jan  8 23:16:07 openHABianPi kernel: [598803.985854] [  970]     0   970     9350       36      20       0      385             0 smbd
Jan  8 23:16:07 openHABianPi kernel: [598803.985860] [22518]   110 22518     1700       17       6       0       30             0 dirmngr
Jan  8 23:16:07 openHABianPi kernel: [598803.985865] [30488]     0 30488     2246        9       9       0      147             0 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985871] [30499]  1000 30499     2281       40       8       0      129             0 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985877] [30501]  1000 30501      583       14       6       0       25             0 sftp-server
Jan  8 23:16:07 openHABianPi kernel: [598803.985882] [  488]     0   488     2246       25       9       0      130             0 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985887] [  495]  1000   495     2279       48       8       0      122             0 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985892] [  497]  1000   497      583       27       5       0       12             0 sftp-server
Jan  8 23:16:07 openHABianPi kernel: [598803.985900] [ 3795]     0  3795     2246        9       9       0      148             0 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985905] [ 3803]  1000  3803     2246       34       8       0      127             0 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985910] [ 3806]  1000  3806     1970       67       8       0      264             0 bash
Jan  8 23:16:07 openHABianPi kernel: [598803.985916] [ 4898]   109  4898   236127   112970     313       0     3281             0 java
Jan  8 23:16:07 openHABianPi kernel: [598803.985923] [ 5852]   111  5852    75417    27194     101       0        0             0 phantomjs
Jan  8 23:16:07 openHABianPi kernel: [598803.985928] [ 5853]   111  5853    75033    24807      95       0        0             0 phantomjs
Jan  8 23:16:07 openHABianPi kernel: [598803.985933] [ 5854]   111  5854    74894    34379     113       0        0             0 phantomjs
Jan  8 23:16:07 openHABianPi kernel: [598803.985937] Out of memory: Kill process 4898 (java) score 425 or sacrifice child
Jan  8 23:16:07 openHABianPi kernel: [598803.986562] Killed process 4898 (java) total-vm:944508kB, anon-rss:451880kB, file-rss:0kB, shmem-rss:0kB
Jan  8 23:16:07 openHABianPi kernel: [598804.199696] oom_reaper: reaped process 4898 (java), now anon-rss:12kB, file-rss:4kB, shmem-rss:0kB

This is from the syslog:

Jan  8 23:15:58 openHABianPi grafana-server[764]: t=2019-01-08T23:15:58+0100 lvl=info msg=Rendering logger=png-renderer path="d-solo/ZkZGWpgRk/nilan-ind-ud-temperatur?refresh=30s&orgId=1&panelId=2&from=now-18h&to=now&width=1000&height=500"
Jan  8 23:15:58 openHABianPi grafana-server[764]: t=2019-01-08T23:15:58+0100 lvl=info msg=Rendering logger=png-renderer path="d-solo/ZkZGWpgRk/nilan-ind-ud-temperatur?refresh=30s&orgId=1&panelId=4&from=now-18h&to=now&width=1000&height=500"
Jan  8 23:15:58 openHABianPi grafana-server[764]: t=2019-01-08T23:15:58+0100 lvl=info msg=Rendering logger=png-renderer path="d-solo/ZkZGWpgRk/nilan-ind-ud-temperatur?refresh=30s&orgId=1&panelId=6&from=now-18h&to=now&width=1000&height=500"
Jan  8 23:15:59 openHABianPi influxd[755]: [httpd] 10.4.28.237 - openhab [08/Jan/2019:23:15:59 +0100] "POST /write?consistency=one&db=openhab_db&p=%5BREDACTED%5D&precision=n&rp=autogen&u=openhab HTTP/1.1" 204 0 "-" "okhttp/2.4.0" fa6e7ff3-1392-11e9-8504-b827eb19aad8 23236
Jan  8 23:16:00 openHABianPi influxd[755]: [httpd] 10.4.28.237 - openhab [08/Jan/2019:23:16:00 +0100] "POST /write?consistency=one&db=openhab_db&p=%5BREDACTED%5D&precision=n&rp=autogen&u=openhab HTTP/1.1" 204 0 "-" "okhttp/2.4.0" faaa26fd-1392-11e9-8505-b827eb19aad8 146093
Jan  8 23:16:01 openHABianPi influxd[755]: [httpd] 10.4.28.237 - openhab [08/Jan/2019:23:16:01 +0100] "POST /write?consistency=one&db=openhab_db&p=%5BREDACTED%5D&precision=n&rp=autogen&u=openhab HTTP/1.1" 204 0 "-" "okhttp/2.4.0" fb22ec6d-1392-11e9-8506-b827eb19aad8 6147
Jan  8 23:16:01 openHABianPi influxd[755]: [httpd] 10.4.28.237 - openhab [08/Jan/2019:23:16:01 +0100] "POST /write?consistency=one&db=openhab_db&p=%5BREDACTED%5D&precision=n&rp=autogen&u=openhab HTTP/1.1" 204 0 "-" "okhttp/2.4.0" fb3228a9-1392-11e9-8507-b827eb19aad8 6028
Jan  8 23:16:01 openHABianPi influxd[755]: [httpd] 10.4.28.237 - openhab [08/Jan/2019:23:16:01 +0100] "POST /write?consistency=one&db=openhab_db&p=%5BREDACTED%5D&precision=n&rp=autogen&u=openhab HTTP/1.1" 204 0 "-" "okhttp/2.4.0" fb418051-1392-11e9-8508-b827eb19aad8 6596
Jan  8 23:16:01 openHABianPi influxd[755]: [httpd] 10.4.28.237 - openhab [08/Jan/2019:23:16:01 +0100] "POST /write?consistency=one&db=openhab_db&p=%5BREDACTED%5D&precision=n&rp=autogen&u=openhab HTTP/1.1" 204 0 "-" "okhttp/2.4.0" fb5fd30d-1392-11e9-8509-b827eb19aad8 4342
Jan  8 23:16:05 openHABianPi influxd[755]: ts=2019-01-08T22:16:05.186180Z lvl=info msg="Executing query" log_id=0Cjw9nNl000 service=query query="SELECT mean(value) FROM openhab_db.autogen.nilan_Input_T3_Exhaust WHERE time >= now() - 18h GROUP BY time(2m) fill(previous)"
Jan  8 23:16:05 openHABianPi influxd[755]: ts=2019-01-08T22:16:05.297718Z lvl=info msg="Executing query" log_id=0Cjw9nNl000 service=query query="SELECT mean(value) FROM openhab_db.autogen.nilan_Input_RH WHERE time >= now() - 18h GROUP BY time(2m) fill(previous)"
Jan  8 23:16:05 openHABianPi influxd[755]: [httpd] 10.4.28.237 - openhab [08/Jan/2019:23:16:05 +0100] "POST /write?consistency=one&db=openhab_db&p=%5BREDACTED%5D&precision=n&rp=autogen&u=openhab HTTP/1.1" 204 0 "-" "okhttp/2.4.0" fda57475-1392-11e9-850c-b827eb19aad8 7749
Jan  8 23:16:05 openHABianPi influxd[755]: ts=2019-01-08T22:16:05.333376Z lvl=info msg="Executing query" log_id=0Cjw9nNl000 service=query query="SELECT mean(value) FROM openhab_db.autogen.nilan_Input_T7_Inlet WHERE time >= now() - 18h GROUP BY time(2m) fill(previous)"
Jan  8 23:16:05 openHABianPi influxd[755]: [httpd] ::1, ::1,10.4.28.237 - grafana [08/Jan/2019:23:16:05 +0100] "GET /query?db=openhab_db&epoch=ms&q=SELECT+mean%28%22value%22%29+FROM+%22nilan_Input_RH%22+WHERE+time+%3E%3D+now%28%29+-+18h+GROUP+BY+time%282m%29+fill%28previous%29 HTTP/1.1" 200 3934 "http://localhost:3000/d-solo/ZkZGWpgRk/nilan-ind-ud-temperatur?refresh=30s&orgId=1&panelId=4&from=now-18h&to=now&width=1000&height=500&render=1" "Mozilla/5.0 (Unknown; Linux) AppleWebKit/538.1 (KHTML, like Gecko) PhantomJS/2.1.1 Safari/538.1" fda2b806-1392-11e9-850b-b827eb19aad8 117475
Jan  8 23:16:05 openHABianPi influxd[755]: ts=2019-01-08T22:16:05.415291Z lvl=info msg="Executing query" log_id=0Cjw9nNl000 service=query query="SELECT mean(value) FROM openhab_db.autogen.nilan_Input_T8_Outdoor WHERE time >= now() - 18h GROUP BY time(2m) fill(previous)"
Jan  8 23:16:05 openHABianPi influxd[755]: ts=2019-01-08T22:16:05.494693Z lvl=info msg="Executing query" log_id=0Cjw9nNl000 service=query query="SELECT mean(value) FROM openhab_db.autogen.nilan_Output_ExhaustSpeed WHERE time >= now() - 18h GROUP BY time(2m) fill(previous)"
Jan  8 23:16:05 openHABianPi influxd[755]: ts=2019-01-08T22:16:05.536826Z lvl=info msg="Executing query" log_id=0Cjw9nNl000 service=query query="SELECT mean(value) FROM openhab_db.autogen.nilan_Output_InletSpeed WHERE time >= now() - 18h GROUP BY time(2m) fill(previous)"
Jan  8 23:16:05 openHABianPi influxd[755]: [httpd] ::1, ::1,10.4.28.237 - grafana [08/Jan/2019:23:16:05 +0100] "GET /query?db=openhab_db&epoch=ms&q=SELECT+mean%28%22value%22%29+FROM+%22nilan_Input_T3_Exhaust%22+WHERE+time+%3E%3D+now%28%29+-+18h+GROUP+BY+time%282m%29+fill%28previous%29%3BSELECT+mean%28%22value%22%29+FROM+%22nilan_Input_T7_Inlet%22+WHERE+time+%3E%3D+now%28%29+-+18h+GROUP+BY+time%282m%29+fill%28previous%29%3BSELECT+mean%28%22value%22%29+FROM+%22nilan_Input_T8_Outdoor%22+WHERE+time+%3E%3D+now%28%29+-+18h+GROUP+BY+time%282m%29+fill%28previous%29 HTTP/1.1" 200 11080 "http://localhost:3000/d-solo/ZkZGWpgRk/nilan-ind-ud-temperatur?refresh=30s&orgId=1&panelId=2&from=now-18h&to=now&width=1000&height=500&render=1" "Mozilla/5.0 (Unknown; Linux) AppleWebKit/538.1 (KHTML, like Gecko) PhantomJS/2.1.1 Safari/538.1" fd85ea09-1392-11e9-850a-b827eb19aad8 447428
Jan  8 23:16:05 openHABianPi influxd[755]: [httpd] ::1, ::1,10.4.28.237 - grafana [08/Jan/2019:23:16:05 +0100] "GET /query?db=openhab_db&epoch=ms&q=SELECT+mean%28%22value%22%29+FROM+%22nilan_Output_ExhaustSpeed%22+WHERE+time+%3E%3D+now%28%29+-+18h+GROUP+BY+time%282m%29+fill%28previous%29%3BSELECT+mean%28%22value%22%29+FROM+%22nilan_Output_InletSpeed%22+WHERE+time+%3E%3D+now%28%29+-+18h+GROUP+BY+time%282m%29+fill%28previous%29 HTTP/1.1" 200 2811 "http://localhost:3000/d-solo/ZkZGWpgRk/nilan-ind-ud-temperatur?refresh=30s&orgId=1&panelId=6&from=now-18h&to=now&width=1000&height=500&render=1" "Mozilla/5.0 (Unknown; Linux) AppleWebKit/538.1 (KHTML, like Gecko) PhantomJS/2.1.1 Safari/538.1" fdbda225-1392-11e9-850d-b827eb19aad8 123388
(The syslog then contains the same phantomjs oom-killer kernel trace already shown in kern.log above.)
Jan  8 23:16:07 openHABianPi kernel: [598803.985628]  slab_reclaimable:2037 slab_unreclaimable:3283
Jan  8 23:16:07 openHABianPi kernel: [598803.985628]  mapped:407 shmem:379 pagetables:1180 bounce:0
Jan  8 23:16:07 openHABianPi kernel: [598803.985628]  free:4081 free_pcp:0 free_cma:14
Jan  8 23:16:07 openHABianPi kernel: [598803.985638] Node 0 active_anon:469824kB inactive_anon:469948kB active_file:96kB inactive_file:260kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:1628kB dirty:8kB writeback:88kB shmem:1516kB writeback_tmp:0kB unstable:0kB pages_scanned:15 all_unreclaimable? no
Jan  8 23:16:07 openHABianPi kernel: [598803.985649] Normal free:16324kB min:16384kB low:20480kB high:24576kB active_anon:469824kB inactive_anon:469948kB active_file:96kB inactive_file:260kB unevictable:0kB writepending:96kB present:1015808kB managed:994232kB mlocked:0kB slab_reclaimable:8148kB slab_unreclaimable:13132kB kernel_stack:3904kB pagetables:4720kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:56kB
Jan  8 23:16:07 openHABianPi kernel: lowmem_reserve[]: 0 0
Jan  8 23:16:07 openHABianPi kernel: [598803.985657] Normal: 162*4kB (UMEH) 152*8kB (UMEHC) 108*16kB (UMEHC) 57*32kB (MEHC) 34*64kB (UMEH) 13*128kB (UEH) 10*256kB (UMEH) 3*512kB (MH) 1*1024kB (M) 1*2048kB (H) 0*4096kB = 16424kB
Jan  8 23:16:07 openHABianPi kernel: 1294 total pagecache pages
Jan  8 23:16:07 openHABianPi kernel: [598803.985702] 806 pages in swap cache
Jan  8 23:16:07 openHABianPi kernel: [598803.985705] Swap cache stats: add 233900, delete 233094, find 620955/646684
Jan  8 23:16:07 openHABianPi kernel: [598803.985707] Free swap  = 0kB
Jan  8 23:16:07 openHABianPi kernel: [598803.985709] Total swap = 102396kB
Jan  8 23:16:07 openHABianPi kernel: [598803.985711] 253952 pages RAM
Jan  8 23:16:07 openHABianPi kernel: [598803.985714] 0 pages HighMem/MovableOnly
Jan  8 23:16:07 openHABianPi kernel: [598803.985715] 5394 pages reserved
Jan  8 23:16:07 openHABianPi kernel: [598803.985717] 2048 pages cma reserved
Jan  8 23:16:07 openHABianPi kernel: [598803.985720] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
Jan  8 23:16:07 openHABianPi kernel: [598803.985746] [  146]     0   146     2645      419      11       0       36             0 systemd-journal
Jan  8 23:16:07 openHABianPi kernel: [598803.985752] [  150]     0   150     2894        1       8       0      133         -1000 systemd-udevd
Jan  8 23:16:07 openHABianPi kernel: [598803.985759] [  447]   105   447     1000       40       7       0       43             0 avahi-daemon
Jan  8 23:16:07 openHABianPi kernel: [598803.985765] [  453]     0   453     1698       16       7       0       41             0 cron
Jan  8 23:16:07 openHABianPi kernel: [598803.985771] [  454]     0   454     8035       49      11       0      247             0 rsyslogd
Jan  8 23:16:07 openHABianPi kernel: [598803.985776] [  458]   104   458     1372       32       7       0       52          -900 dbus-daemon
Jan  8 23:16:07 openHABianPi kernel: [598803.985781] [  461]   105   461      968        2       6       0       55             0 avahi-daemon
Jan  8 23:16:07 openHABianPi kernel: [598803.985787] [  483]     0   483      848       21       6       0       32             0 systemd-logind
Jan  8 23:16:07 openHABianPi kernel: [598803.985792] [  532]     0   532     1788       14       7       0       91             0 wpa_supplicant
Jan  8 23:16:07 openHABianPi kernel: [598803.985798] [  751]     0   751      640       12       5       0       59             0 dhcpcd
Jan  8 23:16:07 openHABianPi kernel: [598803.985803] [  755]   999   755   249486    27652     203       0     3517             0 influxd
Jan  8 23:16:07 openHABianPi kernel: [598803.985809] [  757]   109   757    31645     1313      61       0     2362             0 node
Jan  8 23:16:07 openHABianPi kernel: [598803.985814] [  764]   111   764   244794     2967      50       0     1627             0 grafana-server
Jan  8 23:16:07 openHABianPi kernel: [598803.985820] [  799]     0   799     1131        1       7       0       32             0 agetty
Jan  8 23:16:07 openHABianPi kernel: [598803.985826] [  810]   106   810     1443       26       7       0       85             0 ntpd
Jan  8 23:16:07 openHABianPi kernel: [598803.985832] [  874]     0   874     1964        0       9       0      127         -1000 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985838] [  951]     0   951     6165       48      15       0      260             0 nmbd
Jan  8 23:16:07 openHABianPi kernel: [598803.985843] [  959]   109   959     1282       11       7       0       12             0 tail
Jan  8 23:16:07 openHABianPi kernel: [598803.985849] [  965]     0   965     9350       39      23       0      382             0 smbd
Jan  8 23:16:07 openHABianPi kernel: [598803.985854] [  970]     0   970     9350       36      20       0      385             0 smbd
Jan  8 23:16:07 openHABianPi kernel: [598803.985860] [22518]   110 22518     1700       17       6       0       30             0 dirmngr
Jan  8 23:16:07 openHABianPi kernel: [598803.985865] [30488]     0 30488     2246        9       9       0      147             0 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985871] [30499]  1000 30499     2281       40       8       0      129             0 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985877] [30501]  1000 30501      583       14       6       0       25             0 sftp-server
Jan  8 23:16:07 openHABianPi kernel: [598803.985882] [  488]     0   488     2246       25       9       0      130             0 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985887] [  495]  1000   495     2279       48       8       0      122             0 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985892] [  497]  1000   497      583       27       5       0       12             0 sftp-server
Jan  8 23:16:07 openHABianPi kernel: [598803.985900] [ 3795]     0  3795     2246        9       9       0      148             0 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985905] [ 3803]  1000  3803     2246       34       8       0      127             0 sshd
Jan  8 23:16:07 openHABianPi kernel: [598803.985910] [ 3806]  1000  3806     1970       67       8       0      264             0 bash
Jan  8 23:16:07 openHABianPi kernel: [598803.985916] [ 4898]   109  4898   236127   112970     313       0     3281             0 java
Jan  8 23:16:07 openHABianPi kernel: [598803.985923] [ 5852]   111  5852    75417    27194     101       0        0             0 phantomjs
Jan  8 23:16:07 openHABianPi kernel: [598803.985928] [ 5853]   111  5853    75033    24807      95       0        0             0 phantomjs
Jan  8 23:16:07 openHABianPi kernel: [598803.985933] [ 5854]   111  5854    74894    34379     113       0        0             0 phantomjs
Jan  8 23:16:07 openHABianPi kernel: [598803.985937] Out of memory: Kill process 4898 (java) score 425 or sacrifice child
Jan  8 23:16:07 openHABianPi kernel: [598803.986562] Killed process 4898 (java) total-vm:944508kB, anon-rss:451880kB, file-rss:0kB, shmem-rss:0kB
Jan  8 23:16:07 openHABianPi kernel: [598804.199696] oom_reaper: reaped process 4898 (java), now anon-rss:12kB, file-rss:4kB, shmem-rss:0kB
Jan  8 23:16:09 openHABianPi systemd[1]: openhab2.service: main process exited, code=killed, status=9/KILL
Jan  8 23:16:12 openHABianPi karaf[5896]: Can't connect to the container. The container is not running.
Jan  8 23:16:12 openHABianPi systemd[1]: openhab2.service: control process exited, code=exited status=1
Jan  8 23:16:12 openHABianPi systemd[1]: Unit openhab2.service entered failed state.
Jan  8 23:16:17 openHABianPi systemd[1]: openhab2.service holdoff time over, scheduling restart.
Jan  8 23:16:17 openHABianPi systemd[1]: Stopping openHAB 2 - empowering the smart home...
Jan  8 23:16:17 openHABianPi systemd[1]: Starting openHAB 2 - empowering the smart home...
Jan  8 23:16:17 openHABianPi systemd[1]: Started openHAB 2 - empowering the smart home.

It seems like Java is being killed, which is probably why openHAB restarts. It's all caused by the out-of-memory condition… And it seems PhantomJS is the main culprit…
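For anyone wanting to see how the kernel picks its victim: each process carries a "badness" score under /proc, and the OOM killer kills the highest-scoring one. A minimal sketch (Linux only; here the shell's own PID stands in for the Java PID 4898 seen in the log above):

```shell
# Inspect the OOM "badness" score of a process; the higher the score,
# the more likely the kernel is to kill it when memory runs out.
# Using the shell's own PID ($$) as a stand-in; on a live system you
# would use openHAB's Java PID, e.g. from "pgrep -f openhab".
pid=$$
cat /proc/$pid/oom_score      # current badness score
cat /proc/$pid/oom_score_adj  # user bias: -1000 (never kill) .. +1000
```

The `oom_score_adj=0` entries in the kernel dump above are this second value: nobody had biased any process, so the JVM, as the biggest consumer, was the natural target.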

This is what happens with my main system, which right now is running Grafana 5.3.4 and therefore needs PhantomJS.

I´ll see if I can get back to the old Grafana version without losing my charts… Or I´ll try installing 5.1.4 on my test RPi, because version 5.1.4 also crashed my main setup, even though that version does not use PhantomJS.

Hmm… Actually, I DO run Grafana 5.1.4… This version should not be using PhantomJS… What the heck is going on…

Very informative logs, and I agree: PhantomJS is indeed the source. It looks like it consumes all the RAM, and the kernel then looks for the process using the largest amount of RAM at the same or lower priority than PhantomJS to kill, which of course will be OH. Though it seems like it should kill PhantomJS instead. This is my first experience with this sort of thing.
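One standard mitigation sketch (not from this thread, just regular systemd behaviour): you can bias the OOM killer away from openHAB with a drop-in for the openhab2.service unit, so a runaway renderer gets killed before the JVM. The file path below is the conventional drop-in location and is an example only:

```ini
# /etc/systemd/system/openhab2.service.d/oom.conf  (drop-in sketch)
# OOMScoreAdjust is a standard systemd [Service] option; -500 makes
# the kernel much less likely to pick openHAB's JVM as the OOM victim.
[Service]
OOMScoreAdjust=-500
```

Apply with `sudo systemctl daemon-reload && sudo systemctl restart openhab2`. Note this only changes which process dies first; it does not fix the underlying memory shortage.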

If you don’t use statically generated chart images you don’t need PhantomJS. Given these logs I’d say you should not use PhantomJS under any circumstances on an RPi, perhaps no one should use it.

Get rid of PhantomJS and use a webview like

Webview url="http://argus:3000/d/000000001/home-automation?panelId=1&orgId=1&tab=display&fullscreen&from=now-1d&to=now&kiosk=1"

to put the charts on your sitemap.
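For reference, a minimal sitemap sketch wrapping that URL (the sitemap name, labels, and `height` value are placeholders; `height` is measured in sitemap rows):

```
sitemap demo label="Charts" {
    Frame label="Grafana" {
        Webview url="http://argus:3000/d/000000001/home-automation?panelId=1&orgId=1&tab=display&fullscreen&from=now-1d&to=now&kiosk=1" height=9
    }
}
```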

It sounds like perhaps you didn’t install PhantomJS yourself?

Clearly it’s installed so we need to find how it got installed. If it got installed by openHABian we need to file an issue to cause openHABian to NOT install it, or at least not install it by default.
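A quick way to check how it got there on a Debian-based system like openHABian (a sketch; the bundled path is the one that appears in the rendering error later in this thread):

```shell
# Is phantomjs on the PATH, and if so, which package (if any) owns it?
if command -v phantomjs >/dev/null 2>&1; then
    dpkg -S "$(command -v phantomjs)" 2>/dev/null \
        || echo "not owned by any package - probably a manual install"
else
    echo "phantomjs not on PATH"
fi

# Grafana up to 5.1.x also bundles its own copy outside the PATH:
ls /usr/share/grafana/tools/phantomjs/ 2>/dev/null || echo "no bundled copy"
```

If dpkg owns the binary, it came in via apt (possibly pulled in by openHABian); if not, it was unpacked by hand or shipped inside the Grafana package tree.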


Webview crashed openHAB as well…
I´m not quite sure which URL to use for the webview. I just used the original URL the first time I tried it, and it crashed openHAB as well.

But let me rephrase a bit first, because I know where PhantomJS came from:

This mess started when I updated openHAB from 2.3 to 2.4.
At that time I was running Grafana 5.1.4 - this version does NOT require PhantomJS. In fact, PhantomJS had never been installed on the RPi at that point. Therefore it can't have been PhantomJS that was the original cause of openHAB 2.4 crashing.

When I discovered the problem, I updated Grafana to the latest version, 5.4.2. That version does require PhantomJS, so I had to install it as well.

Updating Grafana did not solve my problem with openHAB crashing, so I uninstalled Grafana 5.4.2 again (remember I asked you some days ago how to get rid of Grafana, and you answered with the purge command)… I did purge Grafana. And then I installed 5.1.4 again, which is the version running now…

Now the question is - why the heck does it still use PhantomJS?

EDIT:
I just renamed the PhantomJS file to something else… Now Grafana refuses to render at all… (rendering error).

I tested the webview again, and it does seem to work, though it doesn't refresh. I was using the wrong URL when I first tried (in fact I was using the rendering URL, which of course crashed openHAB as well)…
EDIT AGAIN - Webview DOES refresh.

Still struggling to find the reason why Grafana 5.1.4 needs PhantomJS for rendering.

Well, it seems like I´m about to give up finding the reason why Grafana 5.1.4 needs PhantomJS.
This is the error I get:

t=2019-01-09T23:56:37+0100 lvl=info msg=Rendering logger=png-renderer path="d-solo/UgPbUoggz/stort-bad?refresh=30s&orgId=1&panelId=4&from=1547052995409&to=1547074595409&width=1000&height=500&tz=UTC%2B01%3A00"
t=2019-01-09T23:56:37+0100 lvl=eror msg="Could not start command" logger=png-renderer LOG15_ERROR= LOG15_ERROR="Normalized odd number of arguments by adding nil"
t=2019-01-09T23:56:37+0100 lvl=eror msg="Rendering failed." logger=context userId=1 orgId=1 uname=admin error="fork/exec /usr/share/grafana/tools/phantomjs/phantomjs: no such file or directory"
t=2019-01-09T23:56:37+0100 lvl=eror msg="Request Completed" logger=context userId=1 orgId=1 uname=admin method=GET path=/render/d-solo/UgPbUoggz/stort-bad status=500 remote_addr=10.4.28.30 time_ms=15 size=1703 referer="http://10.4.28.237:3000/d/UgPbUoggz/stort-bad?refres$

I renamed the PhantomJS file back to its original name. Then I took a look at @matt1's suggestions for the changes in the link he provided.

I changed the /etc/default/openhab2 to
EXTRA_JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -Xmx512m"
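For context (and touching the duplicate-line question from the start of this thread): /etc/default/openhab2 is sourced as a shell script, so if EXTRA_JAVA_OPTS appears twice, the later assignment simply wins; only one line should be active. A commented sketch of the setting used here:

```shell
# /etc/default/openhab2 is sourced as shell, so a later assignment
# silently overrides an earlier one - keep a single active line.
# -Xmx512m caps the Java heap at 512 MB; -XX:+HeapDumpOnOutOfMemoryError
# makes the JVM write a heap dump (.hprof) if the heap itself fills up,
# which helps diagnose a leak after the fact.
EXTRA_JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -Xmx512m"
```

Note that two identical lines are harmless in themselves (the second just re-assigns the same value), but they make it easy to edit the wrong one and wonder why nothing changed.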

And then rebooted.

After this, I can render 3 charts without a problem (almost)… It seems like something else fails when I do so.

2019-01-10 00:06:32.003 [WARN ] [su.litvak.chromecast.api.v2.Channel ] - Error while reading
su.litvak.chromecast.api.v2.ChromeCastException: Remote socket closed
	at su.litvak.chromecast.api.v2.Channel.read(Channel.java:425) ~[235:org.openhab.binding.chromecast:2.4.0]
	at su.litvak.chromecast.api.v2.Channel.access$200(Channel.java:51) ~[235:org.openhab.binding.chromecast:2.4.0]
	at su.litvak.chromecast.api.v2.Channel$ReadThread.run(Channel.java:137) [235:org.openhab.binding.chromecast:2.4.0]
2019-01-10 00:06:32.060 [WARN ] [su.litvak.chromecast.api.v2.Channel ] -  <--  null payload in message 

==> /var/log/openhab2/events.log <==

2019-01-10 00:06:32.084 [hingStatusInfoChangedEvent] - 'chromecast:chromecast:255f3cf49521e13fa5f92fc38ae7ac51' changed from ONLINE to OFFLINE
2019-01-10 00:06:32.098 [hingStatusInfoChangedEvent] - 'chromecast:chromecast:255f3cf49521e13fa5f92fc38ae7ac51' changed from OFFLINE to OFFLINE (COMMUNICATION_ERROR): Interrupted while waiting for response

It does come back to online again a few seconds after.

2019-01-10 00:06:42.472 [hingStatusInfoChangedEvent] - 'chromecast:chromecast:255f3cf49521e13fa5f92fc38ae7ac51' changed from OFFLINE (COMMUNICATION_ERROR): Interrupted while waiting for response to ONLINE

This only happens sometimes…

What's good is that openHAB doesn´t seem to crash, even though I´m using PhantomJS and rendering… I have been thinking that maybe the webview would be better… It´s a lot faster. I have not tested how much memory it takes, though. I´ll keep a close eye on the system the next few days… I don't really trust it at the moment.
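A couple of low-overhead checks that help with keeping that eye on it (Linux only; /proc/meminfo needs no extra tools):

```shell
# Swap exhaustion was the trigger in the kernel log above
# ("Free swap = 0kB"); these fields show it coming before
# the OOM killer ever fires.
grep -E 'MemTotal|MemAvailable|SwapTotal|SwapFree' /proc/meminfo

# Resident memory (RSS, in kB) of the usual suspects, if running:
ps -eo pid,rss,comm | grep -E 'java|phantomjs' || echo "neither is running"
```

Watching MemAvailable and SwapFree trend downward over a day of rendering is a much earlier warning than waiting for the next OOM kill.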

However, I still wish I could find the reason why Grafana 5.1.4 needs PhantomJS. From what I´ve read, it´s only required from version 5.2.0 onwards.

Webview with no PhantomJS present? That’s the main point. There should be no PhantomJS on your machine at all.

It is a slightly different URL; see my example. If you are using the same URL as you do for the image, then there is no difference as far as Grafana is concerned.

Actually, that version comes with PhantomJS. And if you are generating static images in that version, you are using PhantomJS to do so.

The newer version of Grafana does not ship with PhantomJS and has no ability to generate static image charts unless you go out of your way to install PhantomJS yourself.

They got rid of PhantomJS for a reason…

Because older versions of Grafana have always used PhantomJS and shipped with it. They dropped the library with the latest version because PhantomJS is old, buggy, and no longer being maintained. They did not, however, provide a replacement for the functionality that library provided. So your choice is to manually install PhantomJS yourself and suffer the consequences, or abandon generating static chart images.

It is only required to be installed manually from 5.2.0 onwards. Previous versions of Grafana ship with the library included.

This is what I was trying to get at in the recommendation thread. If you are using the latest version of Grafana, which is installed by openHABian, then you should not be experiencing this sort of problem. It is the manual steps you took to add PhantomJS which ultimately led to the problem.

I found I got fewer RAM issues out of a Pi when I left off the Xms parameter. Since I no longer use a Pi I never looked into it in enough depth to be 100% sure why, but perhaps it was fragmentation caused by the use of a GC which does not compact after collecting.

I value my time higher than the cost to upgrade, so I see it as a waste of time trying to keep a system working that is clearly near its maximum, as it may just stop working at the next update or when I add a few extra components to my smart home… I now use an Odroid C2, since it has twice the RAM and twice the CPU power, and everything you own for the Pi can be moved across except the case - which may work too if you take a dremel/file to it.

If I outgrow the C2 (I use 5% of the CPU and only 25% of the RAM, so it's not close to running out for what I do), I would be looking at this as my next step - but note the RAM will cost you more, as it is not supplied…
https://www.hardkernel.com/shop/odroid-h2/

You can then use your SSD via real SATA3 and unlock its speed (or use an M.2 drive) and have up to 32 GB of RAM. All in a fanless design with very low power draw. It's x86 based, which means you get fewer issues from things in Linux not working on ARM processors, which happens from time to time.