The kind people at BananaPi are sending me one of their new BananaPi Zeros to test OH on. It is a pretty impressive little device (same form factor as a RPi Zero, but it runs a quad-core Allwinner H2+ (Cortex-A7) at 1.2 GHz versus the single core at 1 GHz in the RPi Zero). So processor-wise I’m thinking it will handle openHAB without problems. However, it only comes with 512 MB of RAM, so I have my doubts it can handle OH in that regard.
But I would like to provide a useful write-up after I test it, which means I need to know more about how much RAM all of your setups are using, so I can provide guidance along the lines of “If you have a simple setup with 5 or fewer add-ons, it will work for you.”
So I’m asking everyone who is willing to please provide the following information if they are running on Linux:
How much memory is your OH using? (see note 1)
How many active bundles? (see note 2)
How many Items?
How many lines of Rules?
I’ll provide some commands you can run to get this information below.
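The exact memory command isn’t reproduced above, but based on the discussion below (ps, a kB figure, matching on “openhab”) something along these lines should work; the specific flags are my assumption, not necessarily the original command:

```shell
# Sketch (assumed, not necessarily the original post's exact command):
# print the resident set size (RSS, in kB) and command line of every process
# whose command line mentions "openhab". Note this will also match helpers
# such as frontail and tail, as discussed later in the thread.
ps -eo rss,args | grep -i openhab | grep -v grep
```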
Counting the active bundles requires logging in to the Karaf console. Don’t worry, it will be a largish number, as it counts far more than just your installed add-ons.
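A sketch of such a count (assumptions on my part: the default Karaf SSH port 8101 and the `bundle:list` console command, mirroring the Items command below):

```shell
# Sketch (assumed): count OSGi bundles in the Active state via the Karaf console.
ssh -p 8101 openhab@localhost 'bundle:list | grep -c Active'
```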
ssh -p 8101 openhab@localhost 'smarthome:items list | grep -c .*'
The number printed out will be the total number of Items.
Run this from the same machine OH is running on:
wc -l /etc/openhab2/rules/*.rules | tail -n 1
Use the location of your Rules files if they are not in the standard apt-get/yum install locations. The number printed out will be the total number of lines in all of your Rules files, including white space and comments.
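If you want to exclude blank lines and `//` line comments from that count, a filter like the following works (my addition, not part of the original request):

```shell
# Count only non-blank lines that are not // line comments.
# /* ... */ block comments are not handled by this simple filter.
grep -hvE '^[[:space:]]*($|//)' /etc/openhab2/rules/*.rules | wc -l
```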
Notes
1. I realize this is not an accurate measure of the amount of memory OH is actually using, but it is close enough for my purposes.
2. This will list all the active bundles, which include third-party bundles, the bundles that make up the ESH and OH cores, and your installed add-ons.
To get to MB, divide the reported kB value by 1024.
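For example, converting a 511040 kB figure (one of the values reported later in this thread) to MB:

```shell
# 511040 kB / 1024 = 499.0625, i.e. roughly 499 MB
echo 511040 | awk '{printf "%.1f MB\n", $1/1024}'
# prints "499.1 MB"
```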
Thanks!
Edit: Noticed I was missing a digit in my memory report. No way is OH running in that little memory.
Mmmh, would love to help you, but it seems I have trouble running your commands:
running OH 2.2 latest snapshot on a NUC VM through VirtualBox; host and guest are running Ubuntu 16.04.3 LTS
the output of the memory command is just a single comma
the port option needs to be a lowercase “p” for me, but even then the number of bundles and Items is given as “0”
the number of lines in Rules is listed as “372”, which sounds about reasonable…
I should have tested the ps commands on Raspbian. It didn’t occur to me that ps would behave differently.
It is interesting how everyone running outside of Docker is using an order of magnitude less memory. ps must include the whole container’s environment or Docker grabs a bunch of virtual memory up front or something.
Using this command (grepping for openhab) I get a value of
511040
for the running java process. The entries for the karaf (6536), frontail (85724), and tail (5128) processes are of no concern, I guess. I’m wondering where they are coming from. Old leftovers?
No, they are running on your machine. The amount of memory Karaf is using should be pretty much the same for everyone, and I think that is really only the start script anyway. frontail and tail are showing up because the commands that started them include the word “openhab”; it is likely picking up on openhab.log.
I think the architecture has the biggest influence on this. When I run the openHAB demo (build #1082) in Docker or via an unpacked tar.gz on the same host (Ubuntu 16.04.3) the memory usage is more or less the same:
Docker:
14496504 kB
Unpacked tar.gz:
14512820 kB
It’s even lower in Docker for me… but that is probably just margin of error. With Docker, the process runs on the same kernel, so the virtualization penalty is very low. That is what makes container technology so popular.
Also, for those running in Docker… we can also run the commands inside the Docker container. :-p
(Though I must admit I’m not sure exactly how ps reacts inside of a container…)
docker exec -it openhab /bin/bash
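Or, as a one-liner without an interactive shell (assumptions on my part: the container is named `openhab` and the image ships `ps`):

```shell
# Sketch (assumed): run the same ps-based RSS check inside the container.
docker exec openhab sh -c 'ps -eo rss,args | grep -i openhab | grep -v grep'
```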
For me, the number is exactly the same as outside of the container.