How much RAM does OH Use For You?

The kind people at BananaPi are sending me one of their new BananaPi Zeros to test OH on. It is a pretty impressive little device (same form factor as an RPi Zero, but with a quad-core 1.2 GHz A7 (H2+) CPU versus the single-core 1 GHz in the RPi Zero). So processor-wise I'm thinking it will handle openHAB without a problem. However, it only comes with 512 MB of RAM, so I have my doubts about whether it can handle OH in that regard.

But I would like to provide a useful write-up after I test it, which means I need to know more about how much RAM your setups are using, so I can offer something along the lines of “If you have a simple setup with 5 or fewer add-ons, it will work for you.”

So I’m asking everyone who is willing to please provide the following information if they are running on Linux:

  1. How much memory is your OH using? (see note 1)
  2. How many active bundles? (see note 2)
  3. How many Items?
  4. How many lines of Rules?

I’ll provide some commands you can run to get this information below.

My setup is:

  1. 4891694 kB
  2. 133 Active bundles
  3. 382 Items
  4. 672 Lines

Commands

  1. From the same machine OH is running on:
ps eo vsz,command | grep openhab | grep -v grep | cut -d ' ' -f 1

The number will be the amount of virtual memory used by OH, in kB.

  2. This command requires logging in to the Karaf console. The default password is habopen.
ssh -p 8101 openhab@localhost 'bundle:list | grep -c Active'

This will print out the number of active bundles. Don’t worry, it will be a largish number as it counts way more than just installed add-ons.
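If you are curious how many of those are add-ons you installed yourself, filtering the list on “Binding” should give a rough idea (only rough, since UIs, persistence services, and other add-on types are named differently):

ssh -p 8101 openhab@localhost 'bundle:list | grep -c Binding'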

  3. This command also requires logging in to the Karaf console.
ssh -p 8101 openhab@localhost 'smarthome:items list | grep -c .*'

The number printed out will be the total number of Items.

  4. From the same machine OH is running on:
wc -l /etc/openhab2/rules/*.rules | tail -n 1

Use the location of your Rules files if they are not in the standard apt-get/yum installed locations. The number printed out will be the total number of lines in all of your Rules files. This will include whitespace and comments.
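For example, if you installed from the tar.gz, the Rules usually live under the conf folder of wherever you unpacked it, so something like this should do it (adjust the path to your install directory):

wc -l /opt/openhab2/conf/rules/*.rules | tail -n 1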

Notes

  1. I realize this is not an accurate measure of the amount of memory OH is actually using, but it is close enough for my purposes.

  2. This will list all the active bundles, which include third-party bundles, the bundles that make up the ESH and OH cores, and your installed add-ons.

  3. To convert kB to MB, divide by 1024 (see the quick example below).
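As a quick worked example using my own number from above:

echo $((4891694 / 1024))

prints 4777, i.e. roughly 4777 MB or about 4.7 GB of virtual memory.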

Thanks!

Edit: Noticed I was missing a digit in my memory report. No way is OH running in that little memory.

I’m running in a VM on an HP MicroServer Gen8. On this setup I get:

  1. 3341912
  2. 159 Active bundles
  3. 497 items
  4. 3490 lines

Good luck with your testing!


Raspberry Pi 3 Model B Rev 1.2 running openHABian (stable build)

  1. 584540
  2. 133 Active bundles
  3. 456 Items
  4. 1146 lines

OH 2.1.0 running in a Docker container on Ubuntu Server

  1. 4793376
  2. 92
  3. 98
  4. 0 (I use Node-RED)

I had to do

ps -A …

in order to get some output. I’m not sure if it changes the result?


That should work, but if you are in doubt you can run the ps on its own and make sure that number is in the right column.
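That is, drop the final cut from the original command and see which column the number lands in:

ps eo vsz,command | grep openhab | grep -v grep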

I too run in Docker and your memory usage is really close to mine. I wonder if that makes a difference.

Raspberry PI 3 running OH2.1

  1. 554656
  2. 126
  3. 1056
  4. 634

I’m running on a RasPI2 using openhabian with OH2.1 stable release

1.) 511040
2.) 121
3.) 210
4.) 240


Hmmmm.

OK just try: ps aux | grep openhab | grep -v grep and look at the number in the fifth column.
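If you would rather not count columns by eye, awk can pull out the fifth column (VSZ) for you, e.g.:

ps aux | grep openhab | grep -v grep | awk '{print $5}'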

Mmmhh, I would love to help you, but it seems that I have trouble running your commands:

  • Running the latest OH 2.2 snapshot on a NUC VM through VirtualBox; host and guest are running Ubuntu 16.04.3 LTS.
  • The output of the memory command is just a single comma.
  • The port option needs to be a lowercase “p” for me, but even then the number of bundles and items is given as “0”.
  • The number of lines in Rules is listed as “372”, which sounds about reasonable…

See my response to opus immediately above your post for an alternative.

That is odd. I built these commands on Ubuntu 17.10. ssh and ps shouldn’t be that different between the two.

Just tried it again, doh, the -P is indeed a typo in the above post. I’m using lowercase p.

What do you get when you just log in and execute the commands?

ssh -p 8101 openhab@localhost

bundle:list

You should get a listing of all your bundles.

Adding an extra dash ‘-’ before eo also worked for me:

ps -eo vsz,command | grep openhab | grep -v grep | cut -d ' ' -f 1

Currently I still use an RPi3/Raspbian setup with OH 2.1 (stable):

  • 824176 kB
  • 122 Active bundles
  • 952 Items
  • 1924 Lines

I’ll be migrating this setup soon to a NUC/Docker setup to fix memory, performance and reliability issues. The Pi3 also runs InfluxDB/Grafana.
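For setups like this where the same box also runs InfluxDB and Grafana, free -h gives a quick view of the overall memory pressure (total, used, and available for the whole machine):

free -h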


I should have tested the ps commands on raspbian. It didn’t occur to me that ps would behave differently.

It is interesting how everyone running outside of Docker is using an order of magnitude less memory. ps must include the whole container’s environment or Docker grabs a bunch of virtual memory up front or something.
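One way to check would be to compare the resident set size (RSS, the part actually held in RAM) instead of the virtual size, e.g. something like:

ps -eo rss,command | grep openhab | grep -v grep | awk '{print $1}'

If the RSS numbers come out similar inside and outside of Docker, the gap really is just reserved virtual address space.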

Thanks to all who have responded thus far!

Using this command (with openhab) I get a value of 511040 for the running java process.
The entries for the karaf (6536), frontail (85724), and tail (5128) processes are of no concern, I guess. I’m wondering where they are coming from. Old leftovers?


No, they are running on your machine. The amount of memory Karaf is using should be pretty much the same for everyone, and I think that is really only the start script anyway. frontail and tail are showing up because the commands that started them include the word “openhab”; it is likely picking up on openhab.log.
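If you want the pipeline to match only the Java process itself, requiring “java” on the line should filter frontail and tail out, e.g.:

ps -eo vsz,command | grep java | grep openhab | grep -v grep | awk '{print $1}'

(awk instead of cut just so any leading spaces in the vsz column don't matter)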

The extra dash did help!

Edited my original post to show all values.

Running a Raspberry Pi 3 with OH 2.1

  1. 473108
  2. 100
  3. 282
  4. 161

I think the architecture has the biggest influence on this. When I run the openHAB demo (build #1082) in Docker or via an unpacked tar.gz on the same host (Ubuntu 16.04.3) the memory usage is more or less the same:

  • Docker:
    • 14496504 kB
  • Unpacked tar.gz
    • 14512820 kB

It’s even lower in Docker for me… but that will just be within the margin of error. With Docker the process runs on the same kernel, so the virtualization penalty is very low. That is what makes container technology so popular. :)
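For the Docker setups it might also be interesting to see what Docker itself reports for the container (assuming the container is named openhab, as in the exec example below):

docker stats --no-stream openhab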


Also, for those running in Docker… we can also run the commands inside the Docker container. :-p
(Though I must admit I’m not sure exactly how ps reacts inside of a container…)

docker exec -it openhab /bin/bash

For me, the number is exactly the same as outside of the container.
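A one-shot variant that skips the interactive shell would be something like the following, assuming ps is available inside the image:

docker exec openhab ps -eo vsz,command | grep openhab | grep -v grep | awk '{print $1}'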

I forgot to mention that I am also running inside a Docker container (which itself is running in an Ubuntu VM).

Maybe have a look at the Java runtime involved in each particular setup. Zulu Embedded on the RasPi vs. …?