Raspberry Pi 4 released

I’ve done a full switch from my Odroid C2 to the RPi 4. So far it looks good; OH seems to start faster, probably due to the SSD being on USB 3. I had no problems getting my Razberry daughter card to work with the RPi 4 (using the openhabian-config tool to set the correct JVM parameters for the serial console).
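For anyone repeating this: on a non-Docker install those JVM parameters end up in /etc/default/openhab2. The exact line openhabian-config writes may differ; this is just a sketch, and /dev/ttyAMA0 is an assumption (it supposes the Razberry sits on the Pi’s onboard UART):

```shell
# /etc/default/openhab2 (sketch) - serial port whitelist for the
# Z-Wave binding; the device path is an assumption, verify with `ls /dev/tty*`
EXTRA_JAVA_OPTS="-Dgnu.io.rxtx.SerialPorts=/dev/ttyAMA0"
```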

The openhab-cli with backup and restore is extremely useful for fast and successful migrations.
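In case it helps others migrating, the two commands are roughly as follows (the zip path is just an example):

```shell
# On the old system: bundle configuration and userdata into one zip
sudo openhab-cli backup /tmp/openhab-backup.zip

# Copy the zip to the new system, then restore it there
sudo openhab-cli restore /tmp/openhab-backup.zip
```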

Regards S

How to use / enable the zram feature? Which version is needed (2.5M1)?

Some great news for all openHABian users in this thread. We have a new version, openHABian 1.5, which supports the RPi 4. :rocket: Have a look: https://github.com/openhab/openhabian/releases/tag/v1.5

openHABian maintainers


This is a feature of openHABian, not openHAB. The version of openHAB does not matter.

And a big thank you to all of the openHABian maintainers! I’m mostly looking forward to zram, but I didn’t think Pi4 support would be this quick.

I am running openHAB 2.5.0.M2 as a Docker container on a Rock64 and see the following output from my top:


Indicating 25% of my 4 GB of memory in use.

The settings are:

EXTRA_JAVA_OPTS=-Duser.timezone=Europe/Berlin -Xms400m -Xmx650m
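For reference, with the official openhab/openhab Docker image those options are passed as an environment variable; a hypothetical invocation (image tag and host paths are assumptions, adapt to your setup) might look like:

```shell
docker run -d --name openhab \
  -e EXTRA_JAVA_OPTS="-Duser.timezone=Europe/Berlin -Xms400m -Xmx650m" \
  -v /opt/openhab/conf:/openhab/conf \
  -v /opt/openhab/userdata:/openhab/userdata \
  openhab/openhab:2.5.0.M2
```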

Before this I was running 2.4.0 in a container on an RPi3 and, with the same settings, had a freeze of the system from time to time (every week or two).
Looking forward to seeing how it fares on the Rock64, but it looks like it takes more memory than expected. The system and my rules are huge.

That’s the thing. It would take less than 1G total and maybe 500M resident if you omitted docker.

Which is 1 GB, i.e. the RPi3’s total memory.

That’s really not a suitable application for Docker, it just makes things a lot worse.

To be specific, it’s not a Docker-suitable application on an RPi.
OH in Docker is perfectly fine to run on machines that are meant to be servers.

I might agree that an RPi3 is not optimal for running OH on Docker. I experimented with a Docker cluster, hoping to get some fault tolerance and failover, but got mixed results. For production I stepped back to a bare Docker container on an RPi3, with the utilities like ebusd, mqtt, influxdb, grafana, openvpn, landroid, glusterfs … in the cluster. This worked perfectly until the cluster and glusterfs burned through my rather small SD cards (16 GB), almost all 6 of them. I have now moved to 120 GB SSDs, and finally moved the OH production to a Rock64 with 4 GB.
I’m waiting for the RPi4 to revive the cluster for everything again.

And btw, I love Docker; it’s the best thing that has happened to me in the last few years (IT-related).


Seems you’ve got some spare time … well. Consider using zram for your cluster; you can install openHABian on top of Raspbian and selectively install zram from the menu if you don’t want the full openHABian setup.

Has anyone tried an in-place update to Buster on an existing Pi 3 openHABian image and had it work on a Pi 4?

My OH server is running on an mSATA drive, and it’s not as simple a test as cloning an SD card and seeing what happens. From scratch would burn a lot of time, but reading through this thread I can’t tell if anyone has done an upgrade or just a from-scratch install.

I’ve done the in place upgrade on my five RPis (none of them are RPi4s). Three of the five worked great. Two of the five failed and I had to rebuild them from scratch, which wasn’t really that big of a deal. I’d definitely make sure you have a good and proven backup before proceeding.

I can’t speak to whether an in place upgrade on an RPi <4 will work with an RPi4, but I would be surprised if it didn’t.

Raspberrypi.org recommends a fresh install for both rpi 3 and 4. It’s not that much effort imo, so play it safe :sunglasses:

Regards s

Probably that Rock64 runs a 64-bit OS which consumes more memory because the memory address space in instructions is bigger. I happily run openHAB with Docker on a RPi3 (32-bit) and it only consumes 22% memory according to top.

It also looks like openHABian always uses -Xmx350m regardless of the architecture?

yes Xms and Xmx are static values I created an issue/ note to myself to adjust that in openHABian

After having struggled through it once on a RPI2->RPI3, I just rebuild now. It’s not worth the struggle to me.

If you really want to do it, I’d suggest: Make a backup! Then try your in-place upgrade on the 3 and make sure it still works. Then try putting it in the 4. It won’t boot; you’ll have to update the boot partition to have the right files and data. Hopefully the Internet can tell you what needs to be updated, but you should be able to take the files off a Pi image for the 4 and update the config for your old kernel.

It’s not impossible, just fiddly and poorly documented.
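For completeness, the fiddly path (not the recommended one) is roughly the standard Debian in-place upgrade, done on the Pi 3 before moving the disk. A sketch, assuming a stock Raspbian Stretch sources list:

```shell
# Back up first! Then point apt at Buster instead of Stretch:
sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
sudo apt update
sudo apt full-upgrade
# After that, an RPi4 still needs a current bootloader/kernel in the
# boot partition - the poorly documented part.
```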

FWIW, to throw my oar in: 1 GB on an RPi just isn’t enough memory. Yes, the JVM might be limited to 350 MB, but that’s not actually how much RAM gets used.

The debate should be over whether 2 GB or 4 GB is the sweet spot. If you’re running stock, then 2 GB is probably fine. But if you’re offloading tmp and cache to a tmpfs (i.e. a ramdisk, which also means your SD card will actually last longer), then 4 GB becomes your sweet spot.
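To make the tmpfs idea concrete, mounts like these in /etc/fstab would do it (paths assume a default openHABian layout; sizes are illustrative only):

```shell
# /etc/fstab (sketch) - keep openHAB's tmp and cache off the SD card
tmpfs  /var/lib/openhab2/tmp    tmpfs  defaults,noatime,size=256m  0  0
tmpfs  /var/lib/openhab2/cache  tmpfs  defaults,noatime,size=256m  0  0
```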



RAM isn’t just used for your Java heap…


If you’re talking about the RPi4 only, then you can narrow it down and the answer is rather simple: pay $10 for one more GB, or just $5?

Hopefully you’re referring to ZRAM, not tmpfs.
And zram won’t need as much RAM for the same purpose, so I’d say you’re back to 2 GB for the sweet spot.

Either way, the true discussion is not on which RPi4 model to buy but if to buy at all or stay with your RPi2/3 w/ 1GB. Which would be an even sweeter spot :wink: albeit not compatible with the thread title.

For one, your JVM RAM usage is awfully high, way above average. You should get well below 1 GB unless you run really exotic stuff or stuff that has memory leaks.
[Although that’s something we should have another closer look at - I have a strong feeling that the changes in 2.5M2(?) make for a serious increase in RAM demand. My own gross RAM demand had always (up to and incl. 2.5M1) stagnated at ~700 MB; now it has jumped to >900 MB.]
For two, the relevant column is the “RES” column, which is what you’re effectively keeping in RAM. So there’s plenty of headroom to even run zram and do it all on a 1 GB machine.
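A quick way to see that distinction from a shell (the java line assumes openHAB is the only Java process on the box):

```shell
# RES/RSS is what a process really holds in physical RAM;
# VIRT/VSZ merely counts mapped address space.
# For the openHAB process you could run:
#   ps -o pid,rss,vsz,comm -C java
# RSS is reported in kB. Demo on the current shell process:
rss_kb=$(ps -o rss= -p $$)
echo "current shell resident set: $((rss_kb / 1024)) MB"
```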

No, I mean a real tmpfs. The two directories I quote are normally empty at start. There’s nothing to save off afterwards, and you don’t have to worry about trashing your filesystem or your config because you powered the RPi off or typed reboot accidentally. Those two issues alone are enough for me to say: don’t run ZRAM, ever.
ZRAM would probably have an advantage with the OH internal databases. They annoy me. But not enough to take the ZRAM disadvantages.

Maybe. But I’ve already discounted 1 GB as simply not being enough. Yes, it starts. Yes, the JVM does run. But add in mysql for persistence (a shade over 200 MB on mine), and even without the tmpfs’es you’re going to struggle. (In fact mine did. And a burnt-out SD is a PITA. So are the endless 500 errors you get because Jersey isn’t ready for you.)

Like I said previously, the JVM isn’t the only thing in RAM. You don’t want to push your buffer cache too low, or you’ll just be bound on SD I/O anyway. More RAM means less physical I/O (reading or writing). On mine there’s a shade under 2 GB free, so I’d argue it’s better to go 4 GB than 2 GB (and the price isn’t worth worrying about anyway, unless you’re planning on deploying thousands of these things).
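The buffer-cache point is easy to check on any of these boxes; `free` shows how much RAM the kernel is spending on caching disk blocks versus what is truly available:

```shell
# 'buff/cache' is RAM the kernel uses to avoid physical (SD/SSD) I/O;
# 'available' estimates what could be reclaimed for new processes.
free -m
```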

Yeah my JVM residency is high. Is there a leak? Probably, it is Java after all. If I was concerned enough I’d drag out a copy of eclipse and start to look. Would I find the time to do that? Probably not.

I’d probably also argue that if memory or performance were a real concern you wouldn’t write OH in Java… (ducks).

Those extras above the JVM size (mysql, tmpfs, buffer cache) are what push you above the 1 GB mark. I’d argue that they’re not a nice-to-have but a necessity.

In fact now I probably have room to push the mosquitto server onto the same host rather than having to have a separate system for it.

Would I argue for 2GB? Not for my workload… Which I consider quite light.


That’s some misunderstanding. No, it doesn’t trash the filesystem; on reboot/power loss you only fall back to an earlier but consistent, i.e. working, filesystem state.
It’s a little funny how touchy (correct term? I’m no native speaker) people are about this reboot thing.
In fact, even in the past a ‘reboot’ could have killed the consistency of one of your databases.
It isn’t apples to apples to compare tmpfs’ing those two dirs with zram’ing other dirs that are also heavily written to, such as /var/log. Leaving those on disk means way more writes on your system and thus a higher risk of getting hit by SD wearout.
You can change zram config to only apply to exactly those two dirs, then you get 100% the same risk as with tmpfs but better performance/less RAM usage w/ zram because it compresses memory.
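For the record, with the zram-config style /etc/ztab that openHABian’s zram feature uses, restricting zram to just those two directories might look something like this (column layout, sizes, and bind paths are from memory — treat it as a sketch and check the comments in your installed ztab):

```shell
# /etc/ztab (sketch) - one 'dir' line per directory held in zram
# type  alg  mem_limit  disk_size  target_dir               bind_dir
dir     lz4  100M       300M       /var/lib/openhab2/tmp    /oh-tmp.bind
dir     lz4  150M       500M       /var/lib/openhab2/cache  /oh-cache.bind
```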
And if you cease using ‘reboot’ it even keeps the data across box restarts so e.g. your next OH startup will be faster because Karaf cache is available after reboot.

On RAM, I wouldn’t choose anything other than a 4 GB RPi if I were to buy a new system. The more the better, even more so if a setup is already as large as yours.
But the average thread reader’s OH setup is small enough to fit into, and is already running on, a 1 GB system, and now they’re wondering if it’s worth upgrading.