Migration from RPI to NUC


Hi everyone,

I will start this thread to report my findings along the way of my migration from an RPi 4 to a NUC.
This was mainly triggered by stability issues with different potential root causes.

So, I started to run OH3 on a NUC to see and learn.
Maybe someone here finds it helpful.
Any feedback is welcome.

Starting point is my (in general) mature OH3 installation on this HW:

Platform information:
Hardware: RPi 4 8 GB and 32 GB SSD + zwave: Aeotec Z-Stick Gen5 (ZW090) - USB
OH 3.0.1 + openhabian

My Bindings:
Amazon Dash Buttons, Amazon Alexa Control, DenonMarantz, Gardena Smart Control Irrigation, Logitech Harmony Hub, HP Printer, KM200 (Buderus), Neato, Netatmo, TR-064, VolvoOnCall, Z-wave

The issues I had are discussed here; they are possibly power related (voltage), data corruption or general HW failure:

Because I had been thinking about a NUC for quite a while - see here - I thought it was time to just try it:
Otherwise I could use the NUC for my son :wink:

So I bought a NUC7CJYH (with GB RAM and 32 GB SSD).
I chose a small SSD to keep a full-drive backup (dd) easier to handle.

Because I wanted to keep my config and avoid too much effort, I chose the backup function of openHABian.

@mstormi Thank you very much for openhabian and its functions. The installation was extremely easy on Ubuntu and my new NUC!

Backup (RPi):
I created a backup using openHABian and additionally an image of the last RPi 4 SSD using Win32DiskImager.
(In general, creating an image of an SD-card or external SSD is easier to handle than of the internal SSD of the NUC - so that’s one downside so far.)

I have put these to my NAS for later access.

I started with Ubuntu 20.04 LTS Server and put it on an SD-card connected via USB (don’t forget to enable USB boot and set it to boot first from USB).
No issues during the installation. See manual installation here:
For later use I also did another installation on a bootable SD-card (8 GB), which comes in handy later on (backup of the internal SSD).

After the installation of Ubuntu and openHAB via openHABian, the next step was to make use of the backup:
The backups I made before are on the SSD of the previous system. For future use I created an SSD image (32 GB) and put it on my NAS.
For Ubuntu I needed to adjust /etc/fstab to mount my NAS share automatically on startup:

//  /media/NAS_incoming  cifs  credentials=/etc/NAS_credentials,iocharset=utf8,sec=ntlmssp  0  0

The credentials file contains:
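For reference, a CIFS credentials file uses a simple key/value format (these are placeholder values - put in your actual NAS login; the domain line is only needed if your NAS uses one):

```
# /etc/NAS_credentials (placeholder values)
username=myuser
password=mypassword
domain=WORKGROUP
```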
Make it unreadable for others:

sudo chown root: /etc/NAS_credentials
sudo chmod 600 /etc/NAS_credentials

For safety reasons I did a backup of the internal drive every now and then.
To do so, I booted from the 8 GB Ubuntu SD-card mentioned above (used in a USB card reader) and created an image of the internal SSD on another USB drive:

dd if=/dev/sda of=/media/usbHDD/OH3_Backup/2021.06.24_OH_Bckp_NUC.img
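For Linux rookies: you can check that an image created this way is a faithful copy by comparing it against the source. Here is a harmless demo on a throwaway file (paths are made up; on the real system you would compare /dev/sda against the image while booted from the SD-card):

```shell
# Harmless demo with a throwaway file: create a small dummy 'disk',
# image it with dd, and verify that source and image are identical.
dd if=/dev/zero of=/tmp/dummy_disk bs=1M count=4 2>/dev/null
dd if=/tmp/dummy_disk of=/tmp/dummy_disk.img bs=1M 2>/dev/null
cmp /tmp/dummy_disk /tmp/dummy_disk.img && echo "image matches source"
```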

Because I realized along the way that I was still missing some files from my previous RPi system, I mounted the image of the RPi from my NAS.
In my case it looked like this:

sudo mount -o ro,loop,offset=272629760 /media/NAS_incoming/2021.06.24_OH_3.0.2_stable_last_Rpi4_image_before_NUC.img /media/iso/

This is based on this site - check it out to get the right offset to the 2nd partition:
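In short, the offset is the partition’s start sector multiplied by the sector size (usually 512 bytes); both numbers come from `fdisk -l <image>`. In my image the 2nd partition started at sector 532480:

```shell
# offset = start sector of the 2nd partition * sector size
# (both values come from `fdisk -l <image>`; mine started at sector 532480)
OFFSET=$((532480 * 512))
echo "$OFFSET"   # 272629760, the value used in the mount command above
```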

This was very handy every now and then.

To copy several files needed for my setup, I created my own simple restore script.
This made it much easier for my specific stuff.
So it’s not directly applicable to your system, but it might be helpful for Linux rookies like me out there :wink:
(Please note that mounting the image as described above is required.)

# fix permissions just in case
sudo chown -R openhab:openhab /etc/openhab/
sleep 1s
sudo chmod ug+x /etc/openhab/scripts/*
sleep 1s
# copy openhab cloud identification
sudo cp -rp /media/iso/var/lib/openhab/uuid /var/lib/openhab/uuid
sleep 1s
sudo mkdir -p /var/lib/openhab/openhabcloud
sleep 1s
sudo cp -rp /media/iso/var/lib/openhab/openhabcloud/secret /var/lib/openhab/openhabcloud/secret
sleep 1s
sudo chown -R openhab:openhab /var/lib/openhab/openhabcloud/
sleep 1s
# copy mapdb DB
sudo cp -rp /media/iso/var/lib/openhab/persistence/mapdb/* /var/lib/openhab/persistence/mapdb
sleep 1s
# copy the target directories for ip camera ftp upload
sudo cp -rp /media/iso/var/www /var/
sleep 1s

# copy karaf keys
sudo cp -rp /media/iso/home/openhab/*id_rsa* /home/openhab/
# copy files and folders from /etc/openhab/html
sudo cp -rp /media/iso/etc/openhab/html/abus/ /etc/openhab/html/
sudo cp -rp /media/iso/etc/openhab/html/squirrel/ /etc/openhab/html/
sudo cp -rp /media/iso/etc/openhab/html/teleeye/ /etc/openhab/html/
sudo cp -rp /media/iso/etc/openhab/html/xido/ /etc/openhab/html/
sudo cp -rp /media/iso/etc/openhab/html/neato-botvac/ /etc/openhab/html/
sudo cp -rp /media/iso/etc/openhab/html/openweathermap/ /etc/openhab/html/
sudo cp -rp /media/iso/etc/openhab/html/backgrounds/ /etc/openhab/html/
sudo cp -rp /media/iso/etc/openhab/html/tmp/ /etc/openhab/html/
sudo cp -rp /media/iso/etc/openhab/html/habpanel-map-widget/ /etc/openhab/html/
sudo cp -rp /media/iso/etc/openhab/html/icons/ /etc/openhab/html/
sleep 1s
# FIRST: copy latest version of habpanel (just web part in webui bundle) into /etc/openhab/html/habpanel containing assets, fonts, vendor
# copy icons for habpanel
sudo cp -rp /media/iso/etc/openhab/html/habpanel/assets/icons/* /etc/openhab/html/habpanel/assets/icons/
# finally copy the backup files from the previous system:
sudo cp -rp /media/iso/var/lib/openhab/backups /var/lib/openhab/
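If you put the commands above into a script of your own, a small guard at the top avoids accidentally running it while the image is not mounted. This is just my sketch; the mount point is the one from my setup, adjust it to yours:

```shell
# Guard function: only proceed if the old image actually appears mounted.
image_is_mounted() {
    # a mounted RPi image should contain these openHAB directories
    [ -d "$1/etc/openhab" ] && [ -d "$1/var/lib/openhab" ]
}

if image_is_mounted /media/iso; then
    echo "image found - safe to run the copy commands"
else
    echo "please mount the image first"
fi
```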

The following issues occurred during the migration:

  1. Copying the previous InfluxDB over to the new system led to the error below.
    I had created the DB first using this guide:
    InfluxDB+Grafana persistence and graphing

But got:

2021-06-25 16:34:15.818 [ERROR] [rnal.influx1.InfluxDB1RepositoryImpl] - database connection failed
org.influxdb.InfluxDBIOException: java.net.ConnectException: Failed to connect to /
        at org.influxdb.impl.InfluxDBImpl.ping(InfluxDBImpl.java:395) ~[bundleFile:?]
        at org.openhab.persistence.influxdb.internal.influx1.InfluxDB1RepositoryImpl.checkConnectionStatus(InfluxDB1RepositoryImpl>
        at org.openhab.persistence.influxdb.internal.influx1.InfluxDB1RepositoryImpl.connect(InfluxDB1RepositoryImpl.java:72) [bun>
        at org.openhab.persistence.influxdb.InfluxDBPersistenceService.activate(InfluxDBPersistenceService.java:112) [bundleFile:?]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[?:?]
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[?:?]
        at java.lang.reflect.Method.invoke(Unknown Source) ~[?:?]

In general I would like to solve this so I can use my old data, and I guess it’s simple.
I assume it’s about access rights, or the fact that the NUC uses a different interface name (eno1 instead of eth0!?) - but that’s just a guess.
After deleting the openhab_db in /var/lib/influxdb/data, a brand-new DB worked flawlessly.

Which leads me to my second problem.
2. Dash Buttons:
Even though I used this guide (and searching the net):

I am not able to get them beyond initializing:

2021-06-26 08:02:48.271 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception:
java.lang.NoClassDefFoundError: Could not initialize class org.pcap4j.core.NativeMappings
        at org.pcap4j.core.Pcaps.findAllDevs(Pcaps.java:56) ~[?:?]
        at org.pcap4j.core.Pcaps.getDevByName(Pcaps.java:125) ~[?:?]
        at org.openhab.binding.amazondashbutton.internal.pcap.PcapUtil.getNetworkInterfaceByName(PcapUtil.java:56) ~[?:?]
        at org.openhab.binding.amazondashbutton.internal.handler.AmazonDashButtonHandler.lambda$0(AmazonDashButtonHandler.java:62) ~[?:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[?:?]
        at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
        at java.lang.Thread.run(Unknown Source) [?:?]

I assume that this might be related to the different interface name (eno1 instead of eth0), and I changed this in the main UI. I even deleted one dash button and recreated it - but no change.
All of these dash buttons used to work on my RPi for years.

So, any suggestion would be greatly appreciated.

And Again:
Thanks for the backup function of openhabian.
I have set up my system many times over the past 7 years, and using the restore function killed the fear of starting all over again :slight_smile:

I would like to give a short update about my impression running OH3.1 on a NUC.
Of course one month is not a sufficient time period for a final conclusion, but I am very happy after the migration.

  1. zwave works quicker (switching a group containing zwave devices sometimes took up to 7 seconds on the RPi - now there is no delay).
  2. The stability seems to be much better (I did not have issues with network and / or USB power (?)
    like described here: OH3 - Network gone - no way to recover - #33 by NCO).

However, time will tell…

The issues I had after the migration are not all solved yet though:

  1. System info binding not working on intel - SOLVED: Sysinfo Binding install tip Ubuntu on x64 Intel - #2 by NCO
  2. dash buttons not working - still OPEN: Dash buttons not working anymore - #2 by NCO
    (I still assume it’s caused by the different network device name, but I don’t know yet how to fix it.)

You’re welcome. Hopefully you’re aware that openHABian on Ubuntu is not supported?

Of course I am.
But it’s worth a try anyway :slight_smile:

And there is nothing to complain about.
I just have one minor issue, but I see you are already involved :smiley:

Works well so far.
Created a backup of the internal disk to my NAS.
Cloning it back to another external disk was successful as well; doing a full image every now and then will be my process.
Of course I will do regular backups using openhabian as well.

The only open point is the dash button issue…

I will post an update here once I have run the NUC for long enough to make a proper judgment.


Why don’t you use Debian? I have had the same configuration for years: a NUC with Ubuntu, from OH 2.2 up to OH 3.1 now.
I use 29 bindings, over 1300 items and over 200 rules. Over the years there were no major problems.

But if you’re setting it up again anyway, why not with Debian?
I’ve read that Debian is the better alternative, and I want to rebuild mine now.
Greetings, Markus

Thanks for your input, Markus.
I just chose Ubuntu because it’s mentioned every now and then as working well on NUCs.
Possibly it’s the same with debian.
I actually don’t have preference other than the reason above, but I am quite happy with it so far.

But if you ran it successfully for years, why the change?
(never change a running system) :wink:

That’s right. :grin::+1:
But I have the problem that I no longer know which bindings got new channels through updates. Since these are not updated when the channel is activated, I wanted to set up everything again after switching from OH2 to OH3 - and then with the recommended setup. I also think I have quite a few dead setups running, since this system was my learning and trial setup.
Greetings Markus

I am still learning and trying every day :slight_smile:
And I love it with the help of this community!

This topic was automatically closed 41 days after the last reply. New replies are no longer allowed.