My first steps with openHAB

@Kim_Andersen You mentioned that you’ve got some Modbus kit, can you shine any light on this subject?

I can, but I see @rossko57 is already at it, and he's the overall expert in this :smiley:
However I would suggest keeping the Modbus issues in their own/new thread, or at least that LordLiverpool post some logs and setups/more info… “Getting tons of Modbus communication errors” doesn't really help anything :slight_smile:


Correct. PaperUI also interacts with OH core through the REST API.

I recommend modifying the JSON DB entries using the REST API. The REST API Docs add-on (under the Misc tab) is fully interactive, meaning you can make the same calls that PaperUI does when creating/editing stuff. At a high level it usually amounts to:

  1. A GET to fetch an example Thing/Item/Rule, or the JSON for the specific Thing/Item/Rule you want to edit.
  2. Edit the JSON as desired.
  3. A PUT (edit) or POST (create new) to apply your changes or create a new Thing/Item/Rule.

They can all be done through the API Docs, or you can use curl; for example:
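
A minimal sketch with curl (localhost, the default port 8080, and the Item name MyItem are placeholders; the JSON you send back is whatever the GET returned, edited as needed):

# fetch the JSON for an existing Item
curl http://localhost:8080/rest/items/MyItem

# push the edited JSON back with a PUT
curl -X PUT -H "Content-Type: application/json" -d '{"type":"Switch","name":"MyItem","label":"New label"}' http://localhost:8080/rest/items/MyItem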

When you do it this way you get some error checking, so if you mess up the JSON syntax it tells you before the JSONDB is modified. Also, you don't have to stop openHAB to make the edits.

Not quite. OH loads the config from the JSONDB into memory first. Then it loads the config from $OH_CONF into memory. From that point on it operates on the combined configuration in memory. This is why anything in $OH_CONF takes precedence over anything done through PaperUI.

JSONDB is just the way for OH to write out the config changes made through the REST API (e.g. PaperUI) so they persist after OH is shut down.

tl;dr: OH has two independent methods for defining and preserving its configuration: JSONDB and text files. JSONDB configs are read first and text configs next; both are loaded into memory, and it is that in-memory copy OH operates on. Because the text configs are read second, they override anything in the JSONDB. Internally, OH encodes the information about Items/Things/Rules as JSON in its responses to REST API calls, but that does not mean it is all in the JSONDB; for consistency, the same JSON format is used in REST API responses as is used to store entries in the JSONDB itself.
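
As a concrete illustration (the Item name is hypothetical): if an Item called Ventilation_Bypass was first created through PaperUI, adding this line to a .items file under $OH_CONF means the text definition is the one OH actually uses:

Switch Ventilation_Bypass "Ventilation bypass"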


Thanks Rich

That makes perfect sense now and explains a few things I was misunderstanding.

The main one was (which is obvious now)…

Why does openHAB need to shut down… BEFORE creating a backup, if everything is stored in the JSON & text files?

Answer…

While OH is running, the JSON files are dormant?
So a backup wouldn't contain any recent changes.

Also explains why an OH restart once in a while isn’t such a bad thing.

Pretty much. Periodically OH will write out to the JSONDB. I've not looked to see whether it writes every so often or only when something in the config changes (e.g. you add a new Thing). But any operation through the REST API that results in a change stored in the JSONDB is written out immediately. And when OH does write out the JSONDB, it creates a backup in the $OH_USERDATA/jsondb/backups folder. I've not tried to correlate those backups with changes made, to see whether that is the only time it writes out to the JSONDB or whether it also writes periodically at other times.

This is also why you have to stop OH if you manually edit the JSONDB files directly: OH might turn around and overwrite your changes. Also, I'm not sure whether OH reads the JSONDB files back in when they change; it might only read them during startup.

If you make a backup while OH is actively writing to the JSONDB, the backup would almost certainly be corrupted. I think stopping OH before taking a backup is to ensure that doesn't happen.

An occasional restart isn't necessarily a bad thing, for lots of reasons (I know Markus disagrees), but I don't think the JSONDB is part of it. It's more about clearing out orphaned objects (i.e. memory leaks) and other cruft that builds up over time when there are minor bugs in the code. No one writes perfect code.

In fact, from a JSONDB perspective it could be risky: if the machine loses power or otherwise stops before OH has finished shutting down, the JSONDB might be incomplete or otherwise corrupted. I personally have had cases where I got impatient and issued a kill -9 on OH and ended up with a blank JSONDB file. Luckily there was that automatically created backup to rescue me.
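
For reference, recovery from that situation is roughly the following sketch ($OH_USERDATA is the shorthand used above, the timestamped folder name is a placeholder, and the Thing database file is just one example; OH must be stopped first):

sudo systemctl stop openhab2
sudo cp $OH_USERDATA/jsondb/backups/<timestamp>/org.eclipse.smarthome.core.thing.Thing.json $OH_USERDATA/jsondb/
sudo systemctl start openhab2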


I followed the nice tutorial here:
https://community.openhab.org/t/migrating-dsl-rules-to-jsr223-javascript-with-a-case-example/73352

to create my first rule. I saw that the lewie libraries had supposedly been superseded by the ones here:

so I tried those first (putting them in the correct, different, directory) but I couldn’t get anything to work. So I reverted to the lewie libraries and my “Hello Openhab” rule worked.

If I can get the ventilation bypass operating automatically by the end of today, I’ll be more than satisfied.

I have a rule (JSR223/JavaScript) working which runs every couple of minutes, checks the outdoor and indoor temperatures, and sets the bypass switch Item to ON or OFF according to whether the indoor temperature is higher or lower than the outdoor one. I see the switch change state in PaperUI. All good. But the bypass itself doesn't change state, whereas if I operate the switch manually, it does. Time to open a thread.

Solved! Now I have my first useful rule to operate the bypass. I’m done for today.
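
For the record, the core of the rule's logic looks roughly like this. It's a sketch rather than my exact script: JSRule and TimerTrigger come from the helper libraries, the load path depends on where your library lives, and the Item names are placeholders; itemRegistry and events are part of the standard JSR223 script scope.

'use strict';
// Load the rule helpers; adjust this path to wherever your helper library is installed.
load(Java.type("java.lang.System").getenv("OPENHAB_CONF") + "/automation/jsr223/jslib/JSRule.js");

JSRule({
    name: "Ventilation bypass control",
    description: "Open the bypass when it is cooler outside than inside",
    triggers: [ TimerTrigger("0 0/2 * * * ?") ], // cron expression: every 2 minutes
    execute: function(module, input) {
        // Read both temperatures from the item registry (Item names are placeholders).
        var outdoor = parseFloat(itemRegistry.getItem("Outdoor_Temperature").getState().toString());
        var indoor = parseFloat(itemRegistry.getItem("Indoor_Temperature").getState().toString());
        // Bypass ON when it is warmer inside than outside, otherwise OFF.
        events.sendCommand("Ventilation_Bypass", indoor > outdoor ? "ON" : "OFF");
    }
});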


I decided to get on with trying to move the logs off the SD card. Ultimately I plan to use a USB drive, but while waiting for that, I decided to use an old 2.5" hard drive I had lying around. Grappling with the obscurities of fstab in order to mount the drive was hard: I managed to render the system unbootable, probably by forgetting to create the mount point first, or perhaps by omitting a line break (a little more resilience wouldn't go amiss, methinks). I rescued it by using Ext2Fsd under Windows to restore my backup of fstab. I followed a better tutorial and got the drive mounted successfully. I worked out that the fmask and dmask options were preventing the drive from being writable, so I fixed that and pointed OH's log settings to the drive. Unfortunately, nothing is being written, either to the standard location or the new one. Not sure why yet.

OK, fixed it. The drive is NTFS, and apparently I needed to set the owner to openhab (not openhabian) using the uid option in fstab, and then grant full permissions via fmask/dmask.
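
For anyone following along, the resulting /etc/fstab entry looks something like this (the UUID, mount point, and numeric UID are placeholders; ntfs-3g wants openhab's numeric UID, which you can get with id -u openhab):

# hypothetical NTFS mount owned by openhab, with full permissions
UUID=0123456789ABCDEF /mnt/usbdrive ntfs-3g defaults,uid=110,fmask=0000,dmask=0000 0 0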

For the record, when using openHABian you can use openhabian-config to set up ZRAM which will put all the openHAB logging into a compressed RAM file system which prevents those writes to the SD card. I think it might even write persistence (MapDB or rrd4j) there as well.

If your only concern is the logs, that would be an easier and less complicated approach, particularly since it's built right in to openHABian.
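
For anyone who wants to try it, the tool is started from a shell (the ZRAM feature is one of the menu entries; its exact position varies between openHABian versions):

sudo openhabian-config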


Thanks Rich. Having conquered fstab, I shouldn’t have too much trouble switching to the USB stick when I get it.

Also be aware that a thumb drive is flash memory too, and will likewise wear out with repeated writes over time.

Sure. I’m not worried about that, as long as my SD card is not getting hammered at the same rate.

Backup: I wanted to automate the copying of backups from /srv/openhab2-userdata/backups to a shared Windows folder (it's not a NAS, but it'll be available often enough to serve my purposes). Because Windows now disables the insecure SMB1 protocol, I needed to add:

client max protocol = SMB3

to /etc/samba/smb.conf right under the line that says:

workgroup = WORKGROUP

This fixes the “protocol negotiation failed” error. I worked out the command line to copy all the backups from the backups folder to the share, suppressing prompts, and using a credentials file to store the user name and password:

smbclient -A /usr/local/etc/credentials_file //WindowsPC/shared_dir -c "prompt; recurse; lcd /srv/openhab2-userdata/backups; mput *"
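
The credentials file that -A points at uses smbclient's standard key = value format (the values here are placeholders):

username = backupuser
password = secret
domain = WORKGROUP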

I used a special, very restricted backup user that I created on Windows. I then created a backup user on Linux that is the only user able to read the credentials file, and wrote a bash script to execute the above command line. Finally, I used visudo to allow my normal user to execute this script as the backup user without a password, so I can run a command like this:

sudo -u mybackupuser /path/to/backup/script
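
The visudo entry for that is a one-liner along these lines (the user names and script path are placeholders matching the command above):

myuser ALL=(mybackupuser) NOPASSWD: /path/to/backup/script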

Remaining for another day: automate the creation of the backups themselves, and then run both the backup and copy automatically at regular intervals.

I finally managed to get control of the ventilation flow rate using a dimmer. The JS script I was using as a transformation had a bug or two.

I’d also forgotten to attach the transformation map to the Modbus control item. It seems that you can control the bypass without switching to Modbus control, oddly, but not the ventilation flow rate.

I managed to set up a daily backup and copy it to a network drive by running the two relevant scripts as cron jobs. I realised that I need to use the root crontab for both: the backup because it requires root, and the copy because it has to run under a different user.
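
In root's crontab (crontab -e as root), the two entries look something like this (the times and paths are placeholders; the copy runs under the backup user via sudo -u, as above):

# m h dom mon dow command
30 3 * * * /path/to/backup/script
45 3 * * * sudo -u mybackupuser /path/to/copy/script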

I've swapped the hard drive for a USB stick to hold the logs. I got an ultra-slimline stick because cramming four USB devices into the four ports of the RPi is tricky; the EnOcean and Z-Wave adapters are bulky and barely fit.

I’ll probably move the backups to the USB stick too, to avoid more SD card wear and tear.

Using this backup command you can specify the location of the backup file:

openhab-cli backup --full /my/alternative/backupdir/openhab2-backup-$(date +"%y_%m_%d-%H_%M_%S").zip

Now it would be nice to also relocate the tmp dir to the USB stick.