RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyUSB0: File exists. It is mine

Hello,

I’ve been running openHAB 2.4.0 Build #1322 in a Docker container for two weeks, and suddenly my Z-Wave and Zigbee USB dongles no longer work. Karaf shows these messages:

RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyUSB0: File exists. It is mine

testRead() Lock file failed

RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyACM0: File exists. It is mine

testRead() Lock file failed

I have restarted openHAB and rebooted my Synology NAS, with no luck.

The USB devices are present, and Docker runs in privileged (highest access level) mode. The permissions on /dev/ttyUSB0 and /dev/ttyACM0 are OK.

So I really don’t understand why this is happening. Any ideas?

Regards,
Bastiaan

I have seen this occasionally using snapshot 1325, but I never got to the bottom of it. I think it was an OH restart that resolved it for me. Did you try restarting the container, or is it set up to restart when the NAS reboots? Not much info for you, but you’re not alone!

Good to hear I’m not alone on this. I have rebooted the NAS multiple times, as well as the Docker container. It doesn’t solve it.

I have created a github issue to ask the developers to take a look at it: https://github.com/openhab/openhab-core/issues/383

I had been running OH as root when seeing this, but I recently changed to using an openhab account and found an issue that could be related. I mentioned this here. Check the permissions of /run/lock to see whether the account running OH can write the lock file. If it can’t, here is what I did…

Add openhab user to the lock group:

#usermod -a -G lock openhab

Verify membership:

#id openhab
uid=964(openhab) gid=963(openhab) groups=963(openhab),5(tty),18(dialout),54(lock)

Temporarily change the group owner of /run/lock and add group write permission to see if this fix works for you:

#chown root:lock /run/lock
#chmod g+w /run/lock

The /run/lock directory is recreated after each restart, so to make the change persistent you need to set the group owner and the group write permission in the tmpfiles.d configuration:

#vi /usr/lib/tmpfiles.d/legacy.conf

Look for this line:

d /run/lock 0755 root root -

Comment it out (add a hash symbol before it) and insert a new line with:

d /run/lock 0775 root lock -

Save. Restart openHAB and verify functionality. After a reboot, /run/lock should be owned by root:lock with mode 775. I’m still investigating this, but I think it may be a new issue. I’m currently on 1330 and have not seen this issue again.
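To confirm the mode actually took effect after a reboot, a quick stat check works. The sketch below uses a temporary directory as a stand-in for /run/lock, so it is safe to run anywhere; on the real system you would run `stat -c '%a %U:%G' /run/lock` and expect `775 root:lock`.

```shell
# Stand-in for /run/lock: create a directory and apply the intended mode.
LOCKDIR=$(mktemp -d)
chmod 0775 "$LOCKDIR"

# Print the octal mode, the same way you would check the real directory.
stat -c '%a' "$LOCKDIR"   # prints 775
```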


I’m running openHAB from Docker on a Synology NAS. The openHAB container runs with high privilege, so I take it it runs as root. So do you think the same steps would apply to my environment?

Permissions on /run/lock:

root@NAS02:~# ls -la /run/ | grep lock
drwxr-xr-x  6 root     root            220 Aug 14 21:30 lock

I may be taking you down a rabbit hole. The actual error you are seeing means that the lock file was not cleaned up (OH not shut down properly, power outage, etc.). Docker adds another layer to this. Try shutting down OH (not the container) properly with CTRL-D in Karaf. The lock files should then be cleaned up. I think I had to install SSH for this when I was playing with the Docker image. I know you said you have restarted OH, but maybe you were just restarting the container or killing the Java pid? Another option is to manually delete the lock files before starting OH, after an improper shutdown.
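The manual cleanup amounts to removing any leftover LCK..* entries before starting OH. A safe sketch of the pattern (using a temporary directory in place of /var/lock, so nothing on the real system is touched):

```shell
# Simulate a lock directory holding stale RXTX lock files left over
# from an unclean shutdown. On a real system this would be /var/lock.
LOCK_DIR=$(mktemp -d)
touch "$LOCK_DIR/LCK..ttyUSB0" "$LOCK_DIR/LCK..ttyACM0"

# Before starting openHAB again, remove the stale locks:
rm -f "$LOCK_DIR"/LCK..*

ls -A "$LOCK_DIR"   # prints nothing: the stale locks are gone
```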

Previously, I shut down the OH container via Docker. Now I did some clean shutdowns via Karaf. The ‘LCK..ttyACM0’ lock file gets removed, but ‘LCK..ttyUSB0’ doesn’t. I list the files through Karaf with ‘shell:ls’:

openhab> shell:cd '/run/lock'
openhab> shell:ls
LCK..ttyACM0 LCK..ttyUSB0
I first want to try deleting ‘LCK..ttyUSB0’, as it looks like that one doesn’t get deleted. Karaf and its shell don’t seem to have a way to delete files. Do you know another way to do this? Enable SSH in the container?

That’s how I would do it… don’t know of any other way.

And do you recall how you did this?

I actually found the files within the docker folder structure:

/volume2/@docker/btrfs/subvolumes/8054d215f06ee16eb42d66929168bfe403b484366ef7db36b7d3d9d76947e5b5/run/lock

Checked with Karaf, and the file dates match. When I shut down OH, the lock files do get deleted. So I think the origin of the problem is not that the files already exist.

If you shutdown OH, delete the files, and restart OH, do you still get the error?

Yes, the errors come again:

RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyUSB0: File exists. It is mine

testRead() Lock file failed

RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyACM0: File exists. It is mine

testRead() Lock file failed

I’ve done a chmod 777 on the files while OH is running; the error remains.
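For context on the odd "It is mine" wording: RXTX uses UUCP-style lock files, where the lock file contains the PID of the process that created it, and on a second open attempt the stored PID is compared with the caller's own. A minimal sketch of that idea (not RXTX's actual code, and using a temp directory instead of /var/lock):

```shell
# UUCP-style lock sketch: the lock file stores the owning PID.
LOCKDIR=$(mktemp -d)
LOCK="$LOCKDIR/LCK..ttyUSB0"
echo "$$" > "$LOCK"          # create the lock with our own PID

# On a later open attempt the file already exists; compare PIDs.
# A matching PID is exactly the "File exists. It is mine" case.
if [ "$(cat "$LOCK")" = "$$" ]; then
  echo "File exists. It is mine"
fi
```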

This looks like it may be an ESH issue with nrjavaserial, but possibly with Docker factors in play. Snapshot build 1330 has ESH 0.10.0.201808130936 and nrjavaserial 3.14.0, which was merged into ESH on July 5th. The previous version was 3.12.0. Which version of nrjavaserial do you have (in Karaf: list | grep serial)? Maybe you could try an earlier snapshot, or 2.3, to see if the error occurs there too?

I’ve tested this as root on an up-to-date Arch Linux (without any virtualization layer). It fails with the message given above.

To me it looks like a threading issue.

  1. before openHAB is started, no /var/lock/LCK..ttyACM0 file exists
  2. openHAB is started (with the Z-Wave binding enabled and the controller configured)
  3. /var/lock/LCK..ttyACM0 exists (obviously created by the openHAB start)
  4. doing a search fails to find any device but prints the known error message (due to the locking issue)

I’ve tried to remove the created lock before doing the search; this prevents the error message but doesn’t find me any devices.

  1. before openHAB is started, no /var/lock/LCK..ttyACM0 file exists
  2. openHAB is started (with the Z-Wave binding enabled and the controller configured)
  3. /var/lock/LCK..ttyACM0 exists, so I deleted it
  4. doing a search again doesn’t create the lock file but also doesn’t find anything

Okay, I’ve tried with an older version of nrjavaserial too.

  1. download http://jcenter.bintray.com/org/openhab/nrjavaserial/3.12.0.OH/nrjavaserial-3.12.0.OH.jar
  2. bundle:install -s file://nrjavaserial-3.12.0.OH.jar
  3. made sure only 3.12.0.OH version of nrjavaserial bundle is active (bundle:stop 230)

230 │ Resolved │ 80 │ 3.14.0 │ nrjavaserial
238 │ Active │ 80 │ 3.12.0.OH │ nrjavaserial

  4. the given error message remains

RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyACM0: File exists. It is mine

testRead() Lock file failed

/var/lock is actually a symlink to /run/lock, and it is recreated on startup. Maybe restarting the device could help?
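That symlink layout can be illustrated like this (simulated inside a temp directory so it is safe to run; on a real system the paths would be /var/lock and /run/lock):

```shell
# Recreate the /var/lock -> /run/lock layout inside a temp directory.
ROOT=$(mktemp -d)
mkdir "$ROOT/run_lock"
ln -s "$ROOT/run_lock" "$ROOT/var_lock"

# A lock file written through the symlink lands in the real directory,
# so the permissions on /run/lock are what actually matter.
touch "$ROOT/var_lock/LCK..ttyACM0"
ls "$ROOT/run_lock"   # prints LCK..ttyACM0
```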

There’s definitely no problem with the /var/lock (/run/lock) permissions, as I tested openhab2 as the openhab user with correct permissions and as root (no difference). I even removed the symlink and created /var/lock with mode 777.

I’m testing with fresh installs (purge /opt/openhab2/, download, extract, run, enable the Z-Wave binding, add the controller, discover) while keeping an eye on /var/lock:

  • 2.4.0.M1 works
  • 2.4.0.M2 controller added successfully, discovery fails
  • 2.4.0.M3 controller added successfully, discovery fails
  • 2.4.0.M4 controller added successfully, discovery fails

Fun fact:
adding the controller always works, but viewing/changing controller settings or doing a discovery always fails on M2 and later.

@Kai any builds available in between M1 and M2 to bisect the issue?

M1 was done on Aug 15 and M2 on Aug 28, so you might want to check the snapshots between those dates - I am just not sure if those are still available anywhere.
Here you can see the changes between the two milestones.
As this happens with Z-Wave dongles, maybe @chris has any input on the issue?