Sometimes rules don't get executed?

Sounds like a great issue to file. :wink:

However, how would you implement an asynchronous call in OH, and how would it affect the rule thread used by that rule? The rule starts, somewhere an HTTP call is issued, and the call is run asynchronously, but the rule has to wait for that thread to finish in order to get the return value of the HTTP call. Or should it exit the rule until you get an answer and somehow (?) continue the execution after the call?

I was just curious how it could be possible…

Similar to the way a binding can poll an attached device. There might be a Thing defined with channels, and the channels bind to Items. You send a “begin” command to some Item, perhaps a String Item, with the command actually being the target URL.
And then forget about it. That’s the asynchronous part, no waiting around. If it was done from within a rule, rule execution continues immediately, just like myLight.sendCommand(ON).

Behind the scenes, the binding manages actually sending the HTTP request, waiting for a timeout, error-checking the response, perhaps retrying failures, and decoding the eventual response. Finally it updates some Item via a bound channel. All invisible to the rules engine.

The user has another rule, triggered on Item update, to deal with the response, just like `when Item myLight received update`.
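A Rules DSL sketch of that pattern might look like this (everything here is hypothetical: the Item names and the binding behind them are invented for illustration, since no such binding exists yet):

```
// Hypothetical sketch: no such binding exists; Item names are made up
rule "Begin HTTP request"
when
    Item SomeTrigger changed
then
    // The "begin" command is the target URL; then forget about it
    HttpRequestUrl.sendCommand("http://example.org/api/state")
    // Rule execution continues immediately, no waiting on the response
end

rule "Handle HTTP response"
when
    Item HttpResponse received update   // the binding updates this via a channel
then
    logInfo("http", "Response arrived: " + HttpResponse.state)
end
```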

Just a fantasy for now I’m afraid.

Just as rossko57 describes. I’ll mention that the executeCommandLine Action works this way already when you don’t supply a timeout argument. Rather than waiting around for the called script to complete, it returns immediately and runs the script in the background.

However, you don’t get the result from the script, so I’m not certain this overall approach would work in Rules, since the vast majority of the time we care what gets returned by the HTTP request.
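For reference, the two forms of that Action look something like this inside a Rules DSL rule (the script path is made up for illustration):

```
// Fire and forget: no timeout argument, returns immediately and the
// script keeps running in the background
executeCommandLine("/usr/local/bin/myscript.sh")

// With a timeout (in milliseconds): blocks the rule until the script
// finishes, and returns whatever the script printed to stdout
val String result = executeCommandLine("/usr/local/bin/myscript.sh", 5000)
logInfo("exec", "Script returned: " + result)
```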

What could work however is to make an HTTP 2 version binding that works similar to the Exec 2 binding. This would give us an input and output Channel that we can link Items to. We can sendCommand to the input channel to kick off the HTTP request and then trigger a Rule when the output Item receives an update which will be populated with the result from the request.

This would move the HTTP request out of the Rules entirely.
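For comparison, the Exec 2 binding wiring that this would mirror looks roughly like the following (the Thing and Item names are invented; an HTTP 2 binding could offer the same input/output channel pattern with a URL instead of a command line):

```
// .things file: a command Thing with input/output channels
Thing exec:command:query [ command="/usr/local/bin/query.sh %2$s", timeout=10, autorun=true ]

// .items file: link Items to the input and output channels
String QueryInput  { channel="exec:command:query:input" }
String QueryOutput { channel="exec:command:query:output" }

// rule: kick it off with QueryInput.sendCommand(...), handle the result on update
rule "Handle query result"
when
    Item QueryOutput received update
then
    logInfo("exec", "Result: " + QueryOutput.state)
end
```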

Interesting… maybe a good thing to implement?
Should I make an issue about it in openhab-addons and add this description?

I’m pretty sure that replacing the HTTP binding with a 2.x version is a desire. I’d be surprised if there isn’t an issue already open for it. Look in the openhab2-addons GitHub repo for an open issue. If one doesn’t exist, it certainly wouldn’t hurt to create one.

Tried to find one, but I was unable to find any issue related to this binding.

I have created one.

Hope someone contributes :slight_smile:

Also, I found that when this “lock-up” happens and the rules are not executed, the RPi becomes very slooow… You can’t really tell from the OH web interface, but when I try to log in via SSH everything is really slow… Just a login takes 1-2 minutes… However, the CPU is only idling around 5%…

Sounds like some non-rules (directly) issue: the lower-level system hanging up while awaiting something external, like a file transfer or the network?
Not easy to track down but it’s a step forward, good observation :slight_smile:

Confident in that SD card?


No I’m not confident at all with that SD card. I have used it before for around 2 years for Kodi (OSMC).

I just wanted to let you and others know what the symptoms of this problem might be; if it gets fixed, maybe it can help others.

However, I think the SD card can’t handle this many read/write requests. When I moved the entire /var/log to the NAS it seemed a lot better; it worked for hours without a problem (I had to revert because it messed up everything else :slight_smile: ), but I can’t say that for sure, because sometimes after a restart it works well for a day or two…
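For anyone wanting to try the same thing, an NFS mount for a log directory can be set up with a line like this in /etc/fstab (the NAS address and export path here are placeholders, adjust to your setup):

```
# Mount the openHAB log directory from the NAS over NFS
# 192.168.1.10 and /export/oh-logs are placeholders for your NAS
192.168.1.10:/export/oh-logs  /var/log/openhab2  nfs  defaults,noatime,_netdev  0  0
```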

Out of interest, why don’t you run openHAB on the NAS ?

I don’t know how to advise looking for blocked up queues in java system, maybe someone else can help here.

Because it is a really old, cheap D-Link NAS whose replacement is in progress :slight_smile: It can barely handle the different shares, p2p downloads, etc., and it only has BusyBox, not a full Linux system.

Anyway, since you asked… I have tried asking this on this forum multiple times, but I haven’t found anyone who has done anything similar or experienced this.
So, as I stated, I want to replace my NAS. I thought of building a custom PC with a lot more power than a NAS, with Ubuntu Server on it. I would like to use it as a NAS and run openHAB and the other services which are required for OH/home automation (Mosquitto, other Python scripts…).

This will be good for me because:

  • I will have an all-in-one system (yes, a single point of failure…)
  • It will be easily modifiable/upgradeable
  • It can run anything, so if something is incompatible, I can “easily” switch to other platform(s).
  • I live in a flat, so I don’t have much space for several RPis or more than one bigger “server” at home.
  • Problems like this (SD card…) won’t ever happen, because it will have lots of HDD/SSD (cache) space…
  • My RPi can be used again just as a media server/client (Kodi)

My only questions are:

  • What is the minimum spec for a setup like this? Because I haven’t found anyone who runs OH on a server built for it; usually users run it on an RPi or an old PC/laptop.
  • Are there any known problems? (Java JDK vs Ubuntu, etc…) I thought of running these services, each in its own Docker container.
  • Any suggestions?
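As a starting point, the Docker approach for a couple of those services could be sketched with a compose file like this (the paths are placeholders; `network_mode: host` is commonly used for openHAB so device discovery works):

```yaml
version: "3"
services:
  openhab:
    image: openhab/openhab        # official openHAB image
    network_mode: host            # host networking so discovery/UPnP works
    volumes:
      - ./openhab/conf:/openhab/conf
      - ./openhab/userdata:/openhab/userdata
      - ./openhab/addons:/openhab/addons
    restart: always
  mosquitto:
    image: eclipse-mosquitto      # official Mosquitto MQTT broker image
    ports:
      - "1883:1883"
    volumes:
      - ./mosquitto/config:/mosquitto/config
      - ./mosquitto/data:/mosquitto/data
    restart: always
```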

Stick with a RPi for OH. Don’t move OH to the NAS; this will result in NAS and OH operations interfering with each other, and you will probably encounter problems that no one here will be able to help you with, because we don’t run this combination. Move swapping, logging and persistence to the NAS instead, as I explained in the thread I linked to a couple of posts up.
And better to get a purpose-made NAS than to build your own. It costs a couple of bucks more, yes, but saves you a lot of hours, and NAS makers are better at providing NAS services than you are.

You might consider a VM based setup if you want to move towards centralizing everything on a single more powerful machine. There are a number of us who run OH, NAS, and other services in VMs using Xen or ESXi. Personally, I run OpenMediaVault on one VM, all my home automation stuff in another one, and all my media services in a third all on a single physical desktop server running ESXi. I’m pretty happy with the performance, reliability, and maintainability.

You probably can’t buy a desktop type machine that has less CPU or RAM than is needed to run OH. But you need to survey all the services you plan on running on this machine and size it accordingly.

None I know of. You can and should use openHABian to set it up if you don’t go with Docker.

I run everything in Docker and it works well. But you make significant compromises including neutering the Exec binding.

I doubt that this new machine will actually solve the root problem. By far the most common platform to run OH on is a RPi and almost no one is experiencing the same problems you are. So the problems you are facing are either hardware (in which case moving to a new RPi/SD card would be sufficient) or software, in which case moving to a larger and more powerful machine will not solve the problem.

Thanks!

Firstly: I have successfully moved, for now, just the OH log folder to my NAS. The main problem, that rules are sometimes not executed, seems to be gone; it has been working for a day now. I haven’t checked the log fully, but as I used my “home” today, no problems were found. I will keep testing it and post the results…

Secondly,

  • @mstormi thanks for your response on this. Yes, that was my first thought about this too, but now it seems that some day the performance of an RPi may not be enough to handle just OH. Or isn’t that true? I don’t know how Items/rules affect performance (more rules/Items/Things means more RAM, but how much? Would RAM usage change a lot if, for example, I had double the Things/Items compared to now and, say, three times more rules?)
  • I have other projects which I want to achieve with my smart home, and almost every one needs a separate device. If I can, I want to decrease the number of these devices (Raspberries, mostly).

@rlkoshak Thanks for the advice.
So you mean that you have three separate VMs and each VM has its own purpose. I know that I can have problems with unique hardware like this, but I think if I virtualize it some way, that should minimize these problems… And yes, before moving to this new setup, I will install everything on another PC and test how it works… It is not the same, but maybe I can see how hard it is to move to a centralized device.

I don’t think so. It should do well as long as you don’t put exceptionally demanding rules or other programs onto your box. I’ve got close to 1000 items and a large number of rules on my box, with less than 600 MB of RAM resident in use by OH. No remarkable growth on adding items or rules.
Sure you can go with a VM as well, but please only do so if you’re really familiar with that virtualization stuff, and be aware of the consequences of a HW outage.

Can’t comment without knowing your goals, but if they’re really about smarthome, your existing openHAB server should be able to run them, so all you might still need are maybe some sensors and actuators, with OH giving you lots of choices.

Thanks! I will consider that, because I don’t think that I will ever have that many Items in my OH (this is only a flat, and not a big one, so I don’t have to automate my garage door, etc…).

What I want to achieve (which came to my mind right now):

  • Have a screen on the wall with HabPanel (requires a monitor and some processing unit, like an Intel Compute Stick, or at least a tablet). However, I want to have a bigger screen.
  • I want to modify my mirror in the hall so that it has MagicMirror on it. This requires another RPi.
  • And, for example, I already have another RPi just to turn my old but functional printer into a network printer.

Yes, these are not very CPU- or GPU-intensive tasks (besides HabPanel), but I can’t go with only one device, because these things are at different points of my home…

Fantastic. I would consider replacing that SD card. I think that is the most likely culprit at this point.

Unlikely. There are people happily running on RPis with thousands of Items, a dozen bindings and thousands of lines of Rules code. Rules parsing and startup times are reported to be a problem on the RPi but once OH is up and running everything works great.

Yes. I have separate VMs for separate purposes. And all the services that I installed that are not software appliances (OMV) run in Docker containers. They are configured to store all their data on the OMV NAS using NFS mount points.
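For the curious, an NFS-backed volume for a container can be declared in a compose file roughly like this (the NAS address and export path are placeholders for whatever the NAS VM exports):

```yaml
volumes:
  openhab_conf:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.20,rw"      # placeholder NAS/OMV address
      device: ":/export/openhab/conf" # placeholder NFS export path
```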

I have the following VMs:

  • OMV: NAS including netatalk, NFS and CIFS shares
  • Home Automation: openHAB, InfluxDB, Grafana, Mosquitto
  • Media: Calibre, Gogs, Plex Media Server (soon to include NextCloud)
  • Virtual Desktop: GUI based programs like Adobe Digital Editions, MakeMKV, HandBrake, VSCode, etc.

It is a pretty modestly powered Intel i3 server class machine with lots of RAM. I picked this server because it is very quiet.

This is very important, particularly if you put critical services on the VMs. I can tell you a story.

You will notice that firewall is not one of the VMs I have listed above. I used to run pfSense in a VM on this physical machine as well. This is my firewall so pretty much all the networking goes through this VM, including the networking of the physical machine.

One day I decided to experiment with Snort but I underprovisioned the pfSense VM to run Snort. So I figured I’d just take the VM down and allocate more RAM and another CPU. Bad idea. The problem is that the main GUI for managing VMs is web based. That means I need network connectivity. With the firewall down there was no network connectivity so I couldn’t get to the GUI to modify or to even bring the VM back up and restore my networking. I was pretty much dead in the water.

Luckily I was smart enough to make sure the VM came back up automatically when the physical machine reboots. But now that VM was sitting there unmodifiable, and there was a circular dependency in place (i.e. the hardware networking depended on a VM running on itself). So I have moved my pfSense to a micro server running on its own hardware and all is good.

If you plan to host your NAS as a VM, a situation like this could be a real possibility as well so keep that in mind.

I agree with Markus, you can likely host many if not all of these services on the same RPi.

I’m not sure how building a beefier central server is going to address any of this. You will still need all of those other separate RPi machines.

NOTE: An RPi Zero W would be more than enough power to host a HABPanel display and would be a heck of a lot cheaper than an Intel Compute Stick.

Thanks for this :slight_smile: I think I will have enough info right now…
Just to add: I didn’t say that a central server would solve those issues where I need a unique RPi; I just want to minimize them. So I thought about putting as many things as I can onto a central server, so I don’t have to have several RPis for what can be handled on one server…