New Zoneminder Binding for Zoneminder versions >= 1.34.0

@mhilbush good job! It is working again, many thanks!

Ok, good.

I’ll submit the fix for inclusion in an upcoming snapshot build.

Edit: Submitted

The change was merged last night, so the next snapshot build should contain the fix.

Thanks for pointing out the error, and for testing the fix.

@mhilbush, thank you for the binding, it works great! One question: do you have plans to add support for ZM run states in the binding? The run states are useful, for example, when you want to turn off analysis on all cameras at night and reactivate it during the day. The API call for activating a new state is

http://{{host}}/zm/api/states/change/{{new_state}}.json

Looks like that API lets you start/stop/restart the entire Zoneminder service? I hadn’t considered adding that to the binding. Can’t you accomplish what you want by disabling each of the monitors using the enable channel? Although, admittedly, if you have a bunch of monitors it can be a bit clumsy.

This feature is a little confusing, because it combines two different logical functions:

  1. Start/Stop/Restart of the ZM service
  2. A kind of “preset” for monitor activities

I agree that the first option is not needed in the binding, as it belongs more or less to system maintenance. But the second one is quite a useful function for automation. There are a couple of scenarios in the ZM FAQ where it can be applied, and the function was also mentioned earlier in this topic. These “presets” are available on the main UI page:

From a technical perspective, the function could be implemented as a string channel that holds the current value of the “run state” (GET /api/states.json) and accepts commands with a new state. The only thing to check: if the new state is the same as the current one, the command should be ignored, as ZM does not perform that check itself. It tries to set the new state even when it is unchanged, which leads to a needless restart of the monitors and a pause in capturing.
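
Until such a channel exists, here is a minimal standalone sketch of that “only change if different” logic in plain Java, assuming the documented /api/states.json and /api/states/change/{state}.json endpoints; the host name and the naive JSON matching are placeholders, not binding code:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch, not binding code: change the ZoneMinder run state only
// if it differs from the active one, to avoid the needless monitor restart
// described above. Host name and the naive JSON matching are placeholders.
public class ZmRunStateSketch {

    private static final String ZM_API = "http://zoneminder.local/zm/api"; // assumption
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        changeRunStateIfNeeded("Night"); // "Night" is an example run state name
    }

    static void changeRunStateIfNeeded(String newState) throws Exception {
        String statesJson = get(ZM_API + "/states.json");
        // states.json marks the active run state with IsActive = "1"; a real
        // implementation would parse the JSON properly instead of this regex.
        Matcher m = Pattern.compile("\"Name\":\"([^\"]+)\"[^}]*\"IsActive\":\"1\"").matcher(statesJson);
        String currentState = m.find() ? m.group(1) : null;
        if (newState.equals(currentState)) {
            System.out.println("Run state '" + newState + "' is already active, skipping");
            return;
        }
        // Activating a run state restarts the monitors, which is exactly why
        // the check above is worth doing.
        System.out.println(get(ZM_API + "/states/change/" + newState + ".json"));
    }

    private static String get(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}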

I’ve just found that the new version of ZM (1.37) has a killer feature: it lets you specify scripts that are executed at the start and at the end of events! That makes it possible to move away from polling without installing third-party tools like zmeventnotification. This version is not yet released, but for me it works quite stably. I hope it will be released soon!

I couldn’t find whether this was discussed before; please point me to it if I missed it.
I have an issue at system startup, when openHAB with the Zoneminder binding starts before the Zoneminder server does. In this case my zoneminder:server thing switches to the OFFLINE state with ERROR: CONFIGURATION_ERROR.
All I need to do to fix it is disable/enable the server thing once the ZM server is up and running.
As a workaround I created a rule that runs every 5 minutes, checks the status of the thing, and if it’s offline, disables it and re-enables it 10 seconds later.
Do you know if there is an easier way to fix the issue?
Thanks.

The binding is not very smart at startup. The first thing it does is check the Zoneminder version number to see if the binding is compatible with the version of Zoneminder that’s running. If that check fails for any reason (including no response from the Zoneminder server), the binding just assumes something is not configured correctly.

It would take some rework in the binding to handle the case where the Zoneminder server is unavailable (versus an incompatible version), in which case it should continue to retry until it can successfully get the version number.
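
For illustration, that retry approach could look roughly like the standalone sketch below. It is not the binding’s actual code; the host name and retry interval are assumptions. The idea is simply to treat “no answer” as a reason to try again rather than as a configuration error:

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the retry behaviour discussed above: keep polling the ZoneMinder
// version endpoint until the server answers, and only then decide whether the
// version is supported. Host name and retry interval are assumptions.
public class ZmVersionRetrySketch {

    private static final String VERSION_URL = "http://zoneminder.local/zm/api/host/getVersion.json";
    private static final HttpClient CLIENT = HttpClient.newHttpClient();
    private static final ScheduledExecutorService SCHEDULER = Executors.newSingleThreadScheduledExecutor();

    public static void main(String[] args) {
        checkVersion();
    }

    private static void checkVersion() {
        try {
            HttpRequest request = HttpRequest.newBuilder(URI.create(VERSION_URL)).GET().build();
            String body = CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
            // Server answered: now a failed check really is a configuration problem.
            if (body.contains("\"version\":\"1.3")) { // placeholder for a proper version comparison
                System.out.println("Supported ZoneMinder version: " + body);
            } else {
                System.out.println("Unsupported ZoneMinder version: " + body);
            }
        } catch (IOException | InterruptedException e) {
            // Server not reachable (e.g. still booting): retry later instead of
            // declaring a configuration error, as the binding does today.
            System.out.println("ZoneMinder not reachable, retrying in 60s: " + e.getMessage());
            SCHEDULER.schedule(ZmVersionRetrySketch::checkVersion, 60, TimeUnit.SECONDS);
        }
    }
}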

In the meantime, is it possible for you to force Zoneminder to start before openHAB?

Thanks for the explanation. Now it makes sense.

Unfortunately no, it’s a separate computer, which is not even on a UPS right now.
Anyway, my rule to restore the ZM binding works.
In case someone needs it, here is the code:

configuration: {}
triggers:
  - id: "1"
    configuration:
      cronExpression: 0 0/5 * * * ? *
    type: timer.GenericCronTrigger
conditions: []
actions:
  - inputs: {}
    id: "2"
    configuration:
      type: application/javascript;version=ECMAScript-2021
      script: >-
        (function () {
            let thingMgr = osgi.getService('org.openhab.core.thing.ThingManager');
            let ThingUID = Java.type('org.openhab.core.thing.ThingUID');
            let zoneMinderThing = new ThingUID('zoneminder:server:zoneminder');
            let zoneMinderThingStatus = things.getThing('zoneminder:server:zoneminder').status;

            if (zoneMinderThingStatus === 'OFFLINE') {
                thingMgr.setEnabled(zoneMinderThing, false);

                let ScriptExecution = Java.type('org.openhab.core.model.script.actions.ScriptExecution');
                let ZonedDateTime = Java.type('java.time.ZonedDateTime');

                // First timer: re-enable the thing after 10 seconds
                ScriptExecution.createTimer(ZonedDateTime.now().plusSeconds(10), function () {
                    thingMgr.setEnabled(zoneMinderThing, true);
                });

                // Second timer: do something once the thing is back up.
                // In my case, refresh HABPanel by running another rule.
                ScriptExecution.createTimer(ZonedDateTime.now().plusSeconds(15), function () {
                    let RuleManager = osgi.getService('org.openhab.core.automation.RuleManager');
                    RuleManager.runNow('your-rule-id', false, null);
                });
            }
        })();
    type: script.ScriptAction

I’ve updated my Zoneminder install (on Debian) to the latest version, 1.36.24, and my OH install (also Debian) to v3.3, and I’ve noticed that since the upgrades the items linked to the alarm channels on my modect monitors are always set to ON. I made the fatal mistake of upgrading more than one thing at a time, so I’m not sure when the issue first started.

Has anyone else noticed the same behaviour? I’m wondering if this is a problem with Zoneminder or an oddity with my install.

I’m on openHAB 3.2 and moved from Zoneminder 1.36.16 to 1.36.17.
Since then the alarm is always on and the monitor state is never idle anymore; it just shows prealarm and changes to tape when recording.

Maybe something changed in the API in v1.36.17?

I’ve tried to find the right API call to get the monitor alarm status but either it’s not documented or I can’t see it. I’ve found the API call to get the monitor details but this returns the monitor status, not the monitor alarm status.

I turned on the debug log for the Zoneminder binding and saw the following error repeated in the log:

2022-09-05 13:37:07.603 [DEBUG] [inding.zoneminder.internal.handler.ZmBridgeHandler] - Bridge: IOException on GET request, url='http://zoneminder.mydomain/zm/api/monitors.json': java.util.concurrent.ExecutionException: java.io.EOFException: HttpConnectionOverHTTP@8295b7e::SocketChannelEndPoint@6698e5f1{l=/192.168.0.12:45940,r=zoneminder.mydomain/192.168.0.23:80,ISHUT,fill=-,flush=-,to=0/0}{io=1/0,kio=1,kro=1}->HttpConnectionOverHTTP@8295b7e(l:/192.168.0.12:45940 <-> r:zoneminder.mydomain/192.168.0.23:80,closed=false)=>HttpChannelOverHTTP@468a16f(exchange=HttpExchange@4fae4434{req=HttpRequest[GET /zm/api/monitors.json HTTP/1.1]@d9e0414[TERMINATED/null] res=HttpResponse[null 0 null]@2c08df3b[PENDING/null]})[send=HttpSenderOverHTTP@6d2a600c(req=QUEUED,snd=COMPLETED,failure=null)[HttpGenerator@50180679{s=START}],recv=HttpReceiverOverHTTP@2d5b06ca(rsp=IDLE,failure=null)[HttpParser{s=CLOSED,0 of -1}]]

If I call the API mentioned in the above log (http://zoneminder.mydomain/zm/api/monitors.json) with curl, I get a huge JSON payload containing the monitor info for all the monitors.

@mhilbush what API call does the binding use to get the monitor alarm status? I’d like to check if the API is constantly returning the alarm status as active on my monitors.

http://zoneminder.mydomain/zm/api/monitors/alarm/id:M/command:status.json

In the above URL, replace M with the monitor ID.

You should get a response like this.

{
"status": "0"
}
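
If you want to watch that endpoint outside the binding, here is a quick standalone sketch in plain Java; the host name and monitor ID are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Quick, standalone way to check the alarm status of one monitor outside the
// binding. Host name and monitor ID are placeholders.
public class ZmAlarmStatusSketch {

    public static void main(String[] args) throws Exception {
        int monitorId = 1; // replace with your monitor ID
        String url = "http://zoneminder.mydomain/zm/api/monitors/alarm/id:" + monitorId + "/command:status.json";
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        String body = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString()).body();
        // Response looks like {"status":"0"}; pull the number out.
        Matcher m = Pattern.compile("\"status\":\"(\\d+)\"").matcher(body);
        System.out.println(m.find() ? "Alarm status: " + m.group(1) : "Unexpected response: " + body);
    }
}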

FWIW, I think the status numbers changed in 1.37. But you’re running 1.36, so that wouldn’t be the issue.

And, for reference, these are the states used in 1.36.x

And in 1.37.x (which will be a breaking change)

Thanks for the info!

I have 12 monitors configured. ID 1 is my Doorbird that is constantly recording. The output of the status API call for that one is:

{"status":"5","output":"5"}

The output for all the other 11 is:

{"status":"1","output":"1"}

Status 1 is the PREALARM state, which corresponds to all my monitors in openHAB having the alarm channel permanently set to ON.

Which version of Zoneminder are you running?

I’m still on 1.34.26

This might be the commit that changed the numbering in 1.36.

That looks like the culprit. Let me go and stand in front of one of my cams with my laptop running an SSH session…

Yup, that looks like the relevant commit. I wandered back and forth across one of the zones of the cam at the back door and saw the output of the alarm API call change to:

{"status":"3","output":"3"}

Then shortly after it changed to:

{"status":"4","output":"4"}

Then after a few seconds it changed back to the usual:

{"status":"1","output":"1"}

It seems a bit annoying that the commit changes the numbering of the states but I’m sure there was a good reason for it.

Would a solution be to query the Zoneminder version when initialising the binding using the API:

higgers@sauron:curl http://server/zm/api/host/getVersion.json
{"version":"1.36.24","apiversion":"2.0"}

If the version is 1.36 and the patch level is less than 17, then add one to the status number?

(I’m sure you’ve already thought of that, just trying to be helpful :slight_smile: )
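
For what it’s worth, a small standalone sketch of that idea; the “+1 below 1.36.17” rule is taken straight from the suggestion above and would need verifying against the actual ZoneMinder commit before being relied on:

// Sketch of the version-dependent status mapping suggested above. The
// condition and the +1 offset follow the post, not the ZoneMinder source.
public class ZmStatusOffsetSketch {

    public static void main(String[] args) {
        System.out.println(normalizeStatus(1, "1.36.16")); // pre-1.36.17 numbering -> 2
        System.out.println(normalizeStatus(1, "1.36.24")); // current numbering -> 1
    }

    // Map a raw alarm status onto the numbering used from 1.36.17 onwards.
    static int normalizeStatus(int rawStatus, String zmVersion) {
        String[] parts = zmVersion.split("\\.");
        int major = Integer.parseInt(parts[0]);
        int minor = Integer.parseInt(parts[1]);
        int patch = parts.length > 2 ? Integer.parseInt(parts[2]) : 0;
        boolean oldNumbering = major == 1 && minor == 36 && patch < 17;
        return oldNumbering ? rawStatus + 1 : rawStatus;
    }
}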

Yes, and in fact at binding startup I already query the version number since I need to check the minimum Zoneminder version supported by the binding (1.34.x).
