Time reading from remote OH instance not accepted by the remote openHAB binding

I recently set up an RPi3 OH instance with the Z-Wave binding and have linked Items via the remote openHAB binding to an RPi4 OH instance that I use for reporting and other bindings (camera, Sony, and another Z-Wave instance). All is fine except for DateTime Items. I have battery channels linked to both a Number (battery %) and a DateTime based on the update (when the battery % reading occurred). The raw DateTime reading of the Item is of this format, `2022-08-17T07:58:55.669228-0400`, which I modify in the state description to `Aug-17 7:58 AM`. It appears the raw Item format is being transferred to the primary OH instance and I get an error:
```
2022-08-14 16:54:04.273 [WARN ] [l.handler.RemoteopenhabBridgeHandler] - Failed to parse state "2022-08-14T16:54:04.240979-0400" for item WaterHeaterLeak11_BatteryLevel_time: Text '2022-08-14T16:54:04.240979-0400' could not be parsed at index 23
```
Based on "index 23", it is choking on the extra digits after the milliseconds (the "979" in ".240979"). For now I have kept the reporting on the remote server and am looking at a DSL rule to update another DateTime Item with a format the remote openHAB binding accepts. Any other ideas?
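For reference, the failure can be reproduced in plain Java. A formatter with a fixed three-digit fraction (`.SSS`), which is what the error message suggests the binding uses, stops at index 23 when the state carries six fractional digits. This is just an illustrative sketch, not the binding's actual code:

```java
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class ParseDemo {
    // Fixed three-digit fraction, as the binding's error message suggests
    static final DateTimeFormatter MILLIS_ONLY =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZ");

    // Returns true if the state string parses with the millisecond-only pattern
    static boolean parsesWithMillisPattern(String state) {
        try {
            MILLIS_ONLY.parse(state);
            return true;
        } catch (DateTimeParseException e) {
            // e.g. "Text '...' could not be parsed at index 23":
            // index 23 is the first digit after ".240", where this
            // formatter already expects the zone offset to begin
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(parsesWithMillisPattern("2022-08-14T16:54:04.240-0400"));    // true
        System.out.println(parsesWithMillisPattern("2022-08-14T16:54:04.240979-0400")); // false
    }
}
```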

Bob

DateTimes are hard. This is probably a case where an issue needs to be filed. I’m not entirely certain where the problem would best be fixed though.

The problem is that the code parsing the DateTime String is not expecting nanoseconds to be part of the String. Ideally, I would expect DateTimeType's toString() not to print a String that it can't also parse.

Verify whether, when you trigger a rule and log out the DateTime state's toString(), it includes those extra fractional digits. If it does not from a rule, the problem might be in the binding. Otherwise it's in the core.

To work around this you’ll need to adjust the DateTime in a rule and zero out the nanoseconds before updating a different Item which gets shared over the remote OH binding.
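The zeroing-out step maps onto `java.time`'s `truncatedTo`. A minimal sketch in plain Java (in an actual DSL rule you would do the equivalent on the state's `ZonedDateTime` before updating the shared Item):

```java
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;

public class TruncateDemo {
    // Cut the fraction down to milliseconds so the serialized state
    // has at most three digits after the seconds
    static String truncateToMillis(String isoState) {
        ZonedDateTime raw = ZonedDateTime.parse(isoState);
        return raw.truncatedTo(ChronoUnit.MILLIS).toString();
    }

    public static void main(String[] args) {
        // "2022-08-14T16:54:04.240979-04:00" -> "2022-08-14T16:54:04.240-04:00"
        System.out.println(truncateToMillis("2022-08-14T16:54:04.240979-04:00"));
    }
}
```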

That confirms a couple of things and gives me a couple more to check.

Agreed, that is what I thought too. Edit: although it is strange that within the remote OH instance itself, there is no problem dealing with the extra digits.

I used the battery channel in the UI, setting the DateTime with the "Timestamp on Update" profile. That must produce the raw toString() somehow behind the scenes. I don't think the binding is involved, as it is all configured in the UI.

I will check to see what happens here. Maybe the rule will produce a different result and I will not have to create an extra Item for sharing.

This was where I was heading, but I thought I'd post here first for more perspective.

Thanks !

Bob

Could the problem be the "Timestamp on Update" profile generating the extra fractional digits?
I know that the remote openHAB binding generally works well with DateTimeType, as one of my common tests during development was to try with a DateTimeType updated by the remote NTP binding.

Please confirm that the problem occurs only when using the "Timestamp on Update" profile.

Maybe I could be less strict in the remote openHAB binding and also accept nanoseconds.

First, this is a good binding, thanks. Second, I think these are microseconds (the unit between milli and nano).

After @rlkoshak's suggestion to try a rule:

> rule "Date Time to rpi4 from rpi3"
> when
>     Item BasementDoorMotion75_Sensortemperature changed
> then
>     val DateTimeType date = new DateTimeType(now)
>     Temperature_DateTime_from_remote.sendCommand(date)
> end

I still get the longer version, so it is not just Timestamp on Update.

My thought last night was that the DateTime changes in OH 3.0 (the move away from Joda-Time?) happened at about the same time as the remote openHAB binding, so maybe that affected the testing. Also, this part of the documentation doesn't seem right anymore.

I think this will fix the issue.

As to the workaround, I realized (again while sleeping) that I can just add a Timestamp on Update profile to the remote openHAB channel in the RPi4 instance with the Item I want to track, avoiding doing anything on the remote server. I was going to try this today. EDIT: This did not work :frowning_face: The Timestamp on Update gets triggered when the remote server goes off and on. I'll have to create an Item on the remote or wait for a fix.

Bob

I am asking myself whether there was a recent change in the core framework in the way DateTimeType is formatted.

Looking at the code of DateTimeType, I understand what's wrong. A fix is required in the remote openHAB binding, as you can have 3, 6, or 9 digits after the seconds.


Was this something on your TODO list? I took a look, but do not have enough knowledge at this point to attempt a PR. I think either more format options need to be defined here

    private static final String DATE_FORMAT_PATTERN = "yyyy-MM-dd'T'HH:mm:ss.SSSZ";
    private static final DateTimeFormatter FORMATTER_DATE = DateTimeFormatter.ofPattern(DATE_FORMAT_PATTERN);

or something like

private static final String DATE_FORMAT_PATTERN = "yyyy-MM-dd'T'HH:mm:ss.nnnnnnnnnZ";

but I'm not sure how that will work if not all the places after the period are present. It is not a real high priority, but after a couple of weeks I was just wondering.
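For what it's worth, `java.time` can express a variable-width fraction without listing multiple patterns: `DateTimeFormatterBuilder.appendFraction` takes a minimum and maximum digit count. A sketch of that idea, accepting anywhere from 0 to 9 fractional digits (this is an assumption about how a fix could look, not the binding's actual code):

```java
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.temporal.ChronoField;

public class FlexibleParseDemo {
    // Accept 0 to 9 fractional digits (the decimal point is optional
    // when the fraction is absent), followed by a zone offset like -0400
    static final DateTimeFormatter FLEXIBLE = new DateTimeFormatterBuilder()
            .appendPattern("yyyy-MM-dd'T'HH:mm:ss")
            .appendFraction(ChronoField.NANO_OF_SECOND, 0, 9, true)
            .appendPattern("Z")
            .toFormatter();

    static ZonedDateTime parseFlexible(String state) {
        return ZonedDateTime.parse(state, FLEXIBLE);
    }

    public static void main(String[] args) {
        // All of these parse with the same formatter
        System.out.println(parseFlexible("2022-08-14T16:54:04-0400"));
        System.out.println(parseFlexible("2022-08-14T16:54:04.240-0400"));
        System.out.println(parseFlexible("2022-08-14T16:54:04.240979-0400"));
    }
}
```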

Bob

Yes, but I was on vacation.
Maybe something I could look at next weekend.


Can you please try the new jar I included in the following PR (rename the .zip to .jar first)?

After testing on my side, it does not work anymore!

Edit: I fixed my error. It is now working for me. I uploaded a new jar for testing in the PR.

It appears to work. I did get a brief "offline error during connection" or something like that, but it came back, with nothing in the log at the time. Also no parse errors in the log, so I will mark this as the solution. I'll let it run for a while. (My DateTime updates happen only once a day; the link just picked up the most recent value.)

Thanks again

Bob

PS: I noticed the Refresh PR. I had that issue too, but moved the refresh rule back to the remote server.
PPS: The only other quirk is that I count the Z-Wave watt reports. I had to move those back to the remote instance because the remote binding seems to update the Item periodically, probably due to short communication gaps like the one above.

Fix has been merged.
