openHAB 4.0 SNAPSHOT discussion

That’s a good summary, and yes, JS is back and as long as you don’t use any language features that are available in Nashorn but not in Graal (I must admit, I’m not even sure if such a feature exists), all profiles and transformations will work.

2 Likes

Given the nature of transformations and the lack of openHAB injecting a bunch of stuff into them like it does with rules, I’m comfortable saying that any JS transform written for Nashorn will work with GraalVM unchanged.
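To make that concrete, here is a minimal sketch of the kind of stateless transform being discussed (my own illustration, not from the thread): it uses only ES5 syntax and the `input` global that openHAB injects into script transformations, so it behaves the same under Nashorn and GraalJS.

```javascript
// ES5-only transform: parse the incoming state and round it to one decimal.
// openHAB supplies the raw state as the global `input`; the script's last
// expression becomes the transform result.
function roundState(input) {
  var n = parseFloat(String(input).trim());
  return isNaN(n) ? input : n.toFixed(1); // pass NULL/UNDEF through unchanged
}
// Guard so the snippet also runs standalone, outside openHAB:
typeof input !== "undefined" ? roundState(input) : null;
```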

2 Likes

Try updating to the latest snapshot. It isn’t called by the new ScriptTransformationService, but I’ve also submitted a PR to avoid this condition.

Ideally we should be able to just say MAP(waveplus) or MAP(config:waveplus) (to differentiate from file resource) but that would just create confusion I guess.

I am currently working on adjusting the transformation editor to the recent core changes (SCRIPT transformation), and I took the chance to add a note to the transformation editor that says:

Tip: Use JINJA(config:jinja:jinja) for Item state transformations. It also provides a clipboard icon to directly copy JINJA(…) to the clipboard.

1 Like

Running #3416 and trying to install InfluxDB persistence from the UI, I’m getting:

==> /var/log/openhab/openhab.log <==
2023-05-08 21:22:00.579 [ERROR] [core.karaf.internal.FeatureInstaller] - Failed installing 'openhab-persistence-influxdb': Unable to resolve root: missing requirement [root] osgi.identity; osgi.identity=openhab-binding-hue; type=karaf.feature; version="[4.0.0.SNAPSHOT,4.0.0.SNAPSHOT]"; filter:="(&(osgi.identity=openhab-binding-hue)(type=karaf.feature)(version>=4.0.0.SNAPSHOT)(version<=4.0.0.SNAPSHOT))" [caused by: Unable to resolve openhab-binding-hue/4.0.0.SNAPSHOT: missing requirement [openhab-binding-hue/4.0.0.SNAPSHOT] osgi.identity; osgi.identity=org.openhab.binding.hue; type=osgi.bundle; version="[4.0.0.202305081105,4.0.0.202305081105]"; resolution:=mandatory [caused by: Unable to resolve org.openhab.binding.hue/4.0.0.202305081105: missing requirement [org.openhab.binding.hue/4.0.0.202305081105] osgi.wiring.package; filter:="(osgi.wiring.package=org.openhab.core.config.discovery.upnp)"]]
2023-05-08 21:22:02.437 [ERROR] [core.karaf.internal.FeatureInstaller] - Failed to refresh bundles after processing config update
org.apache.felix.resolver.reason.ReasonException: Unable to resolve root: missing requirement [root] osgi.identity; osgi.identity=openhab-binding-hue; type=karaf.feature; version="[4.0.0.SNAPSHOT,4.0.0.SNAPSHOT]"; filter:="(&(osgi.identity=openhab-binding-hue)(type=karaf.feature)(version>=4.0.0.SNAPSHOT)(version<=4.0.0.SNAPSHOT))" [caused by: Unable to resolve openhab-binding-hue/4.0.0.SNAPSHOT: missing requirement [openhab-binding-hue/4.0.0.SNAPSHOT] osgi.identity; osgi.identity=org.openhab.binding.hue; type=osgi.bundle; version="[4.0.0.202305081105,4.0.0.202305081105]"; resolution:=mandatory [caused by: Unable to resolve org.openhab.binding.hue/4.0.0.202305081105: missing requirement [org.openhab.binding.hue/4.0.0.202305081105] osgi.wiring.package; filter:="(osgi.wiring.package=org.openhab.core.config.discovery.upnp)"]]
at org.apache.felix.resolver.Candidates$MissingRequirementError.toException(Candidates.java:1341) ~[org.eclipse.osgi-3.18.0.jar:?]
at org.apache.felix.resolver.ResolverImpl.doResolve(ResolverImpl.java:433) ~[org.eclipse.osgi-3.18.0.jar:?]
at org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:420) ~[org.eclipse.osgi-3.18.0.jar:?]
at org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:374) ~[org.eclipse.osgi-3.18.0.jar:?]
at org.apache.karaf.features.internal.region.SubsystemResolver.resolve(SubsystemResolver.java:256) ~[?:?]
at org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:399) ~[?:?]
at org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1069) ~[?:?]
at org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$13(FeaturesServiceImpl.java:1004) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
at java.lang.Thread.run(Thread.java:833) ~[?:?]
Caused by: org.apache.felix.resolver.reason.ReasonException: Unable to resolve openhab-binding-hue/4.0.0.SNAPSHOT: missing requirement [openhab-binding-hue/4.0.0.SNAPSHOT] osgi.identity; osgi.identity=org.openhab.binding.hue; type=osgi.bundle; version="[4.0.0.202305081105,4.0.0.202305081105]"; resolution:=mandatory [caused by: Unable to resolve org.openhab.binding.hue/4.0.0.202305081105: missing requirement [org.openhab.binding.hue/4.0.0.202305081105] osgi.wiring.package; filter:="(osgi.wiring.package=org.openhab.core.config.discovery.upnp)"]
at org.apache.felix.resolver.Candidates$MissingRequirementError.toException(Candidates.java:1341) ~[org.eclipse.osgi-3.18.0.jar:?]
… 12 more
Caused by: org.apache.felix.resolver.reason.ReasonException: Unable to resolve org.openhab.binding.hue/4.0.0.202305081105: missing requirement [org.openhab.binding.hue/4.0.0.202305081105] osgi.wiring.package; filter:="(osgi.wiring.package=org.openhab.core.config.discovery.upnp)"
at org.apache.felix.resolver.Candidates$MissingRequirementError.toException(Candidates.java:1341) ~[org.eclipse.osgi-3.18.0.jar:?]
at org.apache.felix.resolver.Candidates$MissingRequirementError.toException(Candidates.java:1341) ~[org.eclipse.osgi-3.18.0.jar:?]
… 12 more
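Not part of the thread, but for anyone hitting a resolver failure like the above after a snapshot upgrade: a commonly suggested recovery is to stop openHAB and clear the cache so all features are re-resolved on the next start (this assumes a package-based Linux install with systemd):

```
sudo systemctl stop openhab
sudo openhab-cli clean-cache
sudo systemctl start openhab
```

Note that the first startup after a cache clean takes noticeably longer while all bundles are reinstalled.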

I’m up and running on #3455 and I’m starting to periodically see a new warning from rrd4j:

2023-05-11 09:41:31.054 [WARN ] [d4j.internal.RRD4jPersistenceService] - Could not persist 'Dads_Motion_Timeout' to rrd4j database: Bad sample time: 1683819690. Last update time was 1683819690, at least one second step is require

Now, this could be related to a time problem. When I rebuilt my VM recently OH was running for a little bit using the wrong timezone. I’ve since fixed that problem but there was likely some data stored with a bad timestamp because of that. I only mention that in case it’s relevant.

However, the Item it’s complaining about above was never saved when the timezone was bad. So this warning is not caused by the timezone issue.

Also, what’s weird is that the sample time and last update time are the same.

The warning doesn’t occur every time the Item changes or every time the state is persisted. I can’t quite find the pattern. I can say that the warning above occurred the very first time the Item received a non-NULL state but not anytime after that (so far).

But I’ve updated other Items from NULL to another state and not received the warning.

This might be related to the latest refactoring to prevent performance problems in persistence. It would be great if you could come up with a way to reproduce it (a good chance, like 1 out of 10, would be sufficient).

I can confirm I’ve seen the same thing appearing from time to time for a few days now. As stated by @J-N-K, I suspected it was related to the persistence refactoring that took place last week.

Yes, working on something. Maybe if I can set up a loop to spam a test Item with changes I can force it.

I’m now on #3459 which seems to now have the new units and I’m fighting some weirdness there. Apparently Bq/m³ isn’t supported anymore (I’d rather use pCi/L anyway but that’s never been supported)?

It also doesn’t like my date time format string pattern for Number:Time Items any more.

2023-05-12 08:08:37.892 [ERROR] [rg.openhab.core.types.util.UnitUtils] - Unknown unit from pattern: %1$tH:%1$tM:%1$tS

If possible it would be nice if that error from UnitUtils reported the Item name too. I’ve only got one Item that does this, so I know which Item it is, but others might not be so lucky. That said, you can now use the developer sidebar to search in metadata, so searching for "%1$tH:%1$tM:%1$tS" will find the Items with that pattern in their metadata.

I’m also seeing errors for some units that shouldn’t be a problem, like:

2023-05-12 08:15:47.898 [ERROR] [rg.openhab.core.types.util.UnitUtils] - Unknown unit from pattern: ॰F

I’ve not tried the upgrade cli tool yet though. I did add Bq/m³ as a new unit metadata just to see what would happen and it still doesn’t like it.

More to follow once I run the upgrade tool to see what happens.

What dimension do you use for Bq/m3? I don’t think we have something that fits "specific activity". In that case you can use a plain Number Item and put the unit in the State Description (but you lose UoM support).

If that is a common use-case, we can add a dimension for that.

Regarding the Fahrenheit error: it seems that the character before the F is not correct. Can you check that?

Crikey! Are you making a nuclear weapon? on OH?? :slight_smile:

Bq/m³

It was never listed as being officially supported but I tried it once just to see if it would work given units like W/m² exist in the docs. Surprisingly it seemed to work!

Note, the way I’ve implemented it was:

  1. Added Bq/m³ as the unit in the MQTT Channel Config:
  - id: radon_st
    channelTypeUID: mqtt:number
    label: Short Term Radon Average
    description: ""
    configuration:
      stateTopic: waveplus_bridge/basement/radon_st
      unit: Bq/m³
  2. Set the State Description Pattern to %.0f Bq/m³.

I tried to add a transform to convert it to pCi/L but found that unit isn’t supported at all. I essentially did it the same way as described above, but Bq is the only supported unit for Radioactivity, and it wasn’t important enough to me to file an issue. It’s just kind of annoying that the AirThings Wave+ shows pCi/L in the app but for some reason actually reports Bq/m³ over BTLE.

I’ll definitely find a workaround, even if it’s to just revert to a unitless Number and just use the State Description (at which point I’ll add back in that transform).
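If it helps, that transform could look something like this sketch (the file name and rounding are my assumptions; the conversion factor 1 pCi/L = 37 Bq/m³ is standard):

```javascript
// Hypothetical transform (e.g. config/transform/bqToPci.js): Bq/m³ → pCi/L.
// 1 pCi/L = 37 Bq/m³, so divide the incoming value by 37.
function bqToPci(input) {
  var bq = parseFloat(input);
  if (isNaN(bq)) return input; // pass NULL/UNDEF through unchanged
  return (bq / 37).toFixed(2) + " pCi/L";
}
// openHAB supplies the state as the global `input`; guard for standalone runs:
typeof input !== "undefined" ? bqToPci(input) : null;
```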

But I might not be the only person using a slightly off-the-books unit that used to work but may no longer.

I live over granite. As granite decomposes it releases radon, which can cause lung cancer if the levels are high enough for long enough (before mitigation we were seeing levels around 4 pCi/L, roughly 100 Bq/m³). I have radon detectors to help keep track of the level and ensure that our mitigation is working (it’s basically a fan that sucks the air from under the foundation and vents it above the roof), and OH alerts me if there’s a problem.

3 Likes

A quick question about the upgrade tool. Does OH need to be stopped before running it or can it work on a running OH instance? I’ll add a note to the docs either way.

More fun with units and rrd4j:

Bq/m³

For the radioactivity Items I stripped out the units and am just using a plain number for now and things seem to work just fine. I still see my Bq/m³ in the UI. I have a rule to update but that’s not a big deal.
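For reference, the fallback described above looks roughly like this in .items syntax (the Item name and icon are illustrative):

```
Number Radon_ST "Short Term Radon Average [%.0f Bq/m³]" <gas>
```

With a plain Number the unit in the label is just text, so it displays fine but there is no UoM conversion.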

Number:Time State Description

For the Number:Time Item, though, I’m seeing unexpected errors from UnitUtils.

Previously I was seeing:

2023-05-12 08:08:37.892 [ERROR] [rg.openhab.core.types.util.UnitUtils] - Unknown unit from pattern: %1$tH:%1$tM:%1$tS

Thinking the problem was that this type of formatting was no longer supported, I decided to do some experiments. I created a JS transform to convert the seconds to the above format and set the unit metadata to s.

The error persists.

2023-05-12 10:51:54.348 [ERROR] [rg.openhab.core.types.util.UnitUtils] - Unknown unit from pattern: JS(config:js:secsToStr):%s

Note, I didn’t check before, but now I suspect the original formatting was working (confirmed: I don’t need the JS transform to see HH:MM:SS and can use the date time formatting after all) and the problem is that UnitUtils doesn’t know how to handle a State Description pattern that is more than just the unit.

But what’s odd is shouldn’t it be using the unit metadata since it’s present? Why does it care what the State Description is when the unit is defined?

rrd4j

I still need to create a rule to see if I can reproduce it, but if I’m interpreting the sample time correctly, the error occurs when the sample time is less than one second after the last time a record was saved in rrd4j. However, everything I’ve tried to force it to happen has failed. :frowning:

  1. created a Number Item TestNumber
  2. rrd4j default persistence strategy so this Item should get saved. Verified that TestNumber.rrd exists
  3. update the Item
    a. 10 times as fast as possible
    b. 100 times as fast as possible
    c. 1000 times as fast as possible
    d. 10 times with one second between updates
    e. 10 times with half a second between updates
    f. 10 times with 100 msec between updates
    g. 10 times with 1100 msec between updates
    h. 10 times with 999 msec between updates (Bingo, got it to happen once)

Running this script causes the error to occur, at least on my system, about one in every five executions of the rule:

var Thread = Java.type('java.lang.Thread');
// Post ten updates spaced just under rrd4j's one-second minimum step
for (let i = 0; i < 10; i++) {
  items['TestNumber'].postUpdate(i);
  Thread.sleep(999); // 999 ms between updates triggers the warning sporadically
}

I think I found your Bq/m³ issue. This is a bug that was introduced in [units] Added Bq/m³ and ppb units by paulianttila · Pull Request #1368 · openhab/openhab-core · GitHub when Bq/m³ was added as a supported unit. Bq/m³ is not compatible with kg/m³, and therefore the "compatibility check" fails if you try to set a Bq/m³ state on an Item with the dimension Density.

See: Add dimension RadiationSpecificActivity by J-N-K · Pull Request #3608 · openhab/openhab-core · GitHub

1 Like

I totally use pCi with my Airthings Wave+! I have mine coming in via MQTT though. I add the units via a Ruby script:

# Shortcut to the tech.* Java package (Indriya units implementation)
def self.tech
  Java::Tech
end

# Define pCi as 37/1000 Bq (1 pCi = 0.037 Bq) and register its "pCi" label
PCI = tech.units.indriya.unit.TransformedUnit.new(
  "pCi",
  tech.units.indriya.unit.Units::BECQUEREL,
  tech.units.indriya.function.MultiplyConverter.ofRational(37, 1000)
)
tech.units.indriya.format.SimpleUnitFormat.instance.label(PCI, PCI.symbol)

And my item:

Number:Radioactivity Bunk_Radon "Bunk Room Radon [%.2f pCi]" <gas> (eBunkAirthings, gInflux, gMapDB) ["Measurement"] { channel="mqtt:homie300:mosquitto:airthings:2930082615#radon_2Dshort_2Dterm_2Davg" }

This has worked from openHAB 3.4 through 4.0.0.M2 (I’m not on the latest snapshots yet to test with the new units stuff). The binding posts in becquerels, and OH converts it internally to pCi. I don’t remember the units being per cubic meter though.

I have added "Ci", "mCi", "µCi" and "nCi". Will add "pCi" as well. "pCi/l" should also work (at least "nCi/l" does; there is a test for that).

1 Like

Me too. I use GitHub - Drolla/WavePlus_Bridge: Airthings Wave Plus Bridge to Wifi/LAN and MQTT. I was trying to use pCi/L at the binding level, though, not adding it through a rule. Maybe that’s the difference.

Is the fix for RRD persistence merged yet?

From the above, I still can’t install or uninstall persistence services from the UI.
Can someone give me a hint about how to manually uninstall rrd4j from Karaf or something?
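Not confirmed in this thread, but the standard Karaf way (an assumption based on general openHAB/Karaf usage, not something anyone here verified) is to open the console with `openhab-cli console` and uninstall the feature directly:

```
openhab> feature:list | grep rrd4j
openhab> feature:uninstall openhab-persistence-rrd4j
```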

OK, now on #3460. We’ll see how it goes. Oddly, InfluxDB is now already installed and rrd4j is not.