OK, thanks.
And what you assumed was right. I tried the UI-based approach and it didn’t show any significant delay compared to Python. So it must have something to do with file-based rules in general or with the JSRule approach.
So every approach except the JSRule one is fast; the delay only occurs for the JSRule rule. JSRule is so slow that the status has already changed to ACTIVE in the meantime.
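By “the JSRule approach” I mean file-based rules defined through the openhab-js rules API, roughly like this (the item name and rule details are just illustrative):

JavaScript
// File-based rule via the openhab-js rules API ("rules" and "triggers" are
// provided by the injected openhab-js library, or via require('openhab'))
rules.JSRule({
  name: 'PV forecast example',
  description: 'Illustrative JSRule definition',
  triggers: [triggers.ItemStateChangeTrigger('PV_Power')],
  execute: (event) => {
    console.info('PV_Power changed to ' + event.newState);
  }
});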
Nevertheless, I have started porting very simple Jython rules to JS with the rule builder. So far there is nothing I am missing. I will have to dig more deeply into how to use libraries and so on.
I believe there is a section in the docs. The tl;dr is that you’ll have to create an npm module (it’s not hard) and put it as a folder under $OH_CONF/automation/js/node_modules. Then you use require to bring it into your rule.
Third-party libraries can simply be installed with npm.
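A minimal sketch of such a personal library, assuming a hypothetical module called myLib (the folder name, the exported function and the rule code are just illustrative):

JavaScript
// $OH_CONF/automation/js/node_modules/myLib/index.js  (hypothetical module)
// A package.json next to it only needs something like: { "name": "myLib", "main": "index.js" }
exports.kwhToWh = function (kwh) {
  // trivial helper just to show the export mechanism
  return kwh * 1000;
};

// In a rule script you then pull it in with require:
var myLib = require('myLib');
console.info(myLib.kwhToWh(3.2)); // logs 3200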
I seem to have some serious performance issues. Maybe I need help to check what is wrong with my system.
I am in the process of porting my rules from Python to JS. It works fine so far, but most of the rules are not performance-critical.
But some of them really are. I have now ported one script that reads PV forecast data (just 72 lines via HTTP), calculates some values and writes them into the InfluxDB database.
The Python version took about 5s (most of that being response times from the PV API and InfluxDB); the JS version took more than 30s. I then simplified a little and compared these two scripts:
Python
from java.time import ZonedDateTime

test_PV_forecast.log.info("Start Python Rule PV-Vorhersage")
# repeat a single epoch-millis conversion 5000 times as a penalty test
for fieldNo in range(0, 5000):
    milliDate = ZonedDateTime.now().toInstant().toEpochMilli()
test_PV_forecast.log.info("Completed penalty code.")
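The JS counterpart I compared against looked essentially like this (a sketch; the exact logger calls in my real script differ slightly):

JavaScript
// Same penalty test in JS Scripting (GraalVM JS), using Java interop like the Jython version
var ZonedDateTime = Java.type('java.time.ZonedDateTime');

console.info('Start JS rule PV-Vorhersage');
for (var fieldNo = 0; fieldNo < 5000; fieldNo++) {
  var milliDate = ZonedDateTime.now().toInstant().toEpochMilli();
}
console.info('Completed penalty code.');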
It’s not clear what you are doing with that epoch, but I wouldn’t be surprised if keeping it as a ZonedDateTime were more performant. It does seem to shave about 200 ms from my runs, which should shave around 8 seconds from your runtime.
I just picked a line of code at random and repeated it 5000 times. In my real code, I use that line to write to InfluxDB (but only 144 times). Certainly there is a better way to do that, but that is not my question. I can live with a few hundred ms. The difference between the Python and JS scripts is my problem, and obviously also the difference between your system and mine. Something seems to be wrong with mine.
I have one solar EV charging optimization algorithm that takes about 90s to 180s in Python. I cannot afford a runtime that is 5-10 times longer.
I couldn’t say. My machine isn’t an RPi, but it’s nothing special either. I wouldn’t expect such long runtimes for rules on anything but a really underpowered machine like an RPi 2B or RPi Zero W. I’ve not seen reports of exceptionally long runtimes from any other users on this forum, outside of the initial run of a rule. And even for that there is a PR open (I think it hasn’t been merged yet) to fix a bug in the caching, which will improve that.
I wouldn’t begin to guess where to look for why these rules are taking so much longer for you. You can try jRuby or Groovy and see if they are more performant.
JRuby takes 300 ms for the same code example. Something is wrong with my JS Scripting setup; I would be happy to somehow find the root cause, but I have no clue where to look. Does it make sense to increase the log level?
Possibly, though I’m not sure what that would show. The problem isn’t waiting for the rule to start; it’s after the rule is already running. So there wouldn’t be anything from OH or the add-on being logged at that point. Those parts of OH will simply be waiting for your rule to complete.
It might just be that, on an RPi at least, heavily looping code like that isn’t performant in GraalVM JS. That is kind of an edge case for OH rules, so it’s not surprising the problem hasn’t been seen before.
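If you want to narrow it down, it could help to time just the loop inside the rule and to run the rule a second time, so that first-run compilation/caching overhead can be separated from steady-state speed. A minimal sketch:

JavaScript
// Measure only the loop body, excluding rule start-up and logging overhead
var ZonedDateTime = Java.type('java.time.ZonedDateTime');

var start = Date.now();
for (var i = 0; i < 5000; i++) {
  ZonedDateTime.now().toInstant().toEpochMilli();
}
console.info('5000 toEpochMilli() calls took ' + (Date.now() - start) + ' ms');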
@florian-h05 Any idea from your side about this performance issue? Maybe it is connected with the strange finding I reported as a bug, where a JS rule and the rule builder differed greatly in performance. Maybe it’s just my system that is somehow weird; I would really appreciate some hints on how to track this down.
Why should JavaScript alone be orders of magnitude slower?
Any suggestions for other code I could test for performance that is not heavily looping?
BTW: as far as I know, the InfluxDB HTTP API I am using to write future PV forecast values into the database (yes, you can store timestamps in the future there) only accepts epoch seconds, milliseconds or nanoseconds. Of course I could use something else to store the next 72h of forecast, but writing it into the persistence database is convenient, as it remains persisted so I can evaluate afterwards how good the forecast really was.
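For reference, the write itself looks roughly like this (a sketch, assuming an InfluxDB 1.x /write endpoint; the host, database and measurement names are placeholders):

JavaScript
// Write one future PV forecast value via the InfluxDB HTTP API using epoch milliseconds
var ZonedDateTime = Java.type('java.time.ZonedDateTime');

var ts = ZonedDateTime.now().plusHours(1).toInstant().toEpochMilli(); // a timestamp in the future
var line = 'pv_forecast,source=api power_w=1234.5 ' + ts;             // InfluxDB line protocol

// precision=ms tells InfluxDB the timestamp is in epoch milliseconds
var url = 'http://influxdb:8086/write?db=openhab&precision=ms';
actions.HTTP.sendHttpPostRequest(url, 'text/plain; charset=utf-8', line, 5000);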
The other languages (Python, JRuby) could do thousands of toEpochMilli conversions in well under a second… I think I rather have a general JS Scripting performance problem.