Rules and rule templates YAML integration

I think we’re talking slightly past each other. I was imagining that if a rule was based on a template, you would have two “export” buttons (or a sub-button choice), where you could either export the “stub” (template reference and configuration only) or export the “de-templated” version with actions, triggers and conditions, but without the template reference and configuration.

I’m not sure if what you’re saying is that the last option isn’t relevant, or if you just didn’t understand what I meant. I never meant that a version should be exported that was both tied to a template, and had actions, triggers and conditions that would be overwritten by the template.

Yes, because I understood your either/or sentence to mean only one approach would be possible. I have no problem with supporting both options.


Indeed. https://github.com/openhab/openhab-core/pull/5401 proposes to make the methods of org.openhab.model.script.scoping.StateAndCommandProvider static.

There’s no point in deleting these methods just because they aren’t in use; how many bytes does that save in the JAR?

Deleting unused methods makes it easier to get the “complete overview”.

Sometimes perhaps, but what keeps me from getting the overview is rather the lack of documentation that explains the structure and all the “levels” involved. If you try to debug the parsing of scripts/Xbase, it’s not easy to keep hold of “the red thread” (the overall flow) while things are called in a deep call stack.

So, to me, the reason things become difficult to follow is rather that things become “too vertical” (too many interfaces and implementations that “work together” to somehow achieve a goal) while this structure isn’t explained. Deleting methods won’t change that :wink:

@Lolodomo I have a situation that I’m not quite sure how to handle. RuleSerializer.setRulesToBeSerialized() can encounter rules that can’t be serialized, for various reasons (the rule has shared context, it doesn’t conform to the limitations of DSL if it’s the DSL serializer, etc.).

I’ve looked at Things and Items, and there’s basically no way to handle a “failure to serialize” there. I guess all instances can be serialized, unlike with rules.

The question is, if there are multiple rules “submitted”, and just one or some that don’t qualify, do I “reject” the whole “batch”, or only those that don’t qualify? And, how do I communicate back the details of why they can’t be serialized?

I can just make the method throw an Exception, but that won’t enable very detailed feedback, and certainly not “per rule” feedback.
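One alternative to throwing is returning a per-rule result object, so each rule gets its own verdict and reason. A minimal sketch of such a type (names here are illustrative; the actual type and fields in openhab-core may differ):

```java
// Minimal sketch of a per-rule validation result, as an alternative to a
// single thrown Exception. All names are illustrative, not the real API.
public class SerializationResultDemo {

    record SerializationValidationResult(boolean ok, String failureReason) {
        static SerializationValidationResult success() {
            return new SerializationValidationResult(true, "");
        }

        static SerializationValidationResult failure(String reason) {
            return new SerializationValidationResult(false, reason);
        }
    }

    public static void main(String[] args) {
        var ok = SerializationValidationResult.success();
        var bad = SerializationValidationResult.failure("Rule 'x': has tags");
        System.out.println(ok.ok());
        System.out.println(bad.failureReason());
    }
}
```

The caller would get back one result per submitted rule, in order, and could decide per rule (or for the whole batch) how to react.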

There’s also the relationship with generateFormat(): once you call setRulesToBeSerialized(), the models are created, and they won’t be freed until you call generateFormat(). In a way, it’s like a try-with-resources, where the final operation releases any allocated resources. But it must then be clear: in case of a failure to add rules for serialization, do you still have to call generateFormat()?

In a way, it would be better if these interfaces extended AutoCloseable, and that close() was what actually released the resources. That would make it work with try-with-resources, but it can’t actually be done, since, as it is now, the caller must keep track of the modelName so that the converter knows “what to release”.
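To illustrate the idea, here is a hypothetical sketch of what such a session-style serializer could look like if it tracked its own models instead of requiring the caller to track a modelName. Everything here (SerializerSession, addRule(), generateFormat()) is an assumption for illustration, not the actual openhab-core API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: if each serialization "session" owned its models, it
// could implement AutoCloseable and release them in close(), making
// try-with-resources work. Names are illustrative, not the real API.
public class SessionDemo {

    static class SerializerSession implements AutoCloseable {
        private final List<String> models = new ArrayList<>();
        private boolean closed;

        void addRule(String ruleSource) {
            models.add(ruleSource); // stand-in for building a model per rule
        }

        String generateFormat() {
            return String.join("\n", models);
        }

        @Override
        public void close() {
            models.clear(); // release the "allocated" models
            closed = true;
        }

        boolean isClosed() {
            return closed;
        }
    }

    public static void main(String[] args) {
        SerializerSession escaped;
        String output;
        try (SerializerSession session = new SerializerSession()) {
            escaped = session;
            session.addRule("rule A");
            session.addRule("rule B");
            output = session.generateFormat();
        } // close() runs here, even if addRule() had thrown
        System.out.println(output);
        System.out.println(escaped.isClosed());
    }
}
```

With this shape, the “do I still have to call generateFormat() after a failure?” question disappears: close() always runs, whether or not the output was ever generated.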

All in all, I’m not sure I understand why serialization is split into a multi-step process; it does increase the chance of resource leaks somewhat. But, regardless of this, I need to “inject” error handling/feedback into all this somehow.

Since a few hours ago, Rule.getScript() returns XExpression, not XBlockExpression.

Where? In a local test you’ve made?

Also, I was imprecise. It was actually setScript(XExpression) that was my “main point” and what I tried to achieve, but you might have done that too?

In upstream — https://github.com/openhab/openhab-core/commits/main/ — the argument of Rule.setScript() is XExpression.


Obviously I can offer no help in terms of technically how to implement any of this.

But from an end user’s perspective, the ideal behavior would be to reject only those rules that cannot be serialized and accept the ones that can. For each rejected rule I would expect at least a WARN level log entry briefly explaining which rule was rejected and why, one entry per rejected rule. Ideally, this information would also be shown in a toast, pop-up or some other similar UI element in MainUI.

Would it make sense to handle each rule individually even when submitted as a batch? Then all that management stuff and exceptions and such becomes a little easier, since there would only be one rule being serialized at a time.

The thing is that I’m not quite sure when this “batch functionality” is used. In the UI when you switch between “Code” and other tabs, it will just be one rule. When you export, you might be able to select multiple rules and then export..?

But, these rules will all be exported into one document - as multiple rule entries. That’s why I’m wondering if it’s best to reject the whole batch. Because, otherwise, some rules will be missing, and you’d have to have the user notice in the log that some were dropped.

When it comes to having UI pop-ups informing you, that’s also tricky, because this all goes through the REST API. There are “limited options for feedback” - because you get an HTTP request, and you have to do all your “informing” in the response. For this kind of request, the response is the resulting document (containing the rules). There’s nowhere to put any warnings or other kind of feedback there - which leaves you with the HTTP status you return. But, you have to return 200 OK if the resulting document is to be used, otherwise the web browser will see the response as an error - so there’s nothing much you can do there either.

It would of course be possible to craft a “special JSON document” that contains both the resulting rules document and some detailed error feedback, but this would then require specialized parsing on the other side, where these things would need to be separated again and treated individually. Given that the REST API isn’t meant to only serve the UI, but to be used by other integrations, or by users manually, this is a really “unfriendly” way to do it - not to mention non-standard.
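For concreteness, such a combined response might look something like the fragment below. This is purely a hypothetical envelope to illustrate the rejected approach, not an existing or proposed openHAB API:

```json
{
  "document": "rule \"My rule\"\nwhen\n    Item MyItem changed\nthen\n    logInfo(\"test\", \"changed\")\nend",
  "warnings": [
    {
      "ruleUID": "test:full-rule",
      "reason": "has tags, has conditions"
    }
  ]
}
```

Any consumer (UI, script, curl user) would then have to unwrap the real document from this envelope instead of just saving the response body, which is exactly the “unfriendly, non-standard” part.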

Again, that’s not really possible with a REST API. You get one request, you get to give exactly one response. So, you can’t “OK” some rules and fail the others, the request is either a success or a failure. Technically, internally in Core, they are treated individually, but the result must be combined and returned as one.

As a result, I’m wondering if rejecting the whole batch might be the better option. Then you could at least give error feedback on which ones failed, and the user could unselect these, if it is indeed possible to export more than one rule at a time. Whether that is possible will depend on what I make in the WebUI, but I can’t see any other use for the ability to do “batch conversion”.

You can do this for Things and Items, so I would expect the same to be possible for rules.

Which is why in an ideal world there would be something in the UI to tell them which rules failed too. But most users would look in the file, find not all the rules are there and then check the logs.

I would find it annoying if I had to “pre-screen” and select only those rules which can be exported, as opposed to just selecting all, exporting, and at least getting those rules that can be exported. It imposes a lot of extra understanding and extra work on the end user to have to figure out which rules can be exported, and to select only those, in order to get any export at all.

But it also would be a lot of extra work on the end user to not be told which rules were not included.

Idea: would it be possible to put a comment in the produced file for each rule that cannot be exported, in the place where its code would have gone if it were serializable? That could be the reporting mechanism.

// rule xyz123 cannot be exported: has one or more conditions

The resulting file will still be usable, and it will be clear to the end users which rules were skipped and why (or at least the first reason why; a rule may have multiple reasons why it cannot be serialized).

That might be doable, but it will be a bit complicated. It would have to apply to both DSL and YAML rules, so there would need to be some way to achieve this “comment” in YAML too (in a way that the user will actually notice). There are fewer reasons why a rule can’t be exported to YAML, but at least shared context DSL rules and SimpleRules are excluded.
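For the YAML case, such a comment might look something like the fragment below. This is only an illustration of the idea; the actual file layout and wording would depend on the YAML serializer, and YAML comments are easy for users to overlook:

```yaml
rules:
  working-rule:
    # ... serialized triggers/conditions/actions would go here ...
  # Rule 'combined-2' could not be exported: DSL rule with shared context
```

The open question is whether a comment placed like this is prominent enough to count as “feedback” for a user who doesn’t read the generated file closely.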

Is that really enough though, is the question. I agree that if you could report back what failed in parallel with generating what you can, that might be the best thing, but since you can’t really do that, the UI will still behave as if “all is OK”. I’m just not sure how user-friendly that is.

My preliminary (untested) “serializability test” for DSL rules looks like this:

        for (Rule rule : rules) {
            if (rule instanceof SimpleRule) {
                result.add(new SerializationValidationResult(false, "Rule '" + rule.getUID() + "' is a SimpleRule with an inaccessible action"));
                continue;
            }
            if (rule.getConfiguration().get("sharedContext") instanceof Boolean shared && shared.booleanValue()) { //TODO: (Nad) Key name
                result.add(new SerializationValidationResult(false, "Rule '" + rule.getUID() + "' is a DSL rule with shared context"));
                continue;
            }
            errors.clear();
            if (!rule.getTags().isEmpty()) {
                errors.add("has tags");
            }
            if (rule.getVisibility() != Visibility.VISIBLE) {
                errors.add("is invisible");
            }
            List<Trigger> triggers = rule.getTriggers();
            if (triggers.isEmpty()) {
                errors.add("has no triggers");
            } else {
                for (Trigger trigger : triggers) {
                    try {
                        buildModelTrigger(trigger);
                    } catch (SerializationException e) {
                        errors.add("trigger '" + trigger.getId() + "': " + e.getMessage());
                    }
                }
            }

            if (!rule.getConditions().isEmpty()) {
                errors.add("has conditions");
            }
            if (rule.getActions().size() != 1) {
                errors.add("has " + rule.getActions().size() + " actions; one is required");
            } else {
                Action action = rule.getActions().getFirst();
                if (action.getConfiguration().get("type") instanceof String type) {
                    if (DSLRuleProvider.MIMETYPE_OPENHAB_DSL_RULE.equals(type)) {
                        if (action.getConfiguration().get("script") instanceof String script) {
                            if (script.isBlank()) {
                                errors.add("has an empty scripted DSL action");
                            }
                        } else {
                            errors.add("has no action script");
                        }
                    } else {
                        errors.add("doesn't have a scripted DSL action");
                    }
                } else {
                    errors.add("doesn't have a scripted action");
                }
            }

            if (errors.isEmpty()) {
                result.add(new SerializationValidationResult(true, ""));
            } else {
                result.add(new SerializationValidationResult(false, "Rule '" + rule.getUID() + "': " + String.join(", ", errors)));
            }
        }

So, there are quite a few things that can disqualify a rule from being exported to DSL.

It’s a lot better than nothing at all.

I’d say it’s better than only putting something in the logs.

And I’d also say it’s better than just rejecting everything because there’s just one rule that can’t be serialized.

Maybe if the comment(s) are at the very top instead of scattered through the resultant file.

It seems like the check works reasonably well, here I tried to serialize a set of different test rules on my system:

08:34:58.423 [ERROR] [ternal.fileformat.FileFormatResource] - Check result: [SerializabilityResult [ok=true], SerializabilityResult [ok=false, failureReason=Rule 'test:full-rule': has tags, is invisible, has conditions, has 2 actions; one is required], SerializabilityResult [ok=true], SerializabilityResult [ok=true], SerializabilityResult [ok=false, failureReason=Rule 'combined-2' is a DSL rule with shared context], SerializabilityResult [ok=true], SerializabilityResult [ok=true]]

However, when it comes to adding the failures as comments in the resulting file, I’m not sure that can be done. The reason is the two-step process that is used: the failures take place in step 1, while the output is generated in step 2. There’s not really any way to “transfer information” between the two steps; in step 1 the rules are added to a temporary model, and in step 2 the model is serialized. The rules that fail simply aren’t added to the model, so they “don’t exist” as far as step 2 is concerned.

I’ve made a way to pre-check rules for serializability with a given serializer (DSL or YAML). I’m thinking, what if, when you click “generate file” (or whatever the text is), the pre-check is run, and if one or more rules fail, you get a pop-up with some information, and an option to uncheck unsupported rules?

That would still allow selecting all, and let the system do the “tedious” unselecting for you…
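A minimal sketch of that flow, assuming the pre-check returns one result per rule (the CheckResult type and field names below are illustrative, not the actual API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed flow: run the pre-check first, then serialize only
// the rules that pass, collecting a message per failed rule for the pop-up.
// All names here are illustrative assumptions, not the real openhab-core API.
public class PreCheckDemo {

    record CheckResult(String ruleUid, boolean ok, String reason) {}

    public static void main(String[] args) {
        List<CheckResult> checks = List.of(
                new CheckResult("good-rule", true, ""),
                new CheckResult("test:full-rule", false,
                        "has tags, has conditions"));

        List<String> serializable = new ArrayList<>();
        List<String> warnings = new ArrayList<>();
        for (CheckResult check : checks) {
            if (check.ok()) {
                serializable.add(check.ruleUid()); // keep for serialization
            } else {
                warnings.add("Rule '" + check.ruleUid() + "': "
                        + check.reason()); // shown to the user before export
            }
        }
        System.out.println(serializable);
        System.out.println(warnings);
    }
}
```

The UI could then show the warnings list in the pop-up and pre-uncheck the failing rules, so the user never has to “pre-screen” anything manually.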

That would meet the usability goals I have for this.

  1. visibility into which rules cannot be serialized and why
  2. being able to bulk select everything and let OH not reject serializing at least the ones it can