Design Pattern: Rule Refresh

Please see Design Pattern: What is a Design Pattern and How Do I Use Them for a description of DPs.

Notes

This is based largely on the work of @CrazyIvan359, which I’ve extracted and made generic. Also, this design pattern is only applicable to the openHAB 3 Rules Engine, known as the Next-Gen Rules Engine or NGRE in openHAB 2.x.

At present, this DP is only implemented in Jython, but other languages will be forthcoming.

Problem Statement

There are times when the triggers for a rule need to be regenerated, for example when the Items that make up the triggers for the rule are determined by Item metadata, membership in a Group, special tags, or some other attribute of the Item. Another example might be to check that everything is configured correctly (e.g. needed variables are defined in configuration.py, necessary Items are defined, etc.).

Using the standard helper library annotations, a rule’s triggers are only created at script load time and never again. In order to pick up the changes, the entire script needs to be reloaded. For example, if using Jython Drop-in Replacement for Expire 1.x Binding, after adding or changing the expire metadata on an Item the expire.py script would have to be reloaded in order to pick up that change.
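
For illustration, here is a minimal sketch of a conventionally annotated rule (the Item name is hypothetical); its trigger is fixed when the file is loaded, so pointing it at a different Item means editing this script and reloading it.

from core.rules import rule
from core.triggers import when

# The trigger below is generated once, when this file is loaded. If a
# different Item should drive the rule (e.g. because metadata, a Group, or a
# tag changed), the only way to pick that up is to reload the script.
@rule("Static trigger example", description="Trigger is fixed at script load time")
@when("Item Example_Item changed")
def static_example(event):
    static_example.log.info("{} changed to {}".format(event.itemName, event.itemState))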

Concept

To avoid the need to reload the script every time a rule’s triggers need to change, create a Switch Item that triggers a reload rule when it receives an ON command. The reload rule first deletes the old rule and then recreates the rule that does the work using the current Item information.

I’ve written a collection of helper functions at https://github.com/rkoshak/openhab-rules-tools under rules_utils to facilitate this.

There are multiple stages necessary to set this up.

  1. At scriptLoaded, create the reload rule and, if necessary, a reload Item to trigger it. When the reload Item receives an ON command, the reload rule executes: it first deletes the rule that does the work and then recreates it with the latest configuration information. The create_simple_rule function in rules_utils handles all of this in one function call, including creating the reload Item if it doesn’t already exist.

  2. Trigger the reload rule or call the function directly.

  3. The reload rule first deletes the old rule using delete_rule from the rules_utils library, then regenerates the triggers for the rule that does the work, and finally recreates the rule itself. This is a perfect opportunity to do some error checking to make sure that all the configs are valid before enabling the rule, and to report on what’s wrong and how to fix it. The create_rule function from rules_utils can facilitate this when the triggers or config checking don’t involve metadata, or load_rule_with_metadata when the Items that define the triggers carry a specific metadata namespace. A skeleton of the whole pattern is sketched after this list.
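
Putting these stages together, here is a minimal, generic skeleton of the pattern. This is a sketch only: the Item name My_Reload_Item, the rule names, and the placeholder tag my_tag used to generate triggers are all hypothetical; the real examples below generate their triggers from actual metadata or configuration.

from core.log import logging, LOG_PREFIX, log_traceback
from community.rules_utils import create_simple_rule, create_rule, delete_rule

init_logger = logging.getLogger("{}.rule_refresh_skeleton".format(LOG_PREFIX))

@log_traceback
def do_work(event):
    """The rule that does the actual work; its triggers get regenerated."""
    do_work.log.info("Triggered by {}".format(event.itemName))

@log_traceback
def reload_do_work(event):
    """Deletes and recreates the worker rule with freshly generated triggers."""
    if not delete_rule(do_work, init_logger):
        init_logger.error("Failed to delete the old worker rule")
        return
    # Rebuild the triggers from whatever defines them; a placeholder tag here.
    triggers = ["Item {} changed".format(i) for i in items
                if ir.getItem(i).getTags().contains("my_tag")]
    if not triggers:
        init_logger.warn("No Items found to trigger the worker rule")
        return
    if not create_rule("My Worker Rule", triggers, do_work, init_logger,
                       description="Does the actual work",
                       tags=["example"]):
        init_logger.error("Failed to create the worker rule!")

@log_traceback
def scriptLoaded(*args):
    # Create the reload rule (and the reload Switch Item if it doesn't exist),
    # then build the worker rule for the first time.
    if create_simple_rule("My_Reload_Item", "Reload My Worker Rule",
                          reload_do_work, init_logger,
                          description="Recreates My Worker Rule from the latest config",
                          tags=["example"]):
        reload_do_work(None)

@log_traceback
def scriptUnloaded():
    delete_rule(reload_do_work, init_logger)
    delete_rule(do_work, init_logger)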

The end user of the rules would, for example, modify the Item metadata and then either issue an ON command to the reload Item

ssh -p 8101 openhab@localhost 'smarthome:send <Reload_Item> ON'

where <Reload_Item> is the Item created in step 1.

Or the user can find the reload rule in the list of rules in PaperUI and click the play triangle icon to cause the rule to run.
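
A rule can also kick off the refresh itself by commanding the reload Item directly, for example (using the reload Item name from the first example below):

# From inside any Jython rule or script: command the reload Item so the
# reload rule regenerates the worker rule's triggers.
events.sendCommand("Reload_Debounce", "ON")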

Examples

Note: All of these examples are available at openhab-rules-tools for download and use (see the link above). Please obtain them from there instead of copying the code below.

Item Metadata triggers

Here is a debounce rule using the rules_utils functions (see Design Pattern: Debounce) and Item metadata to define which Items must trigger the rule.

from core.metadata import get_value, get_metadata
from core.utils import send_command_if_different, post_update_if_different
from core.log import logging, LOG_PREFIX, log_traceback
from community.time_utils import parse_duration
from community.timer_mgr import TimerMgr
from community.rules_utils import create_simple_rule, delete_rule, load_rule_with_metadata

init_logger = logging.getLogger("{}.Debounce".format(LOG_PREFIX))

timers = TimerMgr()

RELOAD_DEBOUNCE_ITEM = "Reload_Debounce"

@log_traceback
def get_config(item_name, logger):
    """Parses the config string to validate it's correctness and completeness.
    At a minimum it verifies the proxy Item exists, the timeout exists and is
    parsable.
    Arguments:
      item_name: the name of an Item to get the debounce metadata from
    Returns:
      An Item metadata Object or None if there is no such metadata or the
      metadata is malformed.
    """
    try:
        cfg = get_metadata(item_name, "debounce")
        assert cfg, "There is no debounce metadata"
        assert items[cfg.value], "The proxy Item {} does not exist".format(cfg.value)
        assert "timeout" in cfg.configuration, "There is no timeout supplied"
        assert parse_duration(cfg.configuration["timeout"]), "Timeout is not valid"
        return cfg
    except AssertionError:
        init_logger.error("Debounce config on {} is not valid: {}"
                          "\nExpected format is : debounce=\"ProxyItem\"[timeout=\"duration\", states=\"State1,State2\", command=\"True\"]"
                          "\nwhere:"
                          "\n  ProxyItem: name of the Item that will be commanded or updated after the debounce"
                          "\n  timeout: required parameter with the duration of the format 'xd xh xm xs' where each field is optional and x is a number, 2s would be 2 seconds, 0.5s would be 500 msec"
                          "\n  states: optional, list all the states that are debounced; when not present all states are debounced; states not in the list go directly to the proxy"
                          "\n  command: optional, when True the proxy will be commanded; when False proxy will be updated, defaults to False"
                          .format(item_name, get_value(item_name, "debounce")))
        return None

@log_traceback
def end_debounce(state, proxy_name, is_command, log):
    """Called at the end of the debounce period, update or commands the proxy
    Item with the passed in state if it's different from the proxy's current
    state.
    Arguments:
      state: the state to update or command the proxy Item to
      proxy_name: the name of the proxy Item
      is_command: flag that when true will cause the function to issue a command
      instead of an update.
      log: logger used for debug logging
    """
    if is_command:
        log.debug("Commanding {} to {} if it's not already that state"
            .format(proxy_name, state))
        send_command_if_different(proxy_name, state)
    else:
        log.debug("Updating {} to {} if it's not already that state"
            .format(proxy_name, state))
        post_update_if_different(proxy_name, state)

@log_traceback
def debounce(event):
    """Rule that get's triggered by any Item with a valid debounce metadata
    config changes. Based on the configuration it will debounce some or all of
    the possible states, waiting the indicated amount of time before forwarding
    the state (command or update) to a proxy Item.
    """
    cfg = get_metadata(event.itemName, "debounce")
    if not cfg:
        return

    timers.cancel(event.itemName)

    isCommand = "command" in cfg.configuration and cfg.configuration["command"] == "True"
    proxy = cfg.value
    states = [st.strip() for st in cfg.configuration["states"].split(",")] if "states" in cfg.configuration else None
    timeout = str(cfg.configuration["timeout"])

    if not states or (states and str(event.itemState) in states):
        debounce.log.debug("Debouncing {} with proxy={}, command={}, timeout={}, and"
                      " states={}".format(event.itemName, proxy, isCommand,
                      timeout, states))
        timers.check(event.itemName, timeout, function=lambda: end_debounce(event.itemState, proxy, isCommand, debounce.log))
    else:
        debounce.log.debug("{} changed to {} which is not in {}, not debouncing"
                      .format(event.itemName, event.itemState, states))
        end_debounce(event.itemState, proxy, isCommand, debounce.log)

@log_traceback
def load_debounce(event):
    """Called at startup or when the Reload Debounce rule is triggered. It
    deletes and recreates the Debounce rule. Should be called at startup and
    when the metadata changes on Items since there is no event to do this
    automatically.
    """

    if not delete_rule(debounce, init_logger):
        init_logger("Failed to delete rule")
        return

    debounce_items = load_rule_with_metadata("debounce", get_config, "changed",
            "Debounce", debounce, init_logger,
            description=("Delays updating a proxy Item until the configured "
                         "Item remains in that state for the configured amount "
                         "of time"),
            tags=["openhab-rules-tools","debounce"])
    if debounce_items:
        [timers.cancel(i) for i in timers.timers if not i in debounce_items]

@log_traceback
def scriptLoaded(*args):
    if create_simple_rule(RELOAD_DEBOUNCE_ITEM, "Reload Debounce", load_debounce,
                          init_logger,
                          description=("Recreates the Debounce rule with the "
                                       "latest debounce metadata. Run this rule "
                                       "when modifying debounce metadata"),
                          tags=["openhab-rules-tools","debounce"]):
        load_debounce(None)

@log_traceback
def scriptUnloaded():
    """
    Cancels all the timers when the script is unloaded to avoid timers hanging
    around, and deletes the rules.
    """

    timers.cancel_all()
    delete_rule(load_debounce, init_logger)
    delete_rule(debounce, init_logger)

Theory of operation: At scriptLoaded, create_simple_rule is called to create the Reload Debounce rule, which is attached to the load_debounce function in this file. If the Reload_Debounce Item doesn’t already exist, it is created as well. load_debounce is then called to create the Debounce rule itself.

In load_debounce, we first delete the old dynamically created rule. Next, load_rule_with_metadata is called. In addition to the usual arguments necessary to define a rule, a get_config function is passed as an argument. get_config is defined at the top of the file and takes an item_name and a logger. This function extracts the metadata, tags, or whatever else the Item uses to define its configuration and ensures it is valid and usable. In this case it makes sure that there is “debounce” metadata, that it has a value, that the value maps to an Item that exists, and that it has a timeout parameter that can be parsed. If any of those checks fail, the reason is logged along with some usage information to help the user correct the problem. When get_config returns None, that Item will not have a trigger generated for it.

load_rule_with_metadata returns a list of all the Item names for which a trigger was created. This is particularly useful in cases where a Timer is created for each Item and needs to be cancelled when the config (in this case the “debounce” metadata) is removed from the Item, which is what the last line of that function does.

debounce is the function that the dynamically created rule calls when one of its triggers fires, in this case a change to any Item that has valid “debounce” metadata. It implements an Item debounce algorithm. end_debounce can be ignored as it’s not related to this DP.

Finally, when the script is unloaded, the rules are deleted and timers cancelled.
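
For testing, the metadata that the rule looks for can be applied from another Jython script using the helper libraries’ set_metadata. This is a sketch with hypothetical Item names, using parameters in the format documented by get_config’s error message.

from core.metadata import set_metadata

# Forward changes on Sensor1_Raw to Sensor1_Proxy only after the new state
# has held for 2 seconds, commanding (rather than updating) the proxy.
set_metadata("Sensor1_Raw", "debounce",
             {"timeout": "2s", "command": "True"},
             value="Sensor1_Proxy", overwrite=True)

After adding or changing metadata like this, command Reload_Debounce to ON (or run the Reload Debounce rule) so the Debounce rule is rebuilt with a trigger for the new Item.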

If creating rules to share, be sure to use meaningful names and provide a description and tags for the rules.

configuration.py

Perhaps there is more configuration necessary than just Item triggers, such as variables that need to be in configuration.py. This is an example of the publisher rule for the MQTT Event Bus (a work in progress, there may be typos, see MQTT 2.5 Event Bus).
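
For reference, the rule below expects settings along these lines in configuration.py (the values are illustrative; mqtt_eb_broker must be the Thing ID of an actual MQTT Broker Thing):

# configuration.py (illustrative values)
mqtt_eb_name = "my-openhab"                # prefix used in the event bus topics
mqtt_eb_broker = "mqtt:broker:mosquitto"   # Thing ID of the MQTT Broker Thing
mqtt_eb_puball = True                      # publish all Items, not just tagged ones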

from core.log import logging, LOG_PREFIX, log_traceback
from community.rules_utils import create_simple_rule, delete_rule, create_rule

init_logger = logging.getLogger("{}.mqtt_eb".format(LOG_PREFIX))

@log_traceback
def check_config(log):
    """Verifies that all the settings exist and are usable."""

    try:
        from configuration import mqtt_eb_name
    except:
        log.error("mqtt_eb_name is not defined in configuration.py!")
        return False

    broker = None
    try:
        from configuration import mqtt_eb_broker
        broker = mqtt_eb_broker
    except:
        log.error("mqtt_eb_broker is not defined in configuration.py!")
        return False

    if not actions.get("mqtt", broker):
        log.error("{} is not a valid broker Thing ID".format(broker))
        return False
    return True


@log_traceback
def mqtt_eb_pub(event):
    """Called when a configured Item is updated or commanded and publsihes the
    event to the event bus.
    """

    if not check_config(mqtt_eb_pub.log):
        init_logger.error("Cannot publish event bus event, deleting rule")
        delete_rule(mqtt_eb_pub, init_logger)
        return

    from configuration import mqtt_eb_name, mqtt_eb_broker

    is_cmd = hasattr(event, 'itemCommand')
    msg = str(event.itemCommand if is_cmd else event.itemState)
    topic = "{}/out/{}/{}".format(mqtt_eb_name, event.itemName,
                                  "command" if is_cmd else "state")
    retained = False if is_cmd else True
    init_logger.info("Publishing {} to  {} on {} with retained {}"
                     .format(msg, topic, mqtt_eb_broker, retained))
    action = actions.get("mqtt", mqtt_eb_broker)
    if action:
        action.publishMQTT(topic, msg, retained)
    else:
        init_logger.error("There is no broker Thing {}!".format(mqtt_eb_broker))

@log_traceback
def load_mqtt_eb_pub(event):
    """Deletes and recreates the MQTT Event Bus publisher and online rules."""

    # Delete the old publisher rule.
    if not delete_rule(mqtt_eb_pub, init_logger):
        init_logger("Failed to delete rule!")
        return

    # Reload to get the latest config parameters.
    import configuration
    reload(configuration)

    # Default to publishing all updates and all commands for all Items.
    puball = True
    try:
        from configuration import mqtt_eb_puball
        puball = mqtt_eb_puball
    except:
        init_logger.warn("No mqtt_eb_puball in configuration.py, "
                                  "defaulting to publishing all Items")

    # Don't bother to create the rule if we can't use it.
    if not check_config(init_logger):
        init_logger.error("Cannot create MQTT event bus publication rule!")
        return

    triggers = []

    # Create triggers for all Items.
    if puball:
        [triggers.append("Item {} received update".format(i))
         for i in items]
        [triggers.append("Item {} received command".format(i))
         for i in items]

    # Create triggers only for those Items with eb_update and eb_command tags.
    else:
        [triggers.append("Item {} received update".format(i))
         for i in items
         if ir.getItem(i).getTags().contains("eb_update")]
        [triggers.append("Item {} received command".format(i))
         for i in items
         if ir.getItem(i).getTags().contains("eb_command")]

    # No triggers, no need for the rule.
    if not triggers:
        init_logger.warn("No event bus Items found")
        return

    # Create the rule to publish the events.
    if not create_rule("MQTT Event Bus Publisher", triggers, mqtt_eb_pub,
                       init_logger,
                       description=("Publishes updates and commands on "
                                    "configured Items to the configured "
                                    "event bus topics"),
                       tags=["openhab-rules-tools","mqtt_eb"]):
        init_logger.error("Failed to create MQTT Event Bus Publisher!")

@log_traceback
def scriptLoaded(*args):
    """Creates and then calls the Reload MQTT Event Bus Publisher rule."""

    if create_simple_rule("Reload_MQTT_PUB",
                          "Reload MQTT Event Bus Publisher",
                          load_mqtt_eb_pub, init_logger,
                          description=("Reload the MQTT Event Bus publisher "
                                       "rule. Run when changing configuration.py"),
                          tags=["openhab-rules-tools","mqtt_eb"]):
        load_mqtt_eb_pub(None)

@log_traceback
def scriptUnloaded():
    """Deletes the MQTT Event Bus Publisher and Online rules and the reload rule."""

    delete_rule(load_mqtt_eb_pub, init_logger)
    delete_rule(mqtt_eb_pub, init_logger)

Theory of operation: This is very much like the Item metadata example above, but in this case the triggers for the rule are optionally determined by variables defined in configuration.py. In particular, mqtt_eb_broker and mqtt_eb_name must be defined or the rule will not be created. Also, if mqtt_eb_puball is not defined, it assumes that all Item updates and commands should be published to the event bus. Pay particular attention to the meaningful error messages when the configuration is not valid, and to the fact that the rule is not created at all when the configuration is not sufficient to run it. Finally, when not publishing everything, the Items are selected by tags instead of metadata.
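
Before triggering the reload, a quick throwaway script like the following (a sketch, assuming the eb_update and eb_command tags used above) will log which Items the trigger generation would pick up:

from core.log import logging, LOG_PREFIX

check_log = logging.getLogger("{}.mqtt_eb_check".format(LOG_PREFIX))

# List the Items carrying the event bus tags used by the publisher rule.
update_items = [i for i in items if ir.getItem(i).getTags().contains("eb_update")]
command_items = [i for i in items if ir.getItem(i).getTags().contains("eb_command")]
check_log.info("eb_update Items: {}".format(update_items))
check_log.info("eb_command Items: {}".format(command_items))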

Different types of triggers

Sometimes one may need to mix and match triggers based on Item metadata as well as some fixed triggers. The following is not a full example, but just shows the relevant parts of the ephem_tod library at openhab-rules-tools.

@log_traceback
def load_etod(event):
    """Called at startup or when the Reload Ephemeris Time of Day rule is
    triggered, deletes and recreates the Ephemeris Time of Day rule. Should be
    called at startup and when the metadata is added to or removed from Items.
    """

    # Because we have other rule triggers beyond the Item ones we need to do
    # more work in this function than usual.
    init_logger.info("Creating Ephemeris Time of Day Rule...")

    # Remove the existing rule if it exists.
    if not delete_rule(ephem_tod, init_logger):
        return None

    # Generate the rule triggers with the latest metadata configs.
    triggers = generate_triggers(NAMESPACE, check_config, "changed",
                                 init_logger)
    if not triggers:
        init_logger.warn("There are no Items with valid etod metadata")
        return None
    etod_items = get_items_from_triggers(triggers)
    triggers.append("System started")
    triggers.append("Time cron 0 2 0 * * ? *")

    # Create the rule.
    if not create_rule("Ephemeris Time of Day", triggers, ephem_tod, init_logger,
            description="Creates the timers that drive the {} state machine."
                        .format(ETOD_ITEM),
            tags=["openhab-rules-tools","etod"]):
        return None

    [timers.cancel(i) for i in timers.timers if not i in etod_items]

@log_traceback
def scriptLoaded(*args):
    """Create the Ephemeris Time of Day rule."""

    delete_rule(ephem_tod, init_logger)
    if create_simple_rule(ETOD_RELOAD_ITEM, "Reload Ephemeris Time of Day",
            load_etod, init_logger,
            description=("Regenerates the Ephemeris Time of Day rule using the"
                         " latest {} metadata. Run after adding or removing any"
                         " {} metadata to/from and Item."
                         .format(NAMESPACE, NAMESPACE)),
            tags=["openhab-rules-tools","etod"]):
        load_etod(None)

Theory of operation: In this case, in addition to creating triggers based on Items with metadata, we also create a trigger for System started and a cron trigger for two minutes after midnight.

Advantages and Disadvantages

The advantage is that it allows rules whose triggers depend on something that doesn’t generate an event, such as Item metadata or Group membership, to be refreshed without restarting openHAB or reloading the script files. The disadvantage is that it requires creating a special reload Item, and the reload rule still has to be triggered manually in order to refresh the rule’s triggers.

Related Design Patterns and Examples

Design Pattern                 How it’s used
Design Pattern: Debounce       First example
MQTT 2.5 Event Bus             Second example
Design Pattern: Time Of Day    Third example

Great work Rich! I’ll be putting this to work next time I have time to work on my rules. I’ve ended up using this pattern in several places now.

I did notice in your first example that you are cancelling all timers before starting the trigger generation pass. Is this intentional? By doing so you essentially reset all running timers back to their full duration. In Expire I set it up to build a list of items configured this pass and then purge timers that aren’t in the new list. This way you preserve timers for items that are still configured and remove those for items that either have no configuration or whose configuration was changed and is no longer valid.

I’m not cancelling all the timers. When load_debounce is called I:

  • delete the debounce rule
    if not delete_rule(debounce, init_logger):
        init_logger("Failed to delete rule")
        return
  • create the new rule with newly generated triggers by calling load_rule_with_metadata; this function generates the triggers and returns a list of the names of the Items for which a trigger was created
    debounce_items = load_rule_with_metadata("debounce", get_config, "changed",
            "Debounce", debounce, init_logger,
            description=("Delays updating a proxy Item until the configured "
                         "Item remains in that state for the configured amount "
                         "of time"),
            tags=["openhab-rules-tools","debounce"])
  • then I cancel only those currently scheduled timers that are not in that list of Items for which triggers were created (i.e. cancel any timers for Items that no longer have debounce metadata but keep those that do). I don’t do anything for those where the debounce metadata merely changed, which is a design decision. For those, the old timer will expire using the old configuration, and the next time a timer is needed it will be created with the new configuration.
    if debounce_items:
        [timers.cancel(i) for i in timers.timers if not i in debounce_items]

Any scheduled timers that still have rule triggers created are left alone. I tested this but if it’s not working for you I need to look into it.

That’s what I get for reading this on my phone lol. You are of course correct, I didn’t scroll over and only saw [timers.cancel(i) but I should have picked up on the list comprehension because of the leading square bracket.