[OH3] TUT: HABot human language interpreter in rules

In the docs (Multimedia | openHAB) there are two interpreters to choose from, which can also be used in rules with interpret("String"). In the OH3 UI, however, you can additionally select the "HABot OpenNLP Interpreter", which also understands the semantic tags. How can I use the HABot HLI in my rule, e.g. with result = interpret(VoiceCommand.state, "HabotInterpreter", "sonos:PLAY5:kitchen")?
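For reference, these are the forms of the interpret action as I understand them from the Multimedia docs (the interpreter id below is only a placeholder, not a value I have verified):

// use the default human language interpreter and the default audio sink
interpret(VoiceCommand.state)
// use a specific interpreter by its id
interpret(VoiceCommand.state, "<interpreter-id>")
// use a specific interpreter and a specific audio sink for the spoken answer
var String answer = interpret(VoiceCommand.state, "<interpreter-id>", "sonos:PLAY5:kitchen")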

Does no one know whether the HABot interpreter can even be included in a rule as an interpreter?

I have now found it out:
To execute rules via voice command and still use another human language interpreter, you have to do the following:

  • Display the available interpreters via the REST interface (OH3: https://:8993/#!/developer/api-explorer) -> GET /voice/interpreters
    Output:
[
  {
    "id": "rulehli",
    "label": "Rule-based Interpreter"
  },
  {
    "id": "opennlp",
    "label": "HABot OpenNLP Interpreter",
    "locales": [
      "fr",
      "de",
      "en"
    ]
  },
  {
    "id": "system",
    "label": "Built-in Interpreter",
    "locales": [
      "fr",
      "de",
      "en"
    ]
  }
]
  • Edit the default voice interpreter: OH3 settings menu -> Voice -> Default Human Language Interpreter: Rule-based Interpreter (also set the voice command item in the Voice settings; a minimal sketch of that item follows after the rule below)
  • Create a new rule:
rule "Voice control"
when
    Item VoiceCommand received command
then
    var String command = VoiceCommand.state.toString.toLowerCase
    logInfo("Voice.Rec","VoiceCommand received "+command)
    if ( command.contains("alles aus") ) {    // "alles aus" = "everything off" (German voice command)
        say("Schalte alles aus!")             // spoken answer "Switching everything off!"; the actual switch-off actions would go here
    }
    else if ( command.contains("test") || command.contains("ok") ) {
        //another action here
    }
    else {
        // hand anything not handled above over to the HABot interpreter ("opennlp");
        // interpret() returns the interpreter's answer as a String, so it could also be passed to say(result)
        var String result = interpret(VoiceCommand.state, "opennlp", "sonos:PLAY1:ringcon123")
    }
end
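For completeness, the VoiceCommand item that the Rule-based Interpreter writes to and the rule listens on is just a plain String item. A minimal sketch (the name is only what I used; define it wherever you keep your items):

String VoiceCommand "Voice Command"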

The second argument of interpret() ("opennlp" in the rule above) can be any of the ids listed by the REST call: "rulehli", "opennlp" or "system".
However, only "system" or "opennlp" makes sense here ("rulehli" would just hand the command back to this rule). "opennlp" is HABot, which also looks at the semantic tags and metadata of your items and can then control the items according to your voice command.
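To illustrate what opennlp/HABot works with, here is a rough sketch of an item carrying semantic tags and a synonyms metadata entry (group, names and tags are made up for the example):

Group   gKitchen      "Kitchen"       ["Kitchen"]
Switch  Kitchen_Light "Kitchen Light" (gKitchen) ["Switch", "Light"] { synonyms="kitchen lamp" }

With something like this, a command such as "turn on the kitchen lamp" should be resolvable by HABot without writing a dedicated rule for that item.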

Nevertheless, I would ask that this be added to the docs: Multimedia

I seem to be missing something here. On an OH3 page I can interact with HABot and get charts and such. When I use the openHAB app, there is a microphone, and I assumed I would get the same functionality, but I don't. Does that microphone go to the VoiceCommand item? If so, does that mean the semantic tags can't be used for commands from the app?