Thank you for the quick feedback. That helped: I think I had set the logging level wrongly, and now I know I’m interested in the rule-based interpreter. But Vosk will not send the recognized string to the interpreter - it only responds with “unknown voice command”.
Do you have any idea how to connect voice input coming in via HAB Speaker to the rule-based interpreter (basically: write the recognized text into the configured String item and do not respond with “unknown voice command”)?
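For reference, what I’m ultimately trying to achieve is roughly the following - a rule that reacts to the String item the rule-based interpreter writes the recognized text into. This is only a sketch: the item name `VoiceCommand` and the target item are placeholders and have to match whatever is configured under Settings → Voice:

```
// Sketch, not verified: react to the String item the rule-based
// interpreter writes into. "VoiceCommand" is a placeholder name.
rule "Handle recognized voice command"
when
    Item VoiceCommand received update
then
    val command = VoiceCommand.state.toString.toLowerCase
    // naive keyword matching, just to illustrate the idea
    if (command.contains("licht") && command.contains("k2")) {
        LichtK2SwitchItem.sendCommand(ON)
    }
end
```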
Update, 2023-01-10: See #6 in HAB Speaker (dialog processing in the browser) - #31 by ornostar: Is it possible to use HAB Speaker in combination with the system interpreter?
Regarding logging:
#1: Your hint worked, I can see the recognition process now. Setting the log level to DEBUG globally led to a huge amount of logs; somehow the desired entries weren’t among them (I assume because the 1000 lines were overwritten too fast).
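If the log buffer really is the problem, enlarging the rolling log file in `userdata/etc/log4j2.xml` might help, so DEBUG entries survive longer. This is only a sketch based on the default openHAB log4j2 layout; the appender name, pattern, and default sizes on your installation may differ:

```xml
<!-- Sketch: enlarge the rolling log so DEBUG entries are not rotated away
     so quickly. Appender/policy details are assumptions; compare with the
     log4j2.xml that ships with your openHAB version. -->
<RollingFile name="LOGFILE" fileName="${sys:openhab.logdir}/openhab.log"
             filePattern="${sys:openhab.logdir}/openhab.log.%i">
  <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%-5.5p] [%-36.36c] - %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="64 MB"/>
  </Policies>
  <DefaultRolloverStrategy max="10"/>
</RollingFile>
```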
Regarding Interpreter:
#2: My commands won’t work easily with the built-in interpreter, since I have multiple instances of equipment in each room.
Example: “Licht” [eng. “light”] is used in at least two items per room, one for light dimming and one for the light switch:
```
Switch LichtK2SwitchItem "Licht (K2)" <slider> (gLicht, gSwitch) ["K2","Licht","Point"] {channel="knx:device:4836da5d03:LichtK2SwitchChannel"}
Dimmer LichtK2DimmerItem "Licht (K2)" <slider> (gLicht, gDimmer, gK2) ["K2","Licht","Point"] {channel="knx:device:4836da5d03:LichtK2DimmerChannel"}
```
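One thing that might reduce the ambiguity for the built-in interpreter is the `synonyms` item metadata, so each item gets a unique spoken name. A sketch, reusing the channels from above - the synonym texts are made up, and whether the interpreter then disambiguates reliably would still need testing:

```
// Sketch: give each item a distinct spoken name via "synonyms" metadata.
// The synonym strings are placeholders; keep the labels you already use.
Switch LichtK2SwitchItem "Licht (K2)" <slider> (gLicht, gSwitch) ["K2","Licht","Point"]
    { channel="knx:device:4836da5d03:LichtK2SwitchChannel", synonyms="Lichtschalter Kind zwei" }
Dimmer LichtK2DimmerItem "Licht (K2)" <slider> (gLicht, gDimmer, gK2) ["K2","Licht","Point"]
    { channel="knx:device:4836da5d03:LichtK2DimmerChannel", synonyms="Lichtdimmer Kind zwei" }
```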
#3: The usage of abbreviations (example above: K2 for Kind2 [eng. child 2]) is not recognized by Vosk (maybe another model might help? I’ll give it a try).
Edit: A bigger model doesn’t work. User, group and permissions are set identically between the models; I’ve just renamed the folder using the `mv` command.
```
2023-01-09 20:30:49.577 [DEBUG] [oice.voskstt.internal.VoskSTTService] - loading model
2023-01-09 20:30:49.627 [WARN ] [oice.voskstt.internal.VoskSTTService] - IOException loading model: Failed to create a model
```
#4: Regional settings are correctly set to Germany/German.
#5: I’ve switched to the rule-based interpreter (Settings → Voice). This led to a change when using the `say` command in openhab-cli, but HAB Speaker still uses another interpreter (or none?!).
Edit: By “change” I mean that the String item containing the voice command was updated.
#6: Some interpreter works. I’ve checked this by issuing a German command via chat (HABot): in contrast to the shell, I get a dialogue there and it asks me whether it found the correct item (while on openhab-cli it’s a plain “not found”).
Edit: I’ve just realized that the interpreter in HABot is not the same as the built-in one. But due to the restrictions regarding abbreviations (letters) and similar item naming (#2), I’m focusing on the rule-based interpreter.
#7: As mentioned in #5, I’ve already changed to the rule-based interpretation - but only with openhab-cli. As mentioned in the very first question, I have trouble getting the rule-based interpretation to work with Vosk/HAB Speaker. What do you mean by “and set it on ‘Other Services/Rule Voice Interpreter’”?
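In case it helps: if I understand the voice settings correctly, the default human language interpreter can also be set in `services/runtime.cfg` instead of the UI. This is a sketch - the interpreter id `rulehli` is an assumption on my part, so please correct me if the id is different:

```
# Sketch (services/runtime.cfg): select the rule-based interpreter as the
# default HLI. The id "rulehli" is an assumption; check which interpreter
# ids your installation actually offers.
org.openhab.voice:defaultHLI=rulehli
```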
Regarding audio output:
#8: I’m connected remotely, so I cannot use the local sink. I’ve typed

```
openhab:voice say("hallo", marytts:bits3hsmm, habspeaker::79a0-95a8-b269::sink)
```

I did not hear any sound, nor did I see anything in the logs. That’s currently acceptable, since I don’t want a dialog but just want to give commands and get an answer (yes/no).
Regarding openHABian and Vosk:
#9: The model works, as mentioned above. I’ll give the biggest model a chance, since abbreviations/letters aren’t recognized.
#10: The lib is/was installed and is up to date.
Misc
Yes, I meant HABot.
I’ll post my feedback regarding HAB Speaker in the marketplace topic.