Simple way to use Google Home with IFTTT and myopenHAB



I have just configured IFTTT and openHAB so that the VoiceCommand item receives my “commands”. It seems to be working; unfortunately, it also seems like everything I tell my Google Home Mini is sent to openHAB.

The trigger I created (“What do you want to say?”) is “bitte $” (“bitte” being German for “please”). So I thought only commands that start with “bitte” would be sent to openHAB, but that doesn’t seem to be the case.

What do I have to do to make it work that way? I can’t really use the Home mini for anything else now. :smiley:

OK, never mind; it seems “bitte” is just ignored(?). I changed it to “test” and it works.

I guess I’ll have to come up with some phrase that makes it somewhat intuitive to use. Or I’ll have to add all kinds of applets with different trigger phrases.

One more thing: exposing the VoiceCommand item in the openHAB cloud seems to be working rather well. What bothers me, though, is the need to use a trigger phrase (which in my case is still “test”: “Hey Google, test, turn on the kitchen light”). I can’t think of an intuitive phrase to use.

So I was wondering whether it would be possible to “route” all commands that I speak into the Google Home to my VoiceCommand item in openHAB first, check whether there is a command that I want openHAB to execute, and then send everything else “back” to the Google Home. That would require the Google Home / Assistant to have some kind of functionality for receiving commands via text / an interface, which I haven’t really found so far. I found this, but it seems like it is only usable for sending text to the Home, which it then “speaks” via the built-in speaker.

Does anyone know if that would be possible?

I’m having no problems with “Hey Google, turn on the kitchen lights”

I don’t really want to be “restricted” that way. I would like to be able to use commands in whichever way I want (“Hey Google, kitchen lights off”, “Hey Google, turn kitchen lights off”, “Hey Google, lights in kitchen off” etc.). That works with the Android openHAB app already, and I don’t want to have to use a different syntax with the Google Home. It seems like there is (currently) no way to control the Home via some network interface.

Here’s my rule, with which I can turn lights on/off and even chain multiple lights together:

Text ingredient: Turn $
Assistant response: OK, I’m turning $
Send to VoiceCommand: turn {{TextField}}

The only restriction is I have to say the word “turn”.
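To make the openHAB side concrete, the parsing behind such a rule can be sketched in plain code. This is only an illustrative sketch, not the poster’s actual rule: the item names and the mapping are assumptions, and in a real setup this logic would sit in a rule triggered by updates to the VoiceCommand item.

```python
# Hypothetical mapping from spoken names to openHAB item names (assumption).
ITEMS = {
    "kitchen lights": "Kitchen_Lights",
    "living room lights": "LivingRoom_Lights",
}

def parse_voice_command(text):
    """Parse 'turn <lights> [and <lights>...] on/off' into (item, state) pairs.

    Mirrors the IFTTT applet above, which forwards 'turn {{TextField}}'
    to the VoiceCommand item. Returns [] if nothing matched.
    """
    text = text.lower().strip()
    if not text.startswith("turn "):
        return []
    body = text[len("turn "):]
    # The last word is expected to be the target state.
    state = "ON" if body.endswith(" on") else "OFF" if body.endswith(" off") else None
    if state is None:
        return []
    body = body.rsplit(" ", 1)[0]
    # "and" allows chaining multiple lights in one spoken command.
    commands = []
    for part in body.split(" and "):
        item = ITEMS.get(part.strip())
        if item:
            commands.append((item, state))
    return commands
```

In an actual rule, each returned pair would be turned into a `sendCommand(item, state)` call.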

Yeah, that’s what I mean. I don’t want to have to say “turn” every single time. In English there might not be so many ways to say “turn off the lights”, but in German there are quite a few (“mach das Licht in der Küche an”, “schalte das Licht in der Küche an”, “Licht in der Küche an”, “Küchenlicht anschalten” and many more, all meaning “turn the kitchen light on”). In addition to that, I dim my lights, so I would have to add all kinds of applets or trigger phrases.

Also, there are things that the Google Home/Assistant itself could be told to turn on (e.g. “turn the music on”), which would not work with your applet/rule, as the command would be routed to openHAB. You would then have to use a different command (“play music” instead of “turn on music”).

I want to have some kind of natural language processing. I don’t want to have to learn an exact phrase to turn on a light (and I don’t want my wife to have to either). I want it to be as intuitive as possible. That would work if I had the chance to send everything to openHAB first: my rule would then check whether the command is something that should be handled by openHAB (like this), and if not, send it “back” to the Home/Assistant.

Where does this file go? I’m having trouble exposing items.

Unfortunately there’s no way to do what you’re looking for. There is no way to capture what the Home hears and send it back if it’s not what you want. As for processing the language, that’s what makes the Google Assistant what it is. Google has been working on that AI engine for years, and natural language processing requires specialized tooling, so I don’t think trying to do it in OH is going to be effective. You would have to pick every keyword that could possibly be uttered, parse for it, and try to figure out the context of what someone is saying, which is what Google’s machine learning algorithms have been developed for over the past decade. It’s actually a lot harder than people think it is.

As of right now, if you want the full benefit of natural language processing, you can either wait for the OH skill for the Google Assistant to come out, or write multiple rules in IFTTT to try to catch all scenarios. There is one other option: write your own Google Assistant action.

Thanks for your reply.

I have been using a modified version of the “natural language processing script” which I posted above for a while now, and it’s working just fine for my needs. I don’t really need it to be as good as Google’s natural language processing.

I do not have to keep using an exact phrase like “Turn kitchen lights off”, as long as the sentence contains everything the script needs it works fine (“kitchen lights off”, “lights in kitchen off”, “roller shutter living room down”, “close shutters in living room” etc).
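The keyword-spotting idea described above can be sketched roughly like this. The item names, keyword lists, and the `spot` function are illustrative assumptions, not the actual script; the point is only that matching keywords anywhere in the sentence frees you from a fixed word order.

```python
# Hypothetical keyword sets per openHAB item (assumption). Prefix matching on
# words lets "lights"/"light" or "shutter"/"shutters" all match.
ITEM_KEYWORDS = {
    ("kitchen", "light"): "Kitchen_Lights",
    ("living", "room", "shutter"): "LivingRoom_Shutter",
}

# Action words mapped to openHAB command strings (assumption).
ACTION_KEYWORDS = {
    "on": "ON", "off": "OFF",
    "up": "UP", "down": "DOWN",
    "open": "UP", "close": "DOWN",
}

def spot(text):
    """Return (item, command) if both an item and an action are found, else None."""
    words = text.lower().split()

    def has(key):
        return any(w.startswith(key) for w in words)

    item = next((name for keys, name in ITEM_KEYWORDS.items()
                 if all(has(k) for k in keys)), None)
    action = next((cmd for word, cmd in ACTION_KEYWORDS.items()
                   if word in words), None)
    return (item, action) if item and action else None
```

With this approach, “kitchen lights off”, “lights in kitchen off”, and “roller shutter living room down” all resolve to the same item/command pairs, while a sentence that matches no item (e.g. “turn the music on”) is simply ignored.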

I’m waiting for the Google Home openHAB integration at the moment. Looking forward to trying that.

Hi all,

Just had a quick read of this long discussion, and I have one question before I continue: for all this to work, are you using the openHAB applet in IFTTT? And if so, does that require using myopenhab.org? Or are you using IFTTT directly back to your openHAB system at home?


What do you mean by “force value”, please?

Is there any way to have IFTTT send some kind of device id/name along with the command? I had a mechanism with Alexa where I could identify the specific echo used and then assume the room based on that. I’ve not found that with this. Otherwise it works just as documented.