Using Google Home/Nest as microphone/speaker

  • Platform information:
    • Hardware: armv7l
    • OS: Linux raspberrypi 4.9.41-v7+ #1023 SMP Tue Aug 8 16:00:15 BST 2017 armv7l GNU/Linux
    • Java Runtime Environment: openjdk version "11.0.3" 2019-04-16 LTS
      OpenJDK Runtime Environment Zulu11.31+16-CA (build 11.0.3+7-LTS)
      OpenJDK Client VM Zulu11.31+16-CA (build 11.0.3+7-LTS, mixed mode)
    • openHAB version: 3.1.0 - Release Build
  • Issue of the topic:
    Hey, I would like to use my Google Home and Nest devices as speakers and microphones. For some applications, I would like to set up voice commands that trigger rules.

For example, "Ok Google: I'm leaving home" → the hallway lights would turn on IF it's nighttime.

Is there a way of achieving that?

Thanks!

A proxy item is the way to go here. Just create an item that is not linked to any device but give it the Google Assistant switch metadata (Google won't know the difference). You can then control that item via Google commands, either as direct commands or as part of scenes.

On the OH side you just need to set up whatever rules you want that respond to changes in the proxy item. Then using Google to change the proxy item triggers any OH actions that you wish.
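A minimal sketch of what that could look like in .items/.rules files (the item names, the ga metadata value, and the night-time check are assumptions for illustration, not something from this thread):

// proxy.items - a hypothetical proxy item, not linked to any channel,
// exposed to Google Assistant via the "ga" metadata
Switch LeavingHome_Proxy "Leaving Home" { ga="Switch" }

// leaving_home.rules - react to the proxy item on the OH side
rule "Leaving home voice command"
when
    Item LeavingHome_Proxy changed to ON
then
    // hypothetical night-time check; this could also come from an Astro binding item
    if (now.getHour >= 20 || now.getHour < 6) {
        Hallway_Light.sendCommand(ON)
    }
    // reset the proxy so the next "I'm leaving home" can trigger the rule again
    LeavingHome_Proxy.postUpdate(OFF)
end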

That’s a great idea! I’ve done that and it works now :slight_smile:

I have a few more questions/issues:

  1. When the rule finishes, I would like to be notified through the Google Home's speaker, but currently that doesn't work (I have an audio action that is supposed to say something on my Google Home's speaker when the rule is done, but nothing is played).
  2. Is it possible to use only OH and use Google Home and Nest purely as speakers/microphones? I would like to manage my smart home in one place instead of having to have "Items" in both applications.
  3. Is it possible to choose Philips Hue scenes from OH?

Thanks a lot for the help!

  1. I use the assistant-relay project for this (it's discontinued but still does the trick), so you can give feedback on your speakers when the action is done, either as a broadcast on all speakers (which works with horrible delays, so you'll be hearing a lot of echoes) or as a targeted broadcast to specific rooms. Worth mentioning: with each message you'll hear additional info, like "Message from $username + $message" (in my case "Message from openhab + $message").
  2. You can define all of your items in openHAB, add metadata so Google Home properly recognizes the items, and then link your openHAB instance via openHAB Cloud to do the job (there's a small sketch after this list).
  3. I don't own Philips Hue, but scenes are covered in the documentation of the Philips Hue binding.
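As a rough illustration of that metadata in an .items file (the item names and room hints are made up; the ga values and the roomHint option follow the openHAB Google Assistant docs as I understand them):

// google.items - hypothetical items exposed to Google Home via "ga" metadata
Switch  Kitchen_Light      "Kitchen Light"   { ga="Light" [ roomHint="Kitchen" ] }
Dimmer  Bedroom_Light      "Bedroom Light"   { ga="Light" [ roomHint="Bedroom" ] }
Switch  LeavingHome_Proxy  "Leaving Home"    { ga="Switch" }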
  1. If you install the Chromecast binding and add your Google speakers as Things, they are registered with OH as audio sinks, and you can then send sounds and text-to-speech to them (see the sketch after this list).
  2. I'm not sure what you mean by only using Home and Nest as speakers/microphones, but I suspect you mean: can you bypass the built-in assistant and send the audio commands directly to OH? The answer is, I don't think so. The Android OH app will let you speak into your phone, use Google speech-to-text, and send the text command to OH, but I don't think there's any way to disable the assistant on Google speakers.
  3. Yes, it is possible. I don't use Hue myself, but I've seen many examples of people working it out here on the forums, and I believe the basics are laid out in the Hue binding docs.
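A quick sketch of using such an audio sink from a rule, assuming a Chromecast audio Thing with UID chromecast:audio:kitchen (the Thing UID, the voice, and the sound file are placeholders):

// say() uses the configured text-to-speech service on the given sink;
// playSound() plays a file from the conf/sounds folder
say("The washing machine is done", "voicerss:enUS", "chromecast:audio:kitchen")
playSound("chromecast:audio:kitchen", "doorbell.mp3")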

As for me, there is a huge flaw when using the Chromecast binding (and it is not a flaw of the binding, but of Google Home): whenever you send any message to your Google Home devices, any stream that was playing gets cancelled and will not resume after the message. That's the reason I've utilized assistant-relay.
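For reference, a hedged sketch of triggering such a broadcast from a rule, assuming assistant-relay's default port (3000) and its /assistant endpoint as described in that project's docs; the host address, user, and message are placeholders:

// send a broadcast through assistant-relay via its HTTP API
val json = '{"command": "The dishwasher is finished", "user": "openhab", "broadcast": true}'
sendHttpPostRequest("http://192.168.1.50:3000/assistant", "application/json", json)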

Thanks for the reply!

  1. That might actually work, maybe with a bit of tweaking and playing around so that the device that received the command is the device that outputs the message. Also, it's kind of annoying that there's a pre-built prefix to the output message. I'll look into it, thanks!
  2. Yup, that's exactly what I did, also with the help of @JustinG's comment about the proxy item.
  3. The documentation is kind of hard for me, it's not really straightforward. I'm able to retrieve the Hue scenes, but in MainUI I see no option to actually set the scene value.

Thanks for the reply!

  1. Won't this mean that I have to have my TV on at all times in order to receive the messages?
  2. I see, so I'm left with two options: either have it all controlled via Google Home services, with the addition of OH proxies for better conditions and integrations, or entirely remove the Google Home devices and buy just microphones and speakers to receive and transmit messages from and to them.
  3. As I said to @ypyly in my response:

The documentation is kind of hard for me, it's not really straightforward. I'm able to retrieve the Hue scenes, but in MainUI I see no option to actually set the scene value.

Proper naming of the items and reading this thread should give you a lot of answers.
In short, I name all of my items as Room_Device_SpecificItem,
so I can actually use something like:

rule "Presense Changed - Light"
when
    Member of MotionPresense changed
then
    val room = triggeringItem.name.split("_").get(0)
    val pause = LightMotionControls.members.findFirst[ p | p.name == room + "_Motion_Pause" ] 
    val override = LightMotionOverride.members.findFirst[ ov | ov.name == room + "_Manual_Override"] 

to get the actual location of the triggering item and to be able to control other items in that room.
Also, with such an approach you only need to write the rule once and can add more of the same devices without needing to extend the rules.

P.S. This response was written before I saw the latest message.

  1. So, the only Chromecast device that you have would be the TV?
  2. It really depends on how heavily you rely on Google Home's features. I've heard a lot of good things about Sonoff devices.
  3. If you want to work with your home automation beyond just switching lights on and off, you'll need to start learning the basics, as well as learning to properly read documentation and search for answers.

That’s pretty cool! I’ll use that :slight_smile:

  1. Not at all. This is a small problem with Google's overuse of the term Chromecast. It is true that the hardware pieces that plug into TVs are Chromecast devices, but more generally the term Chromecast refers to Google's entire protocol that runs on all its smart devices. So the Chromecast binding doesn't connect through any one device to all the others; it connects directly to each Google device, speakers and TV units alike.

  2. If you are just intent on voice control then there are several non-Google options, but if you wish to integrate the Google speakers, then yes, your options are somewhat limited.

  3. The docs are always up for improvement. If you can identify particular areas where they are insufficient, or even better offer suggestions for specific improvements, by all means open a new thread, or best of all, click on the link at the bottom of any doc page to join the creation of or discussion about them on GitHub.

  1. Wow, that was really insightful! I didn't know that. So basically you're saying that Chromecast is a protocol that is ALSO present on Google Home and Nest Mini, among other Google devices? I'll look into it!
  2. Well, I'm not really sure what my intents are yet; I'll know them once I know which technologies are laid out for me (or not?). But basically, yes, I want everything local, everything cheap, and everything customizable.
  3. It actually might be a good idea. I’ll try that also.

Thanks for the reply it was super helpful!

In OH 3 you don't need to include the location in the Item name any more. You can use the semantic model and one of the new semantic model Actions to determine which location an Item is in (see Actions | openHAB). The same can be done for equipment.
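A hedged sketch of what that could look like in Rules DSL, assuming the semantic model Actions behave as described on the Actions docs page and that the items are part of a semantic model (the item and group names here are placeholders):

rule "Presence changed - semantic lookup"
when
    Member of MotionPresence changed
then
    // getLocation() returns the Location group Item that the triggering item belongs to
    val location = getLocation(triggeringItem)
    if (location !== null) {
        logInfo("presence", "Motion changed in " + location.name)
    }
end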

Just for clarification, if we're talking about Google Home devices, we have:

  1. Chromecast - video device
  2. Chromecast - audio device
  3. Chromecast - audio group

Right? And the semantic model approach only applies if you don't stick with all the configuration in text files, or am I missing something?
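For completeness, a hedged sketch of those three Thing types in a .things file (the Thing IDs and IP addresses are placeholders; the parameter names follow the Chromecast binding docs as I understand them, and audio groups reportedly need their own port):

// chromecast.things - one Thing per Google device
Thing chromecast:chromecast:livingroomTV "Living Room TV"  [ ipAddress="192.168.1.20" ]
Thing chromecast:audio:kitchenSpeaker    "Kitchen Speaker" [ ipAddress="192.168.1.21" ]
Thing chromecast:audiogroup:wholeHouse   "Whole House"     [ ipAddress="192.168.1.22", port=42139 ]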

I don't recommend it, as it's just too much work and looking things up, but it is very possible, and many people do set up a semantic model in .items files. It's just Group memberships and Item tags, nothing more really.
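A hedged sketch of what that can look like in an .items file (the names are placeholders; the Location/Equipment/Point tags follow the semantic model docs as I understand them):

// semantic.items - semantic model built only from Group membership and tags
Group   gGroundFloor      "Ground Floor"                        ["GroundFloor"]
Group   gLivingRoom       "Living Room"    (gGroundFloor)       ["LivingRoom"]
Group   gLivingRoomLight  "Ceiling Light"  (gLivingRoom)        ["Lightbulb"]
Switch  LivingRoom_Light  "Power"          (gLivingRoomLight)   ["Switch", "Power"]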

OK, let's not flood this thread with off-topic discussion, but I'd definitely like to talk with you and others about that, since I prefer all the configuration to be text-based.

You're right, it is off-topic, but I really think there's a place for this discussion in another thread. Could you maybe open one, have the discussion there, and link it back to this thread? I would love to read your thoughts on this matter.