Chromecast / Google Home - automatically start playing a stream when motion is detected in a room

In this example, a Google Home device, which acts as a Chromecast Audio device and is located in the bathroom of the house, starts playing a preferred stream as soon as someone enters the room and motion is detected.

The challenge is not to restart the stream every time new motion is detected while the device is already playing something (e.g. because the stream was already started, or a different stream [e.g. via an audio group] is already playing). The rules below handle this by checking the state of the Player item and the idling channel.

What you need for this tutorial:

  • OH 2 & Chromecast Binding set up
  • Google Home Mini or other Chromecast Audio device in your selected room
  • Motion Detector in your selected room
  • A URL you want to stream or alternatively an mp3 file stored in the sounds subfolder of your configuration (i.e. /etc/openhab2/sounds/mysound.mp3) - see the quick test below
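
Before wiring up any rules, you can verify that the stream or sound file actually plays. A minimal sketch, assuming the Bath Thing defined below (put it in a temporary .rules file and remove it afterwards):

rule "Quick audio sink test"
when
    System started
then
    // play a web stream on the Bath device ...
    playStream("chromecast:audio:Bath", "http://st01.dlf.de/dlf/01/128/mp3/stream.mp3")
    // ... or alternatively an mp3 from the sounds folder
    //playSound("chromecast:audio:Bath", "mysound.mp3")
end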

Thing definition (Google Home / Chromecast device):

file: chromecast.things


chromecast:audio:Floor         [ ipAddress="192.168.1.220" ]
chromecast:audio:Office        [ ipAddress="192.168.1.217" ]
chromecast:audio:Bath          [ ipAddress="192.168.1.216" ]

/* Chromecast audio group for my Google Home Minis */
/* you must define this group in your Google Home app on the same device you point to here */
chromecast:audiogroup:House    [ ipAddress="192.168.1.220", port=42576]

Now set up your groups & items for the Chromecast device:


/* Assistants (Alexa, Google Home, ...) */

Group gAssistenten "Assistants" <computer>

// Google Home Mini Bathroom - minimum items required for this tutorial

Dimmer Google_Home_Bath_Volume           "GH Bath Volume"               <soundvolume> (gAssistenten) { channel="chromecast:audio:Bath:volume" }
Player Google_Home_Bath_Player           "GH Bath Player"                             (gAssistenten) { channel="chromecast:audio:Bath:control" }
Switch Google_Home_Bath_Idle             "GH Bath Idle"                     <network> (gAssistenten) { channel="chromecast:audio:Bath:idling" }

and here are the other items I make use of, defined in my items file:

// Power Switch - You need to adapt this to your setup - this is the power switch I turn on in case the Google Home is not already switched on
Switch Switch_DG_Bath_Google_Home        "Google Home Bath"         <recorder>			(gAssistenten) [ "Switchable" ] 	{ knx="<6/0/1" }

// Motion Detector
Switch Motion_DG_Bath   "Motion Bath"           <presence>      (Motion_DG)                  { knx="<16/0/0" }

// Groups for motion detector
Group    Motion         "Motion"                <presence>      (All)
Group    Motion_DG      "Motion Upper Level"    <presence>      (Motion)

… and finally, the rules that turn on the Google Home and start the stream (motion_sensors.rules):


var Timer TimerPlayStream = null

// this rule will turn on the GH device and start a stream

rule "Motion Bath DG"
  when
     Item Motion_DG_Bath received command 
  then
     if(receivedCommand===ON) {
 		// Motion has been detected in the bathroom 
 		
 		if (Switch_DG_Bath_Google_Home.state===OFF){
 			//switch on Google Home (power switch)
 			sendCommand(Switch_DG_Bath_Google_Home, ON)
			
			// because the booting of the Google Home takes in my case between 25 and 45 seconds we need to wait before we can start a stream
			if (TimerPlayStream!==null)
	        		TimerPlayStream.cancel()
				TimerPlayStream = createTimer(now.plusSeconds(20)) [|
				    logInfo("RULE.SWITCH_ON_STREAM", "Timer - Google Home Bath Idle status -> " + Google_Home_Bath_Idle.state.toString())
					while(Google_Home_Bath_Idle.state===ON)	{
                        logInfo("RULE.SWITCH_ON_STREAM", "While Loop - Google Home Bath Idle status -> " + Google_Home_Bath_Idle.state.toString())
						
						// we now try to start the stream, in case this is sucessful the Google_Home_Bath_Idle will switch to OFF (and we leave the While Loop)
						// this example will start the german radio station Deutschlandfunk
					 	playStream("chromecast:audio:Bath","http://st01.dlf.de/dlf/01/128/mp3/stream.mp3")
						logInfo("RULE.SWITCH_ON_STREAM", "Stream send now sleep 1.5 seconds" )
						Thread::sleep(1500)
					}
				] 
 		} 
		// in case the power is already on and our Google Home should be started already, we only want to start the stream
		else {
			// Only start the stream if it is not already playing - so we check the state of the Player item 
			// we need to check != PLAY because when you startup OH the Play may not be initialized and thus the item state is not PAUSE
			if (Google_Home_Bath_Player.state!==PLAY) {
			
	        	// start the stream because Google Home is not playing anything right now -> Deutschlandfunk
    	    	playStream("chromecast:audio:Bath","http://st01.dlf.de/dlf/01/128/mp3/stream.mp3")
				// alternatively you can start your own mp3 file
        		//playSound("chromecast:audio:Bath","mysound.mp3")

				// I was not successful using the say command on a specified audio sink - per documentation it should work like this:
 				//say("Hello, welcome in the bathroom","voicerss:deDE","chromecast:audio:Bath")        
			}
 		}
     } else if(receivedCommand===OFF) {
		// at this time we do not do anything here
	}
  end


// when someone turns the power off, we force the Player item to PAUSE, as it does not get initialized in the PAUSE state while the Google Home is booting
rule "Google Home Bath Power Off"
when
    /* Google Home power switched off */
    Item Switch_DG_Bath_Google_Home changed from ON to OFF
then
    // need to set the player to PAUSE, so next time it is in the right state when we start
    sendCommand(Google_Home_Bath_Player, PAUSE)
end

/* Is the Google Home busy? - just logging the state changes - not needed for the example to work */
rule "GH Bath Idle has changed"
when
    Item Google_Home_Bath_Idle changed
then
    logInfo("RULE.AUDIO", "Google Home Bath Idle changed! -> " + Google_Home_Bath_Idle.state.toString())
end

Please note that this example does not stop the Google Home stream again; this can be done easily with a timer tied to the motion detector (i.e. rescheduled every time motion switches off) or with a timer started when the stream is started (see the sketch below).
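
A minimal sketch of the first variant, assuming the items from above and a new TimerStopStream variable (the 10-minute delay is just an illustrative value):

var Timer TimerStopStream = null

// stop playback some time after the last motion ended
rule "Motion Bath DG ended"
when
    Item Motion_DG_Bath changed from ON to OFF
then
    TimerStopStream?.cancel()
    TimerStopStream = createTimer(now.plusMinutes(10)) [|
        sendCommand(Google_Home_Bath_Player, PAUSE)
    ]
end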

I translated my rules/items/things from German to English for this tutorial and hope I did not miss anything, so that the setup works out of the box.
If you find an error, please let me know so that I can update this tutorial.
If you have further suggestions, let me know and I can try to include them here as well. (e.g. I was thinking of playing a different stream depending on the time of day, or changing the volume of the stream based on the time [no loud music after 10:00 pm] - see the sketch below)
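
A minimal sketch of the volume idea, assuming the items from above (the hours and volume levels are just illustrative):

// pick a quieter volume late in the evening, before the stream is started
rule "Set Bath volume by time of day"
when
    Item Motion_DG_Bath received command ON
then
    if (now.getHourOfDay >= 22 || now.getHourOfDay < 7) {
        Google_Home_Bath_Volume.sendCommand(20)   // quiet at night
    } else {
        Google_Home_Bath_Volume.sendCommand(50)   // normal during the day
    }
end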

Hope this helps others.


@gloeckner_ronny

UPDATE: I found a way to get back to my desired abstraction. I can declare an easy-to-identify Player Item name and link it to the associated Chromecast Thing. If I do this for each Chromecast, then each will have a “named” link. I can then retrieve the links (http://localhost:8080/rest/links), look for my specific named Chromecast Item, and retrieve the ThingID.


I am looking at transitioning from a Raspberry Pi Flask application that casts audio to a Chromecast or Google Home to using openHAB as my notification server.

I installed the Chromecast binding and had PaperUI scan for devices on my network. I now have a list of openHAB Things for each of my devices and audio groups. This process has allowed me to add my devices without having to create a things file where the IP addresses are hard coded. When PaperUI adds Chromecast sinks by scanning the network, the sinkIDs have a very long alphanumeric string as the ThingID (e.g., chromecast:chromecast:d7bb05e22908d203fb79aea9a8509072).

When I inspect the Thing configuration, the audio devices have two channels: control and volume. I was able to create Player and Dimmer Items which I can link to the control and volume channels respectively.

I saw in the openHAB documentation that the multimedia functions (playSound, playStream, say, etc.) require an audio sink as a parameter. Do you know if there is a way to abstract the use of the Chromecast sinkID (e.g., a linked Item)? I would prefer to use a nicely named entity rather than an unruly ThingID. Do you happen to know what the options for passing the sink parameter are?

Regards,

Mike

Hi Mike

I do something very similar.

The info on this page will help.

Specifically, the Karaf console command to list all audio sinks:

smarthome:audio sinks

Yeah, the Karaf console gives the same sink names. But those are still difficult to visually associate with the actual “device”.

My goal is to use a “human readable” name and be able to translate that into the sink name for use in the multimedia function calls. Via PaperUI I allowed the Chromecast binding to scan and find all my Chromecast devices. I then went into the Things config and linked an Item to each Thing’s Control Channel.

Group gChromecasts

Player      myAudioGroup
            "My Audio Group" (gChromecasts)

My approach is still evolving as I am trying to “globalize” my Chromecast notification function.

// Look up the Chromecast audio sink for a given Item name by querying the REST API
// for the Item's channel link and stripping the channel ID from the channelUID
val Functions.Function1<String,String> getChromecastSink = [ String itemName |
    try
    {
        // fetch all Item-to-channel links as JSON
        val allLinks = sendHttpGetRequest("http://localhost:8080/rest/links")
        // select the channelUID of the link belonging to this Item
        var String filter = "$..[?(@.itemName=='"+itemName+"')].channelUID"
        var output = transform("JSONPATH", filter, allLinks)
        // keep only the first three segments (binding:type:id) - this is the sink ID
        var chromecastSink = output.split(":").get(0)+":"+output.split(":").get(1)+":"+output.split(":").get(2)
        return chromecastSink
    }
    catch(Exception e)
    {
        logError("notifications", "Failure in getChromecastSink: " + e.toString)
        return null
    }
]

var String itemName = null
var String broadcastMessage = null

rule "Broadcast Notification"
when
    Item MyBroadcastNotification received update
then
    try
    {
        // If it's between 8am and 10pm, broadcast a message to the whole house
        if  ((now.getHourOfDay > 7)  && (now.getHourOfDay < 22))
        {
            itemName = MyBroadcastNotification.state.toString
            broadcastMessage = null

            if (itemName.contains(";"))
            {
                broadcastMessage = itemName.split(";").get(1)
                itemName = itemName.split(";").get(0)
            }

            val audioSink = getChromecastSink.apply(itemName)
            playSound(audioSink, "doorbell.mp3")
            if (broadcastMessage !== null)
            {
                createTimer(now.plusSeconds(3)) [ |
                    say(broadcastMessage, "voicerss:enUS", audioSink)
                ]
            }
        }
    }
    catch(Exception e)
    {
        logError("notifications", "Failure in Broadcast Notification: {}", e.toString)
    }
end

rule "Broadcast a notification"
when
    "an event is triggered"
then
    MyBroadcastNotification.postUpdate("myAudioGroup;This is a test")
end

Mike


The 2.4.0-SNAPSHOT build includes PR ESH #6043, which should make it easier to associate the sink names with your actual Chromecast.

openhab> audio sinks
  All amps (chromecast:audiogroup:928bff2a-3d13-4e1b-bdc2-fcefb70483c6)
  Amp 1 (chromecast:audio:7fda5762a13ce62aea8f35e74a0cc9bd)
  Amp 2 (chromecast:audio:176fc3d5dcae6d7fa49c407ed836fe3a)
  Google Home (chromecast:chromecast:9ee48f2e334e85865a2c80a996a84654)
  System Speaker (javasound)
* System Speaker (with mp3 support) (enhancedjavasound)
  TV (chromecast:chromecast:60fb8f54e9f427cf7666dcb17718a6bb)
  Web Audio (webaudio)

Then you could create a variable with a human-readable name in your rules and assign it the sink ID as its value, as in the sketch below.
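
A minimal sketch, using one of the sink IDs from the listing above (the Doorbell item and doorbell.mp3 are just illustrative):

// human-readable handle for the cryptic sink ID
val String ampOneSink = "chromecast:audio:7fda5762a13ce62aea8f35e74a0cc9bd"

rule "Doorbell chime on Amp 1"
when
    Item Doorbell received command ON
then
    playSound(ampOneSink, "doorbell.mp3")
end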


Hi

That sounds like a great idea.

You might want to build in a delay slightly longer than the chime duration, as I’ve noticed that initial commands are ignored if a second command is issued too soon.

For example:

I tried a say instruction, followed by a “play stream”.

I heard the Chromecast activating, but it rarely played the TTS.


Yup - I’d already run into the fact that the sounds don’t queue.

if (broadcastMessage !== null)
{
    createTimer(now.plusSeconds(3)) [ |
        say(broadcastMessage, "voicerss:enUS", audioSink)
    ]
}

Mine works pretty seamlessly. I installed the VoiceRSS voice add-on, got my API key, and edited the cfg… and it worked. Having said that, I’ve always played the chime first, followed by the TTS. Maybe I “warmed up” the Chromecast (I hear the Google Home “bling”, then the chime, then the TTS). Perhaps if I sent the TTS first it would get “lost”.

Mike

@wborn

Now that I have the latest Chromecast binding snapshot, how are these human-readable sink names associated? In the Karaf console, the audio sinks command still lists only the “cryptic” sink IDs.

I named each thing in PaperUI… but I’d done this when I was using the 2.3.0 binding. Do I need to refresh somehow?

Mike

The class providing the audio sinks command is not part of the Chromecast binding, so that’s why you don’t see the improved output if you only update the binding. It will only be available if you fully upgrade to 2.4.0-SNAPSHOT or newer.

Ah. I’m going to put off upgrading OH until we work through the Chromecast testing. For now I have the Link Item approach to find my audio sink.

Are you suggesting to manually get the sinks from the console and hardcode them into a rule? I was hoping that this PR might expose the AudioManager.getAllSinks() method for use through the Automation API (I use JSR223-Jython), but I haven’t been able to get this to work.

Yes, my comment was about the Karaf console command. I don’t think there is a way to get them programmatically in such rules. I haven’t tried it with a JSR223 engine myself.


@5iver @wborn,

I declared an Item for each of my Chromecasts and linked them to the auto-discovered Things in PaperUI. This let me have a “known” name and label for each of my Chromecasts. It also let me declare a Group for my Chromecasts whose members I can loop through in rules logic, as in the sketch below.
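
A minimal sketch of such a loop, assuming the gChromecasts Group of Player Items from earlier (the trigger and action are just illustrative):

rule "Pause all Chromecasts at night"
when
    Time cron "0 0 22 * * ?"
then
    // loop over every Chromecast Player item in the group
    gChromecasts.members.forEach [ player |
        player.sendCommand(PAUSE)
    ]
end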

Mike

The format of the sinks is consistent, so I made Items to hold the UUIDs of my speakers, and Items for the URI to send to the speaker (this is openhab2-Jython)…

from org.eclipse.smarthome.model.script.actions.Audio import playStream

# derive the sink ID from the _UUID Item that pairs with the triggering _PlayURI Item
audioSink = "upnpcontrol:upnprenderer:{}".format(str(items[event.itemName.replace("_PlayURI","_UUID")]))
playStream(audioSink, str(event.itemCommand))
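
For context, a minimal sketch of the Item pair this script assumes (the names are illustrative; only the _UUID/_PlayURI suffix convention matters):

String LivingRoom_UUID      "Living Room Renderer UUID"     // holds the UPnP UUID of the renderer
String LivingRoom_PlayURI   "Living Room Play URI"          // send a URI command to this Item to cast it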

Similar conceptually - using Items to find the audio sink. My Item declarations are not assigned a hardcoded value. Rather, I use PaperUI to link them to the auto-discovered audio sink Things. In this manner, I can search my links (http://localhost:8080/rest/links) for my “known” Item name (or Item label) and grab the UUID from the link information returned by the REST API call.

Many ways to skin this cat :wink:


Could you please explain this more… maybe a screenshot? I don’t seem to have any audio sink Things. Are these binding specific (I’m not using a Google Home/Chromecast)?

EDIT: Never mind… I see from your code that you’re just parsing the channelUID of an Item linked to a Google Home/Chromecast Thing. Calling it an audio sink Thing confused me… there is no such Thing (pun intended).

Ah! Since this topic is titled “Chromecast / Google Home…” I made a bad assumption that you were referring to a Google device. The Chromecast binding auto discovers Chromecast devices (including Google Home devices) and adds them as openHAB Things.

Is there an openHAB binding for the class of audio devices you are dealing with?

Mike

This is a generic UPnP binding that includes functionality to use a renderer as an audio sink. Sorry to hijack the thread… the discussion about getting an audio sink from an Item name caught my eye, as I was also trying to do the same programmatically.

The docs on that binding seem pretty sparse. I can’t find information on its Thing declaration or whether it supports auto-discovery. Nevertheless, for the case of Things present in PaperUI…

// Look up the audio sink for a given Item name by querying the REST API
// for the Item's channel link and stripping the channel ID from the channelUID
val Functions.Function1<String,String> getChromecastSink = [ String itemName |
    try
    {
        // fetch all Item-to-channel links as JSON
        val allLinks = sendHttpGetRequest("http://localhost:8080/rest/links")
        // select the channelUID of the link belonging to this Item
        var String filter = "$..[?(@.itemName=='"+itemName+"')].channelUID"
        var output = transform("JSONPATH", filter, allLinks)
        // keep only the first three segments (binding:type:id) - this is the sink ID
        var chromecastSink = output.split(":").get(0)+":"+output.split(":").get(1)+":"+output.split(":").get(2)
        return chromecastSink
    }
    catch(Exception e)
    {
        logError("my_log", "Failure in getChromecastSink: {}", e.toString)
        return null
    }
]

var String itemName = null
var String broadcastMessage = null

rule "Broadcast Notification"
when
    Item messageBroadcast received update
then
    try
    {
        itemName = messageBroadcast.state.toString

        broadcastMessage = null
        if (itemName.contains(";"))
        {
            broadcastMessage = itemName.split(";").get(1)
            itemName = itemName.split(";").get(0)
        }

        val audioSink = getChromecastSink.apply(itemName)
        playSound(audioSink, "my.mp3")
        if (broadcastMessage !== null)
        {
            createTimer(now.plusSeconds(3)) [ |
                say(broadcastMessage, "voicerss:enUS", audioSink)
            ]    
        }
    }
    catch(Exception e)
    {
        logError("my_log", "Failure in Broadcast Notification: {}", e.toString)
    }
end

String      messageBroadcast
            "Notification Broadcast Information"

Player      backBedroom_audioPlayer
            "Back Bedroom Clock" (gChromecasts)

Once the backBedroom_audioPlayer Item is declared, I add a link to the Control channel in the PaperUI Thing configuration dialog. Since that channel requires a Player Item, the Link dialog will only present Player Items in the selection list. Once you select the Item and click Link, the link will be set after a few seconds.


I also ran into the non-blocking issue with the say command. My workaround is to invoke the say command first and then, in a while loop, wait until the Chromecast’s title field is empty. If you are using Python, you can use something like this:

# Play the given message and wait till it finishes (up to
# MAX_SAY_WAIT_TIME_IN_SECONDS seconds). Afterward, pause the player.
# After this call, isActive() will return False.
# @param message string the message to tts
def playMessage(message, castItemPrefix = 'FF_GreatRoom_ChromeCast'):
    say(message)

    # Wait until the cast is available again or a specific number of seconds
    # has passed. This is a workaround for the limitation that the openHAB
    # say method is non-blocking.
    seconds = 2
    time.sleep(seconds)
    while seconds <= MAX_SAY_WAIT_TIME_IN_SECONDS:
        if hasTitle(): # this means the announcement is still happening.
            time.sleep(1)
            seconds += 1
        else: # announcement is finished.
            seconds = MAX_SAY_WAIT_TIME_IN_SECONDS + 1

    pause(castItemPrefix) # the Player needs to be manually reset to PAUSE.

Source: https://github.com/yfaway/openhab-rules/blob/master/automation/jsr223/aaa_modules/cast_manager.py