OrangeAssist - Google Assistant Integration


What this does:
This is basically a Google Home (GH) Hub equivalent for HabPanel. It not only acts as an “audio-visual element,” but fully integrates with Google Assistant, as if you were in front of a Google Assistant device (Google Home, Home Mini, Home Hub, etc.).

We have a combination of 8 Google devices (Home, Mini, Hub) throughout the house, and they work beautifully with openHAB, with a few exceptions:

  • You have a device supported by Google Assistant/Device but not openHAB
  • You have an openHAB device not supported by Google Assistant
  • You want Google Assistant to control security devices, rather than openHAB

So basically, I have a few security locks that are supported by Google Home but currently have no openHAB binding. I can say “Lock the front door” to GH and it will lock it, but I can’t lock them through an openHAB rule. That’s how/why I came up with OrangeAssist.

I created this in Python using the Google Assistant Service, since the Library version does not support text input yet. I didn’t want it to run on the same hardware as OH yet, so it’s running on a tiny Orange Pi on headless Armbian. This Orange Pi also hosts my NCID server for my caller ID + HabPanel integration.

Many of us here use the Chromecast Binding as an audio sink for OH, but we all know that doing so basically STOPS whatever is currently playing and does not resume it. With this new integration, instead of using say() in my rules, I simply send “broadcast XXXX” to OrangeAssist, and I can even use the built-in GH chained commands like “broadcast time to eat then turn on kitchen lights”.

A simple rule to send the command to OrangeAssist:

val orangeassistPostURL = "http://lucky:charms@orangeassist:5000/assist/ask?html=1"
val timeoutSec = 10

rule "Send OrangeAssist Command"
when
	Item orangeassistcmd received command
then
	// Use receivedCommand: the item's state may not have updated yet when the rule fires
	var result = sendHttpPostRequest(orangeassistPostURL, "application/json", receivedCommand.toString, timeoutSec * 1000)
	orangeassistcmdResult.postUpdate(result)
	orangeassistcmdSwitch.sendCommand(ON)
end

As you may have noticed, I wrote the OrangeAssist REST API with Basic Auth as simple security. Some might ask about the popup/slider: that HTML is provided by Google itself as part of the SDK/API response.
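If you want to test the endpoint outside of openHAB first, here is a minimal sketch using Python’s requests library. The hostname, port, and credentials simply mirror the example rule above; substitute your own instance’s values:

import requests

# Endpoint and credentials below mirror the example rule above; substitute your own.
url = "http://orangeassist:5000/assist/ask"
payload = {"request": "broadcast time to eat then turn on kitchen lights"}

# Basic Auth is only enforced when username/password are set in config.json
resp = requests.post(url, json=payload, auth=("lucky", "charms"), timeout=10)
print(resp.json().get("text"))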

What I really like about this is that it opens up EVERY QUERY you can think of, and it will answer back just like a Google Home device would. The difference is that you can automate those queries and use text as input instead of your voice.

Here’s one of my favorites:
When I wake up (usually at 4 AM), I go to the kitchen to prepare my breakfast. My security cameras (Blue Iris) detect motion and trigger OH. OH turns on the kitchen lights (Z-Wave). After turning on the lights, OH sends “how’s my day” to OrangeAssist, which then triggers a routine on my Google Home device. It tells me the weather, my appointments, how long my drive to work is, etc.

You can also REGISTER the instance and OrangeAssist will show up as a device under your Google Assistant app:

As for creating a binding, I’m not too keen on doing it just yet. The Google Assistant SDK is not that mature and still keeps changing.

Don’t mind the blocky gradient (it’s a GIF with limited colors).

Don’t worry. If I don’t create the binding, I will at least create a HOW-TO.

OrangeAssist

Steps

  1. Familiarize yourself with Google Assistant SDK
  2. Configure a Developer Project and Account Settings. The link already shows you the steps, but here they are anyway.
    1. Go to Google Actions
      1. If you don’t have an existing project, click Add/Import
      2. If you have an existing project, just click it
    2. Enable Google Assistant API
      1. Make sure your project is selected (drop down on top)
    3. Configure Consent Screen
    4. Configure Activity Controls (IMPORTANT!!) Make sure these are enabled:
      • Web & App Activity
      • Include Chrome history and activity from sites, apps, and devices that use Google services
      • Device Information
      • Voice & Audio Activity
    5. Register your device
      1. REMEMBER the Model ID. We will need it later
  3. Clone the repo
  4. I advise that you create a virtual environment for this.
    1. Follow VENV Instructions if you have not created a virtual environment before.
    2. Activate the environment.
  5. Install the required libraries:
    1. Follow Configure a new Python virtual environment
    2. Install more packages
      sudo apt-get install portaudio19-dev libffi-dev libssl-dev libmpg123-dev
    3. pip install -r requirements.txt
  6. Assuming you have configured everything, you need to create the credentials file (see the note after these steps):
    1. Follow the instructions from this Google page
    2. A credentials.json file is usually saved in the same directory as the tool, or in a .config folder. Remember where this is. You can keep the file there, or put it in the same folder as or.py (our code)
  7. Run the code!
    python -m or
  8. Open helpers/tester.html
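A note on step 6: at the time of writing, Google’s page has you generate the credentials with the google-oauthlib-tool utility that ships with the SDK samples, roughly like this (flags quoted from memory of Google’s docs, so verify them there):

google-oauthlib-tool --scope https://www.googleapis.com/auth/assistant-sdk-prototype --save --headless --client-secrets /path/to/client_secret_XXX.json

With --save, the tool writes credentials.json under ~/.config/google-oauthlib-tool/ by default, which is the “.config folder” mentioned in step 6.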

Device Registration

If you have done the above, your server can interact with the Google Assistant SDK, but that is all. If you actually want your assistant to show up in the Google Home app, you have to REGISTER your device.
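Registration can be done either from the Device registration section of your project in the Actions Console, or with the googlesamples-assistant-devicetool utility included with the SDK samples (see Google’s device registration page for the exact invocation). Whichever route you take, the Model ID you noted earlier has to match the device_model_id you put in config.json.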

Notes

  1. This is written for Python 3.7. If there is a need for a lower version, I am willing to help refactor.

Empty responses

  1. Google Assistant sometimes returns an empty response, depending on the query. In some cases the response is empty but audio is still played on the device. Try different screen_modes to see which best suits your query. This is a known issue which currently has no fix.

REST API

Request

1. Send a POST request to your server instance: http://<yourserver>:<port>/assist/ask
2. Body of the request includes the following:
Key                Required  Description
request            Yes       The actual query. You don’t need to include “Okay, Google”.
uuid               No        If you include this, the response will echo it back to you.
output_html_file   No        Where to write the response data.
output_audio_file  No        Where to write the audio data.
is_play_audio      No        If true, this will actually play the audio on the machine’s default speaker.
screen_mode        No        Valid values from here.
language           No        Valid values from here.
is_return_html     No        If true, the response JSON will have the complete HTML in the html field.

Sample Request:

{
    "request": "What time is it",
    "uuid": 12123e23422123,
    "output_html_file": someoutput.html,
    "output_audio_file": someaudio.wav,
    "is_play_audio": true,
    "screen_mode": "OFF",
    "language": "en-US",
    "is_return_html": true
}

Response

Key                Description
request            The actual query, echoed back.
uuid               UUID echoed back.
output_html_file   Where it saved the HTML.
output_audio_file  Where it saved the audio data.
html               Contains the full HTML code, if any.

Sample Response:

{
    "output_audio_file": "/output/output.wav",
    "output_html_file": "/output/output.html",
    "request": "What time is it",
    "text": "It's 5:39.",
    "uuid": "9303483171002469"
}
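Tying this back to the Empty responses note above: depending on screen_mode, the text field may come back empty even though audio plays. Here is a rough sketch, using the same assumed endpoint and credentials as my earlier example, to probe which mode returns text for a given query:

import requests

URL = "http://orangeassist:5000/assist/ask"  # assumed instance from the earlier example
AUTH = ("lucky", "charms")

def probe(query):
    # "OFF" and "PLAYING" are the two screen_mode values used in this post's samples
    for mode in ("OFF", "PLAYING"):
        body = {"request": query, "screen_mode": mode}
        text = requests.post(URL, json=body, auth=AUTH, timeout=10).json().get("text")
        print(f"{mode}: {text!r}")

probe("What time is it")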

Configuration

Configuration is done in the config.json file.

Key                      Description
is_debug                 If true, you enter a console-based loop.
is_verbose               Logging level.
host                     Host for the server.
port                     Port for the server.
username                 If set together with password, Basic Auth is enabled for the REST API’s POST. Leave blank to disable.
password                 If set together with username, Basic Auth is enabled for the REST API’s POST. Leave blank to disable.
device_model_id          The device model ID.
device_id                The device ID.
on_success_post_to       Not yet implemented.
credentials_file         Path to your credentials file.
delete_output_files_sec  Output files are kept for this many seconds, then deleted.
screen_mode              Valid values from here.
project_id               Your Project ID.

Sample Config

{
    "is_debug": false,
    "is_verbose": true,
    "host": "0.0.0.0",
    "port": "2828",
    "username": "lucky",
    "password": "charms",
    "device_model_id" : "XXX",
    "device_id" : "XXX",
    "on_success_post_to": "http://url_to/post_to",
    "credentials_file": "credentials.json",
    "delete_output_files_sec": 60,
    "screen_mode": "PLAYING",
    "project_id": "XXX"
}

DEMO

CLI: Set is_debug in config.json to true.

Tester: Set is_debug in config.json to false, then use tester.html.

Requires Python 3.6+ due to my use of f-strings (literal string interpolation).

This is really great - and would be the missing piece in my home. Please give more details in a how-to!

Michael

Welcome back, Lucky, we have missed you…
PS: We are all still waiting for an update on your teasing picture-in-picture of your doorbell on your Samsung TV post.

Oh yeah. I will update that hopefully tomorrow :smiley:

@vzorglub Updated How To Automatically Show Security Feed on any TV with PIP

I will try to make the how-to for this one (orange assist) today

This looks really interesting… even just the ability to do the broadcast functionality is exciting!

I can definitely see using this, particularly the Broadcast capability! Can you Broadcast a sound (e.g., an mp3 file)?

The broadcast part works just like issuing a broadcast command (or any command, for that matter) to any Google Home enabled device. So you can actually say “play <this song> from Spotify on <home group>” (assuming you have a group created for your devices at home, configurable via the Google Home app).

I’ll create the how-to as soon as I get home from work today. The code is written in Python, has the backend to invoke the Google Assistant SDK, and I created a REST service on it so OH can POST commands to it.

Some example commands I use currently:

  1. What is the temperature downstairs?
    a. I have a Honeywell TotalConnect-based thermostat downstairs that has no OH binding. I receive the response and parse it to update an item (see the sketch below).
  2. Periodically issue “sync my devices”
  3. I have a daily (12AM) cron-based rule to issue:
    a. “Wake me up at < time >” - my daily alarm for work
    b. “Remind me to < something > at < time >” - daily reminders

The code also has the ability to POST to openHAB once a response is received.
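To make example 1 concrete, here is a rough sketch of that glue as a standalone Python script. The endpoint, credentials, item name, and regex are all illustrative; the last call uses openHAB’s REST API to send the parsed value as a command to a hypothetical Downstairs_Temp item:

import re
import requests

ORANGE_URL = "http://orangeassist:5000/assist/ask"              # assumed from earlier examples
OH_ITEM_URL = "http://openhab:8080/rest/items/Downstairs_Temp"  # hypothetical openHAB item

reply = requests.post(
    ORANGE_URL,
    json={"request": "what is the temperature downstairs"},
    auth=("lucky", "charms"),
    timeout=10,
).json()

# e.g. "The downstairs thermostat says it's 72 degrees" -> "72"
match = re.search(r"(\d+(?:\.\d+)?)", reply.get("text") or "")
if match:
    # openHAB accepts a plain-text command POSTed to the item resource
    requests.post(OH_ITEM_URL, data=match.group(1),
                  headers={"Content-Type": "text/plain"}, timeout=10)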

Lucky,

Thanks for the additional information.

My particular use case is a notification scenario. I play a sound (e.g., ding-dong.mp3), followed by an announcement “phrase”. I can put the mp3 on a webserver (e.g., a Flask server on my RPi) so that a “Chromecast” can be directed to find it and play it. I was just wondering if OrangeAssist could instruct Google Assistant to find these local mp3 files in a similar way.

Regards.

Mike

I don’t suppose you can do that even on an actual Google Home, but what I’ve done for such cases is upload an mp3 to my Google Play Music account. That song (sound/mp3) can then be played by OrangeAssist.

It plays it on a Google Home or audio group… but the request is handled by the “Google Home Notifier”.

You set up a notifier by passing it the Chromecast (or audio group) name. Then you call play_mp3 (passing the URL of the mp3 on the web server) or play_tts (passing the phrase). In the case of a phrase, it converts it to a sound file and deposits it on the web server. The play request to the Chromecast requires that the sound object be on the web… it just so happens that the host for this web server where the sound file resides is on my LAN.

The main issue with this setup is that you have to instantiate a separate notifier for each Chromecast and audio group. Secondly, integrating it into openHAB requires “external” scripts to invoke a notification, passing the parameters (Chromecast name and the notification details). And, at the end of the day, when you do send them, it interrupts what’s playing. So an integrated Google Assistant means of broadcasting a sound or a phrase is the desired outcome.

Your idea of putting my sound on Google Music sounds promising. But… say, if it’s playing Spotify and I then tell it to play the Google Music mp3, it’s not going to go back to Spotify. Perhaps the sequence “detect if anything is playing via the Chromecast Thing Title channel; play the Google Music MP3; broadcast a phrase; if something was playing, continue playing that Title” would allow seamless broadcasts.

Mike

Which user is used? In our home, four people’s voices are registered. An often-needed function is “where is my smartphone”, but the result depends on the detected voice…

It’s the one you used to create the OAuth credentials. I will provide instructions. For me, I created a separate “home” account. For your application, if you want to find your phone, you need to generate the credentials from the account used on that phone.

Hi @luckymallari, this looks like just what I need. You mentioned a HOW-TO: do you have any draft instructions or even just a link to the code so we can get started playing around?
Thanks!

It will be in https://github.com/LuckyMallari/orange-assist
Stay tuned.

Debug Mode

Non Debug

I also added multi-language support.
You can also choose whether you want text or HTML returned in the response, as well as write the response to an HTML file.

Will post code soon

Added initial setup info, so that when I deliver the code, you can just run it.

There’s currently an open issue on the SDK itself where some commands do not return the supplemental_text, so some commands will return empty. I have a solution in mind, so the code delivery will take a few more moments.

Code drop is in

Updated instructions (OP)

I failed at step 7 “run the code”.
Got this error:

(env) [21:10:56] root@openHABianPi:/home/openhabian/orange-assist# python -m or
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 183, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/usr/lib/python3.5/runpy.py", line 153, in _get_module_details
    code = loader.get_code(mod_name)
  File "<frozen importlib._bootstrap_external>", line 775, in get_code
  File "<frozen importlib._bootstrap_external>", line 735, in source_to_code
  File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
  File "/home/openhabian/orange-assist/or.py", line 28
    f = f"output/{f}"
                    ^
SyntaxError: invalid syntax
(env) [21:11:56] root@openHABianPi:/home/openhabian/orange-assist#

Any ideas where I went wrong? I carefully followed all the previous steps and got the expected responses up to this point.

I’m using a lot of f"" interpolations. I coded through Python3.7.

You need to use at least Python3.6 or you change all my strings to 2.7 :slight_smile: Just install Python3.6
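For example, the offending line from the traceback above would become:

# Python 3.6+ f-string, as written in or.py:
f = f"output/{f}"

# Pre-3.6 equivalent:
f = "output/{}".format(f)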

I added that info to the OP now.

More generic instructions:
https://developers.google.com/assistant/sdk/guides/library/python/embed/install-sample