Integrate a Snips voice assistant with openHAB - Walkthrough

Hi!

I would like to share how I’m implementing an openHAB voice assistant on a Raspberry Pi using Snips - a voice platform running on your device, with a strong focus on privacy.

In a nutshell, Snips performs Hotword Detection, Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU), then outputs intents published on an MQTT topic. openHAB subscribes to these topics and performs actions on your items by interpreting the intent.

First, a disclaimer: I’m not in any way associated with Snips, nor am I getting anything for promoting their product. This is actually my second attempt in my quest to build a DIY Amazon Echo/Google Home alternative which respects my privacy - see my CMU Sphinx Speech-to-Text post for another (working!) solution. I decided to give Snips a try for various reasons, mostly because its hotword detection accuracy is far better, and because it allowed me to plug the microphone into another Raspberry Pi placed in a better location than my main openHAB server. It is now running in my living room with satisfying results so far!

The hardware:

  • A Raspberry Pi 3 - I went with a starter kit (55 €) that comes with a power supply, a case and a memory card preloaded with Raspbian, because I’m dedicating this Pi to Snips; however, it should work just fine on openHABian alongside your openHAB server;
  • A far-field microphone array - the Snips team actually put out a blog post benchmarking the best candidates, but I already had the MiniDSP UMA-8 (~110 € with shipping, having problems with it though) and a PlayStation Eye (7 €), which gave similar results in my experience. To start, I think the Eye is a great choice;
  • A speaker to hear Snips’ audio feedback and dialogue (this is optional because openHAB can take care of this itself as well).

Installing Snips

Follow the documentation here: https://github.com/snipsco/snips-platform-documentation/wiki/1.-Setup-the-Snips-Voice-Platform-on-your-Raspberry-Pi

I went the easy route - enabled the SSH server on the Pi, connected to it and simply installed with:

curl https://install.snips.ai -sSf | sh

Snips runs on Docker, and this script will install it along with the other dependencies and put commands like snips, snips-watch and snips-install-assistant in your PATH.
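Once the script finishes, a quick way to confirm the tools landed in your PATH (just a sanity check, not part of the official instructions):

which snips snips-watch snips-install-assistant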

It will also attempt to configure your microphone and speakers with ALSA if you haven’t done so yourself.
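Before involving Snips, it’s worth checking the audio chain manually with the standard ALSA tools - for example, record a few seconds from the microphone and play it back through the speaker (adjust the format/rate to your hardware):

arecord -d 5 -f S16_LE -r 16000 test.wav
aplay test.wav

If you hear yourself, the ALSA side is fine.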

Configuring the Google ASR (optional, for non-English speakers)

Snips has a built-in ASR for English running entirely on your device for maximum privacy, which is just fine if every user is comfortable with English; but for other languages* you’ll have to use Google for now :frowning: :

  • sign up to the Google Cloud Platform at https://console.cloud.google.com (I created a dedicated Google account),
  • configure billing, including adding a payment method (unfortunately…) - even though there’s a free tier: you get 60 minutes per month for free, then it’s $0.006 per 15 seconds, see https://cloud.google.com/speech/; you’ll also get approx. 250 € in credits to use during the first year
  • create a project (https://console.cloud.google.com/projectcreate)
  • enable the Cloud Speech API (go to APIs & Services > Library, then Speech API, then Enable)
  • create a service account (go to APIs & Services > Credentials) then Create credentials and Service account in the dropdown; fill out the form, choose JSON as the key type and click Create - you’ll get a JSON file download;
  • Copy the JSON file from Google to your Snips RPi with SCP, SMB or another way, and put it in /opt/snips/config/googlecredentials.json (see the example command at the end of this section)

* Snips claims to support French, German, Spanish and Korean in their documentation, but German assistants cannot be created in the web console for now. It’s “coming soon” though, IIRC.
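For the copy step above, something like this from the machine where you downloaded the key should work (the key filename is just an example - use whatever Google gave you, and adjust the user/IP):

scp my-project-key.json pi@<your_snips_ip>:/tmp/
ssh pi@<your_snips_ip> "sudo mv /tmp/my-project-key.json /opt/snips/config/googlecredentials.json"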

Building & testing your assistant

Continue reading the documentation linked above (https://github.com/snipsco/snips-platform-documentation/wiki/2.-Running-your-first-end-to-end-assistant), and check out this video as well:

https://player.vimeo.com/video/223255884

I started from the IoT bundle to get the basic intents for operating lights and simple switches, but you can also add your own to launch scenes, control music etc.

Take time to properly test your assistant with the web interface to see how different sentences translate to an intent and slots because that’s what you’ll work with in openHAB later.
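For example, with the IoT bundle a sentence like “turn on the light in the kitchen” should come out roughly as an ActivateObject intent with objectType and objectLocation slots - something along these lines (simplified; the real JSON carries a few more fields, and the names here match the ones used in the script further down):

{
  "input": "turn on the light in the kitchen",
  "intent": { "intentName": "ActivateObject" },
  "slots": [
    { "slotName": "objectType", "rawValue": "light" },
    { "slotName": "objectLocation", "rawValue": "kitchen" }
  ]
}

The intentName, slotName and rawValue fields are the ones the openHAB script below relies on.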

Once you’re satisfied with your assistant, download it from the Snips web console, copy it on your Raspberry Pi and install it (you can use the snips-install-assistant command if available).

Then run snips, let it start, run snips-watch in another terminal and say “Hey Snips” then a command - the output of snips-watch should display the events including the resulting intent.

Connecting to openHAB

As mentioned before, Snips events are published on an MQTT broker. You can use your own, but by default Snips will use its own Mosquitto instance running on port 9898.
I don’t run a separate MQTT broker for other things, so I simply had openHAB connect to Snips’ broker.
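If you want to check that the broker is reachable before touching openHAB, you can subscribe from any machine with the Mosquitto command-line clients installed (adjust the IP), then say a command and watch the messages:

mosquitto_sub -h <your_snips_ip> -p 9898 -t 'hermes/intent/#' -t 'hermes/nlu/#' -v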

The MQTT binding will maintain a connection, subscribe to topics and update the state of openHAB items with messages coming from Snips.

  1. Install the MQTT binding in Paper UI

  2. Configure the MQTT binding

    Edit conf/services/mqtt.cfg and add the connection to the broker:

snips.url=tcp://<your_snips_ip>:9898

  3. Create openHAB Items to hold info from Snips

    You can configure items in many ways, but I suggest you create at least those below.
    Create a conf/items/snips.items file and add:

String Snips_Intent     "Snips Intent"      { mqtt="<[snips:hermes/nlu/intentParsed:state:default]" }
Switch Snips_Listening  "Snips Listening"   { mqtt="<[snips:hermes/asr/toggleOn:state:ON],<[snips:hermes/asr/toggleOff:state:OFF]" }

You can subscribe to other topics; in particular, intents are also published to the hermes/intent/<intentName> topics.
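For instance, if you only care about a single intent you could bind an item directly to its topic instead of parsing the generic one (ActivateObject is just an example - use whatever intent names your assistant produces):

String Snips_ActivateObject  "Activate intent"  { mqtt="<[snips:hermes/intent/ActivateObject:state:default]" }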

Perform actions in openHAB based on the intent

If everything is configured properly, your Snips_Listening switch will turn on and off when Snips is listening, and, most importantly, Snips_Intent will be updated with JSON string representations of the intents as they come.

How you react to those updates is entirely up to you. Typically, it will involve rules.

My solution will perhaps feel a little opinionated: I decided to leverage the JavaScript capabilities of the new experimental rules engine (mostly because JSON parsing is trivial), and the script editor in my Flows Builder app.

Here’s how to get started:

  1. Create a new flow
  2. Drag and drop a “When an item state is updated” trigger node and an “Execute a given script” action node on the design surface and link them up
  3. Click the trigger node and set Snips_Intent as the Item
  4. Click the action node, set JavaScript (application/javascript) as the scripting language, and click Edit script…

Below is an example of my work in progress:

var intent = JSON.parse(state);

if (!intent.intent || !intent.slots) {
  print('Snips_Intent is not in the expected format!');
  // substitute an empty intent so the rest of the script becomes a no-op
  intent = { intent: { intentName: '' }, slots: [] };
} else {
  print('Snips intent name: ' + intent.intent.intentName);
}

// find slots of certain types for later reference
var locationSlot = intent.slots.filter(function (slot) {
  return (slot.slotName === 'objectLocation');
});
var typeSlot = intent.slots.filter(function (slot) {
  return (slot.slotName === 'objectType');
});
var colorSlot = intent.slots.filter(function (slot) {
  return (slot.slotName === 'objectColor');
});
var type = (typeSlot.length) ? typeSlot[0].rawValue : null;
var location = (locationSlot.length) ? locationSlot[0].rawValue : null;
var color = (colorSlot.length) ? colorSlot[0].rawValue : null;

if (type) {
  print('type=' + type);
}
if (location) {
  print('location=' + location);
}
if (color) {
  print('color=' + color);
}

// normalize the (French) spoken object type - "lumière"/"lampe" both mean light;
// no type at all also defaults to light
function normalizeObjectType(t) {
  if (!t || t.indexOf('lum') >= 0 || t.indexOf('lampe') >= 0)
    return 'light';
  // TODO other types
  return t;
}

// map the (French) spoken color names to HSB values ("hue,saturation,brightness";
// hue 0-360, saturation and brightness 0-100)
function getColorValue(c) {
  switch (c) {
    case 'blanc':  return '0,0,100';
    case 'rose':   return '300,100,10';
    case 'jaune':  return '60,100,100';  // hue must stay within 0-360
    case 'orange': return '25,100,100';
    case 'vert':   return '100,100,50';
    case 'violet': return '280,100,100';
    case 'bleu':   return '200,100,100';
    case 'rouge':  return '0,100,100';
    default:       return '0,0,100';
  }
}

function getLightItem(l) {
  // operate all lights via the 'Lights' group if no location provided
  if (!l)
    return 'Lights';
  // otherwise, assume the item name is "Hue_" + the capitalized location
  return 'Hue_' + (l.charAt(0).toUpperCase() + l.slice(1));
}

var normalizedType = normalizeObjectType(type);

switch (intent.intent.intentName) {
  case 'ActivateObject':
    {
      if (normalizedType === 'light') {
        var item = getLightItem(location);
        events.sendCommand(item, ON);
      } else {
        print('TODO: other object types');
      }
    }
    break;
  case 'DeactivateObject':
    {
      if (normalizedType === 'light') {
        var item = getLightItem(location);
        events.sendCommand(item, OFF);
      } else {
        print('TODO: other object types');
      }
    }
    break;
  case 'ActivateLightColor':
    {
      var item = getLightItem(location);
      events.sendCommand(item, getColorValue(color));
    }
    break;
}

  5. Publish the rule by clicking on the “Publish” button in the toolbar


That’s it - Hope this helps, I’ll update with up-to-date info as necessary!

Cheers! :wink:

15 Likes

Love it!
Thanks for sharing @ysc!
I’m already using Google Home (no Polish support yet :confused:) but I think I’ll give it a try.

1 Like

Great article, and I love the possibilities the service seems to offer. Besides that, the microphone topic is definitely interesting - thanks for sharing the article.

Another great alternative presented recently was Mycroft, which will soon be added to http://docs.openhab.org/addons/io.html

2 Likes

I went the easy way. I am running a Python script on the Snips host to monitor different MQTT topics, for example:

import paho.mqtt.client as mqtt  # using the Paho MQTT client (paho-mqtt)

# Process a message as it arrives
# (mqtt_client2 and say() are defined elsewhere in the script)
def on_message(client, userdata, msg):
    if msg.topic == 'hermes/intent/user_radiorelax':
        mqtt_client2.publish('/myhome/radiorelax/command', '1')
        say('Your wish is my command, I will play relax music for you')
    elif msg.topic == 'hermes/intent/user_off-relax':
        mqtt_client2.publish('/myhome/radiorelax/command', '0')
        say('Your wish is my command, relax music is off')

Where mqtt_client2 is the connection to openHAB. I also wrote a couple of other Python scripts for Snips, like time, date, jokes, what’s new in Plex, traffic times, etc.

I am planning to share my setup when I’m satisfied with the results. As of today the Google ASR implementation is broken in Snips, so new users have to use the built-in ASR. There are also problems with ASR sensitivity on some speaker/microphone sets. I am using the all-in-one Jabra 410 and after some tricks it works fine.

1 Like

You’re right, I had an old version of the Docker container. I tried to update, but it didn’t work afterwards (segfault), so I reverted to the revision I had before, which is a month old and still working for me:

docker pull snipsdocker/platform@sha256:ffc11d577517f2c988b6770d2a0cde3328f83edd38e093a858a5b8f967126a95

(maybe one of the tags would work too)

I am considering voice command/assistance and Snips looks very interesting.
I am waiting for the Google API to see how it behaves, but having everything offline is very appealing, to be honest.
My main question is: how difficult is it to expand the assistant? Is it just “download and install” of new skills, or expanding on existing skills that I previously created online? I would like it to start a timer for example, or some other useful skill (one day, to play home videos on Kodi or something similar). Does anyone have experience with this? How satisfied are you with Snips @ysc?

Also, I guess it should not be very difficult to get Snips working with Node-RED, since it uses MQTT? Nothing on the internet about Node-RED and Snips though…

@dakipro I mainly gave Snips a try because of its quite good keyword detection and its MQTT-based approach - you can even use clients other than openHAB to subscribe for other, non-smarthome-related stuff. I haven’t checked lately, but I think they were developing a full-fledged skills architecture, rather than having to code your MQTT clients yourself. Maybe ask around or check out what’s hot in https://snipslabs.slack.com.

To be honest I don’t use it that often; it was mainly to launch some scenes and control the lights in the living room. As a matter of fact I still have to restore it since I reformatted the Raspberry Pi for my Smart Home Day talk :wink:

Hello,
Will this microphone work with Snips installed on an RPi 3B+?

http://vespermems.com/products/vm1010/

Can you please tell us something about the “tricks” that are needed to use the Jabra 410? Are there e.g. issues regarding the range?

Thank you,
Michael

Sure, here is my asound.conf file.

pcm.!default {
  type asym
  playback.pcm {
    type plug
    slave {
      pcm "hw:1,0"
      rate 48000
      format "S16_LE"
      channels 2
    }
  }
  capture.pcm {
    type plug
    slave.pcm "hw:1,0"
  }
}

If you don’t have this file yet, copy the text above, then SSH to the Pi and do:

nano /etc/asound.conf

paste: Ctrl + V
write (save): Ctrl + O
exit: Ctrl + X
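One thing to watch out for: the hw:1,0 entries assume the Jabra got ALSA card index 1. You can check which index it actually has with:

arecord -l
aplay -l

and adjust the pcm lines accordingly.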

I have two units and I am happy with performance, including range.

1 Like

Thank you very much, Ra!

Hey guys,

I hope this is the right place for my question. Is there anybody interested in a proper app for Snips that allows using openHAB similarly to Alexa, etc.? At least I would like to offer my support for such a project.

2 Likes

Hi.
Snips seems to be a good and private approach to voice recognition, also in combination with openHAB.
In the tutorials it seems that a login at the Snips server is required.
Is this really the case, or can I run Snips completely offline?

Thx

You have to log in online to make your assistant. You use the Snips servers to teach the model that you then download to your Raspberry Pi. Once you have downloaded this model you can run it completely offline.
This also means that to make changes to your assistant you always need to log into the web console, and you do all your model retraining in the cloud.
But as the actual use of the trained assistant happens offline on your hardware, Snips’ servers never actually hear your voice and nothing gets recorded or sent to them in daily use.
Best regards Johannes

2 Likes

If you want to try a 100% open-source voice control that supports openHAB as well and does not require interaction with any foreign server, SEPIA Open Assistant might be interesting too :slight_smile:

For information, I will start working on a way to integrate Snips (or at least a part of Snips features) in openHAB.

3 Likes

Excellent! :+1:

I explained my intentions in the Git issue.
I would appreciate help to define the set of intents we should provide by default in openHAB.
Once that is done, I will need further help to set up and publish this set of intents in different languages in the Snips Console. Of course I will provide the French one.

Very bad news: with Snips now bought by Sonos, this will be the end of Snips as an open platform. They plan to close the Snips console at the end of January. We have until that date to deploy a voice assistant from the console to our hardware. After that there will be no way to update/enhance the voice assistant.

By the way, as my work on the openHAB side was almost finished, I will finish it and publish it. And I will try to publish something on the Snips console before they close it, at least for French users.

But the Snips solution has unfortunately no future for us.

2 Likes

Indeed. Seems like I have wasted a lot of time getting this integrated into my openHAB setup. What a pity!

Is there any equivalent successor in sight? SEPIA Open Assistant doesn’t make a big impression on me yet.