Smart Home Day - Offline Voice Recognition Demo Setup

Hi,

Thanks to all of you who attended my talk at the Smart Home Day about offline voice recognition – I hope you enjoyed it as much as I enjoyed preparing and delivering it :slight_smile:.

I’ve been asked if I could share the configuration I made, so here it is before I scrap the setup; hopefully I haven’t missed anything:

Hardware

Configuration

https://gist.github.com/ghys/a55555481e22bd705cdba7742ae99373

It is referred to as “the Gist” below.

Sphinx configuration

Optional: you can use LiveDE.java from the Gist to test recognition against the commands.gram grammar (you’ll need the Sphinx library, sphinx4-core-5prealpha-SNAPSHOT.jar, in your CLASSPATH). I used BlueJ during the demo since it’s installed by default on Raspberry Pi Desktop. Note: it didn’t work well during the demo, but that was just bad luck – it performed far better while I was preparing :wink: Use phrases that follow the grammar, for example: Büro Licht einschalten, Schlafzimmer Licht ausschalten, etc.
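
For reference, a grammar accepting such phrases could look like the following minimal JSGF sketch – the actual commands.gram is in the Gist, and the rule name and word list here are only illustrative:

#JSGF V1.0;

grammar commands;

// Room name followed by "Licht" and an on/off verb, e.g. "Büro Licht einschalten"
public <command> = <room> Licht ( einschalten | ausschalten );

<room> = Büro | Schlafzimmer;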

Snips configuration

(note: it is recommended to install Snips on a dedicated Raspberry Pi for best performance)

  • Run Snips with snips
  • You can test your setup with snips-watch in another terminal window – say “Hey Snips”, then for instance “can you please make the lights in the kitchen yellow”

openHAB configuration

For Snips:
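
The script used below commands two Color items, Light1 for the kitchen and Light2 for the bedroom, and reads the intent JSON from a String item named Snips_Intent. The item names are taken from the script; the labels are placeholders, so adapt these definitions to your own setup:

String Snips_Intent "Snips intent [%s]"
Color  Light1       "Kitchen light"
Color  Light2       "Bedroom light"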

  • First disable the Sphinx bundle (with the Karaf console or by moving the .jar out of the addons folder). Restart openHAB to ensure the microphone input is released.
  • Open Flows Builder from http://localhost:8080/flowsbuilder/index.html
    • Drag and drop a When an item is updated node from the left column
      • Click the node and change “Item” to Snips_Intent in the right column
    • Drag and drop a Execute a given script node from the left column
      • Click the node, then the “Edit script” button in the right column; paste this script (an example of the JSON payload it parses follows this list) and click Save:
var colors = { yellow: '50,100,100', blue: '200,100,100', red: '0,100,100' }; // HSB color values – extend as needed
var parsed = JSON.parse(state);

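// Bail out if Snips could not extract an intent from the utterance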
if (!parsed.input || !parsed.intent) {
  throw 'intent not recognized';
}
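// Pull the color and location slots out of the parsed intent, if present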
var colorSlots = parsed.slots.filter(function (s) { return s.slotName == 'objectColor'; });
var color = (colorSlots.length) ? colorSlots[0].rawValue : null;
var locationSlots = parsed.slots.filter(function (s) { return s.slotName == 'objectLocation'; });
var location = (locationSlots.length) ? locationSlots[0].rawValue : null;
var item = (location == 'kitchen') ? 'Light1' : (location == 'bedroom') ? 'Light2' : null;
var colorValue = (color && colors[color]) ? colors[color] : '0,0,100'; // default to white
print('location=' + location + ' colorValue=' + colorValue + ' item=' + item);

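// Map the recognized intent to a command on the matched item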
if (item) {
  switch (parsed.intent.intentName) {
    case 'ActivateLightColor':
      events.sendCommand(item, colorValue);
      break;
    case 'ActivateObject':
      events.sendCommand(item, ON);
      break;
    case 'DeactivateObject':
      events.sendCommand(item, OFF);
      break;
  }
}
  • Link the two nodes by clicking the output of the orange one and dragging to the input of the blue one.
    It should look like this:
    (screenshot: the orange trigger node connected to the blue script node)
  • Save (3rd icon in the toolbar), give a name (e.g. “snips”) and click the blue Publish button.
  • Start using Snips: say “Hey Snips”, then a supported sentence (“please set the kitchen lights blue”) – the flow will run the script above, which commands the items accordingly.
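
For reference, the script parses a JSON payload like the following from the Snips_Intent state – a hypothetical example reconstructed from the fields the script actually reads (input, intent.intentName, and slots with slotName/rawValue):

{
  "input": "can you please make the lights in the kitchen yellow",
  "intent": { "intentName": "ActivateLightColor" },
  "slots": [
    { "slotName": "objectColor", "rawValue": "yellow" },
    { "slotName": "objectLocation", "rawValue": "kitchen" }
  ]
}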

That’s it – please share your feedback if you try this!

Additional resources:

Awesome! Thanks for sharing!

Sounds like a good reason to buy a RasPi 3.

This looks promising!
I’m looking for something “offline” and found this thread.
Could this be an option for a c&p user?

I’d really benefit from a video tutorial of this. I am getting so lost. I don’t know what I’m doing wrong, but CMU Sphinx installs and the speech recognizer initializes – then I never get a message after I set it to start listening. I don’t get any message saying it’s listening, nor any errors. It just does nothing.