Voice control for Z-Wave in my home

Hello,
I want to add voice control to the Z-Wave network in my home.

I use an Aeotec Z-Stick on an RPi 3.

I found this product:

Can this product integrate with my Z-Wave network and enable voice control?

Does anyone know of other products capable of doing this?

Thanks

Mycroft. It has an OH skill.

Thanks for your reply.
Any other suggestions?

I know of two main approaches:

  1. Use the OpenHAB Cloud Service to link your local instance of OpenHAB to commercial externally hosted speech services (e.g. Alexa).
  2. Use a (more) local, (more) open-source speech system such as https://mycroft.ai/ with a binding to connect directly to OpenHAB (see the sketch just below this list).
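To make option 2 a bit more concrete, here is a minimal sketch of what "connecting directly to OpenHAB" amounts to: a plain HTTP call to the OpenHAB REST API on your own LAN. The hostname, port and item name are placeholder assumptions, not a real setup.

```python
# A minimal sketch of a "direct" local connection to OpenHAB: a plain HTTP
# call to the OpenHAB REST API on your own LAN.
# The hostname, port and item name below are placeholders, not a real setup.
import requests

OPENHAB_URL = "http://openhab.local:8080"   # assumed local OpenHAB instance
ITEM_NAME = "Bedside_Light"                 # assumed Switch item

# Send an ON command to the item - essentially what a local voice skill does
# once it has parsed an intent like "turn on the bedside lights".
resp = requests.post(
    f"{OPENHAB_URL}/rest/items/{ITEM_NAME}",
    data="ON",
    headers={"Content-Type": "text/plain"},
    timeout=5,
)
resp.raise_for_status()
print("Command accepted, HTTP status", resp.status_code)
```

A voice skill does essentially the same thing once it has turned the spoken phrase into an item name and a command.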

I do not like exposing control of my infrastructure outside of my own firewall, so I use a Mycroft Mark 1 for speech recognition / intent parsing / speech output, plus the Mycroft OpenHAB skill to connect locally into OpenHAB.

The speech recognition still takes place outside of my home (Mycroft infrastructure into Google, but being replaced in stages by FOSS, including Mozilla speech), but it is mixed in anonymously with that of all other users, which improves privacy.

Overall, I have a hardware device based on an RPi 2 which answers to ‘Hey Mycroft’ and can locally access OpenHAB to ‘turn on the bedside lights’.

Tech details of how to integrate Mycroft and OpenHAB are here:

Thanks.
I prefer it without the cloud.

So this is the product?
How does it connect to OpenHAB? Over WiFi? Bluetooth? Ethernet?
Does it run on batteries, or does it need a power source?

Hi,

There are two hardware devices, and you can install the software yourself on a PC or a Raspberry Pi.

The history is here:

I have a Mark I, which is basically a Raspberry Pi 2 in a custom case, with an LED face and a speaker / microphone. It runs from a standard mains PSU and can connect via the RPi's wired Ethernet or WLAN.

With two lines of config (port and address), the OpenHAB2 Skill connects directly to the OpenHAB REST API and uses tags in your item config to build a list of device names for speech control. See the linked tutorial for details.
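For the curious, that tag lookup roughly boils down to a REST query like the sketch below. The host and the "Lighting" tag are assumptions for illustration; the skill does this for you internally.

```python
# Rough sketch of discovering voice-controllable devices: ask the OpenHAB
# REST API for items that carry a given tag.
# Host, port and the "Lighting" tag are assumptions for illustration only.
import requests

OPENHAB_URL = "http://openhab.local:8080"

resp = requests.get(
    f"{OPENHAB_URL}/rest/items",
    params={"tags": "Lighting"},
    timeout=5,
)
resp.raise_for_status()

# Each entry carries a name and a label; the label is what you would say aloud.
for item in resp.json():
    print(item["name"], "->", item.get("label", ""))
```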

The Mark II is an improved device which has been on both Kickstarter and Indiegogo. It is not finished yet and is in beta at the moment.

The Mark I is not as complete or as polished as Alexa or Google Home, but it is designed to use open-source components and to be hackable. For instance, the RPi GPIO ports are available on the back of the Mark I for your own projects. Even the PCBs are open hardware.

As an ex-software engineer, I like the ability to hack / modify / contribute to FOSS and keep as much of the architecture LOCAL as can reasonably be done. Mycroft is also contributing to open infrastructure such as Mozilla Common Voice.

Note - Mycroft does need some components to run remotely in the cloud, but there is a project to give the option of running EVERYTHING locally - the problem is that you will probably need a $1000 server and GPU to run the speech-to-text fast enough.
Details are here:

Thanks for the very detailed explanation.
Dror

I see that each of my Mycroft devices must be paired with my home.mycroft.ai account.
What does that mean?
What about security? If the company closes, will the product stop working?

I use Alexa; it works perfectly, and with routines and rules it's fantastic.

Hi again,
I'm thinking of using Snips on an RPi 3B+.
For this I need a good microphone.
Does anyone know if this product will work?

http://vespermems.com/products/vm1010/

Hi,

The home.mycroft.ai account is used to set up Mycroft devices, for example to choose which voice is used. It is required because, as I mentioned earlier, the speech recognition still takes place remotely on shared servers (Mycroft infrastructure into Google, but being replaced in stages by FOSS, including Mozilla speech), but it is mixed in anonymously with that of all other users, which improves privacy.

Today, Mycroft devices rely on Mycroft.ai infrastructure to work. This does introduce an external security risk, but using a commercial device shares all words spoken with an advertising provider who knows who is speaking and may use the information to target ads.

I prefer to accept the risk of Mycroft.ai servers being compromised and (say) allowing my speech commands to be logged or altered, rather than exposing a direct control API over my home devices to a fully external commercial speech device.

As I mentioned above, there is a project to run everything locally - read the Mycroft Personal Server link above for details.