I’m working on setting up TTS right now.
Unfortunately, absolutely nothing works.
Since I would like to have everything offline, I installed Pico Text-to-Speech.
I installed the necessary library on the Pi 4.
For audio playback I selected the web audio option so that I could test my scripts directly in the browser.
Unfortunately, nothing is played back at all.
Since my Pi sits in a rack with no speakers connected (nobody could hear it there anyway), I have to rely on the web player.
Furthermore, I’m trying to redirect playback to a SnapCast server, which is also installed on the Pi.
I already have Mopidy set up and can use it to listen to music throughout the house, nicely in sync.
Now I’d like to direct TTS to Snapcast so I can hear the responses on the speakers in the house.
From each script I could then also target the appropriate speakers.
But unfortunately I have no idea how to direct the output to a file (e.g. /tmp/openhab) that I could then specify as a stream source in Snapcast.
(Stuart Hanlon, UK importer of Velbus hardware)
Purely as a curveball suggestion:
Why not put a symlink in place so that the default folder with the TTS files becomes a subfolder of /etc/openhab/sounds?
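To illustrate the symlink idea (a sketch only; the real paths such as /var/lib/openhab/tmp and /etc/openhab/sounds depend on your installation, so this demonstrates the mechanism with temporary directories):

```shell
# Stand-ins for the real folders (assumption: on a typical install the
# TTS wav files land in /var/lib/openhab/tmp and the sounds folder is
# /etc/openhab/sounds -- substitute your actual paths).
TTS_DIR=$(mktemp -d)
SOUNDS_DIR=$(mktemp -d)

# Expose the TTS output folder as a subdirectory of the sounds folder:
ln -s "$TTS_DIR" "$SOUNDS_DIR/tts"

# Any wav the TTS service drops is now also visible under sounds/tts:
touch "$TTS_DIR/hello.wav"
ls "$SOUNDS_DIR/tts"    # -> hello.wav
```

On a real system you would of course run the `ln -s` with the real paths (and likely sudo) instead of temporary directories.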
I don’t know exactly how to configure it, but I guess you have to create a virtual sound output for openHAB. You would choose this sink for openHAB, then redirect the signal to the snapcast server as a source.
Is the pico2wave command line working on the Pi? The binding uses it.
Could you check the openHAB log when you try to use the Say command?
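For reference, a quick way to check pico2wave from the command line (on Debian/Raspberry Pi OS the binary comes from the libttspico-utils package; the output path here is just an example):

```shell
# Install the pico utilities if missing (Debian / Raspberry Pi OS):
#   sudo apt install libttspico-utils

# Synthesize a short test file:
pico2wave --wave=/tmp/pico-test.wav --lang=en-GB "Hello from openHAB"

# Inspect the result (it should be reported as a RIFF/WAVE file):
file /tmp/pico-test.wav
```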
The wav files from the picotts service are produced in the openHAB_user_data folder (in the tmp subdirectory), usually /var/lib/openhab/tmp. Are the wav files there?
Then the second part to set up is an audiosink. I don’t know the webplayer sink well, but maybe you can use another audiosink.
What are your snapcast clients?
Are they Linux systems with pulseaudio?
If so, you can use the pulseaudio binding.
Each pulseaudio thing you will create, targeting a client, will also create an audiosink entry as the output for your tts action.
I was curious about snapcast, and I read some documentation about it.
This morning I was in a hurry and wrote my comment rather briefly. I can now take some more time to add relevant information.
So, the first part (pico TTS) is addressed by my previous post. If you want better sound quality you can use the Mimic TTS, as mdar said, but it needs a Mimic server on the network (you can ask me for help in the community discussion, as I’m the binding author).
So, for the second part (outputting sound from openHAB).
You are trying to redirect the output of a TTS service to another sound system. This is exactly the job of an openHAB “audiosink”. In the documentation you can see the two included audiosinks that you already know (javasound, which is the default ALSA sound card, and the web player), and a mention that “Additionally, certain bindings register their supported devices as audio sinks, e.g. Sonos speakers.”
So you have to find the right binding that provides an audiosink you can play your TTS to.
Then you could choose it instead of the javasound or webplayer audiosink and send your TTS to it without any additional configuration.
A few candidates:
- The mpd binding: but it doesn’t provide an audiosink.
- The (unofficial, more or less maintained) snapcast binding: but it doesn’t seem to register an audiosink in openHAB. Maybe you can ask on the community topic if someone can add one? I developed audiosinks for two different bindings, and judging by the snapcast documentation it should be doable (via the TCP or the pipe file interface, maybe?).
- If you are willing to drop MPD, you can use the squeezebox binding with another audio playing software. May I suggest Logitech Media Server, if you don’t know it? It is multiroom software; you can think of it as MPD + snapcast, as it already includes the synchronization mechanism. You install a small player (namely squeezelite) on every client, and it connects to the Logitech Media Server to play your library (even online libraries like Spotify). It needs a bit more configuration than MPD, but it is IMHO more flexible.
And the squeezebox binding, available for openHAB, registers an audiosink for every player found in the Logitech Media Server, so no additional configuration is needed to play your TTS.
The pulseaudio binding allows openHAB to connect to and control a pulseaudio server on the network (or locally). You make a bridge thing for a pulseaudio server, and inside the bridge you can create one or several “audiosink” things targeting a sound output available on this pulseaudio server (audio jack, HDMI, optical, or even a purely software pulseaudio sink loaded by a plugin).
Each audiosink thing is an openHAB audiosink, and you can directly play your TTS sound to it.
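For illustration, a things-file sketch of that setup (the exact thing types and parameter names should be verified against the pulseaudio binding documentation; the host address and the sink name below are placeholders):

```
Bridge pulseaudio:bridge:pi "Pulseaudio on the Pi" [ host="192.168.1.10", port=4712 ] {
    Thing sink snapsink "Snapcast pipe sink" [ name="Snapcast" ]
}
```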
So, with this pulseaudio binding, you have to choose between two options:
Option 1 for the pulseaudio binding, without snapcast (openHAB → pulseaudio binding → pulseaudio sink on each client → speaker): configure the pulseaudio server on each computer you want to send sound to by loading the proper module-cli-protocol-tcp module (see the pulseaudio binding documentation). Then, in openHAB, create a bridge thing for every computer you want to send sound to, and an audiosink thing inside each bridge for the desired output. Activate the audiosink on each of them, and voilà.
Option 2 for the pulseaudio binding, with snapcast (openHAB → pulseaudio binding → pulseaudio pipe sink → snapcast → each snapcast client → speaker): configure the pulseaudio server on the computer with the snapcast server, and also load the proper module-cli-protocol-tcp module as described in the pulseaudio binding documentation. Create a bridge thing in openHAB to connect to the pulseaudio server on the snapcast machine. Create a thing inside the bridge thing to connect to the pulseaudio virtual sink (the one created by module-pipe-sink). Activate the openHAB audiosink functionality for this thing, and voilà. (I’m not 100% sure that this “option 2” works because I didn’t test it, but in theory it should.)
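A config sketch of that pipe route, untested; the fifo path, sink name, and sample format below are assumptions and must match on both sides:

```
# In /etc/pulse/system.pa on the snapcast machine: create a pipe sink
# that snapserver can read raw audio from.
load-module module-pipe-sink file=/tmp/snapfifo sink_name=Snapcast format=s16le rate=48000 channels=2

# In /etc/snapserver.conf: use the same fifo as a source.
source = pipe:///tmp/snapfifo?name=openHAB&sampleformat=48000:16:2
```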
Thanks for your detailed options.
For me, only option 2 comes into question here, since all clients are already equipped with the Snapcast client and they are not only Linux systems.
(Linux, Android, Windows, FireTV-Android).
The WAV data is actually stored here: /var/lib/openhab/tmp
The annoying thing is that a new file is created every time, so the name never stays the same.
(Example: 15577437559502774695896193164.wav, 90815241810315516649997364681.wav, 9081524182288832291713534046.wav,…)
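One possible workaround for the ever-changing file names would be to watch the tmp folder and forward each finished wav into the snapcast fifo. A hedged sketch only: inotifywait comes from the inotify-tools package, and ffmpeg, the watched folder, and the /tmp/snapfifo path are assumptions to adapt:

```shell
#!/bin/sh
# Watch the openHAB tmp folder and pipe every newly written wav,
# decoded to raw 48 kHz stereo s16le, into the snapcast fifo.
WATCH_DIR=/var/lib/openhab/tmp
FIFO=/tmp/snapfifo

inotifywait -m -e close_write --format '%w%f' "$WATCH_DIR" |
while read -r path; do
  case "$path" in
    *.wav)
      # Decode the wav to the raw format snapserver expects on the fifo:
      ffmpeg -loglevel error -i "$path" \
        -f s16le -ar 48000 -ac 2 - > "$FIFO"
      ;;
  esac
done
```

The sample format handed to ffmpeg must match the sampleformat configured for the snapserver pipe source.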
I installed the Pulseaudio binding and tried to create a bridge.
Unfortunately this fails.
Yes, the binding connects to an existing pulseaudio server, so your Pi must have pulseaudio installed.
pulseaudio --version should respond if it is installed on your Raspberry Pi.
On some distributions it is already installed (recent desktop distributions), but on older ones you have to do it yourself. And beware: installing pulseaudio is generally a big low-level change and many things can break. You should back up your system first.
On top of that, pulseaudio is by default a user process, so IF your Raspberry Pi is a “desktop one” with autologin to a desktop, it should be OK. But otherwise (minimal Raspberry installation with no graphical login), you have to install it first and make it a “system-wide service” so that it starts automatically.
Caution: I can’t speak for every version of Raspberry Pi OS or Raspbian or whatever; it may vary greatly and I could be plain wrong (my Raspberry Pis are old and many changes could have happened). This is just general guidance.
I hope you like tinkering with your Raspberry Pi and doing many internet searches… because even with pulseaudio installed and running, you are not done yet…
pulseaudio must have the module-cli-protocol-tcp module properly loaded, as stated in the pulseaudio openHAB binding documentation. This module allows external commands sent by openHAB to control pulseaudio. It can be done with the following command line:
pactl load-module module-cli-protocol-tcp port=4712
But this command line only lasts until the next reboot, so you have to find a way to run it at each boot. The way to do it depends on how you start pulseaudio, system wide or per user: I think you need to put the line “load-module module-cli-protocol-tcp port=4712” at the end of the file /etc/pulse/system.pa (system-wide installation) or /etc/pulse/default.pa (per-user installation).
I don’t think it’s a good idea to do a full tutorial here; the openHAB community forum is maybe not the right place to do support for Mimic 3 installation. But I can point you in the direction of the official documentation:
The arm64 package should be OK to install on your Pi, and it is nearly a one-line installation. After that, you have to run the mimic3 server on the Pi for the binding to connect to.
I hope you have a Pi 4 with sufficient memory; Mimic 3 consumes far more resources than pico.
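For reference, running the server and a quick smoke test could look like this (a sketch assuming the default Mimic 3 port and HTTP API; check the official documentation for your installation):

```shell
# Start the Mimic 3 HTTP server (installed with the arm64 package);
# by default it listens on port 59125:
mimic3-server &

# From another shell, ask it to synthesize a short wav:
curl -X POST --data 'Hello from openHAB' \
     --output /tmp/mimic-test.wav \
     http://localhost:59125/api/tts
```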
Okay thanks for the further info.
I am using the minimal version of Raspberry OS with no desktop.
It will probably take me a while to get everything installed and set up.
Some tinkering is not a problem, though a simple solution would have been great.
But there doesn’t seem to be one at the moment.
I’ll have a look at Mimic 3 once things are working with PulseAudio.
Before that it makes no sense.
The forum here is quite a good start for a tutorial, especially if it concerns the product itself.
As long as something can be found via a search engine, there can never be a bad place for it on the Internet.
(Stuart Hanlon, UK importer of Velbus hardware)
The documentation on Mimic seems quite well written.
I’ll confess I haven’t started with it yet, but even as someone who is hopeless with coding and configuring, I’m prepared to give it a try.
The link to the documentation is in the post about Mimic.