How to prevent overlap of say commands?

Hello,

I am using the say command so that openHAB tells me what's going on. However, I have the problem that if, by coincidence, two rules want to say something at the same time, the speech overlaps and you cannot understand anything (like when two or three people talk at the same time).

Is there a way to prevent that? I tried introducing a new String item and a rule, but it still does not work, as the say command does not wait until the speech is finished; it returns immediately, so the saying variable is reset to false too early:

var boolean saying = false
rule "Say Text"
when
        Item    vSAY    received command
then
                while (saying) {
                        Thread.sleep(500)
                        logInfo("SAY", "Waiting for another announcement to finish.")
                }
                saying = true
                say(vSAY.state.toString)
                saying = false
end

Does anyone have an idea of how to prevent the overlap?

This might help


Isn't it a bug (or a dev request) if "say" is not serializing its output by itself?

Without looking into the code I'd say no. I can't tell on which audio sink you are using the say command, but when using it on a device that is also supposed to play music, such a behavior would be counterproductive. Imagine a running playlist of x songs; should your say command be scheduled after the last song?

Hello @opus

I don't understand your last post. I'm not thinking about scheduling after the last song of a playlist, but about some way to determine when a say command has finished, so that I can have a sequence of say commands. I mean - I'm not too deep into this - but doesn't a command have a clear moment when it starts and another one when it finishes? Even if the say command runs in a separate thread, wouldn't it be able to register some kind of "finished" listener?

Hi @rossko57

thanks a lot for the links. I looked at them, but they're more about playSound than the "say" command. Unfortunately, I found no solution that is practical in the context I stated above, but they're actually useful for other things I was struggling with.

If you have an idea for the problem with “say” as given in my first post, I’d appreciate your help.

In what way would you select which audio stream (a say command, a played music file, a radio stream, etc.) is to be interrupted?
Which instance should keep the knowledge of what is actually playing? The instance that creates a say command has no knowledge of what is actually playing!
What is absolutely clear: if you submit a say command, you want to interrupt the currently playing stream.

Yes, in the use case you describe, that is a problem.

The same problems arise; both say and playSound are asynchronous. Similar solutions (band-aids really) can be applied, e.g. queuing. The difficulty in applying that here would be calculating how long a sound the TTS service generates.

In my own system, announcements are limited to important events - doorbell, intruder alarm, etc. It is possible to manage those by simply blocking/queuing for a period as long as the likely longest announcement plus a rest. In my selective usage that rarely gets invoked anyway; I really want an unobtrusive system that doesn't babble on. Other folks would like a constant flow of chatter, of course :slight_smile:
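In rule terms that can be as simple as a shared lock plus a fixed wait. A minimal sketch, assuming a vSAY proxy item that all rules send their text to, and a six-second worst case; both are assumptions to adjust for your own announcements:

    import java.util.concurrent.locks.ReentrantLock

    val ReentrantLock announceLock = new ReentrantLock

    rule "Serialized announcement"
    when
        Item vSAY received command
    then
        announceLock.lock
        try {
            say(receivedCommand.toString)
            // say() returns immediately, so hold the lock for roughly
            // the longest expected announcement plus a rest
            Thread::sleep(6000)
        } finally {
            announceLock.unlock
        }
    end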

An issue has already been opened: https://github.com/eclipse/smarthome/issues/2333

Hi @opus

oh - I'm not looking to interrupt anything at all. I don't play any audio streams on the machine; I only use the say command, and there I don't want interruption but a sequence (as I described in my first post). That is why I tried to write a rule that lets me invoke say indirectly, where I hoped it would lead to the texts being spoken one after another instead of all at once.

Okay - I now have a solution that more or less works for me. Here's what I've done to have all speech happen sequentially, without overlap:

  • Do not use “say” in rules, but write a shell script and execute it with executeCommandLine
  • Install task-spooler via apt
  • For my speech output I now use previously created mp3s that contain the spoken text. I run them from the shell script with mplayer via tsp; this schedules them in a FIFO queue and everything works as expected: no more overlaps, no more problems. :slight_smile:
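The core of it is one queued playback command per announcement, roughly like this (the file path is only an example):

    # tsp queues the job; queued jobs run strictly one after another (FIFO)
    tsp mplayer -really-quiet /usr/local/share/sounds/backdoor_open.mp3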

lolodomo commented 13 days ago in https://github.com/eclipse/smarthome/issues/2333

What is clearly not working, even with Sonos, is when you have several rules triggered, each one calling say or playSound. In this case, the notifications are not in sequence. A lock mechanism is necessary. I will fix this case.

@Lolodomo thank you for your activity on that issue, because like @antares2001 I'm struggling with that behaviour and I was very happy to read your announcement.
I’m no coder, but if I can support you (maybe doing some testing) let me know.

@antares2001 thank you for sharing your solution. I would like to try your suggestion, but I'm afraid it might be too difficult for me.

I have not yet published the fix for Sonos because it does not yet work 100%, but I should be able to publish something reliable before the end of this month. My fix will include a test and a fix (if needed) of the save/restore stuff.

Hello @anfaenger

I will briefly describe what I did on my system to help you set up yours. You might have to fiddle around with it a bit, since my setup has grown much more complex recently and now does everything I want :slight_smile:

  1. Create some mp3 files that contain the spoken text (you might record your voice, or if you know a friend with a nice voice ask her to record the text for you :wink: )
  2. Install task-spooler, which will provide you the command tsp
  3. Create a shell script say.sh in /usr/local/bin that you will call from your OpenHAB rules with the following code and the mp3 file as parameter:
    /usr/bin/tsp -n /usr/local/bin/dosay.sh "$@"
  4. Create the script dosay.sh in /usr/local/bin/, where you execute mplayer with the mp3 file (a minimal sketch of such a script is shown below)

In your rules you will have a line like
executeCommandLine('/usr/local/bin/say.sh@@' + "hello_world.mp3")
which triggers the sound.
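A minimal dosay.sh could look like this (the sounds directory is just an assumption; adjust the path and the mplayer options to your setup):

    #!/bin/sh
    # dosay.sh - play one announcement mp3; tsp runs these jobs one after another
    SOUND_DIR=/usr/local/share/sounds
    /usr/bin/mplayer -really-quiet "$SOUND_DIR/$1"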

If you need any further help, don't hesitate to ask and post what you've done. I'd also be glad if you shared your setup and improvements here.

Hi @antares2001 and thanks a lot for your info!

At the moment I'm working on a function that gives me various pieces of information when I'm leaving the house.
Something like: "Pay attention! Backdoor is open, left window in living room is open. Outside temperature is twenty degrees. Weather forecast says rain is possible, take an umbrella with you!"
Maybe it is not very flexible if I have to create mp3 files in advance, not knowing which text strings will come up to be spoken.
For example, when announcing the forecast weather conditions, which I get from the weather binding.
What do you think?

What you can do, of course, is use text-to-speech software on your computer to generate the speech dynamically. You just have to see which one works on your machine; in my experience the Raspberry Pi was too slow for that.

Of course, what you could do on a Raspberry Pi: do the dynamic generation with a TTS tool like festival (https://wiki.ubuntuusers.de/Festival/) and cache the generated audio as mp3. Before you actually run the TTS program, check whether you have already used that string once and generated an mp3 for it. If yes, you play that mp3; otherwise you let the mp3 be generated, store it, and then play it. So the first time will be laggy, but after that you get instant playback for cached texts (a rough sketch follows below).

Just as a quick idea :slight_smile:
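Roughly, such a caching script could look like this (an untested sketch; it assumes festival's text2wave and lame are installed, and the cache directory is just an example):

    #!/bin/sh
    # saytext.sh - speak arbitrary text, caching the generated audio as mp3
    CACHE=/var/cache/tts
    mkdir -p "$CACHE"
    TEXT="$*"
    HASH=$(printf '%s' "$TEXT" | md5sum | cut -d' ' -f1)
    MP3="$CACHE/$HASH.mp3"
    if [ ! -f "$MP3" ]; then
        # first use of this text: generate the speech and store it
        printf '%s' "$TEXT" | text2wave -o "$CACHE/$HASH.wav"
        lame --quiet "$CACHE/$HASH.wav" "$MP3"
        rm -f "$CACHE/$HASH.wav"
    fi
    # queue the playback so announcements never overlap
    tsp mplayer -really-quiet "$MP3"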

P.S.: I like the idea of such recommendations. I have to think about integrating them as well.

P.P.S.: Another way would be to record words (that's what I did) instead of sentences, for "left", "right", "window", "door", "light", … and then build the sentence on the fly from the single-word mp3s.
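A tiny sketch of that word-by-word idea (assuming one mp3 per word in a words directory; mplayer plays the listed files in sequence, and tsp keeps the whole phrase as one queued job):

    #!/bin/sh
    # sayphrase.sh - build an announcement from single-word mp3s,
    # e.g. "sayphrase.sh left window open"
    WORDS_DIR=/usr/local/share/sounds/words
    FILES=""
    for w in "$@"; do
        FILES="$FILES $WORDS_DIR/$w.mp3"
    done
    # word-splitting of $FILES is intended here (no spaces in the word names)
    tsp mplayer -really-quiet $FILES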

You are right, I'm on an RPi. And as you suggest, I'm already using TTS with VoiceRSS. So I will continue on that path, because I like the flexibility to announce any string that comes along.

The main problem at the moment is: sometimes I get overlapping say commands.
Thx