I’ve been trying to use the Sonos binding and audio sink with a recent snapshot version of OH2 on a Raspberry Pi. I’ve had some issues and I’m wondering if these are known and/or if there are workarounds.
I have three Sonos Play 1 speakers. I installed the Sonos binding and made the Sonos the default audio sink. I then tried to send audio files and use VoiceRSS TTS from the Karaf console with no success and no errors in the log file for Sonos or the audio manager (even at trace level). The log file did indicate that the VoiceRSS audio was being successfully downloaded and apparently streamed somewhere.
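For reference, the console commands I was using looked roughly like this (the sink ID shown is a placeholder for one of my speakers' thing UIDs):

```
openhab> smarthome:audio sinks                                    # list registered audio sinks
openhab> smarthome:audio play sonos:PLAY1:RINCON_XXXX doorbell.mp3  # play a file from conf/sounds
openhab> smarthome:voice say "Hello from openHAB"                  # TTS to the default sink
```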
After hours of experimentation, I finally discovered that if I ungrouped my speakers I could send audio to each of them. Although that was definite progress, the audio (especially from TTS) was truncated after the first syllable or two.
I noticed that the Paper UI only listed Sonos:PLAY:1 as the audio sink option, although there are three speakers with different UDNs. Is this normal? Later, I also configured the defaultSink in the resource.cfg file.
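The setting itself was a single line like the following (the thing UID here is a placeholder for one of my speakers, and the property key is the one from the SmartHome audio service configuration):

```
org.eclipse.smarthome.audio:defaultSink=sonos:PLAY1:RINCON_XXXX
```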
Can audio and TTS be sent to a speaker group? Do I need to identify the group coordinator and send the audio only to that speaker, or should the sink handle this for me?
What can I do about the TTS and audio being truncated?
You can of course send TTS to a current group of Sonos speakers. You don’t have to worry about the coordinator; the binding automatically asks the coordinator to render the TTS message.
Of course, you need a thing defined for each Sonos.
I never encountered TTS messages being truncated.
Thanks for confirming it should work that way (which I had assumed).
Yes, obviously I had a thing defined for each Sonos device; otherwise, I wouldn’t have been able to send audio or TTS to the standalone speakers. I’m also using the player controls for the speakers without a problem and have them successfully integrated with Alexa voice control.
In any case, at this point what I know is that TTS/say and sending audio files didn’t work when the speakers were grouped, and they immediately started working when the speakers were put into standalone mode. I also know the Paper UI audio sink menu items did not match what was listed by “smarthome:audio sinks” in the Karaf console. I also know the audio was severely truncated with the bundled mp3s (barking and doorbell), custom mp3s, and TTS (maybe a timing issue?). Given that everything else I’ve tried has worked other than the audio sink functionality, the evidence seems to implicate the sink or the audio manager as the source of my problems.
If there are no good theories about what is causing these behaviors I’ll probably investigate an alternative solution for TTS (e.g., sonos-http-api).
No worries. However, I have seen other threads where people have had issues with the Sonos audio sink, so it appears I’m not the only one.
Fortunately, it only took about 5 minutes to set up a sonos-http-api Docker container, and that has worked flawlessly so far for both playing audio files and for TTS. As bonuses, it has a much richer interface with more TTS providers, Spotify support, SiriusXM support, and better radio support. It even has music search. I’m working on Jython code to grab our radio station favorites from a rule and dynamically create items with Alexa-compatible labels and tags so we can use voice to control the stations. Fun!
Can you please point me to the other threads where people have problems with the Sonos audio sink?
The only issue I am aware of is concurrent TTS commands that can overlap; this bug will be fixed soon.
What would you like to know? The documentation is at the link I posted earlier in this topic. I’ve seen a few topics that mention “sonos-http-api” but no dedicated topics AFAIK.
The sonos-http-api is a Node.js server that provides a REST endpoint and gateway to your Sonos speakers. I use Jython/JSR223 for rules, so I wrote a pseudo-binding that encapsulates the REST calls. Now I can access the gateway using code like sonos.say("hello") or sonos.play_siriusxm('MSNBC') from my rules.
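To illustrate, here’s a minimal sketch of what such a pseudo-binding can look like. It’s written as plain Python 3 for readability (under Jython 2.7 the imports differ); the host, port, and room name are placeholders, and only the `say` endpoint described in the sonos-http-api README is shown:

```python
from urllib.parse import quote
from urllib.request import urlopen


class Sonos(object):
    """Tiny client for a jishi/node-sonos-http-api gateway.

    Host, port, and room are placeholders -- adjust for your setup.
    """

    def __init__(self, host="localhost", port=5005, room="Kitchen"):
        self.base = "http://{}:{}".format(host, port)
        self.room = room

    def _url(self, *parts):
        # Build e.g. ("say", "hello") -> http://localhost:5005/Kitchen/say/hello
        path = "/".join(quote(str(p)) for p in (self.room,) + parts)
        return "{}/{}".format(self.base, path)

    def say(self, text, volume=None):
        # GET /{room}/say/{text}[/{volume}] triggers TTS on that room
        parts = ("say", text) if volume is None else ("say", text, volume)
        return urlopen(self._url(*parts)).read()


sonos = Sonos()
# sonos.say("hello")  # issues the HTTP request against a running gateway
```

The class shape mirrors the `sonos.say("hello")` usage above; other endpoints (play, favorites, etc.) can be added as one-line methods in the same style.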
Same here, so that’s not a difference. In general, in a “works on my machine” scenario I try to determine what’s different about the two contexts. The problem with openHAB is that there are so many permutations of software versions, operating systems, host hardware, installation techniques, configuration techniques, devices, device configurations, and so on that it can be difficult to identify that difference.
With the OH1 Sonos binding, I had an out-of-memory issue. I had similar “works on my machine” responses. Eventually, after many hours of debugging and analyzing server thread dumps I found the binding did indeed have an issue with runaway thread creation that consumed all the available memory and effectively crashed the server. It seemed to happen mostly with multiple speaker configurations so people with one speaker never experienced the problem.
Similar configuration here. I’m running OH 2.2.0-20170719101852-1, RPi3 (Raspbian 8), VoiceRSS with three Play 1 speakers using wifi for communication. The VoiceRSS apiKey was verified outside of OH2, and the OH2 logs indicate successful access to the service (although the resulting audio output is truncated). There’s a single wifi access point and good signals to all speakers (no repeaters or bridges, which appeared to cause issues for the OH1 Sonos binding). I’m using Chrome with security extensions (the OH host is whitelisted) on a Mac to access the Paper UI. My Java version is Oracle 1.8.0_131-b11.