[Solved] OH Multimedia: Play WAV Files Directly with EnhancedJavasound

I am attempting to use Pico TTS to manage announcements from OH over a whole home audio system. OH is running as an openhabian instance with a HifiBerry on an RPi 3B. I also have a beefy HTPC setup running Logitech Media Server (along with Plex, et al) that can be leveraged.

In using Pico TTS, I originally tried to use ‘say’ commands over squeezelite on the HTPC. Since the output of Pico is 16-bit mono wav files, and there are known issues with wav files in general and very short wav files in particular on that path, I changed my strategy to having OH interrupt any whole-home audio functions and then output an announcement directly using enhancedjavasound. It still appears that ‘say’ functions don’t work directly with enhancedjavasound, presumably because Pico produces wav files.

So my question is fairly generic in nature. Can OH directly play wav files (more specifically, the 16-bit mono wav output of Pico) over the System Speaker (with mp3 support) sink? Mplayer can play them, but that adds another layer of complexity: either dynamically generating wav files with Pico and then playing them via a command-line script with mplayer, or generating wav files with Pico, converting them to mp3 with lame via a command-line script, and then playing them with either squeezelite or the enhancedjavasound sink. I assume someone out there has Pico running correctly with wav files, and I just can’t seem to get it working…
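For reference, the second workaround I’m describing would look something like this as a bash script. This is only a sketch of the idea, assuming pico2wave, lame, and mplayer are installed; the `announce` function name and the temp-file handling are mine, not from any binding:

```shell
#!/bin/bash
# Hypothetical helper: render text with Pico TTS, transcode the wav to mp3
# with lame, and play it with mplayer. Names and paths are illustrative only.

announce() {
    local text="$1"
    local wav mp3
    wav="$(mktemp --suffix=.wav)"      # Pico can only emit 16-bit mono WAV
    mp3="${wav%.wav}.mp3"              # derive the mp3 name from the wav name

    pico2wave -w "$wav" "$text" &&
        lame --quiet "$wav" "$mp3" &&  # WAV -> MP3 for the mp3-capable sink
        mplayer -really-quiet "$mp3"   # or hand it to squeezelite/LMS instead

    rm -f "$wav" "$mp3"                # clean up the temp files
}

# Example: announce "Garage door is open"
```

From a rule, something like this could then be wired up via executeCommandLine or the Exec binding.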

Thanks for any help.

AFAIK no. But it can play MP3 or plain 44kHz to any sink it has access to.
So why don’t you use those formats?

openhab> smarthome:audio sources
* System Microphone (javasound)
openhab> smarthome:audio sinks
  System Speaker (javasound)
* System Speaker (with mp3 support) (enhancedjavasound)
  Web Audio (webaudio)
openhab>

I’m trying to use a TTS service that isn’t dependent on going out to the internet for processing. That appears to limit me to Pico TTS and Mary TTS. Mary TTS is too heavy for the RPi, and both appear to produce only wav output. As a result, I don’t see how either binding can work since the default sinks don’t process wav files, and the squeezelite sink doesn’t play short wav files reliably.

In the case of Mary TTS, I could use the HTPC server, but it appears that I cannot process TTS commands on a different server from OH with the binding.

I presume most everyone uses an external TTS service for processing, or they script the system for Pico or Mary to create a wav file and then use an external application to convert it to mp3 before playing.

MaryTTS works fine on my RPi. Sure, it takes a second or two to process, but the same is true for cloud services, so there’s nothing wrong with that, is there?

No, a few seconds isn’t a problem. From the docs, I read that Mary TTS wouldn’t work well on an RPi and it appeared that it couldn’t be used on a separate server, so I generally discounted it as an option.

Since I need to come up to speed on it more before considering it, I assume it can transmit mp3s (i.e., that it’s not limited to wav files)? Also, just to be clear, I assume it must run on the Pi with OH. If it can run on another server and still be accessed by OH, I would probably take that route.

Thanks for the heads-up on Mary running on the RPi. The docs say it’s too heavy. Specifically, this quote threw me off of it as an option:

While it provides good quality results, it must be noted that it is too heavy-weight for most embedded hardware like Raspberry Pis. When using this service, you should be running openHAB on some real server instead.

@mstormi It appears that Mary TTS should have the same problem as Pico TTS, in that it can return only audio/wav and not audio/mp3. Are you processing the wav file with an external bash script as in this thread? If so, then this is still essentially the problem I have with Pico, and the core issue for newer users is that there is no local TTS that outputs mp3 (or another format compatible with enhancedjavasound). The workaround is to call an external bash script for wav-to-mp3 processing.

Just want to make sure I understand everything clearly…

maryTTS is available as an OH action. You don’t have to mess with audio formats, you can simply use say("…") in rules.

Ok. That makes things clearer. Much thanks. Just one more point of clarification, I assume I need to install MaryTTS separately, or does the binding install it? I installed the binding, assuming it installed everything. If it does install MaryTTS, it still doesn’t seem to work with ‘say’ commands.

I think it should work out of the box if you correctly installed the service (it should show up as a bundle in Karaf console).

Unfortunately, it doesn’t seem to work. I installed it from PaperUI, configured it as default TTS, chose a default MaryTTS voice, and tried smarthome:voice say Hello world from the console. I get no audio - same problem I had with Pico. Using smarthome:audio play doorbell.mp3 works. The sink is set as System Speaker (with mp3 support), and I have also tried it directly to the squeezelite player. No joy.

I rebooted the server, but the effect is the same.

Checking the debug log for javasound, I get the following when I try smarthome:voice say Hello world from the console.

2019-06-05 15:24:12.891 [WARN ] [me.io.javasound.internal.AudioPlayer] - No line found: line with format PCM_SIGNED 48000.0 Hz, 16 bit, mono, 2 bytes/frame, 24000.0 frames/second, little-endian not supported.

2019-06-05 15:24:12.893 [INFO ] [me.io.javasound.internal.AudioPlayer] - Available lines are:

2019-06-05 15:24:12.908 [INFO ] [me.io.javasound.internal.AudioPlayer] - interface SourceDataLine supporting 24 audio formats, and buffers of at least 32 bytes

2019-06-05 15:24:12.909 [INFO ] [me.io.javasound.internal.AudioPlayer] - interface Clip supporting 24 audio formats, and buffers of at least 32 bytes

2019-06-05 15:24:12.920 [INFO ] [me.io.javasound.internal.AudioPlayer] - interface SourceDataLine supporting 24 audio formats, and buffers of at least 32 bytes

2019-06-05 15:24:12.921 [INFO ] [me.io.javasound.internal.AudioPlayer] - interface Clip supporting 24 audio formats, and buffers of at least 32 bytes

2019-06-05 15:24:12.930 [INFO ] [me.io.javasound.internal.AudioPlayer] - Analogue Playback Boost source port

2019-06-05 15:24:12.932 [INFO ] [me.io.javasound.internal.AudioPlayer] - Max Overclock DAC source port

2019-06-05 15:24:12.933 [INFO ] [me.io.javasound.internal.AudioPlayer] - Max Overclock DSP source port

2019-06-05 15:24:12.935 [INFO ] [me.io.javasound.internal.AudioPlayer] - Max Overclock PLL source port

It still looks to me as if the wav file created by MaryTTS can’t be processed. Perhaps I’m missing something simple?

Ok. After much trial and error, I finally have a solution. TL;DR, the solution is pulseaudio.

For details, in case it helps someone else out…

There are a lot of threads about people having problems with say in particular, and with playSound. Few of them appear to be resolved, with many people hacking around the problem by using the Exec binding to run say-like commands through an external command-line player. My goal was to avoid that approach.

I’m running OH 2.4 on an RPi 3B. In my situation, I wanted to have TTS support for various announcements triggered on events. My requirement is for a local TTS - no going to an external service. That limits the currently integrated options to Mary TTS and Pico TTS. As noted earlier in this thread, I opted for Pico TTS due to the resource concerns noted for the RPi in the Mary TTS docs.

Originally, I had OH just send announcements via a squeezelite instance on an HTPC, but since announcements from either Mary TTS or Pico TTS are short-duration .wav files, the Logitech Media Server (in my case, on the HTPC) had problems processing them, and I ended up with no audio output (even though I could play these same files if loaded directly into LMS).

At this point, I figured I could just upgrade the audio on the RPi with OH and play audio directly on another audio source (I have a home audio system where I can quickly switch sources via OH). So I purchased a HiFiBerry DAC+ and upgraded the Pi.

After ensuring that the HiFiBerry card was the only one loaded and was set as default, I found myself in much the same situation: both playSound and say commands from the OH console didn’t work. I presume this is a javasound issue in selecting the correct card, as I could play both wav and mp3 files using a command-line player like mplayer.
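In case it helps anyone retracing this step: making the HiFiBerry the default ALSA card usually comes down to a config along these lines. The card name here is the typical one for the DAC+ overlay and is an assumption on my part - check the output of `aplay -l` on your system:

```
# /etc/asound.conf - point ALSA's default pcm and control devices at the HiFiBerry
defaults.pcm.card sndrpihifiberry
defaults.ctl.card sndrpihifiberry
```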

I finally opted to install pulseaudio (along with the libpulse-jni and libpulse-java packages) to act as an intermediary between all available audio (including java) and alsa. Lo and behold, it worked. I am now able to both playSound and say from within any rule to any sink. Interestingly, playSound still does NOT work from the console, though ‘say’ does; from within a rule, it all works. In addition, both Pico TTS and Mary TTS now work without issue, but I’ll probably default to Pico TTS since it has a much smaller overhead.
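For anyone wanting to reproduce this, the install boiled down to something like the following on Debian/Raspbian. The package names are the ones mentioned above; the pulse-access group membership for the openhab user is an assumption on my part that applies when running PulseAudio system-wide:

```shell
# Install PulseAudio plus the Java bindings that let javasound reach it
sudo apt-get update
sudo apt-get install -y pulseaudio libpulse-jni libpulse-java

# If running PulseAudio system-wide, let the openhab user reach the daemon
# (group/user names are assumptions based on a stock openHABian install)
sudo usermod -a -G pulse-access openhab
```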

Given the issues with audio and OH (and it appears there are many), it may be useful for new users to install pulseaudio by default. It has the added benefit of being able to stream audio from over the network reasonably simply.

In any event, I hope this information is of use to someone out there!


Well, pulseaudio all by itself is another potential troublemaker, and it in turn depends on a lot of OS settings and SW versions, and it’s not really needed.
Glad it works for you, but I myself have been trying to get pulseaudio to work properly for a long time without success.

I came across your thread before. I haven’t yet attempted to make OH connect across the network to my HTPC PA server. My goal is really to make TTS (or any other audio, for that matter) play correctly from OH. Having a local PA installation solved most of that, except in the case of squeezelite, where local blocking is still a problem (I’m going to have to script around starting and stopping the system-wide squeezelite installation on the RPi, because it does not work with PA - it’s an ARM-related issue with the PA portaudio library).

From what I read in your thread, it looks like you have most everything working. FWIW, my playSound doesn’t work from the console (which is what I believe was your last issue), but it does work from within a rule (hell if I know why) as long as the sink is System Speaker (with mp3 support).

Brian, I am still unraveling the mystery myself, but it is an onion with many layers. For me, getting the system itself to have working system sound was a nightmare. When I commissioned the machine OpenHAB runs on, I chose Debian because most of the stuff is deb based (openhabian). I had Linux experience, but mostly with CentOS, Fedora, and RH, not with deb. I wanted a basic install so as not to waste resources. I have had Mint on a box as an entertainment box and sound worked a treat, but for some reason I could not get drivers working for the modern onboard sound card in my brand-new Dell (help?)
Long story less long, I bought a $20 Bluetooth dongle and a $20 Bluetooth speaker, plugged them both in, and voila! System sound.
Layer one of onion peeled
Some time around now, since the system has system sound (with the Bluetooth), I noticed my pulseaudio binding things are online and I can control the volume of system sound with a slider in OpenHAB.
Bonus!
Layer two… getting OpenHAB to make sound… switch Configuration --> System --> Audio in Paper UI from System Speaker to Web Audio… voila! The tablet plays a sound through OpenHAB using playSound or whatever. Yeah, getting it to work by playing the sound through the tablet is a cheesy solution, but at least I know it works.
Layer 3 is getting OpenHAB to run a script to play sounds or whatever using executeCommandLine or the exec binding. I have tried both and have so far gotten a script to run successfully with executeCommandLine. The scripts need to be in the scripts directory in the OpenHAB config folder. The script needs the right ownership (chown) and it needs to be executable. Google is your bud if I lost anybody
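To make the layer-3 plumbing concrete, the shell side looks roughly like this. The script name announce.sh and the openhab user/group are placeholders of mine; the /etc/openhab2 path matches a default apt-based openHAB 2.x install:

```shell
# Put the script where openHAB looks for it, give it the right owner,
# and make it executable (placeholder names; adjust to your install):
sudo mv announce.sh /etc/openhab2/scripts/announce.sh
sudo chown openhab:openhab /etc/openhab2/scripts/announce.sh
sudo chmod +x /etc/openhab2/scripts/announce.sh
```

A rule can then call it with something like executeCommandLine("/etc/openhab2/scripts/announce.sh", 5000), where the second argument is the timeout in milliseconds.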

Yes, getting audio working has been a beast. My HTPC runs Debian testing, while OH is on the Pi. On the HTPC, I have pulseaudio working well with the ASUS motherboard, running both over HDMI (7.1 channel) and over the analog audio out. I can also run squeezelite on it for most whole home audio functions, including streaming any movie audio out. Announcement functions from OH over squeezelite on the HTPC don’t work because of the LMS/wav issue, but since I can run squeezelite on it with pulseaudio support, I can use playSound for pre-recorded TTS messages in mp3. No ‘say’ support, though (again, the wav issue on squeezelite is a problem).

OH on the Pi has been a different problem entirely. My remaining issue is really around blocking with the OH squeezelite player, since there’s no way to make squeezelite use pulseaudio on the Pi. So I have to script around shutting down the daemon, making an announcement over the speaker, and then restarting squeezelite for any audio streaming. Very hacky, but I think it’s doable. At least with a local pulseaudio running on the Pi, I have javasound working, so I can use TTS for ‘say’ without converting to mp3 as long as squeezelite isn’t blocking. Before, this wasn’t working at all.
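The workaround I have in mind could be sketched as a wrapper like this. It is entirely an assumption on my part - the systemd service name and the exact sequencing would need checking against a real install - but it captures the stop/announce/restart idea:

```shell
#!/bin/bash
# Sketch: stop the local squeezelite daemon so it releases the audio device,
# run the announcement command, then restart squeezelite.
# Assumes squeezelite is managed by systemd under the name "squeezelite".

with_squeezelite_paused() {
    sudo systemctl stop squeezelite
    "$@"                               # whatever plays the announcement
    local rc=$?
    sudo systemctl start squeezelite   # always resume streaming afterwards
    return $rc
}

# Hypothetical usage: with_squeezelite_paused mplayer /tmp/announcement.mp3
```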

You have probably already seen it as a resource, but the Arch wiki was indispensable for getting pulseaudio working correctly on the HTPC.

yupper doodles but it is a great resource and good on you for posting link. I have another thread started to document my learning process for something related and I’m still learning as I go

Sorry, and I love this software, but this seems way too hard

I guess this is a “welcome to the bleeding edge” issue in the field of DIY home automation. I’m sure it will all get easier over time with improvements, so the value of learning more of the hard, tedious details will pan out. It’s a complex process, and like you, I’m learning a lot as I go. I hope to provide more solutions over time to make it easier on other users. The more people that adopt, the better the project will be over time.

I’ll post more back as I work on the pulseaudio server htpc - to - OH link and (hopefully) get it working smoothly and reliably. Probably going to be next week before I can dig in more, though.

Best of luck. Let’s keep sharing solutions. Everyone benefits!
