Voice Control (wit.ai) - My progress/setup + Tips/advice needed

I have a server written in Python that I use for general things. I was sending the STT text there, processing it, and sending it on to openHAB.
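
To give a rough idea of that last hop (a sketch, not my actual server code - it assumes openHAB's REST API on the default port, and "Kitchen_Light" is a made-up item name):

```python
import requests

OPENHAB = "http://localhost:8080"

def send_command(item, command):
    # openHAB's REST API accepts item commands as a plain-text POST body
    requests.post(
        f"{OPENHAB}/rest/items/{item}",
        data=command,
        headers={"Content-Type": "text/plain"},
    )

# e.g. after processing the STT text "turn on the kitchen light":
send_command("Kitchen_Light", "ON")
```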

So I tried a few Python libraries (and some online tests). NLTK was one (I think; it was a while back). And there was a Python fuzzy string matching library which actually worked the best: fuzzywuzzy (see https://github.com/seatgeek/fuzzywuzzy).
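
fuzzywuzzy just scores how similar two strings are (0 to 100), which turns out to be exactly what you want when the STT engine mangles a word. A quick illustration:

```python
from fuzzywuzzy import fuzz, process

# Similarity score, 0-100 - forgiving of STT mis-hearings and typos
print(fuzz.ratio("kitchen light", "kitchen lite"))  # high score despite the typo

# Pick the closest match out of a fixed list of known phrases
targets = ["kitchen light", "hall light", "heating"]
print(process.extractOne("kichen lihgt", targets))  # ('kitchen light', <score>)
```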

I found that NLTK was way too complicated, usually involved a “training corpus”, and was very focused on the mathematics of language processing rather than on figuring out what was meant vs. what was said. What you got back was a list (or dictionary) of keywords, which is what I was starting with anyway.
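
For example, POS-tagging a command with NLTK gives you something like this (a sketch; the tokenizer and tagger models need downloading first):

```python
import nltk

# One-time model downloads
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

tokens = nltk.word_tokenize("turn on the kitchen light")
print(nltk.pos_tag(tokens))
# A list of (word, part-of-speech) pairs, something like:
# [('turn', 'VB'), ('on', 'IN'), ('the', 'DT'), ('kitchen', 'NN'), ('light', 'NN')]
# Still just tagged words - nothing about what was actually meant.
```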

Really, all I wanted was the noun and the verb. I decided that since I had a very limited set of targets (i.e. lights and such) and a limited range of actions (ON, OFF, numbers, etc.), all I had to do was identify the key words and ignore the rest.
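
The whole idea fits in a few lines (a sketch, not my actual code - the item names and the 60-point thresholds are made up for illustration):

```python
from fuzzywuzzy import process

# Hypothetical vocabulary - in reality these came from my openHAB items
TARGETS = {"kitchen light": "Kitchen_Light", "hall light": "Hall_Light"}
ACTIONS = ["on", "off", "up", "down"]

def parse(text):
    """Pull the best-matching target and action out of an utterance
    and ignore every other word."""
    target, t_score = process.extractOne(text, list(TARGETS))
    action, a_score = process.extractOne(text, ACTIONS)
    if t_score > 60 and a_score > 60:  # arbitrary confidence cut-offs
        return TARGETS[target], action.upper()
    return None, None

print(parse("could you turn the kitchen light on please"))
# -> ('Kitchen_Light', 'ON')
```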

I also wanted to get rid of the send/receive-from-Python-server step (too many links in the chain makes things fragile), so I had to implement it in openHAB's “limited subset of a language similar to Java”, which I know nothing about and which has no debugging tools. How hard could it be?