I’ve been using OpenHAB for a few years now, and I’ve found it a really helpful and awesome platform to build your home automation on. Thank you so much for the great work!
Finally I found an occasion to contribute back. I wanted to share an experiment I’ve been working on in my free time; I hope it might be helpful, or a good starting point for somebody else :).
For those who don’t know it, DeepDetect is a platform that can run even on small devices (e.g. an RPi3) and exposes deep learning algorithms through an API, for example to detect objects in images.
I went one step further and tried to connect the DeepDetect API to OpenHAB. The result is a binary that can run on the same board (e.g. an OpenHABian install), read JPEG streams (e.g. from FOSCAM cameras) or even local webcams, and turn the API results into OpenHAB Item updates.
For now it detects 3 categories: Humans, Vehicles and Animals. You can associate an Item with each category, and you can run the binary multiple times against different cameras (but I would expect performance to drop noticeably, with a much lower FPS, unless you start distributing the services across different devices).
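To give an idea of the category-to-Item mapping, here is a minimal sketch in Python of how a DeepDetect-style prediction response could be turned into OpenHAB Item states. The item names, the category labels, and the `item_updates` helper are my own illustrative assumptions here, not the actual gluedd-cli code; the real binary handles the mapping via its own configuration.

```python
# Hypothetical mapping from detected categories to OpenHAB Item names.
# In gluedd-cli the categories/Items are configurable; these names are
# made up for illustration.
CATEGORY_ITEMS = {
    "human": "Camera_Humans",
    "vehicle": "Camera_Vehicles",
    "animal": "Camera_Animals",
}

def item_updates(prediction_body, threshold=0.5):
    """Turn a DeepDetect-style /predict response body into Item states.

    Every category starts as OFF; a detection above the confidence
    threshold switches the corresponding Item to ON.
    """
    updates = {item: "OFF" for item in CATEGORY_ITEMS.values()}
    for pred in prediction_body.get("predictions", []):
        for cls in pred.get("classes", []):
            item = CATEGORY_ITEMS.get(cls.get("cat"))
            if item and cls.get("prob", 0.0) >= threshold:
                updates[item] = "ON"
    return updates

# Example response shaped like DeepDetect's predictions output:
sample = {
    "predictions": [
        {"uri": "cam1.jpg",
         "classes": [{"cat": "human", "prob": 0.92},
                     {"cat": "animal", "prob": 0.30}]}
    ]
}
print(item_updates(sample))  # human ON, low-confidence animal stays OFF
```

The resulting states could then be pushed to OpenHAB, e.g. via its REST API, one update per Item.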
The project is purely a hobby and a work in progress. You can find it on GitHub with step-by-step instructions here: https://github.com/mudler/gluedd-cli (it may not work yet, but if you try it, let me know).
Any feedback is welcome!