Forwarding objects detected by YOLOv3 to the openHAB platform for an "AI application"

Hello openHAB community,

I'm using the YOLOv3 object detection algorithm together with a Raspberry Pi 3B+ and an IP camera in order to recognize objects in real time. When certain items are detected, some kind of note/message should be displayed in my smart home platform (openHAB) at the same time.

However, I'm currently struggling with getting the detected objects transferred anywhere else. So far my code only lets me view a livestream from the Raspberry Pi on my desktop, with the objects detected by YOLO marked in it.

One option could be to forward a JSON file containing the detected objects, which I already manage to create when analyzing single images. However, I'm struggling to integrate the data from this JSON file into a rule or script. Has anyone dealt with something similar so far and can offer help?

The JSON file I'm able to extract looks like the following:

[{"label": "tvmonitor", "confidence": 0.88, "topleft": {"x": 157, "y": 94}, "bottomright": {"x": 345, "y": 280}}, {"label": "keyboard", "confidence": 0.8, "topleft": {"x": 123, "y": 263}, "bottomright": {"x": 333, "y": 371}}]

Thanks!
Simon

Hi Simon,

this is an interesting approach, but what is your goal, what do you want to achieve? I think the first question is: how is the data offered? MQTT message, HTTP, something else? With the JSONPATH transformation you could access parts of the JSON and then work with them.
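Just to illustrate what that gives you: against the JSON you posted, a JSONPATH expression along the lines of $..label would pull out all the detected labels. The same extraction in plain Python (only a sketch to show the idea, not openHAB code) looks like this:

    import json

    # Detection output as posted above
    raw = ('[{"label": "tvmonitor", "confidence": 0.88, '
           '"topleft": {"x": 157, "y": 94}, "bottomright": {"x": 345, "y": 280}}, '
           '{"label": "keyboard", "confidence": 0.8, '
           '"topleft": {"x": 123, "y": 263}, "bottomright": {"x": 333, "y": 371}}]')

    detections = json.loads(raw)

    # Roughly what a JSONPATH query like $..label would return: all detected labels
    labels = [d["label"] for d in detections]
    print(labels)  # ['tvmonitor', 'keyboard']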

What are you trying to achieve?

openHAB pretty much relies on dealing with a pre-defined universe, e.g. it already knows about the lights and doorbells and boilers that you have.
So you could pre-create a “Dog” Item and have your data set it to “present”/“missing” or “current location” or something.

Setting something up to create a “map” of newly discovered objects is going to be a lot more work.
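If you go that route, the state update itself is not the hard part: an external script can set a pre-created Item through the openHAB REST API. Something like this rough Python sketch (the host and Item names such as "Detected_Child" are just placeholders for this example):

    import requests

    OPENHAB_URL = "http://openhab-host:8080"  # adjust to your openHAB instance

    def set_item_state(item_name, state):
        # PUT /rest/items/<item>/state updates an Item's state (plain-text body)
        resp = requests.put(
            OPENHAB_URL + "/rest/items/" + item_name + "/state",
            data=state,
            headers={"Content-Type": "text/plain"},
        )
        resp.raise_for_status()

    # Mark pre-created String Items as present/missing based on the detections
    set_item_state("Detected_Child", "present")
    set_item_state("Detected_Knife", "missing")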

Thanks for your replies so far,

basically, I want to be notified in openHAB once a predefined scenario occurs. For instance, my IP camera might detect a child and a knife within its sight at the same time. There should then be a rule which gets triggered once these two objects are detected and sends some kind of alert message.

@Dibbler42
How the data is offered is still one of the issues. For single images, a JSON file gets created on my computer once YOLO has analyzed them for objects. However, I could not figure out a way / find a good tutorial so far on how to include this data in a rule, for example.
For the video livestream I only managed to display the frames with bounding boxes around detected objects, but no output file with the detections.

@rossko57
For my given case it could be sufficient to create the two Items "child" and "knife", which are set to "present"/"missing" according to the livestream. Do you know of someone who has implemented something similar, or do you have any tips on how to get started with this approach?

I’m VERY interested in this setup. My use case is a little different from yours. I want to do object detection every 10-15 seconds from a JPG image to see if there is a cat in my yard. If there is a cat, my sprinklers will turn on for 15 seconds.

I’m tired of the neighborhood cats pooping in my yard and all of my other ideas how to deal with it are “inhumane” according to my wife.

So my question - are you running YOLO on a Pi or on a desktop? Do you know if a Pi has enough power to process this stuff, or do you need desktop-class power?

As for your question about how to deal with the output, my approach would be to send the JSON through Node-RED (which has great JSON tools) to filter / sort your output and decide whether to trigger an openHAB Item. You could set up Node-RED with a series of switch items to detect (knife and child); if both are "on" you would trigger an alert…
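If you end up doing the filtering outside of Node-RED instead, the decision logic itself is tiny. A rough Python sketch (the should_alert helper and the label names are just placeholders, and the exact class names depend on the model you use):

    import json

    ALERT_LABELS = {"child", "knife"}  # labels that must appear together (model-dependent)
    MIN_CONFIDENCE = 0.6               # ignore weak detections

    def should_alert(detections_json):
        # True only if every alert label was seen with sufficient confidence
        detections = json.loads(detections_json)
        seen = {d["label"] for d in detections if d["confidence"] >= MIN_CONFIDENCE}
        return ALERT_LABELS.issubset(seen)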


If YOLOv3 doesn't have a way to publish the JSON, you will have to write something to get the JSON into openHAB. Ideally this JSON would be published using MQTT messages, or via an HTTP call directly to the openHAB REST API.

OH works best when data and events are pushed into it rather than polled for and pulled. I would create an external script that watches for new files from YOLOv3 and publishes those to OH using MQTT, but that's mainly because I already have a script and MQTT set up that I can easily modify to do this.
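Roughly, such a script could look like the sketch below (Python with paho-mqtt; the watch directory, broker host, and topic are placeholders you would adapt, and an MQTT-linked Item or Thing on the openHAB side would subscribe to that topic):

    import json
    import time
    from pathlib import Path

    import paho.mqtt.client as mqtt  # pip install paho-mqtt

    WATCH_DIR = Path("/home/pi/yolo-output")  # where YOLOv3 drops its JSON files
    TOPIC = "camera/detections"               # topic an openHAB MQTT Item would subscribe to

    client = mqtt.Client()
    client.connect("mqtt-broker-host", 1883)
    client.loop_start()

    seen = set()
    while True:
        for path in WATCH_DIR.glob("*.json"):
            if path in seen:
                continue
            payload = path.read_text()
            try:
                json.loads(payload)           # skip files that are still being written
            except ValueError:
                continue
            seen.add(path)
            client.publish(TOPIC, payload)
        time.sleep(2)                         # simple polling; a real file watcher would also work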

I think at some point someone tried to submit a file watcher binding that would trigger when a new file appears in the file system, but it doesn’t appear to have been accepted yet.

Beyond that, you are stuck with polling from an OH Rule which will be slow and kind of ugly because there is no way to get file system events inside OH Rules that I know of (maybe it’s possible using JSR223 Rules).

@crxporter
I also thought about analyzing only a single image every 10 seconds or more, since it would require way less processing power than a livestream does.

I'm running YOLO on my notebook, but for the livestream analysis I only reach 0.4 FPS, which is far from fast enough for real-life applications but fine for now while I work on the theoretical concept. However, if more computational power is needed, I've also considered switching to Google Colab, which seems to be an ML cloud service where you get free access to a fairly strong GPU. I found a tutorial on doing this here (http://blog.ibanyez.info/blogs/coding/20190410-run-a-google-colab-notebook-to-train-yolov3-using-darknet-in/) but have not followed it yet.

For single-image YOLO analysis a Raspberry Pi could be sufficient too, but I wouldn't bet on it.

I will take a look at what Node-RED is/does, thank you!

Node-RED is available in openHABian as an add-on in openhabian-config.

My approach (if a Pi has enough power) would be to run YOLO in an exec node (which runs command-line programs) so the output is right there, ready for processing. The output would go right into Node-RED. Your install would be different on a non-Pi machine, but running it would be the same: just process the output from the exec node and trigger Items as you like.
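Outside of Node-RED, the equivalent is just running the detector as a subprocess and capturing its output; the command below is only a placeholder for however you invoke YOLO on a single image:

    import subprocess

    # Placeholder command - replace with your actual detector invocation
    cmd = ["python3", "detect.py", "--image", "/home/pi/snapshot.jpg"]

    # This is essentially what the exec node does: run a command-line program
    # and hand its stdout on for further processing
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)  # detection output, ready to be parsed and filtered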

Node-red runs on any computer. If you decide this route feel free to ask me more questions about how to get it working.

If I have time I’ll set up a pi to test myself, I’ve been meaning to set up YOLO or TensorFlow on a pi anyway to see if this system will work (haven’t bought cameras yet).

Update: a Pi is definitely not going to be able to do this. I started YOLO detecting a sample JPG about 10 minutes ago and it's still going. Meanwhile I installed YOLO on my MacBook, tested it with the same sample image, and came here to comment. I'm a little sad but not surprised.

I'm trying to set up Node-RED but already got stuck during its installation. I have tried multiple approaches but still receive the error message "Failed to install Node.js - Exit" – did you also encounter that problem during the installation? I'm going to keep searching for the root of the problem.

Also: are you already using the "yolov3-tiny" version on your Pi? It can give some performance improvement, but probably still isn't enough to run YOLO on the Pi.

Try out the install instructions from nodered.org - they have good documentation with "getting started" guides for most platforms.

My goal today is to try a few of the Node-RED TensorFlow nodes to see if they're quick enough. I also ordered an Intel Neural Compute Stick yesterday - it should speed up my Pi 4 nicely…

I’m probably a few months away from buying my cameras still but I like to do early proof of concept work to make sure I’m headed in the right direction. With any luck I’ll have image recognition programming ready to go in a week or two.

Hi there. I am very interested in your project, too, and would like to do something similar. Did you get it to work with YOLO? Regards,