DOODS anyone

Hi,

I’ve just found DOODS, which provides a local REST service that identifies objects in images, and I’d like to draw attention to it.

Is anyone already using it?
Maybe it’s interesting for the IP Cam Binding (@matt1)?

Cheers,
Sascha

I am using it ‘stand-alone’ and just for fun, to detect objects moving in front of our house.
Once an object is detected and it is not on the blacklist (e.g. elephant, bed), it gets tagged (a rectangle around its position) and a snapshot is sent via Telegram.
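
In case it helps anyone, the blacklist check itself only takes a few lines of Rules DSL. This is a rough, untested sketch: it assumes a variable named response already holds the JSON returned by DOODS (as in the rules further down this thread), and the blacklist contents are just an example.

// Rough sketch: only notify when at least one detection is NOT on the blacklist.
// Assumes "response" already holds the JSON returned by DOODS, as in the rules below.
val blacklist = newArrayList("elephant", "bed")
var i = 0
var String label = transform("JSONPATH", "$.detections[" + i + "].label", response)
var boolean interesting = false
// JSONPATH returns the whole response string once the index no longer exists
while (label != response) {
   if (!blacklist.contains(label)) {
      interesting = true
   }
   i = i + 1
   label = transform("JSONPATH", "$.detections[" + i + "].label", response)
}
if (interesting) {
   // tag the image and send the snapshot via Telegram here
}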

Why would you not want to know when there is an elephant passing by? Mind you, I suppose it depends where you live.


I have not tried it, but it is on the to-do list if I ever find the time, as it is a low priority for me. I don’t have issues with false positives except on one camera that my dog triggers, which would benefit from this. The time vs benefit ratio does not make sense for me right now.

For those that are using it, I am interested in hearing how much CPU load it puts on a Pi 4 and other processors with a single camera at 1 frame per second.

There is also an interesting face recognition binding in the openHAB addons repo on GitHub that was being reviewed for merging, but the review stalled and the PR got closed. If someone is keen and knows Java, it is worth checking out, as are a number of other projects.

I have a DOODS container running on a NAS; it performs OK as long as you send it single images rather than a live feed.
The simple idea is: when motion is detected, send the image to DOODS.
Very straightforward implementation.

import java.net.URL
import java.util.Base64
import javax.imageio.ImageIO
import java.awt.image.BufferedImage
import java.io.ByteArrayOutputStream

rule "Parking Motion"
when
   Item CAMParkingIP_MotionAlarm changed from OFF to ON
then
   logInfo("Parking Detection","MOTION DETECTED")

   // Grab a snapshot from the camera
   val URL imageURL = new URL('http://<IP AND PORT FROM CAMERA>/ipcamera.jpg')
   val BufferedImage cameraImageRAW = ImageIO.read(imageURL)

   // Re-encode as JPEG and base64-encode it for the DOODS request
   var ByteArrayOutputStream cameraImageJpg = new ByteArrayOutputStream()
   ImageIO.write(cameraImageRAW, "jpg", cameraImageJpg)
   val base64Image = Base64.getEncoder().encodeToString(cameraImageJpg.toByteArray())

   // Ask DOODS to report any label it is at least 60% confident about
   val myCommand = '{"detector_name":"default", "detect":{"*":60}, "data":"'+base64Image+'"}'

   // POST to the DOODS /detect endpoint with a 5 second timeout
   val response = sendHttpPostRequest('http://<IP AND PORT FROM DOODS>/detect', 'application/json', myCommand, 5000)

   logInfo("Parking Detection", response)
end
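
A small note on the request: "detect":{"*":60} asks DOODS to report any label it is at least 60% confident about. If I read the DOODS docs correctly, you can also list individual labels with their own thresholds, something like this (untested, so treat it as an assumption):

   // Untested variant: per-label confidence thresholds instead of the "*" wildcard
   val myCommand = '{"detector_name":"default", "detect":{"person":50, "car":60}, "data":"'+base64Image+'"}'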


I went a bit further: I draw a rectangle to highlight the region of each detected item, add the detected label, and send the result over Telegram.

I will take it even further when time allows :slight_smile: for example, I am planning to trigger external lights when a human is detected at night (a rough sketch of that idea follows after the code below). Please share your ideas and feedback on how to improve my code. Thanks!

import java.net.URL
import java.util.Base64
import javax.imageio.ImageIO
import java.awt.image.BufferedImage
import java.awt.Graphics
import java.io.ByteArrayOutputStream
import java.io.File

var String doodsIP="<IP>"
var String doodsPort="<PORT>"

var String openhabIP="<IP>"
var String openhabPort="<PORT>"

rule "Parking Motion"
when
   Item CAMParkingIP_MotionAlarm changed from OFF to ON
then
   logInfo("Parking Detection","MOTION DETECTED")
	
   // The camera snapshot URL is kept in a String item
   val motionImageURL = CameraParking_ImageURL.state.toString

   // Download the snapshot and base64-encode it for DOODS
   val URL imageURL = new URL(motionImageURL)
   val BufferedImage cameraImageRAW = ImageIO.read(imageURL)
   val ByteArrayOutputStream cameraImageJpg = new ByteArrayOutputStream()
   ImageIO.write(cameraImageRAW, "jpg", cameraImageJpg)
   val base64Image = Base64.getEncoder().encodeToString(cameraImageJpg.toByteArray())

   // Report any label DOODS is at least 35% confident about
   val String myCommand = '{"detector_name":"default", "detect":{"*":35}, "data":"'+base64Image+'"}'

   val String response = sendHttpPostRequest('http://'+doodsIP+':'+doodsPort+'/detect', 'application/json', myCommand, 5000)

   logInfo("Parking Detection", response)

   // DOODS returns a JSON array of detections; JSONPATH gives us each field as a String
   var i = 0
   var String detectedItem = transform("JSONPATH","$.detections["+i+"].label", response)
   var String detectionTop = transform("JSONPATH","$.detections["+i+"].top", response)
   var String detectionLeft = transform("JSONPATH","$.detections["+i+"].left", response)
   var String detectionBottom = transform("JSONPATH","$.detections["+i+"].bottom", response)
   var String detectionRight = transform("JSONPATH","$.detections["+i+"].right", response)
   val telegramAction = getActions("telegram","telegram:telegramBot:MyTelegram")

   // Prepare a drawing context on the downloaded image
   var Number imgWidth = cameraImageRAW.getWidth()
   var Number imgHeight = cameraImageRAW.getHeight()
   var Graphics g = cameraImageRAW.getGraphics()

   // Loop over all detections. When the index no longer exists, the JSONPATH
   // transformation returns the whole response string, which ends the loop.
   while (detectedItem != "null" && detectedItem != response) {

      logInfo("Parking Detection - loop", i + ">" + detectedItem + "<")

      // DOODS coordinates are fractions of the image size (0..1),
      // so scale them to pixels before drawing
      var rectX = imgWidth * (Float::parseFloat(detectionLeft))
      var rectY = imgHeight * (Float::parseFloat(detectionTop))
      var rectWidth = (imgWidth * (Float::parseFloat(detectionRight))) - (imgWidth * (Float::parseFloat(detectionLeft)))
      var rectHeight = (imgHeight * (Float::parseFloat(detectionBottom))) - (imgHeight * (Float::parseFloat(detectionTop)))

      // Draw the bounding box and label onto the image
      g.drawRect(rectX.intValue, rectY.intValue, rectWidth.intValue, rectHeight.intValue)
      g.drawString(detectedItem, rectX.intValue, rectY.intValue)

      // Move on to the next detection
      i = i + 1
      detectedItem = transform("JSONPATH","$.detections["+i+"].label", response)
      detectionTop = transform("JSONPATH","$.detections["+i+"].top", response)
      detectionLeft = transform("JSONPATH","$.detections["+i+"].left", response)
      detectionBottom = transform("JSONPATH","$.detections["+i+"].bottom", response)
      detectionRight = transform("JSONPATH","$.detections["+i+"].right", response)
   }

   // Release the drawing context and write the annotated image where the
   // openHAB static file server can serve it
   g.dispose()
   var String path = File.separator + "etc" + File.separator + "openhab" + File.separator + "html" + File.separator + "image.jpg"
   var File outputfile = new File(path)
   ImageIO.write(cameraImageRAW, "jpg", outputfile)

   // Send the annotated snapshot via Telegram
   telegramAction.sendTelegramPhoto("http://"+openhabIP+":"+openhabPort+"/static/image.jpg", "Movement")
end
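
And here is the rough sketch of the night-light idea mentioned above. It is untested and assumes it lives in the same .rules file, so the imports and the doodsIP/doodsPort variables are available. Night_Mode and Outdoor_Light are made-up items (the first could be driven by the Astro binding, for example), and for brevity only the first detection is checked.

rule "Parking Person At Night"
when
   Item CAMParkingIP_MotionAlarm changed from OFF to ON
then
   // Night_Mode is a made-up Switch item, e.g. set by the Astro binding
   if (Night_Mode.state != ON) return;

   // Grab a snapshot and base64-encode it, same as in the rule above
   val BufferedImage img = ImageIO.read(new URL(CameraParking_ImageURL.state.toString))
   val ByteArrayOutputStream jpg = new ByteArrayOutputStream()
   ImageIO.write(img, "jpg", jpg)
   val data = Base64.getEncoder().encodeToString(jpg.toByteArray())

   // Ask DOODS for anything it is at least 50% confident about
   val resp = sendHttpPostRequest('http://'+doodsIP+':'+doodsPort+'/detect', 'application/json',
      '{"detector_name":"default", "detect":{"*":50}, "data":"'+data+'"}', 5000)

   // For brevity only the first detection is checked here
   val label = transform("JSONPATH", "$.detections[0].label", resp)
   if (label == "person") {
      // Outdoor_Light is a made-up Switch item
      Outdoor_Light.sendCommand(ON)
   }
end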


Some interesting reading on the same topic: https://towardsdatascience.com/object-detection-with-10-lines-of-code-d6cb4d86f606

I’ve just come across Frigate. It performs object recognition and integrates with the Google Coral accelerator.

I’m now using both on a dedicated physical server; it processes up to 100 fps with object detection on the live feeds of 3 cameras (thanks to the Coral USB Accelerator).

Frigate publishes MQTT events for anything it detects, which is handy if your interest is openHAB.
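
For anyone who wants to pick those events up in openHAB: as far as I remember, Frigate publishes its detections as JSON on an MQTT topic (frigate/events, but double-check the topic and payload layout against your Frigate version), so a Generic MQTT Thing with a String channel on that topic plus a small rule is enough. The item and channel names below are made up.

rule "Frigate event"
when
   // Frigate_Event is a made-up String item linked to an MQTT channel
   // that is subscribed to Frigate's event topic (frigate/events here)
   Item Frigate_Event changed
then
   val json = Frigate_Event.state.toString
   // If I remember correctly, the detected label sits under after.label in the event payload
   val label = transform("JSONPATH", "$.after.label", json)
   logInfo("Frigate", "Detected: " + label)
end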


Sorry to necromance this post, but with the GUI on OH3, some changes are needed. This works for me. Of course, you need to adjust the image URL and the detection settings (I use a region to detect the presence of a vehicle).

logInfo("Parking Detection","xxxx Detection")

val java.net.URL imageURL = new java.net.URL("http://xxxxxx:xxxxx/ipcamera/1xxxxxx/snapshots.mjpeg")

val myCommand = '{"detector_name":"pytorch", 
  "regions":[{"top":0.25, "left":0.02, "bottom":1, "right":0.23, "detect":{"*":25},"covers":false}], 
  "data":"'+imageURL+'"}'
val response = sendHttpPostRequest('http://localhost:xxxx/detect', 'application/json', myCommand ,5000)

logInfo("Parking Detected - ",response)

...