Build your own face recognition server that interacts with openHAB by using motion detectors, IP cameras, and a small DIY Python application on an RPi3.
See more of the story here: How I trained my smart home to see me.
some presence items (switches) to make it work with your presence detection
rules for calling the Raspberry face recognition server in your network when motion is triggered
rules to receive presence settings from the server and send an alert email if unknowns are detected
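A motion-trigger rule along those lines might look roughly like the sketch below. The item name, host, port, and endpoint path are all placeholders for your own setup; `sendHttpGetRequest` is the standard openHAB HTTP action.

```
rule "Motion triggers face recognition"
when
    Item HallwayMotion changed to ON
then
    // placeholder host and endpoint — point this at your recognition server
    sendHttpGetRequest("http://facerec-pi:5000/startrecognition")
end
```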
But I hope this is helpful/inspiring to the community. It would be great to test it in other settings, with other IP cameras, and to enhance the functionality.
Hello!! I cannot start uwsgi, I get an error. Please help me!
*** Operational MODE: preforking ***
added ./sh_face_rec/ to pythonpath.
ImportError: numpy.core.multiarray failed to import
Traceback (most recent call last):
File "sh_face_rec/startserver.py", line 3, in <module>
import cv2
File "/home/ring0/.local/lib/python3.6/site-packages/cv2/__init__.py", line 3, in <module>
from .cv2 import *
ImportError: numpy.core.multiarray failed to import
unable to load app 0 (mountpoint='') (callable not found or import error)
*** no app loaded. going in full dynamic mode ***
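That `numpy.core.multiarray failed to import` error usually means the installed NumPy is older than (or otherwise incompatible with) the one your cv2 build was compiled against. A likely fix, assuming cv2 was installed with pip for the same Python 3.6 interpreter that uwsgi runs:

```shell
# Upgrade NumPy for the interpreter uwsgi uses (interpreter/path may differ
# on your system), then verify that both packages import together:
python3 -m pip install --user --upgrade numpy
python3 -c "import numpy, cv2; print(numpy.__version__, cv2.__version__)"
```

If the second command prints both versions without a traceback, restarting uwsgi should get past the import error.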
I solved the previous problem, now I get another error.
2019-09-26 17:47:38,202 PID: 9258 INFO - MTCNNDetector: Creating networks and loading parameters
2019-09-26 17:47:38.203179: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-09-26 17:47:39,102 PID: 9258 INFO - FaceRecognizer: Initializing MTCNN Detector
Process Process-3:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "./sh_face_rec/frameworker.py", line 128, in work
self.faceReconizer = FaceRecognizer()
File "./sh_face_rec/facerecognizer.py", line 69, in __init__
self.knn_clf = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '\x01'.
subprocess 9258 exited with code 1
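That `UnpicklingError` suggests the classifier file that `facerecognizer.py` opens is not actually a pickle on disk — typically a corrupted or truncated copy, a file transferred in text mode, or a Git LFS pointer file that was cloned instead of the real model. Re-copying or regenerating the trained classifier should fix it. A minimal sketch of the failure mode (the payload here is made up for illustration):

```python
import pickle

# A genuine pickle round-trips without trouble:
payload = {"labels": ["alice", "bob"]}
blob = pickle.dumps(payload)
assert pickle.loads(blob) == payload

# But feeding pickle.load anything that is not a pickle stream fails
# exactly like the traceback above, naming the first bad byte:
try:
    pickle.loads(b"\x01 not a pickle stream")
except pickle.UnpicklingError as exc:
    print(exc)  # → invalid load key, '\x01'.
```

A quick sanity check is to compare the file size (and, e.g., `md5sum`) of the `.clf` file on the Pi against the original; an LFS pointer is only a few hundred bytes of text.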
Sure, just buy a third Pi and use the Motion project to create your own IP camera. There is probably also a way to use a camera directly, without a Pi, using FFmpeg.
I think it is possible (I never tried it, only read about others doing it), but it would need to be a Pi 4 with 4 GB RAM and USB 3.
openHAB and any camera using the IpCamera binding can create an MJPEG stream if you supply the correct FFmpeg input to the binding. I recommend looking at the snapshots.mjpeg stream that the binding creates for you, since it is only 1 frame per second, which lowers CPU load, especially if the camera has a snapshot URL. You are probably looking at a 2% CPU load increase on a Pi 4 to do that, so essentially the entire CPU load will come from TensorFlow.
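To give a rough idea of what consuming such a stream involves, here is a minimal, hypothetical sketch that pulls JPEG frames out of an MJPEG byte stream. It simply scans for the JPEG start/end markers, which is a simplification (a robust client would split on the multipart boundary headers instead, and the real stream URL depends on your openHAB host and camera id):

```python
import io

def frames(stream, chunk_size=4096):
    """Yield JPEG frames from an MJPEG byte stream by scanning for the
    JPEG start (FFD8) and end (FFD9) markers. Simplified: assumes the
    end marker never occurs inside the compressed image data."""
    buf = b""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        buf += chunk
        while True:
            start = buf.find(b"\xff\xd8")
            end = buf.find(b"\xff\xd9", start + 2) if start != -1 else -1
            if start == -1 or end == -1:
                break
            yield buf[start:end + 2]
            buf = buf[end + 2:]

# Fake two-frame stream, shaped like a multipart MJPEG response:
fake = io.BytesIO(
    b"--bound\r\nContent-Type: image/jpeg\r\n\r\n\xff\xd8AAAA\xff\xd9\r\n"
    b"--bound\r\nContent-Type: image/jpeg\r\n\r\n\xff\xd8BBBB\xff\xd9\r\n"
)
print([f[:4] for f in frames(fake)])  # → [b'\xff\xd8AA', b'\xff\xd8BB']
```

In real use you would pass the response object from `urllib.request.urlopen(url)` (or the binding's stream URL) as `stream`, and hand each yielded JPEG to the recognizer — at 1 frame per second this loop costs almost nothing.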
You can also run TensorFlow Lite on one of these to offload the processing away from the Pi using USB 3:
However, it may be better to get a second Pi for that money, or better still an Odroid N2+ (which is what I run openHAB on), as it has twice the CPU power of a Pi 4 and will handle it with ease.
I configured the Raspberry Pi 3 to use all 4 cores for TensorFlow processing, because I wanted low latency from detecting someone on the camera until the notification happens. That fully loads the Pi, and I would NOT recommend running openHAB on the same system.
If you can live with higher latency, you can run TensorFlow and openHAB on the same Pi. I haven't tried it on a Pi 4.