I was hoping to use my old iPad as a HABPanel display today, but I struggled all day to get a picture feed from my newly hacked Xiaomi Dafang camera into HABPanel with Chrome/Safari on my iPhone/iPad.
That is interesting. I mainly use Windows and Android, so I had not encountered this problem.
I had issues displaying the picture feed on Android until I disabled HTTP to HTTPS redirection on the camera. The process for doing that is mentioned in the third post. Have you tried that yet?
Yes, but as far as I am aware it does not support push-to-talk functionality. You can play pre-loaded audio files and initiate an audio recording. Either of these could be started manually or by adding a command to a script.
Thanks for the information. I will try to export the pictures stored on the SD card to my NAS. Maybe that way I can somehow embed the picture in HABPanel.
I did not get MQTT to work with the new binding released for v2.4 (lack of skill on my side, unfortunately). Is there any way to test/log this in openHAB, for example to see in the openHAB logs whether the camera items receive updates?
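One way to check what the binding sees, assuming the standard openHAB Karaf console on port 8101, is to raise the MQTT binding's log level so every received message shows up in the log (the package name below matches the 2.4 binding; adjust if yours differs):

```
openhab> log:set TRACE org.openhab.binding.mqtt
openhab> log:tail
```

Set the level back with `log:set DEFAULT org.openhab.binding.mqtt` when you are done, as TRACE output gets noisy quickly.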
It is even easier than that, as I just found out. No need to change or write scripts: the image of the motion event is already published on the topic "myhome/dafang/motion/snapshot".
In the configuration file you just need to set
save_snapshot=true
publish_mqtt_message=true
Then you can add to the sample you provided in the first post:
mqtt.things: Type image : motionimage "Motion Image" [ stateTopic="myhome/dafang/motion/snapshot" ]
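For completeness, here is roughly how that channel fits into a full mqtt.things file with the 2.4 binding, plus a matching item. The broker host and the thing/item names are placeholders I made up, so adapt them to your setup:

```
// mqtt.things — broker bridge plus a generic topic thing for the Dafang
Bridge mqtt:broker:mybroker [ host="192.168.1.10", secure=false ]
{
    Thing topic dafang "Dafang Camera" {
    Channels:
        Type image : motionimage "Motion Image" [ stateTopic="myhome/dafang/motion/snapshot" ]
    }
}
```

```
// mqtt.items — link an Image item to the snapshot channel
Image Dafang_MotionImage "Motion Image" { channel="mqtt:topic:mybroker:dafang:motionimage" }
```

The Image item can then be placed on a sitemap or in a HABPanel widget like any other item.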
Hi,
I’m also interested in this topic, but I need extra help to create the Mosquitto thing.
Would you please give me some tips on the steps I need to follow to configure the Dafang and the Mosquitto broker?
Where are you stuck? Is the firmware already running on your camera? For the MQTT part, Max explained everything that needs to be done (see also the links to the binding docs).
After you have installed the broker for openHAB, you add its address in the conf file of the camera. The other files you mentioned are created on the openHAB side, in the usual folders for things and items.
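To illustrate the camera side: in the Dafang Hacks firmware the broker address goes into the MQTT config file on the SD card. The path, key names, and defaults below are a sketch and may differ between firmware versions, so compare against the sample config shipped with the hack:

```
# /system/sdcard/config/mqtt.conf (path and key names may vary by firmware version)
HOST=192.168.1.10       # IP of the machine running your Mosquitto broker
PORT=1883
USER=mqttuser           # leave empty if your broker allows anonymous access
PASS=mqttpass
LOCATION=myhome/dafang  # prefix used for all of the camera's topics
```

After editing the file, restart the camera's MQTT service (or reboot the camera) so the new settings take effect.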
Have you tried this with the non-panning original Xiaofang? Dafang Hacks supports the Wyze V2/Xiaofang 1S… so, will this example work without the panning commands? Has anyone tried it?
I just hacked two WYZE V2 cameras and am hoping to get them working with HABPanel. Has anyone had any success? On a Mac, I can display the web interface, but for dashboard purposes it is not great because the video has a large frame around it. I’d like to just have the stream fill the panel. I ultimately want to use the dashboard on an iPad and Raspberry Pi.
I have had similar issues displaying video with my foscams. One of the big issues is that the RTSP feeds have a big delay on them, which is not very useful for live video feeds.
The way I solved it is to use motion (which I’m sure you are familiar with) https://motion-project.github.io/ but I changed the motion source code to read directly from my Foscam cameras. This isn’t a Foscam firmware hack; Foscam provides undocumented low-level (socket-level) access to the h264 video feed (and other features).
So I added access to this video feed to motion, and then serve up the MJPEG feeds from motion (at 640x480 resolution) via web pages. These web pages can be embedded in any openHAB page using the Webview widget in the sitemap. It works on everything I have tried: web, iOS, Classic UI, etc. (I haven’t tried Android).
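For anyone wanting to reproduce the embedding step, a minimal sitemap entry looks like this; the URL is a placeholder for wherever your web server serves the page with the MJPEG streams:

```
sitemap cameras label="Cameras" {
    Frame label="Live video" {
        // page that embeds the MJPEG streams served by motion
        Webview url="http://mywebserver/cameras.html" height=12
    }
}
```

The `height` value is in sitemap rows; tune it until the embedded page fills the space you want.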
I also use motion for motion detection, motion tracking (with my foscam pan-and-zoom cameras), etc, all the usual motion good stuff.
You could do this with the normal motion interface (which supports RTSP); I didn’t use that interface because of the 20 s lag on the Foscam RTSP feed, but other cameras may not have the same lag.
This is similar to the ffserver/ffmpeg method, but motion is designed for cameras: it can handle multiple cameras and allows lots of customization, including text overlays (I overlay the weather), motion highlighting, motion detection, and motion tracking (if you have the right interface; I added my own Foscam one), and it is intended to run as a daemon.
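As a rough sketch of what the motion side of such a setup can look like (option names are from motion 4.x and the camera URL is a placeholder; check your version's documentation):

```
# motion.conf — serve each camera as an MJPEG stream for embedding in a web page
daemon on
width 640
height 480
text_left CAM %t            # text overlay; %t expands to the camera number
stream_localhost off        # allow hosts other than localhost to view the streams
camera /etc/motion/camera1.conf

# /etc/motion/camera1.conf — per-camera settings (one file per camera)
# netcam_url rtsp://user:pass@192.168.1.20:554/videoMain   # standard RTSP source
# stream_port 8081                                         # MJPEG port for this camera
```

The per-camera stream at `http://<host>:8081/` can then be embedded with a plain `<img>` tag in the page shown via Webview.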
It’s a bit processor-intensive: running 5 cameras takes about 60% of one of my (2.4 GHz) server cores, but overall that’s not bad.
I’ve attached some pictures of what it looks like, the screenshot is of my iPad (live 640x480 video) - recordings are in full HD.
The other nice thing is that because it’s served up in a web page, you can lay it out however you like and trigger actions when you click on an image. Here there are 4 live video feeds; clicking on any of them zooms that feed to full screen, and clicking again goes back to the 4-up display. You need your own web server and a little HTML skill, but not too much, and most people running OH2 have their own web server.
@Nicholas_Waterton
Motion is definitely a good way to go, especially if you have multiple cameras; if they don’t have motion detection available via an API, then it is a must to check it out. I have stumbled onto people who are using ffmpeg and a filter to do motion detection via scripts, but people who use it say the detection is not great, and the Motion project has far better motion detection than that hack method.
What may interest you is that the IpCamera binding now has ffmpeg features built in, which allow you to cast your camera feeds to Chromecasts with no scripts and no extra servers, and it should work on all platforms, not just Linux.
Hi all,
My V2 cam is working fine and the integration in OH2 is working too. So far, so good.
The motion detection via MQTT is working and I got a picture in Paper UI. Thanks!
Now I’m trying to deactivate the motion detection from a rule, but I have found no command to deactivate it.
I have a thing like this:
Type switch : motion "Motion Detection" [ stateTopic="zennix/ipc1/motion/detection", commandTopic="zennix/ipc1/motion/detection/set" ]
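Assuming that thing channel, a sketch of the matching item and rule could look like the following. The broker/thing IDs and the `Presence` trigger item are placeholders, and it assumes the camera accepts ON/OFF payloads on the .../set topic (as Dafang Hacks does by default):

```
// mqtt.items
Switch Ipc1_MotionDetection "Motion Detection" { channel="mqtt:topic:mybroker:ipc1:motion" }
```

```
// motion.rules — disable detection when somebody comes home (example trigger)
rule "Disable motion detection when home"
when
    Item Presence changed to ON
then
    Ipc1_MotionDetection.sendCommand(OFF)
end
```

Sending OFF publishes the off payload to the commandTopic, which the camera's MQTT control script picks up; check `mosquitto_sub -t "zennix/ipc1/#" -v` to confirm the message actually arrives.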