Hi, I spent my afternoon on this trying to understand…
I followed the first post step by step,
read multiple articles about ffmpeg and ffserver…
And I still can't view my webcam in my sitemap in H264.
If someone can help me, I'll appreciate it…
Openhabian
RPI3
My webcam model is a Foscam FI9826P V2
The HTTP port is 4143
Webcam IP: 192.168.1.41:4143
ONVIF port: 1024
RTSP format: rtsp://USER:PWD@IP:PORT/videoSub
The sub stream is configured for H264
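Before adding ffserver to the mix, it is worth checking that ffmpeg can read the sub stream at all. A sketch using the details above (USER/PWD are placeholders; on some Foscam models the RTSP port differs from the HTTP port, so adjust if needed):

```shell
# Probe the RTSP sub stream; it should report an h264 video stream if the
# URL and credentials are right. TCP transport tends to be more reliable
# than the UDP default on these cameras.
ffprobe -rtsp_transport tcp \
    "rtsp://USER:PWD@192.168.1.41:4143/videoSub"
```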
My ffserver.conf:
HTTPPort 8090
BindAddress 0.0.0.0
MaxClients 10
MaxBandwidth 1000
<Feed camera1.ffm>
File /tmp/camera1.ffm
ACL allow localhost
#uncomment if you want an IP range to be able to access the stream
#ACL allow 192.168.0.0 192.168.255.255
</Feed>
<Stream camera1.mjpg>
Feed camera1.ffm
Format mpjpeg
VideoFrameRate 10
VideoSize 640x360
VideoBitRate 2048
VideoIntraOnly
NoAudio
Strict -1
</Stream>
<Stream stat.html>
Format status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255 127.0.0.1
</Stream>
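For reference, the config above only defines the server side; ffserver serves nothing until a separate ffmpeg process pushes video into the <Feed>. A sketch of the pair of commands, using the RTSP details from the post (USER/PWD are placeholders):

```shell
# 1) Start ffserver with the config above
ffserver -f /etc/ffserver.conf

# 2) In another shell, push the camera's RTSP sub stream into the feed;
#    viewers can then open http://<server>:8090/camera1.mjpg
ffmpeg -rtsp_transport tcp \
    -i "rtsp://USER:PWD@192.168.1.41:4143/videoSub" \
    http://localhost:8090/camera1.ffm
```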
I have had this method working with Hikvision as well, so it works for many brands. What you quoted was from the example I posted on how to get Dahua and Amcrest working. Now that I have added better methods into the ipcamera binding, I would use that for cameras which have HTTP URLs for the streams. If your camera only has RTSP stream access, then this thread would be the way, unless you set up another server to do the conversions using Blue Iris or the Motion project on a Pi or similar.
Please help me with ffserver and ffmpeg.
I tried what you wrote earlier, but I don't understand how it worked for you.
First, without HTTPPort I couldn't start ffserver.
I used the config from the first post, and camera1.ffm doesn't exist: it returns 404,
but camera1.mjpg returns 200.
When I point ffmpeg at http://ip:8090/camera1.mjpg I get:
av_interleaved_write_frame(): Broken pipe
Error writing trailer of http://…/camera1.mjpg: Broken pipe
What did I do wrong?
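For anyone hitting the same error: one likely cause is pointing ffmpeg at the stream URL instead of the feed. camera1.mjpg is what ffserver serves to viewers; ffmpeg has to write into the <Feed> URL, camera1.ffm, and writing to the stream URL typically ends in exactly this kind of broken pipe. A sketch with the placeholder RTSP URL from above:

```shell
# The ffmpeg output target must be the <Feed> (camera1.ffm),
# not the stream it produces (camera1.mjpg):
ffmpeg -rtsp_transport tcp -i "rtsp://USER:PWD@IP:PORT/videoSub" \
    http://localhost:8090/camera1.ffm
```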
UPDATE:
I don’t know what is different, but this is working now:
First of all, I'd like to thank you for pointing me to this solution. I want to share the solution I “created”, which someone might find useful, especially regarding high CPU usage.
Even though it is not an issue in my setup (my HPE MicroServer can handle the streaming with no problem), I believe it is a waste of resources when 30% of the CPU is used while nobody is watching the stream.
However one question:
What happens if you have multiple devices? This will run all the time, consuming (I think) a measurable amount of CPU. Has anyone tried using this with more than one camera on an RPi?
1. Buy cameras that already create MJPEG streams; the IpCamera binding has brands listed that support this. You can then have multiple cameras with zero CPU load.
2. Set up a dedicated video server to do the conversion and provide other great features and nice ways to review the footage. The cost of the server and the ongoing electricity can add up, making option 1 the better choice for some people.
3. Use HLS streams instead; see the IpCamera binding, as it can do this from an RTSP source and the readme file covers it.
4. Buy a new server for openHAB with Intel Quick Sync, which allows ffmpeg to do some of the processing with hardware acceleration.
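For a rough idea of what the HLS option does under the hood (the binding handles this for you), ffmpeg can segment an RTSP source into HLS without transcoding. A sketch with a placeholder URL and output path:

```shell
# Repackage the H264 RTSP stream as HLS; -c:v copy means no transcoding,
# so CPU load stays minimal. Serve the .m3u8 from any web server.
ffmpeg -rtsp_transport tcp -i "rtsp://USER:PWD@IP:PORT/videoSub" \
    -c:v copy -an \
    -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments \
    /var/www/html/camera1.m3u8
```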
Thanks I will first try and see how this works.
Basically I don't want to invest too much into this (which is why I bought a cheap Xiaomi camera, whose firmware I will hack, since it only supports RTSP). These will only be indoor cameras (just a few, three at most), and I don't want to record anything, just to have a view when I'm away.
This is what I wanted to ask, but it seems that there is no way to detect when a specific sitemap/group is viewed in BasicUI…
Yes, it is installed when you install ffmpeg, but only on Linux; it was never done for the other platforms. Motion, Shinobi and a few others can also do this and more.
I successfully installed ffmpeg, but ffserver is not there. Could it be that ffserver has been removed from the package in the current version? If so, is there an alternative way to get your approach working?
That is possible, as it was deprecated and then removed in FFmpeg 4.0, and the ffmpeg website recommends other ways; just google it or use the packages I mentioned in my last reply.
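For what it's worth, one ffserver-free option with plain ffmpeg: its http output protocol has a listen mode, which turns ffmpeg itself into a minimal single-client HTTP server. A sketch (placeholder RTSP URL; it serves only one viewer at a time, so it does not fully replace ffserver):

```shell
# Transcode RTSP to MJPEG and serve it directly over HTTP.
# 'listen 1' makes ffmpeg accept one incoming HTTP client itself.
ffmpeg -rtsp_transport tcp -i "rtsp://USER:PWD@IP:PORT/videoSub" \
    -q:v 5 -f mpjpeg \
    -listen 1 http://0.0.0.0:8090/stream.mjpg
```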
However, give me a few days and I will see how easy it is to code into the ipcamera binding.
Try the latest ipcamera binding; I put in a bit of code that needs testing to look for bugs, but it is working for me if you only open it once. The ability to open the stream on multiple devices at the same time will be looked at over the next few days. If you leave the normal stream source field empty, it will now use RTSP and ffmpeg to create the MJPEG.
@rkrisi
If the camera does not have bugs in its ONVIF support, it should work and even get auto-found by the binding now. If the camera breaches ONVIF requirements or does not report a snapshot URL, it can still be used by setting it up manually as an HTTPONLY thing type.
@z4nD4R
Thanks for sharing your method; it is really good to have multiple ways to choose from in the forum for people to find. The binding can now create the stream on demand without the user needing to push a button: if the UI is open and needs the stream, the CPU load is there creating it. To disable it the way your method does when the UI is left open, you would need a String item that a button can clear or fill, used in the Video item. However, it is far easier to just close the UI, and the binding stops.
I'm working with the IpCamera binding now and so far I have a couple of problems. First of all, I don't get MJPEG (even though it is a Foscam camera; the supplier informed me that MJPEG is no longer supported, which looks true, as I'm not able to turn on the MJPEG stream type via CGI command). Will the binding perform the conversion for me in this case? (I have set up the FFMPEG input and output.)
Second issue is the zoommotion detection looks like not working at all, any suggestion?
Sorry, I wrote it wrongly without commas; the second issue is that features like zoom, motion detection, enable/disable motion detection, pan, tilt and infra LED control aren't working.
The issue is that my Foscam does not support MJPEG (as I wrote above), and I'd love to use RTSP to MJPEG conversion on the fly when the UI is opened.
I will try to use suggested method with snapshots.mjpeg feature.
It would be a good idea to test whether your camera supports the API by entering the URLs into any browser. Also contact Foscam support, as it may be as simple as ticking a box to enable the CGI/API, but as I don't own a Foscam I do not know.
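If a shell is handier than a browser, the same check can be scripted. This assumes the Foscam HD CGI API (the CGIProxy.fcgi endpoint and snapPicture2 command), which can vary by model and firmware, so treat the URL as an example to adapt:

```shell
# USER/PWD are placeholders. Getting a JPEG back means the CGI API works;
# an error page or 404 suggests it is disabled or the camera uses another API.
curl -v "http://192.168.1.41:4143/cgi-bin/CGIProxy.fcgi?cmd=snapPicture2&usr=USER&pwd=PWD" \
    -o snapshot.jpg
```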