But when I try to see the stream from another PC (same network), I can't see it.
Can you help me, please?
Prabin Kumar Das
What's showing in the browser?
Nothing. When I try to connect from another computer it seems like it tries to load; it's like when you click refresh and it loads infinitely.
And a note, I don't know if it's important or not: the Jetson Nano has no WiFi, it connects wired.
New to motionEye (Ubuntu, Foscam C1 cameras): how can I get audio recorded? Video looks great, but no audio. Device feed: rtsp://ip_address:554/videoMain - Thanks
Prabin Kumar Das
@Kivanc-IOT as per my knowledge, motionEye transfers the video stream over the router's local network. It needs to be connected to WiFi...
@JesseWebDotCom as far as I know there is no audio recording feature... and also the Pi doesn't have a mic.
My Hikvision camera has an audio in and out feature; is there any way I can also record audio in motionEye?
@Kivanc-IOT check the streaming port of your camera and try it with the VLC media player application.
@ccrisan are the monitor and action button features available in motionEye 0.42 with motion 4.2.2 on Ubuntu 18.10 (OpenMediaVault 5.0)?
I am using it via Docker.
Using motioneye:dev-amd64.
I am completely new; can you please help me out with my Hikvision PTZ cameras? Whatever I try, I am unable to add those features.
Can anyone please help me out?
@Prabindas001 it is connected via cable, so is that not valid?
@Kivanc-IOT what about my issue? Is there any solution?
I'd love to monitor my RPi motionEye cameras with Nagios through SNMP. Still no news on building SNMP into the default distro, so I don't have to rebuild them all on Raspbian?
Hi, I am new on GitHub. Newbie question: did anyone actually manage to have the same cam provide motion detection with motionEye AND stream on the net? I get either fluid integration and streaming on the Lovelace Home Assistant dashboard (ESPHome integration), or motion detection but no fluid streaming through Lovelace. I have been struggling with this for several days and would like to know if it is even possible. Thanks for your help...
Hey guys, I just ran the migrateconf script on my config files, and now when I restart motionEye (Docker) all of my cams are gone. Nothing in the logs... Any ideas?
Nothing is written to the log specified in motioneye.conf, even when the logging level is debug.
NVM. The issue was that I was using an old Docker tag, so it was an old version of motionEye.
Hi, does anyone use an RPi 4 with motionEye to downstream an IP camera? I have an RPi 3 with the same setup and the stream was fast, but with an RPi 4 the stream has a 10-20 s lag. Does anyone have the same problem?
Hi everyone, I am looking for a way to take a picture using an external hardware button connected to a GPIO pin. Is this possible, please?
Probably a stupid question; it will be possible, so the right question should be: how do I do it?
I don't see a GPIO input feature in motionEye, but alternatively you can use the Node-RED platform if you are familiar with it: trigger a push button over GPIO and use the node-red-contrib-camerapi node to take a picture and store it in the /home/pi/Pictures directory. However, you can use the MMAL camera only under one condition: either motionEye or Node-RED uses the camera; they cannot both use it simultaneously.
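If you'd rather stay in plain Python than set up Node-RED, a minimal sketch along these lines should also work. Everything hardware-specific here is an assumption to adapt (BCM pin 17, `raspistill`, the picture directory), and the same caveat applies: motionEye and this script cannot both own the camera at the same time.

```python
# Hedged sketch: take a still when a GPIO push button is pressed.
# Assumes Raspberry Pi OS with the camera enabled, gpiozero installed,
# and raspistill available on the PATH.
import subprocess
from datetime import datetime

try:
    from gpiozero import Button  # hardware library, present on the Pi only
except ImportError:
    Button = None

PICTURE_DIR = "/home/pi/Pictures"  # directory suggested in the chat

def snapshot_command(when, directory=PICTURE_DIR):
    """Build the raspistill command line for a timestamped still."""
    name = when.strftime("%Y-%m-%d_%H-%M-%S") + ".jpg"
    return ["raspistill", "-o", f"{directory}/{name}"]

def take_picture():
    subprocess.run(snapshot_command(datetime.now()), check=True)

if __name__ == "__main__" and Button is not None:
    button = Button(17)               # BCM pin 17 is an assumption; match your wiring
    button.when_pressed = take_picture
    from signal import pause
    pause()                           # block and wait for button presses
```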
Jakob Sultan Ericsson
Quick question: is it possible to build a direct link to a captured image? I trigger some scripts and get the image path (/var/lib/motioneye/Camera1/2020-05-28/09-08-25.jpg), but I would like to create a nice clickable link in my mail.
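A sketch of the string manipulation for building such a link. Note the hub address and the `/picture/<id>/download/...` route are guesses at motionEye's media URL scheme, and motionEye normally protects media URLs with signed query parameters (`_username`/`_signature`), which this sketch omits, so verify against your own instance first.

```python
# Hypothetical sketch: map an on-disk capture path to a clickable HTML link.
# BASE_URL and the download route are assumptions, not a documented API.
BASE_URL = "http://motioneye.local:8765"   # your hub's address (assumption)
MEDIA_ROOT = "/var/lib/motioneye"

def media_link(path, camera_id=1, base=BASE_URL, root=MEDIA_ROOT):
    """Return an HTML anchor for the capture at `path`."""
    # "/var/lib/motioneye/Camera1/2020-05-28/09-08-25.jpg"
    #   -> "2020-05-28/09-08-25.jpg" (drop the root and the camera folder)
    relative = path[len(root):].lstrip("/").split("/", 1)[1]
    url = f"{base}/picture/{camera_id}/download/{relative}"
    label = path.rsplit("/", 1)[-1]
    return f'<a href="{url}">{label}</a>'
```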
Hey, is there someone here who can help me? I am setting up my motionEye camera system. I am using a Raspberry Pi Zero W set to Fast Network Camera, and I have a hub where this camera is added, but I can't seem to get it to save to my NFS share... Could somebody help me? I only get "custom path" in the storage options.
Tomás Rojas Castiglione
Hi, I'm having issues and I don't know if it is worth opening an issue on the repo, so I'll ask here first. Here is the thing: I am using the latest motionEyeOS, and when I reboot, I cannot reach the web UI. Apparently it is not connected to the WiFi anymore.
Good day. I want to run motionEye as a hub only, with no camera physically attached. Is that possible? I have tried a few times, and the Pi I'm running it on won't boot without a camera attached. Thanks for any help.
Question for the group: I'm running motionEye on a Pi Zero which, I know from two tests, is unable to update automatically; it hangs after the auto-update attempt, responding to pings but nothing else until power-cycled. It will, however, run after a power cycle, and I can SSH in. It does not run cleanly, though; for example, some of the streaming config options are missing, and it can reboot on its own. Any suggestions as to where to look, and in which log files, for clues about the auto-update problem?
I should add that this unit is not accessible due to COVID lockdowns, which is why, after reading the FAQ, I have not been able to follow the standard procedure of re-flashing the SD card. An identical unit in the same property was able to update and run OK, so I suspect trouble somewhere with the SD card.
@bbanerd Can you tell me how he did it, or do you know if there's something specific needed to make that work? I'm trying to get it running, but the system doesn't seem to boot without a camera attached. Is there a separate download I need to look for, or something?
Never mind, I'm an idiot. I was trying to use the Pi Zero image on a Pi 3.
I think I've got it now.
I have installed it on Docker using this command; however, it is still storing to the /var and /etc of my local machine:
docker run --name="motioneye" \
    -p 8765:8765 \
    --hostname="motioneye" \
    -v /etc/localtime:/etc/localtime:ro \
    -v /srv/dev-disk-by-label-Data/Data/motioneye/etc/:/etc/motioneye \
    -v /srv/dev-disk-by-label-Data/Data/motioneye/lib/:/var/lib/motioneye \
    --restart="always" \
    --detach=true \
    ccrisan/motioneye:master-armhf
Yo guys, I was wondering if there is an API I could use to make an Android and iOS app for motionEye.
I know I can start and stop recording using the web control interface. Is it possible to trigger a still frame shot using it as well?
Călin, I just want to THANK you for all the work you put into this project. It's an amazing piece of software! Keep it up.
I figured out how to trigger stills from the web control API as well. This is a wonderful project and it just keeps getting better...
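For anyone else looking: the "web control interface" in the messages above is motion's HTTP webcontrol API, which exposes per-camera action endpoints. A sketch, assuming the webcontrol port (8080 by default in motion) is enabled and the camera is number 1; the actual requests are left commented so this only builds the URLs.

```python
# Sketch of motion's webcontrol action URLs; host and port are assumptions.
import urllib.request

WEBCONTROL = "http://localhost:8080"

def action_url(camera, action, base=WEBCONTROL):
    """Build a motion webcontrol URL, e.g. /1/action/snapshot."""
    return f"{base}/{camera}/action/{action}"

# To actually fire these from a script (uncomment on a host running motion):
# urllib.request.urlopen(action_url(1, "snapshot"))          # grab a still
# urllib.request.urlopen(f"{WEBCONTROL}/1/detection/pause")  # pause detection
# urllib.request.urlopen(f"{WEBCONTROL}/1/detection/start")  # resume detection
```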
Hi all, is there a way to pass data when calling a webhook when motion occurs? E.g., pass the camera name, date, etc.?
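motion's conversion specifiers are the usual way to do this: the web hook URL (or command) is run through motion's text substitution, so placeholders like `%$` (camera name), `%Y`/`%m`/`%d` (date), and `%v` (event number) expand at event time. Which specifiers survive depends on your motion/motionEye versions, and the Home Assistant endpoint below is purely hypothetical, so treat this as a hedged example to test:

```
http://ha.local:8123/api/webhook/motion_event?camera=%$&date=%Y-%m-%d&time=%H-%M-%S&event=%v
```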
What are you trying to communicate with? Depending on what the project is, there may be a better option than webhooks...
Essentially, I would like to get some kind of person detection or facial detection solution running, but I'm not sure whether there are other solutions out there that do that with motionEye or not.
Are you using any kind of home automation? You gotta paint the big picture if you want suggestions...
I am currently running Home Assistant as my home automation software. That runs on machine A, which is not super powerful. Machine B will house motionEye and is powerful. The idea would be: when the system is armed and motion is detected (preferably an actual person and not a false positive), that would signal Home Assistant to alert the users. I would also want the recorded videos to be uploaded to the cloud (motionEye does this with Google Drive). From what I've experienced, motionEye does a rather good job, but since the cameras are outdoors, weather, wildlife, etc. can trip the motion detection. I am running motionEye in Docker and would like the second tier of detection (person/facial detection) to also be a Docker container. I've tried ZoneMinder and Shinobi and wasn't impressed with the amount of setup for what you get, hence why I am circling back to motionEye.
OK, now we're getting somewhere. First, the easiest way to communicate with HA from motionEye is MQTT. If you're using ONVIF cameras, look into ONVIF-motion-events: it's a little routine that runs in Docker, monitors the cameras for motion events, and takes motion detection off of motionEye. Saves a lot of processor time. Then, for object detection, you have multiple options. There's Frigate, which is apparently able to process a live feed. I am using DOODS, as I didn't seem to catch as much as I figured I would with Frigate; I haven't played with it again lately. Also look into another program that runs alongside HA, called AppDaemon. It allows you to write your automations in Python instead of HA's scripting language. Way more powerful.
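As a sketch of the MQTT side of that advice: build a small JSON payload per motion event and publish it to a topic an HA MQTT binary sensor listens on. The broker address, topic name, and payload shape are all assumptions, and the publish call (which needs paho-mqtt installed on the motionEye host) is left commented.

```python
# Build a motion-event payload for Home Assistant over MQTT.
import json
from datetime import datetime, timezone

def motion_payload(camera, state="on"):
    """JSON payload a Home Assistant MQTT sensor could consume."""
    return json.dumps({
        "camera": camera,
        "state": state,
        "time": datetime.now(timezone.utc).isoformat(),
    })

# With paho-mqtt installed, motionEye's "run a command" hook could call:
# import paho.mqtt.publish as publish
# publish.single(
#     "motioneye/Camera1/motion",   # topic: an assumption
#     motion_payload("Camera1"),
#     hostname="ha.local",          # broker address: an assumption
# )
```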
Thank you @MYeager1967, your suggestions are super helpful! I think I am going to go the Frigate route and set it up so it monitors the camera feeds, and if it detects a person, it signals motionEye to start and stop recording (using Home Assistant as the in-between). Thank you!!