vidgear application acts as a client and connects to an RTMP server like YouTube/Twitch/NGINX.
YouTube is currently supported; I'll add support for other streams too in the near future.
the other is where the vidgear application acts as a server with an open port and RTMP clients can connect and send/receive video.
it seems like the case where the vidgear application connects as a client and ingests video is already covered.
I think this behavior is similar to Vidgear's high-performance NetGear API. Kindly go through it, and also see its wiki. This API is exclusively designed to transfer video frames synchronously and asynchronously between interconnecting systems over the network in real-time.
I'm talking about streaming video TO YouTube so that people can watch it on YouTube.
oh, ok, we can work on that.
To achieve interoperability with other applications like VLC or OBS, another protocol like RTMP or HLS is required.
Yes, that's a disadvantage of using the NetGear API.
RTMP isn't the best, but it's somehow become the standard, and it works.
I may work on the Streaming Feature in a couple of days, depending on my schedule. But I may have to choose the best among HLS, HSS, DASH, and now RTMP for vidgear, based on real-time performance and supported features.
`TypeError: my_frame_generator() missing 1 required positional argument: 'self'`. Does someone know what this could be due to? Ty!
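A minimal sketch of the likely cause (the class and generator names here are hypothetical, not from vidgear): this `TypeError` typically appears when a generator method defined inside a class is called through the class itself instead of through an instance, so nothing fills the `self` parameter.

```python
# Hypothetical illustration: `my_frame_generator` is an instance method,
# so calling it via the class leaves `self` unfilled and raises TypeError.

class Streamer:
    def my_frame_generator(self):
        # yields dummy "frames" for illustration
        for i in range(3):
            yield i

# Broken: calling through the class gives no value for `self`
try:
    Streamer.my_frame_generator()  # raises immediately, at call time
except TypeError as e:
    print(e)  # ...missing 1 required positional argument: 'self'

# Fixed: call through an instance (or declare it a @staticmethod)
frames = list(Streamer().my_frame_generator())
print(frames)  # [0, 1, 2]
```

The error is raised when the function is *called*, before any frame is produced, which is why it shows up as soon as the generator is registered.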
Should I maybe use instead the regular NetGear with the bidirectional mode?
Yes, no doubt NetGear_Async is performance-wise superior, but it provides NO support for any NetGear exclusive modes (including bi-directional mode) yet, as mentioned in its wiki. But the performance difference is not that huge, and I'm bringing performance updates for the NetGear API in an upcoming PR too, so stay tuned.
I have been trying to merge the client code and the server code into the same python file, but I don't manage to get it to work (example here).
That's not the right way to do it. You should take ideas from its test file for correctly merging both server and client into one.
`return_data`. According to the documentation, it enables sending data (of any datatype). However, while trying to send the frames back and forth (as numpy arrays), I keep getting a `TypeError`, as `object of type ndarray is not JSON serializable`. Am I missing something? I'm trying to work around that and somehow encode and decode the numpy arrays into another datatype.
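One common workaround for that serialization error (not vidgear-specific; the helper names here are made up for illustration) is to convert the array to JSON-friendly primitives before sending, carrying the dtype along so the receiver can rebuild it:

```python
import json
import numpy as np

# JSON can't encode ndarrays directly, so package the array as a plain
# nested list plus its dtype, and rebuild it on the receiving side.

def encode_frame(frame):
    # JSON-friendly payload: nested lists + dtype string
    return {"data": frame.tolist(), "dtype": str(frame.dtype)}

def decode_frame(payload):
    # reconstruct the ndarray from the received payload
    return np.asarray(payload["data"], dtype=payload["dtype"])

frame = np.arange(6, dtype=np.uint8).reshape(2, 3)
payload = json.dumps(encode_frame(frame))      # now serializes cleanly
restored = decode_frame(json.loads(payload))
print(np.array_equal(frame, restored))  # True
```

`tolist()` is simple but inflates large frames considerably; for real video frames, compressing to JPEG bytes before sending is usually the more practical route.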
`port` values at the opposite end for sending frames bi-directionally.
`NDarray` datatype for bi-directional mode in the NetGear API, in an upcoming PR, but you'll have to wait for it.
`development` branch. Kindly see its wiki example. Good luck!
`pattern=1`). As I mentioned before, it might be a problem with my network; I will try using my work network, probably tomorrow, and let you know.
However, it's far from ideal.
@snavas can you elaborate?
In the following days, I'll probably try to implement the asynchronous data transfer running a server and a client on both machines.
Yes, it is possible to implement by oneself, but if you want this feature in vidgear, then you'll have to wait a long time. My other pending project(s) are my priority right now, and thereby I can only work on this project in my free time. But if you need this feature in time, then consider supporting this project through a helpful donation.
Regarding the frame compression you mentioned before: the way it is implemented in the NetGear API, it would only work for the server frames in the case of bi-directional frame transmission, right?
Yup, it's true.
Otherwise, I could try combining compression and reducing the frame size.
Reducer can work bi-directionally, but compression works one way only for now. You can implement similar behavior for the client's end too in your script; it's easy. Adding this feature internally might make things complicated.
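The idea of reducing frame size at the client's end before sending can be sketched like this (plain numpy striding stands in for vidgear's own reducer here, purely for illustration):

```python
import numpy as np

# Stand-in sketch for client-side frame-size reduction before sending
# frames back. vidgear's reducer resizes by a percentage; simple pixel
# striding is used here just to show the shape of the idea.

def shrink(frame, factor=2):
    # keep every `factor`-th pixel in height and width
    return frame[::factor, ::factor]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy VGA frame
small = shrink(frame)
print(small.shape)  # (240, 320, 3): quarter the pixels to transmit
```

Halving each spatial dimension cuts the payload to a quarter, which is why combining reduction with compression helps the framerate so much on both ends.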
Actually, I could also look into some opencv functions for reducing frame size as well.
Nope, it's the best and only ideal way to reduce size, and I've been working with OpenCV for almost 4 years now.
Thank you @abhiTronix for clarifying!
However, it's far from ideal.
@snavas can you elaborate?
I meant, I need to reduce the frame size a lot in order to get decent framerate in both ends.
However, this afternoon I've done further testing. I have been setting up a NetGear_Async client and server pair on each PC. Together with frame-size reduction and the NetGear Frame Encoding/Decoding Compression, I have been getting quite acceptable results!
You can implement similar behavior for client's end too in your script, it's easy.
I might give a chance to this and try to implement it tomorrow though!
`multiclient_mode` attribute for enabling this mode easily.
`zmq.PUB`/`zmq.SUB` patterns in this mode.
`PAIR` messaging patterns internally.
`request_timeout` (in seconds) for handling polling.
`DONTWAIT` flag for interruption-free data receiving.
`Reducer()` function in `Helper.py` to aid reducing frame-size on-the-go for more performance.
`recv()` function at the client's end to reduce system load.
`mkdocs.yml` for Mkdocs with relevant data.
`docs` folder to handle markdown pages and their assets.
(`.md`) to the `docs` folder, which are carefully crafted documents based on the previous wiki's docs, and some completely new additions.
Hi Abhishek! Your library is absolutely amazing, however, there is a feature that I think would be easy for you to implement that could make it even more powerful. This came up on stackoverflow a few days ago here. This person is pointing out that when one is trying to stream to an RTMP server using ffmpeg, the location of the server is put in the command where the name of the output file normally goes. Example:
```
ffmpeg -re -i file.flv -c copy -f flv rtmp://live.twitch.tv/app/<stream key>
```
I got this example from here.
It would be amazing if we could provide RTMP addresses instead of output file paths with vidgear.
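The request boils down to this: when piping frames to FFmpeg, the RTMP URL simply takes the place of the output filename. A sketch of what such a command might look like when built programmatically (the URL, stream key, and helper name below are placeholders, not an existing vidgear API):

```python
# Hypothetical sketch: build an FFmpeg command that reads raw frames
# from stdin and pushes them to an RTMP server. The final argument,
# where an output file path would normally go, is the RTMP URL.

def build_rtmp_command(rtmp_url, width=640, height=480, fps=30):
    return [
        "ffmpeg",
        "-f", "rawvideo",           # raw BGR frames arrive on stdin
        "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}",
        "-r", str(fps),
        "-i", "-",                  # "-" means read input from stdin
        "-c:v", "libx264",
        "-f", "flv",                # RTMP expects FLV muxing
        rtmp_url,                   # goes where the output file would
    ]

cmd = build_rtmp_command("rtmp://live.twitch.tv/app/STREAM_KEY")
print(cmd[-1])  # rtmp://live.twitch.tv/app/STREAM_KEY
```

So the feature would mostly mean letting the output argument pass through unchanged when it looks like a URL, rather than treating it as a local file path.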
Thanks! - Devon
Hi, I got a bit of a question/issue. I was trying to encode a ~70-frame video with a low resolution, but kept on getting empty output files. I had a strong suspicion that FFmpeg was terminated early, since in some other cases I ended up with no chapters in a video that previously had them, a typical issue from FFmpeg getting closed early, at least per Google. As such, I had a look at the source and I saw the following in the
`close()` method of:

```python
if self.__output_parameters and "-i" in self.__output_parameters:
    self.__process.terminate()
```
While I may wildly guess it's there to prevent wrongfully encoding more than the duration of frames supplied by the user, in this case it seems to lead to the last buffer of frames getting dropped and FFmpeg exiting before it's supposed to, so there won't be any output, since all the frames were in that buffer. Commenting out those lines fixed my issue. I'll happily pop over to GitHub and make an issue, but I thought I'd ask here first whether I should consider it a bug, or whether there is some specific reason why it's done this way. Either way, it might be smart to let users prevent killing the process when the `-i` flag is supplied, without editing the vidgear source code.
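The graceful-shutdown alternative to `terminate()` can be sketched with `cat` standing in for FFmpeg (a generic subprocess illustration, not vidgear's actual code): closing stdin signals end-of-input, and `wait()` lets the process flush whatever it buffered before exiting.

```python
import subprocess

# Sketch: `cat` stands in for FFmpeg here. Instead of terminate()-ing
# the process mid-write, close its stdin (EOF = "no more frames") and
# wait for it to exit on its own, so buffered data is fully flushed.

proc = subprocess.Popen(
    ["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE
)
proc.stdin.write(b"last buffered frames")
proc.stdin.close()          # EOF: the process knows input is finished
out = proc.stdout.read()    # everything it received, fully flushed
proc.wait()                 # reap the process after a clean exit
print(out)  # b'last buffered frames'
```

With `terminate()` instead, the process is killed before it can drain its input buffer, which matches the empty-output symptom described above.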