I'm new to the project and was hoping for some advice.
I'm trying to use aiortc to let two clients (browser webcams) connect to the server, and have the server broadcast both video streams (and it will only ever be two) to each user, so they can see each other. Which example on GitHub would best help me with this? I was thinking about merging webcam and apprtc. I'll be honest, though: I'm unsure how to convert the image receiver into a media player. Any advice would be appreciated.
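For a two-peer broadcast like this, the server-side core is just forwarding each client's incoming track to the other client's peer connection. A minimal sketch of that relay logic, assuming duck-typed peer-connection objects with an `addTrack()` method (with aiortc these would be `RTCPeerConnection` instances, and you would normally wrap the incoming track with `MediaRelay().subscribe(track)` before re-adding it; `relay_between` is a hypothetical helper, not part of aiortc):

```python
def relay_between(pcs):
    """Given {client_id: peer_connection}, build a per-client "track"
    handler that attaches each incoming track to every OTHER client's
    peer connection, so the two clients see each other."""
    def make_handler(sender_id):
        def on_track(track):
            for other_id, other_pc in pcs.items():
                if other_id != sender_id:
                    other_pc.addTrack(track)
        return on_track
    # In aiortc you would register each handler with @pc.on("track").
    return {client_id: make_handler(client_id) for client_id in pcs}
```

With real `RTCPeerConnection`s, each returned handler gets registered via `pc.on("track", handler)`, and signaling (offer/answer exchange) happens exactly as in the server example you mentioned.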
recv, right?), but how can I do it on the client?
In my @pc.on("track") handlers, I check whether the other track has been initialized. They fire sequentially, so one of them must see both initialized, and from there I set up a function that processes them in a loop.
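That gating pattern can be sketched as below, assuming aiortc-style tracks whose `recv()` coroutine yields the next frame (`tracks`, `processed`, and `process_both` are illustrative names, not aiortc API):

```python
import asyncio

tracks = {"audio": None, "video": None}
processed = []  # (audio_frame, video_frame) pairs, just to make the loop observable

def on_track(track):
    # Called once per incoming track; register with @pc.on("track").
    tracks[track.kind] = track
    # The handlers fire one after another, so whichever runs second
    # sees both slots filled and kicks off the processing loop.
    if all(tracks.values()):
        asyncio.ensure_future(process_both(tracks["audio"], tracks["video"]))

async def process_both(audio, video):
    while True:
        a = await audio.recv()
        v = await video.recv()
        processed.append((a, v))  # real code would combine/encode here
```

One caveat with pulling frames in lockstep like this: audio and video arrive at different rates, so production code usually consumes each track in its own task and synchronizes on timestamps instead.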
Hi, I'm trying to process a live video stream from a WebRTC site with OpenCV and Python: https://webrtc-streamer.herokuapp.com/webrtcstreamer.html?video=Bahia&options=rtptransport%3Dtcp%26timeout%3D60& But I can't open the video with cv2.VideoCapture(URL).
How could I open the video in a way that's compatible with OpenCV? I'd appreciate it if you could demonstrate with code.
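cv2.VideoCapture can't open a WebRTC URL directly, because WebRTC requires a signaling handshake before any media flows; the usual route is to receive the stream with a WebRTC client such as aiortc and convert each decoded frame (an `av.VideoFrame` in aiortc) to a BGR numpy array for OpenCV. A sketch of just the conversion step, assuming `track` is an already-negotiated video track (`frames_for_opencv` is a hypothetical helper):

```python
import numpy as np

async def frames_for_opencv(track):
    """Yield BGR numpy arrays (HxWx3 uint8, OpenCV's layout) from an
    aiortc-style video track whose recv() returns decoded frames."""
    while True:
        frame = await track.recv()              # an av.VideoFrame in aiortc
        yield frame.to_ndarray(format="bgr24")  # what cv2 functions expect
```

Each yielded array can go straight into `cv2.imshow` or any other OpenCV processing; the hard part that remains is the signaling exchange with the site, which the aiortc examples cover.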
Hi everyone. I know this might not be the correct channel for this, but just in case: has anyone had experience with accepting multiple WebRTC sources, encoding them into a single tiled video output, and then republishing via RTMP to Twitch/YouTube/...?
If I understand the workflow correctly, I would first establish a WebRTC connection between the server and client applications, use ffmpeg (?) to encode the video, and then output to an RTMP endpoint. In theory it seems simple; I'm just checking whether anyone has additional knowledge and is willing to share. Especially with ffmpeg I'm not entirely clear how I would arrange the tiles, or allow the server to receive instructions on how to tile the video when new participants are added, etc.
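On the tiling question: ffmpeg can do the compositing itself with its hstack/vstack/xstack filters, but since those layouts are fixed per invocation, a dynamic participant count is often easier if the server composites decoded frames itself and pipes the composite to one long-running ffmpeg process writing to the RTMP endpoint. A sketch of that compositing step, assuming equally sized decoded frames as numpy arrays (`tile_frames` is a hypothetical helper, not from any library):

```python
import math
import numpy as np

def tile_frames(frames, tile_h, tile_w):
    """Arrange equally sized HxWx3 uint8 frames into a near-square grid,
    left-to-right, top-to-bottom; empty cells stay black."""
    cols = math.ceil(math.sqrt(len(frames)))
    rows = math.ceil(len(frames) / cols)
    grid = np.zeros((rows * tile_h, cols * tile_w, 3), dtype=np.uint8)
    for i, frame in enumerate(frames):
        r, c = divmod(i, cols)
        grid[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w] = frame
    return grid
```

When a participant joins or leaves, you just rebuild the grid from the new frame list on the next tick; with ffmpeg-side xstack tiling you would instead have to tear down and relaunch the filter graph with a new layout string.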
I hope you are well.
I have an issue with audio tracks where some audio frames/packets go missing and are never sent to the user or the encoder.
More details in the GitHub issue:
Any help is appreciated.