Hey all, I have a short question; I hope you can guide me in the right direction.
I want to implement a camera service in Python that distributes a live stream from a USB camera to other services, over the local network and without an internet connection. Two other services consume the stream: a classification service written in Python, and a front-end that displays the live stream, written in HTML and JavaScript.
You can find the whole question here: https://stackoverflow.com/questions/56582908/webrtc-connection-on-localhost-without-an-internet-connection
// Wait until ICE gathering is complete, so the offer we send
// already contains all candidates (no trickle ICE).
return new Promise(function(resolve) {
    if (pc.iceGatheringState === 'complete') {
        resolve();
    } else {
        function checkState() {
            if (pc.iceGatheringState === 'complete') {
                pc.removeEventListener('icegatheringstatechange', checkState);
                resolve();
            }
        }
        pc.addEventListener('icegatheringstatechange', checkState);
    }
});
Hey guys,
I'm new to the project and was hoping for some advice.
I'm trying to use aiortc to allow two clients (browser webcams) to connect to the server, and have the server broadcast both video streams (there will only ever be two) to each user, so they can see each other. Which example on GitHub would best help me with this? I was thinking about merging webcam & apprtc. I'll be honest, though: I'm unsure how to convert the image receiver to a media player. Any advice would be appreciated.
recv
, right?), but how can I do it on the client?
In the @pc.on("datachannel") and @pc.on("track") handlers, I check whether the other one has been initialized. They happen sequentially, so one of them will see both initialized, and from there I set up a function that processes them in a loop.
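That two-handler handshake can also be sketched with stdlib asyncio primitives: store each callback's result in a Future and start the processing loop once both have resolved, regardless of which callback fires first. The class and method names below are illustrative, not part of aiortc.

```python
# Stdlib sketch: each callback fills a Future; the processing coroutine
# resumes only once *both* the data channel and the track have arrived.
import asyncio


class PeerState:
    """Collects the data channel and the media track, in whatever
    order the "datachannel" and "track" callbacks happen to fire."""

    def __init__(self):
        loop = asyncio.get_running_loop()
        self.channel = loop.create_future()
        self.track = loop.create_future()

    def on_datachannel(self, channel):
        self.channel.set_result(channel)

    def on_track(self, track):
        self.track.set_result(track)

    async def run(self):
        # gather() waits for both futures, then the loop can start.
        channel, track = await asyncio.gather(self.channel, self.track)
        return channel, track


async def demo():
    state = PeerState()
    state.on_track("fake-track")           # order does not matter
    state.on_datachannel("fake-channel")
    return await state.run()
```

This removes the "check if the other one exists" logic from each handler; the coordination lives in one place.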
Hi, I'm trying to process a live WebRTC video transmission with OpenCV and Python from this site: https://webrtc-streamer.herokuapp.com/webrtcstreamer.html?video=Bahia&options=rtptransport%3Dtcp%26timeout%3D60& But I can't open the video with cv2.VideoCapture(url).
How could I open the video in a way that is compatible with OpenCV? I'd appreciate it if you could demonstrate with code.
Hi everyone. I know this might not be the correct channel for this, but just in case: has anyone had experience with accepting multiple WebRTC sources, encoding them into a single tiled video output, and then republishing via RTMP to Twitch/YouTube/...?
If I understand the workflow correctly, I would first establish a WebRTC connection between the server and the client applications, use ffmpeg (?) to encode the video, and then output to an RTMP endpoint. In theory it seems simple; I'm just checking whether anyone has additional knowledge and is willing to share. Especially with ffmpeg I'm not entirely clear how I would arrange the tiles, or how the server could receive instructions on how to tile the video when new participants are added, etc.
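The tiling itself can be delegated to ffmpeg's xstack filter; what changes as participants join is only the layout string you pass to it. A small pure-Python helper (function names and the RTMP URL are illustrative) that builds a two-column xstack layout for n equally sized inputs:

```python
# Builds xstack layout strings for ffmpeg, assuming all inputs have the
# same dimensions. For n=4, cols=2 this yields the standard 2x2 grid
# layout "0_0|w0_0|0_h0|w0_h0".
def xstack_layout(n, cols=2):
    """Return an xstack layout string placing n tiles in a cols-wide grid."""
    tiles = []
    for i in range(n):
        row, col = divmod(i, cols)
        # x offset: sum of widths of the tiles to the left in this row
        x = "0" if col == 0 else "+".join(f"w{c}" for c in range(col))
        # y offset: sum of heights of the first tile of each row above
        y = "0" if row == 0 else "+".join(f"h{r * cols}" for r in range(row))
        tiles.append(f"{x}_{y}")
    return "|".join(tiles)


def ffmpeg_args(n):
    """Assemble the filter/output part of an ffmpeg command for n inputs."""
    return (
        f'-filter_complex "xstack=inputs={n}:layout={xstack_layout(n)}" '
        f"-c:v libx264 -f flv rtmp://live.twitch.tv/app/STREAM_KEY"
    )
```

When a participant joins or leaves, you would restart (or renegotiate) the ffmpeg process with a layout string regenerated for the new input count; STREAM_KEY is a placeholder for your ingest key.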