    Will Price
    av.library_versions gives
    {'libavutil': (56, 38, 100),
     'libavcodec': (58, 65, 103),
     'libavformat': (58, 35, 101),
     'libavdevice': (58, 9, 103),
     'libavfilter': (7, 70, 101),
     'libswscale': (5, 6, 100),
     'libswresample': (3, 6, 100)}
    and ffmpeg
    libavutil      56. 51.100 / 56. 51.100
    libavcodec     58. 91.100 / 58. 91.100
    libavformat    58. 45.100 / 58. 45.100
    libavdevice    58. 10.100 / 58. 10.100
    libavfilter     7. 85.100 /  7. 85.100
    libavresample   4.  0.  0 /  4.  0.  0
    libswscale      5.  7.100 /  5.  7.100
    libswresample   3.  7.100 /  3.  7.100
    libpostproc    55.  7.100 / 55.  7.100
    so it seems PyAV is built against a different version... yet that doesn't explain why I can instantiate a Codec class for 'h264' but not use 'h264' when adding a stream
    Will Price
    Ah, I've found av.codecs_available!
    So h264 is present there
    Will Price
    Very odd, it appears to be an issue caused by importing something else: when I move import av to the top of my script I no longer encounter the issue. I will try to investigate which import causes it
    Will Price
    Ah, it appears decord interferes with it.
    Will Price
    I've filed an issue on PyAV PyAV-Org/PyAV#735 and also on decord dmlc/decord#128
    To fix this, just import av before decord, although hopefully we can get a bug fix in decord and/or av so that they don't conflict with one another

    I see examples of parsing live http streams, but they seem to all follow this pattern:

    url = 'http://website.com/playlist.m3u8'
    input_container = av.open(url)
    for frame in input_container.decode():
        ...  # do something with frame

    I'm just wondering why it is a for loop and not a while loop for something like a live stream that is indefinite? Also, if it has to be a for loop, it does not appear to exit the loop when the stream has ended for me. What could a possible exit condition be, to break out of the loop?

    1 reply
    Mohammad Moallemi

    Hi People,

    I hope you are loud and proud.

    I'm a newbie to PyAV, and I'm using aiortc for a WebRTC media server. In aiortc I have av.VideoFrame objects available for each video frame, and I want to create HLS video segments from the frames in real time.

    As you can see in this project:
    they have used OpenCV video-frame bytes piped to the FFmpeg CLI for HLS streaming.

    My question is, how can I use PyAV for consuming av.VideoFrame objects and extract 2-second video segments consisting of 60 frames for HLS streaming?

    Thanks in advance

    Hieu Trinh
    Hi, I would like to make a video with mostly static images. I was thinking of making a variable-framerate mp4 by setting the right pts for each frame. So far I have encountered what is mentioned in #397 (PyAV-Org/PyAV#397). In the end, is it possible to manipulate pts when encoding? Or do I need to use the ffmpeg C interface for that? (I am using 6.2.0 because I have ffmpeg-3)
    Dear all, has anyone got experience using a pipe with av.open()? I am trying to use a pipe to feed data to an av container. The code is like below:
    r, w = os.pipe()
    fr = os.fdopen(r, 'rb')
    av.open(fr, mode='r')
    I got an exception with a message like "file's name must be str-like".
    Has anyone seen anything like this before? Thanks all
    Mohammad Moallemi

    Hi all,
    I have this issue while passing video/audio frames/packets to container muxer:


    output = av.open(HLS_MANIFEST, options=HLS_OPTS, mode='w', format='hls')
    out_stream_a = output.add_stream('aac')
    out_stream_a.options = {}

    # later, inside the async track handler:
    frame: AudioFrame = await self.track.recv()
    packet = out_stream_a.encode(frame)


    sender(audio) Traceback (most recent call last):
      File "/home/ubuntu/aiortc_tests/venv/lib/python3.6/site-packages/aiortc/rtcrtpsender.py", line 295, in _run_rtp
        payloads, timestamp = await self._next_encoded_frame(codec)
      File "/home/ubuntu/aiortc_tests/venv/lib/python3.6/site-packages/aiortc/rtcrtpsender.py", line 248, in _next_encoded_frame
        frame = await self.__track.recv()
      File "server.py", line 102, in recv
        packet = out_stream_a.encode(frame)
      File "av/stream.pyx", line 155, in av.stream.Stream.encode
      File "av/codec/context.pyx", line 466, in av.codec.context.CodecContext.encode
      File "av/audio/codeccontext.pyx", line 40, in av.audio.codeccontext.AudioCodecContext._prepare_frames_for_encode
      File "av/audio/resampler.pyx", line 122, in av.audio.resampler.AudioResampler.resample
    ValueError: Input frame pts 10637760 != expected 10632960; fix or set to None.
    Igor Rendulic
    Has anyone successfully compiled PyAV in Docker for the arm64v8 platform?
    Rizwan Ishaq
    Hi, I want to use PyAV.
    Suppose I have an image from OpenCV and want to encode it into an H.264 stream,
    so each image from OpenCV becomes an H.264-encoded frame.
    I tried,
    but how do I get the bytes? Without writing to a container, how can I get the H.264-encoded frame from the image?
    Jani Šumak

    Hi guys!

    Before I start: great work. I am working on an ffmpeg API wrapper and did my best with some other libraries, but with PyAV I got my “hello world” (wav to mp3) working in 20 min :)

    I wanted to know if there is any option to pass an opened file to the global av.open method? I am using FastAPI (Starlette) and would like to leverage streaming and async functions as much as possible.

    Currently I save the incoming file to a temp folder, process it, and then create a background task to clean it up. This makes sense for some tasks, but for others it would be nice if I could just pass the file object to av.open.

    Hope this makes sense.


    I need help
    Can someone help me?

    I am trying to make av.AudioFrame.
    I want to construct it from pydub.AudioSegment.

    I found the method av.AudioFrame.from_ndarray, but it raises an AssertionError.

    Antonio Šimunović

    Hello, I could use some help with a remuxing scenario. I'm trying to remux an RTSP H.264 stream from an IP camera into a fragmented MP4 stream, to use it in a live-streaming WebSocket solution. On the client side is the MediaSource object. I'm having trouble getting the bytes from the output buffer: the getvalue() call returns an empty bytes object for all non-keyframe packets.

    Here is the code:

    This is the produced output:

    YES True
    NO False
    NO False
    NO False
    NO False
    NO False
    NO False
    YES True
    NO False

    I expect every call to getvalue() to return a non-empty bytes object, but I only get a non-empty result from BytesIO.getvalue() after muxing a keyframe packet.
    What am I missing?

    1 reply
    Igor Rendulic
    @simunovic-antonio Hey Antonio. I've implemented exactly what you're working on (RTSP stream to MP4 segments). You can check the docs on how to use it here: http://developers.chryscloud.com/edge-proxy/homepage/. Specifically you need to create conf.yaml file under .../data folder with on_disk: true flag turned on. It will also remove old MP4 segments based on your definition. If you're interested in a solution on how to store MP4 segments with PyAv then check this file: https://github.com/chryscloud/video-edge-ai-proxy/blob/master/python/archive.py
    Andre Vallestero
    Does anyone know how I would be able to send decoded frames as y4m format to a piped output? My current method of converting it to a ndarray and writing the bytes to the pipe output produces the following error message from my video encoder (rav1e):
    !! Could not input video. Is it a y4m file?
    Error: Message { msg: "Could not input video. Is it a y4m file?"
    Fredrik Lundkvist
    Hi guys! Is setting the DAR on a stream not supported at the moment? When I set a stream with a defined DAR as the template for my output stream, I get a ValueError specifying an invalid argument upon writing to the output stream.
    Yun-Ta Tsai

    Hi, I am trying to encode frames as H.265 packets to stream over the network. But for some reason, the packet size is always zero (it works with H.264, however). Is this expected? Thanks in advance.

        codec = av.CodecContext.create('hevc', 'w')
        codec.width = 1280
        codec.height = 960
        codec.pix_fmt = 'yuvj420p'
        codec.time_base = Fraction(1, 36)
        yuv_packets = []
        for rgb_frame in rgb_frames:
            # the scale factor was cut off in the original message; 255 assumed
            transposed_rgb_frame = np.rint(
                rgb_frame.transpose(1, 0, 2) * 255).astype(np.uint8)
            frame = av.VideoFrame.from_ndarray(transposed_rgb_frame)
            packets = codec.encode(frame)
            for packet in packets:
                yuv_packets.append(packet)  # loop body was cut off; append assumed

    x265 [info]: build info [Linux][GCC 8.3.1][64 bit] 8bit
    x265 [info]: using cpu capabilities: MMX2 SSE2Fast LZCNT SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    x265 [info]: Main profile, Level-4 (Main tier)
    x265 [info]: Thread pool created using 20 threads
    x265 [info]: Slices                              : 1
    x265 [info]: frame threads / pool features       : 4 / wpp(15 rows)
    x265 [info]: Coding QT: max CU size, min CU size : 64 / 8
    x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
    x265 [info]: ME / range / subpel / merge         : hex / 57 / 2 / 3
    x265 [info]: Keyframe min / max / scenecut / bias: 25 / 250 / 40 / 5.00
    x265 [info]: Lookahead / bframes / badapt        : 20 / 4 / 2
    x265 [info]: b-pyramid / weightp / weightb       : 1 / 1 / 0
    x265 [info]: References / ref-limit  cu / depth  : 3 / off / on
    x265 [info]: AQ: mode / str / qg-size / cu-tree  : 2 / 1.0 / 32 / 1
    x265 [info]: Rate Control / qCompress            : CRF-28.0 / 0.60
    x265 [info]: tools: rd=3 psy-rd=2.00 early-skip rskip signhide tmvp b-intra
    x265 [info]: tools: strong-intra-smoothing lslices=6 deblock sao
    WARNING:deprecated pixel format used, make sure you did set range correctly
    (1280, 960, 3)
    (1280, 960, 3)

    Hi guys,

    if I push a frame through a filter graph, the filtered frame loses its time_base:

    import av
    import av.filter

    input_container = av.open(format='lavfi', file='sine=frequency=1000:duration=5')
    input_audio_stream = input_container.streams.audio[0]
    agraph = av.filter.Graph()
    abuffer = agraph.add_abuffer(template=input_audio_stream)
    abuffersink = agraph.add("abuffersink")
    abuffer.link_to(abuffersink)  # link and configure the graph before use
    agraph.configure()
    for frame in input_container.decode():
        agraph.push(frame)        # feed the decoded frame into the graph
        new_frame = agraph.pull()

    I took a look at the C API, where I can get the time base via the filter context. I couldn't find any way to access it through PyAV. Any ideas?
    I guess PyAV should add it to the pulled frame

    1 reply
    Patrick Snape
    I have an open issue at the moment (72 hours old) that I wanted some advice on: PyAV-Org/PyAV#778
    I've been thinking about it over the past few days and I might just be being naive and what I want to do is in fact not currently possible - would love some feedback
    Nikolay Tiunov
    Hi everyone! Does anybody know how to make PyAV encode H.264 frames using the AVCC format rather than Annex B?
    1 reply
    hi all, just trying to use pyav to add some metadata to a container. Does it have bindings for that? If so, they don't seem to be documented anywhere. I tried this search query, which turned up nothing useful. I also tried output_container.metadata["title"] = "foo" and output_stream.metadata["title"] = "foo", neither of which resulted in a file that appeared to have any such attribute associated with it when inspected in Windows Explorer.
    I realize that this functionality is built into the ffmpeg binary and trivially accessible via -metadata but I wasn't using it in my script and I'd prefer not to start now if that can be avoided
    hey, good news! Actually, the reason it didn't appear to be working is that Windows Explorer is bad, not that your bindings lack feature parity with ffmpeg. So thank you so much, contributors, for your time and patience; great work on this thing, and sorry I didn't realize this earlier. For posterity: it isn't explicitly documented, but I was still able to find out how to do this by combing the sources and finding some tests
    Good day people! I am a Python developer looking for a way to extract motion vectors from an RTSP video. Before I dive into the library, I would like to ask whether this is something I could achieve using PyAV. In particular, I need to avoid decoding the entire video to save CPU usage. Thank you
    George Sakkis

    Hi all, new to PyAV and video software in general. The example on parsing has a comment "We want an H.264 stream in the Annex B byte-stream format" but doesn't mention why. Indeed, when I try the same example with the original mp4 input file, it cannot parse any packets.

    More importantly, and this is my main question, is there a general way to write packets of an arbitrary av.VideoStream in a way that they can be parsed later, ideally without an intermediate step of calling ffmpeg?

    1 reply
    Muhammad Ali
    I'm using https://pyav.org/docs/develop/cookbook/basics.html#remuxing with Python 3.9, but the script fails to run, complaining that a cache download is failing. What might I be missing?
    1 reply
    Hello! Is there a way to receive more P-frames and B-frames when streaming a video?
    I am still trying to get motion vectors without decoding the actual frames. Is that possible?