    Joanna Bitton

    I'm trying to write a video containing audio using lossless codecs. I'm using "libx264rgb" for my video codec which works perfectly fine and as expected. However, I've been having trouble finding a compatible lossless audio codec.

    I tried using "flac", but I got an error saying that it is an experimental feature:

    > Traceback (most recent call last):
    >   File "/data/users/jbitton/fbsource/fbcode/buck-out/dev/gen/aml/ai_red_team/augmentations/tests/video_tests/pytorch_test#binary,link-tree/aml/ai_red_team/augmentations/tests/video_tests/pytorch_test.py", line 92, in test_compose_without_tensor
    >     audio_codec="flac",
    >   File "/data/users/jbitton/fbsource/fbcode/buck-out/dev/gen/aml/ai_red_team/augmentations/tests/video_tests/pytorch_test#binary,link-tree/torchvision/io/video.py", line 129, in write_video
    >     container.mux(packet)
    >   File "av/container/output.pyx", line 198, in av.container.output.OutputContainer.mux
    >   File "av/container/output.pyx", line 204, in av.container.output.OutputContainer.mux_one
    >   File "av/container/output.pyx", line 174, in av.container.output.OutputContainer.start_encoding
    >   File "av/container/core.pyx", line 180, in av.container.core.Container.err_check
    >   File "av/utils.pyx", line 107, in av.utils.err_check
    > av.AVError: [Errno 733130664] Experimental feature: '/tmp/tmpbwvyrh3j/aug_pt_input_1.mp4' (16: mp4)

    Additionally, I tried using "mp4als", which is a valid audio codec in ffmpeg; however, PyAV raises an error saying it is an unknown codec.

    Would appreciate any advice, thanks in advance!

    Does anyone know how to tell a stream that there has been a frame drop? Basically, I'm receiving an audio stream over the network and re-encoding it. The problem is that as soon as there is a frame drop, the encoder fails with (pts X >= expected pts Y). To fix it, I set the pts to None before calling encode with the frame, but then the played frame is treated as the next one, which puts the audio and video out of sync. Here's a code sample of what I'm doing:
    import av
    container = av.open(rtsp_url)
    video_stream = container.streams.video[0]
    audio_stream = container.streams.audio[0]
    out_container = av.open(some_file, mode="w", format="mpegts")
    video_stream_out = out_container.add_stream(template=video_stream)
    audio_stream_out = out_container.add_stream(codec_name="aac", rate=44100)
    while True:
        packet = next(container.demux(video_stream, audio_stream))
        if packet.stream.type == 'video':
            packet.stream = video_stream_out
            out_container.mux(packet)
        elif packet.stream.type == 'audio':
            for a_frame in packet.decode():
                # If I don't set this to None, encode() fails after a packet
                # drop; when I do set it to None, the audio plays
                # ridiculously fast.
                a_frame.pts = None
                for a_packet in audio_stream_out.encode(a_frame):
                    a_packet.stream = audio_stream_out
                    out_container.mux(a_packet)
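A sketch of one alternative (my suggestion, not a confirmed PyAV recipe; the helper name is invented): instead of clearing pts entirely, rewrite only the offending timestamps so they stay monotonic while keeping the original spacing, which should preserve A/V sync across drops.

```python
from types import SimpleNamespace

def make_monotonic_fixer():
    """Return a function that nudges a frame's pts forward whenever it
    would go backwards, leaving well-ordered timestamps untouched."""
    last_pts = None

    def fix(frame):
        nonlocal last_pts
        if frame.pts is not None:
            if last_pts is not None and frame.pts <= last_pts:
                frame.pts = last_pts + 1  # minimal nudge past the conflict
            last_pts = frame.pts
        return frame

    return fix

# Example with stand-in frames (real code would pass decoded av frames):
fix = make_monotonic_fixer()
frames = [SimpleNamespace(pts=p) for p in (0, 100, 50, 200)]
print([fix(f).pts for f in frames])  # -> [0, 100, 101, 200]
```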
    Razvan Grigore
    hey, I'm trying to build the https://github.com/jocover/jetson-ffmpeg codec into av. Should pip3 install av --no-binary av work with the custom-compiled ffmpeg? I still get this error: pkg-config returned flags we don't understand: -pthread -pthread. Any hints?
    Razvan Grigore
    -- worked from deb-src modified build
    Luke Wong
    hi, I want to know how to read an RTMP stream with PyAV with no buffering, like the ffmpeg command line: ffplay -fflags nobuffer rtmp://xxxxxxx
    Brian Sherson
    Isn’t len(frame.planes) supposed to be equal to len(frame.layout.channels)?
    >>> frame = next(d) ; frame
    <av.AudioFrame 0, pts=0, 512 samples at 48000Hz, 7.1, s32p at 0x7fd146428b38>
    >>> len(frame.planes)
    >>> len(frame.layout.channels)
    I suspect this has something to do with why frame.to_ndarray() throws a segfault on 7.1 layout audio.
    Akshay Bhuradia

    I want to convert m3u8 to mp4, and I would like to do the conversion and the download at the same time. For the download I have used the http protocol of ffmpeg.

    I am running this command

    ffmpeg -i ultra.m3u8 -c copy -listen 1 -seekable 1 -f mp4

    when I trigger this URL (""), the file starts downloading, but I am not able to play the video,

    and I get an error when all the chunks have been read:

    [hls @ 0x55da053b4100] Opening 'ultra177.ts' for reading
    [tcp @ 0x55da0540f940] Connection to tcp:// failed: Connection refused
    [tcp @ 0x55da05520480] Connection to tcp:// failed: Connection refused
    [tcp @ 0x55da053ca780] Connection to tcp:// failed: Connection refused
    [tcp @ 0x55da05485f80] Connection to tcp:// failed: Connection refused
    [tcp @ 0x55da053ced40] Connection to tcp:// failed: Connection refused
    [tcp @ 0x55da054255c0] Connection to tcp:// failed: Connection refused
    [tcp @ 0x55da0540f940] Connection to tcp:// failed: Connection refused
    [tcp @ 0x55da05435380] Connection to tcp:// failed: Connection refused

    frame=53236 fps=7939 q=-1.0 Lsize= 476447kB time=00:29:36.30 bitrate=2197.3kbits/s speed= 265x
    video:446847kB audio:28278kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.278083%


    Hello, I'm new to PyAV. I've been trying to record my screen using 'x11grab' and then broadcast it to an RTMP stream, but I've been having no luck. Here's my code:

    import av
    if __name__ == '__main__':
        x11grab = av.open(':0.0', format='x11grab', options={'s': '1280x1024', 'framerate': '60'})
        output = av.open('rtmp://localhost', format='flv')
        ostream = output.add_stream('libx264', framerate=60, video_size='1280x1024')
        ostream.options = {}
        ostream.pix_fmt = 'yuv420p'
        istream = x11grab.streams.video[0]
        gen = x11grab.demux(istream)
        for _ in range(1000):
            packet = next(gen)
            packet.stream = ostream
            if packet.dts is None:
                continue  # skip flush packets
            output.mux(packet)
        output.close()

    When I run this, it just hangs, no output whatsoever. Could somebody please point me in the right direction on how I could go about making this work? Thank you!

    Daniel Takacs
    I want to generate a video with changing resolution (e.g. the first frame should be 640x480, but the next is 320x240 or 480x640). When I naively started changing stream.width and stream.height, I got an Input picture width (xxx) is greater than stride (yyy) error during the next stream.encode(), so I suppose I need to do something more. Any idea what is missing? Is it possible with pyav, or do I need to go down to the C API?
    Jimmy Berry
    Is it possible (and if so, how) to access frame metadata like lavfi.astats.Overall.RMS_level when using an av.filter.Graph() configured with astats=metadata=1? frame.side_data is seemingly always empty after push()ing a frame into the graph and pull()ing it back out.
    Paul Wieland
    Hello - I'm using OSX Catalina and having a problem installing av.
    Any insight?

    Hi all. I am recording an RTP live stream to disk. The program works correctly, but the start time of the video files does not begin at 0. I've checked the input container, and indeed its start_time attribute is not set to 0. I've tried to force the input container's start_time to 0, but it is protected.

    On the other hand, I've found that if those segments whose start_time is not 0 are muxed with the ffmpeg CLI using the '-c copy' flag, the offset is correctly removed. So I've been playing with the audio and video packet pts and dts to remove the initial offset, but after some segments the program crashes because of problems in the decoding process...

    How do you usually reset the start_time to 0?
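For what it's worth, a sketch of the timestamp-shifting idea (the helper is hypothetical; real code would apply it to packet.pts/packet.dts before muxing): anchor each stream on the first dts seen and subtract that offset from every subsequent packet.

```python
def make_offset_shifter():
    """Return a function mapping (stream_index, pts, dts) to timestamps
    shifted so each stream starts at 0 (None values pass through)."""
    offsets = {}

    def shift(stream_index, pts, dts):
        if stream_index not in offsets:
            # Anchor on the first usable timestamp for this stream.
            anchor = dts if dts is not None else pts
            offsets[stream_index] = anchor if anchor is not None else 0
        off = offsets[stream_index]
        return (
            pts - off if pts is not None else None,
            dts - off if dts is not None else None,
        )

    return shift

shift = make_offset_shifter()
print(shift(0, 9000, 9000))  # -> (0, 0)
print(shift(0, 9600, 9300))  # -> (600, 300)
```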

    @PaulWieland You're using Python 2 there; you need Python 3 instead. If you have it installed, it should be set up so you can run 'python3' instead of 'python' and 'pip3' instead of 'pip'. If not, get it from https://www.python.org/downloads/
    Also, have a look at how to create a venv: https://docs.python.org/3/tutorial/venv.html
    Once you have a virtual environment (that has been made with the correct Python installation) active, the plain 'python' and 'pip' commands will refer to the correct programs within that terminal session (and not to the deprecated legacy version that comes with macOS for backwards compatibility).
    Peter Hausamann
    Hi, I have a camera that produces frames in the Bayer RGGB format which I would like to encode with PyAV. With plain ffmpeg this works fine with the -pix_fmt bayer_rggb8 flag, however it seems like this is not supported by PyAV:
    av.VideoFrame.from_ndarray(img, format="bayer_rggb8")
    ValueError: Conversion from numpy array with format `bayer_rggb8` is not yet supported
    Samuel Smith

    So I'm recording an online radio stream and dumping it to a file, which ends up as raw AAC audio data. From there, I put the raw data into a container with:

    in_container = av.open('raw_stream.aac', mode='r')
    in_stream = in_container.streams.audio[0]
    out_container = av.open('stream.m4a', mode='w')
    out_stream = out_container.add_stream(template=in_stream)
    for packet in in_container.demux(in_stream):
        # skip flush packet
        if packet.dts is None:
            continue
        # We need to assign the packet to the new stream.
        packet.stream = out_stream
        out_container.mux(packet)
    But my question is, could I perhaps bypass having to write the file and instead directly insert the raw audio data into the output container incrementally?

    Samuel Smith
    Well, I guess I could just pass the http stream straight into av.open.. but what fun is that?
    Hi all. I'm using PyAV to re-mux on-air masters from a client (HBO): ProRes 422 HQ, with up to 16 discrete mono 48kHz LPCM tracks and no viable audio layout / channel info (dont ask, its just how it is). I am able to parse and re-mux them just fine; however, audio channels after the 1st appear to be disabled, and this appears to be default behavior in FFmpeg (see https://trac.ffmpeg.org/ticket/2626 & https://trac.ffmpeg.org/ticket/3622). I usually do AV dev in AVFoundation / CoreMedia / CoreVideo / CoreAudio on macOS, where enabling tracks is trivial, as is making track group associations. My question: is there a way in PyAV to change this behavior so discrete audio tracks are enabled by default? Is there a way to associate audio tracks with an audio group so they belong to the same language?
    Thank you

    You can see the tracks here https://gist.github.com/vade/139c60b7ba57485a2c0c93f17095bc40

    Note that for audio track 1, default is YES; on all the others it's NO. This was produced by PyAV with FFmpeg

    ffmpeg version 4.3-2~18.04.york0 Copyright (c) 2000-2020 the FFmpeg developers

    Ah - I think FFMPEG calls this 'disposition' : https://ffmpeg.org/ffmpeg-all.html#Main-options
    Evan Clark
    Does anyone have SRT working with pyav? I get a "protocol not supported" error with the latest pyav.
    Ramzan Bekbulatov
    Hello! How can I split audio to chunks of fixed length using PyAV?
    Does anyone have example ffmpeg code for char-by-char animation?
    Thank you in advance for help...
    Hey guys! I need to build wheels for the arm platform (aarch32 and aarch64) on some ubuntu system (to avoid having to compile locally on devices, which proved to be very problematic). How would I go about that?
    Hey. I manipulate very large video files that each take ~3s to open. If I try to open them in parallel using a ThreadPoolExecutor, the opening time jumps to ~20s each. Any solution?
    Alex Hutnik
    I’m reading from an RTSP source, manipulating each frame, and then muxing it to disk. The frame manipulation takes roughly 0.75 seconds per frame. How does PyAV / libav know to buffer incoming packets while my frame manipulation code blocks until my next call to demux?
    Haujet Zhao

    Help, please!


    In the example below, I extract the even frames (dropping the odd frames) from the input to the output.

    The input frame rate is 30fps, and its duration is 60s.

    So the output frame rate is expected to be 30fps, and the duration should be 30s.

    But the result from pyav is not as expected: the fps remains 30fps, but the output duration became 57.37s; only the tbr changed from 30 to 15.

    Here is the code:

    import av
    inputFile = 'test.mkv'
    outputFile = 'test_out.mkv'
    container_in = av.open(inputFile) # the input framerate is 30fps, duration is 60s
    container_in.streams.video[0].thread_type = 'AUTO'
    container_out = av.open(outputFile, mode='w')
    stream = container_out.add_stream('libx264', rate=30, options={"crf":"23"})
    # stream = container_out.add_stream('libx264', rate=30, options={"crf":"23", 'r':'30'})
    stream.width = container_in.streams.video[0].codec_context.width
    stream.height = container_in.streams.video[0].codec_context.height
    stream.pix_fmt = container_in.streams.video[0].codec_context.pix_fmt
    for count, frame in enumerate(container_in.decode(video=0)):
        if count % 2 == 0:
            # Only the even-numbered frames are written to the output,
            # so the expected output is like 2x speed of the input:
            # frame rate 30fps, duration 30s.
            for packet in stream.encode(frame):
                container_out.mux(packet)
    for packet in stream.encode():
        container_out.mux(packet)
    container_out.close()
    # The expected output video should have a framerate of 30fps, but instead
    # of generating new timestamps, the output frames just copy the input
    # timestamps. Even if I use the {'vsync':'drop'} stream option, the output
    # still won't change. This behavior is so different from using the CLI.

    Expected behavior

    The input video stream is:

    Stream #0:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 2160x1440, 30 fps, 30 tbr, 1k tbn, 60 tbc (default)
          DURATION        : 00:01:00.066000000

    The output video stream is expected to be:

    Stream #0:0: Video: h264 (High), yuv420p(progressive), 2160x1440, 30 fps, 30 tbr, 1k tbn, 60 tbc (default)
          DURATION        : 00:00:30.033000000

    Actual behavior

    But the actual output video stream is:

    Stream #0:0: Video: h264 (High), yuv420p(progressive), 2160x1440, 30.30 fps, 15 tbr, 1k tbn, 60 tbc (default)
          DURATION        : 00:00:57.366000000

    Hello all,
    I need to bundle both PyAV and libav with my project, but need to compile them myself as I need ffmpeg in LGPL license "mode".
    The issue is that the current configuration for building both libav and PyAV on macOS causes the absolute paths of the linked libraries to be embedded:

    otool -L libavcodec.58.54.100.dylib
        /Users/pablo/Repos/vendor/build/ffmpeg-4.2/lib/libavcodec.58.dylib (compatibility version 58.0.0, current version 58.54.100)
        /Users/pablo/Repos/vendor/build/ffmpeg-4.2/lib/libswresample.3.dylib (compatibility version 3.0.0, current version 3.5.100)
        /Users/pablo/Repos/vendor/build/ffmpeg-4.2/lib/libavutil.56.dylib (compatibility version 56.0.0, current version 56.31.100)
    otool -L buffer.cpython-37m-darwin.so                                  
        /Users/pablo/Repos/vendor/build/ffmpeg-4.2/lib/libavutil.56.dylib (compatibility version 56.0.0, current version 56.31.100)
        /Users/pablo/Repos/vendor/build/ffmpeg-4.2/lib/libswscale.5.dylib (compatibility version 5.0.0, current version 5.5.100)
        /Users/pablo/Repos/vendor/build/ffmpeg-4.2/lib/libavdevice.58.dylib (compatibility version 58.0.0, current version 58.8.100)

    Instead, I would like to remove the explicit setting of the path for the shared libraries and change it to the @loader_path form that the provided binary wheels already use:

    otool -L _core.cpython-37m-darwin.so 
        @loader_path/.dylibs/libavdevice.58.8.100.dylib (compatibility version 58.0.0, current version 58.8.100)
        @loader_path/.dylibs/libavformat.58.29.100.dylib (compatibility version 58.0.0, current version 58.29.100)
        @loader_path/.dylibs/libswresample.3.5.100.dylib (compatibility version 3.0.0, current version 3.5.100)

    Any help on modifying the configuration for building is very appreciated.


    @shycats I think you want to look at install_name_tool (it appears you are on macOS?).
    Hi All - is it possible to query whether the loaded container has interlaced video frames without doing a demux and a decode and introspecting an AVFrame? Ideally I'd like to query the container / video stream, but perhaps there isn't a clean way. Thank you.
    Hey, I'm having some issues when trying to decode some of my own video files (HLS downloads and remuxes, OBS recordings, ffmpeg transcodes) using PyAV. A lot of the time it starts spewing out errors (where neither ffmpeg nor ffplay show any), and sometimes it crashes the entire application without a traceback, while other times the application just gets stuck.
    Here's my ffplay output from one of the files:
    > ffplay -loglevel level+verbose -i '.\2020-10-06 18-09-10.mp4'
    [info] ffplay version 4.2.2 Copyright (c) 2003-2019 the FFmpeg developers
    [info]   built with gcc 9.2.1 (GCC) 20200122
    [info]   configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
    [info]   libavutil      56. 31.100 / 56. 31.100
    [info]   libavcodec     58. 54.100 / 58. 54.100
    [info]   libavformat    58. 29.100 / 58. 29.100
    [info]   libavdevice    58.  8.100 / 58.  8.100
    [info]   libavfilter     7. 57.100 /  7. 57.100
    [info]   libswscale      5.  5.100 /  5.  5.100
    [info]   libswresample   3.  5.100 /  3.  5.100
    [info]   libpostproc    55.  5.100 / 55.  5.100
    [verbose] Initialized direct3d renderer.
    [h264 @ 00000267b678f880] [verbose] Reinit context to 1600x912, pix_fmt: yuv420p
    [info] Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '.\2020-10-06 18-09-10.mp4':
    [info]   Metadata:
    [info]     major_brand     : isom
    [info]     minor_version   : 512
    [info]     compatible_brands: isomiso2avc1mp41
    [info]     encoder         : Lavf58.29.100
    [info]   Duration: 00:00:35.55, start: 0.000000, bitrate: 1487 kb/s
    [info]     Stream #0:0(und): Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt470bg/unknown/unknown, left), 1600x900 (1600x912) [SAR 1:1 DAR 16:9], 1309 kb/s, 60 fps, 60 tbr, 90k tbn, 120 tbc (default)
    [info]     Metadata:
    [info]       handler_name    : VideoHandler
    [info]     Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 161 kb/s (default)
    [info]     Metadata:
    [info]       handler_name    : SoundHandler
    [ffplay_abuffer @ 00000267b6785740] [verbose] tb:1/44100 samplefmt:fltp samplerate:44100 chlayout:0x3
    [ffplay_abuffersink @ 00000267b6785180] [verbose] auto-inserting filter 'auto_resampler_0' between the filter 'ffplay_abuffer' and the filter 'ffplay_abuffersink'
    [auto_resampler_0 @ 00000267b6785340] [verbose] ch:2 chl:stereo fmt:fltp r:44100Hz -> ch:2 chl:stereo fmt:s16 r:44100Hz
    [h264 @ 00000267b67a8dc0] [verbose] Reinit context to 1600x912, pix_fmt: yuv420p
    [ffplay_abuffer @ 00000267b683dc00] [verbose] tb:1/44100 samplefmt:fltp samplerate:44100 chlayout:0x3
    [ffplay_abuffersink @ 00000267bb9649c0] [verbose] auto-inserting filter 'auto_resampler_0' between the filter 'ffplay_abuffer' and the filter 'ffplay_abuffersink'
    [auto_resampler_0 @ 00000267bb964c80] [verbose] ch:2 chl:stereo fmt:fltp r:44100Hz -> ch:2 chl:stereo fmt:s16 r:44100Hz
    [ffplay_buffer @ 00000267b681cf00] [verbose] w:1600 h:900 pixfmt:yuv420p tb:1/90000 fr:60/1 sar:1/1 sws_param:
    [verbose] Created 1600x900 texture with SDL_PIXELFORMAT_IYUV.
    [AVIOContext @ 00000267b677aac0] [verbose] Statistics: 6659901 bytes read, 2 seeks
    The behaviour is generally unpredictable.
    Sometimes the file gets decoded successfully, though not without errors, and with subpar performance:
    > poetry run benchmark 'E:\Recordings\2020-10-06 18-09-10.mp4'
    2020-10-07 09:03:43,013 - switcher.decoder [INFO]       Initialize Decoder
    2020-10-07 09:03:43,051 - switcher.decoder [INFO]       Start Decoder threads
    2020-10-07 09:03:43,052 - switcher.decoder [INFO]       Starting video decoder
    2020-10-07 09:03:43,053 - switcher.decoder [INFO]       Starting audio decoder
    2020-10-07 09:03:43,075 [ERROR] co located POCs unavailable
    2020-10-07 09:03:43,131 [ERROR] co located POCs unavailable
     (repeated 9 more times)
    2020-10-07 09:03:43,131 [WARNING]       DTS 2284470 < 2286000 out of order
    2020-10-07 09:03:43,132 [ERROR] co located POCs unavailable
    2020-10-07 09:03:43,176 - switcher.decoder [INFO]       Audio decoding finished
    2020-10-07 09:03:43,183 - switcher.decoder [INFO]       Video decoding finished
    2020-10-07 09:03:43,184 - switcher.decoder [INFO]       All video frames consumed
    2020-10-07 09:03:43,200 - switcher.benchmark [INFO]     Decode time: 0.1460065
    2020-10-07 09:03:43,200 - switcher.benchmark [INFO]     Framerate:   27.29551903111826
    Other times resulting in a proper crash with a traceback in the decoder threads:
    > poetry run benchmark "E:\Recordings\2020-10-06 18-09-10.mp4"
    2020-10-07 09:03:24,986 - switcher.decoder [INFO]       Initialize Decoder
    2020-10-07 09:03:25,021 - switcher.decoder [INFO]       Start Decoder threads
    2020-10-07 09:03:25,022 - switcher.decoder [INFO]       Starting video decoder
    2020-10-07 09:03:25,024 - switcher.decoder [INFO]       Starting audio decoder
    2020-10-07 09:03:25,042 [ERROR] co located POCs unavailable
    2020-10-07 09:03:25,115 [ERROR] co located POCs unavailable
     (repeated 13 more times)
    2020-10-07 09:03:25,116 [WARNING]       Multiple frames in a packet.
    2020-10-07 09:03:25,116 [ERROR] Reserved bit set.
    2020-10-07 09:03:25,117 [ERROR] Number of bands (48) exceeds limit (40).
    Exception in thread Thread-2:
    Traceback (most recent call last):
      File "C:\Python38\lib\threading.py", line 932, in _bootstrap_inner
      File "C:\Python38\lib\threading.py", line 870, in run
    2020-10-07 09:03:25,120 [ERROR] co located POCs unavailable
        self._target(*self._args, **self._kwargs)
    2020-10-07 09:03:25,120 [ERROR] Invalid NAL unit size (555421141 > 1380).
      File "C:\Users\Dell\git\StreamPipe\switcher\switcher\decoder.py", line 58, in _audio_decoder
    2020-10-07 09:03:25,121 [ERROR] Error splitting the input into NAL units.
        for frame in self.container.decode(self.audio_stream):
      File "av\container\input.pyx", line 183, in decode
    2020-10-07 09:03:25,122 [ERROR] co located POCs unavailable
      File "av\packet.pyx", line 103, in av.packet.Packet.decode
      File "av\stream.pyx", line 168, in av.stream.Stream.decode
      File "av\codec\context.pyx", line 506, in av.codec.context.CodecContext.decode
    Exception in thread Thread-1:
    Traceback (most recent call last):
      File "C:\Python38\lib\threading.py", line 932, in _bootstrap_inner
      File "av\codec\context.pyx", line 409, in av.codec.context.CodecContext._send_packet_and_recv
      File "C:\Python38\lib\threading.py", line 870, in run
      File "av\error.pyx", line 336, in av.error.err_check
        self._target(*self._args, **self._kwargs)
      File "C:\Users\Dell\git\StreamPipe\switcher\switcher\decoder.py", line 47, in _video_decoder
        for frame in self.container.decode(self.video_stream):
      File "av\container\input.pyx", line 183, in decode
    av.error.InvalidDataError: [Errno 1094995529] Invalid data found when processing input; last error log: [aac] Number of bands (48) exceeds limit (40).
      File "av\packet.pyx", line 103, in av.packet.Packet.decode
      File "av\stream.pyx", line 168, in av.stream.Stream.decode
      File "av\codec\context.pyx", line 506, in av.codec.context.CodecContext.decode
      File "av\codec\context.pyx", line 409, in av.codec.context.CodecContext._send_packet_and_recv
      File "av\error.pyx", line 336, in av.error.err_check
    av.error.InvalidDataError: [Errno 1094995529] Invalid data found when processing input; last error log: [h264] co located POCs unavailable
    The only cause I can think of right now is that I am trying to decode both the audio and video streams at the same time.
    Alright, I narrowed it down to the issue being just that.
    Nikita Melentev

    Hi there! I'm trying to silence all logging from av and ffmpeg. I tried this:


    Btw, there is av.logging.QUIET in the docs, but there is no actual constant that I can see in the installed library. I'm encoding video and get tons of

    [jpeg2000 @ 0x7f10fc000940] End mismatch 3
    [jpeg2000 @ 0x7f10fc000940] End mismatch 4
    [jpeg2000 @ 0x7f10fc000940] End mismatch 4
    [jpeg2000 @ 0x7f10fc000940] End mismatch 3
    [jpeg2000 @ 0x7f10fc000940] End mismatch 4
    [jpeg2000 @ 0x7f10fc000940] End mismatch 3

    on each frame. Is there an easy way to prevent such logs?

    Maria Khrustaleva

    Hello all,
    I want to generate video with a frame rotation record in the metadata of video stream.
    Is there any way to do this?
    Something like:

    stream = output_container.add_stream("mpeg4", 24)
    stream.metadata['rotate'] = angle

    Similarly to this:

    ffmpeg -i video.mp4 -c copy -metadata:s:v:0 rotate=90 rotated_video.mp4
    @vade thanks for your answer, that worked!
    I have another question. I'm trying to take advantage of the buffer protocol and use the av.buffer.Buffer.update() function to create a VideoFrame from a Buffer object generated by another lib (neither numpy nor PIL). The problem is that this function only copies the buffer as-is and doesn't take into account the padding that must be added for libav, so it complains about the buffer not being the same size. Is this the expected behaviour? I saw that VideoFrame.from_image() and VideoFrame.from_ndarray() do take the padding into account, but I would prefer not to package PIL or numpy with my lib. Is there any plan to support creating a VideoFrame from any Python object that follows the buffer protocol, adding the needed padding? Thanks!
    Ok, so I tried to make a workaround, but found a really weird behaviour when trying to implement it. Here is the problem: https://stackoverflow.com/q/64467742/8861787. Any ideas?
    Anyone have any idea why av.open("default", format="alsa") results in "ValueError: no container format 'alsa'" on Ubuntu 18.04? I have the same code running on a Pi and a Jetson, and it does not produce the same problem. Running the command "ffmpeg -f alsa -i default -t 10 out.wav" works fine... am I missing some sort of dev package?
    nm, building pyav from scratch using the instructions in the git repo fixed the issue... now I just need to figure out how to export the newly built package from the venv....
    Can the framerate reach 100fps?