    Alright, I narrowed it down to the issue being just that.
    Nikita Melentev

    Hi there! I'm trying to silence all logging from av and ffmpeg. I tried this:


    Btw, there is av.logging.QUIET in the docs, but I don't see an actual constant when the library is installed. I'm encoding video and getting tons of

    [jpeg2000 @ 0x7f10fc000940] End mismatch 3
    [jpeg2000 @ 0x7f10fc000940] End mismatch 4
    [jpeg2000 @ 0x7f10fc000940] End mismatch 4
    [jpeg2000 @ 0x7f10fc000940] End mismatch 3
    [jpeg2000 @ 0x7f10fc000940] End mismatch 4
    [jpeg2000 @ 0x7f10fc000940] End mismatch 3

    on each frame. Is there an easy way to prevent these logs?

    Maria Khrustaleva

    Hello all,
    I want to generate a video with a frame rotation record in the metadata of the video stream.
    Is there any way to do this?
    Something like:

    stream = output_container.add_stream("mpeg4", 24)
    stream.metadata['rotate'] = angle

    Similarly to this:

    ffmpeg -i video.mp4 -c copy -metadata:s:v:0 rotate=90 rotated_video.mp4
    @vade thanks for your answer, that worked!
    I have another question. I'm trying to take advantage of the buffer protocol and use the av.buffer.Buffer.update() function to create a VideoFrame from a Buffer object generated by another lib (neither numpy nor PIL). The problem is that this function only copies the buffer as-is and doesn't account for the padding that libav requires, so it complains about the buffer not being the same size. Is this the expected behaviour? I saw that VideoFrame.from_image() and VideoFrame.from_ndarray() do take the padding into account, but I would prefer not to package PIL or numpy with my lib. Is there any plan to support creating a VideoFrame from any Python object that follows the buffer protocol, adding the needed padding? Thanks!
    Ok, so I tried to make a workaround, but found a really weird behaviour when trying to implement it. Here is the problem: https://stackoverflow.com/q/64467742/8861787 . Any ideas?
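In the meantime, the padding can be added by hand before calling update(). A pure-Python sketch (`pad_rows` is a hypothetical helper; the stride would come from `frame.planes[0].line_size`, and this assumes a single-plane format):

```python
def pad_rows(src: bytes, row_bytes: int, line_size: int, height: int) -> bytes:
    """Repack tightly-packed rows into libav's padded row layout.

    row_bytes: bytes of real pixel data per row (e.g. width * 3 for rgb24)
    line_size: padded stride libav expects (frame.planes[0].line_size)
    """
    out = bytearray(line_size * height)
    for y in range(height):
        # Copy each source row to the start of its padded destination row;
        # the trailing padding bytes stay zero.
        out[y * line_size : y * line_size + row_bytes] = src[
            y * row_bytes : (y + 1) * row_bytes
        ]
    return bytes(out)
```

The repacked bytes then match the plane's buffer size, so something like `frame.planes[0].update(pad_rows(src, width * 3, frame.planes[0].line_size, height))` has a chance of succeeding for a single-plane format like rgb24.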
    Anyone have any idea why av.open("default", format="alsa") results in "ValueError: no container format 'alsa'" on Ubuntu 18.04? I have the same code running on a Pi and a Jetson, and it does not produce the same problem. Running the command "ffmpeg -f alsa -i default -t 10 out.wav" works fine... am I missing some sort of dev package?
    Never mind, building pyav from scratch using the instructions in the git repo fixed the issue... now I just need to figure out how to export the newly built package from the venv....
    Can the framerate reach 100 fps?
    Does PyAV support outputting to an RTSP stream?
    Hi, does anyone know how to decode motion vector from side data with PyAV?
    I am currently getting the error: Frame does not match AudioResampler setup. Isn't the job of the resampler to convert from an arbitrary audio setup to a determined audio format?
    Or is there something else that I am missing to prepare the frame for the resampler?
    Just trying to convert frames that are in fltp stereo 48000 to s16 stereo 48000. Nearly the same format, just going from fltp to s16.
    Interesting, the sample rate apparently changes from 44100 to 48000 near the beginning of the stream? Wild.
    Maxim Kochurov
    Hi there, I'm having trouble with pip install.
    I've updated ffmpeg on my system and it still fails to install pyav
    any suggestions?
    @ferrine I do not think that an out-of-date / mismatched ffmpeg version would stop pyav from installing, unless you are trying to build pyav from source. There is not much to go on from your statement other than: it doesn't install. If you have the logs reported from the install, it might be helpful to post them in a pastebin alongside which OS you are trying to install on. Some easy things to check: make sure pip and setuptools are up-to-date.
    Hello everybody
    I'm trying to use PyAV to make timelapse videos on a Raspberry Pi, mostly as a first experiment to get used to it. I have managed to acquire frames from the Pi Camera and encode them into an h264 stream (I followed the generate_video.py example). The problem is, the encoding quality is very low, and the bitrate is low (about 1000 kbps). I can't find any quality/bitrate settings in the PyAV documentation...
    Kai Hu
    Hi there, I want to use the seek method, but find that it does not work most of the time: the video is still decoded from the first frame. Only very big offsets work. Is this normal?
    Witold Baryluk
    Hi. Is there a way to set the output stream's desired bitrate? I.e. h264, 4000x2000, 30 Mbps, or something like that? I would also like to set the encoder effort / quality for output.
    Hi, is it possible to use PyAV to load a .yuv file? (with pixel format yuv422p16le)
    Jim Condon
    I'm trying to get the raw encoded data out of the packet in MediaRecorder in aiortc. I'm saving it as a raw aac. The file from the container is correct. I'm calling stream.encode, and accessing the data afterwards. However when I use "packet.to_bytes()" to get the AAC data in MediaRecorder, the data I print out does not match the data in file. Am I using packet wrong? How do I access the raw encoded data from a packet?
    Craig Niles
    I'm trying to use PyAV with ffmpeg built for hardware accel (installed with pip install av --no-binary av). I've tested ffmpeg (4.2.2) and it works fine. I can run ffmpeg from the CLI to encode and decode using hardware acceleration. However, when trying to create a decoder with PyAV it raises an 'UnknownCodec' exception. Oddly, it raises the exception for all codecs, not just those using hw accel. When I inspect the contents of av.codecs_available, I can see the cuvid codecs and other codecs. Not sure what's going on. I tested a very similar setup with PyAV 6.x and ffmpeg 3.4.8 built for hwaccel and didn't have any issues... Anyone have any thoughts as to what could be the problem?
    Craig Niles
    Bleh, figured it out: multiple libav shared objects were on the linker path and it was loading the incorrect libs. A workaround is setting LD_LIBRARY_PATH until the dependencies are sorted out properly in the custom deb package.
    Will Price

    Hi there, thanks for PyAV, it's a great tool! I'm wondering whether there is a way to iterate through the codecs available to PyAV? I seem to be running into a similar issue to PyAV-Org/PyAV#57 where I can access a codec (h264) via the ffmpeg cli, but when I try and add a stream to a container I get an UnknownCodecError.

    It looks like the Codec class (https://pyav.org/docs/develop/api/codec.html) might be of help, but it seems to assume that you already know the codec name. I suppose I could iterate over the output of ffmpeg -codecs and see which ones PyAV recognises. Is there a better way?

    for reference my ffmpeg and pyav packages are installed via conda-forge
    Will Price
    Interestingly I can do av.codec.Codec('h264') without error, but the add_stream call on an output container causes the UnknownCodecError
    Will Price
    Comparing av.library_versions and the output of ffmpeg it appears they are built against different versions of the ffmpeg libs
    av.library_versions gives
    {'libavutil': (56, 38, 100),
     'libavcodec': (58, 65, 103),
     'libavformat': (58, 35, 101),
     'libavdevice': (58, 9, 103),
     'libavfilter': (7, 70, 101),
     'libswscale': (5, 6, 100),
     'libswresample': (3, 6, 100)}
    and ffmpeg
    libavutil      56. 51.100 / 56. 51.100
    libavcodec     58. 91.100 / 58. 91.100
    libavformat    58. 45.100 / 58. 45.100
    libavdevice    58. 10.100 / 58. 10.100
    libavfilter     7. 85.100 /  7. 85.100
    libavresample   4.  0.  0 /  4.  0.  0
    libswscale      5.  7.100 /  5.  7.100
    libswresample   3.  7.100 /  3.  7.100
    libpostproc    55.  7.100 / 55.  7.100
    so it seems PyAV is built against a different version... yet that doesn't explain why I can instantiate a Codec class for 'h264' but not use 'h264' when adding a stream
    Will Price
    Ah, I've found av.codecs_available!
    So h264 is present there
    Will Price
    Very odd, it appears to be an issue caused by importing something else; when I move import av to the top of my script I no longer encounter the issue. I will try and investigate which import causes it
    Will Price
    Ah, it appears decord interferes with it.
    Will Price
    I've filed an issue on PyAV PyAV-Org/PyAV#735 and also on decord dmlc/decord#128
    To fix this, just import av before decord, although hopefully we can get a bug fix in decord and/or av so that they don't conflict with one another

    I see examples of parsing live http streams, but they seem to all follow this pattern:

    url = 'http://website.com/playlist.m3u8'
    input_container = av.open(url)
    for frame in input_container.decode():
        # do something with frame

    I'm just wondering why it is a for loop and not a while loop for something like a live stream that is indefinite? Also, if it has to be a for loop, it does not appear to exit the loop when the stream has ended for me. What could a possible exit condition be, to break out of the loop?

    Mohammad Moallemi

    Hi People,

    I hope you are loud and proud.

    I'm a newbie to PyAV and I'm using aiortc as a WebRTC media server; in aiortc I have av.VideoFrame objects available for each video frame, and I want to create HLS video segments from the frames in real time.

    As you can see in this project:
    they pipe OpenCV video frame bytes to the FFmpeg CLI for HLS streaming.

    My question is, how can I use PyAV to consume av.VideoFrame objects and extract 2-second video segments consisting of 60 frames for HLS streaming?

    Thanks in advance

    Hieu Trinh
    Hi, I would like to make a video with mostly static images. I was thinking of making a variable-framerate mp4 by setting the right pts for each frame. So far I've encountered what is mentioned in #397 (PyAV-Org/PyAV#397). In the end, is it possible to manipulate pts when encoding? Or do I need to use the ffmpeg C interface for that? (I am using 6.2.0 because I have ffmpeg-3)
    Dear all, has anyone got experience using a pipe with av.open()? I am trying to use a pipe to feed data to an av container. The code is like below:
    r, w = os.pipe()
    fr = os.fdopen(r, 'rb')
    av.open(fr, mode='r')
    I got an exception with a message like "file's name must be str-like".
    Has anyone seen anything like this before? Thanks all.
    Mohammad Moallemi

    Hi all,
    I have this issue while passing video/audio frames/packets to the container muxer:


    output = av.open(HLS_MANIFEST, options=HLS_OPTS, mode='w', format='hls')
    out_stream_a = output.add_stream('aac')
    out_stream_a.options = {}

    # later, inside the track's recv() (see the traceback below):
    frame: AudioFrame = await self.track.recv()
    packet = out_stream_a.encode(frame)


    sender(audio) Traceback (most recent call last):
      File "/home/ubuntu/aiortc_tests/venv/lib/python3.6/site-packages/aiortc/rtcrtpsender.py", line 295, in _run_rtp
        payloads, timestamp = await self._next_encoded_frame(codec)
      File "/home/ubuntu/aiortc_tests/venv/lib/python3.6/site-packages/aiortc/rtcrtpsender.py", line 248, in _next_encoded_frame
        frame = await self.__track.recv()
      File "server.py", line 102, in recv
        packet = out_stream_a.encode(frame)
      File "av/stream.pyx", line 155, in av.stream.Stream.encode
      File "av/codec/context.pyx", line 466, in av.codec.context.CodecContext.encode
      File "av/audio/codeccontext.pyx", line 40, in av.audio.codeccontext.AudioCodecContext._prepare_frames_for_encode
      File "av/audio/resampler.pyx", line 122, in av.audio.resampler.AudioResampler.resample
    ValueError: Input frame pts 10637760 != expected 10632960; fix or set to None.
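One workaround for this class of error is the one the message itself suggests: clear the incoming frame's pts so the encoder's internal resampler assigns the next expected value. A hedged sketch (this discards the source timing and lets PyAV re-time the audio, which is usually acceptable for a continuous live track):

```python
def encode_audio(out_stream, frame):
    # Frames arriving from aiortc carry their own pts; the encoder's
    # resampler insists on contiguous timestamps. Setting pts to None
    # ("fix or set to None") lets PyAV generate the expected value.
    frame.pts = None
    return out_stream.encode(frame)
```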