    Nitan Alexandru Marcel
    Another error now: Input frame pts 487305 != expected 0; fix or set to None.
    but this one is in the resampler
    @nitanmarcel sometimes decoders have issues in the middle of the stream. Some are safe to ignore, some aren't. Try running some re-encoding through the command line with ffmpeg; I bet it will have those errors too. I don't think it has anything to do with the PyAV library.
    Nitan Alexandru Marcel
    @quantotto The commandline worked: ffmpeg -i input.mp3 -f s16le -acodec pcm_s16le output.raw
    Nitan Alexandru Marcel
    oh I fixed it by setting pts to None
    Nitan Alexandru Marcel
    Could anyone point me to the part in the ffmpeg code where it handles the command line input? Thanks.
    Nice, I can use Matrix instead of the bad Gitter app
    Zeyu Dong

    @dong-zeyu yes, you have to calculate pts / dts yourself. Something like below worked for me:

    import av

    avin = av.open("test.264", "r", format="h264")
    avout = av.open("test.mp4", "w", format="mp4")
    in_stream = avin.streams[0]
    out_stream = avout.add_stream(template=in_stream)

    # One timestamp step per packet, in time_base units
    time_base = int(1 / in_stream.time_base)
    rate = in_stream.base_rate
    ts_inc = int(time_base / rate)

    ts = 0
    for pkt in avin.demux():
        if pkt.size == 0:
            continue  # skip the flush packet at end of stream
        pkt.pts = ts
        pkt.dts = ts
        ts += ts_inc
        pkt.stream = out_stream
        avout.mux(pkt)
    avout.close()

    That works! Thank you!

    doesn't pts just increase by 1? I see testsrc=fps=30 output pts as 1,2,3,4,....
    @NewUserHa PTS increases in time_base units. For example, if the time_base for the video is 1/1000, then 1 unit of pts is actually a millisecond. The time base is something arbitrary that the video creator (or the software creating the video) chooses, and the other timestamps (like pts and dts) align to it. Going with the example of 1/1000 as the time base, if a video has a 2 FPS frame rate, each packet / frame will have a step of half a second, which is 500 in terms of pts / dts increases.
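    The step arithmetic above can be sketched as a tiny helper (for illustration only, not a PyAV API):

```python
from fractions import Fraction

def pts_step(time_base: Fraction, fps: Fraction) -> int:
    """Number of time_base units between consecutive frames."""
    return int(1 / (time_base * fps))

print(pts_step(Fraction(1, 1000), Fraction(2)))    # 2 FPS at 1/1000 -> 500
print(pts_step(Fraction(1, 90000), Fraction(30)))  # 30 FPS at 1/90000 -> 3000
```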
    testsrc=fps=30 in a filter graph outputs pts as 1,2,3,4,.... I guess it's because of this testsrc..
    tested via PyPy; it still has that latency issue.

    Hi everyone, I could use some help with trimming the padding from a video's frames. I've created an issue here with more details: PyAV-Org/PyAV#802. But basically, if I reshape the numpy array of a frame's plane along line_size and then slice along the frame width to get a frame without padding, I'm not getting a properly aligned frame. However, when I try accounting for memory alignment, the results are better, which leads me to suspect it might have something to do with PyAV ignoring memory alignment in its frame width property.

    import numpy as np

    def _remove_padding(self, plane):
        """Remove padding from a video plane.

        Args:
            plane (av.video.plane.VideoPlane): the plane to remove padding from

        Returns:
            numpy.array: an array with proper memory aligned width
        """
        buf_width = plane.line_size
        bytes_per_pixel = 1
        frame_width = plane.width * bytes_per_pixel
        arr = np.frombuffer(plane, np.uint8)
        if buf_width != frame_width:
            align_to = 16
            # Frame width that is aligned up with a 16 bit boundary
            frame_width = (frame_width + align_to - 1) & ~(align_to - 1)
            # Slice (create a view) at the aligned boundary
            arr = arr.reshape(-1, buf_width)[:, :frame_width]
        return arr.reshape(-1, frame_width)

    Although the above manual alignment kind of works, it's not correct for every format. In my case, it works correctly for the luma plane but not the chroma planes. I'm not sure how to proceed, so any advice would be really helpful. Thanks.

        # Frame width that is aligned up with a 16 bit boundary
    Correction: it's supposed to be 16 pixel boundary
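    For what it's worth, a per-plane sketch: slice each plane by its own width and line_size instead of reusing the luma width (assuming a 1-byte-per-sample format such as yuv420p, where the chroma planes report half the luma width):

```python
import numpy as np

def plane_to_array(plane):
    """Strip line padding from a video plane.

    `plane` is expected to expose the buffer protocol plus `line_size`
    (padded stride in bytes) and `width` attributes, like
    av.video.plane.VideoPlane does.
    """
    arr = np.frombuffer(plane, np.uint8)
    # Each buffer row is line_size bytes; only the first `width` are pixels
    return arr.reshape(-1, plane.line_size)[:, :plane.width]
```

    Calling this separately on frame.planes[0], [1] and [2] avoids applying the luma width to the chroma planes.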
    Kevin Lin
    hey guys, when I write a frame using output.mux it squishes it vertically
    output = av.open(output_name, 'w')
    stream = output.add_stream('h264', fps)
    ... #later in a loop
    frame = av.VideoFrame.from_ndarray(frame, format='bgr24')
    packet = stream.encode(frame)
    the printout says it's the right size <av.VideoFrame #0, pts=None bgr24 1080x2400 at 0x7fc15a027868>
    so I think it's something wrong with the stream.encode or output.mux lines?
    Kevin Lin
    ah i needed to set stream.height and stream.width
    Hi, can someone help me redo a specific ffmpeg command with PyAV?
    @nitanmarcel said people might be able to help here
    @rojserbest let's try
    does pyav have new commits after Jan 2021?
    I am trying to record my microphone with PyAV. The ffmpeg command is ffmpeg -f pulse -i 4 out.wav. What would be the equivalent in PyAV?

    hi, does anyone know how to change the audio stream frame size?

    import av
    container = av.open('8k.wav')
    for frame in container.decode(audio=0):

    I get 2048 samples (256 ms), but I want to get 160 samples (20 ms). How can I change it?

    Hello everyone
    How can I pass the -protocol_whitelist flag to PyAV?
    I want to get the frames of an RTP stream, of which I have the SDP file.
    In ffmpeg, to do this I have to pass -protocol_whitelist with rtp,udp,file, otherwise it fails. How do I do this in PyAV?
    Marcel Alexandru Nitan
    How do you send Http Headers with .open?
    Or it isn't possible?
    @dcordb you can use options argument of av.open
    But do I pass this literally? Like: av.open('foo', options=['-protocol_whitelist rtp,udp,file'])
    In general, can all ffmpeg options be passed to the options keyword argument literally?
    Options expects a dict
    Keys are parameter names (without the -) and values are the parameters' values: { "protocol_whitelist": "rtp,udp,file" }
    You can pass anything in options that you would pass to ffmpeg command line
    Thank you!
    Hello everyone. Am I missing something? PyAV-Org/PyAV#809
    @MarshalX worked fine for me on Ubuntu 20.04.2 with the same PyAV and ffmpeg. Worth running under gdb and seeing if it gives you more info in terms of backtrace of this crash.
    Python 3.9.5 (default, May  4 2021, 03:33:11)
    [Clang 12.0.0 (clang-1200.0.32.29)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import av
    >>> c = av.open(':0', mode='r', format='avfoundation')
    >>> c.non_block
    >>> stream = c.streams[0]
    >>> stream
    <av.AudioStream #0 pcm_f32le at 44100Hz, stereo, flt at 0x10b5195e0>
    >>> for packet in c.demux(stream):
    ...     print(packet)
    <av.Packet of #0, dts=5346293809909, pts=5346293809909; 4096 bytes at 0x10b320220>
    <av.Packet of #0, dts=5346324390590, pts=5346324390590; 4096 bytes at 0x10b320360>
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "av/container/input.pyx", line 142, in demux
      File "av/container/core.pyx", line 258, in av.container.core.Container.err_check
      File "av/error.pyx", line 336, in av.error.err_check
    av.error.BlockingIOError: [Errno 35] Resource temporarily unavailable: ':0'
    Anyone know how to get avfoundation to run in blocking mode, or how to poll for packets on an audiostream? If I catch BlockingIOError it "works", but it will eat all the cpu
    Hello ~ is it possible to encode a frame to a Packet and then decode the frame from it immediately after? I'm trying to change the codec for the original frame from hevc to mpeg4. When I try to decode the just-encoded Packet using decode_context.decode(new_packets[0]), the error says "Operation not permitted". Is it because the Packet was generated by an encoding operation, or is there something wrong with the decode context setup?
    Blake VandeMerwe

    I'm trying to set the metadata fields on an MPEG-TS clip. I can set service_type using container_options, but I can't figure out how to set the metadata, since there's a space before the field name and there are two of them (python dictionary).

    example from ffmpeg docs,

    ffmpeg -i file.mpg -c copy \
         -mpegts_original_network_id 0x1122 \
         -mpegts_transport_stream_id 0x3344 \
         -mpegts_service_id 0x5566 \
         -mpegts_pmt_start_pid 0x1500 \
         -mpegts_start_pid 0x150 \
         -metadata service_provider="Some provider" \   <--- these
         -metadata service_name="Some Channel" \


    out = av.open(
        output_path,  # output_path is a placeholder for the .ts destination
        'w',
        format='mpegts',
        container_options={
            'mpegts_copyts': '1',
            'mpegts_service_type': 'mpeg2_digital_hdtv',
        },
    )
    out.flags |= 'DISCARD_CORRUPT'
    hi, anyone here?
    Pablo Prietz
    Hi. I have an audio filter graph with which I would like to control the audio volume. It contains a "volume" av.FilterContext object. The initial volume is set on init (filter_graph.add("volume", "volume=1.0:precision=float")). I would like to change the volume on demand without reinitializing the whole graph. From what I understand, I would need to send a command to the filter (I checked that av.filter.filter.Filter.command_support is true) but it does not look like there is an API to send commands to the filter object. Am I overlooking something?
    I would like to compile PyAV for Android. Can anyone help? Thanks
    Joachim Laguarda

    Hello All!
    I have quite a far-fetched question :)
    I've tried to compile a large Python project using Nuitka.
    The project uses PyAV for everything video-related.
    There seems to be an incompatibility between Nuitka and PyAV; at runtime I got a

    Traceback (most recent call last):
      File "...\test.py", line 1, in <module>
        import av
      File "...\av\__init__.py", line 15, in <module av>
      File "...\av\audio\__init__.py", line 1, in <module av.audio>
      File "av\audio\frame.pyx", line 1, in init av.audio.frame
    ModuleNotFoundError: No module named 'av.frame'

    Has anyone encountered this issue?
    Could it come from the fact that there is a frame.pyd in 'av\', in 'av\audio\' and in 'av\video\'?
    Any help would be greatly appreciated.

    Hello. Can somebody tell me how I can list audio devices with ffmpeg from Python?
    Hello All! I have a question about how to save audio frames so that I can load them offline during training.
    I've noticed that there is a simple example in the official documentation for extracting visual frames and saving them as images:
    import av
    container = av.open(path_to_video)
    for frame in container.decode(video=0):
        frame.to_image().save('frame-%04d.jpg' % frame.index)