    Peter Hausamann
    @phausamann
    Hi, I have a camera that produces frames in the Bayer RGGB format which I would like to encode with PyAV. With plain ffmpeg this works fine with the -pix_fmt bayer_rggb8 flag, however it seems like this is not supported by PyAV:
    av.VideoFrame.from_ndarray(img, format="bayer_rggb8")
    ValueError: Conversion from numpy array with format `bayer_rggb8` is not yet supported
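Until a bayer format lands in from_ndarray, one workaround is to demosaic in NumPy first and hand PyAV a plain rgb24 array. Below is a deliberately naive half-resolution sketch (demosaic_rggb is a hypothetical helper, not a PyAV API); a real pipeline would use proper interpolation:

```python
import numpy as np

def demosaic_rggb(img):
    """Naive demosaic of an RGGB Bayer frame by 2x2 binning.

    Takes a (H, W) uint8 array and returns a (H/2, W/2, 3) rgb24 array.
    Each 2x2 Bayer cell [R G / G B] collapses to one RGB pixel, so the
    output is half resolution in each dimension.
    """
    r = img[0::2, 0::2]
    # Average the two green samples in each cell (widen to avoid overflow).
    g = (img[0::2, 1::2].astype(np.uint16) + img[1::2, 0::2]) // 2
    b = img[1::2, 1::2]
    return np.stack([r, g.astype(np.uint8), b], axis=-1)

# The result is an rgb24 array that PyAV does understand:
#   frame = av.VideoFrame.from_ndarray(demosaic_rggb(img), format="rgb24")
```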
    Samuel Smith
    @smith153

    I'm recording an online radio stream and dumping it to a file, so the file is raw AAC audio data. From there, I put the raw data into a container with:

    in_container = av.open('raw_stream.aac', mode='r')
    in_stream = in_container.streams.audio[0]
    out_container = av.open('stream.m4a', mode='w')
    out_stream = out_container.add_stream(template=in_stream)

    for packet in in_container.demux(in_stream):
        print(packet)

        # Skip the flush packet.
        if packet.dts is None:
            continue

        # We need to assign the packet to the new stream.
        packet.stream = out_stream

        out_container.mux(packet)

    in_container.close()
    out_container.close()

    But my question is, could I perhaps bypass having to write the file and instead directly insert the raw audio data into the output container incrementally?
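The intermediate file can be skipped: av.open accepts file-like objects, so anything with a working read() can feed the demuxer. ChunkReader below is a hypothetical adapter over an iterator of byte chunks (for example, chunks pulled from the radio stream's HTTP response):

```python
import io

class ChunkReader(io.RawIOBase):
    """Minimal file-like adapter over an iterator of byte chunks,
    e.g. chunks arriving from a live HTTP stream. Hypothetical helper."""

    def __init__(self, chunks):
        self._chunks = iter(chunks)
        self._buf = b""

    def readable(self):
        return True

    def readinto(self, b):
        # Pull chunks until we can satisfy the request (or hit EOF).
        while len(self._buf) < len(b):
            try:
                self._buf += next(self._chunks)
            except StopIteration:
                break
        n = min(len(b), len(self._buf))
        b[:n] = self._buf[:n]
        self._buf = self._buf[n:]
        return n

# av.open accepts file-like objects, so no temp file is needed; the
# format hint replaces what the .aac extension used to tell ffmpeg:
#   import av
#   reader = io.BufferedReader(ChunkReader(http_chunks))
#   in_container = av.open(reader, format="aac")
```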

    Samuel Smith
    @smith153
    Well, I guess I could just pass the HTTP stream straight into av.open... but what fun is that?
    vade
    @vade
    Hi all. I'm using PyAV to re-mux on-air masters from a client (HBO): ProRes 422 HQ, with up to 16 discrete mono 48 kHz LPCM tracks and no viable audio layout / channel info (don't ask, it's just how it is). I am able to parse and re-mux just fine; however, audio channels after the first appear to be disabled, which seems to be default behavior in FFmpeg (see https://trac.ffmpeg.org/ticket/2626 & https://trac.ffmpeg.org/ticket/3622). I usually do AV dev in AVFoundation / CoreMedia / CoreVideo / CoreAudio on macOS, where enabling tracks is trivial, as is making track group associations. My question: is there a way in PyAV to change this behavior so discrete audio tracks are enabled by default? Is there a way to associate audio tracks with an audio group so they belong to the same language?
    Thank you

    You can see the tracks here https://gist.github.com/vade/139c60b7ba57485a2c0c93f17095bc40

    Note that on audio track 1, default is YES; on all the others it's NO. This was produced by PyAV with FFmpeg

    ffmpeg version 4.3-2~18.04.york0 Copyright (c) 2000-2020 the FFmpeg developers

    vade
    @vade
    Ah - I think FFmpeg calls this 'disposition': https://ffmpeg.org/ffmpeg-all.html#Main-options
    Evan Clark
    @djevo1_gitlab
    Does anyone have SRT working with PyAV? I get a 'protocol not supported' error with the latest PyAV.
    Ramzan Bekbulatov
    @uburuntu
    Hello! How can I split audio into chunks of fixed length using PyAV?
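Assuming re-encoding is acceptable, one approach is to decode and start a new output container each time a frame's timestamp crosses a chunk boundary. chunk_index below is a hypothetical helper; the loop in the comments sketches the rest (codec and file naming are placeholders):

```python
from fractions import Fraction

def chunk_index(pts, time_base, chunk_seconds):
    """Which fixed-length chunk a timestamp falls into.

    pts * time_base is the frame's time in seconds; dividing by the
    chunk length and truncating gives the chunk number.
    """
    return int(pts * time_base / chunk_seconds)

# Sketch of the splitting loop:
#   import av
#   in_container = av.open("input.mp3")
#   in_stream = in_container.streams.audio[0]
#   current, out = -1, None
#   for frame in in_container.decode(in_stream):
#       idx = chunk_index(frame.pts, in_stream.time_base, 10)  # 10 s chunks
#       if idx != current:
#           if out is not None:
#               out.mux(out_stream.encode())  # flush the previous chunk
#               out.close()
#           out = av.open(f"chunk_{idx:03d}.mp3", mode="w")
#           out_stream = out.add_stream(
#               "mp3", rate=in_stream.codec_context.sample_rate)
#           current = idx
#       frame.pts = None  # let the encoder re-time each chunk from zero
#       out.mux(out_stream.encode(frame))
```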
    umair-riaz-official
    @umair-riaz-official
    Hi.
    Does anyone have an ffmpeg code example of char-by-char animation?
    Thank you in advance for your help...
    matheus2740
    @matheus2740
    Hey guys! I need to build wheels for the arm platform (aarch32 and aarch64) on some ubuntu system (to avoid having to compile locally on devices, which proved to be very problematic). How would I go about that?
    ngoguey
    @Ngoguey42
    Hey. I manipulate very large video files that each take ~3 sec to open. If I try to open them in parallel using a ThreadPoolExecutor, the opening time jumps to ~20 sec each. Is there a solution?
    Alex Hutnik
    @alexhutnik
    I’m reading from an RTSP source, manipulating each frame, and then muxing it to disk. The frame manipulation takes roughly 0.75 seconds per frame. How does PyAV / libav know to buffer incoming packets while my frame manipulation code blocks until my next call to demux?
    Haujet Zhao
    @HaujetZhao

    Help, please!

    Overview

    In the example below, I extract the even frames (dropping the odd frames) from the input to the output.

    The input frame rate is 30fps, and its duration is 60s.

    So the output frame rate is expected to be 30fps, and the duration should be 30s.

    But the result from PyAV is not as expected: the fps remains 30fps, the output duration became 57.37s, and only the tbr changed, from 30 to 15.

    Here is the code:

    import av
    
    inputFile = 'test.mkv'
    outputFile = 'test_out.mkv'
    
    
    container_in = av.open(inputFile) # the input framerate is 30fps, duration is 60s.
    container_in.streams.video[0].thread_type = 'AUTO'
    
    container_out = av.open(outputFile, mode='w')
    stream = container_out.add_stream('libx264', rate=30, options={"crf":"23"})
    # stream = container_out.add_stream('libx264', rate=30, options={"crf":"23", 'r':'30'})
    
    stream.width = container_in.streams.video[0].codec_context.width
    stream.height = container_in.streams.video[0].codec_context.height
    stream.pix_fmt = container_in.streams.video[0].codec_context.pix_fmt
    
    for count, frame in enumerate(container_in.decode(video=0)):
        if count % 2 == 0:
            # Only the even-numbered frames will be written to the output,
            # so the expected result is like a 2x speed-up of the input:
            # the frame rate should stay 30fps, and the duration should be 30s.
            container_out.mux(stream.encode(frame))
            print(count)
    
    # Flush the encoder so buffered frames are written out.
    container_out.mux(stream.encode())

    container_in.close()
    container_out.close()
    
    
    # The expected output video should have a framerate of 30fps, but instead of
    # generating new timestamps, the output frames just copied the input timestamps.
    # Even if I use the {'vsync': 'drop'} stream option, the output still won't change.
    # This behavior is so different from using the CLI.

    Expected behavior

    The input video stream is:

    Stream #0:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 2160x1440, 30 fps, 30 tbr, 1k tbn, 60 tbc (default)
        Metadata:
          DURATION        : 00:01:00.066000000

    The output video stream is expected to be:

    Stream #0:0: Video: h264 (High), yuv420p(progressive), 2160x1440, 30 fps, 30 tbr, 1k tbn, 60 tbc (default)
        Metadata:
          DURATION        : 00:00:30.033000000

    Actual behavior

    But the actual output video stream is:

    Stream #0:0: Video: h264 (High), yuv420p(progressive), 2160x1440, 30.30 fps, 15 tbr, 1k tbn, 60 tbc (default)
        Metadata:
          DURATION        : 00:00:57.366000000
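The encoder keeps whatever pts the decoded frames carry, so dropping every other frame leaves 2x-spaced timestamps; that is where the 15 tbr and ~57 s duration come from. One fix is to renumber the kept frames before encoding. renumber_pts below is a hypothetical helper; the loop sketch assumes the stream's time_base is known after the encoder opens (verify the value on your build):

```python
from fractions import Fraction

def renumber_pts(index, rate, time_base):
    """New pts for the index-th kept frame: index / rate seconds,
    expressed in units of the stream's time_base."""
    return int(Fraction(index, rate) / time_base)

# In the loop from the question, instead of keeping the decoded pts:
#   kept = 0
#   for count, frame in enumerate(container_in.decode(video=0)):
#       if count % 2 == 0:
#           frame.pts = renumber_pts(kept, 30, stream.codec_context.time_base)
#           kept += 1
#           container_out.mux(stream.encode(frame))
#   container_out.mux(stream.encode())  # flush the encoder
```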
    shycats
    @shycats

    Hello all,
    I need to bundle both PyAV and libav with my project, but I need to compile them myself since I need ffmpeg in LGPL license "mode".
    The issue is that the current configuration for building both libav and PyAV on macOS causes the absolute path to the linked libraries to be used:

    otool -L libavcodec.58.54.100.dylib
    libavcodec.58.54.100.dylib:
        /Users/pablo/Repos/vendor/build/ffmpeg-4.2/lib/libavcodec.58.dylib (compatibility version 58.0.0, current version 58.54.100)
        /Users/pablo/Repos/vendor/build/ffmpeg-4.2/lib/libswresample.3.dylib (compatibility version 3.0.0, current version 3.5.100)
        /Users/pablo/Repos/vendor/build/ffmpeg-4.2/lib/libavutil.56.dylib (compatibility version 56.0.0, current version 56.31.100)
    (...)
    otool -L buffer.cpython-37m-darwin.so                                  
    buffer.cpython-37m-darwin.so:
        /Users/pablo/Repos/vendor/build/ffmpeg-4.2/lib/libavutil.56.dylib (compatibility version 56.0.0, current version 56.31.100)
        /Users/pablo/Repos/vendor/build/ffmpeg-4.2/lib/libswscale.5.dylib (compatibility version 5.0.0, current version 5.5.100)
        /Users/pablo/Repos/vendor/build/ffmpeg-4.2/lib/libavdevice.58.dylib (compatibility version 58.0.0, current version 58.8.100)
    (...)

    Instead, I would like to remove the explicit setting of the path for the shared libraries and change it to something of the form:

    @loader_path/<library-name>

    as it's already on the provided binary wheels:

    otool -L _core.cpython-37m-darwin.so 
    _core.cpython-37m-darwin.so:
        @loader_path/.dylibs/libavdevice.58.8.100.dylib (compatibility version 58.0.0, current version 58.8.100)
        @loader_path/.dylibs/libavformat.58.29.100.dylib (compatibility version 58.0.0, current version 58.29.100)
        @loader_path/.dylibs/libswresample.3.5.100.dylib (compatibility version 3.0.0, current version 3.5.100)
    (...)

    Any help on modifying the build configuration would be much appreciated.

    Thanks!

    vade
    @vade
    @shycats I think you want to look at install_name_tool (it appears you are on macOS?)
    Hi all - is it possible to query whether the loaded container has interlaced video frames without doing a demux and a decode and introspecting an AVFrame? Ideally I'd like to query the container / video stream, but perhaps there isn't a clean way otherwise. Thank you.
    golyalpha
    @golyalpha
    Hey, I'm having some issues when trying to decode some of my own video files (HLS downloads and remuxes, OBS recordings, ffmpeg transcodes) using PyAV. A lot of the time it starts spewing out errors (where ffmpeg and ffplay show none), and sometimes it crashes the entire application without a traceback, while other times the application just gets stuck.
    Here's my ffplay output from one of the files:
    > ffplay -loglevel level+verbose -i '.\2020-10-06 18-09-10.mp4'
    [info] ffplay version 4.2.2 Copyright (c) 2003-2019 the FFmpeg developers
    [info]   built with gcc 9.2.1 (GCC) 20200122
    [info]   configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
    [info]   libavutil      56. 31.100 / 56. 31.100
    [info]   libavcodec     58. 54.100 / 58. 54.100
    [info]   libavformat    58. 29.100 / 58. 29.100
    [info]   libavdevice    58.  8.100 / 58.  8.100
    [info]   libavfilter     7. 57.100 /  7. 57.100
    [info]   libswscale      5.  5.100 /  5.  5.100
    [info]   libswresample   3.  5.100 /  3.  5.100
    [info]   libpostproc    55.  5.100 / 55.  5.100
    [verbose] Initialized direct3d renderer.
    [h264 @ 00000267b678f880] [verbose] Reinit context to 1600x912, pix_fmt: yuv420p
    [info] Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '.\2020-10-06 18-09-10.mp4':
    [info]   Metadata:
    [info]     major_brand     : isom
    [info]     minor_version   : 512
    [info]     compatible_brands: isomiso2avc1mp41
    [info]     encoder         : Lavf58.29.100
    [info]   Duration: 00:00:35.55, start: 0.000000, bitrate: 1487 kb/s
    [info]     Stream #0:0(und): Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt470bg/unknown/unknown, left), 1600x900 (1600x912) [SAR 1:1 DAR 16:9], 1309 kb/s, 60 fps, 60 tbr, 90k tbn, 120 tbc (default)
    [info]     Metadata:
    [info]       handler_name    : VideoHandler
    [info]     Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 161 kb/s (default)
    [info]     Metadata:
    [info]       handler_name    : SoundHandler
    [ffplay_abuffer @ 00000267b6785740] [verbose] tb:1/44100 samplefmt:fltp samplerate:44100 chlayout:0x3
    [ffplay_abuffersink @ 00000267b6785180] [verbose] auto-inserting filter 'auto_resampler_0' between the filter 'ffplay_abuffer' and the filter 'ffplay_abuffersink'
    [auto_resampler_0 @ 00000267b6785340] [verbose] ch:2 chl:stereo fmt:fltp r:44100Hz -> ch:2 chl:stereo fmt:s16 r:44100Hz
    [h264 @ 00000267b67a8dc0] [verbose] Reinit context to 1600x912, pix_fmt: yuv420p
    [ffplay_abuffer @ 00000267b683dc00] [verbose] tb:1/44100 samplefmt:fltp samplerate:44100 chlayout:0x3
    [ffplay_abuffersink @ 00000267bb9649c0] [verbose] auto-inserting filter 'auto_resampler_0' between the filter 'ffplay_abuffer' and the filter 'ffplay_abuffersink'
    [auto_resampler_0 @ 00000267bb964c80] [verbose] ch:2 chl:stereo fmt:fltp r:44100Hz -> ch:2 chl:stereo fmt:s16 r:44100Hz
    [ffplay_buffer @ 00000267b681cf00] [verbose] w:1600 h:900 pixfmt:yuv420p tb:1/90000 fr:60/1 sar:1/1 sws_param:
    [verbose] Created 1600x900 texture with SDL_PIXELFORMAT_IYUV.
    [AVIOContext @ 00000267b677aac0] [verbose] Statistics: 6659901 bytes read, 2 seeks
    The behaviour is generally unpredictable.
    With the file getting decoded successfully, but not without errors, and with subpar performance:
    > poetry run benchmark 'E:\Recordings\2020-10-06 18-09-10.mp4'
    2020-10-07 09:03:43,013 - switcher.decoder [INFO]       Initialize Decoder
    2020-10-07 09:03:43,051 - switcher.decoder [INFO]       Start Decoder threads
    2020-10-07 09:03:43,052 - switcher.decoder [INFO]       Starting video decoder
    2020-10-07 09:03:43,053 - switcher.decoder [INFO]       Starting audio decoder
    2020-10-07 09:03:43,075 [ERROR] co located POCs unavailable
    2020-10-07 09:03:43,131 [ERROR] co located POCs unavailable
     (repeated 9 more times)
    2020-10-07 09:03:43,131 [WARNING]       DTS 2284470 < 2286000 out of order
    2020-10-07 09:03:43,132 [ERROR] co located POCs unavailable
    2020-10-07 09:03:43,176 - switcher.decoder [INFO]       Audio decoding finished
    2020-10-07 09:03:43,183 - switcher.decoder [INFO]       Video decoding finished
    2020-10-07 09:03:43,184 - switcher.decoder [INFO]       All video frames consumed
    2020-10-07 09:03:43,200 - switcher.benchmark [INFO]     Decode time: 0.1460065
    2020-10-07 09:03:43,200 - switcher.benchmark [INFO]     Framerate:   27.29551903111826
    Other times resulting in a proper crash with a traceback in the decoder threads:
    > poetry run benchmark "E:\Recordings\2020-10-06 18-09-10.mp4"
    2020-10-07 09:03:24,986 - switcher.decoder [INFO]       Initialize Decoder
    2020-10-07 09:03:25,021 - switcher.decoder [INFO]       Start Decoder threads
    2020-10-07 09:03:25,022 - switcher.decoder [INFO]       Starting video decoder
    2020-10-07 09:03:25,024 - switcher.decoder [INFO]       Starting audio decoder
    2020-10-07 09:03:25,042 [ERROR] co located POCs unavailable
    2020-10-07 09:03:25,115 [ERROR] co located POCs unavailable
     (repeated 13 more times)
    2020-10-07 09:03:25,116 [WARNING]       Multiple frames in a packet.
    2020-10-07 09:03:25,116 [ERROR] Reserved bit set.
    2020-10-07 09:03:25,117 [ERROR] Number of bands (48) exceeds limit (40).
    Exception in thread Thread-2:
    Traceback (most recent call last):
      File "C:\Python38\lib\threading.py", line 932, in _bootstrap_inner
        self.run()
      File "C:\Python38\lib\threading.py", line 870, in run
    2020-10-07 09:03:25,120 [ERROR] co located POCs unavailable
        self._target(*self._args, **self._kwargs)
    2020-10-07 09:03:25,120 [ERROR] Invalid NAL unit size (555421141 > 1380).
      File "C:\Users\Dell\git\StreamPipe\switcher\switcher\decoder.py", line 58, in _audio_decoder
    2020-10-07 09:03:25,121 [ERROR] Error splitting the input into NAL units.
        for frame in self.container.decode(self.audio_stream):
      File "av\container\input.pyx", line 183, in decode
    2020-10-07 09:03:25,122 [ERROR] co located POCs unavailable
      File "av\packet.pyx", line 103, in av.packet.Packet.decode
      File "av\stream.pyx", line 168, in av.stream.Stream.decode
      File "av\codec\context.pyx", line 506, in av.codec.context.CodecContext.decode
    Exception in thread Thread-1:
    Traceback (most recent call last):
      File "C:\Python38\lib\threading.py", line 932, in _bootstrap_inner
      File "av\codec\context.pyx", line 409, in av.codec.context.CodecContext._send_packet_and_recv
        self.run()
      File "C:\Python38\lib\threading.py", line 870, in run
      File "av\error.pyx", line 336, in av.error.err_check
        self._target(*self._args, **self._kwargs)
      File "C:\Users\Dell\git\StreamPipe\switcher\switcher\decoder.py", line 47, in _video_decoder
        for frame in self.container.decode(self.video_stream):
      File "av\container\input.pyx", line 183, in decode
    av.error.InvalidDataError: [Errno 1094995529] Invalid data found when processing input; last error log: [aac] Number of bands (48) exceeds limit (40).
      File "av\packet.pyx", line 103, in av.packet.Packet.decode
      File "av\stream.pyx", line 168, in av.stream.Stream.decode
      File "av\codec\context.pyx", line 506, in av.codec.context.CodecContext.decode
      File "av\codec\context.pyx", line 409, in av.codec.context.CodecContext._send_packet_and_recv
      File "av\error.pyx", line 336, in av.error.err_check
    av.error.InvalidDataError: [Errno 1094995529] Invalid data found when processing input; last error log: [h264] co located POCs unavailable
    golyalpha
    @golyalpha
    The only cause I can think of right now is that I am trying to decode both the audio and video streams at the same time.
    golyalpha
    @golyalpha
    Alright, I narrowed it down; the issue is exactly that.
    Nikita Melentev
    @pohmelie

    Hi there! I'm trying to shut up any logging from av and ffmpeg. I tried this:

    logging.basicConfig()
    av.logging.set_level(av.logging.PANIC)
    logging.getLogger('libav').setLevel(logging.ERROR)

    By the way, the docs mention av.logging.QUIET, but I don't see that constant in the installed library. I'm encoding video and get tons of

    [jpeg2000 @ 0x7f10fc000940] End mismatch 3
    [jpeg2000 @ 0x7f10fc000940] End mismatch 4
    [jpeg2000 @ 0x7f10fc000940] End mismatch 4
    [jpeg2000 @ 0x7f10fc000940] End mismatch 3
    [jpeg2000 @ 0x7f10fc000940] End mismatch 4
    [jpeg2000 @ 0x7f10fc000940] End mismatch 3

    on each frame. Is there an easy way to prevent these logs?
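Python's logging module never sees these messages, because libav writes through its own callback; the knob is in av.logging. A minimal sketch (behavior may vary across PyAV versions; the guarded import is only so the snippet stands alone):

```python
def silence_av_logging():
    """Disable PyAV's libav log callback (a sketch; exact behavior varies
    across PyAV versions). Returns False when PyAV is not importable."""
    try:
        import av.logging
    except ImportError:
        return False
    # Passing None removes the log level entirely, which stops per-frame
    # decoder chatter like the jpeg2000 "End mismatch" lines.
    av.logging.set_level(None)
    return True
```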

    Maria Khrustaleva
    @Marishka17

    Hello all,
    I want to generate a video with a frame rotation record in the metadata of the video stream.
    Is there any way to do this?
    Something like:

    stream = output_container.add_stream("mpeg4", 24)
    stream.metadata['rotate'] = angle

    Similarly to this:

    ffmpeg -i video.mp4 -c copy -metadata:s:v:0 rotate=90 rotated_video.mp4
    shycats
    @shycats
    @vade thanks for your answer, that worked!
    shycats
    @shycats
    I have another question. I'm trying to take advantage of the buffer protocol and use the av.buffer.Buffer.update() function to create a VideoFrame from a buffer object generated by another lib (neither numpy nor PIL). The problem is that this function only copies the buffer as-is; it doesn't take into account the padding that must be added for libav, and it complains about the buffer not being the same size. Is this the expected behaviour? I saw that VideoFrame.from_image() and VideoFrame.from_ndarray() do take the padding into account, but I would prefer not to package PIL or numpy with my lib. Is there any plan to support creating a VideoFrame from any Python object that follows the buffer protocol, adding the needed padding? Thanks!
    shycats
    @shycats
    Ok, so I tried to make a workaround, but found a really weird behaviour when trying to implement it. Here is the problem: https://stackoverflow.com/q/64467742/8861787 . Any ideas?
    daznz
    @daznz
    Anyone have any idea why av.open("default", format="alsa") results in "ValueError: no container format 'alsa'" on Ubuntu 18.04? I have the same code running on a Pi and a Jetson, and it does not produce the same problem. Running the command "ffmpeg -f alsa -i default -t 10 out.wav" works fine... am I missing some sort of dev package?
    daznz
    @daznz
    nm, building pyav from scratch using the instructions in the git repo fixed the issue... now I just need to figure out how to export the newly built package from the venv....
    PankajChohan9820
    @PankajChohan9820
    can the framerate touch 100fps?
    Jake
    @UnknownError_gitlab
    Does PyAV support outputting to an RTSP stream?
    su
    @SuX97
    Hi, does anyone know how to decode motion vector from side data with PyAV?
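One approach, assuming your PyAV version exposes codec-context options and frame side data this way (worth verifying against your install): ask the decoder for +export_mvs and read the MOTION_VECTORS side data off each frame. A sketch (iter_motion_vectors is a hypothetical helper):

```python
def iter_motion_vectors(path):
    """Decode a video and yield per-frame motion-vector arrays (a sketch;
    requires a codec that can export MVs, e.g. h264/mpeg4)."""
    import av  # local import so the sketch stands alone
    container = av.open(path)
    stream = container.streams.video[0]
    # Ask the decoder to attach motion vectors as frame side data.
    stream.codec_context.options = {"flags2": "+export_mvs"}
    for frame in container.decode(stream):
        mvs = frame.side_data.get("MOTION_VECTORS")
        if mvs is not None:
            # Structured array with source, w, h, src_x/y, dst_x/y, etc.
            yield frame.pts, mvs.to_ndarray()
```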
    Jake
    @UnknownError_gitlab
    I am currently getting the error: "Frame does not match AudioResampler setup." Isn't it the job of the resampler to convert from an arbitrary audio setup to a determined audio format?
    Or is there something else I'm missing to prepare the frame for the resampler?
    Jake
    @UnknownError_gitlab
    Just trying to convert frames that are in fltp stereo 48000 to s16 stereo 48000. Nearly the same format, just going from fltp to s16.
    Jake
    @UnknownError_gitlab
    Interesting, the format apparently changes from 44100 to 48000 near the beginning of the stream? Wild.
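For what it's worth, older AudioResampler objects lock onto the first frame's format/layout/rate, so a mid-stream change (like that 44100 to 48000 jump) triggers "Frame does not match AudioResampler setup". One workaround is to rebuild the resampler whenever the input signature changes. A sketch (resample_all is a hypothetical helper; note that resample() returns a list of frames on newer PyAV versions):

```python
def resample_all(frames, out_format="s16", out_layout="stereo", out_rate=48000):
    """Resample audio frames to one fixed setup, rebuilding the resampler
    whenever the input format/layout/rate changes mid-stream (a sketch)."""
    resampler = None
    signature = None
    for frame in frames:
        sig = (frame.format.name, frame.layout.name, frame.sample_rate)
        if sig != signature:
            import av  # local import so the sketch stands alone
            # Older PyAV resamplers are locked to the first frame's setup,
            # so start a fresh one when the stream changes underneath us.
            resampler = av.AudioResampler(
                format=out_format, layout=out_layout, rate=out_rate)
            signature = sig
        # Returns a single frame on older PyAV, a list on newer versions.
        yield resampler.resample(frame)
```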
    Maxim Kochurov
    @ferrine
    Hi there, I have trouble with pip install.
    I've updated ffmpeg on my system and it still fails to install pyav.
    Any suggestions?
    Jake
    @UnknownError_gitlab
    @ferrine I do not think that an out-of-date / mismatched ffmpeg would stop pyav from installing unless you are trying to build pyav from source. There is not much to go on from your statement other than: it doesn't install. If you have the logs from the install, it might be helpful to post them in a pastebin alongside which OS you are trying to install on. Some easy things to check: make sure pip and setuptools are up-to-date.
    werner58
    @werner58__twitter
    Hello everybody
    I'm trying to use PyAV to make timelapse videos on a Raspberry Pi, mostly as a first experiment to get used to it. I have managed to acquire frames from the Pi Camera and encode them into an h264 stream (I followed the generate_video.py example). The problem is, the encoding quality is very low, and the bitrate is low (about 1000 kbps). I can't find any quality/bitrate settings in the PyAV documentation...
    Kai Hu
    @hukkai
    Hi there, I want to use the seek method, but I find that it does not work most of the time: the video is still decoded from the first frame. Only very big offsets work. Is this normal?
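Seeking in most containers can only land on a keyframe, which is why small offsets appear to do nothing: the nearest keyframe at or before the target may well be frame 0. The usual pattern is to seek backward to a keyframe and then decode forward to the exact time. A sketch (seek_to is a hypothetical helper):

```python
def seek_to(container, seconds):
    """Seek to the nearest keyframe at or before `seconds`, then decode
    forward to the first frame at or past the target (a sketch; assumes
    a single video stream)."""
    stream = container.streams.video[0]
    # With no stream argument, the offset is in AV_TIME_BASE units
    # (microseconds); by default the seek lands on a prior keyframe.
    container.seek(int(seconds * 1_000_000))
    for frame in container.decode(stream):
        if frame.time >= seconds:
            return frame
    return None
```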
    Witold Baryluk
    @baryluk
    Hi. Is there a way to set the output stream's desired bitrate? I.e. h264, 4000x2000, 30Mbps, or something like that? I would also like to set the encoder effort / quality for the output.
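Bitrate and encoder effort can both be set on the output stream before encoding starts: stream.bit_rate takes a target in bits per second, and encoder-private knobs like x264's preset/crf go through the options dict of add_stream. A sketch (open_hq_output and all its values are placeholders):

```python
def open_hq_output(path, width=4000, height=2000, fps=30):
    """Open an H.264 output with an explicit bitrate and encoder-effort
    settings (a sketch; path, size and fps are placeholders)."""
    import av  # local import so the sketch stands alone
    out = av.open(path, mode="w")
    # Encoder-private options go through add_stream's `options` dict:
    # 'preset' trades effort for speed, 'crf' sets constant quality.
    stream = out.add_stream("libx264", rate=fps,
                            options={"preset": "slow", "crf": "18"})
    stream.width, stream.height = width, height
    stream.pix_fmt = "yuv420p"
    stream.bit_rate = 30_000_000  # ~30 Mbps target (x264 ignores it in CRF mode)
    return out, stream
```

Note that for x264, crf and bit_rate are alternative rate-control modes: pick one or the other.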
    xxia-kathy
    @xxia-kathy
    Hi, is it possible to use PyAV to load a .yuv file? (with pixel format yuv422p16le)
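A headerless .yuv file can be opened through ffmpeg's rawvideo demuxer, passing the geometry that the file itself cannot carry. A sketch (open_yuv is a hypothetical helper; width, height and fps must come from you):

```python
def open_yuv(path, width, height, fps=25):
    """Open a headerless .yuv file via ffmpeg's rawvideo demuxer (a sketch;
    the geometry must be supplied since the file has no header)."""
    import av  # local import so the sketch stands alone
    return av.open(path, format="rawvideo", options={
        "video_size": f"{width}x{height}",   # rawvideo demuxer option
        "pixel_format": "yuv422p16le",
        "framerate": str(fps),
    })
```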
    Jim Condon
    @jimcondon
    I'm trying to get the raw encoded data out of the packet in MediaRecorder in aiortc. I'm saving it as a raw aac. The file from the container is correct. I'm calling stream.encode, and accessing the data afterwards. However when I use "packet.to_bytes()" to get the AAC data in MediaRecorder, the data I print out does not match the data in file. Am I using packet wrong? How do I access the raw encoded data from a packet?
    Craig Niles
    @cniles
    I'm trying to use PyAV with ffmpeg built for hardware accel (installed with pip install av --no-binary av). I've tested ffmpeg (4.2.2) and it works fine. I can run ffmpeg from the CLI to encode and decode using hardware acceleration. However, when trying to create a decoder with PyAV it raises an 'UnknownCodec' exception. Oddly, it raises the exception for all codecs, not just those using hw accel. When I inspect the contents of av.codecs_available, I can see the cuvid codecs and other codecs. Not sure what's going on. I tested a very similar setup with PyAV 6.x and ffmpeg 3.4.8 built for hwaccel and didn't have any issues... Anyone have any thoughts as to what could be the problem?
    Craig Niles
    @cniles
    Bleh, figured it out: multiple libav shared objects were on the linker path and it was loading the incorrect libs. Can work around it by setting LD_LIBRARY_PATH until the dependencies are sorted out properly in the custom deb package.
    Will Price
    @willprice

    Hi there, thanks for PyAV, it's a great tool! I'm wondering whether there is a way to iterate through the codecs available to PyAV? I seem to be running into a similar issue to PyAV-Org/PyAV#57 where I can access a codec (h264) via the ffmpeg cli, but when I try and add a stream to a container I get an UnknownCodecError.

    It looks like the Codec class (https://pyav.org/docs/develop/api/codec.html) might be of help, but it seems to assume that you already know the codec name. I suppose I could iterate over the output of ffmpeg -codecs and see which ones PyAV recognises. Is there a better way?
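av.codecs_available is a plain set of codec names, and av.Codec(name, mode) raises when the named encoder or decoder is missing, so probing that set directly avoids shelling out to ffmpeg -codecs. A sketch (available_encoders is a hypothetical helper):

```python
def available_encoders():
    """List codec names PyAV can actually open for encoding, by probing
    each entry of av.codecs_available (a sketch)."""
    import av  # local import so the sketch stands alone
    names = []
    for name in sorted(av.codecs_available):
        try:
            av.Codec(name, "w")  # raises if there is no encoder by this name
        except Exception:
            continue
        names.append(name)
    return names
```

Swapping "w" for "r" gives the decoders instead.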