    Sam Mesterton-Gibbons
    @samdbmg
    @jbitton Looking at what the resampler actually does, it seems to copy your first AudioFrame and read the input sample rate from it. Since AudioFrame.from_ndarray doesn't set the sample rate, try setting it directly after you create the frame, which is what the tests seem to do
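    For reference, a minimal sketch of that suggestion (the sample format, layout, and rates here are assumptions, not taken from the thread):

    import numpy as np
    import av
    from av.audio.resampler import AudioResampler

    # from_ndarray does not set the sample rate, so set it explicitly afterwards.
    samples = np.zeros((1, 1024), dtype=np.int16)        # assumed packed s16 mono data
    frame = av.AudioFrame.from_ndarray(samples, format='s16', layout='mono')
    frame.sample_rate = 44100

    # The resampler reads the input rate from the first frame it receives.
    resampler = AudioResampler(format='s16', layout='mono', rate=48000)
    resampled = resampler.resample(frame)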
    AliBahri
    @AliBahri94
    Hi there, I have a question about the H264 codec. I would like to extract the packets that are I-frames, decode them to frames, change them, encode them back into packets, and write them to the container together with the packets that aren't I-frames. In other words, I have some packets that I encoded from frames and some packets obtained from container.demux(), and I would like to write them together into one container. I have Python code that does this, but in the end I get only I-frames in the video. This problem is part of my thesis. Can anybody help me? Thanks
    Joanna Bitton
    @jbitton
    @samdbmg you are so right, thank you so much!! it worked :)
    Razvan
    @raznav_twitter
    Hello, I am trying to run PyAV + x11grab to record the screen on Ubuntu. I have installed PyAV from conda-forge. Here is the code:
    container = av.open(':0.0', format='x11grab', options={'s':'800x600', 'i':'0:0'})
    and the error
    Traceback (most recent call last):
    File "t07record.py", line 3, in <module>
    container = av.open(':0.0', format='x11grab', options={'s':'800x600', 'i':'0:0'})
    File "av/container/core.pyx", line 354, in av.container.core.open
    File "av/container/core.pyx", line 131, in av.container.core.Container.cinit
    File "av/format.pyx", line 90, in av.format.ContainerFormat.cinit
    ValueError: no container format 'x11grab'
    Do I have to recompile from source and add x11grab as a format, or am I missing something?
    Thank you very much for any ideas!
    Razvan
    @raznav_twitter
    I think the ffmpeg build from conda-forge does not have x11grab (checked with whereis ffmpeg and which ffmpeg)
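    For reference, a quick way to confirm this from Python (assuming the av.formats_available and av.library_versions attributes of recent PyAV releases):

    import av

    # Container formats compiled into the FFmpeg libraries PyAV links against.
    print('x11grab' in av.formats_available)

    # The library versions can also hint at which FFmpeg build is being used.
    print(av.library_versions)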
    LI Zhennan
    @nanmu42
    Hi, I am trying to play an Opus-encoded audio stream from TCP, and I wonder if there's a way to do it solely with PyAV? I have read this issue. I know PyAV has libavdevice support, but I failed to find an example.
    LI Zhennan
    @nanmu42
    Also, is there a way to make playback cross-platform? Maybe using alsa on Linux and avfoundation on Mac? What should I use on Windows? Or is there a better way?
    Thanks in advance, any help would be appreciated.
    Hermes Zhang
    @ChenhuiZhang
    Hi, I'm new to PyAV. I'm trying to plot some basic info about video frames, e.g. frame size, but I can't find such a property on VideoFrame. I only found packet.size, but the value doesn't seem to be what I want: the I-frames and P-frames come out similar in size, while an I-frame should be much bigger than a P-frame. Does anyone know how to get the frame size with PyAV? Thanks.
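    For reference, a minimal sketch of inspecting the compressed size per packet, in case that is the size wanted ('input.mp4' is a placeholder path):

    import av

    with av.open('input.mp4') as container:
        stream = container.streams.video[0]
        for packet in container.demux(stream):
            if packet.size == 0:      # the final flush packet carries no data
                continue
            print(packet.pts, packet.size, packet.is_keyframe)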
    tcdalton
    @tcdalton
    Any ideas on how I could sync two HLS playlists? I'm generating my own HLS playlist from an MPEGTS feed (doing object detection on it and reconstructing it). The provider of the MPEGTS feed has an HLS version as well. I want to make sure that my feed matches the provider's. Thoughts?
    tcdalton
    @tcdalton
    can you get the segment name from PyAV while parsing an HLS feed?
    Joe Martinez
    @JoePercipientAI

    I am having trouble installing PyAV.

    $ pip install av
    Collecting av
    Using cached https://files.pythonhosted.org/packages/4d/fe/170a64b51c8f10df19fd2096388e51f377dfec07f0bd2f8d25ebdce6aa50/av-8.0.1.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/tmp/pip-build-4yV5M1/av/setup.py", line 9, in <module>
    from shlex import quote
    ImportError: cannot import name quote

    ----------------------------------------

    Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-4yV5M1/av/

    Any ideas?
    Razvan Grigore
    @razvanphp
    Hey, is there a way to use a "copy"/dummy CodecContext? I'm opening a container with av.open(file="/dev/video0", format="v4l2", mode="r", options={"video_size": "1920x1080", "framerate": "30", "input_format": "h264"}), since the RPi camera and the Logitech C920 can give a HW-accelerated H264 stream, but then I have trouble working with the Packet objects to bitstream them over WebRTC. Any clues?
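    For reference, an untested sketch of one way to look at those packets: demux the hardware-encoded H264 and remux it unchanged into a file ('check.mp4' is a placeholder path), which at least confirms the Packet objects are usable before bitstreaming them elsewhere:

    import av

    inp = av.open(file='/dev/video0', format='v4l2', mode='r',
                  options={'video_size': '1920x1080', 'framerate': '30', 'input_format': 'h264'})
    in_stream = inp.streams.video[0]

    out = av.open('check.mp4', mode='w')
    out_stream = out.add_stream(template=in_stream)

    for i, packet in enumerate(inp.demux(in_stream)):
        if packet.dts is None:        # skip the flush packet
            continue
        packet.stream = out_stream
        out.mux(packet)
        if i > 300:                   # the capture never ends, so stop early
            break

    out.close()
    inp.close()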
    beville
    @beville
    I was thinking about a project to manipulate the chapter/menu data in an audiobook in m4b format. I'm trying to work out if pyav is going to be able to help me with that. As a test, I have a file with 56 chapters in the metadata (which shows up via ffmpeg -i book.m4b -f ffmetadata book.metadata.txt). With pyav, I can see a data stream that has 56 'frames', but I'm uncertain how (or whether) I can process that chapter info. Does anyone know if this is possible as things stand now?
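    For reference, a minimal sketch of what PyAV exposes directly for such a file ('book.m4b' is a placeholder path); the chapter table itself lives in a separate FFmpeg structure and may not appear here:

    import av

    with av.open('book.m4b') as container:
        print(container.metadata)                 # container-level tags
        for stream in container.streams:
            print(stream.index, stream.type, stream.metadata)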
    Ramon Caldeira
    @ramoncaldeira
    Hello! I need to read NTP information from the RTCP packets in my RTSP stream. I managed to do it with libav in C, using private structures such as RTSPState, RTSPStream and RTPDemuxContext, instantiated from the priv_data field of AVFormatContext.
    Now I would like to integrate it into my Python code, using PyAV. Is patching FFmpeg and maintaining a private PyAV fork the best way forward?
    Alex Hutnik
    @alexhutnik
    Any ideas on reading an h264 mp4 file while it's being written? I'm recording from an RTSP stream to a file and want to read it "up to the most recent write", so to speak. Preferably through httpd, which doesn't work, and I suspect I have to process the packets through pyav to apply some metadata. Hence why I'm here :)
    Ramon Caldeira
    @ramoncaldeira
    @alexhutnik you can read the RTSP stream with PyAV and from there write it into the file and process the packets
    I'm trying to generate my own PyAV wheel but it looks like it's not including FFmpeg... I'm trying with $PYAV_PYTHON setup.py bdist_wheel...
    Could anybody help me fix it?
    Victor Sklyar
    @vinnitu

    @alexhutnik you can read the RTSP stream with PyAV and from there write it into the file and process the packets

    it seems I have the same trouble: I need support for the https protocol but have to rebuild ffmpeg

    Victor Sklyar
    @vinnitu
    Now I'm trying to build 8.0.1, the same version as in the pip repository, but I cannot get the ffmpeg .so files into the wheel.
    Can someone point me to a sample or something to help solve this?
    Alex Hutnik
    @alexhutnik
    @ramoncaldeira I've tried doing that:
    import av

    if __name__ == "__main__":
        fh = open('video_bytes.bin', 'rb')

        codec = av.CodecContext.create('h264', 'r')

        while True:
            chunk = fh.read(1 << 16)

            packets = codec.parse(str(chunk))
            print("Parsed {} packets from {} bytes:".format(len(packets), len(chunk)))

            for packet in packets:
                print('   ', packet)

                frames = codec.decode(packet)
                for frame in frames:
                    print('       ', frame)

            if not chunk:
                break
    I can't parse any packets.
    I tried to just parse the chunk directly, but parse complained that the chunk was not a str.
    I tried writing the packets both as raw bytes (using packet.to_bytes()) and to a stream container.
    Also note that I am parsing a closed file, so this isn't even being attempted on a file that is being written to.
    Alex Hutnik
    @alexhutnik
    maybe there is an issue with parse
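    For reference, a minimal sketch of feeding parse(), assuming video_bytes.bin holds a raw Annex B H.264 elementary stream (not MP4 container bytes) and a PyAV version whose parse() accepts bytes:

    import av

    codec = av.CodecContext.create('h264', 'r')

    with open('video_bytes.bin', 'rb') as fh:
        while True:
            chunk = fh.read(1 << 16)
            for packet in codec.parse(chunk):     # pass the bytes directly
                for frame in codec.decode(packet):
                    print(frame)
            if not chunk:                         # an empty read flushes the parser
                break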
    Ghost
    @ghost~5ec6fedfd73408ce4fe48410
    I am trying to apply FFmpeg filters to a .wav tempfile. I am getting the error av.error.ValueError: [Errno 22] Invalid argument on the line for p in out_stream.encode(ofr): in the offending code.
    The offending code:
            in_cont = av.open(curr_tempfile.name, mode='r')
            out_cont = av.open(output_tempfile.name, mode='w')
    
            in_stream = in_cont.streams.audio[0]
            out_stream = out_cont.add_stream(
                    codec_name=in_stream.codec_context.codec,
                    rate=audio_signal.sample_rate
                )
    
            graph = Graph()
            fchain = [graph.add_abuffer(sample_rate=in_stream.rate, 
                format=in_stream.format.name, layout=in_stream.layout.name, 
                channels=audio_signal.num_channels)]
            for _filter in filters:
                fchain.append(_filter(graph))
                fchain[-2].link_to(fchain[-1])
    
            fchain.append(graph.add("abuffersink"))
            fchain[-2].link_to(fchain[-1])
    
            graph.configure()
    
            for ifr in in_cont.decode(in_stream):
                graph.push(ifr)
                ofr = graph.pull()
                ofr.pts = None
                for p in out_stream.encode(ofr):
                    out_cont.mux(p)
    
            out_cont.close()
    The list filters contains lambda functions of the form: lambda graph: graph.add("name_of_filter", "named_filter_arguments")
    For example, a _filter in filters would be lambda graph: graph.add("tremolo", "f=15:d=.5")
    Hieu Trinh
    @mhtrinhLIC

    I got a VideoFrame in yuvj420p and I want to convert it to OpenCV Mat which is bgr24.

    <av.VideoFrame #3, pts=1920 yuvj420p 1920x1080 at 0x7f16f0aaf7c8>

    Currently, when I run frame.to_ndarray(format='bgr24'), I get deprecated pixel format used, make sure you did set range correctly. Is there a proper way to convert that avoids this warning?
    I tried :

    arr = frame.to_ndarray()   
    mat= cv2.cvtColor(arr, cv2.COLOR_YUV2BGR_I420)

    But I got ValueError: Conversion to numpy array with format yuvj420p is not yet supported.
    I know that I can use av.logging.Capture() to catch the warning, but I'm wondering if there is a proper way to do the conversion?

    Ramon Caldeira
    @ramoncaldeira
    @mhtrinhLIC try updating PyAV to a version >= 8.0.0; support for yuvj420p was recently added
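    For reference, a minimal sketch on PyAV >= 8.0.0 ('input.mp4' is a placeholder path):

    import av

    with av.open('input.mp4') as container:
        for frame in container.decode(video=0):
            bgr = frame.to_ndarray(format='bgr24')   # usable directly as an OpenCV image
            break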
    Joanna Bitton
    @jbitton

    I'm trying to write a video containing audio using lossless codecs. I'm using "libx264rgb" for my video codec which works perfectly fine and as expected. However, I've been having trouble finding a compatible lossless audio codec.

    I tried using "flac", but I got an error saying that that is an experimental feature:

    > Traceback (most recent call last):
    >   File "/data/users/jbitton/fbsource/fbcode/buck-out/dev/gen/aml/ai_red_team/augmentations/tests/video_tests/pytorch_test#binary,link-tree/aml/ai_red_team/augmentations/tests/video_tests/pytorch_test.py", line 92, in test_compose_without_tensor
    >     audio_codec="flac",
    >   File "/data/users/jbitton/fbsource/fbcode/buck-out/dev/gen/aml/ai_red_team/augmentations/tests/video_tests/pytorch_test#binary,link-tree/torchvision/io/video.py", line 129, in write_video
    >     container.mux(packet)
    >   File "av/container/output.pyx", line 198, in av.container.output.OutputContainer.mux
    >   File "av/container/output.pyx", line 204, in av.container.output.OutputContainer.mux_one
    >   File "av/container/output.pyx", line 174, in av.container.output.OutputContainer.start_encoding
    >   File "av/container/core.pyx", line 180, in av.container.core.Container.err_check
    >   File "av/utils.pyx", line 107, in av.utils.err_check
    > av.AVError: [Errno 733130664] Experimental feature: '/tmp/tmpbwvyrh3j/aug_pt_input_1.mp4' (16: mp4)

    Additionally, I tried using "mp4als", which is a valid audio codec in ffmpeg; however, PyAV raises an error saying it is an unknown codec.

    Would appreciate any advice, thanks in advance!

    marawan31
    @marawan31
    Does anyone know how to tell a stream that there has been a frame drop? Basically, I'm receiving an audio stream over the network and re-encoding it. The problem is that as soon as there is a frame drop, the encoder fails with (pts X >= expected pts Y). To fix it, I set the pts to None before calling encode with the frame, but then the played frame is treated as the next one, which puts the audio and video out of sync. Here's a code sample of what I'm doing:
    import av
    
    container = av.open(rtsp_url)
    video_stream = container.streams.video[0]
    audio_stream = container.streams.audio[0]
    out_container = av.open(some_file, mode="w", format="mpegts")
    video_stream_out = out_container.add_stream(template=video_stream)
    audio_stream_out = out_container.add_stream(codec_name="aac", rate=44100)
    
    while True:
        packet = next(container.demux(video_stream, audio_stream))
        if packet.stream.type == 'video':
            packet.stream = video_stream_out
            out_container.mux(packet)
        elif packet.stream.type == 'audio':
            audio_frames = packet.decode()
            for a_frame in audio_frames:
                a_frame.pts = None # If I don't set this to None if there is a packet drop the encode function will fail...
                packets = audio_stream_out.encode(a_frame) # When I do set it to None, audio will play ridiculously fast
                for a_packet in packets:
                    a_packet.stream = audio_stream_out
                    out_container.mux(a_packet)
    Razvan Grigore
    @razvanphp
    hey, I'm trying to build the https://github.com/jocover/jetson-ffmpeg codec into av. Should pip3 install av --no-binary av work with a custom-compiled ffmpeg? I still get this error: pkg-config returned flags we don't understand: -pthread -pthread. Any hints?
    Razvan Grigore
    @razvanphp
    It worked with a modified deb-src build.
    Luke Wong
    @xiaozhubenben
    Hi, I want to know how to make PyAV read an RTMP stream with no buffering, like the ffmpeg command line: ffplay -fflags nobuffer rtmp://xxxxxxx
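    Not a confirmed answer from the thread, but av.open() forwards its options dict to FFmpeg, so the same flag can be passed there (the URL below is a placeholder):

    import av

    container = av.open('rtmp://example.com/live/stream',
                        options={'fflags': 'nobuffer'})
    for frame in container.decode(video=0):
        print(frame)
        break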
    Brian Sherson
    @shersonb
    Isn’t len(frame.planes) supposed to be equal to len(frame.layout.channels)?
    >>> frame = next(d) ; frame
    <av.AudioFrame 0, pts=0, 512 samples at 48000Hz, 7.1, s32p at 0x7fd146428b38>
    >>> len(frame.planes)
    9
    >>> len(frame.layout.channels)
    8