I am having trouble installing PyAV.
$ pip install av
Collecting av
Using cached https://files.pythonhosted.org/packages/4d/fe/170a64b51c8f10df19fd2096388e51f377dfec07f0bd2f8d25ebdce6aa50/av-8.0.1.tar.gz
  Complete output from command python setup.py egg_info:
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/tmp/pip-build-4yV5M1/av/setup.py", line 9, in <module>
      from shlex import quote
  ImportError: cannot import name quote
  ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-4yV5M1/av/
"dummy" codecContext? I'm opening a container with av.open(file="/dev/video0", format="v4l2", mode="r", options={"video_size": "1920x1080", "framerate": "30", "input_format": "h264"})
since RPi camera and Logitech C920 can give HW accelerated H264 stream, but then I have trouble working with the Packet object to bitstream over webRTC. Any clues?
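For reference, here is a rough sketch of what I have in mind for getting at the raw bitstream (this assumes Packet exposes the buffer protocol, and it reuses the same av.open call as above):

import av

container = av.open(file="/dev/video0", format="v4l2", mode="r",
                    options={"video_size": "1920x1080", "framerate": "30",
                             "input_format": "h264"})
stream = container.streams.video[0]

for packet in container.demux(stream):
    if packet.dts is None:
        # Flush packets carry no payload; skip them.
        continue
    raw = bytes(packet)  # raw H264 bitstream for this packet
    # ... this is the data I would like to feed to the WebRTC side ...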
ffmpeg -i book.m4b -f ffmetadata book.metadata.txt
With PyAV, I can see a data stream that has 56 'frames', but I'm uncertain how (or if) I can process that chapter info. Does anyone know if this is possible as things stand now?
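For what it's worth, this is roughly how I'm poking at that data stream with PyAV, just dumping the packets to see what the 56 'frames' contain (the file name matches the ffmpeg command above):

import av

container = av.open("book.m4b")
data_stream = container.streams.data[0]  # the stream that shows the 56 'frames'
for packet in container.demux(data_stream):
    # Dump each packet and the start of its payload to inspect the chapter info.
    print(packet, bytes(packet)[:32])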
$PYAV_PYTHON setup.py bdist_wheel
...
if __name__ == "__main__":
    fh = open('video_bytes.bin', 'rb')
    codec = av.CodecContext.create('h264', 'r')
    while True:
        chunk = fh.read(1 << 16)
        packets = codec.parse(str(chunk))
        print("Parsed {} packets from {} bytes:".format(len(packets), len(chunk)))
        for packet in packets:
            print('   ', packet)
            frames = codec.decode(packet)
            for frame in frames:
                print('      ', frame)
I first tried passing chunk directly, but parse complained that chunk was not a str. I have also tried converting each packet to raw bytes (packet.to_bytes()) and to a stream container.
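For comparison, this is the loop I expected to be able to write, passing the raw bytes straight to parse (this assumes a PyAV version whose CodecContext.parse accepts bytes-like input):

import av

codec = av.CodecContext.create('h264', 'r')
with open('video_bytes.bin', 'rb') as fh:
    while True:
        chunk = fh.read(1 << 16)
        if not chunk:
            break
        packets = codec.parse(chunk)  # bytes passed directly, no str() wrapper
        for packet in packets:
            for frame in codec.decode(packet):
                print(frame)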
import av
from av.filter import Graph

in_cont = av.open(curr_tempfile.name, mode='r')
out_cont = av.open(output_tempfile.name, mode='w')
in_stream = in_cont.streams.audio[0]
out_stream = out_cont.add_stream(
    codec_name=in_stream.codec_context.codec,
    rate=audio_signal.sample_rate
)

# Build the filter chain: abuffer -> user filters -> abuffersink.
graph = Graph()
fchain = [graph.add_abuffer(sample_rate=in_stream.rate,
                            format=in_stream.format.name,
                            layout=in_stream.layout.name,
                            channels=audio_signal.num_channels)]
for _filter in filters:
    fchain.append(_filter(graph))
    fchain[-2].link_to(fchain[-1])
fchain.append(graph.add("abuffersink"))
fchain[-2].link_to(fchain[-1])
graph.configure()

# Push decoded frames through the graph and encode the filtered output.
for ifr in in_cont.decode(in_stream):
    graph.push(ifr)
    ofr = graph.pull()
    ofr.pts = None
    for p in out_stream.encode(ofr):
        out_cont.mux(p)
out_cont.close()
filters contains lambda functions of the form lambda graph: graph.add("name_of_filter", "named_filter_arguments"); for example, one _filter in filters would be lambda graph: graph.add("tremolo", "f=15:d=.5").
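One thing I'm unsure about is whether the encoder needs to be drained at the end; my understanding is that it should be flushed before closing, roughly like this (same out_stream and out_cont as above):

# Drain any frames still buffered in the encoder, then close the container.
for p in out_stream.encode(None):
    out_cont.mux(p)
out_cont.close()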
I got a VideoFrame in yuvj420p and I want to convert it to an OpenCV Mat, which is bgr24.
<av.VideoFrame #3, pts=1920 yuvj420p 1920x1080 at 0x7f16f0aaf7c8>
Currently, when I run frame.to_ndarray(format='bgr24') I get "deprecated pixel format used, make sure you did set range correctly". Is there a proper way to convert that prevents this warning?
I tried:
arr = frame.to_ndarray()
mat = cv2.cvtColor(arr, cv2.COLOR_YUV2BGR_I420)
but got ValueError: Conversion to numpy array with format yuvj420p is not yet supported.
I know that I can use av.logging.Capture() to catch the warning, but I'm just wondering if there is a proper way to do the conversion?
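The closest thing I've found is to go through plain yuv420p first and let OpenCV do the YUV to BGR step; a rough sketch (this assumes to_ndarray of a yuv420p frame returns the stacked I420 planes with shape (height * 3 // 2, width)):

import cv2

# Reformat yuvj420p -> yuv420p, then convert the I420 planes with OpenCV.
yuv = frame.reformat(format='yuv420p').to_ndarray()
mat = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_I420)

I'm not sure whether that counts as the proper way, or whether it silently drops the full-range information that yuvj420p carries.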
I'm trying to write a video containing audio using lossless codecs. I'm using "libx264rgb" for my video codec which works perfectly fine and as expected. However, I've been having trouble finding a compatible lossless audio codec.
I tried using "flac", but I got an error saying that that is an experimental feature:
> Traceback (most recent call last):
>   File "/data/users/jbitton/fbsource/fbcode/buck-out/dev/gen/aml/ai_red_team/augmentations/tests/video_tests/pytorch_test#binary,link-tree/aml/ai_red_team/augmentations/tests/video_tests/pytorch_test.py", line 92, in test_compose_without_tensor
>     audio_codec="flac",
>   File "/data/users/jbitton/fbsource/fbcode/buck-out/dev/gen/aml/ai_red_team/augmentations/tests/video_tests/pytorch_test#binary,link-tree/torchvision/io/video.py", line 129, in write_video
>     container.mux(packet)
>   File "av/container/output.pyx", line 198, in av.container.output.OutputContainer.mux
>   File "av/container/output.pyx", line 204, in av.container.output.OutputContainer.mux_one
>   File "av/container/output.pyx", line 174, in av.container.output.OutputContainer.start_encoding
>   File "av/container/core.pyx", line 180, in av.container.core.Container.err_check
>   File "av/utils.pyx", line 107, in av.utils.err_check
> av.AVError: [Errno 733130664] Experimental feature: '/tmp/tmpbwvyrh3j/aug_pt_input_1.mp4' (16: mp4)
Additionally, I tried using "mp4als", which is a valid audio codec in ffmpeg; however, PyAV raises an error saying it is an unknown codec.
Would appreciate any advice, thanks in advance!
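In case it matters, the fallback I'm considering is to keep libx264rgb for the video but write to a Matroska container instead of mp4, since FLAC is not flagged as experimental there; a rough sketch (the file name and rates are made up):

import av

out = av.open('lossless_test.mkv', mode='w')  # hypothetical output path
vstream = out.add_stream('libx264rgb', rate=30)
astream = out.add_stream('flac', rate=44100)
# ... encode video frames into vstream and audio frames into astream as usual,
# then flush both encoders and close the container ...
out.close()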
import av

container = av.open(rtsp_url)
video_stream = container.streams.video[0]
audio_stream = container.streams.audio[0]

out_container = av.open(some_file, mode="w", format="mpegts")
video_stream_out = out_container.add_stream(template=video_stream)
audio_stream_out = out_container.add_stream(codec_name="aac", rate=44100)

while True:
    packet = next(container.demux(video_stream, audio_stream))
    if packet.stream.type == 'video':
        packet.stream = video_stream_out
        out_container.mux(packet)
    elif packet.stream.type == 'audio':
        audio_frames = packet.decode()
        for a_frame in audio_frames:
            a_frame.pts = None  # If I don't set this to None, encode() fails whenever there is a packet drop...
            packets = audio_stream_out.encode(a_frame)  # ...but when I do set it to None, the audio plays ridiculously fast
            for a_packet in packets:
                a_packet.stream = audio_stream_out
                out_container.mux(a_packet)
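I also wonder whether I need to drain the audio encoder when shutting down; something like this, reusing the same audio_stream_out and out_container:

# Flush whatever the AAC encoder still has buffered before closing.
for a_packet in audio_stream_out.encode(None):
    out_container.mux(a_packet)
out_container.close()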
How do I make pip3 install av --no-binary av work with the custom compiled ffmpeg? I still get this error: pkg-config returned flags we don't understand: -pthread -pthread
Any hints?
I want to convert an m3u8 stream to mp4, and I would like to do the conversion and the download at the same time. For the download I have used ffmpeg's http protocol.
I am running this command:
ffmpeg -i ultra.m3u8 -c copy -listen 1 -seekable 1 -f mp4 http://0.0.0.0:8080/test.mp4
When I trigger this URL ("http://0.0.0.0:8080/test.mp4"), the file starts downloading, but I am not able to play the video.
And I get this error when all the chunks are read:
[hls @ 0x55da053b4100] Opening 'ultra177.ts' for reading
[tcp @ 0x55da0540f940] Connection to tcp://0.0.0.0:8080 failed: Connection refused
[tcp @ 0x55da05520480] Connection to tcp://0.0.0.0:8080 failed: Connection refused
[tcp @ 0x55da053ca780] Connection to tcp://0.0.0.0:8080 failed: Connection refused
[tcp @ 0x55da05485f80] Connection to tcp://0.0.0.0:8080 failed: Connection refused
[tcp @ 0x55da053ced40] Connection to tcp://0.0.0.0:8080 failed: Connection refused
[tcp @ 0x55da054255c0] Connection to tcp://0.0.0.0:8080 failed: Connection refused
[tcp @ 0x55da0540f940] Connection to tcp://0.0.0.0:8080 failed: Connection refused
[tcp @ 0x55da05435380] Connection to tcp://0.0.0.0:8080 failed: Connection refused
frame=53236 fps=7939 q=-1.0 Lsize= 476447kB time=00:29:36.30 bitrate=2197.3kbits/s speed= 265x
video:446847kB audio:28278kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.278083%
Hello, I'm new to PyAV. I've been trying to record my screen using 'x11grab' and then broadcast it to an RTMP stream, but I've been having no luck. Here's my code:
import av

if __name__ == '__main__':
    x11grab = av.open(':0.0', format='x11grab', options={'s': '1280x1024', 'framerate': '60'})
    output = av.open('rtmp://localhost', format='flv')
    ostream = output.add_stream('libx264', framerate=60, video_size='1280x1024')
    ostream.options = {}
    ostream.pix_fmt = 'yuv420p'

    istream = x11grab.streams.video[0]
    gen = x11grab.demux(istream)
    for _ in range(1000):
        packet = next(gen)
        packet.stream = ostream
        if packet.dts is None:
            continue
        print(packet)
        output.mux(packet)

    x11grab.close()
    output.close()
When I run this, it just hangs, no output whatsoever. Could somebody please point me in the right direction on how I could go about making this work? Thank you!
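For comparison, the variant I was planning to try next decodes the grabbed frames and re-encodes them through the libx264 stream instead of retagging the raw x11grab packets (same x11grab, istream, ostream and output objects as above):

# Decode raw frames from the screen grab and push them through the encoder.
for i, frame in enumerate(x11grab.decode(istream)):
    if i >= 1000:
        break
    for packet in ostream.encode(frame):
        output.mux(packet)

# Flush the encoder, then close both containers.
for packet in ostream.encode(None):
    output.mux(packet)
output.close()
x11grab.close()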
After changing stream.width and stream.height I got an "Input picture width (xxx) is greater than stride (yyy)" error during the next stream.encode(), so I suppose I need to do something more. Any idea what is missing? Is it possible to do this with PyAV, or do I need to go down to the C API?
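To make it concrete, this is roughly what I'm attempting: change the stream size and scale each decoded frame to match before encoding (new_width, new_height, in_container and out_container are placeholders for my actual setup):

# Change the output stream size, then scale every frame to that size before encoding.
stream.width = new_width
stream.height = new_height
for frame in in_container.decode(video=0):
    frame = frame.reformat(width=new_width, height=new_height)
    for packet in stream.encode(frame):
        out_container.mux(packet)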
Hi all. I am recording an RTP live stream to disk. The program works correctly, but the start time of the video files does not start at 0. I've checked the input container and indeed the start_time attribute is not set to 0. I've tried to force the start_time of the input container to 0, but it is protected.
On the other hand, I've found that if those segments with the start_time not set to 0 are muxed using the ffmpeg CLI with the flag '-c copy', the offset is correctly removed. So I've been playing with the audio and video packet pts and dts to remove the initial offset, but after some segments the program crashes because of problems in the decoding process...
How do you usually reset or set the start_time to 0?
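For reference, this is the kind of thing I've been trying with the packet timestamps: subtract each stream's first dts before muxing the copied packets (in_container, out_container and out_streams are placeholders for my recording setup):

# Shift every stream so that the copied segment starts at timestamp 0.
offsets = {}
for packet in in_container.demux():
    if packet.dts is None:
        continue
    idx = packet.stream.index
    offsets.setdefault(idx, packet.dts)
    packet.dts -= offsets[idx]
    if packet.pts is not None:
        packet.pts -= offsets[idx]
    packet.stream = out_streams[idx]  # map to the corresponding output stream
    out_container.mux(packet)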