Can PyAV read a video directly into a 3D NumPy array? Currently, I am iterating over each frame:

i = 0
container = av.open('myvideo.avi')
for frame in container.decode(video=0):
    if i == 0: V = np.array(frame.to_ndarray(format='gray'))
    else: V = np.dstack((V, np.array(frame.to_ndarray(format='gray'))))
    i += 1

The first frame defines a 2D NumPy array (i == 0); each subsequent frame (i > 0) is stacked onto that array with np.dstack. Ideally, I would like to read the entire video into a 3D NumPy array of grayscale frames, all in one go.
Posted 2020-01-30 23:29:02
I could not find a solution using PyAV, so I used ffmpeg-python instead.
ffmpeg-python is a Pythonic binding for FFmpeg (like PyAV).
The code reads the entire video at once into a NumPy array of 3D grayscale frames.
The solution performs the following steps:
1. Probe the input with FFprobe to get the width and height of the video frames.
2. Execute FFmpeg in a subprocess and pipe the whole video, decoded to raw 8-bit grayscale, into memory.
3. Reshape the raw byte buffer into an n x height x width NumPy array.
Here is the code (please read the comments):
import ffmpeg
import numpy as np
from PIL import Image
in_filename = 'in.avi'
"""Build synthetic video, for testing begins:"""
# ffmpeg -y -r 10 -f lavfi -i testsrc=size=160x120:rate=1 -c:v libx264 -t 5 in.avi
width, height = 160, 120
(
ffmpeg
.input('testsrc=size={}x{}:rate=1'.format(width, height), r=10, f='lavfi')
.output(in_filename, vcodec='libx264', t=5)
.overwrite_output()
.run()
)
"""Build synthetic video ends"""
# Use FFprobe for getting the resolution of the video frames
p = ffmpeg.probe(in_filename, select_streams='v')
width = p['streams'][0]['width']
height = p['streams'][0]['height']
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
# Stream the entire video as one large array of bytes
in_bytes, _ = (
ffmpeg
.input(in_filename)
.video # Video only (no audio).
.output('pipe:', format='rawvideo', pix_fmt='gray') # Set the output format to raw video in 8 bit grayscale
.run(capture_stdout=True)
)
n_frames = len(in_bytes) // (height*width) # Compute the number of frames.
frames = np.frombuffer(in_bytes, np.uint8).reshape(n_frames, height, width) # Reshape buffer to array of n_frames frames (shape of each frame is (height, width)).
im = Image.fromarray(frames[0, :, :]) # Convert first frame to image object
im.show() # Display the image

Output: (image of the first test-pattern frame)
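As a sanity check on the reshape step above, the byte-to-array arithmetic can be reproduced with plain NumPy on a synthetic buffer (the sizes below are arbitrary illustrations, not taken from the video):

```python
import numpy as np

# Synthetic stand-in for the raw grayscale bytes FFmpeg writes to the pipe:
# n frames of height*width bytes each.
width, height, n = 160, 120, 4
in_bytes = bytes(n * height * width)  # all-zero buffer of the right length

n_frames = len(in_bytes) // (height * width)  # number of whole frames in the buffer
frames = np.frombuffer(in_bytes, np.uint8).reshape(n_frames, height, width)
print(frames.shape)  # (4, 120, 160)
```

np.frombuffer does not copy the bytes, so the only full copy of the video data is the one FFmpeg piped into memory.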

Update:
Using PyAV
When using PyAV, we have to decode the video frame by frame.
The main advantage of PyAV over ffmpeg-python is that it works even when the FFmpeg CLI is not available (no ffmpeg.exe is present).
To read all the video frames into one NumPy array, we can decode each frame, convert it to a NumPy array, append it to a list, and stack the list into a single array at the end.
Code sample (uses OpenCV to display the frames for testing):
import av
import numpy as np
import cv2
# Build input file using FFmpeg CLI (for testing):
# ffmpeg -y -f lavfi -i testsrc=size=192x108:rate=1:duration=10 -vcodec libx264 -pix_fmt yuv420p myvideo.avi
container = av.open('myvideo.avi')
frames = [] # List of frames - store video frames after converting to NumPy array.
for frame in container.decode(video=0):
    # Decode video frame, and convert to NumPy array in BGR pixel format (BGR because it is used by OpenCV).
    frame = frame.to_ndarray(format='bgr24') # For grayscale video, use: frame = frame.to_ndarray(format='gray')
    frames.append(frame) # Append the frame to the list of frames.
# Convert the list to NumPy array.
# Shape of each frame is (height, width, 3) [for Grayscale the shape is (height, width)]
# the shape of frames is (n_frames, height, width, 3) [for Grayscale the shape is (n_frames, height, width)]
frames = np.array(frames)
# Show the frames for testing:
for i in range(len(frames)):
    cv2.imshow('frame', frames[i])
    cv2.waitKey(1000)
cv2.destroyAllWindows()

https://stackoverflow.com/questions/59973078
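The shape comments above can be verified with plain NumPy, and the same sketch shows why stacking the list once beats the question's per-frame np.dstack, which copies the growing array on every iteration and puts the frame index on the last axis (sizes below are arbitrary illustrations):

```python
import numpy as np

# Synthetic stand-ins for decoded frames.
h, w, n = 108, 192, 5
bgr_frames = [np.zeros((h, w, 3), dtype=np.uint8) for _ in range(n)]
gray_frames = [np.zeros((h, w), dtype=np.uint8) for _ in range(n)]

# One np.array call over the list gives the layouts described in the comments.
print(np.array(bgr_frames).shape)   # (5, 108, 192, 3)
print(np.array(gray_frames).shape)  # (5, 108, 192)

# The question's repeated np.dstack stacks along the last axis instead:
print(np.dstack(gray_frames).shape)  # (108, 192, 5)
```

With the (n_frames, height, width) layout, frames[i] is a contiguous 2D view of frame i, which is what most per-frame processing expects.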