I'm trying to learn how to translate an ffmpeg command-line background-blur filter into ffmpeg-python. The command uses '-lavfi' with this filter graph:

[0:v]scale=ih*16/9:-1,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[bg][0:v]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16

The basic examples at https://github.com/kkroening/ffmpeg-python make the simple tricks easy to pick up, but how do I learn the syntax for a full conversion like this one?
Posted on 2021-01-20 11:27:08
I don't know whether you have figured it out already, but here is an approach that worked for me.
Tip 1: the prerequisite for expressing any filter with this library is understanding the ffmpeg command-line syntax.
Tip 2: in general, ffmpeg.filter() takes an upstream stream as its first argument, followed by the filter name and then the filter's parameters. The function returns a new downstream stream attached to the filter node you just created.
For example, reading the ffmpeg command line in the question tells me that you want to scale the video, apply the boxblur filter, overlay the original video on the blurred background, and then crop.
So you can express it in ffmpeg-python terms as:
# create an input stream object; note that any supplied kwargs are passed to ffmpeg verbatim
# (-lavfi is only the command-line flag that introduces the filter graph, so it is not needed here)
my_stream = ffmpeg.input(input_file)
# input() returns a stream object representing the outgoing edge of the input node, which can be used to create downstream nodes. It has .audio and .video properties; assign the video stream to a new variable, since we will be filtering only the video stream, as indicated by [0:v] in the ffmpeg command line.
my_vid_stream = my_stream.video
# ffmpeg.filter() takes the upstream stream, followed by the name of the filter, followed by the configuration of the filter
# the first filter you wanted to apply is the 'scale' filter; pass its two values as separate arguments so the library does not escape the ':' between them. So...
my_vid_stream = ffmpeg.filter(my_vid_stream, 'scale', 'ih*16/9', -1)
# next, on that upstream node, create a new filter which does the boxblur operation per your specs; the library escapes the commas inside min(h,w) for you. So...
my_vid_stream = ffmpeg.filter(my_vid_stream, 'boxblur',
                              luma_radius='min(h,w)/20', luma_power=1,
                              chroma_radius='min(cw,ch)/20', chroma_power=1)
# the blurred stream is the [bg] background; overlay the original [0:v] video on top of it, centered
my_vid_stream = ffmpeg.overlay(my_vid_stream, my_stream.video, x='(W-w)/2', y='(H-h)/2')
# finally apply the crop filter to its upstream node and assign the output stream back to the same variable. So...
my_vid_stream = ffmpeg.filter(my_vid_stream, 'crop', h='iw*9/16')
# now generate the output node and write it to an output file
my_vid_stream = ffmpeg.output(my_vid_stream, output_file)
# to see your pipeline in action, call ffmpeg.run(my_vid_stream)
Hope this post helps you or anyone else who is struggling to use this library effectively.
Posted on 2021-01-20 13:36:38
I have worked with ffmpeg-python and found it very flexible for composing custom commands. Here is an example where I overlay videos in a loop and add a concat filter; from it you can learn how to chain the rest of the filters.
audios = []
inputs = []
# generate an empty audio clip to substitute for muted/missing audio
e_aud_src = rendering_helper.generate_empty_audio(0.1)
e_aud = ffmpeg.input(e_aud_src).audio
for k, i in enumerate(videos):
    inp = ffmpeg.input(i['src'], ss=i['start'], t=(i['end'] - i['start']))
    inp_f = (
        inp.filter_multi_output('split')[k]
        .filter('scale',
                width=(i['width'] * Factors().factors['w_factor']),
                height=(i['height'] * Factors().factors['h_factor']))
        .filter('setsar', '1/1')
        .setpts(f"PTS-STARTPTS+{i['showtime']}/TB")
    )
    audio = ffmpeg.probe(i['src'], select_streams='a')
    if audio['streams'] and not i['muted']:
        a = inp.audio.filter('adelay', f"{i['showtime'] * 1000}|{i['showtime'] * 1000}")
    else:
        a = e_aud
    audios.append(a)
    # e_frame is the base (background) video stream, defined elsewhere
    e_frame = e_frame.overlay(
        inp_f,
        x=(i['xpos'] * Factors().factors['w_factor']),
        y=(i['ypos'] * Factors().factors['h_factor']),
        eof_action='pass',
    )
mix_audios = ffmpeg.filter(audios, 'amix') if len(audios) > 1 else audios[0]
inp_con = ffmpeg.concat(e_frame, mix_audios, v=1, a=1)
return inp_con
Posted on 2022-02-27 19:40:55
My 2 cents: here we have three clips faded to black into one another. I found that filters with multiple inputs accept a tuple as the first argument; a list probably works too.
The y="-y" thing was also found by just trying it -- the library seems intuitive enough, at least to my twisted (not-so-twisted) mind.
import ffmpeg

infile = "test-video.mp4"
outfile = infile + '.crossfade.mp4'

if __name__ == '__main__':
    faded = ffmpeg.input(infile, ss=10, to=21)
    into = ffmpeg.input(infile, ss=30, to=41)
    faded = ffmpeg.filter((faded, into), 'xfade', transition="fadeblack", duration=1, offset=10)
    into = ffmpeg.input(infile, ss=60, to=71)
    faded = ffmpeg.filter((faded, into), 'xfade', transition="fadeblack", duration=1, offset=20)
    # overwrite: n="-n" means never, same for y="-y" always
    # (the documented alternative is ffmpeg.run(written, overwrite_output=True))
    written = ffmpeg.output(faded, outfile, y="-y")
    ffmpeg.run(written)

https://stackoverflow.com/questions/61872725