How to convert an ffmpeg filter_complex to ffmpeg-python
I am trying to learn how to convert an ffmpeg command-line background-blur filter into ffmpeg-python. The relevant part of the command line is:
-lavfi
[0:v]scale=ih*16/9:-1,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[bg][0:v]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16
The basic examples at https://github.com/kkroening/ffmpeg-python are good for learning simple tricks, but how do I learn the full transformation syntax?
Solution 1:[1]
Not sure if you figured this out or not... but here's an approach that worked for me.
Tip 1: A prerequisite for using the library to 'encode any filter' is understanding the ffmpeg command-line syntax.
Tip 2: In general, ffmpeg.filter() takes an upstream stream as its first parameter, followed by the filter name, followed by the filter's arguments. It returns the stream downstream of the filter node you just created.
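In other words, the call shape looks like this (a throwaway sketch with a made-up input file name, just to show where each piece goes):
stream = ffmpeg.input('clip.mp4')                 # hypothetical input file
stream = ffmpeg.filter(stream, 'scale', 720, -1)  # filter name, then its arguments
stream = ffmpeg.filter(stream, 'hflip')           # the returned stream feeds the next filter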
For example: reading the ffmpeg command line from the question tells me that you want to scale the video, blur it with boxblur, overlay the original video on top of the blurred background, and then crop. So you'd represent this in ffmpeg-python terms as:
# create a stream object; note that any supplied kwargs are passed to ffmpeg verbatim
# (the '-lavfi' flag is not needed here: ffmpeg-python builds and passes the filter graph for you)
input_node = ffmpeg.input(input_file)
# input() returns a stream object that represents the outgoing edge of an upstream node and can be
# used to create more downstream nodes. This stream object has two properties, audio and video.
# Assign the video stream to a new variable; we will be creating filters only on the video stream,
# as indicated by [0:v] in the ffmpeg command line.
my_vid_stream = input_node.video
# ffmpeg.filter() takes the upstream node, followed by the name of the filter, followed by the
# configuration of the filter.
# The first filter you wanted to apply is the 'scale' filter. So...
my_vid_stream = ffmpeg.filter(my_vid_stream, "scale", "ih*16/9", "-1")
# downstream of that node, create a new filter which does the boxblur operation per your specs.
# (No need to backslash-escape the commas yourself; ffmpeg-python escapes filter arguments.) So...
my_vid_stream = ffmpeg.filter(my_vid_stream, "boxblur",
                              luma_radius="min(h,w)/20", luma_power=1,
                              chroma_radius="min(cw,ch)/20", chroma_power=1)
# the [bg][0:v]overlay=... part of the command line is a separate filter with two inputs: the
# blurred background ([bg]) and the original video ([0:v]). ffmpeg.overlay() takes both streams.
my_vid_stream = ffmpeg.overlay(my_vid_stream, input_node.video, x="(W-w)/2", y="(H-h)/2")
# finally apply the crop filter to its upstream node and assign the output stream back to the same variable. So...
my_vid_stream = ffmpeg.filter(my_vid_stream, "crop", h="iw*9/16")
# now generate the output node and write it to an output file.
my_vid_stream = ffmpeg.output(my_vid_stream, output_file)
# to see your pipeline in action, call ffmpeg.run(my_vid_stream)
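If you want to sanity-check the translation before running it, you can also ask ffmpeg-python for the exact arguments it will generate and compare them with the original command line (a quick check, assuming my_vid_stream is the output node built above):
print(ffmpeg.get_args(my_vid_stream))  # prints the generated ffmpeg arguments as a list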
Hope this helps you or anyone else struggling to effectively utilize this lib.
Solution 2:[2]
I am working with ffmpeg-python, and there is a lot of flexibility for adding custom commands. Here is an example where I use a loop to overlay videos and then add a concat filter; from this you can learn how to add the rest of the filters.
# Note: e_frame (the base video stream being overlaid onto), videos, rendering_helper and
# Factors come from elsewhere in my project; this snippet is the body of a helper function.
audios = []
inputs = []
# generate an empty audio track to stand in for muted clips
e_aud_src = rendering_helper.generate_empty_audio(0.1)
e_aud = (
    ffmpeg.input(e_aud_src)
    .audio
)
for k, i in enumerate(videos):
    inp = ffmpeg.input(i['src'], ss=i['start'], t=(i['end'] - i['start']))
    inp_f = (inp.filter_multi_output('split')[k]
             .filter_('scale',
                      width=(i['width'] * Factors().factors['w_factor']),
                      height=(i['height'] * Factors().factors['h_factor']))
             .filter('setsar', '1/1')
             .setpts(f"PTS-STARTPTS+{i['showtime']}/TB"))
    audio = ffmpeg.probe(i['src'], select_streams='a')
    if audio['streams'] and i['muted'] == False:
        a = inp.audio.filter('adelay', f"{i['showtime'] * 1000}|{i['showtime'] * 1000}")
    else:
        a = e_aud
    audios.append(a)
    e_frame = e_frame.overlay(inp_f,
                              x=(i['xpos'] * Factors().factors['w_factor']),
                              y=(i['ypos'] * Factors().factors['h_factor']),
                              eof_action='pass')
# amix defaults to two inputs, so pass the actual number of audio streams
mix_audios = ffmpeg.filter_(audios, 'amix', inputs=len(audios)) if len(audios) > 1 else audios[0]
inp_con = ffmpeg.concat(e_frame, mix_audios, v=1, a=1)
return inp_con
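The snippet above depends on helpers from my project (rendering_helper, Factors and the e_frame base stream), so it won't run on its own. As a rough, self-contained sketch of the same overlay-plus-adelay-plus-amix pattern, with hypothetical file names and coordinates, the structure looks like this:
import ffmpeg

clips = [
    {'src': 'clip1.mp4', 'x': 0,   'y': 0,   'showtime': 0},  # hypothetical clips and positions
    {'src': 'clip2.mp4', 'x': 320, 'y': 180, 'showtime': 2},
]
base = ffmpeg.input('background.mp4')  # hypothetical base video to overlay onto
video = base.video
audios = []
for clip in clips:
    inp = ffmpeg.input(clip['src'])
    # shift the clip's timestamps so it appears at its showtime, then stack it onto the base video
    v = inp.video.setpts(f"PTS-STARTPTS+{clip['showtime']}/TB")
    video = ffmpeg.overlay(video, v, x=clip['x'], y=clip['y'], eof_action='pass')
    # delay the clip's audio by the same amount (adelay takes milliseconds per channel)
    ms = clip['showtime'] * 1000
    audios.append(inp.audio.filter('adelay', f"{ms}|{ms}"))
# amix defaults to two inputs, so pass the actual count
audio = ffmpeg.filter(audios, 'amix', inputs=len(audios)) if len(audios) > 1 else audios[0]
out = ffmpeg.output(video, audio, 'combined.mp4')
# ffmpeg.run(out)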
Solution 3:[3]
My 2 cents: here we have three clips, cut from the same file, that are faded to black into each other. I found that a filter with more than one input will accept a tuple as its first parameter, and probably a list as well. The y="-y" thing was also just tried out; it seems this library is intuitive enough, at least given my twisted (not that twisted) mind.
import ffmpeg

infile = "test-video.mp4"
outfile = str(infile) + '.crossfade.mp4'

if __name__ == '__main__':
    faded = ffmpeg.input(infile, ss=10, to=21)
    into = ffmpeg.input(infile, ss=30, to=41)
    faded = ffmpeg.filter((faded, into), 'xfade', transition="fadeblack", duration=1, offset=10)
    into = ffmpeg.input(infile, ss=60, to=71)
    faded = ffmpeg.filter((faded, into), 'xfade', transition="fadeblack", duration=1, offset=20)
    # overwrite behaviour: n="-n" means never overwrite, y="-y" means always overwrite
    written = ffmpeg.output(faded, outfile, y="-y")
    ffmpeg.run(written)
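If the y="-y" kwarg feels too magic, ffmpeg-python also has an explicit helper for forcing overwrites; using the same variables as above, the following should be equivalent:
written = ffmpeg.output(faded, outfile).overwrite_output()
ffmpeg.run(written)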
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | jaksco |
| Solution 2 | Dharman |
| Solution 3 | user2692263 |