Convert videos, audio and image sequences quickly and efficiently.

Ok, sometimes you need a quick way to convert an image sequence to a video clip, join clips, rip audio, mux in audio from another clip, or deinterlace footage (anybody still shooting interlaced video? yeah, I need a new camcorder), and to do it with minimal quality loss or even without re-encoding.

The way I do this is the ffmpeg way, and the x264 way. Even though not everyone favours the format, it’s ok for simple tasks. And, yeah, it’s the command-line way – the fastest way.

The good news is, once you get a grasp of it, you will better understand all of the GUI frontends that use ffmpeg, mencoder etc. behind the scenes to do the actual work anyway. No, that doesn’t sound funny.

All of the commands are one-liners, just so you know. Let’s go:


ffmpeg -i InputFile -c:v libx264 -preset ultrafast -crf 0 Output.mp4

This line encodes a lossless file as fast as possible – good for batch jobs etc., but also with the biggest file sizes. You can get around that with a single switch, from -preset ultrafast to -preset veryslow, which gives the same quality (because the -crf 0 parameter means lossless output) but better compression – you just contribute a little more time to the task.

And if you ever need real compression, just change the -crf 0 parameter to -crf 18 (visually near-lossless, but with a real gain in terms of file size; this value is good for final footage):

ffmpeg -i inputFile -c:v libx264 -preset veryslow -crf 18 Output.mp4


ffmpeg -r 25 -i frame%04d.png -vcodec libx264 -crf 18 Output.mp4

This -i (input file) switch assumes your files are named following the convention frame0000.png, frame0001.png, etc. You build the pattern from the prefix (“frame” in this case) and the number of digits (4 in this case, passed as “%04d“).

In case your frames start from some other number – let’s say 70 – then you need to pass this information as well:

ffmpeg -r 25 -start_number 70 -i frame%04d.png -vcodec libx264 -crf 18 Output.mp4

The -r 25 switch means output will play at 25 frames per second.
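The %04d pattern is just printf-style formatting, so if your frames aren’t named that way yet, a plain shell loop can rename them. A minimal sketch – the img_*.png names here are made up for illustration:

```shell
# Sketch only: rename a few hypothetical PNGs into the zero-padded
# frame%04d.png pattern that ffmpeg's image demuxer expects.
i=0
for f in img_a.png img_b.png img_c.png; do
    name=$(printf 'frame%04d.png' "$i")
    echo "$f -> $name"    # swap echo for: mv "$f" "$name"
    i=$((i + 1))
done
```

In practice you’d loop over a glob like *.png instead of the hard-coded names.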


ffmpeg -skip_frame nokey -i sequence_%04d.png -vf 'tile=8x8' -an -vsync 0 spritesheet.png

If you have an image sequence that would look nice in a game, you may want to create a sprite sheet. With this example an 8×8 grid is created, ready to import into Unity3D for example.
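An 8×8 grid fits up to 64 frames; for other frame counts you can work out a square-ish grid to pass to the tile filter. A throwaway helper of my own (grid_for is not an ffmpeg feature, just shell arithmetic):

```shell
# Hypothetical helper: pick a roughly square COLSxROWS grid for the
# tile filter, given how many frames the sequence has.
grid_for() {
    local n=$1 cols rows
    # smallest column count whose square covers n frames
    cols=$(awk -v n="$n" 'BEGIN { c = int(sqrt(n)); if (c * c < n) c++; print c }')
    rows=$(( (n + cols - 1) / cols ))   # ceiling division
    echo "${cols}x${rows}"
}
grid_for 64   # 8x8
grid_for 50   # 8x7
```

Any leftover cells in the last row are simply padded with black by the tile filter.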


ffmpeg -i InputFile -an -f image2 "frame%05d.jpg"

This rips every frame to a JPEG still (-an drops the audio). Of course, if you plan to re-encode the sequence back to video or do any other processing, it’s better to use a lossless format, like PNG.


ffmpeg -f concat -i FileList.txt -c copy Output.mp4

Just create a FileList.txt text file that looks like this:
file '/path/file1'
file '/path/file2'
file '/path/file3'

These can be either relative or absolute paths. If your clips have the same parameters, you can join them without re-encoding (the -c copy switch); however, if they vary in parameters or codecs, you can still join them, but you need to encode the output file:

ffmpeg -f concat -i FileList.txt -c:v libx264 -preset veryslow -crf 18 Output.mp4
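For many clips you don’t have to type the list by hand – a minimal sketch that generates FileList.txt (the clip names are made up; in practice loop over a glob like *.mp4):

```shell
# Sketch: write a concat list for a few hypothetical clips.
: > FileList.txt                      # truncate/create the list
for clip in clip1.mp4 clip2.mp4 clip3.mp4; do
    printf "file '%s'\n" "$clip" >> FileList.txt
done
cat FileList.txt
```

Note that depending on your ffmpeg build, absolute paths in the list may require adding the -safe 0 switch before -i.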


ffmpeg -i InputFile -ss 00:00:30.0 -c copy -t 00:00:10.0 -async 1 Output.mp4

Simply cuts out a 10-second clip, starting at the 30.0 s mark.

-ss seeks to the given time,

-t sets the output clip length.
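If you prefer to think in plain seconds, a small throwaway helper (my own, not part of ffmpeg) converts a second count into the HH:MM:SS.0 form used above:

```shell
# Hypothetical helper: seconds -> HH:MM:SS.0 timestamp for -ss / -t.
to_timestamp() {
    local s=$1
    printf '%02d:%02d:%02d.0' $((s / 3600)) $((s % 3600 / 60)) $((s % 60))
}
to_timestamp 30; echo     # 00:00:30.0
to_timestamp 3671; echo   # 01:01:11.0
```

That said, ffmpeg also accepts a bare number of seconds for -ss and -t, so this is purely for readability.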


for vid in *.MTS; do ffmpeg -y -i "$vid" -c:v libx264 -preset veryslow -crf 18 -c:a libfdk_aac -b:a 196k "${vid%.MTS}.mp4"; done

This may look intimidating but it is as simple as the previous ones. It iterates through all files in the directory that have an .MTS extension and compresses each one individually into a file of the same name, but with an .mp4 extension. It also compresses the audio into AAC at a 196k bitrate (note that libfdk_aac requires an ffmpeg build compiled with it; on stock builds, -c:a aac works instead). If you don’t need to compress the audio, but simply want to copy it to the output, just use the -c:a copy switch instead.
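The only non-obvious part is "${vid%.MTS}", which is plain shell parameter expansion: it strips the trailing .MTS so the output keeps the clip’s base name. A tiny sketch:

```shell
# "${var%suffix}" removes a trailing suffix from the variable's value,
# so the encoded file keeps the clip's base name with a new extension.
vid="00042.MTS"
echo "${vid%.MTS}.mp4"   # prints 00042.mp4
```

This is POSIX parameter expansion, so it works in any Bourne-like shell, not just bash.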


for %%f in (*.MTS) do ffmpeg -i "%%f" -c:v libx264 -preset veryslow -crf 18 "%%~nf.mp4"
pause

This is the Windows batch-file equivalent of the loop above. If you run it from the command line and not a .bat file, simply switch “%%f” with a “%f”. The “%%~nf” modifier expands to the file name without its extension, so you don’t end up with names like clip.MTS.mp4, and pause is just to keep the terminal window opened on finish, to review the encoding log.


ffmpeg -i InputFile -ab 192k -ac 2 -ar 48000 -vn audio.mp3

Quite self-explanatory: -ab is the audio bitrate, -ac the number of channels (2 for stereo), -ar the sampling frequency.


From an .mp3 file:

ffmpeg -i AudioInputFile -i VideoInputFile -c copy -map 0:a:0 -map 1:v:0 -c:a copy Output.mp4

From another video clip:

ffmpeg -r 25 -i InputVideoFileWithAudioSource -i InputVideoFileThatNeedsAudio -c copy -map 0:a:0 -map 1:v:0 -c:a copy -shortest Output.mp4

The -map switches copy the first audio stream from InputVideoFileWithAudioSource (a file can have multiple audio streams) and the video stream from InputVideoFileThatNeedsAudio, and combine them into Output.mp4 – all without re-encoding. The -map syntax is input:stream-type:index, so 0:a:0 means “first input, audio, first stream”.

-r switch sets the framerate,

-shortest switch means that the output will have the length of the shorter input file.


ffmpeg -i InputFile -vf deshake Output.mp4

Just a simple video filter that will deshake your clip a little (or at least try to). There are better filters for that, but the deshake filter gets the job done in the simplest cases.


ffmpeg -i frame_%04d.png  -vf hqdn3d=8:6:6:4 "denoised_frame_%04d.png"

A “High Quality DeNoise 3D” filter that can remove harsh noise and improve image quality. I often use it to denoise path-tracer renders.

The values are, respectively, the spatial_luma:spatial_chroma:temporal_luma:temporal_chroma filtering weights.


So my camcorder shoots 1080i video, which means interlaced footage. I decided to get something out of it and convert the 1080i 25 fps video to 720p 50 fps video:

ffmpeg -i InputFile -filter:v yadif=1 -s "1280x720" -sws_flags spline -r 50 -c:a libfdk_aac -b:a 196k -c:v libx264 -preset veryslow -crf 18 Output.mp4

The yadif=1 filter outputs the top and bottom fields as separate frames, which means double the fps,

-s is the output footage resolution,

-sws_flags spline selects the scaler used for upscaling (each field of a 1080i frame is really only 1920×540, with top and bottom fields interlaced line by line); spline returns the best quality in my experience,

-r, as always, means framerate.

These are just the most common tasks I do with ffmpeg.

The command line may be intimidating, but once you get the hang of it, it gets the job done insanely fast.

And while this is no news by any means, it’s good to have a cheat sheet with all of the most common tasks listed in one place.

Something extra that I find myself using a lot: ripping stills from interlaced footage:

ffmpeg -i 00034.mts -filter:v yadif=1 -s "1280x720" -sws_flags spline -r 5 -q:v 1 -an -f image2 "frame%05d.jpg"

-r is the output fps – 5 here, meaning 5 frames are grabbed from every second of the 25 fps footage.


Yay! Volumetrics in cycles!

Volumetric clouds in cycles.

How to:

Render times with settings similar to the above are around 10 min at 720p on an i7 3770.

Other results with this approach:


Spring water.

Just a quick render.

Blender Cycles, fairly high sample count; still some grain, but I have no patience to render it for any longer.


also @deviantart.

Zywiec Zdroj is a registered trademark of Zywiec Zdroj SA

Some recent models.

Stuff for a project.

Bathroom stuff.

Models (with a couple more) are available.

Animation test.

Trying to figure out the noise modifier in the graph editor for automated camera shake and movement. The planes’ positions and rotations are modified this way as well.

The great thing about this method is that you can create typical keyframes for position and rotation (or anything, actually) just like you normally would when animating an object (it’s important that the object has at least one keyframe for the attribute you want), then just go to the graph editor, bring up the properties side menu (hit N while hovering over the graph editor workspace) and add the modifier.
I’ve used the noise modifier on position and rotation for the x, y and z axes, with slightly different values on each (strength around 5, scale around 250 in time, and phases 0–300); that gives a smooth sliding motion for the planes, like you would observe during straight flight.
For rotation there is noise as well, but very gentle (strength 0.05, with scale around 100).
And one more thing: when you set the blend type to ‘add’, the noise is simply added on top of your keyframed movement.
No additional dummies or parent objects are needed for this (like you would need with 3ds Max, for example). Now that’s really cool. You can add even more modifiers to enhance the effect and give more randomness, or add a time-limited camera shake, just to spice up your animation. Effortless.

Of course this method isn’t perfect, but it simulates the random movement I wanted well enough. And when joined with slight camera noise on position and rotation (strength 0.005 for rotation and 0.1 for position, with scale 20–60), it gives that nice feeling of planes having to make their way through the air while the pilots try to correct their position to keep formation.
And that’s the idea. Exact settings and values will vary, of course, depending on your scene, camera and object distance, movement type etc.
My scene has objects around size 10 and a camera with a 50 mm focal length.

A simple carpet with particles.

For future reference ;)

Creating a carpet in Blender with a particle system:

- Create a couple of simple meshes (strand instances), set the strand origins to their roots and group them (Ctrl+G while selected); the more different meshes the better, but just a couple will do (if you use random size and rotation settings later),
- add a particle system to a plane, set emission to start and end on the first active frame, with a lifetime as long as you want the strands to be visible (that will make the particles show up all at once and keep them visible for the length of their lifetime),
- turn off physics,
- under the render settings choose Group and add the group you have created; change the size and add some randomness – always helps,
- add rotation with an initial direction, plus some randomness, phase and phase randomness (play with the settings to get the desired effect),
- increase the particle number to get the strand density you want.