Procedural cycles shader for clouds.

A quick walkthrough of shader creation for procedural clouds. Once you follow it, you will be able to simply duplicate a cube and get a unique cloud for each object.

For more complicated shapes you can simply combine cubes (keeping them as separate objects) or add more noise textures with different blending modes.

And the best thing? It uses very low memory to generate really complicated cloud shapes.

Or use a particle system (instancing a cube with randomized size, rotation, etc.) – still, each cloud will be unique.

available @blendswap.com

Convert videos, audio and image sequences quickly and efficiently.

OK, sometimes you need a quick way to convert an image sequence to a video clip, join clips, rip audio, mux in audio from another clip, or deinterlace footage (anybody still shooting interlaced video? Yeah, I need a new camcorder) – and to do it with minimal quality loss, or even without re-encoding.

The way I do this is the ffmpeg way, and the x264 way. Even though not everyone favours the format, it’s fine for simple tasks. And, yes, it’s the command-line way – the fastest way.

The good news is, once you get a grasp of it, you will better understand all the GUI frontends that use ffmpeg, mencoder etc. under the hood to do the actual work anyway.

All of the commands are one-liners, just so you know. Let’s go:


FIRST, SIMPLY ENCODE A FILE

ffmpeg -i InputFile -c:v libx264 -preset ultrafast -crf 0 Output.mp4

This line encodes a lossless file as fast as possible – good for batch jobs etc., but it also produces the biggest file sizes. You can get around that with a single switch: change -preset ultrafast to -preset veryslow. The quality stays the same (because the -crf 0 parameter means lossless output), but the compression gets better – you just contribute a little more time to the task.

And if you ever need real compression, just change the -crf 0 parameter to -crf 18 (visually near-lossless, but with a real gain in terms of file size; this value is good for final footage):

ffmpeg -i inputFile -c:v libx264 -preset veryslow -crf 18 Output.mp4


IMAGE SEQUENCE TO A FILE

ffmpeg -r 25 -i frame%04d.png -vcodec libx264 -crf 18 Output.mp4

The -i (input file) switch assumes your files follow the naming convention frame0000.png, frame0001.png, etc. You express that with the prefix (“frame” in this case) and the number of digits (4 in this case, passed as “%04d”).

In case your frames start from a number other than 0 – let’s say 70 – then you need to pass that information as well:

ffmpeg -r 25 -start_number 70 -i frame%04d.png -vcodec libx264 -crf 18 Output.mp4

The -r 25 switch means output will play at 25 frames per second.
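If your frames are not zero-padded yet, a short loop can rename them into that convention first. Below is an illustrative bash sketch (the pad_frames helper, the demo directory, and the file names are my own assumptions, not part of the original post); it assumes the frames already sort correctly by name:

```shell
# pad_frames: rename every .png in a directory into the zero-padded
# frame%04d.png convention that the -i frame%04d.png pattern expects.
# (Hypothetical helper for illustration only.)
pad_frames() {
    dir=$1
    i=0
    for f in "$dir"/*.png; do
        [ -e "$f" ] || return 0              # no frames, nothing to do
        new=$(printf 'frame%04d.png' "$i")   # e.g. frame0000.png
        mv -- "$f" "$dir/$new"
        i=$((i + 1))
    done
}

# Example: three oddly named frames become frame0000.png..frame0002.png
mkdir -p demo && touch demo/capA.png demo/capB.png demo/capC.png
pad_frames demo
ls demo
```

The glob is expanded once before the loop starts, so renaming files inside the loop is safe.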


IMAGE SEQUENCE TO A SPRITESHEET (MAY BE VIDEO AS WELL)

ffmpeg -skip_frame nokey -i sequence_%04d.png -vf 'tile=8x8' -an -vsync 0 spritesheet.png

If you have an image sequence that would look nice in a game, you may want to create a sprite sheet. In this example an 8×8 grid is created, ready to import into Unity3D, for example.
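If you don’t know the grid size up front, a roughly square layout for n frames is ceil(sqrt(n)) tiles per side. A small sketch (the frame count of 60 is an assumed example value):

```shell
# Pick a roughly square tile grid for n input frames:
# side = ceil(sqrt(n)), used for both columns and rows.
n=60   # assumed frame count for this example
cols=$(awk -v n="$n" 'BEGIN { c = int(sqrt(n)); if (c * c < n) c++; print c }')
echo "tile=${cols}x${cols}"   # prints tile=8x8 for 60 frames
```

The resulting string can be dropped straight into the -vf 'tile=…' filter above; the last grid cells simply stay empty when n is not a perfect square.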


VIDEO TO AN IMAGE SEQUENCE

ffmpeg -i InputFile -an -f image2 "frame%05d.jpg"

Of course, if you plan to re-encode the sequence back to video or do any other processing, it’s better to use a lossless format, like PNG.


JOIN MULTIPLE CLIPS WITHOUT RE-ENCODING

ffmpeg -f concat -i FileList.txt -c copy Output.mp4

Just create a FileList.txt text file that looks like this:
file '/path/file1'
file '/path/file2'
file '/path/file3'

The paths can be either relative or absolute. If your clips have the same parameters, you can join them without re-encoding (the -c copy switch); if they differ in parameters or codecs, you can still join them, but you need to encode the output file:

ffmpeg -f concat -i FileList.txt -c:v libx264 -preset veryslow -crf 18 Output.mp4
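With many clips, the list file is easier to generate than to type. A bash sketch (the make_filelist helper, the clips directory, and the *.mp4 glob are my own assumptions for illustration):

```shell
# make_filelist: write a concat-demuxer list for every .mp4 clip in a
# directory, one line per clip in the form: file 'path'
# (Hypothetical helper, not from the original post.)
make_filelist() {
    : > FileList.txt
    for clip in "$1"/*.mp4; do
        [ -e "$clip" ] && printf "file '%s'\n" "$clip" >> FileList.txt
    done
}

# Example: three clips produce a three-line FileList.txt
mkdir -p clips && touch clips/a.mp4 clips/b.mp4 clips/c.mp4
make_filelist clips
cat FileList.txt
```

The single quotes around each path matter: they are part of the concat demuxer’s list syntax and protect paths containing spaces.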


CUT CLIPS WITHOUT RE-ENCODING

ffmpeg -i InputFile -ss 00:00:30.0 -c copy -t 00:00:10.0 -async 1 Output.mp4

Simply cuts out a 10-second clip, starting at the 30.0 s mark.

-ss seeks to the given time,

-t sets the output clip length.
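If you think in start and end points rather than durations, the -t value is just end minus start. A quick sketch (the timestamps are assumed example values, in whole seconds for simplicity):

```shell
# Derive the -t length from a start and end point.
# ffmpeg itself accepts HH:MM:SS.ms timestamps as in the example above.
start=30
end=40
echo "-ss $start -t $((end - start))"   # prints: -ss 30 -t 10
```

Newer ffmpeg builds also accept a -to switch to specify the end point directly instead of a duration.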


BATCH CONVERT ALL FILES IN DIRECTORY (LINUX)

for vid in *.MTS; do ffmpeg -y -i "$vid" -c:v libx264 -preset veryslow -crf 18 -c:a libfdk_aac -b:a 196k "${vid%.MTS}.mp4"; done

This may look intimidating, but it is as simple as the previous ones. It just iterates through all files in the directory that have the .MTS extension and compresses each one individually into a file of the same name, but with an .mp4 extension. It also compresses the audio into AAC at a 196k bitrate. If you don’t need to compress the audio and simply want to copy it to the output, use the -c:a copy switch instead.
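The output filename in that loop comes from shell parameter expansion: ${vid%.MTS} strips the shortest trailing match of .MTS. A quick sketch with a made-up filename:

```shell
# ${var%pattern} removes the shortest matching suffix, so the .MTS
# extension is swapped for .mp4 when naming the output file.
vid="holiday_clip.MTS"
echo "${vid%.MTS}.mp4"   # prints holiday_clip.mp4
```

Note that the pattern is case-sensitive, so clips named .mts would need their own loop (or the pattern adjusted).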


BATCH CONVERT ALL FILES IN DIRECTORY (WINDOWS)

for %%f in (*.MTS) do ffmpeg -i "%%f" -c:v libx264 -preset veryslow -crf 18 "%%~nf.mp4"
pause

If run from the command line rather than from a .bat file, simply replace “%%f” with “%f”. The “%%~nf” expansion strips the extension, so clip.MTS becomes clip.mp4 instead of clip.MTS.mp4. Pause is just there to keep the terminal window open on finish, so you can review the encoding output.


RIP AUDIO STREAM TO AN .MP3

ffmpeg -i InputFile -ab 192k -ac 2 -ar 48000 -vn audio.mp3

Quite self-explanatory: -ab is the audio bitrate, -ac the number of channels (2 for stereo), -ar the sampling frequency.


MUX AUDIO FROM ONE VIDEO TO ANOTHER WITHOUT RE-ENCODING

From an .mp3 file:

ffmpeg -i AudioInputFile -i VideoInputFile -c copy -map 0:a:0 -map 1:v:0 -c:a copy Output.mp4

From another video clip:

ffmpeg -r 25 -i InputVideoFileWithAudioSource -i InputVideoFileThatNeedsAudio -c copy -map 0:a:0 -map 1:v:0 -c:a copy -shortest Output.mp4

The -map switches take the first audio stream from InputVideoFileWithAudioSource (a file can have multiple audio streams) and the video stream from InputVideoFileThatNeedsAudio, and combine them into Output.mp4. All without re-encoding.

-r sets the framerate,

-shortest means the output will have the length of the shorter input file.


DESHAKE VIDEO

ffmpeg -i InputFile -vf deshake Output.mp4

Just a simple video filter that will deshake your clip a little (or at least try to). There are better filters for this, but the deshake filter gets the job done in the simplest cases.


DEINTERLACE A VIDEO

So my camcorder shoots 1080i video, which means interlaced footage. I decided to get something out of it and convert the 1080i 25 fps video to 720p 50 fps video:

ffmpeg -i InputFile -filter:v yadif=1 -s "1280x720" -sws_flags spline -r 50 -c:a libfdk_aac -b:a 196k -c:v libx264 -preset veryslow -crf 18 Output.mp4

The yadif=1 filter outputs the top and bottom fields as separate frames, which doubles the fps,

-s sets the output resolution,

-sws_flags spline picks the scaler used for upscaling (a 1080i frame effectively carries two interleaved 1920×540 fields, line by line), and in my experience spline returns the best quality,

-r, as always, sets the framerate.

These are just the most common tasks I do with ffmpeg.

The command line may be intimidating, but once you get the hang of it, it gets the job done insanely fast.

And while this is no news by any means, it’s good to have a cheat sheet with all of the most common tasks listed in one place.

Yay! Volumetrics in cycles!

Volumetric clouds in cycles.

Render times with settings similar to the above are around 10 minutes at 720p on an i7-3770.

Other results with this approach:
