2D Sprite Sheets in Blender 3D.

Simple addon for creating Sprite Sheets in Blender. It can render an animation directly to a sprite sheet, or load and combine external images.
Seamless integration, no external software.

Available here.


Screen Layout PIE menu.

Blender’s pie menus make switching screen layouts really convenient.

Problem is, pie menus are filled with options on the fly in alphabetical order, which means you can’t really organize your pie the way you’d like. So I wrote a little addon that does just that – simply fill in your Screen Layout names in the addon configuration next to the corresponding NumPad values, save your preferences and never worry again!

The Help section contains information on how to set up a hotkey for the pie menu.

This is where you get the addon: link.


Baking blender procedural 3D textures into 2D tileable images.

If you like Blender’s 3D procedural textures and want to use them to produce 2D tileable images in other software, here’s an idea of how to bake them.

For an undistorted bake result it’s good to use a cylinder, if you want to tile across one axis only.
However, to tile the image across both axes, you will need a different approach.
No, not the sphere – the torus!
While it produces some distortion, that’s acceptable for less regular shapes, like cracks etc.

.blend file http://www.pasteall.org/blend/35800

Procedural cycles shader for clouds.

A quick walkthrough of shader creation for procedural clouds. When you follow this, you will know how to simply duplicate a cube to get a unique cloud for each object.

For more complicated shapes you can simply combine cubes (keeping them as separate objects) or add more noise textures with different blending modes.

And the best thing? It uses very low memory to generate really complicated cloud shapes.

Or use a particle system (instancing a cube with randomized size, rotation etc) and still – each cloud will be unique.

available @blendswap.com

How to fix .bvh files in Blender to easily animate your rigged character. Retarget .bvh to rigify.

How to fix .bvh files in Blender to easily animate your rigged character and export to Unity3D as a spritesheet for a 2D themed platformer-style game.

With tons of free .bvh files available all over the internet, this will get your Rigify-rigged character up and running in minutes.

The MakeWalk plugin from MakeHuman makes all the bone retargeting a breeze. Well, most of the time.

Convert videos, audio and image sequences quickly and efficiently.

Ok, sometimes you need a quick way to convert an image sequence to a video clip, join clips, rip audio, mux audio from another clip, deinterlace footage (anybody still shooting interlaced video? yeah, I need a new camcorder) – and do all that with minimal quality loss, or even without re-encoding.

The way I do this is the ffmpeg way, with x264. Even though not everyone favours the format, it’s fine for simple tasks. And yes, it’s the command-line way – the fastest way.

The good news is, once you get a grasp of it, you will better understand all the GUI frontends that use ffmpeg, mencoder etc. behind the scenes to do the actual work anyway.

All of the commands are one-liners, just so you know. Let’s go:


ffmpeg -i InputFile -c:v libx264 -preset ultrafast -crf 0 Output.mp4

This line encodes a lossless file as fast as possible – good for batch jobs etc., but also with the biggest file sizes. You can get around that by switching -preset ultrafast to -preset veryslow, which gives the same quality (because -crf 0 means lossless output) but better compression – you just spend a little more time on the task.

And if you ever need real compression, just change -crf 0 to -crf 18 (visually near-lossless, but with a real gain in terms of file size; this value is good for final footage):

ffmpeg -i inputFile -c:v libx264 -preset veryslow -crf 18 Output.mp4


ffmpeg -r 25 -i frame%04d.png -vcodec libx264 -crf 18 Output.mp4

The -i (input file) switch assumes your files are named following the convention frame0000.png, frame0001.png, etc. You express that with the prefix (“frame” in this case) and the number of digits (4 in this case, passed as “%04d“).
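If your frames came out with arbitrary names, a small shell loop can renumber them into that convention first. This is just a sketch – it assumes the glob expands in the correct playback order, and render_*.png is a hypothetical naming pattern:

```shell
# Renumber arbitrarily named frames into the frame%04d.png convention.
# Assumes the shell glob expands in the intended playback order.
i=0
for f in render_*.png; do
  newname=$(printf 'frame%04d.png' "$i")
  mv -- "$f" "$newname"
  i=$((i+1))
done
```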

In case your frames start from a different number – let’s say 70 – then you need to pass that information as well:

ffmpeg -r 25 -start_number 70 -i frame%04d.png -vcodec libx264 -crf 18 Output.mp4

The -r 25 switch means output will play at 25 frames per second.


ffmpeg -skip_frame nokey -i sequence_%04d.png -vf 'tile=8x8' -an -vsync 0 spritesheet.png

If you have an image sequence that would look nice in a game, you may want to create a sprite sheet. With this example an 8×8 grid is created, ready to import into Unity3D, for example.


ffmpeg -i InputFile -an -f image2 "frame%05d.jpg"

Of course, if you plan to re-encode the sequence to video or do any other processing, it’s better to use a lossless format, like png.
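The png variant would look like this (a sketch; the -f image2 switch can be dropped because ffmpeg infers the format from the extension):

```shell
# Extract every frame losslessly as png, discarding audio (-an)
ffmpeg -i InputFile -an "frame%05d.png"
```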


ffmpeg -f concat -i FileList.txt -c copy Output.mp4

Just create a FileList.txt text file that looks like this:
file '/path/file1'
file '/path/file2'
file '/path/file3'

These can be either relative or absolute paths. If your clips share the same parameters you can join them without re-encoding (the -c copy switch); however, if they vary in parameters or codecs, you can still join them, but you need to re-encode the output file:

ffmpeg -f concat -i FileList.txt -c:v libx264 -preset veryslow -crf 18 Output.mp4
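Typing the list by hand gets old quickly; you can also generate FileList.txt from a shell glob (a sketch – it assumes the glob order matches the order you want the clips joined in):

```shell
# Build FileList.txt with absolute paths for the concat demuxer
for f in *.mp4; do
  printf "file '%s'\n" "$PWD/$f"
done > FileList.txt
```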


ffmpeg -i InputFile -ss 00:00:30.0 -c copy -t 00:00:10.0 -async 1 Output.mp4

This simply cuts out a 10-second clip, starting at the 30.0 s mark.

-ss seeks to the start time,

-t sets the output clip length.
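Worth knowing: with -ss placed after -i, ffmpeg decodes from the start of the file up to the seek point. Moving -ss in front of the input makes the seek nearly instant on long files; with -c copy it snaps to the nearest keyframe, so the cut point may be slightly off:

```shell
# Fast (input-side) seek: -ss before -i skips decoding up to the seek point
ffmpeg -ss 00:00:30.0 -i InputFile -t 00:00:10.0 -c copy Output.mp4
```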


for vid in *.MTS; do ffmpeg -y -i "$vid" -c:v libx264 -preset veryslow -crf 18 -c:a libfdk_aac -b:a 196k "${vid%.MTS}.mp4"; done

This may look intimidating but is as simple as the previous ones. It just iterates through all files in the directory that have the .MTS extension and compresses each one individually into a file of the same name, but with the .mp4 extension. It also compresses audio into AAC at a 196k bitrate. If you don’t need to compress the audio, but simply copy it to the output, use the -c:a copy switch instead.


for %%f in (*.MTS) do ffmpeg -i "%%f" -c:v libx264 -preset veryslow -crf 18 "%%~nf.mp4"
pause

This is the Windows .bat equivalent. If used directly from the command line rather than a .bat file, simply replace “%%f” with “%f”. The pause just keeps the terminal window open when finished, so you can review the encoding process.


ffmpeg -i InputFile -ab 192k -ac 2 -ar 48000 -vn audio.mp3

Quite self-explanatory: -ab is the audio bitrate, -ac the channel count (2 for stereo), -ar the sampling frequency.


From an .mp3 file:

ffmpeg -i AudioInputFile -i VideoInputFile -c copy -map 0:a:0 -map 1:v:0 -c:a copy Output.mp4

From another video clip:

ffmpeg -r 25 -i InputVideoFileWithAudioSource -i InputVideoFileThatNeedsAudio -c copy -map 0:a:0 -map 1:v:0 -c:a copy -shortest Output.mp4

The -map switches copy the first audio stream from InputVideoFileWithAudioSource (a file can have multiple audio streams) and the video stream from InputVideoFileThatNeedsAudio, and combine them into Output.mp4. All without re-encoding.

-r sets the framerate,

-shortest means the output will have the length of the shorter input file.


ffmpeg -i InputFile -vf deshake Output.mp4

Just a simple video filter that will deshake your clip a little (or at least try to). There are better filters for that, but the deshake filter gets the job done in the simplest cases.
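One of those better filters is vid.stab, a two-pass stabilizer (assuming your ffmpeg build includes libvidstab; the parameter values here are just a starting point, not a recommendation):

```shell
# Pass 1: analyse camera motion and write the transforms to transforms.trf
ffmpeg -i InputFile -vf vidstabdetect=shakiness=5:result=transforms.trf -f null -
# Pass 2: apply the smoothed transforms
ffmpeg -i InputFile -vf vidstabtransform=input=transforms.trf:smoothing=30 Output.mp4
```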


ffmpeg -i frame_%04d.png  -vf hqdn3d=8:6:6:4 "denoised_frame_%04d.png"

A “High Quality DeNoise 3D” filter, that can remove harsh noise and improve image quality. I often use it to denoise path tracer renders.

The values are, respectively, the spatial_luma:spatial_chroma:temporal_luma:temporal_chroma filtering weights.


So my camcorder shoots 1080i video, which means interlaced footage. I decided to get something out of it and convert the 1080i 25 fps video to 720p 50 fps video:

ffmpeg -i InputFile -filter:v yadif=1 -s "1280x720" -sws_flags spline -r 50 -c:a libfdk_aac -b:a 196k -c:v libx264 -preset veryslow -crf 18 Output.mp4

The yadif=1 filter outputs the top and bottom fields as separate frames, which means double the fps,

-s is an output footage resolution,

-sws_flags spline selects the scaling filter used for upscaling (a 1080i frame actually consists of two 1920×540 fields, interlaced line by line, so each deinterlaced frame needs rescaling) – spline gives the best quality in my experience,

-r as always means framerate.

These are just the most common tasks I do with ffmpeg.

The command line may be intimidating, but once you get the hang of it, it gets the job done insanely fast.

And while this is no news by any means, it’s good to have a cheat sheet with all of the most common tasks listed in one place.

Something extra that I find myself using a lot: ripping stills from interlaced footage:

ffmpeg -i 00034.mts -filter:v yadif=1 -s "1280x720" -sws_flags spline -r 5 -q:v 1 -an -f image2 "frame%05d.jpg"

-r is the fps – here, 5 frames for every second of footage.

Yay! Volumetrics in cycles!

Volumetric clouds in cycles.

How to:

Render times with settings similar to the above are around 10 minutes at 720p on an i7 3770.

Other results with this approach:
