Image Effects: Desaturation Vignette

One of the most time-consuming but most enjoyable tasks I perform regularly with Beer Busters is editing, formatting and posting blog articles. The vast majority of content I work with is created by others; I just polish it and get it up on the interwebs. This includes photos. These photos are often taken with an iPhone, rather than a fancy high-end camera, and are taken under less than ideal conditions for a perfect photograph. So, through my attempts to jazz-up these pics for presentation, I’ve settled on a few techniques that are my go-to methods.

Desaturation Vignette

In case you are not familiar (or have never watched a Blender tutorial by Andrew Price), vignetting “is a reduction of an image’s brightness or saturation at the periphery compared to the image center” (Wikipedia). This effect can be the result of several factors, including filters, secondary lenses, multiple element lenses and anything that essentially causes the light at the periphery of a field of view to be dimmed compared to light in the center. It is radial in nature because lenses are, after all, convex and circular. We think of photographs as being rectangular (and they are, obviously) but that’s because the film plane is rectangular and records a section within a circular image projected by the lens. Often, this effect is undesirable and occurs unintentionally. However, many photographers and digital artists intentionally introduce vignetting or add it in post-production.

Vignetting, from a Nature Academy tutorial

Some would say – myself included – that this effect is often overused and added unnecessarily. It can be a great effect, though. (I rib Andrew Price, but he does some fantastic work and, to be fair, doesn’t use much vignetting these days.) In most cases, however, the effect is used to darken the edges of an image. But, as Wikipedia told us up there, it can also desaturate the edges, which can be a more subtle and more interesting effect. Achieving this in just about any image editing software is fairly simple. Here, I will describe how to do it in GIMP. There are other methods to achieve this result; the technique described here, however, is non-destructive and, as such, more flexible to alter. (The photograph used in this tutorial was taken by Steph Heffner.)

When you load up a photograph in GIMP you’ll have one layer. First, simply duplicate that layer.

Duplicate the background layer

Then, with the top layer selected, go to Colors -> Desaturate. This will bring up a dialogue box with three options: Lightness, Luminosity and Average. Pick whichever one looks the awesomest.

Lightness, here, is the awesomest
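For the curious, the three options compute a gray value from each pixel in different ways. Here is a minimal sketch of the formulas in plain Python; the Luminosity weights are the commonly cited ones, but treat the exact values as an assumption, since they have varied between GIMP versions:

```python
def desaturate(rgb, mode="lightness"):
    """Collapse an RGB pixel (each channel in 0..1) to one gray value,
    approximating GIMP's three Desaturate options."""
    r, g, b = rgb
    if mode == "lightness":    # midpoint of brightest and darkest channel
        return (max(r, g, b) + min(r, g, b)) / 2
    if mode == "luminosity":   # perceptual weighting, green counts most
        return 0.21 * r + 0.72 * g + 0.07 * b
    if mode == "average":      # plain mean of the three channels
        return (r + g + b) / 3
    raise ValueError(mode)
```

A pure red pixel, for instance, comes out much brighter under Lightness (0.5) than under Luminosity (0.21), which is why the three options can look so different on the same photo.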

Now, right click on the desaturated layer in the layer stack and choose “Add Layer Mask” from the menu. This brings up another dialogue; choose the “White (Full Opacity)” option. A white square will appear next to the thumbnail image of the layer in the layer stack. This is a representation of the layer mask, which is essentially a one-channel (grayscale) image. It’s completely white, meaning every area of the layer is visible (not masked).

Add mask to desaturated layer

You can paint in greyscale on the layer mask, altering the opacity of the layer. White areas of the mask are fully opaque, black areas are fully transparent and tones in between are semi-transparent. This is a non-destructive edit; the original data of the desaturated layer is still there. So, at this point, you can use any number of techniques to desaturate any area of the image you want.
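As a sketch of what the mask does per pixel, here is the compositing math in plain Python (the function and names are just illustrative, not GIMP’s API):

```python
def composite(top, bottom, mask, layer_opacity=1.0):
    """Blend one pixel of a masked layer over the layer below.
    `mask` is the layer-mask value at that pixel (1.0 = white = visible,
    0.0 = black = hidden); `layer_opacity` scales the whole layer."""
    a = mask * layer_opacity        # effective opacity at this pixel
    return top * a + bottom * (1 - a)
```

Because the desaturated copy sits on top, a black mask pixel simply lets the original color show through, and the pixel data of both layers stays untouched.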

But, to create the vignette effect, this is what I most often do. First, under the View menu, check “Snap to Canvas Edges” – this is usually off by default. This causes things to, unsurprisingly, snap to the edges of the canvas.

Set snapping to canvas edges

Then create an elliptical selection over the entire image. Now, depending on the degree to which you want the desaturation to extend inward from the edges, you can shrink the selection. To do this, under the Select menu select “Shrink”. A dialogue box will appear in which you can enter the amount of shrinkage in a number of units. (That was, unintentionally, a rather amusing sentence.)

Shrink the selection

I do this because it keeps the selection centered and allows me to define the size of the selection numerically, which can be useful when applying a similar effect to multiple similar images. This method also keeps the aspect ratio of the selection the same as the image. However, you could alternatively set guides at the horizontal and vertical centers of the image, then create the selection by starting at the center and growing outward. If you hold CTRL while making a selection, it expands from the center.

Hold CTRL to expand selection from center

So this selection will define the boundary between the desaturated edges and the fully saturated center. Of course we want the transition between the two to be subtle and gradated. To accomplish that, we simply feather the selection by choosing Select -> Feather. Now we have another dialogue box where we define the amount of feathering.

Feather the selection

Next, simply fill the area of the selection with black using the paint bucket tool. Make sure you are still editing the layer mask, not the layer contents, by clicking the layer mask thumb in the layer stack.

Fill the selected area of the mask with black

Voilà, as they say. In most circumstances you won’t want the edges to be completely desaturated (subtlety, as always, is your friend). So you can simply adjust the layer opacity of the desaturated layer to mute the effect.

Adjust opacity of the desaturated layer
The final result
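The whole select, shrink, feather, fill sequence amounts to painting a radial grayscale ramp into the layer mask. Here is a rough numeric sketch of such a mask in plain Python; it is my own approximation of the result, not GIMP’s actual selection or Gaussian feathering code:

```python
import math

def vignette_mask(w, h, shrink=0.2, feather=0.3):
    """Build a grayscale mask like the one produced in the tutorial:
    black (0.0) inside a centered ellipse, white (1.0) at the edges,
    with a smooth feathered transition between. `shrink` and `feather`
    are fractions of the half-size, standing in for the pixel amounts
    entered in GIMP's Shrink and Feather dialogues."""
    mask = [[0.0] * w for _ in range(h)]
    cx, cy = (w - 1) / 2, (h - 1) / 2
    for y in range(h):
        for x in range(w):
            # normalized elliptical distance: 1.0 on the canvas-edge ellipse
            d = math.hypot((x - cx) / cx, (y - cy) / cy)
            inner = 1.0 - shrink                 # shrunken selection boundary
            # ramp from 0 (inside) to 1 (outside) across the feather band
            t = (d - (inner - feather)) / (2 * feather)
            mask[y][x] = min(1.0, max(0.0, t))
    return mask
```

Multiplied against the desaturated layer’s opacity, this leaves the center fully saturated and fades the color out toward the edges.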

Of course, you can use this technique for a variety of effects, not just desaturation:

Darkened vignette
Radial blur
Soft Glow

You can use this effect to give some subtle highlight to any area of an image you want to bring focus to. If, for example, you want to highlight a subject that happens to not be in the center of the frame, you can simply adjust your selection area accordingly.

Thanks for reading, be sure to keep an eye out for more image effect tutorials coming soon.

“Where Have You Been?” Asks the Internet


Ok, so maybe the whole internet hasn’t been clamoring for answers as to the whereabouts of nanobot master and theZEDLAB proprietor Wayne Baker (that’s me). But some out there may have noticed that the website has not been updated in about two months! That’s a long time – though it doesn’t feel like it. So, for those whose curiosity has for far too long been left unsatiated, here’s the scoop…

Aside from running theZEDLAB and overseeing a swarm of nanobots, I am involved in two other outfits: Ghostbungle Industries – film production and some freelance design and VFX work, and Beer Busters – a craft beer podcast and blog. First, the podcast, which is the more recent venture.

Beer Busters Podcast

I have a brother. His name is Dan. Dan studied broadcasting, with a focus on radio, at Temple University (in Philadelphia). For several years now, Dan has been interning at rock radio station 93.3 WMMR (also in Philadelphia). When not at the radio station or working his other two jobs (he’s a busy guy), he pursues his dream of creating a many-headed media empire in the style of The Nerdist, the brain-child of Chris Hardwick.

I also have a cousin. Her name is Steph. Steph has been obsessed with craft beer and has been homebrewing her own beer for many years. She is a member of the local homebrewers club (Berks County Homebrew Club) and aspires to one day open her own commercial brewery. As an elementary school teacher, Steph has earned (and I mean earned, teaching is an incredibly difficult and underappreciated job) the luxury of having a good deal of free time in the summer and spends a lot of it travelling. And visiting breweries. I mean, every brewery within a 100-mile radius of wherever she finds herself.

One day, Dan and Steph decided to combine their powers to create a craft beer podcast, and Beer Busters was born. After the ball was rolling and they had one episode under their belts, they came to me to do some marketing design for the podcast. I did some logos, promotional stuff, social media imagery, etc. They invited me to sit in on the second episode and I had such a good time I never left. My contributions expanded to include building/maintaining the website, managing digital assets, more promotional design, creating content for the show (Happy Fun Time Games), co-hosting and, ultimately, becoming a full-blown third partner. At the time of this writing, we have six episodes in the bag and will be recording the seventh in just a few days. And it’s been awesome. But it’s also dominated my time, and I could use a little improvement in the time-management area. (I do have to watch TV at some point, right?)

Ghostbungle Industries

Ghostbungle Industries logo

I also have been working with another partner (and cousin), Tom, for several years in a fits-and-starts endeavor to produce short films (and, one day, longer ones), under the name Ghostbungle Industries. Tom has worked with video production and streaming as well as web development at Chester County Intermediate Unit (Intermediate Units are unique entities in the state of Pennsylvania; they essentially provide services to area school districts). He has recently left CCIU to take on the position of Director of Network Engineering at SpectiCast, a regional media streaming start-up.

We have one complete and polished (mostly) short film, an adaptation of Edgar Allan Poe’s The Cask of Amontillado. Since the production of that film we have drifted from project to project, displaying an uncanny knack for coming up with great ideas, making them even greater, and then collapsing under the weight of our own scope-creep (and the distraction of other projects – and life, in general). We have, in the interim, done some freelance graphic design and VFX work for outside clients. Recently, after a bit of soul-searching, we decided to run with a particular idea, the details of which I won’t subject you to just now. This project represents a make-or-break refresh of Ghostbungle and, as such, will require a committed re-focusing on my part.


So, what does this mean for theZEDLAB? Well, as previously stated, it’s clear I need to be a little better organized at managing my time. It seems I’ve come to the conclusion that the best way to do that and accomplish many sundry creative pursuits is to take on more projects than there are hours in a day. And work a regular job, you know, to pay the rent and all. But I am still committed to theZEDLAB and will continue to produce content here.

I have been tossing around the idea, however, of expanding the scope of this site (there’s that scope-creep again) to include more about design in general, instead of focusing exclusively on Blender and CG. I’m probably going to be doing more blog-type stuff, as tutorials, while fun and rewarding, are far more time consuming. There will still be tutorials, and I may also start doing image editing (Photoshop/GIMP) and vector (Illustrator/Inkscape) tutorials. But I have many interests related to design and figure why not just write about all that as well.

I’ve written this post as much for anyone out there who might be interested as I have for myself. All I really want in life is to be creative. But sometimes I bite off more than I can chew, sometimes I get needlessly bogged down in details, and sometimes I just plain get lazy. Lately, as all these various pursuits have been clamoring for space in my brain, I’ve realized that now is the time to dig in, get focused, get organized, and get things done. Life is short, after all.

Material Masking in Cycles

[box color=”gray”]Normals and Material Masking Series – Part 1 | Part 2 | Part 3[/box]


In this third and final installment of the series, I will demonstrate how to use a series of mask images to apply several different shader setups to different areas of an object in a single material. To recap, in the first part of this series, using a simple cube as an example, I created a greyscale bump map image and used it to bake a normal map using Blender Internal. In the second part I discussed the proper way to apply a normal map in both Blender Internal and Cycles.

Bump and normal maps

Now, using the original bump map image as a starting point, I will create a series of simple black and white images to use as masks for applying several types of materials to various areas of the cube, following the details of the normal map. If you were smart ahead of time and saved a layered GIMP (xcf) or Photoshop (psd) image of the bump map, it’s simply a matter of isolating the elements, turning them white, and making the background black.

The Base Material

The base material is a grey metal texture. To achieve this look, I first add a [tooltip gravity=”n” title=”Input -> Texture Coordinate”]Texture Coordinate[/tooltip] node and an
[tooltip gravity=”n” title=”Texture -> Image Texture”]Image Texture[/tooltip] node then plug the “UV” output socket of the Texture Coordinate node into the “Vector” input socket of the Image Texture node. Then, load in the metal texture image (this texture is from and can be found here).

Base metal image texture

Now, I want to mix two shader types for this material: Diffuse and Glossy. First, add the [tooltip gravity=”n” title=”Shader -> Diffuse”]Diffuse[/tooltip] shader node and plug the “Color” output of the Image Texture node into the “Color” input of the Diffuse Shader node. This will apply the metal texture image as the diffuse color of the cube.

For the Glossy shader, I want to color it with the same texture but lighten and de-saturate it first. This is easily achieved by adding a [tooltip gravity=”n” title=”Color -> Mix”]Mix[/tooltip] node, keeping the default type as “Mix” and plugging the “Color” output socket of the Image Texture node into the top “Color1” input socket of the Mix node. Then, set the bottom “Color2” to a gray color (R:0.5, G:0.5, B:0.5) and plug the “Color” output socket of the Mix node into the “Color” input socket of the Glossy node.

Lighten and de-saturate the image texture for glossy color
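Numerically, a Mix node left in plain “Mix” mode is just a per-channel linear blend, which is why mixing toward 50% gray both lightens the dark pixels and pulls every channel toward the middle, i.e. desaturates. A quick sketch (the names are illustrative):

```python
def mix_node(color1, color2, fac=0.5):
    """Per-channel linear blend, as computed by a Cycles Mix node in
    "Mix" mode: fac=0 returns Color1 untouched, fac=1 returns Color2."""
    return tuple(c1 * (1 - fac) + c2 * fac for c1, c2 in zip(color1, color2))

gray = (0.5, 0.5, 0.5)
# a dark, saturated texture pixel moves halfway toward neutral gray
tinted = mix_node((0.1, 0.3, 0.2), gray, 0.5)
```

Raising the node’s Fac slider pushes the glossy color further toward flat gray; lowering it keeps more of the original texture.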

To mix these two shader nodes I will add a [tooltip gravity=”n” title=”Shader -> Mix Shader”]Mix Shader[/tooltip] node then plug the “BSDF” output socket of the Glossy node into the top “Shader” input socket of the Mix Shader node and the “BSDF” output socket of the Diffuse node into the bottom “Shader” input socket. I will also use the metal texture image to control the mix factor of these two shader nodes, but I want to have some control over the mixing. For that I will add a [tooltip gravity=”n” title=”Converter -> Color Ramp”]Color Ramp[/tooltip] node then plug the “Color” output of the Image Texture node into the “Fac” input of the Color Ramp node and plug the “Color” output of the Color Ramp node into the “Fac” input of the Mix Shader node (this is mixing data types but for this purpose it will work fine). Finally, plug the “Shader” output of the Mix Shader node into the “Surface” input of the Material Output node.

Note: the normal map is also applied here, for this process see Part 2 of this series. Make sure to plug the “Vector” output socket of the Normal map node into the “Vector” input sockets of all subsequent shader nodes added.

Mixing the diffuse and glossy shaders (click for larger view)
Render with base metal texture

Altering the Color of the Base Material with a Mask Image

The first area of detail on the cube in which I will alter the material is the rough square area on the right-hand side of the cube. For this part, I will simply alter the color of the metal texture in this area. To do this I first create the mask image, using my original bump map as a guide, where the area of the square is pure white and the remaining area is pure black.

Mask image created using bump map as a guide

Now I will add another Image Texture node, plug the “UV” output socket of the Texture Coordinate node into the “Vector” input socket of the Image Texture node and load in the mask image. Since all I want to do here is tint the metal texture image in this particular area of the cube, there is no need to add a new shader. I can simply alter the image before it plugs into the Diffuse node. To do this I will add a Mix node with the mode set to “Overlay” and drop it in between the “Color” output socket of the Image Texture node and the “Color” input socket of the Diffuse node. Making sure the “Color” output from the Image Texture node is running into the top “Color1” input of the Mix node, I can then change the “Color2” input to whatever color I want to tint the texture.

Tinting an area of the image texture with a mix node set to overlay (click for larger view)
Render with tinting applied
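For reference, Overlay combines a multiply for dark base values with a screen for light ones, so the tint lands mostly in the midtones while blacks and whites are preserved. A sketch of the standard per-channel formula (I’m assuming Blender’s Overlay mode follows it):

```python
def overlay(base, blend):
    """Standard Overlay blend for one channel, values in 0..1:
    darks get multiplied, lights get screened."""
    if base < 0.5:
        return 2 * base * blend
    return 1 - 2 * (1 - base) * (1 - blend)
```

This is why tinting through an Overlay node keeps the metal texture’s contrast instead of washing it out the way a plain Mix would.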

Adding Emission to the Engraved Lines

The next area of detail I want to focus on is the engraved lines that run along the top and bottom edges of the cube and around the circular details on the left-hand side. I will make these emit a cool sci-fi green light. Again, the first step is to create a mask image to isolate this area of the cube. Using the original bump image this is a simple task.

Mask image created using bump map as a guide

Since I want this area to be glowing with light, I’ll first add an [tooltip gravity=”n” title=”Shader -> Emission”]Emission[/tooltip] shader node and set the color to a yellow-green and the strength to 1. Also, for the sake of organization, I will group the nodes comprising the metal material by shift-selecting them all and hitting Ctrl+G. Then press Tab to close the Group Node. Next, to mix the node tree to this point with my new Emission node, I’ll add a Mix Shader node and plug the “Shader” output socket of the Group Node into the top “Shader” input of the Mix Shader node and the “Emission” output of the Emission node into the bottom “Shader” input.

Note: there is no “Vector” input on an Emission shader node, so I cannot apply the normal map to this shader.

Group the previous material setup and add the emission node

Now, I want to control the mix factor of my base material and the Emission shader with the mask image. So, I’ll add another Image Texture node, load in the mask image and make sure the “UV” output of the Texture Coordinate node is plugged into the “Vector” input of the new Image Texture node. Then, simply plug the “Color” output socket of the Image Texture node into the “Fac” input socket of the Mix Shader node. (This is again mixing data types, but since the mask image is only black and white there will be no loss of data here.)

Use mask image to control the mix factor of emission shader

Render with emission shader mix
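The Mix Shader node’s “Fac” works just like the Mix node’s: a weight between its two inputs. A sketch of why a pure black-and-white mask loses nothing in the color-to-factor plug (the color tuples are illustrative stand-ins for shader outputs):

```python
def mix_shader(shader_a, shader_b, fac):
    """Weight the two inputs of a Mix Shader node by `fac`:
    fac=0 is all shader_a, fac=1 is all shader_b. A pure black-and-white
    mask driving fac gives every pixel exactly one of the two shaders."""
    return tuple(a * (1 - fac) + b * fac for a, b in zip(shader_a, shader_b))

metal = (0.40, 0.40, 0.45)   # stand-in for the grouped base material
glow = (0.60, 1.00, 0.30)    # stand-in for the yellow-green emission
```

Because the mask image contains only 0.0 (black) and 1.0 (white), every pixel resolves cleanly to one shader or the other, with blending only where the mask has gray edge pixels.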

Darkening the Vent Holes

I want to fake some depth in the vent on the left-hand side of the cube and I will do that by simply coloring in the holes with pure black. For this and the next part, I will use the same technique that was used above and “daisy-chain” onto the end of the node tree. I will also again group the previous material nodes. First, of course, create the mask image.

Mask image created using bump map as a guide

Then add a Diffuse shader node (make sure the Normal Map “Vector” is plugged into the Diffuse node “Vector”) and set the color to pure black, add a Mix Shader node and plug in just like before. And, finally, add another Image Texture node, load in the mask image (make sure the “UV” from the Texture Coordinate node is plugged into the “Vector” of the new Image Texture node) and plug the “Color” output socket of the Image Texture node into the “Fac” input of the Mix Shader node.

Adding black shader to vent holes (click for larger view)

Making the Text Glass

Lastly, I want to make the text on the top of the cube glass and have light coming from behind it. I’m sure you get the routine by now, so I won’t repeat myself. Just add a Glass shader node and apply the same technique, using a mask image for the Text.

Mask image created using bump map as a guide
Adding glass shader to text (click for larger view)

Now, to add the light behind the glass, I will add a plane to the scene and position it inside the cube just under the top side. For visual interest, I prefer the light not be even across the plane, but brighter in the center and fade out to the edges. To do this, UV unwrap the plane, then in GIMP or Photoshop create a grayscale image of a radial gradient – fairly simple.

An emission plane inside the cube, to shine light through the glass, and gradient texture to control emission intensity
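If you would rather generate the gradient procedurally than paint it, the math is simple. A sketch (my own construction, not tied to any particular GIMP or Photoshop feature):

```python
import math

def radial_gradient(size):
    """A square grayscale radial gradient: white (1.0) in the middle,
    falling off linearly to black (0.0) at the nearest edge midpoint,
    the kind of image used to make the emitter plane brightest at its
    center."""
    c = (size - 1) / 2
    img = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            d = math.hypot(x - c, y - c) / c   # 0 at center, 1 at edge midpoint
            img[y][x] = max(0.0, 1.0 - d)
    return img
```

Scaled to 0–255 and saved as an image, this is the sort of texture that gets plugged into the emitter plane’s Image Texture node.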

Then use the following node setup, loading the gradient image into the Image Texture node. The Mix node is used to control the overall brightness without having to edit the image.

Node setup for emitter plane

The final result:

Final render

This technique works well to mix materials following the influence of a normal or bump map. But it can be used in a wide variety of circumstances, anytime you need part of an object to be one material and another part a different material. That’s it for the Normals and Material Masking series, I hope it was helpful and informative.

<- Part 2: Using Normal Maps in BI and Cycles

Some textures used in creating models appearing in imagery on this post are from, a most excellent source for photo textures.

Quick Reference: Using Normal Maps in BI and Cycles

[box color=”gray”]Normals and Material Masking Series – Part 1 | Part 2 | Part 3[/box]


In Part 1 of this series I covered how to bake a normal map from a bump map using Blender Internal’s texture baking abilities. This part demonstrates how to properly apply a normal map to a material in both Blender Internal and Cycles.

To quickly review, I have a simple subdivided cube, with some holding loops, unwrapped and the UVs arranged to maximize the sides of the cube facing the camera. I have also loaded the normal map image into the UV/Image Editor workspace.

Cube UV layout and normal map

Using a Normal Map in Blender Internal

To apply a normal map to a material in Blender Internal, first add a material to the cube then add a new texture to the material. Next, select “Image or Movie” and load in the normal map image. Under Image Sampling check the box next to Normal Map. It’s also a good idea to check Minimum Filter Size under Filter and set the Filter Size to 0.10 (the lowest setting possible). This will give a better quality result, especially if the normal map image is not particularly large. Set the Mapping to UV and, under Influence, check Normal and set the value to 1.

Texture settings for normal map in Blender Internal (Click for larger view)

Using a Normal Map in Cycles

To apply the normal map to the cube in Cycles, first, add a material to the cube. Now bring up the Node Editor workspace (I generally split the window vertically with the Node Editor on the top and the 3D Viewport on the bottom.) By default, you will have a Diffuse BSDF shader node plugged into the “Surface” socket of a Material Output node. We’ll just use the diffuse shader, but give it a sexy color.

Default Cycles material nodes

Now we need to get the normal map image into the mix and we’ll do that with two nodes: Texture Coordinate and Image Texture. First, hit SHIFT+A to add a node and select Input -> Texture Coordinate, we’ll use this to map the image to the UV coordinates of the cube. Then add an Image Texture node (Texture -> Image Texture). Plug the “UV” output socket of the Texture Coordinate node into the “Vector” input socket of the Image Texture node. Also, change the drop-down box below the image file on the Image Texture node from “Color” to “Non-Color Data.”

Texture Coordinate and Image Texture nodes (Click for larger view)

Now at this point, there are two things I’ve seen people do that are not the best way to apply a normal map:

[list icon=”busy”]

  •  Plugging the “Color” output socket of the Image Texture Node into the “Displacement” input socket of the Material Output node.
  •  Adding a Normal node, plugging the “Color” output socket of the Image Texture node into the “Normal” input socket of the Normal Node and the “Dot” output socket into the “Displacement” input socket of the Material Output node.


Doing either of these mismatches data types, which are indicated by the colors of the sockets. Though in some circumstances this works fine, it is usually best practice to avoid plugging a socket into a socket of a different color. In this case, though you will get some displacement as a result, you are essentially turning the normal map back into a bump map by limiting the range of data being used. (For more info on data types, etc. keep an eye out for an upcoming post: Understanding the Node Interface.)

How NOT to set up a normal map (Click for larger view)

The proper way to apply the normal map is to add – no surprise here – a Normal Map node (Vector -> Normal Map). Then plug the “Color” output from the Image Texture node into the “Color” input of the Normal Map node and the “Vector” output into the “Normal” input of the Diffuse BSDF node. This converts the color data from the image (yellow socket) into vector data (purple socket) and applies it to the normal of the shader node. Make sure the drop-down box is set to “Tangent Space” and the UV box is set to “UV Map” (or whatever the name is of your object’s UV map to which the normal map is aligned.) I’m not sure why this UV option is here; the Image Texture node vector is already set to UV, but best to do both.

Using the Normal Map node (Click for larger view)
Cube render with normal map in Cycles
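What the Normal Map node actually does with the color data is a simple remap. A minimal sketch of the tangent-space decode:

```python
def decode_normal(rgb):
    """Convert a tangent-space normal-map pixel (each channel in 0..1)
    into a surface-normal vector: each channel is stretched from [0, 1]
    to [-1, 1]. The "flat" color (0.5, 0.5, 1.0) decodes to (0, 0, 1),
    pointing straight out of the surface, which is why normal maps look
    mostly blue."""
    return tuple(2 * c - 1 for c in rgb)
```

This also shows why “Non-Color Data” matters on the Image Texture node: color management would shift the channel values before the remap and skew the decoded normals.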

If you are building a material that mixes multiple shader nodes, simply plug the Normal Map “Vector” output socket into the “Normal” input socket of all the shader nodes. (Not all shader nodes have a “Normal” input, such as Emitter shader nodes.) As an example, I’ve mixed a Glossy shader with the Diffuse shader and applied the normal map to both. (I’ve collapsed some nodes in the image below to save space.)

Using a normal map in a material with multiple shaders (Click for larger view)
Cube render in Cycles with mixed shader material

The next part of this series, Material Masking in Cycles, will demonstrate how to use mask images derived from the original bump map to mix several shaders, producing the result below with one material.


<- Part 1: Baking Normals from a Bump MapPart 3: Material Masking in Cycles ->

Some textures used in creating models appearing in imagery on this post are from, a most excellent source for photo textures.

Quick Reference: Baking Normals from a Bump Map

[box color=”gray”]Normals and Material Masking Series – Part 1 | Part 2 | Part 3[/box]


Brief Overview of Bump and Normal Maps

Normal Maps and Bump Maps are textures used to fake detail on a 3D mesh using data from 2D images. A bump map uses a single channel image (grayscale) in which 50% gray represents the baseline surface of the face. Values toward white indicate raised features and values towards black indicate recessed features. A normal map is a kind of three dimensional bump map that uses the RGB channels of an image to represent perturbations in the X, Y and Z axes.

Examples of a bump and normal map
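To make the relationship concrete, here is a rough sketch of how normals can be derived from a bump map by differencing neighboring heights. This is my own illustration of the idea, not Blender’s baking code, and sign conventions (e.g. whether the green channel is flipped) vary between engines:

```python
import math

def bump_to_normal(height, scale=1.0):
    """Derive per-pixel normals from a grayscale bump map (a 2D list of
    0..1 heights) using finite differences, then encode the [-1, 1]
    vectors back into 0..1 RGB pixels."""
    h, w = len(height), len(height[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # slope of the height field in x and y (clamped at the borders)
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * scale
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * scale
            n = (-dx, -dy, 1.0)
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            n = tuple(c / length for c in n)
            # re-encode [-1, 1] components into [0, 1] colors
            out[y][x] = tuple(c * 0.5 + 0.5 for c in n)
    return out
```

A flat 50% gray region yields the familiar (0.5, 0.5, 1.0) “normal-map blue”, while sloped regions shift the red and green channels.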

The most common way of generating a normal map is by “baking” it onto the UVs of a mesh. There are several ways of doing this. The most common are:

[list icon=”sign-in”]

  • Using the geometry of a high-poly detailed mesh to bake normals onto a lower-poly version (this technique is used extensively in video games.)
  • Using Blender’s Multiresolution modifier, which allows you to sculpt a subdivided version of a mesh while maintaining the lower-resolution base mesh; you can bake normals from the subdivided version onto the base mesh.
  • Modeling separate objects just above the surface of another object; this geometry can then be baked by projecting it onto the underlying faces.


But there is another way to generate a normal map in Blender, by baking from a bump map.

Baking Normals from a Bump Map

In this example, I’ve simply added a Subdivision Surface modifier to a cube and added in some edge loops to sharpen the edges. I’ve then marked seams and UV unwrapped the cube, arranging the UVs so that the sides facing the camera are isolated and enlarged and those facing away are scaled down. In the UV/Image Editor I’ve created a new image (at the bottom of the UV/Image Editor workspace: click New, an image options dialogue box will appear, name it and enter a size – I’ve used the default 1024×1024) and exported the UV Layout (at the bottom of the UV/Image Editor workspace: UVs -> Export UV Layout).

UV-Unwrapping the cube (click for larger view)

Using the UV Layout image as a guide, I created this bump map in GIMP, with 50% gray as the background. When saving out the final bump map image, make sure the UV Layout is not visible.

Bump map drawn in GIMP – the UV lines are not visible in the final bump map image (Click for larger view)

Currently, texture baking can only be done in the Blender Internal render engine. The texture generated, however, can be used in either BI or Cycles (or any other rendering engine, provided normal maps are interpreted the same way – some renderers may apply the coordinates differently.) To create the normal map, first add a material to the cube (the material settings are not important for this task.) Then add a texture, set to “Image or Movie” and open the bump map texture. Make sure the mapping is set to UV and the influence is set to Normal with a value of 1. Also, under Bump Mapping Method select Best Quality. Note: if you are baking normals from a bump map derived from a photographic texture, changing the Bump Mapping Method will alter the level/frequency of detail baked into the normal map from the bump map.

Material and texture settings for the bump map

Next, under the render panel there is an area labeled Bake. Set the Bake Mode to “Normals” and hit Bake. (The other bake settings are the defaults.) As long as the bump map texture is active on the cube material this will bake a normal map based on the influence of the bump map. Note: make sure the cube is selected and there is an image active for the UVs (this was done before exporting the UV Layout). Otherwise you will get an error. Once it bakes, you can save out the normal map from the UV/Image Editor.

The normal map bake and baking settings

So, why go to all this trouble to create a normal map? A normal map will inherently shade and respond to lighting better than a bump map, simply because it uses more data and alters the surface normal on all three axes. (Although, sometimes a bump map is more appropriate.) Below is an example of two cubes, with a simple metal texture, rendered in Cycles. One has just the bump map applied and the other just the normal map. You can see the details look deeper and crisper with the normal map, though you may not want such heavy effects – in which case you can use the bump. By baking the normal map, you add to your options and, in most cases, can get better results than bump mapping alone.

Comparison of the influences of a bump and normal map (Click for larger view)

In Part 2 of this series, I will cover how to properly use normal maps in both Blender Internal and Cycles. Part 3 will cover material masking, a technique I used to achieve this final result.


Part 2: Using Normal Maps in BI and Cycles ->

Some textures used in creating models appearing in imagery on this post are from, a most excellent source for photo textures.

Quick Reference: Using “Clamp” in Cycles

Being that my current machine isn’t nearly as meaty when it comes to processing as I’d like – and doesn’t have a graphics card that is compatible with Cycles GPU rendering – I’m always looking for ways to improve performance when rendering in Cycles. Blender Guru has a post worth checking out, 4 Easy Ways to Speed Up Cycles, but I’d like to mention a simple technique I’ve just recently discovered, using the Clamp setting.


Using Clamp

This example is still a work in progress, but it is suitable for demonstrating the use of Clamp. (You can view the final image here in my portfolio.) The scene is lit by two emitter planes and an HDRI map. The bounces are already set to pretty conservative levels, but it's still firefly city. Here's a render after 2642 samples with Clamp set to 0, using progressive refine so the whole image renders at once. (I won't tell you how long it took on my computer; it's a little embarrassing.)

2642 Samples, Clamp set to 0

In the Render panel, under Sampling, there is an option called Clamp (more on the particulars of what it does below). Setting Clamp to 2 gives this result after 1363 samples.

1363 Samples, Clamp set to 2

You’ll notice that the image is a little dimmer and duller; the colors are less vibrant. That is easily fixed in the compositor with a simple Color Balance node. Essentially, I want to increase the contrast and brighten the image. We might as well do a little color correction while we’re at it, too.

With Color Balance node, reduce the value of the Lift and increase the value of the Gain

With a little post-processing, the image is less muddy. I’ve reversed the dimming effect of Clamp and, at the same time, added some flair that I would have added anyway. A bit more refinement of the image would be a good idea, but this illustrates the basic point.
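For the curious, reducing Lift and increasing Gain boils down to simple per-channel math. The sketch below uses one common lift/gamma/gain formulation; this is an assumption on my part (Blender's Color Balance node exposes the same three controls, but its exact internal formula may differ):

```python
def lift_gamma_gain(value, lift=0.0, gamma=1.0, gain=1.0):
    """One common lift/gamma/gain formulation for a 0..1 channel value.
    This is an approximation for illustration, not Blender's exact math."""
    lifted = value + lift * (1.0 - value)     # lift raises (or lowers) the blacks
    gained = lifted * gain                     # gain scales the whites
    return max(gained, 0.0) ** (1.0 / gamma)  # gamma bends the midtones

# Reducing lift and increasing gain stretches the tonal range:
print(lift_gamma_gain(0.2, lift=-0.05, gain=1.2))  # shadows get darker
print(lift_gamma_gain(0.7, lift=-0.05, gain=1.2))  # highlights get brighter
```

Pushing the shadows down and the highlights up at the same time is exactly the contrast boost that counteracts Clamp's dimming.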

1363 Samples, Clamp set to 2, Color Balance adjusted

So, utilizing the Clamp setting, I was able to get a render that is virtually noise-free in about half the render time. You’ll have to experiment to find a Clamp value that balances render time and image quality for each scene. Remember that using Clamp will dim the image (particularly the highlights), but that is usually fixed pretty easily with some post-processing. (You’ll notice the paper on the mini notebook has blown out a bit; this can be fixed by making the texture darker.)

Why Use Clamp

For more detail on Clamp, as well as other noise reduction techniques, explanations of how they work and what causes noise in the first place, I suggest checking out this page in the Blender Wiki. Briefly, Clamp caps the value of each sample, easing fluctuations from pixel to pixel. This obviously reduces the physical accuracy of the final image and dulls the brighter highlights, but these issues are usually either negligible or easily corrected.
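The idea is easy to see in miniature. Here is a hypothetical sketch in plain Python (my own illustration, not Cycles' actual code) of how capping each sample tames a firefly, a rare light path with an enormous value:

```python
def render_pixel(samples, clamp=0.0):
    """Average light samples for one pixel, optionally capping each sample
    first -- the essence of the Clamp setting (0 disables clamping)."""
    if clamp > 0.0:
        samples = [min(s, clamp) for s in samples]
    return sum(samples) / len(samples)

# Ten ordinary samples plus one "firefly": a rare, extremely bright path.
samples = [0.8] * 10 + [400.0]
print(render_pixel(samples))            # ~37.1, one sample wrecks the pixel
print(render_pixel(samples, clamp=2.0)) # ~0.91, close to the true 0.8
```

Without the cap, the firefly would take thousands more samples to average away; with it, the pixel settles near its true value almost immediately. The cost is visible in the same example: any legitimately bright sample above the clamp value is dimmed too, which is why the highlights dull.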

I chose to use Clamp over Filter Glossy or other noise-reduction techniques in this render for three main reasons:

  • It’s faster, easier and more effective at noise reduction over the entire image
  • Given the amount of glass and liquid as well as their prominence in the image, I wanted to preserve the accuracy of those light interactions as much as possible
  • It doesn’t involve making adjustments to individual elements or shaders


Disabling caustics entirely removes a real physical effect of light (though it often doesn’t appreciably diminish the quality of the image, and it is a good idea to use it in conjunction with setting a Clamp value), while Filter Glossy forces a reduction in sharpness. With Clamp, the inaccuracy is mostly confined to the intensity of highlights.

Project Breakdown: Male Head

During the creation of this portrait I developed a workflow that involved rendering out several images in Blender, then layering those images together in GIMP. The video shows each separately rendered element building to the final composite, with the layer blending mode and opacity noted for each. This technique is obviously best suited for still images, but provides a number of advantages:


  • Creating duplicates of the head model for each hair particle system, with emitter rendering off and the hair layer masked by the original, gives just the visible portions of the hair against transparency and allows tweaking of each hair element without re-rendering the entire scene
  • Rendering the head model with a plain white material and just the bump applied, and again with a material using just the baked occlusion texture, reinforces the bump and ambient occlusion
  • Adjusting the blend mode and opacity of each element, and tweaking elements “by hand” (such as erasing or subduing areas of specular highlight), controls the overall look


This breakdown does not include any of the modeling process. The base mesh was created by a combination of box modeling and sculpting, then retopologized and detail-sculpted with Blender’s Multiresolution modifier. Sculptris was used to sculpt the bump mapping, using several skin texture brushes.

Dynamic Lighting in the Compositor

This tutorial demonstrates how to dynamically alter the intensity and color of the lighting in a scene in Blender’s compositor, after rendering. I first read about this technique in Digital Lighting and Rendering by Jeremy Birn (a book I highly recommend), and decided to see if I could figure out the particulars and do it in Blender. Essentially, by using pure red, green and blue lights, you can split the color channels of the image, adjust them independently, and recombine them.

It should be noted, the main body of this tutorial covers a three-point lighting system in the Blender Internal rendering engine. However, this can be used with more than three lights and can also be done in Cycles. These points will be covered at the end. Also, if you’re already experienced at using the compositor, after reading the “Setting Up the Scene” portion you can jump down to the full node tree image below to see the compositing setup.

Setting Up the Scene

In the example, I’ve set up three spot lamps (though you can use any lamp type). In the Lamp Settings panel for each lamp, set the colors of one to pure red, one to pure green and one to pure blue, and set the energy of each to 1. Hexadecimal values for the colors are shown in the image.


Now, go to the Render panel. Under Layers you will find Passes; these are the different render passes the renderer will deliver separately when checked. In the compositor, each checked pass appears as a socket on the Render Layers node. Check “color”; this will deliver an unshaded pass of just the scene colors. It is important not to check the “diffuse” pass, which would deliver a shaded color pass of the scene. The shading here will come from the split RGB channels. That’s it; now render and jump to the compositor (ctrl+left arrow, or select “compositing” from the screen layout drop-down menu at the top of the window).


In the Compositor

As always, make sure you check “use nodes” in the toolbar at the bottom of the upper workspace. You will now have a Render Layers node with a “color” socket along with the usual “image”, “alpha” and “Z” sockets. Hit shift+a to add a node, select Converter -> Separate RGBA. Plug the “image” socket from the Render Layers node into the “image” socket on the Separate RGBA node. The “R”, “G” and “B” output sockets of the Separate RGBA node will now output grayscale images representing the red, green and blue channels of the rendered image. Since we set our three lamps to be pure red, green and blue, these images will represent the influence of each separate lamp.

Now add a Mix node (Color -> Mix). Change the mode in the drop down box from “mix” to “color” then plug the “R” output socket from the Separate RGBA node into the top “image” socket of the Mix node and the “color” output socket from the Render Layers node into the bottom “image” socket on the Mix node. This takes the grayscale image representing the red channel and colors it with the unshaded colors from the color pass, essentially giving what the render would look like with just that lamp.

Now we can add two more Mix nodes set to “color” mode and do the same thing for the remaining “G” and “B” output sockets of the Separate RGBA node. At this point, we have the influence of the three lamps isolated and we can pretty much do whatever we want with each. You can add Brightness/Contrast nodes, Color Balance nodes, or anything else to get all kinds of crazy effects. But we’re concerned with simply altering the intensity and color of each lamp, and the best way to do that is with RGB Curves nodes. Add three RGB Curves nodes (Color -> RGB Curves) and plug the “image” output socket of each Mix node into one. (I’ve collapsed the lower two Mix and RGB Curves nodes in the image to save space.)


Then, to combine the whole image back together, add another Mix node, this time set to “add” mode, and plug the “image” output sockets of two of the RGB Curves nodes into the two “image” input sockets of the Mix node. Next, add another Mix node (also set to “add”) and plug the “image” output of the previous Mix node into the top “image” input of the second Mix node and the “image” output of the remaining RGB Curves node into the bottom “image” input socket. Finally, plug the “image” output of the last node into the “image” input of the Composite node. Here is the final node setup (click for larger view):

Now, by adjusting the “C” channel of the RGB Curves nodes you can change the intensity of each lamp, and by adjusting the “R”, “G” and “B” channels you can change the color. Here’s a comparison of the render output and the composited output with adjustments to the RGB Curves nodes:
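The whole node tree boils down to simple per-pixel arithmetic. The sketch below is a hypothetical, simplified model in plain Python (my own function names and structure, not the Blender API): each channel of the render holds one lamp's grayscale influence, which gets scaled (intensity), tinted (color), multiplied by the unshaded color pass, and added back together:

```python
def composite(influences, energies, tints, albedo):
    """Recombine per-lamp influences stored in the R, G and B channels.
    Each lamp's grayscale influence is scaled by a new 'energy', tinted
    with a new lamp color (the job of the RGB Curves nodes), multiplied
    by the unshaded 'color' pass, and the results are summed -- a
    simplified model of the Mix ('add') node tree described above."""
    out = [0.0, 0.0, 0.0]
    for influence, energy, tint in zip(influences, energies, tints):
        for c in range(3):
            out[c] += influence * energy * tint[c] * albedo[c]
    return tuple(out)

white = (1.0, 1.0, 1.0)
# One pixel whose channels record the three lamps' influence: the red
# lamp contributes 0.6, the green lamp 0.3, the blue lamp 0.1.
pixel = (0.6, 0.3, 0.1)

# Neutral settings simply reproduce the combined shading...
print(composite(pixel, (1, 1, 1), [white] * 3, white))
# ...while doubling the first lamp's curve acts like re-rendering
# with that lamp's energy set to 2 -- without re-rendering anything.
print(composite(pixel, (2, 1, 1), [white] * 3, white))
```

This is why the technique only works when the lamps are pure red, green and blue: only then does each channel isolate exactly one lamp's contribution.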

Notes on Using More Lamps and Cycles

So this works fine and dandy for three lamps, as there are three channels in an RGB image. But what if you want to use more lamps? You can, by using Light Groups.

Say, for example, you wanted to do this with five lamps. You can select three of those lamps (colors set to pure red, green and blue) and hit ctrl+g to create a group. Then name it something clever like “Group A”. Then select the remaining two lamps (colors set to pure red and green), do the same thing and name that group something equally clever, like “Group B”. Now go to the Render panel and, under Layers, add a new render layer (hit the plus sign next to the Render Layer list). Again, clever name. Make sure to select “color” under the Passes for the new render layer.

A bit further down on the panel you will see “Light:” and a box. Click the box and up pop your Light Groups. Select one for one render layer and the other for the other render layer. This causes each render layer to be influenced only by the lights in its assigned group. (Note: it doesn’t matter which Scene Layer your lamps or any other objects in the scene are located on, provided they’re active for rendering, of course.)


Now you can just copy the node tree setup for three lamps and change the render layer in the drop-down box of the second Render Layers node to the new layer. Since we only have two more lamps, you can delete everything coming out of the “B” socket of the Separate RGBA node. Finally, just mix these two trees together at the end with another Mix node set to “add”. And, baby, you got a stew.

So, what about Cycles? Well, you can apply this technique in Cycles as well. Everything is exactly the same except for the render passes. Instead of a single color pass option, Cycles allows you to choose to deliver Direct, Indirect and Color passes for Diffuse, Glossy and Transmission rays. Just select Color for each.


Then in the compositor, use Mix nodes to add the three output sockets these passes create and continue from there with the same node setup described above.

I have to note, I’ve only experimented briefly with this technique in Cycles. Some issues may arise, particularly with coloring reflections. Also, any environment lighting will obviously change the whole dynamic. The example I used here has the world color set to absolute black. I’m sure there are ways to deal with these issues, however. If you come up with or know of any tips that can make this effective in Cycles, feel free to comment. I may explore this further in a later tutorial.

Anyway, that’s all for now. Hope you learned something and that I wasn’t too long-winded. Until next time…

theZEDLAB Site Launch

My swarm of nanobot minions and I have been working diligently for weeks, and now our labors come to fruition. This is theZEDLAB, a place for Blender tutorials and resources as well as a blog featuring all things CG.

My name is Wayne, nice to meet you. I have been practicing digital art in many forms for most of my life. In recent years, I have moved into the realm of 3D modeling and rendering, thanks to the premiere open-source, completely free, and surprisingly powerful software that is Blender. You can read more about me and my background on the about page and in my portfolio.

TheZEDLAB is an idea I’ve been pondering for quite some time. Sites like this (or what I hope this will be) are the reason I came to the world of CG. What seemed at one time to be something far too complicated, based on software far too expensive, has become a focal point of my artistic endeavors. Blender, which is constantly evolving and growing through the contributions of its users, has put the power of CG into the hands of anyone willing to put in the effort and take the time to learn. For this, I, and the entire Blender community, owe a debt of gratitude not only to the people who make Blender, and to fantastic educational sources like Blender Cookie and Blender Guru (the two I most frequent), but also to each other. The online community of Blender users is unique in its passion, its support of one another and its ever-growing pool of talent, from inspired amateurs to pros. And so this is my way to try to give back to those who have given me so much, and to hopefully inspire others to explore their imaginations, to take on new challenges, to…boldly go where no one has gone before? No, wait, that’s not right – above all, to create.

If you share my excitement at seeing technology evolve to allow whatever is in our heads to be given life, and for that technology to be available to everyone, if you are a fellow Blender Nerd, then I hope you will enjoy what I have to offer here, in this little corner of the interwebs.

What you will find at theZEDLAB

The tutorials will cover a range of topics, from modeling and texturing to lighting and rendering. One of my goals is also to create a library of “quick reference” tutorials; I’ve often come across great little tips and techniques, but many times they’re buried deep in a long tutorial covering an entire scene. Here, there will be quick, easy access to such little nuggets of wisdom without having to search through a lengthy video or post. There will also be “project breakdowns”: short videos that demonstrate, concisely and without a lot of narration, how different elements and techniques come together in producing a final work.

There will be resources such as models and textures, most of which will be freely available to use for any purpose you want, commercial or personal.

And, of course, there will be the blog. I mean, c’mon, I’m not gonna go to the trouble of putting together a whole website and not stick my opinions somewhere on it. This will cover everything from Blender news, cool CG stuff I might stumble across on the interwebs, and noteworthy VFX in film and television, to hopefully some interviews and the like with people who make awesome things, and anything else related to CG, or even design in general, that I feel inclined to bloviate on.

Thanks for stopping by. Be sure to like theZEDLAB on Facebook, and follow theZEDLAB on Twitter.


Post Script: What’s in a Name?

You may be thinking theZEDLAB is kind of an odd name. It is. There’s an interesting story behind it. Well, maybe not that interesting, but there is a story. And this is it.

The original name of this site was theMESHLAB. Cool, clever, right? I thought so too. So, I registered the domain, I set up a Facebook page, a Twitter account and a Google account. I designed a logo and some basic site imagery. All was well and going swimmingly. Then I created a YouTube channel. No problems there. But when I tried to find that channel through my personal YouTube account (so I could subscribe to myself, of course), I came across a plethora of tutorial videos with MeshLab in the title. “Uh oh”, I thought. Turns out MeshLab is an open-source software for cleaning up and editing unstructured 3D meshes generated by 3D scanners. So, not only is there something out there already called MeshLab, that something is also related to 3D graphics.

I could have continued using the name; it is an open-source project, after all, and not some for-profit software company. The chance of a lawsuit or other legal action was slim. However, I chose to change the name for two reasons, one selfish and one rather noble – if I do say so myself. The first reason was that a quick Google search for MeshLab (something I probably should have done long before I began painting myself into a corner) yields quite a number of results relating to this software, which would have overshadowed the interweb presence of my humble website. The second reason was that, being a user of open-source software like Blender, I respect those who contribute to such projects. And, to be fair, MeshLab was established in 2005, so they obviously came up with the clever name first. Plus, it looks like a pretty cool piece of software. So I yield to them and provide this little plug.

What followed was a grueling couple of days trying to come up with a new name, desperately hoping I could still use “lab” in it somehow so I could keep the little beaker logo and the laboratory motif. There were several candidates I won’t bore you with here (any more than I already have); most were either already in use in one form or another or were parked domains asking for thousands of dollars. One I was rather fond of was blend(in); I even made a cool logo for that one. But go ahead, go to GoDaddy and look for it; you can have it for $9,000 plus.

Why theZEDLAB, then? Well, first of all, it’s not that far from theMESHLAB. I wanted something that represented CG and 3D modeling, and I wanted it to end in “lab”. And what makes 3D art different from 2D art? The third dimension, the depth, the Z-axis. But “zee lab” doesn’t sound very good, and is too close to “sea lab”. So, though I hail from the US, I took a cue from our friends across the pond and went with “zed”. It just sounds better. (Also, if you squint, it looks like “Zelda”.)

So there it is, the complete unabridged history of the name theZEDLAB.