One of the most time-consuming but most enjoyable tasks I perform regularly with Beer Busters is editing, formatting and posting blog articles. The vast majority of content I work with is created by others; I just polish it and get it up on the interwebs. This includes photos. These photos are often taken with an iPhone, rather than a fancy high-end camera, and are taken under less than ideal conditions for a perfect photograph. So, through my attempts to jazz-up these pics for presentation, I’ve settled on a few techniques that are my go-to methods.
In case you are not familiar (or have never watched a Blender tutorial by Andrew Price), vignetting “is a reduction of an image’s brightness or saturation at the periphery compared to the image center” (Wikipedia). This effect can be the result of several factors, including filters, secondary lenses, multiple element lenses and anything that essentially causes the light at the periphery of a field of view to be dimmed compared to light in the center. It is radial in nature because lenses are, after all, convex and circular. We think of photographs as being rectangular (and they are, obviously) but that’s because the film plane is rectangular and records a section within a circular image projected by the lens. Often, this effect is undesirable and occurs unintentionally. However, many photographers and digital artists intentionally introduce vignetting or add it in post-production.
Some would say – myself included – that this effect is often overused and added unnecessarily. Though, it can be a great effect. (I dig on Andrew Price but he does some fantastic work and, to be fair, doesn’t use much vignetting these days.) In most cases, however, the effect is used to darken the edges of an image. But, as Wikipedia told us up there, it can also desaturate the edges, which can be a more subtle and more interesting effect. Achieving this in just about any image editing software is fairly simple. Here, I will describe how to do it in GIMP. There are other methods to achieve this result. However, the technique described here is non-destructive and, as such, is more flexible to alter. (The photograph used in this tutorial was taken by Steph Heffner.)
When you load up a photograph in GIMP you’ll have one layer. First, simply duplicate that layer.
Then, with the top layer selected, go to Colors -> Desaturate. This will bring up a dialogue box with three options: Lightness, Luminosity and Average. Pick whichever one looks the awesomest.
Now, right click on the desaturated layer in the layer stack and choose “Add Layer Mask” from the menu. This brings up another dialogue; choose the “White (Full Opacity)” option. A white square will appear next to the thumbnail image of the layer in the layer stack. This is a representation of the layer mask, which is essentially a one-channel (grayscale) image. It’s completely white, meaning every area of the layer is visible (not masked).
You can paint in greyscale on the layer mask, altering the opacity of the layer. White areas of the mask are fully opaque, black areas are fully transparent and tones in between are semi-transparent. This is a non-destructive edit; the original data of the desaturated layer is still there. So, at this point, you can use any number of techniques to desaturate any area of the image you want.
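The mask-driven compositing boils down to simple per-channel math. Here is a minimal sketch (the function name is mine for illustration, not a GIMP API):

```python
def apply_layer_mask(top, bottom, mask):
    """Composite one channel of a masked layer over the layer below.

    `mask` is the grayscale layer-mask value in [0, 1]: 1.0 (white)
    shows the top (desaturated) layer fully, 0.0 (black) shows the
    layer below, and values in between blend the two.
    """
    return mask * top + (1.0 - mask) * bottom
```

So painting 50% gray on the mask gives a pixel halfway between the desaturated layer and the original beneath it.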
But, to create the vignette effect, this is what I most often do. First, under the View menu check “Snap to Canvas Edges” – this is usually off by default. This causes things to, unsurprisingly, snap to the edges of the canvas.
Then create an elliptical selection over the entire image. Now, depending on the degree to which you want the desaturation to extend inward from the edges, you can shrink the selection. To do this, under the Select menu select “Shrink”. A dialogue box will appear in which you can enter the amount of shrinkage in a number of units. (That was, unintentionally, a rather amusing sentence.)
I do this because it keeps the selection centered and allows me to define the size of the selection numerically, which can be useful when applying a similar effect to multiple similar images. This method also keeps the aspect ratio of the selection the same as the image. However, you could alternatively set guides at the horizontal and vertical centers of the image, then create the selection by starting at the center of the image and growing outward. If you hold CTRL while making a selection, it expands from the center.
So this selection will define the boundary between the desaturated edges and the fully saturated center. Of course we want the transition between the two to be subtle and gradated. To accomplish that, we simply feather the selection by choosing Select -> Feather. Now we have another dialogue box where we define the amount of feathering.
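The whole select-shrink-feather recipe amounts to a radial falloff you could also compute directly. Here is a rough Python sketch (the function and its linear feather are my own simplification, not what GIMP does internally):

```python
def vignette_mask(width, height, shrink, feather):
    """Per-pixel mask for an elliptical desaturation vignette.

    Returns values in [0, 1]: 0.0 (black) in the center hides the
    desaturated layer there, 1.0 (white) at the edges keeps it
    visible, with a linear feathered transition across the ellipse
    boundary -- mirroring the GIMP layer-mask steps.
    """
    cx, cy = width / 2, height / 2
    # Semi-axes of the shrunken ellipse (selection snapped to the
    # canvas edges, then shrunk inward by `shrink` pixels).
    rx, ry = cx - shrink, cy - shrink
    mask = []
    for y in range(height):
        row = []
        for x in range(width):
            # Normalized elliptical distance: 1.0 exactly on the edge.
            d = (((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2) ** 0.5
            # Feather: ramp from 0 (inside) to 1 (outside) the edge.
            t = (d - 1.0) * max(rx, ry) / feather + 0.5
            row.append(min(1.0, max(0.0, t)))
        mask.append(row)
    return mask
```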
Next, simply fill the area of the selection with black using the paint bucket tool. Make sure you are still editing the layer mask, not the layer contents, by clicking the layer mask thumb in the layer stack.
Voilà, as they say. In most circumstances you won’t want the edges to be completely desaturated (subtlety, as always, is your friend). So you can simply adjust the layer opacity of the desaturated layer to mute the effect.
Of course, you can use this technique for a variety of effects, not just desaturation:
You can use this effect to give some subtle highlight to any area of an image you want to bring focus to. If, for example, you want to highlight a subject that happens to not be in the center of the frame, you can simply adjust your selection area accordingly.
Thanks for reading, be sure to keep an eye out for more image effect tutorials coming soon.
In this third and final installment of the series, I will demonstrate how to use a series of mask images to apply several different shader setups to different areas of an object in a single material. To recap, in the first part of this series, using a simple cube as an example, I created a greyscale bump map image and used it to bake a normal map using Blender Internal. In the second part I discussed the proper way to apply a normal map in both Blender Internal and Cycles.
Now, using the original bump map image as a starting point, I will create a series of simple black and white images to use as masks for applying several types of materials to various areas of the cube, following the details of the normal map. If you were smart ahead of time and saved a layered GIMP (xcf) or Photoshop (psd) image of the bump map, it’s simply a matter of isolating the elements, turning them white, and making the background black.
The Base Material
The base material is a grey metal texture. To achieve this look, I first add a [tooltip gravity=”n” title=”Input -> Texture Coordinate”]Texture Coordinate[/tooltip] node and an [tooltip gravity=”n” title=”Texture -> Image Texture”]Image Texture[/tooltip] node, then plug the “UV” output socket of the Texture Coordinate node into the “Vector” input socket of the Image Texture node. Then, load in the metal texture image (this texture is from CGTextures.com and can be found here).
Now, I want to mix two shader types for this material: Diffuse and Glossy. First, add the [tooltip gravity=”n” title=”Shader -> Diffuse”]Diffuse[/tooltip] shader node and plug the “Color” output of the Image Texture node into the “Color” input of the Diffuse Shader node. This will apply the metal texture image as the diffuse color of the cube.
For the Glossy shader, I want to color it with the same texture but lighten and de-saturate it first. This is easily achieved by adding a [tooltip gravity=”n” title=”Color -> Mix”]Mix[/tooltip] node, keeping the default type as “Mix” and plugging the “Color” output socket of the Image Texture node into the top “Color1” input socket of the Mix node. Then, set the bottom “Color2” to a gray color (R:0.5, G:0.5, B:0.5) and plug the “Color” output socket of the Mix node into the “Color” input socket of the Glossy node.
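For reference, the Mix node’s default “Mix” mode is just a per-channel linear interpolation, so a 0.5 factor with 50% gray pulls every channel halfway toward gray – lightening the dark values and desaturating the color. A quick sketch:

```python
def mix_color(color1, color2, fac=0.5):
    """Blender's Mix node in "Mix" mode, per channel: linear
    interpolation from color1 toward color2 by the factor."""
    return tuple((1.0 - fac) * a + fac * b for a, b in zip(color1, color2))

# Mixing a dark, saturated texel halfway toward 50% gray:
lightened = mix_color((0.2, 0.8, 0.4), (0.5, 0.5, 0.5))
```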
To mix these two shader nodes I will add a [tooltip gravity=”n” title=”Shader -> Mix Shader”]Mix Shader[/tooltip] node then plug the “BSDF” output socket of the Glossy node into the top “Shader” input socket of the Mix Shader node and the “BSDF” output socket of the Diffuse node into the bottom “Shader” input socket. I will also use the metal texture image to control the mix factor of these two shader nodes, but I want to have some control over the mixing. For that I will add a [tooltip gravity=”n” title=”Converter -> Color Ramp”]Color Ramp[/tooltip] node then plug the “Color” output of the Image Texture node into the “Fac” input of the Color Ramp node and plug the “Color” output of the Color Ramp node into the “Fac” input of the Mix Shader node (this is mixing data types but for this purpose it will work fine). Finally, plug the “Shader” output of the Mix Shader node into the “Surface” input of the Material Output node.
Note: the normal map is also applied here, for this process see Part 2 of this series. Make sure to plug the “Vector” output socket of the Normal map node into the “Vector” input sockets of all subsequent shader nodes added.
Altering the Color of the Base Material with a Mask Image
The first area of detail on the cube in which I will alter the material is the rough square area on the right-hand side of the cube. For this part, I will simply alter the color of the metal texture in this area. To do this I first create the mask image, using my original bump map as a guide, where the area of the square is pure white and the remaining area is pure black.
Now I will add another Image Texture node, plug the “UV” output socket of the Texture Coordinate node into the “Vector” input socket of the Image Texture node and load in the mask image. Since all I want to do here is tint the metal texture image in this particular area of the cube, there is no need to add a new shader. I can simply alter the image before it plugs into the Diffuse node. To do this I will add a Mix node with the mode set to “Overlay” and drop it in between the “Color” output socket of the Image Texture node and the “Color” input socket of the Diffuse node. Making sure the “Color” output from the Image Texture node is running into the top “Color1” input of the Mix node, I can then change the “Color2” input to whatever color I want to tint the texture.
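Overlay works well for tinting because it colors the texture without crushing its contrast. The textbook per-channel formula looks like this (a sketch; Blender’s exact legacy implementation may differ slightly):

```python
def overlay(base, blend):
    """Textbook per-channel Overlay blend: multiplies in the shadows,
    screens in the highlights. A 50% gray blend leaves the base
    unchanged, which is why the black areas of an overlaid mask
    can still pass the texture through when the tint is neutral."""
    if base < 0.5:
        return 2.0 * base * blend
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)
```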
Adding Emission to the Engraved Lines
The next area of detail I want to focus on is the engraved lines that run along the top and bottom edges of the cube and around the circular details on the left-hand side. I will make these emit a cool sci-fi green light. Again, the first step is to create a mask image to isolate this area of the cube. Using the original bump image this is a simple task.
Since I want this area to be glowing with light, I’ll first add an [tooltip gravity=”n” title=”Shader -> Emission”]Emission[/tooltip] shader node and set the color to a yellow-green and the strength to 1. Also, for the sake of organization, I will group the nodes comprising the metal material by shift-selecting them all and hitting Ctrl+G. Then press Tab to close the Group Node. Next, to mix the node tree to this point with my new Emission node, I’ll add a Mix Shader node and plug the “Shader” output socket of the Group Node into the top “Shader” input of the Mix Shader node and the “Emission” output of the Emission node into the bottom “Shader” input.
Note: there is no “Vector” input on an Emission shader node, so I cannot apply the normal map to this shader.
Now, I want to control the mix factor of my base material and the Emission shader with the mask image. So, I’ll add another Image Texture node, load in the mask image and make sure the “UV” output of the Texture Coordinate node is plugged into the “Vector” input of the new Image Texture node. Then, simply plug the “Color” output socket of the Image Texture node into the “Fac” input socket of the Mix Shader node. (This is again mixing data types, but since the mask image is only black and white there will be no loss of data here.)
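The Mix Shader’s factor behaves just like the Mix node’s: 0 gives the top shader, 1 the bottom. Reduced to resulting colors (a simplification – the node actually mixes shader closures, and the names here are mine):

```python
def mix_shader(top, bottom, fac):
    """Mix Shader factor, sketched on final colors: the black (0.0)
    areas of the mask keep the base metal material (top), while the
    white (1.0) engraved lines swap in the emission shader (bottom)."""
    return tuple((1.0 - fac) * t + fac * b for t, b in zip(top, bottom))
```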
Darkening the Vent Holes
I want to fake some depth in the vent on the left-hand side of the cube and I will do that by simply coloring in the holes with pure black. For this and the next part, I will use the same technique that was used above and “daisy-chain” onto the end of the node tree. I will also again group the previous material nodes. First, of course, create the mask image.
Then add a Diffuse shader node (make sure the Normal Map “Vector” is plugged into the Diffuse node “Vector”) and set the color to pure black, add a Mix Shader node and plug in just like before. And, finally, add another Image Texture node, load in the mask image (make sure the “UV” from the Texture Coordinate node is plugged into the “Vector” of the new Image Texture node) and plug the “Color” output socket of the Image Texture node into the “Fac” input of the Mix Shader node.
Making the Text Glass
Lastly, I want to make the text on the top of the cube glass and have light coming from behind it. I’m sure you get the routine by now, so I won’t repeat myself. Just add a Glass shader node and apply the same technique, using a mask image for the Text.
Now, to add the light behind the glass, I will add a plane to the scene and position it inside the cube just under the top side. For visual interest, I prefer the light not be even across the plane, but brighter in the center and fade out to the edges. To do this, UV unwrap the plane, then in GIMP or Photoshop create a grayscale image of a radial gradient – fairly simple.
Then use the following node setup, loading the gradient image into the Image Texture node. The Mix node is used to control the overall brightness without having to edit the image.
The final result:
This technique works well to mix materials following the influence of a normal or bump map. But it can be used in a wide variety of circumstances, anytime you need part of an object to be one material and another part a different material. That’s it for the Normals and Material Masking series, I hope it was helpful and informative.
Some textures used in creating models appearing in imagery on this post are from CGTextures.com, a most excellent source for photo textures.
In Part 1 of this series I covered how to bake a normal map from a bump map using Blender Internal’s texture baking abilities. This part demonstrates how to properly apply a normal map to a material in both Blender Internal and Cycles.
To quickly review, I have a simple subdivided cube, with some holding loops, unwrapped and the UVs arranged to maximize the sides of the cube facing the camera. I have also loaded the normal map image into the UV/Image Editor workspace.
Using a Normal Map in Blender Internal
To apply a normal map to a material in Blender Internal, first add a material to the cube then add a new texture to the material. Next, select “Image or Movie” and load in the normal map image. Under Image Sampling check the box next to Normal Map. It’s also a good idea to check Minimum Filter Size under Filter and set the Filter Size to 0.10 (the lowest setting possible). This will give a better quality result, especially if the normal map image is not particularly large. Set the Mapping to UV and, under Influence, check Normal and set the value to 1.
Using a Normal Map in Cycles
To apply the normal map to the cube in Cycles, first, add a material to the cube. Now bring up the Node Editor workspace (I generally split the window vertically with the Node Editor on the top and the 3D Viewport on the bottom.) By default, you will have a Diffuse BSDF shader node plugged into the “Surface” socket of a Material Output node. We’ll just use the diffuse shader, but give it a sexy color.
Now we need to get the normal map image into the mix and we’ll do that with two nodes: Texture Coordinate and Image Texture. First, hit SHIFT+A to add a node and select Input -> Texture Coordinate, we’ll use this to map the image to the UV coordinates of the cube. Then add an Image Texture node (Texture -> Image Texture). Plug the “UV” output socket of the Texture Coordinate node into the “Vector” input socket of the Image Texture node. Also, change the drop-down box below the image file on the Image Texture node from “Color” to “Non-Color Data.”
Now at this point, there are two things I’ve seen people do that are not the best way to apply a normal map:
- Plugging the “Color” output socket of the Image Texture Node into the “Displacement” input socket of the Material Output node.
- Adding a Normal node, plugging the “Color” output socket of the Image Texture node into the “Normal” input socket of the Normal Node and the “Dot” output socket into the “Displacement” input socket of the Material Output node.
Doing either of these is mismatching data types, which are indicated by the color of the sockets. Though in some circumstances this works fine, it is usually best practice to avoid plugging a socket into a socket of a different color. In this case, though you will get some displacement as a result, you are essentially turning the normal map back into a bump map by limiting the range of data being used. (For more info on data types, etc. keep an eye out for an upcoming post: Understanding the Node Interface.)
The proper way to apply the normal map is to add – no surprise here – a Normal Map node (Vector -> Normal Map). Then plug the “Color” output from the Image Texture node into the “Color” input of the Normal Map node and the “Vector” output into the “Normal” input of the Diffuse BSDF node. This converts the color data from the image (yellow socket) into vector data (purple socket) and applies it to the normal of the shader node. Make sure the drop-down box is set to “Tangent Space” and the UV box is set to “UV Map” (or whatever the name is of your object’s UV map to which the normal map is aligned.) I’m not sure why this UV option is here; the Image Texture node vector is already set to UV, but best to do both.
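The conversion the Normal Map node performs is a simple remap: each color channel in [0, 1] becomes a vector component in [-1, 1]. A sketch of that relationship:

```python
def color_to_normal(rgb):
    """Remap a tangent-space normal map texel from color range
    [0, 1] to vector components in [-1, 1]: N = 2C - 1. This is why
    the characteristic light blue (0.5, 0.5, 1.0) of a flat area
    decodes to the straight-up normal (0, 0, 1)."""
    return tuple(2.0 * c - 1.0 for c in rgb)
```

Plugging the color straight into “Displacement” skips this remap, which is part of why the result degrades to bump-like shading.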
If you are building a material that mixes multiple shader nodes, simply plug the Normal Map “Vector” output socket into the “Normal” input socket of all the shader nodes. (Not all shader nodes have a “Normal” input – the Emission shader node, for example.) As an example, I’ve mixed a Glossy shader with the Diffuse shader and applied the normal map to both. (I’ve collapsed some nodes in the image below to save space.)
The next part of this series, Material Masking in Cycles, will demonstrate how to use mask images derived from the original bump map to mix several shaders, producing the result below with one material.
Brief Overview of Bump and Normal Maps
Normal Maps and Bump Maps are textures used to fake detail on a 3D mesh using data from 2D images. A bump map uses a single channel image (grayscale) in which 50% gray represents the baseline surface of the face. Values toward white indicate raised features and values towards black indicate recessed features. A normal map is a kind of three dimensional bump map that uses the RGB channels of an image to represent perturbations in the X, Y and Z axes.
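The relationship between the two map types can be sketched in a few lines: the bump map’s grayscale gradients at a pixel become a tangent-space normal, packed back into RGB. (This is my own minimal finite-difference version, not Blender’s exact baking math.)

```python
def bump_to_normal(height, x, y, strength=1.0):
    """Derive one normal-map texel from a bump (height) map.

    `height` is a 2D list of values in [0, 1], with 0.5 as the
    baseline surface. The X/Y slopes of the heightfield perturb the
    normal, which is then remapped to color as (N + 1) / 2.
    """
    dx = (height[y][x + 1] - height[y][x - 1]) * strength
    dy = (height[y + 1][x] - height[y - 1][x]) * strength
    # Normalize (-dx, -dy, 1), then remap [-1, 1] to [0, 1] color.
    length = (dx * dx + dy * dy + 1.0) ** 0.5
    n = (-dx / length, -dy / length, 1.0 / length)
    return tuple((c + 1.0) / 2.0 for c in n)
```

A flat 50% gray region produces the familiar light-blue normal-map color.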
The most common way of generating a normal map is by “baking” it onto the UVs of a mesh. There are several ways of doing this. The most common are:
- Using the geometry of a high-poly detailed mesh to bake normals onto a lower-poly version (this technique is used extensively in video games.)
- Using Blender’s Multiresolution modifier, which allows you to sculpt a subdivided version of a mesh while maintaining the lower-resolution base mesh, you can bake normals from the subdivided version onto the base mesh.
- Modeling separate objects just above the surface of another object, this geometry can be baked by projecting onto the underlying faces.
But there is another way to generate a normal map in Blender, by baking from a bump map.
Baking Normals from a Bump Map
In this example, I’ve simply added a Subdivision Surface modifier to a cube and added in some edge loops to sharpen the edges. I’ve then marked seams and UV unwrapped the cube, arranging the UVs so that the sides facing the camera are isolated and enlarged and those facing away are scaled down. In the UV/Image editor I’ve created a new image (at the bottom of the UV/Image Editor workspace: click New, an image options dialogue box will appear, name it and enter a size – I’ve used the default 1024×1024) and exported the UV Layout (at the bottom of the UV/Image editor workspace: UVs -> Export UV Layout).
With the UV Layout image as a guide in GIMP I created this bump map, using 50% gray as the background. When saving out the final bump map image, make sure the UV Layout is not visible.
Currently, texture baking can only be done in the Blender Internal render engine. The texture generated, however, can be used in either BI or Cycles (or any other rendering engine, provided normal maps are interpreted the same way – some renderers may apply the coordinates differently.) To create the normal map, first add a material to the cube (the material settings are not important for this task.) Then add a texture, set to “Image or Movie” and open the bump map texture. Make sure the mapping is set to UV and the influence is set to Normal with a value of 1. Also, under Bump Mapping Method select Best Quality. Note: if you are baking normals from a bump map derived from a photographic texture, changing the Bump Mapping Method will alter the level/frequency of detail baked into the normal map from the bump map.
Next, under the render panel there is an area labeled Bake. Set the Bake Mode to “Normals” and hit Bake. (The other bake settings are the defaults.) As long as the bump map texture is active on the cube material this will bake a normal map based on the influence of the bump map. Note: make sure the cube is selected and there is an image active for the UVs (this was done before exporting the UV Layout). Otherwise you will get an error. Once it bakes, you can save out the normal map from the UV/Image Editor.
So, why go to all this trouble to create a normal map? A normal map will inherently shade and respond to lighting better than a bump map, simply because it uses more data and alters the surface normal on all three axes. (Although, sometimes a bump map is more appropriate.) Below is an example of two cubes, with a simple metal texture, rendered in Cycles. One has just the bump map applied and the other just the normal map. You can see the details look deeper and crisper with the normal map, though you may not want such heavy effects – in which case you can use the bump. By baking the normal map, you add to your options and, in most cases, can get better results than bump mapping alone.
In Part 2 of this series, I will cover how to properly use normal maps in both Blender Internal and Cycles. Part 3 will cover material masking, a technique I used to achieve this final result.
Being that my current machine isn’t nearly as meaty when it comes to processing as I’d like – and doesn’t have a graphics card that is compatible with Cycles GPU rendering – I’m always looking for ways to improve performance when rendering in Cycles. Blender Guru has a post worth checking out, 4 Easy Ways to Speed Up Cycles, but I’d like to mention a simple technique I’ve just recently discovered, using the Clamp setting.
This example is still a work in progress, but is suitable for demonstrating the use of Clamp. (You can view the final image here in my portfolio.) The scene is lit by two emitter planes and an HDRI map. The bounces are already set to pretty conservative levels, but it’s still firefly city. Here’s a render after 2642 samples with Clamp set to 0, using progressive refine so the whole image renders at once (I won’t tell you how long it took on my computer, it’s a little embarrassing.)
In the Render panel, under Sampling, there is an option called Clamp (more on the particulars of what this does below.) By setting the “Clamp” to 2, this is the result after 1363 samples.
You’ll notice that the image is a little dimmer and duller, the colors are less vibrant. That is easily fixed in the compositor with a simple Color Balance node. I essentially want to increase the contrast and brighten the image. Also, might as well do a little color correction while we’re at it.
With a little post-processing, we get an image that is less muddy. I’ve reversed the dimming effect of Clamp and, at the same time, added some flair to the image that I would have done anyway. A bit more refinement of the image is a good idea, but this illustrates the basic point.
So, utilizing the Clamp setting, I was able to get a render that is virtually noise-free in about half the render time. You’ll have to experiment to find a Clamp value that balances render time and image quality for each scene. Remember that using clamp will dim the image (particularly the highlights) but that is usually fixed pretty easily with some post-processing. (You’ll notice the paper on the mini notebook has blown out a bit, this can be fixed by making the texture darker.)
Why Use Clamp
For more detail on Clamp, as well as other noise reduction techniques, explanations of how they work and what causes noise in the first place, I suggest checking out this page in the Blender Wiki. Briefly, Clamp caps the value of each sample, easing fluctuations from pixel to pixel. This obviously reduces the physical accuracy of the final image and dulls the brighter highlights, but these issues are usually either negligible or easily corrected.
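The mechanism is easy to see in miniature. A firefly is one sample with a huge value; capping each sample before averaging keeps it from blowing out the pixel (a conceptual sketch, not Cycles’ actual integrator code):

```python
def clamped_average(samples, clamp=0.0):
    """Average a pixel's samples the way the Clamp setting suggests:
    cap each sample's value at `clamp` first (0 disables clamping),
    so one extreme sample can no longer dominate the pixel."""
    if clamp > 0.0:
        samples = [min(s, clamp) for s in samples]
    return sum(samples) / len(samples)
```

This also shows why the image dims: legitimately bright samples get capped too, which is the highlight dulling you correct in post.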
I chose to use Clamp over using Filter Glossy or other techniques to reduce noise in this render for three main reasons:
- It’s faster, easier and more effective at noise reduction over the entire image
- Given the amount of glass and liquid as well as their prominence in the image, I wanted to preserve the accuracy of those light interactions as much as possible
- It doesn’t involve making adjustments to individual elements or shaders
Disabling caustics removes entirely a real physical effect of light (though it often doesn’t appreciably diminish the quality of the image and is a good idea to use in conjunction with setting a Clamp value), while Filter Glossy forces a reduction in sharpness. With Clamp, the inaccuracy is mostly in the intensity of highlights.
During the creation of this portrait I developed a workflow that involved rendering out several images in Blender, then layering those images together in GIMP. The video shows each separately rendered element building to the final composite, with the layer blending mode and opacity noted for each. This technique is obviously best suited for still images, but provides a number of advantages:
- Creating duplicates of the head model for each hair particle system, with emitter rendering off and the hair layer masked by the original, gives just the visible portions of the hair against transparency and allows tweaking of each hair element without having to re-render the entire scene
- Reinforce the bump and ambient occlusion by rendering images of the head model with a straight white material with just the bump applied and a material with just the baked occlusion texture
- Control the overall look by adjusting the blend mode and opacity of each element and tweaking elements “by hand”, such as erasing/subduing areas of specular highlight
This breakdown does not include any of the modeling process. The base mesh was created by a combination of box modeling and sculpting, then retopologized and detail sculpted with Blender’s Multiresolution modifier. Sculptris was used to sculpt the bump mapping, using several skin texture brushes.
This tutorial demonstrates how to dynamically alter the intensity and color of the lighting in a scene in Blender’s compositor, after rendering. I first read about this technique in Digital Lighting and Rendering by Jeremy Birn (a book I highly recommend), and decided to see if I could figure out the particulars and do it in Blender. Essentially, by using pure red, green and blue lights, you can split the color channels of the image, adjust them independently, and recombine them.
It should be noted, the main body of this tutorial covers a three-point lighting system in the Blender Internal rendering engine. However, this can be used with more than three lights and can also be done in Cycles. These points will be covered at the end. Also, if you’re already experienced at using the compositor, after reading the “Setting Up the Scene” portion you can jump down to the full node tree image below to see the compositing setup.
Setting Up the Scene
In the example, I’ve set up three spot lamps (though you can use any lamp type). In the Lamp Settings panel for each lamp, set the colors of one to pure red, one to pure green and one to pure blue, and set the energy of each to 1. Hexadecimal values for the colors are shown in the image.
Now, go to the Render Panel. Under Layers are listed Passes; these are the different render passes the renderer will deliver separately when checked. In the compositor, each checked pass appears as a socket on the Render Layers node. Check “color”; this will deliver an unshaded pass of just the scene colors. It is important not to check the “diffuse” pass, which would deliver a shaded color pass of the scene. The shading here will come from the split RGB channels. That’s it, now render and jump to the compositor (ctrl+left arrow, or select “compositing” from the screen layout drop menu at the top of the window).
In the Compositor
As always, make sure you check “use nodes” in the toolbar at the bottom of the upper workspace. You will now have a Render Layers node with a “color” socket along with the usual “image”, “alpha” and “Z” sockets. Hit shift+a to add a node, select Converter -> Separate RGBA. Plug the “image” socket from the Render Layers node into the “image” socket on the Separate RGBA node. The “R”, “G” and “B” output sockets of the Separate RGBA node will now output grayscale images representing the red, green and blue channels of the rendered image. Since we set our three lamps to be pure red, green and blue, these images will represent the influence of each separate lamp.
Now add a Mix node (Color -> Mix). Change the mode in the drop down box from “mix” to “color” then plug the “R” output socket from the Separate RGBA node into the top “image” socket of the Mix node and the “color” output socket from the Render Layers node into the bottom “image” socket on the Mix node. This takes the grayscale image representing the red channel and colors it with the unshaded colors from the color pass, essentially giving what the render would look like with just that lamp.
Now we can add two more Mix nodes set to “color” mode and do the same thing for the remaining “G” and “B” output sockets of the Separate RGBA node. At this point, we have the influence of the three lamps isolated and we can pretty much do whatever we want with each. You can add Brightness/Contrast nodes, Color Balance nodes, or anything to get all kinds of crazy effects. But we’re concerned with simply altering the intensity and color of each lamp and the best way to do that is with RGB Curves nodes. Add three RGB Curves nodes (Color -> RGB Curves) and plug the “image” output socket of each Mix node into one. (I’ve collapsed the lower two Mix and RGB Curves nodes in the image to save space.)
Then, to combine the whole image back together, add another Mix node, this time set to “add” mode, and plug the “image” output sockets of two of the RGB Curves nodes into each of the “image” input sockets of the Mix node. Finally add another Mix node (set to “add”) and plug the “image” output of the previous Mix node into the top “image” input of the second Mix node and the “image” output of the remaining RGB Curves node into the bottom “image” input socket. Of course, then plug the “image” output of the final node into the “image” input of the Composite node. Here is the final node setup (click for larger view):
Now, by adjusting the “C” channel of the RGB Curves nodes you can change the intensity of each lamp and by adjusting the “R”, “G” and “B” channels you can change the color. Here’s a comparison of the render output and the composited output with adjustments to the RGB Curves nodes:
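Reduced to arithmetic, the whole tree does something like this per pixel (a deliberately simplified sketch: the gains stand in for the RGB Curves adjustments, and a straight multiply stands in for the Mix node’s “color” mode):

```python
def relight(influence_rgb, color_pass, gains=(1.0, 1.0, 1.0)):
    """Split-and-recombine relighting for one pixel: each channel of
    the render is one pure-colored lamp's grayscale influence. Scale
    each lamp independently, sum them back into a single shading
    value, then tint with the unshaded color pass."""
    shading = sum(ch * g for ch, g in zip(influence_rgb, gains))
    return tuple(shading * c for c in color_pass)
```

Doubling one gain brightens only that lamp’s contribution, which is exactly the knob the RGB Curves nodes give you after rendering.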
So this works fine and dandy for three lamps, as there are three channels in an RGB image. But what if you want to use more lamps? Well, you can by using Light Groups.
Say, for example, you wanted to do this with five lamps. You can select three of those lamps (colors set to pure red, green and blue) and hit ctrl+g to create a group. Then name it something clever like “Group A”. Then, select the remaining two lamps (color set to pure red and green) and do the same thing and name that group something equally clever like “Group B”. Now go to the Render panel and under Layers, first add a new Render layer (hit the plus sign next to the Render Layer list). Again, clever name. Make sure to select “color” under the Passes for the new Render Layer.
A bit further down on the pane you will see “Light:” and a box. Click the box and up pop your Light Groups. Select one for one render layer and the other for the other render layer. This causes each render layer to only be influenced by the lights in the group assigned. (Note: it doesn’t matter to any of this which Scene Layer your lamps or any other object in the scene are located, provided they’re active for rendering, of course.)
Now you can just copy the node tree setup for three lamps and change the render layer in the drop-down box of the second Render Layers node to the new layer. Since we only have two more lamps, you can delete everything coming out of the “B” socket of the Separate RGBA node. Finally, just mix these two trees together at the end with another Mix node set to “add”. And, baby, you got a stew.
So, what about Cycles? Well, you can apply this technique in Cycles as well. Everything is exactly the same except when using the Render Passes. Instead of a single color pass option, Cycles allows you to choose to deliver Direct, Indirect and Color passes for Diffuse, Glossy and Transmission rays. Just select color for each.
Then in the compositor, use Mix nodes to add the three output sockets these passes create and continue from there with the same node setup described above.
I have to note, I’ve only experimented briefly using this technique with Cycles. Some issues may arise, particularly with coloring reflections. Also, any environment lighting will obviously change the whole dynamic. The example I used here has the world color set to absolute black. I’m sure there are ways to deal with these issues, however. If you come up with or know of any tips that can make this effective in Cycles, feel free to comment. I may explore this further in a later tutorial.
Anyway, that’s all for now. Hope you learned something and that I wasn’t too long-winded. Until next time…