The rendering stage is the most crucial and might require a complete transfer of all resources to the render system just to make sure that everything is available when it is needed. This will prevent render errors (e.g. lost textures, models or shaders) but will also be quite the drive space hog. While the Layout process is more for the scene setup itself, shading and lighting need to begin here as well, since a lot of the actual lighting design is based on where the camera is located, what is in frame, what the subject matter is, color, and how all of this will either make or break a shot.


  • Complete and fix shaders for AOV work
  • Once each final animation is complete, append the shader source, Mosaic fragments, lighting groups, textures and other misc data blocks into the layout set
  • Make shadow maps for basic lighting setups for each shot
  • Place custom lighting setups for each shot to emphasize the subject and action
  • Test frames for each sequence (lighting errors, camera errors, blur (?) and DOF errors, shader and texture errors etc…)
  • Submit RIB export folder to Animux SVN
  • SSH/VNC into the Animux server and set up the job in Dr.Queue for rendering
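The submission steps above boil down to gathering the exported RIB files per shot before the farm job is created. The sketch below is illustrative only; the directory layout and function name are hypothetical, not the project's actual scripts:

```python
import os

def collect_rib_jobs(export_root):
    """Group exported .rib files by shot directory so each shot can be
    submitted to the render farm as a single job. Hypothetical helper,
    not part of the actual Project Widow pipeline scripts."""
    jobs = {}
    for dirpath, _dirnames, filenames in os.walk(export_root):
        ribs = sorted(f for f in filenames if f.endswith(".rib"))
        if ribs:
            jobs[os.path.basename(dirpath)] = ribs
    return jobs
```

A pass like this makes it easy to sanity-check that every shot directory actually contains frames before anything is queued in Dr.Queue.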



The Shading Pipeline is one of the final steps in finishing a scene. It usually involves fixing textures, building shaders, editing shader code and spending a lot of time previewing each object to make sure that its surface looks the way it is supposed to.

Much of the environment is composed of shaders, though there are several objects that demanded the use of textures. Much of the shader code is pretty set in design and function, but the final touches, such as the AOV code, still need to be completed.

Shader Development

The shaders being developed for Project Widow are designed to work with Blender's Material system. Functions that can be controlled through Blender not only save time but allow a much greater degree of control, such as using one shader that has varying looks across multiple objects, all because the Blender Material applied to each object dictates how that shader is rendered. So in a sense, with a single shader like concrete there can be 20 different objects, each with its own Blender Material applied, which in turn can result in 20 different appearances of concrete in the final render.
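The "one shader, many Materials" idea can be sketched as per-Material parameter overrides merged over a shader's defaults before the RIB Surface call is written. The shader name, parameters and values below are hypothetical stand-ins, not the project's actual concrete shader:

```python
# Defaults for a hypothetical "concrete" shader; each Blender Material
# carries only the parameters it wants to override.
CONCRETE_DEFAULTS = {"Ka": 0.2, "Kd": 0.8, "roughness": 0.35, "dirt_amount": 0.5}

def surface_call(material_overrides):
    """Merge a Material's overrides over the shader defaults and emit a
    RIB Surface statement. One shader source, many looks."""
    params = dict(CONCRETE_DEFAULTS)
    params.update(material_overrides)
    args = " ".join('"float %s" [%g]' % (k, v) for k, v in sorted(params.items()))
    return 'Surface "concrete" %s' % args
```

Twenty Materials with different `dirt_amount` values would then yield twenty different-looking concrete objects from the same compiled shader.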

The current list of custom shaders:

Some of the shaders are very similar to others; in these cases they are more for R+D than production, with one shader later being revised into another.


All custom shaders written for this project will need to be fixed in order to be fully usable with the plan of using Blender as a compositor.

Below is the AOV parameter listing that will be used.

Display "+./Renders/<####>_layers.exr" "exr" "color aov_diffuse" "string layername" "1 RenderLayer.Diffuse" "string[3] channelnames" [ "R" "G" "B" ] "string exrpixeltype" "float" "quantize" [ 0 0 0 0 ]

Display "+./Renders/<####>_layers.exr" "exr" "color aov_surfacecolor" "string layername" "1 RenderLayer.Color" "string[4] channelnames" [ "R" "G" "B" "A" ] "string exrpixeltype" "float" "quantize" [ 0 0 0 0 ]

Display "+./Renders/<####>_layers.exr" "exr" "color aov_occlusion" "string layername" "1 RenderLayer.AO" "string[3] channelnames" [ "R" "G" "B" ] "string exrpixeltype" "float" "quantize" [ 0 0 0 0 ]

Display "+./Renders/<####>_layers.exr" "exr" "color aov_specular" "string layername" "1 RenderLayer.Spec" "string[3] channelnames" [ "R" "G" "B" ] "string exrpixeltype" "float" "quantize" [ 0 0 0 0 ]

Display "+./Renders/<####>_layers.exr" "exr" "color aov_reflection" "string layername" "1 RenderLayer.Reflect" "string[3] channelnames" [ "R" "G" "B" ] "string exrpixeltype" "float" "quantize" [ 0 0 0 0 ]

Display "+./Renders/<####>_layers.exr" "exr" "color aov_refraction" "string layername" "1 RenderLayer.Refract" "string[3] channelnames" [ "R" "G" "B" ] "string exrpixeltype" "float" "quantize" [ 0 0 0 0 ]

Display "+./Renders/<####>_layers.exr" "exr" "normal N" "string layername" "1 RenderLayer.Normal" "string[3] channelnames" [ "X" "Y" "Z" ] "string exrpixeltype" "float" "quantize" [ 0 0 0 0 ]

Display "+./Renders/<####>_layers.exr" "exr" "normal _uv" "string layername" "1 RenderLayer.UV" "string[3] channelnames" [ "U" "V" "A" ] "string exrpixeltype" "float" "quantize" [ 0 0 0 0 ]

Display "+./Renders/<####>_layers.exr" "exr" "rgba" "string layername" "1 RenderLayer.Combined" "string exrpixeltype" "float" "quantize" [ 0 0 0 0 ] "string image_metadata:BlenderMultiChannel" "Blender V2.43 and newer"
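The Display lines above all follow a single pattern, so they can be generated rather than maintained by hand. The layer names and AOV variables in this sketch are taken from the listing itself; the generator function is illustrative, not a project script:

```python
# AOV table copied from the Display listing: (output variable, layer name,
# channel names). The combined rgba and UV outputs are special cases and
# are omitted here.
AOVS = [
    ("color aov_diffuse",      "1 RenderLayer.Diffuse", ["R", "G", "B"]),
    ("color aov_surfacecolor", "1 RenderLayer.Color",   ["R", "G", "B", "A"]),
    ("color aov_occlusion",    "1 RenderLayer.AO",      ["R", "G", "B"]),
    ("color aov_specular",     "1 RenderLayer.Spec",    ["R", "G", "B"]),
    ("color aov_reflection",   "1 RenderLayer.Reflect", ["R", "G", "B"]),
    ("color aov_refraction",   "1 RenderLayer.Refract", ["R", "G", "B"]),
    ("normal N",               "1 RenderLayer.Normal",  ["X", "Y", "Z"]),
]

def display_line(variable, layer, channels):
    """Emit one RIB Display call in the same form as the listing above."""
    names = " ".join('"%s"' % c for c in channels)
    return ('Display "+./Renders/<####>_layers.exr" "exr" "%s" '
            '"string layername" "%s" "string[%d] channelnames" [ %s ] '
            '"string exrpixeltype" "float" "quantize" [ 0 0 0 0 ]'
            % (variable, layer, len(channels), names))
```

Generating the lines this way keeps the channel counts and layer names consistent when an AOV is added or renamed.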




Renderman Light Shaders can be extremely versatile tools to achieve unique lighting solutions and setups.

Since this is quite a large environment in Blender, there will be some cheating in order to optimize the renders as much as possible. One of these solutions is light blockers: image maps that block or permit light based on the values in the image itself.

One such use for this will be the light beam that streams from the shaft; it will be much easier to use a light to simulate the bars or grill than it would be to render a shadow map itself. The same could be said for lighting set up in the distance, where detail is not needed as much as it is for lighting near the camera.
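The blocker idea in miniature: the light's intensity is simply multiplied by a value sampled from a mask image, so dark texels in the mask stand in for the bars of the grill. The tiny hard-coded mask below is a stand-in for a real texture map, purely for illustration:

```python
# A 4x4 stand-in for a blocker texture: 1.0 = light passes, 0.0 = blocked.
# The vertical stripes mimic the bars over the air shaft.
MASK = [
    [1.0, 0.0, 1.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
]

def blocked_intensity(intensity, s, t):
    """Scale a light's intensity by the mask value at texture
    coordinates (s, t), each in [0, 1)."""
    row = min(int(t * len(MASK)), len(MASK) - 1)
    col = min(int(s * len(MASK[0])), len(MASK[0]) - 1)
    return intensity * MASK[row][col]
```

No geometry is traced and no shadow map is rendered; the striped falloff comes entirely from the image values, which is exactly why it is cheap.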

There are also instances where new geometry is introduced to the scene to act as a form of light blocker. In some cases the existing geometry will, for whatever reason, not be included in the shadow map render pass, so to prevent light from spilling into areas where we do not want it, this geometry is added as a sort of insurance that the shadow will be produced.

Such is the case with the Main_Scene_01 blend file. There are two areas in this scene: one "above" ground, and the main area "below", where this entire short takes place. For some reason the plane geometry that forms the dirt area around the air shaft does not get picked up when rendering shadow maps. Despite flipping the normals, moving the actual vertices and trying several other fixes, the geometry still refuses to be rendered. So blocker geometry was added to the scene, serving no purpose other than to block light.


Render Passes

Since Renderman has the added benefit of creating and reusing render passes, this is an absolutely vital process for reducing render times as well as resource space.
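Reuse in practice mostly means not re-rendering a pass whose output already exists on disk, e.g. a shadow map shared across every frame of a shot with a static light. A minimal sketch of that check, with a hypothetical render callback rather than any actual pipeline code:

```python
import os

def ensure_shadow_map(path, render_func):
    """Reuse an existing shadow map pass instead of re-rendering it.
    render_func is whatever actually produces the map (hypothetical here).
    Returns True if a fresh render was needed, False if the pass was reused."""
    if os.path.exists(path):
        return False
    render_func(path)
    return True
```

For a 100-frame shot with unchanging lights, the pass renders once on frame one and is reused for the other 99, which is where the render-time savings come from.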



The shader tools we are using, or in some cases developing, have evolved over time. In that time it has become very clear what is needed in such a tool, and so new focus has been put on developing the code to be tailored more to Widow itself than a generalized RSL builder. In time, effort will be made to work with Shrimp more, as it seems to be the most robust shader editor; however, it is written in C++ and thus requires compilation, as opposed to WidowShade, which can be updated anytime without compiling. The trade-off is performance and certain functionality.

The shader development pipeline has undergone some massive changes over the year, something that is finally coming together. Since the tools use XML for nodes as well as for shader project storage, it is possible to port shader nodes written for WidowShade into Shrimp. Shrimp's strengths are its powerful node collection and customized RSL functions, but it lacks previews of the nodes themselves. WidowShade can do that, but its code is messy at times and cannot support AOV at the moment. So one option is to build shaders in WidowShade and port them over to Shrimp. Another option is to export the entire RSL code and insert it as a node itself.
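Because both tools store nodes as XML, the porting can be largely mechanical. The element and attribute names in this sketch are hypothetical stand-ins for the two schemas, not the actual WidowShade or Shrimp file formats:

```python
import xml.etree.ElementTree as ET

def port_node(widowshade_xml):
    """Translate a WidowShade-style <node> element into a Shrimp-style
    <shrimp_node> element, carrying the node name and RSL body across.
    Both schemas here are invented for illustration."""
    src = ET.fromstring(widowshade_xml)
    dst = ET.Element("shrimp_node", name=src.get("name", "unnamed"))
    code = ET.SubElement(dst, "rsl_code")
    code.text = src.findtext("code", default="")
    return ET.tostring(dst, encoding="unicode")
```

The same approach covers the second option mentioned above: a whole exported RSL shader can be wrapped as the `rsl_code` body of a single node.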

Of course the older shaders will need to be rewritten in Shrimp, but it should not be too hard to do.

The reason for the shader development refactor is simply to make things more streamlined, without a lot of shader files everywhere, as well as to make shader-to-shader functions easier to work with.

The RenderMan Shading Language is just that, a programming language, one that outputs imagery; in our case we have to treat shader development very much like software development. Given our tools and how everything is going to tie together, it may require rebuilding the tools we need to get this done.


This RSL builder is now the tool Project Widow will be using to create all the custom shaders used in the short, mainly due to the built-in AOV code that is essential to the end stage of compositing. It is also a very complete tool, with nodes that are not found in any other editor. However, there has been some editing of the source code: since Blender cannot process long file names in the text editor, the file "rsl_shrimp_shadingmodels.h" has been renamed to "rsl_shadingmodels.h" and all the nodes that point to this file have been changed as well. This version of Shrimp is a local build; nobody is required to have it on their machines, though the header files are required in order to compile shaders from within Blender.

The process of including the restructured header files into the Project Widow blend files has started; this way any production files for the project can be used in any stage past Layout.

AOV output naming is also being changed from using a single TIFF file to the OpenEXR layered image format, with all the required AOV output needed for compositing.
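Moving from one TIFF per AOV to a single layered OpenEXR means each AOV becomes a set of named channels in one file, conventionally qualified as "Layer.Channel". A small sketch of that naming, using the layer names from the Display listing earlier on this page:

```python
def exr_channels(layer, channels=("R", "G", "B")):
    """Return fully qualified OpenEXR channel names for one AOV layer,
    in the conventional 'Layer.Channel' form."""
    return ["%s.%s" % (layer, c) for c in channels]

# All the per-AOV channels end up side by side in the one layered file.
def all_channels(layers):
    names = []
    for layer in layers:
        names.extend(exr_channels(layer))
    return names
```

A compositor reading the file then pulls, say, the `RenderLayer.Diffuse` channels without touching any other pass, which is the point of the switch.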

Shrimp Trackers:


This is an in-house shader builder based on ShaderLink. It is a Python based program similar in functionality and look to SLer and ShaderMan, but it has some advantages over those previously used programs, not to mention the heavy work currently being done to build a solid shader pipeline. Some of this involves taking code (be it Python or RSL) directly from other programs, cramming the various RSL functions found on the net into a series of header files based on the same files from Shrimp, making custom XML nodes that interface directly with Mosaic's AOV presets, and making sure that all of this can be distributed to the team via the SVN server.

This tool has been replaced by Shrimp, though the source code will remain in the SVN. All changes made for WidowShade will be ported over to the original ShaderLink code project soon.

The following tools have been deprecated in favor of Shrimp.


I will be using Shaderman for building my shaders and then fixing them afterwards in a code editor to "clean them up" so that they are readable. This way the SL code can be presented to others in the formal, readable format that programmers are used to seeing (tabs, for instance). While I am using Shaderman 0.7 (the original program) for the majority of the shader building, simply because I have been using it for 5 years and am used to it, I will also be working on Shaderman.Next, mostly adding bricks to the collection. Since Shaderman 0.7 is Windows-only software, it doesn't run on Linux unless you do some configuring in WINE. Shaderman.Next, on the other hand, is completely open source and thus expandable.


This is also a Python based shader tool, though a bit more complete, as it has more bricks to use initially and a few more options for how to work with it.

Outside Research

One of the best ways to figure things out with RenderMan, or to find inspiration, is to use the Pixar Research Library. It is highly technical, and only those with excellent math skills should attempt to duplicate the work, but for artists it serves as a great way to understand some of the concepts of RenderMan, and it can also serve as a pointer in the right direction. Considering these papers come from the very place where it all began, they apply directly to RenderMan as opposed to "general" rendering methods.

Pixar Papers


Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License