R+D

This page focuses on researching the harder elements of the short and how to accomplish them in a timely and efficient manner, as well as documenting the process for future reference.

Effects

Spider Web

This model, or series of objects, can be made quite easily using bezier curves in Blender. The issue is that this method will make it VERY difficult to animate later on, at least as far as I know. Is there a way to rig curves in Blender? If not, what other options are there? Cloth animation would seem an ideal way to dynamically alter the webbing as the spider walks across it, however the cloth sim cannot be used on curves. Maybe use very thin polygons? But is a cloth sim really needed? Is it overkill for such a short animation when, in the story, the web gets destroyed anyway?

Curves seem to be the best option for this web and any other spider strands; the issue is how to control them better than keyframing each control point by hand. If they can be rigged, that would at least help with moving the strands.

The web really would be the absolute hardest thing to model and animate in this entire short.

There are a few options:

  1. Bezier curves with a wave modifier (With individual curves animated by hand?)
  2. Polygon model (thin three-sided tubes, with the physics sim converted to Ipo data and hand edited; probably the most versatile method)
  3. 2D plane with texture map of spider web

There actually might be a need for multiple kinds of webs. I outsourced the initial script and storyboard to a local friend of mine who does this kind of work, and his text helped to visualize the environment. One of the things he added was older cobwebs flowing gently in the wind - something the wave modifier web R+D file would be useful for. Curves are a must, it seems, for the strands of silk attached to the spider's rear.

Much more work needs to be done for this, since this is part of the central story.

After a series of experiments with soft bodies and collisions, finally there is a GOOD test blend file! The strand is a single curve, with 3 control points. One of these control points is the pivot that bends the rest of the strand. The other two are the anchors that hold the strand in place. With some extreme settings, the strand also bounces almost like the real thing. With enough planning, using strands will work for dynamic spider vs. web effects.

It is in the R+D directory of the project folder.

The sim works well, however the data does NOT export to Aqsis. Even when baked, the soft body does not deform at all and remains in place. This is because the control points do not actually move in 3D space - in Edit Mode they stay at their original positions. At this point there is nothing that can be done to alter that.

Curve+Hooks+Armature

I think the best approach would be to hand animate the main web, which the spider will interact with; having it simulated would probably give us less than stellar results.

We could use a polygon web converted to curves, with custom-placed hooks attached to an armature, run a simulation (wave modifier?) on the web, and hand animate the armature to make it look like the spider is affecting the web as it walks over it. That said, when you watch spiders walking on webs, they don't have much of an effect on the web beyond a little overall movement.

Another benefit of this idea is that we can parent the spider's feet bones to the web's armature bones, which means the web could be blown about and the spider would stay in contact with it.

A Python script would be really useful here: for example, if an armature bone comes into contact with another armature bone, it parents itself to it, or snaps to that bone's center. Not sure if that's even possible with the current Blender API, but it would be pretty cool.
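Outside of Blender, the core of that snap idea is just a proximity test against bone centers. Here is a minimal sketch of the logic in plain Python; all the names and the threshold are hypothetical, and this says nothing about whether the Blender API can actually do the parenting part.

```python
import math

def dist(a, b):
    """Euclidean distance between two 3D points given as tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def snap_to_nearest(foot, bone_centers, threshold=0.1):
    """Return the nearest bone center if it is within threshold, else leave the foot alone."""
    nearest = min(bone_centers, key=lambda c: dist(foot, c))
    return nearest if dist(foot, nearest) <= threshold else foot

# hypothetical web armature bone centers
web_bones = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(snap_to_nearest((1.05, 0.0, 0.02), web_bones))  # snaps to (1.0, 0.0, 0.0)
print(snap_to_nearest((5.0, 5.0, 5.0), web_bones))    # out of range, unchanged
```

Running this check every frame for each foot bone would give the contact-snap behaviour; the open question remains whether the parenting can then be changed from script.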

Further testing shows that armature bones, in fact, do not export to RenderMan either.

Again, the problem with the curve method is that the points of the curve do not move as far as Blender is concerned - in Edit Mode the points return to their original location - so, as with the simulation, this method does not render like we want it to.

The rest

The rest of the webs could be a mix of curves and/or poly meshes with cloth/soft body sims or wave modifiers.

Also, the possibility of using texture maps on 2D planes shouldn't be overlooked. I think this would be a nice, fast way of getting complex-looking webs in no time.

web.PNG
(polygon web)

Conclusion

There is no simple way to do this. We should look at ultra-thin polygons for anything that might interact dynamically with the spider, curves for single strands attached to the spider's rear, and texture maps for long-distance shots. Pretty much everything we can think of and get away with we should use, even if we have to cheat. This, I think, is going to be the absolute hardest part of the entire process.

Blowing Paper

There will be a need for blowing paper and other forms of trash/debris while the train is ripping past the camera and our subject. One of the simulations possible with Blender is cloth, so the idea of creating cloth objects as trash is ideal. With Blender 2.48 it is also possible to use effectors, which can dynamically alter the movement of the trash. This at least gives us a chance to show that some of the latest goodies of Blender can be exported (unlike the spider web tests, which have proven to be difficult), so we are going to push it to its limits. Every single thing that can possibly be blown by wind will be.
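Conceptually an effector is just an external force field sampled every frame. The sketch below is a toy point-particle version of that idea in plain Python - a constant wind force plus damping - not Blender's cloth/effector solver, and all constants are made up for illustration.

```python
def step(pos, vel, wind=(1.0, 0.0, 0.0), strength=2.0, damping=0.98, dt=0.04):
    """Advance one trash particle one frame under a constant wind force.

    wind      - unit direction the wind blows toward (hypothetical)
    strength  - force magnitude, damping - simple air-drag factor
    """
    vel = tuple(damping * (v + strength * w * dt) for v, w in zip(vel, wind))
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    return pos, vel

# one second at 25 fps: the particle drifts downwind along +X
pos, vel = (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)
for _ in range(25):
    pos, vel = step(pos, vel)
print(pos)
```

Blender's wind effector does the equivalent per cloth vertex, with turbulence and falloff on top, which is what gives the paper its flutter.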

One of the only questions with using a cloth object is: will it retain a base shape and still move effectively as "paper" instead of "cloth"? The testing I have done so far has been with flat planes; I need to try other geometry before I can answer that question.

More testing has been done to probe the limits of this effect, and so far I have yet to be disappointed by the simulations. Even with a dozen four-face polygons as individual cloth objects, the simulation ran fairly well. This test is in the R+D directory. The cloth objects are not instanced, so if by chance one wanted to see this rendered, it will take a bit longer to export to RIB. There are currently no RenderMan settings in this file; it was strictly a Blender cloth simulation test.

So far, of all the testing done for the various effects wanted, cloth seems to be the most rewarding, which comes as a sigh of relief since there will be at least a few shots where bits of paper and debris are being blown by the wind. Having a cloth simulation do the hard part will save a bit of time, plus we can showcase that at least some of the cooler recent developments in Blender are exportable to RenderMan.


Custom Python Scripts

The custom Python scripts are usually downloaded from the net to test their usability in the pipeline; some are custom made. So far several of them have found their way into the project folder. These scripts are tested locally before being placed into the project folder - once for stability and once for usefulness. Some have only one function, others have multiple uses throughout the entire process. Mosaic is also included in the folder, as it is a Python script.

ConSpot

This Python script was written by a Blender user per request. It generates a spotlight that is constrained to an empty object. This allows quick and easy placement of spotlights: you can move the empty to the subject that needs to be lit, then move the spotlight to further tune the lighting. The idea came after watching a video demonstrating the Lpics software developed by Pixar during the production of "Cars"; in the video these spotlight systems were present and were added at the click of a mouse. This makes moving lights around much easier than the default Blender setup. While it is easy to build this rig by hand, it does take some time to do.

To use the script, place it in your scripts folder. The script is programmed to place itself in the Objects menu so that when Blender loads up you can access this either in the Scripts window, or while in the 3D window - making ConSpot easy to access was important.
When you click on the script it will place the empty object where your cursor is located, then places a spotlight 10 units above it on the Z axis.
Further development of the script will likely happen, though it is unclear when. Ideas include a GUI to allow adding different light types instead of just the current spotlight, such as Sun and Hemi lamps, and allowing the distance between the two objects to be changed. As of right now the script functions quite well.


OpenEXR Layers

This is one of the areas of R+D that has been approached by both Eric Beck and the Aqsis developers. The idea is to have multiple layers of image data in a single EXR file that can be used by the Image node in Blender's compositor. The first task was to build the EXR display driver so that it could support such a process; the next was to add that functionality to Mosaic. Some of what is needed is also something that Blender's renderer cannot do without multiple files and tricks to combine them all together.

The reason for this is mainly for AOV information.

- alpha
- color
- diffuse
- ambient (this is flat ambient color+occlusion+GI)
- shadow
- caustic
- specular
- reflection
- refraction
- mist

This will allow a greater degree of control over how the final frame looks without having to render each frame over again if one element is just not quite up to the look wanted, or needed.

Now that the AOV code is functional again in Aqsis, we can begin to fully test this out. However, quite a few custom shaders were built specifically for this production, and these shaders have not had the required AOV code added, so that will need to be fixed long before full render tests can be done.
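For reference, the RenderMan interface requests extra outputs through additional Display calls whose names are prefixed with "+", each bound to an output variable exported by the shader. A RIB fragment covering a couple of the AOVs listed above might look like this (filenames and AOV variable names are illustrative, not the production setup):

```
# primary beauty pass plus two secondary AOV outputs;
# the "+" prefix adds a display instead of replacing the first
Display "frame0001.exr" "exr" "rgba"
Display "+frame0001_diff.exr" "exr" "color aov_diffuse"
Display "+frame0001_spec.exr" "exr" "color aov_specular"
```

Each secondary variable also has to exist as an output varying parameter in the surface shader and be filled in during shading - which is exactly the AOV code our custom shaders are still missing.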

More will be fully explained later…


Renderfarm

DrQueue_Overview.png
  • Configuration, Testing and Processing

Once the setup is complete, I will be taking WIP Project Widow assets (RIB and shaders) and conducting tests. Once some initial tests are completed, I hopefully will be able to set up some easy way to transfer production RIB files to the farm server.

Once this test is completed I will write up a report of how it was done, what assets were used, how RIB files are handled and transferred, the render times, and roughly how much space each process took.

Over a two-week period, developing a renderfarm on the Animux network has come to fruition. The issue at hand was how to export Blender data into RIB data, use the STARTRENDER.SH script that is written upon export, then use DrQueue to read that script and send the job over the network to the slaves. STARTRENDER.SH was not designed for this purpose; it was designed for single-machine execution. We had many errors involving permissions and lost file paths. The ONLY way to solve this was to hand edit the exported script to include a custom call. However, this posed a problem: since there are a lot of shots and exported scenes, hand editing can cause some confusion and user error.

Then came the realization that while there is no "universal" solution to this problem, there is a solution: write a custom script that modifies the STARTRENDER.SH script, then use the modified script to run the job from DrQueue.
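The wrapper idea can be sketched in a few lines of Python: read the exported STARTRENDER.SH, pin it to the export directory (fixing the lost relative paths), and substitute the hard-coded frame number with one supplied by the queue. Everything here is illustrative - the paths, the "0001" convention, and the DRQUEUE_FRAME variable name are assumptions, not the actual production script.

```python
def patch_script(lines, export_dir):
    """Turn a single-machine STARTRENDER.SH into a DrQueue-friendly script.

    lines      - the exported script as a list of text lines
    export_dir - absolute path of the RIB export directory (hypothetical)
    """
    out = ["#!/bin/sh",
           "cd %s" % export_dir,       # fix lost relative file paths
           'FRAME="$DRQUEUE_FRAME"']   # frame number handed over by the queue
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#!") or not line.strip():
            continue                   # drop the old shebang and blank lines
        # replace the hard-coded first frame with the queue's frame variable
        out.append(line.replace("0001", '"$FRAME"'))
    return "\n".join(out) + "\n"

# example: patch a minimal two-line exported script
src = ["#!/bin/sh\n", "aqsis shot0001.rib\n"]
print(patch_script(src, "/renders/shot01"))
```

One pass of this over every exported scene removes the hand editing, and with it most of the room for user error.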

Animux_renderjob_scriptprocessing.jpg

Using this method, the fear that jobs would only render one frame at a time on a single slave has been proven false. At this time two slaves are up and running, with the master also serving as a slave (for testing purposes). Both slaves run when a job is sent, though each renders a single frame per CPU at a time. Since there is no distributed bucket rendering available in Aqsis (yet), this is the only way to do such a render. Keep in mind that these render slaves are high-powered systems, each running two 64-bit AMD CPUs with 2 GB of RAM. When the farm is fully operational it will consist of 20 slaves, all with the same hardware.
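Since each CPU works through whole frames independently, the division of labor amounts to spreading the shot's frame range across the available render processes. A toy round-robin version of that assignment, just to illustrate the scheduling (DrQueue does its own dispatching; this is not its algorithm):

```python
def assign_frames(first, last, cpus):
    """Map each CPU id to the list of frames it will render, round-robin."""
    jobs = {cpu: [] for cpu in range(cpus)}
    for i, frame in enumerate(range(first, last + 1)):
        jobs[i % cpus].append(frame)
    return jobs

# two dual-CPU slaves = 4 render processes working one frame each at a time
print(assign_frames(1, 10, 4))
```

With 20 dual-CPU slaves the same shot would fan out across 40 processes, which is why per-frame scheduling is still a big win even without distributed buckets.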

Update: Animux is dead, it seems, so a smaller-scale version of this is being built.


RedWidow

RedWidow is a project that derived from the need for a solid project management tool; it is being developed as an "in house" application at this time, until proven stable enough for public use.

While we currently use Shotgun for Project Widow, the RedWidow system will be developed once production stops, to be worked on full time.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License