LambSoft: Production Work

 
As a test of LambSoft's MoveTools and Pro Motion in a production environment, and to produce some animations for marketing purposes, we created an animated short called 'Chippy'. We used a Flock of Birds system and a pair of data gloves to motion-capture a ballet dancer, then used Pro Motion to map the performer's animation (below, left) onto the quite differently proportioned Chippy character. I modeled and textured the characters and environment, and rigged Chippy for face and ear animation.
 


Chippy was a cartoon-styled chipmunk.


The performer we motion-captured (skeleton on left) was proportioned quite differently from Chippy.

 
The Chippy character was animated interacting with her own reflection in a mirror. The foreground Chippy and the mirror-reflection Chippy (known as Nasty) had slightly different animations, and producing a final frame with all of its parts required a multi-step compositing process. To make it easier for the animator to manage the two characters, and to partially automate the rendering and compositing, I developed a MaxScript interface that let the animator selectively turn parts of the scene on and off, and selectively render/composite subparts of each frame. For the curious, you can see the code here.
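As a rough illustration of how such a scene-toggle interface can work in MaxScript, here is a minimal sketch; the rollout, object-name prefixes ("Chippy_", "Nasty_"), and checkbox layout are hypothetical, not the original tool.

```maxscript
-- Hypothetical sketch of a scene-toggle rollout, assuming the two
-- characters' objects are named with "Chippy_" and "Nasty_" prefixes.
rollout chippyTools "Chippy Render Helper"
(
    checkbox cbFg "Foreground Chippy" checked:true
    checkbox cbNasty "Mirror Chippy (Nasty)" checked:true

    -- Show or hide every object whose name starts with the given prefix.
    fn setVisByPrefix prefix state =
        for obj in objects where matchPattern obj.name pattern:(prefix + "*") do
            obj.isHidden = not state

    on cbFg changed state do setVisByPrefix "Chippy_" state
    on cbNasty changed state do setVisByPrefix "Nasty_" state
)
createDialog chippyTools
```

Hiding one character at a time like this is also what makes per-character render passes possible, since the renderer only sees the visible objects.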
 
 
One of the characters I created to demonstrate and showcase both MoveTools and Pro Motion was a simple robot named 'Ovis'. Ovis was animated using data from an optical motion-capture system. I used Pro Motion to drive the performer skeleton indirectly from the optical markers (the raw capture data), and then to map the orientations of the performer skeleton onto the character skeleton, with hand-keyed edits to the animation.
 
To demonstrate the custom-rigging capabilities of Pro Motion, I created a simple character-rigging interface that automated the process of applying motion-capture data to a custom character skeleton in 3ds Max. Click here to see the code.

The top section automates building a performer skeleton from optical data. The bottom section helps create a mapping from one skeleton to another.
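The skeleton-to-skeleton mapping step can be sketched in MaxScript roughly as follows; the prefixes ("Perf_", "Ovis_") and bone names are illustrative assumptions, not the original rigging tool's code.

```maxscript
-- Hypothetical sketch: copy rotations from performer bones to the
-- matching character bones, pairing them by a shared name suffix.
fn mapSkeletons perfPrefix charPrefix boneNames =
(
    for bn in boneNames do
    (
        local perfBone = getNodeByName (perfPrefix + bn)
        local charBone = getNodeByName (charPrefix + bn)
        if perfBone != undefined and charBone != undefined then
            -- Transfer orientation only, leaving bone positions (and
            -- thus the character's own proportions) untouched.
            in coordsys world charBone.rotation = perfBone.rotation
    )
)

-- Example: map a few upper-body bones from performer to character.
mapSkeletons "Perf_" "Ovis_" #("Spine", "Head", "LUpperArm", "RUpperArm")
```

Mapping orientations rather than positions is what lets capture data from one body drive a skeleton with very different proportions.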