During the development of Stay and Slay, we had included ragdolls as a stretch goal, and since development went according to plan I got the opportunity to implement them. This feature was great fun to work on, and the sense of accomplishment once it was completed was immense. In retrospect, since the game's mechanics were heavily physics-based, it would have looked odd to display traditional static death animations. I'm really happy with the end result!
Expand my knowledge of the animation pipeline.
Create ragdoll physics using NVIDIA PhysX and integrate it into our existing pipeline.
Implement a skinning system for ragdolls.
Due to the scarcity of reference materials available, I had to heavily rely on the PhysX documentation, which at times felt insufficient.
One optimization I made was to exclude certain bones, such as the fingers and toes, from the simulation. This meant I had to adjust the skinning process to account for the missing bones. Solving this involved a combination of reverse transformations and trial and error, which you can read more about below.
I wanted to create hit feedback on the ragdolls, both for the bullet shots and kicks. However, the collision checks were based on a single large capsule, so I had to come up with a solution to determine which bone was hit.
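Since the write-up doesn't spell out the final method, here is a minimal sketch of one plausible approach: when the capsule registers a hit, find the simulated bone whose world-space position is closest to the hit point. `Vec3`, `DistanceSq`, and `ClosestBoneIndex` are hypothetical stand-ins, not types from our codebase or from PhysX.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for PxVec3 / the engine's vector type.
struct Vec3 { float x, y, z; };

static float DistanceSq(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Given the world-space positions of every simulated bone, return the index
// of the bone closest to the capsule hit point. Excluded bones (fingers,
// toes) are assumed to be absent from the list.
std::size_t ClosestBoneIndex(const std::vector<Vec3>& bonePositions, const Vec3& hitPoint)
{
    std::size_t best = 0;
    float bestDistSq = DistanceSq(bonePositions[0], hitPoint);
    for (std::size_t i = 1; i < bonePositions.size(); ++i)
    {
        const float d = DistanceSq(bonePositions[i], hitPoint);
        if (d < bestDistSq)
        {
            bestDistSq = d;
            best = i;
        }
    }
    return best;
}
```

The returned index can then be used to apply an impulse to that bone's link.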
I used NVIDIA PhysX 4.1, and the method I chose for simulating the ragdolls is an articulation. In practice, it's not much different from using regular dynamic rigid bodies connected by joints: an articulation is a collection of connected rigid bodies that form a tree-like structure. It's commonly used to simulate complex objects like robots, characters, and vehicles.
"Links" are the rigid bodies that make up the bones of the simulation.
"Actor space" is the transform relative to the actor itself, per the PhysX documentation.
"Model space" is the transform relative to the model.
"Local space pose" represents a pose of a model where each transform is relative to its parent bone.
Creation of the body
Upon death, I create the ragdoll. Each new link needs its parent link passed in; for the initial root link, the parent should be NULL. When creating a link, I calculate its length and position from the parent and child joint matrices. All the necessary data is obtainable through the animated mesh class, from which I get the corresponding joint matrices in world space.
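As a rough sketch of that length-and-position calculation (hypothetical helper names, assuming the link's body sits at the midpoint between its parent and child joints, with the distance between them as its length):

```cpp
#include <cassert>
#include <cmath>

// Stand-in for the world-space joint positions extracted from the
// animated mesh's joint matrices.
struct Vec3 { float x, y, z; };

struct LinkDesc
{
    Vec3 position; // where the link's rigid body is placed
    float length;  // extent of the link between the two joints
};

// Hypothetical helper: derive a link's placement from the world-space
// positions of the parent and child joints it spans.
LinkDesc ComputeLinkDesc(const Vec3& parentJoint, const Vec3& childJoint)
{
    const float dx = childJoint.x - parentJoint.x;
    const float dy = childJoint.y - parentJoint.y;
    const float dz = childJoint.z - parentJoint.z;

    LinkDesc desc;
    desc.length = std::sqrt(dx * dx + dy * dy + dz * dz);
    desc.position = { parentJoint.x + dx * 0.5f,
                      parentJoint.y + dy * 0.5f,
                      parentJoint.z + dz * 0.5f };
    return desc;
}
```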
Between every pair of links a joint is created, and this joint needs to know its position relative to both the parent and the child link. So I take the joint transform expressed in world space and multiply it into the actor space of the respective link.
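In transform terms, that step amounts to multiplying the joint's world-space transform by the inverse of the link's world pose. A self-contained sketch, with a minimal rigid-transform type standing in for PhysX's PxTransform (the struct and helpers are illustrative, not PhysX API):

```cpp
#include <cassert>
#include <cmath>

// Minimal rigid transform (3x3 rotation + translation).
struct Transform
{
    float r[3][3]; // rotation
    float p[3];    // translation
};

// a * b: apply b first, then a.
Transform Mul(const Transform& a, const Transform& b)
{
    Transform out{};
    for (int i = 0; i < 3; ++i)
    {
        for (int j = 0; j < 3; ++j)
        {
            out.r[i][j] = 0.0f;
            for (int k = 0; k < 3; ++k)
                out.r[i][j] += a.r[i][k] * b.r[k][j];
        }
        out.p[i] = a.p[i];
        for (int k = 0; k < 3; ++k)
            out.p[i] += a.r[i][k] * b.p[k];
    }
    return out;
}

// Rigid inverse: transpose the rotation, negate-and-rotate the translation.
Transform Inverse(const Transform& t)
{
    Transform out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out.r[i][j] = t.r[j][i];
    for (int i = 0; i < 3; ++i)
    {
        out.p[i] = 0.0f;
        for (int k = 0; k < 3; ++k)
            out.p[i] -= out.r[i][k] * t.p[k];
    }
    return out;
}

// The joint's frame expressed in a link's actor space: bring the
// world-space joint transform into the link's local frame.
Transform JointFrameInLinkSpace(const Transform& linkWorld, const Transform& jointWorld)
{
    return Mul(Inverse(linkWorld), jointWorld);
}
```

The same helper is used twice per joint: once with the parent link's pose and once with the child link's pose.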
Due to time constraints, I used the same joint limits for all joints.
On top of that, we felt the current ragdoll behavior was great and fit the game's overall feel.
<-- "getGlobalPose" retrieves an actor-to-world-space transformation; thus its inverse returns the link in actor space.
To skin the mesh onto the ragdoll, I need to keep track of the matrices of each joint. To accomplish this, I use a C-style array of articulation link pointers. By using the same container for all the data, I can easily index the same joint across both PhysX joints and regular joints.
I chose to use the links instead of the PhysX joints because I found that it was difficult to extract the necessary information from the joints. I will go into more detail about this later on.
The skinning was not a pretty process, with a lot of trial and error, quick fixes, and possibly redundant steps. Since the low-level structure for skinning already existed, all I had to do was create a model space or local space pose to send into the function SetPose().
Ideally, I would save all the joints in model space; that way I could avoid converting the local space pose to model space. But due to the way I handle the skinning of the joints with no PhysX equivalent, I wanted all the joints in local space, i.e. relative to their parent. This hints at my solution for the excluded joints, the fingers and toes.
At the moment of death, I save the local space pose of all the joints, as well as their world space pose. To accomplish this, I needed to perform a reverse transformation of the current pose to local space. Specifically, I went in reverse order from:
AppliedBindPoseInverse -> ModelSpace -> LocalSpace,
saving the world space pose along the way.
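Under a common skinning convention (an assumption on my part: the applied palette stores modelSpace * bindPoseInverse), the two reverse steps can be sketched like this, again with a minimal rigid-transform type standing in for the engine's matrices:

```cpp
#include <cassert>
#include <cmath>

// Minimal rigid transform (3x3 rotation + translation).
struct Transform
{
    float r[3][3]; // rotation
    float p[3];    // translation
};

// a * b: apply b first, then a.
Transform Mul(const Transform& a, const Transform& b)
{
    Transform out{};
    for (int i = 0; i < 3; ++i)
    {
        for (int j = 0; j < 3; ++j)
        {
            out.r[i][j] = 0.0f;
            for (int k = 0; k < 3; ++k)
                out.r[i][j] += a.r[i][k] * b.r[k][j];
        }
        out.p[i] = a.p[i];
        for (int k = 0; k < 3; ++k)
            out.p[i] += a.r[i][k] * b.p[k];
    }
    return out;
}

// Rigid inverse: transpose the rotation, negate-and-rotate the translation.
Transform Inverse(const Transform& t)
{
    Transform out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out.r[i][j] = t.r[j][i];
    for (int i = 0; i < 3; ++i)
    {
        out.p[i] = 0.0f;
        for (int k = 0; k < 3; ++k)
            out.p[i] -= out.r[i][k] * t.p[k];
    }
    return out;
}

// Step 1 (undo AppliedBindPoseInverse): if palette = modelSpace * bindPoseInverse,
// then modelSpace = palette * bindPose.
Transform ToModelSpace(const Transform& palette, const Transform& bindPose)
{
    return Mul(palette, bindPose);
}

// Step 2 (ModelSpace -> LocalSpace): express the joint relative to its parent bone.
Transform ToLocalSpace(const Transform& parentModelSpace, const Transform& modelSpace)
{
    return Mul(Inverse(parentModelSpace), modelSpace);
}
```

Running these two steps over the whole skeleton, root to leaves, recovers the local space pose while the intermediate model space transforms give the world space pose to save.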
This is where the container for our links comes in handy. Because the array of links consists of pointers, I can easily detect which indices do not have a corresponding PhysX joint. From each link, I get its joint and extract the world space transform (globalJointTransform), then replace the worldSpacePose transform at that index with the PhysX transform.
<-- Special case for hipJoint
After obtaining all the joint transforms in world space, the next step is to convert them to local space. Once this conversion is complete, it will be possible to replace the joints that do not have a corresponding PhysX joint with the saved local space joint.
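The replacement step can be sketched as a single pass over the link array: a null entry means the joint has no PhysX counterpart, so its saved local pose is kept; otherwise the simulated pose wins. All names here are hypothetical, and `Pose` is reduced to one component purely to keep the sketch short.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-ins: Pose for the engine's joint transform and Link
// for a PhysX articulation link whose pose has already been converted
// to local space.
struct Pose { float y; };
struct Link { Pose localPose; };

// Walk the link array in parallel with the pose array. A null link pointer
// marks an excluded joint (fingers, toes): restore its saved local pose.
// Otherwise, take the pose coming from the simulation.
void BuildLocalSpacePose(const std::vector<const Link*>& links,
                         const std::vector<Pose>& savedLocalPose,
                         std::vector<Pose>& outPose)
{
    outPose.resize(links.size());
    for (std::size_t i = 0; i < links.size(); ++i)
        outPose[i] = links[i] ? links[i]->localPose : savedLocalPose[i];
}
```

The resulting pose array is what gets handed to SetPose() for skinning.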
Summary & Improvements
Even though the implementation was not all smooth sailing, I am highly satisfied with the outcome, as I believe I achieved the goals I set out to accomplish.
I had a rough time understanding certain aspects of both the skinning and PhysX documentation, but I feel like the lack of reference material highlighted the importance of truly comprehending the task at hand. Through trial and error, I gained valuable insights that I would not have otherwise learned.
Something I'm curious to learn is how to best handle the application of constraints, and perhaps automate it to a certain degree. Or maybe the skeleton of the rig could hold extra information that defines degrees of freedom per joint.
Some things I would like to improve/add:
Optimize the number of bones simulated even further (handle one PhysX bone overlapping multiple joints)
Inverse kinematics (ragdoll character controller?)
Here are some mishaps along the way --->
And a final slaughter |