Procedural animation and inverse kinematics

By Sem

Table of contents

  1. Introduction
  2. Goals and scope
  3. Inverse kinematics
    1. Two joint inverse kinematics
    2. Gradient descent inverse kinematics
  4. Procedural animation
    1. Moving the body
    2. Moving the legs
  5. Results
  6. Future reference
  7. Sources

1. Introduction

I decided to research a topic that was not discussed in the bootcamps. The topics from the bootcamps that I was interested in were mesh generation and AI, but I had already explored those a little bit, so I wanted to research something else. I had seen videos of procedural animation before that looked interesting, so I decided to do something with that. Here is an example of the kind of procedural animation I had seen before this:

2. Goals and scope

My goal for this project is to make a creature that can traverse simple terrain (smooth terrain without abrupt height changes) using procedural animation. Procedural animation in this case means that the rotation and position of the legs are determined by inverse kinematics rather than forward kinematics (both are elaborated on in chapter 3), and that the position of the body is not predetermined but calculated while moving across the terrain. The creature simulates movement, but not through physics, which is another branch of procedural animation. The specific goals for the end result of this project are:

  • Creature with a body that has four legs
  • Legs have more than two joints
  • Joints can rotate on all three axes
  • Creature can freely walk on terrain without a predetermined path (as long as the terrain does not have abrupt height changes; the terrain can even go upside down)

3. Inverse kinematics

In order to understand what inverse kinematics is, we first need to understand "non-inverse" kinematics, also known as forward kinematics. Forward kinematics and inverse kinematics are both methods for moving or animating arm-like objects; for example, they can be used to move a robot arm or to animate a character's limbs in a video game. So let's look at how forward and inverse kinematics would animate the arm of a character. Illustration 1 represents the character's arm: the arm is built up out of bones, and in between these bones are the joints. The first joint of the arm, the one attached to only one bone, is the root joint; in the case of a character's arm that would be the shoulder joint. The end of the arm is called the end effector, the part that reaches towards a target position; for our character this would be its hand. Forward kinematics animates this arm by taking the rotations that the joints should have and calculating the positions of the bones from those rotations. Inverse kinematics works the other way around: given a position for the root joint and a position for the end effector, it calculates the rotations the joints need for the arm to reach between those positions (Addison, 2020). How the positions of the root joint and the end effector are determined in this case will be discussed in chapter 4; determining them is not part of inverse kinematics itself.

Illustration 1 Arm.

3.1 Two joint inverse kinematics

Two joint inverse kinematics means that the arm has two joints and therefore two bones. The example discussed in this chapter is 2D two joint IK (inverse kinematics), which means that both joints have only one axis of rotation. If an arm only has two joints with one axis of freedom, the angles of the joints can be calculated with trigonometry, provided the position of the root joint and the position of the end effector are available. If the lengths of the bones are known (in the case of animating a creature they are) and the distance between the root joint and the end effector is known, those three lengths form a triangle (see illustration 2). Because the lengths of all the edges of this triangle are known, the law of cosines gives the angles of this triangle, and thus the angles the joints should have for the arm to reach between the root joint and the end effector (Zucconi, 2020b).

Illustration 2 Triangle of bone lengths and distance between root joint and end effector.
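This law-of-cosines calculation can be sketched in plain Python (rather than the Unity C# used later in this article). The function name and the convention that the root joint sits at the origin with the target along the x-axis are assumptions made for the example:

```python
import math

def two_joint_ik(bone_a, bone_b, target_dist):
    """Joint angles (radians) for a 2D two-joint arm whose root sits at
    the origin and whose end effector must land target_dist away."""
    # Clamp so the target stays reachable (triangle inequality).
    d = max(abs(bone_a - bone_b), min(bone_a + bone_b, target_dist))
    # Law of cosines: angle at the root joint, between the first bone
    # and the line from root joint to end effector.
    root_angle = math.acos((bone_a**2 + d**2 - bone_b**2) / (2 * bone_a * d))
    # Interior angle of the triangle at the middle joint.
    elbow_interior = math.acos((bone_a**2 + bone_b**2 - d**2) / (2 * bone_a * bone_b))
    # The arm bends by pi minus the interior angle.
    elbow_angle = math.pi - elbow_interior
    return root_angle, elbow_angle
```

Flipping the signs of both angles gives the second, upside-down configuration from illustration 3.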

For 2D two joint IK there are only two possible configurations of the joint angles that let the arm reach between the root position and the end effector. The solution is either the triangle in illustration 2 or that same triangle flipped upside down; illustration 3 shows the two possible solutions.

Illustration 3 The two configurations for the bones to reach between the root joint and the end effector.

If there are more joints, or the joints can rotate in more ways, there are more than two configurations in which the arm can reach between the root joint and the end effector. How an algorithm can choose between these options is explained in chapter 3.2.

Illustration 4 2D two joint inverse kinematics in Unity.

3.2 Gradient descent inverse kinematics

There are multiple ways to deal with the problem of having many solutions; a few examples of algorithms that can be used to solve inverse kinematics are FABRIK, CCD and the Jacobian inverse technique (Aristidou et al., 2017). The technique I used for this project is gradient descent. The general idea of gradient descent is to change the rotation of a joint and see whether the end effector gets closer to its desired location. If it does, that rotation adjustment is kept; if the end effector gets further from its desired location, the joint is rotated in the opposite direction.

Illustration 5 The rotation of a joint being adjusted to see if the end effector gets closer to the target position.

Gradient descent doesn't instantly jump to a solution; it incrementally moves towards one by slightly adjusting the angles of the joints. This is because it doesn't calculate an optimal solution, it only tries to get closer to a solution from the current pose. That also solves the problem of having many solutions: gradient descent moves to the solution that is closest to the current arm position. To reach a final solution, the algorithm is run for multiple iterations so it has time to get to the target.

In Unity the gradient descent algorithm has to run multiple times every frame. When changing a rotation and calculating the new distance from the end effector to its target position, forward kinematics is required, because the updated rotations of the joints and the lengths of the bones are used to calculate the new position of the end effector. The end effector's position is found by starting at the position of the root joint and adding the lengths of all the bones of the arm in the directions they are pointing. The code of forward kinematics:

Vector3 armPosition = joints[0].position; // Position of the root joint
Quaternion rotation = Quaternion.identity;
// Calculating the position of the tip of the arm relative to the root of the arm 
// and adding that to the position of the root of the arm
for (int i = 0; i < rotations.Length; i++)
{
    rotation *= Quaternion.Euler(rotations[i]);
    armPosition += rotation * (Vector3.right * boneLengths[i]);
}
}
// armPosition now holds the position of the end effector

Then, to program inverse kinematics in Unity, the rotations are slightly changed from the current arm rotations, and forward kinematics is used to compare how good the new solution is. One important detail of this method is that not all rotations are changed at the same time. If all rotations were changed at once before checking whether the solution improved, it would be unknown which joint rotations made the solution better or worse. In order to know what impact every joint rotation has on the solution, the joints are rotated one at a time.
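This loop can be sketched in plain Python so it can be read outside of Unity. The function names are my own, and as a small practical tweak the sketch minimises the squared distance, so the steps naturally shrink as the end effector closes in on the target:

```python
import math

def forward_kinematics(angles, bone_lengths):
    """2D forward kinematics: walk down the arm, accumulating each
    joint rotation and adding each bone in its current direction."""
    x, y, heading = 0.0, 0.0, 0.0
    for angle, length in zip(angles, bone_lengths):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

def gradient_descent_ik(angles, bone_lengths, target,
                        learn_rate=0.02, delta=1e-4, iterations=2000):
    """Rotate one joint at a time, estimate how that changes the
    squared distance to the target, and step downhill."""
    angles = list(angles)

    def error():
        ex, ey = forward_kinematics(angles, bone_lengths)
        return (ex - target[0]) ** 2 + (ey - target[1]) ** 2

    for _ in range(iterations):
        for i in range(len(angles)):
            before = error()
            angles[i] += delta                  # trial rotation
            gradient = (error() - before) / delta
            angles[i] -= delta                  # undo the trial
            angles[i] -= learn_rate * gradient  # move against the slope
    return angles
```

In the Unity version the same inner loop runs a few iterations per frame instead of all at once, which is what makes the arm visibly "seek" its target.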

There is one more part to the gradient descent algorithm used in this article. The algorithm works by trying to minimise the distance between the end effector and its desired position, but it can minimise other functions too. This means that we can give the algorithm a target position as well as a target rotation, and the algorithm will also minimise the difference between the current end effector rotation and the desired end effector rotation; this is shown in illustration 6.

Illustration 6 Gradient descent inverse kinematics.

4. Procedural animation

This chapter covers how the creature walks over the terrain. The creature consists of a body and two legs. How the body moves across the terrain is discussed in paragraph 4.1. The legs use IK to determine how they should be positioned, but they still need to be given a position for the root joint and the end effector; this is discussed in paragraph 4.2. The body can be directed forwards and backwards, and it changes direction by rotating.

4.1 Moving the body

The method for moving the body forwards and backwards is a two-step process. When moving forward the steps are: step one, moving the body in its forward direction; step two, updating the rotation and position of the body to stay parallel to the ground and at a set distance from it. Step two is required because the terrain is not flat; without it the body would stay horizontal even when walking on vertical terrain. Another reason for step two is that the body only moves in its forward or backward direction, so if the rotation of the body didn't adjust to the terrain, the body would only be able to move horizontally.

Step one is easily implemented in Unity, because Unity can tell us the forward direction of the body; we can scale that vector to the desired movement speed and add it to or subtract it from the body position. Step two is implemented by casting a ray down from the center of the body after it has been moved. The ray looks for the ground and tells us the surface normal of the ground at this new position and how far the body is from the ground. With the ground normal we can determine the rotation the body needs to stay parallel to the ground, like this:

private void CorrectRotation(RaycastHit hit)
{
    // hit is the raycast hit that hit the ground below the creature
    Quaternion targetRotation = Quaternion.FromToRotation(transform.up, hit.normal) * transform.rotation;
    if (Mathf.Abs(Quaternion.Angle(targetRotation, transform.rotation)) > rotationCorrectionAngle)
    {
        transform.rotation = Quaternion.RotateTowards(transform.rotation, targetRotation, rotationCorrectionSpeed * Time.deltaTime);
    }
}

To make sure the body doesn't get too close to or too far from the ground, the body position should be updated as well. The new position of the body is determined by scaling the surface normal by the predetermined distance the body should keep from the ground and adding that to the point where the ray hit the ground. An important thing to mention is that the body's position and rotation aren't instantly set to these new values but moved slightly towards them each frame. The reason not to change them instantly is to make the movement of the body less abrupt. The code that keeps the body at a set distance from the ground:

private void CorrectPosition(out RaycastHit hit)
{
    Ray rayDownFromBody = new Ray(transform.position, -transform.up);
    if (Physics.Raycast(rayDownFromBody, out hit, 100, groundLayer.value))
    {
        Vector3 targetPosition = hit.point + hit.normal * bodyToGroundDistance;
        float distanceToTargetPosition = Vector3.Distance(targetPosition, transform.position);
        if (distanceToTargetPosition < positionCorrectionSpeed * Time.deltaTime)
        {
            transform.position = targetPosition;
        }
        else
        {
            transform.position += (targetPosition - transform.position).normalized * positionCorrectionSpeed * Time.deltaTime;
        }
    }
}

Rotating the body left and right is also easily implemented in Unity, because Unity provides the upward direction vector of the body as well. To rotate the body left and right, we rotate it around that upward vector, like this:

// Rotating left and right
if (Input.GetKey(KeyCode.RightArrow))
{
    transform.rotation *= Quaternion.AngleAxis(rotationSpeed * Time.deltaTime, Vector3.up);
}
if (Input.GetKey(KeyCode.LeftArrow))
{
    transform.rotation *= Quaternion.AngleAxis(-rotationSpeed * Time.deltaTime, Vector3.up);
}

Illustration 7 The body of the creature traversing the terrain (without smoothing out the body position correction).

4.2 Moving the legs

The legs of the body will take the correct position when given the position of the root joint and the position of the end effector, thanks to the inverse kinematics discussed in chapter 3. The positions of the root joint and the end effector still need to be determined, which is discussed in this paragraph. Determining the position of the root joint is simple: the legs should be connected to the body at all times, so every frame the root joint is set to the position of the body. The desired location of the end effector is a little more complicated, because it decides where the creature plants its feet and when it makes a step.

There are a few constraints that the legs should abide by in order to be in a realistic position. One example is that the end effector shouldn't get too far away from the body, because then the legs won't be long enough to reach between the body and the end effector. If the legs do not fulfil all of these constraints, a leg should be moved to a new position. Only one leg can move at a time, so if a leg is already moving, the other leg has to wait. The constraints that determine whether the legs should move are:

  • If the end effector is too far from the body
  • If both legs are behind the body
  • If the right leg is not on the right side of the body
  • If the left leg is not on the left side of the body
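These checks can be sketched in plain Python, working in body-local space where x points to the body's right and z points forward. The function name and this coordinate convention are assumptions for the example, not the exact implementation used in the project:

```python
import math

def legs_must_step(left_foot, right_foot, max_reach):
    """Return True when any stepping constraint is violated.
    Feet are (x, z) offsets from the body in body-local space:
    x > 0 means the body's right side, z > 0 means in front of it."""
    def too_far(foot):
        return math.hypot(foot[0], foot[1]) > max_reach
    both_behind = left_foot[1] < 0 and right_foot[1] < 0
    left_on_wrong_side = left_foot[0] > 0    # left foot crossed to the right
    right_on_wrong_side = right_foot[0] < 0  # right foot crossed to the left
    return (too_far(left_foot) or too_far(right_foot)
            or both_behind or left_on_wrong_side or right_on_wrong_side)
```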

Once a leg has decided that it has to move, it needs to decide where it is going. The legs use ray casts to determine where they can go. If the creature is moving forward, the rays are cast from in front of the creature down towards the ground. Where a ray hits the ground is where the leg will move to. Once a leg has decided where to step, it slowly moves to this new target position: over the duration of the step the position of the end effector is interpolated between the previous step position and the new step position. An additional vector gets added to the end effector position while it is stepping, to make the creature lift its feet. The code that makes the creature lift its feet off the ground when stepping:

currentMovingLeg.position = Vector3.Lerp(oldLegTarget.position, newLegTarget.position, moveProgress)
                    + transform.up * stepHeight * Mathf.Sin(moveProgress * Mathf.PI);
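The same Lerp-plus-sine idea can be checked standalone in plain Python (using the world y-axis for the lift where the C# code uses transform.up; the function name is an assumption). The sine term is zero at both ends of the step, so the foot starts and ends on the ground, and it peaks at stepHeight exactly halfway:

```python
import math

def step_position(old_target, new_target, step_height, progress):
    """Foot position during a step: linear interpolation between the
    old and new planted positions, plus a sine arc that lifts the foot.
    Positions are (x, y, z) tuples; progress runs from 0 to 1."""
    ox, oy, oz = old_target
    nx, ny, nz = new_target
    lift = step_height * math.sin(progress * math.pi)  # 0 at ends, max halfway
    return (ox + (nx - ox) * progress,
            oy + (ny - oy) * progress + lift,
            oz + (nz - oz) * progress)
```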

5. Results

While I did not reach all of my goals, I am happy that I have a creature that can walk around terrain on two legs. Although the creature does not have four legs, I think I got to a point where the same concepts could be used to expand to four. One thing I'm not happy about, although I didn't set a goal for it, is how the legs look while walking: after walking for a while the legs can get crumpled up. The creature's legs do have more than two joints and can rotate on all axes thanks to the gradient descent inverse kinematics.

Illustration 8 The final creature walking across terrain.

6. Future reference

If I were to work on this project more, there are a few areas I could improve. The most obvious is giving the creature four legs instead of two; I could go about this by changing the leg movement constraints to account for two more legs. Another improvement, which I think is even more important, is making the legs look more natural, because right now they sometimes look crumpled. One way to fix the crumpled legs is to have the gradient descent method minimise some function that measures how crumpled the legs are. Another option is to explore other inverse kinematics algorithms.

7. Sources

Addison, A. (2020, August 27). Difference Between Forward Kinematics and Inverse Kinematics. Automaticaddison. https://automaticaddison.com/difference-between-forward-kinematics-and-inverse-kinematics/

Aristidou, A., Lasenby, J., Chrysanthou, Y., & Shamir, A. (2017). Inverse Kinematics Techniques in Computer Graphics: A Survey. Computer Graphics Forum, 37(6), 35–58. https://doi.org/10.1111/cgf.13310

Codeer. (2020, March 28). Unity PROCEDURAL ANIMATION tutorial (10 steps) [Video]. YouTube. https://www.youtube.com/watch?v=e6Gjhr1IP6w&ab_channel=Codeer

Zucconi, A. (2020a, April 12). An Introduction to Procedural Animations. Alan Zucconi. https://www.alanzucconi.com/2017/04/17/procedural-animations/

Zucconi, A. (2020b, September 14). Inverse Kinematics in 2D – Part 1. Alan Zucconi. https://www.alanzucconi.com/2018/05/02/ik-2d-1/
