Lip Sync

For our Lip Sync project, I decided to go with two sound bites. The first is a scene from the well-known game ‘Metal Gear Solid’: a simple voice line that let me attempt lip sync for the first time. Since I’d never done lip syncing before, I found it quite tedious and challenging, as simple as it may appear at face value.

Once I had my character rig and voice clip ready, I imported the audio into Maya as a .WAV file so I could accurately assess the audio and fine-tune the sync with the character's mouth movements. To do this, I began with basic jaw drops, just as an anime or puppet character would do. This creates the illusion of speaking and can be quite convincing when timed correctly.

Once I had keyed the jaw-drop timing for the character, it was all about moving the lips and tongue to match the words being spoken, so that a viewer could look at the mouth of the 3D character and recognise a word, rather than getting the impression of a foreign-dubbed animation.

After completing this task, I believed the technique was simple enough to attempt on a more advanced character rig. So I looked online and downloaded a rig of Aang from Avatar: The Last Airbender. Without testing the rig or learning its values and control limits, I dove headfirst into lip-syncing. Instead of sticking with an audio piece I was already familiar with, I looked for more emotionally dynamic audio with different levels of volume and intensity.

The audio I eventually decided to go with was from the anime Death Note. The scene depicts a character being exposed for his evil doings, with the arguably evil character insisting that his actions are just. To me, it sounded more like a plea of innocence at a trial or hearing of some kind, so I chose a police-station interrogation room as the setting for the scene and began building the theme.

I found out after beginning the animating phase that I was using an outdated model that wasn't quite compatible with my version of Maya 2022. By then I had already spent a long time on the lip-sync and was too far in to redo the exercise with an updated rig that worked with my version. I took this as a lesson to test and check a rig before committing time to an exercise where it could become a hindrance.

Furthermore, I feel I should have spent more time in the graph editor fixing the obvious problems, such as the body clipping into the table and chest. Looking back at my final outcome, I also wish I had put in more keyframes to create a less robotic, stiff character, just as I had done for my Rotoscope animation project. I had assumed that fewer keyframes would be appropriate since the character was sitting, but learned that my character lost the subtle movements humans naturally have even at rest. This was by far my least successful project, and I wish I had more time for a second attempt.
