A Speech-Driven 3D Facial Animation System (Session 6: Animation; Topic: CG, Culture and the Arts, and General CG)
Abstract
It is often difficult to animate a face model so that it speaks a given utterance; even for professional animators, this takes a great deal of time. In this paper, we present a speech-driven 3D facial animation system that allows the user to easily generate facial animations. The user only needs to provide a speech recording as input, and the output is a 3D facial animation corresponding to that speech. Our system has three components. The first is the multidimensional morphable model (MMM), which is built from pre-recorded training video using machine learning techniques and can generate a realistic speech video corresponding to the input speech. The second is facial tracking, which extracts the feature points of the human subject in the synthetic speech video. The third is Mesh-IK (mesh-based inverse kinematics), which uses the motion of the feature points to guide the deformation of the 3D face model so that the resulting model matches the appearance of the corresponding frame of the speech video. In this way, the system produces a 3D facial animation as its output.
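The abstract describes a three-stage pipeline: MMM video synthesis from speech, facial feature tracking on the synthetic video, and Mesh-IK deformation of a 3D face mesh. The sketch below is a minimal, hypothetical outline of how such a pipeline could be wired together; all class, function, and parameter names (synthesize_speech_video, track_features, mesh_ik_deform, array shapes, etc.) are illustrative assumptions and do not come from the paper.

```python
# Hypothetical sketch of the three-stage speech-driven facial animation
# pipeline described in the abstract. All names, shapes, and stub bodies
# are illustrative assumptions, not the paper's actual implementation.
from typing import List

import numpy as np


def synthesize_speech_video(speech: np.ndarray, n_frames: int) -> List[np.ndarray]:
    """Stage 1 (MMM): map the input speech signal to a realistic talking-head
    video. A real MMM is learned from training video; this stub returns
    placeholder frames."""
    return [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(n_frames)]


def track_features(frame: np.ndarray, n_points: int = 68) -> np.ndarray:
    """Stage 2 (facial tracking): locate 2D facial feature points in one
    synthetic video frame. This stub returns a zero array of points."""
    return np.zeros((n_points, 2), dtype=np.float32)


def mesh_ik_deform(base_vertices: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Stage 3 (Mesh-IK): deform the 3D face mesh so that it matches the
    tracked feature points of the corresponding frame. A real solver would
    minimize a deformation energy under feature-point constraints; this
    stub returns the mesh unchanged."""
    return base_vertices.copy()


def speech_to_animation(speech: np.ndarray, base_vertices: np.ndarray,
                        n_frames: int) -> List[np.ndarray]:
    """Run the full pipeline: speech -> synthetic speech video -> per-frame
    feature points -> per-frame deformed 3D meshes (the output animation)."""
    video = synthesize_speech_video(speech, n_frames)
    animation = []
    for image in video:
        features = track_features(image)
        animation.append(mesh_ik_deform(base_vertices, features))
    return animation


if __name__ == "__main__":
    speech = np.zeros(16000, dtype=np.float32)             # 1 s of dummy audio at 16 kHz
    neutral_mesh = np.zeros((1000, 3), dtype=np.float32)   # dummy 3D face vertices
    frames = speech_to_animation(speech, neutral_mesh, n_frames=30)
    print(f"Generated {len(frames)} mesh frames")
```

In this reading of the pipeline, the 3D animation is driven indirectly: the speech first produces a 2D video, and the 3D mesh is fit to that video frame by frame rather than to the audio itself.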
- 2006-11-16