
Tencent Proposes AniPortrait: An Audio-Driven Synthesis of Photorealistic Portrait Animation


The emergence of diffusion models has recently made it possible to generate high-quality images, and when refined with temporal modules, these models also excel at creating compelling videos. Moreover, the ability to generate realistic, dynamic portrait animations from audio inputs and static images holds immense potential across various domains. This approach finds applications in virtual reality, gaming, and digital media, and its impact extends to content creation, storytelling, and personalized user experiences.

However, producing high-quality, visually captivating animations that maintain temporal consistency poses significant challenges. These difficulties arise from the intricate coordination of lip movements, facial expressions, and head positions required to craft visually compelling results. Existing methods have often fallen short because they rely on limited-capacity generators for visual content creation, such as GANs, NeRF, or motion-based decoders. These networks exhibit limited generalization and often lack stability when producing high-quality content.

Tencent researchers introduced AniPortrait, a novel framework designed to generate high-quality animated portraits driven by audio and a reference image. AniPortrait is divided into two distinct stages. In the first stage, transformer-based models extract a sequence of 3D facial meshes and head poses from the audio input; this stage captures subtle expressions and lip movements. In the second stage, a robust diffusion model integrated with a motion module transforms the facial landmark sequence into a temporally consistent, photorealistic animated portrait.
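Bridging the two stages, the predicted 3D facial mesh and head pose are rendered down to 2D landmarks before conditioning the diffusion model. The paper does not publish this step's code; the minimal NumPy sketch below (function names, the Euler-angle pose convention, and the pinhole-camera parameters are all illustrative assumptions) shows how a head pose can map mesh vertices to 2D image points.

```python
import numpy as np

def euler_to_rotation(yaw, pitch, roll):
    """Build a 3x3 rotation matrix from Euler angles (radians), Z-Y-X order."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return Rz @ Ry @ Rx

def project_mesh(mesh, yaw, pitch, roll, translation,
                 focal=1000.0, center=(256.0, 256.0)):
    """Apply a head pose to an (N, 3) mesh and pinhole-project to 2D landmarks."""
    R = euler_to_rotation(yaw, pitch, roll)
    cam = mesh @ R.T + translation          # rotate, then translate into camera space
    x = focal * cam[:, 0] / cam[:, 2] + center[0]
    y = focal * cam[:, 1] / cam[:, 2] + center[1]
    return np.stack([x, y], axis=1)         # (N, 2) pixel coordinates

# Toy mesh: 468 vertices (a MediaPipe-style count), placed 0.5 m in front of the camera.
mesh = np.random.default_rng(0).normal(scale=0.05, size=(468, 3))
landmarks = project_mesh(mesh, yaw=0.1, pitch=-0.05, roll=0.0,
                         translation=np.array([0.0, 0.0, 0.5]))
print(landmarks.shape)  # (468, 2)
```

Per-frame landmark maps like this are what the second stage consumes, one set per video frame.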

Experimental results demonstrate the superior performance of AniPortrait in creating animations with impressive facial naturalness, varied poses, and excellent visual quality. Leveraging 3D facial representations as intermediate features provides the flexibility to modify them, broadening the framework's applicability to domains such as facial motion editing. The framework comprises two modules: Audio2Lmk and Lmk2Video. Audio2Lmk extracts a sequence of landmarks capturing intricate facial expressions and lip movements from the audio input, while Lmk2Video uses this landmark sequence to generate high-quality portrait videos with temporal stability.
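The division of labor between the two modules can be sketched as a simple data-flow contract. The interfaces, shapes, and class names below are illustrative assumptions (the bodies are stubs, not the real networks), but they show what each module consumes and produces.

```python
import numpy as np

class Audio2Lmk:
    """Stub standing in for the transformer that maps audio to a landmark sequence."""
    def __init__(self, n_landmarks=68, fps=30):
        self.n_landmarks, self.fps = n_landmarks, fps

    def __call__(self, audio, sample_rate=16000):
        n_frames = int(len(audio) / sample_rate * self.fps)
        # Real model: wav2vec features -> 3D mesh + head pose -> projected 2D landmarks.
        return np.zeros((n_frames, self.n_landmarks, 2))

class Lmk2Video:
    """Stub standing in for the diffusion model with a temporal motion module."""
    def __init__(self, height=512, width=512):
        self.height, self.width = height, width

    def __call__(self, landmarks, reference_image):
        n_frames = landmarks.shape[0]
        # Real model: SD1.5 backbone conditioned on landmarks and, via a
        # ReferenceNet, on the appearance of reference_image.
        return np.zeros((n_frames, self.height, self.width, 3), dtype=np.uint8)

audio = np.zeros(32000)                        # 2 s of 16 kHz audio
reference = np.zeros((512, 512, 3), dtype=np.uint8)
lmk_seq = Audio2Lmk()(audio)                   # landmark sequence, one set per frame
video = Lmk2Video()(lmk_seq, reference)        # (frames, H, W, 3) portrait video
print(lmk_seq.shape, video.shape)
```

The key design point this makes concrete: the landmark sequence is the only interface between the modules, so either side can be retrained or swapped independently.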

In Audio2Lmk, a pre-trained wav2vec model is used to extract audio features. This model exhibits strong generalizability, accurately capturing both pronunciation and intonation from the audio. The network structure of Lmk2Video draws inspiration from AnimateAnyone, employing SD1.5 as the backbone and incorporating a temporal motion module. Similarly, a ReferenceNet, echoing the architecture of SD1.5, extracts appearance information from the reference image and integrates it into the backbone. Four A100 GPUs are used for model training, with two days dedicated to each step, and the AdamW optimizer is employed with a constant learning rate of 1e-5.
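wav2vec's convolutional frontend downsamples raw 16 kHz audio to roughly one feature vector every 20 ms. As a loose, dependency-free illustration of that framing step (not the real wav2vec network, whose pretrained weights would normally be loaded from a library such as HuggingFace Transformers), the sketch below slices a waveform into 25 ms windows with a 20 ms stride:

```python
import numpy as np

def frame_audio(waveform, sample_rate=16000, win_ms=25, hop_ms=20):
    """Slice a 1-D waveform into overlapping frames (wav2vec-style ~20 ms stride)."""
    win = int(sample_rate * win_ms / 1000)   # 400 samples per window
    hop = int(sample_rate * hop_ms / 1000)   # 320 samples between window starts
    n_frames = 1 + (len(waveform) - win) // hop
    idx = np.arange(win)[None, :] + hop * np.arange(n_frames)[:, None]
    return waveform[idx]                     # (n_frames, win)

wave = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # 1 s test tone
frames = frame_audio(wave)
print(frames.shape)  # (49, 400)
```

Each frame would then be encoded into a feature vector; the transformer in Audio2Lmk operates on this per-frame feature sequence rather than on raw samples.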

In conclusion, this research presents AniPortrait, a diffusion-based framework for portrait animation that can generate portrait videos featuring smooth lip motion and natural head movements. However, obtaining large-scale, high-quality 3D data is quite expensive, so the facial expressions and head postures in the generated videos cannot fully escape the uncanny valley effect. The authors therefore plan to predict portrait videos directly from audio to achieve more striking generation results.


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.


Sajjad Ansari is a final-year undergraduate at IIT Kharagpur. As a tech enthusiast, he delves into the practical applications of AI, with a focus on understanding the impact of AI technologies and their real-world implications. He aims to articulate complex AI concepts in a clear and accessible manner.





