Meta announces audio2photoreal, an AI framework that generates character dialogue scenes from voice-over files
Bit News reports that Meta recently announced an AI framework called audio2photoreal, which can generate a series of realistic NPC character models and automatically "lip-sync" and "pose" them to match an existing voice-over file.
According to the official research report, after receiving an audio file the audio2photoreal framework first generates a series of NPC models, then uses quantization and a diffusion algorithm to produce their motions: the quantization step supplies reference action samples for the framework, while the diffusion algorithm refines the quality of the generated character movements.
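The two-stage design described above can be pictured as a quantization step that proposes coarse "guide" poses from audio features, followed by a diffusion-style refinement loop that turns them into final motion. The minimal sketch below is purely illustrative: the function names, tensor shapes, random codebook, and toy denoising loop are assumptions for exposition, not the released audio2photoreal API.

```python
# Hypothetical sketch of a quantize-then-diffuse motion pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook of coarse pose samples (the quantization stage).
CODEBOOK = rng.normal(size=(64, 24))  # 64 code vectors, 24 joint angles each


def quantize_guide_poses(audio_features: np.ndarray) -> np.ndarray:
    """Pick the nearest codebook entry per audio frame (coarse guide poses)."""
    # Project audio features into pose space with a fixed random matrix,
    # standing in for a learned encoder.
    proj = audio_features @ rng.normal(size=(audio_features.shape[1], 24))
    idx = np.argmin(((proj[:, None, :] - CODEBOOK[None]) ** 2).sum(-1), axis=1)
    return CODEBOOK[idx]


def diffusion_refine(guide_poses: np.ndarray, steps: int = 20) -> np.ndarray:
    """Toy iterative denoising: start from noise and pull toward the guide poses."""
    motion = rng.normal(size=guide_poses.shape)
    for t in range(steps):
        blend = (t + 1) / steps
        # A real diffusion model would predict noise conditioned on the audio;
        # here we simply interpolate toward the guides to mimic the refinement loop.
        motion = (1 - blend) * motion + blend * guide_poses
    return motion


# Fake "audio" features: 100 frames, 40 mel-like dimensions.
audio = rng.normal(size=(100, 40))
guides = quantize_guide_poses(audio)
motion = diffusion_refine(guides)
print(motion.shape)  # (100, 24): one refined pose per audio frame
```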
In a controlled experiment, 43% of evaluators were "strongly satisfied" with the dialogue scenes the framework generated, leading the researchers to conclude that audio2photoreal produces "more dynamic and expressive" motion than competing systems. The research team has reportedly released the code and dataset on GitHub.