Our goal when creating the four-person virtual group MAVE: was to design appealing characters, each with a completely new appearance that didn’t exist anywhere else in the world. An attractive character can’t rely on appearance alone; it also needs a wide range of facial expressions for different situations. That’s why we focused on building a pipeline and the technology to achieve this.
Q: I heard that you used MetaHuman to create your characters. Can you explain why?
As I mentioned, as well as having an attractive appearance, it’s very important for a compelling character to have a range of detailed facial expressions for different situations. However, creating and modifying such facial expressions is time-consuming and expensive, because it always involves rigging and modeling and requires iterative revision and verification. That’s why Epic’s MetaHuman technology, built on decades of experience in creating digital humans, was the perfect choice. It became a crucial part of the pipeline for our characters.
With the MetaHuman facial rig, we were able to easily create the facial expressions we wanted and share animations between characters. We were also able to focus on R&D (e.g. improving rig controls) by referring to the Rig Logic: Runtime Evaluation of MetaHuman Face Rigs white paper released by Epic Games. In addition, the high level of compatibility with external tools such as NVIDIA’s Audio2Face, the Live Link Face app for iPhone, Faceware, and FACEGOOD allowed us to apply MetaHuman animation and drastically reduce production time, because all of the characters share the same underlying mesh topology, UVs, joint structure, and controls.
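The value of that shared rig structure can be shown with a small sketch. This is a hypothetical model, not the actual MetaHuman API: the class, control names, and characters below are illustrative only. The point is that when every character exposes the same named controls, one authored animation frame can drive any of them without retargeting.

```python
# Hypothetical sketch (not the MetaHuman API): rigs keyed by a shared
# set of control names can consume the same animation data directly.
from dataclasses import dataclass, field


@dataclass
class FaceRig:
    """A face rig whose state is a mapping of control name -> value."""
    name: str
    controls: dict = field(default_factory=dict)


def apply_frame(rig: FaceRig, frame: dict) -> None:
    """Copy one frame of animation onto a rig.

    This works unchanged for any rig exposing the same control names,
    which is what a shared topology and rig structure buys you.
    """
    for control, value in frame.items():
        rig.controls[control] = value


# One frame of a smile, authored once (control names are made up)...
smile_frame = {
    "CTRL_L_mouth_cornerPull": 0.8,
    "CTRL_R_mouth_cornerPull": 0.8,
    "CTRL_C_jaw_open": 0.1,
}

# ...applied to two different characters with identical control sets.
rig_a, rig_b = FaceRig("character_a"), FaceRig("character_b")
apply_frame(rig_a, smile_frame)
apply_frame(rig_b, smile_frame)
```

Because the control sets match, `rig_a.controls == rig_b.controls` after applying the frame; the animation is authored once and reused everywhere.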
Q: Why did you choose Unreal Engine along with MetaHuman?
When we were planning MAVE:, we put a lot of thought into how the project should be positioned and the sort of activities we’d want the virtual band to take part in. The productivity of our content was the most important consideration: a lot of activities means a lot of content production, and that requires production efficiency; otherwise, we would have had to compromise on visual quality. So we chose Unreal Engine not only for its efficiency, but also for its real-time rendering quality. We used Unreal Engine to extend the boundaries of MAVE:’s activities across various areas, including producing a transmedia music video within a short time, social media activities, and upcoming TV shows and commercials.
Social media is an important channel for engaging with fans and creating a bond with them, and making that happen requires a large amount of high-quality content in various forms. This is why we chose Unreal Engine over other tools. With Unreal Engine, we were able to create content in many forms, including photorealistic images and videos, to engage with fans across multiple social platforms.
Q: What kind of pipeline was used to create each character of MAVE:?
The MAVE: creation team is made up of talented individuals from a variety of backgrounds, such as the gaming and film industries, which means the team members have all used different DCC tools depending on their specialty. For example, team members from the gaming industry have a good understanding of real-time rendering, while those from the M&E industry have expertise in video media production, so we built a dedicated pipeline to maximize the synergy between team members.
The pipeline consists of character planning and character creation. Character creation is divided into detailed steps such as modeling, facial expression creation and rigging, hair creation, and body calibration.
Character planning is the stage where each character’s appearance is designed. This process was conducted in close collaboration with experts from Kakao Entertainment, who have a great deal of experience in planning successful K-pop bands. In a traditional K-pop band, however, the members are selected from an existing trainee pool and their look is completed with make-up and styling. For a virtual band, we have to create each virtual human as a completely new and attractive person, not only in appearance, but also in detailed facial expressions, movements, speech patterns, and so on.
To bridge this gap and give the planning team a working environment as close as possible to their original one, the production team built a pipeline that uses a GAN to automatically generate target face images and lets artists manually modify or combine the underlying eigenvectors. This enabled the planning team to select an existing candidate and adjust its parameters to fit the plan, instead of having to create a character’s appearance from scratch. The planning team helped us by sharing the insights into what makes a successful K-pop band that they’ve built up over the years.
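The latent-vector editing idea described above can be sketched in a few lines. This is an assumption-laden illustration, not the team’s actual network: `LATENT_DIM`, the random latents, and the commented-out `generator` call are all stand-ins for a trained face GAN. It only shows the arithmetic of blending and nudging vectors that steer a generator’s output.

```python
# Sketch of latent-space face editing with a GAN (illustrative only).
import numpy as np

LATENT_DIM = 512  # a common latent size for face GANs; an assumption here


def blend(latents, weights):
    """Combine several latent vectors via a normalized weighted average."""
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()  # make the weights sum to 1
    return np.tensordot(weights, np.stack(latents), axes=1)


def nudge(latent, direction, strength):
    """Move a latent along an attribute direction (e.g. a 'rounder jaw' axis)."""
    return latent + strength * direction


rng = np.random.default_rng(0)
z_a = rng.standard_normal(LATENT_DIM)  # latent behind candidate face A
z_b = rng.standard_normal(LATENT_DIM)  # latent behind candidate face B

# Mostly face A, with a little of face B mixed in.
z_mix = blend([z_a, z_b], [0.7, 0.3])

# A real pipeline would decode the edited latent back into an image:
# image = generator(z_mix)  # `generator` is a hypothetical trained GAN
```

Selecting an existing candidate and adjusting its parameters, as the interview describes, corresponds to picking a latent like `z_a` and repeatedly applying operations like `blend` and `nudge` until the planners are satisfied.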