Meishe Releases a Set of AI-Powered Digital Human Models, Allowing Enterprises to Create Their Own Virtual Avatars
Recently, the Beijing Municipal Government released China's first special support policy for the digital human industry, signaling that a boom in Web 3.0 innovative applications, represented by digital humans, is on the way. Against this backdrop, Meishe Co., Ltd. has rolled out a series of AI-powered digital humans that can mimic human facial expressions, body language, and movement, and can empower a wide range of sectors such as broadcasting, entertainment, retail, education, and tourism.
The text-driven digital human combines Text-to-Speech (TTS) and video processing technology to synthesize speech from written content. Through the back-end platform, users can quickly generate natural virtual-anchor broadcast videos with a few simple operations.
In situations where human anchors are unavailable, text-driven digital humans can greatly improve production efficiency and reduce labor costs, playing an important role in 24-hour rolling broadcasts and breaking-news coverage.
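The text-driven workflow described above can be pictured as a small pipeline: script text goes through TTS, the resulting speech timing drives the avatar's mouth, and frames are rendered into a broadcast video. The sketch below is purely illustrative, with stubbed stand-in functions rather than the actual Meishe SDK; every name and timing constant here is a hypothetical placeholder.

```python
# Hypothetical sketch of a text-driven digital-human pipeline.
# None of these functions are the real Meishe API; the stages only
# show the data flow: text -> speech -> lip-sync timing -> video frames.

from dataclasses import dataclass

@dataclass
class SpeechTrack:
    audio_samples: list      # synthesized waveform (stubbed here)
    phoneme_timings: list    # (unit, start_sec, end_sec) pairs for lip sync

def synthesize_speech(text: str) -> SpeechTrack:
    """Stand-in for a TTS engine: assigns 0.3 s per word."""
    timings, t = [], 0.0
    for word in text.split():
        timings.append((word, t, t + 0.3))
        t += 0.3
    return SpeechTrack(audio_samples=[0.0] * int(t * 16000),
                       phoneme_timings=timings)

def render_broadcast(text: str) -> dict:
    """Drive the avatar so mouth shapes follow the speech timing."""
    track = synthesize_speech(text)
    duration = track.phoneme_timings[-1][2]
    return {"frames": round(duration * 25),  # 25 fps output video
            "duration_sec": duration}

clip = render_broadcast("Good evening and welcome to the news")
```

In a real deployment, the stubbed TTS stage would be replaced by a production speech engine, and the renderer would output actual video frames rather than a frame count.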
Meishe provides a set of services for creating and operating virtual hosts and virtual brand spokespersons for clients in broadcasting, television, internet, education, and retail, enabling more industries to adopt virtual humans.
The facial-expression-driven digital avatar is built on a self-developed 3D rendering engine, combining detection of 52 facial behavior types with 106 facial landmark points to reproduce the user's expressions. Users can capture face data with an ordinary RGB camera alone and drive the 3D figure in real time with lifelike expressions.
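Conceptually, this kind of expression driving converts distances between tracked landmarks into weights for expression blendshapes (the 52 facial behavior types mentioned above). The sketch below is a minimal illustration of that idea; the landmark names, indices, and calibration constants are invented for the example and are not Meishe's actual mapping.

```python
# Illustrative landmark-to-blendshape retargeting (hypothetical mapping,
# not Meishe's real algorithm). Raw pixel distances between facial
# landmarks are normalized into 0..1 blendshape weights.

def normalize(value, closed, open_):
    """Map a raw landmark distance onto a clamped 0..1 weight."""
    w = (value - closed) / (open_ - closed)
    return max(0.0, min(1.0, w))

def landmarks_to_blendshapes(landmarks):
    """landmarks: dict of named (x, y) points from an RGB-camera tracker."""
    mouth_gap = landmarks["lower_lip"][1] - landmarks["upper_lip"][1]
    eye_gap = landmarks["left_eye_bottom"][1] - landmarks["left_eye_top"][1]
    return {
        # Calibration values (30 px fully open mouth, 12 px open eye)
        # are placeholders chosen for the example.
        "jawOpen": normalize(mouth_gap, closed=0.0, open_=30.0),
        "eyeBlinkLeft": 1.0 - normalize(eye_gap, closed=0.0, open_=12.0),
    }

weights = landmarks_to_blendshapes({
    "upper_lip": (50, 100), "lower_lip": (50, 115),
    "left_eye_top": (30, 60), "left_eye_bottom": (30, 66),
})
```

A production tracker would derive many more weights from all 106 points and smooth them over time before feeding them to the 3D rig.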
The 3D graphics rendering engine developed by Meishe offers powerful rendering capabilities, supporting FBX and OBJ models, built-in geometries, a variety of lighting and shadow effects, PBR materials, and more. The engine faithfully reproduces details such as a character's skin, hair, and clothing, as well as the texture of metal, glass, and other materials.
To give AR effects more realistic dynamics, the R&D team uses a physics-based skeleton so that movements behave closer to the real world. Users can build 3D scenes through an XML scripting language, or design and create their own digital humans with the tools Meishe provides.
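An XML scene script of the kind described might look roughly like the fragment below. This is a hedged illustration only: the element and attribute names are hypothetical and are not Meishe's actual schema, though the referenced capabilities (FBX models, lighting, PBR materials) come from the engine description above.

```xml
<!-- Illustrative scene script; tag and attribute names are invented,
     not the real Meishe XML schema. -->
<scene>
  <model src="avatar.fbx"/>
  <light type="directional" intensity="1.2" color="#FFFFFF"/>
  <material type="pbr" roughness="0.4" metallic="0.1"/>
  <camera position="0,1.6,2.5" lookAt="0,1.5,0"/>
</scene>
```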
Meishe digital humans now cover a variety of business scenarios, supporting video shooting, broadcasting, video calling, and more. The R&D team has also optimized for different terminals, keeping the engine code streamlined and the library size small, so products run with higher efficiency and lower resource consumption.
Enterprises can add or remove functionality according to their needs. For example, by combining the video and audio editing capabilities of the Meishe SDK, users can meet more complex business requirements.