AI Technique Enhances Realism of Hair
Video games and animated films may benefit from neural networks, trained on hundreds of photos of various hairstyles, that can render realistic hair.
Hair is one of the most difficult elements of computer graphics, particularly for animated films and video games. A head of hair consists of tens of thousands of individual strands, each with its own shape, color, texture, and movement. Simulating realistic hair demands substantial processing power, memory, and complex algorithms and models.
New developments in AI, however, may make hair modeling simpler and more accurate. Artificial intelligence (AI) is a subfield of computer science that aims to build machines and systems capable of learning, reasoning, and other cognitive functions that typically require human intelligence. It is applied across many fields, including image processing, robotics, speech recognition, natural language processing, and computer vision.
Hair simulation is one such application of AI in computer graphics. Researchers from the University of Southern California, Pinscreen, and Microsoft have developed a deep learning-based technique that produces complete 3D hair geometry from single-view photos in real time. Deep learning, a kind of machine learning, uses neural networks to learn from large volumes of data. Neural networks consist of layers of artificial neurons that process and pass along information.
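The idea of layered artificial neurons can be sketched in a few lines. The sizes, weights, and activation below are purely illustrative and have nothing to do with the researchers' actual model; they only show how a layer transforms its input and feeds the next one.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common nonlinearity: negative values are clipped to zero.
    return np.maximum(0.0, x)

def layer(x, w, b):
    # One layer of artificial neurons: a weighted sum of the inputs
    # plus a bias, followed by the nonlinearity.
    return relu(x @ w + b)

# An illustrative 3-layer network mapping a 64-dim input to 16 outputs.
w1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 32)), np.zeros(32)
w3, b3 = rng.normal(size=(32, 16)), np.zeros(16)

x = rng.normal(size=64)  # stand-in for a small feature vector from an image
h = layer(layer(layer(x, w1, b1), w2, b2), w3, b3)
print(h.shape)
```

In a trained network, the weights would be adjusted from data rather than drawn at random; the structure, however, is the same.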
The researchers employed a generative adversarial network (GAN) to generate accurate hair models from input photos. A GAN pairs two neural networks: a generator that produces realistic outputs and a discriminator that tries to tell real data from generated data. As the two compete, both improve over time. After training their GAN on a sizable collection of 3D hair models, the researchers used it to create hair geometry from 2D photos. They also applied a neural rendering technique to simulate the hair with realistic lighting and shading effects.
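The adversarial game between generator and discriminator can be shown with a toy one-dimensional example. Here the "generator" is just an affine map of noise and the "discriminator" a logistic classifier; both are stand-ins chosen for brevity, not the researchers' networks, and all hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0        # toy generator: g(z) = a*z + b
w, c = 0.1, 0.0        # toy discriminator: D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64
target_mean = 3.0      # "real" data is drawn from N(target_mean, 1)

for _ in range(500):
    real = rng.normal(target_mean, 1.0, batch)
    z = rng.normal(size=batch)
    fake = a * z + b

    # Discriminator step: raise D on real samples, lower it on fakes.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: move fakes toward regions the discriminator
    # currently labels as real (non-saturating generator loss).
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(size=1000) + b
print(f"generated mean: {samples.mean():.2f} (target {target_mean})")
```

As the loop runs, the generator's offset drifts toward the real data's mean, because that is where the discriminator assigns high "real" scores; the same pressure, applied to networks producing hair geometry instead of scalars, is what drives a GAN toward realistic outputs.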
The system takes smartphone photographs as input and produces 3D hair models as output. The procedure has two steps: first, the system estimates the 2D orientation of each hair strand in the image; then, using a geometric model, it reconstructs the 3D shape of each strand. The system can handle different hairstyles, colors, lengths, and densities. It also copes with occlusions, such as when a person's hair is partially hidden by their clothing or face. On a typical GPU, it can produce a 3D hair model in under a second.
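The first step, estimating 2D strand orientation, can be illustrated with a classical stand-in: a structure tensor built from image gradients. The researchers' system uses a trained network and a geometric model for the 3D reconstruction, which this sketch does not attempt; here a synthetic image of vertical stripes plays the role of hair.

```python
import numpy as np

def dominant_orientation(img):
    # Finite-difference gradients along rows (y) and columns (x).
    gy, gx = np.gradient(img.astype(float))
    # Structure-tensor entries, averaged over the whole image.
    jxx, jyy, jxy = (gx * gx).mean(), (gy * gy).mean(), (gx * gy).mean()
    # Angle of the dominant gradient direction; strands run
    # perpendicular to the gradient, hence the 90-degree shift.
    grad_angle = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    return grad_angle + np.pi / 2.0   # strand direction in radians

# Synthetic "hair" image: vertical stripes, i.e. strands running up-down.
x = np.arange(64)
img = np.tile(np.sin(x * 0.5), (64, 1))

theta = dominant_orientation(img)
print(round(np.degrees(theta), 1))   # ≈ 90 degrees: vertical strands
```

A real pipeline would estimate an orientation per pixel rather than one global angle, and would then fit 3D strand curves consistent with that field; the per-pixel version is the same computation applied over local windows.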
The researchers assert that their approach is the first to generate 3D hair geometry from single-view photos in real time, and they claim it surpasses earlier systems in accuracy, speed, and visual quality. They hope the approach can be applied to a variety of applications, including face swapping, avatar creation, virtual try-on, and animation. They also plan to enhance it with additional data sources, such as depth maps and videos.
In August 2023, the researchers presented their findings at the ACM SIGGRAPH conference.