UK Researchers Develop Method to Make 3D Facial Recreations Using a Single 2D Facial Image
The art of using 2D images to recover 3D facial geometry is known as 3D facial reconstruction, which has helped give long-dead historical figures a face and even identify murder victims. Researchers at Kingston University and the University of Nottingham have been working on a project that tackles the many difficulties of 3D facial reconstruction, which remains a very challenging problem in vision and graphics research. The team of artificial intelligence experts has figured out a way to make 3D facial recreations using just a single 2D image of a person’s face.
While their work does not fully solve the problem, the researchers believe they are the first to approach it from this angle: using a Convolutional Neural Network (CNN) to map 3D coordinates directly from image pixels. University of Nottingham researchers Aaron Jackson and Adrian Bulat and Kingston University researchers Vasileios Argyriou and Georgios Tzimiropoulos recently published a paper on their work, titled “Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression.”
The abstract explains, “3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image. We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions.”
The four co-authors acknowledge that 3D facial reconstruction is a very difficult problem to solve, as you normally need multiple images of the same face, from a variety of angles, to map out each contour. But their results suggest that training a CNN on a dataset of 2D pictures paired with 3D facial models is a promising direction.
“Besides its simplicity, our approach works with totally unconstrained images downloaded from the web, including facial images of arbitrary poses, facial expressions and occlusions,” the team wrote in the paper.
During training, the system is shown facial images from many different angles, paired with their 3D models, which helps it learn to cope with challenges like varied facial expressions, non-uniform illumination, and large facial poses without needing dense correspondences between images. The trained CNN can guess and ‘fill in’ the non-visible parts of a face, so at test time it can take a single, previously unseen 2D image and quickly reconstruct the entire 3D facial geometry.
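The core idea, as the paper's title suggests, is that the network regresses a volumetric representation of the face: a 3D grid of voxels in which each cell is marked as inside or outside the facial geometry. Below is a minimal, hypothetical sketch (not the authors' code) of what that target representation and its training loss might look like. The function names, the 32-voxel grid size, and the use of plain binary cross-entropy are illustrative assumptions; the actual paper uses its own architecture, resolution, and loss formulation.

```python
import numpy as np

def voxelize(points, grid_size=32):
    """Turn 3D surface points (normalized to [0, 1]) into a binary
    occupancy volume -- the kind of regression target a volumetric
    CNN would be trained to predict. (Illustrative simplification.)"""
    vol = np.zeros((grid_size, grid_size, grid_size), dtype=np.float32)
    idx = np.clip((points * grid_size).astype(int), 0, grid_size - 1)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vol

def voxel_bce(pred, target, eps=1e-7):
    """Per-voxel binary cross-entropy averaged over the grid -- a
    typical loss for regressing voxel occupancy from an image."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred)
                          + (1 - target) * np.log(1 - pred)))

# Example: a small cloud of synthetic surface points becomes a sparse
# occupancy grid; an untrained network guessing 0.5 everywhere incurs
# a loss of ln(2) per voxel.
points = np.random.rand(500, 3)        # stand-in for a 3D face scan
target = voxelize(points)
pred = np.full(target.shape, 0.5)      # uniform "no idea" prediction
loss = voxel_bce(pred, target)
```

Because the target is a full volume rather than a depth map, the network can represent the self-occluded back and sides of the head, which is what lets the method reconstruct non-visible parts of the face from one image.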
The code and models for the project will soon be made available, and the researchers have already published an online demo tool for “3D Face Reconstruction from a Single Image.” There are a few sample images if you want to try it out, such as former US president Barack Obama, actor Elijah Wood, and famous chemist Marie Curie. You can also upload an image of your own face, which I decided to do to satisfy my curiosity.
The tool instructs users to upload a frontal image, so the face detector will be able to see it, and promises that within 20 minutes of uploading, the images and 3D reconstructions will be deleted. I found a picture of myself as a bridesmaid a few years ago and decided to try that, as I was looking at the camera straight on.
Once the picture uploads, it takes less than a minute to render the virtual model, which you can move around and even share on your social media accounts. According to the demo page, 300,378 faces have been uploaded since September 7th, 2017.
3D models of faces like this could have many applications in the digital world, such as VR social media, 3D avatars for video games, and even warping one’s face in an augmented reality video.
Discuss this and other 3D printing topics at 3DPrintBoard.com, or share your comments below. [Sources: Mashable, The Verge, Aaron Jackson]