The technique of using 2D images to recover 3D facial geometry is known as 3D facial reconstruction, and it has helped give long-dead historical figures a face and even identify murder victims. Researchers at Kingston University and the University of Nottingham have been working on a unique project tackling the many difficulties of 3D facial reconstruction, which remains a very challenging problem in Vision and Graphics research. The team of artificial intelligence experts has figured out a way to create 3D facial reconstructions using just a single 2D image of a person’s face.
While their work does not solve the problem entirely, the researchers believe they are the first to approach it from this angle: using a Convolutional Neural Network (CNN) to map 3D coordinates directly from image pixels. University of Nottingham researchers Aaron Jackson and Adrian Bulat and Kingston University researchers Vasileios Argyriou and Georgios Tzimiropoulos recently published a paper on their work, titled “Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression.”
The abstract explains, “3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image. We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions.”
As the four co-authors note, 3D facial reconstruction is a very difficult problem to solve: conventional methods typically need multiple images of the same face, taken from a variety of angles, to map out each contour. Their approach of training a CNN on a dataset of 2D pictures paired with 3D facial models sidesteps much of that complexity, and the results suggest the idea has real merit.
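To make the training idea concrete, here is a minimal sketch of the kind of objective such a network might be trained with. This is an illustration, not the authors’ exact code: it assumes the CNN outputs a 3D occupancy volume of per-voxel probabilities, which is compared against a ground-truth volume derived from an aligned 3D facial scan using a per-voxel binary cross-entropy loss (a common choice for occupancy-style targets).

```python
import numpy as np

def voxel_bce_loss(pred, target, eps=1e-7):
    """Per-voxel binary cross-entropy between a predicted occupancy
    volume (probabilities in (0, 1)) and a ground-truth 0/1 volume."""
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    return float(-np.mean(target * np.log(pred)
                          + (1 - target) * np.log(1 - pred)))

# Toy volumes standing in for a CNN output and an aligned 3D scan.
rng = np.random.default_rng(0)
target = (rng.random((16, 16, 16)) > 0.5).astype(float)
pred_good = target * 0.9 + 0.05   # predictions close to the truth
pred_bad = rng.random((16, 16, 16))  # random guess

# A prediction near the ground truth should score a much lower loss.
assert voxel_bce_loss(pred_good, target) < voxel_bce_loss(pred_bad, target)
```

Averaging the loss over every voxel is what lets the network learn the full facial volume, including regions hidden from the camera, rather than just the visible surface.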
“Besides its simplicity, our approach works with totally unconstrained images downloaded from the web, including facial images of arbitrary poses, facial expressions and occlusions,” the team wrote in the paper.
During training, the system is shown facial images captured from many angles and under varied conditions, which helps it overcome challenges like facial expressions, non-uniform illumination, and establishing dense correspondences across large facial poses. The trained CNN is able to guess and ‘fill in’ the non-visible parts of a face, and can ultimately use a single, previously unseen 2D image to quickly reconstruct the entire 3D facial geometry.
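Once the network has predicted a 3D volume, a surface still has to be pulled out of it for display. The paper describes extracting an isosurface from the regressed volume; the sketch below shows a simpler, hypothetical version of the same idea, collapsing a predicted occupancy volume into a per-pixel depth map by taking the front-most voxel above a threshold.

```python
import numpy as np

def volume_to_depth(volume, threshold=0.5):
    """Collapse a (D, H, W) occupancy volume into an (H, W) depth map:
    for each pixel, record the index of the front-most voxel whose
    probability clears the threshold; pixels with no hit get -1."""
    occupied = volume >= threshold
    depth = np.full(volume.shape[1:], -1, dtype=int)
    any_hit = occupied.any(axis=0)
    # argmax over a boolean axis returns the first True index
    depth[any_hit] = np.argmax(occupied, axis=0)[any_hit]
    return depth

# Toy volume: a square "bump" whose surface sits at depth 3.
vol = np.zeros((8, 8, 8))
vol[3:, 2:6, 2:6] = 1.0
d = volume_to_depth(vol)
assert d[4, 4] == 3      # inside the bump: surface found at depth 3
assert d[0, 0] == -1     # background pixel: no occupied voxel
```

A real pipeline would use a proper isosurface extraction such as marching cubes to get a full mesh (including the self-occluded parts of the volume), but the thresholding step above is the core of turning voxel probabilities into geometry.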
The code and models for the project will soon be released, and the researchers have made an online demo tool available for “3D Face Reconstruction from a Single Image.” There are a few images already there if you want to try, such as former US president Barack Obama, actor Elijah Wood, and famous chemist Marie Curie. You can also upload an image of your own face, which I decided to do to satisfy my curiosity.
The tool instructs users to upload a frontal image, so the face detector will be able to see it, and promises that within 20 minutes of uploading, the images and 3D reconstructions will be deleted. I found a picture of myself as a bridesmaid a few years ago and decided to try that, as I was looking at the camera straight on.
Once the picture uploads, it takes less than a minute to render the virtual model, which you can move around and even share on your social media accounts. According to the demo page, 300,378 faces have been uploaded since September 7th, 2017.
3D models of faces like this could have many applications in the digital world, such as VR social media, 3D avatars for video games, and even warping one’s face in an augmented reality video.
Discuss this and other 3D printing topics at 3DPrintBoard.com, or share your comments below. [Sources: Mashable, The Verge, Aaron Jackson]