Recovering 3D facial geometry from 2D images is known as 3D facial reconstruction, a technique that has helped give long-dead historical figures a face and even identify murder victims. Researchers at Kingston University and the University of Nottingham have been working on a project that tackles the many difficulties of 3D facial reconstruction, which remains a very challenging problem in Vision and Graphics research. The team of artificial intelligence experts has figured out a way to create 3D facial reconstructions using just a single 2D image of a person’s face.
While their work does not solve every challenge in the field, the researchers believe they are the first to approach the problem from this angle: using a Convolutional Neural Network (CNN) to map 3D coordinates directly from image pixels. University of Nottingham researchers Aaron Jackson and Adrian Bulat and Kingston University researchers Vasileios Argyriou and Georgios Tzimiropoulos recently published a paper on their work, titled “Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression.”
The abstract explains, “3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image. We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions.”
The four co-authors know that 3D facial reconstruction is a very difficult problem to solve, as you normally need multiple images of the same face, taken from a variety of angles, to map out each contour. Their idea of training a CNN on a dataset of 2D pictures paired with 3D facial models sidesteps that requirement entirely.
“Besides its simplicity, our approach works with totally unconstrained images downloaded from the web, including facial images of arbitrary poses, facial expressions and occlusions,” the team wrote in the paper.
During training, the system is shown facial images from many different angles, which helps it overcome challenges like facial expressions, non-uniform illumination, and establishing dense correspondences across large facial poses. Once trained, the CNN can estimate and ‘fill in’ the non-visible parts of a face, and can ultimately use a single, previously unseen 2D image to quickly reconstruct the entire 3D facial geometry.
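The core idea of “direct volumetric regression” is that the network outputs a 3D grid of per-voxel probabilities (192 x 192 x 200 in the paper) rather than fitting a face model, and the geometry is then recovered by thresholding that volume. The sketch below illustrates just that last step, under assumptions: the network output is faked with a solid ellipsoid of “face” voxels, and the grid is shrunk for illustration; a real pipeline would run marching cubes on the binary volume to extract a triangle mesh.

```python
import numpy as np

# Hypothetical stand-in for the CNN's output: a volume of per-voxel
# probabilities aligned with the input image (192x192x200 in the paper;
# reduced here for illustration). We fake it with a solid ellipsoid.
D, H, W = 48, 48, 50
z, y, x = np.mgrid[0:D, 0:H, 0:W]
inside = ((z - 24) / 20) ** 2 + ((y - 24) / 16) ** 2 + ((x - 25) / 22) ** 2 <= 1.0
volume = np.where(inside, 0.9, 0.1)  # placeholder network output

# Recover geometry by thresholding the probabilities: every voxel above
# 0.5 is treated as part of the face, including the self-occluded back
# of the head that was never visible in the input image.
occupied = volume > 0.5
points = np.argwhere(occupied)  # (N, 3) coordinates of occupied voxels
print(points.shape)
```

In practice the binary volume would be passed to an isosurface extractor such as `skimage.measure.marching_cubes` to produce the mesh shown in the online demo.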
The code and models for the project will soon be released, and the researchers have made an online demo tool available for “3D Face Reconstruction from a Single Image.” A few sample images are already there if you want to try it, including former US president Barack Obama, actor Elijah Wood, and famous chemist Marie Curie. You can also upload an image of your own face, which I decided to do to satisfy my curiosity.
The tool instructs users to upload a frontal image so the face detector can find the face, and promises that uploaded images and 3D reconstructions will be deleted within 20 minutes. I found a picture of myself as a bridesmaid from a few years ago and decided to try that, as I was looking straight at the camera.
Once the picture uploads, it takes less than a minute to render the virtual model, which you can move around and even share on your social media accounts. According to the demo page, 300,378 faces have been uploaded since September 7th, 2017.
3D models of faces like this could have many applications in the digital world, such as VR social media, 3D avatars for video games, and even warping one’s face in an augmented reality video.
Discuss this and other 3D printing topics at 3DPrintBoard.com, or share your comments below. [Sources: Mashable, The Verge, Aaron Jackson]