Using 2D images to recover 3D facial geometry is known as 3D facial reconstruction, a technique that has helped give long-dead historical figures a face and even identify murder victims. Researchers at Kingston University and the University of Nottingham have been tackling the many difficulties of 3D facial reconstruction, which remains a very challenging problem in Vision and Graphics research. The team of artificial intelligence experts has figured out a way to produce 3D facial reconstructions from just a single 2D image of a person’s face.
While their work does not fully solve the problem, the researchers believe they are the first to approach it from this angle: using a Convolutional Neural Network (CNN) to map image pixels directly to 3D coordinates. University of Nottingham researchers Aaron Jackson and Adrian Bulat and Kingston University researchers Vasileios Argyriou and Georgios Tzimiropoulos recently published a paper on their work, titled “Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression.”
The abstract explains, “3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image. We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions.”
The four co-authors know that 3D facial reconstruction is a very difficult problem to solve: you normally need multiple images of the same face, taken from a variety of angles, to map out each contour. Their idea of training a CNN on a dataset of 2D images paired with 3D facial models sidesteps that requirement.
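To make that training setup concrete, here is a minimal sketch of what such paired data might look like, assuming PyTorch; the class name, file layout, and precomputed occupancy grids are illustrative placeholders, not the authors’ actual data pipeline.

```python
# Minimal sketch of paired 2D-image / 3D-volume training data (assumed setup).
# Each example couples a facial photo with a voxelized 3D scan of the same face.
import numpy as np
import torch
from torch.utils.data import Dataset
from PIL import Image

class Face2Dto3DDataset(Dataset):
    """Pairs each 2D facial image with a voxelized 3D scan of the same face."""

    def __init__(self, image_paths, volume_paths, transform=None):
        assert len(image_paths) == len(volume_paths)
        self.image_paths = image_paths
        self.volume_paths = volume_paths  # precomputed binary occupancy grids (.npy)
        self.transform = transform        # e.g. torchvision transforms to a tensor

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = Image.open(self.image_paths[idx]).convert("RGB")
        if self.transform:
            image = self.transform(image)
        # Ground-truth volume: (depth, height, width) occupancy values in {0, 1}.
        volume = torch.from_numpy(np.load(self.volume_paths[idx])).float()
        return image, volume
```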
“Besides its simplicity, our approach works with totally unconstrained images downloaded from the web, including facial images of arbitrary poses, facial expressions and occlusions,” the team wrote in the paper.
During training, the system sees facial images paired with 3D scans across a wide range of poses, learning to cope with challenges like facial expressions, non-uniform illumination, and large pose variation without having to establish dense correspondences between images. Once trained, the CNN can infer, or ‘fill in,’ the non-visible parts of a face, and can use a single, previously unseen 2D image to quickly reconstruct the entire 3D facial geometry.
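To illustrate what “direct volumetric regression” means in practice, here is a minimal sketch in PyTorch of a network that maps a single 192 × 192 RGB image to a 192 × 192 × 200 voxel volume, the output resolution reported in the paper. The simple encoder-decoder below is an illustrative stand-in for the stacked hourglass architecture the authors actually use.

```python
# Minimal sketch of direct volumetric regression from a single 2D image.
# Assumes PyTorch; layer sizes are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class VolumetricRegressor(nn.Module):
    def __init__(self, depth=200):
        super().__init__()
        # Encoder: downsample the RGB image to a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to image resolution, with one output channel
        # per depth slice of the target volume.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, depth, 3, padding=1),
        )

    def forward(self, image):
        # image: (batch, 3, 192, 192) -> volume: (batch, 200, 192, 192),
        # where each output channel is the occupancy of one depth slice.
        return torch.sigmoid(self.decoder(self.encoder(image)))

# Training would regress the predicted volume against voxelized ground-truth
# scans with a per-voxel binary cross-entropy loss; at test time the facial
# surface can be recovered from the volume (e.g. via an isosurface extraction).
model = VolumetricRegressor()
pred_volume = model(torch.randn(1, 3, 192, 192))  # shape: (1, 200, 192, 192)
```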
The code and models for the project will soon be available here, and the researchers have made an online demo tool available for “3D Face Reconstruction from a Single Image.” A few sample images are already there if you want to try it out, such as former US president Barack Obama, actor Elijah Wood, and famous chemist Marie Curie. You can also upload an image of your own face, which I decided to do to satisfy my curiosity.
The tool instructs users to upload a frontal image so the face detector can find the face, and promises that the images and 3D reconstructions will be deleted within 20 minutes of uploading. I found a picture of myself as a bridesmaid from a few years ago and decided to try that, since I was looking straight at the camera.
Once the picture uploads, it takes less than a minute to render the virtual model, which you can rotate and even share on your social media accounts. According to the demo page, 300,378 faces have been uploaded since September 7th, 2017.
3D models of faces like this could have many applications in the digital world, such as VR social media, 3D avatars for video games, and even warping one’s face in an augmented reality video.
Discuss this and other 3D printing topics at 3DPrintBoard.com, or share your comments below.
[Sources: Mashable, The Verge, Aaron Jackson]