The art of using 2D images to recover 3D facial geometry is known as 3D facial reconstruction; it has helped give long-dead historical figures a face and even identify murder victims. Researchers at Kingston University and the University of Nottingham have been tackling the many difficulties of 3D facial reconstruction, which remains a very challenging problem in Vision and Graphics research. The team of artificial intelligence experts has figured out a way to create 3D facial reconstructions using just a single 2D image of a person’s face.
While their work does not fully solve the problem, the researchers believe they are the first to approach it from this angle: using a Convolutional Neural Network (CNN) to regress 3D coordinates directly from image pixels. University of Nottingham researchers Aaron Jackson and Adrian Bulat and Kingston University researchers Vasileios Argyriou and Georgios Tzimiropoulos recently published a paper on their work, titled “Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression.”
The abstract explains, “3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image. We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions.”
The four co-authors acknowledge that 3D facial reconstruction is a very difficult problem to solve, as you normally need multiple images of the same face, taken from a variety of angles, to map out each contour. Their results suggest, however, that training a CNN on a dataset of 2D pictures paired with 3D facial models has real merit.
“Besides its simplicity, our approach works with totally unconstrained images downloaded from the web, including facial images of arbitrary poses, facial expressions and occlusions,” the team wrote in the paper.
The system is trained on facial images spanning many angles, expressions, and lighting conditions, which lets it sidestep challenges like non-uniform illumination and establishing dense correspondences across large facial poses. The trained CNN can infer and ‘fill in’ the non-visible parts of a face, ultimately using a single, previously unseen 2D image to quickly reconstruct the entire 3D facial geometry.
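The paper’s core idea, direct volumetric regression, amounts to a CNN that takes one RGB image and outputs a stack of voxel occupancy “slices,” trained slice-by-slice against voxelized 3D scans. Here is a deliberately minimal PyTorch sketch of that idea, not the authors’ actual code (they use a stacked hourglass architecture and a 192×192×200 output volume); the layer sizes and names below are illustrative assumptions.

```python
# Illustrative sketch of direct volumetric CNN regression (hypothetical,
# not the authors' implementation).
import torch
import torch.nn as nn

class VolumetricRegressor(nn.Module):
    """Maps a single RGB face image to a voxel occupancy volume."""
    def __init__(self, depth=200):
        super().__init__()
        # Toy encoder: the real paper uses a stacked hourglass network.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder upsamples back to image resolution; the final layer
        # emits one channel per depth slice of the voxel grid.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, depth, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # Output shape: (batch, depth, H, W) -> logits for voxel occupancy.
        return self.decoder(self.encoder(x))

model = VolumetricRegressor()
img = torch.randn(1, 3, 192, 192)   # a single 2D facial image
logits = model(img)                  # (1, 200, 192, 192) voxel logits

# Training target would be a voxelized ground-truth 3D scan; a zero
# volume stands in here just to show the per-voxel cross-entropy loss.
target = torch.zeros_like(logits)
loss = nn.functional.binary_cross_entropy_with_logits(logits, target)
```

Because the network regresses the full volume, including slices behind the visible surface, it can hallucinate the occluded side of the face from training data alone, which is what the article means by the CNN “filling in” non-visible parts.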
The code and models for the project will soon be made available, and the researchers have made an online demo tool available for “3D Face Reconstruction from a Single Image.” There are a few example images already there if you want to try it, such as former US president Barack Obama, actor Elijah Wood, and famous chemist Marie Curie. You can also upload an image of your own face, which I decided to do to satisfy my curiosity.
The tool instructs users to upload a frontal image, so the face detector will be able to see it, and promises that within 20 minutes of uploading, the images and 3D reconstructions will be deleted. I found a picture of myself as a bridesmaid a few years ago and decided to try that, as I was looking at the camera straight on.
Once the picture uploads, it takes less than a minute to render the virtual model, which you can move around and even share on your social media accounts. According to the demo page, 300,378 faces have been uploaded since September 7th, 2017.
3D models of faces like this could have many applications in the digital world, such as VR social media, 3D avatars for video games, and even warping one’s face in an augmented reality video.
Discuss this and other 3D printing topics at 3DPrintBoard.com, or share your comments below.

[Sources: Mashable, The Verge, Aaron Jackson]