Researchers Develop Dense Facial Tissue Depth Mapping Method for 3D Data Point Collection


Example FTDM of a full hollowed face shell, rather than a cropped face shell.

Historians, archaeologists, and anthropologists have used 3D technology many times over the years to pull back the cover on history. Analyzing the thickness of facial tissue has important implications for forensic anthropology, especially for creating facial approximations of unidentified human remains. In traditional tissue depth studies, data were collected with a variety of manual sampling methods from a limited number of facial points across multiple world populations, before ultrasound arrived with its ability to capture in vivo data from subjects in an upright position. Now, 3D data from magnetic resonance imaging (MRI) and computed tomography (CT) scans are the most prevalent, but they’re still not foolproof.

Regardless of the specific imaging modality, the association between bone and skin landmarks in these point-centric data collection methods is often unclear and inaccurate, even with the help of modern medical imaging. According to a group of researchers from Virginia Commonwealth University, which has experience using 3D printing for anthropological applications, and Arizona State University, this will not improve until we can quantify and establish consistent positional relationships between skin and bone landmarks, regardless of the direction of measurement.

The researchers published a paper, titled “Open-source Tools for Dense Facial Tissue Depth Mapping (FTDM) of Computed Tomography Models,” that explains their method for dense facial tissue depth mapping (FTDM), which eliminates several sources of error in manual point collection methods and produces quantitative data for skin and bone in a simple, interactive visual format.

Summary of steps for FTDM: a) hollowing and cleaning; b) mapping to pronasale and cropping face; c) tissue depth mapping and colorization; d) visualization options: quality contour, incremental maps.

The abstract reads, “This paper describes tools for the generation of dense facial tissue depth maps (FTDMs) using de-identified head CT scans of modern Americans from the public repository, The Cancer Imaging Archives (TCIA), and the open-source program Meshlab. CT scans of 43 females and 63 males from TCIA were segmented and converted to 3D skull and face models using Mimics and exported as stereolithography (STL) files. All subsequent processing steps were performed in Meshlab. Heads were transformed to a common orientation and coordinate system using the coordinates of nasion, left orbitale, and left and right porion. Dense FTDMs were generated on hollowed, cropped face shells using the Hausdorff sampling filter. Two new point clouds consisting of the 3D coordinates for both skull and face were colorized on an RGB scale from 0.0 (red) to 40.0 mm (blue) depth values and exported as polygon file format (PLY) models with tissue depth values saved in the “vertex quality” field. FTDMs were also split into 1.0 mm increments to facilitate viewing of common depths across all faces. In total, 112 FTDMs were generated for 106 individuals. Minimum depth values ranged from 1.2 mm to 3.4 mm, indicating a common range of starting depths for most faces regardless of weight, as well as common locations for these values over the nasal bones, lateral orbital margins, and forehead superior to the supraorbital border. Maximum depths were found in the buccal region and neck, excluding the nose. Individuals with multiple scans at visibly different weights presented the greatest differences within larger depth areas such as the cheeks and neck, with little to no difference in the thinnest areas. A few individuals with minimum tissue depths at the lateral orbital margins and thicker tissues over the nasal bones (> 3.0 mm) suggested the potential influence of nasal bone morphology on tissue depths. 
This study produced visual quantitative representations of the face and skull for forensic facial approximation research and practice that can be further analyzed or interacted with using free software. The presented tools can be applied to pre-existing CT scans, traditional or cone-beam, adult or subadult individuals, with or without landmarks, and regardless of head orientation, for forensic applications as well as for studies of facial variation and facial growth.”
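The Hausdorff-based sampling step the abstract describes can be pictured as a nearest-point search from skin to bone. Below is a minimal Python sketch, assuming the depth at each face (skin) vertex is the distance to its closest skull point, and using a simple red-to-blue ramp in place of a full RGB color scale; all names and the toy data are illustrative, not the authors’ toolset.

```python
import numpy as np

def tissue_depths(face_pts, skull_pts):
    """For each skin point, return (depth, nearest skull point)."""
    # Brute-force nearest neighbour; fine for a sketch, use a k-d tree at scale.
    diffs = face_pts[:, None, :] - skull_pts[None, :, :]   # (F, S, 3)
    dists = np.linalg.norm(diffs, axis=2)                  # (F, S)
    idx = dists.argmin(axis=1)
    return dists[np.arange(len(face_pts)), idx], skull_pts[idx]

def depth_to_rgb(depth, d_min=0.0, d_max=40.0):
    """Map depths on [0, 40] mm to red (thinnest) -> blue (thickest)."""
    t = np.clip((depth - d_min) / (d_max - d_min), 0.0, 1.0)
    return np.stack([1.0 - t, np.zeros_like(t), t], axis=-1)

# Toy example: two skin points, three skull points (coordinates in mm).
skull = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
face  = np.array([[0.0, 0.0, 2.5], [10.0, 0.0, 40.0]])

depths, nearest = tissue_depths(face, skull)
colors = depth_to_rgb(depths)
print(depths)   # [ 2.5 40. ]
```

Returning the nearest skull point alongside each depth mirrors the paper’s point-cloud output of matched bone and skin coordinates.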

Three individuals with scans at visibly different weights. Top row contains the “heavier” face. Histograms show the distribution of facial depths from 0.0 to 40.0 mm depth, colorized from red (thinnest) to blue (thickest).

This method produces replicable face and skull points based on geometric relationships, along with several readable data outputs. Using FTDM, the researchers determined that five people in the dataset had multiple CT scans showing visibly different weights.
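One of those readable outputs, per the abstract, is a PLY point cloud with per-vertex color and the tissue depth stored in the “vertex quality” property. Here is a minimal sketch of such a writer, emitting ASCII PLY by hand; real pipelines might use Meshlab or a mesh library instead, and the exact field layout here is an assumption based on the PLY format, not the authors’ code.

```python
import numpy as np

def write_ftdm_ply(path, points, rgb, depths):
    """points: (N,3) float mm, rgb: (N,3) uint8, depths: (N,) float mm."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("property float quality\n")  # tissue depth lives here
        f.write("end_header\n")
        for p, c, d in zip(points, rgb, depths):
            f.write(f"{p[0]} {p[1]} {p[2]} {c[0]} {c[1]} {c[2]} {d}\n")

# Toy data: two skin points colorized near-red (thin) and blue (thick).
pts  = np.array([[0.0, 0.0, 2.5], [10.0, 0.0, 40.0]])
cols = np.array([[234, 0, 21], [0, 0, 255]], dtype=np.uint8)
write_ftdm_ply("ftdm_demo.ply", pts, cols, np.array([2.5, 40.0]))
```

Storing the depth in the quality property keeps it attached to each vertex, so viewers such as Meshlab can re-colorize or threshold the cloud later.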

“This method produces 3D coordinates for bone and skin points, regardless of orientation of the CT scan, utilizing freely available software and can be applied to any 3D head models (as long as the skull and face models are in correct anatomical orientation to each other; models generated from the same CT scan will be),” the researchers wrote. “The publication of this method and toolset can facilitate collaborations between forensic researchers and practitioners towards the development of a standardized, accessible reference database for craniofacial identification.”
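The orientation independence comes from re-expressing every model in a landmark-defined frame, built from the four landmarks the abstract names (nasion, left orbitale, and left and right porion). A hedged sketch, assuming a Frankfurt-horizontal-style coordinate system through the porions and left orbitale; the authors’ exact convention may differ, and all names and toy coordinates below are hypothetical.

```python
import numpy as np

def alignment_transform(l_orbitale, l_porion, r_porion):
    """Return (R, origin) so that aligned = (points - origin) @ R.T."""
    origin = (l_porion + r_porion) / 2.0   # midpoint between the ears
    x = r_porion - l_porion                # left-right (mediolateral) axis
    x /= np.linalg.norm(x)
    # The Frankfurt plane contains both porions and the left orbitale;
    # its normal approximates the inferior-superior axis.
    z = np.cross(x, l_orbitale - l_porion)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                     # posterior-anterior axis
    return np.stack([x, y, z]), origin

def align(points, R, origin):
    return (points - origin) @ R.T

# Toy landmarks (mm) on a tilted "head":
l_po = np.array([-60.0, 0.0,  0.0])
r_po = np.array([ 60.0, 0.0,  0.0])
l_or = np.array([-30.0, 70.0, -10.0])

R, origin = alignment_transform(l_or, l_po, r_po)
aligned_orbitale = align(l_or, R, origin)
print(aligned_orbitale)  # z-component is ~0: orbitale lies in the new horizontal plane
```

Applying the same transform to the skull and face models of one scan preserves their anatomical relationship while standardizing orientation across individuals.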

Orientation and coordinate system.

This study is part of a larger effort to use publicly available, de-identified head CT scans from The Cancer Imaging Archive (TCIA) to investigate the relationship between bone and skin for applications in craniofacial identification.

“Regardless of the methods applied to collect facial tissue depths, there have always been intrinsic limitations to both the accuracy and reproducibility of the data, mostly because of the multiple opportunities for observer error,” the researchers wrote.

“Other groups have cautioned that CT collection of FSTD data has a number of potential sources of error and that as many of those as possible should be minimized (Caple et al. 2016). The method presented here eliminates several sources of error including the effect of head position and the manual identification of landmarks.”

This dense FTDM generation method will allow researchers to quickly generate foundational data for head CT scans that can supplement other methods of facial approximation.

“In comparison to other efforts to produce dense FTDMs, the workflow outlined here utilizes accessible, open-source tools to generate and interact with FTDMs, and produces coordinates of the bone points that are closest to the sampled skin points. Such mapping allows for a more comprehensive approach to viewing tissue depth contours within one individual and between individuals and will potentially reveal more informative tissue depth regions for facial approximation methods,” the researchers concluded.

This may open new areas of research in facial reconstruction, help craniomaxillofacial (CMF) and other surgical disciplines, and aid missing person identification worldwide.

Co-authors of the paper are Terrie Simmons-Ehrhardt, Catyana Falsetti, Anthony B. Falsetti, and Christopher J. Ehrhardt.

