Historians, archaeologists, and anthropologists have repeatedly used 3D technology to shed new light on the past. The analysis of facial tissue thickness has important implications for forensic anthropology, particularly for creating facial approximations of unidentified human remains. Traditional tissue depth studies collected data with a variety of manual sampling methods from a limited number of facial points across multiple world populations, before ultrasound made it possible to capture in vivo data with subjects in an upright position. Today, 3D data from MRI and computed tomography (CT) scans are the most prevalent sources, but they are still not foolproof.
Regardless of the imaging modality, the association between bone and skin landmarks in these point-based data collection methods is often unclear and inaccurate, even with the help of modern medical imaging. According to a group of researchers from Virginia Commonwealth University, which has experience using 3D printing for anthropological applications, and Arizona State University, this will not improve until consistent positional relationships between skin and bone landmarks can be quantified and established, regardless of the direction of measurement.
The researchers published a paper, titled “Open-source Tools for Dense Facial Tissue Depth Mapping (FTDM) of Computed Tomography Models,” that explains their method for dense facial tissue depth mapping (FTDM), which eliminates several sources of error found in manual point collection methods and produces quantitative data for skin and bone in a simple, interactive visual format.
The abstract reads, “This paper describes tools for the generation of dense facial tissue depth maps (FTDMs) using de-identified head CT scans of modern Americans from the public repository, The Cancer Imaging Archives (TCIA), and the open-source program Meshlab. CT scans of 43 females and 63 males from TCIA were segmented and converted to 3D skull and face models using Mimics and exported as stereolithography (STL) files. All subsequent processing steps were performed in Meshlab. Heads were transformed to a common orientation and coordinate system using the coordinates of nasion, left orbitale, and left and right porion. Dense FTDMs were generated on hollowed, cropped face shells using the Hausdorff sampling filter. Two new point clouds consisting of the 3D coordinates for both skull and face were colorized on an RGB scale from 0.0 (red) to 40.0 mm (blue) depth values and exported as polygon file format (PLY) models with tissue depth values saved in the “vertex quality” field. FTDMs were also split into 1.0 mm increments to facilitate viewing of common depths across all faces. In total, 112 FTDMs were generated for 106 individuals. Minimum depth values ranged from 1.2 mm to 3.4 mm, indicating a common range of starting depths for most faces regardless of weight, as well as common locations for these values over the nasal bones, lateral orbital margins, and forehead superior to the supraorbital border. Maximum depths were found in the buccal region and neck, excluding the nose. Individuals with multiple scans at visibly different weights presented the greatest differences within larger depth areas such as the cheeks and neck, with little to no difference in the thinnest areas. A few individuals with minimum tissue depths at the lateral orbital margins and thicker tissues over the nasal bones (> 3.0 mm) suggested the potential influence of nasal bone morphology on tissue depths. 
This study produced visual quantitative representations of the face and skull for forensic facial approximation research and practice that can be further analyzed or interacted with using free software. The presented tools can be applied to pre-existing CT scans, traditional or cone-beam, adult or subadult individuals, with or without landmarks, and regardless of head orientation, for forensic applications as well as for studies of facial variation and facial growth.”
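The core of the workflow described in the abstract is a per-point skin-to-bone distance measurement, colorized from red (0.0 mm) to blue (40.0 mm) and exported as a PLY point cloud with the depth saved as vertex quality. Below is a minimal sketch of that idea, assuming the face and skull models are already simple Nx3 vertex arrays in a common coordinate system. The paper's actual workflow uses Meshlab's Hausdorff sampling filter; this brute-force nearest-neighbor search only approximates the same measurement, and all names here are illustrative, not the authors' code.

```python
# Hypothetical FTDM sketch, not the authors' implementation.
import numpy as np

def depth_to_rgb(depth_mm, d_min=0.0, d_max=40.0):
    """Map tissue depth to RGB: 0.0 mm -> red, 40.0 mm -> blue."""
    t = np.clip((np.asarray(depth_mm) - d_min) / (d_max - d_min), 0.0, 1.0)
    r = np.round(255 * (1.0 - t)).astype(int)
    b = np.round(255 * t).astype(int)
    g = np.zeros_like(r)
    return np.stack([r, g, b], axis=-1)

def facial_tissue_depths(face_vertices, skull_vertices):
    """For each skin point, return the distance to, and coordinates of,
    the closest bone point."""
    diff = face_vertices[:, None, :] - skull_vertices[None, :, :]
    dists = np.linalg.norm(diff, axis=2)       # pairwise distance matrix
    idx = dists.argmin(axis=1)                 # closest bone point per skin point
    depths = dists[np.arange(len(face_vertices)), idx]
    return depths, skull_vertices[idx]

def write_ply(path, vertices, colors, quality):
    """Write an ASCII PLY point cloud with depth in a per-vertex quality field."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(vertices)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("property float quality\nend_header\n")
        for v, c, q in zip(vertices, colors, quality):
            f.write(f"{v[0]:.3f} {v[1]:.3f} {v[2]:.3f} {c[0]} {c[1]} {c[2]} {q:.3f}\n")

# Tiny synthetic example: two "bone" points and two "skin" points.
skull = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
face = np.array([[0.0, 0.0, 2.5], [10.0, 0.0, 35.0]])
depths, nearest_bone = facial_tissue_depths(face, skull)
write_ply("ftdm_demo.ply", face, depth_to_rgb(depths), depths)
```

For real head models with hundreds of thousands of vertices, the brute-force distance matrix would be replaced by a spatial index, which is essentially what Meshlab's Hausdorff filter provides.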
The method produced replicable, geometry-based face and skull points and several readable data outputs. Using the FTDMs, the researchers determined that five people in the dataset had multiple CT scans taken at different weights.
“This method produces 3D coordinates for bone and skin points, regardless of orientation of the CT scan, utilizing freely available software and can be applied to any 3D head models (as long as the skull and face models are in correct anatomical orientation to each other; models generated from the same CT scan will be),” the researchers wrote. “The publication of this method and toolset can facilitate collaborations between forensic researchers and practitioners towards the development of a standardized, accessible reference database for craniofacial identification.”
This study is part of a larger effort to use publicly available, de-identified head CT scans from The Cancer Imaging Archive (TCIA) to investigate the relationship between bone and skin for applications in craniofacial identification.
“Regardless of the methods applied to collect facial tissue depths, there have always been intrinsic limitations to both the accuracy and reproducibility of the data, mostly because of the multiple opportunities for observer error,” the researchers wrote.
“Other groups have cautioned that CT collection of FSTD data has a number of potential sources of error and that as many of those as possible should be minimized (Caple et al. 2016). The method presented here eliminates several sources of error including the effect of head position and the manual identification of landmarks.”
This dense FTDM generation method will allow researchers to quickly generate foundational data for head CT scans that can supplement other methods of facial approximation.
“In comparison to other efforts to produce dense FTDMs, the workflow outlined here utilizes accessible, open-source tools to generate and interact with FTDMs, and produces coordinates of the bone points that are closest to the sampled skin points. Such mapping allows for a more comprehensive approach to viewing tissue depth contours within one individual and between individuals and will potentially reveal more informative tissue depth regions for facial approximation methods,” the researchers concluded.
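The abstract also notes that the FTDMs were split into 1.0 mm increments to make common depths easier to view across all faces. A minimal sketch of that grouping step, assuming the per-point depths are already available as an array (the function name and data are illustrative):

```python
# Hypothetical increment-splitting sketch, not the authors' code.
import numpy as np

def split_into_increments(depths, step=1.0):
    """Group point indices by depth increment (0-1 mm, 1-2 mm, ...)."""
    bins = np.floor(np.asarray(depths) / step).astype(int)
    return {b: np.flatnonzero(bins == b) for b in np.unique(bins)}

# Example depths in mm, echoing the paper's reported range of minimums.
depths = np.array([0.4, 1.2, 1.8, 3.4, 35.0])
layers = split_into_increments(depths)
# layers[1] holds the indices of points with depths in the 1-2 mm band.
```

Each band could then be displayed as its own colorized layer, which is how common shallow regions (nasal bones, lateral orbital margins, forehead) become visible across many faces at once.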
This may open new areas of research in facial reconstruction, could benefit craniomaxillofacial (CMF) and other surgical disciplines, and could aid in the identification of missing persons worldwide.
Co-authors of the paper are Terrie Simmons-Ehrhardt, Catyana Falsetti, Anthony B. Falsetti, and Christopher J. Ehrhardt.
Discuss this and other 3D printing topics at 3DPrintBoard.com or share your thoughts below.