Historians, archaeologists, and anthropologists have often used 3D technology to shed new light on the past. Analyzing the thickness of facial tissue has important implications for forensic anthropology, especially for creating facial approximations of unidentified human remains. Traditional tissue depth studies collected data with a variety of manual sampling methods from a limited number of facial points across multiple world populations; ultrasound later made it possible to capture in vivo data with subjects in an upright position. Today, 3D data from MRI and computed tomography (CT) scans are the most prevalent, but these methods are still not foolproof.
Regardless of the imaging modality, the association between bone and skin landmarks in these point-centric data collection methods is often unclear and inaccurate, even with the help of modern medical imaging. According to a group of researchers from Virginia Commonwealth University, which has experience using 3D printing for anthropological applications, and Arizona State University, this will not improve until we can quantify and establish consistent positional relationships between skin and bone landmarks, regardless of the direction of measurement.
The researchers published a paper, titled “Open-source Tools for Dense Facial Tissue Depth Mapping (FTDM) of Computed Tomography Models,” that explains their method for dense facial tissue depth mapping (FTDM), which eliminates several sources of error inherent in manual point collection methods and produces quantitative data for skin and bone in a simple, interactive visual format.
The abstract reads, “This paper describes tools for the generation of dense facial tissue depth maps (FTDMs) using de-identified head CT scans of modern Americans from the public repository, The Cancer Imaging Archives (TCIA), and the open-source program Meshlab. CT scans of 43 females and 63 males from TCIA were segmented and converted to 3D skull and face models using Mimics and exported as stereolithography (STL) files. All subsequent processing steps were performed in Meshlab. Heads were transformed to a common orientation and coordinate system using the coordinates of nasion, left orbitale, and left and right porion. Dense FTDMs were generated on hollowed, cropped face shells using the Hausdorff sampling filter. Two new point clouds consisting of the 3D coordinates for both skull and face were colorized on an RGB scale from 0.0 (red) to 40.0 mm (blue) depth values and exported as polygon file format (PLY) models with tissue depth values saved in the “vertex quality” field. FTDMs were also split into 1.0 mm increments to facilitate viewing of common depths across all faces. In total, 112 FTDMs were generated for 106 individuals. Minimum depth values ranged from 1.2 mm to 3.4 mm, indicating a common range of starting depths for most faces regardless of weight, as well as common locations for these values over the nasal bones, lateral orbital margins, and forehead superior to the supraorbital border. Maximum depths were found in the buccal region and neck, excluding the nose. Individuals with multiple scans at visibly different weights presented the greatest differences within larger depth areas such as the cheeks and neck, with little to no difference in the thinnest areas. A few individuals with minimum tissue depths at the lateral orbital margins and thicker tissues over the nasal bones (> 3.0 mm) suggested the potential influence of nasal bone morphology on tissue depths. 
This study produced visual quantitative representations of the face and skull for forensic facial approximation research and practice that can be further analyzed or interacted with using free software. The presented tools can be applied to pre-existing CT scans, traditional or cone-beam, adult or subadult individuals, with or without landmarks, and regardless of head orientation, for forensic applications as well as for studies of facial variation and facial growth.”
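The core sampling step the abstract describes, generating dense depth maps with Meshlab's Hausdorff sampling filter, amounts to measuring, for every skin point, the distance to the closest point on the skull. Below is a minimal sketch of that idea using numpy and scipy; it is not the authors' Meshlab workflow, and the point-cloud variables are hypothetical stand-ins for vertices extracted from the STL models.

```python
# Simplified analogue of the Hausdorff sampling step: for each vertex on
# the face (skin) model, find the closest vertex on the skull model and
# record that distance as the facial tissue depth.
import numpy as np
from scipy.spatial import cKDTree

def tissue_depths(face_pts: np.ndarray, skull_pts: np.ndarray):
    """Return per-skin-point depth (mm) and the matched skull coordinates.

    face_pts, skull_pts: (N, 3) and (M, 3) vertex arrays, assumed to share
    one anatomical coordinate system (as models derived from the same CT
    scan do).
    """
    tree = cKDTree(skull_pts)           # spatial index over skull vertices
    depths, idx = tree.query(face_pts)  # nearest skull vertex per skin vertex
    return depths, skull_pts[idx]

# Toy example: a flat "skull" grid at z = 0 and two skin points above it.
skull = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
face = np.array([[2.0, 2.0, 2.5], [1.0, 3.0, 4.0]])
d, matched = tissue_depths(face, skull)
print(d)  # depths in mm: 2.5 and 4.0
```

Keeping the matched skull coordinates alongside the depths mirrors the paper's output of paired 3D coordinates for both skin and bone, rather than depths alone.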
This method produced replicable face and skull points based on geometric relationships, along with several readable data outputs. The researchers also used FTDM to examine five people in the dataset who had multiple CT scans taken at visibly different weights.
“This method produces 3D coordinates for bone and skin points, regardless of orientation of the CT scan, utilizing freely available software and can be applied to any 3D head models (as long as the skull and face models are in correct anatomical orientation to each other; models generated from the same CT scan will be),” the researchers wrote. “The publication of this method and toolset can facilitate collaborations between forensic researchers and practitioners towards the development of a standardized, accessible reference database for craniofacial identification.”
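The abstract notes that heads were first transformed to a common orientation using nasion, left orbitale, and the two porions. One way to build such a transform is to derive an orthonormal frame from those landmarks; the sketch below assumes a particular convention (x axis between the porions, the porion–orbitale "Frankfurt" plane as horizontal, nasion as origin) purely for illustration, and is not the authors' published procedure.

```python
# Sketch: derive a rigid transform into a common head frame from the four
# landmarks the abstract lists. Axis conventions here are an assumption.
import numpy as np

def head_frame(nasion, l_orbitale, l_porion, r_porion):
    """Return a 4x4 transform taking world coordinates into the head frame."""
    x = r_porion - l_porion
    x /= np.linalg.norm(x)
    # Normal of the plane through both porions and the left orbitale.
    z = np.cross(x, l_orbitale - l_porion)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                  # completes a right-handed frame
    R = np.vstack([x, y, z])            # rows are the new axes
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = -R @ nasion              # nasion maps to the origin
    return T

# Hypothetical landmark coordinates (mm) for demonstration only.
landmarks = dict(nasion=np.array([0.0, 10.0, 5.0]),
                 l_orbitale=np.array([-3.0, 8.0, 2.0]),
                 l_porion=np.array([-7.0, 0.0, 2.0]),
                 r_porion=np.array([7.0, 0.0, 2.0]))
T = head_frame(**landmarks)
p = np.append(landmarks["nasion"], 1.0)
print(T @ p)  # nasion lands at the origin of the head frame
```

Because the skull and face models come from the same CT scan, applying one such transform to both preserves their anatomical relationship, which is the property the researchers emphasize.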
This study is part of a larger effort to use publicly available, de-identified head CT scans from The Cancer Imaging Archive (TCIA) to investigate the relationship between bone and skin for applications in craniofacial identification.
“Regardless of the methods applied to collect facial tissue depths, there have always been intrinsic limitations to both the accuracy and reproducibility of the data, mostly because of the multiple opportunities for observer error,” the researchers wrote.
“Other groups have cautioned that CT collection of FSTD data has a number of potential sources of error and that as many of those as possible should be minimized (Caple et al. 2016). The method presented here eliminates several sources of error including the effect of head position and the manual identification of landmarks.”
This dense FTDM generation method will allow researchers to quickly generate foundational data for head CT scans that can supplement other methods of facial approximation.
“In comparison to other efforts to produce dense FTDMs, the workflow outlined here utilizes accessible, open-source tools to generate and interact with FTDMs, and produces coordinates of the bone points that are closest to the sampled skin points. Such mapping allows for a more comprehensive approach to viewing tissue depth contours within one individual and between individuals and will potentially reveal more informative tissue depth regions for facial approximation methods,” the researchers concluded.
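The interactive output described in the abstract colorizes each point on an RGB scale from 0.0 mm (red) to 40.0 mm (blue). A minimal sketch of such a mapping is below, assuming a simple linear red-to-blue ramp; the exact color ramp the authors' Meshlab workflow applies may differ.

```python
# Map tissue depths (mm) to 8-bit RGB: red at the minimum depth, blue at
# the maximum, clipped outside the 0-40 mm range the paper describes.
import numpy as np

def depth_to_rgb(depths, d_min=0.0, d_max=40.0):
    """Return an (N, 3) uint8 array of colors for the given depths."""
    t = np.clip((np.asarray(depths, float) - d_min) / (d_max - d_min), 0, 1)
    rgb = np.zeros((t.size, 3), dtype=np.uint8)
    rgb[:, 0] = np.round((1 - t) * 255)   # red fades out with depth
    rgb[:, 2] = np.round(t * 255)         # blue fades in with depth
    return rgb

print(depth_to_rgb([0.0, 20.0, 40.0]))
# rows: (255, 0, 0) at 0 mm and (0, 0, 255) at 40 mm, with a mix between
```

Colors like these can be stored per vertex in a PLY file, which also supports the "vertex quality" field the researchers used to carry the raw depth values alongside the visualization.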
This may open new areas of research in facial reconstruction, assist craniomaxillofacial (CMF) and other surgical disciplines, and support missing person identification worldwide.
Co-authors of the paper are Terrie Simmons-Ehrhardt, Catyana Falsetti, Anthony B. Falsetti, and Christopher J. Ehrhardt.
Discuss this and other 3D printing topics at 3DPrintBoard.com or share your thoughts below.