3DPrint.com | The Voice of 3D Printing / Additive Manufacturing

Incredible Advancement in the 3D Printing of Hair on Figurines is Presented by Disney Research

There seems to be a growing trend, fueled by several companies, that I have yet to take part in but certainly look forward to trying sometime in the future: the 3D printing of ‘mini-me’s’. Several companies are scanning individuals and then printing out miniature versions of them (in rare cases, full-size versions), usually in a sandstone material. Recently, Amazon even got in on a version of this service by teaming with Mixee Labs to let customers personalize bobblehead dolls in their likeness. Although scanning wasn’t involved in Amazon’s case, the partnership has brought the technology further into the mainstream.

Many of these printed ‘mini-me’s’ are fairly accurate, to an extent. The one aspect of these prints that printers and scanners typically have a hard time capturing is a person’s hair: not only the individual hairs sticking up in varying directions, but also the actual patterns of the hair on a person’s head.

With this said, it appears that Disney researchers, including Derek Bradley, associate research scientist at Disney Research Zurich; Dr. Thabo Beeler, also of Disney Research Zurich; and Jose I. Echevarria, a Ph.D. student who interned at the Disney lab, and Dr. Diego Gutierrez, both of the University of Zaragoza, Spain, have developed a method to deal with this common issue, enabling them to capture stylized hair within a 3D scan and represent that style in a printed figurine. The key to their approach is “a novel multi-view stylization algorithm, which extends feature-preserving color filtering from 2D images to irregular manifolds in 3D, and introduces abstract geometric details that are coherent with the color stylization,” states the research paper.

Smoothing out of the hair surface, while keeping the same basic flow and color

The researchers argue that a person’s hairstyle is a defining characteristic, one which is almost as important as a person’s face. Until now, however, no scanning method has been able to capture the flow, colors, and patterns of complicated hairstyles, usually leaving printed figurines without these defining characteristics.

Inspired by artistic sculptures of hair in the real world and in CG models, their method takes a vastly complicated feature of a person, their hairstyle, and simplifies it while still capturing its main defining features. The process starts with data collection: multiple images of a person’s head taken with SLR cameras. A multi-view stereo reconstruction algorithm then creates a coarse proxy surface of the person’s hair from these images. The proxy surface is then used to generate key details about the color and shape of the person’s hair. It doesn’t capture characteristics fiber-by-fiber, but instead captures the general color and a simplified version of the texture and flow, which is easily printable.
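To give a flavor of what “feature-preserving color filtering” means, here is a minimal sketch of a bilateral filter, a standard feature-preserving smoother on 2D images (the Disney method extends this idea to irregular 3D surfaces; the code below is an illustrative assumption, not the paper’s actual algorithm). Each pixel is replaced by a weighted average of its neighbors, where neighbors with very different colors get near-zero weight, so strand boundaries survive while fine noise is smoothed away.

```python
import numpy as np

def bilateral_filter(image, spatial_sigma=2.0, color_sigma=0.2, radius=4):
    """Feature-preserving smoothing of an (H, W, C) float image in [0, 1].

    Averages nearby pixels, but down-weights neighbors whose color differs
    strongly from the center pixel, so sharp color edges are preserved.
    """
    h, w, c = image.shape
    out = np.zeros_like(image)
    # Spatial Gaussian weights over the (2*radius+1)^2 neighborhood.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial_w = np.exp(-(xs**2 + ys**2) / (2 * spatial_sigma**2))
    # Pad by repeating edge pixels so border pixels have full neighborhoods.
    padded = np.pad(image, ((radius, radius), (radius, radius), (0, 0)),
                    mode="edge")
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Color (range) weights: penalize neighbors with different color.
            diff = patch - image[y, x]
            color_w = np.exp(-np.sum(diff**2, axis=2) / (2 * color_sigma**2))
            weights = spatial_w * color_w
            out[y, x] = (np.sum(patch * weights[..., None], axis=(0, 1))
                         / weights.sum())
    return out
```

Run on a noisy image with a sharp two-tone boundary (a crude stand-in for a hair/skin edge), the filter reduces noise within each flat region while keeping the boundary contrast essentially intact, which is the behavior the stylization step relies on.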

One woman, two defining hairstyles. (Left to right) Photo, 3D model, 3D print

As you can see from the images in this article, the researchers have very successfully mimicked the various hairstyles of individuals, as well as facial hair and the fur on animals or clothing. They have also scanned and printed two individuals with four different hairstyles each, showing that the hairstyles remain the defining features of each individual, in the photos as well as in the printed figurines.

The researchers have concluded that their “method generates hair as a closed-manifold surface, yet contains the structural and color elements stylized in a way that captures the defining characteristics of the hair-style”.

Certainly this research could go a long way toward the commercialization of 3D printing, as well as other applications. Let’s hear your thoughts on these new methods in the 3D printed hair texture forum thread on 3DPB.com. Check out the video provided by Disney Research, further expanding on the capabilities of this technology.

[Source: Disney Research]