There are multiple types of 3D scanners, employing a number of different scanning technologies, but each of those technologies has limitations. Shiny surfaces and black surfaces are notoriously difficult to scan, and it’s often a challenge to capture the entire surface of a complex object. A scanner is a fundamentally visual tool; it can only scan what it can “see.” But a group of researchers from AICFVE Beijing Film Academy, Tel Aviv University, Shandong University, the University of British Columbia, and Ben-Gurion University of the Negev have developed a new method of 3D scanning that captures the full shape of an object, using water.
The research is documented in a paper entitled “Dip Transform for 3D Shape Reconstruction,” which is available online. In the paper, the researchers describe how they created what they call a dip scanner, which literally dips an object into a bath of water. The object is repeatedly dipped in different orientations, and the volume of water displaced is measured each time; together, these measurements provide an accurate representation of the object’s entire shape.
“The key feature of our method is that it employs fluid displacements as the shape sensor,” the researchers explain. “Unlike optical sensors, the liquid has no line-of-sight requirements, it penetrates cavities and hidden parts of the object, as well as transparent and glossy materials, thus bypassing all visibility and optical limitations of conventional scanning devices. Our new scanning approach is implemented using a dipping robot arm and a bath of water, via which it measures the water elevation. We show results of reconstructing complex 3D shapes and evaluate the quality of the reconstruction with respect to the number of dips.”
The technique is based on Archimedes’ principle of fluid displacement, which states that “the volume of fluid displaced is equal to the volume of the part that was submerged.” Dipping an object into fluid along an axis allows the scientists to measure the liquid volume displacement and transform it into a series of thin volume slices of the shape along that axis. Dipping the object in multiple orientations creates different volume displacements, which the researchers then combine into what they call a “dip transform.”
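The slicing idea can be illustrated with a short sketch (a simplified assumption, not the authors’ implementation): if the object is represented as a binary voxel grid, then dipping it to successive depths along an axis displaces a volume equal to the occupied voxels at or above that depth, and binning voxels by depth yields the thin slice volumes the article describes. The function name `dip_slice_volumes` is hypothetical.

```python
import numpy as np

def dip_slice_volumes(voxels, direction, voxel_size=1.0):
    """Volumes of thin slices of a binary voxel grid along a dip direction.

    Each slice volume is the displacement measured between two consecutive
    unit-depth readings as the object is lowered along `direction`.
    """
    occupied = np.argwhere(voxels)                   # (N, 3) occupied coords
    depth = occupied @ np.asarray(direction, float)  # depth of each voxel
    bins = np.arange(depth.min(), depth.max() + 2)   # one bin per depth unit
    counts, _ = np.histogram(depth, bins=bins)
    return counts * voxel_size**3                    # thin slice volumes

# Example: a 2x2x2 solid cube dipped straight down the z axis
cube = np.ones((2, 2, 2), dtype=bool)
print(dip_slice_volumes(cube, (0.0, 0.0, 1.0)))  # two slices of volume 4 each
```

Summing the slice volumes recovers the total volume of the object (here 8), mirroring the fact that a full dip displaces the object’s entire volume.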
“3D reconstruction proceeds by measuring the volumes of oblique thin slices of the shape. We refer to the volumes of these slices as samples, and collect such samples along different angles,” the researchers continue. “This, in turn, equips us with the ability to generate enough data to recover the geometry of the input shape. Since our technique is based on using volume samples that are generated by liquid accessing the object, we can acquire occluded and inaccessible parts in a relatively straightforward fashion.”
The method the researchers use to reconstruct 3D shapes from the dip transform is related to computed tomography; however, computed tomography requires large, expensive equipment and has to be performed in a specialized environment with multiple safety measures in place. The dip transform method poses no safety issues and is inexpensive: the fully automated system consists of a robotic arm that lowers the object into the water and two rotating arms that set the object’s orientation.
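The tomography connection can be sketched in a toy 2D analogue (an assumption for illustration, not the authors’ solver): each dip sample says “the occupied area in this thin band along the dip axis equals v,” which is a linear constraint on the unknown occupancy grid, just as each ray integral is in CT. Stacking bands from several dip axes gives a linear system that can be solved by least squares and thresholded. The helper names `band_matrix` and `reconstruct` are hypothetical.

```python
import numpy as np

def band_matrix(grid_shape, direction):
    """One indicator row per unit-depth band along an integer dip axis."""
    ys, xs = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    depth = (ys * direction[0] + xs * direction[1]).ravel()
    return np.stack([(depth == d).astype(float)
                     for d in range(depth.min(), depth.max() + 1)])

def reconstruct(grid_shape, directions, samples):
    """CT-style linear inversion of band-volume samples from several axes."""
    A = np.vstack([band_matrix(grid_shape, d) for d in directions])
    b = np.concatenate(samples)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(grid_shape) > 0.5

# Forward-simulate dip samples of an L-shaped plate, then invert them.
true = np.array([[1, 0, 0],
                 [1, 0, 0],
                 [1, 1, 1]], dtype=float)
directions = [(1, 0), (0, 1), (1, 1), (1, -1)]
samples = [band_matrix(true.shape, d) @ true.ravel() for d in directions]
print(reconstruct(true.shape, directions, samples).astype(int))
```

With four dip axes the toy system is well determined and the L shape is recovered exactly; with fewer axes the inversion becomes ambiguous, which reflects the paper’s point that reconstruction quality depends on the number of dips.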
The researchers demonstrated the technique using objects with a variety of complex shapes: a person riding an elephant, a statue of three cats facing inwards, and an open cube with several columns in the interior. The same objects were 3D scanned using a structured light scanner, and the differences are noticeable. The images produced by the structured light scanner are incomplete, with the parts of the object that are hidden from immediate view absent in the reconstruction. The images produced using dip transform, on the other hand, are nearly perfect reconstructions of the original objects.
The method isn’t perfect; it’s not especially fast, for one thing, and it can also be thrown off by objects with concave features such as cups or enclosed vessels, where trapped air can skew the displacement measurement. The researchers plan to address these issues in future work.
“The success of this volumetric technique leads us to speculate that we may be able to consider a multi-modal acquisition setup where the reconstruction combines the information gained by the various modalities,” the researchers conclude. “For example, we may be able to utilize the strengths of laser scanners for sampling the shape-visible exterior and the volumetric and occluded information reconstructed from the dip transform.”
Authors of the paper include Kfir Aberman, Oren Katzir, Qiang Zhou, Zegang Luo, Andrei Sharf, Chen Greif, Baoquan Chen, and Daniel Cohen-Or. The research is being presented this week at SIGGRAPH 2017. Discuss in the Dip Transform forum at 3DPB.com.