
Virginia Tech Researchers Introduce 3D Printing Device that Creates Using Crowdsourced Data


There are many people who still think that 3D printing must surely resemble the replicator machines from Star Trek – just tell the system what you want (tea, Earl Grey, hot!) and boom, presto, there you have it, in a matter of seconds.

[Image: The famous Star Trek Replicator]

But while we’re not quite there, researchers and companies are continuing to improve upon the technology’s speed and power to make our world even closer to that of the Star Trek universe.

At the recent ACM SIGGRAPH 2019, held in Los Angeles, California, an interdisciplinary team of Virginia Tech faculty and students displayed Source Form, a standalone 3D printing device that creates objects from crowdsourced data, based on a user’s request – not an existing 3D model.

The project was mainly supported by the university’s Institute for Creativity, Arts and Technology (ICAT), the VT School of Visual Arts, and two awards (#1254287 and #1546985) from the National Science Foundation (NSF). Source Form works independently of subjective user input, using only crowdsourced data as input.

The researchers – Sam Blanchard, Jia-Bin Huang, Christopher B. Williams, Viswanath Meenakshisundaram, Joseph Kubalak, and Sanket Lokegaonkar – published a paper, titled “Source Form: An Automated Crowdsourced Object Generator,” that explained their innovative Source Form project.


The abstract reads: “Source Form is a stand-alone device capable of collecting crowdsourced images of a user-defined object, stitching together available visual data (e.g., photos tagged with search term) through photogrammetry, creating watertight models from the resulting point cloud and 3D printing a physical form. This device works completely independent of subjective user input resulting in two possible outcomes:

1. Produce iterative versions of a specific object (e.g., the Statue of Liberty) increasing in detail and accuracy over time as the collective dataset (e.g., uploaded images of the statue) grows

2. Produce democratized versions of common objects (e.g., an apple) by aggregating a spectrum of tagged image results

This project demonstrates that an increase in readily available image data closes the gap between physical and digital perceptions of form through time.”
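The abstract's four stages – collect tagged images, reconstruct via photogrammetry, build a watertight mesh, and print – chain together as a pipeline. The sketch below is a toy stand-in for that dataflow; every function name and return value here is illustrative, not from the Source Form codebase:

```python
from typing import List, Tuple

Point3D = Tuple[float, float, float]

def collect_images(search_term: str) -> List[str]:
    # Stand-in for crawling tagged-image search results from the web.
    return [f"{search_term}_{i}.jpg" for i in range(3)]

def photogrammetry(images: List[str]) -> List[Point3D]:
    # Stand-in for structure from motion + multi-view stereo, which would
    # estimate camera poses and a dense point cloud from the photos.
    return [(0.0, 0.0, float(i)) for i in range(len(images))]

def make_watertight_mesh(cloud: List[Point3D]) -> dict:
    # Stand-in for surface reconstruction over the point cloud.
    return {"vertices": cloud, "faces": []}

def print_object(mesh: dict) -> str:
    # Stand-in for slicing and sending layers to the embedded printer.
    return f"printed mesh with {len(mesh['vertices'])} vertices"

def source_form(search_term: str) -> str:
    # The end-to-end flow: query -> images -> point cloud -> mesh -> print.
    return print_object(make_watertight_mesh(photogrammetry(collect_images(search_term))))

print(source_form("statue_of_liberty"))  # printed mesh with 3 vertices
```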

[Image: Sam Blanchard’s Source Form site]

The device addresses the major limitation keeping 3D printing from approaching the Star Trek Replicator: it creates an object model on demand from available crowdsourced images, rather than requiring an existing 3D model of the object. The model is then automatically sent to an embedded mask projection vat photopolymerization 3D printer for “immediate fabrication and retrieval.” The machine’s top screen shows the retrieved crowdsourced photos and the photogrammetry reconstruction, while the bottom screen shows the final object’s layers during the print job.
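In mask projection vat photopolymerization, each layer is cured by flashing a binary image into the resin vat. A minimal sketch of how a watertight model becomes those per-layer masks, using an implicit sphere in place of a reconstructed mesh (the grid size and layer height here are invented for illustration, not Source Form's actual printer parameters):

```python
import numpy as np

def slice_sphere_masks(radius=1.0, grid=64, layer_height=0.1):
    """Toy slicer: binary projection masks for a sphere of given radius.

    Each mask marks the (x, y) pixels inside the sphere at that layer's
    z height -- the image a mask-projection printer would project onto
    the vat to cure one layer of resin.
    """
    xs = np.linspace(-radius, radius, grid)
    X, Y = np.meshgrid(xs, xs)
    masks = []
    z = -radius + layer_height / 2
    while z < radius:
        r2 = radius**2 - z**2            # squared radius of this cross-section
        masks.append((X**2 + Y**2) <= r2)
        z += layer_height
    return masks

layers = slice_sphere_masks()
```

A middle layer cures far more pixels than the first one near the sphere's pole, which is exactly the cross-section profile the projected masks trace out layer by layer.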

“This project demonstrates that an increase in readily available image data closes the gap between physical and digital perceptions of form through time,” the paper continues. “For example, when Source Form is asked to print the Statue of Liberty today and then print again 6 months from now, the later result will be more accurate and detailed than the previous version.”

[Image: Mask projection vat photopolymerization 3D printing]

This means that Source Form’s image database will continue to expand as people keep photographing the Statue of Liberty and uploading the images to blogs and social media. The system gathers a completely new dataset for each print, so the resulting objects will continually evolve and improve in quality. The prints that Source Form fabricates over time are cataloged and displayed linearly, allowing users to see the growth and change for themselves.

Blanchard’s Source Form page explains it further:

“The users will provide inputs regarding a specific object he/she would like to construct. We have developed software that automatically crawls the text-based image retrieval results. The initially filtered images are then used for reverse image search (content-based image retrieval) to obtain more and relevant images capturing the same object on the Internet. Once Images are collected the feature-based correspondence matching step involves extracting local image patches from images and matching them across the image collection. By establishing the correspondences among images, we are able to reliably remove images that do not contain the target object.

“Given the feature correspondences, we apply structure from motion and multi-view stereo techniques for jointly estimating the camera poses and the 3D structure of the points. We then construct a 3D mesh model based on the extracted 3D point cloud. The 3D mesh models are then refined and processed for the 3D printing.”
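The correspondence-based filtering described above can be illustrated with a toy sketch. Here plain feature sets stand in for the local patch descriptors a real pipeline would match with nearest-neighbour search (e.g. SIFT features); the image names and the `min_matches` threshold are invented for illustration:

```python
def filter_by_correspondence(image_features, min_matches=2):
    """Keep images that share enough features with the rest of the set.

    image_features: dict mapping image name -> set of feature ids.
    An image with too few correspondences to the other images likely
    does not contain the target object, so it is dropped.
    """
    kept = {}
    for name, feats in image_features.items():
        # Pool the features of every *other* image in the collection.
        others = set().union(*(f for n, f in image_features.items() if n != name))
        if len(feats & others) >= min_matches:
            kept[name] = feats
    return kept

photos = {
    "liberty_1.jpg": {"torch", "crown", "tablet"},
    "liberty_2.jpg": {"torch", "crown", "pedestal"},
    "unrelated.jpg": {"dog", "frisbee"},
}
print(sorted(filter_by_correspondence(photos)))  # ['liberty_1.jpg', 'liberty_2.jpg']
```

The unrelated photo shares no features with the rest of the collection and is removed, mirroring how the Source Form pipeline discards crawled images that do not show the target object.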
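The structure-from-motion step the quote mentions rests on triangulation: given estimated camera poses and a feature correspondence, recover the 3D point both views observe. A minimal sketch of linear (DLT) triangulation from two views, with made-up camera matrices and a made-up scene point:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one scene point from two views.

    P1, P2: 3x4 camera projection matrices (pose estimates from SfM).
    x1, x2: matched image coordinates of the same point in each view.
    Stacks the constraints (u * P[2] - P[0]) . X = 0 and solves A X = 0
    for the homogeneous point via SVD.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # null-space direction of A
    return X[:3] / X[3]         # back to Euclidean coordinates

# Two normalized cameras one unit apart along x, observing the same point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 3.0])
x1 = (point[0] / point[2], point[1] / point[2])
x2 = ((point[0] - 1.0) / point[2], point[1] / point[2])
recovered = triangulate(P1, P2, x1, x2)
```

With noise-free projections the recovered point matches the original exactly; over thousands of matched features, these triangulated points form the cloud that the surface reconstruction step turns into a printable mesh.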

I cannot stress enough how cool this is – instead of hunting around for the right STL file or 3D model to create a specific object, you can just tell the machine what you want it to make. And while we may not quite be at the point where we can emulate Captain Picard’s request for his favorite beverage, many people are working to get us closer than ever.

If you’re interested in learning more about the Source Form project’s automated imaging/photogrammetry and 3D printing pipelines, you can read the group’s paper here.

What do you think? Discuss this story and other 3D printing topics at 3DPrintBoard.com or share your thoughts in the comments below. 


