Robots and humans don’t see things the same way. That’s not exactly a shocking statement, but some robots operate so efficiently and smoothly that it’s easy to forget their vision isn’t as good as ours. The main difference is that robots see things very literally: they see exactly what is in front of them, nothing more, while humans automatically fill in the parts of objects that are hidden from view.
That’s why robots haven’t quite evolved to the point where they’re helping us every day in our homes. For example, a sink full of dirty dishes would baffle a robot. While a human can look at such a mess and know exactly what shape a plate is, and a cup, and so on, even when they’re obscured by other dishes, a robot sees only a partial shape and registers it as an unknown. But a new algorithm created by a Duke University graduate student is allowing robots to perceive objects more the way humans do.
PhD candidate and Intelligent Robot Lab (IRL) member Ben Burchfiel and his thesis advisor and IRL director, Dr. George Konidaris, now an assistant professor of computer science at Brown University, have developed a technology that allows robots to look at objects from a single angle and identify them, even if they’ve never seen them before and can only see part of them. Burchfiel and Konidaris built the algorithm on a database of 3D scans of about 4,000 common household objects, such as furniture and appliances. Each 3D scan was converted into tens of thousands of voxels for easier processing.
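To make that voxelization step concrete, here is a minimal sketch of turning a 3D scan, represented as a point cloud, into a fixed-size occupancy grid. It is an illustration only, assuming NumPy; the grid size and function names are ours, not the researchers’.

```python
# Hypothetical sketch of the voxelization step: an (N, 3) point cloud
# is binned into a fixed occupancy grid. Grid size and names are
# illustrative, not taken from the paper.
import numpy as np

def voxelize(points: np.ndarray, grid: int = 30) -> np.ndarray:
    """Map an (N, 3) point cloud onto a grid x grid x grid occupancy volume."""
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins
    spans[spans == 0] = 1.0                        # guard against flat axes
    # Scale each point into [0, grid) and truncate to a cell index.
    idx = ((points - mins) / spans * (grid - 1e-6)).astype(int)
    volume = np.zeros((grid, grid, grid), dtype=np.float32)
    volume[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0  # mark occupied cells
    return volume

# Even a modest 30 x 30 x 30 grid holds 27,000 cells per scan,
# i.e. the "tens of thousands" of voxels mentioned above.
```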
The algorithm learned different categories of objects by using a variation on a technique called probabilistic principal component analysis. It searched through examples of each object and learned how they varied and how they stayed the same. So when it sees something it’s never seen before, like an unusual coffee cup, it knows the general characteristics a coffee cup has and can recognize it as such, the same way a human would.
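As a rough illustration of that learning step, the sketch below uses classic PCA as a stand-in for the probabilistic variant the researchers describe: for each class, the flattened voxel grids are summarized by a mean shape (what stays the same) plus a handful of principal directions (how examples vary). The function name and component count are our assumptions.

```python
# Illustration only: classic PCA stands in for the probabilistic PCA
# variant described in the paper. Each object class is summarized by a
# mean voxel vector plus a few principal directions of variation.
import numpy as np

def learn_class_subspace(volumes: np.ndarray, n_components: int = 20):
    """volumes: (num_examples, grid**3) flattened voxel grids for one class."""
    mean = volumes.mean(axis=0)           # the part all examples share
    centered = volumes - mean
    # The top right-singular vectors capture how examples differ.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]             # (n_components, grid**3)
    return mean, basis
```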
To test the algorithm, Burchfiel and Konidaris fed it 908 new 3D examples of 10 kinds of household items, viewed only from the top. The algorithm guessed what the objects were, and what their overall three-dimensional shapes should be, about 75 percent of the time, compared to just over 50 percent for the best existing alternative. It also recognized objects that were rotated in different ways, which competing approaches have not been able to do.
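To see how a single overhead view could yield both a label and a full shape, here is a hedged sketch of one plausible recipe: fit each class’s learned subspace to just the visible voxels by least squares, pick the class that explains them best, and read the completed shape off the reconstruction. The paper’s actual inference is Bayesian; this simplification and all names are ours.

```python
# Hypothetical recipe: a least-squares projection stands in for the
# Bayesian inference the paper performs. 'classes' maps a class name
# to the (mean, basis) pair learned above.
import numpy as np

def classify_and_complete(partial: np.ndarray, observed: np.ndarray, classes: dict):
    """partial: flattened voxel grid; observed: boolean mask of the
    voxels the robot can actually see from its viewpoint."""
    best = None
    for name, (mean, basis) in classes.items():
        # Fit subspace coefficients using only the visible voxels.
        a = basis[:, observed].T                   # (n_visible, n_components)
        b = partial[observed] - mean[observed]
        coeffs, *_ = np.linalg.lstsq(a, b, rcond=None)
        full = mean + coeffs @ basis               # completed 3D shape
        err = np.sum((full[observed] - partial[observed]) ** 2)
        if best is None or err < best[0]:
            best = (err, name, full)
    _, label, completed = best
    return label, completed                        # class guess plus full shape
```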

Left: what the robot is shown; center: the robot’s guess of the entire object; right: the actual object.
The technology isn’t perfect yet, though. The algorithm is still baffled by objects that are similar in shape: it might mistake a table for a dresser when viewed from above, for example.
“Overall, we make a mistake a little less than 25 percent of the time, and the best alternative makes a mistake almost half the time, so it is a big improvement,” Burchfiel said. “But it still isn’t ready to move into your house. You don’t want it putting a pillow in the dishwasher.”
Burchfiel and Konidaris are working to improve and scale up the algorithm, though; they want robots to be able to distinguish among thousands of objects at a time. The goal is to take robots out of the predictable, ordered environment of a laboratory or assembly line and into the messy, random environment of a typical home, and have them function just as well.
“That has the potential to be invaluable in a lot of robotic applications,” Burchfiel said.
The research has been documented in a paper titled “Bayesian Eigenobjects: A Unified Framework for 3D Robot Perception.” The work was supported in part by the Defense Advanced Research Projects Agency (DARPA). Discuss in the Robot Vision forum at 3DPB.com.
[Source: Duke University / Images: Burchfiel and Konidaris]