How Artificial Intelligence Can Be Fooled with 3D Printing…and Stickers


[Image: Anish Athalye/Labsix]

Artificial intelligence is highly promising in many areas, but it turns out to still be frighteningly easy to fool or confuse. Recently, a group of researchers tested an image recognition algorithm by 3D printing a turtle and other objects. The turtle looked like a turtle to human eyes, but the algorithm couldn't handle the 3D printed replica – it declared the turtle to be a rifle. Not exactly close, nor was its identification of a 3D printed baseball as an espresso.

This was, in fact, the reaction the scientists were hoping for. Using subtle alterations imperceptible to the human eye, they changed the objects in ways that cause the artificial intelligence to misidentify them. The technique is referred to as an adversarial attack: a way to fool AI without the manipulation being evident to humans.

[Image: K. Eykholt et al]

An AI algorithm identifying a turtle as a rifle or a baseball as coffee may seem innocuous and amusing, but the implications are worrisome. Dawn Song, a computer scientist at the University of California, Berkeley, and colleagues performed an experiment last year in which they put stickers on a stop sign to confuse a common type of image recognition artificial intelligence. It worked – the AI identified the stop sign as a 45 mile per hour speed limit sign. Imagine all the stickers and graffiti you’ve seen on street signs before, and then imagine autonomous cars trying to identify them – it could be disastrous.

Song also mentioned a trick in which an image of Hello Kitty was placed in an image recognition AI's view of a street scene. The cars in the scene simply disappeared from the AI's output. If a world full of autonomous cars truly is coming, as many say, catastrophic accidents could be caused by hackers tampering with image recognition systems. That's why scientists perform experiments like these – to bring the weaknesses of these systems to light so they can be improved.

The scariest thing is the way that hackers can potentially alter images to fool AI without humans being aware. In what is called a white box attack, hackers can see the AI's gradients, which describe how a slight change to an input image or sound shifts the output in a particular direction. Knowing the gradients, hackers can calculate how to alter the input, a little bit at a time, until it produces the wrong output – causing the AI to call a turtle a rifle, for example. The change is dramatic from the AI's point of view, yet so slight that humans can't see it.
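To make the idea concrete, here is a minimal sketch of a single-step, white-box gradient attack in Python (using PyTorch). It is a simplified illustration of the general technique, not the researchers' actual method; `model`, `image`, and `true_label` are assumed placeholders for a trained classifier, an input image tensor, and its correct label.

```python
# Minimal sketch of a white-box gradient attack (FGSM-style).
# Assumes `model` is a pretrained PyTorch classifier, `image` is a batched
# image tensor in [0, 1], and `true_label` is the correct class index.
import torch
import torch.nn.functional as F

def gradient_attack(model, image, true_label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that increases the loss,
    so the model's output changes while the image looks the same to us."""
    image = image.clone().detach().requires_grad_(True)

    # Forward pass: the model's prediction for the clean image.
    logits = model(image)
    loss = F.cross_entropy(logits, true_label)

    # Backward pass: the gradient says how each pixel affects the loss.
    loss.backward()

    # Step every pixel by a tiny amount (epsilon) along its gradient sign.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Repeating small steps like this, rather than taking one large one, is what lets the perturbation stay below the threshold of human perception while still flipping the model's answer.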

Artificial intelligence developers are working on techniques to combat these attacks. For example, one technique embeds image compression as a step in image recognition AI, which adds jaggedness to the gradients and makes them harder to exploit. But these methods can be cleverly outwitted, too. In another recent paper, a team of scientists analyzed nine image recognition algorithms presented at a recent conference. Seven of the algorithms relied on this kind of gradient obfuscation, and the team was able to break all seven, for instance by sidestepping the image compression step. Each case took no more than a couple of days to crack.
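In rough outline, one way attackers sidestep such defenses is to run the non-differentiable preprocessing on the forward pass but substitute a simple approximation for it when computing gradients. The sketch below illustrates that idea in Python (PyTorch); the `quantize` function is a hypothetical stand-in for a compression step, not code from any of the papers discussed.

```python
# Rough sketch of sidestepping a "gradient obfuscation" defense.
# `quantize` is a hypothetical placeholder for a non-differentiable
# preprocessing defense such as image compression.
import torch

def quantize(x, levels=32):
    # Crude stand-in for lossy compression: rounding destroys useful gradients.
    return (x * levels).round() / levels

class DefenseWithIdentityGrad(torch.autograd.Function):
    """Apply the defense on the forward pass, but pretend it is the identity
    function on the backward pass, so a gradient attack still gets a signal."""

    @staticmethod
    def forward(ctx, x):
        return quantize(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

# An attacker would then run a gradient attack (like the earlier sketch) on
# model(DefenseWithIdentityGrad.apply(image)) instead of model(image).
```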

Another possibility is to train an algorithm under mathematically verifiable constraints, so that it provably cannot be fooled within certain limits. Song is concerned about the real world limitations of these defenses, however.

“There’s no mathematical definition of what a pedestrian is, so how can we prove that the self-driving car won’t run into a pedestrian?” she said. “You cannot!”

These examples are unnerving reminders of how imperfect artificial intelligence is, and how easily corrupted it can be. Should we abandon the idea of autonomous cars altogether, then? Of course not – but the fact remains that with all technology, there is a constant race between hackers and those who try to safeguard against hacking. The fact that these weaknesses have been exposed means that developers and programmers can better prepare for such attacks, and possibly make their AI algorithms more foolproof by the time we actually see the streets filling up with autonomous cars.

If you’d like to read the full paper about 3D printed items being used to fool AI, titled “Synthesizing Robust Adversarial Examples,” you can access it here. Authors include Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok.

Discuss this and other 3D printing topics at 3DPrintBoard.com or share your thoughts below. 

[Source: Science Magazine]
