
The Only Thing I Actually Use AI for Currently



I’d like to apologize in advance for this artificial intelligence (AI) story. I try to avoid talking about AI, machine learning (ML), and all that jazz. Many of the stories coming out about AI are silly, overhyped, and inaccurate. However, amid the morass of AI hype, there are some nuggets of usefulness.

A story about a Barcelona-based architecture firm reveals that it prototypes in Midjourney. I recently developed some earrings with designer Roman Reiner. He quickly turned to the DALL-E text-to-image model to iterate ideas, using it to check whether he and I were thinking along the same lines and, later, to push out some completely off-the-wall designs for me to give feedback on.

Weeks ago, my girlfriend took the Footwearology 3D printing shoe course and used AI to quickly discover and work out shoe ideas. I was bouncing design ideas off a friend and, to explain what I meant, used Wonder AI, a simple AI image-generation tool. I’ve also used Wonder to generate images that conveyed the look and feel of a brand to a graphic designer.

A few weeks ago, I came up with a filtered pipe idea. To explain it quickly in an email, I made a crude AI image rather than writing a long description. Several friends have played around with using ChatGPT to create STL files, which they can then print. That kind of functionality is bound to get better over time.
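None of those friends shared their actual scripts, so as an illustration only: the ASCII STL format they’re asking ChatGPT to emit is plain text and simple enough to hand-roll. The Python sketch below writes a valid tetrahedron STL; the shape, 10 mm dimensions, and file name are my own stand-ins, not anyone’s real part.

```python
# Minimal sketch: write a valid ASCII STL by hand, the same format a
# ChatGPT-to-STL experiment asks the model to emit. The tetrahedron and
# 10 mm edge length are illustrative assumptions, not a real design.

def facet(a, b, c):
    """Return one ASCII STL facet for triangle (a, b, c) with its unit normal."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    length = max(sum(x * x for x in n) ** 0.5, 1e-12)
    n = [x / length for x in n]
    lines = [f"  facet normal {n[0]:.6f} {n[1]:.6f} {n[2]:.6f}", "    outer loop"]
    lines += [f"      vertex {p[0]:.6f} {p[1]:.6f} {p[2]:.6f}" for p in (a, b, c)]
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

# Four faces of a 10 mm tetrahedron, wound so the normals point outward.
A, B, C, D = (0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)
triangles = [(A, C, B), (A, B, D), (A, D, C), (B, C, D)]

with open("tetrahedron.stl", "w") as f:
    f.write("solid tetra\n")
    f.write("\n".join(facet(*t) for t in triangles))
    f.write("\nendsolid tetra\n")
```

The resulting tetrahedron.stl opens in any slicer, which hints at why these experiments work at all: because the format is human-readable text, a language model can produce it directly.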

It would be very powerful if we could iterate much closer to our thoughts. More accurate tweaking of designs would make a big difference as well. Right now, I find iterating clunky: I can’t adjust prompts precisely enough to get closer to what I want. That accuracy step is super important, because if I could just get near what I want, I’d print the file every time and use it to continue my process. Instead, I currently use AI to gather ideas quickly and explore many different options, then get someone to 3D model the most promising one more precisely. Going straight to that step with a more directed, accurate process would really speed things up. For a relatively simple object like an earring, my process essentially starts with AI image generation tools.

Currently, I’ll usually walk around and brainstorm ideas on my own. Then I’ll imagine a person wearing or using the object and winnow the many ideas down to a handful. Next, I’ll go to Wonder AI and write a prompt, adjusting it a few times to get something close to the idea. Sometimes this works remarkably well, but often I can’t get anywhere near what I want and get frustrated. Finally, I’ll have around five little two-sentence explanations of product variations, some with a Wonder image attached and some without. The reason I use Wonder is that it’s on my phone, and it’s just a bit faster than the desktop-based tools I’ve also tried.
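Wonder has no public API that I know of, so purely as a hypothetical sketch, here is that same adjust-and-retry loop written against OpenAI’s DALL-E image endpoint (the model mentioned earlier); the prompts, model choice, and refinement steps are all my own assumptions, not the actual prompts used.

```python
# Hypothetical sketch of the tweak-a-prompt-and-regenerate cycle, using
# OpenAI's DALL-E image API as a stand-in for Wonder AI. Assumes the
# openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

base = "a lightweight 3D-printable earring, organic lattice, matte white"

# Each pass tightens the wording, mirroring the few manual adjustments
# described above. The specific refinements are illustrative only.
prompts = [
    base,
    base + ", single piece, no assembly, product photo on grey background",
    base + ", single piece, struts about 1 mm thick, close-up side view",
]

for i, prompt in enumerate(prompts, start=1):
    result = client.images.generate(model="dall-e-3", prompt=prompt,
                                    n=1, size="1024x1024")
    print(f"attempt {i}: {result.data[0].url}")  # review each result by eye
```

Each attempt still has to be judged by eye, which is exactly the clunkiness described above: the loop is cheap to run, but there is no way to tell the model “keep everything, just thin the struts.”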

Once I have around five ideas, I let them rest for a few hours. I put them in a Google doc, paste in the images, and look at them again with fresh eyes. Here, I try to see if something is inauthentic or not right for me. If an idea feels wrong, or wrong for me personally, I’ll kill it at this stage; but if it’s merely stupid, wouldn’t work, or is impossible, I’ll still keep it. Then I’ll compare each idea with the use case, client, goal, or utility. What will this do for my client, or their client? What will this do for someone’s life, day, or smile? Then I’ll riff on the existing ideas. Finally, I do a brainstorming round, trying to generate the stupidest ideas possible.

I think this is really freeing and often gets some amazing ideas to surface amid a playful process. At this point, I’ll have about ten paragraphs, many with AI-generated images. I’ll go do something else, then copy out all the adequate ideas and leave behind the terrible ones, ending up with around five. Then I’ll share them with the people I’m working closely with, listing my ideas alongside theirs. We chat it out to see which ones make the cut. These are then 3D modeled and 3D printed on desktop machines. We iterate continually until we come up with something to test. Then we play around with the fused deposition modeling (FDM) version for a bit and maybe make it in vat polymerization as well. We test it and ask users for feedback, then interview users about what they want and need, showing them the prints. We adjust the designs and make them in powder bed fusion or whatever the final technology is. That, by and large, is my process. Now, the reason I’m sharing this is twofold.

One reason I’m sharing this is that I’ve seen many more people lately share a similar ChatGPT-to-3D print-to-final part workflow. Architects like External Reference, for example, have shared their Midjourney-to-simulation-to-3D print workflow. Nike is doing something similar, and so are many others. To me, this is a growing trend: many people independently turning to ChatGPT and similar tools to generate ideas and turn them into real products. For this article, I asked several people whether they used AI; all had tried it, and many had incorporated it into their workflow.

This has some implications for us. If we tightly integrate this kind of process into the 3D printing workflow, we can encourage many more people to 3D print things and to invent with 3D printing. That’s a big opportunity. However, it’s also a risk, because further advances in AI could allow these kinds of tools to bypass 3D printing for large parts of the workflow.

At the same time, there is another interesting development. When I started using Google, I noticed that I was thinking in terms of search terms rather than systematic ideas. Instead of remembering the ebb and flow of the Second World War on the Eastern Front to recall a major battle, I’d think of “major WWII battles,” “key Second World War battles,” or “most important battles of the Second World War.” Rather than seeking out information to complete my own knowledge, or reading a book on the Eastern Front, I was collecting little mosaic tiles. Before I knew it, my knowledge was made not of my own brush strokes but of a mosaic.

Currently, when I think of ideas, try to explain them to myself, and give them a name, I’m thinking in prompts. I used to generate a visual image of an idea on my own; now I’m collecting prompts. This shift is very interesting to me. While AI might be overhyped, for prototyping, text-to-image and text-to-STL could indeed be the future.
