The Only Thing I Actually Use AI for Currently



I’d like to apologize in advance for this artificial intelligence (AI) story. I try to avoid talking about AI, machine learning (ML), and all that jazz. Many of the stories coming out about AI are silly, overhyped, and inaccurate. However, amid the morass of AI hype, there are some nuggets of usefulness.

A story about a Barcelona-based architecture firm revealed that it prototypes in Midjourney. I recently developed some earrings with designer Roman Reiner. He quickly turned to the DALL-E text-to-image model to iterate ideas. He used it to check quickly whether he and I were thinking along the same lines, and later pushed out some completely off-the-wall designs for me to give feedback on.

Weeks ago, my girlfriend took the Footwearology 3D printing shoe course and used AI to quickly discover and work out shoe ideas. I was bouncing design ideas off a friend, and to explain what I meant, I used Wonder AI, a simple AI image generation tool. I’ve also used Wonder to generate images that explained the look and feel of a brand to a graphic designer.

A few weeks ago, I came up with a filtered pipe idea. To explain it quickly to people in an email, I made a crude AI image. Several friends have played around with using ChatGPT to create STL files, which they can then print. That kind of functionality is bound to improve over time.
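To make the ChatGPT-to-STL experiments concrete: an ASCII STL file is just a list of triangles, so a short script, whether hand-written or generated by a chatbot, can emit a printable solid directly. The sketch below is my own illustration of the kind of output these experiments produce; the shape and helper names are hypothetical, not anyone's actual workflow.

```python
def facet(normal, v1, v2, v3):
    """Format one triangle in ASCII STL syntax."""
    lines = [f"  facet normal {normal[0]} {normal[1]} {normal[2]}",
             "    outer loop"]
    for v in (v1, v2, v3):
        lines.append(f"      vertex {v[0]} {v[1]} {v[2]}")
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

def tetrahedron_stl(size=10.0):
    """Return an ASCII STL string describing a simple tetrahedron."""
    a, b, c, d = (0, 0, 0), (size, 0, 0), (0, size, 0), (0, 0, size)
    # Four triangular faces; zero normals are fine, as slicers recompute them.
    tris = [(a, c, b), (a, b, d), (a, d, c), (b, c, d)]
    body = "\n".join(facet((0, 0, 0), *t) for t in tris)
    return f"solid tetra\n{body}\nendsolid tetra"

if __name__ == "__main__":
    with open("tetra.stl", "w") as f:
        f.write(tetrahedron_stl())
```

A file like this opens directly in any slicer, which is what makes the text-to-STL idea so tantalizing: the gap between a prompt and a printable part is only a well-formed triangle list.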

It would be very powerful if we could iterate much closer to our thoughts. Right now I find iteration clunky: I often can’t tweak a prompt precisely enough to get closer to what I want. This accuracy step is crucial, because if I could just get near to what I want, I’d print the file every time and use that to continue my process. Instead, I currently use AI to gather ideas quickly and explore many different options, then get someone to 3D model the most promising one more precisely. Jumping to that step right away with a more directed and accurate process would really speed things up. For a relatively simple thing like an earring, my process essentially starts with AI image generation tools.

Currently, I’ll usually walk around and brainstorm ideas on my own. Then I’ll imagine a person wearing or using the object and winnow many ideas down to a handful. Next, I’ll go to Wonder AI and write a prompt, adjusting it a few times to get something close to the idea. Sometimes this works remarkably well, but often I can’t get anywhere near what I want and I get frustrated. Finally, I’ll have around five little two-sentence explanations of product variations. Some of these have a Wonder image with them and others do not. The reason I use Wonder is that it’s on my phone and it’s just a bit faster than the desktop-based tools I’ve also tried.
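The adjust-the-prompt-a-few-times loop above can be sketched as a tiny helper that spins one product idea into several phrasings to paste into whatever image generator is handy. The style and framing lists here are hypothetical placeholders, not a real recipe from the workflow described.

```python
from itertools import product

def prompt_variants(idea,
                    styles=("minimalist", "organic", "art deco"),
                    details=("product photo, white background",
                             "worn by a model, soft lighting")):
    """Cross visual styles with framing details to get a short prompt list."""
    return [f"{style} {idea}, {detail}"
            for style, detail in product(styles, details)]

if __name__ == "__main__":
    # Hypothetical example idea, not an actual client brief.
    for p in prompt_variants("3D printed earring shaped like a fern leaf"):
        print(p)
```

Even a crude expander like this captures the spirit of the step: generate several nearby phrasings fast, see which image lands closest, and iterate from there.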

Once I have around five ideas, I let them rest for a few hours. I put the ideas in a Google doc, paste in the images, and look at them again with fresh eyes. Here, I try to see if something is inauthentic or not right for me. If something feels off, or simply wrong for me personally, I’ll kill it here. But if an idea is stupid, wouldn’t work, or is impossible, I’ll still keep it. Then I’ll compare each idea with the use case, client, goal, or utility. What will this do for my client or their client? What will this do for someone’s life, day, or smile? Then I’ll riff on the existing ideas. Finally, I do a brainstorming round, trying to generate the stupidest ideas possible.

I think this is really freeing and often gets some amazing ideas to surface amidst a playful process. At this point, I’ll have about ten paragraphs, many with AI-generated images. I’ll go do something else, then copy-paste out all the adequate ideas and leave behind the terrible ones, keeping around five. Then I’ll share them with the people I’m working closely with, listing mine alongside their own ideas. We chat it out to see which ones make the cut. These will then be 3D modeled and 3D printed using desktop machines. We iterate continually until we come up with something to test. Then we play around with the FDM version for a bit and maybe make it in vat polymerization as well. We test it and ask users for feedback. Then we interview users about what they want and need, showing them the prints. We adjust the designs and make them in powder bed fusion or whatever the final technology is. That, by and large, is my process. Now, the reason I’m sharing this is twofold.

One reason I’m sharing this is that I’ve seen many more people lately share a similar ChatGPT-to-3D-print-to-final-part workflow. Architects like External Reference have shared their Midjourney-to-simulation-to-3D print workflow. Nike is doing something similar, and so are many others. To me, this is a growing trend of people autonomously turning to ChatGPT and similar tools to generate ideas and turn them into real products. I asked several people for this article if they used AI, and all had tried it, with many incorporating it into their workflow.

This has some implications for us. If we tightly integrate this process into the 3D printing workflow, we can encourage many more people to 3D print things and to invent with 3D printing. This presents a big opportunity. However, it’s also a risk because more advancements in AI could allow these kinds of tools to bypass 3D printing for large parts.

At the same time, there is another interesting development. When I started using Google, I noticed that I was thinking in terms of search terms rather than a systematic idea. Instead of remembering the ebb and flow of the Second World War on the Eastern Front to recall a major battle, I’d think of “major WWII battles,” “key Second World War battles,” or “most important battles of the Second World War.” Rather than seeking out information to complete my own knowledge or reading a book on the Eastern Front, I was collecting little mosaic tiles. Before I knew it, my knowledge was not made of my own brush strokes but of a mosaic.

Currently, when I think of ideas and try to explain them to myself and give them a name, I’m thinking in prompts. I used to form a mental image of my own; now I’m collecting prompts. This shift is very interesting to me. While AI might be overhyped, for prototyping, text-to-image and text-to-STL could indeed be the future.
