Many industries hope to benefit from artificial intelligence, but they currently know very little about where it works and where it's unreliable. 3D printing is a good field in which to watch AI improve accuracy and reduce cost, because its traits make it a natural candidate for AI: many diverse factors, ranging from the ratio of materials in the filament to the postprocessing, contribute to successful prints; these factors interact in highly complex ways; and results depend on perturbations in the physical environment that aren't under the manufacturer's control.

This article samples the large literature on the use of AI in 3D printing, highlighting three research projects that faced particularly interesting challenges. The solutions in these papers, even though they are only a few years old, may well have been superseded by newer developments, especially transformers and LLMs. But the thought processes the authors followed are interesting in themselves. Together, the research I point to in this article suggests that the use of AI is an art. The power of AI comes from handling a large variety of features that interact in subtle ways.

The paper “A data-driven machine learning approach for the 3D printing process optimisation” (official site) claims to analyze several 3D printing parameters at once and to be useful for many different printers. The goal was to predict three properties of a print: the time required, the length of filament consumed, and the weight of the resulting object. What's interesting to me about the paper is its two-step process. First, a convolutional neural network (CNN) took several typical 3D printing settings, such as extrusion width and layer height, as parameters. The researchers enhanced the inputs with two extra calculated parameters: the object's surface area and its volume. They had to reduce the number of faces and vertices for the CNN, so the algorithm randomly selected 5,000 vertices from each object.
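This kind of preprocessing can be sketched with a triangle mesh in NumPy. The function names, the signed-tetrahedron volume formula, and the sampling details below are my own illustration of the general technique, not the paper's actual code:

```python
import numpy as np

def sample_vertices(vertices, n=5000, seed=0):
    """Randomly pick n vertices so every mesh presents a fixed-size input."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(vertices), size=min(n, len(vertices)), replace=False)
    return vertices[idx]

def mesh_area_volume(vertices, faces):
    """Surface area and volume of a closed triangle mesh.

    Area: half the cross-product magnitude, summed over triangles.
    Volume: signed tetrahedron volumes against the origin, summed.
    """
    tris = vertices[faces]                       # shape (F, 3, 3)
    a, b, c = tris[:, 0], tris[:, 1], tris[:, 2]
    cross = np.cross(b - a, c - a)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    volume = abs(np.einsum('ij,ij->i', a, np.cross(b, c)).sum()) / 6.0
    return area, volume

# Demo on a unit tetrahedron (a stand-in for a printed part's mesh).
verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(mesh_area_volume(verts, faces))    # area ≈ 2.366, volume ≈ 0.1667
```

The computed area and volume would then be appended to the printer settings as extra input features.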
The CNN then generated several more parameters, which were added to the input parameters and fed into a multilayer perceptron (MLP). The researchers tested their model on 70 3D-printed objects, each of which was run through the models with 32 different “parameter sets.” The model seemed successful, matching the real results that emerged from testing the objects.

Columns with interior holes or tubes are often stronger than solid columns, apparently because material that's farther from the center resists pressure better. (I suppose that the cited article is comparing two columns of the same mass, one solid and one more spread out. On that reading, a solid column would be stronger than a hollow one of the same diameter, but would be unfeasibly heavy and waste a lot of material.) Nature abounds in stems and other supports, including bamboo, cactus, mint, quills, papyrus, seashells, and honeycombs, that are divided internally by complex structures such as concentric circles and cross-hatching. Could 3D-printed objects be improved by incorporating support structures like these? Note that conventional manufacturing would have little use for arbitrarily complex lattices, because they would be so hard to make. 3D printing can handle them, though, and therefore makes experimentation practical.

The authors of the article “3D printable biomimetic rod with superior buckling resistance designed by machine learning” tested the use of AI to find strong support structures for 3D printing. The experiment was successful, but it shows how much trial and error is involved in AI: a lot of the time, researchers just have to throw many different algorithms or parameters at a problem and see what happens. The researchers tested candidates for buckling (deformation), compressive stress (vertical shortening), and axial displacement (lengthening or shortening along an axis) under pressure.
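The two-step idea, where one model's outputs become extra features for a second model, can be shown in a minimal scikit-learn sketch. Plain MLPs stand in for both stages here (the paper's first stage is a CNN over mesh data), and the feature names, data, and targets are all invented for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Hypothetical print settings: extrusion width, layer height, infill density,
# plus the two computed geometry features (surface area, volume).
X = rng.uniform(0.1, 1.0, size=(200, 5))

# Hypothetical targets: print time, filament length, object weight
# (a near-linear made-up relationship, just to have something learnable).
y = X @ rng.uniform(0.5, 2.0, size=(5, 3)) + rng.normal(0, 0.01, size=(200, 3))

# Stage 1: a small network produces intermediate predictions.
stage1 = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                      max_iter=2000, random_state=0)
stage1.fit(X, y)
extra = stage1.predict(X)            # stage-1 outputs become new features

# Stage 2: an MLP sees the original settings plus the stage-1 features.
X2 = np.hstack([X, extra])
stage2 = MLPRegressor(hidden_layer_sizes=(32,), solver="lbfgs",
                      max_iter=2000, random_state=0)
stage2.fit(X2, y)
print(stage2.score(X2, y))           # R^2 on the training data
```

In a real pipeline the second stage would be evaluated on held-out prints, as the researchers did with their 70 test objects.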
They started with 21 examples from nature, hence the term “biomimetic” in the paper's title. But 21 data samples is nowhere near enough for machine learning. A convenient feature of this experiment is that synthetic data is just as good as real-life data, because the researchers were seeking new designs in any case. So they made tiny modifications to each sample to create variations that don't exist in nature. That left them with more than a million samples, which posed the opposite problem: too many samples to test in a reasonable amount of time. The researchers therefore ran their machine learning algorithm on a small subset and discovered, by manual observation, characteristics that clearly indicated that certain types of samples would perform poorly. This is another trial-and-error aspect of the experiment. Ultimately, the researchers settled on 1,500 models to test thoroughly.

The next question is which algorithm to use: Gaussian process regression? Support vector machines? K-nearest neighbors? Each of the popular algorithms is best for certain applications, but for new kinds of applications, we don't know which will win. The answer, as when dining out at a buffet, is to load one's plate with everything. The researchers used half a dozen algorithms and determined which gave the best results. Presumably, “best” means that the model's output matched the known strength of the test data.

I shouldn't finish describing the research process without noting that the team used proprietary software: Ansys for simulation, MATLAB for data analysis, and even Microsoft Excel. This is a departure from the common use of open source libraries (particularly in Python) for AI. The basic algorithms used in the experiment are standard, however.

In the end, 160 new models emerged. Tests using the standard engineering technique of finite element analysis (FEA) showed that these were stronger than the original models found in nature.
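The augment-then-compare workflow might look like the following in scikit-learn. The design parameters, the perturbation scale, and the smooth response surface are invented stand-ins; in the paper, the strength labels came from Ansys simulation rather than a formula:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-in: 21 "natural" designs described by a few geometric
# parameters (wall thickness, number of internal ribs, taper, ...).
natural = rng.uniform(0.0, 1.0, size=(21, 4))

# Augment: many slightly perturbed variants of each natural design.
variants = np.concatenate(
    [natural + rng.normal(0, 0.02, size=natural.shape) for _ in range(20)]
)

# Made-up smooth "buckling strength" response; the paper got this from FEA.
strength = np.sin(variants @ np.array([1.0, 2.0, 0.5, 1.5])) + variants.sum(axis=1)

# Load the plate with everything: score several regressors, keep the best.
models = {
    "gpr": GaussianProcessRegressor(),
    "svr": SVR(),
    "knn": KNeighborsRegressor(n_neighbors=5),
}
scores = {name: cross_val_score(m, variants, strength, cv=5).mean()
          for name, m in models.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The "best" model here is just the one whose cross-validated predictions track the simulated strengths most closely, which matches my reading of how the paper ranked its half-dozen algorithms.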
Saying that the artificial designs are “better” than the natural ones is a bit unfair. Structures in animals and plants have to serve many purposes besides resisting pressure. But this experiment shows how nature, AI, and human decision-making can combine to make new ideas flower.

Manufacturing defects are common in 3D printing. Because physical conditions such as environmental temperature and the consistency of the materials are so variable, manufacturers cannot always predict when a print job will fail, even with AI. Instead, many manufacturers point cameras or other sensors at the output to look for anomalies. But the images themselves are confusing and hard to differentiate. Just as AI is routinely used nowadays to interpret medical scans, it holds promise for interpreting pictures of 3D-printed objects.

Infrared camera images are the input data for “An encoder–decoder based approach for anomaly detection with application in additive manufacturing” (official site). Infrared is useful for detecting defects because it can reveal lesions or other weaknesses in materials that aren't visible to the naked eye. But in this case, the data posed another dilemma. Anomaly detection through AI normally depends on labeled data, where correct and defective images are identified in a trustworthy manner. The researchers, however, had to use data without such identification.

They took a bold and counterintuitive approach to this problem: they created a number of simple 3D objects and simply assumed that all were correctly formed. After choosing their verification set, though, they augmented it with deliberately faked data containing artificial anomalies, to see whether their models would identify them. They then applied unsupervised learning (which is good at finding patterns in unlabeled data sets) to the data.

Several changes were made to the input data. Each image was 64 × 64 pixels, too much to process efficiently, so the team reduced each image to 32 × 32.
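A toy version of this evaluation strategy can be built around reconstruction error: train only on data presumed normal, then check whether deliberately injected anomalies stand out. PCA stands in here for the paper's CNN encoder-decoder, and all the data and the thresholding scheme are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training set: flattened sensor "images" of parts all presumed normal.
normal = rng.normal(0.0, 1.0, size=(300, 64))

# PCA as a stand-in encoder-decoder: compress to a small latent code,
# reconstruct, and measure how badly each input is reconstructed.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
components = Vt[:8]                       # 8-dimensional "latent code"

def reconstruction_error(x):
    code = (x - mean) @ components.T      # encode
    recon = code @ components + mean      # decode
    return np.linalg.norm(x - recon, axis=-1)

# Verification set: held-out normal data plus injected fake anomalies.
held_out = rng.normal(0.0, 1.0, size=(50, 64))
anomalies = held_out.copy()
anomalies[:, :16] += 5.0                  # a localized "hot spot"

# Flag anything reconstructed worse than 99% of the training data.
threshold = np.percentile(reconstruction_error(normal), 99)
print((reconstruction_error(anomalies) > threshold).mean())  # detection rate
```

Because the model never saw anything like the injected hot spot, it cannot reconstruct it from the learned code, and the error spikes, which is the core intuition behind encoder-decoder anomaly detection.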
They also deformed the images a bit to “improve the robustness of our model against small perturbation[s] in input data.” Finally, they manipulated the input data to create some synthetic data that differed only in temperature, to “regularize our network.”

As the article's title indicates, the algorithm for analysis was a CNN-based encoder-decoder model. One subtlety in the analysis dealt with flaws that might be confined to a small part of an image. If the model simply took a whole image as input, a small flaw could get lost in the average. So the researchers altered the algorithm to examine small windows within the larger image.

This study showed how success depends on researchers' comprehensive understanding of every aspect of their work: the characteristics of the physical machines and materials under study, a thorough understanding of the algorithms and their weak points, and an intuition for how to handle and alter data as needed. 3D printing involves three dimensions (plus the dimension of time), but analyzing it involves many, many dimensions.

The literature on AI applications to 3D printing is vast. I hope that the three sample research projects summarized in this article give a flavor of the creativity researchers are showing in making the promising area of 3D printing more robust.