Advancing Additive Manufacturing through Deep Learning: A Comprehensive
Review of Current Progress and Future Challenges
- URL: http://arxiv.org/abs/2403.00669v1
- Date: Fri, 1 Mar 2024 17:01:47 GMT
- Title: Advancing Additive Manufacturing through Deep Learning: A Comprehensive
Review of Current Progress and Future Challenges
- Authors: Amirul Islam Saimon, Emmanuel Yangue, Xiaowei Yue, Zhenyu (James)
Kong, Chenang Liu
- Abstract summary: This paper reviews recent studies that apply deep learning (DL) to improve the additive manufacturing (AM) process.
It focuses on generalizing DL models across a wide range of geometry types, managing uncertainty in both AM data and DL models, overcoming limited and noisy AM data by incorporating generative models, and unveiling the potential of interpretable DL for AM.
- Score: 5.415870869037467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Additive manufacturing (AM) has already proven to be a viable
alternative to widely used subtractive manufacturing thanks to its
extraordinary capacity for producing highly customized products with minimal
material waste. Nevertheless, it is still not the primary choice for industry
because of several major inherent challenges, including complex and dynamic
process interactions that are sometimes difficult to fully understand even
with traditional machine learning, given the involvement of high-dimensional
data such as images, point clouds, and voxels. The recent emergence of deep
learning (DL), however, shows great promise in overcoming many of these
challenges, because DL can automatically capture complex relationships from
high-dimensional data without hand-crafted feature extraction. As a result,
the volume of research at the intersection of AM and DL is growing
exponentially each year, which makes it difficult for researchers to keep
track of the trends and promising future directions. Furthermore, to the best
of our knowledge, there is no comprehensive review summarizing the recent
studies in this research track. This paper therefore reviews the recent
studies that apply DL to improve the AM process, with a high-level summary of
their contributions and limitations. Finally, it summarizes the current
challenges and recommends promising opportunities for further investigation,
with a special focus on generalizing DL models across a wide range of
geometry types, managing uncertainty in both AM data and DL models,
overcoming limited and noisy AM data by incorporating generative models, and
unveiling the potential of interpretable DL for AM.
Related papers
- A Comprehensive Survey of Synthetic Tabular Data Generation [27.112327373017457]
Tabular data is one of the most prevalent and critical data formats across diverse real-world applications.
It is often constrained by challenges such as data scarcity, privacy concerns, and class imbalance.
Synthetic data generation has emerged as a promising solution, leveraging generative models to learn the distribution of real datasets.
arXiv Detail & Related papers (2025-04-23T08:33:34Z)
- A Survey on Diffusion Models for Anomaly Detection [41.22298168457618]
Diffusion models (DMs) have emerged as a powerful class of generative AI models.
DMAD offers promising solutions for identifying deviations in increasingly complex and high-dimensional data.
arXiv Detail & Related papers (2025-01-20T12:06:54Z)
- Trends, Challenges, and Future Directions in Deep Learning for Glaucoma: A Systematic Review [0.2940464448991482]
We examine the latest advances in glaucoma detection through deep learning (DL) algorithms, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
This study focuses on three aspects of DL-based glaucoma detection frameworks: input data modalities, processing strategies, and model architectures and applications.
arXiv Detail & Related papers (2024-11-07T23:35:05Z)
- RADAR: Robust Two-stage Modality-incomplete Industrial Anomaly Detection [61.71770293720491]
We propose a novel two-stage Robust modAlity-incomplete fusing and Detecting frAmewoRk, abbreviated as RADAR.
Our bootstrapping philosophy is to enhance two stages in MIIAD, improving the robustness of the Multimodal Transformer.
Our experimental results demonstrate that the proposed RADAR significantly surpasses conventional MIAD methods in terms of effectiveness and robustness.
arXiv Detail & Related papers (2024-10-02T16:47:55Z)
- From Linguistic Giants to Sensory Maestros: A Survey on Cross-Modal Reasoning with Large Language Models [56.9134620424985]
Cross-modal reasoning (CMR) is increasingly recognized as a crucial capability in the progression toward more sophisticated artificial intelligence systems.
The recent trend of deploying Large Language Models (LLMs) to tackle CMR tasks has marked a new mainstream of approaches for enhancing their effectiveness.
This survey offers a nuanced exposition of current methodologies applied in CMR using LLMs, classifying these into a detailed three-tiered taxonomy.
arXiv Detail & Related papers (2024-09-19T02:51:54Z)
- 4D Contrastive Superflows are Dense 3D Representation Learners [62.433137130087445]
We introduce SuperFlow, a novel framework designed to harness consecutive LiDAR-camera pairs for establishing pretraining objectives.
To further boost learning efficiency, we incorporate a plug-and-play view consistency module that enhances alignment of the knowledge distilled from camera views.
arXiv Detail & Related papers (2024-07-08T17:59:54Z)
- State-Space Modeling in Long Sequence Processing: A Survey on Recurrence in the Transformer Era [59.279784235147254]
This survey provides an in-depth summary of the latest approaches that are based on recurrent models for sequential data processing.
The emerging picture suggests that there is room for thinking of novel routes, constituted by learning algorithms which depart from the standard Backpropagation Through Time.
arXiv Detail & Related papers (2024-06-13T12:51:22Z)
- Generative AI for Synthetic Data Generation: Methods, Challenges and the Future [12.506811635026907]
Research on generating synthetic data with large language models (LLMs) has surged recently.
This paper delves into advanced technologies that leverage these gigantic LLMs for the generation of task-specific training data.
arXiv Detail & Related papers (2024-03-07T03:38:44Z)
- On the Challenges and Opportunities in Generative AI [135.2754367149689]
We argue that current large-scale generative AI models do not sufficiently address several fundamental issues that hinder their widespread adoption across domains.
In this work, we aim to identify key unresolved challenges in modern generative AI paradigms that should be tackled to further enhance their capabilities, versatility, and reliability.
arXiv Detail & Related papers (2024-02-28T15:19:33Z)
- On the Resurgence of Recurrent Models for Long Sequences -- Survey and Research Opportunities in the Transformer Era [59.279784235147254]
This survey is aimed at providing an overview of these trends framed under the unifying umbrella of Recurrence.
It emphasizes novel research opportunities that become prominent when abandoning the idea of processing long sequences.
arXiv Detail & Related papers (2024-02-12T23:55:55Z)
- Deep Learning for Multi-Label Learning: A Comprehensive Survey [6.571492336879553]
Multi-label learning is a rapidly growing research area that aims to predict multiple labels from a single input data point.
Inherent difficulties in multi-label classification (MLC) include dealing with high-dimensional data, addressing label correlations, and handling partial labels.
Recent years have witnessed a notable increase in adopting deep learning (DL) techniques to address these challenges more effectively in MLC.
arXiv Detail & Related papers (2024-01-29T20:37:03Z)
- A Survey on Data Augmentation in Large Model Era [16.05117556207015]
Large models, encompassing large language and diffusion models, have shown exceptional promise in approximating human-level intelligence.
With continuous updates to these models, the existing reservoir of high-quality data may soon be depleted.
This paper offers an exhaustive review of large model-driven data augmentation methods.
arXiv Detail & Related papers (2024-01-27T14:19:33Z)
- Learning from models beyond fine-tuning [78.20895343699658]
Learn From Model (LFM) focuses on the research, modification, and design of foundation models (FM) based on the model interface.
The study of LFM techniques can be broadly categorized into five major areas: model tuning, model distillation, model reuse, meta learning and model editing.
This paper gives a comprehensive review of the current methods based on FM from the perspective of LFM.
arXiv Detail & Related papers (2023-10-12T10:20:36Z)
- Geometric Deep Learning for Structure-Based Drug Design: A Survey [83.87489798671155]
Structure-based drug design (SBDD) leverages the three-dimensional geometry of proteins to identify potential drug candidates.
Recent advancements in geometric deep learning, which effectively integrate and process 3D geometric data, have significantly propelled the field forward.
arXiv Detail & Related papers (2023-06-20T14:21:58Z)
- Deep Transfer Learning for Automatic Speech Recognition: Towards Better Generalization [3.6393183544320236]
Speech recognition with deep learning (DL) requires large-scale training datasets and high computational and storage resources.
Deep transfer learning (DTL) has been introduced to overcome these issues.
arXiv Detail & Related papers (2023-04-27T21:08:05Z)
- Dataset Distillation: A Comprehensive Review [76.26276286545284]
Dataset distillation (DD) aims to derive a much smaller dataset containing synthetic samples, based on which the trained models yield performance comparable with those trained on the original dataset.
This paper gives a comprehensive review and summary of recent advances in DD and its application.
arXiv Detail & Related papers (2023-01-17T17:03:28Z)
- A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)
- A Survey of Deep Active Learning [54.376820959917005]
Active learning (AL) attempts to maximize a model's performance gain while labeling the fewest possible samples.
Deep learning (DL) is data-hungry and requires large amounts of data to optimize its massive number of parameters.
Deep active learning (DAL) has emerged to combine the strengths of both.
arXiv Detail & Related papers (2020-08-30T04:28:31Z)
- A Comprehensive Study on Temporal Modeling for Online Action Detection [50.558313106389335]
Online action detection (OAD) is a practical yet challenging task, which has attracted increasing attention in recent years.
This paper aims to provide a comprehensive study on temporal modeling for OAD including four meta types of temporal modeling methods.
We present several hybrid temporal modeling methods, which outperform the recent state-of-the-art methods with sizable margins on THUMOS-14 and TVSeries.
arXiv Detail & Related papers (2020-01-21T13:12:58Z)