Domain-Aware Hyperdimensional Computing for Edge Smart Manufacturing
- URL: http://arxiv.org/abs/2509.26131v1
- Date: Tue, 30 Sep 2025 11:52:14 GMT
- Title: Domain-Aware Hyperdimensional Computing for Edge Smart Manufacturing
- Authors: Fardin Jalil Piran, Anandkumar Patel, Rajiv Malhotra, Farhad Imani
- Abstract summary: Smart manufacturing requires on-device intelligence that meets strict latency and energy budgets. HyperDimensional Computing (HDC) offers a lightweight alternative by encoding data as high-dimensional hypervectors and computing with simple operations. We map how encoder type, projection variance, hypervector dimensionality, and data regime shape accuracy, inference latency, training time, and training energy. We obtain models that match or exceed the accuracy of state-of-the-art deep learning and Transformer models while delivering at least 6x faster inference and more than 40x lower training energy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Smart manufacturing requires on-device intelligence that meets strict latency and energy budgets. HyperDimensional Computing (HDC) offers a lightweight alternative by encoding data as high-dimensional hypervectors and computing with simple operations. Prior studies often assume that the qualitative relation between HDC hyperparameters and performance is stable across applications. Our analysis of two representative tasks, signal-based quality monitoring in Computer Numerical Control (CNC) machining and image-based defect detection in Laser Powder Bed Fusion (LPBF), shows that this assumption does not hold. We map how encoder type, projection variance, hypervector dimensionality, and data regime shape accuracy, inference latency, training time, and training energy. A formal complexity model explains predictable trends in encoding and similarity computation and reveals nonmonotonic interactions with retraining that preclude a closed-form optimum. Empirically, signals favor nonlinear Random Fourier Features with more exclusive encodings and saturate in accuracy beyond moderate dimensionality. Images favor linear Random Projection, achieve high accuracy with small dimensionality, and depend more on sample count than on dimensionality. Guided by these insights, we tune HDC under multiobjective constraints that reflect edge deployment and obtain models that match or exceed the accuracy of state-of-the-art deep learning and Transformer models while delivering at least 6x faster inference and more than 40x lower training energy. These results demonstrate that domain-aware HDC encoding is necessary and that tuned HDC offers a practical, scalable path to real-time industrial AI on constrained hardware. Future work will enable adaptive encoder and hyperparameter selection, expand evaluation to additional manufacturing modalities, and validate on low-power accelerators.
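To make the contrast between the two encoder families concrete, here is a minimal NumPy sketch of HDC classification with both a linear Random Projection encoder and a nonlinear Random Fourier Features encoder, using bundled class prototypes and cosine similarity. The dimensionalities, projection variance, and synthetic data below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rp_encode(x, proj):
    """Linear Random Projection (RP): sign of a random linear map."""
    return np.sign(proj @ x)

def rff_encode(x, proj, phases):
    """Nonlinear Random Fourier Features (RFF): binarized cosine features."""
    return np.sign(np.cos(proj @ x + phases))

def train_prototypes(hvs, labels):
    """Bundle (element-wise sum) the hypervectors of each class."""
    return {c: hvs[labels == c].sum(axis=0) for c in np.unique(labels)}

def classify(hv, prototypes):
    """Return the class whose prototype is most cosine-similar."""
    return max(prototypes, key=lambda c: hv @ prototypes[c]
               / (np.linalg.norm(hv) * np.linalg.norm(prototypes[c]) + 1e-12))

# Toy setup: 8 input features projected into D = 2048 dimensions.
d, D = 8, 2048
sigma = 1.0  # projection variance, one of the hyperparameters the paper tunes
proj = rng.normal(0.0, sigma, size=(D, d))
phases = rng.uniform(0.0, 2 * np.pi, size=D)

# Two synthetic "signal" classes clustered around opposite means.
X = np.vstack([rng.normal(-1.0, 0.2, size=(50, d)),
               rng.normal(+1.0, 0.2, size=(50, d))])
y = np.array([0] * 50 + [1] * 50)

hvs = np.array([rff_encode(x, proj, phases) for x in X])
protos = train_prototypes(hvs, y)
pred = classify(rff_encode(rng.normal(1.0, 0.2, size=d), proj, phases), protos)
```

Training and inference here reduce to a matrix product, elementwise nonlinearity, summation, and dot products, which is why HDC's latency and energy scale so predictably with hypervector dimensionality.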
Related papers
- Hybrid Photonic-Quantum Reservoir Computing For Time-Series Prediction [0.0]
We explore the design and implementation of a Hybrid Photonic-Quantum Reservoir Computing (HPQRC) paradigm. It brings together the high-speed parallelism of photonic systems with the quantum reservoir's capacity for modeling complex, nonlinear dynamics. We show that HPQRC attains much higher accuracy with lower computational time than both classical and quantum-only models.
arXiv Detail & Related papers (2025-11-12T11:26:47Z) - GraphCompNet: A Position-Aware Model for Predicting and Compensating Shape Deviations in 3D Printing [46.76421610124468]
This paper introduces a data-driven algorithm for modeling and compensating shape deviations in additive manufacturing (AM). Recent advancements in machine learning (ML) have improved compensation precision, but issues remain in generalizing across complex geometries and adapting to position-dependent variations. We present a novel approach for powder bed fusion processes using GraphCompNet, a computational framework combining graph-based neural networks with a generative adversarial network (GAN)-inspired training process.
arXiv Detail & Related papers (2025-02-11T20:22:00Z) - Designing DNNs for a trade-off between robustness and processing performance in embedded devices [1.474723404975345]
Machine learning-based embedded systems need to be robust against soft errors. This paper investigates the suitability of using bounded activation functions (AFs) to improve model robustness against perturbations. We analyze encoder-decoder fully convolutional models aimed at performing semantic segmentation tasks on hyperspectral images for scene understanding in autonomous driving.
arXiv Detail & Related papers (2024-12-04T19:34:33Z) - Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Data Quality Monitoring for the Hadron Calorimeters Using Transfer Learning for Anomaly Detection [0.7767589715518638]
Transfer learning (TL) mechanisms promise to mitigate data sparsity and model complexity by utilizing pre-trained models for a new task. We present the potential of TL within the context of high-dimensional spatio-temporal (ST) anomaly detection (AD) with a hybrid autoencoder architecture, incorporating convolutional, graph, and recurrent neural networks. This research investigates the transferability of models trained on different sections of the Calorimeter of the Compact Muon Solenoid experiment at CERN.
arXiv Detail & Related papers (2024-08-29T15:19:06Z) - Random resistive memory-based deep extreme point learning machine for unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design, the random resistive memory-based deep extreme point learning machine (DEPLM).
Our co-design system achieves huge energy efficiency improvements and training cost reduction when compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z) - Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z) - Application of probabilistic modeling and automated machine learning framework for high-dimensional stress field [1.073039474000799]
We propose an end-to-end approach that maps a high-dimensional image-like input to an output of high dimensionality or its key statistics.
Our approach uses two main frameworks that perform three steps: a) reduce the input and output from a high-dimensional space to a reduced or low-dimensional space, b) model the input-output relationship in the low-dimensional space, and c) enable the incorporation of domain-specific physical constraints as masks.
arXiv Detail & Related papers (2023-03-15T13:10:58Z) - Task-Oriented Sensing, Computation, and Communication Integration for Multi-Device Edge AI [108.08079323459822]
This paper studies a new multi-device edge artificial intelligence (AI) system, which jointly exploits AI model split inference and integrated sensing and communication (ISAC).
We measure the inference accuracy by adopting an approximate but tractable metric, namely discriminant gain.
arXiv Detail & Related papers (2022-07-03T06:57:07Z) - An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimension parameter model and large-scale mathematical calculation restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL)-based Soft Actor-Critic for discrete (SAC-d), which generates the exit point, exit point, and compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a computation can adapt well to complex environments like dynamic wireless channels and arbitrary processing, and is capable of supporting the 5G URL
arXiv Detail & Related papers (2022-01-09T09:31:50Z) - Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
arXiv Detail & Related papers (2021-08-09T08:45:47Z) - Recovery of Linear Components: Reduced Complexity Autoencoder Designs [0.951828574518325]
We present an approach called Recovery of Linear Components (RLC), which serves as a middle ground between linear and non-linear dimensionality reduction techniques.
With the aid of synthetic and real-world case studies, we show that RLC, when compared with an autoencoder of similar complexity, achieves higher accuracy, similar robustness to overfitting, and faster training times.
arXiv Detail & Related papers (2020-12-14T14:08:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.