Real-Time Detection of Electronic Components in Waste Printed Circuit Boards: A Transformer-Based Approach
- URL: http://arxiv.org/abs/2409.16496v1
- Date: Tue, 24 Sep 2024 22:59:52 GMT
- Title: Real-Time Detection of Electronic Components in Waste Printed Circuit Boards: A Transformer-Based Approach
- Authors: Muhammad Mohsin, Stefano Rovetta, Francesco Masulli, Alberto Cabri
- Abstract summary: We have proposed a practical approach that involves selective disassembly of the different types of electronic components from WPCBs.
In this paper we evaluate the real-time accuracy of electronic component detection and localization using the Real-Time DEtection TRansformer (RT-DETR) model architecture.
- Score: 4.849820402342814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Critical Raw Materials (CRMs) such as copper, manganese, gallium, and various rare earths are of great importance to the electronics industry. To increase the concentration of individual CRMs and thus make their extraction from Waste Printed Circuit Boards (WPCBs) convenient, we have proposed a practical approach that involves selective disassembly of the different types of electronic components from WPCBs using mechatronic systems guided by artificial vision techniques. In this paper we evaluate the real-time accuracy of electronic component detection and localization using the Real-Time DEtection TRansformer (RT-DETR) model architecture. Transformers have recently become very popular for the extraordinary results obtained in natural language processing and machine translation. Here too, the transformer model achieves very good performance, often superior to that of the latest state-of-the-art object detection and localization models, YOLOv8 and YOLOv9.
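The paper's evaluation pipeline is not reproduced here, but a minimal inference sketch using the Ultralytics implementations of RT-DETR and YOLOv8 illustrates the comparison setup; the checkpoint names and image path are placeholders, not the authors' fine-tuned component-detection weights.

```python
# Hypothetical comparison sketch using the Ultralytics library. The weights
# ("rtdetr-l.pt", "yolov8l.pt") are generic COCO checkpoints and the image path
# is a placeholder; the paper's fine-tuned WPCB weights are not public here.
from ultralytics import RTDETR, YOLO

detr = RTDETR("rtdetr-l.pt")   # Real-Time DEtection TRansformer
yolo = YOLO("yolov8l.pt")      # CNN-based baseline

for model in (detr, yolo):
    results = model("wpcb_sample.jpg")   # image of a waste PCB (placeholder)
    for box in results[0].boxes:         # detected electronic components
        print(type(model).__name__, int(box.cls), float(box.conf), box.xyxy.tolist())
```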
Related papers
- Measuring the Recyclability of Electronic Components to Assist Automatic Disassembly and Sorting Waste Printed Circuit Boards [4.0998481751764]
We focus on measuring the recyclability of waste electronic components (WECs) from waste printed circuit boards (WPCBs) using a mathematical innovation model.
This innovative approach evaluates both the recyclability and recycling difficulties of WECs, integrating an AI model for improved disassembly and sorting.
arXiv Detail & Related papers (2024-06-24T12:33:56Z)
- EEGEncoder: Advancing BCI with Transformer-Based Motor Imagery Classification [11.687193535939798]
Brain-computer interfaces (BCIs) harness electroencephalographic signals for direct neural control of devices.
Traditional machine learning methods for EEG-based motor imagery (MI) classification encounter challenges such as manual feature extraction and susceptibility to noise.
This paper introduces EEGEncoder, a deep learning framework that employs modified Transformers and temporal convolutional networks (TCNs) to overcome these limitations.
arXiv Detail & Related papers (2024-04-23T09:51:24Z)
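As a rough illustration of that combination (an assumption, not the authors' architecture), a TCN stage feeding a Transformer encoder for motor-imagery classification might look like this in PyTorch; all layer sizes are invented.

```python
# Assumed, simplified sketch: dilated temporal convolutions extract local EEG
# features, a Transformer encoder models long-range dependencies, and a linear
# head classifies the motor-imagery trial. Layer sizes are invented.
import torch
import torch.nn as nn

class TCNTransformer(nn.Module):
    def __init__(self, channels=22, d_model=64, n_classes=4):
        super().__init__()
        self.tcn = nn.Sequential(  # length-preserving temporal convolutions
            nn.Conv1d(channels, d_model, kernel_size=7, padding=3),
            nn.GELU(),
            nn.Conv1d(d_model, d_model, kernel_size=7, padding=6, dilation=2),
            nn.GELU(),
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                        # x: (batch, electrodes, time)
        h = self.tcn(x)                          # (batch, d_model, time)
        h = self.encoder(h.transpose(1, 2))      # attend over time steps
        return self.head(h.mean(dim=1))          # pool over time, classify

logits = TCNTransformer()(torch.randn(8, 22, 1000))  # 8 trials, 22 electrodes
```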
- ESTformer: Transformer Utilizing Spatiotemporal Dependencies for EEG Super-resolution [14.2426667945505]
ESTformer is an EEG super-resolution framework utilizing spatiotemporal dependencies, based on the Transformer.
The ESTformer applies positional encoding methods and the multi-head self-attention mechanism to the space and time dimensions.
arXiv Detail & Related papers (2023-12-03T12:26:32Z)
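A minimal sketch of that idea, assuming a simple factorized design rather than ESTformer's actual layers: self-attention is applied first across electrodes at each time step, then across time steps for each electrode.

```python
# Simplified factorized attention over space and time (assumed design, toy shapes).
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)
x = torch.randn(8, 64, 20, 32)        # (batch, time, electrodes, features)
b, t, s, d = x.shape

# Spatial attention: electrodes attend to each other at every time step
q = x.reshape(b * t, s, d)
xs, _ = mha(q, q, q)
# Temporal attention: time steps attend to each other for every electrode
xt = xs.reshape(b, t, s, d).transpose(1, 2).reshape(b * s, t, d)
xt, _ = mha(xt, xt, xt)
out = xt.reshape(b, s, t, d).transpose(1, 2)   # back to (batch, time, electrodes, features)
```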
- Machine Learning-powered Compact Modeling of Stochastic Electronic Devices using Mixture Density Networks [0.0]
Conventional deterministic models fall short when it comes to capturing the subtle yet critical variability exhibited by many electronic components.
We present an innovative approach that transcends the limitations of traditional modeling techniques by harnessing the power of machine learning.
This paper marks a significant step forward in the quest for accurate and versatile compact models, poised to drive innovation in the realm of electronic circuits.
arXiv Detail & Related papers (2023-11-10T01:34:18Z)
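For concreteness, a generic mixture density network head looks roughly like the following; the layer sizes and component count are illustrative assumptions, not taken from the paper.

```python
# Generic mixture density network head (illustrative sizes): the model outputs
# the parameters of a Gaussian mixture over a device characteristic instead of
# a single deterministic value.
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    def __init__(self, in_dim=16, n_components=5):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh())
        self.pi = nn.Linear(64, n_components)         # mixture weight logits
        self.mu = nn.Linear(64, n_components)         # component means
        self.log_sigma = nn.Linear(64, n_components)  # log standard deviations

    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, y):
    """Negative log-likelihood of targets y under the predicted mixture."""
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    log_mix = torch.log_softmax(pi_logits, -1) + comp.log_prob(y.unsqueeze(-1))
    return -torch.logsumexp(log_mix, dim=-1).mean()

head = MDNHead()
loss = mdn_nll(*head(torch.randn(32, 16)), torch.randn(32))
```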
- SimPLR: A Simple and Plain Transformer for Scaling-Efficient Object Detection and Segmentation [49.65221743520028]
We show that a transformer-based detector with scale-aware attention enables the plain detector SimPLR, whose backbone and detection head are both non-hierarchical and operate on single-scale features.
Compared to the multi-scale and single-scale state-of-the-art, our model scales much better with bigger capacity (self-supervised) models and more pre-training data.
arXiv Detail & Related papers (2023-10-09T17:59:26Z)
- A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks [60.38369406877899]
Transformer is a deep neural network that employs a self-attention mechanism to comprehend the contextual relationships within sequential data.
Transformer models excel at handling long-range dependencies between input sequence elements and enable parallel processing.
Our survey encompasses the identification of the top five application domains for transformer-based models.
arXiv Detail & Related papers (2023-06-11T23:13:51Z)
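The self-attention mechanism the survey refers to is the standard scaled dot-product attention, sketched below in a few lines of PyTorch for concreteness.

```python
# Standard scaled dot-product self-attention, shown for concreteness.
import torch

def self_attention(x, wq, wk, wv):
    q, k, v = x @ wq, x @ wk, x @ wv                  # project every token
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v          # context-weighted values

x = torch.randn(4, 10, 32)                            # (batch, tokens, dim)
wq, wk, wv = (torch.randn(32, 32) for _ in range(3))
out = self_attention(x, wq, wk, wv)                   # all tokens processed in parallel
```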
- Hierarchical Point Attention for Indoor 3D Object Detection [111.04397308495618]
This work proposes two novel attention operations as generic hierarchical designs for point-based transformer detectors.
First, we propose Multi-Scale Attention (MS-A) that builds multi-scale tokens from a single-scale input feature to enable more fine-grained feature learning.
Second, we propose Size-Adaptive Local Attention (Local-A) with adaptive attention regions for localized feature aggregation within bounding box proposals.
arXiv Detail & Related papers (2023-01-06T18:52:12Z)
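A loose reading of MS-A (my assumption, not the paper's code): coarser token sets are pooled from the single-scale feature map and concatenated into one bank that attention can range over.

```python
# Assumed sketch: average-pool a single-scale token grid into a coarser scale
# and concatenate both scales into one token bank for attention.
import torch
import torch.nn.functional as F

tokens = torch.randn(2, 256, 64, 64)          # (batch, dim, H, W), single scale

def flatten(t):                               # (batch, dim, H, W) -> (batch, N, dim)
    return t.flatten(2).transpose(1, 2)

coarse = F.avg_pool2d(tokens, kernel_size=2)  # (batch, dim, 32, 32)
multi_scale = torch.cat([flatten(tokens), flatten(coarse)], dim=1)  # (2, 5120, 256)
```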
- Knowledge Distillation of Transformer-based Language Models Revisited [74.25427636413067]
Large model size and high run-time latency are serious impediments to applying pre-trained language models in practice.
We propose a unified knowledge distillation framework for transformer-based models.
Our empirical results shed light on distillation in pre-trained language models and show significant improvements over the previous state of the art (SOTA).
arXiv Detail & Related papers (2022-06-29T02:16:56Z)
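For context, the classic temperature-scaled distillation loss that such frameworks build on is shown below; the paper's unified framework goes beyond this baseline.

```python
# Classic temperature-scaled distillation loss (a baseline, not the paper's
# full framework): the student matches the teacher's softened distribution.
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=2.0):
    log_p = F.log_softmax(student_logits / T, dim=-1)   # student (log-probs)
    q = F.softmax(teacher_logits / T, dim=-1)           # teacher (probs)
    return F.kl_div(log_p, q, reduction="batchmean") * T * T
```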
- Full-Wave Methodology to Compute the Spontaneous Emission Rate of a Transmon Qubit [0.0]
The spontaneous emission rate (SER) is an important figure of merit for any quantum bit (qubit).
We show how this can be done with a recently developed field-based description of transmon qubits coupled to an electromagnetic environment.
We validate our model by computing the SER for devices similar to those found in the literature that have been well-characterized experimentally.
arXiv Detail & Related papers (2022-01-11T23:47:20Z)
- Efficient pre-training objectives for Transformers [84.64393460397471]
We study several efficient pre-training objectives for Transformers-based models.
We prove that eliminating the MASK token and computing the loss over the whole output are essential choices for improving performance.
arXiv Detail & Related papers (2021-04-20T00:09:37Z)
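A toy sketch of the second point: the language-modeling loss is computed at every output position rather than only at [MASK] slots. Shapes and the vocabulary size are arbitrary stand-ins.

```python
# Toy stand-in: compute the LM loss over every output position, not only at
# masked slots. Shapes and vocabulary size are arbitrary.
import torch
import torch.nn.functional as F

vocab = 30522
logits = torch.randn(8, 128, vocab)              # (batch, seq, vocab) model output
targets = torch.randint(0, vocab, (8, 128))      # original, uncorrupted tokens

loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
```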
- DA-DETR: Domain Adaptive Detection Transformer with Information Fusion [53.25930448542148]
DA-DETR is a domain adaptive object detection transformer that introduces information fusion for effective transfer from a labeled source domain to an unlabeled target domain.
We introduce a novel CNN-Transformer Blender (CTBlender) that fuses CNN features and Transformer features for effective feature alignment and knowledge transfer across domains.
CTBlender employs the Transformer features to modulate the CNN features across multiple scales, where high-level semantic information and low-level spatial information are fused for accurate object identification and localization.
arXiv Detail & Related papers (2021-03-31T13:55:56Z)
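A rough illustration of the CTBlender idea under assumed shapes and operations: pooled Transformer semantics gate the CNN feature map channel-wise before fusion. This is a sketch of the described modulation, not the paper's actual module.

```python
# Assumed shapes and operations: pooled Transformer semantics produce channel
# weights that modulate the CNN feature map before fusion.
import torch

cnn_feat = torch.randn(2, 256, 32, 32)   # low-level spatial features (B, C, H, W)
trf_feat = torch.randn(2, 1024, 256)     # high-level semantic tokens (B, HW, C)

gate = torch.sigmoid(trf_feat.mean(dim=1))     # (B, C) channel-wise weights
fused = cnn_feat * gate[:, :, None, None]      # semantics modulate the spatial map
```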
This list is automatically generated from the titles and abstracts of the papers on this site.