Exploring Active Learning for Semiconductor Defect Segmentation
- URL: http://arxiv.org/abs/2507.17359v1
- Date: Wed, 23 Jul 2025 09:44:11 GMT
- Title: Exploring Active Learning for Semiconductor Defect Segmentation
- Authors: Lile Cai, Ramanpreet Singh Pahwa, Xun Xu, Jie Wang, Richard Chang, Lining Zhang, Chuan-Sheng Foo
- Abstract summary: In this work, we explore active learning (AL) as a potential solution to alleviate the annotation burden. We identify two unique challenges when applying AL on semiconductor XRM scans: large domain shift and severe class-imbalance. To address these challenges, we propose to perform contrastive pretraining on the unlabelled data. We evaluate our method on a semiconductor dataset that is compiled from XRM scans of high bandwidth memory structures composed of logic and memory dies.
- Score: 20.72106200701627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of X-ray microscopy (XRM) technology has enabled non-destructive inspection of semiconductor structures for defect identification. Deep learning is widely used as the state-of-the-art approach for visual analysis tasks. However, deep learning based models require a large amount of annotated data to train, which can be time-consuming and expensive to obtain, especially for dense prediction tasks like semantic segmentation. In this work, we explore active learning (AL) as a potential solution to alleviate the annotation burden. We identify two unique challenges when applying AL to semiconductor XRM scans: large domain shift and severe class imbalance. To address these challenges, we propose to perform contrastive pretraining on the unlabelled data to obtain the initialization weights for each AL cycle, and a rareness-aware acquisition function that favors the selection of samples containing rare classes. We evaluate our method on a semiconductor dataset compiled from XRM scans of high bandwidth memory structures composed of logic and memory dies, and demonstrate that our method achieves state-of-the-art performance.
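As an illustration of the acquisition step described in the abstract, the sketch below scores unlabelled scans by combining predictive entropy with a bonus for predicted rare-class pixels. It is a minimal, hypothetical example: the function name, the weighting `alpha`, and the exact rareness definition are assumptions, not the paper's formulation.

```python
import numpy as np

def rareness_aware_scores(probs, labelled_class_freq, alpha=1.0):
    """Score unlabelled samples for annotation (illustrative sketch only).

    probs: (N, C, H, W) softmax outputs for N unlabelled scans.
    labelled_class_freq: (C,) pixel frequency of each class in the labelled pool.
    alpha: assumed weight of the rareness bonus relative to predictive entropy.
    """
    eps = 1e-8
    # Mean per-pixel predictive entropy: a standard uncertainty term.
    entropy = -(probs * np.log(probs + eps)).sum(axis=1).mean(axis=(1, 2))

    # Rareness weight per class: classes rare in the labelled pool get large weights.
    rareness = 1.0 / (labelled_class_freq + eps)
    rareness = rareness / rareness.sum()

    # Bonus: fraction of each sample's predicted mask covered by rare classes.
    pred = probs.argmax(axis=1)                  # (N, H, W) hard predictions
    onehot = np.eye(probs.shape[1])[pred]        # (N, H, W, C)
    class_coverage = onehot.mean(axis=(1, 2))    # (N, C) per-class pixel fraction
    rare_bonus = class_coverage @ rareness       # (N,)

    return entropy + alpha * rare_bonus

# One AL cycle would then query the top-B samples, e.g.:
# scores = rareness_aware_scores(probs, labelled_class_freq)
# query_idx = np.argsort(-scores)[:budget]
```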
Related papers
- OCSVM-Guided Representation Learning for Unsupervised Anomaly Detection [1.0190194769786831]
Unsupervised anomaly detection (UAD) aims to detect anomalies without labeled data.
We propose a novel method that tightly couples representation learning with an analytically solvable one-class SVM.
The model is evaluated on two tasks: a new benchmark based on MNIST-C, and a challenging brain MRI subtle lesion detection task.
arXiv Detail & Related papers (2025-07-25T13:00:40Z)
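As a rough illustration of coupling a learned representation with a one-class SVM, the sketch below fits a standard scikit-learn OCSVM on encoder features and scores test images by its decision function. This is an assumed simplification, not the paper's analytically solvable formulation; `encode` is a hypothetical feature extractor.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def ocsvm_anomaly_scores(encode, train_imgs, test_imgs, nu=0.1):
    """Fit a one-class SVM on features of normal training images and score
    test images; larger scores indicate more anomalous samples.
    `encode` is any feature extractor mapping an image to a 1D feature vector."""
    train_feats = np.stack([encode(x) for x in train_imgs])
    test_feats = np.stack([encode(x) for x in test_imgs])

    ocsvm = OneClassSVM(kernel="rbf", nu=nu, gamma="scale")
    ocsvm.fit(train_feats)

    # decision_function is positive for inliers; negate so larger = more anomalous.
    return -ocsvm.decision_function(test_feats)
```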
- Iterative Misclassification Error Training (IMET): An Optimized Neural Network Training Technique for Image Classification [0.5115559623386964]
We introduce Iterative Misclassification Error Training (IMET), a novel framework inspired by curriculum learning and coreset selection.
IMET aims to identify misclassified samples in order to streamline the training process, while prioritizing the model's attention to edge-case scenarios and rare outcomes.
The paper evaluates IMET's performance on benchmark medical image classification datasets against state-of-the-art ResNet architectures.
arXiv Detail & Related papers (2025-07-01T04:14:16Z)
- Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective selection strategy is to base it on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), defined as the smallest probability of disagreement of the predicted label.
arXiv Detail & Related papers (2024-01-18T08:12:23Z)
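For context on uncertainty-based selection, the following sketch ranks unlabelled samples by a simple margin score (the gap between the top two predicted probabilities), i.e. samples whose predicted label would flip most easily. This is a generic baseline sketch, not the least disagree metric proposed in the paper.

```python
import numpy as np

def margin_query(probs, budget):
    """probs: (N, C) predicted class probabilities for N unlabelled samples.
    Returns the indices of the `budget` samples with the smallest margin
    between the top two classes (most easily flipped predictions)."""
    sorted_probs = np.sort(probs, axis=1)
    margin = sorted_probs[:, -1] - sorted_probs[:, -2]
    return np.argsort(margin)[:budget]
```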
- LESEN: Label-Efficient deep learning for Multi-parametric MRI-based Visual Pathway Segmentation [5.726588626363204]
We propose a label-efficient deep learning method with self-ensembling (LESEN).
LESEN incorporates supervised and unsupervised losses, enabling the student and teacher models to mutually learn from each other.
Our experiments on the human connectome project (HCP) dataset demonstrate the superior performance of our method.
arXiv Detail & Related papers (2024-01-03T10:22:13Z)
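The student-teacher self-ensembling mentioned above is commonly realized as a mean-teacher scheme in which the teacher weights track an exponential moving average (EMA) of the student weights. The PyTorch-style sketch below shows that general pattern under assumed defaults; it is not the exact LESEN training loop.

```python
import torch

@torch.no_grad()
def update_teacher(student, teacher, momentum=0.999):
    """EMA update: teacher = m * teacher + (1 - m) * student."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

def consistency_loss(student_logits, teacher_logits):
    """Unsupervised loss: push student predictions towards teacher predictions."""
    return torch.nn.functional.mse_loss(
        student_logits.softmax(dim=1), teacher_logits.softmax(dim=1)
    )

# Typical usage: initialise the teacher as a deep copy of the student, train the
# student on (supervised_loss + consistency_loss), and call update_teacher(...)
# after each optimizer step.
```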
- Unsupervised Continual Anomaly Detection with Contrastively-learned Prompt [80.43623986759691]
We introduce a novel Unsupervised Continual Anomaly Detection framework called UCAD.
The framework equips the UAD with continual learning capability through contrastively-learned prompts.
We conduct comprehensive experiments and set the benchmark on unsupervised continual anomaly detection and segmentation.
arXiv Detail & Related papers (2024-01-02T03:37:11Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
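As background for masked image modelling, the sketch below generates a fixed-ratio random patch mask, which a MIM model would then be trained to reconstruct. The paper's reinforcement-learning search over masking ratio and strategy is not reproduced here; the ratio of 0.6 is an arbitrary assumption.

```python
import torch

def random_patch_mask(batch, num_patches, mask_ratio=0.6):
    """Return a boolean mask of shape (batch, num_patches) where True marks
    the patches to hide; a MIM model is trained to reconstruct hidden patches."""
    num_mask = int(num_patches * mask_ratio)
    scores = torch.rand(batch, num_patches)
    idx = scores.argsort(dim=1)[:, :num_mask]        # lowest-scoring patches are masked
    mask = torch.zeros(batch, num_patches, dtype=torch.bool)
    rows = torch.arange(batch).unsqueeze(1)
    mask[rows, idx] = True
    return mask
```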
- Source-Free Collaborative Domain Adaptation via Multi-Perspective Feature Enrichment for Functional MRI Analysis [55.03872260158717]
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains.
But acquiring source data is challenging due to privacy concerns and/or data storage burdens in multi-site studies.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
arXiv Detail & Related papers (2023-08-24T01:30:18Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
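A standard way to turn such an autoencoder into an anomaly detector is to score inputs by reconstruction error. The minimal sketch below shows per-pixel and per-image scoring; the paper's self-supervised training regime is more involved than this.

```python
import torch

@torch.no_grad()
def reconstruction_anomaly_map(autoencoder, images):
    """Per-pixel anomaly map and per-image score from reconstruction error.
    `autoencoder` is any model trained to reconstruct normal images."""
    recon = autoencoder(images)
    error_map = (images - recon).pow(2).mean(dim=1)   # (B, H, W) per-pixel error
    image_score = error_map.flatten(1).mean(dim=1)    # (B,) per-image anomaly score
    return error_map, image_score
```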
- Longitudinal detection of new MS lesions using Deep Learning [0.0]
We describe a deep-learning-based pipeline addressing the task of detecting and segmenting new MS lesions.
First, we propose to use transfer-learning from a model trained on a segmentation task using single time-points.
Second, we propose a data synthesis strategy to generate realistic longitudinal time-points with new lesions.
arXiv Detail & Related papers (2022-06-16T16:09:04Z)
- Self-Taught Semi-Supervised Anomaly Detection on Upper Limb X-rays [11.859913430860335]
Supervised deep networks take for granted a large number of annotations by radiologists.
Our approach's rationale is to use pretext tasks to leverage unlabeled data.
We show that our method outperforms baselines across unsupervised and self-supervised anomaly detection settings.
arXiv Detail & Related papers (2021-02-19T12:32:58Z)
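A concrete example of a pretext task on unlabelled radiographs is rotation prediction: the network classifies which of four rotations was applied, requiring no expert annotations. The sketch below is a generic illustration and is only assumed to resemble the pretext tasks used in the paper.

```python
import torch

def rotation_pretext_batch(images):
    """Build a self-supervised batch: rotate each image by 0/90/180/270 degrees
    and use the rotation index as a free (pseudo) label."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))  # rotate H/W axes
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

# A classifier head trained with cross-entropy on these pseudo labels provides
# pretraining features that can later be reused for anomaly detection.
```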
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
- A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
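The aggregation idea above can be sketched as follows: compute an entropy-regularized optimal transport plan between the input set and a (trainable) reference via Sinkhorn iterations, then pool the inputs according to that plan. This is an illustrative simplification with assumed hyperparameters, not the paper's full embedding.

```python
import torch

def sinkhorn_plan(x, ref, eps=0.1, iters=50):
    """Entropic OT plan between an input set x (n, d) and a reference ref (m, d)."""
    cost = torch.cdist(x, ref) ** 2                     # (n, m) squared distances
    K = torch.exp(-cost / eps)
    a = torch.full((x.size(0),), 1.0 / x.size(0))       # uniform input weights
    b = torch.full((ref.size(0),), 1.0 / ref.size(0))   # uniform reference weights
    u = torch.ones_like(a)
    for _ in range(iters):                              # Sinkhorn scaling updates
        u = a / (K @ (b / (K.t() @ u)))
    v = b / (K.t() @ u)
    return u.unsqueeze(1) * K * v.unsqueeze(0)          # (n, m) transport plan

def ot_aggregate(x, ref):
    """Aggregate a variable-size set x into a fixed-size (m, d) representation
    by pooling its elements according to the transport plan onto `ref`."""
    plan = sinkhorn_plan(x, ref)
    return (plan.t() @ x) / plan.sum(dim=0, keepdim=True).t()
```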
This list is automatically generated from the titles and abstracts of the papers in this site.