A Dataset and Baseline for Deep Learning-Based Visual Quality Inspection in Remanufacturing
- URL: http://arxiv.org/abs/2511.15440v1
- Date: Wed, 19 Nov 2025 13:56:33 GMT
- Title: A Dataset and Baseline for Deep Learning-Based Visual Quality Inspection in Remanufacturing
- Authors: Johannes C. Bauer, Paul Geng, Stephan Trattnig, Petr Dokládal, Rüdiger Daub
- Abstract summary: We propose a novel image dataset depicting typical gearbox components in good and defective condition from two automotive transmissions. We evaluate different models using the dataset and propose a contrastive regularization loss to enhance model robustness.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Remanufacturing describes a process where worn products are restored to like-new condition, offering substantial ecological and economic potential. A key step is the quality inspection of disassembled components, which is mostly done manually due to the high variety of parts and defect patterns. Deep neural networks show great potential to automate such visual inspection tasks but struggle to generalize to new product variants, components, or defect patterns. To tackle this challenge, we propose a novel image dataset depicting typical gearbox components in good and defective condition from two automotive transmissions. Depending on the train-test split of the data, different distribution shifts are generated to benchmark the generalization ability of a classification model. We evaluate different models using the dataset and propose a contrastive regularization loss to enhance model robustness. The results obtained demonstrate the ability of the loss to improve generalization to unseen types of components.
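The abstract does not specify the exact form of the proposed contrastive regularization loss. As an illustration only, the following is a minimal NumPy sketch of a supervised contrastive term of the kind such an approach typically builds on, pulling same-class embeddings together and pushing different-class embeddings apart; the function name, temperature value, and formulation are assumptions, not the paper's method:

```python
import numpy as np

def contrastive_regularization(embeddings, labels, temperature=0.1):
    """Supervised contrastive regularization term (illustrative sketch).

    Encourages embeddings of same-class samples (e.g. images of the
    same component condition) to be similar, which can improve
    robustness to distribution shift.
    """
    # L2-normalize so pairwise dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature  # scaled pairwise similarity matrix
    n = len(labels)
    loss = 0.0
    for i in range(n):
        # positives: other samples sharing the anchor's label
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue
        others = [j for j in range(n) if j != i]
        # log of the softmax denominator over all non-anchor samples
        log_denom = np.log(np.exp(sim[i, others]).sum())
        loss -= sum(sim[i, j] - log_denom for j in pos) / len(pos)
    return loss / n
```

In training, a term like this would be added to the standard classification loss with a weighting factor; embeddings with well-separated classes yield a smaller penalty than embeddings whose classes overlap.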
Related papers
- Scaling Transformer-Based Novel View Synthesis Models with Token Disentanglement and Synthetic Data [53.040873127309766]
We propose a token disentanglement process within the transformer architecture, enhancing feature separation and ensuring more effective learning. Our method outperforms existing models on both in-dataset and cross-dataset evaluations.
arXiv Detail & Related papers (2025-09-08T17:58:06Z)
- Evaluating Vision Transformer Models for Visual Quality Control in Industrial Manufacturing [0.0]
One of the most promising use-cases for machine learning in industrial manufacturing is the early detection of defective products.
We evaluate current vision transformer models together with anomaly detection methods.
We give guidelines for choosing a suitable model architecture for a quality control system in practice.
arXiv Detail & Related papers (2024-11-22T14:12:35Z)
- SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel fine-tuning smooth regularization that rectifies structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z)
- DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z)
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
- Effective Transfer of Pretrained Large Visual Model for Fabric Defect Segmentation via Specific Knowledge Injection [15.171188183349395]
This study introduces an innovative method to infuse specialized knowledge of fabric defects into the Segment Anything Model (SAM)
By introducing and training a unique set of fabric defect-related parameters, this approach seamlessly integrates domain-specific knowledge into SAM.
The experimental results reveal a significant improvement in the model's segmentation performance, attributable to this novel amalgamation of generic and fabric-specific knowledge.
arXiv Detail & Related papers (2023-06-28T13:08:08Z)
- A Novel Strategy for Improving Robustness in Computer Vision Manufacturing Defect Detection [1.3198689566654107]
Visual quality inspection in high performance manufacturing can benefit from automation, due to cost savings and improved rigor.
Deep learning techniques are the current state of the art for generic computer vision tasks like classification and object detection.
Manufacturing data can pose a challenge for deep learning because data is highly repetitive and there are few images of defects or deviations to learn from.
arXiv Detail & Related papers (2023-05-16T12:51:51Z)
- Few-shot incremental learning in the context of solar cell quality inspection [0.0]
In this work, we have explored the technique of weight imprinting in the context of solar cell quality inspection.
The results have shown that this technique allows the network to extend its knowledge with regard to defect classes with few samples.
arXiv Detail & Related papers (2022-07-01T23:52:07Z)
- Generative Modeling Helps Weak Supervision (and Vice Versa) [87.62271390571837]
We propose a model fusing weak supervision and generative adversarial networks.
It captures discrete variables in the data alongside the weak supervision derived label estimate.
It is the first approach to enable data augmentation through weakly supervised synthetic images and pseudolabels.
arXiv Detail & Related papers (2022-03-22T20:24:21Z)
- Generative Partial Visual-Tactile Fused Object Clustering [81.17645983141773]
We propose a Generative Partial Visual-Tactile Fused (i.e., GPVTF) framework for object clustering.
A conditional cross-modal clustering generative adversarial network is then developed to synthesize one modality conditioning on the other modality.
To this end, two pseudo-label based KL-divergence losses are employed to update the corresponding modality-specific encoders.
arXiv Detail & Related papers (2020-12-28T02:37:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences arising from their use.