Damage detection using in-domain and cross-domain transfer learning
- URL: http://arxiv.org/abs/2102.03858v1
- Date: Sun, 7 Feb 2021 17:36:27 GMT
- Title: Damage detection using in-domain and cross-domain transfer learning
- Authors: Zaharah A. Bukhsh, Nils Jansen, Aaqib Saeed
- Abstract summary: We propose a combination of in-domain and cross-domain transfer learning strategies for damage detection in bridges.
We show that the combination of cross-domain and in-domain transfer consistently shows superior performance even with tiny datasets.
- Score: 4.111375269316102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the capabilities of transfer learning in the area of
structural health monitoring. In particular, we are interested in damage
detection for concrete structures. Typical image datasets for such problems are
relatively small, calling for the transfer of learned representation from a
related large-scale dataset. Past efforts of damage detection using images have
mainly considered cross-domain transfer learning approaches using pre-trained
ImageNet models that are subsequently fine-tuned for the target task. However,
there are rising concerns about the generalizability of ImageNet
representations for specific target domains, such as for visual inspection and
medical imaging. We, therefore, propose a combination of in-domain and
cross-domain transfer learning strategies for damage detection in bridges. We
perform comprehensive comparisons to study the impact of cross-domain and
in-domain transfer, with various initialization strategies, using six publicly
available visual inspection datasets. The pre-trained models are also evaluated
for their ability to cope with the extremely low-data regime. We show that the
combination of cross-domain and in-domain transfer consistently shows superior
performance even with tiny datasets. We also provide visual explanations of the
predictive models to enable algorithmic transparency and give experts insight
into the intrinsic decision logic of typically black-box deep models.
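The staged strategy the abstract describes — cross-domain initialization (e.g. from ImageNet), in-domain pre-training on a related visual-inspection dataset, then fine-tuning on the small target set — can be sketched roughly as below. This is a minimal PyTorch illustration, not the authors' code: the tiny backbone, layer sizes, and freezing policy are all assumptions.

```python
import torch
import torch.nn as nn

def make_backbone():
    # Stand-in for a convolutional feature extractor (an ImageNet
    # pre-trained network in the paper's cross-domain setting).
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

def transfer(backbone, num_classes, freeze_backbone=True):
    # Attach a fresh classification head; optionally freeze the
    # transferred representation so only the head is trained.
    for p in backbone.parameters():
        p.requires_grad = not freeze_backbone
    return nn.Sequential(backbone, nn.Linear(8, num_classes))

# Stage 1 (cross-domain): backbone assumed pre-trained on a large
# generic dataset such as ImageNet.
backbone = make_backbone()

# Stage 2 (in-domain): fine-tune on a related visual-inspection
# dataset with its own label space; the backbone stays trainable.
in_domain_model = transfer(backbone, num_classes=5, freeze_backbone=False)

# Stage 3 (target task): reuse the in-domain backbone for the small
# bridge-damage dataset, training only the new binary head.
target_model = transfer(backbone, num_classes=2, freeze_backbone=True)

logits = target_model(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 2])
```

In the low-data regime studied in the paper, freezing the transferred backbone (stage 3) is the usual safeguard against overfitting the tiny target set.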
Related papers
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- Is in-domain data beneficial in transfer learning for landmarks detection in x-ray images? [1.5348047288817481]
We study whether the usage of small-scale in-domain x-ray image datasets may provide any improvement for landmark detection over models pre-trained on large natural image datasets only.
Our results show that using in-domain source datasets brings marginal or no benefit with respect to an ImageNet out-of-domain pre-training.
Our findings can provide an indication for the development of robust landmark detection systems in medical images when no large annotated dataset is available.
arXiv Detail & Related papers (2024-03-03T10:35:00Z)
- Self-Supervised In-Domain Representation Learning for Remote Sensing Image Scene Classification [1.0152838128195465]
Transferring the ImageNet pre-trained weights to the various remote sensing tasks has produced acceptable results.
Recent research has demonstrated that self-supervised learning methods capture visual features that are more discriminative and transferable.
We are motivated by these facts to pre-train the in-domain representations of remote sensing imagery using contrastive self-supervised learning.
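The contrastive self-supervised pre-training this paper applies to remote sensing imagery is typically implemented with an NT-Xent-style objective, where two augmented views of the same image are positives and all other images in the batch are negatives. A minimal PyTorch sketch, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    # NT-Xent (normalized temperature-scaled cross-entropy), the
    # standard SimCLR-style contrastive loss: embeddings of two views
    # of the same image are pulled together, and every other image in
    # the batch serves as an in-batch negative.
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2]), dim=1)  # (2n, d) unit vectors
    sim = z @ z.t() / tau                        # scaled cosine similarity
    sim.fill_diagonal_(float('-inf'))            # exclude self-pairs
    # The positive of sample i is its other view at index (i + n) mod 2n.
    targets = torch.arange(2 * n).roll(n)
    return F.cross_entropy(sim, targets)

# Toy embeddings for two augmented views of a 4-image batch.
loss = nt_xent(torch.randn(4, 16), torch.randn(4, 16))
print(float(loss) > 0)  # True
```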
arXiv Detail & Related papers (2023-02-03T15:03:07Z)
- Evaluating the Label Efficiency of Contrastive Self-Supervised Learning for Multi-Resolution Satellite Imagery [0.0]
Self-supervised learning has been applied in the remote sensing domain to exploit readily-available unlabeled data.
In this paper, we study self-supervised visual representation learning through the lens of label efficiency.
arXiv Detail & Related papers (2022-10-13T06:54:13Z)
- Mere Contrastive Learning for Cross-Domain Sentiment Analysis [23.350121129347556]
Cross-domain sentiment analysis aims to predict the sentiment of texts in the target domain using the model trained on the source domain.
Previous work mostly relies on cross-entropy-based methods for this task, which suffer from instability and poor generalization.
We propose a modified contrastive objective with in-batch negative samples so that the sentence representations from the same class will be pushed close.
arXiv Detail & Related papers (2022-08-18T07:25:55Z)
- Exploring Data Aggregation and Transformations to Generalize across Visual Domains [0.0]
This thesis contributes to research on Domain Generalization (DG), Domain Adaptation (DA) and their variations.
We propose new frameworks for Domain Generalization and Domain Adaptation which make use of feature aggregation strategies and visual transformations.
We show how our proposed solutions outperform competitive state-of-the-art approaches in established DG and DA benchmarks.
arXiv Detail & Related papers (2021-08-20T14:58:14Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Instance Localization for Self-supervised Detection Pretraining [68.24102560821623]
We propose a new self-supervised pretext task, called instance localization.
We show that integration of bounding boxes into pretraining promotes better task alignment and architecture alignment for transfer learning.
Experimental results demonstrate that our approach yields state-of-the-art transfer learning results for object detection.
arXiv Detail & Related papers (2021-02-16T17:58:57Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
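Adversarial training, the ingredient this paper credits for better transfer, augments each update with inputs perturbed to maximize the loss. A minimal single-step (FGSM-style) training step in PyTorch, shown here only as an illustrative sketch with a toy model:

```python
import torch
import torch.nn as nn

def fgsm_training_step(model, loss_fn, opt, x, y, eps=0.03):
    # One adversarial-training step: perturb the inputs along the sign
    # of the input gradient (FGSM), then update the model on the
    # perturbed batch instead of the clean one.
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    with torch.no_grad():
        x_adv = x + eps * x_adv.grad.sign()
    opt.zero_grad()                      # discard gradients from the probe pass
    adv_loss = loss_fn(model(x_adv), y)
    adv_loss.backward()
    opt.step()
    return adv_loss.item()

# Toy classifier and batch, purely for demonstration.
model = nn.Sequential(nn.Flatten(), nn.Linear(12, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 3, 2, 2), torch.randint(0, 2, (8,))
print(fgsm_training_step(model, nn.CrossEntropyLoss(), opt, x, y) > 0)
```

The intuition for transfer is that features robust to such worst-case perturbations tend to be less brittle when reused in a new domain.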
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
- CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.