Knowledge transfer between bridges for drive-by monitoring using
adversarial and multi-task learning
- URL: http://arxiv.org/abs/2006.03641v1
- Date: Fri, 5 Jun 2020 19:18:45 GMT
- Title: Knowledge transfer between bridges for drive-by monitoring using
adversarial and multi-task learning
- Authors: Jingxiao Liu, Mario Bergés, Jacobo Bielak, Hae Young Noh
- Abstract summary: Monitoring bridge health using vibrations of drive-by vehicles has various benefits, such as low cost and no need for direct installation or on-site maintenance of equipment on the bridge.
Many such approaches require labeled data from every bridge, which is expensive and time-consuming, if not impossible, to obtain.
We introduce a transfer learning framework using domain-adversarial training and multi-task learning to detect, localize and quantify damage.
- Score: 6.462702225377603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monitoring bridge health using the vibrations of drive-by vehicles has
various benefits, such as low cost and no need for direct installation or
on-site maintenance of equipment on the bridge. However, many such approaches
require labeled data from every bridge, which is expensive and time-consuming,
if not impossible, to obtain. This is further exacerbated by having multiple
diagnostic tasks, such as damage quantification and localization. One way to
address this issue is to directly apply the supervised model trained for one
bridge to other bridges, although this may significantly reduce the accuracy
because of the distribution mismatch between different bridges' data. To alleviate
these problems, we introduce a transfer learning framework using
domain-adversarial training and multi-task learning to detect, localize and
quantify damage. Specifically, we train a deep network in an adversarial way to
learn features that are 1) sensitive to damage and 2) invariant to different
bridges. In addition, to reduce error propagation from one task to the
next, our framework learns shared features for all the tasks using multi-task
learning. We evaluate our framework using lab-scale experiments with two
different bridges. On average, our framework achieves 94%, 97% and 84% accuracy
for damage detection, localization, and quantification (within one damage
severity level), respectively.
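The two ingredients described in the abstract, adversarial learning of bridge-invariant features and shared features for multiple diagnostic tasks, can be sketched roughly as follows. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the layer sizes, the number of damage locations and severity levels, and the gradient-reversal weight lambda_ are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a domain-adversarial, multi-task
# network for drive-by damage diagnosis. All architecture choices below
# (layer sizes, class counts, lambda_) are illustrative assumptions.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; reverses and scales gradients in the
    backward pass, pushing the shared features toward bridge invariance."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None


class DriveByDANN(nn.Module):
    def __init__(self, in_dim=1024, feat_dim=128,
                 n_locations=4, n_severities=3, n_bridges=2):
        super().__init__()
        # Shared feature extractor used by every task head (multi-task learning).
        self.features = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )
        # Diagnostic task heads: detection, localization, quantification.
        self.detect = nn.Linear(feat_dim, 2)
        self.localize = nn.Linear(feat_dim, n_locations)
        self.quantify = nn.Linear(feat_dim, n_severities)
        # Domain (bridge) classifier fed through the gradient-reversal layer.
        self.domain = nn.Linear(feat_dim, n_bridges)

    def forward(self, x, lambda_=1.0):
        f = self.features(x)
        return (self.detect(f), self.localize(f), self.quantify(f),
                self.domain(GradReverse.apply(f, lambda_)))
```

Training such a model would minimize the three task losses on the labeled source bridge while the domain head, receiving reversed gradients, discourages features that distinguish one bridge from another; this is the standard gradient-reversal recipe for domain-adversarial training rather than the paper's exact architecture.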
Related papers
- SHM-Traffic: DRL and Transfer learning based UAV Control for Structural
Health Monitoring of Bridges with Traffic [0.0]
This work focuses on using advanced techniques for structural health monitoring (SHM) for bridges with traffic.
We propose an approach that uses deep reinforcement learning (DRL)-based control of an Unmanned Aerial Vehicle (UAV).
Our approach conducts a concrete bridge deck survey while traffic is ongoing and detects cracks.
We observe that the Canny edge detector offers up to 40% lower task completion time, while the CNN achieves up to 12% better damage detection and 1.8 times higher rewards.
arXiv Detail & Related papers (2024-02-22T18:19:45Z) - Active Foundational Models for Fault Diagnosis of Electrical Motors [0.5999777817331317]
Fault detection and diagnosis of electrical motors is of utmost importance in ensuring the safe and reliable operation of industrial systems.
The existing data-driven deep learning approaches for machine fault diagnosis rely extensively on huge amounts of labeled samples.
We propose a foundational model-based Active Learning framework that requires fewer labeled samples.
arXiv Detail & Related papers (2023-11-27T03:25:12Z) - BridgeData V2: A Dataset for Robot Learning at Scale [73.86688388408021]
BridgeData V2 is a large and diverse dataset of robotic manipulation behaviors.
It contains 60,096 trajectories collected across 24 environments on a publicly available low-cost robot.
arXiv Detail & Related papers (2023-08-24T17:41:20Z) - DOAD: Decoupled One Stage Action Detection Network [77.14883592642782]
Localizing people and recognizing their actions from videos is a challenging task towards high-level video understanding.
Existing methods are mostly two-stage based, with one stage for person bounding box generation and the other stage for action recognition.
We present a decoupled one-stage network, dubbed DOAD, to improve the efficiency of spatio-temporal action detection.
arXiv Detail & Related papers (2023-04-01T08:06:43Z) - Visual Exemplar Driven Task-Prompting for Unified Perception in
Autonomous Driving [100.3848723827869]
We present an effective multi-task framework, VE-Prompt, which introduces visual exemplars via task-specific prompting.
Specifically, we generate visual exemplars based on bounding boxes and color-based markers, which provide accurate visual appearances of target categories.
We bridge transformer-based encoders and convolutional layers for efficient and accurate unified perception in autonomous driving.
arXiv Detail & Related papers (2023-03-03T08:54:06Z) - Generalized Few-Shot 3D Object Detection of LiDAR Point Cloud for
Autonomous Driving [91.39625612027386]
We propose a novel task, called generalized few-shot 3D object detection, where we have a large amount of training data for common (base) objects, but only a few data for rare (novel) classes.
Specifically, we analyze in-depth differences between images and point clouds, and then present a practical principle for the few-shot setting in the 3D LiDAR dataset.
To solve this task, we propose an incremental fine-tuning method to extend existing 3D detection models to recognize both common and rare objects.
arXiv Detail & Related papers (2023-02-08T07:11:36Z) - A Multitask Deep Learning Model for Parsing Bridge Elements and
Segmenting Defect in Bridge Inspection Images [1.476043573732074]
The vast network of bridges in the United States creates a high demand for maintenance and rehabilitation.
The massive cost of visual inspections to assess bridge conditions is a considerable burden.
This paper develops a multitask deep neural network that exploits the interdependence between bridge elements and defects.
arXiv Detail & Related papers (2022-09-06T02:48:15Z) - HierMUD: Hierarchical Multi-task Unsupervised Domain Adaptation between
Bridges for Drive-by Damage Diagnosis [9.261126434781744]
We introduce a new framework that transfers the model learned from one bridge to diagnose damage in another bridge without any labels from the target bridge.
Our framework trains a hierarchical neural network model in an adversarial way to extract task-shared and task-specific features.
We evaluate our framework on experimental data collected from 2 bridges and 3 vehicles.
arXiv Detail & Related papers (2021-07-23T19:39:32Z) - Decoupled and Memory-Reinforced Networks: Towards Effective Feature
Learning for One-Step Person Search [65.51181219410763]
One-step methods have been developed to handle pedestrian detection and identification sub-tasks using a single network.
There are two major challenges in the current one-step approaches.
We propose a decoupled and memory-reinforced network (DMRNet) to overcome these problems.
arXiv Detail & Related papers (2021-02-22T06:19:45Z) - Anomaly Detection in Video via Self-Supervised and Multi-Task Learning [113.81927544121625]
Anomaly detection in video is a challenging computer vision problem.
In this paper, we approach anomalous event detection in video through self-supervised and multi-task learning at the object level.
arXiv Detail & Related papers (2020-11-15T10:21:28Z) - Damage-sensitive and domain-invariant feature extraction for
vehicle-vibration-based bridge health monitoring [25.17078512102496]
We introduce a physics-guided signal processing approach to extract a damage-sensitive and domain-invariant (DS & DI) feature from acceleration response data of a vehicle.
Our feature provides the best damage and localization results across different bridges in five of six experiments.
arXiv Detail & Related papers (2020-02-06T05:45:39Z)
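As a loose, hypothetical illustration of extracting a frequency-domain damage indicator from vehicle acceleration data (not the physics-guided DS & DI feature defined in the last entry above), one might compute the fraction of vibration energy in an assumed bridge-related frequency band; the sampling rate, band limits, and normalization below are all assumptions.

```python
# Hypothetical sketch: a simple band-energy indicator from vehicle acceleration.
# The sampling rate, frequency band, and normalization are assumptions and do
# not reproduce the paper's physics-guided DS & DI feature.
import numpy as np
from scipy.signal import welch


def band_energy_feature(accel, fs=500.0, band=(5.0, 30.0)):
    """Fraction of vibration energy inside an assumed bridge-related band."""
    freqs, psd = welch(accel, fs=fs, nperseg=1024)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].sum() / psd.sum()


# Synthetic example standing in for a measured acceleration record.
rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 1.0 / 500.0)
accel = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)
print(band_energy_feature(accel))
```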
This list is automatically generated from the titles and abstracts of the papers on this site.