Deep Convolutional Autoencoders as Generic Feature Extractors in
Seismological Applications
- URL: http://arxiv.org/abs/2110.11802v1
- Date: Fri, 22 Oct 2021 14:22:07 GMT
- Title: Deep Convolutional Autoencoders as Generic Feature Extractors in
Seismological Applications
- Authors: Qingkai Kong, Andrea Chiang, Ana C. Aguiar, M. Giselle
Fernández-Godino, Stephen C. Myers, Donald D. Lucas
- Abstract summary: We develop tests to evaluate the idea of using autoencoders as feature extractors for different seismological applications.
These tests involve training an autoencoder, either undercomplete or overcomplete, on a large number of earthquake waveforms.
We conclude that the autoencoder feature extractor approach may only perform well under certain conditions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The idea of using a deep autoencoder to encode seismic waveform features and
then use them in different seismological applications is appealing. In this
paper, we designed tests to evaluate this idea of using autoencoders as feature
extractors for different seismological applications, such as event
discrimination (i.e., earthquake vs. noise waveforms, and earthquake vs.
explosion waveforms) and phase picking. These tests involve training an
autoencoder, either undercomplete or overcomplete, on a large number of
earthquake
waveforms, and then using the trained encoder as a feature extractor with
subsequent application layers (either a fully connected layer, or a
convolutional layer plus a fully connected layer) to make the decision. By
comparing the performance of these newly designed models against the baseline
models trained from scratch, we conclude that the autoencoder feature extractor
approach may only perform well under certain conditions such as when the target
problems require features to be similar to the autoencoder encoded features,
when a relatively small amount of training data is available, and when certain
model structures and training strategies are utilized. The model structure that
works best in all these tests is an overcomplete autoencoder with a
convolutional layer and a fully connected layer to make the estimation.
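To make this concrete, the following is a minimal PyTorch sketch of the best-performing structure described above: a convolutional encoder pretrained as part of an autoencoder, frozen, and reused as a feature extractor, with a convolutional layer plus a fully connected layer on top to make the decision. The layer sizes and the 3-component, 6000-sample input shape are illustrative assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class ConvEncoder(nn.Module):
        """1-D convolutional encoder, assumed pretrained as part of an autoencoder."""
        def __init__(self, in_channels=3, latent_channels=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=7, stride=2, padding=3),
                nn.ReLU(),
                nn.Conv1d(32, latent_channels, kernel_size=7, stride=2, padding=3),
                nn.ReLU(),
            )

        def forward(self, x):
            return self.net(x)

    class DiscriminationHead(nn.Module):
        """Conv layer + fully connected layer on top of the frozen encoder."""
        def __init__(self, encoder, latent_channels=64, n_classes=2):
            super().__init__()
            self.encoder = encoder
            for p in self.encoder.parameters():  # freeze the pretrained encoder
                p.requires_grad = False
            self.conv = nn.Conv1d(latent_channels, 32, kernel_size=3, padding=1)
            self.pool = nn.AdaptiveAvgPool1d(1)
            self.fc = nn.Linear(32, n_classes)

        def forward(self, x):
            z = self.encoder(x)              # encoded waveform features
            h = torch.relu(self.conv(z))
            return self.fc(self.pool(h).squeeze(-1))

    # Example: discriminate 3-component waveforms, e.g. 60 s sampled at 100 Hz.
    model = DiscriminationHead(ConvEncoder())
    logits = model(torch.randn(8, 3, 6000))  # batch of 8 -> (8, 2) logits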
Related papers
- PITA: Physics-Informed Trajectory Autoencoder [7.394156899576076]
Generative models can be used to augment real-world datasets with generated data to produce edge case scenarios.
We propose the Physics-Informed Trajectory Autoencoder (PITA) architecture, which incorporates a physical dynamics model into the loss function of the autoencoder.
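A minimal sketch of the general idea, assuming a simple smoothness dynamics model rather than PITA's actual one: the autoencoder loss combines a standard reconstruction term with a penalty on how far reconstructed trajectories deviate from the physical model. The time step dt and weight lam are illustrative.

    import torch

    def physics_informed_loss(x, x_hat, dt=0.1, lam=1.0):
        # x, x_hat: (batch, time, 2) tensors of 2-D trajectory positions
        recon = torch.mean((x - x_hat) ** 2)        # reconstruction term
        vel = (x_hat[:, 1:] - x_hat[:, :-1]) / dt   # finite-difference velocity
        acc = (vel[:, 1:] - vel[:, :-1]) / dt       # finite-difference acceleration
        physics = torch.mean(acc ** 2)              # penalize non-physical accelerations
        return recon + lam * physics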
arXiv Detail & Related papers (2024-03-18T12:37:41Z) - Comparative Study on the Performance of Categorical Variable Encoders in
Classification and Regression Tasks [11.721062526796976]
This study broadly classifies machine learning models into three categories: 1) ATI models that implicitly perform affine transformations on inputs; 2) Tree-based models that are based on decision trees; and 3) the rest, such as kNN.
Theoretically, we prove that the one-hot encoder is the best choice for ATI models in the sense that it can mimic any other encoders by learning suitable weights from the data.
We also explain why the target encoder and its variants are the most suitable encoders for tree-based models.
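A small NumPy check of the claim about ATI models, with made-up numbers: because an affine model multiplies its input by a learned weight matrix, feeding it one-hot vectors turns that matrix into a per-category lookup table, so it can reproduce any other encoder's outputs.

    import numpy as np

    categories = ["P-wave", "S-wave", "noise"]
    target = np.array([[0.3], [0.9], [0.1]])  # some other encoder's output per category

    onehot = np.eye(len(categories))          # one-hot encoding of the 3 categories
    W = target                                # learned weights = the target lookup table
    assert np.allclose(onehot @ W, target)    # the affine model mimics the other encoder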
arXiv Detail & Related papers (2024-01-18T02:21:53Z) - Are We Using Autoencoders in a Wrong Way? [3.110260251019273]
Autoencoders are used for dimensionality reduction, anomaly detection and feature extraction.
We revisit the standard training of the undercomplete autoencoder, modifying the shape of the latent space.
We also explore the behaviour of the latent space when reconstructing a random sample from the whole dataset.
arXiv Detail & Related papers (2023-09-04T11:22:43Z) - Dynamic Perceiver for Efficient Visual Recognition [87.08210214417309]
We propose Dynamic Perceiver (Dyn-Perceiver) to decouple the feature extraction procedure and the early classification task.
A feature branch serves to extract image features, while a classification branch processes a latent code assigned for classification tasks.
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
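A loose PyTorch sketch of this two-branch idea (not the authors' code): a feature branch extracts image features, a latent code is updated from pooled features, and an early-exit classifier reads only the latent code, stopping when confidence passes a threshold. All sizes and the exit rule are assumptions.

    import torch
    import torch.nn as nn

    class TwoBranchNet(nn.Module):
        def __init__(self, n_classes=10, latent_dim=64):
            super().__init__()
            self.latent = nn.Parameter(torch.randn(1, latent_dim))  # learnable latent code
            self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU())
            self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
            self.mix1 = nn.Linear(16, latent_dim)  # feature -> latent interaction
            self.mix2 = nn.Linear(32, latent_dim)
            self.exit1 = nn.Linear(latent_dim, n_classes)  # early exit reads latent only
            self.exit2 = nn.Linear(latent_dim, n_classes)

        def forward(self, x, threshold=0.9):
            z = self.latent.expand(x.size(0), -1)
            f = self.stage1(x)
            z = z + self.mix1(f.mean(dim=(2, 3)))       # pooled features update the code
            p = torch.softmax(self.exit1(z), dim=-1)
            if p.max(dim=-1).values.min() > threshold:  # whole batch confident: exit early
                return p
            f = self.stage2(f)
            z = z + self.mix2(f.mean(dim=(2, 3)))
            return torch.softmax(self.exit2(z), dim=-1)

    probs = TwoBranchNet()(torch.randn(4, 3, 32, 32))   # (4, 10) class probabilities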
arXiv Detail & Related papers (2023-06-20T03:00:22Z) - Semi-Supervised Manifold Learning with Complexity Decoupled Chart
Autoencoders [65.2511270059236]
This work introduces a chart autoencoder with an asymmetric encoding-decoding process that can incorporate additional semi-supervised information such as class labels.
We discuss the theoretical approximation power of such networks that essentially depends on the intrinsic dimension of the data manifold.
arXiv Detail & Related papers (2022-08-22T19:58:03Z) - Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they fail.
arXiv Detail & Related papers (2022-03-28T20:29:50Z) - Latent-Insensitive Autoencoders for Anomaly Detection and
Class-Incremental Learning [0.0]
We introduce Latent-Insensitive Autoencoder (LIS-AE) where unlabeled data from a similar domain is utilized as negative examples to shape the latent layer (bottleneck) of a regular autoencoder.
We treat class-incremental learning as multiple anomaly detection tasks by adding a different latent layer for each class and using the other available classes in each task as negative examples to shape each latent layer.
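A hedged sketch of this training signal, not the paper's exact loss: reconstruct in-distribution data as usual while the unlabeled negative examples shape the latent layer, here by suppressing its response to them. The weight lam and the particular shaping term are assumptions.

    import torch

    def lis_ae_style_loss(encoder, decoder, x_pos, x_neg, lam=1.0):
        z_pos = encoder(x_pos)
        recon = torch.mean((decoder(z_pos) - x_pos) ** 2)  # reconstruct positives well
        z_neg = encoder(x_neg)
        shaping = torch.mean(z_neg ** 2)                   # suppress latents for negatives
        return recon + lam * shaping

    # At test time, high reconstruction error flags a sample as anomalous.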
arXiv Detail & Related papers (2021-10-25T16:53:49Z) - Source-Agnostic Gravitational-Wave Detection with Recurrent Autoencoders [0.0]
We present an application of anomaly detection techniques based on deep recurrent autoencoders to the problem of detecting gravitational wave signals in laser interferometers.
Trained on noise data, this class of algorithms could detect signals using an unsupervised strategy, without targeting a specific kind of source.
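A minimal sketch of this detection strategy, assuming an LSTM autoencoder and a reconstruction-error threshold calibrated on noise; the architecture sizes and threshold rule are illustrative, not the paper's configuration.

    import torch
    import torch.nn as nn

    class RecurrentAE(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.enc = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.dec = nn.LSTM(input_size=hidden, hidden_size=hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)

        def forward(self, x):  # x: (batch, time, 1) strain segments
            z, _ = self.enc(x)
            y, _ = self.dec(z)
            return self.out(y)

    def is_anomalous(model, x, threshold):
        err = torch.mean((model(x) - x) ** 2, dim=(1, 2))  # per-segment error
        return err > threshold                             # True where reconstruction fails

    model = RecurrentAE()  # train on noise-only segments, then threshold the error
    flags = is_anomalous(model, torch.randn(4, 200, 1), threshold=1.0)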
arXiv Detail & Related papers (2021-07-27T09:56:49Z) - Autoencoders for unsupervised anomaly detection in high energy physics [105.54048699217668]
We study the tagging of top jet images in a background of QCD jet images.
We show that the standard autoencoder setup cannot be considered as a model-independent anomaly tagger.
We suggest improved performance measures for the task of model-independent anomaly detection.
arXiv Detail & Related papers (2021-04-19T05:06:57Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - Ensemble Wrapper Subsampling for Deep Modulation Classification [70.91089216571035]
Subsampling of received wireless signals is important for relaxing hardware requirements as well as the computational cost of signal processing algorithms.
We propose a subsampling technique to facilitate the use of deep learning for automatic modulation classification in wireless communication systems.
arXiv Detail & Related papers (2020-05-10T06:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.