Adversarial Autoencoders in Operator Learning
- URL: http://arxiv.org/abs/2412.07811v1
- Date: Tue, 10 Dec 2024 03:54:55 GMT
- Title: Adversarial Autoencoders in Operator Learning
- Authors: Dustin Enyeart, Guang Lin
- Abstract summary: Two prevalent neural operator architectures are DeepONets and Koopman autoencoders.
An adversarial addition to an autoencoder has improved the performance of autoencoders in various areas of machine learning.
- Score: 6.03891813540831
- Abstract: DeepONets and Koopman autoencoders are two prevalent neural operator architectures. These architectures are autoencoders. An adversarial addition to an autoencoder has improved the performance of autoencoders in various areas of machine learning. In this paper, the use of an adversarial addition for these two neural operator architectures is studied.
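For context, a minimal sketch of one common adversarial addition: a discriminator that pushes the latent codes toward a chosen prior, as in adversarial autoencoders. PyTorch is assumed, and the layer sizes, Gaussian prior, and unit loss weights are illustrative assumptions rather than this paper's exact setup.
```python
# Minimal sketch (PyTorch assumed). Architecture, Gaussian prior, and
# unit loss weights are illustrative assumptions, not this paper's setup.
import torch
import torch.nn as nn

x_dim, z_dim = 128, 16
encoder = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
decoder = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
discriminator = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                              nn.Linear(64, 1), nn.Sigmoid())

opt_ae = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def train_step(x):
    ones, zeros = torch.ones(x.size(0), 1), torch.zeros(x.size(0), 1)

    # Autoencoder step: reconstruct, and fool the discriminator with the codes.
    z = encoder(x)
    loss_ae = nn.functional.mse_loss(decoder(z), x) + bce(discriminator(z), ones)
    opt_ae.zero_grad(); loss_ae.backward(); opt_ae.step()

    # Discriminator step: prior samples are "real", encoded codes are "fake".
    d_loss = bce(discriminator(torch.randn(x.size(0), z_dim)), ones) \
           + bce(discriminator(encoder(x).detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
```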
Related papers
- Loss Terms and Operator Forms of Koopman Autoencoders [6.03891813540831]
Koopman autoencoders are a prevalent architecture in operator learning.
Loss functions and the form of the operator vary significantly in the literature.
This paper presents a fair and systematic study of these options.
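For context, a sketch of three loss terms that commonly appear with Koopman autoencoders (reconstruction, latent linearity, prediction); the exact terms, weights, and operator form vary across the literature, which is what the paper studies. PyTorch and all shapes are illustrative assumptions.
```python
# Sketch of common Koopman-autoencoder loss terms (PyTorch assumed);
# the term choices and weights are illustrative, not the paper's conclusions.
import torch
import torch.nn as nn

x_dim, z_dim = 64, 8
encoder = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU(), nn.Linear(32, z_dim))
decoder = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, x_dim))
K = nn.Linear(z_dim, z_dim, bias=False)  # linear operator on the latent state

def koopman_loss(x_t, x_next):
    mse = nn.functional.mse_loss
    z_t, z_next = encoder(x_t), encoder(x_next)
    recon = mse(decoder(z_t), x_t)          # reconstruct the current state
    linear = mse(K(z_t), z_next)            # dynamics are linear in latent space
    predict = mse(decoder(K(z_t)), x_next)  # predict the next state
    return recon + linear + predict
```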
arXiv Detail & Related papers (2024-12-05T19:48:13Z)
- Triple-Encoders: Representations That Fire Together, Wire Together [51.15206713482718]
Contrastive Learning is a representation learning method that encodes relative distances between utterances into the embedding space via a bi-encoder.
This study introduces triple-encoders, which efficiently compute distributed utterance mixtures from these independently encoded utterances.
We find that triple-encoders lead to a substantial improvement over bi-encoders, and even to better zero-shot generalization than single-vector representation models.
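For context, a sketch of the mixture idea: utterances are encoded once, independently, and a context representation is then formed from the precomputed embeddings alone, so no joint re-encoding is needed. The averaging rule and the toy stand-in encoder are illustrative assumptions.
```python
# Sketch of combining independently encoded utterances (PyTorch assumed);
# the averaging rule and the stand-in encoder are illustrative assumptions.
import torch
import torch.nn.functional as F

def encode(utterance: str) -> torch.Tensor:
    # Stand-in for any sentence encoder returning a unit-norm embedding.
    return F.normalize(torch.randn(256), dim=0)

# Each utterance in the dialog history is encoded once, independently.
e1, e2 = encode("how are you?"), encode("fine, and you?")

# Distributed mixture of the precomputed embeddings (no re-encoding).
mix = F.normalize(e1 + e2, dim=0)

# Score a candidate response by cosine similarity (dot product of unit vectors).
score = torch.dot(mix, encode("glad to hear it!"))
```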
arXiv Detail & Related papers (2024-02-19T18:06:02Z)
- On a Mechanism Framework of Autoencoders [0.0]
This paper proposes a theoretical framework on the mechanism of autoencoders.
Results of ReLU autoencoders are generalized to some non-ReLU cases.
Compared to PCA and decision trees, the advantages of (generalized) autoencoders on dimensionality reduction and classification are demonstrated.
arXiv Detail & Related papers (2022-08-15T03:51:40Z)
- ED2LM: Encoder-Decoder to Language Model for Faster Document Re-ranking Inference [70.36083572306839]
This paper proposes a new training and inference paradigm for re-ranking.
We finetune a pretrained encoder-decoder model on document-to-query generation.
We show that this encoder-decoder architecture can be decomposed into a decoder-only language model during inference.
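For context, a sketch of the document-to-query scoring objective used by generation-based re-rankers: a document is scored by how likely the model is to generate the query from it. The Hugging Face model name is a placeholder assumption, and the sketch does not show the paper's decoder-only decomposition at inference time.
```python
# Sketch of document-to-query re-ranking scores with an encoder-decoder
# model (Hugging Face transformers assumed; "t5-small" is a placeholder).
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tok = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

def score(document: str, query: str) -> float:
    inputs = tok(document, return_tensors="pt", truncation=True)
    labels = tok(query, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**inputs, labels=labels).loss  # cross-entropy on query tokens
    return -loss.item()  # higher = query more likely given the document
```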
arXiv Detail & Related papers (2022-04-25T06:26:29Z)
- Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Speech Data [145.95460945321253]
We introduce two pre-training tasks for the encoder-decoder network using acoustic units, i.e., pseudo codes.
The proposed Speech2C reduces the word error rate (WER) by a relative 19.2% over the method without decoder pre-training.
arXiv Detail & Related papers (2022-03-31T15:33:56Z)
- How to boost autoencoders? [13.166222736288432]
We discuss the challenges associated with boosting autoencoders and propose a framework to overcome them.
The usefulness of the boosted ensemble is demonstrated in two applications that widely employ autoencoders: anomaly detection and clustering.
arXiv Detail & Related papers (2021-10-28T17:21:25Z)
- Dynamic Neural Representational Decoders for High-Resolution Semantic Segmentation [98.05643473345474]
We propose a novel decoder, termed the dynamic neural representational decoder (NRD).
As each location on the encoder's output corresponds to a local patch of the semantic labels, in this work, we represent these local patches of labels with compact neural networks.
This neural representation enables our decoder to leverage the smoothness prior in the semantic label space, and thus makes our decoder more efficient.
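For context, a sketch of the stated mechanism: each location of the low-resolution encoder output predicts the parameters of a compact network that renders the class logits for its local patch of the label map. Here the compact network is a single linear map over patch coordinates, an illustrative simplification rather than the paper's exact design; all sizes are assumptions.
```python
# Sketch of a dynamic, per-location decoder (PyTorch assumed); the
# single linear map per patch is an illustrative simplification.
import torch
import torch.nn as nn

C_enc, n_cls, patch = 256, 21, 8  # encoder channels, classes, upsample factor

# Predict per-location weights for a linear map: (x, y) coords -> class logits.
weight_head = nn.Conv2d(C_enc, n_cls * 2 + n_cls, kernel_size=1)

def decode(feat):  # feat: (B, C_enc, H, W) low-resolution encoder output
    B, _, H, W = feat.shape
    params = weight_head(feat)                        # (B, n_cls*3, H, W)
    w = params[:, : n_cls * 2].reshape(B, n_cls, 2, H, W)
    b = params[:, n_cls * 2 :].reshape(B, n_cls, 1, H, W)

    # Normalized coordinates inside each patch, shared across locations.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, patch),
                            torch.linspace(-1, 1, patch), indexing="ij")
    coords = torch.stack([xs, ys]).reshape(2, patch * patch)  # (2, P)

    # Apply each location's predicted linear map to the patch coordinates.
    logits = torch.einsum("bkchw,cp->bkphw", w, coords) + b   # (B, K, P, H, W)
    logits = logits.reshape(B, n_cls, patch, patch, H, W)
    logits = logits.permute(0, 1, 4, 2, 5, 3).reshape(B, n_cls, H * patch, W * patch)
    return logits  # high-resolution semantic logits
```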
arXiv Detail & Related papers (2021-07-30T04:50:56Z)
- Revisiting Role of Autoencoders in Adversarial Settings [32.22707594954084]
This paper presents the inherent adversarial robustness of autoencoders.
We believe that our discovery of the adversarial robustness of autoencoders can provide clues for future research and applications in adversarial defense.
arXiv Detail & Related papers (2020-05-21T16:01:23Z)
- Autoencoders [43.991924654575975]
An autoencoder is a specific type of neural network, which is mainly designed to encode the input into a compressed and meaningful representation, and then decode it back such that the reconstructed input is as similar as possible to the original one.
This chapter surveys the different types of autoencoders that are mainly used today.
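For reference, a minimal sketch of this definition (PyTorch assumed; all sizes illustrative): compress the input, decode it back, and train on the reconstruction error.
```python
# Minimal autoencoder sketch (PyTorch assumed; sizes are illustrative).
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(784, 32), nn.ReLU(),  # encoder: compress to a 32-d representation
    nn.Linear(32, 784),             # decoder: reconstruct the input
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

x = torch.rand(64, 784)                            # a batch of flattened inputs
loss = nn.functional.mse_loss(autoencoder(x), x)   # reconstruction error
opt.zero_grad(); loss.backward(); opt.step()
```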
arXiv Detail & Related papers (2020-03-12T19:38:47Z)
- Learning Autoencoders with Relational Regularization [89.53065887608088]
A new framework is proposed for learning autoencoders of data distributions.
We minimize the discrepancy between the model and target distributions with a relational regularization.
We implement the framework with two scalable algorithms, making it applicable for both probabilistic and deterministic autoencoders.
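For context, a sketch of one way to make a regularizer "relational": compare the pairwise-distance structure of a batch of latent codes with that of samples from a prior, instead of matching points directly. This illustrates the flavor of the idea only; the entrywise comparison below is an assumption, not the paper's exact discrepancy.
```python
# Sketch of a relational-style penalty (PyTorch assumed); the entrywise
# distance-matrix comparison is an illustrative stand-in, not the paper's
# exact formulation.
import torch

def pairwise_dists(z):                 # (B, d) -> (B, B) distance matrix
    return torch.cdist(z, z)

def relational_penalty(z_codes, z_prior):
    # Penalize mismatch between the two relational (distance) structures.
    return (pairwise_dists(z_codes) - pairwise_dists(z_prior)).pow(2).mean()

z = torch.randn(32, 16, requires_grad=True)    # latent codes from an encoder
penalty = relational_penalty(z, torch.randn(32, 16))
penalty.backward()                              # differentiable w.r.t. the codes
```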
arXiv Detail & Related papers (2020-02-07T17:27:30Z)