Loss Terms and Operator Forms of Koopman Autoencoders
- URL: http://arxiv.org/abs/2412.04578v1
- Date: Thu, 05 Dec 2024 19:48:13 GMT
- Title: Loss Terms and Operator Forms of Koopman Autoencoders
- Authors: Dustin Enyeart, Guang Lin
- Abstract summary: Koopman autoencoders are a prevalent architecture in operator learning.
Loss functions and the form of the operator vary significantly in the literature.
This paper presents a fair and systematic study of these options.
- Score: 6.03891813540831
- Abstract: Koopman autoencoders are a prevalent architecture in operator learning, but the loss functions and the form of the operator vary significantly in the literature. This paper presents a fair and systematic study of these options. Furthermore, it introduces novel loss terms.
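Since the paper is about these choices, a minimal sketch of one common Koopman autoencoder setup may help orient readers. Everything below is illustrative: the operator form (a single unconstrained matrix), the three loss terms, and their equal weighting are recurring options in the literature, not necessarily the paper's recommendations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KoopmanAutoencoder(nn.Module):
    """Encoder, linear latent operator K, decoder (illustrative architecture)."""
    def __init__(self, state_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                     nn.Linear(64, state_dim))
        # One common operator form: a single unconstrained matrix.
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)

def koopman_loss(model, x_t, x_next):
    """Three loss terms that recur in the literature, equally weighted here."""
    z_t, z_next = model.encoder(x_t), model.encoder(x_next)
    recon = F.mse_loss(model.decoder(z_t), x_t)             # reconstruction
    linear = F.mse_loss(model.K(z_t), z_next)               # latent linearity
    pred = F.mse_loss(model.decoder(model.K(z_t)), x_next)  # state prediction
    return recon + linear + pred
```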
Related papers
- Adversarial Autoencoders in Operator Learning [6.03891813540831]
Two prevalent neural operator architectures are DeepONets and Koopman autoencoders.
Adversarial additions to autoencoders have improved their performance in various areas of machine learning.
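For orientation, the standard adversarial ingredient is a discriminator on the latent code that pushes encoded samples toward a chosen prior. The sketch below assumes a Gaussian prior and hypothetical dimensions; the cited paper's exact setup may differ.

```python
import torch
import torch.nn as nn

latent_dim = 8  # hypothetical size
discriminator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                              nn.Linear(32, 1))  # outputs a real-vs-fake logit
bce = nn.BCEWithLogitsLoss()

def adversarial_latent_losses(z_encoded: torch.Tensor):
    """Discriminator learns to separate prior samples from encoder outputs;
    the encoder is trained (via g_loss) to make its codes look like the prior."""
    z_prior = torch.randn_like(z_encoded)  # samples from the target prior N(0, I)
    ones = torch.ones(len(z_encoded), 1)
    zeros = torch.zeros(len(z_encoded), 1)
    d_loss = (bce(discriminator(z_prior), ones)
              + bce(discriminator(z_encoded.detach()), zeros))
    g_loss = bce(discriminator(z_encoded), ones)
    return d_loss, g_loss
```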
arXiv Detail & Related papers (2024-12-10T03:54:55Z) - Autoencoding for the 'Good Dictionary' of eigen pairs of the Koopman Operator [0.0]
This paper proposes using deep autoencoders, a deep learning technique, to perform non-linear geometric transformations on raw data before computing Koopman eigenvectors.
To handle high-dimensional time-series data, Takens' time-delay embedding is presented as a pre-processing technique.
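Takens' time-delay embedding itself is simple to state; a minimal NumPy version, with arbitrary example parameters, is:

```python
import numpy as np

def takens_embedding(x: np.ndarray, dim: int, delay: int) -> np.ndarray:
    """Stack delayed copies of a scalar series: row i is
    (x[i], x[i+delay], ..., x[i+(dim-1)*delay])."""
    n = len(x) - (dim - 1) * delay
    return np.stack([x[i * delay : i * delay + n] for i in range(dim)], axis=1)

# Example: embed a sine wave into 3-D delay coordinates before any
# autoencoding or Koopman analysis (dim and delay are problem-dependent).
t = np.linspace(0, 20, 1000)
embedded = takens_embedding(np.sin(t), dim=3, delay=10)  # shape (980, 3)
```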
arXiv Detail & Related papers (2023-06-08T14:21:01Z) - Tram: A Token-level Retrieval-augmented Mechanism for Source Code Summarization [76.57699934689468]
We propose a fine-grained Token-level retrieval-augmented mechanism (Tram) on the decoder side to enhance the performance of neural models.
To overcome the challenge of token-level retrieval in capturing contextual code semantics, we also propose integrating code semantics into individual summary tokens.
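Tram's exact mechanism is detailed in the paper; as a generic illustration of token-level retrieval augmentation, one common pattern (kNN-LM style, not Tram's formulation) interpolates the decoder's next-token distribution with one built from retrieved tokens:

```python
import torch

def mix_retrieval(p_model: torch.Tensor, retrieved_ids: torch.Tensor,
                  sims: torch.Tensor, vocab_size: int, lam: float = 0.3) -> torch.Tensor:
    """Blend the decoder's next-token distribution with a distribution over
    retrieved tokens weighted by similarity (a generic sketch only).
    retrieved_ids: LongTensor of token ids; sims: their similarity scores."""
    weights = torch.softmax(sims, dim=0)  # similarity scores -> weights summing to 1
    p_retrieval = torch.zeros(vocab_size).scatter_add_(0, retrieved_ids, weights)
    return (1 - lam) * p_model + lam * p_retrieval
```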
arXiv Detail & Related papers (2023-05-18T16:02:04Z) - Noise-Robust Dense Retrieval via Contrastive Alignment Post Training [89.29256833403167]
Contrastive Alignment POst Training (CAPOT) is a highly efficient finetuning method that improves model robustness without requiring index regeneration.
CAPOT enables robust retrieval by freezing the document encoder while the query encoder learns to align noisy queries with their unaltered root.
We evaluate CAPOT on noisy variants of MSMARCO, Natural Questions, and Trivia QA passage retrieval, finding that CAPOT has a similar impact to data augmentation with none of its overhead.
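A minimal sketch of the alignment idea, assuming an in-batch contrastive objective over paired noisy/clean query embeddings (CAPOT's exact loss may differ; the document encoder stays frozen throughout):

```python
import torch
import torch.nn.functional as F

def alignment_loss(noisy_q: torch.Tensor, clean_q: torch.Tensor, tau: float = 0.05):
    """Pull each noisy query embedding toward the embedding of its unaltered
    root, with other in-batch clean queries as negatives; shapes (batch, dim)."""
    noisy_q = F.normalize(noisy_q, dim=-1)
    clean_q = F.normalize(clean_q, dim=-1)
    logits = noisy_q @ clean_q.T / tau    # cosine similarities, temperature-scaled
    targets = torch.arange(len(noisy_q))  # the matching clean query is the positive
    return F.cross_entropy(logits, targets)
```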
arXiv Detail & Related papers (2023-04-06T22:16:53Z) - A survey and taxonomy of loss functions in machine learning [51.35995529962554]
We present a comprehensive overview of the most widely used loss functions across key applications, including regression, classification, generative modeling, ranking, and energy-based modeling.
We introduce 43 distinct loss functions, structured within an intuitive taxonomy that clarifies their theoretical foundations, properties, and optimal application contexts.
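As a small taste of the taxonomy, three of the loss families it covers, each evaluated on toy tensors (values are arbitrary):

```python
import torch
import torch.nn.functional as F

pred, target = torch.tensor([2.5, 0.0]), torch.tensor([3.0, -0.5])
regression_loss = F.mse_loss(pred, target)            # regression: mean squared error

logits, label = torch.tensor([[2.0, 0.5, -1.0]]), torch.tensor([0])
classification_loss = F.cross_entropy(logits, label)  # classification: cross-entropy

pos, neg = torch.tensor([0.8]), torch.tensor([0.3])   # pos should outrank neg
ranking_loss = F.margin_ranking_loss(pos, neg, torch.tensor([1.0]), margin=1.0)
```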
arXiv Detail & Related papers (2023-01-13T14:38:24Z) - An Introduction to Autoencoders [0.0]
This article covers the mathematics and the fundamental concepts of autoencoders.
We will start with a general introduction to autoencoders, and we will discuss the role of the activation function in the output layer and the loss function.
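Two standard pairings of output activation and reconstruction loss, illustrating (not quoting) the article's discussion; dimensions here are hypothetical:

```python
import torch.nn as nn

# The output activation and reconstruction loss are typically chosen together.
decoder_binary = nn.Sequential(nn.Linear(16, 784), nn.Sigmoid())  # inputs in [0, 1]
loss_binary = nn.BCELoss()                                        # binary cross-entropy

decoder_real = nn.Linear(16, 784)  # unbounded inputs: identity (no) output activation
loss_real = nn.MSELoss()           # mean squared error
```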
arXiv Detail & Related papers (2022-01-11T11:55:32Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations, and also the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
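One schematic reading of self-consistency, with the caveat that the paper's precise definition may differ: re-encode the decoded output and ask the two codes to agree.

```python
import torch.nn.functional as F

def self_consistency_loss(encoder, decoder, x):
    """Encode, decode, then re-encode the reconstruction; penalize disagreement
    between the two codes (a schematic sketch, not the paper's exact objective)."""
    z = encoder(x)
    x_hat = decoder(z)
    return F.mse_loss(encoder(x_hat), z)
```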
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation [56.343646789922545]
We propose to automate the design of metric-specific loss functions by searching differentiable surrogate losses for each metric.
Experiments on PASCAL VOC and Cityscapes demonstrate that the searched surrogate losses outperform the manually designed loss functions consistently.
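To make the searched object concrete, here is a hand-designed differentiable surrogate for the IoU metric; Auto Seg-Loss searches for such surrogates automatically, so this particular one is a common baseline, not the searched result:

```python
import torch

def soft_iou_loss(probs: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """Differentiable IoU surrogate. probs: (N, H, W) foreground probabilities;
    target: same shape, values in {0, 1}."""
    inter = (probs * target).sum(dim=(1, 2))
    union = (probs + target - probs * target).sum(dim=(1, 2))
    return 1.0 - ((inter + eps) / (union + eps)).mean()
```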
arXiv Detail & Related papers (2020-10-15T17:59:08Z) - Autoencoders [43.991924654575975]
An autoencoder is a specific type of neural network that is mainly designed to encode the input into a compressed and meaningful representation, and then decode it back such that the reconstructed input is as similar as possible to the original one.
This chapter surveys the different types of autoencoders that are mainly used today.
arXiv Detail & Related papers (2020-03-12T19:38:47Z) - Improving Image Autoencoder Embeddings with Perceptual Loss [0.1529342790344802]
This work investigates perceptual loss from the perspective of encoder embeddings themselves.
Autoencoders are trained to embed images from three different computer vision datasets using perceptual loss.
Results show that, on the task of object positioning of a small-scale feature, perceptual loss can improve the results by a factor of 10.
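A standard formulation of perceptual loss compares images in the feature space of a pretrained network rather than in pixel space; the layer choice and weighting below are illustrative, and the paper's exact configuration may differ:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen VGG-16 feature extractor (downloads weights on first run).
features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in features.parameters():
    p.requires_grad_(False)

def perceptual_loss(x_hat: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """x_hat, x: (N, 3, H, W) normalized images; MSE between VGG feature maps."""
    return F.mse_loss(features(x_hat), features(x))
```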
arXiv Detail & Related papers (2020-01-10T13:48:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.