CURTAINs Flows For Flows: Constructing Unobserved Regions with Maximum Likelihood Estimation
- URL: http://arxiv.org/abs/2305.04646v1
- Date: Mon, 8 May 2023 11:58:49 GMT
- Title: CURTAINs Flows For Flows: Constructing Unobserved Regions with Maximum Likelihood Estimation
- Authors: Debajyoti Sengupta, Samuel Klein, John Andrew Raine, Tobias Golling
- Abstract summary: We introduce a major improvement to the CURTAINs method by training the conditional normalizing flow between two side-band regions.
CURTAINsF4F requires substantially fewer computational resources to cover a large number of signal regions than other fully data-driven approaches.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model-independent techniques for constructing background data templates using
generative models have shown great promise for use in searches for new physics
processes at the LHC. We introduce a major improvement to the CURTAINs method
by training the conditional normalizing flow between two side-band regions
using maximum likelihood estimation instead of an optimal transport loss. The
new training objective improves the robustness and fidelity of the transformed
data, and the resulting model is much faster and easier to train.
We compare the performance against the previous approach and the current
state of the art using the LHC Olympics anomaly detection dataset, where we see
a significant improvement in sensitivity over the original CURTAINs method.
Furthermore, CURTAINsF4F requires substantially fewer computational resources to
cover a large number of signal regions than other fully data-driven approaches.
When using an efficient configuration, an order of magnitude more models can be
trained in the same time required for ten signal regions, without a significant
drop in performance.
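To make the training objective concrete, below is a minimal, self-contained sketch of a conditional normalizing flow fitted by maximum likelihood, i.e. by minimising the negative log-likelihood of sideband data under a Gaussian base distribution. This is an illustrative toy in PyTorch, not the CURTAINsF4F implementation; the full flows-for-flows construction (a second flow supplying the base density) is omitted, and all names, dimensions, and the toy data are assumptions.

```python
# Hedged sketch: a conditional normalizing flow trained by maximum likelihood.
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One coupling layer: half the features are transformed with a scale and
    shift predicted from the other half plus the conditioning variable."""
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, cond):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, cond], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                      # keep scales well behaved
        z2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)                 # log|det J| of the coupling
        return torch.cat([x1, z2], dim=1), log_det

class ConditionalFlow(nn.Module):
    def __init__(self, dim, cond_dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [ConditionalAffineCoupling(dim, cond_dim) for _ in range(n_layers)]
        )

    def log_prob(self, x, cond):
        log_det = torch.zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x, cond)
            x = x.flip(dims=[1])               # swap halves between layers
            log_det = log_det + ld
        base = torch.distributions.Normal(0.0, 1.0)
        return base.log_prob(x).sum(dim=1) + log_det

# Toy "sideband" data: features x conditioned on a mass-like variable m.
m = torch.rand(512, 1) * 2 - 1
x = torch.randn(512, 4) + m                   # features correlated with m
flow = ConditionalFlow(dim=4, cond_dim=1)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for step in range(200):
    loss = -flow.log_prob(x, m).mean()        # maximum likelihood = minimise NLL
    opt.zero_grad(); loss.backward(); opt.step()
```

Conditioning every coupling layer on the resonant variable (here m) is what would later let such a flow transport sideband events into an unseen signal region.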
Related papers
- Preventing Local Pitfalls in Vector Quantization via Optimal Transport [77.15924044466976]
We introduce OptVQ, a novel vector quantization method that employs the Sinkhorn algorithm to solve an optimal transport problem.
Our experiments on image reconstruction tasks demonstrate that OptVQ achieves 100% codebook utilization and surpasses current state-of-the-art VQNs in reconstruction quality.
arXiv Detail & Related papers (2024-12-19T18:58:14Z)
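For reference, the Sinkhorn algorithm mentioned in the OptVQ entry above alternates two marginal-scaling updates on a Gibbs kernel of the cost matrix. A minimal NumPy sketch follows; the uniform marginals and parameter values are illustrative assumptions, not OptVQ's configuration.

```python
# Hedged sketch of the Sinkhorn iteration for entropy-regularised OT.
import numpy as np

def sinkhorn(cost, eps=0.1, n_iters=200):
    """Return an approximate transport plan between uniform marginals."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)   # marginals
    K = np.exp(-cost / eps)                           # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)                               # match row marginals
        v = b / (K.T @ u)                             # match column marginals
    return u[:, None] * K * v[None, :]                # transport plan

plan = sinkhorn(np.random.rand(8, 5))
print(plan.sum(axis=1))   # each row sums to ~1/8: row marginals satisfied
```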
- Learning Normal Flow Directly From Event Neighborhoods [18.765370814655626]
We propose a novel supervised point-based method for normal flow estimation.
Using a local point cloud encoder, our method directly estimates per-event normal flow from raw events.
Our method achieves better and more consistent performance than state-of-the-art methods when transferred across different datasets.
arXiv Detail & Related papers (2024-12-15T19:09:45Z)
- NUDGE: Lightweight Non-Parametric Fine-Tuning of Embeddings for Retrieval [0.7646713951724011]
Existing approaches either fine-tune the pre-trained model itself or, more efficiently, train adaptor models to transform the output of the pre-trained model.
We present NUDGE, a family of novel non-parametric embedding fine-tuning approaches.
NUDGE directly modifies the embeddings of data records to maximize the accuracy of $k$-NN retrieval.
arXiv Detail & Related papers (2024-09-04T00:10:36Z)
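The NUDGE entry above describes fine-tuning the embeddings themselves rather than any model. A rough sketch of that idea, treating the record embeddings as the only trainable parameters; the cross-entropy objective and the re-normalisation step are assumptions, not NUDGE's exact constrained formulation.

```python
# Hedged sketch: non-parametric fine-tuning of data-record embeddings.
import torch

records = torch.nn.Parameter(torch.randn(100, 32))   # record embeddings
queries = torch.randn(20, 32)                         # training queries
positives = torch.randint(0, 100, (20,))              # relevant record per query
opt = torch.optim.SGD([records], lr=0.1)

for step in range(50):
    sims = queries @ records.T                        # (queries x records)
    loss = torch.nn.functional.cross_entropy(sims, positives)
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                             # keep embeddings on the sphere
        records /= records.norm(dim=1, keepdim=True)
```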
- Optimizing the Optimal Weighted Average: Efficient Distributed Sparse Classification [50.406127962933915]
ACOWA allows an extra round of communication to achieve noticeably better approximation quality with minor runtime increases.
Results show that ACOWA obtains solutions that are more faithful to the empirical risk minimizer and attain substantially higher accuracy than other distributed algorithms.
arXiv Detail & Related papers (2024-06-03T19:43:06Z)
- Transfer learning to improve streamflow forecasts in data sparse regions [0.0]
We study the methodology behind Transfer Learning (TL) through fine-tuning and parameter transferring for better generalization performance of streamflow prediction in data-sparse regions.
We fit a standard recurrent neural network, in the form of a Long Short-Term Memory (LSTM) network, on a sufficiently large source-domain dataset.
We present a methodology to implement transfer learning approaches for hydrologic applications by separating the spatial and temporal components of the model and training the model to generalize.
arXiv Detail & Related papers (2021-12-06T14:52:53Z)
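The fine-tuning flavour of transfer learning described in the entry above can be illustrated as: fit an LSTM on the data-rich source domain, freeze its recurrent weights, and re-fit only the output head on the sparse target domain. A hedged PyTorch sketch; the shapes, toy data, and freeze-everything-but-the-head choice are assumptions, not the paper's exact recipe.

```python
# Hedged sketch: LSTM fine-tuning for a data-sparse target domain.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=5, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)

def predict(x):
    out, _ = lstm(x)
    return head(out[:, -1])                 # forecast from the last time step

# ... assume (lstm, head) were already trained on the data-rich source domain ...
for p in lstm.parameters():                 # transfer: freeze the encoder
    p.requires_grad = False

target_x, target_y = torch.randn(64, 30, 5), torch.randn(64, 1)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
for step in range(100):                     # fine-tune the head only
    loss = nn.functional.mse_loss(predict(target_x), target_y)
    opt.zero_grad(); loss.backward(); opt.step()
```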
- Towards More Efficient Federated Learning with Better Optimization Objects [1.126965032229697]
Federated Learning (FL) is a privacy-protected machine learning paradigm that allows models to be trained directly at the edge without uploading data.
One of the biggest challenges faced by FL in practical applications is the heterogeneity of edge node data, which will slow down the convergence speed and degrade the performance of the model.
We propose to use the aggregation of all models obtained in the past as a new constraint target to further improve the performance of such algorithms.
arXiv Detail & Related papers (2021-08-19T09:29:17Z)
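The constraint described in the entry above, anchoring local updates to the aggregation of all past global models, can be sketched as a FedProx-style proximal term whose anchor is the historical average rather than only the latest global model. Everything below (the penalty weight mu, the toy task) is an illustrative assumption, not the paper's algorithm.

```python
# Hedged sketch: local training regularised toward the average of past models.
import copy
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
history = [copy.deepcopy(model).state_dict()]         # past global models

def historical_average():
    keys = history[0].keys()
    return {k: torch.stack([h[k] for h in history]).mean(0).detach()
            for k in keys}

def local_update(x, y, mu=0.1, steps=20):
    anchor = historical_average()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(steps):
        loss = nn.functional.mse_loss(model(x), y)
        for name, p in model.named_parameters():      # proximal constraint
            loss = loss + mu * (p - anchor[name]).pow(2).sum()
        opt.zero_grad(); loss.backward(); opt.step()

local_update(torch.randn(32, 10), torch.randn(32, 1))
history.append(copy.deepcopy(model).state_dict())     # server records the round
```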
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are the most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
- Regularizing Generative Adversarial Networks under Limited Data [88.57330330305535]
This work proposes a regularization approach for training robust GAN models on limited data.
We show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data.
arXiv Detail & Related papers (2021-04-07T17:59:06Z)
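One common implementation form of the regulariser this entry refers to (Tseng et al., 2021) pulls the discriminator's outputs toward exponential moving averages of its predictions on the opposite sample type. The decay and weight values below are assumptions, not the paper's settings.

```python
# Hedged sketch of a LeCam-style anchor regulariser for a GAN discriminator.
import torch

ema_real, ema_fake, decay = 0.0, 0.0, 0.99

def lecam_reg(d_real, d_fake, lam=0.3):
    """d_real/d_fake: discriminator outputs on real and generated batches."""
    global ema_real, ema_fake
    ema_real = decay * ema_real + (1 - decay) * d_real.mean().item()
    ema_fake = decay * ema_fake + (1 - decay) * d_fake.mean().item()
    reg = (torch.relu(d_real - ema_fake) ** 2).mean() \
        + (torch.relu(ema_real - d_fake) ** 2).mean()
    return lam * reg            # added to the discriminator loss each step

print(lecam_reg(torch.randn(8), torch.randn(8)))
```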
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- ScopeFlow: Dynamic Scene Scoping for Optical Flow [94.42139459221784]
We propose to modify the common training protocols of optical flow.
The improvement is based on observing the bias in sampling challenging data.
We find that both regularization and augmentation should decrease during the training protocol.
arXiv Detail & Related papers (2020-02-25T09:58:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.