Unbinned Profiled Unfolding
- URL: http://arxiv.org/abs/2302.05390v3
- Date: Fri, 7 Jul 2023 20:04:16 GMT
- Title: Unbinned Profiled Unfolding
- Authors: Jay Chan, Benjamin Nachman
- Abstract summary: Unfolding is an important procedure in particle physics experiments which corrects for detector effects.
We propose a new machine learning-based unfolding method that results in an unbinned differential cross section.
- Score: 2.0813318162800707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unfolding is an important procedure in particle physics experiments which
corrects for detector effects and provides differential cross section
measurements that can be used for a number of downstream tasks, such as
extracting fundamental physics parameters. Traditionally, unfolding is done by
discretizing the target phase space into a finite number of bins and is limited
in the number of unfolded variables. Recently, there have been a number of
proposals to perform unbinned unfolding with machine learning. However, none of
these methods (like most unfolding methods) allow for simultaneously
constraining (profiling) nuisance parameters. We propose a new machine
learning-based unfolding method that results in an unbinned differential cross
section and can profile nuisance parameters. The machine learning loss function
is the full likelihood function, based on binned inputs at detector-level. We
first demonstrate the method with simple Gaussian examples and then show the
impact on a simulated Higgs boson cross section measurement.
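The abstract's core idea can be sketched numerically. The following is a minimal toy illustration, not the paper's implementation: a closed-form Gaussian mean-shift weight stands in for the neural network reweighting, the loss is the binned Poisson likelihood at detector level plus a Gaussian constraint on one nuisance parameter (here an assumed 10%-per-unit yield uncertainty), and the nuisance is profiled by grid minimization at each candidate value of the parameter of interest. All names (`neg_log_likelihood`, `best_shift`, etc.) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: particle-level truth is N(0, 1); the detector adds N(0, 0.5) smearing.
truth = rng.normal(0.0, 1.0, 20_000)
reco = truth + rng.normal(0.0, 0.5, truth.size)

# "Data" come from a shifted truth spectrum N(0.2, 1); we want to recover the shift.
data_reco = rng.normal(0.2, 1.0, 20_000) + rng.normal(0.0, 0.5, 20_000)

edges = np.linspace(-4.0, 4.0, 21)
n_obs, _ = np.histogram(data_reco, bins=edges)

def neg_log_likelihood(shift, nuisance, sigma=1.0):
    """Binned Poisson likelihood at detector level with one constrained nuisance.

    `shift` reweights the unbinned simulation at particle level (a closed-form
    Gaussian mean-shift weight standing in for a neural network); `nuisance`
    scales the overall detector-level yield by 10% per unit.
    """
    w = np.exp(truth * shift - 0.5 * shift**2)        # N(shift,1) / N(0,1) ratio
    mu, _ = np.histogram(reco, bins=edges, weights=w)
    mu = mu * (n_obs.sum() / mu.sum()) * (1.0 + 0.1 * nuisance)
    mu = np.clip(mu, 1e-9, None)
    return np.sum(mu - n_obs * np.log(mu)) + 0.5 * (nuisance / sigma) ** 2

# Profile the nuisance on a grid for every candidate shift, then minimize.
shifts = np.linspace(-0.5, 0.5, 21)
nuisances = np.linspace(-2.0, 2.0, 21)
profiled = [min(neg_log_likelihood(s, t) for t in nuisances) for s in shifts]
best_shift = shifts[int(np.argmin(profiled))]
print(f"fitted shift: {best_shift:.2f}")
```

The fitted shift lands near the true value of 0.2. In the paper the reweighting is a learned network and the profiled likelihood is minimized by gradient descent rather than grid search, but the structure of the loss is the same: Poisson terms over detector-level bins plus constraint terms for the nuisance parameters.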
Related papers
- Machine Learning-based Unfolding for Cross Section Measurements in the Presence of Nuisance Parameters [0.15325041686671656]
In particle physics, the distortions detectors introduce are often known only implicitly through simulations of the detector. Modern machine learning has enabled efficient simulation-based approaches for unfolding high-dimensional data. We show how to extend machine learning-based unfolding to incorporate nuisance parameters.
arXiv Detail & Related papers (2025-12-08T01:21:34Z)
- Data-Driven Self-Supervised Learning for the Discovery of Solution Singularity for Partial Differential Equations [0.0]
The appearance of singularities in the function of interest constitutes a fundamental challenge in scientific computing. We propose a self-supervised learning framework for estimating the location of the singularity. Various experiments are presented to demonstrate the ability of the proposed approach to deal with input perturbation, label corruption, and different kinds of singularities.
arXiv Detail & Related papers (2025-06-29T17:39:41Z)
- Understanding In-context Learning of Addition via Activation Subspaces [73.8295576941241]
We study a structured family of few-shot learning tasks for which the true prediction rule is to add an integer $k$ to the input. We then perform an in-depth analysis of individual heads, via dimensionality reduction and decomposition. Our results demonstrate how tracking low-dimensional subspaces of localized heads across a forward pass can provide insight into fine-grained computational structures in language models.
arXiv Detail & Related papers (2025-05-08T11:32:46Z)
- Multidimensional Deconvolution with Profiling [0.28587848809639416]
In many experimental contexts, it is necessary to statistically remove the impact of instrumental effects in order to physically interpret measurements.
We propose a new algorithm called Profile OmniFold (POF), which works in a similar iterative manner as the OmniFold (OF) algorithm while being able to simultaneously profile the nuisance parameters.
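The OmniFold-style iteration referred to here can be sketched with a toy Gaussian example. This is an assumption-laden stand-in, not the POF code: coarse histogram-based density ratios replace the classifiers OmniFold trains at each step, and the profiling of nuisance parameters is omitted. All names (`hist_ratio`, `w_det`, etc.) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Paired particle-level (gen) and detector-level (sim) simulation events.
gen = rng.normal(0.0, 1.0, 30_000)
sim = gen + rng.normal(0.0, 0.5, gen.size)

# Observed detector-level data from a shifted underlying spectrum N(0.3, 1).
data = rng.normal(0.3, 1.0, 30_000) + rng.normal(0.0, 0.5, 30_000)

edges = np.linspace(-5.0, 5.0, 41)

def hist_ratio(x_num, w_num, x_den, w_den):
    """Per-event density ratio from weighted histograms.

    A coarse stand-in for the classifiers OmniFold trains at each step;
    the ratio is evaluated at the denominator sample's points.
    """
    num, _ = np.histogram(x_num, bins=edges, weights=w_num, density=True)
    den, _ = np.histogram(x_den, bins=edges, weights=w_den, density=True)
    ratio = np.where(den > 1e-12, num / np.clip(den, 1e-12, None), 1.0)
    idx = np.clip(np.digitize(x_den, edges) - 1, 0, ratio.size - 1)
    return ratio[idx]

w = np.ones_like(gen)
ones = np.ones_like(data)
for _ in range(3):
    # Step 1: reweight the simulation to match the data at detector level.
    w_det = w * hist_ratio(data, ones, sim, w)
    # Step 2: pull the detector-level weights back to particle level.
    w = w * hist_ratio(gen, w_det, gen, w)

unfolded_mean = np.sum(w * gen) / np.sum(w)
print(f"unfolded mean: {unfolded_mean:.2f}")
```

After a few iterations the weighted particle-level mean converges toward the true 0.3. POF augments this kind of iteration so that nuisance parameters are fitted simultaneously with the event weights.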
arXiv Detail & Related papers (2024-09-16T15:52:28Z)
- Parameter-Efficient and Memory-Efficient Tuning for Vision Transformer: A Disentangled Approach [87.8330887605381]
We show how to adapt a pre-trained Vision Transformer to downstream recognition tasks with only a few learnable parameters.
We synthesize a task-specific query with a learnable and lightweight module, which is independent of the pre-trained model.
Our method achieves state-of-the-art performance under memory constraints, showcasing its applicability in real-world situations.
arXiv Detail & Related papers (2024-07-09T15:45:04Z)
- Machine-learning-based particle identification with missing data [2.87527787066181]
We introduce a novel method for Particle Identification (PID) within the scope of the ALICE experiment at CERN.
Our approach improves the PID purity and efficiency of the selected sample for all investigated particle species.
arXiv Detail & Related papers (2023-12-21T10:20:10Z)
- Designing Observables for Measurements with Deep Learning [0.12277343096128711]
We propose to design targeted observables with machine learning.
Unfolded, differential cross sections in a neural network output contain the most information about parameters of interest.
We demonstrate this idea in simulation using two physics models for inclusive measurements in deep inelastic scattering.
arXiv Detail & Related papers (2023-10-12T20:54:34Z)
- Less is More: On the Feature Redundancy of Pretrained Models When Transferring to Few-shot Tasks [120.23328563831704]
Transferring a pretrained model to a downstream task can be as easy as conducting linear probing with target data.
We show that, for linear probing, the pretrained features can be extremely redundant when the downstream data is scarce.
arXiv Detail & Related papers (2023-10-05T19:00:49Z)
- Particle-Based Score Estimation for State Space Model Learning in Autonomous Driving [62.053071723903834]
Multi-object state estimation is a fundamental problem for robotic applications.
We consider learning maximum-likelihood parameters using particle methods.
We apply our method to real data collected from autonomous vehicles.
arXiv Detail & Related papers (2022-12-14T01:21:05Z)
- Exploration of Parameter Spaces Assisted by Machine Learning [0.0]
We show a variety of functions and classes that implement sampling procedures with improved exploration of the parameter space assisted by machine learning.
In particular, we discuss two methods assisted by incorporating different machine learning models: regression and classification.
The code used for this paper and instructions on how to use it are available on the web.
arXiv Detail & Related papers (2022-07-20T15:09:16Z)
- E-detectors: a nonparametric framework for sequential change detection [86.15115654324488]
We develop a fundamentally new and general framework for sequential change detection.
Our procedures come with clean, nonasymptotic bounds on the average run length.
We show how to design their mixtures in order to achieve both statistical and computational efficiency.
arXiv Detail & Related papers (2022-03-07T17:25:02Z)
- Adaptive neighborhood Metric learning [184.95321334661898]
We propose a novel distance metric learning algorithm, named adaptive neighborhood metric learning (ANML).
ANML can be used to learn both the linear and deep embeddings.
The log-exp mean function proposed in our method gives a new perspective from which to review deep metric learning methods.
arXiv Detail & Related papers (2022-01-20T17:26:37Z)
- Function Approximation via Sparse Random Features [23.325877475827337]
This paper introduces the sparse random feature method that learns parsimonious random feature models utilizing techniques from compressive sensing.
We show that the sparse random feature method outperforms shallow networks for well-structured functions and applications to scientific machine learning tasks.
arXiv Detail & Related papers (2021-03-04T17:53:54Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization in which data collected across different domains help improving the learning performance at each other task.
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Doubly Robust Semiparametric Difference-in-Differences Estimators with High-Dimensional Data [15.27393561231633]
We propose a doubly robust two-stage semiparametric difference-in-difference estimator for estimating heterogeneous treatment effects.
The first stage allows a general set of machine learning methods to be used to estimate the propensity score.
In the second stage, we derive the rates of convergence for both the parametric parameter and the unknown function.
arXiv Detail & Related papers (2020-09-07T15:14:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.