Machine learning offers an exciting opportunity to improve the calibration of
nearly all reconstructed objects in high-energy physics detectors. However,
machine learning approaches often depend on the spectra of examples used during
training, an issue known as prior dependence. This is an undesirable property
of a calibration, which needs to be applicable in a variety of environments.
The purpose of this paper is to explicitly highlight the prior dependence of
some machine learning-based calibration strategies. We demonstrate how some
recent proposals for both simulation-based and data-based calibrations inherit
properties of the sample used for training, which can result in biases for
downstream analyses. In the case of simulation-based calibration, we argue that
our recently proposed Gaussian Ansatz approach can avoid some of the pitfalls
of prior dependence, whereas prior-independent data-based calibration remains
an open problem.
Bias and Priors in Machine Learning Calibrations for High Energy Physics
Rikab Gambhir,1, 2, ∗ Benjamin Nachman,3, 4, † and Jesse Thaler1, 2, ‡
MIT-CTP 5432
1Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
2The NSF AI Institute for Artificial Intelligence and Fundamental Interactions
3Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
4Berkeley Institute for Data Science, University of California, Berkeley, CA 94720, USA
A. Simulation-based Calibration
B. Prior Dependence and Bias
C. Mitigating Prior Dependence
D. Data-based Calibration
E. Unbiased Data-based Approaches?

III. Resolution and Uncertainty in Calibrations
A. Resolution
B. Uncertainty

IV. Gaussian Examples
A. Simulation-based Calibration
B. Data-based Calibration

V. Calibrating Jet Energy Response
A. Datasets
B. Simulation-based Calibration
C. Data-based Calibration
Calibration is the task of removing bias from an inference – that is, to ensure the inference is “correct on average”.
There are two major classes of calibration: simulation-based calibration, where the goal is to infer a truth reference object, and data-based calibration, where the goal is to match simulation and data distributions.
Both simulation-based calibrations and data-based calibrations are essential components of the experimental program in high-energy physics (HEP), and a significant amount of time is spent deriving these results to enable downstream analyses.
We focus on the ATLAS and CMS experiments at the Large Hadron Collider (LHC) for our examples, but this discussion is relevant for all of HEP (and really any experiment).
ATLAS and CMS have performed many recent calibrations, including the energy calibration of single hadrons [1, 2], jets [3, 4], muons [5, 6], electrons/photons [7–9], and τ leptons [10, 11].
The reconstruction efficiencies of all of these objects are also calibrated and include the classification efficiency of jets from heavy flavor [12, 13] and even more massive particles [14, 15].
Machine learning is a promising tool to improve both types of calibration.
In particular, machine learning methods can readily process high-dimensional inputs and therefore can incorporate more information to improve the precision and accuracy of a calibration.
There have been a large number of proposals for improving the simulation-based calibrations of various object energies, including single hadrons [16–21], muons [22], and jets [23–33] at colliders; kinematic reconstruction in deep inelastic scattering [34]; and neutrino energies in a variety of experiments [35–40].
A non-universal calibration would have a rather limited utility, and can produce undesirable results if applied to a dataset that does not exactly match the calibration dataset.
A calibration can be biased due to the choice of estimator or fitting procedure used, even if the usual pitfalls of dataset-induced biases are taken care of.
In this paper, we explain the origin of prior dependence for common calibration techniques, with explicit illustrative examples, and demonstrate the associated bias that these procedures incur.
For simulation-based calibrations, we advocate for our Gaussian Ansatz [43] as a machine-learning-based strategy that is prior independent and bias-free.
For data-based calibrations, we are unaware of any prior-independent methods in the literature.
Many of the conclusions of this paper are well-known to the experts, but we hope that by highlighting these issues, we can inspire the development of prior-independent calibration methods.
II. THE STATISTICS OF CALIBRATION

In this section, we review some of the basic features of simulation-based and data-based calibration, and discuss the issues of prior dependence and bias.
2 Prior independence is a necessary prerequisite for closure. However, even with prior independence, closure is not guaranteed.
A. Simulation-based Calibration
In simulation-based calibration, the goal is to infer target (or true) features zT ∈ R^N from detector-level features xD ∈ R^M – that is, to construct an estimator or calibration function f : R^M → R^N, where the calibrated estimate is ẑT = f(xD).
To carry out simulation-based calibration, one starts with a set of (xD, zT ) pairs, which typically come from an in-depth numerical simulation of an experiment.
In Sec. V, xD will be the experimentally measurable features of hadronic jets and zT will be the true jet energy.
For concreteness, one can think of the calibration function f as being parameterized by a universal function approximator such as a neural network, whose weights and biases are learned.
This is often done by minimizing the mean squared error (MSE) loss:
fMSE = argmin_g E_train[(g(XD) − ZT)^2],   (2)

where capital letters correspond to random variables and E represents the expectation value over the training sample used to derive the calibration.
Using the calculus of variations, one can show that with enough training data, a flexible enough functional parameterization, and a sufficiently exhaustive training procedure, the asymptotic solution to Eq. (2) is:

fMSE(xD) = E_train[ZT | XD = xD],   (3)

where lowercase letters correspond to an instance of a random variable.
In this way, f learns the mean value of zT for a given xD in the training set.
Alternative loss functions result in statistics other than the mean.
See, e.g., Ref. [44] for alternative approaches, including mode learning, which is a standard target for many traditional calibrations (usually in the form of truncated Gaussian fits; see, e.g., Ref. [9]).
B. Prior Dependence and Bias
A key assumption of simulation-based calibration is that the detector response is universal:

ptest(xD|zT) = ptrain(xD|zT).   (4)

This equation says that for a given truth input zT, the detector response is the same between the training data used for deriving the calibration and the testing data used for deploying the calibration.
Note that even if the training dataset is statistically identical to the testing dataset (i.e. ptest(zT ) = ptrain(zT )), it is not guaranteed that the calibration will be unbiased.
One way to reduce the bias is if the prior is “wide and flat enough”, such that the prior asymptotically approaches a uniform sampling over the real line relative to the detector response.
One can show from Eq. (8) that if the prior p(zT) is Gaussian with width σ, and the detector response p(xD|zT) is a Gaussian noise model with width ε, then the bias scales as:

b(zT) ≈ (ε^2/σ^2)(µ − zT)   for σ ≫ ε.
fMLC(xD) = argmax_{zT} p(xD|zT),   (12)

where MLC stands for maximum likelihood classifier – see Ref. [48]. Again, because the detector response p(xD|zT) is universal, maximum likelihood calibrations are universal,3 and in certain configurations, are provably unbiased.
The strategy of the Gaussian Ansatz [43] is to estimate the (local) likelihood density by extremizing the Donsker-Varadhan representation (DVR) [49, 50] of the Kullback-Leibler divergence [51]:

D_KL[p ∥ q] = sup_f ( E_p[f] − log E_q[e^f] ).   (14)

By parametrizing f(xD, zT) via a specially chosen Gaussian Ansatz (see Ref. [43] for details), one can extract the local maximum likelihood estimate and resolution with a single neural network training.
If the fraction of signal is different in the training set and the test set, that is, ptest(zT) ≠ ptrain(zT), then the output can no longer be interpreted as the probability of the signal.
Luckily, classifiers (often called taggers) are almost never used this way in HEP, since the classification score is not interpreted directly as a probability.5
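This distinction can be made concrete with a small sketch, using toy unit-Gaussian signal and background densities (the densities and numbers here are hypothetical, chosen only for illustration): the classifier score interpreted as a probability shifts with the training signal fraction, while the likelihood ratio recovered from it does not.

```python
import math

def gauss(x, mu, sig=1.0):
    # Toy 1D Gaussian density, standing in for p(x|class)
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2 * math.pi))

def classifier_output(x, f_sig):
    """Bayes-optimal score = p(signal | x) for a training signal fraction f_sig."""
    ps, pb = gauss(x, 1.0), gauss(x, 0.0)   # hypothetical p(x|sig), p(x|bkg)
    return f_sig * ps / (f_sig * ps + (1 - f_sig) * pb)

def likelihood_ratio(x, f_sig):
    # Undo the training prior: s/(1-s) * (1-f)/f = p(x|sig)/p(x|bkg)
    s = classifier_output(x, f_sig)
    return (s / (1 - s)) * ((1 - f_sig) / f_sig)

x = 0.5
# The score changes with the training prior (signal fraction) ...
print(classifier_output(x, 0.5), classifier_output(x, 0.1))
# ... but the extracted likelihood ratio is prior independent:
print(likelihood_ratio(x, 0.5), likelihood_ratio(x, 0.1))
```

This is why using the score monotonically (as a tagger) is safe, while reading it off as a probability is not.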
In data-based calibration, the goal is to account for possible differences between a true detector response, pdata(xD) and a simulated detector model psim(xD).
That is, the goal is to match detector-level features xD between data and a simulation at the distribution level, in contrast to simulation-based calibration, where the goal is to match xD and a target feature zT at the object level.
In the machine learning literature, data-based calibration is called domain adaptation.
Machine learning domain adaptation has been widely studied in the context
3 Here psim(xD) = ∫ dzT psim(xD|zT) ptrain(zT) is a simulated detector-level spectrum.
4 It is not always true that a maximum likelihood calibration is unbiased.
For instance, if XD is drawn from a uniform distribution U(0, zT ), then the maximum likelihood estimate from a single xD sample is ˆzT = xD, whereas an unbiased estimate would be ˆzT = 2xD.
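The uniform-distribution footnote can be checked directly in a few lines of NumPy (sample size and the value of zT are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
z_true = 3.0
x = rng.uniform(0.0, z_true, 1_000_000)   # XD ~ U(0, zT)

# The single-sample maximum likelihood estimate zhat = x is biased low,
# since E[X] = zT / 2; the estimator zhat = 2x is unbiased: E[2X] = zT.
print(np.mean(x), np.mean(2 * x))         # ~1.5 (biased) vs ~3.0 (unbiased)
```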
As we will see, though, there is implicit prior dependence in h.
For simplicity, consider the special case of one dimension.
Here, for any OT metric, the OT map h : R → R is simply given by:

h(xD) = Pdata^(−1)(Psim(xD)),   with   Pλ(xD) = ∫_{−∞}^{xD} dx′D pλ(x′D),   (17)

where Pλ is the cumulative distribution function of λ. This function maps quantiles of the simulated distribution to quantiles of the data distribution. The Jacobian of this transformation is:

|h′(xD)| = psim(xD) / pdata(h(xD)) = [ ∫ dzT psim(xD|zT) ptrain(zT) ] / pdata(h(xD)).   (18)

Thus, since the prior ptrain(zT) explicitly appears, the derived OT-based detector model in Eq. (16) is prior dependent.
In line with simulation-based calibration, the bias of a data-based calibration is the average difference between the estimator ˆp(xD) and the desired value pdata(xD), conditioned on xT .7 For OT-based calibration, the bias per
7 This differs from the simulation-based calibration definition, which was conditioned on zT .
In data, there is no truth level zT .
However, sometimes, a proxy can be used as a zT in data, allowing for a direct comparison of true versus reconstructed zT values in data-based calibration.
At least in the special case of one-dimensional OT-based calibration, however, we have shown above that the corrected response function is not universal. This implies that all data-based calibration methods in use are biased, though the degree of bias may be small if the testing and training truth-level densities are similar enough.
A. Resolution

As already mentioned, the bias of a calibration refers to the difference in central tendency (such as the mean, median, or mode) between a reconstructed quantity and a reference quantity.
By contrast, the resolution of a calibration refers to the spread in the difference between the reconstructed and reference quantities.
Using variance as our measure of spread, the resolution Σ^2(zT) can be defined as:

Σ^2(zT) = Vartest[f(XD) − zT | ZT = zT].   (20)
Resolutions, like biases, can be prior-dependent. The prior dependence is seen by applying Bayes' Theorem to ptrain(z′T | xD). As before, this prior dependence can be reduced if the prior is wide compared to the detector response.
If the prior p(zT) is Gaussian with width σ, and the detector response p(xD|zT) is a Gaussian noise model with width ε, then by applying Eq. (22), one can show that the resolution scales as:

ΣMSE(zT) = σ^2 ε / (σ^2 + ε^2).

On the other hand, for the prior-independent MLC calibration (Eq. (12)), the resolution can be shown to be:

Σ^2(zT) = ε^2.   (23)

In HEP (and many other) applications, however, it is common to instead refer to the resolution with respect to a measurement xD rather than the true value zT.
That is, for an inference ˆzT = f(xD), we would like a measure of the spread of zT values consistent with this measurement, which we will denote Σ(xD) (distinguished by the xD argument rather than zT ).
Depending on the context and type of calibration, there are a variety of ways to define Σ(xD) – for instance, as the standard deviation from a Gaussian fit to the distribution of reconstructed over true energies (see, e.g., Ref. [45]).
However, for frequentist approaches where the posterior is not well defined, such as the maximum likelihood calibration, the resolution cannot be defined this way and care must be taken.
For Gaussian noise models p(xD|zT), the likelihood is symmetric under interchanging the arguments xD and zT, so one can take the resolution to be (applying Eq. (20)): Σ^2(xD) = Σ^2(zT) = ε^2.
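The two resolutions quoted above can be verified by Monte Carlo at a fixed truth value, using the closed-form Gaussian posterior mean as the asymptotic MSE calibration (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, eps = 0.0, 1.0, 0.5
zT = 0.7                                        # fixed truth value
xD = zT + rng.normal(0.0, eps, 1_000_000)       # detector smearing at this zT

# Asymptotic MSE calibration = Gaussian posterior mean; MLC estimate = xD itself
f_mse = (sigma**2 * xD + eps**2 * mu) / (sigma**2 + eps**2)
f_mlc = xD

# Eq. (20)-style resolutions at this zT
print(np.std(f_mse - zT), sigma**2 * eps / (sigma**2 + eps**2))  # prior-dependent
print(np.std(f_mlc - zT), eps)                                   # just eps
```

The MSE calibration trades a narrower spread for a prior-dependent bias; the MLC resolution is set purely by the detector.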
For example, if a calibration requires multiplying the reconstructed quantity by a fixed number greater than one, then the resolution will grow by the same amount.
For instance, in the context of jet energy calibrations, xD = α η for some constant α and an observable quantity η (e.g., energy dependence on the pseudorapidity).
If any of the ⃗yD have a non-trivial probability density, this will be inherited by the reconstructed value xD and thus xD will have a non-zero resolution. This resolution is completely reducible, however, through a calibration that is ⃗yD-dependent – that is, a calibration function ẑT = f′(⃗yD) rather than ẑT = f(xD).
The ability to incorporate many auxiliary features is why machine-learning-based approaches, such as the Gaussian Ansatz [43], have the potential to improve analyses at HEP experiments.
B. Uncertainty

In the machine learning literature, "resolution" would be referred to as a type of "uncertainty".
Uncertainty in the statistical context refers to the limited information about zT contained in xD.
In the HEP literature, though, we use uncertainty in a different way, to instead refer to the limited information we have about the bias and resolution of a calibration.
The reason for this difference in nomenclature is that HEP research is based primarily on simulation-based inference, where data are analyzed by comparison to model predictions.
A worse resolution can degrade the statistical precision of a measurement, but if it is well-modeled by the simulation, then there is no associated systematic uncertainty (though there will still be statistical uncertainties).
The momentum imbalance between the jet and the Z boson will be due in part to differences in the calibration between data and simulation and in part due to the mismodeling of initial and final state radiation.
IV. GAUSSIAN EXAMPLES

In this section, we demonstrate some of the calibration issues related to bias and prior dependence in a simple Gaussian example.
We assume that the truth information (the "prior") is distributed according to a Gaussian distribution with mean µ and variance σ^2:

ZT ∼ N(µ, σ^2).   (26)

The detector response is assumed to induce Gaussian smearing centered on the truth input with variance ε^2:

XD | ZT = zT ∼ N(zT, ε^2).   (27)

For the simulation-based calibration in Sec. IV A, the goal is to learn ZT given XD, assuming perfect knowledge of the detector response.
For the data-based calibration in Sec. IV B, the goal is to map XD in "simulation" to XD in "data".
In this latter study, we assume that data and simulation have the same true probability density and differ only in their detector response, εsim ≠ εdata – that is, psim "mismodels" pdata.
In Fig 1a, we show the simulated data, for which both the true and reconstructed values follow a Gaussian distribution.
The first step of a typical calibration is to predict the true zT from the reconstructed xD.
Since we know that the average dependence of the true zT on the reconstructed xD is linear, we perform a first-order polynomial fit to the data using numpy polyfit, which is represented by the blue dashed line in Fig 1a.
Following Ref. [43], the calibration function B(x) is obtained by minimizing the DVR loss function from Eq. (14), such that after training, B(xD) converges to the maximum likelihood estimate (Eqs. (36) and (37)). For Gaussian noise models, this maximum likelihood estimate is unbiased, as confirmed by the numerical results in Fig. 1b.
The B and C networks are each a single node with linear activation.
The D network is set to zero by hand.
Optimization is carried out with Adam [91] over 100 epochs with a batch size of 128.
As desired, the Gaussian Ansatz yields a calibration that is independent of the prior ptrain(zT ).
To demonstrate the bias, we plug Eq. (28) into Eq. (8) to get the bias from the MSE calibration approach:

b(zT) + zT = ∫ dxD ∫ dz′T z′T p(xD|z′T) p(z′T) p(xD|zT) / p(xD).   (38)

It is possible to solve Eq. (38) analytically for the Gaussian setup:

b(zT) = ( ε^2 / (σ^2 + ε^2) ) (µ − zT).   (39)
As expected, b(zT) → 0 as ε → 0. For ε > 0, though, there is a non-zero bias with the MSE approach.
The zT-binned resolutions can also be computed using Eqs. (22) and (23):

ΣMSE(zT) = σ^2 ε / (ε^2 + σ^2),   (40)

ΣMLC(zT) = ε.   (41)

The fitted biases and resolutions are presented in Fig. 2, which exhibits the bias expected from Eq. (39).
This illustrates the large bias introduced by the MSE regression procedure.
To further highlight the role of prior dependence, we repeat the MSE calibration procedure, where we test multiple values of the prior parameters µ and σ to confirm the predictions in Eq (39).
As discussed in Sec. II E, we are unaware of any prior-independent data-based calibration.
To highlight this challenge, we study the OT-based technique introduced in Ref. [42] and mentioned in Sec. II D. In our Gaussian example, the goal is to calibrate a "simulation" sample with (µsim, σsim, εsim) to match a "data" sample with (µdata, σdata, εdata).
For simplicity, we assume that the true spectra (determined by (µ, σ)) are the same in data and in simulation, such that there is no systematic uncertainty in the calibration (see Sec. III B).
Only ε, the parameter governing the detector response, is different between simulation and data – the simulation mismodels the real detector.
To highlight the issue of prior-dependence, we consider a “training” set with one value of µtrain = 0 and a “testing” set with a different value of µtest, with a shared value of σ.
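This train/test mismatch can be sketched numerically with the 1D quantile-matching OT map (all widths and means are illustrative): a map derived with one prior mean closes on its training sample but leaves a residual bias when applied to a sample with a shifted prior.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, eps_sim, eps_data = 1.0, 0.5, 1.0
mu_train, mu_test = 0.0, 2.0

def make(mu, eps, n=500_000):
    # Detector-level sample: truth from N(mu, sigma^2), plus Gaussian smearing
    z = rng.normal(mu, sigma, n)
    return z + rng.normal(0.0, eps, n)

# Derive the 1D OT (quantile) map on samples generated with the training prior
qs = np.linspace(0.001, 0.999, 999)
q_sim  = np.quantile(make(mu_train, eps_sim),  qs)
q_data = np.quantile(make(mu_train, eps_data), qs)
h = lambda x: np.interp(np.interp(x, q_sim, qs), qs, q_data)

# Apply it to a test "simulation" sample with a shifted prior mean:
x_test_sim = make(mu_test, eps_sim)
print(np.mean(h(x_test_sim)), mu_test)   # calibrated mean overshoots mu_test
```

On the training prior the map closes by construction; on the shifted prior the derived map (an approximately affine stretch for Gaussians) rescales the mean shift as well, producing a residual bias of several tenths of a unit.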
FIG. 1. (a) 2D histogram of the reconstructed value xD distribution versus the true value zT distribution, in the Gaussian example with µ = 0, σ = 1, and ε = 2. The dashed line represents a linear fit to the data points. (b) For test values of xD, the vertical axis is the calibrated target value ẑT(xD). The blue dots are the results from a numerical MSE fit fMSE(xD), and the error bars correspond to the numerical point resolution ΣMSE(xD), with the analytic prediction in the red dotted line.
FIG. 4. The data-driven calibration functions corresponding to Fig. 3.
The blue points correspond to the calibration function htrain derived from the training set and the red points correspond to the ideal calibration htest one would derive from the test set.
The shaded histograms correspond to the zT = mtrue truth-level distributions, whereas the light triangles and dark circles correspond to xD = mreco for the fast (Delphes) and slow (Geant4) distributions respectively.
A numerical demonstration of this bias is presented in Fig. 3, where histograms of the data and simulation are presented along with the calibrated result.
The actual calibration function is plotted in Fig. 4 and compared to the analytic expectation from Eqs. (44) and (43).
The fact that the calibration derived on the train set is not the same as the calibration derived on the test set shows that the calibration derived in one and applied to the other will lead to a residual bias.
To illustrate the impact of the prior dependence, we use a realistic and also extreme example where calibrations are derived in a sample of generic quark and gluon jets and then applied to a test sample of jets from the decay of a heavy new resonance.
In practice, jet energy calibrations are derived for individual jets, but this requires at least including calibrating the jet rapidity in addition to the jet energy.
We keep the problem one-dimensional in order to ensure the problem is easy to visualize and to mitigate the dependence on features that are not explicitly modeled.
[Figure legends: data-based Gaussian example, train-set vs. test-set calibration; dijet mjj distributions for QCD and BSM samples at truth level, Delphes fast simulation, and Geant4 full simulation.]
The simulation sample uses Pythia 6.426 [92] with the Z2 tune [93], interfaced with a Geant4-based [94–96] full simulation of the CMS experiment [97].
The full simulation sample comes from the CMS Open Data Portal [102–104] and processed into an MIT Open Data format [105–108].
The fast simulation sample is available at Refs. [109, 110].
For each dataset, we have access to the parton-level hard-scattering scale ˆpT from Pythia, which is in general different from the jet-level transverse momentum pT we are interested in studying.
To avoid any issues related to the trigger, we focus on events where ˆpT > 1 TeV.
Particles (at truth level) or particle flow candidates (at reconstructed level) are used as inputs to jet clustering, implemented using FastJet 3.2.1 [111, 112] and the anti-kt algorithm [113] with radius parameter R = 0.5.
No calibrations are applied to the reconstructed jets.
In order to emulate two different physics processes while controlling for all hidden variables, we consider dijet events with two different sets of event weights.
Additionally, the mreco distribution is significantly different between the full and fast simulations; correcting for this requires a data-based calibration.
The neural network has three hidden layers with 50 nodes per layer, with the rectified linear unit activation for intermediate layers and a linear activation for the output.
Training is performed over the QCD sample to obtain the calibration function.
The learned calibration function is then applied to both the QCD and BSM test samples
The result of MSE calibration is shown in Fig 6a.
Prior to any calibration, the detector response is about 5% low in both the QCD and BSM test samples.
After calibration, the mean is nearly unity for the QCD sample, albeit with a large width – that is to say, the average bias is close to zero over the prior, but the average resolution is large.
The A, B, C, and D networks of the Gaussian Ansatz each consist of three hidden layers with 32 nodes per layer, with the same activation functions, batch size, and epochs as in the Gaussian example.
The calibration function trained on the QCD sample can be used for the BSM sample, and as Fig 6b shows, the calibration is indeed universal and unbiased, as expected.
The goal for the data-based calibration task is to "correct" psim(mreco_jj), given by the fast simulation (Delphes), to the observed data distribution pdata(mreco_jj), given by the full simulation (Geant4).
We now apply the same procedure described in Sec. IV B to the dijet example.
An OT-based calibration is derived using QCD jets, to align the fast simulation (Delphes) sample with the full simulation (Geant4) sample.
The converse is also true – attempting to use a calibration fitted on the BSM sample will lead to bias on the QCD sample, or any other BSM sample for that matter.
On the QCD sample, this calibration closes by construction.
In particular, as shown in Fig 7a, the blue dashed line in the ratio plot fluctuates around unity, with deviations due to statistical fluctuations that differ between the two halves of the event samples.
While the resulting dashed distribution agrees better with the data histogram in dark red than does the fast sim histogram in light red, the overall agreement is still rather poor.
In this paper, we explored the prior dependence of machine learning-based calibration techniques.
There is a growing number of machine learning proposals for simulation-based and data-based calibration and in nearly all cases, there is a prior dependence.
We highlighted the resulting calibration bias in a synthetic Gaussian example and a more realistic particle physics example of dijet production at the LHC.
While prior independent, this technique is typically biased and does not scale well to many dimensions.
We proposed a new approach based on maximum likelihood estimation in Ref. [43], based on parametrizing the log-likelihood with a Gaussian Ansatz.
Maximum-likelihood-based approaches are prior independent by construction and are well-motivated statistically.
Parametrizing the maximum likelihood estimator with neural networks requires a different learning paradigm than current approaches, but it extends well to many dimensions.
To our knowledge, there are currently no prior-independent data-based calibration approaches.
To make the most use of the complex data from the LHC and other HEP experiments, it is essential to use all of the available information for object calibration.
This will require modern machine learning to account for all of the subtle correlations in high dimensions.
It is important, however, that we construct these machine learning calibration functions in a way that integrates all of the features of classical calibration methods.
We highlighted prior independence in this paper as a cornerstone of calibration.
In the future, innovations that incorporate knowledge of the detector response or physics symmetries may further enhance the precision and accuracy of machine learning calibrations.
The code for this paper can be found at https://github.com/hep-lbdl/calibrationpriors, which makes use of Jupyter notebooks [115] employing NumPy [116] for data manipulation and Matplotlib [117] to produce figures.
All of the machine learning was performed on a Nvidia RTX6000 Graphical Processing Unit (GPU).
The physics data sets are hosted on Zenodo at Refs. [106–108, 110].
ACKNOWLEDGMENTS
BN is supported by the U.S. Department of Energy (DOE), Office of Science under contract DE-AC02-05CH11231.
RG and JT are supported by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, http://iaifi.org/), and by the U.S. DOE Office of High Energy Physics under grant number DE-SC0012567.
[1] Georges Aad et al. (ATLAS), “Topological cell clustering in the ATLAS calorimeters and its performance in LHC Run 1,” Eur. Phys. J. C 77, 490 (2017), arXiv:1603.02934 [hep-ex].
[2] A. M. Sirunyan et al. (CMS), “Particle-flow reconstruction and global event description with the CMS detector,” JINST 12, P10003 (2017), arXiv:1706.04965 [physics.ins-det].
[3] Georges Aad et al. (ATLAS), “Jet energy scale and resolution measured in proton–proton collisions at √s = 13 TeV with the ATLAS detector,” Eur.
[5] Georges Aad et al. (ATLAS), “Muon reconstruction performance of the ATLAS detector in proton–proton collision data at √s = 13 TeV,” Eur. Phys. J. C 76, 292 (2016), arXiv:1603.05598 [hep-ex].
[6] Albert M. Sirunyan et al. (CMS), “Performance of the reconstruction and identification of high-momentum muons in proton–proton collisions at √s = 13 TeV,” JINST 15, P02027 (2020), arXiv:1912.03516 [physics.ins-det].
[7] Georges Aad et al. (ATLAS), “Electron and photon performance measurements with the ATLAS detector using the 2015–2017 LHC proton–proton collision data,” JINST 14, P12006 (2019), arXiv:1908.00005 [hep-ex].
[8] Vardan Khachatryan et al. (CMS), “Performance of Photon Reconstruction and Identification with the CMS Detector in Proton-Proton Collisions at √s = 8 TeV,” JINST 10, P08010 (2015), arXiv:1502.02702 [physics.ins-det].
[9] Vardan Khachatryan et al. (CMS), “Performance of Electron Reconstruction and Selection with the CMS Detector in Proton-Proton Collisions at √s = 8 TeV,” JINST 10, P06005 (2015), arXiv:1502.02701 [physics.ins-det].
[10] Georges Aad et al. (ATLAS), “Identification and energy calibration of hadronically decaying tau leptons with the ATLAS experiment in pp collisions at √s = 8 TeV,” Eur. Phys. J. C 75, 303 (2015), arXiv:1412.7086 [hep-ex].
[11] A. M. Sirunyan et al. (CMS), “Performance of reconstruction and identification of τ leptons decaying to hadrons and ντ in pp collisions at √s = 13 TeV,” JINST 13, P10005 (2018), arXiv:1809.02816 [hep-ex].
[12] Georges Aad et al. (ATLAS), “ATLAS b-jet identification performance and efficiency measurement with tt̄ events in pp collisions at √s = 13 TeV,” Eur. Phys. J. C 79, 970 (2019), arXiv:1907.05120 [hep-ex].
[13] A. M. Sirunyan et al. (CMS), “Identification of heavy-flavour jets with the CMS detector in pp collisions at 13 TeV,” JINST 13, P05011 (2018), arXiv:1712.07158 [physics.ins-det].
[14] Morad Aaboud et al. (ATLAS), “Performance of top-quark and W-boson tagging with ATLAS in Run 2 of the LHC,” Eur. Phys. J. C 79, 375 (2019), arXiv:1808.07858 [hep-ex].
[15] Albert M. Sirunyan et al. (CMS), “Identification of heavy, energetic, hadronically decaying particles using machine-learning techniques,” JINST 15, P06005 (2020), arXiv:2004.08262 [hep-ex].
[16] Dawit Belayneh et al., “Calorimetry with deep learning: particle simulation and reconstruction for collider physics,” Eur. Phys. J. C 80, 688 (2020), arXiv:1912.06794 [physics.ins-det].
[17] ATLAS Collaboration, “Deep Learning for Pion Identification and Energy Calibration with the ATLAS Detector,” ATL-PHYS-PUB-2020-018 (2020).
[18] N. Akchurin, C. Cowden, J. Damgov, A. Hussain, and S. Kunori, “On the Use of Neural Networks for Energy Reconstruction in High-granularity Calorimeters,” (2021), arXiv:2107.10207 [physics.ins-det].
[19] N. Akchurin, C. Cowden, J. Damgov, A. Hussain, and S. Kunori, “Perspectives on the Calibration of CNN Energy Reconstruction in Highly Granular Calorimeters,” (2021), arXiv:2108.10963 [physics.ins-det].
[20] L. Polson, L. Kurchaninov, and M. Lefebvre, “Energy reconstruction in a liquid argon calorimeter cell using convolutional neural networks,” (2021), arXiv:2109.05124 [physics.ins-det].
[21] Joosep Pata, Javier Duarte, Jean-Roch Vlimant, Maurizio Pierini, and Maria Spiropulu, “MLPF: Efficient machine-learned particle-flow reconstruction using graph neural networks,” (2021), arXiv:2101.08578 [physics.data-an].
[22] Jan Kieseler, Giles C. Strong, Filippo Chiandotto, Tommaso Dorigo, and Lukas Layer, “Calorimetric Measurement of Multi-TeV Muons via Deep Regression,” (2021), arXiv:2107.02119 [physics.ins-det].
[23] ATLAS Collaboration, “Generalized Numerical Inversion: A Neural Network Approach to Jet Calibration,” ATL-PHYS-PUB-2018-013 (2018).
[24] ATLAS Collaboration, “Simultaneous Jet Energy and Mass Calibrations with Neural Networks,” ATL-PHYS-PUB-2020-001 (2020).
[25] Albert M. Sirunyan et al. (CMS), “A Deep Neural Network for Simultaneous Estimation of b Jet Energy and Resolution,” Comput. Softw. Big Sci. 4, 10 (2020), arXiv:1912.06046 [hep-ex].
[26] Rüdiger Haake and Constantin Loizides, “Machine Learning based jet momentum reconstruction in heavy-ion collisions,” Phys. Rev. C 99, 064904 (2019), arXiv:1810.06324 [nucl-ex].
[27] Rüdiger Haake (ALICE), “Machine Learning based jet momentum reconstruction in Pb-Pb collisions measured with the ALICE detector,” PoS EPS-HEP2019, 312 (2020), arXiv:1909.01639 [nucl-ex].
[28] Pierre Baldi, Lukas Blecher, Anja Butter, Julian Collado, Jessica N. Howard, Fabian Keilbach, Tilman Plehn, Gregor Kasieczka, and Daniel Whiteson, “How to GAN Higher Jet Resolution,” (2020), arXiv:2012.11944 [hep-ph].
[29] Patrick T. Komiske, Eric M. Metodiev, Benjamin Nachman, and Matthew D. Schwartz, “Pileup Mitigation with Machine Learning (PUMML),” JHEP 12, 051 (2017), arXiv:1707.08600 [hep-ph].
[30] Convolutional Neural Networks with Event Images for Pileup Mitigation with the ATLAS Detector, Tech.
[31] Benedikt Maier, Siddharth M. Narayanan, Gianfranco de Castro, Maxim Goncharov, Christoph Paus, and Matthias Schott, “Pile-Up Mitigation using Attention,” (2021), arXiv:2107.02779 [physics.ins-det].
[32] Gregor Kasieczka, Michel Luchmann, Florian Otterpohl, and Tilman Plehn, “Per-Object Systematics using Deep-Learned Calibration,” (2020), arXiv:2003.11099 [hep-ph].
[33] J. Arjona Martínez, Olmo Cerri, Maurizio Pierini, Maria Spiropulu, and Jean-Roch Vlimant, “Pileup mitigation at the Large Hadron Collider with graph neural networks,” Eur. Phys. J. Plus 134, 333 (2019), arXiv:1810.07988 [hep-ph].
[34] Markus Diefenthaler, Abdullah Farhat, Andrii Verbytskyi, and Yuesheng Xu, “Deeply Learning Deep Inelastic Scattering Kinematics,” (2021), arXiv:2108.11638 [hep-ph].
[35] Junze Liu, Jordan Ott, Julian Collado, Benjamin Jargowsky, Wenjie Wu, Jianming Bian, and Pierre Baldi (DUNE), “Deep-Learning-Based Kinematic Reconstruction for DUNE,” (2020), arXiv:2012.06181 [physics.insdet].
[36] S. Delaquis et al (EXO), “Deep Neural Networks for Energy and Position Reconstruction in EXO-200,” JINST 13, P08023 (2018), arXiv:1804.09641 [physics.ins-det].
[37] Pierre Baldi, Jianming Bian, Lars Hertel, and Lingge Li, “Improved Energy Reconstruction in NOvA with Regression Convolutional Neural Networks,” Phys. Rev. D 99, 012011 (2019), arXiv:1811.04557 [physics.ins-det].
[38] R. Abbasi et al., “A Convolutional Neural Network based Cascade Reconstruction for the IceCube Neutrino Observatory,” JINST 16, P07041 (2021), arXiv:2101.11589 [hep-ex].
[39] M. G. Aartsen et al. (IceCube), “Cosmic ray spectrum from 250 TeV to 10 PeV using IceTop,” Phys. Rev. D 102, 122001 (2020), arXiv:2006.05215 [astro-ph.HE].
[40] Kiara Carloni, Nicholas W. Kamp, Austin Schneider, and Janet M. Conrad, “Convolutional Neural Networks for Shower Energy Prediction in Liquid Argon Time Projection Chambers,” (2021), arXiv:2110.10766 [hep-ex].
[41] Matthew Feickert and Benjamin Nachman, “A Living Review of Machine Learning for Particle Physics,” (2021), arXiv:2102.02770 [hep-ph].
[42] Chris Pollard and Philipp Windischhofer, “Transport away your problems: Calibrating stochastic simulations with optimal transport,” (2021), arXiv:2107.08648 [physics.data-an].
[43] Rikab Gambhir, Benjamin Nachman, and Jesse Thaler, “Learning uncertainties the frequentist way: Calibration and correlation in high energy physics,” (2022), arXiv:2205.03413 [hep-ph].
[44] Sanha Cheong, Aviv Cukierman, Benjamin Nachman, Murtaza Safdari, and Ariel Schwartzman, “Parametrizing the Detector Response with Neural Networks,” JINST 15, P01030 (2020), arXiv:1910.03773 [physics.data-an].
[45] A. Cukierman and B. Nachman, “Mathematical Properties of Numerical Inversion for Jet Calibrations,” Nucl. Instrum. Meth. A 858, 1 (2017), arXiv:1609.05195 [physics.data-an].
[46] Danilo Jimenez Rezende and Shakir Mohamed, “Variational inference with normalizing flows,” International Conference on Machine Learning 37, 1530 (2015).
[47] Ivan Kobyzev, Simon Prince, and Marcus Brubaker, “Normalizing Flows: An Introduction and Review of Current Methods,” IEEE Transactions on Pattern Analysis and Machine Intelligence , 1 (2020).
[48] Benjamin Nachman and Jesse Thaler, “E Pluribus Unum Ex Machina: Learning from Many Collider Events at Once,” (2021), arXiv:2101.07263 [physics.data-an].
[49] Monroe D. Donsker and S. R. S. Varadhan, “Asymptotic evaluation of certain Markov process expectations for large time,” (1975).
[50] Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and R Devon Hjelm, “Mine: Mutual information neural estimation,” (2018), arXiv:1801.04062 [cs.LG].
[51] Solomon Kullback and Richard A. Leibler, “On information and sufficiency,” The Annals of Mathematical Statistics 22, 79–86 (1951).
[52] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger, “On calibration of modern neural networks,” in Proceedings of the 34th International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 70, edited by Doina Precup and Yee Whye Teh (PMLR, 2017) pp. 1321–1330.
[53] Kyle Cranmer, Juan Pavez, and Gilles Louppe, “Approximating Likelihood Ratios with Calibrated Discriminative Classifiers,” (2015), arXiv:1506.02169 [stat.AP].
[54] A. Rogozhnikov, “Reweighting with Boosted Decision Trees,” Proceedings, 17th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2016): Valparaiso, Chile, January 18-22, 2016 762, 012036 (2016), arXiv:1608.05806 [physics.data-an].
[55] Anders Andreassen and Benjamin Nachman, “Neural Networks for Full Phase-space Reweighting and Parameter Tuning,” Phys. Rev. D 101, 091901 (2020), arXiv:1907.08209 [hep-ph].
[56] S. Diefenbacher, E. Eren, G. Kasieczka, A. Korol, B. Nachman, and D. Shih, “DCTRGAN: Improving the Precision of Generative Models with Reweighting,” Journal of Instrumentation 15, P11004 (2020), arXiv:2009.03796 [hep-ph].
[57] Benjamin Nachman and Jesse Thaler, “Neural Conditional Reweighting,” (2021), arXiv:2107.08979.
[58] Gilles Louppe, Michael Kagan, and Kyle Cranmer, “Learning to Pivot with Adversarial Networks,” in Advances in Neural Information Processing Systems, Vol. 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Curran Associates, Inc., 2017) arXiv:1611.01046 [stat.ME].
[59] James Dolen, Philip Harris, Simone Marzani, Salvatore Rappoccio, and Nhan Tran, “Thinking outside the ROCs: Designing Decorrelated Taggers (DDT) for jet substructure,” JHEP 05, 156 (2016), arXiv:1603.00027 [hep-ph].
[61] Justin Stevens and Mike Williams, “uBoost: A boosting method for producing uniform selection efficiencies from multivariate classifiers,” JINST 8, P12013 (2013), arXiv:1305.7248 [nucl-ex].
[62] Chase Shimmin, Peter Sadowski, Pierre Baldi, Edison Weik, Daniel Whiteson, Edward Goul, and Andreas Søgaard, “Decorrelated Jet Substructure Tagging using Adversarial Neural Networks,” (2017), arXiv:1703.03507 [hep-ex].
[63] Layne Bradshaw, Rashmish K. Mishra, Andrea Mitridate, and Bryan Ostdiek, “Mass Agnostic Jet Taggers,” (2019), arXiv:1908.08959 [hep-ph].
[67] Christoph Englert, Peter Galler, Philip Harris, and Michael Spannowsky, “Machine Learning Uncertainties with Adversarial Neural Networks,” Eur. Phys. J. C 79, 4 (2019), arXiv:1807.08763 [hep-ph].
[68] Stefan Wunsch, Simon Jörger, Roger Wolf, and Günter Quast, “Reducing the dependence of the neural network function to systematic uncertainties in the input space,” (2019), 10.1007/s41781-020-00037-9, arXiv:1907.11674 [physics.data-an].
[69] Alex Rogozhnikov, Aleksandar Bukva, V. V. Gligorov, Andrey Ustyuzhanin, and Mike Williams, “New approaches for boosting to uniformity,” JINST 10, T03002 (2015), arXiv:1410.4140 [hep-ex].
[70] CMS Collaboration, “A deep neural network to search for new long-lived particles decaying to jets,” Machine Learning: Science and Technology (2020), 10.1088/2632-2153/ab9023, arXiv:1912.12238.
[71] … and Judith M. Katzy, “Adversarial domain adaptation to reduce sample bias of a high energy physics classifier,” (2020), arXiv:2005.00568 [stat.ML].
[72] Gregor Kasieczka, Benjamin Nachman, Matthew D. Schwartz, and David Shih, “ABCDisCo: Automating the ABCD Method with Machine Learning,” (2020), 10.1103/PhysRevD.103.035021, arXiv:2007.14400 [hep-ph].
[73] Ouail Kitouni, Benjamin Nachman, Constantin Weisser, and Mike Williams, “Enhancing searches for resonances with machine learning and moment decomposition,” (2020), arXiv:2010.09745 [hep-ph].
[74] Aishik Ghosh and Benjamin Nachman, “A Cautionary Tale of Decorrelating Theory Uncertainties,” (2021), arXiv:2109.08159 [hep-ph].
[75] Andrew Blance, Michael Spannowsky, and Philip Waite, “Adversarially-trained autoencoders for robust unsupervised new physics searches,” JHEP 10, 047 (2019), arXiv:1905.10384 [hep-ph].
[76] Victor Estrade, Cécile Germain, Isabelle Guyon, and David Rousseau, “Systematic aware learning - A case study in High Energy Physics,” EPJ Web Conf. 214, 06024 (2019).
[77] Stefan Wunsch, Simon Jörger, Roger Wolf, and Günter Quast, “Optimal statistical inference in the presence of systematic uncertainties using neural network optimization based on binned Poisson likelihoods with nuisance parameters,” (2020), 10.1007/s41781-020-00049-5, arXiv:2003.07186 [physics.data-an].
[78] A. Elwood, D. Krücker, and M. Shchedrolosiev, “Direct optimization of the discovery significance in machine learning for new physics searches in particle colliders,” J. Phys. Conf. Ser. 1525, 012110 (2020).
[79] Pablo De Castro and Tommaso Dorigo, “INFERNO: Inference-Aware Neural Optimisation,” Comput.
[80] Tom Charnock, Guilhem Lavaux, and Benjamin D. Wandelt, “Automatic physical inference with information maximizing neural networks,” Physical Review D 97 (2018), 10.1103/physrevd.97.083004.
[81] Justin Alsing and Benjamin Wandelt, “Nuisance hardened data compression for fast likelihood-free inference,” Mon.
[82] Lukas Heinrich and Nathan Simpson, “pyhf/neos: initial zenodo release,” (2020).
[83] Sven Bollweg, Manuel Haußmann, Gregor Kasieczka, Michel Luchmann, Tilman Plehn, and Jennifer Thompson, “Deep-Learning Jets with Uncertainties and More,” SciPost Phys. 8, 006 (2020), arXiv:1904.10004 [hep-ph].
[84] Jack Y. Araz and Michael Spannowsky, “Combine and Conquer: Event Reconstruction with Bayesian Ensemble Neural Networks,” (2021), arXiv:2102.01078 [hep-ph].
[85] Marco Bellagente, Manuel Haußmann, Michel Luchmann, and Tilman Plehn, “Understanding Event-Generation Networks via Uncertainties,” (2021), arXiv:2104.04543 [hep-ph].
[86] Benjamin Nachman, “A guide for deploying Deep Learning in LHC searches: How to achieve optimality and account for uncertainty,” (2019), 10.21468/SciPostPhys.8.6.090, arXiv:1909.03081 [hep-ph].
[87] Tommaso Dorigo and Pablo de Castro, “Dealing with Nuisance Parameters using Machine Learning in High Energy Physics: a Review,” (2020), arXiv:2007.09121 [stat.ML].
[88] Aishik Ghosh, Benjamin Nachman, and Daniel Whiteson, “Uncertainty Aware Learning for High Energy Physics,” (2021), arXiv:2105.08742 [hep-ex].
[90] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al., “Tensorflow: A system for large-scale machine learning,” in OSDI, Vol. 16 (2016) pp. 265–283.
[91] Diederik Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” (2014), arXiv:1412.6980 [cs].
[92] Torbjorn Sjöstrand, Stephen Mrenna, and Peter Z. Skands, “PYTHIA 6.4 Physics and Manual,” JHEP 05, 026 (2006), arXiv:hep-ph/0603175 [hep-ph].
[93] Serguei Chatrchyan et al. (CMS), “Measurement of the Underlying Event Activity at the LHC with √s = 7 TeV and Comparison with √s = 0.9 TeV,” JHEP 09, 109 (2011), arXiv:1107.0330 [hep-ex].
[99] J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, A. Mertens, and M. Selvaggi (DELPHES 3), “DELPHES 3, A modular framework for fast simulation of a generic collider experiment,” JHEP 02, 057 (2014), arXiv:1307.6346 [hep-ex].
[100] Alexandre Mertens, “New features in Delphes 3,” Proceedings, 16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT 14): Prague, Czech Republic, September 1-5, 2014, J. Phys. Conf. Ser. 608, 012045 (2015).
[101] Michele Selvaggi, “DELPHES 3: A modular framework for fast-simulation of generic collider experiments,” Proceedings, 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013): Beijing, China, May 16-21, 2013, J. Phys. Conf. Ser. 523, 012033 (2014).
[102] CMS Collaboration, “Simulated dataset QCD_Pt-1000to1400_TuneZ2_7TeV_pythia6 in AODSIM format for 2011 collision data (SM Exclusive),” (2016), CERN Open Data Portal, 10.7483/OPENDATA.CMS.96U2.3YAH.
[103] CMS Collaboration, “Simulated dataset QCD_Pt-1400to1800_TuneZ2_7TeV_pythia6 in AODSIM format for 2011 collision data (SM Exclusive),” (2016), CERN Open Data Portal, 10.7483/OPENDATA.CMS.RC9V.B5KX.
[105] Patrick T. Komiske, Radha Mastandrea, Eric M. Metodiev, Preksha Naik, and Jesse Thaler, “Exploring the Space of Jets with CMS Open Data,” Phys. Rev. D 101, 034009 (2020), arXiv:1908.08542 [hep-ph].
[106] Patrick Komiske, Radha Mastandrea, Eric Metodiev, Preksha Naik, and Jesse Thaler, “CMS 2011A Simulation | Pythia 6 QCD 1000-1400 | pT > 375 GeV | MOD HDF5 Format,” (2019).
[107] Patrick Komiske, Radha Mastandrea, Eric Metodiev, Preksha Naik, and Jesse Thaler, “CMS 2011A Simulation | Pythia 6 QCD 1400-1800 | pT > 375 GeV | MOD HDF5 Format,” (2019).
[108] Patrick Komiske, Radha Mastandrea, Eric Metodiev, Preksha Naik, and Jesse Thaler, “CMS 2011A Simulation | Pythia 6 QCD 1800-inf | pT > 375 GeV | MOD HDF5 Format,” (2019).
[109] G. Kasieczka, B. Nachman, and D. Shih, “Neural Conditional Reweighting,” (2021), arXiv:2107.08979 [physics.data-an].
[110] Benjamin Nachman and Jesse Thaler, “Delphes dijet dataset,” (2021).
[113] Matteo Cacciari, Gavin P. Salam, and Gregory Soyez, “The anti-kt jet clustering algorithm,” JHEP 04, 063 (2008), arXiv:0802.1189 [hep-ph].
[114] Ouail Kitouni, Benjamin Nachman, Constantin Weisser, and Mike Williams, “Enhancing searches for resonances with machine learning and moment decomposition,” Journal of High Energy Physics 2021 (2021), 10.1007/jhep04(2021)070.
[115] Thomas Kluyver, Benjamin Ragan-Kelley, Fernando Pérez, Brian Granger, Matthias Bussonnier, Jonathan Frederic, Kyle Kelley, Jessica Hamrick, Jason Grout, Sylvain Corlay, Paul Ivanov, Damián Avila, Safia Abdalla, and Carol Willing, “Jupyter notebooks – a publishing format for reproducible computational workflows,” in Positioning and Power in Academic Publishing: Players, Agents and Agendas, edited by F. Loizides and B. Schmidt (IOS Press, 2016) pp. 87 – 90.
[116] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant, “Array programming with NumPy,” Nature 585, 357–362 (2020).
[117] J. D. Hunter, “Matplotlib: A 2d graphics environment,” Computing in Science & Engineering 9, 90–95 (2007).