PMT Waveform Simulation and Reconstruction with Conditional Diffusion Network
- URL: http://arxiv.org/abs/2602.05767v1
- Date: Thu, 05 Feb 2026 15:30:47 GMT
- Title: PMT Waveform Simulation and Reconstruction with Conditional Diffusion Network
- Authors: Kainan Liu, Jingyu Huang, Guihong Huang, Jianyi Luo,
- Abstract summary: Photomultiplier tubes (PMTs) are widely employed in particle and nuclear physics experiments. The accuracy of PMT waveform reconstruction directly impacts the detector's spatial and energy resolution. We propose an innovative weakly supervised waveform simulation and reconstruction approach based on a conditional diffusion network framework.
- Score: 1.2599533416395765
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Photomultiplier tubes (PMTs) are widely employed in particle and nuclear physics experiments. The accuracy of PMT waveform reconstruction directly impacts the detector's spatial and energy resolution. A key challenge arises when multiple photons arrive within a few nanoseconds, making it difficult to resolve individual photoelectrons (PEs). Although supervised deep learning methods have surpassed traditional methods in performance, their practical applicability is limited by the lack of ground-truth PE labels in real data. To address this issue, we propose an innovative weakly supervised waveform simulation and reconstruction approach based on a bidirectional conditional diffusion network framework. The method is fully data-driven and requires only raw waveforms and coarse estimates of PE information as input. It first employs a PE-conditioned diffusion model to simulate realistic waveforms from PE sequences, thereby learning the features of overlapping waveforms. Subsequently, these simulated waveforms are used to train a waveform-conditioned diffusion model to reconstruct the PE sequences from waveforms, reinforcing the learning of features of overlapping waveforms. Through iterative refinement between the two conditional diffusion processes, the model progressively improves reconstruction accuracy. Experimental results demonstrate that the proposed method achieves 99% of the normalized PE-number resolution averaged over 1-5 p.e. and 80% of the timing resolution attained by fully supervised learning.
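The core difficulty the abstract describes, PE pulses piling up within a few nanoseconds, can be illustrated with a small NumPy sketch: a hypothetical single-photoelectron template is superposed at the true PE times, and a naive threshold peak finder (a stand-in for the "coarse estimates of PE information" used as weak labels) merges the overlapping pair into one peak. The pulse shape, time constants, and thresholds below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def spe_template(t, t0, tau=4.0):
    # Hypothetical single-photoelectron pulse shape: fast rise,
    # exponential-like decay, peaking tau ns after arrival time t0.
    dt = t - t0
    return np.where(dt > 0, (dt / tau) * np.exp(1 - dt / tau), 0.0)

def simulate_waveform(pe_times, n_samples=64):
    # Superpose one SPE pulse per photoelectron and add electronics noise.
    t = np.arange(n_samples, dtype=float)
    wf = sum(spe_template(t, t0) for t0 in pe_times)
    return wf + rng.normal(0.0, 0.05, n_samples)

def coarse_pe_estimate(wf, threshold=0.3):
    # Weak label: local maxima above threshold stand in for PE hits.
    return [i for i in range(1, len(wf) - 1)
            if wf[i] > threshold and wf[i] >= wf[i - 1] and wf[i] > wf[i + 1]]

# Two PEs only 3 ns apart (at 10 and 13 ns) plus an isolated one at 30 ns:
true_pes = [10.0, 13.0, 30.0]
wf = simulate_waveform(true_pes)
est = coarse_pe_estimate(wf)
```

In the paper's pipeline, such coarse estimates seed the PE-conditioned diffusion model; its simulated waveforms then train the waveform-conditioned reconstruction model, and the two are refined iteratively.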
Related papers
- Arbitrary control of the temporal waveform of photons during spontaneous emission [0.0]
Control of the temporal waveform and Fock state statistics of photons produced during spontaneous emission from single quantum emitters provides a crucial tool in the establishment of hybrid quantum systems. We describe a method to generate photons of any temporal waveform from emitters of any lifetime.
arXiv Detail & Related papers (2025-11-28T18:54:08Z) - Diffusion prior as a direct regularization term for FWI [0.0]
We propose incorporating a score-based generative diffusion prior into Full Waveform Inversion (FWI) as a direct regularization term. Unlike traditional diffusion approaches, our method avoids reverse diffusion sampling and needs fewer iterations. The proposed method offers enhanced fidelity and robustness compared to conventional and GAN-based FWI approaches.
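A minimal sketch of "diffusion prior as a direct regularization term": the prior's score ∇ log p(m) is added straight into the gradient update instead of running reverse diffusion sampling. Here an analytic Gaussian score and a quadratic data misfit stand in for the learned score network and the wave-equation misfit; all values are illustrative.

```python
import numpy as np

mu, sigma = 2.0, 0.5  # hypothetical prior over model (velocity) values

def score(m):
    # grad log p(m) for N(mu, sigma^2); a trained diffusion model
    # would supply this score in place of the analytic formula.
    return -(m - mu) / sigma**2

def misfit_grad(m, obs):
    # Stand-in data term: quadratic misfit to "observed" values.
    return m - obs

obs = np.array([1.6, 2.4, 2.0])
m = np.zeros(3)
lr, lam = 0.1, 0.05
for _ in range(200):
    # Regularized update: data gradient minus lam * prior score.
    m = m - lr * (misfit_grad(m, obs) - lam * score(m))
```

The fixed point balances the data misfit against the prior pull toward mu, exactly as an explicit Tikhonov-style term would, but with the score supplied by a generative model.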
arXiv Detail & Related papers (2025-06-11T19:43:23Z) - RadioDiff-$k^2$: Helmholtz Equation Informed Generative Diffusion Model for Multi-Path Aware Radio Map Construction [76.24833675757033]
We propose a physics-informed generative learning approach, named RadioDiff-$k^2$, for accurate and efficient multipath-aware radio map (RM) construction. We show that the proposed RadioDiff-$k^2$ framework achieves state-of-the-art (SOTA) performance in both image-level RM construction and localization tasks.
arXiv Detail & Related papers (2025-04-22T06:28:13Z) - PreAdaptFWI: Pretrained-Based Adaptive Residual Learning for Full-Waveform Inversion Without Dataset Dependency [8.719356558714246]
Full-waveform inversion (FWI) is a method that utilizes seismic data to invert the physical parameters of subsurface media. Due to its ill-posed nature, FWI is susceptible to getting trapped in local minima. Various research efforts have attempted to combine neural networks with FWI to stabilize the inversion process.
arXiv Detail & Related papers (2025-02-17T15:30:17Z) - DispFormer: A Pretrained Transformer Incorporating Physical Constraints for Dispersion Curve Inversion [56.64622091009756]
This study introduces DispFormer, a transformer-based neural network for $v_s$ profile inversion from Rayleigh-wave phase and group dispersion curves. DispFormer processes dispersion data independently at each period, allowing it to handle varying lengths without requiring network modifications or strict alignment between training and testing datasets.
arXiv Detail & Related papers (2025-01-08T09:08:24Z) - Effective Diffusion Transformer Architecture for Image Super-Resolution [63.254644431016345]
We design an effective diffusion transformer for image super-resolution (DiT-SR).
In practice, DiT-SR leverages an overall U-shaped architecture, and adopts a uniform isotropic design for all the transformer blocks.
We analyze the limitation of the widely used AdaLN, and present a frequency-adaptive time-step conditioning module.
arXiv Detail & Related papers (2024-09-29T07:14:16Z) - Adaptive Multi-step Refinement Network for Robust Point Cloud Registration [82.64560249066734]
Point Cloud Registration estimates the relative rigid transformation between two point clouds of the same scene. We propose an adaptive multi-step refinement network that refines the registration quality at each step by leveraging the information from the preceding step. Our method achieves state-of-the-art performance on both the 3DMatch/3DLoMatch and KITTI benchmarks.
arXiv Detail & Related papers (2023-12-05T18:59:41Z) - WaveDM: Wavelet-Based Diffusion Models for Image Restoration [43.254438752311714]
Wavelet-Based Diffusion Model (WaveDM) learns the distribution of clean images in the wavelet domain conditioned on the wavelet spectrum of degraded images after wavelet transform.
WaveDM achieves state-of-the-art performance with the efficiency that is comparable to traditional one-pass methods.
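The "wavelet domain" in WaveDM means representing images by their wavelet coefficients before diffusion. A one-level Haar transform pair, written out by hand so nothing beyond NumPy is needed (the paper itself would use a full 2-D multi-level transform), shows the domain change and its exact invertibility:

```python
import numpy as np

def haar_dwt(x):
    # One-level Haar transform of a 1-D signal of even length:
    # approximation (low-pass) and detail (high-pass) coefficients.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    # Inverse transform: perfect reconstruction by construction.
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([1.0, 3.0, 2.0, 2.0, 5.0, 1.0, 0.0, 4.0])
a, d = haar_dwt(x)
x_rec = haar_idwt(a, d)
```

A diffusion model conditioned on the degraded image's coefficients would then denoise (a, d) rather than pixels, which is where WaveDM's efficiency gain comes from.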
arXiv Detail & Related papers (2023-05-23T08:41:04Z) - Machine learning for phase-resolved reconstruction of nonlinear ocean wave surface elevations from sparse remote sensing data [37.69303106863453]
We propose a novel approach for phase-resolved wave surface reconstruction using neural networks.
Our approach utilizes synthetic yet highly realistic training data on uniform one-dimensional grids.
arXiv Detail & Related papers (2023-05-18T12:30:26Z) - Q-Diffusion: Quantizing Diffusion Models [52.978047249670276]
Post-training quantization (PTQ) is considered a go-to compression method for other tasks.
We propose a novel PTQ method specifically tailored towards the unique multi-timestep pipeline and model architecture.
We show that our proposed method is able to quantize full-precision unconditional diffusion models into 4-bit while maintaining comparable performance.
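What "quantize into 4-bit" means mechanically can be shown with a symmetric uniform quantizer; Q-Diffusion's actual contribution is calibrating such quantizers across the multi-timestep diffusion pipeline, which this toy omits. The scale convention below is a common choice, not the paper's.

```python
import numpy as np

def quantize_4bit(w):
    # Symmetric uniform PTQ: map weights to signed 4-bit integers.
    # Scale so the largest magnitude lands on level 7 (range [-8, 7]).
    scale = np.abs(w).max() / 7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from integer codes.
    return q * scale

w = np.linspace(-1.0, 1.0, 9)
q, scale = quantize_4bit(w)
w_hat = dequantize(q, scale)
```

Rounding to the nearest level bounds the per-weight error by half a quantization step, which is why a well-chosen scale is the whole game in PTQ.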
arXiv Detail & Related papers (2023-02-08T19:38:59Z) - ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion Trajectories [144.03939123870416]
We propose a novel conditional diffusion model by introducing conditions into the forward process.
We use extra latent space to allocate an exclusive diffusion trajectory for each condition based on some shifting rules.
We formulate our method, which we call ShiftDDPMs, and provide a unified point of view on existing related methods.
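The "conditions into the forward process" idea can be sketched as a condition-dependent mean shift added to the standard DDPM forward marginal, so each condition gets its own diffusion trajectory. The linear shifting rule below is a hypothetical stand-in for the paper's shifting rules.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # standard DDPM cumulative product

def shift(cond_embedding, t):
    # Hypothetical shifting rule: blend the condition's offset in
    # gradually along the trajectory (zero at t=0, full at t=T-1).
    return (t / (T - 1)) * cond_embedding

def q_sample(x0, t, cond_embedding):
    # Shifted forward marginal: mean sqrt(a_bar)*x0 plus a
    # condition-specific shift, variance (1 - a_bar) as in DDPM.
    a = alphas_bar[t]
    noise = rng.normal(size=x0.shape)
    return np.sqrt(a) * x0 + shift(cond_embedding, t) + np.sqrt(1 - a) * noise

x0 = np.zeros(4)
cond = 2.0 * np.ones(4)  # toy condition embedding
xt = q_sample(x0, T - 1, cond)
```

At the final step the samples are centered on the condition's shift rather than on zero, which is exactly what gives each condition an exclusive trajectory through latent space.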
arXiv Detail & Related papers (2023-02-05T12:48:21Z) - Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.