Kostant relation in filtered randomized benchmarking for passive bosonic devices
- URL: http://arxiv.org/abs/2511.00842v1
- Date: Sun, 02 Nov 2025 07:53:24 GMT
- Title: Kostant relation in filtered randomized benchmarking for passive bosonic devices
- Authors: David Amaro-Alcalá
- Abstract summary: We introduce a filter function using immanants. We argue that weak coherent states and intensity measurements are sufficient to proceed with the characterization.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We reduce the cost of the current bosonic randomized benchmarking proposal. First, we introduce a filter function using immanants. With this filter, we avoid the need to compute Clebsch-Gordan coefficients. Our filter uses the same data as the original, although we propose a distinct data collection process that requires a single type of measurement. Furthermore, we argue that weak coherent states and intensity measurements are sufficient to proceed with the characterization. Our work could then allow simpler platforms to be characterized and simplify the data analysis process.
Related papers
- Prior-based Noisy Text Data Filtering: Fast and Strong Alternative For Perplexity [16.521507516831097]
We propose a prior-based data filtering method that estimates token priors using corpus-level term frequency statistics.
Our approach filters documents based on the mean and standard deviation of token priors, serving as a fast proxy to PPL.
Despite its simplicity, the prior-based filter achieves the highest average performance across 20 downstream benchmarks.
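The filtering criterion described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the whitespace tokenization, log-prior statistics, and threshold values below are all illustrative assumptions.

```python
from collections import Counter
import math

def token_priors(corpus):
    """Estimate token priors from corpus-level term frequencies."""
    counts = Counter(tok for doc in corpus for tok in doc.split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def doc_stats(doc, priors):
    """Mean and standard deviation of log token priors for one document."""
    logs = [math.log(priors[t]) for t in doc.split() if t in priors]
    if not logs:
        return float("-inf"), 0.0
    mean = sum(logs) / len(logs)
    var = sum((x - mean) ** 2 for x in logs) / len(logs)
    return mean, math.sqrt(var)

def filter_docs(corpus, min_mean=-8.0, max_std=4.0):
    """Keep documents whose token-prior statistics fall inside the
    (hypothetical) thresholds; documents dominated by rare tokens drop out."""
    priors = token_priors(corpus)
    kept = []
    for doc in corpus:
        mean, std = doc_stats(doc, priors)
        if mean >= min_mean and std <= max_std:
            kept.append(doc)
    return kept
```

Because the statistics come from a single pass over corpus counts, this is far cheaper than scoring every document with a perplexity model.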
arXiv Detail & Related papers (2025-09-23T02:57:29Z)
- Implicit Maximum a Posteriori Filtering via Adaptive Optimization [4.767884267554628]
We frame the standard Bayesian filtering problem as optimization over a time-varying objective.
We show that our framework results in filters that are effective, robust, and scalable to high-dimensional systems.
arXiv Detail & Related papers (2023-11-17T15:30:44Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
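The low-rank plus diagonal decomposition mentioned above is what makes such a filter scale: products with the posterior matrix never require forming it densely. A minimal sketch of this idea (not the paper's algorithm; names and shapes are illustrative):

```python
import numpy as np

def lowrank_diag_matvec(d, W, v):
    """Compute (diag(d) + W @ W.T) @ v in O(n*r) time and memory,
    where d is the length-n diagonal part and W is an n x r factor,
    without ever materializing the dense n x n matrix."""
    return d * v + W @ (W.T @ v)
```

For state dimension n in the millions and rank r in the tens, this is the difference between a feasible update and an intractable one.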
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Filter Pruning based on Information Capacity and Independence [11.411996979581295]
This paper introduces a new filter pruning method that selects filters in an interpretable, multi-perspective, and lightweight manner.
For the amount of information contained in each filter, a new metric called information capacity is proposed.
For correlations among filters, another metric called information independence is designed.
arXiv Detail & Related papers (2023-03-07T04:26:44Z)
- Computational Doob's h-transforms for Online Filtering of Discretely Observed Diffusions [65.74069050283998]
We propose a computational framework to approximate Doob's $h$-transforms.
The proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters.
arXiv Detail & Related papers (2022-06-07T15:03:05Z)
- Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
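Ranking filters by the magnitude of their BN scale factors is simple enough to sketch directly. This is an illustrative outline of that general idea, not the paper's exact criterion; the function names and pruning ratio are assumptions.

```python
import numpy as np

def bn_filter_importance(gamma):
    """Score each filter by the magnitude of its BatchNorm scale factor:
    a filter whose gamma is near zero contributes little to the output."""
    return np.abs(np.asarray(gamma, dtype=float))

def select_filters(gamma, prune_ratio=0.5):
    """Return the sorted indices of filters to keep, dropping the
    fraction `prune_ratio` with the smallest |gamma|."""
    importance = bn_filter_importance(gamma)
    n_keep = max(1, int(round(len(importance) * (1.0 - prune_ratio))))
    keep = np.argsort(importance)[::-1][:n_keep]
    return np.sort(keep)
```

The appeal of such criteria is that they read importance off a pre-trained network's existing parameters, with no extra training or data passes.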
arXiv Detail & Related papers (2021-12-02T12:04:59Z)
- Training Compact CNNs for Image Classification using Dynamic-coded Filter Fusion [139.71852076031962]
We present a novel filter pruning method, dubbed dynamic-coded filter fusion (DCFF)
We derive compact CNNs in a computation-economical and regularization-free manner for efficient image classification.
Our DCFF derives a compact VGGNet-16 with only 72.77M FLOPs and 1.06M parameters while reaching top-1 accuracy of 93.47%.
arXiv Detail & Related papers (2021-07-14T18:07:38Z)
- Deep Convolutional Correlation Iterative Particle Filter for Visual Tracking [1.1531505895603305]
This work proposes a novel framework for visual tracking based on the integration of an iterative particle filter, a deep convolutional neural network, and a correlation filter.
We employ a novel strategy to assess the likelihood of the particles after the iterations by applying K-means clustering.
Experimental results on two different benchmark datasets show that our tracker performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2021-07-07T02:44:43Z)
- Innovative And Additive Outlier Robust Kalman Filtering With A Robust Particle Filter [68.8204255655161]
We propose CE-BASS, a particle mixture Kalman filter which is robust to both innovative and additive outliers, and able to fully capture multi-modality in the distribution of the hidden state.
Furthermore, the particle sampling approach re-samples past states, which enables CE-BASS to handle innovative outliers which are not immediately visible in the observations, such as trend changes.
arXiv Detail & Related papers (2020-07-07T07:11:09Z)
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
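Dynamically steering a sparsity-inducing regularizer toward a target sparsity can be pictured as a simple feedback rule: raise the penalty when the network is not yet sparse enough, lower it when it overshoots. The controller below is a hypothetical sketch of that feedback idea, not the mechanism proposed in the paper.

```python
def update_reg_strength(lam, current_sparsity, target_sparsity, gain=0.1):
    """One feedback step on the regularization coefficient: move lam in
    proportion to the gap between target and observed sparsity, and keep
    it non-negative. `gain` is an illustrative step size."""
    return max(0.0, lam + gain * (target_sparsity - current_sparsity))
```

Called once per epoch with the measured fraction of near-zero filters, such a rule converges on a penalty strength that holds the network near the desired sparsity instead of relying on a hand-tuned fixed coefficient.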
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.