Chicken Swarm Kernel Particle Filter: A Structured Rejuvenation Approach with KLD-Efficient Sampling
- URL: http://arxiv.org/abs/2511.12222v1
- Date: Sat, 15 Nov 2025 13:55:29 GMT
- Title: Chicken Swarm Kernel Particle Filter: A Structured Rejuvenation Approach with KLD-Efficient Sampling
- Authors: Hangshuo Tian
- Abstract summary: Particle filters (PFs) are often combined with swarm intelligence (SI) algorithms, such as Chicken Swarm Optimization (CSO). This paper investigates the theoretical interaction between SI-based rejuvenation kernels and Kullback--Leibler divergence (KLD) sampling.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Particle filters (PFs) are often combined with swarm intelligence (SI) algorithms, such as Chicken Swarm Optimization (CSO), for particle rejuvenation. Separately, Kullback--Leibler divergence (KLD) sampling is a common strategy for adaptively sizing the particle set. However, the theoretical interaction between SI-based rejuvenation kernels and KLD-based adaptive sampling is not yet fully understood. This paper investigates this specific interaction. We analyze, under a simplified modeling framework, the effect of the CSO rejuvenation step on the particle set distribution. We propose that the fitness-driven updates inherent in CSO can be approximated as a form of mean-square contraction. This contraction tends to produce a particle distribution that is more concentrated than that of a baseline PF, or in mathematical terms, a distribution that is plausibly more ``peaked'' in a majorization sense. By applying Karamata's inequality to the concave function that governs the expected bin occupancy in KLD-sampling, our analysis suggests a connection: under the stated assumptions, the CSO-enhanced PF (CPF) is expected to require a lower \emph{expected} particle count than the standard PF to satisfy the same statistical error bound. The goal of this study is not to provide a fully general proof, but rather to offer a tractable theoretical framework that helps to interpret the computational efficiency empirically observed when combining these techniques, and to provide a starting point for designing more efficient adaptive filters.
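The abstract's argument can be illustrated with a toy experiment: contracting a particle set toward a central estimate reduces the number of occupied histogram bins k, and the KLD-sampling bound of Fox (2003) is increasing in k, so the contracted set satisfies the same (epsilon, delta) error bound with fewer particles. Below is a minimal sketch, assuming a one-dimensional state, an arbitrary contraction factor of 0.5 standing in for the actual fitness-driven CSO update, and illustrative helper names (`kld_bound`, `occupied_bins`) that are not from the paper:

```python
import math
import random

def kld_bound(k, epsilon=0.05, z=1.645):
    # Fox's KLD-sampling particle-count bound, using the Wilson-Hilferty
    # approximation of the chi-square quantile; k = number of occupied bins,
    # z = standard-normal quantile for confidence 1 - delta.
    if k < 2:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return math.ceil((k - 1) / (2.0 * epsilon) * (1.0 - a + math.sqrt(a) * z) ** 3)

def occupied_bins(particles, bin_width=0.25):
    # Count distinct histogram bins touched by the particle set.
    return len({math.floor(x / bin_width) for x in particles})

random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(500)]

# Toy mean-square contraction toward the swarm's best estimate,
# standing in for the CSO rejuvenation step (factor 0.5 is arbitrary).
best = sum(baseline) / len(baseline)
contracted = [best + 0.5 * (x - best) for x in baseline]

k_pf, k_cpf = occupied_bins(baseline), occupied_bins(contracted)
n_pf, n_cpf = kld_bound(k_pf), kld_bound(k_cpf)
print(f"bins: PF={k_pf}, CPF={k_cpf}; required particles: PF={n_pf}, CPF={n_cpf}")
```

The contracted ("more peaked" in the majorization sense) set occupies fewer bins, and since the bound grows with k, its expected required particle count is lower, matching the paper's claimed direction of effect.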
Related papers
- Entropy-Based Dimension-Free Convergence and Loss-Adaptive Schedules for Diffusion Models [3.2091923314854416]
Diffusion generative models synthesize samples by discretizing reverse-time dynamics driven by a learned score (or denoiser). We develop an information-theoretic approach to dimension-free convergence that avoids geometric assumptions. We also propose a Loss-Adaptive Schedule (LAS) for efficient discretization of the reverse SDE, which is lightweight and relies only on the training loss.
arXiv Detail & Related papers (2026-01-29T16:28:21Z)
- Goal inference with Rao-Blackwellized Particle Filters [5.633221187382381]
Inferring the eventual goal of a mobile agent from noisy observations of its trajectory is a fundamental estimation problem. We study such intent inference using a variant of a Rao-Blackwellized Particle Filter (RBPF). We quantify how well the adversary can recover the agent's intent using information-theoretic leakage metrics.
arXiv Detail & Related papers (2025-12-10T02:48:55Z)
- Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling [70.8832906871441]
We study how to steer generation toward desired rewards without retraining the models. Prior methods typically resample or filter within a single denoising trajectory, optimizing rewards step-by-step without trajectory-level refinement. We introduce particle Gibbs sampling for diffusion language models (PG-DLM), a novel inference-time algorithm enabling trajectory-level refinement while preserving generation perplexity.
arXiv Detail & Related papers (2025-07-11T08:00:47Z)
- Progressive Inference-Time Annealing of Diffusion Models for Sampling from Boltzmann Densities [93.13866975467549]
We propose Progressive Inference-Time Annealing (PITA) to learn diffusion-based samplers. PITA combines two complementary techniques: annealing of the Boltzmann distribution and diffusion smoothing. It enables equilibrium sampling of N-body particle systems, Alanine Dipeptide, and tripeptides in Cartesian coordinates.
arXiv Detail & Related papers (2025-06-19T17:14:22Z)
- Adversarial Transform Particle Filters [11.330617592263744]
The particle filter (PF) and the ensemble Kalman filter (EnKF) are widely used for approximate inference in state-space models. We propose the Adversarial Transform Particle Filter (ATPF), a novel filtering framework that combines the strengths of the PF and the EnKF through adversarial learning.
arXiv Detail & Related papers (2025-02-10T05:31:35Z)
- Optimal Particle-based Approximation of Discrete Distributions (OPAD) [7.127829790714167]
We show that for any set of particles, there is a unique weighting mechanism that minimizes the Kullback-Leibler (KL) divergence of the (particle-based) approximation from the target distribution. We show that the optimal weights can be determined based on values that any existing particle-based method already computes.
arXiv Detail & Related papers (2024-11-30T17:36:59Z)
- Nonlinear Filtering with Brenier Optimal Transport Maps [6.048942663334447]
This paper is concerned with the problem of nonlinear filtering, i.e., computing the conditional distribution of the state of a dynamical system. Conventional sequential importance resampling (SIR) particle filters suffer from fundamental limitations in scenarios involving degenerate likelihoods or high-dimensional states. In this paper, we explore an alternative method based on estimating the Brenier optimal transport (OT) map from the current prior distribution of the state to the posterior distribution at the next time step.
arXiv Detail & Related papers (2023-10-21T01:34:30Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- PAPAL: A Provable PArticle-based Primal-Dual ALgorithm for Mixed Nash Equilibrium [58.26573117273626]
We consider the non-convex non-concave objective function in two-player zero-sum continuous games. We present novel insights into particle-based algorithms for continuous distribution strategies.
arXiv Detail & Related papers (2023-03-02T05:08:15Z)
- Sampling with Mollified Interaction Energy Descent [57.00583139477843]
We present a new optimization-based method for sampling called mollified interaction energy descent (MIED). MIED minimizes a new class of energies on probability measures called mollified interaction energies (MIEs).
We show experimentally that for unconstrained sampling problems our algorithm performs on par with existing particle-based algorithms like SVGD.
arXiv Detail & Related papers (2022-10-24T16:54:18Z)
- Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
arXiv Detail & Related papers (2021-10-06T16:58:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.