X-REFINE: XAI-based RElevance input-Filtering and archItecture fiNe-tuning for channel Estimation
- URL: http://arxiv.org/abs/2602.22277v1
- Date: Wed, 25 Feb 2026 10:20:26 GMT
- Title: X-REFINE: XAI-based RElevance input-Filtering and archItecture fiNe-tuning for channel Estimation
- Authors: Abdul Karim Gizzini, Yahia Medjahdi,
- Abstract summary: We propose X-REFINE, an XAI-based framework for joint input-filtering and architecture fine-tuning. X-REFINE backpropagates predictions to derive high-resolution relevance scores for both subcarriers and hidden neurons.
- Score: 0.2578242050187029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI-native architectures are vital for 6G wireless communications. The black-box nature and high complexity of deep learning models employed in critical applications, such as channel estimation, limit their practical deployment. While perturbation-based XAI solutions offer input filtering, they often neglect internal structural optimization. We propose X-REFINE, an XAI-based framework for joint input-filtering and architecture fine-tuning. By utilizing a decomposition-based, sign-stabilized LRP epsilon rule, X-REFINE backpropagates predictions to derive high-resolution relevance scores for both subcarriers and hidden neurons. This enables a holistic optimization that identifies the most faithful model components. Simulation results demonstrate that X-REFINE achieves a superior interpretability-performance-complexity trade-off, significantly reducing computational complexity while maintaining robust bit error rate (BER) performance across different scenarios.
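The abstract names a decomposition-based, sign-stabilized LRP-epsilon rule but does not spell out its form. As a point of reference, below is a minimal NumPy sketch of the standard LRP-epsilon rule for a single dense layer, with a sign-stabilized denominator; the layer shapes, the `epsilon` value, and the subcarrier framing are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, epsilon=1e-6):
    """Standard LRP-epsilon rule for one dense layer z = W @ a + b.

    a     : (n_in,)        input activations (e.g., per-subcarrier features)
    W     : (n_out, n_in)  layer weights
    b     : (n_out,)       layer bias
    R_out : (n_out,)       relevance arriving from the layer above
    Returns R_in : (n_in,) relevance redistributed to the inputs.
    """
    z = W @ a + b                                     # pre-activations
    z_stab = z + epsilon * np.sign(z)                 # sign-stabilized denominator
    z_stab = np.where(z_stab == 0, epsilon, z_stab)   # guard exact zeros
    s = R_out / z_stab                                # normalized relevance
    return a * (W.T @ s)                              # redistribute to inputs

# Toy usage: relevance of 8 "subcarrier" inputs through one hidden layer.
rng = np.random.default_rng(0)
a = rng.normal(size=8)
W = rng.normal(size=(4, 8))
b = rng.normal(size=4)
R_out = rng.random(4)
print(lrp_epsilon(a, W, b, R_out))  # per-input relevance scores
```

Applying the same redistribution layer by layer yields per-neuron relevance, which is the kind of signal the abstract says X-REFINE uses to filter both inputs and hidden units.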
Related papers
- Channel-Adaptive Edge AI: Maximizing Inference Throughput by Adapting Computational Complexity to Channel States [31.472509140661796]
Integrated communication and computation (IC$^2$) has emerged as a new paradigm for enabling efficient edge inference in 6G networks. The end-to-end (E2E) inference accuracy metric is highly complicated, as it must account for both channel distortion and the artificial intelligence (AI) model's architecture and computational complexity. We develop a tractable analytical model for E2E inference accuracy and leverage it to design a channel-adaptive AI algorithm that maximizes inference throughput.
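To make "adapting computational complexity to channel states" concrete, here is a deliberately toy sketch that picks a model variant from the measured SNR. The variants, thresholds, and the assumption that a better channel justifies more compute are all invented for illustration and are not the paper's analytical model.

```python
# Hypothetical channel-adaptive model selection. All numbers are illustrative.
MODEL_VARIANTS = [
    # (name, relative FLOPs, minimum SNR in dB at which it is assumed to pay off)
    ("tiny",   1.0,  -5.0),
    ("small",  2.5,   0.0),
    ("base",   6.0,   8.0),
    ("large", 14.0,  15.0),
]

def select_variant(snr_db: float) -> str:
    """Return the largest variant whose assumed SNR requirement is met."""
    chosen = MODEL_VARIANTS[0][0]
    for name, _flops, min_snr in MODEL_VARIANTS:
        if snr_db >= min_snr:
            chosen = name  # assumption: better channel -> spend more compute
    return chosen

for snr in (-10, 3, 12, 20):
    print(snr, "dB ->", select_variant(snr))
```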
arXiv Detail & Related papers (2026-03-03T16:33:29Z) - Benchmarking Long Roll-outs of Auto-regressive Neural Operators for the Compressible Navier-Stokes Equations with Conserved Quantity Correction [4.935495275426904]
We present conserved quantity correction, a model-agnostic technique for incorporating physical conservation criteria within deep learning models. Results demonstrate consistent improvements in the long-term stability of auto-regressive neural operator models, regardless of the model architecture.
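The snippet names the correction but not its form. One common model-agnostic realization, assumed here purely for illustration, is to project each auto-regressive prediction back onto the constraint set by uniformly redistributing any deficit in the conserved total:

```python
import numpy as np

def correct_conserved_quantity(pred: np.ndarray, target_total: float) -> np.ndarray:
    """Shift a predicted field uniformly so its sum matches a conserved total.

    A post-step projection: whatever the network predicts, the corrected
    field exactly conserves the tracked quantity (e.g., total mass).
    """
    deficit = target_total - pred.sum()
    return pred + deficit / pred.size  # distribute the deficit uniformly

# Auto-regressive rollout with correction after every step (toy dynamics).
state = np.ones((16, 16))            # initial field; conserved total = 256
total = state.sum()
rng = np.random.default_rng(1)
for _ in range(100):
    state = state + 0.01 * rng.normal(size=state.shape)  # stand-in for a model
    state = correct_conserved_quantity(state, total)     # enforce conservation
print(abs(state.sum() - total))      # ~0 up to float round-off
```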
arXiv Detail & Related papers (2026-01-30T04:27:29Z) - Mixture of Ranks with Degradation-Aware Routing for One-Step Real-World Image Super-Resolution [76.66229730098759]
In real-world image super-resolution (Real-ISR), existing approaches mainly rely on fine-tuning pre-trained diffusion models. We propose a Mixture-of-Ranks (MoR) architecture for single-step image super-resolution. We introduce a fine-grained expert-partitioning strategy that treats each rank in LoRA as an independent expert.
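To make "each rank in LoRA as an independent expert" concrete, here is a minimal PyTorch sketch in which a gate weights the rank-1 components of a LoRA update individually; the gating network, dimensions, and initialization are assumptions for illustration, not the paper's MoR design.

```python
import torch
import torch.nn as nn

class MixtureOfRanksLinear(nn.Module):
    """LoRA-style adapter whose r rank-1 updates are gated individually.

    Each rank-1 term B[:, i] A[i, :] acts as one "expert"; a small gate
    produces per-rank weights from the input.
    """
    def __init__(self, d_in: int, d_out: int, r: int = 8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)   # stands in for a frozen pre-trained layer
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))
        self.gate = nn.Linear(d_in, r)       # per-rank routing scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = torch.softmax(self.gate(x), dim=-1)   # (batch, r) rank weights
        h = x @ self.A.t()                        # (batch, r) rank activations
        delta = (g * h) @ self.B.t()              # gated low-rank update
        return self.base(x) + delta

layer = MixtureOfRanksLinear(64, 64, r=8)
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```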
arXiv Detail & Related papers (2025-11-20T04:11:44Z) - Unlocking Symbol-Level Precoding Efficiency Through Tensor Equivariant Neural Network [84.22115118596741]
We propose an end-to-end deep learning (DL) framework with low inference complexity for symbol-level precoding (SLP). We show that the proposed framework captures substantial performance gains of optimal SLP while achieving an approximately 80-fold speedup over conventional methods.
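The abstract does not define the tensor-equivariant layers. A standard building block in such designs is a permutation-equivariant map with weights shared across one tensor axis (here, hypothetically, a user axis), which is one plausible source of the low inference complexity:

```python
import torch
import torch.nn as nn

class PermutationEquivariantLayer(nn.Module):
    """y_i = relu(phi(x_i) + psi(mean_j x_j)): permuting users permutes outputs identically.

    Weight sharing across the user axis keeps the parameter count independent
    of the number of users.
    """
    def __init__(self, d: int):
        super().__init__()
        self.phi = nn.Linear(d, d)   # applied to each user independently
        self.psi = nn.Linear(d, d)   # applied to the pooled context

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_users, d)
        context = x.mean(dim=1, keepdim=True)       # permutation-invariant pooling
        return torch.relu(self.phi(x) + self.psi(context))

layer = PermutationEquivariantLayer(d=16)
x = torch.randn(2, 5, 16)                            # 5 users
perm = torch.randperm(5)
out_then_perm = layer(x)[:, perm]
perm_then_out = layer(x[:, perm])
print(torch.allclose(out_then_perm, perm_then_out, atol=1e-6))  # True
```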
arXiv Detail & Related papers (2025-10-02T15:15:50Z) - Spectra-to-Structure and Structure-to-Spectra Inference Across the Periodic Table [49.65586812435899]
XAStruct is a learning-based system capable of both predicting XAS spectra from crystal structures and inferring local structural descriptors from XAS input. XAStruct is trained on a large-scale dataset spanning over 70 elements across the periodic table.
arXiv Detail & Related papers (2025-06-13T15:58:05Z) - FX-DARTS: Designing Topology-unconstrained Architectures with Differentiable Architecture Search and Entropy-based Super-network Shrinking [19.98065888943856]
Strong priors are imposed on the search space of Differentiable Architecture Search (DARTS). This paper aims to reduce these prior constraints by eliminating restrictions on cell topology and modifying the discretization mechanism for super-networks. FX-DARTS is capable of exploring a set of neural architectures with competitive trade-offs between performance and computational complexity.
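One plausible reading of "entropy-based super-network shrinking" (the abstract gives no formula) is to track the entropy of each edge's softmaxed architecture weights and discard the weakest candidate operation once the distribution becomes confident. The sketch below implements that reading; the threshold and weights are illustrative.

```python
import numpy as np

def shrink_edge(arch_logits: np.ndarray, ops: list, entropy_threshold: float = 1.0):
    """Drop the least-likely candidate op once the edge's choice is confident.

    arch_logits : architecture parameters for one super-network edge
    ops         : candidate operation names for that edge
    Returns the (possibly) shrunk (logits, ops) pair.
    """
    p = np.exp(arch_logits - arch_logits.max())
    p /= p.sum()                                    # softmax over candidate ops
    entropy = -(p * np.log(p + 1e-12)).sum()        # confidence of the edge
    if entropy < entropy_threshold and len(ops) > 1:
        worst = int(p.argmin())                     # prune weakest candidate
        keep = [i for i in range(len(ops)) if i != worst]
        return arch_logits[keep], [ops[i] for i in keep]
    return arch_logits, ops

logits = np.array([2.5, 0.1, -1.0, 0.3])
ops = ["conv3x3", "conv5x5", "skip", "maxpool"]
logits, ops = shrink_edge(logits, ops)
print(ops)  # 'skip' (lowest probability) removed: the edge is already confident
```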
arXiv Detail & Related papers (2025-04-25T08:34:29Z) - ZeroLM: Data-Free Transformer Architecture Search for Language Models [54.83882149157548]
Current automated proxy discovery approaches suffer from extended search times, susceptibility to data overfitting, and structural complexity. This paper introduces a novel zero-cost proxy methodology that quantifies model capacity through efficient weight statistics. Our evaluation demonstrates the superiority of this approach, achieving a Spearman's rho of 0.76 and a Kendall's tau of 0.53 on the FlexiBERT benchmark.
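The abstract says the proxy is computed from "efficient weight statistics" without specifying which. The stand-in below scores a randomly initialized model from per-layer weight norms with no data and no training; the particular statistic (square root of parameter count times weight spread) is an assumption for illustration.

```python
import torch
import torch.nn as nn

def weight_statistics_proxy(model: nn.Module) -> float:
    """Data-free capacity score from weight statistics at initialization.

    No forward pass, no labels: just aggregate per-layer norm statistics,
    in the spirit of zero-cost proxies (the exact statistic is illustrative).
    """
    score = 0.0
    for module in model.modules():
        if isinstance(module, nn.Linear):
            w = module.weight
            score += w.numel() ** 0.5 * w.std().item()  # scale x spread
    return score

small = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
large = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 256),
                      nn.ReLU(), nn.Linear(256, 10))
print(weight_statistics_proxy(small) < weight_statistics_proxy(large))  # True
```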
arXiv Detail & Related papers (2025-03-24T13:11:22Z) - Explainable AI for Enhancing Efficiency of DL-based Channel Estimation [1.0136215038345013]
Support for artificial intelligence (AI)-based decision-making is a key element of future 6G networks. In such applications, using AI as black-box models is risky and challenging. We propose a novel XAI-CHEST framework that is oriented toward channel estimation in wireless communications.
arXiv Detail & Related papers (2024-07-09T16:24:21Z) - Input Convex Lipschitz RNN: A Fast and Robust Approach for Engineering Tasks [14.835081385422653]
We introduce a novel network architecture, termed Input Convex Lipschitz Recurrent Neural Networks (ICLRNNs). This architecture seamlessly integrates the benefits of convexity and Lipschitz continuity, enabling fast and robust neural network-based modeling and optimization. It has been successfully applied to practical engineering scenarios, such as the modeling and control of chemical processes and real-world solar irradiance prediction for solar PV system planning.
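The snippet states the two properties but not how they are enforced. In the input-convex literature, convexity typically comes from non-negative hidden-to-hidden weights with convex non-decreasing activations, and a Lipschitz bound from constraining weight norms; the feed-forward (non-recurrent) sketch below combines both constraints as an assumed simplification of the ICLRNN idea.

```python
import torch
import torch.nn as nn

class InputConvexLipschitzMLP(nn.Module):
    """f(x) convex in x and globally Lipschitz (feed-forward simplification).

    Convexity: hidden-to-output weights kept non-negative (softplus
               reparameterization) with convex, non-decreasing activations.
    Lipschitz: every weight matrix rescaled to spectral norm <= 1.
    """
    def __init__(self, d_in: int, d_hidden: int):
        super().__init__()
        self.W0 = nn.Linear(d_in, d_hidden)          # first layer: no sign constraint
        self.Wz_raw = nn.Parameter(torch.randn(1, d_hidden) * 0.1)
        self.Wx = nn.Linear(d_in, 1)                 # affine skip from the input

    @staticmethod
    def _unit_spectral_norm(W: torch.Tensor) -> torch.Tensor:
        return W / torch.linalg.matrix_norm(W, ord=2).clamp(min=1.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        W0 = self._unit_spectral_norm(self.W0.weight)
        z = torch.relu(x @ W0.t() + self.W0.bias)    # convex, non-decreasing
        Wz = self._unit_spectral_norm(nn.functional.softplus(self.Wz_raw))
        return z @ Wz.t() + self.Wx(x)               # non-neg weights keep convexity

model = InputConvexLipschitzMLP(4, 16)
x1, x2 = torch.randn(1, 4), torch.randn(1, 4)
mid = model((x1 + x2) / 2)
avg = (model(x1) + model(x2)) / 2
print((mid <= avg + 1e-6).all())  # convexity check: f(midpoint) <= average
```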
arXiv Detail & Related papers (2024-01-15T06:26:53Z) - REX: Rapid Exploration and eXploitation for AI Agents [103.68453326880456]
We propose an enhanced approach for Rapid Exploration and eXploitation for AI Agents called REX.
REX introduces an additional layer of rewards and integrates concepts similar to Upper Confidence Bound (UCB) scores, leading to more robust and efficient AI agent performance.
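For reference, the classic UCB1 rule that such scores build on picks the action maximizing mean reward plus an exploration bonus. Treating agent actions as bandit arms, as below, is an assumed simplification of REX rather than its exact algorithm.

```python
import math
import random

def ucb1_select(counts, rewards, t, c=1.4):
    """UCB1: pick argmax of mean reward + exploration bonus.

    counts[i]  : times action i was tried
    rewards[i] : cumulative reward of action i
    t          : total number of trials so far
    """
    best, best_score = 0, float("-inf")
    for i, (n, r) in enumerate(zip(counts, rewards)):
        if n == 0:
            return i                       # try every action at least once
        score = r / n + c * math.sqrt(math.log(t) / n)
        if score > best_score:
            best, best_score = i, score
    return best

# Toy: three candidate actions with hidden success rates.
random.seed(0)
p = [0.2, 0.5, 0.8]
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
for t in range(1, 501):
    a = ucb1_select(counts, rewards, t)
    counts[a] += 1
    rewards[a] += float(random.random() < p[a])
print(counts)  # the best action (index 2) dominates after initial exploration
```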
arXiv Detail & Related papers (2023-07-18T04:26:33Z) - Optimizing Explanations by Network Canonization and Hyperparameter Search [74.76732413972005]
Rule-based and modified backpropagation XAI approaches often face challenges when applied to modern model architectures.
Model canonization is the process of restructuring a model to disregard problematic components without changing the underlying function.
In this work, we propose canonizations for currently relevant model blocks applicable to popular deep neural network architectures.
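A canonical example of canonization, shown below, is folding a BatchNorm layer into the preceding convolution: the network's structure changes but its function does not, and rule-based XAI methods then see one clean linear layer. This illustrates the general idea, not the specific canonizations proposed in the paper.

```python
import torch
import torch.nn as nn

def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Return a single Conv2d equal to bn(conv(x)) in eval mode."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)   # per-channel scale
    with torch.no_grad():
        fused.weight.copy_(conv.weight * scale.view(-1, 1, 1, 1))
        bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
        fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused

conv, bn = nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8)
conv.eval()
bn.eval()
x = torch.randn(1, 3, 16, 16)
fused = fold_bn_into_conv(conv, bn)
print(torch.allclose(bn(conv(x)), fused(x), atol=1e-5))  # True: same function
```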
arXiv Detail & Related papers (2022-11-30T17:17:55Z) - Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
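For readers unfamiliar with extra-gradient methods: the core update (sketched below without the paper's variance reduction) takes a lookahead step and then updates using gradients evaluated at the lookahead point, which stabilizes min-max problems where plain simultaneous gradient descent-ascent spirals away.

```python
def extragradient(grad_x, grad_y, x, y, lr=0.3, steps=300):
    """Extra-gradient for min_x max_y f(x, y): probe, then update from the probe."""
    for _ in range(steps):
        # 1) lookahead (probe) step
        x_half = x - lr * grad_x(x, y)
        y_half = y + lr * grad_y(x, y)
        # 2) real update uses gradients evaluated at the probe point
        x = x - lr * grad_x(x_half, y_half)
        y = y + lr * grad_y(x_half, y_half)
    return x, y

# Toy bilinear game f(x, y) = x * y, saddle point at (0, 0).
gx = lambda x, y: y      # df/dx
gy = lambda x, y: x      # df/dy
x, y = extragradient(gx, gy, 1.0, 1.0)
print(abs(x) < 1e-3 and abs(y) < 1e-3)  # True: converges to the saddle point
```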
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.