Targeted Fine-Tuning of DNN-Based Receivers via Influence Functions
- URL: http://arxiv.org/abs/2509.15950v1
- Date: Fri, 19 Sep 2025 13:01:30 GMT
- Title: Targeted Fine-Tuning of DNN-Based Receivers via Influence Functions
- Authors: Marko Tuononen, Heikki Penttinen, Ville Hautamäki
- Abstract summary: We present the first use of influence functions for deep learning-based wireless receivers. We show that loss-relative influence with capacity-like binary cross-entropy loss and first-order updates on beneficial samples most consistently improves bit error rate toward genie-aided performance.
- Score: 7.18961038438762
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the first use of influence functions for deep learning-based wireless receivers. Applied to DeepRx, a fully convolutional receiver, influence analysis reveals which training samples drive bit predictions, enabling targeted fine-tuning of poorly performing cases. We show that loss-relative influence with capacity-like binary cross-entropy loss and first-order updates on beneficial samples most consistently improves bit error rate toward genie-aided performance, outperforming random fine-tuning in single-target scenarios. Multi-target adaptation proved less effective, underscoring open challenges. Beyond experiments, we connect influence to self-influence corrections and propose a second-order, influence-aligned update strategy. Our results establish influence functions as both an interpretability tool and a basis for efficient receiver adaptation.
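The abstract describes scoring training samples by their effect on a target sample's loss and then fine-tuning only on the beneficial ones. Below is a minimal first-order sketch of that idea (gradient-alignment influence, as in TracIn-style methods), not the paper's exact pipeline: `model` and `loss_fn` are hypothetical stand-ins for DeepRx and its capacity-like binary cross-entropy on soft bits.

```python
# First-order influence sketch: a training sample is "beneficial" for a
# target sample if their loss gradients align, because a gradient step on
# it is then expected to reduce the target loss.
import torch

def flat_grad(loss, params):
    """Gradient of `loss` w.r.t. `params`, flattened into one vector."""
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def influence_scores(model, loss_fn, target_batch, train_samples):
    """Approximate each training sample's effect on the target loss.
    Positive score => a step on that sample should reduce the target loss."""
    params = [p for p in model.parameters() if p.requires_grad]
    x_t, y_t = target_batch
    g_target = flat_grad(loss_fn(model(x_t), y_t), params)
    return [torch.dot(g_target,
                      flat_grad(loss_fn(model(x), y), params)).item()
            for x, y in train_samples]

def finetune_on_beneficial(model, loss_fn, train_samples, scores, lr=1e-4):
    """One round of first-order updates using only beneficial samples."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for (x, y), s in zip(train_samples, scores):
        if s > 0:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
```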
Related papers
- Better Hessians Matter: Studying the Impact of Curvature Approximations in Influence Functions [5.937280131734114]
We investigate the effect of Hessian approximation quality on influence-function attributions in a controlled classification setting.
Our experiments show that better Hessian approximations consistently yield better influence score quality.
We further decompose the approximation steps for recent Hessian approximation methods and evaluate each step's influence on attribution accuracy.
arXiv Detail & Related papers (2025-09-27T18:12:35Z)
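For context, the classical influence score this line of work approximates involves an inverse Hessian (Koh & Liang, 2017). A toy setting where the Hessian is exact, L2-regularized logistic regression, makes the role of Hessian quality concrete; the data below is synthetic for demonstration only.

```python
# Exact-Hessian influence for logistic regression:
#   I(z_i, z_test) = -grad L(z_test)^T H^{-1} grad L(z_i)
# Positive I means upweighting z_i would increase the test loss.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1e-2
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + 0.1 * rng.normal(size=n) > 0).astype(float)

# Fit by Newton's method; H is the exact regularized Hessian.
theta = np.zeros(d)
for _ in range(50):
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    grad = X.T @ (p - y) / n + lam * theta
    H = (X * (p * (1 - p))[:, None]).T @ X / n + lam * np.eye(d)
    theta -= np.linalg.solve(H, grad)

# Influence of every training point on the loss at one probe point
# (here we reuse training point 0 as the probe, purely for illustration).
x_t, y_t = X[0], y[0]
p_t = 1.0 / (1.0 + np.exp(-x_t @ theta))
g_test = (p_t - y_t) * x_t
H_inv_g = np.linalg.solve(H, g_test)           # the step a Hessian
influences = -((p - y)[:, None] * X) @ H_inv_g  # approximation would perturb
print("most harmful training index:", int(np.argmax(influences)))
```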
- Efficient Test-time Adaptive Object Detection via Sensitivity-Guided Pruning [73.40364018029673]
Continual test-time adaptive object detection (CTTA-OD) aims to adapt a source pre-trained detector online to ever-changing environments.
Our motivation stems from the observation that not all learned source features are beneficial.
Our method achieves superior adaptation performance while reducing computational overhead by 12% in FLOPs.
arXiv Detail & Related papers (2025-06-03T05:27:56Z)
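The abstract does not spell out the sensitivity criterion, so here is a generic first-order Taylor channel-importance sketch (in the spirit of Molchanov et al.) that prunes the least beneficial output channels; the paper's actual CTTA-OD criterion may differ, and the pruned channel fraction is not the same thing as a FLOPs reduction.

```python
# Sensitivity-guided channel masking sketch: score each output channel by
# |w * dL/dw| and zero out the lowest-scoring fraction.
import torch
import torch.nn as nn

def taylor_channel_importance(conv: nn.Conv2d) -> torch.Tensor:
    """Per-output-channel |weight * grad| sum; assumes .backward() ran."""
    return (conv.weight * conv.weight.grad).abs().sum(dim=(1, 2, 3))

def mask_least_sensitive(conv: nn.Conv2d, frac: float = 0.25):
    """Zero the lowest-importance fraction of output channels."""
    imp = taylor_channel_importance(conv)
    idx = imp.argsort()[: int(frac * imp.numel())]
    with torch.no_grad():
        conv.weight[idx] = 0.0
        if conv.bias is not None:
            conv.bias[idx] = 0.0

# Toy usage: one conv layer and a synthetic objective.
conv = nn.Conv2d(3, 16, 3, padding=1)
loss = conv(torch.randn(4, 3, 32, 32)).pow(2).mean()
loss.backward()
mask_least_sensitive(conv, frac=0.25)
```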
- Most Influential Subset Selection: Challenges, Promises, and Beyond [9.479235005673683]
We study the Most Influential Subset Selection (MISS) problem, which aims to identify a subset of training samples with the greatest collective influence.
We conduct a comprehensive analysis of the prevailing approaches in MISS, elucidating their strengths and weaknesses.
We demonstrate that an adaptive version of these approaches, which applies them iteratively, can effectively capture the interactions among samples.
arXiv Detail & Related papers (2024-09-25T20:00:23Z)
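A toy version of the "adaptive, iterative" idea: instead of ranking all samples once, re-score after each pick so the selection accounts for overlap between samples. This is a simplified sketch with plain gradient-dot-product influence, not the paper's exact algorithm.

```python
# Adaptive greedy subset selection: pick the sample best aligned with the
# remaining target direction, subtract it, repeat. The subtraction is a
# crude model of sample interactions.
import numpy as np

def greedy_miss(train_grads: np.ndarray, test_grad: np.ndarray, k: int):
    chosen, residual = [], test_grad.copy()
    for _ in range(k):
        scores = train_grads @ residual
        scores[chosen] = -np.inf          # never pick the same sample twice
        best = int(np.argmax(scores))
        chosen.append(best)
        residual = residual - train_grads[best]
    return chosen

grads = np.random.default_rng(1).normal(size=(100, 20))
target = grads[:5].sum(axis=0)            # planted 5-sample "subset"
print(greedy_miss(grads, target, k=5))
```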
- Exploring Example Influence in Continual Learning [26.85320841575249]
Continual Learning (CL) learns new tasks sequentially, as humans do, with the goal of achieving better Stability (S) and Plasticity (P).
It is valuable to explore how individual training examples differ in their influence on S and P, which may improve the learning pattern towards a better S-P trade-off.
We propose a simple yet effective MetaSP algorithm that simulates the two key steps of influence-function (IF) perturbation to obtain S- and P-aware example influence.
arXiv Detail & Related papers (2022-09-25T15:17:37Z)
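A first-order reading of "S- and P-aware example influence": score each new-task example by how well its gradient aligns with an old-task validation gradient (stability) and a new-task validation gradient (plasticity). MetaSP's actual meta-learning procedure is richer; this sketch assumes generic `model`/`loss_fn` stand-ins.

```python
# Dual influence sketch: gradient alignment of one example against old-task
# and new-task validation batches.
import torch

def sp_influence(model, loss_fn, example, old_val, new_val):
    params = [p for p in model.parameters() if p.requires_grad]
    def grad_of(batch):
        x, y = batch
        g = torch.autograd.grad(loss_fn(model(x), y), params)
        return torch.cat([t.reshape(-1) for t in g])
    g_ex = grad_of(example)
    s_score = torch.dot(grad_of(old_val), g_ex)  # >0: step helps stability
    p_score = torch.dot(grad_of(new_val), g_ex)  # >0: step helps plasticity
    return s_score.item(), p_score.item()
```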
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove theoretically that this scheme offsets the influence of user/item propensity on learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
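The cross-pairwise idea, as we read it: compare the summed scores of two observed (user, item) interactions against their item-swapped counterparts, so per-user and per-item exposure terms cancel in the difference. A hedged sketch of that objective follows; the paper's exact sampling and loss details may differ.

```python
# Cross-pairwise loss sketch: -log sigmoid((s_u1i1 + s_u2i2) - (s_u1i2 + s_u2i1)).
import torch
import torch.nn.functional as F

def cpr_loss(s_u1i1, s_u2i2, s_u1i2, s_u2i1):
    margin = (s_u1i1 + s_u2i2) - (s_u1i2 + s_u2i1)
    return F.softplus(-margin).mean()   # softplus(-x) == -log(sigmoid(x))

# Toy usage with random scores for a batch of two cross pairs.
s = torch.randn(4, 2)
print(cpr_loss(s[0], s[1], s[2], s[3]).item())
```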
- Inf-CP: A Reliable Channel Pruning based on Channel Influence [4.692400531340393]
One of the most effective methods of channel pruning is to trim on the basis of the importance of each neuron.
Previous works have proposed to trim by considering the statistics of a single layer or of several successive layers of neurons.
We propose to use ensemble learning to train a model for different batches of data.
arXiv Detail & Related papers (2021-12-05T09:30:43Z)
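One direct way to measure a channel's influence is ablation: zero the channel and see how much the loss degrades, averaged over several data batches (an averaging-over-batches flavor of the abstract's ensemble idea; Inf-CP's actual estimator may differ). The sketch assumes `conv` is a layer inside `model`.

```python
# Ablation-based channel influence: higher score = more loss damage when
# the channel is removed = more important channel.
import torch
import torch.nn as nn

@torch.no_grad()
def channel_influence(model, conv: nn.Conv2d, loss_fn, batches):
    base_w = conv.weight.clone()
    scores = torch.zeros(conv.out_channels)
    for c in range(conv.out_channels):
        conv.weight[c] = 0.0                 # ablate channel c
        for x, y in batches:
            scores[c] += loss_fn(model(x), y).item()
        conv.weight.copy_(base_w)            # restore original weights
    return scores / len(batches)
```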
- FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging [112.19994766375231]
Influence functions approximate the 'influences' of training data-points on test predictions.
We present FastIF, a set of simple modifications to influence functions that significantly improves their run-time.
Our experiments demonstrate the potential of influence functions in model interpretation and correcting model errors.
arXiv Detail & Related papers (2020-12-31T18:02:34Z)
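One of FastIF's modifications is to score influence only for the k training points nearest to the test point in feature space, rather than the whole training set. The candidate-restriction step alone looks roughly like the sketch below (shown with scikit-learn); FastIF's full recipe also speeds up the inverse-Hessian-vector products.

```python
# kNN candidate restriction: compute influence only for nearby points.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_candidates(train_feats: np.ndarray, test_feat: np.ndarray, k=50):
    """Indices of the k training points closest to the test representation;
    expensive influence scores are then computed for these only."""
    index = NearestNeighbors(n_neighbors=k).fit(train_feats)
    _, idx = index.kneighbors(test_feat[None, :])
    return idx[0]

feats = np.random.default_rng(2).normal(size=(1000, 64))
print(knn_candidates(feats, feats[0], k=10))
```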
- Efficient Estimation of Influence of a Training Instance [56.29080605123304]
We propose an efficient method for estimating the influence of a training instance on a neural network model.
Our method is inspired by dropout, which zero-masks a sub-network and prevents the sub-network from learning each training instance.
We demonstrate that the proposed method can capture training influences, enhance the interpretability of error predictions, and cleanse the training dataset for improving generalization.
arXiv Detail & Related papers (2020-12-08T04:31:38Z)
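The dropout-inspired idea: a fixed zero-mask carves out a sub-network that was prevented from learning a given instance, and the prediction gap between the full network and that sub-network serves as an influence proxy. The sketch below shows only the fixed-mask evaluation scaffolding with a hypothetical tiny MLP, not the paired training scheme the paper describes.

```python
# Fixed-mask sub-network evaluation: the full/sub prediction gap proxies
# the effect of what the masked units learned.
import torch
import torch.nn as nn

class MaskedMLP(nn.Module):
    def __init__(self, mask: torch.Tensor):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(8, 16), nn.Linear(16, 1)
        self.mask = mask                  # fixed 0/1 mask over hidden units

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)) * self.mask)

mask = (torch.rand(16) > 0.5).float()
full, sub = MaskedMLP(torch.ones(16)), MaskedMLP(mask)
sub.load_state_dict(full.state_dict())    # same weights, different mask
x = torch.randn(4, 8)
influence_proxy = (full(x) - sub(x)).abs().mean()
print(influence_proxy.item())
```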
- Multi-Stage Influence Function [97.19210942277354]
We develop a multi-stage influence function score to track predictions from a finetuned model all the way back to the pretraining data.
We study two different scenarios with the pretrained embeddings fixed or updated in the finetuning tasks.
arXiv Detail & Related papers (2020-07-17T16:03:11Z)
- Influence Functions in Deep Learning Are Fragile [52.31375893260445]
Influence functions approximate the effect of training samples on test-time predictions.
Influence estimates are fairly accurate for shallow networks.
Hessian regularization is important to get high-quality influence estimates.
arXiv Detail & Related papers (2020-06-25T18:25:59Z)
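The Hessian regularization this paper highlights is usually realized as damping: compute influence with (H + λI)⁻¹ instead of H⁻¹, since deep-network Hessians are ill-conditioned or indefinite. A small explicit-matrix sketch:

```python
# Damped influence: larger lambda stabilizes the solve at the cost of
# fidelity to the true inverse Hessian.
import numpy as np

def damped_influence(H, g_train, g_test, lam=1e-2):
    """-g_test^T (H + lam*I)^{-1} g_train, the regularized influence score."""
    return -g_test @ np.linalg.solve(H + lam * np.eye(H.shape[0]), g_train)

rng = np.random.default_rng(3)
A = rng.normal(size=(10, 10))
H = A @ A.T / 10                         # PSD but nearly singular
g1, g2 = rng.normal(size=10), rng.normal(size=10)
for lam in (1e-4, 1e-2, 1.0):
    print(lam, damped_influence(H, g1, g2, lam))
```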