Contrastive Knowledge Transfer and Robust Optimization for Secure Alignment of Large Language Models
- URL: http://arxiv.org/abs/2510.27077v1
- Date: Fri, 31 Oct 2025 00:54:33 GMT
- Title: Contrastive Knowledge Transfer and Robust Optimization for Secure Alignment of Large Language Models
- Authors: Jiasen Zheng, Huajun Zhang, Xu Yan, Ran Hao, Chong Peng
- Abstract summary: This paper addresses the limitations of large-scale language models in safety alignment and robustness. It proposes a fine-tuning method that combines contrastive distillation with noise-robust training. Results show that the method significantly outperforms existing baselines in knowledge transfer, robustness, and overall safety.
- Score: 9.353236468990945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the limitations of large-scale language models in safety alignment and robustness by proposing a fine-tuning method that combines contrastive distillation with noise-robust training. The method freezes the backbone model and transfers the knowledge boundaries of the teacher model to the student model through distillation, thereby improving semantic consistency and alignment accuracy. At the same time, noise perturbations and robust optimization constraints are introduced during training to ensure that the model maintains stable predictive outputs under noisy and uncertain inputs. The overall framework consists of distillation loss, robustness loss, and a regularization term, forming a unified optimization objective that balances alignment ability with resistance to interference. To systematically validate its effectiveness, the study designs experiments from multiple perspectives, including distillation weight sensitivity, stability analysis under computation budgets and mixed-precision environments, and the impact of data noise and distribution shifts on model performance. Results show that the method significantly outperforms existing baselines in knowledge transfer, robustness, and overall safety, achieving the best performance across several key metrics. This work not only enriches the theoretical system of parameter-efficient fine-tuning but also provides a new solution for building safer and more trustworthy alignment mechanisms.
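The unified objective described in the abstract (a distillation loss, a robustness loss, and a regularization term) can be sketched in miniature. This is an illustrative toy, not the paper's implementation; the weights `alpha`, `beta`, `lam`, the temperature `tau`, and the L2 penalty on adapter parameters are all assumed choices.

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def unified_loss(student_logits, teacher_logits, noisy_student_logits,
                 adapter_weights, alpha=0.5, beta=0.3, lam=1e-4, tau=2.0):
    """Distillation + robustness + regularization in one objective."""
    # Distillation: match the student to the frozen teacher's soft targets.
    l_distill = kl_divergence(softmax(teacher_logits, tau),
                              softmax(student_logits, tau))
    # Robustness: keep predictions stable under input perturbation.
    l_robust = kl_divergence(softmax(student_logits),
                             softmax(noisy_student_logits))
    # Regularization: L2 penalty on the trainable (adapter) parameters only.
    l_reg = sum(w * w for w in adapter_weights)
    return alpha * l_distill + beta * l_robust + lam * l_reg
```

With identical student, teacher, and perturbed outputs and zero adapter weights, every term vanishes, which makes the balance among the three weights easy to probe in isolation.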
Related papers
- Not All Preferences Are Created Equal: Stability-Aware and Gradient-Efficient Alignment for Reasoning Models [52.48582333951919]
We propose a dynamic framework designed to enhance alignment reliability by maximizing the Signal-to-Noise Ratio of policy updates. SAGE (Stability-Aware Gradient Efficiency) integrates a coarse-grained curriculum mechanism that refreshes candidate pools based on model competence. Experiments on multiple mathematical reasoning benchmarks demonstrate that SAGE significantly accelerates convergence and outperforms static baselines.
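One way to read the Signal-to-Noise Ratio idea is to compute the SNR of a mini-batch of per-sample gradients and skip updates that fall below a threshold. The sketch below is a hypothetical interpretation of the summary, not SAGE's actual algorithm; `snr_threshold` and the scalar-gradient simplification are assumptions.

```python
import math

def gradient_snr(per_sample_grads):
    """SNR of a mini-batch of scalar gradients: |mean| / (std + eps)."""
    n = len(per_sample_grads)
    mean = sum(per_sample_grads) / n
    var = sum((g - mean) ** 2 for g in per_sample_grads) / n
    return abs(mean) / (math.sqrt(var) + 1e-8)

def stable_update(param, per_sample_grads, lr=0.1, snr_threshold=1.0):
    """Apply the averaged gradient only when its SNR clears a threshold;
    otherwise skip the update as too noisy (threshold is hypothetical)."""
    if gradient_snr(per_sample_grads) < snr_threshold:
        return param  # gradient signal drowned by noise; keep the parameter
    mean_grad = sum(per_sample_grads) / len(per_sample_grads)
    return param - lr * mean_grad
```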
arXiv Detail & Related papers (2026-02-01T12:56:10Z) - Learning to be Reproducible: Custom Loss Design for Robust Neural Networks [4.3094059981414405]
We propose a Custom Loss Function (CLF) that balances predictive accuracy with training stability. CLF significantly improves training stability without sacrificing predictive performance. These results establish CLF as an effective and efficient strategy for developing more stable, reliable, and trustworthy neural networks.
arXiv Detail & Related papers (2026-01-02T05:31:08Z) - Parameter-Efficient Fine-Tuning with Differential Privacy for Robust Instruction Adaptation in Large Language Models [11.071281023081582]
This study addresses the issues of privacy protection and efficiency in instruction fine-tuning of large-scale language models. It proposes a parameter-efficient method that integrates differential privacy noise allocation with gradient clipping in a collaborative optimization framework. Results show that the method outperforms baseline models in accuracy, privacy budget, and parameter efficiency, and maintains stable performance under diverse and uncertain data conditions.
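The clipping-plus-noise combination can be sketched in the style of standard DP-SGD aggregation, rather than the paper's specific allocation scheme; `clip_norm` and `noise_multiplier` are hypothetical settings.

```python
import math
import random

def clip_gradient(grad, clip_norm):
    """Scale a per-sample gradient vector to at most clip_norm in L2."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return [g * scale for g in grad]

def dp_noisy_mean(per_sample_grads, clip_norm=1.0, noise_multiplier=0.5,
                  rng=None):
    """Differentially private aggregation: clip each per-sample gradient,
    average, then add Gaussian noise calibrated to the clip norm.
    noise_multiplier trades privacy budget against utility."""
    rng = rng or random.Random(0)
    clipped = [clip_gradient(g, clip_norm) for g in per_sample_grads]
    n, dim = len(clipped), len(clipped[0])
    sigma = noise_multiplier * clip_norm / n
    return [sum(g[i] for g in clipped) / n + rng.gauss(0.0, sigma)
            for i in range(dim)]
```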
arXiv Detail & Related papers (2025-12-07T08:01:01Z) - SG-OIF: A Stability-Guided Online Influence Framework for Reliable Vision Data [6.4391040754741296]
In this paper, we introduce a Stability-Guided Online Influence Framework (SG-OIF) for approximating training-point influence on test predictions. We show that SG-OIF achieves 91.1% accuracy on the top 1% of prediction samples on CIFAR-10, and a 99.8% AUPR score on MNIST.
arXiv Detail & Related papers (2025-11-21T19:58:54Z) - LLM-Centric RAG with Multi-Granular Indexing and Confidence Constraints [5.2604064919135896]
This paper addresses the issues of insufficient coverage, unstable results, and limited reliability in retrieval-augmented generation under complex knowledge environments. It proposes a confidence control method that integrates multi-granularity memory indexing with uncertainty estimation. The results show that the method achieves superior performance over existing models in QA accuracy, retrieval recall, ranking quality, and factual consistency.
arXiv Detail & Related papers (2025-10-30T23:48:37Z) - MaP: A Unified Framework for Reliable Evaluation of Pre-training Dynamics [72.00014675808228]
Instability in the evaluation process of Large Language Models obscures true learning dynamics. We introduce MaP, a framework that integrates Merging and the Pass@k metric. Experiments show that MaP yields significantly smoother performance curves, reduces inter-run variance, and ensures more consistent rankings.
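The Pass@k metric has a standard unbiased estimator, and checkpoint merging can be as simple as uniform parameter averaging. The sketch below shows both as generic illustrations, not MaP's exact procedure.

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn from n generations (c of them correct) is correct."""
    if n - c < k:
        return 1.0  # too few incorrect samples to fill all k draws
    return 1.0 - comb(n - c, k) / comb(n, k)

def merge_checkpoints(checkpoints):
    """Uniform parameter averaging across checkpoint weight vectors."""
    n = len(checkpoints)
    return [sum(ws) / n for ws in zip(*checkpoints)]
```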
arXiv Detail & Related papers (2025-10-10T11:40:27Z) - Feed Two Birds with One Scone: Exploiting Function-Space Regularization for Both OOD Robustness and ID Fine-Tuning Performance [72.57668440744301]
We propose a novel regularization that constrains the distance between the fine-tuned and pre-trained model in function space using simulated OOD samples. Our approach consistently improves both downstream-task ID fine-tuning performance and OOD robustness across a variety of CLIP backbones.
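Function-space regularization can be sketched as a penalty on the output distance between the fine-tuned model and its frozen pre-trained counterpart on simulated OOD inputs. This is a minimal illustration under that reading; the mean-squared-distance form is an assumption.

```python
def function_space_penalty(finetuned_outputs, pretrained_outputs):
    """Mean squared distance between fine-tuned and frozen pre-trained
    model outputs on (simulated) OOD inputs. Anchoring the fine-tuned
    function near its initialization preserves robustness where no
    in-distribution supervision exists."""
    total, count = 0.0, 0
    for f_out, p_out in zip(finetuned_outputs, pretrained_outputs):
        total += sum((f - p) ** 2 for f, p in zip(f_out, p_out))
        count += len(f_out)
    return total / count
```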
arXiv Detail & Related papers (2025-08-31T12:14:34Z) - The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z) - Uncertainty-aware multi-fidelity surrogate modeling with noisy data [0.0]
In real-world applications, uncertainty is present in both high- and low-fidelity models due to measurement or numerical noise.
This paper introduces a comprehensive framework for multi-fidelity surrogate modeling that handles noise-contaminated data.
The proposed framework offers a natural approach to combining physical experiments and computational models.
arXiv Detail & Related papers (2024-01-12T08:37:41Z) - Towards Safe Multi-Task Bayesian Optimization [1.3654846342364308]
Reduced physical models of the system can be incorporated into the optimization process, accelerating it.
These models are able to offer an approximation of the actual system, and evaluating them is significantly cheaper.
Safety is a crucial criterion for online optimization methods such as Bayesian optimization.
arXiv Detail & Related papers (2023-12-12T13:59:26Z) - Improve Noise Tolerance of Robust Loss via Noise-Awareness [60.34670515595074]
We propose a meta-learning method capable of adaptively learning a hyperparameter prediction function, called the Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
We integrate four SOTA robust loss functions with our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in terms of both noise tolerance and performance.
arXiv Detail & Related papers (2023-01-18T04:54:58Z) - NoisyMix: Boosting Robustness by Combining Data Augmentations, Stability Training, and Noise Injections [46.745755900939216]
We introduce NoisyMix, a training scheme that combines data augmentations with stability training and noise injections to improve both model robustness and in-domain accuracy.
We demonstrate the benefits of NoisyMix on a range of benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P.
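Stability training with noise injections can be sketched as a consistency loss between predictions on clean and perturbed inputs. This is a generic illustration, not NoisyMix itself; the symmetric-KL form and the noise scale `sigma` are assumptions.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def inject_noise(x, sigma=0.1, rng=None):
    """Additive Gaussian input noise (sigma is a hypothetical setting)."""
    rng = rng or random.Random(0)
    return [xi + rng.gauss(0.0, sigma) for xi in x]

def stability_loss(clean_logits, noisy_logits):
    """Symmetric KL between predictions on clean and noise-injected
    inputs; drives the model toward noise-invariant outputs."""
    p, q = softmax(clean_logits), softmax(noisy_logits)
    kl = lambda a, b: sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    return 0.5 * (kl(p, q) + kl(q, p))
```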
arXiv Detail & Related papers (2022-02-02T19:53:35Z) - Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.