Toward Robust Non-Transferable Learning: A Survey and Benchmark
- URL: http://arxiv.org/abs/2502.13593v2
- Date: Fri, 07 Mar 2025 13:45:22 GMT
- Title: Toward Robust Non-Transferable Learning: A Survey and Benchmark
- Authors: Ziming Hong, Yongli Xiang, Tongliang Liu
- Abstract summary: Non-transferable learning (NTL) is a task aimed at reshaping the generalization abilities of deep learning models. We present the first comprehensive survey on NTL and introduce NTLBench, the first benchmark to evaluate NTL performance and robustness. We discuss the practical applications of NTL, along with its future directions and associated challenges.
- Score: 51.52542476904985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past decades, researchers have primarily focused on improving the generalization abilities of models, with limited attention given to regulating such generalization. However, the ability of models to generalize to unintended data (e.g., harmful or unauthorized data) can be exploited by malicious adversaries in unforeseen ways, potentially resulting in violations of model ethics. Non-transferable learning (NTL), a task aimed at reshaping the generalization abilities of deep learning models, was proposed to address these challenges. While numerous methods have been proposed in this field, a comprehensive review of existing progress and a thorough analysis of current limitations remain lacking. In this paper, we bridge this gap by presenting the first comprehensive survey on NTL and introducing NTLBench, the first benchmark to evaluate NTL performance and robustness within a unified framework. Specifically, we first introduce the task settings, general framework, and criteria of NTL, followed by a summary of NTL approaches. Furthermore, we emphasize the often-overlooked issue of robustness against various attacks that can destroy the non-transferable mechanism established by NTL. Experiments conducted via NTLBench verify the limitations of existing NTL methods in robustness. Finally, we discuss the practical applications of NTL, along with its future directions and associated challenges.
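To make the task concrete, below is a minimal sketch of the min-max structure common to NTL objectives: preserve the task loss on an authorized source domain while suppressing generalization to an unauthorized target domain. The model, optimizer, weighting `lam`, and clamp threshold are illustrative assumptions, not the formulation of any specific method covered by the survey.

```python
import torch
import torch.nn.functional as F

def ntl_step(model, x_src, y_src, x_tgt, y_tgt, opt, lam=0.1):
    """One illustrative NTL training step: minimize the task loss on the
    authorized (source) domain while *maximizing* it on the unauthorized
    (target) domain, so learned features do not transfer there."""
    opt.zero_grad()
    src_loss = F.cross_entropy(model(x_src), y_src)   # keep source accuracy
    tgt_loss = F.cross_entropy(model(x_tgt), y_tgt)   # degrade target accuracy
    # Clamp the adversarial term so it cannot dominate training.
    loss = src_loss - lam * torch.clamp(tgt_loss, max=4.0)
    loss.backward()
    opt.step()
    return src_loss.item(), tgt_loss.item()
```

Published NTL methods typically add distribution-distance or representation regularizers on top of this structure; the sketch only conveys the opposing source/target terms that attacks on the non-transferable mechanism try to undo.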
Related papers
- One Subgoal at a Time: Zero-Shot Generalization to Arbitrary Linear Temporal Logic Requirements in Multi-Task Reinforcement Learning [3.5886171069912938]
Generalizing to complex and temporally extended task objectives and safety constraints is a critical challenge in reinforcement learning (RL).
In this paper, we introduce GenZ-LTL, a method that enables zero-shot generalization to arbitrary specifications.
arXiv Detail & Related papers (2025-08-03T03:17:49Z)
- A Unified Empirical Risk Minimization Framework for Flexible N-Tuples Weak Supervision [13.39950379203994]
We propose a general N-tuples learning framework based on empirical risk minimization.
We show that our framework consistently improves generalization across various N-tuples learning tasks.
arXiv Detail & Related papers (2025-07-10T13:54:59Z)
- A Comprehensive Survey on Evidential Deep Learning and Its Applications [64.83473301188138]
Evidential Deep Learning (EDL) provides reliable uncertainty estimation with minimal additional computation in a single forward pass.
We first delve into the theoretical foundation of EDL, the subjective logic theory, and discuss its distinctions from other uncertainty estimation frameworks.
We elaborate on its extensive applications across various machine learning paradigms and downstream tasks.
arXiv Detail & Related papers (2024-09-07T05:55:06Z)
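The single-forward-pass uncertainty mentioned in the EDL entry above comes from treating network outputs as Dirichlet evidence. Below is a minimal sketch of that standard computation; the ReLU evidence function and variable names are common conventions, not this particular survey's code.

```python
import torch
import torch.nn.functional as F

def edl_uncertainty(logits: torch.Tensor):
    """Dirichlet-based uncertainty from raw classifier logits.

    Follows the standard EDL recipe: non-negative evidence e,
    Dirichlet parameters alpha = e + 1, and subjective-logic
    uncertainty mass u = K / S with S = sum_k alpha_k."""
    evidence = F.relu(logits)                    # e >= 0
    alpha = evidence + 1.0                       # Dirichlet concentration
    strength = alpha.sum(dim=-1, keepdim=True)   # S = sum_k alpha_k
    prob = alpha / strength                      # expected class probabilities
    num_classes = logits.shape[-1]
    uncertainty = num_classes / strength         # u = K / S, in (0, 1]
    return prob, uncertainty

# Example: one forward pass yields both a prediction and its uncertainty.
logits = torch.randn(4, 10)   # hypothetical batch of 4, 10 classes
prob, u = edl_uncertainty(logits)
```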
- Directed Exploration in Reinforcement Learning from Linear Temporal Logic [59.707408697394534]
Linear temporal logic (LTL) is a powerful language for task specification in reinforcement learning.
We show that the synthesized reward signal remains fundamentally sparse, making exploration challenging.
We show how better exploration can be achieved by further leveraging the specification and casting its corresponding Limit Deterministic Büchi Automaton (LDBA) as a Markov reward process.
arXiv Detail & Related papers (2024-08-18T14:25:44Z)
- Parameter-Efficient Fine-Tuning for Continual Learning: A Neural Tangent Kernel Perspective [125.00228936051657]
We introduce NTK-CL, a novel framework that eliminates task-specific parameter storage while adaptively generating task-relevant features.
By fine-tuning optimizable parameters with appropriate regularization, NTK-CL achieves state-of-the-art performance on established PEFT-CL benchmarks.
arXiv Detail & Related papers (2024-07-24T09:30:04Z)
- On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models [25.029579061612456]
Large Language Models (LLMs) are increasingly being employed in real-world applications in critical domains such as healthcare.
It is important to ensure that the Chain-of-Thought (CoT) reasoning generated by these models faithfully captures their underlying behavior.
arXiv Detail & Related papers (2024-06-15T13:16:44Z)
- On the Generalization Ability of Unsupervised Pretraining [53.06175754026037]
Recent advances in unsupervised learning have shown that unsupervised pre-training, followed by fine-tuning, can improve model generalization.
This paper introduces a novel theoretical framework that illuminates the critical factor influencing the transferability of knowledge acquired during unsupervised pre-training to the subsequent fine-tuning phase.
Our results contribute to a better understanding of the unsupervised pre-training and fine-tuning paradigm, and can shed light on the design of more effective pre-training algorithms.
arXiv Detail & Related papers (2024-03-11T16:23:42Z)
- Active Few-Shot Fine-Tuning [35.49225932333298]
We study the question: How can we select the right data for fine-tuning to a specific task?
We call this data selection problem active fine-tuning and show that it is an instance of transductive active learning.
We propose ITL, short for information-based transductive learning, an approach which samples adaptively to maximize information gained about the specified task.
arXiv Detail & Related papers (2024-02-13T09:19:05Z)
- Uncertainty Regularized Evidential Regression [5.874234972285304]
The Evidential Regression Network (ERN) is a novel approach that integrates deep learning with Dempster-Shafer theory.
ERNs must employ specific activation functions to enforce non-negative values, a constraint that can compromise model performance.
This paper provides a theoretical analysis of this limitation and introduces an improvement to overcome it.
arXiv Detail & Related papers (2024-01-03T01:18:18Z)
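The non-negativity constraint mentioned in the ERN entry above can be illustrated with the usual evidential-regression output head, where softplus activations keep the Normal-Inverse-Gamma parameters in their valid ranges. This is a hedged sketch of that common recipe, not the improvement the paper proposes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialRegressionHead(nn.Module):
    """Sketch of an evidential regression output head.

    Predicts the four Normal-Inverse-Gamma parameters (gamma, nu,
    alpha, beta); softplus is one common activation choice that
    enforces the required constraints nu > 0, alpha > 1, beta > 0."""
    def __init__(self, in_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, 4)

    def forward(self, x: torch.Tensor):
        gamma, nu, alpha, beta = self.linear(x).chunk(4, dim=-1)
        nu = F.softplus(nu)               # nu > 0
        alpha = F.softplus(alpha) + 1.0   # alpha > 1
        beta = F.softplus(beta)           # beta > 0
        return gamma, nu, alpha, beta     # gamma is the point prediction
```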
- Self-Admitted Technical Debt Detection Approaches: A Decade Systematic Review [5.670597842524448]
Technical debt (TD) represents the long-term costs associated with suboptimal design or code decisions in software development.
Self-Admitted Technical Debt (SATD) occurs when developers explicitly acknowledge these trade-offs.
Consequently, automated detection of SATD has become an increasingly important research area.
arXiv Detail & Related papers (2023-12-19T12:01:13Z)
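As context for the SATD entry above, the classic detection baseline is keyword matching over code comments, which learned classifiers have largely superseded. A minimal sketch follows, with an illustrative (not curated) pattern list:

```python
import re

# Illustrative patterns; real SATD studies use curated keyword lists
# and, more recently, learned classifiers.
SATD_PATTERNS = [
    r"\btodo\b", r"\bfixme\b", r"\bhack\b",
    r"\bworkaround\b", r"\btemporary\b", r"\bugly\b",
]

def is_satd(comment: str) -> bool:
    """Flag a code comment as self-admitted technical debt
    if it matches any known SATD pattern."""
    text = comment.lower()
    return any(re.search(p, text) for p in SATD_PATTERNS)

# Example usage on hypothetical comments.
comments = [
    "# TODO: handle the edge case where the list is empty",
    "# computes the rolling average over the window",
]
print([is_satd(c) for c in comments])  # [True, False]
```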
- Hypothesis Transfer Learning with Surrogate Classification Losses: Generalization Bounds through Algorithmic Stability [3.908842679355255]
Hypothesis transfer learning (HTL) contrasts with domain adaptation by allowing a previous task, the source, to be leveraged in a new one, the target.
This paper studies the learning theory of HTL through algorithmic stability, an attractive theoretical framework for the analysis of machine learning algorithms.
arXiv Detail & Related papers (2023-05-31T09:38:21Z)
- Large-scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery [76.63807209414789]
We challenge the status quo in class-iNCD and propose a learning paradigm where class discovery occurs continuously and in a truly unsupervised manner.
We propose simple baselines, composed of a frozen PTM backbone and a learnable linear classifier, that are not only simple to implement but also resilient under longer learning scenarios.
arXiv Detail & Related papers (2023-03-28T13:47:16Z)
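The frozen-backbone-plus-linear-classifier baseline described in the entry above is straightforward to sketch; the backbone interface and feature dimension below are assumptions for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class FrozenPTMBaseline(nn.Module):
    """Frozen pre-trained backbone + learnable linear classifier,
    in the spirit of the simple class-iNCD baselines above.

    Assumes the backbone maps inputs to flat feature vectors."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False      # PTM stays frozen
        self.classifier = nn.Linear(feat_dim, num_classes)  # only trainable part

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.backbone(x)     # fixed representations
        return self.classifier(feats)

# Only the linear head is optimized, e.g.:
# opt = torch.optim.SGD(model.classifier.parameters(), lr=0.01)
```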
- On the Learning of Non-Autoregressive Transformers [91.34196047466904]
Non-autoregressive Transformer (NAT) is a family of text generation models.
We present theoretical and empirical analyses to reveal the challenges of NAT learning.
arXiv Detail & Related papers (2022-06-13T08:42:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.