Advancing Out-of-Distribution Detection via Local Neuroplasticity
- URL: http://arxiv.org/abs/2502.15833v1
- Date: Thu, 20 Feb 2025 11:13:41 GMT
- Title: Advancing Out-of-Distribution Detection via Local Neuroplasticity
- Authors: Alessandro Canevaro, Julian Schmidt, Mohammad Sajad Marvi, Hang Yu, Georg Martius, Julian Jordan
- Abstract summary: This paper presents a novel OOD detection method that leverages the unique local neuroplasticity property of Kolmogorov-Arnold Networks (KANs). Our method compares the activation patterns of a trained KAN against its untrained counterpart to detect OOD samples. We validate our approach on benchmarks from image and medical domains, demonstrating superior performance and robustness compared to state-of-the-art techniques.
- Score: 60.53625435889467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the domain of machine learning, the assumption that training and test data share the same distribution is often violated in real-world scenarios, requiring effective out-of-distribution (OOD) detection. This paper presents a novel OOD detection method that leverages the unique local neuroplasticity property of Kolmogorov-Arnold Networks (KANs). Unlike traditional multilayer perceptrons, KANs exhibit local plasticity, allowing them to preserve learned information while adapting to new tasks. Our method compares the activation patterns of a trained KAN against its untrained counterpart to detect OOD samples. We validate our approach on benchmarks from image and medical domains, demonstrating superior performance and robustness compared to state-of-the-art techniques. These results underscore the potential of KANs in enhancing the reliability of machine learning systems in diverse environments.
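The paper does not reproduce its algorithm here, so the following is only a toy sketch of the core idea under stated assumptions: a one-dimensional piecewise-linear "spline unit" stands in for a KAN edge, gradient descent only moves the coefficients of grid cells that ID inputs actually activate (local plasticity), and the OOD score is the discrepancy between the trained unit and an untrained copy. Names such as `SplineUnit` and `discrepancy` are illustrative, not from the paper.

```python
import numpy as np

class SplineUnit:
    """Piecewise-linear function on a fixed grid (a toy stand-in for a KAN edge)."""
    def __init__(self, lo=-3.0, hi=3.0, n_knots=32):
        self.knots = np.linspace(lo, hi, n_knots)
        self.coef = np.zeros(n_knots)  # untrained network: all coefficients zero

    def _weights(self, x):
        # Hat-function (linear B-spline) weights of each x on the knot grid.
        h = self.knots[1] - self.knots[0]
        return np.maximum(0.0, 1.0 - np.abs(x[:, None] - self.knots[None, :]) / h)

    def __call__(self, x):
        return self._weights(x) @ self.coef

    def fit(self, x, y, lr=1.0, epochs=500):
        # Gradient descent on squared error; the gradient vanishes for knots
        # that no training input activates, so those coefficients never move.
        w = self._weights(x)
        for _ in range(epochs):
            self.coef -= lr * w.T @ (w @ self.coef - y) / len(x)

trained, untrained = SplineUnit(), SplineUnit()
x_id = np.random.uniform(-1.0, 1.0, 512)      # ID data only covers [-1, 1]
trained.fit(x_id, np.cos(3.0 * x_id))

def discrepancy(x):
    # Low trained-vs-untrained discrepancy => region untouched by training => OOD.
    return np.abs(trained(x) - untrained(x))

print(discrepancy(np.array([0.0])))  # ID region: clearly non-zero
print(discrepancy(np.array([2.5])))  # OOD region: ~0, coefficients still at init
```

The locality is what makes the comparison meaningful: per the abstract, a standard MLP would update its weights globally, erasing the untrained reference everywhere.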
Related papers
- Toward a Realistic Benchmark for Out-of-Distribution Detection [3.8038269045375515]
We introduce a comprehensive benchmark for OOD detection based on ImageNet and Places365.
Several techniques can be used to determine which classes should be considered in-distribution, yielding benchmarks with varying properties.
arXiv Detail & Related papers (2024-04-16T11:29:43Z) - Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection
Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method uses a mask to identify the memorized atypical samples, then finetunes the model or prunes it with the introduced mask to forget them; a loose sketch of the masking idea follows this entry.
arXiv Detail & Related papers (2023-06-06T14:23:34Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area; a sketch of an energy-based score follows this entry.
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - WeShort: Out-of-distribution Detection With Weak Shortcut structure [0.0]
- WeShort: Out-of-distribution Detection With Weak Shortcut structure [0.0]
We propose a simple and effective post-hoc technique, WeShort, to reduce the overconfidence of neural networks on OOD data.
Our method is compatible with different OOD detection scores and can generalize well to different architectures of networks.
arXiv Detail & Related papers (2022-06-23T07:59:10Z) - Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD
Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss used by Outlier Exposure has an implicit scoring function that differs in a non-trivial fashion from the theoretically optimal one; the optimal discriminator is sketched after this entry.
arXiv Detail & Related papers (2022-06-20T16:32:49Z) - Igeood: An Information Geometry Approach to Out-of-Distribution
- Igeood: An Information Geometry Approach to Out-of-Distribution Detection [35.04325145919005]
We introduce Igeood, an effective method for detecting out-of-distribution (OOD) samples.
Igeood applies to any pre-trained neural network and works under various degrees of access to the machine learning model.
We show that Igeood outperforms competing state-of-the-art methods on a variety of network architectures and datasets; a sketch of the underlying Fisher-Rao distance follows this entry.
arXiv Detail & Related papers (2022-03-15T11:26:35Z) - ReAct: Out-of-distribution Detection With Rectified Activations [20.792140933660075]
- ReAct: Out-of-distribution Detection With Rectified Activations [20.792140933660075]
Out-of-distribution (OOD) detection has received much attention lately due to its practical importance.
One of the primary challenges is that models often produce highly confident predictions on OOD data.
We propose ReAct, a simple and effective technique for reducing model overconfidence on OOD data; a minimal sketch of the rectification step follows this entry.
arXiv Detail & Related papers (2021-11-24T21:02:07Z) - Natural Attribute-based Shift Detection [26.093019230426442]
- Natural Attribute-based Shift Detection [26.093019230426442]
In vision, language, and healthcare, unpredictable behavior on samples drawn from a distribution different from the training distribution causes severe problems in deployment.
For better reliability of neural-network-based classifiers, we define a new task, natural attribute-based shift (NAS) detection.
Using the natural attributes present in existing datasets, we introduce benchmark datasets in vision, language, and medicine for NAS detection.
arXiv Detail & Related papers (2021-10-18T13:15:59Z) - Triggering Failures: Out-Of-Distribution detection by learning from
local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show that it achieves top performance in both speed and accuracy compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z) - Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples; a loose sketch follows this entry.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.