Distributionally Robust Weighted $k$-Nearest Neighbors
- URL: http://arxiv.org/abs/2006.04004v5
- Date: Wed, 16 Feb 2022 16:45:30 GMT
- Title: Distributionally Robust Weighted $k$-Nearest Neighbors
- Authors: Shixiang Zhu and Liyan Xie and Minghe Zhang and Rui Gao and Yao Xie
- Abstract summary: Learning a robust classifier from a few samples remains a key challenge in machine learning.
In this paper, we study a minimax distributionally robust formulation of weighted $k$-nearest neighbors.
We develop an algorithm, \texttt{Dr.k-NN}, that efficiently solves this functional optimization problem.
- Score: 21.537952410507483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning a robust classifier from a few samples remains a key challenge in
machine learning. A major thrust of research has been focused on developing
$k$-nearest neighbor ($k$-NN) based algorithms combined with metric learning
that captures similarities between samples. When the samples are limited,
robustness is especially crucial to ensure the generalization capability of the
classifier. In this paper, we study a minimax distributionally robust
formulation of weighted $k$-nearest neighbors, which aims to find the optimal
weighted $k$-NN classifiers that hedge against feature uncertainties. We
develop an algorithm, \texttt{Dr.k-NN}, that efficiently solves this functional
optimization problem and features in assigning minimax optimal weights to
training samples when performing classification. These weights are
class-dependent, and are determined by the similarities of sample features
under the least favorable scenarios. When the size of the uncertainty set is
properly tuned, the robust classifier has a smaller Lipschitz norm than the
vanilla $k$-NN, and thus improves the generalization capability. We also couple
our framework with neural-network-based feature embedding. We demonstrate the
competitive performance of our algorithm compared to the state-of-the-art in
the few-training-sample setting with various real-data experiments.
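To make the weighted-vote structure concrete, the following is a minimal Python sketch of a weighted $k$-NN decision rule of the kind the paper robustifies. The softmax-style distance weights, the function name weighted_knn_predict, and the toy data are illustrative assumptions only; the actual Dr.k-NN weights are class-dependent and come from solving the minimax distributionally robust problem described above, which this sketch does not attempt.
```python
# Minimal sketch of a weighted k-NN decision rule. The weighting scheme
# below (exponential of negative distance) is an illustrative placeholder,
# NOT the minimax-optimal, class-dependent weights computed by Dr.k-NN.
import numpy as np

def weighted_knn_predict(X_train, y_train, x_query, k=5, temperature=1.0):
    """Classify x_query by a similarity-weighted vote over its k nearest neighbors."""
    dists = np.linalg.norm(X_train - x_query, axis=1)   # Euclidean distance to each training sample
    nn_idx = np.argsort(dists)[:k]                      # indices of the k nearest samples
    weights = np.exp(-dists[nn_idx] / temperature)      # placeholder similarity weights
    classes = np.unique(y_train)
    votes = np.array([weights[y_train[nn_idx] == c].sum() for c in classes])
    return classes[np.argmax(votes)]                    # class with the largest weighted vote

# Toy usage on two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (10, 2)), rng.normal(3.0, 1.0, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
print(weighted_knn_predict(X, y, np.array([2.5, 2.5]), k=5))
```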
Related papers
- Adaptive $k$-nearest neighbor classifier based on the local estimation of the shape operator [49.87315310656657]
We introduce a new adaptive $k$-nearest neighbours ($kK$-NN) algorithm that explores the local curvature at a sample to adaptively define the neighborhood size.
Results on many real-world datasets indicate that the new $kK$-NN algorithm yields superior balanced accuracy compared to the established $k$-NN method.
arXiv Detail & Related papers (2024-09-08T13:08:45Z) - Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard sample mining.
Our method significantly outperforms the state-of-the-art methods, improving retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z) - Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning through generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
arXiv Detail & Related papers (2022-08-26T04:28:01Z) - Under-bagging Nearest Neighbors for Imbalanced Classification [63.026765294759876]
We propose an ensemble learning algorithm called \textit{under-bagging} $k$-NN for imbalanced classification problems (a minimal sketch of the idea appears after this list).
arXiv Detail & Related papers (2021-09-01T14:10:38Z) - GOALS: Gradient-Only Approximations for Line Searches Towards Robust and Consistent Training of Deep Neural Networks [0.0]
Mini-batch sub-sampling (MBSS) is favored in deep neural network training to reduce the computational cost.
We propose a gradient-only approximation line search (GOALS) with strong convergence characteristics and a defined optimality criterion.
arXiv Detail & Related papers (2021-05-23T11:21:01Z) - Active Structure Learning of Bayesian Networks in an Observational Setting [21.376800678915558]
We study active structure learning of Bayesian networks in an observational setting.
We propose a new active learning algorithm that finds with a high probability a structure with a score that is $\epsilon$-close to the optimal score.
We show that for a class of distributions that we term stable, a sample complexity reduction of up to a factor of $\widetilde{\Omega}(d^3)$ can be obtained, where $d$ is the number of network variables.
arXiv Detail & Related papers (2021-03-25T12:50:14Z) - Byzantine-Resilient Non-Convex Stochastic Gradient Descent [61.6382287971982]
We study adversary-resilient distributed optimization, in which machines can independently compute gradients and cooperate.
Our algorithm is based on a new concentration technique, and its sample complexity matches the best known bounds in the setting where no Byzantine machines are present.
It is very practical: it improves upon the performance of all prior methods.
arXiv Detail & Related papers (2020-12-28T17:19:32Z) - Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs).
These sampling algorithms are not applicable to more general graph neural networks (GNNs), where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT).
arXiv Detail & Related papers (2020-06-10T12:48:37Z) - Towards Deep Learning Models Resistant to Large Perturbations [0.0]
Adversarial robustness has proven to be a required property of machine learning algorithms.
We show that the well-established algorithm called "adversarial training" fails to train a deep neural network given a large, but reasonable, perturbation magnitude.
arXiv Detail & Related papers (2020-03-30T12:03:09Z) - Stochastic Proximal Gradient Algorithm with Minibatches. Application to Large Scale Learning Models [2.384873896423002]
We develop and analyze minibatch variants of the stochastic proximal gradient algorithm for general composite objective functions with nonsmooth components.
We provide complexity guarantees for constant and variable stepsize iteration policies, showing that, for minibatch size $N$, $\epsilon$-suboptimality in expected quadratic distance to the optimal solution is attained after $\mathcal{O}(\frac{1}{N\epsilon})$ iterations.
arXiv Detail & Related papers (2020-03-30T10:43:56Z)
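As referenced in the under-bagging $k$-NN entry above, here is a minimal sketch assuming the standard under-bagging recipe: down-sample the majority class into class-balanced bags, run a $k$-NN vote on each bag, and take a majority vote across bags. The function name under_bagging_knn_predict and all details are illustrative assumptions, not the cited authors' implementation.
```python
# Illustrative under-bagging k-NN ensemble for imbalanced classification:
# balance each bag by down-sampling to the minority-class size, then
# aggregate per-bag k-NN votes. A generic sketch, not the cited algorithm.
import numpy as np
from collections import Counter

def under_bagging_knn_predict(X, y, x_query, k=5, n_bags=10, seed=0):
    """Majority vote over k-NN predictions made on class-balanced bags."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()                               # minority-class size
    bag_votes = Counter()
    for _ in range(n_bags):
        # Build a balanced bag: n_min samples drawn without replacement from every class.
        idx = np.concatenate([
            rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
            for c in classes
        ])
        dists = np.linalg.norm(X[idx] - x_query, axis=1)
        nn_labels = y[idx][np.argsort(dists)[:k]]      # labels of the k nearest samples in this bag
        bag_votes[Counter(nn_labels).most_common(1)[0][0]] += 1
    return bag_votes.most_common(1)[0][0]              # class chosen by the most bags
```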