Class Equilibrium using Coulomb's Law
- URL: http://arxiv.org/abs/2104.12287v1
- Date: Sun, 25 Apr 2021 23:38:06 GMT
- Title: Class Equilibrium using Coulomb's Law
- Authors: Saheb Chhabra, Puspita Majumdar, Mayank Vatsa, Richa Singh
- Abstract summary: We propose a new algorithm to compute the equilibrium space of any data distribution where the separation among the classes is optimal.
It is observed that the proposed algorithm performs well for low-resolution images.
- Score: 90.8945770656554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Projection algorithms learn a transformation function to project the data
from input space to the feature space, with the objective of increasing the
inter-class distance. However, increasing the inter-class distance can affect
the intra-class distance. Maintaining an optimal inter-class separation among
the classes without affecting the intra-class distance of the data distribution
is a challenging task. In this paper, inspired by Coulomb's law of
electrostatics, we propose a new algorithm to compute the equilibrium space of
any data distribution where the separation among the classes is optimal. The
algorithm further learns the transformation between the input space and
equilibrium space to perform classification in the equilibrium space. The
performance of the proposed algorithm is evaluated on four publicly available
datasets at three different resolutions. It is observed that the proposed
algorithm performs well for low-resolution images.
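The abstract leaves the update rule implicit, but the electrostatic analogy suggests one plausible reading: treat samples as charged particles, let samples from different classes repel with an inverse-square force, pull each sample toward its class mean to preserve intra-class structure, and iterate until the net forces vanish. The sketch below implements that reading; the function name `coulomb_equilibrium`, the force constants, and the attraction term are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def coulomb_equilibrium(X, y, n_steps=200, step_size=0.01, k_repel=1.0, k_attract=1.0):
    """Drive samples of different classes apart with Coulomb-style inverse-square
    repulsion while pulling each sample toward its class mean, iterating until the
    configuration (approximately) stops moving -- a stand-in for the paper's
    equilibrium space, not the authors' code."""
    Z = X.astype(float).copy()
    y = np.asarray(y)
    for _ in range(n_steps):
        forces = np.zeros_like(Z)
        diff = Z[:, None, :] - Z[None, :, :]             # (n, n, d) pairwise offsets
        dist = np.linalg.norm(diff, axis=-1) + 1e-8      # avoid division by zero
        other = (y[:, None] != y[None, :])[..., None]    # mask: cross-class pairs only
        forces += k_repel * np.sum(other * diff / dist[..., None] ** 3, axis=1)
        for c in np.unique(y):                           # intra-class cohesion term
            mask = y == c
            forces[mask] += k_attract * (Z[mask].mean(axis=0) - Z[mask])
        Z += step_size * forces
        if np.linalg.norm(forces) < 1e-6:                # forces balance: equilibrium
            break
    return Z
```

A separate regression model can then be fit to map points from the input space to the resulting equilibrium coordinates, which is the transformation the abstract says is learned for classification.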
Related papers
- A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent [57.64826450787237]
We show how to analyze the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions.
We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm.
Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
arXiv Detail & Related papers (2024-07-19T08:29:12Z)
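The corruption-tolerant paper above pairs two ingredients: a robust rule for aggregating worker gradients and a (lazy) mirror-descent update. Below is a minimal sketch of that combination, assuming a coordinate-wise median aggregator and the negative-entropy mirror map on the probability simplex; the paper's actual algorithm and guarantees differ.

```python
import numpy as np

def entropic_mirror_step(x, grad, eta):
    """Mirror-descent step under the negative-entropy mirror map: a
    multiplicative update followed by renormalization onto the simplex."""
    z = x * np.exp(-eta * grad)
    return z / z.sum()

def robust_distributed_step(x, worker_grads, eta):
    """Aggregate per-worker gradients with a coordinate-wise median so a few
    adversarially corrupted workers cannot dominate, then take a mirror step."""
    g = np.median(np.stack(worker_grads), axis=0)
    return entropic_mirror_step(x, g, eta)

# Toy run: minimize <c, x> over the simplex with one corrupted worker.
rng = np.random.default_rng(0)
c = np.array([0.9, 0.1, 0.5])
x = np.ones(3) / 3
for _ in range(100):
    honest = [c + 0.01 * rng.normal(size=3) for _ in range(4)]
    corrupted = [np.array([-10.0, 10.0, 0.0])]   # adversarial gradient
    x = robust_distributed_step(x, honest + corrupted, eta=0.5)
print(x)   # mass concentrates on the cheapest coordinate (index 1)
```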
- On the Trade-off between Flatness and Optimization in Distributed Learning [42.609672086459845]
This paper proposes a theoretical framework to evaluate and compare the performance of gradient-descent algorithms for distributed learning.
It shows that decentralized learning strategies are able to escape from local minimizers.
arXiv Detail & Related papers (2024-06-28T15:46:08Z)
- Canonical Variates in Wasserstein Metric Space [16.668946904062032]
We employ the Wasserstein metric to measure distances between distributions, which are then used by distance-based classification algorithms.
Central to our investigation is dimension reduction within the Wasserstein metric space to enhance classification accuracy.
We introduce a novel approach grounded in the principle of maximizing Fisher's ratio, defined as the quotient of between-class variation to within-class variation.
arXiv Detail & Related papers (2024-05-24T17:59:21Z)
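The Fisher's-ratio principle used in the Wasserstein paper above is easiest to see in the ordinary Euclidean setting, where maximizing it reduces to the classical linear discriminant eigenproblem. The sketch below is that Euclidean stand-in; carrying the same principle into the Wasserstein metric space is the paper's contribution and is not attempted here.

```python
import numpy as np

def fisher_direction(X, y):
    """Return the direction maximizing Fisher's ratio (between-class over
    within-class variation), i.e. the leading eigenvector of Sw^{-1} Sb."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        m = (Xc.mean(axis=0) - mu)[:, None]
        Sb += len(Xc) * m @ m.T                                   # between-class scatter
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))   # within-class scatter
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    ratio = (w @ Sb @ w) / (w @ Sw @ w)                           # Fisher's ratio along w
    return w, ratio
```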
- Gauge-optimal approximate learning for small data classification problems [0.0]
Small data learning problems are characterized by a discrepancy between the limited amount of response variable observations and the large feature space dimension.
We propose the Gauge-Optimal Approximate Learning (GOAL) algorithm, which provides an analytically tractable joint solution to the dimension reduction, feature segmentation and classification problems.
GOAL has been compared to other state-of-the-art machine learning (ML) tools on both synthetic data and challenging real-world applications from climate science and bioinformatics.
arXiv Detail & Related papers (2023-10-29T16:46:05Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as a subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
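For orientation, the classical solver for entropy-regularized optimal transport is Sinkhorn's matrix-scaling iteration, sketched below; the paper's contribution is a faster extragradient-based first-order method with $\varepsilon$-additive-accuracy guarantees, which this baseline does not reproduce.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, n_iters=500):
    """Entropy-regularized OT cost between histograms mu and nu with cost
    matrix C, via alternating Sinkhorn scalings of the Gibbs kernel."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)               # scale columns to match marginal nu
        u = mu / (K @ v)                 # scale rows to match marginal mu
    P = u[:, None] * K * v[None, :]      # transport plan with marginals (mu, nu)
    return np.sum(P * C)

mu = np.array([0.5, 0.5])
nu = np.array([0.25, 0.75])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
print(sinkhorn(mu, nu, C))               # ~0.25: a quarter of the mass crosses
```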
- An Improved Greedy Algorithm for Subset Selection in Linear Estimation [5.994412766684842]
We consider a subset selection problem in a spatial field, where we seek a set of k locations whose observations provide the best estimate of the field value at a finite set of prediction locations.
One approach for observation selection is to perform a grid discretization of the space and obtain an approximate solution using the greedy algorithm.
We propose a method to reduce the computational complexity by considering a search space consisting only of prediction locations and centroids of cliques formed by the prediction locations.
arXiv Detail & Related papers (2022-03-30T05:52:16Z)
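A common instantiation of the grid-discretized greedy baseline described above is sensor placement under a Gaussian-process model: at each step, add the candidate location that most reduces the summed posterior variance at the prediction locations. The sketch below assumes an RBF kernel and exhaustively scores every candidate; the paper's speedup comes from shrinking the candidate set to the prediction locations and clique centroids.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def greedy_select(candidates, predictions, k, noise=1e-4):
    """Greedily pick k rows of `candidates` minimizing the total GP posterior
    variance at `predictions` (both 2-D arrays of coordinates)."""
    chosen = []
    for _ in range(k):
        best, best_var = None, np.inf
        for i in range(len(candidates)):
            if i in chosen:
                continue
            S = candidates[chosen + [i]]
            Kss = rbf(S, S) + noise * np.eye(len(S))
            Kps = rbf(predictions, S)
            # summed posterior variance at prediction points after observing S
            var = (np.ones(len(predictions))
                   - np.einsum('ij,jk,ik->i', Kps, np.linalg.inv(Kss), Kps)).sum()
            if var < best_var:
                best, best_var = i, var
        chosen.append(best)
    return candidates[chosen]
```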
- Learning Where to Learn in Cross-View Self-Supervised Learning [54.14989750044489]
Self-supervised learning (SSL) has made enormous progress and has largely narrowed the gap with its supervised counterparts.
Current methods simply adopt uniform aggregation of pixels for embedding.
We present a new approach, Learning Where to Learn (LEWEL), to adaptively aggregate spatial information of features.
arXiv Detail & Related papers (2022-03-28T17:02:42Z)
- Probabilistic spatial clustering based on the Self Discipline Learning (SDL) model of autonomous learning [1.9322517897534983]
Unsupervised clustering algorithms can effectively reduce the dimension of high-dimensional unlabeled data.
Traditional clustering algorithms require an upper bound on the number of categories to be set in advance.
A probabilistic spatial clustering algorithm based on the Self Discipline Learning (SDL) model is proposed.
arXiv Detail & Related papers (2022-01-07T17:18:57Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
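The projected fixed-point setting above specializes cleanly to policy evaluation: with linear features, TD(0) is a stochastic scheme for the projected Bellman equation, and the paper's oracle inequalities bound the error of exactly this kind of estimate. A minimal sketch, with the transition data format assumed:

```python
import numpy as np

def td0_linear(features, rewards, next_features, gamma=0.9, alpha=0.05, n_epochs=500):
    """TD(0) with linear function approximation: from sampled transitions
    (phi, r, phi'), stochastically solve the projected fixed-point equation
    for the weight vector of the value estimate V(s) = phi(s) @ w."""
    w = np.zeros(features.shape[1])
    for _ in range(n_epochs):
        for phi, r, phi_next in zip(features, rewards, next_features):
            td_error = r + gamma * (phi_next @ w) - phi @ w
            w += alpha * td_error * phi      # move the estimate toward the target
    return w

# Toy usage: two states alternating, one-hot features, constant reward 1.
phi = np.eye(2)[[0, 1, 0, 1]]
phi_next = np.eye(2)[[1, 0, 1, 0]]
print(td0_linear(phi, np.ones(4), phi_next))  # both values approach 1/(1-0.9) = 10
```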
- Ellipsoidal Subspace Support Vector Data Description [98.67884574313292]
We propose a novel method for transforming data into a low-dimensional space optimized for one-class classification.
We provide both linear and non-linear formulations for the proposed method.
The proposed method is observed to converge much faster than the recently proposed Subspace Support Vector Data Description.
arXiv Detail & Related papers (2020-03-20T21:31:03Z)
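In spirit, the ellipsoidal method above seeks a projection in which the target class fits inside a tight ellipsoidal boundary. The toy sketch below captures only that flavor, using a fixed PCA projection and a Mahalanobis-distance threshold; the paper instead optimizes the projection jointly with the data description and also gives non-linear (kernel) formulations.

```python
import numpy as np

def fit_ellipsoid_describer(X, n_dims=2, quantile=0.95):
    """One-class describer: project training data with PCA, then bound it by
    a Mahalanobis ellipsoid covering `quantile` of the training points.
    Toy stand-in -- the paper learns the projection for the description."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:n_dims].T                          # PCA projection matrix
    Z = (X - mu) @ W
    cov_inv = np.linalg.inv(np.cov(Z.T) + 1e-6 * np.eye(n_dims))
    d2 = np.einsum('ij,jk,ik->i', Z, cov_inv, Z)
    radius2 = np.quantile(d2, quantile)        # squared boundary radius
    def is_inlier(x_new):
        z = (np.asarray(x_new) - mu) @ W
        return float(z @ cov_inv @ z) <= radius2
    return is_inlier
```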