Consistent Multiclass Algorithms for Complex Metrics and Constraints
- URL: http://arxiv.org/abs/2210.09695v2
- Date: Wed, 19 Oct 2022 03:35:01 GMT
- Title: Consistent Multiclass Algorithms for Complex Metrics and Constraints
- Authors: Harikrishna Narasimhan, Harish G. Ramaswamy, Shiv Kumar Tavker, Drona
Khurana, Praneeth Netrapalli, Shivani Agarwal
- Abstract summary: This setting includes many common performance metrics such as the multiclass G-mean and micro F1-measure.
We give a general framework for consistent algorithms for such complex design goals.
Experiments on a variety of multiclass classification tasks and fairness-constrained problems show that our algorithms compare favorably to the state-of-the-art baselines.
- Score: 38.6998359999636
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present consistent algorithms for multiclass learning with complex
performance metrics and constraints, where the objective and constraints are
defined by arbitrary functions of the confusion matrix. This setting includes
many common performance metrics such as the multiclass G-mean and micro
F1-measure, and constraints such as those on the classifier's precision and
recall and more recent measures of fairness discrepancy. We give a general
framework for designing consistent algorithms for such complex design goals by
viewing the learning problem as an optimization problem over the set of
feasible confusion matrices. We provide multiple instantiations of our
framework under different assumptions on the performance metrics and
constraints, and in each case show rates of convergence to the optimal
(feasible) classifier (and thus asymptotic consistency). Experiments on a
variety of multiclass classification tasks and fairness-constrained problems
show that our algorithms compare favorably to the state-of-the-art baselines.
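For intuition, the G-mean and micro F1 mentioned in the abstract are both functions of the multiclass confusion matrix, which is the key object the framework optimizes over. A minimal sketch of these metrics (the function names are illustrative, not taken from the paper):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """C[i, j] = fraction of examples with true class i predicted as class j."""
    C = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    return C / len(y_true)

def g_mean(C):
    """Geometric mean of the per-class recalls: (prod_i C[i,i] / sum_j C[i,j])^(1/k)."""
    recalls = np.diag(C) / C.sum(axis=1)
    return recalls.prod() ** (1.0 / len(C))

def micro_f1(C):
    """Micro-averaged F1. For single-label multiclass data, total false
    positives equal total false negatives (the off-diagonal mass), so
    this reduces to overall accuracy, i.e. the trace of the matrix."""
    tp = np.trace(C)
    fp = fn = C.sum() - tp
    return 2 * tp / (2 * tp + fp + fn)
```

Both quantities are non-linear in the confusion-matrix entries, which is why optimizing them requires more than minimizing a standard classification loss.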
Related papers
- A General Online Algorithm for Optimizing Complex Performance Metrics [5.726378955570775]
We introduce and analyze a general online algorithm that can be used in a straightforward way with a variety of complex performance metrics in binary, multi-class, and multi-label classification problems.
The algorithm's update and prediction rules are appealingly simple and computationally efficient without the need to store any past data.
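One generic way an online scheme can avoid storing past data is to maintain a running confusion-matrix estimate, from which any metric can be evaluated at any time. The sketch below illustrates that general idea only; it is not the algorithm from the paper above:

```python
import numpy as np

class RunningConfusion:
    """Online estimate of the confusion matrix: each (true, predicted)
    pair updates a counter, so no past examples need to be stored."""
    def __init__(self, n_classes):
        self.C = np.zeros((n_classes, n_classes))
        self.n = 0

    def update(self, y_true, y_pred):
        self.C[y_true, y_pred] += 1
        self.n += 1

    def normalized(self):
        # Normalized matrix sums to 1; metrics can be computed from it directly.
        return self.C / max(self.n, 1)
```

The memory footprint is a fixed k-by-k matrix regardless of how many examples have been seen.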
arXiv Detail & Related papers (2024-06-20T21:24:47Z) - Exponentially Convergent Algorithms for Supervised Matrix Factorization [2.1485350418225244]
Supervised matrix factorization (SMF) is a machine learning method that combines feature extraction and classification tasks.
Our paper provides a novel framework that 'lifts' SMF as a low-rank matrix estimation problem in a combined factor space.
arXiv Detail & Related papers (2023-11-18T23:24:02Z) - Sample-Efficient Multi-Agent RL: An Optimization Perspective [103.35353196535544]
We study multi-agent reinforcement learning (MARL) for general-sum Markov Games (MGs) under general function approximation.
We introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs.
We show that our algorithm achieves sublinear regret comparable to existing works.
arXiv Detail & Related papers (2023-10-10T01:39:04Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Multi-Task Learning with Prior Information [5.770309971945476]
We propose a multi-task learning framework, where we utilize prior knowledge about the relations between features.
We also impose a penalty on the coefficients changing for each specific feature to ensure related tasks have similar coefficients on common features shared among them.
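A penalty of this kind can be illustrated with a term that charges the squared difference of coefficients on shared features across pairs of tasks. This is an illustrative interpretation of the summary above, not the authors' exact formulation:

```python
import numpy as np

def fused_penalty(W, shared, lam):
    """W: (n_tasks, n_features) coefficient matrix.
    Penalize squared coefficient differences on shared features across
    every pair of tasks, encouraging related tasks to agree on them."""
    total = 0.0
    n_tasks = W.shape[0]
    for j in shared:                      # only features shared among tasks
        for s in range(n_tasks):
            for t in range(s + 1, n_tasks):
                total += (W[s, j] - W[t, j]) ** 2
    return lam * total
```

Adding this term to each task's individual loss yields a joint objective whose minimizer keeps coefficients on common features close across related tasks.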
arXiv Detail & Related papers (2023-01-04T12:48:05Z) - Rethinking Clustering-Based Pseudo-Labeling for Unsupervised
Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for this is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z) - Transductive Few-Shot Learning: Clustering is All You Need? [31.21306826132773]
We investigate a general formulation for transductive few-shot learning, which integrates prototype-based objectives.
We find that our method yields competitive performance, in terms of accuracy and optimization, while scaling up to large problems.
Surprisingly, our general model already achieves competitive performance in comparison to the state-of-the-art.
arXiv Detail & Related papers (2021-06-16T16:14:01Z) - HAWKS: Evolving Challenging Benchmark Sets for Cluster Analysis [2.5329716878122404]
Comprehensive benchmarking of clustering algorithms is difficult.
There is no consensus regarding the best practice for rigorous benchmarking.
We demonstrate the important role evolutionary algorithms play in supporting the flexible generation of such benchmarks.
arXiv Detail & Related papers (2021-02-13T15:01:34Z) - Joint Contrastive Learning with Infinite Possibilities [114.45811348666898]
This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling.
We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL).
arXiv Detail & Related papers (2020-09-30T16:24:21Z) - The limits of min-max optimization algorithms: convergence to spurious
non-critical sets [82.74514886461257]
Min-max optimization algorithms encounter far greater difficulties because of the existence of periodic cycles and similar phenomena.
We show that there exist attracting sets for these algorithms that do not contain any critical points of the problem.
We illustrate such challenges in simple examples.
arXiv Detail & Related papers (2020-06-16T10:49:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.