Machine Learning with the Sugeno Integral: The Case of Binary
Classification
- URL: http://arxiv.org/abs/2007.03046v1
- Date: Mon, 6 Jul 2020 20:22:01 GMT
- Title: Machine Learning with the Sugeno Integral: The Case of Binary
Classification
- Authors: Sadegh Abbaszadeh and Eyke Hüllermeier
- Abstract summary: We elaborate on the use of the Sugeno integral in the context of machine learning.
We propose a method for binary classification, in which the Sugeno integral is used as an aggregation function.
Due to the specific nature of the Sugeno integral, this approach is especially suitable for learning from ordinal data.
- Score: 5.806154304561782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we elaborate on the use of the Sugeno integral in the context
of machine learning. More specifically, we propose a method for binary
classification, in which the Sugeno integral is used as an aggregation function
that combines several local evaluations of an instance, pertaining to different
features or measurements, into a single global evaluation. Due to the specific
nature of the Sugeno integral, this approach is especially suitable for
learning from ordinal data, that is, when measurements are taken from ordinal
scales. This is a topic that has not received much attention in machine
learning so far. The core of the learning problem itself consists of
identifying the capacity underlying the Sugeno integral. To tackle this
problem, we develop an algorithm based on linear programming. The algorithm
also includes a suitable technique for transforming the original feature values
into local evaluations (local utility scores), as well as a method for tuning a
threshold on the global evaluation. To control the flexibility of the
classifier and mitigate the problem of overfitting the training data, we
generalize our approach toward $k$-maxitive capacities, where $k$ plays the
role of a hyper-parameter of the learner. We present experimental studies, in
which we compare our method with competing approaches on several benchmark data
sets.
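The aggregation step described in the abstract can be illustrated with a short sketch. This is only a minimal illustration of evaluating a Sugeno integral over local utility scores and thresholding the result; it does not implement the paper's linear-programming capacity identification, and the capacity values, threshold, and function names below are hypothetical.

```python
from itertools import combinations

def sugeno_integral(x, mu):
    """Sugeno integral of local utilities x (values in [0, 1]) w.r.t. capacity mu.

    mu maps frozensets of criterion indices to [0, 1], with mu(empty set) = 0,
    mu(full set) = 1, and monotonicity under set inclusion.
    """
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])  # criteria by ascending utility
    result = 0.0
    for k, i in enumerate(order):
        a_k = frozenset(order[k:])  # criteria whose utility is at least x[i]
        result = max(result, min(x[i], mu[a_k]))
    return result

def classify(x, mu, beta=0.5):
    """Binary decision: positive iff the global evaluation reaches threshold beta."""
    return int(sugeno_integral(x, mu) >= beta)

# Toy capacity on three criteria (hypothetical values, monotone by construction).
criteria = range(3)
mu = {frozenset(s): 0.0 for r in range(4) for s in combinations(criteria, r)}
mu[frozenset({0})] = 0.3
mu[frozenset({1})] = 0.4
mu[frozenset({2})] = 0.2
mu[frozenset({0, 1})] = 0.8
mu[frozenset({0, 2})] = 0.5
mu[frozenset({1, 2})] = 0.6
mu[frozenset({0, 1, 2})] = 1.0
```

Because the integral uses only min, max, and order comparisons, it is meaningful on purely ordinal utility scales, which is the property the abstract highlights.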
Related papers
- A naive aggregation algorithm for improving generalization in a class of learning problems [0.0]
We present a naive aggregation algorithm for a typical learning problem in the expert-advice setting.
In particular, we consider a class of learning problems involving point estimation for modeling high-dimensional nonlinear functions.
arXiv Detail & Related papers (2024-09-06T15:34:17Z)
- Achieving More with Less: A Tensor-Optimization-Powered Ensemble Method [53.170053108447455]
Ensemble learning is a method that leverages weak learners to produce a strong learner.
We design a smooth and convex objective function that leverages the concept of margin, making the strong learner more discriminative.
We then compare our algorithm with random forests of ten times the size and other classical methods across numerous datasets.
arXiv Detail & Related papers (2024-08-06T03:42:38Z)
- A Weighted K-Center Algorithm for Data Subset Selection [70.49696246526199]
Subset selection is a fundamental problem that can play a key role in identifying smaller portions of the training data.
We develop a novel factor 3-approximation algorithm to compute subsets based on the weighted sum of both k-center and uncertainty sampling objective functions.
arXiv Detail & Related papers (2023-12-17T04:41:07Z)
- A Federated Learning Aggregation Algorithm for Pervasive Computing: Evaluation and Comparison [0.6299766708197883]
Pervasive computing promotes the installation of connected devices in our living spaces in order to provide services.
Two major developments have gained significant momentum recently: an advanced use of edge resources and the integration of machine learning techniques for engineering applications.
We propose a novel aggregation algorithm, termed FedDist, which is able to modify its model architecture by identifying dissimilarities between specific neurons amongst the clients.
arXiv Detail & Related papers (2021-10-19T19:43:28Z)
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
- Feature space approximation for kernel-based supervised learning [2.653409741248232]
The goal is to reduce the size of the training data, resulting in lower storage consumption and computational complexity.
We demonstrate significant improvements in comparison to the computation of data-driven predictions involving the full training data set.
The method is applied to classification and regression problems from different application areas such as image recognition, system identification, and oceanographic time series analysis.
arXiv Detail & Related papers (2020-11-25T11:23:58Z)
- Towards Improved and Interpretable Deep Metric Learning via Attentive Grouping [103.71992720794421]
Grouping has been commonly used in deep metric learning for computing diverse features.
We propose an improved and interpretable grouping method to be integrated flexibly with any metric learning framework.
arXiv Detail & Related papers (2020-11-17T19:08:24Z)
- Linear Classifier Combination via Multiple Potential Functions [0.6091702876917279]
We propose a novel scoring function based on the distance of an object from the decision boundary and its distance to the class centroid.
An important property is that the proposed score function has the same nature for all linear base classifiers.
arXiv Detail & Related papers (2020-10-02T08:11:51Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Information Theoretic Meta Learning with Gaussian Processes [74.54485310507336]
We formulate meta learning using information theoretic concepts; namely, mutual information and the information bottleneck.
By making use of variational approximations to the mutual information, we derive a general and tractable framework for meta learning.
arXiv Detail & Related papers (2020-09-07T16:47:30Z)
- On the Estimation of Complex Circuits Functional Failure Rate by Machine Learning Techniques [0.16311150636417257]
De-Rating or Vulnerability Factors are a major feature of the failure analysis efforts mandated by today's Functional Safety requirements.
A new approach is proposed that uses Machine Learning to estimate the Functional De-Rating of individual flip-flops.
arXiv Detail & Related papers (2020-02-18T15:18:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.