Learnability, Sample Complexity, and Hypothesis Class Complexity for
Regression Models
- URL: http://arxiv.org/abs/2303.16091v1
- Date: Tue, 28 Mar 2023 15:59:12 GMT
- Title: Learnability, Sample Complexity, and Hypothesis Class Complexity for
Regression Models
- Authors: Soosan Beheshti, Mahdi Shamsi
- Abstract summary: This work is inspired by the foundation of PAC and is motivated by the existing regression learning issues.
The proposed approach, denoted by epsilon-Confidence Approximately Correct (epsilon CoAC), utilizes Kullback Leibler divergence (relative entropy)
It enables the learner to compare hypothesis classes of different complexity orders and choose among them the optimum with the minimum epsilon.
- Score: 10.66048003460524
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The goal of a learning algorithm is to receive a training data set as input
and provide a hypothesis that can generalize to all possible data points from a
domain set. The hypothesis is chosen from hypothesis classes with potentially
different complexities. Linear regression modeling is an important category of
learning algorithms. The practical uncertainty of the target samples affects
the generalization performance of the learned model. Failing to choose a proper
model or hypothesis class can lead to serious issues such as underfitting or
overfitting. These issues have been addressed with alternative cost functions
or with cross-validation methods. Such approaches can introduce new
hyperparameters, with their own challenges and uncertainties, or increase the
computational complexity of the learning algorithm. On the other hand, the
theory of probably approximately correct (PAC) aims at defining learnability
based on probabilistic settings. Despite its theoretical value, PAC does not
address practical learning issues on many occasions. This work is inspired by
the foundation of PAC and is motivated by the existing regression learning
issues. The proposed approach, denoted by epsilon-Confidence Approximately
Correct (epsilon CoAC), utilizes Kullback-Leibler divergence (relative entropy)
and proposes a new related typical set in the set of hyperparameters to tackle
the learnability issue. Moreover, it enables the learner to compare hypothesis
classes of different complexity orders and to choose among them the one with
the minimum epsilon in the epsilon CoAC framework. Not only does epsilon CoAC
learnability overcome the issues of overfitting and underfitting, it also shows
advantages over the well-known cross-validation method in both computation time
and accuracy.
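The abstract does not spell out the epsilon CoAC criterion itself, so the following Python sketch only illustrates the general workflow it describes: fit hypothesis classes of increasing complexity order (here, polynomial regressions) and score each class with a KL-divergence-based criterion, keeping the class with the smallest score. The Gaussian noise model, the reference noise level, and all function names are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: a cubic signal plus Gaussian observation noise.
n = 200
x = rng.uniform(-1.0, 1.0, n)
y = 1.5 * x**3 - 0.5 * x + rng.normal(scale=0.2, size=n)

def fit_poly(x, y, order):
    """Least-squares fit within the polynomial hypothesis class of a given order."""
    X = np.vander(x, order + 1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def kl_score(x, y, coef, noise_var=0.04):
    """KL divergence between two zero-mean Gaussians: the residual distribution
    of the fitted model versus an assumed reference noise level. This is only a
    stand-in for the paper's KL-based criterion."""
    order = len(coef) - 1
    resid = y - np.vander(x, order + 1) @ coef
    ratio = resid.var() / noise_var
    return 0.5 * (ratio - 1.0 - np.log(ratio))

# Compare hypothesis classes of different complexity orders and keep the one
# whose divergence score is smallest.
scores = {k: kl_score(x, y, fit_poly(x, y, k)) for k in range(1, 9)}
best_order = min(scores, key=scores.get)
print("selected polynomial order:", best_order)
```

Under these assumptions, an order that is too low leaves structured residuals and a large divergence, while an order that is too high drives the training residual variance below the assumed noise level, which the divergence also penalizes; this mirrors the underfitting/overfitting trade-off the abstract refers to.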
Related papers
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback, where only positive examples are observed during training, is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
arXiv Detail & Related papers (2024-11-20T04:21:07Z)
- Optimal Multi-Distribution Learning
Multi-distribution learning seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions.
We propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with a sample complexity on the order of $(d+k)/\varepsilon^2$.
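In notation that is ours rather than necessarily the paper's (with $\mathcal{D}_1,\dots,\mathcal{D}_k$ the $k$ data distributions, $\ell$ a bounded loss, and $d$ the complexity parameter of the hypothesis class $\mathcal{H}$), the worst-case-risk objective and the quoted rate read:
$$\min_{h\in\mathcal{H}}\ \max_{1\le i\le k}\ \mathbb{E}_{(x,y)\sim\mathcal{D}_i}\big[\ell(h(x),y)\big],\qquad n = O\!\left(\frac{d+k}{\varepsilon^{2}}\right).$$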
arXiv Detail & Related papers (2023-12-08T16:06:29Z)
- Tight Guarantees for Interactive Decision Making with the Decision-Estimation Coefficient [51.37720227675476]
We introduce a new variant of the Decision-Estimation Coefficient, and use it to derive new lower bounds that improve upon prior work on three fronts.
We provide upper bounds on regret that scale with the same quantity, thereby closing all but one of the gaps between upper and lower bounds in Foster et al.
Our results apply to both the regret framework and PAC framework, and make use of several new analysis and algorithm design techniques that we anticipate will find broader use.
arXiv Detail & Related papers (2023-01-19T18:24:08Z)
- Proof of Swarm Based Ensemble Learning for Federated Learning Applications [3.2536767864585663]
In federated learning it is not feasible to apply centralised ensemble learning directly due to privacy concerns.
Most distributed consensus algorithms, such as Byzantine fault tolerance (BFT), do not normally perform well in such applications.
We propose PoSw, a novel distributed consensus algorithm for ensemble learning in a federated setting.
arXiv Detail & Related papers (2022-12-28T13:53:34Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Pairwise Learning via Stagewise Training in Proximal Setting [0.0]
We combine adaptive sample size and importance sampling techniques for pairwise learning, with convergence guarantees for nonsmooth convex pairwise loss functions.
We demonstrate that sampling opposite instances at each iteration reduces the variance of the gradient, hence accelerating convergence.
arXiv Detail & Related papers (2022-08-08T11:51:01Z)
- Parsimonious Inference [0.0]
Parsimonious inference is an information-theoretic formulation of inference over arbitrary architectures.
Our approaches combine efficient encodings with prudent sampling strategies to construct predictive ensembles without cross-validation.
arXiv Detail & Related papers (2021-03-03T04:13:14Z)
- Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning [93.1435980666675]
We show how optimistic exploration can be easily combined with state-of-the-art reinforcement learning algorithms.
Our experiments demonstrate that optimistic exploration significantly speeds up learning when there are penalties on actions.
arXiv Detail & Related papers (2020-06-15T18:37:38Z)
- Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data.
This paper proposes a novel framework that is flexible in the choice of model and optimization algorithm.
arXiv Detail & Related papers (2020-02-19T08:35:15Z)
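To make the partial-label setting in the last entry concrete, here is a minimal sketch of one common surrogate objective (minimum cross-entropy over each instance's candidate labels). It is a generic baseline, not the framework proposed in the cited paper, and the function name and toy data are assumptions.

```python
import numpy as np

def pll_min_loss(logits, candidate_sets):
    """Generic partial-label surrogate: for each instance, take the smallest
    cross-entropy over its candidate labels (a common PLL baseline, not the
    method of the cited paper)."""
    # logits: (n, num_classes); candidate_sets: list of candidate label-index lists
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    losses = [min(-log_probs[i, c] for c in cands)
              for i, cands in enumerate(candidate_sets)]
    return float(np.mean(losses))

# Toy example: 3 instances, 4 classes, each with a candidate label set in which
# exactly one label is the (unknown) true label.
logits = np.array([[2.0, 0.1, -1.0, 0.0],
                   [0.0, 1.5, 0.5, -0.5],
                   [0.2, 0.1, 0.0, 2.5]])
print(pll_min_loss(logits, [[0, 1], [1, 2], [3]]))
```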
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.