Statistical Foundation Behind Machine Learning and Its Impact on
Computer Vision
- URL: http://arxiv.org/abs/2209.02691v1
- Date: Tue, 6 Sep 2022 17:59:04 GMT
- Title: Statistical Foundation Behind Machine Learning and Its Impact on
Computer Vision
- Authors: Lei Zhang and Heung-Yeung Shum
- Score: 8.974457198386414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper revisits the principle of uniform convergence in statistical
learning, discusses how it acts as the foundation behind machine learning, and
attempts to gain a better understanding of the essential problem that current
deep learning algorithms are solving. Using computer vision as an example
domain in machine learning, the discussion shows that recent research trends in
leveraging increasingly large-scale data to perform pre-training for
representation learning are largely to reduce the discrepancy between a
practically tractable empirical loss and its ultimately desired but intractable
expected loss. Furthermore, this paper suggests a few future research
directions, predicts the continued increase of data, and argues that more
fundamental research is needed on robustness, interpretability, and reasoning
capabilities of machine learning by incorporating structure and knowledge.
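The gap the abstract describes, between a tractable empirical loss and an intractable expected loss, can be made concrete with a minimal Monte Carlo sketch. This is an illustrative toy setup (fixed predictor, Gaussian noise, names like `empirical_loss` are our own), chosen so the expected loss is known in closed form; it is not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: predictor f(x) = x on data y = x + eps, eps ~ N(0, 1).
# The squared loss (f(x) - y)^2 equals eps^2, so the expected loss is
# exactly E[eps^2] = 1.0 (the noise variance). In practice this
# quantity is intractable; here it is known by construction.
EXPECTED_LOSS = 1.0

def empirical_loss(n: int) -> float:
    """Empirical squared loss of f over n i.i.d. samples."""
    eps = rng.normal(0.0, 1.0, size=n)
    return float(np.mean(eps ** 2))

# The gap |empirical - expected| shrinks at roughly O(1/sqrt(n)).
# Uniform convergence controls this gap simultaneously over a whole
# hypothesis class, not just for one fixed predictor as here.
for n in (10, 1_000, 100_000):
    gap = abs(empirical_loss(n) - EXPECTED_LOSS)
    print(f"n={n:>7}  |empirical - expected| = {gap:.4f}")
```

The shrinking gap with growing n is the statistical motivation the abstract attributes to large-scale pre-training: more data makes the tractable empirical loss a better proxy for the desired expected loss.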
Related papers
- Reproducibility and Geometric Intrinsic Dimensionality: An Investigation on Graph Neural Network Research [0.0]
Building on these efforts, we turn towards another critical challenge in machine learning: the curse of dimensionality.
Using the closely linked concept of intrinsic dimension, we investigate to what extent the machine learning models used are influenced by the intrinsic dimension of the data sets they are trained on.
arXiv Detail & Related papers (2024-03-13T11:44:30Z) - Disentangling the Causes of Plasticity Loss in Neural Networks [55.23250269007988]
We show that loss of plasticity can be decomposed into multiple independent mechanisms.
We show that a combination of layer normalization and weight decay is highly effective at maintaining plasticity in a variety of synthetic nonstationary learning tasks.
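The combination this summary names, layer normalization plus weight decay, can be sketched as two standard components (an illustrative NumPy rendering under our own assumptions, not the paper's code).

```python
import numpy as np

def layer_norm(h, eps=1e-5):
    """Normalize each row of activations to zero mean, unit variance."""
    mu = h.mean(axis=-1, keepdims=True)
    var = h.var(axis=-1, keepdims=True)
    return (h - mu) / np.sqrt(var + eps)

def sgd_step(w, grad, lr=0.01, weight_decay=1e-4):
    """One SGD step with L2 weight decay, which continually shrinks
    weights toward zero and keeps their scale bounded as the task drifts."""
    return w - lr * (grad + weight_decay * w)

h = np.array([[1.0, 2.0, 3.0]])
print(layer_norm(h))  # each row normalized to ~zero mean, ~unit variance
```

Intuitively, both mechanisms bound the scale of activations and weights, which is one plausible reason they help preserve plasticity under nonstationary targets.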
arXiv Detail & Related papers (2024-02-29T00:02:33Z) - Improving Prediction Performance and Model Interpretability through
Attention Mechanisms from Basic and Applied Research Perspectives [3.553493344868414]
This bulletin is based on a summary of the author's dissertation.
Deep learning models achieve much higher prediction performance than traditional machine learning models.
However, their prediction process remains difficult to interpret or explain.
arXiv Detail & Related papers (2023-03-24T16:24:08Z) - Impact Learning: A Learning Method from Features Impact and Competition [1.3569491184708429]
This paper introduces a new machine learning algorithm called impact learning.
Impact learning is a supervised learning algorithm that can be applied to both classification and regression problems.
It learns from the impacts of features, modeled via the intrinsic rate of natural increase.
arXiv Detail & Related papers (2022-11-04T04:56:35Z) - Deep Learning to See: Towards New Foundations of Computer Vision [88.69805848302266]
This book criticizes the supposed scientific progress in the field of computer vision.
It proposes the investigation of vision within the framework of information-based laws of nature.
arXiv Detail & Related papers (2022-06-30T15:20:36Z) - Causal Reasoning Meets Visual Representation Learning: A Prospective
Study [117.08431221482638]
Lack of interpretability, robustness, and out-of-distribution generalization remain key challenges for existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and highlight the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z) - Evaluation Methods and Measures for Causal Learning Algorithms [33.07234268724662]
We focus on the two fundamental causal-inference tasks and causality-aware machine learning tasks.
The survey seeks to bring to the forefront the urgency of developing publicly available benchmarks and consensus-building standards for causal learning evaluation with observational data.
arXiv Detail & Related papers (2022-02-07T00:24:34Z) - Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory
to Learning Algorithms [91.3755431537592]
We analyze four broad meta-learning strategies which rely on plug-in estimation and pseudo-outcome regression.
We highlight how this theoretical reasoning can be used to guide principled algorithm design and translate our analyses into practice.
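One of the plug-in meta-learning strategies this summary alludes to can be sketched as a T-learner-style estimator: fit one outcome regression per treatment arm, then take the difference of their predictions. This is a hedged toy illustration under our own assumptions (synthetic data with a known constant effect), not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: outcome y = x + 2*t + noise, so the true treatment
# effect is a constant 2.0 for every unit.
n = 2_000
x = rng.uniform(-1.0, 1.0, n)
t = rng.integers(0, 2, n)                  # binary treatment indicator
y = x + 2.0 * t + rng.normal(0.0, 0.1, n)

# Plug-in estimation: one simple linear regression per arm
# (np.polyfit with degree 1 fits slope and intercept).
coef_treated = np.polyfit(x[t == 1], y[t == 1], 1)
coef_control = np.polyfit(x[t == 0], y[t == 0], 1)

# Treatment-effect estimate at a query point: difference of the two
# arm-specific predictions.
x_query = 0.0
cate = np.polyval(coef_treated, x_query) - np.polyval(coef_control, x_query)
print(f"estimated effect at x=0: {cate:.2f}")  # close to the true 2.0
```

Pseudo-outcome regression strategies differ in that they construct a transformed target whose regression directly estimates the effect, rather than differencing two plug-in models as above.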
arXiv Detail & Related papers (2021-01-26T17:11:40Z) - Knowledge as Invariance -- History and Perspectives of
Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z) - Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z) - Causality Learning: A New Perspective for Interpretable Machine Learning [15.556963808865918]
Interpretable machine learning is currently a mainstream topic in the research community.
This paper provides an overview of causal analysis with the fundamental background and key concepts, and then summarizes most recent causal approaches for interpretable machine learning.
arXiv Detail & Related papers (2020-06-27T13:01:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.