A New Interpolation Approach and Corresponding Instance-Based Learning
- URL: http://arxiv.org/abs/2108.11530v1
- Date: Thu, 26 Aug 2021 00:41:17 GMT
- Title: A New Interpolation Approach and Corresponding Instance-Based Learning
- Authors: Shiyou Lian
- Abstract summary: This paper introduces a measure of approximation degree between two numerical values and derives the corresponding one-dimensional interpolation methods and formulas.
Applying the approach to instance-based learning yields a new instance-based learning method: learning using ADB interpolation.
In principle, this method is a kind of learning by analogy; it and deep learning, which belongs to inductive learning, can complement each other, and for some problems the two can even achieve "different approaches but equal results" in big data and cloud computing environments.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Starting from the problem of finding an approximate value of a function,
this paper introduces a measure of approximation degree between two numerical
values, proposes the concepts of "strict approximation" and "strict
approximation region", derives the corresponding one-dimensional interpolation
methods and formulas, and then presents a calculation model called the
"sum-times-difference formula" for high-dimensional interpolation, thereby
developing a new interpolation approach: ADB interpolation. ADB interpolation
is applied to the interpolation of actual functions with satisfactory results.
In both principle and effect, the approach is novel and has the advantages of
simple calculation, stable accuracy, and ease of parallel processing; it is
well suited to high-dimensional interpolation and easily extended to the
interpolation of vector-valued functions. Applying the approach to
instance-based learning yields a new instance-based learning method: learning
using ADB interpolation. The learning method uses a unique technique and also
has the advantages of a definite mathematical basis, implicit distance weights,
avoidance of misclassification, high efficiency, and a wide range of
applications, as well as being interpretable. In principle, this method is a
kind of learning by analogy; it and deep learning, which belongs to inductive
learning, can complement each other, and for some problems the two can even
achieve "different approaches but equal results" in big data and cloud
computing environments. Thus, learning using ADB interpolation can also be
regarded as a kind of "wide learning" that is dual to deep learning.
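The abstract does not reproduce the paper's exact formulas, but the core idea of using an approximation degree as an implicit distance weight in instance-based prediction can be sketched as follows. This is a minimal illustrative assumption, not the paper's actual definition: here the degree to which `x` approximates `y` on a value range `[lo, hi]` is taken as `1 - |x - y| / (hi - lo)`, and a query is answered by a degree-weighted average over stored instances. The function names and the linear degree measure are hypothetical.

```python
# Hypothetical sketch of approximation-degree-based (ADB) instance-based
# prediction. The degree measure below is an illustrative assumption; the
# paper's actual formulas ("strict approximation", the sum-times-difference
# formula for high dimensions) are not shown in this abstract.

def approximation_degree(x, y, lo, hi):
    """Degree to which x approximates y on the range [lo, hi] (1.0 = identical)."""
    return 1.0 - abs(x - y) / (hi - lo)

def adb_predict(query, instances, lo, hi):
    """Predict f(query) as a degree-weighted average of stored (x, f(x)) pairs.

    The degrees act as implicit distance weights: closer instances
    contribute more, with no explicit distance metric or k parameter.
    """
    weights = [approximation_degree(query, x, lo, hi) for x, _ in instances]
    total = sum(weights)
    return sum(w * fx for w, (_, fx) in zip(weights, instances)) / total

# Example: instances sampled from f(x) = x**2 on [0, 4]
instances = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0), (3.0, 9.0), (4.0, 16.0)]
estimate = adb_predict(2.5, instances, 0.0, 4.0)  # near the true value 6.25
```

Because prediction is a simple weighted sum over instances, each instance's contribution can be computed independently, which is consistent with the abstract's claim that the approach facilitates parallel processing.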
Related papers
- Efficient Fairness-Performance Pareto Front Computation [51.558848491038916]
We show that optimal fair representations possess several useful structural properties.
We then show that these approximation problems can be solved efficiently via concave programming methods.
arXiv Detail & Related papers (2024-09-26T08:46:48Z) - A Simple Geometric-Aware Indoor Positioning Interpolation Algorithm
Based on Manifold Learning [10.334396596691048]
This paper proposes a simple yet powerful geometric-aware algorithm for indoor positioning tasks.
We exploit the geometric attributes of the local topological manifold using manifold learning principles.
Our proposed algorithm can be effortlessly integrated into any indoor positioning system.
arXiv Detail & Related papers (2023-11-27T07:19:23Z) - Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z) - Stabilizing Q-learning with Linear Architectures for Provably Efficient
Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z) - Revisiting Latent-Space Interpolation via a Quantitative Evaluation
Framework [14.589372535816619]
We show how data labeled with semantically continuous attributes can be utilized to conduct a quantitative evaluation of latent-space algorithms.
Our framework can be used to complement the standard qualitative comparison, and also enables evaluation for domains (such as graph) in which the visualization is difficult.
arXiv Detail & Related papers (2021-10-13T01:01:42Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - SIMPLE: SIngle-network with Mimicking and Point Learning for Bottom-up
Human Pose Estimation [81.03485688525133]
We propose a novel multi-person pose estimation framework, SIngle-network with Mimicking and Point Learning for Bottom-up Human Pose Estimation (SIMPLE).
Specifically, in the training process, we enable SIMPLE to mimic the pose knowledge from the high-performance top-down pipeline.
Besides, SIMPLE formulates human detection and pose estimation as a unified point learning framework, so the two tasks complement each other in a single network.
arXiv Detail & Related papers (2021-04-06T13:12:51Z) - Centralized Information Interaction for Salient Object Detection [68.8587064889475]
The U-shape structure has shown its advantage in salient object detection for efficiently combining multi-scale features.
This paper shows that by centralizing these connections, we can achieve the cross-scale information interaction among them.
Our approach can cooperate with various existing U-shape-based salient object detection methods by substituting the connections between the bottom-up and top-down pathways.
arXiv Detail & Related papers (2020-12-21T12:42:06Z) - Fast Reinforcement Learning with Incremental Gaussian Mixture Models [0.0]
An online and incremental algorithm capable of learning from a single pass through data, called Incremental Gaussian Mixture Network (IGMN), was employed as a sample-efficient function approximator for the joint state and Q-values space.
Results are analyzed to explain the properties of the obtained algorithm, and it is observed that the use of the IGMN function approximator brings some important advantages to reinforcement learning in relation to conventional neural networks trained by gradient descent methods.
arXiv Detail & Related papers (2020-11-02T03:18:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.