Rough Randomness and its Application
- URL: http://arxiv.org/abs/2304.00005v1
- Date: Tue, 21 Mar 2023 12:22:33 GMT
- Title: Rough Randomness and its Application
- Authors: Mani A
- Abstract summary: This research aims to capture a variety of rough processes, construct related models, and explore the validity of other machine learning algorithms.
A class of rough random functions termed large-minded reasoners has a central role in these.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A number of generalizations of stochastic and information-theoretic
randomness are known in the literature. However, they are not compatible with
handling meaning in vague and dynamic contexts of rough reasoning (and
therefore explainable artificial intelligence and machine learning). In this
research, new concepts of rough randomness that are neither stochastic nor
based on properties of strings are introduced by the present author. Her
concepts are intended to capture a wide variety of rough processes (applicable
to both static and dynamic data), construct related models, and explore the
validity of other machine learning algorithms. The last mentioned is restricted
to soft/hard clustering algorithms in this paper. Two new computationally
efficient algebraically-justified algorithms for soft and hard cluster
validation that involve rough random functions are additionally proposed in
this research. A class of rough random functions termed large-minded reasoners
has a central role in these.
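The abstract leaves the validation algorithms unspecified, but the rough-set machinery they rest on is standard. The following is a minimal Python sketch of Pawlak's lower and upper approximations used as a crude cluster-consistency check; it illustrates rough approximations only and is not the paper's large-minded reasoners. The `roughness` score and all names are hypothetical choices for this example.

```python
# Illustrative sketch: standard Pawlak rough approximations, NOT the
# paper's large-minded reasoners. The roughness score is a hypothetical
# way to gauge how well a hard cluster respects a coarse attribute.

def blocks(universe, equiv):
    """Partition `universe` into equivalence classes under the key `equiv`."""
    classes = {}
    for x in universe:
        classes.setdefault(equiv(x), set()).add(x)
    return list(classes.values())

def lower_upper(target, universe, equiv):
    """Pawlak lower/upper approximations of `target` w.r.t. the partition."""
    lower, upper = set(), set()
    for block in blocks(universe, equiv):
        if block <= target:
            lower |= block        # block lies entirely inside the target
        if block & target:
            upper |= block        # block meets the target
    return lower, upper

universe = set(range(10))
cluster = {0, 1, 2, 3, 4}                          # candidate hard cluster
lo, up = lower_upper(cluster, universe, lambda x: x // 3)
roughness = 1 - len(lo) / len(up)                  # 0 = exact, 1 = maximally rough
print(lo, up, roughness)
```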
Related papers
- Gradient Span Algorithms Make Predictable Progress in High Dimension [0.0]
We prove that all 'gradient span algorithms' have asymptotically deterministic behavior on scaled random functions as the dimension tends to infinity.
The distributional assumption is used for training but also encompasses random spin glasses.
arXiv Detail & Related papers (2024-10-13T19:26:18Z)
- Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a stochastic process.
Our approach enables the autonomous discovery of unknown parameters describing stochastic processes.
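A minimal sketch of the VAE bottleneck idea referenced above, assuming PyTorch; the architecture, latent dimension, and unweighted loss are hypothetical stand-ins rather than the paper's model.

```python
# Minimal VAE sketch (assumes PyTorch). Layer sizes, latent dimension,
# and the unweighted reconstruction+KL loss are illustrative choices.
import torch
import torch.nn as nn

class TrajectoryVAE(nn.Module):
    def __init__(self, traj_len=100, latent_dim=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(traj_len, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)       # latent mean
        self.logvar = nn.Linear(64, latent_dim)   # latent log-variance
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, traj_len))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = ((x - x_hat) ** 2).sum()                         # reconstruction
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()  # KL to N(0, I)
    return recon + kl

x = torch.randn(8, 100)             # 8 toy trajectories of length 100
model = TrajectoryVAE()
x_hat, mu, logvar = model(x)
print(vae_loss(x, x_hat, mu, logvar).item())
```

The latent dimension plays the role of the "minimal set of parameters": if two latent coordinates suffice to reconstruct the trajectories, the process is described by two parameters.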
arXiv Detail & Related papers (2023-07-21T14:25:06Z)
- Arbitrarily Large Labelled Random Satisfiability Formulas for Machine Learning Training [5.414308305392762]
We show how to generate correctly labeled random formulas of any desired size without having to solve the underlying decision problem.
We train existing state-of-the-art models for the task of predicting satisfiability on formulas with 10,000 variables.
We find that they do no better than random guessing 99% of the time on the same datasets.
arXiv Detail & Related papers (2022-11-21T17:52:13Z)
- Neural Active Learning on Heteroskedastic Distributions [29.01776999862397]
We demonstrate the catastrophic failure of active learning algorithms on heteroskedastic datasets.
We propose a new algorithm that incorporates a model difference scoring function for each data point to filter out the noisy examples and sample clean examples.
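A hypothetical realization of such a model-difference score, using the disagreement between two independently trained scikit-learn classifiers; the paper's actual scoring function may differ.

```python
# Hypothetical "model difference" score: disagreement between two models'
# predicted class probabilities. Not the paper's exact scoring function.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

def difference_scores(X_train, y_train, X_pool):
    a = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    b = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
    return np.abs(a.predict_proba(X_pool) - b.predict_proba(X_pool)).sum(axis=1)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(1000, 5))
scores = difference_scores(X_train, y_train, X_pool)
clean = X_pool[scores < np.quantile(scores, 0.5)]   # keep low-disagreement half
print(clean.shape)
```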
arXiv Detail & Related papers (2022-11-02T07:30:19Z)
- An end-to-end deep learning approach for extracting stochastic dynamical systems with $\alpha$-stable L\'evy noise [5.815325960286111]
In this work, we identify stochastic dynamical systems driven by $\alpha$-stable L\'evy noise from only random pairwise data.
Our innovations include: (1) designing a deep learning approach to learn both drift and diffusion terms for L\'evy-induced noise with $\alpha$ across all values, (2) learning complex multiplicative noise without restrictions on small noise intensity, and (3) proposing an end-to-end complete framework for system identification.
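In standard notation (the paper's exact formulation may differ), such systems take the form

$$\mathrm{d}X_t = b(X_t)\,\mathrm{d}t + \sigma(X_t)\,\mathrm{d}L_t^{\alpha}, \qquad \alpha \in (0, 2],$$

where $b$ is the drift, $\sigma$ the possibly multiplicative noise coefficient, and $L_t^{\alpha}$ an $\alpha$-stable L\'evy process.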
arXiv Detail & Related papers (2022-01-31T10:51:25Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
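The random-perturbation baseline mentioned above is typically the two-point estimator sketched below (toy objective and step sizes; the paper's probabilistic-model alternative is not shown).

```python
# Classic random-perturbation (two-point) policy-gradient estimate, the
# baseline the paper improves on. Objective and step sizes are toy choices.
import numpy as np

def perturbation_gradient(f, theta, n_samples=32, sigma=0.1):
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        u = np.random.randn(*theta.shape)
        grad += (f(theta + sigma * u) - f(theta - sigma * u)) / (2 * sigma) * u
    return grad / n_samples

f = lambda th: -np.sum(th ** 2)     # toy "return" to maximize
theta = np.ones(3)
for _ in range(100):
    theta += 0.05 * perturbation_gradient(f, theta)   # gradient ascent
print(theta)                        # approaches the optimum at 0
```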
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Quantum-Inspired Algorithms from Randomized Numerical Linear Algebra [53.46106569419296]
We create classical (non-quantum) dynamic data structures supporting queries for recommender systems and least-squares regression.
We argue that the previous quantum-inspired algorithms for these problems are doing leverage or ridge-leverage score sampling in disguise.
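For reference, the (ridge-)leverage scores that such sampling relies on are, for a matrix $A$ with rows $a_i^{\top}$,

$$\ell_i(A) = a_i^{\top}(A^{\top}A)^{-1}a_i, \qquad \ell_i^{\lambda}(A) = a_i^{\top}(A^{\top}A + \lambda I)^{-1}a_i,$$

with rows sampled proportionally to these scores (pseudo-inverse when $A^{\top}A$ is singular).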
arXiv Detail & Related papers (2020-11-09T01:13:07Z)
- Deviation bound for non-causal machine learning [0.0]
Concentration inequalities are widely used for analyzing machine learning algorithms.
Current concentration inequalities cannot be applied to some of the most popular deep neural networks.
In this paper, a framework for modeling non-causal random fields is provided and a Hoeffding-type concentration inequality is obtained for this framework.
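For reference, the classical Hoeffding inequality that such results generalize: for independent $X_1, \dots, X_n$ with $X_i \in [a_i, b_i]$ and $S_n = \sum_{i=1}^{n} X_i$,

$$\Pr\big(|S_n - \mathbb{E}[S_n]| \ge t\big) \le 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right).$$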
arXiv Detail & Related papers (2020-09-18T15:57:59Z)
- The data-driven physical-based equations discovery using evolutionary approach [77.34726150561087]
We describe an algorithm for discovering mathematical equations from given observational data.
The algorithm combines genetic programming with sparse regression.
It can be used for the discovery of governing analytical equations as well as partial differential equations (PDEs).
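A minimal sketch of the sparse-regression half (SINDy-style sequentially thresholded least squares) over a toy term library; the genetic-programming half that evolves candidate terms is not shown, and all names are illustrative.

```python
# Sparse regression over a library of candidate terms (SINDy-style).
# The genetic-programming component of the paper is not reproduced here.
import numpy as np

def sparse_fit(Theta, dxdt, threshold=0.1, n_iter=10):
    xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        xi[np.abs(xi) < threshold] = 0.0           # prune small coefficients
        big = np.abs(xi) >= threshold
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]
    return xi

# Toy data generated from dx/dt = -2x + 0.5x^3
x = np.linspace(-2, 2, 200)
dxdt = -2 * x + 0.5 * x ** 3
Theta = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])  # term library
print(sparse_fit(Theta, dxdt))      # recovers ~[0, -2, 0, 0.5]
```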
arXiv Detail & Related papers (2020-04-03T17:21:57Z)
- Hidden Cost of Randomized Smoothing [72.93630656906599]
In this paper, we point out the side effects of current randomized smoothing.
Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
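For context, the smoothed classifier under discussion is, in the standard construction, a majority vote of a base classifier under Gaussian input noise; a toy sketch (base classifier and noise level are stand-ins):

```python
# Standard randomized-smoothing prediction: majority vote of a base
# classifier over Gaussian perturbations of the input. The base
# classifier here is a toy stand-in.
import numpy as np

def smoothed_predict(base_classify, x, sigma=0.5, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        label = base_classify(x + sigma * rng.normal(size=x.shape))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)    # most frequent class wins

base = lambda x: int(x[0] + x[1] > 0)   # toy linear base classifier
print(smoothed_predict(base, np.array([0.3, -0.1])))
```

The paper's point is that the decision boundaries of such smoothed classifiers shrink, which is what produces the class-wise accuracy disparity.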
arXiv Detail & Related papers (2020-03-02T23:37:42Z)
- Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
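The generic multiplicative-weight-update rule that the adapted algorithm builds on is sketched below (experts setting only; the Klivans-Meka specialization to graphical-model neighborhoods is not reproduced).

```python
# Generic multiplicative weight updates (prediction with expert advice).
# The graphical-model specialization of Klivans and Meka is not shown.
import numpy as np

def mwu(losses, eta=0.5):
    """losses: (T, n) array of per-round losses in [0, 1] for n experts."""
    w = np.ones(losses.shape[1])
    for loss in losses:
        w *= np.exp(-eta * loss)    # down-weight experts that incurred loss
        w /= w.sum()                # renormalize to a distribution
    return w

rng = np.random.default_rng(1)
losses = rng.uniform(size=(100, 4))
losses[:, 2] *= 0.2                 # expert 2 is consistently better
print(mwu(losses))                  # mass concentrates on expert 2
```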
arXiv Detail & Related papers (2020-02-20T10:50:58Z)