The Sigma-max System Induced from Randomness & Fuzziness and its Application in Time Series Prediction
- URL: http://arxiv.org/abs/2110.07722v2
- Date: Thu, 02 Jan 2025 02:24:58 GMT
- Title: The Sigma-max System Induced from Randomness & Fuzziness and its Application in Time Series Prediction
- Authors: Wei Mei, Ming Li, Yuanzeng Cheng, Limin Liu
- Abstract summary: We focus on why the key axiom of "maxitivity" is adopted for the possibility measure.
Our work provides a physical foundation for the axiomatic definition of possibility for the measure of fuzziness.
- Score: 5.648717826360932
- Abstract: This paper induces probability theory (the sigma system) and possibility theory (the max system) from clearly defined notions of randomness and fuzziness, respectively, while focusing on the question of why the key axiom of "maxitivity" is adopted for the possibility measure. This objective is achieved in three steps: a) establishing mathematical definitions of randomness and fuzziness; b) developing an intuitive definition of possibility as a measure of fuzziness based on the compatibility interpretation; c) abstracting the axiomatic definitions of probability and possibility from their intuitive definitions, by exploiting the properties of the well-defined randomness and fuzziness. We conclude that "max" is the only, albeit non-strict, disjunctive operator applicable across the fuzzy event space, and that it is an exact operator for extracting the value from the fuzzy sample space that leads to the largest possibility of one. A demonstration example of stock price prediction is then presented, confirming that max inference indeed exhibits distinctive performance, with an improvement of up to 18.99% over sigma inference for the investigated application. Our work provides a physical foundation for the axiomatic definition of possibility as the measure of fuzziness, which we hope will facilitate wider adoption of possibility theory in practice.
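To make the contrast between the two axiom systems concrete, here is a minimal sketch of how a probability measure and a possibility measure score the same events: the sigma system sums the mass of disjoint events, while the max system takes their maximum. The outcome values and the two measures are hypothetical illustrations, not data from the paper.

```python
# Illustrative discrete sample space (hypothetical values).
prob = {"a": 0.10, "b": 0.30, "c": 0.20, "d": 0.25, "e": 0.15}  # sums to 1
poss = {"a": 0.20, "b": 1.00, "c": 0.60, "d": 0.80, "e": 0.40}  # max is 1

def probability(event):
    """Sigma system: P(A) sums the mass of the outcomes in A."""
    return sum(prob[x] for x in event)

def possibility(event):
    """Max system: Pi(A) is the largest possibility degree in A."""
    return max(poss[x] for x in event)

A, B = {"a", "b"}, {"c", "d"}      # disjoint events
print(probability(A | B))          # 0.85 = P(A) + P(B)      (sigma-additivity)
print(possibility(A | B))          # 1.0  = max(Pi(A), Pi(B)) (maxitivity)

# The normalizations differ as well: total probability mass is 1,
# while at least one outcome attains possibility exactly 1.
assert abs(sum(prob.values()) - 1.0) < 1e-12
assert max(poss.values()) == 1.0
```

Maxitivity is what the abstract's argument turns on: the measure of a disjunction is determined entirely by its largest member, so "max" acts as the disjunctive operator over the fuzzy event space.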
Related papers
- Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
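For background on the conformal side of this construction, the following is a minimal split-conformal sketch; the data, the toy predictor, and the miscoverage level alpha are illustrative assumptions, not details from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D regression data: y = 2x + noise.
x = rng.uniform(0.0, 1.0, size=400)
y = 2.0 * x + rng.normal(0.0, 0.1, size=400)

# Split into a fitting half and a calibration half.
x_fit, y_fit, x_cal, y_cal = x[:200], y[:200], x[200:], y[200:]

# Toy point predictor: least-squares slope through the origin.
slope = (x_fit @ y_fit) / (x_fit @ x_fit)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - slope * x_cal)

# Conformal quantile at miscoverage level alpha = 0.1.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Marginal (1 - alpha) prediction interval for a new input.
x_new = 0.5
print(slope * x_new - q, slope * x_new + q)
```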
arXiv Detail & Related papers (2024-07-01T20:44:48Z)
- On The Statistical Representation Properties Of The Perturb-Softmax And The Perturb-Argmax Probability Distributions [17.720298535412443]
Gumbel-Softmax and Gumbel-Argmax probability distributions are useful in learning discrete structures in discriminative learning.
Despite the efforts invested in optimizing these probability models, their statistical properties are under-explored.
We investigate their representation properties and determine for which families of parameters these probability distributions are complete.
We conclude the analysis by identifying two sets of parameters that satisfy these assumptions and thus admit a complete and minimal representation.
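As a reminder of the two constructions named in the title, this sketch draws an exact categorical sample via Gumbel-Argmax and a relaxed one via Gumbel-Softmax; the logits and temperature are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([1.0, 0.5, -0.5])   # hypothetical unnormalized log-probs

def gumbel_noise(shape):
    """Gumbel(0, 1) samples: -log(-log(U)), U ~ Uniform(0, 1)."""
    u = rng.uniform(1e-12, 1.0, size=shape)
    return -np.log(-np.log(u))

# Gumbel-Argmax: exact sample from the categorical distribution.
hard_sample = np.argmax(logits + gumbel_noise(logits.shape))

# Gumbel-Softmax: differentiable relaxation; tau -> 0 recovers argmax.
tau = 0.5
z = (logits + gumbel_noise(logits.shape)) / tau
z = z - z.max()                        # numerical stabilization
soft_sample = np.exp(z) / np.exp(z).sum()

print(hard_sample, soft_sample)
```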
arXiv Detail & Related papers (2024-06-04T10:22:12Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- $\omega$PAP Spaces: Reasoning Denotationally About Higher-Order, Recursive Probabilistic and Differentiable Programs [64.25762042361839]
$\omega$PAP spaces are spaces for reasoning denotationally about expressive differentiable and probabilistic programming languages.
Our semantics is general enough to assign meanings to most practical probabilistic and differentiable programs.
We establish the almost-everywhere differentiability of probabilistic programs' trace density functions.
arXiv Detail & Related papers (2023-02-21T12:50:05Z)
- Relative Probability on Finite Outcome Spaces: A Systematic Examination of its Axiomatization, Properties, and Applications [0.0]
This work proposes a view of probability as a relative measure rather than an absolute one.
We focus on finite outcome spaces and develop three fundamental axioms that establish requirements for relative probability functions.
arXiv Detail & Related papers (2022-12-30T05:16:57Z)
- Bounding Counterfactuals under Selection Bias [60.55840896782637]
We propose a first algorithm to address both identifiable and unidentifiable queries.
We prove that, in spite of the missingness induced by the selection bias, the likelihood of the available data is unimodal.
arXiv Detail & Related papers (2022-07-26T10:33:10Z)
- A Logic-based Tractable Approximation of Probability [0.0]
We identify the conditions under which propositional probability functions can be approximated by a hierarchy of depth-bounded Belief functions.
We show that our approximations of probability lead to uncertain reasoning which, under the usual assumptions in the field, qualifies as tractable.
arXiv Detail & Related papers (2022-05-06T13:25:12Z)
- Entropy, Information, and the Updating of Probabilities [0.0]
This paper is a review of a particular approach to the method of maximum entropy as a general framework for inference.
The ME method not only selects a single posterior but also addresses how much less probable other distributions might be.
arXiv Detail & Related papers (2021-07-09T16:27:23Z)
- Maximum Entropy competes with Maximum Likelihood [0.0]
The maximum entropy (MAXENT) method has a wide range of applications in theoretical and applied machine learning.
We show that MAXENT applies in sparse data regimes, but needs specific types of prior information.
In particular, MAXENT can outperform the optimally regularized ML provided that there are prior rank correlations between the estimated random quantity and its probabilities.
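As a concrete instance of the MAXENT method under discussion, this sketch solves the classic die problem: find the maximum-entropy distribution over faces 1..6 subject to a mean constraint. The solution has the exponential-family form p_i ∝ exp(λ·x_i); the target mean here is an illustrative assumption.

```python
import numpy as np

x = np.arange(1, 7)        # die faces
target_mean = 4.5          # hypothetical moment constraint E[x] = 4.5

def mean_at(lam):
    """Mean of the exponential-family solution p_i ∝ exp(lam * x_i)."""
    w = np.exp(lam * (x - x.max()))     # shift for numerical stability
    p = w / w.sum()
    return p @ x

# Solve for lambda by bisection: mean_at is increasing in lambda.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_at(mid) < target_mean:
        lo = mid
    else:
        hi = mid

lam = 0.5 * (lo + hi)
w = np.exp(lam * (x - x.max()))
p = w / w.sum()
print(p, p @ x)            # maximum-entropy distribution and its mean
```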
arXiv Detail & Related papers (2020-12-17T07:44:22Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Distributionally Robust Bayesian Quadrature Optimization [60.383252534861136]
We study BQO under distributional uncertainty in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO) for this purpose.
arXiv Detail & Related papers (2020-01-19T12:00:33Z)