Gaussian Mixture Models for Affordance Learning using Bayesian Networks
- URL: http://arxiv.org/abs/2402.06078v1
- Date: Thu, 8 Feb 2024 22:05:45 GMT
- Title: Gaussian Mixture Models for Affordance Learning using Bayesian Networks
- Authors: Pedro Osório, Alexandre Bernardino, Ruben Martinez-Cantin, José
Santos-Victor
- Abstract summary: Affordances are fundamental descriptors of relationships between actions, objects and effects.
This paper approaches the problem of an embodied agent exploring the world and learning these affordances autonomously from its sensory experiences.
- Score: 50.18477618198277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Affordances are fundamental descriptors of relationships between actions,
objects and effects. They provide the means whereby a robot can predict
effects, recognize actions, select objects and plan its behavior according to
desired goals. This paper approaches the problem of an embodied agent exploring
the world and learning these affordances autonomously from its sensory
experiences. Models exist for learning the structure and the parameters of a
Bayesian Network encoding this knowledge. Although Bayesian Networks are
capable of dealing with uncertainty and redundancy, previous work considered
complete observability of the discrete sensory data, which may lead to hard
errors in the presence of noise. In this paper we consider a probabilistic
representation of the sensors by Gaussian Mixture Models (GMMs) and explicitly
take into account the probability distribution contained in each discrete
affordance concept, which can lead to more accurate learning.
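As a rough illustration of the idea, each discrete affordance symbol can be backed by a GMM over raw sensor readings, so that a noisy observation yields a posterior over symbols (soft evidence for the Bayesian Network) rather than a single hard label. A minimal sketch using scikit-learn, with entirely invented sensor data and concept names (not the paper's actual setup):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical 1-D sensor readings for two discrete concepts,
# e.g. "small" vs. "large" object size (illustrative data only).
rng = np.random.default_rng(0)
readings = np.concatenate([
    rng.normal(2.0, 0.5, 200),   # samples around the "small" concept
    rng.normal(5.0, 0.5, 200),   # samples around the "large" concept
]).reshape(-1, 1)

# Fit a two-component GMM to the raw sensor data.
gmm = GaussianMixture(n_components=2, random_state=0).fit(readings)

# A noisy reading near the decision boundary: instead of a hard
# assignment (argmax), keep the full posterior over concepts and
# pass it to the Bayesian Network as soft evidence.
x = np.array([[3.5]])
soft_evidence = gmm.predict_proba(x)[0]
hard_label = int(np.argmax(soft_evidence))
print(soft_evidence, hard_label)
```

For an ambiguous reading like the one above, both components receive substantial probability mass, which is exactly the information a hard discretization would discard.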
Related papers
- LaPLACE: Probabilistic Local Model-Agnostic Causal Explanations [1.0370398945228227]
We introduce LaPLACE-explainer, designed to provide probabilistic cause-and-effect explanations for machine learning models.
The LaPLACE-Explainer component leverages the concept of a Markov blanket to establish statistical boundaries between relevant and non-relevant features.
Our approach offers causal explanations and outperforms LIME and SHAP in terms of local accuracy and consistency of explained features.
arXiv Detail & Related papers (2023-10-01T04:09:59Z)
- How to Combine Variational Bayesian Networks in Federated Learning [0.0]
Federated learning enables multiple data centers to train a central model collaboratively without exposing any confidential data.
While deterministic models can achieve high prediction accuracy, their lack of calibration and inability to quantify uncertainty are problematic for safety-critical applications.
We study the effects of various aggregation schemes for variational Bayesian neural networks.
arXiv Detail & Related papers (2022-06-22T07:53:12Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Masked prediction tasks: a parameter identifiability view [49.533046139235466]
We focus on the widely used self-supervised learning method of predicting masked tokens.
We show that there is a rich landscape of possibilities, out of which some prediction tasks yield identifiability, while others do not.
arXiv Detail & Related papers (2022-02-18T17:09:32Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
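The Nadaraya-Watson idea behind this entry can be sketched in a few lines: each training point votes for its label with a kernel weight, and the entropy of the resulting label distribution serves as an uncertainty score. A toy sketch, not NUQ's actual implementation; the data, bandwidth, and function names are invented for illustration:

```python
import numpy as np

def nadaraya_watson_label_dist(x, X_train, y_train, n_classes, bandwidth=1.0):
    """Kernel-weighted estimate of p(y | x): each training point votes for
    its own label, weighted by a Gaussian (RBF) kernel on the input space."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    probs = np.array([w[y_train == c].sum() for c in range(n_classes)])
    return probs / probs.sum()

# Toy 2-D data: class 0 near the origin, class 1 near (3, 3).
X = np.array([[0.0, 0.0], [0.1, 0.2], [3.0, 3.0], [2.9, 3.1]])
y = np.array([0, 0, 1, 1])

p_near0 = nadaraya_watson_label_dist(np.array([0.05, 0.1]), X, y, 2)
p_mid = nadaraya_watson_label_dist(np.array([1.5, 1.5]), X, y, 2)

# Entropy of the estimated label distribution as an uncertainty score:
# low near either cluster, highest midway between the two classes.
entropy = lambda p: -np.sum(p * np.log(p + 1e-12))
print(entropy(p_near0), entropy(p_mid))
```

A query point deep inside one cluster yields a near-deterministic label distribution (low entropy), while a point between the clusters yields a near-uniform one (high entropy), which is the uncertainty signal being exploited.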
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Uncertainty Reasoning for Probabilistic Petri Nets via Bayesian Networks [1.471992435706872]
We exploit extended Bayesian networks for uncertainty reasoning on Petri nets.
In particular, Bayesian networks are used as symbolic representations of probability distributions.
We show how to derive information from a modular Bayesian net.
arXiv Detail & Related papers (2020-09-30T17:40:54Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings [7.476901945542385]
We show that Bayesian deep learning models on certain occasions marginally outperform conventional neural networks.
Preliminary investigations indicate the potential inherent role of bias due to choices of initialisation, architecture or activation functions.
arXiv Detail & Related papers (2020-09-03T16:58:15Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models poses unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
- Symbolic Learning and Reasoning with Noisy Data for Probabilistic Anchoring [19.771392829416992]
We propose a semantic world modeling approach based on bottom-up object anchoring.
We extend the definitions of anchoring to handle multi-modal probability distributions.
We use statistical relational learning to enable the anchoring framework to learn symbolic knowledge.
arXiv Detail & Related papers (2020-02-24T16:58:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.