pRSL: Interpretable Multi-label Stacking by Learning Probabilistic Rules
- URL: http://arxiv.org/abs/2105.13850v1
- Date: Fri, 28 May 2021 14:06:21 GMT
- Title: pRSL: Interpretable Multi-label Stacking by Learning Probabilistic Rules
- Authors: Michael Kirchhof, Lena Schmid, Christopher Reining, Michael ten Hompel, Markus Pauly
- Abstract summary: We present the probabilistic rule stacking learner (pRSL), which uses probabilistic propositional logic rules and belief propagation to combine the predictions of several underlying classifiers.
We derive algorithms for exact and approximate inference and learning, and show that pRSL reaches state-of-the-art performance on various benchmark datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A key task in multi-label classification is modeling the structure between
the involved classes. Modeling this structure by probabilistic and
interpretable means enables application in a broad variety of tasks such as
zero-shot learning or learning from incomplete data. In this paper, we present
the probabilistic rule stacking learner (pRSL) which uses probabilistic
propositional logic rules and belief propagation to combine the predictions of
several underlying classifiers. We derive algorithms for exact and approximate
inference and learning, and show that pRSL reaches state-of-the-art performance
on various benchmark datasets.
In the process, we introduce a novel multicategorical generalization of the
noisy-or gate. Additionally, we report simulation results on the quality of
loopy belief propagation algorithms for approximate inference in bipartite
noisy-or networks.
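A note on the noisy-or construction: the multicategorical generalization is defined in the paper itself, but the standard binary noisy-or gate it builds on can be sketched as follows. This is a minimal illustration of how per-classifier confidences might be combined through a single gate, not the paper's pRSL implementation; the link strengths, the leak term, and the example numbers are assumptions made purely for the example.

```python
import numpy as np

def noisy_or(parent_probs, link_probs, leak=0.0):
    """Standard binary noisy-or gate with soft (probabilistic) parents.

    parent_probs[i] -- P(parent_i = 1), e.g. a base classifier's confidence
    link_probs[i]   -- probability that an active parent_i alone triggers the gate
    leak            -- probability that the gate fires with no active parent
    """
    parent_probs = np.asarray(parent_probs, dtype=float)
    link_probs = np.asarray(link_probs, dtype=float)
    # Marginalising each independent binary parent gives the closed form
    # P(gate off) = (1 - leak) * prod_i (1 - link_i * parent_i).
    p_off = (1.0 - leak) * np.prod(1.0 - link_probs * parent_probs)
    return 1.0 - p_off

# Toy stacking example (hypothetical numbers): three base classifiers predict
# the same label and their confidences are fused by one gate.
base_confidences = [0.9, 0.2, 0.6]   # P(label = 1) from each classifier
reliabilities = [0.8, 0.8, 0.8]      # assumed per-classifier link strengths
print(noisy_or(base_confidences, reliabilities, leak=0.01))
```

With hard 0/1 parents the expression reduces to the textbook noisy-or; with soft parents it corresponds to marginalising each parent independently before combining.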
Related papers
- An MRP Formulation for Supervised Learning: Generalized Temporal Difference Learning Models [20.314426291330278]
In traditional statistical learning, data points are usually assumed to be independently and identically distributed (i.i.d.)
This paper presents a contrasting viewpoint, perceiving data points as interconnected and employing a Markov reward process (MRP) for data modeling.
We reformulate the typical supervised learning as an on-policy policy evaluation problem within reinforcement learning (RL), introducing a generalized temporal difference (TD) learning algorithm as a resolution.
arXiv Detail & Related papers (2024-04-23T21:02:58Z)
- Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP [81.00800920928621]
We study representation learning in partially observable Markov Decision Processes (POMDPs).
We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU).
We then show how to adapt this algorithm to also work in the broader class of $\gamma$-observable POMDPs.
arXiv Detail & Related papers (2023-06-21T16:04:03Z)
- Multi-Classification using One-versus-One Deep Learning Strategy with Joint Probability Estimates [0.0]
Numerical experiments in different applications show that the proposed model achieves generally higher classification accuracy than other state-of-the-art models.
arXiv Detail & Related papers (2023-06-16T07:54:15Z)
- Probabilistic Multi-Dimensional Classification [5.147849907358484]
Multi-dimensional classification (MDC) can be employed in a range of applications where one needs to predict multiple class variables for each given instance.
Many existing MDC methods suffer from at least one of the following drawbacks: inaccuracy, poor scalability, or limited applicability to certain types of data.
This paper is an attempt to address all these disadvantages simultaneously.
arXiv Detail & Related papers (2023-06-10T20:07:06Z)
- Multi-annotator Deep Learning: A Probabilistic Framework for Classification [2.445702550853822]
Training standard deep neural networks leads to subpar performance in multi-annotator supervised learning settings.
We address this issue by presenting a probabilistic training framework named multi-annotator deep learning (MaDL).
A modular network architecture enables us to make varying assumptions regarding annotators' performances.
Our findings show MaDL's state-of-the-art performance and robustness against many correlated, spamming annotators.
arXiv Detail & Related papers (2023-04-05T16:00:42Z)
- Multivariate Systemic Risk Measures and Computation by Deep Learning Algorithms [63.03966552670014]
We discuss the key related theoretical aspects, with a particular focus on the fairness properties of primal optima and associated risk allocations.
The algorithms we provide allow for learning primal optima, optima for the dual representation, and the corresponding fair risk allocations.
arXiv Detail & Related papers (2023-02-02T22:16:49Z)
- How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure.
This separation result underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
arXiv Detail & Related papers (2021-05-05T17:56:00Z)
- Network Classifiers Based on Social Learning [71.86764107527812]
We propose a new way of combining independently trained classifiers over space and time.
The proposed architecture is able to improve prediction performance over time with unlabeled data.
We show that this strategy results in consistent learning with high probability, and it yields a robust structure against poorly trained classifiers.
arXiv Detail & Related papers (2020-10-23T11:18:20Z)
- Learned Factor Graphs for Inference from Stationary Time Sequences [107.63351413549992]
We propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences.
Neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence.
We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data.
arXiv Detail & Related papers (2020-06-05T07:06:19Z)
- Can We Learn Heuristics For Graphical Model Inference Using Reinforcement Learning? [114.24881214319048]
We show that we can learn programs, i.e., policies, for solving inference in higher order Conditional Random Fields (CRFs) using reinforcement learning.
Our method solves inference tasks efficiently without imposing any constraints on the form of the potentials.
arXiv Detail & Related papers (2020-04-27T19:24:04Z)
- Active Learning in Video Tracking [8.782204980889079]
We propose an adversarial approach for active learning with structured prediction domains that is tractable for matching.
We evaluate this approach algorithmically on an important structured prediction problem: object tracking in videos.
arXiv Detail & Related papers (2019-12-29T00:42:06Z)