Discriminative Learning for Probabilistic Context-Free Grammars based on
Generalized H-Criterion
- URL: http://arxiv.org/abs/2103.08656v1
- Date: Mon, 15 Mar 2021 19:07:17 GMT
- Title: Discriminative Learning for Probabilistic Context-Free Grammars based on
Generalized H-Criterion
- Authors: Mauricio Maca, José Miguel Benedí and Joan Andreu Sánchez
- Abstract summary: We present a family of discriminative learning algorithms for Probabilistic Context-Free Grammars (PCFGs) based on a generalization of the H-criterion.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a formal framework for the development of a family of
discriminative learning algorithms for Probabilistic Context-Free Grammars
(PCFGs) based on a generalization of the H-criterion. First, we propose the
H-criterion as the objective function and the Growth Transformations as the
optimization method, which allows us to derive the final expressions for
estimating the parameters of the PCFGs. Second, we generalize the H-criterion
to take into account the set of reference interpretations and the set of
competing interpretations, and we propose a new family of objective functions
from which we derive the estimation transformations for PCFGs.
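For orientation only, the following is a sketch of standard forms from the prior literature on the H-criterion and growth transformations, not the paper's own generalized expressions; the symbols a, b, c, Q and C are assumptions of this sketch.

```latex
% One common presentation of the H-criterion for an observation x with
% reference interpretation y: (a,b,c) = (1,0,0) recovers maximum likelihood,
% and (a,b,c) = (1,-1,0) recovers conditional maximum likelihood (MMI).
\[
  H_{a,b,c}(\theta) \;=\; -a \log P_\theta(x, y) \;-\; b \log P_\theta(x) \;-\; c \log P_\theta(y)
\]
% A growth transformation re-estimates each PCFG rule probability
% p(A -> alpha) while preserving \sum_\alpha p(A -> \alpha) = 1 for every
% nonterminal A:
\[
  \bar{p}(A \to \alpha) \;=\;
  \frac{p(A \to \alpha)\,\bigl(\partial Q / \partial p(A \to \alpha) + C\bigr)}
       {\sum_{\beta} p(A \to \beta)\,\bigl(\partial Q / \partial p(A \to \beta) + C\bigr)}
\]
% where Q is a polynomial form of the objective and C is a constant large
% enough to guarantee that the criterion does not decrease after the update.
```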
Related papers
- Function-Space Regularization in Neural Networks: A Probabilistic Perspective [51.133793272222874]
We show that we can derive a well-motivated regularization technique that allows explicitly encoding information about desired predictive functions into neural network training.
We evaluate the utility of this regularization technique empirically and demonstrate that the proposed method leads to near-perfect semantic shift detection and highly-calibrated predictive uncertainty estimates.
arXiv Detail & Related papers (2023-12-28T17:50:56Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
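As a rough, generic illustration of the likelihood-ratio confidence-sequence idea (a predictable plug-in numerator combined with Ville's inequality), and not the authors' estimator-selection procedure, a Bernoulli-parameter sketch might look like this; the grid, smoothing, and threshold are assumptions of the sketch:

```python
import numpy as np

def bernoulli_lr_confidence_sequence(xs, alpha=0.05, grid=None):
    """Anytime-valid confidence sets for a Bernoulli mean via likelihood ratios.

    A candidate p stays in the set at time t as long as the running likelihood
    ratio against p (with a predictable plug-in numerator) has never exceeded
    1/alpha; validity follows from Ville's inequality for the resulting
    nonnegative martingale.
    """
    if grid is None:
        grid = np.linspace(0.01, 0.99, 99)      # candidate parameter values
    log_lr = np.zeros_like(grid)                # running log likelihood ratio per candidate
    alive = np.ones_like(grid, dtype=bool)      # candidates still inside the sequence
    sets, n_heads = [], 0
    for t, x in enumerate(xs, start=1):
        # Plug-in estimate computed from past observations only (smoothed).
        q = (n_heads + 0.5) / (t - 1 + 1.0)
        if x == 1:
            log_lr += np.log(q) - np.log(grid)
        else:
            log_lr += np.log(1.0 - q) - np.log(1.0 - grid)
        alive &= log_lr < np.log(1.0 / alpha)   # Ville's inequality threshold
        sets.append(grid[alive].copy())
        n_heads += x
    return sets

rng = np.random.default_rng(0)
cs = bernoulli_lr_confidence_sequence(rng.binomial(1, 0.3, size=500))
print("final confidence set:", cs[-1])
```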
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- PARL: A Unified Framework for Policy Alignment in Reinforcement Learning from Human Feedback [106.63518036538163]
We present a novel unified bilevel optimization-based framework, PARL, formulated to address the recently highlighted critical issue of policy alignment in reinforcement learning.
Our framework addresses these concerns by explicitly parameterizing the distribution of the upper alignment objective (reward design) by the lower optimal variable.
Our empirical results substantiate that the proposed PARL can address the alignment concerns in RL by showing significant improvements.
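Purely as a toy illustration of the bilevel structure (an upper-level reward-design objective evaluated at the lower-level optimal policy), and not PARL's actual algorithm or parameterization, a minimal sketch with a two-action bandit and an assumed closed-form entropy-regularized inner optimum might look like this:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def inner_policy(w, tau=0.5):
    # Lower level: for this toy two-action bandit, the entropy-regularised
    # optimal policy under reward parameters w is softmax(w / tau).
    return softmax(w / tau)

def upper_objective(w, p_human):
    # Upper level (reward design / alignment): evaluated at the lower-level
    # optimum, making the policy's dependence on the reward parameters explicit.
    pi_star = inner_policy(w)
    return float(np.sum((pi_star - p_human) ** 2))

p_human = np.array([0.8, 0.2])   # human feedback: prefer action 0 about 80% of the time
w, eps, lr = np.zeros(2), 1e-4, 0.5
for _ in range(500):
    # Finite-difference gradient of the upper objective through the inner optimum.
    grad = np.array([
        (upper_objective(w + eps * e, p_human) - upper_objective(w - eps * e, p_human)) / (2 * eps)
        for e in np.eye(2)
    ])
    w -= lr * grad

print("learned rewards:", np.round(w, 3), "induced policy:", np.round(inner_policy(w), 3))
```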
arXiv Detail & Related papers (2023-08-03T18:03:44Z)
- GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP, and Beyond [101.5329678997916]
We study sample efficient reinforcement learning (RL) under the general framework of interactive decision making.
We propose a novel complexity measure, generalized eluder coefficient (GEC), which characterizes the fundamental tradeoff between exploration and exploitation.
We show that RL problems with low GEC form a remarkably rich class, which subsumes low Bellman eluder dimension problems, bilinear class, low witness rank problems, PO-bilinear class, and generalized regular PSR.
arXiv Detail & Related papers (2022-11-03T16:42:40Z)
- Revisiting GANs by Best-Response Constraint: Perspective, Methodology, and Application [49.66088514485446]
Best-Response Constraint (BRC) is a general learning framework to explicitly formulate the potential dependency of the generator on the discriminator.
We show that, even with different motivations and formulations, a variety of existing GANs can all be uniformly improved by our flexible BRC methodology.
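One generic way to read the best-response idea (not the paper's exact methodology) is to take each generator step against a discriminator that has been pushed towards its best response by a few inner steps; a toy PyTorch sketch under that assumption:

```python
import torch
import torch.nn as nn

# Toy 1-D GAN in which every generator step is taken against an approximate
# best-response discriminator obtained by a few inner maximisation steps.
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0        # target distribution N(2, 0.5)

for step in range(2000):
    # Inner loop: move the discriminator towards its best response to the
    # current (fixed) generator.
    for _ in range(5):
        x, z = real_batch(), torch.randn(64, 1)
        loss_d = bce(D(x), torch.ones(64, 1)) + bce(D(G(z).detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Outer step: the generator is optimised against that approximate best response.
    z = torch.randn(64, 1)
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("generated mean:", G(torch.randn(1000, 1)).mean().item(), "(target 2.0)")
```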
arXiv Detail & Related papers (2022-05-20T12:42:41Z)
- Initialisation and Grammar Design in Grammar-Guided Evolutionary Computation [0.0]
We show that context-free grammar genetic programming (CFG-GP) is less sensitive to initialisation and grammar design than random search and grammatical evolution (GE).
We also demonstrate that the observed cases of poor performance by CFG-GP can be managed through a simple adjustment of tuning parameters.
arXiv Detail & Related papers (2022-04-15T10:15:40Z)
- Cluster Regularization via a Hierarchical Feature Regression [0.0]
This paper proposes a novel cluster-based regularization: the hierarchical feature regression (HFR).
It mobilizes insights from the domains of machine learning and graph theory to estimate parameters along a supervised hierarchical representation of the predictor set.
An application to the prediction of economic growth is used to illustrate the HFR's effectiveness in an empirical setting.
arXiv Detail & Related papers (2021-07-10T13:03:01Z)
- Counterfactual Explanations for Arbitrary Regression Models [8.633492031855655]
We present a new method for counterfactual explanations (CFEs) based on Bayesian optimisation.
Our method is a globally convergent search algorithm with support for arbitrary regression models and constraints like feature sparsity and actionable recourse.
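As a generic sketch of finding a counterfactual for a black-box regressor with an off-the-shelf Bayesian optimiser (here scikit-optimize's gp_minimize as a stand-in; the objective weighting and constraint handling are assumptions, not the authors' method):

```python
import numpy as np
from skopt import gp_minimize                     # scikit-optimize
from sklearn.ensemble import RandomForestRegressor

# Train an arbitrary black-box regressor; only its predictions are needed.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 3))
y = 3 * X[:, 0] + np.sin(6 * X[:, 1]) + rng.normal(0, 0.1, size=500)
model = RandomForestRegressor(random_state=0).fit(X, y)

x0 = X[0]                                         # instance to explain
target = model.predict([x0])[0] + 1.0             # desired counterfactual prediction
lam = 0.5                                         # closeness/sparsity vs. hitting the target

def objective(x_prime):
    x_prime = np.asarray(x_prime)
    miss = abs(model.predict([x_prime])[0] - target)   # distance to the target output
    dist = np.abs(x_prime - x0).sum()                  # L1 distance encourages sparse changes
    return miss + lam * dist

res = gp_minimize(objective, dimensions=[(0.0, 1.0)] * 3, n_calls=60, random_state=0)
print("counterfactual:", np.round(res.x, 3), "prediction:", model.predict([res.x])[0])
```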
arXiv Detail & Related papers (2021-06-29T09:53:53Z)
- Learning Proposals for Probabilistic Programs with Inference Combinators [9.227032708135617]
We develop operators for construction of proposals in probabilistic programs.
Proposals in inference samplers can be parameterized using neural networks.
We demonstrate the flexibility of this framework by implementing advanced variational methods.
arXiv Detail & Related papers (2021-03-01T00:17:53Z)
- Target-Embedding Autoencoders for Supervised Representation Learning [111.07204912245841]
This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional.
We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features as well as predictive of targets.
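A minimal PyTorch sketch of the general target-embedding idea described here (the random data, layer sizes, and loss weighting are assumptions, not the paper's architecture): the latent code is trained jointly to reconstruct the high-dimensional target and to be predictable from the features, and inference runs features -> latent -> targets.

```python
import torch
import torch.nn as nn

# The latent z is trained jointly to reconstruct the high-dimensional target y
# (autoencoding branch) and to be predictable from the features x (prediction
# branch); inference runs features -> latent -> targets.
d_x, d_y, d_z = 20, 100, 8
enc = nn.Sequential(nn.Linear(d_y, 64), nn.ReLU(), nn.Linear(64, d_z))    # y -> z
dec = nn.Sequential(nn.Linear(d_z, 64), nn.ReLU(), nn.Linear(64, d_y))    # z -> y
pred = nn.Sequential(nn.Linear(d_x, 64), nn.ReLU(), nn.Linear(64, d_z))   # x -> z
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters(), *pred.parameters()], lr=1e-3)
mse = nn.MSELoss()
lam = 1.0                                          # weight of the feature-to-latent term

x = torch.randn(256, d_x)                          # stand-in features
y = torch.randn(256, d_y)                          # stand-in high-dimensional targets
for _ in range(200):
    z = enc(y)
    loss = mse(dec(z), y) + lam * mse(pred(x), z)  # reconstruct y and predict z from x
    opt.zero_grad(); loss.backward(); opt.step()

y_hat = dec(pred(x))                               # test-time path: x -> z -> y
print("final training loss:", loss.item())
```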
arXiv Detail & Related papers (2020-01-23T02:37:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.