A Picture's Worth a Thousand Words: Visualizing n-dimensional Overlap in Logistic Regression Models with Empirical Likelihood
- URL: http://arxiv.org/abs/2011.07614v1
- Date: Sun, 15 Nov 2020 19:39:56 GMT
- Title: A Picture's Worth a Thousand Words: Visualizing n-dimensional Overlap in Logistic Regression Models with Empirical Likelihood
- Authors: Paul A. Roediger
- Abstract summary: We introduce a sensitivity testing point of view on the maximum likelihood estimate for multidimensional predictor, binary response models.
The well-known condition of Silvapulle is translated into an empirical likelihood maximization which, with existing R code, mechanizes the process of assessing overlap status.
The code is applied to reveal the character of overlap by examining minimal overlapping structures and cataloging them in dimensions fewer than four.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this note, conditions for the existence and uniqueness of the maximum
likelihood estimate for multidimensional predictor, binary response models are
introduced from a sensitivity testing point of view. The well-known condition
of Silvapulle is translated into an empirical likelihood maximization which,
with existing R code, mechanizes the process of assessing overlap status. The
translation shifts the meaning of overlap, defined by geometric properties of
the two predictor groups, from the requirement that the intersection of their
convex cones be non-empty to the more understandable requirement that the
convex hull of their differences contain zero. The code is applied to reveal the character of
overlap by examining minimal overlapping structures and cataloging them in
dimensions fewer than four. Rules to generate minimal higher dimensional
structures which account for overlap are provided. Supplementary materials are
available online.
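As a rough illustration of the translated condition, and not the paper's R implementation, overlap can be checked by testing whether zero lies in the convex hull of the pairwise differences between the two predictor groups. The sketch below does this with a linear-programming feasibility check; the function name has_overlap and the data layout are illustrative assumptions, and boundary cases (zero exactly on the hull's surface) are not distinguished.

```python
# A minimal sketch of the overlap condition stated in the abstract:
# zero in the convex hull of the between-group predictor differences.
# NOT the paper's R code; has_overlap and the layout are illustrative.
import numpy as np
from scipy.optimize import linprog

def has_overlap(X1, X0):
    """X1: (n1, d) predictors with response 1; X0: (n0, d) with response 0."""
    # All pairwise differences x_i - x_j between the two groups.
    D = (X1[:, None, :] - X0[None, :, :]).reshape(-1, X1.shape[1])
    m, d = D.shape
    # LP feasibility: find weights w >= 0 with sum(w) = 1 and D'w = 0,
    # i.e. express zero as a convex combination of the differences.
    A_eq = np.vstack([D.T, np.ones((1, m))])
    b_eq = np.append(np.zeros(d), 1.0)
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    return res.status == 0  # feasible => hull contains zero => overlap
```

On a toy separated configuration such as X1 = [[1, 0], [0, 1]] and X0 = [[2, 2]], every difference has strictly negative coordinates, the LP is infeasible, and the check reports no overlap, matching the failure of the logistic MLE to exist under complete separation.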
Related papers
- Beyond Coarse-Grained Matching in Video-Text Retrieval [50.799697216533914]
We introduce a new approach for fine-grained evaluation.
Our approach can be applied to existing datasets by automatically generating hard negative test captions.
Experiments on our fine-grained evaluations demonstrate that this approach enhances a model's ability to understand fine-grained differences.
arXiv Detail & Related papers (2024-10-16T09:42:29Z)
- Quantization of Large Language Models with an Overdetermined Basis [73.79368761182998]
We introduce an algorithm for data quantization based on the principles of Kashin representation.
Our findings demonstrate that Kashin Quantization achieves competitive or superior quality in model performance.
arXiv Detail & Related papers (2024-04-15T12:38:46Z)
- Entry-Specific Bounds for Low-Rank Matrix Completion under Highly Non-Uniform Sampling [10.824999179337558]
We show that it is often better, and sometimes optimal, to run estimation algorithms on a smaller submatrix rather than the entire matrix.
Our bounds characterize the hardness of estimating each entry as a function of the localized sampling probabilities.
arXiv Detail & Related papers (2024-02-29T23:24:43Z)
- Learning Sparsity of Representations with Discrete Latent Variables [15.05207849434673]
We propose a sparse deep latent generative model, SDLGM, to explicitly model the degree of sparsity.
The resulting sparsity of a representation is not fixed, but fits the observation itself under the pre-defined restriction.
For inference and learning, we develop an amortized variational method based on a Monte Carlo gradient estimator.
arXiv Detail & Related papers (2023-04-03T12:47:18Z)
- Semi-supervised Dense Keypoints Using Unlabeled Multiview Images [22.449168666514677]
This paper presents a new end-to-end semi-supervised framework to learn a dense keypoint detector using unlabeled multiview images.
A key challenge lies in finding the exact correspondences between the dense keypoints in multiple views.
We derive a new probabilistic epipolar constraint that encodes the two desired properties.
arXiv Detail & Related papers (2021-09-20T04:57:57Z)
- A Local Similarity-Preserving Framework for Nonlinear Dimensionality Reduction with Neural Networks [56.068488417457935]
We propose a novel local nonlinear approach named Vec2vec for general-purpose dimensionality reduction.
To train the neural network, we build the neighborhood similarity graph of a matrix and define the context of data points.
Experiments on data classification and clustering over eight real datasets show that Vec2vec outperforms several classical dimensionality reduction methods under statistical hypothesis tests.
arXiv Detail & Related papers (2021-03-10T23:10:47Z)
- Posterior-Aided Regularization for Likelihood-Free Inference [23.708122045184698]
Posterior-Aided Regularization (PAR) is applicable to learning the density estimator, regardless of the model structure.
We provide a unified estimation method of PAR to estimate both the reverse KL term and the mutual information term with a single neural network.
arXiv Detail & Related papers (2021-02-15T16:59:30Z)
- RatE: Relation-Adaptive Translating Embedding for Knowledge Graph Completion [51.64061146389754]
We propose a relation-adaptive translation function built upon a novel weighted product in complex space.
We then present our Relation-adaptive translating Embedding (RatE) approach to score each graph triple.
arXiv Detail & Related papers (2020-10-10T01:30:30Z)
- Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched-prior-based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure- and geometrical-structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
- Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite-width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite-dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.