Lensing Machines: Representing Perspective in Latent Variable Models
- URL: http://arxiv.org/abs/2201.08848v1
- Date: Thu, 20 Jan 2022 21:42:20 GMT
- Title: Lensing Machines: Representing Perspective in Latent Variable Models
- Authors: Karthik Dinakar and Henry Lieberman
- Abstract summary: We introduce lensing, a mixed initiative technique to extract lenses or mappings between machine learned representations and perspectives of human experts.
We apply lensing for two classes of latent variable models: a mixed membership model and a matrix factorization model in the context of two mental health applications.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many datasets represent a combination of different ways of looking at the
same data that lead to different generalizations. For example, a corpus with
examples generated by different people may be mixtures of many perspectives and
can be viewed with different perspectives by others. It isn't always possible to
cleanly separate, in advance, the examples representing each viewpoint and to
train a separate model for each. We
introduce lensing, a mixed initiative technique to extract lenses or mappings
between machine learned representations and perspectives of human experts, and
to generate lensed models that afford multiple perspectives of the same
dataset. We apply lensing to two classes of latent variable models, a mixed
membership model and a matrix factorization model, in the context of two mental
health applications, and we capture and imbue the perspectives of clinical
psychologists into these models. Our work shows the benefit of a machine
learning practitioner formally incorporating the perspective of a knowledgeable
domain expert into their models, rather than estimating unlensed models in
isolation.
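As a rough illustration of the idea, consider the matrix factorization case: a model learns latent factors from data, and a "lens" maps an expert's groupings of examples onto those learned factors. The sketch below is a minimal, hypothetical rendering of that mapping, not the paper's actual method; the NMF fit uses basic multiplicative updates, and the names `build_lens` and `expert_groups` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(X, k, iters=200, eps=1e-9):
    """Factor X ~ W @ H with non-negative W (n x k) and H (k x m),
    via standard multiplicative updates."""
    n, m = X.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

def build_lens(W, expert_groups):
    """Hypothetical 'lens': map each expert-defined group of examples
    to the latent factor that dominates the group's mean loadings."""
    lens = {}
    for name, idx in expert_groups.items():
        centroid = W[list(idx)].mean(axis=0)  # average factor loadings
        lens[name] = int(np.argmax(centroid))  # dominant latent factor
    return lens

# Toy corpus: two blocks of examples emphasizing disjoint term sets,
# standing in for two distinct expert perspectives on the data.
X = np.zeros((10, 10))
X[:5, :5] = rng.random((5, 5)) + 1.0
X[5:, 5:] = rng.random((5, 5)) + 1.0

W, H = nmf(X, k=2)
lens = build_lens(W, {"perspective_A": [0, 1, 2, 3, 4],
                      "perspective_B": [5, 6, 7, 8, 9]})
```

Here each expert grouping resolves to a distinct latent factor, so the lens provides a dictionary from perspective names to factor indices that can then be used to read the factorization through the expert's eyes.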
Related papers
- When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z) - Evaluating Multiview Object Consistency in Humans and Image Models [68.36073530804296]
We leverage an experimental design from the cognitive sciences which requires zero-shot visual inferences about object shape.
We collect 35K trials of behavioral data from over 500 participants.
We then evaluate the performance of common vision models.
arXiv Detail & Related papers (2024-09-09T17:59:13Z) - Corpus Considerations for Annotator Modeling and Scaling [9.263562546969695]
We show that the commonly used user token model consistently outperforms more complex models.
Our findings shed light on the relationship between corpus statistics and annotator modeling performance.
arXiv Detail & Related papers (2024-04-02T22:27:24Z) - Sequential Modeling Enables Scalable Learning for Large Vision Models [120.91839619284431]
We introduce a novel sequential modeling approach which enables learning a Large Vision Model (LVM) without making use of any linguistic data.
We define a common format, "visual sentences", in which we can represent raw images and videos as well as annotated data sources.
arXiv Detail & Related papers (2023-12-01T18:59:57Z) - Compositional diversity in visual concept learning [18.907108368038216]
Humans leverage compositionality to efficiently learn new concepts, understanding how familiar parts can combine together to form novel objects.
Here, we study how people classify and generate "alien figures" with rich relational structure.
We develop a Bayesian program induction model which searches for the best programs for generating the candidate visual figures.
arXiv Detail & Related papers (2023-05-30T19:30:50Z) - Matching Multiple Perspectives for Efficient Representation Learning [0.0]
We present an approach that combines self-supervised learning with a multi-perspective matching technique.
We show that the availability of multiple views of the same object combined with a variety of self-supervised pretraining algorithms can lead to improved object classification performance.
arXiv Detail & Related papers (2022-08-16T10:33:13Z) - Inter-model Interpretability: Self-supervised Models as a Case Study [0.2578242050187029]
We build on a recent interpretability technique called Dissect to introduce inter-model interpretability.
We project 13 top-performing self-supervised models into a Learned Concepts Embedding space that reveals proximities among models from the perspective of learned concepts.
The experiment allowed us to categorize the models into three categories and revealed for the first time the types of visual concepts that different tasks require.
arXiv Detail & Related papers (2022-07-24T22:50:18Z) - MultiViz: An Analysis Benchmark for Visualizing and Understanding Multimodal Models [103.9987158554515]
MultiViz is a method for analyzing the behavior of multimodal models by scaffolding the problem of interpretability into 4 stages.
We show that the complementary stages in MultiViz together enable users to simulate model predictions, assign interpretable concepts to features, perform error analysis on model misclassifications, and use insights from error analysis to debug models.
arXiv Detail & Related papers (2022-06-30T18:42:06Z) - Benchmarking human visual search computational models in natural scenes: models comparison and reference datasets [0.0]
We select publicly available state-of-the-art visual search models in natural scenes and evaluate them on different datasets.
We propose an improvement to the Ideal Bayesian Searcher through a combination with a neural network-based visual search model.
arXiv Detail & Related papers (2021-12-10T19:56:45Z) - Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more valuable to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z) - Bayesian Sparse Factor Analysis with Kernelized Observations [67.60224656603823]
Multi-view problems can be addressed with latent variable models.
High-dimensionality and non-linearity are traditionally handled by kernel methods.
We propose merging both approaches into a single model.
arXiv Detail & Related papers (2020-06-01T14:25:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.