Towards Explaining Subjective Ground of Individuals on Social Media
- URL: http://arxiv.org/abs/2211.09953v1
- Date: Fri, 18 Nov 2022 00:29:05 GMT
- Authors: Younghun Lee and Dan Goldwasser
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale language models have been reducing the gap between machines and
humans in understanding the real world, yet understanding an individual's
theory of mind and behavior from text is far from being resolved.
This research proposes a neural model -- Subjective Ground Attention -- that
learns subjective grounds of individuals and accounts for their judgments on
situations of others posted on social media. Using simple attention modules as
well as taking one's previous activities into consideration, we empirically
show that our model provides human-readable explanations of an individual's
subjective preference in judging social situations. We further qualitatively
evaluate the explanations generated by the model and claim that our model
learns an individual's subjective orientation towards abstract moral concepts.
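The abstract describes the core mechanism only at a high level: simple attention modules that weigh a user's previous activities when judging a new situation, with the attention weights doubling as a human-readable explanation. The sketch below is a minimal, hypothetical illustration of that idea using plain scaled dot-product attention over embeddings of past posts; it is not the authors' actual Subjective Ground Attention architecture, and all function and variable names are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def subjective_ground_attention(situation_vec, history_vecs):
    """Attend over embeddings of a user's previous activities (hypothetical sketch).

    situation_vec: (d,) embedding of the situation being judged
    history_vecs:  (n, d) embeddings of the user's previous posts

    Returns an attended context vector and the attention weights. The
    weights can be read off as an explanation: they indicate which
    previous activities most influenced the judgment of the new situation.
    """
    d = situation_vec.shape[0]
    scores = history_vecs @ situation_vec / np.sqrt(d)  # similarity scores, shape (n,)
    weights = softmax(scores)                           # explanation weights, sum to 1
    context = weights @ history_vecs                    # attended summary of past activity
    return context, weights

# Toy example: the first past post aligns with the situation,
# so it should receive the larger attention weight.
situation = np.array([1.0, 0.0])
history = np.array([[0.9, 0.1],
                    [0.0, 1.0]])
context, weights = subjective_ground_attention(situation, history)
```

Because the weights form a distribution over the user's own past posts, inspecting the top-weighted posts gives the kind of human-readable explanation the paper aims for, without any post-hoc interpretation step.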
Related papers
- Human-like conceptual representations emerge from language prediction (2025-01-21)
  We investigated the emergence of human-like conceptual representations within large language models (LLMs). We found that LLMs were able to infer concepts from definitional descriptions and construct representation spaces that converge towards a shared, context-independent structure. Our work supports the view that LLMs serve as valuable tools for understanding complex human cognition and paves the way for better alignment between artificial and human intelligence.
- Graphical Perception of Saliency-based Model Explanations (2024-06-11)
  We study the perception of model explanations, specifically saliency-based explanations for visual recognition models. Our findings show that visualization design decisions, the type of alignment, and qualities of the saliency map all play important roles in how humans perceive saliency-based visual explanations.
- Interpreting Pretrained Language Models via Concept Bottlenecks (2023-11-08)
  Pretrained language models (PLMs) have made significant strides in various natural language processing tasks, but their lack of interpretability due to their "black-box" nature poses challenges for responsible implementation. We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans.
- Foundational Models Defining a New Era in Vision: A Survey and Outlook (2023-07-25)
  Vision systems that see and reason about the compositional nature of visual scenes are fundamental to understanding our world. Models learned to bridge the gap between such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompt capabilities at test time. The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene, or manipulating a robot's behavior through language instructions.
- The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling Probabilistic Social Inferences from Linguistic Inputs (2023-06-25)
  We study how language drives and influences social reasoning in a probabilistic goal inference domain. We propose a neuro-symbolic model that carries out goal inference from linguistic inputs of agent scenarios. Our model closely matches human response patterns and better predicts human judgements than using an LLM alone.
- From Outcome-Based to Language-Based Preferences (2022-06-15)
  We review the literature on models that try to explain human behavior in social interactions described by normal-form games with monetary payoffs. We focus on the growing body of research showing that people react to the language in which actions are described, especially when it activates moral concerns.
- Modeling Human Behavior Part I -- Learning and Belief Approaches (2022-05-13)
  Next-generation autonomous and adaptive systems will largely include AI agents and humans working together as teams. We focus on techniques which learn a model or policy of behavior through exploration and feedback.
- Machine Explanations and Human Understanding (2022-02-08)
  Explanations are hypothesized to improve human understanding of machine learning models, yet empirical studies have found mixed and even negative results. We show how human intuitions play a central role in enabling human understanding.
- Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning (2020-10-02)
  Bongard problems (BPs) were introduced as an inspirational challenge for visual cognition in intelligent systems. We propose Bongard-LOGO, a new benchmark for human-level concept learning and reasoning.
- Machine Common Sense (2020-06-15)
  Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI). This article deals with modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.