Ontologies in Design: How Imagining a Tree Reveals Possibilities and Assumptions in Large Language Models
- URL: http://arxiv.org/abs/2504.03029v1
- Date: Thu, 03 Apr 2025 21:04:36 GMT
- Title: Ontologies in Design: How Imagining a Tree Reveals Possibilities and Assumptions in Large Language Models
- Authors: Nava Haghighi, Sunny Yu, James Landay, Daniela Rosner
- Abstract summary: We argue that while value-based analyses are crucial, ontologies are a vital but under-recognized dimension in analyzing these systems. Proposing a need for a practice-based engagement with ontologies, we offer four orientations for considering ontologies in design: pluralism, groundedness, liveliness, and enactment.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Amid the recent uptake of Generative AI, sociotechnical scholars and critics have traced a multitude of resulting harms, with analyses largely focused on values and axiology (e.g., bias). While value-based analyses are crucial, we argue that ontologies -- concerning what we allow ourselves to think or talk about -- are a vital but under-recognized dimension in analyzing these systems. Proposing a need for a practice-based engagement with ontologies, we offer four orientations for considering ontologies in design: pluralism, groundedness, liveliness, and enactment. We share examples of potentialities that are opened up through these orientations across the entire LLM development pipeline by conducting two ontological analyses: examining the responses of four LLM-based chatbots in a prompting exercise, and analyzing the architecture of an LLM-based agent simulation. We conclude by sharing opportunities and limitations of working with ontologies in the design and development of sociotechnical systems.
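To make the first analysis concrete, here is a minimal sketch of such a prompting exercise, assuming a generic chat interface: the same open-ended prompt is posed to several chatbots, and the raw responses are kept for side-by-side qualitative reading. The prompt wording, model names, and the query_chatbot helper are illustrative assumptions, not the authors' protocol.

```python
# Minimal sketch of an ontological prompting exercise: pose one open-ended
# prompt to several chatbots and keep raw responses for qualitative comparison.
# Prompt wording, model names, and query_chatbot are illustrative assumptions.

PROMPT = "Imagine a tree. Describe what you imagined."
MODELS = ["chatbot-a", "chatbot-b", "chatbot-c", "chatbot-d"]  # four chatbots

def query_chatbot(model: str, prompt: str) -> str:
    # Stand-in for a provider-specific chat API call.
    return f"[{model}'s response to: {prompt!r}]"

def run_exercise() -> dict[str, str]:
    # One response per chatbot, keyed by model, for later ontological analysis.
    return {model: query_chatbot(model, PROMPT) for model in MODELS}

if __name__ == "__main__":
    for model, response in run_exercise().items():
        print(model, "->", response)
```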
Related papers
- Assessing the Capability of Large Language Models for Domain-Specific Ontology Generation
Large Language Models (LLMs) have shown significant potential for ontology engineering.
We investigate the generalizability of two state-of-the-art LLMs, DeepSeek and o1-preview, by generating ontologies from a set of competency questions.
Our findings show that generation performance is remarkably consistent across all domains.
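As an illustration of what competency-question-driven generation can look like, here is a hedged sketch: an LLM is asked to draft OWL axioms, serialized in Turtle, sufficient to answer one competency question. The prompt template, domain, and helper name are assumptions for illustration; the paper's actual pipeline may differ.

```python
# Hedged sketch of competency-question-driven ontology generation. The prompt
# template and example domain are illustrative assumptions, not the paper's.

def build_prompt(domain: str, competency_question: str) -> str:
    return (
        f"You are an ontology engineer for the {domain} domain.\n"
        "Draft OWL classes and properties, serialized in Turtle, sufficient\n"
        "to answer this competency question:\n"
        f"{competency_question}\n"
        "Return only valid Turtle."
    )

prompt = build_prompt(
    domain="wine",
    competency_question="Which grape varieties are grown in which regions?",
)
print(prompt)
# In practice: send `prompt` to the LLM, then validate the returned Turtle
# with a parser (e.g., rdflib) before accepting the generated ontology.
```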
arXiv Detail & Related papers (2025-04-24T09:47:14Z)
- Embodied-R: Collaborative Framework for Activating Embodied Spatial Reasoning in Foundation Models via Reinforcement Learning
Embodied-R is a framework combining large-scale Vision-Language Models for perception and small-scale Language Models for reasoning.
After training on only 5k embodied video samples, Embodied-R with a 3B LM matches state-of-the-art multimodal reasoning models.
Embodied-R also exhibits emergent thinking patterns such as systematic analysis and contextual integration.
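A minimal sketch of the collaboration pattern this summary describes, under stated assumptions: a large vision-language model handles perception and hands a textual scene description to a small language model for spatial reasoning. The function names, canned outputs, and text-based handoff are illustrative, not Embodied-R's actual interface.

```python
# Sketch of the perception/reasoning split: a large VLM describes the scene,
# a small LM reasons over the description. Names and handoff are assumptions.

def perceive_with_large_vlm(video_frames: list) -> str:
    # Stand-in for a large VLM that turns embodied video into a description.
    return "The agent faces a doorway; a chair stands to its left."

def reason_with_small_lm(scene: str, question: str) -> str:
    # Stand-in for a small (~3B) LM trained with RL for spatial reasoning.
    return f"Scene: {scene} Answer to {question!r}: to the agent's left."

frames: list = []  # embodied video frames would go here
print(reason_with_small_lm(perceive_with_large_vlm(frames), "Where is the chair?"))
```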
arXiv Detail & Related papers (2025-04-17T06:16:11Z)
- A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems
Reasoning is a fundamental cognitive process that enables logical inference, problem-solving, and decision-making.
With the rapid advancement of large language models (LLMs), reasoning has emerged as a key capability that distinguishes advanced AI systems.
We categorize existing methods along two dimensions: (1) Regimes, which define the stage at which reasoning is achieved; and (2) Architectures, which determine the components involved in the reasoning process.
arXiv Detail & Related papers (2025-04-12T01:27:49Z)
- LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning
This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning. We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines. We investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference.
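For readers less familiar with the three pipelines, one textbook instance of each follows; these are generic illustrations, not items from the paper's evaluation environment.

```python
# One standard example per inference type compared in the paper above.
inference_examples = {
    "deductive": "All birds have wings; a sparrow is a bird => a sparrow has wings.",
    "inductive": "Every sparrow observed so far has wings => likely all sparrows do.",
    "abductive": "The grass is wet; rain would best explain it => it probably rained.",
}
for kind, example in inference_examples.items():
    print(f"{kind}: {example}")
```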
arXiv Detail & Related papers (2025-02-16T15:54:53Z)
- Data Analysis in the Era of Generative AI
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- Coding for Intelligence from the Perspective of Category
Coding targets compressing and reconstructing data, while intelligence is often treated as a separate pursuit.
Recent trends demonstrate the potential homogeneity of these two fields.
We propose a novel problem of Coding for Intelligence from the perspective of category theory.
arXiv Detail & Related papers (2024-07-01T07:05:44Z)
- Categorical Syllogisms Revisited: A Review of the Logical Reasoning Abilities of LLMs for Analyzing Categorical Syllogism
This paper provides a systematic overview of prior works on the logical reasoning ability of large language models for analyzing categorical syllogisms. We first investigate all the possible variations for the categorical syllogisms from a purely logical perspective. We then examine the underlying configurations (i.e., mood and figure) tested by the existing datasets.
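The terms mood and figure are standard: a syllogism's mood is the ordered triple of proposition types (A, E, I, O) across its two premises and conclusion, and its figure (1-4) fixes the position of the middle term. The short sketch below enumerates both dimensions, recovering the classic 256 syllogistic forms.

```python
# Enumerate all categorical syllogism forms: 4^3 moods x 4 figures = 256.
from itertools import product

PROPOSITION_TYPES = "AEIO"   # A: all, E: no, I: some, O: some...not
FIGURES = (1, 2, 3, 4)       # placement of the middle term in the premises

forms = [
    ("".join(mood), figure)
    for mood in product(PROPOSITION_TYPES, repeat=3)
    for figure in FIGURES
]
assert len(forms) == 256
print(forms[0])  # ('AAA', 1) -- e.g., the valid form "Barbara"
```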
arXiv Detail & Related papers (2024-06-26T21:17:20Z)
- A Philosophical Introduction to Language Models - Part II: The Way Forward
We explore novel philosophical questions raised by recent progress in large language models (LLMs).
We focus particularly on issues related to interpretability, examining evidence from causal intervention methods about the nature of LLMs' internal representations and computations.
We discuss whether LLM-like systems may be relevant to modeling aspects of human cognition, if their architectural characteristics and learning scenario are adequately constrained.
arXiv Detail & Related papers (2024-05-06T07:12:45Z)
- Demystifying Chains, Trees, and Graphs of Thoughts
We focus on identifying fundamental classes of harnessed structures, and we analyze the representations of these structures. Our study compares existing prompting schemes using the proposed taxonomy, discussing how certain design choices lead to different patterns in performance and cost.
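One way to see the structural classes the survey compares is that chains, trees, and graphs of thoughts are all graphs over intermediate "thoughts" with progressively fewer constraints on the edges. The representation below is an illustrative sketch, not the paper's formalism.

```python
# Chains, trees, and graphs of thoughts as one data structure with
# progressively less-constrained edges. Illustrative, not the paper's notation.
from dataclasses import dataclass, field

@dataclass
class ThoughtGraph:
    thoughts: dict[int, str] = field(default_factory=dict)
    edges: set[tuple[int, int]] = field(default_factory=set)  # (from, to)

# Chain-of-thought: each thought has exactly one successor (a path).
chain = ThoughtGraph({0: "premise", 1: "step", 2: "answer"}, {(0, 1), (1, 2)})
# Tree-of-thought: a thought may branch into alternatives (one parent each).
tree = ThoughtGraph({0: "premise", 1: "option A", 2: "option B"}, {(0, 1), (0, 2)})
# Graph-of-thought: thoughts may also merge, so a node can have many parents.
graph = ThoughtGraph(
    {0: "premise", 1: "option A", 2: "option B", 3: "synthesis"},
    {(0, 1), (0, 2), (1, 3), (2, 3)},
)
```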
arXiv Detail & Related papers (2024-01-25T16:34:00Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and a "chain-of-thought" knowledge distillation fine-tuning technique to assess model performance.
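As a hedged illustration of the chain-of-thought distillation setup, a training example can pair an input with the teacher's step-by-step rationale as the fine-tuning target, so the student learns to produce the reasoning rather than just the label. The field names and content below are assumptions, not LogiGLUE's schema.

```python
# Sketch of a chain-of-thought distillation training example: the rationale is
# kept as the target. Field names are illustrative, not LogiGLUE's schema.
distillation_example = {
    "input": "Premise: All squares are rectangles. Question: Is every square a rectangle?",
    "target": "Step 1: The premise states all squares are rectangles. "
              "Step 2: 'Every square' falls under 'all squares'. Therefore: yes.",
}
print(distillation_example["target"])
```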
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- Machine-assisted quantitizing designs: augmenting humanities and social sciences with artificial intelligence
Large language models (LLMs) have been shown to present an unprecedented opportunity to scale up data analytics in the humanities and social sciences.
We build on quantitizing and converting design principles from mixed methods research, and on feature analysis from linguistics, to transparently integrate human expertise and machine scalability.
The approach is discussed and demonstrated in over a dozen LLM-assisted case studies, covering 9 diverse languages, multiple disciplines and tasks.
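A hedged sketch of the human-in-the-loop pattern such quantitizing designs rely on: an LLM proposes one code per passage from a fixed codebook, and a human audits the proposals before any counting. The codebook, passage, and helper name are illustrative assumptions, not the paper's design.

```python
# Sketch of LLM-assisted quantitizing: qualitative passages become codebook
# labels, audited by a human. Codebook and helper are illustrative assumptions.

CODEBOOK = ("praise", "complaint", "question", "other")

def llm_assign_code(passage: str) -> str:
    # Stand-in for an LLM call constrained to return one CODEBOOK label.
    return "complaint"

passages = ["The checkout flow kept failing on my phone."]
coded = [(p, llm_assign_code(p)) for p in passages]
print(coded)  # a human reviewer spot-checks this before quantitative analysis
```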
arXiv Detail & Related papers (2023-09-24T14:21:50Z)
- Language Model Analysis for Ontology Subsumption Inference
We investigate whether pre-trained language models (LMs) can function as knowledge bases (KBs).
We propose OntoLAMA, a set of inference-based probing tasks and datasets from subsumption axioms involving both atomic and complex concepts.
We conduct extensive experiments on different domains and scales, and our results demonstrate that LMs encode relatively less background knowledge for Subsumption Inference (SI) than for traditional Natural Language Inference (NLI).
We will open-source our code and datasets.
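A hedged sketch of how a subsumption axiom can be probed in an NLI format: the candidate axiom is verbalized as a premise/hypothesis pair whose gold label is entailment exactly when the subsumption holds. The verbalization template is an assumption for illustration, not OntoLAMA's exact wording.

```python
# Sketch of subsumption-as-NLI probing: verbalize "sub is subsumed by sup" as a
# premise/hypothesis pair. The template is illustrative, not OntoLAMA's wording.

def subsumption_to_nli(sub: str, sup: str, entity: str = "X") -> tuple[str, str]:
    premise = f"{entity} is a {sub}."
    hypothesis = f"{entity} is a {sup}."
    return premise, hypothesis

print(subsumption_to_nli("golden retriever", "dog"))
# ('X is a golden retriever.', 'X is a dog.')  -> gold label: entailment
```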
arXiv Detail & Related papers (2023-02-14T00:21:56Z)
- Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
We conduct interviews with industry practitioners to understand how they conceive of and design for interpretability while they plan, build, and use their models.
Based on our results, we differentiate interpretability roles, processes, goals and strategies as they exist within organizations making heavy use of ML models.
The characterization of interpretability work that emerges from our analysis suggests that model interpretability frequently involves cooperation and mental model comparison between people in different roles.
arXiv Detail & Related papers (2020-04-23T19:54:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.