Shared Nature, Unique Nurture: PRISM for Pluralistic Reasoning via In-context Structure Modeling
- URL: http://arxiv.org/abs/2602.21317v1
- Date: Tue, 24 Feb 2026 19:38:31 GMT
- Title: Shared Nature, Unique Nurture: PRISM for Pluralistic Reasoning via In-context Structure Modeling
- Authors: Guancheng Tu, Shiyang Zhang, Tianyu Zhang, Yi Zhang, Diji Yang,
- Abstract summary: PRISM is a model-agnostic system that augments Large Language Models with dynamic On-the-fly Epistemic Graphs. On three creativity benchmarks, PRISM achieves state-of-the-art novelty and significantly expands distributional diversity. Results demonstrate that PRISM successfully uncovers correct long-tail diagnoses that standard LLMs miss.
- Score: 11.987225062711692
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) are converging towards a singular Artificial Hivemind, where shared Nature (pre-training priors) results in a profound collapse of distributional diversity, limiting the distinct perspectives necessary for creative exploration and scientific discovery. To address this, we propose to equip models with inference-time Nurture (individualized epistemic trajectories) through an Epistemic Evolution paradigm that progresses through explore, internalize, and express. We instantiate this via PRISM (Pluralistic Reasoning via In-context Structure Modeling), a model-agnostic system that augments LLMs with dynamic On-the-fly Epistemic Graphs. On three creativity benchmarks, PRISM achieves state-of-the-art novelty and significantly expands distributional diversity. Moreover, we evaluate real-world utility on a challenging rare-disease diagnosis benchmark. Results demonstrate that PRISM successfully uncovers correct long-tail diagnoses that standard LLMs miss, confirming that its divergence stems from meaningful exploration rather than incoherent noise. Overall, this work establishes a new paradigm for Pluralistic AI, moving beyond monolithic consensus toward a diverse ecosystem of unique cognitive individuals capable of collective, multi-perspective discovery.
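No implementation details beyond the abstract are available here, so the following Python sketch is only an illustration of the explore, internalize, and express loop described above. Every name in it (EpistemicGraph, the `llm` callable, the prompt wording) is a hypothetical stand-in, not PRISM's actual API.

```python
# Illustrative sketch only: PRISM's actual architecture is not published with
# the abstract. All names here are hypothetical stand-ins for the paradigm
# described in the text; `llm` is any str -> str completion function.
from dataclasses import dataclass, field

@dataclass
class EpistemicGraph:
    """A lightweight on-the-fly graph of claims and their relations."""
    nodes: list = field(default_factory=list)   # claims gathered so far
    edges: list = field(default_factory=list)   # (claim_i, relation, claim_j)

    def serialize(self) -> str:
        lines = [f"CLAIM: {c}" for c in self.nodes]
        lines += [f"{a} --{r}--> {b}" for a, r, b in self.edges]
        return "\n".join(lines)

def explore(llm, question: str, graph: EpistemicGraph) -> str:
    # Ask for a perspective that diverges from what the graph already holds.
    prompt = (f"Question: {question}\nKnown claims:\n{graph.serialize()}\n"
              "Propose ONE new claim not implied by the known claims.")
    return llm(prompt)

def internalize(llm, claim: str, graph: EpistemicGraph) -> None:
    # Record the claim and (sketchily) link it to the most recent node.
    if graph.nodes:
        relation = llm(f"Relation of '{claim}' to '{graph.nodes[-1]}'? "
                       "Answer one word: supports/contradicts/refines.")
        graph.edges.append((claim, relation.strip(), graph.nodes[-1]))
    graph.nodes.append(claim)

def express(llm, question: str, graph: EpistemicGraph) -> str:
    # Answer conditioned on the individualized epistemic trajectory.
    return llm(f"Using only these claims:\n{graph.serialize()}\n"
               f"Answer: {question}")

def prism(llm, question: str, steps: int = 3) -> str:
    graph = EpistemicGraph()
    for _ in range(steps):                # explore -> internalize, repeated
        internalize(llm, explore(llm, question, graph), graph)
    return express(llm, question, graph)  # final, individually "nurtured" answer
```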
Related papers
- ACE-Brain-0: Spatial Intelligence as a Shared Scaffold for Universal Embodiments [134.95780765985515]
We introduce ACE-Brain-0, a generalist foundation brain that unifies spatial reasoning, autonomous driving, and embodied manipulation. Our key insight is that spatial intelligence serves as a universal scaffold across diverse physical embodiments. We propose the Scaffold-Specialize-Reconcile (SSR) paradigm, which first establishes a shared spatial foundation, then cultivates domain-specialized experts, and finally harmonizes them through data-free model merging.
arXiv Detail & Related papers (2026-03-03T17:53:45Z)
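The Reconcile step above is described only as "data-free model merging". Uniform parameter averaging is the simplest data-free merge, so the sketch below uses it purely for orientation; the paper's actual procedure may differ, and `merge_state_dicts` is a hypothetical helper, not ACE-Brain-0 code.

```python
# Hedged sketch: the abstract says experts are harmonized "through data-free
# model merging" but gives no algorithm. Weighted parameter averaging is shown
# here only as the simplest data-free baseline.
import numpy as np

def merge_state_dicts(expert_weights: list[dict[str, np.ndarray]],
                      coeffs: list[float] | None = None) -> dict[str, np.ndarray]:
    """Average same-shaped parameters across experts (requires one shared arch)."""
    if coeffs is None:
        coeffs = [1.0 / len(expert_weights)] * len(expert_weights)
    merged = {}
    for name in expert_weights[0]:
        merged[name] = sum(c * w[name] for c, w in zip(coeffs, expert_weights))
    return merged

# Usage: three domain experts fine-tuned from one shared spatial foundation.
experts = [{"layer.w": np.random.randn(4, 4)} for _ in range(3)]
generalist = merge_state_dicts(experts)
```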
- Monotropic Artificial Intelligence: Toward a Cognitive Taxonomy of Domain-Specialized Language Models [0.0]
We argue that intense specialization represents not a limitation but an alternative cognitive architecture with distinct advantages for safety-critical applications. Our framework challenges the implicit assumption that artificial general intelligence constitutes the sole legitimate aspiration of AI research.
arXiv Detail & Related papers (2026-02-27T22:30:03Z)
- Generative Human-Object Interaction Detection via Differentiable Cognitive Steering of Multi-modal LLMs [85.69785384599827]
Human-object interaction (HOI) detection aims to localize human-object pairs and the interactions between them. Existing methods operate under a closed-world assumption, treating the task as a classification problem over a small, predefined verb set. We propose GRASP-HO, a novel Generative Reasoning And Steerable Perception framework that reformulates HOI detection from a closed-set classification task into an open-vocabulary generation problem.
arXiv Detail & Related papers (2025-12-19T14:41:50Z)
- Uni-MMMU: A Massive Multi-discipline Multimodal Unified Benchmark [69.8473923357969]
Unified multimodal models aim to jointly enable visual understanding and generation, yet current benchmarks rarely examine their true integration. We present Uni-MMMU, a comprehensive benchmark that unfolds the bidirectional synergy between generation and understanding across eight reasoning-centric domains.
arXiv Detail & Related papers (2025-10-15T17:10:35Z)
- From Semantics, Scene to Instance-awareness: Distilling Foundation Model for Open-vocabulary Situation Recognition [14.16399307533106]
Multimodal Large Language Models (MLLMs) exhibit strong zero-shot abilities but struggle with complex Grounded Situation Recognition (GSR). We transfer knowledge from a teacher MLLM to a small GSR model to enhance its generalization and zero-shot abilities. We propose Multimodal Interactive Prompt Distillation (MIPD), a novel framework that distills enriched multimodal knowledge from the foundation model.
arXiv Detail & Related papers (2025-07-19T16:29:02Z)
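MIPD's prompt-based distillation is not specified beyond the abstract above. As a reference point, the sketch below shows the generic soft-label distillation objective (tempered KL plus cross-entropy) that teacher-to-student transfer typically builds on; `distill_loss` and its hyperparameters are illustrative assumptions, not the paper's loss.

```python
# Sketch of the generic teacher-to-student distillation objective. MIPD's
# multimodal, prompt-based formulation is more involved; this shows only the
# standard soft-label KL term as a reference point.
import torch
import torch.nn.functional as F

def distill_loss(student_logits: torch.Tensor,
                 teacher_logits: torch.Tensor,
                 labels: torch.Tensor,
                 T: float = 2.0, alpha: float = 0.5) -> torch.Tensor:
    # Soft targets: match the student's tempered distribution to the teacher's.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    # Hard targets: ordinary cross-entropy on ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```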
- Integrating Dynamical Systems Learning with Foundational Models: A Meta-Evolutionary AI Framework for Clinical Trials [0.0]
NetraAI is a system-based framework engineered for stability and interpretability on small clinical trial datasets. We formalize NetraAI's foundations, combining contraction mappings, information geometry, and evolutionary algorithms to identify predictive patient cohorts. By prioritizing reliable, explainable knowledge, NetraAI offers a new generation of adaptive, self-reflective AI to accelerate clinical discovery.
arXiv Detail & Related papers (2025-05-25T03:34:33Z)
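The abstract above names contraction mappings as one formal ingredient without detail. For orientation only, the sketch below shows the textbook Banach fixed-point iteration whose convergence guarantee underlies contraction-based stability arguments; it is not NetraAI code.

```python
# Illustrative only: the textbook fixed-point iteration that contraction-based
# stability arguments rely on, not anything specific to NetraAI.
import math

def fixed_point(f, x0: float, tol: float = 1e-10, max_iter: int = 1000) -> float:
    """Iterate x <- f(x); converges whenever f is a contraction (|f'| < 1)."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge; f may not be a contraction here")

# Example: f(x) = cos(x) is a contraction near its fixed point (~0.739).
print(fixed_point(math.cos, 1.0))
```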
- MIRROR: Multi-Modal Pathological Self-Supervised Representation Learning via Modality Alignment and Retention [57.044719143401664]
Histopathology and transcriptomics are fundamental modalities in oncology, encapsulating the morphological and molecular aspects of the disease. We present MIRROR, a novel multi-modal representation learning method designed to foster both modality alignment and retention. Extensive evaluations on TCGA cohorts for cancer subtyping and survival analysis highlight MIRROR's superior performance.
arXiv Detail & Related papers (2025-03-01T07:02:30Z)
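"Modality alignment" is not defined in the abstract above. A CLIP-style symmetric InfoNCE between paired histopathology and transcriptomics embeddings is one common way to realize it, sketched below for orientation; `alignment_loss` and the temperature value are assumptions, not MIRROR's published objective.

```python
# Hedged sketch: a common contrastive alignment objective between two paired
# modalities, shown only to make "modality alignment" concrete.
import torch
import torch.nn.functional as F

def alignment_loss(histo_emb: torch.Tensor,   # (N, d) slide embeddings
                   omics_emb: torch.Tensor,   # (N, d) matched transcriptomics
                   temperature: float = 0.07) -> torch.Tensor:
    h = F.normalize(histo_emb, dim=-1)
    g = F.normalize(omics_emb, dim=-1)
    logits = h @ g.t() / temperature                      # pairwise similarities
    targets = torch.arange(h.size(0), device=h.device)    # matched pairs on diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```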
- Beyond DAGs: A Latent Partial Causal Model for Multimodal Learning [80.44084021062105]
We propose a novel latent partial causal model for multimodal data, featuring two latent coupled variables, connected by an undirected edge, to represent the transfer of knowledge across modalities. Under specific statistical assumptions, we establish an identifiability result, demonstrating that representations learned by multimodal contrastive learning correspond to the latent coupled variables up to a trivial transformation. Experiments show that a pre-trained CLIP model embodies disentangled representations, enabling few-shot learning and improving domain generalization across diverse real-world datasets.
arXiv Detail & Related papers (2024-02-09T07:18:06Z)
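The identifiability claim above ("up to a trivial transformation") can be made concrete. The LaTeX sketch below states one standard reading of that phrase, a permutation composed with componentwise invertible maps; the paper's exact transformation class may differ, so treat this as an assumption.

```latex
% Sketch under an assumed reading of "trivial transformation". Let $z$ denote
% the latent coupled variables and $\hat{z}$ the representation recovered by
% multimodal contrastive learning. Identifiability up to a trivial
% transformation then means there exist a permutation matrix $P$ and
% componentwise invertible maps $h_i$ such that
\[
  \hat{z} = P\, h(z), \qquad h(z) = \big(h_1(z_1), \dots, h_d(z_d)\big),
\]
% i.e., each learned coordinate is an invertible reparameterization of one
% true latent coordinate, up to relabeling.
```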
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with large-scale multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)