Harnessing Collective Intelligence Under a Lack of Cultural Consensus
- URL: http://arxiv.org/abs/2309.09787v2
- Date: Tue, 19 Sep 2023 15:57:29 GMT
- Title: Harnessing Collective Intelligence Under a Lack of Cultural Consensus
- Authors: Necdet Gürkan and Jordan W. Suchow
- Abstract summary: Cultural Consensus Theory (CCT) provides a statistical framework for detecting and characterizing divergent consensus beliefs.
We extend CCT with a latent construct that maps between pretrained deep neural network embeddings of entities and the consensus beliefs regarding those entities among one or more subsets of respondents.
We find that iDLC-CCT better predicts the degree of consensus, generalizes well to out-of-sample entities, and is effective even with sparse data.
- Score: 0.1813006808606333
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Harnessing collective intelligence to drive effective decision-making and
collaboration benefits from the ability to detect and characterize
heterogeneity in consensus beliefs. This is particularly true in domains such
as technology acceptance or leadership perception, where a consensus defines an
intersubjective truth, leading to the possibility of multiple "ground truths"
when subsets of respondents sustain mutually incompatible consensuses. Cultural
Consensus Theory (CCT) provides a statistical framework for detecting and
characterizing these divergent consensus beliefs. However, it is unworkable in
modern applications because it lacks the ability to generalize across even
highly similar beliefs, is ineffective with sparse data, and can leverage
neither external knowledge bases nor learned machine representations. Here, we
overcome these limitations through Infinite Deep Latent Construct Cultural
Consensus Theory (iDLC-CCT), a nonparametric Bayesian model that extends CCT
with a latent construct that maps between pretrained deep neural network
embeddings of entities and the consensus beliefs regarding those entities among
one or more subsets of respondents. We validate the method across domains
including perceptions of risk sources, food healthiness, leadership, first
impressions, and humor. We find that iDLC-CCT better predicts the degree of
consensus, generalizes well to out-of-sample entities, and is effective even
with sparse data. To improve scalability, we introduce an efficient
hard-clustering variant of the iDLC-CCT using an algorithm derived from a
small-variance asymptotic analysis of the model. The iDLC-CCT, therefore,
provides a workable computational foundation for harnessing collective
intelligence under a lack of cultural consensus and may potentially form the
basis of consensus-aware information technologies.
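To make the modeling idea above concrete, the following is a minimal Python/NumPy sketch of a simplified, finite-cluster analogue of the approach: pretrained entity embeddings are passed through a per-subgroup linear "latent construct" to produce that subgroup's consensus beliefs, respondents belong to subgroups and answer with competence-scaled noise, and a k-means-style hard-clustering step stands in for the small-variance asymptotic variant. This is an illustrative assumption-laden sketch, not the authors' implementation: the function name, dimensions, linear form of the mapping, and fixed number of clusters are all invented here, whereas the full iDLC-CCT is nonparametric (unbounded subgroups) and fully Bayesian.

```python
# Toy sketch (not the paper's code): finite-cluster analogue of the iDLC-CCT idea.
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_respondents, embed_dim, n_clusters = 50, 30, 16, 2

# Pretrained entity embeddings (stand-ins for deep neural network embeddings).
E = rng.normal(size=(n_entities, embed_dim))

# Latent construct: one linear map per consensus subgroup, taking an entity
# embedding to that subgroup's consensus belief about the entity.
W = rng.normal(scale=0.5, size=(n_clusters, embed_dim))
consensus = E @ W.T                                        # (n_entities, n_clusters)

# Each respondent belongs to one consensus subgroup and has a competence
# level that scales their response noise (a core CCT ingredient).
z = rng.integers(n_clusters, size=n_respondents)           # subgroup labels
competence = rng.uniform(0.5, 2.0, size=n_respondents)     # higher = less noise

# Observed ratings: subgroup consensus plus respondent-specific noise.
ratings = consensus[:, z] + rng.normal(size=(n_entities, n_respondents)) / competence

def hard_cluster_respondents(ratings, n_clusters, n_iters=20):
    """k-means-style hard clustering of respondents by their rating profiles,
    in the spirit of a small-variance-asymptotics simplification."""
    n_resp = ratings.shape[1]
    labels = rng.integers(n_clusters, size=n_resp)
    for _ in range(n_iters):
        # Subgroup consensus estimate = mean rating profile of its members.
        centers = np.stack([
            ratings[:, labels == k].mean(axis=1) if np.any(labels == k)
            else ratings[:, rng.integers(n_resp)]
            for k in range(n_clusters)
        ], axis=1)                                          # (n_entities, n_clusters)
        # Reassign each respondent to the nearest consensus profile.
        dists = ((ratings[:, :, None] - centers[:, None, :]) ** 2).sum(axis=0)
        labels = dists.argmin(axis=1)
    return labels, centers

labels, centers = hard_cluster_respondents(ratings, n_clusters)
print("recovered subgroup sizes:", np.bincount(labels, minlength=n_clusters))
```

In this toy version the consensus subgroups and their belief profiles are recovered by point estimates only; the model in the paper additionally infers the number of subgroups and quantifies uncertainty over respondent competence and consensus locations.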
Related papers
- Independence Constrained Disentangled Representation Learning from Epistemological Perspective [13.51102815877287]
Disentangled Representation Learning aims to improve the explainability of deep learning methods by training a data encoder that identifies semantically meaningful latent variables in the data generation process.
There is no consensus regarding the objective of disentangled representation learning.
We propose a novel method for disentangled representation learning by employing an integration of mutual information constraint and independence constraint.
arXiv Detail & Related papers (2024-09-04T13:00:59Z)
- Uncertainty-preserving deep knowledge tracing with state-space models [1.3791394805787949]
A central goal of knowledge tracing and traditional assessment is to quantify student knowledge and skills at a given point in time.
We introduce Dynamic LENS, a modeling paradigm that combines the flexible uncertainty-preserving properties of variational autoencoders with the principled information integration of Bayesian state-space models.
arXiv Detail & Related papers (2024-07-09T13:40:28Z)
- VALID: a Validated Algorithm for Learning in Decentralized Networks with Possible Adversarial Presence [13.612214163974459]
We introduce the paradigm of validated decentralized learning for undirected networks with heterogeneous data.
The VALID protocol is the first to achieve a validated learning guarantee.
Remarkably, VALID retains optimal performance metrics in adversary-free environments.
arXiv Detail & Related papers (2024-05-12T15:55:43Z)
- Networked Communication for Decentralised Agents in Mean-Field Games [59.01527054553122]
We introduce networked communication to the mean-field game framework.
We prove that our architecture has sample guarantees bounded between those of the centralised- and independent-learning cases.
arXiv Detail & Related papers (2023-06-05T10:45:39Z)
- Modeling Multiple Views via Implicitly Preserving Global Consistency and Local Complementarity [61.05259660910437]
We propose a global consistency and complementarity network (CoCoNet) to learn representations from multiple views.
On the global stage, we posit that crucial knowledge is implicitly shared among views, and that training the encoder to capture this shared knowledge improves the discriminability of the learned representations.
On the local stage, we propose a complementarity factor that combines cross-view discriminative knowledge and guides the encoders to learn not only view-wise discriminability but also cross-view complementary information.
arXiv Detail & Related papers (2022-09-16T09:24:00Z)
- CARE: Certifiably Robust Learning with Reasoning via Variational Inference [26.210129662748862]
We propose CARE, a certifiably robust learning-with-reasoning pipeline.
CARE achieves significantly higher certified robustness compared with the state-of-the-art baselines.
We additionally conduct ablation studies to demonstrate the empirical robustness of CARE and the effectiveness of its knowledge integration.
arXiv Detail & Related papers (2022-09-12T07:15:52Z)
- Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization [49.00409552570441]
We study the role of conceptualization in commonsense reasoning, and formulate a framework to replicate human conceptual induction.
We apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase.
arXiv Detail & Related papers (2022-06-03T12:24:49Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Deep Clustering by Semantic Contrastive Learning [67.28140787010447]
We introduce a novel variant called Semantic Contrastive Learning (SCL).
It explores the characteristics of both conventional contrastive learning and deep clustering.
It can amplify the strengths of contrastive learning and deep clustering in a unified approach.
arXiv Detail & Related papers (2021-03-03T20:20:48Z)
- Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation [32.37031528767224]
Controlling bias in training datasets is vital for ensuring equal treatment, or parity, between different groups in downstream applications.
We demonstrate an effective method for controlling parity through mutual information based on contrastive information estimators.
We test our approach on UCI Adult and Heritage Health datasets and demonstrate that our approach provides more informative representations across a range of desired parity thresholds.
arXiv Detail & Related papers (2021-01-11T18:57:33Z)