Automated Meta-Analysis: A Causal Learning Perspective
- URL: http://arxiv.org/abs/2104.04633v1
- Date: Fri, 9 Apr 2021 23:07:07 GMT
- Title: Automated Meta-Analysis: A Causal Learning Perspective
- Authors: Lu Cheng, Dmitriy A. Katz-Rogozhnikov, Kush R. Varshney, Ioana Baldini
- Abstract summary: We work toward automating meta-analysis with a focus on controlling for risks of bias.
We first extract information from scientific publications written in natural language.
From a novel causal learning perspective, we propose to frame automated meta-analysis as a multiple-causal-inference problem.
- Score: 30.746257517698133
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meta-analysis is a systematic approach for understanding a phenomenon by
analyzing the results of many previously published experimental studies. It is
central to deriving conclusions about the summary effect of treatments and
interventions in medicine, poverty alleviation, and other applications with
social impact. Unfortunately, meta-analysis involves great human effort,
rendering a process that is extremely inefficient and vulnerable to human bias.
To overcome these issues, we work toward automating meta-analysis with a focus
on controlling for risks of bias. In particular, we first extract information
from scientific publications written in natural language. From a novel causal
learning perspective, we then propose to frame automated meta-analysis -- based
on the input of the first step -- as a multiple-causal-inference problem where
the summary effect is obtained through intervention. Built upon existing
efforts for automating the initial steps of meta-analysis, the proposed
approach achieves the goal of automated meta-analysis and largely reduces the
human effort involved. Evaluations on synthetic and semi-synthetic datasets
show that this approach can yield promising results.
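The paper recasts this aggregation step causally; as a point of reference, the classical fixed-effect summary that a meta-analysis pipeline would otherwise compute can be sketched as inverse-variance weighting. The function name and the study values below are illustrative, not taken from the paper:

```python
import math

def fixed_effect_summary(effects, variances):
    """Classical inverse-variance (fixed-effect) meta-analysis.

    effects:   per-study effect estimates (e.g. mean differences)
    variances: per-study sampling variances
    Returns the summary effect and its standard error.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    # Weighted average of study effects, weighted by precision
    summary = sum(w * e for w, e in zip(weights, effects)) / total
    se = math.sqrt(1.0 / total)
    return summary, se

# Three hypothetical studies extracted from publications
effects = [0.30, 0.45, 0.25]
variances = [0.04, 0.09, 0.05]
summary, se = fixed_effect_summary(effects, variances)
```

This baseline treats all between-study variation as sampling noise; the proposed approach instead obtains the summary effect through intervention in a multiple-causal-inference model, which is where controlling for risks of bias enters.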
Related papers
- Empowering Meta-Analysis: Leveraging Large Language Models for Scientific Synthesis [7.059964549363294]
This study investigates the automation of meta-analysis in scientific documents using large language models (LLMs).
Our research introduces a novel approach that fine-tunes the LLM on extensive scientific datasets to address challenges in big data handling and structured data extraction.
arXiv Detail & Related papers (2024-11-16T20:18:57Z) - Meta-Analysis with Untrusted Data [14.28797726638936]
We show how to answer causal questions much more precisely by making two changes to meta-analysis.
First, we incorporate untrusted data drawn from large observational databases.
Second, we train richer models capable of handling heterogeneous trials.
arXiv Detail & Related papers (2024-07-12T16:07:53Z) - Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z) - Uncertainty in Automated Ontology Matching: Lessons Learned from an Empirical Experimentation [6.491645162078057]
Ontologies play a critical role in linking and semantically integrating datasets via interoperability.
This paper approaches data integration from an application perspective, looking at techniques based on ontology matching.
arXiv Detail & Related papers (2023-10-18T05:42:51Z) - Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z) - Leveraging Domain Knowledge for Inclusive and Bias-aware Humanitarian Response Entry Classification [3.824858358548714]
We aim to provide an effective and ethically-aware system for humanitarian data analysis.
We introduce a novel architecture adjusted to the humanitarian analysis framework.
We also propose a systematic way to measure and mitigate biases.
arXiv Detail & Related papers (2023-05-26T09:15:05Z) - Causal Intervention Improves Implicit Sentiment Analysis [67.43379729099121]
We propose a causal intervention model for Implicit Sentiment Analysis using Instrumental Variable (ISAIV).
We first review sentiment analysis from a causal perspective and analyze the confounders existing in this task.
Then, we introduce an instrumental variable to eliminate the confounding causal effects, thus extracting the pure causal effect between sentence and sentiment.
arXiv Detail & Related papers (2022-08-19T13:17:57Z) - Counterfactual Reasoning for Out-of-distribution Multimodal Sentiment Analysis [56.84237932819403]
This paper aims to estimate and mitigate the adverse effect of the textual modality in order to achieve strong OOD generalization.
Inspired by this, we devise a model-agnostic counterfactual framework for multimodal sentiment analysis.
arXiv Detail & Related papers (2022-07-24T03:57:40Z) - Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations: the most often used nonverbal cue is speaking activity, the most common computational method is the support vector machine, the typical interaction setting is a meeting of 3-4 persons, and the prevailing sensing approach is microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z) - Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach [84.02388020258141]
We propose a new framework named ENIGMA for estimating human evaluation scores based on off-policy evaluation in reinforcement learning.
ENIGMA only requires a handful of pre-collected experience data, and therefore does not involve human interaction with the target policy during the evaluation.
Our experiments show that ENIGMA significantly outperforms existing methods in terms of correlation with human evaluation scores.
arXiv Detail & Related papers (2021-02-20T03:29:20Z) - Extracting actionable information from microtexts [0.0]
This dissertation proposes a semi-automatic method for extracting actionable information.
We show that predicting time to event is possible for both in-domain and cross-domain scenarios.
We propose a method to integrate the machine learning based relevant information classification method with a rule-based information classification technique to classify microtexts.
arXiv Detail & Related papers (2020-08-01T21:22:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.