On how Cognitive Computing will plan your next Systematic Review
- URL: http://arxiv.org/abs/2012.08178v1
- Date: Tue, 15 Dec 2020 09:56:09 GMT
- Title: On how Cognitive Computing will plan your next Systematic Review
- Authors: Maisie Badami, Marcos Baez, Shayan Zamanirad, Wei Kang
- Abstract summary: We report on the insights from 24 SLR authors on planning practices, their challenges, and their feedback on support strategies.
We frame our findings under the cognitive augmentation framework, and report on a prototype implementation and evaluation.
- Score: 3.0816257225447763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Systematic literature reviews (SLRs) are at the heart of evidence-based
research, setting the foundation for future research and practice. However,
producing good quality timely contributions is a challenging and highly
cognitive endeavor, which has lately motivated the exploration of automation
and support in the SLR process. In this paper we address an often overlooked
phase in this process, that of planning literature reviews, and explore,
through the lens of cognitive process augmentation, how to overcome its most
salient challenges. In doing so, we report on the insights from 24 SLR authors
on planning practices and their challenges, as well as their feedback on support strategies
inspired by recent advances in cognitive computing. We frame our findings under
the cognitive augmentation framework, and report on a prototype implementation
and evaluation focusing on further informing the technical feasibility.
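The prototype itself is not reproduced here, but as an illustration of the kind of planning support the paper discusses, the sketch below suggests candidate search terms for an SLR protocol by ranking frequent terms from a few seed abstracts. The seed texts, stop-word list, and scoring are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: suggest candidate search terms for an SLR protocol
# by ranking frequent terms in a few seed abstracts. The seed texts, stop-word
# list and scoring are assumptions for demonstration, not the paper's prototype.
from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "of", "in", "and", "to", "for", "on", "we",
              "is", "are", "this", "that", "with", "by", "as", "its"}

def suggest_search_terms(seed_abstracts, top_k=10):
    """Return the top_k most frequent non-stop-word terms across seed abstracts."""
    counts = Counter()
    for text in seed_abstracts:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in STOP_WORDS and len(t) > 2)
    return [term for term, _ in counts.most_common(top_k)]

if __name__ == "__main__":
    seeds = [
        "Systematic literature reviews are at the heart of evidence-based research.",
        "Planning a literature review involves defining research questions and search strings.",
    ]
    print(suggest_search_terms(seeds))
```

In a real planning tool such term suggestions would typically be combined with synonym expansion and database-specific query syntax.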
Related papers
- Identifying Aspects in Peer Reviews [61.374437855024844]
We develop a data-driven schema for deriving fine-grained aspects from a corpus of peer reviews.
We introduce a dataset of peer reviews augmented with aspects and show how it can be used for community-level review analysis.
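As an illustration of the community-level analysis such an aspect-annotated dataset enables, here is a small sketch that tallies aspect mentions per venue; the aspect names and review records are hypothetical, not the released dataset or its schema.

```python
# Illustrative sketch: community-level analysis over aspect-annotated reviews.
# The aspect labels and review records are hypothetical examples.
from collections import Counter, defaultdict

reviews = [
    {"venue": "ACL", "aspects": ["novelty", "clarity", "soundness"]},
    {"venue": "ACL", "aspects": ["soundness", "reproducibility"]},
    {"venue": "ICLR", "aspects": ["novelty", "novelty", "clarity"]},
]

def aspect_profile(reviews):
    """Count how often each aspect is mentioned per venue."""
    profile = defaultdict(Counter)
    for review in reviews:
        profile[review["venue"]].update(review["aspects"])
    return profile

for venue, counts in aspect_profile(reviews).items():
    print(venue, counts.most_common())
```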
arXiv Detail & Related papers (2025-04-09T14:14:42Z)
- Dancing with Critiques: Enhancing LLM Reasoning with Stepwise Natural Language Self-Critique [66.94905631175209]
We propose a novel inference-time scaling approach: stepwise natural language self-critique (PANEL).
It employs self-generated natural language critiques as feedback to guide the step-level search process.
This approach bypasses the need for task-specific verifiers and the associated training overhead.
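To make the idea concrete, here is a minimal sketch of step-level search driven by self-generated critiques; the `propose_steps` and `critique` stubs stand in for LLM calls, and the greedy selection is an assumption rather than the PANEL algorithm itself.

```python
# Minimal sketch of step-level search guided by self-generated critiques.
# `propose_steps` and `critique` are stubs standing in for LLM calls; the
# scoring heuristic and greedy selection are assumptions, not PANEL itself.
from typing import List

def propose_steps(problem: str, partial_solution: List[str]) -> List[str]:
    """Stub: an LLM would generate candidate next reasoning steps here."""
    return [f"candidate step {i} after {len(partial_solution)} steps" for i in range(3)]

def critique(problem: str, partial_solution: List[str], step: str) -> float:
    """Stub: an LLM would write a natural-language critique of the step and
    map it to a score; here we return a placeholder score."""
    return float(len(step) % 5)

def stepwise_search(problem: str, max_steps: int = 4) -> List[str]:
    solution: List[str] = []
    for _ in range(max_steps):
        candidates = propose_steps(problem, solution)
        # Keep the candidate whose self-critique scores highest.
        best = max(candidates, key=lambda s: critique(problem, solution, s))
        solution.append(best)
    return solution

print(stepwise_search("prove the claim"))
```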
arXiv Detail & Related papers (2025-03-21T17:59:55Z)
- LLM-Safety Evaluations Lack Robustness [58.334290876531036]
We argue that current safety alignment research efforts for large language models are hindered by many intertwined sources of noise.
We propose a set of guidelines for reducing noise and bias in evaluations of future attack and defense papers.
arXiv Detail & Related papers (2025-03-04T12:55:07Z)
- Imitate, Explore, and Self-Improve: A Reproduction Report on Slow-thinking Reasoning Systems [92.89673285398521]
o1-like reasoning systems have demonstrated remarkable capabilities in solving complex reasoning tasks.
We introduce an "imitate, explore, and self-improve" framework to train the reasoning model.
Our approach achieves competitive performance compared to industry-level reasoning systems.
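A schematic of the three phases might look like the following; every function is a placeholder, so this shows only the general shape of such a loop, not the reproduced pipeline.

```python
# Schematic sketch of an imitate / explore / self-improve training loop.
# All functions are placeholders standing in for real training, sampling and
# verification code; this is not the paper's pipeline.

def finetune(model, examples):
    """Stub: supervised fine-tuning on (problem, solution) pairs."""
    return model  # pretend the model was updated

def sample_solutions(model, problems, n=4):
    """Stub: sample n candidate long-form solutions per problem."""
    return {p: [f"solution {i} for {p}" for i in range(n)] for p in problems}

def is_correct(problem, solution):
    """Stub: a verifier (e.g. answer checking) would go here."""
    return "0" in solution  # placeholder check

def imitate_explore_self_improve(model, demos, problems, rounds=2):
    model = finetune(model, demos)                      # imitate
    for _ in range(rounds):
        samples = sample_solutions(model, problems)     # explore
        good = [(p, s) for p, sols in samples.items()
                for s in sols if is_correct(p, s)]
        model = finetune(model, good)                   # self-improve
    return model

imitate_explore_self_improve(model=None, demos=[], problems=["p1", "p2"])
```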
arXiv Detail & Related papers (2024-12-12T16:20:36Z)
- Machine Learning Information Retrieval and Summarisation to Support Systematic Review on Outcomes Based Contracting [7.081184240581488]
This article presents a study that aims to address these challenges by enhancing the efficiency and scope of systematic reviews in the social sciences through advanced machine learning (ML) and natural language processing (NLP) tools.
In particular, we focus on automating stages within the systematic reviewing process that are time-intensive and repetitive for human annotators and which lend themselves to immediate scalability through tools such as information retrieval and summarisation guided by expert advice.
The article concludes with a summary of lessons learnt regarding the integrated approach towards systematic reviews and future directions for improvement, including explainability.
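One screening step that lends itself to this kind of automation can be sketched as follows; the inclusion criteria, abstracts, and Jaccard scoring are illustrative assumptions, not the tooling used in the study.

```python
# Illustrative sketch of one automatable screening step: ranking candidate
# abstracts by lexical overlap with a review's inclusion criteria. The
# criteria text, abstracts and Jaccard scoring are assumptions for
# demonstration only.
import re

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def rank_abstracts(criteria, abstracts):
    """Return abstracts sorted by Jaccard similarity to the inclusion criteria."""
    crit = tokens(criteria)
    def score(abstract):
        abst = tokens(abstract)
        return len(crit & abst) / max(len(crit | abst), 1)
    return sorted(abstracts, key=score, reverse=True)

criteria = "outcomes based contracting in social services, empirical evaluation"
candidates = [
    "An empirical evaluation of outcomes based contracting in social care.",
    "A study of convolutional networks for image classification.",
]
for abstract in rank_abstracts(criteria, candidates):
    print(abstract)
```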
arXiv Detail & Related papers (2024-12-11T17:54:01Z)
- Machine Learning Innovations in CPR: A Comprehensive Survey on Enhanced Resuscitation Techniques [52.71395121577439]
This survey paper explores the transformative role of Machine Learning (ML) and Artificial Intelligence (AI) in Cardiopulmonary Resuscitation (CPR).
It highlights the impact of predictive modeling, AI-enhanced devices, and real-time data analysis in improving resuscitation outcomes.
The paper provides a comprehensive overview, classification, and critical analysis of current applications, challenges, and future directions in this emerging field.
arXiv Detail & Related papers (2024-11-03T18:01:50Z)
- O1 Replication Journey: A Strategic Progress Report -- Part 1 [52.062216849476776]
This paper introduces a pioneering approach to artificial intelligence research, embodied in our O1 Replication Journey.
Our methodology addresses critical challenges in modern AI research, including the insularity of prolonged team-based projects.
We propose the journey learning paradigm, which encourages models to learn not just shortcuts, but the complete exploration process.
arXiv Detail & Related papers (2024-10-08T15:13:01Z)
- A Systematic Literature Review on Large Language Models for Automated Program Repair [15.239506022284292]
It is challenging for researchers to understand the current achievements, challenges, and potential opportunities.
This work provides the first systematic literature review to summarize the applications of Large Language Models in APR between 2020 and 2024.
arXiv Detail & Related papers (2024-05-02T16:55:03Z)
- LLM as a Mastermind: A Survey of Strategic Reasoning with Large Language Models [75.89014602596673]
Strategic reasoning requires understanding and predicting adversary actions in multi-agent settings while adjusting strategies accordingly.
We explore the scopes, applications, methodologies, and evaluation metrics related to strategic reasoning with Large Language Models.
It underscores the importance of strategic reasoning as a critical cognitive capability and offers insights into future research directions and potential improvements.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
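For readers unfamiliar with field normalization, a standard article-level indicator of this kind divides a paper's citations by the mean citations of papers from the same field and year; the sketch below uses hypothetical records, and the paper's own indicators may be defined differently.

```python
# Sketch of a standard field-normalized citation indicator: a paper's citations
# divided by the mean citation count of papers from the same field and year.
# The records below are hypothetical.
from collections import defaultdict

papers = [
    {"id": "A", "field": "CV", "year": 2020, "citations": 120},
    {"id": "B", "field": "CV", "year": 2020, "citations": 30},
    {"id": "C", "field": "NLP", "year": 2020, "citations": 60},
]

def field_normalized_scores(papers):
    totals = defaultdict(list)
    for p in papers:
        totals[(p["field"], p["year"])].append(p["citations"])
    means = {key: sum(vals) / len(vals) for key, vals in totals.items()}
    return {p["id"]: p["citations"] / means[(p["field"], p["year"])] for p in papers}

print(field_normalized_scores(papers))  # e.g. {'A': 1.6, 'B': 0.4, 'C': 1.0}
```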
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- Learning Planning-based Reasoning by Trajectories Collection and Process Reward Synthesizing [61.98556945939045]
We propose a framework to learn planning-based reasoning through Direct Preference Optimization (DPO) on collected trajectories.
Our results on challenging logical reasoning benchmarks demonstrate the effectiveness of our learning framework.
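The DPO objective such a framework builds on can be written as a simple function of policy and reference log-probabilities for a preferred and a dispreferred trajectory; the values in the sketch below are placeholders, not numbers from the collected trajectories.

```python
# Minimal sketch of the standard DPO objective on one preference pair.
# The log-probabilities are placeholders, not values from the paper.
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss for one (chosen, rejected) pair."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))  # -log sigmoid

# Example: the policy assigns relatively more probability to the preferred
# trajectory than the reference model does, so the loss falls below log(2).
print(dpo_loss(logp_chosen=-12.0, logp_rejected=-15.0,
               ref_logp_chosen=-13.0, ref_logp_rejected=-14.0))
```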
arXiv Detail & Related papers (2024-02-01T15:18:33Z)
- A Systematic Literature Review on Explainability for Machine/Deep Learning-based Software Engineering Research [23.966640472958105]
This paper presents a systematic literature review of approaches that aim to improve the explainability of AI models within the context of Software Engineering.
We aim to (1) summarize the SE tasks where XAI techniques have shown success to date; (2) classify and analyze different XAI techniques; and (3) investigate existing evaluation approaches.
arXiv Detail & Related papers (2024-01-26T03:20:40Z)
- Resilience of Deep Learning applications: a systematic literature review of analysis and hardening techniques [3.265458968159693]
The review is based on 220 scientific articles published between January 2019 and March 2024.
The authors adopt a classifying framework to interpret and highlight research similarities and peculiarities.
arXiv Detail & Related papers (2023-09-27T19:22:19Z)
- Trends, Limitations and Open Challenges in Automatic Readability Assessment Research [0.0]
This article is a survey of contemporary research on developing computational models for readability assessment.
We identify the common approaches, discuss their shortcomings, and identify some challenges for the future.
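One classic baseline at the traditional, formula-based end of this line of work is the Flesch Reading Ease score, sketched below; the vowel-group syllable counter is a rough heuristic, so scores are approximate.

```python
# Illustrative baseline from formula-based readability assessment: the Flesch
# Reading Ease score. The syllable counter is a rough vowel-group heuristic.
import re

def count_syllables(word):
    """Approximate syllables as groups of consecutive vowels (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

print(flesch_reading_ease("The cat sat on the mat. It was happy."))
```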
arXiv Detail & Related papers (2021-05-03T16:18:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.