LitLLM: A Toolkit for Scientific Literature Review
- URL: http://arxiv.org/abs/2402.01788v1
- Date: Fri, 2 Feb 2024 02:41:28 GMT
- Title: LitLLM: A Toolkit for Scientific Literature Review
- Authors: Shubham Agarwal, Issam H. Laradji, Laurent Charlin, Christopher Pal
- Abstract summary: Toolkit operates on Retrieval Augmented Generation (RAG) principles.
System first initiates a web search to retrieve relevant papers.
Second, the system re-ranks the retrieved papers based on the user-provided abstract.
Third, the related work section is generated based on the re-ranked results and the abstract.
- Score: 15.080020634480272
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conducting literature reviews for scientific papers is essential for
understanding research, its limitations, and building on existing work. It is a
tedious task which makes an automatic literature review generator appealing.
Unfortunately, many existing works that generate such reviews using Large
Language Models (LLMs) have significant limitations. They tend to
hallucinate, generating information that is not factual, and to ignore the latest
research they have not been trained on. To address these limitations, we propose a toolkit
that operates on Retrieval Augmented Generation (RAG) principles together with
specialized prompting and instruction techniques for LLMs. Our system first
initiates a web search to retrieve relevant papers by summarizing user-provided
abstracts into keywords using an off-the-shelf LLM. Authors can enhance the
search by supplementing it with relevant papers or keywords, contributing to a
tailored retrieval process. Second, the system re-ranks the retrieved papers
based on the user-provided abstract. Finally, the related work section is
generated based on the re-ranked results and the abstract. Compared to
traditional methods, our toolkit substantially reduces the time and effort required
for a literature review, establishing it as an efficient alternative. Our
open-source toolkit is accessible at https://github.com/shubhamagarwal92/LitLLM
and as a Hugging Face Space (https://huggingface.co/spaces/shubhamagarwal92/LitLLM),
with a video demo at https://youtu.be/E2ggOZBAFw0.
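To make the three-step flow above concrete, here is a minimal sketch, assuming an OpenAI-style chat model as the off-the-shelf LLM and the public Semantic Scholar paper-search API as a stand-in for the web-search step; the helper names (llm, search_papers, rerank, related_work) are illustrative and are not the toolkit's actual API.

```python
# Minimal sketch of the three-step RAG flow described in the abstract.
# Assumptions (not taken from the toolkit's code): an OpenAI-style chat model
# serves as the off-the-shelf LLM, and Semantic Scholar's public search API
# stands in for the web-search step.
import re

import requests
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def llm(prompt: str) -> str:
    """Single-turn helper around a chat-completion call."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def abstract_to_keywords(abstract: str) -> str:
    # Step 1a: summarize the user-provided abstract into a search query.
    return llm("Summarize this abstract into a short keyword query for a "
               "scholarly search engine:\n\n" + abstract)


def search_papers(query: str, limit: int = 20) -> list[dict]:
    # Step 1b: retrieve candidate papers (Semantic Scholar used as an example).
    r = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": query, "limit": limit, "fields": "title,abstract"},
        timeout=30,
    )
    r.raise_for_status()
    return [p for p in r.json().get("data", []) if p.get("abstract")]


def rerank(abstract: str, papers: list[dict]) -> list[dict]:
    # Step 2: ask the LLM to re-order the candidates by relevance to the abstract.
    listing = "\n".join(f"[{i}] {p['title']}" for i, p in enumerate(papers))
    order = llm("Rank these papers by relevance to the abstract below. Return "
                "only the bracketed indices, most relevant first.\n\n"
                f"Abstract:\n{abstract}\n\nPapers:\n{listing}")
    ranked, seen = [], set()
    for tok in re.findall(r"\d+", order):
        i = int(tok)
        if i < len(papers) and i not in seen:
            ranked.append(papers[i])
            seen.add(i)
    return ranked or papers  # fall back to the original order if parsing fails


def related_work(abstract: str, papers: list[dict], top_k: int = 5) -> str:
    # Step 3: generate the related-work section from the top-ranked papers.
    context = "\n\n".join(f"[{i + 1}] {p['title']}: {p['abstract']}"
                          for i, p in enumerate(papers[:top_k]))
    return llm("Write a related-work section for the abstract below, citing the "
               "numbered papers as [1], [2], ...\n\n"
               f"Abstract:\n{abstract}\n\nPapers:\n{context}")


if __name__ == "__main__":
    user_abstract = "We study retrieval-augmented generation for literature review."
    candidates = search_papers(abstract_to_keywords(user_abstract))
    print(related_work(user_abstract, rerank(user_abstract, candidates)))
```

As the abstract notes, author-supplied keywords or seed papers could be appended to the query returned by abstract_to_keywords before the search call to tailor the retrieval.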
Related papers
- GigaCheck: Detecting LLM-generated Content [72.27323884094953]
In this work, we investigate the task of generated-text detection by proposing GigaCheck.
Our research explores two approaches: (i) distinguishing human-written texts from LLM-generated ones, and (ii) detecting LLM-generated intervals in Human-Machine collaborative texts.
Specifically, we use a fine-tuned general-purpose LLM in conjunction with a DETR-like detection model, adapted from computer vision, to localize artificially generated intervals within text.
arXiv Detail & Related papers (2024-10-31T08:30:55Z)
- PROMPTHEUS: A Human-Centered Pipeline to Streamline SLRs with LLMs [0.0]
PROMPTHEUS is an AI-driven pipeline solution for Systematic Literature Reviews.
It automates key stages of the SLR process, including systematic search, data extraction, topic modeling, and summarization.
It achieves high precision, provides coherent topic organization, and reduces review time.
arXiv Detail & Related papers (2024-10-21T13:05:33Z)
- Scaling Up Summarization: Leveraging Large Language Models for Long Text Extractive Summarization [0.27624021966289597]
This paper introduces EYEGLAXS, a framework that leverages Large Language Models (LLMs) for extractive summarization.
EYEGLAXS focuses on extractive summarization to ensure factual and grammatical integrity.
The system sets new performance benchmarks on well-known datasets like PubMed and ArXiv.
arXiv Detail & Related papers (2024-08-28T13:52:19Z)
- LLAssist: Simple Tools for Automating Literature Review Using Large Language Models [0.0]
LLAssist is an open-source tool designed to streamline literature reviews in academic research.
It uses Large Language Models (LLMs) and Natural Language Processing (NLP) techniques to automate key aspects of the review process.
arXiv Detail & Related papers (2024-07-19T02:48:54Z)
- Tool Learning with Large Language Models: A Survey [60.733557487886635]
Tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems.
Despite growing attention and rapid advancements in this field, the existing literature remains fragmented and lacks systematic organization.
arXiv Detail & Related papers (2024-05-28T08:01:26Z)
- Large Language Models for Generative Information Extraction: A Survey [89.71273968283616]
Large Language Models (LLMs) have demonstrated remarkable capabilities in text understanding and generation.
We present an extensive overview that categorizes works applying LLMs to information extraction (IE) by subtask and technique.
We empirically analyze the most advanced methods and identify emerging trends in LLM-based IE.
arXiv Detail & Related papers (2023-12-29T14:25:22Z)
- LLMRefine: Pinpointing and Refining Large Language Models via Fine-Grained Actionable Feedback [65.84061725174269]
Recent large language models (LLMs) leverage human feedback to improve their generation quality.
We propose LLMRefine, an inference-time optimization method to refine an LLM's output.
We conduct experiments on three text generation tasks: machine translation, long-form question answering (QA), and topical summarization.
LLMRefine consistently outperforms all baseline approaches, achieving improvements of up to 1.7 MetricX points on translation, 8.1 ROUGE-L on ASQA, and 2.2 ROUGE-L on topical summarization.
arXiv Detail & Related papers (2023-11-15T19:52:11Z)
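The feedback-driven refinement summarized in the entry above can be illustrated with a simple loop; this is a generic sketch reusing the hypothetical llm helper from the first code example, with a second LLM call standing in for the paper's learned fine-grained feedback model and its inference-time search.

```python
# Generic critique-and-revise loop (illustration only); a second LLM call plays
# the role of the fine-grained feedback model described in the entry above.

def refine(task: str, draft: str, rounds: int = 3) -> str:
    for _ in range(rounds):
        feedback = llm("Task: " + task + "\nDraft: " + draft +
                       "\nList concrete errors with suggested fixes, "
                       "or reply OK if there are none.")
        if feedback.strip().upper().startswith("OK"):
            break  # no actionable feedback left
        draft = llm("Task: " + task + "\nDraft: " + draft +
                    "\nRevise the draft to address this feedback:\n" + feedback)
    return draft
```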
- CRUISE-Screening: Living Literature Reviews Toolbox [8.292338880619061]
CRUISE-Screening is a web-based application for conducting living literature reviews.
It is connected to several search engines via an API, which allows for updating the search results periodically.
arXiv Detail & Related papers (2023-09-04T15:58:43Z)
- RRAML: Reinforced Retrieval Augmented Machine Learning [10.94680155282906]
We propose a novel framework called Reinforced Retrieval Augmented Machine Learning (RRAML).
RRAML integrates the reasoning capabilities of large language models with supporting information retrieved by a purpose-built retriever from a vast user-provided database.
We believe that the research agenda outlined in this paper has the potential to profoundly impact the field of AI.
arXiv Detail & Related papers (2023-07-24T13:51:19Z)
- Query Rewriting for Retrieval-Augmented Large Language Models [139.242907155883]
Large Language Models (LLMs) play the role of powerful, black-box readers in the retrieve-then-read pipeline.
This work introduces a new framework, Rewrite-Retrieve-Read, which replaces the previous retrieve-then-read pipeline for retrieval-augmented LLMs.
arXiv Detail & Related papers (2023-05-23T17:27:50Z)
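To illustrate the rewrite-retrieve-read idea summarized above, here is a minimal sketch reusing the hypothetical llm and search_papers helpers from the first code example; it is a simplified illustration, not the paper's implementation.

```python
# Simplified rewrite-retrieve-read pipeline (illustration only), reusing the
# hypothetical llm() and search_papers() helpers from the earlier sketch.

def rewrite_retrieve_read(question: str) -> str:
    # Rewrite: turn the raw question into a search-friendly query first.
    query = llm("Rewrite this question as a concise search query:\n" + question)
    # Retrieve: fetch documents for the rewritten query, not the raw input.
    docs = search_papers(query, limit=5)
    context = "\n\n".join(f"{d['title']}: {d['abstract']}" for d in docs)
    # Read: answer the original question grounded in the retrieved context.
    return llm("Answer the question using only the context below.\n\n"
               f"Context:\n{context}\n\nQuestion: {question}")
```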
- Active Retrieval Augmented Generation [123.68874416084499]
Augmenting large language models (LMs) by retrieving information from external knowledge resources is one promising solution.
Most existing retrieval augmented LMs employ a retrieve-and-generate setup that only retrieves information once based on the input.
We propose Forward-Looking Active REtrieval augmented generation (FLARE), a generic method which iteratively uses a prediction of the upcoming sentence to anticipate future content.
arXiv Detail & Related papers (2023-05-11T17:13:40Z)
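A rough sketch of the forward-looking retrieval loop that FLARE describes, again reusing the hypothetical helpers from the first code example; the real method triggers retrieval from low-confidence token probabilities in a tentative next sentence, which is simplified here into an explicit self-check prompt.

```python
# Sketch of forward-looking active retrieval (FLARE-style), for illustration:
# draft the next sentence, retrieve when it looks uncertain, then revise it.

def flare_generate(question: str, max_sentences: int = 8) -> str:
    answer = ""
    for _ in range(max_sentences):
        draft = llm("Question: " + question + "\nAnswer so far: " + answer +
                    "\nWrite only the next sentence, or DONE if finished.")
        if draft.strip().upper().startswith("DONE"):
            break
        # Simplification: the paper checks token probabilities; here the model
        # is simply asked whether the sentence needs supporting evidence.
        unsure = llm("Does this sentence need a fact check? Answer yes or no.\n"
                     + draft).lower().startswith("y")
        if unsure:
            docs = search_papers(draft, limit=3)
            evidence = "\n".join(d["title"] + ": " + d["abstract"] for d in docs)
            draft = llm("Rewrite the sentence so it is consistent with this "
                        "evidence:\n" + evidence + "\n\nSentence: " + draft)
        answer = (answer + " " + draft.strip()).strip()
    return answer
```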
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.