Enhancing LLM with Evolutionary Fine Tuning for News Summary Generation
- URL: http://arxiv.org/abs/2307.02839v1
- Date: Thu, 6 Jul 2023 08:13:53 GMT
- Title: Enhancing LLM with Evolutionary Fine Tuning for News Summary Generation
- Authors: Le Xiao and Xiaolin Chen
- Abstract summary: We propose a new paradigm for news summary generation that uses an LLM's powerful natural language understanding and generative capabilities.
We use the LLM to extract multiple structured event patterns from the events contained in news paragraphs, evolve the event pattern population with a genetic algorithm, and select the fittest event pattern as input to the LLM for summary generation.
A News Summary Generator (NSG) is designed to select and evolve the event pattern populations and generate news summaries.
- Score: 2.1828601975620257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: News summary generation is an important task in the field of intelligence analysis: it can provide accurate and comprehensive information that helps people better understand and respond to complex real-world events. However, traditional news summary generation methods face several challenges: they are limited by the model itself and by the amount of training data, and they are susceptible to text noise, all of which makes it difficult to generate accurate and reliable information. In this paper, we propose a new paradigm for news summary generation that uses an LLM's powerful natural language understanding and generative capabilities. We use the LLM to extract multiple structured event patterns from the events contained in news paragraphs, evolve the event pattern population with a genetic algorithm, and select the fittest event pattern as input to the LLM to generate the news summary. A News Summary Generator (NSG) is designed to select and evolve the event pattern populations and generate news summaries. Experimental results show that the generator produces accurate and reliable news summaries with some generalization ability.
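The evolve-and-select loop described in the abstract can be made concrete with a short sketch. The slot layout, genetic operators, and grounding-based fitness below are illustrative assumptions only; the abstract does not specify the paper's actual operators or adaptivity measure.

```python
import random
from dataclasses import dataclass, asdict

@dataclass
class EventPattern:
    # Hypothetical slot layout for a structured event extracted by the LLM.
    subject: str = ""
    action: str = ""
    obj: str = ""
    time: str = ""
    place: str = ""

def crossover(a: EventPattern, b: EventPattern) -> EventPattern:
    # Child copies parent a, then inherits one or two random slots from b.
    child = EventPattern(**asdict(a))
    for slot in random.sample(list(asdict(b)), k=random.randint(1, 2)):
        setattr(child, slot, getattr(b, slot))
    return child

def mutate(p: EventPattern, alternatives: dict) -> EventPattern:
    # Swap one slot for another candidate value extracted for that slot.
    slot = random.choice(list(asdict(p)))
    if alternatives.get(slot):
        setattr(p, slot, random.choice(alternatives[slot]))
    return p

def fitness(p: EventPattern, source: str) -> float:
    # Proxy adaptivity score: the fraction of filled slots literally
    # grounded in the source paragraph (a stand-in for the paper's fitness).
    filled = [v for v in asdict(p).values() if v]
    if not filled:
        return 0.0
    return sum(v.lower() in source.lower() for v in filled) / len(filled)

def evolve(population, source, alternatives, generations=20, elite=4):
    # Truncation selection: keep the top `elite` patterns each generation
    # and refill the population with mutated crossover offspring.
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, source), reverse=True)
        parents = population[:elite]
        offspring = [mutate(crossover(*random.sample(parents, 2)), alternatives)
                     for _ in range(len(population) - elite)]
        population = parents + offspring
    return max(population, key=lambda p: fitness(p, source))
```

The selected pattern would then be embedded in a generation prompt so that the LLM writes the summary around the fittest event structure.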
Related papers
- Idiosyncrasies in Large Language Models [54.26923012617675]
We unveil and study idiosyncrasies in Large Language Models (LLMs).
We find that fine-tuning existing text embedding models on LLM-generated texts yields excellent classification accuracy.
We leverage LLM as judges to generate detailed, open-ended descriptions of each model's idiosyncrasies.
arXiv Detail & Related papers (2025-02-17T18:59:02Z)
- MetaMorph: Multimodal Understanding and Generation via Instruction Tuning [57.35160715164359]
Visual-Predictive Instruction Tuning (VPiT) is a simple and effective extension to visual instruction tuning.
VPiT teaches an LLM to predict discrete text tokens and continuous visual tokens from any input sequence of image and text data.
We train our MetaMorph model and achieve competitive performance on both visual understanding and generation.
arXiv Detail & Related papers (2024-12-18T18:58:50Z)
- EventGPT: Event Stream Understanding with Multimodal Large Language Models [59.65010502000344]
Event cameras record visual information as asynchronous pixel-change streams, excelling at scene perception under poor lighting or highly dynamic conditions.
Existing multimodal large language models (MLLMs) concentrate on natural RGB images, failing in scenarios where event data fits better.
We introduce EventGPT, the first MLLM for event stream understanding.
arXiv Detail & Related papers (2024-12-01T14:38:40Z)
- Neon: News Entity-Interaction Extraction for Enhanced Question Answering [2.7661475645321256]
We present the NEON framework, designed to extract emerging entity interactions as described in news articles.
NEON constructs an entity-centric timestamped knowledge graph that captures such interactions.
Our framework innovates by integrating an open Information Extraction (openIE) style into large language models.
arXiv Detail & Related papers (2024-11-19T12:17:43Z)
- Personalized News Recommendation System via LLM Embedding and Co-Occurrence Patterns [6.4561443264763625]
In news recommendation (NR), systems must comprehend and process a vast amount of clicked-news text to infer the probability that candidate news will be clicked.
In this paper, we propose a novel NR algorithm that reshapes the news model via LLM Embedding and Co-Occurrence Patterns (LECOP).
Extensive experiments demonstrate the superior performance of the proposed method.
arXiv Detail & Related papers (2024-11-09T03:01:49Z)
- Integrating Planning into Single-Turn Long-Form Text Generation [66.08871753377055]
We propose to use planning to generate long-form content.
Our main novelty lies in a single auxiliary task that does not require multiple rounds of prompting or planning.
Our experiments on two datasets from different domains demonstrate that LLMs fine-tuned with the auxiliary task generate higher-quality documents.
arXiv Detail & Related papers (2024-10-08T17:02:40Z)
- From News to Forecast: Integrating Event Analysis in LLM-Based Time Series Forecasting with Reflection [16.47323362700347]
We introduce a novel approach to enhance time series forecasting by reasoning across both text and time series data.
With language as a medium, our method adaptively integrates social events into forecasting models, aligning news content with time series fluctuations to provide richer insights.
Specifically, we utilize LLM-based agents to iteratively filter out irrelevant news and employ human-like reasoning to evaluate predictions.
arXiv Detail & Related papers (2024-09-26T03:50:22Z)
- LLM-GAN: Construct Generative Adversarial Network Through Large Language Models For Explainable Fake News Detection [34.984605500444324]
Large Language Models (LLMs) are known for their powerful natural language understanding and explanation generation abilities.
We propose LLM-GAN, a novel framework that uses prompting mechanisms to enable an LLM to act as both Generator and Detector.
Our results demonstrate LLM-GAN's effectiveness in both prediction performance and explanation quality.
arXiv Detail & Related papers (2024-09-03T11:06:45Z)
- Fighting Fire with Fire: Adversarial Prompting to Generate a Misinformation Detection Dataset [10.860133543817659]
We propose an LLM-based approach to creating silver-standard ground-truth datasets for identifying misinformation.
Specifically, given a trusted news article, our proposed approach involves prompting LLMs to automatically generate a summarised version of the original article.
To investigate the usefulness of this dataset, we conduct a set of experiments where we train a range of supervised models for the task of misinformation detection.
arXiv Detail & Related papers (2024-01-09T10:38:13Z)
- Generative Context-aware Fine-tuning of Self-supervised Speech Models [54.389711404209415]
We study the use of context information generated by generative large language models (LLMs).
We propose an approach to distill the generated information during fine-tuning of self-supervised speech models.
We evaluate the proposed approach using the SLUE and Libri-light benchmarks for several downstream tasks: automatic speech recognition, named entity recognition, and sentiment analysis.
arXiv Detail & Related papers (2023-12-15T15:46:02Z)
- Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state of the art for few-shot fake news detection, outperforming prior methods by significant margins.
arXiv Detail & Related papers (2023-09-28T13:19:43Z)
- Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning [51.90524745663737]
A key innovation is our use of explanations as features, which can be used to boost GNN performance on downstream tasks.
Our method achieves state-of-the-art results on well-established TAG datasets.
Our method significantly speeds up training, achieving a 2.88x speedup over the closest baseline on ogbn-arxiv.
arXiv Detail & Related papers (2023-05-31T03:18:03Z)
- Learning to Transfer Prompts for Text Generation [97.64625999380425]
We propose a novel prompt-based method (PTG) for text generation in a transferable setting.
First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks.
In extensive experiments, PTG yields competitive or better results than fine-tuning methods.
arXiv Detail & Related papers (2022-05-03T14:53:48Z)
- Event Transition Planning for Open-ended Text Generation [55.729259805477376]
Open-ended text generation tasks require models to generate a coherent continuation given limited preceding context.
We propose a novel two-stage method which explicitly arranges the ensuing events in open-ended text generation.
Our approach can be understood as a specially-trained coarse-to-fine algorithm.
arXiv Detail & Related papers (2022-04-20T13:37:51Z)
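To make the coarse-to-fine idea in the last entry above concrete, here is a minimal plan-then-generate sketch; the prompts and the generic `llm` callable are illustrative assumptions, not the cited paper's trained model.

```python
def plan_event_transitions(llm, context: str, n_events: int = 3) -> list[str]:
    # Stage 1 (coarse): ask the model for a chain of ensuing events.
    prompt = (f"Context: {context}\n"
              f"List {n_events} plausible next events, one per line:")
    return [line.strip() for line in llm(prompt).splitlines() if line.strip()]

def generate_continuation(llm, context: str, events: list[str]) -> str:
    # Stage 2 (fine): refine the coarse event plan into fluent text.
    plan = " -> ".join(events)
    prompt = (f"Context: {context}\n"
              f"Event plan: {plan}\n"
              "Write a coherent continuation that follows this plan:")
    return llm(prompt)

# Usage with any text-completion function, e.g. a local model wrapper:
# events = plan_event_transitions(my_llm, story_so_far)
# continuation = generate_continuation(my_llm, story_so_far, events)
```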