Tweetorial Hooks: Generative AI Tools to Motivate Science on Social
Media
- URL: http://arxiv.org/abs/2305.12265v2
- Date: Tue, 5 Dec 2023 07:36:29 GMT
- Title: Tweetorial Hooks: Generative AI Tools to Motivate Science on Social
Media
- Authors: Tao Long, Dorothy Zhang, Grace Li, Batool Taraif, Samia Menon, Kynnedy
Simone Smith, Sitong Wang, Katy Ilonka Gero, Lydia B. Chilton
- Abstract summary: We propose methods to use large language models (LLMs) to help users scaffold their process of writing a hook for complex scientific topics.
Our evaluation shows that the system reduces cognitive load and helps people write better hooks.
- Score: 14.353420910397702
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Communicating science and technology is essential for the public to
understand and engage in a rapidly changing world. Tweetorials are an emerging
phenomenon where experts explain STEM topics on social media in creative and
engaging ways. However, STEM experts struggle to write an engaging "hook" in
the first tweet that captures the reader's attention. We propose methods to use
large language models (LLMs) to help users scaffold their process of writing a
relatable hook for complex scientific topics. We demonstrate that LLMs can help
writers find everyday experiences that are relatable and interesting to the
public, avoid jargon, and spark curiosity. Our evaluation shows that the system
reduces cognitive load and helps people write better hooks. Lastly, we discuss
the importance of interactivity with LLMs to preserve the correctness,
effectiveness, and authenticity of the writing.
Related papers
- Can Stories Help LLMs Reason? Curating Information Space Through Narrative [10.840580696466535]
This paper investigates whether incorporating narrative elements can assist Large Language Models (LLMs) in solving complex problems more effectively.
We propose a novel approach, Story of Thought (SoT), integrating narrative structures into prompting techniques for problem-solving.
arXiv Detail & Related papers (2024-10-25T00:13:15Z)
- Exploring Knowledge Tracing in Tutor-Student Dialogues [53.52699766206808]
We present a first attempt at performing knowledge tracing (KT) in tutor-student dialogues.
We propose methods to identify the knowledge components/skills involved in each dialogue turn.
We then apply a range of KT methods on the resulting labeled data to track student knowledge levels over an entire dialogue.
arXiv Detail & Related papers (2024-09-24T22:31:39Z)
- LLMs Assist NLP Researchers: Critique Paper (Meta-)Reviewing [106.45895712717612]
Large language models (LLMs) have shown remarkable versatility in various generative tasks.
This study focuses on how LLMs can assist NLP researchers.
To our knowledge, this is the first work to provide such a comprehensive analysis.
arXiv Detail & Related papers (2024-06-24T01:30:22Z)
- Countering Misinformation via Emotional Response Generation [15.383062216223971]
The proliferation of misinformation on social media platforms (SMPs) poses a significant danger to public health, social cohesion, and democracy.
Previous research has shown how social correction can be an effective way to curb misinformation.
We present VerMouth, the first large-scale dataset comprising roughly 12 thousand claim-response pairs.
arXiv Detail & Related papers (2023-11-17T15:37:18Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Creativity Support in the Age of Large Language Models: An Empirical Study Involving Emerging Writers [33.3564201174124]
We investigate the utility of modern large language models in assisting professional writers via an empirical user study.
We find that while writers seek LLMs' help across all three types of cognitive activities, they find LLMs more helpful in translation and reviewing.
arXiv Detail & Related papers (2023-09-22T01:49:36Z)
- PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts [76.18347405302728]
This study uses a plethora of adversarial textual attacks targeting prompts across multiple levels: character, word, sentence, and semantic.
The adversarial prompts are then employed in diverse tasks including sentiment analysis, natural language inference, reading comprehension, machine translation, and math problem-solving.
Our findings demonstrate that contemporary Large Language Models are not robust to adversarial prompts.
arXiv Detail & Related papers (2023-06-07T15:37:00Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Measuring Dimensions of Self-Presentation in Twitter Bios and their Links to Misinformation Sharing [17.165798960147036]
Social media platforms provide users with a profile description field, commonly known as a "bio," where they can present themselves to the world.
We propose and evaluate a suite of simple, effective, and theoretically motivated approaches to embed bios in spaces that capture salient dimensions of social meaning.
Our work provides new tools to help computational social scientists make use of information in bios, and provides new insights into how misinformation sharing may be perceived on Twitter.
arXiv Detail & Related papers (2023-05-16T15:45:59Z)
- RHO ($\rho$): Reducing Hallucination in Open-domain Dialogues with Knowledge Grounding [57.46495388734495]
This paper presents RHO ($\rho$), utilizing the representations of linked entities and relation predicates from a knowledge graph (KG).
We propose (1) local knowledge grounding to combine textual embeddings with the corresponding KG embeddings; and (2) global knowledge grounding to equip RHO with multi-hop reasoning abilities via the attention mechanism.
arXiv Detail & Related papers (2022-12-03T10:36:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.