FrameFinder: Explorative Multi-Perspective Framing Extraction from News
Headlines
- URL: http://arxiv.org/abs/2312.08995v1
- Date: Thu, 14 Dec 2023 14:41:37 GMT
- Title: FrameFinder: Explorative Multi-Perspective Framing Extraction from News
Headlines
- Authors: Markus Reiter-Haas, Beate Klösch, Markus Hadler, Elisabeth Lex
- Abstract summary: We present FrameFinder, an open tool for extracting and analyzing frames in textual data.
By analyzing the well-established gun violence frame corpus, we demonstrate the merits of our proposed solution.
- Score: 3.3181276611945263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Revealing the framing of news articles is an important yet neglected task in
information seeking and retrieval. In the present work, we present FrameFinder,
an open tool for extracting and analyzing frames in textual data. FrameFinder
visually represents the frames of text from three perspectives, i.e., (i) frame
labels, (ii) frame dimensions, and (iii) frame structure. By analyzing the
well-established gun violence frame corpus, we demonstrate the merits of our
proposed solution to support social science research and call for subsequent
integration into information interactions.
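Since the abstract does not detail FrameFinder's interface, the following is a minimal illustrative sketch of the first perspective (frame labels) only, using an off-the-shelf zero-shot classifier; the model choice, the candidate label set (loosely inspired by the gun violence frame corpus), and all function names are assumptions for illustration, not the tool's actual API.

```python
# Illustrative sketch, not FrameFinder's implementation: approximates the
# "frame labels" perspective with a generic zero-shot classifier.
from transformers import pipeline

# Example issue-specific frames, loosely inspired by the gun violence frame corpus.
CANDIDATE_FRAMES = [
    "gun control/regulation",
    "gun rights",
    "politics",
    "mental health",
    "school or public space safety",
]

# Assumed model choice; any NLI-based zero-shot model would work similarly.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def frame_label_scores(headline: str) -> dict:
    """Score a headline against each candidate frame label."""
    result = classifier(headline, candidate_labels=CANDIDATE_FRAMES, multi_label=True)
    return dict(zip(result["labels"], result["scores"]))

if __name__ == "__main__":
    scores = frame_label_scores("Lawmakers debate background-check bill after school shooting")
    for label, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{label:32s} {score:.2f}")
```

The other two perspectives (frame dimensions and frame structure) would additionally require embedding-based semantic axes and structured frame extraction; a sketch of the dimension idea appears after the related-papers list below.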
Related papers
- Detecting Frames in News Headlines and Lead Images in U.S. Gun Violence Coverage [12.484533062130453]
We study the value of combining lead images and contextual information with text to identify the frame of a news article.
We release the first multimodal news framing dataset related to gun violence in the U.S.
arXiv Detail & Related papers (2024-06-25T01:56:47Z) - Weak-to-Strong 3D Object Detection with X-Ray Distillation [75.47580744933724]
We propose a versatile technique that seamlessly integrates into any existing framework for 3D Object Detection.
X-Ray Distillation with Object-Complete Frames is suitable for both supervised and semi-supervised settings.
Our proposed methods surpass the state of the art in semi-supervised learning by 1-1.5 mAP.
arXiv Detail & Related papers (2024-03-31T13:09:06Z) - SciMMIR: Benchmarking Scientific Multi-modal Information Retrieval [64.03631654052445]
Current benchmarks for evaluating MMIR performance in image-text pairing within the scientific domain show a notable gap.
We develop a specialised scientific MMIR benchmark by leveraging open-access paper collections.
This benchmark comprises 530K meticulously curated image-text pairs, extracted from figures and tables with detailed captions in scientific documents.
arXiv Detail & Related papers (2024-01-24T14:23:12Z) - An Empirical Study of Frame Selection for Text-to-Video Retrieval [62.28080029331507]
Text-to-video retrieval (TVR) aims to find the most relevant video in a large video gallery given a query text.
Existing methods typically select a subset of frames within a video to represent the video content for TVR.
In this paper, we make the first empirical study of frame selection for TVR.
arXiv Detail & Related papers (2023-11-01T05:03:48Z) - Correspondence Matters for Video Referring Expression Comprehension [64.60046797561455]
Video Referring Expression Comprehension (REC) aims to localize the referent objects described in the sentence to visual regions in the video frames.
Existing methods suffer from two problems: 1) inconsistent localization results across video frames; 2) confusion between the referent and contextual objects.
We propose a novel Dual Correspondence Network (dubbed DCNet) which explicitly enhances the dense associations in both the inter-frame and cross-modal manners.
arXiv Detail & Related papers (2022-07-21T10:31:39Z) - A Double-Graph Based Framework for Frame Semantic Parsing [23.552054033442545]
Frame semantic parsing is a fundamental NLP task, which consists of three subtasks: frame identification, argument identification and role classification.
Most previous studies tend to neglect relations between different subtasks and arguments and pay little attention to ontological frame knowledge.
In this paper, we propose a Knowledge-guided semantic Parsing framework with Double-graph (KID).
Our experiments show KID outperforms the previous state-of-the-art method by up to 1.7 F1-score on two FrameNet datasets.
arXiv Detail & Related papers (2022-06-18T09:39:38Z) - Lutma: a Frame-Making Tool for Collaborative FrameNet Development [0.9786690381850356]
This paper presents Lutma, a collaborative tool for contributing frames and lexical units to the Global FrameNet initiative.
We argue that this tool will allow for a sensible expansion of FrameNet coverage in terms of both languages and cultural perspectives encoded by them.
arXiv Detail & Related papers (2022-05-24T07:04:43Z) - Condensing a Sequence to One Informative Frame for Video Recognition [113.3056598548736]
This paper studies a two-step alternative that first condenses the video sequence to an informative "frame".
A valid question is how to define "useful information" and then distill from a sequence down to one synthetic frame.
IFS consistently demonstrates clear improvements on image-based 2D networks and clip-based 3D networks.
arXiv Detail & Related papers (2022-01-11T16:13:43Z) - Integrating Visuospatial, Linguistic and Commonsense Structure into
Story Visualization [81.26077816854449]
We first explore the use of constituency parse trees for encoding structured input.
Second, we augment the structured input with commonsense information and study the impact of this external knowledge on visual story generation.
Third, we incorporate visual structure via bounding boxes and dense captioning to provide feedback about the characters/objects in generated images.
arXiv Detail & Related papers (2021-10-21T00:16:02Z) - Sister Help: Data Augmentation for Frame-Semantic Role Labeling [9.62264668211579]
We propose a data augmentation approach that uses existing frame-specific annotations to automatically annotate other, unannotated lexical units of the same frame.
We present experiments on frame-semantic role labeling which demonstrate the importance of this data augmentation.
arXiv Detail & Related papers (2021-09-16T05:15:29Z) - FrameAxis: Characterizing Microframe Bias and Intensity with Word
Embedding [8.278618225536807]
We propose FrameAxis, a method for characterizing documents by identifying the most relevant semantic axes ("microframes").
FrameAxis is designed to quantitatively tease out two important dimensions of how microframes are used in the text.
We demonstrate that microframes with the highest bias and intensity align well with sentiment, topic, and partisan spectrum by applying FrameAxis to multiple datasets from restaurant reviews to political news; a simplified sketch of the bias and intensity computation follows this list.
arXiv Detail & Related papers (2020-02-20T08:01:28Z)
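The frame-dimensions perspective is closely related to the microframes of FrameAxis (last entry above). Below is a rough, self-contained sketch of computing bias and intensity along one antonym-pair axis, assuming generic pretrained word vectors; the simplified intensity (a second moment around a baseline bias) and all names are assumptions based on the abstract, not the authors' released code.

```python
# Simplified FrameAxis-style microframe scoring; see the assumptions noted above.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def microframe_bias_intensity(doc_tokens, embeddings, pos_word, neg_word, baseline_bias=0.0):
    """Score a document along one microframe, i.e. an antonym-pair axis.

    doc_tokens        -- list of tokens in the document
    embeddings        -- dict mapping word -> vector (e.g. pretrained GloVe)
    pos_word/neg_word -- antonym pair defining the axis, e.g. ("safe", "dangerous")
    baseline_bias     -- corpus-level bias used as the reference point for intensity
    """
    axis = embeddings[pos_word] - embeddings[neg_word]
    sims = np.array([cosine(embeddings[w], axis) for w in doc_tokens if w in embeddings])
    if sims.size == 0:
        return 0.0, 0.0
    bias = float(sims.mean())                                 # which pole the text leans toward
    intensity = float(((sims - baseline_bias) ** 2).mean())   # how strongly the axis is engaged
    return bias, intensity

if __name__ == "__main__":
    # Toy embedding table for demonstration; real use would load pretrained vectors.
    rng = np.random.default_rng(0)
    emb = {w: rng.standard_normal(50) for w in ["safe", "dangerous", "school", "shooting", "law"]}
    print(microframe_bias_intensity(["school", "shooting", "law"], emb, "safe", "dangerous"))
```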
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.