Overview of CLEF 2019 Lab ProtestNews: Extracting Protests from News in
a Cross-context Setting
- URL: http://arxiv.org/abs/2008.00345v1
- Date: Sat, 1 Aug 2020 21:39:54 GMT
- Title: Overview of CLEF 2019 Lab ProtestNews: Extracting Protests from News in
a Cross-context Setting
- Authors: Ali Hürriyetoğlu, Erdem Yörük, Deniz Yüret, Çağrı Yoltar,
Burak Gürel, Fırat Duruşan, Osman Mutlu, and Arda Akdemir
- Abstract summary: The lab consists of document, sentence, and token level information classification and extraction tasks.
The training and development data were collected from India, and the test data were collected from India and China.
We observed that neural networks yield the best results and that performance drops significantly for the majority of submissions in the cross-country setting.
- Score: 3.5132824436572685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an overview of the CLEF-2019 Lab ProtestNews on Extracting
Protests from News in the context of generalizable natural language processing.
The lab consists of document, sentence, and token level information
classification and extraction tasks, referred to as Task 1, Task 2, and Task 3,
respectively, in the scope of this lab. The tasks required the participants to
identify protest-relevant information from English local news at one or more of
the aforementioned levels in a cross-context setting, which is cross-country in
the scope of this lab. The training and development data were collected from
India, and the test data were collected from India and China. The lab attracted
58 participating teams; 12 of them submitted results and 9 submitted working
notes. We observed that neural networks yield the best results and that
performance drops significantly for the majority of submissions in the
cross-country setting, i.e., on the test data from China.
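As a rough illustration of the cross-country evaluation setup described in the abstract, the sketch below trains a sentence-level protest classifier on data from one country and scores it on held-out sentences from both the source country and an unseen one. All texts, labels, and the TF-IDF plus logistic regression model are hypothetical placeholders for this sketch, not the lab's official data or baselines (the lab's best-performing systems were neural networks).
```python
# Minimal sketch of cross-context (cross-country) evaluation for a
# sentence-level protest classifier. Data and model are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Hypothetical training sentences from Indian local news (1 = protest-relevant).
train_texts = [
    "Hundreds of workers marched to the district office demanding higher wages.",
    "The cricket team announced its squad for the upcoming series.",
    "Farmers blocked the highway in protest against the new land bill.",
    "The city council approved the annual budget on Tuesday.",
]
train_labels = [1, 0, 1, 0]

# Hypothetical test sets: one from the training country, one from a new country.
test_sets = {
    "india": (["Students staged a sit-in outside the university gates."], [1]),
    "china": (["Villagers gathered to oppose the planned chemical plant."], [1]),
}

# Simple TF-IDF + logistic regression baseline, standing in for the neural
# models that performed best in the lab.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

# Cross-context evaluation: the gap between the in-country and out-of-country
# scores is the generalization drop the lab was designed to measure.
for country, (texts, labels) in test_sets.items():
    preds = model.predict(texts)
    print(country, f1_score(labels, preds, zero_division=0))
```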
Related papers
- Perception Test 2023: A Summary of the First Challenge And Outcome [67.0525378209708]
The First Perception Test challenge was held as a half-day workshop alongside the IEEE/CVF International Conference on Computer Vision (ICCV) 2023.
The goal was to benchmark state-of-the-art video models on the recently proposed Perception Test benchmark.
We summarise in this report the task descriptions, metrics, baselines, and results.
arXiv Detail & Related papers (2023-12-20T15:12:27Z) - ArAIEval Shared Task: Persuasion Techniques and Disinformation Detection
in Arabic Text [41.3267575540348]
We present an overview of the ArAIEval shared task, organized as part of the first ArabicNLP 2023 conference, co-located with EMNLP 2023.
ArAIEval offers two tasks over Arabic text: (i) persuasion technique detection, focusing on identifying persuasion techniques in tweets and news articles, and (ii) disinformation detection in binary and multiclass setups over tweets.
A total of 20 teams participated in the final evaluation phase, with 14 and 16 teams participating in Tasks 1 and 2, respectively.
arXiv Detail & Related papers (2023-11-06T15:21:19Z) - UrduFake@FIRE2020: Shared Track on Fake News Identification in Urdu [62.6928395368204]
This paper gives an overview of the first shared task at FIRE 2020 on fake news detection in the Urdu language.
The goal is to identify fake news using a dataset composed of 900 annotated news articles for training and 400 news articles for testing.
The dataset contains news in five domains: (i) Health, (ii) Sports, (iii) Showbiz, (iv) Technology, and (v) Business.
arXiv Detail & Related papers (2022-07-25T03:46:51Z) - Overview of the Shared Task on Fake News Detection in Urdu at FIRE 2020 [62.6928395368204]
The task was posed as a binary classification problem, in which the goal is to differentiate between real and fake news.
We provided a dataset divided into 900 annotated news articles for training and 400 news articles for testing.
42 teams from 6 different countries (India, China, Egypt, Germany, Pakistan, and the UK) registered for the task.
arXiv Detail & Related papers (2022-07-25T03:41:32Z) - UrduFake@FIRE2021: Shared Track on Fake News Identification in Urdu [55.41644538483948]
This study reports on the second shared task, UrduFake@FIRE2021, on fake news detection in the Urdu language.
The proposed systems were based on various count-based features and used different classifiers as well as neural network architectures.
The stochastic gradient descent (SGD) classifier outperformed the other classifiers and achieved a 0.679 F-score.
arXiv Detail & Related papers (2022-07-11T19:15:04Z) - Overview of the Shared Task on Fake News Detection in Urdu at FIRE 2021 [55.41644538483948]
The goal of the shared task is to motivate the community to come up with efficient methods for solving this vital problem.
The training set contains 1,300 annotated news articles (750 real, 550 fake), while the testing set contains 300 news articles (200 real, 100 fake).
The best performing system obtained an F1-macro score of 0.679, which is lower than the past year's best result of 0.907 F1-macro.
arXiv Detail & Related papers (2022-07-11T18:58:36Z) - Overview of the CLEF-2021 CheckThat! Lab on Detecting Check-Worthy
Claims, Previously Fact-Checked Claims, and Fake News [21.574997165145486]
We describe the fourth edition of the CheckThat! Lab, part of the 2021 Conference and Labs of the Evaluation Forum (CLEF).
The lab evaluates technology supporting tasks related to factuality, and covers Arabic, Bulgarian, English, Spanish, and Turkish.
arXiv Detail & Related papers (2021-09-23T06:10:36Z) - SemEval-2020 Task 10: Emphasis Selection for Written Text in Visual
Media [50.29389719723529]
We present the main findings and compare the results of SemEval-2020 Task 10, Emphasis Selection for Written Text in Visual Media.
The goal of this shared task is to design automatic methods for emphasis selection.
The analysis of systems submitted to the task indicates that BERT and RoBERTa were the most common choice of pre-trained models used.
arXiv Detail & Related papers (2020-08-07T17:24:53Z) - Overview of CheckThat! 2020: Automatic Identification and Verification
of Claims in Social Media [26.60148306714383]
We present an overview of the third edition of the CheckThat! Lab at CLEF 2020.
The lab featured five tasks in two different languages: English and Arabic.
We describe the task setup, the evaluation results, and a summary of the approaches used by the participants.
arXiv Detail & Related papers (2020-07-15T21:19:32Z) - Multitask Models for Supervised Protests Detection in Texts [3.8073142980733]
I apply multitask neural networks capable of producing predictions for two and three of these tasks simultaneously.
This paper demonstrates performance near or above the reported state-of-the-art for automated political event coding.
arXiv Detail & Related papers (2020-05-06T17:00:46Z)