TY - GEN
T1 - CheckThat! at CLEF 2019: Automatic Identification and Verification of Claims.
AU - Elsayed, Tamer
AU - Nakov, Preslav
AU - Barrón-Cedeño, Alberto
AU - Hasanain, Maram
AU - Suwaileh, Reem
AU - Da San Martino, Giovanni
AU - Atanasova, Pepa
N1 - DBLP License: DBLP's bibliographic metadata records provided through http://dblp.org/ are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.
PY - 2019
Y1 - 2019
N2 - We introduce the second edition of the CheckThat! Lab, part of the 2019 Cross-Language Evaluation Forum (CLEF). CheckThat! proposes two complementary tasks. Task 1: predict which claims in a political debate should be prioritized for fact-checking. Task 2: rank Web-retrieved pages against a check-worthy claim based on their usefulness for fact-checking, extract useful passages from those pages, and then use them all to decide whether the claim is factually true or false. CheckThat! provides a full evaluation framework, consisting of data in English (derived from fact-checking sources) and Arabic (gathered and annotated from scratch), and evaluation based on mean average precision (MAP) for ranking and F1 for classification tasks.
AB - We introduce the second edition of the CheckThat! Lab, part of the 2019 Cross-Language Evaluation Forum (CLEF). CheckThat! proposes two complementary tasks. Task 1: predict which claims in a political debate should be prioritized for fact-checking. Task 2: rank Web-retrieved pages against a check-worthy claim based on their usefulness for fact-checking, extract useful passages from those pages, and then use them all to decide whether the claim is factually true or false. CheckThat! provides a full evaluation framework, consisting of data in English (derived from fact-checking sources) and Arabic (gathered and annotated from scratch), and evaluation based on mean average precision (MAP) for ranking and F1 for classification tasks.
U2 - 10.1007/978-3-030-15719-7_41
DO - 10.1007/978-3-030-15719-7_41
M3 - Article in proceedings
SP - 309
EP - 315
BT - Advances in Information Retrieval
PB - Springer
ER -