Title: On the Cross-lingual Transferability of Monolingual Representations
Paper: https://aclanthology.org/2020.acl-main.421.pdf
XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering performance. It consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 (Rajpurkar et al., 2016), together with their professional translations into ten languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. Consequently, the dataset is entirely parallel across 11 languages. A Romanian translation was added in a later release of the dataset and is also covered by the task variants listed below.
Homepage: https://github.com/deepmind/xquad
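To illustrate the data format, the sketch below loads one language subset with the Hugging Face `datasets` library. The Hub dataset ID `xquad` and per-language configuration names such as `xquad.en` are assumptions based on the public Hub release, not part of this task definition.

```python
from datasets import load_dataset

# Load the English subset of XQuAD (assumed Hub dataset ID "xquad",
# with per-language configs such as "xquad.en", "xquad.th", "xquad.zh").
xquad_en = load_dataset("xquad", "xquad.en")

# XQuAD only ships a validation split: 240 paragraphs and 1190 QA pairs per language.
for example in xquad_en["validation"].select(range(2)):
    print(example["question"])
    # First gold answer span; character offsets are in example["answers"]["answer_start"].
    print(example["answers"]["text"][0])
```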
@article{Artetxe:etal:2019,
  author        = {Mikel Artetxe and Sebastian Ruder and Dani Yogatama},
  title         = {On the cross-lingual transferability of monolingual representations},
  journal       = {CoRR},
  volume        = {abs/1910.11856},
  year          = {2019},
  archivePrefix = {arXiv},
  eprint        = {1910.11856}
}
xquad
: All available languages. Performs extractive question answering on each language's subset of XQuAD (a minimal scoring sketch follows the task list below).
xquad_ar
: Arabic
xquad_de
: German
xquad_el
: Greek
xquad_en
: English
xquad_es
: Spanish
xquad_hi
: Hindi
xquad_ro
: Romanian
xquad_ru
: Russian
xquad_th
: Thai
xquad_tr
: Turkish
xquad_vi
: Vietnamese
xquad_zh
: Chinese
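Because each per-language task is extractive QA over SQuAD-formatted data, scoring follows the usual SQuAD v1.1 metrics (exact match and token-level F1). The sketch below is a minimal, illustration-only evaluation loop; it assumes the Hugging Face `datasets` and `evaluate` packages and the `squad` metric, and the "first sentence of the context" baseline is purely a placeholder for real model predictions.

```python
from datasets import load_dataset
import evaluate

# Load the Thai subset (config name "xquad.th" is an assumption based on the Hub release).
data = load_dataset("xquad", "xquad.th")["validation"]

# Placeholder "model": predict the first sentence of each context (illustration only).
predictions = [
    {"id": ex["id"], "prediction_text": ex["context"].split(".")[0]}
    for ex in data
]
references = [{"id": ex["id"], "answers": ex["answers"]} for ex in data]

# SQuAD-style scoring: exact match and token-level F1 over the gold answer spans.
squad_metric = evaluate.load("squad")
print(squad_metric.compute(predictions=predictions, references=references))
```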
For adding novel benchmarks/datasets to the library:
- Is the task an existing benchmark in the literature?
- Have you referenced the original paper that introduced the task?
- If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
- Is the "Main" variant of this task clearly denoted?
- Have you provided a short sentence in a README on what each new variant adds / evaluates?
- Have you noted which, if any, published evaluation setups are matched by this variant?