Publication Detail
Question and answer test-train overlap in open-domain question answering datasets
- Publication Type: Conference
- Authors: Lewis P, Stenetorp P, Riedel S
- Publisher: Association for Computational Linguistics
- Publication date: 04/2021
- Place of publication: Online
- Published proceedings: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
- Medium: https://aclanthology.org/2021.eacl-main.86
- Status: Published
- Name of conference: EACL 2021
- Language: English
- Notes: This is an open access article under the CC BY 4.0 (Attribution 4.0 International) license (https://creativecommons.org/licenses/by/4.0/)
Abstract
Ideally, Open-Domain Question Answering models should exhibit a number of competencies, ranging from simply memorizing questions seen at training time, to answering novel question formulations with answers seen during training, to generalizing to completely novel questions with novel answers. However, single aggregated test set scores do not show the full picture of what capabilities models truly have. In this work, we perform a detailed study of the test sets of three popular open-domain benchmark datasets with respect to these competencies. We find that 30% of test-set questions have a near-duplicate paraphrase in their corresponding train sets. In addition, we find that 60-70% of answers in the test sets are also present in the train sets. Using these findings, we evaluate a variety of popular open-domain models to obtain greater insight into the extent to which they can generalize, and what drives their overall performance. We find that all models perform substantially worse on questions that cannot be memorized from train sets, with a mean absolute performance difference of 61% between repeated and non-repeated data. Finally, we show that simple nearest-neighbor models outperform a BART closed-book QA model, further highlighting the role that train set memorization plays in these benchmarks.
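As an illustration of the answer-overlap analysis described in the abstract, the following minimal Python sketch estimates the fraction of test examples whose answer also appears in the train set, using normalized exact string match. This is not the authors' released code: the dataset field names ("question", "answers") and the normalization are assumptions made for the example, and the paper's question-overlap analysis additionally relies on near-duplicate paraphrase detection rather than exact matching.

# Illustrative sketch only: estimates test-train answer overlap by
# normalized exact match. Field names ("question", "answers") are
# assumptions about the dataset format, not the paper's actual schema.
import string


def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())


def answer_overlap(train_examples, test_examples) -> float:
    """Fraction of test examples with at least one answer seen in training."""
    train_answers = {
        normalize(a) for ex in train_examples for a in ex["answers"]
    }
    hits = sum(
        any(normalize(a) in train_answers for a in ex["answers"])
        for ex in test_examples
    )
    return hits / len(test_examples)


if __name__ == "__main__":
    train = [{"question": "who wrote hamlet", "answers": ["William Shakespeare"]}]
    test = [
        {"question": "who is the author of hamlet", "answers": ["William Shakespeare"]},
        {"question": "capital of france", "answers": ["Paris"]},
    ]
    print(f"Answer overlap: {answer_overlap(train, test):.0%}")  # 50%

A similar lookup over normalized train questions is the intuition behind the nearest-neighbor baselines mentioned in the abstract: return the training answer whose question is most similar to the test question.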