HardEval: Focusing on Challenging Tokens to Assess Robustness of NER

Title: HardEval: Focusing on Challenging Tokens to Assess Robustness of NER
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Bernier-Colborne, G., and P. Langlais
Conference Name: Proceedings of The 12th Language Resources and Evaluation Conference
Publisher: European Language Resources Association
Place of Publication: Marseille, France
ISBN Number: 979-10-95546-34-4
Abstract: To assess the robustness of NER systems, we propose an evaluation method that focuses on subsets of tokens that represent specific sources of errors: unknown words and label shift or ambiguity. These subsets provide a system-agnostic basis for evaluating specific sources of NER errors and assessing room for improvement in terms of robustness. We analyze these subsets of challenging tokens in two widely-used NER benchmarks, then exploit them to evaluate NER systems in both in-domain and out-of-domain settings. Results show that these challenging tokens explain the majority of errors made by modern NER systems, although they represent only a small fraction of test tokens. They also indicate that label shift is harder to deal with than unknown words, and that there is much more room for improvement than the standard NER evaluation procedure would suggest. We hope this work will encourage NLP researchers to adopt rigorous and meaningful evaluation methods, and will help them develop more robust models.
URL: https://www.aclweb.org/anthology/2020.lrec-1.211
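
As a rough illustration of the two subsets of challenging tokens described in the abstract, the Python sketch below shows one plausible way to extract them from annotated data: test tokens never seen in training ("unknown words") and test tokens whose training-time labels differ from or are ambiguous with respect to the gold test label ("label shift or ambiguity"). The function name, data format, and exact membership criteria are assumptions made here for illustration; they are not taken from the paper.

# Minimal sketch (not the authors' code) of identifying the two subsets of
# challenging test tokens described in the abstract. Assumes train/test data
# as lists of (token, label) pairs; names and criteria are illustrative.

from collections import defaultdict

def challenging_tokens(train, test):
    """Split test token indices into 'unknown' and 'label shift' subsets."""
    # Collect the set of labels each token type received in training.
    train_labels = defaultdict(set)
    for token, label in train:
        train_labels[token.lower()].add(label)

    unknown, label_shift = [], []
    for i, (token, label) in enumerate(test):
        seen = train_labels.get(token.lower())
        if seen is None:
            # Token type never observed in training: unknown word.
            unknown.append(i)
        elif label not in seen or len(seen) > 1:
            # Token observed in training, but its gold label here differs
            # from, or is ambiguous among, its training labels: label shift.
            label_shift.append(i)
    return unknown, label_shift

# Usage with toy data: "Paris" is ambiguous in training (LOC vs. PER),
# while "HardEval" is unseen at training time.
train = [("Paris", "B-LOC"), ("Paris", "B-PER"), ("visited", "O")]
test = [("Paris", "B-LOC"), ("HardEval", "B-MISC"), ("visited", "O")]
print(challenging_tokens(train, test))  # -> ([1], [0])

Under this sketch, system errors on the two index lists can then be tallied separately from errors on the remaining tokens, which is the kind of source-specific error breakdown the abstract describes.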