Abstract
While rule-based detection of subject-verb agreement (SVA) errors is sensitive to syntactic parsing errors and to irregularities and exceptions to the main rules, neural sequential labelers tend to overfit their training data. We observe that rule-based error generation is less sensitive to syntactic parsing errors and irregularities than error detection, and we explore a simple yet efficient approach to getting the best of both worlds: we train neural sequential labelers on a combination of large volumes of silver-standard data, obtained through rule-based error generation, and gold-standard data. We show that this simple protocol leads to more robust detection of SVA errors on both in-domain and out-of-domain data, as well as in the context of other errors and long-distance dependencies; across four standard benchmarks, the induced model on average achieves a new state of the art.
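The core idea of rule-based error generation can be sketched as follows: starting from grammatical text, flip the number of a verb to induce an SVA error and record which token was corrupted, yielding token-level silver-standard labels. The flip table and label scheme below are illustrative assumptions for exposition, not the paper's actual rules (which operate over syntactic parses):

```python
# Minimal sketch of rule-based SVA error generation for silver-standard data.
# Assumption: a small flip table of agreement pairs; swapping one member for
# the other turns a grammatical sentence into one with an SVA error.
FLIPS = {
    "is": "are", "are": "is",
    "was": "were", "were": "was",
    "has": "have", "have": "has",
    "does": "do", "do": "does",
}

def corrupt(tokens, verb_idx):
    """Flip the number of the verb at verb_idx, returning the corrupted
    token sequence and per-token labels (1 = SVA error, 0 = correct)."""
    verb = tokens[verb_idx].lower()
    if verb not in FLIPS:
        # No rule applies: return the sentence unchanged, all tokens correct.
        return list(tokens), [0] * len(tokens)
    corrupted = list(tokens)
    corrupted[verb_idx] = FLIPS[verb]
    labels = [0] * len(tokens)
    labels[verb_idx] = 1
    return corrupted, labels

# A long-distance dependency: the head noun "dogs" is far from its verb.
sent = ["The", "dogs", "that", "chased", "the", "cat", "were", "tired"]
bad, labels = corrupt(sent, 6)
# bad[6] == "was"; labels mark position 6 as the induced error
```

Silver data produced this way can be mixed with gold-standard annotations to train the sequence labeler, which is the protocol the abstract describes.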
Original language | English
---|---
Title | Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Publisher | Association for Computational Linguistics
Publication date | 2019
Pages | 2418-2427
Status | Published - 2019
Event | 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - NAACL-HLT 2019 - Minneapolis, USA. Duration: 3 Jun 2019 → 7 Jun 2019
Conference
Conference | 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - NAACL-HLT 2019
---|---
Country/Territory | USA
City | Minneapolis
Period | 03/06/2019 → 07/06/2019