The Sensitivity of Annotator Bias to Task Definitions in Argument Mining

Publication: Contribution to book/anthology/report – Conference contribution in proceedings – Research – peer-reviewed

Standard

The Sensitivity of Annotator Bias to Task Definitions in Argument Mining. / Jakobsen, Terne Sasha Thorn; Barrett, Maria; Søgaard, Anders; Lassen, David Dreyer.

Proceedings of the 16th Linguistic Annotation Workshop, LAW 2022 - held in conjunction with the Language Resources and Evaluation Conference, LREC 2022 Workshop. ed. / Sameer Pradhan; Sandra Kubler. European Language Resources Association (ELRA), 2022. pp. 44-61.

Harvard

Jakobsen, TST, Barrett, M, Søgaard, A & Lassen, DD 2022, The Sensitivity of Annotator Bias to Task Definitions in Argument Mining. in S Pradhan & S Kubler (eds), Proceedings of the 16th Linguistic Annotation Workshop, LAW 2022 - held in conjunction with the Language Resources and Evaluation Conference, LREC 2022 Workshop. European Language Resources Association (ELRA), pp. 44-61, 16th Linguistic Annotation Workshop, LAW 2022, Marseille, France, 24/06/2022.

APA

Jakobsen, T. S. T., Barrett, M., Søgaard, A., & Lassen, D. D. (2022). The Sensitivity of Annotator Bias to Task Definitions in Argument Mining. In S. Pradhan & S. Kubler (Eds.), Proceedings of the 16th Linguistic Annotation Workshop, LAW 2022 - held in conjunction with the Language Resources and Evaluation Conference, LREC 2022 Workshop (pp. 44-61). European Language Resources Association (ELRA).

Vancouver

Jakobsen TST, Barrett M, Søgaard A, Lassen DD. The Sensitivity of Annotator Bias to Task Definitions in Argument Mining. In Pradhan S, Kubler S, editors, Proceedings of the 16th Linguistic Annotation Workshop, LAW 2022 - held in conjunction with the Language Resources and Evaluation Conference, LREC 2022 Workshop. European Language Resources Association (ELRA). 2022. p. 44-61

Author

Jakobsen, Terne Sasha Thorn ; Barrett, Maria ; Søgaard, Anders ; Lassen, David Dreyer. / The Sensitivity of Annotator Bias to Task Definitions in Argument Mining. Proceedings of the 16th Linguistic Annotation Workshop, LAW 2022 - held in conjunction with the Language Resources and Evaluation Conference, LREC 2022 Workshop. ed. / Sameer Pradhan ; Sandra Kubler. European Language Resources Association (ELRA), 2022. pp. 44-61

BibTeX

@inproceedings{5403015da4a447998fd74663a0993b8d,
title = "The Sensitivity of Annotator Bias to Task Definitions in Argument Mining",
abstract = "NLP models are dependent on the data they are trained on, including how this data is annotated. NLP research increasingly examines the social biases of models, but often in the light of their training data and specific social biases that can be identified in the text itself. In this paper, we present an annotation experiment that is the first to examine the extent to which social bias is sensitive to how data is annotated. We do so by collecting annotations of arguments in the same documents following four different guidelines and from four different demographic annotator backgrounds. We show that annotations exhibit widely different levels of group disparity depending on which guidelines annotators follow. The differences are not explained by task complexity, but rather by characteristics of these demographic groups, as previously identified by sociological studies. We release a dataset that is small in the number of instances but large in the number of annotations with demographic information, and our results encourage an increased awareness of annotator bias.",
keywords = "Annotation, argument mining, bias",
author = "Jakobsen, {Terne Sasha Thorn} and Maria Barrett and Anders S{\o}gaard and Lassen, {David Dreyer}",
note = "Funding Information: Many thanks to Anna Rogers and Carsten Eriksen for their insightful comments. Maria Barrett is supported by a research grant (34437) from VILLUM FONDEN. Publisher Copyright: {\textcopyright} 2022 European Language Resources Association (ELRA).; 16th Linguistic Annotation Workshop, LAW 2022 ; Conference date: 24-06-2022",
year = "2022",
language = "English",
pages = "44--61",
editor = "Sameer Pradhan and Sandra Kubler",
booktitle = "Proceedings of the 16th Linguistic Annotation Workshop, LAW 2022 - held in conjunction with the Language Resources and Evaluation Conference, LREC 2022 Workshop",
publisher = "European Language Resources Association (ELRA)",

}

RIS

TY - GEN

T1 - The Sensitivity of Annotator Bias to Task Definitions in Argument Mining

AU - Jakobsen, Terne Sasha Thorn

AU - Barrett, Maria

AU - Søgaard, Anders

AU - Lassen, David Dreyer

N1 - Funding Information: Many thanks to Anna Rogers and Carsten Eriksen for their insightful comments. Maria Barrett is supported by a research grant (34437) from VILLUM FONDEN. Publisher Copyright: © 2022 European Language Resources Association (ELRA).

PY - 2022

Y1 - 2022

N2 - NLP models are dependent on the data they are trained on, including how this data is annotated. NLP research increasingly examines the social biases of models, but often in the light of their training data and specific social biases that can be identified in the text itself. In this paper, we present an annotation experiment that is the first to examine the extent to which social bias is sensitive to how data is annotated. We do so by collecting annotations of arguments in the same documents following four different guidelines and from four different demographic annotator backgrounds. We show that annotations exhibit widely different levels of group disparity depending on which guidelines annotators follow. The differences are not explained by task complexity, but rather by characteristics of these demographic groups, as previously identified by sociological studies. We release a dataset that is small in the number of instances but large in the number of annotations with demographic information, and our results encourage an increased awareness of annotator bias.

AB - NLP models are dependent on the data they are trained on, including how this data is annotated. NLP research increasingly examines the social biases of models, but often in the light of their training data and specific social biases that can be identified in the text itself. In this paper, we present an annotation experiment that is the first to examine the extent to which social bias is sensitive to how data is annotated. We do so by collecting annotations of arguments in the same documents following four different guidelines and from four different demographic annotator backgrounds. We show that annotations exhibit widely different levels of group disparity depending on which guidelines annotators follow. The differences are not explained by task complexity, but rather by characteristics of these demographic groups, as previously identified by sociological studies. We release a dataset that is small in the number of instances but large in the number of annotations with demographic information, and our results encourage an increased awareness of annotator bias.

KW - Annotation

KW - argument mining

KW - bias

UR - http://www.scopus.com/inward/record.url?scp=85146041801&partnerID=8YFLogxK

M3 - Article in proceedings

AN - SCOPUS:85146041801

SP - 44

EP - 61

BT - Proceedings of the 16th Linguistic Annotation Workshop, LAW 2022 - held in conjunction with the Language Resources and Evaluation Conference, LREC 2022 Workshop

A2 - Pradhan, Sameer

A2 - Kubler, Sandra

PB - European Language Resources Association (ELRA)

T2 - 16th Linguistic Annotation Workshop, LAW 2022

Y2 - 24 June 2022

ER -
