Quantifying and Predicting Disagreement in Graded Human Ratings
arXiv:2605.01168v1 Announce Type: new Abstract: It is increasingly recognized that human annotators do not always agree, and such disagreement is inherent in many annotation tasks. However, not all instances in a given task elicit the same degree of opinion divergence. In this paper, we investigate annotation variation patterns in graded human ratings of inappropriate language, including offensive language, hate speech, and toxic language perception. We examine whether the degree of annotation...
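The abstract is truncated, so the paper's actual measures are not shown here. As a rough, hypothetical illustration of what quantifying per-instance disagreement in graded ratings could look like (not the authors' method), the sketch below computes two common divergence measures, variance and normalized entropy, over one instance's ordinal ratings; the rating scale and function name are assumptions.

```python
# Minimal sketch (assumed, not the paper's method): per-instance disagreement
# over graded annotator ratings, e.g. a 0-4 offensiveness scale.
import numpy as np

def disagreement_scores(ratings, num_levels=5):
    """Return (variance, normalized entropy) for one instance's ratings.

    ratings: integer ratings from multiple annotators in 0..num_levels-1.
    Both values grow as annotator opinions diverge more.
    """
    r = np.asarray(ratings, dtype=float)
    variance = r.var()                               # spread of the graded ratings
    counts = np.bincount(ratings, minlength=num_levels)
    p = counts / counts.sum()                        # empirical rating distribution
    p = p[p > 0]
    entropy = -(p * np.log(p)).sum() / np.log(num_levels)  # normalized to [0, 1]
    return variance, entropy

# Example: one instance rated by five annotators on a 0-4 scale.
print(disagreement_scores([0, 1, 1, 4, 3]))  # high divergence
print(disagreement_scores([2, 2, 2, 2, 3]))  # low divergence
```

Variance is sensitive to how far apart the graded ratings are, while normalized entropy only reflects how evenly annotators split across levels; a study of disagreement patterns might report either or both, depending on whether the ordinal distance between labels matters.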