Definition: Inter-Rater Agreement

Calculating percent agreement is a simple procedure when the values are limited to zero and one and there are only two data collectors. If there are more data collectors, the procedure is slightly more complex (Table 2). As long as the values are limited to two, however, the calculation remains simple: the researcher computes the percentage agreement for each row and then averages across the rows. Another advantage of the matrix is that it allows the researcher to determine whether errors are accidental, and therefore fairly evenly distributed across all rows and variables, or whether a particular data collector frequently records values that differ from those of the other data collectors. In Table 2, which shows an overall interrater reliability of 90%, no data collector had an excessive number of outlier ratings (scores that did not agree with the majority of the raters' scores). Another advantage of this technique is that it allows the researcher to identify variables that may be problematic. Note that in Table 2 the raters reached only 60% agreement on variable 10. This variable may warrant review to determine the cause of such low agreement in its ratings.
Citation: O'Neill TA (2017) An Overview of Interrater Agreement on Likert Scales for Researchers and Practitioners. Front. Psychol. 8:777. doi: 10.3389/fpsyg.2017.00777
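
To make the row-by-row calculation concrete, here is a minimal Python sketch over a small, invented ratings matrix (rows are variables, columns are data collectors, values limited to 0 and 1). Table 2's exact convention for more than two collectors is not shown in the text; this sketch assumes the common pairwise convention, taking each row's agreement as the proportion of collector pairs that assigned the same value.

    from itertools import combinations

    # Invented ratings matrix: rows are variables (items), columns are data collectors.
    # Values are limited to 0/1, matching the binary case described above.
    ratings = [
        [1, 1, 1],   # variable 1: all three collectors agree
        [1, 0, 1],   # variable 2: the second collector disagrees
        [0, 0, 0],   # variable 3: all three collectors agree
    ]

    def row_percent_agreement(row):
        # Proportion of collector pairs that assigned the same value to this variable.
        pairs = list(combinations(row, 2))
        return sum(a == b for a, b in pairs) / len(pairs)

    per_variable = [row_percent_agreement(row) for row in ratings]
    overall = sum(per_variable) / len(per_variable)

    for i, pa in enumerate(per_variable, start=1):
        print(f"variable {i}: {pa:.0%} agreement")   # low values flag problematic variables
    print(f"overall percent agreement: {overall:.0%}")

Averaging the per-variable values gives the overall percent agreement, and scanning the per-variable values identifies items, like variable 10 in Table 2, whose low agreement may warrant review.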

Kappa statistics are often used to test interrater reliability. The importance of interrater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. The measurement of the extent to which data collectors assign the same score to the same variable is called interrater reliability. Although there are many methods for measuring interrater reliability, it has traditionally been measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen criticized the use of percent agreement because of its inability to account for chance agreement. He introduced Cohen's kappa, which was designed to account for the possibility that raters, when uncertain, actually guess on at least some variables. Like most correlation statistics, kappa can range from -1 to 1. While kappa is one of the most commonly used statistics for testing interrater reliability, it has limitations.
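
Cohen's kappa compares the observed agreement p_o with the agreement p_e expected by chance from each rater's marginal category proportions: kappa = (p_o - p_e) / (1 - p_e). The following is a minimal two-rater sketch in Python, using invented ratings purely for illustration.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        # Cohen's kappa for two raters scoring the same items (nominal categories).
        n = len(rater_a)

        # Observed agreement: proportion of items on which the two raters agree.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

        # Chance agreement: computed from each rater's marginal category proportions.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        categories = set(counts_a) | set(counts_b)
        p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)

        return (p_o - p_e) / (1 - p_e)

    # Invented example: two raters scoring ten items as 0 or 1.
    a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
    print(round(cohens_kappa(a, b), 3))   # kappa lies between -1 and 1

Here the two raters agree on 8 of 10 items (p_o = 0.8), but because chance alone would produce p_e = 0.52 agreement given these marginals, kappa comes out lower, at about 0.58, which is exactly the correction for chance that percent agreement lacks.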