The results showed marked differences between the kappa and AC1 coefficients for several variables. When a variable had a skewed trait distribution, kappa was considered artificially low, and the reliability of those variables was judged on the basis of AC1 and the observed agreement. In each such case, the skewed trait distribution explained the difference between kappa and AC1, so strong agreement between the raters could be confirmed. Note that a skewed trait distribution means that the agreement being tested concerns one category of a variable far more often than the other categories. Kappa attains its theoretical maximum of 1 only if the two observers distribute their codes identically, i.e., if the corresponding row and column marginal totals are equal; any other marginal pattern yields less than perfect agreement. Nevertheless, the maximum value that kappa can reach under unequal marginal distributions makes it possible to interpret the kappa value actually obtained. The equation for the maximum attainable kappa is

κ_max = (p_max − p_e) / (1 − p_e),   with   p_max = Σ_k min(p_k·, p_·k),

where p_e is the chance agreement and p_k· and p_·k are the row and column marginal proportions for category k.

Agreement was considered almost perfect for all perioperative haemostasis variables (Table 5). None of the raters recorded that ligature was used. Some variables suffered from a skewed trait distribution.

Computing percent agreement is a simple procedure when the scores consist only of zeros and ones and there are two data collectors; with more data collectors, the procedure is a little more complex (Table 2).
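As a brief illustration of the maximum-kappa idea described above, the following sketch computes Cohen's kappa and its maximum attainable value from a 2×2 contingency table. The cell counts are hypothetical, chosen only to show how unequal marginals cap kappa below 1; they are not data from the study.

```python
import numpy as np

def kappa_and_max(table):
    """Cohen's kappa and the maximum kappa attainable given the
    observed marginal distributions of a k x k contingency table."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_obs = np.trace(t) / n              # observed agreement
    row = t.sum(axis=1) / n              # rater 1 marginal proportions
    col = t.sum(axis=0) / n              # rater 2 marginal proportions
    p_exp = (row * col).sum()            # chance (expected) agreement
    p_max = np.minimum(row, col).sum()   # best agreement the marginals allow
    kappa = (p_obs - p_exp) / (1 - p_exp)
    kappa_max = (p_max - p_exp) / (1 - p_exp)
    return kappa, kappa_max

# Hypothetical skewed table: 85 joint "absent", 8 joint "present",
# with slightly unequal marginals (90/10 vs 87/13).
k, k_max = kappa_and_max([[85, 5], [2, 8]])
```

Because the two raters' marginal totals differ, kappa_max here falls below 1, which is exactly the situation in which the observed kappa should be read against its attainable maximum rather than against 1.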

However, as long as the scores are limited to only two values, the calculation remains simple: the researcher computes the percentage of agreement for each row and then averages across the rows. Another advantage of the matrix is that it lets the researcher determine whether errors are random, and therefore fairly evenly distributed among all raters and variables, or whether a particular data collector frequently records values different from those of the other data collectors. Table 2, which shows an overall agreement of 90%, indicates that no data collector had an excessive number of outliers (values that did not match the results of the majority of raters). A further advantage of this technique is that it allows the researcher to identify variables that may be problematic. Note that Table 2 shows the raters achieved only 60% agreement for variable 10; this variable may warrant examination to identify the cause of such weak agreement in its assessment.

All variables in the study are dummy (binary) variables. Interrater agreement is presented as observed agreement, Cohen's kappa, and Gwet's AC1 coefficients with 95% confidence intervals [15, 18, 19]. Suppose, for example, that you are analyzing data on a group of 50 people applying for a grant.
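The row-averaging procedure described above can be sketched as follows. This is a minimal illustration, assuming binary 0/1 scores and that per-row agreement is measured as the fraction of raters matching the majority score; the variable names and ratings are invented for the example, not taken from Table 2.

```python
# Hypothetical ratings: each variable (row) holds the 0/1 score
# assigned by each of four data collectors.
scores = {
    "var1": [1, 1, 1, 1],
    "var2": [0, 0, 1, 0],   # one rater disagrees with the majority
    "var3": [1, 1, 1, 1],
}

def row_agreement(values):
    """Fraction of raters giving the majority (modal) score for one row."""
    majority = max(set(values), key=values.count)
    return values.count(majority) / len(values)

rows = [row_agreement(v) for v in scores.values()]  # per-variable agreement
overall = sum(rows) / len(rows)                     # average across the rows
```

Printing the per-row values also reproduces the diagnostic use of the matrix: a single low row flags a problematic variable, while one rater repeatedly breaking the majority would flag an outlying data collector.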