
The results showed significant differences between the kappa and AC1 coefficients for several variables. Where a variable had a skewed trait distribution, kappa was considered artificially low, and the reliability of that variable was judged on the basis of AC1 and the observed agreement. In every such case, the skewed trait distribution explained the difference between kappa and AC1, so strong agreement between the raters could still be confirmed. A skewed trait distribution means that the agreement being tested concerns one category of a variable far more often than the other categories. Kappa attains its theoretical maximum of 1 only if both observers distribute their codes equally, i.e. if the corresponding row and column totals are identical; otherwise the attainable kappa is less than 1 even under perfect observed agreement. Nevertheless, the maximum value that kappa could reach given the unequal marginal distributions makes it possible to interpret the kappa value that was actually obtained. The equation for maximum kappa is given below [16].

Agreement was considered almost perfect for all perioperative haemostasis variables (Table 5). None of the raters reported that ligature was used, and some variables had a skewed trait distribution. Calculating agreement is a simple procedure when the values consist only of zeros and ones and there are two data collectors; with more data collectors the procedure is somewhat more complex (Table 2).
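For reference, the maximum attainable kappa referred to above is conventionally computed from the marginal proportions of the two raters. The following is the standard formulation; the exact notation used in reference [16] may differ:

```latex
\kappa_{\max} = \frac{p_{o,\max} - p_e}{1 - p_e},
\qquad
p_{o,\max} = \sum_{i}\min\!\left(p_{i+},\, p_{+i}\right),
\qquad
p_e = \sum_{i} p_{i+}\,p_{+i}
```

where \(p_{i+}\) and \(p_{+i}\) are the row and column marginal proportions for category \(i\). Comparing the obtained kappa with \(\kappa_{\max}\) indicates how much of the shortfall from 1 is due to the unequal marginal distributions rather than to actual disagreement.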

However, as long as the scores are limited to only two values, the calculation remains simple: the researcher calculates the percentage of agreement for each row and then averages across the rows (a short computational sketch of this procedure is given after this paragraph). Another advantage of the matrix is that the researcher can determine whether errors are random, and therefore fairly evenly distributed across all raters and variables, or whether a particular data collector frequently records values that differ from those of the other data collectors. Table 2, which shows an overall agreement of 90%, indicates that no data collector had an excessive number of outlying values (values that did not match those of the majority of raters). A further advantage of this technique is that it allows the researcher to identify variables that may be problematic. Note that Table 2 shows that the raters reached only 60% agreement for variable 10; that variable may warrant examination to identify the cause of such weak agreement in its assessment. All variables in the study are dichotomous (dummy) variables, and inter-rater agreement is reported as observed agreement, Cohen's kappa, and Gwet's AC1 coefficients with 95% confidence intervals [15, 18, 19].
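As a rough illustration of the row-averaging idea described above, here is a minimal Python sketch. The data are hypothetical (not the study's Table 2), and per-row agreement is computed here as the share of raters who chose the modal code for that variable, which is one common convention; pairwise agreement between raters is another option.

```python
import numpy as np

# Hypothetical 0/1 codes: rows = variables, columns = data collectors.
# These numbers are made up for illustration only.
ratings = np.array([
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 0, 0],
])

def row_agreement(row):
    """Share of raters who assigned the modal (majority) code for one variable."""
    _, counts = np.unique(row, return_counts=True)
    return counts.max() / row.size

per_variable = np.array([row_agreement(r) for r in ratings])
overall = per_variable.mean()  # average over the rows

print("per-variable agreement:", np.round(per_variable, 2))
print("overall agreement:", round(float(overall), 2))
```

Rows with noticeably lower values, like variable 10 in the article's Table 2, stand out immediately as candidates for review.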

Suppose you are analyzing data on a group of 50 people applying for a grant. Each application was read by two readers, and each reader said either "yes" or "no" to the proposal. Assume the counts of agreements and disagreements are arranged as follows, with A and B denoting the readers: the counts on the main diagonal of the matrix (a and d) are the agreements, and the off-diagonal counts (b and c) are the disagreements (a computational sketch for this 2×2 layout is given after this paragraph). Cohen's kappa statistic measures inter-rater reliability (sometimes called interobserver agreement). Inter-rater reliability exists when your raters (or data collectors) assign the same score to the same data item, and reliability is an important part of any research study. Statistics Solutions' kappa calculator evaluates inter-rater reliability between two raters on a single target: you enter the frequencies of agreements and disagreements between the raters, and the calculator computes your kappa coefficient, along with reference values that help you qualitatively assess the degree of agreement. The study population consists of the first 137 patients who underwent tonsil surgery and were included in the NTSR at St. Olav University Hospital in Trondheim …
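For the two-reader yes/no layout described above, Cohen's kappa can be computed directly from the four cell counts a, b, c and d. The following is a minimal sketch using made-up counts for 50 proposals, not the actual grant data:

```python
def cohen_kappa_2x2(a, b, c, d):
    """Cohen's kappa for two raters and two categories.
    a = both readers say yes, d = both say no (main diagonal);
    b and c are the off-diagonal disagreement counts."""
    n = a + b + c + d
    p_o = (a + d) / n                      # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)  # chance agreement on "yes"
    p_no = ((c + d) / n) * ((b + d) / n)   # chance agreement on "no"
    p_e = p_yes + p_no                     # expected agreement by chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical counts for 50 proposals (illustration only):
# 20 joint "yes", 15 joint "no", 15 split decisions.
print(round(cohen_kappa_2x2(a=20, b=5, c=10, d=15), 3))  # -> 0.4
```

In this made-up example the observed agreement is 70% while chance agreement is 50%, so kappa is 0.4, i.e. only moderate agreement despite a fairly high raw percentage.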
