Conducting comparisons of the reliability of coder choices in CAT
Inter-coder reliability analysis can be done by choosing either "Standard Comparisons" or "Code by Code Comparisons"
from the Analysis drop-down menu. For both a Standard Comparison and a Code by Code Comparison:
Select the dataset (Note: you must have "locked" a raw dataset, or else uploaded an ATLAS.ti output file as a coded dataset, for datasets to be available on this drop-down list).
Select from the Available Coders and "Add" them to the Chosen Coders.
Select from the Available Codes and "Add" them to the Chosen Codes.
For a Standard Comparison, you will further be asked to select a method of comparison: either Fleiss' Kappa or Krippendorff's Alpha.
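CAT computes these statistics for you; as a rough illustration of what Fleiss' Kappa measures, here is a minimal, self-contained sketch (not CAT's own code) that computes it from a table of per-item category counts. The function name, the two-code setup, and the small example table are assumptions for illustration only. Krippendorff's Alpha rests on a similar observed-versus-expected comparison but also accommodates missing ratings and different levels of measurement.

# Illustrative only: NOT CAT's implementation. Fleiss' Kappa from a table
# where counts[i][j] = number of coders who assigned code j to item i,
# assuming every item was rated by the same number of coders.

def fleiss_kappa(counts):
    n_items = len(counts)
    n_raters = sum(counts[0])          # coders per item
    n_codes = len(counts[0])

    # Overall proportion of assignments falling into each code
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters)
           for j in range(n_codes)]

    # Per-item agreement: share of coder pairs that chose the same code
    P_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]

    P_bar = sum(P_i) / n_items         # observed agreement
    P_e = sum(p * p for p in p_j)      # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 4 items, 3 coders, 2 codes ("applies" / "does not apply")
print(round(fleiss_kappa([[3, 0], [2, 1], [0, 3], [1, 2]]), 3))  # 0.333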
There is also a box you can check to suppress overlaps when none exist; there are no overlaps in CAT-coded data, but ATLAS.ti users can generate overlapping spans of text.
For a Code by Code Comparison, you will further be asked to pick from the Show Comparisons drop-down menu:
Exact matches,
Overlaps, or
Mismatches
You will also pick Code, Coder, or Quotation from the Sort/Collate By drop-down menu. Once you have
made your selections, select Run Comparison and view the table with your results.
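To make the Show Comparisons options concrete, here is a minimal, hypothetical sketch of how two coders' selections for the same code could be classified as an exact match, an overlap, or a mismatch. The classify function, the character-offset representation of spans, and the example offsets are assumptions for illustration, not CAT's or ATLAS.ti's actual data model.

# Hypothetical illustration: classify two coders' spans for the same code.
def classify(span_a, span_b):
    """Each span is a (start, end) pair of character offsets for a coded passage."""
    a_start, a_end = span_a
    b_start, b_end = span_b
    if span_a == span_b:
        return "exact match"          # both coders selected the same passage
    if a_start < b_end and b_start < a_end:
        return "overlap"              # the passages share at least some text
    return "mismatch"                 # the passages do not intersect at all

print(classify((10, 25), (10, 25)))   # exact match
print(classify((10, 25), (20, 40)))   # overlap
print(classify((10, 25), (30, 40)))   # mismatch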
You have the option of downloading the result as a Rich Text Format (.rtf) document.