Modified on 2010/10/20 12:34 by Mark J. Hoy — Categorized as: Uncategorized


Conducting comparisons of the reliability of coder choices in CAT

Inter-coder reliability analysis can be run by choosing either “Standard Comparisons” or “Code by Code Comparisons” from the Analysis drop-down menu. For both a Standard Comparison and a Code by Code Comparison:

  1. Select the dataset. (Note: you must have “locked” a raw dataset, or uploaded an ATLAS.ti output file as a coded dataset, for datasets to appear in this drop-down list.)
  2. Select from the Available Coders and “Add” them to the Chosen Coders.
  3. Select from the Available Codes and “Add” them to the Chosen Codes.

For a Standard Comparison, you will be further asked to select a method of comparison, either Fleiss’ Kappa or Krippendorff’s Alpha. There is also a checkbox to suppress overlaps. There are no overlaps in CAT-coded data; ATLAS.ti users, however, can generate overlapping spans of text.

For a Code by Code Comparison, you will be further asked to pick from the Show Comparisons drop-down menu:

  1. Exact matches,
  2. Overlaps, or
  3. Mismatches

You will also pick Code, Coder, or Quotation from the Sort/Collate By drop-down menu. Once you have made your selections, select Run Comparison and view the table with your results. You have the option of downloading the results as a Rich Text File (.rtf) document.
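As background on the statistics offered for a Standard Comparison, the sketch below shows how Fleiss’ kappa is computed from a table of rating counts. This is an illustrative, minimal implementation of the standard formula, not CAT’s own code; the example ratings are made up for demonstration.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a table of rating counts.

    counts: one row per coded unit (e.g. a quotation), one column per
    code; each cell is the number of coders who assigned that code to
    that unit. Every row must sum to the same number of coders n.
    """
    N = len(counts)                      # number of units
    n = sum(counts[0])                   # coders per unit
    k = len(counts[0])                   # number of codes
    total = N * n                        # total ratings

    # p_j: proportion of all ratings that fell in code j
    p = [sum(row[j] for row in counts) / total for j in range(k)]

    # P_i: observed agreement within unit i
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]

    P_bar = sum(P) / N                   # mean observed agreement
    P_e = sum(pj * pj for pj in p)       # expected chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Three coders rating five quotations against two codes
# (hypothetical data):
ratings = [[3, 0], [0, 3], [2, 1], [3, 0], [1, 2]]
print(round(fleiss_kappa(ratings), 3))
```

Kappa values near 1 indicate strong agreement beyond chance, values near 0 indicate agreement at chance level, and negative values indicate systematic disagreement; the hypothetical table above yields moderate agreement.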