
Coding Analysis Toolkit Help Wiki


Home | Overview | Getting Started | Creating Sub-Accounts | Prepare Data | Prepare Codelist | Loading Data | Coding Styles | Assigning Coders | Coding | Memos | Comparisons | Adjudication | Reports | Ideas for CAT Improvements | CAT Help Wiki ToDo List |

Conducting comparisons of the reliability of coder choices in CAT

Inter-coder reliability analysis can be run by choosing either “Standard Comparisons” or “Code by Code Comparisons” from the Analysis drop-down menu. For both a Standard Comparison and a Code by Code Comparison:

  1. Select the dataset. (Note: a dataset appears in this drop-down list only if you have “locked” a raw dataset or uploaded an ATLAS.ti output file as a coded dataset.)
  2. Select from the Available Coders and “Add” them to the Chosen Coders.
  3. Select from the Available Codes and “Add” them to the Chosen Codes.

For a Standard Comparison, you will also be asked to select a method of comparison: Fleiss’ Kappa or Krippendorff’s Alpha. There is also a box to check if you want to suppress overlaps when none exist; data coded in CAT contain no overlaps, but ATLAS.ti users can generate overlapping spans of text.

For a Code by Code Comparison, you will instead be asked to pick from the Show Comparisons drop-down menu:

  1. Exact matches,
  2. Overlaps, or
  3. Mismatches
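The three comparison types above can be illustrated with a small sketch. The `(start, end)` span representation and the `classify` function below are hypothetical illustrations, not CAT's internals:

```python
# Hypothetical sketch (not CAT's implementation) of classifying two coders'
# selections for the same code. Each span is a (start, end) pair of
# character offsets into the source text.

def classify(span_a, span_b):
    if span_a == span_b:
        return "exact match"   # identical start and end offsets
    if span_a[0] < span_b[1] and span_b[0] < span_a[1]:
        return "overlap"       # the spans share at least one character
    return "mismatch"          # the spans share no text at all

print(classify((0, 10), (0, 10)))   # exact match
print(classify((0, 10), (5, 15)))   # overlap
print(classify((0, 10), (20, 30)))  # mismatch
```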

You will also pick Code, Coder, or Quotation from the Sort/Collate By drop-down menu. Once you have made your selections, click Run Comparison and view the table with your results. You have the option of downloading the results as a Rich Text Format (.rtf) document.
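As a rough illustration of what a Standard Comparison measures, here is a minimal sketch of the standard Fleiss' Kappa formula, which estimates agreement among multiple coders beyond chance. This is the textbook calculation, not CAT's own code, and the example data are invented:

```python
# Minimal sketch of Fleiss' Kappa (textbook formula, not CAT's code).

def fleiss_kappa(counts):
    """counts[i][j] = number of coders who assigned category j to item i.
    Every row must sum to the same number of coders."""
    n_items = len(counts)
    n_coders = sum(counts[0])
    n_ratings = n_items * n_coders

    # Per-item agreement: proportion of coder pairs that agree on the item.
    p_items = [
        (sum(c * c for c in row) - n_coders) / (n_coders * (n_coders - 1))
        for row in counts
    ]
    p_bar = sum(p_items) / n_items

    # Chance agreement, from each category's overall share of ratings.
    shares = [sum(row[j] for row in counts) / n_ratings
              for j in range(len(counts[0]))]
    p_exp = sum(s * s for s in shares)

    return (p_bar - p_exp) / (1 - p_exp)

# Invented example: two coders, four excerpts, two codes.
# They agree on three excerpts and disagree on one.
counts = [[2, 0], [2, 0], [0, 2], [1, 1]]
print(round(fleiss_kappa(counts), 4))  # 0.4667
```

A kappa of 1.0 means perfect agreement, 0 means agreement no better than chance; the single disagreement above pulls the score well below 1.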

© 2007 - 2010 Qualitative Data Analysis Program labs (QDAP), in the University Center for Social and Urban Research, at the University of Pittsburgh, and QDAP-UMass, in the College of Social and Behavioral Sciences, at the University of Massachusetts Amherst. As of 2010, CAT and this CAT Help Wiki are maintained and improved by personnel from Texifter, LLC, which is a software start-up located in North Amherst & Springfield, MA and online at http://texifter.com/.

Content on this website was made possible with the following grants from the National Science Foundation: III-0705566 “Collaborative Research III-COR: From a Pile of Documents to a Collection of Information: A Framework for Multi-Dimensional Text Analysis” and IIS-0429293 “Collaborative Research: Language Processing Technology for Electronic Rulemaking.” We are also grateful for financial support from the U.S. Environmental Protection Agency and the U.S. Fish & Wildlife Service. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the National Science Foundation.
