Coding Analysis Toolkit Help Wiki


Adjudicating coder decisions in CAT using the Validation module

Adjudication is the process of judging coder choices. It involves looking at codeable units one at a time and assessing the validity of each specific coding choice. To launch the module, select “Validate Dataset” under the “Validation” drop-down menu. On the next page:

  1. Choose the dataset to validate.
  2. Select “Add” or “Add All” to choose the codes to include.
  3. Select “Continue” to begin. If the dataset has already been partially validated, the session resumes where it left off (a rough sketch of this idea follows the list).
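
As a rough, hypothetical sketch only (not CAT's published implementation), resuming a partially validated dataset amounts to locating the first annotation that has no adjudication recorded for the selected codes. The Annotation and ValidationSession names below are illustrative, not part of CAT:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Annotation:
        file_name: str
        paragraph: int
        code: str
        coder: str
        validated: Optional[bool] = None   # None = not yet adjudicated

    @dataclass
    class ValidationSession:
        dataset: str
        selected_codes: set
        annotations: list = field(default_factory=list)

        def next_unvalidated(self):
            """Return the first un-adjudicated annotation for the selected
            codes, so a resumed session picks up where it left off."""
            for ann in self.annotations:
                if ann.code in self.selected_codes and ann.validated is None:
                    return ann
            return None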

You can exit the validation at any time by clicking on the navigation links to the left. The next page will contain the following items:

  • Change code filter option
  • Code(s)
  • Paragraph being adjudicated

For each codeable unit, the next un-validated coder annotation is displayed with the code currently being adjudicated in bold red letters. Click the “Valid” button if the annotation is correct or the “Not Valid” button if it is not. Alternatively, press “1” or “Y” if the annotation is valid, or “3” or “N” if it is not (a rough sketch of this decision logic follows the list below). The adjudication module will also display:

  • The number of paragraphs left to be validated
  • The codes chosen by which coders
  • The file name and paragraph numbers
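
Building on the hypothetical ValidationSession sketch above, the decision step boils down to mapping the accepted keys (“1”/“Y” for valid, “3”/“N” for not valid) onto a verdict, recording it against the current annotation, and reporting how many paragraphs remain. Again, this illustrates the idea rather than CAT's own code:

    VALID_KEYS = {"1", "y"}
    NOT_VALID_KEYS = {"3", "n"}

    def adjudicate(session, key):
        """Apply one keypress to the current annotation; return False if the
        key is not a recognized shortcut or nothing is left to validate."""
        ann = session.next_unvalidated()
        if ann is None:
            return False
        key = key.lower()
        if key in VALID_KEYS:
            ann.validated = True
        elif key in NOT_VALID_KEYS:
            ann.validated = False
        else:
            return False
        return True

    def remaining(session):
        """Number of annotations still to be validated (the count shown on the page)."""
        return sum(1 for a in session.annotations
                   if a.code in session.selected_codes and a.validated is None)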

© 2007 - 2010 Qualitative Data Analysis Program labs (QDAP), in the University Center for Social and Urban Research at the University of Pittsburgh, and QDAP-UMass, in the College of Social and Behavioral Sciences at the University of Massachusetts Amherst. As of 2010, CAT and this CAT Help Wiki are maintained and improved by personnel from Texifter, LLC, a software start-up located in North Amherst and Springfield, MA, and online at http://texifter.com/.

Content on this website was made possible with the following grants from the National Science Foundation: III-0705566 “Collaborative Research III-COR: From a Pile of Documents to a Collection of Information: A Framework for Multi-Dimensional Text Analysis” and IIS-0429293 “Collaborative Research: Language Processing Technology for Electronic Rulemaking.” We are also grateful for financial support from the U.S. Environmental Protection Agency and the U.S. Fish & Wildlife Service. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the National Science Foundation.
