Item reliability is an important issue when selecting risk-adjusters. The testing of OASIS items by the team that developed OBQI at the University of Colorado is an important source of information on reliability. In addition, inter-rater reliability of the full range of OASIS items has been examined by the Center for Home Care Policy and Research of the Visiting Nurse Service of New York (Kinatukara, Rosati and Huang, 2005), and selected items have been examined by Madigan and Fortinsky (2000).
There is considerable variation among OASIS items in their inter-rater reliability as measured by percent agreement and Cohen's kappa (a measure of agreement that adjusts for the extent to which the observed agreement is due to chance). This is particularly true when reliability statistics are reported for specific categories of multi-category items rather than averaged over all categories. The results of these analyses can be used to identify potential risk-adjusters that are more (or less) reliable than others, as well as content areas within domains that are more (or less) reliable than others.
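To make the kappa statistic concrete, the following is an illustrative sketch (not drawn from the studies cited above) of how Cohen's kappa corrects raw percent agreement for chance. The rating data are hypothetical; the function implements the standard formula kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is the agreement expected from each rater's marginal category frequencies.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items."""
    n = len(rater_a)
    # Observed agreement: fraction of items on which the two raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal category rates.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of ten items on a three-category OASIS-style item.
rater_a = [0, 0, 1, 1, 2, 2, 0, 1, 2, 0]
rater_b = [0, 0, 1, 2, 2, 2, 0, 1, 1, 0]
print(round(cohens_kappa(rater_a, rater_b), 3))  # 0.697 (vs. 0.8 raw agreement)
```

In this example the raters agree on 8 of 10 items (80 percent agreement), but because roughly a third of that agreement would be expected by chance alone, kappa is lower, about 0.70. This gap is why percent agreement and kappa can rank the same items differently.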