Performance Improvement 2003. Center for Practice and Technology Assessment

01/01/2003

Evidence Report/Technology Assessment: Criteria for Determining Disability in Speech-Language Disorders

The study included a systematic review of the literature addressing two key questions about evaluating and diagnosing speech and language disorders in adults and children, questions of concern to the Social Security Administration in making disability eligibility determinations: (1) What instruments have demonstrated reliability, validity, and normative data? (2) Do these instruments have predictive validity for an individual's communicative impairment and performance? Approximately 42 million Americans have some type of communication disorder, which annually costs the nation $30 billion to $154 billion in lost productivity, special education, and medical care. The quality of the numerous evaluation procedures and instruments used in clinical decision-making about language, speech, or voice disorders influences decisions about access to services and funding (e.g., special education services, Social Security disability income). The study found the following: (1) Reliability and validity data for the majority of instruments rarely came from peer-reviewed literature; instrument manuals yielded most such data. (2) Some manuals provided comprehensive data from well-conducted standardization studies; most did not. (3) Because normative data were usually not derived from nationally representative samples, generalizing results beyond the populations studied was difficult. (4) Sample size and representativeness problems limited the predictive validity studies. (5) Evidence about the diagnostic or predictive properties of instruments addressing language, speech, and voice disorders is currently weak and incomplete. The sparse evidence base suggested both an outline of and the need for a substantial methodological, clinical, and policymaking research agenda.

FEDERAL CONTACT: Kevin Murray, 301-427-1853 PIC ID: 7688

PERFORMER: Research Triangle Institute, Research Triangle Park, NC


Systems to Rate Strength of Scientific Evidence

As a key part of its strategy for meeting its legislative mandate, AHRQ undertook a systematic review and analysis of methods for rating the quality of scientific studies. The Healthcare Research and Quality Act requires that AHRQ, in collaboration with experts from the public and private sectors, identify methods or systems to assess health care research results, particularly “methods or systems to rate the strength of the scientific evidence underlying health care practice, recommendations in the research literature, and technology assessments.” The University of North Carolina’s Evidence-based Practice Center identified 19 generic systems that fully addressed its key quality domains and seven systems that fully addressed its three domains for grading the strength of a body of evidence. The report also provides a research agenda, including questions relevant to assessing the quality of scientific evidence.

FEDERAL CONTACT: David Introcaso, 301-427-1213 PIC ID: 7676

PERFORMER: Research Triangle Institute, Research Triangle Park, NC
