Performance Improvement 1996. Appendix C. Senior Editorial Advisors' Review Criteria for Assessing Program Evaluations

02/01/1996

The study addresses a significant issue of policy relevance; evaluation findings are likely to be useful.

Conceptual Criteria

Conceptual Foundations

A literature review is included; the project is shown to be logically based on previous findings; the report uses theory, models, or both; the program assumptions are stated; the evaluation draws from previous evaluations (if any); the report is linked with a program and describes that program; the report presents multiple perspectives; multiple relevant stakeholders are consulted and involved; and the timing is appropriate in that the program is ready for evaluation.

Questions for Evaluation

The aims of the evaluation are clear, well-specified, and testable; the questions are feasible, significant, linked to the program, and appropriate with respect to resources and audience, and they derive logically from the conceptual foundations. The questions show ingenuity and creativity.

Findings and Interpretation

The conclusions are justified by the analyses; the summary does not go beyond what the data will support; appropriate qualifiers are stated; the conclusions fit the entire analysis; equivocal findings are handled appropriately; the initial questions are answered; the interpretation ties in with the conceptual foundations; the report notes whether the findings are consistent with or deviate from the relevant literature; the presentation is understandable; the results have practical significance; and the extent of program implementation is assessed.

Recommendations

The recommendations follow from the findings; they are worth carrying out and are affordable, timely, feasible, useful, and appropriate; they are shown to be relevant to the questions asked; and the breadth or specificity of the recommendations is addressed. Any recommendations for future evaluations, for program improvements, or for both are clearly presented.

Methods

Evaluation Design

Design considerations include overall appropriateness; soundness; feasibility; funding and time constraints; generalizability; applicability to culturally diverse populations; assessment of the extent of program delivery; validity; feasibility of data collection; reliability of the selected measurements; use of multiple measures of key concepts; and appropriateness of the sample. In addition, the variables are clearly specified and fit with the questions and concepts, and the design permits both measurement of the extent of program implementation and answering of the evaluation questions.

Data Collection

Data are collected using appropriate units of measurement for analysis, controls for bias in participant selection and assignment, and proper handling of missing data and attrition. Other considerations include use of an appropriate comparison or control group; adequate sample size, response rate, and information about the sample; a data collection plan; data collection that is faithful to the plan; attention to and cooperation with the relevant community; protection of confidentiality; and consistency in data collection. The quality of the data (including the quality of any extant data sets used in the study) and the efficiency of the sampling are addressed, and the data collection is appropriate to the evaluation questions.

Data Analysis

Among the factors that the data analysis addresses are the handling of attrition; the matching of the analysis to the design; the use of appropriate statistical controls; the use of methods and levels of measurement appropriate to the type of data; and the estimation of effect size. The analysis also shows sensitivity to cultural categories, yields appropriately generalizable inferences, and relies on an analytic approach that is simple and efficient.
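As an illustration of the effect-size criterion above, the sketch below shows one common way to estimate a standardized effect size (Cohen's d with a pooled standard deviation) from outcome scores for a program group and a comparison group. The function name and the scores are hypothetical and are offered only as an example, not as part of the review criteria.

```python
import math
from statistics import mean, stdev

def cohens_d(program, comparison):
    """Standardized mean difference using the pooled sample standard deviation."""
    n_p, n_c = len(program), len(comparison)
    s_p, s_c = stdev(program), stdev(comparison)  # sample SDs (n - 1 denominator)
    pooled_sd = math.sqrt(
        ((n_p - 1) * s_p**2 + (n_c - 1) * s_c**2) / (n_p + n_c - 2)
    )
    return (mean(program) - mean(comparison)) / pooled_sd

# Hypothetical outcome scores for program participants and a comparison group.
program_scores = [72, 68, 75, 80, 71, 69, 77, 74]
comparison_scores = [65, 70, 66, 68, 63, 67, 64, 69]
print(f"Cohen's d = {cohens_d(program_scores, comparison_scores):.2f}")
```

By convention, values near 0.2, 0.5, and 0.8 are often read as small, medium, and large effects, although such benchmarks should always be weighed against the practical significance of the outcome for the program being evaluated.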

Cross-Cutting Factors

The following cross-cutting factors are likely to be important at all stages of a report: clarity, presentation, operation at a state-of-the-art level, appropriateness, understandability, innovativeness, generalizability, efficiency of approach, logical relationships, and discussion of the report's limitations. The report should also address ethical issues, possible perceptual bias, cultural diversity, and any gaps in study execution.