Performance Improvement 2009. Appendix G - What Characterizes an Evaluation?

For the purposes of deciding whether a project or study belongs in the Policy Information Center (PIC) database of evaluations, and hence in the annual Performance Improvement report to Congress, we encourage agency staff to cast a wide net. This appendix provides a discussion to aid in that task.

Evaluation is the process of determining the worth or value of something. It can be the analysis and comparison of actual progress against prior plans, oriented toward improving plans for future implementation. The American Evaluation Association defines evaluation as assessing the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness. Evaluation is also the systematic collection and analysis of the data needed to make decisions. Evaluation activities may include performance-related tasks such as:

  • Pinpointing the services needed.
  • Finding out what knowledge, skills, attitudes, or behaviors a program should address.
  • Establishing clear, measurable, and realistic program objectives and deciding the particular evidence that will demonstrate that the objectives have been met.
  • Developing or selecting from among alternative program approaches and determining which ones best achieve the goals.
  • Tracking whether program objectives are achieved; setting up a system that shows who gets services, how much service is delivered, how participants rate the services they receive, and which approaches are most readily adopted.
  • Trying out and assessing new program designs; determining the extent to which a particular approach is being implemented faithfully or the extent to which it attracts or retains participants.

Studies conducted or funded by an HHS agency or office would be included in the PIC evaluation database if they substantially met one or more of the following criteria:

  • Consisted of systematically collected and assessed information concerning, and provided useful feedback about, a program, population, social environment, policy, technology, need, methodology, or activity;
  • Generated information intended to inform policy decisions, improve program effectiveness, or advance the design, operation, or focus of a program; or provided information regarding the context of a program or the clients it serves;
  • Answered agreed-upon questions and provided information on specific criteria;
  • Included analysis of data and careful interpretation;
  • Resulted in information derived from direct observations or a compilation of other primary data collections;
  • Sought to provide findings that were action-focused and directed to users for whom the information would have practical value and influence thinking, policymaking or program design; or
  • Assessed effectiveness of an ongoing program in achieving its objectives, relied on the standards of experimental design to distinguish a program's effects from those of other forces, and aimed at program improvement through a modification of current operations.

Study Types Entered in PIC Evaluation Database

Researchers use many near-synonymous terms to describe their work. What follows is a partial list that draws out some of the distinctions among the terms found in the current report.

Process, Implementation, or “Formative” Evaluation

These terms tend to overlap in meaning. Such evaluations focus on the early stages of program implementation and operation, before formal outcomes are apparent. They identify the procedures undertaken and decisions made in developing a program, and they describe how the program operates, the services it delivers, and the functions it carries out. They address whether the program was implemented and is providing services as intended. By additionally documenting the program's development and operation, they allow an assessment of the reasons for successful or unsuccessful performance and provide information for potential replication.

Formative evaluations are a type of process evaluation of new programs or services that focus on collecting data on program operations so that needed changes or modifications can be made to the program in the early stages. Formative evaluations are used to provide feedback to staff about the program components that are working and those that need to be changed. Such an evaluation may be used by managers as an aid in deciding which strategy a program should adopt to accomplish its goals and objectives at minimum cost. In addition, the evaluation might include alternative specifications of the program design itself, detailing ideal milestone and flow networks, manpower specifications, progress objectives, and budget allocations.

Outcome or “Summative” Evaluation

An evaluation used to identify the results of a program's effort. It seeks to answer the question, "What difference did the program make?" It provides management with a statement about the net effects of a program after a specified period of operation. This type of evaluation provides information on: (1) the extent to which the problems and needs that gave rise to the program still exist, (2) ways to ameliorate adverse impacts and enhance desirable impacts, and (3) program design adjustments that may be indicated for the future. Such an evaluation may contribute to performance evaluation by comparing actual performance with that planned in terms of both resource utilization and production. It may be used by management to redirect program efforts and resources and to redesign the program structure. Impact Evaluation and Cost-Benefit Analysis are forms of Outcome or Summative Evaluation.

Impact Evaluation and “Interim” Impact Assessments

A type of outcome evaluation that focuses on the broad, long-term impacts or results of program activities. For example, an impact evaluation could show that improved grade-school performance was the direct result of local Head Start programs. An impact evaluation would typically include an experimental, random assignment design.
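
To make the logic of a random-assignment design concrete, here is a minimal sketch of the estimate such a design produces. All data, group sizes, and the assumed +5-point program effect are hypothetical, invented purely for illustration and not drawn from any HHS study:

    import random
    import statistics

    random.seed(42)

    # Randomly assign 200 hypothetical participants; random assignment
    # ensures the groups differ only by chance and by the program itself.
    participants = list(range(200))
    random.shuffle(participants)
    treatment_ids = set(participants[:100])

    # Simulated outcome scores: a baseline plus an assumed +5-point
    # program effect for the treatment group (illustrative only).
    outcomes = {
        pid: random.gauss(50, 10) + (5 if pid in treatment_ids else 0)
        for pid in range(200)
    }

    treated = [outcomes[pid] for pid in treatment_ids]
    control = [outcomes[pid] for pid in range(200) if pid not in treatment_ids]

    # Under random assignment, the difference in group means is an
    # unbiased estimate of the program's average effect.
    effect = statistics.mean(treated) - statistics.mean(control)
    print(f"Estimated program effect: {effect:.1f} points")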

Cost-Benefit Analysis

An analysis that compares the present values of all benefits less those of related costs, when benefits can be valued in dollars in the same way as costs. A cost-benefit analysis is performed in order to select the alternative that maximizes the net benefits of a program.
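
In formula terms, each alternative's net present value is the sum over years t of (benefits_t - costs_t) / (1 + r)^t for a discount rate r. Below is a minimal sketch of that comparison; the dollar streams, five-year horizon, and 3% discount rate are all assumed for illustration and do not come from any actual program:

    def net_present_value(benefits, costs, rate):
        # Discount each year's net benefit (benefit minus cost) back to
        # the present, year t = 0.
        return sum(
            (b - c) / (1 + rate) ** t
            for t, (b, c) in enumerate(zip(benefits, costs))
        )

    # Two hypothetical program alternatives over a five-year horizon.
    alt_a = net_present_value([0, 40, 60, 80, 90], [100, 20, 20, 20, 20], 0.03)
    alt_b = net_present_value([0, 30, 50, 60, 70], [60, 15, 15, 15, 15], 0.03)

    # Select the alternative with the larger net present value.
    print(f"Alternative A NPV: {alt_a:.1f}, Alternative B NPV: {alt_b:.1f}")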

Feasibility Study, Evaluability Assessment, Evaluation Protocol Development

A Feasibility Study is a study of the applicability or practicability of a proposed action or plan. An Evaluability Assessment determines whether an evaluation is practical, possible, or desirable. Evaluation Protocol Development is the preliminary design of an evaluation; it can also represent the final stage of an evaluability assessment, serving as an aid to management decision-making about whether to proceed with a full study.

Survey

The collection of information from a common group through interviews or the application of questionnaires to a representative sample of that group. The data collection techniques are designed to collect standard information from a large number of subjects. Surveys may include polls, mailed questionnaires, telephone interviews, or face-to-face interviews. Some survey projects do not involve a statistically representative sample of respondents but instead rely on a group of respondents considered broadly typical of the sample universe.
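
As a small illustration of drawing a representative sample, the sketch below selects a simple random sample from a hypothetical frame of client identifiers; the frame and sample sizes are invented for the example:

    import random

    random.seed(1)

    # Hypothetical sampling frame of 10,000 client identifiers.
    frame = list(range(10_000))

    # Draw a simple random sample of 500 respondents for, say, a mailed
    # questionnaire; every frame member is equally likely to be chosen.
    sample = random.sample(frame, 500)
    print(f"Drew {len(sample)} respondents from a frame of {len(frame)}")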

Policy Analysis, Exploratory Study, Descriptive Overview

Policy Analysis is investigation or discussion intended to help managers understand the extent of a problem or need that exists and to set realistic goals and objectives in response to that problem or need. It may be used to compare actual program activities with the program's legally established purposes in order to ensure legal compliance. An Exploratory Study may be policy analysis with more direct investigation or case-study development. A Descriptive Overview may be, as the phrase implies, more descriptive than analytical, although there is a blurring here too.

Program Analysis

An analysis of options in relation to goals and objectives, strategies, procedures, and resources by comparing alternatives for proposed and ongoing programs. It embraces the processes involved in program planning and program evaluation.

Performance Measurement, Performance Assessment

Performance Measurement is the ongoing collection of data to determine whether a program is implementing activities and achieving objectives. It measures inputs, outputs, and outcomes over time. In general, pre-post comparisons are used to assess change. Performance Assessment is a term that emphasizes the analysis of data over its mere collection.
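
A pre-post comparison can be as simple as the change in an indicator's mean level across reporting periods. The sketch below uses invented quarterly counts for a hypothetical indicator of participants served:

    import statistics

    # Hypothetical quarterly counts of participants served, recorded
    # before and after a program change (values are illustrative only).
    pre = [110, 115, 112, 118]
    post = [130, 128, 135, 140]

    # A simple pre-post comparison: change in the indicator's mean level.
    change = statistics.mean(post) - statistics.mean(pre)
    pct_change = 100 * change / statistics.mean(pre)
    print(f"Mean change: {change:.1f} per quarter ({pct_change:+.1f}%)")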

Literature Review, Issue Brief, Research Brief

A Literature Review consists of a summary and interpretation of research findings reported in the literature. It may range from unstructured qualitative reviews by single authors to systematic and quantitative procedures such as meta-analysis. An Issue Brief may consist primarily of discussions of policy options, whether drawn from the literature or not. A Research Brief may be another name for either of the foregoing, with varying connotations.
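
As one illustration of the quantitative end of that spectrum, a fixed-effect meta-analysis pools study effect sizes weighted by the inverse of their variances. The effect sizes and variances below are hypothetical, chosen only to show the calculation:

    # Hypothetical per-study effect sizes and sampling variances.
    effects = [0.30, 0.45, 0.20, 0.38]
    variances = [0.02, 0.05, 0.01, 0.04]

    # Inverse-variance weighting gives more precise studies more weight.
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    print(f"Pooled effect estimate: {pooled:.2f}")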
