Consumer Education Initiatives in Financial and Health Literacy. Evaluation


In general, initiatives relied on ongoing monitoring to report on project activities. Monitoring included obtaining information from grantees on program activities and recipients or collecting web metrics. Monitoring also included process measures, such as tracking the distribution of print materials, the number of people enrolled, the number of trainings held, and the number of checking accounts opened. Participants from multiple federal programs (seven) reported funding external resource centers that provided technical assistance and training to their grantees. Resource center activities included developing program and outreach materials, such as toolkits, checklists, and evaluation plans; providing technical assistance; compiling best practices and lessons learned from grantees; and conducting program evaluations. In some instances, it was mandated at a program's creation that an external agency conduct an evaluation of the program.

Evaluation of Effectiveness

Overall, participants seemed unsure about concrete measures of program success. Few federal interview participants (five) reported overall evaluations of their initiatives' effectiveness or efforts to document program outcomes. However, some participants mentioned that grantees had to describe processes for evaluation when applying for funding. Participants cited evaluations such as nationwide research reports and case studies of multiple grantees. FDIC (2007) conducted a longitudinal evaluation of the Money Smart curriculum and found that program participants were more likely to open deposit accounts, save money in a mainstream deposit product, use and adhere to a budget, and have increased confidence regarding their finances.

The John H. Chafee Foster Care Independence Program funded a multisite evaluation of grantees. One resulting evaluation report focused on a specific grantee's life skills program and comprised lessons learned, as well as a study of the program's approach, service implementation, populations served, barriers to implementation, and impact on youth. The evaluation examined the effects of the program on helping youth achieve better social and economic outcomes, such as higher employment rates and reduced numbers of nonmarital pregnancies and births. The evaluation did not document a significant positive or negative impact of the program on outcomes associated with a successful transition to adulthood, such as employment, earnings, educational attainment, or homelessness (Courtney et al., 2008).

ASPE's Cash and Counseling program gave rise to numerous research publications, including overall program evaluations, evaluations of specific grantee approaches and experiences, and discussions of consumer experiences. An evaluation report published in 2007 examined the program's effects on consumers and caregivers, discussed implementation issues, and described the program's effect on Medicaid and Medicare costs and services. The report compared findings for a treatment group (enrolled in the program) with those for a control group and stated that the treatment group was more likely to be satisfied with the care received and with life in general (Brown et al., 2007).

ACF's Assets for Independence (AFI) initiative funded an impact report that estimated the effects of IDAs on AFI participants' ability to build assets, such as home ownership, capitalization of businesses, and attainment of higher education, as well as the program's impact on participants' net worth, employment status, and income. The report estimated that, after three years, participation in the program had a significant effect on increasing the rates of homeownership, business ownership, and pursuit of postsecondary education among participants. The report did not find a significant effect on net worth or income and estimated a slight, nonsignificant increase in the likelihood of employment (Mills, Lam, DeMarco, Rodger, & Kaul, 2008).

In 2006, ACF's ORR IDA program funded an evaluation to examine the characteristics of program participants, the community impact of the program, and participant outcomes, such as home ownership, business ownership, vehicle purchase, and educational attainment. The evaluation found that, over a five-year period, 81 percent of program participants attained their IDA goal of purchasing a major asset, and approximately five percent left the program without attaining their goal. Approximately half of the participants used their IDA funds to purchase a vehicle, the most commonly acquired asset. Home purchases and computer purchases each accounted for approximately ten percent of participants; six percent pursued postsecondary education, four percent used IDA funds to start a small business, one percent carried out home renovations, and less than one percent sought job training or technical education (Hein, 2006).

Some of the private initiatives that we included have engaged in ongoing activities to measure their effectiveness and success.

  • AARP's Financial Freedom Tour and its initiatives to improve medication use among seniors employ qualitative surveys to track consumers' knowledge of and satisfaction with the information provided through the program and consumers' propensity to act on the basis of that information.
  • The Bank on Cities campaigns (e.g., Bank on San Francisco), for which AARP partners with other organizations, measure the number of new bank accounts that low-income individuals open as one determinant of the program's success.
  • A participant from the Financial Literacy Center noted that all of the Center's work was evaluated. An evaluation of a retirement planning aid for low-income workers and women found that participation in supplementary retirement accounts more than doubled after implementation of the intervention (Lusardi, Keller, & Keller, 2008). An evaluation of videos to promote retirement savings among this target population described a reduction in anxiety about future retirement needs, improved knowledge and awareness of future financial needs, and an increase in saving behavior (Lusardi, Keller, & Keller, 2009).
  • As part of their My Medicare Matters campaign, NCOA and its partners used numerous performance measures, including the number of Medicare beneficiaries who received education and personalized counseling regarding their Medicare prescription drug options, the number of community events held, and the number of community-based organizations recruited. The campaign also used in-person and web-based customer satisfaction surveys to collect information on beneficiaries' satisfaction with the education they received and their ability to make an informed decision about their Medicare prescription drug coverage. The surveys indicated that 90 percent or more of recipients were satisfied with the education they received and felt they had made an informed decision regarding their coverage (NCOA, 2007).
  • Although Stanford does not centrally administer the Chronic Disease Self-Management Program, it has been extensively evaluated since its inception. The program has been found to be effective in managing progressive, debilitating illnesses, resulting in fewer hospitalizations than among people who have not participated in the program. The findings also appear to endure over time; for example, initial improvements in exercise and social limitations are maintained over a two-year period. In addition, the program has been found to be effective across a diverse set of diseases and across socioeconomic and educational levels. Results of all studies indicate that the program also led to reductions in health care expenditures, owing to decreases in emergency room visits, fewer hospitalizations, and fewer days in the hospital (Lorig et al., 1999; Lorig et al., 2001; Lorig, Sobel, Ritter, Laurent, & Hobbs, 2001).

Some initiatives from private organizations began only recently, so comprehensive evaluations have not yet been possible.

  • Within the past year, AARP began its initiatives to educate consumers about the recent health care reform and launched the Decide. Create. Share. campaign. To evaluate the effectiveness of the campaign, the organization plans to measure changes in baby-boomer women's knowledge about long-term care by administering national surveys at five-year intervals.
  • EARN's IDA program recently implemented a comprehensive quantitative and qualitative study to learn about program participants' knowledge, attitudes, and savings behaviors. The survey is administered upon a participant's entry into the program (baseline) and at each successive year of participation. However, because data collection began in 2008, the tool has not yet yielded sufficient data for meaningful analysis.
  • ISU Extension's Sharpen Your Financial Coaching Skills train-the-trainer program only recently completed pilot testing and implementation.
  • NCOA launched its demonstration program consisting of eight community-based Economic Security Service Centers in 2010.

Participants noted that formal evaluations are often not feasible due to resource constraints. A participant from the Health Education Council stated that the organization had limited capacity to carry out evaluations and could benefit from collaboration with a research partner who could focus on documenting outcome measures and tracking program success. In addition, assessing the effectiveness of an initiative may be considered secondary to an organization's primary focus on implementation and service delivery.

We would love to [do research projects] but [we] do not have the resources to research. When [we] seek funding, [we] are looking to provide services on the ground and it's a secondary interest to [conduct] research to know it's effective. [It] would be great to get research dollars but they're hard to find.
NCOA interview participant

Evaluation of Program Approach and Structure

Four federal participants reported that there had been some type of process evaluation. These included comprehensive evaluations, as well as guidance documents to assist future grantees in designing and implementing similar programs. ACF's AFI program funded a process evaluation that comprised case studies of 14 grantees and examined the design, implementation, and operation of the initiatives (DeMarco, Mills, & Ciurea, 2008). The report describes challenges the grantees faced and identifies promising practices and lessons learned.

One of the many research publications resulting from ASPE's Cash and Counseling program includes a case study of 12 state grantees' experiences with planning, developing, and implementing the initiative. The report describes challenges the grantees faced, the ways in which they addressed those challenges, and lessons learned in program planning, design, and implementation (O'Keeffe, 2009).

AoA's Pension Counseling and Information program funded a feasibility study in 2000 that examined the possibility of expanding the initiative into a permanent, nationwide program (Westat, 2000). Although the report focused on ways to finance an expanded program, it also examined grantee approaches and compiled recommendations and a framework for expansion.

The resource center serving ACF's ORR program is responsible for numerous resources, such as training materials, management guidance, and evaluation tools. One publication relevant to ORR's Microenterprise Development Program is a guidebook of lessons learned from Microenterprise Development Program initiatives regarding program planning, outreach, administration, and management (Dobson, Black, & Hein, n.d.). The resource center also manages a blog that contains consumer experiences and other anecdotal evidence of success.

NCOA and AstraZeneca assessed the My Medicare Matters campaign in 2007. A resulting report examined lessons learned from program implementation, including the creation of collaborations and partnerships that were crucial to the campaign. The report outlined numerous attributes that contributed to the campaign's success, including shared goals among program partners, clear delineation of roles and responsibilities, detailed implementation and management plans, recruitment and engagement of community-based partners, and the use of quantitative and qualitative data to measure performance (NCOA, 2007).

Evaluation of Web-Sites and Print Materials

Federal interview participants used various methods to gather consumer feedback on web- and print-based information, including cognitive testing during formative research and usability testing. CMS officials described how the Medicare & You handbook undergoes annual testing with beneficiaries and consumers. AoA surveys assessed the experiences of consumers and Area Agencies on Aging (AAA) directors who used the Eldercare Locator web-site (Customer Care Measurement & Consulting, 2006). The initiative also included consumer usability testing of the Eldercare Locator web-site (HHS, AoA, 2009). A participant from AHRQ noted that all agency products underwent consumer testing during the formative stage but that the effectiveness of the materials was not commonly evaluated following their release:

The agency does testing and evaluation of all their [consumer] products. They perform usability as well as cognitive testing... The agency does a lot of focus group testing to determine the best language to use. We ask people their opinions on everything, from the color to the illustrations. However, they have not done many full scale evaluations of their products... They are working on some comparative effectiveness evaluations with the emergence of new funding.
AHRQ interview participant

Web-based initiatives tracked metrics, including the number of new and repeat visits to a site, and monitored how often people downloaded materials in order to identify products that were popular with consumers. A participant working with ASH's web-site described the challenge of measuring the effectiveness of the site on an ongoing basis and the initiative's efforts to collect consumer feedback:

Effectiveness is always a challenge. We try to build into our studies [assessments] to measure self-efficacy, comprehension, [and] spot checks. We have feedback mechanism surveys and feedback points on the web-site.
ASH interview participant

Among the private initiatives we examined, a participant from AARP noted that activities to monitor and evaluate the organization's web-based efforts were consistent across most initiatives. Evaluation activities included monitoring the number of people attending in-state events, community events, webinars, and tele-town halls; using web metrics to track the use of online tools and literature; and conducting focus groups with consumers for materials development and testing. Monitoring of web metrics and media coverage was also used to assess NCOA's My Medicare Matters web-site, along with web-based customer satisfaction surveys to understand whether consumers found the information helpful in making a decision about their Medicare prescription drug coverage. A participant from NEFE stated that the organization tracked the number of print materials distributed, such as Habitat for Humanity's Homeowners Manual; however, the organization had not published evaluation research. The Health Education Council conducted pilot testing during the development of a tobacco cessation curriculum targeted at correctional facilities. The organization also collected information on the number of program materials distributed and the number of individuals trained to administer the curriculum. Although the organization used instruments to collect metrics on changes in knowledge and attitudes resulting from the curriculum, a participant from the organization noted that it was challenging to obtain such information consistently from the many institutions that implement the curriculum.

Future Evaluation Resources

Participants also cited future plans to collect data and conduct research and evaluations for their initiatives, notably for newer initiatives that had not existed long enough to conduct a thorough evaluation or collect outcomes data.

  • A participant with the John H. Chafee Foster Care Independence Program noted the pending launch of the National Youth in Transition Database, which collects case-level information on youth and young adults in foster care. The participant stated that such information would enable a stronger quantitative approach to evaluating the program and its effectiveness.
  • A participant with the Part D Outreach initiative stated that the program would compile a nationwide program report. However, she was not sure whether this report would be publicly available.
  • An AoA participant stated that the program would conduct a process evaluation of the Eldercare Locator in late 2010.
  • AoAs pilot of the Benefits Enrollment Centers ended in July 2010, and a process evaluation is planned.
  • Treasury's Community Financial Access Pilot will also develop a report based on lessons learned; the report will contain anecdotal descriptions of successes, failures, and barriers, as well as recommendations for future implementation.
  • A participant working with Head Start Innovation and Improvement grantees stated that each grantee is responsible for conducting evaluations, which will occur in 2011.
