One well-known process measure is the Early Childhood Environment Rating Scale (ECERS; Harms and Clifford, 1980). This measure comprises 37 items that evaluate seven aspects of center-based care for children ages two and a half to five years: personal care routines, furnishings, language reasoning experiences, motor activities, creative activities, social development, and staff needs. Detailed descriptors are provided for each item, and each item is rated on a 7-point scale anchored at inadequate (1), minimal (3), good (5), and excellent (7). The ratings, according to the scale developers, are based on a minimum two-hour block of observation in the classroom. The Infant/Toddler Environment Rating Scale (ITERS; Harms, Cryer, and Clifford, 1990) is a related measure that assesses process quality in centers for children younger than two and a half years. The 35 items of the ITERS also are organized under seven domains and are rated on 7-point scales.
These same investigators have developed a 32-item observational measure, the Family Day Care Rating Scale (FDCRS), to assess process quality in child care homes (Harms and Clifford, 1989). Some items parallel items on the ITERS and the ECERS, but other items are unique because the instrument “tries to remain realistic for family day care home settings by not requiring that things be done as they are in day care centers” (p. 1).
As can be seen in Tables 1, 2, and 3, these measures are widely used in child care research. They have important strengths, including good psychometric properties and being relatively easy to use reliably, and their widespread use makes cross-study comparisons possible. The measures also have limitations. First, the global composite score combines features of the physical environment, social experiences, and working conditions for staff; some of these areas may well influence children’s intellectual functioning or social-emotional well-being more than others, so the composite score may underestimate effects relative to more targeted scales. Second, the measures are setting-specific. They cannot be used as interchangeable measures of quality, which means it is not possible to make simple comparisons across types of care or to combine scores in omnibus analyses of quality effects across different types of care. Third, these measures are not appropriate for assessing in-home care given by nannies or grandparents.
The Observational Record of the Caregiving Environment (ORCE) was developed to address these limitations (NICHD Early Child Care Research Network, 1996, in press-a). Because psychological theory and research indicate the central role of experiences with caring adults in children’s well-being and development, the ORCE focuses on this domain. Both time-sampled behavioral counts of caregiver actions (e.g., responds to vocalization, asks questions, speaks negatively) and qualitative ratings that characterize a caregiver’s behavior with an individual child over time are collected during a minimum of four 44-minute observation cycles spread over a two-day period. At the end of each 44-minute cycle, observers use 4-point rating scales ranging from 1 = “not at all characteristic” to 4 = “highly characteristic” to describe caregiver behavior. A positive caregiving composite score is created by taking the mean across scales over all of the ORCE cycles at a given age period. Higher scores indicate caregivers who are more sensitive and responsive to a child’s needs, who are warm and positive, who are cognitively stimulating, and who are not detached or hostile. Unlike the ECERS, ITERS, or FDCRS, the ORCE can be used in all types of child care and with children across the first five years. Age-appropriate descriptors for caregivers’ behaviors with infants, toddlers, and preschoolers are provided.
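The composite scoring described above can be sketched in a few lines. The following is an illustrative sketch only: the scale names and rating values are invented for the example, not actual ORCE scales or data; it simply shows the arithmetic of averaging 4-point qualitative ratings across scales and across observation cycles.

```python
# Illustrative sketch of a positive-caregiving composite in the style
# of the ORCE: qualitative ratings (1 = "not at all characteristic" to
# 4 = "highly characteristic") are averaged across rating scales and
# across all observation cycles at a given age period.
# NOTE: scale names and values below are hypothetical, for illustration.

def positive_caregiving_composite(cycles):
    """cycles: list of dicts mapping scale name -> 1-4 rating,
    one dict per observation cycle."""
    ratings = [r for cycle in cycles for r in cycle.values()]
    return sum(ratings) / len(ratings)

# Four observation cycles, each with three illustrative rating scales.
cycles = [
    {"sensitivity": 3, "positive_regard": 4, "stimulation": 3},
    {"sensitivity": 4, "positive_regard": 4, "stimulation": 3},
    {"sensitivity": 3, "positive_regard": 3, "stimulation": 2},
    {"sensitivity": 4, "positive_regard": 4, "stimulation": 3},
]
print(positive_caregiving_composite(cycles))  # mean of all 12 ratings
```

A higher composite would reflect a caregiver rated as more sensitive, warm, and stimulating across the observation period.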
Another commonly used process measure is the Caregiver Interaction Scale (Arnett, 1989), which rates teachers’ sensitivity during interactions with children. This 26-item measure yields three scores (sensitivity: warm, attentive, engaged; harshness: critical, punitive; detachment: low levels of interaction, interest, or supervision), which are combined to create an overall caregiver quality score. The ratings are made after two 45-minute observations conducted on separate occasions by different observers.
The Assessment Profile (Abbott-Shim & Sibley, 1992a, 1992b) assesses different aspects of quality, namely features related to health and safety, physical facilities, and individualized child services. Different forms of the instrument are available for child care homes and centers. These forms list individual items that are viewed as exemplars of (a) healthy, safe settings, (b) rich physical environments, and (c) settings that meet the needs of adult staff. Individual items are scored using a yes/no format, with “yes” designating items that were either observed or reported by staff. These items can be scored reliably (see NICHD Early Child Care Research Network, 1996). Caregivers have been observed to offer more positive caregiving in settings that receive higher Profile scores (NICHD Early Child Care Research Network, 1996, in press-a).
The CC-HOME Inventory is a measure of process quality that uses a checklist approach to create a quality score across multiple domains, including the health and safety of the physical environment, variety of experiences, and materials (NICHD Early Child Care Research Network, 1996). Derived from Bradley and Caldwell’s well-known assessment of the quality of the home environment, the measure comprises 45 items that are scored on a yes/no basis and then summed (alpha = .81). In one study, children who attended better-quality child care homes as measured by the CC-HOME Inventory obtained higher Bayley scores at 24 months and higher school readiness and language comprehension scores at 36 months than children who attended poorer-quality child care homes (Clarke-Stewart, Vandell, Burchinal, O’Brien, and McCartney, 2000).
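The yes/no scoring used by the Assessment Profile and the CC-HOME Inventory amounts to counting the items marked “yes.” The sketch below illustrates that logic with invented item descriptions; the real CC-HOME Inventory has 45 items, and these four are hypothetical stand-ins.

```python
# Hedged sketch of a yes/no checklist score in the style of the
# CC-HOME Inventory: each item is marked True ("yes") or False ("no"),
# and the quality score is the count of "yes" items.
# NOTE: item descriptions are invented for illustration.

def checklist_score(items):
    """items: dict mapping item description -> bool (yes/no)."""
    return sum(items.values())  # True counts as 1, False as 0

items = {
    "hazards out of child's reach": True,
    "variety of play materials": True,
    "books accessible to child": False,
    "safe outdoor play space": True,
}
print(checklist_score(items))  # 3 of 4 items marked "yes"
```

On the full instrument, a higher sum would indicate a richer, safer child care home environment.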
Other measures have been less successful in providing reliable and valid assessments of process quality. For example, Lamb and colleagues failed to find concurrent associations between child care quality and child functioning in their study of child care in Sweden (Broberg, Hwang, Lamb, and Bookstein, 1990). One factor that likely contributed to the lack of significant relations was problems with their quality measure. The Belsky-Walker Checklist (Broberg et al., 1990) asks observers to check off whether 13 positive events (e.g., caregiver provides verbal elaboration; caregiver gives heightened emotional display; signs of positive regard) and 7 negative events (e.g., child cries; child aimless; caregivers in non-child conversations) occur at least once during 3-minute observation intervals. This 3-minute window was substantially longer than the 10- to 30-second intervals recommended for recording social interactions (Yarrow and Zahn-Waxler, 1979), so the checklist may have failed to detect meaningful distinctions in caregiver behavior. This checklist underscores the challenge of designing measures of process quality: detecting relations between process quality and child outcomes requires robust measures.
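The interval-length problem can be made concrete with a small simulation. The event times below are invented for illustration; the sketch shows why a once-per-interval checklist with long intervals collapses frequency differences that shorter intervals would preserve.

```python
# Illustration (with hypothetical event times) of why interval length
# matters for a once-per-interval checklist: within each interval the
# observer records only whether the behavior occurred at least once,
# so long intervals erase differences in how often it occurred.

def intervals_with_event(event_times, session_len, interval_len):
    """Count intervals in which the behavior occurred at least once."""
    n_intervals = session_len // interval_len
    return sum(
        any(i * interval_len <= t < (i + 1) * interval_len
            for t in event_times)
        for i in range(n_intervals)
    )

# Two caregivers observed for 180 seconds: one responds to the child
# once, the other responds six times (times in seconds, invented).
rare = [100]
frequent = [10, 40, 70, 100, 130, 160]

# With a single 180-second interval, both receive the same score.
print(intervals_with_event(rare, 180, 180),
      intervals_with_event(frequent, 180, 180))   # 1 1
# With 30-second intervals, the frequency difference is preserved.
print(intervals_with_event(rare, 180, 30),
      intervals_with_event(frequent, 180, 30))    # 1 6
```

This is the sense in which a 3-minute window can be "too long": two caregivers who differ sixfold in responsiveness can receive identical checklist scores.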
"report.pdf" (pdf, 132.7Kb)
"table1.pdf" (pdf, 43.75Kb)
"table2.pdf" (pdf, 43.32Kb)