Development of a Quality Measure for Adults with Post-Traumatic Stress Disorder. B. Stage 2--Pre-Testing the Measure

05/01/2019

Once we finalized the development of the surveys, we collected quantitative and qualitative data to pre-test the measure. The quantitative data collection involved administering the surveys at specialty behavioral health organizations to assess the psychometric properties of the measure, potential approaches to scoring the measure, and potential implementation challenges. The qualitative data collection involved gathering feedback from stakeholder focus groups and individuals who coordinated measure testing within their organization to assess the measure's usefulness and feasibility.

We first describe our approach to quantitative testing and then our approach to qualitative testing.

1. Quantitative Testing of the Survey Measure

The quantitative testing was designed to answer the questions in Table IV.2. We pre-tested the measure at six behavioral health organizations, which allowed us to assess the organizations' abilities to collect the data, the initial psychometric properties of the measure, and different strategies for calculating a measure score. The quantitative testing design had three key features:

  • Survey completion by multiple respondent types. There is a dearth of empirical evidence to suggest which type of respondent will produce the most credible and reliable information on the delivery of evidence-based psychotherapy. Some stakeholders who participated in Stage 1 of measure testing, as well as some TAG and TEP members, suggested that clinicians may over-report the delivery of evidence-based therapeutic elements. Others suggested that clients may have difficulty recognizing technical aspects of the therapeutic elements while they are in the midst of therapy and may under-report their delivery. To inform future decisions regarding the optimal respondent type, clinicians, their supervisors, and a sample of their clients completed the survey on the same sampled therapy sessions (see Section IV.B.4 for information on the sampling design). For the purposes of this data collection effort, we considered supervisors to be the most experienced and objective raters and treated their responses as the gold standard. As such, clinician and client responses were compared to supervisor responses.

  • Survey completion at multiple stages of treatment. Cognitive behavioral approaches to treating PTSD typically follow a general sequence of events. There may be appropriate variation in when specific therapeutic elements are delivered; however, one might expect certain elements to be delivered rarely at, for example, the beginning or end of treatment. To begin to develop a rich understanding of the delivery of evidence-based psychotherapy across the course of treatment, clinicians and their supervisors completed the survey following three therapy sessions of clients who were at different stages in the therapy process -- beginning, middle, and end. Clients completed the survey only once.

  • Survey completion by clinicians and supervisors who represent a range of therapeutic orientations. Although the majority of the survey items reflect cognitive behavioral approaches, we recruited organizations that employed clinicians who used cognitive behavioral therapy (CBT) as well as other types of psychotherapy in the treatment of individuals with PTSD. Obtaining this range of techniques was necessary to assess how the measure performs across therapeutic orientations.

TABLE IV.2. Quantitative Pre-Testing and Analysis of Survey Measure
Criterion Testing Question(s) Data Analysis
Importance Does performance on the measure vary? How does performance vary when different approaches to scoring the measure are applied? Descriptive analysis (mean, range, outliers) of performance
Factor-analytic structure How many underlying psychotherapeutic constructs does the measure include? What does the factor structure imply regarding the number of items in the measure? EFA and CFA
Reliability: Internal consistency What is the extent of the agreement between the items in each identified factor? Alpha statistic
Reliability: Inter-rater To what extent is there agreement between clinicians, supervisors, and clients in rating the survey items and in the overall survey? Agreement using AC1 statistic
Validity To what extent does the measure distinguish between clinicians who do and do not deliver evidence-based psychotherapy? Sensitivity and specificity analyses
Feasibility On average, how long did it take participants to complete the measure? Descriptive analysis (means and ranges)

Here we describe the characteristics of the participating behavioral health organizations and the data collection process.

2. Site Characteristics

From June 2014 to January 2015, we sought to recruit 36 clinicians employed by behavioral health organizations that delivered psychotherapy to adults with PTSD in outpatient treatment settings. We announced the project via the listservs of the National Council on Community Behavioral Health, American Counseling Association, and Kent State Counselor Education and Supervision. We also contacted organizations recommended by members of the TEP and project team. We identified other potential organizations through web-based searches.

As organizations expressed interest, we conducted informational meetings where we provided additional information regarding the project and its goals, and specifics about the testing activities. We then assessed whether the interested organizations met the following requirements:

  • Clinicians who provide psychotherapy to at least three adult clients (in various phases of treatment) with a diagnosis of PTSD.

  • Clinical supervisors who routinely provide clinical supervision via direct observation or review of video or audio recordings, or were willing to provide these types of supervision for selected therapy sessions.

  • An individual within the organization able and willing to coordinate data collection activities for their organization, including client recruitment.

We conducted follow-up interviews to gather additional information on the number of eligible clinicians and supervisors, the type of psychotherapy provided to adults with PTSD, and the type and frequency of supervision. We confirmed that they had the capacity to participate in the testing and discussed potential challenges to their participation before selecting the final organizations. We then established a Memorandum of Understanding and a Business Associate Agreement with each organization to govern the secure use of the data submitted under this project. We provided each organization with a modest honorarium to offset the costs of data collection. Where necessary, we submitted Institutional Review Board (IRB) materials for review by organizations' internal IRBs.

In total, we recruited six behavioral health organizations with a total of 37 clinicians and nine clinical supervisors. The behavioral health organizations were located in the Midwest and on the East Coast; most served individuals with public and private insurance.

3. Clinician and Supervisor Characteristics

TABLE IV.3. Characteristics of Participating Clinicians and Supervisors by Site
    Sample Size Average Number of Years Providing Therapy (range) Average Number of Years Providing Treatment for PTSD (range) Average Current Number of Clients per Clinician (range) Current Number of Clients with PTSD (range) Percentage Currently Licensed Percentage with Accreditations or Certifications in CBT
Total Clinicians 37 7.5 (1-29) 6.4 (0-29) 50 (7-100) 11 (0-40) 70.3% 67.6%
Supervisors 9 16 (4-30) 10.7 (2-26) 20 (0-40) 4 (0-15) 100% 88.9%
Site A Clinicians 11 2.6 (1-7) 2.6 (1-7) 33 (20-45) 6 (0-10) 63.6% 54.5%
Supervisors 2 8 (4-12) 3 (2-4) 27 (25-28) 3 (2-4) 100% 100%
Site B Clinicians 3 3 (1-5) 5 (5-5) 25 (7-60) 2 (0-3) 66.7% 66.7%
Supervisors 2 8 (6-10) 4.5 (4-5) 7 (6-8) 4 (3-5) 100% 100%
Site C Clinicians 5 6.6 (5-8) 2.4 (1-4) 24 (12-29) 24 (12-29) 100% 80%
Supervisors 1 14 (n=1) 4 (n=1) 17 (n=1) 15 (n=1) 100% 100%
Site D Clinicians 6 12.7 (2-29) 10.2 (2-29) 58 (40-75) 9 (3-20) 66.7% 100%
Supervisors 1 18 (n=1) 18 (n=1) 18 (n=1) 3 (n=1) 100% 100%
Site E Clinicians 7 9.3 (1-20) 9.14 (0-20) 100 (99-100) 19 (5-40) 71.4% 57.1%
Supervisors 1 25 (n=1) 25 (n=1) 0 (n=1) 0 (n=1) 100% 100%
Site F Clinicians 5 13.2 (5-23) 12.2 (3-21) 53 (35-70) 6 (3-9) 60% 60%
Supervisors 2 27.5 (25-30) 17 (8-26) 40 (40-40) 4 (4-4) 100% 50%

As described in Table IV.3, the clinicians who completed the survey had been providing therapy for an average of 7.5 years and treatment for PTSD for an average of 6.4 years. Clinicians' current caseloads averaged 50 clients per clinician; almost 25 percent of those clients had PTSD. On average, participating supervisors had been providing therapy for 16 years and treatment for PTSD for 10.7 years. Supervisors saw an average of 20 clients, including an average of four clients with PTSD. The majority of participating clinicians (70.3 percent) and all of the supervisors were currently licensed as mental health professionals. The majority of clinicians and supervisors were also accredited or certified in cognitive behavior therapy (67.6 percent of clinicians and 88.9 percent of supervisors).

The most common degree type was a master's degree, held by 75 percent and 67 percent of clinicians and supervisors, respectively (see Figure IV.1).

FIGURE IV.1. Clinician-Reported and Supervisor-Reported Educational Degree
FIGURE IV.1, Bar Chart: The most common degree type was a master's degree, held by 75% of clinicians and 67% of supervisors. 10% of clinicians and 33% of supervisors reported attaining a doctoral degree. An additional 14% of clinicians reported attaining other degrees, including degrees or certifications in social work and substance abuse counseling. One clinician did not provide degree information.
* Other includes BA, CASAC, LCSW, and LSW. One clinician did not provide degree information.

Over half (57 percent) of the clinicians identified their therapeutic orientation as "supportive," whereas the majority of supervisors (78 percent) identified cognitive processing therapy (CPT), a form of CBT, as their therapeutic orientation (see Figure IV.2).

FIGURE IV.2. Clinician-Reported and Supervisor-Reported Therapeutic Orientation
FIGURE IV.2, Bar Chart: The most commonly clinician-reported therapeutic orientations were "supportive" (57%), "cognitive processing therapy (CPT)" (54%), and "interpersonal" (30%). Clinicians also reported "psychodynamic" (24%), "eye movement desensitization and reprocessing (EMDR)" or "prolonged exposure therapy (PE)" (16%), and "psychoanalytic" (5%) orientations. Among supervisors, 78% reported their therapeutic orientation as CPT, 67% as "supportive," and 44% as "psychodynamic"; 33% reported PE, and 11% indicated EMDR or interpersonal. Finally, 43% of clinicians and 33% of supervisors identified other forms of therapy, such as CBT, dialectical behavior therapy, mindfulness, and other types of psychotherapies.
* Includes other forms of CBT, dialectical behavior therapy, mindfulness, and other types of psychotherapies.

4. Data Collection Process

Site coordinator training. To facilitate the data collection process, we asked each participating organization to identify a staff member to serve as a site coordinator. These individuals filled a critical role. Their responsibilities included providing Mathematica with the information on eligible clinicians, supervisors, and clients to draw a study sample; notifying clinicians and supervisors when they were due to complete a survey; providing follow-up reminders to clinicians and supervisors to complete past-due surveys; describing the project and data collection effort to eligible clients; and attending regular meetings with Mathematica/NCQA.

To prepare the site coordinators for their involvement in the project, we held web-based trainings. In these trainings, we oriented the coordinators to the goals and objectives of the project and to their role and responsibilities. We provided guidelines and tips for communicating with clinicians, supervisors, and clients; instruction on how to access the survey; and best practices for data security. We also provided them with a packet of materials to facilitate completion of their tasks.

To further support the site coordinators, Mathematica/NCQA communicated frequently with them. Project staff emailed site coordinators at least every other day to provide updates on each site's response rates, confirm upcoming therapy session dates, and, if needed, determine whether resampling was necessary due to missed therapy appointments or a client terminating therapy. They also held weekly group meetings with the sites to discuss the status of data collection activities and to collectively strategize approaches for collecting data.

Sample selection and survey administration. To select the study sample, site coordinators securely transmitted to Mathematica a list of clinicians who were currently providing psychotherapy to at least three adults with PTSD, their supervisors, and their clients. The site coordinators also provided information on the clients' treatment start date, expected length of treatment, and date of next therapy session. Mathematica, with input from the site coordinators, then classified each client's upcoming therapy session as occurring in the beginning, middle, or end of treatment, and drew a study sample following the process described below and illustrated in Figure IV.3 (a brief code sketch of the sampling logic follows the list):

  • For each clinician, three therapy sessions were sampled from the clinician's current caseload of adults with PTSD -- one therapy session of a client who recently started therapy, a second therapy session of another client who was in the middle of therapy, and a third therapy session of another client who was toward the end of therapy.

    • The clinician completed the survey following each of the three sampled therapy sessions.

    • Clinicians were instructed to complete the survey within 24 hours of each sampled therapy session.

  • The clinician's supervisor was also sampled and also completed the survey following each of the three sampled therapy sessions.

    • Most of the participating supervisors supervised more than one participating clinician. The number of surveys a supervisor completed therefore depended on the number of participating clinicians he or she supervised. For example, a supervisor who supervised one clinician completed the survey three times, whereas a supervisor who supervised three clinicians completed the survey nine times (one survey for each of three sampled sessions per clinician).

    • Supervisors were instructed to complete the survey within 24 hours of audio tape review or direct observation of the sampled therapy session.

  • The clients attending each of the sampled therapy sessions were also sampled. They completed the survey once, following the sampled therapy session.

    • If the client refused to participate in the project, the sampled therapy session was discarded; neither the clinician nor his or her supervisor completed the survey on the session. Instead, we resampled a therapy session from another client on the same clinician's caseload, if possible. In nine cases, the clinicians did not have another client in the appropriate stage of treatment to resample.

    • If the client discontinued treatment or missed three consecutive appointments, which the site coordinators suggested was an indication of passively discontinuing treatment, the therapy session was discarded. A therapy session from another client on the clinician's caseload was sampled, if possible. In 16 cases, clinicians did not have another client in the appropriate stage of treatment to resample.

  • The sampling structure resulted in survey responses on the same therapy session from clinicians, supervisors, and clients.
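As referenced above, the sampling logic can be summarized in a short, illustrative Python sketch. The data structures, field names, and functions below are hypothetical and are meant only to make the stage-based sampling and resampling rules concrete; they are not the project's actual systems.

```python
import random

# Hypothetical client record: {"id": ..., "stage": ...}, where stage is
# "beginning", "middle", or "end", as classified by Mathematica with
# input from the site coordinators.

def sample_sessions(caseload, rng):
    """Sample one upcoming therapy session per treatment stage from a
    clinician's current caseload of adults with PTSD."""
    sampled = {}
    for stage in ("beginning", "middle", "end"):
        candidates = [c for c in caseload if c["stage"] == stage]
        if candidates:
            sampled[stage] = rng.choice(candidates)
    return sampled

def resample(caseload, stage, excluded_ids, rng):
    """If a sampled client refuses or discontinues treatment, draw a
    replacement at the same stage from the same clinician's caseload;
    return None if no eligible client remains (session lost)."""
    candidates = [c for c in caseload
                  if c["stage"] == stage and c["id"] not in excluded_ids]
    return rng.choice(candidates) if candidates else None

rng = random.Random(0)  # fixed seed so the example is reproducible
caseload = [{"id": 1, "stage": "beginning"}, {"id": 2, "stage": "middle"},
            {"id": 3, "stage": "middle"}, {"id": 4, "stage": "end"}]
print(sample_sessions(caseload, rng))
```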

Once the therapy sessions were sampled, Mathematica sent each site coordinator a file with the names of the selected clients and therapy session dates, as well as direct web survey links for use by the clinicians, supervisors, and clients. Site coordinators then distributed paper and/or electronic survey alerts to participating staff 48 hours before and on the day of a selected session to remind them of the need to complete the survey following the selected therapy session. Site coordinators provided follow-up reminder letters and/or emails to staff with delayed survey responses. Appendix F depicts the data collection process.

When sampled clients checked in for their appointment, site coordinators described the project and its associated risks and benefits and invited them to participate. Clients were provided with written information about the project, information on how to access the survey online, and, if desired, a paper copy of the survey with a pre-paid, pre-addressed return envelope. In sites with local computers, clients were also given the option to complete the survey on-site before leaving.

FIGURE IV.3. Sampling Process
FIGURE IV.3, Diagram:  See the “Sample selection and survey administration” section for a full description of the sampling process.

Summary of response rates by site. A total of 144 therapy sessions were sampled (see Table IV.4). After accounting for attrition and refusals to participate in the project, 98 percent of clinicians, 99 percent of supervisors, and 80 percent of clients completed the survey. One clinician and one supervisor dropped out of the study; new or already participating staff replaced them. A quarter of sampled clients discontinued treatment or missed three consecutive therapy sessions; however, in over half of those cases, we were able to sample a replacement client from the clinician's caseload.

TABLE IV.4. Summary of Completed Surveys
    Total Number of Sampled Sessions Attrition from Treatment with Replacement* Attrition from Treatment without Replacement Clients Declined to Participate with Replacement Clients Declined to Participate without Replacement Total Expected Completed Surveys Number of Completed Surveys Response Rate
Total Clinicians 144 1 0 NA NA 98 96 98%
Supervisors 144 1 0 NA NA 98 97 99%
Clients 144 21 15 0 11 97** 78 80%
Site A Clinicians 42 0 0 NA NA 34 34 100%
Supervisors 42 0 0 NA NA 34 34 100%
Clients 42 6 1 0 2 34 23 68%
Site B Clinicians 18 0 0 NA NA 10 10 100%
Supervisors 18 0 0 NA NA 10 10 100%
Clients 18 3 4 0 1 10 8 80%
Site C Clinicians 22 0 0 NA NA 15 15 100%
Supervisors 22 0 0 NA NA 15 15 100%
Clients 22 7 0 0 0 15 14 93%
Site D Clinicians 21 0 0 NA NA 14 12 86%
Supervisors 21 1 0 NA NA 14 14 100%
Clients 21 0 4 0 3 14 13 93%
Site E Clinicians 23 0 0 NA NA 18 18 100%
Supervisors 23 0 0 NA NA 18 17 94%
Clients 23 2 3 0 0 18 17 94%
Site F Clinicians 18 1 0 NA NA 7 7 100%
Supervisors 18 0 0 NA NA 7 7 100%
Clients 18 3 3 0 5 7 4 57%
* Attrition is defined as discontinuing treatment or missing 3 consecutive therapy sessions.
** Note that 1 participant's refusal was mailed in after the clinician and supervisor had completed their surveys.

5. Quantitative Analysis

The quantitative analyses were designed to answer the questions in Table IV.5.

a. Quantitative testing of the measure's theoretical structure

To identify the measure's theoretical structure and assess the necessity of each survey item across clinicians, supervisors, and clients, we conducted an exploratory factor analysis (EFA) and then used the resulting EFA model as a basis for confirmatory factor analyses (CFA).

Exploratory factor analysis. Factor analysis is a data-reduction tool commonly used in measure development. It examines the variability and correlation among survey items to determine whether a smaller set of underlying constructs (or factors) is being measured by the items. EFA is a data-driven approach that imposes no restrictions on the data, such as pre-existing ideas about the number of constructs in the measure or the patterns of relationships between the survey items. To identify the measure's underlying structure in the EFA, we combined the clinician, supervisor, and client survey item responses. At this stage, we did not account for respondent type; rather, we wanted to examine the overall factor structure. In CFA (described below), we conducted separate analyses by respondent type. We used the default oblique Geomin factor rotation method. This rotation method allows for correlation between factors but is equally robust if the factors are weakly correlated or not correlated at all. Because the factor-analytic model included categorical outcome variables, we used the robust weighted least squares means- and variance-adjusted (WLSMV) estimator, which does not assume normally distributed variables and provides the best option for modeling non-normal categorical or ordered data (Brown 2015), to identify the measure's underlying structure. Once we identified the EFA model, we then tested it in a CFA model.
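For readers who want to approximate this step outside Mplus, the sketch below uses the open-source Python factor_analyzer package, which supports oblique geomin rotation. This is a rough, illustrative analog only: factor_analyzer does not implement the WLSMV estimator used in the actual analysis, and the input file, item layout, and factor count shown here are hypothetical.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical input: one row per completed survey (clinician, supervisor,
# and client responses pooled, as in the EFA described above), one column
# per survey item.
items = pd.read_csv("survey_items.csv")

# Oblique geomin rotation mirrors the Mplus default noted above; 'minres'
# estimation stands in for WLSMV, which factor_analyzer does not offer.
efa = FactorAnalyzer(n_factors=5, rotation="geomin_obl", method="minres")
efa.fit(items)

print(efa.loadings_)              # item-factor loadings
print(efa.get_factor_variance())  # variance explained by each factor
```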

TABLE IV.5. Quantitative Pre-Testing and Analysis of Survey Measure
Criterion Testing Question(s) Data Analysis
Importance Does performance on the measure vary by respondent type?
How does performance vary by respondent type when different approaches to scoring the measure are applied?
Descriptive analysis (mean, range, outliers) of performance
Factor-analytic structure How many underlying psychotherapeutic constructs does the measure include?
What does the factor structure imply regarding the number of items in the measure?
EFA and CFA
Reliability: Internal Consistency To what extent do items in each factor measure the same construct? Alpha statistic
Reliability: Inter-rater To what extent is there agreement between clinicians, supervisors, and clients in their survey responses? Agreement using AC1 statistic
Validity To what extent does the measure distinguish between clinicians who do and do not deliver elements of evidence-based psychotherapy when supervisor ratings are used as the gold standard? Sensitivity and specificity analyses
Feasibility On average, how long does it take participants to complete the measure? Descriptive analysis (means and ranges)

Confirmatory factor analysis. CFA relies on both empirical and conceptual foundations to guide the specification and evaluation of the factor-analytic model. It is used to test how well a theoretical model fits the data. Unlike in EFA, in CFA the number of factors and the pattern of item-factor loadings are specified in advance. We conducted individual CFAs for the clinician, supervisor, and client samples to further validate the model identified in the EFA. We estimated the models using a Bayes estimator (with flat priors, 50,000 Markov chain Monte Carlo iterations, and two parallel chains), which is less sensitive to sample size (see Heerweg 2014) and does not allow model parameters to fall outside a plausible range (for example, correlations above one).[4] We pursued an iterative approach to model-building that included removing items with low loadings (r < 0.40) on the latent factor and examining the resulting fit of the model, and we made recommendations regarding future revisions to the surveys. We measured model fit using the posterior predictive p-value (PPP), a Bayesian analog of goodness-of-fit statistics based on the usual chi-square test of the null hypothesis against the alternative hypothesis. The general idea behind posterior predictive checking is that there should be little, if any, discrepancy between data generated by the model and the actual data themselves (Kaplan and Depaoli 2012). Hence, p-values greater than 0.05 indicate that the null hypothesis of little discrepancy between the model and the data cannot be rejected and that the model fits the data sufficiently well.
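As a rough open-source analog, a CFA with a pre-specified factor structure can be fit in Python with the semopy package, as sketched below. Two caveats: semopy estimates the model by maximum likelihood rather than the Bayes estimator described above, and the factor names, item assignments, and input file here are hypothetical (in the actual project, the structure came from the preceding EFA).

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical two-factor specification in lavaan-style syntax.
desc = """
factor1 =~ item1 + item2 + item3 + item4
factor2 =~ item5 + item6 + item7 + item8
"""

# Hypothetical per-respondent-type file (e.g., supervisor responses only),
# since separate CFAs were run for clinicians, supervisors, and clients.
data = pd.read_csv("supervisor_items.csv")

model = Model(desc)
model.fit(data)           # maximum likelihood, not the Bayes estimator used here
print(model.inspect())    # factor loadings and other parameter estimates
print(calc_stats(model))  # fit indices (chi-square, CFI, RMSEA, etc.)
```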

The EFA and CFA models were fitted in Mplus 7.1 (Muthén and Muthén 1998-2012).

b. Quantitative testing of internal consistency

The internal consistency reliability testing was designed to examine how well the items in each of the five factors correlate with each other and measure the factor's underlying construct. We used the Kuder-Richardson Formula 20 (KR20) and Cronbach's alpha coefficients. The KR20 is appropriate for dichotomous items and Cronbach's alpha for continuous items.
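Both statistics share the same form. The minimal NumPy sketch below shows one common formulation (using sample variances); the arrays are assumed to hold the items belonging to a single factor.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha. Rows are respondents; columns are the items
    belonging to one factor."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def kr20(items: np.ndarray) -> float:
    """Kuder-Richardson Formula 20 for dichotomous (0/1) items."""
    k = items.shape[1]
    p = items.mean(axis=0)                      # proportion endorsing each item
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)
```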

c. Quantitative testing of inter-rater agreement

To assess the extent to which clinicians, supervisors, and clients agreed in their assessment of the clinician's delivery of each survey item, we examined item-level agreement and a weighted average of overall inter-rater agreement using Gwet's first-order agreement coefficient (AC1) statistic (Gwet 2014). The AC1 is based upon the assumption that the probability of agreement by chance should not exceed 0.50, whereas the probability of chance agreement for the more traditionally used Cohen's (1960) kappa can be any value between zero and one.[5]
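For two raters (for example, supervisor and clinician ratings of the same session), AC1 can be computed directly from the marginal category proportions. A minimal sketch follows; the example data are hypothetical.

```python
import numpy as np

def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 for two raters rating the same sessions on one item."""
    a, b = np.asarray(ratings_a), np.asarray(ratings_b)
    cats = np.unique(np.concatenate([a, b]))
    pa = np.mean(a == b)  # observed percent agreement
    # Average marginal proportion for each category across both raters.
    pi = np.array([(np.mean(a == c) + np.mean(b == c)) / 2 for c in cats])
    # Gwet's chance-agreement probability (Gwet 2014).
    pe = (pi * (1 - pi)).sum() / (len(cats) - 1)
    return (pa - pe) / (1 - pe)

# Hypothetical yes/no item: supervisor vs. clinician on 8 sampled sessions.
supervisor = [1, 1, 0, 1, 0, 1, 1, 0]
clinician  = [1, 0, 0, 1, 0, 1, 1, 1]
print(gwet_ac1(supervisor, clinician))
```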

d. Approaches to establishing performance metrics

For the measure to be useful for quality improvement purposes, stakeholders need metrics to assess performance. There are no clear, established standards for how to score this type of measure. As a first step in developing a measure score, we assessed whether item endorsement varied by beginning, middle, and end of treatment. If there were variation by stage of treatment, our approach to scoring would need to account for it; otherwise, it could overestimate or underestimate a clinician's delivery of evidence-based psychotherapy.

We conducted analysis of variance with post-hoc group comparisons to compare the mean scores of each factor identified in the CFA for each phase of treatment (beginning, middle, and end). No statistically significant differences across phases of treatment were observed. To facilitate comparison across samples (clinicians, supervisors, and clients) and to stabilize variance, factor scores for each domain were standardized to have a mean of zero and a standard deviation of one. Next, we examined the distribution of scores for each domain by respondent type (supervisor, clinician, and client). To determine potential performance thresholds, we examined various cut-offs (median, mean, inter-quartile range). We selected two thresholds for use in the sensitivity and specificity analyses (described below): the median, a lower bound threshold, and the 75th percentile, an upper bound threshold. Once we created thresholds for each domain, we then created summary scores across all the domains and an overall score. Clinicians who scored above these thresholds were classified as delivering evidence-based psychotherapy.
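The scoring steps above translate into a few lines of pandas. The sketch below shows one way to standardize domain scores and apply the two thresholds; the file name and domain layout are hypothetical, and the summary-score rules at the end are illustrative rather than the project's final specification.

```python
import pandas as pd

# Hypothetical input: one row per clinician, one column per domain
# containing that clinician's factor score.
scores = pd.read_csv("factor_scores.csv")

# Standardize each domain to mean 0, SD 1.
z = (scores - scores.mean()) / scores.std(ddof=1)

# Candidate performance thresholds for each domain.
lower = z.median()         # lower bound threshold
upper = z.quantile(0.75)   # upper bound threshold (75th percentile)

# Flag clinicians scoring above each threshold in each domain.
above_lower = z.gt(lower, axis=1)
above_upper = z.gt(upper, axis=1)

# Illustrative summary scores: share of domains above the threshold,
# and an overall flag requiring every domain to exceed it.
summary_score = above_lower.mean(axis=1)
overall_flag = above_lower.all(axis=1)
```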

e. Quantitative testing of validity

In addition to gathering feedback from the focus group and site coordinator debriefings on the face validity of the measure (described below), we also attempted to assess the measure's criterion validity by calculating its sensitivity and specificity. For the purposes of these tests, we deemed the supervisor ratings to be the gold standard. In the absence of data from an objective, independent rater, we assumed that supervisors would be the least biased raters and, among supervisors, clinicians, and clients, the raters most trained and experienced in evaluating the performance of clinicians. To calculate specificity and sensitivity, we utilized the performance metrics described earlier and compared supervisor ratings against clinician and client ratings.
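With supervisor classifications treated as the gold standard, sensitivity and specificity reduce to simple counts over the paired classifications. A minimal sketch, with hypothetical example data, follows.

```python
import numpy as np

def sensitivity_specificity(gold, test):
    """Sensitivity and specificity of binary classifications (1 = delivers
    evidence-based psychotherapy) against gold-standard supervisor ratings."""
    gold = np.asarray(gold, dtype=bool)
    test = np.asarray(test, dtype=bool)
    tp = np.sum(gold & test)    # true positives
    fn = np.sum(gold & ~test)   # false negatives
    tn = np.sum(~gold & ~test)  # true negatives
    fp = np.sum(~gold & test)   # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical classifications for 8 clinicians.
supervisor = [1, 1, 0, 1, 0, 0, 1, 0]
clinician  = [1, 1, 1, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(supervisor, clinician)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```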

6. Approach to Gathering Stakeholder Feedback

In addition to quantitative testing, we gathered feedback on the measure through stakeholder focus groups and site coordinator debriefings. Feedback focused on the importance of the measure to improving quality of care, its face validity, facilitators and barriers to measure testing, the feasibility of implementing the measures (including the burden of data collection), and the usability of the measure results (whether they would be useful for quality improvement efforts). Here, we briefly describe each type of data collection.

Focus groups. In January 2015, we hosted five one-hour telephone focus groups to gather information on the face validity, usability, and feasibility of the measure. Participants represented four types of stakeholders:

  • Clinicians and clinical supervisors. Focus group participants included eight clinicians and clinical supervisors who had previously completed the survey. Two clinicians who were unable to attend submitted written feedback.

  • Clients. Participants included four adults in treatment for PTSD who had previously completed the survey. Clients received a $20 gift card for their participation.

  • Behavioral health organization administrators. Participants included three administrators from organizations that had participated in pre-testing the survey and one administrator who represented a behavioral health organization that was interested in but unable to participate in pre-testing the survey. One administrator who could not participate submitted written feedback.

  • Health plans and payers. Participants included eight representatives from four organizations: two managed behavioral health organizations and two Medicaid managed care organizations.

All questions were designed to address the main topic areas of usability, feasibility, and validity; we tailored the questions to fit the particular expertise of each type of focus group.

Site coordinator debriefings. As described in Section IV.B.4, Mathematica/NCQA communicated frequently with the site coordinators throughout data collection, via email updates at least every other day and weekly group meetings on the status of data collection activities.

In addition to the information gathered in the weekly site coordinator meetings, in February and March 2015, we gathered written debriefing information from five sites on ways to improve and streamline data collection processes and on their perceptions of the clinical staff's response to the measure. Site coordinators were also asked to provide their assessment of the measure's face validity, though only some chose to do so. One site did not provide any debriefing information.

7. IRB Approval and OMB Clearance

Prior to the start of data collection, we submitted applications to both the New England Institutional Review Board (NEIRB) and the U.S. Office of Management and Budget (OMB) that outlined the project and its objectives, the proposed study design, sampling and data collection procedures and materials, our security plan, and data analyses. We received approval from the NEIRB on April 29, 2014, and the OMB on May 22, 2014.

8. Processes and Procedures to Maintain Security of Data

We implemented the security controls and processes we routinely use on projects that involve sensitive information. Organizations transmitted data to Mathematica via a password-protected, encrypted secure file transfer site. Access to sensitive data was limited to the immediate project team, and the data were stored on a secure, password-protected network drive. We encrypted data in transit and at rest and will securely destroy any data collected at the end of the project. Hard-copy surveys were mailed or faxed to Mathematica staff for manual entry and stored in a secure, locked file cabinet. We will shred them at the end of the project. These safeguards are consistent with the Privacy Act of 1974; the Computer Security Act of 1987; the Health Insurance Portability and Accountability Act; the Federal Information Security Management Act of 2002; OMB Circular A-130; and National Institute of Standards and Technology computer security standards.