In recent years, several researchers and child welfare agencies have begun developing predictive risk models to support child welfare decision-making. Predictive analytics is a form of risk modeling that uses historical data to model relationships among many factors and estimate a probability score for an outcome of interest. In the child welfare context, predictive risk models are intended to help caseworkers synthesize data from various sources to inform decision-making. However, given the overrepresentation of African American, Hispanic, and American Indian and Alaska Native children in the child welfare system, some have raised concerns that such models may inadvertently incorporate racial and ethnic biases, which could reinforce existing disparities and erode trust in child welfare services.
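At its core, the probability score described above typically comes from a statistical model that combines weighted factors into a single number between 0 and 1. The sketch below illustrates the mechanics with a logistic function; the factor names and coefficients are invented for illustration and do not come from any real child welfare model.

```python
import math

def risk_score(features, weights, bias):
    """Combine weighted factors into a probability via the logistic function."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for three illustrative factors
# (e.g., prior referrals, caregiver age, prior service involvement).
weights = [0.8, -0.05, 0.3]
bias = -2.0

score = risk_score([2, 30, 1], weights, bias)
assert 0.0 < score < 1.0  # the model always outputs a probability
```

In practice the weights are estimated from historical administrative data, which is precisely how historical disparities in that data can flow into the score.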
This project examined best practices for preventing and mitigating racial and ethnic bias in child welfare agencies' use of predictive models. The final report:
- Describes the potential pros and cons of using predictive risk models in child welfare settings
- Reviews actions developers and child welfare agencies can take to detect and mitigate risks of racial and ethnic bias in these models
- Discusses the significance of transparency and explainability in promoting trust in predictive risk models, including what information about a model should be disclosed to people using and affected by it
- Identifies steps federal agencies can take to promote fairness in the use of predictive risk models for child welfare
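One concrete bias-detection step of the kind the second bullet describes is auditing whether a model's error rates differ across racial or ethnic groups, for example by comparing false positive rates. The sketch below uses entirely synthetic records (the field names and data are illustrative assumptions, not real child welfare data):

```python
def false_positive_rate(records):
    """Share of no-event cases that the model nonetheless flagged as high risk."""
    negatives = [r for r in records if not r["event"]]
    if not negatives:
        return 0.0
    return sum(r["flagged"] for r in negatives) / len(negatives)

# Synthetic illustration only: "flagged" is the model's decision,
# "event" is whether the outcome of interest actually occurred.
records = [
    {"group": "A", "flagged": True,  "event": False},
    {"group": "A", "flagged": False, "event": False},
    {"group": "A", "flagged": True,  "event": True},
    {"group": "B", "flagged": True,  "event": False},
    {"group": "B", "flagged": True,  "event": False},
    {"group": "B", "flagged": False, "event": True},
]

by_group = {
    g: false_positive_rate([r for r in records if r["group"] == g])
    for g in sorted({r["group"] for r in records})
}
gap = abs(by_group["A"] - by_group["B"])  # disparity in false positive rates
```

A large gap means one group is disproportionately flagged without a subsequent event; audits of this kind are one input to the mitigation actions the report reviews, not a complete fairness assessment on their own.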