LESSONS FROM OTHER FIELDS

How did Michigan’s Children’s Services Agency develop executive performance dashboards?

By Lynda Blancato, project leader, and Scott Kleiman, managing director, Harvard Kennedy School Government Performance Lab*

In 2019, with support from the Harvard Kennedy School Government Performance Lab, Michigan’s Children’s Services Agency designed new executive performance dashboards that offer a comprehensive picture of agency operations through 42 carefully selected metrics. This brief highlights the Michigan experience and includes the full list of selected metrics and an example dashboard, all of which can inform other child welfare agencies as they take steps toward transforming their own systems.

For more detailed information about how effective dashboards can provide agency leaders with high-level visibility into operational and outcome trends throughout the child welfare system and lead to action, see the companion brief, How can executive performance dashboards support child welfare agency effectiveness?

Background

New leadership at the Michigan Children’s Services Agency sought to revamp the performance dashboards regularly reviewed by the senior leadership team. The agency already had dashboards, but leaders found them difficult to use for generating theories about how to change practice and for tracking whether those changes were contributing to better results.

Many of the original metrics had been selected to satisfy reporting requirements associated with a long-standing consent decree rather than for their potential operational impact. The agency was at risk of new crises hitting with little warning, because few of the featured dashboards reflected measures known to be early indicators of longer-term system health or of child and family outcomes. And while disaggregation by local office was common, regional leaders rarely received information about their performance over time, making it hard for them to know where they were making progress and where to offer further guidance to their teams.

Selecting the right metrics

In developing the new dashboards, agency leaders began by brainstorming current areas of concern in agency operations and identifying “blind spots” in existing dashboards where leadership lacked a clear picture of performance. 

For each major area of the child welfare system (centralized intake, field investigations, open cases, and out-of-home placements), the department prioritized the most important management questions in each of three performance categories: system capacity, program quality, and child and family outcomes. It then designed a dashboard metric to present data for each of these questions.

In addition, the department incorporated three other groups of metrics that would supplement those identified for specific functional areas:

  • It elevated a suite of metrics related to system-wide trends. These included expenditures relative to budget, counts of statewide child maltreatment fatalities and serious injuries, and — to observe changes in the effectiveness of the state’s reporting ecosystem — the ratio of the number of maltreatment allegations to the number of serious child injuries in Medicaid billing records.
  • As the leadership team in Michigan was particularly interested in regularly measuring progress toward reducing inequities for children and families, the agency additionally developed a set of metrics to examine disproportionality and disparities by race and ethnicity at key decision points.
  • To enable early detection of problems, the dashboards would track both short- and long-term indicators for some of the most critical measures. For example, to monitor the system’s success in reducing the occurrence of repeat maltreatment, the dashboards reported rates of substantiated subsequent maltreatment at one month, six months, and 12 months following case closure (a computational sketch follows this list).
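
As an illustration only, here is a minimal sketch of how a multi-horizon indicator like repeat maltreatment might be computed. All table and column names (closures, substantiations, child_id, and so on) are hypothetical assumptions for the example, not Michigan’s actual data schema.

```python
import pandas as pd

# Hypothetical inputs; the schema is illustrative, not the agency's.
closures = pd.DataFrame({
    "child_id": [1, 2, 3],
    "closure_date": pd.to_datetime(["2019-01-15", "2019-02-01", "2019-03-10"]),
})
substantiations = pd.DataFrame({
    "child_id": [1, 3],
    "substantiation_date": pd.to_datetime(["2019-02-10", "2020-01-05"]),
})

def repeat_maltreatment_rate(closures, substantiations, window_days):
    """Share of children with a substantiated report within window_days of case closure."""
    merged = closures.merge(substantiations, on="child_id", how="left")
    gap = (merged["substantiation_date"] - merged["closure_date"]).dt.days
    hit = (gap >= 0) & (gap <= window_days)
    # Count each child once, even with multiple subsequent substantiations.
    return merged.assign(hit=hit).groupby("child_id")["hit"].any().mean()

for label, days in [("1 month", 30), ("6 months", 182), ("12 months", 365)]:
    print(label, round(repeat_maltreatment_rate(closures, substantiations, days), 3))
```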

The appendix to this brief features the full set of Michigan’s performance dashboard metrics, as well as the management question each metric was designed to address.
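
For agencies that want to keep such a catalog in an analyzable form, one option is to store it as a simple nested structure keyed by functional area and performance category. The sketch below is illustrative: the entries shown are drawn from the appendix, but the representation itself is an assumption, not something Michigan published.

```python
# Metric catalog sketch: functional area -> performance category -> (question, measure).
METRIC_CATALOG = {
    "Centralized intake": {
        "Capacity": [
            ("Is the volume of contacts straining the capacity of our system?",
             "Volume of contacts by type"),
        ],
        "Quality": [
            ("Are we making consistent screening decisions?",
             "Share of maltreatment reports screened in for investigation"),
        ],
        "Outcomes": [
            ("Are we screening out reports that may have benefited from being screened in?",
             "Share of screened-out families recontacting centralized intake within 3 months"),
        ],
    },
    # The remaining areas (system level, field investigations, ongoing cases,
    # out-of-home placement) would follow the same shape.
}

# Enumerate the catalog, e.g., to generate a dashboard index.
for area, categories in METRIC_CATALOG.items():
    for category, entries in categories.items():
        for question, measure in entries:
            print(f"{area} | {category} | {question} -> {measure}")
```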

Visualizing the data for each dashboard metric

Once Michigan selected the metrics for the new dashboards, the department’s data team set out to design how the data for each metric would be visualized. One of the new dashboards focused on monitoring operations and performance at the central intake hotline, which receives calls from the public about allegations of abuse or neglect. This dashboard (see the example dashboard below) shows whether staff capacity is keeping pace with the volume of incoming calls to the central intake hotline. The chart on the left shows the caseload of intake specialists, defined as the average number of calls each intake specialist receives and processes every month. The chart on the right shows the share of calls abandoned each month.
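
Both metrics reduce to straightforward arithmetic over the hotline’s call log. The sketch below shows one plausible computation; the call-log fields (received_at, specialist_id) are assumptions for illustration, and treating the count of distinct answering specialists as the staffing denominator is an approximation rather than the agency’s definition.

```python
import pandas as pd

# Hypothetical call log; field names are illustrative.
calls = pd.DataFrame({
    "received_at": pd.to_datetime(["2019-05-01 09:05", "2019-05-02 14:30",
                                   "2019-06-03 10:00", "2019-06-03 10:12"]),
    "specialist_id": ["A", "B", "A", None],  # None => the call was never answered
})
calls["abandoned"] = calls["specialist_id"].isna()
calls["month"] = calls["received_at"].dt.to_period("M")

monthly = calls.groupby("month").agg(
    total_calls=("received_at", "size"),
    abandoned_calls=("abandoned", "sum"),
    active_specialists=("specialist_id", "nunique"),
)
# Left-hand chart: answered calls per active specialist per month.
monthly["caseload"] = (monthly["total_calls"] - monthly["abandoned_calls"]) / monthly["active_specialists"]
# Right-hand chart: share of calls abandoned each month.
monthly["abandoned_rate"] = monthly["abandoned_calls"] / monthly["total_calls"]
print(monthly)
```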

This dashboard incorporates several design elements that aid the leadership team in interpreting trends in the data and discussing potential follow-up actions: 

  • Monthly results over the past three years, revealing that caseload levels for intake specialists had slowly crept up over time, alongside a dramatic rise in the abandoned call rate.
  • A prompt that reminds leaders that a high rate of abandoned calls may put more children at risk of further harm, as reporters who abandon calls may not try to contact the agency again.
  • Solutions-focused guidance for interpreting trends, in this case pointing out that increases in either caseloads or the abandoned call rate may indicate a need to adjust central intake staffing or scheduling.

Other dashboards created as part of this process incorporate two additional design features: a target benchmark or reference line that helps leaders gauge the urgency of possible reforms, and a disaggregation of trends by operationally meaningful components to help identify practices worth spreading and units that may need additional support.
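
A reference line of this kind is simple to add in most charting tools. The following matplotlib sketch shows the idea; the data points and the 10% target are invented for illustration and are not Michigan’s figures.

```python
import matplotlib.pyplot as plt

# Invented data; the 10% target is a placeholder benchmark, not the agency's.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
abandoned_rate = [0.08, 0.11, 0.14, 0.19, 0.27, 0.33]
target = 0.10

fig, ax = plt.subplots()
ax.plot(months, abandoned_rate, marker="o", label="Abandoned call rate")
ax.axhline(target, linestyle="--", color="gray", label="Target (10%)")  # reference line
ax.set_ylabel("Share of calls abandoned")
ax.set_title("Central intake: abandoned call rate vs. target")
ax.legend()
plt.show()
```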

Example central intake dashboard from Michigan’s Children’s Services Agency

[Two charts: monthly intake specialist caseload (left) and monthly share of abandoned calls (right), over three years]

Using performance dashboards to drive change 

When wait times for child protection hotline calls grow too long, community members trying to report potential incidents of abuse or neglect may hang up and not call again, potentially leaving children at risk of further harm. During regular executive team meetings to review performance data, agency leaders in Michigan noticed a spike in the rate of abandoned calls to the central intake hotline. In early 2017, less than 5% of calls to the hotline were abandoned before being answered by an intake specialist. Over the following two years, the abandoned call rate steadily increased, then spiked to more than a third of calls in mid-2019.

This concerning trend in the data prompted agency leaders to revamp how central intake operated. Managers began by analyzing call data to identify “hot spots” (windows in the day when wait times were particularly long and the number of abandoned calls spiked) and adjusted staff schedules to provide additional coverage during those times; a sketch of this kind of analysis appears below. In addition, the management team designed a new report to track call volume and processing times by individual staff member in order to uncover effective practices used by high-performing staff and identify workers in need of additional coaching. Agency leaders also adjusted hiring practices, recruiting for intake specialist positions on a continuous basis rather than only when positions became available, so that a pool of candidates was ready to fill vacancies quickly as they arose. As these changes were implemented, the rate of abandoned hotline calls dropped from a high of over 35% to about 5%. The agency has continued to carefully monitor these trends using its new dashboards.
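
The hot-spot analysis described above amounts to grouping calls by time of day and flagging windows with unusually high abandonment. Here is a minimal sketch under assumed field names (received_at, abandoned); the threshold of “above the overall average” is one simple choice among many.

```python
import pandas as pd

# Hypothetical call log; column names are illustrative.
calls = pd.DataFrame({
    "received_at": pd.to_datetime(["2019-06-03 08:10", "2019-06-03 12:05",
                                   "2019-06-03 12:40", "2019-06-04 12:15",
                                   "2019-06-04 16:30"]),
    "abandoned": [False, True, True, True, False],
})
calls["hour"] = calls["received_at"].dt.hour

by_hour = calls.groupby("hour")["abandoned"].agg(calls="size", abandoned="sum")
by_hour["abandoned_rate"] = by_hour["abandoned"] / by_hour["calls"]
# "Hot spots": hours whose abandonment rate exceeds the overall average.
hot_spots = by_hour[by_hour["abandoned_rate"] > calls["abandoned"].mean()]
print(hot_spots)
```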

Appendix: Executive dashboards from Michigan’s Children’s Services Agency 

Below is the full set of performance dashboard metrics that Michigan’s Children’s Services Agency developed for monitoring its system operations and outcomes. Each dashboard measure is connected to a key management question that the agency seeks to answer through available data. 

The resources below may be useful for agency leaders working to develop or refine executive performance dashboards tailored to the priorities and needs of their own jurisdictions. 

SYSTEM LEVEL

Capacity
  1. Do we have sufficient staff capacity to effectively manage the needs of our system?
     Measure: Count of supervisors, field investigators, foster care case managers, other case-carrying staff, centralized intake specialists, staff in training, vacant positions
  2. Are we effectively allocating and deploying available funding?
     Measure: For the current fiscal year, comparison of allocated budget versus actual expenditures by month

Quality
  3. Is our reporting system functioning effectively?
     Measure: Ratio of reports of maltreatment to total number of serious child injuries in Medicaid billing records
  4. Are we effectively reducing disproportionality and disparities in outcomes across our system?
     Measure: Comparison by child race/ethnicity of: overall child population of the state, share of screened-in reports, share of substantiated maltreatment, share entering out-of-home care, share achieving permanency within 12 months, share in care 24+ months
  5. Are we reducing entries into out-of-home care and supporting children to exit care?
     Measure: Net entries to exits for out-of-home care

Outcomes
  6. Are we effectively reducing the occurrence of child fatalities and near fatalities?
     Measures: Among all child fatalities and near fatalities attributed to maltreatment, share with prior interaction with the child welfare system; count of all child fatalities from non-natural causes

CENTRALIZED INTAKE

Capacity
  1. Is the volume of contacts straining the capacity of our system?
     Measure: Volume of contacts by type, including call-in reports, online reports, written reports, and informational requests
  2. Do we have sufficient staff capacity to manage the volume of contacts?
     Measures: Ratio of total contacts processed to intake workers; among all calls presented, share of calls abandoned

Quality
  3. Are we processing contacts efficiently?
     Measure: Share of contacts processed with <1 hour, 1-3 hours, 3-5 hours, or 5+ hours between receipt of contact and screening decision
  4. Are we making consistent screening decisions?
     Measure: Among all reports of child maltreatment, share of reports screened in for investigation

Outcomes
  5. Are we screening out reports that may have benefited from being screened in?
     Measure: Among reports that were screened out, share of families with subsequent contact to centralized intake / screen-in within the following 3 months
  6. Are we screening in reports that potentially should have been screened out?
     Measures: Among all investigations, share resulting in a Category V disposition; number of reconsideration requests by outcome

FIELD INVESTIGATIONS

Capacity
  1. Is the volume of investigations straining the capacity of our system?
     Measure: Count of active and overdue investigations
  2. Do we have sufficient staff capacity to manage the volume of investigations?
     Measure: Share of staff with 11 or fewer investigations, 12 investigations, 13-14 investigations, 15+ investigations

Quality
  3. Are we making face-to-face contact with alleged victims in a timely way?
     Measure: Share of alleged victims with face-to-face contact within priority timeframes (24 or 72 hours)
  4. Are we making consistent decisions regarding the level of identified risk at investigation closure?
     Measure: Share of investigations with Category V, IV, III, II, and I dispositions
  5. Are we making consistent decisions to open ongoing cases or remove children to out-of-home settings?
     Measures: Share of Category I cases with out-of-home placements; share of Category III cases opening to CPS

Outcomes
  6. Are we closing cases that may have benefited from having services put in place?
     Measure: Among investigations that did not open to CPS, share with subsequent contact to centralized intake / screen-in within 3 months

ONGOING CASES

Capacity
  1. Is the volume of cases straining the capacity of our system?
     Measure: Count of children in open cases who are in-home or out-of-home
  2. Do we have sufficient staff capacity to manage the volume of ongoing cases?
     Measure: Share of staff with caseloads of 16 or fewer families, 17 families, 18-19 families, 20+ families
  3. Do we have sufficient service capacity to meet the needs of families?
     Measure: Number of families on waitlist by program type

Quality
  4. Are we making face-to-face contact with children on a monthly basis?
     Measure: Share of children with a face-to-face visit in the last 30 days
  5. Are we making face-to-face contact with parents and caregivers on a monthly basis?
     Measure: Share of primary caregivers / parents with a goal of reunification with a face-to-face visit in the last 30 days
  6. Are we regularly updating service plans to meet family needs and improve time to case closure?
     Measure: Share of families with updated service plans / family team meetings within the last 90 days

Outcomes
  7. Are we successfully providing supports that lower risk and keep children safe in-home?
     Measure: Count of cases escalating to Category I or II
  8. Are we effectively supporting families to care for their children and promote child safety and wellbeing?
     Measures: Share of Category III cases closing within 90 days; share of Category I and II ongoing in-home cases closing within 6 months, 12 months
  9. Are we effectively reducing the occurrence of repeat maltreatment?
     Measure: Share of children with substantiated subsequent maltreatment within 1 month, 6 months, 12 months of case closure

OUT-OF-HOME PLACEMENT

Capacity
  1. Is the volume of out-of-home placements straining the capacity of our system?
     Measures: Count of children in out-of-home care by placement type (kinship care-licensed, kinship care-unlicensed, foster care, residential, independent living, shelter, other); share of children in each placement setting by race/ethnicity
  2. Do we have sufficient staff capacity to support the volume of out-of-home placements?
     Measures: Share of foster care workers with caseloads of ≤14, 15, 16-17, or 18+ children; share of state and private worker caseloads meeting target
  3. Do we have enough available beds to meet the need for out-of-home care?
     Measure: Utilization of available beds in residential, state foster care, and private agency foster care

Quality
  4. Are we successfully placing children in appropriate placements?
     Measures: Count of sibling groups placed separately; count of children under 12 placed in residential / shelter; share of children placed out of county
  5. Are we supporting the placement stability of children in out-of-home care?
     Measure: Share of children experiencing a placement disruption within 30 days of entering a new placement
  6. Are we maintaining family connections for children placed out-of-home?
     Measure: In cases with a goal of reunification, share of children with visitation with their parents never / twice / four times in the past month

Outcomes
  7. Are we bringing youth who run away back into care quickly?
     Measure: Count of runaway youth by length of time on runaway status (0-7 days, 8-30 days, 31+ days)
  8. Are we making sure children do not linger in foster care?
     Measure: Number of children in out-of-home care by length of stay (0-11 months, 12-23 months, 24-35 months, 36+ months)
  9. Are we matching children who have adoption goals to adoptive families in a timely way?
     Measure: Count of children waiting for adoption by adoption status (matched to family, waiting for family)
  10. Are we supporting families to successfully navigate the challenges of reunification?
      Measure: Count of families enrolling and persisting in aftercare following reunification
  11. Are we effectively emancipating youth for successful transitions to adulthood?
      Measures: Count of youth exiting to emancipation; share enrolled in a transitional support program; share of youth aging out by race/ethnicity
  12. Are we keeping children safe while in out-of-home care?
      Measure: Count of substantiated incidents of maltreatment in care by placement type
  13. Are we supporting children to achieve permanency in a timely way?
      Measure: Share of children achieving permanency within 6 months, 12 months, 24 months of entering care
  14. Are we successfully reunifying children with their families?
      Measure: Share of children exiting care to reunification, adoption/guardianship, or emancipation
  15. Are we reducing re-entry into care? (Are reunified families staying together?)
      Measure: Of all children exiting care 12 months ago, share that re-entered within 1 month, 6 months, 12 months

*The Government Performance Lab (GPL) at the Harvard Kennedy School of Government conducts research on how governments can improve the results they achieve for their citizens. An important part of this research model involves providing hands-on technical assistance to state and local governments. Through this involvement, the GPL gains insights into the barriers that governments face and the solutions that can overcome these barriers. By engaging current students and recent graduates in this effort, the GPL is able to provide experiential learning as well. The GPL wishes to acknowledge that these materials are made possible by grants and support from Casey Family Programs and the Laura and John Arnold Foundation. For more information about the Government Performance Lab, please visit our website at http://govlab.hks.harvard.edu.