© 2006 Jordan Institute | Vol. 11, No. 3
How the Federal Government Uses Outcomes for Program Accountability: The Good, the Bad, and the Scary
by Ray Kirk, Ph.D.

This issue of Practice Notes focuses on outcomes. Although our local program or agency budgets normally reflect the activities we engage in during service delivery, those activities should result in observable or measurable outcomes for service recipients. Generally, we want to observe or measure some positive change in skills, resources, circumstances, safety, or well-being among the people we serve. Outcomes observed at the level of the service recipient are essential to knowing whether we are achieving the desired results, and they can be used to improve individual case practice or even entire programs. These are good uses of outcome measures. But outcome measures are also being used at the systems level by federal offices and agencies for purposes of accountability. It is important to be aware of these activities, because future federal funding of human service programs may well depend on systems-level performance on global and sometimes arbitrary outcome measures.

The Child and Family Services Review
Through the Child and Family Services Review (CFSR), the federal Children's Bureau assesses each state's child welfare system against national standards for safety, permanency, and well-being. During the first round of CFSRs the standards were set very high, and not a single state passed its initial review. As a result, each state developed a program improvement plan, or PIP, that had to be approved by the Children's Bureau. The PIPs are intended to improve each state's performance on the specified outcome measures so that states come closer to the standards during subsequent CFSRs. This sounds like straightforward accountability, and no one would argue against accountability. But there are subtle problems with the process, and states will struggle to overcome those problems in order to improve their CFSR scores.

For example, if a state has a higher than normal child removal rate, it is probably removing some children who really don't need to be removed from their homes. That state is therefore also likely to have a higher than usual reunification rate, because children who never needed to come into care tend to return home quickly. On its face, a high reunification rate is a good thing, but if the state in question focuses on lowering its child removal rate, and succeeds, it will also likely lower its reunification rate. Thus, improvement on one of the CFSR standards may have the unintended consequence of lowering performance on another standard. In a sense, the outcome measures in question compete with one another, and simultaneous improvement on all of the CFSR outcome measures and their companion standards is unlikely.

State agencies will struggle to accommodate their PIPs, and chasing those CFSR standards may have programmatic and policy consequences that, in turn, affect local program funding and service priorities. At some point in the future, as yet unknown, states' individual funding from the federal government may be linked to CFSR performance. Thus, the stakes are high with respect to outcome accountability.

The Program Assessment Rating Tool
The Program Assessment Rating Tool (PART) is the instrument the federal Office of Management and Budget (OMB) uses to rate the performance of federal programs. A full explanation of how PART works, as well as results of the nearly 800 PART reviews completed to date, can be found at http://www.whitehouse.gov/omb/expectmore. For the purpose of this discussion, it is enough to know that a PART review involves answering 25 questions on the PART instrument and then receiving an overall rating of program performance. A program may be rated as effective, moderately effective, adequate, ineffective, or results not demonstrated.
The different ratings relate to the degree to which the program sets and achieves ambitious goals, and the degree to which the program is well managed and efficient. Like the CFSRs, the PART process sounds like straightforward accountability, based in part on outcomes and in part on efficiency. And, as with the CFSRs, few would argue that effectiveness and efficiency are not desirable. But there are potential pitfalls in the PART process when it is applied to child and family service programs. These pitfalls relate to the degree to which outcomes are known and measurable, and to how OMB defines efficiency.

For example, the Office of Child Support Enforcement received a rating of "effective," the highest rating possible. Examination of the components of the program reveals that they are well specified in the program's legislation (e.g., "locate noncustodial parents, obtain child and spousal support"). Further, although government information systems are often poor at tracking vulnerable or disadvantaged service recipients, they are usually very good at tracking money, and child support enforcement is all about getting the money and providing it to the intended recipients. For Child Support Enforcement, the outcomes are clear, and measuring them is fairly easy, both conceptually and in practice. Efficiency is also easy to address: simply divide the amount of money the program brings in by the amount it costs to administer.

But what about programs that are highly varied and whose outcomes are less clearly defined? Consider, for example, the national Community-Based Child Abuse Prevention (CBCAP) program. The legislative purpose of CBCAP is to "…support community efforts to develop, operate, expand, and enhance initiatives aimed at the prevention of child abuse and neglect; [and] to support networks of coordinated resources and activities to better strengthen and support families to reduce the likelihood of child abuse and neglect…" There are many different ways to interpret the true intent of this legislative mandate. The CBCAP legislative purpose does not read like a specific program (as child support enforcement does) but rather like a set of guidelines for states to follow, within which there is great flexibility. In fact, there are 50 separate state CBCAP programs, rather than a single CBCAP program operating in 50 states. In 40% of states the CBCAP money is administered by an independent Children's Trust Fund; in the remainder, the funds are administered by the state child welfare agency. There is no national database of CBCAP programs or activities, and the state CBCAP lead agencies do not engage in direct service.

The PART process seems to be based on an expectation that program outcomes will be very clear and uniform throughout the country, and that measurement data will be available. With respect to CBCAP, this is not the case. As a result, OMB fell back on the language in the preamble to the CBCAP legislation and specified one outcome to which all CBCAP programs will be held accountable: a decrease in the rate of first-time victims of child maltreatment. So, 50 disparate state programs that do not engage in direct service are being held responsible for changing the national statistic on first-time victims of child abuse.
Furthermore, since there is no national CBCAP database, OMB will rely on data from the National Child Abuse and Neglect Data System (NCANDS), a system to which CBCAP programs do not contribute. A skeptical view of this arrangement is that CBCAP has been forced to fit into a review process that is not appropriate for its legislative purpose, relying on vicarious data sources. Similar problems of "fit" arise when efficiency is considered, since the amount of money that states can use for administrative purposes is fixed by law and the remainder of each state's CBCAP grant is distributed to other agencies and providers. Although outcome accountability is generally a good thing, the OMB PART attempts to hold the Community-Based Child Abuse Prevention program accountable for outcomes that it does not directly influence, using data that it does not collect or contribute. Even the Children's Bureau agrees. As this article is being written, the Children's Bureau and a working group of state CBCAP representatives are trying to negotiate a different outcome or set of outcomes for OMB to use, ones that relate more directly to the CBCAP legislation and for which the state CBCAP programs can provide the measurement data.

Conclusion