Program Assessment Rating Tool
U.S. Office of Management and Budget
United States

The Problem

The U.S. federal government spends more than $2 trillion each year through hundreds of programs operated by about 200 departments and agencies. However, prior to 1993, there was little government-wide attention paid to how effectively the programs operated or whether some programs worked better than others. This condition was reinforced by the way the agencies and the Office of Management and Budget (OMB) made their budget trade-off decisions: their assessment criteria and decision-making processes were not released to the departments and agencies or to the public, and information on program results and effectiveness was used irregularly and inconsistently in their decision making.

In 1993, Congress passed the Government Performance and Results Act. This law required agencies to develop multi-year strategic plans, annual performance plans, and measures of performance for major programs, and to report publicly each year on their progress. The law became fully operational in 2000, and a supply of performance data was being collected and reported by each agency. However, the Government Accountability Office (GAO) reported that demand for the data among program managers and decision makers was low: the new performance information was not being widely used to inform program improvement or budget decisions.

In 2002, OMB launched an effort to methodically assess and rate the effectiveness of each major federal program. It then used this information to identify opportunities to improve program performance and to inform its management and budget allocation decisions. It also reported the results of its assessments publicly to encourage wider discussions about program effectiveness and to encourage improvements.

This assessment initiative was conducted via the Program Assessment Rating Tool (PART). OMB determined that there were about 1,000 major federal programs and that it would assess 200 programs each year using the PART. The first round of the initiative therefore took five years to complete, although OMB began reporting the results of its assessments annually from the start.

The PART consists of a four-part survey of 25-30 questions in total, completed for each major federal program. The questions assess: (1) whether the program’s purpose and design are clear and defensible, (2) whether the agency has set valid, long-term strategic goals for the program, (3) the quality of the program’s management, and (4) whether actual program results are improving relative to the goals set for the program in the strategic plan.

Questions in the tool are tailored to the type of program being assessed. For example, a different subset of questions is used depending on whether a program is regulatory, direct service delivery, or research and development; OMB developed seven such subsets in all. In addition, each of the four parts of the survey is weighted differently to reflect its level of significance. For example, the part on strategic planning has a weight of 10 points, while the part on program results has a weight of 50 points.
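
To make the scoring mechanics concrete, the sketch below computes a hypothetical weighted PART score. The 10-point and 50-point weights come from the description above; the remaining two weights (20 points each), the rating bands, and the sample section scores are assumptions for illustration only, not official OMB figures.

```python
# A minimal sketch of weighted PART scoring, for illustration only.
# The 10 (planning) and 50 (results) weights are stated in the text;
# the two 20-point weights and the rating bands are assumptions.

SECTION_WEIGHTS = {
    "purpose_and_design": 20,   # assumed
    "strategic_planning": 10,   # stated in the text
    "program_management": 20,   # assumed
    "program_results": 50,      # stated in the text
}

# Assumed bands mapping a 0-100 score to a qualitative rating.
RATING_BANDS = [
    (85, "Effective"),
    (70, "Moderately Effective"),
    (50, "Adequate"),
    (0, "Ineffective"),
]

def weighted_score(section_scores):
    """Combine per-section scores (each 0-100) into one 0-100 score."""
    return sum(
        SECTION_WEIGHTS[name] * score / 100
        for name, score in section_scores.items()
    )

def rating(score):
    """Map a numeric score onto a qualitative rating band."""
    for floor, label in RATING_BANDS:
        if score >= floor:
            return label
    return "Ineffective"

# Illustrative program: strong purpose and management, weaker results.
example = {
    "purpose_and_design": 90,
    "strategic_planning": 80,
    "program_management": 85,
    "program_results": 60,
}
total = weighted_score(example)
print(f"Weighted score: {total:.0f} -> {rating(total)}")
# Prints: Weighted score: 73 -> Moderately Effective
```

Because results carry half the weight under this scheme, a program with excellent design but weak results still rates only moderately, reflecting the tool’s emphasis on outcomes over process.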

Solution and Key Benefits

 What is the initiative about? (the solution)
The key benefit of the PART initiative was that it created a systematic, consistent, and transparent approach to collecting, documenting, reporting, and using program-level performance information. This information was used to improve program performance, assess progress, and inform program management and budget decisions. The key beneficiaries were the American taxpayers and program service recipients.

PART has systematically shifted the “unit of analysis” for performance improvement from agencies to programs. At the same time, it shifted the burden of proof of program effectiveness from OMB examiners to program managers, who must now gather the evidence and produce it. As a consequence, the PART is increasing the demand for performance information among program managers and decision makers in the executive branch.

To ensure consistency in its use, every OMB program examiner had to use the results of PART assessments when defending his or her budget proposals before the OMB Director. This made the approach systematic and consistent, and kept the assessments professional and rigorous. OMB reports that the reviews have contributed to improved program design and improved accountability.

The PART initiative is rooted in its transparency to the public. It is based on a consistent set of publicly available assessment criteria. Not only are the assessment criteria widely available; the results of each review are also transparent, providing the answers to the questions as well as the rationale and evidence backing them up. The results are posted on the Internet (www.ExpectMore.gov), and readers are invited to challenge the results or supplement the documentation with additional materials. Readers can judge for themselves the reasonableness of each assessment, weigh the perceived or reported effectiveness of related programs against one another, compare programs, identify best practices, and identify ways to improve performance.

Over time, the PART ratings have become tied to senior executive performance ratings and agency budgets. As a result, it is increasingly the basis for conversations within agencies about the value of their programs, how to improve them, and how to best measure their performance.

Actors and Stakeholders

 Who proposed the solution, who implemented it and who were the stakeholders?
The PART evolved during 2002 as a consequence of OMB Director Mitch Daniels’ increasing demands on his staff for performance information that he and other policy officials could use as part of the budget development and decision-making process.

OMB leadership recognized that to make the collection and reporting of performance information a sustained, routine part of the job of OMB program examiners, the effort would have to be embedded at the staff level in OMB’s four Resource Management Offices. They also recognized that it had to be seen as an initiative led by career program examiners, not by OMB’s political or management support staff.

The director of one of the Resource Management Offices, Marcus Peacock, formed a Performance Evaluation Team composed of career program examiners from across OMB. The Team, led by OMB examiners Diana Espinosa and Thomas Reilly, developed the PART as a response to Daniels’ demand for information.

The Team developed the initial questionnaire and reviewed it with the departmental deputy secretaries who comprised the President’s Management Council. The Team also supported the creation of the Performance Measurement Advisory Council, composed of non-governmental performance measurement experts, to advise it on implementation approaches.

The PART initiative gained the support of OMB Director Daniels and was subsequently backed by his successors. As the initiative evolved and grew, government-wide implementation shifted to OMB’s deputy director for management, Clay Johnson, who created a new staff office to manage the growing initiative and designated Robert Shea as its champion. Shea has led its implementation since 2003.

The key implementers of the initiative have been the OMB program examiners, in conjunction with agency personnel responsible for developing the supporting documentation.

The key stakeholders have been senior leaders, executives, and program managers in the agencies, as well as the program and budget decision makers in OMB and the White House. Congressional leaders and staff are seen as stakeholders as well; however, they report that they have not substantially relied upon the information in their decision making.

(a) Strategies

 Describe how and when the initiative was implemented by answering these questions
 a.      What were the strategies used to implement the initiative? In no more than 500 words, provide a summary of the main objectives and strategies of the initiative, how they were established and by whom.
OMB employed four implementation strategies: an incremental roll-out, a high level of engagement of affected career staff in both OMB and the agencies in the development of the initiative, a clear link to decision-making, and a high degree of transparency of the process and results.

The primary strategy was its incremental introduction over a five-year period. This allowed agency and OMB career staffs to learn how best to integrate the analyses into their ongoing work, and allowed the process to be revised based on practice.

The development process was also more open than traditional top-down OMB initiatives. The President’s Management Council was engaged; an external advisory committee was involved; OMB provided training both for its examiners and for the agencies; and OMB invited the National Academy of Public Administration to pre-test the survey in several programs and provide feedback on its design.

The agencies were also engaged. OMB examiners worked with their agency counterparts to jointly define what constituted a “program” to be included in a PART assessment. Some agencies chose broadly – such as covering all National Institutes of Health grants – while others chose more narrowly – such as the American Printing House for the Blind. OMB and agencies also jointly determined the sequence in which the reviews would occur. In some cases, such as the Defense Department, there was significant discussion over whether a PART would cover a capability, such as “air support,” or a weapons platform, such as the C-17 aircraft.

The PART results were reported by OMB examiners when they met with the OMB director for their annual budget reviews. At the agency level, the PART results were used to justify budget requests and to rate senior executives’ performance. This contributed to stronger attention to the PART process by agency career executives.

As noted earlier, the results of the reviews, and subsequent updates, were posted annually on a publicly available website. This began in the first year, and was enhanced for easier access in later years.

(b) Implementation

 b.      What were the key development and implementation steps and the chronology? No more than 500 words
OMB leadership consciously attempted not to link the PART explicitly to budget resource allocation decisions. In fact, in the beginning, 80 percent of the decisions made based on PART results were management improvement decisions. Over time, however, the association with budget decisions became more distinct. For example, in 2004, the media noted that the President’s FY 2005 budget submission reduced funding for programs rated “ineffective” by an average of 38 percent and increased funding for programs rated “effective” by an average of 22 percent.

In addition, OMB’s guidance to agencies on how to prepare for the PART review was integrated into standard budget guidance (known as OMB Circular A-11), and PART results became an integral component of the decision-making package that program examiners presented to the OMB Director during budget decision meetings.

Following is a brief chronology of implementation activities:

2002 – Established the Performance Evaluation Team in OMB; developed the PART questionnaire; established the external Advisory Committee; drafted guidance; conducted training.

2003 – Reported results from the first set of approximately 200 programs reviewed, as part of the Fiscal Year (FY) 2004 budget submission. Internally, OMB reviewed the results for consistency across program assessments. OMB also created an appeals process in which several departmental deputy secretaries (and members of the President’s Management Council) reviewed the fairness of the PART scores.

2004 – Reported results from the second set of approximately 200 programs as part of the FY 2005 budget submission. An electronic submission format was created to make it easier for agencies to submit information consistently (a hypothetical sketch of such a record follows this chronology).

2005 – Reported results from the third set of approximately 200 programs as part of the FY 2006 budget submission. Recognized by Harvard University’s Ash Institute as a winner of its Innovations in American Government award.

2006 – Reported results from the fourth set of approximately 200 programs as part of the FY 2007 budget submission. OMB created its “ExpectMore.gov” website in order to summarize technical information for more convenient access by the public.

2007 – Completed first 5-year cycle of approximately 1,000 programs, as part of the FY 2008 budget submission. The President signed an executive order creating a government-wide Performance Improvement Council.
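
The document does not describe the electronic submission format itself. As a purely hypothetical illustration of the kind of structured record such a format might standardize, the Python sketch below assembles one PART answer; every field name and value is an assumption, not OMB’s actual schema.

```python
# Hypothetical sketch of a structured PART submission record.
# OMB's real electronic format is not described in this document;
# all field names and values below are illustrative assumptions.

import json

submission = {
    "program_id": "EXAMPLE-001",          # placeholder identifier
    "program_type": "competitive_grant",  # one of the seven question subsets
    "fiscal_year": 2005,
    "answers": [
        {
            "section": "program_results",
            "question": "4.1",            # illustrative numbering
            "answer": "YES",
            "explanation": "Annual performance targets were met.",
            "evidence": ["FY 2004 performance report, p. 12"],
        },
    ],
}

# A consistent machine-readable record lets OMB aggregate answers
# across roughly 200 programs per year without re-keying data.
print(json.dumps(submission, indent=2))
```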

(c) Overcoming Obstacles

 c.      What were the main obstacles encountered? How were they overcome? No more than 500 words
The main obstacles were three-fold: those internal to OMB, those internal to agencies, and those in the external political environment.

Obstacles in OMB. The most visible internal technical obstacle in OMB was ensuring that programs were rated fairly and consistently when reviews were conducted by different OMB program divisions and examiners. The review criteria depend heavily on strong professional judgment and program understanding, and federal programs vary widely in size, complexity, historical evolution, and political sensitivity. These factors have to be built into any rating process in a way that is seen as fair by the stakeholders. To address this, OMB staff received consistent training, were coached by peers when their PART ratings diverged significantly from those others had developed, and could rely on an appeals process developed both inside OMB and outside it, through the President’s Management Council. Experts in the process subjected all completed PARTs to a consistency audit. Departments and agencies could then appeal a rating to a small committee of their peers to ensure fairness and consistency across government.
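
The document does not explain how the consistency audit screened completed PARTs. As one plausible approach, the hypothetical Python sketch below flags any program division whose average score deviates sharply from the government-wide average so its ratings can be referred for peer review; the division names and scores are invented for illustration.

```python
# Hypothetical outlier screen for rating consistency. The document
# says completed PARTs were audited for consistency but not how;
# this sketch flags divisions whose mean score strays more than one
# standard deviation from the government-wide mean.

from statistics import mean, pstdev

scores_by_division = {
    "Natural Resources":  [72, 65, 80, 58],
    "Human Resources":    [45, 50, 41, 48],
    "General Government": [70, 75, 68, 73],
}

all_scores = [s for scores in scores_by_division.values() for s in scores]
overall_mean = mean(all_scores)
overall_sd = pstdev(all_scores)

for division, scores in scores_by_division.items():
    gap = mean(scores) - overall_mean
    if abs(gap) > overall_sd:
        print(f"{division}: mean {mean(scores):.1f} "
              f"deviates {gap:+.1f} points; refer for peer review")
```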

Challenges in Agencies. In a review of PART implementation, Professor John Gilmour noted four challenges agencies faced: (1) organizing effectively to manage the PART assessment process; (2) using the PART to communicate effectively with OMB and stakeholders; (3) developing suitable measures of performance; and (4) understanding the extent to which program managers can be held accountable for their program’s performance. In each case, he identified best practices; for example, the State Department centralized its PART assessment process.

External Obstacles. External obstacles tended to be largely political. Some observers saw the PART initiative as ideologically driven by the George W. Bush Administration and believed the methodology was designed to reduce funding for less-favored programs. Some charged that OMB, as an advocate of the President’s agenda, lacked objectivity. In response, OMB created an external advisory team of experts to review its methodology and how it was being applied, in order to allay such fears.

Another political concern came from Congress. Some congressional leaders saw the OMB scores as making programmatic judgments that they felt were their prerogative. Others sought to make the PART process statutory; while OMB supported this move, it ultimately failed to win enough congressional backing.

(d) Use of Resources

 d.      What resources were used for the initiative and what were its key benefits? In no more than 500 words, specify what were the financial, technical and human resources’ costs associated with this initiative. Describe how resources were mobilized
The PART initiative was largely a redeployment of existing resources, mainly the time of OMB program examiners and their agency counterparts. Minimal technical resources were deployed to create the electronic PART submission format and the ExpectMore.gov website.

Sustainability and Transferability

  Is the initiative sustainable and transferable?
Yes. The initiative has been in place for six years. Administrative processes, such as OMB Circular A-11, have been revised to incorporate PART into the existing budget formulation process. A staff office was created in OMB to champion the initiative on an ongoing basis. Most recently, a Presidential Executive Order created a cross-governmental council, with designated champions at the agency level, to support the continued periodic examination of programs.

Presidential candidates from across the political spectrum – Rudy Giuliani, Hillary Clinton, and Fred Thompson – have mentioned their support for continued performance reviews by OMB.

Widespread interest in PART has been expressed by visitors from other countries and from state and local governments. South Korea, Canada, and the city of Arlington, Texas, are all adopting PART-like assessment tools, and many others are studying its use.

Lessons Learned

 What are the impact of your initiative and the lessons learned?
The two key elements in the PART’s success were the level of open engagement of career staff and the public in developing the tool, and the transparency in the reporting of the results.

A key lesson learned has been that creating performance information at the program level and reporting it publicly does not necessarily generate public or political pressure to improve program performance – even when the information is distilled into less technical terms. Transparency has not been as strong a tool as originally anticipated. However, the base of factual information created, which required years to assemble, will give future government leaders a firm foundation on which to build new initiatives. Engaging the media in using the information will also be important.

A second lesson is that career OMB examiners need to be able to make trade-offs between immediate and longer-term analyses. OMB examiners have limited time to invest in their programs: much of it is consumed by program inputs such as budget development, legislative analysis, policy analysis, tracking legislative earmarks, and other issues. PART implicitly expects an examiner to pay attention to longer-term program results; however, the budget environment is not designed to encourage time spent on this element of an examiner’s job. This is being addressed partly by making the data collection and analysis process electronic, and partly by allowing examiners to conduct PART reviews during less busy periods of the year.

Contact Information

Institution Name:   U.S. Office of Management and Budget
Institution Type:   Government Agency  
Contact Person:   Robert Shea
Title:   Associate Director, Government Performance  
Telephone/ Fax:   202-395-4568 / 202-395-3047
Institution's / Project's Website:   www.ExpectMore.gov
E-mail:   rjshea@omb.eop.gov  
Address:   Rm. 260, 1650 Pennsylvania Ave., NW
Postal Code:   20503
City:   Washington
State/Province:   DC
Country:   United States
