IMLS Overview of Outcome Based Evaluation

What is outcome evaluation?

IMLS defines outcomes as benefits to people: specifically, achievements or changes in skill, knowledge, attitude, behavior, condition, or life status for program participants ("visitors will know what architecture contributes to their environments," "participant literacy will improve"). Any project intended to create these kinds of benefits has outcome goals. Outcome-based evaluation (OBE) is the measurement of those results. It identifies observations that can credibly demonstrate change or desirable conditions ("increased quality of work in the annual science fair," "interest in family history," "ability to use information effectively"). It systematically collects information about these indicators and uses that information to show the extent to which a program achieved its goals. Outcome measurement differs in some ways from traditional methods of evaluating and reporting the many activities of museums and libraries, but we believe grantees will find that it helps communicate the value and quality of their work to many audiences beyond IMLS.

Why should my organization measure outcomes?

Many resource allocators have turned to OBE to demonstrate good stewardship of their resources. Museums and libraries use such information to validate program expansion, to create new programs, or to support staffing and training needs. For example, the Pennsylvania State Library used data showing that improved student performance was associated with well-staffed school media centers to influence legislation providing additional school librarians.

All libraries and museums strive to provide excellent services, to manage programs effectively, and to make a difference in the lives of their audiences. Any kind of systematic evaluation contributes to project quality. The OBE process supports these goals by focusing programs and by providing tools for monitoring progress throughout a project. Evaluation is most effective when it is included in project planning from the very beginning. In OBE, planners clearly articulate their program purpose and check it against target audiences, intended services, expected evidence of change, and the anticipated scale of results. Gathering information during the project can test the evaluation process and help a grantee confirm progress toward goals. This feedback can also help staff modify work plans or practices if expected results are not occurring.

How does a library or museum do outcome evaluation?

Outcome-based evaluation defines a program as a series of services or activities that lead toward observable, intended changes for participants ("a Born to Read program increases the reading time caretakers spend with children"). Programs usually have a concrete beginning and a distinct end. The loan of a book or an exhibit visit might constitute a program, since these have a beginning and an end, and increased knowledge is often a goal. An individual might complete those programs in the course of a single visit. Outcome measurements may be taken as each individual or group completes a set of services (a workshop series on art history, an after-school history field trip) or at the end of a project as a whole. Information about participants' relevant skill, knowledge, or other characteristic is usually collected at both the beginning and the end of the program, so that changes will be evident. If a program wants to measure longer-term outcomes, of course, information can be collected long after the program ends.
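
To make the pre-and-post idea concrete, here is a minimal sketch in Python of recording a measurement when each participant enters a program and again when they finish, then tallying who changed. The program, the 1-to-5 skill rating, and all names and scores are hypothetical illustrations, not an IMLS format:

    from dataclasses import dataclass

    # A sketch of pre/post outcome records, assuming a hypothetical program
    # that rates one skill on a 1-to-5 scale at its beginning and its end.
    @dataclass
    class ParticipantRecord:
        participant_id: str
        pre_score: int   # rating collected when the participant enters
        post_score: int  # rating collected when the participant finishes

        def change(self) -> int:
            # Observed change for this participant over the program.
            return self.post_score - self.pre_score

    workshop = [
        ParticipantRecord("p01", pre_score=2, post_score=4),
        ParticipantRecord("p02", pre_score=3, post_score=3),
        ParticipantRecord("p03", pre_score=1, post_score=4),
    ]

    improved = sum(1 for r in workshop if r.change() > 0)
    print(f"{improved} of {len(workshop)} participants improved")

Keeping both measurements for the same individual, rather than only a before average and an after average, makes it possible to say not just that scores rose but what portion of participants improved.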

To use a familiar example, many libraries and museums provide information online. They could count the number of visitors to a web page, based on the logs any Internet server can maintain. These numbers could indicate how large an audience was reached. Offering a resource, though, only provides opportunity. To know whether online availability had a benefit, an institution needs to measure skills, attitudes, or other relevant phenomena among users and establish what portion of users were affected.
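
Counting the audience is the easy half. As a sketch, the snippet below tallies visits to one page from a web server log, assuming the widely used combined log format; the file name and page path are hypothetical:

    import re
    from collections import Counter

    # Matches the start of a combined-format log line: client address,
    # identity, user, [timestamp], then the quoted request line.
    LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')

    visits = Counter()
    unique_addresses = set()
    with open("access.log") as log:          # hypothetical log file
        for line in log:
            match = LOG_LINE.match(line)
            if not match:
                continue
            address, path = match.groups()
            if path == "/health-resources":  # hypothetical page of interest
                visits[path] += 1
                unique_addresses.add(address)

    print(f"{visits['/health-resources']} visits "
          f"from {len(unique_addresses)} distinct addresses")

Such counts describe reach, not benefit; they answer "how many came?" rather than "what changed for them?"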

To capture information about these kinds of results, a library or museum could ask online visitors to complete a brief questionnaire. If a goal is to increase visitor knowledge about a particular institution's resources, a survey might ask questions like, "Can you name 5 sources for health information? Rate your knowledge from 1 (can't name any) to 5 (can name 5)." If visitors rate their knowledge at an average of 3 at the beginning of their experience and at 4 or 5 at the end (or at 2, if the site confused them), the sponsoring institution could conclude that the web site made a difference in respondents' confidence about this knowledge. It should be clear that such a strategy also lets you test your effectiveness in communicating the intended message!
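
The arithmetic behind that conclusion is just a comparison of the two averages. A minimal sketch, with hypothetical ratings standing in for real survey responses:

    from statistics import mean

    # Self-ratings on the 1-to-5 scale described above; values are invented.
    pre_ratings = [3, 2, 4, 3, 3, 2]    # collected when visitors arrive
    post_ratings = [4, 4, 5, 4, 3, 5]   # collected when visitors leave

    pre_avg, post_avg = mean(pre_ratings), mean(post_ratings)
    print(f"average before: {pre_avg:.1f}, after: {post_avg:.1f}, "
          f"shift: {post_avg - pre_avg:+.1f}")

A shift in either direction is informative: a rise suggests the site taught what it intended, while a drop signals that the message is not getting through.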

It is rarely necessary to talk to every user or visitor. In many cases, and depending on the size of the target audience and the outcome being measured, a voluntary sample of users or visitors can represent the whole with reasonable confidence. Most institutions find that people enjoy and value the opportunity to say what they think or feel about a service or a product.
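
How confident is "reasonable confidence"? A rough sketch, assuming a simple random sample and the usual normal approximation for a 95 percent margin of error; the population of ratings is simulated purely for illustration:

    import random
    from math import sqrt
    from statistics import mean, stdev

    random.seed(42)
    # Stand-in for the full audience: 2,000 hypothetical 1-to-5 ratings.
    all_ratings = [random.randint(1, 5) for _ in range(2000)]

    sample = random.sample(all_ratings, 100)  # survey only 100 visitors
    estimate = mean(sample)
    margin = 1.96 * stdev(sample) / sqrt(len(sample))  # ~95% margin of error

    print(f"estimated average rating: {estimate:.2f} +/- {margin:.2f} "
          f"(true average: {mean(all_ratings):.2f})")

Note that a voluntary sample, unlike the random one simulated here, can over-represent the most enthusiastic visitors, so results should be read with that bias in mind.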