Program Evaluation

Alternative Approaches and Practical Guidelines

FOURTH EDITION

Jody L. Fitzpatrick University of Colorado Denver

James R. Sanders Western Michigan University

Blaine R. Worthen Utah State University

Boston Columbus Indianapolis New York San Francisco Upper Saddle River Amsterdam Cape Town Dubai London Madrid Milan Munich Paris Montreal Toronto

Delhi Mexico City São Paulo Sydney Hong Kong Seoul Singapore Taipei Tokyo

Program Evaluation: Alternative Approaches and Practical Guidelines, Fourth Edition, by Jody L. Fitzpatrick, James R. Sanders, and Blaine R. Worthen. Published by Pearson. Copyright © 2011 by Pearson Education, Inc.

ISBN 1-269-56906-6

Vice President and Editor in Chief: Jeffery W. Johnston
Senior Acquisitions Editor: Meredith D. Fossel
Editorial Assistant: Nancy Holstein
Vice President, Director of Marketing: Margaret Waples
Senior Marketing Manager: Christopher D. Barry
Senior Managing Editor: Pamela D. Bennett
Senior Project Manager: Linda Hillis Bayma
Senior Operations Supervisor: Matthew Ottenweller
Senior Art Director: Diane Lorenzo
Cover Designer: Jeff Vanik
Cover Image: istock
Full-Service Project Management: Ashley Schneider, S4Carlisle Publishing Services
Composition: S4Carlisle Publishing Services
Printer/Binder: Courier/Westford
Cover Printer: Lehigh-Phoenix Color/Hagerstown
Text Font: Meridien

Credits and acknowledgments borrowed from other sources and reproduced, with permission, in this textbook appear on appropriate page within text.

Every effort has been made to provide accurate and current Internet information in this book. However, the Internet and information posted on it are constantly changing, so it is inevitable that some of the Internet addresses listed in this textbook will change.

Copyright © 2011, 2004, 1997 Pearson Education, Inc., Upper Saddle River, New Jersey 07458. All rights reserved. Manufactured in the United States of America. This publication is protected by Copyright, and permission should be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. To obtain permission(s) to use material from this work, please submit a written request to Pearson Education, Inc., Permissions Department, 501 Boylston Street, Suite 900, Boston, MA 02116, fax: (617) 671-2290, email: permissionsus@pearson.com.

Library of Congress Cataloging-in-Publication Data

Fitzpatrick, Jody L.
Program evaluation: alternative approaches and practical guidelines / Jody L. Fitzpatrick, James R. Sanders, Blaine R. Worthen.
p. cm.
ISBN 978-0-205-57935-8
1. Educational evaluation—United States. 2. Evaluation research (Social action programs)—United States. 3. Evaluation—Study and teaching—United States. I. Sanders, James R. II. Worthen, Blaine R. III. Worthen, Blaine R. Program evaluation. IV. Title.
LB2822.75.W67 2011
379.1’54—dc22
2010025390

10 9 8 7 6 5 4 3 2

ISBN 10: 0-205-57935-3 ISBN 13: 978-0-205-57935-8


5

First Approaches: Expertise and Consumer-Oriented Approaches

Orienting Questions

1. What are the arguments for and against using professional judgment as the means for evaluating programs?

2. What are the different types of expertise-oriented approaches? How are they alike and how do they differ?

3. Why is accreditation of institutions of higher education controversial today? How do these controversies reflect the controversies that frequently arise in many evaluations?

4. How is the consumer-oriented evaluation approach like the expertise-oriented approach? How is it different?

5. How do these approaches influence the practice of evaluation today?

Everyone evaluates. As we discussed in Chapter 1, we all form opinions or make judgments about the quality of things we encounter. Such evaluations include everything from the meal we just finished eating or the movie or concert we saw last week to more serious endeavors—the program to help students at risk of dropping out at our high school or the parent contact program for parents new to our school. Our focus here is not on our individual judgments of something, but on evaluations that are more formal, structured, and public. We connect these personal evaluations with the more formal ones here, though, because the earliest evaluation approaches were concerned, almost exclusively, with judging the quality of something. Those judgments were often derived by a group of individuals coming together to consider their criteria and the program or product to be judged.

The first modern-day approaches to evaluation were expertise-oriented and consumer-oriented evaluations. These approaches continue to be used today, though not so widely in the professional evaluation field. However, they have influenced the ways we think of evaluation and its purposes and methods. We will review each briefly, with a focus on the most widely used current method—accreditation—to illustrate the key principles of these approaches and how they affected, and continue to affect, evaluation practices.

The Expertise-Oriented Approach

The expertise-oriented approach to evaluation is probably the oldest type of formal, public evaluation and, as its name implies, it relies primarily on professional expertise to judge the quality of an institution, program, product, or activity. For example, the merits of a leadership training program for school principals could be assessed by experts from various fields including leadership, educational administration, and training who would observe the program in action, examine its materials and underlying theory, perhaps interview some trainers and participants, or, in other ways, glean sufficient information to render a considered judgment about its value.

In another case, the quality of a hospital could be assessed by looking at its special programs, its operating facilities, its emergency room operations, its in-patient operations, its pharmacy, and so on, by experts in medicine, health services, and hospital administration. They could examine facilities and equipment/supplies of the hospital, its operational procedures on paper and in action, data on the frequency and outcomes of different procedures, the qualifications of its personnel, patient records, and other aspects of the hospital to determine whether it is meeting appropriate professional standards.

Although professional judgments are involved to some degree in all evaluation approaches, this one is decidedly different from others because of its direct, open reliance on professional expertise as the primary evaluation strategy. Such expertise may be provided by an evaluator or by subject-matter experts, depending on who might offer the most in the substance or procedures being evaluated. Usually one person will not possess all of the requisite knowledge needed to adequately evaluate the program, institution, or agency. A team of experts who complement one another is much more likely to produce a sound evaluation.

Several specific evaluation processes are variants of this approach, including doctoral examinations administered by a committee, proposal review panels, site visits and conclusions drawn by professional accreditation associations, reviews of institutions or individuals by state licensing agencies, reviews of staff performance for decisions concerning promotion or tenure, peer reviews of articles submitted to professional journals, site visits of educational programs conducted at the behest of the program’s sponsor, reviews and recommendations by prestigious blue-ribbon panels, and even the critique offered by the ubiquitous expert who serves in a watchdog role.

Part II • Alternative Approaches to Program Evaluation

TABLE 5.1 Some Features of Four Types of Expertise-Oriented Evaluation Approaches

Type of Expertise-Oriented   Existing   Published  Specified  Opinions of       Status Affected
Evaluation Approach          Structure  Standards  Schedule   Multiple Experts  by Results
---------------------------  ---------  ---------  ---------  ----------------  ---------------
Formal review system         Yes        Yes        Yes        Yes               Usually
Informal review system       Yes        Rarely     Sometimes  Yes               Usually
Ad hoc panel review          No         No         No         Yes               Sometimes
Ad hoc individual review     No         No         No         No                Sometimes

To impose some order on the variety of expertise-oriented evaluation activities, we have organized and will discuss these manifestations in four categories: (1) formal professional review systems, (2) informal professional review systems, (3) ad hoc panel reviews, and (4) ad hoc individual reviews. Differences in these categories are shown in Table 5.1, along the following dimensions:

1. Is there an existing structure for conducting the review?

2. Are published or explicit standards used as part of the review?

3. Are reviews scheduled at specified intervals?

4. Does the review include opinions of multiple experts?

5. Do results of the review have an impact on the status of whatever is being evaluated?

Developers of the Expertise-Oriented Evaluation Approach and Their Contributions

It is difficult to pinpoint the origins of this approach, since it has been with us for a very long time. It was formally used in education in the 1800s, when schools began to standardize college entrance requirements. Informally, it has been in use since the first time an individual to whom expertise was publicly accorded rendered a judgment about the quality of some endeavor—and history is mute on when that occurred. Several movements and individuals have given impetus to the various types of expertise-oriented evaluations.

Elliot Eisner, an early evaluator discussed later in this chapter, stressed the role of connoisseurship and criticism in evaluation, roles that required expertise in the subject matter to be evaluated. James Madison and Alexander Hamilton took on the role of “expert evaluators” in discussing and elaborating on the meaning and merits of the newly proposed Constitution in The Federalist Papers. (They were experts because they were both present and active at the Constitutional Convention that drafted the document. As such, they were also internal evaluators!) Their writings were influential at the time and are still used by jurists in the U.S. courts to interpret the meanings of the Constitution, illustrating the important actions that can come from reasoned judgments by experts about a product. Accreditation of institutions of higher education is the primary present-day application of expertise-oriented evaluations. The New England Association of Schools and Colleges, which granted the first accreditation and continues accreditations for colleges and universities in New England today, began in 1885 when a group of headmasters of preparatory secondary schools began meeting with presidents of colleges in New England to discuss what graduates should know to be prepared for college. Thus, more than 100 years ago, school and college leaders were talking about ways to align their curricula!

Formal Professional Review Systems: Accreditation

Historical Foundations. To many, the most familiar formal professional review system is that of accreditation, the process whereby an organization grants approval of institutions such as schools, universities, and hospitals. Beginning in the late 1800s, regional accreditation agencies in the United States gradually supplanted the borrowed western European system of school inspections. These agencies became a potent force in accrediting institutions of higher education during the 1930s. Education was not alone in institutionalizing accreditation processes to determine and regulate the quality of its institutions. Parallel efforts were under way in other professions, including medicine and law, as concern over quality led to wide-scale acceptance of professionals judging the efforts of those educating fellow professionals. Perhaps the most memorable example is Flexner’s (1910) examination of medical schools in the United States and Canada in the early 1900s, which led to the closing of numerous schools he cited as inferior. As Floden (1983) has noted, Flexner’s study was not accreditation in the strict sense, because medical schools did not participate voluntarily, but it certainly qualified as accreditation in the broader sense: a classic example of private judgment evaluating educational institutions.

Flexner’s approach differed from most contemporary accreditation efforts in three other significant ways. First, Flexner was not a member of the profession whose efforts he presumed to judge. An educator with no pretense of medical expertise, Flexner nonetheless ventured to judge the quality of medical training in two nations. He argued that common sense was perhaps the most relevant form of expertise:

Time and time again it has been shown that an unfettered lay mind is . . . best suited to undertake a general survey. . . . The expert has his place, to be sure; but if I were asked to suggest the most promising way to study legal education, I should seek a layman, not a professor of law; or for the sound way to investigate teacher training, the last person I should think of employing would be a professor of education. (Flexner, 1960, p. 71)


It should be noted that Flexner’s point was only partially supported by his own study. Although he was a layman in terms of medicine, he was an educator, and his judgments were directed at medical education rather than the practice of medicine, so even here appropriate expertise seemed to be applied.

Second, Flexner made no attempt to claim empirical support for the criteria or process he employed, because he insisted that the standards he used were the “obvious” indicators of school quality and needed no such support. His methods of collecting information and reaching judgments were simple and straightforward: “A stroll through the laboratories disclosed the presence or absence of apparatus, museum specimens, library, and students; and a whiff told the inside story regarding the manner in which anatomy was cultivated” (p. 79).

Third, Flexner dispensed with the professional niceties and courteous criticisms that often occur in even the negative findings of today’s accreditation processes. Excerpts of his report of one school included scathing indictments such as this: “Its so-called equipment is dirty and disorderly beyond description. Its outfit in anatomy consists of a small box of bones and the dried-up, filthy fragments of a single cadaver. A cold and rusty incubator, a single microscope, . . . and no access to the County Hospital. The school is a disgrace to the state whose laws permit its existence” (Flexner, 1910, p. 190).

Although an excellent example of expertise-oriented evaluation (if expertise as an educator, not a physician, is the touchstone), Flexner’s approach is much like that of contemporary evaluators who see judgment as the sine qua non of evaluation and who see many of the criteria as obvious extensions of logic and common sense (e.g., Scriven, 1973).

Accreditation in Higher Education Today. Accreditation in the United States and in many other countries today meets our criteria for an expertise-oriented, formal review system. The systems make use of an existing structure (generally an independent regional or national accreditation organization in the United States or governmental agencies in other countries), standards published by the organization responsible for accreditation, a specified schedule (for example, reviews of institutions every 2, 5, or 10 years), and opinions of multiple experts, and the status of the institution, department, college, or school is affected by the results. Accreditation is an excellent example of expertise-oriented evaluation because it uses people with expertise in the subject matter of the program or institution to form a judgment regarding the quality of the entity to be evaluated. The accreditation of an institution or program provides consumers and other stakeholders with some indication of the quality of the institution, as judged by experts in the field, and may facilitate summative decisions. For example, many students use an institution’s or program’s accreditation status to aid their decisions about whether to apply to or attend an institution or program. Further, the feedback the accreditation process provides to the institution can be used for program and institutional improvement and decision making. Thus, the accreditation process serves a formative purpose as well.


Accreditation in the United States is most common for institutions of higher education.1 We will spend a little time describing this process because it has recently become quite political and controversial, and even for those readers not involved in accreditation, the arguments illustrate the types of political issues and choices that often arise in any evaluation. These include disagreements over the purpose of the evaluation (formative or summative); the neutrality and independence of the experts or evaluators; the criteria to be used to judge the product and, thus, the data to be collected or reviewed; and the transparency of the process (what should be available to the public or other stakeholders outside the organization). These controversies have emerged as the U.S. Department of Education, which has a stake in accreditation through provision of student loans to accredited institutions, has begun to take issue with the accreditation practices of the independent regional accrediting bodies that have traditionally reviewed colleges and universities for accreditation.

As noted earlier, in many countries, including Germany, the Netherlands, India, and the countries of the United Kingdom, institutions of higher education are required by law to be accredited. Government agencies, generally through a ministry or department of education, conduct the accreditation process. In some countries, such as Canada, there is no accreditation process for higher education, partly because most institutions of higher education are run by the provincial governments and that governance is considered to provide sufficient oversight. In the United States, accreditation evolved in a way that very much mirrors U.S. citizens’ distrust of government. With a desire to minimize government’s role, nonprofit or voluntary associations carry out the accreditation tasks often fulfilled by government agencies in other countries.

As noted earlier, the New England Association of Schools and Colleges was the first accreditation organization in the United States. Originally established in 1885 as a mechanism for dialogue between administrators of secondary schools and leaders of colleges in the region, it eventually evolved into the accrediting association for colleges and institutions in the region (Brittingham, 2009). Other regional associations followed, with each taking responsibility for accrediting institutions of higher education in their region. Today, there are six regional accrediting organizations in the United States, each pursuing similar activities within their region.2 These associations focus primarily on accrediting institutions of higher education, though often they are also involved in accrediting K–12 schools. Finally, there are many accrediting associations that review programs in particular disciplines rather than entire institutions. For example, the American Bar Association accredits law schools, the Association of American Medical Colleges accredits medical schools, and the National Council for Accreditation of Teacher Education (NCATE) accredits teacher education programs, with the Teacher Education Accreditation Council (TEAC) emerging as a recent competitor to NCATE.

1 Secondary institutions and school districts are occasionally accredited as well. Some states, for example, are moving to review school districts for accreditation, and associations such as AdvancED have been formed out of the North Central and Southern accrediting associations for higher education to focus on accrediting K–12 schools. Further, many private schools are accredited. Our focus is on accreditation in higher education because it has been established for the longest period and its traditions, therefore, illustrate much about expertise-oriented evaluation and its controversies.

2 The major regional accrediting associations in the United States are the Middle States Association of Colleges and Schools, the New England Association of Schools and Colleges, the North Central Association of Colleges and Schools, the Northwest Association of Accredited Schools, the Southern Association of Colleges and Schools, and the Western Association of Schools and Colleges. Although other accrediting organizations exist (for example, for religious institutions), these regional accrediting associations are considered the primary accrediting bodies in the United States.

Accreditation of institutions of higher education by the six regional associations has followed a similar plan and approach, the mission-based approach, since the 1950s. With the mission-based approach, accreditors focus on the extent to which the institution is pursuing and achieving its stated mission. Although each association also has standards for higher education that it uses in the evaluation, the mission-based approach reflects the philosophy of the associations in its evaluations. Barbara Brittingham describes the mission-based approach and the accreditation process in the United States as “unusually focused on the future” to help the institution improve (2009, p. 18).

The Process of Accreditation. In the first stage of accreditation, the institution prepares a self-study report describing its mission and its progress toward that mission, as well as how the institution meets the standards of the accrediting body. The second major stage is the core of the expertise-oriented approach: a team of peers, faculty, and administrators from other institutions in the region receives the report and conducts a site visit during which they interview faculty, administrators, staff, and students; review institutional records on admissions, course curricula, and student satisfaction and outcomes; observe facilities and classrooms; and so forth. Based on their review of the report and their experiences during the site visit, the team, usually three or four experts, writes a report expressing their views regarding the institution, their recommendations concerning its accreditation status, and their suggestions for improvement. The site visit report is then reviewed by a standing commission at the accrediting association, which may amend the conclusions. The commission then presents the final conclusions to the institution.

The process is expertise-oriented in several ways: (a) the association has expertise concerning standards for higher education, the state and status of other institutions, and the practice of accreditation and review; and (b) the faculty and administrators who form the site team have expertise from participating in the governance of their own universities and others where they have been employed, and receive some training from the association to serve as site reviewers. Therefore, the expertise of the site visit team and the association allows those involved to make use of the standards of the association, their review of the report, and their site visit to form a final judgment of the quality of the institution. This process is a common one, followed not only by the regional accrediting organizations but also by the organizations that accredit programs in individual disciplines in higher education and by organizations that accredit other educational institutions, including school districts, private schools, charter schools, secondary schools, vocational schools, and religious schools.


Accreditation Controversies: Accreditation Politicized. So what can be controversial here? As one author defending the system notes, “Who better, one might ask, to evaluate the quality of a college or university than those who work in the field?” (O’Brien, 2009, p. 2). O’Brien argues that the evaluation and the relationship between the accrediting organizations and the institution should not be adversarial, noting, “The evaluators are not inspectors coming in with their white gloves” (O’Brien, 2009, p. 2). But the history of the controversy traces back to the GI Bill, passed by Congress after World War II to provide financial assistance to returning soldiers to attend colleges and universities. The government wanted to ensure that the financial assistance went for worthwhile postsecondary educational activities, but did not want to get directly into the business of examining colleges and universities for quality. So, it decided to rely on the independent regional accrediting associations, which were already reviewing colleges and universities, to determine the institutions students could receive financial aid to attend. Today, with increasing costs of higher education and more and more students attending colleges and universities, U.S. loans to students are big business. The government continues to rely on regional accrediting associations to identify the institutions of higher education that are eligible for aid, but has an increasing stake in the quality of those processes given the large amounts of money distributed in student loans and other forms of aid. In addition, the institutions themselves have a large stake in the process, because many students would not attend an institution that is not accredited, for quality and financial aid reasons.

Through the Higher Education Act, originally passed in 1965, the U.S. government influences higher education in many areas, from student loans to access. In recent years, many in the U.S. Department of Education have become concerned that accreditations are not sufficiently rigorous in weeding out schools that are performing poorly. Even proponents of the system note that current regional accreditation in the United States carries a “light touch” compared with government evaluations of higher education conducted in other countries (Brittingham, 2009, p. 18).

In 2005, the U.S. Department of Education appointed the Commission on the Future of Higher Education to study four issues critical to higher education, one of which was accountability. In “The Need for Accreditation Reform,” a paper prepared for that report, Robert Dickeson called the current U.S. system of accreditation “a crazy-quilt of activities, processes, and structures that is fragmented, arcane, more historical than logical, and has outlived its usefulness. More important, it is not meeting the expectations required for the future” (2006, p. 1). He concluded that “any serious analysis of accreditation as it is currently practiced results in the unmistakable conclusion that institutional purposes, rather than public purposes, predominate” (Dickeson, 2006, p. 3). He recommended that Congress create a National Accreditation Foundation to accredit institutions of higher education. The final report of the Commission, called the Spellings Commission for then Secretary of Education Margaret Spellings, was quite critical of current accreditation processes (U.S. Department of Education, 2006, http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf). The report inspired much controversy and discussion in the higher education community, with organizations such as Phi Beta Kappa and the Association of American Colleges and Universities issuing statements both of support and concern regarding the report. The final 2008 amendment of the Higher Education Act ultimately chose to ignore some of these recommendations, but the concerns raised by the Commission will continue (O’Brien, 2009) and, for our purposes, reflect some of the political concerns raised about evaluation today and, in particular, about expertise-oriented evaluation.

The regional accrediting associations see their purpose in evaluating institutions of higher education as primarily formative, helping these institutions improve. They see these goals as the best way to serve institutions, their students, and the public. By helping colleges and universities to improve and better achieve their stated mission, the accrediting associations believe they are helping students to receive a better education. In contrast, the U.S. Department of Education’s emphasis is summative. It is concerned with maintaining the U.S. position in higher education in the world and in providing educated and skilled graduates for the economy of the twenty-first century. The Department and other critics see the purpose of accreditation as providing parents, students, and other consumers with information to help them decide which institutions they should attend and where they should spend their tuition dollars. In other words, accreditation should help these consumers make summative decisions about which institutions to choose. Further, accreditation should help make summative decisions about which institutions should continue. One critic notes that in the 60 years since the GI Bill was passed, “a mere handful of schools have been shut down and those largely for financial reasons . . . Meanwhile, on the accreditors’ watch, the quality of higher education is slipping” (Neal, 2008, p. 26). So, the accrediting associations have developed a process that is most useful for formative evaluation when critics see the primary purpose as summative.

Increasing Emphasis on Outcomes. Another area of disagreement concerns the factors that should be considered in accreditation. Today, the emphasis in education, and in much of evaluation around the world, is on outcomes and impacts. (See Chapter 2.) The Spellings Commission report notes the following:

Too many decisions about higher education—from those made by policymakers to those made by students and families—rely heavily on reputation and rankings derived to a large extent from inputs such as financial resources rather than outcomes. Better data about real performance and lifelong learning ability is absolutely essential if we are to meet national needs and improve institutional performance. (U.S. Department of Education, 2006, p. 14)

Just as K–12 education has moved to measuring student learning by focusing almost entirely on the extent to which state standards are achieved, the Spellings Commission would like evaluations of institutions of higher education to rely much more heavily on measures of student outcomes.3 Although regional accrediting associations have begun to require institutions to provide measures of student outcomes and, for accreditations of professional programs, evidence concerning passage of licensing exams or job placements, the regional accreditation process also emphasizes the importance of input and process variables. Input variables include factors such as the quality of faculty, library holdings, IT capacity, classroom space and facilities, student admissions processes and decisions, and other elements that create the academic environment of the institution. Process variables articulated in standards, reviewed in self-reports, and examined by site visit teams include curricula, course requirements, and teaching quality; assistance to students through tutoring, advising, and other mechanisms; faculty-student interactions; internships; and other elements of the learning process. Regional accrediting associations also consider multiple outcomes, including graduation and drop-out rates, time to graduation, knowledge and skills of graduates, and job placements. Accrediting associations argue that they must examine the entire process of higher education to make a valid judgment of the quality of the institution and to provide advice for improvement. Examining only student outcomes does not give the experts in the accreditation process sufficient information to make useful recommendations for how to change the institution, and its inputs and processes, to achieve better outcomes (Murray, 2009).

3One difference between standards for K–12 education and those for higher education is that the standards for higher education would be national ones, not developed at the state level as K–12 standards are.

Program Evaluation: Alternative Approaches and Practical Guidelines, Fourth Edition, by Jody L. Fitzpatrick, James R. Sanders, and Blaine R. Worthen. Published by Pearson. Copyright © 2011 by Pearson Education, Inc.

ISBN 1-269-56906-6

Chapter 5 • First Approaches: Expertise and Consumer-Oriented Approaches 135

Neutrality, Transparency, and Purpose in Accreditation. Other criticisms of the current approach concern reviewers’ neutrality or objectivity and the transparency of the process. Evaluations are expected to be based on independent judgments. Such independence is intended to lead to more objective, and hence more valid, judgments of quality. Generally speaking, expertise-oriented evaluators should not be closely affiliated with the institution or product they are judging. For example, we are suspicious of an expert’s endorsement of a product when we know the expert has a financial relationship with the product’s manufacturer. Consider, for example, current discussions of the objectivity of medical research on the effectiveness of a drug when the research is funded by the pharmaceutical company that developed the drug. But accreditation processes make use of peer reviewers who are faculty and administrators from higher education institutions in the region. Accrediting organizations argue that these experts are in the best position to make the judgments and provide the advice institutions need, because they know what can be accomplished in the environment of such an institution—and how to accomplish it. They have worked in it themselves. Critics, however, are concerned that the closeness of the experts to those being judged and possible competition between institutions or departments present serious conflicts of interest that can lead to biased judgments. Judgments as blunt as Flexner’s evaluations of medical schools would not see the light of day, at least in written reports.

Concerns over objectivity are heightened by the lack of transparency in the process. The U.S. Department of Education would like data and reports to be far more open, meaning that they would be available to parents, students, and the public and would contain content that is readily understood by nonexperts. For example, the Spellings Commission advocated tables presenting data on the knowledge and skills of graduates and other outcome measures for various colleges and universities. These tables would be available for the public to use in judging the quality of institutions, and for other colleges to use as benchmarks (U.S. Department of Education, 2006). Accreditors rely on the thick descriptions contained in self-study reports and the accreditation report. Defenders of the current system agree that the system relies heavily on confidentiality but argue that this confidentiality is one of the reasons for its success. Because of it, “institutions can be candid in their self-studies, and teams can be honest in their assessments” (O’Brien, 2009, p. 2). If reports were made public, those writing the self-report would be reluctant to discuss real problems, and accreditation teams would edit their wording for public consumption. Neither would facilitate learning about problems and making recommendations for change.

136 Part II • Alternative Approaches to Program Evaluation

Thus, accreditation is changing and is controversial. Like many evaluations in recent years, the accreditation of colleges and universities in the United States has moved to an increasing use of mixed methods and a greater focus on outcomes. Controversies concern the purpose of these expertise-oriented evaluations, the stakeholders they serve, the measures that should take priority, the neutrality and objectivity of the judgments of quality, the transparency of the process, and the availability of results to different stakeholders. Regional accrediting associations, which for many years had no competition, are being seriously challenged, not only by the federal government, but also by popular ratings of colleges and universities such as those published by U.S. News and World Report. As a result, accrediting associations are adapting and changing, but, with all their problems, they still remain a useful example of a formal review system using the expertise-oriented evaluation approach.

Other Formal Review Systems. There are numerous examples of other formal review systems, particularly in education. For many years, the National Council for Accreditation of Teacher Education (NCATE) has been the primary body to accredit teacher education programs. In 2000, this organization began focusing more on outcomes of such programs by examining knowledge and skills of graduates of the program, scores on licensure tests, and evidence that graduates are able to transfer their knowledge and skills to the classroom. The Teacher Education Accreditation Council (TEAC) has emerged as a competitor to NCATE, but with a similar focus on outcomes (Gitomar, 2007; Murray, 2009).

Some states are beginning to develop systems to review and accredit school districts within their state. For example, the Colorado Department of Education began accrediting districts in 1999 and revised the procedures substantially in 2008. The focus is very much on student outcomes and growth, but includes standards concerning “safe and civil learning environments,” and budget and financial management. Reviewers conclude the process by assigning a district a rating at one of six different levels, from accreditation with distinction to probation and nonaccreditation. Like other formal review systems, the Colorado accreditation process for school districts includes published standards, specified schedules for review (annual for districts with lower ratings, 2 to 3 years for districts at higher levels of accreditation), site visits by a team of external experts, and the districts’ status being affected by the results (http://www.cde.state.co.us/index_accredit.htm).

Informal Review Systems

Many professional review systems have a structure and a set of procedural guidelines, and use multiple reviewers. Yet some lack the published standards or specified review schedule of a formal review system.

A graduate student’s supervisory committee for dissertations, theses, or capstone projects is typically composed of experts in the student’s chosen field and is an example of an informal system within expertise-oriented evaluation. Structures within the university, and/or faculty policies, exist for regulating such professional reviews of competence, but the committee members typically determine the standards for judging each student’s performance. Fitzpatrick and Miller-Stevens (2009) have described the development and use of a rubric to assess students’ performance on capstone projects to complete a master’s program in public administration. But, typically, such criteria do not exist. Instead, the multiple experts on the committee make judgments of the student’s performance, often without discussing their criteria explicitly. And, of course, the status of students is affected by the results.

The systems established for peer reviews of manuscripts submitted to professional periodicals might also be considered examples of informal review systems, though journals’ procedures vary. Many journals do use multiple reviewers chosen for their expertise in the content of the manuscript. Unlike site visit teams for accreditation or members of a dissertation committee, reviewers do not behave as a team, discussing their reviews and attempting to reach consensus. Instead, a structure exists in the form of an editor or associate editor who selects reviewers, provides a timeframe for their reviews, and makes a final judgment about the manuscript based on the individual reviewers’ comments. However, the schedule, like that for a graduate student’s defense of a dissertation or thesis, is based on the receipt of manuscripts, although reviewers are given a specified time period in which to conduct the review. Many journals, but not all, provide reviewers with some general standards. Of course, the status of the manuscript—whether it is published, revised, or rejected—is affected by the review process.

Ad Hoc Panel Reviews

Unlike the ongoing formal and informal review systems discussed previously, many professional reviews by expert panels occur only at irregular intervals when circumstances demand. Generally, these reviews are tied to no institutionalized structure for evaluation and use no predetermined standards. Such professional reviews are usually one-shot evaluations prompted by a particular, time-bound need for evaluative information. Of course, a particular agency may, over time, commission many ad hoc panel reviews to perform similar functions without their collectively being viewed as an institutionalized review system.


Panels to Develop Standards. Common examples of ad hoc review panels include panels organized in each state in the United States to develop or revise educational standards for a state or school district, panels organized by funding agencies to judge proposals and make recommendations for funding, and blue-ribbon panels appointed to address particular issues. These ad hoc panel reviews have no routine schedule, but are organized by an agency or organization to receive input from experts on a particular issue. Thus, each of the 50 states has established standards that reflect that state’s expectations regarding what students will know in different subjects at different grades.4 There is considerable variation across the states in their standards, but the standards for each state were originally developed by a panel of experts. These experts typically consist of teachers, educational administrators, policymakers, and experts in the content area. The composition of the committee is intended to include experts with knowledge of the subject matter for which standards are being set and knowledge of the target population. Some sophisticated methods have been developed for the related task of expert committees identifying the cut scores, or scores that divide various test takers into groups based on their performance (Kane, 1995). (See Girard & Impara [2005] for a case study of the cut-score setting process by an expert panel in a public school district.)

Funding Agency Review Panels. In the United States, most federal government agencies make use of funding panels—panels of experts in the research area to be funded—to read proposals, discuss them, and make recommendations. Generally, the funding agency has developed criteria for the reviewers and, often, members of the team meet in Washington, DC, or other locations to discuss their reactions and attempt to reach some consensus. But the standards for funding vary from discipline to discipline and with the particular funding emphasis. Nevertheless, in the model of expertise-oriented evaluation, experts are coming together to make a judgment about something. Some funding organizations compose committees whose members have different areas of expertise. Thus, committees to review proposals in education can consist of a mix of educational administrators or policymakers, teachers, and researchers. Likewise, committees that review proposals for community development or action can include research experts in the field as well as community members serving as experts on the particular community and its needs.

Blue-Ribbon Panels. Blue-ribbon panels are typically appointed by a high-level government official and are intended to provide advice, not on funding, but on how government should address a particular issue. The Commission on the Future of Higher Education, which was discussed earlier in this chapter, was appointed by the U.S. Department of Education in 2005, at a time when the government was concerned with the long-term status of higher education in the United States and needed input from experts in the area. Members of such panels are appointed because of their experience and expertise in the field being studied. They typically are charged with reviewing a particular situation, documenting their observations, and making recommendations for action. Given the visibility of such panels, the acknowledged expertise of panel members is important if the panel’s findings are to be considered credible. At the local level, where ad hoc review panels are frequently used as an evaluative strategy for many endeavors ranging from economic development and environmental policies to school governance, expertise of panel members is no less an issue, even though the reviewers may be of local or regional repute rather than national renown. Although recommendations of ad hoc panels of experts may have major impact, they might also be ignored, since there is often no formalized body charged with following up on their advice.

4These actions are somewhat in response to the federal legislation commonly known as No Child Left Behind, but many states had developed standards prior to the legislation.

Ad Hoc Individual Reviews

Another form of expertise-oriented evaluation is the individual, professional review of any entity by any individual selected for his or her expertise to judge the value of the entity and, in some cases, to make recommendations for change or improvement. Employment of a consultant to perform an individual review of some educational, social, or commercial program or activity is commonplace in many organizations.

Educational Connoisseurship and Criticism

In the previous section, we discussed applications of the expertise-oriented approach in which the experts are not necessarily evaluators. They are experts in something else—the content they are judging. Further, these applications are examples of the expertise-oriented approach, but they were formed and exist independent of the professional evaluation community. In other words, we can study these processes as examples of expertise-oriented evaluation approaches, but those in the evaluation community are generally not involved in establishing these activities or in conducting them, as is the case with the other approaches we will discuss. As noted, we have begun our discussion of approaches by focusing on the oldest evaluation approach, one used for centuries before formal program evaluation emerged, to make judgments about important issues.

But, the expertise-oriented approach has also been part of the discussion of evaluation theories. In the early days of evaluation, Elliot Eisner was a key figure in discussing what evaluation should be, and his writings provide the theoretical foundation for the expertise-oriented approach and connect it to the evaluation literature (Eisner, 1976, 1985, 1991a, 1991b, 2004). Alkin and Christie (2004), in their evaluation tree depicting the origins and theories of evaluation, place Eisner, along with Michael Scriven, at the base of the valuing branch because their emphasis was on the valuing role of evaluation—determining the value, the merit or worth, of the thing being evaluated. Eisner drew from the arts to describe his approach to evaluation. His perspective was a useful counterpoint to the emphasis in the 1970s on social science methods and program objectives. We will briefly discuss his concepts of connoisseurship and criticism, the fundamentals of his evaluation approach. These concepts fall within the expertise-oriented approach, because they require expertise in identifying and judging critical components or elements of the thing being evaluated.

The roles of the theater critic, art critic, and literary critic are well known and, in the eyes of many, useful roles. Critics are not without their faults. We may disagree with their views, but their reviews are good examples of direct and efficient application of expertise to that which is judged. Their criticism prompts us to think about the object being evaluated in different ways, even if we continue to disagree with their judgment. That is one goal of a written review or criticism: To prompt us to think about elements of the object that we, as nonexperts, might not have considered. Eisner (1991a) proposes that experts, like critics of the arts, bring their expertise to bear in evaluating the quality of programs in their areas of proficiency. Eisner does not propose a scientific paradigm but rather an artistic one, which he sees as an important qualitative, humanistic, nonscientific supplement to more traditional inquiry methods. He argues that we need to see the thing being evaluated from multiple perspectives and that the emphasis on quantitative, reductionist methods fails to convey many important qualities of the whole. He notes that numbers play a role in educational evaluation, his area of interest, but also limit what we see:

[W]e should be recognizing the constraints and affordances of any form of representation we elect to use. Just as a way of seeing is also a way of not seeing, a way of describing is also a way of not describing. The tools we employ for noticing have an enormous impact on what it is that we become aware of. If we want a replete, fulsome, generous, complex picture of a classroom, a teacher, or a student, we need approaches to that perception of such phenomena and, in addition, a form of presentation that will make those features vivid. (Eisner, 2004, p. 200)

The key elements of Eisner’s approach are connoisseurship and criticism (Eisner, 1975, 1991b). Connoisseurship is the art of appreciation—not necessarily a liking or preference for that which is observed, but rather an ability to notice, “to recognize differences that are subtle but significant in a particular qualitative display” (Eisner, 2004, p. 200). The connoisseur has developed knowledge of the important qualities of the object and the ability to observe and notice them well and to study the relationships among them. The connoisseur, in Eisner’s view, is aware of the complexities that exist in observing something in real-world settings and possesses refined perceptual capabilities that make the appreciation of such complexity possible. The connoisseur’s perceptual acuity results largely from a knowledge of what to look for (advance organizers or critical guideposts) gained through extensive previous experience, education, and reflection on that experience.


The analogy of wine tasting is used by Eisner (1975) to show how one must have many experiences to be able to distinguish what is significant about a wine, using a set of techniques to discern qualities such as body, color, bite, bouquet, flavor, and aftertaste, to judge its overall quality. The connoisseur’s refined palate and gustatory memory of other wines tasted is what enables him or her to distinguish subtle qualities lost on an ordinary drinker of wine and to render judgments rather than mere preferences. Connoisseurs exist in all realms of life, not solely the gustatory or artistic. Eisner describes a good coach as a connoisseur of the game who, when watching others at the sport, can recognize subtleties that those with less experience would miss: “We see it displayed in blazing glory in watching a first-rate basketball coach analyze the strengths of the opponents, their weaknesses, as well as the strengths and weaknesses of the team that he or she is coaching” (2004, p. 198).

Connoisseurship does not, however, require a public description or judgment of that which is perceived. The public description is the second part of the Eisner approach. “Criticism,” Eisner states, “is the art of disclosing the qualities of events or objects that connoisseurship perceives” (1979a, p. 197), as when the wine connoisseur either returns the wine or leans back with satisfaction to declare it of acceptable, or better, quality. Or, more akin to public evaluation, criticism is when the wine critic writes a review of the wine. Evaluators are cast as critics whose connoisseurship enables them to give a public rendering of the quality and significance of that which is evaluated. Criticism is not a negative appraisal but rather an educational process intended to enable individuals to recognize qualities and characteristics that might otherwise have been unnoticed and unappreciated. Criticism, to be complete, requires description, interpretation, and evaluation of that which is observed. “Critics are people who talk in special ways about what they encounter. In educational settings, criticism is the public side of connoisseurship” (Eisner, 1975, p. 13). Program evaluation, then, becomes program criticism. The evaluator is the instrument, and the data collecting, analyzing, and judging are largely hidden within the evaluator’s mind, analogous to the evaluative processes of art criticism or wine tasting. As a consequence, the expertise—training, experience, and credentials—of the evaluator is crucial, because the validity of the evaluation depends on the evaluator’s perception. Yet different judgments from different critics are tolerable, and even desirable, since the purpose of criticism is to expand perceptions, not to consolidate all judgments into a single definitive statement.

Eisner’s educational criticism focuses on four dimensions that should be portrayed in a criticism: description, development of themes, interpretation, and evaluation. The focus is on expert and, sometimes, detailed description of the factors that are important in judging the quality of the product or program. Obviously, the approach would not be the most direct for clearly establishing cause-and-effect relationships, but it can be useful in helping us to understand the nature of the intervention and the manner in which it leads to different outcomes. As Eisner recently stated, “Educational connoisseurship and educational criticism represent an effort to employ what the arts and humanities as partners with the social sciences have to offer in advancing our understanding of the process and effect of education. In an age of high-stakes testing, it is a perspective we badly need” (Eisner, 2004, p. 202).


Influences of the Expertise-Oriented Approach: Uses, Strengths, and Limitations

Expertise-oriented approaches, generally referred to by other names, are used extensively in the United States and other countries today. Accreditation efforts are changing and expanding. Governments continue to appoint expert commissions to study issues and make recommendations. Often, such commissions help to protect government leaders from the ire of citizens when government needs to address a controversial issue. For example, closing military bases in the United States has been a controversial issue, in spite of the fact that too many bases exist. Congress and the president have resorted to appointing commissions of experts to provide “objective, non-partisan, and independent reviews” of recommendations for major base closures (http://www.brac.gov, homepage). The process has been used five times since the first commission was appointed in 1988, most recently in 2005. Like many blue-ribbon panels, the commissions have included experts in a variety of areas related to the issue. The commissions conduct site visits, seek input from the public and other experts, review information, and make recommendations to the President. The recommendations take effect unless Congress rejects the proposal within 45 days. These commissions have been able to take important actions to improve the efficiency and effectiveness of the placement of military bases.

Collectively, expertise-oriented approaches to evaluation have emphasized the central role of expert judgment, experience, and human wisdom in the evaluative process and have focused attention on such important issues as whose standards (and what degree of transparency) should be used in rendering judgments about programs. Conversely, critics of this approach suggest that it may permit evaluators to make judgments that reflect little more than personal biases. Others have noted that the presumed expertise of the experts is a potential weakness. Those using or contracting for expertise-oriented evaluations should consider carefully the various areas of expertise required for their team of expert judges. Too often the team contains only content experts, people who know various elements of the subject matter to be judged, but may lack experts in the evaluation process itself. The articulation of standards, whether by the contracting organization or by the team of experts, is also important to clarify the criteria and methods used to make the judgments requested. Of course, as Elliot Eisner would argue, experts should look beyond the standards and use their connoisseurship to describe, interpret, and judge the dimensions they know to be important to the quality of the product. But, articulated standards help to introduce some consistency across experts and to facilitate useful discussions among the experts when disagreements do occur.

Eisner’s writings influenced evaluators to think more about the nature of evaluation judgments and the role that experience and connoisseurship can play in helping them to notice important elements of the program or product to be evaluated. However, Eisner did not remain active in the evaluation field, and the approach was used infrequently, generally by his immediate students. Still, we continue to study his writings because of the influences he has had on evaluation practice today. Donmoyer (2005) notes that Eisner’s contributions prompted evaluators to consider different approaches to evaluation and the implications of each. Eisner also provided an important rationale for qualitative methods at a time when quantitative methods dominated the field. His work was useful in prompting us to consider what we notice in an object. Connoisseurs know the important elements of a particular thing and learn how to form educated opinions about those elements. The connoisseurship-criticism approach also has its critics. Following Eisner’s initial proposals, House (1980) issued strong reservations, cautioning that the analogy of art criticism is not applicable to at least one aspect of evaluation:

It is not unusual for an art critic to advance controversial views—the reader can choose to ignore them. In fact, the reader can choose to read only critics with whom he agrees. A public evaluation of a program cannot be so easily dismissed, however. Some justification—whether of the critic, the critic’s principles, or the criticism—is necessary. The demands for fairness and justice are more rigorous in the evaluation of public programs. (p. 237)

However, more recently, Stake and Schwandt emphasize the importance to evaluation not only of measuring quality but also of conveying quality as it is experienced. Reminiscent of Eisner’s recognition of connoisseurship, they observe that “we do not have good enough standards for recognizing an evaluator’s practical knowledge that arises from a combination of observational skill, breadth of view, and control of bias” (2006, p. 409). They conclude that “as with connoisseurs and the best blue ribbon panels, some of the best examples of synthesizing values across diverse criteria are those that rely on the personal, practical judgment of fair and informed individuals” (2006, p. 409).

The Consumer-Oriented Evaluation Approach

Like the expertise-oriented approach, consumer-oriented evaluation has existed in the practice of individuals making decisions about what to purchase, or trade, for centuries. The approaches are similar in other ways: Their primary purpose is to judge the quality of something, to establish the value, the merit or worth, of a product, program, or policy. Although all evaluations are concerned with determining merit or worth, valuing is the key component of these two approaches.5

Their principal audience is the public. Unlike approaches that will be discussed in other chapters in this section, evaluations relying on these approaches often do not have another audience—a foundation, manager, policymaker, or citizens’ group—who has hired the evaluator to provide them with useful information to

5Other evaluation approaches focus on various types of use, such as stakeholder involvement or organizational change, and methodology, such as establishing causality or providing thick descriptions as the central component. These evaluations, too, may ultimately make a judgment of merit or worth, but that judgment, the valuing of the program or product, is not so central to the evaluation approach as it is in expertise-oriented or consumer-oriented evaluation. (See Alkin [2004], Shadish et al. [1991].)

make a decision or judgment. Instead, the audience for consumer-oriented and expertise-oriented approaches is a broader one—the purchasing or interested public—and is not directly known to the evaluator. Therefore, the evaluator is the major, often the only, decision maker in the study because he or she does not have other important, direct audiences to serve. But the consumer-oriented approach and the expertise-oriented approach differ dramatically in their methodologies, with the latter relying on the judgments of experts and the arts as a model. On the other hand, consumer-oriented evaluation relies on more transparent and quantitative methods, with the judgment typically being made by an evaluator, a person with expertise in judging things, but not with the particular content expertise of expertise-oriented or connoisseur evaluations.

Popular examples of consumer-oriented evaluations that the reader will know include Consumer Reports and the U.S. News and World Report ratings of colleges and universities, but examples exist around the world. Which? is a magazine and web site in the United Kingdom that serves a mission similar to that of the Consumers’ Union, the sponsor of Consumer Reports and its web site, in the United States. Both organizations act as consumer advocates and test products to provide information to consumers on the effectiveness of various products.

The Developer of the Consumer-Oriented Evaluation Approach

Consumer-oriented evaluations first became important in educational evaluations in the mid to late 1960s as new educational products flooded the market with the influx of funds from the federal government for product development. Michael Scriven is the evaluator best known for prompting professional evaluators to think more carefully about consumer-oriented or product evaluations (1974b, 1991c). Scriven, of course, is known for many things in evaluation, and consumer-oriented or product-oriented evaluations represent only one of his contributions. His most important contributions include making evaluators aware of the meaning and importance of valuing in evaluation (Shadish et al., 1991; Alkin, 2004). He often uses examples of product evaluation in his writing to illustrate the nature of valuing and the process of deriving a value in evaluation. For many years, he considered Consumer Reports to be "an almost flawless paradigm" in product evaluation. However, he has expressed disappointment with their reluctance to discuss and improve their methodology and has recognized PC Magazine and Software Digest as developing more methodologically sound procedures (Scriven, 1991a, p. 281).

Scriven's approach to determining the value of a product, however, is quite different from Eisner's connoisseur approach. In fact, Scriven's critical view of Eisner's approach illustrates his own priorities. He states that evaluations using the connoisseurship model "may generate a valuable perspective, but it abandons much of the requirement of validity. In particular it is vulnerable to the fallacy of irrelevant expertise, because connoisseurs are at best a bad guide to merit for the novice—and are also affected by the swing of fashion's pendulum" (Scriven, 1991a, p. 92).

144 Part II • Alternative Approaches to Program Evaluation

So, while Eisner's model rests on the noticing abilities attained by the connoisseur, Scriven's methods for product evaluation are not concerned with expertise in the content of the product, but with the evaluator's expertise in testing and judging key components of the product. Further, although Eisner emphasizes interpreting and evaluating the product, he believes that the value added of his approach is in the description—in helping others perceive, and experience, key elements they may have overlooked. Scriven's concern is in answering the question, "How good is this product?" To do so, he collects information to judge the product's performance and that of its competitors on explicit, critical criteria and works to remove subjectivity from the approach. Thus, he notes that the procedures used by the two consumer-oriented magazines he admires represent a "'pure testing' approach, that is, one which minimizes the amount of subjective judgment in a particular case" (Scriven, 1991a, p. 281).

Stake and Schwandt (2006), in a discussion of the importance of evaluators discerning quality, shed some light on the differences between Eisner's and Scriven's approaches. They identify two approaches to conceptualizing quality: quality as measured and quality as experienced. Quality as experienced is derived from practical knowledge and personal experience, and is significant, they argue, because it is the means by which many people determine quality. Eisner's connoisseurship model would appear to be an example of evaluation that builds on such quality, through the eyes and experience of a connoisseur. In contrast, quality as measured is illustrated in Scriven's logic of evaluation and his method for evaluating products. These include determining the important criteria to consider in evaluating the product, establishing standards for the criteria, examining or measuring the performance of the product and its competitors against the criteria using the standards, and synthesizing the results to determine the quality of the key product. Both views of quality have a role. We have discussed Eisner's approach. Let us now describe more of Scriven's model for judging the quality of a product.
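The "quality as measured" steps above can be sketched as a small program. Everything in this sketch is invented for illustration: the criterion names borrow from Scriven's watch example, but the weights, standards, scores, and the simple weighted-sum synthesis are hypothetical, not Scriven's actual procedures, which address synthesis in far more nuanced ways.

```python
# Illustrative sketch only: a minimal "quality as measured" synthesis.
# Criteria, weights, standards, and scores are all hypothetical.

# Step 1: identify criteria important to consumers, with importance weights.
CRITERIA = {
    "timekeeping_accuracy": 0.4,
    "legibility": 0.3,
    "sturdiness": 0.3,
}

# Step 2: establish standards -- a minimum acceptable score (0-10) per criterion.
STANDARDS = {
    "timekeeping_accuracy": 6,
    "legibility": 5,
    "sturdiness": 5,
}

# Step 3: measure the product and its competitors on each criterion (made-up data).
products = {
    "Watch A": {"timekeeping_accuracy": 9, "legibility": 6, "sturdiness": 7},
    "Watch B": {"timekeeping_accuracy": 7, "legibility": 9, "sturdiness": 4},
}

def synthesize(scores):
    """Step 4: synthesize -- a product that misses any standard fails outright;
    otherwise its weighted scores are summed into an overall rating."""
    for criterion, minimum in STANDARDS.items():
        if scores[criterion] < minimum:
            return None  # fails an absolute standard; high scores elsewhere cannot compensate
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

for name, scores in products.items():
    result = synthesize(scores)
    verdict = f"{result:.1f}/10" if result is not None else "fails standards"
    print(f"{name}: {verdict}")
```

Note how the synthesis makes the standards non-compensatory: Watch B's strong legibility cannot offset its failure to meet the sturdiness standard, which mirrors the way explicit standards, not overall impressions, drive the judgment in this approach.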

Applying the Consumer-Oriented Approach

A key step in judging a product is determining the criteria to be used. In the consumer-oriented model, these criteria are explicit and are presumably ones valued by the consumer. Although Scriven writes about the possibility of conducting needs assessments to identify criteria, his needs assessments are not formal surveys of consumers to determine what they would like. Instead, his needs assessments focus on a "functional analysis" that he writes is "often a surrogate for needs assessments in the case of product evaluation" (Scriven, 1983, p. 235). By functional analysis, Scriven means becoming familiar with the product and considering what dimensions are important to its quality:

Once one understands the nature of the evaluand, . . . one will often understand rather fully what it takes to be a better and a worse instance of that type of evaluand. Understanding what a watch is leads automatically to understanding what the dimensions of merit for one are—time-keeping, accuracy, legibility, sturdiness, etc. (1980, pp. 90–91)

Thus, his criteria are identified by studying the product to be evaluated, not by previous, extended experience with the product. Standards, developed next, are levels of the criteria to be used in the measurement and judgment process. They are often created or recognized when comparing the object of the evaluation with its competitors. Since the goal is to differentiate one product from another to inform the consumer about quality, standards might be relatively close together when competitors' performances on a criterion are similar. In contrast, standards might be quite far apart when competitors differ widely. Standards, of course, can be influenced by factors other than competitors, such as safety issues, regulatory requirements, and efficiency factors that provide common benchmarks.

Scriven's work in product evaluation focused on describing this process and, in part because identifying criteria can be difficult, in developing checklists of criteria for others to use in evaluating products. His product checklist published in 1974 reflects the potential breadth of criteria that he recommends using in evaluating educational products (Scriven, 1974b). This product checklist, which remains useful today, was the result of reviews commissioned by the federal government, focusing on educational products developed by federally sponsored research and development centers and regional educational laboratories. It was used in the examination of more than 90 educational products, most of which underwent many revisions during the review. Scriven stressed that the items in this checklist were necessitata, not desiderata. They included the following:

1. Need: Number affected, social significance, absence of substitutes, multiplicative effects, evidence of need

2. Market: Dissemination plan, size, and importance of potential markets

3. Performance—True field trials: Evidence of effectiveness of final version with typical users, with typical aid, in typical settings, within a typical time frame

4. Performance—True consumer: Tests run with all relevant consumers, such as students, teachers, principals, school district staff, state and federal officials, Congress, and taxpayers

5. Performance—Critical comparisons: Comparative data provided on important competitors such as no-treatment groups, existing competitors, projected competitors, created competitors, and hypothesized competitors

6. Performance—Long-term: Evidence of effects reported at pertinent times, such as a week to a month after use of the product, a month to a year later, a year to a few years later, and over critical career stages

7. Performance—Side effects: Evidence of independent study or search for unintended outcomes during, immediately following, and over the long-term use of the product

8. Performance—Process: Evidence of product use provided to verify product descriptions, causal claims, and the morality of product use

9. Performance—Causation: Evidence of product effectiveness provided through randomized experimental study or through defensible quasi-experimental, ex post facto, or correlational studies

10. Performance—Statistical significance: Statistical evidence of product effectiveness, making use of appropriate analysis techniques, significance levels, and interpretations

11. Performance—Educational significance: Educational significance demonstrated through independent judgments, expert judgments, judgments based on item analysis and raw scores of tests, side effects, long-term effects and comparative gains, and educationally sound use

12. Cost-effectiveness: A comprehensive cost analysis made, including expert judgment of costs, independent judgment of costs, and comparison to competitors’ costs

13. Extended support: Plans made for post-marketing data collection and improvement, in-service training, updating of aids, and study of new uses and user data

These criteria are comprehensive, addressing areas from need to process to outcomes to cost. Scriven also developed a checklist to use as a guide for evaluating program evaluations, the Key Evaluation Checklist (KEC) (Scriven, 1991c, 2007). It can be found at http://www.wmich.edu/evalctr/checklists/kec_feb07.pdf.

Other Applications of the Consumer-Oriented Approach

Product evaluation is also used by organizations and industries to evaluate products at many different stages. Successful high-technology companies such as Apple have watched and studied consumers' reactions to iPhones and Apple stores and used these data to make changes in their products, thus using consumer-oriented evaluations for formative purposes to revise their products. Amazon.com undertook a similar process with its electronic book reader, the Kindle. Jonathan Morell, an evaluator who has worked with industries to conduct many product evaluations, recently described the present-day use of product evaluations in industry. Although Scriven focused on product evaluations for summative, purchasing decisions by consumers, Morell notes that most product evaluations in industries are formative in nature, as with the examples of Apple and Amazon.com. Evaluations take place throughout the product's life cycle from initial design and the production process to marketing and circulation. The stakeholders for the evaluation include not only the managers of the organization and the consumers, but others associated with the product process as well. Morell gives the example of pilots as stakeholders for airplanes. Their opinions on human factors issues are important in creating a product that will permit them to perform optimally in flying the plane (Morell, 2005).

Influences of the Consumer-Oriented Approach: Uses, Strengths, and Limitations

As mentioned previously, the consumer-oriented approach to evaluation has been used extensively by government agencies and independent consumer advocates to make information available on hundreds of products. One of the best known

examples in education today is the What Works Clearinghouse (WWC), begun in 2002 by the U.S. Department of Education's Institute of Education Sciences (IES). (See http://ies.ed.gov/ncee/wwc.) WWC is a source for consumer-oriented evaluation information on the outcomes of educational programs and products. Its intent, like the consumer-oriented approach reviewed here, is to help consumers—teachers, school psychologists, and educational administrators—make choices about which educational products to use.

WWC differs dramatically, however, from Scriven's more comprehensive evaluation process because its criteria for determining program success are confined to program outcomes, and its standards are concerned with research confidence in those outcomes. The stated mission of WWC is "to assess the strength of evidence regarding the effectiveness of the program."6 Products studied using randomized controlled trials (RCTs) or regression discontinuity designs, which are viewed by IES as superior for establishing a causal link between the product or program and the outcome, receive the highest ratings. Studies using quasi-experimental designs may be endorsed with reservations. Scriven's checklists and writings argued for using several different criteria to reflect the elements of the product or program that were critical to successful performance. Although many of Scriven's criteria concerned outcomes or performance (see his criteria for judging educational products listed previously), his process emphasized a comprehensive appraisal of the product, including need, side effects, process, support for users, and cost. WWC's standards, in contrast, concern the extent to which the research establishes a causal effect, through preferred designs, between the program or product and the intended outcome. Although we bemoan the narrowing of the range of criteria and the standards to assess those criteria, WWC's efforts do prompt the potential user to consider the effectiveness of the program in achieving its outcomes and provide a central location for accessing comparable information on educational programs and products. Educators are currently under much pressure to increase achievement, and products can mislead in their marketing.
However, WWC's efforts to inform the consumer about the demonstrated success of programs and products are today's most successful application of the consumer-oriented approach in education in terms of visibility and number of users. Consumers can search the web site by area of interest, with topics including Early Childhood Education, Beginning Reading, Middle School Math, Dropout Prevention, and English Language Learners. Many products are judged to have insufficient research evidence for a causal relationship between the product and the outcome. The only information provided on these products is the designation "no studies meeting eligibility

6In an ironic combination of consumer-oriented and expertise-oriented approaches, a blue-ribbon panel was convened in 2008 to determine whether WWC's review process and reports were "scientifically valid" and "provide accurate information about the strength of evidence of meaningful effects in important educational outcomes." See http://ies.ed.gov/director/board/pdf/panelreport.pdf. Commenting that their charge was not to review the mission but to determine if the information was valid, the panel concluded that the information provided was valid.

standards." However, for products with studies meeting the eligibility standards, reports provide a brief description of the program or product, the research conducted on it, and a final judgment of its effectiveness at achieving the intended outcome.

Another prominent example of the consumer-oriented approach that illustrates the overlap between it and the expertise-oriented approach is the test reviews of the Buros Institute of Mental Measurements. The Institute was founded in 1938 and has been conducting well-respected reviews of educational and psychological tests since that time. It currently produces two series: The Mental Measurements Yearbooks, now in its 17th edition, and Test Reviews Online (see www.unl.edu/buros). The Institute is consumer oriented in that it is "dedicated to monitoring the quality of commercially-published tests . . . promoting appropriate test selection, use, and practice" (http://www.unl.edu/buros/bimm/html/catalog.html, paragraph 1). It is designed to provide consumers with information on the quality of tests used in education and psychology. Each test review provides a brief description of the test and a discussion of its development and technical features, including reliability and validity information, a commentary, a summary, and references. However, the reviews contain elements of the expertise-oriented approach because they are conducted by experts in psychometrics and, although the reviews make use of a prescribed format, the criteria and standards for reviewing each test and its competitors are not explicitly identified as would be done in Scriven's approach. The Institute encourages its reviewers to use The Standards for Educational and Psychological Testing (1999), developed jointly by the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME), as a guide, but the Institute's primary means of ensuring quality information is the selection of its expert reviewers.

Although the consumer-oriented evaluation approach continues to be used by magazines and web sites that review products, the approach is no longer discussed extensively in the professional evaluation literature. However, Scriven's writings on product evaluation in the 1970s, as well as Eisner's writings on connoisseurship and criticism, were important in influencing evaluation in its early stages to consider its role in valuing a program, policy, or product and to consider methods, other than traditional social science research methods, for doing so. Each approach has influenced evaluation practice today.

Major Concepts and Theories

1. The hallmark of the expertise-oriented evaluation approach is its direct reliance on professional judgment in the area of the program being evaluated.

2. Variations in the types of expertise-oriented evaluations include formal and informal review systems and ad hoc panels or individual reviews. These evaluations vary as to whether they are housed under an existing structure or organization, have

published standards that are used to evaluate the program or product, use a predetermined schedule for review, employ single or multiple experts, and directly affect the status of the program.

3. Accreditation systems in higher education, extending to K–12 schools, are a prominent example of the expertise-oriented evaluation approach in the United States and are currently in a process of discussion and change. Differences between the regional accrediting associations in the United States and the federal government concerning the purposes of these evaluations, the nature of the data collected or reviewed (outcomes, process, and inputs), the independence or neutrality of the expert evaluators, and the transparency of the process illustrate many of the controversies and political issues that can arise in expertise-oriented and other evaluations.

4. Elliot Eisner's educational connoisseurship and criticism model made evaluators more aware of the skills of an expert, or connoisseur, in noticing critical dimensions of a product or program and in using methods outside of traditional social science measurement, especially qualitative methods of observation and description, to provide a complete picture of the program or product.

5. The consumer-oriented evaluation approach differs from the expertise-oriented approach in that it does not rely on content experts, or connoisseurs of the product, but rather on experts in evaluation. The approach is also based more centrally on evaluation logic and quantitative methods.

6. Michael Scriven, who wrote extensively about such evaluations, described the key steps as identifying the important criteria for judging the product or program, developing standards to judge those criteria, collecting information or data, and synthesizing the information to make a final judgment that permits the consumer to compare the product with likely alternatives.

7. Both expertise-oriented and consumer-oriented approaches made evaluators aware of the importance of valuing in their work. They helped evaluators recognize that the central task of evaluation is to make a judgment about the value of a program, product, or policy. The approaches advocate quite different methods for making that judgment and, therefore, each added separately to evaluators' consideration of qualitative methods and of criteria, standards, and checklists as potential methods for collecting data.

8. Both approaches continue to be used commonly by public, nonprofit, and private organizations and industries, but are not the subject of much writing in professional evaluation today. The absence of evaluation literature on the subject is unfortunate. We hope evaluators will return their attention to these approaches, which others commonly use, and bring evaluative ways of thinking to their application today.

Discussion Questions

1. How do expertise-oriented and consumer-oriented evaluation approaches differ? How are they alike?

2. What do you see as the strengths of the expertise-oriented approaches? What are their drawbacks?

3. If a team of experts were reviewing your school or organization, what kinds of experts would you want on the team? What criteria would you want them to use to judge the quality of your organization?

4. Referring to question 3, who would you trust to make a better judgment—someone who is an expert in the content or subject matter of your organization or someone who knows evaluation theories and methods for judging something? Justify your response.

5. Discuss the concept of a connoisseur. Are you a connoisseur at something? What is it? How does your experience with this thing help you to notice the important factors and be able to judge them better than a novice?

6. In consumer-oriented evaluation, what is the difference between criteria and standards?

7. How should one determine the criteria for evaluating a product? Should the focus be solely or primarily on outcomes? What should be the balance among the quality of inputs (staff, facilities, budget), process (the conduct of the program), and outputs or outcomes?

Application Exercises

1. What outside experts review your program or organization?

a. If you work in an organization that is accredited, review the standards used for accreditation. Do you feel the standards get at the real quality issues of the program or organization? What other standards might you add?

b. What are the areas of expertise of the evaluation team? Are they content experts, management experts, finance experts, evaluation experts, or experts in other areas? How do you judge the mix of expertise? Might you add others? How might others judge their independence or objectivity in judging your organization?

c. If possible, interview those involved in the accreditation and learn more about the purposes of the accreditation (whether the emphasis is formative, summative, or something else) and about how it has been used.

2. Your high school is going to be visited by an outside accreditation team. What issues do you think they should attend to? What do you think they might miss in a short visit? What information do you think they should collect? What should they do while they’re visiting? Do you think such a team could make a difference for your school? Why or why not?

3. Read a review of a restaurant, movie, or play that you have attended or seen. How does your opinion differ from the critic's? How do the critic's opinions influence your own? Does his or her expertise in the product (connoisseurship) or his or her ability to communicate it (criticism) prompt you to think about the product in different ways?

4. Look at an evaluation of an educational product of interest to you on What Works Clearinghouse at http://ies.ed.gov/ncee/wwc. Critique their presentation of information from an expertise-oriented and from a consumer-oriented approach. What information is helpful? Would other information be helpful to you in making a decision? If so, what? Does that information relate to a different criterion or standard you have? How does the information fit into the approaches reviewed in this chapter?

5. The product or program you are interested in is not reviewed by What Works Clearinghouse, so you are going to contact the publisher or developer of this product to learn more about it. What criteria are important to you? What standards might you use to judge those criteria? What will you ask the person who represents the company?

6. Examine a recent issue of Consumer Reports or a similar magazine or online publication that reviews products and critique their review of a particular product. Do you agree with their selection of the criteria to judge the product? Would you exclude any criteria? Include others? Are the standards they use to judge each product on the criteria explicit? Appropriate? How would you judge their data collection process, that is, their means for determining how each product performs on the criteria? As an expert, or perhaps a connoisseur of consumer-based evaluation, how would you judge their evaluation? How would you improve their process?

Suggested Readings

Eisner, E. W. (1991a). Taking a second look: Educational connoisseurship revisited. In M. W. McLaughlin and D. C. Phillips (Eds.), Evaluation and education: At quarter century, Ninetieth Yearbook of the National Society for the Study of Education, Part II. Chicago: University of Chicago Press.

Eisner, E. W. (1991b). The enlightened eye: Qualitative inquiry and the enhancement of educational practice. New York: Macmillan.

Floden, R. E. (1980). Flexner, accreditation, and evaluation. Educational Evaluation and Policy Analysis, 2, 35–46.

O'Brien, P. M. (Ed.). Accreditation: Assuring and enhancing quality. New Directions for Higher Education, No. 145, pp. 1–6. San Francisco: Jossey-Bass.

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.

Scriven, M. (2007). Key evaluation checklist. http://www.wmich.edu/evalctr/checklists/kec_feb07.pdf

U.S. Department of Education. (2006). A test of leadership: Charting the future of U.S. higher education. Washington, DC. http://www.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf

A Case Study

For this chapter, we recommend an interview with Gary Henry on the development of the Georgia school report card in Evaluation in Action, Chapter 7. Although our interviews do not contain any evaluations that explicitly use an expertise-oriented or consumer-oriented approach, this interview illustrates the development of a school report card to be used by consumers, parents, and citizens of Georgia. Some of Dr. Henry's work is concerned with identifying and developing the multiple criteria to be used on the report card, using

research studies and input from surveys of the citizens of Georgia and the advisory council to the evaluation. He discusses this process of identifying criteria in his interview and the means for formatting the information in an accessible, easy-to-use manner, and then dis- seminating it widely. The journal source is Fitzpatrick, J. L., & Henry, G. (2000). The Georgia Council for School Performance and its performance monitoring system: A dia- logue with Gary Henry. American Journal of Evaluation, 21, 105–117.

Program Evaluation: Alternative Approaches and Practical Guidelines, Fourth Edition, by Jody L. Fitzpatrick, James R. Sanders, and Blaine R. Worthen. Published by Pearson. Copyright © 2011 by Pearson Education, Inc.

ISBN 1-269-56906-6

Chapter 11: Clarifying the Evaluation Request and Responsibilities

Orienting Questions

1. Suppose you received a telephone call from a potential client asking if you would do an evaluation. What are some of the first questions you would ask?

2. Are there times you would decline a request for evaluation? If so, under what conditions?

3. How can an evaluability assessment help determine whether an evaluation will be productive?

4. What are some advantages and disadvantages in having an evaluation conducted by an external evaluator? By an internal evaluator?

5. What criteria would you use to select an external evaluator?


In the preceding chapters, we discussed evaluation’s promise for improving programs. The potential and promise of evaluation may create the impression that it is always appropriate to evaluate and that every facet of every program should be evaluated.

Such is not the case. The temptation to evaluate everything may be compelling in an idealistic sense, but it ignores many practical realities. In this chapter, we discuss how the evaluator can better understand the origin of a proposed evaluation and judge whether the study would be appropriate.

To clarify the discussion, we need to differentiate among several groups or individuals who affect or are affected by an evaluation study: sponsors, clients, stakeholders, and audiences.


260 Part III • Practical Guidelines for Planning Evaluations

An evaluation’s sponsor is the agency or individual that either requests the evaluation or provides necessary fiscal resources for its conduct, or both. Sponsors may or may not actually select the evaluator or be involved in shaping the study, but they often define the purposes of the evaluation and may specify particular areas that the evaluation should address or ways in which data should be collected. In other cases, the sponsor may delegate that authority to the client. The sponsor may be a funding agency or a federal or state department that oversees or regulates the activities of the organization that delivers the program.

The client is the specific agency or individual who requests the evaluation. That is, the client seeks an evaluator—internal or external—to conduct the evaluation and typically meets frequently with that evaluator as the evaluation proceeds. In some instances, the sponsor and client are the same, but not always. For example, in an evaluation of a domestic violence treatment program operated by a nonprofit agency, the agency (client) requests and arranges for the study, but the requirement and the funding may both originate with a foundation that funds the program and is, therefore, the sponsor. In contrast, the sponsor and the client are the same if the program to be evaluated is a drop-out prevention program for district high schools that is funded by the school district, and the person requesting the evaluation is a central office administrator who oversees secondary programs.

As we discussed in Chapter 1, stakeholders consist of many groups, but essentially include anyone who has a stake in the program to be evaluated or in the evaluation’s results. Sponsors and clients are both stakeholders, but so are program managers and staff, the recipients of program services and their families, other agencies affiliated with the program, interest groups concerned with the program, elected officials, and the public at large. It is wise to consider all the potential stakeholders in a program when planning the evaluation. Each group may have a different picture of the program and different expectations of the program and the evaluation.

Audiences include individuals, groups, and agencies who have an interest in the evaluation and receive its results. Sponsors and clients are usually the primary audiences and occasionally are the only audiences. Generally, though, an evaluation’s audiences will include many, if not all, stakeholders. Audiences can also extend beyond stakeholders. They can include people or agencies who fund or manage similar programs in other places or who serve similar populations and are looking for effective programs.

Understanding the Reasons for Initiating the Evaluation

It is important to understand what prompts an evaluation. Indeed, determining and understanding the purpose of the evaluation is probably the most important job the evaluation sponsor or client will have in the course of an evaluation. If some problem prompted the decision to evaluate, or if some stakeholder or sponsor has demanded an evaluation, the evaluator should know about it. In many cases today, an evaluation is conducted in response to a mandate from a funding source that is concerned about being accountable to a board or the public about programs it has funded. Presumably, the decision to evaluate was prompted by someone’s need to know something. Whose need? What does that policymaker, manager, stakeholder, or agency want to know? Why? How will they use the results? The evaluator’s first questions should begin to identify these reasons.

Sometimes the evaluation client can answer such questions directly and clearly. Unfortunately, that is not always the case. As evaluation has become popular, evaluations are often undertaken or mandated for few clear reasons other than that evaluation is a good thing or that programs should be accountable. Of course, the evaluator’s task is made more difficult when the client has no clear idea about what the evaluation should accomplish. It is not uncommon to find that clients or sponsors are unsophisticated about evaluation procedures and have not thought deeply about the purposes of the evaluation and the variety of questions it could answer or issues it could address. Worse yet, they may think that all evaluations automatically address outcomes or impacts and may insist that all evaluations address the same issues regardless of the stage of the program, the decisions they or others face, or the information needs of other stakeholders.

Frequently, the purpose of the evaluation is not clear until the evaluator has carefully read the relevant materials, observed the evaluation object, and probed the aspirations and expectations of stakeholders through significant dialogue.

Such probing is necessary to clarify purposes and possible directions. When sponsors or clients are already clear about what they hope to obtain, it is crucial for evaluators to understand their motivations. They can often do so by exploring—with whoever is requesting the evaluation and other stakeholders—such questions as the following:

1. Purpose. Why is this evaluation being requested? What is its purpose? What questions will it answer?

2. Users and use. To what use will the evaluation findings be put? By whom? What others should be informed of the evaluation results?

3. The program. What is to be evaluated? What does it include? What does it exclude? When and where does it operate? Who is the intended client for the program? What are the goals and objectives of the program? What problem or issue is the program intended to address? Why was it initiated? Who was involved in its planning? What prompted the selection of this strategy or intervention? Who is in charge of the program? Who delivers it? What are their skills and training? Has it ever been evaluated before? What data exist on it?

4. Program logic or theory. What are the essential program activities? How do they lead to the intended goals and objectives? What is the program theory or logic model? What have different stakeholders observed happening as a result of the program?

5. Resources and timeframe. How much time and money are available for the evaluation? Who is available to help with it? What is the timeframe for it? When is final information needed? Are there requirements that must be met for interim reports?


6. Relevant contextual issues. What is the political climate and context surrounding the evaluation? Who are the most concerned stakeholders? What individuals or groups might benefit from a positive evaluation? Who might benefit from a negative one? Will any political factors and forces preclude a meaningful and fair evaluation?
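For evaluators who keep intake notes electronically, the six areas above can be sketched as a simple checklist structure. This is an illustrative aid only, not a standard instrument; the class and field names are our own invention.

```python
from dataclasses import dataclass, field


@dataclass
class EvaluationIntake:
    """Notes from early conversations with a client or sponsor,
    organized by the six clarification areas discussed in the text."""
    purpose: list = field(default_factory=list)
    users_and_use: list = field(default_factory=list)
    program: list = field(default_factory=list)
    program_logic: list = field(default_factory=list)
    resources_and_timeframe: list = field(default_factory=list)
    context: list = field(default_factory=list)

    def unanswered_areas(self):
        """Areas with no notes yet -- questions still to pursue."""
        return [name for name, notes in vars(self).items() if not notes]


intake = EvaluationIntake()
intake.purpose.append("Funder requires evidence of outcomes for renewal.")
intake.program.append("Drop-out prevention program in district high schools.")
remaining = intake.unanswered_areas()   # four areas still to explore
```

A helper like unanswered_areas() simply flags which of the six areas still lack notes, serving as a reminder of questions to pursue in the next conversation with stakeholders.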

The foregoing questions are examples, and evaluators might subtract some or add others. What is important is that, through careful questioning, listening, and dialogue, the evaluator comes to understand the purpose for the evaluation and learns more about the context in which the program operates. Not all purposes are equally valid. By listening closely to the client’s reasons for initiating the evaluation and talking with other stakeholders to determine their information needs and expectations for the study, the evaluator can learn much that will help ensure that the evaluation is appropriately targeted and useful.

The evaluator can also take a proactive role during this phase by suggesting other reasons for evaluating that may prove even more productive (Fitzpatrick, 1989). This strategy is particularly useful when the stakeholders are new to evaluation and unsure of their needs. Sometimes clients assume they must follow the sponsor’s guidelines when a little dialogue with the sponsor might reveal more flexibility and open up avenues that will be more useful for the client in improving the program. Some clients or sponsors may assume that evaluations should only measure whether objectives are achieved or describe program outputs, outcomes, or impacts when, in fact, other critical information needs exist that could be served by evaluation. (For example, programs in their early stages often benefit from describing what is happening in the program, whether its activities are being delivered as planned, and whether adaptations are required.) Other clients may want to rush into data collection, seeing the evaluator’s role as “helping us with a survey” or “analyzing some test scores.” They are unfamiliar with the critical planning phase and how the evaluator can help them focus the evaluation to determine what they want to know. This phase begins the important two-way communication process essential to evaluation, in which the evaluator learns as much as he or she can about the program through careful questioning, observing, and listening and, at the same time, educates the sponsor, client, or other stakeholders about what evaluation can do.

In the early days of evaluation, Cronbach emphasized the importance of the educative role of the evaluator in helping the client determine the directions of the evaluation. Others emphasize that role today (Fitzpatrick & Bickman, 2002; Schwandt, 2008). Cronbach and his colleagues write that “the evaluator, holding the mirror up to events, is an educator. . . . The evaluator settles for too little if he simply gives the best answers he can to simple and one-sided questions from his clients. He is neglecting ways in which he could lead the clients to an ultimately more productive understanding” (1980, pp. 160–161). Therefore, before proceeding with the evaluation, the evaluator must spend a significant period of time learning about the program, its stakeholders, the decision-making process, and the culture of the organization to accurately determine the purpose of the study.


Direct Informational Uses of Evaluation

Evaluation is intended to enhance our understanding of the value of whatever is evaluated. Yet, as we noted at the beginning of this text, evaluation has many different uses. Examples of some of the informational uses of evaluation by policy makers, program managers, and program staff include:

1. Determining whether sufficient need exists to initiate a program and describing the target audience

2. Assisting in program planning by identifying potential program models and activities that might be conducted to achieve certain goals

3. Describing program implementation and identifying whether changes from the program model have occurred

4. Examining whether certain program goals or objectives are being achieved at the desired levels

5. Judging the overall value of a program and its relative value and cost compared with competing programs

Each of these five uses may be directed to an entire program or to one or more of the smaller components of a program. The first two uses are frequently part of planning and needs assessment (Altschuld, 2009; Witkin & Altschuld, 1995). These tasks generally take place during the early stages of a program, but they may occur at any stage in which program changes are being considered. The third use is often described as a monitoring or process evaluation. The fourth one can be characterized as an outcome or impact study. The final use is achieved through conducting cost-effectiveness or cost-benefit studies. All of these studies serve legitimate uses for evaluation because each one serves an important, informational use: enhancing our understanding of the value of the program.

Noninformational Uses of Evaluation

In addition to the direct informational uses described in the previous section, evaluation also has important noninformational uses. Cronbach and his colleagues (1980) first noted this in arguing that the very incorporation of evaluation into a system makes a difference. They conclude that “the visibility of the evaluation mechanism changes behavior” (p. 159), citing as an analogy how drivers’ observance of speed limits is affected by police officers patrolling the highways in plainly marked patrol cars. They also suggest that the existence of evaluation may help convince stakeholders that the system is responsive, not impervious, to their feedback.

As the approaches in Part Two indicated, evaluations have many other impacts. One important use is evaluation’s role in educating others, not simply about the program being evaluated, but also about alternative means for decision making. Smith (1989) writes that one of the most important benefits of evaluability assessment, a method of determining whether the program is ready for evaluation, is improving the skills of program staff in developing and planning programs.
