Book review:
Title : Program Evaluation: A Practitioner's Guide for Trainers and Educators (Sourcebook and Casebook)
Author : Robert O. Brinkerhoff; Dale M. Brethower; Terry Hluchyj; Jeri Ridings Nowakowski
Publisher : Kluwer Nijhoff Publishing, Boston/The Hague/Dordrecht/Lancaster
Year : 1983
Reviewer : Ipung Yuwono
This book was developed by the Evaluation Training Consortium (ETC) project at the Evaluation Center, Western Michigan University.
The purpose of this book is to assist personnel responsible for training, teacher education, and other professional development programs in private and public agencies, public schools, colleges, and universities. By using this book, users are expected to become more systematic in their program development activities and, at the same time, to recognize that their tasks can never be totally organized or, at times, even remotely logical or orderly. Rather, program planning and evaluation is a dynamic process, set within the confines of ever-changing organizational needs, priorities, and values. This book can serve as a practical guide for educators who want to improve the quality of their program evaluation.
The book is organized into two major areas. The first area is the Sourcebook, which contains chapters of guidelines, resources, and references for each of the seven evaluation functions. It provides optional approaches, guidelines, and procedures for designing and conducting evaluations to meet training needs. It looks at the decisions that evaluators need to make most often and provides the user with just those aspects of program evaluation that are most critical for success. The Sourcebook lists seven major evaluation functions, identifies thirty-five critical questions and tasks (p. 2-4), and shows alternative procedures for accomplishing the evaluation task at hand. For each question it addresses, the Sourcebook proposes some options for the evaluator in dealing with the question or issue and provides the alternative procedures available for accomplishing the related evaluation tasks. Each question also comes with guidelines and criteria that help the evaluator determine whether the job has been done adequately.
The seven major evaluation functions are:
a. focusing an evaluation and clarifying its purpose,
b. designing an evaluation,
c. collecting information,
d. analyzing information,
e. reporting: interpreting and using evaluation findings,
f. managing evaluation activities,
g. evaluating evaluation efforts.
Each function is defined, and then the several key decisions needed to complete the function are explained. The Sourcebook contains examples, guidelines, criteria, and checklists that users can draw on to do more effective evaluation. Each function is elaborated in a number of questions that help the evaluator plan. For example, for function (a), focusing an evaluation and clarifying its purpose, one question is "what is the purpose for evaluating?" The general purpose for evaluating should be clear: to determine the most appropriate evaluator and evaluation strategy, it is important to know why the evaluation is taking place. The Casebook gives an example of evaluation purposes (Case L1, p. 227). The Sourcebook also includes references to other books and resources that can be useful in evaluating training programs.
The second area is the Casebook, which shows how others have designed, conducted, and used program evaluation procedures as a regular part of their work. These absorbing case studies illustrate the problems of people who are trying to perform evaluation productively while facing real-life obstacles. Discussion questions provided with each case study get users involved in asking the tough questions they will want to be prepared to answer when it comes time for their own work to be evaluated. The Casebook contains a collection of twelve stories about evaluation applied to real-life projects and programs in different settings. These show people planning, conducting, and using evaluation. Each case-example is a story about evaluation within a particular training program. The case-examples, designed to portray evaluation applications of different types in different settings, were contributed by field practitioners and written in conjunction with ETC staff. They are fictional accounts but are based on actual programs and uses of evaluation. Each case-example is annotated to highlight the seven major evaluation functions as set forth in the Sourcebook. This is done to show how these functions differ according to particular program needs and settings. Following each case is a set of review and discussion questions to help extend the lessons available in the case-example.
The Design Manual, which is a separate part from the Sourcebook and Casebook, contains worksheets, directions, and guidelines for designing an evaluation. Its organization is similar to the Sourcebook's, as it helps us produce the different parts of an overall evaluation design. Each section presents an example of the design product needed; gives us worksheets, directions, and aids for producing that document; and provides a checklist for assessing our work.
There is no one particular order in which these materials are meant to be used. We could begin with any of the three parts, using them alone or in combination. Where we begin and how we use the materials depend on what we want to use them for.
The strengths:
The weakness:
population, characteristic of interest (being heterogeneous rather than homogeneous), and sampling unit (Krathwohl, 1998). In addition, we can use sequential sampling, which involves gathering additional data in successive waves until some criterion of adequacy is met.
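The sequential sampling idea mentioned above can be sketched in a few lines. This is a minimal illustration, not a procedure from the book: the adequacy criterion (a standard error of the mean below 0.1) and the simulated questionnaire scores are invented assumptions.

```python
# Illustrative sketch of sequential sampling: gather respondents in
# successive waves until a criterion of adequacy is met. The criterion
# (standard error < 0.1) and the simulated data are assumptions.
import random
import statistics

random.seed(1)

def next_wave(size=20):
    """Stand-in for collecting one wave of, e.g., questionnaire scores."""
    return [random.gauss(3.5, 1.0) for _ in range(size)]

sample = []
while True:
    sample.extend(next_wave())
    sem = statistics.stdev(sample) / len(sample) ** 0.5
    if sem < 0.1:          # criterion of adequacy (assumed threshold)
        break

print(len(sample), round(sem, 3))
```

The point of the sketch is only the control flow: data collection stops as soon as the precision criterion is satisfied, rather than at a sample size fixed in advance.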
Usefulness for our own developmental research project
The main aim of the study is to develop some chapters of a mathematics textbook based on realistic mathematics for senior secondary school. These exemplary materials (chapters) should provide a model for mathematics textbooks in secondary education. The study may start with a front-end analysis in order to obtain a clear picture of the starting situation and the main purpose of the study. In this step, a prototype of the exemplary materials will be developed. Based on the developers' reflections on the prototype and on formative evaluation results, the prototype will be continually refined and will evolve towards a final deliverable. Results of the formative evaluation may lead to revision of the prototype; in this way, each prototyping cycle represents the evolution of the system. This process requires interviewing, and distributing questionnaires to, the target group (students), experts, teachers, and other interested groups. In order to improve the present classroom performance of the teachers who will teach with the new approach (realistic mathematics), a training program for teachers will be developed. This training program fosters three kinds of learning outcomes: acquisition of new knowledge, skill development, and attitude change. Furthermore, in order to track students' learning progress, tests will be conducted. Therefore, in focusing the evaluation, we should consider the many elements in the setting that will probably influence the evaluation. To handle this problem, we can use the key issue "what elements in the setting are likely to influence the evaluation" on pages 23-26. In designing the evaluation, we need to assess the quality of the evaluation design; for this aim, the key issue "how do you recognize a good design" on pages 64-71 might help us.
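The prototyping cycle described above (develop a prototype, run a formative evaluation, revise until the results are adequate) can be sketched roughly as a loop. All names, the scoring rule, and the threshold below are illustrative assumptions, not part of the study's design.

```python
# Rough sketch of an iterative prototyping cycle: formatively evaluate,
# then revise until an (assumed) adequacy criterion is met. The scoring
# rule is a deliberately simple stand-in for real evaluation feedback.

def formative_evaluation(prototype):
    """Stand-in for gathering feedback from students, experts, and
    teachers; here each revision simply raises a quality score."""
    return min(1.0, 0.4 + 0.2 * prototype["revision"])

def revise(prototype):
    """Refine the prototype based on the evaluation results."""
    return {"revision": prototype["revision"] + 1}

prototype = {"revision": 0}
score = formative_evaluation(prototype)
while score < 0.9:          # adequacy criterion (assumed threshold)
    prototype = revise(prototype)
    score = formative_evaluation(prototype)

print(prototype["revision"], score)
```

The sketch only captures the shape of the process: each cycle feeds evaluation results back into a revision, and the loop ends when the deliverable is judged good enough.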
Additionally, we should use this book together with Tessmer's book, as Tessmer mentions that the focus of his book is on the formative evaluation of instructional products such as texts, lectures, and multimedia instruction (Tessmer, 1993). In the step of collecting data by interview, questionnaire, or test, the information in the book about "collecting information" on pages 77-115 is apparently important for us. With the information in this part of the book, we can:
In the step of analyzing the information and summarizing results, we must begin the important work of pulling together the information we have gathered. Most information will be either qualitative (from interviews) or quantitative (from questionnaires and tests). Information (data) must be properly handled and stored in order to prepare it for analysis. This includes coding the data, aggregating and organizing it, and storing it for safekeeping and ready access. The main idea is to handle the data in ways that facilitate its use and keep it from getting lost or forgotten. To support this aim, pages 119-122 (aggregate and code data if necessary), pages 123-126 (verify completeness and quality of raw data), pages 127-144 (select & run defensible analysis), and pages 145-147 (interpret the data using prespecified and alternative sets of criteria) can help us. However, in analyzing quantitative data, because many statistical concepts in this book are inaccurate, we should consult standard statistics references, for example Krathwohl (1998), Grimm (1993), Minium (1993), or Guilford (1978).
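As a small illustration of the "aggregate and code data" step, one might code Likert-scale questionnaire answers numerically and then compute a simple summary. The response labels, numeric codes, and sample answers below are invented for illustration, not taken from the book.

```python
# Minimal sketch of coding questionnaire data: map Likert-scale labels
# to numbers, then summarize. Labels, codes, and answers are assumptions.
import statistics

codes = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

responses = ["agree", "neutral", "agree", "strongly agree", "disagree"]

coded = [codes[r] for r in responses]   # code the raw answers
mean = statistics.mean(coded)           # one simple, defensible summary
print(coded, mean)
```

Once answers are coded this way, they can be aggregated across respondents and items and stored for later analysis, which is exactly the preparation the book's analysis pages describe.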
Education and training programs are evaluated in order to determine their quality and to gain direction for improving them. A clear definition of what constitutes a reasonable evaluation of an educational program is needed. Another book, Sanders (1994), provides standards for the practice of educational program evaluation. That book contains a set of standards that speak to the responsible conduct and use of evaluations of educational programs, projects, and materials. The standards provide a guide for evaluating educational and training programs. It is better if this book (Brinkerhoff's book) is used together with Sanders's book.
The book can be found in the Toegepaste Onderwijskunde (TO) library under catalog number TO 371.2 p 089. To buy the book, we can contact the distributors:
3300 AH Dordrecht, The Netherlands.
190 Old Derby Street Hingham, Massachusetts 02043, USA.
References:
Brainard, E.A. (1996). A Hands-on Guide to School Program Evaluation. Bloomington: Phi Delta Kappa Educational Foundation.
Grimm, L.G. (1993). Statistical Applications for the Behavioral Sciences. New York: John Wiley & Sons.
Guilford, J.P., & Fruchter, B. (1978). Fundamental Statistics in Psychology and Education (6th edition). New York: McGraw Hill.
Krathwohl, D.R. (1998). Methods of Educational & Social Science Research (2nd edition). New York: Longman.
Minium, E.W., King, B.M., & Bear, G. (1993). Statistical Reasoning in Psychology and Education. New York: John Wiley & Sons.
Sanders, J.R. (1994). The Program Evaluation Standards (2nd edition). London: Sage Publications.
Tessmer, M. (1993). Planning and Conducting Formative Evaluations. London: Kogan Page.