Evaluating Practice: Guidelines for the Accountable Professional (Sixth edition, Pearson New International Edition) 1292041951, 1269374508, 9781292041957, 9781269374507, 9781292052045, 129205204X

Evaluating Practice continues to be the most comprehensive practice evaluation text available. Focusing on single-system


English Pages 496 [501] Year 2013;2014


Table of contents :
Cover......Page 1
Table of Contents......Page 4
1. Integrating Evaluation and Practice......Page 6
2. Basic Principles of Conceptualization and Measurement......Page 32
3. Specifying Problems and Goals......Page 58
4. Developing a Measurement and Recording Plan......Page 86
5. Behavioral Observation......Page 130
6. Standardized Scales......Page 160
7. Logs......Page 206
8. Reactivity and Nonreactive Measures......Page 226
9. Selecting a Measure......Page 238
10. Basic Principles of Single-System Designs......Page 248
11. Baselining......Page 280
12. From the Case Study to the Basic Single-System Design: A-B......Page 294
13. The Experimental Single-System Designs: A-B-A, A-B-A-B, B-A-B......Page 310
14. Multiple Designs for Single Systems......Page 334
15. Changing Intensity Designs and Successive Intervention Designs......Page 358
16. Designs for Comparing Interventions......Page 376
17. Selecting a Design......Page 392
18. Basic Principles of Analysis......Page 410
19. Visual Analysis of Single-System Design Data......Page 434
20. Not for Practitioners Alone......Page 450
21. References......Page 478
Index......Page 496

Evaluating Practice: Guidelines for the Accountable Professional 6e

ISBN 978-1-29204-195-7

Evaluating Practice: Guidelines for the Accountable Professional Martin Bloom Joel Fischer John G. Orme Sixth Edition

Pearson New International Edition Evaluating Practice: Guidelines for the Accountable Professional Martin Bloom Joel Fischer John G. Orme Sixth Edition

Pearson Education Limited
Edinburgh Gate
Harlow
Essex CM20 2JE
England and Associated Companies throughout the world

Visit us on the World Wide Web at: www.pearsoned.co.uk

© Pearson Education Limited 2014

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without either the prior written permission of the publisher or a licence permitting restricted copying in the United Kingdom issued by the Copyright Licensing Agency Ltd, Saffron House, 6–10 Kirby Street, London EC1N 8TS. All trademarks used herein are the property of their respective owners. The use of any trademark in this text does not vest in the author or publisher any trademark ownership rights in such trademarks, nor does the use of such trademarks imply any affiliation with or endorsement of this book by such owners.

ISBN 10: 1-292-04195-1
ISBN 13: 978-1-292-04195-7
ISBN 10: 1-269-37450-8
ISBN 13: 978-1-269-37450-7

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Printed in the United States of America

PEARSON CUSTOM LIBRARY

Table of Contents

1. Integrating Evaluation and Practice (Martin Bloom/Joel Fischer/John G. Orme)     1
2. Basic Principles of Conceptualization and Measurement (Martin Bloom/Joel Fischer/John G. Orme)     27
3. Specifying Problems and Goals (Martin Bloom/Joel Fischer/John G. Orme)     53
4. Developing a Measurement and Recording Plan (Martin Bloom/Joel Fischer/John G. Orme)     81
5. Behavioral Observation (Martin Bloom/Joel Fischer/John G. Orme)     125
6. Standardized Scales (Martin Bloom/Joel Fischer/John G. Orme)     155
7. Logs (Martin Bloom/Joel Fischer/John G. Orme)     201
8. Reactivity and Nonreactive Measures (Martin Bloom/Joel Fischer/John G. Orme)     221
9. Selecting a Measure (Martin Bloom/Joel Fischer/John G. Orme)     233
10. Basic Principles of Single-System Designs (Martin Bloom/Joel Fischer/John G. Orme)     243
11. Baselining (Martin Bloom/Joel Fischer/John G. Orme)     275
12. From the Case Study to the Basic Single-System Design: A-B (Martin Bloom/Joel Fischer/John G. Orme)     289
13. The Experimental Single-System Designs: A-B-A, A-B-A-B, B-A-B (Martin Bloom/Joel Fischer/John G. Orme)     305
14. Multiple Designs for Single Systems (Martin Bloom/Joel Fischer/John G. Orme)     329
15. Changing Intensity Designs and Successive Intervention Designs (Martin Bloom/Joel Fischer/John G. Orme)     353
16. Designs for Comparing Interventions (Martin Bloom/Joel Fischer/John G. Orme)     371
17. Selecting a Design (Martin Bloom/Joel Fischer/John G. Orme)     387
18. Basic Principles of Analysis (Martin Bloom/Joel Fischer/John G. Orme)     405
19. Visual Analysis of Single-System Design Data (Martin Bloom/Joel Fischer/John G. Orme)     429
20. Not for Practitioners Alone (Martin Bloom/Joel Fischer/John G. Orme)     445
21. References (Martin Bloom/Joel Fischer/John G. Orme)     473
Index     491

INTEGRATING EVALUATION AND PRACTICE

PURPOSE: This chapter presents an introduction to evaluation-informed practice through use of single-system designs (SSDs), a method used by helping professionals to evaluate practice. In this chapter we present the basic characteristics of single-system designs, compare this evaluation method with the classical research methods, and summarize the entire evaluation process and this book with a flowchart showing which chapters will discuss which portions of the whole process. We also present an introduction to evidence-based practice, and describe the ways in which evidence-based practice and evaluation-informed practice can be integrated. Our main purpose in this chapter is to encourage you to recognize the feasibility and the desirability of evaluating your own professional services—in whatever community setting you may work and using whatever theory guides your practice. This is a goal that we will identify as scientific practice, the combination of a sensitive and caring practitioner with the logical and empirical strengths of the applied social scientist.

Introduction to Single-System Designs
A Primer on Single-System Designs
Specifying the Target (Problem or Objective/Goal)
Measuring the Target by Forming Its Operational Definition
Baseline and Intervention Phases
Repeated Measures
Practice Designs
Evaluation Designs
Analysis of the Data
Decision Making on the Basis of Findings
Single-System Designs and Classical Research: The Knowledge-Building Context
Evidence-Based Practice
Steps in Evidence-Based Practice
Integrating Evaluation-Informed Practice and Evidence-Based Practice: The PRAISES Model
Single-System Evaluation, Qualitative Research, and Quantitative Research
Advantages of Using Single-System Designs in Practice
A Walk through the Evaluation Process
Summary

From Chapter 1 of Evaluating Practice: Guidelines for the Accountable Professional, Sixth Edition. Martin Bloom, Joel Fischer, John G. Orme. Copyright © 2009 by Pearson Education, Inc. All rights reserved.


INTRODUCTION TO SINGLE-SYSTEM DESIGNS

The title of this book, Evaluating Practice, says it all: Everything we present in this text is intended to facilitate your use of evaluation methods in your practice, that is, the systematic, ongoing, and more or less objective determination of whether we are obtaining the objectives and goals of practice that we set with our clients. This is a critical function of practice, especially as it is going to be conducted in this age of managed care that demands tangible evidence of effectiveness (Vandiver & Corcoran, 2007; Bolen & Hall, 2007). Virtually all the helping professions require their practitioners to evaluate their practice and their students to learn how to evaluate (Fisher, 2004; NASW, 1996; O'Donohue & Ferguson, 2003). For example, the accrediting body of schools of social work requires that all students be taught how to evaluate their own practices (Council on Social Work Education, 2003), and the Codes of Ethics of social work, clinical psychology, and counseling require evaluation of all intervention programs and services (O'Donohue & Ferguson, 2003; NASW, 1996).

We call this emphasis on evaluation in practice evaluation-informed practice, because it refers to the use of formal and systematic evaluation methods to help you assess, monitor, and evaluate your cases and inform you and the client about decisions that can be made to improve your results. Although there are many ways we as practitioners receive information about what happens with our clients, it is the primary thesis of this book that evaluation-informed practice, because of its systematic and logical approach, comprises a major increment over other methods in helping us achieve effective practice. In fact, there is an emerging, methodologically strong body of evidence which shows that such routine monitoring and receipt of feedback as in evaluation-informed practice can reduce deterioration, improve overall outcome, and lead to fewer intervention sessions with no worsening of outcome for clients making progress (i.e., increased cost-effectiveness) (e.g., Faul, McMurtry, & Hudson, 2001; Lambert, 2007; Lambert et al., 2002, 2003; Harmon et al., 2007).

The primary method of evaluation that is the backbone of evaluation-informed practice is single-system designs, the topic of this book. Single-system designs essentially involve continuing observation of one client/system before, during, and after some intervention. By client/system we mean one or more persons or groups being assisted by a helping professional to accomplish some goal. These goals may involve preventive, protective, promotive (the three aspects of primary prevention), interventive, or rehabilitative practices. And these practices may be delivered by social workers; psychologists; people in nursing, medicine, or the allied health fields; or professionals in education or other helping professions.

For simplicity of discussion, we will use the term client to refer to individuals, groups, or other collectives, as the context will make clear. And we will use the term practitioner to refer to any helping professional. The term treatment, although widely used, is problematic because it is a medically oriented word that assumes a medical model of disease causation that may not be suitable to the full range of social and psychological concerns. So, we will mainly use the term intervention for discussions of practice that may include medical treatment, social services, psychotherapy, community change efforts, educational methods, and the like. The context of our discussion will determine what practice term we use.

We suggest that you start this book by reading the case example of graduate students learning to use single-system designs that we have included on the CD-ROM that comes with this book. We consider that case as the Prologue to the book. Insert your CD, click on "Student Manual," "PDF File" and "Prologue from Evaluating Practice (5th ed.)." Double click on this file to open it.

As we illustrated in the case presented in the Prologue, using single-system evaluation, you can learn what the client's problems and potentials are before the intervention. You can identify the client's objectives and goals. From this understanding of problems and goals, you can specify a target or targets for intervention. We use the word target to refer to clients' concerns, problems, or objectives that have been clearly specified as the focus of our interventive efforts. Combining this knowledge of the client's situation both with our theories of human behavior and behavior change and with the relevant research base, you can construct and implement a workable intervention plan. Then we monitor the interventive process; that is, we check progress against the client's objectives in order to know whether to change our intervention: Should we do more than, the same as, or less than we are doing, or should we do something completely different? This is the heart of the evaluation process, making use of the rapid feedback of information for present and future helping.
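
The workflow just described (specify a target, measure it repeatedly, then monitor it against the client's objectives during intervention) can be illustrated with a minimal sketch. The sketch below is not from the text; the measure, the phase lengths, and all numbers are hypothetical, and it is written in Python purely for illustration.

# A minimal sketch (hypothetical data) of the single-system workflow described above:
# a target is operationally defined, measured repeatedly during a baseline (A) phase,
# and then monitored during an intervention (B) phase.

baseline = [6, 7, 5, 8, 7, 6, 7]        # e.g., daily count of a problem behavior before intervention
intervention = [6, 5, 5, 4, 3, 4, 2]    # the same target, recorded daily after intervention begins

def mean(scores):
    return sum(scores) / len(scores)

print(f"Baseline mean:     {mean(baseline):.1f}")
print(f"Intervention mean: {mean(intervention):.1f}")

# If the goal is to reduce the behavior, a falling intervention mean (and a visible
# downward pattern on the chart) suggests progress; no change or worsening would
# prompt the practitioner to reconsider the intervention plan.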


Finally, we can determine whether the overall outcome was successful in terms of the client's objectives and goals in the social context. We will use the term evaluation to refer to this special analysis of outcomes, in contrast to monitoring the process of intervention.

There are a number of simple conventions for graphing the flow of events with regard to a particular client problem. We often obtain a reference point (or more likely, a pattern of points), called a baseline, of a given problem before any intervention occurs. We then can compare changes in the target against the baseline during and after intervention. We repeat these same measures on a regular basis, using a chart to examine client change. Using certain conventions described in this book, we can determine for the ongoing intervention whether there is significant change occurring—in either the desired or undesired directions—in order to make suitable adaptations in the intervention. We also can have an empirical and rational basis for termination: when statistical significance is aligned with clinically or socially meaningful changes in the client's situation, as predicted by one's practice theory. This process is the essence of scientific practice—the use of science, scientific methods, and findings from scientific research—that we hope will result in a relatively objective guide to sensitive practice changes and outcomes. This is not a lock-step approach to practice and evaluation, but one that does require considerable thought and flexibility.

And that's it! The basic principles of single-system designs actually are few and simple. The principles can be combined in numerous ways to fit the needs of a given case or situation. The information obtained can be reasonably objective, and the sense of knowing how matters are proceeding is positively exhilarating. That's the fun part, assuming that the case is going well. If it isn't, then you need to know about this as soon as possible, so you can change intervention techniques or directions. Now, let us turn to a discussion of specifics that we just summarized in the preceding paragraphs.

A Primer on Single-System Designs

For many years helping professionals dreamed about integrating theory, research, and practice in the belief that the more clearly their theories could be stated and tested in the real world, the better their practice would be. Unfortunately, it has been difficult to create this integration, primarily because the technologies needed were not available. The global concepts used in some theories were difficult to define in concrete terms. In classical, experimental/control group research designs, immediate feedback was almost impossible to achieve. Thus, when reasonably objective but relatively easy methods of evaluation, such as single-system designs, were developed, the helping professions had the tools for which they had long hoped.

The phrase single-system designs refers to a set of logical and empirical procedures used to observe changes in an identified target (a specified problem or objective of the client) that is measured repeatedly over time. This relatively new evaluation technology has been described using a number of terms, all of which are about equivalent for our purposes. These terms include intensive or idiographic research, single N or N = 1 research, single-subject research or single-subject design, single case-study design, time-series research, single-organism research, single-case experimental design, and the term we will use in this book, single-system designs. We use this term because it emphasizes the person-in-the-environment as a useful perspective for scientific practice. A given system—one person alone, but often one or more persons and/or groups interacting in ordinary life situations—is the object of evaluation and intervention. Changes are sought in either the person or persons involved, and/or the relevant social or physical environments. Thus, single-system designs refer to evaluating practice with the relevant parts of a whole system, which could range from an individual to a family to a group to a community to any size system.

Explicit interventions typically are guided by a practice theory or by specific principles of practice; consequently, using single-system designs, we can learn what intervention techniques work well under what conditions. However, there are no particular intervention models (like behavior therapy) that alone are suited for single-system designs. Single-system designs are not attached to any particular theory of intervention. The single-system design model is "theory independent" in this sense; that is, the design can be used with just about any intervention theory or approach. Single-system designs have been used by, among others, psychoanalytically-oriented practitioners (Dean & Reinherz, 1986); practitioners working with groups (Johnson, Beckerman, & Auerbach, 2001) and families (Bentley, 1990); practitioners using systemic, problem-solving approaches (Hall, 2006); cognitive-behavioral practitioners (Humphrey & Brooks, 2006); practitioners conducting parent-child interaction therapy; practitioners using narrative/music therapy (Strickland, 2006); practitioners using task-centered and motivational interviewing (Fassler, 2007); practitioners using paradoxical intention (Kolko & Milan, 1983); and so on. In fact, we will use examples from a variety of theoretical orientations throughout this book to illustrate the adaptability and flexibility of single-system designs.

Usually, but not always, single-system designs employ a before/during and/or a before/during/after approach to compare the patterns of two or more states of one client/system. The before-intervention state (baseline) is used as a frame of reference for changes occurring in the during-intervention and/or after-intervention state. Because the same client/system is involved throughout the service period, the practitioner looks for differences in that system's target events and, with a powerful enough evaluation design, tries to determine whether his or her efforts produced these differences. Now, let's examine the basic characteristics of all single-system designs.

Specifying the Target (Problem or Objective/Goal). A fundamental rule of any professional practice requires that you identify the problem: What are we going to try to change? In practice, this often involves client and practitioner interactions that define the problems and objectives in the given situation, as well as the strengths and resources with which they have to work. Something important goes on when clients and practitioners discuss the presenting problems and objectives and goals (what we hope to accomplish). The clients express their concerns, and practitioners simultaneously conceptualize and empathize. Practitioners conceptualize when they abstract and generalize the patterns of client behaviors (thoughts, feelings, and actions), which are given some label (a concept) representing that class of experiences. For example, the practitioner sees a client's head slumped to the chest and the teary eyes, and hears the client talk of not sleeping well and losing interest in food and sex. The practitioner forms an initial hunch that the client may be depressed, which directs the practitioner to look for and test additional factors known to be associated with depression. Thus, concepts act as the building blocks of theory, or at least of some general principles, which in turn serve as guides to practice.


It is essential to be clear about how you conceptualize because the class labels you identify lead you to bodies of information that in turn guide your practice. A fuzzy concept can lead you in unfruitful directions. Accurate conceptualizing represents the practitioner's first contribution to the helping process. Simultaneously with conceptualizing, practitioners also empathize; they actively listen to clients, cognitively and emotionally attempting to understand the meanings of these problems and objectives, and then reflect this understanding to the clients with warmth and genuineness. These are the core skills for any helping practice (Fischer, 1978; Norcross, 2002; Norcross & Hill, 2004). Scientific practice includes sensitive, empathic awareness of client problems and strengths, which constitute the practitioner's second contribution to the helping process. Both conceptualization and empathy help the practitioner to specify what is happening in the client situation.

It is important to note that no finite list of problems or objectives will ever be a comprehensive picture of the whole person or group involved. Instead, a few specified targets are chosen and represent indicators of the whole, as a compromise between feasibility and comprehensiveness. Schön (1983) describes presenting problems as "indeterminate messes"—complex, changing, fluid experiences that constitute the human condition. While case situations are often messy in this way, the practitioner tries to create some order with them to make the problems amenable to change. In effect, the practitioner is a problem constructor, assembling and recombining elements of the "mess" with the client until both share a common perception of the problem and how to solve it. As this process of problem construction continues, the mess may become more manageable. The client and practitioner may see progress—in part because they have agreed on ways of talking about or identifying problems and strengths in the client's situation. This is a critical step in problem solving.

Measuring the Target by Forming Its Operational Definition. By means of the problem construction process we just described, the broad problem and/or objective can be changed into a specific target that the practitioner will seek to influence when it is restated as an operational definition. By specifying what operations or measurement procedures we will use to define the target—how often it occurs, in what intensity, and so on—both client and practitioner can be clear about what they are dealing with. These actual procedures of measurement are called the operational definition. Changes in this target will provide feedback to the practitioner and client regarding progress toward problem resolution. But, we want to emphasize that the measurement procedures used in single-system designs are generally quite simple, and they are used only to the extent that they can help the practitioner make decisions about changes in the client's situation. Moreover, there are formal scales and do-it-yourself procedures available to measure just about every possible target, no matter what the practitioner's theoretical orientation. We present many case examples illustrating the range of measures throughout this book.

Baseline and Intervention Phases. A phase is a time period during which a particular activity occurs. In general, there are two types of phases, baseline phases and intervention phases. The targets are measured in both phases, but during a baseline phase, no target-focused intervention is implemented, whereas during an intervention phase, one or more target-focused helping practices are introduced. The baseline phase involves the planned, systematic collection of information regarding the target, typically before a given intervention is begun. Usually, you collect such information at the very beginning of the case as you are trying to understand the nature and extent of the problems and strengths in the client's situation prior to intervening. You also may return to baseline conditions during the course of service, such as when one intervention appears not to be working well and the practitioner needs a fresh perspective in identifying changes in the client's situation. These baseline or nonintervention phases constitute the core of single-system evaluation; changes in the pattern of data between baseline and intervention phases provide the key evaluation information.

Students and practitioners frequently ask whether any baseline situation can ever be free of some intervention. The answer requires that the distinction be made clearly between the relationship-building interactions that are not precisely structured in a way to bring about specific changes and the interventions that are intended to change specific aspects of the client situation. Practitioners rightly believe that they are providing a useful service while they are building a trusting relationship with the client. But this trust or rapport generally is a means to an end, which is problem resolution; rapport is not necessarily an end in itself. This vital relationship often provides the basis for implementing an intervention and for helping the client to make changes in his or her life; it may also provide sensitive information whereby the environment may be changed to benefit the client (see Bohart & Greenberg, 1997). Formal interventions, on the other hand, are planned changes in which practitioners perform certain actions with regard to their clients, to other people, or to situations, in order to achieve specified objectives. Thus, we expect changes that occur as a result of these formal interventions to be far greater than those that would occur in the baseline phase, precisely because these planned interventions are added to the rapport-building and relationship-building activities in the baseline. Of course, the practitioner maintains these trusting relationships during the intervention.

Repeated Measures. The heart of single-system evaluation is the collection of repeated information on the target problems or objectives. This is what is meant by a time-series design. Either the practitioner, the clients, or relevant others observe the same target problem over regular time intervals such as every day or every week—whatever is appropriate to the given problem—to see whether any changes are taking place between the "before phase" (the baseline phase), and the "during phase" (the intervention phase), and/or in the "after phase" (the follow-up phase) of the intervention. This is the basis of monitoring progress to determine whether changes are needed in the intervention program, a process that is critical to guiding practice and making sound practice decisions.

Practice Designs. Whatever practice methods and theories are used, they should be clearly described. Thus, if the overall results are positive, you would know what exact services were delivered and under what conditions those results were produced. In this way, you would be able to build a repertoire of effective methods, as well as communicate clearly with others who might wish to use these same methods.


Equally important, it is useful to know what does not work with a given kind of client in a given situation, so that we won't repeat our less effective methods.

There should also be clear conceptual linkages between the identified targets and the specific interventions chosen to affect them. For example, if one were dealing with depression—a cognitive, physiological, and affective state with certain typical associated behaviors, often correlated with various kinds of environmental factors—then one's intervention approach would need to take account of these cognitive, physiological, affective, behavioral, and environmental factors. A given intervention approach need not directly act on all of these factors, but it should have a rationale for what it does direct the practitioner to act on, and how it presumes those factors on which there is no planned action would be influenced. A practice design is the sum of systematic and planned interventions chosen to deal with a particular set of targets to achieve identified objectives and goals. Such a design may be a translation of an existing theory or set of principles of behavior change adapted to the context of a given client, or it may be newly constructed by the practitioner (see, e.g., Thyer, 2001).

Evaluation Designs. In general, research designs are arrangements for making or structuring observations about a situation in order to identify lawful relationships (e.g., that a certain variable seems to vary consistently with another variable). Evaluation designs are special types of research designs that are applied to the evaluation of practice outcomes. Single-system designs, as one type of evaluation design, involve arrangements in which repeated observations before, during, and/or after intervention are compared to monitor the progress and assess the outcome of that service. All single-system designs permit an analysis of patterns of change in client problems; some of the more complex designs also allow you to infer whether the practitioner's efforts may be causally responsible for the identified changes.

Analysis of the Data. Unlike other forms of evaluation, single-system designs rely on several unique types of analysis. First, a simple visual analysis of changes in the data on the chart on which all data are recorded (i.e., from baseline through intervention periods) may indicate improvement, deterioration, or even no change, providing feedback as to whether the intervention should be maintained or revised; this is the monitoring function of single-system designs. Also, a visual analysis of the data that reveals marked positive differences between baseline and intervention is sometimes used as the basis for overall accountability in the case; this is the evaluation function of single-system designs. Statistical analyses also may be used for more precise statements of outcome, and in most cases, computers can facilitate statistical analyses.

Decision Making on the Basis of Findings. The ultimate purpose of doing single-system evaluations is to be able to make more effective and humane decisions about resolving problems or promoting desired objectives. That means that we will consider the visual and the statistical analyses of the intervention, along with an analysis of the social or clinical import of these events in order to guide our practice decisions. A judgment about human activities involves more than the "facts" of the matter; it involves the values that are activated by these events. Single-system analyses offer ways to test hypotheses about portions of the client situation in order to receive rapid feedback. We assume as an axiom of practice that the better the information you have about the case, the more likely you are to make better service decisions.

And there you have it. These basic points are simple, but there are plenty of variations and complications, as you will read in the rest of this book. Single-system designs are approximate methods to determine whether certain practice objectives have been achieved. In exchange for the relative simplicity of methods and the rapidity of feedback, we give up some of the power that can be attained in classical research designs. However, by use of more powerful single-system designs, it is possible to do some sophisticated and rigorous evaluation. We are advocating that everyone learn to use the basic designs, even with their limitations. We also hope to motivate some to use more sophisticated designs because of their power, and because sometimes we need to know whether we caused changes in the target to occur, information that the more sophisticated designs can provide.

Evaluation is simply that part of good practice that informs the practitioner about how well the intervention is proceeding relative to the original objectives. But evaluation also encourages the practitioner to be clear about the targets of intervention, as well as about the helping activities themselves. Evaluation-informed practice using single-system designs means that our evaluation efforts help guide practice decisions in a dynamic fashion.
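
To make the visual and statistical analyses mentioned above concrete, here is a small sketch with hypothetical numbers. It is not the authors' worked example; the two-standard-deviation band used here is simply one common rule of thumb for comparing intervention data against the baseline, and any chart of the same series would support the visual analysis.

# A sketch (hypothetical data) of the two kinds of analysis described above: the data
# would be charted for visual analysis, and a simple statistical check flags
# intervention points falling outside a two-standard-deviation band around the
# baseline mean.

import statistics

baseline = [6, 7, 5, 8, 7, 6, 7]
intervention = [6, 5, 5, 4, 3, 4, 2]

m = statistics.mean(baseline)
sd = statistics.stdev(baseline)
lower, upper = m - 2 * sd, m + 2 * sd

outside = [x for x in intervention if x < lower or x > upper]
print(f"Baseline mean {m:.1f}, two-SD band {lower:.1f} to {upper:.1f}")
print(f"Intervention points outside the band: {outside}")

# A chart of the same numbers (baseline phase, then intervention phase, on one time
# line) shows at a glance whether the pattern suggests improvement, deterioration,
# or no change.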


Because evaluation seems to imply research, statistics, mathematics, and computer science technology, it does raise ghosts that are fearful to some practitioners and students. But this misses the point, because it is the logic of clear thinking and acting that is essential to evaluation, not a bunch of difficult and esoteric technologies. Good practitioners carefully observe and subjectively evaluate their own practice. This may be why they are good practitioners—they benefit from this precise observation and accurate feedback as they monitor the progress of their clients. The single-system methodology adds to these subjective evaluations by providing everyone with a common language and more or less objective procedures for doing what good practitioners have long done.

Good practice is holistic practice. It involves a systematic framework for conducting the actual steps of intervention; consideration of the role of theory and ethics; and a way to measure, monitor, evaluate, and guide what we do in practice. We have tried to give as honest and as accurate an account as possible of how new students may proceed (and sometimes stumble) through that process. There are, indeed, complex issues that must be worked out with each and every case. That is one of the great challenges of evaluation-informed practice. But we want to assure you, based on our own practice and the practice of our students and colleagues, that with experience comes greater expertise, and with greater expertise comes enhanced competence in wending your way through some of the obstacles to humane and effective scientific practice.

SINGLE-SYSTEM DESIGNS AND CLASSICAL RESEARCH: THE KNOWLEDGE-BUILDING CONTEXT

The focus of this book is almost exclusively on single-system designs. Many readers have probably had some exposure to research methods, but this exposure is more likely to have involved what is called "classical" methods, in which experimental groups are compared with control groups in laboratories or field settings, or where large-scale surveys are conducted. Typically, these methods depend on the aggregation of data from groups of people; for example, one would find the mean of the group rather than look at any one person's scores. Then comparisons are made between different groups of persons; for example, the average score of one group that received an intervention is compared with the average score of another group that did not. These methods frequently use a variety of statistical procedures and computer software to process the large amounts of data collected from the participants in the research.

Such classical methods are extremely powerful and useful, particularly in generalizing research results to similar populations, one of the vital functions of science. The classical group designs are also effective in ruling out alternative explanations of results (e.g., Shadish, Cook, & Campbell, 2002). In fact, most program evaluations are conducted using classical research methods, providing absolutely critical information to practitioners about the effectiveness of different intervention programs (see Royse, Thyer, Padgett, & Logan, 2006; Unrau, Gabor, & Grinnell, 2007; Cone, 2001, for excellent overviews of the methods and issues in evaluating practice outcomes). So, classical research and evaluation methods are at the heart of evidence-based practice decisions, as we will see in the next section.

However, these classical designs are not useful for many of the situations with which most practitioners are immediately concerned. While controlled, experimental, and quasi-experimental designs evaluate the differences between groups from pretest to posttest, a time period that can range from weeks to months, single-system designs provide immediate information on virtually a daily basis so that you can monitor changes (and lack of changes) in the client's problem situation and guide your practice accordingly. This is the crucial information a practitioner uses to make decisions about whether to maintain or change his or her intervention. And that is at the core of evaluation-informed practice.

We have summarized the similarities and differences between these models of research and evaluation—the classical experimental/control group design on the one hand, and the single-system design on the other—in Table 1. We also have drawn the classical and the single-system designs in Figure 1, comparing their basic ingredients. By following the research and the evaluation processes from left to right, and then from the top to the bottom of Figure 1, you can see the parallels and the differences that are also described in a different form in Table 1.

In general, then, there are different roles for each of these approaches: Single-system designs are best used to monitor, guide, and evaluate practice in field situations, whereas the results of experimental/control


Table 1  Comparison of single-system designs and experimental/control group designs

1. Number of clients involved
   Single-system designs: One individual, group, or collectivity.
   Experimental/control group designs: At least two groups are involved. Ideally, these groups are randomly selected from a common population and randomly assigned to groups to control between-group variability.

2. Number of attributes measured
   Single-system designs: Limited to a number feasibly collected by the client, practitioner, or others. Usually a large number of issues are assessed before selecting the specific targets, few in number.
   Experimental/control group designs: Variable, depending on the purpose of the study. Usually a medium to large number of items is asked of a large number of persons by research interviewers or others.

3. Number of measures used for each attribute
   Single-system designs: Variable. Ideally, multiple measures of the same attribute would be used (cf. Shadish, Cook, & Campbell, 2002).
   Experimental/control group designs: Variable. Ideally, multiple measures of the same attribute would be used (cf. Campbell & Fiske, 1959).

4. Number of times measures are repeated
   Single-system designs: Data are collected frequently in regular intervals before, during, and sometimes after (follow-ups) the intervention. Assumes variability in behavior across time (cf. Chassan, 1979).
   Experimental/control group designs: Data are collected only a few times, usually once before and once after intervention, and one follow-up. Assumes relatively static state of human behavior, which may be representatively sampled in research.

5. Duration of research
   Single-system designs: Variable, depending on whether time-limited services are provided wherein the duration is fixed, or whether an open-ended service is given; depends on needs of client.
   Experimental/control group designs: Fixed time periods are usually used.

6. Choice of goals of research
   Single-system designs: Goals are usually chosen by the client and agreed to and made operationally clear by the practitioner.
   Experimental/control group designs: Goals are often chosen by the researcher, occasionally in consultation with the funding agency, practitioner, or community representatives.

7. Choice of research design and its review by others for ethical suitability
   Single-system designs: The practitioner usually selects the particular design and has no professional review except by the practice supervisor, if that.
   Experimental/control group designs: The researcher chooses the design, but usually has to submit the design and instrumentation to peer review (e.g., Human Subjects Review Committee).

8. Feedback
   Single-system designs: Feedback is a vital ingredient of single-system designs; it is immediately forthcoming as progress is monitored, and the service program may be modified accordingly. This permits the study of process as well as outcome. Systematic recording opportunities are present.
   Experimental/control group designs: There is almost no feedback in the group design until the entire project is completed, lest such information influence events under study. Little information is given about process in studies of outcome.

9. Changes in research design
   Single-system designs: Changes are permitted. Any set of interventions can be described as a "design," but some designs are logically stronger than others. Flexibility of design is a major characteristic of single-system designs.
   Experimental/control group designs: Changes in research design are not permitted. Fixed methods are used as exactly as possible across subjects. Standardized instruments and trained interviewers are often employed.

10. Use of theory as guide to practice
   Single-system designs: Variable. In ideal form, some clear conceptual rationale is used to give meaning to the set of operational procedures.
   Experimental/control group designs: Variable. In ideal form, some clear hypotheses are derived from a theory that gives direction to the entire project.

11. Use of comparison groups in arriving at the evaluation
   Single-system designs: The relatively stable baseline period would have continued, it is assumed, had not the intervention been made. Therefore, the single-system design uses its "own control" by comparing outcomes from the intervention period with the preintervention baseline.
   Experimental/control group designs: Ideally, a control or contrast group is randomly selected from a common population and randomly assigned on the assumption that it will represent what likely would have occurred to the experimental group had the intervention not been made.

12. Reliability
   Single-system designs: Usually achieved through having a second observer make ratings where possible. Internal consistency used with standardized measures, along with conventional methods of measuring reliability where necessary.
   Experimental/control group designs: All the basic methods of measuring reliability can be used: reliability of tests (test-retest, alternative forms, split-half) and interobserver agreement.

13. Validity
   Single-system designs: The closeness of the measures, especially direct measures, to the ultimate outcome criteria increases the opportunity for validity, while various pressures toward distortion in reporting decrease that opportunity.
   Experimental/control group designs: All basic methods of measuring validity can be used: face validity, content validity, criterion-related validity, and construct validity.

14. Utility of findings for intervention
   Single-system designs: Direct and immediate. May include involvement of client as well as practitioner in collecting and interpreting data and modifying intervention.
   Experimental/control group designs: Indirect. Probably will not affect clients of the present project, but may be useful for the class of subjects (or problems) involved.

15. Targets of intervention and measurement
   Single-system designs: Ideally, should be important life events, but single-system designs are susceptible to trivialization in choice of targets. Emphasis is on knowledge-for-immediate-use, but some efforts concern knowledge-building issues.
   Experimental/control group designs: Variable, depending on the purpose of the study. Can include important life events or targets of theoretical interest (and low immediate-practice utility). Emphasis is more likely to be on knowledge for knowledge-building, but with some efforts on knowledge-for-use.

16. Kinds of data obtained
   Single-system designs: (a) Descriptive data on the system in question, providing norms for that system. (b) Change for that individual system. (c) If the design permits, inferences of causal or functional relationships between independent and dependent variables, but susceptible to many alternative explanations for given outcomes.
   Experimental/control group designs: (a) Descriptive data, providing normative information. (b) Scores, grouped or averaged, thus masking individual patterns of change (Chassan, 1979). (c) Experimental designs provide strong logical bases for inference of causality and generalizability (Shadish, Cook, & Campbell, 2002).

17. Costs
   Single-system designs: Relatively low. A small amount of time and energy must be expended in designing the evaluation and carrying it out, but this may be routinized as part of the empirically guided practice itself. Essentially a do-it-yourself operation.
   Experimental/control group designs: Relatively high. Most group designs require research specialists separate from the practitioners involved, as well as expenses for data collection, analysis, and report writing. Computers typically are used.

18. Role of values in the research
   Single-system designs: Clients' values are incorporated in the choice of targets and goal-setting procedures. Practitioners remain "agents of society" in accepting or modifying these goals and in helping client/systems to attain them.
   Experimental/control group designs: Researchers or funders generally control the values expressed in the research—the goals, instruments, designs, and interpretations.

19. Limitations
   Single-system designs: Statistical and logical considerations and limitations (such as generalizability of results) are not yet well-explicated; indeed, there are many differences of opinion among specialists in the field. There is an increasingly large body of exemplars that can be modeled available to practitioners from all theoretical orientations. Ethical issues abound (e.g., use of baselines vs. immediate practice).
   Experimental/control group designs: Problems remain concerning practitioners reading, understanding, and using scientific information stemming from group designs. Ethical issues (e.g., withholding treatment from controls) are still not resolved among practitioners. Resistance to large-scale research by minorities is increasing.

20. Prospects
   Single-system designs: Optimistic. The literature on single-system designs is expanding rapidly. Freedom from major funding needs and the relative simplicity of the task—and the teaching of the task—may stimulate more practice research and evaluation with single-system designs. It is now potentially possible for every practitioner to evaluate every case on every occasion. All of this is the heart of evaluation-informed practice.
   Experimental/control group designs: Optimistic. Greater care in selecting populations and problems seems likely. Funding of large-scale research suffers in times of economic scarcity. New research journals should stimulate communication of current research. All of this is the heart of evidence-based practice.
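
The core contrast summarized in Table 1 (aggregated group comparisons versus repeated measures on one client/system) can be made concrete with a small sketch. All numbers below are hypothetical and are used only to show the difference in what each approach looks at.

# A small illustrative sketch (hypothetical numbers): classical group designs compare
# aggregated scores across many clients, while a single-system design follows one
# client/system's repeated measures over time.

experimental_group = [12, 9, 14, 10, 11, 8]    # posttest scores, clients who received the intervention
control_group = [15, 13, 16, 14, 12, 15]       # posttest scores, clients who did not

exp_mean = sum(experimental_group) / len(experimental_group)
ctl_mean = sum(control_group) / len(control_group)
print(f"Experimental group mean: {exp_mean:.1f}")
print(f"Control group mean:      {ctl_mean:.1f}")

one_client_over_time = [7, 7, 8, 6, 7, 5, 4, 4, 3, 3]  # baseline weeks, then intervention weeks
print("One client's weekly scores (baseline, then intervention):", one_client_over_time)

# The group comparison speaks to the average effect across people; the single-system
# series shows whether, and when, this particular client is changing.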

[Figure 1. Comparison of the basic ingredients of the single-system design and the classical experimental/control group design. Single-system design: one target y (or more, z . . .) is selected for service and is measured repeatedly before intervention (the baseline); the measure indicates desired (+) and/or undesired (–) events or behaviors, and the intersection of a scale unit (y) and a time unit on the time line indicates how much of the target occurred at that time. Target y is measured and thus defined by some objective means, such as the number of desired events occurring each day, and a graph or chart is then generated; sometimes follow-ups are used to study whether desired behaviors are maintained after service ends. Classical design: a random selection is made for a sample, from which subsamples are randomly assigned to an experimental group (Et1), which receives a special treatment x, or a control group (Ct1), which receives either the conventional treatment or none at all (a placebo may be used); at time 1, both E and C are measured on many variables (before intervention), and sometimes, at time 3, a follow-up evaluation is obtained.]