
USING KNOWLEDGE AND EVIDENCE IN HEALTH CARE: MULTIDISCIPLINARY PERSPECTIVES Edited by Louise Lemieux-Charles and François Champagne

This collection of original essays seeks to broaden our understanding of the link between the structure and nature of knowledge and evidence and their role in health care decision-making. The basic premise underlying current discussion on evidence-based decision-making, notably as applied at the clinical level, is that the use of scientific evidence and knowledge should lead to higher-quality decisions, to the implementation of higher-quality actions, and, consequently, to better outcomes. The introduction to the volume presents a conceptual framework that illustrates the factors critical to analysing and optimizing the use of knowledge and evidence in health care decision-making. The following essays, by distinguished scholars from a variety of disciplines, discuss the dominant paradigms and understanding of knowledge and evidence through different perspectives emanating from the social sciences. To date the literature has focused primarily on the clinical levels, while ignoring the challenges of incorporating evidence in decisions at the management and policy levels. This work fills that gap by integrating social science views on the use of knowledge and evidence in health care and by exploring some of the challenges and limits of the use of evidence in different health care contexts.

LOUISE LEMIEUX-CHARLES is an associate professor and chair of the Department of Health Policy, Management, and Evaluation at the University of Toronto.

FRANÇOIS CHAMPAGNE is a professor in the Department of Health Administration at l'Université de Montréal.


Using Knowledge and Evidence in Health Care: Multidisciplinary Perspectives

Edited by Louise Lemieux-Charles and François Champagne

UNIVERSITY OF TORONTO PRESS Toronto—Buffalo—London

www.utppublishing.com

© University of Toronto Press Incorporated 2004
Toronto Buffalo London
Printed in Canada

Reprinted in paperback 2008

ISBN 978-0-8020-8932-8 (cloth)
ISBN 978-0-8020-9636-4 (paper)

Printed on acid-free paper

Library and Archives Canada Cataloguing in Publication

Using knowledge and evidence in health care : multidisciplinary perspectives / edited by Louise Lemieux-Charles and François Champagne.
Includes bibliographical references and index.
ISBN-13: 978-0-8020-8932-8 (bound)  ISBN-10: 0-8020-8932-1 (bound)
ISBN-13: 978-0-8020-9636-4 (pbk.)
1. Evidence-based medicine.  2. Medical care – Decision making.
I. Lemieux-Charles, Louise  II. Champagne, François
R723.7.U85 2004   362.1   C2004-902722-0

University of Toronto Press acknowledges the financial assistance to its publishing program of the Canada Council for the Arts and the Ontario Arts Council. University of Toronto Press acknowledges the financial support for its publishing activities of the Government of Canada through the Book Publishing Industry Development Program (BPIDP).

Contents

Acknowledgments

Introduction: Towards a Broader Understanding of the Use of Knowledge and Evidence in Health Care
FRANÇOIS CHAMPAGNE, LOUISE LEMIEUX-CHARLES, AND WENDY MCGUIRE

1 A Knowledge Utilization Perspective on Fine-Tuning Dissemination and Contextualizing Knowledge
JEAN-LOUIS DENIS, PASCALE LEHOUX, AND FRANÇOIS CHAMPAGNE

2 A Sociological Perspective on the Transfer and Utilization of Social Scientific Knowledge for Policy-Making
HARLEY D. DICKINSON

3 A Political Science Perspective on Evidence-Based Decision-Making
JOHN N. LAVIS

4 An Organizational Science Perspective on Information, Knowledge, Evidence, and Organizational Decision-Making
G. ROSS BAKER, LIANE GINSBURG, AND ANN LANGLEY

5 An Innovation Diffusion Perspective on Knowledge and Evidence in Health Care
LOUISE LEMIEUX-CHARLES AND JAN BARNSLEY

6 A Program Evaluation Perspective on Processes, Practices, and Decision-Makers
FRANÇOIS CHAMPAGNE, ANDRÉ-PIERRE CONTANDRIOPOULOS, AND ANAÎS TANON

7 A Cognitive Science Perspective on Evidence-Based Decision-Making in Medicine
LAMBERT FARAND AND JOSE AROCHA

8 An Informatics Perspective on Decision Support and the Process of Decision-Making in Health Care
ANDREW GRANT, ANDRE KUSHNIRUK, ALAIN VILLENEUVE, NICOLE BOLDUC, AND ANDRIY MOSHYK

9 An Evidence-Based Medicine Perspective on the Origins, Objectives, Limitations, and Future Developments of the Movement
R. BRIAN HAYNES

10 A Nursing and Allied Health Sciences Perspective on Knowledge Utilization
CAROLE A. ESTABROOKS, SHANNON SCOTT-FINDLAY, AND CONNIE WINTHER

Postscript: Understanding Evidence-Based Decision-Making – or, Why Keyboards Are Irrational
JONATHAN LOMAS

Contributors

Index

Acknowledgments

Using Knowledge and Evidence in Health Care arose out of a collaboration of talented researchers, health care policy-makers, practitioners, and industry partners. This enterprise was supported by HEALNet (Health Evidence Application and Linkage Network), a network of centres of excellence funded by the government of Canada that focuses on utilizing the best evidence in decision-making at all levels within the health system. Members of the network represent more than twenty disciplines, based at university sites from coast to coast. Over a seven-year period, investigators from clinical, organizational, informational, and political backgrounds collaborated on a number of research projects from which emerged an increased understanding of the link between the structure and nature of evidence in different settings and its use in decision-making in health care. We would like to thank the network leadership within HEALNet – George Browman, Vivek Goel, and Diana Royce – and members of the Management Team – Steven Lewis, Terry Sullivan, Brian Haynes, and Denise Kouri – as well as our contributors, who actively supported the dissemination of these perspectives. Morgan Holmes was critical in coordinating the project and ensuring that the book became a reality. His efficiency and sense of optimism throughout the process have been much appreciated.



Introduction: Towards a Broader Understanding of the Use of Knowledge and Evidence in Health Care FRANÇOIS CHAMPAGNE, LOUISE LEMIEUX-CHARLES, AND WENDY MCGUIRE

In Using Knowledge and Evidence in Health Care: Multidisciplinary Perspectives we present a wide-ranging discussion of one of the most influential developments in contemporary health care research, policy-making, and delivery. Evidence-based decision-making (EBDM) in health care and, in particular, the term 'evidence-based medicine' and its concepts originated in the early 1990s with the work of clinical epidemiologists at McMaster University in Hamilton, Ontario (Evidence-Based Medicine Working Group, 1992). In their discussions, however, there was often little acknowledgment that studies of the uptake of research-based information had also been of interest to many other disciplines for a much longer period of time. The application of organizational, cognitive, sociological, political, and management theories and concepts to the use of evidence in health care had been occurring in isolated disciplinary pockets as the evidence-based medicine movement came into being. In this volume we gather these multiple perspectives under a single cover. It was made possible by a multi-year collaboration of the authors and many others involved in a national network of researchers exploring the use of research-based knowledge and evidence in health care at multiple levels (HEALNet, 2002).

While rapid acceptance of the practice of evidence-based medicine has been premised on the positive relationship between the use of clinical evidence and improved clinical health outcomes, a broader approach to evidence-based health care considers the use of clinical and non-clinical evidence for improving health care at multiple levels, including health service delivery, health care management, health policy, and individual and population health. This collection fills a major gap in the literature on EBDM. To date, the literature has failed to draw on the large body of knowledge generated from the social sciences, while the social science literature has not been systematically examined and synthesized for its applicability to the field of health care.

The contributors to this book describe existing conceptual frameworks and models of the use of knowledge and evidence in health care and other settings; examine the assumptions that underlie theories of knowledge and its utilization; integrate multidisciplinary conceptions of the use of knowledge and evidence in health care; and explore some of the challenges to and limits of the use of knowledge and evidence in different health care practice contexts. In the following sections we review the EBDM and social sciences literatures, describe how the essays in this book apply different models of utilization within health care, and examine the contributions of different disciplines to our understanding of how the use of knowledge and evidence in health care can be improved. We conclude our Introduction with a new framework for studying knowledge utilization in health care.

Evidence-Based Decision-Making Literature

EBDM is based on the assumptions that evidence is produced by researchers in scientific laboratories (or other controlled settings) and then distributed to clinicians, who evaluate it with rigorous criteria – including a research project's methodological strengths, level of effectiveness demonstrated, and utility for particular clinical problems – and apply it appropriately in clinical practice (Sackett et al., 1996). Much of the literature on EBDM falls into the following four categories: (1) descriptions of how evidence-based medicine differs from traditional medicine and why it should or should not be widely adopted by clinicians; (2) strategies for assessing the quality of evidence (e.g., hierarchies of evidence, systematic reviews, basic versus applied science; see chapter 9); (3) strategies for improving the use of evidence by clinicians (e.g., training, decision support, and informatics; see chapters 7 and 8); and (4) issues in applying clinical evidence to individual patients who differ from the average study patient (e.g., clinical judgment, patient choice, and values).

Since EBDM was first introduced as a new paradigm for the practice of medicine, it has been subject to a variety of internal debates and criticisms, such as the following: EBDM is based on flawed assumptions that clinicians are rational actors; EBDM overshadows the importance of clinical expertise and judgment; clinicians do not have time to seek out, assess, and use evidence at the bedside; there is no evidence that practitioners of EBDM have better patient outcomes than those who do not practise it; and criteria for assessing quality of evidence are widely debated (Djulbegovic, Morris, & Lyman, 1999; Gupta, 2003; Miles et al., 2003; Norman, 2003). In addition, very little attention has been paid within EBDM to the social interactions, organizational context, or complex cognitive reasoning processes that affect the interpretation and use of evidence, while medicine has rarely examined non-medical literatures to understand these processes.

In his book Evidence-Based Healthcare, Muir Gray (1997) attempts to move beyond the study of the use of evidence in the clinician-patient relationship by investigating how evidence can be used more broadly in the health care system by managers and policy-makers. Gray confines his investigation to the medical literature, however, even when examining the policy-making processes, organizational characteristics and processes, and management decision-making processes that affect knowledge utilization. Some explorations of these subjects do exist in the EBDM literature. For example, Jonathan Lomas (1995; et al., 1991) has examined the role of opinion leaders on utilization and the need for increased policy-maker and clinician-researcher collaboration. More recently, Lomas (2003) has drawn on the political science and information science literatures to explore the impact of the organizational context on utilization.

Knowledge Utilization in the Social Sciences

The social sciences literature has much to offer the debates surrounding both the role and the practice of EBDM in health care. Researchers suggest alternative ways of conceptualizing utilization and look at individual behavioural processes as they are embedded in social and organizational environments. Surprisingly, there have been few efforts to synthesize the social sciences literature since the emergence of knowledge utilization as a rapidly growing multidisciplinary field over the past thirty-five years. Studies are scattered in a wide variety of working papers, reports, books, and journals (especially Knowledge: Creation, Diffusion, Utilization; Knowledge in Society; and Knowledge and Policy: The International Journal of Knowledge Transfer).

In a synthesis of empirical studies of research utilization in the social sciences that, although conducted more than twenty years ago, remains of great value, Beyer and Trice (1982) reviewed articles related to improving the utilization of social sciences research in policy arenas, education, business, social services, and health. They developed a conceptual model to assess empirical studies and to provide a more systematic approach to the observation and study of utilization within organizations. Their model proposes that specific components of individual behaviour correspond to specific organizational and utilization (adoption and implementation) processes. Using this model, Beyer and Trice organized their findings according to four themes: (1) information processing capacities (cognition, sensing, searching, and diffusion of research); (2) social and affective relationships that influence interpretation and use (receptivity and commitment to research); (3) capacity to assess and select research in decision-making processes; and (4) actions taken to put research into practice (two phases of use: adoption and implementation).

Beyer and Trice (1982) found that information processing was most influenced by the presence of effective linkages between researchers and users, the timeliness with which research was made available in non-research environments, and the ease with which users could understand the research. Receptivity and commitment to research were influenced by the relationships between users and producers of knowledge and the compatibility of research findings with pre-existing beliefs and attitudes. The authors found there was no consensus across studies on how research quality should be assessed or whether quality even affected use. However, research that was congruent with the experience and 'ordinary knowledge' of users and that studied variables considered to be within the control of decision-makers was more likely to be considered in decision-making. Use was usually defined as adoption, and few studies actually observed whether or how research-based decisions were implemented or the quality of outcomes resulting from research-informed decisions.

Based on their synthesis of the literature, Beyer and Trice (1982) recommended that researchers pay attention to linkage mechanisms; establish contacts with users; examine variables that are seen as relevant and within decision-makers' control; examine implementation processes as well as adoption; differentiate between types of use (e.g., instrumental, conceptual, or symbolic); and examine patterns and extent of use (e.g., partial or full). They also suggested that users be trained in the use of research evidence. Some researchers took up these challenges; for example, Huberman (1994), who studied the impact of sustained interactivity between producers and users of research on utilization and developed a Dissemination Effort Model that emphasizes the relationships between researcher and user contexts, linkage mechanisms, dissemination efforts, and use.

In a later review of literature from basic and applied social sciences, Dunn, Holzner, and Zaltman (1990) identified four key themes: (1) knowledge is subjectively interpreted by individuals and collectivities; (2) knowledge is continually being produced and is subject to continual comparison and assessment for its quality or truth-value; (3) the production, transfer, and utilization of knowledge are social processes; and (4) knowledge production, transfer, and utilization are interdependent processes with interdependent effects. The first three themes are consistent with those identified by Beyer and Trice (1982). Yet as researchers recognized the need for increased researcher and user interactions and began to examine these interactions, a new need emerged: researchers were required to study interactions between the processes of knowledge production, dissemination, and utilization. Dunn et al. found, however, that most studies continued to examine the processes of utilization in isolation from production and dissemination.

In addition, Dunn, Holzner, and Zaltman (1990) learned that the field continued to be troubled by problems defining use and an overemphasis on instrumental, rather than conceptual or symbolic, forms of use. Rich (1997) attributed these problems to a rationalistic bias in utilization research. He concluded that most knowledge utilization research employs an input-output model and is stymied by the near impossibility of being able to trace discrete outcomes from the use of specific pieces of information. Instead, Rich proposed a broader model that requires use to be reconceptualized as a series of events that occur under various conditions and circumstances. As such, attention must be paid to the different types of information being used, the conditions of their use, the kinds of users, and the different purposes to which they are put. Rich provided suggestions for how to distinguish between types of use and levels of utilization and non-utilization.

Applying Knowledge Utilization Models to Health Care

The model of knowledge utilization that underlies EBDM is rational, knowledge driven, and oriented towards solving instrumental problems. As revealed in the opening three chapters of our volume, this is only one of many potential models and may not provide the most complete picture of how evidence is used, for what purpose it is used, and how use varies according to different health care contexts. Jean-Louis Denis, Pascale Lehoux, and François Champagne (chapter 1), Harley Dickinson (chapter 2), and John Lavis (chapter 3) examine various social sciences models of utilization through which the use of knowledge and evidence in different health care contexts can be understood.

Denis et al. describe the assumptions underlying five traditional models of knowledge utilization – knowledge-driven, problem-solving, enlightenment, strategic, and deliberative – that draw on and integrate perspectives from a number of different disciplines. They note that each of these models is based on a specific concept of knowledge and on a more or less implicit view about the nature and role of the relationship between science and practice. The authors argue that the relationship between science and society has undergone a shift, resulting in increasing pressures in health care for the production of knowledge that has immediate utility. This is achieved through intense researcher/practitioner collaboration, which various health-funding organizations now require to be built into the research process. In this 'Mode II' society, knowledge production is socially driven and knowledge is seen as an asset for society's development of competitive advantage. In their contribution to our volume, Denis et al. consider the interdependent relationship between knowledge production and utilization as recommended by Dunn, Holzner, and Zaltman (1990).

In chapter 2, Dickinson charts the history of the application of scientific knowledge to social problems in the twentieth century and presents three sociological theories of knowledge utilization – technocratic, decisionistic, and pragmatic – that are interactive in nature and also address the relationship between science and politics. Dickinson points out some of the failings of the technocratic model, which equates human progress with scientific progress and has led to the medicalization of social problems and the use of medical knowledge for social control. He argues that it is not enough to apply rational knowledge to solve human problems; we must also find a rational way to make value choices. Asserting that technocratic and decisionistic models possess incomplete understandings of knowledge and how it is generated and applied, Dickinson contends that new approaches must include ethics as a measure of the validity of knowledge claims. A discourse ethics in health care, Dickinson concludes, requires broad participation of the public, professionals, and policy-makers.

In chapter 3, Lavis provides a political science perspective for understanding how research evidence, as a subset of ideas, is interpreted and used in the explicitly value-laden arena of political policy-making. Lavis examines the relationship between mode of utilization and the political decision-making context within which knowledge is disseminated and used to influence policy change. Presenting a framework for understanding how knowledge is used within different political models – state centred, coalition centred, dialogue based, or strategy based – Lavis considers what type of policy learning occurs, including who learns, the object of learning, and the resultant policy changes. Finally, Lavis relates the type of utilization (instrumental, conceptual, or symbolic) to the political model to explain how determinants-of-health research knowledge was used to influence two health policy changes.

In some branches of the social sciences literature, such as management sciences, innovation diffusion, and evaluation utilization, utilization is addressed at multiple levels. In chapter 5, Louise Lemieux-Charles and Jan Barnsley review the literature on the diffusion of innovations to determine whether diffusion theory can explain the successful or unsuccessful adoption of evidence-based innovations in health care systems. While early researchers focused on individuals as adopters and examined variables such as adopter attributes and innovation attributes, more recent studies have explored the relationship between innovation attributes, organizational attributes, and adopter systems. Whether addressing adoption at the level of the individual or of the organization, diffusion theorists examine the social environment within which innovations diffuse and the social interactions between individuals and organizations that facilitate or impede diffusion. A significant body of innovation diffusion literature specific to health care has accumulated since the 1950s, yet only very recently has it been used to understand the dissemination and use of evidence in health care. Lemieux-Charles and Barnsley recommend that dissemination efforts in health care should consider the relationship between innovation attributes, adopter attributes, and the larger systems within which adoption decisions are made.

Like innovation diffusion, evaluation utilization has applications in health care at clinical, organizational, and policy-making levels. François Champagne, André-Pierre Contandriopoulos, and Anaïs Tanon (chapter 6) suggest that the lack of a coherent theory of knowledge utilization has limited its ability to explain utilization. The authors provide a theoretically grounded investigation of evaluation and its role in health care and related settings. They argue that utilization of evaluation research depends on the type of evaluation employed, and therefore they propose a typology of evaluation models based on three paradigms of knowledge: positivism, neo-positivism, and constructivism. They conclude that, for evaluation research to be utilized, there must be coherence between the nature of an intervention being evaluated and the type of evaluation chosen, as well as a shared understanding between evaluators and decision-makers on the premises on which an evaluation is based.

In chapter 4, Ross Baker, Liane Ginsburg, and Ann Langley review the organizational and management literature and compare rational and naturalistic models of managerial decision-making. While the history of management thought reveals a continuing search for more rational approaches to organizational decision-making consistent with the practice of EBDM, the rational model cannot explain actual decision-making processes. The authors found that rational decision-making does not account for political or symbolic uses of information; that information is not useful in resolving value conflicts; and that, contrary to what is known about superior rational decision-making, experts do not need to compare options in order to select the best one. According to naturalistic decision-making (NDM) researchers, experts can mobilize modes of thinking that surpass rational procedures in their effectiveness. Baker et al. suggest that NDM among top management teams in health care should be explored.

At the clinical level, there is a recent trend towards thinking that EBDM is not a radical new form of medical practice, but a technique for clinicians to manage the explosion of medical research available for use in making treatment decisions and to augment clinical judgment (Haynes, 2002). In chapter 9, Brian Haynes reviews evidence-based medicine as a set of procedures and resources designed to aid clinicians in finding and using the few studies produced each year that have practical applications. Systems for pre-grading evidence rescue the clinician from the hopeless task of reading the original medical literature. However, the process of grading evidence is not as objective or value neutral as the EBDM model would suggest. Haynes reviews some of the practical and philosophical issues associated with evidence grading, such as the differences between basic and applied research, the tension between population-based epidemiology and individual-based clinical practice, and the ethics of distributing potentially very costly new evidence-based interventions.

Assuming that, in some cases, high-quality evidence is readily available to the clinician and resources are readily available for its deployment, cognitive science can offer insights into the mental processes that influence its interpretation and application by clinicians in real-world contexts. While cognition has been identified as a factor in the interpretation of evidence, the knowledge utilization literature has not been focused on this area. In chapter 7, Lambert Farand and Jose Arocha draw on cognitive science to understand the differences between expert and non-expert reasoning processes and how they affect the use (and misuse) of evidence by clinicians. The availability of high-quality evidence does not inevitably lead to its effective use in complex, time-constrained, dynamic decision-making environments. The authors provide specific recommendations for changes in clinician training that are intended to strengthen clinician reasoning processes and so increase the likelihood that increased utilization will, in fact, lead to better clinical outcomes. They also suggest that computerized decision-support systems have the potential to facilitate the management and use of evidence by expert clinicians and hold significant promise as training tools when employed in conjunction with feedback and monitoring by non-experts.

In chapter 8, Andrew Grant, Andre Kushniruk, Alain Villeneuve, Nicole Bolduc, and Andriy Moshyk apply cognitive science to the study of interactions between humans and computerized decision support in clinical decision-making environments to understand how decision-support systems can be made more effective. They explore the assumptions underlying decision-support models and argue that decision-making undertaken by individuals is isolated neither from the contexts in which it takes place nor from the social interactions that occur within those particular contexts. While technology offers some promise for bringing evidence to the point of care, decision-support systems are still in their infancy, clinicians do not always agree on the best evidence, and evidence is not always available to address specific problems or patients.

Carole Estabrooks, Shannon Scott-Findlay, and Connie Winther (chapter 10) make a valuable contribution to the EBDM literature by synthesizing the current state of knowledge in nursing and the allied health sciences. The authors trace the history of the research utilization field in nursing, noting the increased output, the use of broader methodological approaches, and the enhanced sophistication of study design and methods of studies produced as a result of the evidence-based medicine movement in the 1990s. As is true in the medical literature, the focus of nursing and the allied health sciences has been on research use by individual health professionals in the clinical setting (rather than in organizational or policy-making contexts). The authors found that few conclusions could be drawn about the impact of individual attributes on research use in the clinical setting and that the attributes most likely to influence use were not amenable to change (e.g., age, education, and years of experience). Organizational variables were found to be more influential, and the authors recommend further study in this area.

Limits to the Use of Knowledge and Evidence in Health Care

What overall themes can be drawn from this collection of multidisciplinary essays that apply social science knowledge to the use of evidence in multiple health care settings? Four limits to the rational use of evidence in health care can be identified.

First, rationality cannot replace judgment and politics in managerial and policy decision-making. Even when evidence is used in the policy arena, it is not necessarily used instrumentally to make better policies. Evidence may be used symbolically to legitimate political positions or strategically to bolster support for or neutralize opposition to decisions that have already been made. In addition, information has marginal utility when decisions involve competing interests or values. In these instances, negotiation, dialogue, and debate may be more valuable processes than seeking and applying scientific evidence.

Second, technical rationality should not be pursued at the expense of social values. Most knowledge utilization studies operate on the assumption that increased utilization is desirable. As Dickinson suggests, however, as a society we should seek not to subordinate our values to scientific knowledge, but to use science to rationally pursue values that are democratically agreed upon. For example, EBDM can inform but cannot provide definitive answers to questions of how to ensure the equitable distribution of costly new evidence-based medical interventions.

Third, even at the clinical level, rationality may be inferior to some alternative decision-making processes. Studies of NDM and of expert versus non-expert cognitive reasoning are beginning to shed light on processes of 'intuitive synthesis' that may surpass rationality in achieving positive outcomes. Advocates of EBDM have not been able to demonstrate that EBDM, as opposed to other forms of decision-making, leads to better outcomes.

Finally, higher utilization may be gained at the cost of methodological rigour. While knowledge that is produced through researcher/practitioner or researcher/decision-maker collaboration may be more likely to be used, some would question whether it still can be called 'scientific knowledge.' Denis et al. caution that bounded disciplinary research should not be completely replaced by practical, problem-solving research. Societal demands for the latter may exert pressure on researchers to abandon or restrict the study of more complex phenomena that do not yield results that may be immediately used or commodified, as is often the case in health care organizational or policy research.

Do these limits to rationality mean that EBDM itself should be abandoned as an unproven, if not futile, effort to improve health outcomes? We believe they do not. It is clear, however, that EBDM needs to be opened up to examination, placed in context, and understood as one part of a complex process of knowledge production and utilization, which has varying effects depending on the types of knowledge used, the types of user employing it, the context within which use occurs, and the types of purpose it may serve.

An Expanded Conceptualization of the Use of Evidence and Knowledge in Health Care

Taken as a whole, the essays in this volume call for the adoption of a broad, expanded perspective on the use of evidence in health care. Figure 0.1 presents a conceptual framework that illustrates the factors identified in the following chapters as critical when one analyses and seeks to optimize the use of evidence in decision-making. The basic premises underlying much of the discussion on EBDM, notably as applied at the clinical level, are that knowledge derived from systematic investigation conducted in such a way as to be recognized as rigorous by a community of researchers should lead to higher-quality decisions, to the implementation of higher-quality actions, and consequently to better outcomes. Our framework underscores the fact that these premises need to be more rigorously analysed, particularly when one seeks to understand EBDM at the policy and managerial levels of health care. Deeper analysis must take the following six points into account:

1. Complex decisions are, and always will be, only partly influenced (if at all) by scientific evidence and knowledge. Bounded rationality and the prevalence of dynamic phenomena and uncertainty in large, complex social systems inevitably lead to the crumbling of rationality assumptions (Simon, 2000).

2. Complex decisions in complex social systems are, and always will be, heavily influenced by social and political stakes, dynamics, and processes.

3. Scientific knowledge and evidence influence decisions in many ways other than instrumentally. Conceptual, deliberative, political, and tactical uses of knowledge and evidence should be explicitly acknowledged as important channels leading to the optimization of the use of knowledge and evidence in decision-making (chapters 1, 2, 3, and 10).

4. Health care policy-makers and managers are frequently involved in decision-making situations where goals and means are unclear (chapter 4). In such situations of 'anomie,' they tend to value and rely on practical knowledge (derived from their personal experience), intuitive knowledge and wisdom, and expert reasoning. Although further research is clearly needed, it is possible that these NDM processes are needed for high-quality decision-making, particularly in situations of anomie (chapter 4). Cognitive science has already shown the value of expert reasoning at the clinical level (chapters 7 and 8). The study of health care policy-makers and managers as expert reasoners is likewise a promising avenue.

5. All of the above (the instrumental and non-instrumental uses of knowledge and the use of NDM processes) will be influenced by, and their effectiveness will be contingent upon, the characteristics of decisions (e.g., the multiplicity and attention of stakeholders, group dynamics, and the characteristics of the object of decision: costs, stakes, compatibility, complexity, trialability, observability, and uncertainty [chapters 4 and 6]); the characteristics of decision-makers (e.g., their values and beliefs, attitudes and motivation, skills, scientific literacy, and social networks [chapters 5, 6, 7, and 10]); and the characteristics of the organizational and systemic contexts of decision-making (e.g., organizational specialization, formalization, professionalization, task complexity, centralization, culture, and extent of involvement in interorganizational collaboration [chapters 3, 4, 5, 8, and 10]).

6. The characteristics of scientific evidence itself (e.g., its availability, accessibility, validity, communicability, timing, and manipulability) will also determine its usability and influence its use. These characteristics result in large part from the processes of knowledge generation from which evidence arises (chapters 1, 2, and 6). The underlying knowledge production paradigm; the linking, participation, and other mechanisms used to bridge the cultural gap between researchers and decision-makers; as well as the knowledge translation and brokering processes, all seem to be determinant.

[Figure 0.1. A framework for the analysis and optimization of the use of scientific evidence and knowledge in decision-making. The framework relates: processes of knowledge generation (epistemological paradigm, translation and brokering processes); characteristics of scientific evidence (availability, accessibility, validity, timing, communicability, manipulability); characteristics of organizational and systemic context (specialization, formalization, managerial attention, culture, extent of interorganizational collaboration); decision-makers (values, beliefs, attitudes, motivation, skills, scientific literacy); decisions (multiplicity and attention of stakeholders, group processes, characteristics of object); instrumental and non-instrumental (deliberative, political, tactical, conceptual) use of scientific evidence and knowledge; use of naturalistic processes (practical knowledge, intuitive knowledge/wisdom, expert reasoning); quality of decisions; quality of implemented actions; and outcomes.]

***

This book is the result of the work of many researchers who are seeking to better understand the link between knowledge and evidence and their use in decision-making in health care settings. In his postscript to the volume, Jonathan Lomas comments on HEALNet's success in building a house where all scholars from different disciplinary perspectives could find respect and mutual sustenance. Despite their different disciplinary roots, he notes that three interrelated themes are evident among the chapters: (1) the need to embed the individual in a changing organizational and social culture, rather than look for a magic bullet for individual behaviour change; (2) the need to develop methods that encourage EBDM to fit the context in which an individual works or an innovation is being implemented; and (3) the need to view EBDM as a social process rather than a technical endeavour. Lomas envisions a future that holds a more appropriate synthesis of research, a greater use of intermediary structures and roles, and enhanced opportunities for 'cross-learning' between researchers and decision-makers. The latter reinforces the need for dialogue and a belief that process matters. In a similar spirit, we hope this multidisciplinary gathering itself will stimulate fruitful dialogues and provide a significant boost to further explorations of this rich and rewarding field.

REFERENCES

Beyer, J.M., & Trice, H.M. (1982). The utilization process: A conceptual framework and synthesis of empirical findings. Administrative Science Quarterly, 27, 591–622.
Djulbegovic, B., Morris, L., & Lyman, G.H. (1999). Evidentiary challenges to evidence-based medicine. Journal of Evaluation in Clinical Practice, 6(2), 222–109.
Dunn, W., Holzner, B., & Zaltman, G. (1990). Knowledge utilization. In H.J. Walberg & G.D. Haertel (Eds.), International encyclopedia of educational evaluation (pp. 725–733). Oxford: Elsevier Science.
Evidence-Based Medicine Working Group. (1992). Evidence-based medicine: A new approach to teaching the practice of medicine. Journal of the American Medical Association, 268, 2420–2425.
Gray, J.A.M. (1997). Evidence-based healthcare: How to make health policy and management decisions. New York: Churchill Livingstone.
Gupta, M. (2003). A critical appraisal of evidence-based medicine: Some ethical considerations. Journal of Evaluation in Clinical Practice, 9(2), 111–121.
Haynes, R.B. (2002). What kind of evidence is it that evidence-based medicine advocates want health care providers and consumers to pay attention to? BMC Health Services Research, 2(1), 3.
HEALNet. (2002). HEALNet accomplishments and impact: 1995–2002. Retrieved on 31 October 2003 from http://hiru.mcmaster.ca/nce/reports/mainframe.htm.
Huberman, M. (1994). Research utilization: The state of the art. Knowledge and Policy: The International Journal of Knowledge Transfer and Utilization, 7, 13–33.
Lomas, J. (1995). Bridges between health care research evidence and clinical practice. Journal of the American Medical Informatics Association, 2(6), 342–350.
Lomas, J. (2003). Evidence-based practice in steeltown: A good start on needed cultural change. Healthcare Papers, 3(3), 24–28.
Lomas, J., Enkin, M., Anderson, G.M., Hannah, W.J., Vayda, E., & Singer, J. (1991). Opinion leaders vs audit and feedback to implement practice guidelines: Delivery after previous cesarean section. Journal of the American Medical Association, 265(17), 2202–2207.
Miles, A., Grey, J.E., Polychronis, A., Price, N., & Melchiorri, C. (2003). Current thinking in the evidence-based health care debate. Journal of Evaluation in Clinical Practice, 9(2), 95–109.
Norman, G. (2003). The paradox of evidence-based medicine. Commentary on Gupta (2003), A critical appraisal of evidence-based medicine: Some ethical considerations. Journal of Evaluation in Clinical Practice, 9(2), 129–132.
Rich, R. (1997). Measuring knowledge utilization: Processes and outcomes. Knowledge and Policy: The International Journal of Knowledge Transfer and Utilization, 10(3), 11–24.
Sackett, D., Rosenberg, W., Gray, M., Haynes, B., & Richardson, S. (1996). Evidence based medicine: What it is and what it isn't. British Medical Journal, 312(7023), 71–72.
Simon, H. (2000). Public administration in today's world of organizations and markets. The 2000 John Gaus Lecture. Political Science and Politics, 33(4), 749–756.


1 A Knowledge Utilization Perspective on Fine-Tuning Dissemination and Contextualizing Knowledge JEAN-LOUIS DENIS, PASCALE LEHOUX, AND FRANÇOIS CHAMPAGNE

Introduction: Towards a Knowledge-Based Society?

Many analysts of contemporary western societies claim we have moved from an industrial economy to a knowledge-based economy. Knowledge is increasingly seen as an important asset in organizational and social development (Blackler, 1995; Brown & Duguid, 1991; David & Foray, 2002; Empson, 2001; Nahapiet & Ghoshal, 1998). Hence, in a knowledge-based economy or society, knowledge should be produced, disseminated, and applied with greater intensity and at a faster pace than in previous eras. In a knowledge-based economy or society the utilization of knowledge is routinized and institutionalized. One may thus infer that major changes in the production, dissemination, and application of knowledge will accompany the emergence of a knowledge-based economy, and that this shift may take a specific shape in the field of health care.

In this chapter we examine how the call for increased dissemination and uptake of scientific evidence in health care is part of a broader transformation that involves new players, new relationships, and new challenges. Our principal goal is to describe the assumptions embedded in different knowledge utilization models regarding the status of knowledge, the relationships between science and practice, and the determinants of use. An understanding of these things is crucial to defining what may be seen as a paradox: the emergence of a knowledge-based society appears to be linked to a weakening of the practice of science in its more traditional form.


We first describe how the field of knowledge utilization considers the processes by which knowledge should be used in society, especially by practitioners. We then look at recent attempts by Gibbons et al. (1994) and Nowotny et al. (2001) to define broader societal changes through which universities become only one player among several others in the knowledge production arena. Drawing on their typology of modes of knowledge production, we suggest that this approach is useful for defining critical issues around the processes for knowledge utilization in health care and for framing relationships between knowledge producers and knowledge users.

Knowledge Utilization Models

Despite the current emphasis on the need to increase the use of knowledge in society and organizations, one may say that knowledge utilization is, in fact, much more prevalent than it used to be. A quick look at the daily life of contemporary societies and organizations reveals an intense use of technologies, information systems, data, expert analyses, and so on (Tenner, 1996). This is not a truly new phenomenon; since the Second World War, societies and organizations have been steadily exposed to new knowledge and have been engaged more often in innovative practices. What seems to characterize current debates regarding knowledge utilization is a stronger emphasis on the need to intensify the diffusion of knowledge and the development of knowledge-informed 'products.' This emphasis is more striking for researchers because funding bodies, across a wide spectrum of disciplines, now consistently insist on the importance of taking practical concerns into account during knowledge production and dissemination processes (Leydesdorff & Etzkowitz, 2000).

In this section, we will examine different models of knowledge utilization, highlight their assumptions about an increased use of knowledge in society and organizations, and provide examples of how these assumptions translate into, or are embodied in, health care. Five different models can be found in the literature on knowledge utilization (Peltz, 1978; Weiss, 1979; Bulmer, 1982; Denis, Béland, & Champagne, 1996): knowledge-driven, problem-solving, enlightenment, strategic, and interactive or deliberative. Each of these models is based on a specific conception of knowledge and on a more or less implicit view about the nature and role of the relationships between science and practice.


Knowledge-Driven Model

The knowledge-driven model (Weiss, 1979; Bulmer, 1982) corresponds to the traditional view of science. The duties of scientists are to respond to a specific and vocational mission that is the collective production of an incremental body of disinterested knowledge (Weber, 1989). According to this model, knowledge is valuable in and of itself, which means that the value of knowledge will be assessed in relation to an existing stock of knowledge in a given field. Despite the fact that the organization and processes of knowledge production are always embedded in certain social, economic, and cultural contexts, the knowledge-driven model considers that it is better to minimize the interference of any exogenous factors or concerns in the work of the scientific community. For the knowledge-driven model, the confinement of science is good and productive for societies and organizations, and the production and assessment of relevant knowledge should be in the hands of the scientific community. The peer-review system and the collegial form of control should guarantee the appropriateness of knowledge, and knowledge must be assessed according to rational criteria.

If knowledge is scientifically rigorous and legitimate, the diffusion of knowledge may and will occur. Because knowledge is valuable in itself, the use of this knowledge will happen somewhere and at some time in the production chain from pure to applied research. For the knowledge-driven model, what seems most important is to put solid knowledge in the marketplace and to let actors – organizations and decision-makers – grasp the knowledge that seems appropriate for their field of activity. In this model no emphasis is placed on the need to develop specific policies to diffuse knowledge or to intensify its use – laisser-faire seems to be an acceptable policy. If certain deliberate actions are taken to increase knowledge utilization, they may be limited to communication strategies to increase the awareness of non-experts of the value and usefulness of knowledge (Bero & Jadad, 1997). We should also consider that the knowledge-driven model pushes towards certain broad institutional strategies or social policies designed to increase interest in scientific knowledge and its potential applications (Leydesdorff & Etzkowitz, 2000). Efforts to develop a scientific culture or scientific literacy in a given society can be seen as a natural extension of the knowledge-driven model.

If scientific knowledge and the scientific experts possessing this knowledge are valuable in themselves, the main issue in terms of knowledge utilization becomes the promotion of this knowledge outside scientific communities. According to the knowledge-driven model, there is a one-way relationship between knowledge producers and knowledge users: scientists produce knowledge and societies reap the benefits of such a valuable asset. The traditional role of universities in society fits well with this model. Similarly, one must recognize that private companies often rely on their own research and development (R&D) units to generate new knowledge that, in the long term, can lead to profitable applications (Gelijns, 1991).

Problem-Solving Model The problem-solving model (Weiss, 1979; Bulmer, 1982) takes an almost opposite view of the way knowledge should be used in society and organizations. The problem-solving model conceives knowledge utilization as a process through which practitioners formulate requests to scientists or experts in order to solve their day-to-day problems. One may say that the problem-solving model is the ideal companion to the knowledge-driven model by being its ‘equilibrating opposite.’ Whereas the knowledge-driven model insists on the predominant role of scientists in the determination of knowledge, the problem-solving model insists on the role of practitioners in the determination of knowledge. The problem-solving model recognizes the legitimacy of practitioners' demands and expectations and the need for the scientific community to respond to those demands. In one variant of this model, practitioners or decision-makers, when faced with a problem, search for information in existing research. In another variant, research is commissioned to find a solution for a specific problem defined by practitioners. The adoption of the first variant is probably more in line with the way that knowledge is developed in science through cumulative processes (Champagne, 1999). The second variant may serve pedagogical and social purposes in order to reinforce links between practitioners and researchers. If practitioners are more aware of existing knowledge and scientific resources they may be more effective in exerting their demands and in relating to a specific scientific community. Contrary to the knowledge-driven model, the problem-solving model does not conceive of knowledge as being valuable in itself. Knowledge has value only if, by using it, practitioners can solve critical problems in their practices. This model for knowledge utilization is based on the idea that relationships between practitioners and scientists entail a logic of co-optation. These relationships can develop through networks of

22—J.L. Denis, P. Lehoux, and F. Champagne

initiés – practitioners and small circles of scientists, the former being able to define a problem and the latter searching for a solution. Hence, the interactions between scientists and practitioners are defined as contractual relationships. According to this model, relationships between knowledge users and producers are also unilateral. Practitioners ask for scientific input and scientists are obliged to provide solutions. Because this model recognizes the autonomy of the scientific community, it is expected that the instrumental role of scientists will be temporary and that contractual arrangements may provide an efficient device to circumscribe the dominant role of science through practical imperatives. According to the problem-solving model, strategies to increase knowledge utilization should be designed to reinforce practitioners' capability of framing problems and demands, and the ability of scientists to translate their knowledge into local and practical applications. Recent emphasis on developing knowledge management skills in health care organizations may be a natural extension of the problemsolving model (Muir Gray, 1997; Tranmer, 1998; Kovner, Elton, & Billings, 2000; Lomas, 2000; PAHO, 2001). By increasing access to knowledge and reinforcing the ability to select the appropriate corpus of knowledge, it is expected that practitioners will be more in tune with scientific developments, will hold clearer expectations, and, ultimately, will be in a better position to interact with experts. This description of the problem-solving model presents the world of practice and the world of science as two parallel universes that may engage in confined interactions. This model leaves almost intact the vision and rules of the game that structure the scientific world. In an ideal situation, the problem-solving model implies that the pressures on scientists to step into the practical world will be, at least partly, related to their scientific achievements. In a sense, if someone is a scientific virtuoso in a given field, practitioners will seek her advice. Valuable science is seen as being the basis for valuable expert advice. Of course, this is not always the case. Many academics in practical fields are or can be trapped in the role of expert. More specifically, if incentives strongly favour the development of scientists as contractual experts, the shift to increasingly serve practitioners and engage less with the scientific community in the quest for new knowledge represents a risk in the problem-solving model. A natural extension of the problem-solving model towards knowledge utilization is found in the now classic work by Schön (1983) on


the reflective practitioner. Schön is concerned with the processes of developing and using knowledge in the context of applied sciences (meaning sciences devoted mainly to the resolution of practical problems in fields such as psychology and architecture). For Schön, one of the main problems of contemporary societies is their inefficiency in developing knowledge that possesses significant practical and positive consequences for the resolution of modern dilemmas (e.g., urban development, continuous discrepancies in health status among citizens, and environmental pollution). This limitation in contemporary knowledge production is due, according to Schön, to the assumed hierarchical relationship between the realm of science and the realm of practice. Science is cast as the dominating practice, because knowledge derived from experience is seen as less valuable than systematic and formalized knowledge produced by academics. However, Schön, in keeping with his emphasis on applied sciences, argues for the opposite: knowledge is derived predominantly from practice and experience and then is formalized by science produced by the academic community. Clinical sciences illustrate quite well the perspective of the reflective practitioner. In the clinical context, each patient is a reservoir for observations from which the practitioner or clinical scientist generates hypotheses and infers new knowledge (Dodier, 1993). Problems and puzzles emanating from practical contexts are the key source of new knowledge. Schön is in favour of the emergence of reflective practitioners who creatively use lessons from experience to reframe problems in a manageable way. The scientist or expert will not provide solutions to the problems of practice. He/she will provide some resources that practitioners can then mobilize according to their own visions, constraints, and responsibilities. Pedagogical strategies based on learning through problem-solving accord with the development and use of adequate knowledge to sustain adequate practices. If the first version of the problem-solving model we described can be considered a complement to the knowledge-driven model, the model of the reflective practitioner challenges not only the usefulness of science but also the scientific domination of practice.

Enlightenment Model The enlightenment model, like the knowledge-driven model, considers knowledge to be valuable in itself (Weiss, 1979). It shares affinities with a Renaissance view of the world, in which the main benefit of knowl-


edge production and dissemination is the development of our understanding of the world (Rouse, 1991). Knowledge is valuable not in an instrumental way, but rather through an intellectual and cognitive alteration of world views. Knowledge is useful to people who face complex situations. It helps them to reframe problems and elaborate new forms of understanding. Increasing the diversity and circulation of knowledge in society is expected to enrich decisions and actions. Knowledge helps social actors to make sense of their reality. According to this model, the circulation of knowledge is unstructured and informal. Knowledge penetrates society, organizations, and communities in unexpected ways. Relations between science and practice are, and should remain, random, because the enlightenment model of knowledge utilization holds that the benefits of knowledge are self-evident and sees no need to demonstrate empirically the use of knowledge in society or organizations. Knowledge is a cultural asset, not a tool to resolve day-to-day problems. Policies designed to increase cultural literacy in the broadest sense of the term cohere with this conception of the value of knowledge (Stengers, 1993). A more restricted use of the enlightenment model is found in the field of evaluation and policy analysis, which recognizes the legitimacy of a conceptual use of knowledge by society and organizations (see the debate between Patton, 1997, and Weiss, 1988a, b). According to this perspective, one of the main limitations of empirical studies of knowledge utilization has been their focus on measuring the instrumental use of knowledge. For Weiss, knowledge will be used over time by actors. Consequently, knowledge utilization is a learning process through which social actors develop new frameworks or mindsets. On this theory of knowledge utilization, the use of knowledge may be a long process without any clear or immediate output and the main locus of knowledge use will not be the immediate context in which applied research is developed. This is easily understandable if someone holds realistic assumptions about the social and political processes for decision-making in organizations and policy arenas (March, 1999). Patton suggests another view of the conceptual use of knowledge. He argues that, despite the fact that instrumental utilization may be low in a given setting, social actors or practitioners involved in the production of knowledge will always gain something from such involvement. The enlightenment model is, in a sense, very optimistic; knowledge is not used in an instrumental sense but will always benefit some actors in proximal or more distal settings. Concepts, ideas, and models are no-


madic entities (Stengers, 1993); they migrate from setting to setting in a manner that cannot be anticipated. Thus, the enlightenment model does not really challenge traditional scientific practice. On the one hand, it reinforces the idea that confined science is good because it produces some positive input in the development and transformation of world views. On the other, it may imply some transgression of the autonomy of science by assuming that increased interactions between practitioners and researchers in the production of research will trigger mutual learning events. We will return to this point later.

Strategic Model The strategic model (Weiss, 1979; Pelz, 1978) conceives of knowledge as one resource among others to be accumulated, exchanged, or used in political interplays among actors located in diverse organizations. In its more extreme version, knowledge has no specificity and, consequently, it cannot be valuable in itself; it can have added value only in certain social or organizational contexts. Knowledge will diffuse as a consequence of the negotiations and plays by actors. Two views of knowledge utilization can be inferred from this model. The first is a milder, strategic approach in which knowledge utilization is regarded in terms of its fit with a given project or set of objectives as defined by influential social actors. This interpretation corresponds to a stakeholder approach to knowledge utilization (Mitchell, Agle, & Wood, 1997). According to this moderate perspective, knowledge will be used because it becomes a critical resource in implementing a strategy envisioned by social entrepreneurs or leaders. This approach resembles a social planning approach to contemporary problems (Vaillancourt Rosenau, 1994). Oriented research programs and top-down initiatives are instruments that fit well with this strategic perspective. The second, more radical, view suggests that knowledge is a resource that can be manipulated to legitimize particular positions or to gain specific advantages from a given organizational or social situation. This view corresponds to a slightly pessimistic representation of power and politics in organizations and societies. It implies that relationships between scientists and practitioners will develop according to the search for an opportunistic use of knowledge rather than from a rational exercise of priority setting. The covering up or, alternatively, the strategic disclosure of scientific breakthroughs by private companies illustrates this radical view of the strategic model (Jasanoff, 1990).


When the strategic model is examined through the lens of the classic model for strategic analysis in organizational theory (Crozier & Friedberg, 1977), it is clear that the opportunistic manipulation of knowledge may encounter obstacles or some opposition. One may argue that the injection of useful knowledge into strategic interplays may shape actors' negotiations over the issues at stake and contribute to sound policies or decisions. The strategic model does not necessarily imply that knowledge cannot positively influence the development of strategic games, issues, and outcomes. According to this model, a reasonable way to stimulate knowledge utilization is to promote the emergence of leaders who can orchestrate the potential contribution of scientists to societal and organizational development.

Interactive or Deliberative Model The interactive or deliberative model (Weiss, 1979; Huberman, 1989; Bohman, 1996) can be considered a synthesis and an extension of the enlightenment model and the strategic model. The deliberative model is based on the assumption that the co-production of knowledge by practitioners and researchers is key. In order to achieve co-production or co-interpretation of knowledge, this model insists on a high level of cooperation between these two communities. According to the deliberative model, the relationships between practitioners and researchers are symmetrical and non-hierarchical. Because knowledge is a social construction that needs to be validated by social interactions and public deliberations, the deliberative model pushes towards a democratization of knowledge utilization in society and organizations. From this perspective, knowledge is not valuable in itself; it gains value only through its interpretation by potential users. Knowledge represents a significant asset for the development of individuals and societies if it arises from cooperative arrangements and open public debates. To provide the institutional conditions required to sustain the deliberative model, relationships between scientists and practitioners can be formalized or structured to ensure the free circulation of knowledge and the emergence of reciprocal and shared interpretations. This perspective obliges actors to decide and agree on the roles of practitioners and scientists in the implementation of research efforts. Practitioners can have different roles and be involved at different stages of the research process. They may participate in the interpre-


tation of research results or, upstream, in the formulation of research problems. Recently, many authors have argued in favour of a more participatory form of research that seems in line with the deliberative model (Guba & Lincoln, 1989; Vaillancourt Rosenau, 1994). This model shares with the enlightenment model the idea that knowledge is the basis of a more complex and adapted form of understanding of social and organizational world views. It also shares with the strategic model the idea that knowledge is a resource for positioning an actor in a given social field. The deliberative model places these two models (enlightenment and strategic models) within the framework of deliberative democracy. According to this approach, it is possible to structure interactions between individuals and groups in society in order to sustain cooperation and rational argumentation between them. More specifically, a deliberative perspective on knowledge utilization implies the adherence of individuals or groups to certain rules defined a priori for deliberating the meanings and consequences of a specific body of knowledge, the acceptance of a wide variety of participants in the deliberation process, and the obligation to debate and justify positions in public (Hirschman, 1995; Habermas, 1989, 1993).

Comparison of Models Overall, these five models suggest different processes by which knowledge is used in society and organizations (see table 1.1). Except for Schön's (1983) insistence on the primary role of practice in the emergence of knowledge, all these models share a positive valuation of the diffusion or spreading of scientific knowledge in society and organizations. They are not, however, in agreement either on the strategies to be put forward to increase knowledge use or on the status and role of practitioners and scientists in such processes. The knowledge-driven and enlightenment models are more attached to the sovereignty of the scientific community. According to these models, researchers or scientists produce knowledge in a confined world without any external interference, and they have their own rules for regulating the production and quality of knowledge. On this understanding, the quest for science is similar to the ideal type of the ‘savant’ described by Weber (1989): scientists will serve society and organizations better if they pursue their calling or mission

according to deontological conduct for research as defined by scientists themselves.

Table 1.1 Characteristics of the Five Knowledge Utilization Models

Knowledge-driven
Conception of knowledge: Disinterested knowledge is produced by devoted scientists.
Interactions between science and practice: No need for interactions; society is seen as a passive receptacle.
Determinants of use: Useful applications will derive from good science; use is the responsibility of practitioners.

Problem-solving
Conception of knowledge: Useful knowledge is produced by giving constraints to scientists. For Schön, knowledge emerges from practice.
Interactions between science and practice: Practitioners should tell scientists what the problems are. For Schön, reflexive practitioners are able to solve their own problems.
Determinants of use: Practitioners' needs should be fulfilled by scientists; use is more or less straightforward. For Schön, reflexivity and creative problem definition determine use.

Enlightenment
Conception of knowledge: Knowledge is made of ideas that can flow from one area to another, and through time.
Interactions between science and practice: Cross-fertilization of scientists' and practitioners' world views should be sought at different phases in knowledge production processes.
Determinants of use: Knowledge penetrates society through an unpredictable process of interactions; use is thus conceptual.

Strategic
Conception of knowledge: Knowledge is a strategic resource that can be accumulated, exchanged, or applied in specific situations.
Interactions between science and practice: Framing of strategic knowledge by stakeholders should drive the work of scientists.
Determinants of use: Knowledge may serve several ends; use derives from negotiations between practitioners and scientists orchestrated by a rational and strategic third party.

Deliberative
Conception of knowledge: Knowledge is co-produced and co-interpreted by scientists and practitioners.
Interactions between science and practice: Increased interactions and cooperation between scientists and practitioners should be sought throughout the knowledge production processes.
Determinants of use: Knowledge is applied when it is openly debated; use is a negotiated outcome.

We suggest that, according to the knowledge-driven and enlightenment models, the role of scientists is to produce knowledge, and the role of practitioners is to consume it. This implies a hierarchical social relationship and an impermeable boundary between these two communities. Developing strategies for communication, dissemination, and/or persuasion is sufficient to foster knowledge utilization. We also suggest that policies that stimulate scientific and cultural literacy in a given society are in line with these models. The problem-solving model, meanwhile, reverses this pattern of relationships between practitioners and scientists. Knowledge is produced in specific situations to support the resolution of concrete problems experienced by practitioners. The two communities evolve along parallel trajectories, according to their own rules, but they may engage in occasional interactions. These interactions are structured around the needs and expectations of practitioners. Schön (1983), through his work on the reflective practitioner, suggests that the problem-solving model represents a critique of the existing hierarchical relation between scientific and experiential knowledge. For him, the use of knowledge depends mainly on a positive valuation and formalization of experiential knowledge. We also suggest that a possible pitfall of the problem-solving model can be seen when scientists become increasingly the servants of practitioners and, in so doing, neglect their quest for new scientific knowledge. The strategic model is based on the assumption that knowledge is attractive only when it serves strategic or particular interests and when actors know which strategy to put forward and are in a position to discern the strategic edge of knowledge. This model is useful for making a distinction between the appropriate and the inappropriate use of knowledge in society; it underlines the possibility of a manipulative or Machiavellian application of knowledge. Roles for practitioners and scientists lose specificity if both get involved in similar strategic interplays. However, this model suggests that knowledge will be used only if the actors are able to understand their interest in using knowledge. The idea of devising incentives for knowledge use stems from the strategic model. Finally, as we saw, the deliberative model values knowledge and its use when it is based on sound democratic and deliberative processes. It is the most challenging model for practitioners and scientists. The other models more or less put the emphasis on fine-tuning communication


strategies in order to increase knowledge utilization. Except for the problem-solving model à la Schön, most of the models of knowledge utilization preserve existing research production processes and keep scientists' and practitioners' norms almost intact. The deliberative model changes the rules of the scientific community by opening it up to external influences. It also obliges practitioners to increase their awareness of knowledge and to engage in a reflexive appropriation of knowledge. They cannot behave simply as the clients of experts. Our examination of the five models of knowledge utilization has helped to clarify their assumptions about the ways the realms of science and practice should or should not be brought closer together. Most of these models imply that a certain form of fine-tuning communication between these two domains should facilitate knowledge use. We will now turn to the idea that the production of knowledge in a practice context and in the context of a network of organizations embodies a more profound transformation of the modes of knowledge production themselves, and that it can be seen as the result of a deeper institutionalization of science in modern societies. Knowledge Production and Use in Context As described in the previous section, excepting the deliberative model, most knowledge utilization models pay little attention to the production processes of knowledge or to their implications for the use of knowledge in society and organizations. The models do not explicitly recognize that the social and economic context has changed and that the development of scientific applications poses dilemmas for individuals and societies. Nowotny, Scott, & Gibbons (2001) address the emergence of a 'Mode II society' in which the expansion of science, the diversification of knowledge production sites and networks, and the rapid pace at which science is converted into applications make the confinement of science no longer possible. Knowledge utilization is an epiphenomenon of deep changes in current modes of knowledge production. The knowledge-driven model and enlightenment model approaches to knowledge utilization do not reflect the reality of the processes of knowledge diffusion and utilization in societies. Nowotny et al. argue that factors related to the uncertainties of knowledge, the consequences of knowledge (an argument close to the deliberative or interactive model), and the nature of problems in contemporary societies tighten relationships between researchers and practitioners. Problem-solving, strategic,


and deliberative approaches to knowledge utilization now are embedded in new institutional forms in which scientists and non-scientists participate in the creation and management of these new arrangements. Knowledge use is thus a process embedded in what Gibbons et al. (1994) call the contextualization of knowledge. The works by Gibbons et al. (1994) and Nowotny et al. (2001) present these changes as a movement between two archetypes – namely, Mode I and Mode II of knowledge production. The development of Mode II will be seen to be a social process that increases the probability of knowledge use because it goes further than the marginal adjustments of traditional scientific logic. As we will argue, such evolution may pose dilemmas for the development of science itself. Gibbons et al. (1994) characterize the evolution of the organization of research in contemporary societies in terms of a switch in modes of knowledge production. A mode of production implies an organized set of rules that governs access to the means of production and relations among social actors. For the sake of convenience, they base their arguments on two ideal types: Mode I and Mode II. According to them, recent changes in the organization of research have fostered the implementation of Mode II. Mode I for knowledge production corresponds to the traditional mode of organizing universities and research. The main objective under Mode I is to produce new knowledge. The best vehicle to produce this knowledge is the ‘pure’ disciplinary endeavour. Research activities are structured inside historically established disciplinary boundaries. Scientific recognition is obtained through achievement inside these boundaries. Mode I does not imply a total detachment from any mundane considerations. Research, from a disciplinary point of view, can be fundamental or applied. This duality has implications for the conception of knowledge utilization. Under Mode I, fundamental research logically precedes applied research. Applied research – a form of research that is aimed at developing practical applications – is the logical next step once a pool of solid disciplinary knowledge has been established. Because Mode I is defined by disciplines and because its main objective is the production of new fundamental knowledge, it is expected that scientists will be the guardian of the quality and probity of knowledge. Under Mode I, the peer-review system is the main safeguard of research quality and defines clear criteria for a scientist's career development. The ultimate form of knowledge diffusion is the publication of research results in peer-reviewed journals. It is through this system of validation


and recognition that knowledge and scientists become credible. Under Mode I the scientific validation of knowledge is seen as the prerequisite for any larger diffusion processes. The main responsibility of the scientific community is to ensure the validity and originality of scientific knowledge. Application derives from this process; it is not a predominant goal under Mode I. The substance of Mode I is science confined within disciplinary boundaries. This mode of knowledge production implies specific organizational arrangements. For instance, the funding of research is a public responsibility. Private funding is possible but the governance of research funding is assumed by independent organizations that rely on scientific communities to assess the relevance and quality of proposed research activities. Universities are the main loci of research and are responsible for guaranteeing researchers' autonomy and independence. While maintaining these basic characteristics, Mode I today probably takes new organizational forms. Increasingly, fundamental research in many sectors is achieved through networks. Researchers are pressed to collaborate widely both nationally and internationally. Clustering of different disciplines in a specific research team or network may also be involved. Mode I reflects the attachment to the idea that research is an independent endeavour for the sake of knowledge creation. In many cases, however, disciplinary boundaries are not seen as sufficient for generating new cutting-edge knowledge. Mode II contrasts significantly with Mode I. The main objective of Mode II is to develop problem-solving capabilities in a given society. From a Mode II perspective, the production of new knowledge is valuable but insufficient; instead, its emphasis is on the practical outcomes of this knowledge. Close to the spirit of Mode II is the notion of R&D outside the usual industrial or entrepreneurial environments. Mode II argues for co-option by researchers for resolving critical problems in modern society. This has important implications for the craft and the organization of research. Researchers under Mode II are involved in an increasing number of heterogeneous teams, which enables researchers and non-researchers together to discuss the suitability and the working out of different projects. In the case of industrial spin-offs, researchers even can become entrepreneurs. Interdisciplinary teams can also evolve into agencies or regulatory bodies designed to guide policies or decision-making processes. Under Mode II, such projects derive from social or external demands. Because the clusters of actors and organizations vary from problem to


problem, researchers must adapt existing relationships and develop new ones. In a certain sense, the social structure of research production becomes blurred, and research will be organized according to tasks ('which problems need to be solved?') instead of within a stable group of research colleagues. The task-oriented network is therefore a plausible organizational form for research under Mode II. This mode is also based on a mix of disciplines and sectors; it is expected that, over time, researchers will lose some of their disciplinary specificity because they specialize in the resolution of problems in particular societal sectors (health systems, education, etc.). The knowledge base for Mode II evolves through a learning-by-doing philosophy, whereby a researcher develops models and concepts by using prior knowledge to address new problems or similar problems in different contexts. The term 'context' is crucial to understanding the distinctive features of Mode II. Research problems and research methods are developed in specific contexts following close interactions between researchers and other social actors. Research also can take place in unusual contexts, and researchers may now pursue their careers in various practice settings. Under Mode II, the boundaries between the scientific community and other communities become very fluid. This has important consequences; for example, under Mode II the academic monopoly over the regulation of research is undermined. The assessment of research and the criteria for assessing research must now be shared with non-academics and colleagues from other disciplines. The emergence of task-oriented networks represents an organizational restructuring of research. For Gibbons et al. (1994), this restructuring corresponds to an increasing contextualization of research. It is through this research in context that knowledge use takes form. Under Mode II, researchers should become more sensitive to the needs and concerns of practitioners, while practitioners should develop skills to interact with researchers. This dynamic, according to the Mode II perspective, will encourage the diversification of strategies and loci for dissemination. It is not accidental that interest in implementing Mode II has recently increased. Organizations are increasingly relying on expert or qualified personnel to sustain their development and strategies. Studies of the knowledge-based economy illustrate the centrality of knowledge in the structuring of organizational activities. Mode II is also stimulated by the 'new entrepreneurship' in research, whereby more academics become involved in the management and development of heterogeneous research units than was previously the case. It seems clear that in


a context in which knowledge is perceived as a fundamental asset for a society's development and competitive advantage, the pressures to expand Mode II knowledge production will increase. According to the Mode II perspective, knowledge use is a result of fundamental changes in the organization of knowledge production, rather than the development of specific strategies to foster the use of knowledge that would maintain both the normal functioning of the scientific community and the nature of their relationship with practitioners. The switch from Mode I to Mode II has consequences for the organization of research itself, for the status of academics, and for the nature of relationships that need to be developed between practitioners and researchers. In this section we have briefly described the philosophies and approaches of Mode I and Mode II in order to raise issues about the current demand for an increased use of knowledge in society. Discussions around Mode I and Mode II show that changes in the institutional basis of research and science encourage increased knowledge transfer and utilization. However, knowledge utilization will not be confined to the mobilization or passive consumption of scientific knowledge by practitioners and lay people. It will also involve new sets of demands by both groups (Bastian, 1998). The growing acceptance of Mode II precepts means that researchers can expect to face multiple demands from diverse social settings, and it may not be easy for the scientific community to cope with these demands. Increased social pressures for knowledge utilization will probably have important consequences for the craft of research and the duties of academics. In the next section we will look at the implementation of Mode II in the health care sector and its implications for knowledge utilization policies. Mode II in Health Care The intensification of pressures to increase knowledge use in health care and to demonstrate more explicitly the usefulness of research reflects the pervasiveness of Mode II. Our discussion of the penetration of Mode II into health care is divided into two parts. We first address some examples of Mode II's development in health research and then consider some of the challenges and dilemmas in its deployment. There have been many signs recently that Mode II has made an appearance in the health care sector, especially in health research. Mode II is based on the assumption that science can no longer be confined to the university; science penetrates society, while in many


ways society influences science. Recent policies by various health research funding agencies are closely aligned with Mode II. For example, different programs foster the development of partnerships in research. Until recently, however, these partnerships were somewhat unusual for health services researchers. Incentives for partnerships vary. Some agencies, such as the Canadian Health Services Research Foundation (CHSRF), require a close partnership between researchers and decision-makers before providing funding to research projects, regional training centres, and chair programs. The Canadian Institutes of Health Research (CIHR) and the Social Sciences and Humanities Research Council of Canada (SSHRC) have recently funded broad community alliances in health research. During the 1990s the Quebec Social Research Council launched and implemented an ambitious program to support the installation of research teams in practice settings or to stimulate close collaboration between researchers and practitioners. A recent CIHR initiative is aimed at expanding research networks in health services and population health research. These programs have had a definite impact on the way researchers organize and work. For instance, researchers develop multiple affiliations and links to various settings. They are obliged to share their ideas of what research is with non-researchers, and they must convince practitioners of the future pay-offs for collaborative research. In tune with this change in the mode of knowledge production, researchers become involved in research through the dynamics and links between science and practices. These different developments show that funding agencies are operating more or less consciously under a Mode II vision. They see it as a legitimate mode of organizing research, one that is gradually replacing the traditional Mode I approach. Such policies also accord both with the need for funding agencies to show the pay-offs from research more explicitly than before and with recent scientific policies that encourage intensifying knowledge development and applications. Mode II does not possess the same implications or significance in all the scientific fields covered by health research. Clinical research carried out collaboratively between universities and private firms is not a new phenomenon. Mode II fits quite naturally with the process of clinical research where the objective is to move rapidly from the creation of new knowledge to clinical and commercial applications. The use of knowledge is a fundamental objective of this type of research. The outcome of research (e.g., a new drug or technology) is also more tangible for different stakeholders.


Applied social or organizational research in health care faces different challenges in the process of implementing Mode II. Social research traditionally has been confined, in the sense that it was not organized in large, complex, heterogeneous networks. The concept of knowledge use associated with these research communities is similar to the enlightenment model described above. However, social science researchers have frequently been involved in controversies around policies or social issues; generally, this involvement has been more arm's length and less concerned with the objective of serving a specific public or client. Certain pressures to develop Mode II in social research imply that researchers regard their endeavours as collaborative ventures with a well-defined pay-off for policy- and decision-making. One could also say that the implementation of Mode II might be an opportunity to generate new knowledge and to enquire about new problems, because, through interactions with practitioners, researchers discover information, opportunities, and an enriched understanding of practices. However, pressures to implement Mode II may dilute scientific resources in numerous projects, and efforts may be driven by considerations other than scientific excellence. Researchers may be pushed to adopt a model that is more opportunistic (large funding, large network) and less driven by strict scientific achievements. Concluding Remarks One might wonder what the implications of this review of knowledge utilization models are for health care researchers and practitioners and what a shift to Mode II means for health care. There is a definite demand in health care for increased mobilization of knowledge by practitioners and organizations. The evidence-based decision-making movement implies the development of strategies to increase knowledge utilization and probably some transformations in the organization of research practices. The five models of knowledge utilization we have reviewed suggest different strategies to achieve knowledge utilization. Some of them (the enlightenment and deliberative models) imply deeper changes in the way science and practice have traditionally interacted. Others (the knowledge-driven, problem-solving, and strategic models) may place more emphasis on the proper packaging and circulation of knowledge. One may expect that scientists, through socialization and training (whether disciplinary or interdisciplinary), may feel more comfortable with one of these models but still relatively able to function,


perhaps on an ad hoc basis, in others. In addition, over time experience may change how scientists perceive their role in society. In other words, it is far from clear that a given scientist should or will be dominated by a single model during his/her whole career. For practitioners, the five models suggest lines of enquiry that may previously have been neglected. All of the models, when examined closely, imply that practitioners should take an active role in knowledge utilization; otherwise, it might never happen. The knowledge-driven model, we believe, has never clearly defined how practitioners and, more broadly, society should be engaging in the appropriation of scientific notions. The strategic and deliberative models give more precise indications about why and how practitioners should increase their awareness of the value of science. The contextualization of research (Mode II) may respond partially to the perceived need for evidence-based decisions and practices. We postulate, however, that the deployment of this mode may take various specific forms in different health care contexts. In addition, it will not necessarily be easily implemented in the fields of health management and policy. Generally speaking, these functions take place within more or less restricted circles, from which external actors, especially those holding private interests, usually are excluded. Mode II corresponds more naturally to routine practices in clinical sciences where private companies actively cooperate with clinicians to develop and test new technologies. In fact, Mode II may be deployed much more rapidly in the area of clinical innovations, such as telemedicine, where governments, private companies, doctors, and even patients' associations join in efforts to conceive and implement technological systems. In contrast, traditions, experience, and the case-by-case approach typical of social interventions may require further investments in cooperative relations between researchers and practitioners (closer to the deliberative model).


bodies of knowledge in arts, literature, or history. Imagine a society in which every decision and interaction must translate into financial or practical values. This question will remain open, of course, because it is an empirical one. Nevertheless, it is worth remembering that one may become (temporarily) overly enthusiastic regarding the recent call for an increased use of knowledge. We must also recognize that knowledge utilization is a reasonable objective if practitioners can relate to a significant, coherent, and rigorous body of knowledge. If changes in the scientific community towards Mode II bring creativity in research by stimulating the framing of new problems, the use of new and potentially useful knowledge can be fostered. We suggest that Mode II and its explicit valuation of knowledge use and intense relationships between scientists and practitioners should be a complement to, but not a substitute for, Mode I.

REFERENCES

Bastian, H. (1998). Speaking up for ourselves: The evolution of consumer advocacy in health care. International Journal of Technology Assessment in Health Care, 14, 3–23.
Bero, L.A., & Jadad, A.R. (1997). How consumers and policymakers can use systematic reviews for decision making. Annals of Internal Medicine, 127(1), 37–42.
Blackler, F. (1995). Knowledge, knowledge work and organizations: An overview and interpretation. Organization Studies, 16(6), 1021–1046.
Bohman, J. (1996). Social critics, collective actors and public deliberation: Innovation and change in deliberative democracy. In J. Bohman (Ed.), Public deliberation: Pluralism, complexity, and democracy (pp. 197–236). Cambridge, MA: MIT Press.
Brown, J., & Duguid, P. (1991). Organizational learning and communities of practice: Toward a unified view of working, learning, and innovation. Organization Science, 2(1), 40–57.
Bulmer, M. (1982). The use of social research: Social investigation in public policymaking. London: Allen & Unwin.
Champagne, F. (1999). The use of scientific evidence and knowledge by managers. Montreal: Groupe de recherche interdisciplinaire en santé, Faculté de Médecine, Université de Montréal.
Crozier, M., & Friedberg, E. (1977). L'acteur et le système. Paris: Seuil.
David, P., & Foray, D. (2002). An introduction to the economy of the knowledge society. International Social Science Journal, 171, 9–23.
Denis, J.L., Béland, F., & Champagne, F. (1996). Le chercheur et ses interlocuteurs: Complicité et intéressement dans le domaine de la recherche évaluative. In Évaluer: Pourquoi? (pp. 21–31). Quebec: CQRS.
Dodier, N. (1993). L'expertise médicale – Essai de sociologie sur l'exercice du jugement. Paris: Éditions Métailié.
Empson, L. (2001). Introduction: Knowledge management in professional service firms. Human Relations, 54(7), 811–817.
Ferlie, E. (2002). Public management research and Mode II knowledge production. Paper presented at the EGOS Conference 2002, Barcelona.
Gelijns, A.C. (1991). Innovation in clinical practice: The dynamics of medical technology development. Washington, DC: National Academy Press.
Gibbons, M., Limoges, C., Nowotny, H., et al. (1994). Introduction. In M. Gibbons et al. (Eds.), The new production of knowledge: The dynamics of science and research in contemporary societies (pp. 1–16). London: Sage.
Gray, J.A.M. (1997). Evidence-based health care. New York: Churchill Livingstone.
Guba, E.G., & Lincoln, Y.S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage.
Habermas, J. (1989). La souveraineté populaire comme procédure. Un concept normatif d'espace public. Lignes, 7, 29–57.
Habermas, J. (1993). Justification and application: Remarks on discourse ethics. Cambridge, MA: MIT Press.
Hirschman, A.O. (1995). Défection et prise de parole. Paris: Fayard.
Huberman, M. (1989). Predicting conceptual effects in research utilization: Looking with both eyes. Knowledge in Society, 2(3), 6–24.
Kovner, A.R., Elton, J.J., & Billings, J. (2000). Evidence-based management. Frontiers of Health Services Management, 16(4), 3–24.
Leydesdorff, L., & Etzkowitz, H. (2000). Le 'Mode II' et la globalisation des systèmes d'innovation 'nationaux': Le modèle à triple hélice des relations entre université, industrie et gouvernement. Sociologie et Sociétés, 32(1), 135–156.
Lomas, J. (2000). Connecting research and policy. Isuma, Spring, 140–144.
March, J.G. (1999). Understanding how decisions happen in organizations. In J. March (Ed.), The pursuit of organizational intelligence (pp. 13–38). Malden, MA: Blackwell.
Mitchell, R.K., Agle, B.R., & Wood, D.J. (1997). Toward a theory of stakeholder identification and salience: Defining the principle of who and what really counts. Academy of Management Review, 22(4), 853–886.
Nahapiet, J., & Ghoshal, S. (1998). Social capital, intellectual capital, and the organizational advantage. Academy of Management Review, 23(2), 242–266.
Nowotny, H., Scott, P., & Gibbons, M. (2001). Re-thinking science: Knowledge and the public in an age of uncertainty. Cambridge: Polity Press.
Pan American Health Organization (2001). Strategies for utilization of scientific information in decision-making for health equity. Washington, DC: PAHO.
Patton, M.Q. (1997). Utilization-focused evaluation. Thousand Oaks, CA: Sage.
Pelz, D.C. (1978). Some expanded perspectives on use of social science in public policy. In J.M. Yinger & S.J. Cutler (Eds.), Major social issues: A multidisciplinary view (pp. 346–357). New York: Free Press.
Rouse, J. (1991). Philosophy of science and the persistent narratives of modernity. Studies in History and Philosophy of Science, 22(1), 141–162.
Schön, D.A. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.
Stengers, I. (1993). L'invention des sciences modernes. Paris: Éditions La Découverte.
Tenner, E. (1996). Why things bite back: Technology and the revenge of unintended consequences. New York: Vintage.
Tranmer, J.E. (1998). La prise de décisions: Données probantes et information. Sainte-Foy, QC: Éditions MultiMondes.
Vaillancourt Rosenau, P. (1994). Health politics meets post-modernism: Its meaning and implications for community health organizing. Journal of Health Politics, Policy and Law, 19(2), 303–333.
Weber, M. (1989). Le métier et la vocation de savant. In Le savant et le politique (pp. 71–122). Paris: Éditions 10/18.
Weiss, C.H. (1979). The many meanings of research utilization. Public Administration Review, 39, 426–431.
Weiss, C.H. (1988a). Evaluation for decisions: Is anybody there? Does anybody care? Evaluation Practice, 9(3), 15–28.
Weiss, C.H. (1988b). If programs hinged only on information: A response to Patton. Evaluation Practice, 9(3), 5–14.


2 A Sociological Perspective on the Transfer and Utilization of Social Scientific Knowledge for Policy-Making

HARLEY D. DICKINSON

Working out methods for using scientific and technical knowledge in the cause of humanity is the great central problem of our time.
Mott (1948)

Medical practice must be part of the general institutionalization of scientific investigation and the application of science to practical problems, which is a characteristic feature of modern Western society.
Parsons (1951)

In every generation students raised in the Western tradition of rationality optimistically call for the application of science to social problems. Also, in every generation there are a few eloquent voices which warn of applying scientific rationality to the large-scale reconstruction of society.
Waitzkin (1969)

Introduction Modernity encompasses the rationalist dream that science can, and will, produce the knowledge required to emancipate us individually and collectively from scarcity, ignorance, and errors. The lucidity of this dream, however, has waxed and waned over time. Currently, in the context of the knowledge society, we are being enticed, cajoled, and otherwise encouraged to increase and improve the transfer and utilization of research knowledge. The call for evidence-based health care is one expression of the dream.


Appeals for evidence-based health care began with the idea of evidence-based medicine. Evidence-based medicine is founded on the preference for clinical decisions that are based on the best available scientific research evidence in the light of clinical experience and patient desires (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996). It is premised on the belief that the use of research evidence leads to better health care decisions, which in turn leads to better health outcomes. The logic is simple and robust. Thus, since the early 1990s it has been a central item on the health care reform agenda (National Forum on Health, 1998). Many, however, were startled by these calls. Although it is commonplace to identify medicine as both an art and a science, it has long been claimed and widely assumed that therapeutic decision-making is the science and that ‘bedside manners,’ or interpersonal relations, are the art (Gadamer, 1996; Lomas & Contandriopoulos, 1994). A surprisingly large proportion of clinical practice, however, is not based on scientific knowledge (Anderson & Mooney, 1990; Roos & Roos, 1994). Clinical practice patterns often are better understood as professional rituals or habits (Freidson, 1973). Many medical rituals may be harmless (Roth, 1960) even if they are wasteful of scarce resources, but others appear to pose serious health risks (Roemer and Schwartz, 1979). Additionally, a disturbingly large number of harmful and costly medical errors are being made and increasingly reported (Medical errors, 2002). Set against this background, calls for evidence-based medicine are less surprising. Likewise, it is not surprising that initiatives soon appeared to extend the principles of evidence-based decision-making to health care consumers, health system managers, and policy-makers (Hayward, 1996; Gray, 1997; Davies & MacDonald, 1998; Perkins, Simnett, & Wright, 1999; Elliott & Popay, 2000). This is not the first time such initiatives have occurred. The use of social scientific knowledge for solving social problems and managing social processes, for example, is a perennial preoccupation of both social scientists and policy-makers. Indeed, it is a defining feature of modern societies (Polanyi, 1944; Parsons, 1951; Waitzkin, 1969; Habermas, 1971). Many issues related to contemporary efforts to institutionalize evidence-based health care are therefore similar to those identified at other times and in other policy domains. There are some issues relative to these previous efforts, however, that are not apparent in current debates about evidence-based health care. A review of these issues may be instructive for both proponents and opponents of evidence-based decision-making in general and evidence-based health care in particular.


This paper consists of three main sections. In the first I discuss the development of the field of knowledge transfer and utilization in the decades immediately following the Second World War. Then I summarize three sociological models of knowledge transfer and utilization and briefly outline the issues they raise. Finally, I discuss recommended reforms of the practice of science relative to policy-making that emerge from these models. I argue that the successful transfer and use of scientific knowledge for policy-making requires a rational means to address ethical as well as factual issues. Knowledge Transfer in Historical Context In the period bracketed by the two world wars powerful support for the principle of rational democratic planning emerged. This support was based on the belief that the main alternatives – laissez-faire and technocracy – led to anarchy on the one hand and totalitarianism on the other (Mannheim, 1940). At that time these were not simply abstract fears. It was generally agreed that the economic and social anarchy caused by the Great Depression was the result of an unfounded belief in the rationality of the free market. At the other extreme, the atrocities perpetrated in the name of both Soviet and fascist central planning highlighted the dangers of abandoning democratic checks and balances. It was therefore widely believed that the west had to chart a course between anarchy and totalitarianism (Lynd, 1967 [1939]; Mannheim, 1940; Lerner & Lasswell, 1951). There was also widespread confidence that it was possible to chart such a course; a primary source of this optimism was contributions made by the social sciences to both the military and the industrial war efforts. These successes supported the view that social science could, and would, provide the knowledge and expertise required to rationally and democratically reconstruct society. The demand for rational, democratic policy-making and the need to supply the necessary expertise required for the welfare state and its programs helped to drive the dramatic expansion of funding for the social and administrative sciences following the Second World War. This expansion, in turn, created pressure for the transformation of the social sciences themselves (Gouldner, 1970). Transformation took two main forms: consolidation and expansion of already established social sciences such as sociology (Dickinson & Bolaria, 1994); and the emergence of new policy and management sciences that addressed a variety of public policy issues, including the issue of knowledge transfer and


utilization for policy-making purposes (Lerner & Lasswell, 1951; Dror, 1971; Wilensky, 1967; Caplan, Morrison, & Stambaugh, 1975; Weiss, 1977; Lindblom & Cohen, 1979; Scott & Shore, 1979; OECD, 1980; Nathan, 1988; Bryant & Becker, 1990). These new and consolidated sciences generated or refined a variety of planning and management technologies, such as decision analysis, systems analysis, cybernetics, and operations research (Wiener, 1967; Gouldner, 1970, p. 346; Dror, 1971; Bell, 1973). The primary purpose of these new management technologies was to increase decision-making rationality by substituting ‘algorithms (problem-solving rules) for intuitive judgments’ (Bell, 1973, p. 402). Substituting problem-solving (or decision-making) rules for intuitive judgments was an attempt to render decision-making foolproof in the quest for rationality. In many cases, social scientists were eager collaborators in this quest. Some were concerned, however, that the promise to rationalize decision-making could not be kept (Simon, 1982; Lindblom, 1959; Gouldner, 1970; Vickers, 1965). Others were concerned that the development and deployment of the new intellectual technologies and decision-making algorithms signalled the emergence of an incipient technocracy that threatened democracy and individual autonomy (Waitzkin, 1969; Habermas, 1971). A number of social scientists objected that such efforts turned them into servants of corporate and state power and thereby threatened scientific autonomy and objectivity (Baritz, 1960), and some opposed the use of scientific knowledge for rational planning and societal reconstruction because of fears that it upset the natural order of things (Collins & Makowsky, 1972). Each of these themes emerged as more or less salient, depending on the model of knowledge transfer and utilization adopted. Models of Knowledge Transfer and Utilization Knowledge transfer models have been categorized in a number of ways. Most commonly, three main models are identified: supplier push models; user demand, or pull, models; and interaction models (Landry, Amara, & Lamari, 2001; Weiss, 1977, 1979). In sociology there are three main variants of the interaction model. Two of them, the technocratic models and decisionistic models, focus on the relationship between science and politics. The third, the pragmatistic models, expand the focus to include the public.


In this section I outline the three sociological models. I identify issues and debates associated with their understanding of the nature and consequences of knowledge transfer and utilization. I also summarize solutions for increasing the transfer and utilization of science for policy-making and practice decisions.

Technocratic Models Technocratic models are based on the view that modern science is a driving force in human progress. They assume that unhindered development and application of scientific knowledge to the problems of human life will result in their solution (Lundberg, 1947). Corrupt politicians are the main impediment to realizing this technocratic dream, because they represent special interests that often are opposed to the universal interests that proponents of technocracy claim to represent. It follows that political interference with the application of expert knowledge and skills to the solution of social problems is the main obstacle on the road to rationality (Ellul, 1964 [1954], p. 262). Technocratic models therefore view science and politics as 'two communities' separated by values, interests, and decision-making practices. The gap between these communities is seen as unbridgeable because the irrational, subjective character of politics is irremediable. Technocracy requires politicians to become technocrats or to implement decisions made by technocrats, that is, to become servants of truth. In the post-war period the dawn of a new technocratic epoch was widely prophesied. Some predicted new levels of human empowerment and freedom (Lundberg, 1947; Ellul, 1964 [1954]; Bell, 1973). Others warned of a new regime of domination and discipline (Foucault, 1979; Marcuse, 1964; Illich, 1976; Roszak, 1969; Rose, 1990; Douglas, 1971; Boguslaw, 1965). Daniel Bell (1973), who announced the coming of post-industrial society, was among the most influential of the celebrants. The axial principle of post-industrial society (or what he also termed the knowledge society) was the use of scientific knowledge for planned and rational policy formulation, technological innovation, and economic growth. This principle, Bell contended, was reflected in a number of other features of post-industrial societies, including the creation of new 'intellectual technologies' for rational decision-making and the predominance of the professional and technical classes. This predominance arose from the fact that in a knowledge society 'knowledge gets
organized as work’ in the form of professionalism (Freidson, 1973, p. 50; Elliot, 1972). Furthermore, it was thought that the large-scale application of science to practical affairs was possible only via the professional organization of work (Parsons, 1951, p. 348). A societal interest in minimizing illness and its disruptive social and economic effects is central among these practical affairs. As a result, medicine is considered to be the archetype of professionalism and the embodiment of modern technocratic tendencies (Parsons, 1951). The relationships between medical knowledge, power, and control are dominant themes in sociology (Parsons; 1951; Johnson, 1972; Vollmer & Mills, 1966; Freidson, 1973; Turner, 1987). Freidson, for example, argues that the medical profession exercises autonomy in determining both the form and the content of its work. This autonomy results in dominance relative to patients, other occupations in the health care division of labour, and policy-makers. The profession's dominance vis à vis patients and policy-makers in particular indicates a tendency towards medical technocracy. The development of medical knowledge and its application to the diagnosis and treatment of illness have been argued to be the dominant forms of social control in modern societies. This process has been referred to as the medicalization of deviance. It is seen as a subcategory of a more general process of the medicalization of society. Expansion of the domain of medicine to include childbirth, alcoholism, eating disorders, addictions, and, at one time, homosexuality, is pointed to as evidence of creeping medical imperialism and a tendency towards technocratic control (Parsons, 1951; Zola, 1972; Conrad, 1975; Conrad & Schneider, 1980). Psychiatry was seen by many to be in the vanguard of these developments, not in the sense that it was unique, but in the belief that it was leading the way for other medical specialties (Zola, 1972). Critics argued that psychiatry was, at best, a pseudo-science and, at worst, witchcraft (Szasz, 1972 [1961]; Torrey, 1986 [1972]). This criticism was based on the claim that in many cases mental illnesses were not illnesses at all – at least not illnesses in the sense of possessing known biochemical and physiological causes. Rather, many of the moods and behaviours diagnosed as symptomatic of mental illness were considered psychosocial problems caused by poverty and powerlessness. The fact that incarceration in large, remote institutions was the main medical treatment for mental illnesses until well after the Second World War was cited as
evidence that psychiatry was more involved in the social control of the poor and the powerless than in the scientific diagnosis and treatment of illness (Szasz, 1972 [1961]; Scull, 1984 [1977]; Dickinson, 1989; Chesler, 1972). Others pointed out that somatic therapies for so-called mental illnesses were based more on desperate empiricism than on science. Insulin-shock therapy, electro-convulsive therapy, psychosurgery, and chemotherapies had no theoretical foundation, and more often than not they had negative effects (Valenstein, 1973, 1986). Thus, the problem was not only the misguided application of good science to health problems. In many cases, the science itself was suspect, motivated by careerist or political and economic considerations that had only a tenuous relationship to the value-free objectivity presumed to be the foundation of good science (Scull, 1984 [1977]; Collins, 1988). Additionally, ethical concerns were raised when clinical treatment/research was undertaken without the consent of those affected (Valenstein, 1973; Pines, 1973; Collins, 1988). Medical dominance relative to the formulation and implementation of health policy led some to suggest that the profession of medicine in post-war Canada constituted a ‘private government’ or ‘a state within a state,’ and a threat to democracy (Taylor, 1978, p. 265). Analyses of the infamous Saskatchewan doctors' strike, set against the introduction of Medicare, highlighted this point (Badgeley & Wolfe, 1967; Tollefson, 1963) and is reminiscent of earlier populist critiques of the American Medical Association (Beale, 1939). The above critiques of professionalism as a technocratic strategy of scientific knowledge transfer and utilization relate to the clinical management of illness. The mental hygiene and the eugenics movements are examples of multidisciplinary attempts to apply scientific knowledge to the prevention of social problems and the promotion of healthy behaviours (Griffin, 1989; McLaren, 1990). Both movements were attempts at social engineering. Proponents of both movements claimed that their interventions were based on the best available scientific knowledge, and both movements fell into disfavour because of their efforts. However, the underlying impetus to scientifically improve individuals and society through applied science remains strong. Those in the mental hygiene movement have renamed and reformed their group over the course of the twentieth century. It started as an effort to reform mental hospitals and to medicalize the diagnosis and treatment of mental illness. From this beginning it evolved into a
multidisciplinary, community-based effort to prevent mental illness and mental disorders. Currently it is a multidisciplinary, community-based effort to promote positive mental health (Dickinson, 1989, 1994). The eugenics movement began as a program in which plant and animal breeding techniques were applied to humans. Excessive breeding among the physically and socially unfit was identified by eugenicists as the cause of social and health problems such as poverty, alcoholism, prostitution, illegitimate births, divorce, ‘feeble-mindedness,’ and mental illness. Combined with insufficient rates of reproduction among the eugenically fit – members of the white, Anglo-Saxon, middle and upper classes – it was claimed to be the root of the apparent racial and social degeneration of western societies (McLaren, 1990). Eugenicists introduced an array of putatively science-based policy interventions designed to solve these problems. The main objectives of these interventions were the prevention of the unfit from breeding and its encouragement among the fit. Preventive, or negative, eugenics predominated; in the North American context it included techniques ranging from marriage licences to incarceration and involuntary sterilization (Dickinson, 1989). In the European context it also took the form of Nazi genocidal attempts to eliminate supposedly problem populations, including the Jews, the mentally ill, the feeble-minded, and homosexuals. Atrocities associated with the application of eugenic science to the solution of social and health problems often are regarded as anomalies or historical aberrations that could not happen again. It is important to note, however, that the scientific knowledge upon which eugenics policies were based was the same in Nazi Germany as it was in Canada. Eugenics policies and practices were not the result of abnormal science. Nazi eugenics was normal science applied by a particularly virulent, anti-democratic, totalitarian, and authoritarian regime. Critics are quick to observe that even benevolent technocracies, by definition, are anti-democratic and authoritarian. The potential for good intentions to go horribly awry is inherent in these features. Currently, eugenics principles, though not named as such, underlie the development and application of genetic science in the form of new reproductive technologies and other medical biotechnologies (McLaren, 1990; Burstyn, 1993). The ‘technocratic wish’ is that scientists of one sort or another will be social engineers who rationally plan social change and manage social problems through evidence-based policy and program interventions (Belkin, 1997). In its most extreme manifestation, technocracy (i.e., government by scientific and technical experts) displaces democracy
(DeSario & Langton, 1987; Chafetz, 1996). Critics of this agenda argue that the reduction of politics to science is based on ‘scientism,’ the false belief that only science produces true knowledge and that all other knowledge claims are necessarily false (Habermas, 1971; Keat, 1981). Critics of the technocratic tendencies inherent in the professional application of science to the identification and solution of social problems argue that science must be accountable and responsible to the public, usually through government or a government agency.

Decisionistic Models

Decisionistic models constitute the dominant sociological approach to knowledge transfer and utilization. Like technocratic models, they are based on the view that science and politics constitute two distinct communities and that scientists and politicians perform distinct, but necessary, social functions. Decisionistic models assume a normative division of labour between political decision-makers and scientists: politicians make interest-based decisions concerning policy goals, and scientists rationally determine the best means of reaching them. The technocratic model, as we saw, assumes that political decision-making is subjective, irrational, and an unnecessary impediment to achieving a rational society; as a result, advocates of technocracy argue that the unification of science and government is the best way to ensure the effective transfer and utilization of scientific knowledge. Adherents of decisionistic models, by contrast, argue that collaboration between researchers and policy-makers is the best way to ensure effective knowledge transfer and utilization (Lomas, 2000; Scott & Shore, 1979; Lindblom & Cohen, 1979; Weiss, 1977, 1979; Gans, 1971).

Decisionistic models also maintain that value choices cannot be rationally justified. One simply must decide (hence the term ‘decisionistic’) between competing value choices. To the extent that such decisions ‘escape compelling arguments and remain inaccessible to cogent discussion,’ they are seen as subjective and irrational (Habermas, 1971, p. 63). Value choices are outside the domain of science, because the ‘best’ choice cannot be objectively determined. Once political decisions are made and the goals of social policy set, however, scientists and other experts have a role to play in rationally determining the most effective and efficient means of implementation. Determining the ‘best’ means to pre-selected ends is a question of facts, and such questions are the domain of science. Disputed validity claims
can be resolved rationally through conventional, value-free, scientific methods of enquiry and debate, where evidence is presented in support or refutation of contested truth claims.

The doctrine of value-free science, upon which this division of labour is based, is intended to serve at least two functions. First, it is intended to ensure that scientists refrain from demagoguery. Second, its purpose is to guarantee scientists’ objectivity by protecting them from the truth-distorting influences of powerful vested interests. The problem of demagoguery arises when a scientist moves beyond factual questions to value questions. This shift is seen as particularly problematic in the context of classroom teaching, where the power differential between professor and student is such that professorial proclamations effectively cannot be disputed (Weber, 1970 [1946]). It is also seen to be a more general problem, which characterizes all situations in which there is an imbalance between knowledge and power. Relative to the second problem, the doctrine of value freedom (combined with academic tenure and peer review) is intended to ensure that social scientists have both the capacity and the courage to speak truth to power.

A number of high-profile academic freedom cases, particularly in the United States, have characterized the historical development of the discipline of sociology and highlight one aspect of the problematic relationship between power and knowledge (Scott & Shore, 1979). Such concerns, however, are not simply historical curiosities. Recent Canadian cases involving Nancy Olivieri and David Healy, for example, highlight threats to academic freedom among medical researchers and clinical teachers. The Canadian Association of University Teachers (CAUT), an organization formed in the 1950s to defend academic freedom, recently established a Task Force on Academic Freedom for Faculty in University-Affiliated Health Care Institutions in response to the Nancy Olivieri and David Healy cases (CAUT, 2002a). The Nancy Olivieri case (CAUT, 2002b), in particular, underscored concerns about the importance of maintaining arm’s-length relationships with research funding agencies that may have an interest in some truths and some knowledge but not in all truth and all knowledge.

Although there is a real concern that research knowledge will conflict with powerful vested interests, a more common criticism of the truth and knowledge produced by independent, university-based disciplinary research is that it is trivial, esoteric, and irrelevant to practical concerns, or that it is contradictory and confusing. In either case, there is a sense that neither decision-makers nor the general public can use it (Willinsky, 1999, 2000).


Sensitivity to these charges has elicited a number of responses from researchers. One is the argument that the problems of non-utilization or under-utilization of scientific research are more apparent than real. Focusing only on the instrumental, or ‘social engineering,’ uses of research, critics claim, fails to recognize other appropriate forms of use such as educational and conceptual deployment (Weiss, 1977, 1979). This understanding of knowledge transfer and utilization often is referred to as the enlightenment model. It encompasses situations in which new insights into problems and novel understandings of needs and interests occur. These new understandings and interpretations may then give rise to new problem-solving approaches and to innovative ways of satisfying needs and realizing interests. Analysts of the transfer and utilization of social science knowledge often claim that the enlightenment model is a more appropriate conceptualization of use than more instrumentalist conceptions (Weiss, 1977; Hetherington, 1988; Huberman, 1994; NSDDR, 1996). Another common response to the problem of inadequate knowledge transfer is an attempt to make university research more relevant to particular groups of users. This strategy has resulted in the proliferation of numerous sub-disciplinary specializations, such as the sociology of work and employment, the sociology of education, the sociology of science and technology, and the sociology of medicine, to name only a few. Early debates in these sub-fields clearly reflected the tension between autonomy and relevance. Straus (1957), for example, proposed differentiating two types of medical sociology: sociology of medicine and sociology for medicine. He briefly differentiated between the subject matter of the two sub-disciplines and concluded they were incompatible: ‘the sociologist of medicine may lose objectivity if he identifies too closely with medical teaching or clinical research while the sociologist in medicine risks a good relationship if he tries to study his colleagues’ (p. 203). This tension, of course, has never been resolved. Broader ethical and political issues emerged in the debates that developed in the 1960s over the involvement of social scientists in the study and solution of social problems. Some critics argued that the social problems approach jeopardized the objectivity of the knowledge produced, insofar as problems are defined as problems from someone's perspective and solutions generally serve the interests of some, but not all. Opponents of the social problems approach maintained that defining something as a problem generally is linked to the needs and interests of the powerful in society, not to those of the powerless (Scheff,
1975; Fraser, 1989). Even if the needs and interests of the marginal and powerless are taken into account by decision-makers, the means selected to satisfy them often serve the interests of the powerful more than those of the powerless (Lubove, 1973 [1965]; Foucault, 1979). Thus, in seeking solutions to problems, social scientists become either willing or unwilling servants of power (Baritz, 1960). This raises the question as to whether science can be value free or whether such bromides are best understood as self-deception by scientists busily engaged in the service of power while denying or ignoring the issue. Some social scientists adopt the position that value-free social science is a myth and that all science, indeed all knowledge, is deeply imbued with values and power (Becker, 1967; Gouldner, 1968; Foucault, 1980). Howard Becker, in his presidential address to the annual meeting of the Society for the Study of Social Problems in August 1966, highlighted these ethical and political issues in the title of his presentation: ‘Whose side are we on?’ In his answer he contended that all social research is value laden; therefore, social scientists have to choose sides on the basis of personal and political commitments (Becker, 1967). He himself argued for a partisan sociology that self-consciously sides with the underdog. Advocacy of sociology for the underdog was intended to be a corrective to social science, regarded by many as primarily serving the interests of those who occupied positions of relative power in various institutional settings. Paradoxically, the sociology of the underdog took the form of collaborative, policy-oriented research primarily supported by, or undertaken in collaboration with, central governments committed to developing and expanding the welfare state and the postindustrial economy (Gouldner, 1970). Set against this background, a new multidisciplinary specialty emerged, with knowledge transfer and utilization as its domain of enquiry and expertise. Such specialists undertook research intended to solve the problem of ineffective transfer and utilization of research knowledge. Four main issues were addressed: the sources of research knowledge (i.e., researchers and their organizations), the content of the knowledge being transferred, the medium of dissemination, and the users and their organizational contexts. Relative to the sources of research knowledge, research was done on the limits and biases of research, the factors affecting the credibility of researchers and their knowledge claims, the orientations of researchers and research organizations, and ways to build effective relationships between researchers and users. In terms of the content of research
findings, attention was paid to its comprehensibility to users, its content in terms of what was included and what was not included, and the compatibility of research findings with the interests and values of users. Investigations of the media of dissemination initially held out great hope for information and communication technologies to make goodquality research findings available to users in a timely and cost-effective fashion. It was found, however, that person-to-person interaction is the best determinant of research utilization. Finally, attention was paid to users' needs and expectations and how they could be harmonized with the nature of research knowledge and what it could realistically be expected to provide. The most common conclusion drawn from the study of knowledge transfer and utilization is that research can be made more relevant to decision-makers if they are involved in all stages of the research process as early as possible. By doing so, the questions of relevance, credibility, and timeliness of information can be addressed before, during, and after the research is completed. This model clearly overcomes the problems of ‘ivory towerism’ and produces relevant and usable knowledge (Lynd, 1967 [1939]; Lindblom & Cohen, 1979). Knowledge so produced, however, may not be science. Knowledge produced for problem-solving purposes tends to short-circuit the scientific cycle by moving directly to solutions and bypassing explanations (theory building) (Mills, 1959; Wallace, 1971; Willer & Willer, 1973). In this respect, such knowledge may be better understood as ideology rather than science. Ideology is knowledge that serves particular interests, and scientific knowledge is universal and objective (Edelman, 1964; Habermas, 1971, chap. 6). This concept of the ideological nature of knowledge highlights the notion of the symbolic, strategic use of research to justify pre-established positions and vested interests. Empirical research in knowledge utilization reveals that the validity and reliability of research findings may not be a primary determinant of utilization. Rich and Oh (1993) and Oh (1997), for example, show that research results are more likely to be used when they support the interests and goals of an organization. They are more likely not to be used when they are in conflict with the structurally determined interests and preferences of decision-makers. These questions and concerns have recently risen to a crescendo, at least within some quarters of the academy, as a result of recent initiatives in collaborative research. Based on the theories identified by Bell (1973) three decades ago regarding the relationship among scientific knowledge, technological innovation, and economic growth, several ef-
forts to encourage government, industry, and university collaborations have emerged. Indeed, HEALNet itself is one such collaboration that evolved from Industry Canada's Networks of Centres of Excellence program (HEALNet, 2002; Industry Canada, 2002). Despite these efforts and despite recognition that competing knowledge claims, values, and interests are determinants of knowledge utilization, current attempts to make research relevant by forging collaborative interactions have failed to address the myth of value neutrality. As we saw above, Becker (1967) argued that researchers simply had to make value choices. He advocated siding with underdogs – the poor and the powerless. This alignment has an ethical appeal insofar as it is compatible with Judaeo-Christian and other religious ethics, and it resonates well with the ethics of secular humanitarianism. Gouldner (1968), however, in a scathing rebuttal of Becker, argued that underdog sociology was not an ethical values-based commitment at all. Rather, it was a self-serving commitment to the centralizing powers behind the establishment and extension of the welfare state over and against the more parochial interests of local authorities and state institutions. He concluded that sociologists must commit to values, not factions; for this is the only way in which knowledge can claim to be rational. Yet Gouldner failed to provide guidance on the question of how to make rational value choices. It is this question that remains at the centre of pragmatistic models of knowledge transfer and utilization.

Pragmatistic Models

Pragmatistic models posit a necessary relationship between science and the public’s democratic participation in policy formulation and implementation (Habermas, 1971). Technocratic models relegate the public to the roles of consumers and clients. As consumers, members of the public choose between market-based commodified services. As clients, they receive professionally provided and bureaucratically organized services via the welfare state. There is no citizenship role distinct from these consumer and client roles. Decisionistic models do have a role for the public as citizens. It is a role, however, that consists of the acclamation of elite decision-makers in the form of periodic general elections.

The democratic elitism assumed by decisionistic models has increasingly been faced with a more or less perpetual legitimacy crisis in the form of a growing democratic deficit (Dickinson, 2002). On one hand, this is reflected in declining membership in traditional political parties
and reduced voter turnouts. On the other, it is manifested as calls for direct democracy and increasing political activism in the form of oppositional movements operating outside the organizational and procedural framework of representative democracy. A major cause of these interconnected developments is a growing distrust of the motives and actions of political, scientific, and professional elites. The public's distrust of such groups often is rooted in inadequate consultation prior to decisions concerning new technologies and services that have profound, often destabilizing, effects on their lives. These destabilizing forces frequently are experienced as a loss of identity and sense of the meaning of one's life. Concerning social roles, assurance in one’s practical knowledge of how to live a good life relative to other people is eroded. This erosion is manifested as confusion and contestation over rights and responsibilities and, in many cases, a heightened sense of being victimized and unfairly treated. The experience of existential insecurity and sense of injustice associated with these developments has been accompanied by increased demand for participation in the policy formulation and implementation processes. Pragmatistic models of knowledge transfer and utilization have developed in this context; they are attempts to rationally deal with the ethical issues raised by the application of science and technology in society. There are three key elements of these models. First, knowledge transfer is understood to be a process of knowledge translation. Second, knowledge translation is required in multiple sites. Third, ethical issues and value conflicts can be rationally deliberated (Habermas, 1971; MacRae, 1976; Dryzek, 1989; Majone, 1989; De Leon, 1994; Fischer & Forrester, 1993; Fay, 1973, 1987). The shift from the notion of knowledge transfer to that of knowledge translation signalled a move from an understanding of knowledge as a product to an understanding of knowledge generation as a process. Seen this way, knowledge is socially constructed through communicative processes of learning that take place in structured contexts characterized by pre-existing meaning systems, role structures, and values. Knowledge translation thus has two dimensions. First, it encompasses communicative efforts to demonstrate that new knowledge and technologies are relevant to pre-existing needs and interests. Second, it entails communicative interaction oriented towards the redefinition of pre-existing understandings of needs and interests so as to achieve a new understanding of relevance. Both aspects of social learning, if they
are to occur, presuppose the full and free participation of those who are affected by the new knowledge and technologies in question. The second consequence of the pragmatistic view is that multiple sites requiring knowledge translation become evident. Increased scientific specialization, combined with interdisciplinarity, have ‘made an unsolved problem out of the appropriate translation of technical information even between the disciplines, let alone between the sciences and the public at large’ (Habermas, 1971, p. 69), and, one must add, between the sciences and policy-makers and between policy-makers and the public. The third element that distinguishes pragmatistic approaches to the problem of knowledge translation is the belief that what is required is more than merely a one-way translation of technical knowledge for other specialists, policy-makers, and the general public. Rather, knowledge translation is a two-way communication process oriented towards achieving mutual understanding. The knowledge and understanding that policy-makers and the general public have of their own needs and interests, values and beliefs, and rights and responsibilities are communicated to, and translated for, experts. Included are discussions of how proposed scientific and technological innovations will affect individuals and their current ways of life. This dimension of knowledge translation often takes the form of ethical discourse involving the fairness or justice of applying new knowledge or implementing new policies from the point of view of those affected by them. It is this type of ethical discourse that proponents of pragmatistic models maintain can be conducted rationally ‘in ways analogous to scientific discourse’ (MacRae, 1976, p. 34). To the extent that this assertion is true, it counters arguments that the choice of values and ethical discourses are inherently and irredeemably subjective and irrational, as proponents of both technocratic and decisionistic models assume. Realizing rational ethical discourse, however, presupposes the institutionalization of norms that enable it to occur. These norms are referred to as the ‘discourse ethics.’ Discourse ethics take the form of normative guidelines that specify the ‘who, what, and how’ of deliberation directed towards answering the following general question: ‘In a world characterized by an irreducible plurality of values, and in the absence of a universal morality, how can we resolve, or at least accommodate, value differences in ways acceptable to all without resorting to manipulation, threat or violence?’1 Specific versions of this question must be
answered relative to all policy and practice decisions if those policies and practices are going to be rationally justifiable and acceptable to all those who are affected by them. Who may participate in deliberations? Anyone who is competent to speak and act, in principle, has a right to take part in discourse. In modern industrialized societies, of course, it is impossible to achieve this degree of participation. It was for this reason, in part, that representative forms of democratic governance were developed and why they remain essential. The current democratic deficit, however, signals a crisis of legitimacy for existing institutions of representative democracy. Development and deployment of deliberative consultation procedures are part of the attempt to resolve this crisis without abandoning representative forms of governance to direct democracy or to non-democratic forms of authoritarian and elite (technocratic) decision-making. Discourse ethics also address the question: ‘What may be discussed?’ In principle, every participant in a discourse has a right to express his or her needs and interests, desires, preferences and beliefs, principles and values. Similarly, any other participant may challenge the validity or sincerity of such views. The person so challenged is obliged to provide reasons and arguments in support of his/her assertions. Argumentation, motivated to achieve mutual understanding, is the essence of rational deliberation (Toulmin, 1958; Habermas, 1984; Dickinson, 1998). Discourse ethics also provides normative guidelines concerning how participants in deliberation are to interact. As we have seen, participants in deliberative discourse have the right to assert anything and to challenge any assertion made by others. They not only must exercise their own rights in these regards. They also have obligations to respect the rights of others to do the same. Some argue that participants in deliberative discourse are obliged not only to respect others' rights to participate in deliberations, but to ensure that all other participants actually exercise their rights (Chambers, 1995). These normative expectations presuppose that participants will be self-interested. Discourse ethics does not begin with the unrealistic expectation that participants will adopt Rawls's ‘veil of ignorance,’ or some other artificial device, as a way to move beyond self-interest to altruism (Lomas, 1997). Rather, the rights that participants in deliberation have publicly to assert anything and to challenge any assertions made by others discourage narrowly self-serving strategic communication, such as misrepresentation, deceit or manipulation, threats, coercion, and violence. By exposing strategic communicative action, delib-
eration discourages it and, in the process, contributes to the achievement of rational understanding and agreement based only on the force of the better arguments. Only agreements achieved through this means are recognized as genuine and legitimate (Habermas, 1990). Habermas further argues that following from discourse ethics is the principle that valid policies are those ‘that meet (or could meet) with the approval of all affected in their capacity as participants in a practical discourse’ (p. 93). It is obvious, of course, that implementation of the norms of rational deliberation as specified in discourse ethics is impossible. It is possible, however, to approximate, more or less closely, these deliberative procedures and thereby also to approximate realization of the above-mentioned policy principle that follows from them. Various deliberative consultation processes are more or less well designed to approximate these normative procedures (Abelson, Forest, Eyles, Smith, Martin, & Gauvin, 2001; Pickard, 1998). The impediments to this type of deliberation and associated processes of knowledge translation are numerous and well known. They include the cognitive content of expert knowledge, which, by its nature, is not easily accessible to non-experts. There also is the problem of the quantity of new scientific and technical knowledge. The ‘knowledge explosion’ invariably is identified as a nearly insurmountable problem for experts, let alone the lay public. It is universally identified as one of the primary barriers to the institutionalization of evidence-based decision-making. Another increasingly common impediment to democratic discourse on the pragmatic translation of scientific and technical knowledge is the tendency towards private science, or science that produces proprietary knowledge under contract to funding agents. The proprietary approach to scientific and technical knowledge is a correlate of the knowledge-based economy and the drive for innovation as the primary engine of economic growth. It is also related to growing concerns about the corporatization of the university and the loss of academic autonomy (Newson & Buchbinder, 1988; Currie & Newson, 1998). The instrumental approach to knowledge transfer and utilization associated with technocratic and decisionistic models itself is an impediment to institutionalizing the expanded processes of knowledge translation envisioned by pragmatistic models. Instrumentalization of knowledge transfer embodies tendencies towards technocracy. By definition, this impedes efforts at enhanced and expanded democratization.


Finally, the democratic deficit and the depoliticized public are indicative of a motivation crisis (Habermas, 1976), that is, an apparent lack of interest in political participation. The democratic deficit may be offset, however, by what appears to be strong support for more participatory forms of public political activity, some of which are compatible with the participation in deliberative discourse (Dalton, Burklin, & Drummond, 2001). Impediments to change are easy to find. Yet there is evidence that ways of surmounting these impediments are developing. Relative to the information explosion, initiatives such as the Cochrane Collaboration are making significant progress towards effective knowledge digests. Techniques such as meta-analysis and systematic review are providing useful summaries of relevant research knowledge for practitioners. Information and communication technologies are increasing accessibility to knowledge in historically unprecedented ways, which is true not only for experts but also for the general public. These communities, of course, are not entirely independent. It is well known that the high degree of specialization in the scientific division of labour results in a situation in which specialists are effectively lay persons in relation to the knowledge base and expert discourse of other specialists. Given this effect of specialization, Habermas suggests that ‘the public often provides the shortest path of internal understanding between mutually estranged specialists’ (1971, p. 77). The internal need for translation of scientific information thus also ‘benefits the endangered communication between science and the general public in the political sphere’ (Habermas, 1971, p. 78). Another source of support for effective translation emerges from the political sphere itself. The requirements for environmental and social impact assessments initially required scientists and other experts to specify the consequences that technological innovations and interventions were expected to have. More recently, these requirements have evolved into the fields of risk assessment, communication, and management. It has been said that it is impossible to overstate the importance of the idea of risk to contemporary policy debates: in a report on the 2002 National Policy Research conference on risk, it was noted that ‘the language of risk has spread throughout society and now dominates the language of governance’ (A new world of risk? 2002, p. 1). Although this influence offers the potential to engage in wide-ranging debate about risk and its management, such potential is not often realized
because there are actually two conceptualizations of risk: a narrow technical conceptualization and a broader ethical conceptualization (Sirard & Létourneau, 2002). Those advocating innovation and change tend to adopt the narrow version and to focus on measurable risks and benefits. In terms of risk management they tend to adopt the position that if there is no demonstrable harm associated with adopting innovations, or if it can be argued that the benefits outweigh the costs, then they should be implemented. The alternative conception of risk is much broader. It is focused not only on the measurable effects of adopting innovations but also on the cultural, social, and ethical consequences of doing so. Relative to risk management, the broad conception tends towards adoption of the precautionary principle. This is a conservative, risk-averse stance: if it cannot be demonstrated that an innovation is risk free, it should not be adopted. Recent widespread mobilization of public opposition to various technological innovations such as agricultural and medical biotechnologies shows the potential of these oppositional activist movements to jeopardize technological innovation and, in the case of stem cell and other forms of genomic research, to have limits put on research itself. The confrontational politicization of scientific research and the innovation agenda creates objective conditions for inclusion of ethical issues and concerns in rational deliberations related to formulation and implementation of policy. The current climate of two monologues on risk results in an impasse: ‘discussion becomes polarized and emotional opinions become entrenched, and the true nature and complexity of the issues are obscured. The resulting confusion serves no one's interests and makes an in-depth analysis difficult’ (Sirard & Létourneau, 2002, 8). Indeed, such processes result in a self-fulfilling prophecy, where debates over values are, in fact, irrational and subjective. Despite these problems, there is evidence of a growing willingness and ability of science to engage in a more comprehensive debate about the facts and the values associated with the production and application of new knowledge (Hammond, Harvey, & Hastie, 1992). Concepts such as post-normal science (Funtowicz & Ravetz, 1993, 2002), mandated science (Salter, 1988), and the fourth helix (Mehta, 2002) suggest that the old facts-values dichotomy is breaking down and the institutions of science and politics are being transformed in the direction of increased democratic participation. Much of the debate about knowledge transfer and evidence-based decision-making has proceeded on the assumption that the main impediment to a more rational society is the lack of uptake and utilization
of scientific knowledge in policy-making and professional practice contexts. While not rejecting the importance of this limitation, proponents of the pragmatistic model argue that finding ways to rationally address ethical issues is essential to the task of increasing social rationality. The pragmatistic approach to knowledge generation, translation, and application argues that ethical considerations are not simply add-ons. They are not something to be looked at after a scientific knowledge claim has been made or a policy decision arrived at. Rather, both ethical and empirical validity claims are essential elements of any comprehensive claim to valid knowledge. Addressing this point, Hardwig (1991, p. 708), for example, concludes that prioritizing empirical knowledge claims over ethical ones is incoherent: ‘In order to qualify as knowledge (or even as rational belief), many epistemic claims must meet ethical standards. If they cannot pass the ethical muster, they fail epistemologically.’ This is a crucial insight for those committed to the effective and fair application of knowledge.

Conclusion

Knowledge transfer and utilization in the form of evidence-based decision-making are defining features of post-industrial (or knowledge) societies. More specifically, along with knowledge generation, the translation and application of scientific and technical knowledge constitute the innovation process. Innovation can be defined as the planned and rational generation, translation, and application of scientific and technical knowledge. Against this theoretical backdrop, I have outlined the central assumptions and controversies characteristic of the technocratic, decisionistic, and pragmatistic models of knowledge transfer and utilization. Both the technocratic and the decisionistic models have an incomplete understanding of knowledge and how it is generated and applied. These models have been unable either to solve the problems of, or to overcome the impediments to, effective and fair knowledge transfer and utilization. It is time to try new approaches based on a more comprehensive concept of knowledge and new understandings of democratic decision-making. Failure to do so jeopardizes the very goals being pursued.

ACKNOWLEDGMENTS

This work was done with the support of the HEALNet NCE and the CIHR-funded project ‘Knowledge Utilization and Policy Implementation’ (KUPI).

NOTE

1 The discussion of discourse ethics is taken, with minor editing, from Dickinson (2002, pp. 11–13).

REFERENCES Abelson, J.P.-G. Forest, Eyles, J., Smith, P., Martin, E., & Gauvin, F-P. (2001). Deliberations about deliberative methods: Issues in the design and evaluation of public consultation processes. McMaster University Centre for Health Economics and Policy Analysis Research Working Paper 01-04. Anderson, T.F., & Mooney, G. (Eds.). (1990). The challenge of medical practice variations. London: Macmillan. Badgeley, R., & Wolfe, S. (1967). Doctors' strike: Medical care and conflict in Saskatchewan. Toronto: Macmillan. Baritz, L. (1960). The servants of power: A history of the use of social science in American industry. Middleton, CT: Wesleyan University Press. Beale, M.A. (1939). Medical Mussolini. Washington, D.C.: Columbia. Becker, H. (1967). Whose side are we on? Social Problems, 14, 239 –247. Belkin, G.S. (1997). The technocratic wish: Making sense and finding power in the ‘managed’ medical marketplace. Journal of Health Politics, Policy and Law, 22, 509 – 532. Bell, D. (1973). The coming of post-industrial society: A venture in social forecasting. New York: Basic Books. Boguslaw, R. (1965). The new utopians: A study of system design and social change. Englewood Cliffs, N.J.: Prentice-Hall. Bryant, C.G., & Becker, H.A. (Eds.). (1990). What has sociology achieved? New York: St. Martin's Press. Burstyn, E. (1993, June). Breeding discontent. Saturday Night, 108(5), 15 –17. Caplan, N., Morrison, A., & Stambaugh, R.J. (1975). The uses of social science knowledge in policy decisions at the national level: A report to respondents. Ann Arbor, MI: Institute for Social Research. CAUT (Canadian Association of University Teachers) (2002a). Task force to investigate academic freedom of medical researchers and clinical teachers. News release. Retrieved 12 November 2002, from . .

.

.

.

A Sociological Perspective—63 CAUT. (2002b) No title. Retrieved 12 November 2002, from . Chafetz, M.E. (1996). The tyranny of experts: Blowing the whistle on the cult of expertise. New York: Madison Books. Chambers, S. (1995). Feminist discourse / practical discourse. In J. Meehan (Ed.), Feminists read Habermas: Gendering the subject of discourse (pp. 163–180). New York: Routledge. Chesler, P. (1972). Women and madness. New York: Avon Books. Collins, A. (1988). In the sleep room: The story of the CIA brainwashing experiments in Canada. Toronto: Lester & Orpen Dennys. Collins, R., & Makowsky, M. (1972). The discovery of society. New York: Random House. Conrad, P. (1975). The discovery of hyperkinesis: Notes on the medicalization of deviant behavior. Social Problems, 23(1), 12–21. Conrad, P., & Schneider, J.W. (1980). Deviance and medicalization: From badness to sickness. St Louis: Mosby. Currie, J., & Newson, J. (1998). Universities and globalization: Critical perspectives. Thousand Oaks, CA: Sage. Dalton, R.J., Burklin, W.F., & Drummond, A. (2001). Public opinion and direct democracy. Journal of Democracy, 12(4), 141–153. Davies, J.K., & MacDonald, G. (Eds.). (1998). Quality, evidence and effectiveness in health promotion: Striving for certainties. London: Routledge. De Leon, P. (1994). Democracy and the policy sciences: Aspirations and operations. Policy Studies Journal, 22(2), 200 –213. DeSario, J., & Langton, S. (1987). Citizen participation and technocracy. In J. DeSario & S. Langton (Eds.), Citizen participation in public decision making (pp. 3 –18). Westport, CT: Greenwood. Dickinson, H.D. (1989). The two psychiatries: The transformation of psychiatric work in Saskatchewan, 1905 –1984. Regina, SK: Canadian Plains Research Centre. Dickinson H. D. (1994). Mental health policy in Canada: What's the problem? In B.S. Bolaria & S. and H.D. Dickinson (Eds.), Health, illness and health care in Canada (2nd ed.) (pp. 466 – 483). Toronto: Saunders. Dickinson, H.D. (1998). Evidence-based decision-making: An argumentative approach. International Journal of Medical Informatics 51, 71–81. Dickinson, H.D. (2002). How can the public be meaningfully involved in developing and maintaining an overall vision of the health care system consistent with its values and principles? Discussion paper 33, prepared for the Commission on the Future of Health Care / Commission sur l'avenir des soins de santé au Canada. . .

.

.

.

.

64—H.D. Dickinson Dickinson H.D., & Bolaria, B.S. (1994). Expansion and survival: Canadian sociology and the development of the Canadian nation state. In R.P. Mohan & A.S. Wilke (Eds.), International handbook of contemporary developments in sociology (pp. 229–254). Westport, CT: Greenwood. Douglas, J. (Ed.). (1971). The technological threat. Englewood Cliffs, NJ: PrenticeHall. Dror, Y. (1971). Applied social science and systems analysis. In I.L. Horowitz (Ed.), The use and abuse of social science (109 –132). New Brunswick, NJ: Transaction Books. Dryzek, J.S. (1989). Policy sciences of democracy. Polity, 22, 97–118. Edelman, M. (1964). The symbolic uses of politics. Urbana, IL: University of Illinois Press. Elliot, P. (1972). The sociology of the professions. London: Macmillan. Elliott, H., & Popay, J. (2000). How are policy makers using evidence? Models of research utilization and local NHS policy making. Journal of Epidemiology and Community Health, 54, 461– 468. Ellul, J. (1964 [1954]). The technological society. New York: Vintage. Fay, B. (1973). Social theory and political practice. London: George Allen & Unwin. Fay. B. (1987). Critical social science: Liberation and its limits. Ithaca, NY: Cornell University Press. Fischer, F., & Forester, J. (Eds.). (1993). The argumentative turn in policy analysis and planning. Durham, NC: Duke University Press. Foucault, M. (1979). Discipline and punish: The birth of the prison. (A. Sheridan, Trans.). New York: Vintage. Foucault, M. (1980). Power/Knowledge: Selected interviews and other writings, 1972– 1977. (C. Gordon, L. Marshall, J. Mepham & K. Soper, Trans.; C. Gordon, Ed.). New York: Pantheon. Fraser, N. (1989). Unruly practices: Power, discourse and gender in contemporary social theory. Minneapolis: University of Minnesota Press. Freidson, E. (1973). Profession of medicine: A study of the sociology of applied knowledge. New York: Dodd Mead. Funtowicz, S., & Ravetz, J.R. (1993). Science for a post-normal age. Futures, 25(7), 739–755. Funtowicz, S., & Ravitz, J.R. (2002). Post-normal science: Environmental policy under conditions of complexity. Retrieved 18 November 2002 from http://www.nusap.net/sections.php?op=viewarticle&artid=13. Gadamer, H.-G. (1996). The enigma of health: The art of healing in a scientific age. Stanford, CA: Stanford University Press. .

.

A Sociological Perspective—65 Gans, H. (1971). Social science for social policy. In I.L. Horowitz (Ed.), The use and abuse of social science (pp. 13 –33). New Brunswick, NJ: Transaction Books. Gouldner, A.W. (1968). The sociologist as partisan: Sociology and the welfare state. American Sociologist, 3, 103 –116. Gouldner, A. W. (1970). The coming crisis of western sociology. New York: Avon. Gray, J.A.M. (1997). Evidence-based healthcare: How to make health policy and management decisions. New York: Churchill Livingstone. Griffin, J.D. (1989). In search of sanity: A chronicle of the Canadian mental health association, 1918 –1988. London, ON: Third Eye. Habermas, J. (1971). Toward a rational society: Student protest, science, and politics. (Trans. J. J. Shapiro). London: Heinemann. Habermas, J. (1976). Legitimation crisis. (T. McCarthy, Trans.). London: Heinemann. Habermas, J. (1984). The theory of communicative action. Volume 1: Reason and the rationalization of society. (T. McCarthy, Trans.). Boston: Beacon Press. Habermas, J. (1990). Moral consciousness and communicative action. (C. Lenhradt & S. Weber Nicholson, Trans.). Cambridge, MA: MIT Press. Hammond, K.R., Harvey, L.O., & Hastie, R. (1992). Making better use of scientific knowledge: Separating truth from justice. Psychological Science, 3, 80 – 87. Hardwig, J. (1991). The role of trust in knowledge. Journal of Philosophy, 88(12), 693 –708. Hayward, J. (1996). Promoting clinical effectiveness: A welcome initiative, but both clinical and health policy need to be based on evidence. British Medical Journal, 312, 1491–1492. HEALNet (2002) Home page. Retrieved 19 November 2002 from

Hetherington, R.W. (1988). The utilization of social science research in mental health policy: A Canadian survey. In B.S. Bolaria & H.D. Dickinson (Eds.), Sociology of health care in Canada (pp. 278 –94). Toronto: Harcourt Brace Jovanovich. Huberman, M. (1994). Research utilization: the state of the art. Knowledge and Policy, 7(4), 13 –33. Illich, I. (1976). Limits to medicine. Medial nemesis: The expropriation of health. London: Marion Boyars. Industry Canada. (2002). Home page. Retrieved 19 November 2002 from http://www.nce.gc.ca/success_e.htm Johnson, T.L. (1972). Professions and power. London: Macmillan. Keat, R. (1981). The politics of social theory: Habermas, Freud and the critique of positivism. Oxford: Basil Blackwell. .

.

.

.

.

.

.

.

66—H.D. Dickinson Landry, R., Amara, N., & Lamari, M. (2001). Utilization of social science research knowledge in Canada. Research Policy, 30, 333 –349. Lerner, D., & Lasswell, H.D. (Eds.). (1951) The policy sciences: Recent developments in scope and method. Stanford, CA: Stanford University Press. Lindblom, C.E. (1959). The science of ‘muddling through.’ Public Administration Review, 19(2), 79 – 88. Lindblom, C.E., & Cohen, D.K. (1979). Usable knowledge: Social science and social problem solving. New Haven, CT: Yale University Press. Lomas, J. (1997). Reluctant rationers: Public input into health care priorities. Journal of Health Services Research and Policy, 2(2), 103 –111. Lomas, J. (2000). Connecting research and policy. Isuma, Spring, 140–144. Lomas, J., & Contandriopoulos, A-P. (1994). Regulating limits to medicine: Towards harmony in public- and self-regulation. In R.G. Evans, M.L. Barer, & T.R. Marmor (Eds.), Why are some people healthy and others not? The determinants of health of populations (pp. 253 – 284). New York: Aldine de Gruyter. Lubove, R. (1973 [1965]). The professional altruist: The emergence of social work as a career, 1880 –1930. New York: Atheneum. Lundberg, G.A. (1947). Can science save us? New York: Longmans. Lynd, R.S. (1967 [1939]). Knowledge for what? The place of social science in American culture. Princeton, NJ: Princeton University Press. MacRae, D., Jr. (1976). The social function of social science. New Haven, CT: Yale University Press. Majone, G. (1989). Evidence, argument and persuasion in the policy process. New Haven, CT: Yale University Press. Mannheim, K. (1940). Man and society in an age of reconstruction: Studies in modern social structure. London: K. Paul, Trench, Trubner. Marcuse. H. (1964). One-dimensional man: Studies in the ideology of advanced industrial society. Boston: Beacon Press. McLaren, A. (1990). Our own master race: Eugenics in Canada, 1885 –1945. Toronto: McClelland & Stewart. Medical errors: A global health care issue. (2002, 6 September). Health Edition, 6(35), 2. Mehta, M.D. (2002, April). Regulating biotechnology and nanotechnology in Canada: A post-normal science approach for inclusion of the fourth helix. Paper presented at the International Workshop on Science, Technology and Society. National University of Singapore, Singapore. Mills, C.W. (1959). The sociological imagination. London: Oxford University Press. Mott, F.D. (1948). Recent developments in the provision of medical services in Saskatchewan. Canadian Medical Association Journal, 58, 195–200. .

.

.

.

.

.

.

.

A Sociological Perspective—67 Nathan, R.P. (1988). Social science in government: Uses and misuses. New York: Basic Books. National Forum on Health. (1998). Making decisions: Evidence and information. (Vol. 5). Saint-Foy, QC: Éditions MultiMondes. Newson, J. and Buchbinder, H. 1988. The university means business. Toronto: Garamond Press. A new world of risk? (2002). Horizons, 5(3), 1–2. NSDDR (National Center for the Dissemination of Disability Research). (1996). A review of the literature on dissemination and knowledge utilization. Retrieved 2 August 2002 from . OECD. (1980). The utilization of the social sciences in policy making in the United States. Paris: Organisation for Economic Co-operation and Development. Oh, C.H. (1997). Issues for new thinking on knowledge utilization: Introductory remarks. Knowledge and Policy: International Journal of Knowledge Transfer and Utilization, 10(3), 3 –10. Parsons, T. (1951). The social system. New York: Free Press. Perkins, E.R., Simnett, I., & Wright, L. (Eds.). (1999). Evidence-based health promotion. New York: Wiley. Pickard, S. (1998). Citizenship and consumerism in health care: A critique of citizens' juries. Social Policy and Administration, 32(3), 226 –244. Pines, M. (1973). The brain changers: Scientists and the new mind control. New York: Harcourt Brace Jovanovich. Polanyi, K. (1944). The great transformation. New York: Octagon Books. Rich, R.F., & Oh, C.H. (1993). The utilization of policy research. In S. Nagel (Ed.), Encyclopedia of social sciences (2nd ed.). New York: Marcel Dekkar. Roemer, M.I., & Schwartz, J.L. (1979). Doctors slowdown: Effects of the residents of Los Angeles County. Social Science and Medicine, 13C(4), 213 –218. Roos, N.P., & Roos, L.L. (1994). Small area variations, practice style, and quality of care. In R.G. Evans, M.L. Barer, & T.R. Marmor (Eds.), Why are some people healthy and others not? The determinants of health of populations (pp. 231–252). New York: Aldine de Gruyter. Rose, N. (1990). Governing the soul: The shaping of the private self. London: Routledge. Roszak. T. (1969). The making of a counter culture: Reflections on the technocratic society and its youthful opposition. Garden City, NY: Anchor Books. Roth, J.A. (1960). Ritual and magic in the control of contagion. In D. Apple (Ed.), Sociological studies of health and sickness: A source book for the health professions (pp. 332–339). New York: McGraw-Hill. .

Sackett, D.L., Rosenberg, W., Gray, J.A.M., Haynes, R.B., & Richardson, W.S. (1996). Evidence-based medicine: What it is and what it isn't. British Medical Journal, 312(7023), 71–72.
Salter, L. (1988). Mandated science: Science and scientists in the making of standards. Dordrecht: Kluwer Academic.
Scheff, T.J. (Ed.). (1975). Labeling madness. Englewood Cliffs, NJ: Prentice-Hall.
Scott, R.A., & Shore, A.R. (1979). Why sociology does not apply: A study of the use of sociology in public policy. New York: Elsevier.
Scull, A. (1984 [1977]). Decarceration: Community treatment and the deviant – a radical view (2nd ed.). Englewood Cliffs, NJ: Polity.
Simon, H.A. (1982). Theories of bounded rationality. In H.A. Simon (Ed.), Models of bounded rationality. Volume 2: Behavioral economics and business organization (pp. 408–423). Cambridge, MA: MIT Press.
Sirard, M-A, & Létourneau, L. (2002). Risk and genetic engineering. Horizons, 5(3), 8–9.
Straus, R. (1957). The nature and status of medical sociology. American Sociological Review, 22(2), 200–204.
Szasz, T. (1972 [1961]). The myth of mental illness. Frogmore, UK: Granada.
Taylor, M.G. (1960). The role of the medical profession in the formation and execution of public policy. Canadian Journal of Economics and Political Science, 25, 108–127.
Taylor, M.G. (1978). Health insurance and Canadian public policy: The seven decisions that created the Canadian health insurance system. Montreal and Kingston: McGill-Queen's University Press.
Tollefson, E.A. (1963). Bitter medicine: The Saskatchewan medicare feud. Saskatoon: Modern Press.
Torrey, E.F. (1986 [1972]). Witchdoctors and psychiatrists: The common roots of psychotherapy and its future. Northvale, NJ: Jason Aronson.
Toulmin, S. (1958). The uses of argument. Cambridge: Cambridge University Press.
Turner, B. (1987). Medical power and social knowledge. London: Sage.
Valenstein, E.S. (1973). Brain control: A critical examination of brain stimulation and psychosurgery. New York: Wiley.
Valenstein, E.S. (1986). Great and desperate cures: The rise and decline of psychosurgery and other radical treatments for mental illness. New York: Basic Books.
Vickers, G. (1965). The art of judgment. New York: Basic Books.
Vollmer, H.M., & Mills, D.L. (Eds.). (1966). Professionalization. Englewood Cliffs, NJ: Prentice-Hall.
Waitzkin, H. (1969). Truth's search for power: The dilemmas of the social sciences. Social Problems, 15(4), 408–419.

Wallace, W.L. (1971). The logic of science in sociology. Chicago: Aldine.
Weber, M. (1970 [1946]). Science as a vocation. In J.D. Douglas (Ed.), The relevance of sociology (pp. 45–63). New York: Appleton Century Crofts.
Weiss, C.H. (1977). Introduction. In C.H. Weiss (Ed.), Using social research in public policy making (pp. 1–22). Lexington, MA: Lexington Books.
Weiss, C.H. (1979). The many meanings of research utilization. Public Administration Review, 39, 426–431.
Wiener, N. (1967). The human use of human beings: Cybernetics and society. Chicago: Discus Books.
Wilensky, H.L. (1967). Organizational intelligence: Knowledge and policy in government and industry. New York: Basic Books.
Willer, D., & Willer, J. (1973). Systematic empiricism: Critique of a pseudoscience. Englewood Cliffs, NJ: Prentice-Hall.
Willinsky, J. (1999). Technologies of knowing: A proposal for the human sciences. Boston: Beacon Press.
Willinsky, J. (2000). If only we knew: Increasing the public value of social science research. New York: Routledge.
Zola, I.K. (1972). Medicine as an institution of social control. Sociological Review, 20, 487–504.

3 A Political Science Perspective on Evidence-Based Decision-Making
JOHN N. LAVIS

Introduction

A political science perspective on evidence-based decision-making can be summarized quite simply: evidence-based decision-making rarely exists in a modern democracy (an empirically grounded observation), and it probably should not exist in a modern democracy (a normative statement). Who wins and who loses matter. Consider a hypothetical scenario in which research evidence indicates that if a government transfers income from the least well-off citizens to the most well-off citizens, the average level of health in society will increase. The majority of citizens would probably not wish to see this evidence acted upon. Evidence-informed decision-making is another issue entirely: evidence-informed decision-making does exist in many modern democracies, and, if voters hold the view that politicians should be accountable to research evidence even though other considerations may often trump this evidence, it should exist in a modern democracy. Politicians could argue against a reverse Robin Hood policy on the grounds that fairness trumps the evidence, but accountability to the evidence means that they cannot ignore it. Coming to grips with the political science perspective can be difficult, however, given the range of topics that political scientists study. Some study revolutions and how and why they happen. Others study whether and how federal states foster innovation more than unitary states do. But if one thinks in terms of the very traditional set-up for an empirical study, where one has an outcome (i.e., a dependent variable)

and a set of factors that help to explain that outcome (i.e., independent variables), the part of political science germane to our understanding of evidence-informed decision-making addresses whether, how, and under what conditions one particular independent variable (‘ideas,’ which covers much more than just research knowledge) helps to explain a particular outcome. Such an outcome might be a policy change or, more generally, an issue appearing on the government agenda, a public policy or a change in public policy, or the implementation of a public policy. As is the case in other types of analysis, focusing on one independent variable does not mean that an analyst can safely disregard other independent variables such as powerful interest groups. It does mean that a great deal of time must be spent figuring out how ideas influence a public policy decision both independently and in conjunction with other factors. In this chapter ‘policy learning’ is employed as a heuristic for considering the ways in which ideas influence public policy. Policy change can be influenced by ideas because some individuals or groups learned something. First, I describe a new framework designed to bring order to the many types of policy learning and policy change explored in empirical studies. Second, I address the models of politics (i.e., the theoretical orientations) that political science scholars employ in conducting these empirical studies. Third, I apply the framework and related models of politics in the context of an empirical study in the health field to provide an illustration of how the framework and models can be used. Fourth, I explore related developments in studying the role of ideas in policy change, including developments arising from links with other disciplines. The chapter concludes with some implications for those seeking to increase the uptake of one particular category of ideas – research knowledge – in public policy-making. A Framework for Understanding the Role of Ideas in Policy Change Ideas can play many roles in public policy-making, and determining their precise role in a particular policy change can be very difficult. At one extreme, ideas may provide a wholly new perspective through which individuals and groups choose their goals or the political strategies by which they hope to achieve their goals. For example, ideas that constitute or emerge from the determinants-of-health synthesis could suggest (as a new goal) a quest to improve a population’s health status or (as a new political strategy) building a political coalition that unites health

and social policy activists seeking to reduce income disparities within a population. At the other extreme, ideas may simply provide rhetorical camouflage for already chosen goals and political strategies; the idea that the determinants of health extend beyond medical care could be used, for example, as window-dressing for an established economic goal such as deficit-reduction through reduced health care spending. Most examples likely fall somewhere between epiphany and epiphenomenon. As such, a systematic approach is required to disentangle one potential role for ideas from another. Determining the role that ideas play in a particular policy can be approached, in part, by determining whether the related politics are more about learning or more about conflict resolution (Weatherford & Mayhew, 1995). Heclo (1974) argues that public policy-making should be viewed as a process of learning (e.g., about how to improve a population’s health status or whether to have such a goal in the first place), typically on the part of state officials and other social actors intimately connected to the state. Such idea-oriented explanations arose largely as a reaction to the notion, implicit in conflict-oriented theories, that governments are passive and public policy-making is driven by social pressures. Pure conflict-oriented (i.e., interest-based) explanations provide no role for ideas. In these types of explanation – whether based on pluralist, corporatist, or Marxist theories – debate (insofar as it exists) can be seen as part of a predetermined strategy (e.g., reducing health care spending) for achieving pre-determined goals (e.g., deficit reduction). If some learning does take place, determining the role that ideas play can be further elaborated by determining who learned, what was learned, and what type of policy change resulted (Bennett & Howlett, 1992). The people who learn can include experts (e.g., economists on a presidential advisory council or a royal commission), state officials (e.g., politicians or civil servants), and social actors (e.g., interest groups or the public more generally). These people can learn about the different options for structuring decision-making organizations and processes, the different means to accepted ends, and even the different ends that policy can achieve. Such learning can translate into organizational change or into policy changes that involve a change in means or even in ends. Taken together, these two sets of questions – (1) Is learning or conflict resolution involved? and (2) Who learns, what was learned, and what type of policy change resulted? – form the axes of a framework that can be used to examine systematically the role of ideas in public
policy-making (table 3.1). The framework serves two main purposes. First, it can be used to identify policy changes that may have come about because of ideas. Second, it can be used to determine the role that those ideas played in the politics associated with the developments in question. Particular ideas may play different roles at different times, depending on the politics surrounding a particular public policy. Thus, the framework can be used both to find potential cases for study and to inform their exploration.

Table 3.1 A Framework for Understanding the Role of Ideas in Policy Change

Theoretical orientation | Who learns? (subject of learning) | Learns what? (object of learning) | Type of policy change (to what effect or what type of influence?)
(The two middle columns together constitute the type of policy learning.)

Ideas
Expert-centred | Experts | Theory | No policy change directly

Ideas and interests
State-centred – gov't learning | State officials | Decision-making organizations and processes | Organizational change
Coalition-centred – lesson-drawing | State officials and social actors intimately connected to the state | Means (programs and instruments used to implement policy) | Change in means
Social learning | Same | Means and ends | Change in means and ends (if ends, a paradigm shift)
Debate as dialogue | State officials and social actors | Means and ends | Change in means and ends
Debate as strategy | State officials | Justification for means and ends | None attributable to learning

Interests | No one | Nothing | None attributable to learning

Source: This framework has been adapted from Weatherford & Mayhew (1995), Weiss (1979), Hall (1989), and Bennett & Howlett (1992).

Situating Models of Politics within the Framework Within this framework, six models of politics can be understood in relation to one another (Hall, 1989; Weatherford & Mayhew, 1995), ranging from public policy-making in a closed, decision-making environment dominated by experts and the knowledge they bring to the table to public policy-making in an environment that entails open conflict between opposing interests. The models at either end of this range, however, lack intuitive plausibility: the expert-based model because it ignores interests and the interest-based model because it ignores ideas. On the spectrum between these two models sit four models of politics that allow a role for both ideas and interests: state centred, coalition centred, debate as dialogue, and debate as strategy. The state-centred model posits that state officials learn about different options for structuring decision-making organizations and processes and use what they learn to bring about organizational change. For example, civil servants in the national government may learn about the need for cross-sectoral organizations and processes to address determinants of health, such as income distribution, which can be influenced by public policies that transcend ministerial or departmental lines of authority. Attendant upon such learning might be the ability of civil servants to convince their political masters of the need for institutional innovation. This model involves ‘government learning’ (Etheredge, 1981; Etheredge & Short, 1983), a transplantation of ‘organizational learning’ from its origins in the study of private firms to the study of public organizations (see also Lindblom & Cohen, 1979; Lynn, 1978). The coalition-centred model broadens ‘who learns’ beyond state officials to include social actors intimately connected to the state. It changes ‘what was learned’ from options for structuring decision-making organizations and processes to include either the means to accepted ends or even the different ends that policy can achieve. For example, an informal coalition of health care civil servants, public health practitioners, and not-for-profit health advocacy organizations could learn about the need to look beyond health policy in order to influence determinants of health, such as labour market experiences, as well as learn to use health-related ideas to inform trade-offs in labour market policy development (Lavis, 2002). More dramatically, the same coalition may learn about the benefits of focusing on health as an important outcome of labour market policy per se and then work to bring about such a fundamental shift in the hierarchy of government goals.

Two distinct components of the coalition-centred model have been elaborated. Some have focused on ‘who learns,’ with state officials and social actors typically seen as part of a domestic or transnational policy subsystem – variously called an issue network (Heclo, 1978), a policy network (Knoke, Pappi, Broadbent, & Tsujinaka, 1996), or a policy community (Walker, 1981; Brooks, 1994). Alternatively, these individuals have been regarded as a grouping within a policy subsystem, such as an advocacy coalition (Sabatier, 1987) or an epistemic community (Haas, 1992). Other scholars have focused on learning types. One formulation of the coalition-centred model, sometimes called ‘lesson-drawing’ (Rose, 1988, 1991), involves policy-oriented learning about different means to accepted ends; this type of learning can be based on a group’s own experiences (Sabatier, 1987, 1988; Sabatier & Jenkins-Smith, 1993) or on others' experiences (Rose, 1988, 1991). A second formulation of the model, called ‘social learning’ (Hall, 1989, 1993), moves beyond the notion that changes in ends come about only through an externally induced crisis that alters the distribution of political resources (Sabatier, 1987). Reinterpreting Kuhn (1962), Hall suggests that a change in ends – a paradigm shift – can occur if an accumulation of anomalies undermines the original normative and empirical assumptions that underlie a hierarchy of goals. The dialogue-based model broadens ‘who learns’ even further to include state officials, interest groups, and segments of the public, but retains ‘what was learned’ as the means to accepted ends and the ends that policy can achieve. For example, these individuals and groups are able to learn, through debates in open fora and the media, about public policies that can influence the determinants of health and, through them, health status itself; or about how health can be a goal of such public policies. Such debates can be seen as a dialogue between proponents of different ideas. Through debate, goals are shaped, the relevance of different resources is defined, and the legitimacy of different reasons or justifications are conferred or denied. The outcome determines whether policy change takes place. The strategy-based model posits a more narrow role for ideas, one in which ideas are used by a limited range of actors (primarily politicians and civil servants) as components of a strategy to advance predetermined goals and the means to those ends (Weatherford and Mayhew, 1995). For example, politicians might learn that the determinants-ofhealth synthesis can be used as a rhetorical device to justify reductions in health care spending – an end motivated by reasons other than a

desire to reinvest health care dollars in public policy initiatives that could lead to larger health gains. Weatherford and Mayhew argue that a meaningful role for ideas in public policy-making extends only as far as the dialogue-based model and does not include the strategy-based model. They believe that the strategy-based model – where state officials alone learn and they learn only about politically acceptable justifications for means and ends – cannot be considered a meaningful role for ideas. Although strictly true, a role for ideas as rhetorical camouflage is still worth examining. An Application of the Framework in the Health Field A study I conducted of the role of the determinants-of-health synthesis in policy change in Canada between 1987 and 1997 illustrates how the framework for understanding the role of ideas in policy change can be applied (Lavis, 1998). For the purposes of this study, the key policyrelevant ideas that constitute or emerge from the determinants-of-health synthesis included the following points: (1) the determinants of health extend beyond medical care and include factors such as labour-market experiences, income distribution, and social supports; (2) public policy for which health is an unintended consequence, not an explicit objective, can influence these factors and, through them, health; (3) substantial health gains might accrue from a shift in focus from health policy to public policy for which health is an unintended consequence; and (4) cross-sectoral organizations and processes are needed to bring about such a shift (Lalonde 1974; Evans & Stoddart, 1990; Evans, Barer, & Marmor, 1994; Amick, Chapman, Levine, & Tarlov, 1995; Blane, Brunner, & Wilkinson, 1996). The types of policy change relevant to this study follow from these ideas and from the framework and models of politics described in the last two sections: (1) institutional innovation that involves the development of cross-sectoral organizations or processes to address the determinants of health; (2) change in means to improve health that involve public policy for which health is an unintended consequence and using health to inform trade-offs in these other areas; and (3) change in ends that involves focusing on health as a primary objective of public policy for which health was formerly considered an unintended consequence. Potentially eligible cases of policy change were identified through interviews with key informants, including civil servants, representatives

of funding agencies, researchers, and other social actors intimately connected to the state. Each key informant was asked the following question: ‘What institutional innovations or policy changes have come about [in Canada], at least in part, because of the determinants-of-health synthesis [or this body of knowledge which may be known locally by another term, such as population-health research]?’ The eligibility of identified cases was established through interviews with key informants familiar with the policy change, as well as primary sources: for example, documents related to the determinants of health that were produced by both governments and non-governmental organizations and transcripts of parliamentary debates and secondary sources such as academic studies of particular cases and an unpublished doctoral thesis. Only two of the four potential cases met the following two plausibility criteria: (1) explicit reference made to ideas related to the determinants-of-health synthesis in primary sources produced at the time of the policy change; and (2) post hoc assessments by key informants familiar with the particular policy change concurring that ideas related to the determinants-of-health synthesis played some role in the politics associated with the policy change. The role that ideas played in the politics associated with the two eligible cases of policy change was established through interviews with key informants familiar with the policy change and with the primary and secondary sources already mentioned. Each key informant was asked the following question: ‘How, if at all, did ideas about the determinants of health affect the exploration of policy alternatives, the decision to adopt the ... policy change, and the justifications used to support the decision?’ The politics associated with the policy change were then matched against one of the models of politics by asking whether the related politics were more about learning or more about conflict resolution, and, if some learning did take place, who learned and what was learned. The policy-relevant ideas embodied in the determinants-of-health synthesis played strategic, rather than substantive, roles in both policy changes. The politics associated with the Prince Edward Island government's decision to pool health and social services budgets and to allocate decision-making authority over these budgets to regional boards (Prince Edward Island Health Task Force, 1992; Government of Prince Edward Island, 1993) matched the ‘debate-as-strategy’ model of politics. The related politics were partly about learning by politicians and civil servants, not only about conflict resolution. What they learned,

however, was not so much that a single management structure would facilitate cross-sectoral reallocations in line with ideas about the determinants of health (Lomas & Rachlis, 1997). Rather, they realized that making such an argument attracted support among some members of the health policy community for initiatives that they might not otherwise have supported. Invoking ideas related to the determinants-ofhealth synthesis can therefore be seen as a strategy for advancing predetermined goals (less government) and the means to these ends (a single management structure). Similarly, the politics associated with Health Canada’s targeted funding of policy-relevant research that is related to the determinants of health, through the National Forum on Health and the National Health Research and Development Program (National Forum on Health, 1996), matched the debate-as-strategy model of politics. The politics related to Health Canada’s funding decision was partly about politicians and civil servants learning that actions linked to the determinants of health legitimated the federal government's role in the health policy domain. The constitutional division of authority over health policy gives control over health care financing and delivery to the provinces. The federal government's role in this electorally popular policy domain is restricted to fiscal transfers and the enforcement of the Canada Health Act (Government of Canada, 1984). With deficit reduction steadily diminishing the size of its fiscal transfers to the provinces during the latter half of the period of study, the federal government used rhetoric about the determinants of health to maintain the appearance of activity on healthrelated issues (see, for example, the National Forum on Health, 1997). At the time, however, there were no documented cases in which the federal government had brought about a policy change for reasons related to the determinants-of-health synthesis. Related Developments in Studying the Role of Ideas in Policy Change Models of research utilization provide additional perspectives on how the role of ideas differs across models of politics (Pelz, 1978; Weiss, 1979). While not new developments, these models have only recently been employed in analyses informed by a political science perspective (e.g., Lavis et al., 2002). One of these models of research utilization, the problem-solving model (sometimes called the instrumental model), helps to explain the role of ideas in the coalition-centred model of politics. According to the problem-solving model, research such as that

on the determinants of health helps to solve a problem that already exists and about which a decision must be made. Typically, such research either antedates the policy problem or is purposefully commissioned when needed. For example, state officials and other social actors intimately connected to the state may draw on specific published studies of the health consequences of unemployment in particular demographic groups to determine which groups to target with a new jobs strategy. The enlightenment model of research utilization (sometimes called the conceptual model) helps to explain the role of ideas in the dialoguebased model of politics. Here, research is taken to mean generalizations and orientations that science or social science research has engendered, rather than the findings of a single study. These ideas percolate through informed publics and come to shape the way people think about social issues or help people to make sense of them. Public policymakers are rarely able to cite the findings of a specific study that influenced their decisions, but they have a sense that social science has given them a backdrop of ideas and orientations that has had important consequences (Weiss, 1979). The determinants-of-health synthesis can be seen as an example of a set of generalizations and orientations about how to improve health. At the other extreme, the political and tactical (sometimes called ‘symbolic’ models) of research utilization can be seen as examples of the broader strategy-based model of politics. According to the political model, research is used as ammunition for the side that finds its conclusions congenial and supportive; it is used to neutralize opponents, convince waverers, and bolster supporters. According to the tactical model, calls for new or more research are used as an argument to delay action by the side that finds the conclusions of current research uncongenial and unsupportive. Both models of research utilization suggest a strategic, not a substantive, role for ideas. Four other developments in the study of the role of ideas in policy change warrant mention. First, researchers who draw on a political science perspective in conducting their research have begun to examine research knowledge as a subset of ‘ideas.’ This innovation has opened the door to an improved understanding of how researchers can more effectively transfer research knowledge to public policy-makers and how public policy-makers can facilitate the uptake of this research knowledge (see Lavis et al., 2002). In doing so, these researchers must grapple with how to define research knowledge: is it citable research or is it any

‘professional social inquiry that aids in problem-solving’ (Lindblom & Cohen, 1979)? Researchers must also grapple with whether and how to disentangle research knowledge from other types of information and from the values of politicians, civil servants, stakeholders, and the general public. Second, researchers have begun to examine intermediate outcomes such as public policy-makers' awareness and knowledge of particular ideas, their attitudes towards the ideas, and their self-reported use of the ideas (Lavis et al., 2003). ‘Intermediate outcomes’ seems to be an appropriate term, because public policy-makers might be aware of and knowledgeable about the determinants-of-health synthesis, for example, yet have taken no action outside the health sector on the basis of those ideas (Lavis, 2002). While researchers have begun to look at intermediate outcomes, they rarely extend their assessment of impact to include the quality of implemented actions or outcomes. The final outcome, typically, is policy change. Third, many researchers are no longer content to confine their studies to only the policy development stage of policy change. Instead, many now examine the agenda setting (Kingdon, 1995) and policy implementation stages of the public policy-making process, as well as ‘non-decisions.’ Moving beyond an exclusive focus on policy development holds the potential to improve our understanding of the roles that research knowledge plays in getting an issue onto the government agenda in the first place (Lavis et al., 2002) and of how a public policy will be implemented once it has been developed. Expanding analysis beyond the publicly available roster of ‘decisions’ (e.g., legislative acts) to the much harder to identify ‘non-decisions’ (e.g., considering but deciding against the introduction of a user fee for a medically necessary diagnostic procedure) provides a more comprehensive roster of possible cases for study and thus a more accurate understanding of the roles that research knowledge can play in public policy-making. Fourth, researchers are now examining the conditions that appear to favour the use of ideas (and specifically research knowledge) in public policy-making (Innvaer, Vist, Trommald, & Oxman, 2002). In doing so, researchers must consider institutions (i.e., the public policy-making institutions and processes that facilitate or hinder the uptake of new ideas), a dimension of the political landscape not captured by the ideasbased or interests-based orientations described earlier. Institutions include factors such as policy legacies, which are the attributes of past policies that influence whether an issue will be put on the decision

agenda (e.g., legislation with a fixed phase-out and/or re-assessment date) or how an issue will be addressed once it is on the agenda (e.g., administrative capacities that have been developed within government). Institutions also include characteristics of the public policy-making process such as its openness, the degree of time pressure involved, and the level of approval required (e.g., legislative or staff in an executive agency). An accountable ‘receptor’ function in government has been posited to be an institutional factor that increases the likelihood that research knowledge will be acted upon (Lomas, 1997). In examining the conditions that appear to favour the use of ideas in research knowledge, researchers must also consider sampling issues, because some public policies and some public policy-making processes may be particularly amenable to being informed by research (Lavis et al., 2002). Conclusion Answers to two sets of questions – (1) Is learning or conflict resolution involved? and (2) Who learns, what was learned, and what type of policy change resulted? – can be used to identify more precisely what role, if any, ideas (and specifically research knowledge) play in public policy-making. In both the budget-pooling and the research-funding cases that were used to illustrate how to approach answering these questions, the politics related to the initiatives resemble the debate-asstrategy model of politics. According to this model, politicians and civil servants use the determinants-of-health synthesis as a strategy to advance pre-determined goals and the means to these ends. The policyrelevant ideas that constitute or emerge from the determinants-of-health synthesis were used in different ways, however, with ideas serving a political purpose for politicians and civil servants in Prince Edward Island (in that arguments that invoked the determinants-of-health synthesis attracted support for health care reform from some members of the health policy community) and a tactical purpose for politicians and civil servants in Health Canada (in that funding research on the determinants of health helped to delay more meaningful action on these determinants). The framework for interpreting the role of ideas in policy change can be used within both the health policy and the broader public policy fields. Many applications of the models of politics that can be located within the framework have been undertaken in the economic policy field. Health policy analysts have only recently begun to draw on this

literature in a systematic way (see, e.g., Peterson, 1997; Wilensky, 1997; Klein, 1997; Brown, 1998). The particular application of the framework to the role of the determinants-of-health synthesis in policy change serves to highlight the importance of researchers who consider the range of audiences for their research (state officials, social actors intimately connected to the state, and a broader group of social actors) and the range of uses for their research (establishing decision-making organizations and processes and choosing between different means and ends). Such considerations can suggest new approaches to knowledge transfer such as the development of audience-specific briefing notes that specify the implications of the research for organizations' processes, means, and ends. These advances notwithstanding, determining the role that ideas play in public policy-making remains a stubborn challenge. Like Heclo, political scientists who study the role of ideas in policy change ‘deal in that difficult but perhaps rewarding middle zone – between the large questions with no determinate answers and the small questions of tiresome and often insignificant conclusiveness. As usual, the challenge is to find a balance between being irrefutable and being worth refuting’ (Heclo, 1974, p. 16).

REFERENCES

Amick, B.C., Chapman, D.C., Levine, S., & Tarlov, A. (Eds.). (1995). Society and health. New York: Oxford University Press.
Bennett, C., & Howlett, M. (1992). The lessons of learning: Reconciling theories of policy learning and change. Policy Sciences, 25, 275–294.
Blane, D., Brunner, E., & Wilkinson, R. (1996). Health and social organization. London: Routledge.
Brooks, S. (1994). Introduction: Policy communities and the social sciences. In S. Brooks & A.-G. Gagnon (Eds.), The political influence of ideas: Policy communities and the social sciences (pp. 1–12). Westport, CT: Praeger.
Brown, L.D. (1998). Exceptionalism as the rule? U.S. health policy innovation and cross-national learning. Journal of Health Politics, Policy and Law, 23, 35–51.
Etheredge, L.M. (1981). Government learning: An overview. In S.L. Long (Ed.), The handbook of political behavior (Vol. 2, pp. 73–161). New York: Pergamon.

Etheredge, L.M., & Short, J. (1983). Thinking about government learning. Journal of Management Studies, 20, 41–58.
Evans, R.G., & Stoddart, G.L. (1990). Producing health, consuming health care. Social Science and Medicine, 31, 1347–1363.
Evans, R.G., Barer, M.L., & Marmor, T.R. (Eds.). (1994). Why are some people healthy and others not? The determinants of health of populations. New York: Aldine de Gruyter.
Government of Canada. (1984). Canada Health Act. Ottawa: Queen's Printer.
Government of Prince Edward Island. (1993). Health and Community Services Act. Charlottetown: P. Joseph Murray, Acting Queen's Printer.
Haas, P.M. (1992). Introduction: Epistemic communities and international policy coordination. International Organization, 46, 1–35.
Hall, P.A. (Ed.). (1989). The political power of economic ideas: Keynesianism across nations. Princeton, NJ: Princeton University Press.
Hall, P.A. (1993). Policy paradigms, social learning and the state: The case of economic policymaking in Britain. Comparative Politics, April, 275–296.
Heclo, H. (1974). Modern social politics in Britain and Sweden: From relief to income maintenance. New Haven, CT: Yale University Press.
Heclo, H. (1978). Issue networks and the executive establishment. In A. King (Ed.), The new American political system (pp. 87–124). Washington, DC: American Enterprise Institute.
Innvaer, S., Vist, G., Trommald, M., & Oxman, A. (2002). Health policymakers' perceptions of their use of evidence: A systematic review. Journal of Health Services Research and Policy, 7, 239–244.
Kingdon, J.W. (1995). Agendas, alternatives, and public policy (2nd ed.). New York: HarperCollins College.
Klein, R. (1997). Learning from others: Shall the last be the first? Journal of Health Politics, Policy and Law, 22, 1267–1278.
Knoke, D., Pappi, F.U., Broadbent, J., & Tsujinaka, Y. (1996). Comparing policy networks: Labor politics in the U.S., Germany, and Japan. Cambridge: Cambridge University Press.
Kuhn, T. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.
Lalonde, M. (1974). A new perspective on the health of Canadians: A working document. Ottawa: Minister of Supply and Services Canada.
Lavis, J.N. (1998). Ideas, policy learning and policy change: The determinants-of-health synthesis in Canada and the United Kingdom. McMaster University Centre for Health Economics and Policy Analysis (CHEPA) Working Paper 98-6.

Lavis, J.N. (2002). Ideas at the margin or marginalized ideas? Non-medical determinants of health in Canada. Health Affairs, 21(2), 107–112.
Lavis, J.N., Ross, S.E., Hurley, J.E., Hohenadel, J.M., Stoddart, G.L., Woodward, C.A., & Abelson, J. (2002). Examining the role of health services research in public policymaking. Milbank Quarterly, 80(1), 125–154.
Lavis, J.N., Ross, S.E., Stoddart, G.L., Hohenadel, J.M., McLeod, C.B., & Evans, R.G. (2003). Do Canadian civil servants care about the health of populations? American Journal of Public Health, 93(4), 658–663.
Lindblom, C.E., & Cohen, D.K. (1979). Usable knowledge. New Haven, CT: Yale University Press.
Lomas, J. (1997). Improving research dissemination and uptake in the health sector: Beyond the sound of one hand clapping. McMaster University Centre for Health Economics and Policy Analysis (CHEPA) Policy Commentary C97-1.
Lomas, J., & Rachlis, M. (1997). Moving rocks: Block funding in PEI as an incentive for cross-sectoral reallocations among human services. Canadian Public Administration, 39, 581–600.
Lynn, L.E. (Ed.). (1978). Knowledge and policy: The uncertain connection. Washington, DC: National Academy of Sciences.
National Forum on Health. (1996). What determines health? Ottawa: National Forum on Health.
National Forum on Health. (1997). Canadian health action: Building on the legacy. Ottawa: National Forum on Health.
Pelz, D.C. (1978). Some expanded perspectives on use of social science in public policy. In J.M. Yinger & S.J. Cutler (Eds.), Major social issues: A multidisciplinary view (pp. 346–357). New York: Free Press.
Peterson, M.A. (1997). The limits of social learning: Translating analysis into action. Journal of Health Politics, Policy and Law, 22, 1077–1114.
Prince Edward Island Health Task Force. (1992). Health reform: A vision for change. Charlottetown: Island Information Services.
Rose, R. (1988). Comparative policy analysis: The program approach. In M. Dogan (Ed.), Comparing pluralist democracies (pp. 219–241). Boulder, CO: Westview Press.
Rose, R. (1991). What is lesson-drawing? Journal of Public Policy, 11, 3–30.
Sabatier, P.A. (1987). Knowledge, policy-oriented learning, and policy change: An advocacy coalition framework. Knowledge: Creation, Diffusion, Utilization, 8, 649–692.
Sabatier, P.A. (1988). An advocacy coalition framework of policy change and the role of policy learning therein. Policy Sciences, 21, 129–168.
Sabatier, P.A., & Jenkins-Smith, H.C. (Eds.). (1993). Policy change and learning: An advocacy coalition approach. Boulder, CO: Westview Press.

Walker, J.L. (1981). The diffusion of knowledge, policy communities, and agenda setting: The relationship of knowledge and power. In J.E. Tropman, M.J. Dluhy, & R.M. Lind (Eds.), New strategic perspectives on social policy (pp. 75–96). New York: Pergamon.
Weatherford, M.S., & Mayhew, T.B. (1995). Tax policy and presidential leadership: Ideas, interests and the quality of advice. Studies in American Political Development, 9, 287–330.
Weiss, C.H. (1979). The many meanings of research utilization. Public Administration Review, 39, 426–431.
Wilensky, H.L. (1997). Social science and the public agenda: Reflections on the relation of knowledge to policy in the United States and abroad. Journal of Health Politics, Policy and Law, 22, 1241–1265.

4 An Organizational Science Perspective on Information, Knowledge, Evidence, and Organizational Decision-Making
G. ROSS BAKER, LIANE GINSBURG, AND ANN LANGLEY

Introduction

The study of the use of information, knowledge, and evidence in decision-making has long been an important part of organizational theory. Organizational scholars have focused on studies of decision-making generally and strategic planning more specifically, because decision-making defines both the work and the outcomes of organizations (e.g., see Mintzberg, 1973; Stewart, 1967, 1975). Decisions shape the services and products of organizations, the experiences of those who work in them, and the satisfaction and lifestyles of stakeholders and customers of organizations (Miller, Hickson, & Wilson, 1996). The growing focus on the development and application of evidence-based medicine has stimulated interest in adopting similar guidelines for decision-making in managerial practice (Kovner, Elton, & Billings, 2000; Alexsson, 1998; Hewison, 1997; Walshe & Rundall, 2001). Walshe and Rundall point out the paradoxical stance of many leaders of health care organizations who have supported evidence-based decision-making in clinical realms while failing to adopt the same ideas in their own decision-making. In part, this paradox stems from the greater investment in developing and disseminating research evidence relevant to clinical decision-making compared with managerial decision-making. But Walshe and Rundall also argue that the nature of knowledge used for decisions varies between clinical and managerial domains. Managerial research differs from clinical research by more often being based in social sciences, using a wider range of methods, and incorporating

more theory. As a result, managers are less likely to believe that there are ‘right answers’ and more likely to see findings as dependent upon context, rather than generalizable to various settings. These differences might lead observers to conclude that health care managerial research is simply less well developed than clinical research. If this is true, then greater funding for additional studies, enhanced efforts to disseminate and transfer knowledge into practice, and new patterns of research that link decision makers with researchers are all that is required to create evidence-based management. While such efforts are laudable, we argue that the failure to rely on evidence and information in health care management decision-making is more profound and reflects the nature of the decision-making environment and the processes of linking information, values, and expertise. The new emphasis on evidence-based decision-making should be placed in the context of the history of management thought. This history reveals a continuing search for more rational approaches to organizational decision-making – approaches that are more systematic, more logical, and based on all available information; in other words, approaches that in principle follow the tenets of evidence-based decisionmaking. Over the years, the labels associated with ‘rational’ techniques have changed, but the overriding push for more systematic and comprehensive procedures for decision-making has not (Abrahamson, 1996). At the same time, the attempts to use rational approaches do not appear to have substantially altered the processes of decision-making. Given the sophistication of techniques and the increasingly wide availability of information, one might expect that, as Simon (1978) predicted, by now we would have reached the point where the classic rational model would have begun to provide an increasingly accurate description of how decisions are made in organizations. Yet this is not the case. In this chapter we explore why managerial decision-making remains resistant to ‘rational’ approaches. In the first part we summarize the research evidence on how decisions are actually made in organizations. In the second part we review the research on the efficacy of several strategies for improving it. Finally, in the third part, we examine the lessons learned from the recent stream of work on ‘naturalistic decision-making’ (NDM), which highlights the alternative paths for decision-making that occur when time pressures, uncertainties, and ill-defined goals shape decision-making processes. As we proceed, we relate the arguments and issues to the specific characteristics of the health care context. In the conclusion, we draw together the ideas from the

previous sections, suggesting a need to reach beyond simplistic notions of rationality in promoting better ways to improve decision-making.

Organizational Decision-Making in Practice

Dimensions of Organizational Decision-Making: Rationality, Judgment, and Politics One of the recurring issues in the organizational decision-making literature has been the respective roles of ‘rationality’ versus ‘judgment’ and ‘politics’ in determining decisions (see, e.g., the reviews by Miller, Hickson, & Wilson, 1996; Eisenhardt & Zbaracki, 1992; Langley, Mintzberg, Pitcher, Posada, & Saint-Macary, 1995). As early as 1945 Simon pointed out that in practice there are serious limits to human rationality. Goals cannot always be clearly defined beforehand; it is impossible to consider all possible alternatives; and there are limits to the availability of information, especially about anticipated consequences of actions. Thus, even intendedly rational beings can only approximate the rational ideal, leading Simon to coin the term ‘bounded rationality.’ Lindblom (1959) and later Quinn (1980) reinforced the idea that rationality could be only partial. They argued that decision-makers often need to approach their tasks by making ‘incremental’ adjustments to the status quo that can garner support, rather than insisting on the establishment of comprehensive objectives and attempting to implement large-scale decisions that might be globally rational but impossible to achieve because they threaten vested interests. From the beginning, then, rationality seems unlikely to perfectly describe all organizational decision-making. The questions that arise are how much does it and can it encompass? To what extent is the rational ideal reflected in real decisions? How, when, and why are rational procedures (formal analysis, information gathering, etc.) used in organizational decisions? These questions also lead us to consider other possible dimensions of organizational decision-making. Clearly, human judgment or intuition can play a major role. Here, formal analysis is replaced by tacit knowledge based on experience. Some management writers, swimming against the tide of evidence-based decision-making, suggest that managerial judgment has been underrated (Mintzberg, 1989; Lipshitz, Klein, Orasanu, & Salas, 2001); we shall return to this issue later. Finally, organizational decisions are necessarily the outcome of various forms of social interaction between groups and individuals. Power, values, and

interests play a critical role here (Pfeffer, 1981, 1992; Allison, 1971; Tetlock, 1985). This political aspect of organizational decision-making is important, but sometimes forgotten. Its recognition distinguishes the organizational decision-making literature from literature on behavioural decision-making among individuals (Kahneman & Tversky, 1979; Jackson & Dutton, 1988) as well as from the group-based literature on decisionmaking (Schweiger, Sandberg, & Ragan, 1986; Schweiger, Sandberg, & Rechner, 1989; Rogelberg, Barnes-Farrell, & Lowe, 1992). Political processes may take the form of explicit bargaining and consensus building (Janis, 1982), or they may involve direct or indirect uses of power through which individuals influence decisions by their willingness to invest in issues, their skills in persuading others, their use of resources, their expertise, or simply through their formal authority. Below, we review three complementary streams of research on decision processes, the role of rationality within them, and the possible consequences for health care organizations.

Functionalist Perspective: Analysis as an Alternate Mode of Evaluation

The initial stream of research is rooted in two main ideas. First is Simon's (1960) early description of organizational decisions as composed of 'intelligence,' 'design,' and 'choice' phases. This understanding implicitly views decision processes as having well-defined beginnings and endings and as progressing in a unified manner from issue recognition to solution and implementation. In the second idea, analysis, judgment, and political processes are regarded as mutually exclusive modes of choice that may produce more or less 'successful' decisions.1 Researchers adopting this perspective have examined the extent to which the sequencing of decisions follows a linear path and have looked at the relative weight placed on analytical, judgmental, and political processes. Some have further considered the fit between organizational context and the use of different decision modes (Thompson & Tuden, 1959; Nutt, 2002). In an early and often cited empirical study of organizational decision-making, Mintzberg, Raisinghani, and Théorêt (1976) developed a model based on an analysis of twenty-five major decisions. They identified three phases – 'identification,' 'development,' and 'selection' – that can be further broken down into seven routines ('recognition,' 'diagnosis,' 'search,' 'design,' 'screening,' 'evaluation,' and 'authorization'). At first sight, these elements suggest a linear and intendedly rational decision

process. By plotting flowcharts of the twenty-five decisions on this framework, however, the authors observed that routines are frequently skipped and issues often recycle through them in various orders. The authors also noted that political routines can throw decisions off their course. These descriptions clearly show that linear processes reflecting a global rational model are the exception rather than the rule. Beyond the overall shape of decision processes, the study by Mintzberg et al. (1976) also investigated the approaches adopted within each of their seven routines. These approaches, too, were assessed for the degree to which they reflected the tenets of rational decision-making. For example, the authors identified three modes of evaluation that they labelled ‘analysis,’ ‘judgment,’ and ‘bargaining.’ They found that judgment was the most common mode, followed by bargaining and then analysis. They also found that when analysis was used, its purpose seemed to be mainly to justify decisions taken by other means or for authorization. In other words, though information was mobilized, its role appeared secondary. Other work using a similar basic framework followed. Nutt (1984), for example, undertook a qualitative study of seventy-eight decision processes in health services organizations, drawing conclusions similar to those of Mintzberg et al. (1976). In particular, he noted: ‘The sequence of problem definition, alternatives generation, refinement and selection called for by nearly every theorist, seems rooted in rational arguments, not behavior. Executives do not use this process. Instead, ideas drive the process which is used to rationalize and shape the idea or determine if it has value’ (Nutt, 1984, p. 446). Since that original study, Nutt (1998a, b; 1999a, b; 2002) has continued to research decision processes, broadening the sample of decisions to include private sector firms and refining the measures used. In particular, he investigated in some depth the ‘tactics’ adopted at each stage of decisionmaking (Nutt, 1999b). In a recent study based on over 300 decisions, Nutt (1998b) developed a new typology of the tactics used to evaluate alternatives. He estimated that ‘analytical’ evaluation tactics were used in 34.5 per cent of cases. In contrast, about 40 per cent of decisions involved some form of ‘subjective’ process in which quantitative or qualitative data were selectively mobilized by decision-makers to support proposals. A further 14 per cent involved sponsor judgment with no justification, and 6 per cent were resolved through bargaining or voting. The decisions considered were largely a convenience sample, so care must be taken in interpreting the frequencies. Nutt (1998b) noted that analytical tactics are more frequent and judgment tactics less frequent than the earlier studies suggested. Moreover, if the subjective

evaluation procedures are also considered, some form of mobilization of information is present in the vast majority of decisions. Yet many decisions do not incorporate such rational techniques. What factors affect the use of analysis in decision-making? Thompson and Tuden (1959) provided a framework that classifies decision tasks in terms of two dimensions: the clarity of objectives and the clarity of means for producing results. In this view, analytical procedures are possible and appropriate only when both the objectives and the means are clear, because analysis demands a priori criteria and the ability to collect unambiguous information about alternatives. When objectives are known but the means for producing results are ambiguous, judgment is argued to be more appropriate because experiential knowledge can be taken into account. When objectives are unclear but means are known, bargaining is seen as most appropriate because the main concern is to arrive at an agreement that will satisfy all stakeholders. Thompson and Tuden considered unclear goals and unclear means (called ‘anomie’) to be highly problematic and suggested ‘inspiration’ as a response. Nutt (2002) used his extensive decision process data to empirically test Thompson and Tuden’s model and found that decision success was significantly related to the fit between decision context and process. Decision-making in health care organizations often falls into the problematic anomie category. Health care organizations’ goals frequently are ambiguous when the combination of effective treatment, quality of care, and efficient delivery is involved. The means of providing effective care are also only partly understood. Accordingly, one might expect analysis to be little used. In one study, Nutt (1999a) provided data on the relative prevalence of analysis in public, private, and third-sector organizations (including many hospitals). The third-sector organizations did not show significantly less use of analysis than private firms, though they did make greater use of bargaining and had high use of subjective tactics based on expert opinion. Overall, the studies by Mintzberg and Nutt show that decisions as a whole can rarely be reduced to rational processes. Yet, in specific phases, considerable quantities of information may be mobilized. How and why this information is used, however, are other questions.

Political Perspective: Analysis and Politics as Symbiotic The organizational decision-making literature provides considerable evidence that analysis may be used for non-informational purposes: for example, to support predetermined choices (Bower, 1970; Meyer, 1984),


to contribute to adversarial debate (Lindblom & Cohen, 1979), to exert control (Dalton, 1959), or to deflect attention away from issues (Meltsner, 1976). Political uses of information are common and shape decisionmaking processes in health care settings and elsewhere. Langley (1989, 1990) has identified four main purposes behind formal analysis, using twenty-seven strategic issues across three different organizations. Information motives for analysis are closest to the tenets of the rational model, implying the collection of information to relieve uncertainty and directly improve decisions. Communication motives involve the use of analysis to persuade others inside and outside the organization. Direction and control motives involve the commissioning of analysis to focus others’ attention on problems and to ensure implementation. Finally, symbolic purposes include the initiation of analysis to convey rationality, concern, or willingness to act, even when neither the initiator nor the audience may have the intention or capacity to use the information to change the course of events. Langley (1989) found that while information motives could be associated with over 50 per cent of the analysis studies identified, communication motives were equally (if not more) important, while direction and control and symbolic purposes were associated with 25 per cent and 19 per cent of studies, respectively. Beyond the numbers, Langley’s (1989) research reveals the tight linkages between the social interactive or political dimensions of decisionmaking and the informational or rational dimensions. The two, in fact, appear to be symbiotic, not mutually exclusive as earlier writings tend to suggest. As Langley notes, ‘Formal analysis would be less necessary if everybody could execute their decisions themselves, and nobody had to convince anybody of anything. In fact, one could hypothesize that the more decision-making power is shared between people who do not quite trust one another, the more formal analysis will be important’ (p. 609). Although politically oriented uses of analysis are often viewed negatively, they may improve decisions. For example, when various members of an organization have different goals and information sources, analysis for communication purposes may ensure that ideas are thoroughly debated and that errors are detected (see also Lindblom, 1979). In the absence of debate, decisions may be captured by ‘groupthink,’ in which solidarity becomes the dominant value at the expense of careful reflection (Janis 1982).


On the other hand, as Langley (1995) suggests, the analysis process can sometimes get out of hand. The risk of ‘paralysis by analysis’ – or the collection of more information than needed for decision-making – is highest when participation in decisions is widespread, power is dispersed, opinions are divergent, and leadership is diffuse. In such circumstances, there is potential for the development of ‘paper battles’ in which opposing factions mobilize facts and counter-facts to defend their differing views without resolving issues. Because diffuse leadership and ambiguous goals characterize health care organizations, the conditions there seem ripe for excess analysis. However, Langley (1990) observed that many decisions in the hospital she studied were of local rather than global interest (e.g., the acquisition of medical technology) and were not particularly controversial. Strategic decision-making in this setting tended to involve channelled negotiation through formal analysis up and down the hierarchy. In contrast, where decisions involve a wider range of constituents, the risks of paralysis are higher. Recent episodes involving hospital mergers in Canada and elsewhere may fall into this category (Denis, Lamothe, & Langley, 1999). Some features of health care organizations may lead to the opposite problem: decisions made with no reflection and analysis because of a concentration of power that allows individuals to act without convincing others. For example, in some cases certain key physicians may use their control of operating resources, access to donors, or monopoly on the expertise needed to understand the issues involved to unilaterally make decisions (Langley, 1995). This political view of the role of formal analysis provides a mixed portrait of its potential role in health care organizations. However, the portrait does consistently point out that a large proportion of analysis is done because of the need to interact with others: to persuade them, to control them, and sometimes to distract them. The relationship between doing analysis and producing good decisions is therefore complex and uncertain. Discussions of evidence-based decision-making in health care management cannot ignore the strategic role that formal analysis plays or the political context in which it takes place.

Symbolic Perspective: Rationality and Clinical Judgment as Competing Myths Over and above its political role, Feldman and March (1981) argue that part of the reason for documented instances of information overload in


organizations is related to the high symbolic value attached to rationality in western culture. Thus, when we label uses of information and analysis as rational (systematic), they are often viewed as implicitly good (not irrational), no matter how sensible it might sometimes be to stop collecting information, to use one’s judgment, or to negotiate. This perspective suggests that when norms of rationality are dominant, displaying the use of formal analysis procedures enhances legitimacy, even when such uses are instrumentally inappropriate. This cultural bias may help to explain some of the ineffective uses of analysis uncovered in Nutt’s studies discussed above and also some of the excesses observed by Langley (1995). When rationality is an overriding norm, it may be easy to interpret almost all disagreements as disagreements of fact that can be resolved by additional information and logical argument. If disagreements are really based on values or interests, however, factual information and logical arguments are unlikely to provide resolution. For example, continuing analyses of the benefits and costs of private investment in health care do not and cannot tackle the underlying ideological differences between proponents and opponents. Analysis recommending a new hospital configuration following a merger will miss its mark if nobody has considered the impact on physician revenues or negotiated arrangements that deal with their interests. Yet these kinds of misunderstandings are commonplace. Thomas and Trevino (1993) describe a health care case in which an inter-organizational collaboration failed because of the incapacity of its sponsors to see that their rational arguments did not cope with fundamental goal ambiguities. Clearly, health care organizations are not immune to the rational myth, as witnessed by their successive adoption and often ritualized use of managerial techniques such as strategic planning and total quality management (Lozeau, Langley, & Denis, 2002). Unless carefully used, evidence-based management may become the next manifestation of the rational myth and could further contribute to misunderstandings of the type mentioned above. Health care organizations are also traversed by competing decision-making myths, including the myth of professional judgment. This, too, can be a powerful force that legitimizes decisions because it implies the application of clinical expertise in the interests of patient needs. Thus, clinical approval (or opposition) may also exert ceremonial power and create hybrid forms of decision legitimation. A study by Meyer (1984) identified four distinct processes related to the adoption of technology in hospitals. The four processes were clinical judgment, analytical budgeting, political bargaining among peers, and strategic evaluation in which fit with mission is examined. Meyer noted that as soon as a piece of equipment appeared to be well ranked on one or two of the key criteria, processes in all the other domains became symbolic. Whether or not these were the underlying mechanisms for decision, successful purchases were blessed by clinical approval, political consensus, and supporting analysis. Meyer’s (1984) discussion draws attention to the symbolic role of rationality in decision-making, while noting that, in the health care field, it is shared with other sources of legitimacy. Feldman and March (1981) argue that societal norms are not static and that symbols wax and wane. Given this mutability, the evidence-based decision-making movement itself may represent the continuation of the rationality myth. At the same time, it signals a certain retreat of the myth of clinical judgment, because evidence-based medicine, on which evidence-based management is founded, implies the subordination of clinical judgment to the canons of science. The three perspectives of rationality, politics and symbolism offer insights into the ways in which evidence is incorporated into decisions and why it is or is not used. The first perspective tends to suggest that in health care organizations analytical approaches are less appropriate because they do not deal well with situations of ambiguous goals and unclear technologies. However, while empirical observations do reveal an important role for professional judgment and bargaining, they also show that analysis nevertheless is common in these settings. Through the political and symbolic perspectives we can understand why this is so. Wide participation, diffuse power, and divergent opinions force people to justify their decisions to each other and stimulate the collection of information. These actions are supported by strong (and possibly strengthening) norms of rationality, combined with a continuing faith in clinical judgment. A key question for the future of the evidence-based decision-making movement, however, is whether the extensive use of formal analysis can, given the persistent ambiguous nature of the task context, improve decision-making in health care organizations to the extent hoped for by its proponents. Strategies for Improving Decision-Making The gap between the prescription of rational decision-making as a model for effective decisions and the empirical evidence suggesting that decisions are often made in a fashion that does not follow these precepts


has prompted several responses. Some researchers have moved towards amending theoretical approaches and broadening their advice to practitioners. As already noted, in this view rational decision-making is suited to some types of decisions, but not to others, or is only one element of a decision-making process that must also account for political and symbolic uses of information. Other researchers have viewed the gap between current practice and theory as an opportunity to design decision aids to bring behaviour in line with prescription. Underlying these efforts are the recurrent assumptions that the failure to use rational processes, seek information, incorporate evidence, and ensure exploration of alternatives all undermine effective decision-making. Finally, other researchers have probed the sources of these discrepancies between theory and practice, hopeful that their elucidation will help to develop more effective decision-making practices. The effort to create decision aids builds on the analysis of the processes of rational decision-making. Several researchers have attempted to identify the process elements contributing to successful decisions by individuals and groups and the circumstances under which these processes are effective (Fredrickson & Mitchell, 1984; Schweiger, Sandberg, & Ragan, 1986; Eisenhardt, 1989; Nutt, 1989; Rogelberg, Barnes-Farrell, & Lowe, 1992; Whyte, 2000). Whyte summarizes these process elements as seven steps necessary for effective decision-making: 1. Identifying the objectives to be achieved by the decision 2. Generating a comprehensive list of well-developed alternatives 3. Searching widely for information with which to determine the quality of these alternatives 4. Engaging in unbiased and accurate processing of all relevant information 5. Reconsidering and re-examining all pros and cons 6. Examining costs, benefits, and risks of the preferred choice 7. Developing an implementation plan and monitoring results. (pp. 316–17) .

Janis (1982) has labelled these steps ‘vigilant information processing.’ Nutt (1989) also suggests that, by adhering to these processes, groups can begin to minimize the potentially negative influence that ambiguity, uncertainty, and risk can have on the decisions they make.

Techniques That Promote Vigilant Information Processing Several techniques have been found to enhance the way people interact and the processes they use to evaluate information and decision alternatives. For instance, introduction of constructive conflict into decision-making, through the use of techniques such as dialectical enquiry or devil’s advocacy, has been found to lead to higher-quality decisions than use of a consensus approach (Schweiger, Sandberg, & Ragan, 1986; Schweiger, Sandberg, & Rechner, 1989). In dialectical enquiry, part of the decision-making group develops recommendations based on data and other information, while a second subgroup must develop plausible alternatives that nullify the assumptions and recommendations of the first group. In devil’s advocacy, a subgroup critiques the recommendations and assumptions, but does not come up with its own recommendations. In both techniques, groups then join together again to debate the merit of all recommendations and assumptions put forward. Participants using these techniques reported greater re-evaluation of their own assumptions and recommendations than members of consensus groups did (Schweiger et al., 1989). Jehn (1995) extended this work by looking at the effect of conflict for routine and non-routine decisions. She found that task conflict had a positive impact on group performance of non-routine tasks, and that norms promoting openness ‘enhanced the beneficial effects of task conflict’ (p. 274). Conflict was found to be detrimental to group functioning on routine tasks, and too much conflict in non-routine tasks degraded performance. Other authors have identified task or cognitive conflict (as distinguished from interpersonal or affective conflict) as enhancing performance and decision-making (e.g., Amason, 1996). Still another approach to improving decision-making is the Stepladder Technique (Rogelberg, Barnes-Farrell, & Lowe, 1992). This technique requires each individual in a group first to consider a decision on his or her own. One at a time, members join the group and present their approach. Following each new presentation of ideas the decision is reconsidered until all group members have joined in and presented their views. In a laboratory experiment it was found that groups using the Stepladder Technique produced significantly higher-quality decisions than groups using a conventional technique, had increased critical evaluation of ideas, and showed improved group performance (Davis, 1992). Using computers may also improve decision-making processes. Molloy and Schwenk (1995), for example, found that computer-based programs that can store, process and communicate information in different formats can improve strategic decision-making. Such systems assist in the identification and analysis of multiple alternatives, make the decision-making process more rapid and comprehensive, and lead to more effective decisions.
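Because several of these techniques are essentially step-by-step procedures, the Stepladder Technique is sketched below in schematic form. The function names and arguments are hypothetical placeholders for whatever elicitation and discussion methods a group actually uses; the sketch only makes explicit the sequence described above, under the assumption that the group has at least two members.

```python
# A minimal sketch of the Stepladder Technique (Rogelberg, Barnes-Farrell, & Lowe, 1992).
# 'propose' and 'deliberate' are placeholder callables, not prescribed methods.

def stepladder_decision(members, propose, deliberate):
    """Each member first works alone; members then join one at a time and present
    their views before the group's provisional decision is reconsidered."""
    initial_positions = {m: propose(m) for m in members}     # individual preparation
    core, waiting = list(members[:2]), list(members[2:])     # start with a two-person core
    decision = deliberate(core, [initial_positions[m] for m in core])
    for newcomer in waiting:
        core.append(newcomer)
        # the newcomer's independent position is heard before the provisional decision is revisited
        decision = deliberate(core, [initial_positions[newcomer], decision])
    return decision
```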


Techniques for Addressing the Harmful Effects of Politics in Decision-Making Computer support and the use of decision aids may be helpful in using information, but they fail to address the political and symbolic motivations that undermine the influence of such information. Tetlock (1985) has noted that when there is accountability to known views, people often make decisions that satisfy those views. When there is accountability to unknown views, people are more vigilant, employ more cognitive resources, and utilize available data. Other researchers have found that procedural rationality contributes to decision effectiveness, while covert politicking undermines it (Dean & Sharfman, 1996). Eisenhardt and Bourgeois (1988) also found that the greater the use of politics within the top management team in strategic planning, the worse the performance of the firm. How can decision makers increase the use of procedural rationality in endeavours, such as strategic planning, that are inherently political? Thomas and Trevino (1993) point out that different communication approaches are required for managing decisions where there is a lack of information (what they call managing uncertainty) versus instances where there are conflicting personal interests or values (managing equivocality). To manage equivocality, managers need to use ‘rich’ information processing, including face-to-face communication, to resolve political and personal roadblocks. ‘Lean’ information-processing mechanisms, such as written documents, are more effective for reducing uncertainty in decision-making. Nutt’s analysis (1998b) of over 300 strategic decisions also supports the idea that certain ‘richer’ techniques are required when opposition to a decision is based on values.

Addressing Behavioural Shortfalls in Decision-Making How decisions are framed also influences decision-making processes. Kahneman and Tversky (1979) have shown that people will be more risk seeking if they perceive themselves to be losing rather than winning. For instance, on average, people who lost a previous bet will be more likely to agree to a ‘double or nothing’ wager than people who won a previous bet. In a study of six policy decisions, Gregory, Lichtenstein, and MacGregor (1993) found that decision-makers elicited riskier behaviour when the policy decision was framed as the restoration of a prior loss, compared with when it was framed as a gain from


the status quo. Nutt (1998a) found that claims that were framed as performance shortfalls led to more successful decisions than did other claims framed as opportunities for innovation or the need to adopt practices used by other organizations, or claims that identified unmet stakeholder needs. Nutt (1991) also looked at the effects of framing strategic health care decisions and found that reframing decisions as performance shortfalls is effective in both crisis and non-crisis decisions. To limit the deleterious effects of cognitive biases and reduce overly risky behaviour, Whyte (2000) suggested that it is critical to ensure that decisions are framed positively, as opportunities, rather than negatively, as threats.2
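A small numerical illustration may make the framing effect concrete. It uses the prospect-theory value function with a diminishing-sensitivity exponent of 0.88 and a loss-aversion coefficient of 2.25; these parameter values are commonly cited estimates from later work rather than figures reported in the studies discussed here, and probability weighting is omitted for simplicity.

```python
# Illustrative prospect-theory calculation (after Kahneman & Tversky, 1979).
# The parameters 0.88 and 2.25 are assumed for illustration; probability weighting is ignored.

ALPHA, LAMBDA = 0.88, 2.25

def value(x: float) -> float:
    """Subjective value of an outcome relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

# Loss frame: having just lost $100, accept the loss or take 'double or nothing'.
sure_loss = value(-100)                               # about -130
gamble_loss = 0.5 * value(-200) + 0.5 * value(0)      # about -119: the gamble is preferred (risk seeking)

# Gain frame: having just won $100, keep it or gamble for $200 or nothing.
sure_gain = value(100)                                # about 58
gamble_gain = 0.5 * value(200) + 0.5 * value(0)       # about 53: the sure gain is preferred (risk averse)

print(sure_loss, gamble_loss, sure_gain, gamble_gain)
```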

Leadership and Group Composition Research on leadership style, group composition, and power has also identified situations and approaches that may have a positive impact on decision processes and outcomes. Flowers (1977) found that open leadership (versus directive or closed leadership styles) generates more decision alternatives, while Leana (1985) found that groups with directive leaders discuss fewer alternatives and fewer solutions than do groups with participative leaders. A study of management teams by Korsgaard, Schweiger, and Sapienza (1993) also demonstrates that groups in which leaders listen to group members and incorporate their ideas into final decisions can achieve significantly higher decision quality. Certainly, there are times when a more directive style of leadership may be preferred or more effective (Eisenstat & Cohen, 1990; Vroom & Yetton, 1973; House & Baetz, 1979); in general, however, participative leadership seems to be more effective in incorporating and appraising information. The composition of groups also influences their decision-making processes. Janis (1982) argues that groups that include members with varying views are more likely to assess the pros and cons of several decision alternatives. Similarly, Whyte (2000) argues that homogeneous groups are likely to explore fewer alternatives, although they make decisions faster and experience fewer conflicts. Gender composition also influences the performance of small decision-making groups; one study found that groups with a lone female performed best (Rogelberg & Rumery, 1996). Finally, Eisenhardt and Bourgeois (1988) suggest that organizations that have decentralized power structures are less susceptible to politics, although extremely decentralized organizations, such as universities, have been found to suffer from the same type of politics as highly centralized, autocratic organizations (Pfeffer & Salancik, 1974). Many of these lessons on improving decision-making have emerged from extensive research in the group and organizational literatures rather than from studies of health care organizations alone. As we have already noted, health care organizations have ambiguous, often conflicting goals. They also have a highly professionalized work force, multiple internal and external stakeholder groups, and dual lines of authority. Meyer (1984) highlighted some of the tensions that can arise between medical staff and management in capital expansion decisions, and he demonstrated the importance of conciliatory, face-saving strategies, thereby validating the use of rich communication strategies for resolving value conflicts in decision-making. Rodrigues and Hickson (1995) found that organizations with highly professionalized workforces, such as the United Kingdom’s National Health Service, were more successful when they used participative management to make decisions. Overall, organizational literature suggests that when there are few value conflicts among decision-makers, the vigilant search for and appraisal of information hold promise as techniques for improving organizational decision-making. However, it is clear that information cannot be successfully used to resolve ‘value’ disagreements. In such situations, other, more social forms of communication and uses of information are required. These strategies should hold in health care and non-health care situations alike, although the nature of health care organizations suggests an increased need for methods to resolve value conflicts. Even where techniques that influence decision-making are available, such improvements are often difficult to implement. Factors such as group heterogeneity, gender composition, and, particularly, leadership style can be next to impossible to change in many organizations. Changes in decision-making processes may also be difficult to introduce. Several studies suggest that managers are constrained by their mental models and tend to pursue what they know rather than explore new strategies (Levitt & March, 1988; Miller, 1993). This leads to myopic local thinking and overvaluing of past success (Levinthal & March, 1993; Zajac & Bazerman, 1991) and to engaging in largely repetitive behaviour (Amburgey & Miner, 1992). In such situations, initial conditions, past history, and past success all work to create serious challenges for changing the way decision-making occurs. The development of techniques to improve decision-making may offer opportunities to improve current


practices, but there is little evidence that these techniques are widely adopted. Naturalistic Decision-Making We earlier suggested two responses to the inadequacy of theories of rational decision-making: (1) amending theories to account for political and symbolic uses of information, and (2) improving practice by alerting decision-makers to the impact of framing their decisions and offering decision aids that facilitate the identification and analysis of multiple decision alternatives. A third, more radical response to the inconsistent use of rational choice approaches requires the highlighting of other intuitive, more naturalistic approaches that best characterize the decision-making style of certain groups. This perspective has its roots in the work of Simon (1978) and others, who pointed out that limitations in human memory as well as organizational resource constraints promote satisficing behaviour (e.g., the search for good solutions, rather than investing the time and energy needed to select the best potential approach). As earlier noted, in Simon’s view decision-makers operate within bounded rationality because a complete search for relevant information and full analysis is exhausting and difficult. Thus, when good choices are available that meet the needs of individuals and organizations, decisions are often made on the basis of incomplete information. Simon (1978) has also pointed out that real-world problems are often loosely coupled, allowing decision-makers to attend to them sequentially and to apply solutions devised in one context to problems seen in another. These and related ideas on bounded rationality have been viewed largely as limitations on the applicability of rational choice models, not as refutations of this approach. Recently, however, a number of researchers have put forward an alternative view of decision-making that challenges the assumptions of rational decision-making. Loosely linked under the label of naturalistic decision-making (NDM), these researchers have studied how people use their experience to make decisions in field settings (Klein 1997, 1998; Lipshitz, Klein, Orasanu, & Salas, 2001). Unlike much of behavioural decision-making research, in which carefully controlled laboratory experiments are used, NDM researchers rely heavily on observation in the real world. Time pressures, uncertainty, ill-defined or shifting goals, and other complexities that influence decision-making are primary themes in this research (Lipshitz et al., 2001). Another important theme is a focus on studying the reasoning processes of experts, including firefighters, nurses, military teams, and other high-stakes work groups. Studies by Klein (1998) and other NDM researchers have challenged the prevailing view that high-quality decisions require the comparison of multiple options and the selection of one that best meets the needs of a situation. In their study of fire-ground commanders, for example, Klein and associates found that when faced with a fire, commanders were not comparing options. Instead, they were carrying out the first course of action they identified. To understand why and how skilled decision-makers could reliably identify good options and carry out the first course of action (when multiple choices are possible), they interviewed the firefighters to discover their reasoning processes. They found that the fire-ground commanders did not refuse to compare options; they simply did not have to do so. Even when they faced complex situations, the commanders could read the nature of the situation, perceive it as a typical case, and select a course of action. The cognitive processes in such cases rely on pattern-matching and mental simulation, not on an evaluation of alternatives. Expertise is critical to the development of these skills. As Klein notes, fire-ground commanders realized that they could misperceive a situation and choose the wrong approach, but they relied on their expectations as a check against such errors. If they read a situation correctly, the expectations should match the events. If not, they would notice anomalies that would alert them to change their approach. Such a strategy would not work for beginners who lacked the experience to know what to expect, but it was frequently used by experts who had a range of experiences that informed their choices. The processes of reasoning for such tasks have been labelled ‘recognition-primed decision-making.’ Other research teams have confirmed similar decision-making processes for actors in different environments, including navy commanders, design engineers, intensive care nurses (Crandall & Getchell-Reiter, 1993), and commercial aviation pilots (Kaempf & Orasanu, 1997). In a study of intensive care nurses, for example, Crandall and Getchell-Reiter found that experienced nurses in a neonatal intensive care unit (NICU) could identify babies who were septic and required antibiotics even before laboratory tests confirmed the need for such an intervention. By interviewing the nurses, the researchers identified a number of cues that caught their attention in these cases. The cues varied from case to case, and the nurses relied on multiple cues in making their assessments.
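The contrast between recognition-primed and choice-based reasoning can be rendered schematically. The functions named below are placeholders for the cognitive processes described by Klein and colleagues, not components of a specified algorithm; the sketch is offered only to sharpen the contrast.

```python
# Schematic contrast between comparative choice and recognition-primed decision-making.
# All functions are placeholder callables standing in for cognitive processes.

def comparative_choice(situation, generate_options, evaluate):
    """Generate many options, score them all, and pick the best."""
    options = generate_options(situation)
    return max(options, key=lambda option: evaluate(situation, option))

def recognition_primed(situation, recognize_pattern, typical_action, simulate, adapt):
    """Recognize the situation as a typical case, mentally simulate the first workable
    course of action, and adapt it if expectations are violated, rather than compare options."""
    pattern = recognize_pattern(situation)        # expertise as 'learning to perceive'
    action = typical_action(pattern)
    while not simulate(situation, action):        # expectations serve as a check on misreading
        action = adapt(action)
    return action
```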


When the NICU nurses were asked how they knew that the babies were septic, they often could not make their reasoning processes explicit. In some cases, they claimed it was simply intuition. Such ‘intuition’ derives from the ability of experts to recognize situations and know how to handle them. The recognition-primed decision model provides a way to explain how experts can generate and adopt a course of action without having to consider and assess multiple options. The idea that individuals who are highly proficient or expert in their fields are able to make better decisions appears to make intuitive sense. What this research also suggests is that experts make decisions in different ways than those with less knowledge and experience do. Here, too, NDM researchers provide a different perspective than that of many earlier researchers, who, in order to understand the processes of decision-making, have relied on simulations with novices – often college students – carrying out tasks that are unfamiliar to them. These observations about the role of expert perceptions and decisions are consistent with the work of Hubert and Stuart Dreyfus (Dreyfus & Dreyfus 1986; Dreyfus, 1997, 2001), who have studied how people progress from novice to expert. They identify the practice of novices as success by learning and following rules. Competence is achieved when individuals learn which aspects of a situation must be treated as important and which ones can be ignored. The competent performer thus ‘seeks new rules and reasoning procedures to decide on a plan or perspective’ (Dreyfus, 1997, p. 28). By contrast, experts succeed by knowing when not to follow rules. Expert performance is not only better than that of beginners or competent learners, it follows different paths of reasoning, where rules are not central to decisions. Hubert Dreyfus (2001) even suggests that experts cannot refer to rules, because doing so would reduce their performance from expertise to competence. Even experts themselves often find it difficult to elucidate how they arrived at decisions. Klein (1998, p. 168) describes expertise as ‘learning to perceive.’ The quality of decisions performed by experts is high even when under stress. Studies of expert chess players, for example, confirm that grand master chess players perform almost as well under extreme time pressure (‘blitz’ chess) as they do in regular games (Calderwood, Klein, & Crandall, 1988). The view of NDM researchers that experts often do not determine their actions by evaluating alternatives and selecting among them, raises the issue of whether such cognitive processes are more akin to skill acquisition (recognizing a particular type of situation and the appropriate response) than to deliberate decision-making processes (Kerstholt


& Ayton, 2001). Other cognitive processes, including problem detection, problem-solving, situation awareness, and planning also need to be explored (Lipshitz et al., 2001). Naturalistic decision-making research thus calls into question not only the critical nature of the processes that surround decision-making, but also the range of behaviours and contextual factors that must be studied to understand how courses of action are determined. NDM research has attracted some criticism from other decision researchers who have characterized it for being atheoretical, incomplete or simply a repackaging of ideas that have been around for some time (Yates, 2001; Whyte, 2001). Two issues in particular pose challenges to the use of NDM models in understanding decisions in health care. First, does the recognition-primed decision model apply to teams, not simply to individuals? Second, if it does apply to teams, do the recognition-primed decision model and other aspects of NDM apply to top management team decisions, not just to operational and front-line manager decisions? The work of NDM researchers discussed so far has been focused on understanding the reasoning processes of individuals in complex, highpressure, field situations. But there is also a substantial body of similar work that is focused on interactions between members of intact teams, including fire crews (Klein, 1998; Lipshitz et al., 2001), pilots (Abrahamson, 1996), health care personnel (Bogner, 1997), and flight deck crews on aircraft carriers (Weick & Roberts, 1993). This focus on intact teams again contrasts with much of small-group research, which relies on observations of groups created for the purposes of experimentation (McGrath 1991). Orasanu and Salas (1993) identify two theoretical frameworks for team decision-making that have emerged from their research: the concept of ‘shared mental models’ (Cannon-Bowers, Salas, & Converse, 1990) and ‘team mind’ (Klein, 1998). Shared mental models refers to structured knowledge-sharing by team members – for example, airplane crews – which includes knowledge of basic principles of flight and an understanding of the norms of behaviour and roles of each team member. Salas, Cannon-Bowers and Johnson (1997) argue that training team members in teamwork skills, such as situation awareness, communications, adaptability, and team leadership, along with creating shared mental models of the task and crew member roles is essential for developing expert teams that can achieve high levels of task performance. High-performing teams learn to develop common perceptual filters for


interpreting events and patterns of learning necessary for developing new behaviours and abandoning inefficient or ineffective ones (Klein, 1998). The concept of effective team learning has also been explored by Edmondson (1999; Edmonson, Bohmer, & Pisano, 2000), who found that psychological safety and learning behaviour were critical to team performance. If NDM research can apply to team as well as to individual decisionmaking, the question remains whether findings from work teams such as flight crews and operating room teams (Bogner, 1997) can be generalized to top management teams. Other researchers have noted that the factors that determine effectiveness of teams may vary according to the type of team involved (Cohen & Bailey, 1997). Moreover, top management teams comprise individuals who have important roles outside the teams and often have competing power bases. How does this approach apply to highly political, non-routine situations such as strategic planning? No studies of top management teams are included in recent compendiums of NDM work (Salas & Klein, 2001; Flin, Salas, Strub, & Martin, 1997; Klein, Orasanu, Calderwood, & Zsambok, 1993; Zsambok & Klein, 1997) and overviews of NDM research (Klein, 1997, 1998; Lipshitz et al., 2001). It is therefore unclear whether top management team decision-making processes are amenable to NDM in the way that decisions by front-line workers and managers are.3 Despite the paucity of studies, however, the application of NDM to top management teams may be particularly relevant given that, like those of the groups studied in the NDM literature, top management team decisions often involve high-stakes interactions. In some research it is also suggested that top management teams rely to an extent on intuition or subjective factors in making decisions. In our earlier discussion of the limits of rational decision-making we noted Nutt’s (1998b) finding that 40 per cent of decisions involved some kind of subjective assessment. Khatri and Ng (2000), in a study of strategic decision-making, found that ‘intuitive synthesis’ was present in many strategic planning decisions; use of intuitive synthesis was positively related to organizational performance in unstable environments. Other researchers, such as Agor (1990), also note that intuition plays a critical role in decisions, bringing past experiences and learning to bear in the examination of new situations. Part of the conflict between traditional decision researchers and those who are pursuing topics framed through NDM stems from a lack of understanding about how naturalistic decision-making complements or


conflicts with behavioural decision-making research. In fact, NDM researchers (Lipshitz et al., 2001) and behavioural decision researchers (Kerstholt & Ayton, 2001) agree that cognitive processes ought to be more fully examined if we are to properly understand how to promote more effective decision outcomes. What are the processes of problem detection, situation awareness, and problem-solving that contribute to effective decision-making? To what extent do tacit knowledge and shared mental models contribute to effective decision-making? These issues are largely unexamined, although recent research has highlighted their importance in a variety of domains. Conclusions The evidence-based decision-making movement is the most recent of a long line of attempts to promote the use of more rational procedures for organizational decision-making. Although attempts are intuitively laudable, the discourse surrounding them is often naïve. In this chapter we have examined the challenges to those who want to create a direct comparison between the introduction of evidence-based medicine in clinical practice and the development of evidence-based management in health care organizations. First, we showed how rational procedures necessarily interact with other forms of decision-making as human beings with different roles, interests, knowledge, and values participate in the production of decisions. Decision-making at strategic levels is, and will probably remain, fundamentally political and judgmental. This is particularly true in the health care arena, where ambiguous goals, divergent values and interests, and diffuse power structures both stimulate the search for information and simultaneously limit its capacity to determine, on its own, the content of decisions. Second, we reviewed a number of strategies for improving organizational decision-making that have been studied in the literature either in the laboratory or in real organizations. Some strategies achieve improvement by promoting vigilant information processing; some are aimed at countering destructive political games; while others attempt to counter cognitive biases that distort understanding. Many of these techniques offer promise, but the challenges of implementing them should not be underestimated. These techniques and strategies must be introduced into a context where ordinary human judgment and social interaction are the default modes of decision-making.


Finally, in the last section we drew on a recent stream of research on naturalistic decision-making to pose a fundamental question: could it be that the evidence-based decision-making movement is barking up the wrong tree entirely? NDM reinstates to some extent the value of human judgment by suggesting that, in situations of everyday work under intense pressure, experienced experts mobilize modes of thinking that often surpass rational procedures in their effectiveness. If the NDM researchers are correct about the role of intuition and experience in decision-making, then top management teams need to spend more time examining their mental models and the underlying assumptions that inhibit shared understanding.4 We believe there is a need to reach beyond simplistic notions of rationality that assume that analytical information is politically neutral and universally superior. At the same time, researchers and decisionmakers should welcome the challenge to improve both the content of decisions and the processes through which they are made. For example, we need a better understanding of how the strengths of tacit human knowledge (as in NDM) and of explicit information and evidence (as favoured by the evidence-based decision-making movement) can be combined. We also need an enhanced understanding of how politics and social interaction can best be mobilized to become constructive, rather than destructive, forces in developing health care policies and decisions that respect the legitimate concerns of stakeholders and, especially, those of the people we aim to serve.

NOTES 1 Successful decisions are defined here as decisions that are adopted in a timely manner and have value (Nutt, 1991, 1998a). 2 This suggestion conflicts, however, with Nutt (1998a), who supports the framing of decisions as the restoration of a past loss, because decisions framed in this way are more likely to promote action by decision-makers. This divergence of opinion raises an interesting question for how best to deal with some of the cognitive biases inherent in the decision-making process: is it better to frame decisions in a way that causes decision-makers to be more proactive even if this may prompt unnecessarily risky action? 3 The absence of research on the tasks of top management teams is not a failing of NDM alone; for example, in a review of team studies over six years in the 1990s Cohen and Bailey (1997) found only a handful of

studies in which the work processes of top management teams were examined. 4 Argyris (1991) argues that examining such assumptions is critical to more effective learning and decision-making by teams.

REFERENCES Abrahamson, E. (1996). Management fashion. Academy of Management Review, 21(1), 254–285. Agor, W.H. (1990). The logic of intuition: How top executives make important decisions. In W.H. Agor (Ed.), Intuition in organizations (pp. 151–70). Newbury Park, CA: Sage. Alexsson, R. (1998). Towards an evidence-based health care management. International Journal of Health Planning and Management, 13, 307–317. Allison, G.T. (1971). Essence of decision: Explaining the Cuban missile crisis. New York: Little Brown. Amason, A.C. (1996). Distinguishing the effects of functional and dysfunctional conflict on strategic decision-making: Resolving a paradox for top management teams. Academy of Management Journal, 39, 123–148. Amburgey, T.L., & Miner, A.S. (1992). Strategic momentum: The effects of repetitive, positional, and contextual momentum on merger activity. Strategic Management Journal, 13, 335–348. Argyris, C. (1991). Teaching smart people how to learn. Harvard Business Review, 69(3), 99–109. Bogner, M.S. (1997). Naturalistic decision making in health care. In C.E. Zsambok & G.A. Klein (Eds.), Naturalistic decision making (pp. 61–71). Mahwah, NJ: Erlbaum. Bower, J.L. (1970). Managing the resource allocation process: a study of corporate planning and investment. Cambridge, MA: Division of Research, Graduate School of Business Administration, Harvard University. Calderwood, R., Klein, G.A., & Crandall, B.W. (1988). Time pressure, skill and move quality in chess. American Journal of Psychology, 101, 481–493. Cannon-Bowers, J.A., Salas, E., & Converse, S. (1990). Cognitive psychology and team training: Training shared mental models of complex systems. Human Factors Society Bulletin, 33(12), 1–4. Cohen, S.G., & Bailey, D.E. (1997). What makes teams work: Group effectiveness research from the shop floor to the executive suite. Journal of Management, 23(3), 239–290.

Crandall, B., & Getchell-Reiter, K. (1993). Critical decision method: A technique for eliciting concrete assessment indicators from the ‘intuition’ of NICU nurses. Advances in Nursing Sciences, 16(1), 42–51. Dalton, M. (1959). Men who manage. New York: Wiley. Davis, J.H. (1992). Some compelling intuitions about group consensus decisions, theoretical and empirical research, and interpersonal aggregation phenomenon: Selected examples, 1950–1990. Organizational Behaviour and Human Decision Processes, 52, 3–38. Dean, J.W., Jr., & Sharfman, M.P. (1996). Does decision making matter? A study of strategic decision making effectiveness. Academy of Management Journal, 39(2), 368–396. Denis, J.L., Lamothe, L., & Langley, A. (1999). The struggle to implement teaching-hospital mergers. Canadian Public Administration / Administration Publique du Canada, 42(3), 285–311. Dreyfus, H.L. (1997). Intuitive, deliberative and calculative models of expert performance. In C.E. Zsambok & G.A. Klein (Eds.), Naturalistic decision making (pp. 17–28). Mahwah, NJ: Erlbaum. Dreyfus, H.L. (2001). On the Internet. London: Routledge. Dreyfus, H.L., & Dreyfus, S.E. (1986). Mind over machine: The power of human intuitive expertise in the era of the computer. New York: Free Press. Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. Edmondson, A., Bohmer, R., & Pisano, G.P. (2000). Learning new technical and interpersonal routines in operating room teams: The case of minimally invasive cardiac surgery. Research on Managing Groups and Teams, 3, 29–51. Eisenhardt, K.M. (1989). Making fast strategic decisions in high velocity environments. Academy of Management Journal, 32(3), 543–576. Eisenhardt, K.M., & Bourgeois III, L.J. (1988). Politics of strategic decision making in high-velocity environments: Toward a midrange theory. Academy of Management Journal, 31(4), 737–770. Eisenhardt, K.M., & Zbaracki, M.J. (1992). Strategic decision-making. Strategic Management Journal, 13 (Special Issue), 17–37. Eisenstat, R.A., & Cohen, S.G. (1990). Summary: Top management groups. In J.R. Hackman (Ed.), Groups that work (and those that don’t) (pp. 78–86). San Francisco: Jossey-Bass. Feldman, M.S., & March, J.G. (1981). Information in organizations as signal and symbol. Administrative Science Quarterly, 26(2), 171. Flin, R., Salas, E., Strub, M., & Martin, L. (1997). Decision making under stress. Aldershot, UK: Ashgate.

Flowers, M.L. (1977). A laboratory test of some implications of Janis’s groupthink hypothesis. Journal of Personality and Social Psychology, 33, 888–895. Fredrickson, J., & Mitchell, T. (1984). Strategic decision processes: Comprehensiveness and performance in an industry with an unstable environment. Academy of Management Journal, 27, 399–423. Gregory, R., Lichtenstein, S., & MacGregor, D.G. (1993). The role of past states in determining reference points for policy decisions. Organizational Behavior and Human Decision Processes, 55, 195–206. Hewison, A. (1997). Evidence-based medicine: What about evidence-based management? Journal of Nursing Management, 5, 195–198. House, R.J., & Baetz, M.L. (1979). Leadership: Some empirical generalizations and new research directions. In L.L. Cummings & B.M. Staw (Eds.), Research in organizational behaviour (Vol. 1, pp. 341–423). Greenwich, CT: JAI Press. Jackson, S.E., & Dutton, J.E. (1988). Discerning threats and opportunities. Administrative Science Quarterly, 33(3), 370–387. Janis, I.L. (1982). Groupthink (2nd ed.). Boston: Houghton Mifflin. Jehn, K.A. (1995). A multimethod examination of the benefits and detriments of intragroup conflict. Administrative Science Quarterly, 40, 256–282. Kaempf, G.L., & Orasanu, J. (1997). Current and future applications of naturalistic decision making in aviation. In C.E. Zsambok & G.A. Klein (Eds.), Naturalistic decision-making (pp. 81–90). Mahwah, NJ: Erlbaum. Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291. Kerstholt, J., & Ayton, P. (2001). Should NDM change our understanding of decision-making? Journal of Behavioral Decision Making, 14(5), 370–371. Khatri, N., & Ng, H.A. (2000). The role of intuition in strategic decision-making. Human Relations, 53(1), 57–86. Klein, G.A. (1997). The current status of the naturalistic decision-making framework. In L. Martin (Ed.), Decision making under stress: Emerging themes and applications (pp. 13–28). Aldershot, UK: Ashgate. Klein, G.A. (1998). Sources of power: How people make decisions. Cambridge, MA: MIT Press. Klein, G.A., Orasanu, J., Calderwood, R., & Zsambok, C.E. (Eds). (1993). Decision making in action: Models and methods. Norwood, NJ: Ablex. Korsgaard, M.A., Schweiger, D.M., & Sapienza, H.J. (1993). Building commitment, attachment and trust in strategic decision-making teams: The role of procedural justice. Academy of Management Journal, 38, 60–84. Kovner, A.R., Elton, J.J., & Billings, J. (2000). Evidence-based management. Frontiers of Health Services Management, 16(4), 3–24.

Langley, A. (1989). In search of rationality: The purposes behind the use of formal analysis in organizations. Administrative Science Quarterly, 34(4), 598–631. Langley, A. (1990). Patterns in the use of formal analysis in strategic decisions. Organization Studies, 11(1), 17–45. Langley, A. (1995). Between ‘paralysis by analysis’ and ‘extinction by instinct.’ Sloan Management Review, 36(3), 63–76. Langley, A., Mintzberg, H., Pitcher, P., Posada, E., & Saint-Macary, J. (1995). Opening up decision-making: The view from the black stool. Organization Science, 6(3), 260–279. Leana, C.R. (1985). A partial test of Janis’ groupthink model: Effects of group cohesiveness and leader behaviour on defective decision-making. Journal of Management, 11, 5–17. Levinthal, D.A., & March, J.G. (1993). The myopia of learning. Strategic Management Journal, 14 (Special issue), 95–112. Levitt, B., & March, G. (1988). Organizational learning. Annual Review of Sociology, 14, 319–340. Lindblom, C.E. (1959). The science of ‘muddling through.’ Public Administration Review, 19(2), 79–88. Lindblom, C.E., & Cohen, D.K. (1979). Usable knowledge: Social science and social problem solving. New Haven, CT: Yale University Press. Lipshitz, R., Klein, G.A., Orasanu, J. & Salas, E. (2001). Taking stock of naturalistic decision making. Journal of Behavioral Decision Making, 14, 331–352. Lipshitz, R., Klein, G.A., Orasanu, J. & Salas, E. (2001). Rejoinder: A welcome dialogue – and the need to continue. Journal of Behavioral Decision Making, 14, 385–389. Lozeau, D., Langley, A., & Denis, J.-L. (2002). The corruption of managerial techniques by organizations. Human Relations, 55(5), 537–564. McGrath, J.E. (1991). Time, interaction and performance: A theory of groups. Small Group Research, 22, 147–174. Meltsner, A.J. (1976). Policy analysts in the bureaucracy. Berkeley: University of California Press. Meyer, A.D. (1984). Mingling decision making metaphors. Academy of Management Review, 9(1), 6–17. Miller, D. (1993). The architecture of simplicity. Academy of Management Review, 18(1), 116–138. Miller, S.J., Hickson, D.J., & Wilson, D.C. (1996). Decision-making in organizations. In S. Clegg, C. Hardy, & W. R. Nord (Eds.), Handbook of organization studies (pp. 293–312). Thousand Oaks, CA: Sage.

Mintzberg, H. (1973). The nature of managerial work. New York: Harper and Row. Mintzberg, H. (1989). Mintzberg on management: inside our strange world of organizations. New York: Free Press. Mintzberg, H., Raisinghani, D., & Théorêt, A. (1976). The structure of unstructured decision processes. Administrative Science Quarterly, 21(2), 246–275. Molloy, S., & Schwenk, C.R. (1995). The effects of information technology on strategic decision making. Journal of Management Studies, 32(3), 283–311. Nutt, P.C. (1984). Types of organizational decision processes. Administrative Science Quarterly, 29(3), 414–450. Nutt, P.C. (1989). Making tough decisions: Tactics for improving managerial decision making. San Francisco: Jossey-Bass. Nutt, P.C. (1991). How top managers in health organizations set directions that guide decision-making. Hospital and Health Services Administration, 36(1), 57–75. Nutt, P.C. (1998a). Framing strategic decisions. Organization Science, 9(2), 195–216. Nutt, P.C. (1998b). How decision makers evaluate alternatives and the influence of complexity. Management Science, 44(8), 1148–1166. Nutt, P.C. (1999a). Public-private differences and the assessment of alternatives for decision-making. Journal of Public Administration Research and Theory, 9(2), 305–319. Nutt, P.C. (1999b). Surprising but true: Half the decisions in organizations fail. Academy of Management Executive, 13(4), 75–90. Nutt, P.C. (2000). Decision-making success in public, private and third sector organizations: Finding sector dependent best practice. Journal of Management Studies, 37(1), 77–108. Nutt, P.C. (2002). Making strategic choices. Journal of Management Studies, 39(1), 67–96. Orasanu, J., & Salas, E. (1993). Team decision making in complex environments. In G.A. Klein, J. Orasanu, R. Calderwood, & C.E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 327–345). Norwood, NJ: Ablex. Pfeffer, J. (1981). Power in organizations. Boston: Pitman. Pfeffer, J. (1992). Managing with power: Politics and influence in organizations. Cambridge, MA: Harvard Business School Press. Pfeffer, J., & Salancik, G.R. (1974). Organizational decision making as a political process: The case of a university budget. Administrative Science Quarterly, 19, 135–151.

Pfeffer, J., & Salancik, G.R. (1978). The external control of organizations: A resource dependence perspective. New York: Harper and Row. Quinn, J.B. (1980). Strategies for change: Logical incrementalism. Georgetown, ON: Irwin-Dorsey. Rodrigues, S.B., & Hickson, D. (1995). Success in decision making: Different organisations, differing reasons for success. Journal of Management Studies, 32(5), 655–678. Rogelberg, S.G., & Rumery, S.M. (1996). Gender diversity, team decision quality, time on task, and interpersonal cohesion. Small Group Research, 27(1), 79–90. Rogelberg, S.G., Barnes-Farrell, J.L., & Lowe, C.A. (1992). The stepladder technique: An alternate group structure facilitating effective group decision-making. Journal of Applied Psychology, 77, 730–737. Salas, E., & Klein, G.A. (Eds.). (2001). Linking expertise and naturalistic decision making. Mahwah, NJ: Erlbaum. Salas, E., Cannon-Bowers, J.A., & Johnson, J.H. (1997). How can you turn a team of experts into an expert team? Emerging training strategies. In C.E. Zsambok & G.A. Klein (Eds.), Naturalistic decision-making (pp. 359–370). Mahwah, NJ: Erlbaum. Schweiger, D.M., Sandberg, W.R., & Ragan, J.W. (1986). Group approaches for improving strategic decision-making: A comparative analysis of dialectical inquiry, devil’s advocacy, and consensus. Academy of Management Journal, 29, 51–71. Schweiger, D.M., Sandberg, W.R., & Rechner, P.L. (1989). Experiential effects of dialectical inquiry, devil’s advocacy, and consensus approaches to strategic decision-making. Academy of Management Journal, 32(4), 745–772. Schwenk, C.R. (1990). Effects of devil’s advocacy and dialectical inquiry on decision making: A meta-analysis. Organizational Behavior and Human Decision Processes, 47(1), 161–177. Schwenk, C.R., & Valacich, J.S. (1994). Effects of devil’s advocacy and dialectical inquiry on individuals versus groups. Organizational Behavior and Human Decision Processes, 59(2), 210–222. Simon, H.A. (1945). Administrative behavior. London: Macmillan. Simon, H.A. (1960). The new science of management decision. Englewood Cliffs, NJ: Prentice-Hall. Simon, H.A. (1978). Rationality as process and as product of thought. American Economic Review, 68, 1–16. Stewart, R. (1967). Managers and their jobs: a study of the similarities and differences in the ways managers spend their time. London: Macmillan.

Stewart, R. (1975). The reality of management. London: Heinemann. Tetlock, P.E. (1985). Accountability: The neglected social context of judgment and choice. In L.L. Cummings & B.M. Staw (Eds.), Research in organizational behaviour (Vol. 7, pp. 297–332). Greenwich, CT: JAI Press. Thomas, J.B., & Trevino, L.K. (1993). Information-processing in strategic alliance building – a multiple-case approach. Journal of Management Studies, 30(5), 779–814. Thompson, J.D., & Tuden, A. (1959). Strategies, structures and processes of organizational decision. In J.D. Thompson (Ed.), Comparative studies in administration (pp. 195–216). Pittsburgh: University of Pittsburgh Press. Vroom, V.H., & Yetton, P.W. (1973). Leadership and decision making. Pittsburgh: University of Pittsburgh Press. Walshe, K., & Rundall, T.G. (2001). Evidence-based management: From theory to practice in health care. Milbank Quarterly, 79(3), 429–457. Weick, K., & Roberts, K. (1993). Collective mind in organizations: Heedful interrelating on flight decks. Administrative Science Quarterly, 38, 357–381. Whyte, G. (2000). Make good decisions by effectively managing the decision-making process. In E. Locke (Ed.), Handbook of principles of organizational behavior (pp. 316–330). Oxford: Blackwell. Whyte, G. (2001). Perspectives on naturalistic decision-making from organizational behavior. Journal of Behavioral Decision Making, 14, 353–384. Yates, J.F. (2001). ‘Outsider’: Impressions of naturalistic decision making. In G.A. Klein & E. Salas (Eds.), Linking expertise and naturalistic decision making (pp. 9–34). Mahwah, NJ: Erlbaum. Zajac, E.J., & Bazerman, M.H. (1991). Blind spots in industry and competitor analysis. Academy of Management Review, 16, 37–56. Zsambok, C.E., & Klein, G.A. (Eds.). (1997). Naturalistic decision making. Mahwah, NJ: Erlbaum.


5 An Innovation Diffusion Perspective on Knowledge and Evidence in Health Care

LOUISE LEMIEUX-CHARLES AND JAN BARNSLEY

Introduction

The spread of innovations is considered to be an integral part of social and cultural change (Kroeber, 1923; Tarde, 1903; Wissler, 1923). With roots in anthropology and sociology, innovation theory originally was focused on the diffusion, or rate of spread, of an innovation among a discrete population (Rogers, 1995; Ryan & Gross, 1943). Innovation diffusion research within the health care sector began in the 1950s, predominantly in the fields of public health and medical sociology, and has generated hundreds of studies in which the diffusion of health care practices and technologies is described. A 1995 review of diffusion research identified 277 health-related studies, which represent about 7 per cent of all diffusion studies (Rogers, 1995).

Our chief purpose in this chapter is to examine key components of innovation diffusion theory in relation to the pattern and speed of adoption of innovations in health care. Our aim is to determine whether diffusion theory can help to explain the extent to which the availability of research evidence is associated with the successful or unsuccessful transfer (diffusion and adoption) of an innovation. We begin with the definition of diffusion used in this chapter. Next, we define innovation and discuss the theoretical and empirical association between innovation attributes, including strength of evidence, and diffusion. The focus then shifts to the adopters of innovations, both individuals and organizations, and their characteristics, including the selection of communication channels and their location within social systems/networks. Finally, we identify aspects of innovation diffusion theory that might facilitate adoption of evidence-based practices by individual health professionals or health care organizations.

Diffusion: Definition and Attributes

The acceptance of an innovation by individuals or organizations is treated in some research as a discrete event, usually the decision about whether or not to adopt a particular innovation (Corwin, 1972; Kimberly & Evanisko, 1981; Moch & Morse, 1977; Rapoport, 1978). This is particularly evident in those studies of technology diffusion where rate of adoption and relative time of adoption are the outcomes of interest (Kaluzny et al., 1989), as opposed to extent of adoption (Downs & Mohr, 1979). Downs and Mohr (1976, p. 710) speculated that the determinants of the time of adoption might differ from the determinants of the extent of adoption; they therefore suggest that 'it would be wise to conceive of the two operationalizations as two different behaviors to be explored.'

Extent-of-adoption studies provide a micro-level view, following an innovation through the various stages of the innovation process. Extent of adoption has been described as a passage through a set of stages ordered along the temporal dimension of their most likely sequence (Hage & Aiken, 1970; Kimberly, 1981; Milio, 1971; Rogers, 1995; Yin, 1979; Zaltman, Duncan, & Holbek, 1973). The stages described by researchers differ in name and number, but generally they include (1) the perception of a performance gap; (2) the identification of an innovative response; (3) the adoption of an innovation; (4) the implementation of the innovation by those who must put it into effect; (5) the institutionalization of the innovation within an individual's or organization's normal activities; and (6) the discontinuance of an innovation because of poor performance, internal opposition/resistance, lack of resources, or replacement by a new and potentially superior innovation.

The study of time of adoption requires a macro-level investigation and focuses on the proportion of adopting units (i.e., individuals or organizations) within a population that adopt an innovation at different points in time. We are concerned with the time and rate of adoption, rather than the extent of adoption, and consider that diffusion entails both the spontaneous spread of innovations among adopting units and their more active and purposeful dissemination. Other contributors to this volume deal with extent of adoption in relation to the implementation or use of innovations in specific health care settings.
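
The distinction between extent and rate of adoption can be made concrete with a small worked sketch. The snippet below is our illustration, not drawn from any of the studies cited above: it uses a simple logistic ('internal-influence') model with invented parameter values to trace the S-shaped curve that macro-level diffusion studies typically report, namely the cumulative proportion of adopting units that have adopted by each point in time.

```python
# A minimal sketch of an S-shaped cumulative adoption curve.
# Parameter values are illustrative only; they are not data from
# any study cited in this chapter.
import math

def cumulative_adoption(t, ceiling=1.0, rate=0.8, midpoint=6.0):
    """Proportion of potential adopters who have adopted by time t
    under a simple logistic (internal-influence) diffusion model."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(13):
    share = cumulative_adoption(t)
    print(f"period {t:2d}: {share:6.1%} of units have adopted")
```

Estimating such a curve for an actual innovation would, of course, require observed adoption times for a defined population of adopting units; the point here is only that rate-of-adoption studies describe when and how quickly a population adopts, not how fully any single unit implements the innovation.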
Innovation: Definition and Attributes

For the purpose of this chapter, innovation is defined as any idea, practice, or item that is perceived to be new by an individual, organization, or other unit of adoption (Rogers, 1995; Zaltman et al., 1973). Innovation theory and research initially concentrated on the manner in which the characteristics of adopting individuals influenced their relative time of adoption. These characteristics included the breadth of an adopter's information network (localite vs cosmopolite) and an adopter's status within a specified social system (opinion leader vs follower). In practice, there was little opportunity to investigate the relative importance of innovation attributes, because most of the early studies traced the diffusion of a single innovation. Therefore, the role of attributes was acknowledged only theoretically. In later studies the relationship between innovation attributes and their adoption was investigated in a number of different settings (Ettlie & Vellenga, 1979; Fliegel & Kivlin, 1966), including health care settings (Kaluzny & Veney, 1973; Kimberly & Evanisko, 1981).

Despite extensive work in the area, it is still not well understood why some innovations have consistently higher rates of adoption than others. There is consensus, however, regarding the need to construct a classification system for innovations that will promote the development of defensible generalizations and provide both theoretical and empirical insights into the innovation process (Downs & Mohr, 1976; Fliegel & Kivlin, 1966; Kaluzny & Veney, 1973; Rogers & Shoemaker, 1971). Several classification strategies have been devised, but none has been uniformly accepted. For the most part, these classification systems are based on theoretical abstractions and require further empirical investigation. In addition, most are typologies that place innovations on a continuum described by a single attribute. An alternative approach – classification of innovations according to multiple attributes – encourages more detailed descriptions of innovations on a number of dimensions and allows comparisons among different types of innovations (Zaltman et al., 1973; Moore & Benbasat, 1990; Dobbins, Cockerill, Barnsley, & Ciliska, 2001).

Researchers have constructed theories and associated hypotheses that specifically address innovation attributes and their positive or negative
impact on innovation adoption (Downs & Mohr, 1979; Ettlie & Vellenga, 1979; Rogers, 1995; Scott, 1990; Tornatzky & Klein, 1982). However, this approach has been plagued by inconsistent definitions of innovation attributes across studies. In an attempt to deal with the confusion, Tornatzky and Klein (1982) conducted a meta-analysis of research on innovation attributes. They concluded that, of the ten conceptually independent attributes reviewed in depth, only three – relative advantage, compatibility, and complexity – were consistently related to innovation adoption and implementation. We discuss below these three attributes and others that seem particularly pertinent to health care innovations: cost, triability, observability, risk and uncertainty, and research evidence. We conclude this section with some comments on research evidence as an identifiable innovation attribute. While evidence may be classified as one aspect of relative advantage, given the focus of this book we wish to highlight the role of research evidence in relation to other innovation attributes.

Relative Advantage

Relative advantage is the degree to which innovations are perceived to be better than the ideas superseded (Rogers & Shoemaker, 1971). Empirical studies have supported the theoretical contention of a positive (although not always significant) association between some measures of relative advantage and innovation adoption (Ettlie & Vellenga, 1979; Fliegel & Kivlin, 1966). Tornatzky and Klein (1982) consider relative advantage to be a vaguely defined 'garbage pail' category because of the variety of ways in which it is operationalized in empirical studies (e.g., profitability, social benefits, time saved, and hazards eliminated).

Relative advantage can refer to improved effectiveness or efficiency, or both. The value placed on the type of advantage may vary across different constituencies. For example, we might speculate that clinical effectiveness is more persuasive for clinicians than are promises of improved efficiency. On the one hand, in a Quebec study of the diffusion patterns for complex health care innovations, it was found that laparoscopic cholecystectomy, which produced both improved effectiveness and greater efficiency, diffused rapidly. On the other hand, reusable filters for haemodialysis, which had an economic advantage but no clinical benefit, had slow and minimal diffusion (Denis, Hébert, Langley, Lozeau, & Trottier, 2002). The magnitude of the improvement (called performance radicalness by Zaltman et al., 1973) can also be an aspect of relative advantage. There are no studies, however, in which diffusion in relation to the extent of improvement or advantage is examined. Moreover, magnitude of improvement may not be meaningful without taking into consideration the cost or resources required to implement the innovation.

Cost

Cost includes both initial and continuing costs. They can take the form of financial investment for equipment and other hardware or the time and effort invested to develop the knowledge and skills required to use a new procedure or technology (Zaltman et al., 1973). Denis et al. (2002) note that cost was a consideration when adoption decisions were being made but, as mentioned above, cost was frequently considered in relation to potential advantages. Cost can also be associated with the complexity of an innovation.

Compatibility

Compatibility is the degree to which innovations are perceived as consistent with the existing values, past experiences, and needs of the receivers (Rogers & Shoemaker, 1971; Thio, 1971). Compatibility is similar to what other authors have referred to as magnitude, pervasiveness, or radicalness of change (Beyer & Trice, 1978; Zaltman et al., 1973; Pelz, 1985; Dewar & Dutton, 1986). Innovations may be compatible with sociocultural values and beliefs, previously introduced ideas, or an individual's or organization's need for innovation. Empirical work tends to confirm the positive association between compatibility and innovation adoption posited by theory (Meyer & Goes, 1988; Tornatzky & Klein, 1982). Compatibility can address the notion of an innovation's consistency with organizational ends, means, and experience as described by Normann (1971) and by Kaluzny and Hernandez (1988). Becker (1970a, b) compared a 'high adoption potential' innovation (measles immunization) with a 'low adoption potential' innovation (diabetes screening). Compared with measles immunization, diabetes screening was assessed by a five-member expert panel as representing a greater departure from traditional public health activity and conflicting with important values in the health field; it was also opposed by the county medical society. Given these findings, it was deemed incompatible with traditional norms.


Complexity

Complexity is the degree to which an innovation is perceived as relatively difficult to understand and use and has been hypothesized to be negatively associated with innovation adoption (Rogers & Shoemaker, 1971). This negative relationship has been supported in a number of studies (Ettlie & Vellenga, 1979; Fliegel & Kivlin, 1966; Meyer & Goes, 1988). Complexity operates at two levels (Zaltman et al., 1973). Conceptual complexity refers to the ease or difficulty with which an innovation's underlying science or design can be understood. Technical complexity refers to the technical difficulty of an innovation. For example, the basic science leading to the development of a new chemotherapeutic agent might be complicated, while the actual application of the medication might be relatively simple. Technical complexity might require staff training and education, and a highly complex innovation (e.g., new imaging technology) might require the recruitment of new, specially trained staff. The complexity of an innovation can also be attenuated by its level of compatibility.

Trialability

Trialability is the degree to which an innovation may be implemented on a partial basis; it is positively associated with adoption (Rogers, 1995). Innovations with low compatibility – especially those that require major, and often costly, change – may be more acceptable if they can be tested in the short term. Innovations that require major investment in equipment or training (e.g., a new surgical or screening procedure) would have low trialability and would be expected to diffuse more slowly than those that require minimal investment of resources. The attraction of trialability is the opportunity to reverse the adoption decision and return to the original practice, without penalty, if the innovation is judged to be unacceptable by the adopter.

Observability

Observability is the degree to which the results of an innovation are visible to others. It is positively associated with adoption (Rogers, 1995). Observability concerns the immediacy of results and the ease with which observers can be confident that there is a causal link between innovations and outcomes. For example, preventive care has low observability, and, indeed, prevention/promotion activities seem to diffuse less easily because of the extended time lag between activity and outcome. Furthermore, outcomes are often an absence of something – usually a health problem (Rogers, 1995).

The observability of the embodiment (as opposed to the results) of an innovation can also influence adoption and rate of diffusion. Innovations can be predominantly hardware (material or physical technology) or software (information component), or a combination of the two. Innovations that are largely software and require the transfer of ideas are the least observable and most open to adaptation or redefinition compared with innovations that are mainly hardware. Evidence-based practice itself is a software innovation that promotes the value of research evidence and its application in the selection of appropriate health care services. A high level of observability can reduce the risk and uncertainty associated with a new technology, procedure, or idea.

Complexity, Trialability, and Observability

Grilli and Lomas (1994) investigated the relationship between the attributes of recommendations within practice guidelines and their subsequent adoption by clinicians. A total of 143 recommendations were extracted from 23 guidelines along with reported compliance rates (either proportion of providers acting according to those recommendations or proportion of patients treated according to the recommendations). Grilli and Lomas categorized the recommendations according to complexity, trialability, and observability, factors that Rogers (1995) suggested influence the diffusion of innovations. Recommendations judged to be of high complexity had significantly lower compliance rates (41.9 per cent) compared with those judged to be of low complexity (55.9 per cent, p = 0.05). Recommendations judged to be high on trialability showed a significantly higher compliance rate (55.6 per cent) compared with those judged as low on trialability (36.8 per cent, p = 0.03). High versus low observability of the recommendation did not demonstrate any significant difference in compliance rate (54.6 per cent vs 52.4 per cent). Multivariate analysis demonstrated that these attributes accounted for 23 per cent of the variance in compliance with practice guideline recommendations.

Grilli and Lomas were surprised at the lack of influence of observability on compliance rates. They noted that this attribute is closely related to clinical relevance, and that the absence of clinical relevance in practice guidelines has often been cited as a major explanation for observed lack of compliance.
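
To illustrate the kind of comparison Grilli and Lomas report, the sketch below groups hypothetical guideline recommendations by a perceived attribute and compares mean compliance across groups. The attribute labels and compliance figures are invented for illustration; they are not the authors' data, and the original analysis was multivariate rather than a simple comparison of group means.

```python
# Hypothetical illustration: comparing reported compliance rates across levels
# of a perceived innovation attribute (here, complexity). All figures invented.
from statistics import mean

recommendations = [
    {"attribute": "high complexity", "compliance": 0.38},
    {"attribute": "high complexity", "compliance": 0.45},
    {"attribute": "low complexity",  "compliance": 0.58},
    {"attribute": "low complexity",  "compliance": 0.53},
]

for level in ("high complexity", "low complexity"):
    rates = [r["compliance"] for r in recommendations if r["attribute"] == level]
    print(f"{level}: mean compliance {mean(rates):.1%} (n = {len(rates)})")
```
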

Risk and Uncertainty

Risk and uncertainty are innovation attributes that are closely related to evidence. Risk refers to adopter knowledge of the probability distribution of the consequences or outcomes of an adoption decision. Uncertainty describes a situation in which decision-makers are unable to assign, with any confidence, the probabilities of particular consequences associated with an adoption decision (March & Simon, 1958). In studies of the adoption of drugs associated with high risk or uncertain outcomes it has been found that the decision to adopt is circumscribed by a patient's condition (Warner, 1977; Peay & Peay, 1994). Warner (1975) refers to rapid diffusion in situations in which a drug is easy to adopt and is aimed at catastrophic medical problems as the desperation-reaction model of medical diffusion. However, most adoption decisions are made in non-catastrophic circumstances.

Innovations are often valued because of their ability to reduce uncertainty; however, they also can create uncertainty. Rogers (1995) describes two types of information inherent in innovations: operating and evaluation information. Operating information reduces cause-effect uncertainty by explaining how and why an innovation is supposed to work, thus defining the innovation's complexity. Evaluation information reduces uncertainty about an innovation's expected outcomes by describing consequences and the positive and negative aspects of using the innovation in particular situations. Research evidence and the ability to observe the experience of others are important components of evaluation information.

Research Evidence

Research evidence supporting innovations, including health care innovations, has received little attention in the innovation diffusion literature. In their description of possible innovation attributes, Zaltman et al. (1973) note only that 'scientific status' and its sub-attributes of reliability, validity, generality, and internal consistency, play important roles in the adoption and diffusion of innovations. Historically, research evidence has not been considered highly relevant in the diffusion process. Rogers (1995, p. 18) observes that 'diffusion investigations show that most individuals do not evaluate an innovation on the basis of scientific studies of its consequences, although such objective evaluations are not entirely irrelevant, especially to the very first individuals who adopt.' An early study of tetracycline supports this view. Although information on the positive results of pharmaceutical and university studies of tetracycline was communicated to the physicians in the study sample, scientific evaluations of the drug were not sufficient to guarantee adoption. Instead, subjective evaluations based on the personal experience of a doctor's peers were key to convincing a typical doctor to adopt the drug (Coleman, Katz, & Menzel, 1966). Since then, a number of authors have provided examples of inappropriate adoption decisions, given the research evidence. Examples have included no adoption or delayed adoption in the presence of supporting evidence and eventual adoption, as well as rapid adoption in the absence of supporting evidence (Denis et al., 2002; Anderson & Lomas, 1988; Warner, 1977; Abrahamson & Rosenkopf, 1993; McKinlay, 1981; Fineberg, Gabel, & Sosman, 1978). Health services researchers have argued, often passionately, for criteria to determine the effectiveness and efficiency of proposed procedures, services, or technologies (McKinlay, 1981) and for the generation and dissemination of valid information on health technologies and the establishment of appropriate controls and incentives to guide the introduction of such technologies (Battista, 1989).

Even evidence-based innovations can retain a substantial element of uncertainty. It has been noted that the perception of strength of evidence supporting an innovation varies across adopters. As described above, perception is influenced by the clarity or observability of the link between an innovation and the outcome of interest (e.g., improved effectiveness or efficiency) and may be partially determined by the local experience of the adopter, including the social system and channels of communication. Denis et al. (2002) refer to the hard core and soft periphery of medical innovations. The hard core, which is similar to operating information, can be very clear (drug or equipment) or more vague (e.g., list of standard practices for dealing with psychiatric patients in the community). The soft periphery includes those aspects of an innovation that are not directly addressed in the results of efficacy or effectiveness studies (e.g., the appropriate method of follow-up, required level of surgical skill, or types of patient most appropriate for the treatment or procedure). These aspects of the innovation are open to interpretation
and may lead to different decisions about how to implement an innovation, each of which may be justified by the evidence that supports its hard-core components. The soft periphery gives the adopting unit an opportunity to design or use an innovation in a way that is compatible with local needs and values, while maintaining its hard core. Gelijns and Rosenberg (1994, p. 29) explain that ‘much uncertainty associated with a new technology can be resolved only after extensive use in practice. Thus, development does not end with the adoption of an innovation. Actual adoption constitutes only the beginning of an often prolonged process in which important redesigning takes place, exploiting the feedback of new information generated by users.’ Different adopters implement slightly different innovations. Multiple natural experiments occur that can result in an innovation’s evolving in a variety of directions determined by the perceptions and characteristics of the adopters, which in turn may be partially determined by an adopter’s local experience, including the social system and channels of communication. In the following section we explore the numerous patterns and determinants of adoption by individuals and organizations.

Innovation Adopters

Early studies of innovation diffusion were focused on individuals as the units of adoption and on the degree to which they were linked by interpersonal networks. Within health care research there is now increasing interest in the organization as the adopting unit, reflecting organizational rather than individual processes (Fennell & Warnecke, 1988; Kaluzny, Glasser, Gentry, & Sprague, 1970; Kaluzny, Veney, & Gentry, 1974; Nathanson & Morlock, 1980; Russell, 1978; Scott, 1990).

Individuals as Adopters

Researchers focusing on the individual as the adopting unit have demonstrated the ways in which new ideas and practices are spread through interpersonal contacts, consisting largely of interpersonal communication (Beal & Bohlen, 1955; Hagerstrand, 1967; Ryan & Gross, 1943; Rogers, 1995; Valente & Rogers, 1995; Valente & Davis, 1999; Valente, 1995). Through what is known as network interconnectedness, information is seen to flow through individuals across organizational and professional boundaries (Rogers, 1995). Within organizations, individuals are involved in
exchange relationships that include exchanging knowledge, information, and expertise with people both within and across sectors. When the individual is selected as the unit of adoption, attributes of the individual have been studied especially in relation to the time at which he/she first begins to use a new idea. In an extensive review of different categorization systems, Rogers (1995) suggested five adopter categories: innovators, early adopters, early majority, late majority, and laggards. Innovators are those individuals who launch a new idea in a system by importing the innovation from outside the system’s boundaries. They are ‘cosmopolites’ in that they are broadly connected to groups outside their own practice community. New ideas, meanwhile, are unlikely to be encountered through highly interconnected, closeknit social networks, because individuals in the same work and social groups tend to possess the same information. Early adopters are more integrated into the local social system, and it is among this group that one finds opinion leadership in most systems. These individuals ‘decrease uncertainty about a new idea by adopting it [and] then conveying a subjective evaluation of the innovation to near-peers through interpersonal networks’ (p. 264); this evaluation contributes to its observability. Early majority and late majority adopters represent adoption along a continuum; that is, early majority adopters will adopt ideas just before the average member of a system. Laggards, finally, tend to be individuals who are the most local and isolated in a social system. They must be certain that a new idea will not fail before adopting it. A synthesis of the empirical literature (Rogers, 1995) suggests important differences between earlier and later adopters based on socio-economic status, personality variables, and communication behaviour. Some of the earlier studies in health care were focused on physicians’ adoption of new drugs (Coleman, Katz, & Menzel, 1957, 1959), and it was found that physicians who were described as profession-oriented – that is, they put recognition by colleagues ahead of patients and general standing in the community and valued research and publishing – adopted new drugs earlier than physicians who were patient-oriented. ‘Socially integrated’ physicians adopted new drugs earlier than those who were isolated. Of the early adopters, the most socially integrated adopted in the first two months. In a study of public health officers’ adoption of public health innovations, Becker (1970a, b) concluded that the speed with which the innovations were adopted depended on the officers’ location in the communications network of their group, their cosmopolitanism, their reliance
on outside sources of scientific information, and their training. He proposed that it is the strength of an individual’s motivation to maintain or increase prestige and professional status in the light of the risks of adoption that determines the timing of the information source selected. Treating medical communities as closed systems, Greer (1988) explored the ways in which physicians receive and assess information on new medical technologies. Results indicated that local consensus to adopt and informal conversations with users of new technology were more important than the scientific literature underpinning the technology’s effectiveness and efficacy. Local innovators played an important role in bringing new technology into a closed community, and idea champions were the tireless promoters of the innovation. These individuals could be the same person as the innovator. According to Rogers (1995), ‘homophily of agents’ influences innovation adoption; that is to say, innovations are more likely to be adopted if individuals communicating information have similar characteristics. A study of agent characteristics that affected the diffusion of family-planning programs in rural Mexico supports this contention. In this case, the credibility of agent/supervisor characteristics was more important than effort expended. Older, married agents with children were more influential than their counterparts, except in the case of midwifery. Studies of individual attributes have not generally addressed the role the individual plays in the system and the contextual issues influencing adoption. For example, individuals who assume the role of manager are embedded in a particular, formal system, yet they may also be part of other networks. Dobbins et al. (2001) found that in public health settings, program managers reported using systematic reviews of public health interventions more frequently than other professionals within the unit. The reviews were used mainly for program planning and program justifications. One explanation is that these reviews may have been more relevant to the types of decision for which these managers were responsible. Managers’ expertise may also be relevant. In contemporary health care there is an ongoing debate regarding whether content expertise is necessary to manage in these settings. Researchers in other industries found in several British and German plant comparisons (Daly, Hitchens, & Wagner, 1985; Steedman & Wagner, 1987, 1989) that managers who did not have technical training relevant to their manufacturing were slow to adopt new technologies. In an indepth case study of managers’ theories about the process of innovation,
Salaman and Storey (2002) found that managers’ thinking frames for innovation – their ability to imagine an alternative form of organization – were overwhelmed by the organization’s dominance. This power differential acted as a barrier to imagining an innovative organization. In these situations, individuals’ adoption behaviour may be contingent upon the system within which they are embedded. Although many factors influence innovation diffusion, it has been consistently found that interpersonal contacts within and between communities are important influences on adoption behaviour (Valente, 1995). Social network ties to adopters have been shown to increase the likelihood of adopting medical technology and techniques (Coleman, Katz, & Menzel, 1966; Becker, 1970a, b) and management programs (Burns & Wholey, 1993). Robertson, Swan, and Newell (1996) examined the inter-organizational networks through which potential adopters learn about computer-aided production management technology in order to make decisions regarding its adoption. They found that people involved in making decisions to adopt the technology participated in at least three or four networks: academic, professional, inter-company, and technology supply. These networks reinforced and shaped the availability of the technology through an active promotion of its use. In health care there are strong professional norms that are constrained by a variety of occupational and professional standards to which participants subscribe (Ruef & Scott, 1998).
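
As a concrete illustration of the adopter categories discussed earlier in this section, the sketch below classifies hypothetical adopters by how far their adoption time falls from the mean adoption time, using the standard-deviation cut-offs conventionally associated with Rogers' five categories. The adoption times are invented; real studies estimate these distributions from observed data.

```python
# Hypothetical illustration: assigning adopters to Rogers' five categories by
# the timing of adoption relative to the mean (cut-offs in standard deviations).
# The adoption times below are invented.
from statistics import mean, stdev

adoption_times = [2, 3, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 8, 9, 11]
mu, sigma = mean(adoption_times), stdev(adoption_times)

def category(time_of_adoption):
    z = (time_of_adoption - mu) / sigma
    if z <= -2.0:
        return "innovator"
    if z <= -1.0:
        return "early adopter"
    if z <= 0.0:
        return "early majority"
    if z <= 1.0:
        return "late majority"
    return "laggard"

for t in sorted(set(adoption_times)):
    print(f"adoption time {t:2d} -> {category(t)}")
```
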

Organizations as Adopters

The transition from a health care system in which individuals are viewed as the major adopters of innovation to a system in which innovations are considered within a managerial framework (Scott, Ruef, Mendel, & Caronna, 2000) has led to increased attention to the study of organizations as adopters of innovations (Aiken & Hage, 1968; Becker & Whisler, 1969; Corwin, 1972; Daft, 1978; Kimberly, 1987; Tornatzky & Fleischer, 1990). This focus is based on the realization that, in many situations, individuals cannot adopt or implement an innovation on their own but require organizational support and resources. The need to commit organizational resources to ensure successful adoption of new medical technologies increases the complexity of the innovation decision and moves the locus of decision-making from the individual to the organizational level (Greer, 1986; Kimberly & Evanisko, 1981; Tornatzky & Fleischer, 1990).


There is no consensus on what motivates organizational adoption. Walston, Kimberly, and Lawton (2001) discuss two organizational perspectives: one that focuses on environmental pressures to improve economic efficiency as the principal motivator of change (Ansoff, 1965; Mintzberg, 1990) and another that emphasizes the effects of pressures for conformity that result from institutional pressures such as government, the legal system, and professional standards (Meyer & Rowan, 1977; Scott, 1995). A central tenet of the institutional perspective is that organizations that share a similar environment are driven by legitimacy motives to adopt new practices (DiMaggio & Powell, 1983). Some argue, however, that innovation in the health care system is neither one nor the other, but is highly susceptible to the influence of both economic and institutional factors (Scott, 1992; Arndt & Bigelow, 1998). The complexity of the relationship between economic forces and institutional pressures seems most evident in the adoption of managerial innovations (Walston et al., 2001; Westphal, Gulati, and Shortell, 1997). Walston et al. found that the probability of adopting re-engineering in acute-care hospitals increased substantially as a greater percentage of neighbouring hospitals adopted the innovation. Economic factors such as higher costs and a vulnerability to managed care also promoted adoption, whereas higher market penetration by managed care was associated with lower adoption rates. The impact of institutional pressures was also evident in a study of the diffusion of quality improvement (Westphal et al.). The authors found increased conformity over time in the form of innovation adoption, early adopters being more likely to be motivated by technical efficiency gains and a desire to customize quality practices to their organization’s unique practices. Late adopters were more likely to feel pressured to adopt the innovation and mimic what had been adopted in other hospitals. Organizational attributes, including responses to internal and external pressures, have also received some attention. Early studies were focused primarily on organizational characteristics, such as firm size, performance, functional differentiation, slack, and leadership qualities, to explain adoption (Kimberly & Evanisko, 1981; Moch & Morse, 1977; Rosner, 1968). More recently, Domanpour (1991) and Zammuto and O’Connor (1992), in separate literature reviews, identified six organizational indicators as important features that facilitated innovation, including specialization, departmentalization, professionalization, technical knowledge resources, and job complexity. Hage (1999) believes that job complexity – that is, increased labour skills – is most important,
because it taps an organization’s learning, problem-solving, and creativity capacities. He also notes that in neither of the reviews was the importance of a research department that has been linked to an organization’s capacity to absorb innovations considered (Cohen & Levinthal, 1990). Known as ‘absorptive capacity,’ it is the firm’s ability to recognize the value of new and external information and its ability to assimilate and exploit it (Fiol, 1996). It is largely a function of the organization’s prior related knowledge and is critical to innovation. A large gap in skills between partners in a strategic alliance can impair transfer of knowledge between them. The use of individuals as either boundary spanners, who absorb knowledge from outside and interpret it for internal constituents (Allen, 1977), or task coordinators, who are intimately connected to the organization’s work-flow (Ancona & Caldwell, 1992), has been found to have a positive effect on the diffusion of innovation. These individuals act as conduits for innovation to others in the organization. In health care, Glandon and Counte (1995) reviewed the literature on hospital adoption of medical and managerial innovations and noted support for examining organizational structural attributes such as size, specialization, functional differentiation, and decentralization, as well as characteristics of the external environment (Moch & Morse, 1977, Greer, 1977, Kaluzny et al., 1974; Provan, 1987). In their study of the diffusion of cost accounting systems in hospitals they found that size, slack resources, and membership in a multi-hospital system were positively related to adoption behaviour. In studies of the adoption in nursing homes of special care units and subacute care services, Castle (2001) found that the organizational factors that increased the likelihood of early innovation adoption were larger bed size, chain membership, and high levels of private-pay residents. Influential environmental factors included a more competitive environment and a higher number of beds in the county. Adoption of provider-based, rural-health clinics by rural hospitals, meanwhile, appeared to be motivated less by an adaptive response to observable economic signals than by an imitation of others, either because of uncertainty or a limited ability (due to limited resources) to fully evaluate their options. In examinations of organizational attributes the focus has been on innovation adoption in specific settings such as nursing homes, acute care hospitals (teaching and community), and rural areas and communities. It has been argued that it is important to distinguish between organization types in order to address issues of generalizability
(Domanpour, 1991). In addition to innovations described above, types of innovations studied have included promotion and prevention services (Miller, 2001; Krein, 1999), new managerial approaches (Walston et al., 2001; Westphal et al., 1997; Castle, 2001; Glandon & Counte, 1995; Provan, 1987), and new drug approvals and technology (D'sa, Hill, & Stratton, 1994; Langley & Denis, 2002; Lemieux-Charles, McGuire, & Blidner, 2002).

Literature on technology transfer provides considerable evidence that inter-organizational relationships facilitate the spread of particular innovations across organizations (Tushman, 1977; Darr, Argote, & Epple, 1995; Robertson et al., 1996) or promote innovation in general (Shan, Walker, & Kogut, 1994). It has been argued that, in order for an organization to innovate and learn, it should receive relevant new information from its external environment. Westphal et al. (1997) found that social networks within hospital systems expedited adoption of similar structures (mimetic isomorphism) in later stages of the adoption process by disseminating knowledge about the normative form of quality improvement adoption as it emerged over time. In another study, regional and local hospital networks were found to influence the diffusion of an administrative innovation, matrix management. In a recent study of the diffusion of an evidence-based strategy to coordinate the delivery of stroke care in four geographic regions in Ontario, Lemieux-Charles et al. (2002) found that establishing new structures in the form of networks was necessary for disseminating evidence. The different organizations representing the continuum of care were engaged in transforming the delivery of stroke care in a region. An important factor affecting the transfer of knowledge across organizations is whether the organizations are embedded in a superordinate relationship (Argote, 1999). Powell, Koput, and Smith-Doerr (1996) found that biotechnology firms that were linked together in an R&D alliance were more likely to have access to critical information and resource flows that facilitated their growth than firms not engaged in such collaborative relationships. Being embedded in a network provides more opportunities to communicate than are afforded independent organizations.

Conclusion

Although the use of scientific evidence and knowledge in practice has been a long-standing concern of the knowledge utilization field
(Huberman, 1994), policy analysis (Weiss 1979), and organizational sciences (Rynes, Bartunek, & Daft, 2001), there continues to be interest in examining how new ideas/innovations enter the realm of practice either through interactions of individuals or through the system within which they work. Much of the knowledge transfer work in health care is focused on dissemination methods where there is strong evidence to support adoption, without consideration of the attributes of the innovations and how they interact with adopter or system characteristics in determining adoption behaviour. (See Denis et al., 2002, for an exception to this analytic tendency.) If these interactions are predictive of appropriate adoption behaviour, knowledge transfer/dissemination methods may need to be tailored to particular circumstances defined by the innovation, the individual, the organization, and the larger system within which adoption decisions are made.

REFERENCES

Abrahamson, E., & Rosenkopf, L. (1993). Institutional and competitive bandwagons: Using mathematical modeling as a tool to explore innovation diffusion. Academy of Management Review, 18(3), 487–517.
Aiken, M., & Hage, J. (1968). Organizational interdependence and intraorganizational structure. American Sociological Review, 33, 912.
Allen, T.J. (1977). Managing the flow of technology: Technology transfer and the dissemination of technological information within the R&D organization. Cambridge, MA: MIT Press.
Ancona, D.G., & Caldwell, D.F. (1992). Bridging the boundary: External activity and performance in organizational teams. Administrative Science Quarterly, 37, 634–665.
Anderson, G.M., & Lomas, J. (1988). Monitoring the diffusion of a technology: Coronary artery bypass surgery in Ontario. American Journal of Public Health, 78(3), 251–254.
Ansoff, H.I. (1965). Corporate strategy. New York: McGraw-Hill.
Argote, L. (1999). Knowledge transfer in organizations. In Organizational learning: Creating, retaining and transferring knowledge (pp. 143–188). Boston: Kluwer Academic.
Arndt, M., & Bigelow, B. (1998). Reengineering: Déjà vu all over again. Health Care Management Review, 23(3), 58–66.
Battista, R.N. (1989). Innovation and diffusion of health-related technologies. International Journal of Technology Assessment in Health Care, 5, 227–248.
Beal, G.M., & Bohlen, J.M. (1955). How farm people accept new ideas. Cooperative Extension Service Report 15. Ames, IA: U.S. Department of Agriculture.
Becker, M.H. (1970a). Factors affecting diffusion of innovation among health professionals. American Journal of Public Health, 60(2), 294–304.
Becker, M.H. (1970b). Sociometric location and innovativeness: Reformulation and extension of the diffusion model. American Sociological Review, 35, 267–282.
Becker, S.W., & Whisler, T.L. (1969). The innovative organization: A selective view of current theory and research. Journal of Business, 4, 462–469.
Beyer, J.M., & Trice, H.M. (1978). Implementing change. New York: Free Press.
Burns, L.R., & Wholey, D.R. (1993). Adoption and abandonment of matrix management programs: Effects of organizational characteristics and interorganizational networks. Academy of Management Journal, 36(1), 106–138.
Castle, N.G. (2001). Innovation in nursing homes: Which facilities are the early adopters? Gerontologist, 41(2), 161–172.
Cohen, W., & Levinthal, D. (1990). Absorptive capacity: A new perspective on learning and innovation. Administrative Science Quarterly, 35, 128–152.
Coleman, J., Katz, E., & Menzel, H. (1957). The diffusion of innovation among physicians. Sociometry, 20, 253–279.
Coleman, J., Katz, E., & Menzel, H. (1959). Social processes in physicians' adoption of a new drug. Journal of Chronic Diseases, 9(1), 1–19.
Coleman, J., Katz, E., & Menzel, H. (1966). Medical innovation: A diffusion study. Indianapolis, IN: Bobbs-Merrill.
Corwin, R.G. (1972). Strategies for organizational innovations: An empirical comparison. American Sociological Review, 37, 441–454.
Daft, R.L. (1978). A dual core model of organizational innovation. Academy of Management Journal, 21, 193–210.
Daly, A., Hitchens, D.M., & Wagner, K. (1985). Productivity, machinery and skills in a sample of British and German manufacturing plants. National Institute Economic Review (February), 48–61.
Darr, E., Argote, L., & Epple, D. (1995). The acquisition, transfer, and depreciation of learning in service organizations: Productivity in franchises. Management Science, 44, 1750–1762.
Denis, J.L., Hébert, Y., Langley, A., Lozeau, D., & Trottier, L.H. (2002). Explaining diffusion patterns for complex health care innovations. Health Care Management Review, 27(3), 59–72.
Dewar, R.D., & Dutton, J.E. (1986). The adoption of radical and incremental innovation: An empirical analysis. Management Science, 32(11), 1422–1433.
DiMaggio, P.J., & Powell, W.W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48, 147–160.
Dobbins, M., Cockerill, R., Barnsley, J., & Ciliska, D. (2001). Factors of the innovation, organization, environment, and individual that predict the influence five systematic reviews had on public health decisions. International Journal of Technology Assessment in Health Care, 17, 467–478.
Domanpour, F. (1991). Organizational innovation: A meta-analysis of effects of determinants and moderators. Academy of Management Journal, 34, 555–590.
Downs, G., & Mohr, L.B. (1976). Conceptual issues in the study of innovations. Administrative Science Quarterly, 21, 700–714.
Downs, G.W., & Mohr, L.B. (1979). Toward a theory of innovation. Administration and Society, 10(4), 379–408.
D'sa, M.M., Hill, D.S., & Stratton, T.P. (1994). Diffusion of innovation I: Formulary acceptance rates of new drugs in teaching and non-teaching British Columbia hospitals – a hospital pharmacy perspective. Canadian Journal of Hospital Pharmacy, 47(6), 254–260.
Ettlie, J.E., & Vellenga, D.B. (1979). The adoption time period for some transportation innovations. Management Science, 25(5), 429–443.
Fennell, M., & Warnecke, R. (1988). The diffusion of medical innovation: An applied network analysis. New York: Plenum Press.
Fineberg, H.V., Gabel, R.A., & Sosman, M.B. (1978). Acquisition and application of new medical knowledge by anesthesiologists: Three recent examples. Anesthesiology, 48(6), 430–436.
Fiol, C.M. (1996). Squeezing harder doesn't always work: Continuing the search for consistency in innovation research. Academy of Management Review, 21(4), 1012–1021.
Fliegel, F.C., & Kivlin, J.E. (1966). Attributes of innovations as factors in diffusion. American Journal of Sociology, 72, 235–248.
Gelijns, A., & Rosenberg, N. (1994). The dynamics of technological change in medicine. Health Affairs, 13(3), 29.
Glandon, G.L., & Counte, M.A. (1995). An analysis of the adoption of managerial innovation. Health Services Management Research, 8(4), 243–251.
Greer, A.L. (1977). Advances in the study of diffusion and innovation in health care organizations. Milbank Memorial Fund Quarterly / Health and Society, 55(4), 505–532.
Greer, A.L. (1986). Medical conservatism and technological acquisitiveness: The paradox of hospital technology adoptions. In J.A. Roth & S.B. Ruzek (Eds.), Research in the sociology of health care (Vol. 4, pp. 188–235). Greenwich, CT: JAI Press.
Greer, A.L. (1988). The state of the art versus the state of the science: The diffusion of new medical technologies into practice. International Journal of Technology Assessment in Health Care, 4(1), 5–26.
Grilli, R., & Lomas, J. (1994). Evaluating the message: The relationship between compliance rate and the subject of a practice guideline. Medical Care, 32(3), 202–213.
Hage, J.T. (1999). Organizational innovation and change. Annual Review of Sociology, 25, 597–622.
Hage, J., & Aiken, M. (1970). Social change in complex organizations. Englewood Cliffs, NJ: Prentice-Hall.
Hagerstrand, T. (1967). Innovation diffusion as a spatial process. Chicago: University of Chicago Press.
Huberman, M. (1994). Research utilization: The state of the art. Knowledge and Policy: The International Journal of Knowledge Transfer and Utilization, 7, 13–33.
Kaluzny, A.D., Glasser, J., Gentry, J., & Sprague, J. (1970). Diffusion of innovative health care services in the United States. Medical Care, 8, 474–487.
Kaluzny, A.D., & Hernandez, S.R. (1988). Organizational change and innovation. In S.M. Shortell & A.D. Kaluzny (Eds.), Health care management: A text in organizational theory and behavior (pp. 379–417). New York: John Wiley.
Kaluzny, A.D., Ricketts, T., Warnecke, R., Ford, L., Morrissey, J., Gillings, D., Sondik, E.J., Ozer, H., & Goldman, J. (1989). Evaluating organizational design to assure technology transfer: The case of the Community Clinical Oncology Program. Journal of the National Cancer Institute, 81, 1717–1725.
Kaluzny, A.D., & Veney, J.E. (1973). Attributes of health services as factors in program implementation. Journal of Health and Social Behavior, 14, 124–133.
Kaluzny, A.D., Veney, J.E., & Gentry, J.T. (1974). Innovation of health services: A comparative study of hospitals and health departments. In A.K. Kaluzny, J.T. Gentry, & J.E. Veney (Eds.), Innovations in health care organizations (pp. 80–110). Chapel Hill, NC: University of North Carolina at Chapel Hill, Department of Health Administration.
Kimberly, J.R. (1981). Managerial innovation. In P.C. Nystrom & W.H. Starbuck (Eds.), Handbook of organizational design: Adapting organizations to their environment (pp. 84–104). Oxford: Oxford University Press.
Kimberly, J.R. (1987). Organizational and contextual influences on the diffusion of technological innovations. In H.H. Pennings & A. Buttendam (Eds.), New technology as organizational innovation: The development and diffusion of microelectronics (pp. 237–259). Cambridge, MA: Ballinger.
Kimberly, J., & Evanisko, M. (1981). Organizational innovation: The influence of individual, organizational and contextual factors on hospital adoption of technological and administrative innovations. Academy of Management Journal, 24, 689–713.
Krein, S.L. (1999). The adoption of provider-based rural health clinics by rural hospitals: A study of market and institutional forces. Health Services Research, 34(1), 33–60.
Kroeber, A.L. (1923). Anthropology. New York: Harcourt Brace.
Langley, A., & Denis, J.-L. (2002). Forum. Health Care Management Review, 27(3), 32–34.
Lemieux-Charles, L., McGuire, W., & Blidner, I. (2002). Building interorganizational knowledge for evidence-based health system change. Health Care Management Review, 27(3), 47–58.
March, J., & Simon, H. (1958). Organizations. New York: Wiley.
McKinlay, J.B. (1981). From 'promising report' to 'standard procedure': Seven stages in the career of a medical innovation. Milbank Memorial Fund Quarterly / Health and Society, 59(3), 374–411.
Meyer, A.D., & Goes, J.B. (1988). Organizational assimilation of innovation: A multilevel contextual analysis. Academy of Management Journal, 31, 897–923.
Meyer, J.W., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and ceremony. American Journal of Sociology, 83, 340–363.
Milio, N. (1971). Health care organizations and innovation. Journal of Health and Social Behavior, 12, 163–174.
Miller, R.L. (2001). Innovation in HIV prevention: Organizational and intervention characteristics affecting program adoption. American Journal of Community Psychology, 29(4), 621–647.
Mintzberg, H. (1990). The design school: Reconsidering the basic premises of strategic management. Strategic Management Journal, 6, 257–272.
Moch, M.K., & Morse, E.V. (1977). Size, centralization and organizational adoption of innovations. American Sociological Review, 42, 716–725.
Moore, G.C., & Benbasat, I. (1990). An examination of the adoption of information technology by end-users: A diffusion of innovations perspective. University of British Columbia, Department of Commerce and Business Administration. Working Paper 90-MIS-012.
Nathanson, C., & Morlock, L. (1980). Control structure, values and innovation: A comparative study of hospitals. Journal of Health and Social Behavior, 21, 315–333.
Normann, R. (1971). Organizational innovativeness: Product variation and reorientation. Administrative Science Quarterly, 16, 2.
Peay, M.Y., & Peay, E.R. (1994). Innovation in high-risk drug therapy. Social Science & Medicine, 39(1), 39–52.
Pelz, D.C. (1985). Innovation, complexity and the sequence of innovation stages. Knowledge: Creation, Diffusion and Utilization, 6(3), 261–291.
Powell, W.W., Koput, K.W., & Smith-Doerr, L. (1996). Interorganizational collaboration and the locus of innovation: Networks of learning in biotechnology. Administrative Science Quarterly, 41, 116–145.
Provan, K. (1987). Environmental and organizational predictors of adoption of cost containment policies in hospitals. Academy of Management Journal, 30(2), 219–239.
Rapoport, J. (1978). Diffusion of technological innovation among nonprofit firms: A case study of radioisotopes in U.S. hospitals. Journal of Economics and Business, 30, 108–118.
Robertson, M., Swan, J., & Newell, S. (1996). The role of networks in the diffusion of technological innovation. Journal of Management Studies, 33(3), 333–359.
Rogers, E.M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.
Rogers, E.M., & Kincaid, D.L. (1981). Communication networks: Toward a new paradigm for research. New York: Free Press.
Rogers, E.M., & Shoemaker, F.F. (1971). Communication of innovations: A cross-cultural approach. New York: Free Press.
Rosner, M.M. (1968). Economic determinants of organizational innovation. Administrative Science Quarterly, 12, 614–625.
Ruef, M., & Scott, W.R. (1998). A multidimensional model of organizational legitimacy: Hospital survival in changing institutional environments. Administrative Science Quarterly, 43, 877–904.
Russell, L.B. (1978). The diffusion of hospital technologies: Some econometric evidence. Journal of Human Resources, 12, 482–502.
Ryan, B., & Gross, N.C. (1943). The diffusion of hybrid seed corn in two Iowa communities. Rural Sociology, 8, 15–24.
Rynes, S.L., Bartunek, J.M., & Daft, R.L. (2001). Across the great divide: Knowledge creation and transfer between practitioners and academics. Academy of Management Journal, 44(2), 340–355.
Salaman, G., & Storey, J. (2002). Managers' theories about the process of innovation. Journal of Management Studies, 39, 147–165.
Scott, W.R. (1990). Innovation in medical care organizations: A synthetic review. Medical Care Review, 47(2), 165–192.
Scott, W.R. (1992). Organizations: Rational, natural and open systems (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Scott, W.R. (1995). Institutions and organizations. Thousand Oaks, CA: Sage.
Scott, W.R., Ruef, M., Mendel, P.J., & Caronna, C.A. (2000). Institutional change and healthcare organizations: From professional dominance to managed care. Chicago: University of Chicago Press.
Shan, W., Walker, G., & Kogut, B. (1994). Interfirm cooperation and startup innovation in the biotechnology industry. Strategic Management Journal, 15, 387–394.
An Innovation Diffusion Perspective—137 innovation in the biotechnology industry. Strategic Management Journal, 15, 387–394. Steedman, H., & Wagner, K. (1987). A second look at productivity, machinery and skills in Britain and Germany. National Economic Review, (November), 84 – 95. Steedman, H., & Wagner, K. (1989). Productivity, machinery and skills: Clothing manufacture in Britain and Germany. National Institute Economic Review (May), 40 – 57. Tarde, G. (1903). The laws of immigration (E.C. Parsons, Trans.). New York: Holt. Thio, A. (1971). A reconsideration of the concept of adopto-innovation compatibility in diffusion research. Sociological Quarterly, 12, 56 – 58. Tornatzky, L.G., & Fleischer, M. (1990). The process of technological innovation. Lexington MA: Lexington Books. Tornatzky, L.G., & Klein, K.J. (1982). Innovation characteristics and innovation adoption-implementation: A meta-analysis of findings. IEEE Transactions in Engineering Management, EM-29(1), 28 – 45. Tushman, M.L. (1977). Special boundary roles in the innovation process. Administrative Science Quarterly, 22(4), 587– 605. Valente, T.W. (1995). Network models of the diffusion of innovations. Cresskill, NJ: Hampton Press. Valente, T.W., & Davis, R.L. (1999). Accelerating the diffusion of innovations using opinion leaders. Annals of the American Academy of Political and Social Science, 566, 55 – 67. Valente, T.W., & Rogers, E.M. (1995). The origins and development of the diffusion of innovations paradigm as an example of scientific growth. Science and Communication: An Interdisciplinary Social Science Journal, 16(3), 238 – 269. Walston, S.L., Kimberly, J.R., & Lawton, R.B. (2001). Institutional and economic influences on the adoption and extensiveness of managerial innovation in hospitals: The case of reengineering. Medical Care Research and Review, 58(2), 194 –228. Warner, K.E. (1975). A ‘desperation-reaction’ model of medical diffusion. Health Services Research, 10, 369 – 383. Warner, K. E. (1977). Treatment decision making in catastrophic illness. Medical Care, 15, 19 – 33. Weiss, C.H. (1979) The many meanings of research utilization. Public Administration Review, 39, 426 – 431. Westphal, J.D., Gulati, R., & Shortell, S.M. (1997). Customization or conformity? An institutional and network perspective on the content and consequences of TQM adoption. Administrative Science Quarterly, 42, 366 – 394. .

Wissler, C. (1923). Man and culture. New York: Thomas Y. Crowell.
Yin, R.K. (1979). Changing urban bureaucracies. Lexington, MA: Lexington Books.
Zaltman, G., Duncan, R., & Holbek, J. (1973). Innovations and organizations. New York: John Wiley.
Zammuto, R., & O’Connor, E. (1992). Gaining advanced manufacturing technologies benefits: The role of organizational design and culture. Academy of Management Review, 17, 701–728.


6 A Program Evaluation Perspective on Processes, Practices, and Decision-Makers
FRANÇOIS CHAMPAGNE, ANDRÉ-PIERRE CONTANDRIOPOULOS, AND ANAÏS TANON

Introduction

‘Making decisions based on scientific evidence is a good thing.’ This simple yet daring hypothesis justifies the current movement that has been named evidence-based decision-making. The complex nature of health systems has produced large areas of uncertainty pertaining to the relationship between health problems and the interventions that could resolve them. These areas of uncertainty are widening under the pressure of rapid technological development, aging populations, and the public’s ever-increasing expectations. The establishment of major health insurance programs during the decades following the Second World War fuelled, during the 1970s, the need for scientific evidence capable of guiding decision-making that would increase the probability that the most suitable solutions possible would be found for the problems of effectiveness and efficiency in health care services at the clinical, organizational, and systemic levels. Since that time, evaluation has become increasingly popular with decision-makers in the health care field, in spite of the storm clouds that have been building over relations between evaluation and decision-making. During the 1960s evaluators were concerned about the limited impact evaluations were having on decision-making. In 1966 Carol Weiss’s article ‘Utilization of evaluation: Toward comparative study’ (see Weiss, 1972b) marked the beginning of a series of investigations, both empirical and theoretical, on the nature, causes, determinants, and consequences of evaluation use (Alkin, Daillak, & White, 1979). For the next twenty years, the term ‘evaluation utilization’ actually meant ‘utilization of
evaluation results’ (Weiss, 1998a, p. 24). The view adopted by the majority of the evaluators was that of the policy designers and those in charge of programs (Cronbach, Ambron, Dornbusch, Hess, Hornik, Phillips, et al., 1980). The utilization of evaluation results that was being sought was essentially instrumental (actions undertaken based on the results or the report of the evaluation) (Rich, 1977) and short term (Alkin et al., 1979). Towards the end of the 1970s, Patton (1978) stressed that the perception of a marginal utilization of evaluation results could be explained by an overly restrictive conceptualization of utilization:

We found in interviews with federal decision-makers that evaluation research is used by decision-makers, but not in the clear cut, organization-shaking ways in which social scientists sometimes believe research should be used. Our data suggest that what is typically characterized as underutilization or nonutilization of evaluation research can be attributed in substantial degree to a narrow definition of utilization that fails to take into consideration the nature of actual decision-making processes in most programs. Utilization of research findings is not something that suddenly and concretely occurs at some one distinct moment in time. Rather, utilization is a diffuse and gradual process of reducing decision-maker uncertainty within an existing social context. (p. 34)

Subsequent works have shown another possible utilization of evaluations: conceptual utilization. While still limited to evaluation results, conceptual utilization concerns the cognitive impact of evaluations on decision-makers’ understanding of programs (Rich, 1977; Weiss & Bucuvalas, 1980; Weiss, 1981). The temporal perspective on utilization also becomes wider as increasing importance is given to the long-term influence of evaluation results (Weiss, 1980; Alkin, 1990; Huberman & Cox, 1990). At the beginning of the 1980s it became increasingly clear that evaluation utilization is a multi-dimensional concept, with not only instrumental and conceptual dimensions but also a symbolic dimension. Utilization of evaluation results is meant to justify, support, or legitimize certain positions (Pelz, 1978; Leviton & Hughes, 1981). This type of utilization is also referred to as political, legitimating, or persuasive (Kirkhart, 2000). Research carried out on evaluation utilization at this time highlighted the predictive factors of utilization (Cousins & Leithwood, 1986) and, in particular, the factors tied to context:
influence of the environment, interest groups, collective and individual learning processes, the nature of decisions made, organizational structure, and processes (Lester & Wilds, 1990; Shulha & Cousins, 1997). The role that the evaluator could play in the evaluation utilization process also came under increasing scrutiny (Patton, 1988). The conceptualization of evaluation utilization continued to evolve; in addition to using results, it became possible to use the evaluation process. A tactical dimension was thus added to the other three. This stemmed from the involvement in the evaluation process of a broader range of stakeholders (Patton, 1997; Forss, Rebien, & Carlsson, 2002). With the emergence of participatory approaches, which enriched the field of evaluation practice, additions were made to the repertoire of roles played by the evaluators and the skills that resulted (Shulha & Cousins, 1997). Thus, for example, the new role of planned change agent requires communication skills, sociability, and in-depth knowledge of the programs to be evaluated, as well as skills in data collection and analysis. In the decade that followed, the range of influence of evaluations continued to widen. It is not only the immediate environment of the program or project evaluated that can be influenced by evaluation, but also the entire organization. Several authors thus looked into the relationship between evaluation and organizational structures and processes (Jenlik, 1994; Preskill, 1994). Borrowing from studies on ‘misutilization,’ researchers became interested in the intentions that guide the actors who would use results. This served to further refine the typology of evaluation utilization. New collaborative approaches to evaluation aimed at improving utilization are now being proposed, tried, and tested in several settings. After more than forty years of investigation, the field of evaluation utilization has become more complex. The major discoveries in this field have been summarized by numerous authors. Many of these reviews identify variables that affect evaluation utilization without, however, placing them in an analytical model demonstrating their interrelationships (Lester & Wilds, 1990; Johnson, 1998). The need for models or theories of knowledge utilization, as stressed by numerous evaluators (Alkin, 1991; Greene, 1988a, b; Shulha & Cousins, 1997), led to a more recent wave of theoretical syntheses (Denis, Béland, & Champagne, 1996; Johnson, 1998; Turnbull, 1999). These works integrated the different dimensions of utilization but did not link them to the evaluation models that underlie them. As Patton points out, these two elements (evaluation and utilization) are intrinsically
embedded and cannot be separated: ‘since there is no universally accepted definition of evaluation, there can be no universally accepted definition of utilization ... Any given definition of utilization will necessarily be dependent on and derived from a prior definition of evaluation’ (1988, p. 305). Our aim in this chapter is to propose a typology of evaluation models built around utilization concerns. In the first part we present evaluation as a system of action; this framework allows us, in the second part, to build archetypal evaluation models that, in theory, appear to be the most effective in terms of utilization. In the third and final section of the chapter we discuss utilization models proposed by various authors in the light of these archetypal models.

Evaluation as an Organized System of Action

Starting with the concept that evaluation is a social process, we postulate that evaluation can be best understood by considering it as an organized system of action (see figure 6.1). (This concept is partly taken from Parsons, 1977; Friedberg, 1993; Rocher, 1972; Bourdieu & Wacquant, 1992; see also Contandriopoulos et al., 2000.) Any evaluative process can thus be described according to five components: its objective(s), its structure, the actors who interact within it and their practices, its processes, and the environment in which it occurs.

Objectives

The explicit objectives of any organized system of action consist of transforming the predicted trajectory of one or more phenomena by acting, over a period of time, on a certain number of their determinants. In the case of an evaluation, the aim is to improve one or more decision-making processes by providing decision-makers with information that is scientifically valid and socially legitimate, as well as with a judgment on its worth. There is no universal or absolute definition of evaluation. However, the following definition reconciles the main elements that currently form a scholarly consensus: ‘Evaluation basically consists of making a judgment on the worth of an intervention by implementing a deliberate process for providing scientifically valid and socially legitimate information on an intervention or any of its components in such a way that the various stakeholders, who may have

Figure 6.1 Evaluation as an organized system of action (elements: environment; decision-making process; expected and preferred trajectories of the phenomenon; actors, practices, and processes; resources; physical, organizational, and symbolic structures; information and judgments; utilization of the evaluation results by the decision-makers; effects: better decisions)

different bases for judgment, are able to take a position on the intervention and to construct a judgment that could translate into action. Information produced by an evaluation may result from the comparison of observations with standards (normative evaluation) or arise from a scientific process (evaluative research)’ (Contandriopoulos et al., 2000, p. 521). It is clear that evaluation results do not always automatically improve decisions. All things being equal, however, we can expect that when information derived from an evaluative process is used, it will have a positive influence on decisions. Utilization of results from an evaluation by decision-makers constitutes the primary objective of the evaluation as an intervention.

Structure

Structure is made up of three components and their interrelations:
1. A symbolic structure consisting of beliefs, representations, and values that allow the various stakeholders in the system to communicate with each other, give direction to their actions, and develop a commitment and a sense of belonging. In the case of an evaluation, it concerns, in particular, the epistemological bases from which the various stakeholders approach the evaluation being undertaken.
2. An organizational structure comprising rules, agreements, and methods that define how, and according to what logic, resources (funds, authority, influence, and commitments) to be used in the evaluative process will be allocated and exchanged. These are the rules of play in the organized system of action.
3. A physical structure composed of the volume and the structuring of the various resources mobilized (financial, human, material, technical, informational, etc.) and the physical parameters of the system.
These three structural elements interrelate to create a structured social environment in which actors in a system interact.

Actors

Actors (or stakeholders) are characterized by their conceptions of the world and their convictions; their positions in a system, which depend
on resources they have or they control; their projects; and their ability to act. Actors interact in an ongoing game of cooperation and competition to increase their control over the action system’s critical resources (funding, authority, influence, commitments) and thereby improve their position. The practices (or conduct) of the actors are simultaneously influenced by and constitutive of the structure of the intervention. Their practices are interdependent in terms of carrying out the activities required to reach a system’s objective(s).

Processes

Resources are mobilized and used by actors to produce the activities, goods, and services required to attain a system’s objectives. In the case of evaluation, the processes involved in mobilization and use comprise all the activities needed to produce in-depth scientific information about and judgments on an intervention.

Environments

Any organized system of action exists in a particular environment at a given moment. An environment is made up of the physical, legal, symbolic, historical, economic, and social contexts that structure the field in which it is inserted, in addition to all the other organized systems for action with which it interacts.

The Performance of a System of Action

A system’s ability to reach its objectives efficiently depends on the degree of coherence that exists, over time, among its five components. As Parsons proposed in his social system action theory (1951, 1977; see also Sicotte et al., 1998; Sicotte, Champagne, & Contandriopoulos, 1999), this coherence is determined by the form given to a system by the simultaneous execution of four essential functions: (1) goal attainment; (2) environmental adaptation for resource acquisition; (3) integration of internal processes for production; and (4) maintenance of values. It should be noted that these functions form subsystems that evolve independently and that can be analysed, in turn, as systems of action. However, because they are contingent upon each other, only a fraction of the potential combinations between their modalities results in plausible forms of action that can be evaluated.

The combinations of function modalities stem from an arbitration process that takes place within six alignments:
1. The alignment between the Values Maintenance function and the Adaptation function defines the extent to which resources mobilized for evaluation are influenced by the paradigms upon which an evaluation process is based and, inversely, whether the environmental constraints have an impact on the choice of the belief system that must guide evaluation.
2. The alignment between the Values Maintenance function and the Goal Attainment function represents the arbitration between the evaluation paradigms and its objectives: To what extent do stakeholders’ values allow the evaluation objectives to be reached, and to what extent are these values influenced, in turn, by the objectives pursued by the (evaluation) system?
3. The alignment between the Values Maintenance function and the Production function defines how paradigmatic beliefs are translated in the information production process and, inversely, the extent to which the mechanisms for information production strengthen or undermine the paradigm values underlying the evaluation.
4. The alignment between the Adaptation function and the Production function ensures that the resources available are sufficient, appropriate, and properly coordinated at the production level and, in return, that the resource acquisition mechanisms remain compatible with the requirements of the information production system.
5. The alignment between the Adaptation function and the Goal Attainment function concerns, on the one hand, the appropriateness of a system’s adaptation processes in regard to the objectives being pursued and, on the other, the relevance of the objectives pursued given available resources.
6. The alignment between the Goal Attainment function and the Production function rests on the compatibility of an evaluation process with evaluation objectives.
The most plausible combinations that arise from these alignments are those that ensure that a given system of action maintains a certain equilibrium that enhances its effectiveness, for example, in the case of evaluation, combinations that maximize the potential for utilization of the information generated. More specifically, we propose that
utilization of the information and judgments produced by an evaluation depends on the coherence that exists among the various dimensions of the evaluation process. As illustrated in figure 6.2, a given evaluation must maintain a logical coherence between
• the position or relationship of the evaluator with the decision-makers and the range of stakeholder participation in the evaluation (which are two dimensions of the environmental adaptation or resource acquisition function of an evaluation);
• the level of participation of stakeholders and the locus of control on the technical component of the evaluation (two dimensions of the evaluation production process);
• the scope of results transfer activities and the understanding (or conceptualization) of how evaluation utilization occurs (two dimensions of goal attainment); and
• the paradigmatic foundations of the evaluation methods and approaches (the ‘values’ underlying the evaluation).

The Values Maintenance Function

The Values Maintenance function plays an essential role in the system of action that an evaluation represents, in the sense that it fosters and distributes the values in the symbolic and cultural world in which a system’s actors evolve. Its role is to ensure that the values and culture, synthesized within a paradigm, are internalized by actors and institutionalized in a system, which is a prerequisite for cohesion within a system of action. A paradigm is a general conceptual model reflecting a set of beliefs and values recognized by a community and accepted as being common to all the individuals in the group. This set of beliefs, which may also be called ‘ideological constructs,’ gives the group (or the discipline) the possibility of identifying, structuring, interpreting, and solving definite and specific disciplinary problems (Levy, 1994). In other words, a paradigm allows a community of researchers sharing it to formulate questions about itself and about the world that it believes are legitimate. The paradigm also allows for identification of the techniques and instruments to be used by the group, that is to say, the methods considered ‘rigorous’ for researching and proposing solutions to questions.

Figure 6.2 Functions of the evaluation process. Environmental Adaptation (mobilization of resources for evaluation): relationship with decision-makers; range of stakeholders’ participation. Values Maintenance (models of knowledge production): paradigmatic foundations (ontology, epistemology, methodology, teleology). Production Processes (evaluation practices): level of stakeholder participation; locus of control for the technical part of the evaluation. Goal Attainment (evaluation objectives): evaluation utilization models; results transfer.

A paradigm is characterized by four broad perspectives:
1. An ontological perspective on the nature of reality and the manner in which it is conceived. There are generally three distinct positions within this perspective: reality exists and is governed by known natural and immutable laws; reality exists but may not be apprehended objectively; there is no one reality but rather several that are the results of mental, social, or experimental constructions.
2. An epistemological perspective. This perspective describes the nature of the relationship between the evaluator and the object of the evaluation. The evaluator may seek to establish a purely objective relationship (he or she does not influence the object under observation and is not influenced by it); a purely subjective position (the result of observation is a construction stemming from the interaction between the evaluator and the object he or she is observing); a position that, while maintaining the foundations of objectivity, introduces the means for taking the context into account; or, finally, a position that stresses the mediating role of values.
3. A methodological perspective. This perspective describes the methods judged to be valid for reflection, representation, reconstruction, and construction of problems to be examined and the solutions that can be applied. Methods can vary from those that focus on controlling bias and predicting phenomena; others that, while still resting on experimental foundations, introduce qualitative methods; and those that are purely qualitative.
4. A teleological perspective. This perspective defines the intentions, aims, and objectives of the research, as well as the logic guiding the actors.
Combinations of the modalities in these four perspectives give us three major paradigms in the field of evaluation (see table 6.1): positivism, neo-positivism, and constructivism.

The Environmental Adaptation Function

The Environmental Adaptation or Resource Mobilization function represents the set of actions that establish relationships between evaluation as a system of action and the external milieu characterized by a diversity of needs in terms of scientific evidence. For evaluation, as for any other system of action, this function has two main roles. First, it must allow the system to draw on its external environment for the resources that are needed for it to function; and second, it must demonstrate the capacity of the system to react to the needs of its environment. The emergence of collaborative approaches to evaluation has highlighted the important role of stakeholders as crucial resources for evaluation, in particular for evaluation results utilization. According to Cousins, Donahue, and Bloom (1996, p. 209), three main reasons justify the involvement of a broad range of stakeholders in the evaluation process:

Table 6.1 The Paradigms in the Field of Evaluation

Ontology
• Positivism: Realist. Reality exists ‘out there’ and is driven by immutable natural laws and mechanisms. Knowledge of these entities, laws, and mechanisms is conventionally summarized in the form of time- and context-free generalizations. Some of these latter generalizations take the form of cause-effect laws.
• Neo-positivism: Critical realist. Reality exists but can never be fully apprehended. It is driven by natural laws that can be only incompletely understood.
• Constructivism: Relativist. Realities exist in the form of multiple mental, socially and experiment-based constructions, local and specific, dependent for their form and content on the persons who hold them.

Epistemology
• Positivism: Dualist/objectivist. It is both possible and essential for the enquirer to adopt a distant, non-interactive posture. Values and other biasing and confounding factors are thereby automatically excluded from influencing the outcomes.
• Neo-positivism: A continuum from modified objectivist (objectivity remains a regulatory ideal, but it can only be approximated, with special emphasis placed on external guardians such as the critical tradition and the critical community) to subjectivist (in the sense that values mediate enquiry).
• Constructivism: Subjectivist. Enquirer and inquired-into are fused into a single (monistic) entity. Findings are literally the creation of the process of interaction between the two.

Methodology
• Positivism: Experimental/manipulative. Questions and/or hypotheses are stated in advance in propositional form and subjected to empirical tests (falsification) under carefully controlled conditions.
• Neo-positivism: A continuum from modified experimental/manipulative (emphasize critical pluralism; redress imbalances by doing enquiry in more natural settings, using more qualitative methods, depending more on grounded theory, and reintroducing discovery into the enquiry process) to dialogic, transformative (eliminate false consciousness and energize and facilitate transformation).
• Constructivism: Hermeneutic, dialectic. Individual constructions are elicited and refined hermeneutically, and compared and contrasted dialectically, with the aim of generating one (or a few) construction(s) on which there is substantial consensus.

Teleology
• Positivism: Provide a real image of the world by revealing and theorizing on the immutable laws that regulate it in order to predict and control.
• Neo-positivism: A continuum from providing the most reasonable image possible of reality in order to predict and control, to enabling the actors in order for them to construct a clear idea of reality both within and outside themselves that will allow them to change the world.
• Constructivism: Creation of meaning through knowledge construction from negotiations between actors.

Source: Adapted from Guba & Lincoln (1994) and Levy (1994).

1. political justifications, where the aim is to ameliorate social inequities;
2. epistemological justifications, which highlight the importance of understanding the context(s) in which knowledge is produced, distributed, and used;
3. practical justifications, which underscore the production of usable knowledge.

While currently there is broad consensus on the need to involve stakeholders in the evaluation process, the issue of the range of stakeholders to involve is still a matter of debate among different schools in evaluation. A number of authors (Cousins et al., 1996; Russ-Eft & Preskill, 2001) have therefore proposed differentiation of the approaches in evaluation, using a continuum with, at one end, participation limited to a restricted group of stakeholders, and, at the other, participation open to any individual or groups of individuals with a legitimate interest in the object evaluated. Another aspect of the Environmental Adaptation function consists of the roles assumed by an evaluator during an evaluation. These roles evolved considerably over time in response to the need to adapt to the context of programs (Eash, 1985; Torres, Preskill, & Piontek, 1996). They form a varied repertoire (Scriven, 1991), differing from one evaluation model to another. One important component that defines the evaluator’s role, and that appears important to consider in relation to result utilization, is the position of an evaluator in relation to the environment of the intervention and in relation to the decision-makers (Scriven, 1991; Mathison, 1991). The position of an evaluator in relation to decision-makers has often been seen as dichotomous (external evaluator – internal evaluator), an external evaluator supposedly being more independent than an internal one. An evaluator’s internal or external position constitutes, in fact, one of the most basic parameters for differentiation between the various approaches in evaluation at both the practical and the theoretical levels. These two positions respond to different needs of stakeholders: external evaluation offers the perceived benefit of greater objectivity, while the subjectivity of internal evaluation is compensated for by contextual sensitivity, long-term commitment to program improvement, advocacy for evaluation, and cost-effectiveness (Torres, Preskill, & Piontek, 1996). Others, however, including Scriven (1991, 1995), have suggested that it is appropriate to think of the independence of an evaluator in relation to decision-makers along a continuum, rather than as a dichotomy.


The Production Function

The Production function describes the technical course of evaluation, that is, the decisions that concern the set of procedures and means to be used to produce the information that will be the final product of an evaluation. On this level, the approaches involved in evaluation can be structured according to two dimensions: stakeholders’ level of involvement and the degree of control that an evaluator exercises over technical decisions. Stakeholder involvement can be analysed at different phases in the evaluation process: problem identification, evaluation design, development of data collection instruments, data collection, data analysis, interpretation of results, communication, and transfer of findings. This involvement can be considered to be important if it is intensive and concerns all phases of the evaluation process; it can be seen as marginal if it is weak at all phases and as moderate if it concerns only a few phases of the evaluation or if the intensity varies from one phase to another. The locus of control over technical decisions in evaluation can rest on an evaluator, on a partnership between an evaluator and stakeholders, or on stakeholders (Cousins et al., 1996). The third option would be rare, however: although greater participation of stakeholders in the evaluation process is called for by many evaluators, very few have advocated a complete disengagement from control over the process (Preskill & Torres, 1999).

The Goal Attainment Function

The Goal Attainment function is, like the first function discussed above, oriented externally. Through it, a system of action responds to the expectations of its environment, which, in the case of evaluation, consist mainly of providing useful, useable information. Thus, the role of this function is to define the type of utilization targeted by evaluation and to identify the actors to whom the information produced by evaluation will be transferred. According to the Goal Attainment function, evaluations can be used in three different ways:
1. Instrumental: using evaluation to directly, specifically, and punctually influence decisions.
2. Strategic: using evaluation results to support a priori positions on
an issue involving multiple actors and multiple interests (political use), or using the evaluation process as a tactic to delay decision and action, to avoid taking responsibility for a decision, or as a public relations exercise (tactical use).
3. Conceptual: using evaluation findings as integrated background knowledge for understanding. In this process, which has been called an enlightenment process (Janowitz, 1972; Crawford & Biderman, 1969; Rich & Caplan, 1976; Pelz, 1978; Weiss, 1979), the concepts and theoretical perspectives of social and management sciences permeate the decision-making process. Knowledge used in decision-making comprises accumulated and integrated evidence and generalizations rather than findings from a specific study.
Evaluation results can also be intended for a variety of audiences. Audiences may be specific (primary stakeholders) or general (anyone remotely concerned with the evaluation and its results).
To summarize, we propose to discuss evaluation utilization by analysing evaluation as a system for action that must simultaneously fulfil the following four functions (see table 6.2):
1. The Values Maintenance function, which presents the paradigms (positivism, neo-positivism, constructivism) from which any evaluative process may be conducted.
2. The Environmental Adaptation function, which presents the position of the evaluator in comparison to that of the decision-maker (hierarchical position, consultant position, or independent position) and the range of stakeholders involved in the evaluation.
3. The Information Production function, which deals with the level of participation of stakeholders and the locus of control over technical decisions in evaluation (evaluator or an evaluator-stakeholders mix).
4. The Goal Attainment function, which concerns the model of evaluation utilization put forward (instrumental, strategic, conceptual) and the audience for whom the information generated is intended.

Typology of Utilization Models

In the second part of our chapter we propose a typology of evaluation models based on the various configurations of the functions of evaluation as a system of action that appear to be logically coherent.

Table 6.2 The Functions of Evaluation as a System for Action: Dimensions and Sub-dimensions
• Values Maintenance – Paradigms: positivism; neo-positivism; constructivism.
• Environmental Adaptation – Position in relation to decision-makers: hierarchical; consultant; independent.
• Environmental Adaptation – Range of participation of stakeholders: selective; average; large.
• Evaluation Practices – Level of participation of stakeholders: low; medium; intensive.
• Evaluation Practices – Locus of control over the technical aspects of the evaluation: evaluator; mix (evaluator, parties involved).
• Goal Attainment – Model of evaluation utilization: instrumental; conceptual; strategic (tactical, political).
• Goal Attainment – Scope of results transfer: targeted; open.

This typology takes as its foundation the Values Maintenance function, because the paradigmatic position constitutes one of the main elements for differentiating the approaches in evaluation (Williams, 1989). Table 6.3 presents a schematic outline of our evaluation models typology.

Pure Models

CLASSIC POSITIVIST MODEL

The first model belongs to a pure positivist paradigm. The ontological assumption is that reality exists and that it is independent of the evaluator. The intention guiding the evaluator is to discover this reality and the immutable laws that govern it in order to predict and control the phenomenon under study. The goal of evaluation in this context is to generate valid knowledge capable of being generalized and to build theories.

Table 6.3 Typology of the Models for Evaluation Utilization
(Columns: value paradigm; model name; position relative to the decision-maker; range of stakeholder participation; level of stakeholder participation; locus of control over the technical portion of the evaluation; utilization model; transfer of results.)
• Positivism – Classic positivist: independent; selective; low; evaluator; conceptual; open.
• Constructivism – Constructivist: independent; large; intensive; evaluator; strategic (tactical and political); open.
• Neo-positivism – Academic neo-positivist: independent; medium to large; low to medium; evaluator; instrumental, conceptual, and strategic (tactical and political); open.
• Neo-positivism – Expert consultant: consultant; selective; low, medium; evaluator; instrumental; targeted.
• Neo-positivism – Internal evaluator: hierarchical (internal); selective; low; evaluator; instrumental; targeted.
• Neo-positivism – Facilitator consultant: consultant; average to large; intensive; mix; instrumental and strategic (political); partially open.
• Neo-positivism – Empowerment: hierarchical (internal); medium to large; medium to strong; mix; instrumental and conceptual; targeted.

The main utilization of evaluation, whose aim is primarily to generate knowledge, is conceptual, and the results are disseminated to a wide audience, including stakeholders who are not directly involved in the program (Patton, 1997). The epistemological assumption calls for an evaluator to remain neutral or ‘objective’ in relation to the object under evaluation, thereby avoiding any bias from him or herself or other actors. For this reason, evaluators in a positivist paradigm would tend to maintain a certain independence from decision-makers and would involve stakeholders only marginally or not at all. The positivist methodological assumption calls for an evaluator to set up an evaluation plan in such a way as to neutralize the influences of context. To do so, an evaluator will maintain control of the technical decisions in evaluation, and the participation of the parties involved, where it exists, will be limited to consultation. A typical evaluator according to this model works in an academic milieu, the main function of which is to provide theoretically based, generalizable knowledge. The evaluations he or she carries out are conducted rigorously, with special attention paid to external validity, generalizability, and exportability to new environments. Such an evaluator has great expertise in methodology or in a specific field of academic interest, but foremost is his or her methodological expertise. Evaluators take on projects to which their knowledge can make a definite contribution and that in return will allow increased knowledge in their fields of specialization. They very rarely use a multidisciplinary approach. The high degree of sophistication and the need to generalize lead such researchers to involvement in large-scope evaluations, especially in the field of social science. Usually, utilization of the information generated is not the responsibility of the researcher (Cox, 1990). In the field of social science, very few evaluators identify with the positivist paradigm. Yet it is a model that is still seen in the field of clinical epidemiology. Systematic reviews of the effect of health interventions in health care by the Cochrane Collaboration or in education and the social sciences by the Campbell Collaboration are also conducted from this perspective.

CONSTRUCTIVIST MODEL

The second model falls under a constructivist paradigm. The ontological assumption is that there are multiple realities, because reality is a reflection of the conceptions of different actors or groups of actors
involved in the project under study. The goal of evaluation is to generate knowledge by constructing or reconstructing the reality that emerges from the different conceptions, intentions, and strategies of the stakeholders. The information thus generated does not pretend to be universal; because it is firmly established in actors’ value systems, it provides a utilization that is essentially strategic, both tactical and political. The epistemological assumption is that the products of research are the result of the interaction between an evaluator and the object under evaluation. This assumption requires that an evaluator adopt a position independent of the actors present, not to avoid any bias, as in the classic positivist model, but to remain open to the different conceptions of reality brought by the stakeholders. Phenomenological and hermeneutic approaches – the preferred methods in a constructivist paradigm – require that numerous actors be involved in the evaluation process in order to ensure the maximum variation in the reported conceptions of reality. Even though an evaluator maintains control over the technical side of an evaluation in order to play his or her role as catalyst to the full, the participation of actors remains intense at all stages of the evaluation, from the formulation of hypotheses to the interpretation of results. The audience for whom the information generated by the evaluation is intended is relatively large. This is due to the large number and great diversity of actors involved in the project from the beginning, who, by participating intensely in the process, are in a better position to see themselves in the information generated at the end of the process. Guba and Lincoln (1989, 1994) are the leaders in this school of evaluation. They see constructivist evaluation as a typical form of constructivist inquiry. For them, an external evaluator must maintain considerable control over the whole evaluation process. He or she is responsible for identifying all stakeholders; eliciting their concerns and interests in the evaluation; proposing appropriate methods; leading the discussion needed to generate joint constructions about the value of the intervention being evaluated; collecting data if required on items for which there is no consensus; providing the data and conducting negotiations among stakeholders; and developing reports, tailored to each stakeholder group, for communicating to them any consensus and resolutions on issues they have raised. This type of evaluation must strive to take into account the interests of all groups that are put at risk by an evaluation. Moreover, these
stakeholders must participate strongly in all phases of the evaluation. Stakeholders are the users of the evaluation information. Because of the large number and diversity of the stakeholders involved in an evaluation, an audience can be described as wide. Constructivist evaluation is used either formatively (formative merit evaluation, formative worth evaluation) or summatively (summative merit evaluation, summative worth evaluation).

Hybrid Models

The five models that follow fall under what we have called the neo-positivist paradigm. For the requirements of our analysis, we combined under this label two major paradigms (post-positivism and critical theory), primarily because they share the same ontology, which supposes that even though reality exists, knowledge of it may only be approximate. These two sets of beliefs will be presented as poles of the neo-positivist paradigm. Their common ontological perspective suggests that theories are no longer absolute (as is typical of the positivist paradigm) and leaves room for a more contextual understanding. It is thus more amenable to instrumental and strategic utilization. Within the neo-positivist paradigm, the post-positivist pole is closest to the positivist paradigm. Nevertheless, it distinguishes itself from the latter by the loosening of a certain number of constraints considered to be problematic in the positivist paradigm: less rigour for more relevance; less precision for more richness; less elegance for more applicability and more subjectivity; and less verification for more discovery (Guba & Lincoln, 1989). Because there is greater room for subjectivity (a consequence of the epistemological assumption of this paradigm), an evaluator can depart from his or her position of independence from decision-makers in order to act as consultant or internal evaluator, as long as he or she remains cognizant of his or her own biases. Compared with the positivist model, the methodological assumption allows stakeholders to play a more important role; for example, they can be involved in larger numbers to ensure more relevance and in a more intense way to ensure a greater richness. Nevertheless, an evaluator maintains control of the technical side of evaluation to ensure the validity of the evaluative process. Critical theory represents the pole of the neo-positivist paradigm that is closest to the constructivist paradigm. The search for objectivity is no longer a sine qua non of the evaluative process, which all but excludes
the option of an evaluator who is independent of the decision-makers. The methodological assumption of this paradigm calls for a larger number of stakeholders to be intensively involved in the evaluation process, including in its technical side.

ACADEMIC NEO-POSITIVIST MODEL

In this model, an evaluator remains independent of decision-makers; subjectivity, relevance, and richness are guaranteed through the involvement of a larger number of stakeholders (medium or large range) whose participation will not be very intense, so as to guarantee a minimum of rigour and focus. For the same reason, an evaluator maintains control over the technical side of evaluation. Evaluation rests on an a priori theoretical construction of reality that can be modified during the course of the research. This allows for a utilization that can be conceptual, instrumental, or even strategic, owing to the involvement of stakeholders. The audience for whom the information is intended is wider than simply the actors directly involved in the project, because of the need for an evaluator to place the information thus generated at the disposal of anyone likely to have an interest in it. According to this model, evaluators are academic experts who conduct their own evaluations in order to increase knowledge in their own fields of specialization and also, for certain of them, to increase knowledge in the field of evaluation as a discipline. Their main concern is to preserve academic credibility, because they see themselves as researchers first and as evaluators second. This group tries to maintain a certain academic strictness in its evaluation process while at the same time encouraging utilization of the information it produces. Neo-positivists are also less perturbed than classic positivists by the political nature of the evaluation, the need for relations with the stakeholders, and the interdisciplinarity required in a high-quality evaluation (Bickman, 1990). This model is especially prevalent in the literature on evaluation. Two important authors in the field of evaluation can be classed in this group – Carol Weiss and Michael Scriven – both of whom possess a post-positivist orientation (Shadish, Cook, & Leviton, 1991). For Weiss (1998b, p. 15), evaluation utilization is a predominant concern: ‘Evaluation is intended for use. Where basic research puts the emphasis on the production of knowledge and leaves its use to the natural process of dissemination and application, evaluation starts out with use in mind. In its simplest form, evaluation is conducted for a client who has decisions to make and who looks to the evaluation for information on
which to base his decisions. Even when use is less direct and immediate, utility of some kind provides the rationale for evaluation.’ It does not matter to her whether the evaluator is an independent researcher, a consultant, or even an internal evaluator. This choice must be made specifically for each evaluation context (Weiss, 1972a, p. 21). Whatever their positions, however, evaluators must maintain a certain distance or independence from decision-makers while working in close collaboration with the numerous stakeholders. Stakeholders’ involvement can improve the fairness of the evaluation process by ‘increasing the range of information collected to respond to participants’ requests and by increasing access to evaluation information. It can also lead to more utilization by making stakeholders more knowledgeable about evaluation results’ (Weiss, 1983, pp. 91–92). In this model the evaluator nevertheless takes control of the technical portion of the evaluation because, for Weiss, the ‘evaluator is responsible for the use of the evaluation, and the quality of the evaluation is one of the most important factors for utilization’ (Shadish et al., 1991, p. 208). Without renouncing instrumental use, Weiss prefers a conceptual type of use (ibid.). Michael Scriven shows a greater preference than Weiss for evaluators’ independence. Scriven’s approach is guided by an overriding concern for minimizing biases in constructing value judgments (Shadish et al., 1991, pp. 79–80). For him, the evaluator has a duty to establish safeguards against biases. Biases are likely to arise if the evaluator’s position in relation to the decision-maker puts him or her in a situation of dependence, since this might lead to favouring management interests. Accordingly, external, independent evaluation is preferable in most instances.

EXPERT CONSULTANT MODEL

In this model, an evaluator is a consultant who receives a specific mandate from certain stakeholders. In general terms, this places him or her in a situation of greater dependence than an evaluator who is independent of his or her partners. The clarity of the mandate and the fact that actors only have to report to a specific stakeholder mean that there will be a less intense, lower level of involvement of other stakeholders. As evaluation is carried out in response to clients’ needs, utilization will be instrumental in type, and the information generated will be intended primarily for the actors involved in the evaluation project. This model is probably the one most commonly used in practice, even
though it has not been extensively written about in the literature on evaluation. Evaluators who work according to this model are inclined to respond mainly to decision-makers’ need for information. Complying with the scientific community’s standards for quality is a secondary objective.

INTERNAL EVALUATOR

This model differs from the previous one only in the position of an evaluator in relation to decision-makers. Here, an evaluator is an internal agent (i.e., a member of the project staff) even if he or she is part of a special evaluation staff, that is, external to the production/writing/teaching/service part of the project (Scriven, 1991). Because of their hierarchical dependence and their specific roles, such evaluators will seek little involvement from stakeholders and will maintain control over the technical side of evaluation.

FACILITATOR CONSULTANT

In this model, an evaluator is a consultant (as in the expert consultant model). Faced with a divergent, complex problem and a diffuse mandate, however, an evaluator will seek intense involvement of a larger number of stakeholders, extending to the technical side of evaluation. The utilization of the information generated by evaluation is instrumental or strategic because of the involvement of various stakeholders. The audience targeted is somewhat open, since the information is intended for a large variety of stakeholders. The evaluation model that is most akin to this ‘ideal type’ is probably the ‘utilization-focused evaluation’ model developed by Michael Quinn Patton. He describes evaluation as ‘the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming’ and utilization-focused evaluation as that which is ‘done for and with specific, intended primary users for specific, intended uses’ (1997, p. 23). This approach to evaluation puts the emphasis on utilization from the start of the evaluation process. In addition, although conceptual utilization is not excluded, this type of evaluation is primarily geared to instrumental or strategic utilization. As a general rule, an evaluator is a consultant whose role is to identify and organize decision-makers and the potential users of the information, to identify and target the pertinent questions, to select
appropriate methods for generating useful information, to involve the users of the information in the analysis and interpretation of the data, and to cooperate with the decision-makers in the dissemination of the information. This approach can be described as participative because it requires the involvement of program participants. While the involvement of a relatively high number of program participants is not excluded, Patton (1988) prefers quality rather than quantity in the involvement and suggests that one ought to target a restricted, select group of program participants (Stufflebeam, 2001). An evaluator, in concert with targeted participants, determines the technical content of an evaluation.

EMPOWERMENT

According to the empowerment model, an internal evaluator has a mandate that is diffuse and complex, like that of the facilitator consultant model. It means that a larger number of parties will be involved, and they will participate more intensely in all phases of evaluation, including the technical side. Utilization is instrumental or conceptual, because the goal of this type of evaluation, which is long term, is to generate knowledge that will be integrated and will enlighten future decisions. The author most often associated with this perspective is David Fetterman, who has propounded a comprehensive theory of the empowerment evaluation model. Fetterman defines empowerment evaluation as ‘the use of evaluation concepts, techniques and findings to foster improvements and self-determination’ (1999, p. 5). This type of evaluation is preferably conducted by an internal evaluator: ‘The selection of inside facilitators increases the probability that the process will continue and be internalized in the system and creates the opportunity for capacity building. With an outside evaluator, the evaluation can be an exercise in dependency rather than an empowering experience’ (Fetterman, 1995, p. 181). Empowerment evaluation is a group activity (Fetterman, 1995) that necessitates the consistent involvement of the program participants (Fetterman, 1996), who are involved throughout the whole evaluation process. Participants’ involvement is seen as necessary to ensure relevance. Thus, it is the group formed by the evaluator and the program participants that is responsible for evaluation. Empowerment evaluation is a type of action research. Stakeholders control the study and conduct the work (Fetterman, 1999, p. 10). Utilization is both instrumental and conceptual: evaluation should not only lead to program improvement but also provide a new
understanding about roles, structures, and program dynamics. Fetterman describes this new insight as illumination, ‘an eye-opening, revealing, and enlightening experience’ (1996, p. 15). The audience is primarily a targeted one, because empowerment evaluation is designed to help program staff and participants to use evaluation findings (Fetterman, 1997, p. 257).

Conclusion

Health care is a highly complex field. Its complexity stems from the unresolved confrontation of the values, structures, attitudes, and modes of thinking of the various stakeholder groups that interact within it. The health care system is indeed highly fragmented. It has been described as torn between four fundamental regulation logics (professional, administrative, market, and democratic) (Contandriopoulos, 1999; Contandriopoulos et al., 2000) and as comprising four distinct worlds (the worlds of cure, care, control, and community) (Glouberman & Mintzberg, 2002). Accordingly, there can be no magic bullet for utilization of evaluation in such a disunified and thus complex environment. Evaluation will be conducted for a variety of reasons and objectives, using various approaches and methods. It will inevitably be of interest to a wide range of stakeholders, whose involvement will vary in intensity and may take several forms. These stakeholders might use the evaluation process and results in many different ways. Increasingly, the literature on evaluation has shown that there are several ways to use evaluations and that, while numerous factors for utilization have already been revealed (Leviton & Hughes, 1981), there are no universal formulas to ensure that the evaluation process will lead to an optimal form of utilization. In this chapter we have proposed that the question of evaluation utilization by decision-makers is contingent on the specific evaluation model implemented, and that in any approach to it the nature of the intervention to be evaluated and the context in which it occurs must be borne in mind. There is no evaluation model that would guarantee all forms of utilization in all circumstances. Various evaluation models offer significant potential for a particular form of utilization, as long as they maintain an internal coherence among the different functions they must perform as social action systems. The seven ideal-type evaluation utilization models described in the previous sections were based on this logical coherence among functions.

Table 6.4 Outcomes of Misevaluation
Proposed hypothesis: conformity of the evaluation with the problems under study can:
• Findings used by an unaware user (outcome: mistaken use): decrease the probability of unaware users using erroneous data.
• Findings used by an aware user (outcome: misuse): decrease the probability of decision-makers intentionally using erroneous data.
• Findings not used (outcome: justified non-use): decrease the probability of an evaluation proving to be a waste of resources.

Such coherence depends on two conditions. The first is alignment between the nature of the intervention or problem to be evaluated and the type of evaluation that is chosen. The impact that the specific decision-making context can have on the probability of a particular evaluation type's being used should not be underestimated (Mitchell, 1990). Beyond the issue of context, however, it is the actual nature of the object being evaluated that needs to be taken into account (Glouberman & Zimmerman, 2002). Some problems, both simple and complicated (the latter being easily broken up into a series of simple problems), can be considered convergent problems (Schumacher, 1977). For situations in which, as a general rule, the causes, nature, and probable evolution of a problem can be identified, in which possible solutions exist, and in which the application of these solutions is usually effective, the most suitable evaluation models are the classic positivist model, the academic neo-positivist model, the expert consultant model, and the internal evaluator model.

Table 6.5 Outcomes of Evaluation

Evaluation | User action | Outcome | Proposed hypothesis: Agreement among stakeholders on the epistemological, methodological, and political foundations of the evaluation will:
Impact on informative process | Inappropriate use | Misuse (a user engages in a particular action where misuse is the desired outcome) | Decrease the occurrences of misuse.
Impact on informative process | Appropriate use | Utilization | Increase the probability of utilization.
No impact on informative process | Unintentional non-use | Non-use | Decrease the chances of non-use.
No impact on informative process | Blatant non-use | Misuse (deliberate inaction when results of an evaluation could inform program decision-making) | Make deliberate misuse difficult.

Source: Adapted from Christie and Alkin (1999).

For complex or divergent problems, however, where the causes are difficult to identify (because they are multiple, non-specific, overlapping, embedded, evolving, or sometimes even contradictory) and for which possible solutions are difficult to find and do not provide any guarantee of success, the more pertinent types of evaluation are the constructivist model, the facilitator consultant model, and the empowerment model. Selecting an inappropriate evaluation model for a problem increases the probability that an evaluation will be judged inadequate (misevaluation). Deploying an incorrect model can compromise utilization in several different ways (see table 6.4) and lead to cases of justified non-utilization, misuse, and mistaken use, which can be particularly dangerous for organizations (Christie & Alkin, 1999).

The second condition to be met for coherence, and thus for utilization, is a shared understanding (by an evaluator and the potential user(s) of an evaluation) of the epistemological, methodological, and political foundations of an evaluation (see table 6.5). Meeting this second condition increases the probability that evaluations will have an impact on the decision-making process. The fact that an evaluator and user(s) share the same paradigm – in other words, the same reference system – limits the possibility of misunderstanding and even incomprehension. Occurrences of non-utilization or misuse that are linked not to a lack of pertinence or to the poor quality of an evaluation, but rather to disagreement over the models for evaluation and utilization, are thereby decreased. The chances of meeting these two conditions for optimum utilization of the evaluation are much greater when all the actors involved, independently of their epistemological positions, engage with the evaluation process from a perspective of exchange and linkage. According to Huberman (1987), such 'sustained interactivity' is the strongest predictor of evaluation utilization and can be a guiding principle in any of the evaluation utilization models we have discussed in this chapter.

REFERENCES

Alkin, M.C. (1990). Debates on evaluation. Thousand Oaks, CA: Sage.
Alkin, M.C. (1991). Evaluation theory development: II. In M.W. McLaughlin & D.C. Phillips (Eds.), Evaluation and education at quarter century (pp. 91–112). Chicago: University of Chicago Press.
Alkin, M.C., Daillak, R., & White, P. (1979). Using evaluations: Does evaluation make a difference? Beverly Hills, CA: Sage.
Bickman, L. (1990). The two worlds of evaluation: An optimistic view of the future. Evaluation and Program Planning, 13, 421–422.
Bourdieu, P., & Wacquant, L.J. (1992). Réponses: Pour une anthropologie réflexive. Paris: Éditions du Seuil.

Christie, C.A., & Alkin, M.C. (1999). Further reflections on evaluation misutilization. Studies in Educational Evaluation, 25, 1–10.
Contandriopoulos, A.P. (1999). La régulation d'un système de soins sans murs. In J.P. Claveranne, C. Lardy, G. de Pouvourville, A.P. Contandriopoulos, & B. Experton (Eds.), La santé demain: Vers un système de soins sans murs (pp. 87–102). Paris: Economica.
Contandriopoulos, A.P., Champagne, F., Denis, J.L., & Avargues, M.C. (2000). Evaluation in the health sector: A conceptual framework. Revue d'épidémiologie et de santé publique, 48(6), 517–539.
Contandriopoulos, A.P., de Pouvourville, G., Poullier, J.P., & Contandriopoulos, D. (2000). À la recherche d'une troisième voie: Les systèmes de santé au XXIe siècle. In M.P. Pomey & J.P. Poullier (Eds.), Santé publique (pp. 637–67). Paris: Ellipses.
Cousins, J.B., & Leithwood, K.A. (1986). Current empirical research on evaluation utilization. Review of Educational Research, 56(3), 331–364.
Cousins, J.B., Donahue, J.J., & Bloom, G.A. (1996). Collaborative evaluation in North America: Self-reported opinions, practices and consequences. Evaluation Practice, 17(3), 207–226.
Cox, G.B. (1990). On the demise of academic evaluation. Evaluation and Program Planning, 13, 415–419.
Crawford, E.T., & Biderman, A.D. (1969). The functions of policy-oriented social science. In E.T. Crawford & A.D. Biderman (Eds.), Social scientists and international affairs (pp. 233–243). New York: Wiley.
Cronbach, L.J., Ambron, S.R., Dornbush, S.M., Hess, R.D., Hornik, R.C., Phillips, D.C., Walker, D.F., & Weiner, S.S. (1980). Toward reform of program evaluation: Aims, methods, and institutional arrangements. San Francisco: Jossey-Bass.
Denis, J.L., Béland, F., & Champagne, F. (1996). Le chercheur et ses interlocuteurs: Complicité et intéressement dans le domaine de la recherche évaluative. In Évaluer: Pourquoi? Actes du colloque du CQRS tenu à l'occasion du 63e congrès de l'ACFAS (pp. 21–31). Quebec City: CQRS.
Eash, M.J. (1985). A reformulation of the role of the evaluator. Educational Evaluation and Policy Analysis, 7(3), 249–252.
Fetterman, D.M. (1995). In response to Dr. Daniel Stufflebeam's 'Empowerment evaluation, objectivist evaluation, and evaluation standards: Where the future of evaluation should not go and where it needs to go' (October 1994, 321–338). Evaluation Practice, 16(2), 179–199.
Fetterman, D.M. (1996). Empowerment evaluation: An introduction to theory and practice. In D.M. Fetterman, S.J. Kaftarian, & A. Wandersman (Eds.), Empowerment evaluation: Knowledge and tools for self-assessment and accountability (pp. 3–46). Thousand Oaks, CA: Sage.

Fetterman, D.M. (1997). Empowerment evaluation: A response to Patton and Scriven. Evaluation Practice, 18(3), 253–266.
Fetterman, D.M. (1999). Reflections on empowerment evaluation: Learning from experience [Special issue]. Canadian Journal of Program Evaluation, 5–37.
Forss, K., Rebien, C.C., & Carlsson, J. (2002). Process use of evaluations: Types of use that precede lessons learned and feedback. Evaluation, 8(1), 29–45.
Friedberg, E. (1993). Le pouvoir et la règle. Paris: Éditions du Seuil.
Glouberman, S., & Mintzberg, H. (2001). Managing the care of health and the cure of disease. Part I: Differentiation. Part II: Integration. Health Care Management Review, 26(1), 54.
Glouberman, S., & Zimmerman, B. (2002). Systèmes compliqués et systèmes complexes. Commission sur l'avenir des soins de santé au Canada, étude 8. Available online at www.hc-sc.gc.ca/francais/pdf/romanow/8_Glouberman_F.pdf.
Greene, J.C. (1988a). Communication of results and utilization in participatory program evaluation. Evaluation and Program Planning, 11, 341–351.
Greene, J.C. (1988b). Stakeholder participation and utilization in program evaluation. Evaluation Review, 12, 91–116.
Guba, E.G., & Lincoln, Y.S. (1989). Fourth generation evaluation. Thousand Oaks, CA: Sage.
Guba, E.G., & Lincoln, Y.S. (1994). Competing paradigms in qualitative research. In N.K. Denzin & Y.S. Lincoln (Eds.), Handbook of qualitative research (pp. 105–117). Thousand Oaks, CA: Sage.
Huberman, M. (1987). Steps toward an integrated model of research utilization. Knowledge: Creation, Diffusion, Utilization, 8(4), 586–611.
Huberman, M., & Cox, P. (1990). Evaluation utilization: Building links between action and reflection. Studies in Educational Evaluation, 16(1), 157–179.
Janowitz, M. (1972). Professionalization of sociology. American Journal of Sociology, 78, 105–135.
Jenlik, P.M. (1994). Using evaluation to understand the learning architecture of an organization. Evaluation and Program Planning, 17(3), 315–325.
Johnson, R.B. (1998). Toward a theoretical model of evaluation utilization. Evaluation and Program Planning, 21(1), 93–110.
Kirkhart, K.E. (2000). Reconceptualizing evaluation use: An integrated theory of influence. New Directions for Evaluation, 88, 5–23.
Lester, J.P., & Wilds, L.J. (1990). The utilization of public policy analysis: A conceptual framework. Evaluation and Program Planning, 13, 313–319.
Leviton, L.C., & Hughes, E.F.X. (1981). Research on the utilization of evaluations. Evaluation Review, 5(4), 525–548.

Levy, R. (1994). Croyance et doute: Une vision paradigmatique des méthodes qualitatives. Ruptures – Revue Transdisciplinaire en Santé, 1(1), 92–100.
Mathison, S. (1991). What do we know about internal evaluation? Evaluation and Program Planning, 14, 159–165.
Mitchell, J. (1990). Policy evaluation for policy communities: Confronting the utilization problem. Evaluation Practice, 11(2), 109–114.
Parsons, T. (1951). The social system. New York: Free Press.
Parsons, T. (1977). Social systems and the evolution of action theory. New York: Free Press.
Patton, M.Q. (1978). Utilization-focused evaluation. Beverly Hills, CA: Sage.
Patton, M.Q. (1988). Six honest serving men for evaluation. Studies in Educational Evaluation, 14, 301–330.
Patton, M.Q. (1997). Utilization-focused evaluation (3rd ed.). Thousand Oaks, CA: Sage.
Pelz, D.C. (1978). Some expanded perspectives on use of social science in public policy. In J.M. Yinger & S.J. Cutler (Eds.), Major social issues: A multidisciplinary view (pp. 346–357). New York: Free Press.
Preskill, H. (1994). Evaluation's role in enhancing organizational learning. Evaluation and Program Planning, 17(3), 291–297.
Preskill, H., & Torres, R.T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage.
Rich, R.F. (1977). Uses of social science information by federal bureaucrats: Knowledge for action versus knowledge for understanding. In C.H. Weiss (Ed.), Using social research in public policy making (pp. 199–211). Lexington, MA: Lexington Books.
Rich, R.F., & Caplan, N. (1976, June). Instrumental and conceptual uses of social science knowledge and perspectives: Means/ends matching versus understanding. Paper presented at the OECD conference on dissemination of economic and social development research results, Bogota, Colombia.
Rocher, G. (1972). Talcott Parsons et la sociologie américaine. Paris: Presses universitaires de France.
Russ-Eft, D., & Preskill, H. (2001). Evaluation in organizations: A systematic approach to enhancing learning, performance and change. Boston: Perseus Books.
Schumacher, E.F. (1977). A guide for the perplexed. New York: Harper & Row.
Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.
Scriven, M. (1995). Evaluation consulting. Evaluation Practice, 16(1), 47–57.
Shadish, W.R., Cook, T.D., & Leviton, L.C. (1991). Foundations of program evaluation. Newbury Park, CA: Sage.
Shulha, L.M., & Cousins, J.B. (1997). Evaluation use: Theory, research, and practice since 1986. Evaluation Practice, 18(3), 195–208.

Sicotte, C., Champagne, F., Contandriopoulos, A.P., Barnsley, J., Béland, F., Leggat, S.G., Denis, J.L., Bilodeau, H., Langley, A., Brémond, M., & Baker, G.R. (1998). A conceptual framework for the analysis of health care organisations' performance. Health Services Management Research, 11, 24–48.
Sicotte, C., Champagne, F., & Contandriopoulos, A.P. (1999). La performance organisationnelle des organismes publics de santé. Ruptures – Revue Transdisciplinaire en Santé, 6(1), 34–36.
Stufflebeam, D.L. (2001). Evaluation models. New Directions for Evaluation, No. 89. San Francisco: Jossey-Bass.
Torres, R.T., Preskill, H., & Piontek, M.E. (1996). Evaluation strategies for communicating and reporting. Thousand Oaks, CA: Sage.
Turnbull, B. (1999). The mediating effect of participation efficacy on evaluation use. Evaluation and Program Planning, 22(2), 131–140.
Weiss, C.H. (1972a). Evaluation research: Methods for assessing program effectiveness. Englewood Cliffs, NJ: Prentice-Hall.
Weiss, C.H. (1972b). Utilization of evaluation: Toward comparative study. In C.H. Weiss (Ed.), Evaluating action programs: Readings in social action and education (pp. 318–326). Boston: Allyn and Bacon.
Weiss, C.H. (1979). The many meanings of research utilization. Public Administration Review, 39, 426–431.
Weiss, C.H. (1980). Knowledge creep and decision accretion. Knowledge: Creation, Diffusion, Utilization, 1(3), 381–404.
Weiss, C.H. (1981). Measuring the use of evaluation. In J.A. Ciarlo (Ed.), Utilizing evaluation: Concepts and measurement techniques (pp. 17–33). Thousand Oaks, CA: Sage.
Weiss, C.H. (1983). Toward the future of stakeholder approaches in evaluation. In A.S. Bryk (Ed.), Stakeholder-based evaluation (pp. 83–96). San Francisco: Jossey-Bass.
Weiss, C.H. (1998a). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21–33.
Weiss, C.H. (1998b). Evaluation: Methods for studying programs and policies (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.
Weiss, C.H., & Bucavalas, M.J. (1980). Truth tests and utility tests: Decision makers' frames of reference for social science research. American Sociological Review, 45, 302–313.
Williams, J.E. (1989). A numerically developed taxonomy of evaluation theory and practice. Evaluation Review, 13(1), 19–31.


7 A Cognitive Science Perspective on Evidence-Based Decision-Making in Medicine

LAMBERT FARAND AND JOSE AROCHA

Introduction

The Art and Science of Medicine

Medicine is often thought of as a mixture of art and science. Its 'scientific' aspects consist of the wealth of knowledge arising from the biomedical researcher's laboratory and from the epidemiologist's desk, which provides empirically generalizable knowledge to health care professionals. Its 'artistic' aspects derive from the physician's experience, in the form of intuitions, rules of thumb, heuristics, and 'remindings' of past experiences with particular patients. In some ways, these two aspects are integrated in medical practice to the extent that medicine makes use of both scientific and experiential knowledge (Enkin & Jadad, 1998). In recent years, however, there has been an increasing emphasis on the science of medicine. The asserted need to establish a scientific foundation for medical practice is founded on the claimed inherent superiority of scientifically validated knowledge over experience-based knowledge. This claim is based on, among other things, the results of utilization studies showing important variations in medical practice for seemingly homogeneous classes of health problems. Also, as we discuss in this chapter, studies in which actual physicians' decisions are compared with those prescribed by normative models have identified numerous cognitive biases potentially leading to suboptimal care.

The word evidence is commonly used in many contexts, but nowhere is this term given more importance than in the sciences and some professional fields, such as medicine, where claims to the validity of theories or hypotheses are based on their degree of empirical support.


In these contexts, evidence has come to mean scientific evidence. Medical procedures require the support of scientific evidence rather than of opinion, on the well-founded assumption that scientific evidence is more credible than other types of evidence. Disciplines like medicine that have not reached the level of maturity of the harder sciences still debate the role that empirical evidence plays in their development and its application to real-world problems. In the health sciences, evidence-based medicine (EBM) has been advertised as a possible integrative force for medical practice (Davidoff, 1999), with the objective of bringing 'the best evidence from clinical and health care research to the bedside, to the surgery or clinic, and to the community' (Sackett & Haynes, 1995, p. 5). This movement has promoted the use of scientific evidence in clinical decision-making (Haynes, Hayward, & Lomas, 1995; Kennedy, 1999; Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996) to improve the quality and efficiency of health care (Grimshaw & Russell, 1993; Lomas, Anderson, Dominic-Pierre, Vaida, & Enkin, 1989). Implicitly, proponents of EBM strive to augment the scientific rationality of medical practice.

A Cognitive Science Perspective

EBM does not entirely exclude the artistic component of medical practice, however, because its goal has been formulated as 'integrating individual clinical expertise with the best available external clinical evidence from systematic research' (Sackett et al., 1996, p. 71; emphasis in original). This notion of expertise has received a great deal of attention in cognitive science over the last twenty years and has given rise to a wealth of information about its process characteristics and development. A cognitive science perspective on the matter can help to refine EBM by providing a detailed understanding of how those expert processes integrate, or sometimes interfere, with the use of external scientific evidence in decision-making. Empirical research on human reasoning and decision-making originates from two complementary traditions, the 'descriptive' and the 'normative,' which differ in terms of their goals and their choices of 'gold standards.' In the descriptive tradition, the goal is to understand the processes that underlie high-level cognitive performance and its development (Ericsson & Smith, 1991), and the gold standard is expert reasoning. In the normative tradition, the goal is to identify biases in human judgment (Kahneman, Slovic, & Tversky, 1982), compared with prescriptive models of decision-making that are based on probability and expected utility theory (Weinstein & Fineberg, 1980).


The two traditions complement each other, because the descriptive approach may help to explain the biases that are identified with the normative approach. In the remainder of this chapter we review the principal findings from the descriptive research tradition. We then examine selected findings from the normative decision-making research tradition. By shedding light on the interactions between the art and the science of medicine, both research traditions offer a deeper understanding of certain obstacles to implementing EBM and allow us to identify strategies for improvement.

Descriptive Studies of Medical Reasoning and Decision-Making

Cognitive Characteristics of Expert vs Non-Expert Medical Reasoning

EXPERTISE

In cognitive studies of expertise, the selection criteria for subjects typically pertain to academic background (environment, duration, and level of training); duration of pertinent experience (in most domains, expert-level performance is not attained before the completion of ten years of full-time practice involving a significant level of feedback); peer recognition; practice environment (such as academic vs community hospital1); fees charged; and, in certain cases, outcomes of professional activity. In studies of medical expertise, experts are generally peerhonoured specialists who have been practising in academic hospitals for about ten years; intermediates are generally residents or recently certified specialists (between one and five years); and novices generally are medical students and interns. Unfortunately, general practitioners have not been the subject of very much attention in such studies. In these studies, medical expertise is conceived in relative terms, that is, based on the familiarity of the problem-solver with the problem at hand. Thus, subjects corresponding to the preceding criteria for expertise will not behave as experts if they are asked to solve problems that lie outside their domain of expertise. In such cases, subjects – sometimes called sub-experts (Patel, Groen, & Arocha, 1990) – will generally fall back on reasoning mechanisms that are typical of lower levels of expertise.

REASONING STRATEGIES

The study of expert problem-solving in several domains (Chi, Glaser, & Farr, 1988; Ericsson & Smith, 1991; Feltovich, Ford, & Hoffman, 1997) has shown that, in routine problems, experts exhibit different reasoning strategies from those of novices (Patel, Arocha, & Kaufman, 1994). Experts use predominantly ‘data-driven,’ ‘forward,’ or ‘inductive’ reasoning, which consists of reasoning from the given information of a case to a hypothesis (Patel & Groen, 1986), in contrast with non-experts who use predominantly ‘hypothesis-driven,’ ‘backward,’ or ‘deductive’ reasoning. If we take into account Evans and Gadd’s (1989) ontological model, this distinction means that experts infer predominantly upwards along an abstraction hierarchy of medical concepts, in contrast to non-experts. Whereas data-driven reasoning is fast but error prone in the absence of adequate knowledge, backward reasoning is slow and is used when domain knowledge is inadequate. This suggests a distinction between expertise and experience. Experience is a prerequisite for expertise, but it is not a sufficient condition. For expertise to develop with experience (or practice), problem-solving must be performed in an environment that provides sufficient feedback and/or tutoring, as well as access to high-quality ‘declarative’ knowledge (i.e., ‘evidence’). Without such an environment, experienced subjects will apply forward reasoning with inadequate knowledge when they should have used backward reasoning. Experts, on the other hand, are better able to monitor their own performance (Patel, Arocha, & Glaser, 2001) and to resort to backward reasoning when they recognize that the problem at hand is unfamiliar or very complex. In general, data-driven reasoning is associated with successful performance in routine problems, whereas hypothesis-driven reasoning is associated with non-routine problems and often with unsuccessful performance (Patel & Groen, 1986). Given a ‘relativist’ definition of expertise, experts are obviously more often in a position to solve routine problems than non-experts, but in non-routine problems (such as those that lie outside their domain or very complex problems within their domain) they fall back on non-expert styles of reasoning. A study by Patel, Groen, and Arocha (1990), who examined the conditions underlying the shift in directionality of reasoning, shows that the change from forward to backward reasoning occurs when unexplained observations in a problem are contrasted against a hypothesis held as an explanation. In other words, inconsistencies between problem data and hy-


pothesis disrupt forward reasoning. These results have been supported in other studies (e.g., Joseph & Patel, 1990; Patel, Arocha, & Kaufman, 1994). Along the same lines, conditional reasoning (i.e., the establishment of conditional relations in a semantic network) is associated primarily with forward inferencing, while causal reasoning is linked to backward inferencing; both carry the same implications in terms of diagnostic performance. Consequently, research on pathophysiological reasoning (involving mostly causal inferences) has shown that it is rarely used by experts for solving routine clinical problems (as opposed to providing post hoc explanations for the reasoning process, as is often the case in pedagogical interactions), and that it is associated with high problem complexity and low diagnostic performance (Patel, Evans, & Groen, 1989). At the same time, sub-experts, especially novices, make more extensive use of pathophysiological (causal) reasoning, with detrimental consequences in terms of performance (i.e., more errors and fewer coherent solutions). PATHOPHYSIOLOGY

Studies of the use of pathophysiology in clinical reasoning (Patel, Evans, & Groen, 1989) suggest that much of the biomedical knowledge acquired in medical school is not used effectively in clinical decisionmaking. Misconceptions in biomedical knowledge are common, and can lead to diagnostic inaccuracies (Feltovitch, Spiro, & Coulson, 1989). Patel, Evans, and Groen consider causal reasoning to be a ‘weak’ inferential method that is applied, like goal-driven reasoning, in situations of insufficient specific knowledge about a case and that is correlated with errors. Boshuizen and Schmidt (1992) demonstrated a decrease in the use of biomedical knowledge with increasing expertise, but also an increase in the quantity and depth of this knowledge. They interpret these findings through an ‘encapsulation’ mechanism that allows experts to possess large amounts of biomedical knowledge while at the same time minimizing interference with their decision-making processes. Patel, Evans, and Groen have also shown that novices and intermediates use more biomedical concepts than experts during the course of problem-solving, but that these concepts are often applied inappropriately and interfere with the reasoning process. ONTOLOGICAL LEVELS

Inferences by experts typically subsume multiple ontological (i.e., abstraction) levels, moving, for example, directly from observations to


diagnosis in a single step. As a result, a larger proportion of the reasoning by experts occurs at the higher (i.e., more abstract) levels of the ontological hierarchy. For instance, in studies of diagnostic problemsolving, Patel, Evans, and Kaufman (1989) have shown that students tend to focus on observations or findings (depending on their level of training); residents tend to rely on findings and facets (facets represent pathophysiological concepts at an abstraction level that is intermediary between findings and diagnoses); and expert physicians concentrate on the higher ontological levels (i.e., facets and diagnoses). Because the time frame for single inferential steps is constant whatever their complexity (Anderson, 1983), experts solve problems faster than nonexperts because they need to apply a smaller number of their more productive inferences (i.e., spanning more levels of the ontological hierarchy in a single step). The probability of reasoning errors’ being directly related to the length of inference chains is also why experts are less prone to making errors than non-experts. EXPERT MONOTONICITY

Expert reasoning is also characterized by its monotonicity, resulting in the cumulative construction of a solution model without retraction of parts of the model (‘backtracking’) during the course of problem-solving (Farand, 1996; Patel & Groen, 1986). Conversely, sub-expert and novice problem-solving processes involve a greater amount of non-monotonicity (i.e., faulty inferences that are later retracted). Experts also produce more coherent solution models (i.e., their semantic networks are more densely interconnected and they contain fewer ‘loose ends’ and isolated ‘semantic islands’) than non-experts (Patel & Groen). MEMORY STRUCTURES

These process characteristics of expert reasoning may be explained by the configuration of their long-term memory structures, sometimes called ‘schemata,’ ‘frames,’ or ‘scripts’ in cognitive psychology (Anderson, 1983; Gick & Holyoak, 1983; Kintsch, 1974; Schank & Abelson, 1977; Winograd, 1975) and artificial intelligence (Minsky, 1975; Sowa, 1984). In the contexts where they are activated, these memory structures support interpretation, prediction, action, and recall. Compared with those of non-experts, the schemata of experts are more numerous (i.e., experts have a larger quantity of knowledge). It is estimated (Charness & Shultetus, 1999; Gobet & Simon, 1996; Patel, Arocha, & Glaser, 2001) that the long-term memory of experts holds


between 50,000 and 100,000 chunks of information, allowing them to recognize a larger set of situations, to solve a wider variety of problems, and to be more flexible in problem-solving. Each schema is also ‘larger’ in the sense that, during a single inferential step, it can match more complex patterns from the evolving mental model (i.e., more elements of the model and thus more aspects of a problem), and the model transformations that it produces are more important and coherent (i.e., contributing to the model more nodes and relations, which are also more densely interconnected). Working memory – the part of memory that temporarily holds these activated schemata upon which attention is focused at any time during problem-solving – has a limited capacity in terms of the number of simultaneously active schemata it can hold (between five and seven), this limitation being, however, independent of their size (Anderson, 1983). The larger expert schemata therefore also increase the power and flexibility of reasoning: with the same number of activated schemata, experts have more knowledge available at each step of the reasoning process. These properties also explain the superior memory of experts for relevant clinical information (Patel, Groen, & Frederiksen, 1986). Because their memory structures are not as large and well organized as that of experts, sub-experts and novices cannot rely on powerful knowledge-based pattern recognition mechanisms (‘strong methods’), and they must construct their solutions in a painstaking sequential fashion involving numerous less productive inferential steps under the control of general-purpose problem-solving heuristics (‘weak methods’). HYPOTHETICO-DEDUCTIVE AND CAUSAL REASONING

These types of reasoning represent facilitating conditions for learning (because they allow the exploration of alternatives and the fine-tuning of diagnostic schemata, respectively) and they are considered to be necessary steps in the acquisition of clinical competence (Feltovich, Spiro, & Coulson, 1989). Both procedures are taught explicitly to medical students in so-called problem-based curricula (Barrows, 1983). Because they correspond, from an epistemological perspective, to the canons of positivism, they are also used to emphasize the ‘scientific’ character of medical reasoning. Studies concerning cognitive correlates of problem-based vs traditional curricula (Patel, Groen, & Norman, 1993) have shown that students from the problem-based curriculum, who are explicitly taught the hypothetico-deductive method, tend to use that method at more advanced stages of their medical education than stu-


dents from the traditional curriculum who earlier adopt expert-like forward reasoning. These results have been found to generalize to residents educated under the two approaches (Patel, Arocha, & Leccisi, 2001), but they may not be applicable at higher levels of expertise, where many other factors have to be considered. PROCEDURALIZATION AND COMPILATION OF KNOWLEDGE

General theories of cognition (Laird, Newell, & Rosenbloom, 1987; Anderson, 1983) shed some light on the processes by which memory structures and reasoning processes evolve, under the influence of feedback and experience, from their sub-expert to their expert forms. Two general cognitive processes are involved: proceduralization and compilation of knowledge. Proceduralization consists of the transformation of declarative memory structures (i.e., memory about what is true: the facts of the trade, including the scientific ‘evidence’ of EBM) into procedural memory structures (i.e., memory about what to do). Typically, sub-experts know a lot of factual (declarative) information about their domain (e.g., disease and symptom classifications, causal mechanisms of disease, diagnostic and therapeutic options for specific problems), and they use general purpose, problem-solving strategies (‘weak methods,’ such as hypothetico-deduction) to search through these facts during the course of problem-solving. With feedback and experience, this factual knowledge is progressively transformed into cognitive procedures (proceduralization) that are directly applicable to problem-solving, thus eliminating much of the need for weak inference procedures except in very complex or unfamiliar situations. Compilation is the process by which the problem-solving strategies of experts are integrated into their procedural memory structures. It also refers to the integration of smaller chunks of knowledge into larger and more powerful schemata. As a result of proceduralization and compilation, expert knowledge becomes readily applicable for problemsolving, and reasoning loses some of its sequential character, which was imposed by the application of weak inference procedures on small chunks of declarative knowledge. This process is sometimes called ‘parallelization’ (Anderson, 1983). Under the influence of these developmental processes, experts’ knowledge becomes more productive, flexible, effective, and efficient, and they become better able to monitor their own performance (Patel, Arocha, & Glaser, 2001). However, their knowledge also acquires less


desirable properties. First, proceduralized and compiled knowledge is less amenable to explanation, since cognitive research has shown that only declarative knowledge may be processed for language production (Anderson, 1983). As a result, experts have difficulty explaining their actual decision-making processes; in pedagogical situations, for example, they will easily construct post hoc descriptions of their reasoning processes that have limited empirical validity (Ericsson & Simon, 1993). These explanations often take the form of sub-expert reasoning processes; for example, experts will pretend that they have used hypotheticodeduction with declarative knowledge. This may be one of the reasons why earlier research (e.g., Elstein, Shulman, & Sprafka, 1978) conveyed the impression that experts actually used such methods. The difficulty of explaining decisions that are typically made in a few large leaps from the given of a problem to a solution may also provide the basis for an operational definition of ‘intuition.’ The second undesirable consequence, which is especially important for EBM, is that once proceduralized and compiled, knowledge becomes harder to modify. In fact, cognitive research suggests that further knowledge acquisition, in a form suitable for efficient problemsolving, must go through the painstaking and time-consuming processes of proceduralization and compilation from newly acquired declarative knowledge (Anderson, 1983), such as the evidence entailed in EBM. Maintaining high-level expertise in a rapidly changing domain like medicine is so challenging that expertise is usually a transient phenomenon; it reaches a peak after ten to fifteen years of practice and intensive maintenance and then declines progressively. The third undesirable consequence is that expert reasoning processes lead to biases, in contrast to normative decision-analytic models. This aspect is considered below. COGNITIVE CHARACTERISTICS OF MEDICAL REASONING IN NATURALISTIC SETTINGS

Most of the studies from the cognitive information-processing tradition that we have described so far have been carried out in laboratory settings, which provide good control over the experiment and facilitate replication. Task parameters may also be manipulated precisely and one at a time. Questions have been raised, however, about the equivalence of laboratory situations with tasks performed in real clinical settings where, among other things, subjects almost never have access to


all pertinent information at once, thus imposing a sequential character onto the decision-making process, and multiple environmental constraints have a bearing on their decision-making. Moreover, the multisensorial communication bandwidth, the sense of purpose and motivation, and the contextual opportunities of natural situations are absent from laboratory settings. This suggests that some of the more complex characteristics of decision-making, including its real-time dynamics, may be better analysed in real clinical settings. As a consequence, cognitive researchers have increasingly adopted a naturalistic decision-making paradigm. DATA COLLECTION

In diagnostic tasks occurring in real clinical settings, physicians almost never have access to all pertinent information at once, and their information acquisition processes vary in relation to their level of expertise. In a study of doctor-patient interactions (Kaufman & Patel, 1988), experts generated accurate initial hypotheses from initial information about a case (observations and findings) in a data-driven process that the authors called diagnostic reasoning. Experts then used these hypotheses to reason predictively (i.e., from diagnoses to lower-level concepts) in order to selectively acquire confirming information from the patient. Data-driven processes in this instance were thus controlling goal-driven processes. Non-experts did not demonstrate this dominant data-driven information acquisition process, which led to the collection of more irrelevant pieces of information and to diagnostic inaccuracies. When analysing other situations in which clinical data had to be acquired sequentially, Joseph and Patel (1990) found that forward-directed reasoning also controlled the data collection processes of experts. Experts’ data collection strategies are thus more specific and sensitive than those of non-experts (Patel, Evans, & Groen, 1989). In constructing their solution models, experts elicit very early in a problemsolving episode a small set of hypotheses containing the solution that will eventually be chosen, and they gather only the information that is relevant to the problem at hand (Feltovich, Johnson, Moller, & Swanson, 1984). All this information is integrated into their solution model. Subexperts and novices are less specific; they elicit a larger number of hypotheses (among which the solution may not even be included) and they gather more information that is not integrated into the solution.
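A minimal sketch, with invented findings, diagnoses, and rules (none of them drawn from the studies cited above), can make this contrast concrete: candidate diagnoses are first ranked from the findings already in hand (the data-driven step), and the leading hypothesis then determines which findings are sought next (the hypothesis-driven, predictive step).

```python
# Toy illustration only: invented rules linking findings to diagnoses, used to
# contrast data-driven hypothesis generation with hypothesis-driven data collection.

RULES = {
    "pneumonia": {"fever", "cough", "crackles", "dyspnea"},
    "pulmonary_embolism": {"dyspnea", "pleuritic_pain", "tachycardia"},
    "heart_failure": {"dyspnea", "edema", "orthopnea", "crackles"},
}

def forward_hypotheses(observed):
    """Data-driven step: rank diagnoses by how many observed findings they explain."""
    scores = {dx: len(expected & observed) for dx, expected in RULES.items()}
    return sorted(scores, key=scores.get, reverse=True)

def predictive_queries(hypothesis, observed):
    """Hypothesis-driven step: which expected findings should be sought next?"""
    return RULES[hypothesis] - observed

observed = {"fever", "cough"}
ranked = forward_hypotheses(observed)               # 'pneumonia' ranked first
to_check = predictive_queries(ranked[0], observed)  # {'crackles', 'dyspnea'}
print(ranked[0], to_check)
```

The point is only the control structure: the same rule base can be traversed in either direction, but the expert pattern described above is to let the data select the hypotheses and then let the leading hypothesis select the data to be gathered.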

TEMPORAL DYNAMICS

Other studies performed in naturalistic settings have examined the temporal dynamics of medical reasoning. Farand (1996) analysed the reasoning processes of consultant physicians on hospital wards. Video recordings of the subjects' manipulations of the medical record, obtained in an unobtrusive way during the course of a real consultation, allowed for the precise time-coding of inferential processes. A similar methodology was used by Farand, Lafrance, and Arocha (1998) to analyse telemedical consultations. These studies showed that opportunistic planning characterizes problem-solving in realistic time-constrained situations. Opportunistic planning relates to the higher-level 'control' or 'meta-' knowledge involved in the problem-solving process, which determines the course of reasoning in the domain (i.e., within the ontological hierarchy mentioned earlier). It is this kind of control knowledge that allows experts to realise that a problem is unusual or very complex, and that they should adapt their problem-solving strategies accordingly, such as by shifting from forward to backward reasoning or from pattern-matching to hypothetico-deduction. In the same way that reasoning at the domain level consists of the construction of a semantic model of the problem/solution, meta-level reasoning may be conceived as hierarchical model construction at the control level (Hayes-Roth, 1985). Opportunistic planning is the control-level equivalent of forward reasoning. It is characterized by flexible 'bottom-up' inferencing within the control hierarchy as well as between domain-level (foci) and control-level hierarchies. In the control of their reasoning processes (e.g., in deciding where to focus their attention), experts are thus more reactive to the current status of the problem than sub-experts or novices, who tend to follow a more rigid, pre-determined procedure (Farand, 1996).

SUMMARY OF CHARACTERISTICS OF EXPERT VERSUS NON-EXPERT REASONING

When compared with those of sub-experts, all of the empirical properties of experts’ reasoning we have reviewed to this point result in more inferential productivity (fewer cognitive resources mobilized in terms of time, effort, number of inferences; fewer inferences that are later ‘backtracked’; and more specific data collection); flexibility (better adaptation to real-life decision-making constraints); effectiveness (fewer errors, more coherence, and more sensitive data collection); and


efficiency (better solutions relative to cognitive effort). As mentioned earlier, however, these properties of expert reasoning also have less desirable consequences2: decisions are harder to explain (but easy to justify), and knowledge is harder to modify. As we discuss in the next section, expert reasoning processes often do not conform to normative models of medical reasoning and decision-making.

Normative Studies of Medical Reasoning and Decision-Making

Problems in the Study of Decision-Making Processes

In normative studies of decision-making, prescriptive decision models, based on expected utility theory and probability theory, are compared with the processes that subjects actually use to make decisions (Elstein & Schwartz, 2000). Normative decision-making research is very much in line with EBM because the ultimate goal is to improve the rationality of decision-making. But while EBM is focused on inputs (i.e., evidence), normative research is focused on processes. Because empirical decision researchers in the normative tradition compare human decisions with an analytic model that represents the behaviour of an optimal or rational decision-maker (von Neumann & Morgenstern, 1944), even the behaviour of experts is considered suspect. Given their orientation, these researchers select representative groups of physicians, often comprising subjects with different levels of expertise. However, the mix of expertise in these samples usually is not precisely described, and the analysts tend not to consider that variable. These factors make it difficult to establish comparisons with the results of studies from the descriptive tradition. The normative study of decision-making received a great deal of impetus from the work of Tversky and Kahneman (1974), whose research – and that of followers (Brenner, Koehler, & Tversky, 1996; Redelmeier & Tversky, 1990) – showed that cognitive heuristics may bias what could be considered rational decisions. Although heuristics are usually effective, they sometimes lead to systematic and predictable errors in decision-making (Elstein, 1999). In many studies discrepancies between normative decision models and human judgment have been documented in several domains, including medicine (Elstein & Schwartz, 2000). Examples of such biases include the framing effect (i.e., making different decisions depending on how a problem is presented or 'framed'); the neglect of prior probabilities of diseases in Bayes' theorem (Berwick,


Fineberg, & Weinstein, 1981); the ‘chagrin factor’ (i.e., over-valuing adverse outcomes related to the physician’s actions [Feinstein, 1985], which corresponds to the Hippocratic principle prima non nocere); overconfidence (i.e., attributing good outcomes to treatment and poor outcomes to disease [Iansek, Elstein, & Balla, 1983]); availability (i.e., focusing on salient aspects of a situation while ignoring others [Dawson & Arkes, 1987]); representativeness (i.e., focusing on prototypical cases and disregarding atypical features [Arocha & Patel, 1995; Dawson & Arkes, 1987]); anchoring (i.e., making estimates based on known values while ignoring others); and recency (i.e., being over-influenced by recent situations). Similar research has also shown that, when compared with regression models, physicians are not particularly accurate at predicting outcomes (Camerer & Johnson, 1991). The validity of some of these research findings has been criticized on various grounds (some of which have been reviewed by Elstein, 1988), the most significant being the low ecological validity of experiments designed to elicit these biases. Yet with so many studies documenting these phenomena it seems quite improbable that they would not exist; however, they may not be as important as initially thought. In recent studies (Gigerenzer, 1996; Cosmides & Tooby, 1996) decision-making has been investigated from a frequentist perspective, in which probability values are conceived of in objective terms as the longrun frequency of a series of events. Investigators replicated previous studies conducted by Tversky and Kahneman, this time presenting probabilities in terms of frequency. They found that cognitive biases, such as base-rate neglect and overconfidence, tend to disappear when scenarios are described and questions are asked in more ethologically pertinent, frequentist terms. These researchers argue that, given its potential survival value, humans have developed a sensitivity to picking up statistical information from their environment and have adapted to processing long-run frequencies in an intuitive manner. If this is the case, experts may be better than non-experts at processing these frequencies, and they may exhibit less severe biases than non-experts in normative studies. Unfortunately, as mentioned earlier, these studies do not usually allow for comparisons between subjects with different levels of expertise. This also suggests that subjects who have been exposed to a larger and more representative case mix may perform better in that area. Three lines of argument may be used to explain the differences between human judgment and rational models; they also reveal limita-


tions of the normative models and obstacles to their use by physicians. The first line relates to the cognitive processes of medical reasoning that we have reviewed in the preceding sections. The second relates to the characteristics of normative decision models per se. The third relates to the characteristics of the natural contexts in which medical decision-making usually occurs.

Limitations Related to the Cognitive Processes of Medical Reasoning

In the early days of normative decision theory it was thought that well-informed and competent decision-makers behaved according to normative principles. However, all of the empirical cognitive research that we have reviewed in the preceding sections provides a picture of medical reasoning processes that is not even remotely akin to the methods prescribed by normative models of decision-making. As cognitive research in decision-making has repeatedly shown, human beings in general, and experts in particular, do not use Bayes' theorem or decision trees, and they do not compute or even use probabilities or utilities. Instead, as we have seen, they resort to qualitative pattern-matching from structural features of past situations that they have integrated into long-term memory and then implemented by propagating activity levels within inference chains. This approach readily explains a number of biases that were mentioned earlier, such as framing, availability, representativeness, anchoring, and recency effects. The empirical cognitive characteristics of human problem-solving also suggest that experts may exhibit less severe biases than non-experts. Again, the results of normative studies do not allow for the verification of that hypothesis. Fundamentally, these discrepancies are related to the fact that human reasoning is more dependent on neurobiology than on economic science (Ellis & Young, 1988; Goldman-Rakic, 1987; Duchaine, Cosmides, & Tooby, 2001).
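A worked example, using hypothetical numbers, shows what the neglected base rate does to Bayes' theorem, and why the frequency formats studied by Gigerenzer and by Cosmides and Tooby are easier to process.

```python
# Hypothetical numbers for illustration only: a disease with 1% prevalence,
# a test with 90% sensitivity and a 9% false-positive rate.
prevalence = 0.01
sensitivity = 0.90
false_positive_rate = 0.09

# Bayes' theorem: P(disease | positive test)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive
print(round(posterior, 2))  # about 0.09: most positive results are false positives

# The same reasoning in natural frequencies (per 1,000 patients), the format
# that the frequentist studies cited above found easier to process intuitively.
true_positives = 1000 * prevalence * sensitivity                  # about 9
false_positives = 1000 * (1 - prevalence) * false_positive_rate   # about 89
print(true_positives / (true_positives + false_positives))        # same ~0.09
```

Judged as probabilities, many subjects place the chance of disease after a positive test near the test's sensitivity (90 per cent); carried out with the 1 per cent base rate, either computation gives roughly 9 per cent.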

Limitations Related to the Characteristics of the Normative Decision Models

Despite their computational complexity, the vast majority of the normative decision models do not specify the mathematical functions that people are supposed to maximize, and when they do so specify, these functions are selected for computational convenience or simply arbitrarily (Bunge, 1995). This is problematic because maximization of utility is assumed to exist a priori and resulting utilities may vary depending on the specific function that is used in the model.


Other aspects of normative decision-making approaches also represent obstacles to their use by clinicians. For instance, the probabilities needed for constructing decision trees or for using Bayes' theorem are usually not accessible, even in simple cases (which can be solved by other means), and, given human limitations for estimating probabilities, subjective estimates may introduce more error than expert pattern-matching. A more fundamental obstacle to the application of prescriptive decision-making methods based on expected utility theory is that they consider utilities for the patient (which is justified on ethical grounds), while utilities for the decision-maker (i.e., the physician or, ideally, the patient-physician dyad) are disregarded. This situation is in fundamental contradiction to expected utility theory itself, which stipulates that utilities are optimized from the perspective of the decision-maker. We may hypothesize that numerous medical decisions, which may not seem optimal from the perspective of the patient, may, in fact, be found to conform better to normative models when utilities are considered from the perspective of the physician. For example, the 'chagrin factor' may be conceived as the consequence of expected utility theory being applied to the wrong problem, which can lead to unexpected consequences. Farand (1996) analysed a real-life medical decision, where a patient's misconception about prognosis, which could not possibly have been remedied in the short time frame of the medical encounter, led a physician to adopt the patient's risk evaluation scheme, resulting in a suboptimal decision (for the patient). In that case, the patient's involvement in the decision-making process modified the physician's initial utilities, which, had they been retained, would have led to a better decision from the patient's perspective.
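For readers unfamiliar with decision analysis, the following sketch (with invented probabilities and utilities) makes explicit what a normative model asks of the decision-maker: every branch of the decision tree must be assigned a probability and a utility before expected utilities can be computed and the options compared. It is precisely these inputs, and the question of whose utilities they represent, that are rarely available at the bedside.

```python
# Hypothetical example of the computation a normative model requires: two
# treatment options, each with outcome probabilities and utilities that would
# have to be elicited in advance (all values are invented for illustration).
options = {
    "operate": [(0.80, 0.95), (0.15, 0.40), (0.05, 0.00)],    # (probability, utility)
    "medical_management": [(0.60, 0.85), (0.40, 0.50)],
}

def expected_utility(branches):
    return sum(p * u for p, u in branches)

best = max(options, key=lambda opt: expected_utility(options[opt]))
for opt, branches in options.items():
    print(opt, round(expected_utility(branches), 3))  # operate 0.82, medical 0.71
print("choose:", best)
```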

Limitations Related to the Characteristics of the Natural Contexts of Medical Decision-Making

The most severe limitations of normative methods are related to their being inapplicable to most real-life situations (Beach & Lipshitz, 1993; Klein, Orasanu, Calderwood, & Zsambok, 1993). Obviously, given the time constraints of medical practice, it is generally unthinkable to compute decision trees or similar devices, and the heuristic processes described earlier represent the only practical way to make a decision. The choice is not between an optimal vs a suboptimal decision, but rather between a decision vs no decision at all. Realistically, EBM does not


suggest that physicians build decision trees for each decision. Less time-consuming methods are advocated, such as practice guidelines (Garfield & Garfield, 2000; Zarin, Seigle, Pincus, & McIntyre, 1997; Zielstroff, 1998), where decision analysis has been performed in advance for representative groups of patients. However, these guidelines are generally applicable only in the simplest situations, where, for example, a patient has only one health problem at a time and all pertinent information is available. Unfortunately, in such situations physicians do not feel the need, and often do not have the time, for decision support. In more complex situations, where information about a case is uncertain, ambiguous, or partial, where symptoms, diseases, and treatments interact, and where patients present with multiple health problems, these guidelines rapidly become inapplicable and sometimes even dangerous. As a consequence, readily usable external 'evidence' is not available for those medical decision-making situations where it may have a chance of being welcome (McDonald, 1996), and heuristic processes then represent the only alternative.

Implications for Evidence-Based Medicine

Suboptimal Medical Decision-Making

Most people – including the promoters of EBM – would probably agree that if medicine were always practised as high-level experts with academic credentials practise, there would be little to complain about, despite the fact that these experts do not compute utilities or probabilities. First, it has long been shown that decisions made in academic centres are generally superior to those made in other settings (Morehead & Donaldson, 1964; Flood, 1994). Second, these experts are the very ones who are involved in developing, teaching, and promoting EBM. Third, the educated public implicitly recognizes these facts as common sense when accessing care for their own health problems. This is not to say that the decisions of experts are always optimal, but it seems that society has little to gain from further improving the performance of its best medical practitioners. Nor is there a concern about non-experts who have not yet completed their training (i.e., medical students and residents); because they are identified and generally under control, they know their limitations, and they will progressively acquire higher levels of competence during the course of their training. The problem lies, rather, with the subset of medical practitioners who lack sufficient


or up-to-date knowledge. Given the rapid evolution of medical science, in the absence of appropriate continuing-education or decision-support interventions this phenomenon probably is widespread. Cognitive research suggests that situations where the performance of physicians may be suboptimal belong to three categories. First, there are the sub-experts, such as medical students, residents, and a large proportion of practising physicians, who recognize their limitations and may be likely to seek advice from experts, transfer complex cases, search for decision support, and be involved in continuing education. They are amenable to improving performance. But there are two other groups of physicians, or rather types of situation, that may be encountered by any physician, where performance may be much harder to improve. First, there are situations where physicians are simply not aware of their limitations. In such cases, they may apply heuristic decision-making processes without sufficient knowledge, usually based on long-term experience in the absence of sufficient feedback or tutoring. Solo practitioners who do not have access to an adequate network of consultants are particularly at risk. With practice, their (inadequate or insufficient) knowledge becomes proceduralized and compiled, and inadequate decisions are made intuitively and comfortably (i.e., without much effort). In such circumstances, physicians are not likely to search for advice, refer patients, or become involved in continuing education on a voluntary basis. Second, there are situations in which physicians make decisions that optimize their own utilities (in a figurative sense) at the expense of those of the patient. In such cases, they integrate into their decision-making processes contextual elements that are extraneous to the clinical situation and that may lead to decisions that are not optimal for the patient. While this pattern is entirely unsurprising from a cognitive perspective, it should nevertheless be discouraged on ethical and economic grounds. In such cases, there is little that EBM and cognitive science can do. Solutions belong to higher organizational and systemic levels that should be configured to improve the convergence of the interests of both patients and physicians. An important prerequisite to the efficient implementation of EBM is to identify those situations in which medical practice is suboptimal, especially when the subjects are not aware of their limitations or when they are not motivated to take action. Again, solutions are in part technological, but are very much organizational and political. Information systems technology – such as electronic patient records (EPR) – could facilitate the monitoring of the processes and outcomes of medical

A Cognitive Science Perspective—189

practice. This would allow targeted feedback to individual physicians about relevant aspects of their practice. It would also facilitate problem detection by external actors. EPR technology is still in its infancy, however, and much research and development is needed (see chapter 8 in this volume). Even if technology eventually makes such monitoring of performance possible, political and organizational interventions would still be needed – first, to convince physicians to use the technology (incentives may be implemented, but certain aspects of medical practice may have to be reorganized in order to alleviate some of its time constraints); and second, to allow the monitoring itself to be implemented.
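To make the idea of EPR-based monitoring concrete, the sketch below shows one way such targeted feedback might be computed. It is a minimal illustration only, assuming a hypothetical extract of prescribing records and an invented first-line guideline and threshold; it does not describe any existing EPR product.

    # Illustrative sketch only: computes per-physician adherence to a hypothetical
    # first-line prescribing guideline from a simple EPR extract, so that targeted
    # feedback could be directed at outliers. Field names and the 0.8 threshold
    # are assumptions made for this example.
    from collections import defaultdict

    FIRST_LINE = {"community_acquired_pneumonia": {"amoxicillin", "doxycycline"}}

    def adherence_by_physician(records):
        """records: iterable of dicts with 'physician', 'diagnosis', and 'drug' keys."""
        counts = defaultdict(lambda: [0, 0])  # physician -> [adherent prescriptions, total]
        for r in records:
            recommended = FIRST_LINE.get(r["diagnosis"])
            if recommended is None:
                continue  # no guideline covers this diagnosis
            counts[r["physician"]][1] += 1
            if r["drug"] in recommended:
                counts[r["physician"]][0] += 1
        return {md: adherent / total for md, (adherent, total) in counts.items() if total}

    def flag_for_feedback(rates, threshold=0.8):
        """Return physicians whose adherence falls below the (arbitrary) threshold."""
        return [md for md, rate in rates.items() if rate < threshold]

    records = [
        {"physician": "A", "diagnosis": "community_acquired_pneumonia", "drug": "amoxicillin"},
        {"physician": "B", "diagnosis": "community_acquired_pneumonia", "drug": "ciprofloxacin"},
    ]
    print(flag_for_feedback(adherence_by_physician(records)))  # ['B']

The computation itself is trivial; as the text argues, the difficult part is organizational and political, namely obtaining the data and making such feedback acceptable to the physicians concerned.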

Promoting EBM through Training

The principal interventions that can be used to promote EBM fall into two categories: training (medical school and continuing medical education [CME]) and decision support. The cognitive perspective strongly suggests that medical school training represents the most effective means of promoting the use of EBM. First, EBM itself originates from medical schools and their research centres, and high-level experts acting as medical instructors have generally integrated EBM into their own practice much more extensively than other physicians, given the constant challenges to their decisions from students and colleagues and their specific role as promoters of medical rationality. Second, as we have mentioned before, during the course of pedagogical interactions experts explain their decisions in the format of normative rationality (such as hypothetico-deduction), even if it does not correspond to their actual decision-making processes. Learning such problem-solving strategies (‘weak methods’) may be invaluable for those students who may never become experts in narrow medical domains. It may not help them to make better decisions (which are generally attained with ‘strong methods’), but at least it may allow them to recognize their limitations and to search for advice, decision support, and referral. Third, medical students and residents, whose knowledge is not yet compiled and proceduralized, are in an ideal position to acquire EBM principles, first in declarative form, and then to integrate them progressively into expert-like schemata by painstakingly and repeatedly making decisions under supervision. This process is facilitated by the context of medical school, which temporarily isolates them from the constraints of real-world medical practice and exposes them to intensive feedback and tutoring. Such an approach may also apply to experienced physicians with already proceduralized knowledge that cannot easily be altered by current CME practices. In terms of curricular content, cognitive research suggests that teaching probabilistic and decision-theoretic methods at medical school (or anywhere else, for that matter) is not likely to influence the way physicians make decisions; rather, medical students should be exposed to as many cases as possible during their training, and their case mix should be representative of their future clientele.

The cognitive perspective provides a much dimmer picture of current CME approaches. One problem that we have already mentioned is that the medical practice situations most in need of these types of interventions are also likely to be those for which physicians may not spontaneously perceive the need. The current approach of estimating educational need based on the opinions of physicians, while pedagogically sound, produces adverse effects at the system level: physicians tend to choose subject matter they are already interested in and in which they are therefore already likely to be competent. The targeting of CME constitutes a major problem, and, as mentioned before, solutions reside in organizational or systemic change. Another problem for CME is that, given the solidity of their proceduralized and compiled knowledge, experienced physicians are less likely to respond to remedial interventions of low intensity. However, sending physicians back to medical school at regular intervals is rather costly and not very practical, although it is done occasionally in either voluntary or compulsory mode. Since competence at problem-solving is acquired by solving problems (with supervision and feedback), it is also obvious that most current CME practices, such as lectures, may not be very effective. They provide mostly declarative knowledge, which will not become integrated into decision-making processes unless it is used in real problem-solving situations for extended periods (as it is in medical school). Finally, with their limited resources, CME and EBM must also compete with sophisticated and highly effective marketing procedures, from which they may nevertheless take some inspiration. It is rather amusing, for a cognitive psychologist, to read medical journal articles in which scientific evidence of the highest quality is presented in a format that makes it almost impossible to integrate, even in declarative form, in less than an hour. Within the same journal, these articles alternate with advertisements whose messages can be painlessly absorbed as readily applicable, if not appropriate, decision rules by anybody staring at them for more than a second. From a cognitive perspective, CME in its present format, with its current focus and level of resources, and given the systemic context into which it is integrated, seems to have little to offer for effectively promoting EBM.

Improving EBM through Decision Support

Decision support, especially in its computerized form (as reviewed in chapter 8), may have a brighter future. Computerized systems that have been usefully integrated into medical practice deal with the lower levels of the clinical ontology, such as the empirium (e.g., medical imaging); observations (e.g., laboratory systems and systems that facilitate data validation, presentation, and summarization); and, sometimes, findings (e.g., ECG and lab test interpretations, medical alerts for drug interactions). But decision support has not yet been successfully integrated into the higher ontological levels of medical decision-making. Despite certain limitations, a large number of powerful decision support systems have been developed for various fields of medicine (see chapter 8). This technology can integrate EBM principles and, given sufficient resources, is relatively easy to keep updated with advances in medical science. Moreover, it has the potential to make normative decision-making methods usable, since computers are very good at computing probabilities and utilities. Yet despite their potential, such systems are not being used in practice, the main problem being that of input. Given the constraints of medical practice, physicians will not enter the large amounts of fine-grained information that these programs need, and the programs may not be used until EPR systems feed them with relevant data. Although some progress has been made, it is hard to tell when, or if, such EPR systems will become available; fundamental advances in computer technology, such as speech recognition and, more important, natural language understanding, may first be necessary. When the EPR problem is solved, computerized decision support may eventually complement and become tightly integrated with the heuristic problem-solving and decision-making processes of physicians. Such an approach would alleviate most of the limitations that we have mentioned in regard to CME. But this development lies in the future. In the meantime, medical schools could train future physicians in the use of certain computer tools that are already available. These tools tend to reproduce written information that is available elsewhere, but in a much more compact and portable format. Such systems, however, like their written counterparts, are mainly applicable in simple situations in which patients do not have multiple and interacting problems.

The Bigger Picture

While cognitive research and developments in medical computer science may contribute to the goals of EBM, many of the solutions seem to belong to higher systemic levels (i.e., organizations and the health care system as a whole), as is pointed out in other chapters of this volume. Among other things, in order to gain from the lessons of cognitive research, seekers of these solutions must strive to alleviate some of the constraints of medical practice and establish the right balance of incentives for improving the convergence of the interests of both patients and physicians. This may be a difficult task, because interventions for improving medical care may also produce adverse effects on other aspects of performance in health care systems.

NOTES

1 Conditions for the development of high-level expertise are more favourable in academic environments, where physicians are frequently questioned and challenged by their students and colleagues and must also strive to remain up to date.
2 This point also applies to experienced physicians who do not qualify as experts for the reasons that we mentioned earlier.

REFERENCES

Anderson, J.R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
Arocha, J.F., & Patel, V.L. (1995). Diagnostic reasoning by novices: Accounting for evidence. Journal of Learning Sciences, 4, 355–384.
Barrows, H.S. (1983). Problem-based, self-directed learning. Journal of the American Medical Association, 250, 3077–3080.
Barrows, H.S., & Feltovich, P.J. (1987). The clinical reasoning process. Medical Education, 21, 86–91.
Beach, L.R., & Lipshitz, R. (1993). Why classical decision theory is an inappropriate standard for evaluating and aiding most human decision making. In G.A. Klein, J. Orasanu, R. Calderwood, & C.E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 21–35). Norwood, NJ: Ablex.
Berwick, D.M., Fineberg, H.V., & Weinstein, M.C. (1981). When doctors meet numbers. American Journal of Medicine, 71, 991–998.
Boshuizen, H.P.A., & Schmidt, H.G. (1992). On the role of biomedical knowledge in clinical reasoning by experts, intermediates, and novices. Cognitive Science, 16(2), 153–184.
Brachman, R.J. (1979). On the epistemological status of semantic networks. In N.V. Findler (Ed.), Associative networks: Representation and use of knowledge by computers (pp. 191–218). New York: Academic Press.
Brenner, L.A., Koehler, D.J., & Tversky, A. (1996). On the evaluation of one-sided evidence. Journal of Behavioral Decision Making, 9(1), 59–70.
Bunge, M. (1995). The poverty of rational choice theory. In I.C. Jarvie & N. Laor (Eds.), Critical rationalism, metaphysics, and science: Essays for Joseph Agassi (pp. 149–165). Dordrecht, Netherlands: Kluwer Academic.
Camerer, C.F., & Johnson, E.J. (1991). The process-performance paradox in expert judgment: How can experts know so much and predict so badly? In A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise (pp. 195–217). New York: Cambridge University Press.
Charness, N., & Schultetus, S. (1999). Knowledge and expertise. In F.T. Durso, R.S. Nickerson, R.W. Schvaneveldt, S.T. Dumais, D.S. Lindsay, & M.T.H. Chi (Eds.), Handbook of applied cognition (pp. 57–81). Chichester, UK: John Wiley.
Chi, M.T.H., Glaser, R., & Farr, M.J. (Eds.). (1988). The nature of expertise. Hillsdale, NJ: Erlbaum.
Chi, M.T.H., Glaser, R., & Rees, E. (1982). Expertise in problem-solving. In R.J. Sternberg (Ed.), Advances in the psychology of human intelligence (pp. 7–75). Hillsdale, NJ: Erlbaum.
Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58, 1–73.
Davidoff, F. (1999). In the teeth of the evidence: The curious case of evidence-based medicine. Mount Sinai Journal of Medicine, 66(2), 75–83.
Dawson, N.V., & Arkes, H.R. (1987). Systematic errors in medical decision making: Judgment limitations. Journal of General Internal Medicine, 2, 183–187.
de Groot, A. (1978 [1946]). Thought and choice in chess. The Hague: Mouton.
Duchaine, B., Cosmides, L., & Tooby, J. (2001). Evolutionary psychology and the brain. Current Opinions in Neurobiology, 11, 225–230.
Eddy, D.M., & Clanton, C.H. (1982). The art of diagnosis: Solving the clinicopathological exercise. New England Journal of Medicine, 306, 1263–1266.

Ellis, A., & Young, A. (1988). Human cognitive neuropsychology. London: Erlbaum.
Elstein, A.S. (1988). Cognitive processes in clinical inference and decision making. In D. Turk & P. Salovey (Eds.), Reasoning, inference, and judgment in clinical psychology (pp. 17–50). New York: Free Press/Macmillan.
Elstein, A.S. (1999). Heuristics and biases: Selected errors in clinical reasoning. Academic Medicine, 74(7), 791–794.
Elstein, A.S., & Schwartz, A. (2000). Clinical reasoning in medicine. In J. Higgs & M. Jones (Eds.), Clinical reasoning in the health professions (2nd ed., pp. 95–106). Oxford: Butterworth Heinemann.
Elstein, A.S., Shulman, L.S., & Sprafka, S.A. (1978). Medical problem-solving: An analysis of clinical reasoning. Cambridge, MA: Harvard University Press.
Enkin, M.W., & Jadad, A.R. (1998). Using anecdotal information in evidence-based health care: Heresy or necessity? Annals of Oncology, 9(9), 963–966.
Ericsson, K.A., & Simon, H.A. (1993). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.
Ericsson, K.A., & Smith, J. (Eds.). (1991). Toward a general theory of expertise: Prospects and limits. New York: Cambridge University Press.
Evans, D.A., & Gadd, C.S. (1989). Managing coherence and context in medical problem-solving discourse. In D.A. Evans & V.L. Patel (Eds.), Cognitive science in medicine: Biomedical modeling (pp. 211–255). Cambridge, MA: MIT Press.
Farand, L. (1996). Cognitive multi-tasking in situated medical reasoning. Unpublished doctoral dissertation (Educational Psychology), McGill University.
Farand, L., Lafrance, J.-P., & Arocha, J.F. (1998). Collaborative problem-solving in telemedicine and evidence interpretation in a complex clinical case. International Journal of Medical Informatics, 51, 153–167.
Feinstein, A.R. (1985). The ‘chagrin factor’ and qualitative decision analysis. Archives of Internal Medicine, 145, 1257–1259.
Feltovich, P.J., Ford, K.M., & Hoffman, R.R. (1997). Expertise in context. Menlo Park, CA: AAAI/MIT Press.
Feltovich, P.J., Johnson, P.E., Moller, J.H., & Swanson, D.B. (1984). LCS: The role and development of medical knowledge in diagnostic expertise. In W.J. Clancey & E.H. Shortliffe (Eds.), Readings in medical artificial intelligence: The first decade (pp. 275–319). Reading, MA: Addison-Wesley.
Feltovich, P.J., Spiro, R., & Coulson, R.L. (1989). The nature of conceptual understanding in biomedicine: The deep structure of complex ideas and the development of misconceptions. In D.A. Evans & V.L. Patel (Eds.), Cognitive science in medicine: Biomedical modeling (pp. 113–172). Cambridge, MA: MIT Press.
Feyerabend, P. (1988). Against method. London: Verso.

Flood, A.B. (1994). The impact of organizational and managerial factors on the quality of care in health care institutions. Medical Care Review, 51(4), 381–428.
Frederiksen, C.H. (1986). Cognitive models and discourse analysis. In C.R. Cooper & S. Greenbaum (Eds.), Written communication annual. Vol. 1: Linguistic approaches to the study of written discourse (pp. 227–268). Beverley Hills, CA: Sage.
Garfield, F.B., & Garfield, J.M. (2000). Clinical judgment and clinical practice guidelines. International Journal of Technology Assessment in Health Care, 16(4), 1050–1060.
Gentner, D., & Stevens, A.L. (1983). Mental models. Hillsdale, NJ: Erlbaum.
Gick, M.L., & Holyoak, K.H. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15, 1–38.
Gigerenzer, G. (1996). The psychology of good judgment: Frequency formats and simple algorithms. Medical Decision Making, 16(3), 273–280.
Gobet, F., & Simon, H.A. (1996). Templates in chess memory: A mechanism for recalling several boards. Cognitive Psychology, 31, 1–40.
Goldman, A.I. (1986). Epistemology and cognition. Cambridge, MA: Harvard University Press.
Goldman-Rakic, P.S. (1987). Circuitry of prefrontal cortex and regulation of behavior by representational knowledge. In F. Plum and V. Mountcastle (Eds.), Higher cortical functions: Handbook of physiology (pp. 373–417). Washington, DC: American Physiological Society.
Grant, A., Kushniruk, A., Villeneuve, A., Bolduc, N., & Moshyk, A. (2004). An informatics perspective on decision support and the process of decision-making in health care. Chapter 8 in this volume.
Grimshaw, J.M., & Russell, I.T. (1993). Effect of clinical guidelines on medical practice: A systematic review of rigorous evaluations. Lancet, 342, 1317–1322.
Groen, J.G., & Patel, V.L. (1985). Medical problem-solving: Some questionable assumptions. Medical Education, 19, 95–100.
Hayes-Roth, B. (1985). A blackboard architecture for control. Artificial Intelligence, 26, 251–321.
Haynes, R.B., Hayward, R.S.A., & Lomas, J.A. (1995). Bridges between health care research evidence and clinical practice. Journal of the American Medical Informatics Association, 2, 342–450.
Iansek, R., Elstein, A.S., & Balla, J.I. (1983). Application of decision analysis to cerebral arteriovenous malformation. Lancet, 21, 1132–1135.
Johnson-Laird, P.N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.

Joseph, G.-M., & Patel, V.L. (1990). Domain knowledge and hypothesis generation in diagnostic reasoning. Medical Decision Making, 10, 31–46.
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.
Kassirer, J.P., & Gorry, G.A. (1978). Clinical problem-solving: A behavioral analysis. Annals of Internal Medicine, 89, 245–255.
Kaufman, D.R., & Patel, V.L. (1988). The nature of expertise in the clinical interview: Interactive medical problem-solving. Proceedings of the Tenth Annual Conference of the Cognitive Science Society (pp. 461–467). Hillsdale, NJ: Erlbaum.
Kennedy, H.L. (1999). The importance of randomized clinical trials and evidence-based medicine: A clinician’s perspective. Clinical Cardiology, 22(1), 6–12.
Kintsch, W. (1974). The representation of meaning in memory. Hillsdale, NJ: Erlbaum.
Kitcher, P. (1983). The nature of mathematical knowledge. New York: Oxford University Press.
Klein, G., Orasanu, J., Calderwood, R., & Zsambok, C.E. (Eds.). (1993). Decision making in action: Models and methods. Norwood, NJ: Ablex.
Laird, J.E., Newell, A., & Rosenbloom, P.S. (1987). SOAR: An architecture for general intelligence. Artificial Intelligence, 33(1), 1–64.
Larkin, J.H., McDermott, J., Simon, D.P., & Simon, H.A. (1980). Expert and novice performance in solving physics problems. Science, 208, 1335–1342.
Lipshitz, R., Klein, G., Orasanu, J., & Salas, E. (2001). Taking stock of naturalistic decision making. Journal of Behavioral Decision Making, 14, 331–352.
Lomas, J., Anderson, G.M., Dominic-Pierre, K., Vaida, D., & Enkin, M.W. (1989). Do practice guidelines guide practice? The effect of a consensus statement on the practice of physicians. New England Journal of Medicine, 321, 1306–1311.
McDonald, C.J. (1996). Medical heuristics: The silent adjudicators of clinical practice. Annals of Internal Medicine, 124(1), 56–62.
Minsky, M. (1975). A framework for representing knowledge. In P.H. Winston (Ed.), The psychology of computer vision. New York: McGraw-Hill.
Morehead, M.A., & Donaldson, R.S. (1964). A study of the quality of hospital care secured by a sample of teamster family members in New York City. New York: School of Public Health and Administrative Medicine, Columbia University.
Newell, A., & Simon, H.A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
Orasanu, J. (1990). Shared mental models and crew decision making. Technical Report Number 46. Princeton, NJ: Princeton University, Cognitive Sciences Laboratory.

Patel, V.L., Arocha, J.F., & Glaser, R. (2001). Cognition and expertise: Acquisition of medical competence. Clinical and Investigative Medicine, 23(4), 256–260.
Patel, V.L., Arocha, J.F., & Kaufman, D.R. (1994). Diagnostic reasoning and expertise. Psychology of Learning and Motivation: Advances in Research and Theory, 31, 137–252.
Patel, V.L., Arocha, J.F., & Leccisi, M. (2001). Impact of undergraduate medical training on resident problem-solving performance. Journal of Dental Education, 65, 1199–1218.
Patel, V.L., Evans, D.A., & Groen, G.J. (1989). Biomedical knowledge and clinical reasoning. In D.A. Evans & V.L. Patel (Eds.), Cognitive science in medicine: Biomedical modeling (pp. 53–112). Cambridge, MA: MIT Press.
Patel, V.L., Evans, D.A., & Kaufman, D.R. (1989). Cognitive framework for doctor-patient interaction. In D.A. Evans & V.L. Patel (Eds.), Cognitive science in medicine: Biomedical modeling (pp. 253–308). Cambridge, MA: MIT Press.
Patel, V.L., & Groen, G.J. (1986). Knowledge-based solution strategies in medical reasoning. Cognitive Science, 10, 91–116.
Patel, V.L., Groen, G.J., & Arocha, J.F. (1990). Medical expertise as a function of task difficulty. Memory & Cognition, 18(4), 394–406.
Patel, V.L., Groen, J.G., & Frederiksen, C.H. (1986). Differences between students and physicians in memory for clinical cases. Medical Education, 20, 3–9.
Patel, V.L., Groen, G.J., & Norman, G.R. (1993). Reasoning and instruction in medical curricula. Cognition & Instruction, 10(4), 335–378.
Redelmeier, D.A., & Tversky, A. (1990). Discrepancy between medical decisions for individual patients and for groups. New England Journal of Medicine, 322, 1162–1164.
Rorty, R. (1979). Philosophy and the mirror of nature. Princeton, NJ: Princeton University Press.
Sackett, D.L., & Haynes, R.B. (1995). On the need for evidence-based medicine. Evidence-Based Medicine, 1, 5–6.
Sackett, D.L., Rosenberg, W.M.C., Gray, J.A.M., Haynes, R.B., & Richardson, W.S. (1996). Evidence based medicine: What it is and what it isn’t. British Medical Journal, 312(7023), 71–72.
Scardamalia, M., & Bereiter, C. (1991). Literate expertise. In K.A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 172–194). New York: Cambridge University Press.
Schank, R., & Abelson, R. (1977). Scripts, plans, goals, and understanding. Northvale, NJ: Erlbaum.
Sowa, J.F. (1984). Conceptual structures: Information processing in man and machine. Reading, MA: Addison-Wesley.

Tversky, A., & Kahneman, D. (1974). Judgement under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
van Dijk, T.A., & Kintsch, W. (1983). Strategies of discourse comprehension. New York: Academic Press.
von Cranach, M., Foppa, K., Lepinies, W., & Ploog, D. (Eds.). (1979). Human ethology: Claims and limits of a new discipline. Cambridge: Cambridge University Press.
von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton, NJ: Princeton University Press.
Weinstein, M.C., & Fineberg, H.V. (1980). Clinical decision analysis. Philadelphia: W.B. Saunders.
Winograd, T. (1975). Frame representations and the declarative/procedural controversy. In D.G. Bobrow & A.M. Collins (Eds.), Representation and understanding: Studies in cognitive science (pp. 67–96). New York: Academic Press.
Zarin, D.A., Seigle, L., Pincus, H.A., & McIntyre, J.S. (1997). Evidence-based practice guidelines. Psychopharmacology Bulletin, 33(4), 641–646.
Zielstroff, R.D. (1998). Online practice guidelines. Journal of the American Medical Informatics Association, 5(3), 227–236.


8 An Informatics Perspective on Decision Support and the Process of Decision-Making in Health Care

ANDREW GRANT, ANDRE KUSHNIRUK, ALAIN VILLENEUVE, NICOLE BOLDUC, AND ANDRIY MOSHYK

Introduction

In this chapter we discuss the relationship between decision-making, its computerization, and the use of computers to support human decision-making. Our discussion is focused on the relationship between the integration of accurate, updateable, retrievable, and useful computerized information and an understanding of how humans accommodate and process information during decision-making. A decision can be seen not as a single isolated event, but rather as an act that takes place in a continuum of actions, by an individual, in a given organizational context. This crucial aspect of the dynamic unfolding of decisions has been neglected in most work on decision support systems. Figure 8.1 provides a schematic overview of our chapter’s content. At the centre of this diagram is the person in his/her organizational context; often this person works in a team. The cognitive (i.e., human decision-making) process is therefore also central. This person relates to data and knowledge, a knowledge environment that is continually changing. In the field of computing, information processing is structured, and logical ways of using information are explicitly known and continually being improved by designers. The real world consists of an interaction between humans, organizations, and computer processing. It requires that abstractions of the total information be used, because dealing with every detail is not possible; human judgment, evaluation, and feedback are key. In this chapter we provide insights into the relationships between cognition and computerized decision-making, which we use as a basis for suggesting future advances in informatics tools to support decision-making.

[Figure 8.1. Schematic summary: informatics, evidence, and decision support. The diagram places the person or team, with the cognitive processes of recognition, processing (learning), analysis, and decision, at the centre of real-world decision-making in an organizational context, linked over time to data-info-knowledge, evidence and argument, standards, evaluation and feedback, decision support (alerts, reminders, individualized guidelines), and decision modelling, both qualitative (rules, frames, ontologies, objects) and quantitative (Bayesian, neural net).]

Decision-Making

Decision-Making by Professionals

An understanding of the process of decision-making is critical to knowing how technology may be designed to support it. Decision-making in health care is often complex and involves a range of psychological processes. These include the initial cognitive processes involved in gathering information in order to assess a situation that may potentially involve a decision, as well as the subsequent processing involved in the generation of decision options and the choice of action (Kushniruk, 2001). Traditional perspectives on decision-making have been criticized for focusing almost exclusively on ‘the decision event’ – a hypothesized point in time at which the decision-maker chooses among a number of alternatives for action (Orasanu & Connolly, 1993). However, in recent work in a variety of domains, including firefighting (Klein, 1993) and health care (Kushniruk & Patel, 1998), it has been shown that in real-world domains much of the essential cognitive processing involved in decision-making may be considered ‘pre-decisional.’ Such decision-making involves initially sizing up a decision situation in order to arrive at a coherent situation assessment and reasoning about conditions relevant to choice in the context of a situation assessment. Consistent with this perspective, in professional domains it has been noted that experienced decision-makers often do not perceive themselves as concurrently choosing among alternatives (as a classic decision analytic model would imply), but instead have a strong tendency to apply their prior knowledge and experience in previous cases, in conjunction with available situational evidence, to arrive at a solution that may not be optimal but that may be satisfactory (applying the well-known psychological principle known as ‘satisficing,’ as described by Simon [1986]). Klein’s (1993) recognition-primed model of decision-making exemplifies this approach, where experienced decision-making is seen as relying on prior experience and expectations in initially analysing a current situation and matching to prior cases in making complex decisions. Depending on how well the cues in a current situation match earlier cases, prior plans for action may be modified, discarded, or applied in making a choice.

In health care decision-making, Kushniruk (2001) has found that similar models apply, with experienced decision-makers relying on their prior experience and taking a predominantly recognitional or pattern-matching approach to decision-making. Research examining the role of evidence-based knowledge resources in improving clinical decision-making by teams indicates that the actual use and application of such resources, by both experienced and less experienced decision-makers, is strongly influenced by the decision-makers’ contextual knowledge and experience (Mehta et al., 1998).

Decision-Making by Novices

The cognitive processing of less experienced decision-makers (e.g., medical residents) may range from more analytical processes (i.e., explicitly weighing evidence) to recognitional processes, depending on how familiar they are with the cases with which they are dealing. In general, for complex decision-making, particularly in areas such as health care, Hammond (1993) has proposed that cognitive processes in decision-making can be located along a cognitive continuum, which ranges from recognitional processes to analysis. Tasks requiring the processing of large amounts of information in a limited time tend to induce recognition, while tasks involving more quantitative information may induce more analytical processing.

Decision-Making and Evidence

The practice of evidence-based medicine (EBM) is predicated on the assumption that clinicians will apply the best evidence and knowledge that has been obtained through rigorous scientific procedures (Sackett & Haynes, 1995). As critics of this approach have noted, however, the application of formal or scientific knowledge is often mediated by the application of personal knowledge based on experience and personal understanding of a decision situation. Furthermore, the interplay between these types of knowledge is not well understood, making application of the ‘best’ formal knowledge at point of care an elusive goal. Health care workers appear to rely on the use of informal knowledge, represented as simplifying medical heuristics, in order to deal with complex cases and situations (Kushniruk, 2001). According to McDonald (1996), there may be a number of reasons why physicians rely on such heuristics, particularly in domains where accumulated scientific or formal evidence may be lacking or difficult to apply to particular clinical situations.

Decision-Making in the Real World

Decision-making in complex, real-world domains involves a range of cognitive processes, including situational assessment, generation of choices, and subsequent selection of action. In addition, decision-making in the real world involves the application of knowledge of varying forms in order to solve complex decision problems. Health care research indicates there are a number of types of knowledge applied in both individual (Kushniruk, 2001) and team decision-making (Mehta et al., 1998). The types of knowledge involved in health care decision-making can be classified as belonging to the following categories: (1) formal or scientific knowledge – obtained from medical literature (e.g., articles, journals, books) and formal education; (2) knowledge based on prior experience with similar medical cases; and (3) situational knowledge based on an assessment of the current medical situation for which a decision must be made. The latter two types of knowledge may be considered ‘informal,’ because they would not have undergone rigorous scientific scrutiny. All three types of knowledge are interwoven when health care workers make complex decisions, as are the cognitive processes that draw on different forms of knowledge.

Implications for Clinical Decision Support Systems

The wide range of cognitive processes and varied knowledge types employed in human diagnostic and therapeutic decision-making has implications for the development of clinical decision support systems. In several domains, application of prescriptive approaches to supporting decision-making (both computer-based and educational efforts) based on decision analytic theory and principles (designed to lead to optimal decisions) has not been found to lead to enduring changes in the behaviour of decision-makers over time (Means, Crandall, Sales, & Jacobs, 1993) or to continued use and acceptance of decision support tools. For example, in studies in domains such as business and health care it has been found that professionals trained in the application of decision analysis often resort to their previous approaches to decision-making when faced with complex problems subsequent to their training. According to some, this fallback may be due to an inherent mismatch between the way humans deal with complex decision situations and many prescriptive approaches that are based on analytic processing and the application of ‘best’ formal scientific knowledge at point of care (Kushniruk & Patel, 1998). Indeed, it has been argued that although the cognitive processes used in clinical decision-making may be considered suboptimal by some ‘gold standards,’ successful implementation of decision support designed to improve human processing requires that the ways professionals actually make decisions be taken into account in order for support to be accepted and lead to enduring improvements. In cases where decision support is most warranted (i.e., non-routine or difficult situations in which human decision-makers may wish to elicit decision support), the decision situation is often characterized by missing, incomplete, and even ambiguous information and evidence. These situations complicate decision processing and require decision-makers to carefully integrate knowledge from varied sources in arriving at solutions (Kushniruk, 2001). Decision-making in such circumstances is based not on the application of formal knowledge or experience alone, but rather on a complex and not-well-understood interplay among varied sources of knowledge. This complexity may be responsible for problems in the deployment and acceptance of many health decision support systems. In subsequent sections we discuss experience with decision support systems and then return to a consideration of how a better understanding of decision-making might influence future decision support systems.

Decision Support Models

In the case of complex human decision-making, varied forms of knowledge may need to be applied in real decision-making situations. Representations that have been postulated for knowledge use in human decision-making range from procedural rules to structures such as schemata. The latter are hypothesized knowledge representations that encapsulate knowledge about specific diagnostic hypotheses and their differentiation, for example, in the area of health care decision-making (Kushniruk & Patel, 1998). Likewise, over the past several decades a wide range of computer-based representations and models have appeared for expressing knowledge within health care decision support systems. As will be discussed at length later, the degree of match between the human and computer-based representations may underlie the success or failure of such systems.

Quantitative Models

Quantitative approaches to decision support in health care are typically based on statistical methods (e.g., Bayesian, fuzzy sets, and other statistical approaches, including Bayesian networks and influence diagrams). Since the 1960s it has been recognized that computers could be used to compute probabilities based on observations of patient-specific parameters (e.g., the relation between the occurrence of a symptom and the presence of a particular disease). As a consequence, a great variety of Bayesian diagnosis programs have appeared. Perhaps the best-known work is that of de Dombal et al. (1972), who applied a simple Bayesian model with the assumption that there are no conditional dependencies among findings in a medical case (e.g., the presence of one medical finding does not affect the likelihood of another finding being present). Applying surgical or pathological diagnoses as a gold standard, de Dombal et al. developed a system, known as the Leeds abdominal pain system, that allowed physicians to enter attributes of a patient’s condition, which were then analysed using Bayes’ rule in order to generate diagnoses of the patient’s condition, based on the probabilities involved. It was found that the system was accurate in up to 90 per cent of cases in a number of studies at Leeds University (although this level of accuracy was not achieved during testing at other sites). More recent quantitative models have extended Bayesian reasoning to the development of belief networks for decision support, where conditional dependencies are modelled explicitly (Heckerman & Nathwani, 1992). For example, a system known as Pathfinder, which was developed for diagnosis of lymph node problems, employs a belief network approach and builds on the foundation laid by earlier systems employing Bayesian statistical approaches. These models include explicit representation of decisions and consist of acyclic graphs with nodes representing variables such as medical symptoms and diagnoses and arcs between nodes representing the conditional probabilistic dependencies among the variables. Other quantitative methods include the application of decision analysis, which enhances Bayesian reasoning with explicit representation of decisions and the utilities that are associated with the outcomes that occur in response to decisions (Pauker & Kassirer, 1981). Bayesian belief networks have subsequently been used for decision support in diagnostic classification and in the control of treatment options (Morrison et al., 2002; Bothtner et al., 2002; Montironi et al., 2002; Diamond et al., 2002). Although researchers who possess a decision-analytic perspective have promoted the use of such algorithms, the average clinician has found them difficult to use in the context of real medical cases, in which it may be difficult to specify all the choices and utilities needed to arrive at an optimal decision (Kushniruk, 2001). Bayes’ theorem provides a normative method for updating beliefs in the light of new information, but it is of limited help when many factors might influence the posterior belief. Other modelling approaches can then be used, such as predictive models (including regression models or neural networks). Artificial neural networks accept as input sets of findings that describe a medical case and generate as output the likelihood of a particular classification (e.g., a diagnosis) that explains the findings. These programs work by translating a set of findings into a weighted set of classifications, which are mathematically combined in order to reach a conclusion. To develop such networks to distinguish among diseases, they must be trained (i.e., the value of the weights must be determined in an incremental way by ‘training’ the network) using a collection of previously classified example cases (Weiss & Kulikowski, 1990).
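As a minimal sketch of the simple Bayesian approach just described, the following example applies Bayes’ rule under de Dombal’s conditional-independence assumption to a toy two-disease, three-finding problem. The prior and conditional probabilities are invented for illustration and carry no clinical meaning.

    # Naive-Bayes diagnostic sketch in the spirit of de Dombal's abdominal pain system:
    # P(disease | findings) is proportional to P(disease) times the product of
    # P(finding | disease), assuming findings are conditionally independent given
    # the disease. All numbers below are invented for illustration.
    priors = {"appendicitis": 0.3, "non_specific_pain": 0.7}
    likelihoods = {
        "appendicitis":      {"rlq_pain": 0.8, "rebound_tenderness": 0.7, "nausea": 0.6},
        "non_specific_pain": {"rlq_pain": 0.2, "rebound_tenderness": 0.1, "nausea": 0.4},
    }

    def posterior(findings):
        """Return P(disease | findings) for findings observed to be present."""
        scores = {}
        for disease, prior in priors.items():
            score = prior
            for finding in findings:
                score *= likelihoods[disease][finding]
            scores[disease] = score
        total = sum(scores.values())
        return {d: s / total for d, s in scores.items()}

    print(posterior({"rlq_pain", "rebound_tenderness"}))
    # appendicitis dominates once both findings are present

Relaxing the independence assumption, which is what belief networks such as Pathfinder do, requires the dependency structure among findings to be specified explicitly, at a considerable cost in knowledge engineering.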

Qualitative Models

In contrast to quantitative models, qualitative approaches to modelling and representing knowledge for decision support typically have been inspired by perceptions of how humans reason and make decisions. Furthermore, qualitative models are often less formal than quantitative ones. The term ‘heuristic method’ has been associated with qualitative models, where reasoning (by machines or humans) is seen as being based on the application of informal rules or heuristics. Such rules have been embodied in the numerous rule-based expert systems that have appeared in health care. The earliest and one of the most well-known systems, MYCIN, was a computer-assisted decision support system that provided decisions regarding the management of patients who had infections (Shortliffe, 1976). Such expert systems were intended to mimic the knowledge and decision-making capabilities of human experts.

The development of MYCIN spawned a great number of rule-based expert systems during the 1980s; these innovations will be described below in our historical view of health care decision support. The knowledge base of such systems contained a number of IF-THEN rules that represented logical sentences describing how a set of conditions (the IF part of the rule) relates to certain possible conclusions (the THEN part of the rule). Each rule represents a micro-decision, which ideally would be based on experience and insight and would be verifiable from patient data and the healthcare literature. The issue of successfully acquiring the knowledge to be embodied in such systems has formed a research area in its own right: the study of knowledge acquisition for health care knowledge-based systems (Musen, 1998). Other representational issues for qualitative models also arose, including the need for richer computational approaches to representing health care knowledge bases that go beyond simple IF-THEN rule formalisms. Indeed, as will be described later, during the 1980s first-generation expert systems (exemplified by MYCIN, INTERNIST, and other medical expert systems) began to be replaced by second-generation expert systems that embodied not only rules but other representational mechanisms, such as frames, object-oriented behaviours, and deductive database capabilities (Mylopoulos, Wang, & Kushniruk, 1990). The processing capabilities of expert systems have also been examined in the light of how humans actually process data and knowledge during decision-making. In the area of health care decision-making, Patel and Groen (1986) considered the directionality of reasoning by health care professionals in the context of expert systems that appeared during the 1980s and that used an approach whereby reasoning was ‘backward’ (i.e., from conclusions back to data). In contrast, human experts have been shown to use a method whereby they are data driven (i.e., move from recognizing patterns in data towards conclusions or hypotheses, also known as ‘forward’ reasoning). During the 1980s expert system developers also explored the integration of multiple reasoning methods in knowledge-based expert systems. Other approaches to qualitative modelling for health care decision support systems included the early use of a variety of related approaches for representing knowledge used in decision-making. They ranged from simple decision tables, which embody a set of conditions of rules for diagnosis along with associated conclusions, to decision trees, which incorporate both possible decision paths along with their associated utilities.
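The IF-THEN rule formalism can be illustrated schematically as follows. MYCIN itself was written in Lisp and used a more elaborate certainty-factor calculus; the toy forward-chaining evaluator below, with invented rules and certainty factors, is intended only to show how a chain of micro-decisions can be encoded as rules.

    # Schematic illustration of MYCIN-style IF-THEN rules: each rule maps a set of
    # conditions to a conclusion with a certainty factor. This is a toy forward-chaining
    # evaluator, not MYCIN; the rules and certainty factors are invented.
    RULES = [
        {"if": {"gram_stain=negative", "morphology=rod", "site=blood"},
         "then": "organism=enterobacteriaceae", "cf": 0.7},
        {"if": {"organism=enterobacteriaceae", "patient=immunocompromised"},
         "then": "therapy=broad_spectrum_antibiotic", "cf": 0.6},
    ]

    def forward_chain(facts):
        """Repeatedly fire rules whose conditions are satisfied by the known facts."""
        conclusions = {}  # conclusion -> certainty factor
        changed = True
        while changed:
            changed = False
            known = set(facts) | set(conclusions)
            for rule in RULES:
                if rule["if"] <= known and rule["then"] not in conclusions:
                    conclusions[rule["then"]] = rule["cf"]
                    changed = True
        return conclusions

    print(forward_chain({"gram_stain=negative", "morphology=rod",
                         "site=blood", "patient=immunocompromised"}))

Note that the toy evaluator reasons forward from data to conclusions; as discussed above, many first-generation expert systems instead chained backward from candidate conclusions to the data needed to confirm them.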

Integrating Quantitative and Qualitative Approaches

Models that merge aspects of both qualitative and quantitative approaches include influence diagrams, as well as work on developing frameworks for decision support that allow users to represent uncertainty in either quantitative or qualitative terms (Fox & Cooper, 1997). The use of decision models to support strategic decision-making in areas such as health technology evaluation, practice change, and health policy decisions essentially involves a combination of quantitative and qualitative reasoning. The quantitative component is expressed in the ability to assign both probabilistic and judgment-based quantitative values to the links in the model. The qualitative component is expressed in the structure of the model and the possible relationships, as well as in the ability to assign judgment-based qualitative values to the links. Multi-attribute utility theory consists of a set of methods for making choices among different options (e.g., defining the alternatives, evaluating the alternatives according to their relevant attributes, assigning relative weights to the attributes, producing an overall evaluation of each alternative, and performing sensitivity analysis). Using a decision tree or an influence diagram is a direct application of multi-attribute utility theory. These decision analysis models contain the basic components of decision nodes and chance nodes. The display of the two approaches (decision tree and influence diagram) is different, the latter providing a considerably more compact representation; the underlying mathematical calculations, however, are the same. The network of chance nodes can be viewed as a belief network. Markov models compactly represent situations in which there is an ongoing risk of a patient’s moving from one state to another. These models are often represented using two figures (a state transition diagram and a transition-probability matrix). Transition probabilities can also be defined through a belief network. Markov models have been applied to the evaluation of patient survival and of the impact of diagnostic and treatment actions on patient survival and quality-of-life criteria (Lee et al., 2002; Miners et al., 2002; Romagnuolo, Meier, & Sadowski, 2002).
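A minimal sketch of such a Markov cohort model is given below; the three states, transition probabilities, cycle structure, and utility weights are invented for illustration rather than drawn from any of the studies cited above.

    # Illustrative Markov cohort model with three states (well, sick, dead), the kind
    # of structure described above for evaluating survival and quality-of-life impact.
    # Transition probabilities, cycle length, and utility weights are invented.
    STATES = ["well", "sick", "dead"]
    TRANSITION = {            # transition-probability matrix; each row sums to 1
        "well": {"well": 0.85, "sick": 0.10, "dead": 0.05},
        "sick": {"well": 0.05, "sick": 0.75, "dead": 0.20},
        "dead": {"well": 0.00, "sick": 0.00, "dead": 1.00},
    }
    UTILITY = {"well": 1.0, "sick": 0.6, "dead": 0.0}  # quality weight per yearly cycle

    def run_cohort(cycles=20):
        """Track the cohort distribution over time and accumulate quality-adjusted life years."""
        dist = {"well": 1.0, "sick": 0.0, "dead": 0.0}  # the whole cohort starts well
        qalys = 0.0
        for _ in range(cycles):
            qalys += sum(dist[s] * UTILITY[s] for s in STATES)
            dist = {t: sum(dist[s] * TRANSITION[s][t] for s in STATES) for t in STATES}
        return qalys

    print(round(run_cohort(), 2))  # expected QALYs per patient over 20 one-year cycles

Comparing the accumulated quality-adjusted life years under two different transition matrices (for example, with and without a treatment) is the basic manoeuvre behind the survival and quality-of-life evaluations cited above.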

Clinical Decision Support Systems

Approaches and Examples

In this section we review systems used in the clinical environment and, in particular, in the context of individual patient management. This emphasis means that information must be timely, appropriate, safe, usable, and useful. Shortliffe (1987, p. 61) has broadly defined a medical decision support system as being ‘any computer program designed to help health professionals make decisions.’ Other authors, such as van der Lei and Talmon (1997, p. 262), have developed more specific definitions in which clinical decision support systems are viewed as ‘active knowledge systems which use two or more items of patient data to generate case-specific advice.’ These systems consist of several components: (1) medical knowledge (represented using various forms as described above); (2) patient data; and (3) case-specific advice generation (i.e., a component for applying medical knowledge to the particular patient data to generate advice). Such a definition precludes passive knowledge sources, such as electronic textbooks and many information resources available on the World Wide Web (e.g., links to lists of clinical guidelines), from being defined as decision support systems. In recent years, however, the distinction between what may be termed ‘passive’ resources (e.g., MEDLINE and Web sites provided by organizations responsible for disseminating clinical practice guidelines) that can be used to aid decision-making and more active tools has begun to blur. This is because passive information resources are beginning to incorporate aspects of interactive decision support (e.g., patient information resources that also contain advice functions) (Cimino, Patel, & Kushniruk, 2002). Broad categories of decision support systems in health care can be identified, including computer tools for information management and administrative decisions (e.g., hospital financial information and decision support systems); computer tools for focusing attention (e.g., executive information and support systems and clinical laboratory systems that can flag abnormal values); computer tools for patient-specific consultations (e.g., expert diagnostic systems designed to provide advice or suggest differential diagnoses); and computer tools for quality assessment. If we accept a broad definition of decision support systems, the growing number of online medical references and repositories could also be considered a form of decision support. For example, Cimino’s MEDLINE button (Cimino, Johnson, Aguirre, Roderer, & Clayton, 1993), which can be accessed within a computerized patient record system, constitutes a form of decision support by providing context-sensitive linking of resources such as MEDLINE to support clinical decision-making at point of care. Recently, this resource has been extended to such linkage within a clinical patient record system known as PATCIS (Cimino, Patel, & Kushniruk, 2002), which allows patients to access
their health records over the Internet and also to obtain context-specific help about information presented by clicking on ‘infobuttons.’ Another example of related work that appeared in the 1980s is the Quick Medical Reference (QMR), which evolved from extension of a well-known medical expert system, INTERNIST, to include additional decision support capabilities and different modes of interaction (Miller & Masarie, 1989). QMR is designed to assist users with the diagnostic process and contains more than 600 diseases and 4,500 clinical findings associated with those diseases. The approach taken in representing the relation between findings and diseases involves two parameters: evoking strength (indicating how strong the finding suggests the presence of the disease is) and frequency (the measure of how often a finding is found in a given disease). The system allows for several modes of use, including advice in diagnosis and links to information about diseases. A number of problems, however, have been noted in acceptance of this system, even given its extensions from the expert system approach on which it was based. For example, the approach relies heavily on evoking strengths and frequencies that vary from site to site and country to country (e.g., for AIDS these parameters would be location sensitive, with differences in strengths for North America vs Africa, where there is a greater risk). One promising direction lies in the integration of decision support capabilities with other, more frequently used health information systems. Work in this line dates back to pioneering efforts at the University of Utah in the development of a system known as HELP. This information system was initially developed at the Latter-Day Saints Hospital in Salt Lake City by Warner and colleagues and was later implemented at other acute-care facilities (Pryor, Gardner, Clayton, & Warner, 1983). This integrated hospital information system was pioneering in that it incorporated decision-support logic within its organization. Modules containing specialized logic allowed the system to analyse and react to data entered about patients and automatically trigger alerts, diagnostic suggestions, and management advice. The knowledge representation used to enter this information was known as the HELP Frame Language and was designed to enable physicians to understand its contents. More recently, work at Columbia University has led to a standard formalism for encoding rules that can be used to automatically trigger alerts and reminders within hospital information systems (Hripcsak, Ludemann, Pryor, Wigertz, & Clayton, 1994; Jenders et al., 1995). Using this approach, a component known as a Clinical Event Monitor
executes Medical Logic Modules (MLMs) that contain rules that can automatically trigger alerts to physicians. Other efforts at developing standard formalisms for representing decision logic include the GLIF formalism (Shortliffe, Patel, Cimino, Barnett, & Greenes, 1998), which attempts to develop generic formalisms for representing clinical knowledge that can be applied at point of care.
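Schematically, an event-monitor-plus-MLM arrangement can be pictured as follows. Real MLMs are written in Arden Syntax with separate data, logic, and action slots; this Python analogue, with an invented threshold and drug interaction, is meant only to convey how an incoming result can trigger a rule and produce an alert.

    # Schematic analogue of a Medical Logic Module fired by a clinical event monitor:
    # when a new laboratory result arrives, the logic is evaluated against patient data
    # and, if its condition holds, an alert is pushed to the physician. The threshold
    # and scenario are invented for illustration.
    def hypokalemia_on_digoxin_mlm(event, patient):
        """evoke: new potassium result; logic: K+ < 3.0 and patient on digoxin; action: alert."""
        if event["test"] != "potassium":
            return None
        if event["value"] < 3.0 and "digoxin" in patient["active_medications"]:
            return (f"ALERT: K+ {event['value']} mmol/L in patient {patient['id']} "
                    "receiving digoxin; risk of arrhythmia, consider potassium replacement.")
        return None

    class ClinicalEventMonitor:
        """Minimal monitor that runs registered MLMs whenever a result is stored."""
        def __init__(self, mlms):
            self.mlms = mlms

        def on_result(self, event, patient):
            return [msg for mlm in self.mlms if (msg := mlm(event, patient))]

    monitor = ClinicalEventMonitor([hypokalemia_on_digoxin_mlm])
    print(monitor.on_result({"test": "potassium", "value": 2.7},
                            {"id": "12345", "active_medications": {"digoxin", "furosemide"}}))

The design point is that the rule is invoked by the arrival of data rather than by the clinician, which is what distinguishes active alerting of this kind from the consultation-style systems described earlier.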

Classifying Clinical Decision Support Systems

Musen, Shahar, and Shortliffe (2001) have described a classification of decision support systems according to several dimensions: (1) the intended function of the system; (2) the mode in which advice is given; (3) the style of consultation; (4) the underlying decision-making process; and (5) issues related to human/computer interaction. Examples of intended function include a system that supports diagnosis or supports the process of patient management. Examples of the mode of advice include a passive mode, in which a clinician or other user first recognizes that he/she needs help and then explicitly invokes the system. This was the mode of advice of many early medical expert systems. In contrast, other systems, such as the HELP system described above, provide active advice by automatically monitoring data management activities and offering unsolicited advice and reminders. The style of consultation includes two modes: the consulting model of interaction and the critiquing model. In the consulting model the computer program acts as an adviser, asking questions and generating advice. In contrast, in the critiquing model the clinician may already have a preconceived notion of a patient’s diagnosis or management plan and invoke the support system to get a computer-based critique of his/her plan. The final dimension, human/computer interaction, continues to be a challenging and relatively unexplored aspect of decision support when considered in terms of how such systems will fit into workflow, daily practices, and human cognitive processing. Understanding the psychological aspects of the use of decision support (i.e., the usability of such systems and their integration with efficient work practices) has in some cases been ignored or treated superficially, leading to major roadblocks in system acceptance.

Psychological and Cognitive Issues

Although a great deal of effort has gone into developing systems designed to support human decision-making in health care, the incorporation into day-to-day practice of many of the systems described above
has often been limited. Initially, there was a great deal of enthusiasm and hope that systems incorporating advanced representations and processes (ranging from statistical quantitative approaches to heuristic approaches) would lead to widespread use of decision support technology in health care. However, this has often not yet been the case, with systems such as expert systems achieving limited use (Shortliffe, 1993), and other approaches to decision support leading to systems that may have been successful within limited environments but less successful when implemented in other sites (e.g., de Dombal’s Bayesian system). Miller, who pioneered the INTERNIST expert system, argued that many decision support systems that had been developed were based on what he and Masarie termed the ‘Greek Oracle’ model of diagnostic systems (Miller & Masarie, 1990). By the Greek Oracle model Miller and Masarie meant that the user of the system (e.g., a physician) is supposed to transfer patient information, and to a large extent control of decision-making, to the decision support system. They explained that during the process the physician’s role (after initial data input) would be that of a passive observer. At most, the physician could answer yes/ no questions posed by the diagnostic consultant program or ask for an explanation of the program’s behaviour as the process unfolded. At the end of the consultation, the Greek Oracle (consultation program) would be expected to reveal, if possible, the correct diagnosis, and to provide a detailed explanation of its reasoning. Miller and Masarie (1990) went on to argue that physicians’ need for decision support rarely involved global failure and the need for replacement of the entire decision-making process, as the Greek Oracle model would imply. They contended that, instead, physicians want a system to help them to overcome limiting steps in their decision-making process, that is, a catalyst that would help them overcome problems as they arise rather than replace their skill in decision-making. An additional aspect of their critique was that of the mode of human/computer interaction, whereby the physician was asked to transfer knowledge to the computer and wait for its deliberation to end (perhaps with the computer asking for further input). Thus, the lack of integration of decision support systems with actual work practices was identified as a major retarding issue for many decision support systems before 1990. Even now, the question of how such systems are integrated into the work practice and cognitive processing of health care workers remains a major issue. Decision-making can be understood in terms of three generic sets of activities. First, the problem must be understood; second, alternative


First, the problem must be understood; second, alternative solutions are generated; and, finally, the choice (decision) is made. While attempting to understand a problem, experts (as opposed to novices) focus on unusual events and use fewer cues (Charness, 1991; Shanteau, 1992a; Johnson, 1988; Reimann & Chi, 1989). They can take a more holistic view of a problem before decomposing it into smaller parts (Hershey, Walsh, Read, & Chulef, 1990; Shanteau, 1992b). A challenge here for decision support systems is to properly identify what is presented to a decision-maker with respect to his/her level of expertise. Too much information may be discouraging for an expert decision-maker, who will then ignore the system and revert to his/her former practice, while not enough information offered to a novice decision-maker may impair his/her performance.

Experts' information search strategies are better organized than those of novices (Bédard & Mock, 1992; Hershey et al., 1990). Experts tend to adopt a breadth-first search strategy, as opposed to novices' depth-first search strategy. In accounting, for example, expert auditors follow a more goal-driven search strategy than novices, and their search strategy is similar to the frameworks proposed by framers of professional accounting standards (Reimann & Chi, 1989; Bédard, 1991). This pattern has been reported, too, in the context of managing complex systems (Dörner & Schölkopf, 1991). Experts have also been found to be better than novices at identifying relevant information, and they categorize problems using high-level knowledge, whereas novices use objects and situations (Patel & Groen, 1991; Petre, 1995; Anzai, 1991). Experts use backward-reasoning to pick up missed information in the problem statement (Scardamalia & Bereiter, 1991; Bereiter & Bird, 1985; Johnston & Afflerbach, 1985) and ask more questions (Glaser & Chi, 1988; Dörner & Schölkopf, 1991), using high-level knowledge whenever possible (Murphy & Wright, 1984; Hershey et al., 1990). Medical experts, however, do focus quickly on symptom cues and patterns before formulating their diagnosis; that is, they perform forward-reasoning (Patel & Groen, 1991). Owing to the limited problem-solving capacity of the human mind (Newell & Simon, 1972), top-performing individuals develop abstraction mechanisms to cope with their processing limitations. This suggests that technology, when used to support decision-making, must bring forward data at the right level of abstraction and support data integration in a way that is suitable to the user requirement.
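The contrast between breadth-first and depth-first search can be made concrete with a small computational sketch. It is offered only as a loose analogy to the cognitive strategies described above, not as a model of them; the toy hypothesis tree and its labels are invented.

```python
from collections import deque

# A toy hypothesis tree: each node leads to more specific hypotheses.
# The structure and labels are invented for illustration.
tree = {
    "chest pain": ["cardiac", "pulmonary", "musculoskeletal"],
    "cardiac": ["ischaemic", "pericarditis"],
    "pulmonary": ["embolism", "pneumonia"],
    "musculoskeletal": [],
    "ischaemic": [], "pericarditis": [], "embolism": [], "pneumonia": [],
}

def breadth_first(root):
    """Survey all hypotheses at one level before going deeper (expert-like strategy)."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree[node])
    return order

def depth_first(root):
    """Commit to one branch and follow it down before backtracking (novice-like strategy)."""
    order, stack = [], [root]
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(reversed(tree[node]))
    return order

print(breadth_first("chest pain"))  # level by level
print(depth_first("chest pain"))    # one branch at a time
```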


Experts sometimes retrieve a solution method as part of the immediate comprehension of a task (Anzai, 1991; Batra & Davis, 1989; Chi, Glaser, & Rees, 1982), but this retrieval of a solution method may occur under circumstances of pattern familiarity in which the expert can recall a very similar problem from the past. When experts recognize patterns, they are in a position to recall the solution method that was employed for similar problems. This ability to make use of previously existing knowledge (familiar patterns and their solution methods) is referred to as the representativeness heuristic in other literature (Kahneman & Tversky, 1972, 1973; Tversky & Kahneman, 1974). The similarity of the data at hand to pre-stored representations of objects and events is presumed to be a chief determinant in the arousal of existing knowledge and its application (Nisbett & Ross, 1980). However, recall of a solution method might occur for only small problems and be unlikely to happen when the problem is large (although such retrieval could occur for parts of the problem). As experts decompose the problem into smaller pieces, they try to make it fit with patterns stored in long-term memory and can therefore retrieve solution methods more easily (experiential knowledge). This suggests that access to case examples may aid orientation when a complex problem is examined.
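The suggestion that access to case examples may aid orientation can be illustrated with a minimal retrieval-by-similarity sketch, in which the most similar past case is recalled by feature overlap. The case library, feature names, and scoring rule are assumptions introduced for illustration only.

```python
# Minimal sketch of retrieving the most similar past case by feature overlap.
# Feature names and the case library are invented for illustration.
case_library = [
    {"features": {"fever", "cough", "infiltrate on x-ray"}, "solution": "treat as pneumonia"},
    {"features": {"fever", "dysuria"}, "solution": "treat as urinary tract infection"},
    {"features": {"chest pain", "st elevation"}, "solution": "activate cardiac protocol"},
]

def most_similar_case(presenting_features):
    """Return the stored case sharing the most features with the new problem (Jaccard overlap)."""
    def overlap(case):
        shared = len(case["features"] & presenting_features)
        return shared / len(case["features"] | presenting_features)
    return max(case_library, key=overlap)

new_problem = {"fever", "cough"}
print(most_similar_case(new_problem)["solution"])  # -> "treat as pneumonia"
```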

Evolving Issues in Clinical Decision Support

Challenges

In recent years, a number of researchers working in the area of human cognition in health informatics have argued for the development of decision support systems that are more sensitive to the information processing needs of health care workers and that are focused on complementing and extending human decision-making (Baud, Rassinoux, & Scherrer, 1992; Kushniruk & Patel, 1998). For example, Kushniruk and Patel argue that methodologies must be developed that can practically and accurately be applied in describing characteristics of human decision-making, including the following: (1) the cognitive processes and decision-making steps of health care workers of varying levels of expertise; (2) the skills needed by decision makers to bring to bear on successful decision-making; (3) the strategies and heuristics actually used in health care to deal with major constraints in decision-making, including time constraints, lack of information, and ambiguity of evidence; and (4) the types of problems encountered by decision-makers, classified both as routine problems and as non-routine or difficult problems, as well as by frequency.
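One way to see how such characteristics might be documented is the following sketch of a simple record for a single observed decision episode. The field names and example values are assumptions for illustration; they are not drawn from Kushniruk and Patel.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionMakingProfile:
    """One observed decision episode, documented along the four characteristics above."""
    expertise_level: str                                                # (1) e.g., "novice", "expert"
    cognitive_steps: List[str] = field(default_factory=list)           # (1) observed steps
    skills_required: List[str] = field(default_factory=list)           # (2)
    strategies_and_heuristics: List[str] = field(default_factory=list) # (3)
    problem_type: str = "routine"                                       # (4) "routine" or "non-routine"
    problem_frequency: str = "common"                                   # (4) how often the problem is seen

# Hypothetical observation, for illustration only.
episode = DecisionMakingProfile(
    expertise_level="expert",
    cognitive_steps=["scan recent laboratory results", "compare with yesterday's trend"],
    skills_required=["interpretation of electrolyte trends"],
    strategies_and_heuristics=["pattern recognition under time pressure"],
    problem_type="routine",
    problem_frequency="daily",
)
```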


Work along these lines has begun with the application of a methodology known as cognitive task analysis, borrowed from cognitive science, for characterizing problems in human decision-making and reasoning in the solving of complex real-world problems (see Kushniruk, 2001, for application of this approach to health care). In the study of team decision-making, empirical studies of the Autocontrol approach (described below) hold promise for obtaining more accurate requirements and understanding of information needs in the context of group or shared decision-making (Grant et al., 1997; Mehta et al., 1998).

In addition to issues related to human/computer interaction and compatibility of decision support with human cognitive processes, a number of other issues remain challenges for improving decision support and helping it to achieve its potential in health care. Foremost among them is the issue of evaluation and the safety of such systems for use in actual health care practice; this is closely related to the issue of their compatibility with actual human cognitive processes, since decision support systems that have achieved a high degree of accuracy are not necessarily acceptable for use by health care workers. This unacceptability is particularly evident when the representation and algorithms used by such systems are foreign or even incomprehensible to the average clinician who is the intended user. Systems must be carefully evaluated both initially 'in vitro' (i.e., outside patient care, where there are no risks) and then 'in vivo' (i.e., in the actual health care environment) (Miller & Geissbuhler, 1999). Effective evaluation methods are needed both for the formative evaluation of these systems while they are under development (in order to provide feedback for their improvement) and for the summative evaluation of their effectiveness and safety once they are complete (Grant, Plante, & Leblanc, 2002). As evaluation in health informatics is currently an area of research and debate itself (see Moehr, 2002), there are currently no clear standards available by which to judge many innovative information systems. Whatever approach is taken, a number of areas need to be addressed for appropriate evaluation, including evaluation of the boundaries and limitations of such systems and identification of potential reasons for lack of system effect. As noted by other authors (see Miller & Geissbuhler, 1999), there are also a host of legal and ethical issues in the use of decision support in health care, including the issue of liability if the use of these systems leads to suboptimal or potentially disastrous consequences.


In considering the acceptance of decision support, a number of interesting patterns emerge. Successful integration of systems has occurred in several areas. For example, medical imaging systems designed to support data acquisition and provide data not otherwise available to physicians have been successfully integrated into practice (van der Lei & Talmon, 1997). Other examples of successful systems include support where the system transforms, reduces, or presents data in ways that are more comprehensible than would otherwise be the case (e.g., systems for data summarization in operating rooms and intensive-care units) (McKeown et al., 2000). Other promising areas where decision support facilitates human decision-making and work practices include systems that automatically validate data, for example, laboratory support systems that semi-automatically take care of data validation, as well as systems that check for drug interactions or contraindications. Van der Lei and Talmon (1997) provide an insightful list of conditions where decision support has been shown to successfully support human processes: (1) providing health care workers with data that cannot otherwise be obtained; (2) providing comprehensive overviews of data for rapid decision-making that might otherwise lead to cognitive overload; (3) providing support for tedious or time-consuming administrative functions, documentation, storage, or transportation of data; and (4) providing processing support for large data streams. These conditions are by no means exhaustive, but they do indicate areas where decision support can be successfully integrated into health care. As more systems are deployed and greater consideration is given to issues of human/computer interaction and real support for human cognitive processes, the list of conditions in which decision support is advantageous can be expected to grow. At this time, such critical examination of conditions underlying both system acceptance and system failure will likely provide considerable insight into future efforts. Evaluating and critiquing system deployment by multidisciplinary teams and examining the use of decision support from a variety of perspectives ranging from the cognitive to the social and organizational levels will be needed in order to realize the promise of health care decision support.
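A minimal sketch of one such condition, checking a newly prescribed drug against known interactions, is given below. The interaction table and drug names are invented illustrations, not a clinical knowledge base or any system described in this chapter.

```python
# Minimal sketch of one condition where decision support fits work practice well:
# automatic checking of a new prescription against known drug interactions.
# The interaction table and drug names are invented examples.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "raised statin levels",
}

def check_new_drug(current_drugs, new_drug):
    """Return warnings for any known interaction between the new drug and current ones."""
    warnings = []
    for drug in current_drugs:
        reason = INTERACTIONS.get(frozenset({drug, new_drug}))
        if reason:
            warnings.append(f"{new_drug} + {drug}: {reason}")
    return warnings

print(check_new_drug(["warfarin", "metformin"], "aspirin"))
# -> ['aspirin + warfarin: increased bleeding risk']
```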

Team Decision Support

Recent research focused on understanding decision processes in the context of teams and organizations is becoming increasingly important for understanding the potential for providing decision support.


Prior work, described above in the development of decision support, has tended to be focused on the decision-making of the individual. In the remainder of this chapter we argue that one major difficulty in successfully implementing decision support in real-world settings may be due to a lack of consideration of organizational context and team processes. Decision-making undertaken by individuals is not isolated from a context, nor is it usually totally independent of other people who work in the same context. Health professionals work as part of professional teams, and this fact has some significance for the dynamic of the decision-making environment. Being part of a team suggests that each member is aware of his/her role in the team and, furthermore, that as a group they share common objectives (Amason & Sapienza, 1997). As professionals, all members must exercise judgment based on their skills and experiences compatible with standards usually determined by their profession. Decision accountability can be shared hierarchically within the health practice institution but also with respect to an individual's professional organization. For example, a doctor may not have to justify his professional decision, with respect to an individual patient, to his organizational superior.

Linking decision-making to the dynamic of practice change distinguishes between decision-making at a specific moment in relation to a given problem (for example, a critical decision about a patient's illness that requires discussion between a health care provider and the patient) and decision-making linked to a type of problem in a given context. The latter is a dynamically evolving model of best decision-making that is influenced by experience, continuing education, relations with peers both locally and in professional meetings, and, particularly, by the accepted practice within the local team. If there is no communication within the local team, there will be very poor continuity of care, because a patient communicates at different times with the different caregivers, other physicians, nurses, and others.

Practice change combining quality assurance and innovation uptake is typically viewed as a cycle. In the Autocontrol methodology the types of information flow that can occur in a theoretical cycle of practice change are considered. In real life many cycles occur at different rates at the same time. Furthermore, the cycles can be distinguished by whether they have effect mainly at a strategic, tactical, or operational level. The Autocontrol model, as shown in figure 8.2, can be broken down into four blocks: information, critique, construction, and integration.

[Figure 8.2. The Autocontrol team decision support model. The figure shows a practice-change cycle organized into four blocks: information (information/evidence filter, dynamic customized compilation), critique (meta-cognitive, cognitive, and affective reflection by the professional team), construction (structured team discussions, problem analysis and solution, consensus on practice change, understanding and commitment, practice-change decisions), and integration (implementation of practice-change decisions, adaptation, feedback).]

They are largely similar to steps in a quality control cycle such as collect, disintegrate, reintegrate, apply, inform, think, plan, and act. In fact, the cycle is coherent with that of scientific discovery: inductive reasoning, hypothesis generation, research activity, and analysis. Autocontrol breaks down each of the steps as a basis for tracing information flow in the cycle. It also anchors the steps within a system-model theoretical construct.


The system-model approach enables a construct of a hierarchy of systems, such as a unit in a hospital, and a consideration of how information flows within a system and between systems. In general, each system has its own culture and individualities in its way of operating. A team usually relates to a system, although a more complex system, such as a chronic-disease support program, may borrow from different systems and create its own system. A system has basic characteristics in that it generally follows a particular aim and seeks at least to survive. In this sense, Autocontrol applies to a system.

In the information block all types of information that may be relevant to decisions of practice change are examined. Two general types of evidence can be distinguished: external and internal. External sources include published information that might be classified according to a Cochrane Collaboration scale of degree of quality of evidence and also might include, for example, data of similar practice elsewhere. Internal sources include analyses of data relating to current practice that should be increasingly enabled by the availability of data warehouses linked to hospital information systems with appropriate attention to data quality. Our own studies of practice-change decision-making have highlighted the different sources of information that are relevant (Mehta et al., 1998).

The critique block is concerned with the mixture of individual and team reflection that is part of individual and organizational learning, a constant process of updating of experience and aligning it with available relevant evidence. The concomitant of critique is consensus and construction. In this block there is recognition of what procedures should be used (e.g., in the treatment of patients). In many ways this aspect is as important as, or more important than, improving the environment for the decision with the individual patient. Analyses of communication in a unit of care (Grant et al., 2003) and in a situation of continuity of care (Villeneuve, Grant, Bolduc, Vanasse, & Ouellette, 2003; Grant et al., 2003) have shown the variation of perceptions among professional groups regarding priorities of treatment and how impediments in the flow of information can disadvantage a patient.

The next step in the Autocontrol model, integration of practice change, entails making procedures of care explicit. Undertaking this step will enhance continuity of information and be a means by which subsequent improvement in practice can be compared. Such analyses affect the interplay of quality medical practice and resource provision.
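To show how the four blocks can be read as a repeating flow of information, the following sketch traces one illustrative pass around the cycle. It is a simplification assumed for illustration; the chapter describes Autocontrol as a methodology and theoretical construct, not as this piece of software, and the record fields are invented.

```python
from enum import Enum

class Block(Enum):
    INFORMATION = "gather and filter external and internal evidence"
    CRITIQUE = "individual and team reflection on evidence versus current practice"
    CONSTRUCTION = "structured discussion leading to consensus on practice change"
    INTEGRATION = "implement the change, then feed results back as internal evidence"

# One pass around the practice-change cycle, in order.
CYCLE = [Block.INFORMATION, Block.CRITIQUE, Block.CONSTRUCTION, Block.INTEGRATION]

def run_cycle(evidence_items):
    """Trace one illustrative pass of evidence through the four blocks."""
    state = {"evidence": evidence_items, "decisions": [], "feedback": []}
    for block in CYCLE:
        if block is Block.INFORMATION:
            # keep only items tagged as relevant to the practice question
            state["evidence"] = [e for e in state["evidence"] if e.get("relevant")]
        elif block is Block.CRITIQUE:
            state["critique"] = f"{len(state['evidence'])} evidence items compared with current practice"
        elif block is Block.CONSTRUCTION:
            state["decisions"].append("consensus on revised procedure (hypothetical)")
        elif block is Block.INTEGRATION:
            state["feedback"].append("audit data from the revised procedure")
    return state

example = run_cycle([
    {"source": "external", "relevant": True},   # e.g., a published trial
    {"source": "internal", "relevant": True},   # e.g., a data-warehouse analysis
    {"source": "external", "relevant": False},
])
print(example["decisions"], example["feedback"])
```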


Conclusions

In this chapter we have argued that a study of cognitive issues and an analysis of contextual requirements are necessary to build a decision-making environment. We have pointed out that a decision is not an isolated event involving an isolated patient, but can be understood only if it is considered alongside team decision-making and as part of a continuum of decisions. We believe that evidence must be linked with argument and that internal evidence from practice is an essential part of effective decision-making. In describing a decision-making environment there will be increasing emphasis on how best to filter and make accessible information tailored to both context and expertise. The role of feedback of information as a means of continuous practice improvement will grow in importance and enhance the availability of libraries of best-practice guidelines that can be locally tailored. Different means increasingly will be incorporated into the electronic medical record as it is progressively integrated into practice in order to provide contextual information (either as alerts or on demand by users). There will be, slowly at first, an increase in the role of models of diagnosis and care as part of patient care pathways. These models will support major decisions affecting individual patients as well as policy decisions affecting populations of patients. Semi-automated methods, such as supportive intelligent tools, will be able to point out particular patterns of response, for example, the behaviour of a drug in a subgroup of patients. This innovation will lead to improved decision-making for these patients. Longitudinal analysis of such processes, which may conform to models recognizable by the computer, might allow tracking of decisions that can be linked to the consequences of decision-making.

REFERENCES

Amason, A.C., & Sapienza, H.J. (1997). The effects of top management team size and interaction norms on cognitive and affective conflict. Journal of Management, 23, 495–516.
Anzai, Y. (1991). Learning and use of representations for physics expertise. In K. Anders Ericsson & Jacqui Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 64–92). Cambridge: Cambridge University Press.

An Informatics Perspective—221 Batra, D., & Davis, J.G. (1989). A study of conceptual data modeling in database design: Similarities and differences between expert and novice designers. In J.I. De Gross, J.C. Henderson, & B.R. Konsynski (Eds.), Proceedings of the tenth international conference on information systems (pp. 91– 99). New York: ACM Press. Baud, R.H.; Rassinoux, A.M.; & Scherrer, J.R. (1992). Natural language processing and semantical representation of medical texts. Methods of Information in Medicine, 31, 117–125. Bédard, J. (1991). Expertise and its relation to audit decision quality. Contemporary Accounting Research, 8(1), 198 – 222. Bédard, J., & Mock, T.J. (1992). Expert and novice problem-solving behavior in audit planning. Auditing: A Journal of Practice and Theory, 11(Supplement), 1–20. Bereiter, C., & Bird, M. (1985). Use of thinking aloud in identification and teaching of reading comprehension strategies. Cognition and Instruction, 2(2), 131–156. Bothtner, U., Milne, S.E., Kenny, G.N., Georgieff, M., & Schraag, S. (2002). Bayesian probabilistic network modeling of remifentanil and propofol interaction on wakeup time after closed-loop controlled anaesthesia. Journal of Clinical Monitoring and Computing, 17(1), 31–36. Chi, M.T.H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R.S. Sternberg (Ed.). Advances in the Psychology of Human Intelligence, Vol. 1 (pp. 1–75). Hillsdale, NJ: Lawrence Erlbaum. Charness, N. (1991). Expertise in chess: The balance between knowledge and search. In K. Anders Ericsson & Jacqui Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 39 – 63). Cambridge: Cambridge University Press. Cimino, J.J., Johnson, S.B., Aguirre, A., Roderer, N., & Clayton, P.D. (1993). The MEDLINE Button. In M.E. Frisse (Ed.), Proceedings of the sixteenth annual symposium on computer applications in medical care (pp. 81– 85). New York: McGraw-Hill. Cimino, J.J., Patel, V.L., & Kushniruk, A.W. (2002). The patient clinical information system (PATCIS): Technical solutions for and experiences with giving patients access to their electronic medical records. International Journal of Medical Informatics, 68(1–3), 113 –127. de Dombal, F.T., Leaper, D.J., Staniland, J.R., McCann, A.P., & Horrocks, J.C. (1972). Computer-aided diagnosis of acute abdominal pain. British Medical Journal, 1, 376 –380. Diamond, J., Anderson, N.H., Thompson, D., Bartels, P.H., & Hamilton P.W. (2002). A computer-based training system for breast fine needle aspiration cytology. Journal of Pathology, 196(1), 113 –121. .

222—A. Grant, A. Kushniruk, A. Villeneuve, N. Bolduc, and A. Moshyk Dickinson, H. (1998). Evidence-based decision-making: An argumentative approach. International Journal of Medical Informatics, 51(2–3), 71– 81. Dörner, D., & Schölkopf, J. (1991). Controlling complex systems; or, Expertise as ‘grandmother’s know-how.’ In K. Anders Ericsson & Jacqui Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 218–239). Cambridge: Cambridge University Press. Fox, J., & Cooper, R. (1997). Cognitive processing and knowledge representation in decision making under uncertainty. Psycholgische Beiträge, 39, 83 –106. Reprinted in R.W. Scholz & A.C. Zimmer (Eds.), Qualitative aspects of decision making. Lengerich, Germany: Pabst Science Publishers. Glaser, R., & Chi, M.T.H. (1988). Overview. In M.T.H. Chi, R. Glaser, M.J. Farr (Eds.), The Nature of Expertise (pp. xv–xxviii). Hillsdale, NJ: Lawrence Erlbaum. Grant, A., Plante, I., & Leblanc, F. (2002). The TEAM Methodology for the evaluation of information systems in biomedicine. Journal of Biomedical Computing 32, 195 –207. Grant, A.M, Richard, Y., Deland, E., Després, N., de Lorenzi, F., Dagenais, A., & Buteau, M. (1997). Data collection and information presentation for optimal decision making by clinical managers – The Autocontrol project. In Journal of the American Medical Informatics Association, Symposium Supplement, 789 –793. Grant, A., Grant, G., Comeau, E., Langlois, M., Blanchette, C., Gagné, J., Brodeur, G., Dionne, J., Ayite, A., Desautels, M., & Apinowitz, C. (2003). Évaluation des patrons de pratique routinière de l’entrepôt des données du système d’information hospitalier : l’exploitation d’OLAP et de l’analyse par ‘rough sets’ avec rétroaction aux cliniciens. In A.M. Grant, J.P. Fortin, & L. Mathieu (Eds.), Actes des 9e journées francophones d’informatique médicale: L’informatique de la santé dans les soins intégrés: Connaissances, application, évaluation (pp. 207–214). Quebec: SoQibs. Hammond, K.R. (1993). Naturalistic decision-making from a Brunswikian viewpoint: Its past, present, future. In G. Klein, J. Orasanu, R. Calderwood, & C. Zsambok (Eds.), Decision-making in action: Models and methods (pp. 205 –27). Norwood, NJ: Ablex. Heckerman, D., & Nathwani, B. (1992). An evaluation of the diagnostic accuracy of Pathfinder. Computers and Biomedical Research, 25, 56–74. Hershey, D.A., Walsh, D.A., Read, S.J., & Chulef, A.S. (1990). The effects of expertise on financial problem solving: Evidence for goal-directed, problemsolving scripts. Organizational Behavior and Human Decision Processes, 46(1), 77–101. .

An Informatics Perspective—223 Hripcsak, G., Ludemann, P., Pryor, T.A., Wigertz, O.B., & Clayton, P.D. (1994). Rationale for the Arden syntax. Computers and Biomedical Research, 27, 291–324. Jenders, R.A., Hripcsak, G., Sideli, R., DuMouchel, W., Zhang, H., Cimino, J.J., Johnson, S.B., Sherman, E.H., & Clayton, P.D. (1995). Medical decision support: Experiences with implementing the Arden syntax at the ColumbiaPresbyterian Medical Center. In R. Gardner (Ed.), Proceedings of the nineteenth annual symposium on computer applications in medical care (pp. 169 –173). New York: McGraw-Hill. Johnson, E.J. (1988). Expertise and decision under uncertainty: Performance and process. In M.T.H. Chi, R. Glaser, & M.J. Farr (Eds.), The nature of expertise (pp. 209 – 228). Hillsdale, NJ: Lawrence Erlbaum. Johnston, P., & Afflerbach, P. (1985). The process of constructing main ideas from text. Cognition and Instruction, 2(3), 207–232. Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology, 3(3), 430 – 454. Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80(3), 237–251. Klein, G.A. (1993). A recognition-primed decision (RPD) model of rapid decision-making. In G. Klein, J. Orasanu, R. Calderwood, & C. Zsambok, (Eds.), Decision-making in action: Models and methods (pp. 138–147). Norwood, NJ: Ablex. Kushniruk, A.W. (2001). Analysis of complex decision-making processes in health care: Cognitive approaches to health informatics. Journal of Biomedical Informatics, 34, 365 – 376. Kushniruk, A.W., & Patel, V.L. (1998). Knowledge-based HDSS: Cognitive approaches to the extraction of knowledge and understanding of decision support needs. In J. Tan (Ed.), Health decision support systems (pp. 127–152). Gaithersburg, MA: Aspen. Lee, T.Y, Korn, P, Heller, J.A., Kilaru, S., Beavers, F.P., Bush, H.L., & Kent, K.C. (2002). The cost-effectiveness of a ‘quick-screen’ program for abdominal aortic aneurysms. Surgery, 132(2), 399 – 407. McDonald, C.J. (1996). Medical heuristics: The silent adjudicators of clinical practice. Annals of Internal Medicine, 124(1), 56–62. McKeown, K., Jordan, D., Feiner, S., Shaw, J., Chen, E., Ahmad, S., Kushniruk, A., & Patel, V.L. (2000). A study of communication in the cardiac surgery intensive care unit and its implications for automated briefing. In Proceedings of the American Medical Informatics Annual Symposium (pp. 570 – 574). Philadelphia: Hanley and Belfus. .

224—A. Grant, A. Kushniruk, A. Villeneuve, N. Bolduc, and A. Moshyk Means, B., Crandall, B., Salas, E., & Jacobs, T. (1993). Training decision makers for the real world. In G. Klein, J. Orasanu, R. Calderwood, & C. Zsambok (Eds.), Decision-making in action: Models and methods (pp. 306 – 326). Norwood, NJ: Ablex. Mehta, V., Kushniruk A., Gauthier S., Richard Y., Deland E., Veilleux M., & Grant A. (1998). Use of evidence in the process of practice change in a clinical team: A study forming part of the Autocontrol project. International Journal of Medical Informatics, 51, 169 –180. Miller, R.A., & Geissbuhler, A. (1999). Clinical diagnostic decision support systems: An overview. In E.S. Berner (Ed.), Clinical decision support systems: Theory and practice (pp. 3 – 34). New York: Springer. Miller, R.A., & Masarie, F.E. (1989). Use of the quick medical reference (QMR) program as a tool in medical education. Methods of Information in Medicine, 28, 340 – 345. Miller, R.A., & Masarie, F.E. (1990). The demise of the ‘Greek Oracle’ model for medical diagnostic systems. Methods of Information in Medicine, 29, 1–2. Moehr, J.R. (2002). Evaluation: Salvation or nemesis of medical informatics? Computers in Biology and Medicine, 32(3), 113 –125. Miners, A.H., Sabin, C.A., Tolley, K.H., & Lee, C.A. (2002). Cost-utility analysis of primary prophylaxis versus treatment on-demand for individuals with severe haemophilia. Pharmacoeconomics, 20(11), 759 –774. Montironi, R., Mazzucchelli, R., Colanzi, P., Streccioni, M., Scarpelli, M., Thompson, D., & Bartels, P.H. (2002). Improving inter-observer agreement and certainty level in diagnosing and grading papillary urothelial neoplasms: Usefulness of a Bayesian belief network. European Urology, 41(4), 449 – 457. Morrison, M.L., McCluggage, W.G., Price, G.J., Diamond, J., Sheeran, M.R., Mulholland, K.M., Walsh, M.Y., Montironi, R., Bartels, P.H., Thompson, D., & Hamilton, P.W. (2002). Expert system support using a Bayesian belief network for the classification of endometrial hyperplasia. Journal of Pathology, 197(3), 403 – 414. Murphy, G.L., & Wright, J.C. (1984). Changes in conceptual structure with expertise: Differences between real-world experts and novices. Journal of Experimental Psychology: Learning Memory and Cognition, 10(1), 144 –155. Musen, M.A. (1998). Domain ontologies in software engineering: Use of PROTÉGÉ with the EON architecture. Methods of Information in Medicine, 37(4 – 5), 540 – 550. Musen, M.A., Shahar, Y., & Shortliffe, E.H. (2001). Clinical decision support systems. In E.H. Shortliffe, L.E. Perreault, G. Wiederhold, & L.M. Fagen (Eds.), Medical informatics: Computer applications in health care and biomedicine (pp. 573 – 609). New York: Springer. .

An Informatics Perspective—225 Mylopoulos, J., Wang, H., & Kushniruk, A. (1990). KNOWBEL: A hybrid expert system building tool. Proceedings of the second international conference on tools for artificial intelligence (864 – 870). Washington, DC. Newell, A., & Simon, H.A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice Hall. Nisbett, R., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall. Orasanu, J., & Connolly, T. (1993). The reinvention of decision-making. In G. Klein, J. Orasanu, R. Calderwood, & C. Zsambok (Eds.), Decisionmaking in action: Models and methods (pp. 3 – 20). Norwood, NJ: Ablex. Patel, V.L, & G.J. Groen. (1986) Knowledge-based solution strategies in medical reasoning. Cognitive Science, 10, 91–116. Patel, V.L., & Groen, G.J. (1991). The general and specific nature of medical expertise: a critical look. In K. Anders Ericsson and Jacqui Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 93 –125). Cambridge: Cambridge University Press. Pauker, S.G., & Kassirer, J.P. (1981). Clinical decision analysis by computer. Archives of Internal Medicine, 141(13), 1831–1837. Petre, M. (1995). Why looking isn’t always seeing: Readership skills and graphical programming. Communications of the ACM, 38(6), 33 – 44. Pryor, T.A., Gardner, R.M., Clayton, P.D., & Warner, H.R. (1993). The HELP system. Journal of Medical Systems, 7(2), 87–102. Reimann, P., & Chi, M.T.H. (1989). Human expertise. In K. J. Gilhooly (Ed.). Human and machine problem solving (pp. 161–191). New York: Plenum. Romagnuolo, J., Meier, M.A., & Sadowski, D.C. (2002). Medical or surgical therapy for erosive reflux esophagitis: Cost-utility analysis using a Markov model. Annals of Surgery, 236(2), 191–202. Sackett, D.L., & Haynes, B.R. (1995). On the need for evidence-based medicine. Evidence-Based Medicine, 1, 5 – 6. Scardamalia, M., & Bereiter, C. (1991). Literate expertise. In K.A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 172–194). Cambridge: Cambridge University Press. Shanteau, J. (1992a). Competence in experts: The role of task characteristics. Organizational Behavior & Human Decision Processes, 53 (2), 252–266. Shanteau, J. (1992b). The psychology of experts: An alternative view. In G. Wright and F. Bolger (Eds.), Expertise and decision support (pp. 11–23). New York: Plenum. Shortliffe, E.H. (1976). Computer-based medical consultations: MYCIN. New York: Elsevier/North-Holland. .

226—A. Grant, A. Kushniruk, A. Villeneuve, N. Bolduc, and A. Moshyk Shortliffe, E.H. (1987). Computer programs to support clinical decisionmaking. Journal of the American Medical Association, 258, 61 – 66. Shortliffe, E.H. (1993). The adolescence of AI in medicine: Will the field come of age in the 90’s? Artificial Intelligence in Medicine, 5, 93 –106. Shortliffe, E.H., Patel, V.L., Cimino, J.J., Barnett, G.O., & Greenes, R.A. (1998). A study of collaboration among medical informatics research laboratories. Artificial Intelligence in Medicine, 12, 97–123. Simon, H.A. (1986). Alternative visions of reality. In H.R. Arkes & K.R. Hammond (Eds.), Judgment and decision-making: An interdisciplinary reader (pp. 97–113). Cambridge: Cambridge University Press. Strong, D.M., Lee, Y.W., & Wang R.Y. (1997) Data quality in context. Communications of the ACM, 40(5), 103 –110. Tversky, A., & Kahneman, D. (1974). Causal schemata in judgment under uncertainty. In M. Fishbein (Ed.), Progress in social psychology (pp. 49 –72). Hillsdale, NJ: Lawrence Erlbaum. Van der Lei, J., & Talmon, J.L. (1997). Clinical decision support systems. In J.H. van Bemmel & M.A. Musen (Eds.), Handbook of medical informatics (pp. 261–276). New York: Springer. Villeneuve, A., Grant, A., Bolduc, N., Vanasse, A., & Ouellette, D. (2002). Les perceptions des professionnels de santé dans les problèmes de communication en situation de continuité des soins : Implications sur les stratégies d’informatisation en santé. In A.M. Grant, J.P. Fortin, L. Mathieu (Eds.), Actes des 9e journées francophones en informatique médicale: L’information de la santé dans les soins intégrés: Connaissances, application, évaluation (pp. 207–214). Quebec: SoQibs. Wang, R.Y., & Strong, D.M. (1996). Beyond accuracy: What data quality means to data consumers. Journal of Management Information Systems, 12, 5 –34. Weiss, S., & Kulikowski, C. (1991). Computer systems that learn: Classification and prediction methods from statistics, neural nets, machine learning, and expert systems. San Mateo, CA: Morgan Kaufman. .


9 An Evidence-Based Medicine Perspective on the Origins, Objectives, Limitations, and Future Developments of the Movement

R. BRIAN HAYNES

Introduction to the Precepts and History of Evidence-Based Medicine

In this chapter the origins, scientific precepts, aspirations, and modus operandi of evidence-based medicine (EBM) are described.1 The term and current concepts originated from clinical epidemiologists at McMaster University (Evidence-Based Medicine Working Group, 1992). EBM is based on the principle that clinicians, if they are to provide optimal care for their patients, need to know enough about applied research principles to detect studies published in the medical literature that are both scientifically strong and ready for clinical application. This potential for continuing to improve the quality of medical care stems from ongoing public and private investment in biomedical and health research. The challenges in applying new knowledge, however, are considerable, and EBM does not address all of them. Two that EBM tries to address are as follows. First, the advance of knowledge is incremental, with many false steps and with breakthroughs being few and far between. As a result, only a tiny fraction of the reports in the medical literature signal new knowledge that is both adequately tested and important enough for clinical practitioners to depend upon and apply. Second, these practitioners have limited time and little understanding of research methods. To help clinicians to meet these challenges, EBM advocates have created procedures and resources to identify the relatively few studies each year that can lead to important improvements in patient care (Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000).


EBM advocates want patients, practitioners, health care managers, and policy-makers to pay attention to the best findings from health care research that meet the dual requirements of being both scientifically valid and ready for clinical application. In doing so, EBM has proclaimed a new paradigm and seemingly pitted itself against the traditional knowledge foundation of medicine, in which the key elements are understanding of basic mechanisms of disease coupled with clinical experience. The latter is epitomized by the individual authority (‘expert’) or collective medical authority, such as a panel of experts convened by a professional society to provide practice guidelines. EBM claims that experts are more fallible in their recommendations of what works and what does not work in caring for patients, than is evidence derived from sound systematic observation (i.e., health care research). This fallibility has been especially acute during recent decades with the development, in increasingly naturalistic and complex clinical settings, of applied research methods for observation and experimentation. Furthermore, because applied research methods are based on assessing probabilities for relationships and the effects of interventions rather than on discovering underlying mechanistic explanations, EBM posits that practitioners must eschew the reductionist allure of basic science. Instead, they must be ready to accept and live with uncertainty and to acknowledge that management decisions are often made in the face of relative ignorance of their underlying nature or their true impact on individual patients. A fundamental assumption of EBM is that clinicians whose practices are based on an understanding of evidence from applied health care research will provide superior patient care, compared with that of others who rely on understanding of basic mechanisms and their own experience. So far, no convincing direct evidence exists to show that this assumption is correct. The term ‘evidence-based medicine’ first appeared in print more than a decade ago (Guyatt, 1991), but its origins can be traced back at least as far as mid-nineteenth-century France, when Pierre Louis said, ‘let the facts be rigorously analyzed in order to (arrive at) a just appreciation of them ... and then therapeutics will advance not less than other branches of science’ (1835, p. 81; trans. Alfredo Morabia), or perhaps even to the mediaeval Arab world when Ibn Sina (aka Avicenna) proposed standards for drug testing.2 Scientific approaches to studying health care problems developed at a leisurely pace until the end of the Second World War, when some of the public funding that had been


dedicated to mass destruction was reallocated to saving lives through health research. Initial investments were directed first to basic research in order to better understand the determinants and pathophysiology of disease. Medical schools reflected this stage of development in their teaching of the basic sciences of biology, pathology, physiology, and biochemistry as the foundation of medical knowledge. Increasing shares of investment were allocated next to the development and applied testing of innovations in clinical settings. Although these applied research methods were rooted mainly in the observational techniques of epidemiology, clinical epidemiologists, such as Archie Cochrane in the United Kingdom, Alvan Feinstein in the United States, and David Sackett in Canada, pioneered and legitimized the use of experimentation in clinical settings, leading to the randomized, controlled trial becoming the hallmark of testing. The first trial in which randomization was formally described and applied was published in the British Medical Journal in 1948 (Medical Research Council, 1948); it heralded a new era of antibiotic treatment: streptomycin for tuberculosis. It is important to recognize, however, that experimental designs were added to observational designs, not substituted for them. Different methods, observational or experimental, were and are used to explore different questions. History has shown that the research methods of medical science are pluralistic and expanding. They are driven by attempts to address a broader range of questions and undoubtedly by the priority that people place on personal health, the obvious benefits that biomedical research has already brought, and the prospect that these benefits are merely the beginning. Today, basic and applied research have combined to bring innovations from the ‘bench’ to the ‘bedside’ at a pace that far outstrips the ability of medical education and the health care system to comprehend, much less respond to. EBM does not clearly address the role of basic science in medical discovery, except to indicate that, in most circumstances of relevance to individual patient care, basic science alone does not provide valid and practical guidance. There are, of course, some exceptions, for example, certain deficiency disorders, such as type 1 diabetes mellitus. But even though basic science provides definitive evidence that insulin deficiency is the underlying problem in this disorder, determining which of the many possible ways of delivering exogenous insulin therapy results in the best care and outcome for each patient has required a myriad of applied research studies. Clear evidence concerning the benefit of multiple-dose insulin regimens ap-


peared less than a decade ago (Diabetes Control and Complications Trial Research Group, 1993), but there remains a great deal of room for improvement. Indeed, for many (probably most) disorders, basic mechanisms are still not understood. Even when they are thought to be known, they have proved to have short half-lives before being replaced by new, better knowledge. Further, even when convincingly known, basic mechanisms often have not provided valid guidance concerning intervention. In many such situations, empirical solutions, tested by applied research methods, are holding the fort until basic understanding – of mechanisms and interventions – is forthcoming. This will continue to be the case for the foreseeable future, the marvellous advances in genomics notwithstanding.

The schism between basic and applied research, however, is more rhetoric than reality. Rather, basic and applied research are different ends of a spectrum of health research, progressing from bench to bedside. The best applied research studies are often founded on excellent basic science findings, even if basic research is neither necessary nor sufficient for the management of most medical problems. Applied research usually is downstream from basic research and usually is better when it is downstream rather than based on chance observations and intuition. A complementary way of knowing, applied research is not a participant in a scientific turf war to establish the best way of knowing. Nevertheless, from a pragmatic, clinical point of view, applied research provides evidence to practitioners and patients that is often better suited to the specific problems with which they must deal. Confusion of the objectives of science with those of the practice of medicine perhaps has led to much of the misunderstanding and criticism levelled at EBM.

A Practical Example of the Relationship between Basic and Applied Research

An example drawn from recent medical advances in stroke therapy illustrates the complex relationship between basic and applied research and, in turn, between both of them and clinical practice. Narrowing of the arteries to the front part of the brain (the internal carotid artery [ICA] and its tributaries, the anterior and middle cerebral arteries) is associated with stroke, in which a part of the brain dies when it loses its blood supply.


Narrowings of the internal carotid artery above the level of the neck can be bypassed through connecting the superficial temporal artery (STA), located on the outside of the head, with a branch of the middle cerebral artery (MCA), which is just inside the skull. STA-MCA bypass (also known as extracranial-intracranial [EC/IC] bypass) is an elegant (and expensive) surgical procedure that is technically feasible in a high proportion of cases and leads to increased blood supply to the part of the brain beyond the narrowing of the ICA. For many years, this increased blood supply was thought to be exactly what the brain needed to prevent future strokes in people who had experienced minor strokes in this vascular distribution. Approximately 200 cases of patients undergoing STA-MCA bypass were reported in the medical literature up to 1985, almost all of them interpreted by their surgeon authors as indicating benefits for patients. In these 'case series' studies, patients are described before and after undergoing the procedure, and these descriptions are sometimes compared with findings in previous reports ('historical controls') of patients with and without the procedure. In 1985 a large randomized controlled trial was reported (EC/IC Bypass Study Group, 1985). This study showed no reduction at all in the subsequent rate of stroke with the bypass when compared with the rate for patients who had not had the procedure. On further analysis, it was found that patients with STA-MCA bypass who had higher rates of blood flow were actually worse off, and that surgery blunted the natural rate of recovery from the initial stroke that led to selection of patients for surgery (Haynes et al., 1987).

Dissemination of these findings was rapid and led to the elimination of this procedure for attempting to prevent stroke recurrence. Randomized controlled trials involving another procedure, carotid endarterectomy, were subsequently conducted for patients who had narrowing of the ICA in the neck. At the time, carotid endarterectomy had been practised longer than STA-MCA bypass, had not been tested adequately in controlled trials, and had been brought into question because of the negative findings of the STA-MCA bypass trial. Several randomized controlled trials of carotid endarterectomy showed that it has substantial benefit for symptomatic patients with severe narrowing of the carotid artery, but not for those who had mild narrowing or no symptoms associated with the narrowing (Cina, Clase, & Haynes, 2003). These trials involving STA-MCA bypass and carotid endarterectomy have led to a better understanding of the basic mechanisms of stroke, elimination of a harmful surgical procedure, promotion of another procedure, and provision of evidence for tailoring the findings to individual patients (Rothwell, Slattery, & Warlow, 1997).


These advances in knowledge have benefited many patients. Unfortunately, surveys of patient care also show that some patients continue to receive carotid endarterectomy when they are unlikely to benefit from it, while others who might benefit are not offered it (Goldstein, Bonito, Matchar, Duncan, & Samsa, 1996). In fact, there are numerous examples of under-applied evidence of both the benefits and the harms of treatments (Antman, Lau, Kupelnick, Mosteller, & Chalmers, 1992). Eliminating such mismatches between patients and health care interventions is a prime objective of EBM.

The Nuts and Bolts of EBM: Finding and Applying the Best Evidence

A contemporary definition of EBM is 'the explicit, judicious, and conscientious use of current best evidence from health care research in decisions about the care of individuals and populations' (Haynes, 2002, p. 3). A more pragmatic definition is a set of tools and resources for finding and applying current best evidence from research for the care of individual patients. This practical explanation reflects the fact that there are now many information resources in which evidence from health care research has been pre-graded for validity and clinical relevance. Thus, the user's task is changing from the largely hopeless one of reading the original medical literature to learn about current best care to one of finding the right pre-assessed research evidence, judging whether it applies to the health problem at hand, and then working the evidence into the decision that must be made.

Grades of evidence quality are derived from scientific principles of epidemiology and its offspring, clinical epidemiology. The grades are based on several principles, the most elementary of which are as follows. First, studies that take more precautions to minimize the risk of bias are more likely to reveal useful truths than are those that take fewer precautions. Second, studies based in patient populations that more closely resemble those that exist in usual clinical practice are more likely to provide valid and useful information for clinical practice than are studies based on organisms in test tubes, creatures in cages, very select human populations, or unachievable clinical circumstances (such as extra staff to provide intensive follow-up, which is far beyond the resources in most clinical settings). Third, studies that measure clinical outcomes that are more important to patients (e.g., mortality, morbidity, and quality of life, rather than liver enzymes and serum electrolytes) are more likely to provide evidence that is important to both practitioners and patients.

Table 9.1 Guidelines for the Critical Appraisal of Health Care Research Reports (criteria for appraisal, by topic of study)

Therapy: random allocation of patients to comparison groups; outcome measure of known or probable clinical importance; follow-up of ≥ 80 per cent.
Diagnosis: clearly identified comparison groups, one being free of the disorder; objective or reproducible diagnostic standard, applied to all participants; masked assessment of test and diagnostic standard.
Prognosis: inception cohort, early in the course of the disorder and initially free of the outcome of interest; objective or reproducible assessment of clinically important outcomes; follow-up of ≥ 80 per cent.
Causation: clearly identified comparison group for those at risk of, or having, the outcome of interest; masking of observers of outcome to exposure; masking of observers of exposure to outcome.
Reviews: comprehensive search for relevant articles; explicit criteria for rating relevance and merit; inclusion of all relevant studies.

Simple guidelines for critically appraising health care research evidence appear in table 9.1.3 Optimal study designs differ for determining the cause, course, diagnosis, prognosis, prevention, therapy, and rehabilitation of disease; the rules for assessing validity vary for these different questions. For example, randomized allocation of participants to intervention and control groups is held to be better than non-random allocation for controlling bias in intervention studies. This is not merely a matter of logic, common sense, or faith; non-random allocation usually results in more optimistic differences between intervention and control groups than does random allocation (Schulz, Hays, & Altman, 1995). Similarly, in observational study designs for assessing the accuracy of diagnostic tests, independent interpretation of the tests that are being compared is known to result in less optimistic reports of test performance (Lijmer et al., 1999).
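As a concrete reading of how such criteria operate as a filter, the sketch below checks a hypothetical therapy report against the three therapy criteria in table 9.1 (including the follow-up threshold of 80 per cent discussed below). The record fields and example study are assumptions for illustration, not an algorithm proposed in the chapter.

```python
from dataclasses import dataclass

@dataclass
class TherapyReport:
    """Features of a treatment study, as they bear on the Table 9.1 therapy criteria."""
    randomized: bool                     # random allocation of patients to comparison groups
    clinically_important_outcome: bool   # e.g., mortality or quality of life, not a laboratory surrogate
    followup_proportion: float           # fraction of participants followed up

def passes_therapy_filter(report: TherapyReport) -> bool:
    """True only if all three therapy appraisal criteria are met (follow-up >= 80 per cent)."""
    return (report.randomized
            and report.clinically_important_outcome
            and report.followup_proportion >= 0.80)

# Hypothetical example: randomized, patient-important outcome, but only 76% follow-up -> rejected.
example = TherapyReport(randomized=True, clinically_important_outcome=True, followup_proportion=0.76)
print(passes_therapy_filter(example))  # False
```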


Other guidelines (or rules) incorporated into the critical appraisal of research evidence are not based on empirical demonstration of their scientific merit, but rather are advised on the basis of common sense. For example, although the extent of follow-up of research participants has not been consistently demonstrated to influence estimates of the size of a treatment effect (perhaps because those lost to follow-up cannot by definition be included in such determinations), some critical appraisal guides suggest that studies should not be trusted if more than 20 per cent of participants are lost to follow-up. Although the guidelines for critical appraisal that appear in table 9.1 are not comprehensive or fully rigorous, they provide an effective filter for the reliability and validity of health care research that screens out about 98 per cent or more of the medical research literature as not being ready for clinical use (Haynes, 1993). Of those studies that make it through the filter, systematic reviews provide the firmest base for the application of evidence in practice (Clarke & Chalmers, 1998). Along these lines, the past decade has seen the Cochrane Collaboration forging a worldwide effort to summarize evidence concerning the effects of health care interventions (Jadad & Haynes, 1998). Many objections to EBM are based on the idea that it advocates textbook or cookbook medicine, that is, treating patients strictly according to a formula or an algorithm derived from a research study, a course that the advocates of EBM never intended. They did not clearly emphasize at the outset, however, that evidence from research can be no more than one element of any clinical decision. Other key components are patient circumstances (as assessed through the expertise of the clinician) and preferences (figure 9.1) (Haynes, Sackett, Gray, Cook, & Guyatt, 1996). Exactly how research evidence, clinical circumstances, and patients’ wishes are to be combined to derive an optimal decision has not been clearly determined, except insofar as clinical judgment and expertise are viewed as essential to success. Even more problematic, the term ‘evidence’ is commonly used for many different types of evidence that are relevant to clinical practice, not just health care research evidence. For example, clinicians regularly collect evidence of patients’ circumstances and desires. Thus, it is hardly surprising that the term ‘evidence-based medicine’ is confusing to many who do not appreciate (or do not agree) that evidence is narrowly defined as having to do with systematic observations from certain types of research.

[Figure 9.1. Basic elements of clinical decision-making: research evidence, clinical circumstances, and the patient's preferences.]

The movement's very name – evidence-based medicine – has been an impediment to communicating its main objective: the application of health care research to provide greater benefits for patients than are currently possible with the treatments that clinicians are experienced in recommending. Using the technical definition of EBM, evidence from health care research is a modern, never-before-available complement to traditional medicine. Perhaps a better name would be 'certain-types-of-high-quality-and-clinically-relevant-evidence-from-health-care-research-in-support-of-health-care-decision-making' – an accurate but mind-numbing phrase.

Philosophical Issues

The originators of EBM paid relatively little attention to the philosophy of science, and attempts to do so now are mainly post hoc and perhaps defensive for some EBM advocates (including the author of this chapter). Nevertheless, EBM's concepts are in evolution, at least partly in response to valid criticisms of the limitations of the original vision. It is worth noting that it is impossible to say where most EBM advocates


stand on any of these issues, because no serious attempt has been made to determine this. It is also easy to agree with Alan Chalmers (1999) that most scientists and EBM advocates are (blissfully?) ignorant of the philosophy of science and give little or no thought to constructing a philosophical basis for their research. In the main original paper on EBM (Evidence-Based Medicine Working Group, 1992), EBM was proposed as a paradigm shift, based on Thomas Kuhn’s definition of paradigms: ways of looking at the world that define both the problems that can be legitimately addressed and the range of admissible evidence that may bear on their solution. According to Guba and Lincoln (1994), in the basic science that underpins traditional medicine the workings of the human body and basic mechanisms of disease can be discovered by observation using instruments that are objective and bias free. These mechanisms can then be discerned by inductive logic and known for a certainty. By contrast, applied research deals with more complex phenomena than disease mechanisms – researchers often rely on experimentation rather than only observation; they recognize that observations of complex phenomena can be biased and require measures to reduce bias; they take groups of patients as the basis of observation; they use probabilities to judge truth, rather than expecting certainty; and they use deductive and Bayesian logic to progress. Certainly, there are differences between the approaches of basic and applied research, but are they mutually exclusive, as in a paradigm shift, or complementary ways of knowing, as in a pluralistic version of epistemology? By these remarks, you will know that I favour the latter view. The expectation of EBM that doctors should keep abreast of evidence from certain types of health care research raises many issues. First, what is ‘valid’ health care research? Second, what are the ‘best’ findings from this research? Third, when is health care research ‘ready’ for application? Fourth and fifth, to whom and how does one apply valid and ready evidence from health care research? EBM provides a set of increasingly sophisticated tools for addressing these questions; at present, however, the result is only partly as good as EBM advocates hope it will become. Meanwhile, there is much to criticize about EBM from both philosophical (Kulkarni, 2000) and practical perspectives. For example, it is difficult to be smug about the superiority of the research methods advocated by EBM when the results of studies that are methodologically similar not infrequently disagree with one another. It has also been shown that the findings of observational studies


agree more often than not with the findings of allegedly more potent randomized controlled trials (Concato, Shah, & Horwitz, 2000; Benson & Hartz, 2000). While holes can be picked in these arguments against the ascendancy of randomized controlled trials (Pocock & Elbourne, 2000), there is no way to win the argument without a universal standard of truth.

The issue of when a research finding is ready for clinical application also remains mired in the lack of a satisfactory resolution for how findings from groups can be applied to individuals. For one thing, our understanding of how to determine what patients want is primitive. Also problematic is the fact that the circumstances in which patients are treated can vary widely from location to location (including those directly across the street from one another). In addition, the resources, expertise, and patients are often quite different, and the same evidence from research cannot be applied in the same way (and often not at all). Finally, we do not have convincing studies showing that patients of EBM practitioners are better off than those who have non-EBM practitioners. No one has done a randomized controlled trial of EBM with patient outcomes as the measure of success. Such a trial, in fact, would be impossible to conduct, given that the control group could not be effectively isolated from the research that EBM is attempting to transfer, and it would be regarded as unethical to attempt to do so. This situation is unfortunate in the sense that, even if it is accepted that current research is generating valuable findings for health care, there are many questions about whether the EBM movement is doing anything useful to accelerate the transfer of these findings into practice.

The eighteenth-century philosopher David Hume and his followers took pains to point out the differences between is and ought. The is of EBM is science’s production of new and better ways of predicting, detecting, and treating diseases than were imaginable at the middle of the past century. The ought of the EBM movement, which annoys many practitioners and would perturb Hume and his followers, is that EBM advocates believe that clinicians ought to be responsible for keeping up to date with research advances and ought to be prepared to offer them to patients. Thus, EBM has taken on the tone of a moral imperative. I believe that it is premature to get preachy about the ought of EBM – not that such restraint has stopped EBM’s more ardent advocates. Worse still, the insistence by EBM advocates that interventions ought to be provided in all appropriate individual circumstances would undoubtedly have some important adverse effects. For one, full imple-


mentation would cost much more than the resources currently available for health care, leading to unaddressed (and unresolved) dilemmas in distributive justice. Second, interventions that save lives and reduce suffering in the short term may end up prolonging life beyond the point of senescence and misery. EBM advocates try to ameliorate the latter problem by declaring that patients’ values ought to be incorporated into clinical decisions, but without the assurance that we know how to do this. Indeed, there is a continuing tension here between the consequentialist, population-based origins of epidemiology (doing the greatest good for the greatest number), which generates most of the best evidence that EBM advocates hope to convince practitioners and patients to pay attention to, and the deontological or individualistic approach of medicine (doing the greatest good for the individual patient), which practitioners are sworn to take. Although some components of EBM have been derided as representing ultrapragmatic utilitarianism, EBM does not offer a credible solution to this tension, nor does it even take a clear stance on it. This absence of final answers perhaps reflects the dual origins of many EBM advocates: most of the leaders are trained in both epidemiology and a clinical discipline and are involved in both research and clinical practice. In weighing the philosophical, scientific, and moral issues raised by EBM, I believe that the priority target should not be EBM’s postulates about ways of knowing, although there are certainly many epistemological issues raised by EBM that merit intense discussion. Rather, the ethical issues are of highest concern. Will the proceeds of the new science of medicine be fairly distributed in society? Given the already stupendous and wildly escalating costs of health care, driven particularly by newer diagnostic and therapeutic interventions, how can resources be optimally and fairly allocated within the health care sector and across all areas of public expenditure? Can the long-term consequences (e.g., unproductive and miserable longevity) of the short-term gains that are regularly documented by health care research continue to be ignored? How can patients’ wishes be informed, determined, and taken into account in health care decision-making? Should some of the funds for health research be diverted into some other sector (continuing education?), so that the health care system can catch up to the current state of knowledge? Is EBM a waste of time if we lack adequate understanding of practical methods of changing practitioner and patient actions (Oxman, Thomson, Davis, & Haynes, 1995)? I hope that the attention of philosophers will be drawn to these questions, as well as to the


continuing debate about whether EBM is a new paradigm and whether applied health care research findings are more valid for reaching practical decisions about health care than are basic pathophysiological mechanisms and practitioners’ unsystematic observations.

NOTES

1 This chapter is based on an invited presentation at the Philosophy of Science 2000 Symposium on Evidence-Based Medicine, and its ensuing publication (Haynes, 2002); see also http://www.pubmedcentral.nih.gov/articlrender.fcgi?tool=pubmed&pubmedid=11882257.
2 ‘The drug must have a specific defined mode of action. It must be tested on a well-defined disease. The time of action must be observed. The effect of the drug must be seen to occur constantly in many cases. The experimentation must be done with the human body, for testing a drug on a lion or a horse might not prove anything about its effect on man’ (Avicenna, 1999).
3 A more elaborate set of guidelines can be found at http://cebm.jr2.ox.ac.uk/docs/levels.html.

REFERENCES

Antman, E., Lau, J., Kupelnick, B., Mosteller, F., & Chalmers, F. (1992). A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Journal of the American Medical Association, 268, 240–248.
Avicenna, I.S. (1999). Canon of medicine. Chicago: KAZI.
Benson, K., & Hartz, J. (2000). A comparison of observational studies and randomized, controlled trials. New England Journal of Medicine, 342, 1878–1886.
Chalmers, A.F. (1999). What is this thing called science? (3rd ed.). Indianapolis: Hackett.
Cina, C., Clase, C., & Haynes, R.B. (2003). Carotid endarterectomy for symptomatic carotid stenosis (Cochrane Review). In The Cochrane Library, Issue 4. Chichester, UK: John Wiley.
Clarke, M., & Chalmers, I. (1998). Discussion sections in reports of controlled trials published in general medical journals: Islands in search of continents? Journal of the American Medical Association, 280, 280–282.
Concato, J., Shah, N., & Horwitz, R.I. (2000). Randomized controlled trials, observational studies, and the hierarchy of research designs. New England Journal of Medicine, 342, 1887–1892.
Diabetes Control and Complications Trial Research Group (1993). The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. New England Journal of Medicine, 329, 977–986.
EC/IC Bypass Study Group (1985). Failure of extracranial-intracranial arterial bypass to reduce the risk of ischemic stroke: Results of an international randomized trial. New England Journal of Medicine, 313, 1191–1200.
Evidence-Based Medicine Working Group (1992). Evidence-based medicine: A new approach to teaching the practice of medicine. Journal of the American Medical Association, 268, 2420–2425.
Goldstein, L.B., Bonito, A.J., Matchar, D.B., Duncan, P.W., & Samsa, G.P. (1996). US national survey of physician practices for the secondary and tertiary prevention of ischemic stroke – carotid endarterectomy. Stroke, 27, 801–806.
Guba, E., & Lincoln, Y. (1994). Competing paradigms in qualitative research. In N. Denzin & Y. Lincoln (Eds.), Handbook of qualitative research. Thousand Oaks, CA: Sage.
Guyatt, G.H. (1991). Editorial: Evidence-based medicine. ACP Journal Club, 114, A16.
Haynes, R.B. (1993). Editorial: Where’s the meat in clinical journals? ACP Journal Club, A22–3 (Annals of Internal Medicine, 115, suppl. 3).
Haynes, R.B. (2002). What kind of evidence is it that evidence-based medicine advocates want health care providers and consumers to pay attention to? BMC Health Services Research, 2, 3.
Haynes, R.B., Mukherjee, J., Sackett, D.L., Taylor, D.W., Barnett, H.J.M., & Peerless, S.J. (1987). Functional status changes following medical or surgical treatment for cerebral ischemia: Results in the EC/IC bypass study. Journal of the American Medical Association, 257, 2043–2046.
Haynes, R.B., Sackett, D.L., Gray, J.R.M., Cook, D.L., & Guyatt, G.H. (1996). Editorial: Transferring evidence from research into practice: 1. The role of clinical care research evidence in clinical decisions. ACP Journal Club, 125, A14–16.
Jadad, A., & Haynes, R.B. (1998). The Cochrane Collaboration: Advances and challenges in improving evidence-based decision making. Medical Decision Making, 18, 2–9.
Kuhn, T. (1996). The structure of scientific revolutions (3rd ed.). Chicago: University of Chicago Press.
Kulkarni, A.V. (2000). Evidence-based medicine: A philosophical perspective. Unpublished manuscript. Department of Surgery, Hospital for Sick Children, University of Toronto, Toronto, Canada.
Lijmer, J.G., Mol, B.W., Heisterkamp, S., Bonsel, G.J., Prins, M.H., van der Meulen, J., & Bossuyt, P.M. (1999). Empirical evidence of design-related bias in studies of diagnostic tests. Journal of the American Medical Association, 282, 1061–1066.
Louis, P.C.A. (1835). Recherches sur les effets de la saignée dans quelques maladies inflammatoires et sur l’action de l’émétique et des vesicatoires dans la pneumonie. Paris: Librairie de l’Academie royale de médicine.
Medical Research Council. (1948). Streptomycin treatment of pulmonary tuberculosis. British Medical Journal, 2, 769–782.
Oxman, A.D., Thomson, M.A., Davis, D.A., & Haynes, R.B. (1995). No magic bullets: A systematic review of 102 trials of interventions to improve professional practice. Canadian Medical Association Journal, 153, 1423–1431.
Pocock, S.J., & Elbourne, D.R. (2000). Randomized trials or observational tribulations? New England Journal of Medicine, 342, 1907–1909.
Rothwell, P.M., Slattery, M.J., & Warlow, C.P. (1997). Clinical and angiographic predictors of stroke and death from carotid endarterectomy: Systematic review. British Medical Journal, 315, 1571–1577.
Sackett, D.L., Straus, S., Richardson, S.R., Rosenberg, W., & Haynes, R.B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). London: Churchill Livingstone.
Schulz, K.F., Chalmers, I., Hayes, R.J., & Altman, D. (1995). Empirical evidence of bias: Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. Journal of the American Medical Association, 273, 408–412.


10 A Nursing and Allied Health Sciences Perspective on Knowledge Utilization

CAROLE A. ESTABROOKS, SHANNON SCOTT-FINDLAY, AND CONNIE WINTHER

Introduction

The tradition of knowledge utilization1 and related research in the health sciences is among recent efforts to root policy and practice decisions in science. The evidence-based medicine movement (Evidence-Based Medicine Working Group, 1992) emerged in the early 1990s and quickly evolved into more general calls for the adoption of an evidence-based decision-making culture at all levels of the health care system (Gray, 1995; National Forum on Health, 1997). In the Canadian context, these efforts eventually contributed to pressures to downsize the welfare state and increase the efficiency and effectiveness of those components that were not eliminated or privatized.

The roots of EBM are in clinical epidemiology and evaluation research. Randomized controlled trial (RCT) designs and meta-analyses are the core methods, with clinical practice guidelines and policies as its primary products. EBM is not without controversies, as evidenced in recent thematic issues of the Journal of Evaluation in Clinical Practice (1997, 1998, 1999, 2000). Such controversies have centred primarily on the nature of evidence (Upshur, 1997).

In nursing, there is a longer tradition in the broad area of knowledge utilization, specifically in the narrower field of research utilization, dating from the 1970s (Ketefian, 1975; Shore, 1972). Figure 10.1 shows that the knowledge utilization field is older than EBM writings suggest. In nursing, the field is commonly demarcated by the large Conduct and Utilization of Research in Nursing (CURN) project (Horsley, Crane, & Bingle, 1978; Horsley, Crane, Crabtree, & Wood, 1983) of the 1970s.

[Figure 10.1. Knowledge utilization timeline, 1900–2002: G. Tardé (1903); agricultural extension model (1920–1960); Ryan & Gross (1943); Menzel & Katz (1955); CURN project (1970s); conceptual papers in nursing (1985); evidence-based medicine (1992); Cochrane Collaboration (1993); National Forum on Health (NFH) and CHSRF Canada (1997); CIHR Canada (2000). CHSRF: Canadian Health Services Research Foundation; CIHR: Canadian Institutes of Health Research; CURN: Conduct and Utilization of Research in Nursing; EBM: Evidence-Based Medicine.]

However, nursing research activity in this area has increased significantly only since the 1990s. In related health disciplines, such as physiotherapy, occupational therapy, and dentistry, the advent of identifiable activity in this field is even more recent. Knowledge utilization manifests itself in nursing (and in social work) as research utilization. Until the recent emergence of the evidence-based medicine movement, the focus in nursing was exclusively on the implementation of research evidence in practice; there was little, if any, confusion in nursing about what constituted research evidence (Ketefian, 1975; King, Barnard, & Hoehn, 1981; Kreuger, 1978; Shore, 1972). Discussion of research utilization in nursing is often couched within the theory- (or research-) practice gap (Allmark, 1995; Bostrom & Wise, 1994; Feldman et al., 1993; Ferrell, Grant, & Rhiner, 1990; Landers, 2000; Le May, Mulhall, & Alexander, 1998; Ousey, 2000; Rafferty, Allcock, & Lathlean, 1996; Rolfe, 1993, 1998; Schmitt, 1999; Upton, 1999; Wilson, 1984). However, the theory-practice discourse is incongruous across authors. Allmark (1995) identifies three versions of the theory-practice problem: (a) practice fails to live up to theory, (b) a relational problem


exists between nurses and organizations, and (c) theory is irrelevant to practice. The first problem is evoked most often in contemporary discussions about evidence-based practice or one of its analogues. If practice fails to live up to theory, the gap is perceived to originate with practice and practitioners; consequently, the solutions are oriented towards changing the individual (e.g., becoming more research minded), the patient care unit (e.g., decreasing ritualistic practice), and the environment (e.g., optimizing a research-positive climate).

This chapter is an overview of the research utilization field in nursing and the allied health sciences using the extant literature. We begin by briefly outlining a history of research utilization in nursing. We then describe our methods and the results of our literature review, and discuss the implications of our findings. Our primary purpose in writing this chapter is to offer a judgment of the state of knowledge utilization in nursing and, to a lesser degree, in the allied health professions. Secondarily, we propose a series of recommendations and directions for the field that are applicable to nursing and to all of the health sciences.

History of the Field

The early roots of research utilization as an area of interest in nursing can be dated to the U.K.-based Royal College of Nursing’s (RCN) Study of Nursing Care in the 1960s (McFarlane, 1970). The purpose of the RCN study was to assess clinical effectiveness. Investigators discovered that effective treatments were being used incorrectly, while ineffective treatments were still used frequently. This early reference suggested that nurses were not adopting research knowledge adequately.

In North America, Shore (1972) published the first study about research utilization in nursing. Soon after, the U.S. government’s Division of Nursing funded three projects: (a) the Western Interstate Commission for Higher Education in Nursing (WICHEN) project (Krueger, 1977; Krueger, Nelson, & Wolanin, 1978; Lindeman & Krueger, 1977); (b) the Nursing Child Assessment Satellite Training (NCAST) project (Barnard & Hoehn, 1978; King et al., 1981); and (c) the Conduct and Utilization of Research in Nursing (CURN) project (Horsley et al., 1978; Horsley et al., 1983). These three projects formed the nucleus of research utilization study in nursing and remain widely cited. They broadcast the emergence of the first nursing research utilization models: WICHEN (1977), CURN (1978), and NCAST (1978).


In the 1980s, despite a significant decrease in federal sponsorship of research and knowledge utilization that ended many programs (Backer, 1991), the study of research utilization in nursing continued. Over the course of the decade, the results of the WICHEN, NCAST, and CURN projects were disseminated, studied, evaluated, and discussed (Donaldson, 1992). Research utilization study grew rapidly in the 1990s in the field of nursing. Nursing journals increased their focus on research utilization, and nurse researchers developed and refined research utilization models, such as the Registered Nurses Association of British Columbia (RNABC) Model (RNABC, 1996; Clarke 1995); the Horn Model of Research Utilization (Goode & Bulechek, 1992); the Iowa Model of Research in Practice (Titler et al., 1994a); the Collaborative Research Model (Dufault, 1995); the Ottawa Model of Research Use (Logan & Graham, 1998); Kitson et al.’s Multidimensional Framework of Research Utilization (Kitson, Harvey, & McCormack, 1998); the Evidence-Based Multidisciplinary Practice Model (Goode & Piedalue, 1999); and the Model for Change to Evidence-Based Practice (Rosswurm & Larrabee, 1999). In 1992 the largely Canadian Evidence-Based Medicine Working Group coined the term ‘evidence-based medicine’ (Evidence-Based Medicine Working Group, 1992). Their article proclaimed the arrival of the evidence-based practice movement in medicine. Nursing and the allied health professions soon followed with proclamations of their own. Canadian research funding agencies began to focus on the dissemination and uptake of research as a fundable initiative, for example, the Alberta Heritage Foundation for Medical Research (AHFMR) dissemination program in 1996 and the Canadian Health Services Research Foundation (CHSRF) in 1997. Significantly, the Canadian Institutes of Health Research (CIHR), which was established in 2000, includes in its legislative act a clear emphasis on knowledge translation (Government of Canada, 2000). The CIHR boasts a separate knowledge translation division and an active program of research funding in this area.2 International cooperation and networks also began to form in the 1990s. In 1992 Sigma Theta Tau International held its International State of Science Congress: Nursing Research and Its Utilization in Washington, D.C. In 1998 Sigma Theta Tau, in cooperation with the Faculty of Nursing at the University of Toronto, sponsored the first international research utilization conference: Preparing for the New Millennium. Institutions dedicated to evidence-based nursing also began to appear: the Joanna Briggs Institute for Evidence Based Nursing and


Midwifery (Australia and New Zealand, 1996); the Centre for Evidence Based Nursing, University of York (United Kingdom, 1996); and the Sarah Cole Hirsch Institute for Best Nursing Practices Based on Evidence (United States, 1998). The journal Evidence Based Nursing commenced publication in 1998.

In the early part of the twenty-first century, nurse researchers continue to build on the achievements of the previous decades. Increasingly, nurses are pursuing collaborative research utilization initiatives with scholars from other disciplines, institutions, and countries. The remarkable acceleration in the research utilization agenda in nursing during the 1990s can be attributed primarily to the evidence-based movement. Additionally, recent initiatives, such as the creation of nationally funded research chairs in knowledge utilization, transfer, and translation, have been important; they have accelerated efforts to build research capacity and provided much needed momentum. Two major national health-funding agencies in Canada continue to have a special focus on the transfer of research into decision-making processes and outcomes: the CIHR and the CHSRF. Additionally, initiatives such as the new tri-party-funded Centre for Knowledge Transfer are contributing to ongoing development in nursing and other allied health sciences.3

Methods

We undertook a wide-ranging search of the literature to provide an overview of the state of the field of research utilization in nursing and allied health. We conducted the majority of our search using online bibliographic databases. To find articles published prior to 1982 we searched the print version of the Cumulative Index to Nursing and Allied Health (CINAHL), using the terms research use, research, research utilization, innovation diffusion, and dissemination. Review of the references of key works and information from key informants augmented our primary search results. We also searched Medline and CINAHL for relevant works of those authors known to have written in the area of research utilization. Our complete search strategy appears as an appendix to this chapter.

The initial search of all databases resulted in over 3,000 citations, which were then screened according to a set of inclusion and exclusion criteria. Inclusion in the overview required only that an article be related to research utilization, knowledge utilization, or evidence-based practice in nursing or allied health. Articles were excluded if they were


not in English or if they were in the fields of psychology or medicine. We did not include dissertations in the overview. Every effort was made to retrieve all relevant works; the final number of articles included in the overview was 544.

Data Extraction

After collecting and scanning the literature, three large categories emerged naturally: (a) opinion, (b) conceptual, and (c) research. A common set of data was extracted from each of the 544 pieces: the name and nationality of the author, the year of publication, the title of the article, and the name of the journal. For most subcategories, we also extracted general themes in the literature. The exception was the research category, where still more information was extracted: the setting of the research, target population, sample size, research design, theoretical framework, individual determinants of knowledge utilization, organizational determinants of knowledge utilization, measurement of knowledge utilization, the statistical methods used in the analysis, and a summary of the research findings.
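A minimal sketch may make the extraction scheme concrete. The record below is hypothetical rather than the authors’ actual coding form; the field names simply mirror the variables listed above, while the Python types and defaults are assumptions.

```python
# Hypothetical per-article extraction record; field names follow the
# variables described in the text, everything else is assumed.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractionRecord:
    # Common fields extracted from every one of the 544 articles
    author: str
    author_nationality: str
    year: int
    title: str
    journal: str
    category: str                      # "opinion", "conceptual", or "research"
    themes: List[str] = field(default_factory=list)
    # Additional fields extracted only for the research category
    setting: Optional[str] = None
    target_population: Optional[str] = None
    sample_size: Optional[int] = None
    design: Optional[str] = None
    theoretical_framework: Optional[str] = None
    individual_determinants: List[str] = field(default_factory=list)
    organizational_determinants: List[str] = field(default_factory=list)
    ku_measurement: Optional[str] = None
    statistical_methods: List[str] = field(default_factory=list)
    findings_summary: Optional[str] = None
```

Under this scheme, a record for an opinion piece would simply leave the research-specific fields at their defaults.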

General Opinion Articles

Of the 327 articles in the large category of general opinion articles, we described 197 articles simply as opinion pieces; they accounted for a large portion (60 per cent) of the opinion-based literature on research utilization in nursing. In such papers, the authors typically form and describe an opinion based upon a limited review of the literature. Editorials make up a relatively large portion (15 per cent) of the opinion-based literature; commentaries and letters make up 7 per cent. In 20 instructive articles (6 per cent) strategies or processes regarding research implementation in the clinical setting were outlined. The 18 articles (6 per cent) included from the allied health literature are discussed separately. We also found 10 articles (3 per cent) related to sources of knowledge, and 11 (3 per cent) related to the theory-practice gap.

Conceptual Articles

Conceptual or theoretical articles included those in which the authors discussed theoretical or conceptual issues, methodological issues, or presented or discussed a research utilization model. This group is the


smallest, with a total of only 62 (11 per cent) of the 544 reviewed papers located. Within this category, in 22 (35 per cent) papers theoretical issues in research utilization were discussed, in 21 (34 per cent) a theoretical model of research utilization was addressed, and in 19 (31 per cent) research utilization models were discussed.

Research Articles

The research category includes research specifically on research utilization, as well as clinical research studies in which the implementation of research-based interventions is evaluated or described. We identified a total of 155 empirical studies (29 per cent of the total reviewed) in nursing and allied health. In 95 of the research articles (61 per cent) research utilization is specifically examined. In 7 articles (5 per cent) research for implementation in clinical practice is evaluated, and in 53 research studies (34 per cent), which we describe as clinical research studies, the focus is on the outcomes of the use of research-based interventions in clinical practice. The clinical research studies differ from the research utilization studies because their focus is not on research utilization itself, but rather on the outcome of a clinical intervention.

Results

General Opinion Articles

In reviewing this literature, instructional pieces (n = 20) outlining strategies or processes to implement a research utilization program in the clinical setting were tangential to our purposes and were therefore excluded. However, the tone of this writing is predominantly positive and the articles cover numerous pragmatic topics. The ‘allied health’ and ‘sources of knowledge’ subcategories are discussed in later sections.

EDITORIALS, COMMENTARIES, AND LETTERS

We located 48 editorials and 23 commentaries or letters addressing research utilization in the nursing literature. We anticipated that the editorials would reflect the tenor of at least one segment of the profession regarding research utilization. Despite possessing common themes, this collection offered only a narrow view of the current state of research utilization science in nursing. Two themes did filter through the editorials and commentaries: the two-communities theme (Anderson,


1998; Barnard, 1986; Castledine, 1997a, b; Haller, 1987), and the theory-practice gap discussed in the opening sections of this chapter (Blanchard, 1996; Bower, 1994; Feeg, 1987; Feldman, 1995; Schmitt, 1999; Tenove, 1999; Williams, 1987). The two-communities theme is usually stated implicitly, likely because authors are unaware of the metaphor’s considerable influence (Caplan, 1979; Dunn, 1980) on the knowledge utilization field as a whole. Other, less frequent themes in these editorials include calls for (and admonishments for not) implementing research (Anderson, 1998; Barnard, 1986; Castledine, 1996; Titler & Goode, 1995); the historical evolution of research utilization in nursing (Bliss-Holtz, 1999; Davis & King, 1998; Hilton, 1995; Ingersoll, 2000); calls for more research into the processes of research utilization (Clarke, 1999); questions of adequate capacity in the profession (Grier, 1986); and the relationship between evidence-based practice and clinical practice guidelines (McPheeters & Lohr, 1998; King & Davis, 1997).

Conceptual Articles

We divided this category of literature into three general groups: (a) overviews, (b) critiques and clarifications of evidence-based nursing, and (c) papers that seriously attempt to advance the field theoretically. Overview papers often offer substantive and thoughtful views on the field (French, 1999; Funk, Tornquist, & Champagne, 1995; Gennaro, 1994; Hunt, 1981, 1997; MacGuire, 1990). They frequently raise relevant questions and therefore should be read by anyone studying this literature.

CRITIQUES AND CLARIFICATIONS

Though currently few in number, critiques and clarifications are emerging. We located eight (Estabrooks, 1998; Kitson, 1997; Mitchell, 1997, 1999; Traynor, 2000; French, 2002; Flaming, 2001; Kim, 1993). These authors attempt to correct perceived problems, inconsistencies, or inadequacies in nursing’s knowledge utilization theory. These issues include vague or inconsistent terminology, epistemological concerns, and a scarcity of foundational theory in nursing.

THEORETICAL DISCUSSIONS

Similar to critiques and clarification work, theoretically important work in this field is infrequent. Three early papers were published in the mid-1980s (Horsley, 1985; Loomis, 1985; Stetler, 1985) and the remainder in the 1990s (Estabrooks, 1999b; Flaming, 2001; French, 2002; Kim, 1993). Work from the 1980s is focused primarily on tracing the lineage of research utilization and on defining concepts. Interestingly, these early scholars incorporate work from the broader knowledge utilization field, whereas later scholars and researchers have narrowed their scope. Had this early trend been sustained, the research utilization field in nursing would have benefited considerably.

Papers from the 1990s differ significantly from earlier ones. Estabrooks, for example, attempts to map the field of study and to illustrate the complexity of the scholarship required. Kim, Flaming, and French, meanwhile, take a philosophical approach and raise important questions about the nature of evidence and the complex relationship of theory to practice.

RESEARCH UTILIZATION MODELS

In 1985 Crane (1985b) identified four basic styles of research utilization models: (a) research, development, and diffusion; (b) social interaction and diffusion; (c) problem-solving; and (d) linkage. She used these styles to review the three most commonly used research utilization models: CURN (linkage), NCAST (social interaction and diffusion), and WICHEN (problem-solving) (Crane, 1985a). In 1995 White (White, Leske, & Pearcy, 1995) briefly reviewed the CURN, Stetler, and Iowa models. These studies by Crane and White are the primary sources providing a review of research utilization models in nursing, leaving a need for more review work. The primary research utilization models developed and used in nursing are identified in table 10.1. Most of these models have been inductively developed from experiential work and activity. All models share common characteristics despite having different developmental bases. Many are grounded in social interaction and diffusion theories and are oriented towards the individual practitioner (e.g., Stetler) or incorporate organizational change in their focus (e.g., CURN). However, they do not truly address ways of modifying social interactions to increase sharing, exchange, and use of research, nor do they easily incorporate non-research forms of knowledge or account for interactions between various forms of knowledge. Most of these models have an explicitly rational actor basis and, with one or two notable exceptions (e.g., Kitson et al., 1998), address change at the individual level. In these models it is assumed that, if given options, the clinical practitioner will default to a scientific practitioner model of behaviour. The ideal type of scientific practitioner is, in fact, the rational actor model at work in health care.

Table 10.1 Models of Research Utilization

Authors | Model
Krueger, 1977; Krueger, Nelson, & Wolanin, 1978 | WICHEN
Barnard & Hoehn, 1978; King, Barnard, & Hoehn, 1981 | NCAST
Horsley, Crane, & Bingle, 1978; Horsley et al., 1983 | CURN
Goode et al., 1987 | Goode model
Goode & Bulechek, 1992; Goode & Titler, 1996 | Horn model of research utilization
Titler, Kleiber, Steelman, Goode, Rakel, Barry-Walker, et al., 1994 | Iowa model of research in practice
Goode & Piedalue, 1999 | Evidence-based multidisciplinary practice model
Funk, Tornquist, & Champagne, 1989a, b | Dissemination model
RNABC, 1991, 1996 | RNABC model
Rutledge & Donaldson, 1995 | OCRUN (Orange County Research Utilization in Nursing) project
Stetler & Marram, 1976; Stetler, 1985; Stetler, 1994a, b; Stetler, 2001 | Stetler/Marram model; Stetler model
Logan & Graham, 1998 | OMRU (Ottawa Model of Research Use)
Dufault, 1995; Dufault & Willey-Lessne, 1999 | CRU (collaborative research utilization) model
Kitson et al., 1996; Kitson et al., 1998 | PARIHS model of research implementation
Rosswurm & Larrabee, 1999; Rosswurm, 1992 | A model for change to evidence-based practice

These models collectively represent a style of process models common to nursing in the 1970s and 1980s that place strenuous demands on individual clinicians working in complex and fast-paced organizations. They have provided useful heuristics for the profession and have helped to shape a range of professional values regarding the use of science in practice. However, they require rigorous and systematic testing of their efficacy, despite a growing body of anecdotal evidence


supporting their effectiveness. They are expensive to implement, and organizations require more compelling evidence to suggest their impact on client, provider, or system outcomes before adoption.

Research Papers

We divided research articles into two major categories: (a) descriptions or evaluations of research-based clinical interventions (clinical research utilization projects), and (b) research into the efficacy, frequency, or extent of research utilization (RU research studies).

CLINICAL RESEARCH UTILIZATION PROJECTS

We reviewed 53 papers about clinical research utilization projects. Inconsistent key word selection in cataloguing prevented a complete review of this subset of papers. Generally, in these papers institutionally based projects that apply research findings to specific areas of nursing practice are described. The projects discussed in the reviewed articles focus on narrow aspects of clinical practice, such as pain management, oral care, tube feeding, and the treatment of ulcers. In 41 of the 53 articles the development of a research-based protocol is described (VandenBosch, Cooch, & Treston-Aurand, 1997; Stiefel, Damron, Sowers, & Velez, 2000; Shively et al., 1997); because of challenges in assessing patient outcomes and the unavailability of patient outcome measurement tools, however, nine of these studies did not subsequently evaluate the effectiveness of the protocol (Beaudry, VandenBosch, & Anderson, 1996; Reedy, Shivnan, Hanson, Haisfield, & Gregory, 1994). The recognition of variations in clinical practice and the need for quality assurance and improvement initiatives are the impetus for protocol work. In the 32 studies with an evaluative component, all were deemed successful in at least one of the following: improving patient outcomes, decreasing costs, or improving patient satisfaction with care.

The remainder of the clinical research utilization project articles include literature reviews on specific clinical issues (n = 8) (Jones, 1997; Taczak Kupensky, 1998; Longman, Verran, Ayoub, Neff, & Noyes, 1990), small surveys related to the research findings for specific clinical practices (Beitz, Fey, & O’Brien, 1999; Morin et al., 1999), a description of experiences using research in clinical practice (Schroyen et al., 1994), and an evaluation of a specific clinical practice with the research recommendations for this same practice (Grap, Pettrey, & Thornby, 1997). The clinical appeal of these articles to the practising clinician is apparent, since case studies are commonly used to demonstrate the clinical significance of research utilization in practice (Vines, Arnstein, Shaw, Buchholz, & Jacobs Julie, 1992; Wolf et al., 1997).

Some of the clinical research utilization projects (Logan, Harrison, Graham, Dunn, & Bissonnette, 1999; Beaudry et al., 1996; Vines et al., 1992) use established research utilization models as frameworks. Examples include Stetler (Stetler & Marram, 1976); CURN (Horsley et al., 1978; Horsley et al., 1983); and the Iowa Model (Titler et al., 1994a). Although primarily descriptive and lacking the rigours of intervention research (e.g., control, randomization), these studies make a valuable contribution to the research utilization field. Their findings suggest that research use in clinical practice improves patient and system outcomes in a context-specific manner. Practice-based studies are foundational in clinical nursing research and, if conducted systematically and with increasing sophistication, may herald the beginnings of a nursing database of significant value in facilitating evidence-based practices.

RU RESEARCH STUDIES

This group consists of research studies (n = 95) in which various aspects of research utilization are investigated. Figure 10.2 illustrates a dramatic increase in the number of research articles published since the mid-1990s. We identified three recent shifts in trends in this literature subset: (a) increased scholarly output in the field, (b) broader methodological approaches, and (c) the use of more sophisticated research design and analysis techniques.

INCREASED OUTPUT IN THE FIELD

There is a suggestion of increased capacity building by nurses in the field based on increased output. Until 1996 the majority of the work in the field consisted of single studies. Since this time, eleven investigators have published more than one knowledge utilization study. This important shift in trend is indicative of investigators directing more resources to this field and to programmatic research. The building of research capacity and the establishment of research utilization and knowledge utilization programs will advance the field and help to overcome challenges discussed later in this section.

BROADER METHODOLOGICAL APPROACH

Another shift is the increased use of qualitative methods to investigate knowledge utilization. The majority of existing qualitative studies (n = 15) in the field were completed after 1995. This emerging trend is

important because hypothesis-generating work is foundational to theory development. If this trend continues, along with programmatic research we can expect the emergence of intervention studies that are well grounded in qualitative work.

[Figure 10.2. Research publications per year, 1972–2000, by country of origin (Canada, U.K., U.S.A., other).]

INCREASED SOPHISTICATION OF RESEARCH DESIGN AND STATISTICAL TECHNIQUES

Over time increasingly complex research designs and sophisticated analysis techniques have been adopted. Until 1996 research in this field was largely correlational. Since 1996 researchers have used more sophisticated statistical techniques, such as structural equation modelling (Estabrooks, 1999c), and more sophisticated research designs, including quasi-experimental and randomized control trial designs (Dufault & Willey-Lessne, 1999; Dufault & Sullivan, 2000; Hamilton & McLaren, 2000; Hundley, Milne, Leighton-Beck, Graham, & Fitzmaurice, 2000; Hodnett et al., 1996).

Table 10.2 Theoretical Grounding of Research Literature

Theoretical grounding | Number of studies
Rogers’ Diffusion of Innovation Theory | 17 (18%)
Various research utilization models (e.g., CURN, Stetler model) | 10 (10.5%)
No theoretical grounding | 68 (71.5%)
Total | 95 (100%)

Although the research utilization field in nursing shows signs of recent development and advancement, researchers still face numerous challenges. These challenges include: (a) the need to ground studies theoretically, (b) a disproportionate focus on the individual determinants of research utilization, (c) a nearly exclusive use of correlational designs, (d) overpowering of studies, and (e) conceptual and measurement challenges.

NEED TO GROUND STUDIES THEORETICALLY

In the majority of the research studies (72 per cent – see table 10.2), investigators fail to identify a theoretical framework for their studies. When studies are grounded theoretically (n = 27), Rogers’ Diffusion of Innovation theory is used most frequently (Rogers, 1995). Rogers’ theory has radically shaped the perception of research utilization in nursing and the subsequent research avenues explored. The Diffusion of Innovation Theory refers to the spread of new ideas, techniques, behaviours, or products throughout a population. The term ‘innovation’ is defined as ‘an idea, practice or object that is perceived as new by an individual or other unit of adoption’ (ibid., 11). However, research knowledge must be conceptualized as a product that is analogous to an innovation to apply Rogers’s framework. Although this adaptation is plausible, a conceptualization of research utilization as a process is less likely within the confines of innovation diffusion theory. Rogers’ theory illustrates that several factors can influence the diffusion of innovations, including individual factors, organizational factors, and characteristics of the innovation itself.

DISPROPORTIONATE FOCUS ON INDIVIDUAL DETERMINANTS OF RESEARCH UTILIZATION

Researchers have focused on the individual determinants of research utilization since the beginning of this field in nursing (1972) (see figure 10.3). These determinants are factors at the individual level that appear to influence the use of research. Commonly investigated individual determinants include research beliefs (Bostrom & Suter, 1993); attitudes towards research (Champion & Leach, 1989; Coyle & Sokop, 1990; Lacey, 1994); problem-solving ability (Estabrooks, 1999c); involvement in research activities (Bostrom & Suter, 1993; Butler, 1995); reading practices (Brett, 1987; Coyle & Sokop, 1990; Kirchoff, 1982; Michel & Sneed, 1995); conference attendance (Butler, 1995; Coyle & Sokop, 1990; Michel & Sneed, 1995; Rutledge, Greene, Mooney, Nail, & Ropka, 1996; Winter, 1990); level of education (Bostrom & Suter, 1993; Brett, 1987; Butler, 1995; Champion & Leach, 1989; Coyle & Sokop, 1990; Davies, 1999); work experience (Estabrooks, 1999a; Kirchoff, 1982; Lia-Hoagberg, Schaffer, & Strohschein, 1999; Michel & Sneed, 1995); and age (Lacey, 1994; Rodgers, 2000a, b; Winter, 1990).

Investigation into the individual determinants of research utilization prevails, although several studies examine the organizational determinants of research utilization as either a primary or secondary focus. A recent systematic review of the individual determinants research revealed inconsistent results and questioned the validity of conclusions and recommendations arising from this research stream (Estabrooks, Floyd, Scott-Findlay, O’Leary, & Gushta, 2004).

A commonly studied determinant is higher education. The research evidence on this individual determinant is equivocal: whether or not higher education leads to improved utilization of research is debatable (Lacey, 1994; Brett, 1987; Butler, 1995; Coyle & Sokop, 1990). Equivocal findings are also found with other frequently studied individual determinants: years employed as a registered nurse (Bostrom & Suter, 1993; Davies, 1999; Michel & Sneed, 1995); clinical specialty (Bostrom & Suter, 1993; Michel & Sneed, 1995; Tsai, 2000); age (Lacey, 1994; Rodgers, 2000a; Winter, 1990); and nurses’ reading practices (Brett, 1987; Coyle & Sokop, 1990; Kirchoff, 1982). Research findings are more consistent regarding the positive correlation between a positive attitude towards research and research use (Champion & Leach, 1989; Coyle & Sokop, 1990; Estabrooks, 1999a, b; Hatcher & Tranmer, 1997; Lacey, 1994). Given the equivocal nature of the research findings stemming from an inflated focus on individual

determinants of research utilization, and the reality that individual practitioner characteristics (e.g., age and educational preparation) are not susceptible to change from external forces, further research energy should be directed to understanding organizational factors. Recently, nurse scholars (Kitson et al., 1998; McCormack et al., 2002) have advocated the importance of organizational context in facilitating knowledge utilization.

[Figure 10.3. Number of research studies per year, 1972–2000, in which individual determinants, organizational determinants, or both are discussed.]

As figure 10.3 demonstrates, organizational determinants in nursing are traditionally studied in tandem with individual determinants. The organizational determinants previously studied are organizational size (Varcoe & Hilton, 1995); availability of information (Hatcher & Tranmer, 1997; Krueger et al., 1978; Rutledge et al., 1996; Rutledge, Ropka, Greene, Nail, & Mooney, 1998); organizational infrastructure (Varcoe & Hilton, 1995); support (Alcock, Carroll, & Goodman, 1990; Champion & Leach, 1989; Pettengill, Gillies, & Clark, 1994); and presence of


a nursing research committee (Mitchell, Janzen, Pask, & Southwell, 1995; Rutledge et al., 1996). Kirchoff (1982) and Krueger (1982) were the first to investigate organizational determinants that influence research use in nursing. Conclusions about the impact of the organizational context are not easily drawn from the existing research (Varcoe & Hilton, 1995). However, findings to date suggest that nursing unit policy and subsequent organizational climate influence the use of specific research (Coyle & Sokop, 1990; Varcoe & Hilton, 1995), and that organizational factors are more influential than individual factors in general (Varcoe & Hilton, 1995). Continued investigation into organizational determinants holds promise for advancing the research utilization field.

PREDOMINANCE OF CORRELATIONAL DESIGNS

Of the 95 research papers reviewed, survey methods are primarily used in 81. In the remainder, qualitative analytic approaches (12) or a hybrid of approaches are used. A nearly exclusive reliance on survey and correlational design is, though, problematic. For example, the inability to manipulate the independent variable deters the development and testing of hypotheses to understand causal relationships. In addition, most studies do not use multivariate analysis techniques (73 per cent) (table 10.3). Without such techniques, assessing the complex interrelations embedded in subject responses is difficult and may result in potentially misleading results.

PROLIFERATION OF BARRIERS STUDIES

Investigators in this field often examine the obstacles to and facilitators of research utilization. These studies (n = 10) often use the Funk BARRIERS scale (Funk, Champagne, Wiese, & Tornquist, 1991a) and show remarkable consistency in their findings. Overall, nurses tend to have positive attitudes towards research, but they must overcome significant organizational barriers to using research in their clinical practice. The most frequently reported barriers are lack of time to read research and implement findings, lack of relevant research, lack of readily understandable research, and administration and physicians who do not support the implementation of new research-based protocols. These findings are consistent across all studies using the BARRIERS scale, though the rank-ordering varies. Because of the consistency in these findings, additional investigation into the barriers to research use should proceed only if they will advance our understanding further.

Table 10.3 Analytic Approaches in Nursing and Allied Health Sciences Knowledge Utilization Research Literature

Analytic approach | Number of studies
Univariate and bivariate statistical analysis (e.g., frequencies, correlations) | 55 (58%)
Multivariate analysis (e.g., ANOVA, regression, structural equation modelling) | 26 (27%)
Qualitative analysis | 12 (13%)
Hybrid analysis (e.g., quantitative and qualitative analysis) | 2 (2%)
Total number of research studies | 95 (100%)

One variable, lack of time, is consistently ranked high by nurses as a barrier to research utilization. However, little, if any, exploration or clarification of time as a barrier has been conducted. Often, the assumption is that nurses need more clock time on a given shift. Nurses’ perceptions of time, however, are full of contradictions. In Estabrooks’s research utilization case studies, for example, nurses cited lack of time as a significant barrier, yet they reported an average of only thirty minutes per shift when asked how much more time was needed (Estabrooks et al., 2001). This mismatch in results is puzzling and suggests that the kind of time, rather than the amount of time, may be the core issue.

OVERPOWERING OF STUDIES

A problem inherent in large correlational studies is that of overpowering, or having too large a sample size. The potential implications of overpowering are seldom discussed in the literature; the focus, instead, is often on underpowering (King, 2001). In this area of the literature, nine researchers report a sample size of over 1,000. Care needs to be exercised when results from studies with large sample sizes are interpreted, because as sample size grows, even trivially small effects can reach statistical significance.
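To make the practical concern concrete, the sketch below uses hypothetical numbers (not data from the reviewed studies) to show how a trivially small Pearson correlation crosses the conventional p < .05 threshold once the sample becomes large; scipy is assumed to be available for the t-distribution.

```python
# Hypothetical illustration of "overpowering": with a very large sample,
# a correlation too small to matter practically still tests as significant.
from math import sqrt
from scipy import stats

def correlation_p_value(r: float, n: int) -> float:
    """Two-sided p-value for a Pearson correlation r observed in a sample of size n."""
    t = r * sqrt((n - 2) / (1 - r ** 2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

for n in (100, 1000, 5000):
    # r = 0.06 explains less than half of one per cent of the variance,
    # yet it becomes "significant" once n is large enough.
    print(f"n = {n:>5}: p = {correlation_p_value(0.06, n):.4f}")
```

The same logic underlies the caution above: statistical significance in a sample of several thousand respondents says little by itself about the practical importance of an association.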

DEFINITIONAL AND MEASUREMENT CHALLENGES

The broader interdisciplinary knowledge utilization field is full of inconsistent definitions and shifting terminology, resulting in confusion and hampering communication (Larsen, 1980). The nursing research


utilization literature is similar. Frequently used terms include knowledge utilization, research utilization, innovation adoption, evidence-based practice, evidence-based nursing, research-based practice, technology transfer, and research translation. Research utilization is the most commonly used term, but evidence-based terminology is rapidly replacing it (French, 2002). The lack of conceptual clarity in the field is an a priori weakness that prevents progress in all aspects of the field. The formulation of a common set of terminology and conceptualizations, or a common lexicon, would enable cross-study comparisons and would facilitate communication among knowledge utilization researchers across disciplines.

For instance, inconsistent theoretical and operational definitions of the dependent variable (knowledge utilization) have hindered the development of instruments to measure knowledge utilization. In the nursing research utilization literature, research utilization is discussed regularly in qualitative terms, while little attention is accorded to quantitative evidence (see table 10.4). The majority of empirical studies (57 per cent, n = 54) fail to measure research utilization, the dependent variable. In 20 per cent (n = 19) of the studies existing or self-developed instruments are used to measure research utilization. Unfortunately, these scales often lack demonstrated reliability and validity because they were developed for single-study use. The Nursing Practice Questionnaire (NPQ) (Brett, 1987) and Champion and Leach’s (1989) survey are used most commonly.

Another frequent approach to understanding the extent of research utilization is to ask one or more questions about ‘use’ throughout the survey. These omnibus-type questions ask participants to assess their research utilization behaviour retrospectively over a specified period (e.g., how often have you used research in your clinical practice over the last six months?). This approach is problematic because it relies on self-reporting and places a recall burden on the participant to remember and determine knowledge utilization over a long period. Also, varying recall time (e.g., one month, one year) makes cross-study comparisons difficult. All current research utilization tools rely on self-reporting.

Table 10.4 Approaches to Measuring the Dependent Variable (Knowledge Utilization) in the Nursing and Allied Health Sciences Knowledge Utilization Research Literature

Measurement approach | Number of studies
Scale to measure dependent variable (knowledge utilization; e.g., Brett’s Nursing Practice Questionnaire or Champion and Leach’s Research Utilization Questionnaire) | 19 (20%)
A small number of questions (not an entire survey) to assess dependent variable, including single-item questions (e.g., yes/no questions; Have you used research in your clinical practice in the last year? How often have you used research in your clinical practice over the last six months?) | 20 (21%)
Other approaches (e.g., combination of instrumental and omnibus approaches, evaluation of the implementation of specific procedures) | 2 (2%)
No measurement of dependent variable | 54 (57%)
Total number of studies reviewed | 95 (100%)

Sources of Knowledge Studies

In a related body of literature, sources of knowledge (n = 15 studies) are addressed through exploration of sources that inform nurses’ decision-making (e.g., research journals, textbooks, expert colleagues).

Investigation of sources of knowledge began in response to the prevailing nursing professionalization agenda, which attempted to base clinical practice on research findings. The sources of knowledge work is important to the research utilization agenda, because findings demonstrate that nurses do not rely heavily on research findings to guide their clinical practice. In seven of the studies (Baessler, Curran, & McGrath, 1994; Estabrooks, 1998; Winter, 1990; Lathey & Hodge, 2001; Rasch & Cogdill, 1999; Barta, 1995; Luker & Kenrick, 1992) non-research sources (e.g., peers, individual patient information, physicians) were chosen as the most commonly utilized sources of knowledge. Only two studies had research-based sources ranked as most frequently used (Lawton, Montgomery, & Farmer, 2001; Ciliska, Hayward, Dobbins, Brunton, & Underwood, 1999).

An important finding inferred from studies of the sources of knowledge is that nurses have unique knowledge resources and ways of using knowledge, including research knowledge; this finding has not been discussed previously in the literature. Findings suggest that nurses prefer interpersonal and interactive sources of knowledge (e.g., dialogue


with colleagues) over traditional modes of dissemination, which are primarily printed materials (e.g., textbooks or journals). If substantiated, these findings could explain why many of the conventional, mainly educational approaches to increasing the use of research in clinical practice have not been successful. Conventional approaches are based largely on a rational-actor approach, in which it is assumed that people will gather, read, and consider all relevant information on a topic prior to making a decision. The rational-actor approach is individualistic, and a linear decision-making process uninfluenced by other factors, such as organizational context, politics, or values, is assumed. The sources of knowledge literature, meanwhile, points to the value of considering other approaches to increase knowledge utilization – approaches that reflect the ways and types of knowledge that nurses actually base their clinical decisions upon. Sources of knowledge studies indirectly suggest that focusing increased attention on interventions that increase social and relational capital in the workplace holds promise for increasing research use in clinical decision-making, because interventions based on social and relational capital are more reflective of actual patterns of knowledge use by nurses.

Summary of Research Results

Since 1995 the momentum produced by the evidence-based practice movement has led to an exponential increase in the number of research articles published. An analysis of the empirical literature highlighted recent increases in publication volume and advances in the research utilization field. In particular, there have been (a) an increased scholarly capacity in the field, (b) broader methodological approaches, and (c) utilization of more sophisticated research design and analysis techniques. Yet the field continues to present challenges for researchers: the failure to ground studies theoretically, an overemphasis on individual determinants of knowledge utilization, a widespread use of correlational designs, the under-powering of studies, and definitional and measurement challenges. The lack of a consistent and common language in the knowledge utilization field is problematic and complicates the measurement of knowledge utilization. Conceptual clarity is urgently needed to enhance theory and measurement developments that, subsequently, will drive further empirical research.
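To make the under-powering concern concrete, the sketch below is a generic illustration of standard practice rather than an analysis taken from the chapter: it uses the Fisher z approximation to estimate how many respondents a correlational study needs to detect an association reliably. The effect sizes, alpha, and power values shown are assumptions.

```python
# Minimal sketch, assuming a two-sided test at alpha = 0.05 and 80% power:
# approximate sample size needed to detect a correlation of size r
# (Fisher z approximation; the example r values are hypothetical).
import math

def n_for_correlation(r, z_alpha=1.96, z_power=0.84):
    """Approximate N for testing H0: rho = 0 against a true correlation of r."""
    c = 0.5 * math.log((1 + r) / (1 - r))        # Fisher z-transform of r
    return math.ceil(((z_alpha + z_power) / c) ** 2 + 3)

print(n_for_correlation(0.3))   # roughly 85 respondents for a moderate correlation
print(n_for_correlation(0.1))   # close to 800 respondents for a weak correlation
```

Samples well below such thresholds will often fail to detect modest correlations, which is why under-powering undermines the correlational designs the review describes.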


Allied Health Literature

The research utilization literature from the allied health professions was reviewed separately to determine if its history and development differ from those of nursing. We reviewed 18 articles: 8 from social work, 3 from occupational therapy, and 7 from general rehabilitation literature and other allied health disciplines. Like the nursing literature, the majority of articles (13) were opinion pieces in which strategies for implementing research into practice and for disseminating research results are described. In two articles (Egan, Dubouloz, von Zweck, & Vallerand, 1998; Thomas, 1978) models for implementing research-based practice are outlined, and the attitude of occupational therapists towards research is evaluated in another article (Dubouloz, Egan, Vallerand, & von Zweck, 1999). When evaluating terms used in this literature as a whole, we found that ‘research utilization,’ ‘knowledge utilization,’ and ‘innovation management’ were dominant. The research utilization literature in social work stems largely from authors such as Carol Weiss, Everett Rogers, and William Dunn. Not until the late 1990s was the term ‘evidence-based practice’ seen in the physical and occupational therapy literature (as was also the case in nursing). The need to produce and use research in the allied health professions for professionalization is expressed throughout the literature. The research utilization agenda legitimizes these fields to outside parties: the medical profession, funding agencies, and clients (Ashford & LeCroy, 1991; Ottenbacher, Barris, & Van Deusen, 1986; Berman, 1995). Despite recognizing the need for this agenda, many clinicians were against the devaluation of clinical knowledge and expertise (Dubouloz et al., 1999). Overall, the literature reveals a recognition of several barriers to overcome in research utilization. Researchers need to increase awareness of research utilization in allied health; in addition, to build capacity, more students need to be channelled into graduate school, and more practitioners need to be doing research (Ottenbacher et al., 1986). Berman (1995) believes that making research a clinical goal would stimulate more interest in participating in it. Researchers also noted that clinicians lack the knowledge to properly evaluate research for practice (Dubouloz et al., 1999). For example, modelling evidence-based practice on evidence-based medicine is problematic because clinicians experience difficulties when applying the
results of large-scale epidemiological studies to individual clients (Egan et al., 1998; Staller & Kirk, 1998; Dubouloz et al., 1999). Ashford and LeCroy (1991) argue that, for research to be used, clinicians need to move from a tacit decision-making approach, in which practice answers are sought for each individual client, to a more knowledge-based approach. Like nurses, allied health professionals prefer using their own observations and experiences, the experiences and knowledge of experts, and knowledge from their clients; books and journals are not preferred sources of knowledge (Dubouloz et al., 1999; Ashford & LeCroy, 1991). This trend remains unchanged from Glaser and Marks’s report, in which they state that ‘person to person transmission of information is more effective than the written word’ in disseminating vocational rehabilitation information (1966, p. 7). The scarcity of articles in the allied health literature precludes comparisons between nursing and the allied health professions in terms of the history and development of research utilization. Future comparisons are plausible, however, because of the anticipated rapid growth in the allied health research utilization literature, given the impetus from the evidence-based medicine movement. These comparisons would be useful, because research utilization studied in different contexts may yield insights about key processes and factors influencing use.

Discussion

The results of our review point to several issues in research utilization in nursing. First, as in most disciplines, much of the literature does not significantly advance the field, except to enable extrapolation of past and emerging trends. Second, while critiques of the contemporary evidence-based movement in nursing are emerging, critiques of the more confined (but longer established) research utilization agenda in nursing are lacking. The research utilization agenda is more confined and more clearly understood than that of the evidence-based practice movement, which is rife with epistemological, political, and other controversies. Nonetheless, critiques of its various assumptions and implications are important to informed theory development. Third, the theoretical basis of research utilization is critically underdeveloped in nursing and in the allied health sciences. The scarcity of substantive conceptual or theoretical papers is troubling in the light of the field’s thirty-year history. Fourth, research activity has been characterized by individual and uni-disciplinary efforts and by a failure to
approach the field programmatically. This narrow approach limits the development of theory and of interventions to improve the use of research in decision-making. Other disciplines offer different approaches, different theories, and different practices, all of which can inform research utilization in nursing. A core issue in the field is the measurement problem, which stems primarily from difficulties in conceptualizing research utilization. In addressing this issue, investigators must include both individual and organizational approaches and find ways to examine influencing factors from both levels. However, investigators undertaking the measurement challenges are entering poorly charted territory. They will find limited guidance in existing work from other disciplines, such as education, sociology, and political science (Dunn, 1983; Hall & Loucks, 1975; Johnson, 1980; Knott & Wildavsky, 1980; Rich, 1997; van de Vall & Bolas, 1982), or from nursing (Pelz & Horsley, 1981).

Unit of Analysis

As researchers respond to calls for organizational modelling work, challenges involving the unit of analysis are increasingly prevalent. Issues of study cost, complexity, and design arise when adequate samples at the unit or organizational level must be collected for analysis. There are also pressing questions regarding methods of aggregation and whether aggregating individual data to reflect unit or organizational characteristics is defensible (Forbes & Taunton, 1994; Hughes & Anderson, 1994; Verran, Mark, & Lamb, 1992; Verran, Gerber, & Milton, 1995).
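One common way to probe whether such aggregation is defensible, offered here as a generic illustration rather than a procedure described by the authors cited above, is to estimate how much of the variance in individual responses lies between units, for example with the intraclass correlation ICC(1) from a one-way ANOVA. The unit labels and scores in the sketch below are hypothetical.

```python
# Minimal sketch, assuming individual research-use scores grouped by nursing unit:
# ICC(1) from a one-way ANOVA as a rough check on whether aggregating individual
# responses to the unit level is defensible. All data below are hypothetical.
import numpy as np

def icc1(scores_by_unit):
    """ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), with k the average unit size."""
    groups = [np.asarray(g, dtype=float) for g in scores_by_unit]
    sizes = np.array([len(g) for g in groups])
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (len(groups) - 1)
    ms_within = ss_within / (sizes.sum() - len(groups))
    k = sizes.mean()  # simple approximation when unit sizes are unequal
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical 1-5 research-use scores for nurses on three units.
units = [
    [3.2, 3.5, 3.1, 3.4],   # unit A
    [4.1, 4.4, 4.0, 4.3],   # unit B
    [2.6, 2.9, 2.7, 3.0],   # unit C
]
print(f"ICC(1) = {icc1(units):.2f}")  # values near zero argue against aggregating
```

Low between-unit variance (an ICC near zero) suggests that a unit-level mean adds little beyond the individual scores, which is precisely the kind of question the aggregation literature cited above raises.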

Recommendations

Undoubtedly, the issues we have outlined need to be addressed if the research utilization field is to continue to grow and evolve in nursing and the allied health sciences. The following recommendations represent the key areas requiring attention.

Theoretical Framing

One reason for slow progress in nursing has been the failure to treat this field as a theoretically rich area and to develop and test relevant theory. Most studies reviewed were not framed theoretically; in those with theoretical frameworks, Rogers’s Diffusion of Innovation Theory (Rogers, 1995) was used. Nurse researchers’ nearly exclusive reliance
on Rogers’s influential work has limited development in the field. Researchers should explore and draw from other theoretical positions in the field of research and knowledge utilization (Lomas, 1993a; Oh, 1997; Rich, 1979, 1997). In addition, numerous nurse scholars have developed research utilization models that should be rigorously and systematically tested.

The Predictors of Research Use

Research has been focused predominantly on individual determinants of research use. Because many of these determinants cannot be modified (e.g., age, sex, years of experience), the focus should be on those that can be realistically changed. Interventions focused on changing the behaviour of individual health care professionals, however, have proved largely ineffective. Researchers should therefore focus on a wider range of influencing factors (Lomas, 1993b; Funk, Champagne, Wiese, & Tornquist, 1991b; Golden-Biddle, Locke, & Reay, 2001; Nilsson Kajermo, Nordstrom, Krusebrant, & Bjorvell, 1998; Rogers, 1995; Swap, Leonard, Shields, & Abrams, 2001). A shift from modifying individuals to modifying organizational environments may advance the research utilization agenda, because most nurses and other health care professionals work within organizations that structure and guide their work.

The Dependent Variable

Significant measurement difficulties continue to vex investigators in the field of research utilization (Dunn, 1983; Pelz & Horsley, 1981; Rich, 1997; Weiss, 1981). The lack of clear theoretical or operational definitions of research/knowledge utilization precedes the measurement issues. Discussions of measurement development, selection, and performance in a population group are rare; consequently, existing measures tap only instrumental research utilization (e.g., Brett’s Nursing Practice Questionnaire) and rely on respondents’ self-reported behaviour over long time periods (e.g., one year, five years). No existing measures of research utilization in nursing have been systematically developed and tested over time. Some evidence suggests that Knott and Wildavsky’s (1980) measure has been used somewhat more widely, in a series of studies examining knowledge utilization in various populations (e.g., Landry, Amara, & Lamari, 2001). Significant conceptual issues influence the development of sound measures – whether research utilization should be conceptualized as product or process, whether nurse
researchers should pursue an omnibus measure of research utilization, and whether they should pursue measures of instrumental, conceptual, and symbolic research use (Estabrooks, 1999a; Stetler, 1994a; Weiss, 1979), or a combination of these approaches. Additionally, an important issue is the persistent use of research utilization as the dependent variable, rather than as an independent variable predicting a relevant patient or system outcome. Although researchers understandably treat research utilization as a dependent variable, research use is of practical interest only if it positively predicts patient and system outcomes.

Interdisciplinary and Programmatic Research

Individual nurse investigators or small groups of nurse investigators conducted the studies covered in this review. Given the scope and complexity of the field, the research agenda will succeed only with interdisciplinary teams representing various disciplines such as political science, sociology, policy studies, organizational studies, anthropology, psychology, and the health sciences. A distinct ‘discipline of knowledge utilization’ does not currently exist. We argue that its emergence as a genuinely interdisciplinary field is essential to making substantial and sustained gains in theory development and subsequent usable intervention strategies. Historically, knowledge utilization and research utilization have been considered exclusively applied fields and, therefore, have lacked legitimate scholarly status. Consequently, both within and outside nursing, the field has had difficulties accessing adequate funding sources and publication outlets to sustain academic careers or to develop the necessary foundational work. Now, however, national funding agencies provide significant and sustained assistance for research on knowledge utilization and its variants. Our review findings suggest that building capacity in the field and increasing programmatic knowledge utilization research are two important avenues to follow, and that collaborative investigation will continue to mitigate core challenges in the knowledge utilization field. Nursing and the allied health sciences are well positioned to contribute meaningfully to theory development in the knowledge utilization field. Although these disciplines are professional, as opposed to academic (Donaldson & Crowley, 1978), this fact does not negate their obligation to develop the basic social science that will underpin effective research implementation strategies in their respective health professions.
Such basic social science will be the source from which interventions that increase research use are developed, as well as a source of a significantly increased understanding of the nature and substance of practice theory. It is apparent that a substantial amount of the knowledge needed to practise optimally in nursing and other health professions is not scientific. Well-developed practice theory is essential in an era where science often dominates, yet on its own it remains a necessary but insufficient form of knowledge for practice.

Appendix: Search Strategy

The following electronic bibliographic databases were searched:
• CINAHL (1982 – May 2001)
• MEDLINE (1966 – May 2001)
• Embase (1988 – June 2001)
• Psychinfo (1984 – May 2001)
• Dissertation Abstracts (to September 2001)
• Sociological Abstracts (1974 – September 2001)
• Web of Science (1975 – September 2001)

Because not all databases employ the same controlled vocabulary, different search terms were used as applicable. The searches were based upon the following vocabulary in CINAHL, with all search results being limited to English only:
Professional practice, evidence based (SH, exploded)
Diffusion of innovation (SH, exploded)
Nursing practice, research based (SH, exploded) OR Nursing practice, evidence based (SH, exploded)
Evidence based medicine (textword) OR Innovation$ (textword) utili$ (textword)
Evidence (textword) AND use (textword)
Knowledge (SH, exploded) uptake (textword)
Technolog$ (textword) transfer (textword)
Implement$ (textword)
Disseminat$ (textword)
Diffus$ (textword)
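The appendix lists the concept terms but not the exact Boolean logic used to combine them, so the sketch below is only an assumption about how such a strategy might be assembled programmatically; the helper function, the OR grouping, and the generic query syntax are all hypothetical rather than the authors’ actual search statements.

```python
# Minimal sketch (hypothetical, not the authors' actual CINAHL syntax): combining
# the exploded subject headings and truncated textwords listed above into a single
# OR'd query string, limited to English-language records. '$' is the truncation
# wildcard shown in the appendix.
SUBJECT_HEADINGS = [
    "Professional practice, evidence based",
    "Diffusion of innovation",
    "Nursing practice, research based",
    "Nursing practice, evidence based",
    "Knowledge",
]
TEXTWORDS = [
    "evidence based medicine",
    "innovation$ utili$",
    "evidence AND use",
    "knowledge uptake",
    "technolog$ transfer",
    "implement$",
    "disseminat$",
    "diffus$",
]

def build_query(headings, textwords, limit_english=True):
    """Join every concept term with OR and optionally apply an English-only limit."""
    parts = [f"{h} (SH, exploded)" for h in headings]
    parts += [f"{t} (textword)" for t in textwords]
    query = " OR ".join(f"({p})" for p in parts)
    return f"({query}) LIMIT TO English" if limit_english else query

print(build_query(SUBJECT_HEADINGS, TEXTWORDS))
```

Because each database uses its own controlled vocabulary and operators, a real strategy would be adapted database by database, as the appendix itself notes.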

NOTES

1 We have been somewhat informal in our use of the terms ‘knowledge utilization’ and ‘research utilization,’ often interchanging them in this chapter. In nursing and social work, in particular, the most commonly used term has been ‘research utilization’; this will be the term we mostly employ throughout. It is generally taken to mean the use in clinical practice of findings from empirical studies.
2 Information about the CIHR’s knowledge translation division and active program of research funding in the area can be accessed on the Internet at .
3 Information about the Centre for Knowledge Transfer can be found on the Internet at .

REFERENCES Alcock, D., Carroll, G., & Goodman, M. (1990). Staff nurses’ perceptions of factors influencing their role in research. Canadian Journal of Nursing Research, 22(4), 7–18. Allmark, P. (1995). A classical view of the theory-practice gap in nursing. Journal of Advanced Nursing, 22(1), 18 –23. Anderson, C.A. (1998). Does evidence-based practice equal quality nursing care? Nursing Outlook, 46, 257– 258. Ashford, J.B., & LeCroy, C.W. (1991). Problem solving in social work practice: Implications for knowledge utilization. Research on Social Work Practice, 1(3), 306 – 318. Backer, T.E. (1991). Knowledge utilization. The third wave. Knowledge: Creation, Diffusion, Utilization, 12(3), 225 – 40. Baessler, C.A., Curran, J.A., & McGrath, P. (1994). Medical-surgical nurses’ utilization of research methods and products. MEDSURG Nursing, 3(2), 113 –141. Barnard, K.E. (1986). Research utilization: The researcher’s responsibilities. Maternal Child Nursing, 11, 150. Barnard, K.E., & Hoehn, R. (1978). Nursing child assessment satellite training: final report. Hyattsville, MD: DHEW, Division of Nursing. Barta, K.M. (1995). Information-seeking, research utilization, and barriers to research utilization of pediatric nurse educators. Journal of Professional Nursing, 11(1), 49 – 57. .


270—C.A. Estabrooks, S. Scott-Findlay, and C. Winther Beaudry, M., VandenBosch, T., & Anderson, J. (1996). Research utilization: Once-a-day temperatures for afebrile patients. Clinical Nurse Specialist, 10(1), 21– 24. Beitz, J.M., Fey, J., & O’Brien, D. (1999). Perceived need for education vs. actual knowledge of pressure ulcer care in a hospital nursing staff. Dermatology Nursing, 11(2), 125 – 36. Berman, Y. (1995). Knowledge transfer in social work: The role of grey documentation. International Information & Library Review, 27(2), 143 – 154. Blanchard, H. (1996). Factors inhibiting the use of research in practice. Professional Nurse, 11(8), 524. Bliss-Holtz, J. (1999). The fit of research utilization and evidence based practice. Issues in Comprehensive Pediatric Nursing, 22(1), iii – iv. Bostrom, J., & Suter, W.N. (1993). Research utilization: Making the link to practice. Journal of Nursing Staff Development, 9(1), 28–34. Bostrom, J., and L. Wise. (1994) Closing the gap between research and practice. Journal of Nursing Administration, 24(5), 22 –7. Bower, F.L. (1994). Research utilization: Attitude and value. Reflections, Summer, 4 – 5. Brett, J.L.L. (1987). Use of nursing practice research findings. Nursing Research, 36(6), 344 – 349. Butler, L. (1995). Valuing research in clinical practice: A basis for developing a strategic plan for nursing research. Canadian Journal of Nursing Research, 27(4), 33 – 39. Caplan, N. (1979) The two-communities theory and knowledge utilization. American Behavioral Scientist, 22(3), 459 – 470. Castledine, G. (1996). Castledine column. All nurses are responsible for implementing research. British Journal of Nursing, 5(12), 764. Castledine, G. (1997a). Evidence-based nursing: Where is the evidence? British Journal of Nursing, 6(5), 290. Castledine, G. (1997b). Barriers to evidence-based nursing care. British Journal of Nursing, 6(18), 1077. Champion, V.L., & Leach, A. (1989). Variables related to research utilization in nursing: An empirical investigation. Journal of Advanced Nursing, 14, 705 – 710. Ciliska, D., Hayward, S., Dobbins, M., Brunton, G., & Underwood, J. (1999). Transferring public-health nursing research to health-system planning: Assessing the relevance and accessibility of systematic reviews. Canadian Journal of Nursing Research, 31(1), 23 – 36. Clarke, H. (1995). Using research to improve the quality of nursing care. Nursing BC, 27(5), 19 – 22. .


A Nursing and Allied Health Sciences Perspective—271 Clarke, H. (1999). Moving research utilization into the millennium. Canadian Journal of Nursing Research, 31(1), 5 –7. Coyle, L.A., & Sokop, A.G. (1990). Innovation adoption behavior among nurses. Nursing Research, 39(3), 176 –180. Crane, J. (1985a). Using research in practice: Research utilization – Theoretical perspectives. Western Journal of Nursing Research, 7(2), 261– 8. Crane, J. (1985b) Using research in practice: Research utilization – Nursing models. Western Journal of Nursing Research, 7(4), 494 – 497. Davies, S. (1999). Practice nurses’ use of evidence-based research. Nursing Times, 95(4), 57– 60. Davis, T., & King, K.M. (1998). Evidence-based nursing practice. Canadian Journal of Cardiovascular Nursing, 9(1), 29 – 34. Donaldson, N.E. (1992). If not now, then when? Nursing’s research utilization imperative. Communicating Nursing Research, 25, 29 – 44. Donaldson, S.K., & Crowley, D.M. (1978). The discipline of nursing. Nursing Outlook, 26(2), 113 –120. Dubouloz, C., Egan, M., Vallerand, J., & von Zweck, C. (1999). Occupational therapists’ perceptions of evidence-based practice. American Journal of Occupational Therapy, 53(5), 445 – 453. An earlier version of this paper was presented at the World Federation of Occupational Therapists Conference in Montreal, July 1998. Dufault, M.A. (1995). A collaborative model for research development and utilization. Journal of Nursing Staff Development, 11(3), 139 – 144. Dufault, M.A., & Sullivan, M. (2000). A collaborative research utilization approach to evaluate the effects of pain management standards on patient outcomes. Journal of Professional Nursing, 16(4), 240 – 250. Dufault, M.A., & Willey-Lessne, C. (1999). Using a collaborative research utilization model to develop and test the effects of clinical pathways for pain management. Journal of Nursing Care Quality, 13(4), 19 – 33. Dunn, W.N. (1980). The two-communities metaphor and models of knowledge use. Knowledge: Creation, Diffusion, Utilization, 1(4), 515 – 536. Dunn, W.N. (1983). Measuring knowledge use. Knowledge: Creation, Diffusion, Utilization, 5(1), 120 –133. Egan, M., Dubouloz, C., von Zweck, C., & Vallerand, J. (1998). The clientcentred evidence-based practice of occupational therapy. Canadian Journal of Occupational Therapy, 65(3), 136 –143. Estabrooks, C.A. (1998). Will evidence-based nursing practice make practice perfect? Canadian Journal of Nursing Research, 30(1), 15 –36. Estabrooks, C.A. (1999a). The conceptual structure of research utilization. Research in Nursing & Health, 22(3), 203 –16. .


272—C.A. Estabrooks, S. Scott-Findlay, and C. Winther Estabrooks, C.A. (1999b). Mapping the research utilization field in nursing. Canadian Journal of Nursing Research, 31(1), 53 –72. Estabrooks, C.A. (1999c). Modeling the individual determinants of research utilization. Western Journal of Nursing Research, 21(6), 758 –772. Estabrooks, C.A., Floyd, J.A., Scott-Findlay, S., O’Leary, K.A., & Gushta M. (2004). Individual determinants of research utilization: A systematic review. Journal of Advanced Nursing, 43(5), 506 –520. Estabrooks, C.A., Norris, J., Watt-Watson, J., Hugo, K., Profetto-McGrath, J., Telford, P., Scott-Findlay, S., McGilton, K., Lander, J., Hesketh, K., Chong, H., Dimaculagon, D., Morris, M., Smith, J.E., & Humphrey, C.K. (2001). A tale of two cities: A multi-site study of research use in the context of pain management. Four papers presented at AARN Research Conference on Using Research in Practice. Alberta Association of Registered Nurses. Edmonton, 2 May. Evidence-Based Medicine Working Group. (1992). Evidence-based medicine. A new approach to teaching the practice of medicine. Journal of the American Medical Association, 268(17), 2420 –2425. Feeg, V.D. (1987). Research based practice: rhetoric or reality? Pediatric Nursing, 13(1), 6 –7. Feldman, H.R. (1995). Research utilization: the next step. Journal of Professional Nursing, 11(4), 201. Feldman, H.R., Penney, N., Haber, J., Carter, E., Hott, J.R., & Jacobson, L. (1993). Bridging the nursing research-practice gap through research utilization. Journal of the New York State Nurses Association, 24(3), 4 –10. Ferrell, B.R., Grant, M.M., & Rhiner, M. (1990). Bridging the gap between research and practice. Oncology Nursing Forum, 17(3), 447 – 448. Flaming, D. (2001). Using phronesis instead of ‘research-based practice’ as the guiding light for nursing practice. Nursing Philosophy, 2, 251–258. Forbes, S., & Taunton, R.L. (1994). Reliability of aggregated organizational data: An evaluation of five empirical indices. Journal of Nursing Measurement, 2(1), 37– 48. French, P. (1999). The development of evidence-based nursing. Journal of Advanced Nursing, 29(1), 72–78. French, P. (2002). What is the evidence on evidence-based nursing? An epistemological concern. Journal of Advanced Nursing, 37(3), 250 –257. Funk, S.G., Tornquist, E.M., & Champagne, M.T. (1989a). A model for improving the dissemination of nursing research. Western Journal of Nursing Research, 11(3), 361– 367. Funk, S.G., Tornquist, E.M. & Champagne, M.T. (1989b). Application and evaluation of the dissemination model. Western Journal of Nursing Research, 11(4), 486 – 491. .


A Nursing and Allied Health Sciences Perspective—273 Funk, S.G., Tornquist, E.M., & Champagne, M.T. (1995). Barriers and facilitators of research utilization. Nursing Clinics of North America, 30(3), 395 – 407. Funk, S.G., Champagne, M.T., Wiese, R.A., & Tornquist, E.M. (1991a). BARRIERS: The barriers to research utilization scale. Applied Nursing Research, 4(1), 39 – 45. Funk, S.G., Champagne, M.T., Wiese, R.A., & Tornquist, E.M. (1991b). Barriers to using research findings in practice: The clinician’s perspective. Applied Nursing Research, 4(2), 90 –95. Gennaro, S. (1994). Research utilization: An overview. Journal of Obstetrical, Gynecological and Neonatal Nursing, 23(4), 313 –319. Glaser, E.M., & Marks, J.B. (1966). Putting research to work. Rehabilitation Record, November–December, 6 –10. Goode, C.J., & Bulechek, G.M. (1992). Research utilization: An organizational process that enhances quality of care. Journal of Nursing Care Quality, 27–35. Goode, C.J., Lovett, M.K., Hayes, J.E., & Butcher, L.A. (1987). Use of research based knowledge in clinical practice. Journal of Nursing Administration, 17(12), 11–18. Goode, C.J., & Piedalue, F. (1999). Evidence-based clinical practice. Journal of Nursing Administration, 29(6), 15 –21. Goode, C.J., & Titler, M.G. (1996). Moving research-based practice throughout the health care system. MEDSURG Nursing, 5(5), 380 – 383. Government of Canada. (2000). Canadian Institutes of Health Research Act. Vol. C –13, c. 6. Grap, M.J., Pettrey, L., & Thornby, D. (1997). Hemodynamic monitoring: A comparison of research and practice. American Journal of Critical Care, 6(6), 452 – 456. Gray, M. (1995). Scientific truths: Weighing the evidence. Journal of Wound, Ostomy and Continence Nursing, 22, 203 –205. Grier, M.R. (1986). Dissemination of nursing research: Past and future. Research in Nursing & Health, 9, iii – iv. Hall, G., & Loucks, S. (1975). Levels of use of the innovation: A framework for analyzing innovation adoption. Journal of Teacher Education, 26(1), 52– 56. Haller, K.B. (1987). Readying research for practice. Maternal Child Nursing, 12, 226. Hamilton, S., & McLaren, S.M. (2000). Evidence-based practice in stroke assessment and recording: An evaluation of the implementation of guidelines using a multifaceted strategy. Clinical Effectiveness in Nursing, 4(4), 173 –179. Hatcher, S., & Tranmer, J. (1997). A survey of variables related to research utilization in nursing practice in the acute care setting. Canadian Journal of Nursing Administration, September/October, 31–53. .


274—C.A. Estabrooks, S. Scott-Findlay, and C. Winther Hilton, B.A. (1995). Translating research into practice. Canadian Oncology Nursing Journal, 5(3), 79 – 81. Hodnett, E.D., Kaufmann, K., O’Brien-Pallas, L., Chipman, M., WatsonMacDonell, J., & Hunsburger, W. (1996). A strategy to promote researchbased nursing care: effects on childbirth outcomes. Research in Nursing & Health, 19(1), 13 – 20. Horsley, J. (1985). Using research in practice: The current context. Western Journal of Nursing Research, 7(1), 135 –139. Horsley, J.A., Crane, J., & Bingle, J.D. (1978). Research utilization as an organizational process. Journal of Nursing Administration, July, 4 – 6. Horsley, J.A., Crane, J., Crabtree, M.K., & Wood, D.J. (1983). Using research to improve nursing practice: A guide. San Francisco: Grune & Stratton. Hughes, L.C., & Anderson, R.A. (1994). Issues regarding aggregation of data in nursing systems research. Journal of Nursing Management, 2(1), 79 –101. Hundley, V., Milne, J., Leighton-Beck, L., Graham, W., & Fitzmaurice, A. (2000). Raising research awareness among midwives and nurses: Does it work? Journal of Advanced Nursing, 31(1), 78 – 88. Hunt, J. (1981). Indicators for nursing practice: The use of research findings. Journal of Advanced Nursing, 6, 189–194. Hunt, J. (1997). Towards evidence based practice. Nursing Management, 4(2), 14 –17. Ingersoll, G.L. (2000). Evidence-based nursing: What it is and what it isn’t. Nursing Outlook, 48(4), 151–152. Johnson, K.W. (1980). Stimulating evaluation use by integrating academia and practice. Knowledge: Creation, Diffusion, Utilization, 2(2), 237–262. Jones, J.E. (1997). Research-based or idiosyncratic practice in the management of leg ulcers in the community. Journal of Wound Care, 6(9), 447– 450. Journal of Evaluation in Clinical Practice. (1997). 3(2). Journal of Evaluation in Clinical Practice. (1998). 4(4). Journal of Evaluation in Clinical Practice. (1999). 5(2). Journal of Evaluation in Clinical Practice. (2000). 6(2). Ketefian, S. (1975). Application of selected nursing research findings into nursing practice. Nursing Research, 24(2), 89 – 92. Kim, H.S. (1993). Putting theory into practice: Problems and prospects. Journal of Advanced Nursing, 18(10), 1632 – 9. King, D., Barnard, K.E., & Hoehn, R. (1981). Disseminating the results of nursing research. Nursing Outlook, 29(3), 164 –169. King, K.M. (2001). The problem of under-powering in nursing research. Western Journal of Nursing Research, 23(4), 334 –335. .


A Nursing and Allied Health Sciences Perspective—275 King, K.M., & Davis, T. (1997). Evidence-based nursing practice. Canadian Journal of Cardiovascular Nursing, 9(2), 43 – 48. Kirchoff, K.T. (1982). A diffusion survey of coronary precautions. Nursing Research, 31(4), 196 – 201. Kitson, A. (1997). Using evidence to demonstrate the value of nursing. Nursing Standard, 11(28), 34 – 39. Kitson, A., L.B. Ahmed, G. Harvey, K. Seers, and D.R. Thompson. (1996) From research to practice: one organizational model for promotion researchbased practice. Journal of Advanced Nursing, 23, 430 – 440. Kitson, A., Harvey, G., & McCormack, B. (1998). Enabling the implementation of evidence based practice: A conceptual framework. Quality in Health Care, 7, 149–158. Knott, J., & Wildavsky, A. (1980). If dissemination is the solution, what is the problem? Knowledge: Creation, Diffusion, Utilization, 1(4), 537– 578. Krueger, J.C. (1977). Utilizing clinical nursing research findings in practice: A structured approach. In M. Batey (Ed.), Communicating nursing research, vol. 9 (pp. 381– 394). Boulder, CO: Western Interstate Commission for Higher Education. Kreuger, J.C. (1978). Utilization of nursing research: the planning process. Journal of Nursing Administration, 8(1), 6 – 9. Krueger, J.C. (1982). A survey of research utilization in community health nursing ... using research in practice. Western Journal of Nursing Research, 4, 244 – 248. Krueger, J.C., Nelson, A.H., & Wolanin, M.O. (1978). Nursing research: Development, collaboration and utilization. Germantown, MD: Aspen. Lacey, E.A. (1994). Research utilization in nursing practice – a pilot study. Journal of Advanced Nursing, 19(5), 987– 995. Landers, M.G. (2000). The theory-practice gap in nursing: The role of the nurse teacher. Journal of Advanced Nursing, 32(6), 1550 –1556. Landry, R., Amara, N., & Lamari, M. (2001). Utilization of social science research knowledge in Canada. Research Policy, 30, 333 – 349. Larsen, J.K. (1980). Knowledge utilization. What is it? Knowledge: Creation, Diffusion, Utilization, 1(3), 421– 442. Lathey, J.W., & Hodge, B. (2001). Information seeking behavior of occupational health nurses: How nurses keep current with health information. AAOHN Journal, 49(2), 87– 95. Lawton, S., Montgomery, L., & Farmer, J. (2001). Survey and workshop initiative on community nurses’ knowledge of the Internet. Computers in Nursing, 19(3), 118 –121. .


276—C.A. Estabrooks, S. Scott-Findlay, and C. Winther Le May, A., Mulhall, A., & Alexander, C. (1998). Bridging the research-practice gap: Exploring the research cultures of practitioners and managers. Journal of Advanced Nursing Science, 28(2), 428 – 437. Lia-Hoagberg, B., Schaffer, M., & Strohschein, S. (1999). Public health nursing practice guidelines: An evaluation of dissemination and use. Public Health Nursing, 16(6), 397– 404. Lindeman, C.A., & Krueger, J.C. (1977). Increasing the quality, quantity and use of nursing research. Nursing Outlook, 25(7), 450 – 454. Logan, J., & Graham, I.D. (1998). Toward a comprehensive interdisciplinary model of health care research use. Science Communication, 20(2), 227–246. Logan, J., Harrision, M.B., Graham, I., Dunn, K., & Bissouette, J. (1999). Evidence-based pressure ulcer practice: The Ottawa model of research use. Canadian Journal of Nursing Research, 31(1), 37– 52. Lomas, J. (1993a). Diffusion, dissemination, and implementation: Who should do what? Annals of the New York Academy of Sciences, 703, 226 –237. Lomas, J. (1993b). Retailing research: Increasing the role of evidence in clinical services for childbirth. Milbank Quarterly, 71(3), 439 – 475. Longman, A.J., Verran, J.A., Ayoub, J., Neff, J., & Noyes, A. (1990). Research utilization: An evaluation and critique of research related to oral temperature measurement. Applied Nursing Research, 3(1), 14 –19. Loomis, M.E. (1985). Knowledge utilization and research utilization in nursing. IMAGE: The Journal of Nursing Scholarship, 17(2), 35 – 39. Luker, K.A., & Kenrick, M. (1992). An exploratory study of the sources of influence on the clinical decisions of community nurses. Journal of Advanced Nursing, 17(No?), 457– 466. MacGuire, J.M. (1990). Putting nursing research findings into practice: Research utilization as an aspect of the management of change. Journal of Advanced Nursing, 15, 614 – 620. McCormack, B., Kitson, A., Harvey, G., Rycroft-Malone, J., Titchen, A., & Seers, K. (2002). Getting evidence into practice: The meaning of ‘context.’ Journal of Advanced Nursing, 38(1), 94 –104. McFarlane, J.K. (1970). The proper study of the nurse. London: Royal College of Nursing. McPheeters, M., & Lohr, K. (1998). Evidence-based practice and nursing: Commentary. Outcomes Management for Nursing Practice, 3(3), 99–101. Menzel, H., & Katz, E. (1955). Social relations and innovation in the medical profession: The epidemiology of a new drug. Public Opinion Quarterly, 19(4), 337– 352. Michel, Y., & Sneed, N.V. (1995). Dissemination and use of research findings in nursing practice. Journal of Professional Nursing, 11(5), 306 –311. .


A Nursing and Allied Health Sciences Perspective—277 Mitchell, A., Janzen, K., Pask, E., & Southwell, D. (1995). Assessment of nursing research utilization needs in Ontario health agencies. Canadian Journal of Nursing Administration, 8(1), 77– 91. Mitchell, G.J. (1997). Questioning evidence-based practice for nursing. Nursing Science Quarterly, 10(4), 154 –155. Mitchell, G.J. (1999). Evidence-based practice: Critique and alternative view. Nursing Science Quarterly, 12(1), 30 – 35. Morin, K.H., Bucher, L., Plowfield, L., Hayes, E., Mahoney, P., & Armiger, L. (1999). Using research to establish protocols for practice: A statewide study of acute care agencies. Vol. 13(2), 77– 84. National Forum on Health. (1997). Making decisions: Evidence and information. Ottawa: National Forum on Health. Nilsson Kajermo, K., Nordstrom, G., Krusebrant, A., & Bjorvell, H. (1998). Barriers to and facilitators of research utilization, as perceived by a group of registered nurses in Sweden. Journal of Advanced Nursing, 27(4), 798 – 807. Oh, C.H. (1997). Issues for the thinking of knowledge utilization: Introductory remarks. Knowledge and Policy: International Journal of Knowledge Transfer and Utilization, 10(3), 3 –10. Ottenbacher, K.J., Barris, R., & Van Deusen, J. (1986). Some issues related to research utilization in occupational therapy. American Journal of Occupational Therapy, 40(2), 111–116. Ousey, K. (2000). Bridging the theory-practice gap? The role of the lecturer/ practitioner in supporting pre-registration students gaining clinical experience in an orthopaedic unit. Journal of Orthopaedic Nursing, 4(3), 115 –120. Pelz, D.C., & Horsley, J.A. (1981). Measuring utilization of nursing research. In J.A. Ciarlo (Ed.), Utilizing evaluation: Concepts and measurement techniques (pp. 125 –149). Beverly Hills, CA: Sage. Pettengill, M.M., Gillies, D.A., & Clark, C.C. (1994). Factors encouraging and discouraging the use of nursing research findings. IMAGE: The Journal of Nursing Scholarship, 26(2), 143 –147. Rafferty, A.M., Allcock, N., & Lathlean, J. (1996). The theory/practice gap: Taking issue with the issue. Journal of Advanced Nursing, 23(4), 685 – 691. Rasch, R.F.R., & Cogdill, K.W. (1999). Nurse practitioners’ information needs and information seeking: Implications for practice and education. Holistic Nursing Practice, 13(4), 90 – 97. Reedy, A.M., Shivnan, J.C., Hanson, J.L., Haisfield, M.E., & Gregory, R.E. (1994). The clinical application of research utilization: Amphotericin B. Oncology Nursing Forum, 21(4), 715 –719. Registered Nurses Association of British Columbia. (1996). Making a difference: From ritual to research-based nursing practice. Vancouver, BC: Author. .


278—C.A. Estabrooks, S. Scott-Findlay, and C. Winther Rich, R.F. (1979). The pursuit of knowledge. Knowledge: Creation, Diffusion, Utilization, 1(1), 6 – 30. Rich, R.F. (1997). Measuring knowledge utilization: Processes and outcomes. Knowledge and Policy: The International Journal of Knowledge Transfer and Utilization, 10(3), 11–24. Rodgers, S.E. (2000a). A study of the utilization of research in practice and the influence of education. Nurse Education Today, 20, 279 – 287. Rodgers, S.E. (2000b). The extent of nursing research utilization in general medical and surgical wards. Journal of Advanced Nursing, 32(1), 182–193. Rogers, E. (1995). Diffusion of innovations (4th ed.). New York: Free Press. Rolfe, G. (1993). Closing the theory-practice gap: A model of nursing praxis. Journal of Clinical Nursing, 2, 173 –177. Rolfe, G. (1998). The theory-practice gap in nursing: From research-based practice to practitioner-based research. Journal of Advanced Nursing, 28(3), 672–679. Rosswurm, M.A. (1992). A research-based practice model in a hospital setting. Journal of Nursing Administration, 22(3), 57– 60. Rosswurm, M.A., & Larrabee, J.H. (1999). A model for change to evidencebased practice. Image: Journal of Nursing Scholarship, 31(4), 317–322. Rutledge, D.N., & Donaldson, N.E. (1995). Building organizational capacity to engage in research utilization. Journal of Nursing Administration, 25(10), 12–16. Rutledge, D.N., Greene, P., Mooney, K., Nail, L.M., & Ropka, M. (1996). Use of research-based practices by oncology staff nurses. Oncology Nursing Forum, 23(8), 1235 –1244. Rutledge, D.N., Ropka, M., Greene, P.E., Nail, L., & Mooney, K.H. (1998). Barriers to research utilization for oncology staff nurses and nurse managers / clinical nurse specialists. Oncology Nursing Forum, 25(3), 497– 506. Ryan, B., & Gross, N.C. (1943). The diffusion of hybrid corn seed in two Iowa communities. Rural Sociology, 8, 15 – 24. Schmitt, M.H. (1999). Closing the gap between research and practice: Strategies to enhance research utilization. Research in Nursing & Health, 22(6), 433 – 4. Schroyen, B., Bielby, A.M., Hawtin, F., MacKay, B.J., Simkiss, L., & Meehan, T.C. (1994). Integrating nursing research and practice: Part I – a journey into the unknown. Nursing Praxis in New Zealand, 9(3), 12–14. Shively, M., Riegel, B., Waterhouse, D., Burns, D., Templin, K., & Thomason, T. (1997). Testing a community level research utilization intervention. Applied Nursing Research, 10(3), 121–127. .


A Nursing and Allied Health Sciences Perspective—279 Shore, H.L. (1972). Adopters and laggards. Canadian Nurse, 68(7), 36 –39. Staller, K.M., & Kirk, S.A. (1998). Knowledge utilization in social work and legal practice. Journal of Sociology and Social Welfare, 25(3), 91–113. Stetler, C.B. (1985). Research utilization: Defining the concept. Image: Journal of Nursing Scholarship, 17(2), 40 – 44. Stetler, C.B. (1994a). Problems and issues of research utilization. In O.L. Strickland, & D.J. Fishman (Eds.), Nursing Issues in the 1990s (pp. 459 – 471). New York: Delmar. Stetler, C.B. (1994b). Refinement of the Stetler/Marram model for application of research findings to practice. Nursing Outlook, 42(1), 15 – 25. Stetler, C.B. (2001). Updating the Stetler Model of research utilization to facilitate evidence-based practice. Nursing Outlook, 49(6), 272– 279. Stetler, C.B., & Marram, G. (1976). Evaluating research findings for applicability in practice. Nursing Outlook, 24(9), 559 – 563. Stiefel, K.A., Damron, S., Sowers, N.J., & Velez, L. (2000). Improving oral hygiene for the seriously ill patient: Implementing research-based practice. MEDSURG Nursing, 9(1), 40 – 43. Swap, W., Leonard, D., Shields, M., & Abrams, L. (2001). Using mentoring and storytelling to transfer knowledge in the workplace. Journal of Management Information Systems, 18(1), 95 –114. Taczak Kupensky, D. (1998). Applying current research to influence clinical practice. Journal of Intravenous Nursing, 21(5), 271–274. Tarde, G. (1903). The laws of imitation. New York: Holt. Tenove, S. (1999). Dissemination: Current conversations and practices. Canadian Journal of Nursing Research, 31(1), 95 – 99. Thomas, E.J. (1978). Generating innovation in social work: The paradigm of developmental research. Journal of Social Services Research, 2(1), 95 –115. Titler, M.G., & Goode, C.J. (1995). Research utilization. Nursing Clinics of North America, 30(3), 15 –16. Titler, M.G., Kleiber, C., Steelman, V., Goode, C., Rakel, B., Barry-Walker, J., Small, S., & Buckwalter, K. (1994). Infusing research into practice to promote quality care. Nursing Research, 43(5), 307– 313. Traynor, M. (2000). Purity, conversion and the evidence based movements. Health, 4(2), 139 –158. Tsai, S.-L. (2000). Nurses’ participation and utilization of research in the Republic of China. International Journal of Nursing Studies, 37, 435 – 444. Upshur, R. (1997). Certainty, probability and abduction: Why we should look to C.S. Peirce rather than Godel for a theory of clinical reasoning. Journal of Evaluation in Clinical Practice, 3(3), 201– 206. .


280—C.A. Estabrooks, S. Scott-Findlay, and C. Winther Upton, D. (1999). How can we achieve evidence-based practice if we have a theory-practice gap in nursing today? Journal of Advanced Nursing, 29(3), 549 – 555. VandenBosch, T.M., Cooch, J., & Treston-Aurand, J. (1997). Research utilization: Adhesive bandage dressing regimen for peripheral venous catheters. American Journal of Infection Control, 25(6), 513 – 519. van de Vall, M., & Bolas, C. (1982). Using social policy research for reducing social problems: An empirical analysis of structure and functions. Journal of Applied Behavioural Science, 18(1), 49 – 67. Varcoe, C., & Hilton, A. (1995). Factors affecting acute-care nurses’ use of research findings. Canadian Journal of Nursing Research, 27(4), 51–71. Verran, J.A., Gerber, R.M., & Milton, D.A. (1995). Data aggregation: Criteria for psychometric evaluation. Research in Nursing & Health, 18, 77– 80. Verran, J.A., Mark, B.A., & Lamb, G. (1992). Psychometric examination of instruments using aggregated data. Research in Nursing & Health, 15, 237– 240. Vines, S.W., Arnstein, P., Shaw, A., Buchholz, S., & Jacobs Julie. (1992). Research utilization: An evaluation of the research related to causes of diarrhea in tube-fed patients. Applied Nursing Research, 5(4), 164 –173. Ward, D. (2000). Implementing evidence-based practice in infection control. British Journal of Nursing, 9(5), 267–71. Weiss, C.H. (1979). The many meanings of research utilization. Public Administration Review, 39, 426 – 431. Weiss, C.H. (1981). Measuring the use of evaluation. In J.A. Ciarlo (Ed.), Utilizing evaluation: Concepts and measurement techniques (pp. 17–33). Thousand Oaks, CA: Sage. White, J.M., Leske, J.S., & Pearcy, J.M. (1995). Models and processes of research utilization. Nursing Clinics of North America, 30(3), 409 –19. Williams, C.A. (1987). Research utilization: A special challenge for nursing faculty. Journal of Professional Nursing, 3(3), 133. Wilson, H.S. (1984). Organizational approaches to bridging the researchpractice gap. Journal of Nursing Administration, 14(9), 7– 8. Winter, J.C. (1990). Brief: Relationship between sources of knowledge and use of research findings. Journal of Continuing Education in Nursing, 21(3), 138 –140. Wolf, Z.R., Brennan, R., Ferchau, L., Magee, M., Miller-Samuel, S., Nicolay, L., Paschal, D., Ring, J., & Sweeney, A. (1997). Creating and implementing guidelines on caring for difficult patients: A research utilization project. MEDSURG Nursing, 6(3), 137–145. .



Postscript: Understanding Evidence-Based Decision-Making – or, Why Keyboards Are Irrational

JONATHAN LOMAS

Some Musings

Anyone who has typed, which in these days of the computer must be everyone under age forty, has surely wondered about the awkwardness of the keyboard layout. That pesky but frequently needed ‘a’ is forced to rely on the weak left little finger, while ‘f,’ ‘g,’ ‘h,’ and ‘j’ – not needed in more than 70 per cent of English words – seem idiosyncratically placed in prime position. The rational layout for a keyboard would be to place frequently used letters on the home key locations and array the progressively less frequently used letters in the outer reaches. Stephen Jay Gould (1991) explains the origins of this anomaly in his delightful essay on the quirks of the QWERTY keyboard (so named for the first six letters in the top left row). Three factors conspired to bequeath us this bothersome legacy: early technological imperfection, the social context of teaching, and the power of the status quo. The technological imperfection was the propensity for the keys on early typewriters to coalesce when brought to the striking point too quickly. Excessive proficiency on the typewriter was punished, and thus was born the ‘irrational’ placement of keys to constrain the overly proficient human within the limits of the early technology. With this foothold, the QWERTY layout was reinforced by a celebrated and much publicized victory in a typing competition, won by one Cincinnati typing teacher’s eight-finger approach on a Remington QWERTY keyboard over a hunt-and-peck technique on a competitor’s more rational layout. The typing schools that emerged in the late 1800s and their
teachers quickly adopted this apparently superior and now well-marketed QWERTY layout, thus perpetuating the peculiarity long after the technology overcame the early mechanical malfunctions. Once entrenched in the fabric of teaching and in the minds of typists, the QWERTY approach became the accepted wisdom, the dominant way, and accrued all the power of the status quo. As Gould (1991, p. 69) notes, ‘incumbency also reinforces the stability of a pathway once the little quirks of early flexibility push a sequence into a firm channel.’ Thus has QWERTY fended off the challenges of rationally superior competitors with apparent ease. What are we to take from this tale of tainted innovation diffusion? Perhaps it is ‘that which is sensible is not always rational.’ When the best-laid plans of the scientist meet the realities of technological constraint, social processes, and, to borrow from Herbert Simon, humans’ naturally ‘bounded rationality,’ there is no linear progression from research to practice. In the words of clinical epidemiology, it is when potential efficacy meets actual effectiveness or, as Lambert Farand and Jose Arocha note (see chapter 7 in this volume), it is what happens when we understand ‘the cognitive processes that characterize medical decision-making in naturalistic settings’ (emphasis added). This collection of thoughtful disciplinary messages on ‘knowledge transfer’ shouts this message loud and clear. Almost every author marks the need to view evidence-based decision-making not as a logical or linear extension of science, but as a social process in which the evidence sits alongside or is secondary to personal predilection, professional power, and organizational politics as predictors of outcome. For instance, Louise Lemieux-Charles and Jan Barnsley note Rogers’s observation that ‘diffusion investigations show that most individuals do not evaluate an innovation on the basis of scientific studies.’ Thus, the task for scholars of evidence-based decision-making is to objectively explore and better understand the richly endowed social processes that surround or sublimate the role of research. The challenge is that this task all too quickly becomes entwined with our desire to ‘make a difference,’ to leap to the intervention studies that will find the ‘evidence pill’ that is the antidote to irrational decision-making. The roles of scholar and advocate become easily confused when the outcome we are seeking is an increase in the use of what we produce: research. Scholars of knowledge utilization must exercise caution if they are not to become the marketing department for the research enterprise!


This dilemma is presumably what lies behind the warning by Carole Estabrooks and her colleagues that we need better theoretical grounding for our intervention studies before we plow ahead. It must also inform Brian Haynes’s important distinction between the descriptive ‘what is’ and the normative ‘what we think ought to be’ the role of research in decision-making. This collection is remarkable in its faithful adherence to the descriptive task, rarely straying from the path of a basic science grounding for the applied activity of enhancing evidence-based decision-making. There is no assumption by the authors that the greater use of evidence from research will necessarily produce better decisions; this is an outcome to be proved, not presumed. The collection is a reminder of the value of disciplinary, ‘basic science’ approaches to understanding social issues. This is all too easy to forget or neglect at a time when there is an ever-increasing demand for relevance, impact, and speedily demonstrated returns on investment for research.

Each discipline or field has something to offer to the project of better understanding the realm of evidence-based decision-making. From program evaluation we learn the importance of a shared epistemology between those producing evidence and those about whom evidence is being produced. From sociology we are reminded that knowledge produced for problem-solving all too readily short-circuits explanation. Political science can demonstrate the value of seeing research as the seeds inside the clouds of ideas that float through the policy process. Organizational behaviour reminds us that institutional decisions are not obtained by merely adding up those made by the individuals within an organization, and cognitive science highlights the need to be realistic about human capacity. The field of knowledge utilization studies – the science behind the evidence-based decision-making and knowledge translation movements – not only profits but also suffers because it comprises such a rich array of disciplinary perspectives. Different terms are used for the same concepts and vice versa. Different definitions of ‘evidence’ carry varying degrees of legitimacy across the epistemological spectrum. As much time is spent in clarifying terms as in finding answers. While it is clear that only interdisciplinary approaches can do justice to the complexity of knowledge utilization studies, it is also clear that interdisciplinary communication still presents a formidable barrier to achieving that aim. It is here that HEALNet’s real victory is to be found. HEALNet built a house where all the different disciplinary perspectives could find not
only respect, but also mutual sustenance. It is possible to be both a basic and an applied researcher: as Jean-Louis Denis and his co-authors point out in chapter 1, to be operating in both the Mode I and the Mode II forms of organizing research. The scholars who contributed to this collection return to their basic disciplinary roots, but all the wiser for their network-inspired collaborations on tool development, intervention assessments, and frameworks applied to health services management and delivery. The good news is that they found a way to make it happen; the bad news is that they did it largely despite the (rigidly disciplinary) structure of universities, rather than because of it.

Some Themes

There are at least three interrelated themes that emerge from this volume.

No Man or Woman Is an Island

In this collection the evolution of evidence-based decision-making is documented, from naïve childhood to youthful awareness. The early years were dominated by evidence-based medicine and were marked by an exuberant search for magic bullets and a narrow interpretation of what constitutes ‘evidence.’ Efforts were focused first on disseminating practice guidelines and then on more active interventions such as reminders, audit and feedback, or recruitment of opinion leaders to the cause. Influenced more by medicine’s clinical epidemiology approach to research and less by social scientists’ theory-driven research, the keys to moving forward were seen to be access to validated research and methods to change individual clinical behaviour. Shortcomings, however, became clearer as the imperative to improve the use of evidence in decision-making migrated out from medicine, first to other clinical realms and then to the management and policy worlds. Even earlier, it was becoming clear that no man or woman is an island, and approaches that ignored the clinical, social, organizational, financial, regulatory, and other contexts of the individual were inadequate and incomplete. Walshe and Rundall (2001) and Klein (2000) have forcefully made this point for the management and policy worlds, respectively. The authors of this collection also resoundingly reject the focus on the isolated individual. Their work on evidence-based decision-making in nursing, education, management, and policy uncovers the need to
change from conceptualizing the task as finding magic bullets for individual behaviour change, to finding theories and frameworks that embed individuals in a changing organizational and social culture. As Estabrooks and her colleagues state: ‘A shift from modifying individuals to modifying organizational environments may advance the research utilization agenda.’

Different Strokes for Different Folks

This volume of essays makes it very clear that context matters and contexts vary. The working world of the clinician is far different from that of the manager, which in turn differs from that of the policy-maker. Even among clinicians, their worlds vary according to professional norms, physical locations, peer group relations, and so on. Once the focus changes from the isolated individual to the individual in social and other contexts, it becomes important for those studying or trying to promulgate evidence-based decision-making to understand these contexts. McCormack et al. (2002) have made this point in nursing, but, as Grant and his colleagues point out in chapter 8, even in an apparently ‘technical area’ such as building decision support systems ‘a decision cannot be seen as a single isolated event, but rather as an act that takes place in a continuum of actions, by an individual, in a given organizational context.’ This recognition underscores the need to tailor our approaches to encouraging evidence-based decision-making, not so much to fit the characteristics of the individual or the innovation, but rather to fit the context in which the individual works or the innovation is being implemented. For instance, Ross Baker and his co-authors warn us in chapter 4 about transferring the tenets of a research world to that of the manager, and they ask us to ‘reach beyond simplistic notions of rationality that assume that analytical information is politically neutral and universally superior.’ The contexts in which managers and policy-makers work often imbue ideas that are apparently innocent to the naïve researcher with significant political symbolism. This can be understood (and/or overcome) only by seeing the difference between a sensible and a rational decision: that difference is ‘context.’

Water Coolers as Change Agents

In chapter 3 John Lavis refers to research use as a 'strategic dialogue' and in chapter 2 Harley Dickinson describes it as 'deliberative discourse.' Both highlight the need to view evidence-based decision-making as a social process rather than a technical endeavour. The dialogues and discourses that lead to research-based change are more likely to occur around the water cooler than in the boardroom or on the computer. As the Canadian Health Services Research Foundation has noted, evidence-based decision-making is a 'contact sport.' This not only has implications for the way social scientists approach the study and dissemination of evidence-based decision-making, but also for how research funders conceptualize what it is they are funding. It seems only right to include them in any discussion of evidence-based decision-making. For they, like universities, have a responsibility to think about whether their structures and processes help or impede the task of evidence-based decision-making. Jean-Louis Denis and his colleagues, in chapter 1 on the changing views of how knowledge is created in society, point out (as do François Champagne and his co-authors in chapter 6 on program evaluation) that the conduct of research itself has become a dialogue or discourse. If the objective is to get research used, there is no longer an easy division of roles between those doing research and those using it. If these groups, too, convene around the water cooler now and then, there is a significantly increased chance of research's being both more used and more useful. This implies that research funders, or at least those with a strong commitment to getting the research they fund used in the system, need to view such 'linkage and exchange' (Lomas, 2000) as an integral part of what they fund. Research funders need to invest in water coolers!

A Future

Given these themes, what are the prospects for both the scholarship and the practice of evidence-based decision-making? The label, which like other catch-phrases may not survive, is of less importance than the principles and trends it represents. On the scholarship side, the trend is clearly towards interdisciplinarity, and the principles are very much the themes outlined in this collection and highlighted in the previous section. We have moved away from the naïve individual rational actor view of evidence-based decision-making. The academic community is embracing the idea that individuals operate within a social milieu, that the characteristics of these milieux vary and must be better understood if we are to see how and when research evidence has a role, and that both research and decision-making are processes. What we still lack is knowledge of how and when best to intertwine these processes for evidence-based decision-making.

While we await the fruits of this scholarship, what principles might we adopt to guide the practice of evidence-based decision-making (or whatever label comes to supplant it)? The first, and perhaps central, principle relates to the outcome we are trying to achieve. The evidence-based medicine movement tended to focus on patient outcomes. It has become clear in the management and policy realms that this is too far removed and too subject to competing influences to be of much use. A more realistic and achievable outcome, which also respects the reality of the organizational and social contexts of decision-making, is to increase the sense of responsibility and accountability to valid research evidence in decision-making – what some have called cultural change. This change is as much an organizational outcome as it is an individual change in behaviour, and it can be measured by the extent to which new processes and structures appear in the decision-making and research worlds. How many health care organizations invest in their own R&D capacity? As health care is delivered increasingly by teams, do teams include or have access to evidence-based decision-making capacity? Are clinical epidemiologists replicated in other forms of practice, such as researcher-managers or researcher/policy-makers? Do researchers and research evidence have an explicit place in the decision processes of health care organizations or in their training activities? Do research funders reflect decision-makers' priorities in at least part of what they do, and do they fund the linkages and exchanges between the researchers and the decision-makers? Are there increasingly fuzzy boundaries at the edges of applied research and evidence-based decision-making?

Within the knowledge transfer component of evidence-based decision-making, the principle of synthesis is of growing importance. The Cochrane Collaboration, in its adherence to summarizing through meta-analysis and overview, has certainly reinforced synthesized knowledge, not individual research studies, as the 'unit of transfer.' However, there is still much to be done. The Cochrane focus on randomized controlled trials for clinical interventions is far from adequate to serve the evidence needs of the managers and policy-makers in the system. Their questions are less defined, and the research methods that address them are appropriately more varied and, therefore, harder to summarize quantitatively. While the odds ratio as a summary statistic has the
appeal of singularity, it lacks intuitive meaning and transparency for most decision-makers. New approaches to the methodology and language of synthesis are needed. This is also a task for the scholars of evidence-based decision-making. Increasingly, the format and presentation of synthesis will become less dependent on researchers alone and will rely more on a partnership with intermediaries who have communication and dissemination skills. Indeed, a third principle is the greater use of intermediaries. Terms such as 'boundary spanners,' 'research liaison officers,' 'translation agents,' and 'knowledge brokers' are used to capture the need for more explicit attention to the linkage between evidence production and evidence use. It is not reasonable to assume that all decision-makers and decision-making organizations can become familiar with the research world, or vice versa in the case of researchers and the decision-making world. Organizations and individuals performing a knowledge-brokering function can bridge the two cultures. Nevertheless, a fourth and final principle is to increase the extent to which there is more general knowledge of each other's cultures. Exposing researchers to the decision world and decision-makers to the research world, during training and beyond, will demystify the two sides and give recognition to the relative expertise that each brings to evidence-based decision-making.

These principles – a focus on cultural change in organizations, ever more appropriate synthesis of research, greater use of intermediary structures and roles, and enhanced opportunities for 'cross-learning' between researchers and decision-makers – significantly alter the future trajectory of evidence-based decision-making. It is no longer a push from the research community for decision-makers to use their products and to 'stop being irrational.' It is a joint endeavour designed to make the production of research more sensitive to the needs of decision-making, its communication more congruent with and integrated into decision-making, and its use both advocated and stimulated by decision-making organizations. To reiterate the message from this collection: evidence-based decision-making is a social process, not a technical task.

REFERENCES

Gould, S.J. (1991). History in evolution. Sect. 4, chap. 1, in Bully for brontosaurus: Reflections in natural history (pp. 59–75). New York: W.W. Norton.
Klein, R. (2000). From evidence-based medicine to evidence-based policy? Journal of Health Services Research and Policy, 5(2), 65–66.
Lomas, J. (2000). Using 'linkage and exchange' to move research into policy at a Canadian Foundation. Health Affairs, 19(3), 236–240.
McCormack, B., Kitson, A., Harvey, G., Rycroft-Malone, J., Titchen, A., & Seers, K. (2002). Getting evidence into practice: The meaning of 'context.' Journal of Advanced Nursing, 38(1), 94–104.
Walshe, K., & Rundall, T.G. (2001). Evidence-based management: From theory to practice in health care. Milbank Quarterly, 79(3), 429–457.


Contributors

Jose Arocha is an assistant professor in the Department of Health Studies and Gerontology at the University of Waterloo. Dr Arocha completed his PhD in educational psychology at McGill University and worked as a research associate at McGill's Centre for Medical Education. He currently teaches health informatics and conducts research on the use of clinical practice guidelines, cognitive models of health and disease, comprehension of health information, and knowledge representation.

G. Ross Baker is a professor in the Department of Health Policy, Management and Evaluation at the University of Toronto. His research is focused on patient safety and the development and use of performance measurement and balanced scorecards in health care. Dr Baker was the principal investigator for Hospital Report '98 and Hospital Report '99, performance reports on Ontario hospitals. He currently chairs the Association of University Programs in Health Administration based in Washington, D.C.

Jan Barnsley is an associate professor in the Department of Health Policy, Management and Evaluation at the University of Toronto. Her main area of research is primary care, with a focus on the development and application of organizational and clinical performance indicators and on the facilitation of collaboration and communication between family physicians and other health professionals.

Nicole Bolduc is completing her PhD at the University of Montreal. She is also an adjunct professor of nursing sciences at the University of Sherbrooke. Ms Bolduc's research and publications focus on individual and family cardiac rehabilitation, nursing interventions, and the continuity and quality of care.

François Champagne is a professor of health care management, health policy, and health care evaluation in the Department of Health Administration, a researcher in the Interdisciplinary Research Group in Health (GRIS), and a collaborator in the Unité de santé internationale in the Faculty of Medicine, all at the University of Montreal. From 1986 to 2000 he was the director of the University of Montreal's PhD program in Health Care Organization (Public Health), and from 1993 to 1997 he was the director of the GRIS. Dr Champagne was one of the co-leaders of HEALNet, a Canadian network of Centres of Excellence dedicated to research on optimizing the use of research funding to improve decisions in the health system. He has published many books on epidemiology in health services management, research methods, evaluation, quality assurance, and health care organization performance. Dr Champagne's current research is focused on strategic management, inter-organizational networks, integrated delivery systems, organizational performance, and the use of evidence in management.

André-Pierre Contandriopoulos is a professor in the Department of Health Administration and a researcher in the Interdisciplinary Research Group in Health (GRIS), both at the University of Montreal. His primary areas of research are health care systems, health interventions, health system organization, and pharmaco-economics. In 2001 the Canadian Health Services Research Foundation honoured Dr Contandriopoulos with a Health Services Research Advancement Award.

Jean-Louis Denis is a professor in the Department of Health Administration and a researcher in the Interdisciplinary Research Group in Health (GRIS), both at the University of Montreal. He holds the CHSRF/CIHR Chair on Transformation and Governance of Health Organizations and is a member of the Royal Society of Canada. Dr Denis has published widely on the management of strategic change in health care organizations. His research is focused on the merger of teaching hospitals, the reform of primary care, and the use of scientific evidence in policy and managerial decisions.

Harley D. Dickinson is a professor in and head of the Department of Sociology at the University of Saskatchewan and co-principal investigator for the Centre for Knowledge Transfer (CKT). He has published articles and book chapters on evidence-based decision-making, health policy, and health care reform in the International Journal of Medical Informatics, Social Science and Medicine, International Journal of Contemporary Sociology, Journal of Comparative Family Studies, and the Canadian Medical Association Journal, among others. Dr Dickinson is also coauthor of Health, Illness and Health Care in Canada (3d ed., 2002).

Carole A. Estabrooks is an associate professor in the Faculty of Nursing at the University of Alberta. Her research is focused on knowledge utilization and policy implementation in nursing and the social sciences. Dr Estabrooks holds several grants for examining the use of research by health professionals and is particularly interested in research implementation in complex health organizations. A member of several research studies nationally and internationally, she publishes extensively in the knowledge utilization and related fields.

Lambert Farand is an associate professor in the Department of Health Administration and a researcher in the Interdisciplinary Research Group in Health (GRIS), both at the University of Montreal. He holds an MD and a PhD in educational psychology. Dr Farand's research is focused on cognitive science in relation to medical decision-making, health care informatics, pre-hospital care, the organization of emergency health services, the organization of mental health services, and youth suicide.

Liane Ginsburg is an assistant professor in the School of Health Policy and Management at York University in Toronto. She received her PhD in health care organization and management from the University of Toronto. Dr Ginsburg teaches courses on decision-making in health care organizations and the integration of health services. Her research is focused on the utilization of research findings and other data by health care decision-makers.

Andrew Grant is a professor of medical biochemistry and the director of the Collaborative Research for Effective Diagnostics (CRED) research unit, which focuses on diagnostic technology translation, at the University of Sherbrooke. He is active in health informatics as well as knowledge transfer and education.

R. Brian Haynes is the chair of the Department of Clinical Epidemiology and Biostatistics and chief of the Health Informatics Research Unit at McMaster University, and on the medical staff of Hamilton Health Sciences. Dr Haynes received his MD from the University of Alberta and a PhD in clinical epidemiology from McMaster University. He completed residency training in internal medicine at the Toronto General Hospital and St Thomas's Hospital in London, England.

Andre Kushniruk is an associate professor in and the director of the School of Health Information Science at the University of Victoria. He is an expert in the areas of health informatics, system evaluation, and human-computer interaction. Dr Kushniruk has written numerous articles on topics related to improving the usability of health care information systems and has worked on a number of influential health informatics projects in Canada and internationally.

Ann Langley is a professor of strategic management and research methods at HEC Montréal and director of its MSc and PhD programs. From 1985 to 2000 she was a faculty member at Université du Québec à Montréal. Dr Langley's recent research deals with strategic decision-making, innovation, and leadership and strategic change in the health care sector. She is currently working on a major project in two teaching hospitals in which the management of the implementation of hospital mergers is examined.

John N. Lavis holds the Canada Research Chair in Knowledge Transfer and Uptake at McMaster University, and he directs the Program in Policy Decision-Making, a research program affiliated with the Centre for Health Economics and Policy Analysis. Dr Lavis holds an MD from Queen's University, an MSc from the London School of Economics, and a PhD from Harvard University. His principal research interests include knowledge transfer and uptake in public policy-making environments and the politics of health care systems.

Pascale Lehoux is an associate professor in the Department of Health Administration and a researcher in the Interdisciplinary Research Group in Health (GRIS), both at the University of Montreal. She is also a consultant researcher for the Quebec Health Services and Technology Agency (AETMIS), and the coordinator of an international Master's program in Health Technology Assessment and Management. Dr Lehoux has published over thirty papers examining the use of computerized medical records, telemedicine, scientific knowledge, and home care equipment.

Louise Lemieux-Charles is an associate professor and the chair of the Graduate Program, Department of Health Policy, Management and Evaluation in the Faculty of Medicine at the University of Toronto. She is also the director of the Hospital Management Research Unit and an adjunct scientist with the Institute for Work and Health. Dr Lemieux-Charles focuses on performance management, including organization and team effectiveness, organizational learning, knowledge transfer, quality of work life, and health system organization. In addition to holding a number of research grants devoted to issues of evidence and decision-making in health care, Dr Lemieux-Charles was one of the co-leaders of HEALNet, a Canadian network of Centres of Excellence dedicated to research on optimizing the use of research funding to improve decisions in the health system.

Jonathan Lomas is the chief executive officer of the Canadian Health Services Research Foundation. From 1982 to 1997 he was a professor of Health Policy Analysis at McMaster University, where he co-founded the Centre for Health Economics and Policy Analysis. In addition to authoring two books and numerous articles and chapters in the area of health policy and health services research, he is an associate of the Population Health Programme of the Canadian Institute for Advanced Research, a member of the Institute Advisory Board for the Institute of Health Services and Policy Research, a member of the board of directors of AcademyHealth, and sits on the advisory boards of several Canadian and international journals and organizations.

Wendy McGuire is a PhD candidate in the Department of Public Health Sciences at the University of Toronto. Her doctoral research is focused on the role of civil society organizations in the transfer of health knowledge within and between developed and developing nations. Ms McGuire has also worked for several years in the area of knowledge transfer as project coordinator of provincial and national studies of the use of evidence in the delivery of coordinated stroke care and the measurement of hospital organizational performance.

Andriy Moshyk is a Master's candidate in the Department of Clinical Sciences at the University of Sherbrooke. He is working under the supervision of Dr Andrew Grant on the cost-effectiveness of screening strategies for prostate cancer. His current research interests are in decision analysis, clinical research methodologies, health informatics, and clinical statistics.

Shannon Scott-Findlay is a doctoral candidate in the Faculty of Nursing at the University of Alberta. Prior to beginning her doctoral studies she was on faculty at the University of Manitoba and a nurse clinician at the Winnipeg Children's Hospital. In her dissertation research she investigates how nursing unit culture shapes the use of research by paediatric health care professionals.

Affaud Anaïs Tanon is a doctoral candidate in public health in the Department of Health Administration and a researcher in the Interdisciplinary Research Group in Health (GRIS), both at the University of Montreal. Her research is focused on the analysis and organization of health systems and services, the evaluation of health services, and health care interventions.

Alain O. Villeneuve is an associate professor of management information systems at the University of Sherbrooke. He has published on decision support systems and information and management, in addition to presenting papers at several prestigious international conferences. His current research involves cognition, expertise, knowledge management and transfer, research methodologies, organizational information requirements, and users' information needs.

Connie Winther is the library and information science coordinator for the Knowledge Utilization Studies Program (KUSP) in the Faculty of Nursing at the University of Alberta. She has a background in rehabilitation medicine and works closely with Dr Carole Estabrooks in various aspects of KUSP.


Index

absorptive capacity, 129 abstraction, 200 academic freedom, 27, 50, 58 academic neo-positivist model of evaluation, 160 –1, 165 – 6 action, 14, 142–54 active knowledge systems, 209 actors, 143, 144 –5, 151. See also stakeholder participation adaptation, 218 adopters: early, 125; individuals as, 124 –7; organizations as, 127–30 adoption, 6, 116–31. See also diffusion, theories of advice: case-specific, 209; mode of, 211 agricultural extension model, 243 allied health sciences, 11–12, 242– 68 analysis, 90, 200: as alternate mode of evaluation, 89 –91; bivariate statistical, 259; cognitive task, 215; decision, 205 – 6; hybrid, 259; models of decision, 208; motives of, 92; multivariate, 259; paralysis by, .

93; problem, 218; qualitative, 259; quantitative, 259; as symbiotic with politics, 91–3; unit of, 265; univariate, 259 anomie, 91 applied knowledge: and knowledgebased society, 18; profitable, 21 applied research: and basic research, 230 –2; and Mode I of knowledge production, 31 argument, 200 artificial intelligence, 177 artistic aspects of medicine, 172–3 assessment of knowledge, 7 assessment research in decisionmaking processes, 6 Autocontrol approach, 215 Autocontrol team decision support model, 217–19 .

backward reasoning, 207, 213. See also hypothesis-driven reasoning; hypothetico–deductive reasoning bargaining, 90

298—Index BARRIERS studies, 258 –9 basic research, and applied research, 230 –2 Bayes’ theorem, 183 – 4, 186 Bayesian model of decision support, 200, 205 – 6 behavioural decision-making, 98 –9, 101, 106 belief network, 208 bivariate statistical analysis, 259 bounded rationality, 88 breadth-first search strategy, 213 Brett’s Nursing Practice Questionnaire, 261 brokering processes, 14 .

Campbell Collaboration, 157 Canadian Health Services Research Foundation (CHSRF), 35, 243 Canadian Institutes of Health Research (CIHR), 35, 243, 245 case-specific advice, 209 causal reasoning, 176, 178 –9 causation, 233 chagrin factor, 184, 186 CHSRF. See Canadian Health Services Research Foundation CIHR. See Canadian Institutes of Health Research classic model for strategic analysis in organizational theory, 26 classic positivist model of evaluation, 165 – 6 clinical decision-making, 86, 202, 234 – 5 clinical decision support, 203–4, 208–11, 214–19 clinical epidemiology, 242, 282 Clinical Event Monitor, 210 –11 clinical judgment, 93 –5 .

clinical management of illness, 47 clinical practice guidelines, 242 clinical practice patterns, 42 clinical research, 87, 252–3 clinical sciences, and reflective practitioner, 23 coalition-centred model of politics, 73, 74 –5 Cochrane Collaboration, 59, 157, 219, 234, 243, 287 cognition, 200 cognitive characteristics of medical reasoning, 174 – 83 cognitive issues, in decision support systems, 211–14 cognitive processes, 185, 214 cognitive science: application to evidence-based medicine, 3; applying to reasoning processes, 11; applying to study of humancomputerized decision-support interactions, 11; perspective on evidence-based decision-making, 172–92 cognitive task analysis, 215 collaborative research and evaluation, 53 – 4, 149 – 50. See also Campbell Collaboration; Cochrane Collaboration collaborative research utilization (CRU) model, 251 commitment, 218 communication motives for analysis, 92 compatibility of innovations, 119 compilation of knowledge, 179 – 80, 200, 218 complexity of innovations, 120–2 computer-assisted decision-making, 11, 97– 8, 206 –7. See also informatics .

Index—299 computer/human interaction, 211, 212 conceptual model of knowledge utilization. See enlightenment model of knowledge utilization conceptual sub-dimension of goal attainment, 155 conceptual use of evaluation, 154 conceptual utilization, 14, 140, 156 conditional reasoning, 176 Conduct and Utilization of Research in Nursing (CURN), 242, 243, 244, 245, 251 consensus approach, 97 construction, 218 constructive conflict, 97 constructivism, 155 constructivist model of evaluation, 157–9, 166 –7 constructivist paradigm, 149, 150 –1, 154, 156 consultation, style of, 211 context: of knowledge, 18 –38; of medical decision-making, 186 –7; and Mode II of knowledge production, 33; systematic and organizational, 14 continuing medical education (CME). See training control: and direction motives for analysis, 92; of technical component of evaluation, 147, 153, 155, 156 correlational designs, 258 cost, of innovations, 119 critical realist, 150 critique, 218 CRU model. See collaborative research utilization model cultural literacy, 24 .

CURN. See Conduct and Utilization of Research in Nursing data-driven reasoning, 175 – 6, 207 data-info-knowledge, 200 debate as strategy model of politics. See strategy-based model of politics decision, 200 decision analysis, 205 – 6, 208 decision event, 201 decision-makers, 14: in Environmental Adaptation function, 155; problems encountered by, 214; program evaluation perspective on, 139 – 67; relationship with, 147, 148, 156; skills of, 214 decision-making, 201– 4, 211: activities in, 212–13; alternative, 12; assessment of research in, 6; behavioural, 98–9, 106; characteristics of human, 214; clinical, 86, 202, 234, 235; computer assisted, 11, 97– 8, 206 –7; control over technical, 147, 153, 155, 156; descriptive studies of, 173 – 83; effective, 96, 214; environment of, 143; evidencebased, 14, 87, 95, 172–92, 202–3, 282, 285 – 8; improving, 95 –101; informatics perspective on, 199– 220; managerial, 10, 87; natural contexts of medical, 186–7; normative studies of, 173 – 4, 183 –7; by novices, 86, 202; organizational, 86 –107; pattern-matching approach to, 202; and politics, 98; prescriptive models in, 173 – 4; by professionals, 201–2; real-world, 200, 203; recognition-primed, 102– 4, 201, 202; study of, 183 –5; suboptimal medical, 187–9; team, .

.

.

300—Index 104 –5, 215; types of knowledge applied in, 203 decision-making myths, 93 –5 decision-making research, 101 decision-making steps, 214 decision modelling, 200 decision models, normative, 185 – 6 decision support, 189, 200: in health care, 199 –220; and human processes, 216; improving evidencebased medicine through, 191–2; issues in clinical, 214 –19; qualitative models of, 206 – 8; quantitative models of, 205 – 6, 208; team, 216 – 19 decision support interactions, applying cognitive science to human computerized, 11 decision support models, 204 – 8, 217–19 decision support systems: categories of, 209; clinical, 203 – 4, 208 –11; components of, 209; computerassisted, 206 –7; definition of medical, 209; psychological and cognitive issues, 211–14 decision tables, 207 decision trees, 186 –7, 207, 208 decisionistic models, 44, 49 –55, 58, 61 deductive reasoning. See hypothesisdriven reasoning; hypotheticodeductive reasoning deliberative democracy, 27 deliberative discourse, 57–9, 285–6 deliberative model of knowledge utilization, 14, 26 – 8, 29, 30 –1 democratic deficit, 54 –5, 59 dependent variable, 261, 26 6–7 depth-first search strategy, 213 descriptive studies of medical .

.

.

reasoning and decision-making, 173 – 83 determinants-of-health synthesis, 75 – 8, 79, 80, 81 devil’s advocacy, 97 diagnosis, 212, 220, 233 dialectic, 150 dialectical enquiry, 97 dialogic, 150 –1 dialogue-based model of politics, 73, 74, 75, 79 diffusion, 31, 115 –31: theories of, 9, 255, 265 – 6 direction for analysis, 92 disclosure of scientific breakthroughs by private companies, 25 discourse ethics, 56 – 8 dissemination, knowledge utilization perspective on, 18–38 dissemination model, 6 –7, 251 dualist/objectivist, 150

.

.

early adopters, 125 EBDM. See evidence-based decisionmaking EBM. See evidence-based medicine electronic patient records (EPR), 188 –9, 191 empirical research, 53 empowerment model of evaluation, 156, 163 – 4, 166 –7 enlightenment model of knowledge utilization, 23–5, 26, 28, 29, 30, 51, 79 enlightenment process, 154 entrepreneurship in research, 33 environmental adaptation, 148 Environmental Adaptation function, 145, 146, 149, 152, 154, 155 environments, 200: organizational, .

.

.

.

Index—301 285; of organized system of action, 145 epistemology, 14, 148, 149, 150, 152 EPR. See electronic patient records ethical issues, 47, 56 – 8, 60 eugenics movements, 47– 8 evaluation, 24, 200: academic neopositivist model of, 156, 160 –1, 165 – 6; analysis as alternate mode of, 89–91; classic positivist model of, 155 –7, 165 – 6; collaborative approaches to, 149 –50; constructivist model of, 156, 157–9, 166 –7; control of technical component of, 147, 153, 155, 156; empowerment model of, 156, 163 – 4, 166 –7; expert consultant model of, 156, 161–2, 165 – 6; facilitator consultant model of, 156, 162–3, 166 –7; functions of, 155; hybrid models of, 159 – 64; internal evaluator model of, 156, 162, 165 –6; mobilization of resources for, 148; modes of, 90; as organized system of action, 142–55; outcomes of, 165; paradigms in, 150 –1; program, 139–67; pure models of, 155 –9; stakeholder participation in, 147, 149 –50, 152; utilization of, 9–10, 139 –42, 143, 147, 153 – 4, 162 evaluation information, in innovations, 122 evaluation objectives, 148 Evaluation Practice function, 155 evaluation practices, 148 evaluation process, 148 evaluation research, 242 evaluation results utilization, 149 evaluation tactics, 90 –1 .

.

.

evaluation utilization, 9 –10, 139 –42, 143, 147, 153 – 4, 162 evaluation utilization models, 148, 155, 156 evaluative research, 144 evaluator, 147, 155 evidence, 200: critical appraisal of health care research, 233 – 4; and decision-making, 95, 202–3; definitions of, 172–3, 282; grades of, 232; management science perspective on, 86 –107; summarization of, 234; types of, 234; use of, 12–13 evidence-based decision-making (EBDM), 3, 4, 282: barriers to institutionalization of, 58; cognitive science perspective on, 172–92; context of, 87, 285; debates and criticisms, 4 – 5; future of, 286–8; justification of, 139; model of knowledge utilization underlying, 7– 8; political science perspective of, 70–82 evidence-based decision-making literature, 4 – 5 evidence-based decision-making movement, 36 evidence-based health care, 41–2 evidence-based management, 87 evidence-based medicine (EBM), 243: broader approach to, 3 – 4; definition of, 232, 235; foundations of, 42; history of, 3, 227–30; improving through decision support, 191–2; objections to, 234; objectives of, 235; as paradigm shift, 236; philosophical issues of, 235–9; precepts of, 227–30; promoting through training, 188 –91; roots of, 242 .

.

.

302—Index evidence-based medicine perspective on EBM movement, 227–39 evidence-based medicine movement, 227–39 evidence-based multidisciplinary practice model, 251 evidence-based nursing, critiques and clarifications of, 249 evidence-based practice, model for change to, 251 evidence filter, 218 evidence grading, 10 evidence in health care, innovation diffusion perspective on, 115 –31 experience, 175, 203, 214 expert-based model of politics, 73, 74 expert consultant model of evaluation, 156, 161–2, 165 –6 expert monotonicity, 177 expert reasoning, 14, 102–4, 174 – 83 expert systems, 207. See also INTERNIST expertise 174, 175, 192n.1, 214 extent of adoption, 116 .

..

.

.

facilitator consultant model of evaluation, 156, 162–3, 166 –7 feedback, 200, 218 formal knowledge, 203 forward reasoning, 213. See also datadriven reasoning fourth helix, 60 frames, 178 –9 framing effect, 183 functionalist perspective of organizational decision-making, 89 –91 fundamental research, 31, 32 Funk BARRIERS scale, 258 .

.

.

GLIF formalism, 211

goal attainment, 147, 148 Goal Attainment function, 145, 146, 153–4, 155 Goode model, 251 government learning, 74 Greek Oracle model of diagnostic systems, 212 group composition, 99–101 group processes, 14 guidelines, 200, 242 hard core of medical innovations, 123 – 4 HEALNet, 54, 283 – 4 HELP, 210, 211 heuristics, 183, 186–7, 188, 206, 214 historical context, knowledge transfer in, 43 – 4 history: of applying scientific knowledge to social problems, 8; of knowledge utilization, 242–3; of management thought, 87; of research utilization in nursing, 244 – 6 homophily of agents, 126 Horn model, 251 human processes, and decision support, 216 hybrid analysis, 259 hybrid models of evaluation, 159 – 64 hypothesis–driven reasoning, 175 – 6 hypothetico–deductive reasoning, 178 –9, 189 .

.

.

.

.

.

.

.

.

.

.

.

.

ideas, role of in policy change, 71–3, 76 – 81 ideology, 53, 147 IF-THEN rules, 207 illness, clinical management of, 47 implementation of research, 6 .

.

Index—303 individual determinants of research utilization, 256 – 8 individuals as adopters, 124 –7 inductive reasoning. See data-driven reasoning influence diagram approach, 208 informatics, 199 –220, 200 information, 143, 218: compiled, 179 – 80, 200, 218; management science perspective on, 86 –107 information explosion, 58, 59 information motives for analysis, 92 information processing, 6, 96 –7 information search strategies, 213 information-systems technology, 188 –9 innovation adopters, 124 innovation diffusion perspective on knowledge and evidence, 115 –31 Innovation theory, Rogers’ Diffusion of, 255, 265 – 6 innovations, 61, 217: adoption of, 116 –31; compatibility of, 119; complexity of, 120; cost of, 119; definition and attributes, 117–30; evaluation information in, 122; hard core and soft periphery of medical, 123 – 4; observability of, 120 –1; operating information in, 122; relative advantage of, 118 –19; research evidence supporting, 122– 4; risk and uncertainty of, 122; triability of, 120 innovators, 125 instrumental approach to knowledge transfer, 58 instrumental model of knowledge utilization. See problem-solving model of knowledge utilization .

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

instrumental sub-dimension of goal attainment, 155 instrumental use of evaluation, 153, 154 instrumental utilization model, 156 integration, 218 intellectual technologies, 45 interaction model of knowledge transfer, 44 interactive model of knowledge utilization. See deliberative model of knowledge utilization interdisciplinary research, 267– 8 interest-based model of politics, 73, 74 internal evaluator model of evaluation, 156, 162, 165 – 6 INTERNIST system, 207, 210, 212 intuition, 14, 103, 180 intuitive synthesis, 12, 105 – 6 Iowa model of research in practice, 251 .

.

.

.

.

judgment, 90, 143: and managerial and policy decision-making, 12; and organizational decisionmaking, 88 –9, 93 –5 .

.

Kitson model of research utilization, 251 knowledge: applied, 18, 203; based on prior experience, 203; compilation of, 179 – 80, 200, 218; conception of, 28; contextualization of, 31; experiential, 214; formal, 203; ideological nature of, 53; innovation diffusion perspective on, 115 – 31; intuitive, 14, 103, 180; limits to use of in health care, 12–13; management science perspective .

.

.

304—Index on, 86 –107; manipulative application of, 25; medical, 209; practical, 14; practitioner’s role in determination of, 21; proceduralization and compilation of, 179 – 80; profitable application of, 21; proprietary, 58; scientific, 53, 203; situational, 203; sources of research, 52–3; subjective interpretation of, 7; use of, 13 – 16; value of, 21, 23 – 4, 26 knowledge assessments, 7 knowledge-based expert systems, 207 knowledge-based society, 18 –19, 33, 37– 8, 41, 45 – 6, 58 knowledge comparisons, 7 knowledge diffusion, and peer review, 31 knowledge-driven model of knowledge utilization, 20 –1, 28, 29, 30 knowledge explosion, 58, 59 knowledge generation, processes of, 14 knowledge management skills, 22 knowledge producers and users, relationship between, 21–2 knowledge production, 7: and knowledge-based society, 18; Mode I and Mode II of, 31–2; models of, 148; for problem–solving purposes, 53; and use in context, 30 – 4 knowledge production processes, 28 knowledge studies, sources of, 260 –2 knowledge systems, active, 209 knowledge transfer, 7, 43 – 4, 55 – 6, 58, 282 knowledge transfer models, 44–61 knowledge translation, 55–6, 245 knowledge users. See practitioners, role of .

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

knowledge utilization: allied health sciences perspective on, 242– 68; definitional and measurement challenges in, 259– 60; democratization of, 26; determinants of use, 28; and evidence-based decision-making, 7– 8; factors affecting instrumental and non-instrumental, 15; history of, 242–3; measuring, 261; and methodological rigour, 13; problem-solving model of, 78 –9; social science models of, 7–8; use of, 14 knowledge utilization in context, 30 – 4 knowledge utilization in nursing and allied health sciences, 11–12 knowledge utilization in social sciences, 5 –7 knowledge utilization models, 7–12, 19 –30, 36 –7, 44 – 61 knowledge utilization perspective on dissemination and contextualizing knowledge, 18 –38 knowledge utilization research literature, 259 knowledge utilization studies, 283 knowledge utilization timeline, 243 .

.

.

.

.

.

.

.

.

.

.

.

Leach’s Research Utilization Questionnaire, 261 leadership, and group composition, 99 –101 learning, 200 lesson-drawing. See coalition-based model of politics .

management: evidence-based, 87; and organizational literature, 10

Index—305 management science perspective on information, knowledge, evidence, and organizational decisionmaking, 3, 10, 12, 86 –107 managerial research, 87 mandated science, 60 Markov models, 208 medical decision-making 186 –9. See also decision-making medical decision support system, definition of, 209 medical errors, 42 medical expert system. See INTERNIST medical innovations, hard core and soft periphery of 123 – 4. See also innovations medical knowledge, 46 –7, 209 Medical Logic Modules (MLM), 210 –11 medical reasoning, 173 – 87 medical school 229. See also training medical technocracy, 46 medical treatment, and ethical concerns, 47 medicalization of society, 46 medicine, artistic and scientific aspects of, 172–3 MEDLINE, 209 memory structures, 177– 8 mental hygiene, and eugenics movements, 47– 8 meta-analyses, 242 meta-cognition, 218 methodological perspective of paradigm, 149 methodological rigour, and knowledge utilization, 13 methodology, 148, 150 –1 .

.

.

.

.

.

.

.

.

.

.

misevaluation, 165, 167 misutilization, 141 MLM. See Medical Logic Modules Mode I of knowledge production, 31–2 Mode II of knowledge production, 31, 32– 8 Mode II society, 30 multi-attribute utility theory, 208 multidisciplinary practice model, evidence-based, 251 multivariate analysis, 259 MYCIN system, 206–7 .

National Forum on Health (NFH), 243 natural contexts of medical decisionmaking, 186 –7 naturalistic decision-making (NDM), 10, 15, 87, 101–7, 181 naturalistic model of managerial decision-making, 10 naturalistic processes, use of, 14 naturalistic settings, cognitive characteristics of medical reasoning in, 180 –1 NCAST. See Nursing Child Assessment Satellite Training NDM. See naturalistic decisionmaking neo-positivist paradigm, 149, 150–1, 154, 155, 156 network, task-oriented, 33 NFH. See National Forum on Health normative decision models, 185 – 6 normative evaluation, 144 normative studies of medical reasoning and decision-making, 173 – 4, 183 –7 .

.

.

.

.

.

.

306—Index novices, decision-making by, 202 nursing: history of research utilization in, 244 – 6; knowledge utilization in, 11–12, 242–68; and recognitionprimed decision-making, 102–3 Nursing Child Assessment Satellite Training (NCAST), 244, 245, 251 .

.

objectivist/dualist, 150 observability, of innovations, 120 –2 OCRUN project. See Orange County Research Utilization in Nursing project OMRU. See Ottawa Model of Research Use ontological levels, 176 –7 ontological perspective, of paradigm, 148 –9 ontology, 148, 150, 200 operating information, in innovations, 122 Orange County Research Utilization in Nursing (OCRUN) project, 251 organizational context, 14 organizational decision-making, 86 – 107 organizational determinants of research utilization, 256 – 8 organizational environments, 285 organizational learning, 74 organizational literature, 10 organizational structure, 143, 144 organizational theories, application to evidence-based medicine 3 organizations, 127–30, 200 organized system of action, 142–54 Ottawa Model of Research Use (OMRU), 251 outcomes, 14, 80, 150, 165, 233 .

.

.

.

.

.

PATCIS system, 209 –10 paradigm shift, evidence-based medicine as, 236 paradigms, 14, 147, 148 –9, 150, 155 parallelization, 179 paralysis by analysis, 93 pathfinder, 205 pathophysiological reasoning, 176 patient care, 208 –11, 220 patient data, 209 patient preferences, and clinical decision-making, 235 pattern-matching approach to decision-making, 202 peer-review, 31 physical structure of organized system of action, 143, 144 policy analysis, 24 policy change, 71–3, 76 – 81 policy learning, 71 policy-making, 12, 54 –61 policy-oriented learning, 75 political justifications of stakeholder participation, 152 political model of research utilization, 79 political perspective of organizational decision-making, 91–3 political science perspective of evidence-based decision-making, 70 – 82 political theories, application to evidence-based medicine, 3 political use of knowledge, 14 politics: and decision-making, 12, 88 – 9, 95, 98; models of, 74 – 6; as symbiotic with analysis, 91–3 population-health research. See determinants-of-health synthesis .

.

.

.

.

.

.

.

.

.

.

Index—307 positivist paradigm, 149, 150 –1, 154, 155, 156 post-normal science, 60 power, 52, 99 practical justifications of stakeholder participation, 152 practical knowledge, 14 practice: Iowa model of research in, 251; program evaluation perspective on, 139 – 67; and science, 28 practice change, 217, 218 practice guidelines, clinical, 242 practice model, evidence-based multidisciplinary, 251 practitioners, role of, 21, 29 practitioners and scientists, relationship between, 21–2, 25, 26, 28, 30 –1, 38 pragmatistic models, 44, 54 – 61 prescriptive models in decisionmaking, 173 – 4 private companies, 21, 25, 58 problem analysis, 218 problem-solving, knowledge production for, 53 problem-solving model of knowledge utilization, 21–3, 28, 29, 30 –1, 78 –9 proceduralization of knowledge, 179 – 80 processes, 139 – 67, 143, 145, 148 processing, 200 Production function, 145, 146, 153, 154 production processes, 148 professionals, 201–2, 208 profitable application of knowledge, 21 prognosis, 233 .

.

.

.

.

.

.

.

.

.

.

.

.

program evaluation perspective on processes, practices, and decisionmakers, 139 – 67 programmatic research, 267– 8 propriety knowledge, 58 psychiatry, 46 –7 psychological issues in decision support systems, 211–14 public participation, 54 – 61 pull model of knowledge transfer, 44 .

.

.

.

.

.

QMR. See Quick Medical Reference qualitative analysis, 259 qualitative models of decision support, 206 –7 quality, 200 quality assurance, 217 quantitative analysis, 259 quantitative models of decision support, 205 – 6 Quebec Social Research Council, 35 Quick Medical Reference (QMR), 210 .

.

.

R&D. See research and development randomized controlled trial (RCT), 242 rational-actor approach, 262 rational approaches to managerial decision-making, 10, 12, 87 rational approaches to organizational decision-making, 12, 87 rational deliberation, 56 – 8, 60 rationalistic bias in utilization research, 7 rationality: bounded, 88; and decision-making, 10, 12, 87, 88 – 9, 93 –5; technical, 12 RCT. See randomized controlled trial realist, 150 .

.

.

.

.

308—Index reasoning: backward, 207, 213; causal, 178 –9; conditional, 176; datadriven, 175 – 6, 207; expert, 14, 102–4, 182–3; forward, 213; hypothesis-driven, 175 – 6; hypothetico-deductive, 178 –9; pathophysiological, 176 reasoning processes, 11, 102– 4 reasoning strategies, 175 – 6 recognition, 200 recognition-primed decision-making, 102– 4, 201, 202 reflective practitioner, 22–3, 28, 29, 30 Registered Nurses Association of British Columbia (RNABC), 245, 251 relative advantage of innovations, 118 –19 relativist, 150 reports. See evidence research: basic and applied, 230 –2; behavioural decision-making, 101, 106; clinical, 87; collaborative 53 – 4; empirical, 53; entrepreneurship in, 33; evaluation, 242; evaluative, 144; fundamental, 32; health care managerial, 87; instrumental uses of, 45, 71; interdisciplinary and programmatic, 267–8; and Mode I of knowledge production, 31; putting into practice, 6; receptivity and commitment to, 6; relevance of, 53–4; symbolic and strategic use of, 53 research and development (R&D), 21, 32 research design, 254 –5 research evidence, 233 – 4, 235 .

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

research funding agencies, relationships with, 50 research in decision-making processes, assessment and selection of, 6 research in practice, Iowa model of, 251 research knowledge, sources of, 52–3 research literature, 255, 259 research programs, 25 research utilization, 242, 243: history of in nursing, 244 – 6; individual determinants of, 256 – 8; Kitson model of, 251; in nursing and allied health, 248 – 62; organizational determinants of, 256 – 8; predictors of, 266; recommendations for, 265 – 8 research utilization models, 250 –2 research utilization projects, clinical, 252–3 research utilization research studies, 253 researchers. See scientists Resource Mobilization function. See Environmental Adaptation function results transfer, 148, 155, 156 results utilization, 149 reviews, 233 risk, 59 – 60, 122 RNABC. See Registered Nurses Association of British Columbia Rogers’ Diffusion of Innovation theory, 255, 265 – 6 rule-based expert systems, 207 .

.

.

.

.

.

.

.

.

.

.

.

.

.

.

savant, 27–8 schemata, 177–8 science: and knowledge-based society, 37–8; mandated, 60; and politics,

Index—309 44, 45; post-normal, 60; and practice, 28; and public participation in policy-making, 54 – 61 scientific aspects of medicine, 172–3 scientific breakthroughs, disclosure of by private companies, 25 scientific community, sovereignty of 27. See also academic freedom scientific culture, 20 scientific evidence, 14 scientific knowledge, 8, 20 –1, 53, 203 scientism, 49 scientists, role of, 29 scientists and practitioners, relationship between, 21–2, 25, 26, 28, 30 –1, 38 scripts, 177– 8 search strategies, information, 213 shared mental models, 104 –5 situational knowledge, 203 social control, and medical knowledge, 46 –7 social engineering, 47, 51 social learning, 75 social planning approach to contemporary problems, 25 social problems, history of applying scientific knowledge to, 8 social problems approach in sociology, 51–2 social processes, 7 social science: knowledge utilization in, 5–7, 8, 41– 61; and solving social problems, 52 Social Science and Humanities Research Council of Canada (SSHRC), 35 social scientific knowledge for policy making, 41– 61 .

.

.

.

.

.

.

.

.

social system action theory, 145 social values and technical rationality, 12 sociological perspective on transfer and utilization of social scientific knowledge for policy making, 41–61 sociological theories, 3, 8 sociology, 51–2 soft periphery of medical innovations, 123 – 4 solution, 218 sources of research knowledge, 52–3 SSHRC. See Social Science and Humanities Research Council of Canada stakeholder approach to knowledge utilization, 25 stakeholder participation, 14, 143 –5, 147–56 standards, 200 state-centred model of politics, 73, 74 state-transition diagram, 208 statistical analysis, univariate and bivariate, 259 statistical techniques, 254 – 5 Stepladder Technique, 97 Stetler/Marram model, 251 Stetler model, 251 strategic dialogue, 285 – 6 strategic model of knowledge utilization, 25 – 6, 28, 29, 30 –1 strategic sub-dimension of goal attainment, 155 strategic use of evaluation, 153 – 4 strategic use of research, 53 strategic utilization model, 156 strategies, 214 .

.

.

.

.

.

.

.

.

.

.

.

310—Index strategy-based model of politics, 73, 74, 75 – 6, 77– 8, 79 structured team discussions, 218 subjectivist, 150 supplier push model of knowledge transfer, 44 symbolic model of research utilization, 79 symbolic motives for analysis, 92 symbolic perspective of organizational decision-making, 93–5 symbolic structure of organized system of action, 143, 144 symbolic use of research, 53, 79 synthesis: determinants-of-health, 75 – 8, 79, 80, 81; intuitive and naturalistic decision-making, 12, 105 – 6 system-model approach, 218–19 system of action, 145 –7, 155 systematic and organizational context, 14 .

.

.

.

.

.

.

tactical model of research utilization, 79 tactical use of knowledge, 14 task-oriented network, 33 team decision-making, 104 –5, 200, 215 team decision support, 216 –19 technical rationality and social values, 12 technocratic models, 44, 45 –9, 54, 58, 61 technology, information-systems, 188 –9 teleological perspective of paradigm, 149 teleology, 148, 151 temporal dynamics, 182 theoretical framing, 265 – 6 .

.

theory-practice discourse, 243 – 4 therapy, 233 top-down initiatives, 25 training, 188 –91 transfer: in historical context, 43 – 4; knowledge, 44, 282; results, 148, 155, 156; of social scientific knowledge for policy making, 41– 61; unit of, 287 transition-probability matrix, 208 translation, 14, 245 triability, 120 –2 .

.

.

.

.

.

uncertainty, 122, 200 understanding, 218 unit of transfer, 287 univariate statistical analysis, 259 user demand model of knowledge transfer, 44 utilization: conceptual, 140; evaluation, 139 – 43, 147–9, 154 –6, 162; knowledge, 242–68; research, 242– 6, 250–2, 256 – 8, 265 – 8; of social scientific knowledge for policy making, 41– 61 utilization models, 154 – 64 utilization projects, clinical research, 252–3 utilization research, rationalistic bias in, 7 utilization studies, knowledge, 283 .

.

.

.

.

.

.

.

.

.

.

.

.

.

value choices, 49, 54 value conflicts, 56 – 8, 100 value freedom, 50 value paradigms. See paradigms values: of decision-makers, 14; and paradigms in evaluation, 150; and power, 52 values maintenance, 148 .

.

Index—311 Values Maintenance function, 145–9, 154, 155 welfare state, 54

Western Interstate Commission for Higher Education in Nursing (WICHEN), 244, 245, 251 wisdom, 14