Biomedical Informatics: Computer Applications in Health Care and Biomedicine
Edward H. Shortliffe and James J. Cimino, Editors
Michael F. Chiang, Co-Editor
Fifth Edition
Editors
Edward H. Shortliffe, Biomedical Informatics, Columbia University, New York, NY, USA
James J. Cimino, Informatics Institute, University of Alabama at Birmingham, Birmingham, AL, USA

Co-Editor
Michael F. Chiang, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
ISBN 978-3-030-58720-8    ISBN 978-3-030-58721-5 (eBook)
https://doi.org/10.1007/978-3-030-58721-5

© Springer Nature Switzerland AG 2013, 2021

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
This volume is dedicated to AMIA, the principal professional association for the editors. Born as the American Medical Informatics Association in 1990, AMIA is now preferentially known simply by its acronym and has grown to include some 5500 members who are dedicated to all aspects of biomedical informatics. AMIA and this textbook have evolved in parallel for four decades, and we thank the organization and its members for all they have done for the field and for health care and biomedicine. May both AMIA and this volume evolve and prosper in parallel for years to come.
Foreword

Health and biomedicine are in the midst of revolutionary change. Health care, mental health, and public health are converging as discovery science reveals these traditional “silos” share biologic pathways and collaborative management demonstrates better outcomes. Health care reimbursement is increasingly framed in terms of paying for outcomes achieved through value-based purchasing and population health management. Individuals are more engaged in their health and wellness decisions, using personal biomedical monitoring devices and testing services and engaging in citizen science. Systems biology is revealing the complex interactions among a person’s genome, microbiome, immune system, neurologic system, social factors, and environment. Novel biomarkers and therapeutics exploit these interactions. These advances are fueled by digitization and generation of data at an unprecedented scale. The volume of health care data has multiplied 8 times since 2013 and is projected to grow at a compound annual rate of 36% between 2018 and 2025 [1]. The rate of growth of biomedical research data is comparable [2]. When you consider recent estimates that socioeconomics, health behaviors, and environment—factors outside of the domain of health care and biomedicine—contribute as much as 80% to health outcomes [3], the variety and scale of health-related data are breathtaking. Biomedical informatics provides the scientific basis for making sense of these data—methods and tools to structure, mine, visualize, and reason with data and information. Biomedical informatics also provides the scientific basis for incorporating data and information into effective workflows—techniques to link people, process, and technology into systems; methods to evaluate systems and technology components; and methods to facilitate system-level change.
Biomedical informatics grew out of efforts to understand biomedical reasoning [4], such as artificial intelligence; to develop medical systems, such as multiphasic screening [5]; and to write computer programs to solve clinical problems, such as diagnosis and treatment of acid-base disorders [6]. By the late 1970s, “medical informatics” was used interchangeably with “computer applications in medical care”. As computer programs were written for various allied health disciplines, nursing informatics, dental informatics, and public health informatics emerged. The 1980s saw the emergence of computational biology for applications such as scientific visualization and bioinformatics to support tasks such as DNA sequence analysis. Biomedical Informatics: Computer Applications in Health Care and Biomedicine provided the first comprehensive guide to the field with its first edition in 1990. That edition and the subsequent three have served as the core syllabus for introductory courses in informatics and as a reference source for those seeking advanced training or working in the field. The fifth edition carries on the tradition with new topics, a comprehensive glossary, reading lists, and citations.
I encourage people who are considering formal education in biomedical informatics to use this book to sample the field. The book’s framework provides a guide for educators from junior high to graduate school as they design introductory courses in biomedical informatics. It is the basic text for students entering the field. With digitization and data driving change across the health and biomedicine ecosystem, everyone in the ecosystem will benefit from reading Biomedical Informatics and using it as a handbook to guide their work. The following is a sample of questions readers can turn to the book to explore:
- Practicing health professionals—How do I recognize an information need? How do I quickly scan and filter information to answer a question? How do I sense the fitness of the information to answer my question? How do I configure my electronic health record to focus my attention and save time? How do I recognize when to override decision support? How do I analyze data from my practice to identify learning and improvement opportunities? How do I engage with patients outside of face-to-face encounters?
- Quality improvement teams—How might we detect if the outcome we are trying to improve is changing in the desired direction? Are data available in our operational systems that are fit for that purpose? What combination of pattern detection algorithm, workflow process, decision support, and training might work together to change the outcome? How can we adapt operational processes and systems to test the change and to scale if it proves effective?
- Discovery science teams—How do data about biological systems differ from data about physical systems? How do we decide when to use integrative analytic approaches and when to use reductionist approaches? How much context do we need to keep about data we create and how do we structure the metadata? How do we optimize compute and storage platforms?
How might we leverage electronic health record-derived phenotypes to generate hypotheses?
- Artificial intelligence researchers or health “app” developers—What health outcome am I trying to change? Do I need a detection, prediction, or classification algorithm? What sources of data might be fit for that purpose? What type of intervention might change the outcome? Who would be the best target for the intervention? What is the best place in their workflow to incorporate the intervention?
- Health system leaders—How do we restructure team roles and electronic health record workflows to reduce clinician burnout and improve care quality? How do we take advantage of technology-enabled self-management and virtual visits to increase adherence and close gaps in care? How do we continuously evaluate evidence and implement or de-implement guidelines and decision support across our system? How do we leverage technology to deploy context-sensitive just-in-time learning across our system?
- Health policy makers—How might we enhance health information privacy and security and reduce barriers to using data for population
health, health care quality improvement, and discovery? To what degree is de-identification a safeguard? What combination of legislative mandate, executive action, and industry-driven innovation will accelerate health data interoperability and business agility? How might federal and state governments enable communities to access small area data to inform their collective action to improve community health and well-being? You have taken the first step in exploring these frontiers by picking up this book. Enjoy!
References
1. IDC White Paper. The Digitization of the World from Edge to Core. #US44413318, November 2018. https://www.seagate.com/files/www-content/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf
2. Vamathevan, J., Apweiler, R., & Birney, E. (2019). Biomedical data resources: Bioinformatics infrastructure for biomedical data science. Annual Review of Biomedical Data Science, 2, 199–222.
3. County Health Rankings Program and Roadmap. https://www.countyhealthrankings.org/explore-health-rankings/measures-data-sources/county-health-rankings-model
4. Ledley, R., & Lusted, L. (1959). Reasoning foundations of medical diagnosis. Science, 130, 9–21.
5. Collen, M. F. (1966). Periodic health examinations using an automated multitest laboratory. JAMA, 195, 830–833.
6. Bleich, H. L. (1971). The computer as a consultant. New England Journal of Medicine, 284, 141–147.
William W. Stead, MD, FACMI
Chief Strategy Officer, Vanderbilt University Medical Center
McKesson Foundation Professor, Department of Biomedical Informatics
Professor, Department of Medicine, Vanderbilt University
Nashville, TN, USA
September 2019
Preface to the Fifth Edition

The world of biomedical research and health care has changed remarkably in the 30 years since the first edition of this book was published. So too has the world of computing and communications and thus the underlying scientific issues that sit at the intersections among biomedical science, patient care, public health, and information technology. It is no longer necessary to argue that it has become impossible to practice modern medicine, or to conduct modern biological research, without information technologies. Since the initiation of the Human Genome Project three decades ago, life scientists have been generating data at a rate that defies traditional methods for information management and data analysis. Health professionals also are constantly reminded that a large percentage of their activities relates to information management—for example, obtaining and recording information about patients, consulting colleagues, reading and assessing the scientific literature, planning diagnostic procedures, devising strategies for patient care, interpreting results of laboratory and radiologic studies, or conducting case-based and population-based research. Artificial intelligence, “big data,” and data science are having unprecedented impact on the world, with the biomedical field a particularly active and visible component of such activity. It is complexity and uncertainty, plus society’s overriding concern for patient well-being, and the resulting need for optimal decision making, that set medicine and health apart from many other information-intensive fields. Our desire to provide the best possible health and health care for our society gives a special significance to the effective organization and management of the huge bodies of data with which health professionals and biomedical researchers must deal.
It also suggests the need for specialized approaches and for skilled scientists who are knowledgeable about human biology, clinical care, information technologies, and the scientific issues that drive the effective use of such technologies in the biomedical context.
Information Management in Biomedicine

The clinical and research influence of biomedical-computing systems is remarkably broad. Clinical information systems, which provide communication and information-management functions, are now installed in essentially all health care institutions. Physicians can search entire drug indexes in a few seconds, using the information provided by a computer program to anticipate harmful side effects or drug interactions. Electrocardiograms (ECGs) are typically analyzed initially by computer programs, and similar techniques are being applied for interpretation of pulmonary-function tests and a variety of laboratory and radiologic abnormalities. Devices with embedded processors routinely monitor patients and provide warnings in critical-care settings, such as the
intensive-care unit (ICU) or the operating room. Both biomedical researchers and clinicians regularly use computer programs to search the medical literature, and modern clinical research would be severely hampered without computer-based data-storage techniques and statistical analysis systems. Machine learning methods and artificial intelligence are generating remarkable results in medical settings. These have attracted attention not only from the news media, patients, and clinicians but also from health system leaders and from major corporations and startup companies that are offering new approaches to patient care and health information management. Advanced decision-support tools also are emerging from research laboratories, are being integrated with patient-care systems, and are beginning to have a profound effect on the way medicine is practiced. Despite this extensive use of computers in health care settings and biomedical research, and a resulting expansion of interest in learning more about biomedical computing, many life scientists, health-science students, and professionals have found it difficult to obtain a comprehensive and rigorous, but nontechnical, overview of the field. Both practitioners and basic scientists are recognizing that thorough preparation for their professional futures requires that they gain an understanding of the state of the art in biomedical computing, of the current and future capabilities and limitations of the technology, and of the way in which such developments fit within the scientific, social, and financial context of biomedicine and our health care system. In turn, the future of the biomedical-computing field will be largely determined by how well health professionals and biomedical scientists are prepared to guide and to capitalize upon the discipline’s development. This book is intended to meet this growing need for such well-equipped professionals.
The first edition appeared in 1990 (published by Addison-Wesley) and was used extensively in courses on medical informatics throughout the world (in some cases with translations to other languages). It was updated with a second edition (published by Springer) in 2000, responding to the remarkable changes that occurred during the 1990s, most notably the Human Genome Project and the introduction of the World Wide Web with its impact on adoption and acceptance of the Internet. The third edition (again published by Springer) appeared in 2006, reflecting the ongoing rapid evolution of both technology and health- and biomedically related applications, plus the emerging government recognition of the key role that health information technology would need to play in promoting quality, safety, and efficiency in patient care. With that edition the title of the book was changed from Medical Informatics to Biomedical Informatics, reflecting (as is discussed in Chap. 1) both the increasing breadth of the basic discipline and the evolving new name for academic units, societies, research programs, and publications in the field. The fourth edition (published by Springer in 2014) followed the same conceptual framework for learning about the science that underlies applications of computing and communications technology in biomedicine and health care, for understanding the state of the art in computer applications in clinical care and biology, for critiquing existing systems, and for anticipating future directions that the field may take.
In many respects, the fourth edition was very different from its predecessors, however. Most importantly, it reflected the remarkable changes in computing and communications that continued to occur, most notably in communications, networking, and health information technology policy, and the exploding interest in the role that information technology must play in systems integration and the melding of genomics with innovations in clinical practice and treatment. Several new chapters were introduced and most of the remaining ones underwent extensive revision. In this fifth edition, we have found that two previous single-chapter topics have expanded to warrant two complementary chapters, specifically Cognitive Science (split into Cognitive Informatics and Human-Computer Interaction, Usability, and Workflow) and Consumer Health Informatics and Personal Health Records (split into Personal Health Informatics and mHealth and Applications). There is a new chapter on precision medicine, which has emerged in the past 6 years as a unique area of special interest. Those readers who are familiar with the first four editions will find that the organization and philosophy are essentially unchanged (although bioinformatics, as a set of methodologies, is now considered a “recurrent theme” rather than an “application”), but the content is either new or extensively updated.¹ This book differs from other introductions to the field in its broad coverage and in its emphasis on the field’s conceptual underpinnings rather than on technical details. Our book presumes no health- or computer-science background, but it does assume that you are interested in a comprehensive domain summary that stresses the underlying concepts and that introduces technical details only to the extent that they are necessary to meet the principal goal.
Recent specialized texts are available to cover the technical underpinnings of many topics in this book; many are cited as suggested readings throughout the book, or are cited in the text for those who wish to pursue a more technical exposure to a topic.
Overview and Guide to Use of This Book

This book is written as a text so that it can be used in formal courses, but we have adopted a broad view of the population for whom it is intended. Thus, it may be used not only by students of medicine and of the other health professions but also as an introductory text by future biomedical informatics professionals, as well as for self-study and for reference by practitioners, including those who are pursuing formal board certification in clinical informatics (as is discussed in more detail later in this “Preface”). The book is probably too detailed for use in a 2- or 3-day continuing-education course, although it could be introduced as a reference for further independent study.

Our principal goal in writing this text is to teach concepts in biomedical informatics—the study of biomedical information and its use in decision making—and to illustrate them in the context of descriptions of representative systems that are in use today or that taught us lessons in the past. As you will see, biomedical informatics is more than the study of computers in biomedicine, and we have organized the book to emphasize that point.

Chapter 1 first sets the stage for the rest of the book by providing a glimpse of the future, defining important terms and concepts, describing the content of the field, explaining the connections between biomedical informatics and related disciplines, and discussing the forces that have influenced research in biomedical informatics and its integration into clinical practice and biological research. Broad issues regarding the nature of data, information, and knowledge pervade all areas of application, as do concepts related to optimal decision making. Chapters 2 and 3 focus on these topics but mention computers only in passing. They serve as the foundation for all that follows. Chapters 4 and 5 on cognitive science issues enhance the discussions in Chaps. 2 and 3, pointing out that decision making and behavior are deeply rooted in the ways in which information is processed by the human mind. Key concepts underlying system design, human-computer interaction, patient safety, educational technology, and decision making are introduced in these chapters. Chapter 6 introduces the central notions of software engineering that are important for understanding the applications described later. We have dropped a chapter from previous editions that dealt broadly with system architectures, networking, and computer-system design.

¹ As with the first four editions, this book has tended to draw both its examples and its contributors from North America. There is excellent work in other parts of the world as well, although variations in health care systems, and especially financing, do tend to change the way in which systems evolve from one country to the next. The basic concepts are identical, however, so the book is intended to be useful in educational programs in other parts of the world as well.
This topic is more about engineering than informatics, it changes rapidly, and there are excellent books on this subject to which students can turn if they need more information on these topics. Chapter 7 summarizes the issues of standards development, focusing in particular on data exchange and issues related to sharing of clinical data. This important and rapidly evolving topic warrants inclusion given the evolution of health information exchange, institutional system integration challenges, federal government directives, and the increasingly central role of standards in enabling clinical systems to have their desired influence on health care practices. Chapter 8 addresses a topic of increasing practical relevance in both the clinical and biological worlds: natural language understanding and the processing of biomedical texts. The importance of these methods is clear when one considers the amount of information contained in free-text notes or reports (either dictated and transcribed or increasingly created using speech-understanding systems) or in the published biomedical literature. Even with efforts to encourage structured data entry in clinical systems, there will likely always be an important role for techniques that allow computer systems to extract meaning from natural language documents. Chapter 9 recognizes that bioinformatics is not just an application area but rather a fundamental area of study. The chapter introduces
many of the concepts and analytical tools that underlie modern computational approaches to the management of human biological data, especially in areas such as genomics and proteomics. Applications of bioinformatics related to human health and disease later appear in a chapter on “Translational Bioinformatics” (Chap. 26).

Chapter 10 is a comprehensive introduction to the conceptual underpinnings of biomedical and clinical image capture, analysis, interpretation, and use. This overview of the basic issues and imaging modalities serves as background for Chap. 22, which deals with imaging applications issues, highlighted in the world of radiological imaging and image management (e.g., in picture archiving and communication systems).

Chapter 11 considers personal health informatics not as a set of applications (which are covered in Chap. 19), but as introductory concepts that relate to this topic, such as notions of the digital self and the digital divide, patient-generated health data, and how a focus on the patient (or on healthy individuals) affects both the person and the field of biomedical informatics.

Chapter 12 addresses the key legal and ethical issues that have arisen when health information systems are considered. Then, in Chap. 13, the challenges associated with technology assessment and with the evaluation of clinical information systems are introduced.

Chapters 14–28 (which include two new chapters in this edition, one on mHealth and another on precision medicine) survey many of the key biomedical areas in which informatics methods are being used. Each chapter explains the conceptual and organizational issues in building that type of system, reviews the pertinent history, and examines the barriers to successful implementations.

Chapter 29 reprises and updates a chapter that was new in the fourth edition, providing a summary of the rapidly evolving policy issues related to health information technology.
Although the emphasis is on US government policy, there is some discussion of issues that clearly generalize both to states (in the USA) and to other countries. The book concludes in Chap. 30 with a look to the future—a vision of how informatics concepts, computers, and advanced communication devices one day may pervade every aspect of biomedical research and clinical practice. Rather than offering a single point of view developed by a group of forward thinkers, as was done in the fourth edition, we have invited seven prominent and innovative thinkers to contribute their own views. We integrate these seven future perspectives (representing clinical medicine, nursing, health policy, translational bioinformatics, academic informatics, the information technology industry, and the federal government) into a chapter in which the editors synthesize the seven perspectives, building on an analysis of how the past helps to inform the future of this dynamic field.
The Study of Computer Applications in Biomedicine

The actual and potential uses of computers in health care and biomedicine form a remarkably broad and complex topic. However, just as you do not need to understand how a telephone or an ATM works
to make good use of it and to tell when it is functioning poorly, we believe that technical biomedical-computing skills are not needed by health workers and life scientists who wish simply to become effective users of evolving information technologies. On the other hand, such technical skills are of course necessary for individuals with career commitment to developing information systems for biomedical and health environments. Thus, this book will neither teach you to be a programmer nor show you how to fix a broken computer (although it might motivate you to learn how to do both). It also will not tell you about every important biomedical-computing system or application; we shall use an extensive bibliography included with each chapter to direct you to a wealth of literature where review articles and individual project reports can be found. We describe specific systems only as examples that can provide you with an understanding of the conceptual and organizational issues to be addressed in building systems for such uses. Examples also help to reveal the remaining barriers to successful implementations. Some of the application systems described in the book are well established, even in the commercial marketplace. Others are just beginning to be used broadly in biomedical settings. Several are still largely confined to the research laboratory. Because we wish to emphasize the concepts underlying this field, we generally limit the discussion of technical implementation details. The computer-science issues can be learned from other courses and other textbooks. One exception, however, is our emphasis on the details of decision science as they relate to biomedical problem solving (Chaps. 3 and 24). These topics generally are not presented in computer-science courses, yet they play a central role in the intelligent use of biomedical data and knowledge.
Sections on medical decision making and computer-assisted decision support accordingly include more technical detail than you will find in other chapters. All chapters include an annotated list of “Suggested Readings” to which you can turn if you have a particular interest in a topic, and there is a comprehensive set of references with each chapter. We use boldface print to indicate the key terms of each chapter; the definitions of these terms are included in the “Glossary” at the end of the book. Because many of the issues in biomedical informatics are conceptual, we have included “Questions for Discussion” at the end of each chapter. You will quickly discover that most of these questions do not have “right” answers. They are intended to illuminate key issues in the field and to motivate you to examine additional readings and new areas of research. It is inherently limiting to learn about computer applications solely by reading about them. We accordingly encourage you to complement your studies by seeing real systems in use—ideally by using them yourself. Your understanding of system limitations and of what you would do to improve a biomedical-computing system will be greatly enhanced if you have had personal experience with representative applications. Be aggressive in seeking opportunities to observe and use working systems. In a field that is changing as rapidly as biomedical informatics is, it is difficult ever to feel that you have knowledge that is completely current.
However, the conceptual basis for study changes much more slowly than do the detailed technological issues. Thus, the lessons you learn from this volume will provide you with a foundation on which you can continue to build in the years ahead.
The Need for a Course in Biomedical Informatics

A suggestion that new courses are needed in the curricula for students of the health professions is generally not met with enthusiasm. If anything, educators and students have been clamoring for reduced lecture time, for more emphasis on small group sessions, and for more free time for problem solving and reflection. Yet, in recent decades, many studies and reports have specifically identified biomedical informatics, including computer applications, as an area in which new educational opportunities need to be developed so that physicians and other health professionals will be better prepared for clinical practice. As early as 1984, the Association of American Medical Colleges (AAMC) recommended the formation of new academic units in biomedical informatics in our medical schools, and subsequent studies and reports have continued to stress the importance of the field and the need for its inclusion in the educational environments of health professionals. The reason for this strong recommendation is clear: The practice of medicine is inextricably entwined with the management of information. In the past, practitioners handled medical information through resources such as the nearest hospital or medical-school library; personal collections of books, journals, and reprints; files of patient records; consultation with colleagues; manual office bookkeeping; and (all-too-often flawed) memorization.
Although these techniques continue to be variably valuable, information technology is offering new methods for finding, filing, and sorting information: online bibliographic retrieval systems, including full-text publications; personal computers, laptops, tablets, and smartphones, with database software to maintain personal information and commonly used references; office-practice and clinical information systems and EHRs to capture, communicate, and preserve key elements of the health record; information retrieval and consultation systems to provide assistance when an answer to a question is needed rapidly; practice-management systems to integrate billing and receivable functions with other aspects of office or clinic organization; and other online information resources that help to reduce the pressure to memorize in a field that defies total mastery of all but its narrowest aspects.

With such a pervasive and inevitable role for computers in clinical practice, and with a growing failure of traditional techniques to deal with the rapidly increasing information-management needs of practitioners, it has become obvious to many people that an essential topic has emerged for study in schools and clinical training programs (such as residencies) that train medical and other health professionals. What is less clear is how the subject should be taught in medical schools or other health professional degree programs, and to what extent it should be left for postgraduate education. We believe that topics in biomedical informatics are best taught and learned in the context of health-science training, which allows concepts from both the health sciences and informatics science to be integrated. Biomedical-computing novices are likely to have only limited opportunities for intensive study of the material once their health-professional training has been completed, although elective opportunities for informatics rotations are now offered to residents in many academic medical centers.

The format of biomedical informatics education has evolved as faculty members have been hired to carry out informatics research and to develop courses at more health-science schools, and as the emphasis on lectures as the primary teaching method continues to diminish. Computers will be used increasingly as teaching tools and as devices for communication, problem solving, and data sharing among students and faculty. Indeed, the recent COVID-19 pandemic has moved many traditional medical teaching experiences from the classroom to online teaching environments using video conferencing and on-demand access to course materials. Such experiences do not teach informatics (unless that is the topic of the course), but they have rapidly engaged both faculty and students in technology-intensive teaching and learning experiences. The acceptance of computing, and dependence upon it, has already influenced faculty, trainees, and curriculum committees. This book is designed to be used in a traditional introductory course, whether taught online or in a classroom, although the “Questions for Discussion” also could be used to focus conversation in small seminars and working groups. Integration of biomedical informatics topics into clinical experiences has also become more common. The goal is increasingly to provide instruction in biomedical informatics whenever this field is most relevant to the topic the student is studying.
This aim requires educational opportunities throughout the years of formal training, supplemented by continuing-education programs after graduation. The goal of integrating biomedicine and biomedical informatics is to provide a mechanism for increasing the sophistication of health professionals, so that they know and understand the available resources. They also should be familiar with biomedical computing’s successes and failures, its research frontiers, and its limitations, so that they can avoid repeating the mistakes of the past. Study of biomedical informatics also should improve their skills in information management and problem solving. With a suitable integration of hands-on computer experience, computer-mediated learning, courses in clinical problem solving, and study of the material in this volume, health-science students will be well prepared to make effective use of computational tools and information management in health care delivery.
The Need for Specialists in Biomedical Informatics

As mentioned, this book also is intended to be used as an introductory text in programs of study for people who intend to make their professional careers in biomedical informatics. If we have persuaded you that a course in biomedical informatics is needed, then the requirement for
trained faculty to teach the courses will be obvious. Some people might argue, however, that a course on this subject could be taught by a computer scientist who had an interest in biomedical computing, or by a physician or biologist who had taken a few computing courses. Indeed, in the past, most teaching—and research—has been undertaken by faculty trained primarily in one of the fields and later drawn to the other. Today, however, schools have come to realize the need for professionals trained specifically at the interfaces among biomedicine, biomedical informatics, and related disciplines such as computer science, statistics, cognitive science, health economics, and medical ethics. This book outlines a first course for students training for careers in the biomedical informatics field. We specifically address the need for an educational experience in which computing and information-science concepts are synthesized with biomedical issues regarding research, training, and clinical practice. It is the integration of the related disciplines that originally was lacking in the educational opportunities available to students with career interests in biomedical informatics. Schools are establishing such courses and training programs in growing numbers, but their efforts have been constrained by a lack of faculty who have a broad familiarity with the field and who can develop curricula for students of the health professions as well as of informatics itself. The increasing introduction of computing techniques into biomedical environments requires that well-trained individuals be available not only to teach students but also to design, develop, select, and manage the biomedical-computing systems of tomorrow. There is a wide range of context-dependent computing issues that people can appreciate only by working on problems defined by the health care setting and its constraints. 
The field’s development has been hampered because there are relatively few trained personnel to design research programs, to carry out the experimental and developmental activities, and to provide academic leadership in biomedical informatics. A frequently cited problem is the difficulty a health professional (or a biologist) and a technically trained computer scientist experience when they try to communicate with one another. The vocabularies of the two fields are complex and have little overlap, and there is a process of acculturation to biomedicine that is difficult for computer scientists to appreciate through distant observation. Thus, interdisciplinary research and development projects are more likely to be successful when they are led by people who can effectively bridge the biomedical and computing fields. Such professionals often can facilitate sensitive communication among program personnel whose backgrounds and training differ substantially. Hospitals and health systems have begun to learn that they need such individuals, especially with the increasing implementation of, and dependence upon, EHRs and related clinical systems. The creation of a Chief Medical Information Officer (CMIO) has now become a common innovation. As the concept became popular, however, questions arose about how to identify and evaluate candidates for such key institutional roles. The need for some kind of suitable certification process became clear—one that would require individuals to demonstrate both formal training and the broad skills and knowledge that were required. Thus,
the American Medical Informatics Association (AMIA) and its members began to develop plans for a formal certification program. For physicians, the most meaningful approach was to create a formal medical subspecialty in clinical informatics. Working with the American Board of Preventive Medicine and the parent organization, the American Board of Medical Specialties (ABMS), AMIA helped to obtain approval for a subspecialty board that would allow medical specialists with board certification in any ABMS specialty (such as pediatrics, internal medicine, radiology, pathology, or preventive medicine) to pursue subspecialty board certification in clinical informatics. This proposal was ultimately approved by the ABMS in 2011, and the board examination was first administered in 2013.² After a period during which currently active clinical informatics physician experts could sit for their clinical informatics boards, board eligibility now requires a formal fellowship in clinical informatics. This is similar to the fellowship requirement for other subspecialties such as cardiology, nephrology, and the like. Many health care institutions now offer formal clinical informatics fellowships for physicians who have completed a residency in one of the almost 30 ABMS specialties. These individuals are now often turning to this volume as a resource to help them to prepare for their board examinations. It is exciting to be working in a field that is maturing and that is having a beneficial effect on society. There is ample opportunity remaining for innovation as new technologies evolve and fundamental computing problems succumb to the creativity and hard work of our colleagues. In light of the increasing sophistication and specialization required in computer science in general, it is hardly surprising that a new discipline should arise at that field’s interface with biomedicine.
This book is dedicated to clarifying the definition and to nurturing the effectiveness of that discipline: biomedical informatics.

Edward H. Shortliffe
New York, NY, USA
James J. Cimino
Birmingham, AL, USA
Michael F. Chiang
Bethesda, MD, USA June 2020
² AMIA is currently developing a Health Informatics Certification program (AHIC) for individuals who seek professional certification in health-related informatics but are not physicians or are otherwise not eligible to take the ABMS board certification exam. https://www.amia.org/ahic (Accessed June 10, 2020).
Acknowledgments

In the 1980s, when I was based at Stanford University, I conferred with colleagues Larry Fagan and Gio Wiederhold, and we decided to compile the first comprehensive textbook on what was then called medical informatics. As it turned out, none of us predicted the enormity of the task we were about to undertake. Our challenge was to create a multiauthored textbook that captured the collective expertise of leaders in the field yet was cohesive in content and style.

The concept for the book was first developed in 1982. We had begun to teach a course on computer applications in health care at Stanford’s School of Medicine and had quickly determined that there was no comprehensive introductory text on the subject. Despite several published collections of research descriptions and subject reviews, none had been developed to meet the needs of a rigorous introductory course. The thought of writing a textbook was daunting due to the diversity of topics. None of us felt that he was sufficiently expert in the full range of important subjects for us to write the book ourselves. Yet we wanted to avoid putting together a collection of disconnected chapters containing assorted subject reviews. Thus, we decided to solicit contributions from leaders in the pertinent fields but to provide organizational guidelines in advance for each chapter. We also urged contributors to avoid writing subject reviews but, instead, to focus on the key conceptual topics in their field and to pick a handful of examples to illustrate their didactic points.

As the draft chapters began to come in, we realized that major editing would be required if we were to achieve our goals of cohesiveness and a uniform orientation across all the chapters. We were thus delighted when, in 1987, Leslie Perreault, a graduate of our informatics training program, assumed responsibility for reworking the individual chapters to make an integral whole and for bringing the project to completion.
The final product, published in 1990, was the result of many compromises, heavy editing, detailed rewriting, and numerous iterations. We were gratified by the positive response to the book when it finally appeared, and especially by the students of biomedical informatics who have often come to us at scientific meetings and told us about their appreciation of the book. As the 1990s progressed, however, we began to realize that, despite our emphasis on basic concepts in the field (rather than a survey of existing systems), the volume was beginning to show its age. A great deal had changed since the initial chapters were written, and it became clear that a new edition would be required. The original editors discussed the project and decided that we should redesign the book, solicit updated chapters, and publish a new edition. Leslie Perreault by this time was a busy Director at First Consulting Group in New York City and would not have as much time to devote to the project as she had when we did the first edition. With trepidation, in light of our knowledge of the work that would be involved, we embarked on the new project. As before, the chapter authors did a marvelous job, trying to meet our deadlines, putting up with editing changes that were designed to
bring a uniform style to the book, and contributing excellent chapters that nicely reflected the changes in the field during the preceding decade. No sooner had the second edition appeared in print in 2000 than we started to get inquiries about when the next update would appear. We began to realize that the maintenance of a textbook in a field such as biomedical informatics was nearly a constant, ongoing process. By this time I had moved to Columbia University and the initial group of editors had largely disbanded to take on other responsibilities, with Leslie Perreault no longer available. Accordingly, as plans for a third edition began to take shape, my Columbia colleague Jim Cimino joined me as the new associate editor, whereas Drs. Fagan, Wiederhold, and Perreault continued to be involved as chapter authors. Once again the authors did their best to try to meet our deadlines as the third edition took shape. This time we added several chapters, attempting to cover additional key topics that readers and authors had identified as being necessary enhancements to the earlier editions. We were once again extremely appreciative of all the authors’ commitment and for the excellence of their work on behalf of the book and the field. Predictably, it was only a short time after the publication of the third edition in 2006 that we began to get queries about a fourth edition. We resisted for a year or two, but it became clear that the third edition was becoming rapidly stale in some key areas and that there were new topics that were not in the book and needed to be added. With that in mind we, in consultation with Grant Weston from Springer’s offices in London, agreed to embark on a fourth edition. Progress was slowed by my professional moves (to Phoenix, Arizona, then Houston, Texas, and then back to New York) with a very busy 3-year stint as President and CEO of the American Medical Informatics Association. 
Similarly, Jim Cimino left Columbia to assume new responsibilities at the NIH Clinical Center in Bethesda, MD. With several new chapters in mind, and the need to change authors of some of the existing chapters due to retirements (this too will happen, even in a young field like informatics), we began working on the fourth edition, finally completing the effort with publication in early 2014. Now, seven years later, we are completing the fifth edition of the volume. It was not long after the publication of the fourth edition that we began to get requests for a new edition that would include many of the new and emerging topics that had not made it into the 2014 publication. With the introduction of new chapters, major revisions to previous chapters, and some reordering of authors or introduction of new ones, we have attempted to assure that this new edition will fill the necessary gaps and engage our readers with its currency and relevance. As Jim Cimino (now directing the Informatics Institute at the University of Alabama in Birmingham) and I considered the development of this edition, we realized that we were not getting any younger and it would be wise to craft a succession plan so that others could handle the inevitable requests for a sixth and subsequent editions. We were delighted when Michael Chiang agreed to join us as an associate editor, coauthoring three chapters and becoming fully involved in the book’s philosophy and the editing tasks involved. Michael was a postdoctoral informatics
trainee at Columbia when we were both there on the faculty. A well-known pediatric ophthalmologist, he is now balancing his clinical career with an active set of research and academic activities in biomedical informatics. We believe that Michael will be a perfect person to carry the book into the future as Jim and I (both of whom view the book as a significant component of our professional life’s work) phase out our own involvement after this edition. I should add that, in mid-2020, Michael was named director of the National Eye Institute at NIH, which offers further evidence of his accomplishments as an ophthalmologist, researcher, and informatician.

For this edition we owe particular gratitude to Elektra McDermott, our developmental editor, whose rigorous attention to detail has been crucial given the size and the complexity of the undertaking. At Springer we have been delighted to work once again with Grant Weston, Executive Editor in their Medicine and Life Sciences division, who has been extremely supportive despite our missed deadlines. And I want to offer my sincere personal thanks to Jim Cimino, who has been a superb and talented collaborator in this effort for the last three editions. Without his hard work and expertise, we would still be struggling to complete the massive editing job associated with this now very long manuscript.

Edward H. Shortliffe
New York, NY, USA December 2020
Contents

Part I: Recurrent Themes in Biomedical Informatics

1. Biomedical Informatics: The Science and the Pragmatics ..... 3
   Edward H. Shortliffe and Michael F. Chiang

2. Biomedical Data: Their Acquisition, Storage, and Use ..... 45
   Edward H. Shortliffe and Michael F. Chiang

3. Biomedical Decision Making: Probabilistic Clinical Reasoning ..... 77
   Douglas K. Owens, Jeremy D. Goldhaber-Fiebert, and Harold C. Sox

4. Cognitive Informatics ..... 121
   Vimla L. Patel and David R. Kaufman

5. Human-Computer Interaction, Usability, and Workflow ..... 153
   Vimla L. Patel, David R. Kaufman, and Thomas Kannampallil

6. Software Engineering for Health Care and Biomedicine ..... 177
   Adam B. Wilcox, David K. Vawdrey, and Kensaku Kawamoto

7. Standards in Biomedical Informatics ..... 205
   Charles Jaffe, Viet Nguyen, Wayne R. Kubick, Todd Cooper, Russell B. Leftwich, and W. Edward Hammond

8. Natural Language Processing for Health-Related Texts ..... 241
   Dina Demner-Fushman, Noémie Elhadad, and Carol Friedman

9. Bioinformatics ..... 273
   Sean D. Mooney, Jessica D. Tenenbaum, and Russ B. Altman

10. Biomedical Imaging Informatics ..... 299
    Daniel L. Rubin, Hayit Greenspan, and Assaf Hoogi

11. Personal Health Informatics ..... 363
    Robert M. Cronin, Holly Jimison, and Kevin B. Johnson

12. Ethics in Biomedical and Health Informatics: Users, Standards, and Outcomes ..... 391
    Kenneth W. Goodman and Randolph A. Miller

13. Evaluation of Biomedical and Health Information Resources ..... 425
    Charles P. Friedman and Jeremy C. Wyatt

Part II: Biomedical Informatics Applications

14. Electronic Health Records ..... 467
    Genevieve B. Melton, Clement J. McDonald, Paul C. Tang, and George Hripcsak

15. Health Information Infrastructure ..... 511
    William A. Yasnoff

16. Management of Information in Health Care Organizations ..... 543
    Lynn Harold Vogel and William C. Reed

17. Patient-Centered Care Systems ..... 575
    Suzanne Bakken, Patricia C. Dykes, Sarah Collins Rossetti, and Judy G. Ozbolt

18. Population and Public Health Informatics ..... 613
    Martin LaVenture, David A. Ross, Catherine Staes, and William A. Yasnoff

19. mHealth and Applications ..... 637
    Eun Kyoung Choe, Predrag Klasnja, and Wanda Pratt

20. Telemedicine and Telehealth ..... 667
    Michael F. Chiang, Justin B. Starren, and George Demiris

21. Patient Monitoring Systems ..... 693
    Vitaly Herasevich, Brian W. Pickering, Terry P. Clemmer, and Roger G. Mark

22. Imaging Systems in Radiology ..... 733
    Bradley J. Erickson

23. Information Retrieval ..... 755
    William Hersh

24. Clinical Decision-Support Systems ..... 795
    Mark A. Musen, Blackford Middleton, and Robert A. Greenes

25. Digital Technology in Health Science Education ..... 841
    Parvati Dev and Titus Schleyer

26. Translational Bioinformatics ..... 867
    Jessica D. Tenenbaum, Nigam H. Shah, and Russ B. Altman

27. Clinical Research Informatics ..... 913
    Philip R. O. Payne, Peter J. Embi, and James J. Cimino

28. Precision Medicine and Informatics ..... 941
    Joshua C. Denny, Jessica D. Tenenbaum, and Matt Might

Part III: Biomedical Informatics in the Years Ahead

29. Health Information Technology Policy ..... 969
    Robert S. Rudin, Paul C. Tang, and David W. Bates

30. The Future of Informatics in Biomedicine ..... 987
    James J. Cimino, Edward H. Shortliffe, Michael F. Chiang, David Blumenthal, Patricia Flatley Brennan, Mark Frisse, Eric Horvitz, Judy Murphy, Peter Tarczy-Hornoch, and Robert M. Wachter

Supplementary Information
Glossary ..... 1018
Name Index ..... 1091
Subject Index ..... 1131
Editors

Edward H. Shortliffe, MD, PhD, MACP, FACMI Biomedical Informatics, Columbia University, New York, NY, USA; Arizona State University, Phoenix, AZ, USA; Weill Cornell Medical College, New York, NY, USA [email protected]
James J. Cimino, MD, FACP, FACMI Informatics Institute, University of Alabama at Birmingham, Birmingham, AL, USA [email protected]
Associate Editor

Michael F. Chiang, MD, MA, FACMI National Eye Institute, National Institutes of Health, Bethesda, MD, USA [email protected]
Contributors

Russ B. Altman, MD, PhD, FACMI Departments of Bioengineering, Genetics and Medicine, Stanford University, Stanford, CA, USA [email protected]
Suzanne Bakken, RN, PhD, FAAN, FACMI, FIAHSI Department of Biomedical Informatics, Vagelos College of Physicians and Surgeons, School of Nursing, and Data Science Institute, Columbia University, New York, NY, USA [email protected]
David W. Bates, MD, MSc, FACMI Division of General Internal Medicine and Primary Care, Department of Medicine, Brigham and Women’s Hospital, Boston, MA, USA [email protected]
David Blumenthal, MD, MPP Commonwealth Fund, New York, NY, USA [email protected]
Patricia Flatley Brennan, RN, PhD, FACMI National Library of Medicine, National Institutes of Health, Bethesda, MD, USA [email protected]
Eun Kyoung Choe, PhD College of Information Studies, University of Maryland, College Park, College Park, MD, USA [email protected]
Terry P. Clemmer, MD Pulmonary – Critical Care Medicine, Intermountain Healthcare (Retired), Salt Lake City, UT, USA [email protected]
Sarah Collins Rossetti, PhD, RN, FACMI Department of Biomedical Informatics, School of Nursing, Columbia University Medical Center, New York, NY, USA [email protected]
Todd Cooper Trusted Solutions Foundry, Inc., San Diego, CA, USA
Robert M. Cronin, MD, MS, MEng Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, USA; Department of Internal Medicine, The Ohio State University, Columbus, OH, USA [email protected]
George Demiris, PhD, FACMI Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA [email protected]
Dina Demner-Fushman, MD, PhD, FACMI National Library of Medicine, Lister Hill National Center for Biomedical Communications, Bethesda, MD, USA [email protected]
Joshua C. Denny, MD, MS, FACMI All of Us Research Program, National Institutes of Health, Bethesda, MD, USA [email protected]
Parvati Dev, PhD, FACMI SimTabs, Los Altos Hills, CA, USA [email protected]
Patricia C. Dykes, PhD, MA, RN, FAAN, FACMI Center for Patient Safety Research & Practice, Brigham and Women’s Hospital, Boston, MA, USA [email protected]
Noémie Elhadad, PhD, FACMI Department of Biomedical Informatics, Columbia University, New York, NY, USA [email protected]
Peter J. Embi, MD, MS, FACMI Regenstrief Institute and Indiana University School of Medicine, Indianapolis, IN, USA
Bradley J. Erickson, MD, PhD Department of Radiology, Mayo Clinic, Rochester, MN, USA [email protected]
Carol Friedman, PhD, FACMI Department of Biomedical Informatics, Columbia University, New York, NY, USA
Charles P. Friedman, PhD, FACMI, FIAHSI Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI, USA [email protected]
Mark Frisse, MD, MS, MBA, FACMI Biomedical Informatics, Vanderbilt University, Nashville, TN, USA [email protected]
Jeremy D. Goldhaber-Fiebert, PhD Center for Primary Care and Outcomes Research/Center for Health Policy, Stanford University, Stanford, CA, USA [email protected]
Kenneth W. Goodman, PhD, FACMI Institute for Bioethics and Health Policy, University of Miami Miller School of Medicine, Miami, FL, USA [email protected]
Robert A. Greenes, MD, PhD, FACMI Department of Biomedical Informatics, Arizona State University, Mayo Clinic, Scottsdale, AZ, USA
Hayit Greenspan, PhD Tel Aviv University, Tel Aviv, Israel
W. Edward Hammond, PhD, FACMI Duke Center for Health Informatics, Duke University Medical Center, Durham, NC, USA
Vitaly Herasevich, MD, PhD Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN, USA [email protected]
William Hersh, MD, FACP, FACMI, FAMIA, FIAHSI Department of Medical Informatics & Clinical Epidemiology, School of Medicine, Oregon Health & Science University, Portland, OR, USA [email protected]
Assaf Hoogi, PhD Computer Science Department, Ariel University, Ariel, Israel
Eric Horvitz, MD, PhD, FACMI Microsoft, Redmond, WA, USA [email protected]
George Hripcsak, MD, MS, FACMI Department of Biomedical Informatics, Columbia University Medical Center, New York, NY, USA [email protected]
Charles Jaffe, MD, PhD, FACMI Health Level Seven International, Ann Arbor, MI, USA [email protected]
Holly Jimison, PhD, FACMI Khoury College of Computer Sciences & Bouve College of Health Sciences, Northeastern University, Boston, MA, USA [email protected]
Kevin B. Johnson, MD, MS, FACMI Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, USA [email protected]
Thomas Kannampallil, PhD Institute for Informatics, Washington University School of Medicine, St Louis, MO, USA [email protected]
David R. Kaufman, PhD, FACMI Medical Informatics, SUNY Downstate Medical Center, Brooklyn, NY, USA [email protected]
Kensaku Kawamoto, MD, PhD, MHS, FACMI University of Utah, Salt Lake City, UT, USA
Predrag Klasnja, PhD School of Information, University of Michigan, Ann Arbor, MI, USA [email protected]
Wayne R. Kubick, BA, MBA Health Level 7 International, Ann Arbor, MI, USA
Martin LaVenture, MPH, PhD, FACMI Informatics Savvy Advisors, Edina, MN, USA; Institute for Health Informatics (IHI), University of Minnesota, Minneapolis, MN, USA [email protected]
Russell B. Leftwich, MD InterSystems, Cambridge, MA, USA; Vanderbilt University Medical Center, Department of Biomedical Informatics, Nashville, TN, USA
Roger G. Mark, MD, PhD Department of Electrical Engineering and Computer Science (EECS), Institute of Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA [email protected]
Clement J. McDonald, MD, MS, FACMI Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA [email protected]; [email protected]
Genevieve B. Melton, MD, PhD, FACMI Department of Surgery and Institute for Health Informatics, University of Minnesota, Minneapolis, MN, USA [email protected]
Blackford Middleton, MD, MPH, MSc, FACMI Apervita, Inc, Chicago, IL, USA
Matt Might, PhD UAB Hugh Kaul Precision Medicine Institute, University of Alabama at Birmingham, Birmingham, AL, USA [email protected]
Randolph A. Miller, MD, FACMI Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, USA [email protected]
Sean D. Mooney, PhD, FACMI Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, USA [email protected]
Judy Murphy, RN, FACMI, LFHIMSS, FAAN IBM Global Healthcare, Armonk, NY, USA [email protected]
Mark A. Musen, MD, PhD, FACMI Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, CA, USA [email protected]
Viet Nguyen, MD Stratametrics LLC, Salt Lake City, UT, USA
Douglas K. Owens, MD, MS Center for Primary Care and Outcomes Research/Center for Health Policy, Stanford University, Stanford, CA, USA [email protected]
Judy G. Ozbolt, PhD, RN(Ret), FAAN, FACMI Department of Organizational Systems and Adult Health, University of Maryland School of Nursing, Baltimore, MD, USA [email protected]
Vimla L. Patel, PhD, DSc, FACMI Center for Cognitive Studies in Medicine and Public Health, The New York Academy of Medicine, New York, NY, USA [email protected]
Philip R. O. Payne, PhD, FACMI Institute for Informatics, Washington University School of Medicine in St. Louis, St. Louis, MO, USA [email protected]
Brian W. Pickering, MB, BCh Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN, USA [email protected]
Wanda Pratt, PhD, FACMI Information School, University of Washington, Seattle, WA, USA [email protected]
William C. Reed, MS, PhD Huntzinger Management Group, Inc, Moosic, PA, USA [email protected] David A. Ross, DSc Task Force for Global Health, Decatur, GA, USA [email protected] Daniel L. Rubin, MD, MS, FSIIM, FACMI Departments of Biomedical Data Science, Radiology, and Medicine, Stanford University, Stanford, CA, USA [email protected] Robert S. Rudin, PhD RAND Healthcare, RAND Corporation, Boston, MA, USA [email protected] Titus Schleyer, DMD, PhD, FAMIA, FACMI Department of Medicine, Division of General Internal Medicine and Geriatrics, Indiana University School of Medicine, Indianapolis, IN, USA [email protected] Nigam H. Shah, MBBS, PhD, FACMI Department of Medicine, Stanford University, Stanford, CA, USA [email protected] Harold C. Sox, MD, MACP Patient-Centered Outcomes Research Institute, Washington, DC, USA Geisel School of Medicine at Dartmouth, Dartmouth College, Hanover, NH, USA [email protected] Catherine Staes, PhD, MPH, RN, FACMI Nursing Informatics Program, College of Nursing, University of Utah, Salt Lake City, UT, USA [email protected] Justin B. Starren, MD, PhD Division of Health and Biomedical Informatics, Departments of Preventive Medicine and Medical Social Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA [email protected] Paul C. Tang, MD, MS, FACMI Clinical Excellence Research Center, Stanford University, Stanford, CA, USA [email protected] Peter Tarczy-Hornoch, MD, FACMI Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, USA [email protected] Jessica D. Tenenbaum, PhD, FACMI Department of Biostatistics & Bioinformatics, Duke University, Durham, NC, USA [email protected]
David K. Vawdrey, PhD, FACMI Geisinger, Danville, PA, USA Lynn Harold Vogel, AB, AM, PhD LH Vogel Consulting, LLC, Ridgewood, NJ, USA [email protected] Robert M. Wachter, MD Department of Medicine, University of California, San Francisco, San Francisco, CA, USA [email protected] Adam B. Wilcox, MD, PhD, FACMI University of Washington, Seattle, WA, USA [email protected] Jeremy C. Wyatt, MBBS, FRCP, FACMI, FFCI Wessex Institute of Health Research, Faculty of Medicine, University of Southampton, Southampton, UK [email protected] William A. Yasnoff, MD, PhD, FACMI National Health Information Infrastructure Advisors, Portland, OR, USA Division of Health Sciences Informatics, Johns Hopkins University, Portland, OR, USA National Health Information Infrastructure (NHII) Advisors and Johns Hopkins University, Portland, OR, USA [email protected]
About the Future Perspective Authors (Chapter 30)

David Blumenthal, MD, MPP became President and CEO of the Commonwealth Fund, a national health care philanthropy based in New York City, in January 2013. Previously, he served as Chief Health Information and Innovation Officer at Partners Health System in Boston, MA, and was Samuel O. Thier Professor of Medicine and Professor of Health Care Policy at Massachusetts General Hospital/Harvard Medical School. From 2009 to 2011, Dr. Blumenthal was the National Coordinator for Health Information Technology under President Barack Obama. In this role he was charged with building an interoperable, private, and secure nationwide health information system and supporting the widespread, meaningful use of health IT. Prior to that, Dr. Blumenthal was a practicing primary care physician, director of the Institute for Health Policy, and professor of medicine and health policy at Massachusetts General Hospital/Partners Healthcare System and Harvard Medical School. As a renowned health services researcher and national authority on health IT adoption, Dr. Blumenthal has authored over 300 scholarly publications, including the seminal studies on the adoption and use of health information technology in the United States. Dr. Blumenthal received his undergraduate, medical, and public policy degrees from Harvard University and completed his residency in internal medicine at Massachusetts General Hospital.
Patricia Flatley Brennan, RN, PhD, FAAN, FACMI is the Director of the National Library of Medicine (NLM) and Adjunct Investigator in the National Institute of Nursing Research’s Advanced Visualization Branch at the National Institutes of Health (NIH). As the world’s largest biomedical library, NLM produces digital information resources used by scientists, health professionals, and members of the public. A leader in research in computational health informatics, NLM plays a pivotal role in translating research into practice. NLM’s research and information services support scientific discovery, health care, and public health. Prior to joining
NLM, Dr. Brennan was the Lillian L. Moehlman Bascom Professor at the School of Nursing and College of Engineering at the University of Wisconsin–Madison. Dr. Brennan is a pioneer in the development of innovative information systems and services such as ComputerLink, an electronic network designed to reduce isolation and improve self-care among home care patients. She directed HeartCare, a web-based information and communication service that helps home-dwelling cardiac patients recover faster and with fewer symptoms, and also directed Project HealthDesign, an initiative designed to stimulate the next generation of personal health records. Her professional accomplishments reflect her background, which unites engineering, information technology, and clinical care to improve public health and ensure the best possible experience in patient care. A past president of the American Medical Informatics Association, Dr. Brennan was elected to the National Academy of Medicine in 2001. She is a fellow of the American Academy of Nursing, the American College of Medical Informatics, and the New York Academy of Medicine. In 2020, Dr. Brennan was inducted into the American Institute for Medical and Biological Engineering (AIMBE). The AIMBE College of Fellows is among the highest professional distinctions accorded to a medical and biological engineer.
Mark Frisse, MD, MS, MBA is an Emeritus Professor in the Department of Biomedical Informatics at Vanderbilt University Medical Center. Until July 2020, he held the Accenture Endowed Chair within the Department. His interests center on the potential of informatics and information technology, used more effectively, to contribute to economically sustainable health care. In addition to his teaching and research responsibilities at the Vanderbilt Medical School, he also directed the Masters of Management in Health Care graduate program at Vanderbilt's Owen Graduate School of Management and led a health information technology course for the executive Masters of Health Care Delivery Science program at Tuck School of Business, Dartmouth College. Previously, Dr. Frisse held leadership positions at Washington University, Express Scripts, and the First Consulting Group.
At Express Scripts, he was Chief Medical Officer, was responsible for its Practice Patterns Science division, and helped found RxHub. At First Consulting, he led engagements in vendor selection, quality governance, physician information technology leadership development, and clinician governance. A board-certified internist with fellowship training in medical oncology, he obtained his bachelor's degree from the University of Notre Dame in 1974 and his MD from Washington University in 1978. He received a master's degree in medical information science from Stanford University in 1987 and an MBA from Washington University in 1997. He is a fellow of the American College of Physicians, the American College of Medical Informatics, and the New York Academy of Medicine. He is an elected member of the National Academy of Medicine.
Eric Horvitz is a technical fellow at Microsoft, where he serves as the company's first Chief Scientific Officer. He previously served as director of Microsoft Research Labs. He has pursued principles and applications of AI with contributions in machine learning, perception, natural language understanding, and decision making. His research centers on challenges with uses of AI amidst the complexities of the open world, including uses of probabilistic and decision-theoretic representations for reasoning and action, models of bounded rationality, and human-AI complementarity and coordination. His efforts and collaborations have led to fielded systems in healthcare, transportation, ecommerce, operating systems, and aerospace. He received the Feigenbaum Prize and the Allen Newell Prize for contributions to AI. He was inducted into the CHI Academy for contributions at the intersection of AI and human-computer interaction. He has been elected fellow of the American College of Medical Informatics (ACMI), National Academy of Engineering (NAE), Association for Computing Machinery (ACM), Association for the Advancement of AI (AAAI), the American Association for the Advancement of Science (AAAS), the American Academy of Arts and Sciences, and the American Philosophical Society. He has served as president of the AAAI and on advisory committees for the NSF, NIH, and the U.S. Department of Defense. He currently serves on the Board of Regents of the National Library of Medicine and on the scientific advisory board of the Allen Institute for AI. He earned his bachelor's degree in biophysics at Binghamton University, and a PhD in biomedical informatics and his MD at Stanford University.
Judy Murphy, RN, FACMI, FHIMSS, FAAN is a nursing executive with a long history in health informatics. She was Chief Nursing Officer for IBM Global Healthcare, where she was a strategic advisor regarding provider IT solutions to achieve the quadruple aim. Prior to working at IBM, she was Deputy National Coordinator for Programs and Policy at the ONC in Washington, D.C., where she led federal efforts to assist healthcare providers in adopting health IT to improve care and to promote consumers' understanding and use of health IT for their own health. She came to ONC with 25 years of health informatics experience at Aurora Health Care in Wisconsin, a large integrated delivery network. As Vice President-EHR Applications, she led their EHR program beginning in 1995, when Aurora was an early adopter of technology. She publishes and lectures nationally and internationally. She served on the AMIA and HIMSS Boards of Directors and is a Fellow in the American Academy of Nursing, the American College of Medical Informatics, and HIMSS. She has received numerous awards, including the HIMSS 2018 Most Influential Women in Health IT, the AMIA 2014 Don Eugene Detmer Award for Health Policy Contributions in Informatics, the HIMSS 2014 Federal Health IT Leadership Award, and the HIMSS 2006 Nursing Informatics Leadership Award.
Peter Tarczy-Hornoch, MD is Professor and Chair of the Department of Biomedical Informatics and Medical Education at the University of Washington (UW). His background spans computer science, bioengineering, biomedical informatics, and medicine (pediatrics and neonatology), with undergraduate and medical degrees from Stanford, residency at the University of Minnesota, and fellowship at the University of Washington. He has played a key role in establishing and growing the biomedical informatics research, teaching, and practice activities at UW
for over two decades. The unifying theme of his research has been integration of electronic biomedical data (clinical and genomic), both for (a) knowledge discovery and (b) integration of this knowledge with clinical data at the point of care for decision support. His current research focuses on (a) secondary use of electronic medical record (EMR) data for translational research, including outcomes research, learning healthcare systems, and patient accrual and biospecimen acquisition based on complex phenotypic eligibility criteria; (b) the use of EMR systems for cross-institutional comparative effectiveness research; and (c) integration of genomic data into the EMR for clinical decision support. Key past research has included data integration and knowledge base creation for genomic testing. He has served in a number of national leadership roles, including founding chair of the AMIA Genomics (now Genomics and Translational Bioinformatics) Working Group.
Robert M. Wachter, MD is Professor and Chair of the Department of Medicine at the University of California, San Francisco. He is author of 300 articles and 6 books and is a frequent contributor to the New York Times and Wall Street Journal. He coined the term "hospitalist" in 1996 and is often considered the "father" of the hospitalist field, the fastest-growing specialty in the history of modern medicine. He is past president of the Society of Hospital Medicine and past chair of the American Board of Internal Medicine. In the safety and quality arenas, his book, Understanding Patient Safety, is the world's top-selling safety primer. In 2004, he received the John M. Eisenberg Award, the nation's top honor in patient safety. Twelve times, Modern Healthcare magazine has ranked him as one of the 50 most influential physician-executives in the U.S.; he was #1 on the list in 2015. His 2015 book, The Digital Doctor: Hope, Hype and Harm at the Dawn of Medicine's Computer Age, was a New York Times science bestseller. In 2016, he chaired a blue-ribbon commission advising England's National Health Service on its digital strategy. He is an elected member of the National Academy of Medicine.
About the Editors Edward H. Shortliffe is Chair Emeritus and Adjunct Professor in the Department of Biomedical Informatics at Columbia University’s Vagelos College of Physicians and Surgeons. Previously he served as President and CEO of the American Medical Informatics Association. He was Professor of Biomedical Informatics at the University of Texas Health Science Center in Houston and at Arizona State University. A board-certified internist, he was Founding Dean of the University of Arizona College of Medicine – Phoenix and served as Professor of Biomedical Informatics and of Medicine at Columbia University. Before that he was Professor of Medicine and of Computer Science at Stanford University. Honors include his election to membership in the National Academy of Medicine (where he served on the executive council for 6 years and has chaired the membership committee) and in the American Society for Clinical Investigation. He has also been elected to fellowship in the American College of Medical Informatics and the American Association for Artificial Intelligence. A Master of the American College of Physicians (ACP), he held a position for 6 years on that organization’s Board of Regents. He is Editor Emeritus of the Journal of Biomedical Informatics and has served on the editorial boards for several other biomedical informatics publications. In the early 1980s, he was recipient of a research career development award from the National Library of Medicine. In addition, he received the Grace Murray Hopper Award of the Association for Computing Machinery in 1976, the Morris F. Collen Award of the American College of Medical Informatics in 2006, and was a Henry J. Kaiser Family Foundation Faculty Scholar in General Internal Medicine. 
He has served on the oversight committee for the Division on Engineering and Physical Sciences (National Academy of Sciences), the National Committee on Vital and Health Statistics (NCVHS), and on the President’s Information Technology Advisory Committee (PITAC). Dr. Shortliffe has authored over 350 articles and books in the fields of biomedical computing and artificial intelligence.
James J. Cimino is a board-certified internist who completed a National Library of Medicine informatics fellowship at the Massachusetts General Hospital and Harvard University and then went on to an academic position at Columbia University College of Physicians and Surgeons and the Presbyterian Hospital in New York. He spent 20 years at Columbia, carrying out clinical informatics research, building clinical information systems, teaching medical informatics and medicine, and caring for patients, rising to the rank of full professor in both Biomedical Informatics and Medicine. His principal research areas there included desiderata for controlled terminologies, mobile and Web-based clinical information systems for clinicians and patients, and a context-aware form of clinical decision support called “infobuttons.” In 2008, he moved to the National Institutes of Health, where he was the Chief of the Laboratory for Informatics Development and a Tenured Investigator at the NIH Clinical Center and the National Library of Medicine. His principal project involved the development of the Biomedical Translational Research Information System (BTRIS), an NIH-wide clinical research data resource. In 2015, he left the NIH to be the inaugural Director of the Informatics Institute at the University of Alabama at Birmingham. The Institute is charged with improving informatics research, education, and service across the University, supporting the Personalized Medicine Institute, the Center for Genomic Medicine, and the University Health System Foundation, including improvement of and access to electronic health records. He holds the rank of Tenured Professor in Medicine and is the Chief for the Informatics Section in the Division of General Internal Medicine. 
He continues to conduct research in clinical informatics and clinical research informatics. He was Director of the NLM's weeklong Biomedical Informatics course for 16 years and teaches at Columbia University and Georgetown University as an Adjunct Professor. He is an Associate Editor of the Journal of Biomedical Informatics. His honors include Fellowships of the American College of Physicians, the New York Academy of Medicine, and the American College of Medical Informatics (Past President); the Priscilla Mayden Award from the University of Utah; the Donald A.B. Lindberg Award for Innovation in Informatics and the President's Award, both from the American Medical
Informatics Association, the Morris F. Collen Award of the American College of Medical Informatics, the Medal of Honor from New York Medical College, the NIH Clinical Center Director’s Award (twice), and induction into the National Academy of Medicine (formerly the Institute of Medicine).
Michael F. Chiang is Director of the National Eye Institute at the National Institutes of Health in Bethesda, Maryland. His clinical practice focuses on pediatric ophthalmology, and he is board-certified in clinical informatics. His research develops and applies biomedical informatics methods to clinical ophthalmology in areas such as retinopathy of prematurity (ROP), telehealth, artificial intelligence, clinical information systems, genotype-phenotype correlation, and data analytics. His group has published over 200 peer-reviewed papers and has developed an assistive artificial intelligence system for ROP that received breakthrough status from the US Food and Drug Administration. He received a BS in Electrical Engineering and Biology from Stanford University, an MD from the Harvard-MIT Division of Health Sciences and Technology, and an MA in Biomedical Informatics from Columbia University. He completed clinical training at the Johns Hopkins Wilmer Eye Institute. Between 2001 and 2010, he worked at Columbia University, where he was Anne S. Cohen Associate Professor of Ophthalmology & Biomedical Informatics, director of medical student education in ophthalmology, and director of the introductory graduate student course in biomedical informatics. From 2010 to 2020, he was Knowles Professor of Ophthalmology & Medical Informatics and Clinical Epidemiology, and Associate Director of the Casey Eye Institute, at Oregon Health & Science University (OHSU). He has served as a member of the American Academy of Ophthalmology (AAO) Board of Trustees, Chair of the AAO IRIS Registry Data Analytics Committee, Chair of the AAO Task Force on Artificial Intelligence, Chair of the AAO Medical Information Technology Committee, and on numerous other national and local committees. He currently serves as an Associate Editor for JAMIA and is on the Editorial Board for Ophthalmology and the Asia-Pacific Journal of Ophthalmology.
I Recurrent Themes in Biomedical Informatics

Contents
Chapter 1 Biomedical Informatics: The Science and the Pragmatics – 3
Edward H. Shortliffe and Michael F. Chiang
Chapter 2 Biomedical Data: Their Acquisition, Storage, and Use – 45
Edward H. Shortliffe and Michael F. Chiang
Chapter 3 Biomedical Decision Making: Probabilistic Clinical Reasoning – 77
Douglas K. Owens, Jeremy D. Goldhaber-Fiebert, and Harold C. Sox
Chapter 4 Cognitive Informatics – 121
Vimla L. Patel and David R. Kaufman
Chapter 5 Human-Computer Interaction, Usability, and Workflow – 153
Vimla L. Patel, David R. Kaufman, and Thomas Kannampallil
Chapter 6 Software Engineering for Health Care and Biomedicine – 177
Adam B. Wilcox, David K. Vawdrey, and Kensaku Kawamoto
Chapter 7 Standards in Biomedical Informatics – 205
Charles Jaffe, Viet Nguyen, Wayne R. Kubick, Todd Cooper, Russell B. Leftwich, and W. Edward Hammond
Chapter 8 Natural Language Processing for Health-Related Texts – 241
Dina Demner-Fushman, Noémie Elhadad, and Carol Friedman
Chapter 9 Bioinformatics – 273
Sean D. Mooney, Jessica D. Tenenbaum, and Russ B. Altman
Chapter 10 Biomedical Imaging Informatics – 299
Daniel L. Rubin, Hayit Greenspan, and Assaf Hoogi
Chapter 11 Personal Health Informatics – 363
Robert M. Cronin, Holly Jimison, and Kevin B. Johnson
Chapter 12 Ethics in Biomedical and Health Informatics: Users, Standards, and Outcomes – 391
Kenneth W. Goodman and Randolph A. Miller
Chapter 13 Evaluation of Biomedical and Health Information Resources – 425
Charles P. Friedman and Jeremy C. Wyatt
Biomedical Informatics: The Science and the Pragmatics
Edward H. Shortliffe and Michael F. Chiang

Contents
1.1 The Information Revolution Comes to Medicine – 4
1.1.1 Integrated Access to Clinical Information – 5
1.1.2 Today's Electronic Health Record (EHR) Environment – 6
1.1.3 Anticipating the Future of Electronic Health Records – 11
1.2 Communications Technology and Health Data Integration – 12
1.2.1 A Model of Integrated Disease Surveillance – 13
1.2.2 The Goal: A Learning Health System – 15
1.2.3 Implications of the Internet for Patients – 17
1.2.4 Requirements for Achieving the Vision – 18
1.3 The US Government Steps In – 21
1.4 Defining Biomedical Informatics and Related Disciplines – 21
1.4.1 Terminology – 23
1.4.2 Historical Perspective – 26
1.4.3 Relationship to Biomedical Science and Clinical Practice – 29
1.4.4 Relationship to Computer Science – 36
1.4.5 Relationship to Biomedical Engineering – 38
1.5 Integrating Biomedical Informatics and Clinical Practice – 39
References – 44
© Springer Nature Switzerland AG 2021 E. H. Shortliffe, J. J. Cimino (eds.), Biomedical Informatics, https://doi.org/10.1007/978-3-030-58721-5_1
Learning Objectives
After reading this chapter, you should know the answers to these questions:
- Why is information and knowledge management a central issue in biomedical research, clinical practice, and public health?
- What are integrated information management environments, and how are they affecting the practice of medicine, the promotion of health, and biomedical research?
- What do we mean by the terms biomedical informatics, medical computer science, medical computing, clinical informatics, nursing informatics, bioinformatics, public health informatics, and health informatics?
- What is translational research, why is it being heavily promoted and supported, how does it depend on translational bioinformatics and clinical research informatics, and how do these all relate to precision medicine?
- Why should health professionals, life scientists, and students of the health professions learn about biomedical informatics concepts and informatics applications?
- How has the development of modern computing technologies and the Internet changed the nature of biomedical computing?
- How is biomedical informatics related to clinical practice, public health, biomedical engineering, molecular biology, decision science, information science, and computer science?
- How does information in clinical medicine and health differ from information in the basic sciences?
- How can changes in computer technology and the financing of health care influence the integration of biomedical computing into clinical practice?
1.1 The Information Revolution Comes to Medicine
After scientists had developed the first digital computers in the 1940s, society was told that these new machines would soon be serving routinely as memory devices, assisting with calculations and with information retrieval. Within the next decade, physicians and other health professionals had begun to hear about the dramatic effects that such technology would have on clinical practice. More than seven decades of remarkable progress in computing have followed those early predictions, and many of the original prophecies have come to pass. Stories regarding the "information revolution", "artificial intelligence", and "big data" fill our newspapers and popular magazines, and today's children show an uncanny ability to make use of computers (including their handheld mobile versions) as routine tools for study, communication, and entertainment. Similarly, clinical workstations have been available on hospital wards and in outpatient offices for decades, and in some settings have been supplanted by mobile tablets with wireless connectivity. Not long ago, the health care system was perceived as being slow to understand information technology and slow to exploit it for its unique practical and strategic functionalities. This is no longer the case. The enormous technological advances of the last four decades—personal computers and graphical interfaces, laptop machines, new methods for human-computer interaction, innovations in mass storage of data (both locally and in the "cloud"), mobile devices, personal health-monitoring devices, the Internet, wireless communications, social media, and more—have all combined to make use of computers by health workers and biomedical scientists part of today's routine. This new world is already upon us, but its greatest influence is yet to come as today's prominent innovations such as
electronic health records and decision-support software are further refined. This book will teach you about our present resources and accomplishments, and about gaps that need to be addressed in the years ahead.

When one considers today's penetration of computers and communication into our daily lives, it is remarkable that the first personal computers were introduced as recently as the late 1970s; local area networking has been available only since the 1980s; the World Wide Web dates only to the early 1990s; and smart phones, social networking, tablet computers, wearable devices, and wireless communication are even more recent. This dizzying rate of change, combined with equally pervasive and revolutionary changes in almost all international health care systems, makes it difficult for public-health planners and health-institutional managers to try to deal with both issues at once.

As new technologies have been introduced and adopted in health settings, unintended consequences have emerged, such as ransomware and other security challenges that can compromise the protection and privacy of patient data. Yet many observers now believe that rapid changes in both technology and health systems are inextricably related. We can see that planning for the new health care environments of the coming decades requires a deep understanding of the role that information technology is likely to play in those environments.

What might that future hold for the typical practicing clinician? As we discuss in detail in Chap. 14, no applied clinical computing topic is gaining more attention currently than is the issue of electronic health records (EHRs). Health care organizations have largely replaced their paper-based recording systems, recognizing that they need to have digital systems in place that create opportunities to facilitate patient care that is safe and effective, to answer questions that are crucially important for strategic planning, to support a better understanding of how they and their providers compare with other organizations in their local or regional competitive environment, and to support reporting to regulatory agencies.

In the past, administrative and financial data were the major elements required for planning, but in recent years comprehensive clinical data have also become important for institutional self-analysis and strategic planning. Furthermore, the inefficiencies and frustrations associated with the use of paper-based medical records are well accepted (Dick and Steen 1991 (Revised 1997)), especially when inadequate access to clinical information is one of the principal barriers that clinicians encounter when trying to increase their efficiency in order to meet productivity goals for their practices.

1.1.1 Integrated Access to Clinical Information

Encouraged by health information technology (HIT) vendors (and by the US government, as is discussed later), most health care institutions have or are developing integrated computer-based information-management environments. These underlie a clinical world in which computational tools assist not only with patient-care matters (e.g., reporting results of tests, allowing direct entry of orders or patient information by clinicians, facilitating access to transcribed reports, and in some cases supporting telemedicine applications or decision-support functions) but also with administrative and financial topics (e.g., tracking of patients within the hospital, managing materials and inventory, supporting personnel functions, and managing the payroll), with research (e.g., analyzing the outcomes associated with treatments and procedures, performing quality assurance, supporting clinical trials, and implementing various treatment protocols), with access to
scholarly information (e.g., accessing digital libraries, supporting bibliographic search, and providing access to drug information databases), and even with office automation (e.g., providing access to spreadsheets and document-management software). The key idea, however, is that at the heart of the evolving integrated environments lies an electronic health record that is intended to be accessible, confidential, secure, acceptable to clinicians and patients, and integrated with other types of useful information to assist in planning and problem solving.
oday’s Electronic Health T Record (EHR) Environment
1.1.2
The traditional paper-based medical record is now recognized as being woefully inadequate for meeting the needs of modern medicine. It arose in the nineteenth century as a highly personalized "lab notebook" that clinicians could use to record their observations and plans so that they could be reminded of pertinent details when they next saw the same patient. There were no regulatory requirements, no assumptions that the record would be used to support communication among varied providers of care, and few data or test results to fill up the record's pages. The record that met the needs of clinicians a century or so ago struggled mightily to adjust over the decades and to accommodate new requirements as health care and medicine changed. Today the inability of paper charts to serve the best interests of the patient, the clinician, and the health system is no longer questioned (see Chaps. 14 and 16). Most organizations have found it challenging (and expensive) to move to a paperless, electronic clinical record. This observation forces us to ask the following questions: "What is a health record in the modern world? Are the available products and systems well matched with the modern notions of a comprehensive health record? Do they meet the needs of individual users as well as the health systems themselves? Are they efficient, easy to use, and smoothly integrated into clinical workflow? How should our concept of the
comprehensive health record evolve in the future, as technology creates unprecedented opportunities for innovation?" The complexity associated with automating clinical-care records is best appreciated if one analyzes the processes associated with the creation and use of such records, rather than thinking of the record as a physical object (such as the traditional paper chart) that can be moved around as needed within the institution. For example, on the input side (Fig. 1.1), an electronic version of the paper chart requires the integration of processes for data capture and for merging information from diverse sources. The contents of the paper record were traditionally organized chronologically, often a severe limitation when a clinician sought a specific piece of information that could occur almost anywhere within the chart. To be useful, the electronic record system has to make it easy to access and display needed data, to analyze them, and to share them among colleagues and with secondary users of the record who are not involved in direct patient care (Fig. 1.2). Thus, the EHR, as an adaptation of the paper record, is best viewed not as an object, or a product, but rather as a set of processes that an organization puts into place, supported by technology (Fig. 1.3). Implementing electronic records is inherently a systems-integration task. It accordingly requires a custom-tailored implementation at each institution, given the differences in existing systems and practices that must be suitably integrated. Joint development and local adaptation are crucial, which implies that the institutions that purchase such systems must have local expertise that can oversee and facilitate an effective implementation process, including the elements of process re-engineering and cultural change that are inevitably involved. Experience has shown that clinicians are "horizontal" users of information technology (Greenes and Shortliffe 1990).
Rather than becoming "power users" of a narrowly defined software package, they tend to seek broad functionality across a wide variety of systems and resources. Thus, routine use of computers, and of EHRs, is most easily achieved when the computing environment offers a critical mass of functionality that makes the system both smoothly integrated with workflow and useful for essentially every patient encounter. The arguments for automating clinical-care records are summarized in Chaps. 2 and 14 and in the now-classic Institute of Medicine report on computer-based patient records (CPRs) (Dick and Steen 1991, revised 1997).1 One argument that warrants emphasis is the importance of the EHR in supporting clinical trials: experiments in which data from specific patient interactions are pooled and analyzed in order to learn about the safety and efficacy of new treatments or tests, and to gain insight into disease processes that are not otherwise well understood. Medical researchers were constrained in the past by clumsy methods for identifying patients who met inclusion criteria for clinical trials, as well as for acquiring the data needed for the trials; they generally relied on manual capture of information onto datasheets that were later transcribed into computer databases for statistical analysis (Fig. 1.4). The approach was labor-intensive, fraught with opportunities for error, and added to the high costs associated with randomized prospective research protocols. The use of EHRs has offered many advantages to those carrying out clinical research (see Chap. 27). Most obviously, it helps to eliminate the manual task of extracting data from charts or filling out specialized datasheets. The data needed for a study can often be derived directly from the EHR, thus making much of what is required for research data collection simply a by-product of routine clinical record keeping (Fig. 1.5). Other advantages accrue as well. For example, the record environment can help to ensure compliance with a research protocol, pointing out to a clinician when a patient is eligible for a study, or when the protocol for a study calls for a specific management plan given the currently available data about that patient. We are also seeing the development of novel authoring environments for clinical trial protocols that can help to ensure that the data elements needed for the trial are compatible with the local EHR's conventions for representing patient descriptors.

Biomedical Informatics: The Science and the Pragmatics
E. H. Shortliffe and M. F. Chiang

Fig. 1.1 Inputs to the clinical-care record. The traditional paper record was created by a variety of organizational processes that captured varying types of information (notes regarding direct encounters between health professionals and patients, laboratory or radiologic results, reports of telephone calls or prescriptions, and data obtained directly from patients). The paper record thus was a merged collection of such data, generally organized in chronological order.

Fig. 1.2 Outputs from the clinical-care record. Once information was collected in the traditional paper chart, it needed to be provided to a wide variety of potential users of the information that it contained. These users included health professionals and the patients themselves, as well as "secondary users" (represented here by the individuals in business suits) who had valid reasons for accessing the record but who were not involved with direct patient care. Numerous providers are typically involved in a patient's care, so the chart also served as a means for communicating among them. The traditional mechanisms for displaying, analyzing, and sharing information from such records resulted from a set of processes that often varied substantially across several patient-care settings and institutions.

1 The Institute of Medicine, part of the National Academy of Sciences, is now known as the National Academy of Medicine.
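The idea that protocol compliance and eligibility checking can ride on routinely captured EHR data can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration: the field names, the criteria structure, and the specific trial rules below are invented for the example, and real systems work against standardized EHR data models with far richer inclusion and exclusion logic.

```python
# Hypothetical sketch: screening a patient's EHR summary against simple
# clinical-trial criteria. All names and criteria here are invented for
# illustration; they are not drawn from any real trial or EHR product.

def is_eligible(patient: dict, criteria: dict) -> bool:
    """Return True if a patient's EHR summary satisfies the trial criteria."""
    if not (criteria["min_age"] <= patient["age"] <= criteria["max_age"]):
        return False                      # outside the age window
    if criteria["required_diagnosis"] not in patient["diagnoses"]:
        return False                      # inclusion diagnosis absent
    if any(drug in patient["medications"] for drug in criteria["excluded_drugs"]):
        return False                      # exclusion medication present
    return True

trial = {
    "min_age": 18,
    "max_age": 75,
    "required_diagnosis": "type 2 diabetes",
    "excluded_drugs": {"warfarin"},
}

patient = {
    "age": 54,
    "diagnoses": {"type 2 diabetes", "hypertension"},
    "medications": {"metformin"},
}

if is_eligible(patient, trial):
    # In a deployed system this would surface as a point-of-care reminder
    # inside the EHR rather than a printed message.
    print("Patient may be eligible for the trial; consider discussing enrollment.")
```

Because the check runs against data already captured during routine care, the screening itself becomes a by-product of record keeping, which is exactly the advantage the text describes.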
Note that Fig. 1.5 represents a study at a single institution, often for a limited subset of the patients who receive care there. Yet much research is carried out with very large numbers of patients, such as within a regional health care system, statewide, or nationally. Accordingly, research datasets can become very large, and analyzing across them introduces challenges related to data exchange and to the standardization of the ways in which individual data elements are defined, identified, or stored (see Chap. 8). Retrospective studies of data collected in the past typically cannot assume a prior standardization of the elements that will be needed, thereby requiring analyses that infer relationships among specific descriptors that different institutions represent in different ways. When the number of data elements is large, and the population being studied is also vast, the challenges are often described as "big data" analytics (James et al. 2013).

Fig. 1.3 Complex processes demanded of the record. As shown in Figs. 1.1 and 1.2, the paper chart evolved to become the incarnation of a complex set of organizational processes, which both gathered information to be shared and then distributed that information to those who had valid reasons for accessing it. Yet paper-based documents were severely limited in meeting the diverse requirements for data collection and information access that are implied by this diagram. These deficiencies accounted in large part for the effort to create today's electronic health records.

Another theme in the changing world of health care is the increasing investment in the creation of standard order sets, clinical guidelines, and clinical pathways (see Chap. 24), generally in an effort to reduce practice variability and to develop consensus approaches to recurring management problems. Several government and professional organizations, as well as individual provider groups, have invested heavily in guideline development, often putting an emphasis on using clear evidence from the literature, rather than expert opinion alone, as the basis for the advice. Despite the success in creating such evidence-based guidelines, there is a growing recognition that we need better methods for delivering the decision logic to the point of care. Guidelines
that appear in monographs or journal articles tend to sit on shelves, unavailable when the knowledge they contain would be most valuable to practitioners. Computer-based tools for implementing such guidelines, and for integrating them with the EHR, offer a means for making high-quality advice available in the routine clinical setting. Many organizations are accordingly integrating decision-support tools with their EHR systems (see Chaps. 14 and 24), and there are highly visible commercial efforts underway to provide computer-based diagnostic decision support to practitioners.2

Fig. 1.4 Traditional data collection for clinical trials. [The figure diagrams the flow from the medical record, through manually completed datasheets, into a computer database shaped by the clinical trial design (definition of data elements, definition of eligibility, process descriptions, stopping criteria, and other details of the trial), and on to analyses and results.] Until the introduction of EHRs and similar systems, the gathering of research data for clinical studies was typically a manual task. Physicians who cared for patients enrolled in trials, or their research assistants, would be asked to fill out special datasheets for later transcription into computer databases. Alternatively, data managers were often hired to abstract the relevant data from the paper chart. The trials were generally designed to define the data elements that were required and the methods for analysis, but it was common for the process of collecting those data in a structured format to be left to manual processes at the point of patient care.

Fig. 1.5 Role of electronic health records (EHRs) in supporting clinical trials. [The figure diagrams the flow from the EHR, through a clinical data repository, into a clinical trial database shaped by the clinical trial design, and on to analyses and results.] With the introduction of EHR systems, the collection of much of the research data for clinical trials can become a by-product of the routine care of patients. Research data may be analyzed directly from the clinical data repository, or a secondary research database may be created by downloading information from the online patient records. The manual processes in Fig. 1.4 are thereby largely eliminated. In addition, the interaction of the physician with the EHR permits two-way communication, which can greatly improve the quality and efficiency of the clinical trial. Physicians can be reminded when their patients are eligible for an experimental protocol, and the computer system can also remind the clinicians of the rules that are defined by the research protocol, thereby increasing compliance with the experimental plan.

There are at least five major issues that have consistently constrained our efforts to build effective EHRs: (1) the need for standards in the area of clinical terminology; (2) concerns regarding data privacy, confidentiality, and security; (3) challenges in data entry by physicians; (4) difficulties associated with the integration of record systems with other information resources in the health care setting; and (5) designing and delivering systems that are efficient, acceptable to clinicians, and intuitive to use. The first of these issues is discussed in detail in Chap. 7, and privacy is one of the central topics in Chap. 12. Issues of direct data entry by clinicians are discussed in Chaps. 2 and 14, and throughout many other chapters as well. Chapter 15 examines the fourth topic, focusing on recent trends in networked data integration, and offers solutions for the ways in which the EHR can be better joined with other relevant information resources and clinical processes, especially within communities where patients may have records with multiple providers and health care systems (Yasnoff et al. 2013). Finally, issues of the interface between computers and clinicians (or other users), with a cognitive emphasis, are the subject of Chap. 5.

2 https://ehrintelligence.com/news/top-clinicaldecision-support-system-cdss-companies-by-ambulatory-inpatient; https://www.ibm.com/watson/health/ (Accessed 5/29/19).
1.1.3 Anticipating the Future of Electronic Health Records
One of the first instincts of software developers is to create an electronic version of an object or process from the physical world. Some familiar notion provides the inspiration for a new software product. Once the software version has been developed, however, human ingenuity and creativity often lead to an evolution that extends the software version far beyond what was initially contemplated. The computer can thus facilitate paradigm shifts in how we think about such familiar concepts. Consider, for example, the remarkable difference between today’s office automation software and the typewriter, which was the original inspiration for the development of “word processors”. Although the early word processors were designed largely to allow users to avoid retyping papers each time a minor change was made to a document, the
document-management software of today bears little resemblance to a typewriter. Consider all the powerful desktop-publishing facilities, integration of figures, spelling correction, grammar aids, "publishing" online, collaboration on individual documents by multiple users, etc. Similarly, today's spreadsheet programs bear little resemblance to the tables of numbers that we once created on graph paper. To take an example from the financial world, consider automatic teller machines (ATMs) and their facilitation of today's worldwide banking in ways that were never contemplated when the industry depended on human bank tellers. It is accordingly logical to ask what the health record will become after it has been effectively implemented on computer systems and new opportunities for its enhancement become increasingly clear to us. It is clear that EHRs a decade from now will be remarkably different from the antiquated paper folders that used to dominate our health care environments. We might similarly predict that the state of today's EHR is roughly comparable to the status of commercial aviation in the 1930s. By that time air travel had progressed substantially from the days of the Wright Brothers, and air travel was becoming common. But 1930s air travel seems archaic by modern standards, and it is logical to assume that today's EHRs, albeit much better than both paper records and the early computer-based systems of the 1960s and 1970s, will be greatly improved and further modernized in the decades ahead. If people had failed to use the early airplanes for travel, the quality and efficiency of airplanes and air travel would not have improved as they have. A similar point can be made about the importance of committing to the use of EHRs today, even though we know that they need to be much better in the future.
We must also commit to assuring that those improvements are made, which suggests a dynamic interaction and interdependency among the researchers who address limitations in EHRs and their underlying methods and philosophy, the EHR companies that currently exist or will arise in the future, and the users who identify requirements and areas for improvement. These companies must look to creative researchers, both within their own companies and in academia, who will forge the changes that will encourage EHR users to embrace and appreciate the technology much more than they often do today.

1.2 Communications Technology and Health Data Integration

An obvious opportunity for changing the role and functionality of clinical-care records in the digital age is the power and ubiquity of the Internet. The Internet began in 1968 as a U.S. research activity funded by the Advanced Research Projects Agency (ARPA) of the Department of Defense. Initially known as the ARPANET, the network began as a novel mechanism for allowing a handful of defense-related mainframe computers, located mostly at academic institutions or in the research facilities of military contractors, to share data files with each other and to provide remote access to computing power at other locations. The notion of electronic mail arose soon thereafter, and machine-to-machine electronic mail exchanges quickly became a major component of the network's traffic. As the technology matured, its value for nonmilitary research activities was recognized, and by 1973 the first medically related research computer had been added to the network (Shortliffe 1998a, 2000).

During the 1980s, the technology began to be developed in other parts of the world, and the National Science Foundation took over the task of running the principal high-speed backbone network in the United States. Hospitals, mostly academic centers, began to be connected to what had by then become known as the Internet, and in a major policy move it was decided to allow commercial organizations to join the network as well. By April 1995, the Internet in the United States had become a fully commercialized operation, no longer depending on the U.S. government to support even the major backbone connections. Today, the Internet is ubiquitous, worldwide, accessible through mobile wireless devices, and has provided the invisible but mandatory infrastructure for social, political, financial, scientific, corporate, and entertainment ventures. Many people point to the Internet as a superb example of the facilitating role of federal investment in promoting innovative technologies. The Internet is a major societal force that arguably would never have been created if the research and development, plus the coordinating activities, had been left to the private sector.

The explosive growth of the Internet did not occur until the late 1990s, when the World Wide Web (which had been conceived initially by the physics community as a way of using the Internet to share preprints with photographs and diagrams among researchers) was introduced and popularized. Navigating the Web is highly intuitive, requires no special training, and provides a mechanism for access to multimedia information that accounts for its remarkable growth as a worldwide phenomenon. It is also accessible by essentially all digital devices (computers, tablets, smart phones, and a plethora of personal monitors and "smart home" tools), which is a tribute to its design and its compatibility with newer networking technologies, such as Bluetooth and Wi-Fi.

The societal impact of this communications phenomenon cannot be overstated, especially given the international connectivity that has grown phenomenally in the past two decades. Countries that once were isolated from information that was important to citizens, ranging from consumers to scientists to those interested in political issues, are now finding new options for bringing timely information to the desktop machines and mobile devices of individuals with an Internet connection. There has in turn been a major upheaval in the telecommunications industry, with companies that used to be in different businesses (e.g., cable television, Internet services, and telephone) now finding that their activities and technologies have merged. In the United States, legislation was passed in 1996 to allow new competition to develop and new industries to emerge. We have subsequently seen the merging of technologies such as cable television, telephone, networking, and satellite communications. High-speed lines into homes and offices are widely available, wireless networking is ubiquitous, and inexpensive mechanisms for connecting to the Internet without using conventional computers (e.g., using cell phones or set-top boxes) have also emerged. The impact on everyone has been great; it is affecting the way that individuals seek health-related information while also enhancing how patients can gain access to their health care providers and to their clinical data.

The Internet has also exhibited unintended consequences, especially in the world of social media, which has created opportunities for promoting political unrest, social shaming, and dissemination of falsehoods. In the world of health care, the Internet has created opportunities for attacks on personal privacy, even while facilitating socially valuable exchanges of data among institutions and individuals. Many of these practical, legal, and ethical challenges are the subject of Chap. 12. Just as individual hospitals and health care systems have come to appreciate the importance of integrating information from multiple clinical and administrative systems within their organizations (see Chap. 16), health planners and governments now appreciate the need to develop integrated information resources that combine clinical and health data from multiple institutions within regions, and ultimately nationally (see Chaps. 15 and 18). As you will see, the Internet and digital communications have therefore become a major part of modern medicine and health.
Although this topic recurs in essentially every chapter in this book, we introduce it in the following sections because of its importance to modern technical issues and policy directions.
1.2.1 A Model of Integrated Disease Surveillance3
To emphasize the role that the nation's networking infrastructure is playing in integrating clinical data and enhancing care delivery, consider one example of how disease surveillance, prevention, and care are increasingly being influenced by information and communications technology. The goal is to create an information-management infrastructure that will allow all clinicians, regardless of practice setting (hospitals, emergency rooms, small offices, community clinics, military bases, multispecialty groups, etc.), to use EHRs in their practices both to assist in patient care and to provide patients with counsel on illness prevention. The full impact of this use of electronic resources will occur when data from all such records are pooled in regional and national registries or surveillance databases (Fig. 1.6), mediated through secure connectivity with the Internet. The challenge, of course, is to find a way to integrate data from such diverse practice settings, especially since there are multiple vendors and system developers active in the marketplace, competing to provide value-added capabilities that will excite and attract the practitioners for whom their EHR product is intended. The need to pool and integrate clinical data from such diverse resources and systems highlights the practical issues that must be addressed in achieving such functionality. Interestingly, most of the barriers are logistical, political, and financial rather than technical in nature:

- Encryption of data: Concerns regarding privacy and data protection require that Internet transmission of clinical information occur only if those data are encrypted, with an established mechanism for identifying and authenticating individuals before they are allowed to decrypt the information for surveillance or research use.
3 This section is adapted from a discussion that originally appeared in (Shortliffe and Sondik 2004).
Fig. 1.6 A future vision of surveillance databases, in which clinical data are pooled in regional and national registries or repositories through a process of data submission that occurs over the Internet (with attention to the privacy and security concerns discussed in the text). [The figure shows multiple providers, each using an EHR from a potentially different vendor, connected through the Internet to regional and national registries and surveillance databases, which in turn feed protocols and guidelines for standards of care back to the providers.] When information is effectively gathered, pooled, and analyzed, there are significant opportunities for feeding back the results of derived insights to practitioners at the point of care. Thus the arrows indicate a bi-directional process. See also Chap. 15.
- Protection of stored clinical data: Even when data are stored within an institution, there are opportunities for attack over the Internet, which can be an affront to patient privacy or, equally seriously, an opportunity for installing malware within an institution, resulting in rogue uses of data or even a lockout of valid users from crucially important functions or data. Cybersecurity has accordingly become a major topic of concern for health care institutions and other practice settings.4
- HIPAA-compliant policies: The privacy and security rules that resulted from the 1996 Health Insurance Portability and Accountability Act (HIPAA) do not prohibit the pooling and use of such data, but they do lay down policy rules and technical security practices that must be part of the solution in achieving the vision we are discussing here.
- Standards for data transmission and sharing: Sharing data over networks requires that all developers of EHRs and clinical databases adopt a single set of standards for communicating and exchanging information. The major enabling standard for such sharing, Health Level 7 (HL7), was introduced decades ago and, after years of work, has been uniformly adopted, implemented, and utilized. However, a uniform "envelope" for digital communication, such as HL7, does not assure that the contents of such messages will be understood or standardized. The pooling and integration of data requires the adoption of standards for clinical terminology and potentially for the schemas used to store clinical information in databases. Thus, true interoperability of such systems requires additional standards to be adopted, many of which are discussed in Chap. 7.
- Quality control and error checking: Any system for accumulating, analyzing, and utilizing clinical data from diverse sources must be complemented by a rigorous approach to quality control and error checking. It is crucial that users have faith in the accuracy and comprehensiveness of the data that are collected in such repositories, because policies, guidelines, and a variety of metrics can be derived over time from such information.
4 https://www.theverge.com/2019/4/4/18293817/cybersecurity-hospitals-health-care-scan-simulation (Accessed 5/29/19).
- Regional and national registries and surveillance databases: Any adoption of the model in Fig. 1.6 will require mechanisms for creating, funding, and maintaining the regional and national databases or registries that are involved (see Chap. 15). The growing amount of data that can be gathered in this way is naturally viewed as part of the "big data" problem that has characterized modern data science. The role of state and federal governments in gathering and curating such databases will need to be clarified, and the political issues addressed (including the concerns of some members of the populace that any government role in managing or analyzing their health data may have societal repercussions that threaten individual liberties, employability, and the like).
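The distinction drawn above between a shared messaging "envelope" and standardized message contents can be made concrete with a small sketch. The toy parser below splits an HL7 v2-style message into segments and fields; the message itself, the system names, and the local code "GLU-LOCAL" are fabricated for illustration (real HL7 uses carriage returns as segment separators and production systems use validated HL7 libraries, not string splitting):

```python
# Toy illustration of HL7 v2's pipe-delimited structure. The "envelope"
# (segments and fields) can be parsed generically for any conformant sender,
# but the codes carried inside remain site-specific without terminology
# standards. Message content below is entirely fabricated.

def parse_hl7_v2(message: str) -> dict:
    """Split an HL7 v2-style message into lists of fields, keyed by segment ID."""
    segments = {}
    for line in message.strip().split("\n"):   # newlines here for readability
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

msg = (
    "MSH|^~\\&|LAB_SYS|GENERAL_HOSPITAL|REG_SYS|STATE_REGISTRY|202001011200||ORU^R01|00001|P|2.5\n"
    "PID|1||123456||DOE^JANE\n"
    "OBX|1|NM|GLU-LOCAL^Glucose^L||182|mg/dL|70-110|H\n"
)

parsed = parse_hl7_v2(msg)
print(parsed["MSH"][0][8])   # the message type is recoverable generically
print(parsed["OBX"][0][3])   # but this observation ID is a local lab code
```

The generic split recovers the structure from any conformant sender, yet a registry still cannot interpret the local code "GLU-LOCAL" without a shared clinical terminology (such as LOINC for laboratory observations), which is why the terminology standards discussed in Chap. 7 matter beyond the messaging envelope itself.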
With the establishment of registries and surveillance databases, and a robust system of Internet integration with EHRs, summary information can flow back to providers to enhance their decision making at the point of care (Fig. 1.6). This assumes standards that allow such information to be integrated into the vendor-supplied products that clinicians use in their practice settings. These may be EHRs or their order-entry components, which clinicians use to specify the actions that they want taken for the treatment or management of their patients (see Chaps. 14 and 16). Furthermore, as shown in Fig. 1.6, the databases can help to support the creation of evidence-based guidelines, or clinical research protocols, which can be delivered to practitioners through the feedback process. Thus one should envision a day when clinicians, at the point of care, will receive integrated, non-dogmatic, supportive information regarding:

- Recommended steps for health promotion and disease prevention
- Detection of syndromes or problems, either in their community or more widely
- Trends and patterns of public health importance, a capability emphasized by the need for rapidly changing data on cases and deaths during the COVID-19 pandemic in 2020
- Clinical guidelines, adapted for execution and integration into patient-specific decision support rather than simply provided as text documents
- Opportunities for distributed (community-based) clinical research, whereby patients are enrolled in clinical trials and protocol guidelines are in turn integrated with the clinicians' EHR to support protocol-compliant management of enrolled patients

1.2.2 The Goal: A Learning Health System
We have been stressing the cyclical role of information: its capture, organization, interpretation, and ultimate use. You can easily understand the small cycle that is implied: patient-specific data and plans are entered into an EHR and subsequently made available to the same practitioner or to others who are involved in that patient's care (Fig. 1.7). Although this view is a powerful contributor to improved data management in the care of patients, it fails to include a larger view of the societal value of the information that is contained in clinical-care records. In fact, such straightforward use of EHRs for direct patient care would not have met some of the requirements that the US government specified after 2009 when determining eligibility for payment of incentives to clinicians or hospitals who implemented EHRs (see the discussion of the government HITECH program in Sect. 1.3).

Fig. 1.7 There is a limited view of the role of EHRs that sees them as intended largely to support the ongoing care of the patient whose clinical data are stored in the record. [The figure shows a simple cycle: providers caring for patients record patient information into electronic health records and later access it there, drawing also on the provider's knowledge and advice from others.]

Fig. 1.8 The ultimate goal is to create a cycle of information flow, whereby data from local distributed electronic health records (EHRs) and their associated clinical datasets are routinely and effortlessly submitted to registries and research databases. The resulting new knowledge then can feed back to practitioners at the point of care, using a variety of computer-supported decision-support delivery mechanisms. This cycle of new knowledge, driven by experience and fed back to clinicians, has been dubbed a "learning health system". [The cycle links electronic health records; providers caring for patients; information, decision-support, and order-entry systems; pooled clinical data; regional and national public health and disease registries; biomedical and clinical research; standards for prevention and treatment; and the creation of protocols, guidelines, and educational materials.]

Consider, instead, an expanded view of the health surveillance model introduced in Sect. 1.2.1 (Fig. 1.8). Beginning at the left of the diagram, clinicians caring for patients use electronic health records, both to record their observations and to gain access to information about the patient. Information from these records is then stored in local patient-care clinical databases and forwarded automatically to regional and national registries, as well as to research databases that can support retrospective studies (see Chap. 15) or formal institutional or community-based clinical trials (see Chap. 27). The analyzed information from institutional datasets, registries, and research studies can in turn be used to develop standards for prevention and treatment, with major guidance from biomedical research. Researchers can draw information either directly from the health records or from the pooled data in registries. The standards
for treatment in turn can be translated into protocols, guidelines, and educational materials. This new knowledge and decision-support functionality can then be delivered over the network back to the clinicians, integrated seamlessly with EHRs and order-entry systems so that the information informs patient care. This notion of a system that allows us to learn from what we do, unlocking the experience that has traditionally been stored in unusable form in paper charts, is gaining wide attention now that we can envision an interconnected community of clinicians and institutions building digital data resources using EHRs. The concept has been dubbed a learning health system and is an ongoing subject of study by the National Academy of Medicine (Daley 2013), which has published a series of reports on the topic.5 It is also the organizing conceptual framework for a
5. https://nam.edu/programs/value-science-driven-health-care/learning-health-system-series/ (Accessed 05/29/19)
17 Biomedical Informatics: The Science and the Pragmatics
[Fig. 1.9 diagram: the learning health system cycle of Fig. 1.8 (providers caring for patients; electronic health records; pooled clinical data; regional and national public health and disease registries; biomedical and clinical research; standards for prevention and treatment; creation of protocols, guidelines, and educational materials; information, decision-support, and order-entry systems), with additions: "big data" from massive data sets, "big data" from monitored behaviors, social media, and personal health devices]

Fig. 1.9 Today the learning health system is increasingly embracing new forms of massive health-related data, often from outside the clinical care setting and derived from population activities that reflect individuals' health, activities, and attitudes
recently created department at the University of Michigan Medical School6 and for a new scientific journal.7 Although the learning health system concept of Fig. 1.8 may at first seem expansive and all-inclusive, in recent years we have learned that there are other important inputs to the health care environment, with significant implications for what we can learn by analyzing what both patients and healthy individuals do. Some of these data sources are immense and are in line with the recent interest in "big data" analytics (Fig. 1.9). Consider, for example, the analysis of huge datasets associated with full human genome specifications for individuals and populations. Another approach for gathering massive amounts of relevant health-related data is to
monitor the behavior of individuals as they use online information resources, searching for health-related information. Social media exchanges (e.g., Twitter, Facebook) have also been used to extract health-related information, such as complaints that suggest early stages of communicable diseases or expressed attitudes towards diseases and treatment. The explosive adoption of health monitoring devices (e.g., step counters, exercise analyzers, cardiac or sleep monitors) has also offered a useful source of large-scale information that is only beginning to be merged with other data in our learning health system.
6. https://medicine.umich.edu/dept/learning-health-sciences (Accessed 05/03/2020)
7. https://onlinelibrary.wiley.com/journal/23796146 (Accessed 05/03/2020)
1.2.3 Implications of the Internet for Patients
With the penetration of the Internet, patients and healthy individuals alike have turned to the Web for health information. It is a rare North American physician who has not
E. H. Shortliffe and M. F. Chiang
encountered a patient who comes to an appointment armed with a question, or a stack of printouts, prompted by medically related searches on the net. The companies that provide search engines for the Internet report that health-related sites are among the most popular ones being explored by consumers. As a result, physicians and other care providers have learned that they must be prepared to deal with information that patients discover on the net and bring with them when they seek care from clinicians. Some of the information is timely and excellent; in this sense, physicians can often learn about innovations from their patients and need to be open to the kinds of questions that this enhanced access to information will generate from patients in their practices. On the other hand, much of the health information on the Web lacks peer review or is purely anecdotal. People who lack medical training can be misled by such information, just as they have been poorly served in the past by printed information in books and magazines dealing with fad treatments from anecdotal sources. This also creates challenges for health care providers, who often feel economic pressure to handle more issues in less time. In addition, some sites provide personalized advice, sometimes for a fee, with all the attendant concerns about the quality of the suggestions and the ability to give valid advice based on an e-mail or Web-based interaction. In a positive light, communications technologies offer clinicians creative ways to interact with their patients and to provide higher quality care. Years ago, medicine adopted the telephone as a standard vehicle for facilitating patient care, and we now take this kind of interaction with patients for granted. If we extend the audio channel to include our visual sense as well, typically relying on the Internet as our communication mechanism, the notion of telemedicine emerges (see Chap. 20).
This notion of "medicine at a distance" arose early in the twentieth century (see Fig. 1.10), but the technology was too limited for much penetration of the idea beyond telephone conversations until the last 30–40 years. The use of telemedicine has subsequently grown rapidly, and there are settings in which it is already proving to be successful and cost-effective (e.g., rural care, international medicine, teleradiology, and video-based care of patients in prisons). Similarly, there are now a large number of apps (designed for smartphones, tablets, or desktop machines) that offer specialized medical care or advice or assist with health data management and communication with providers and support groups (see Chaps. 11 and 20).
1.2.4 Requirements for Achieving the Vision
Efforts that continue to push the state of the art in Internet technology all have significant implications for the future of health care delivery in general and of EHRs and their integration in particular (Shortliffe 1998b, 2000). But in addition to increasing speed, reliability, security, and availability of the Internet, there are many other areas that need attention if the vision of a learning health system is to be achieved.

1.2.4.1 Education and Training
There is a difference between computer literacy (familiarity with computers and their routine uses in our society) and knowledge of the role that computing and communications technology can and should play in our health system. We need to do a better job of training future clinicians in the latter area; otherwise, we will leave them poorly equipped for the challenges and opportunities they will face in the rapidly changing practice environments that surround them (Shortliffe 2010). Not only do they need to feel comfortable with the technology itself, but they also need to understand the profound effect that it has had on the practice of medicine, with many more changes to come. Medicine and other health professions are being asked to adapt in ways that were not envisioned even a decade or two ago. Not all individuals embrace such change, but younger clinicians, who have grown up with technology in almost all aspects of their lives, have high
Fig. 1.10 "The Radio Doctor": long before television was invented, creative observers were suggesting how doctors and patients could communicate using advanced technologies. This 1924 example is from the cover of a popular magazine and envisions video enhancements to radio. (Source: "Radio News" 1924)
expectations for how digital systems and tools should enhance their professional experience. Perhaps even more challenging, the assumptions they have made about the field they have entered may no longer be valid in the coming years, as some skills cease to be required and new requirements emerge that differ dramatically from what health professionals have had to know in the past.
Furthermore, in addition to the implications for education of health professionals about computer-related topics, much of the future vision we have proposed here can be achieved only if educational institutions produce a cadre of talented individuals who are highly skilled in computing and communications technology but also have a deep understanding of the biomedical milieu and of the needs of practitioners and other health workers. Computer science training alone is not adequate. Fortunately, there are increasing numbers of formal training programs in what has become known as biomedical informatics (see Sect. 1.4) that provide custom-tailored educational opportunities. Many of the trainees are life science researchers, physicians, nurses, pharmacists, and other health professionals who see the career opportunities and challenges at the intersections of biomedicine, information science, computer science, decision science, data science, cognitive science, and communications technologies. As has been clear for three decades (Greenes and Shortliffe 1990), however, the demand for such individuals far outstrips the supply, both for academic and industrial career pathways.8,9 We need more training programs,10 expansion of those that already exist, plus support for junior faculty in health science schools who may wish to pursue additional training in this area.

1.2.4.2 Organizational and Management Change

Second, as implied above, there needs to be a greater understanding among health care leaders regarding the role of specialized multidisciplinary expertise in successful clinical systems implementation. The health care system provides some of the most complex organizational structures in society (Begun et al. 2003), and it is simplistic to assume that off-the-shelf products will be smoothly introduced into a new institution without major analysis, redesign, and cooperative joint-development efforts. Underinvestment and a failure to understand the requirements for process reengineering as part of software implementation, as well as problems with technical leadership and planning, account for many of the frustrating experiences that health care organizations report in their efforts to use computers more effectively in support of patient care and provider productivity.

The notion of a learning health system described previously is meant to motivate your enthusiasm for what lies ahead and to suggest the topics that need to be addressed in a book such as this one. Essentially all of the following chapters touch on some aspect of the vision of integrated systems that extend beyond single institutions. Before embarking on these topics, however, we must emphasize two points. First, the cyclical creation of new knowledge in a learning health care system will become reality only if individual hospitals, academic medical centers, and national coordinating bodies work together to provide the standards, infrastructure, and resources that are necessary. No individual system developer, vendor, or administrator can mandate the standards for connectivity, data pooling, and data sharing implied by a learning health care system. A national initiative of cooperative planning and implementation for computing and communications resources within and among institutions and clinics is required before practitioners will have routine access to the information that they need (see Chap. 15). A major federal incentive program for EHR implementation was a first step in this direction (see Sect. 1.3). The criteria that are required for successful EHR implementation are sensitive to the need for data integration, public-health support, and a learning health system.

Second, although our presentation of the learning health system notion has focused on the clinician's view of integrated information access, other workers in the field have similar needs that can be addressed in similar ways. The academic research community has already developed and made use of much of the technology that needs to be coalesced if the clinical user is to have similar access to data and information. There is also the

8. https://www.hcinnovationgroup.com/policy-value-based-care/staffing-professional-development/news/13024360/report-health-informatics-labor-market-lags-behind-demand-for-workers (Accessed 5/30/2019); https://www.bestvalueschools.com/faq/job-outlook-health-informatics-graduates/ (Accessed 5/30/2019).
9. https://www.burning-glass.com/wp-content/uploads/BG-Health_Informatics_2014.pdf (Accessed 5/30/2019).
10. A directory of some existing training programs is available at http://www.amia.org/education/programs-and-courses (Accessed 5/30/19).
patient’s view, which must be considered in the notion of patient-centered health care that is now broadly accepted and encouraged (Ozkaynak et al. 2013).
1.3 The US Government Steps In
During the early decades of the evolution of clinical information systems for use in hospitals, patient care, and public health, the major role of government was in supporting the research enterprise as new methods were developed, tested, and formally evaluated. The topic was seldom mentioned by the nation's leaders, however, even during the 1990s when the White House was viewed as being especially tech savvy. It was accordingly remarkable when, in his State of the Union address in 2004 (and in each of the following years of his administration), President Bush called for universal implementation of electronic health records within 10 years. The Secretary of Health and Human Services, Tommy Thompson, was similarly supportive and, in May 2004, created an entity intended to support the expansion of the use of EHRs: the Office of the National Coordinator for Health Information Technology (initially referred to by the full acronym ONCHIT, but later shortened simply to ONC). ONC initially had a limited budget, although it served as a convening body for EHR-related planning efforts and the National Health Information Infrastructure (see Chaps. 14, 15, and 29). The topic of EHRs subsequently became a talking point for both major candidates during the 2008 Presidential election, with strong bipartisan support. Then, in early 2009, Congress enacted the American Recovery and Reinvestment Act (ARRA), also known as the economic "Stimulus Bill". One portion of that legislation was known as the Health Information Technology for Economic and Clinical Health (HITECH) Act. It was this portion of the bill that provided significant fiscal incentives for health systems, hospitals, and providers to implement EHRs in their practices, with eventual financial penalties for lack of implementation. Such
payments were made available, however, only when eligible organizations or individual practitioners implemented EHRs that were "certified" as meeting minimal standards and when they could document that they were making "meaningful use" of those systems. You will see references to such certification and meaningful-use criteria in other chapters in this volume, which also offers a discussion of HIT policy and the federal government in Chap. 29. Although the process of EHR implementation is approaching completion in the US, both in health systems and practices, the current status is largely due to this legislative program: spurred by the federal stimulus package, large numbers of hospitals, systems, and practitioners invested in EHRs and incorporated them into their practices. Furthermore, the demand for workers skilled in health information technology grew much more rapidly than did the general job market, even within health care (Fig. 1.11). It is a remarkable example of how government policy and investment can stimulate major transitions in systems such as health care, where many observers had previously felt that progress had been unacceptably slow (Shortliffe 2005).
1.4 Defining Biomedical Informatics and Related Disciplines
With the previous sections of this chapter as background, let us now consider the scientific discipline that is the subject of this volume and has led to the development of many of the functionalities that need to be brought together in the integrated biomedical-computing environment of the future. The remainder of this chapter deals with biomedical informatics as a field and with biomedical and health information as a subject of study. It provides additional background needed to understand many of the subsequent chapters in this book. Reference to the use of computers in biomedicine evokes different images depending on the nature of one's involvement in the field. To a hospital administrator, it might suggest
[Fig. 1.11 chart: percent change in health IT job postings per month (normalized to February 2009), January 2007 through January 2012, comparing health IT jobs (ending near +199%), health care jobs (+57%), and all jobs (+52%), with the HITECH Act marked at February 2009]

Fig. 1.11 Impact of the HITECH Act on health information technology (IT) employment. Percent change in online health IT job postings per month for the first 3 years, relative to health care jobs and all jobs, normalized to February 2009 when ARRA passed. (Source: ONC analysis of data from O'Reilly Job Data Mart, ONC Data Brief, No. 2, May 2012 (https://www.healthit.gov/sites/default/files/pdf/0512_ONCDataBrief2_JobPostings.pdf (Accessed 5/6/2019)))
the maintenance of clinical-care records using computers; to a decision scientist, it might mean the assistance by computers in disease diagnosis; to a basic scientist, it might mean the use of computers for maintaining, retrieving, and analyzing gene-sequencing information. Many physicians immediately think of office-practice tools for tasks such as patient billing or appointment scheduling, and of electronic health record systems for clinical documentation. Nurses often think of computer-based tools for charting the care that they deliver, or decision-support tools that assist in applying the most current patient-care guidelines. The field includes study of all these activities and a great many others too. More importantly, it includes the consideration of various external factors that affect the biomedical setting. Unless you keep in mind these surrounding factors, it may be
difficult to understand how biomedical computing can help us to tie together the diverse aspects of health care and its delivery. To achieve a unified perspective, we might consider four related topics: (1) the concept of biomedical information (why it is important in biological research and clinical practice and why we might want to use computers to process it); (2) the structural features of medicine, including all those subtopics to which computers might be applied; (3) the importance of evidence-based knowledge of biomedical and health topics, including its derivation and proper management and use; and (4) the applications of computers and communication methods in biomedicine and the scientific issues that underlie such efforts. We mention the first two topics briefly in this and the next chapter, and we provide references in the Suggested Readings section for readers who
wish to learn more. The third topic, knowledge to support effective decision making for human health, is intrinsic to this book and occurs in various forms in essentially every chapter. The fourth topic, however, is the principal subject of this book. Computers have captured the imagination (and attention) of our society. Today's younger individuals have grown up in a world in which computers are ubiquitous and useful. Because the computer as a machine is exciting, people may pay a disproportionate amount of attention to it as such, at the expense of considering what the computer can do given the numbers, concepts, ideas, and cognitive underpinnings of fields such as medicine, health, and biomedical research. Computer scientists, philosophers, psychologists, and other scholars increasingly consider such matters as the nature of information and knowledge and how human beings process such concepts. These investigations have been given a sense of timeliness (if not urgency) by the simple existence of the computer. The cognitive activities of clinicians in practice probably have received more attention over the past three or four decades than in all previous history (see Chap. 4). Again, the existence of the computer and the possibilities of its extending a clinician's cognitive powers have motivated many of these studies. To develop computer-based tools to assist with decisions, we must understand more clearly such human processes as diagnosis, therapy planning, decision making, and problem solving in medicine. We must also understand how personal and cultural beliefs affect the way in which information is interpreted and decisions are ultimately made.
1.4.1 Terminology
Although, starting in the 1960s, a growing number of individuals conducting serious biomedical research or undertaking clinical practice had access to a computer system, there was initial uncertainty about what name should be used for the biomedical application of computer science concepts. The name computer science was itself new in 1960 and was
only vaguely defined. Even today, the term computer science is used more as a matter of convention than as an explanation of the field’s scientific content. In the 1970s we began to use the phrase medical computer science to refer to the subdivision of computer science that applies the methods of the larger field to medical topics. As you will see, however, medicine has provided a rich area for computer science research, and several basic computing insights and methodologies have been derived from applied medical-computing research. The term information science, which is occasionally used in conjunction with computer science, originated in the field of library science and is used to refer, somewhat generally, to the broad range of issues related to the management of both paper-based and electronically stored information. Much of what information science originally set out to be is now drawing evolving interest under the name cognitive science. Information theory, in contrast, was first developed by scientists concerned about the physics of communication; it has evolved into what may be viewed as a branch of mathematics. The results scientists have obtained with information theory have illuminated many processes in communications technology, but they have had little effect on our understanding of human information processing. The terms biomedical computing or biocomputation have been used for a number of years. They are non-descriptive and neutral, implying only that computers are employed for some purpose in biology or medicine. They are often associated with bioengineering applications of computers, however, in which the devices are viewed more as tools for a bioengineering application than as a primary focus of research. In the 1970s, inspired by the French term for computer science (informatique), the English-speaking community began to use the term medical informatics. 
Those in the field were attracted by the word's emphasis on information, which they saw as more central to the field than the computer itself, and it gained momentum as a term for the discipline, especially in Europe, during the 1980s. The term is broader than medical computing (it includes such topics as medical statistics, record keeping, and the study of the nature of medical information itself) and deemphasizes the computer while focusing instead on the nature of the field to which computations are applied. Because the term informatics became widely accepted in the United States only in the late 1980s, medical information science was also used earlier in North America; this term, however, may be confused with library science, and it does not capture the broader implications of the European term. As a result, the name medical informatics appeared by the late 1980s to have become the preferred term, even in the United States. Indeed, this is the name of the field that we used in the first two editions of this textbook (published in 1990 and 2000), and it is still sometimes used in professional, industrial, and academic settings. However, many observers expressed concern that the adjective "medical" is too focused on physicians and disease, failing to appreciate the relevance of this discipline to other health and life-science professionals and to health promotion and disease prevention. Thus, the term health informatics, or health care informatics, gained some popularity, even though it has the disadvantage of tending to exclude applications to biomedical research (Chaps. 9 and 26) and, as we shall argue shortly, it tends to focus the field's name on application domains (clinical care, public health, and prevention) rather than on the basic discipline and its broad range of applicability. Applications of informatics methods in biology and genetics exploded during the 1990s due to the human genome project11 and the growing recognition that modern life-science research was no longer possible without computational support and analysis (see Chaps. 9 and 26).
By the late 1990s, the use of informatics methods in such work had become widely known as bioinformatics and the director of the National Institutes of Health (NIH) appointed an advisory group
called the Working Group on Biomedical Computing. In June 1999, the group provided a report12 recommending that the NIH undertake an initiative called the Biomedical Information Science and Technology Initiative (BISTI). With the subsequent creation of another NIH organization called the Bioinformatics Working Group, the visibility of informatics applications in biology was greatly enhanced. Today bioinformatics is a major area of activity at the NIH13 and in many universities and biotechnology companies around the world. The explosive growth of this field, however, has added to the confusion regarding the naming conventions we have been discussing. In addition, the relationship between medical informatics and bioinformatics became unclear. As a result, in an effort to be more inclusive and to embrace the biological applications with which many medical informatics groups had already been involved, the name medical informatics gradually gave way to biomedical informatics (BMI). Several academic groups have changed their names, and a major medical informatics journal (Computers and Biomedical Research, first published in 1967) was reborn in 2001 as The Journal of Biomedical Informatics.14 Despite this convoluted naming history, we believe that the broad range of issues in biomedical information management does require an appropriate name and, beginning with the third edition of this book (2006), we used the term biomedical informatics for this purpose. It has become the most widely accepted term for the core discipline and should be viewed as encompassing broadly all areas of application in health, clinical practice, and biomedical research. When we speak specifically about computers and their use within biomedical informatics activities, we use the terms biomedical computer science (for the methodologic issues) or biomedical computing (to describe the activity itself).

12. Available at https://acd.od.nih.gov/documents/reports/060399_Biomed_Computing_WG_RPT.htm (Accessed 5/31/2019).
13. See http://www.bisti.nih.gov/ (Accessed 5/31/2019).
14. http://www.journals.elsevier.com/journal-of-biomedical-informatics (Accessed 5/30/19).
11. https://www.ornl.gov/sci/techresources/Human_Genome/home.shtml (Accessed 5/31/2019).
25 Biomedical Informatics: The Science and the Pragmatics
Note, however, that biomedical informatics has many other component sciences in addition to computer science. These include the decision sciences, statistics, cognitive science, data science, information science, and even management sciences. We return to this point shortly when we discuss the basic versus applied nature of the field when it is viewed as a basic research discipline. Although labels such as these are arbitrary, they are by no means insignificant. In the case of new fields of endeavor or branches of science, they are important both in designating the field and in defining or restricting its contents. The most distinctive feature of the modern computer is the generality of its application. The nearly unlimited range of computer uses complicates the business of naming the field. As a result, the nature of computer science is perhaps better illustrated by examples than by attempts at formal definition. Much of this book presents examples that do just this for biomedical informatics as well. The American Medical Informatics Association (AMIA), which was founded in the late 1980s under the former name for the
Box 1.1: Definition Informatics
of
Biomedical
Biomedical informatics (BMI) is the interdisciplinary field that studies and pursues the effective uses of biomedical data, information, and knowledge for scientific inquiry, problem solving, and decision making, driven by efforts to improve human health. Scope and breadth of discipline: BMI investigates and supports reasoning, modeling, simulation, experimentation, and translation across the spectrum from molecules to individuals and to populations, from biological to social systems, bridging basic and clinical research and practice and the health care enterprise. Theory and methodology: BMI develops, studies, and applies theories, methods, and processes for the generation, storage, retrieval, use, management, and sharing of biomedical data, information, and knowledge.
discipline, has recognized the confusion regarding the field and its definition.15 They accordingly appointed a working group to develop a formal definition of the field and to specify the core competencies that need to be acquired by students seeking graduate training in the discipline. The resulting definition, published in AMIA’s journal and approved by the full board of the organization, identifies the focus of the field in a simple sentence and then adds four clarifying corollaries that refine the definition and the field’s scope and content (7 Box 1.1). We adopt this definition, which is very similar to the one we offered in previous editions of this text. It acknowledges that the emergence of biomedical informatics as a new discipline is due in large part to rapid advances in computing and communications technology, to an increasing awareness that the knowledge base of biomedicine is essentially unmanageable by traditional paper- based methods, and to a growing conviction that the process of informed decision making is as important to modern biomedicine as is the collection of facts on which clinical decisions or research plans are made.
Technological approach: BMI builds on and contributes to computer, telecommunication, and information sciences and technologies, emphasizing their application in biomedicine. Human and social context: BMI, recognizing that people are the ultimate users of biomedical information, draws upon the social and behavioral sciences to inform the design and evaluation of technical solutions, policies, and the evolution of economic, ethical, social, educational, and organizational systems. Reproduced with permission from (Kulikowski et al. 2012) © Oxford University Press, 2012.
15 https://www.amia.org/about-amia/science-informatics (Accessed 5/27/19).
E. H. Shortliffe and M. F. Chiang
1.4.2 Historical Perspective
The modern digital computer grew out of developments in the United States and abroad during World War II, and general-purpose computers began to appear in the marketplace by the mid-1950s (Fig. 1.12). Speculation about what might be done with such machines (if they should ever become reliable) had, however, begun much earlier. Scholars, at least as far back as the Middle Ages, had often raised the question of whether human reasoning might be explained in terms of formal or algorithmic processes. Gottfried Wilhelm von Leibniz, a seventeenth-century German philosopher and mathematician, tried to develop a calculus that could be used to simulate human reasoning. The notion of a "logic engine" was subsequently worked out by Charles Babbage in the mid-nineteenth century. The first practical application of automatic computing relevant to medicine was Herman Hollerith's development of a punched-card data-processing system for the 1890 U.S. census (Fig. 1.13). His methods were soon adapted to epidemiologic and public health surveys, initiating the era of electromechanical punched-card data-processing technology, which matured and was widely adopted during the 1920s and 1930s. These techniques were the precursors of the stored
Fig. 1.12 The ENIAC. Early computers, such as the ENIAC, were the precursors of today's personal computers (PCs) and handheld calculators. (US Army photo. See also http://www.computersciencelab.com/ComputerHistory/HistoryPt4.htm (Accessed 5/31/2019))
program and wholly electronic digital computers, which began to appear in the late 1940s (Collen 1995). One early activity in biomedical computing was the attempt to construct systems that would assist a physician in decision making (see Chap. 24). Not all biomedical-computing programs pursued this course, however. Many of the early ones instead investigated the notion of a total hospital information system (HIS; see Chap. 16). These projects were perhaps less ambitious in that they were more concerned with practical applications in the short term; the difficulties they encountered, however, were still formidable. The earliest work on HISs in the United States was probably that associated with the MEDINET project at General Electric, followed by work at Bolt Beranek and Newman in Cambridge, Massachusetts, and then at the Massachusetts General Hospital (MGH) in Boston. A number of hospital application programs were developed at MGH by Barnett and his associates over three decades beginning in the early 1960s. Work on similar systems was undertaken by Warner at Latter Day Saints (LDS) Hospital in Salt Lake City, Utah, by Collen at Kaiser Permanente in Oakland, California, by Wiederhold at
Fig. 1.13 Tabulating machines. The Hollerith Tabulating Machine was an early data-processing system that performed automatic computation using punched cards. (Photograph courtesy of the Library of Congress)
Biomedical Informatics: The Science and the Pragmatics
Stanford University in Stanford, California, and by scientists at Lockheed in Sunnyvale, California.16 The course of HIS applications bifurcated in the 1970s. One approach was based on the concept of an integrated or monolithic design in which a single, large, time-shared computer would be used to support an entire collection of applications. An alternative was a distributed design that favored the separate implementation of specific applications on smaller individual computers—minicomputers—thereby permitting the independent evolution of systems in the respective application areas. A common assumption was the existence of a single shared database of patient information. The multimachine model was not practical, however, until network technologies permitted rapid and reliable communication among distributed and (sometimes) heterogeneous types of machines. Such distributed HISs began to appear in the 1980s (Simborg et al. 1983). Biomedical-computing activity broadened in scope and accelerated with the appearance of the minicomputer in the early 1970s. These machines made it possible for individual departments or small organizational units to acquire their own dedicated computers and to develop their own application systems (Fig. 1.14). In tandem with the introduction of general-purpose software tools that provided standardized facilities to individuals with limited computer training (such as the UNIX operating system and programming environment), the minicomputer put more computing power in the hands of more biomedical investigators than did any other single development until the introduction of the microprocessor, a central processing unit (CPU) contained on one or a few chips (Fig. 1.15). Everything changed radically in the late 1970s and early 1980s, when the microprocessor and the personal computer (PC) or microcomputer became available. Not only could hospital departments afford minicomputers but now individuals also could afford microcomputers.

Fig. 1.14 Departmental system. Hospital departments, such as the clinical laboratory, were able to implement their own custom-tailored systems when affordable minicomputers became available. These departments subsequently used microcomputers to support administrative and clinical functions. (Copyright 2013 Hewlett-Packard Development Company, LP. Reproduced from ~1985 original with permission)

16 The latter system was later taken over and further developed by the Technicon Corporation (subsequently TDS Healthcare Systems Corporation). Later the system was part of the suite of products available from Eclipsys, Inc. (which in turn was acquired by Allscripts, Inc. in 2010).

Fig. 1.15 Miniature computer. The microprocessor, or "computer on a chip," revolutionized the computer industry in the 1970s. By installing chips in small boxes and connecting them to a computer terminal, engineers produced the personal computer (PC)—an innovation that made it possible for individual users to purchase their own systems.

This change enormously broadened the base of computing in our society and gave rise to a new software industry. The first articles on computers in medicine had appeared in clinical journals in the late 1950s, but it was not until the late 1970s that the first advertisements dealing with computers and aimed at physicians began to appear (Fig. 1.16). Within a few years, a wide range of computer-based information-management tools were available as commercial products; their descriptions began to appear in journals alongside the traditional advertisements for drugs and other medical products. Today individual physicians find it practical to employ PCs in a variety of settings, including for applications in patient care or clinical investigation. We now enjoy a wide range of hardware of various sizes, types, prices, and capabilities, all of which will continue to evolve in the decades ahead. The trend—reductions in size and cost of computers with simultaneous increases in power (Fig. 1.17)—shows no sign of slowing, although scientists foresee the
Fig. 1.16 Medical advertising. An early advertisement for a portable computer terminal that appeared in general medical journals in the late 1970s. The development of compact, inexpensive peripheral devices and personal computers (PCs) inspired future experiments in marketing directly to clinicians. (Reprinted by permission of copyright holder Texas Instruments Incorporated © 1985)
Fig. 1.17 Moore's Law. (Graph: transistors per chip, 10^0 to 10^10 on a logarithmic scale, versus year, 1970-2020; data points range from the Intel 4004, 8080, and 8086 through the 80286, i386, i486, the Pentium family, Itanium, AMD K-10, POWER8, and IBM z13.) Former Intel chairman Gordon Moore is credited with popularizing the "law" that the size and cost of microprocessor chips will halve every 18 months while they double in computing power. This graph shows the exponential growth in the number of transistors that can be integrated on a single microprocessor chip. The trend continues to this day. (Source: Wikipedia: https://en.wikipedia.org/wiki/Transistor_count)
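The doubling relationship summarized in the caption can be expressed as a simple exponential. The sketch below is illustrative only: the roughly 2,300-transistor count for the Intel 4004 (1971) is an approximate published figure, and the fixed two-year doubling period is our assumption (Moore's own estimates ranged from one to two years).

```python
def projected_transistors(base_count, base_year, year, doubling_years=2.0):
    """Project a transistor count forward under an assumed fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Intel 4004 (1971), roughly 2,300 transistors; under a two-year doubling
# period, the count quadruples in four years:
print(projected_transistors(2300, 1971, 1975))  # 9200.0
```

Even this crude model conveys the scale of the trend: fifty years of doubling every two years multiplies the count by 2^25, or more than thirty million.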
Fig. 1.18 The National Library of Medicine (NLM). The NLM, on the campus of the National Institutes of Health (NIH) in Bethesda, Maryland, is the principal biomedical library for the nation (see Chap. 23). It is also a major source of support for research and training in biomedical informatics, both at NIH and in universities throughout the US. (Photograph courtesy of the National Library of Medicine)
ultimate physical limitations to the miniaturization of computer circuits.17 Progress in biomedical-computing research will continue to be tied to the availability of funding from either government or commercial sources. Because most biomedical-computing research is exploratory and is far from ready for commercial application, the federal government has played a key role in funding the work of the last four decades, mainly through the NIH and the Agency for Healthcare Research and Quality (AHRQ). The National Library of Medicine (NLM) has assumed a primary role for biomedical informatics, especially with support for basic research in the field (Fig. 1.18). As increasing numbers of applications prove successful in the commercial marketplace, it is likely that more development work will shift to industrial settings and that university programs will focus increasingly on fundamental research problems viewed as too speculative for short-term commercialization, as has occurred in the field of computer science over the past several decades.
Fig. 1.19 Doctor of the future. By the early 1980s, advertisements in medical journals (such as this one for an antihypertensive agent) began to use computer equipment as props and even to portray computers in a positive light. The suggestion in this photograph seems to be that an up-to-date physician feels comfortable using computer-based tools in his practice. (Photograph courtesy of ICI Pharma, Division of ICI Americas, Inc)
1.4.3 Relationship to Biomedical Science and Clinical Practice
The exciting accomplishments of biomedical informatics, and the implied potential for future benefits to medicine, must be viewed in the context of our society and of the existing health care system. As early as 1970, an eminent clinician suggested that computers might in time have a revolutionary influence on medical care, on medical education, and even on the selection criteria for health-science trainees (Schwartz 1970). The subsequent enormous growth in computing activity has been met with some trepidation by health professionals. They ask where it will all end. Will health workers gradually be replaced by computers? Will nurses and physicians need to be highly trained in computer science or informatics before they can practice their professions effectively? Will both patients and health workers eventually revolt rather than accept a trend toward automation that they believe may threaten the traditional humanistic values in health care delivery (see Chap. 12) (Shortliffe 1993a)? Will clinicians be viewed as outmoded and backward if they do not turn to computational tools for assistance with information management and decision making (Fig. 1.19)?
17 https://www.sciencedaily.com/releases/2008/01/080112083626.htm; https://arstechnica.com/science/2014/08/are-processors-pushing-up-against-the-limits-of-physics/ (Accessed 5/27/19).
Biomedical informatics is intrinsically entwined with the substance of biomedical science. It determines and analyzes the structure of biomedical information and knowledge, whereas biomedical science is constrained by that structure. Biomedical informatics melds the study of data, information, knowledge, decision making, and supporting technologies with analyses of biomedical information and knowledge, thereby
Box 1.2: The Nature of Medical Information
This material is adapted from a small portion of a classic book on this topic. It was written by Dr. Marsden S. Blois, who coauthored the introductory chapter to this textbook in its first edition, which was published shortly after his death. Dr. Blois was a scholar who directed the informatics program at the University of California San Francisco and served as the first president of the American College of Medical Informatics (ACMI). [Blois, M. S. (1984). Information and medicine: The nature of medical descriptions. Berkeley: University of California Press].
From the material in this chapter, you might conclude that biomedical applications do not raise any unique problems or concerns. On the contrary, the biomedical environment raises several issues that, in interesting ways, are quite distinct from those encountered in most other domains of applied computing. Clinical information seems to be systematically different from the information used in physics, engineering, or even clinical chemistry (which more closely resembles chemical applications generally than it does medical ones). Aspects of biomedical information include an essence of uncertainty—we can never know all about a physiological process—and this results in inevitable variability among individuals. These differences raise special problems, and some investigators suggest that biomedical computer science differs from conventional computer science in fundamental ways. We shall explore these differences only briefly here; for details, you can
addressing specifically the interface between the science of information and knowledge management and biomedical science. To illustrate what we mean by the "structural" features of biomedical information and knowledge, we can contrast the properties of the information and knowledge typical of such fields as physics or engineering with the properties of those typical of biomedicine (see Box 1.2).
consult Blois’ book on this subject (see Suggested Readings). Let us examine an instance of what we will call a low-level (or readily formalized) science. Physics is a natural starting point; in any discussion of the hierarchical relationships among the sciences (from the fourth-century BC Greek philosopher Aristotle to the twentieth-century U.S. librarian Melvil Dewey), physics will be placed near the bottom. Physics characteristically has a certain kind of simplicity, or generality. The concepts and descriptions of the objects and processes of physics, however, are necessarily used in all applied fields, including medicine. The laws of physics and the descriptions of certain kinds of physical processes are essential in representing or explaining functions that we regard as medical in nature. We need to know something about molecular physics, for example, to understand why water is such a good solvent; to explain how nutrient molecules are metabolized, we talk about the role of electron-transfer reactions. Applying a computer (or any formal computation) to a physical problem in a medical context is no different from doing so in a physics laboratory or for an engineering application. The use of computers in various low-level processes (such as those of physics or chemistry) is similar and is independent of the application. If we are talking about the solvent properties of water, it makes no difference whether we happen to be working in geology, engineering, or medicine. Such low-level processes of physics are particularly receptive to
mathematical treatment, so using computers for these applications requires only conventional numerical programming. In biomedicine, however, there are other higher-level processes carried out in more complex objects such as organisms (one type of which is patients). Many of the important informational processes are of this kind. When we discuss, describe, or record the properties or behavior of human beings, we are using the descriptions of very high-level objects, whose behavior has no counterpart in physics or in engineering. The person using computers to analyze the descriptions of these high-level objects and processes encounters serious difficulties (Blois 1984). One might object to this line of argument by remarking that, after all, computers are used routinely in commercial applications in which human beings and situations concerning them are involved and that relevant computations are carried out successfully. The explanation is that, in these commercial applications, the descriptions of human beings and their activities have been so highly abstracted that the events or processes have been reduced to low-level objects. In biomedicine, abstractions carried to this degree would be worthless from either a clinical or research perspective. For example, one instance of a human being in the banking business is the customer, who may deposit, borrow, withdraw, or invest money. To describe commercial activities such as these, we need only a few properties; the customer can remain an abstract entity. In clinical medicine, however, we could not begin to deal with a patient represented with such skimpy abstractions. We must be prepared to analyze most of the complex behaviors that human beings display and to describe patients as completely as possible. We must deal with the rich descriptions occurring at high levels in the hierarchy, and we may be hard pressed
to encode and process this information using the tools of mathematics and computer science that work so well at low levels. In light of these remarks, the general enterprise known as artificial intelligence (AI) can be aptly described as the application of computer science to high-level, real-world problems. Biomedical informatics thus includes computer applications that range from processing of very low-level descriptions, which are little different from their counterparts in physics, chemistry, or engineering, to processing of extremely high-level ones, which are completely and systematically different. When we study human beings in their entirety (including such aspects as human cognition, self-consciousness, intentionality, and behavior), we must use these high-level descriptions. We will find that they raise complex issues to which conventional logic and mathematics are less readily applicable. In general, the attributes of low-level objects appear sharp, crisp, and unambiguous (e.g., “length,” “mass”), whereas those of high-level ones tend to be soft, fuzzy, and inexact (e.g., “unpleasant scent,” “good”). Just as we need to develop different methods to describe high-level objects, the inference methods we use with such objects may differ from those we use with low-level ones. In formal logic, we begin with the assumption that a given proposition must be either true or false. This feature is essential because logic is concerned with the preservation of truth value under various formal transformations. It is difficult or impossible, however, to assume that all propositions have truth values when we deal with the many high-level descriptions in medicine or, indeed, in everyday situations. Such questions as “Was Woodrow Wilson a good president?” cannot be answered with a “yes” or “no” (unless we limit the question to specific criteria for determining the goodness of presidents). Many common questions in biomedicine have the same property.
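The contrast between crisp low-level attributes and fuzzy high-level ones can be made concrete with a toy sketch (our illustration, not drawn from Blois's book): a physical attribute such as mass supports an unambiguous true/false test, whereas a clinical descriptor such as "febrile" is more naturally modeled as a graded degree of membership. The 37.0-38.5 °C ramp below is an arbitrary illustrative choice, not a clinical standard.

```python
def is_heavier_than(mass_kg, threshold_kg):
    """Low-level attribute: a crisp, unambiguous true/false test."""
    return mass_kg > threshold_kg

def degree_febrile(temp_c):
    """High-level descriptor: a graded (fuzzy) membership value in [0, 1].

    Below 37.0 °C the descriptor clearly does not apply; above 38.5 °C it
    clearly does; in between, it applies to a degree.
    """
    if temp_c <= 37.0:
        return 0.0
    if temp_c >= 38.5:
        return 1.0
    return (temp_c - 37.0) / 1.5

print(is_heavier_than(70.0, 60.0))  # True
print(degree_febrile(37.75))        # 0.5
```

The point of the sketch is that questions like "Is this patient febrile?"—like "Was Woodrow Wilson a good president?"—resist a simple yes/no unless we first impose criteria that reduce the high-level description to a low-level one.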
Fig. 1.20 Biomedical informatics as basic science. (Diagram: "Biomedical informatics methods, techniques, and theories" [basic research] linked by two-headed arrows to four application domains [applied research and practice]: bioinformatics, imaging informatics, clinical informatics, and public health informatics.) We view the term biomedical informatics as referring to the basic science discipline in which the development and evaluation of new methods and theories are a primary focus of activity. These core concepts and methods in turn have broad applicability in the health and biomedical sciences. The informatics subfields indicated by the terms across the bottom of this figure are accordingly best viewed as application domains for a common set of concepts and techniques from the field of biomedical informatics. Note that work in biomedical informatics is motivated totally by the application domains that the field is intended to serve (thus the two-headed arrows in the diagram). Therefore the basic research activities in the field generally result from the identification of a problem in the real world of health or biomedicine for which an informatics solution is sought (see text)
Biomedical informatics is perhaps best viewed as a basic biomedical science, with a wide variety of potential areas of application (Fig. 1.20). The analogy with other basic sciences is that biomedical informatics uses the results of past experience to understand, structure, and encode objective and subjective biomedical findings and thus to make them suitable for processing. This approach supports the integration of the findings and their analyses. In turn, the selective distribution of newly created knowledge can aid patient care, health planning, and basic biomedical research. Biomedical informatics is, by its nature, an experimental science, characterized by posing questions, designing experiments, performing analyses, and using the information gained to design new experiments. One goal is simply to search for new knowledge, called basic research. A second goal is to use this knowledge for practical ends, called applied research. There is a continuity between these two endeavors (see Fig. 1.20). In biomedical informatics, there is an especially tight coupling between the application areas, broad categories of which are indicated at the bottom of Fig. 1.20, and the identification of basic research tasks that characterize the scientific underpinnings of the field. Research, however, has shown that there can be a very long period of time between the development of new concepts and methods in basic research and their eventual application in the biomedical world (Balas and Boren 2000). Furthermore (see Fig. 1.21), many discoveries are discarded along the way, leaving only a small percentage of basic research discoveries that have a practical influence on the health and care of patients. Work in biomedical informatics (BMI) is inherently motivated by problems encountered in a set of applied domains in biomedicine. The first of these historically has been clinical care (including medicine, nursing, dentistry, and veterinary care), an area of activity that demands patient-oriented informatics applications. We refer to this area as clinical informatics.18 It includes several sub-
Fig. 1.21 Phases in the transfer of research into clinical practice. (Flow: original research (100%) → [negative results: 18%] → submission (variable time) → 0.5 year → acceptance → 0.6 year → publication → [negative results: 46%; lack of numbers: 35%] → 0.3 year → bibliographic databases → [inconsistent indexing: 50%] → 6.0-13.0 years → reviews, guidelines, textbooks → 5.8 years → implementation (14%).) A synthesis of studies focusing on various phases of this transfer has indicated that it takes an average of 17 years to make innovation part of routine care (Balas and Boren 2000). Pioneering institutions often apply innovations much sooner, sometimes within a few weeks, but nationwide introduction is usually slow. National utilization rates of specific, well-substantiated procedures also suggest a delay of two decades in reaching the majority of eligible patients. For a well-documented study of such delays and their impact in an important area of clinical medicine, see (Krumholz et al. 1998). (Figure courtesy of Dr. Andrew Balas, used with permission)
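The stage durations reported by Balas and Boren roughly account for the cited 17-year average. A quick arithmetic check (using the midpoint of the 6.0-13.0-year indexing-to-review range, which is our assumption; the original figure does not specify a single value for that stage):

```python
# Stage durations (years) from the Balas & Boren (2000) transfer pipeline.
durations = {
    "submission to acceptance": 0.5,
    "acceptance to publication": 0.6,
    "publication to bibliographic databases": 0.3,
    "databases to reviews/guidelines/textbooks": (6.0 + 13.0) / 2,  # midpoint of 6.0-13.0
    "reviews to implementation": 5.8,
}

total = sum(durations.values())
print(round(total, 1))  # 16.7 -- consistent with the cited ~17-year average
```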
topics and areas of specialized expertise, including patient-care foci such as nursing informatics, dental informatics, and even veterinary informatics. Furthermore, the former name of the discipline, medical informatics, is now reserved for those applied research and practice topics that focus on disease and the role of physicians. As was previously discussed, the term "medical informatics" is no longer used to refer to the discipline as a whole. Closely tied to clinical informatics is public health informatics (Fig. 1.20), where similar methods are generalized for application to populations of patients rather than to single individuals (see Chap. 18). Thus clinical informatics and public health informatics share many of the same methods and techniques. The closeness of their relationship was amply demonstrated by the explosion in informatics research and applications that occurred in response to the COVID-19 pandemic.19 By mid-2020, several articles had appeared to demonstrate the tight relationship between EHRs and public health informatics for management of the outbreak (Reeves et al. 2020). Two other large areas of application overlap in some ways with clinical informatics and public health informatics. These include imaging informatics (and the set of issues developed around both radiology and other image management and image analysis domains such as pathology, dermatology, and molecular visualization—see Chaps. 10 and 22). Finally, there
18 Clinical informatics was approved in 2013 by the American Board of Medical Specialties as a formal subspecialty of medicine (Finnell and Dixon, 2015), with board certification examinations offered for eligible candidates by the American Board of Preventive Medicine (https://www.theabpm.org/become-certified/subspecialties/clinical-informatics/ (Accessed 6/1/19)). AMIA is formulating a similar certification program, AMIA Health Informatics Certification (AHIC), for non-physicians who are working in the clinical informatics area (https://www.amia.org/ahic, Accessed 1/5/2020).
19 https://www.amia.org/COVID19 (Accessed 05/03/2020).
Fig. 1.22 (Diagram: "Biomedical informatics methods, techniques, and theories" [basic research] linked to four application domains [applied research and practice]: bioinformatics [molecular and cellular processes], imaging informatics [tissues and organs], clinical informatics [individuals (patients)], and public health informatics [populations and society]; clinical informatics and public health informatics together constitute health informatics.) Building on the concepts of Fig. 1.20, this diagram demonstrates the breadth of the biomedical informatics field: the relationship between biomedical informatics as a core scientific discipline and its diverse array of application domains that span biological science, imaging, clinical practice, public health, and others not illustrated (see text). Note that "health informatics" is the term used to refer to applied research and practice in clinical and public health informatics. It is not a synonym for the underlying discipline, which is "biomedical informatics".
is the burgeoning area of bioinformatics, which at the molecular and cellular levels is offering challenges that draw on many of the same informatics methods as well (see Chaps. 9 and 26). As is shown in Fig. 1.22, there is a spectrum as one moves from left to right across these BMI application domains. In bioinformatics, workers deal with molecular and cellular processes in the application of informatics methods. At the next level, workers focus on tissues and organs, which tend to be the emphasis of imaging informatics work (also called structural informatics by some investigators). Progressing to clinical informatics, the focus is on individual patients, and finally to public health, where researchers address problems of populations and of society, including prevention. The core science of biomedical informatics has important contributions to make across that entire spectrum, and many informatics methods are broadly applicable across the full range of domains. Note from Fig. 1.20 that biomedical informatics and bioinformatics are not synonyms, and it is incorrect to refer to the scientific discipline as bioinformatics, which is, rather, an important area of application of BMI methods and concepts. Similarly, the term health informatics, which refers to applied research and practice in clinical and public health informatics, is also not an appropriate name for the core discipline, since BMI is applicable to basic human biology as well as to health. We acknowledge that the four major areas of application shown in Fig. 1.20 have "fuzzy" boundaries, and many areas of applied informatics research involve more than one of the categories. For example, biomolecular imaging involves both bioinformatics and imaging informatics concepts. Similarly, personal or consumer health informatics (see Chap. 11) includes elements of both clinical informatics and public health informatics. Another important area of BMI research activities is pharmacogenomics (see Chap. 27), which is the effort to infer genetic determinants of human drug response. Such work requires the analysis of linked genotypic and phenotypic databases, and therefore lies at the intersection of bioinformatics and clinical informatics. Similarly, Chap. 28 presents the role of informatics in precision medicine, which relies heavily on both bioinformatics and clinical informatics concepts and systems.
Fig. 1.23 (Venn diagram: three circles for biological science (genetics, structural biology, neuroscience); clinical science, public health, and health services research (policy, outcomes); and biomedical informatics (informatics, computation, statistics), with bioinformatics, health informatics, and translational research at the pairwise intersections.) A Venn diagram that depicts the relationships among the three major disciplines: biological research, clinical medicine / public health, and biomedical informatics. Bioinformatics, Health Informatics, and Translational Research lie at the intersections among pairs of these fields as shown. Precision Medicine, which relies on Translational Bioinformatics and Clinical Research Informatics, constitutes the area of common overlap among all three Venn circles. (Adapted with permission from a diagram developed by the Department of Biomedical Informatics at the Vanderbilt Medical Center, Nashville, TN)
Precision medicine is a product of the increasing emphasis on moving both data and concepts from basic science research into clinical science and ultimately into practice. Such efforts are typically characterized as translational science—a topic that has attracted major investments by the US National Institutes of Health (NIH) over the past two decades. Informatics scientists are engaged as collaborators in this translational work, which spans all four major categories of application shown in Fig. 1.20, pursuing work in translational bioinformatics (Chap. 26) and clinical research informatics (Chap. 27).20 Accordingly, informatics was defined as a major component of the Clinical and Translational Science Awards (CTSA) Program,21 supported by the National Center for Advancing Translational Sciences (NCATS) at the NIH. AMIA sponsors an annual weeklong conference, known as the Informatics Summit, that presents new research results and applications in these areas.22 The interactions among bioscience, clinical science, and informatics can be nicely captured by recognizing how informatics fields and translational science relate to one another (Fig. 1.23). In general, BMI researchers derive their inspiration from one or two, rather than all, of the application areas, identifying fundamental methodologic issues that need to be addressed and testing them in system prototypes or, for more mature methods, in actual systems that are used in clinical or biomedical research settings. One important implication of this viewpoint is that the core discipline is identical, regardless of the area of application that a given individual is motivated to address, although some BMI methods have greater relevance to some domains than to others. This argues for unified BMI educational programs, ones that bring together students with a wide variety of application interests. Elective courses and internships in areas of specific
20 See also the diagram in (Kulikowski et al. 2012), which shows how these two disciplines span all areas of applied biomedical informatics.
21 https://ncats.nih.gov/ctsa (Accessed 6/2/2019).
22 https://www.amia.org/meetings-and-events (Accessed 6/2/2019).
E. H. Shortliffe and M. F. Chiang
interest are of course important complements to the core exposures that students should receive (Kulikowski et al. 2012), but, given the need for teamwork and understanding in the field, separating trainees based on the application areas that may interest them would be counterproductive and wasteful.23 The scientific contributions of BMI also can be appreciated through their potential for benefiting the education of health professionals (Shortliffe 2010). For example, in the education of medical students, the various cognitive activities of physicians traditionally have tended to be considered separately and in isolation—they have been largely treated as though they are independent and distinct modules of performance. One activity frequently emphasized is formal education regarding medical decision making (see Chap. 3). The specific content of this area continues to evolve, but the discipline's dependence on formal methods regarding the use of knowledge and information reveals that it is one aspect of biomedical informatics. A particular topic in the study of medical decision making is diagnosis, which is often conceived and taught as though it were a free-standing and independent activity. Medical students may thus be led to view diagnosis as a process that physicians carry out in isolation before choosing therapy for a patient or proceeding to other modular tasks. A number of studies have shown that this model is oversimplified and that such a decomposition of cognitive tasks may be quite misleading (Elstein et al. 1978a; Patel and Groen 1986). Physicians seem to deal with several tasks at the same
23 Many current biomedical informatics training programs were designed with this perspective in mind. Students with interests in clinical, imaging, public health, and biologic applications are often trained together and are required to learn something about each of the other application areas, even while specializing in one subarea for their own research. Several such programs were described in a series of articles in the Journal of Biomedical Informatics in 2007 (Tarczy-Hornoch et al. 2007) and many more have been added since that time.
time. Although a diagnosis may be one of the first things physicians think about when they see a new patient, patient assessment (diagnosis, management, analysis of treatment results, monitoring of disease progression, etc.) is a process that never really terminates. A physician must be flexible and open-minded. It is generally appropriate to alter the original diagnosis if it turns out that treatment based on it is unsuccessful or if new information weakens the evidence supporting the diagnosis or suggests a second and concurrent disorder. Chapter 4 discusses these issues in greater detail. When we speak of making a diagnosis, choosing a treatment, managing therapy, making decisions, monitoring a patient, or preventing disease, we are using labels for different aspects of medical care, an entity that has overall unity. The fabric of medical care is a continuum in which these elements are tightly interwoven. Regardless of whether we view computer and information science as a profession, a technology, or a science, there is no doubt about its importance to biomedicine. We can assume computers are here to stay as fundamental tools to be used in clinical practice, biomedical research, and health science education.
1.4.4 Relationship to Computer Science
During its evolution as an academic entity in universities, computer science followed an unsettled course as involved faculty attempted to identify key topics in the field and to find the discipline’s organizational place. Many computer science programs were located in departments of electrical engineering, because major concerns of their researchers were computer architecture and design and the development of practical hardware components. At the same time, computer scientists were interested in programming languages and software, undertakings not particularly characteristic of engineering. Furthermore, their work with algorithm design, computability
Biomedical Informatics: The Science and the Pragmatics
theory,24 and other theoretical topics seemed more related to mathematics. Biomedical informatics draws from all of these activities—development of hardware, software, and computer science theory. Biomedical computing generally has not had a large enough market to influence the course of major hardware developments; i.e., computers serve general purposes and have not been developed specifically for biomedical applications. Not since the early 1960s (when health-computing experts occasionally talked about and, in a few instances, developed special medical terminals) have people assumed that biomedical applications would use hardware other than that designed for general use. The question of whether biomedical applications would require specialized programming languages might have been answered affirmatively in the 1970s by anyone examining the MGH Utility Multi-Programming System, known as the MUMPS language (Greenes et al. 1970; Bowie and Barnett 1976), which was specially developed for use in medical applications. For several years, MUMPS was the most widely used language for medical record processing. Under its subsequent name, M, it is still in widespread use and has been used to develop commercial electronic health record systems. New implementations have been developed for each generation of computers. M, however, like any programming language, is not equally useful for all computing tasks. In addition, the software requirements of medicine are better understood and no longer appear to be unique; rather, they are specific to the kind of task. A program for scientific computation looks pretty much the same whether it is designed for chemical engineering or for pharmacokinetic calculations. How, then, does BMI differ from biomedical computer science? Is the new discipline
simply the study of computer science with a "biomedical flavor"? If you return to the definition of biomedical informatics that we provided in Box 1.1, and then refer to Fig. 1.20, you will begin to see why biomedical informatics is more than simply the biomedical application of computer science.25 The issues that it addresses not only have broad relevance to health, medicine, and biology, but the underlying sciences on which BMI professionals draw are inherently interdisciplinary as well (and are not limited to computer science topics). Thus, for example, successful BMI research will often draw on, and contribute to, computer science, but it may also be closely related to the decision sciences (probability theory, decision analysis, or the psychology of human problem solving), cognitive science, information sciences, or the management sciences (Fig. 1.24). Furthermore, a biomedical informatics researcher will be tightly linked to some underlying problem from the real world of health or biomedicine. As Fig. 1.24 illustrates, for example, a biomedical informatics basic researcher or doctoral student will typically be motivated by one of the application areas, such as those shown at the bottom of Fig. 1.22, but a dissertation worthy of a PhD in the field will usually be identified by a generalizable scientific result that also contributes to one of the component disciplines (Fig. 1.20) and on which other scientists can build in the future.
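The hierarchical "global" data model that made MUMPS (M), discussed earlier, well suited to medical records can be given a rough flavor in a short sketch. This is illustrative only: it uses Python and an in-memory nested dictionary, and the helper names `m_set` and `m_get` are hypothetical; real M persists globals such as `^PATIENT` to disk automatically through the language runtime.

```python
# Sketch of the MUMPS "global" data model: a sparse, hierarchical,
# string-subscripted array, simulated here with nested Python dicts.
# In real M, SET ^PATIENT(123,"NAME")="DOE,JOHN" persists to disk;
# this in-memory version only illustrates the shape of the data.

def m_set(root, subscripts, value):
    """Simulate M's SET ^GLOBAL(s1,s2,...)=value."""
    node = root
    for s in subscripts[:-1]:
        node = node.setdefault(s, {})
    node[subscripts[-1]] = value

def m_get(root, subscripts):
    """Simulate M's $GET: return the stored value, or "" if undefined."""
    node = root
    for s in subscripts:
        if not isinstance(node, dict) or s not in node:
            return ""
        node = node[s]
    return node

patient = {}  # stands in for the persistent global ^PATIENT
m_set(patient, (123, "NAME"), "DOE,JOHN")
m_set(patient, (123, "LAB", "2021-03-01", "GLU"), "5.4 mmol/L")

print(m_get(patient, (123, "NAME")))                      # DOE,JOHN
print(m_get(patient, (123, "LAB", "2021-03-01", "GLU")))  # 5.4 mmol/L
print(m_get(patient, (123, "ALLERGY")))                   # empty line ($GET default)
```

The appeal for medical records is visible even in this toy: a patient chart is sparse and irregular (different patients have different labs on different dates), and a hierarchical, subscripted structure accommodates that without a fixed schema.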
25 In fact, the multidisciplinary nature of biomedical informatics has led the informatics term to be borrowed in other disciplines, including computer science organizations, even though the English name for the field was first adopted in the biomedical context. Today we even have generic full departments of informatics in the US (e.g., see https://informatics.njit.edu, Accessed 11/28/2020) and in other parts of the world as well (e.g., http://www.sussex.ac.uk/informatics/. Accessed 1/5/2020). In the US, there are full schools with informatics in their title (e.g., https://luddy.indiana.edu/index.html. Accessed 1/5/2020) and even a School of Biomedical Informatics (https://sbmi.uth.edu/. Accessed 1/2/2020).
24 Many interesting problems cannot be computed in a reasonable time and require heuristics. Computability theory is the foundation for assessing the feasibility and cost of computing complete and correct results for a formally stated problem.
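Footnote 24's point about infeasible computation is easy to quantify. The sketch below is illustrative, not from the text: the billion-checks-per-second speed is an assumed round number, but it shows how brute-force examination of all 2^n subsets of n items outgrows any realistic compute budget.

```python
# Why exhaustive computation fails: testing every subset of n items
# takes 2**n checks. At an assumed 10**9 checks per second, the time
# required explodes long before n gets large.
CHECKS_PER_SECOND = 10**9  # assumed machine speed, for illustration

def brute_force_seconds(n):
    """Wall-clock seconds to test all 2**n subsets at the assumed speed."""
    return 2**n / CHECKS_PER_SECOND

for n in (20, 40, 60):
    print(f"n={n}: {brute_force_seconds(n):.2e} seconds")
# n=20 takes about a millisecond; n=40 about 18 minutes; n=60 about 36 years.
```

Heuristic methods trade this guaranteed completeness for tractable running time, which is why they pervade practical problem-solving systems in medicine.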
Fig. 1.24 Component sciences in biomedical informatics. An informatics application area is motivated by the needs of its associated biomedical domain, to which it attempts to contribute solutions to problems. Thus any applied informatics work draws upon a biomedical domain for its inspiration, and in turn often leads to the delineation of basic research challenges in biomedical informatics that must be tackled if the applied biomedical domain is ultimately to benefit. At the methodologic level, biomedical informatics draws on, and contributes to, a wide variety of component disciplines, of which computer science is only one. As Figs. 1.20 and 1.22 show explicitly, biomedical informatics is inherently multidisciplinary, both in its areas of application and in the component sciences on which it draws.

1.4.5 Relationship to Biomedical Engineering

BMI is a relatively young discipline, whereas biomedical engineering (BME) is older and well-established. Many engineering and medical schools have formal academic programs in BME, often with departmental status and full-time faculty. Only in the last two or three decades has this begun to be true of biomedical informatics academic units. How does biomedical informatics relate to biomedical engineering, especially in an era when engineering and computer science are increasingly intertwined? Biomedical engineering departments emerged in the late 1960s, when technology began to play an increasingly prominent role
in medical practice.26 The emphasis in such departments has tended to be research on, and development of, instrumentation (e.g., as discussed in Chaps. 21 and 22, advanced monitoring systems, specialized transducers for clinical or laboratory use, and imaging methods and enhancement techniques for use in radiology), with an orientation toward the
26 By the late 1960s the first BME departments were formed in the US at the University of Virginia, Case Western Reserve University, Johns Hopkins University, and Duke University (see https://navigate.aimbe.org/why-bioengineering/history/, Accessed 6/2/2019). Duke's undergraduate degree program in BME was the first to be accredited by the Engineering Council for Professional Development (September 1972).
development of medical devices, prostheses, and specialized research tools. There is also a major emphasis on tissue engineering and related wet-bench research efforts. In recent years, computing techniques have been used both in the design and construction of medical devices and in the medical devices themselves. For example, the "smart" devices increasingly found in most medical specialties are all dependent on computational technology. Intensive care monitors that generate blood pressure records while calculating mean values and hourly summaries are examples of such "intelligent" devices. The overlap between biomedical engineering and BMI suggests that it would be unwise for us to draw compulsively strict boundaries between the two fields. There are ample opportunities for interaction, and there are chapters in this book that clearly overlap with biomedical engineering topics—e.g., Chap. 21 on patient-monitoring systems and Chap. 22 on radiology systems. Even where they meet, however, the fields have differences in emphasis that can help you to understand their different evolutionary histories. In biomedical engineering, the emphasis is on medical devices and underlying methods; in BMI, the emphasis is on biomedical information and knowledge and on their management with the use of computers. In both fields, the computer is secondary, although both use computing technology. The emphasis in this book is on the informatics end of the spectrum of biomedical computer science, so we shall not spend much time examining biomedical engineering topics.
1.5 Integrating Biomedical Informatics and Clinical Practice
It should be clear from the material in this chapter that biomedical informatics is a remarkably broad and complex topic. We have argued that information management is intrinsic to both life-science research and clinical practice and that, in biomedical settings
over a half century, the use of computers to aid in information management has grown from a futuristic notion to an everyday occurrence. In fact, the EHR and other information technology tools may now be the only kind of equipment that is used by every single health care professional, regardless of specialty or professional title. In this chapter and throughout the book, we emphasize the myriad ways in which computers are used in biomedicine to ease the burdens of information management and the means by which new technology is changing the delivery of health care. The degree to which such changes are positively realized, and their rate of occurrence, are being determined in part by external forces that influence the costs of developing and implementing biomedical applications and the ability of scientists, clinicians, patients, and the health care system to accrue the potential benefits. We can summarize several global forces that are affecting biomedical computing and that will continue to influence the extent to which computers are assimilated into clinical practice: (1) new developments in communications plus computer hardware and software; (2) a further increase in the number of individuals who have been trained in both medicine, or another health profession, and in BMI; and (3) ongoing changes in health care financing designed to control the rate of growth of health-related expenditures. We touched on the first of these factors in Sect. 1.4.2, when we described the historical development of biomedical computing and the trend from mainframe computers, to microcomputers and PCs, and to the mobile devices of today. The future view outlined in Sect. 1.1 similarly builds on the influence that the Internet has provided throughout society during the last decade. Hardware improvements have made powerful computers inexpensive and thus available to hospitals, to departments within hospitals, and even to individual physicians.
The broad selection of computers of all sizes, prices, and capabilities makes computer applications both attractive and accessible. Technological advances in
information storage devices,27 including the movement of files to the "cloud", are facilitating the inexpensive storage of large amounts of data, thus improving the feasibility of data-intensive applications, such as drawing inferences from human genome datasets (see Chaps. 9, 26, and 28) and the all-digital radiology department (Chap. 22). Standardization of hardware and advances in network technology are making it easier to share data and to integrate related information-management functions within a hospital or other health care organization, although inadequacies in standards for encoding and sharing data continue to be challenging (Chaps. 7, 14, 15, and 16). The second factor is the frustratingly slow increase in the number of professionals who are being trained to understand the biomedical issues as well as the technical and engineering ones. Computer scientists who understand biomedicine are better able to design systems responsive to actual needs and sensitive to workflow and the clinical culture. Health professionals who receive formal training in BMI are likely to build systems using well-established techniques while avoiding the past mistakes of other developers. As more professionals are trained in the special aspects of both fields, and as the programs they develop are introduced, health care professionals are more likely to have useful and usable systems available when they turn to the computer for help with information management tasks. The third factor affecting the integration of computing technologies into health care settings is our evolving health care system and the increasing pressure to control medical spending. The escalating tendency to apply technology to all patient-care tasks is a frequently cited phenomenon in modern medical practice. Mere physical findings no longer are
27 Technological progress in this area is occurring at a dizzying rate. Consider, for example, the announcement that scientists are advancing the notion of using DNA for data storage and can store as much as 704 terabytes of information in a gram of DNA. http://www.engadget.com/2012/08/19/harvardstores-704tb-in-a-gram-of-dna; https://homes.cs.washington.edu/~bornholt/dnastorage-asplos16/ (Accessed 5/30/19).
considered adequate for making diagnoses and planning treatments. In fact, medical students who are taught by more experienced physicians to find subtle diagnostic signs by examining various parts of the body nonetheless often choose to bypass or deemphasize physical examinations in favor of ordering one test after another. Sometimes, they do so without paying sufficient attention to the ensuing cost. Some new technologies replace less expensive, but technologically inferior, tests. In such cases, the use of the more expensive approach is generally justified. Occasionally, computer-related technologies have allowed us to perform tasks that previously were not possible. For example, the scans produced with computed tomography or magnetic resonance imaging (see Chaps. 10 and 22) have allowed physicians to visualize cross-sectional slices of the body, and medical instruments in intensive care units perform continuous monitoring of patients' body functions that previously could be checked only episodically (see Chap. 21). The development of expensive new technologies, and the belief that more technology is better, have helped to fuel rapidly escalating health care costs. In the 1970s and 1980s, such rising costs led to the introduction of managed care and capitation—changes in financing and delivery that were designed to curb spending. Today we are seeing a trend toward value-based reimbursement, which is predicated on the notion that payment for care of patients should be based on the demonstrated value received (as defined by high quality at low cost) rather than simply the existence of an encounter or procedure. Integrated computer systems can provide the means to capture data to help assess such value, while they also support detailed cost accounting, the analysis of the relationship of costs of care to the benefits of that care, evaluation of the quality of care provided, and identification of areas of inefficiency.
Systems that improve the quality of care while reducing the cost of providing that care clearly will be favored. The effect of cost-containment pressures on technologies that increase the cost of care while improving the quality is less clear. Medical technologies, including computers, will be
embraced only if they improve the delivery of clinical care while either reducing costs or providing benefits that clearly exceed their costs. Designers of medical systems must satisfactorily address many logistical and engineering questions before innovative solutions are integrated optimally into medical practice. For example, are the machines conveniently located? Should mobile devices further replace tethered workstations? Can users complete their tasks without excessive delays? Is the system reliable enough to avoid loss of data? Can users interact easily and intuitively with the computer? Does it facilitate rather than disrupt workflow? Are patient data secure and appropriately protected from prying eyes? In addition, cost-control pressures produce a growing reluctance to embrace expensive technologies that add to the high cost of health care. The net effect of these opposing trends is in large part determining the degree to which specific systems are embraced and effectively implemented in the health care environment. In summary, rapid advances in communications, computer hardware, and software, coupled with an increasing computer literacy of health care professionals and researchers, favor the implementation of effective computer applications in clinical practice, public health, and life sciences research. Furthermore, in the increasingly competitive health care industry, providers have a greater need for the information management capabilities supplied by computer systems. The challenge is to demonstrate in persuasive and rigorous ways the financial and clinical advantages of these systems (see Chap. 13).
Suggested Readings
Blois, M. S. (1984b). Information and medicine: The nature of medical descriptions. Berkeley: University of California Press. In this classic volume, the author analyzes the structure of medical knowledge in terms of a hierarchical model of information. He explores the ideas of high- and low-level sciences and suggests that the nature of medical descriptions accounts for difficulties in applying computing technology to medicine. A brief summary of key elements in this book is included as Box 1.2 in this chapter.
Coiera, E. (2015). Guide to health informatics (3rd ed.). Boca Raton, FL: CRC Press. This introductory text is a readable summary of clinical and public health informatics, aimed at making the domain accessible and understandable to the non-specialist.
Collen, M. F., & Ball, M. J. (Eds.). (2015). A history of medical informatics in the United States (2nd ed.). London: Springer. This comprehensive book traces the history of the field of medical informatics, and identifies the origins of the discipline's name (which first appeared in the English-language literature in 1974). The original (1995) edition was being updated by Dr. Collen when he passed away shortly after his 100th birthday. Dr. Ball organized an effort to complete the 2nd edition, enlisting participation by many leaders in the field.
Elstein, A. S., Shulman, L. S., & Sprafka, S. A. (1978b). Medical problem solving: An analysis of clinical reasoning. Cambridge, MA: Harvard University Press. This classic collection of papers describes detailed studies that have illuminated several aspects of the ways in which expert and novice physicians solve medical problems. The seminal work described remains highly relevant to today's work on problem solving and clinical decision support systems.
Friedman, C. P., Altman, R. B., Kohane, I. S., McCormick, K. A., Miller, P. L., Ozbolt, J. G., Shortliffe, E. H., Stormo, G. D., Szczepaniak, M. C., Tuck, D., & Williamson, J. (2004). Training the next generation of informaticians: The impact of BISTI and bioinformatics. Journal of the American Medical Informatics Association, 11, 167–172. This important analysis addresses the changing nature of biomedical informatics due to the revolution in bioinformatics and computational biology. Implications for training, as well as organization of academic groups and curriculum development, are discussed.
Hoyt, R. E., & Hersh, W. R. (2018). Health informatics: Practical guide (7th ed.). Raleigh: Lulu.com.
This introductory volume provides a broad view of informatics and is aimed especially at health professionals in management roles or IT professionals who are entering the clinical world.
Institute of Medicine28. (1991 [revised 1997]). The computer-based patient record: An essential
1
42
1
E. H. Shortliffe and M. F. Chiang
technology for health care. Washington, DC: National Academy Press.
National Research Council (1997). For the record: Protecting electronic health information. Washington, DC: National Academy Press.
National Research Council (2000). Networking health: Prescriptions for the Internet. Washington, DC: National Academy Press. This set of three reports from branches of the US National Academies of Science has had a major influence on health information technology education and policy over the last 25 years.
Institute of Medicine28. (2000). To err is human: Building a safer health system. Washington, DC: National Academy Press.
Institute of Medicine (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.
Institute of Medicine (2004). Patient safety: Achieving a new standard for care. Washington, DC: National Academy Press. This series of three reports from the Institute of Medicine has outlined the crucial link between heightened use of information technology and the enhancement of quality and reduction in errors in clinical practice. Major programs in patient safety have resulted from these reports, and they have provided motivation for a heightened interest in health care information technology among policy makers, provider organizations, and even patients.
Kalet, I. J. (2013). Principles of biomedical informatics (2nd ed.). New York: Academic. This volume provides a technical introduction to the core methods in BMI, dealing with storage, retrieval, display, and use of biomedical data for biological problem solving and medical decision making. Application examples are drawn from bioinformatics, clinical informatics, and public health informatics.
National Academy of Medicine. (2018). Procuring interoperability: Achieving high-quality, connected, and person-centered care. Washington, DC: National Academy Press.
National Academy of Medicine (2019).
Artificial intelligence in health care: The hope, the hype, the promise, the peril. Washington, DC: National Academy Press. This series of two reports from the National Academy of Medicine outlines emerging issues in biomedical informatics: interoperability (which is discussed in greater detail in Chapter 8), and artificial intelligence (which is discussed in many chapters throughout this volume).
National Academy of Medicine. (2019). Taking action against clinician burnout: A systems approach to professional well-being. Washington, DC: National Academy Press. This consensus study from the National Academy of Medicine discusses the problem of clinician burnout in the United States, including areas where health care information technology may contribute to or reduce these problems.
Shortliffe, E. (1993b). Doctors, patients, and computers: Will information technology dehumanize health care delivery? Proceedings of the American Philosophical Society, 137(3), 390–398. In this paper, the author examines the frequently expressed concern that the introduction of computing technology into health care settings will disrupt the development of rapport between clinicians and patients and thereby dehumanize the therapeutic process. He argues, rather, that computers may eventually have precisely the opposite effect on the relationship between clinicians and their patients.
Questions for Discussion
1. How do you interpret the phrase "logical behavior"? Do computers behave logically? Do people behave logically? Explain your answers.
2. What do you think it means to say that a computer program is "effective"? Make a list of a dozen computer applications with which you are familiar. List the applications in decreasing order of effectiveness, as you have explained this concept. Then, for each application, indicate your estimate of how well human beings perform the same tasks (this will require that you determine what it means for a human being to be effective). Do you discern any pattern? If so, how do you interpret it?
3. Discuss three society-wide factors that will determine the extent to which computers are assimilated into clinical practice.
4. Reread the future vision presented in Sect. 1.1. Describe the characteristics of an integrated environment for
43 Biomedical Informatics: The Science and the Pragmatics
managing clinical information. Discuss two ways (either positive or negative) in which such a system could change clinical practice.
5. Do you believe that improving the technical quality of health care entails the risk of dehumanization? If so, is it worth the risk? Explain your reasoning.
6. Consider Fig. 1.20, which shows that bioinformatics, imaging informatics, clinical informatics, and public health informatics are all application domains of the biomedical informatics discipline because they share the same core methods and theories:
(a) Briefly describe two examples of core biomedical informatics methods or theories that can be applied both to bioinformatics and clinical informatics.
(b) Imagine that you describe Fig. 1.20 to a mathematics faculty member, who responds that "in that case, I'd also argue that statistics, computer science, and physics are all application domains of math because they share the same core mathematical methods and theories." In your opinion, is this a legitimate argument? In what ways is this situation similar to, and different from, the case of biomedical informatics?
(c) Why is biomedical informatics not simply computer science applied to biomedicine, or to the practice of medicine, using computers?
(d) How would you describe the relevance of psychology and cognitive science to the field of biomedical informatics? (Hint: See Fig. 1.24)
7. In 2000, a major report by the Institute of Medicine28 entitled "To Err is Human: Building a Safer Health System" (see Suggested Readings) stated that up to 98,000 patient deaths were being caused by preventable medical errors in American hospitals each year.
(a) It has been suggested that effective electronic health record (EHR)
systems should mitigate this problem. What are three specific ways in which they could be reducing the number of adverse events in hospitals?
(b) Are there ways in which computer-based systems could increase the incidence of medical errors? Explain.
(c) Describe a practical experiment that could be used to examine the impact of an EHR system on patient safety. In other words, propose a study design that would address whether the computer-based system increases or decreases the incidence of preventable adverse events in hospitals – and by how much.
(d) What are the limitations of the experimental design you proposed in (c)?
8. It has been argued that the ability to capture "nuance" in the description of what a clinician has seen when examining or interviewing a patient may not be as crucial as some people think. The desire to be able to express one's thoughts in an unfettered way (free text) is often used to argue against the use of structured data-entry methods using a controlled vocabulary and picking descriptors from lists when recording information in an EHR.
(a) What is your own view of this argument? Do you believe that it is important to the quality and/or efficiency of care for clinicians to be able to record their observations, at least part of the time, using free text/natural language?
(b) Many clinicians have been unwilling to use an EHR system requiring structured data entry

28 The Institute of Medicine (IOM), part of the former National Academy of Sciences (NAS), was reorganized in 2015 to become the National Academy of Medicine (NAM). The NAS is now known as the National Academies of Sciences, Engineering, and Medicine (NASEM).
1
44
1
E. H. Shortliffe and M. F. Chiang
because of the increased time required for documentation at the point of care and constraints on what can be expressed. What are two strategies that could be used to address this problem (other than “designing a better user interface for the system”)?
2 Biomedical Data: Their Acquisition, Storage, and Use
Edward H. Shortliffe and Michael F. Chiang

Contents
2.1 What Are Clinical Data? – 47
2.1.1 What Are the Types of Clinical Data? – 49
2.1.2 Who Collects the Data? – 51
2.2 Uses of Health Data – 53
2.2.1 Create the Basis for the Historical Record – 54
2.2.2 Support Communication Among Providers – 54
2.2.3 Anticipate Future Health Problems – 55
2.2.4 Record Standard Preventive Measures – 56
2.2.5 Identify Deviations from Expected Trends – 56
2.2.6 Provide a Legal Record – 56
2.2.7 Support Clinical Research – 58
2.3 Rationale for the Transition from Paper to Electronic Documentation – 58
2.3.1 Pragmatic and Logistical Issues – 58
2.3.2 Redundancy and Inefficiency – 60
2.3.3 Influence on Clinical Research – 61
2.3.4 The Passive Nature of Paper Records – 62
2.4 New Kinds of Data and the Resulting Challenges – 62
2.5 The Structure of Clinical Data – 63
2.5.1 Coding Systems – 64
2.5.2 The Data-to-Knowledge Spectrum – 66
2.6 Strategies of Clinical Data Selection and Use – 67
2.6.1 The Hypothetico-Deductive Approach – 67
2.6.2 The Relationship Between Data and Hypotheses – 70
2.6.3 Methods for Selecting Questions and Comparing Tests – 71
2.7 The Computer and Collection of Medical Data – 72
References – 74

© Springer Nature Switzerland AG 2021
E. H. Shortliffe, J. J. Cimino (eds.), Biomedical Informatics, https://doi.org/10.1007/978-3-030-58721-5_2
Learning Objectives
After reading this chapter, you should know the answers to these questions:
- What are clinical data?
- How are clinical data used?
- What are the advantages and disadvantages of traditional paper medical records vs. electronic health records?
- What is the role of the computer in data storage, retrieval, and interpretation?
- What distinguishes a database from a knowledge base?
- How are data collection and hypothesis generation intimately linked in clinical diagnosis?
- What are the meanings of the terms prevalence, predictive value, sensitivity, and specificity?
- How are the terms related?
- What are the alternatives for entry of data into a clinical database?
2.1 What Are Clinical Data?
From earliest times, the ideas of ill health and its treatment have been wedded to those of the observation and interpretation of data. Whether we consider the disease descriptions and guidelines for management in early Greek literature or the modern physician's use of complex laboratory and X-ray studies, it is clear that gathering data and interpreting their meaning are central to the health care process. With the move toward the use of clinical and genomic information in assessing individual patients (their risks, prognosis, and likely responses to therapy), the sheer amounts of data that may be used in patient care have become huge. A textbook on biomedical informatics will accordingly refer time and again to issues in data collection, storage, and use. This chapter lays the foundation for this recurring set of issues that is pertinent to all aspects of the use of information, knowledge, and computers in biomedicine, both in the clinical world and in applications related to public health, biology and human genetics.

If data are central to all health care, it is because they are crucial to the process of decision making (as described in detail in Chaps. 3 and 4 and again in Chap. 26). In fact, simple reflection will reveal that all health care activities involve gathering, analyzing, or using data. Data provide the basis for categorizing the problems a patient may be having or for identifying subgroups within a population of patients. They also help a physician to decide what additional information is needed and what actions should be taken to gain a greater understanding of a patient's problem or most effectively to treat the problem that has been diagnosed.

It is overly simplistic to view data as the columns of numbers or the monitored waveforms that are a product of our technological health care environment. Although laboratory test results and other numeric data are often invaluable, a variety of more subtle types of data may be just as important to the delivery of optimal care: the awkward glance by a patient who seems to be avoiding a question during the medical interview, information about the details of a patient's symptoms or about his family or economic setting, or the subjective sense of disease severity that an experienced clinician will often have within a few moments of entering a patient's room. No clinician disputes the importance of such observations in decision making during patient assessment and management, yet the precise role of these data and the corresponding decision criteria are so poorly understood that it is difficult to record them in ways that convey their full meaning, even from one clinician to another. Despite these limitations, clinicians need to share descriptive information with others. When they cannot interact directly with one another, they often turn to the chart or electronic health record for communication purposes. We consider a clinical datum to be any single observation of a patient—e.g., a temperature reading, a red blood cell count, a past history of rubella, or a blood pressure reading.
As the blood pressure example shows, it is a matter of perspective whether a single observation is in fact more than one datum. A blood pressure of 120/80 might well be recorded as a single element in a setting where knowledge that a patient’s blood pressure is normal is all that matters. If the difference between diastolic (while the heart cavities are beginning to fill)
and systolic (while they are contracting) blood pressures is important for decision making or for analysis, however, the blood pressure reading is best viewed as two pieces of information (systolic pressure = 120 mmHg, diastolic pressure = 80 mmHg). Human beings can glance at a written blood pressure value and easily make the transition between its unitary view as a single data point and the decomposed information about systolic and diastolic pressures. Such dual views can be much more difficult for computers, however, unless they are specifically allowed for in the design of the method for data storage and analysis. The idea of a data model for computer-stored medical data accordingly becomes an important issue in the design of medical data systems.

Clinical data may involve several different observations made concurrently, the observation of the same patient parameter made at several points in time, or both. Thus, a single datum generally can be viewed as defined by five elements:
1. The patient in question
2. The parameter being observed (e.g., liver size, urine sugar value, history of rheumatic fever, heart size on chest X-ray film)
3. The value of the parameter in question (e.g., weight is 70 kg, temperature is 98.6 °F, profession is steel worker)
4. The time of the observation (e.g., 2:30 A.M. on 14FEB2019¹)
5. The method by which the observation was made (e.g., patient report, thermometer, urine dipstick, laboratory instrument).

Time can particularly complicate the assessment and computer-based management of data. In some settings, the date of the observation is adequate—e.g., in outpatient clinics or private offices where a patient generally is seen infrequently and the data collected need to be identified in time with no greater accuracy than a calendar date. In others, minute-to-minute variations may be important—e.g., the frequent blood sugar readings obtained for a patient in diabetic ketoacidosis (acid production due to poorly controlled blood sugar levels) or the continuous measurements of mean arterial blood pressure for a patient in cardiogenic shock (dangerously low blood pressure due to failure of the heart muscle).

It may also be important to keep a record of the circumstances under which a data point was obtained. For example, was the blood pressure taken in the arm or leg? Was the patient lying or standing? Was the pressure obtained just after exercise? During sleep? What kind of recording device was used? Was the observer reliable? Such additional information, sometimes called contexts, methods, or modifiers, can be of crucial importance in the proper interpretation of data. Two patients with the same basic problem or symptom often have markedly different explanations for their problem, revealed by careful assessment of the modifiers of that problem.

A related issue is the uncertainty in the values of data. It is rare that an observation—even one by a skilled clinician—can be accepted with absolute certainty. Consider the following examples:
- An adult patient reports a childhood illness with fevers and a red rash in addition to joint swelling. Could he or she have had scarlet fever? The patient does not know what his or her pediatrician called the disease nor whether anyone thought that he or she had scarlet fever.
- A physician listens to the heart of an asthmatic child and thinks that she hears a heart murmur—but is not certain because of the patient's loud wheezing.
- A radiologist looking at a shadow on a chest X-ray film is not sure whether it represents overlapping blood vessels or a lung tumor.
- A confused patient is able to respond to simple questions about his or her illness, but under the circumstances the physician is uncertain how much of the history being reported is reliable.

¹ Note that it was the tendency to record such dates in computers as "14FEB12" that led to the end-of-century complexities that were called the Year 2K problem. It was shortsighted to think that it was adequate to encode the year of an event with only two digits.
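The five-element view of a clinical datum described above can be made concrete as a small data structure. The sketch below is purely illustrative (the class and field names are invented for this example, not drawn from any real EHR schema); it shows how a blood pressure of 120/80 decomposes into two data points, each carrying its own contextual modifiers and a full four-digit-year timestamp:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict

@dataclass
class ClinicalDatum:
    """One observation of one patient, following the five-element view."""
    patient_id: str        # 1. the patient in question
    parameter: str         # 2. the parameter being observed
    value: Any             # 3. the value (number, text, code, ...)
    observed_at: datetime  # 4. the time of the observation (four-digit year)
    method: str            # 5. the method by which it was made
    modifiers: Dict[str, str] = field(default_factory=dict)  # context, e.g. posture

# "BP 120/80" recorded as two data points rather than one:
reading_time = datetime(2019, 2, 14, 2, 30)
context = {"posture": "sitting", "site": "left arm"}
bp = [
    ClinicalDatum("pt-001", "systolic_bp_mmHg", 120, reading_time,
                  "sphygmomanometer", context),
    ClinicalDatum("pt-001", "diastolic_bp_mmHg", 80, reading_time,
                  "sphygmomanometer", context),
]
```

Storing the two pressures as separate, explicitly labeled data points preserves the decomposed view for analysis, while software can always reassemble the familiar unitary "120/80" for display.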
As described in Chaps. 3 and 4, there are a variety of possible responses to deal with
incomplete data, the uncertainty in them, and in their interpretation. One technique is to collect additional data that will either confirm or eliminate the concern raised by the initial observation. This solution is not always appropriate, however, because the costs of data collection must be considered. The additional observation might be expensive, risky for the patient, or wasteful of time during which treatment could have been instituted. The idea of trade-offs in data collection thus becomes extremely important in guiding health care decision making.

2.1.1 What Are the Types of Clinical Data?
The examples in the previous section suggest that there is a broad range of data types in the practice of medicine and the allied health sciences. They range from narrative, textual data to numerical measurements, genetic information, recorded signals, drawings, and photographs or other images. Narrative data account for a large component of the information that is gathered in the care of patients. For example, the patient's description of his or her present illness, including responses to focused questions from the physician, generally is gathered verbally and is recorded as text in the medical record. The same is true of the patient's social and family history, the general review of systems that is part of most evaluations of new patients, and the clinician's report of physical examination findings. Such narrative data were traditionally handwritten by clinicians and then placed in the patient's medical record (Fig. 2.1a). Increasingly, however, the narrative summaries were dictated and then transcribed by typists who produced printed summaries or electronic copies for inclusion in paper or electronic medical records. Now, physicians and staff largely enter narrative text directly into electronic health records (EHRs), usually through keyboard, mouse-driven, or voice-driven interfaces (Fig. 2.1b). Electronic narrative data often include not only patient histories and physical examinations, but also other narrative descriptions such as reports of specialty consultations, surgical procedures, pathologic examinations of tissues, and hospitalization summaries when a patient is discharged.

Some narrative data are loosely coded with shorthand conventions known to health personnel, particularly data collected during the physical examination, in which recorded observations reflect the stereotypic examination process taught to all practitioners. It is common, for example, to find the notation "PERRLA" under the eye examination in a patient's medical record. This encoded form indicates that the patient's "Pupils are Equal (in size), Round, and Reactive to Light and Accommodation (the process of focusing on near objects)." Note that there are significant problems associated with the use of such abbreviations. Many are not standard and can have different meanings depending on the context in which they are used. For example, "MI" can mean "mitral insufficiency" (leakage in one of the heart's valves) or "myocardial infarction" (the medical term for what is commonly called a heart attack). Many hospitals try to establish a set of "acceptable" abbreviations with meanings, but the enforcement of such standardization is often unsuccessful. Other hospitals approach this challenge by not permitting use of abbreviations in the medical record, and instead require use of full-length narrative descriptions. Standard narrative expressions have often become loose standards of communication among medical personnel. Examples include "mild dyspnea (shortness of breath) on exertion," "pain relieved by antacids or milk," and "failure to thrive." Such standardized expressions are attempts to use conventional text notation as a form of summarization for otherwise heterogeneous conditions that together characterize a simple concept about a patient.

Many data used in medicine take on discrete numeric values.
These include such parameters as laboratory tests, vital signs (such as temperature and pulse rate), and certain measurements taken during the physical examination. When such numerical data are interpreted, however, the issue of precision becomes important. Can physicians distinguish reliably between a 9-cm and a 10-cm liver span when they examine a patient's abdomen? Does it make sense to report a serum sodium level to two-decimal-place accuracy? Is a 1-kg fluctuation in weight from 1 week to the next significant? Was the patient weighed on the same scale both times (i.e., could the different values reflect variation between measurement instruments rather than changes in the patient)?

Fig. 2.1 Much of the information gathered during a physician–patient encounter is written in the medical record. This was traditionally done using (a) paper notes, and now increasingly using (b) electronic health records.

In some fields of medicine, analog data in the form of continuous signals are particularly important (see Chap. 23). Perhaps the best-known example is an electrocardiogram
(ECG), a tracing of the electrical activity from a patient's heart. When such data are stored in medical records, a graphical tracing frequently is included, with a written interpretation of its meaning. There are clear challenges in determining how such data are best managed in computer-based storage systems. Visual images—acquired from machines or sketched by the physician—are another important category of data. Radiologic images or photographs of skin lesions are obvious examples. It has traditionally been common for physicians to draw simple pictures to represent abnormalities that they have observed; such drawings may serve as a basis for comparison when they or another physician next see the patient. For example, a sketch is a concise way of conveying the location and size of a nodule in the prostate gland (Fig. 2.2). In electronic health record systems, these hand drawings are increasingly being replaced in the medical record by text-based descriptions or photographs (Sanders et al. 2013). As should be clear from these examples, the idea of data is inextricably bound to the idea of data recording. Physicians and other health care personnel are taught from the outset that it is crucial that they do not trust their memory when caring for patients. They must record their observations, as well as the actions they have
taken and the rationales for those actions, for later communication to themselves and other people. A glance at a medical record will quickly reveal the wide variety of data-recording techniques that have evolved. The range goes from narrative text to commonly understood shorthand notation to cryptic symbols that only specialists can understand; for example, few physicians without specialized training know how to interpret the data-recording conventions of an ophthalmologist (Fig. 2.3). The notations may be highly structured records with brief text or numerical information, machine-generated tracings of analog signals, photographic images (of the patient or of radiologic or other studies), or drawings. This range of data-recording conventions presents significant challenges to the person implementing electronic health record systems.
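One common way for an implementer to cope with this heterogeneity is to tag each stored item with an explicit value type, so that software can dispatch on the tag rather than guess from the payload. The snippet below is a hypothetical illustration (the tag names and file path are invented for this example), not any vendor's actual record format:

```python
# A single patient record mixing narrative text, coded shorthand, a numeric
# vital sign, and a reference to an image file, each tagged with its type.
observations = [
    {"type": "narrative", "value": "Patient reports mild dyspnea on exertion."},
    {"type": "coded",     "value": "PERRLA"},              # physical-exam shorthand
    {"type": "numeric",   "value": 98.6, "units": "degF"},  # body temperature
    {"type": "image_ref", "value": "imaging/chest_xray_0042.dcm"},  # hypothetical path
]

# A consumer can process each kind of entry appropriately:
numeric_values = [o["value"] for o in observations if o["type"] == "numeric"]
```

The explicit tag is what lets one storage system hold an ophthalmologist's symbols, a transcribed narrative, and an ECG tracing side by side without losing track of how each should be displayed or analyzed.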
2.1.2 Who Collects the Data?
Health data on patients and populations are gathered by a variety of health professionals. Although conventional ideas of the healthcare team evoke images of coworkers treating ill patients, the team has much broader responsibilities than treatment per se; data collection and recording are a central part of its task.
Fig. 2.2 A physician's hand-drawn sketch of a prostate nodule. Drawings may convey precise information more easily and compactly than a textual description, but are less common in electronic health records compared to paper charts.
Fig. 2.3 An ophthalmologist's report of an eye examination. Most physicians trained in other specialties would have difficulty deciphering the symbols that the ophthalmologist has used. (Image courtesy of Nita Valikodath, MD, with permission)
Physicians are key players in the process of data collection and interpretation. They converse with a patient to gather narrative descriptive data on the chief complaint, past illnesses, family and social information, and the system review. They examine the patient, collecting pertinent data and recording them during or at the end of the visit. In addition, they generally decide what additional data to collect by ordering laboratory or radiologic studies and by observing the patient's response to therapeutic interventions (yet another form of data that contributes to patient assessment).

In both outpatient and hospital settings, nurses play a central role in making observations and recording them for future reference. The data that they gather contribute to nursing care plans as well as to the assessment of patients by physicians and by other health care staff. Thus, nurses' training includes instruction in careful and accurate observation, history taking, and examination of the patient. Because nurses typically spend more time with patients than physicians do, especially in the hospital setting, nurses often build relationships with patients that uncover information and insights that contribute to proper diagnosis, to understanding of pertinent psychosocial issues, or to proper planning of therapy or discharge management (Fig. 2.4). The role of
information systems in contributing to patient care tasks such as care planning by nurses is the subject of Chap. 19.

Fig. 2.4 Nurses often develop close relationships with patients. These relationships may allow the nurse to make observations that are missed by other staff. This ability is just one of the ways in which nurses play a key role in data collection and recording. (Photograph courtesy of Susan Ostmo, with permission)

Various other health care workers contribute to the data-collection process. Office staff and admissions personnel gather demographic and financial information. Physical or respiratory therapists record the results of their treatments and often make suggestions for further management. Laboratory personnel perform tests on biological samples, such as blood or urine, and record the results for later use by physicians and nurses. Radiology technicians perform X-ray examinations; radiologists interpret the resulting data and report their findings to the patients' physicians. Pharmacists may interview patients about their medications or about drug allergies and then monitor the patients' use of prescription drugs. Increasingly, health professionals such as physician assistants, nurse practitioners, nurse anesthetists, nurse midwives, psychologists, chiropractors, and optometrists are assuming patient care
responsibilities. As these examples suggest, many different individuals employed in health care settings gather, record, and make use of patient data in their work. Finally, there are the technological devices that generate data—laboratory instruments, imaging machines, monitoring equipment in intensive care units, and measurement devices that take a single reading (such as thermometers, ECG machines, sphygmomanometers for taking blood pressure, and spirometers for testing lung function). Sometimes such a device produces a paper report suitable for inclusion in a traditional medical record. Sometimes the device indicates a result on a gauge or traces a result that must be read by an operator and then recorded in the patient's chart. Sometimes a trained specialist must interpret the output. Increasingly, however, the devices feed their results directly into computer equipment so that the data can be analyzed or formatted for electronic storage in the electronic health record (see Chap. 16), thereby allowing access to information through computer workstations, hand-held tablets, or even mobile devices.
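As a rough illustration of this device-to-EHR data flow, consider an observation segment in the simplified style of an HL7 version 2 interface, the kind of message a monitor or laboratory instrument might emit. The segment shown is a fabricated example, and real HL7 interfaces involve far more structure and validation than this sketch:

```python
def parse_obx(segment: str) -> dict:
    """Pull the essentials out of one simplified OBX (observation) segment."""
    fields = segment.split("|")
    if fields[0] != "OBX":
        raise ValueError("not an OBX segment")
    code_parts = fields[3].split("^")  # code ^ display name ^ coding system
    return {
        "code": code_parts[0],
        "name": code_parts[1] if len(code_parts) > 1 else "",
        "value": fields[5],
        "units": fields[6] if len(fields) > 6 else "",
    }

# Fabricated example segment: a systolic blood pressure from a bedside monitor
segment = "OBX|1|NM|8480-6^Systolic blood pressure^LN||120|mm[Hg]"
obs = parse_obx(segment)
```

The point of such structured messages is that the receiving EHR can file the value against the right patient and parameter automatically, with no operator reading a gauge and transcribing the number by hand.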
2.2 Uses of Health Data
Health data are recorded for a variety of purposes. Clinical data may be needed to support the proper care of the patient from whom they were obtained, but they also may contribute to the good of society through the aggregation and analysis of data regarding populations of individuals (supporting clinical research or public health assessments; see Chaps. 20 and 28). Traditional data-recording techniques and a paper record may have worked reasonably well when care was given by a single physician over the life of a patient. However, given the increased complexity of modern health care, the broadly trained team of individuals who are involved in a patient's care, the need for multiple providers to access a patient's data and to communicate effectively with one another through the chart, and the need for aggregating clinical data from multiple individuals to support population health, the electronic health record has created new possibilities for improving the health care delivery process that were not feasible a generation ago. We will discuss these topics in more detail later in this chapter and in Chaps. 16 and 20.
2.2.1 Create the Basis for the Historical Record
Any student of science learns the importance of collecting and recording data meticulously when carrying out an experiment. Just as a scientific laboratory notebook provides a record of precisely what an investigator has done, the experimental data observed, and the rationale for intermediate decision points, medical records are intended to provide a detailed compilation of information about individual patients:
- What is the patient's history (development of a current illness; other diseases that coexist or have resolved; pertinent family, social, and demographic information)?
- What symptoms has the patient reported? When did they begin, what has seemed to aggravate them, and what has provided relief?
- What physical signs have been noted on examination?
- How have signs and symptoms changed over time?
- What laboratory results have been, or are now, available?
- What radiologic and other special studies have been performed?
- What medications are being taken and are there any allergies?
- What other interventions have been undertaken?
- What is the reasoning behind the management decisions?

Each new patient problem and its management can be viewed as a therapeutic experiment, inherently confounded by uncertainty, with the goal of answering three questions when the experiment is over:
1. What was the nature of the disease or symptom?
2. What was the treatment decision?
3. What was the outcome of that treatment?
As is true for all experiments, one purpose is to learn from experience through careful observation and recording of data. The lessons learned in a given encounter may be highly individualized (e.g., the physician may learn how a specific patient tends to respond to pain or how family interactions tend to affect the patient's response to disease). On the other hand, the value of some experiments may be derived only by pooling of data from many patients who have similar problems and through the analysis of the results of various treatment options to determine efficacy. Although laboratory research has contributed dramatically to our knowledge of human disease and treatment, it is careful observation and recording by skilled health care personnel that has always been fundamental to the effective generation of new knowledge about patient care. We learn from the aggregation of information from large numbers of patients; thus, the historical record for individual patients is of inestimable importance to clinical research.

2.2.2 Support Communication Among Providers
A central function of structured data collection and recording in health care settings is to assist personnel in providing coordinated care to a patient over time. Most patients who have significant medical conditions are seen over months or years on several occasions for one or more problems that require ongoing evaluation and treatment. Given the increasing numbers of elderly patients in many cultures and health care settings, the care given to a patient is less oriented to diagnosis and treatment of a single disease episode and increasingly focused on management of one or more chronic disorders—possibly over many years. It was once common for patients to receive essentially all their care from a single provider: the family doctor who tended both children and adults, often seeing the patient over many or all the years of that person’s life. We tend to picture such physicians as having especially close relationships with their patients—knowing the family and sharing in many of the patient’s life events, especially in smaller communities. Such
doctors nonetheless kept records of all encounters so that they could refer to data about past illnesses and treatments as a guide to evaluating future care issues. In the world of modern medicine, the emergence of subspecialization and the increasing provision of care by teams of health professionals have placed new emphasis on the central role of the medical record. Over the past several decades, shared access to a paper chart (Fig. 2.5) has largely been replaced by clinicians accessing electronic records, sometimes conferring as they look at the same computer screen (Fig. 2.6). Now the record not only contains observations by a physician for reference on the next visit but also serves as a communication mechanism among physicians and other medical personnel, such as physical or respiratory therapists, nursing staff, radiology technicians, social workers, or discharge planners. In many outpatient settings, patients receive care over time from a variety of physicians—colleagues covering for the primary physician, or specialists to whom the patient has been referred, or a managed care organization’s case manager. It is not uncommon to hear complaints from patients who remember the days when it was possible to receive essentially all their care from a single physician whom they had come to trust and who knew them well. Physicians are sensitive to this issue
Fig. 2.6 Today, similar communication sessions occur around a computer screen rather than a paper chart (see Fig. 2.5). (Photograph courtesy of Susan Ostmo, with permission)
and therefore recognize the importance of the medical record in ensuring quality and continuity of care through adequate recording of the details and logic of past interventions and ongoing treatment plans. This idea is of particular importance in a health care system in which chronic diseases, rather than care for trauma or acute infections, increasingly dominate the basis for interactions between patients and their doctors.

Fig. 2.5 One role of the medical record: a communication mechanism among health professionals who work together to plan patient care. (Photograph courtesy of Janice Anne Rohn)

2.2.3 Anticipate Future Health Problems
Providing high-quality health care involves more than responding to patients’ acute or chronic health problems. It also requires educating patients about the ways in which their environment and lifestyles can contribute to, or reduce the risk of, future development of disease. Similarly, data gathered routinely in the ongoing care of a patient may suggest that he or she is at high risk of developing a specific problem even though he or she may feel well and be without symptoms at present. Clinical data therefore are important in screening for risk factors, following patients’ risk profiles over time, and providing a basis for specific patient education or preventive interventions, such as diet, medication, or exercise. Perhaps the most common examples of such ongoing risk assessment in our society are routine monitoring for excess weight, high blood pressure, and elevated serum cholesterol levels. In these cases, abnormal data may be predictive of later symptomatic disease; optimal care requires early intervention before the complications have an opportunity to develop fully.

2.2.4 Record Standard Preventive Measures
The medical record also serves as a source of data on interventions that have been performed to prevent common or serious disorders. Sometimes the interventions involve counseling or educational programs (for example, regarding smoking cessation, measures for stopping drug abuse, safe sex practices, or dietary changes). Other important preventive interventions include immunizations: the vaccinations that begin in early childhood and continue throughout life, including special treatments administered when a person will be at particularly high risk (e.g., injections to protect people from certain highly communicable diseases, administered before travel to areas where such diseases are endemic). When a patient comes to a local hospital emergency room with a laceration, the physicians routinely check for an indication of when the patient most recently had a tetanus immunization. When easily accessible in the record (or from the patient), such data can prevent unnecessary treatments (in this case, a repeat injection) that may be associated with risk or significant cost.
2.2.5 Identify Deviations from Expected Trends
Data often are useful in medical care only when viewed as part of a continuum over time. An example is the routine monitoring of
children for normal growth and development by pediatricians (Fig. 2.7). Single data points regarding height and weight may have limited use by themselves; it is the trend in such data points observed over months or years that may provide the first clue to a medical problem. It is accordingly common for such parameters to be recorded on special charts or forms that make the trends easy to discern at a glance. Women who want to have a child often keep similar records of body temperature. By measuring temperature daily and recording the values on special charts, women can identify the slight increase in temperature that accompanies ovulation and thus may discern the days of maximum fertility. Many physicians will ask a patient to keep such graphical records so that they can later discuss the data with the patient and include the scanned or photographed graph in the electronic record for ongoing reference. Such graphs are increasingly captured and displayed for viewing by clinicians as a feature of a patient’s medical record.
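The point that trends carry information that single values do not can be sketched in a few lines of code. In this minimal illustration the visit ages, weights, and growth-rate threshold are all invented for the example; real growth assessment uses standardized growth-chart percentiles such as those shown in Fig. 2.7:

```python
# Sketch: why trends matter more than single values (hypothetical data).
# Each weight alone falls within a broad "normal" range, but the
# flattening trend across visits is what signals possible growth faltering.

def flag_growth_faltering(visits, min_gain_kg_per_month=0.15):
    """visits: chronological list of (age_months, weight_kg) pairs.
    Returns the ages at which the interval growth rate fell below threshold."""
    flags = []
    for (a0, w0), (a1, w1) in zip(visits, visits[1:]):
        rate = (w1 - w0) / (a1 - a0)  # kg gained per month over the interval
        if rate < min_gain_kg_per_month:
            flags.append(a1)
    return flags

visits = [(12, 9.6), (15, 10.1), (18, 10.25), (21, 10.3)]  # hypothetical
print(flag_growth_faltering(visits))  # flags the later, flattening intervals
```

No single visit in this toy series would raise concern, yet the computed interval rates reveal the deviation from the expected trend, exactly as the plotted growth chart does for the pediatrician.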
2.2.6 Provide a Legal Record
Another use of health data, once they are charted and analyzed, is as the foundation for a legal record to which the courts can refer if necessary. The medical record is a legal document; the responsible individual must certify or sign most of the clinical information that is recorded. In addition, the chart generally should describe and justify both the presumed diagnosis for a patient and the choice of management. We emphasized earlier the importance of recording data; in fact, data do not exist in a generally useful form unless they are recorded. The legal system stresses this point as well. Providers’ unsubstantiated memories of what they observed or why they took some action are of little value in the courtroom. The medical record is the foundation for determining whether proper care was delivered. Thus, a well-maintained record is a source of protection for both patients and their physicians.
Biomedical Data: Their Acquisition, Storage, and Use
Fig. 2.7 A pediatric growth chart. Single data points would not be useful; it is the changes in values over time that indicate whether development is progressing normally. (Source: National Center for Health Statistics in collaboration with the National Center for Chronic Disease Prevention and Health Promotion (2000). http://www.cdc.gov/growthcharts)
E. H. Shortliffe and M. F. Chiang
2.2.7 Support Clinical Research
Although experience caring for individual patients provides physicians with special skills and enhanced judgment over time, it is only by formally analyzing data collected from large numbers of patients that researchers can develop and validate new clinical knowledge of general applicability. Thus, another use of clinical data is to support research through the aggregation and statistical or other analysis of observations gathered from populations of patients (see Chap. 1). A randomized clinical trial (RCT) (see also Chaps. 15 and 29) is a common method by which specific clinical questions are addressed experimentally. RCTs typically involve the random assignment of matched groups of patients to alternate treatments when there is uncertainty about how best to manage the patients’ problem. The variables that might affect a patient’s course (e.g., age, gender, weight, coexisting medical problems) are measured and recorded. As the study progresses, data are collected meticulously to provide a record of how each patient fared under treatment and precisely how the treatment was administered. By pooling such data, sometimes after years of experimentation (depending on the time course of the disease under consideration), researchers may be able to demonstrate a statistical difference among the study groups depending on precise characteristics present when patients entered the study or on the details of how patients were managed. Such results then help investigators to define the standard of care for future patients with the same or similar problems. Medical knowledge also can be derived from the analysis of large patient data sets or registries, even when the patients were not specifically enrolled in an RCT; such analyses are often referred to as retrospective studies. Much of the research in the field of epidemiology involves analysis of population-based data of this type.
Our knowledge of the risks associated with cigarette smoking, for example, is based on irrefutable statistics derived from large populations of individuals with and without lung cancer, other pulmonary problems, and heart disease.
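At its core, the pooling described above is a matter of aggregating outcomes by group and comparing rates. A minimal sketch follows; the study arms and counts are invented, and a real analysis would of course add confidence intervals and significance testing:

```python
# Sketch: pooling patient outcomes by study arm (hypothetical counts).

def outcome_rates(records):
    """records: list of (arm, had_outcome) tuples -> {arm: outcome rate}."""
    totals, events = {}, {}
    for arm, had_outcome in records:
        totals[arm] = totals.get(arm, 0) + 1
        events[arm] = events.get(arm, 0) + (1 if had_outcome else 0)
    return {arm: events[arm] / totals[arm] for arm in totals}

# 12 of 100 treated patients had the outcome, versus 24 of 100 controls.
records = [("treatment", True)] * 12 + [("treatment", False)] * 88 \
        + [("control", True)] * 24 + [("control", False)] * 76

rates = outcome_rates(records)
print(rates)                                   # per-arm outcome rates
print(rates["control"] / rates["treatment"])   # crude risk ratio
```

Only because each record carries the same structured fields (arm, outcome) can observations from many patients be pooled this way, which is precisely why careful, uniform recording matters for research.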
2.3 Rationale for the Transition from Paper to Electronic Documentation
The preceding description of medical data and their uses emphasizes the positive aspects of information storage and retrieval in the record. During the past several decades, the United States and many other countries have gradually transitioned from traditional paper records to electronic health records. The rationale for this transition has largely been to create the potential for enhancing the record’s effectiveness for its intended uses, as summarized in the previous section.
2.3.1 Pragmatic and Logistical Issues
Recall, first, that data cannot effectively serve the delivery of health care unless they are recorded. Their optimal use depends on positive responses to the following questions:
- Can I find the data I need when I need them?
- Can I find the medical record in which they are recorded?
- Can I find the data within the record?
- Can I find what I need quickly?
- Can I read and interpret the data once I find them?
- Can I update the data reliably with new observations in a form consistent with the requirements for future access by me or other people?

The traditional paper record created situations in which people too often answered such questions in the negative. For example:
- The patient’s paper chart was too often unavailable when the health care professional needed it. It could be in use by someone else at another location; it might have been misplaced despite the record-tracking system of the hospital, clinic, or office (Fig. 2.8); or it might have been taken by someone unintentionally and now be buried on a desk.
Fig. 2.8 Storage room for paper-based medical records. These paper repositories have largely been replaced as EHRs have become more standard. (Photograph courtesy of Janice Anne Rohn)
- It could be difficult to find the information required in either the paper or electronic record. The data might have been known previously but never recorded due to an oversight by a physician or other health professional. Poor organization or the sheer size of either a paper or electronic record may lead the user to spend an inordinate amount of time searching for the data, especially for patients who have long and complicated histories.
- Paper records were notoriously difficult to read. It was not uncommon to hear one physician asking another as they peered together into a chart: “What is that word?” “Is that a two or a five?” “Whose signature is that?” Illegible and sloppy entries were too often a major obstruction to effective use of the paper chart (Fig. 2.9).
- When a paper chart was unavailable, the health care professional still had to provide patient care. Thus, providers would often make do without past data, basing their decisions instead on what the patient could tell them and on what their examination revealed. They then wrote a note for inclusion in the chart—when the chart was located! In a large institution with thousands of medical records, it is not surprising that such loose notes often failed to make it into the patient’s chart or were filed out of sequence, so that the actual chronology of management was disrupted in the record.
Fig. 2.9 Written entries were standard in paper records, yet handwritten notes could be illegible. Notes that cannot be interpreted by other people due to illegibility may cause delays in treatment or inappropriate care—an issue that is largely eliminated when EHRs are used. (Image courtesy of Emily Cole, MD, with permission)
- When patients with chronic or frequent diseases were seen over months or years, their paper records grew so large that the charts had to be broken up into multiple volumes. When a hospital clinic or emergency room ordered the patient’s chart, typically only the most recent volume was provided. Old but pertinent data might have been in early volumes that were stored offsite or were otherwise unavailable. Alternatively, an early volume could be mistaken for the most recent volume, misleading its users and resulting in documents being inserted out of sequence.
Chapter 16 describes approaches that electronic health record systems have taken toward addressing these practical problems in the use of the paper record. It is for this reason that almost all hospitals, health systems, and individual practitioners have implemented EHRs, further encouraged in the US by Federal incentive programs that helped to cover the costs of EHR acquisition and maintenance (see Chaps. 1 and 31). That said, one challenge is that electronic health records in the US have been criticized for being composed of bloated, lengthy documentation that is often focused on billing and compliance over clinical care (Chaps. 16 and 31).
2.3.2 Redundancy and Inefficiency
To be able to find data quickly in the medical record, health professionals developed a variety of techniques in paper documentation that provided redundant recording to match alternate modes of access. For example, the result of a radiologic study typically was entered on a standard radiology reporting form, which was filed in the portion of the chart labeled “X-ray.” For complicated procedures, the same data often were summarized in brief notes by radiologists in the narrative part of the chart, which they entered at the time of studies because they knew that the formal report would not make it back to the chart for 1 or 2 days. In addition, the study results often were mentioned in notes written by the patient’s admitting and consulting physicians and by the nursing staff. Although there may have been good reasons for recording such information multiple times in different ways and in different locations within the paper chart, the combined bulk of these notes accelerated the
physical growth of the document and, accordingly, complicated the chart’s logistical management. Furthermore, it became increasingly difficult to locate specific patient data as the chart succumbed to “obesity”. The predictable result was that someone would write yet another redundant entry, summarizing information that it took hours to track down – and creating potential sources of transcription error. A similar inefficiency occurred because of a tension between opposing goals in the design of reporting forms used by many laboratories. Most health personnel preferred a consistent, familiar form, often with color-coding, because it helped them to find information more quickly (Fig. 2.10). For example, a physician might know that a urinalysis report form was printed on yellow paper and recorded the bacteria count halfway down the middle column of the form. This knowledge allowed the physician to work backward quickly in the laboratory section of the chart to find the most recent urinalysis sheet and to check the bacterial count at a glance. The problem was that such forms typically stored only sparse information. It was clearly suboptimal if a rapidly growing physical chart was filled with sheets of paper that each reported only a single data element.
Fig. 2.10 Laboratory reporting forms present medical data in a consistent, familiar format (in this case a complete blood count (CBC)). (Photograph courtesy of Jimy Chen, with permission)
61 Biomedical Data: Their Acquisition, Storage, and Use
2.3.3 Influence on Clinical Research
Anyone involved in a clinical research project based on retrospective review of paper records can attest to the tedium of flipping through myriad medical records. For all the reasons described in Chap. 1, it is arduous to sit with stacks of patient records, extracting data and formatting them for structured statistical analysis, and the process is vulnerable to transcription errors. Observers often wonder how much medical knowledge is sitting untapped in old paper medical records, because there is no easy way to analyze experience across large populations of past patients without first extracting pertinent data from those charts. Let’s contrast how such a retrospective review plays out with paper versus electronic medical records. Suppose, for example, that physicians on a medical consultation service notice that patients receiving a certain common oral medication for diabetes (call it drug X) seem to be more likely to have significant postoperative hypotension (low blood pressure) than are surgical patients receiving other medications for diabetes. The doctors have based this hypothesis—that drug X influences postoperative blood pressure—on only a few recent observations, however, so they decide to look into existing hospital records to see whether this correlation has occurred with sufficient frequency to warrant a formal investigation. One efficient way to follow up on their theory from existing medical data would be to examine the hospital records of all patients who have diabetes and also have been admitted for surgery. The task would then be to examine those records (difficult and arduous with paper charts, as will be discussed shortly, but subject to automated analysis in the case of EHRs) and to note for all patients (1) whether they were taking drug X when admitted and (2) whether they had postoperative hypotension.
If the statistics showed that patients receiving drug X were more likely to have low blood pressure after surgery than were similar diabetic patients receiving alternate treatments, a controlled trial (prospective observation and data gathering) might well be appropriate. Note the distinction between retrospective chart review to investigate a question that was
not a subject of study at the time the data were collected and prospective studies in which the clinical hypothesis is known in advance and the research protocol is designed specifically to collect future data that are relevant to the question under consideration (see also Chaps. 15 and 29). Subjects are assigned randomly to different study groups to help prevent researchers—who are bound to be biased, having developed the hypothesis—from unintentionally skewing the results by assigning a specific class of patients all to one group. For the same reason, to the extent possible, the studies are double blind; i.e., neither the researchers nor the subjects know which treatment is being administered. Such blinding is of course impractical when it is obvious to patients or physicians what therapy is being given (such as surgical procedures versus drug therapy). Prospective, randomized, double-blind studies are considered the best method for determining optimal management of disease, but it is often impractical to carry out such studies, and then methods such as retrospective chart review may be used. Returning to our example, consider the problems in paper chart review that researchers used to encounter in addressing the postoperative hypotension question retrospectively. First, they would have to identify the charts of interest: the subset of medical records dealing with surgical patients who were also diabetic. In a hospital record room filled with thousands of charts, the task of chart selection was often overwhelming. Medical records departments generally did keep indexes of diagnostic and procedure codes cross-referenced to specific patients (see Sect. 2.5.1). Thus, it sometimes was possible to use such an index to find all charts in which the discharge diagnoses included diabetes and the procedure codes included major surgical procedures.
The researchers might then have compiled a list of patient identification numbers and had the individual charts pulled from the file room for review. Their next task was to examine each paper chart serially to find out what treatment each patient was receiving for diabetes at the time of the surgery and to determine whether the patient had postoperative hypotension. Finding such information tended to be
extremely time-consuming. Where should the researcher look for it? The admission drug orders might have shown what the patient received for diabetes control, but it would also have been wise to check the medication sheets to see whether the therapy was actually administered (as well as ordered) and the admission history to see whether a routine treatment for diabetes, taken right up until the patient entered the hospital, was not administered during the inpatient stay. Information about hypotensive episodes might be similarly difficult to locate. The researchers might start with nursing notes from the recovery room or with the anesthesiologist’s datasheets from the operating room, but the patient might not have been hypotensive until after leaving the recovery room and returning to the ward. So the nursing notes from the ward would need to be checked too, as well as vital signs sheets, physicians’ progress notes, and the discharge summary. It should be clear from this example that retrospective paper chart review was a laborious and tedious process and that people performing it were prone to make transcription errors and to overlook key data. EHRs offer an enormous opportunity (Chap. 16) to facilitate the chart review and clinical research process. They have obviated the need to retrieve hard-copy charts; instead, researchers are increasingly using computer-based data retrieval and analysis techniques to do most of the work (finding relevant patients, locating pertinent data, and formatting the information for statistical analyses). Researchers can use similar techniques to harness computer assistance with data management in prospective clinical trials (Chap. 29).
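With EHR data in structured form, the chart review just described reduces to a query. The sketch below poses the drug-X question against a toy in-memory cohort; the record structure, field names, and values are all invented for illustration, since a real system would query a clinical database or data warehouse:

```python
# Sketch: the drug-X hypotension question as a query over structured
# EHR-style data. All records and field names are hypothetical.

patients = [
    {"id": 1, "diabetic": True,  "surgery": True, "meds": ["drugX"], "postop_hypotension": True},
    {"id": 2, "diabetic": True,  "surgery": True, "meds": ["drugY"], "postop_hypotension": False},
    {"id": 3, "diabetic": True,  "surgery": True, "meds": ["drugX"], "postop_hypotension": True},
    {"id": 4, "diabetic": True,  "surgery": True, "meds": ["drugY"], "postop_hypotension": True},
    {"id": 5, "diabetic": False, "surgery": True, "meds": [],        "postop_hypotension": False},
]

# Select the cohort: diabetic patients admitted for surgery.
cohort = [p for p in patients if p["diabetic"] and p["surgery"]]

# Split by exposure and compare hypotension rates.
on_x  = [p for p in cohort if "drugX" in p["meds"]]
off_x = [p for p in cohort if "drugX" not in p["meds"]]
rate = lambda group: sum(p["postop_hypotension"] for p in group) / len(group)

print(rate(on_x), rate(off_x))  # exposed vs. unexposed hypotension rates
```

The cohort selection, exposure check, and outcome tally that once required pulling and paging through physical charts each become a one-line filter over coded fields, which is the essence of the opportunity EHRs create for retrospective research.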
2.3.4 The Passive Nature of Paper Records
The traditional manual system has another limitation that would have been meaningless until the emergence of the computer age. A manual archival system is inherently passive; the charts sit waiting for something to be done with them. They are insensitive to the characteristics of the data recorded within their pages,
such as legibility, accuracy, or implications for patient management. They cannot take an active role in responding appropriately to those implications. EHR systems have changed our perspective on what health professionals can expect from the medical chart. Automated record systems introduce new opportunities for dynamic responses to the data that are recorded in them. As described in many of the chapters to follow, computational techniques for data storage, retrieval, and analysis make it feasible to develop record systems that (1) monitor their contents and generate warnings or advice for providers based on single observations or on logical combinations of data; (2) provide automated quality control, including the flagging of potentially erroneous data; or (3) provide feedback on patient-specific or population-based deviations from desirable standards.

2.4 New Kinds of Data and the Resulting Challenges
The revolution in human genetics that emerged with the Human Genome Project in the 1990s has already had a profound effect on the diagnosis, prognosis, and treatment of disease (Vamathevan and Birney 2017). The vast amounts of data that are generated in biomedical research (see Chaps. 11 and 28), and that can be pooled from patient datasets to support clinical research (Chap. 29) and public health (Chap. 20), have created new opportunities as well as challenges. Researchers are finding that the amount of data they must manage and assess has become so large that they often lack either the capabilities or the expertise to handle the required analytics. This problem, sometimes dubbed the “big data” problem, has gathered the attention of government agencies as well.2
2 Big Data Senior Steering Group. The Federal Big Data Research and Development Strategic Plan. Available at: https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/NSTC/bigdatardstrategicplan-nitrd_final-051916.pdf (Accessed 6/28/2019).
Some suggest that the genetic material itself will become our next-generation method for storing large amounts of data (Erlich and Zielinski 2017). Data analytics, and the management of large amounts of genomic/proteomic or clinical/public-health data, have accordingly become major research topics and key opportunities for new methodology development by biomedical informatics and data scientists (Adler-Milstein and Jha 2013; Brennan et al. 2018; Bycroft et al. 2018). The issues that arise are practical as well as scientifically interesting. For example, developers of EHRs have begun to grapple with questions regarding how they might store an individual’s personal genome within the electronic health record. New standards will be required, and tactical questions need answering regarding, for example, whether to store an entire genome or only those components (e.g., genetic markers) that are already reasonably well understood (Masys et al. 2012; Haendel et al. 2018). In cancer, for example, where mutations in cell lines can occur, an individual may actually have many genomes represented among his or her cells. These issues will undoubtedly influence the evolution of data systems and EHRs, as well as the growth of precision medicine (see Chap. 30), in the years ahead (Relling and Evans 2015).
2.5 The Structure of Clinical Data
Scientific disciplines generally develop a precise terminology or notation that is standardized and accepted by all workers in the field. Consider, for example, the universal language of chemistry embodied in chemical formulae, the precise definitions and mathematical equations used by physicists, the predicate calculus used by logicians, or the conventions for describing circuits used by electrical engineers. Medicine is remarkable for its failure to develop a widely accepted standardized vocabulary and nomenclature, and many observers believe that a true “scientific” basis for the field will be impossible until this problem is addressed (see Chap. 8). Other people argue that common references to the “art” of medicine reflect an important distinction between medicine and the “hard” sciences; these people question whether it is possible to introduce too much standardization into a field that prides itself on humanism. The debate has been accentuated by the introduction of computers for data management, because such machines tend to demand conformity to data standards and definitions. Otherwise, issues of data retrieval and analysis are confounded by discrepancies between the meanings intended by the observers or recorders and those intended by the individuals retrieving information or doing data analysis. What is an “upper respiratory infection”? Does it include infections of the trachea or of the main stem bronchi? How large does the heart have to be before we can refer to “cardiomegaly”? How should we deal with the plethora of disease names based on eponyms (e.g., Alzheimer’s disease, Hodgkin’s disease) that are not descriptive of the illness and may not be familiar to all practitioners? What do we mean by an “acute abdomen”? Are the boundaries of the abdomen well agreed on? What are the time constraints that correspond to “acuteness” of abdominal pain? Is an “ache” a pain? What about “occasional” cramping? Imprecision and the lack of a standardized vocabulary are particularly problematic when we wish to aggregate data recorded by multiple health professionals or to analyze trends over time. Without a controlled, predefined vocabulary, data interpretation is inherently complicated, and the automatic summarization of data may be impossible. For example, one physician might note that a patient has “shortness of breath.” Later, another physician might note that she has “dyspnea.” Unless these terms are designated as synonyms, an automated program will fail to indicate that the patient had the same problem on both occasions. Regardless of arguments regarding the “artistic” elements in medicine, the need for health personnel to communicate effectively is clear both in acute care settings and when patients are seen over long periods.
Both high-quality care and scientific progress depend on some standardization in terminology. Otherwise, differences in intended meaning or in defining criteria will lead to miscommunication, improper interpretation, and potentially negative consequences for the patients involved.
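The dyspnea example illustrates concretely what a controlled terminology buys us computationally. The sketch below maps free-text terms to a shared concept identifier; the concept code and synonym table are invented for illustration, whereas production systems would map terms to a standard terminology such as SNOMED CT:

```python
# Sketch: mapping free-text symptom terms to a shared concept so that
# records from different clinicians aggregate correctly. The concept
# code "C-DYSPNEA" and the synonym table are hypothetical.

SYNONYMS = {
    "shortness of breath": "C-DYSPNEA",
    "dyspnea": "C-DYSPNEA",
    "sob": "C-DYSPNEA",
}

def normalize(term):
    """Return the concept code for a term, or a sentinel if unmapped."""
    return SYNONYMS.get(term.strip().lower(), "C-UNMAPPED")

notes = ["Shortness of breath", "dyspnea"]
codes = {normalize(n) for n in notes}
print(codes)  # both notes resolve to a single concept
```

Without the synonym table, a program counting occurrences of each literal string would report two unrelated findings; with it, the two notes correctly collapse to one problem recorded on two occasions.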
Given the lack of formal definitions for many medical terms, it is remarkable that medical workers communicate as well as they do. Only occasionally is the care for a patient clearly compromised by miscommunication. If EHRs are to become dynamic and responsive manipulators of patient data, however, their encoded logic must be able to presume a specific meaning for the terms and data elements entered by the observers. This point is discussed in greater detail in Chap. 8, which deals in part with the multiple efforts to develop healthcare computing standards, including a shared, controlled terminology for biomedicine.
2.5.1 Coding Systems

We are used to seeing figures regarding the growing incidences of certain types of tumors, deaths from influenza during the winter months, and similar health statistics that we tend to take for granted. How are such data accumulated? Their role in health planning and health care financing is clear, but electronic health records provide the infrastructure for aggregating individual patient data to learn more about the health status of the populations in various communities (see Chap. 20). Because of the needs to know about health trends for populations and to recognize epidemics in their early stages, there are various health-reporting requirements for hospitals (as well as other public organizations) and practitioners. For example, cases of gonorrhea, syphilis, and tuberculosis generally must be reported to local public-health organizations, which code the data to allow trend analyses over time. The Centers for Disease Control and Prevention (CDC) in Atlanta then pools regional data and reports national as well as local trends in disease incidence, bacterial-resistance patterns, etc. Another kind of reporting involves the coding of all discharge diagnoses for hospitalized patients, plus coding of certain procedures (e.g., type of surgery) that were performed during the hospital stay. Such codes are reported to state and federal health-planning and analysis agencies and also are used internally at the institution for case-mix analysis (determining the relative frequencies of various disorders in the hospitalized population and the average length of stay for each disease category), for quality improvement, and for research.

For such data to be useful, the codes must be well defined as well as uniformly applied and accepted. The World Health Organization publishes a diagnostic coding scheme called the International Classification of Diseases (ICD). The 10th revision of this standard, ICD-10-CM (clinical modification),3 is currently in use in much of the world (see Chap. 8). ICD-10-CM is used by all nonmilitary hospitals in the United States for discharge coding, and must be reported on the bills submitted to most insurance companies (Fig. 2.11). Pathologists have developed another widely used diagnostic coding scheme; originally known as the Systematized Nomenclature of Pathology (SNOP), it was expanded to the Systematized Nomenclature of Medicine (SNOMED) and then merged with the Read Clinical Terms from Great Britain to become SNOMED-CT (Stearns et al. 2001; Lee et al. 2014). In recent years, support for SNOMED-CT was assumed by the International Health Terminology Standards Development Organization, based in Copenhagen, now renamed SNOMED International and relocated to London.4 Another coding scheme, developed by the American Medical Association, is the Current Procedural Terminology (CPT) (Hirsch et al. 2015). It is similarly widely used in producing bills for services rendered to patients. More details on such schemes are provided in Chap. 8. What warrants emphasis here, however, is the motivation for the codes’ development: health care personnel need standardized terms that can support pooling of data for analysis and can provide criteria for determining charges for individual patients.

The historical roots of a coding system reveal themselves as limitations or idiosyncrasies when the system is applied in more general clinical settings. For example, ICD-10-CM was derived from a classification scheme developed for epidemiologic reporting. Consequently, it has over 60 separate codes for describing tuberculosis.

3 http://www.icd10data.com/ (Accessed 11/1/2019).
4 http://snomed.org/ (Accessed 5/6/2019).
65 Biomedical Data: Their Acquisition, Storage, and Use
J45 Asthma Includes: allergic (predominantly) asthma, allergic bronchitis NOS, allergic rhinitis with asthma, atopic asthma, extrinsic allergic asthma, hay fever with asthma, idiosyncratic asthma, intrinsic nonallergic asthma, nonallergic asthma Use additional code to identify: exposure to environmental tobacco smoke (Z77.22), exposure to tobacco smoke in the perinatal period (P96.81), history of tobacco use (Z87.891), occupational exposure to environmental tobacco smoke (Z57.31), tobacco dependence (F17.-), tobacco use (Z72.0) Excludes: detergent asthma (J69.8), eosinophilic asthma (J82), lung diseases due to external agents (J60-J70), miner's asthma (J60), wheezing NOS (R06.2), wood asthma (J67.8), asthma with chronic obstructive pulmonary disease (J44.9), chronic asthmatic (obstructive) bronchitis (J44.9), chronic obstructive asthma (J44.9) J45.2 Mild intermittent asthma J45.20 Mild intermittent asthma, uncomplicated Mild intermittent asthma NOS J45.21 Mild intermittent asthma with (acute) exacerbation J45.22 Mild intermittent asthma with status asthmaticus J45.3 Mild persistent asthma J45.30 Mild persistent asthma, uncomplicated Mild persistent asthma NOS J45.31 Mild persistent asthma with (acute) exacerbation J45.32 Mild persistent asthma with status asthmaticus J45.4 Moderate persistent asthma J45.40 Moderate persistent asthma, uncomplicated Moderate persistent asthma NOS J45.41 Moderate persistent asthma with (acute) exacerbation J45.42 Moderate persistent asthma with status asthmaticus J45.5 Severe persistent asthma J45.50 Severe persistent asthma, uncomplicated Severe persistent asthma NOS J45.51 Severe persistent asthma with (acute) exacerbation J45.52 Severe persistent asthma with status asthmaticus J45.9 Other and unspecified asthma J45.90 Unspecified asthma Asthmatic bronchitis NOS Childhood asthma NOS Late onset asthma J45.901 Unspecified asthma with (acute) exacerbation J45.902 Unspecified asthma with status asthmaticus J45.909 Unspecified asthma, 
uncomplicated Asthma NOS J45.99 Other asthma J45.990 Exercise induced bronchospasm J45.991 Cough variant asthma J45.998 Other asthma .. Fig. 2.11 The subset of disease categories for asthma taken from ICD-10-CM. (Source: Centers for Medicare and Medicaid Services, US Department of Health and
Human Services, 7 https://www.cms.gov/Medicare/Coding/ICD10/2018-ICD-10-CM-and-GEMs.html, accessed June 28, 2019)
culosis infections. SNOMED versions have long permitted coding of pathologic findings in exquisite detail but only in later years began to introduce codes for expressing the dimensions of a patient’s functional status. In a particular clinical setting, none of the common coding schemes is likely to be completely satisfactory. In some cases, the granularity of the code will be too coarse; on the one hand, a hematologist
(person who studies blood diseases) may want to distinguish among a variety of hemoglobinopathies (disorders of the structure and function of hemoglobin) lumped under a single code in ICD-10-CM. On the other hand, another practitioner may prefer to aggregate many individual codes—e.g., those for active tuberculosis—into a single category to simplify the coding and retrieval of data.
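The granularity tension described above is eased somewhat by the hierarchical structure of the codes themselves: because ICD-10-CM codes are largely prefix-consistent, fine-grained codes can be rolled up into coarser categories by string prefix. A minimal sketch, using codes from Fig. 2.11 (the `aggregate` helper is illustrative, not part of any standard tooling):

```python
# Rolling fine-grained ICD-10-CM codes up into coarser categories by
# exploiting the hierarchical structure of the code strings. The codes
# are a small subset of the asthma family shown in Fig. 2.11.

ASTHMA_CODES = {
    "J45.20": "Mild intermittent asthma, uncomplicated",
    "J45.21": "Mild intermittent asthma with (acute) exacerbation",
    "J45.30": "Mild persistent asthma, uncomplicated",
    "J45.40": "Moderate persistent asthma, uncomplicated",
    "J45.50": "Severe persistent asthma, uncomplicated",
    "J45.901": "Unspecified asthma with (acute) exacerbation",
}

def aggregate(codes, prefix: str):
    """Collect every code that falls under a given category prefix."""
    return sorted(c for c in codes if c.startswith(prefix))

# All mild intermittent asthma codes roll up under "J45.2":
print(aggregate(ASTHMA_CODES, "J45.2"))   # → ['J45.20', 'J45.21']
# The entire asthma family rolls up under "J45":
print(len(aggregate(ASTHMA_CODES, "J45")))  # → 6
```

This is exactly the kind of aggregation a practitioner might want for retrieval ("all asthma admissions"), while the hematologist's complaint stands: if two distinct disorders share one code, no amount of prefix manipulation can separate them.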
E. H. Shortliffe and M. F. Chiang
Such schemes cannot be effective unless health care providers accept them. There is an inherent tension between the need for a coding system that is general enough to cover many different patients and the need for precise and unique terms that accurately apply to a specific patient and do not unduly constrain physicians' attempts to describe what they observe. Yet if physicians view the EHR as a blank sheet of paper on which any unstructured information can be written, the data they record will be unsuitable for dynamic processing, clinical research, and health planning. The challenge is to learn how to meet all these needs.

Researchers at many institutions worked for over two decades to develop the Unified Medical Language System (UMLS), a common structure that ties together the various vocabularies that have been created. At the same time, the developers of specific terminologies are continually working to refine and expand their independent coding schemes (Humphreys et al. 1998) (see Chap. 8).
2.5.2 The Data-to-Knowledge Spectrum
A central focus in biomedical informatics is the information base that constitutes the "substance of medicine." Workers in the field have tried to clarify the distinctions among three terms frequently used to describe the content of computer-based systems: data, information, and knowledge (Blum 1986; Bernstam et al. 2010). These terms are often used interchangeably. In this volume, we shall refer to a datum as a single observational point that characterizes a relationship. It generally can be regarded as the value of a specific parameter for a particular object (e.g., a patient) at a given point in time. The term information refers to analyzed data that have been suitably curated and organized so that they have meaning. Data do not constitute information until they have been organized in some way, e.g., for analysis or display. Knowledge, then, is derived through the formal or informal analysis (or interpretation) of information that was in turn derived from data. Thus, knowledge includes the results of formal studies and also common sense facts, assumptions, heuristics (strategic rules of thumb), and models—any of which may reflect the experience or biases of people who interpret the primary data and the resulting information.

The observation that patient Brown has a blood pressure of 180/110 is a datum, as is the report that the patient has had a myocardial infarction (heart attack). When researchers pool such data, creating information, subsequent analysis may determine that patients with high blood pressure are more likely to have heart attacks than are patients with normal or low blood pressure. This analysis of organized data (information) has produced a piece of knowledge about the world. A physician's belief that prescribing dietary restriction of salt is unlikely to be effective in controlling high blood pressure in patients of low economic standing (because the latter are less likely to be able to afford special low-salt foods) is an additional personal piece of knowledge—a heuristic that guides physicians in their decision making.

Note that the appropriate interpretation of these definitions depends on the context. Knowledge at one level of abstraction may be considered data at higher levels. A blood pressure of 180/110 mmHg is a raw piece of data; the statement that the patient has hypertension is an interpretation of several such data and thus represents a higher level of information. As input to a diagnostic decision aid, however, the presence or absence of hypertension may be requested, in which case the presence of hypertension is treated as a data item.

A database is a collection of individual observations without any summarizing analysis. An EHR system is thus primarily viewed as a database—the place where patient data are stored. When properly collated and pooled with other data, these elements in the EHR provide information about the patient. A knowledge base, on the other hand, is a collection of facts, heuristics, and models that can be used for problem solving and analysis of organized data (information).
If the knowledge base provides sufficient structure, including semantic links among knowledge items, the computer itself may be able to apply that knowledge as an aid to case-based problem solving. Many decision-support systems have been called knowledge-based systems, reflecting this distinction between knowledge bases and databases (see Chap. 26).
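The data-to-information-to-knowledge spectrum can be caricatured in a few lines of code. The 140/90 mmHg cutoff and the stored association below are illustrative assumptions for the sketch, not clinical guidance:

```python
# A toy illustration of the data -> information -> knowledge spectrum.

# Datum: one observation about one patient at one point in time.
datum = {"patient": "Brown", "systolic": 180, "diastolic": 110}

# Information: the datum interpreted against a definition (the 140/90
# threshold here is purely illustrative).
def has_hypertension(reading) -> bool:
    return reading["systolic"] >= 140 or reading["diastolic"] >= 90

# Knowledge: a generalization produced by some earlier analysis of
# pooled, organized data, stored here as a simple association.
knowledge = {"hypertension": "elevated risk of myocardial infarction"}

if has_hypertension(datum):
    print(f"Patient {datum['patient']}: hypertension -> {knowledge['hypertension']}")
```

Note how the same boolean can change roles with context, as the text observes: to the classifier it is an interpretation (information), but to a downstream diagnostic aid it would be just another input datum.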
2.6 Strategies of Clinical Data Selection and Use
What do we mean by selectivity in data collection and recording? It is precisely this process that often is viewed as a central part of the "art" of medicine, an element that accounts for individual styles and the sometimes marked distinctions among clinicians. As is discussed with numerous clinical examples in Chaps. 3 and 4, the idea of selectivity implies an ongoing decision-making process that guides data collection and interpretation. Attempts to understand how expert clinicians internalize this process, and to formalize the ideas so that they can better be taught and explained, are central in biomedical informatics research. Improved guidelines for such decision making, derived from research activities in biomedical informatics, not only are enhancing the teaching and practice of medicine (Shortliffe 2010) but also are providing insights that suggest methods for developing computer-based decision-support tools.
It is illusory to conceive of a “complete clinical data set.” All medical databases, and medical records, are necessarily incomplete because they reflect the selective collection and recording of data by the health care personnel responsible for the patient. There can be marked interpersonal differences in both style and problem solving that account for variations in the way practitioners collect and record data for the same patient under the same circumstances. Such variations do not necessarily reflect good practices, however, and much of medical education is directed at helping physicians and other health professionals to learn what observations to make, how to make them (generally an issue of technique), how to interpret them, and how to decide whether they warrant formal recording. An example of this phenomenon is the difference between the first medical history, physical examination, and summarizing report developed by a medical student and the similar process undertaken by a seasoned clinician examining the same patient. Medical students tend to work from comprehensive mental outlines of questions to ask, physical tests to perform, and additional data to collect. Because they have not developed skills of selectivity, the process of taking a medical history and performing a physical examination may take more than 1 h, after which students develop extensive reports of what they observed and how they have interpreted their observations. It clearly would be impractical, inefficient, and inappropriate for physicians in practice to spend this amount of time assessing every new patient. Thus, part of the challenge for the neophyte is to learn how to ask only the questions that are necessary, to perform only the examination components that are required, and to record only those data that will be pertinent in justifying the ongoing diagnostic approach and in guiding the future management of the patient.
2.6.1 The Hypothetico-Deductive Approach
Studies of clinical decision makers have shown that strategies for data collection and interpretation may be imbedded in an iterative process known as the hypothetico-deductive approach (Elstein et al. 1978; Kassirer and Gorry 1978). As medical students learn this process, their data collection becomes more focused and efficient, and their medical records become more compact. The central idea is one of sequential, staged data collection, followed by data interpretation and the generation of hypotheses, leading to hypothesis-directed selection of the next most appropriate data to be collected. As data are collected at each stage, they are added to the growing database of observations and are used to reformulate or refine the active hypotheses. This process is iterated until one hypothesis reaches a threshold level of certainty (e.g., it is proved to be true, or at least the uncertainty is reduced to a satisfactory level). At that point, a management, disposition, or therapeutic decision can be made. The diagram in Fig. 2.12 clarifies this process. As is shown, data collection begins
Fig. 2.12 A schematic view of the hypothetico-deductive approach. The process of medical data collection and treatment is intimately tied to an ongoing process of hypothesis generation and refinement. See text for full discussion. ID patient identification, CC chief complaint, HPI history of present illness, PMH past medical history, FH family history, Social social history, ROS review of systems, PE physical examination. [Flowchart: the patient presents with a problem; initial questions (ID, CC, HPI) generate initial hypotheses; further questions (HPI, PMH, FH, Social, ROS), the physical examination, laboratory tests, radiologic studies, ECGs, etc. refine the hypotheses; the most likely diagnosis is selected and the patient is treated accordingly; observed results lead to more questions, ongoing chronic-disease management, recovery with no further care required, or death.]
when the patient presents to the physician with some issue (a symptom or disease, or perhaps the need for routine care). The physician generally responds with a few questions that allow one to focus rapidly on the nature of the problem. In the written report, the data collected with these initial questions typically are recorded as the patient identification, chief complaint, and initial portion of the history of the present illness. Studies have shown that an experienced physician will have an initial set of hypotheses (theories) in mind after hearing the patient's response to the first six or seven questions (Elstein et al. 1978). These hypotheses then serve as the basis for selecting additional questions. As shown in Fig. 2.12, answers to these additional questions allow the physician to refine hypotheses about the source of the patient's problem. Physicians refer to the set of active hypotheses as the differential diagnosis for a patient; the differential diagnosis comprises the set of possible diagnoses among which the physician must distinguish to determine how best to administer treatment. Note that the question selection process is inherently heuristic; i.e., it is personalized and efficient, but it is not guaranteed to collect every piece of information that might be pertinent. Human beings use heuristics all the time in their decision making because it often is impractical or impossible to use an exhaustive problem-solving approach. A common example of heuristic problem solving is the playing of a complex game such as chess. Because it would require an enormous amount of time to define all the possible moves and countermoves that could ensue from a given board position, expert chess players develop personal heuristics for assessing the game at any point and then selecting a strategy for how best to proceed. Differences among such heuristics account in part for variations in observed expertise.

Physicians have developed safety measures, however, to help them to avoid missing important issues that they might not discover when collecting data in a hypothesis-directed fashion when taking the history of a patient's present illness (Pauker et al. 1976). These measures tend to be focused in four general categories of questions that follow the collection of information about the chief complaint: past medical history, family history, social history, and a brief review of systems in which the physician asks some general questions about the state of health of each of the major organ systems in the
body. Occasionally, the physician discovers entirely new problems or finds important information that modifies the hypothesis list or modulates the treatment options available (e.g., if the patient reports a serious past drug reaction or allergy). When physicians have finished asking questions, the refined hypothesis list (which may already be narrowed to a single diagnosis) then serves as the basis for a focused physical examination. By this time, physicians may well have expectations of what they will find on examination or may have specific tests in mind that will help them to distinguish among still active hypotheses about diseases based on the questions that they have asked. Once again, as in the question-asking process, focused hypothesis-directed examination is augmented with general tests that occasionally turn up new abnormalities and generate hypotheses that the physician did not expect on the basis of the medical history alone. In addition, unexplained findings on examination may raise issues that require additional history taking. Thus, the asking of questions generally is partially integrated with the examination process. When physicians have completed the physical examination, their refined hypothesis list may be narrowed sufficiently for them to undertake specific treatment. Additional data gathering may still be necessary, however. Such testing is once again guided by the current hypotheses. The options available include laboratory tests (of blood, urine, other body fluids, or biopsy specimens), radiologic studies (X-ray examinations, nuclear-imaging scans, computed tomography (CT) studies, magnetic resonance scans, sonograms, or any of a number of other imaging modalities), and other specialized tests (electrocardiograms (ECGs), electroencephalograms, nerve conduction studies, and many others), as well as returning to the patient to ask further questions or perform additional physical examination. 
As the results of such studies become available, physicians constantly revise and refine their hypothesis list. Ultimately, physicians are sufficiently certain about the source of a patient’s problem to be able to develop a specific management plan. Treatments are administered, and the patient
is observed. Note that data collected to measure response to treatment may themselves be used to synthesize information that affects the hypotheses about a patient's illness. If patients do not respond to treatment, it may mean that their disease is resistant to that therapy and that their physicians should try an alternate approach, or it may mean that the initial diagnosis was incorrect and that physicians should consider alternate explanations for the patient's problem. The patient may remain in a cycle of treatment and observation for a long time, as shown in Fig. 2.12. This long cycle reflects the nature of chronic-disease management—an aspect of medical care that is accounting for an increasing proportion of the health care community's work (and an increasing proportion of health care cost). Alternatively, the patient may recover and no longer need therapy, or he or she may die. Although the process outlined in Fig. 2.12 is oversimplified in many regards, it is generally applicable to the process of data collection, diagnosis, and treatment in most areas of medicine. Note that the hypothesis-directed process of data collection, diagnosis, and treatment is inherently knowledge-based. It is dependent not only on a significant fact base that permits proper interpretation of data and selection of appropriate follow-up questions and tests but also on the effective use of heuristic techniques that characterize individual expertise. Another important issue, addressed in Chap. 3, is the need for physicians to balance financial costs and health risks of data collection against the perceived benefits to be gained when those data become available.
It costs nothing but time to examine the patient at the bedside or to ask an additional question, but if the data being considered require, for example, X-ray exposure, coronary angiography, or a CT scan of the head (all of which have associated risks and costs), then it may be preferable to proceed with treatment in the absence of full information. Differences in the assessment of cost-benefit trade-offs in data collection, and variations among individuals in their willingness to make decisions under uncertainty, often account for differences of opinion among collaborating physicians.
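The staged collect-interpret-refine cycle described above can be sketched as sequential Bayesian updating: each new finding reweights the active hypotheses (the differential diagnosis), and data collection stops once one hypothesis crosses a certainty threshold. Every disease name, finding, and likelihood value below is invented purely for illustration:

```python
# A minimal sketch of the hypothetico-deductive cycle as sequential
# Bayesian updating over a differential diagnosis. All diseases,
# findings, and probabilities are illustrative inventions.

likelihoods = {  # P(finding present | disease)
    "fever": {"pneumonia": 0.8, "bronchitis": 0.4, "pulmonary embolus": 0.2},
    "infiltrate on chest x-ray": {"pneumonia": 0.9, "bronchitis": 0.1,
                                  "pulmonary embolus": 0.1},
}

def update(hypotheses, finding, present=True):
    """One refinement step: apply Bayes' rule for a single finding."""
    posterior = {}
    for disease, prior in hypotheses.items():
        p = likelihoods[finding][disease]
        posterior[disease] = prior * (p if present else 1.0 - p)
    total = sum(posterior.values())
    return {d: v / total for d, v in posterior.items()}

THRESHOLD = 0.85  # a "satisfactory" level of certainty
differential = {"pneumonia": 0.3, "bronchitis": 0.5, "pulmonary embolus": 0.2}
for finding in ["fever", "infiltrate on chest x-ray"]:
    differential = update(differential, finding, present=True)
    best = max(differential, key=differential.get)
    if differential[best] >= THRESHOLD:
        break  # certain enough to select a diagnosis and treat
print(best, round(differential[best], 2))  # → pneumonia 0.9
```

The stopping rule is what makes the process selective: once the threshold is reached, no further questions are asked, exactly as the text describes for the experienced clinician.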
2.6.2 The Relationship Between Data and Hypotheses
We wrote rather glibly in Sect. 2.6.1 about the "generation of hypotheses from data"; now we need to ask: What precisely is the nature of that process? As is discussed in Chap. 4, researchers with a psychological orientation have spent much time trying to understand how expert problem solvers evoke hypotheses (Elstein et al. 1978; Arocha et al. 2005), and the traditional probabilistic decision sciences have much to say about that process as well. We provide only a brief introduction to these ideas here; they are discussed in greater detail in Chaps. 3 and 4.

When an observation evokes a hypothesis (e.g., when a clinical finding makes a specific diagnosis come to mind), the observation presumably has some close association with the hypothesis. What might be the characteristics of that association? Perhaps the finding is almost always observed when the hypothesis turns out to be true. Is that enough to explain hypothesis generation? A simple example will show that such a simple relationship is not enough to explain the evocation process. Consider the hypothesis that a patient is pregnant and the observation that the patient is biologically female. Clearly, all pregnant patients are female. When a new patient is observed to be female, however, the possibility that the patient is pregnant is not immediately evoked. Thus, female gender is a highly sensitive indicator of pregnancy (there is a 100% certainty that a pregnant patient is female), but it is not a good predictor of pregnancy (most females are not pregnant). The idea of sensitivity—the likelihood that a given datum will be observed in a patient with a given disease or condition—is an important one, but it will not alone account for the process of hypothesis generation in medical diagnosis.

Perhaps the clinical manifestation seldom occurs unless the hypothesis turns out to be true; is that enough to explain hypothesis generation? This idea seems to be a little closer to the mark.
Suppose a given datum is never seen unless a patient has a specific disease. For example, a Pap smear (a smear of cells swabbed from the cervix, at the opening to the uterus, treated with
Papanicolaou's stain, and then examined under the microscope) with grossly abnormal cells (called class IV findings) is never seen unless the woman has cancer of the cervix or uterus. Such tests are called pathognomonic. Not only do they evoke a specific diagnosis but they also immediately prove it to be true. Unfortunately, there are few pathognomonic tests in medicine, and they are often of relatively low sensitivity (that is, although having a particular test result makes the diagnosis, few patients with the condition may actually have that finding).

More commonly, a feature is seen in one disease or disease category more frequently than it is in others, but the association is not absolute. For example, there are few disease entities other than infections that elevate a patient's white blood cell count. Certainly it is true, for example, that leukemia can raise the white blood cell count, as can the use of certain medications, but most patients who do not have infections will have normal white blood cell counts. An elevated white count therefore does not prove that a patient has an infection, but it does tend to evoke or support the hypothesis that an infection is present. The word used to describe this relationship is specificity. An observation is highly specific for a disease if it is generally not seen in patients who do not have that disease. A pathognomonic observation is 100% specific for a given disease. When an observation is highly specific for a disease, it tends to evoke that disease during the diagnostic or data-gathering process.

By now, you may have realized that there is a substantial difference between a physician viewing test results that evoke a disease hypothesis and that physician being willing to act on the disease hypothesis.
Yet even experienced physicians sometimes fail to recognize that, although they have made an observation that is highly specific for a given disease, it may still be more likely that the patient has other diseases (and does not have the suspected one) unless (1) the finding is pathognomonic or (2) the suspected disease is considerably more common than are the other diseases that can cause the observed abnormality. This mistake is one of the most common errors of intuition in the medical decision-making process. To explain the basis for this confusion in
more detail, we must introduce two additional terms: prevalence and predictive value. The prevalence of a disease is simply the percentage of a population of interest that has the disease at any given time. A particular disease may have a prevalence of only 5% in the general population (1 person in 20 will have the disease) but have a higher prevalence in a specially selected subpopulation. For example, black-lung disease has a low prevalence in the general population but has a much higher prevalence among coal miners, who develop black lung from inhaling coal dust. The task of diagnosis therefore involves updating the probability that a patient has a disease from the baseline rate (the prevalence in the population from which the patient was selected) to a post-test probability that reflects the test results. For example, the probability that any given person in the United States has lung cancer is low (i.e., the prevalence of the disease is low), but the chance increases if his or her chest X-ray examination shows a possible tumor. If the patient were a member of the population composed of cigarette smokers in the United States, however, the prevalence of lung cancer would be higher. In this case, the identical chest X-ray report would result in an even higher updated probability of lung cancer than it would had the patient been selected from the population of all people in the United States.

The predictive value (PV) of a test is simply the post-test (updated) probability that a disease is present based on the results of a test. If an observation supports the presence of a disease, the PV will be greater than the prevalence (also called the pretest risk). If the observation tends to argue against the presence of a disease, the PV will be lower than the prevalence. For any test and disease, then, there is one PV if the test result is positive and another PV if the test result is negative. These values are typically abbreviated PV+ (the PV of a positive test) and PV− (the PV of a negative test).

The process of hypothesis generation in medical diagnosis thus involves both the evocation of hypotheses and the assignment of a likelihood (probability) to the presence of a specific disease or disease category. The PV of a positive test depends on the test's sensitivity and specificity, as well as the prevalence of the disease. The formula that describes the relationship precisely is:

PV+ = (sensitivity × prevalence) / [(sensitivity × prevalence) + (1 − specificity) × (1 − prevalence)]

There is a similar formula for defining PV− in terms of sensitivity, specificity, and prevalence. Both formulae can be derived from simple probability theory. Note that positive tests with high sensitivity and specificity may still lead to a low post-test probability of the disease (PV+) if the prevalence of that disease is low. You should substitute values in the PV+ formula to convince yourself that this assertion is true. It is this relationship that tends to be poorly understood by practitioners and that often is viewed as counterintuitive (which shows that your intuition can misguide you!). Note also (by substitution into the formula) that test sensitivity and disease prevalence can be ignored only when a test is pathognomonic (i.e., when its specificity is 100%, which mandates that PV+ be 100%). The PV+ formula is one of many forms of Bayes' theorem, a rule for combining probabilistic data that is generally attributed to the work of Reverend Thomas Bayes in the 1700s. Bayes' theorem is discussed in greater detail in Chap. 3.
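A quick way to internalize the counterintuitive effect of prevalence is to compute PV+ directly. The sketch below implements the PV+ formula above, together with the analogous PV− (defined here, as in the text, as the post-test probability that the disease is present given a negative result); the numeric scenarios are illustrative, not real test characteristics:

```python
# Predictive values from sensitivity, specificity, and prevalence.
# pv_positive implements the PV+ formula in the text; pv_negative is
# the analogous post-test probability of disease given a NEGATIVE
# result. The example numbers are illustrative only.

def pv_positive(sensitivity: float, specificity: float, prevalence: float) -> float:
    """PV+: probability of disease given a positive test result."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

def pv_negative(sensitivity: float, specificity: float, prevalence: float) -> float:
    """PV-: probability of disease given a negative test result."""
    false_neg = (1.0 - sensitivity) * prevalence
    true_neg = specificity * (1.0 - prevalence)
    return false_neg / (false_neg + true_neg)

# A "good" test (95% sensitive, 95% specific) for a rare disease
# (1% prevalence): most positive results are still false positives.
print(round(pv_positive(0.95, 0.95, 0.01), 3))  # → 0.161
# The same test in a high-prevalence subpopulation (30%):
print(round(pv_positive(0.95, 0.95, 0.30), 3))  # → 0.891
# A pathognomonic finding (100% specificity) forces PV+ to 1.0,
# whatever the sensitivity or prevalence:
print(pv_positive(0.40, 1.00, 0.01))            # → 1.0
```

Note also that `pv_negative(0.95, 0.95, 0.01)` evaluates to roughly 0.0005, well below the 1% pretest risk, matching the statement above that a finding arguing against a disease yields a PV lower than the prevalence.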
2.6.3 Methods for Selecting Questions and Comparing Tests
We have described the process of hypothesis-directed sequential data collection and have asked how an observation might evoke or refine the physician's hypotheses about what abnormalities account for the patient's illness. The complementary question is: Given a set of current hypotheses, how does the physician decide what additional data should be collected? This question also has been analyzed at length (Elstein et al. 1978; Arocha et al. 2005) and is pertinent for computer programs that gather data efficiently to assist clinicians with diagnosis or with therapeutic decision making (see Chap. 26). Because understanding issues of test selection and data interpretation is crucial to understanding medical data and their uses, we devote Chap. 3 to these and related issues of medical decision making. In Sect. 3.6, for example, we discuss the use of decision-analytic techniques in deciding whether to treat a patient on the basis of available information or to perform additional diagnostic tests.
2.7 The Computer and Collection of Medical Data
Although this chapter has not directly discussed computer systems, the role of the computer in medical data storage, retrieval, and interpretation should be clear. Much of the rest of this book deals with specific applications in which the computer’s primary role is data management. One question is pertinent to all such applications: What are the best approaches for getting data into the computer in the first place? The need for data entry by physicians has posed a problem for medical-computing systems since the earliest days of the field. Awkward or nonintuitive interactions at computing devices—particularly ones requiring keyboard typing or confusing movement through multiple display screens by the physician—have perhaps done more to frustrate clinicians than have any other factor. A variety of approaches have been used to try to finesse this problem. One is to design systems such that clerical staff can do essentially all the data entry and much of the data retrieval as well. Many clinical research systems (see 7 Chap. 29) have taken this approach. Physicians may be asked to fill out structured paper datasheets, or such sheets may be filled out by data abstractors who review patient charts, but the actual entry of data into the database is done by paid transcriptionists. Other physicians have adopted “scribes” (staff whose role is to follow physicians in examination rooms and to enter
data into the electronic health record) to reduce the data entry burden on physicians while they interact with patients. In some applications, data are entered automatically into the computer by the device that measures or collects them. For example, monitors in intensive care or coronary care units, pulmonary function or ECG machines, and measurement equipment in the clinical chemistry laboratory can interface directly with a computer in which a database is stored. Certain data can be entered directly by patients; there are systems, for example, that take the patient's history by presenting on a computer screen or tablet multiple-choice questions that follow a branching logic. The patient's responses to the questions are used to generate electronic or hard copy reports for physicians and also may be stored directly in a computer database for subsequent use in other settings.

When physicians or other health personnel do use the machine themselves, specialized devices often allow rapid and intuitive operator–machine interaction. Most of these devices use a variant of the "point-and-select" approach—e.g., touch-sensitive computer screens, mouse-pointing devices, and increasingly the clinician's finger on a mobile tablet or smartphone (see Chaps. 5 and 6). When conventional computer workstations are used, specialized keypads can be helpful. Designers frequently permit logical selection of items from menus displayed on the screen so that the user does not need to learn a set of specialized commands to enter or review data. There were clear improvements when handheld tablets using pen-based or finger-based mechanisms for data entry were introduced. With ubiquitous wireless data services, such devices allow clinicians to maintain normal mobility (in and out of examining rooms or inpatient rooms) while accessing and entering data that are pertinent to a patient's care.
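The branching-logic questionnaires described above can be sketched in a few lines. In the sketch below, the question wording and branch structure are invented for illustration and are not drawn from any real system:

```python
# Minimal sketch of a branching-logic history-taking dialogue.
# Question text and branches are hypothetical, for illustration only.
QUESTIONS = {
    "start": ("Do you have chest pain?", {"yes": "exertion", "no": "end"}),
    "exertion": ("Does the pain occur with exertion?", {"yes": "end", "no": "end"}),
}

def take_history(answers):
    """Walk the question graph, recording each prompt and response."""
    node, record = "start", []
    while node != "end":
        prompt, branches = QUESTIONS[node]
        answer = answers[node]  # a real system would prompt the patient here
        record.append((prompt, answer))
        node = branches[answer]
    return record  # suitable for storage in a database or a report
```

A patient who answers "no" to the first question is never shown the follow-up question—the point of the branching design.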
These issues arise in essentially all application areas, and, because they can be crucial to the successful implementation and use of a system, they warrant particular attention in system design. As more physicians become comfortable with computers in daily life, they will likely find the use of computers in their practice less of a hindrance. We encourage you to consider human–computer interaction, and
the cognitive issues that arise in dealing with computer systems (see Chap. 4), as you learn about the application areas and the specific systems described in later chapters.
Suggested Readings

Adler-Milstein, J., Zhao, W., Willard-Grace, R., Knox, M., & Grumbach, K. (2020). Electronic health records and burnout: Time spent on the electronic health record after hours and message volume associated with exhaustion but not with cynicism among primary care clinicians. Journal of the American Medical Informatics Association. https://doi.org/10.1093/jamia/ocz220. This paper examines the correlation between electronic health record use and clinician burnout, and concludes that two specific EHR usage measures (EHR time after hours and message volume) were associated with exhaustion.

Arocha, J. F., Wang, D., & Patel, V. L. (2005). Identifying reasoning strategies in medical decision making: A methodological guide. Journal of Biomedical Informatics, 38(2), 154–171. This paper illustrates the role of theory-driven psychological research and cognitive evaluation as they relate to medical decision making and the interpretation of clinical data. See also Chap. 4.

Bernstam, E. V., Smith, J. W., & Johnson, T. R. (2010). What is biomedical informatics? Journal of Biomedical Informatics, 43(1), 104–110. The authors discuss the transformation of data into information and knowledge, delineating the ways in which this focus lies at the heart of the field of biomedical informatics.

Brennan, P. F., Chiang, M. F., & Ohno-Machado, L. (2018). Biomedical informatics and data science: Evolving fields with significant overlap. Journal of the American Medical Informatics Association, 25(1), 2–3. This editorial introduces a special issue of the Journal of the American Medical Informatics Association in which the rapidly evolving field of data science is the focus. The eight papers in this issue involve applications such as secondary use of EHR data, repositories of data, and standardization of data representation.

Klasnja, P., & Pratt, W. (2012). Healthcare in the pocket: Mapping the space of mobile-phone health interventions. Journal of Biomedical Informatics, 45(1), 184–198. This review article describes the multiple ways in which both patients and providers are being empowered through the introduction of affordable mobile technologies that manage data and apply knowledge to generate advice.

Steinhubl, S. R., Muse, E. D., & Topol, E. J. (2015). The emerging field of mobile health. Science Translational Medicine, 7(283), 283rv3. The authors discuss the potential for mobile health (mHealth) to affect the delivery and quality of health care and clinical research on a large scale. The paper includes a discussion of challenges to the field, as well as efforts to address those challenges.

Vamathevan, J., & Birney, E. (2017). A review of recent advances in translational bioinformatics: Bridges from biology to medicine. Yearbook of Medical Informatics, 26(1), 178–187. This article reviews the latest trends and major developments in translational bioinformatics, including work applying findings from national genome sequencing initiatives to health care delivery, and discusses current challenges and emerging technologies that bridge research with clinical care. See also Chap. 28.
Questions for Discussion

1. You check your pulse and discover that your heart rate is 100 beats per minute. Is this rate normal or abnormal? What additional information would you use in making this judgment? How does the context in which data are collected influence the interpretation of those data?
2. Given the imprecision of many medical terms, why do you think that serious instances of miscommunication among health care professionals are not more common? Why is greater standardization of terminology necessary if computers rather than humans are to manipulate patient data?
3. Based on the discussion of coding schemes for representing clinical information, discuss three challenges you foresee in
attempting to construct a standardized terminology to be used in hospitals, physicians' offices, and research institutions.
4. How would medical practice change if nonphysicians were to collect and enter all medical data into EHRs? What problems or unintended consequences would you anticipate?
5. Consider what you know about the typical daily schedule of a busy clinician. What are the advantages of wireless devices, connected to the Internet, as tools for such clinicians? Can you think of disadvantages as well? Be sure to consider the safety and protection of information as well as workflow and clinical needs.
6. To decide whether a patient has a significant urinary tract infection, physicians commonly use a calculation of the number of bacterial organisms in a milliliter of the patient's urine. Physicians generally assume that a patient has a urinary tract infection if there are at least 10,000 bacteria per milliliter. Although laboratories can provide such quantification with reasonable accuracy, it is obviously unrealistic for the physician explicitly to count large numbers of bacteria by examining a milliliter of urine under the microscope. As a result, one article offers the following guideline to physicians: "When interpreting … microscopy of … stained centrifuged urine, a threshold of one organism per field yields a 95% sensitivity and five organisms per field a 95% specificity for bacteriuria [bacteria in the urine] at a level of at least 10,000 organisms per ml." (Senior Medical Review 1987, p. 4)
   (a) Describe an experiment that would have allowed the researchers to determine the sensitivity and specificity of the microscopy.
   (b) How would you expect specificity to change as the number of bacteria per microscopic field increases from one to five?
   (c) How would you expect sensitivity to change as the number of bacteria per microscopic field increases from one to five?
   (d) Why does it take more organisms per microscopic field to obtain a specificity of 95% than it does to achieve a sensitivity of 95%?
References

Adler-Milstein, J., & Jha, A. K. (2013). Healthcare's "big data" challenge. American Journal of Managed Care, 19(7), 537–538.
Arocha, J. F., Wang, D., & Patel, V. L. (2005). Identifying reasoning strategies in medical decision making: A methodological guide. Journal of Biomedical Informatics, 38(2), 154–171.
Bernstam, E. V., Smith, J. W., & Johnson, T. R. (2010). What is biomedical informatics? Journal of Biomedical Informatics, 43(1), 104–110.
Blum, B. I. (1986). Clinical information systems: A review. Western Journal of Medicine, 145(6), 791–797.
Brennan, P. F., Chiang, M. F., & Ohno-Machado, L. (2018). Biomedical informatics and data science: Evolving fields with significant overlap. Journal of the American Medical Informatics Association, 25(1), 2–3.
Bycroft, C., Freeman, C., Petkova, D., Band, G., Elliott, L. T., Sharp, K., et al. (2018). The UK Biobank resource with deep phenotyping and genomic data. Nature, 562(7726), 203–209.
Elstein, A. S., Shulman, L. S., & Sprafka, S. A. (1978). Medical problem solving: An analysis of clinical reasoning. Cambridge, MA: Harvard University Press.
Erlich, Y., & Zielinski, D. (2017). DNA Fountain enables a robust and efficient storage architecture. Science, 355, 950–954.
Haendel, M. A., Chute, C. G., & Robinson, P. N. (2018). Classification, ontology, and precision medicine. New England Journal of Medicine, 379, 1452–1462.
Hirsch, J. A., Leslie-Mazwi, T. M., Nicola, G. N., Barr, R. M., Bello, J. A., Donovan, W. D., et al. (2015). Current procedural terminology: A primer. Journal of Neurointerventional Surgery, 7(4), 309–312.
Humphreys, B., Lindberg, D., Schoolman, H., & Barnett, G. (1998). The Unified Medical Language System: An informatics research collaboration. Journal of the American Medical Informatics Association, 5(1), 1–11.
Kassirer, J. P., & Gorry, G. A. (1978). Clinical problem solving: A behavioral analysis. Annals of Internal Medicine, 89(2), 245–255.
Lee, D., de Keizer, N., Lau, F., & Cornet, R. (2014). Literature review of SNOMED CT use. Journal of the American Medical Informatics Association, 21(e1), e11–e19.
Masys, D. R., Jarvik, G. P., Abernethy, N. F., Anderson, N. R., Papanicolaou, G. J., Paltoo, D. N., et al. (2012). Technical desiderata for the integration of genomic data into electronic health records. Journal of Biomedical Informatics, 45(3), 419–422.
Pauker, S. G., Gorry, G. A., Kassirer, J. P., & Schwartz, W. B. (1976). Towards the simulation of clinical cognition: Taking a present illness by computer. American Journal of Medicine, 60(7), 981–996.
Relling, M. V., & Evans, W. E. (2015). Pharmacogenomics in the clinic. Nature, 526(7573), 342–350.
Sanders, D. S., Lattin, D. J., Read-Brown, S., Tu, D. C., Wilson, D. J., Hwang, T. S., et al. (2013). Electronic health record systems in ophthalmology: Impact on clinical documentation. Ophthalmology, 120, 1745–1755.
Senior Medical Review (1987). Urinary tract infections. Senior Medical Review Newsletter.
Shortliffe, E. H. (2010). Biomedical informatics in the education of physicians. Journal of the American Medical Association, 304(11), 1227–1228.
Stearns, M. Q., Price, C., Spackman, K. A., & Wang, A. Y. (2001). SNOMED Clinical Terms: Overview of the development process and project status. Proceedings of the AMIA Symposium, 662–666.
3 Biomedical Decision Making: Probabilistic Clinical Reasoning

Douglas K. Owens, Jeremy D. Goldhaber-Fiebert, and Harold C. Sox

© Springer Nature Switzerland AG 2021
E. H. Shortliffe, J. J. Cimino (eds.), Biomedical Informatics, https://doi.org/10.1007/978-3-030-58721-5_3

Contents

3.1 The Nature of Clinical Decisions: Uncertainty and the Process of Diagnosis
3.1.1 Decision Making Under Uncertainty
3.1.2 Probability: An Alternative Method of Expressing Uncertainty
3.1.3 Overview of the Diagnostic Process
3.2 Probability Assessment: Methods to Assess Pretest Probability
3.2.1 Subjective Probability Assessment
3.2.2 Objective Probability Estimates
3.3 Measurement of the Operating Characteristics of Diagnostic Tests
3.3.1 Classification of Test Results as Abnormal
3.3.2 Measures of Test Performance
3.3.3 Implications of Sensitivity and Specificity: How to Choose Among Tests
3.3.4 Design of Studies of Test Performance
3.3.5 Bias in the Measurement of Test Characteristics
3.3.6 Meta-Analysis of Diagnostic Tests
3.4 Post-test Probability: Bayes' Theorem and Predictive Value
3.4.1 Bayes' Theorem
3.4.2 The Odds-Ratio Form of Bayes' Theorem and Likelihood Ratios
3.4.3 Predictive Value of a Test
3.4.4 Implications of Bayes' Theorem
3.4.5 Cautions in the Application of Bayes' Theorem
3.5 Expected-Value Decision Making
3.5.1 Comparison of Uncertain Prospects
3.5.2 Representation of Choices with Decision Trees
3.5.3 Performance of a Decision Analysis
3.5.4 Representation of Patients' Preferences with Utilities
3.5.5 Performance of Sensitivity Analysis
3.5.6 Representation of Long-Term Outcomes with Markov Models
3.6 The Decision Whether to Treat, Test, or Do Nothing
3.7 Alternative Graphical Representations for Decision Models: Influence Diagrams and Belief Networks
3.8 Other Modeling Approaches
3.9 The Role of Probability and Decision Analysis in Medicine
3.10 Appendix A: Derivation of Bayes' Theorem
References
Learning Objectives

After reading this chapter, you should know the answers to these questions:
- How is the concept of probability useful for understanding test results and for making medical decisions that involve uncertainty?
- How can we characterize the ability of a test to discriminate between disease and health?
- What information do we need to interpret test results accurately?
- What is expected-value decision making? How can this methodology help us to understand particular medical problems?
- What are utilities, and how can we use them to represent patients' preferences?
- What is a sensitivity analysis? How can we use it to examine the robustness of a decision and to identify the important variables in a decision?
- What are influence diagrams? How do they differ from decision trees?
3.1 The Nature of Clinical Decisions: Uncertainty and the Process of Diagnosis

Because clinical data are imperfect and outcomes of treatment are uncertain, health professionals often are faced with difficult choices. In this chapter, we introduce probabilistic medical reasoning, an approach that can help health care providers to deal with the uncertainty inherent in many medical decisions. Medical decisions are made by a variety of methods; our approach is neither necessary nor appropriate for all decisions. Throughout the chapter, we provide simple clinical examples that illustrate a broad range of problems for which probabilistic medical reasoning does provide valuable insight. As discussed in Chap. 2, medical practice is medical decision making. In this chapter, we look at the process of medical decision making. Together, Chaps. 2 and 3 lay the groundwork for the rest of the book. In the remaining chapters, we discuss ways that computers can help clinicians with the decision-making process, and we emphasize the relationship between information needs and system design and implementation.

The material in this chapter is presented in the context of the decisions made by an individual clinician. The concepts, however, are more broadly applicable. Sensitivity and specificity are important parameters of laboratory systems that flag abnormal test results, of patient monitoring systems (Chap. 21), and of information-retrieval systems (Chap. 23). An understanding of what probability is and of how to adjust probabilities after the acquisition of new information is a foundation for our study of clinical decision-support systems (Chap. 24). The importance of probability in medical decision making was noted as long ago as 1922:

»» [G]ood medicine does not consist in the indiscriminate application of laboratory examinations to a patient, but rather in having so clear a comprehension of the probabilities and possibilities of a case as to know what tests may be expected to give information of value (Peabody 1922).

►►Example 3.1
You are the director of a blood bank. All potential blood donors are tested to ensure that they are not infected with the human immunodeficiency virus (HIV), the causative agent of acquired immunodeficiency syndrome (AIDS). You ask whether use of the polymerase chain reaction (PCR), a gene-amplification technique that can diagnose HIV, would be useful to identify people who have HIV. The PCR test is positive 98% of the time when antibody is present, and negative 99% of the time when antibody is absent.¹ ◄
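Before reading on, it may help to see how such a post-test probability is computed (the full treatment appears in Sect. 3.4). The sketch below applies Bayes' theorem to the operating characteristics above, using a prevalence of 1 infected donor per 1000, the figure used in the discussion that follows:

```python
# Post-test probability of disease given a positive test (Bayes' theorem).
def post_test_probability(pretest, sensitivity, specificity):
    """P(disease | positive test) from prevalence and test characteristics."""
    true_positive = sensitivity * pretest
    false_positive = (1 - specificity) * (1 - pretest)
    return true_positive / (true_positive + false_positive)

# PCR example: sensitivity 0.98, specificity 0.99, prevalence 1 in 1000.
ppv = post_test_probability(pretest=0.001, sensitivity=0.98, specificity=0.99)
print(round(ppv, 3))  # 0.089: fewer than 10 of 100 positive donors are infected
```

The low prevalence, not any flaw in the test itself, is what drives the counterintuitive result discussed next.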
If the test is positive, what is the likelihood that a donor actually has HIV? If the test is negative, how sure can you be that the person does not have HIV? On an intuitive level, these questions
¹ The test sensitivity and specificity used in Example 3.1 are consistent with the reported values of the sensitivity and specificity of the PCR test for diagnosis of HIV early in its development (Owens et al. 1996b); the test now has higher sensitivity and specificity.
do not seem particularly difficult to answer. The test appears accurate, and we would expect that, if the test is positive, the donated blood specimen is likely to contain HIV. Thus, we are surprised to find that, if only 1 in 1000 donors actually is infected, the test is more often mistaken than it is correct. In fact, of 100 donors with a positive test, fewer than 10 would be infected. There would be ten wrong answers for each correct result. How are we to understand this result? Before we try to find an answer, let us consider a related example.

►►Example 3.2
Ms. Kamala is a 66-year-old woman with coronary artery disease (narrowing or blockage of the blood vessels that supply the heart tissue). When the heart muscle does not receive enough oxygen (hypoxia) because blood cannot reach it, the patient often experiences chest pain (angina). Ms. Kamala has twice undergone coronary artery bypass graft (CABG) surgery, a procedure in which new vessels, often taken from the leg, are grafted onto the old ones such that blood is shunted past the blocked region. Unfortunately, she has again begun to have chest pain, which becomes progressively more severe, despite medication. If the heart muscle is deprived of oxygen, the result can be a heart attack (myocardial infarction), in which a section of the muscle dies. ◄
Should Ms. Kamala undergo a third operation? The medications are not working; without surgery, she runs a high risk of suffering a heart attack, which may be fatal. On the other hand, the surgery is hazardous. Not only is the surgical mortality rate for a third operation higher than that for a first or second one, but the chance that surgery will relieve the chest pain is also lower than that for a first operation. All choices in Example 3.2 entail considerable uncertainty. Furthermore, the risks are grave; an incorrect decision may substantially increase the chance that Ms. Kamala will die. The decision will be difficult even for experienced clinicians.

These examples illustrate situations in which intuition is either misleading or inadequate. Although the test results in Example 3.1 are appropriate for the blood bank, a clinician who uncritically reports these results would erroneously inform many people that they had HIV—a mistake with profound emotional and social consequences. In Example 3.2, the decision-making skill of the clinician will affect a patient's quality and length of life. Similar situations are commonplace in medicine. Our goal in this chapter is to show how the use of probability and decision analysis can help to make clear the best course of action.

Decision making is one of the quintessential activities of the healthcare professional. Some decisions are made on the basis of deductive reasoning or of physiological principles. Many decisions, however, are made on the basis of knowledge that has been gained through collective experience: the clinician often must rely on empirical knowledge of associations between symptoms and disease to evaluate a problem. A decision that is based on these usually imperfect associations will be, to some degree, uncertain. In Sects. 3.1.1, 3.1.2, and 3.1.3, we examine decisions made under uncertainty and present an overview of the diagnostic process. As Smith (1985, p. 3) said: "Medical decisions based on probabilities are necessary but also perilous. Even the most astute physician will occasionally be wrong."

3.1.1 Decision Making Under Uncertainty

►►Example 3.3
Ms. Kirk, a 33-year-old woman with a history of a previous blood clot (thrombus) in a vein in her left leg, presents with the complaint of pain and swelling in that leg for the past 5 days. On physical examination, the leg is tender and swollen to midcalf—signs that suggest the possibility of deep vein thrombosis.2 A test (ultrasonography) is performed, and the flow of blood in the veins of Ms. Kirk’s leg is evaluated. The blood flow is abnormal, but the radiologist cannot tell whether there is a new blood clot. ◄
² In medicine, a sign is an objective physical finding (something observed by the clinician), such as a temperature of 101.2 °F. A symptom is a subjective experience of the patient, such as feeling hot or feverish. The distinction may be blurred if the patient's experience also can be observed by the clinician.
Should Ms. Kirk be treated for blood clots? The main diagnostic concern is the recurrence of a blood clot in her leg. A clot in the veins of the leg can dislodge, flow with the blood, and cause a blockage in the vessels of the lungs, a potentially fatal event called a pulmonary embolus. Of patients with a swollen leg, about one-half actually have a blood clot; there are numerous other causes of a swollen leg. Given a swollen leg, therefore, a clinician cannot be sure that a clot is the cause. Thus, the physical findings leave considerable uncertainty. Furthermore, in Example 3.3, the results of the available diagnostic test are equivocal. The treatment for a blood clot is to administer anticoagulants (drugs that inhibit blood clot formation), which pose the risk of excessive bleeding to the patient. Therefore, clinicians do not want to treat the patient unless they are confident that a thrombus is present. But how much confidence should be required before starting treatment? We will learn that it is possible to answer this question by calculating the benefits and harms of treatment.

This example illustrates an important concept: Clinical data are imperfect. The degree of imperfection varies, but all clinical data—including the results of diagnostic tests, the history given by the patient, and the findings on physical examination—are uncertain.
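The "how much confidence" question is formalized later in the chapter (Sect. 3.6) as a treatment threshold: the probability of disease above which expected benefit outweighs expected harm. A minimal sketch of that standard decision-analytic result follows; the numeric benefit and harm values are hypothetical, chosen only to illustrate the calculation:

```python
# Treatment-threshold sketch: treat when p(disease) exceeds harm/(benefit+harm).
# The numeric values below are hypothetical, not taken from the chapter.
def treatment_threshold(benefit, harm):
    """Probability of disease at which treating and not treating are equivalent.

    benefit: net gain from treating a patient who has the disease
    harm: net loss from treating a patient who does not have it
    """
    return harm / (benefit + harm)

# If correctly anticoagulating a thrombosis is four times as beneficial as
# anticoagulating a patient without one is harmful, treat above p = 0.2:
print(treatment_threshold(benefit=4.0, harm=1.0))  # 0.2
```

Note how the threshold falls as treatment becomes relatively more beneficial: a highly effective, low-risk treatment justifies treating even at low probabilities of disease.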
3.1.2 Probability: An Alternative Method of Expressing Uncertainty
The language that clinicians use to describe a patient’s condition often is ambiguous—a factor that further complicates the problem of uncertainty in medical decision making. Clinicians use words such as “probable” and “highly likely” to describe their beliefs about the likelihood of disease. These words have strikingly different meanings to different individuals. Because of the widespread disagreement about the meaning of common descriptive terms, there is ample opportunity for miscommunication. The problem of how to express degrees of uncertainty is not unique to medicine. How is it handled in other contexts? Horse racing has
its share of uncertainty. If experienced gamblers are deciding whether to place bets, they will find it unsatisfactory to be told that a given horse has a "high chance" of winning. They will demand to know the odds. The odds are simply an alternate way to express a probability. The use of probability or odds as an expression of uncertainty avoids the ambiguities inherent in common descriptive terms.

3.1.3 Overview of the Diagnostic Process
In Chap. 2, we described the hypothetico-deductive approach, a diagnostic strategy comprising successive iterations of hypothesis generation, data collection, and interpretation. We discussed how observations may evoke a hypothesis and how new information subsequently may increase or decrease our belief in that hypothesis. Here, we review this process briefly in light of a specific example. For the purpose of our discussion, we separate the diagnostic process into three stages.

The first stage involves making an initial judgment about whether a patient is likely to have a disease. After an interview and physical examination, a clinician intuitively develops a belief about the likelihood of disease. This judgment may be based on previous experience or on knowledge of the medical literature. A clinician's belief about the likelihood of disease usually is implicit; he or she can refine it by making an explicit estimation of the probability of disease. This estimated probability, made before further information is obtained, is the prior probability or pretest probability of disease.

►►Example 3.4
Mr. Smith, a 60-year-old man, complains to his clinician that he has pressure-like chest pain that occurs when he walks quickly. After taking his history and examining him, his clinician believes there is a high enough chance that he has heart disease to warrant ordering an exercise stress test. In the stress test, an electrocardiogram (ECG) is taken while Mr. Smith exercises. Because the heart must pump more blood per stroke and must beat faster (and thus requires more oxygen) during exercise, many heart conditions are evident only when the patient is physically stressed. Mr. Smith's results show abnormal changes in the ECG during exercise—a sign of heart disease. ◄

How would the clinician evaluate this patient? The clinician would first talk to the patient about the quality, duration, and severity of his or her pain. Traditionally, the clinician would then decide what to do next based on his or her intuition about the etiology (cause) of the chest pain. Our approach is to ask the clinician to make his or her initial intuition explicit by estimating the pretest probability of disease. The clinician in this example, based on what he or she knows from talking with the patient, might assess the pretest or prior probability of heart disease as 0.5 (50% chance or 1:1 odds; see Sect. 3.2). We explore methods used to estimate pretest probability accurately in Sect. 3.2.

After the pretest probability of disease has been estimated, the second stage of the diagnostic process involves gathering more information, often by performing a diagnostic test. The clinician in Example 3.4 ordered a test to reduce the uncertainty about the diagnosis of heart disease. The positive test result supports the diagnosis of heart disease, and this reduction in uncertainty is shown in Fig. 3.1a. Although the clinician in Example 3.4 chose the exercise stress test, there are many tests available to diagnose heart disease, and the clinician would like to know which test he or she should order next. Some tests reduce uncertainty more than do others (see Fig. 3.1b), but may cost more. The more a test reduces uncertainty, the more useful it is. In Sect. 3.3, we explore ways to measure how well a test reduces uncertainty, expanding the concepts of test sensitivity and specificity first introduced in Chap. 2.

Given new information provided by a test, the third step is to update the initial probability estimate. The clinician in Example 3.4 must ask: "What is the probability of disease given the abnormal stress test?" The clinician wants to know the posterior probability, or post-test probability, of disease (see Fig. 3.1a). In Sect. 3.4, we reexamine Bayes' theorem, introduced in Chap. 2, and we discuss its use for calculating the post-test probability of disease. As we noted, to calculate post-test probability, we must know the pretest probability, as well as the sensitivity and specificity, of the test.³

[Fig. 3.1 The effect of test results on the probability of disease. a A positive test result increases the probability of disease. b Test 2 reduces uncertainty about presence of disease (increases the probability of disease) more than test 1 does.]

3.2 Probability Assessment: Methods to Assess Pretest Probability
In this section, we explore the methods that clinicians can use to make judgments about the probability of disease before they order tests. Probability is our preferred means of expressing uncertainty. In this framework, probability (p) expresses a clinician's opinion about the likelihood of an event as a number between 0 and 1. An event that is certain to occur has a probability of 1; an event that is certain not to occur has a probability of 0.⁴ The probability of event A is written p[A]. The sum of the probabilities of all possible, collectively exhaustive outcomes of a chance event must be equal to 1. Thus, in a coin flip,

p[heads] + p[tails] = 1.0.
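These rules, together with the odds mentioned earlier, can be exercised numerically. A short sketch using the coin-flip example (the 30-of-100 swollen-leg frequency is illustrative, matching the 30% figure used in this section):

```python
# Numeric sketch of the probability rules in this section.
p_heads, p_tails = 0.5, 0.5

# Collectively exhaustive outcomes of a chance event sum to 1:
assert p_heads + p_tails == 1.0

# Independent events multiply: heads on two consecutive tosses.
p_two_heads = p_heads * p_heads
print(p_two_heads)  # 0.25

# Odds are an alternate expression of the same uncertainty: odds = p / (1 - p).
print(p_heads / (1 - p_heads))  # 1.0, i.e., 1:1 odds

# A conditional probability estimated as a relative frequency: if 30 of 100
# patients with a swollen leg have a blood clot, p[clot | swollen leg] = 0.3.
print(30 / 100)  # 0.3
```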
The probability of event A and event B occurring together is denoted by p[A&B] or by p[A,B]. Events A and B are considered independent if the occurrence of one does not influence the probability of the occurrence of the other. The probability of two independent events A and B both occurring is given by the product of the individual probabilities:

p[A,B] = p[A] × p[B].

Thus, the probability of heads on two consecutive coin tosses is 0.5 × 0.5 = 0.25. (Regardless of the outcome of the first toss, the probability of heads on the second toss is 0.5.) The probability that event A will occur given that event B is known to occur is called the conditional probability of event A given event B, denoted by p[A|B] and read as "the probability of A given B." Thus, a post-test probability is a conditional probability predicated on the test or finding. For example, if 30% of patients who have a swollen leg have a blood clot, we say the probability of a blood clot given a swollen leg is 0.3, denoted:

p[blood clot|swollen leg] = 0.3.

Before the swollen leg is noted, the pretest probability is simply the prevalence of blood clots in the leg in the population from which the patient was selected—a number likely to be much smaller than 0.3. Now that we have decided to use probability to express uncertainty, how can we estimate probability? We can do so by either subjective or objective methods; each approach has advantages and limitations.

³ Note that pretest and post-test probabilities correspond to the concepts of prevalence and predictive value. The latter terms were used in Chap. 2 because the discussion was about the use of tests for screening populations of patients; in a population, the pretest probability of disease is simply that disease's prevalence in that population.
⁴ We assume a Bayesian interpretation of probability; there are other statistical interpretations of probability.

3.2.1 Subjective Probability Assessment
D. K. Owens et al.

Most assessments that clinicians make about probability are based on personal experience. The clinician may compare the current problem to similar problems encountered previously and then ask: “What was the frequency of disease in similar patients whom I have seen?” To make these subjective assessments of probability, people rely on several discrete, often unconscious mental processes that have been described and studied by cognitive psychologists (Tversky and Kahneman 1974). These processes are termed cognitive heuristics. More specifically, a cognitive heuristic is a mental process by which we learn, recall, or process information; we can think of heuristics as rules of thumb. Knowledge of heuristics is important because it helps us to understand the underpinnings of our intuitive probability assessment. Both naive and sophisticated decision makers (including clinicians and statisticians) misuse heuristics and therefore make systematic—often serious—errors when estimating probability. So, just as we may underestimate distances on a particularly clear day (Tversky and Kahneman 1974), we may make mistakes in estimating probability in deceptive clinical situations. Three heuristics have been identified as important in estimation of probability:

1. Representativeness. One way that people estimate probability is to ask themselves: What is the probability that object A belongs to class B? For instance, what is the probability that this patient who has a swollen leg belongs to the class of patients who have blood clots? To answer, we often rely on the representativeness heuristic, in which probabilities are judged by the degree to which A is representative of, or similar to, B. The clinician will judge the probability of the development of a blood clot (thrombosis) by the degree to which the patient with a swollen leg resembles the clinician’s mental image of patients with a blood clot. If the patient has all the classic findings (signs and symptoms) associated with a blood clot, the clinician judges that the patient is highly likely to have a blood clot. Difficulties occur with the use of this heuristic when the disease is rare (very low prior probability, or prevalence); when the clinician’s previous experience with the disease is atypical, thus giving an incorrect mental representation; when the patient’s clinical profile is atypical; and when the probability of certain findings depends on whether other findings are present.

2. Availability. Our estimate of the probability of an event is influenced by the ease with which we remember similar events. Events more easily remembered are judged more probable; this rule is the availability heuristic, and it is often misleading. We remember dramatic, atypical, or emotion-laden events more easily and therefore are likely to overestimate their probability. A clinician who had cared for a patient who had a swollen leg and who then died from a blood clot would vividly remember thrombosis as a cause of a swollen leg. The clinician would remember other causes of swollen legs less easily, and he or she would tend to overestimate the probability of a blood clot in patients with a swollen leg.

3. Anchoring and adjustment. Another common heuristic used to judge probability is anchoring and adjustment. A clinician makes an initial probability estimate (the anchor) and then adjusts the estimate based on further information. For instance, the clinician in Example 3.4 makes an initial estimate of the probability of heart disease as 0.5. If he or she then learns that all the patient’s brothers had died of heart disease, the clinician should raise the estimate because the patient’s strong family history of heart disease increases the probability that he or she has heart disease, a fact the clinician could ascertain from the literature. The usual mistake is to adjust the initial estimate (the anchor) insufficiently in light of the new information. Instead of raising his or her estimate of prior probability to, say, 0.8, the clinician might adjust it to only 0.6.

Heuristics often introduce error into our judgments about prior probability. Errors in our initial estimates of probabilities will be reflected in the posterior probabilities even if we use quantitative methods to derive those posterior probabilities. An understanding of heuristics is thus important for medical decision making. The clinician can avoid some of these difficulties by using published research results to estimate probabilities.

3.2.2 Objective Probability Estimates
85 Biomedical Decision Making: Probabilistic Clinical Reasoning

Published research results can serve as a guide for more objective estimates of probabilities. We can use the prevalence of disease in the population or in a subgroup of the population, or clinical prediction rules, to estimate the probability of disease. As we discussed in Chap. 2, the prevalence is the frequency of an event in a population; it is a useful starting point for estimating probability. For example, if you wanted to estimate the probability of prostate cancer in a 50-year-old man, the prevalence of prostate cancer in men of that age (5–14%) would be a useful anchor point from which you could increase or decrease the probability depending on your findings. Estimates of disease prevalence in a defined population often are available in the medical literature. Symptoms, such as difficulty with urination, or signs, such as a palpable prostate nodule, can be used to place patients into a clinical subgroup in which the probability of disease is known. For patients referred to a urologist for evaluation of a prostate nodule, the prevalence of cancer is about 50%. This approach may be limited by difficulty in placing a patient in the correct clinically defined subgroup, especially if the criteria for classifying patients are ill-defined. A trend has been to develop guidelines, known as clinical prediction rules, to help clinicians assign patients to well-defined subgroups in which the probability of disease is known. Clinical prediction rules are developed from systematic study of patients who have a particular diagnostic problem; they define how clinicians can use combinations of clinical findings to estimate probability. The symptoms or signs that make an independent contribution to the probability that a patient has a disease are identified and assigned numerical weights based on statistical analysis of the finding’s contribution. The result is a list of symptoms and signs for an individual patient, each with a corresponding numerical contribution to a total score. The total score places a patient in a subgroup with a known probability of disease.

►►Example 3.5
Ms. Troy, a 65-year-old woman who had a heart attack 4 months ago, has abnormal heart rhythm (arrhythmia), is in poor medical condition, and is about to undergo elective surgery. ◄
What is the probability that Ms. Troy will suffer a cardiac complication? Clinical prediction rules have been developed to help clinicians to assess this risk (Palda and Detsky 1997). Table 3.1 lists clinical findings and their corresponding diagnostic weights. We add the diagnostic weights for each of the patient’s clinical findings to obtain the total score. The total score places the patient in a group with a defined probability of cardiac complications, as shown in Table 3.2. Ms. Troy receives a score of 20; thus, the clinician can estimate that the patient has a 27% chance of developing a severe cardiac complication.

Table 3.1 Diagnostic weights for assessing risk of cardiac complications from noncardiac surgery

  Clinical finding                                       Diagnostic weight
  Age greater than 70 years                              5
  Recent documented heart attack, >6 months previously   5
  5 PVCs                                                 5
  Critical aortic stenosis                               20
  Poor medical condition                                 5
  Emergency surgery                                      10

ECG electrocardiogram; PVCs premature ventricular contractions on preoperative electrocardiogram. (a) Fluid in the lungs due to reduced heart function.

Table 3.2 Clinical prediction rule for diagnostic weights in Table 3.1

  Total score   Prevalence (%) of cardiac complications(a)
  0–15          5
  20–30         27
  >30           60

(a) Cardiac complications defined as death, heart attack, or congestive heart failure.

Objective estimates of pretest probability are subject to error because of bias in the studies on which the estimates are based. For instance, published prevalence data may not apply directly to a particular patient. A clinical illustration is that early studies indicated that a patient found to have microscopic evidence of blood in the urine (microhematuria) should undergo extensive tests because a significant proportion of the patients would be found to have cancer or other serious diseases. The tests involve some risk, discomfort, and expense to the patient. Nonetheless, the approach of ordering tests for any patient with microhematuria was widely practiced for some years. A later study, however, suggested that the probability of serious disease in asymptomatic patients with only microscopic evidence of blood was only about 2%. In the past, many patients may have undergone unnecessary tests, at considerable financial and personal cost. What explains the discrepancy in the estimates of disease prevalence? The initial studies that showed a high prevalence of disease in patients with microhematuria were performed on patients referred to urologists, who are specialists. The primary care clinician refers patients whom he or she suspects have a disease in the specialist’s sphere of expertise. Because of this initial screening by primary care clinicians, the specialists seldom see patients with clinical findings that imply a low probability of disease. Thus, the prevalence of disease in the patient population in a specialist’s practice often is much higher than that in a primary care practice; studies performed with the former patients therefore almost always overestimate disease probabilities. This example demonstrates
referral bias. Referral bias is common because many published studies are performed on patients referred to specialists. Thus, one may need to adjust published estimates before one uses them to estimate pretest probability in other clinical settings. We now can use the techniques discussed in this part of the chapter to illustrate how the clinician in Example 3.4 might estimate the pretest probability of heart disease in his or her patient, Mr. Smith, who has pressure-like chest pain. We begin by using the objective data that are available. The prevalence of heart disease in 60-year-old men could be our starting point. In this case, however, we can obtain a more refined estimate by placing the patient in a clinical subgroup in which the prevalence of disease is known. The prevalence in a clinical subgroup, such as men with symptoms typical of coronary heart disease, will predict the pretest probability more accurately than would the prevalence of heart disease in a group that is heterogeneous with respect to symptoms, such as the population at large. We assume that large studies have shown the prevalence of coronary heart disease in men with typical symptoms of angina pectoris to be about 0.9; this prevalence is useful as an initial estimate that can be adjusted based on information specific to the patient. Although the prevalence of heart disease in men with typical symptoms is high, 10% of patients with this history do not have heart disease. The clinician might use subjective methods to adjust his or her estimate further based on other specific information about the patient. For example, the clinician might adjust his or her initial estimate of 0.9 upward to 0.95 or higher based on information about family history of heart disease. The clinician should be careful, however, to avoid the mistakes that can occur when one uses heuristics to make subjective probability estimates.
In particular, he or she should be aware of the tendency to stay too close to the initial estimate when adjusting for additional information. By combining subjective and objective methods for assessing pretest probability, the clinician can arrive at a reasonable estimate of the pretest probability of heart disease.
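The scoring procedure of Tables 3.1 and 3.2 can be sketched in a few lines. The weights below are those shown in the extracted table; the particular findings chosen for the worked patient are hypothetical, since the text reports only Ms. Troy’s total score of 20:

```python
# Sketch of the clinical-prediction-rule calculation (Tables 3.1 and 3.2).
# The weights come from the table as extracted; the sample findings are
# illustrative, not Ms. Troy's actual chart.

DIAGNOSTIC_WEIGHTS = {
    "age > 70 years": 5,
    "heart attack > 6 months previously": 5,
    "critical aortic stenosis": 20,
    "poor medical condition": 5,
    "emergency surgery": 10,
}

def cardiac_complication_risk(total_score):
    """Map a total score to the prevalence of severe cardiac complications
    (death, heart attack, or congestive heart failure), per Table 3.2."""
    if total_score > 30:
        return 0.60
    if total_score >= 20:
        return 0.27
    if total_score <= 15:
        return 0.05
    return None  # the published rule leaves scores of 16-19 unmapped

# A hypothetical patient whose findings total 20 points, like Ms. Troy:
findings = ["heart attack > 6 months previously",
            "poor medical condition", "emergency surgery"]
score = sum(DIAGNOSTIC_WEIGHTS[f] for f in findings)
print(score, cardiac_complication_risk(score))  # 20 0.27
```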
Fig. 3.2 Distribution of test results in healthy and diseased individuals (x-axis: test result; y-axis: number of individuals; the two overlapping population curves are divided by a cutoff value into normal and abnormal ranges). Varying the cutoff between “normal” and “abnormal” across the continuous range of possible values changes the relative proportions of false positives (FPs) and false negatives (FNs) for the two populations.
In this section, we summarized subjective and objective methods to determine the pretest probability, and we learned how to adjust the pretest probability after assessing the specific subpopulation of which the patient is representative. The next step in the diagnostic process is to gather further information, usually in the form of formal diagnostic tests (laboratory tests, X-ray studies, etc.). To help you to understand this step more clearly, we discuss in the next two sections how to measure the accuracy of tests and how to use probability to interpret the results of the tests.
3.3 Measurement of the Operating Characteristics of Diagnostic Tests

The first challenge in assessing any test is to determine criteria for deciding whether a result is normal or abnormal. In this section, we present the issues that you need to consider when making such a determination.

3.3.1 Classification of Test Results as Abnormal

Most biological measurements in a population of healthy people are continuous variables that assume different values for different individuals. The distribution of values often is approximated by the normal (gaussian, or bell-shaped) distribution curve (Fig. 3.2). Thus, 95% of the population will fall within two standard deviations of the mean. About 2.5% of the population will be more than two standard deviations from the mean at each end of the distribution. The distribution of values for ill individuals may be normally distributed as well. The two distributions usually overlap (see Fig. 3.2). How is a test result classified as abnormal? Most clinical laboratories report an “upper limit of normal,” which usually is defined as two standard deviations above the mean. Thus, a test result greater than two standard deviations above the mean is reported as abnormal (or positive); a test result below that cutoff is reported as normal (or negative). As an example, if the mean cholesterol concentration in the blood is 180 mg/dl, a clinical laboratory might choose as the upper limit of normal 220 mg/dl because it is two standard deviations above the mean. Note that a cutoff that is based on an arbitrary statistical criterion may not have biological significance. An ideal test would have no values at which the distribution of diseased and nondiseased people overlap. That is, if the cutoff value were set appropriately, the test would be normal in all healthy individuals and abnormal in all individuals with disease. Few tests meet this standard. If a test result is defined as
abnormal by the statistical criterion, 2.5% of healthy individuals will have an abnormal test. If there is an overlap in the distribution of test results in healthy and diseased individuals, some diseased patients will have a normal test (see Fig. 3.2). You should be familiar with the terms used to denote these groups:

- A true positive (TP) is a positive test result obtained for a patient in whom the disease is present (the test result correctly classifies the patient as having the disease).
- A true negative (TN) is a negative test result obtained for a patient in whom the disease is absent (the test result correctly classifies the patient as not having the disease).
- A false positive (FP) is a positive test result obtained for a patient in whom the disease is absent (the test result incorrectly classifies the patient as having the disease).
- A false negative (FN) is a negative test result obtained for a patient in whom the disease is present (the test result incorrectly classifies the patient as not having the disease).

Figure 3.2 shows that varying the cutoff point (moving the vertical line in the figure) for an abnormal test will change the relative proportions of these groups. As the cutoff is moved further up from the mean of the normal values, the number of FNs increases and the number of FPs decreases. Once we have chosen a cutoff point, we can conveniently summarize test performance—the ability to discriminate disease from nondisease—in a 2 × 2 contingency table, as shown in Table 3.3. The table summarizes the number of patients in each group: TP, FP, TN, and FN. Note that the sum of the first column is the total number of diseased patients, TP + FN. The sum of the second column is the total number of nondiseased patients, FP + TN. The sum of the first row, TP + FP, is the total number of patients with a positive test result. Likewise, FN + TN gives the total number of patients with a negative test result.
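The effect of moving the cutoff can be sketched in a few lines of code (the test values below are invented for illustration; “abnormal” means a value above the cutoff):

```python
# Sketch: how raising the cutoff trades false positives for false negatives,
# using two small, made-up lists of test values (not real laboratory data).

healthy = [150, 160, 170, 180, 190, 200, 230]   # one healthy value above 220
diseased = [210, 215, 225, 240, 260, 280]       # some diseased values are low

def two_by_two(cutoff):
    """Count (TP, FP, TN, FN) when 'abnormal' means value > cutoff."""
    tp = sum(v > cutoff for v in diseased)
    fn = sum(v <= cutoff for v in diseased)
    fp = sum(v > cutoff for v in healthy)
    tn = sum(v <= cutoff for v in healthy)
    return tp, fp, tn, fn

low = two_by_two(205)    # low cutoff: catches every diseased patient
high = two_by_two(235)   # high cutoff: fewer FPs, but diseased cases missed
print(low, high)
```

Raising the cutoff from 205 to 235 drives the FP count down and the FN count up, exactly the trade-off the text describes.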
A perfect test would have no FN or FP results. Erroneous test results do occur, however, and you can use a 2 × 2 contingency table to define the measures of test performance that reflect these errors.

3.3.2 Measures of Test Performance
Measures of test performance are of two types: measures of agreement between tests, or measures of concordance, and measures of disagreement, or measures of discordance. Two types of concordant test results occur in the 2 × 2 table in Table 3.3: TPs and TNs. The relative frequencies of these results form the basis of the measures of concordance. These measures correspond to the ideas of the sensitivity and specificity of a test, which we introduced in Chap. 2. We define each measure in terms of the 2 × 2 table and in terms of conditional probabilities.

Table 3.3 A 2 × 2 contingency table for test results

  Results of test    Disease present   Disease absent   Total
  Positive result    TP                FP               TP + FP
  Negative result    FN                TN               FN + TN
  Total              TP + FN           FP + TN

TP true positive, TN true negative, FP false positive, FN false negative

The true-positive rate (TPR), or sensitivity, is the likelihood that a diseased patient has a positive test. In conditional-probability notation, sensitivity is expressed as the probability of a positive test given that disease is present: p[positive test | disease]. Another way to think of the TPR is as a ratio. The likelihood that a diseased patient has a positive test is given by the ratio of diseased
patients with a positive test to all diseased patients:

  TPR = (number of diseased patients with positive test) / (total number of diseased patients)

We can determine these numbers for our example from the 2 × 2 table (see Table 3.3). The number of diseased patients with a positive test is TP. The total number of diseased patients is the sum of the first column, TP + FN. So,

  TPR = TP / (TP + FN).

The true-negative rate (TNR), or specificity, is the likelihood that a nondiseased patient has a negative test result. In terms of conditional probability, specificity is the probability of a negative test given that disease is absent: p[negative test | no disease]. Viewed as a ratio, the TNR is the number of nondiseased patients with a negative test divided by the total number of nondiseased patients:

  TNR = (number of nondiseased patients with negative test) / (total number of nondiseased patients)

From the 2 × 2 table (see Table 3.3),

  TNR = TN / (TN + FP).

The measures of discordance—the false-positive rate (FPR) and the false-negative rate (FNR)—are defined similarly. The FNR is the likelihood that a diseased patient has a negative test result. As a ratio,

  FNR = (number of diseased patients with negative test) / (total number of diseased patients) = FN / (FN + TP).

The FPR is the likelihood that a nondiseased patient has a positive test result:

  FPR = (number of nondiseased patients with positive test) / (total number of nondiseased patients) = FP / (FP + TN).

►►Example 3.6
Consider again the problem of screening blood donors for HIV. One test used to screen blood donors for HIV antibody is an enzyme-linked immunoassay (EIA). So that the performance of the EIA can be measured, the test is performed on 400 patients; the hypothetical results are shown in the 2 × 2 table in Table 3.4.⁵ ◄

Table 3.4 A 2 × 2 contingency table for HIV antibody EIA

  EIA test result   Antibody present   Antibody absent   Total
  Positive EIA      98                 3                 101
  Negative EIA      2                  297               299
  Total             100                300               400

EIA enzyme-linked immunoassay

Footnote 5: This example assumes that we have a perfect method (different from EIA) for determining the presence or absence of antibody. We discuss the idea of gold-standard tests in Sect. 3.3.4. We have chosen the numbers in the example to simplify the calculations. In practice, the sensitivity and specificity of the HIV EIAs are greater than 99%.

To determine test performance, we calculate the TPR (sensitivity) and TNR (specificity) of the EIA antibody test. The TPR, as defined previously, is:

  TPR = TP / (TP + FN) = 98 / (98 + 2) = 0.98

Thus, the likelihood that a patient with the HIV antibody will have a positive EIA test is
0.98. If the test were performed on 100 patients who truly had the antibody, we would expect the test to be positive in 98 of the patients. Conversely, we would expect two of the patients to receive incorrect, negative results, for an FNR of 2%. (You should convince yourself that the sum of TPR and FNR by definition must be 1: TPR + FNR = 1.) And the TNR is:

  TNR = TN / (TN + FP) = 297 / (297 + 3) = 0.99

The likelihood that a patient who has no HIV antibody will have a negative test is 0.99. Therefore, if the EIA test were performed on 100 individuals who had not been infected with HIV, it would be negative in 99 and incorrectly positive in 1. (Convince yourself that the sum of TNR and FPR also must be 1: TNR + FPR = 1.)

3.3.3 Implications of Sensitivity and Specificity: How to Choose Among Tests
It may be clear to you already that the calculated values of sensitivity and specificity for a continuous-valued test depend on the particular cutoff value chosen to distinguish normal and abnormal results. In Fig. 3.2, note that increasing the cutoff level (moving it to the right) would decrease significantly the number of FP tests but also would increase the number of FN tests. Thus, the test would have become more specific but less sensitive. Similarly, a lower cutoff value would increase the FPs and decrease the FNs, thereby increasing sensitivity while decreasing specificity. Whenever a decision is made about what cutoff to use in calling a test abnormal, an inherent philosophic decision is being made about whether it is better to tolerate FNs (missed cases) or FPs (nondiseased people inappropriately classified as diseased). The choice of cutoff depends on the disease in question and on the purpose of testing. If the disease is serious and if lifesaving therapy is available, we should try to minimize the number of FN results. On the other hand, if the disease is not serious
and the therapy is dangerous, we should set the cutoff value to minimize FP results. We stress the point that sensitivity and specificity are characteristics not of a test per se but rather of the test and a criterion for when to call that test abnormal. Varying the cutoff in Fig. 3.2 has no effect on the test itself (the way it is performed, or the specific values for any particular patient); instead, it trades off specificity for sensitivity. Thus, the best way to characterize a test is by the range of values of sensitivity and specificity that it can take on over a range of possible cutoffs. The typical way to show this relationship is to plot the test’s sensitivity against 1 minus specificity (i.e., the TPR against the FPR), as the cutoff is varied and the two test characteristics are traded off against each other (Fig. 3.3). The resulting curve, known as a receiver-operating characteristic (ROC) curve, was originally described by researchers investigating methods of electromagnetic-signal detection and was later applied to the field of psychology (Peterson and Birdsall 1953; Swets 1973). Any given point along a ROC curve for a test corresponds to the test sensitivity and specificity for a given threshold of “abnormality.” Similar curves can be drawn for any test used to associate observed clinical data with specific diseases or disease categories.

Fig. 3.3 Receiver operating characteristic (ROC) curves for two hypothetical tests (x-axis: false-positive rate, 1 − specificity; y-axis: true-positive rate, sensitivity). Test B is more discriminative than test A because its curve is higher (e.g., the false-positive rate (FPR) for test B is lower than the FPR for test A at any value of true-positive rate (TPR)). However, the more discriminative test may not always be preferred in clinical practice (see text).

Suppose a new test were introduced that competed with the current way of screening for the presence of a disease. For example, suppose a new radiologic procedure for assessing the presence or absence of pneumonia became available. This new test could be assessed for trade-offs in sensitivity and specificity, and an ROC curve could be drawn. As shown in Fig. 3.3, a test has better discriminating power than a competing test if its ROC curve lies above that of the other test. In other words, test B is more discriminating than test A when its specificity is greater than test A’s specificity for any level of sensitivity (and when its sensitivity is greater than test A’s sensitivity for any level of specificity). Understanding ROC curves is important in understanding test selection and data interpretation. Clinicians should not necessarily, however, always choose the test with the most discriminating ROC curve. Matters of cost, risk, discomfort, and delay also are important in the choice about what data to collect and what tests to perform. When you must choose among several available tests, you should select the test that has the highest sensitivity and specificity, provided that other factors, such as cost and risk to the patient, are equal. The higher the sensitivity and specificity of a test, the more the results of that test will reduce uncertainty about probability of disease.

3.3.4 Design of Studies of Test Performance
In Sect. 3.3.2, we discussed measures of test performance: a test’s ability to discriminate disease from no disease. When we classify a test result as TP, TN, FP, or FN, we assume that we know with certainty whether a patient is diseased or healthy. Thus, the validity of any test’s results must be measured against a gold standard: a test that reveals the patient’s true disease state, such as a biopsy of diseased tissue or a surgical operation. A gold-standard test is a procedure that is used to define unequivocally the presence or absence of disease. The test whose discrimination is being measured is called the
index test. The gold-standard test usually is more expensive, riskier, or more difficult to perform than is the index test (otherwise, the less precise test would not be used at all). The performance of the index test is measured in a small, select group of patients enrolled in a study. We are interested, however, in how the test performs in the broader group of patients in which it will be used in practice. The test may perform differently in the two groups, so we make the following distinction: the study population comprises those patients (usually a subset of the clinically relevant population) in whom test discrimination is measured and reported; the clinically relevant population comprises those patients in whom a test typically is used.
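Returning briefly to the cutoff trade-off of Sect. 3.3.3: an ROC curve is simply the set of (FPR, TPR) pairs obtained by sweeping the cutoff across its range. A minimal sketch, using invented scores rather than data from any real test:

```python
# Sketch: constructing an ROC curve by sweeping the cutoff for "abnormal".
# The score lists are invented for illustration only.

healthy = [1, 2, 2, 3, 3, 4, 5]
diseased = [3, 4, 4, 5, 6, 6, 7]

def rates(cutoff):
    """Return (FPR, TPR) when 'positive' means score > cutoff."""
    tpr = sum(s > cutoff for s in diseased) / len(diseased)
    fpr = sum(s > cutoff for s in healthy) / len(healthy)
    return fpr, tpr

# One point per candidate cutoff; sorting by FPR traces the curve from
# (0, 0) (everything called negative) to (1, 1) (everything called positive).
points = sorted(rates(c) for c in range(0, 8))
for fpr, tpr in points:
    print(round(fpr, 2), round(tpr, 2))
```

Plotting these points would yield a curve like those in Fig. 3.3; a test whose curve lies closer to the upper-left corner is more discriminating.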
3.3.5 Bias in the Measurement of Test Characteristics
We mentioned earlier the problem of referral bias. Published estimates of disease prevalence (derived from a study population) may differ from the prevalence in the clinically relevant population because diseased patients are more likely to be included in studies than are nondiseased patients. Similarly, published values of sensitivity and specificity are derived from study populations that may differ from the clinically relevant populations in terms of average level of health and disease prevalence. These differences may affect test performance, so the reported values may not apply to many patients in whom a test is used in clinical practice.

►►Example 3.7
In the early 1970s, a blood test called the carcinoembryonic antigen (CEA) was touted as a screening test for colon cancer. Reports of early investigations, performed in selected patients, indicated that the test had high sensitivity and specificity. Subsequent work, however, proved the CEA to be completely valueless as a screening blood test for colon cancer. Screening tests are used in unselected populations, and the differences between the study and clinically relevant populations were partly responsible for the original miscalculations of the CEA’s TPR and TNR (Ransohoff and Feinstein 1978). ◄
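The mechanism behind such a miscalculation can be illustrated with a small simulation. The scores are invented, and the responsible bias (spectrum bias) is named and explained in the text that follows:

```python
# Sketch: measuring sensitivity only on advanced cases (as in early studies
# of a screening test) inflates the estimate relative to the full clinical
# spectrum. All numbers are invented for illustration.

def sensitivity(diseased_scores, cutoff=5):
    # 'positive' means score > cutoff
    return sum(s > cutoff for s in diseased_scores) / len(diseased_scores)

advanced_disease = [8, 9, 9, 10, 10]   # easy to detect: high scores
early_disease = [3, 4, 5, 6, 7]        # often fall below the cutoff

study_population = advanced_disease                     # "sickest of the sick"
clinical_population = advanced_disease + early_disease  # includes early cases

print(sensitivity(study_population))    # 1.0: every advanced case detected
print(sensitivity(clinical_population)) # lower: early cases are missed
```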
The experience with CEA has been repeated with numerous tests. Early measures of test discrimination are overly optimistic, and subsequent test performance is disappointing. Problems arise when the TPR and TNR, as measured in the study population, do not apply to the clinically relevant population. These problems usually are the result of bias in the design of the initial studies—notably spectrum bias, test-referral bias, or test-interpretation bias.

Spectrum bias occurs when the study population includes only individuals who have advanced disease (“sickest of the sick”) and healthy volunteers, as is often the case when a test is first being developed. Advanced disease may be easier to detect than early disease. For example, cancer is easier to detect when it has spread throughout the body (metastasized) than when it is localized to, say, a small portion of the colon. In contrast to the study population, the clinically relevant population will contain more cases of early disease that are more likely to be missed by the index test (FNs). Thus, the study population will have an artifactually low FNR, which produces an artifactually high TPR (TPR = 1 − FNR). In addition, healthy volunteers are less likely than are patients in the clinically relevant population to have other diseases that may cause FP results⁶; the study population will have an artificially low FPR, and therefore the specificity will be overestimated (TNR = 1 − FPR). Inaccuracies in early estimates of the TPR and TNR of the CEA were partly due to spectrum bias.

Test-referral bias (sometimes referred to as referral bias) occurs when a positive index test is a criterion for ordering the gold-standard test. In clinical practice, patients with negative index tests are less likely to undergo the gold-standard test than are patients with positive tests. In other words, the study population, comprising individuals with positive index-test results, has a higher percentage of patients with disease than does the clinically relevant population. Therefore, both TN and FN tests will be underrepresented in the study population. The result is overestimation of the TPR and underestimation of the TNR in the study population.

Test-interpretation bias develops when the interpretation of the index test affects that of the gold-standard test or vice versa. This bias causes an artificial concordance between the tests (the results are more likely to be the same) and spuriously increases measures of concordance—the sensitivity and specificity—in the study population. (Remember, the relative frequencies of TPs and TNs are the basis for measures of concordance.) To avoid these problems, the person interpreting the index test should be unaware of the results of the gold-standard test.

To counter these three biases, you may need to adjust the TPR and TNR when they are applied to a new population. All the biases result in a TPR that is higher in the study population than it is in the clinically relevant population. Thus, if you suspect bias, you should adjust the TPR (sensitivity) downward when you apply it to a new population. Adjustment of the TNR (specificity) depends on which type of bias is present. Spectrum bias and test-interpretation bias result in a TNR that is higher in the study population than it will be in the clinically relevant population. Thus, if these biases are present, you should adjust the specificity downward when you apply it to a new population. Test-referral bias, on the other hand, produces a measured specificity in the study population that is lower than it will be in the clinically relevant population. If you suspect test-referral bias, you should adjust the specificity upward when you apply it to a new population.

Footnote 6: Volunteers are often healthy, whereas patients in the clinically relevant population often have several diseases in addition to the disease for which a test is designed. These other diseases may cause FP test results. For example, patients with benign (rather than malignant) enlargement of their prostate glands are more likely than are healthy volunteers to have FP elevations of prostate-specific antigen (Meigs et al. 1996), a substance in the blood that is elevated in men who have prostate cancer. Measurement of prostate-specific antigen is often used to detect prostate cancer.

3.3.6 Meta-Analysis of Diagnostic Tests
Biomedical Decision Making: Probabilistic Clinical Reasoning

Often, there are many studies that evaluate the sensitivity and specificity of the same diagnostic test. If the studies come to similar conclusions
about the sensitivity and specificity of the test, you can have increased confidence in the results of the studies. But what if the studies disagree? For example, by 1995, over 100 studies had assessed the sensitivity and specificity of the PCR for diagnosis of HIV (Owens et al. 1996a, b); these studies estimated the sensitivity of PCR to be as low as 10% and as high as 100%, and they assessed the specificity of PCR to be between 40% and 100%. Which results should you believe?

One approach that you can use is to assess the quality of the studies and to use the estimates from the highest-quality studies. For evaluation of PCR, however, even the high-quality studies did not agree. Another approach is to perform a meta-analysis: a study that combines quantitatively the estimates from individual studies to develop a summary ROC curve (Moses et al. 1993; Owens et al. 1996a, b; Hellmich et al. 1999; Leeflang et al. 2008; Leeflang 2014). Investigators develop a summary ROC curve by using estimates from many studies, in contrast to the type of ROC curve discussed in Sect. 3.3.3, which is developed from the data in a single study. Summary ROC curves provide the best available approach to synthesizing data from many studies.

Section 3.3 has dealt with the second step in the diagnostic process: acquisition of further information with diagnostic tests. We have learned how to characterize the performance of a test with sensitivity (TPR) and specificity (TNR). These measures reveal the probability of a test result given the true state of the patient. They do not, however, answer the clinically relevant question posed in the opening example: Given a positive test result, what is the probability that this patient has the disease? To answer this question, we must learn methods to calculate the post-test probability of disease.
3.4 Post-test Probability: Bayes’
Theorem and Predictive Value
The third stage of the diagnostic process (see Fig. 3.1a) is to adjust our probability estimate to take into account the new information gained from diagnostic tests by calculating the post-test probability.
3.4.1 Bayes’ Theorem
As we noted earlier in this chapter, a clinician can use the disease prevalence in the patient population as an initial estimate of the pretest risk of disease. Once clinicians begin to accumulate information about a patient, however, they revise their estimate of the probability of disease. The revised estimate (rather than the disease prevalence in the general population) becomes the pretest probability for the test that they perform. After they have gathered more information with a diagnostic test, they can calculate the post-test probability of disease with Bayes' theorem.

Bayes' theorem is a quantitative method for calculating post-test probability using the pretest probability and the sensitivity and specificity of the test. The theorem is derived from the definition of conditional probability and from the properties of probability (see the Appendix to this chapter for the derivation). Recall that a conditional probability is the probability that event A will occur given that event B is known to occur (see Sect. 3.2). In general, we want to know the probability that disease is present (event A), given that the test is known to be positive (event B). We denote the presence of disease as D, its absence as −D, a test result as R, and the pretest probability of disease as p[D]. The probability of disease, given a test result, is written p[D|R]. Bayes' theorem is:

p[D|R] = (p[D] × p[R|D]) / (p[D] × p[R|D] + p[−D] × p[R|−D])

We can reformulate this general equation in terms of a positive test, (+), by substituting p[D|+] for p[D|R], p[+|D] for p[R|D], p[+|−D] for p[R|−D], and 1 − p[D] for p[−D]. From Sect. 3.3, recall that p[+|D] = TPR and p[+|−D] = FPR. Substitution provides Bayes' theorem for a positive test:

p[D|+] = (p[D] × TPR) / (p[D] × TPR + (1 − p[D]) × FPR)

We can use a similar derivation to develop Bayes' theorem for a negative test:

p[D|−] = (p[D] × FNR) / (p[D] × FNR + (1 − p[D]) × TNR)
D. K. Owens et al.
►►Example 3.8
We are now able to calculate the clinically important probability in Example 3.4: the post-test probability of heart disease after a positive exercise test. At the end of Sect. 3.2.2, we estimated the pretest probability of heart disease as 0.95, based on the prevalence of heart disease in men who have typical symptoms of heart disease and on the prevalence in people with a family history of heart disease. Assume that the TPR and FPR of the exercise stress test are 0.65 and 0.20, respectively. Substituting in Bayes' formula for a positive test, we obtain the probability of heart disease given a positive test result:

p[D|+] = (0.95 × 0.65) / (0.95 × 0.65 + 0.05 × 0.20) = 0.98 ◄
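Bayes' theorem for a positive (or negative) test is easy to compute directly. Here is a minimal Python sketch of the Example 3.8 calculation; the function names are illustrative, not from the text:

```python
def posttest_prob_positive(pretest: float, tpr: float, fpr: float) -> float:
    """Bayes' theorem for a positive test:
    p[D|+] = p[D] * TPR / (p[D] * TPR + (1 - p[D]) * FPR)."""
    return (pretest * tpr) / (pretest * tpr + (1.0 - pretest) * fpr)


def posttest_prob_negative(pretest: float, fnr: float, tnr: float) -> float:
    """Bayes' theorem for a negative test:
    p[D|-] = p[D] * FNR / (p[D] * FNR + (1 - p[D]) * TNR)."""
    return (pretest * fnr) / (pretest * fnr + (1.0 - pretest) * tnr)


# Example 3.8: pretest probability 0.95, TPR 0.65, FPR 0.20
print(round(posttest_prob_positive(0.95, 0.65, 0.20), 2))  # 0.98
# Repeating with a pretest probability of 0.75:
print(round(posttest_prob_positive(0.75, 0.65, 0.20), 2))  # 0.91
```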
Thus, the positive test raised the post-test probability to 0.98 from the pretest probability of 0.95. The change in probability is modest because the pretest probability was high (0.95) and because the FPR also is high (0.20). If we repeat the calculation with a pretest probability of 0.75, the post-test probability is 0.91. If we assume the FPR of the test to be 0.05 instead of 0.20, a pretest probability of 0.95 changes to 0.996.

3.4.2 The Odds-Ratio Form of Bayes' Theorem and Likelihood Ratios
Although the formula for Bayes' theorem is straightforward, it is awkward for mental calculations. We can develop a more convenient form of Bayes' theorem by expressing probability as odds and by using a different measure of test discrimination. Probability and odds are related as follows:

odds = p / (1 − p),
p = odds / (1 + odds).

Thus, if the probability of rain today is 0.75, the odds are 3:1; that is, on similar days, we should expect rain to occur three times for each time it does not occur. A simple relationship exists between pretest odds and post-test odds:

post-test odds = pretest odds × likelihood ratio

or

p[D|R] / p[−D|R] = (p[D] / p[−D]) × (p[R|D] / p[R|−D]).
This equation is the odds-ratio form of Bayes' theorem.7 It can be derived in a straightforward fashion from the definitions of Bayes' theorem and of conditional probability that we provided earlier. Thus, to obtain the post-test odds, we simply multiply the pretest odds by the likelihood ratio (LR) for the test in question.

The LR of a test combines the measures of test discrimination discussed earlier to give one number that characterizes the discriminatory power of a test, defined as:

LR = p[R|D] / p[R|−D]

or

LR = probability of result in diseased people / probability of result in nondiseased people

The LR indicates the amount that the odds of disease change based on the test result. We can use the LR to characterize clinical findings (such as a swollen leg) or a test result. We describe the performance of a test that has only two possible outcomes (e.g., positive or negative) by two LRs: one corresponding to a positive test result and the other corresponding to a negative test. These ratios are abbreviated LR+ and LR−, respectively.

LR+ = probability that test is positive in diseased people / probability that test is positive in nondiseased people = TPR / FPR
7 Some authors refer to this expression as the odds-likelihood form of Bayes' theorem.
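The odds-probability conversions and the one-step LR update can be sketched in Python (illustrative function names; the rain example comes from the text):

```python
def prob_to_odds(p: float) -> float:
    """odds = p / (1 - p)"""
    return p / (1.0 - p)


def odds_to_prob(odds: float) -> float:
    """p = odds / (1 + odds)"""
    return odds / (1.0 + odds)


def posttest_prob_from_lr(pretest: float, lr: float) -> float:
    """One-step Bayes update: post-test odds = pretest odds x LR."""
    return odds_to_prob(prob_to_odds(pretest) * lr)


# If the probability of rain is 0.75, the odds are 3:1:
print(prob_to_odds(0.75))                           # 3.0
# A pretest probability of 0.75 updated by an LR of 3.25:
print(round(posttest_prob_from_lr(0.75, 3.25), 2))  # 0.91
```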
In a test that discriminates well between disease and nondisease, the TPR will be high, the FPR will be low, and thus LR+ will be much greater than 1. An LR of 1 means that the probability of a test result is the same in diseased and nondiseased individuals; the test has no value. Similarly,

LR− = probability that test is negative in diseased people / probability that test is negative in nondiseased people = FNR / TNR
A desirable test will have a low FNR and a high TNR; therefore, the LR− will be much less than 1.

►►Example 3.9
We can calculate the post-test probability for a positive exercise stress test in a 70-year-old woman whose pretest probability is 0.75. The pretest odds are:

odds = p / (1 − p) = 0.75 / (1 − 0.75) = 0.75 / 0.25 = 3, or 3:1

The LR for the stress test is:

LR+ = TPR / FPR = 0.65 / 0.20 = 3.25
We can calculate the post-test odds of a positive test result using the odds-ratio form of Bayes' theorem:

post-test odds = 3 × 3.25 = 9.75:1

We can then convert the odds to a probability:

p = odds / (1 + odds) = 9.75 / (1 + 9.75) = 0.91 ◄

As expected, this result agrees with our earlier answer (see the discussion of Example 3.8). The odds-ratio form of Bayes' theorem allows rapid calculation. The LR is a powerful method for characterizing the operating characteristics of a test: if you know the pretest odds, you can calculate the post-test odds in one step. The LR demonstrates that a useful test is one that changes the odds of disease.

3.4.3 Predictive Value of a Test

An alternative approach for estimation of the probability of disease in a person who has a positive or negative test is to calculate the predictive value of the test. The positive predictive value (PV+) of a test is the likelihood that a patient who has a positive test result also has disease. Thus, PV+ can be calculated directly from a 2 × 2 contingency table:

PV+ = number of diseased patients with positive test / total number of patients with a positive test

From the 2 × 2 contingency table in Table 3.3,

PV+ = TP / (TP + FP)

The negative predictive value (PV−) is the likelihood that a patient with a negative test does not have disease:

PV− = number of nondiseased patients with negative test / total number of patients with a negative test

From the 2 × 2 contingency table in Table 3.3,

PV− = TN / (TN + FN)
►►Example 3.10
We can calculate the PV of the EIA test from the 2 × 2 table that we constructed in Example 3.6 (see Table 3.4) as follows:

PV+ = 98 / (98 + 3) = 0.97

PV− = 297 / (297 + 2) = 0.99
The probability that antibody is present in a patient who has a positive index test (EIA) in this study is 0.97; about 97 of 100 patients with a positive test will have antibody. The likelihood that a patient with a negative index test does not have antibody is about 0.99. ◄
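These predictive-value calculations, and the dependence of PV+ on disease prevalence, can be sketched in Python. The counts are from Table 3.4; the sensitivity (98/100 = 0.98) and specificity (297/300 = 0.99) are implied by the same table; the function names are illustrative:

```python
def predictive_values(tp: int, fp: int, fn: int, tn: int):
    """PV+ = TP/(TP+FP); PV- = TN/(TN+FN), from a 2 x 2 contingency table."""
    return tp / (tp + fp), tn / (tn + fn)


def pv_pos_from_prevalence(prev: float, sens: float, spec: float) -> float:
    """PV+ recomputed for a new prevalence via Bayes' theorem."""
    return prev * sens / (prev * sens + (1.0 - prev) * (1.0 - spec))


# Counts from Table 3.4 (Example 3.6): TP=98, FP=3, FN=2, TN=297
pv_pos, pv_neg = predictive_values(tp=98, fp=3, fn=2, tn=297)
print(round(pv_pos, 2), round(pv_neg, 2))  # 0.97 0.99

# The same test (sens 0.98, spec 0.99) at a prevalence of 0.001:
print(round(pv_pos_from_prevalence(0.001, 0.98, 0.99), 2))  # 0.09
```

Note how the same sensitivity and specificity yield a PV+ of 0.97 at a prevalence of 0.25 but under 0.1 at a prevalence of 0.001, which is the discrepancy discussed in the text.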
It is worth reemphasizing the difference between PV and sensitivity and specificity, given that both are calculated from the 2 × 2 table and they often are confused. The sensitivity and specificity give the probability of a particular test result in a patient who has a particular disease state. The PV gives the probability of true disease state once the patient's test result is known.

The PV+ calculated from Table 3.4 is 0.97, so we expect 97 of 100 patients with a positive index test actually to have antibody. Yet, in Example 3.1, we found that fewer than one of ten patients with a positive test were expected to have antibody. What explains the discrepancy in these examples? The sensitivity and specificity (and, therefore, the LRs) in the two examples are identical. The discrepancy is due to an extremely important and often overlooked characteristic of PV: the PV of a test depends on the prevalence of disease in the study population (the prevalence can be calculated as TP + FN divided by the total number of patients in the 2 × 2 table). The PV cannot be generalized to a new population because the prevalence of disease may differ between the two populations.

The difference in PV of the EIA in Example 3.1 and in Example 3.6 is due to a difference in the prevalence of disease in the examples. The prevalence of antibody was given as 0.001 in Example 3.1 and as 0.25 in Example 3.6. These examples should remind us that the PV+ is not an intrinsic property of a test. Rather, it represents the post-test probability of disease only when the prevalence is identical to that in the 2 × 2 contingency table from which the PV+ was calculated. Bayes' theorem provides a method for calculation of the post-test probability of disease for any prior probability. For that reason, we prefer the use of Bayes' theorem to calculate the post-test probability of disease.

3.4.4 Implications of Bayes' Theorem
In this section, we explore the implications of Bayes' theorem for test interpretation. These ideas are extremely important, yet they often are misunderstood. Figure 3.4 illustrates one of the most essential concepts in this chapter: The post-test
.. Fig. 3.4 Relationship between pretest probability and post-test probability of disease. The dashed lines correspond to a test that has no effect on the probability of disease. Sensitivity and specificity of the test were assumed to be 0.90 for the two examples. a The post-test probability of disease corresponding to a positive test result (solid curve) was calculated with Bayes' theorem for all values of pretest probability. b The post-test probability of disease corresponding to a negative test result (solid curve) was calculated with Bayes' theorem for all values of pretest probability. (Source: Adapted from Sox (1987), with permission)
probability of disease increases as the pretest probability of disease increases. We produced Fig. 3.4a by calculating the post-test probability after a positive test result for all possible pretest probabilities of disease. We similarly derived Fig. 3.4b for a negative test result. The 45-degree line in each figure denotes a test in which the pretest and post-test probability are equal (LR = 1), indicating a test that is useless. The curve in Fig. 3.4a relates pretest and post-test probabilities in a test with a sensitivity and specificity of 0.9. Note that, at low pretest probabilities, the post-test probability after a positive test result is much higher than is the pretest probability. At high pretest probabilities, the post-test probability is only slightly higher than the pretest probability. Figure 3.4b shows the relationship between the pretest and post-test probabilities after a negative test result. At high pretest probabilities, the post-test probability after a negative test result is much lower than is the pretest probability. A negative test, however, has little effect on the post-test probability if the pretest probability is low.

This discussion emphasizes a key idea of this chapter: the interpretation of a test result depends on the pretest probability of disease. If the pretest probability is low, a positive test result has a large effect, and a negative test result has a small effect. If the pretest probability is high, a positive test result has a small effect, and a negative test result has a large effect. In other words, when the clinician is almost certain of the diagnosis before testing (pretest probability nearly 0 or nearly 1), a confirmatory test has little effect on the posterior probability (see Example 3.8). If the pretest probability is intermediate or if the result contradicts a strongly held clinical impression, the test result will have a large effect on the post-test probability.

Note from Fig. 3.4a that, if the pretest probability is very low, a positive test result can raise the post-test probability into only the intermediate range. Assume that Fig. 3.4a represents the relationship between the pretest and post-test probabilities for the exercise stress test. If the clinician believes the pretest probability of coronary artery disease is 0.1, the post-test probability will be about 0.5. Although there has been a large change in the probability, the post-test probability is in an intermediate range, which leaves considerable uncertainty about the diagnosis. Thus, if the pretest probability is low, it is unlikely that a positive test result will raise the probability of disease sufficiently for the clinician to make that diagnosis with confidence. An exception to this statement occurs when a test has a very high specificity (or a large LR+); e.g., HIV antibody tests have a specificity greater than 0.99, and therefore a positive test is convincing. Similarly, if the pretest probability is very high, it is unlikely that a negative test result will lower the post-test probability sufficiently to exclude a diagnosis.

Figure 3.5 illustrates another important concept: test specificity affects primarily the interpretation of a positive test; test sensitivity affects primarily the interpretation of a negative test.

.. Fig. 3.5 Effects of test sensitivity and specificity on post-test probability. The curves are similar to those shown in Fig. 3.4 except that the calculations have been repeated for several values of the sensitivity (TPR, true-positive rate) and specificity (TNR, true-negative rate) of the test. a The sensitivity of the test was assumed to be 0.90, and the calculations were repeated for several values of test specificity. b The specificity of the test was assumed to be 0.90, and the calculations were repeated for several values of the sensitivity of the test. In both panels, the top family of curves corresponds to positive test results, and the bottom family of curves corresponds to negative test results. (Source: Adapted from Sox (1987), with permission)

In both parts (a) and (b) of Fig. 3.5, the top family of curves corresponds to positive test results and the bottom family to negative test results. Figure 3.5a shows the post-test probabilities for tests with varying specificities (TNR). Note that changes in the specificity produce large changes in the top family of curves (positive test results) but have little effect on the lower family of curves (negative test results). That is, an increase in the specificity of a test markedly changes the post-test probability if the test is positive but has relatively little effect on the post-test probability if the test is negative. Thus, if you are trying to rule in a diagnosis,8 you should choose a test with high specificity or a high LR+. Figure 3.5b shows the post-test probabilities for tests with varying sensitivities. Note that changes in sensitivity produce large changes in the bottom family of curves (negative test results) but have little effect on the top family of curves. Thus, if you are trying to exclude a disease, choose a test with a high sensitivity or a high LR−.

3.4.5 Cautions in the Application of Bayes' Theorem
Bayes’ theorem provides a powerful method for calculating post-test probability. You should be aware, however, of the possible errors you can make when you use it. Common problems
are inaccurate estimation of pretest probability, faulty application of test-performance measures, and violation of the assumptions of conditional independence and of mutual exclusivity.

Bayes' theorem provides a means to adjust an estimate of pretest probability to take into account new information. The accuracy of the calculated post-test probability is limited, however, by the accuracy of the estimated pretest probability. Accuracy of estimated prior probability is increased by proper use of published prevalence rates, heuristics, and clinical prediction rules. In a decision analysis, as we shall see, a range of prior probability often is sufficient. Nonetheless, if the pretest probability assessment is unreliable, Bayes' theorem will be of little value.

A second potential mistake that you can make when using Bayes' theorem is to apply published values for the test sensitivity and specificity, or LRs, without paying attention to the possible effects of bias in the studies in which the test performance was measured (see Sect. 3.3.5). With certain tests, the LRs may differ depending on the pretest odds, in part because differences in pretest odds may reflect differences in the spectrum of disease in the population.

A third potential problem arises when you use Bayes' theorem to interpret a sequence of tests. If a patient undergoes two tests in sequence, you can use the post-test probability after the first test result, calculated with Bayes' theorem, as the pretest probability for the second test. Then, you use Bayes' theorem a second time to calculate the post-test probability after the second test. This approach is valid, however, only if the two tests are conditionally independent. Tests for the same disease are conditionally independent when the probability of a particular result on the second test does not depend on the result of the first test, given (conditioned on) the disease state.

8 In medicine, to rule in a disease is to confirm that the patient does have the disease; to rule out a disease is to confirm that the patient does not have the disease. A doctor who strongly suspects that his or her patient has a bacterial infection orders a culture to rule in his or her diagnosis. Another doctor is almost certain that his or her patient has a simple sore throat but orders a culture to rule out streptococcal infection (strep throat). This terminology oversimplifies a diagnostic process that is probabilistic. Diagnostic tests rarely, if ever, rule in or rule out a disease; rather, the tests raise or lower the probability of disease.
Expressed in conditional probability notation for the case in which the disease is present,
p[second test positive | first test positive and disease present]
= p[second test positive | first test negative and disease present]
= p[second test positive | disease present].

If the conditional independence assumption is satisfied, the post-test odds = pretest odds × LR1 × LR2. If you apply Bayes' theorem sequentially in situations in which conditional independence is violated, you will obtain inaccurate post-test probabilities.

The fourth common problem arises when you assume that all test abnormalities result from one (and only one) disease process. The Bayesian approach, as we have described it, generally presumes that the diseases under consideration are mutually exclusive. If they are not, Bayesian updating must be applied with great care.

We have shown how to calculate post-test probability. In Sect. 3.5, we turn to the problem of decision making when the outcomes of a clinician's actions (e.g., of treatments) are uncertain.
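Under the conditional-independence assumption, the sequential update is simply a product of LRs. A Python sketch (the two-test numbers below are illustrative, not from the text):

```python
def sequential_posttest_prob(pretest: float, *lrs: float) -> float:
    """Post-test odds = pretest odds x LR1 x LR2 x ...
    Valid only if the tests are conditionally independent
    given the disease state."""
    odds = pretest / (1.0 - pretest)
    for lr in lrs:
        odds *= lr
    return odds / (1.0 + odds)


# Two conditionally independent positive tests, each with LR+ = 3.25,
# applied to an illustrative pretest probability of 0.75:
print(round(sequential_posttest_prob(0.75, 3.25, 3.25), 3))  # 0.969
```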
3.5 Expected-Value Decision Making

Medical decision-making problems often cannot be solved by reasoning based on pathophysiology. For example, clinicians need a method for choosing among treatments when the outcome of the treatments is uncertain, as are the results of a surgical operation. You can use the ideas developed in the preceding sections to solve such difficult decision problems. Here we discuss two methods: the decision tree, a method for representing and comparing the expected outcomes of each decision alternative; and the threshold probability, a method for deciding whether new information can change a management decision. These techniques help you to clarify the decision problem and thus to choose the alternative that is most likely to help the patient.

3.5.1 Comparison of Uncertain Prospects

Like those of most biological events, the outcome of an individual's illness is unpredictable. How can a clinician determine which course of action has the greatest chance of success?

►►Example 3.11

There are two available therapies for a fatal illness. The length of a patient's life after either therapy is unpredictable, as illustrated by the frequency distribution shown in Fig. 3.6 and summarized in Table 3.5. Each therapy is associated with uncertainty: regardless of which therapy a patient receives, the patient will die by the end of the fourth year, but there is no way to know which
.. Fig. 3.6 Survival after therapy for a fatal disease. Two therapies are available; the results of either are unpredictable
.. Table 3.5 Distribution of probabilities for the two therapies in Fig. 3.7

Probability of death
Years after therapy   Therapy A   Therapy B
1                     0.20        0.05
2                     0.40        0.15
3                     0.30        0.45
4                     0.10        0.35
year will be the patient's last. Figure 3.6 shows that survival until the fourth year is more likely with therapy B, but the patient might die in the first year with therapy B or might survive to the fourth year with therapy A. ◄
Which of the two therapies is preferable? Example 3.11 demonstrates a significant fact: a choice among therapies is a choice among gambles (i.e., situations in which chance determines the outcomes). How do we usually choose among gambles? More often than not, we rely on hunches or on a sixth sense. How should we choose among gambles? We propose a method for choosing called expected-value decision making: we characterize each gamble by a number, and we use that number to compare the gambles.9

In Example 3.11, therapy A and therapy B are both gambles with respect to duration of life after therapy. We want to assign a measure (or number) to each therapy that summarizes the outcomes such that we can decide which therapy is preferable. The ideal criterion for choosing a gamble should be a number that reflects preferences (in medicine, often the patient's preferences) for the outcomes of the gamble. Utility is the name given to a measure of preference that has a desirable property for decision making: the gamble with the highest utility should be preferred. We shall discuss utility briefly (Sect. 3.5.4), but you can pursue this topic and
9 Expected-value decision making had been used in many fields before it was first applied to medicine.
the details of decision analysis in other textbooks (see Suggested Readings at the end of this chapter).10 We use the average duration of life after therapy (survival) as a criterion for choosing among therapies; remember that this model is oversimplified, used here for discussion only. Later, we consider other factors, such as the quality of life. Because we cannot be sure of the duration of survival for any given patient, we characterize a therapy by the mean survival (average length of life) that would be observed in a large number of patients after they were given the therapy.

The first step we take in calculating the mean survival for a therapy is to divide the population receiving the therapy into groups of patients who have similar survival rates. Then, we multiply the survival time in each group11 by the fraction of the total population in that group. Finally, we sum these products over all possible survival values. We can perform this calculation for the therapies in Example 3.11:

Mean survival for therapy A = (0.2 × 1.0) + (0.4 × 2.0) + (0.3 × 3.0) + (0.1 × 4.0) = 2.3 years.
Mean survival for therapy B = (0.05 × 1.0) + (0.15 × 2.0) + (0.45 × 3.0) + (0.35 × 4.0) = 3.1 years.

Survival after a therapy is under the control of chance. Therapy A is a gamble characterized by an average survival equal to 2.3 years. Therapy B is a gamble characterized by an average survival of 3.1 years. If length of life is our criterion for choosing, we should select therapy B.

3.5.2 Representation of Choices with Decision Trees
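The mean-survival computation of Sect. 3.5.1, which we will now attach to decision trees, is a direct expected-value calculation. A Python sketch using the Table 3.5 distributions (the function name is illustrative):

```python
def expected_value(outcomes):
    """Expected value at a chance node: sum of probability x value
    over all possible outcomes."""
    return sum(p * v for p, v in outcomes)


# (probability of death, years of survival) pairs from Table 3.5:
therapy_a = [(0.20, 1.0), (0.40, 2.0), (0.30, 3.0), (0.10, 4.0)]
therapy_b = [(0.05, 1.0), (0.15, 2.0), (0.45, 3.0), (0.35, 4.0)]

print(round(expected_value(therapy_a), 1))  # 2.3 years
print(round(expected_value(therapy_b), 1))  # 3.1 years
```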
The choice between therapies A and B is represented diagrammatically in Fig. 3.7. Events that are under the control of chance can be represented by a chance node. By convention, a

10 A more general term for expected-value decision making is expected utility decision making. Because a full treatment of utility is beyond the scope of this chapter, we have chosen to use the term expected value.
11 For this simple example, death during an interval is assumed to occur at the end of the year.
.. Fig. 3.7 A chance-node representation of survival after the two therapies in Fig. 3.6. The probabilities times the corresponding years of survival are summed to obtain the total expected survival
chance node is shown as a circle from which several lines emanate. Each line represents one of the possible outcomes. Associated with each line is the probability of the outcome occurring. For a single patient, only one outcome can occur. Some physicians object to using probability for just this reason: "You cannot rely on population data, because each patient is an individual." In fact, we often must use the frequency of the outcomes of many patients experiencing the same event to inform our opinion about what might happen to an individual. From these frequencies, we can make patient-specific adjustments and thus estimate the probability of each outcome at a chance node.

A chance node can represent more than just an event governed by chance. The outcome of a chance event, unknowable for the individual, can be represented by the expected value at the chance node. The concept of expected value is important and is easy to understand. We can calculate the mean survival that would be expected based on the probabilities depicted by the chance node in Fig. 3.7. This average length of life is called the expected survival or, more generally, the expected value of the chance node. We calculate the expected value at a chance node by the process just described: we multiply the survival value associated with each possible outcome by the probability that that outcome will occur. We then sum the product of probability times survival over all outcomes. Thus, if several hundred patients were assigned to receive either therapy A or therapy B, the expected survival would be 2.3 years for therapy A and 3.1 years for therapy B.

We have just described the basis of expected-value decision making. The term expected value is used to characterize a chance event, such as the outcome of a therapy. If the outcomes of a therapy are measured in units of duration of survival, units of sense of well-being, or dollars, the therapy is characterized by the expected duration of survival, expected sense of well-being, or expected monetary cost that it will confer on, or incur for, the patient, respectively. To use expected-value decision making, we follow this strategy when there are therapy choices with uncertain outcomes: (1) calculate the expected value of each decision alternative and then (2) pick the alternative with the highest expected value.

3.5.3 Performance of a Decision Analysis
We clarify the concepts of expected-value decision making by discussing an example. There are four steps in decision analysis:
1. Create a decision tree; this step is the most difficult, because it requires formulating the decision problem, assigning probabilities, and measuring outcomes.
2. Calculate the expected value of each decision alternative.
3. Choose the decision alternative with the highest expected value.
4. Use sensitivity analysis to test the conclusions of the analysis.

Some health professionals hesitate when they first learn about the technique of decision analysis, because they recognize the opportunity for error in assigning values to both the probabilities and the utilities in a decision tree. They reason that the technique encourages decision making based on small differences in expected values that are estimates at best. The defense against this concern, which also has been recognized by decision analysts, is the technique known as sensitivity analysis. We discuss this important fourth step in decision analysis in Sect. 3.5.5. In addition, decision analysis helps make the assumptions underlying a decision explicit, so that the assumptions can be assessed carefully.

The first step in decision analysis is to create a decision tree that represents the decision problem. Consider the following clinical problem.
The patient is Mr. Danby, a 66-year-old man who has been crippled with arthritis of both knees so severely that, while he can get about the house with the aid of two canes, he must otherwise use a wheelchair. His other major health problem is emphysema, a disease in which the lungs lose their ability to exchange oxygen and carbon dioxide between blood and air, which in turn causes shortness of breath (dyspnea). He is able to breathe comfortably when he is in a wheelchair, but the effort of walking with canes makes him breathe heavily and feel uncomfortable. Several years ago, he seriously considered knee replacement surgery but decided against it, largely because his internist told him that there was a serious risk that he would not survive the operation because of his lung disease. Recently, however, Mr. Danby’s wife had a stroke and was partially paralyzed; she now requires a degree of assistance that the patient cannot supply given his present state of mobility. He tells his doctor that he is reconsidering knee replacement surgery.
Mr. Danby’s internist is familiar with decision analysis. She recognizes that this problem is filled with uncertainty: Mr. Danby’s ability to survive the operation is in doubt, and the surgery sometimes does not restore mobility to the degree required by such a patient. Furthermore, there is a small chance that the prosthesis (the artificial knee) will become infected, and Mr. Danby then would have to undergo a second risky operation to remove it. After removal of the prosthesis, Mr. Danby would never again be able to walk, even with canes. The possible outcomes of knee replacement include death from the first procedure and death from a second mandatory procedure if the prosthesis becomes infected (which we will assume occurs in the immediate postoperative period, if it occurs at all). Possible functional outcomes include recovery of full mobility or continued, and unchanged, poor mobility. Should Mr. Danby choose to undergo knee replacement surgery, or should he accept the status quo? ◄
Using the conventions of decision analysis, the internist sketches the decision tree shown in . Fig. 3.8. According to these conventions, a square box denotes a decision node, and each line emanating from a decision node represents an action that could be taken. According to the methods of expected-value decision making, the internist first must assign a probability to each branch of each chance node. To accomplish this task, the internist asks several orthopedic surgeons for their estimates of the chance of recovering full function after surgery (p[full recovery] = 0.60) and the chance of developing infection in the prosthetic joint (p[infection] = 0.05). She uses her subjective estimate of the probability that the patient will die during or immediately after knee surgery (p[operative death] = 0.05). Next, she must assign a value to each outcome. To accomplish this task, she first lists the outcomes. As you can see from . Table 3.6, the outcomes differ in two dimensions: length of life (survival) and quality of life (functional status). To characterize each outcome accurately, the internist must develop a measure that takes into account these two dimensions. Simply using duration of survival is inadequate because Mr. Danby values 5 years of good health more than
103 Biomedical Decision Making: Probabilistic Clinical Reasoning
.. Fig. 3.8 Decision tree for knee replacement surgery. The box represents the decision node (whether to have surgery); the circles represent chance nodes. (Branches: Surgery → operative death, or survival → infection [→ operative death, or survival] or no infection [→ full mobility or poor mobility]; No surgery.)
.. Table 3.6 Outcomes for 7 Example 3.12

Survival (years) | Functional status | Years of full function equivalent to outcome
10 | Full mobility (successful surgery) | 10
10 | Poor mobility (status quo or unsuccessful surgery) | 6
10 | Wheelchair-bound (the outcome if a second surgery is necessary) | 3
0 | Death | 0
he values 10 years of poor health. The internist can account for this trade-off factor by converting outcomes with two dimensions into outcomes with a single dimension: duration of survival in good health. The resulting measure is called a quality-adjusted life year (QALY).12 She can convert years in poor health into years in good health by asking Mr. Danby to indicate the shortest period in good health (full mobility) that he would accept in return for his full expected lifetime (10 years) in a state of poor health (status quo). Thus, she asks Mr. Danby: “Many people say they would be willing to accept a shorter life in excellent health in preference to a longer life with significant disability. In your case, how many years with normal mobility do you feel is equivalent in value to 10 years in your current state of disability?” She asks him this question for each outcome. The patient’s responses are shown in the third column of . Table 3.6. The patient decides that 10 years of limited mobility are equivalent to 6 years of normal mobility, whereas 10 years of wheelchair confinement are equivalent to only 3 years of full function. . Figure 3.9 shows the final decision tree—complete with probability estimates and utility values for each outcome.13

12 QALYs commonly are used as measures of utility (value) in medical decision analysis and in health policy analysis.

13 In a more sophisticated decision analysis, the clinician also would adjust the utility values of outcomes that require surgery to account for the pain and inconvenience associated with surgery and rehabilitation. Other approaches to assessing utility are available and may be preferable in some circumstances.

The second task that the internist must undertake is to calculate the expected value, in healthy years, of surgery and of no surgery. She calculates the expected value at each chance node, moving from right (the tips of
.. Fig. 3.9 Decision tree for knee-replacement surgery. Probabilities have been assigned to each branch of each chance node. The patient’s valuations of outcomes (measured in years of perfect mobility) are assigned to the tips of each branch of the tree. (Chance nodes are labeled A–D; p[operative death] = 0.05, p[infection] = 0.05, p[full mobility] = 0.6; outcome values: death 0, wheelchair-bound 3, poor mobility 6, full mobility 10.)
the tree) to left (the root of the tree). Let us consider, for example, the expected value at the chance node representing the outcome of surgery to remove an infected prosthesis (Node A in . Fig. 3.9). The calculation requires three steps:
1. Calculate the expected value of operative death after surgery to remove an infected prosthesis. Multiply the probability of operative death (0.05) by the QALY of the outcome—death (0 years): 0.05 × 0 = 0 QALY.
2. Calculate the expected value of surviving surgery to remove an infected knee prosthesis. Multiply the probability of surviving the operation (0.95) by the number of healthy years equivalent to 10 years of being wheelchair-bound (3 years): 0.95 × 3 = 2.85 QALYs.
3. Add the expected values calculated in step 1 (0 QALY) and step 2 (2.85 QALYs) to obtain the expected value of developing an infected prosthesis: 0 + 2.85 = 2.85 QALYs.
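The averaging-out computation over the whole tree of . Fig. 3.9 can be sketched in a few lines of code. This is an illustrative sketch, not code from the chapter; the tree encoding and names are our own, with probabilities and outcome values taken from the figure:

```python
# "Averaging out" a decision tree from its tips to its root, using the
# knee-replacement example. A chance node is a list of
# (probability, subtree) pairs; a leaf is an outcome value in QALYs.

def expected_value(node):
    """Average out a chance node; a numeric leaf is its own value."""
    if isinstance(node, (int, float)):
        return node
    return sum(p * expected_value(subtree) for p, subtree in node)

# Node A: surgery to remove an infected prosthesis
node_a = [(0.05, 0), (0.95, 3)]            # operative death vs. wheelchair-bound
# Node B: outcome when no infection develops
node_b = [(0.6, 10), (0.4, 6)]             # full vs. poor mobility
# Node C: surviving knee replacement
node_c = [(0.05, node_a), (0.95, node_b)]  # infection vs. no infection
# Node D (root of the surgery branch): operative death vs. survival
surgery = [(0.05, 0), (0.95, node_c)]

no_surgery = 6.0                           # 10 years of poor mobility = 6 QALYs

print(round(expected_value(surgery), 1))   # 7.7 QALYs, as in the text
print(no_surgery)                          # 6.0 QALYs
```

Because 7.7 exceeds 6.0, the rollup reproduces the internist’s recommendation of surgery.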
Similarly, the expected value at chance node B is calculated: (0.6 × 10) + (0.4 × 6) = 8.4 QALYs. To obtain the expected value of surviving knee replacement surgery (Node C), she proceeds as follows:
1. Multiply the expected value of an infected prosthesis (already calculated as 2.85 QALYs) by the probability that the prosthesis will become infected (0.05): 2.85 × 0.05 = 0.143 QALYs.
2. Multiply the expected value of never developing an infected prosthesis (already calculated as 8.4 QALYs) by the probability that the prosthesis will not become infected (0.95): 8.4 × 0.95 = 7.98 QALYs.
3. Add the expected values calculated in step 1 (0.143 QALY) and step 2 (7.98 QALYs) to get the expected value of surviving knee replacement surgery: 0.143 + 7.98 = 8.123 QALYs.
The clinician performs this process, called averaging out at chance nodes, for node D as well, working back to the root of the tree, until the expected value of surgery has been calculated. The outcome of the analysis is as follows. For surgery, Mr. Danby’s average life expectancy, measured in years of normal mobility, is 7.7. What does this value mean? It does not mean that, by accepting surgery,
Mr. Danby is guaranteed 7.7 years of mobile life. One look at the decision tree will show that some patients die in surgery, some develop infection, and some do not gain any improvement in mobility after surgery. Thus, an individual patient has no guarantees. If the clinician had 100 similar patients who underwent the surgery, however, the average number of mobile years would be 7.7. We can understand what this value means for Mr. Danby only by examining the alternative: no surgery. In the analysis for no surgery, the average length of life, measured in years of normal mobility, is 6.0, which Mr. Danby considered equivalent to 10 years of continued poor mobility. Not all patients will experience this outcome; some who have poor mobility will live longer than, and some will live less than, 10 years. The average length of life, however, expressed in years of normal mobility, will be 6. Because 6.0 is less than 7.7, on average the surgery will provide an outcome with higher value to the patient. Thus, the internist recommends performing the surgery. The key insight of expected-value decision making should be clear from this example: given the unpredictable outcome in an individual, the best choice for the individual is the alternative that gives the best result on the average in similar patients. Decision analysis can help the clinician to identify the therapy that will give the best results when averaged over many similar patients. The decision analysis is tailored to a specific patient in that both the utility functions and the probability estimates are adjusted to the individual. Nonetheless, the results of the analysis represent the outcomes that would occur on average in a population of patients who have similar utilities and for whom uncertain events have similar probabilities.

3.5.4 Representation of Patients’ Preferences with Utilities
In 7 Sect. 3.5.3, we introduced the concept of QALYs, because length of life is not the only outcome about which patients care. Patients’ preferences for a health outcome may depend on the length of life with the outcome, on the
quality of life with the outcome, and on the risk involved in achieving the outcome (e.g., a cure for cancer might require a risky surgical operation). How can we incorporate these elements into a decision analysis? To do so, we can represent patients’ preferences with utilities. The utility of a health state is a quantitative measure of the desirability of a health state from the patient’s perspective. Utilities are typically expressed on a 0 to 1 scale, where 0 represents death and 1 represents ideal health. For example, a study of patients who had chest pain (angina) with exercise rated the utility of mild, moderate, and severe angina as 0.95, 0.92, and 0.82 (Nease et al. 1995), respectively. There are several methods for assessing utilities. The standard-gamble technique has the strongest theoretical basis of the various approaches to utility assessment, as shown by Von Neumann and Morgenstern and described by Sox et al. (1988). To illustrate use of the standard gamble, suppose we seek to assess a person’s utility for the health state of asymptomatic HIV infection. To use the standard gamble, we ask our subject to compare the desirability of asymptomatic HIV infection to those of two other health states whose utility we know or can assign. Often, we use ideal health (assigned a utility of 1) and immediate death (assigned a utility of 0) for the comparison of health states. We then ask our subject to choose between asymptomatic HIV infection and a gamble with a chance of ideal health or immediate death. We vary the probability of ideal health and immediate death systematically until the subject is indifferent between asymptomatic HIV infection and the gamble. For example, a subject might be indifferent when the probability of ideal health is 0.8 and the probability of death is 0.2. At this point of indifference, the utility of the gamble and that of asymptomatic HIV infection are equal. 
We calculate the utility of the gamble as the weighted average of the utilities of each outcome of the gamble [(1 × 0.8) + (0 × 0.2)] = 0.8. Thus in this example, the utility of asymptomatic HIV infection is 0.8. Use of the standard gamble enables an analyst to assess the utility of outcomes that differ in length or quality of life. Because the standard gamble involves chance events, it
also assesses a person’s willingness to take risks—called the person’s risk attitude. A second common approach to utility assessment is the time-trade-off technique (Sox et al. 1988; Torrance and Feeny 1989). To assess the utility of asymptomatic HIV infection using the time-trade-off technique, we ask a person to determine the length of time in a better state of health (usually ideal health or best attainable health) that he or she would find equivalent to a longer period of time with asymptomatic HIV infection. For example, if our subject says that 8 months of life with ideal health was equivalent to 12 months of life with asymptomatic HIV infection, then we calculate the utility of asymptomatic HIV infection as 8 ÷ 12 = 0.67. The time-trade-off technique provides a convenient method for valuing outcomes that accounts for gains (or losses) in both length and quality of life. Because the time trade-off does not include gambles, however, it does not assess a person’s risk attitude. Perhaps the strongest assumption underlying the use of the time trade-off as a measure of utility is that people are risk neutral. A risk-neutral decision maker is indifferent between the expected value of a gamble and the gamble itself. For example, a risk-neutral decision maker would be indifferent between the choice of living 20 years (for certain) and that of taking a gamble with a 50% chance of living 40 years and a 50% chance of immediate death (which has an expected value of 20 years). In practice, of course, few people are risk-neutral. Nonetheless, the time-trade-off technique is used frequently to value health outcomes because it is relatively easy to understand. Several other approaches are available to value health outcomes. To use the visual analog scale, a person simply rates the quality of life with a health outcome (e.g., asymptomatic HIV infection) on a scale from 0 to 100.
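Both the standard gamble and the time trade-off reduce to simple arithmetic once the indifference point is elicited. The following sketch (our own; the function names are not from the chapter) computes the utilities from the HIV examples above:

```python
# Utility assessment arithmetic for the two methods described in the text.

def standard_gamble_utility(p_ideal_health):
    """Utility at the indifference point of a gamble between ideal
    health (utility 1) and immediate death (utility 0)."""
    return p_ideal_health * 1 + (1 - p_ideal_health) * 0

def time_tradeoff_utility(equivalent_healthy_time, actual_time):
    """Utility from the time trade-off: the shorter healthy time the
    subject accepts in place of a longer time in the health state."""
    return equivalent_healthy_time / actual_time

# Indifference at p(ideal health) = 0.8 gives utility 0.8
print(standard_gamble_utility(0.8))              # 0.8
# 8 months healthy judged equivalent to 12 months with HIV infection
print(round(time_tradeoff_utility(8, 12), 2))    # 0.67
```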
Although the visual analog scale is easy to explain and use, it has no theoretical justification as a valid measure of utility. Ratings with the visual analog scale, however, correlate modestly well with utilities assessed by the standard gamble and time trade-off. For a demonstration of the use of standard gambles, time tradeoffs, and the visual analog scale to assess utilities in patients with angina, see Nease et al.
(1995); in patients living with HIV, see Joyce et al. (2009, 2012). Other approaches to valuing health outcomes include the Quality of Well-Being Scale, the Health Utilities Index, and the EuroQoL (see Neumann et al. 2017, ch. 7). Each of these instruments assesses how people value health outcomes and therefore may be appropriate for use in decision analyses or cost-effectiveness analyses. In summary, we can use utilities to represent how patients value complicated health outcomes that differ in length and quality of life and in riskiness. Computer-based tools with an interactive format have been developed for assessing utilities; they often include text and multimedia presentations that enhance patients’ understanding of the assessment tasks and of the health outcomes (Sumner et al. 1991; Nease and Owens 1994; Lenert et al. 1995).

3.5.5 Performance of Sensitivity Analysis
Sensitivity analysis is a test of the robustness of the conclusions of an analysis over a wide range of assumptions about the probabilities and the values, or utilities. The probability of an outcome at a chance node may be the best estimate that is available, but there often is a wide range of reasonable probabilities that a clinician could use with nearly equal confidence. We use sensitivity analysis to answer this question: Do my conclusions regarding the preferred choice change when the probability and outcome estimates are assigned values that lie within a reasonable range? The knee-replacement decision in 7 Example 3.12 illustrates the power of sensitivity analysis. If the conclusions of the analysis (surgery is preferable to no surgery) remain the same despite a wide range of assumed values for the probabilities and outcome measures, the recommendation is trustworthy. . Figures 3.10 and 3.11 show the expected survival in healthy years with surgery and without surgery under varying assumptions of the probability of operative death and the probability of attaining perfect mobility, respectively. Each point (value) on these lines represents one calculation of expected survival using the tree in
.. Fig. 3.10 Sensitivity analysis of the effect of operative mortality on length of healthy life (7 Example 3.12). As the probability of operative death increases, the relative values of surgery versus no surgery change. The point at which the two lines cross represents the probability of operative death at which no surgery becomes preferable. The solid line represents the preferred option at a given probability. (Axes: expected years of healthy life, 0–10, versus probability of operative death, 0–0.5; one line each for surgery and no surgery.)
. Fig. 3.8. . Figure 3.10 shows that expected survival is higher with surgery over a wide range of operative mortality rates. Expected survival is lower with surgery, however, when the operative mortality rate exceeds 25%. . Figure 3.11 shows the effect of varying the probability that the operation will lead to perfect mobility. The expected survival, in healthy years, is higher for surgery as long as the probability of perfect mobility exceeds 20%, a much lower figure than is expected from previous experience with the operation. (In 7 Example 3.12, the consulting orthopedic surgeons estimated the chance of full recovery at 60%). Thus, the internist can proceed with confidence to recommend surgery. Mr. Danby cannot be sure of a good outcome, but he has valid reasons for thinking that he is more likely to do well with surgery than he is without it. Another way to state the conclusions of a sensitivity analysis is to indicate the range of probabilities over which the conclusions apply. The point at which the two lines in . Fig. 3.10 cross is the probability of operative death at which the two therapy options have the same expected survival. If expected survival is to be the basis for choosing therapy, the internist and the patient should be indifferent between
.. Fig. 3.11 Sensitivity analysis of the effect of a successful operative result on length of healthy life (7 Example 3.12). As the probability of a successful surgical result increases, the relative values of surgery versus no surgery change. The point at which the two lines cross represents the probability of a successful result at which surgery becomes preferable. The solid line represents the preferred option at a given probability. (Axes: expected years of healthy life, 0–10, versus probability of perfect mobility, 0–1.0; one line each for surgery and no surgery.)
surgery and no surgery when the probability of operative death is 25%.14 When the probability is lower, they should select surgery. When it is higher, they should select no surgery.

14 An operative mortality rate of 25% may seem high; however, this value is correct when we use QALYs as the basis for choosing treatment. A decision maker performing a more sophisticated analysis could use a utility function that reflects the patient’s aversion to risking death.

The approach to sensitivity analyses we have described enables the analyst to understand how uncertainty in one, two, or three parameters affects the conclusions of an analysis. But in a complex problem, a decision tree or decision model may have 100 or more parameters. The analyst may have uncertainty about many parameters in a model. Probabilistic sensitivity analysis is an approach for understanding how the uncertainty in all (or a large number of) model parameters affects the conclusion of a decision analysis. To perform a probabilistic sensitivity analysis, the analyst must specify a probability distribution for each model parameter. The analytic software then chooses a value for each model parameter randomly from the
parameter’s probability distribution. The software then uses this set of parameter values and calculates the outcomes for each alternative. For each evaluation of the model, the software will determine which alternative is preferred. The process is usually repeated 10,000–100,000 times. From the probabilistic sensitivity analysis, the analyst can determine the proportion of times an alternative is preferred, accounting for all uncertainty in model parameters simultaneously. For more information on this advanced topic, see the article by Briggs and colleagues referenced at the end of the chapter.
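A minimal probabilistic sensitivity analysis of the knee-replacement model might look like the sketch below. This is our own illustration, not the chapter’s: the Beta distributions are assumed for the purpose of the example (centered near the point estimates), and the function names are ours.

```python
# Monte Carlo probabilistic sensitivity analysis for the knee-replacement
# decision. Each iteration samples every uncertain parameter from its
# (assumed, illustrative) distribution and records which alternative wins.
import random

random.seed(0)  # make the simulation reproducible

def ev_surgery(p_death, p_infection, p_full):
    """Expected QALYs of the surgery branch; the same operative
    mortality is applied to both operations, as in the text."""
    ev_infected = (1 - p_death) * 3                 # survive removal -> wheelchair-bound
    ev_no_infect = p_full * 10 + (1 - p_full) * 6   # full vs. poor mobility
    ev_survive = p_infection * ev_infected + (1 - p_infection) * ev_no_infect
    return (1 - p_death) * ev_survive

EV_NO_SURGERY = 6.0
n = 10_000
preferred = sum(
    ev_surgery(random.betavariate(5, 95),     # p[operative death], mean ~0.05
               random.betavariate(5, 95),     # p[infection], mean ~0.05
               random.betavariate(60, 40))    # p[full mobility], mean ~0.6
    > EV_NO_SURGERY
    for _ in range(n)
)
print(preferred / n)  # proportion of samples in which surgery is preferred
```

Under these assumed distributions, surgery is preferred in nearly every sample, mirroring the robustness shown by the one-way sensitivity analyses in . Figs. 3.10 and 3.11.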
3.5.6 Representation of Long-Term Outcomes with Markov Models
In 7 Example 3.12, we evaluated Mr. Danby’s decision to have surgery to improve his mobility, which was compromised by arthritis. We assumed that each of the possible outcomes (full mobility, poor mobility, death, etc.) would occur shortly after Mr. Danby took action on his decision. But what if we want to model events that might occur in the distant future? For example, a patient with HIV infection might develop AIDS 10–15 years after infection; thus, a therapy to prevent or delay the development of AIDS could affect events that occur 10–15 years, or more, in the future. A similar problem arises in analyses of decisions regarding many chronic diseases: we must model events that occur over the lifetime of the patient. The decision tree representation is convenient for decisions for which all outcomes occur during a short time horizon, but it is not always sufficient for problems that include events that could occur in the future. How can we include such events in a decision analysis? The answer is to use Markov models (Beck and Pauker 1983; Sonnenberg and Beck 1993; Siebert et al. 2012). To build a Markov model, we first specify the set of health states that a person could experience (e.g., Well, Cancer, and Death in . Fig. 3.12). We then specify the transition probabilities, which are the probabilities that a person will transit from one of these health states to another during a specified time period. This period—often 1 month or 1 year—is the length of the Markov cycle.

.. Fig. 3.12 A simple Markov model. The states of health that a person can experience are indicated by the circles; arrows represent allowed transitions between health states

The Markov model then simulates the transitions among health states for a person (or for a hypothetical cohort of people) for a specified number of cycles; by using a Markov model, we can calculate the probability that a person will be in each of the health states at any time in the future. As an illustration, consider a simple Markov model that has three health states: Well, Cancer, and Death (see . Fig. 3.12). We have specified each of the transition probabilities in . Table 3.7 for the cycle length of 1 year. Thus, we note from . Table 3.7 that a person who is in the well state will remain well with probability 0.9, will develop cancer with probability 0.06, and will die from noncancer causes with probability 0.04 during 1 year. The calculations for a Markov model are performed by computer software. Based on the transition probabilities in . Table 3.7, the probabilities that a person remains well, develops cancer, or dies from non-cancer causes over time are shown in . Table 3.8. We can also determine from a Markov model the expected length of time that a person spends in each health state. Therefore, we can determine life expectancy, or quality-adjusted life expectancy, for any alternative represented by a Markov model. In decision analyses that represent long-term outcomes, the analysts will often use a
Markov model in conjunction with a decision tree to model the decision (Owens et al. 1995; Salpeter et al. 1997; Sanders et al. 2005; Lin et al. 2018). The analyst models the effect of an intervention as a change in the probability of going from one state to another. For example, we could model a cancer-prevention intervention (such as screening for breast cancer with mammography) as a reduction in the transition probability from Well to Cancer in . Fig. 3.12. (See the articles by Beck and Pauker (1983) and Sonnenberg and Beck (1993) for further explanation of the use of Markov models).

3.6 The Decision Whether to Treat, Test, or Do Nothing
The clinician who is evaluating a patient’s symptoms and suspects a disease must choose among the following actions:
.. Table 3.7 Transition probabilities for the Markov model in . Fig. 3.12

Health state transition | Annual probability
Well to well | 0.9
Well to cancer | 0.06
Well to death | 0.04
Cancer to well | 0.0
Cancer to cancer | 0.4
Cancer to death | 0.6
Death to well | 0.0
Death to cancer | 0.0
Death to death | 1.0
1. Do nothing further (neither perform additional tests nor treat the patient).
2. Obtain additional diagnostic information (test) before choosing whether to treat or do nothing.
3. Treat without obtaining more information.

When the clinician knows the patient’s true state, testing is unnecessary, and the doctor needs only to assess the trade-offs among therapeutic options (as in 7 Example 3.12). Learning the patient’s true state, however, may require costly, time-consuming, and often risky diagnostic procedures that may give misleading FP or FN results. Therefore, clinicians often are willing to treat a patient even when they are not absolutely certain about a patient’s true state. There are risks in this course: the clinician may withhold therapy from a person who has the disease of concern, or he may administer therapy to someone who does not have the disease yet may suffer undesirable side effects of therapy.

Deciding among treating, testing, and doing nothing sounds difficult, but you have already learned all the principles that you need to solve this kind of problem. There are three steps:
1. Determine the treatment threshold probability of disease.
2. Determine the pretest probability of disease.
3. Decide whether a test result could affect your decision to treat.

The treatment threshold probability of disease is the probability of disease at which you should be indifferent between treating and not treating (Pauker and Kassirer 1980). Below the treatment threshold, you should not treat. Above the treatment threshold, you should
.. Table 3.8 Probability of future health states for the Markov model in . Fig. 3.12 (probability of each health state at the end of each year)

Health state | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 | Year 6 | Year 7
Well | 0.9000 | 0.8100 | 0.7290 | 0.6561 | 0.5905 | 0.5314 | 0.4783
Cancer | 0.0600 | 0.0780 | 0.0798 | 0.0757 | 0.0696 | 0.0633 | 0.0572
Death | 0.0400 | 0.1120 | 0.1912 | 0.2682 | 0.3399 | 0.4053 | 0.4645
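The cohort calculation behind Tables 3.7 and 3.8 can be reproduced with a short simulation. This sketch is ours (the chapter does not prescribe an implementation); the transition probabilities are those of . Table 3.7, with a cycle length of 1 year:

```python
# Cohort simulation of the three-state Markov model of Fig. 3.12.
# Each cycle, the probability mass in each state is redistributed
# according to the transition matrix; this reproduces Table 3.8.

STATES = ["Well", "Cancer", "Death"]
P = {                                   # P[from][to], annual probabilities
    "Well":   {"Well": 0.90, "Cancer": 0.06, "Death": 0.04},
    "Cancer": {"Well": 0.00, "Cancer": 0.40, "Death": 0.60},
    "Death":  {"Well": 0.00, "Cancer": 0.00, "Death": 1.00},
}

def run_markov(start, years):
    """Return the state-probability distribution at the end of each year."""
    dist = {s: 1.0 if s == start else 0.0 for s in STATES}
    history = []
    for _ in range(years):
        dist = {to: sum(dist[frm] * P[frm][to] for frm in STATES)
                for to in STATES}
        history.append(dist)
    return history

history = run_markov("Well", 7)
for year, dist in enumerate(history, start=1):
    print(year, {s: round(dist[s], 4) for s in STATES})
```

Running this prints the same rows as . Table 3.8, e.g. a cancer probability of 0.0798 at the end of year 3 and a well probability of 0.4783 at the end of year 7.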
treat (. Fig. 3.13).

.. Fig. 3.13 Depiction of the treatment threshold probability. At probabilities of disease that are less than the treatment threshold probability, the preferred action is to withhold therapy. At probabilities of disease that are greater than the treatment threshold probability, the preferred action is to treat. (Axis: probability of disease from 0.0 to 1.0, divided at the treatment-threshold probability into “Do not treat” and “Treat” regions.)

Whether to treat when the diagnosis is not certain is a problem that you can solve with a decision tree, such as the one shown in . Fig. 3.14. You can use this tree to learn the treatment threshold probability of disease by leaving the probability of disease as an unknown, setting the expected value of surgery equal to the expected value for medical (i.e., nonsurgical, such as drugs or physical therapy) treatment, and solving for the probability of disease. (In this example, surgery corresponds to the “treat” branch of the tree in . Fig. 3.14, and nonsurgical intervention corresponds to the “do not treat” branch). Because you are indifferent between medical treatment and surgery at this probability, it is the treatment threshold probability. Using the tree completes step 1. In practice, people often determine the treatment threshold intuitively rather than analytically.

An alternative approach to determination of the treatment threshold probability is to use the equation

p* = H / (H + B),

where p* = the treatment threshold probability, H = the harm associated with treatment of a nondiseased patient, and B = the benefit associated with treatment of a diseased patient (Pauker and Kassirer 1980; Sox et al. 1988). We define B as the difference between the utility (U) of diseased patients who are treated and
.. Fig. 3.14 Decision tree with which to calculate the treatment threshold probability of disease. By setting the utilities of the treat and do not treat choices to be equal, we can compute the probability at which the clinician and patient should be indifferent to the choice. Recall that p [−D] = 1 − p [D]. (Branches: Treat → disease present, p [D], utility U(D, treat); disease absent, p [−D], utility U(−D, treat). Do not treat → disease present, p [D], utility U(D, do not treat); disease absent, p [−D], utility U(−D, do not treat).)
diseased patients who are not treated (U[D, treat] − U[D, do not treat], as shown in . Fig. 3.14). The utility of diseased patients who are treated should be greater than that of diseased patients who are not treated; therefore, B is
positive. We define H as the difference in utility of nondiseased patients who are not treated and nondiseased patients who are treated (U[−D, do not treat] − U[−D, treat], as shown in . Fig. 3.14). The utility of nondiseased patients who are not treated should be greater than that of nondiseased patients who are treated; therefore, H is positive. The equation for the treatment threshold probability fits with our intuition: if the benefit of treatment is small and the harm of treatment is large, the treatment threshold probability will be high. In contrast, if the benefit of treatment is large and the harm of treatment is small, the treatment threshold probability will be low. Once you know the pretest probability, you know what to do in the absence of further information about the patient. If the pretest probability is below the treatment threshold, you should not treat the patient. If the pretest probability is above the threshold, you should treat the patient. Thus, you have completed step 2. One of the guiding principles of medical decision making is this: do not order a test unless it could change your management of the patient. In our framework for decision making, this principle means that you should order a test only if the test result could cause the probability of disease to cross the treatment threshold or lead to another test that would do so. Thus, if the pretest probability is above the treatment threshold, a negative test result must lead to a post-test probability that is below the threshold. Conversely, if the pretest probability is below the threshold probability, a positive result must lead to a post-test probability that is above the threshold. In either case, the test result would alter your decision of whether to treat the patient. This analysis completes step 3. To decide whether a test could alter management, we simply use Bayes’ theorem. 
We calculate the post-test probability after a test result that would move the probability of disease toward the treatment threshold. If the pretest probability is above the treatment threshold, we calculate the probability of disease if the test result is negative. If the pretest probability is below the treatment threshold, we calculate the probability of disease if the test result is positive.
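The threshold logic described above can be made concrete in a few lines. This is a minimal sketch, assuming the standard treatment-threshold relation p* = H/(B + H) (the Pauker–Kassirer form); the function and variable names are illustrative, not from the chapter:

```python
def treatment_threshold(benefit, harm):
    """Treatment threshold probability, assuming p* = H / (B + H).

    benefit (B): utility gained by treating a diseased patient.
    harm (H): utility lost by treating a nondiseased patient.
    """
    return harm / (benefit + harm)

def action(pretest_prob, benefit, harm):
    """Decision in the absence of further information (step 2 in the text)."""
    p_star = treatment_threshold(benefit, harm)
    return "treat" if pretest_prob > p_star else "do not treat"

# Large benefit relative to harm gives a low threshold, and vice versa,
# matching the intuition stated in the text:
low_threshold = treatment_threshold(benefit=10.0, harm=0.5)   # about 0.048
high_threshold = treatment_threshold(benefit=0.5, harm=10.0)  # about 0.952
```

With these illustrative numbers, a pretest probability of 0.05 would exceed the low threshold (treat) but fall far below the high one (do not treat).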
►►Example 3.13
You are a pulmonary medicine specialist. You suspect that a patient of yours has a pulmonary embolus (blood clot lodged in the vessels of the lungs). One approach is to do a computed tomography angiography (CTA) scan, a test in which a computed tomography (CT) of the lung is done after a radiopaque dye is injected into a vein. The dye flows into the vessels of the lung. The CT scan can then assess whether the blood vessels are blocked. If the scan is negative, you do no further tests and do not treat the patient. ◄
To decide whether this strategy is correct, you take the following steps:
1. Determine the treatment threshold probability of pulmonary embolus.
2. Estimate the pretest probability of pulmonary embolus.
3. Decide whether a test result could affect your decision to treat for an embolus.

First, assume you decide that the treatment threshold should be 0.10 in this patient. What does it mean to have a treatment threshold probability equal to 0.10? If you could obtain no further information, you would treat for pulmonary embolus if the pretest probability was above 0.10 (i.e., if you believed that there was greater than a 1 in 10 chance that the patient had an embolus), and would withhold therapy if the pretest probability was below 0.10. A decision to treat when the pretest probability is at the treatment threshold means that you are willing to treat nine patients without pulmonary embolus to be sure of treating one patient who has pulmonary embolus. A relatively low treatment threshold is justifiable because treatment of a pulmonary embolus with blood-thinning medication substantially reduces the high mortality of pulmonary embolism, whereas there is only a relatively small danger (mortality of less than 1%) in treating someone who does not have pulmonary embolus. Because the benefit of treatment is high and the harm of treatment is low, the treatment threshold probability will be low, as discussed earlier. You have completed step 1.

You estimate the pretest probability of pulmonary embolus to be 0.05, which is equal to a pretest odds of 0.053. Because the pretest probability is lower than the treatment threshold, you should do nothing unless a positive CTA scan result could raise the probability of pulmonary embolus to above 0.10. You have completed step 2.

To decide whether a test result could affect your decision to treat, you must decide whether a positive CTA scan result would raise the probability of pulmonary embolus to more than 0.10, the treatment threshold. You review the literature and learn that the LR for a positive CTA scan is approximately 21 (Stein et al. 2006). A negative CTA scan result will move the probability of disease away from the treatment threshold and will be of no help in deciding what to do. A positive result will move the probability of disease toward the treatment threshold and could alter your management decision if the post-test probability were above the treatment threshold. You therefore use the odds-ratio form of Bayes' theorem to calculate the post-test probability of disease if the CTA scan result is positive.
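As a sketch (the function names are illustrative, not from the chapter), the odds–likelihood-ratio update for this example can be written in a few lines:

```python
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

# Numbers from the pulmonary embolus example:
pretest_prob = 0.05          # clinical estimate of pretest probability
lr_positive = 21             # LR for a positive CTA (Stein et al. 2006)
treatment_threshold = 0.10

# Odds-ratio form of Bayes' theorem: post-test odds = pretest odds x LR
posttest_odds = prob_to_odds(pretest_prob) * lr_positive
posttest_prob = odds_to_prob(posttest_odds)

# A positive CTA is worth pursuing only if it would cross the threshold:
test_could_change_management = posttest_prob > treatment_threshold
```

Running this reproduces the chapter's arithmetic: the post-test probability after a positive CTA (about 0.53) exceeds the 0.10 threshold, so the test could change management.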
post-test odds = pretest odds × LR = 0.053 × 21 = 1.11.

A post-test odds of 1.11 is equivalent to a probability of disease of 0.53. Because the post-test probability of pulmonary embolus is higher than the treatment threshold, a positive CTA scan result would change your management of the patient, and you should order the CTA scan. You have completed step 3.

This example is especially useful for two reasons: first, it demonstrates one method for making decisions, and second, it shows how the concepts that were introduced in this chapter all fit together in a clinical example of medical decision making.

3.7 Alternative Graphical Representations for Decision Models: Influence Diagrams and Belief Networks

Fig. 3.15 A decision tree (top) and an influence diagram (bottom) that represent the decisions to test for, and to treat, HIV infection. The structural asymmetry of the alternatives is explicit in the decision tree. The influence diagram highlights probabilistic relationships. HIV human immunodeficiency virus, HIV+ HIV infected, HIV− not infected with HIV, QALE quality-adjusted life expectancy, PCR polymerase chain reaction. Test results are shown in quotation marks ("HIV+"), whereas the true disease state is shown without quotation marks (HIV+). (Source: Owens et al. (1997). Reproduced with permission)

In Sects. 3.5 and 3.6, we used decision trees to represent decision problems. Although decision trees are the most common graphical representation for decision problems, influence diagrams are an important alternative representation for such problems (Nease and Owens 1997; Owens et al. 1997). As shown in Fig. 3.15, influence diagrams have certain features that are similar to decision trees, but they also have additional graphical elements. Influence diagrams represent decision nodes as squares and chance nodes as circles. In contrast to decision trees, however, the influence diagram also has arcs
between nodes and a diamond-shaped value node. An arc between two chance nodes indicates that a probabilistic relationship may exist between the chance nodes (Owens et al. 1997). A probabilistic relationship exists when the occurrence of one chance event affects the probability of the occurrence of another chance event. For example, in Fig. 3.15, the probability of a positive or negative PCR test result (PCR result) depends on whether a person has HIV infection (HIV status); thus, these nodes have a probabilistic relationship, as indicated by the arc. The arc points from the conditioning event to the conditioned event (PCR test result is conditioned on HIV status in Fig. 3.15). The absence of an arc between two chance nodes, however, always indicates that the nodes are independent or conditionally independent. Two events are conditionally independent, given a third event, if the occurrence of one of the events does not affect the probability of the other event conditioned on the occurrence of the third event.

Unlike a decision tree, in which the events usually are represented from left to right in the order in which the events are observed, influence diagrams use arcs to indicate the timing of events. An arc from a chance node to a decision node indicates that the chance event has been observed at the time the decision is made. Thus, the arc from PCR result to Treat? in Fig. 3.15 indicates that the decision maker knows the PCR test result (positive, negative, or not obtained) when he or she decides whether to treat. Arcs between decision nodes indicate the timing of decisions: the arc points from an initial decision to subsequent decisions. Thus, in Fig. 3.15, the decision maker must decide whether to obtain a PCR test before deciding whether to treat, as indicated by the arc from Obtain PCR? to Treat?

The probabilities and utilities that we need to determine the alternative with the highest expected value are contained in tables associated with chance nodes and the value node (Fig. 3.16). These tables contain the same information that we would use in a decision tree. With a decision tree, we can determine the expected value of each alternative by averaging out at chance nodes and folding back the tree (Sect. 3.5.3).
Probability of test results conditioned on disease status and decision to test:

                            "HIV+"   "HIV−"   "NA"
  Obtain PCR, HIV+           0.98     0.02    0.0
  Obtain PCR, HIV−           0.02     0.98    0.0
  Do not obtain PCR, HIV+    0.00     0.00    1.0
  Do not obtain PCR, HIV−    0.00     0.00    1.0

Prior probability of HIV:

  HIV+   0.08
  HIV−   0.92

Value table (QALE):

  HIV+, Tx+   10.50
  HIV+, Tx−   10.00
  HIV−, Tx+   75.46
  HIV−, Tx−   75.50

Fig. 3.16 The influence diagram from Fig. 3.15, with the probability and value tables associated with the nodes. The information in these tables is the same as that associated with the branches and endpoints of the decision tree in Fig. 3.15. HIV human immunodeficiency virus, HIV+ HIV infected, HIV− not infected with HIV, QALE quality-adjusted life expectancy, PCR polymerase chain reaction, NA not applicable, Tx+ treated, Tx− not treated. Test results are shown in quotation marks ("HIV+"), and the true disease state is shown without quotation marks (HIV+). (Source: Owens et al. (1997). Reproduced with permission)
D. K. Owens et al.
For influence diagrams, the calculation of expected value is more complex (Owens et al. 1997) and generally must be performed with computer software. With the appropriate software, we can use influence diagrams to perform the same analyses that we would perform with a decision tree. Diagrams that have only chance nodes are called belief networks; we use them to perform probabilistic inference.

Why use an influence diagram instead of a decision tree? Influence diagrams have both advantages and limitations relative to decision trees. Influence diagrams represent graphically the probabilistic relationships among variables (Owens et al. 1997). Such representation is advantageous for problems in which probabilistic conditioning is complex or in which communication of such conditioning is important (such as may occur in large models). In an influence diagram, probabilistic conditioning is indicated by the arcs, and thus the conditioning is apparent immediately by inspection. In a decision tree, probabilistic conditioning is revealed by the probabilities in the branches of the tree. To determine whether events are conditionally independent in a decision tree requires that the analyst compare probabilities of events between branches of the tree. Influence diagrams also are particularly useful for discussion with content experts who can help to structure a problem but who are not familiar with decision analysis. In contrast, problems that have decision alternatives that are structurally different may be easier for people to understand when represented with a decision tree, because the tree shows the structural differences explicitly, whereas the influence diagram does not.

The choice of whether to use a decision tree or an influence diagram depends on the problem being analyzed, the experience of the analyst, the availability of software, and the purpose of the analysis. For selected problems, influence diagrams provide a powerful graphical alternative to decision trees.
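To make the averaging-out-and-folding-back idea concrete, here is a hedged Python sketch that evaluates three strategies for the HIV example, using the probabilities and QALE values from the tables in Fig. 3.16. It is a simplification (the harm, cost, and inconvenience of testing are ignored), and the names are illustrative, not from the chapter:

```python
# Numbers taken from the tables in Fig. 3.16:
p_hiv = 0.08                                  # prior probability of HIV
p_pos_given = {"HIV+": 0.98, "HIV-": 0.02}    # P("HIV+" result | true status)
qale = {("HIV+", "treat"): 10.50, ("HIV+", "no treat"): 10.00,
        ("HIV-", "treat"): 75.46, ("HIV-", "no treat"): 75.50}

def expected_value(policy):
    """Average out over true HIV status and PCR result.

    policy maps a test result ("pos"/"neg") to an action; this is the same
    computation as folding back the decision tree in Fig. 3.15.
    """
    ev = 0.0
    for status, p_status in (("HIV+", p_hiv), ("HIV-", 1 - p_hiv)):
        p_pos = p_pos_given[status]
        for result, p_result in (("pos", p_pos), ("neg", 1 - p_pos)):
            ev += p_status * p_result * qale[(status, policy(result))]
    return ev

ev_test = expected_value(lambda r: "treat" if r == "pos" else "no treat")
ev_treat_all = expected_value(lambda r: "treat")
ev_no_treat = expected_value(lambda r: "no treat")
```

Under these assumptions, testing and treating only the positives has the highest expected QALE (about 70.30, versus about 70.26 for treating everyone or no one), so the PCR strategy would be preferred.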
3.8 Other Modeling Approaches

We have described decision trees, Markov models, and influence diagrams. An analyst also can choose several other approaches to modeling. The choice of modeling approach depends on the problem and the objectives of the analysis. Although how to choose and design such models is beyond our scope, we note other types of models that analysts commonly use for medical decision making.

Microsimulation models are individual-level health state transition models, similar to Markov models, that provide a means to model very complex events flexibly over time. They are useful when the clinical history of a problem is complex, such as might occur with cancer, heart disease, and other chronic diseases. They are also useful for modeling individual heterogeneity, which may depend on combinations of individual characteristics (e.g., heterogeneity of response to treatment based on medical conditions or genetics).

Dynamic transmission models are particularly well suited for assessing the outcomes of infectious diseases. These models divide a population into compartments (for example, uninfected, infected, recovered, dead), and transitions between compartments are governed by differential or difference equations. The rate of transition between compartments depends in part on the number of individuals in the compartment, an important feature for infectious diseases in which transmission may depend on the number of infected or susceptible individuals.

Discrete event simulation models also are often used to model interactions between people. These models are composed of entities (a patient) that have attributes (a clinical history) and that experience events (a heart attack). An entity can interact with other entities and use resources. Discrete event simulation models are also used when considering scarce resources, such as queues for a diagnostic test or an operating room slot.

For more information on these types of models, we suggest a recent series of papers on best modeling practices; the paper by Caro and colleagues noted in the suggested readings at the end of the chapter is an overview of this series.

3.9 The Role of Probability and Decision Analysis in Medicine

You may be wondering how probability and decision analysis might be integrated smoothly into medical practice. An understanding of
probability and measures of test performance will prevent any number of misadventures. In Example 3.1, we discussed a hypothetical test that, on casual inspection, appeared to be an accurate way to screen blood donors for previous exposure to the AIDS virus. Our quantitative analysis, however, revealed that the hypothetical test results were misleading more often than they were helpful because of the low prevalence of HIV in the clinically relevant population. Fortunately, in actual practice, much more accurate tests are used to screen for HIV.

The need for knowledgeable interpretation of test results is widespread. The federal government screens civil employees in "sensitive" positions for drug use, as do many companies. If the drug test used by an employer had a sensitivity and specificity of 0.95, and if 10% of the employees used drugs, one-third of the positive tests would be FPs. An understanding of these issues should be of great interest to the public, and health professionals should be prepared to answer the questions of their patients.

Although we should try to interpret every kind of test result accurately, decision analysis has a more selective role in medicine. Not all clinical decisions require decision analysis. Some decisions depend on physiologic principles or on deductive reasoning. Other decisions involve little uncertainty. Nonetheless, many decisions must be based on imperfect data, and they will have outcomes that cannot be known with certainty at the time that the decision is made. Decision analysis provides a technique for managing these situations. For many problems, simply drawing a tree that denotes the possible outcomes explicitly will clarify the question sufficiently to allow you to make a decision. When time is limited, even a "quick and dirty" analysis may be helpful.
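The drug-screening figure quoted above (one-third of positive screens being false positives) can be checked directly with Bayes' theorem; a quick sketch, with the hypothetical numbers from the text:

```python
# Hypothetical workplace drug screen described in the text:
sens = 0.95        # sensitivity (TPR)
spec = 0.95        # specificity (TNR)
prevalence = 0.10  # fraction of employees who use drugs

# Total probability of a positive screen: true positives plus false positives.
p_pos = prevalence * sens + (1 - prevalence) * (1 - spec)

# Positive predictive value and the false-positive fraction among positives.
ppv = prevalence * sens / p_pos
fp_fraction = 1 - ppv
```

With these numbers, about 32% of positive screens are false positives, consistent with the "one-third" quoted in the text.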
By using expert clinicians’ subjective probability estimates and asking what the patient’s utilities might be, you can perform an analysis quickly and learn which probabilities and utilities are the important determinants of the decision. Health care professionals sometimes express reservations about decision analysis because the analysis may depend on probabilities that must be estimated, such as the pretest
probability. A thoughtful decision maker will be concerned that the estimate may be in error, particularly because the information needed to make the estimate often is difficult to obtain from the medical literature. We argue, however, that uncertainty in the clinical data is a problem for any decision-making method and that the effect of this uncertainty is explicit with decision analysis. The method for evaluating uncertainty is sensitivity analysis: we can examine any variable to see whether its value is critical to the final recommended decision. Thus, we can determine, for example, whether a change in pretest probability from 0.6 to 0.8 makes a difference in the final decision. In so doing, we often discover that it is necessary to estimate only a range of probabilities for a particular variable rather than a precise value. Thus, with a sensitivity analysis, we can decide whether uncertainty about a particular variable should concern us.

The growing complexity of medical decisions, coupled with the need to control costs, has led to major programs to develop clinical practice guidelines. Decision models have many advantages as aids to guideline development (Eddy 1992; Habbema et al. 2014; Owens et al. 2016): they make explicit the alternative interventions, associated uncertainties, and utilities of potential outcomes. Decision models can help guideline developers to structure guideline-development problems (Owens and Nease 1993), to incorporate patients' preferences (Nease and Owens 1994; Owens 1998), and to tailor guidelines for specific clinical populations (Owens and Nease 1997). The U.S. Preventive Services Task Force, which develops national prevention guidelines, has used decision models in the development of guidelines on breast, lung, cervical, and colorectal cancer screening.
In addition, Web-based interfaces for decision models can provide distributed decision support for guideline developers and users by making the decision model available for analysis to anyone who has access to the Web (Sanders et al. 1999).

We have not emphasized computers in this chapter, although they can simplify many aspects of decision analysis (see Chap. 24). MEDLINE and other bibliographic retrieval systems (see Chap. 23) make it easier to obtain published estimates of disease prevalence and test performance. Computer programs for performing statistical analyses can be used on data collected by hospital information systems. Decision analysis software, available for personal computers, can help clinicians to structure decision trees, to calculate expected values, and to perform sensitivity analyses. Researchers continue to explore methods for computer-based automated development of practice guidelines from decision models and use of computer-based systems to implement guidelines (Musen et al. 1996). With the growing maturity of this field, there are now companies that offer formal analytical tools to assist with clinical outcome assessment and interpretation of population datasets.

Medical decision making often involves uncertainty for the clinician and risk for the patient. Most health care professionals would welcome tools that help them make decisions when they are confronted with complex clinical problems with uncertain outcomes. There are important medical problems for which decision analysis offers such aid.

3.10 Appendix A: Derivation of Bayes' Theorem
Bayes’ theorem is derived as follows. We denote the conditional probability of disease, D, given a test result, R, p[D|R]. The prior (pretest) probability of D is p[D]. The definition of conditional probability is: p [ D|R ] =
p [ R,D ]
(3.1) p [R ] The probability of a test result (p[R]) is the sum of its probability in diseased patients and its probability in nondiseased patients: p [ R ] = p [ R,D ] + p [ R, - D ]. Substituting into Eq. 3.1, we obtain: p [ D|R ] =
p [ R,D ]
p [ R,D ] + p [ R, - D ]
(3.2)
Again, from the definition of conditional probability, p [ R|D ] =
p [ R,D ] p [ D]
and p [ R| - D ] =
p [ R, - D ] p [ -D]
These expressions can be rearranged: p [ R,D ] = p [ D ] ´ p [ R| D ] ,
(3.3)
p [ R, - D ] = p [ -D ] ´ p [ R | - D ].
(3.4)
Substituting Eqs. 3.3 and 3.4 into Eq. 3.2, we obtain Bayes’ theorem: p [ D|R ] =
p [ D ] ´ p [ R|D ]
p [ D ] ´ p [ R|D ] + p [ -D ] ´ p [ R| - D ]
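As a quick numeric sanity check of the derivation (the prevalence, sensitivity, and false-positive rate below are illustrative, not from the chapter), computing p[D|R] from the joint probabilities (Eqs. 3.1–3.2) and from the final Bayes form gives the same value:

```python
# Illustrative numbers only:
p_D = 0.02              # pretest probability p[D]
p_R_given_D = 0.90      # p[R|D]
p_R_given_notD = 0.20   # p[R|-D]

# Joint probabilities, via Eqs. 3.3 and 3.4:
p_R_and_D = p_D * p_R_given_D
p_R_and_notD = (1 - p_D) * p_R_given_notD

# Post-test probability from the definition of conditional probability (Eq. 3.2):
posttest_from_joints = p_R_and_D / (p_R_and_D + p_R_and_notD)

# Post-test probability from the final form of Bayes' theorem:
posttest_bayes = (p_D * p_R_given_D) / (
    p_D * p_R_given_D + (1 - p_D) * p_R_given_notD)

# The two routes agree, as the derivation requires.
assert abs(posttest_from_joints - posttest_bayes) < 1e-12
```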
Suggested Readings

Briggs, A., Weinstein, M., Fenwick, E., Karnon, J., Sculpher, M., & Paltiel, A. (2012). Model parameter estimation and uncertainty analysis: A report of the ISPOR-SMDM modeling good research practices task force-6. Medical Decision Making, 32(5), 722–732. This article describes best practices for estimating model parameters and for performing sensitivity analyses, including probabilistic sensitivity analysis.

Caro, J., Briggs, A., Siebert, U., & Kuntz, K. (2012). Modeling good research practices – overview: A report of the ISPOR-SMDM modeling good research practices task force-1. Value in Health, 15, 796–803. This paper is an introduction to a series of papers that describe best modeling practices.

Hunink, M., Glasziou, P., Siegel, J., Weeks, J., Pliskin, J., Einstein, A., & Weinstein, M. (2001). Decision making in health and medicine. Cambridge: Cambridge University Press. This textbook addresses in detail most of the topics introduced in this chapter.

Nease, R. F., Jr., & Owens, D. K. (1997b). Use of influence diagrams to structure medical decisions. Medical Decision Making, 17(3), 263–275. This article provides a comprehensive introduction to the use of influence diagrams.

Neumann, P. J., Sanders, G. D., Russell, L. B., Siegel, J. E., & Ganiats, T. G. (Eds.). (2017b). Cost-effectiveness in health and medicine (2nd ed.). New York: Oxford University Press. This book provides authoritative guidelines for the conduct of cost-effectiveness analyses. Chapter 7 discusses approaches for valuing health outcomes.

Owens, D. K., Shachter, R. D., & Nease, R. F., Jr. (1997b). Representation and analysis of medical decision problems with influence diagrams. Medical Decision Making, 17(3), 241–262. This article provides a comprehensive introduction to the use of influence diagrams.

Raiffa, H. (1970). Decision analysis: Introductory lectures on choices under uncertainty. Reading: Addison-Wesley. This now classic book provides an advanced, nonmedical introduction to decision analysis, utility theory, and decision trees.

Sox, H. C. (1986). Probability theory in the use of diagnostic tests. Annals of Internal Medicine, 104(1), 60–66. This article is written for clinicians; it contains a summary of the concepts of probability and test interpretation.

Sox, H. C., Higgins, M. C., & Owens, D. K. (2013). Medical decision making. Chichester: Wiley-Blackwell. This introductory textbook covers the subject matter of this chapter in greater detail, as well as discussing many other topics.

Tversky, A., & Kahneman, D. (1974b). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124. This now classic article provides a clear and interesting discussion of the experimental evidence for the use and misuse of heuristics in situations of uncertainty.
Questions for Discussion

1. Calculate the following probabilities for a patient about to undergo CABG surgery (see Example 3.2):
(a) The only possible, mutually exclusive outcomes of surgery are death, relief of symptoms (angina and dyspnea), and continuation of symptoms. The probability of death is 0.02, and the probability of relief of symptoms is 0.80. What is the probability that the patient will continue to have symptoms?
(b) Two known complications of heart surgery are stroke and heart attack, with probabilities of 0.02 and 0.05, respectively. The patient asks what chance he or she has of having both complications. Assume that the complications are conditionally independent, and calculate your answer.
(c) The patient wants to know the probability that he or she will have a stroke given that he or she has a heart attack as a complication of the surgery. Assume that 1 in 500 patients has both complications, that the probability of heart attack is 0.05, and that the events are independent. Calculate your answer.

2. The results of a hypothetical study to measure test performance of a diagnostic test for HIV are shown in the 2 × 2 table in Table 3.9.
(a) Calculate the sensitivity, specificity, disease prevalence, PV+, and PV−.
(b) Use the TPR and TNR calculated in part (a) to fill in the 2 × 2 table in Table 3.10. Calculate the disease prevalence, PV+, and PV−.
Table 3.9 A 2 × 2 contingency table for the hypothetical study in problem 2

  PCR test result   Gold standard test positive   Gold standard test negative   Total
  Positive PCR      48                            8                             56
  Negative PCR      2                             47                            49
  Total             50                            55                            105

PCR polymerase chain reaction
Table 3.10 A 2 × 2 contingency table to complete for problem 2b

  PCR test result   Gold standard test positive   Gold standard test negative   Total
  Positive PCR      x                             x                             x
  Negative PCR      100                           99,900                        x
  Total             x                             x                             x

PCR polymerase chain reaction; x: quantities that the question asks students to calculate

3. You are asked to interpret the results from a diagnostic test for HIV in an asymptomatic patient whose test was positive when the patient volunteered to donate blood. After taking the patient's history, you learn that the patient has a history of intravenous-drug use. You know that the overall prevalence of HIV infection in your community is 1 in 500 and that the prevalence in people who have injected drugs is 20 times as high as in the community at large.
(a) Estimate the pretest probability that this patient is infected with HIV.
(b) The patient tells you that two people with whom the patient shared needles subsequently died of AIDS. Which heuristic will be useful in making a subjective adjustment to the pretest probability in part (a)?
(c) Use the sensitivity and specificity that you worked out in 2(a) to calculate the post-test probability of the patient having HIV after a positive and negative test. Assume that the pretest probability is 0.10.
(d) If you wanted to increase the post-test probability of disease given a positive test result, would you change the TPR or TNR of the test?

4. You have a patient with cancer who has a choice between surgery or chemotherapy. If the patient chooses surgery, he or she has a 2% chance of dying from the operation (life expectancy = 0), a 50% chance of being cured (life expectancy = 15 years), and a 48% chance of not being cured (life expectancy = 1 year). If the patient chooses chemotherapy, he or she has a 5% chance of death (life expectancy = 0), a 65% chance of cure (life expectancy = 15 years), and a 30% chance that the cancer will be slowed but not cured (life expectancy = 2 years). Create a decision tree. Calculate the expected value of each option in terms of life expectancy.

5. You are concerned that a patient with a sore throat has a bacterial infection that would require antibiotic therapy (as opposed to a viral infection, for which no treatment is available). Your treatment threshold is 0.4, and based on the examination you estimate the probability of bacterial infection as 0.8. A test is available (TPR = 0.75, TNR = 0.85) that indicates the presence or absence of bacterial infection. Should you perform the test? Explain your reasoning. How would your analysis change if the test were extremely costly or involved a significant risk to the patient?

6. What are the three kinds of bias that can influence measurement of test performance? Explain what each one is, and state how you would adjust the post-test probability to compensate for each.

7. How could a computer system ease the task of performing a complex decision analysis?

8. When you search the medical literature to find probabilities for patients similar to one you are treating, what is the most important question to consider? How should you adjust probabilities in light of the answer to this question?
9. Why do you think clinicians sometimes order tests even if the results will not affect their management of the patient? Do you think the reasons that you identify are valid? Are they valid in only certain situations? Explain your answers. See the January 1998 issue of Medical Decision Making for articles that discuss this question.

10. Explain the differences among three approaches to assessing patients' preferences for health states: the standard gamble, the time trade-off, and the visual analog scale.

Disclaimer: The views presented are solely the responsibility of the authors and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute (PCORI), its Board of Governors, or its Methodology Committee.
References

Beck, J. R., & Pauker, S. G. (1983). The Markov process in medical prognosis. Medical Decision Making, 3(4), 419–458.

Eddy, D. M. (1992). A manual for assessing health practices and designing practice policies: The explicit approach. Philadelphia: American College of Physicians.

Habbema, J. D. F., Wilt, T. J., Etzioni, R., Nelson, H. D., Schechter, C. B., Lawrence, W. F., Melnikow, J., Kuntz, K. M., Owens, D. K., & Feuer, E. J. (2014). Models in the development of clinical practice guidelines. Annals of Internal Medicine, 161, 812–818.

Hellmich, M., Abrams, K. R., & Sutton, A. J. (1999). Bayesian approaches to meta-analysis of ROC curves. Medical Decision Making, 19, 252–264.

Joyce, V. R., Barnett, P. G., Bayoumi, A. M., Griffin, S. C., Kyriakides, T. C., Yu, W., Sundaram, V., Holodniy, M., Brown, S. T., Cameron, W., Youle, M., Sculpher, M., Anis, A. H., & Owens, D. K. (2009). Health-related quality of life in a randomized trial of antiretroviral therapy for advanced HIV disease. Journal of the Acquired Immunodeficiency Syndrome, 50, 27–36.

Joyce, V. R., Barnett, P. G., Chow, A., Bayoumi, A. M., Griffin, S. C., Sun, H., Holodniy, M., Brown, S. T., Cameron, D. W., Youle, M., Sculpher, M., Anis, A. H., & Owens, D. K. (2012). Effect of treatment interruption and intensification of antiretroviral therapy on health-related quality of life in patients with advanced HIV: A randomized controlled trial. Medical Decision Making, 32, 70–82.

Leeflang, M. (2014). Systematic reviews and meta-analysis of diagnostic test accuracy. Clinical Microbiology and Infection, 20, 105–113.

Leeflang, M. M. G., Deeks, J. J., Gatsonis, C., Bossuyt, P. M., & on behalf of the Cochrane diagnostic test accuracy working group. (2008). Systematic reviews of diagnostic tests. Annals of Internal Medicine, 149, 889–897.

Lenert, L. A., Michelson, D., Flowers, C., & Bergen, M. R. (1995). IMPACT: An object-oriented graphical environment for construction of multimedia patient interviewing software (pp. 319–323). Washington, D.C.: Proceedings of the Annual Symposium on Computer Applications in Medical Care.

Lin, J. K., Lerman, B. J., Barnes, J. I., Bourisquot, B. C., Tan, Y. J., Robinson, A. Q. L., Davis, K. L., Owens, D. K., & Goldhaber-Fiebert, J. D. (2018). Cost effectiveness of chimeric antigen receptor T-cell therapy in relapsed or refractory pediatric B-cell acute lymphoblastic leukemia. Journal of Clinical Oncology, 36, 3192–3202.

Meigs, J., Barry, M., Oesterling, J., & Jacobsen, S. (1996). Interpreting results of prostate-specific antigen testing for early detection of prostate cancer. Journal of General Internal Medicine, 11(9), 505–512.

Moses, L. E., Littenberg, B., & Shapiro, D. (1993). Combining independent studies of a diagnostic test into a summary ROC curve: Data-analytic approaches and some additional considerations. Statistics in Medicine, 12(4), 1293–1316.

Musen, M. A., Tu, S. W., Das, A. K., & Shahar, Y. (1996). EON: A component-based approach to automation of protocol-directed therapy. Journal of the American Medical Informatics Association, 3(6), 367–388.

Nease, R. F., Jr., & Owens, D. K. (1994). A method for estimating the cost-effectiveness of incorporating patient preferences into practice guidelines. Medical Decision Making, 14(4), 382–392.

Nease, R. F., Jr., & Owens, D. K. (1997a). Use of influence diagrams to structure medical decisions. Medical Decision Making, 17(3), 263–275.

Nease, R. F., Jr., Kneeland, T., O'Connor, G. T., Sumner, W., Lumpkins, C., Shaw, L., Pryor, D., & Sox, H. C. (1995). Variation in patient utilities for the outcomes of the management of chronic stable angina: Implications for clinical practice guidelines. Journal of the American Medical Association, 273(15), 1185–1190.

Neumann, P. J., Sanders, G. D., Russell, L. B., Siegel, J. E., & Ganiats, T. G. (Eds.). (2017a). Cost-effectiveness in health and medicine (2nd ed.). New York: Oxford University Press.

Owens, D. K. (1998). Patient preferences and the development of practice guidelines. Spine, 23(9), 1073–1079.

Owens, D. K., & Nease, R. F., Jr. (1993). Development of outcome-based practice guidelines: A method for structuring problems and synthesizing evidence. The Joint Commission Journal on Quality Improvement, 19(7), 248–263.
3
120
3
D. K. Owens et al.
Owens, D. K., & Nease, R. F., Jr. (1997). A normative analytic framework for development of practice guidelines for specific clinical populations. Medical Decision Making, 17(4), 409–426. Owens, D., Harris, R., Scott, P., & Nease, R. F., Jr. (1995). Screening surgeons for HIV infection: A cost-effectiveness analysis. Annals of Internal Medicine, 122(9), 641–652. Owens, D. K., Holodniy, M., Garber, A. M., Scott, J., Sonnad, S., Moses, L., Kinosian, B., & Schwartz, J. S. (1996a). The polymerase chain reaction for the diagnosis of HIV infection in adults: A meta-analysis with recommendations for clinical practice and study design. Annals of Internal Medicine, 124(9), 803–815. Owens, D. K., Holodniy, M., McDonald, T. W., Scott, J., & Sonnad, S. (1996b). A meta-analytic evaluation of the polymerase chain reaction (PCR) for diagnosis of human immunodeficiency virus (HIV) infection in infants. Journal of the American Medical Association, 275(17), 1342–1348. Owens, D. K., Shachter, R. D., & Nease, R. F., Jr. (1997a). Representation and analysis of medical decision problems with influence diagrams. Medical Decision Making, 17(3), 241–262. Owens, D. K., Whitlock, E. P., Henderson, J., Pignone, M. P., Krist, A. H., Bibbins-Domingo, K., Curry, S. J., Davidson, K. W., Ebell, M., Gilman, M. W., Grossman, D. C., Kemper, A. R., Kurth, A. E., Maciosek, M., Siu, A. L., LeFevre, M. L., & for the U.S. Preventive Services Task Force. (2016). Use of decision models in the development of evidence- based clinical preventive services recommendations: Methods of the U.S. Preventive Services Task Force. Annals of Internal Medicine, 165, 501–508. Palda, V. A., & Detsky, A. S. (1997). Perioperative assessment and management of risk from coronary artery disease. Annals of Internal Medicine, 127(4), 313–328. Pauker, S. G., & Kassirer, J. P. (1980). The threshold approach to clinical decision making. The New England Journal of Medicine, 34(5 Pt 2), 1189–1208. Peabody, G. (1922). 
The physician and the laboratory. Boston Medical Surgery Journal, 187, 324. Peterson, W., & Birdsall, T. (1953). The theory of signal detectability. (Technical Report No. 13.): Electronic Defense Group, University of Michigan, Ann Arbor. Ransohoff, D. F., & Feinstein, A. R. (1978). Problems of spectrum and bias in evaluating the efficacy of diagnostic tests. The New England Journal of Medicine, 299(17), 926–930.
Salpeter, S. R., Sanders, G. D., Salpeter, E. E., & Owens, D. K. (1997). Monitored isoniazid prophylaxis for low-risk tuberculin reactors older than 35 years of age: A risk-benefit and cost-effectiveness analysis. Annals of Internal Medicine, 127(12), 1051–1061. Sanders, G. D., Hagerty, C. G., Sonnenberg, F. A., Hlatky, M. A., & Owens, D. K. (1999). Distributed dynamic decision support using a web-based interface for prevention of sudden cardiac death. Medical Decision Making, 19(2), 157–166. Sanders, G. D., Hlatky, M. A., & Owens, D. K. (2005). Cost effectiveness of the implantable cardioverter defibrillator (ICD) in primary prevention of sudden death. The New England Journal of Medicine, 353, 1471–1478. Siebert, U., Alagoz, O., Bayoumi, A. M., Jahn, B., Owens, D. K., Cohen, D., et al. (2012). State-transition modeling: A report of the ISPOR-SMDM modeling good research practices task force-3. Medical Decision Making, 32, 690–700. Smith, L. (1985). Medicine as an art. In J. Wyngaarden & L. Smith (Eds.), Cecil textbook of medicine. Philadelphia: W. B. Saunders. Sonnenberg, F. A., & Beck, J. R. (1993). Markov models in medical decision making: A practical guide. Medical Decision Making, 13(4), 322–338. Sox, H. C. (1987). Probability theory in the use of diagnostic tests: Application to critical study of the literature. In H. C. Sox (Ed.), Common diagnostic tests: Use and interpretation (pp. 1–17). Philadelphia: American College of Physicians. Sox, H. C., Blatt, M. A., Higgins, M. C., & Marton, K. I. (1988). Medical decision making. Boston: Butterworth Publisher. Stein, P. D., Fowler, S. E., Goodman, L. R., Gottschalk, A., Hales, C. A., et al. (2006). Multidetector computed tomography for acute pulmonary embolism. The New England Journal of Medicine, 354, 2317–2327. Sumner, W., Nease, R. F., Jr., & Littenberg, B. (1991). U-titer: A utility assessment tool. In Proceedings of the 15th annual symposium on computer applications in medical care (pp. 701–705). 
Washington, DC. Swets, J. A. (1973). The relative operating characteristic in psychology. Science, 182, 990. Torrance, G. W., & Feeny, D. (1989). Utilities and quality- adjusted life years. International Journal of Technology Assessment in Health Care, 5(4), 559–575. Tversky, A., & Kahneman, D. (1974a). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
121
4 Cognitive Informatics

Vimla L. Patel and David R. Kaufman

Contents
4.1 Introduction
4.1.1 Introducing Cognitive Science
4.1.2 Cognitive Science and Biomedical Informatics
4.2 Cognitive Science: The Emergence of an Explanatory Framework
4.3 Human Information Processing
4.3.1 Cognitive Architectures and Human Memory Systems
4.3.2 The Organization of Knowledge
4.4 Medical Cognition
4.4.1 Expertise in Medicine
4.5 Human Factors Research and Patient Safety
4.5.1 Patient Safety
4.5.2 Unintended Consequences
4.5.3 Distributed Cognition and Electronic Health Records
4.6 Conclusion
References
© Springer Nature Switzerland AG 2021 E. H. Shortliffe, J. J. Cimino (eds.), Biomedical Informatics, https://doi.org/10.1007/978-3-030-58721-5_4
Learning Objectives
After reading this chapter, you should know the answers to these questions:
- How can cognitive science theory meaningfully inform and shape the design, development, and assessment of healthcare information systems?
- How is cognitive science different from behavioral science?
- What are some of the ways in which we can characterize the structure of knowledge?
- What are some of the dimensions of difference between experts and novices?
- Why is it important to consider cognition and human factors in dealing with issues of patient safety?
- How does distributed cognition differ from other theories of human cognition?
4.1 Introduction

Enormous advances in health information technologies and, more generally, in computing over the past several decades have begun to permeate diverse facets of clinical practice. The rapid pace of technological developments such as the Internet, wireless technologies, and mobile devices in the last decade affords significant opportunities for supporting, enhancing, and extending user experiences, interactions, and communications (Rogers 2004). These advances, coupled with a growing computer literacy among healthcare professionals, afford the potential for great improvements in healthcare. Yet many observers note that the healthcare system is slow to understand information technology and effectively incorporate it into the work environment (Shortliffe and Blois 2001; Karsh et al. 2010; Harrington 2015). Innovative technologies often produce profound cultural, social, and cognitive changes. These transformations necessitate adaptation at many different levels of aggregation, from the individual to the larger institution, often causing disruptions of workflow and user dissatisfaction (Bloomrosen et al. 2011).

As in other complex domains, biomedical information systems embody design ideals that often do not readily yield practical solutions in implementation. As computer-based systems infiltrate clinical practice and settings, the consequences can often be felt through all levels of the organization. This impact can have deleterious effects, resulting in systemic inefficiencies and suboptimal practice, which can lead to frustrated healthcare practitioners, unnecessary delays in healthcare delivery, and even adverse events (Lin et al. 1998; Weinger and Slagle 2001). How can we manage change? How can we introduce systems that are designed to be more intuitive and are implemented efficiently, in concert with everyday practice, without compromising safety?

4.1.1 Introducing Cognitive Science
Cognitive science is a multidisciplinary domain of inquiry devoted to the study of cognition and its role in intelligent agency. The primary disciplines include cognitive psychology, artificial intelligence, neuroscience, linguistics, anthropology, and philosophy. From the perspective of informatics, cognitive science can provide a framework for the analysis and modeling of complex human performance in technology-mediated settings. Cognitive science incorporates basic science research focusing on fundamental aspects of cognition (e.g., attention, memory, reasoning, early language acquisition) as well as applied research. Applied cognitive research is focally concerned with the development and evaluation of useful and usable cognitive artifacts. Cognitive artifacts are human-made materials, devices, and systems that extend people's abilities in perceiving objects, encoding and retrieving information from memory, and problem solving (Gillan and Schvaneveldt 1999). In this regard, applied cognitive research is closely aligned with the disciplines of human-computer interaction (HCI) and human factors. In everyday life, we interact with cognitive artifacts to receive and manipulate information, to alter our thinking processes, and to offload effort-intensive cognitive activity to the external world, thereby reducing mental workload.

The past three decades have produced a cumulative body of experiential and practical knowledge about system design and implementation that guides future initiatives. This practical knowledge embodies the need for sensible and intuitive user interfaces, an understanding of workflow, and the ways in which systems impact individual and team performance. However, experiential knowledge in the form of anecdotes and case studies is inadequate for producing robust generalizations or sound design and implementation principles. There is a need for a theoretical foundation. Biomedical informatics is more than the thin intersection of biomedicine and computing (Patel and Kaufman 1998). There is a growing role for the social sciences, including the cognitive and behavioral sciences, in biomedical informatics, particularly as they pertain to human-computer interaction and other areas such as information retrieval and decision support (Patel et al. 2017).

In this chapter, we focus on the foundational role of cognitive science in biomedical informatics research and practice. Theories and methods from the cognitive sciences can illuminate different facets of design and implementation of information and knowledge-based systems. They can also play a larger role in characterizing and enhancing human performance on a wide range of tasks involving clinicians, patients, and healthy consumers of biomedical information. These tasks may include developing training programs and devising measures to reduce errors or increase efficiency. In this respect, cognitive science represents one of the basic component sciences of biomedical informatics (Shortliffe and Blois 2001; Patel and Kaufman 1998).

4.1.2 Cognitive Science and Biomedical Informatics
How can cognitive science theory meaningfully inform and shape the design, development, and assessment of healthcare information systems? Cognitive science provides insight into principles of system usability and learnability, the mediating role of technology in clinical performance, the process of medical judgment and decision making, the training of healthcare professionals, patients, and health consumers, and the design of a safer workplace. The central argument is that it can inform our understanding of human performance in technology-rich healthcare environments (Carayon 2012; Patel et al. 2013b). Precisely how will cognitive science theory and methods make a significant contribution towards these important objectives? The translation of research findings from one discipline into practical concerns that can be applied to another is rarely a straightforward process (Rogers 2004). Furthermore, even when scientific knowledge is highly relevant in principle, making that knowledge actionable in a design context can be a significant challenge.

In this chapter, we discuss (a) basic cognitive science research and theories that provide a foundation for understanding the underlying mechanisms guiding human performance (e.g., findings pertaining to the structure of human memory), and (b) research in the areas of medical errors and patient safety as they interact with health information technology. As illustrated in Table 4.1, there are correspondences between basic cognitive science research, medical cognition, and cognitive research in biomedical informatics along several dimensions. For example, theories of human memory and knowledge organization lend themselves to characterizations of expert clinical knowledge that can then be contrasted with the representations of such knowledge in clinical systems. Similarly, research in text comprehension has provided a theoretical framework for understanding biomedical texts. Additionally, theories of problem solving can be used to understand the processes and knowledge associated with diagnostic and therapeutic reasoning.
This understanding provides a basis for developing medical artificial intelligence and decision support systems.

Table 4.1 Correspondences between cognitive science, medical cognition, and applied cognitive research in medical informatics

Cognitive science | Medical cognition | Biomedical informatics
Knowledge organization and human memory | Organization of clinical and basic science knowledge | Development and use of medical knowledge bases
Problem solving, heuristics/reasoning strategies | Medical problem solving and decision making | Medical artificial intelligence/decision support systems/medical errors
Perception/attention | Radiologic and dermatologic diagnosis | Medical imaging systems
Text comprehension | Understanding medical texts; knowledge representation | Information retrieval/digital libraries/health literacy
Conversational analysis | Medical discourse | Medical natural language processing
Distributed cognition | Collaborative practice and research in health care | Computer-based provider order entry systems
Coordination of theory and evidence | Diagnostic and therapeutic reasoning | Evidence-based clinical guidelines
Diagrammatic reasoning | Perceptual processing of patient data displays | Biomedical information visualization

Cognitive research, theories, and methods can contribute to applications in informatics in a number of ways, including: (1) seed basic research findings that can illuminate dimensions of design (e.g., attention and memory, aspects of the visual system), (2) provide an explanatory vocabulary for characterizing how individuals process and communicate health information (e.g., various studies of medical cognition pertaining to doctor-patient interaction), (3) present an analytic framework for identifying problems and modeling certain kinds of user interactions, (4) characterize the relationship between health information technology, human factors, and patient safety, (5) provide rich descriptive accounts of clinicians employing technologies in the context of work, and (6) furnish a generative approach for novel designs and productive applied research programs in informatics (e.g., intervention strategies for supporting low-literacy populations in health information seeking).

Based on a review of articles published in the Journal of Biomedical Informatics between January 2001 and March 2014, Patel and Kannampallil (2015) identified 57 articles that focused on topics related to cognitive informatics. The topics ranged from characterizing the limits of clinician problem-solving and reasoning behavior, to describing
coordination and communication patterns of distributed clinical teams, to developing sustainable and cognitively plausible interventions for supporting clinician activities.

The social sciences are constituted by multiple frameworks and approaches. Behaviorism constitutes a framework for analyzing and modifying behavior. It is an approach that has had an enormous influence on the social sciences. Cognitive science partially emerged as a response to the limitations of behaviorism. The next section of the chapter contains a brief history of the cognitive and behavioral sciences that emphasizes the points of difference between the two approaches. It also serves to introduce basic concepts in the study of cognition.
4.2 Cognitive Science: The Emergence of an Explanatory Framework
In this section, we sketch a brief history of the emergence of cognitive science, with a view to differentiating it from competing theoretical frameworks in the social sciences. The section also
serves to introduce core concepts that constitute an explanatory framework for cognitive science.

Behaviorism is the conceptual framework underlying a particular science of behavior (Zuriff 1985). It is not to be confused with the term behavioral science, which names a large body of work across disciplines rather than a specific theoretical framework. Behaviorism dominated experimental and applied psychology, as well as the social sciences, for the better part of the twentieth century (Bechtel et al. 1998). Behaviorism represented an attempt to develop an objective, empirically based science of behavior and, more specifically, of learning. Empiricism is the view that experience is the only source of knowledge (Hilgard and Bower 1975). Behaviorism endeavored to build a comprehensive framework of scientific inquiry around the experimental analysis of observable behavior. Behaviorists eschewed the study of thinking as an unacceptable psychological method because it was inherently subjective, error-prone, and could not be subjected to empirical validation. Similarly, hypothetical constructs (e.g., mental processes as mechanisms in a theory) were discouraged. All constructs had to be specified in terms of operational definitions, so they could be manipulated, measured, and quantified for empirical investigation (Weinger and Slagle 2001).

For reasons that go beyond the scope of this chapter, classical behavioral theories have been largely discredited as a comprehensive unifying theory of behavior. However, behaviorism continues to provide a theoretical and methodological foundation in a wide range of social science disciplines. For example, behaviorist tenets continue to play a central role in public health research. In particular, health behavior research emphasizes antecedent variables and environmental contingencies that serve to sustain unhealthy behaviors such as smoking (Sussman 2001).
Around 1950, there was increasing dissatisfaction with the limitations and methodological constraints (e.g., the disavowal of the unobserved such as mental states) of behaviorism. In addition, developments in logic, information theory, cybernetics, and perhaps most importantly,
the advent of the digital computer aroused substantial interest in "information processing" (Gardner 1985). Cognitive scientists placed "thought" and "mental processes" at the center of their explanatory framework. The "computer metaphor" provided a framework for the study of human cognition as the manipulation of "symbolic structures." It also provided the foundation for a model of memory, which was a prerequisite for an information-processing theory (Atkinson and Shiffrin 1968). The implementation of models of human performance as computer programs provided a measure of objectivity and a sufficiency test of a theory, and also served to increase the objectivity of the study of mental processes (Estes 1975).

Arguably, the landmark publication in the nascent field of cognitive science is Newell and Simon's "Human Problem Solving" (Newell and Simon 1972). This was the culmination of over 15 years of work on problem solving and research in artificial intelligence. It was a mature thesis that described a theoretical framework, extended a language for the study of cognition, and introduced protocol-analytic methods that have become ubiquitous in the study of high-level cognition. It laid the foundation for the formal investigation of symbolic information processing (more specifically, problem solving). The development of models of human information processing also provided a foundation for the discipline of human-computer interaction and the first formal methods of analysis (Card et al. 1983).

The early investigations of problem solving focused primarily on experimentally contrived or toy-world tasks such as elementary deductive logic, the Tower of Hanoi (illustrated in Fig. 4.1), and mathematical word problems (Greeno and Simon 1988). These tasks required very little background knowledge and were well structured, in the sense that all the variables necessary for solving the problem were present in the problem statement.
These tasks allowed for a complete description of the task environment, a step-by-step description of the sequential behavior of the subjects' performance, and the modeling of subjects' cognitive and overt behavior in the form of a computer simulation. The Tower of Hanoi, in particular, served as an important test bed for the development of an explanatory vocabulary and framework for analyzing problem-solving behavior.

Fig. 4.1 Tower of Hanoi task illustrating a start state and a goal state (three pegs, A, B, and C, in each state)

The Tower of Hanoi (TOH) is a relatively straightforward task that consists of three pegs (A, B, and C) and three or more disks that vary in size. The goal is to move the three disks from peg A to peg C one at a time, with the constraint that a larger disk can never rest on a smaller one. Problem solving can be construed as search in a problem space. A problem space has an initial state, a goal state, and a set of operators. Operators are any moves that transform a given state to a successor state. For example, the first move could be to move the small disk to peg B or peg C. In a three-disk TOH, there are a total of 27 possible states representing the complete problem space; in general, a TOH has 3^n states, where n is the number of disks. The minimum number of moves necessary to solve a TOH is 2^n − 1. Problem solvers will typically maintain only a small set of states at a time. The search process involves finding a solution strategy that will minimize the number of steps. The metaphor of movement through a problem space provides a means for understanding how an individual can sequentially address the challenges they confront at each stage of a problem and the actions that ensue. We can characterize the problem-solving behavior of the subject at a local level in terms of state transitions, or at a more global level in terms of strategies. For example, means-ends analysis is a commonly used strategy for reducing the difference between the start state and the goal state. For instance, moving all but the largest disk from peg A to peg B is an interim goal associated with such a strategy.

Although the TOH bears little resemblance to the tasks performed by either clinicians or patients, the example illustrates the process of analyzing task demands and task performance in human subjects. The TOH helped lay the groundwork for the cognitive task analyses that are performed today. Protocol analysis¹ is among the most commonly used methods (Newell and Simon 1972). Protocol analysis refers to a class of techniques for representing verbal think-aloud protocols (Greeno and Simon 1988). Think-aloud protocols are the most common source of data used in studies of problem solving. In these studies, subjects are instructed to verbalize their thoughts as they perform an experimental task. Ericsson and Simon (1993) specify the conditions under which verbal reports are acceptable as legitimate data. For example, retrospective think-aloud protocols are viewed as somewhat suspect because the subject has had the opportunity to reconstruct the information in memory, and the verbal reports are inevitably distorted. Think-aloud protocols recorded in concert with observable behavioral data, such as a subject's actions, provide a rich source of evidence to characterize cognitive processes.

Cognitive psychologists and linguists have investigated the processes and properties of language and memory in adults and children for many decades. Early research focused on basic laboratory studies of list learning or processing of words and sentences (as in a sentence completion task) (Anderson 1985).
¹ The term protocol refers to that which is produced by a subject during testing (e.g., a verbal record). It differs from the more common use of protocol as defining a code or set of procedures governing behavior or a situation.
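The problem-space formalism described above (states, operators, and search) is concrete enough to implement. The following sketch, in Python with illustrative function names of our own choosing (not from the chapter), represents a TOH state as the peg assignment of each disk, generates successor states by applying every legal operator, and uses breadth-first search to confirm that the shortest three-disk solution takes 2^3 − 1 = 7 moves:

```python
from collections import deque

# A state assigns each disk (index 0 = smallest) to a peg (0, 1, or 2).
# Stacking order on a peg is forced by disk size, so every assignment is
# a legal state and the problem space contains exactly 3**n states.

def successor_states(state):
    """Apply every legal operator: move a disk that is topmost on its peg
    onto a peg whose topmost disk, if any, is larger."""
    for disk, peg in enumerate(state):
        smaller = state[:disk]            # pegs of all disks smaller than `disk`
        if peg in smaller:
            continue                      # a smaller disk sits on top of it
        for target in range(3):
            if target != peg and target not in smaller:
                yield state[:disk] + (target,) + state[disk + 1:]

def shortest_solution_length(n):
    """Breadth-first search from all disks on peg 0 to all disks on peg 2."""
    start, goal = (0,) * n, (2,) * n
    distance = {start: 0}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return distance[state]
        for nxt in successor_states(state):
            if nxt not in distance:
                distance[nxt] = distance[state] + 1
                frontier.append(nxt)

print(shortest_solution_length(3))  # 7 moves, i.e., 2**3 - 1
```

The breadth-first frontier holds only a handful of states at any moment, loosely analogous to a human problem solver maintaining a small set of states at a time; an exhaustive search strategy is, of course, only one point of contrast with the heuristic strategies (such as means-ends analysis) that people actually use.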
van Dijk and Kintsch (1983) developed an influential method of analyzing the process of text comprehension, based on the realization that text can be described at multiple levels, from surface codes (e.g., words and syntax) to a deeper level of semantics. Comprehension refers to the cognitive processes associated with understanding or deriving meaning from text, conversation, or other informational resources. It involves the processes that people use when trying to make sense of a piece of text, such as a sentence, a book, or a verbal utterance. It also involves the final product of such processes, that is, the mental representation of the text: essentially, what people have understood. Comprehension may often precede problem solving and decision making, but it is also dependent on perceptual processes that focus attention, the availability of relevant knowledge, and the ability to deploy knowledge in a given context. Some of the more important differences in medical problem solving and decision making arise from differences in knowledge and comprehension. Furthermore, many of the problems associated with decision making are the result of either a lack of knowledge or a failure to understand the information appropriately.

The early investigations provided a well-constrained artificial environment for the development of the basic methods and principles of problem solving. They also provided a rich explanatory vocabulary (e.g., problem space), but were not fully adequate in accounting for cognition in knowledge-rich domains of greater complexity that involve uncertainty. In the mid to late 1970s, there was a shift in research to complex "real-life" knowledge-based domains of inquiry (Greeno and Simon 1988). Problem-solving research was studying performance in domains such as physics (Larkin et al. 1980), medical diagnosis (Elstein et al. 1978), and architecture (Akin 1982).
Similarly, the study of text comprehension shifted from research on simple stories to technical and scientific texts in a range of domains, including medicine. This paralleled a similar change in artificial intelligence research from “toy programs” to addressing “real-world” problems and the development
of expert systems (Clancey and Shortliffe 1984). The shift to real-world problems in cognitive science was spearheaded by research exploring the nature of expertise. Most of the early investigations of expertise involved laboratory experiments. However, the shift to knowledge-intensive domains provided a theoretical and methodological foundation for conducting both basic and applied research in real-world settings such as the workplace (Vicente 1999) and the classroom (Bruer 1993). These areas of application provided a fertile test bed for assessing and extending the cognitive science framework.

In recent years, the conventional information-processing approach has come under criticism for its narrow focus on the rational/cognitive processes of the solitary individual. One of the most compelling proposals has to do with a shift from viewing cognition as a property of the solitary individual to viewing cognition as distributed across groups, cultures, and artifacts. This claim has significant implications for the study of collaborative endeavors and human-computer interaction. We explore the concepts underlying distributed cognition in greater detail in a subsequent section.
4.3 Human Information Processing
It is well known that product design often fails to consider cognitive and physiological constraints adequately and imposes an unnecessary burden on task performance (Sharp et al. 2019). Fortunately, advances in theory and methods provide us with greater insight into designing systems for the human condition. Cognitive science serves as a basic science and provides a framework for the analysis and modeling of complex human performance. A computational theory of mind provides the fundamental underpinning for most contemporary theories of cognitive science. The basic premise is that much of human cognition can be characterized as a series of operations or computations on mental representations. Mental representations are internal cognitive
states that have a certain correspondence with the external world. For example, they may reflect a clinician's hypothesis about a patient's condition after noticing an abnormal gait as the patient entered the clinic. Such representations are likely to elicit further inferences about the patient's underlying condition, and may direct the physician's information-gathering strategies and contribute to an evolving problem representation.

Two interdependent dimensions by which we can characterize cognitive systems are (1) architectural theories that endeavor to provide a unified theory for all aspects of cognition, and (2) the different kinds of knowledge necessary to attain competency in a given domain. Individuals differ substantially in terms of their knowledge, experiences, and endowed capabilities. The architectural approach capitalizes on the fact that we can characterize certain regularities of the human information-processing system. These can be either structural regularities (such as the existence of, and the relations between, perceptual, attentional, and memory systems, and memory capacity limitations) or processing regularities (such as processing speed, selective attention, or problem-solving strategies). Cognitive systems are characterized functionally in terms of the capabilities they enable (e.g., focused attention on selective visual features), the ways in which they constrain human cognitive performance (e.g., limitations on memory), and their development during the lifespan.

With regard to the lifespan issue, there is a growing body of literature on cognitive aging and how aspects of the cognitive system, such as attention, memory, vision, and motor skills, change as a function of aging (Fisk et al. 2009). This basic science research is of growing importance to informatics as we seek to develop e-health applications for seniors, many of whom suffer from chronic health conditions such as arthritis and diabetes.
A graphical user interface or, more generally, a website designed for younger adults may not be suitable for older adults. Differences in knowledge organization are a central focus of research into the nature of expertise. In medicine, the expert-novice paradigm has contributed to our understanding of the nature of medical expertise and skilled clinical performance.

4.3.1 Cognitive Architectures and Human Memory Systems
Fundamental research in perception, cognition, and psychomotor skills over the last 50 years has provided a foundation for design principles in human factors and human-computer interaction. Although cognitive guidelines have made significant inroads in the design community, there remains a significant gap in applying basic cognitive research (Gillan and Schvaneveldt 1999). Designers routinely violate basic assumptions about the human cognitive system. There are invariably challenges in applying basic research and theory to applications; more human-centered design and cognitive research can instrumentally contribute to such an endeavor (Zhang et al. 2004). Over the last 50 years, there have been several attempts to develop a unified theory of cognition. The goal of such a theory is to provide a single set of mechanisms for all cognitive behaviors, from motor skills, language, and memory to decision making, problem solving, and comprehension (Newell 1990). Such a theory provides a means to assemble a voluminous and seemingly disparate body of human experimental data into a coherent form. Cognitive architectures are unifying theories of cognition that are embodied in large-scale computer simulation programs. Although there is much plasticity evidenced in human behavior, cognitive processes are bound by biological and physical constraints. Cognitive architectures specify functional rather than biological constraints on human behavior (e.g., limitations on working memory). These constraints reflect the information-processing capacities and limitations of the human cognitive system. Architectural systems embody a relatively fixed permanent structure that is (more or less) characteristic of all humans and doesn't substantially vary over an individual's lifetime. It represents a scientific hypothesis about
those aspects of human cognition that are relatively constant over time and independent of the task (Carroll 2003). Cognitive architectures also play a role in providing blueprints for building future intelligent systems that embody a broad range of capabilities like those of humans (Duch et al. 2008). There are several large-scale cognitive architecture theories that embody computational models of cognition and have informed a substantial body of research in cognitive science and allied disciplines. ACT-R (short for "Adaptive Control of Thought-Rational") is perhaps the most widely known cognitive architecture. It was developed by John R. Anderson and is sustained by a large global community of researchers centered at Carnegie Mellon University (Anderson 2013). It is a theory for simulating and understanding human cognition. It started more than 40 years ago as an architecture that could simulate basic tasks related to memory, language, and problem solving. It has continued to evolve into a system that can perform an enormous range of human tasks (Ritter et al. 2019). Cognitive architectures include short-term and long-term memories that store content about an individual's beliefs, goals, and knowledge; the representation of elements contained in these memories; and their organization into larger-scale structures (Lieto et al. 2018). An extended discussion of architectural theories and systems is beyond the scope of this chapter. However, we employ the architectural frame of reference to introduce some basic distinctions in memory systems. Human memory is typically divided into at least two structures: long-term memory and short-term/working memory. Working memory is an emergent property of interaction with the environment. Long-term memory (LTM) can be thought of as a repository of all knowledge, whereas working memory (WM) refers to the resources needed to maintain information active during cognitive activity (e.g., text comprehension).
The information maintained in working memory includes stimuli from the environment (e.g., words on a display) and knowledge activated from long-term memory. In theory, LTM
is infinite, whereas WM is limited to five to ten "chunks" of information. A chunk is any stimulus or pattern of stimuli that has become familiar from repeated exposure and is subsequently stored in memory as a single unit (Larkin et al. 1980). Problems impose a variable cognitive load on working memory; cognitive load refers to an excess of information competing for limited cognitive resources, creating a burden on working memory (Chandler and Sweller 1991). For example, maintaining a seven-digit phone number in WM is not very difficult. However, maintaining a phone number while engaging in conversation is nearly impossible for most people. Multi-tasking is one factor that contributes to cognitive load. The structure of the task environment, for example, a crowded computer display, is another contributor. High-velocity, high-workload clinical environments such as intensive care units also impose heavy cognitive loads on clinicians carrying out tasks.

4.3.2 The Organization of Knowledge
Architectural theories specify the structure and mechanisms of memory systems, whereas theories of knowledge organization focus on the content. There are several ways to characterize the kinds of knowledge that reside in LTM and that support decisions and actions. Cognitive psychology has furnished a range of domain-independent constructs that account for the variability of mental representations needed to engage the external world. A central tenet of cognitive science is that humans actively construct and interpret information from their environment. Given that environmental stimuli can take a multitude of forms (e.g., written text, speech, music, images, etc.), the cognitive system needs to be attuned to different representational types to capture the essence of these inputs. For example, we process written text differently than we do mathematical equations. The power of cognition is reflected in the ability to form abstractions: to represent perceptions, experiences, and thoughts in some medium
1. 43-year-old white female who developed diarrhea after a brief period of 2 days of GI upset
1.1 female   ATT: Age (old); DEG: 43 year; ATT: white
1.2 develop  PAT: [she]; THM: diarrhea; TNS: past
1.3 period   ATT: brief; DUR: 2 days; THM: 1.4
1.4 upset    LOC: GI
1.5 TEM:ORD  [1.3], [1.2]
.. Fig. 4.2 Propositional analysis of a think-aloud protocol of a primary care physician
other than that in which they have occurred, without extraneous or irrelevant information (Norman 1993). Representations enable us to remember, reconstruct, and transform events, objects, images, and conversations absent in space and time from our initial experience of the phenomena. Representations reflect states of knowledge. Propositions are a form of natural language representation that captures the essence of an idea (i.e., semantics) or concept without explicit reference to linguistic content. For example, "hello", "hey", and "what's happening" can typically be interpreted as greetings with identical propositional content even though the literal semantics of the phrases may differ. These ideas are expressed as language and translated into speech or text when we talk or write. Similarly, we recover the propositional structure when we read or listen to verbal information. Numerous psychological experiments have demonstrated that people recover the gist of a text or spoken communication (i.e., its propositional structure), not the specific words (Anderson 1985; van Dijk and Kintsch 1983). Studies have also shown that individuals at different levels of expertise will differentially represent a text (Patel and Kaufman 1998). For example, experts are more likely to selectively encode relevant propositional information that will inform a decision. On the other hand, non-experts will often remember more information, but much of the recalled information may not be relevant to the decision (Patel and Groen 1991a, b). Propositional representations constitute an important construct in theories of comprehension. Propositional knowledge can be expressed using a predicate calculus formalism or as a semantic network. The predicate
calculus representation is illustrated in . Fig. 4.2. A subject's response is divided into sentences or segments and sequentially analyzed. The formalism includes a head element of a segment and a series of arguments. For example, in proposition 1.1, the focus is on a female who has the attributes of being 43 years of age and white. The TEM:ORD or temporal order relation indicates that the events of 1.3 (GI upset) precede the event of 1.2 (diarrhea). The formalism is informed by an elaborate propositional language (Frederiksen 1975) and was first applied to the medical domain by Patel and her colleagues (Patel and Groen 1986). The method provides us with a detailed way to characterize the information subjects understood from reading a text, based on their summaries or explanations. Kintsch (1998) theorized that comprehension involves an interaction between what the text conveys and knowledge in long-term memory. Comprehension occurs when the reader uses prior knowledge to process the incoming information presented in the text. The text information is called the textbase (the propositional content of the text). For instance, in medicine, the textbase could consist of the representation of a patient problem as written in a patient chart. The situation model is constituted by the textbase representation plus the domain-specific and everyday knowledge that the reader uses to derive a broader meaning from the text. In medicine, the situation model would enable a physician to draw inferences from a patient's history leading to a diagnosis, therapeutic plan, or prognosis (Patel and Groen 1991a, b). This situation model is typically derived from the general knowledge and specific knowledge acquired through medical teaching, readings
(e.g., theories and findings from biomedical research), clinical practice (e.g., knowledge of associations between clinical findings and specific diseases, knowledge of medications or treatment procedures that have worked in the past) and the textbase representation. Like other forms of knowledge representation, the situation model is used to “fit in” the incoming information (e.g., text, perception of the patient). Since the knowledge in LTM differs among physicians, the resulting situation model generated by any two physicians is likely to differ as well. Theories and methods of text comprehension have been widely used in the study of medical cognition and have been instrumental in characterizing the process of guideline development and interpretation (Peleg et al. 2006; Patel et al. 2014). Schemata represent higher-level knowledge structures. They can be construed as data structures for representing categories of concepts stored in memory (e.g., fruits, chairs, geometric shapes, and thyroid conditions). There are schemata for concepts underlying situations, events, sequences of actions and so forth. To process information with the use of a schema is to determine which model best fits the incoming information. Schemata have constants (all birds have wings) and variables (chairs can have between one and four legs). The variables may have associated default values (e.g., birds fly) that represent the prototypical circumstance. When a person interprets information, the schema serves as a “filter” for distinguishing relevant and irrelevant information. Schemata can be considered as generic knowledge structures that contain slots for particular kinds of propositions. For instance, a schema for myocardial infarction may contain the findings of “chest pain,” “sweating,” “shortness of breath,” but not the finding of “goiter,” which is part of the schema for thyroid disease. 
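The idea of a schema as a generic structure with slots that acts as a "filter" for relevant versus irrelevant information can be made concrete in a few lines of code. This is a toy sketch: the myocardial-infarction findings come from the text above, but the dictionary layout, threshold-free matching logic, and function names are illustrative assumptions, not a published formalism.

```python
# A toy schema: a generic knowledge structure with slots for expected findings.
# "goiter" is deliberately absent; it belongs to a thyroid-disease schema instead.
MI_SCHEMA = {
    "concept": "myocardial infarction",
    "expected_findings": {"chest pain", "sweating", "shortness of breath"},
}

def filter_findings(schema, observed):
    """Use the schema as a filter: split observations into those the schema
    accounts for and those it treats as irrelevant to this concept."""
    expected = schema["expected_findings"]
    relevant = [f for f in observed if f in expected]
    irrelevant = [f for f in observed if f not in expected]
    return relevant, irrelevant

relevant, irrelevant = filter_findings(
    MI_SCHEMA, ["chest pain", "goiter", "sweating"]
)
print(relevant)    # findings the MI schema accounts for
print(irrelevant)  # findings left for some other schema to explain
```

Interpreting a case then amounts to asking which schema best fits the incoming findings, with default values standing in for slots the data leave unfilled.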
The schematic and propositional representations reflect abstractions and don’t necessarily preserve literal information about the external world. Imagine that you are having a conversation at the office about how to rearrange the furniture in your living room. To engage in such a conversation, one needs to be able to construct images of the objects and
their spatial arrangement in the room. Mental images are a form of internal representation that captures perceptual information recovered from the environment. There is compelling psychological and neuropsychological evidence to suggest that mental images constitute a distinct form of mental representation (Bartolomeo 2008). Images play a particularly important role in domains of visual diagnosis such as dermatology and radiology. Mental models are an analog-based construct for describing how individuals form internal models of systems. Mental models are designed to answer questions such as "how does it work?" or "what will happen if I take the following action?" "Analogy" suggests that the representation explicitly shares the structure of the world it represents (e.g., a set of connected visual images of a partial road map from your home to your work destination). This contrasts with an abstraction-based form such as propositions or schemas, in which the mental structure consists of the gist, an abstraction, or a summary representation. However, like other forms of mental representation, mental models are always incomplete, imperfect, and subject to the processing limitations of the cognitive system. Mental models can be derived from perception, language, or one's imagination (Payne 2003). Running a model corresponds to a process of mental simulation that generates possible future states of a system from an observed or hypothetical state. For example, when one initiates a Google search, one may reasonably anticipate that the system will return a list of relevant (and less than relevant) websites that correspond to the query. Mental models are a particularly useful construct in understanding human-computer interaction. An individual's mental models provide predictive and explanatory capabilities of the function of a physical system.
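The notion of "running" a mental model to generate future states can be illustrated with a toy causal model. Cardiac output as the product of heart rate and stroke volume is standard physiology; the state layout, numeric values, and function names below are illustrative assumptions, not a model taken from this chapter.

```python
# "Running" a mental model: apply a hypothetical perturbation to a
# simplified causal model of a system and predict the downstream effect.

def cardiac_output(state):
    """Cardiac output = heart rate (beats/min) x stroke volume (mL/beat)."""
    return state["heart_rate"] * state["stroke_volume"]  # mL/min

def run_model(state, perturbation):
    """Mental simulation: copy the state, apply the change, predict the outcome."""
    new_state = {**state, **perturbation}  # original state is left untouched
    return cardiac_output(new_state)

baseline = {"heart_rate": 70, "stroke_volume": 70}
print(cardiac_output(baseline))                    # 4900 mL/min at rest
# Predict the effect of a hypothetical drop in stroke volume:
print(run_model(baseline, {"stroke_volume": 50}))  # 3500 mL/min: output falls
```

A clinician's actual mental model is, of course, far richer and often partly flawed; the point is only that prediction works by simulating the model forward from an observed or hypothetical state.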
More often, the construct has been used to characterize models that have a spatial and temporal context, as is the case in reasoning about the behavior of electrical circuits (White and Frederiksen 1990). The model can be used to simulate a process (e.g., predict the effects of network interruptions on getting cash from an ATM). Kaufman, Patel and Magder (1996)
characterized clinicians’ mental models of the cardiovascular system (specifically, cardiac output). The study characterized the development of an understanding of the system as a function of expertise. The research also documented various conceptual flaws in subjects’ models and how these flaws impacted subjects’ predictions and explanations of physiological manifestations. . Figure 4.3 illustrates the four chambers of the heart and blood flow in the pulmonary and cardiovascular systems. The claim is that clinicians and medical students have variably robust representations of the structure and function of the system. This model enables prediction and explanation of the effects of perturbations in the system on
[Fig. 4.3 near here: schematic of the heart and lungs showing the four chambers (RV, right ventricle; RA, right atrium; LV, left ventricle; LA, left atrium), the valves, and the collecting, distributing, and exchange systems of the pulmonary and systemic circulation.]
blood flow and on various clinical measures such as left ventricular ejection fraction. Thus far, we have considered only domain-general ways of characterizing the organization of knowledge. To understand the nature of medical cognition, it is necessary to characterize the domain-specific nature of knowledge organization in medicine. Given the vastness and complexity of the domain of medicine, this can be a rather daunting task. There is no single way to represent all biomedical (or even clinical) knowledge, but the issue is of considerable importance for research in biomedical informatics. Much research has been conducted in biomedical artificial intelligence to develop biomedical ontologies for use in knowledge-based systems (Ramoni et al. 1992). Patel et al. (1997) address this issue in the context of using empirical evidence from psychological experiments on medical expertise to test the validity of AI systems. Developers of biomedical taxonomies, nomenclatures, and vocabulary systems such as UMLS or SNOMED are engaged in a similar pursuit (see 7 Chap. 7). We have employed an epistemological framework developed by Evans and Gadd (1989), which serves to characterize the knowledge used for medical understanding and problem solving, and to differentiate the levels at which biomedical knowledge may be organized. This framework represents a formalization of biomedical knowledge as realized in textbooks and journals and can be used to provide us with insight into the organization of clinical practitioners' knowledge (see . Fig. 4.4). The framework consists of a hierarchical structure of concepts formed by clinical observations at the lowest level, followed by findings, facets, and diagnoses. Clinical observations are units of information that are recognized as potentially relevant in the problem-solving context. However, they do not constitute clinically useful facts. Findings are composed of observations that have potential clinical significance. Establishing a finding reflects a decision made by a physician that an array of data contains a significant cue or cues that need to be considered. Facets consist of clusters of findings that indicate an underlying
.. Fig. 4.3 Schematic model of circulatory and cardiovascular physiology. The diagram illustrates various structures of the pulmonary and systemic circulation and the process of blood flow. The illustration is used to exemplify the concept of a mental model and how it could be applied to explaining and predicting physiologic behavior. (Valves: PV, pulmonic; TV, tricuspid; MV, mitral; AV, aortic.)
[Fig. 4.4 near here: a lattice linking the observation level (o1–o12) upward to the finding level (f1–f6), the facet level (Fa1–Fa4), the diagnosis level (D1–D3), and the system complex level (SC1, SC2).]
.. Fig. 4.4 Epistemological framework representing the structure of medical knowledge for problem solving
medical problem or class of problems. They reflect general pathological descriptions such as left-ventricular failure or thyroid condition. Facets resemble the kinds of constructs used by researchers in medical artificial intelligence to describe the partitioning of a problem space. They are interim hypotheses that serve to divide the information in the problem into sets of manageable sub-problems and to suggest possible solutions. Facets also vary in terms of their levels of abstraction. Diagnosis is the level of classification that subsumes and explains all levels beneath it. Finally, the systems level consists of information that serves to contextualize a problem, such as the ethnic background of a patient.
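The four-level structure just described, in which observations aggregate into findings, findings cluster into facets, and facets are subsumed by a diagnosis, can be sketched as a nested data structure. All clinical content, field names, and the traversal helper below are hypothetical illustrations of the idea, not the published Evans-Gadd framework itself.

```python
# One diagnosis subsuming a facet, two findings, and their raw observations.
case = {
    "diagnosis": {
        "name": "myocardial infarction",
        "facets": [
            {
                "name": "acute coronary syndrome",
                "findings": [
                    {"name": "exertional chest pain",
                     "observations": ["reports chest pressure on stairs"]},
                    {"name": "diaphoresis",
                     "observations": ["skin visibly sweaty on exam"]},
                ],
            }
        ],
    }
}

def observations_explained(case):
    """Walk the hierarchy top-down: which raw observations does the
    diagnosis ultimately subsume and explain?"""
    out = []
    for facet in case["diagnosis"]["facets"]:
        for finding in facet["findings"]:
            out.extend(finding["observations"])
    return out

print(observations_explained(case))
```

The hierarchy makes the key epistemological distinction operational: an observation becomes clinically useful only once a physician commits to treating it as a finding, and facets act as interim hypotheses that partition the problem space.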
4.4 Medical Cognition
The study of expertise is one of the principal paradigms in problem-solving research, as documented in a number of volumes in the literature (Sternberg and Ericsson 1996; Ericsson 2009; Ericsson et al. 2018). Comparing experts to novices provides us with the opportunity to explore the aspects of performance that undergo change and result in increased problem-solving skill (Glaser 2000). It also permits investigators to develop domain-specific models of competence that can be used for assessment and training purposes.
A goal of this approach has been to characterize expert performance in terms of the knowledge and cognitive processes used in comprehension, problem solving, and decision making, using carefully developed laboratory tasks (Chi and Glaser 1981). De Groot's (1965) pioneering research in chess represents one of the earliest characterizations of expert-novice differences. In one of his experiments, subjects were allowed to view a chess board for 5–10 seconds and were then required to reproduce the position of the chess pieces from memory. The grandmaster chess players were able to reconstruct the mid-game positions with better than 90% accuracy, while novice chess players could only reproduce approximately 20% of the correct positions. When the chess pieces were placed on the board in a random configuration, not encountered in the course of a normal chess match, expert chess masters' recognition ability fell to that of novices. This result suggests that superior recognition ability is not a function of superior memory, but is a result of an enhanced ability to recognize typical situations (Chase and Simon 1973). This phenomenon is accounted for by a process known as "chunking." Patel and Groen (1991b) showed a similar phenomenon in medicine. The expert physicians were able to reconstruct patient summaries accurately when patient information was collected out of order (e.g., history, physical exam, lab results), as long as the pattern of information, even out of sequence, was familiar. When the sentences were placed out of order in a way that made the pattern unfamiliar, the expert physicians' recognition ability was no better than that of the novices. It is well known that knowledge-based differences impact the problem representation and determine the strategies a subject uses to solve a problem. Simon and Simon (1978) compared a novice subject with an expert subject in solving textbook physics problems. The results indicated that the expert solved the problems in one-quarter of the time required by the novice, with fewer errors. The novice solved most of the problems by working backward from the unknown problem solution to the givens of the problem statement. The expert worked forward from the givens to solve the necessary equations and determine the quantities they were asked to find. Differences in the directionality of reasoning by levels of expertise have been demonstrated in diverse domains from computer programming (Perkins et al. 1990) to medical diagnosis (Patel and Groen 1986). The expertise paradigm spans the range of content domains including physics (Larkin et al. 1980), sports (Allard and Starkes 1991), music (Sloboda 1991), and medicine (Patel et al. 1994). Edited volumes (Ericsson 2006; Chi et al. 1988; Ericsson et al. 2018; Ericsson and Smith 1991; Hoffman 1992) provide an informative general overview of the area. This research has focused on differences between subjects varying in levels of expertise in terms of memory, reasoning strategies, and in particular the role of domain-specific knowledge.
Among the expert’s characteristics uncovered by this research are the following: (1) experts are capable of perceiving large patterns of meaningful information in their domain, which novices cannot perceive; (2) they are fast at processing and at deployment of different skills required for problem solving; (3) they have superior short-term and long-term memories for materials (e.g., clinical findings in medicine) within their domain of expertise, but not outside of it; (4) they typically represent problems in their domain at deeper, more principled levels whereas novices show a superficial level of representation; (5) they spend more time assessing the problem prior
to solving it, while novices tend to spend more time working on the solution itself and little time in problem assessment; (6) individual experts may differ substantially in terms of exhibiting these kinds of performance characteristics (e.g., superior memory for domain materials). Usually, someone is designated as an expert based on a certain level of performance, as exemplified by Elo ratings in chess; by virtue of being certified by a professional licensing body, as in medicine, law, or engineering; on the basis of academic criteria, such as graduate degrees; or simply based on years of experience or peer evaluation (Hoffman et al. 1995). The concept of an expert, however, refers to an individual who surpasses competency in a domain (Sternberg and Horvath 1999). Although competent performers, for instance, may be able to encode relevant information and generate effective plans of action in a specific domain, they often lack the speed and the flexibility that we see in an expert. A domain expert (e.g., a medical practitioner) possesses an extensive, accessible knowledge base that is organized for use in practice and is tuned to the particular problems at hand. In the study of medical expertise, it has been useful to distinguish different types of expertise. Patel and Groen (1991a, b) distinguished between general and specific expertise, a distinction supported by research indicating differences between subexperts (i.e., expert physicians who solve a case outside their field of specialization) and experts (i.e., domain specialists) with respect to reasoning strategies and organization of knowledge. General expertise corresponds to expertise that cuts across medical subdisciplines (e.g., general medicine). Specific expertise results from having extensive experience within a medical subdomain, such as cardiology or endocrinology. An individual may possess both specific and general expertise, or only general expertise. The development of expertise can follow a somewhat unusual trajectory.
It is often assumed that the path from novice to expert goes through a steady process of gradual accumulation of knowledge and fine-tuning of skills. That is, as a person becomes more familiar with a domain, his or her level of performance (e.g., accuracy, quality) gradually increases. However, research has shown that this assumption is often incorrect (Lesgold et al. 1988; Patel et al. 1994). Cross-sectional studies of experts, intermediates, and novices have shown that people at intermediate levels of expertise may perform more poorly than those at a lower level of expertise on some tasks. Furthermore, there is a longstanding body of research on learning suggesting that the learning process involves phases of error-filled performance followed by periods of stable, comparatively error-free performance. In other words, human learning does not consist of the gradually increasing accumulation of knowledge and fine-tuning of skills. Rather, it requires the arduous process of continually learning, re-learning, and exercising new knowledge, punctuated by periods of an apparent decrease in mastery and declines in performance, which may be necessary for learning to take place. . Figure 4.5 presents an illustration of this learning and development phenomenon known as the intermediate effect. The intermediate effect has been found in a variety of tasks and with a great number of performance indicators. The tasks used include comprehension and explanation of clinical problems, doctor-patient communication, recall and explanation of laboratory data, generation of diagnostic hypotheses, and problem solving (Patel and Groen 1991a, b).

[Fig. 4.5 near here: performance level (y-axis) plotted against development from novice through intermediate to expert (x-axis), contrasting the "expected" straight line with the "actual" non-monotonic curve.]
.. Fig. 4.5 Schematic representation of the intermediate effect. The straight line gives a commonly assumed representation of performance development by level of expertise. The curved line represents the actual development from novice to expert. The Y-axis may represent any of a number of performance variables such as the number of errors made, number of concepts recalled, number of conceptual elaborations, or number of hypotheses generated in a variety of tasks

The performance indicators used have included recall and inference of medical-text information, recall and inference of diagnostic hypotheses, generation of clinical findings from a patient in doctor-patient interaction, and requests for laboratory data, among others. The research has also identified developmental levels at which the intermediate phenomenon occurs, including senior medical students and residents. It is important to note, however, that in some tasks the development is monotonic. For instance, in diagnostic accuracy there is a gradual increase, with an intermediate exhibiting a greater degree of accuracy than the novice and the expert demonstrating a still greater degree than the intermediate. Furthermore, when the relevancy of the stimuli to a problem is considered, an appreciable monotonic phenomenon appears. For instance, in recall studies, novices, intermediates, and experts are assessed in terms of the total number of propositions recalled, showing the typical non-monotonic effect. However, when propositions are divided in terms of their relevance to the problem (e.g., a clinical case), experts recall more relevant propositions than intermediates and novices, suggesting that intermediates have difficulty separating what is relevant from what is not. During the periods when the intermediate effect occurs, a reorganization of knowledge and skills takes place, characterized by shifts in perspectives or a realignment or creation of goals. The intermediate effect is also partly due to the unintended changes that take place as the person reorganizes for intended changes. People at intermediate levels typically generate a great deal of irrelevant information and seem incapable of discriminating what is relevant from what is not. As compared to a novice student (. Fig.
4.6), the reasoning pattern of an intermediate student shows the generation of long chains of discussion evaluating multiple hypotheses and reasoning in haphazard directions (. Fig. 4.7). The well-structured knowledge of a senior-level student leads him or her more directly to a
[Fig. 4.6 near here: a 45-year-old male with a 4-hr history of chest pain; findings (central, crushing chest pain; faintness; sweating; mild cough) linked to the hypotheses "myocardial infarction" and "other diagnosis".]
.. Fig. 4.6 Problem interpretations by a novice medical student. The given information from the patient problem is represented on the right side of the figure and the newly generated information on the left side; information in boxes represents diagnostic hypotheses. Intermediate hypotheses are represented as solid dark (filled) circles. Forward-driven or data-driven inference arrows are shown from left to right (solid dark lines). Backward or hypothesis-driven inference arrows are shown from right to left (solid light lines). A thick solid dark line represents a rule-out strategy
[Fig. 4.7 near here: the same case (45-year-old male, 4-hr history of chest pain; central, crushing chest pain; faintness; sweating; mild cough) linked to the hypotheses "myocardial infarction" and "other diagnoses".]
.. Fig. 4.7 Problem interpretations by an intermediate medical student
solution (. Fig. 4.8). Thus, the intermediate effect can be explained as a function of the learning process, perhaps as a necessary phase of learning. Identifying the factors involved in the intermediate effect may help in improving performance during learning (e.g., by designing decision-support systems or intelligent tutoring systems that help the user in focusing on relevant information).
The intermediate effect is not a one-time phenomenon. Rather, it repeatedly occurs at strategic points in a student or physician’s training and follows periods in which large bodies of new knowledge or complex skills are acquired. These periods are followed by intervals in which there is a decrement in performance until a new level of mastery is achieved.
137 Cognitive Informatics
.. Fig. 4.8 Problem interpretations by a senior medical student
4.4.1 Expertise in Medicine

The systematic investigation of medical expertise began more than 60 years ago with research by Ledley and Lusted (1959) into the nature of clinical inquiry. They proposed a two-stage model of clinical reasoning involving a hypothesis-generation stage followed by a hypothesis-evaluation stage. This latter stage is most amenable to formal decision-analytic techniques. The earliest empirical studies of medical expertise can be traced to the works of Rimoldi (1961) and Kleinmuntz and McLean (1968), who conducted experimental studies of diagnostic reasoning by contrasting students with medical experts in simulated problem-solving tasks. The results emphasized the greater ability of expert physicians to attend to relevant information selectively and narrow the set of diagnostic possibilities (i.e., consider fewer hypotheses).
The origin of contemporary research on medical thinking is associated with the seminal work of Elstein, Shulman, and Sprafka (1978), who studied the problem-solving processes of physicians by drawing on then-contemporary methods and theories of cognition. This model of problem solving has had a substantial influence both on studies of medical cognition and on medical education. They were the first to use experimental methods and theories of cognitive science to investigate clinical competency. Their research findings led to the development of an elaborated model of hypothetico-deductive reasoning, which proposed that physicians reasoned by first generating and then testing a set of hypotheses to account for clinical data (i.e., reasoning from hypothesis to data). First, physicians generated a small set of hypotheses very early in the case, as soon as the first pieces of data became available. Second, physicians were selective in the data they collected, focusing only on the relevant data. Third, physicians made use of a hypothetico-deductive method of diagnostic reasoning (Elstein et al. 1978).
The previous research was largely modeled after early problem-solving studies in knowledge-lean tasks. Medicine is a knowledge-rich domain, and a different approach was needed. Feltovich, Johnson, Moller, and Swanson (1984), drawing on models of knowledge representation from medical artificial intelligence, characterized fine-grained differences in knowledge organization between subjects of different levels of expertise in the domain of pediatric cardiology. Patel and colleagues studied the knowledge-based solution strategies of expert cardiologists as evidenced by their pathophysiological explanations of a complex clinical problem (Patel and Groen 1986). The results indicated that subjects who accurately diagnosed the problem employed a forward-directed (data-driven) reasoning strategy—
using patient data to lead toward a complete diagnosis (i.e., reasoning from data to hypothesis). This is in contrast to subjects who misdiagnosed or partially diagnosed the patient problem; they tended to use a backward or hypothesis-driven reasoning strategy. Patel and Groen (1991a, b) investigated the nature and directionality of clinical reasoning in a range of contexts of varying complexity. The objectives of this research program were both to advance our understanding of medical expertise and to devise more effective ways of teaching clinical problem solving. It has been established that the patterns of data-driven and hypothesis-driven reasoning are used differentially by novices and experts. Experts tend to use data-driven reasoning, which depends on the physician possessing a highly organized knowledge base about the patient's disease (including sets of signs and symptoms). Because of their lack of substantive knowledge or their inability to distinguish relevant from irrelevant knowledge, novices and intermediates use more hypothesis-driven reasoning, often resulting in very complex reasoning patterns. The fact that experts and novices reason differently suggests that they might reach different conclusions (e.g., decisions or understandings) when solving medical problems. Similar patterns of reasoning have been found in other domains (Larkin et al. 1980).
Due to their extensive knowledge base and the high-level inferences they make, experts typically skip steps in their reasoning. Although experts typically use data-driven reasoning during clinical performance, this type of reasoning sometimes breaks down, and the expert must resort to hypothesis-driven reasoning. Although data-driven reasoning is highly efficient, it is often error-prone in the absence of adequate domain knowledge, since there are no built-in checks on the legitimacy of the inferences that a person makes. Pure data-driven reasoning is successful only in constrained situations, where one's knowledge of a problem can result in a complete chain of inferences from the initial problem statement to the problem solution, as illustrated in Fig. 4.9. In contrast, hypothesis-driven reasoning is slower and may make heavy demands on working memory, because one must keep track of goals and hypotheses. It is, therefore, most likely to be used when domain knowledge is inadequate or the problem is complex. Hypothesis-driven reasoning is usually exemplary of a weak method of problem solving in the sense that it is used in the absence of relevant prior knowledge and when there is uncertainty about the problem solution. In problem-solving terms, strong methods engage knowledge, whereas weak methods refer to general strategies. Weak does not necessarily imply ineffectual in this context. However, hypothesis-driven reasoning may be more conducive to the novice learning experience in that it can guide the organization of knowledge (Patel et al. 1990).
.. Fig. 4.9 Diagrammatic representation of data-driven (forward-directed) and hypothesis-driven (backward-directed) reasoning. From the presence of vitiligo, a prior history of progressive thyroid disease, and examination of the thyroid (clinical findings on the left side of the figure), the physician reasons forward to conclude the diagnosis of myxedema (right side of the figure). However, the anomalous finding of respiratory failure, which is inconsistent with the main diagnosis, is accounted for as a result of a hypometabolic state of the patient, in a backward-directed fashion. COND: refers to a conditional relation; CAU: indicates a causal relation; and RSLT: identifies a resultive relation
In the more recent literature, described in a chapter by Patel and colleagues (2013a), two forms of human reasoning that are more widely accepted are deductive and inductive reasoning. Deductive reasoning is the process of reaching specific conclusions (e.g., a diagnosis) from a hypothesis or a set of hypotheses, whereas inductive reasoning is the process of generating possible conclusions based on available data, such as data from a patient. However, when reasoning in real-world clinical situations, it is too simplistic to think of reasoning with only these two strategies. A third form of reasoning, abduction, which combines deductive and inductive reasoning, was proposed (Peirce 1955). A physician developing and testing explanatory hypotheses based on a set of heuristics may be considered to be reasoning abductively (Magnani 2001). Abductive reasoning is thus a process in which a set of hypotheses is identified and each of these hypotheses is then evaluated on the basis of its potential consequences (Elstein et al. 1978; Ramoni et al. 1992). This makes abductive reasoning a data-driven process that relies heavily on the domain expertise of the person. During the testing phase, hypotheses are evaluated by their ability to account for the current problem. Deduction helps in building up the consequences of each hypothesis, and this kind of reasoning is customarily regarded as a common way of evaluating diagnostic hypotheses (Joseph and Patel 1990; Kassirer 1989; Patel et al. 1994; Patel, Evans, and Kaufman 1989). All these types of inference play different roles in the hypothesis generation and testing phases (Patel and Ramoni 1997; Peirce 1955).
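The contrast between data-driven and hypothesis-driven reasoning corresponds closely to forward and backward chaining in rule-based systems. The following minimal sketch illustrates both directions over a toy rule base loosely modeled on the myxedema example of Fig. 4.9; the rules and finding names are illustrative simplifications of our own, not the chapter's formal model.

```python
# Toy rule base: (antecedents, consequent), read "IF all antecedents THEN consequent".
# The specific rules and finding names here are illustrative, not clinical guidance.
RULES = [
    ({"vitiligo", "progressive thyroid disease"}, "autoimmune thyroiditis"),
    ({"autoimmune thyroiditis", "enlarged thyroid"}, "diminished thyroid function"),
    ({"diminished thyroid function"}, "myxedema"),
    ({"myxedema"}, "hypometabolic state"),
    ({"hypometabolic state"}, "hypoventilation"),
]

def forward_chain(findings):
    """Data-driven reasoning: fire rules from the findings until a fixpoint."""
    known = set(findings)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in RULES:
            if antecedents <= known and consequent not in known:
                known.add(consequent)
                changed = True
    return known

def backward_chain(goal, findings):
    """Hypothesis-driven reasoning: try to prove the goal back to the findings."""
    if goal in findings:
        return True
    return any(
        consequent == goal and all(backward_chain(a, findings) for a in antecedents)
        for antecedents, consequent in RULES
    )

findings = {"vitiligo", "progressive thyroid disease", "enlarged thyroid"}
print("myxedema" in forward_chain(findings))        # prints True
print(backward_chain("hypoventilation", findings))  # prints True
```

Note how the forward chainer needs no goal bookkeeping at all (it simply propagates data until nothing new can be inferred), whereas the backward chainer must recurse through subgoals, mirroring the working-memory demands of hypothesis-driven reasoning described above.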
Our inherent ability to adapt to different kinds of knowledge domains, situations, and problems requires the use of a variety of reasoning modes, and this process describes the notion of abductive medical reasoning (Patel and Ramoni 1997). In contrast, novices and intermediate subjects (e.g., medical trainees) are more likely to employ deliberative, effortful, and cognitively taxing forms of reasoning that can resemble hypothetico-deductive methods. As problems increase in complexity and uncertainty, expert clinicians resort to hybrid forms of reasoning that may include substantial backward-directed reasoning. The study of medical cognition has been summarized in a series of articles (Patel et al. 1994, 2018) and edited volumes (e.g., Evans and Patel 1989). In more recent times, medical cognition is discussed in the context of informatics and in the new field of investigation, cognitive informatics (Patel and Kannampallil 2015; Patel et al. 2014, 2015b, 2017). Furthermore, foundations of cognition also play a significant role in investigations of HCI, including human factors and patient safety. Details of HCI in biomedicine are covered in Chap. 5.
4.5 Human Factors Research and Patient Safety
»» "Human error in medicine, and the adverse events which may follow, are problems of psychology and engineering, not of medicine" (Senders 1993, cited in Woods et al. 2008).
Human factors research is a discipline devoted to the study of technology systems and how people work with them or are impacted by them (Henriksen 2010). Human factors research discovers and applies information about human behavior, abilities, limitations, and other characteristics to the design of tools, machines, systems, tasks, jobs, and environments for productive, safe, comfortable, and effective human use (Chapanis 1996). In the context of healthcare, human factors research is concerned with the full complement of technologies and systems used by a diverse range of individuals, including clinicians, hospital administrators, health consumers, and patients (Flin and Patey 2009). Human factors work approaches the study of health practices from several perspectives or levels of analysis. A full exposition of human factors in medicine is beyond the scope of this chapter. For a detailed treatment of these
issues, the reader is referred to the Handbook of Human Factors and Ergonomics in Health Care and Patient Safety (Carayon et al. 2011). The focus in this chapter is on cognitive work in human factors and healthcare, particularly in relation to patient safety. We recognize that patient safety is a systemic challenge at multiple levels of aggregation beyond the individual. It is clear that understanding, predicting, and transforming human performance in any complex setting requires a detailed understanding of both the setting and the factors that influence performance (Woods et al. 2008). Our objective in this section is to introduce a theoretical foundation, establish important concepts, and discuss illustrative research in patient safety. The field of human factors is guided by principles of engineering and applied cognitive psychology (Chapanis 1996). Human factors analysis applies knowledge about the strengths and limitations of humans to the design of interactive systems and their environments. The objective is to ensure their effectiveness, safety, and ease of use. Mental models and issues of decision making are central to human factors analysis. Any system will be easier and less burdensome to use to the extent that it is co-extensive with users' mental models. The different dimensions of cognitive capacity, including memory, attention, and workload, are central to human factors analyses. Our perceptual system inundates us with more stimuli than our cognitive systems can process. Attentional mechanisms enable us to prioritize and attend selectively to certain stimuli and to attenuate others. Attention also has the property of being sharable, which enables us to multitask by dividing our attention between two activities. For example, if we are driving on a highway, we can easily have a conversation with a passenger at the same time.
However, as the skies get dark, the weather changes, or we suddenly find ourselves driving through winding mountainous roads, we will have to allocate more of our attentional resources to driving and less to the conversation. Human factors research leverages theories and methods from cognitive engineering to characterize human performance in complex settings and challenging situations in aviation, industrial process control, military command and control, and space operations (Woods et al. 2008). The research has elucidated empirical regularities and provides explanatory concepts and models of human performance. This allows us to derive common underlying patterns in somewhat disparate settings.

4.5.1 Patient Safety
Patient safety refers to the prevention of healthcare errors and the elimination or mitigation of patient injury caused by healthcare errors (Patel and Zhang 2007). It has been an issue of considerable concern for the past quarter-century, but the greater community was galvanized by the National Academy of Medicine report "To Err is Human" (Kohn et al. 2000) and by a follow-up report, "Improving Diagnosis in Health Care" (Balogh et al. 2015). The 2000 report communicated the surprising fact that up to 98,000 preventable deaths every year in the United States are attributable to human error, making it the eighth leading cause of death in this country. Although one may argue over the specific numbers, there is no disputing that too many patients are harmed or die every year as a result of human actions or the absence of action. We can only analyze errors after they have happened, and they often seem to be glaring blunders after the fact. This leads to the assignment of blame or searches for a single cause of the error. However, in hindsight, it is exceedingly difficult to recreate the situational context, stress, shifting attention demands, and competing goals that characterized a situation prior to the occurrence of an error. This sort of retrospective analysis is subject to hindsight bias. Hindsight bias masks the dilemmas, uncertainties, demands, and other latent conditions that were operative before the mishap. Too often the term 'human error' connotes blame and a search for the guilty culprits, suggesting some sort of human deficiency or irresponsible behavior. Human factors researchers recognized that this approach to error is inherently incomplete and potentially misleading. They argue for the need for a more comprehensive systems-centered approach that recognizes that error can be attributed to a multitude of factors as well as the interaction of these factors. Error is the failure of a planned sequence of mental or physical activities to achieve its intended outcome when these failures cannot be attributed to chance (Patel and Zhang 2007; Reason 1990). Reason (1990) introduced an important distinction between latent and active failures. Active failure represents the face of error; its effects are immediately felt. In healthcare, active errors are committed by providers such as nurses, physicians, or pharmacists who are actively responding to patient needs at the "sharp end". Latent conditions are less visible but equally important: they are enduring systemic problems that may not be evident for some time but combine with other system problems to weaken the system's defenses and make errors possible. There is a lengthy list of potential latent conditions, including poor interface design of important technologies, communication breakdown between key actors, gaps in supervision, inadequate training, and the absence of a safety culture in the workplace—a culture that emphasizes safe practices and the reporting of any conditions that are potentially dangerous. Zhang, Patel, Johnson, and Shortliffe (2004) developed a taxonomy of errors partially based on the distinctions proposed by Reason (1990). They further classified errors in terms of slips and mistakes (Reason 1990). A slip occurs when the actor selects the appropriate course of action but executes it inappropriately. A mistake involves an inappropriate course of action reflecting an erroneous judgment or inference (e.g., a wrong diagnosis or misreading of an x-ray).
Mistakes may either be knowledge-based, owing to factors such as incorrect knowledge, or they may be rule-based, in which case the correct knowledge was available but there was a problem in applying the rules or guidelines. They further characterize medical errors as a progression of events. There is a period when everything is operating smoothly. Then an unsafe practice unfolds, resulting in
a kind of error, but not necessarily leading to an adverse event. For example, if there is a system of checks and balances that is part of routine practice, or if there is a systematic supervisory process in place, the vast majority of errors will be trapped and defused in this middle zone. If these measures or practices are not in place, an error can propagate and cross the boundary to become an adverse event. At this point, the patient has been harmed. In addition, if an individual is subject to a heavy workload or intense time pressure, that will increase the potential for an error resulting in an adverse event. The notion that human error should not be tolerated is prevalent in both the public and personal perception of the performance of most clinicians. However, researchers in other safety-critical domains have long since abandoned the quest for zero defects, citing it as an impractical goal, and have chosen to focus instead on the development of strategies to enhance the ability to recover from error (Morel et al. 2008). Patel and her colleagues conducted empirical investigations into error detection and recovery by experts (attending physicians) and non-experts (resident trainees) in the critical care domain, using both laboratory-based and naturalistic approaches (Patel et al. 2011). These studies show that expertise is more closely tied to the ability to detect and recover from errors than to the ability not to make errors. The results show that both experts and non-experts are prone to commit and recover from errors, but experts' ability to detect and recover from knowledge-based errors is better than that of trainees. Error detection and correction in complex, real-time critical care situations appears to induce a certain urgency for quick action in a high-alert condition, resulting in rapid detection and correction.
Studies on expertise and understanding of the limits and failures of human decision making are important if we are to build robust decision-support systems to manage the boundaries of risk of error in decision making (Patel et al. 2015a; Patel and Cohen 2008). Research on situational complexity and medical errors is documented in a recent book by Patel, Kaufman, and Cohen (2014).
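The error taxonomy described in this section (Reason 1990; Zhang et al. 2004) lends itself to a compact data model. The sketch below is a hypothetical encoding for illustration only; the class and field names are ours, not a published schema.

```python
# Hypothetical encoding of the error distinctions discussed above
# (active vs. latent failures; slips vs. mistakes; knowledge- vs. rule-based
# mistakes). Names are illustrative, not a schema from Zhang et al. (2004).
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class FailureMode(Enum):
    ACTIVE = "active"   # committed at the "sharp end"; effects felt immediately
    LATENT = "latent"   # enduring systemic condition that weakens defenses

class ErrorType(Enum):
    SLIP = "slip"       # appropriate course of action, executed inappropriately
    MISTAKE = "mistake" # inappropriate course of action (erroneous judgment)

class MistakeBasis(Enum):
    KNOWLEDGE_BASED = "knowledge-based"  # the underlying knowledge was incorrect
    RULE_BASED = "rule-based"            # correct knowledge, misapplied rule

@dataclass
class ClassifiedError:
    description: str
    failure_mode: FailureMode
    error_type: ErrorType
    mistake_basis: Optional[MistakeBasis] = None  # only meaningful for mistakes

    def __post_init__(self):
        # A slip is an execution failure, so it carries no mistake basis.
        if self.error_type is ErrorType.SLIP and self.mistake_basis is not None:
            raise ValueError("slips do not have a mistake basis")

err = ClassifiedError(
    description="wrong dose accepted from an auto-populated default value",
    failure_mode=FailureMode.ACTIVE,
    error_type=ErrorType.MISTAKE,
    mistake_basis=MistakeBasis.RULE_BASED,
)
print(err.error_type.value)  # prints: mistake
```

Making the categories explicit in this way is one route to the kind of structured error reporting the taxonomy was intended to support: the `__post_init__` check, for instance, enforces that the knowledge/rule distinction applies only to mistakes, exactly as the prose defines it.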
4.5.2 Unintended Consequences
It is widely believed that health information technologies have the potential to transform healthcare in a multitude of ways, including the reduction of errors. However, it is increasingly apparent that technology-induced errors are deeply consequential and can be deleterious for patient safety. There is evidence to suggest that a poorly designed user interface can present substantial challenges even for the well-trained and highly skilled user (Zhang et al. 2003). Lin et al. (1998) conducted a series of studies on a patient-controlled analgesia (PCA) device, a method of pain relief that uses disposable or electronic infusion devices and allows patients to self-administer analgesic drugs as required. Lin and colleagues investigated the effects of two interfaces to a commonly used PCA device, including the original interface. Based on a cognitive task analysis (CTA), they found the existing PCA interface to be problematic in several different ways. For example, the structure of many subtasks in the programming sequence was unnecessarily complex, and there was a lack of information available on the screen to provide meaningful feedback and to structure the user experience (e.g., negotiating the next steps): a nurse would not know that he or she was on the third of five screens or halfway through the task. Based on the CTA, Lin et al. (1998) redesigned the interface according to sound human factors principles and demonstrated significant improvements in efficiency, error rate, and reported workload. Zhang and colleagues employed a modified heuristic evaluation method (see Sect. 4.5 above) to test the safety of two infusion pumps (Zhang et al. 2003). Based on an analysis by four evaluators, a total of 192 violations of the user interface design were documented.
Consistency and visibility (the ease with which a user can discern the system state) were the most widely documented violations. Several of the violations were classified as problems of substantial severity. The results suggested that one of the two pumps was likely to induce more medical errors than the other. It is clear that usability problems are consequential and have the potential to impact patient safety. Kushniruk et al. (2005) examined the relationship between particular kinds of usability problems and errors in a handheld prescription-writing application. They found that particular usability problems were associated with the occurrence of errors in entering medications. For example, the problem of inappropriate default values automatically populating the screen was found to be correlated with errors in entering the wrong dosages of medications. In addition, certain types of usability problems were associated with mistakes (errors not detected by users), while others were associated with slips (unintentional errors). Horsky et al. (2005) analyzed a problematic medication order placed using a CPOE system that resulted in an overdose of potassium chloride being administered to an actual patient. The authors used a range of investigative methods, including inspection of system logs, semistructured interviews, examination of the electronic health record, and cognitive evaluation of the order-entry system involved. They found that the error was due to a confluence of factors, including problems associated with the display, the labeling of functions, and ambiguous dating of medication administration. The poor interface design did not assist with the decision-making process; in fact, it served as a hindrance, because the interface was a poor fit for the conceptual operators used by clinicians when calculating medication dosage (i.e., based on volume, not duration). Koppel et al. (2005) published an influential study examining how computer-based provider order-entry (CPOE) systems facilitated medical errors.
The study, which was published in JAMA (Journal of the American Medical Association), used a series of methods including interviews with clinicians, observations, and a survey to document the range of errors. According to the authors, the system facilitated 22 types of medication errors, and many
of them occurred with some frequency. The errors were classified into two broad categories: (1) information errors generated by fragmentation of data and failure to integrate the hospital's information systems, and (2) human-machine interface flaws reflecting machine rules that do not correspond to work organization or usual behaviors. The growing body of research on unintended consequences spurred the American Medical Informatics Association to devote a policy meeting to considering ways to understand and diminish their impact (Bloomrosen et al. 2011). The matter is especially pressing given the increased implementation of health information technologies nationwide, including in ambulatory care practices that have little experience with such technologies. The authors outline a series of recommendations, including a need for more cognitively oriented research to guide the study of the causes and mitigation of unintended consequences resulting from health information technology implementations. These changes could facilitate improved management of those consequences, resulting in enhanced performance and patient safety, as well as greater user acceptance.

4.5.3 Distributed Cognition and Electronic Health Records
In this chapter, we have considered a classical model of information-processing cognition in which mental representations mediate all activity and constitute the central units of analysis. The analysis emphasizes how an individual formulates internal representations of the external world. To illustrate the point, imagine an expert user of a word processor who can effortlessly negotiate tasks through a combination of key commands and menu selections. The traditional cognitive analysis might account for this skill by suggesting that the user has formed an image or schema of the layout structure of each of the eight menus, and retrieves this information from memory each time an action is to be performed. For example, if the goal is to "insert a clip art icon,"
the user would recall that this is subsumed under pictures, the ninth item on the "Insert" menu, and then execute the action, thereby achieving the goal. However, there are some problems with this model. Mayes, Draper, McGregor, and Oatley (1988) demonstrated that even highly skilled users could not recall the names of menu headers, yet they could routinely make fast and accurate menu selections. The results indicate that many or even most users rely on cues in the display to trigger the right menu selections. This suggests that the display can have a central role in controlling interaction in graphical user interfaces. As discussed, the conventional information-processing approach has come under criticism for its narrow focus on the rational/cognitive processes of the solitary individual. In the previous section, we considered the relevance of external representations to cognitive activity. The emerging perspective of distributed cognition offers a more far-reaching alternative. The distributed view of cognition represents a shift in the study of cognition from being the sole property of the individual to being "stretched" across groups, material artifacts, and cultures (Hutchins 1995; Suchman 1987). This viewpoint is increasingly gaining acceptance in cognitive science and human-computer interaction research. In the distributed approach to HCI research, cognition is viewed as a process of coordinating distributed internal representations (i.e., knowledge) and external representations (e.g., visual displays, manuals). Distributed cognition has two central points of inquiry: one emphasizes the inherently social and collaborative nature of cognition (e.g., doctors, nurses, and technical support staff in a neonatal care unit jointly contributing to a decision process), and the other characterizes the mediating effects of technology or other artifacts on cognition.
The mediating role of technology can be evaluated at several levels of analysis, from the individual to the organization. Technologies, whether computer-based or artifacts in another medium, transform the ways individuals and groups think. They do not merely augment, enhance, or expedite performance, although a given technology may do all of these things. The difference is not merely one of quantitative change, but one that is qualitative in nature. In a distributed world, what becomes of the individual? We believe it is important to understand how technologies promote enduring changes in individuals. Salomon, Perkins, and Globerson (1991) introduced an important distinction in considering the mediating role of technology on individual performance: the effects with technology and the effects of technology. The former is concerned with the changes in performance displayed by users while equipped with the technology. For example, when using an effective medical information system, physicians should be able to gather information more systematically and efficiently. In this capacity, medical information technologies may alleviate some of the cognitive load associated with a given task and permit physicians to focus on higher-order
thinking skills, such as diagnostic hypothesis generation and evaluation. The effects of technology refer to enduring changes in general cognitive capacities (knowledge and skills) as a consequence of interaction with a technology. This effect is illustrated subsequently in the context of the enduring effects of an EHR (see Chap. 10). We employed a pen-based EHR system, DCI (Dossier of Clinical Information), in several of our studies (see Kushniruk et al. 1996). Using the pen or computer keyboard, physicians can directly enter information into the EHR, such as the patient's chief complaint, past history, history of present illness, laboratory tests, and differential diagnoses. Physicians were encouraged to use the system while collecting data from patients (e.g., during the interview). The system allows the physician to record information about the patient's differential diagnosis, the ordering of tests, and the prescription of medication.
.. Fig. 4.10 Display of a structured electronic medical record with graphical capabilities
The graphical interface provides a highly structured set of resources for representing a clinical problem, as illustrated in Fig. 4.10. We have studied the use of this EHR in both laboratory-based research (Kushniruk et al. 1996) and actual clinical settings, using cognitive methods (Patel et al. 2000). The laboratory research included a simulated doctor-patient interview. We have observed two distinct patterns of EHR usage in the interactive condition: in the first, the subject pursues information from the patient predicated on a hypothesis; the second strategy involves the use of the EHR display to guide the asking of questions. In the screen-driven strategy, the clinician uses the structured list of findings, in the order in which they appear on the display, to elicit information from the patient. All experienced users of this system appear to have both strategies in their repertoire. In general, a screen-driven strategy can enhance performance by reducing the cognitive load imposed by information-gathering goals, allowing the physician to allocate more cognitive resources toward testing hypotheses and rendering decisions. On the other hand, this strategy can encourage a certain sense of complacency. We observed both effective and counterproductive uses of this screen-driven strategy. A more experienced user consciously used the strategy to structure the information-gathering process, whereas a novice user used it less discriminately. In employing this screen-driven strategy, the novice elicited almost all of the relevant findings in a simulated patient encounter. However, she also elicited numerous irrelevant findings and pursued incorrect hypotheses. In this particular case, the subject became too reliant on the technology and had difficulty imposing her own set of working hypotheses to guide the information-gathering and diagnostic-reasoning processes.
The use of a screen-driven strategy is evidence of how technology transforms clinical cognition, as manifested in clinicians’ patterns of reasoning. Patel et al. (2000) extended this line of research to study the cognitive consequences of using the same EHR system in a
diabetes clinic. The study considered the following questions: (1) How do physicians manage information flow when using an EHR system? (2) What are the differences in the way physicians organize and represent this information using paper-based and EHR systems? (3) Are there long-term, enduring effects of the use of EHR systems on knowledge representations and clinical reasoning? One study focused on an in-depth characterization of changes in knowledge organization in a single subject as a function of using the system. The study first compared the contents and structure of patient records produced by the physician using the EHR system with paper-based patient records, using ten pairs of records matched for variables such as patient age and problem type. After having used the system for six months, the physician was asked to conduct his/her next five patient interviews using only hand-written paper records. The results indicated that the EHRs contained more information relevant to the diagnostic hypotheses. In addition, the structure and content of information were found to correspond to the structured representation of the particular medium. For example, EHRs were found to contain more information about the patient’s past medical history, reflecting the query structure of the interface. The paper-based records appear to better preserve the integrity of the time course of the evolution of the patient problem, which is notably absent from the EHR. Perhaps the most striking finding is that, after having used the system for six months, the structure and content of the physician’s paper-based records bore a closer resemblance to the organization of information in the EHR than did the paper-based records produced by the physician prior to exposure to the system. This finding is consistent with the enduring effects of technology even in the absence of the particular system (Salomon et al. 1991).
The authors conclude that, given these potentially enduring effects, the use of a particular EHR will almost certainly have a direct effect on medical decision making. The research discussed above demonstrates how information technologies can
V. L. Patel and D. R. Kaufman
mediate cognition and even produce enduring changes in how one performs a task. What dimensions of an interface contribute to such changes? What aspects of a display are more likely to facilitate efficient task performance, and what aspects are more likely to impede it? Norman (1986) argued that well-designed artifacts reduce the need for users to remember large amounts of information, whereas poorly designed artifacts increase the knowledge demands on the user and the burden on working memory. In the distributed approach to HCI research, cognition is viewed as a process of coordinating distributed internal and external representations, which in effect constitute an indivisible information-processing system. One of the appealing features of the distributed cognition paradigm is that it can be used to understand how properties of objects on the screen (e.g., links, buttons) can serve as external representations and reduce cognitive load. The distributed resource model proposed by Wright, Fields, and Harrison (2000) addresses the question of “what information is required to carry out some task and where should it be located: as an interface object or as something that is mentally represented to the user.” The relative difference in the distribution of representations (internal and external) is central to determining the efficacy of a system designed to support a complex task. Wright, Fields, and Harrison (2000) were among the first to develop an explicit model for coding the kinds of resources available in the environment and how they are embodied on an interface. Horsky, Kaufman, and Patel (2003a, b) applied the distributed resource model and analysis to a provider order entry system. The goal was to analyze specific order-entry tasks, such as those involved in admitting a patient to a hospital, and then to identify areas of complexity that may impede optimal entry of orders.
The research consisted of a two-component analysis: a cognitive walkthrough (CW) evaluation, modified based on the distributed resource model, and a simulated clinical ordering task performed by
seven physicians. The CW analysis revealed that the configuration of resources (e.g., very long menus, complexly configured displays) placed unnecessarily heavy cognitive demands on users, especially those who were new to the system. The resources model was also used to account for patterns of errors produced by clinicians. The authors concluded that the redistribution and reconfiguration of resources might yield guiding principles and design solutions in the development of complex interactive systems. The distributed cognition framework has proved particularly useful in understanding the performance of teams or groups of individuals in a particular work setting (Hutchins 1995). Hazlehurst and colleagues (Hazlehurst et al. 2003, 2007) have drawn on this framework to illuminate how work in healthcare settings is constituted using shared resources and representations. The activity system is the primary explanatory construct; it comprises actors and tools, together with shared understandings among actors that structure interactions in a work setting. The “propagation of representational states through activity systems” is used to explain cognitive behavior and to investigate the organization of the system and human performance. Following Hazlehurst et al. (2007, p. 540), “a representational state is a particular configuration of an information-bearing structure, such as a monitor display, a verbal utterance, or a printed label, that plays some functional role in a process within the system.” The authors have used this concept to explain the process of medication ordering in an intensive care unit and the coordinated communications of a surgical team in a heart room. The framework of distributed cognition is still an emerging one in human-computer interaction. It offers a novel and potentially powerful approach for illuminating the kinds of difficulties users encounter and for finding ways to better structure the interaction by redistributing the resources.
Distributed cognition analyses may also provide a window into why technologies sometimes fail to reduce errors or even contribute to them.
4.6 Conclusion
Theories and methods from cognitive science can shed light on a range of issues concerning the design and implementation of health information technologies. They can also serve an instrumental role in understanding and enhancing the performance of clinicians and patients as they engage in a range of cognitive tasks related to health. We believe that fundamental studies in psychology and cognitive science can provide general guiding principles for studying these issues, and can be combined with field studies that illuminate different facets of, and contextualize, the phenomena observed in laboratory studies. The potential scope of applied cognitive research in biomedical informatics is very broad. Significant inroads have been made in areas such as EHRs and patient safety. However, there are promising areas of future cognitive research that remain largely uncharted. These include understanding how to capitalize on health information technology without compromising patient safety (particularly in providing adequate decision support), and understanding how various visual representations and graphical forms mediate reasoning in biomedical informatics and how these representations can be used by patients and health consumers with varying degrees of literacy. These are only a few of the cognitive challenges related to harnessing the potential of cutting-edge technologies to improve patient safety.
Suggested Readings
Anderson, J. R. (2015). Cognitive psychology and its implications. New York: Worth Publishers.
Carayon, P., Alyousef, B., & Xie, A. (2012). Human factors and ergonomics in health care. In Handbook of human factors and ergonomics (pp. 1574–1595).
Patel, V. L., Kaufman, D. R., & Kannampallil, T. G. (2013c). Diagnostic reasoning and decision making in the context of health information technology. In D. Morrow (Ed.), Reviews of human factors and ergonomics (Vol. 8). Thousand Oaks, CA: SAGE Publications.
Patel, V. L., Kaufman, D. R., & Arocha, J. F. (2002). Emerging paradigms of cognition in medical decision-making. Journal of Biomedical Informatics, 35, 52–75. This article summarizes new directions in decision-making research. The authors articulate a need for alternative paradigms for the study of medical decision making.
Patel, V. L., Yoskowitz, N. A., Arocha, J. F., & Shortliffe, E. H. (2009). Cognitive and learning sciences in biomedical and health instructional design: A review with lessons for biomedical informatics education. Journal of Biomedical Informatics, 42(1), 176–197. A review of learning and cognition with a particular focus on biomedical informatics.
Patel, V. L., Kannampallil, T. G., & Shortliffe, E. H. (2015c). Role of cognition in generating and mitigating clinical errors. BMJ Quality & Safety, 24, 468–474. https://doi.org/10.1136/bmjqs-2014-003482.
Middleton, B., Bloomrosen, M., Dente, M. A., Hashmat, B., Koppel, R., Overhage, J. M., et al. (2013). Enhancing patient safety and quality of care by improving the usability of electronic health record systems: Recommendations from AMIA. Journal of the American Medical Informatics Association, 20(e1), e2–e8.
Questions for Discussion
1. How can cognitive science theory meaningfully inform and shape the design, development, and assessment of health-care information systems?
2. Describe two or three kinds of mental representations and briefly characterize their significance in understanding human performance.
3. What are the purpose and value of cognitive architectures?
4. Identify three ways in which novices differ from experts in medicine.
5. What are the limitations of interpreting retrospective data on medical errors?
6. Explain the difference between latent and active failures and their implications for patient safety.
7. How does the field of cognitive informatics capture the interaction of cognition and informatics in biomedicine and healthcare?
8. Explain the roles that inductive, deductive, and abductive reasoning play in medical diagnostic reasoning.
9. Explain some ways in which technology-mediated errors can compromise patient safety.
10. What are some of the assumptions of the distributed cognition framework? What implications does this approach have for the evaluation of electronic health records?
11. Explain the difference between the effects of technology and the effects with technology. How can each of these effects contribute to improving patient safety and reducing medical error?
12. The use of electronic health records (EHRs) has been shown to affect clinical reasoning differently than paper charts do. Briefly characterize the effects they have on reasoning, including those that persist after the clinician ceases to use the system.
References
Akin, O. (1982). The psychology of architectural design. London: Pion. Allard, F., & Starkes, J. L. (1991). Motor-skill experts in sports, dance, and other domains. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 126–150). New York: Cambridge University Press. Anderson, J. R. (1985). Cognitive psychology and its implications (2nd ed.). New York: Freeman. Anderson, J. R. (2013). The architecture of cognition. New York: Psychology Press. Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and motivation (Vol. 2, pp. 89–195). New York: Academic. Balogh, E. P., Miller, B. T., & Ball, J. R. (2015). Improving diagnosis in health care. Washington, DC: National Academy Press. Bartolomeo, P. (2008). The neural correlates of visual mental imagery: An ongoing debate. Cortex, 44(2), 107–108. Bechtel, W., Abrahamsen, A., & Graham, G. (1998). Part I: The life of cognitive science. In W. Bechtel & G. Graham (Eds.), A companion to cognitive science (Blackwell companions to philosophy, Vol. 13, pp. 2–104). Malden: Blackwell.
Bloomrosen, M., Starren, J., Lorenzi, N. M., Ash, J. S., Patel, V. L., & Shortliffe, E. H. (2011). Anticipating and addressing the unintended consequences of health IT and policy: A report from the AMIA 2009 health policy meeting. Journal of the American Medical Informatics Association: JAMIA, 18(1), 82–90. https://doi.org/10.1136/jamia.2010.007567. Bruer, J. T. (1993). Schools for thought: A science of learning in the classroom. Cambridge: MIT Press. Carayon, P., Karsh, B.-T., Gurses, A. P., Holden, R. J., Hoonakker, P., Hundt, A. S., & Wetterneck, T. (2011). Macroergonomics in patient care and health care safety. In D. G. Morrow (Ed.), Reviews of human factors and ergonomics (Vol. 8). Santa Monica, CA: Human Factors and Ergonomics Society. Carayon, P. (2012). Handbook of human factors and ergonomics in health care and patient safety (2nd ed.). Boca Raton: CRC Press. Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale: L. Erlbaum Associates. Carroll, J. M. (2003). HCI models, theories, and frameworks: Toward a multidisciplinary science. San Francisco: Morgan Kaufmann. Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332. Chapanis, A. (1996). Human factors in systems engineering. New York: Wiley. Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4(1), 55–81. Chi, M. T. H., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121–152. Chi, M. T. H., Glaser, R., & Farr, M. J. (1988). The nature of expertise. Hillsdale: L. Erlbaum Associates. Clancey, W. J., & Shortliffe, E. H. (1984). Readings in medical artificial intelligence: The first decade. Reading: Addison-Wesley. de Groot, A. D. (1965). Thought and choice in chess. The Hague: Mouton. Duch, W., Oentaryo, R. J., & Pasquier, M. (2008).
Cognitive architectures: Where do we go from here? In Artificial General Intelligence 2008: Proceedings of the First AGI Conference. Elstein, A. S., Shulman, L. S., & Sprafka, S. A. (1978). Medical problem solving: An analysis of clinical reasoning. Cambridge, MA: Harvard University Press. Ericsson, K. A. (1996). The road to excellence: The acquisition of expert performance in the arts and sciences, sports, and games. Mahwah: Lawrence Erlbaum Associates. Ericsson, K. A. (2006). The Cambridge handbook of expertise and expert performance. Cambridge/New York: Cambridge University Press. Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data (Rev. ed.). Cambridge, MA: MIT Press.
Ericsson, K. A., & Smith, J. (1991). Toward a general theory of expertise: Prospects and limits. New York: Cambridge University Press. Ericsson, K. A. (2009). Enhancing the development of professional performance: Implications from the study of deliberate practice. In K. A. Ericsson (Ed.), The development of professional expertise: Toward measurement of expert performance and design of optimal learning environments (pp. 405–431). New York: Cambridge University Press. Ericsson, K. A., Hoffman, R. R., Kozbelt, A., & Williams, A. M. (Eds.). (2018). The Cambridge handbook of expertise and expert performance. UK: Cambridge University Press. Ericsson, K. A. (Ed.). (1996). The road to excellence: The acquisition of expert performance in the arts and sciences, sports. Hillsdale, NJ: Lawrence Erlbaum. Estes, W. K. (1975). The state of the field: General problems and issues of theory and metatheory. In W. K. Estes (Ed.), Handbook of learning and cognitive processes (Vol. 1). Hillsdale/New York: L. Erlbaum Associates. Evans, D. A., & Gadd, C. S. (1989). Managing coherence and context in medical problem-solving discourse. In D. A. Evans & V. L. Patel (Eds.), Cognitive science in medicine: Biomedical modeling (pp. 211– 255). Cambridge, MA: MIT Press. Evans, D. A., & Patel, V. L. (1989). Cognitive science in medicine. Cambridge, MA: MIT Press. Feltovich, P. J., Johnson, P. E., Moller, J. H., & Swanson, D. B. (1984). The role and development of medical knowledge in diagnostic expertise. In W. J. Clancey & E. H. Shortliffe (Eds.), Readings in medical artificial intelligence: The first decade (pp. 275–319). Reading: Addison Wesley. Fisk, A. D., Rogers, W. A., Charness, N., Czaja, S. J., & Sharit, J. (2009). Designing for older adults: Principles and creative human factors approaches. Boca Raton: CRC Press. Flin, R., & Patey, R. (2009). Improving patient safety through training in non-technical skills. British Medical Journal, 339, B3595. https://doi. org/10.1136/Bmj.B3595. 
Frederiksen, C. H. (1975). Representing logical and semantic structure of knowledge acquired from discourse. Cognitive Psychology, 7(3), 371–458. Gardner, H. (1985). The mind’s new science: A history of the cognitive revolution. New York: Basic Books. Gillan, D. J., & Schvaneveldt, R. W. (1999). Applying cognitive psychology: Bridging the gulf between basic research and cognitive artifacts. In F. T. Durso, R. Nickerson, R. Schvaneveldt, S. Dumais, M. Chi, & S. Lindsay (Eds.), Handbook of applied cognition (pp. 3–31). Chichester/New York: Wiley. Glaser, R. (Ed.). (2000). Advances in instructional psychology: Education design and cognitive science (Vol. 5). Mahwah: Lawrence Erlbaum and Associates. Greeno, J. G., & Simon, H. A. (1988). Problem solving and reasoning. In R. C. Atkinson & R. J. Herrnstein
(Eds.), Stevens’ handbook of experimental psychology: Vol. 1, Perception and motivation; Vol. 2, Learning and cognition (Vol. 1, 2nd ed., pp. 589–672). New York: Wiley. Harrington, L. (2015). Electronic health record workflow: Why more work than flow? AACN Advanced Critical Care, 26(1), 5–9. Hazlehurst, B., McMullen, C., Gorman, P., & Sittig, D. (2003). How the ICU follows orders: Care delivery as a complex activity system. AMIA Annual Symposium Proceedings, 2003, 284–288. Hazlehurst, B., McMullen, C. K., & Gorman, P. N. (2007). Distributed cognition in the heart room: How situation awareness arises from coordinated communications during cardiac surgery. Journal of Biomedical Informatics, 40(5), 539–551. https://doi.org/10.1016/j.jbi.2007.02.001. Henriksen, K. (2010). Partial truths in the pursuit of patient safety. Quality & Safety in Health Care, 19(3), i3–i7. Hilgard, E. R., & Bower, G. H. (1975). Theories of learning (4th ed.). Englewood Cliffs: Prentice-Hall. Hoffman, R. R. (Ed.). (1992). The psychology of expertise: Cognitive research and empirical AI. Mahwah: Lawrence Erlbaum Associates. Hoffman, R. R., Shadbolt, N. R., Burton, A. M., & Klein, G. (1995). Eliciting knowledge from experts – A methodological analysis. Organizational Behavior and Human Decision Processes, 62(2), 129–158. Horsky, J., Kaufman, D. R., & Patel, V. L. (2003a). The cognitive complexity of a provider order entry interface. AMIA Annual Symposium Proceedings, 2003, 294–298. PMID: 14728181; PMCID: PMC1480200. Horsky, J., Kaufman, D. R., Oppenheim, M. I., & Patel, V. L. (2003b). A framework for analyzing the cognitive complexity of computer-assisted clinical ordering. Journal of Biomedical Informatics, 36(1–2), 4–22. https://doi.org/10.1016/S1532-0464(03)00062-5. Horsky, J., Kuperman, G. J., & Patel, V. L. (2005). Comprehensive analysis of a medication dosing error related to CPOE.
Journal of the American Medical Informatics Association: JAMIA, 12(4), 377–382. https://doi.org/10.1197/jamia.M1740. Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press. Joseph, G. M., & Patel, V. L. (1990). Domain knowledge and hypothesis generation in diagnostic reasoning. Medical Decision Making, 10, 31–46. Karsh, B., Weinger, M. B., Abbott, P. A., & Wears, R. L. (2010). Health information technology: Fallacies and sober realities. Journal of the American Medical Informatics Association, 17(6), 617–623. Kassirer, J. P. (1989). Diagnostic reasoning. Annals of Internal Medicine, 110, 893–900. Kaufman, D. R., Patel, V. L., & Magder, S. (1996). The explanatory role of spontaneously generated analogies in reasoning about physiological concepts. International Journal of Science Education, 18, 369–386. Kintsch, W. (1998). Comprehension: A paradigm for cognition. Cambridge/New York: Cambridge University Press. Kleinmuntz, B., & McLean, R. S. (1968). Diagnostic interviewing by digital computer. Behavioral Science, 13(1), 75–80. Kohn, L. T., Corrigan, J., & Donaldson, M. S. (2000). To err is human: Building a safer health system (Vol. 6). Washington, DC: National Academy Press. Koppel, R., Metlay, J. P., Cohen, A., Abaluck, B., Localio, A. R., Kimmel, S. E., & Strom, B. L. (2005). Role of computerized physician order entry systems in facilitating medication errors. Journal of the American Medical Association (JAMA), 293(10), 1197–1203. Kushniruk, A. W., Kaufman, D. R., Patel, V. L., Levesque, Y., & Lottin, P. (1996). Assessment of a computerized patient record system: A cognitive approach to evaluating medical technology. MD Computing, 13(5), 406–415. Kushniruk, A. W., Triola, M. M., Borycki, E. M., Stein, B., & Kannry, J. L. (2005). Technology-induced error and usability: The relationship between usability problems and prescription errors when using a handheld application. International Journal of Medical Informatics, 74(7–8), 519–526. https://doi.org/10.1016/j.ijmedinf.2005.01.003. Larkin, J., McDermott, J., Simon, D. P., & Simon, H. A. (1980). Expert and novice performance in solving physics problems. Science, 208(4450), 1335–1342. Ledley, R. S., & Lusted, L. B. (1959). Probability, logic and medical diagnosis. Science, 130(3380), 892–930. Lesgold, A., Rubinson, H., Feltovich, P., Glaser, R., Klopfer, D., & Wang, Y. (1988). Expertise in a complex skill: Diagnosing x-ray pictures. In M. T. H. Chi, R. Glaser, & M. J. Farr (Eds.), The nature of expertise (pp. 311–342). Hillsdale: Lawrence Erlbaum Associates. Lieto, A., Lebiere, C., & Oltramari, A. (2018).
The knowledge level in cognitive architectures: Current limitations and possible developments. Cognitive Systems Research, 48, 39–55. Lin, L., Isla, R., Doniz, K., Harkness, H., Vicente, K. J., & Doyle, D. J. (1998). Applying human factors to the design of medical equipment: Patient-controlled analgesia. Journal of Clinical Monitoring and Computing, 14(4), 253–263. Magnani, L. (2001). Abduction, reason, and science: Processes of discovery and explanation. Dordrecht: Kluwer Academic. Mayes, T. J., Draper, S. W., McGregor, A. M., & Koatley, K. (1988). Information flow in a user interface: The effect of experience and context on the recall of MacWrite screens. Paper presented at the conference on people and computers IV, Cambridge. Morel, G., Amalberti, R., & Chauvin, C. (2008). Articulating the differences between safety and
resilience: The decision-making process of professional sea-fishing skippers. Human Factors, 50(1), 1–16. Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press. Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs: Prentice-Hall. Norman, D. A. (1986). Cognitive engineering. In D. A. Norman & S. W. Draper (Eds.), User centered system design: New perspectives on human-computer interaction (pp. 31–61). Hillsdale: Lawrence Erlbaum Associates. Norman, D. A. (1993). Things that make us smart: defending human attributes in the age of the machine. Reading, Mass.: Addison-Wesley Pub. Co. Patel, V. L., & Cohen, T. (2008). New perspectives on error in critical care. Current Opinions in Critical Care, 14(4), 456–459. Patel, V. L., & Groen, G. J. (1986). Knowledge based solution strategies in medical reasoning. Cognitive Science, 10(1), 91–116. Patel, V. L., & Groen, G. J. (1991a). Developmental accounts of the transition from medical student to doctor: Some problems and suggestions. Medical Education, 25(6), 527–535. Patel, V. L., & Groen, G. J. (1991b). The general and specific nature of medical expertise: A critical look. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 93–125). New York: Cambridge University Press. Patel, V. L., & Kannampallil, T. G. (2015). Cognitive informatics in biomedicine and healthcare. Journal of Biomedical Informatics, 53, 3–14. Patel, V. L., & Kaufman, D. R. (1998). Medical informatics and the science of cognition. Journal of the American Medical Informatics Association JAMIA, 5(6), 493–502. Patel, V. L., & Ramoni, M. F. (1997). Cognitive models of directional inference in expert medical reasoning. In P. J. Feltovich, K. M. Ford, & R. R. Hoffman (Eds.), Expertise in context: Human and machine (pp. 67–99). Cambridge: The MIT Press. Patel, V. L., & Zhang, J. (2007). Cognition and patient safety in healthcare. In F. T. Durso, R. S. 
Nickerson, S. Dumais, S. Lewandowsky, & T. Perfect (Eds.), Handbook of applied cognition (2nd ed., pp. 307– 331). New York: Wiley. Patel, V. L., Groen, G. J., & Arocha, J. F. (1990). Medical expertise as a function of task-difficulty. Memory and Cognition, 18(4), 394–406. Patel, V. L., Arocha, J. F., & Kaufman, D. R. (1994). Diagnostic reasoning and medical expertise. In D. L. Medin (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 31, pp. 187–252). San Diego: Academic Press. Patel, V. L., Kushniruk, A. W., Yang, S., & Yale, J. F. (2000). Impact of a computer-based patient record system on data collection, knowledge organization, and reasoning. Journal of the American Medical Informatics Association JAMIA, 7(6), 569–585.
Patel, V. L., Cohen, T., Murarka, T., Olsen, J., Kagita, S., Myneni, S., et al. (2011). Recovery at the edge of error: Debunking the myth of the infallible expert. Journal of Biomedical Informatics, 44(3), 413–424. Patel, V. L., Kaufman, D. R., & Kannampallil, T. G. (2013a). Diagnostic reasoning and decision making in the context of health information technology. In D. Marrow (Ed.), Reviews of human factors and ergonomics (Vol. 8). Thousand Oaks: SAGE Publications. Patel, V. L., Kaufman, D. R., & Kannampallil, T. G. (2013b). Diagnostic reasoning and decision making in the context of health information technology. Reviews of Human Factors and Ergonomics, 8(1), 149–190. Patel, V. L., Kaufman, D. R., & Cohen, T. (2014). Cognitive informatics in health and biomedicine: Case studies on critical care, complexity and errors (pp. 1–13). London: Springer. Patel, V. L., Kannampallil, T. G., & Shortliffe, E. H. (2015a). Role of cognition in generating and mitigating clinical errors. BMJ Quality and Safety, 24, 468–474. Patel, V. L., Kannampallil, T. G., & Kaufman, D. R. (Eds.). (2015b). Cognitive informatics in health and biomedicine: Human computer interaction. London: Springer. Patel, V. L., Arocha, J. F., & Ancker, J. (2017). Cognitive informatics in health and biomedicine: Understanding and modeling health behaviors. London: Springer. Patel, V. L., Kaufman, D. R., & Kannampallil, T. G. (2018). Diagnostic reasoning and expertise in healthcare. In P. Ward, J. M. Schraagen, J. Gore, & E. Roth (Eds.), The Oxford handbook of expertise: Research & application. UK: Oxford University Press. Patel, V. L., Evans, D. A., & Kaufman, D. R. (1989). Cognitive framework for doctor-patient communication. In D. A. Evans & V. L. Patel (Eds.), Cognitive science in medicine: Biomedical modeling (pp. 257– 312). Cambridge, MA: MIT Press.. Payne, S. J. (2003). Users’ mental models: The very ideas. In J. M. 
Carroll (Ed.), HCI models, theories, and frameworks: Toward a multidisciplinary science (1st ed., pp. 135–156). San Francisco: Morgan Kaufmann. Peirce, C. S. (1955). Philosophical writings of Peirce. Ed. by Justus Buchler. New York: Dover. Peleg, M., Gutnik, L. A., Snow, V., & Patel, V. L. (2006). Interpreting procedures from descriptive guidelines. Journal of Biomedical Informatics, 39, 184–195. Perkins, D. N., Schwartz, S., & Simmons, R. (1990). A view from programming. In M. Smith (Ed.), Toward a unified theory of problem solving: Views from content domains (pp. 45–67). Hillsdale: Lawrence Erlbaum Associates. Ramoni, M., Stefanelli, M., Magnani, L., & Barosi, G. (1992). An epistemological framework for medical knowledge-based systems. IEEE Transactions on Systems, Man, and Cybernetics, 22(6), 1361–1375.
Reason, J. T. (1990). Human error. Cambridge/New York: Cambridge University Press. Rimoldi, H. J. (1961). The test of diagnostic skills. Journal of Medical Education, 36, 73–79. Ritter, F. E., Tehranchi, F., & Oury, J. D. (2019). ACT-R: A cognitive architecture for modeling cognition. Wiley Interdisciplinary Reviews: Cognitive Science, 10(3), e1488. Rogers, Y. (2004). New theoretical approaches for HCI. Annual Review of Information Science and Technology, 38, 87–143. Salomon, G., Perkins, D. N., & Globerson, T. (1991). Partners in cognition: Extending human intelligence with intelligent technologies. Educational Researcher, 20(3), 2–9. https://doi.org/10.3102/0013189x020003002. Senders, J. W. (1993). Theory and analysis of typical errors in a medical setting. Hospital Pharmacy, 28(6), 505–508. Sharp, H., Preece, J., & Rogers, Y. (2019). Interaction design: Beyond human-computer interaction. Hoboken, NJ: Wiley. Shortliffe, E. H., & Blois, M. S. (2001). The computer meets medicine and biology: Emergence of a discipline. In E. H. Shortliffe & L. E. Perreault (Eds.), Medical informatics: Computer applications in health care and biomedicine (2nd ed., pp. 3–40). New York: Springer. Simon, D. P., & Simon, H. A. (1978). Individual differences in solving physics problems. In R. S. Siegler (Ed.), Children’s thinking: What develops? (pp. 325–348). Hillsdale, NJ: Lawrence Erlbaum Associates. Sloboda, J. (1991). Musical expertise. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 153–171). New York: Cambridge University Press. Sternberg, R. J., & Horvath, J. A. (Eds.). (1999). Tacit knowledge in professional practice: Researcher and practitioner perspectives. Mahwah: Lawrence Erlbaum Associates. Suchman, L. A. (1987). Understanding computers and cognition: A new foundation for design [Review of the book by T. Winograd & F. Flores]. Artificial Intelligence, 31(2), 227–232. Sussman, S. Y. (2001).
Handbook of program development for health behavior research & practice. Thousand Oaks: Sage. van Dijk, T. A., & Kintsch, W. (1983). Strategies of discourse comprehension. New York: Academic. Vicente, K. J. (1999). Cognitive work analysis: Toward safe, productive & healthy computer-based work. Mahwah: Lawrence Erlbaum Associates Publishers. Weinger, M. B., & Slagle, J. (2001). Human factors research in anesthesia patient safety. Proceedings of the AMIA Symposium, 756–760. PMID: 11825287; PMCID: PMC2243459. White, B. Y., & Frederiksen, J. R. (1990). Causal model progressions as a foundation for intelligent learning environments. In W. J. Clancey & E. Soloway (Eds.), Artificial intelligence and learning environments: Special issues of “Artificial Intelligence: An International Journal” (pp. 99–157). Woods, D. D., Patterson, E. S., & Cook, R. I. (2008). Behind human error: Taming complexity to improve patient safety. In P. Carayon (Ed.), Handbook of human factors and ergonomics in health care and patient safety (pp. 459–476). Mahwah: Lawrence Erlbaum Associates. Wright, P. C., Fields, R. E., & Harrison, M. D. (2000). Analyzing human-computer interaction as distributed cognition: The resources model. Human-Computer Interaction, 15(1), 1–41.
Zhang, J., Johnson, T. R., Patel, V. L., Paige, D. L., & Kubose, T. (2003). Using usability heuristics to evaluate patient safety of medical devices. Journal of Biomedical Informatics, 36(1–2), 23–30.
Zhang, J., Patel, V. L., Johnson, T. R., & Shortliffe, E. H. (2004). A cognitive taxonomy of medical errors. Journal of Biomedical Informatics, 37(3), 193–204.
Zuriff, G. E. (1985). Behaviorism: A conceptual reconstruction. New York: Columbia University Press.
Human-Computer Interaction, Usability, and Workflow

Vimla L. Patel, David R. Kaufman, and Thomas Kannampallil

Contents
5.1 Introduction to Human-Computer Interaction – 154
5.2 Role of HCI in Biomedical Informatics – 155
5.3 Theoretical Foundations – 155
5.4 Usability of Health Information Technology – 156
5.4.1 Analytical Approaches – 157
5.5 Usability Testing and User-Based Evaluation – 164
5.5.1 Interviews and Focus Groups – 164
5.5.2 Verbal Think Aloud – 165
5.5.3 Usability Surveys and Questionnaires – 166
5.5.4 Field/Observational Approaches – 166
5.6 Clinical Workflow – 166
5.7 Future Directions – 169
5.8 Conclusion – 170
References – 171
© Springer Nature Switzerland AG 2021 E. H. Shortliffe, J. J. Cimino (eds.), Biomedical Informatics, https://doi.org/10.1007/978-3-030-58721-5_5
Learning Objectives
After reading this chapter, you should know the answers to these questions:
- What are the major attributes of system usability?
- What are the methods that can be used to evaluate usability of a health information system?
- How does a poorly designed HIT implementation contribute to disruptions to clinical workflow?
5.1 Introduction to Human-Computer Interaction
Human-computer interaction (HCI) is a multifaceted discipline devoted to the study and practice of design and usability (Carroll 2003). The history of computing, and more generally that of artifact design, is rife with stories of dazzlingly powerful devices with remarkable capabilities that are thoroughly unusable by anyone except the team of designers and their immediate families. In the often-cited book The Psychology of Everyday Things, Donald Norman (1988) describes a litany of poorly designed artifacts, ranging from programmable VCRs to answering machines and water faucets, that are inherently non-intuitive and difficult to use. Similarly, there have been numerous innovative and promising clinical information technologies that have yielded decidedly suboptimal results and deep user dissatisfaction. At a minimum, difficult interfaces result in steep learning curves and structural inefficiencies in task performance. At worst, problematic interfaces can have serious consequences for patient safety (Koppel et al. 2005; Lin et al. 1998; Zhang et al. 2004).

Myers and Rosson (1992) reported that nearly 50% of software code was devoted to the user interface, and a survey of developers indicated that, on average, 6% of their project budgets were spent on usability evaluation. Given the complexities of modern graphical user interfaces (GUIs), it is likely that more than 50% of the code is now
devoted to the GUI. On the other hand, usability evaluations have greatly increased over the last 20 years (Jaspers 2009). There have been numerous books and articles devoted to promoting effective user interface design (Preece et al. 2015; Shneiderman et al. 2016), and the importance of enhancing the user experience has been widely acknowledged by both consumers and producers of information technology. Part of the impetus is that usability has been demonstrated to be highly cost effective. Karat (1994) reported that for every dollar a company invests in the usability of a product, it receives between $10 and $100 in benefits. Although much has changed in the world of computing since Karat's estimate (e.g., the flourishing of the World Wide Web and mobile apps), it is clear that investments in usability still yield substantial rates of return (Nielsen 2008). It remains far costlier to fix a problem after product release than in an early design phase. The concept of usability, as well as the methods and tools to measure and promote it, are now "touchstones in the culture of computing" (Carroll 2003).

HCI has spawned a professional orientation that focuses on practical matters concerning the integration and evaluation of applications of technology to support human activities. There are also active academic HCI communities that have contributed significant advances to the science of computing. HCI researchers have been devoted to the development of innovative design concepts such as virtual reality, ubiquitous computing, multimodal interfaces, collaborative workspaces, mobile technologies, and immersive and virtual environments. HCI research has been instrumental in transforming the software engineering process toward more user-centered, iterative system development (e.g., rapid prototyping). HCI research has also been focally concerned with the cognitive, social, and cultural dimensions of the computing experience.
In this regard, it is concerned with developing analytic frameworks for characterizing how technology can be used more productively across a range of tasks, settings, and user populations.
In this chapter, we describe the foundations of the role of HCI in biomedical informatics, with a specific focus on methods for usability evaluation and clinical workflow. We also discuss the implications of HCI and clinical workflow methods for future biomedical informatics research. This chapter is a companion to the chapter on cognitive informatics in this volume (Chap. 4).
5.2 Role of HCI in Biomedical Informatics
HCI research in healthcare emerged at a time when health information technology and electronic health records (EHRs) were becoming more central to the practice of medicine (Patel et al. 2015). Much HCI work has been devoted to creating or enhancing design in healthcare systems. However, the focus of most of our work has been on the cognitive mediation of technology in healthcare practice (Patel et al. 2015). Most of the early HCI research focused on the solitary user of technology. Although such research is still commonplace, the focus has extended to distributed health information systems (Hazlehurst et al. 2007; Horsky et al. 2003) and analysis of unintended sociotechnical consequences, with a particular focus on computerized provider order entry systems (Koppel et al. 2005). HCI studies in biomedicine extend across clinical and consumer health informatics, addressing a range of user populations including providers, biomedical scientists, and patients. While the implications of HCI principles for the design of HIT are acknowledged, the adoption of these tools and techniques among clinicians, informatics researchers and developers of HIT is limited. There is a consensus that HIT has not realized its potential as a tool that facilitates clinical decision-making, coordination of care, and improvement of patient safety (Middleton et al. 2013). The field of human-computer interaction intersects behavioral science and computer and information science; it thus involves the study of interaction between people and computers. Computing systems include both software and hardware. In addition, devices from
smartphones to glucose meters present usability challenges of their own. In this chapter, the focus is on the software and interface components. The major focus of HCI is thus the evaluation of interactive computer systems for human use. In the healthcare environment, it is important to understand HCI to ensure that users and computers interact successfully. The goals of HCI are therefore to deploy usable, useful, and safe systems.
5.3 Theoretical Foundations
In recent years, there has been significant growth in research and application regarding HCI and healthcare systems. This work has produced a collective body of experiential and practical knowledge about user experience, adoption and implementation to guide future design work. Some of this work is not specifically guided by a theoretical foundation, and these efforts have nonetheless proven useful in elucidating problems and contributing to user-centered design efforts. Human-computer interaction work is at least partly an empirical science in which local knowledge derived from a small body of studies can suffice to solve a problem. However, it is also necessary to extrapolate knowledge from one context to another. Concentrated efforts in HCI are time-consuming, tend to employ small numbers of subjects, and are conducted in a limited number of settings. For example, it is simply not possible to conduct an HCI research project in many different hospitals or to thoroughly test every facet of an electronic health record system. Knowledge based solely on practical experience or empirical studies is not adequate to account for the immense variety of health information technologies and the rich array of contexts that constitute the practice of medicine (Kaufman et al. 2015). There are many facets to technology use and a range of theories that address them. For example, the technology acceptance model, which focuses on users' perceived usefulness and usage intentions, has been widely used in healthcare research (Venkatesh 2000). Sociotechnical systems theory is very broad in scope. It views all organizations as having the
following elements that comprise its organizational design: technological (including the actual IT system, usability, and unintended consequences), social (doctors, staff, patients, etc.), and external environment (e.g., political, economic, cultural, and legal influences) (Hendrick and Kleiner 1999). These subsystems are intricately connected, such that changes to any one affect the others, sometimes in unanticipated or dysfunctional ways (Aarts et al. 2007; Ash et al. 2004). One of the most influential theories in clinical informatics was offered by Sittig and Singh (2010). They proposed an 8-dimensional model of interrelated concepts that can be used to explain performance in complex adaptive systems in the healthcare arena. The model has been applied in a range of settings to understand and improve HIT applications at various stages of development and implementation. Cognitive engineering (CE) is an interdisciplinary approach to the development of principles, methods, and tools to assess and guide the design of systems to support human performance (Hettinger et al. 2017). The approach is rooted in both cognitive science and engineering and has been used to support the design of displays, decision support and training in numerous high-risk domains (Kushniruk et al. 2004). A computational theory of mind provides the fundamental underpinning for most contemporary cognitive theories. The basic premise is that much of human cognition can be characterized as a series of operations, or computations, on mental representations. At a higher level of cognitive analysis, CE also focuses on the discrepancy between users' goals and the physical controls embodied in a system (Norman 1986). Interface design choices differentially mediate task performance, and various methods of analysis, including those described below, endeavor to measure this impact.
Distributed cognition (DCog) represents a shift in the study of cognition from an exclusive focus on the mind of the individual to being “stretched” across groups, material artifacts and cultures (Hutchins 1995). This paradigm has gained substantial currency in HCI research. In the distributed approach, cognition is viewed as a process of coordinating distributed internal (i.e., what’s in the mind) and
external representations (e.g., visual displays, post-it notes). DCog has two lines of analysis: one that emphasizes the social and collaborative nature of cognition (e.g., surgeons, nurses and respiratory therapists in a cardiothoracic surgical setting jointly contributing to a decision process), and one that characterizes the mediating effects of technology (e.g., EHRs, paper charts, mobile devices, apps) or other artifacts on cognition. DCog constitutes a family of interrelated theories rather than a single approach (Cohen et al. 2006). The approaches collectively offer a penetrating view of the complexities embodied in human-computer interaction. However, there is no "off-the-shelf" methodology for using it in research or as a practitioner (Furniss et al. 2015). The application of DCog theory and methods is complicated by the fact that there is no set of features to attend to and no checklist or prescribed method to follow (Rogers 2012). In addition, the analysis and abstraction require a high level of skill and training. More in-depth reviews of DCog can be found in (Rogers 2004) and, as applied to healthcare, in (Hazlehurst et al. 2008; Kaufman et al. 2015). DCog approaches have been particularly useful in the analysis of teamwork and EHR-mediated workflow in complex environments (Blandford and Furniss 2006; Hazlehurst et al. 2007; Kaufman et al. 2009). It is not unusual for HCI researchers to engage multiple theories depending on the area of focus.
5.4 Usability of Health Information Technology¹
Theories of cognitive science meaningfully inform and shape design, development and assessment of health-care information systems by providing insight into principles of
¹ Parts of this section have been adapted, with permission, from Kannampallil, T. G., & Abraham, J. (2015). Evaluation of health information technology: Methods, frameworks and challenges. In V. L. Patel, T. G. Kannampallil, & D. Kaufman (Eds.), Cognitive informatics in health and biomedicine: Human computer interaction. London: Springer.
Fig. 5.1 Classification of evaluation methods: analytic approaches (task-analytic methods such as hierarchical task analysis (HTA) and cognitive task analysis; inspection-based methods such as heuristic evaluation and the cognitive walkthrough; model-based methods such as keystroke-level models and motor-based theories, e.g., Fitts's law) and usability testing (interviews and focus groups, verbal think aloud, surveys and questionnaires, and field/observational approaches such as time and motion studies, shadowing, eye-tracking, and screen capture), conducted in the lab, in the field, or online
system usability and learnability, as well as the design of a safer workplace. Usability methods, most often drawn from cognitive science, have been used to evaluate a wide range of medical information technologies including infusion pumps (Karat 1994), ventilator management systems, physician order entry (Ash et al. 2003; Horsky et al. 2003; Koppel et al. 2005), pulmonary graph displays (Wachter et al. 2003), information retrieval systems, and research web environments for clinicians (Elkin et al. 2002). In addition, usability techniques are increasingly used to assess patient-centered environments (Chan and Kaufman 2011; Cimino et al. 2000; Kaufman et al. 2003a, b). The methods include observations, focus groups, surveys and experiments. Collectively, these studies make a compelling case for the instrumental value of such research in improving efficiency, user acceptance, and relatively seamless integration with current workflow and practices.

What do we mean by usability? Nielsen suggests that usability includes the following five attributes: (1) learnability: the system should be relatively easy to learn; (2) efficiency: an experienced user can attain a high level of productivity; (3) memorability: features supported by the system should be easy to retain once learned; (4) errors: the system should be designed to minimize errors and to support error detection and recovery; and (5) satisfaction: the user experience should be subjectively satisfying.
The question then becomes how we evaluate and study the various attributes of usability. We classified usability evaluation methods into two categories: analytic evaluation approaches and usability testing. Analytic evaluation studies use experts as participants—usability experts, domain experts, software designers—or, in some cases, are conducted without participants using task-analytic, inspection-based or model-based approaches, and are conducted in laboratory settings. We categorized usability testing into field-based studies that capture situated and contextual aspects of HIT use, and a general category of methods (e.g., interviews, focus groups, surveys) that solicit user opinions and can be administered in different modes (e.g., face-to-face or online). A brief categorization of the evaluation approaches can be found in Fig. 5.1. In the following sections, we provide a detailed description of each of the evaluation approaches along with research examples of its use.
5.4.1 Analytical Approaches
Analytical approaches rely on analysts' judgments and analytic techniques to perform evaluations of user interfaces, and often do not directly involve the participation of end users. These approaches employ experts—general usability, human factors, or software—to conduct the studies. In general, analytical evaluation techniques involve task-analytic approaches, inspection-based methods, and predictive model-based methods (e.g., keystroke models, Fitts's law).

5.4.1.1 Task Analysis²
Task analysis is one of the most commonly used techniques to evaluate "existing practices" in order to understand the rationale behind people's goals in performing a task, the motivations behind their goals, and how they perform these tasks (Preece et al. 1994). As described by Vicente (1999), task analysis is an evaluation of the "trajectories of behavior." There are several variants of task analysis, with hierarchical task analysis (HTA) and cognitive task analysis (CTA) being the most commonly used in biomedical informatics research.

HTA is the simplest task-analytic approach and involves breaking down a task into sub-tasks and smaller constituent parts (e.g., sub-sub-tasks). The tasks are organized according to specific goals. This method, originally designed to identify specific training needs, has been used extensively in the design and evaluation of interactive interfaces (Annett and Duncan 1967). The application of HTA can be explained with an example: consider the goal of printing a Microsoft Word document that is on your desktop. The sub-tasks for this goal would involve finding (or identifying) the document on the desktop and then printing it on the appropriate printer. The HTA for this task can be organized as follows:

0. Print document on the desktop
1. Go to the desktop
2. Find the document
   2.1. Use "Search" function
   2.2. Enter the name of the document
   2.3. Identify the document
3. Open the document
4. Select the "File" menu and then "Print"
   4.1. Select relevant printer
   4.2. Click "Print" button

Plan 0: do 1–3–4; if the file cannot be located by a visual search, do 2–3–4
Plan 2: do 2.1–2.2–2.3

In this task analysis, the task can be decomposed as follows: moving to the desktop; searching for the document (either visually or by using the search function and typing in the search criteria); selecting the document; and opening and printing it using the appropriate printer. The order in which these tasks are performed may change based on specific situations. For example, if the document is not immediately visible on the desktop (or if the desktop has several documents, making it impossible to identify the document visually), then a search function is necessary. Similarly, if there are multiple printer choices, then a relevant printer must be selected. The plans describe the set of tasks that a user must undertake to achieve the goal (i.e., print the document). In this case, there are two plans: plan 0 and plan 2 (plans are defined for tasks that have sub-tasks associated with them). For example, if the user cannot find a document on the desktop, plan 2 is instantiated, where the search function is used to identify the document (steps 2.1, 2.2 and 2.3). Figure 5.2 depicts this HTA in graphical form.

HTA has been used in evaluating interfaces and medical devices. For example, Chung et al. (2003) used HTA to compare the differences between six infusion pumps. Using HTA, they identified potential sources of human error during various tasks. While exploratory, their use of HTA provided insights into how HTA can be used for evaluating human performance and for predicting potential sources of errors. HTA has also been used to model information and clinical workflow in ambulatory clinics (Unertl et al. 2009). Unertl et al. (2009) used direct observations and semi-structured interviews to create an HTA of the workflows.

² While GOMS (see Sect. 5.4.1.3) is considered a task-analytic approach, we have categorized it as a model-based approach for predicting task completion times; it is based on a task-analytic decomposition of tasks.
The HTA was then used to identify the gaps in existing HIT functionality for supporting clinical workflows, and the needs of chronic disease care providers.
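The tasks, sub-tasks, and plans of the printing example above can be captured in a small data structure for inspection or tooling. The following is an illustrative sketch only: the nested-dict representation and the `leaf_tasks` helper are choices of this example, not a standard HTA file format.

```python
# Illustrative sketch (not a standard HTA format): the "print a document" HTA
# from the text, encoded as nested dicts. Task labels and names follow the
# example above; "plans" record the order in which sub-tasks are carried out.

hta = {
    "name": "Print document on the desktop",
    "subtasks": {
        "1": {"name": "Go to the desktop"},
        "2": {
            "name": "Find the document",
            "subtasks": {
                "2.1": {"name": "Use 'Search' function"},
                "2.2": {"name": "Enter the name of the document"},
                "2.3": {"name": "Identify the document"},
            },
        },
        "3": {"name": "Open the document"},
        "4": {
            "name": "Select the 'File' menu and then 'Print'",
            "subtasks": {
                "4.1": {"name": "Select relevant printer"},
                "4.2": {"name": "Click 'Print' button"},
            },
        },
    },
    "plans": {
        "plan 0": "do 1-3-4; if the file cannot be found visually, do 2-3-4",
        "plan 2": "do 2.1-2.2-2.3",
    },
}

def leaf_tasks(node, label="0"):
    """Return (label, name) pairs for tasks with no further decomposition."""
    subs = node.get("subtasks")
    if not subs:
        return [(label, node["name"])]
    leaves = []
    for lbl, sub in subs.items():
        leaves.extend(leaf_tasks(sub, lbl))
    return leaves

# The leaves (1, 2.1, 2.2, 2.3, 3, 4.1, 4.2) are the observable actions an
# analyst would actually watch a user perform.
```

One convenience of an explicit tree like this is that the leaf tasks enumerate exactly the observable steps, while the plans retain the conditional ordering that a flat task list loses.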
Fig. 5.2 Graphical representation of the task analysis of printing a document: the tasks are represented in boxes (0. Print document on desktop; 1. Go to desktop; 2. Find document, with sub-tasks 2.1 Use search function, 2.2 Enter name of document, 2.3 Identify document; 3. Open document; 4. Print document), with Plan 0 (do 1–3–4; if file cannot be found do 2–3–4) and Plan 2 (do 2.1–2.2–2.3); the line underneath certain boxes represents the fact that there are no sub-tasks for these tasks
CTA is an extension of the general task analysis technique that develops a comprehensive understanding of the knowledge, cognitive/thought processes and goals that underlie observable task activities (Chipman et al. 2000). Although the focus is on the knowledge and cognitive components of task activities and performance, CTA relies on observable human activities to draw insights on the knowledge-based constraints and challenges that impair effective task performance. CTA techniques are broadly classified into three groups: (a) interviews and observations, (b) process tracing and (c) conceptual techniques (Cooke 1994). CTA using interviews and observations involves developing a comprehensive understanding of tasks through discussions with, and task observations of, experts. For example, a researcher observes an expert physician performing the task of medication order entry into a CPOE (Computerized Physician Order Entry) system and asks follow-up questions regarding specific aspects of the task. In a study on understanding providers' management of abnormal test results, Hysong et al. (2010) conducted CTA-based interviews with 28 primary care physicians on how and when they manage alerts, and how they use the various features of the EHR system to filter and sort their alerts. CTA supported by process-tracing approaches relies on capturing task activities
through direct (e.g., verbal think aloud) or indirect (e.g., unobtrusive screen recording) data capture methods. Whereas the process-tracing approach is generally used to capture expert behaviors, it has also been used to evaluate general users. In a study of experts' information-seeking behavior in critical care, Kannampallil et al. (2013a) used the process-tracing approach to identify the nature of these activities, including the information sources, cognitive strategies, and shortcuts used by critical care physicians in decision-making tasks. The CTA approach relied on the verbalizations of physicians, their access to various sources, and the time spent on accessing these sources to identify the strategies of information seeking. Finally, CTA supported by conceptual techniques relies on the development of representations of a domain (and its related concepts) and the potential relationships between them. This approach is often used with experts, and different methods are used for knowledge elicitation, including concept elicitation, structured interviews, ranking approaches, card sorting, and structural approaches such as multi-dimensional scaling and graphical associations (Cooke 1994).

5.4.1.2 Inspection-Based Evaluation
Inspection methods involve experts appraising a system, playing the role of a user to identify potential usability and interaction issues with
a system. Inspection methods are often conducted on fully developed systems or interfaces but may also be used for prototypes. Inspection methods rely on a usability expert, i.e., a person with significant training and experience in evaluating interfaces, to go through a system and identify whether the user interface elements conform to a predetermined set of usability guidelines and design requirements (or principles). The most commonly used inspection methods are heuristic evaluations (HE) and walkthroughs.

HE techniques utilize a small set of experts to evaluate a user interface (or a set of interfaces in a system) based on their understanding of a set of heuristic principles regarding interface design (Johnson et al. 2005). This technique was developed by Jakob Nielsen and colleagues (Nielsen and Molich 1990), and has been used extensively in the evaluation of user interfaces. The original set of heuristics was developed by Nielsen based on an abstraction of 249 usability problems. In general, the following ten heuristic principles (or a subset of these) are most often considered for HE studies: system status visibility; match between system and real world; user control and freedom; consistency and standards; error prevention; recognition rather than recall; flexibility and efficiency of use; aesthetic and minimalist design; help users recognize, diagnose and recover from errors; and help and documentation (retrieved from: http://www.nngroup.com/articles/ten-usability-heuristics/).

Conducting an HE involves a usability expert going through an interface to identify potential violations of a set of usability principles (referred to as "heuristics"). These perceived violations could involve a variety of interface elements such as windows, menu items, links, navigation, and interaction. Evaluators typically select a relevant subset of heuristics for evaluation (or add more based on specific needs and context).
The selection of heuristics is based on the type of system and interface being evaluated. For example, the relevant heuristics for evaluating an EHR interface would differ from those for an app on a mobile device. After selecting a set of applicable heuristics, one or more
usability experts evaluate the user interface against the identified heuristics. After evaluating the heuristics, the potential violations are rated according to a severity score (1–5, where 1 indicates a cosmetic problem and 5 indicates a catastrophic problem). This process is iterative and continues until the expert feels that a majority (if not all) of the violations have been identified. It is generally recommended that 4–5 usability experts are needed to identify 95% of the perceived violations or problems with a user interface. However, it is not uncommon to employ fewer experts (e.g., 3). It should be acknowledged that the HE approach may not lead to the identification of all problems, and the identified problems may be localized (i.e., specific to a particular interface in a system). An example of an HE evaluation form is shown in Fig. 5.3.

In the healthcare domain, HE has been used in the evaluation of medical devices and HIT interfaces. For example, Zhang et al. (2003) used a modified set of 14 heuristics to compare the patient safety characteristics of two 1-channel volumetric infusion pumps. Four independent usability experts evaluated both infusion pumps using the list of heuristics and identified 89 usability problems categorized as 192 heuristic violations for pump 1, and 52 usability problems categorized as 121 heuristic violations for pump 2. The heuristic violations were also classified based on their severity. In another study, Allen et al. (2006) developed a simplified list of heuristics to evaluate web-based healthcare interfaces (printouts of each interface). Multiple usability experts assigned severity ratings for each of the identified violations, and the severity ratings were used to re-design the interface.

Walkthroughs are another inspection-based approach that relies on experts to evaluate the cognitive processes of users performing a task.
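The bookkeeping behind a heuristic evaluation of this kind, multiple experts logging violations against named heuristics with 1–5 severity ratings, and the counts then aggregated as in the infusion-pump comparison above, can be sketched as follows. The heuristic names and severity scale come from the text; the specific findings and expert labels are invented for illustration.

```python
# Hypothetical HE findings log: each record names the violated heuristic and a
# severity score (1 = cosmetic ... 5 = catastrophic). Data are invented.
from collections import Counter

findings = [
    {"expert": "A", "heuristic": "error prevention",            "severity": 4},
    {"expert": "A", "heuristic": "visibility of system status", "severity": 2},
    {"expert": "B", "heuristic": "error prevention",            "severity": 5},
    {"expert": "B", "heuristic": "consistency and standards",   "severity": 1},
    {"expert": "C", "heuristic": "error prevention",            "severity": 4},
]

# Aggregate the way HE reports typically do: violation counts per heuristic,
# plus the subset of severe problems (here, severity >= 4).
violations_per_heuristic = Counter(f["heuristic"] for f in findings)
severe = [f for f in findings if f["severity"] >= 4]
```

Pooling findings across evaluators in this way is also why the text recommends 4–5 experts: different experts catch different violations, and the union (with duplicates counted per heuristic) is what a summary table reports.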
It involves employing a set of potential stakeholders (designers, usability experts) to characterize a sequence of actions and goals for completing a task. The most commonly used walkthrough, referred to as the cognitive walkthrough (CW), involves observing, recording and analyzing the actions and behaviors of users as they complete a scenario of use. CW is focused on identifying the usability and
Fig. 5.3 Example of an HE form (for visibility)
comprehensibility of a system (Polson et al. 1992). The aim of CW is to investigate and determine whether the user’s knowledge and skills and the interface cues are sufficient to produce an appropriate goal-action sequence that is required to perform a given task (Kaufman et al. 2003a, b). CW is derived from the cognitive theory of how users work on computer-based tasks, using the exploratory learning approach, where system users continually appraise their goals and evaluate their progress against these goals (Kahn and Prail 1994). While performing CW, the focus is on simulating the human-system interaction, and evaluating the fit between the system features and the user’s goals. Conducting CW studies involves multiple steps. Potential participants (e.g., users, designers, usability experts) are provided a set of task sequences or scenarios
for working with an interface or system. For example, for an interface for entering demographic and patient history details, participants (e.g., physicians) are asked to enter the age, gender, race and clinical history information. As the participants perform their assigned task, their task sequences, errors and other behavioral aspects are recorded. Often, follow-up interviews or think aloud (described in a later section) are used to identify participants' interpretation of the tasks, how they make progress, and potential points of mismatch with the system. Detailed observations and recordings of these mismatches are documented for further analysis. While in most situations CWs are performed by individuals, sometimes groups of stakeholders perform the walkthrough together. For example, usability experts, designers and potential users could go through a system together to identify
the potential issues and drawbacks. Such group walkthroughs are often referred to as pluralistic walkthroughs.

In the biomedical informatics domain, it must be noted that CW has been used extensively in evaluating situations other than human-computer interaction. For example, the CW method (and its variants) has been used to evaluate diagnostic reasoning, decision-making processes and clinical activities. For example, Kushniruk et al. (1996) used the CW method to perform an early evaluation of the mediating role of HIT in clinical practice. The CW was not only used to identify usability problems but was instrumental in the development of a coding scheme for subsequent usability testing. Hewing et al. (2013) used CW to evaluate an expert ophthalmologist's reasoning regarding retinal disease in infants. Using images, clinical experts were independently asked to rate the presence and severity of retinal disease and provide an explanation of how they arrived at their diagnostic decisions. Similar approaches were used by Kaufman et al. (2003a, b) to evaluate the usability of a home-based, telehealth system.

Human-Computer Interaction, Usability, and Workflow

5.4.1.3 Model-Based Evaluation
Model-based evaluation approaches use predictive modeling to characterize the efficiency of user interfaces. Model-based approaches are often used for evaluating routine, expert task performance. For example, how can the keys of a medical device interface be optimally organized such that users can complete their tasks quickly and accurately? Similarly, predictive modeling can be used to compare the data entry efficiency of interfaces with different layouts and organization. We describe two commonly used predictive modeling techniques in the evaluation of interfaces.

Card et al. (1980) proposed the GOMS (Goals, Operators, Methods and Selection Rules) analytical framework for predicting human performance with interactive systems. Specifically, GOMS models predict the time taken to complete a task by a skilled/expert user based on "the composite of actions of retrieving plans from long-term memory, choosing among alternative available methods depending on features of the task at hand, keeping track of what has been done and what needs to be done, and executing the motor movements necessary for the keyboard and mouse" (Olson and Olson 2003). In other words, GOMS assumes that the execution of tasks can be represented as a serial sequence of cognitive operations and motor actions.

GOMS is used to describe an aggregate of the task and the user's knowledge of how to perform the task. This is expressed in terms of the Goals, Operators, Methods and Selection rules. Goals are the expected outcomes that a user wants to achieve. For example, a goal for a physician could be documenting the details of a patient interaction on an EHR interface. Operators are the specific actions that can be performed on the user interface, for example, clicking on a text box or selecting a patient from a list in a dropdown menu. Methods are sequential combinations of operators and sub-goals that need to be achieved. For example, in the case of selecting a patient from a dropdown list, the user has to move the mouse over to the dropdown menu and click on the arrow using the appropriate mouse key to retrieve the list of patients. Finally, selection rules are used to ascertain which method to choose when several choices are available: for example, using the arrow keys on the keyboard to scroll down a list versus using the mouse to select.

One of the simplest and most commonly used GOMS approaches is the Keystroke-Level Model (KLM), first described in Card et al. (1983). As opposed to the general GOMS model, the KLM makes several simplifying assumptions regarding the task. In KLM, methods are limited to keystroke-level operations, and task duration is predicted based on these estimates. For the KLM, there are six types of operators: K for pressing a key; P for pointing the mouse to a target; H for moving hands to the keyboard or pointing device; D for drawing a line segment; M for mental preparation for an action; and R for system response. Based on experimental data or other predictive models (e.g., Fitts' law), each of these operators is assigned a value or a parameterized estimate of execution time. We describe an example from Saitwal et al. (2010) on the use of the KLM approach. In a study investigating the usability of EHR interfaces, Saitwal et al. (2010) used the KLM approach to evaluate the time taken, and the number of steps required, to complete a set of 14 EHR-based tasks. The purpose of the study was to characterize the issues with the user interface and also to identify potential areas for improvement. The evaluation was performed on the AHLTA (Armed Forces Health Longitudinal Technology Application) user interface. A set of 14 prototypical tasks was first identified. Sample tasks included entering the patient's current illness, history of present illness, social history and family history. KLM analysis was performed on each of the tasks: this involved breaking each task into its component goals, operators, methods and selection rules. The operators were also categorized as physical (e.g., move the mouse to a button) or mental (e.g., locate an item in a dropdown menu). For example, the selection of a patient name involved eight steps (M, mental operation; P, physical operation): (1) think of location on the menu [M, 1.2 s], (2) move hand to the mouse [P, 0.4 s], (3) move the mouse to "Go" in the menu [P, 0.4 s], (4) extend the mouse to "Patient" [P, 0.4 s], (5) retrieve the name of the patient [M, 1.2 s], (6) locate patient name on the list [M, 1.2 s], (7) move mouse to the identified patient [P, 0.4 s] and (8) click on the identified patient [P, 0.4 s]. In this case, there were a total of 8 steps, and the listed operator times sum to 5.6 s (three M operators at 1.2 s plus five P operators at 0.4 s). Similarly, the number of steps and the time taken for each of the 14 considered AHLTA tasks were computed. In addition, GOMS and its family of methods can be productively used to make comparisons regarding the efficiency of performing tasks across interfaces. However, such approaches are approximations and have several disadvantages.
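As a minimal sketch (not from the chapter), the KLM bookkeeping described above, summing per-operator time estimates over a task's step sequence, can be expressed in a few lines of Python. The operator values and step labels mirror the Saitwal et al. (2010) patient-selection example:

```python
# Keystroke-Level Model (KLM) sketch. Per-operator time estimates are the
# values quoted in the worked example (M = mental operation, 1.2 s;
# P = physical operation, 0.4 s).
OPERATOR_TIMES = {"M": 1.2, "P": 0.4}  # seconds

def klm_task_time(steps):
    """Sum the estimated durations of a sequence of (description, operator) steps."""
    return sum(OPERATOR_TIMES[op] for _, op in steps)

# The eight steps for selecting a patient name in AHLTA:
select_patient = [
    ("think of location on the menu", "M"),
    ("move hand to the mouse", "P"),
    ("move the mouse to 'Go' in the menu", "P"),
    ("extend the mouse to 'Patient'", "P"),
    ("retrieve the name of the patient", "M"),
    ("locate patient name on the list", "M"),
    ("move mouse to the identified patient", "P"),
    ("click on the identified patient", "P"),
]

total = klm_task_time(select_patient)
print(len(select_patient), "steps,", round(total, 1), "s")  # 8 steps, 5.6 s
```

Note that with the per-step values listed in the text, the three M and five P operators sum to 5.6 s.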
Although GOMS provides a flexible and often reliable mechanism for predicting human performance in a variety of computer-based tasks, there are several potential limitations. A brief summary is provided here, and interested readers can find further
details in Card et al. (1980). GOMS models can be applied only to the error-free, routine tasks of skilled users. Hence, it is not possible to make time predictions for non-skilled users, who are likely to take considerable time to learn to use a new system. For example, using the GOMS approach to predict the time spent by physicians on a new EHR would be inaccurate owing to the physicians' relative lack of knowledge of the various interfaces and the learning curve required to come up to speed with the new system. The complexity of clinical work processes and tasks, and the variability of the user population, create significant challenges for the effective use of GOMS in measuring the effectiveness of clinical tasks.

Fitts' law is used to predict human motor behavior; specifically, it predicts the time taken to acquire a target (Fitts 1954). On computer-based interfaces, it has been used to develop a predictive model of the time it takes to acquire a target using a mouse (or another pointing device). The time taken to acquire a target depends on the distance between the pointer and the target (referred to as the amplitude, A) and the width of the target (W). The movement time (MT) is mathematically represented as follows:

MT = k · log₂(A/W + 1)

where k is a constant, A is the amplitude, and W is the width of the target. In summary, based on Fitts' law, one can say that larger objects are easier to acquire, while smaller, closely spaced objects are much more difficult to acquire with a pointing device. While direct applications of Fitts' law are not often found in evaluation studies of HIT or health interfaces in general, it has a profound influence on the design of interfaces. For example, the placement of menu items and buttons, such that a user can easily click on them for selection, is based on Fitts' law parameters.
Similarly, in the design of number keypads for medical devices, the size and location of the buttons can be effectively guided by Fitts' law parameters.
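The Fitts' law prediction MT = k · log₂(A/W + 1) can be illustrated with a short sketch; the constant k below is an arbitrary placeholder, not an empirically fitted value:

```python
import math

def fitts_mt(amplitude, width, k=0.1):
    """Movement time MT = k * log2(A/W + 1). The constant k (seconds/bit)
    is device- and user-specific; 0.1 is an illustrative placeholder."""
    return k * math.log2(amplitude / width + 1)

# A large, nearby target is acquired faster than a small, distant one:
large_near = fitts_mt(amplitude=100, width=50)  # index of difficulty ~1.58 bits
small_far = fitts_mt(amplitude=400, width=10)   # index of difficulty ~5.36 bits
print(round(large_near, 3), round(small_far, 3))  # 0.158 0.536
```

Whatever the value of k, the ranking of designs by predicted movement time is unchanged, which is why the law is useful for comparing layouts even without user-specific calibration.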
In addition to the above-mentioned predictive models, there are several other, less common models. While a detailed description of each of them and their uses is beyond the scope of this chapter, we provide a brief introduction to one more predictive approach: Hick-Hyman choice reaction time (Hick 1951; Hyman 1953). Choice reaction time, RT, can be predicted based on the number of available stimuli (or choices), n:

RT = a + b · log₂(n)

where a and b are constants. The Hick-Hyman law is particularly useful in predicting text entry rates for different keyboards (MacKenzie et al. 1999) and the time required to select from different menus (e.g., a linear vs. a hierarchical menu). In particular, the method is useful for making decisions regarding the design and evaluation of menus. For example, consider two menu design choices: 9 items deep/3 items wide and 3 items deep/9 items wide. The total RT can be calculated as 9 × (a + b · log₂(3)) for the former and 3 × (a + b · log₂(9)) for the latter; since 3 × (a + b · log₂(9)) < 9 × (a + b · log₂(3)) for positive a and b, the broader, shallower menu is predicted to be faster.

500,000 mostly common SNPs. More recent GWAS have incorporated rare variants, such as functional genomic variants known to be associated with disease and pharmacogenomic variants. The first GWASs (Fig. 28.3a) were conducted in 2005 and 2006 and discovered genetic variants associated with age-related macular degeneration using arrays of ~100k SNPs (Dewan et al., 2006; Klein et al., 2005). The modern era of the array-based GWAS approach, with large case-control populations identifying common variants influencing common disease, was arguably introduced at large scale in 2007 by the Wellcome Trust Case Control Consortium, which successfully identified common-variant associations for seven common diseases.
28.4.2 Genomic Sequencing

The rapidly falling cost of genomic sequencing (to several hundred dollars for a research whole exome and less than $1000 for a whole genome as of the end of 2018) is leading to dramatic growth in the use of genomic sequencing. The primary added benefit of genomic sequencing for precision medicine at the current time is a better and more detailed assessment of rare and very rare variants through more comprehensive coverage of the genome. Sequencing approaches have enabled the discovery of novel variants for common disease and have been especially impactful for uncovering variants in rare disease. Sequencing is now routinely used clinically to aid cancer care or diagnose rare genetic diseases. In research, sequencing is rapidly expanding our ability to discover associations with rare conditions. The NIH's Undiagnosed Disease Network, for instance, routinely
951 Precision Medicine and Informatics
employs whole exome sequencing (WES) or whole genome sequencing (WGS) to diagnose individuals. As a notable win for sequencing, the UDN has been able to diagnose 35% of individuals referred into its network, and 74% of those diagnoses were made with the addition of genomic sequencing to comprehensive clinical phenotyping (Splinter et al., 2018). In addition, they have defined 31 new syndromes through their comprehensive clinical and molecular assessments of undiagnosed patients.

28.4.3 Phenome-Wide Association Studies (PheWAS)
Growth of EHR-based cohorts provided rich and diverse phenotype data to complement biologic data. Whereas GWASs, starting around 2005, provided a way to assess genomic associations in a hypothesis-free manner, a GWAS usually assesses only one phenotype at a time. However, the growth of GWAS quickly highlighted the occurrence of genetic pleiotropy – the condition in which one gene influences multiple independent phenotypes. Thus, the rich collection of diverse phenotype information in EHRs and other growing cohorts provided the ability to simultaneously assess phenotype associations in the same scanning, hypothesis-free manner as GWAS. The first EHR-based PheWAS aggregated billing codes into 744 PheWAS case groups (Denny, Ritchie, Basford, et al., 2010). Each case group was linked to a control group. After identification of case and control groups, a PheWAS (Fig. 28.3b) is essentially a pairwise test of all phenotypes against an independent variable, such as a genetic variant or laboratory value. For a genetic variant, PheWAS is analogous to the genetic association tests performed in a GWAS, with a typical approach employing a logistic regression adjusted for demographic and genetic variables, such as genetic ancestry. The first PheWAS tested seven known SNP-disease associations, replicating four and suggesting a couple of new associations. Newer approaches to PheWAS have leveraged the increased density of phenotypes in the EHR, with current methods mapping ICD-9 and ICD-10 codes into >1800 phenotype case groups. A 2013 study showed that this approach was able to replicate 66% of adequately-powered SNP-phenotype pairs, and it also identified several new associations that were replicated (Denny et al., 2013). A catalog of some of the PheWAS associations found to date is available at http://phewascatalog.org. A PheWAS of all phenotypes available in the UK Biobank has also been performed (http://www.nealelab.is/uk-biobank). PheWAS can essentially be performed on any broad collection of phenotypes. Researchers have used raw unaggregated ICD codes, other aggregation systems of ICD codes, or phenotypes collected from observational cohorts (Hebbring et al., 2013; Pathak, Kiefer, Bielinski, & Chute, 2012; Pendergrass et al., 2011). The disadvantage of using more granular ICD codes is the increased number of hypotheses being tested, which hinders the statistical power to detect a result. Lack of ICD code aggregation can also introduce variability from coding practices that decreases the sample size for a given phenotype, for example through the number of specific diagnostic codes available to represent common conditions and their complications, such as diabetes mellitus subtypes (e.g., with specific codes for controlled or uncontrolled glucose status and its resulting cardiovascular, renal, or neurological complications) or gout (e.g., chronic or acute, with or without tophi, etc.). PheWAS can quickly highlight potential pleiotropy of a given genetic variant or other independent variable; by analyzing for associations with multiple phenotypes within a single population, one can test the independence of the potential pleiotropic findings with subsequent conditioned analyses. Other advantages of PheWAS are that they are quick to perform and easily implemented through existing R packages (https://github.com/PheWAS/PheWAS) or iteration through common statistical packages.
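The scanning idea can be illustrated with a toy sketch. For simplicity this uses unadjusted odds ratios in place of the covariate-adjusted logistic regressions that real PheWAS implementations fit; all person identifiers and phecodes below are hypothetical:

```python
def phewas_scan(genotypes, phenotype_table):
    """Toy PheWAS: for each phenotype, compute an unadjusted odds ratio for
    carrying at least one risk allele vs. case status. Real implementations
    (e.g., the PheWAS R package) use logistic regressions adjusted for
    demographics and genetic ancestry.

    genotypes: {person_id: risk-allele count (0, 1, or 2)}
    phenotype_table: {phecode: set of case person_ids}; everyone else is a control.
    """
    results = {}
    everyone = set(genotypes)
    for phecode, cases in phenotype_table.items():
        controls = everyone - cases
        a = sum(1 for p in cases if genotypes[p] > 0)     # carrier cases
        b = len(cases) - a                                # non-carrier cases
        c = sum(1 for p in controls if genotypes[p] > 0)  # carrier controls
        d = len(controls) - c                             # non-carrier controls
        if min(a, b, c, d) == 0:
            continue  # skip unstable odds-ratio estimates
        results[phecode] = (a * d) / (b * c)
    return results

# Hypothetical data: six people, two phecode-defined case groups.
genotypes = {1: 2, 2: 1, 3: 0, 4: 0, 5: 1, 6: 0}
phenotypes = {"250.2": {1, 2, 3}, "714.1": {4, 5}}
results = phewas_scan(genotypes, phenotypes)
print(results)  # {'250.2': 4.0, '714.1': 1.0}
```

In a real analysis each phecode-level test would also carry a p-value, and the multiple-testing burden discussed above would be addressed with a Bonferroni or false-discovery-rate correction.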
A disadvantage of PheWAS is that its phenotypes can be coarse and can have both lower sensitivity and lower PPV than custom phenotype algorithms, as discussed in Sect. 28.3. Fortunately, these types of bias typically bias results towards the null. Associations
J. C. Denny et al.
found via PheWAS can require refinement and subsequent validation.

28.4.4 Other Omic Investigations
In addition to genomics, the growth of a number of other omic approaches is providing greater insight into an individual's environment, endophenotypes, and molecular measures. Some of these include the microbiome, proteome, metabolome, and other bioassays. Additional dense phenotypic and environmental assessments include dense measures of the environment and personal sensor-based technologies, such as consumer activity monitors. Publicly available datasets providing detailed measures of pollution, the built environment, weather patterns, availability of quality food or greenspace, and sociodemographic factors can be linked via geolocation, for example through smartphones and other devices that continuously track location. These devices can also measure activity and heart rate to provide greater insight into a person's habits and physiological factors. Today, the clinical impact of many of these measures is not yet known. However, their growing ubiquity through both research and commercial interests is enabling deeper investigation into their clinical impact. They are also being included in large research cohorts (see Sect. 28.6).
28.5 Approaches to Using Dense Genomic and Phenomic Data for Discovery

28.5.1 Combining Genotypes and Phenotypes as Risk Scores
Most genetic variants discovered via GWAS have had relatively mild effect sizes for their phenotype of interest. However, the size of modern GWASs, now involving hundreds of thousands of individuals for more common traits, has allowed identification of many independent genetic loci, sometimes reaching into the hundreds of distinct loci (Locke et al.,
2015; Okada et al., 2014; Wood et al., 2014). Collectively, these genetic variants can explain a much larger percentage of the variance in disease risk than the individual risk variants, even when the effect sizes of many of the individual variants may be rather small (e.g., having odds ratios of ~1.01). As a tool, researchers have aggregated genetic risk variants into a calculated score (called a "genetic risk score", GRS, or "polygenic risk score", PRS), typically as a sum of the presence of the variant multiplied by a weight, often taken from a regression analysis. These risk scores need to account for linkage disequilibrium to find independent loci and may also produce a weighted model using penalized regression. A simple approach can be given as:

GRS = Σ_{i=1}^{k} w_i N_i
where w_i is the weight for the variant (e.g., the log odds ratio from a logistic regression) and N_i is the number of risk alleles for that variant (typically 0, 1, or 2). The clinical advantage of a GRS is that it provides a way to evaluate the aggregate risk of an individual having a given disease that takes into account many typically small risk factors. For instance, consider breast cancer genetic testing. It has long been recognized that variants in BRCA1 and BRCA2 confer significantly increased risk of breast cancer to carriers of these mutations. While pathogenic variants in these genes do confer a large risk of breast cancer (lifetime risk of 45–65%), the vast majority of breast cancer is not related to these variants, since they are present in only a small fraction of the population.

Program | Region | Started | Size | Website
Million Veteran Program | U.S. | | >600,000 (goal: 1 million) | www.research.va.gov/MVP/default.cfm
Kaiser Permanente Biobank | U.S. | 2009 | 240,000 | www.rpgeh.kaiser.org
China Kadoorie Biobank | China | 2004 | 510,000 | ckbiobank.org
All of Us Research Program | U.S. | 2017 | >80,000 (goal: 1 million or more) | joinallofus.org, researchallofus.org
Taiwan Biobank | Taiwan | 2005 | 86,695 (goal: 200,000) | www.twbiobank.org.tw
Geisinger MyCode | U.S. | 2007 | >190,000 | www.geisinger.org/mycode

Limited to cohorts exceeding 100,000 individuals with biosamples. Sizes reported are as of 11/2018. eMERGE, The Electronic Medical Records and Genomics Network
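Returning to the risk-score formula of Sect. 28.5.1, GRS = Σ w_i N_i can be sketched directly; the variant IDs and weights below are hypothetical, for illustration only:

```python
import math

def genetic_risk_score(weights, allele_counts):
    """GRS = sum_i w_i * N_i: w_i is the per-variant weight (e.g., the log
    odds ratio from a logistic regression); N_i is the number of risk
    alleles carried (0, 1, or 2). Variants absent from the profile count 0."""
    return sum(w * allele_counts.get(snp, 0) for snp, w in weights.items())

# Hypothetical variants and log-odds-ratio weights:
weights = {"rs0001": math.log(1.10), "rs0002": math.log(1.25), "rs0003": math.log(0.90)}
person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
score = genetic_risk_score(weights, person)
print(round(score, 4))  # 0.4138
```

Using log odds ratios as weights makes the score additive on the log-odds scale, so exp(score) can be read as an approximate combined odds ratio relative to a person carrying no risk alleles.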
28.6.1 Need for Diversity, and Role of Precision Medicine in Health Disparities
Health disparities are abundant in health care. The same concerns apply to precision medicine, for which variability in health insurance coverage, access to care, and financial situations may alter the availability and accessibility of precision therapies (Bentley, Callier, & Rotimi, 2017). However, it is also true that precision medicine has the potential to identify and help alleviate some health disparities. Since genetic variants vary by ancestry, genetic testing offers the opportunity to identify those most at risk for adverse events based not just on ancestry but on actual carriage of variants. Moreover, drugs traditionally have not been tested in all diverse populations, and risk factors may not always be identified in each population. For instance, individuals of Asian ancestry are at much greater risk for severe skin reactions such as
Stevens-Johnson syndrome from antiepileptics such as carbamazepine (Phillips et al., 2018). Similarly, it has been noted that carriage of CYP2C19 loss-of-function alleles is much more common in individuals of Pacific Island descent (Kaneko et al., 1999). Since diverse ancestries are often not tested in large numbers in clinical trials, the increased risks in diverse populations are not necessarily noticed. However, genomic testing would identify those at greater risk of adverse events, thus identifying opportunities to optimize care. The specific association of clopidogrel and reduced efficacy in individuals of Pacific Island descent was the subject of a lawsuit (A. H. Wu, White, Oh, & Burchard, 2015). Unfortunately, the vast majority of individuals who have been genotyped or sequenced to date are of European ancestry. For instance, a 2016 study noted that 81 percent of all individuals who had undergone GWAS at that time were of European ancestry, and only ~4% represented African, Hispanic, or native
ancestries (Popejoy & Fullerton, 2016). Those latter populations represent about one-third of the current US population. A lack of diversity in genetic testing results in a lack of knowledge of the genetic architecture of diverse populations. For instance, variants affecting warfarin sensitivity vary by ancestry, such that the variants needed to accurately guide prescribing for European-ancestry and African-ancestry individuals are different (Perera et al., 2013; Ramirez et al., 2012). Moreover, it is known that individuals of African ancestry typically require higher doses of warfarin; however, most of the warfarin pharmacovariants that have been identified actually increase sensitivity to warfarin rather than reducing it. The lack of diversity in genotyped populations affects not only our ability to adequately treat individuals of diverse ancestries; it also hinders discovery. For instance, rare PCSK9 loss-of-function variants were discovered as a drug target for cholesterol and cardiovascular disease in African Americans (Cohen, Boerwinkle, Mosley, & Hobbs, 2006). These loss-of-function variants led to the production of monoclonal antibodies against PCSK9 that dramatically reduce cholesterol levels – and will treat individuals of essentially any ancestry (Sabatine et al., 2017).
28.7 Implementation of Precision Medicine in Clinical Practice
Currently, most efforts in precision medicine implementation focus on genomics. This comes in three main flavors: germline genomic changes to better tailor drug prescribing, diagnosing genetic disease, and identification of somatic variants to guide cancer therapy. A number of networks have been funded by the NIH to support the integration of genomic medicine into clinical care. They include the Implementing Genomics Into Practice (IGNITE) network, the Electronic Medical Records and Genomics (eMERGE) Network, the Clinical Sequencing Evidence-Generating Research (CSER) Network, the Pharmacogenomics Research Network (PGRN), and the Newborn Sequencing In Genomic medicine and public HealTh (NSIGHT) Network (Table 28.3).
Cancer genomic testing Perhaps the most widespread use of precision medicine currently is the use of somatic variation to target cancer therapies. Cancer therapies have long recognized the contribution of genetic variation to prognosis, starting with clinical karyotyping. One of the earliest applications of truly targeted therapy started with identification of the Philadelphia chromosome/translocation, which generates a fusion gene product, BCR-ABL1. BCR-ABL1 results in the tyrosine kinase Abl being constitutively activated and is a marker for acute lymphoblastic leukemia and chronic myeloid leukemia. Its particular relevance to targeted therapy was noted in the 1990s when imatinib was identified through high-throughput screening assays of tyrosine kinase inhibitors. Randomized controlled trials demonstrated a survival benefit in patients with chronic myelogenous leukemia (CML), thus leading to targeted therapies for individuals positive for this translocation. The use of genetic changes to guide cancer therapy is proliferating rapidly. The growth of next-generation sequencing of cancer patients has resulted in discovery of a number of mutations that have been successfully targeted for therapeutics. Examples include variants in BRAF for melanoma; EGFR, ALK, ROS1, and others for lung cancer; and many others. Hallmarks of genetically-focused therapies are applicability to smaller populations and a potential for fewer side effects compared to traditional chemotherapy. However, they also tend to be more expensive (Tannock & Hickman, 2016). Given the focused care and work-up for cancer patients, typical treatment for individuals with cancers that have available genetically-targeted therapies is to clinically sequence tumor samples. These reports typically come in the form of PDFs; however, this is not a major impediment to accurate clinical care, since it is a focused work-up guided by professionals who are very knowledgeable in the field.
.. Table 28.3 Example projects exploring genetic medicine implementation

Program | Region | Website | Comments
eMERGE | U.S. | gwas.net | Pharmacogenomics (PGx) and actionable Mendelian variants (AMV) for ~34 k
IGNITE | U.S. | ignite-genomics.org/ | Research demonstration projects exploring family medical history, PGx, APOL1 variants
Alabama Genomic Health Initiative | U.S. | www.uabmedicine.org/aghi | Community-based with GWAS-based AMV
Undiagnosed Disease Network | U.S. | undiagnosed.hms.harvard.edu/ | WGS, phenotyping for undiagnosed patients
Genomics England | U.K. | www.genomicsengland.co.uk/ | WGS for rare disease and cancer for 100 k
Thailand | SE Asia | | Proactive genotyping for SJS/TEN risk alleles in carbamazepine-exposed patients
Sanford | U.S. | imagenetics.sanfordhealth.org/ | PGx and AMV among primary care population
All of Us Research Program | U.S. | joinallofus.org | Stated goal of PGx and AMV for >1 million
Geisinger MyCode | U.S. | www.geisinger.org/mycode | AMV; about 190 k enrolled

eMERGE, The Electronic Medical Records and Genomics Network; IGNITE, Implementing Genomics into Practice
Germline pharmacogenomics Medications have variable efficacy and potential for adverse effects based on three major modes of action: altered metabolism, on-target side effects, or off-target side effects, each of which can result from a drug-genome interaction (see Chap. 26, Sect. 26.5 for more details). A common scenario for altered metabolism resulting in lack of efficacy occurs when a drug is a prodrug, meaning that the drug that is administered requires activation in vivo (typically by enzymes) into its active form. For example, clopidogrel is a prodrug that requires activation by CYP2C19 to its active form, 2-oxo-clopidogrel (Scott et al., 2013). Thus, people with poor-metabolizing variants of CYP2C19 are more likely to experience a lack of clopidogrel efficacy and be at higher risk of myocardial infarction, need for revascularization, stroke, and death (Delaney et al., 2012). Similarly, decreased metabolism of thiopurines (e.g., azathioprine) due to TPMT polymorphisms can
result in excessive bone marrow suppression (Relling et al., 2019). Second, drugs can produce adverse effects through off-target effects, such as an allergic reaction via an interaction with the immune system. Examples here include severe skin reactions from drugs such as carbamazepine and abacavir, which can be predicted by certain human leukocyte antigen variants (White et al., 2018). Third, drugs can have toxicity from on-target effects, such as increased sensitivity to warfarin resulting in an increased risk of bleeding with higher dose. Germline pharmacogenomics holds the promise of tailoring medications to an individual’s makeup to enable the “right drug for the right person” based on understanding of these effects. Unlike cancer genetic testing, pharmacogenetics requires a provider to potentially alter drug prescribing based on understanding of one’s genotype. To allow for pharmacogenetics to work, the system must be able to intercept a drug order and provide
guidance. Drug-genome interactions could be acted upon either by a computerized decision support system (see Chap. 24) or via a human mechanism, e.g., via pharmacists. For decision support to work, the EHR requires a structured representation of one's genotype and a clinical decision support system that can support actionability based on both a drug order and a genotype. Pharmacogenomic testing can be ordered in either a preemptive or a reactive fashion. In the preemptive fashion, an individual has pharmacogenetic testing prior to drug prescribing. Then, when a medication that may be affected by one's genetic makeup is prescribed, the system can intercept the order and recommend a genetically-tailored medication at the time of the prescribing event, such as the decision support alert in Fig. 28.7. This sort of genetic testing has been deployed at Vanderbilt, the University of Chicago, and Indiana's INGENIOUS trial (Eadon et al., 2016; O'Donnell et al., 2012; Pulley et al., 2012). Further investigation of this approach is underway within the IGNITE Network. Reactive pharmacogenetic testing is the more common approach to genetic testing and involves testing an individual when there is a specific indication for that test. Research has shown that having genetic testing available at the time of the prescribing event results
in a higher frequency of genetically-tailored prescriptions (Peterson et al., 2016). Many genetic tests can take several days or more to return results, which may require a provider to recontact a patient to make a therapy change.

Genomics for disease diagnosis and risk assessment Clinical genetic testing has often
occurred in the context of specialized clinic visits with geneticists or genetic counselors, most commonly for prenatal screening or diagnosis of suspected genetic disease. These types of interactions typically require very little direct informatics support, and results can be delivered effectively via send-out paper lab results. However, newer approaches that broaden understanding of individual disease risk based on genetics require greater support from informatics systems. While clinical use of genetic testing for common disease risk (such as through PRS, as discussed above in Sect. 28.5.1) is uncommon in clinical care now, the explosion of genetic knowledge suggests a day in which genetic risk could be implemented clinically to enhance people's understanding of their degree of genetic risk for a disease. Understanding of genetic risk for disease is already available through direct-to-consumer genetic testing, discussed in the next section.
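The drug-order interception logic described above for germline pharmacogenomics can be sketched in a few lines. This is a highly simplified illustration: the single rule below mirrors the clopidogrel/CYP2C19 example from the text but does not encode actual guideline content, and real systems act on curated knowledge bases (e.g., CPIC) and structured EHR genotype data:

```python
# Simplified sketch of preemptive pharmacogenomic decision support: intercept
# a drug order, check the patient's stored genotype-derived phenotype, and
# return guidance if a rule fires. The rule table is illustrative only.
PGX_RULES = {
    ("clopidogrel", "CYP2C19", "poor metabolizer"):
        "Reduced clopidogrel activation likely; consider an alternative "
        "antiplatelet agent per pharmacogenomic guidance.",
}

def intercept_order(drug, patient_phenotypes):
    """Return alert text if any rule matches this order and genotype, else None."""
    for (rule_drug, gene, phenotype), advice in PGX_RULES.items():
        if drug == rule_drug and patient_phenotypes.get(gene) == phenotype:
            return advice
    return None

patient = {"CYP2C19": "poor metabolizer"}
alert = intercept_order("clopidogrel", patient)  # rule fires for this order
no_alert = intercept_order("aspirin", patient)   # None: no rule applies
print(alert, no_alert)
```

The key architectural point is the interception step: the check runs at the prescribing event, so guidance is available before the order is signed rather than after results return days later.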
.. Fig. 28.7 Screenshot of clinical decision support advisor for Clopidogrel pharmacogenetic advice
28.8 Sequencing Early in Life
One crucial complication in the search for genomic explanations for any given disease or phenotype is the impact of environmental interactions. Over time, every person on earth is exposed to environmental factors that may differ based not only on a factory that disposes of industrial waste near a drinking water supply or the traffic on the street they grew up on, but also on the foods they eat, the climates in which they live, and the infections they have harbored. Those external variables, hard to control for and sometimes even to know, can have major effects on the downstream products and activities of one's genomic fingerprint. Early in life, however, those effects are less pronounced. Of course, the impact of the in-utero environment on the wellbeing of the developing fetus is well established. But a genetic defect is much more likely to be the cause in a newborn with an unidentified disease than in an adult patient who has undergone a lifetime of environmental insults. In this vein, a number of initiatives have been established across the US to offer clinical sequencing services for young patients, including programs at the Children's Hospital of Philadelphia, Duke University, Partners Healthcare, the Baylor College of Medicine, and the Medical College of Wisconsin. More controversial on paper, and not yet being performed in practice, is prenatal genome sequencing. Ethicists are exploring the potential implications of this possible direction (Donley, Hull, & Berkman, 2012). Addressing the time and resources needed to perform genome interpretation, one striking success story was achieved at Children's Mercy Hospitals and Clinics in Kansas City, MO (Saunders et al., 2012). Investigators used an Illumina HiSeq 2500 machine and an internally developed automated analysis pipeline to perform whole-genome sequencing and make a differential diagnosis for genetic disorders in under 50 h.
The diagnoses in question are among the ~3500 known monogenic disorders that have been characterized. In this case, WGS is not being used to identify novel, previously unknown mutations. Rather, it is shortening the path to diagnosis to just over 2 days, instead of the more traditional 4–6 weeks as a battery of tests is performed sequentially.
We offer one final example in which genome sequencing was used as a last resort in a medical odyssey to identify the cause of a mysterious bowel condition in a 4-year-old boy named Nicholas Volker (Worthey et al., 2011). Having ruled out every diagnosis they could conceive of, doctors resorted to exome sequencing, leading to the identification of 16,124 mutations, of which 1527 were novel. A causal mutation was discovered in the gene XIAP. This gene was already known to play a role in XLP, or X-linked lymphoproliferative syndrome, and retrospective review showed that colitis had been observed in 2 XLP patients in the past. Based on these findings, a cord blood transplant was performed, and 2 years later, Nic's intestinal issues had not returned. News coverage of this story by the Milwaukee Journal Sentinel was awarded a Pulitzer Prize for explanatory reporting (Journal Sentinel wins Pulitzer Prize for "One in a Billion" DNA series, n.d.).

28.9 Direct to Consumer Genetics
In the wake of the Human Genome Project and the commoditization of genotypic data, a number of companies were founded to provide consumers with their own genetic information directly. These direct-to-consumer (DTC) genomic companies began making such services broadly available when deCODE genetics launched the deCODEme service in November 2007, followed a few days later by 23andMe; Navigenics launched the following spring. These companies offered consumers the opportunity to provide a saliva specimen or buccal swab through the mail and, in exchange, to receive genotypic information for a range of known genetic markers. Different companies emphasized different aspects of genetic testing. Navigenics focused on known disease risk markers, while 23andMe was much broader, including disease markers but also ancestry information and "recreational" genetic information, for example, earwax type and the ability to smell a distinct odor in urine after eating asparagus. Navigenics offered free genetic counseling as part of its service, while 23andMe and deCODEme provided referrals to genetic counselors. A study of concordance between these three services found >99.6% agreement among them, but in some cases the predicted relative risks differed in magnitude or even direction (Imai, Kricka, & Fortina, 2011). This disagreement is likely due to differences in the specific SNPs and the reference population used to calculate risk. From the companies' perspectives, their customers offer a rich resource of genomic data for potential research and data mining. 23andMe created a research initiative called 23andWe through which they enlist customers "to collaborate with us on cutting-edge genetic research" (23andWe: The First Annual Update – 23andMe Blog, n.d.). They invite users to fill out questionnaires and then use the phenotypic information to perform genome-wide association studies. This approach enabled researchers at the company to replicate a number of known associations, and to discover a number of novel associations, recreational though they may be, for curly hair, freckling, sunlight-induced sneezing, and the ability to smell a metabolite in urine after eating asparagus (Tung et al., 2011). deCODE, purchased by Amgen in 2012, boasts a large number of medically significant genetic discoveries that have come out of its volunteer registry of 160,000 Icelanders, more than half of the adult population of that country (SCIENCE | deCODE genetics, n.d.). Navigenics was purchased by Life Technologies (now part of Thermo Fisher Scientific Inc.) in 2012 and no longer offers its Health Compass genetic testing service.

28.10 Conclusion
Physicians have always sought to provide care personalized to the individual. The current era of large and deep data about individual patients is ushering in the promise of precision medicine that tailors care to the individual based on factors not previously observable by the clinician, such as genomic data, predictive patterns derived from mining clinical data, or sensors tracking activity and heart rate at a density previously not possible. For precision medicine to become a reality, we will need informatics to enable both its discovery and
implementation. The irony of the ability to personalize care based on an individual's makeup is that it requires huge data sets of many densely phenotyped individuals to have the statistical power to make predictions for rare variants, diseases, and outcomes. Thus, precision medicine requires large data sets that are shareable and available for research. We will also need to enroll diverse populations effectively and ensure that the data include both molecular data and social and behavioral determinants of health. In addition, the ability to make accurate decisions for the individual patient requires implementation in the EHR, as the amount of data required to make decisions is vast and changing quickly.

Suggested Readings

Denny, J. C., Bastarache, L., & Roden, D. M. (2016). Phenome-Wide Association Studies as a Tool to Advance Precision Medicine. Annual Review of Genomics and Human Genetics, 17(1), 353–373. https://doi.org/10.1146/annurev-genom-090314-024956. Provides an overview and history of phenome-wide association studies. Different approaches to PheWAS are described, along with the biases, advantages, and disadvantages of each.

Green, E. D., Guyer, M. S., & National Human Genome Research Institute. (2011). Charting a course for genomic medicine from base pairs to bedside. Nature, 470(7333), 204–213. https://doi.org/10.1038/nature09764. Provides an overview of the NHGRI strategic plan through 2020, including the plan for moving discovery in large cohorts to implementation in clinical enterprises.

Kirby, J. C., Speltz, P., Rasmussen, L. V., Basford, M., Gottesman, O., Peissig, P. L., … Denny, J. C. (2016). PheKB: a catalog and workflow for creating electronic phenotype algorithms for transportability. Journal of the American Medical Informatics Association, 23(6), 1046–1052. https://doi.org/10.1093/jamia/ocv202.
Introduces the Phenotype KnowledgeBase website, which contains phenotype algorithms and related comments, plus implementation and validation data, for finding cases and controls for genomic analysis from EHR data. The paper includes some summary tables and experiences from the first several years of uploaded EHR phenotype algorithms.
J. C. Denny et al.
Newton, K. M., Peissig, P. L., Kho, A. N., Bielinski, S. J., Berg, R. L., Choudhary, V., … Denny, J. C. (2013). Validation of electronic medical record-based phenotyping algorithms: results and lessons learned from the eMERGE network. Journal of the American Medical Informatics Association, 20(e1), e147–e154. https://doi.org/10.1136/amiajnl-2012-000896. Provides best practices and lessons learned from the Electronic Medical Records and Genomics (eMERGE) Network for deriving research-grade phenotypes from EHR data. The paper covers the phenotype algorithm design, creation, and validation process, as well as experiences regarding what worked well and what did not.

Pulley, J. M., Denny, J. C., Peterson, J. F., Bernard, G. R., Vnencak-Jones, C. L., Ramirez, A. H., … Roden, D. M. (2012). Operational implementation of prospective genotyping for personalized medicine: the design of the Vanderbilt PREDICT project. Clinical Pharmacology and Therapeutics, 92(1), 87–95. https://doi.org/10.1038/clpt.2011.371. Describes one of the first prospective implementations of pharmacogenomics. Patients were selected based on their risk of potentially needing a medication affected by pharmacogenes. They were tested on a multiplexed platform, and medication recommendations were then provided through computer-based provider order entry decision support. The first implementation was CYP2C19 and clopidogrel (an antiplatelet medication), but because the platform tested multiple pharmacovariants, drug-genome interactions could be added over time.

Wellcome Trust Case Control Consortium. (2007). Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Nature, 447(7145), 661–678. https://doi.org/10.1038/nature05911. This was one of the first large-scale genome-wide association studies, which found common genetic variants influencing seven common diseases.
One interesting finding was that one of the discovered loci for type 2 diabetes was in FTO, whose effect on diabetes risk is largely mediated through adiposity. This illustrates the importance of considering phenotypes along the causal pathway when performing GWAS.
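At its core, a GWAS such as this one applies a simple association test at each of hundreds of thousands of variants. As a rough illustration (not the consortium's actual analysis, which also accounts for population structure and genotype uncertainty), a basic allelic test compares risk-allele counts between cases and controls with a 2x2 chi-square statistic; the counts below are invented:

```python
def allelic_chi_square(case_risk, case_other, ctrl_risk, ctrl_other):
    """Pearson chi-square statistic (1 df) for a 2x2 allele-count table."""
    a, b, c, d = case_risk, case_other, ctrl_risk, ctrl_other
    n = a + b + c + d
    # Shortcut formula for 2x2 tables, without a continuity correction.
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Toy counts: risk allele on 60/100 case chromosomes vs. 40/100 in controls.
stat = allelic_chi_square(60, 40, 40, 60)  # 8.0; p ~ 0.005 at 1 df
```

Because this test is repeated at every variant in a genome-wide scan, GWAS use a stringent significance threshold (conventionally p < 5 x 10^-8) to control for the enormous number of comparisons.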
Questions for Discussion

1. Design a study to assess the genomic influences on a disease or drug-response phenotype using EHR data. Who would be your cases and controls? What features would define each case and control, and how would you validate that the algorithms you picked for cases and controls were indeed finding the patients you wanted to find?
2. Research studies traditionally have not returned their research results to study subjects. However, genetic studies are at the forefront of changing paradigms in this space. What do you think about the implications of returning results to patients? How would you feel if you were a subject in a research study? Would you want results back or not?
3. What are the implications of returning results of actionable genetic variants (such as those causing breast and ovarian cancer) found incidentally during research studies or clinical testing?
4. What are some ways in which precision medicine may reduce health disparities between different populations? In what ways might precision medicine worsen them? How can researchers promote research that ameliorates this risk?
5. What are some requirements for a health system or a physician in the context of pharmacogenomic testing?
6. Given that the genome does not generally change over a person's lifetime, how can a patient take their genomic test results from one institution to another? What technological and non-technological solutions could be employed to allow a patient to take their genetic results with them?
7. Discuss the strengths and weaknesses of EHRs for precision medicine studies of diseases, drug responses, and exposures. What kinds of exposures and health outcomes does an EHR excel at capturing, and where would traditional survey or in-person assessment measures perform better?
Precision Medicine and Informatics
Bibliography

1000 Genomes Project Consortium, Auton, A., Brooks, L. D., Durbin, R. M., Garrison, E. P., Kang, H. M., et al. (2015). A global reference for human genetic variation. Nature, 526(7571), 68–74. https://doi.org/10.1038/nature15393.

23andWe: The First Annual Update – 23andMe Blog. (n.d.). Retrieved from https://blog.23andme.com/23andme-and-you/23andwe-the-first-annualupdate/

Ahmad, T., Pencina, M. J., Schulte, P. J., O'Brien, E., Whellan, D. J., Piña, I. L., et al. (2014). Clinical implications of chronic heart failure phenotypes defined by cluster analysis. Journal of the American College of Cardiology, 64(17), 1765–1774. https://doi.org/10.1016/j.jacc.2014.07.979.

Bastarache, L., Hughey, J. J., Hebbring, S., Marlo, J., Zhao, W., Ho, W. T., et al. (2018). Phenotype risk scores identify patients with unrecognized Mendelian disease patterns. Science, 359(6381), 1233–1239. https://doi.org/10.1126/science.aal4043.

Bentley, A. R., Callier, S., & Rotimi, C. N. (2017). Diversity and inclusion in genomic research: Why the uneven progress? Journal of Community Genetics, 8(4), 255–266. https://doi.org/10.1007/s12687-017-0316-6.

Cannon, C. P., Blazing, M. A., Giugliano, R. P., McCagg, A., White, J. A., Theroux, P., et al. (2015). Ezetimibe added to statin therapy after acute coronary syndromes. The New England Journal of Medicine, 372(25), 2387–2397. https://doi.org/10.1056/NEJMoa1410489.

Carroll, R. J., Eyler, A. E., & Denny, J. C. (2011). Naïve electronic health record phenotype identification for rheumatoid arthritis. AMIA Annual Symposium Proceedings, 2011, 189–196.

Carroll, R. J., Thompson, W. K., Eyler, A. E., Mandelin, A. M., Cai, T., Zink, R. M., et al. (2012). Portability of an algorithm to identify rheumatoid arthritis in electronic health records. Journal of the American Medical Informatics Association, 19(e1), e162–e169. https://doi.org/10.1136/amiajnl-2011-000583.

Cohen, J. C., Boerwinkle, E., Mosley, T. H., & Hobbs, H. H. (2006). Sequence variations in PCSK9, low LDL, and protection against coronary heart disease. The New England Journal of Medicine, 354(12), 1264–1272. https://doi.org/10.1056/NEJMoa054013.

Conway, M., Berg, R. L., Carrell, D., Denny, J. C., Kho, A. N., Kullo, I. J., et al. (2011). Analyzing the heterogeneity and complexity of Electronic Health Record oriented phenotyping algorithms. AMIA Annual Symposium Proceedings, 2011, 274–283.

Crawford, D. C., Crosslin, D. R., Tromp, G., Kullo, I. J., Kuivaniemi, H., Hayes, M. G., et al. (2014). eMERGEing progress in genomics-the first seven years. Frontiers in Genetics, 5, 184. https://doi.org/10.3389/fgene.2014.00184.

Delaney, J. T., Ramirez, A. H., Bowton, E., Pulley, J. M., Basford, M. A., Schildcrout, J. S., et al. (2012). Predicting clopidogrel response using DNA samples linked to an electronic health record. Clinical Pharmacology and Therapeutics, 91(2), 257–263. https://doi.org/10.1038/clpt.2011.221.

Denny, J. C., Bastarache, L., Ritchie, M. D., Carroll, R. J., Zink, R., Mosley, J. D., et al. (2013). Systematic comparison of phenome-wide association study of electronic medical record data and genome-wide association study data. Nature Biotechnology, 31(12), 1102–1110. https://doi.org/10.1038/nbt.2749.

Denny, J. C., Crawford, D. C., Ritchie, M. D., Bielinski, S. J., Basford, M. A., Bradford, Y., et al. (2011). Variants near FOXE1 are associated with hypothyroidism and other thyroid conditions: Using electronic medical records for genome- and phenome-wide studies. American Journal of Human Genetics, 89(4), 529–542. https://doi.org/10.1016/j.ajhg.2011.09.008.

Denny, J. C., Ritchie, M. D., Basford, M. A., Pulley, J. M., Bastarache, L., Brown-Gentry, K., et al. (2010). PheWAS: Demonstrating the feasibility of a phenome-wide scan to discover gene-disease associations. Bioinformatics, 26(9), 1205–1210. https://doi.org/10.1093/bioinformatics/btq126.

Denny, J. C., Ritchie, M. D., Crawford, D. C., Schildcrout, J. S., Ramirez, A. H., Pulley, J. M., et al. (2010). Identification of genomic predictors of atrioventricular conduction: Using electronic medical records as a tool for genome science. Circulation, 122(20), 2016–2021. https://doi.org/10.1161/CIRCULATIONAHA.110.948828.

Dewan, A., Liu, M., Hartman, S., Zhang, S. S.-M., Liu, D. T. L., Zhao, C., et al. (2006). HTRA1 promoter polymorphism in wet age-related macular degeneration. Science, 314(5801), 989–992. https://doi.org/10.1126/science.1133807.

Donley, G., Hull, S. C., & Berkman, B. E. (2012). Prenatal whole genome sequencing: Just because we can, should we? The Hastings Center Report, 42(4), 28–40. https://doi.org/10.1002/hast.50.

Doshi-Velez, F., Ge, Y., & Kohane, I. (2014). Comorbidity clusters in autism spectrum disorders: An electronic health record time-series analysis. Pediatrics, 133(1), e54–e63. https://doi.org/10.1542/peds.2013-0819.

Eadon, M. T., Desta, Z., Levy, K. D., Decker, B. S., Pierson, R. C., Pratt, V. M., et al. (2016). Implementation of a pharmacogenomics consult service to support the INGENIOUS trial. Clinical Pharmacology and Therapeutics, 100(1), 63–66. https://doi.org/10.1002/cpt.347.

Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056.

Green, E. D., Guyer, M. S., & National Human Genome Research Institute. (2011). Charting a course for genomic medicine from base pairs to bedside. Nature, 470(7333), 204–213. https://doi.org/10.1038/nature09764.

Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., et al. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. The Journal of the American Medical Association, 316(22), 2402–2410. https://doi.org/10.1001/jama.2016.17216.

Hebbring, S. J., Schrodi, S. J., Ye, Z., Zhou, Z., Page, D., & Brilliant, M. H. (2013). A PheWAS approach in studying HLA-DRB1*1501. Genes and Immunity, 14(3), 187–191. https://doi.org/10.1038/gene.2013.2.

Holmes, M. V., Asselbergs, F. W., Palmer, T. M., Drenos, F., Lanktree, M. B., Nelson, C. P., et al. (2015). Mendelian randomization of blood lipids for coronary heart disease. European Heart Journal, 36(9), 539–550. https://doi.org/10.1093/eurheartj/eht571.

Imai, K., Kricka, L. J., & Fortina, P. (2011). Concordance study of 3 direct-to-consumer genetic-testing services. Clinical Chemistry, 57(3), 518–521. https://doi.org/10.1373/clinchem.2010.158220.

Imran, T. F., Posner, D., Honerlaw, J., Vassy, J. L., Song, R. J., Ho, Y.-L., et al. (2018). A phenotyping algorithm to identify acute ischemic stroke accurately from a national biobank: The million veteran program. Clinical Epidemiology, 10, 1509–1521. https://doi.org/10.2147/CLEP.S160764.

Jerome, R. N., Pulley, J. M., Roden, D. M., Shirey-Rice, J. K., Bastarache, L. A., Bernard, G., et al. (2018). Using human "experiments of nature" to predict drug safety issues: An example with PCSK9 inhibitors. Drug Safety, 41(3), 303–311. https://doi.org/10.1007/s40264-017-0616-0.

Jiang, M., Chen, Y., Liu, M., Rosenbloom, S. T., Mani, S., Denny, J. C., & Xu, H. (2011). A study of machine-learning-based approaches to extract clinical entities and their assertions from discharge summaries. Journal of the American Medical Informatics Association, 18(5), 601–606. https://doi.org/10.1136/amiajnl-2011-000163.

Journal Sentinel wins Pulitzer Prize for "One in a Billion" DNA series. (n.d.). Retrieved from http://archive.jsonline.com/news/milwaukee/120091754.html/

Kaneko, A., Lum, J. K., Yaviong, L., Takahashi, N., Ishizaki, T., Bertilsson, L., et al. (1999). High and variable frequencies of CYP2C19 mutations: Medical consequences of poor drug metabolism in Vanuatu and other Pacific islands. Pharmacogenetics, 9(5), 581–590.

Khera, A. V., Emdin, C. A., Drake, I., Natarajan, P., Bick, A. G., Cook, N. R., et al. (2016). Genetic risk, adherence to a healthy lifestyle, and coronary disease. The New England Journal of Medicine, 375(24), 2349–2358. https://doi.org/10.1056/NEJMoa1605086.

Kho, A. N., Hayes, M. G., Rasmussen-Torvik, L., Pacheco, J. A., Thompson, W. K., Armstrong, L. L., et al. (2012). Use of diverse electronic medical record systems to identify genetic risk for type 2 diabetes within a genome-wide association study. Journal of the American Medical Informatics Association, 19(2), 212–218. https://doi.org/10.1136/amiajnl-2011-000439.

Kirby, J. C., Speltz, P., Rasmussen, L. V., Basford, M., Gottesman, O., Peissig, P. L., et al. (2016). PheKB: A catalog and workflow for creating electronic phenotype algorithms for transportability. Journal of the American Medical Informatics Association, 23(6), 1046–1052. https://doi.org/10.1093/jamia/ocv202.

Kircher, M., Witten, D. M., Jain, P., O'Roak, B. J., Cooper, G. M., & Shendure, J. (2014). A general framework for estimating the relative pathogenicity of human genetic variants. Nature Genetics, 46(3), 310–315. https://doi.org/10.1038/ng.2892.

Klein, R. J., Zeiss, C., Chew, E. Y., Tsai, J. Y., Sackler, R. S., Haynes, C., et al. (2005). Complement factor H polymorphism in age-related macular degeneration. Science (New York, N.Y.), 308(5720), 385–389.

Kullo, I. J., Ding, K., Jouni, H., Smith, C. Y., & Chute, C. G. (2010). A genome-wide association study of red blood cell traits using the electronic medical record. PLoS One, 5(9). https://doi.org/10.1371/journal.pone.0013011.

Kurreeman, F., Liao, K., Chibnik, L., Hickey, B., Stahl, E., Gainer, V., et al. (2011). Genetic basis of autoantibody positive and negative rheumatoid arthritis risk in a multi-ethnic cohort derived from electronic health records. American Journal of Human Genetics, 88(1), 57–69. https://doi.org/10.1016/j.ajhg.2010.12.007.

Li, L., Cheng, W.-Y., Glicksberg, B. S., Gottesman, O., Tamler, R., Chen, R., et al. (2015). Identification of type 2 diabetes subgroups through topological analysis of patient similarity. Science Translational Medicine, 7(311), 311ra174. https://doi.org/10.1126/scitranslmed.aaa9364.

Liao, K. P., Cai, T., Gainer, V., Goryachev, S., Zeng-Treitler, Q., Raychaudhuri, S., et al. (2010). Electronic medical records for discovery research in rheumatoid arthritis. Arthritis Care & Research, 62(8), 1120–1127. https://doi.org/10.1002/acr.20184.

Lin, C., Karlson, E. W., Dligach, D., Ramirez, M. P., Miller, T. A., Mo, H., et al. (2015). Automatic identification of methotrexate-induced liver toxicity in patients with rheumatoid arthritis from the electronic medical record. Journal of the American Medical Informatics Association, 22(e1), e151–e161. https://doi.org/10.1136/amiajnl-2014-002642.

Locke, A. E., Kahali, B., Berndt, S. I., Justice, A. E., Pers, T. H., Day, F. R., et al. (2015). Genetic studies of body mass index yield new insights for obesity biology. Nature, 518(7538), 197–206. https://doi.org/10.1038/nature14177.

MacArthur, J., Bowler, E., Cerezo, M., Gil, L., Hall, P., Hastings, E., et al. (2017). The new NHGRI-EBI Catalog of published genome-wide association studies (GWAS Catalog). Nucleic Acids Research, 45(D1), D896–D901. https://doi.org/10.1093/nar/gkw1133.

Michailidou, K., Lindström, S., Dennis, J., Beesley, J., Hui, S., Kar, S., et al. (2017). Association analysis identifies 65 new breast cancer risk loci. Nature, 551(7678), 92–94. https://doi.org/10.1038/nature24284.

Millard, L. A. C., Davies, N. M., Timpson, N. J., Tilling, K., Flach, P. A., & Davey Smith, G. (2015). MR-PheWAS: Hypothesis prioritization among potential causal effects of body mass index on many outcomes, using Mendelian randomization. Scientific Reports, 5, 16645. https://doi.org/10.1038/srep16645.

Mohammadpour, A. H., & Akhlaghi, F. (2013). Future of cholesteryl ester transfer protein (CETP) inhibitors: A pharmacological perspective. Clinical Pharmacokinetics, 52(8), 615–626. https://doi.org/10.1007/s40262-013-0071-8.

Myocardial Infarction Genetics Consortium Investigators, Stitziel, N. O., Won, H.-H., Morrison, A. C., Peloso, G. M., Do, R., et al. (2014). Inactivating mutations in NPC1L1 and protection from coronary heart disease. The New England Journal of Medicine, 371(22), 2072–2082. https://doi.org/10.1056/NEJMoa1405386.

Newton, K. M., Peissig, P. L., Kho, A. N., Bielinski, S. J., Berg, R. L., Choudhary, V., et al. (2013). Validation of electronic medical record-based phenotyping algorithms: Results and lessons learned from the eMERGE network. Journal of the American Medical Informatics Association, 20(e1), e147–e154. https://doi.org/10.1136/amiajnl-2012-000896.

Okada, Y., Wu, D., Trynka, G., Raj, T., Terao, C., Ikari, K., et al. (2014). Genetics of rheumatoid arthritis contributes to biology and drug discovery. Nature, 506(7488), 376–381. https://doi.org/10.1038/nature12873.

O'Donnell, P. H., Bush, A., Spitz, J., Danahey, K., Saner, D., Das, S., et al. (2012). The 1200 patients project: Creating a new medical model system for clinical implementation of pharmacogenomics. Clinical Pharmacology and Therapeutics, 92(4), 446–449. https://doi.org/10.1038/clpt.2012.117.

O'Reilly, R., & Elphick, H. E. (2013). Development, clinical utility, and place of ivacaftor in the treatment of cystic fibrosis. Drug Design, Development and Therapy, 7, 929–937. https://doi.org/10.2147/DDDT.S30345.

Pathak, J., Kiefer, R. C., Bielinski, S. J., & Chute, C. G. (2012). Applying semantic web technologies for phenome-wide scan using an electronic health record linked biobank. Journal of Biomedical Semantics, 3(1), 10. https://doi.org/10.1186/2041-1480-3-10.

Peissig, P. L., Santos Costa, V., Caldwell, M. D., Rottscheit, C., Berg, R. L., Mendonca, E. A., & Page, D. (2014). Relational machine learning for electronic health record-driven phenotyping. Journal of Biomedical Informatics, 52, 260–270. https://doi.org/10.1016/j.jbi.2014.07.007.

Pendergrass, S. A., Brown-Gentry, K., Dudek, S., Frase, A., Torstenson, E. S., Goodloe, R., et al. (2013). Phenome-wide association study (PheWAS) for detection of pleiotropy within the Population Architecture using Genomics and Epidemiology (PAGE) Network. PLoS Genetics, 9(1), e1003087. https://doi.org/10.1371/journal.pgen.1003087.

Pendergrass, S. A., Brown-Gentry, K., Dudek, S. M., Torstenson, E. S., Ambite, J. L., Avery, C. L., et al. (2011). The use of phenome-wide association studies (PheWAS) for exploration of novel genotype-phenotype relationships and pleiotropy discovery. Genetic Epidemiology, 35(5), 410–422. https://doi.org/10.1002/gepi.20589.

Perera, M. A., Cavallari, L. H., Limdi, N. A., Gamazon, E. R., Konkashbaev, A., Daneshjou, R., et al. (2013). Genetic variants associated with warfarin dose in African-American individuals: A genome-wide association study. The Lancet, 382(9894), 790–796. https://doi.org/10.1016/S0140-6736(13)60681-9.

Peterson, J. F., Field, J. R., Unertl, K. M., Schildcrout, J. S., Johnson, D. C., Shi, Y., et al. (2016). Physician response to implementation of genotype-tailored antiplatelet therapy. Clinical Pharmacology and Therapeutics, 100(1), 67–74. https://doi.org/10.1002/cpt.331.

Phillips, E. J., Sukasem, C., Whirl-Carrillo, M., Müller, D. J., Dunnenberger, H. M., Chantratita, W., et al. (2018). Clinical pharmacogenetics implementation consortium guideline for HLA genotype and use of carbamazepine and oxcarbazepine: 2017 update. Clinical Pharmacology and Therapeutics, 103(4), 574–581. https://doi.org/10.1002/cpt.1004.

Popejoy, A. B., & Fullerton, S. M. (2016). Genomics is failing on diversity. Nature, 538(7624), 161–164. https://doi.org/10.1038/538161a.

Pulley, J. M., Denny, J. C., Peterson, J. F., Bernard, G. R., Vnencak-Jones, C. L., Ramirez, A. H., et al. (2012). Operational implementation of prospective genotyping for personalized medicine: The design of the Vanderbilt PREDICT project. Clinical Pharmacology and Therapeutics, 92(1), 87–95. https://doi.org/10.1038/clpt.2011.371.

Ramirez, A. H., Shi, Y., Schildcrout, J. S., Delaney, J. T., Xu, H., Oetjens, M. T., et al. (2012). Predicting warfarin dosage in European-Americans and African-Americans using DNA samples linked to an electronic health record. Pharmacogenomics, 13(4), 407–418. https://doi.org/10.2217/pgs.11.164.

Rasmussen-Torvik, L. J., Stallings, S. C., Gordon, A. S., Almoguera, B., Basford, M. A., Bielinski, S. J., et al. (2014). Design and anticipated outcomes of the eMERGE-PGx project: A multicenter pilot for preemptive pharmacogenomics in electronic health record systems. Clinical Pharmacology and Therapeutics, 96(4), 482–489. https://doi.org/10.1038/clpt.2014.137.

Relling, M. V., Schwab, M., Whirl-Carrillo, M., Suarez-Kurtz, G., Pui, C.-H., Stein, C. M., et al. (2019). Clinical Pharmacogenetics Implementation Consortium (CPIC) guideline for thiopurine dosing based on TPMT and NUDT15 genotypes: 2018 update. Clinical Pharmacology and Therapeutics, 105(5), 1095–1105. https://doi.org/10.1002/cpt.1304.

Ritchie, M. D., Denny, J. C., Crawford, D. C., Ramirez, A. H., Weiner, J. B., Pulley, J. M., et al. (2010). Robust replication of genotype-phenotype associations across multiple diseases in an electronic medical record. American Journal of Human Genetics, 86(4), 560–572. https://doi.org/10.1016/j.ajhg.2010.03.003.

Robinson, J. R., Wei, W.-Q., Roden, D. M., & Denny, J. C. (2018). Defining phenotypes from clinical data to drive genomic research. Annual Review of Biomedical Data Science, 1(1), 69–92. https://doi.org/10.1146/annurev-biodatasci-080917-013335.

Sabatine, M. S., Giugliano, R. P., Keech, A. C., Honarpour, N., Wiviott, S. D., Murphy, S. A., et al. (2017). Evolocumab and clinical outcomes in patients with cardiovascular disease. The New England Journal of Medicine, 376(18), 1713–1722. https://doi.org/10.1056/NEJMoa1615664.

Saria, S., Butte, A., & Sheikh, A. (2018). Better medicine through machine learning: What's real, and what's artificial? PLoS Medicine, 15(12), e1002721. https://doi.org/10.1371/journal.pmed.1002721.

Saunders, C. J., Miller, N. A., Soden, S. E., Dinwiddie, D. L., Noll, A., Alnadi, N. A., et al. (2012). Rapid whole-genome sequencing for genetic disease diagnosis in neonatal intensive care units. Science Translational Medicine, 4(154), 154ra135. https://doi.org/10.1126/scitranslmed.3004041.

Schmidt, A. F., Swerdlow, D. I., Holmes, M. V., Patel, R. S., Fairhurst-Hunter, Z., Lyall, D. M., et al. (2017). PCSK9 genetic variants and risk of type 2 diabetes: A mendelian randomisation study. The Lancet. Diabetes & Endocrinology, 5(2), 97–105. https://doi.org/10.1016/S2213-8587(16)30396-5.

SCIENCE | deCODE genetics. (n.d.). Retrieved from https://www.decode.com/research/

Scott, S. A., Sangkuhl, K., Stein, C. M., Hulot, J. S., Mega, J. L., Roden, D. M., et al. (2013). Clinical Pharmacogenetics Implementation Consortium guidelines for CYP2C19 genotype and clopidogrel therapy: 2013 update. Clinical Pharmacology and Therapeutics, 94(3), 317–323. https://doi.org/10.1038/clpt.2013.105.

Splinter, K., Adams, D. R., Bacino, C. A., Bellen, H. J., Bernstein, J. A., Cheatle-Jarvela, A. M., et al. (2018). Effect of genetic diagnosis on patients with previously undiagnosed disease. The New England Journal of Medicine, 379(22), 2131–2139. https://doi.org/10.1056/NEJMoa1714458.

Tannock, I. F., & Hickman, J. A. (2016). Limits to personalized cancer medicine. The New England Journal of Medicine, 375(13), 1289–1294. https://doi.org/10.1056/NEJMsb1607705.

Torkamani, A., Wineinger, N. E., & Topol, E. J. (2018). The personal and clinical utility of polygenic risk scores. Nature Reviews. Genetics, 19(9), 581–590. https://doi.org/10.1038/s41576-018-0018-x.

Tung, J. Y., Do, C. B., Hinds, D. A., Kiefer, A. K., Macpherson, J. M., Chowdry, A. B., et al. (2011). Efficient replication of over 180 genetic associations with self-reported medical data. PLoS One, 6(8), e23473. https://doi.org/10.1371/journal.pone.0023473.

Voight, B. F., Peloso, G. M., Orho-Melander, M., Frikke-Schmidt, R., Barbalic, M., Jensen, M. K., et al. (2012). Plasma HDL cholesterol and risk of myocardial infarction: A mendelian randomisation study. The Lancet, 380(9841), 572–580. https://doi.org/10.1016/S0140-6736(12)60312-2.

Wei, W.-Q., & Denny, J. C. (2015). Extracting research-quality phenotypes from electronic health records to support precision medicine. Genome Medicine, 7(1), 41. https://doi.org/10.1186/s13073-015-0166-y.

Wei, W.-Q., Teixeira, P. L., Mo, H., Cronin, R. M., Warner, J. L., & Denny, J. C. (2016). Combining billing codes, clinical notes, and medications from electronic health records provides superior phenotyping performance. Journal of the American Medical Informatics Association, 23(e1), e20–e27. https://doi.org/10.1093/jamia/ocv130.

Wellcome Trust Case Control Consortium. (2007). Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Nature, 447(7145), 661–678. https://doi.org/10.1038/nature05911.

White, K. D., Abe, R., Ardern-Jones, M., Beachkofsky, T., Bouchard, C., Carleton, B., et al. (2018). SJS/TEN 2017: Building multidisciplinary networks to drive science and translation. The Journal of Allergy and Clinical Immunology. In Practice, 6(1), 38–69. https://doi.org/10.1016/j.jaip.2017.11.023.

Wood, A. R., Esko, T., Yang, J., Vedantam, S., Pers, T. H., Gustafsson, S., et al. (2014). Defining the role of common variation in the genomic and biological architecture of adult human height. Nature Genetics, 46(11), 1173–1186. https://doi.org/10.1038/ng.3097.

Worthey, E. A., Mayer, A. N., Syverson, G. D., Helbling, D., Bonacci, B. B., Decker, B., et al. (2011). Making a definitive diagnosis: Successful clinical application of whole exome sequencing in a child with intractable inflammatory bowel disease. Genetics in Medicine, 13(3), 255–262. https://doi.org/10.1097/GIM.0b013e3182088158.

Wu, A. H., White, M. J., Oh, S., & Burchard, E. (2015). The Hawaii clopidogrel lawsuit: The possible effect on clinical laboratory testing. Personalized Medicine, 12(3), 179–181. https://doi.org/10.2217/pme.15.4.

Wu, Y., Denny, J. C., Trent Rosenbloom, S., Miller, R. A., Giuse, D. A., Wang, L., et al. (2017). A long journey to short abbreviations: Developing an open-source framework for clinical abbreviation recognition and disambiguation (CARD). Journal of the American Medical Informatics Association, 24(e1), e79–e86. https://doi.org/10.1093/jamia/ocw109.
Part III  Biomedical Informatics in the Years Ahead

Contents

Chapter 29  Health Information Technology Policy – 969
Robert S. Rudin, Paul C. Tang, and David W. Bates

Chapter 30  The Future of Informatics in Biomedicine – 987
James J. Cimino, Edward H. Shortliffe, Michael F. Chiang, David Blumenthal, Patricia Flatley Brennan, Mark Frisse, Eric Horvitz, Judy Murphy, Peter Tarczy-Hornoch, and Robert M. Wachter
Health Information Technology Policy
Robert S. Rudin, Paul C. Tang, and David W. Bates

Contents

29.1 Public Policy and Health Informatics – 970
29.2 How Health IT Supports National Health Goals: Promise and Evidence – 971
29.2.1 Improving Care Quality and Health Outcomes – 971
29.2.2 Reducing Costs – 973
29.2.3 Using Health IT to Measure Quality of Care – 974
29.2.4 Holding Providers Accountable for Cost and Quality – 975
29.2.5 Informatics Research – 976
29.3 Beyond Adoption: Policy for Optimizing and Innovating with Health IT – 977
29.3.1 Health Information Exchange – 977
29.3.2 Patient Portals and Telehealth – 978
29.3.3 Application Programming Interfaces – 979
29.4 Policies to Ensure Safety of Health IT – 979
29.4.1 Should Health IT Be Regulated as Medical Devices? – 979
29.4.2 Alternative Ways to Improve Patient Safety – 980
29.5 Policies to Ensure Privacy and Security of Electronic Health Information – 980
29.5.1 Regulating Privacy – 980
29.5.2 Security – 980
29.5.3 Record Matching and Linking – 981
29.6 The Growing Importance of Public Policy in Informatics – 981
References – 982
© Springer Nature Switzerland AG 2021 E. H. Shortliffe, J. J. Cimino (eds.), Biomedical Informatics, https://doi.org/10.1007/978-3-030-58721-5_29
Learning Objectives
After reading this chapter, you should know the answers to these questions:
- Why is the development and use of IT in healthcare so much slower than in other industries?
- How has public policy promoted the adoption and use of health IT?
- How does health IT support national agendas and priorities for health and health care?
- Why is it important to measure the value of health IT in terms of improvements in care quality and savings in costs?
- How can public policies safeguard patient privacy in an era of electronic health information?
- What are the main policy issues related to exchanging health information among health care organizations?
- What are the major tradeoffs for regulating electronic health records in the same way that other medical devices are regulated to ensure patient safety?
- What policies are needed to encourage clinicians to redesign their care practices to exploit better the capabilities of health IT?
- How does the U.S. approach to health IT policy compare with those of other countries?
29.1 Public Policy and Health Informatics
For decades after most industries had adopted IT as part of their core business and operational processes, clinical care in the U.S. remained largely in the paper world. Most developed countries adopted health IT sooner, especially in primary care. International leaders have included Denmark, Sweden and the Netherlands. However, health systems leaders in the U.S. have recognized that public policy played a role in the pace and nature of their health systems’ adoption and use of IT, and that changes in policy had the potential to accelerate change.
The influence of policy can be found throughout a health care system. Policies shape the structure of health care delivery organizations and the markets for medical products. Directly or indirectly, policies influence the behaviors of all health care stakeholders including patients, providers, health plans, and researchers. Public policy changes can enhance or set back health care delivery through incentives, requirements, and restrictions. In recent years, policy interventions have influenced health IT in major ways. In 2004, U.S. President George W. Bush established the Office of the National Coordinator for Health IT.1 In 2009, during the Obama Administration, the U.S. Congress allocated approximately $30 billion to support providers' meaningful use of health IT. In 2015, the Medicare Access and CHIP Reauthorization Act (MACRA) absorbed the meaningful use program as part of a larger effort to harmonize how the federal government pays healthcare providers, called the Quality Payment Program (QPP). In 2016, the 21st Century Cures Act included provisions to improve patient access to their digital medical data and allow them to use the data in applications of their choice, which may accelerate innovation. Notably, healthcare information technology has been one of the few relatively non-partisan topics. Governments of many other countries have also spent significant public funds on health IT and are considering related policy issues. In this chapter, we review some of the key policy goals relevant to informatics and discuss how researchers and policymakers are trying to address them. We discuss how health IT policy goals have changed substantially in recent years, from a focus on accelerating adoption to a greater emphasis on interoperability and fostering innovation. Protecting privacy of patients' health information, ensuring health IT products are safe for patients, and improving medical practice
1. http://www.healthit.gov/newsroom/about-onc (Accessed 12/9/2012).
workflows remain persistent challenges of interest to policymakers. While informatics research has been occurring for several decades, research in health IT policy is still relatively new. As stakeholders look to health IT to help address the major cost and quality problems in national health care systems, we expect the issues discussed in this chapter to become more important to policymakers and researchers in the fields of health policy and informatics.

29.2 How Health IT Supports National Health Goals: Promise and Evidence
Health IT is not an end in itself. Like all technology, it is simply a tool for achieving larger clinical, social and policy goals, such as improving health outcomes, improving the quality of care, and reducing costs. Health IT has the potential to have a tremendous impact on these goals. Policymakers, however, are interested not only in the promise of health IT but also the reality. Like most software products, early versions of health IT products tend to have many problems, such as bugs, poor usability, and difficulties integrating with other products. Only after the technology matures is it possible to realize a larger portion of the promised benefits. Policymakers may be reluctant to invest public funds, which are raised primarily in the form of taxes, on technologies that have not been shown in empirical studies to produce benefits. Many studies have demonstrated empirical benefits of health IT, especially CPOE (see Chap. 14) and some types of CDS (see Chap. 24) (Jones et al. 2014). Recent studies of HIE (see Chaps. 15 and 18) have also found some beneficial effects (Menachemi et al. 2018). However, substantial gaps in evidence exist. For example, many studies come from a small number of academic medical centers or geographical communities, and it is unknown if the benefits in terms of quality, safety and efficiency are being realized in other settings. Some have described this phenomenon as a health IT "productivity paradox" because of some observers' assessment that the benefits of IT have so far not justified the investment (Jones et al. 2012). Lessons from other industries suggest that the substantial benefits of IT will eventually be realized but will require more than just improvements in the technology itself. Care processes will likely also need to be redesigned so that users can take advantage of the technology's potential. New best practices may be needed for different care settings. And additional studies will likely also be needed to demonstrate benefits that may exist but are difficult to detect, especially in non-academic settings that do not have the expertise or incentives to conduct robust evaluation studies (see Chap. 13). Despite the limits of the empirical evidence, policymakers have invested substantial sums in health IT hoping that the technology will realize its promised benefits and support national health goals. Further empirical studies will help to identify where health IT has been successful and what factors have made these investments effective, as well as to identify gaps that may benefit from further policy efforts. This section presents an overview of both the promise and the evidence of how health IT supports policy goals.
29.2.1 Improving Care Quality and Health Outcomes
As informatics professionals understand intuitively, health IT has enormous potential to improve care quality and health outcomes, which are, of course, central policy goals (Table 29.1). Just as computers have revolutionized many other industries, from banking to baseball, information technology is beginning to revolutionize health care through innovative applications. Policymakers in the U.S. appear to recognize this potential as demonstrated by the multiple pieces of state and federal legislation passed in recent years related to health IT. This activity began with a focus on encouraging adoption and has shifted to improving interoperability, patient access to records, and innovation. Many other countries also specifically encourage adoption and use of health IT to improve health care quality.

Table 29.1 The promise of health IT (selected functionality)

| Health IT functionality | Expected effect on care quality | Expected effect on cost |
|---|---|---|
| Electronic health record (EHR) with clinical decision support (CDS) | Improved clinical decisions; fewer medication and diagnostic errors; timelier follow up | Fewer unnecessary tests |
| Health information exchange (HIE) | Improved clinical decisions | Reduced burden of information gathering; reduced duplicate testing |
| Patient decision aids | More personalized treatment | Fewer procedures |
| Telehealth and personal health records (PHR) | More timely and accessible interactions with clinicians | Fewer office visits |
| E-prescribing | Fewer errors | Reduced costs from errors |

Electronic health records (EHRs; Chap. 14) probably represent the form of health IT that has been evaluated most extensively and are now widely adopted in hospitals and clinics. EHRs with CPOE and clinical decision support (CDS; Chap. 24) have been extensively studied and evaluated in terms of quality, safety, and efficiency benefits, with most studies finding positive results. For example, one study found EHRs with medication-related CDS can reduce the number of adverse drug events from 30% to 84% (Ammenwerth et al. 2008). A study that examined EHR use in several hospitals in Texas found that there are reduced rates of inpatient mortality, complications, and length of stay when EHRs are used (Amarasingham et al. 2009). Studies like these have supported the promotion of EHRs, medication-related CDS, and e-prescribing, which are now widely, but not universally, adopted. Other functionalities, such as electronic patient decision aids, may have enormous potential to improve quality, safety and efficiency, but have not been evaluated as extensively and are not widely adopted (Friedberg et al. 2013).

Another component of health IT that may substantially improve quality of care is clinical data exchange, which is the ability to exchange health information among health care organizations and patients (see Chaps. 15 and 18). There is a great need for this kind of capability. In the U.S., the typical Medicare beneficiary visits seven different physicians in four different offices per year on average, and many patients with chronic conditions see more than 16 physicians per year (Pham et al. 2007). Not surprisingly, in such a fragmented system, information is often missing. One study shows that primary care doctors reported missing information in more than 13% of visits and other studies suggest much higher rates of missing data, affecting as much as 81% of visits (Smith et al. 2005; van Walraven et al. 2008; Tang et al. 1994). A study in one community found that there may be a need to exchange data among local medical groups in as many as 50% of patient visits (Rudin et al. 2011). Recent empirical studies have shown that real-world implementations of electronic clinical data exchange systems result in fewer duplicated procedures, reduced use of imaging, lower costs, and improved patient safety (Menachemi et al. 2018). However, these studies were concentrated in a small number of HIEs and some were restricted to a single vendor; it is not clear to what extent the results will generalize to other contexts.

Researchers and policymakers agree that improving the quality of health care must involve making it more patient-centric, and health IT will likely be crucial to achieving that goal on a large scale. For example, personal health records (PHRs) and patient portals were promoted by federal requirements in the US and are increasingly available – one recent survey found that roughly half of older adults have accessed a PHR (Malani 2018). PHRs give patients access to their clinical data (see Chap. 11), facilitate communication between patients and providers, and provide relevant and customized educational materials so that patients can take a more active role in their care (Tang et al. 2006; Halamka et al. 2008; Wells et al. 2014). PHRs may also incorporate patient decision-aids to help them to make critical health care decisions, considering their personal preferences (Fowler et al. 2011; Friedberg et al. 2013). Telehealth technologies, which enable patients to interact with clinicians over the Internet (see Chap. 20), may make health care more patient-centric by allowing patients to receive some of their care without having to go physically to the doctor's office. Few empirical studies to date have shown that these technologies result in improvements in care quality or health outcomes (Milani et al. 2017).

A concern of policymakers is that there is an emerging "digital divide" in health IT, in which disadvantaged groups who might benefit most have less access to health IT than more affluent groups. One empiric study of this issue found that minority groups were less likely to access web-based PHRs and, in general, minorities and disadvantaged groups have less web access than other groups (Yamin et al. 2011). On the other hand, adoption rates of mobile platforms do not show as much of a divide and PHRs are increasingly accessible via these platforms. Still, policies may be necessary to ensure the technology is designed and implemented with minorities in mind to prevent disparities in health care from getting worse and to ensure that the improvements in care quality enabled by health IT are shared by all. The digital divide has also been suggested to exist among hospitals.
One study found that although EHRs are widely adopted among hospitals, critical access hospitals lagged in adoption of performance measurement and patient engagement functions, suggesting an "advanced use" digital divide (Adler-Milstein et al. 2017). However, even if critical access hospitals are slower to adopt advanced functionalities, that may not indicate a permanent divide but rather a typical technology diffusion curve in which some organizations adopt faster than others.

Unfortunately, health IT also has the potential to facilitate harmful unintended side effects (Bloomrosen et al. 2011). In one study involving a pediatric intensive care unit in Pittsburgh, patient mortality increased in patients transferred in after computerized physician order entry (CPOE) was installed (Han et al. 2005). The study found that certain aspects of the ordering system and some of the implementation decisions restricted clinicians' ability to work efficiently, causing delays in treatment, which was especially deleterious because of the urgent nature of the children's conditions. Implementation decisions involving configuration of the system and changes in workflows appear to have been the major contributors to the increase in mortality – the same EHR product was installed in another hospital without such adverse impacts on mortality (Beccaro et al. 2006). Considering the volume of health IT studies, there are relatively few empirical assessments of adverse effects. Nonetheless, questions about the need to regulate the safety of EHRs are being debated. The need to protect patients from unintended harm must be balanced against the concern, discussed further later in this chapter, that over-regulation may impede innovation. Most researchers tend to believe that if health IT systems are well-designed and implemented with close attention to the needs of the users, these kinds of unintended consequences can be avoided and health IT systems will result in tremendous improvements in quality of care (Berg 1999). Researchers have developed guides to help organizations implement health IT in a way that minimizes safety risks and improves patient safety (Sittig et al. 2014). In addition to unintended consequences on patients' health, IT has also been shown to be a source of physician professional dissatisfaction (Sinsky et al. 2017).

29.2.2 Reducing Costs

In addition to improving quality, health IT is expected to reduce costs of care substantially
(Table 29.1). Policies that promoted the use of health IT were informed by projections based on models showing large potential savings for many forms of health IT. One study by the RAND Corporation estimated that EHRs could save more than $81 billion per year (Hillestad et al. 2005). Another study estimated that electronic clinical data exchange has the potential to save $77.8 billion per year (Walker et al. 2005). Many of these savings were expected to come from reductions in redundant tests and use of generic drugs, as well as reductions in adverse drug events and other errors that EHRs might prevent (Bates et al. 1998; Wang et al. 2003). Telehealth and PHRs were also projected to result in billions of dollars in savings (Kaelber and Pan 2008; Cusack et al. 2008). One weakness of these projections is that they relied on expert opinions for some point estimates because, other than several studies showing that EHRs reduce costs by reducing medical errors, few studies have tried to examine empirically the effect of health IT on costs (Tierney et al. 1987, 1993). Also, some of the projections have been criticized because they estimate potential savings rather than actual measured savings (Congressional Budget Office 2008). However, the projections do not include several types of savings that may result from providing better preventive care and care coordination, which would reduce the need for patients' use of high cost procedures in hospitals and emergency rooms. They also do not include potential reductions in costs that may result from decision aids for patients, which may, for example, reduce the number of unnecessary surgeries (O'Connor et al. 2009). And they do not include other innovations such as the impact of small changes in EHR displays. For example, one study found that when the fees associated with laboratory tests were shown to clinicians when they ordered the test, rates of test ordering decreased by more than 8% (Feldman et al. 2013).
The actual savings, therefore, may be much greater than the projections suggest. As described above, realizing these savings will likely require more than simply adopting the technology – it will also require redesigning
healthcare workflows to make greater use of the technology, and developing and spreading best practices.

29.2.3 Using Health IT to Measure Quality of Care
All health care stakeholders agree that a health care system should deliver high quality care. But how does one measure care quality? Current methods of quality measurement rely largely on administrative claims submitted by providers to insurers. These data may be useful for certain quality measurements such as for assessing a primary care physician’s mammography screening rates, but they lack important clinical details, such as the results of laboratory tests. They also do not represent a comprehensive picture of the care that is delivered, assess the appropriateness of most medical procedures, or determine if a patient’s quality of life has improved after treatment. Also, most patients in the U.S. switch insurance companies every few years, limiting the ability of any one insurer to measure quality improvements over longer periods of time, which is required to assess accurately the treatment of many medical conditions. Increasingly, clinical data available through EHRs are used for quality measurement (Ancker et al. 2015). Clinical data are much more comprehensive than administrative claims, and methods for measuring clinical quality using these data are growing. In the U.S., there is growing policy interest in creating such measures as shown in the National Quality Strategy and other reports (AHRQ 2017). This approach has been used in the United Kingdom (U.K.) where nearly 200 quality measures have regularly been assessed, with up to 25% of payment for general practitioners based on performance on these measures (Roland and Olesen 2016). While initially popular, U.K. physicians have become increasingly disenchanted with the administrative requirements of the program. There is growing support for developing patient-reported outcome measurements which may be integrated in PHRs, or obtained
through other mechanisms and integrated with the patient’s clinical data (Lavallee et al. 2016). However, using electronic clinical data to generate quality measures is also associated with problems. Studies have found that clinical data in EHRs are often incomplete, inaccurate, and may not be comparable across different EHRs (Chan et al. 2010; Colin et al. 2018). Existing measures also tend to focus more on adherence to care processes rather than patient outcomes (Burstin et al. 2016). More research is needed to develop and standardize meaningful quality measures that would be worth the burden of reporting them.
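To make the contrast with claims-based measurement concrete, the core of an EHR-derived clinical quality measure is a numerator-over-denominator computation against patient records. The sketch below is an illustrative toy, not a certified measure: the record field names and the 50-74 age band with a two-year mammography lookback are assumptions chosen for the example, not a published measure specification.

```python
from datetime import date

def mammography_screening_rate(patients, as_of=date(2021, 1, 1)):
    """Share of women aged 50-74 with a mammogram in the prior 2 years.

    Each patient record is a dict with hypothetical EHR-derived fields:
    "sex", "birth_date", and "last_mammogram" (a date or None).
    """
    denominator = 0
    numerator = 0
    for p in patients:
        age = (as_of - p["birth_date"]).days // 365
        if p["sex"] != "F" or not (50 <= age <= 74):
            continue  # outside the measure's eligible population
        denominator += 1
        last = p.get("last_mammogram")
        if last is not None and (as_of - last).days <= 730:
            numerator += 1  # screened within the two-year lookback
    return numerator / denominator if denominator else None

patients = [
    {"sex": "F", "birth_date": date(1960, 5, 1), "last_mammogram": date(2020, 3, 2)},
    {"sex": "F", "birth_date": date(1958, 7, 9), "last_mammogram": date(2017, 1, 5)},
    {"sex": "M", "birth_date": date(1955, 2, 3), "last_mammogram": None},
]
print(mammography_screening_rate(patients))  # 0.5
```

Even this toy illustrates why EHR-based measurement is harder than it looks: the result is only as good as the completeness of the underlying fields, which is precisely the data-quality problem noted above.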
29.2.4 Holding Providers Accountable for Cost and Quality
Currently, in the U.S., most care is delivered using a fee-for-service payment system, in which providers are paid for every procedure or patient visit. Under this payment method, providers have incentives to provide more care rather than less, which contributes to overtreatment (Lyu et al. 2017). It is therefore not surprising to find that in the U.S., costs are high and rising, nearly double those of many other industrial nations, and quality of care is mixed (Squires 2015). As Fig. 29.1 shows,
Fig. 29.1 Health care expenditures and life expectancy in the United States and ten other developed countries. (From Fuchs and Milstein (2011), with permission © Massachusetts Medical Society)
the U.S. spends more money per capita on health care than any other country by a wide margin. Yet, many studies suggest that the U.S. is far from the world's leader in overall care quality (Squires 2015). A seminal study by McGlynn et al. in 2003 found that patients in the U.S. received recommended care only about half of the time across a broad array of quality measures (McGlynn et al. 2003). An updated version in 2016 found that those results had not changed much (Levine et al. 2016). Policymakers are trying to replace the fee-for-service payment method with other methods that would hold providers accountable for the care they deliver. These policies create incentives for healthcare providers to constrain costs and may therefore motivate greater use of health IT tools to achieve this goal. In the U.S., one of the proposed mechanisms for accomplishing this is through Accountable Care Organizations (ACOs). As specified in the Affordable Care Act of 2010,2 an ACO is a group of providers who are held accountable, to some extent, for both the cost and the quality of a designated group of patients (Berwick 2011; McClellan et al. 2010). ACOs are still a work in progress, but early indications suggest that they may reduce some costs (McWilliams et al. 2018). The concept of ACOs depends on having an electronic health information infrastructure in place, including widespread use of EHRs, because health IT would enable ACOs to improve quality, reduce costs, and measure their performance. Without prior federal incentives for health IT adoption, these policies aiming to change incentives may not have been feasible. Many other countries have experimented with paying providers for quality and outcomes, or holding providers responsible for costs, although few have done both at the same time to a high degree. Health IT systems are critical for many of these efforts. Few policymakers or researchers believe providers can
2. http://www.healthcare.gov/law/index.html (Accessed 12/9/2012).
be held accountable to a substantial degree for the care they deliver without a robust health IT infrastructure.
29.2.5 Informatics Research
Although EHRs have become widespread, many health IT capabilities are still emerging, or standards have not yet been defined. New applications will still require additional research and development. For example, we are still in the early stages of understanding how to design applications for team care (Chap. 17), remote patient monitoring (Chaps. 20 and 21), online disease management (Chaps. 11 and 19), clinical decision-making (Chap. 24), alerts and reminders (Chap. 24), public health and disease surveillance (Chap. 18), clinical trial recruiting (Chap. 27), and evaluations of the impact of technologies on care and costs (Chap. 13). One concern is that most provider organizations, and increasingly even academic medical centers, are now using software applications made by private vendors, and innovating with them can be more challenging than with homegrown products. Private vendors may not be investing enough resources in research to produce transformational innovations (Shortliffe 2012). It will be essential to identify "sandboxes" in which new and innovative IT approaches can be developed and tested. More interactions between industry and academia may be a good way to accelerate progress (Rudin et al. 2016). Federal funding plays a major role in supporting this kind of upstream informatics research to help to incubate these new technologies but has been decreasing in recent years. Because the benefits of such research will accrue to everyone who uses the health care system, the investment of public funds is justified. Few private companies have taken the risk of doing this kind of experimental research to date in part because many health IT companies have been relatively small and were focused on adding the functionalities that are needed to meet federal certification requirements. More recently, some health IT
companies have become larger but they have not sponsored much research. It is too early to know what impact private companies will have on health IT innovation, but historically, most of the innovation in health informatics has occurred at universities and other government-funded research organizations affiliated with academic medical centers.
29.3 Beyond Adoption: Policy for Optimizing and Innovating with Health IT
Many governments around the world have previously implemented policies to accelerate the adoption of health IT. The U.K. achieved near universal adoption of EHRs because it devoted substantial resources to the effort early on and has a national health care system which directly manages most of the health care providers in the country (Cresswell and Sheikh 2009; Ashworth and Millett 2008). Most other industrialized nations had achieved high levels of adoption in primary care by the early 2000s (Jha et al. 2008). Countries that achieved particularly high levels of adoption in non-hospital settings include Denmark, the Netherlands, Sweden, Hong Kong, Singapore, Australia, and New Zealand. Similar to the U.K., these countries devoted national resources for this effort. Levels of adoption in hospitals, however, lagged in many countries. In the U.S., after years of slow adoption of health IT relative to other developed countries, the federal government began to address this issue in 2004 by establishing the Office of the National Coordinator for Health IT (ONC). This office is located within the U.S. Department of Health and Human Services and tasked with “promoting development of a nationwide Health IT infrastructure that allows for electronic use and exchange of information.” The importance of this office grew considerably in 2009 when Congress passed legislation that is considered a major landmark in the history of health IT policy: the Health Information Technology
for Economic and Clinical Health (HITECH) Act.3 This legislation authorized $27 billion in stimulus funds to be paid to health care providers who demonstrate "meaningful use" of electronic health records as defined by specific criteria (Blumenthal 2010). Although there is debate as to the extent to which HITECH accelerated EHR adoption in ambulatory clinics, EHR adoption increased dramatically among hospitals and clinics in the U.S. after these incentives were put in place (Mennemeyer et al. 2016). Today, over 90% of hospitals and clinics have adopted some form of EHR, but there is large variation in adoption of specific EHR capabilities (HealthIT 2016). Now that EHRs have become widely adopted, policymakers in many countries are shifting focus toward optimizing the technology and fostering innovation to achieve greater impact. In the U.S., policy efforts are now trying to improve interoperability and health information exchange among providers and patients and facilitate innovation by making health information accessible to third-party applications using application programming interfaces (APIs). U.S. policy has also incorporated many health IT efforts into a larger program that affects how the Centers for Medicare & Medicaid Services (CMS) pays health providers for services. This section describes some of these efforts.
29.3.1 Health Information Exchange
All countries have challenges sharing clinical data among providers (see Chap. 15). For many years, U.S. policy promoted data exchange through the formation of regional health information exchanges (HIEs). These organizations provided a variety of services including aggregating EHR data from local health care providers to create aggregate longitudinal patient records, automating the
3. https://www.healthit.gov/topic/laws-regulationand-policy/health-it-legislation (Accessed 10/16/2018).
delivery of laboratory results, integrating with pharmacies to facilitate e-prescribing, and facilitating public health and quality reporting. Although some HIEs are well-established, the number of these organizations has been declining and many of the remaining ones may not be financially viable (Adler-Milstein et al. 2016). Why is it so difficult to establish an HIE? Part of the problem is that EHR products did not always use the same technical data standards and are not interoperable. Recently developed technical and semantic standards have made considerable progress in making the standards robust (Health Level Seven International 2019). However, additional custom programming is still required to integrate EHRs with HIEs. HIEs face many other challenges including: recruiting providers who are reluctant to share data with competing medical groups, privacy and security concerns, legal issues, HIE-related fees, training clinicians to use the HIE, and the lack of a business case (Chap. 15). The business case problem is perhaps the most pressing – for a business to thrive, key stakeholders must be willing to pay for the product or service. In HIE, the primary financial beneficiaries are employers and insurers, but they have been reluctant to pay for the exchange services (Walker et al. 2005). While regional HIEs have faltered, EHR vendor-based networks have emerged as an alternative, but the extent to which they will succeed in the long term is uncertain. These networks may be limited to one vendor or involve a consortium of vendors. Currently, the most prominent vendor-based networks are Epic CareEverywhere, CareQuality, and the CommonWell Health Alliance. Policymakers have recognized that for data exchange to be comprehensive, these networks as well as regional HIEs will need to interact and share data. To address this concern, there are plans to establish a "trusted exchange framework" that facilitates this interaction (HealthIT 2018).
Policymakers have also identified information blocking on the part of vendors and providers as a concern and plan to issue regulations to prevent it.
Some have proposed a different approach to data exchange in which patients can aggregate and control access to their complete health records (Szolovits et al. 1994). The history and details of this model are explained in Chap. 15. Interest in this approach has increased recently, but it is too early to tell whether it will become widely adopted. No country to date has completely solved the problem of clinical data exchange. In every country that attempts to foster data exchange, the hardest issues appear to be socio-political rather than technical, and there is clear agreement that health IT policy is particularly important in addressing these problems, especially in establishing standards. The U.K. has set up a "spine" that allows summary care documents to be widely exchanged (Greenhalgh et al. 2010). However, the overall program has encountered major political difficulties and has been largely dismantled. Canada has established a program called Canada Health Infoway, which has emphasized setting up an infrastructure for data exchange (Rozenblum et al. 2011). While that effort has been somewhat successful, relatively little clinical data is being exchanged to date, in part because the adoption rate of electronic health records remains low. In Scandinavia, there has been substantial concern about the privacy aspects of data exchange, especially in Sweden, though data exchange is taking place in Denmark and its use is growing.
29.3.2 Patient Portals and Telehealth
Although EHRs have become widely adopted, other forms of health IT are still lagging. To encourage more patient-centric care, many countries are trying to foster the adoption of patient portals and telehealth (see Chaps. 11 and 20). In the U.S., federal incentives promoted patient portals, and adoption rates are growing. To promote telehealth, policymakers are exploring the possibility of reimbursing for telehealth care, which would probably
Health Information Technology Policy
improve adoption of this technology considerably (Mehrotra et al. 2016). Even though many states have passed "parity laws" that require commercial insurers to reimburse for telehealth visits, most healthcare encounters are still in-person.
29.3.3 Application Programming Interfaces
To accelerate innovation, policymakers in the U.S. have begun to promote Application Programming Interfaces (APIs) for EHR data. APIs are software mechanisms that allow different applications to connect to one another and share information. All modern software uses APIs for purposes ranging from communicating with a computer's operating system to querying a website for the latest news stories. In healthcare, one use of APIs is to allow patients to more easily download their latest medical data into an application of their choosing, such as a smartphone application that helps them organize and understand their health data. Another use of APIs is to allow providers to install third-party applications for use within their EHRs. If patients and providers can pick and choose applications, a new market of innovative applications may arise to take advantage of these data. Standardization of APIs across EHRs is critical because otherwise application developers would have to customize their products to integrate with every EHR vendor. In the U.S., new policies will require EHR vendors to support APIs as a condition for certification and for receiving certain payments (Leventhal 2018).
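To make the idea of a standardized EHR API concrete, the following minimal sketch (Python, standard library only) shows how a patient-facing app might retrieve a patient's recent lab results from a server that implements the HL7 FHIR Observation search interface, the kind of standardized API on which approaches such as SMART on FHIR build. The server URL, patient ID, and function names here are illustrative assumptions, not part of any particular vendor's product.

```python
import json
from urllib.request import Request, urlopen

def parse_observation_bundle(bundle):
    """Extract (code text, value, unit) tuples from a FHIR searchset Bundle dict."""
    results = []
    for entry in bundle.get("entry", []):
        obs = entry.get("resource", {})
        code = obs.get("code", {}).get("text", "unknown")
        qty = obs.get("valueQuantity", {})  # absent for non-quantitative results
        results.append((code, qty.get("value"), qty.get("unit")))
    return results

def fetch_latest_observations(base_url, patient_id, count=5):
    """Query a FHIR R4 server for a patient's most recent Observation resources,
    using the standard FHIR search parameters `patient`, `_sort`, and `_count`.
    `base_url` is a hypothetical FHIR endpoint, e.g. "https://fhir.example.org"."""
    url = (f"{base_url}/Observation?patient={patient_id}"
           f"&_sort=-date&_count={count}")
    req = Request(url, headers={"Accept": "application/fhir+json"})
    with urlopen(req) as resp:
        return parse_observation_bundle(json.load(resp))
```

Because any conformant server exposes the same resource types and search parameters, the same client code works against different vendors' EHRs; note that `parse_observation_bundle` is a pure function, so it can be exercised on a sample Bundle without a network connection.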
29.4 Policies to Ensure Safety of Health IT
As adoption of health IT accelerates and new innovations are developed, it is important to be vigilant about, and to reduce, the risk of unintended harmful side effects related to health IT use. Harm could arise from deficiencies in many areas when designing and deploying complex systems, including poor usability, inadequate testing and quality assurance, software flaws, poor implementation decisions, inattention to workflow design, or inadequate training. Policymakers have funded development of frameworks and guidelines to help implement and use health IT in a way that addresses safety concerns (Sittig and Singh 2012).
29.4.1 Should Health IT Be Regulated as Medical Devices?
One policy option for reducing the likelihood of health IT-related medical errors is to create regulations that require health IT products to adhere to strict principles of safe design and to be tested and certified (see also Chap. 12) (Shuren et al. 2018). This is how many medical devices are regulated by the U.S. Food and Drug Administration.4 While this approach may ensure some degree of patient safety, the regulatory burden would increase the price of health IT systems, raise barriers to entry for new companies, and could stifle innovation. Also, even with regulations, health IT products might still have safety issues because software products can be used in many ways, unlike other medical devices that have more limited utility. There is an active debate about the appropriate types of regulation for medical apps used by patients. Current estimates suggest that more than 150,000 health apps are available for download, but analysts have found that few have demonstrated clinical utility (Singh et al. 2016). The FDA does not regulate most apps but has recently begun a pilot "precertification" program for digital health, which will provide information about vendors' software quality control processes but does not involve evaluations of outcomes (Bates et al. 2018; Lee and Kesselheim 2018). This is controversial, and some feel it does not go far enough (Bates et al. 2018).
4 http://www.fda.gov/ (Accessed 12/10/12).
29.4.2 Alternative Ways to Improve Patient Safety
There are many other policy options to support patient safety (Committee on Patient Safety and Health Information Technology; Institute of Medicine 2011). Policies may fund training programs to educate clinicians in how to use health IT safely and alert them to common mistakes. Policies might encourage providers to report problems with software, including usability issues and bugs, so that vendors can fix them quickly. Policies might also help to establish programs in which users can rate health IT products. Finally, funding research into the science of patient safety would improve our knowledge of how to design better products and identify risks of errors (Shekelle et al. 2011).
29.5 Policies to Ensure Privacy and Security of Electronic Health Information
It is almost impossible to have a conversation about digital medical records without discussing issues of privacy and security. Although the topic of privacy arose in the discussion of ethics in Chap. 12, it also has policy implications and warrants mention here. As healthcare has become digitized, there has been an increase in security events (Liu et al. 2015). Protecting privacy and security is clearly an important policy goal.
29.5.1 Regulating Privacy
The Health Insurance Portability and Accountability Act (HIPAA) of 1996 and subsequent regulations created a legal category of "protected health information," which was defined to encompass most forms
5 http://www.hhs.gov/ocr/privacy/index.html (Accessed 12/9/2012).
of clinical data. Covered entities, which include providers and insurers, are legally required under this law to safeguard electronic health information and face fines if they do not. Many states have additional privacy laws regarding data exchange (e.g., for mental health and HIV status). The effectiveness of these privacy-protective laws has not been rigorously evaluated. They can inadvertently reduce privacy protection, particularly when exchanging data across state lines, and have been shown to slow the adoption of EHRs (Miller and Tucker 2009; Harmonizing State Privacy Law Collaborative 2009). In other countries, privacy has also received a good deal of debate. Most recently, the European General Data Protection Regulation (GDPR) went into effect in 2018 and goes beyond healthcare in scope by encouraging "privacy by design" for all software products that store personal data (Haug 2018). Governments are still trying to find the best policies to protect the privacy of medical records without slowing the adoption of health IT.
29.5.2 Security
Now that healthcare entities are mostly digital, they are increasingly targeted by cyberattacks, which may aim to steal patient data, demand money in return for unlocking a system, or make a political statement. HIPAA includes security policies that require health providers and other covered entities to implement various safeguards, and if data are breached, the federal government may charge a fine. The recent increase in cyberattacks on hospitals and other healthcare stakeholders suggests that these regulations may not be adequate, and policymakers are considering additional moves. Security concerns exist in all countries. For example, the U.K.'s National Health Service recently experienced a cyberattack that crippled many hospitals and required many clinics to close down completely (Clarke and Youngstein 2017).
29.5.3 Record Matching and Linking
For health IT to be effective, an essential prerequisite is that patients must be matched to their health data, and electronic records for the same patient must be linked together. If patients' identity attributes (e.g., name, address, date of birth) are used, matching and linking errors often occur because many patients share attributes, attributes change over time, and clerical errors are common. Many countries have adopted a unique health identifier (UHI) to facilitate these processes. However, in response to concerns of privacy advocates, the U.S. Congress prohibited HHS from expending federal dollars to support development of a UHI. There is little evidence suggesting that UHIs pose an increased risk of privacy violations and, in fact, not having a UHI may be even riskier because many other kinds of personal data may be collected and used instead (Greenberg et al. 2009). But UHIs require substantial federal resources to implement and may not address all matching and linking issues. Currently, some estimates suggest that error rates in linking records shared across providers in the U.S. can be as high as 50%. Policymakers are therefore interested in alternative approaches, which include improving linking algorithms to better match identity attributes, defining standards for the identity attributes, using biometrics-based methods, and allowing patients to participate more directly in the process, such as by verifying their phone number with their mobile phone or managing their data on their smartphone (Rudin et al. 2018). There are advantages and disadvantages to every approach, and it is likely that multiple approaches will be needed to substantially reduce matching and linking errors (Pew 2018). Policymakers may play a critical role in overseeing progress and supporting research to develop and more rigorously evaluate solutions.
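To illustrate how attribute-based linking algorithms score candidate matches, and why they are error-prone when attributes are shared or mistyped, here is a minimal, illustrative sketch in Python (standard library only). The attribute weights and threshold are hypothetical choices for the example, not values from any production matching system.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Normalized string similarity in [0, 1], via stdlib difflib."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def match_score(rec_a, rec_b, weights=None):
    """Weighted similarity of two patient records over identity attributes.

    Records are dicts with 'name', 'dob', and 'address' keys; the default
    weights are illustrative only.
    """
    weights = weights or {"name": 0.4, "dob": 0.4, "address": 0.2}
    return sum(w * similarity(rec_a[k], rec_b[k]) for k, w in weights.items())

def link_records(query, candidates, threshold=0.85):
    """Return the candidate records whose score clears the threshold, best first."""
    scored = sorted(((match_score(query, c), c) for c in candidates),
                    key=lambda sc: sc[0], reverse=True)
    return [c for score, c in scored if score >= threshold]
```

The core difficulty the text describes shows up directly in this sketch: a slightly different spelling or an old address lowers the score of a true match, while two distinct patients with similar attributes can score deceptively high, so the threshold trades missed links against false links.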
29.6 The Growing Importance of Public Policy in Informatics
Public policy is becoming increasingly important to the field of informatics. Policies affect everything from which research projects receive funding to whether a physician in a solo practice allows her patients to access their medical records online. Many of the health IT policy issues we discuss in this chapter are just beginning to attract attention from policymakers, and further research is needed to understand the best role for policy. It is likely that new policy issues will emerge as technology capabilities become more advanced. For example, artificial intelligence may help with many clinical applications, but policies may be needed to ensure it is applied safely and to ensure accountability. Traditionally, most informatics research has focused on the development of new technologies and how they integrate into clinical practice. Relatively few studies provide advice to policymakers on health IT policy issues, even though policies have enormous consequences for informatics research and practice. We hope that researchers and policymakers will recognize that technology and policy issues affect each other, and that both perspectives are necessary to understand how information technology can be used to improve health care.
Suggested Readings
Agency for Healthcare Research and Quality. (2013). A robust health data infrastructure. McLean, VA. Retrieved from https://www.healthit.gov/sites/default/files/ptp13-700hhs_white.pdf. This white paper makes the case for public policy to promote open APIs to improve interoperability and data exchange, and to promote innovation in healthcare.
Bloomfield, R. A., Jr., Polo-Wood, F., Mandel, J. C., & Mandl, K. D. (2017). Opening the Duke electronic health record to apps: Implementing SMART on FHIR. International Journal of Medical Informatics, 99, 1–10. https://doi.org/10.1016/j.ijmedinf.2016.12.005. This
research study discusses a successful early attempt to use APIs within a live EHR and emerging technical standards to implement patient- and provider-facing apps.
Clarke, R., & Youngstein, T. (2017). Cyberattack on Britain's National Health Service – a wake-up call for modern medicine. New England Journal of Medicine, 377(5), 409–411. https://doi.org/10.1056/NEJMp1706754. This brief perspective describes a harrowing cyberattack on the U.K.'s healthcare system and offers suggestions to help improve preparedness.
Jones, S. S., Heaton, P. S., Rudin, R. S., & Schneider, E. C. (2012). Unraveling the IT productivity paradox – lessons for health care. New England Journal of Medicine, 366(24), 2243–2245. This brief perspective addresses the contentious issue of why few studies have been able to show that health IT produces an improvement in economic productivity. It makes its case by pointing out that the IT industry had the same problem in the 1980s and 1990s but managed to overcome these difficulties through better measurement of productivity, improved management of technology, and better usability.
Sinsky, C., Colligan, L., Li, L., Prgomet, M., Reynolds, S., Goeders, L., et al. (2016). Allocation of physician time in ambulatory practice: A time and motion study in 4 specialties. Annals of Internal Medicine, 165(11), 753–760. https://doi.org/10.7326/M16-0961. This study reported direct observation of 57 U.S. physicians and found they spend almost 50% of their time on EHR and desk work, much more than the time spent in direct clinical face time with patients. Other work by some of the same authors has identified EHRs as a source of professional dissatisfaction and burnout.
Sittig, D. F., & Singh, H. (2012). Electronic health records and national patient-safety goals. New England Journal of Medicine, 367(19), 1854–1860. https://doi.org/10.1056/NEJMsb1205420.
This article proposes a 3-phased approach to implementing EHRs in a way that improves safety: address safety concerns unique to EHR technology, mitigate safety concerns arising from failure to use EHRs appropriately, and use EHRs to monitor and improve patient safety.
Questions for Discussion
1. What are the key barriers to effective use of EHRs and exchange of health information? Which of these challenges are amenable to public policy decisions?
2. What are the key barriers to innovation in health IT? What can be done to accelerate innovation?
3. What might be some of the tradeoffs of using administrative claims data compared with using clinical data from health IT systems for care quality analysis?
4. What might be some of the tradeoffs of promoting health IT by paying for use compared with paying for quality?
5. Should health IT be regulated the same way as devices are regulated to protect patient safety? Why or why not?
6. If research finds strong evidence of a digital divide in health IT, what policy actions should be taken?
7. What kinds of health IT functionality are needed to support accountable care organizations and patient-centered medical homes?
References
Adler-Milstein, J., Lin, S. C., & Jha, A. K. (2016). The number of health information exchange efforts is declining, leaving the viability of broad clinical data exchange uncertain. Health Affairs (Millwood), 35(7), 1278–1285. https://doi.org/10.1377/hlthaff.2015.1439. Adler-Milstein, J., Holmgren, A. J., Kralovec, P., Worzala, C., Searcy, T., & Patel, V. (2017). Electronic health record adoption in US hospitals: The emergence of a digital "advanced use" divide. Journal of the American Medical Informatics Association, 24(6), 1142–1148. https://doi.org/10.1093/jamia/ocx080. Agency for Healthcare Research and Quality. (2017, March). About the National Quality Strategy. Retrieved from https://www.ahrq.gov/workingforquality/about/index.html. Amarasingham, R., Plantinga, L., Diener-West, M., Gaskin, D. J., & Powe, N. R. (2009). Clinical information technologies and inpatient outcomes: A multiple hospital study. Archives of Internal Medicine, 169(2), 108–114. Ammenwerth, E., Schnell-Inderst, P., Machan, C., & Siebert, U. (2008). The effect of electronic prescribing on medication errors and adverse drug events: A systematic review. Journal of the American Medical Informatics Association: JAMIA, 15(5), 585–600.
Ancker, J. S., Kern, L. M., Edwards, A., Nosal, S., Stein, D. M., Hauser, D., & Kaushal, R. (2015). Associations between healthcare quality and use of electronic health record functions in ambulatory care. Journal of the American Medical Informatics Association, 22(4), 864–871. https://doi.org/10.1093/jamia/ocv030. Ashworth, M., & Millett, C. (2008). Quality improvement in UK primary care: The role of financial incentives. The Journal of Ambulatory Care Management, 31(3), 220–225. Bates, D. W., Leape, L. L., Cullen, D. J., Laird, N., Petersen, L. A., Teich, J. M., et al. (1998). Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA, 280(15), 1311–1316. Bates, D. W., Landman, A., & Levine, D. M. (2018). Health apps and health policy: What is needed? JAMA, 320, 1975. https://doi.org/10.1001/jama.2018.14378. Beccaro, M. A. D., Jeffries, H. E., Eisenberg, M. A., & Harry, E. D. (2006). Computerized provider order entry implementation: No association with increased mortality rates in an intensive care unit. Pediatrics, 118(1), 290–295. Berg, M. (1999). Patient care information systems and health care work: A sociotechnical approach. International Journal of Medical Informatics, 55(2), 87–101. Berwick, D. M. (2011). Launching accountable care organizations – the proposed rule for the Medicare Shared Savings Program. The New England Journal of Medicine, 364, e32. Bloomrosen, M., Starren, J., Lorenzi, N. M., Ash, J. S., Patel, V. L., & Shortliffe, E. H. (2011). Anticipating and addressing the unintended consequences of health IT and policy: A report from the AMIA 2009 health policy meeting. Journal of the American Medical Informatics Association: JAMIA, 18(1), 82–90. Blumenthal, D. (2010). Launching HITECH. The New England Journal of Medicine, 362(5), 382–385. Burstin, H., Leatherman, S., & Goldmann, D. (2016). The evolution of healthcare quality measurement in the United States.
Journal of Internal Medicine, 279(2), 154–159. https://doi.org/10.1111/joim.12471. Chan, K. S., Fowles, J. B., & Weiner, J. P. (2010). Review: Electronic health records and the reliability and validity of quality measures: A review of the literature. Medical Care Research and Review, 67(5), 503–527. Clarke, R., & Youngstein, T. (2017). Cyberattack on Britain's National Health Service – a wake-up call for modern medicine. New England Journal of Medicine, 377(5), 409–411. https://doi.org/10.1056/NEJMp1706754. Colin, N. V., Cholan, R. A., Sachdeva, B., Nealy, B. E., Parchman, M. L., & Dorr, D. A. (2018). Understanding the impact of variations in measurement period reporting for electronic clinical quality measures. EGEMS (Washington, DC), 6(1), 17. https://doi.org/10.5334/egems.235. Committee on Patient Safety and Health Information Technology; Institute of Medicine. (2011). Health IT and patient safety: Building safer systems for better care. Washington, DC: National Academies Press. Congressional Budget Office. (2008). Evidence on the costs and benefits of health information technology. Washington, DC: CBO Paper. Cresswell, K., & Sheikh, A. (2009). The NHS care record service (NHS CRS): Recommendations from the literature on successful implementation and adoption. Informatics in Primary Care, 17(3), 153–160. Cusack, C. M., Pan, E., Hook, J. M., Vincent, A., Kaelber, D. C., & Middleton, B. (2008). The value proposition in the widespread use of telehealth. Journal of Telemedicine and Telecare, 14(4), 167–168. Feldman, L. S., Shihab, H. M., Thiemann, D., Yeh, H.-C., Ardolino, M., Mandell, S., & Brotman, D. J. (2013). Impact of providing fee data on laboratory test ordering: A controlled clinical trial. JAMA Internal Medicine, 173(10), 903–908. https://doi.org/10.1001/jamainternmed.2013.232. Fowler, F. J., Levin, C. A., & Sepucha, K. R. (2011). Informing and involving patients to improve the quality of medical decisions. Health Affairs (Millwood), 30(4), 699–706. Friedberg, M. W., Van Busum, K., Wexler, R., Bowen, M., & Schneider, E. C. (2013). A demonstration of shared decision making in primary care highlights barriers to adoption and potential remedies. Health Affairs (Millwood), 32(2), 268–275. https://doi.org/10.1377/hlthaff.2012.1084. Fuchs, V. R., & Milstein, A. (2011). The $640 billion question–why does cost-effective care diffuse so slowly? The New England Journal of Medicine, 364(21), 1985–1987. Greenberg, M. D., Ridgely, M. S., & Hillestad, R. J. (2009). Crossed wires: How yesterday's privacy rules might undercut tomorrow's nationwide health information network. Health Affairs (Millwood), 28(2), 450–452.
Greenhalgh, T., Stramer, K., Bratan, T., Byrne, E., Russell, J., & Potts, H. W. W. (2010). Adoption and nonadoption of a shared electronic summary record in England: A mixed-method case study. BMJ, 340, c3111. Halamka, J. D., Mandl, K. D., & Tang, P. C. (2008). Early experiences with personal health records. Journal of the American Medical Informatics Association: JAMIA, 15(1), 1–7. Han, Y. Y., Carcillo, J. A., Venkataraman, S. T., Clark, R. S. B., Watson, R. S., Nguyen, T. C., et al. (2005). Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics, 116(6), 1506–1512.
Harmonizing State Privacy Law Collaborative. (2009). Harmonizing state privacy law. Washington, DC: Office of the National Coordinator of Health Information Technology. Haug, C. J. (2018). Turning the tables – the new European general data protection regulation. New England Journal of Medicine, 379(3), 207–209. https://doi.org/10.1056/NEJMp1806637. Health Level Seven International. (2019). Retrieved from https://www.hl7.org/. HealthIT. (2016). Office-based physician electronic health record adoption. Retrieved from https://dashboard.healthit.gov/quickstats/pages/physicianehr-adoption-trends.php. HealthIT. (2018). Draft trusted exchange network. Retrieved from https://www.healthit.gov/sites/default/files/draft-trusted-exchange-framework.pdf. Hillestad, R., Bigelow, J., Bower, A., Girosi, F., Meili, R., Scoville, R., & Taylor, R. (2005). Can electronic medical record systems transform health care? Potential health benefits, savings, and costs. Health Affairs (Millwood), 24, 1103–1117. Jha, A. K., Doolan, D., Grandt, D., Scott, T., & Bates, D. W. (2008). The use of health information technology in seven nations. International Journal of Medical Informatics, 77(12), 848–854. Jones, S. S., Heaton, P. S., Rudin, R. S., & Schneider, E. C. (2012). Unraveling the IT productivity paradox – lessons for health care. The New England Journal of Medicine, 366(24), 2243–2245. Jones, S. S., Rudin, R. S., Perry, T., & Shekelle, P. G. (2014). Health information technology: An updated systematic review with a focus on meaningful use. Annals of Internal Medicine, 160(1), 48–54. https://doi.org/10.7326/M13-1531. Kaelber, D., & Pan, E. C. (2008). The value of personal health record (PHR) systems. AMIA Annual Symposium Proceedings, 2008, 343–347. Lavallee, D. C., Chenok, K. E., Love, R. M., Petersen, C., Holve, E., Segal, C. D., & Franklin, P. D. (2016). Incorporating patient-reported outcomes into health care to engage patients and enhance care.
Health Affairs (Millwood), 35(4), 575–582. https://doi.org/10.1377/hlthaff.2015.1362. Lee, T. T., & Kesselheim, A. S. (2018). U.S. Food and Drug Administration precertification pilot program for digital health software: Weighing the benefits and risks. Annals of Internal Medicine, 168(10), 730–732. https://doi.org/10.7326/m17-2715. Leventhal, R. (2018). BREAKING: CMS finalizes "promoting interoperability" rule. Retrieved from https://www.healthcare-informatics.com/article/valuebased-care/breaking-cms-finalizes-promotinginteroperability-rule. Levine, D. M., Linder, J. A., & Landon, B. E. (2016). The quality of outpatient care delivered to adults in the United States, 2002 to 2013. JAMA Internal Medicine, 176(12), 1778–1790. https://doi.org/10.1001/jamainternmed.2016.6217. Liu, V., Musen, M. A., & Chou, T. (2015). Data breaches of protected health information in the United
States. JAMA, 313(14), 1471–1473. https://doi.org/10.1001/jama.2015.2252. Lyu, H., Xu, T., Brotman, D., Mayer-Blackwell, B., Cooper, M., Daniel, M., et al. (2017). Overtreatment in the United States. PLoS One, 12(9), e0181970. https://doi.org/10.1371/journal.pone.0181970. Malani, P. (2018). Logging in: Using patient portals to access health information. Retrieved from https://deepblue.lib.umich.edu/handle/2027.42/145683. McClellan, M., McKethan, A. N., Lewis, J. L., Roski, J., & Fisher, E. S. (2010). A national strategy to put accountable care into practice. Health Affairs (Millwood), 29(5), 982–990. McGlynn, E. A., Asch, S. M., Adams, J., Keesey, J., Hicks, J., DeCristofaro, A., & Kerr, E. A. (2003). The quality of health care delivered to adults in the United States. The New England Journal of Medicine, 348, 2635–2645. McWilliams, J. M., Hatfield, L. A., Landon, B. E., Hamed, P., & Chernew, M. E. (2018). Medicare spending after 3 years of the Medicare Shared Savings Program. New England Journal of Medicine, 379(12), 1139–1149. https://doi.org/10.1056/NEJMsa1803388. Mehrotra, A., Jena, A. B., Busch, A. B., Souza, J., Uscher-Pines, L., & Landon, B. E. (2016). Utilization of telemedicine among rural medicare beneficiaries. JAMA, 315(18), 2015–2016. https://doi.org/10.1001/jama.2016.2186. Menachemi, N., Rahurkar, S., Harle, C. A., & Vest, J. R. (2018). The benefits of health information exchange: An updated systematic review. Journal of the American Medical Informatics Association, 25(9), 1259–1265. https://doi.org/10.1093/jamia/ocy035. Mennemeyer, S. T., Menachemi, N., Rahurkar, S., & Ford, E. W. (2016). Impact of the HITECH Act on physicians' adoption of electronic health records. Journal of the American Medical Informatics Association, 23(2), 375–379. https://doi.org/10.1093/jamia/ocv103. Milani, R. V., Lavie, C. J., Bober, R. M., Milani, A. R., & Ventura, H. O. (2017). Improving hypertension control and patient engagement using digital tools.
The American Journal of Medicine, 130(1), 14–20. https://doi.org/10.1016/j.amjmed.2016.07.029. Miller, A. R., & Tucker, C. (2009). Privacy protection and technology diffusion: The case of electronic medical records. Management Science, 55, 1077–1093. O'Connor, A. M., Bennett, C. L., Stacey, D., Barry, M., Col, N. F., Eden, K. B., et al. (2009). Decision aids for people facing health treatment or screening decisions. Cochrane Database of Systematic Reviews, (3), CD001431. Pew. (2018). Enhanced patient matching is critical to achieving full promise of digital health records. Retrieved from https://www.pewtrusts.org/-/media/assets/2018/09/healthit_enhancedpatientmatching_report_final.pdf. Pham, H. H., Schrag, D., O'Malley, A. S., Wu, B., & Bach, P. B. (2007). Care patterns in Medicare and
their implications for pay for performance. The New England Journal of Medicine, 356(11), 1130–1139. Roland, M., & Olesen, F. (2016). Can pay for performance improve the quality of primary care? BMJ, 354, i4058. https://doi.org/10.1136/bmj.i4058. Rozenblum, R., Jang, Y., Zimlichman, E., Salzberg, C., Tamblyn, M., Buckeridge, D., et al. (2011). A qualitative study of Canada's experience with the implementation of electronic health information technology. CMAJ: Canadian Medical Association Journal, 183(5), E281–E288. Rudin, R. S., Salzberg, C. A., Szolovits, P., Volk, L. A., Simon, S. R., & Bates, D. W. (2011). Care transitions as opportunities for clinicians to use data exchange services: How often do they occur? Journal of the American Medical Informatics Association: JAMIA, 18(6), 853–858. Rudin, R. S., Bates, D. W., & MacRae, C. (2016). Accelerating innovation in Health IT. New England Journal of Medicine, 375(9), 815–817. https://doi.org/10.1056/NEJMp1606884. Rudin, R. S., Hillestad, R., Ridgely, M. S., Qureshi, N. S., II, Davis, J. S., & Fischer, S. H. (2018). Defining and evaluating patient-empowered approaches to improving record matching. Santa Monica: The RAND Corporation. Shekelle, P. G., Pronovost, P. J., Wachter, R. S., Taylor, S. L., Dy, S. M., Foy, R., et al. (2011). Advancing the science of patient safety. Annals of Internal Medicine, 154(10), 693–696. Shortliffe, T. (2012). The future of biomedical informatics: A perspective from academia. Keynote Presentation, Medical Informatics Europe 2012, Pisa, Italy. Shuren, J., Patel, B., & Gottlieb, S. (2018). FDA regulation of mobile medical apps. JAMA, 320(4), 337–338. https://doi.org/10.1001/jama.2018.8832. Singh, K., Drouin, K., Newmark, L. P., Lee, J., Faxvaag, A., Rozenblum, R., et al. (2016). Many mobile health apps target high-need, high-cost populations, but gaps remain. Health Affairs (Millwood), 35(12), 2310–2318. https://doi.org/10.1377/hlthaff.2016.0578. Sinsky, C. A., Dyrbye, L. N., West, C.
P., Satele, D., Tutty, M., & Shanafelt, T. D. (2017). Professional satisfaction and the career plans of US physicians. Mayo Clinic Proceedings, 92(11), 1625–1635. https://doi.org/10.1016/j.mayocp.2017.08.017. Sittig, D. F., & Singh, H. (2012). Electronic health records and national patient-safety goals. New England Journal of Medicine, 367(19), 1854–1860. https://doi.org/10.1056/NEJMsb1205420. Sittig, D. F., Ash, J. S., & Singh, H. (2014). The SAFER guides: Empowering organizations to improve the safety and effectiveness of electronic health records. The American Journal of Managed Care, 20(5), 418–423.
Smith, P. C., Araya-Guerra, R., Bublitz, C., Parnes, B., Dickinson, L. M., Vorst, R. V., et al. (2005). Missing clinical information during primary care visits. JAMA, 293(5), 565–571. Squires, D. (2015). U.S. health care from a global perspective: Spending, use of services, prices, and health in 13 countries. Retrieved from https://www.commonwealthfund.org/publications/issue-briefs/2015/oct/us-health-care-global-perspective. Szolovits, P., Doyle, J., Long, W. J., Kohane, I., & Pauker, S. G. (1994). Guardian angel: Patient-centered health information systems. Retrieved from https://smarthealthit.org/1994/10/guardian-angel-patientcentered-health-information-systems/. Tang, P. C., Fafchamps, D., & Shortliffe, E. H. (1994). Traditional medical records as a source of clinical data in the outpatient setting. Proceedings of the Annual Symposium Computer Applications in Medical Care, 575–579. Tang, P. C., Ash, J. S., Bates, D. W., Overhage, J. M., & Sands, D. Z. (2006). Personal health records: Definitions, benefits, and strategies for overcoming barriers to adoption. Journal of the American Medical Informatics Association: JAMIA, 13(2), 121–126. Tierney, W. M., McDonald, C. J., Martin, D. K., & Rogers, M. P. (1987). Computerized display of past test results. Effect on outpatient testing. Annals of Internal Medicine, 107(4), 569–574. Tierney, W. M., Miller, M. E., Overhage, J. M., & McDonald, C. J. (1993). Physician inpatient order writing on microcomputer workstations. Effects on resource utilization. JAMA, 269(3), 379–383. van Walraven, C., Taljaard, M., Bell, C. M., Etchells, E., Zarnke, K. B., Stiell, I. G., & Forster, A. J. (2008). Information exchange among physicians caring for the same patient in the community. Canadian Medical Association Journal, 179(10), 1013–1018. Walker, J., Pan, E., Johnston, D., Adler-Milstein, J., Bates, D. W., & Middleton, B. (2005). The value of health care information exchange and interoperability.
Health Affairs (Millwood), (Suppl Web Exclusives), W5–10–W5–18. Wang, S. J., Middleton, B., Prosser, L. A., Bardon, C. G., Spurr, C. D., Carchidi, P. J., et al. (2003). A cost-benefit analysis of electronic medical records in primary care. American Journal of Medicine, 114(5), 397–403. Wells, S., Rozenblum, R., Park, A., Dunn, M., & Bates, D. W. (2014). Personal health records for patients with chronic disease: A major opportunity. Applied Clinical Informatics, 5(2), 416–429. Yamin, C. K., Emani, S., Williams, D. H., Lipsitz, S. R., Karson, A. S., Wald, J. S., & Bates, D. W. (2011). The digital divide in adoption and use of a personal health record. Archives of Internal Medicine, 171(6), 568–574.
987
The Future of Informatics in Biomedicine

James J. Cimino, Edward H. Shortliffe, Michael F. Chiang, David Blumenthal, Patricia Flatley Brennan, Mark Frisse, Eric Horvitz, Judy Murphy, Peter Tarczy-Hornoch, and Robert M. Wachter

Contents
30.1 The Present and Its Evolution from the Past
30.2 Looking to the Future
References
© Springer Nature Switzerland AG 2021 E. H. Shortliffe, J. J. Cimino (eds.), Biomedical Informatics, https://doi.org/10.1007/978-3-030-58721-5_30
Learning Objectives

After reading this chapter, you should know the answers to these questions:
- What does the past evolution of the field of biomedical informatics tell us about its future trajectory?
- How will data science methods influence biomedical informatics research?
- What roles will electronic health records and artificial intelligence play in health care of the future?
30.1 The Present and Its Evolution from the Past
Every good look forward should start with a look back to provide perspective regarding the past and an assessment of the pace of change, thereby helping us to anticipate a trajectory for the future. This book first appeared in 1990, at a time when the field was much younger (the word informatics had come into common use only in the previous decade) and was still being defined. Thus that early edition, and the ones that followed (in 2000, 2006, and 2014), offer a glimpse of what topics appeared over time, which ones faded away, and how even the terminology evolved (as it will no doubt continue to do in the future). Consider, for example, the list of chapter titles from the 1990 edition (Table 30.1). The first edition was titled Medical Informatics: Computer Applications in Medical Care, reflecting the field’s original roots in clinical medicine. In those days, the field was called medical informatics (see Chap. 1), and the first edition was focused largely on clinical application areas, such as electronic health records, nursing systems, laboratory systems, radiology systems, and education systems.
Table 30.1 Table of contents sections and chapters from all five editions of this book, aligned by subject matter

Medical Informatics: Computer Applications in Medical Care (1990)
Medical Informatics: Computer Applications in Health Care and Biomedicine (2000)
Biomedical Informatics: Computer Applications in Health Care and Biomedicine (2006)
Biomedical Informatics: Computer Applications in Health Care and Biomedicine (2014)
Biomedical Informatics: Computer Applications in Health Care and Biomedicine (2020)
Recurrent themes in medical informatics
Recurrent themes in medical informatics
Recurrent themes in biomedical informatics
Recurrent themes in biomedical informatics
Recurrent themes in biomedical informatics
1. The computer meets medicine: Emergence of a discipline
1. The computer meets medicine and biology: Emergence of a discipline
1. The computer meets medicine and biology: Emergence of a discipline
1. Biomedical informatics: The science and the pragmatics
1. Biomedical informatics: The science and the pragmatics
2. Medical data: Their acquisition, storage, and use
2. Medical data: Their acquisition, storage, and use
2. Biomedical data: Their acquisition, storage, and use
2. Biomedical data: Their acquisition, storage, and use
2. Biomedical data: Their acquisition, storage, and use
3. Medical decision making: Probabilistic medical reasoning
3. Medical decision-making: Probabilistic medical reasoning
3. Biomedical decision making: Probabilistic clinical reasoning
3. Biomedical decision making: Probabilistic clinical reasoning
3. Biomedical decision making: Probabilistic clinical reasoning
4. Essential concepts for medical computing
4. Essential concepts for medical computing
5. Essential concepts for biomedical computing
4. Cognitive science and biomedical informatics
4. Cognitive science and biomedical informatics
4. Cognitive informatics
5. Human-computer interaction, usability, and workflow
5. System design and evaluation
5. System design and engineering
6. Standards in medical informatics
6. System design and engineering in health care
5. Computer architectures for health care and biomedicine
6. Software engineering for health care and biomedicine
6. Software engineering for health care and biomedicine
7. Standards in biomedical informatics
7. Standards in biomedical informatics
7. Standards in biomedical informatics
8. Natural language and text processing in biomedicine
8. Natural language processing in health care and biomedicine
8. Natural language processing for health-related texts
9. Imaging and structural informatics
9. Biomedical imaging informatics
10. Imaging and structural informatics
9. Bioinformatics
11. Personal health informatics
7. Ethics and health informatics: Users, standards, and outcomes
10. Ethics and health informatics: Users, standards, and outcomes
10. Ethics in biomedical and health informatics: Users, standards, and outcomes
12. Ethics in biomedical and health informatics: Users, standards, and outcomes
8. Evaluation and technology assessment
11. Evaluation and technology assessment
11. Evaluation of biomedical and health information resources
13. Evaluation of biomedical and health information resources
Medical computing applications
Medical computing applications
Biomedical informatics applications
Biomedical informatics applications
Biomedical informatics applications
6. Medical-record systems
9. Computer-based patient record systems
12. Electronic health record systems
12. Electronic health record systems
14. Electronic health records
13. Health information infrastructure
15. Health information infrastructure
7. Hospital information systems
10. Management of information in integrated delivery networks
13. Management of information in healthcare organizations
14. Management of information in health care organizations
16. Management of information in health care organizations
8. Nursing information systems
12. Patient care systems
16. Patient-care systems
15. Patient-centered care systems
17. Patient-centered care systems
11. Public health and consumer uses of health information
15. Public health informatics and the health information infrastructure
16. Public health informatics
18. Population and public health informatics
14. Consumer health informatics and telehealth
17. Consumer health informatics and personal health records
19. mHealth and applications
18. Telehealth
20. Telemedicine and telehealth
9. Laboratory information systems
10. Pharmacy systems
11. Radiology systems
14. Imaging systems
18. Imaging systems in radiology
20. Imaging systems in radiology
22. Imaging systems in radiology
12. Patient-monitoring systems
13. Patient monitoring systems
17. Patient-monitoring systems
19. Patient monitoring systems
21. Patient monitoring systems
13. Information systems for office practice
14. Bibliographic-retrieval systems
15. Information retrieval systems
19. Information retrieval and digital libraries
21. Information retrieval and digital libraries
23. Information retrieval
15. Clinical decision-support systems
16. Clinical decision support systems
20. Clinical decision-support systems
22. Clinical decision-support systems
24. Clinical decision-support systems
26. Clinical research informatics
27. Clinical research informatics
16. Clinical research systems
17. Computers in medical education
17. Computers in medical education
21. Computers in medical education
23. Computers in health care education
25. Digital technology in health science education
18. Bioinformatics
22. Bioinformatics
24. Bioinformatics
(see Chap. 9 under “Recurrent themes in biomedical informatics”, above)
25. Translational bioinformatics
26. Translational bioinformatics
18. Health-assessment systems
28. Precision medicine and informatics

Medical informatics in the years ahead
Medical informatics in the years ahead
Biomedical informatics in the years ahead
Biomedical informatics in the years ahead
Biomedical informatics in the years ahead
19. Health-care financing and technology assessment
19. Health care and information technology: Growing up together
23. Health care financing and information technology: A historical perspective
27. Health information technology policy
29. Health information technology policy
20. The future of computer applications in health care
20. The future of computer applications in health care
24. The future of computer applications in biomedicine
28. The future of informatics in biomedicine
30. The future of informatics in biomedicine
The next decade was revolutionary, however, and it had a profound effect on informatics. During the 1990s, the Human Genome Project made it clear that much of what needed to be accomplished in human biology and genetics could not be achieved with the computational methods available or introduced at that time. Many of the informatics techniques that had been developed in the clinical world became relevant to genomics research, where investigators coined the term bioinformatics for their computational explorations. Thus, the field of informatics began to broaden to span both basic and applied clinical sciences. In an effort to acknowledge this evolution, the second edition of this book was renamed, with “medical care” giving way to “health care” (to acknowledge the field’s growing role in prevention and public health) and the addition of “biomedicine” (to embrace the role of informatics in human biology research) (Table 30.1). In addition, a new chapter on bioinformatics was added to the edition when it appeared in 2000. Similarly, the second edition took a broader view to consider topics such as standards, ethics, integrated delivery networks, and public health. Bibliographic retrieval expanded to become information retrieval, and there were changes in emphasis in several other chapters as well. In an attempt to acknowledge and emphasize the shared methods that applied in both the human life sciences and in clinical medicine and health, the academic discipline began to change its name from “medical informatics” to “biomedical informatics”. Several departments were renamed or created with this new name for the field. Hence, when the third edition of this book appeared in 2006, it adopted the title Biomedical Informatics, discarding the more limited “medical informatics” focus.
Although several chapters were simply updated and some were deleted, others were divided into two components (e.g., the Imaging Systems chapter from the second edition was divided into a methodologic chapter on imaging/structural informatics plus an application chapter on Imaging Systems in Radiology) (Table 30.1). Furthermore, totally new chapters were drawn from other fields, including cognitive science, natural language processing, and consumer-facing systems. In addition, chapter authorship evolved substantially as new topics were introduced and authors from earlier editions brought on coauthors whose expertise complemented their own. The book title remained unchanged in the fourth edition in 2014 (and in this edition), but changes to the chapter titles provide more detail about what was evolving (Table 30.1). The fourth edition introduced several new topics, including telehealth, translational bioinformatics, and clinical research informatics. And the current edition has added new chapters in the areas of human-computer interaction, mHealth, and precision medicine. Thus, a review of the titles and tables of contents of the five editions of this book, spanning 30 years with 20 chapters at the outset evolving to 30 chapters now, provides a thumbnail view of the evolution of the field as a whole.

So, what evolution can we observe, and what does it tell us about where we are headed? We see a field that started with a strong focus on computer programming for clinical medicine and ancillary services. The field grew to embrace research areas related to medicine and health, ranging from the molecular level to biologic systems to organisms, and beyond to populations. And, as with any emerging discipline, biomedical informatics began to differentiate its activities into practice (the lion’s share), research (not just biomedical research informatics, but research on informatics in its own right), and education. Connections among these three types of activities and overlap across domains were often scant, as shown in Fig. 30.1. Four trends, described throughout this book, have blurred many of the distinctions (Fig. 30.2). First, the broader field of biomedicine itself has begun to blur the distinctions among its traditional research domains.
This is evident in the emergence of the “translational science” philosophy, with clear recognition that each scientific endeavor builds on the discoveries made at some other, usually smaller, scale. One culmination of this trend is precision medicine (see Chap. 28), in which discoveries at the genomic level are translated into knowledge that supports decisions for patients and populations.
[Figure labels: Informatics Practice, Informatics Research, Informatics Education; domains span Bioinformatics, Translational Research Informatics, Clinical Research Informatics, Clinical Informatics, and Public Health Informatics, across Basic Science, Translational Science, Clinical Research, Patient Care, and Public Health.]

Fig. 30.1 Prior state of biomedical informatics domains and activities. Clinical informatics was the earliest and continues to be the largest domain, as reflected in this book, with other domains following in time. The advent of the Human Genome Project led to rapid expansion of the application of bioinformatics. Research in each domain followed practice, depicted here as more or less semi-transparent connections. Education in informatics, both for research and practice, began in the clinical domain, especially with nursing informatics, to be followed by nascent bioinformatics training programs. Connections between the education and research varied
[Figure labels: Informatics Practice, Informatics Research, Informatics Education; Basic Science, Translational Science, Clinical Research, Patient Care, Public Health.]

Fig. 30.2 Today, biomedical informatics is becoming a continuum, with fewer distinctions among domains and activities. This mirrors the continuum of biomedicine, with its recognition of the translational nature of research, ranging from basic science to public health (1), exemplified by precision medicine, which draws on genomic knowledge to directly improve care of the individual patient. Increased clinical informatics activity has resulted in increased availability of clinical data, which informs research to produce better evidence to guide practice, resulting in a “learning health system” (2). Users of informatics applications in research and practice settings are increasingly seen as research subjects in “living laboratories” (3) for guiding the improvement of the tools they use and learning new ways to apply informatics methods. Finally, education and training in informatics is increasingly leaving the classroom and moving to practice sites for observation and learning and as settings for experimenting with new solutions (4)
Second, the learning health system (Chaps. 1 and 17) is making use of large-scale data collected from patients and populations to frame research questions that are answered in the laboratory. Knowledge discovered there is then returned to the point of care to support evidence-based diagnostic, preventive and therapeutic decisions. Third, informatics research is moving from the computer lab out to where potential users of informatics tools are actually working. Harnessing tools from cognitive science and mixed methods evaluation, the biological laboratory, the clinic, and the hospital are becoming living informatics laboratories for testing informatics ideas and observing their impact. Fourth, the connections between informatics education and informatics practice are being strengthened. As electronic health records have become ubiquitous in clinical practice, computing devices and information technologies have become virtually the only tools used by every health care provider. Therefore, the importance of rigorous training in the use of these tools has increased. Informatics training programs are now able to train their students with the
Fig. 30.3 An example of the application of informatics to increase availability of large data sets and to facilitate their processing for public consumption. Depicted is a COVID-19 dashboard developed at the Johns Hopkins University Center for Systems Science and Engineering for presenting COVID-19 data ingested from a variety of sources to allow lay people easy access to up-to-date information on the COVID-19 pandemic in their area (Dong et al. 2020). (https://www.jhu.edu/. Copyright 2020, Johns Hopkins University. All rights reserved)
living laboratories by studying how research and patient care systems are being used and can be improved. The long tradition of formal informatics training in nursing programs is now being adopted in clinical medicine, which has recently added clinical informatics as a board-certified subspecialty through the American Board of Medical Specialties (ABMS).
30.2 Looking to the Future

Given the general trends we have outlined, what can we expect in terms of specific advances for the field? For that, we invited seven visionaries to share their predictions for the directions we will, or at least should, be moving. We chose innovative thinkers who could provide insights on the future of biomedical informatics from a variety of perspectives: bioinformatics (Tarczy-Hornoch), industry (Horvitz), nursing (Murphy), health policy (Blumenthal), academic informatics (Frisse), clinical medicine (Wachter), and federal government (Brennan). Individually, they provide perspectives on government efforts, policy changes, research advances, and clinical practice. Together, they weave a rich tapestry that presages how biomedical informatics is likely to influence the twenty-first century.

As it happens, shortly after these pieces were written, events have unfolded to put these predictions to the test. Writing in early 2020, we are referring, of course, to the COVID-19 pandemic. The chapters in this book were largely written in the preceding year or two. Some have been updated to discuss the current situation (e.g., see Chap. 18), but no textbook can keep current with the rapidly unfolding events of this current natural disaster. The level of public interest in biomedical information, ranging from virology and immunology to pharmacology and epidemiology, has risen to unprecedented levels. Up-to-the-minute data are being provided with great volume, variety and velocity (the hallmarks of “big data” – see Chap. 13) through government, academic and news media sources for popular consumption (Fig. 30.3).¹ All of this requires rapid development and delivery of informatics solutions on an unprecedented scale, along with approaches for confirming the veracity of data.

¹ https://www.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6 (accessed 2/13/2021).
The lessons and predictions discussed by our guest visionaries can be applied directly to the care of patients with suspected or confirmed COVID-19. For example, Tarczy-Hornoch (Box 30.1) describes correlation of genomic and functional data with clinical outcomes data. Thanks to adoption and interoperability of electronic health records (Chaps. 14 and 15), sufficient data are becoming available to provide an understanding of risk factors for disease severity, as well as the benefits and risks of putative treatments, much more rapidly than could be achieved with formal human subject studies (Xu et al. 2020). Horvitz (Box 30.2) provides an inventory of the methods drawn from the field of artificial intelligence (see Chaps. 1 and 24) that stand ready to use these data to infer answers to such pressing questions through machine learning. He also describes supplemental methods for “machine teaching” that can be brought to bear when some data remain sparse (Feijóo et al. 2020). Murphy (Box 30.3) describes the advantages of tele-visits for improving access to care; the social distancing required for helping to control the pandemic provides additional incentive for care at a distance, even for those who otherwise have sufficient access to in-person care. Fortunately, the technology of tele-visits (Chap. 20) has progressed to the point where healthcare institutions have been able to make the necessary transition with an ease that would not have been possible 10 years earlier (Hong et al. 2020). Blumenthal (Box 30.4) enumerates issues related to the safety of information systems and the government’s role in developing policies that address privacy concerns (Chaps. 12 and 29). The immediate need for such policies relates to the balance between individual rights and the protection of the public, as software developers race to create patient contact tracing applications (Abeler et al. 2020) and patients begin to collect their own intimate, detailed data through the wearables mentioned by Frisse (see also Chap. 19 and Ding et al. 2020).

Frisse (Box 30.5) recognizes that, with the success of informatics and its growing impact on both science and health, the challenges and complexities involved with “doing it right” extend beyond the protection of data privacy. Other impacts on society may be both deep and unanticipated, with the possibility that unintended consequences may exacerbate differences in how people with different education, cultures, and financial means experience health care and manage their own health. Unintended consequences of technology have been rampant in many fields (did anyone anticipate that television would engender generations of “couch potatoes”?). Wachter, who is well known for his provocative book characterizing the “digital doctor” who is already somewhat upon us (Wachter 2017), focuses on the informatician (or informaticist) of the future and the impact that such individuals will have in clinical settings (Box 30.6). He acknowledges the problems that are highlighted in his popular book, but envisions an ultimately positive future in which “the experience of being both patient and healthcare professional will be far more satisfying”, due in part to the role that informatics, and those who practice this new specialty, will play in the clinical environment. Tarczy-Hornoch, Horvitz, Murphy, Blumenthal, Frisse, and Wachter all describe ways in which clinicians’ workflow can be influenced for the better through informatics, with particular attention to their quality of life, which the pandemic has demonstrated can require as much attention, for some individuals, as do the lives of the patients they serve (Dewey et al. 2020). And of course, all of these activities are supported through access to data and literature (including the works cited here), made possible by the National Institutes of Health, with the National Library of Medicine at the fore (Zayas-Cabán et al. 2020), as well as other governmental and nongovernmental organizations.
As Brennan notes in her perspective (Box 30.7), the federal government must provide the resources for the things that only it can do (such as gathering and consolidating epidemiologic data) and collaborate with leaders in the private sector who can provide additional breadth and depth of expertise. The COVID monitoring dashboard shown in Fig. 30.3 is just the tip of an iceberg of such cooperation (in this case between the Centers for Disease Control &
Prevention and Johns Hopkins University), when one considers all the work that went into obtaining the underlying data. Today, biomedical informatics is moving out of the shadows. Instead of hearing “Biomedical informatics? What’s that?”, we hear “How is biomedical informatics helping to solve this problem?” There are many answers, and although they may seem specific to the current pandemic, they will remain applicable long after the current challenges are overcome. Public recognition of the importance of informatics will lead to increased resources for research, increased interest in education and training, and new opportunities for applications in biomedicine, in preparation for the inevitable challenges we know to anticipate. The intent of this book is to prepare those who wish to understand, support and lead these changes.

Box 30.1 A Perspective on the Future of Translational Bioinformatics and Precision Medicine

Peter Tarczy-Hornoch

The first part of the twenty-first century saw the establishment of the fields of translational bioinformatics (TBI) and precision medicine (PM), accompanied by the movement of this research into the early T-phases [1] of translational research (e.g. T0–T2 research focused on discovery and early application). The next decade will see the fields move into the later T-phases of broader adoption and diffusion (T3) and into evaluating the population impact in terms of health outcomes (T4). Due to shared core methodologies plus pressures on the health system, T3/T4 research will demonstrate a convergence between TBI/PM (predicting individual outcomes) and integration with more population-based approaches such as comparative effectiveness research and, more broadly, the concept of the learning healthcare system.

The earliest work in TBI and PM focused on the identification of opportunities and new approaches (T0), discovery to early health applications (T1), and assessment of value (T2). In the TBI area, T0 work focused on studies piloting the combination of both genomic data and electronic health record data for discovery (e.g. the early phase of the eMERGE project) and proof-of-concept T1 translational applications of genomic discoveries to clinical care (e.g. targeted pharmacogenomic decision-support systems). We also see an emerging body of T2 translational research that is beginning to assess the value of these new discoveries for health practice and for the development of evidence-based guidelines. The validation of genomic discovery and demonstration of its suitability for widespread adoption (T2/T3) is just beginning. This evolution can be illustrated on the clinical T2 front by the work of the American College of Medical Genetics, which monitors new genomic discoveries to identify what secondary findings in genome and exome sequencing meet criteria for reporting [2]. Thus far only selected mutations of around 60 genes (out of over 20,000 in the genome) meet the ACMG’s rigorous criteria for clinical reporting. The clinical validation of new discoveries facilitated by TBI is a key step in the development of informatics tools that apply this knowledge to practice (e.g. decision-support tools). As an example of informatics T2 work, researchers have begun to assess the cost/benefit of genomic decision-support tools in the electronic health record [3]. The research and application of PM informatics approaches for T1 discovery and T2 application parallel those of TBI (oftentimes incorporating genomic elements as part of the input data for the development of predictive models).

In the coming decade the types and volume of data used for TBI and PM discovery and application will continue to expand, and the distinctions between TBI and PM will blur even further. As the cost of genome sequencing continues to drop, increasing numbers of patients will have genotypic information available to correlate with clinical and other information, which will enable both larger-scale discovery and application. In the cancer domain, for example, new single-cell sequencing approaches will provide additional granular data on a specific patient's
clonal mutational profiles. In the metabolomics and proteomics areas, the cost of gathering these data is dropping, both at the patient and more targeted (e.g. organ) levels. The ability to begin to correlate these functional data with clinical outcome data and with response-to-therapy data will provide powerful new biological and clinical insights. These new sources of biological process data will complement new sources of phenotypic and environmental data. Text mining will enable free-text notes describing phenotype and environment (e.g. social determinants of health) to be transformed into more discrete data suitable for machine learning. Increasing availability of geocoded environmental data (e.g. climate data, pollution data, air quality data, pollen counts, etc.) will enable cross-links to patient data. With patient engagement and permission (and substantial work on standards and security), specific environmental data from the Internet of Things (e.g. lighting and temperature data in a home) may also be linked with genomic, biological and clinical data for a patient. Similarly, other patient data can be integrated, such as questionnaire and survey data (patient-reported outcomes and measures) and data from consumer wearables (counts of steps, heart rate monitoring, sleep monitoring) as well as consumer medical devices (home glucose and blood pressure monitoring, and, recently, more experimental transdermal monitoring of metabolic processes). This increase in data about individual patients, as well as in the number of patients for whom these rich data are available, will greatly accelerate the T0–T1 discovery and initial clinical application phases of TBI/PM. The volume of data and potential outcomes are such that informatics tools will become ever more important for discovery. Already health care providers struggle with information overload and with the need to stay current on new medical discoveries.
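As a toy sketch of the free-text-to-discrete-data transformation described above, consider a minimal rule-based extractor that maps phrases in a clinical note to boolean features about social determinants of health. The phrase patterns and feature names here are invented for illustration only; a production text-mining pipeline would instead use NLP components such as negation detection and normalization to standard vocabularies.

```python
import re

# Illustrative, invented phrase-to-feature rules; real systems map text to
# standardized concepts rather than ad hoc regular expressions.
RULES = {
    "lives_alone": re.compile(r"\blives alone\b", re.IGNORECASE),
    "current_smoker": re.compile(r"\bcurrent(ly)? smok", re.IGNORECASE),
    "housing_insecurity": re.compile(r"\b(homeless|unstable housing)\b", re.IGNORECASE),
}

def extract_features(note: str) -> dict:
    """Turn a free-text note into discrete boolean features for downstream models."""
    return {name: bool(pattern.search(note)) for name, pattern in RULES.items()}

note = "78yo patient lives alone; currently smoking 1 ppd; housing stable."
features = extract_features(note)
```

The resulting dictionary of discrete features can then feed the kinds of predictive models discussed in this box, alongside genomic and clinical variables.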
The anticipated complexity and volume of new findings and correlations will be such that computer-based decision-support tools will be obligatory for application of these new findings. All of these approaches will fit into the paradigm of using predictions to provide early/preventive interventions that are tailored to the unique profile of the individual patient. The core methods and approaches used for analysis and discovery, and the ones used for decision support, will be fundamentally similar, whatever mix of input variables is used across the spectrum of genetic, biologic, clinical, patient-provided, or environmental data. In light of this, the distinction between TBI and PM will likely vanish. There are a number of promising data analytics methods currently under development that are likely to be useful in the TBI/PM informatics area. One category is the creation of more automated model-selection and tuning methods. Without these it will be difficult to scale a number of the approaches currently being used, since they depend on the involvement of human data scientists. Similarly, there is foundational work being done in academia and industry that is seeking better unsupervised learning approaches. These are needed because the ability to develop gold-standard training sets is now often constrained by the amount of human effort required. Another broad category is methods that provide some explanatory power related to predictions. As one example, current machine learning approaches identify correlation but generally cannot provide insight into causation. New approaches show promise when they leverage large enough data sets to begin to infer causation. Another example is using automated tools first to develop predictive models and then artificial intelligence techniques to develop an explanatory model. Both these examples illustrate ways in which new methods may begin to address the concern raised by some overly opaque "black box" predictive models. A final broad category is methods that begin to leverage available data more effectively, including new AI-based image-analysis approaches, next-generation hybrid statistical and rules-based text-mining approaches, and new approaches to improve the use of temporal information in prediction algorithms (e.g.
the slope and tempo of visits and laboratory values). The rapidly rising costs of health care in the United States, without a corresponding improvement in quality, will influence the development of informatics tools for precision medicine.
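The use of temporal information such as the slope and tempo of laboratory values can be made concrete with a small sketch. The helper below is purely illustrative (the data, field names, and lab example are invented for this sketch, not drawn from any cited system): it derives a least-squares slope and a measurement-tempo feature from timestamped results, of the kind a prediction algorithm could consume alongside static variables.

```python
from datetime import datetime

def temporal_features(observations):
    """Derive simple temporal features from (timestamp, value) lab
    observations; assumes at least two observations."""
    observations = sorted(observations)  # oldest to newest
    times = [(t - observations[0][0]).days for t, _ in observations]
    values = [v for _, v in observations]
    n = len(values)
    # Least-squares slope: change in lab value per day.
    mean_t, mean_v = sum(times) / n, sum(values) / n
    var_t = sum((t - mean_t) ** 2 for t in times)
    slope = sum((t - mean_t) * (v - mean_v)
                for t, v in zip(times, values)) / var_t
    # "Tempo": mean number of days between successive measurements.
    tempo = (times[-1] - times[0]) / (n - 1)
    return {"slope_per_day": slope, "days_between_tests": tempo}

# Hypothetical serum creatinine trajectory (rising over six weeks).
creatinine = [
    (datetime(2020, 1, 1), 1.0),
    (datetime(2020, 2, 1), 1.3),
    (datetime(2020, 2, 15), 1.6),
]
print(temporal_features(creatinine))
```

A rising slope with shortening intervals between tests is exactly the kind of signal that static snapshots miss.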
J. J. Cimino et al.
Broadly this will mean that work in the TBI/PM informatics space will need to factor in the perspectives of the Quadruple Aim: (1) enhancing patient experience, (2) improving population health, (3) reducing costs, and (4) improving the work life of health care providers. Regarding the first of these, it will be important to ensure that predictive model-based decision-support tools are built in ways that support shared decision making involving the patient. Tools must also provide the appropriate support for ensuring that behavioral changes occur (e.g. if a model predicts the need for increased aerobic exercise, there must be methods to ensure that occurs). It will similarly be important to ensure, as data are shared and models are developed, that attention is paid to the ethical, legal, and social aspects of data sharing. This will help to maintain the trust of patients and to avoid unintended biases in the models (consider, for example, the recent issues with facial recognition software that works well on white males but not on women of color). The ethical lens will be particularly important to ensure that the privacy and trust of patients and the public are preserved as these large-scale data-intensive methods are developed and deployed. The recent academic and popular press discussions of the breaches of trust by large-scale social media and other Internet companies should serve as a cautionary tale. Regarding the next two elements in the Quadruple Aim, it will be critical that work in TBI/PM be subject to the same kinds of assessments that we expect for other diagnostic and therapeutic interventions. Currently there are deployed tools that demonstrate a sensitivity and specificity far below the values that we would otherwise demand of diagnostic and screening tests. Informatics interventions have not been treated in quite the same way as laboratory tests or medications.
Efforts to demonstrate the value and real-world impact of TBI/PM tools will align with broader efforts to demonstrate effectiveness in the real world (e.g. Comparative Effectiveness Research). They will also form a key aspect of the Learning Healthcare System approach, since TBI/PM tools will help to assure that learning can occur from analysis of the data artifacts generated in
the care-delivery process (e.g. the electronic health record and related data). Finally, in order to address the fourth element in the Quadruple Aim, we will need to determine how best to deploy predictive analytics tools. It will be important to preserve provider decision-making autonomy, to provide sufficient explanatory ability and rigorous validation to ensure that providers trust the results, and to diminish the information and alert overload that providers face today. In summary, we have just begun to see TBI and PM informatics discoveries and applications have an impact on achieving the broader goals of improving health and the more focused goals of the Quadruple Aim. Over the next decade, with advances in data analytics methods and increasing sources of data regarding an increasing number of patients, we are likely to see remarkable progress toward more easily developed and more accurate predictive models that will allow us to intervene at the patient level. These advances will be integrated into the broader trends in health care as encapsulated in the Quadruple Aim, which will require additional research and innovation to ensure that the full potential of TBI and PM is realized.

1. Khoury, M. J., Gwinn, M., Yoon, P. W., Dowling, N., Moore, C. A., & Bradley, L. (2007). The continuum of translation research in genomic medicine: How can we accelerate the appropriate integration of human genomic discoveries into health care and disease prevention? Genetics in Medicine, 9(10), 665–674.
2. Kalia, S. S., Adelman, K., Bale, S. J., Chung, W. K., Eng, C., Evans, J. P., et al. (2017). Recommendations for reporting of secondary findings in clinical exome and genome sequencing, 2016 update (ACMG SF v2.0): A policy statement of the American College of Medical Genetics and Genomics. Genetics in Medicine, 19(2), 249–255.
3. Mathias, P. C., Tarczy-Hornoch, P., & Shirts, B. H. (2017). Modeling the costs of clinical decision support for genomic precision medicine. Clinical Pharmacology & Therapeutics, 102(2), 340–348.
Box 30.2 The Future of Biomedical Informatics: Bottlenecks and Opportunities
Eric Horvitz

I see the rich, interdisciplinary field of biomedical informatics as the gateway to the future of health care. The concepts, methods, rich history of contributions, and the aspirations of biomedical informatics define key opportunities ahead in biomedicine and shine light on the path to achieving true evidence-based health care. The influence of biomedical informatics on health care over the last three decades has grown more slowly than I had hoped. However, I remain optimistic about a forthcoming biomedical informatics revolution, made possible by a confluence of advances across industry and academia. Such a revolution will accelerate discovery in biomedicine, enhance the quality of health care, and reduce the costs of health care delivery. From my perch as an investigator and director of a worldwide system of computer science research labs, I view key opportunities ahead as hinging on (1) addressing the often underappreciated bottleneck of translation, that is, moving biomedical informatics principles and prototypes into real-world practice, and (2) making progress on persisting challenges in principles and applications of artificial intelligence (AI). I am optimistic that we will make progress on both fronts and that there will be synergies among these advances. On challenges of translation, I believe that the difficulties of transitioning ideas and implementations from academic and industry research centers into the open world of medical practice have been widely underappreciated.
Numerous factors are at play, including poor understanding of how computing solutions can assist with the tasks and day-to-day needs of health care practitioners and patients, inadequate appreciation of the needs and difficulties of developing site-specific solutions, poor compute infrastructure, and a constellation of challenges with human factors, including entrenched patterns of practice and difficulties of integrating new capabilities and services into existing clinical workflows. Multiple advances coming with the march of computer science will help to address challenges of translating ideas and methods that have been nurtured by biomedical informaticians for decades. At the base level, such advances include ongoing leaps in computing power and in storage, but also key innovations with computing principles and methods in such subdisciplines as databases, programming languages, security and privacy, human-computer interaction, visualization, and sensing and ubiquitous computing. Faster and more effective translation of ideas and methods from biomedical informatics will also be enabled by jumps in the quality of available computing tools and infrastructure. Increases in the power and ease-of-use of cloud computing platforms are being fueled by unprecedented investments in research and development by information technology companies—companies that are competing intensely with one another for contracts with enterprises that are hungry for digital transformation and the latest in modern computing tools. Cloud computing companies are packaging sets of development tools and constellations of specialized services into their offerings. Many of these offerings are relevant to biomedical informatics efforts, including machine learning toolkits, suites for analysis and visualization of data, and computer vision, speech recognition, and natural language analysis services made available via programmatic interfaces. Beyond developing generic platform capabilities, cloud service providers are motivated to gain understanding of key vertical markets, such as health care, finance, and defense, and have been working to custom-tailor their general platforms with tools, designs, and services for use in specific sectors.
For example, there is incentive to support rising standards on schemata (e.g., Fast Healthcare Interoperability Resources (FHIR)) for storing and transferring electronic health records and on methods to ensure the privacy of patient data. There is also pressure to develop special versions of computing services for medicine, such as language
models and, more generally, natural language capabilities specialized for medical terminology, enabling more accurate understanding and analysis of medical text and speech. Competitor cloud providers have also worked to identify and provide efficient methods and tools for important vertical needs, such as the rising importance of determining DNA sequences and interpreting protein expression data. Such special needs of researchers and clinicians have led to the availability of efficient and inexpensive cloud-computing services for genomic and proteomic analyses. Moving on to the second realm of opportunities, around harnessing advances in the constellation of technologies that we call AI, I believe that our community can do more to leverage existing methods and also to closely follow, push, and contribute to advances in AI subfields. Beyond methods available today, key developments will be required in principles and applications to realize the long-term goals of biomedical informatics. I am seeing good progress and am optimistic that the advances coming over the next decade will be deeply enabling. On existing technologies, and focusing on the example of developing effective decision support systems, we have been very slow to leverage the visionary ideas proposed by Robert Ledley and Lee Lusted in 1959 [1]. Ledley and Lusted provided a blueprint for constructing differential diagnoses and for using decision-theoretic analyses to generate recommendations for action. Biomedical informatics investigators have led the way in exploring prototypes for decision support systems, and systems constructed over 60 years of research have been shown to perform at expert levels. However, real-world impact has been limited to date. A key bottleneck has been the scarcity and cost of expertise and data. I believe that harnessing advances in machine learning will be particularly critical for delivering on the vision of evidence-based clinical decision making.
Machine learning techniques available today can and should play a more central role in health care, assisting with pattern recognition, diagnosis, and prediction of outcomes. There
are multiple opportunities to build and to integrate pipelines where data flow via machine learning to predictions and via automated decision analyses to recommendations about testing and treatment. Making key investments to build and refine effective data-to-prediction-to-decision pipelines will provide great value in multiple areas of medicine [2]. Opportunities ahead for biomedical informatics include leveraging recent advances in deep learning in medical applications, especially for image recognition and natural language tasks. These multilayered neural network architectures are celebrated for providing surprising boosts in classification accuracy in multiple application areas and for easing engineering overhead, as they do not require special feature engineering. The methods have been shown to perform well for recognition in the image-centric areas of pathology and radiology. Different variants of deep learning are also being explored for building predictive models from clinical data drawn from electronic health records. Beyond direct applications, deep learning methods have led to enhanced capabilities in multiple areas of AI with relevance to goals in biomedical informatics, including key advances in computer vision, speech recognition, text summarization, and language translation. With all of the recent fanfare about deep learning, it is easy to overlook the applicability of other machine learning methods, including probabilistic graphical models, generalized additive models, and even logistic regression for serving as the heart of predictions in recommendation engines. While excitement about deep learning is appropriate, it is important to note that the methods typically require large amounts of data of the right form and that such datasets may not be available for medical applications of interest. Other approaches have proven to be as accurate for clinical applications and provide other benefits, such as more intelligible, explainable inferences.
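One way to picture the decision-analytic step of such a data-to-prediction-to-decision pipeline is the move from a model's predicted probability to a recommended action via expected utility. The sketch below is a minimal illustration, not any deployed system's logic; the action names, threshold behavior, and utility values are invented for the example.

```python
def recommend(p_outcome, utilities):
    """Decision-theoretic step of a data-to-prediction-to-decision
    pipeline: pick the action with highest expected utility.

    p_outcome : predicted probability of the adverse outcome, produced
                by any machine-learned model upstream.
    utilities : dict mapping (action, outcome_occurs) -> utility.
    """
    def expected_utility(action):
        return (p_outcome * utilities[(action, True)]
                + (1 - p_outcome) * utilities[(action, False)])
    actions = {a for a, _ in utilities}
    return max(actions, key=expected_utility)

# Illustrative utilities: treating when the outcome would occur is good,
# treating unnecessarily carries a small cost, missing the outcome is bad.
U = {("treat", True): 0.8, ("treat", False): -0.1,
     ("observe", True): -1.0, ("observe", False): 0.0}

print(recommend(0.05, U))  # low predicted risk
print(recommend(0.40, U))  # high predicted risk
```

With these utilities the implied treatment threshold falls where the two expected utilities cross; changing the utilities moves the threshold without touching the predictive model, which is the appeal of separating prediction from decision.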
Also, when sufficiently large corpora of data labeled with ground truth are not available, knowledge acquisition techniques, referred to broadly as machine teaching, can provide value. While work
is moving forward on machine teaching, existing methods and tools can be valuable in building models for prediction and classification. I believe it is important to note that having access to powerful machine learning procedures may be insufficient for addressing goals in biomedical informatics. Key challenges for moving ahead with developing and deploying effective decision support systems include identifying where and when such systems would provide value, collecting sufficient amounts of the right kind of data for applications, developing and integrating automated decision analyses to move from predictions to recommendations for action [2], maintaining systems over time, developing means to build and apply learned models at multiple sites, and addressing human factors, including formulating means for achieving smooth integration of inferences and recommendations into clinical workflows and providing explanations of inferences to clinicians [3]. Providing explanations of predictions generated by machine-learned models is a topic of rising interest [4]. I hope to see revitalized interest and similar enthusiasm extended to addressing challenges identified in biomedical informatics with the intelligibility and explanation of the advice provided by other forms of reasoning employed in decision support systems, including logical, probabilistic, and decision-theoretic inference [5]. Key opportunities in AI research for progress with developing and fielding effective decision support systems include efforts in principles and applications of transfer learning, unsupervised learning, and causal inference. Transfer learning refers to methods that allow for data or task competencies learned in one area to be applied to another [6].
Unsupervised and semi-supervised learning refer to methods that can be used to build models and perform tasks without a complete set of labeled data, such as labels about the final diagnoses of patients when working with electronic health record data. Causal inference refers to methods that can be used to identify causal knowledge, versus the statistical associations that are commonly inferred from data. Advances in these areas promise to provide
new sources of biomedical knowledge, and to address the challenge of data scarcity and related difficulties with the generalizability of data resources for health care applications. On data scarcity and generalizability, an important, often underappreciated challenge in biomedical informatics is that the accuracy of diagnosis and decision support may not transfer well across institutions. In our work at Microsoft Research, we found that the accuracy of a system trained on data obtained from one site can plummet when used at another location. The poor generality of datasets is rooted in multiple factors, including differences in patient populations (with site-specific incidence rates, covariates, and presentations of illness), site-specific capture of evidence in the electronic health record, and site-specific definitions of signs, symptoms, and lab results. As an example, we found site-specificity when my team studied the task of building models to predict the likelihood that patients being discharged from a hospital would be readmitted within 30 days. The accuracy of prediction for a model learned from a massive dataset drawn from a single large urban hospital dropped when the model was applied at other hospitals. This observation of poor generalizability was behind our decision to develop a capability for performing automated, recurrent machine learning separately at each site that would rely on local data for predictions. This local train-and-test capability served as the core engine of an advisory system for readmissions management, named Readmission Manager, that was commercialized by Microsoft. Moving forward, research on a set of methods jointly referred to as transfer learning may help to address challenges of data scarcity and generalizability. Transfer learning algorithms for mapping the learnings from one hospital to another show promise in medicine [6]. Such methods include multitask learning.
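The cross-site accuracy drop and the local train-and-test remedy can be caricatured with synthetic data. Everything below is invented for illustration: the sites differ only in a simulated readmission base rate (one of the site-specific factors noted above), and the "model" is just a majority-class predictor standing in for any learner; the evaluation protocol, not the model, is the point.

```python
import random

def accuracy(model, data):
    """Fraction of (features, label) pairs the model predicts correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

def train_majority(data):
    """Toy learner: always predict the majority label seen in training."""
    positives = sum(y for _, y in data)
    label = 1 if positives * 2 >= len(data) else 0
    return lambda x: label

# Two synthetic sites with different base rates of the outcome
# (features are omitted entirely; only the label distribution differs).
random.seed(0)
site_a = [(None, 1 if random.random() < 0.7 else 0) for _ in range(1000)]
site_b = [(None, 1 if random.random() < 0.3 else 0) for _ in range(1000)]

model_a = train_majority(site_a)   # trained at one site...
print(accuracy(model_a, site_a))   # ...evaluated locally: high
print(accuracy(model_a, site_b))   # ...evaluated at a second site: poor

model_b = train_majority(site_b)   # local train-and-test at site B
print(accuracy(model_b, site_b))   # recovers local performance
```

Even this degenerate learner shows the pattern: local evaluation overstates how a model will behave elsewhere, which motivates either per-site retraining or transfer-learning methods.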
Also, obtaining spanning datasets, composed of large amounts of data drawn from multiple sites, may provide effective generalization. In support of this approach, methods called multiparty computation have been developed that
can enable learning from multiple, privately held databases with no violation of privacy among the contributing organizations. Beyond the daily practice of health care, and uses in such applications as diagnosis and treatment, methods for learning and reasoning from data can provide the foundations for new directions in the clinical sciences via tools and analyses that identify subtle but important signals in the fusing of clinical, behavioral, environmental, genetic, and epigenetic data. I see many directions springing from applications of machine learning, reasoning, planning, and causal inference for health care delivery, as well as in supporting efforts in health care policy and in the discovery of new biomedical understandings. I remain excited about advances in biomedical informatics and see a biomedical informatics revolution on the horizon. Such a revolution will build on the glowing embers of decades of contributions and the flames of late-breaking activities that address long-term challenges and bottlenecks.

1. Ledley, R. S., & Lusted, L. B. (1959). Reasoning foundations of medical diagnosis. Science, 130(3366), 9–21.
2. Bayati, M., Braverman, M., Gillam, M., Mack, K. M., Ruiz, G., Smith, M. S., & Horvitz, E. (2014). Data-driven decisions for reducing readmissions for heart failure: General methodology and case study. PLoS ONE, 9(10), e109264. https://doi.org/10.1371/journal.pone.0109264
3. Teach, R. L., & Shortliffe, E. H. (1981). An analysis of physician attitudes regarding computer-based clinical consultation systems. Computers and Biomedical Research, 14(6), 542–558. https://doi.org/10.1016/0010-4809(81)90012-4
4. Caruana, R., Koch, P., Lou, Y., Sturm, M., Gehrke, J., & Elhadad, N. (2015). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. KDD, August 10–13, 2015, Sydney, NSW, Australia.
5. Horvitz, E., Heckerman, D., Nathwani, B., & Fagan, L. M. (1986). The use of a heuristic problem-solving hierarchy to facilitate the explanation of hypothesis-directed reasoning. Proceedings of Medical Informatics, Washington, DC (October 1986). New York: North Holland, pp. 27–31.
6. Wiens, J., Guttag, J., & Horvitz, E. (2014). A study in transfer learning: Leveraging data from multiple hospitals to enhance hospital-specific predictions. Journal of the American Medical Informatics Association, 21(4), 699–706. https://doi.org/10.1136/amiajnl-2013-002162
Box 30.3 The Future of Nursing Informatics
Judy Murphy

The focus of this commentary is on the future of biomedical informatics from a nursing perspective, but it is helpful to understand the background and history of nursing's role in the field. Starting there, the focus will move to nursing informatics today and then to the future of the field from a nursing point of view. Nurses have contributed to the purchase, design, and implementation of health information technology (IT) since the 1970s. The term "nursing informatics" (NI) first appeared in the literature in the 1980s [1–3]. The definition of NI has evolved ever since, molded by the maturation of the field and influenced by health policy. In a classic article that described its domain, NI was defined as the combination of nursing, information, and computer sciences to manage and process data into information and knowledge for use in nursing practice [4]. Nurses who worked in NI during that time were pioneers who often got into informatics practice because they were good clinicians, were involved in IT projects as educators or project team members, or were just technically curious and willing to try new things. Their roles, titles, and responsibilities varied greatly. A solid foundation for the NI profession continued to be laid over the ensuing 40 years. Today, informatics has been built into undergraduate nursing education and there are over a hundred schools offering post-graduate NI education. NI is recognized as a specialty by the American Nursing Association (ANA) and has a specialty certification [5]. NI is now described as the specialty that integrates nursing science with multiple information and analytical sciences to identify, define, manage, and communicate data, information, knowledge, and wisdom in nursing practice. NI supports nurses, consumers, patients, the interprofessional healthcare team, and other stakeholders in their decision-making in all roles and in all
settings to achieve desired health and healthcare outcomes. This support is accomplished using information structures, information processes, and information technology [6]. NI continues to grow. In the most recent Health Information and Management Systems Society (HIMSS) NI Workforce Survey, 57% of respondents held a post-graduate degree in nursing or nursing informatics and 44% were specialty certified by the ANA in NI or another nursing specialty. Another 32% were currently pursuing NI certification, and over half have been working in an informatics role for more than 7 years [7]. Since the HITECH Act of 2009, nursing informatics specialists have played a pivotal role in influencing the adoption of electronic health records (EHRs) for meaningful use. Having both breadth and depth of healthcare knowledge and an understanding of clinical practice workflows, nurses help all clinicians understand the application and value of the EHR. Nurses have a perspective that spans the many venues of care, working with all care team members as well as with patients at different points in their care continuum. Nurses help patients utilize health IT to improve engagement in their own care, take control of their own health, and become an integral part of the decision-making process and care team. As patient advocates, nurses understand the power of the patient in a participatory role and how this can improve outcomes. The type and quality of care that nurses provide to their patients will benefit immensely from the continued advancement of technology and informatics in healthcare. Although there are many ways those advancements will impact nursing, here are two areas that hold the greatest promise for nursing's future. Data and the Continuous Learning Health System: Nursing research has not been as prolific as medical research, so there is a lot less known about the true impact and outcomes of nursing interventions.
But now that organizations are aggregating health data electronically in EHRs and other health IT, nurses can more easily identify practices that measurably
impact individuals by mining the data and using prescriptive, predictive and cognitive analytics to correlate actions to improved outcomes. The collection, summarization and analysis of data can be from multiple venues and sources, including social determinants and patient-generated information for personalization. Then, it’s not just about impacting traditional care, but about the impact across the continuum for the individual and including public health and population health management. The learnings can be iterated back into nursing practice in months instead of years, using protocols/guidelines, documentation templates, and clinical decision support – making it easier to do the right thing and ‘hard-wiring’ new best practices – thus, creating a continuous learning health system. Care Coordination and Healthcare Anywhere: The advancement of technology has provided us the opportunity to provide care anytime/anywhere and there’s little question that both patients and providers are increasingly drawn to the concept of healthcare services that are virtual. This includes “visits” using communication technologies such as email, phone and videoconference, as well as telehealth technologies for remote monitoring and management of conditions or chronic disease. Coupling this with engaged patients using portals and mobile apps creates a new ecosystem for nurses and their patients to interact. Care coordination between venues of care and across the continuum will be directly impacted in a positive way. As nurses have primary responsibility for coordinating care and helping patients navigate the complexities of the healthcare system, this will be a way for them to extend their reach to more patients and to improve the quality of the care provided to each patient. Nurses can more easily close
care gaps for preventive and disease management services, monitor patients' conditions while they live their lives and not just when they visit a healthcare facility, and provide consulting and educational services. The future of nursing informatics has no bounds; technologies of all kinds will continue to evolve, and informatics will help nurses both integrate new technologies into their practice as well as manage the impact of new technologies on that practice. Informatics will help invent the future of nursing care transformation.

1. Ball, M., & Hannah, K. (1984). Using computers in nursing. Reston, VA: Reston Publishers.
2. Grobe, S. (1988). Nursing informatics competencies for nurse educators and researchers. In H. E. Petersen & U. Gerdin-Jelger (Eds.), Preparing nurses for using information systems: Recommended informatics competencies. New York: National League for Nursing.
3. Hannah, K. J. (1985). Current trends in nursing informatics: Implications for curriculum planning. In K. J. Hannah, E. J. Guillemin, & D. N. Conklin (Eds.), Nursing uses of computers and information science. Amsterdam: Elsevier.
4. Graves, J., & Corcoran, S. (1989). The study of nursing informatics. Image: Journal of Nursing Scholarship, 21(4), 227–231.
5. ANCC. (2018). Informatics nursing certification. Retrieved from https://www.nursingworld.org/our-certifications/informatics-nurse/.
6. ANA. (2014). Nursing informatics: Scope and standards of practice (2nd ed.). Silver Spring: ANA.
7. HIMSS. (2017). Nursing informatics workforce survey. Retrieved from https://www.himss.org/ni-workforce-survey.
Box 30.4 Biomedical Informatics: The Future of the Field from a Health Policy Perspective
David Blumenthal

Policy issues and developments in the United States will be vital to the evolution and efficacy of health information technology (HIT) in the future. This is true because health policy has made HIT a mainstream feature of the U.S. health care system and a vital tool for improving it. Two types of health policy issues will affect the future of HIT, its uses, and its benefits. The first type is generic to the U.S. health care system but will indirectly affect how HIT evolves. The second type of policy issue focuses particularly on HIT. Generic policy issues include payment reform and the push toward consumer empowerment. There is an urgent need for payment reform to address issues such as the high costs associated with the U.S. health system. HIT has the potential to be a powerful tool in health system improvement, but whether that potential is exploited will depend on the needs and priorities of its users, especially health care providers. In a fee-for-service environment, where volume and revenue maximization are prioritized, purchasers of HIT will demand that it serve these purposes. The requirement to capture detailed information for billing purposes will be paramount to the design and configuration of electronic health records (EHRs) and other IT. Information systems will be used to assure that providers capture every billable service in a way that maximizes revenue collected. Payment approaches that prioritize value will favor different HIT configurations, especially if those payment methods hold providers accountable through risk-sharing for the cost and quality of services. HIT will have to facilitate the capture and reporting of quality and cost information for the purpose of demonstrating the value of services provided and to manage resource use continuously over a reporting period. Interoperability and exchange of health care data will become a business imperative to the extent that accountable providers must absorb the costs of services provided to their patients at other health care facilities in their communities. HIT for value maximization will also put much greater emphasis on improving clinical decisions so as to enhance the value of services performed. In a value-oriented environment, usable and helpful decision support will achieve a priority it has never had in the current fee-for-service environment. Another priority will likely be the capability to assess the comparative performance of clinicians within organizations so as to evaluate reasons for variation in decision-making and health care outcomes. A bipartisan interest in making health care markets more competitive and responsive to patients' needs is also motivating a push toward patient empowerment through sharing electronic data with patients and their families. This movement is reflected in legislation and regulations that encourage providers to share EHR data with individuals or their designated third parties. The Office of the National Coordinator for Health Information Technology (ONC) issued a rule in 2015 that requires certified EHRs to have standardized application programming interfaces (APIs) [1], which will facilitate access to EHR data by patients and their agents. A new ONC rule, proposed in the spring of 2018 [2], would also discourage so-called information blocking. The growing interest in data-sharing with patients is also apparent in Apple's decision to work with 13 prominent health systems [3] to accept their patients' EHR data. Large, innovative technology companies like Apple may be able to support patient empowerment by fashioning user-friendly applications that use patients' data to inform their decision-making.
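As a sketch of what programmatic access through such standardized APIs looks like, the snippet below builds a FHIR "read" URL and extracts demographics from a minimal Patient resource. The endpoint is hypothetical, and the resource body is hand-written here in place of a real API response; only the resource shape (resourceType, name, birthDate) follows the published FHIR Patient structure.

```python
import json

BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR endpoint

def patient_url(patient_id):
    # FHIR read interaction: GET [base]/Patient/[id]
    return f"{BASE}/Patient/{patient_id}"

# Hand-written, minimal FHIR Patient resource standing in for the JSON
# body that an authorized API call would return.
response_body = json.dumps({
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Chalmers", "given": ["Peter"]}],
    "birthDate": "1974-12-25",
})

resource = json.loads(response_body)
assert resource["resourceType"] == "Patient"
name = resource["name"][0]
print(patient_url(resource["id"]))
print(f'{name["given"][0]} {name["family"]}, born {resource["birthDate"]}')
```

In practice a patient-facing app would first obtain an access token (e.g. via a SMART-on-FHIR authorization flow) before issuing the request; that step is omitted from this sketch.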
J. J. Cimino et al.
The emergence of such applications will raise a host of policy issues. Finding ways to assure the safety of these consumer-facing applications will be a critical part of consumer empowerment, and constitutes a key policy agenda. To this end, the Food and Drug Administration (FDA) is making an effort to adjust its traditional regulatory approaches for the special circumstances of HIT applications. One example of its efforts is the Accelerated Digital Clinical Ecosystem (ADviCE), a partnership between the University of California, San Francisco, several other universities and health systems, and the FDA to share best practices and data for using, integrating, and deploying health technology services and applications. ADviCE will make recommendations on the types of data needed, data sharing, transparency, and use. Policymakers must also find ways to protect the privacy of patients, either through enforceable voluntary standards or governmental regulation of emerging private organizations, like Apple, that play the role of data stewards. Some HIT-specific policy issues are also likely to influence the future development of health information technology. On this front, the increased use of EHRs has also given rise to safety challenges, as enumerated in a recent report from the Pew Charitable Trusts [4]. For example, patients may receive the incorrect dose of a medication, or clinicians may select the wrong patient when entering an order. These safety issues are probably linked with the usability of EHRs, and suggest the need for improved user-centered design focused on the needs of both clinicians and patients. The EHR certification process will likely play a role in improving safety. To address these and other health IT safety concerns, multiple experts have proposed the establishment of a safety collaborative composed of EHR developers, hospitals, government, health practitioners, and other key organizations to work together to resolve safety problems. Finally, policy interventions may be required to improve equity of access to the benefits of HIT in rural areas and for underserved populations. Lack of connectivity and sophisticated technical support can handicap rural providers in their efforts to use advanced HIT. With the increasing power of HIT in health care will come increased reliance on its capabilities for responding to policy challenges, both general and HIT-specific. For the most part, these challenges will stimulate evolution in HIT design that makes it even more useful and important for the future of our health care system, and its patients and providers.
1. https://www.healthit.gov/sites/default/files/facas/HITSC_Onc_2015_edition_final_rule_presentation_2015-11-03.pdf
2. https://www.reginfo.gov/public/do/eAgendaViewRule?pubId=201804&RIN=0955-AA01
3. https://hbr.org/2018/03/apples-pact-with-13-health-care-systems-might-actually-disrupt-the-industry
4. https://www.pewtrusts.org/en/research-and-analysis/reports/2017/12/improving-patient-care-through-safe-health-it
The Future of Informatics in Biomedicine
Box 30.5 Future Perspective
Mark Frisse
As this textbook demonstrates, biomedical informatics paradigms are changing. Original paradigms of necessity were moored to an environment in which data sets were small; data storage was limited; computation required massive and costly hardware; and high-bandwidth network connections were rare. Most major biomedical research was conducted in large laboratories and, with a few exceptions, computational needs were limited. Health care delivery and clinical research generally took place in hospitals and large clinics both affiliated with medical schools and endowed with the talent, revenues, and capital necessary for their successful operation. Payment models and reimbursement for health care operations took place behind the scenes without excessive complexity and with few burdens on providers. Public health workers, health policy researchers, and related groups had access only to selective, retrospective, and often manually-collected data and limited analytic capabilities. Informatics was a select, expensive, and time-consuming endeavor. Despite great challenges, remarkable feats were accomplished. Recent paradigms are untethered from many early constraints. Today, data sets are massive and plentiful; data storage is inexpensive and seems unlimited; computation is ubiquitous and extends from minute sensor devices to massive cloud-based virtual machines; high-bandwidth network connections are pervasive and central to American life. The range of biomedical research activities is far broader and is constrained more by funding and talent limitations than by facilities; and computation is not only central to traditional research approaches but has extended the reach of scientific investigation dramatically through the analysis of data sets ranging from molecules to genomes. Largely because of Internet-based services, more providers and other caregivers have access to the information they need.
Public health workers, health policy researchers and other interest groups can access large and broad data
sets collected in near real-time. Patients and their families can also access much more information and have become truly central to health care; patients are speaking up, and our health system is listening. Other academic disciplines, once working on the periphery of biomedical informatics, are converging and taking center stage. Social scientists explore care complexity both in delivery settings and in the home. Cognitive scientists seek more effective and efficient ways of managing care tasks. Operations research professionals apply systems science to improve patient access, scheduling, workflow, capacity management, and throughput. Behavioral psychologists are studying how mobile technologies can "nudge" patients and providers into better behaviors. As a result, informatics has become even more imaginative, extensive, rigorous, broad, accessible, and inexpensive. The accomplishments have been many, the future seems bright, and the potential for societal good is promising. But, to paraphrase the novelist William Gibson, this bright future is not now, nor will it quickly become, evenly distributed. Both in biomedicine and in society at large, new paradigms and technologies have transformed commerce, interpersonal communication, social interactions, and behaviors, upending almost every aspect of society. By integrating and analyzing the multiple data streams emerging from our personal behavior, communication, reading habits, purchasing patterns, and social interactions, data and algorithms are capable of startlingly accurate predictions that in turn can profoundly influence behavior. The velocity of these changes carries biomedical informatics – and all of society – into an uncertain future full of promise and peril. Consider the American healthcare system.
The United States has the highest per capita expenditures for health care in the world, yet, by many measures, the quality of U.S. health care lags far behind that of other countries [1]. Despite significant advances in technology and clinical informatics, this trend continues. This may be due in part to the fact that technologies are
not only capable of reducing complexity; they are also capable of introducing additional complexity, whether such complexity is warranted or not. One cannot argue against effective informatics support for prescribing decisions; biology and the clinical condition warrant extreme attention to complexity. Similarly, knowledge of total and out-of-pocket drug costs would be helpful if patients were presented with choices, but it is difficult to rationalize the hundreds (if not thousands) of different formularies imposed by health plans. One can argue that effective measurement of outcomes and care metrics is essential for demonstrably increasing quality of care, but the value of many quality metrics is uncertain and the administrative burdens imposed on clinicians who must collect these data border on the intolerable, often coming at the expense of patient interaction. As the economist Uwe Reinhardt wrote: "I have been at many conferences at which concerned clinicians explore so-called 'evidence-based medicine,' replete with 'evidence-based best-clinical-practice guidelines' and the associated 'clinical pathways.' I cannot recall a conference on the topic of 'evidence-based best administrative practices' (although I may have missed it)" [2]. Consider the future role of the traditional institution-centric electronic health record. Federal incentives greatly accelerated the introduction of EHRs into hospitals and clinics and made transactions like e-Prescribing routine. Data and communications standards allow communication across different clinical systems and expand capabilities for medication management, care coordination, and other clinical activities outside of hospitals and clinics. Common EHR data elements and organizational data warehouses are simplifying secondary data use for quality reporting, administration, population health, research, and other uses.
Web portals, mobile communications, and patient-accessible EHRs are engaging patients and their families to a greater degree. But this rapid introduction of EHRs has been a mixed blessing. Critics claim that EHRs focus on administration and payment at the expense of providing the cognitive support patients and clinicians desperately need. EHRs
cannot simply continue on their current path. To improve clinician morale and productivity, the urge to introduce even further unnecessary administrative burdens on care providers must be resisted. Given the many turbulent transformations in care delivery methods, care delivery organizations, and patient-centered health technologies, many clinical informatics advances will be realized through extension of traditional EHRs, and still others will be the product of experimentation with clinical technologies that address immediate consumer-directed needs and view EHR connectivity as a secondary objective. Since both models will be introduced, evaluated, and adopted, one must understand how informatics can influence the evolution of many different types of clinical systems. The ascendancy of data science has been a central theme of biomedical informatics. Broadly construed, these activities expand fundamental biomedical informatics activities through the introduction of new technologies and techniques. Findings emerging from increasingly interoperable clinical databases like i2b2, OMOP, and PCORnet further stimulate essential large-scale, collaborative data standardization and ontology development. These in turn will simplify the inclusion of a broader array of personal, environmental, and biologic computable knowledge structures. Machine learning and related disciplines arising from these activities foster discovery of previously unknown medication interactions, genetic propensities, behavioral risks, predictions, and actionable care interventions. Social networks and other forms of informal communication are having similar impacts. In principle, these networks can gather isolated individuals sharing common concerns and can reinforce positive behaviors and combat impediments to health – social isolation, misinformation, and costs.
Some forms of "digital group therapy" or "group telemedicine" may be particularly well-suited to these circumstances. A dazzling array of new technologies must also be understood and, when appropriate, introduced into clinical research and care delivery. The collection, integration, and analysis of new data streams produced by these devices are already being used to manage diet, weight, exercise, and even cardiac rhythm problems. Untethered from traditional EHRs, these products are producing new and valuable sources of ambiently-collected data at lower costs. Speech and gesture recognition will simplify human-computer interaction. Ambient data collection methods simplify collection of routine data and provide additional context for documentation and interpretation. Clinician-computer interactions may become unobtrusive and allow greater focus on patients rather than computer screens. Ambient data collection – including video interpretation of clinician–patient interactions – may be used to more completely summarize the clinical encounter. Image recognition technologies can diagnose skin disorders and interpret radiographs and some other medical images. Machine learning algorithms will reliably screen for abnormalities and complement human judgement. We cannot fully control how innovations will be adopted, nor can we predict their societal impact. Informatics – and innovation more broadly – is a two-edged sword. For example, clinical systems have improved care, reduced costs, and contributed to new insights through translational informatics and data science. At the same time, they have added considerably to administrative burdens and cost, and in practice, may emphasize administrative tasks over the critical cognitive work that is the foundation of clinical medicine. At the clinical and policy level, efforts to simplify programs and processes become even more important. Similarly, social networks and telemedicine allow previously isolated individuals to reinforce possibly socially objectionable attitudes or behaviors.
But these same networks can rapidly distribute and reinforce exaggerated or false claims about the efficacy of vaccinations, treatments, and scientific evidence; these practices challenge society's very idea of a common truth. Advances in data science and analytics, when combined with sensors and devices on the person, in the home, or in public spaces, raise fears that "someone/something is always watching." If data are aggregated and used by an unauthorized "data-industrial complex" working outside of socially acceptable norms, privacy rights are threatened. Better means of anonymizing data and more realistic privacy and data use policies will become even more important. Although paradigms change, an emphasis on data, information, knowledge, and effective use remains foundational. A primary responsibility of biomedical informatics is to ensure that everything from data generation to knowledge generation is continually improving through greater consistency and efficiency. These improvements in turn should result in systems that more effectively address real needs and not merely automate flawed behaviors or practices. Our future depends on the extent to which we can introduce efficient means of presenting needed, reliable, and consistent information and the extent to which our efforts ensure better outcomes for individuals and society. To be effective, informatics professionals must proceed based on their experience, knowledge, and values. They must, in other words, practice wisdom.
1. Schneider, E. C., Sarnak, D. O., Squires, D., Shah, A., & Doty, M. M. (2017). Mirror, mirror 2017: International comparison reflects flaws and opportunities for better U.S. health care. Commonwealth Fund. http://www.commonwealthfund.org/interactives/2017/july/mirror-mirror/
2. Reinhardt, U. E. (2013, September 13). Waste vs. value in American health care. New York Times. https://economix.blogs.nytimes.com/2013/09/13/waste-vs-value-in-american-health-care/
Box 30.6 The Future of Health IT: A Clinical Perspective
Robert M. Wachter
About a decade ago, I hired a young clinical informaticist for a faculty position at UCSF. I told him he had an incredibly bright future, since we would soon implement a well-respected vendor-built electronic health record (EHR). I was confident that this would be exciting and important work, work that would keep him fully employed for years to come. I didn't share with him my worry: what would his job be after the EHR was installed? Needless to say, despite the fact that our EHR has been up and running for 6 years, he remains gainfully employed. In fact, he is busier than ever. His experience taught me something I did not understand at the time: the implementation of the EHR is merely the first step in the process of extracting value from healthcare digitization. In fact, I have come to see the process of digitization as involving four steps:
1. Digitizing the record
2. Connecting all the digital parts ("interoperability")
3. Gaining insights from the digital data now being generated by and traversing the system
4. Taking advantage of digitization to build and/or implement new tools and approaches that deliver healthcare value (improving quality, safety, patient experience, access, and equity while also lowering costs and improving efficiency and productivity)
In the United States, the $30 billion of incentive payments distributed by the government under the HITECH Act from 2010 to 2014 succeeded in achieving the first step – nearly all hospitals and 90% of physician offices now use an EHR. While we see sporadic examples of activities under Steps 2, 3, and even 4, they are by far the exception. As we look beyond the present, let's fantasize about a future world in which we have substantially accomplished all four steps. What might our
healthcare system look like? The answer is that the experience of being both patient and healthcare professional will be far more satisfying. Let's turn first to the hospital. Much of the care that we currently think of as requiring hospitalization will undoubtedly be accomplished within less expensive settings (including the patient's home), aided by a variety of technologies ranging from clinical sensors to advanced audio and video capabilities. The hospital will mostly exist to care for very sick patients – the types we might today associate with being in the ICU. And the ICU will likely no longer be a walled-off physical space. Rather, every hospital bed will be modular, capable of supporting ICU-level care with the push of a few buttons. Decision-making about who needs higher levels of care will not be left to the clinician's "eyeball test." Instead, clinicians' experience will be augmented by sophisticated AI-based prediction tools constantly humming in the background, alerting doctors and nurses that, say, a patient's probability of death just spiked up and thus she bears closer watching. Of course, taking advantage of all these AI-generated predictions will require cracking the tough nut of alert fatigue. This will be accomplished by markedly decreasing false positive rates, implementing advanced data visualization and other prioritization methods, and likely through the discovery of approaches that haven't yet been invented. Patient rooms will have large video screens and sophisticated camera and audio equipment to allow for tele-visits. Patients and families will be able to review clinicians' notes, test results, and treatment recommendations, either on the big screen or on their hospital-issued tablet computer. Patients will not only have full access to their EHR but will also receive educational materials ("here's what to expect from your MRI tonight") and motivation ("Good job on your incentive spirometer today!") provided by the technology.
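Cracking the "tough nut of alert fatigue" described above is largely a matter of reducing false positives. One simple illustration of a prioritization method (a toy sketch only, with an invented function name and thresholds, not any deployed algorithm) is to page a clinician only when a risk score stays above threshold for several consecutive readings, rather than firing on every transient spike:

```python
def sustained_alerts(risk_scores, threshold=0.8, persistence=3):
    """Return indices at which an alert should fire: the risk score has
    stayed at or above `threshold` for `persistence` consecutive readings.
    Firing resets the counter so one sustained episode pages only once."""
    alerts, run = [], 0
    for i, score in enumerate(risk_scores):
        run = run + 1 if score >= threshold else 0
        if run >= persistence:
            alerts.append(i)
            run = 0  # suppress repeat pages for the same episode
    return alerts

# A transient spike (index 1) is ignored; a sustained rise pages once.
scores = [0.2, 0.9, 0.3, 0.85, 0.9, 0.95, 0.4]
print(sustained_alerts(scores))  # [5]
```

Real systems would combine many such signals with data visualization and clinical context; the point here is only that even a crude persistence requirement trades a little latency for a large cut in false-positive pages.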
The dreaded nurse call button will be replaced by a voice-activated system in which a patient’s request results in a nurse appearing on screen and even taking some actions (increasing the IV flow rate or adjusting
the bed, for example) remotely. If a new pill is needed, likely as not a robot will deliver it. When the hospital doctor comes to visit the patient, the room's telemedicine capabilities will allow additional parties to participate. For example, a palliative care discussion can involve distant family members, the inpatient palliative care team, and a physician at an outside hospice. An infectious disease consult might involve a discussion between the patient, the hospitalist, and the ID consultant in real time, rather than the serial visits and imperfect communication through chart notes that mark current practice. Speaking of notes, in both inpatient and outpatient settings physicians will no longer spend hours typing notes into the EHR. Rather, natural language processing technology will "listen" to the doctor-patient conversation via room-based microphones and create a useful note, improving itself over time as it learns each physician's individual practice style and patient population ("digital scribes"). Documentation will increasingly become the byproduct of the doctor-patient encounter, not a central focus of the physician's attention. On the other hand, clinicians will glean far more useful information and insights from their digital tools, including the EHR. As data are entered into the patient's chart, the EHR will suggest possible diagnoses and testing approaches, and guidelines and recommended treatment approaches will be a click or a voice command away. In essence, the EHR and the electronic textbook will merge into one integrated tool. Turning to the outpatient arena, much of the care that currently requires in-person visits will be conducted via IT-enabled home care and televisits. The care of patients with chronic diseases will be utterly transformed, with a far greater emphasis on real-time, home-based, technology-enabled decision support and disease management.
The heart failure patient will begin his day by weighing himself on a digital scale and answering a few questions on the computer (“How is your breathing? How did you sleep?”). It might even know how much salt
he used in the past day (through the "Internet of Things"). The technology will integrate this information, along with streaming data on heart rate and blood pressure drawn from wearable or stick-on sensors, to offer recommendations for drug and activity adjustments. Ditto for patients with diabetes, emphysema, asthma, and the like. Making all of this function will require a new workflow, and with it a new set of healthcare professionals. Sometimes called "care traffic controllers," they will be clinicians (likely nurses with advanced training in population health and some informatics) who will monitor, via advanced digital dashboards, the status of 100 or 1,000 such patients, contacting and coaching the ones who seem to be having problems. The initial contacts may be generated by AI-driven algorithms and delivered by the technology, but the care traffic controller will intervene if the patient continues to have problems. For patients continuing to do poorly, a physician will become engaged. Even then, many of these encounters will be IT-enabled remote ones. For patients with acute medical issues, much of the care will be delivered by apps, which will also offer AI-derived recommendations for simple diagnoses and interventions. Patients who require higher levels of care will see a clinician through telemedicine or community-based urgent care. Urgent care clinics will be conveniently placed in supermarkets and pharmacies, and our eventual success in achieving complete interoperability – the patient's record always available via the cloud – will enhance the ability to view the relevant parts of the EHR and to record data that then become available to all subsequent practitioners. The promise of precision medicine will finally be realized.
For example, the guidelines for treating a 50-year-old woman with high blood pressure or elevated cholesterol will become far more complex and customized, considering a variety of patient- and population-based risk factors and large amounts of genetic information. This same complexity also means that the clinician will depend on the computer to "know" all of these variables and
suggest the best approach. Rather than remembering the correct approach to hypercholesterolemia in middle-aged women (there will no longer be any one correct approach), the role of the clinician will become more about interpreting the computer's output (including intervening when it seems wrong), communicating the findings to the patient, and motivating the necessary behavioral change. Of course, this changed role will require a significant evolution in medical education. In fact, the ability to analyze vast amounts of digital data will transform all clinical research. Rather than basing most of our treatment recommendations on small randomized clinical trials, many advances will come through analyses of actual clinical data, seeing which approaches are associated with better outcomes. Of course, this will require sophisticated adjustment for confounders, which should also be facilitated by the vast amounts of fully integrated digital clinical information. Individual healthcare systems will take advantage of these data as well, transforming themselves into so-called "learning healthcare systems" by mining their own data and experience to determine which approaches lead to the best outcomes. The vision that I've described here is not around the corner – it is likely 10–15 years away. And achieving it will require not just the technology of a few large EHR vendors, but
also the contribution of companies, large and small, some built to solve specific healthcare problems, others digital giants (the Apples, Googles, and Amazons of the world) taking advantage of their capabilities in areas like app development, supply chain, and AI to attack healthcare problems. Importantly, in such a multidimensional digital world, success cannot come simply by buying pieces of technology, peeling off the bubble wrap, and dropping them into healthcare systems and workflows. It will be up to clinical informaticists to deeply understand the needs of patients, clinicians, families, and administrators; the complexities of the technologies; and the economic, regulatory, privacy, and often ethical context. Informatics professionals will be the ones making the clinical and business case for change, and working with both vendors and clinicians to ensure that these new approaches and technologies actually achieve their aims. This is why the job of the clinical informaticist will remain highly secure for the foreseeable future. While the job of the informaticist will no longer be to implement a core enterprise EHR, he or she will be doing something more complex and likely more valuable: reimagining the work and the workflow to take advantage of evolving digital capabilities to improve healthcare value.
Box 30.7 The Future of Biomedical Informatics from the Federal Government Perspective
Patricia Flatley Brennan
Advances in biomedical informatics, including computational bioinformatics, are essential to accelerating scientific discovery and assuring the health of society. Finally, after 40 years of promise, there are sufficient data and computing power to realize the visions of early biomedical informatics leaders that data-powered health could become a reality. Decades of slow but steady progress towards formalizing biomedical knowledge through effective use of language and messaging standards are now complemented by improvements in heuristics and algorithms that can translate those formalizations into actionable decisions. The attention of the field to key users has broadened to include basic science researchers and clinicians as well as patients and families. As a major provider of health care services, as well as a key funder of health care services, supporter of biomedical and health-related research, and guardian of key health quality initiatives, the United States federal government plays and will continue to play a significant role in advancing biomedical informatics over the next decade. Federal investment will lead to advances in data management and protection, new ways to draw knowledge out of health data, and delivery of more accurate and complete health information at the point of need, anywhere. Perspectives of open science, ensuring economic advancement through research, and a recognition of the accountability of the government to the taxpayer are engendering a new commitment to openness and responsiveness to society.
The National Library of Medicine (NLM), one of the 27 institutes and centers at the National Institutes of Health, is key among the several federal agencies committed to ensuring the availability of high-quality data to characterize patient problems, account for health care resource expenditure, and foster research driven by greater understanding of clinical phenomena. The NLM partners with other health-related divisions and agencies, including the Centers for Disease Control and Prevention, the Centers for Medicare and Medicaid Services, and the Agency for Healthcare Research and Quality, to rapidly respond to public health threats, monitor health care expenditures and quality, and foster systemic interoperability. Partnerships between the NIH and other federal agencies outside of the health sector will allow investments in biomedical informatics to benefit from generalized investment in data curation, large-scale data management and storage, privacy, and network platforms. The NLM will do for data what it has done for the literature – making them findable, accessible, interoperable, and reusable (FAIR). These attributes, linked under the rubric of the FAIR principles, provide guidance for how a federal library makes its resources available to the public. Making data FAIR requires improved curation strategies, ones that balance automated approaches with human indexing and metadata development in a way that takes advantage of the speed of automation while preserving human talent for the most complex cases. The Library-of-the-Future will continue to see the NLM serving as the custodian of key collections, but also increasing its reach as a connector of important information and data resources that exist outside of its boundaries. Future developments may also lead to a discovery-on-demand approach to locating and obtaining information that has not been previously archived. The NLM will invest in research that advances use of these important collections and provides novel methodologies to interrogate them. Some agencies within the Department of Health and Human Services, such as the Office of the National Coordinator for Health Information Technology (ONC), will continue to invest in broad, societal resources to maintain the health information infrastructure.
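Making data FAIR begins with machine-checkable metadata. The fragment below is hypothetical (the field names are assumptions loosely modeled on common dataset-metadata conventions, not an NLM schema) and sketches how a curation pipeline might flag records that lack the minimum needed for findability and reuse:

```python
# Hypothetical minimum metadata for a FAIR-oriented dataset record;
# these field names are illustrative, not an official schema.
REQUIRED_FIELDS = {
    "identifier",   # findable: a persistent, globally unique ID (e.g., a DOI)
    "title",        # findable: an indexed, human-readable description
    "access_url",   # accessible: where and how the data can be retrieved
    "format",       # interoperable: a standard representation (e.g., CSV)
    "license",      # reusable: clear terms for reuse
}

def missing_fair_fields(record: dict) -> set:
    """Return the required metadata fields that are absent or empty."""
    return {f for f in REQUIRED_FIELDS if not record.get(f)}

record = {"identifier": "doi:10.0000/example", "title": "Example data set",
          "access_url": "https://example.org/data", "format": "CSV"}
print(sorted(missing_fair_fields(record)))  # ['license']
```

Automated checks of this kind handle the routine cases at scale, which is exactly what frees human curators for the most complex ones.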
Other agencies, such as the Centers for Medicare and Medicaid Services, will continue to be both data consumers (with payment schemes resting on claims for health services) and data contributors, making their information accessible to consumers for enhanced self-monitoring and to
researchers to foster discovery informed by care. Several trans-federal initiatives are on the horizon, designed to ensure efficient investment in scalable, reusable information resources. The recognition across NIH of the importance of linking clinical information and biological data portends expanded investment in methods for curating and integrating information across time within a person and across people to better understand the health of individuals and populations. Rapid growth of data from research taxes existing technical capabilities and demands additional policy development and financial investment to house important data resources. The federal government fosters policies that protect patient privacy and develops the incentive structures to accelerate the adoption of effective computer systems for health care. With rapidly growing data-generating initiatives, the federal government must take a critical role in determining how best to select and preserve the full range of information. The federal government will host public discussions and dialogs to ensure that clinical information is sufficiently broad to reflect the clinical experience of all persons. It is responsible for ensuring the cross-national arrangements needed to keep scientific exchange of health data open and free-flowing. Interagency coordination is needed to ensure that technological advances benefit health care and that health dollars leverage investments made in other sectors. The primary point of coordination is the Networking and Information Technology Research and Development (NITRD) Program. NITRD is a trans-agency initiative designed to provide the research and development foundations for advancing information technologies, and also to deploy those technologies in the service of the country. The NIH reports its technological research and development expenditures to the President through the NITRD program.
The NIH broadly, and the NLM specifically, participate in the many workgroups that focus on broad-ranging topics such as computing-enabled human interaction, communication and augmentation, cybersecurity and privacy, and high-capability computing infrastructure and applications.
Federal resources should be spent on those things that only the federal government should do. These investments include short- and long-term research and development programs that advance the health and well-being of society, educating the workforce of the future, and protecting key assets in perpetuity. Most of this investment is likely to occur through the NLM. In the biomedical informatics arena this means investing in research to develop methods that are scalable, sustainable, and reproducible; creating computational approaches to data management capable of curation at scale; and developing the libraries of the future, which encompass not only literature and data but also the interim products of research, such as protocols, ethics and human-subjects agreements, and novel methods of documenting research activities, such as the next generation of Jupyter notebooks. Development efforts should be applied to the ever-growing amount of text-based journal articles and reports, to devise new and creative ways to expose the literature to a variety of publics.

Educational programs and efforts of the future will infuse data science and advanced biomedical informatics lessons not only into the training programs of specialists, but across biomedical research and clinical training programs, even extending to equipping patients and lay people with access to data and information and the tools to make use of those resources.

It is worth noting two very important trends that will shape the future of federal engagement with health information technologies. First, there will be an increase in public-private partnerships that leverage knowledge in the technical and information technology sector in support of health care. Such partnerships should lead to a more robust and interoperable health information environment.
Second, there will be certain roles that the federal government must preserve, such as maintaining accurate and freely accessible information resources for the public good and overseeing the development of policies that foster data sharing while protecting individual and institutional rights. Future federal efforts will be accompanied by collaborations with industry. These collaborations could take the form of joint investments in common problems, such as data quality or curation. Other forms of partnership may emerge that engage the federal investment for research and development with accelerated pathways for technology transfer. Including industrial members on Federal Advisory Committees will provide pathways for exchange of information.

The NLM will continue to play a leadership role in maintaining accurate and freely accessible information resources. The NLM has taken a major step towards this by migrating all of its public-facing information resources onto a common, sustainable technical platform. This migration will not only enhance efficiencies but also allow for increased interoperability across its resources. A common technical platform, coupled with enhancement of terminology and vocabulary systems, will make it more feasible for intended users to traverse the information resources housed there.

In the future there will be an increasing role for the federal government in protecting and preserving information in perpetuity. The enabling legislation of the NLM directs it to collect the medical knowledge of the time and store it permanently in ways that make it accessible for a wide range of users. As the largest funder of public health and health care, the federal government indirectly shapes what constitutes health information
and how it is used and valued. The federal government has two key levers for expanding the definition of what constitutes health. First, it can invest in research that demonstrates the impact of considering a broader range of health data, including social and behavioral predictors, on what constitutes health. Additionally, because of its role as a major funder of health care through CMS, the federal government shapes what is considered of value in health care, for example by supporting research that finds ways to incorporate the social and behavioral predictors of health into routine data collection, and then ensuring the use of this information in the diagnostic, treatment, and evaluation aspects of the health care process.

The future of biomedical informatics from the federal perspective is one characterized by openness, partnerships, and perpetual storage of biomedical knowledge. A vibrant research program will be needed to develop and deploy the tools needed to accomplish this vision. Thoughtful deliberation is essential to protect the privacy rights of individuals while fostering the greatest degree of sharing of data and information needed to achieve the goals enabled by data-driven discovery.
Suggested Readings
Cimino, J. J. (2019). Putting the "why" in "EHR": Capturing and coding clinical cognition. Journal of the American Medical Informatics Association, 26(11), 1379–1384. Cimino identifies fundamental changes that will be needed to correct the common criticisms of today's electronic health records to transform them from glorified billing diaries into true electronic assistants.

Mesko, B. The Medical Futurist. https://medicalfuturist.com/magazine (accessed June 12, 2020). Mesko's online magazine (and other postings on the Futurist's web site) provides a glimpse of technologies that are currently emerging or envisioned for the future, in many cases leveraging innovations in biomedical engineering or biomedical informatics.

Topol, E. (2016). The patient will see you now: The future of medicine is in your hands. New York: Basic Books. Topol envisions the future world that follows today's "Gutenberg moment." Much as the printing press took learning out of the hands of a special class that had access to manuscripts, the Internet and modern computing devices are doing the same for medicine, giving individuals control over their own health care.

Wachter, R. (2017). The digital doctor: Hope, hype, and harm at the dawn of medicine's computer age. New York: McGraw-Hill Education. Offers a thoughtful critique of today's modern application of digital technologies in health care, identifying today's limitations but emphasizing the promise for a greatly enhanced world for both patients and physicians.
Questions for Discussion

1. How are the advances in bioinformatics likely to affect clinical care and vice versa?
2. Identify one potential setting for an informatics "living laboratory". Who or what is the subject of evaluation? How would you "instrument" the setting to measure activity and performance?
3. Identify one area for informatics education and describe the living laboratory that would support training objectives.
References

Abeler, J., Bäcker, M., Buermeyer, U., & Zillessen, H. (2020). COVID-19 contact tracing and data protection can go together. JMIR mHealth and uHealth, 8(4), e19359.

Dewey, C., Hingle, S., Goelz, E., & Linzer, M. (2020). Supporting clinicians during the COVID-19 pandemic. Annals of Internal Medicine, 172, M20-1033. https://doi.org/10.7326/M20-1033.

Ding, X. R., Clifton, D., Ji, N., Lovell, N. H., Bonato, P., Chen, W., et al. (2020). Wearable sensing and telehealth technology with potential applications in the coronavirus pandemic. IEEE Reviews in Biomedical Engineering, 1. https://doi.org/10.1109/RBME.2020.2992838.

Dong, E., Du, H., & Gardner, L. (2020). An interactive web-based dashboard to track COVID-19 in real time. The Lancet Infectious Diseases, 20(5), 533–534. https://doi.org/10.1016/S1473-3099(20)30120-1.

Feijóo, C., Kwon, Y., Bauer, J. M., Bohlin, E., Howell, B., Jain, R., et al. (2020). Harnessing artificial intelligence (AI) to increase wellbeing for all: The case for a new technology diplomacy. Telecommunications Policy, 44, 101988.

Hong, Y. R., Lawrence, J., Williams, D., Jr., & Mainous, A., III. (2020). Population-level interest and telehealth capacity of US hospitals in response to COVID-19: Cross-sectional analysis of Google Search and National Hospital Survey data. Journal of Medical Internet Research, 6(2), e18961.

Wachter, R. (2017). The digital doctor: Hope, hype, and harm at the dawn of medicine's computer age. New York: McGraw-Hill Education.

Xu, B., Kraemer, M. U. G., & Open COVID-19 Data Curation Group. (2020). Open access epidemiological data from the COVID-19 outbreak. The Lancet Infectious Diseases, 20(5), 534.

Zayas-Cabán, T., Abernethy, A. P., Brennan, P. F., Devaney, S., Kerlavage, A. R., Ramoni, R., & White, P. J. (2020). Leveraging the health information technology infrastructure to advance federal research priorities. Journal of the American Medical Informatics Association, 27(4), 647–651.
Supplementary Information: Glossary, Name Index, Subject Index
Glossary
21st Century Cures Act A comprehensive bill that promotes and funds the acceleration of research into preventing and curing serious illnesses; accelerates drug and medical device development; attempts to address the opioid abuse crisis; and tries to improve mental health service delivery. It also includes health IT-related provisions on interoperability, data sharing/exchange, and electronic health records.

Abductive reasoning Can be characterized as a cyclical process of generating possible explanations or a set of hypotheses that are able to account for the available data, and then evaluating each of these hypotheses on the basis of its potential consequences. In this regard, abductive reasoning is a data-driven process that relies heavily on the domain expertise of the person.

Accountability Security function that ensures users are responsible for their access to and use of information based on a documented need and right to know.

Accountable care A descendant of managed care, accountable care is an approach to improving care and reducing costs. See: Accountable Care Organizations.

Accountable Care Organizations (ACOs) An organization of health care providers that agrees to be accountable for the quality, cost, and overall care of their patients. An ACO is reimbursed on the basis of managing the care of a population of patients; reimbursements are determined by quality scores and reductions in total costs of care.

ACO See: Accountable Care Organizations.

Active failures Errors that occur in an acute situation, the effects of which are immediately felt.

Active phase The phase of a clinical research study during which investigators collect data from participants receiving an intervention or interventions under study. It is also common to monitor study participants for adverse events during this phase.

Active storage In a hierarchical data-storage scheme, the devices used to store data that have long-term validity and that must be accessed rapidly.

Acute Physiology and Chronic Health Evaluation, Version III [APACHE III] A scoring system for rating disease severity, for particular use in intensive care units.

Adaptive learning Adapting the presentation of learning content in response to continuous assessment of the learner's performance.

Address An indicator of location; typically a number that refers to a specific position in a computer's memory or storage device; see also: Internet Address.

ADE See: Adverse Drug Events.

Admission-discharge-transfer (ADT) The core component of a hospital information system that maintains and updates the hospital census, including bed assignments of patients.

ADT See: Admission-discharge-transfer.

Advanced Cardiac Life Support A course to train providers on the procedure and set of clinical interventions for urgent treatment of cardiovascular emergencies.

Advanced Research Projects Agency Network (ARPANET) A large wide-area network created in the 1960s by the U.S. Department of Defense Advanced Research Projects Agency (DARPA) for the free exchange of information among universities and research organizations; the precursor to today's Internet.
Advanced Trauma Life Support A training program for medical providers for the management of acute trauma cases. ATLS is developed by the American College of Surgeons.

Adverse drug events (ADEs) Undesired patient events, whether expected or unexpected, that are attributed to administration of a drug.

Aggregations In the context of information retrieval, collections of content from a variety of content types, including bibliographic, full-text, and annotated material.

AHIMA See: American Health Information Management Association.

Alert message A computer-generated warning that is generated when a record meets pre-specified criteria, often referring to a potentially dangerous situation that may require action; e.g., receipt of a new laboratory test result with an abnormal value.

Algorithmic process An algorithm is a well-defined procedure or sequence of steps for solving a problem. A process that follows prescribed steps is accordingly an algorithmic process.

Alphanumeric Descriptor of data that are represented as a string of letters and numeric digits, without spaces or punctuation.

Amazon Mechanical Turk Amazon's crowdsourcing website for businesses or researchers (known as Requesters) that allows hiring of remotely located "crowdworkers" to perform discrete on-demand tasks that computers are currently unable to do.

Ambulatory medical record system (AMRS) A clinical information system designed to support all information requirements of an outpatient clinic, including registration, appointment scheduling, billing, order entry, results reporting, and clinical documentation.

American Health Information Management Association (AHIMA) Professional association devoted to the discipline of health information management (HIM).

American Heart Association A non-profit organization dedicated to improving heart health.

American Immunization Registry Association (AIRA) A membership organization that exists to promote the development and implementation of immunization information systems (IIS) as an important tool in preventing and controlling vaccine-preventable diseases. https://www.immregistries.org/about-aira.

American Medical Informatics Association (AMIA) Professional association dedicated to biomedical and health informatics.

American National Standards Institute [ANSI] A private organization that oversees voluntary consensus standards.

American Public Health Association (APHA) Represents a broad array of health professionals and others who care about the health of all people and all communities. It is the leading not-for-profit public health organization in the U.S.; it seeks to strengthen the impact of public health professionals and provides a science-based voice in policy debates. APHA seeks to advance prevention, reduce health disparities, and promote wellness. http://www.apha.org/.

American Recovery and Reinvestment Act of 2009 Public Law 111–5, commonly referred to as the Stimulus or Recovery Act, this legislation was designed to create jobs quickly and to invest in the nation's infrastructure, education, and healthcare capabilities.

American Standard Code for Information Interchange (ASCII) A 7-bit code for representing alphanumeric characters and other symbols.

AMIA See: American Medical Informatics Association.
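The ASCII entry above can be made concrete with a brief check; the sketch below uses Python's built-in `ord` and `chr`, and the character codes shown are standard ASCII values:

```python
# ASCII maps each character to a 7-bit code (0-127).
# ord() returns the numeric code for a character; chr() inverts it.
codes = [ord(c) for c in "ASCII"]        # [65, 83, 67, 73, 73]
seven_bit = all(0 <= n < 128 for n in codes)  # every code fits in 7 bits
roundtrip = "".join(chr(n) for n in codes)    # back to the original string
```

Because the code space is only 7 bits, ASCII cannot represent accented or non-Latin characters, which is why larger encodings (e.g., Unicode) were later layered on top of it.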
AMRS See: Ambulatory medical record systems.

Analog signal A signal that takes on a continuous range of values.

Analog-to-digital conversion (ADC) Conversion of sampled values from a continuous-valued signal to a discrete-valued digital representation.

Anchoring and adjustment A heuristic used when estimating probability, in which a person first makes a rough approximation (the anchor), then adjusts this estimate to account for additional information.

Annotated content In the context of information retrieval, content that has been annotated to describe its type, subject matter, and other attributes.

Anonymize Applied to health data and information about a unique individual, the act of de-identifying or stripping away any and all data which could be used to identify that individual.

ANSI See: American National Standards Institute.

Antibiogram Pattern of sensitivity of a microorganism to various antibiotics.

APACHE III See: Acute Physiology and Chronic Health Evaluation, Version III.

Apache Open source Web server software that was significant in facilitating the initial growth of the World Wide Web.

Applets Small computer programs that can be embedded in an HTML document and that will execute on the user's computer when referenced.

Application program A computer program that automates routine operations that store and organize data, perform analyses, facilitate the integration and communication of information, perform bookkeeping functions, monitor patient status, and aid in education.

Application programming interface (API) A specification that enables distinct software modules or components to communicate with each other.

Applications (applied) research Systematic investigation or experimentation with the goal of applying knowledge to achieve practical ends.

Apps Software applications, especially ones downloaded to mobile devices.

Archival storage In a hierarchical data-storage scheme, the devices used to store data for long-term backup, documentary, or legal purposes.

Arden Syntax for Medical Logic Module A coding scheme or language that provides a canonical means for writing rules that relate specific patient situations to appropriate actions for practitioners to follow. The Arden Syntax standard is maintained by HL7.

Argument A word or phrase that helps complete the meaning of a predicate.

ARPANET See: Advanced Research Projects Agency Network.

Artificial intelligence (AI) The branch of computer science concerned with endowing computers with the ability to simulate intelligent human behavior.

Artificial neural network A computer program that performs classification by taking as input a set of findings that describe a given situation, propagating calculated weights through a network of several layers of interconnected nodes, and generating as output a set of numbers, where each output corresponds to the likelihood of a particular classification that could explain the findings.
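As an illustration of the forward pass described in the artificial neural network entry above, the sketch below propagates two input findings through one hidden layer to two output scores. The weights are arbitrary, untrained numbers chosen purely for demonstration; a real network learns its weights from data:

```python
import math

def sigmoid(x):
    # Squashes any real number into (0, 1), a likelihood-like score.
    return 1.0 / (1.0 + math.exp(-x))

def forward(findings, w_hidden, w_out):
    # Hidden layer: each node takes a weighted sum of the input findings.
    hidden = [sigmoid(sum(w * f for w, f in zip(ws, findings))) for ws in w_hidden]
    # Output layer: one score per candidate classification.
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden))) for ws in w_out]

# Two findings, two hidden nodes, two candidate classes (illustrative weights).
probs = forward([1.0, 0.0],
                w_hidden=[[0.5, -0.2], [0.3, 0.8]],
                w_out=[[1.0, -1.0], [-1.0, 1.0]])
```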
ASCII See: American Standard Code for Information Interchange.

Assembler A computer program that translates assembly-language programs into machine-language instructions.

Assembly language A low-level language for writing computer programs using symbolic names and addresses within the computer's memory.

Association of American Medical Colleges (AAMC) A non-profit organization that includes all US and Canadian medical colleges and many teaching hospitals, and supports them in their education and research mission.

Asynchronous Transfer Mode (ATM) A network protocol designed for sending streams of small, fixed-length cells of information over very high-speed, dedicated connections, often digital optical circuits.

Audit trail A chronological record of all accesses and changes to data records, often used to promote accountability for use of, and access to, medical data.

Augmented reality Imposition of synthetic three-dimensional and text information on top of a view of the real world seen through specialized glasses worn by the learner.

Authenticated A process for positive and unique identification of users, implemented to control system access.

Authorized Within a system, a process for limiting user activities only to actions defined as appropriate based on the user's role.

Automated indexing The most common method of full-text indexing; words in a document are stripped of common suffixes, entered as items in the index, then assigned weights based on their ability to discriminate among documents (see vector-space model).

Availability In decision making, a heuristic method by which a person estimates the probability of an event based on the ease with which he can recall similar events. In security systems, a function that ensures delivery of accurate and up-to-date information to authorized users when needed.

Averaging out at chance nodes The process by which each chance node of a decision tree is replaced in the tree by the expected value of the event that it represents.

Backbone links Sections of high-capacity trunk (backbone) network that interconnect regional and local networks.

Backbone Network A high-speed communication network that carries major traffic between smaller networks.

Background question A question that asks for general information on a topic (see also: foreground question).

Backward chaining Also known as goal-directed reasoning. A form of inference used in rule-based systems in which the inference engine determines whether the premise (left-hand side) of a given rule is true by invoking other rules that can conclude the values of variables that currently are unknown and that are referenced in the premise of the given rule. The process continues recursively until all rules that can supply the required values have been considered.

Bag-of-words A language model where text is represented as a collection of words, independent of each other and disregarding word order.

Bandwidth The capacity for information transmission; the number of bits that can be transmitted per unit of time.

Baseline rate Population: the prevalence of the condition under consideration in the population from which the subject was selected. Individual: the frequency, rate, or degree of a condition before an intervention or other perturbation.
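The bag-of-words entry above can be illustrated in a few lines. The sentences and whitespace tokenization below are illustrative only; practical systems add stemming, stop-word removal, and term weighting:

```python
from collections import Counter

def bag_of_words(text):
    # Word order is discarded; only word counts remain.
    return Counter(text.lower().split())

a = bag_of_words("the patient saw the doctor")
b = bag_of_words("the doctor saw the patient")
# The two sentences differ in meaning but yield identical bags of words.
```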
Basic Local Alignment Search Tool (BLAST) An algorithm for determining optimal genetic sequence alignments based on the observations that sections of proteins are often conserved without gaps and that there are statistical analyses of the occurrence of small subsequences within larger sequences that can be used to prune the search for matching sequences in a large database.

Basic research Systematic investigation or experimentation with the goal of discovering new knowledge, often by proposing new generalizations from the results of several experiments.

Basic science The enterprise of performing basic research.

Bayes' theorem An algebraic expression often used in clinical diagnosis for calculating post-test probability of a condition (a disease, for example) if the pretest probability (prevalence) of the condition, as well as the sensitivity and specificity of the test, are known (also called Bayes' rule). Bayes' theorem also has broad applicability in other areas of biomedical informatics where probabilistic inference is pertinent, including the interpretation of data in bioinformatics.

Bayesian diagnosis program A computer-based system that uses Bayes' theorem to assist a user in developing and refining a differential diagnosis.

Before-after study (aka Historically controlled study) A study in which the evaluator attempts to draw conclusions by comparing measures made during a baseline period prior to the information resource being available and measures made after it has been implemented.

Behaviorism A social science framework for analyzing and modifying behavior.

Belief network A diagrammatic representation used to perform probabilistic inference; an influence diagram that has only chance nodes.

Best of breed An information technology strategy that favors the selection of individual applications based on their specific functionality rather than a single application that integrates a variety of functions.

Best of cluster A variant of the "best of breed" strategy that selects a single vendor for a group of similar departmental systems, such as laboratory, pharmacy, and radiology.

Bibliographic content In information retrieval, information abstracted from the original source.

Bibliographic database A collection of citations or pointers to the published literature.

Binary The condition of having only two values or alternatives.

Biobank A repository for biological materials that collects, processes, stores, and distributes biospecimens (usually human) for use in research.

Biocomputation The field encompassing the modeling and simulation of tissue, cell, and genetic behavior; see biomedical computing.

Bioinformatics The study of how information is represented and transmitted in biological systems, starting at the molecular level.

Biomarker A characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention.

Biomed Central An independent publishing house specializing in the publication of electronic journals in biomedicine (see www.biomedcentral.com).

Biomedical computing The use of computers in biology or medicine.

Biomedical engineering An area of engineering concerned primarily with the research and development of biomedical instrumentation and biomedical devices.
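The Bayes' theorem entry above describes the post-test probability calculation; a minimal sketch follows, with illustrative (not clinical) values for prevalence, sensitivity, and specificity:

```python
def post_test_probability(prevalence, sensitivity, specificity):
    # Bayes' theorem for a positive test result:
    # P(D|+) = P(+|D) P(D) / [ P(+|D) P(D) + P(+|not D) P(not D) ]
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative values: 1% prevalence, 90% sensitivity, 95% specificity.
p = post_test_probability(0.01, 0.90, 0.95)
```

With these numbers the post-test probability is only about 0.15, a classic demonstration of why a positive result for a rare condition still leaves the diagnosis more likely absent than present.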
Biomedical informatics The interdisciplinary field that studies and pursues the effective uses of biomedical data, information, and knowledge for scientific inquiry, problem solving, and decision making, driven by efforts to improve human health.

Biomedical Information Science and Technology Initiative (BISTI) An initiative launched by the NIH in 2000 to make optimal use of computer science, mathematics, and technology to address problems in biology and medicine. It includes a consortium of senior-level representatives from each of the NIH institutes and centers plus representatives of other Federal agencies concerned with biocomputing. See: http://www.bisti.nih.gov.

Biomedical taxonomy A formal system for naming entities in biomedicine.

Biomolecular imaging A discipline at the intersection of molecular biology and in vivo imaging; it enables the visualization of cellular function and the follow-up of molecular processes in living organisms without perturbing them.

Biopsychosocial model A model of medical care that emphasizes not only an understanding of disease processes, but also the psychological and social conditions of the patient that affect both the disease and its therapy.

Biosample Biological source material used in experimental assays.

Biosurveillance A public health activity that monitors a population for occurrence of a rare disease or increased occurrence of a common one. Also see Public Health Surveillance and Surveillance.

Bit The logical atomic element for all digital computers.

Bit depth The number of bits that represent an individual pixel in an image; the more bits, the more intensities or colors can be represented.

Bit rate The rate of information transfer; a function of the rate at which signals can be transmitted and the efficacy with which digital information is encoded in the signal.

BLAST See: Basic Local Alignment Search Tool.

Blinding In the context of clinical research, blinding refers to the process of obfuscating from the participant and/or investigator what study intervention a given participant is receiving. This is commonly done to reduce study biases.

Blog A type of Web site that provides discussion or information on specific topics.

Blue Button A feature of the Veterans Administration's VistA system that exports an entire patient's record in electronic form.

BlueTooth A standard for the short-range wireless interconnection of mobile phones, computers, and other electronic devices.

Body The portion of a simple electronic mail message that contains the free-text content of the message.

Body of knowledge An information resource that encapsulates the knowledge of a field or discipline.

Boolean operators The mathematical operators and, or, and not, which are used to combine index terms in information retrieval searching.

Boolean searching A search method in which search criteria are logically combined using and, or, and not operators.

Bootstrap A small set of initial instructions that is stored in read-only memory and executed each time a computer is turned on. Execution of the bootstrap is called booting the computer. By analogy, the process of starting larger computer systems.
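The Boolean operators and Boolean searching entries above map directly onto set operations over an inverted index. The toy index below (hypothetical terms and document numbers) is for illustration only:

```python
# Toy inverted index: each term maps to the set of documents containing it.
index = {
    "asthma":  {1, 2, 5},
    "child":   {2, 3, 5},
    "smoking": {1, 5},
}

and_hits = index["asthma"] & index["child"]    # asthma AND child
or_hits  = index["asthma"] | index["smoking"]  # asthma OR smoking
not_hits = index["asthma"] - index["smoking"]  # asthma NOT smoking
```

Production retrieval systems layer ranking (e.g., term weighting in the vector-space model) on top of this basic Boolean matching.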
Bottom-up An algorithm for analyzing small pieces of a problem and building them up into larger components.

Bound morpheme A morpheme that creates a different form of a word but must always occur with another morpheme (e.g., −ed, −s).

B-pref A method for measuring retrieval performance in which documents without relevance judgments are excluded.

Bridge A device that links or routes signals from one network to another.

Broadband A data-transmission technique in which multiple signals may be transmitted simultaneously, each modulated within an assigned frequency range.

Browsing Scanning a database, a list of files, or the Internet, either for a particular item or for anything that seems to be of interest.

Bundled payments In the healthcare context, refers to the practice of reimbursing providers based on the total expected costs of a particular episode of care. Generally occupies a "middle ground" between fee-for-service and capitation mechanisms.

Business logic layer A conceptual level of system architecture that insulates the applications and processing components from the underlying data and the user interfaces that access the data.

Buttons Graphic elements within a dialog box or user-selectable areas within an HTML document that, when activated, perform a specified function (such as invoking other HTML documents and services).

C statistic The area under a receiver operating characteristic (ROC) curve.

CAD See: Computer-aided diagnosis.

Cadaver An embalmed human body used for teaching anatomy through the process of dissecting tissue.

Canonical form A preferred string or name for a term or collection of names; the canonical form may be determined by a set of rules (e.g., "all capital letters with words sorted in alphabetical order") or may be simply chosen arbitrarily.

Capitated payments System of health-care reimbursement in which providers are paid a fixed amount per patient to take care of all the health needs of a population of patients.

Capitation Payments to providers, typically on an annual basis, in return for which the clinicians provide all necessary care for the patient and do not submit additional fee-for-service bills.

Cardiac output A measure of blood volume pumped out of the left or right ventricle of the heart, expressed as liters per minute.

Care coordinator See: Case Manager.

Care plan A document that provides direction for individualized patient care.

Cascading finite state automata (FSA) A tagging method in natural language processing in which a series of finite state automata are employed such that the output of one FSA becomes the input for another.

Case Refers to the capitalization of letters in a word.

Case manager A person in charge of coordinating all aspects of a patient's care.

CCD See: Continuity of Care Document.

CCOW See: Clinical Context Object Workgroup.

CDC See: Centers for Disease Control and Prevention.

CDE See: Common Data Element.
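The C statistic entry above has a useful equivalent interpretation: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (ties counting one half). A minimal sketch with made-up scores:

```python
def c_statistic(pos_scores, neg_scores):
    # Rank interpretation of the ROC area: fraction of positive/negative
    # pairs in which the positive case is scored higher (ties count half).
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# 5 of the 6 positive/negative pairs are ordered correctly -> 5/6.
auc = c_statistic([0.9, 0.8, 0.4], [0.5, 0.3])
```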
CDR See: Clinical data repository.
tution using a common set of databases and interfaces.
CDS Hooks A technical approach designed to
invoke external CDS services from within the EHR workflow based upon a triggering event. Services may be in the form of (a) information cards – provide text for the user to read; (b) suggestion cards – provide a specific suggestion for which the EHR renders a button that the user can click to accept, with subsequent population of the change into the EHR user interface; and (c) app link cards – provide a link to an app. CDSS See: Clinical decision-support system. CDW See: Clinical data warehouse. Cellular imaging Imaging methods that visu-
alize cells. Center for Medicare & Medicaid Services The Center for Medicare & Medicaid Services (CMS) is a federal agency within the United States Department of Health and Human Services that administers the Medicare program and works in partnership with state governments to administer Medicaid, the Children’s Health Insurance Program, and health insurance portability standards. In addition to these programs, CMS has other responsibilities, including the administrative simplification standards from the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Centering theory A theory that attempts to explain what entities are indicated by referential expressions (such as pronouns) by noting how the center (focus of attention) of each sentence changes across the text. Centers for Disease Control and Prevention (CDC) An agency within the US Department
of Health and Human Services that provides the public with health information and promotes health through partnerships with state health departments and other organizations. Central computer system A single system that
handles all computer applications in an insti-
Central processing unit (CPU) The “brain” of the computer. The CPU executes a program stored in main memory by fetching and executing instructions in the program. Central Test Node (CTN) DICOM software to foster cooperative demonstrations by the medical imaging vendors. Certificate Coded authorization information that can be verified by a certification authority to grant system access. Challenge evaluation An evaluation of information systems, often in the field of information retrieval or related areas, that provides a public test collection or gold standard data collection for various researchers to compare and analyze results. Chance node A symbol that represents a
chance event. By convention, a chance node is indicated in a decision tree by a circle. Character sets and encodings Tables
of numeric values that correspond to sets of printable or displayable characters. ASCII is one example of such an encoding.
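The "character sets and encodings" entry above can be made concrete with Python's built-ins: characters map to numeric code points, and an encoding such as UTF-8 serializes them to bytes. This is a minimal illustration, not part of the glossary's source material.

```python
# Characters correspond to numeric code points; encodings map them to bytes.
ascii_value = ord("A")            # ASCII assigns 65 to "A"
assert ascii_value == 65

# ASCII covers only 7-bit characters; others need a larger encoding.
utf8_bytes = "é".encode("utf-8")  # UTF-8 represents "é" with two bytes
assert utf8_bytes == b"\xc3\xa9"
assert utf8_bytes.decode("utf-8") == "é"
```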
Chart parsing A dynamic programming algo-
rithm for structuring a sentence according to grammar by saving and reusing segments of the sentence that have been parsed. Chat A synchronous mode of text-based
communication. Check tags In MeSH, terms that represent
certain facets of medical studies, such as age, gender, human or nonhuman, and type of grant support; check tags provide additional indexing of bibliographic citations in databases such as Medline. CHI See: Consumer health informatics. CHIN See: Community Health Information Network.
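The CDS Hooks entry earlier distinguishes three card types. The sketch below builds a hypothetical service response containing one of each; the field names follow the public CDS Hooks specification, but the clinical content and service name are invented for illustration.

```python
# Hypothetical CDS Hooks response with the three card types described above.
def make_cards():
    return {
        "cards": [
            {   # (a) information card: text for the user to read
                "summary": "Patient is due for HbA1c testing",
                "indicator": "info",
                "source": {"label": "Example CDS Service"},
            },
            {   # (b) suggestion card: a change the EHR can apply on click
                "summary": "Order HbA1c",
                "indicator": "warning",
                "source": {"label": "Example CDS Service"},
                "suggestions": [{"label": "Add HbA1c order"}],
            },
            {   # (c) app link card: a link to an external app
                "summary": "Open diabetes management app",
                "indicator": "info",
                "source": {"label": "Example CDS Service"},
                "links": [{"label": "Diabetes app",
                           "url": "https://example.org/app",
                           "type": "smart"}],
            },
        ]
    }

response = make_cards()
assert len(response["cards"]) == 3
```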
1026
Glossary
Chunking A natural language processing method for determining non-recursive phrases where each phrase corresponds to a specific part of speech.

CINAHL See: Cumulative Index to Nursing and Allied Health Literature.

CINAHL Subject Headings A set of terms based on MeSH, with additional domain-specific terms added, used for indexing the Cumulative Index to Nursing and Allied Health Literature (CINAHL).

CIS See: Clinical information system.

Citation database A database of citations found in scientific articles, showing the linkages among articles in the scientific literature.

Classification In image processing, the categorization of segmented regions of an image based on the values of measured parameters, such as area and intensity.

Classroom technologies All technology used in a classroom setting, including projection of two-dimensional slides or views of three-dimensional objects, electronic markup of a screen presentation, real-time feedback systems such as class polling, and digital recording of a class session.
CLIA certification See: Clinical Laboratory Improvement Amendments of 1988 certification.

Client–server Information processing interaction that distributes application processing between a local computer (the client) and a remote computer resource (the server).

Clinical and translational research A broad spectrum of research activities involving the translation of findings from initial laboratory-based studies into early-stage clinical studies, and subsequently, from the findings of those studies into clinical and/or population-level practice. This broad area incorporates multiple biomedical informatics sub-domains, including both translational bioinformatics and clinical research informatics.

Clinical Context Object Workgroup (CCOW) A common protocol for single sign-on implementations in health care. It allows multiple applications to be linked together, so the end user only logs in and selects a patient in one application, and those actions propagate to the other applications.

Clinical data repository (CDR) A clinical database optimized for storage and retrieval of data for individual patients and used to support patient care and daily operations.

Clinical data warehouse (CDW) A database of clinical data obtained from primary sources such as electronic health records, organized for re-use for secondary purposes.

Clinical datum Replaces medical datum with the same definition.

Clinical decision support Any process that provides health-care workers and patients with situation-specific knowledge that can inform their decisions regarding health and health care.

Clinical decision-support system (CDSS) A computer-based system that assists physicians in making decisions about patient care.

Clinical Document Architecture An HL7 standard for naming and structuring clinical documents, such as reports.

Clinical expert system A computer program designed to provide decision support for diagnosis or therapy planning at a level of sophistication that an expert physician might provide.

Clinical guidelines Systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances.
Clinical informatics The application of biomedical informatics methods in the patient-care domain; a combination of computer science, information science, and clinical science designed to assist in the management and processing of clinical data, information, and knowledge to support clinical practice.

Clinical information system (CIS) The components of a health-care information system designed to support the delivery of patient care, including order communications, results reporting, care planning, and clinical documentation.

Clinical judgment Decision making by clinicians that incorporates professional experience and social, ethical, psychological, financial, and other factors in addition to the objective medical data.

Clinical Laboratory Improvement Amendments of 1988 certification Certification under the Clinical Laboratory Improvement Amendments of 1988, establishing laboratory testing quality standards to ensure the accuracy, reliability, and timeliness of patient test results, regardless of where the test was performed.
Clinical modifications A published set of changes to the International Classification of Diseases (ICD) that provides additional levels of detail necessary for statistical reporting in the United States.

Clinical pathway Disease-specific plan that identifies clinical goals, interventions, and expected outcomes by time period.

Clinical Quality Language An expression language standardized by HL7 that is used to characterize both quality measure logic and decision-support logic.

Clinical research The range of studies and trials in human subjects that fall into three sub-categories: (1) Patient-oriented research: research conducted with human subjects (or on material of human origin such as tissues, specimens, and cognitive phenomena) for which an investigator (or colleague) directly interacts with human subjects; patient-oriented research includes (a) mechanisms of human disease, (b) therapeutic interventions, (c) clinical trials, and (d) development of new technologies. (2) Epidemiologic and behavioral studies. (3) Outcomes research and health services research.

Clinical research informatics (CRI) The application of biomedical informatics methods in the clinical research domain to support all aspects of clinical research, from hypothesis generation, through study design, study execution and data collection, data analysis, and dissemination of results.

Clinical Research Management System (CRMS) A technology platform that supports and enables the conduct of clinical research, including clinical trials, usually through a combination of functional modules targeting the preparatory, enrollment, active, and dissemination phases of such research programs. CRMS systems are often also referred to as Clinical Trials Management Systems (CTMS), particularly when they are used to manage only clinical trials rather than various types of clinical research.

Clinical subgroup A subset of a population in which the members have similar characteristics and symptoms, and therefore similar likelihood of disease.

Clinical trials Research projects that involve the direct management of patients and are generally aimed at determining optimal modes of therapy, evaluation, or other interventions.

Clinical-event monitors Systems that electronically and automatically record the occurrence or changes of specific clinical events, such as blood pressure, respiratory capability, or heart rhythms.

Clinically relevant population The population of patients that is seen in actual practice. In the context of estimating the sensitivity and specificity of a diagnostic test, that group of patients in whom the test actually will be used.
Closed loop Regulation of a physiological variable, such as blood pressure, by monitoring the value of the variable and altering therapy without human intervention.

Closed loop medication management system A workflow process (typically supported electronically) through which medications are ordered electronically by a physician, filled by the pharmacy, delivered to the patient, administered by a nurse, and subsequently monitored for effectiveness by the physician.

Cloud technology or computing The use of computing resources located in a remote location. Typically, cloud computing is provided by a separate business, and the user pays for it on a per-usage basis. There are variations such as private clouds, where the "cloud" is provided by the same business but leverages methods that permit easier virtualization and expandability than traditional methods. Private clouds are popular in healthcare because of security concerns with public cloud computing.

Clustering algorithms Methods that assign a set of objects into groups (called clusters) so that the objects in the same cluster are more similar (in some sense) to each other than to those in other clusters.

CMS See: Center for Medicare and Medicaid Services.

Coaching system An intelligent tutoring system that monitors the session and intervenes only when the student requests help or makes serious mistakes.

Cocke-Younger-Kasami (CYK) A dynamic programming method that parses text bottom-up; used only in conjunction with a grammar written in Chomsky normal form.

Code As a verb, to write a program; as a noun, the program itself.

Cognitive artifacts Human-made materials, devices, and systems that extend people's abilities in perceiving objects, encoding and retrieving information from memory, and problem-solving.

Cognitive engineering An interdisciplinary approach to the development of principles, methods, and tools to assess and guide the design of computerized systems to support human performance.

Cognitive heuristics Mental processes by which we learn, recall, or process information; rules of thumb.

Cognitive informatics (CI) An interdisciplinary field consisting of cognitive and information sciences, specifically focusing on human information processing, mechanisms, and processes within the context of computing and computer applications. The focus of CI is on understanding work processes and activities within the context of human cognition and the design of interventional solutions (often engineering, computing, and information technology solutions).

Cognitive load An excess of information that competes for limited cognitive resources, creating a burden on working memory.

Cognitive science Area of research concerned with studying the processes by which people think and behave.

Cognitive task analysis The analysis of both the information-processing demands of a task and the kinds of domain-specific knowledge required to perform it, used to study human performance.

Cognitive walkthrough (CW) An analytic method for characterizing the cognitive processes of users performing a task. The method is performed by an analyst or group of analysts "walking through" the sequence of actions necessary to achieve a goal, thereby seeking to identify potential usability problems that may impede the successful completion of a task or introduce complexity in a way that may frustrate users.
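The clustering idea defined earlier (grouping objects so that members of a cluster are more similar to each other than to outsiders) can be sketched with a tiny one-dimensional k-means. This is an invented illustration, not a production algorithm.

```python
# Toy 1-D k-means: repeatedly assign values to the nearest centroid,
# then move each centroid to the mean of its assigned values.
def kmeans_1d(values, centroids, iterations=10):
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for v in values:
            nearest = min(centroids, key=lambda c: abs(v - c))
            clusters[nearest].append(v)
        centroids = [sum(vs) / len(vs) if vs else c
                     for c, vs in clusters.items()]
    return sorted(centroids)

# Two obvious groups: low values near 2, high values near 10.
centers = kmeans_1d([1, 2, 3, 9, 10, 11], centroids=[0.0, 5.0])
assert centers == [2.0, 10.0]
```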
Collaborative workspace A virtual environ-
ment in which multiple participants can interact, synchronously or asynchronously, to perform a collaborative task. Color resolution A measure of the ability to
distinguish among different colors (indicated in a digital image by the number of bits per pixel). Three sets of multiple bits are required to specify the intensity of red, green, and blue components of each pixel color. Commodity internet A general-purpose con-
nection to the Internet, not configured for any particular purpose. Common Data Elements (CDEs) Standards for data that stipulate the methods by which the data are collected and the controlled terminologies used to represent them. Many standard sets of CDEs have been developed, often overlapping in nature. Communication Data transmission and information exchange between computers using accepted protocols via an exchange medium such as a telephone line or fiber optic cable. Community Health Information Network (CHIN) A computer network developed for
exchange of sharable health information among independent participant organizations in a geographic area (or community).
Comparative effectiveness research A form of clinical research that compares outcomes of two or more interventions to determine if one is statistically superior to another.

Compiler A program that translates a program written in a high-level programming language to a machine-language program, which can then be executed.

Comprehensibility and control Security function that ensures that data owners and data stewards have effective control over information confidentiality and access.

Computational biology The science of computer-based mathematical and statistical techniques used to analyze biological systems. See also bioinformatics.

Computed check A procedure applied to entered data that detects errors based on whether values have the correct mathematical relationship (e.g., white blood cell differential counts, reported as percentages, must sum to 100).

Computed tomography (CT) An imaging modality in which X rays are projected through the body from multiple angles and the resultant absorption values are analyzed by a computer to produce cross-sectional slices.

Computer architecture The basic structure of a computer, including memory organization, a scheme for encoding data and instructions, and control mechanisms for performing computing operations.

Computer memories Store programs and data that are being used actively by a CPU.

Computer program A set of instructions that tells a computer which mathematical and logical operations to perform.

Computer simulated patient See: Virtual patient.

Computer-aided diagnosis (CAD) Any form of diagnosis in which a computer program helps suggest or rank diagnostic considerations.

Computer-based (or computerized) physician order entry (CPOE) A clinical information system that allows physicians and other clinicians to record patient-specific orders for communication to other patient care team members and to other information systems (such as test orders to laboratory systems or medication orders to pharmacy systems). Sometimes called provider order entry or practitioner order entry to emphasize such systems' uses by clinicians other than physicians.
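The computed check described earlier can be sketched directly: white-blood-cell differential percentages should sum to 100, within a small rounding tolerance. The values and tolerance below are invented for illustration.

```python
# Computed check: differential counts, reported as percentages, must sum to 100.
def computed_check_wbc(differential, tolerance=0.5):
    total = sum(differential.values())
    return abs(total - 100.0) <= tolerance

ok = computed_check_wbc(
    {"neutrophils": 60.0, "lymphocytes": 30.0, "monocytes": 6.0,
     "eosinophils": 3.0, "basophils": 1.0})
assert ok  # sums to 100, so the entry passes

# An incomplete differential fails the check.
assert not computed_check_wbc({"neutrophils": 60.0, "lymphocytes": 25.0})
```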
Computer-based patient records (CPRs) An early name for electronic health records (EHRs) dating to the early 1990s.
Concept A unit of thought made explicit through the representation of properties of an object or a set of common objects. An abstract idea generalized from specific instances of objects that occur in the world.

Conceptual graph A formal notation in which knowledge is represented through explicit relationships between concepts. Graphs can be depicted with diagrams consisting of shapes and arrows, or in a text format.

Conceptual knowledge Knowledge about concepts.

Concordant test results Test results that reflect the true patient state (true-positive and true-negative results).

Conditional probability The probability of an event, contingent on the occurrence of another event.

Conditionally independent Two events, A and B, are conditionally independent if the occurrence of one does not influence the probability of the occurrence of the other, when both events are conditioned on a third event C. Thus, p[A | B,C] = p[A | C] and p[B | A,C] = p[B | C]. The conditional probability of two conditionally independent events both occurring is the product of the individual conditional probabilities: p[A,B | C] = p[A | C] × p[B | C]. For example, two tests for a disease are conditionally independent when the probability of the result of the second test does not depend on the result of the first test, given the disease state. For the case in which disease is present, p[second test positive | first test positive and disease present] = p[second test positive | first test negative and disease present] = p[second test positive | disease present]. More succinctly, the tests are conditionally independent if the sensitivity and specificity of one test do not depend on the result of the other test (see independent).

Conditioned event A chance event, the probability of which is affected by another chance event (the conditioning event).

Conditioning event A chance event that affects the probability of occurrence of another chance event (the conditioned event).

Confidentiality The ability of data owners and data stewards to control access to or release of private information.

Consistency check A procedure applied to entered data that detects errors based on internal inconsistencies; e.g., recognizing a problem with the recording of cancer of the prostate as the diagnosis for a female patient.

Constructivism The view that humans generate knowledge and meaning from an interaction between their experiences and their ideas.

Constructivist One who argues that humans generate knowledge and meaning from an interaction between their experiences and their ideas.

Consumer health informatics (CHI) Applications of medical informatics technologies that focus on patients or healthy individuals as the primary users.

Content In information retrieval, media developed to communicate information or knowledge.

Content-based image retrieval Also known as query by image content (QBIC) and content-based visual information retrieval (CBVIR), the application of computer vision techniques to the image retrieval problem, that is, the problem of searching for digital images in large databases.

Context-free grammar A mathematical model of a set of strings whose members are defined as capable of being generated from a starting symbol, using rules in which a single symbol is expanded into one or more symbols.
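The conditional-independence definition above can be checked numerically: two tests A and B are conditionally independent given disease state C when p[A,B | C] = p[A | C] × p[B | C]. The probabilities below are invented for illustration.

```python
# Two tests, conditioned on disease being present (event C).
p_A_given_C = 0.8  # sensitivity of test A
p_B_given_C = 0.6  # sensitivity of test B

# Under conditional independence, the joint probability factors.
p_AB_given_C = p_A_given_C * p_B_given_C
assert abs(p_AB_given_C - 0.48) < 1e-12

# Equivalently, conditioning on the other test's result changes nothing:
# p[A | B,C] = p[A,B | C] / p[B | C] = p[A | C].
p_A_given_B_and_C = p_AB_given_C / p_B_given_C
assert abs(p_A_given_B_and_C - p_A_given_C) < 1e-12
```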
Contingency table A 2 × 2 table that shows the relative frequencies of true-positive, true-negative, false-positive, and false-negative results.

Continuity of care The coordination of care received by a patient over time and across multiple healthcare providers.

Continuity of Care Document (CCD) An HL7 standard that enables specification of the patient data that relate to one or more encounters with the healthcare system. The CCD is used for interchange of patient information (e.g., within Health Information Exchanges). The format enables all the electronic information about a patient to be aggregated within a standardized data structure that can be parsed and interpreted by a variety of information systems.
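The 2 × 2 contingency table defined above directly yields a test's sensitivity and specificity. A sketch with invented counts:

```python
# Cells of the 2x2 contingency table (counts are invented).
TP, FP = 90, 30    # test positive: disease present / disease absent
FN, TN = 10, 170   # test negative: disease present / disease absent

sensitivity = TP / (TP + FN)  # true-positive rate
specificity = TN / (TN + FP)  # true-negative rate

assert sensitivity == 0.9
assert specificity == 0.85
```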
Continuous glucose monitor (CGM) A device that automatically tracks a diabetic patient's blood glucose levels throughout the day and night using a tiny sensor inserted under the skin.

Continuum of care The full spectrum of health services provided to patients, including health maintenance, primary care, acute care, critical care, rehabilitation, home care, skilled nursing care, and hospice care.

Contract-management system A computer system used to support managed-care contracting by estimating the costs and payments associated with potential contract terms and by comparing actual with expected payments based on contract terms.

Contrast The difference in light intensity between dark and light areas of an image.

Contrast resolution A metric for how well an imaging modality can distinguish small differences in signal intensity in different regions of the image.

Control intervention In the context of clinical research, the intervention (e.g., placebo, standard care, etc.) given to the group of study participants assigned to the control or comparator arm of a study. Depending on the study type, the goal is to generate data as the basis of comparison with the experimental intervention of interest in order to determine the safety, efficacy, or benefits of an experimental intervention.

Controlled terminology A finite, enumerated set of terms intended to convey information unambiguously.

Copyright law Protection of written materials and intellectual property from being copied verbatim.

Coreference chains Provide a compact representation for encoding the words and phrases in a text that all refer to the same entity.

Coreference resolution In natural language processing, the assignment of specific meaning to some indirect reference.

Correctional telehealth The application of telehealth to the care of prison inmates, where physical delivery of the patient to the practitioner is impractical.

Covered entities Under the HIPAA Privacy Rule, a covered entity is an organization or individual that handles personal health information. Covered entities include providers, health plans, and clearinghouses.

COVID-19 A disease that was identified in late 2019 and was declared a global pandemic on March 11, 2020. COVID-19 became an international public health emergency, affecting essentially all countries on the planet. It is characterized by contagion before symptoms, a high rate of transmission between human beings, variable severity among affected individuals, and a relatively high mortality rate.

CPOE See: Computer-based (or computerized) physician (or provider) order entry.

CPR (or CPRs) See: Computer-based patient records.
CPU See: Central processing unit. CRI See: Clinical research informatics. CRMS (or CRDMS) See: Clinical Research
Management System. Cryptographic encoding Scheme for protecting data through authentication and authorization controls based on use of keys for encrypting and decrypting information.
CT (or CAT) See: Computed tomography.

Cumulative Index to Nursing and Allied Health Literature (CINAHL) A non-NLM bibliographic database that covers nursing and allied health literature, including physical therapy, occupational therapy, laboratory technology, health education, physician assistants, and medical records.

Curly Braces Problem The situation that arises in Arden Syntax where the code used to enumerate the variables required by a medical logic module (MLM) cannot describe how the variables actually derive their values from data in the EHR database. Each variable definition in an MLM has {curly braces} that enclose words in natural language that indicate the meaning of the corresponding variable. The particular database query required to supply a value for the variable must be specified by the local implementer, however. The curly braces problem makes it impossible for an MLM developed at one institution to operate at another without local modification.

Cursor A blinking region of a display monitor, or a symbol such as an arrow, that indicates the currently active position on the screen.

Cybersecurity Measures that seek to protect against the criminal or unauthorized use of electronic data.

CYK See: Cocke-Younger-Kasami.

Dashboard A user-interface element that displays data produced by several computer programs simultaneously and that allows users to interact with those programs in standardized ways.

Data buses An electronic pathway for transferring data, for instance between a CPU and memory.

Data capture The process of collecting data to be stored in an information system; it includes entry by a person using a keyboard and collection of data from sensors.

Data Encryption Standard (DES) A widely used method of encryption that uses a private (secret) key for encryption and requires the same key for decryption (see also public key cryptography).

Data independence The insulation of applications programs from changes in data-storage structures and data-access strategies.

Data layer A conceptual level of system architecture that isolates the data collected and stored in the enterprise from the applications and user interfaces used to access those data.

Data recording The documentation of information for archival or future use through mechanisms such as handwritten text, drawings, machine-generated traces, or photographic images.

Data science The field of study that uses analytic, quantitative, and domain expertise for knowledge discovery, typically using "big data," which can be structured and/or unstructured.

Database A collection of stored data, typically organized into fields, records, and files, and an associated description (schema).

Database management system (DBMS) An integrated set of programs that manages access to databases.

Data-interchange standards Adopted formats and protocols for exchange of data between independent computer systems.
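The DES entry above describes symmetric (secret-key) encryption: the same key both encrypts and decrypts. The toy XOR cipher below illustrates only that symmetry property; it is not DES and is not secure.

```python
# Toy symmetric cipher: XOR each byte with a repeating key.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"hemoglobin 13.2 g/dL"
key = b"secret-key"

ciphertext = xor_cipher(plaintext, key)
assert ciphertext != plaintext

# Applying the same key again recovers the plaintext (the symmetric property).
assert xor_cipher(ciphertext, key) == plaintext
```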
Datum Any single observation of fact. A medical datum generally can be regarded as the value of a specific parameter (for example, red-blood-cell count) for a particular object (for example, a patient) at a given point in time.
DBMS See: Database management system.

DCMI See: Dublin Core Metadata Initiative.

De-duplicate/Deduplication The process that matches, links, and/or merges data to eliminate redundancies.

De-identified aggregate data Data reports that are summarized or altered slightly in a way that makes the discernment of the identity of any of the individuals whose data was used for the report impossible or so difficult as to be extremely improbable. The process of de-identifying aggregate data is known as statistical disclosure control.

Debugger A system program that provides traces, memory dumps, and other tools to assist programmers in locating and eliminating errors in their programs.

Decision analysis A methodology for making decisions by identifying alternatives and assessing them with regard to both the likelihood of possible outcomes and the costs and benefits of those outcomes.

Decision node A symbol that represents a choice among actions. By convention, a decision node is represented in a decision tree by a square.

Decision support The process of assisting humans in making decisions, such as interpreting clinical information or choosing a diagnostic or therapeutic action. See: Clinical decision support.

Decision tree A diagrammatic representation of the outcomes associated with chance events and voluntary actions.

Deductive reasoning A process of reaching specific conclusions (e.g., a diagnosis) from a hypothesis or a set of hypotheses. Deductive logic helps in building up the consequences of each hypothesis, and this kind of reasoning is customarily regarded as a common way of evaluating diagnostic hypotheses.

Delta check A procedure applied to entered data that detects large and unlikely differences between the values of a new result and of the previous observations; e.g., a recorded weight that changes by 100 lb in 2 weeks.

Demonstration study Study that establishes a relation, which may be associational or causal, between a set of measured variables.

Dental informatics The application of biomedical informatics methods and techniques to problems derived from the field of dentistry. Viewed as a subarea of clinical informatics.

Deoxyribonucleic acid (DNA) The genetic material that is the basis for heredity. DNA is a long polymer chemical made of four basic subunits. The sequence in which these subunits occur in the polymer distinguishes one DNA molecule from another and in turn directs a cell's production of proteins and all other basic cellular processes.

Department of Health and Human Services (DHHS) The federal agency charged with protecting the health and safety of U.S. citizens, both at home and abroad. It provides the public with health information and promotes health through partnerships with state health departments and other organizations, and it oversees the development and application of programs for disease prevention and control, environmental health, and health promotion and education. http://www.cdc.gov/.

Departmental system A system that focuses on a specific niche area in the healthcare setting, such as a laboratory, pharmacy, radiology department, etc.
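The delta check defined above flags a new result that differs implausibly from the previous observation. A sketch using the glossary's own example of a 100-lb weight change in 2 weeks; the threshold is invented for illustration.

```python
# Delta check: accept a new value only if it is plausibly close to the
# previous observation for the same parameter.
def delta_check(previous, current, max_delta):
    return abs(current - previous) <= max_delta

# A recorded weight that jumps 100 lb in 2 weeks should be flagged.
assert not delta_check(previous=150, current=250, max_delta=25)

# A small change passes.
assert delta_check(previous=150, current=152, max_delta=25)
```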
Dependency grammar A linguistic theory of
syntax that is based on dependency relations between words, where one word in the sentence is independent and other words are dependent on that word. Generally, the verb of a sentence is independent and other words are directly or indirectly dependent on the verb.

Dependent variable (also called outcome variable) In a correlational or experimental study, the main variable of interest or outcome variable, which is thought to be affected by or associated with the independent variables (q.v.).

Derivational morpheme A morpheme that changes the meaning or part of speech of a word (e.g., -ful as in painful, converting a noun to an adjective).

DES See: Data Encryption Standard.

Descriptive study One-group study that seeks to measure the value of a variable in a sample of subjects; a study with no independent variable.

Design validation A study conducted to inform the design of an information resource, e.g., a user survey.

DHHS See: Department of Health and Human Services.

Diagnosis The process of analyzing available data to determine the pathophysiologic explanation for a patient's symptoms.

Diagnosis-based reimbursement Payments to providers (typically hospitals) based on the diagnosis made by a physician at the time of admission.

Diagnosis-related group (DRG) One of almost 500 categories based on major diagnosis, length of stay, secondary diagnosis, surgical procedure, age, and types of services required. Used to determine the fixed payment per case that Medicare will reimburse hospitals for providing care to elderly patients.

Diagnostic decision-support system A computer-based system that assists physicians in rendering diagnoses; a subset of clinical decision-support systems. See clinical decision support system.

Diagnostic process The activity of deciding which questions to ask, which tests to order, or which procedures to perform, and determining the value of the results relative to associated risks or financial costs.

DICOM See: Digital Image Communications in Medicine.

Dictionary A set of terms representing the system of concepts of a particular subject field.

Differential diagnosis The set of active hypotheses (possible diagnoses) that a physician develops when determining the source of a patient's problem.

Digital computer A computer that processes discrete values based on the binary digit or bit. Essentially all modern computers are digital, but analog computers also existed in the past.

Digital divide Term referring to disparity in economic access to technology between "haves" and "have-nots".

Digital image An image that is stored as a grid of numbers, where each picture element (pixel) in the grid represents the intensity, and possibly color, of a small area.

Digital Image Communications in Medicine (DICOM) A standard for electronically exchanging digital health images, such as x-rays and CT scans.

Digital library Organized collections of electronic content, intended for specific communities or domains.

Digital object identifier (DOI) A system for providing unique identifiers for published digital objects, consisting of a prefix that is
assigned by the International DOI Foundation to the publishing entity and a suffix that is assigned and maintained by the entity.

Digital radiography (DR) The process of producing X-ray images that are stored in digital form in computer memory, rather than on film.

Digital signal A signal that takes on discrete values from a specified range of values.

Digital signal processing (DSP) An integrated circuit designed for high-speed data manipulation and used in audio communications, image manipulation, and other data acquisition and control applications.

Digital subscriber line (DSL) A digital telephone service that allows high-speed network communication using conventional (twisted-pair) telephone wiring.

Digital subtraction angiography (DSA) A radiologic technique for imaging blood vessels in which a digital image acquired before injection of contrast material is subtracted pixel by pixel from an image acquired after injection. The resulting image shows only the differences in the two images, highlighting those areas where the contrast material has accumulated.

Direct entry The entry of data into a computer system by the individual who personally made the observations.

Discharge Plan A plan that supports the transition of a patient from one care facility to home or another care facility and includes evaluation of the patient by qualified personnel, discussion with the patient or his representative, planning for homecoming or transfer to another care facility, determining whether caregiver training or other support is needed, referrals to a home care agency and/or appropriate support organizations in the community, and arranging for follow-up appointments or tests.

Discourse Large portions of text forming a narrative, such as paragraphs and documents.

Discrete event simulation model A modeling approach that assesses interactions between people, typically composed of patients that have attributes and that experience events.

Discussion board An on-line environment for exchanging public messages among participants.

Discussion lists and messaging boards Online tools for asynchronous text conversation.

Disease Any condition in an organism that is other than the healthy state.

Dissemination phase During the dissemination phase of a clinical research study, investigators analyze and report upon the data generated during the active phase.

Distributed cognition A view of cognition that considers groups, material artifacts, and cultures and that emphasizes the inherently social and collaborative nature of cognition.

Distributed computer systems A collection of independent computers that share data, programs, and other resources.

DNA See: Deoxyribonucleic Acid.

DNS See: Domain name system.

Document structure The organization of text into sections.

DOI See: Digital object identifier.

Domain name system (DNS) A hierarchical name-management system used to translate computer names to Internet protocol (IP) addresses.

Doppler shift A perceived change in frequency of a signal as the signal source moves toward or away from a signal receiver.
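The pixel-by-pixel subtraction at the heart of DSA can be sketched in a few lines; the 3x3 pixel grids and intensity values below are invented stand-ins for real images:

```python
# Illustrative sketch of digital subtraction angiography (DSA): a
# pre-contrast "mask" image is subtracted pixel by pixel from a
# post-contrast image, leaving only regions where contrast accumulated.

def subtract_images(post, pre):
    """Pixel-by-pixel subtraction of two equally sized gray-scale images."""
    return [
        [post_px - pre_px for post_px, pre_px in zip(post_row, pre_row)]
        for post_row, pre_row in zip(post, pre)
    ]

mask = [  # hypothetical image acquired before contrast injection
    [10, 10, 10],
    [10, 10, 10],
    [10, 10, 10],
]
contrast = [  # acquired after injection; the center pixel shows a vessel
    [10, 10, 10],
    [10, 90, 10],
    [10, 10, 10],
]

difference = subtract_images(contrast, mask)
# Background pixels subtract to zero; only the vessel pixel remains bright.
```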
Double blind A clinical study methodology in which neither the researchers nor the subjects know to which study group a subject has been assigned.

Double-blinded study In the context of clinical research, a study in which both the investigator and participant are blinded from the assignment of an intervention. In this scenario, a trusted third party must maintain records of such study arm assignments to inform later data analyses.

Draft standard for trial use A proposal for a standard developed by HL7 that is sufficiently well defined that early adopters can use the specification in the development of HIT. Ultimately, the draft standard may be refined and put to a ballot for endorsement by the members of the organization, thus creating an official standard.

DRG See: Diagnosis-Related Group.

Drug repurposing Identifying existing drugs that may be useful for indications other than those for which they were initially approved.

Drug screening robots Scientific instruments that can perform assays with potential drugs in a highly parallel and high-throughput manner.

Drug-genome interaction A relationship between a drug and a gene in which the gene product affects the activity of the drug or the drug influences the transcription of the gene.

DSA See: Digital subtraction angiography.

DSL See: Digital subscriber line.

DSP See: Digital signal processing.

Dublin Core Metadata Initiative (DCMI) A standard metadata model for indexing published documents.

Dynamic A simulation program that models changes in patient state over time and in response to students' therapeutic decisions.

Dynamic programming A computationally intensive computer-science technique used, for example, to determine optimal sequence alignments in many computational biology applications.

Dynamic transmission model A model that divides a population into compartments (for example, uninfected, infected, recovered, dead), in which transitions between compartments are governed by differential or difference equations.

Dynamical systems models Models that describe and predict the interactions over time between multiple components of a phenomenon that are viewed as a system. Dynamical systems models are often used to construct "controllers," algorithms that adjust functioning of the system (an airplane, artificial pancreas, etc.) to maximize a set of optimization criteria.

Earley parsing A dynamic programming method for parsing context-free grammars.

EBM See: Evidence-Based Medicine.

EBM database For Evidence-Based Medicine database, a highly organized collection of clinical evidence to support medical decisions based on the results of controlled clinical trials.

Ecological momentary assessment (EMA) A range of methods for collecting ecologically valid self-report data by enabling study participants and patients to report their experiences in real time, in real-world settings, over time and across contexts.

eCRF See: Electronic Case Report Form.

EDC See: Electronic Data Capture.

EDI See: Electronic data interchange.
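The dynamic-programming technique defined above can be illustrated with a classic, simplified relative of sequence alignment: computing the edit (Levenshtein) distance between two strings. Real alignment tools use richer scoring schemes; this is a minimal sketch:

```python
# Dynamic programming: build up the answer for two full strings from
# answers for their prefixes, stored in a table.

def edit_distance(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b."""
    # dp[i][j] = distance between a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i          # delete all of a[:i]
    for j in range(len(b) + 1):
        dp[0][j] = j          # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # match/substitution
    return dp[len(a)][len(b)]

print(edit_distance("GATTACA", "GCATGCU"))  # 4
```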
EEG See: Electroencephalography.

EHR See: Electronic health record.

EIW See: Enterprise information warehouse.

Electroencephalography (EEG) A method for measuring the electromagnetic fields generated by the electrical activity of the neurons using a large array of scalp sensors, the output of which is processed to localize the source of the electrical activity inside the brain.

Electronic Case Report Form (eCRF) A computational representation of paper case report forms (CRFs) used to enable EDC.

Electronic Data Capture (EDC) The process of capturing study-related data elements via computational mechanisms.

Electronic Data Interchange (EDI) Electronic exchange of standard data transactions, such as claims submission and electronic funds transfer.

Electronic Health Record (EHR) A repository of electronically maintained information about an individual's lifetime health status and health care, stored such that it can serve the multiple legitimate users of the record. See also EMR and CPR.

Electronic health record system An electronic health record and the tools used to manage the information; also referred to as a computer-based patient-record system and often shortened to electronic health record.

Electronic-long, paper-short (ELPS) A publication method which provides on the Web site supplemental material that did not appear in the print version of the journal.

Electronic Medical Record (EMR) The electronic record documenting a patient's care in a provider organization such as a hospital or a physician's office. Often used interchangeably with Electronic Health Record (EHR), although EHRs refer more typically to an individual's lifetime health status and care rather than the set of particular organizationally-based experiences.

Electronic Medical Records and Genomics (eMERGE) network A network of academic institutions that is exploring the capabilities of EHRs for genomic discovery and implementation.

ELPS See: Electronic-long, paper-short.

EMBASE A commercial biomedical and pharmacological database from Excerpta Medica, which provides information about medical and drug-related subjects.

Emergent design Study where the design or plan of research can and does change as the study progresses. Characteristic of subjectivist studies.

Emotion detection A natural language technique for determining the mental state of the author of a text document.

EMPI See: Enterprise master patient index.

EMR (or EMRS) See: Electronic Medical Record.

EMTREE A hierarchically structured, controlled vocabulary used for subject indexing, used to index EMBASE.

Enabling technology Any technology that improves organizational processes through its use rather than on its own. Computers, for example, are useless unless "enabled" by operating systems and applications or implemented in support of work flows that might not otherwise be possible.

Encryption The process of transforming information such that its meaning is hidden, with the intent of keeping it secret, such that only those who know how to decrypt it can read it; see decryption.
Endophenotype An observable characteristic that is tightly linked to underlying genetics and less dependent on environmental exposures or chance.

Enrichment analysis A statistical method to determine whether an a priori defined set of concepts shows statistically significant overrepresentation in descriptions of a set of items (such as genes) compared to what one would expect based on their frequency in a reference distribution.

Enrollment During enrollment of a clinical research study, potential participants are identified and research staff determine their eligibility for participation in a study, based upon the eligibility criteria described in the study protocol. If a participant is deemed eligible to participate, they are then officially "registered" for the study. It is during this phase that, in some trial designs, a process of randomization and assignment to study arms occurs.

Enterprise information warehouse (EIW) A database in which data from clinical, financial and other operational sources are collected in order to be compared and contrasted across the enterprise.

Enterprise master patient index (EMPI) An architectural component that serves as the name authority in a health-care information system composed of multiple independent systems; the EMPI provides an index of patient names and identification numbers used by the connected information systems.

Entrez A search engine from the National Center for Biotechnology Information (NCBI), at the National Library of Medicine; Entrez can be used to search a variety of life sciences databases, including PubMed.

Entry term A synonym form for a subject heading in the Medical Subject Headings (MeSH) controlled, hierarchical vocabulary.

Epidemiologic Related to the field of epidemiology.

Epidemiology The study of the patterns, causes, and effects of health and disease conditions in defined populations.

Epigenetics Heritable phenotypes that are not encoded in DNA sequence.

Epigenomics The study of heritable phenotypes that are not encoded in the organism's DNA.

e-prescribing The electronic process of generating, transmitting and filling a medical prescription.

Error analysis In natural language processing, a process for determining the reasons for false-positive and false-negative errors.

Escrow Use of a trusted third party to hold cryptographic keys, computer source code, or other valuable information to protect against loss or inappropriate access.

Ethernet A network standard that uses a bus or star topology and regulates communication traffic using the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) approach.

Ethnography Set of research methodologies derived primarily from social anthropology. The basis of much of the subjectivist, qualitative evaluation approaches.

ETL See: Extract, Transform, and Load.

Evaluation contract A document describing the aims of a study, the methods to be used and resources made available, usually agreed between the evaluator and key stakeholders before the study begins.

Event-Condition-Action (ECA) rule A rule that requires some event (such as the availability of a new data value in the database) to cause the condition (premise, or left-hand side) of the rule to be evaluated. If the condition is determined to be true, then some action is performed. Such rules are commonly found in active database systems and form the basis of medical logic modules.
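A minimal sketch of an event-condition-action rule, in the spirit of a medical logic module; the event format, threshold, and alert text are invented for illustration and are not clinical guidance:

```python
# A toy ECA rule: a new laboratory result arriving is the event; the
# rule's condition (left-hand side) is evaluated against it; if the
# condition holds, the action fires (here, producing an alert string).

def low_potassium_rule(event):
    """Returns an alert string if the rule fires, else None."""
    if event["test"] != "potassium":   # event filter: which data trigger us
        return None
    if event["value"] < 3.0:           # condition (premise)
        return f"ALERT: low potassium ({event['value']} mmol/L)"  # action
    return None

print(low_potassium_rule({"test": "potassium", "value": 2.4}))
print(low_potassium_rule({"test": "potassium", "value": 4.1}))  # None
```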
Evidence-based medicine (EBM) An approach to medical practice whereby the best possible evidence from the medical literature is incorporated in decision making. Generally such evidence is derived from controlled clinical trials.

Exabyte 10^18 bytes.

Exome The entire sequence of all genes within a genome, approximately 1–3% of the entire genome.

Expected value The value that is expected on average for a specified chance event or decision.

Experimental intervention In the context of clinical research, an experimental intervention represents the treatment or other intervention delivered to a participant assigned to the experimental arm of the study in order to determine the safety, efficacy, or benefits of that intervention.

Experimental science Systematic study characterized by posing hypotheses, designing experiments, performing analyses, and interpreting results to validate or disprove hypotheses and to suggest new hypotheses for study.

Extensible markup language (XML) A subset of the Standard Generalized Markup Language (SGML) from the World Wide Web Consortium (W3C), designed especially for Web documents. It allows designers to create their own custom-tailored tags, enabling the definition, transmission, validation, and interpretation of data between applications and between organizations.

External router A computer that resides on multiple networks and that can forward and translate message packets sent from a local or enterprise network to a regional network beyond the bounds of the organization.

External validity In the context of clinical research, external validity refers to the ability to generalize study results into clinical care.

Extract, Transform, and Load (ETL) The process by which source data is collected and manipulated so as to adhere to the structure and semantics of a receiving data construct, such as a data warehouse.

Extrinsic evaluation An evaluation of a component of a system based on an evaluation of the performance of the entire system.

F measure A measure of overall accuracy that is a combination of precision and recall.

Factual knowledge Knowledge of facts without necessarily having any in-depth understanding of their origin or implications.

False negative A negative result that occurs in a true situation. Examples include a desired entity that is missed by a search routine or a test result that appears normal when it should be abnormal.

False positive A positive result that occurs in a false situation. Examples include an inappropriate entity that is returned by a search routine or a test result that appears abnormal when it should be normal.

False-negative rate (FNR) The probability of a negative result, given that the condition under consideration is true—for example, the probability of a negative test result in a patient who has the disease under consideration.

False-negative result (FN) A negative result when the condition under consideration is true—for example, a negative test result in a patient who has the disease under consideration.

False-positive rate (FPR) The probability of a positive result, given that the condition under consideration is false—for example, the probability of a positive test result in a patient who does not have the disease under consideration.
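The rate and accuracy definitions above reduce to simple ratios over the four outcome counts (true positives, false positives, false negatives, true negatives); the counts in this sketch are invented:

```python
# Computing precision, recall, F measure, and the false-negative and
# false-positive rates from the four outcome counts.

def rates(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # = true-positive rate
    f_measure = 2 * precision * recall / (precision + recall)
    fnr = fn / (fn + tp)                    # P(negative result | condition true)
    fpr = fp / (fp + tn)                    # P(positive result | condition false)
    return precision, recall, f_measure, fnr, fpr

p, r, f, fnr, fpr = rates(tp=80, fp=20, fn=20, tn=880)
# precision = 0.8, recall = 0.8, F measure = 0.8
# FNR = 0.2, FPR = 20/900 (about 0.022)
```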
False-positive result (FP) A positive result when the condition under consideration is false—for example, a positive test result in a patient who does not have the disease under consideration.

Fast Healthcare Interoperability Resources (FHIR) An HL7 standard for information exchange using a well-defined and limited set of resources.

FDDI See: Fiber Distributed Data Interface.

Feedback In a computer-based education program, system-generated responses, such as explanations, summaries, and references, provided to further a student's progress in learning.

Fee-for-service Unrestricted system of health care reimbursement in which payers pay providers for those services the provider has deemed necessary.

Fiber Distributed Data Interface [FDDI] A transmission standard for local area networks operating on fiberoptic cable, providing a transmission rate of 100 Mbit/s.

Fiberoptic cable A communication medium that uses light waves to transmit information signals.

Fiducial An object used in the field of view of an imaging system which appears in the image produced, for use as a point of reference or a measure.

Field In science, the setting, which may be multiple physical locations, where the work under study is carried out. In database design, the smallest named unit of data in a database. Fields are grouped together to form records.

Field function study Study of an information resource where the system is used in the context of ongoing health care. Study of a deployed system (cf. Laboratory study).

Field user effect study A study of the actual actions or decisions of the users of the resource.

File In a database, a collection of similar records.

File format Representation of data within a file; can refer to the method for individual characters and values (for example, ASCII or binary) or their organization within the file (for example, XML or text).

File server A computer that is dedicated to storing shared or private data files.

File system An organization of files within a database or on a mass storage device.

Filtering algorithms Defined procedures applied to input data to reduce the effect of noise.

Finite state automaton An abstract, computer-based representation of the state of some entity together with a set of actions that can transform the state. Collections of finite state automata can be used to model complex systems.

Firewall A security system intended to protect an organization's network against external threats by preventing computers in the organization's network from communicating directly with computers external to the network, and vice versa.

Flash memory card A portable electronic storage medium that uses a semiconductor chip with a standard physical interface; a convenient method for moving data between computers.

Flexnerian A model of science-based acquisition of medically relevant knowledge, followed by on-the-job apprentice-style acquisition of experience, and accompanied by evolution and expansion of the curriculum to add new fields of knowledge.
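A finite state automaton as defined above can be sketched as a transition table plus a loop; the states and actions below (an order-signing workflow) are hypothetical:

```python
# A toy finite state automaton: each (state, action) pair maps to a
# next state; actions not in the table leave the state unchanged.

TRANSITIONS = {
    ("drafted", "sign"): "signed",
    ("drafted", "edit"): "drafted",
    ("signed", "transmit"): "sent",
}

def run(start, actions):
    """Feed a sequence of actions through the automaton."""
    state = start
    for action in actions:
        state = TRANSITIONS.get((state, action), state)  # ignore illegal moves
    return state

print(run("drafted", ["edit", "sign", "transmit"]))  # sent
```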
Floppy disk An inexpensive magnetic disk that can be removed from the disk-drive unit and thereby used to transfer or archive files.

FM See: Frequency modulation.

fMRI See: Functional magnetic resonance imaging.

FN See: False-negative result.

Force feedback A user interface feature in which physical sensations are transmitted to the user to provide a tactile sensation as part of a simulated activity. See also Haptic feedback.

Foreground question Question that asks for specific information related to the care of a specific patient (see also background question).

Form factor Typically refers to the physical dimensions of a product. With computing devices, refers to the physical size of the device, often with specific reference to the display. For example, we would observe that the form factor of a desktop monitor is significantly larger than that of a tablet or smart phone, and therefore able to display more characters and larger graphics on the screen.

Formative evaluation An assessment of a system's behavior and capabilities conducted during the development process and used to guide future development of the system.

Forward chaining Also known as data-driven reasoning. A form of inference used in rule-based systems in which the inference engine uses newly acquired (or concluded) values of variables to invoke all rules that may reference one or more of those variables in their premises (left-hand side), thereby concluding new values for variables in the conclusions (right-hand side) of those rules. The process continues recursively until all rules whose premises may reference the variables whose values become known have been considered.

FP See: False-positive result.

FPR See: False-positive rate.

Frame An abstract representation of a concept or entity that consists of a set of attributes, called slots, each of which can have one or more values to represent knowledge about the entity or concept.

Frame Relay A high-speed network protocol designed for sending digital information over shared wide-area networks using variable length packets of information.

Free morpheme A morpheme that is a word and that does not contain another morpheme (e.g., arm, pain).

Frequency modulation (FM) A signal representation in which signal values are represented as changes in frequency rather than amplitude.

Front-end application A computer program that interacts with a database-management system to retrieve and save data and to accomplish user-level tasks.

Full-text content The complete textual information contained in a bibliographic source.

Functional magnetic resonance imaging (fMRI) A magnetic resonance imaging method that reveals changes in blood oxygenation that occur following neural activity.

Functional mapping An imaging method that relates specific sites on images to particular physiologic functions.

Gateway A computer that resides on multiple networks and that can forward and translate message packets sent between nodes in networks running different protocols.

Gbps See: Gigabits per second.

GEM See: Guideline Element Model.

GenBank A centralized repository of protein, RNA, and DNA sequences in all species, currently maintained by the National Institutes of Health.
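Forward chaining, as defined in this glossary, can be sketched with a toy rule set; the facts and rules below are invented for illustration and carry no clinical meaning:

```python
# Forward chaining (data-driven reasoning): each rule's premise is a
# set of facts; when all are present in working memory, the rule's
# conclusion is added, which may in turn trigger further rules. The
# loop repeats until no rule adds a new fact.

RULES = [
    ({"fever", "cough"}, "respiratory-infection"),
    ({"respiratory-infection", "infiltrate-on-xray"}, "pneumonia"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in RULES:
            if premise <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires; conclusion becomes a fact
                changed = True
    return facts

result = forward_chain({"fever", "cough", "infiltrate-on-xray"})
# The first rule fires, and its conclusion then enables the second rule.
```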
Gene expression microarray A technology for studying the expression of large numbers of genes with one another and creating multiple variations on a genetic theme to explore the implications of changes in genome function on human disease.

Gene Expression Omnibus (GEO) A centralized database of gene expression microarray datasets.

Gene Ontology (GO) A structured controlled vocabulary used for annotating genes and proteins with molecular function. The vocabulary contains three distinct ontologies: Molecular Function, Biological Process, and Cellular Component.

Genes Units encoded in DNA that are transcribed into ribonucleic acid (RNA).

Genetic data An overarching term used to label various collections of facts about the genomes of individuals, groups or species.

Genetic risk score (GRS) A calculation of the likelihood of a particular phenotype being present based on a weighted score of one or more genetic variants; also referred to as a polygenic risk score (PRS).

Genome-Wide Association Studies (GWAS) An examination of many common genetic variants in different individuals to see if any variant is associated with a given trait, e.g., a disease.

Genomic medicine (also known as stratified medicine) The management of groups of patients with shared biological characteristics, determined through molecular diagnostic testing, to select the best therapy in order to achieve the best possible outcome for a given group.

Genomics The study of all of the nucleotide sequences, including structural genes, regulatory sequences, and noncoding DNA segments, in the chromosomes of an organism.

Genomics database An organized collection of information from gene sequencing, protein characterization, and other genomic research.

Genotype The genetic makeup, as distinguished from the physical appearance, of an organism or a group of organisms.

Genotypic Refers to the genetic makeup of an organism.

GEO See: Gene Expression Omnibus.

Geographic Information System (GIS) A system designed to capture, store, manipulate, analyze, manage, and visually present all types of location-specific data.

Gigabits per second (Gbps) A common unit of measure for data transmission over high-speed networks.

Gigabyte 2^30 or 1,073,741,824 bytes.

GIS See: Geographic Information System.

Global processing Computations on the entire image, without regard to specific regional content.

GO See: Gene Ontology.

Gold-standard test The test or procedure whose result is used to determine the true state of the subject—for example, a pathology test such as a biopsy used to determine a patient's true disease state.

Google A commercial search engine that provides free searching of documents on the World Wide Web.
GPS A system for calculating precise geographical location by triangulating information obtained from satellites and/or cell towers.

GPU See: Graphics processing unit.

Grammar A mathematical model of a potentially infinite set of strings.

Graph In computer science, a set of nodes or circles connected by a set of edges or lines.

Graphical user interface (GUI) A type of environment that represents programs, files, and options by means of icons, menus, and dialog boxes on the screen.

Graphics processing unit (GPU) A computer hardware component that performs graphic displays and other highly parallel computations.

Gray scale A scheme for representing intensity in a black-and-white image. Multiple bits per pixel are used to represent intermediate levels of gray.

Guardian Angel Proposal A proposed structure for a lifetime, patient-centered health information system.

GUI See: Graphical user interface.

Guidance In a computer-based education program, proactive feedback, help facilities, and other tools designed to assist a student in learning the covered material.

Guideline Element Model (GEM) An XML specification for marking up textual documents that describe clinical practice guidelines. The guideline-related XML tags make it possible for information systems to determine the nature of the text that has been marked up and its role in the guideline specification.

GWAS See: Genome-Wide Association Studies.

Haptic feedback A user interface feature in which physical sensations are transmitted to the user to provide a tactile sensation as part of a simulated activity.

Haptic sensation The sensation of touch or feel. It can be applied to a simulation of such sensation as presented within a virtual or augmented reality scenario.

Hard disk A magnetic disk used for data storage and typically fixed in the disk-drive unit.

Hardware The physical equipment of a computer system, including the central processing unit, memory, data-storage devices, workstations, terminals, and printers.

Harmonic mean An average of a set of weighted values in which the weights are determined by the relative importance of the contribution to the average.

HCI See: Human-computer interaction.

HCO See: Healthcare organization.

Head word The key word in a multi-word phrase that conveys the central meaning of the phrase. For example, in a phrase containing adjectives and a noun, the noun is typically the head word.

Header (of email) The portion of a simple electronic mail message that contains information about the date and time of the message, the address of the sender, the addresses of the recipients, the subject, and other optional information.

Health Evaluation and Logical Processing [HELP] One of the first electronic health record systems, developed at LDS Hospital in Salt Lake City, Utah. Still in use today, it was innovative for its introduction of automated alerts.

Health informatics Used by some as a synonym for biomedical informatics, this term is increasingly used solely to refer to applied research and practice in clinical and public health informatics.
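The harmonic mean defined above underlies the F measure (q.v.); a minimal sketch of the unweighted case:

```python
# The harmonic mean of n values is n divided by the sum of their
# reciprocals; it penalizes imbalance among the values being averaged,
# which is why the F measure uses it to combine precision and recall.

def harmonic_mean(values):
    return len(values) / sum(1 / v for v in values)

print(harmonic_mean([0.5, 1.0]))  # 2/3, lower than the arithmetic mean of 0.75
```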
Health information and communication technology (HICT) The broad spectrum of hard-
ware and software used to capture, store and transmit health information. Health Information exchange (HIE) The process of moving health information electronically among disparate health care organizations for clinical care and other purposes; or an organization that is dedicated to providing health information exchange. Health Information Infrastructure (HII) The set of public and private resources, including networks, databases, and policies, for collecting, storing, and transmitting health information. Health Information Technology (HIT) These of
computers and communications tech nology in healthcare and public health settings. Health Information Technology for Economic and Clinical Health (HITECH) Also referred
to as HITECH Act. Passed by the Congress as Title IV of the American Recovery and Reinvestment Act of 2009 (ARRA) in 2009, established four major goals that promote the use of health information technology: (1) Develop standards for the nationwide electronic exchange and use of health information; (2) Invest $20B in incentives to encourage doctors and hospitals to use HIT to electronically exchange patients’ health information; (3) Generate $10B in savings through improvements in quality of care and care coordination, and reductions in medical errors and duplicative care and (4) Strengthen Federal privacy and security law to protect identifiable health information from misuse. Also codified the Office of the National Coordinator for Health Information Technology (ONC) within the Department of Health and Human Services. Health Insurance Portability and Accoun tability Act (HIPAA) A law enacted in 1996 to
protect health insurance coverage for workers and their families when they change or lose their jobs. An “administrative simplification” provision requires the Department of Health and Human Services to establish national
standards for electronic healthcare transactions and national identifiers for providers, health plans, and employers. It also addresses the security and privacy of health data. Health Level Seven (HL7) An ad hoc stan-
dards group formed to develop standards for exchange of health care data between independent computer applications; more specifically, the health care data messaging standard developed and adopted by the HL7 standards group. Health literacy A constellation of
skills, including the ability to perform basic reading, math, and everyday health tasks like comprehending prescription bottles and appointment slips, required to function in the health care environment.
Health Maintenance Organization (HMO) A group practice or affiliation of independent practitioners that contracts with patients to pro- vide comprehensive health care for a fixed periodic payment specified in advance. Health on the Net[HON] A private organiza-
tion establishing ethical standards for health information published on the World Wide Web. Health Record Bank (HRB) An independent organization that provides a secure electronic repository for storing and maintaining an individual’s lifetime health and medical records from multiple sources and assuring that the individual always has complete control over who accesses their information. Healthcare Effectiveness Data and Information Set (HEDIS) Employers and individuals use
HEDIS to measure the quality of health plans. HEDIS measures how well health plans give service to their members. HEDIS is one of health care's most widely used performance improvement tools. It is developed and maintained by the National Committee for Quality Assurance. Healthcare organization (HCO) Any health-related organization that is involved in direct patient care.
1045 Glossary
Healthcare team A coordinated group of
health professionals including physicians, nurses, case managers, dieticians, pharmacists, therapists, and other practitioners who collaborate in caring for a patient. HEDIS See: Healthcare Effectiveness Data and Information Set.
HELP See: Health Evaluation and Logical Processing.
HELP sector A decision rule encoded in the HELP system, a clinical information system that was developed by researchers at LDS Hospital in Salt Lake City.
Helper (plug-in) An application that is launched by a Web browser when the browser downloads a file that the browser is not able to process itself.
Heuristic A mental "trick" or rule of thumb; a cognitive process used in learning or problem solving.
Heuristic evaluation (HE) A usability inspection method in which the system is evaluated on the basis of a small set of well-tested design principles, such as visibility of system status, user control and freedom, consistency and standards, and flexibility and efficiency of use.
HICT See: Health information and communication technology.
HIE See: Health Information Exchange.
Hierarchical An arrangement between entities that conveys some superior–inferior relationship, such as parent–child, whole–part, etc.
Hierarchical Task Analysis A task-analytic approach that involves breaking down a task into sub-tasks and smaller constituent parts (e.g., sub-sub-tasks). The tasks are organized according to specific goals.
High-bandwidth An information channel that is capable of delivering data at a relatively high rate.
Higher-level process A complex process comprising multiple lower-level processes.
HII See: Health Information Infrastructure.
Hindsight bias The tendency to over-estimate the prior predictability of an event once the event has already taken place. For example, if event A occurs before event B, there may be an assumption that A predicted B.
HIPAA See: Health Insurance Portability and Accountability Act.
HIS See: Hospital information system.
Historical control In the context of clinical research, historical controls are subjects who represent the targeted population of interest for a study. Typically, their data are derived from existing resources in a retrospective manner and represent targeted outcomes in a non-interventional state (often resulting from standard-of-care practices), so as to provide the basis for comparison to data sets derived from participants who have received an experimental intervention under study.
Historically controlled study See: before-after study.
HIT See: Health Information Technology.
HITECH See: Health Information Technology for Economic and Clinical Health.
HITECH regulations The components of the Health Information Technology for
Economic and Clinical Health Act, passed by the Congress in 2009, which authorized financial incentives to be paid to eligible physicians and hospitals for the adoption of “meaningful use” of EHRs in the United States. The law also called for the certification of EHR technology and for educational programs to enhance its dissemination and adoption. HIV See: Human immunodeficiency virus. HL7 See: Health Level 7.
HMO See: Health maintenance organization.
Home Telehealth The extension of telehealth services into the home setting to support activities such as home nursing care and chronic disease management.
HON See: Health on the Net.
Hospital information system (HIS) Computer system designed to support the comprehensive information requirements of hospitals and medical centers, including patient, clinical, ancillary, and financial management.
Hot fail over A secondary computer system that is kept in constant synchronization with the primary system and that can take over as soon as the primary fails for any reason.
Hounsfield number The numeric information contained in each pixel of a CT image. It is related to the composition and nature of the tissue imaged and is used to represent the density of tissue.
HRB See: Health Record Bank.
HTML See: HyperText Markup Language.
HTTP See: HyperText Transfer Protocol.
Human factors The scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data, and other methods to design in order to optimize human well-being and overall system performance.
Human Genome Project An international undertaking, the goal of which is to determine the complete sequence of human deoxyribonucleic acid (DNA), as it is encoded in each of the 23 chromosomes.
Human immunodeficiency virus (HIV) A retrovirus that invades and inactivates helper T cells of the immune system and is a cause of AIDS and AIDS-related complex.
Human-computer interaction (HCI) Formal methods for addressing the ways in which human beings and computer programs exchange information.
HyperText Markup Language (HTML) The document specification language used for documents on the World Wide Web.
Hypertext Text linked together in a non-sequential web of associations. Users can traverse highlighted portions of text to retrieve additional related information.
HyperText Transfer Protocol (HTTP) The client–server protocol used to access information on the World Wide Web.
Hypothesis generation The process of proposing a hypothesis, usually driven by some unexplained phenomenon and the derivation of a suspected underlying mechanism.
Hypothetico-deductive approach A method of reasoning made up of four stages (cue acquisition, hypothesis generation, cue interpretation, and hypothesis evaluation) which is used to generate and test hypotheses. In clinical medicine, an iterative approach to diagnosis in which physicians perform sequential, staged data collection, data interpretation, and hypothesis generation to determine and refine a differential diagnosis.
Hypothetico-deductive reasoning Reasoning by first generating and then testing a set of hypotheses to account for clinical data (i.e., reasoning from hypothesis to data).
ICANN See: Internet Corporation for Assigned Names and Numbers.
ICD-9-CM See: International Classification of Diseases, 9th Edition, Clinical Modifications.
ICMP See: Internet Control Message Protocol.
Icon In a graphical interface, a pictorial representation of an object or function.
ICT See: Information and communications technology.
IDF See: Inverse document frequency.
IDN See: Integrated delivery network.
Image acquisition The process of generating images from the modality and converting them to digital form if they are not intrinsically digital.
Image compression A mathematical process for removing redundant or relatively unimportant information from an electronic image such that the resulting file appears the same (lossless compression) or similar (lossy compression) when compared to the original.
Image content representation Makes the information in images accessible to machines for processing.
Image database An organized collection of clinical image files, such as x-rays, photographs, and microscopic images.
Image enhancement The use of global processing to improve the appearance of the image either for human use or for subsequent processing by computer.
Image interpretation/computer reasoning The process by which the individual viewing the image renders an impression of the medical significance of the results of an imaging study, potentially aided by computer methods.
Image management/storage The application of methods for storing, transmitting, displaying, retrieving, and organizing images.
Image metadata Data about images, such as the type of image (e.g., modality), the patient that was imaged, the date of imaging, image features (quantitative or qualitative), and other information pertaining to the image and its contents.
Image processing The transformation of one or more input images, either into one or more output images, or into an abstract representation of the contents of the input images.
Image quantitation The process of extracting useful numerical parameters or deriving calculations from the image or from ROIs in the image.
Image reasoning Computerized methods that use images to formulate conclusions or answer questions that require knowledge and logical inference.
Image rendering/visualization A variety of techniques for creating image displays, diagrams, or animations to display images in a different perspective from the raw images.
Imaging informatics A subdiscipline of medical informatics concerned with the common issues that arise in all image modalities and applications once the images are converted to digital form.
IMIA See: International Medical Informatics Association.
Immersive and virtual environments A computer-based set of sensory inputs and outputs that can give the illusion of being in a different physical environment.
Immersive environment A computer-based set of sensory inputs and outputs that can give the illusion of being in a different physical environment; see: Virtual Reality.
Immersive simulated environment A teaching environment in which a student manipulates tools to control simulated instruments, producing visual, pressure, and other feedback to the tool controls and instruments.
Immunization Information System (IIS) Confidential, population-based, computerized databases that record all immunization doses administered by participating providers to persons residing within a given geopolitical area. Also known as Immunization Registries.
Immunization Registry Confidential, population-based, computerized databases that record all immunization doses administered by participating providers to persons residing within a given geopolitical area. Also known as Immunization Information Systems.
Implementation science The study of socio-cultural, operational, and behavioral norms and processes surrounding the dissemination and adoption of new systems, approaches, and/or knowledge.
Inaccessibility A property of paper records that describes the inability to access the record by more than one person or in more than one place at a time.
Incrementalist An approach to evaluation that tolerates ambiguity and uncertainty and allows changes from day to day.
Independent Two events, A and B, are considered independent if the occurrence of one does not influence the probability of the occurrence of the other. Thus, p[A | B] = p[A]. The probability of two independent events A and B both occurring is given by the product of the individual probabilities: p[A,B] = p[A] × p[B]. (See conditional independence.)
Independent variable In a correlational or experimental study, a variable thought to determine or be associated with the value of the dependent variable (q.v.).
Index In information retrieval, a shorthand guide to the content that allows users to find relevant content quickly.
Index Medicus The printed index used to catalog the medical literature. Journal articles are indexed by author name and subject heading, then aggregated in bound volumes. The Medline database was originally constructed as an online version of the Index Medicus.
Index test The diagnostic test whose performance is being measured.
Indexing In information retrieval, the assignment to each document of specific terms that indicate the subject matter of the document and that are used in searching.
Indirect-care Activities of health professionals that are not directly related to patient care, such as teaching and supervising students, continuing education, and attending staff meetings.
Inductive reasoning An inferential process from the observed data to account for the unobserved; a process of generating possible conclusions based on available data. For example, the fact that a patient who recently had major surgery has not had any fever for the last 3 days may lead us to conclude that he will not have fever tomorrow or in the immediate days that follow. The power of inductive reasoning lies in its ability to allow us to go beyond the limitations of our current evidence or knowledge to novel conclusions about the unknown.
Inference engine A computer program that reasons about a knowledge base. In the case of rule-based systems, the inference engine may perform forward chaining or backward chaining to enable the rules to infer new information about the current situation.
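The multiplication rule given in the Independent entry, p[A,B] = p[A] × p[B], can be checked numerically. A minimal sketch in Python; the two-dice events are a hypothetical illustration, not an example from the text:

```python
from fractions import Fraction

# Independence (see glossary entry above): for independent events,
# p[A,B] = p[A] * p[B]. Hypothetical example: two fair six-sided dice.
p_a = Fraction(1, 6)  # p[first die shows 6]
p_b = Fraction(1, 6)  # p[second die shows 6]

# Joint probability by enumerating the 36 equally likely outcomes.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
p_joint = Fraction(sum(1 for i, j in outcomes if i == 6 and j == 6),
                   len(outcomes))

assert p_joint == p_a * p_b  # 1/36 == 1/6 * 1/6
```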
Inflectional morpheme A morpheme that creates a different form of a word without changing the meaning of the word or the part of speech (e.g., -ed, -s, -ing as in activated, activates, activating).
Influence diagram A belief network in which explicit decision and utility nodes are also incorporated.
Infobutton A context-specific link from a health care application to some information resource that anticipates users' needs and provides targeted information.
Infobutton manager Middleware that provides a standard software interface between infobuttons in an EHR and the documents and other information resources that the infobuttons may display for the user.
infoRAD The information technology and computing oriented component of the very large exhibition hall at the annual meeting of the Radiological Society of North America.
Information Organized data from which knowledge can be derived and that accordingly provide a basis for decision making.
Information and communications technology (ICT) The use of computers and communications devices to accept, store, transmit, and manipulate data; the term is roughly a synonym for information technology, but it is used more often outside the United States.
Information blocking A practice or position that interferes with exchange or accessibility of patient data or electronic health information. This was defined by the 21st Century Cures Act.
Information extraction Methods that process text to capture and organize specific information in the text and also to capture and organize specific relations between the pieces of information.
Information model A representation of concepts, relationships, constraints, rules, and operations to specify data semantics for a chosen domain of discourse. It can provide a sharable, stable, and organized structure of information requirements for the domain context.
Information need In information retrieval, the searchers' expression, in their own language, of the information that they desire.
Information resource Generic term for a computer-based system that seeks to enhance health care by providing patient-specific information directly to care providers (often used equivalently with "system").
Information retrieval (IR) Methods that efficiently and effectively search and obtain data, particularly text, from very large collections or databases. It is also the science and practice of identification and efficient use of recorded media. See also Search.
Information science The field of study concerned with issues related to the management of both paper-based and electronically stored information.
Information theory The theory and mathematics underlying the processes of communication.
Information visualization The use of computer-supported, interactive, visual representations of abstract data to amplify cognition.
Ink-jet printer Output device that uses a moveable head to spray liquid ink on paper; the head moves back and forth for each line of pixels.
Input The data that represent state information, to be stored and processed to produce results (output).
Input and output Devices, such as keyboards, pointing devices, video displays, and laser printers, that facilitate user interaction and storage.
Inspection method Class of usability evaluation methods in which experts appraise a system, playing the role of a user, to identify potential usability and interaction issues with the system.
Institute of Medicine The health arm of the National Academy of Sciences, which provides unbiased, authoritative advice to decision makers and the public. Renamed the National Academy of Medicine in 2016.
Institutional Review Board (IRB) A committee responsible for reviewing an institution's research projects involving human subjects in order to protect their safety, rights, and welfare.
Integrated circuit A circuit of transistors, resistors, and capacitors constructed on a single chip and interconnected to perform a specific function.
Integrated delivery network (IDN) A large conglomerate health-care organization developed to provide and manage comprehensive health-care services.
Integrated Service Digital Network (ISDN) A digital telephone service that allows high-speed network communications using conventional (twisted-pair) telephone wiring.
Integrative model Model for understanding a phenomenon that draws from multiple disciplines and is not necessarily based on first principles.
Intellectual property Software programs, knowledge bases, Internet pages, and other creative assets that require protection against copying and other unauthorized use.
Intelligent system See: knowledge-based system.
Intelligent Tutor A tutoring system that monitors the learning session and intervenes only when the student requests help or makes serious mistakes.
Interactome The set of all molecular interactions in a cell.
Interface engine Software that mediates the exchange of information among two or more systems. Typically, each system must know how to communicate with the interface engine, but need not know the information format of the other systems.
Intermediate effect The process of continually learning, re-learning, and exercising new knowledge, punctuated by periods of apparent decrease in mastery and declines in performance, which may be necessary for learning to take place. People at intermediate levels of expertise may perform more poorly than those at lower levels of expertise on some tasks, due to the challenges of assimilating new knowledge or skills over the course of the learning process.
Internal validity In the context of clinical research, internal validity refers to the minimization of potential biases during the design and execution of the trial.
International Classification of Diseases, 9th Edition, Clinical Modifications A US extension of the World Health Organization's International Classification of Diseases, 9th Edition.
International Medical Informatics Association (IMIA) An international organization dedicated to advancing biomedical and health informatics; an "organization of organizations", its members are national informatics societies and organizations, such as AMIA.
International Organization for Standards (ISO) The international body for information and other standards.
Internet A worldwide collection of gateways and networks that communicate with each other using the TCP/IP protocol, collectively providing a range of services including electronic mail and World Wide Web access.
Internet address See: Internet Protocol address.
Internet Control Message Protocol (ICMP) A network-level Internet protocol that provides error correction and other information relevant to processing data packets.
Internet Corporation for Assigned Names and Numbers (ICANN) The organization responsible for managing Internet domain name and IP address assignments.
Internet of Things (IoT) A system of interconnected computing devices that can transfer data and be controlled over a network. In the consumer space, IoT technologies are most commonly found in the built environment, where devices and appliances (such as lighting fixtures, security systems, or thermostats) can be controlled via smartphones or smart speakers, creating "smart" homes or offices.
Internet protocol The protocol within TCP/IP that governs the creation and routing of data packets and their reassembly into data messages.
Internet Protocol address A 32-bit number that uniquely identifies a computer connected to the Internet. Also called "Internet address" or "IP address".
Internet service provider (ISP) A commercial communications company that supplies fee-for-service Internet connectivity to individuals and organizations.
Internet standards The set of conventions and protocols all Internet participants use to enable effective data communications.
Internet Support Group (ISG) An on-line forum for people with similar problems, challenges, or conditions to share supportive resources.
Interoperability The 21st Century Cures Act defines interoperability as health information technology that (A) enables the secure exchange of electronic health information with, and use of electronic health information from, other health information technology without special effort on the part of the user; (B) allows for complete access, exchange, and use of all electronically accessible health information for authorized use under applicable State or Federal law; and (C) does not constitute information blocking.
Interpreter A program that converts each statement in a high-level program to a machine-language representation and then executes the binary instruction(s).
Interventional radiology A subspecialty of radiology that uses imaging to guide invasive diagnostic or therapeutic procedures.
Intrinsic evaluation An evaluation of a component of a system that focuses only on the performance of the component. See also Extrinsic Evaluation.
Intuitionist–pluralist or de-constructivist A philosophical position that holds that there is no truth and that there are as many legitimate interpretations of observed phenomena as there are observers.
Inverse document frequency (IDF) A measure of how infrequently a term occurs in a document collection: IDF_i = log(number of documents / number of documents with term i) + 1.
IOM See: Institute of Medicine.
IP address See: Internet Protocol Address.
IR See: Information retrieval.
IRB See: Institutional Review Board.
ISDN See: Integrated Service Digital Network.
ISG See: Internet Support Group.
ISO See: International Organization for Standards.
Iso-semantic mapping A relationship between an entity in one dataset or model and an
entity in another dataset or model where the meaning of the two entities is identical, even if the syntax or lexical form is different.
ISP See: Internet service provider.
Job A set of tasks submitted by a user for processing by a computer system.
Joint Commission (JC) An independent, not-for-profit organization; The Joint Commission accredits and certifies more than 19,000 health care organizations and programs in the United States. Joint Commission accreditation and certification is recognized nationwide as a symbol of quality that reflects an organization's commitment to meeting certain performance standards. The Joint Commission was formerly known as JCAHO (the Joint Commission for the Accreditation of Healthcare Organizations).
Just-in-time adaptive interventions (JITAIs) An intervention design that aims to provide the type of support that is most likely to be helpful in a particular context at times when users are most likely to be receptive to that support, by adapting intervention provision to an individual's changing internal and contextual state.
Just-in-time learning An approach to providing necessary information to a user at the moment it is needed, usually through anticipation of the need.
Kernel The core of the operating system that resides in memory and runs in the background to supervise and control the execution of all other programs and direct operation of the hardware.
Key field A field in the record of a file that uniquely identifies the record within the file.
Key Performance Indicator (KPI) A metric defined to be an important factor in the success of an organization. Typically, several Key Performance Indicators are displayed on a Dashboard.
Keyboard A data-input device used to enter alphanumeric characters through typing.
Keyword A word or phrase that conveys special meaning or refers to information that is relevant to such a meaning (as in an index).
Kilobyte 2¹⁰ or 1024 bytes.
Knowledge Relationships, facts, assumptions, heuristics, and models derived through the formal or informal analysis (or interpretation) of observations and resulting information.
Knowledge acquisition The information-elicitation and modeling process by which developers interact with subject-matter experts to create electronic knowledge bases.
Knowledge base A collection of stored facts, heuristics, and models that can be used for problem solving.
Knowledge graph A kind of knowledge representation in which entities are encoded as nodes in a graph and relationships among entities are encoded as links between the nodes.
Knowledge-based information Information derived and organized from observational or experimental research.
Knowledge-based system A program that symbolically encodes, in a knowledge base, facts, heuristics, and models derived from experts in a field and uses that knowledge to provide problem analysis or advice that the expert might have provided if asked the same question.
KPI See: Key Performance Indicator.
Laboratory function study Study that explores important properties of an information resource in isolation from the clinical setting.
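The inverse document frequency formula defined earlier (IDF_i = log(number of documents / number of documents with term i) + 1) can be computed directly. A minimal sketch in Python; the three-document corpus is hypothetical, and the natural logarithm is used here although the base varies across implementations:

```python
import math

# Sketch of the IDF formula from the "Inverse document frequency" entry:
# IDF_i = log(N / n_i) + 1, where N is the number of documents and n_i is
# the number of documents containing term i. Corpus is hypothetical.
corpus = [
    "aspirin reduces fever",
    "fever and cough",
    "aspirin dosing guidance",
]

def idf(term, documents):
    n = len(documents)
    n_with_term = sum(1 for doc in documents if term in doc.split())
    return math.log(n / n_with_term) + 1

# "aspirin" appears in 2 of 3 documents; "cough" in only 1 of 3,
# so the rarer term receives the larger weight.
assert idf("cough", corpus) > idf("aspirin", corpus)
```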
Laboratory user effect study An evaluation technique in which a user is observed when given a simulated task to perform.
LAN See: Local-area network.
Laser printer Output device that uses an electromechanically controlled laser beam to generate an image on a xerographic surface, which then is used to produce paper copies.
Latency The time required for a signal to travel between two points in a network.
Latent failures Enduring systemic problems that make errors possible but are less visible or not evident for some time.
Law of proximity Principle from Gestalt psychology that states that visual entities that are close together are perceptually grouped.
Law of symmetry Principle from Gestalt psychology that states that symmetric objects are more readily perceived.
LCD See: Liquid crystal display.
Lean A management strategy that focuses only on those processes that are able to contribute specific and measurable value for the end customer. The Lean concept originated with Toyota's focus on efficient manufacturing processes.
Learning Content Management System A software platform that allows educational content creators to host, manage, and track changes in content.
Learning health system A proposed model for health care in which outcomes from past and current patient care are systematically collected, analyzed, and then fed back into decision making about best practices for future patient care.
Learning healthcare system See: Learning health system.
Learning Management System An LMS is a repository of educational content, an interface for delivering courses and content to learners, and a vehicle for faculty to track learner usage and performance.
LED See: Light-emitting diode.
Lexemes A minimal lexical unit in a language that represents different forms of the same word.
Lexical-statistical retrieval Retrieval based on a combination of word matching and relevance ranking.
Lexicon A catalogue of the words in a language, usually containing syntactic information such as parts of speech, pluralization rules, etc.
Light-emitting diode (LED) A semiconductor device that emits a particular frequency of light when a current is passed through it; typically used for indicator lights and computer screens because of low power requirements, minimal heat generation, and durability.
Likelihood ratio (LR) A measure of the discriminatory power of a test. The LR is the ratio of the probability of a result when the condition under consideration is true to the probability of a result when the condition under consideration is false (for example, the probability of a result in a diseased patient to the probability of a result in a non-diseased patient). The LR for a positive test is the ratio of the true-positive rate (TPR) to the false-positive rate (FPR).
Link-based An indexing approach that gives relevance weight to web pages based on how often they are cited by other pages.
Linux An open-source operating system based on principles of Unix and first developed by Linus Torvalds in 1991.
Liquid crystal display (LCD) A display technology that uses rod-shaped molecules to bend
light and alter contrast and viewing angle to produce images.
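The likelihood ratio defined in the entry above lends itself to a small worked example. A sketch in Python; the sensitivity and specificity values are hypothetical, not drawn from the text:

```python
# Worked example of the likelihood ratio for a positive test (see the
# "Likelihood ratio" glossary entry): LR+ = TPR / FPR.
# The test characteristics below are hypothetical.
sensitivity = 0.90           # true-positive rate (TPR)
specificity = 0.95           # false-positive rate FPR = 1 - specificity

lr_positive = sensitivity / (1 - specificity)

# A positive result is 18 times as likely in a diseased patient
# as in a non-diseased patient.
assert round(lr_positive, 1) == 18.0
```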
Listserver A distribution list for electronic mail messages.
Literature reference database See: bibliographic database.
Local-area network (LAN) A network for data communication that connects multiple nodes, all typically owned by a single institution and located within a small geographic area.
Logical Observations, Identifiers, Names and Codes (LOINC) A controlled terminology created for providing coded terms for observational procedures. Originally focused on laboratory tests, it has expanded to include many other diagnostic procedures.
Logical positivist A philosophical position that holds that there is a single truth that can be inferred from the right combination of studies.
Logic-based A knowledge representation method based on the use of predicates.
LOINC See: Logical Observations, Identifiers, Names and Codes.
Longitudinal Care Plan A holistic, dynamic, and integrated plan that documents important disease prevention and treatment goals and plans. A longitudinal plan is patient-centered, reflecting a patient's values and preferences, and is dependent upon bidirectional communications.
Long-term memory The part of memory that acquires information from short-term memory and retains it for long periods of time.
Long-term storage A medium for storing information that can persist over long periods without the need for a power supply to maintain data integrity.
Lossless compression A mathematical technique for reducing the number of bits needed to store data while still allowing for the re-creation of the original data.
Lossy compression A mathematical technique for reducing the number of bits needed to store data but that results in loss of information.
Low-level processes An elementary process that has its basis in the physical world of chemistry or physics.
LR See: Likelihood ratio.
Machine code The set of primitive instructions to a computer represented in binary code (machine language).
Machine language The set of primitive instructions represented in binary code (machine code).
Machine learning A computing technique in which information learned from data is used to improve system performance.
Machine translation Automatic mapping of text written in one natural language into text of another language.
Macros A reusable set of computer instructions, generally for a repetitive task.
Magnetic disk A round, flat plate of material that can accept and store magnetic charge. Data are encoded on magnetic disk as sequences of charges on concentric tracks.
Magnetic resonance imaging (MRI) A modality that produces images by evaluating the differential response of atomic nuclei in the body when the patient is placed in an intense magnetic field.
Magnetic resonance spectroscopy A noninvasive technique that is similar to magnetic resonance imaging but uses a stronger field and is used to monitor body chemistry (as in metabolism or blood flow) rather than anatomical structures.
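The distinction drawn in the Lossless compression entry — fewer bits, yet exact reconstruction of the original data — can be illustrated with run-length encoding. This is a didactic toy scheme, not the codec of any particular image format:

```python
# Toy illustration of lossless compression (see the "Lossless compression"
# glossary entry): run-length encoding stores each run of repeated bytes
# as a (value, count) pair, and the original data can be rebuilt exactly.
def rle_encode(data):
    out = []
    for byte in data:
        if out and out[-1][0] == byte:
            out[-1][1] += 1
        else:
            out.append([byte, 1])
    return out

def rle_decode(pairs):
    return bytes(b for b, count in pairs for _ in range(count))

original = b"\x00\x00\x00\xff\xff\x00"
assert rle_decode(rle_encode(original)) == original  # lossless round-trip
```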
1055 Glossary
Magnetic tape A long ribbon of material that can accept and store magnetic charge. Data are encoded on magnetic tape as sequences of charges along longitudinal tracks.
Magnetoencephalography (MEG) A method for measuring the electromagnetic fields generated by the electrical activity of the neurons, using large arrays of scalp sensors whose outputs are processed in a way similar to CT in order to localize the source of the electromagnetic and metabolic shifts occurring in the brain during trauma.
Mailing list A set of mailing addresses used
for bulk distribution of electronic or physical mail. Mainframe computer system A large, expen-
sive, multi-user computer, typically operated and maintained by professional computing personnel. Often referred to as a "mainframe" for short.
Malpractice Class of litigation in health care based on negligence theory; failure of a health professional to render proper services in keeping with the standards of the community.
Malware Software that is specifically designed to cause harm to computer systems by disrupting other programs, damaging the machine, or gaining unauthorized access to the system or the data that it contains.
Management The process of
treating a patient (or allowing the condition to resolve on its own) once the medical diagnosis has been determined.
Mannequin A life-size plastic human body with some or many human-like functions.
Manual indexing The process by which human indexers, usually using standardized terminology, assign indexing terms and attributes to documents, often following a specific protocol.
Markov cycle The period of time specified
for a transition probability within a Markov model.
Markov model A mathematical model of a set of strings in which the probability of a given symbol occurring depends on the identity of the immediately preceding symbol or the two immediately preceding symbols. Processes modeled in this way are often called Markov processes.
Markov process See: Markov model.
Markup language A document specification language that identifies and labels the components of the document's contents.
Massive Open Online Course (MOOC) In a traditional MOOC, the teacher's content is digitally recorded and made available online, freely, as a sequence of lectures with supporting learning material.
Master patient index (MPI) A database that
is used across a healthcare organization to maintain consistent, accurate, and current demographic and essential clinical data on the patients seen and managed within its various departments.
Mean average precision (MAP) A method for measuring overall retrieval precision in which precision is measured at every point at which a relevant document is obtained, and the MAP measure is found by averaging these values for the whole query.
Mean time between failures (MTBF) The average predicted time interval between anticipated operational malfunctions of a system, based on long-term observations.
Meaningful use The set of standards defined by the Centers for Medicare & Medicaid
Services (CMS) Incentive Programs that governs the use of electronic health records and allows eligible providers and hospitals to earn incentive payments by meeting specific criteria. The term refers to the belief that health care providers using electronic health records in a meaningful, or effective, way will be able to improve health care quality and efficiency.
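The Markov model entry above describes sequences in which the probability of the next symbol depends only on the current one. A minimal illustrative sketch (the two-state chain and its probabilities are invented for illustration, not from the book):

```python
import random

# Hypothetical two-state chain; transition probabilities are illustrative only.
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def next_state(state: str, rng: random.Random) -> str:
    """Sample the next symbol; its distribution depends only on the current state."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in TRANSITIONS[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding

def simulate(start: str, steps: int, seed: int = 0) -> list:
    """Generate a chain of length steps + 1 beginning at start."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(steps):
        chain.append(next_state(chain[-1], rng))
    return chain
```

Each row of the transition table sums to 1, and the simulated chain never consults anything but its most recent state, which is the Markov property.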
Measurement study Study to determine the extent and nature of the errors with which a measurement is made using a specific instrument (cf. Demonstration study).
Measures of concordance Measures of agreement in test performance: the true-positive and true-negative rates.
MedBiquitous A healthcare-specific standards consortium led by Johns Hopkins Medicine.
Medical computer science The subdivision of computer science that applies the methods of computing to medical topics.
Medical computing The application of methods of computing to medical topics (see medical computer science).
Medical entities dictionary (MED) A compendium of terms found in electronic medical record systems. Among the best known MEDs is that developed and maintained by the Columbia University Irving Medical Center and Columbia University; it contains in excess of 100,000 terms.
Medical errors Errors or mistakes, committed by health professionals, that hold the potential to result in harm to the patient.
Medical home A primary care practice that maintains a comprehensive problem list in order to make fully informed decisions in coordinating a patient's care.
Medical informatics An earlier term for the biomedical informatics discipline; medical informatics is now viewed as the subfield of clinical informatics that deals with the management of disease and the role of physicians.
Medical Information Bus (MIB) A data-communication system that supports data acquisition from a variety of independent devices.
Medical information science The field of study concerned with issues related to the management and use of biomedical information (see also biomedical informatics).
Medical Literature Analysis and Retrieval System (MEDLARS) The initial electronic version of Index Medicus developed by the National Library of Medicine.
Medical Logic Module (MLM) A single chunk of medical reasoning or decision rule, typically encoded using the Arden Syntax.
Medical record committees An institutional panel charged with ensuring appropriate use of medical records within the organization.
Medical Subject Headings (MeSH) Some 18,000 terms used to identify the subject content of the biomedical literature. The National Library of Medicine's MeSH vocabulary has emerged as the de facto standard for biomedical indexing.
Medication A substance used for medical treatment, typically a medicine or drug.
MEDLARS Online (MEDLINE) The National Library of Medicine's electronic catalog of the biomedical literature, which includes information abstracted from journal articles, including author names, article title, journal source, publication date, abstract, and medical subject headings.
Medline Plus An online resource from the National Library of Medicine that contains health topics, drug information, medical dictionaries, directories, and other resources, organized for use by health care consumers.
Megabits per second (Mbps) A common unit
of measure for specifying a rate of data transmission.
Megabyte 2^20 or 1,048,576 bytes.
Member checking In subjectivist research, the process of reflecting preliminary findings back to individuals in the setting under study, one way of confirming that the findings are truthful.
Memorandum of understanding A document describing a bilateral or multilateral agreement between two or more parties. It expresses a convergence of will between the parties, indicating an intended common line of action.
Memory Areas that are used to store programs and data. The computer's working memory comprises read-only memory (ROM) and random access memory (RAM).
Memory sticks A portable device that typically plugs into a computer's USB port and is capable of storing data. Also called a "thumb drive" or a "USB drive".
Mendelian randomization (MR) A technique used to provide evidence for the causality of a biomarker on a disease state in conditions in which randomized controlled trials are difficult or too expensive to pursue. The technique uses genetic variants that are known to associate with the biomarker as instrumental variables.
Mental images A form of internal representation that captures perceptual information recovered from the environment.
Mental models A construct for describing how individuals form internal models of systems. They are designed to answer questions such as "how does it work?" or "what will happen if I take the following action?".
Mental representations Internal cognitive states that have a certain correspondence with the external world.
Menu In a user interface, a displayed list of valid commands or options from which a user may choose.
Merck Medicus An aggregated set of resources, including Harrison's Online, MDConsult, and DXplain.
Meta-analysis A summary study that combines quantitatively the estimates from individual studies.
Metabolomic Pertaining to the study of small-molecule metabolites created as the end products of specific cellular processes.
Metadata Literally, data about data, describing the format and meaning of a set of data.
Metagenomics Using DNA sequencing technology to characterize complex samples derived from an environmental sample, e.g., microbial populations. For example, the gut "microbiome" can be characterized by applying next generation sequencing of stool samples.
Metathesaurus One component of the Unified Medical Language System, the Metathesaurus contains linkages between terms in Medical Subject Headings (MeSH) and in dozens of controlled vocabularies.
MIB See: Medical Information Bus.
Microarray chips A microchip that holds DNA probes that can recognize DNA from samples being tested.
Microbiome The microorganisms in a particular environment (including the body or a part of the body) or the combined genomes of those organisms.
Microprocessor An integrated circuit that contains all the functions of a central processing unit of a computer.
Microsimulation models Individual-level health state transition models that provide a means to model very complex events flexibly over time.
MIMIC II Database See: Multiparameter Intelligent Monitoring in Intensive Care.
Minicomputers A class of computers that were introduced in the 1960s as a smaller alternative to mainframe computers. Minicomputers enabled smaller companies and departments within organizations (like HCOs) to implement software applications at significantly less cost than was required by mainframe computers.
Mistake Occurs when an inappropriate course of action reflects erroneous judgment or inference.
Mixed-initiative dialog A mode of interaction with a computer system in which the computer may pose questions for the user to answer, and vice versa.
Mixed-initiative systems An educational program in which user and program share control of the interaction. Usually, the program guides the interaction, but the student can assume control and digress when new questions arise during a study session.
Mobile health (mHealth) The practice of medicine and public health supported by mobile devices. Also referred to as mHealth or m-health.
Model organism database Organized reference databases that combine bibliographic databases, full text, and databases of sequences, structure, and function for organisms whose genomic data have been highly characterized, such as the mouse, fruit fly, and Saccharomyces yeast.
Modem A device used to modulate and demodulate digital signals for transmission to a remote computer over telephone lines; converts digital data to audible analog signals, and vice versa.
Modifiers of interest In natural language processing, a term that is used to describe or otherwise modify a named entity that has been recognized.
Molecular imaging A technique for capturing images at the cellular and subcellular level by marking particular chemicals in ways that can be detected with image or radiodetection.
Monitoring tool The application of logical rules and conditions (e.g., range-checking, enforcement of data completion, etc.) to ensure the completeness and quality of research-related data.
Monotonic Describes a function that consistently increases or decreases, rather than oscillates.
Morpheme The smallest unit in the grammar of a language which has a meaning or a linguistic function; it can be a root of a word (e.g., -arm), a prefix (e.g., re-), or a suffix (e.g., -itis).
Morphology The study of meaningful units in language and how they combine to form words.
Morphometrics The quantitative study of growth and development, a research area that depends on the use of imaging methods.
Mosaic The first graphical web browser credited with popularizing the World Wide Web and developed at the National Center for Supercomputing Applications (NCSA) at the University of Illinois.
Motion artifact Visual interference caused by the difference between the frame rate of an imaging device and the motion of the object being imaged.
Mouse A small boxlike device that is moved on a flat surface to position a cursor on the screen of a display monitor. A user can select and mark data for entry by depressing buttons on the mouse.
Multi-axial A terminology system composed of several distinct, mutually exclusive term subsets that are combined to support postcoordination.
Multimodal interface A design concept which allows users to interact with computers using multiple modes of communication or tools, including speaking, clicking, or touchscreen input.
Multiparameter Intelligent Monitoring in Intensive Care (MIMIC-II) A publicly and freely available research database that encompasses a diverse and very large population of ICU patients. It contains high temporal resolution data including lab results, electronic documentation, and bedside monitor trends and waveforms.
Multiprocessing The use of multiple processors in a single computer system to increase the power of the system (see parallel processing).
Multiprogramming A scheme by which multiple programs simultaneously reside in the main memory of a single central processing unit.
Multiprotocol label switching (MPLS) A mechanism in high-performance telecommunications networks that directs data from one network node to the next based on short path labels rather than long network addresses, avoiding complex lookups in a routing table.
Multiuser system A computer system that shares its resources among multiple simultaneous users.
Mutually exclusive State in which one, and only one, of the possible conditions is true; for example, either A or not A is true, and the other is false. When using Bayes' theorem to perform medical diagnosis, we generally assume that diseases are mutually exclusive, meaning that the patient has exactly one of the diseases under consideration and not more.
Myocardial ischemia Reversible damage to cardiac muscle caused by decreased blood flow and resulting poor oxygenation. Such ischemia may cause chest pain or other symptoms.
Naïve Bayesian model The use of Bayes' theorem in a way that assumes conditional independence of variables that may in fact be linked statistically.
NAM See: National Academy of Medicine.
Name Designation of an object by a linguistic expression.
Name authority An entity or mechanism for controlling the identification and formulation of unique identifiers for names. In the Internet, a name authority is required to associate common domain names with their IP addresses.
Named-entity normalization The natural language processing method, after finding a named entity in a document, for linking (normalizing) that mention to appropriate database identifiers.
Named-entity recognition In language processing, a subtask of information extraction that seeks to locate and classify atomic elements in text into predefined categories.
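The naïve Bayesian model defined above applies Bayes' theorem while assuming conditional independence of findings given mutually exclusive diseases. A minimal sketch (the diseases, findings, and probabilities are invented for illustration, not from the book):

```python
def naive_bayes_posterior(priors, likelihoods, findings):
    """
    priors: {disease: P(disease)} over mutually exclusive diseases.
    likelihoods: {disease: {finding: P(finding | disease)}}.
    findings: observed findings, assumed conditionally independent given disease.
    Returns the posterior P(disease | findings).
    """
    unnormalized = {}
    for disease, prior in priors.items():
        p = prior
        for f in findings:
            p *= likelihoods[disease][f]  # independence assumption multiplies
        unnormalized[disease] = p
    total = sum(unnormalized.values())    # normalize so posteriors sum to 1
    return {d: p / total for d, p in unnormalized.items()}

# Illustrative numbers only:
priors = {"flu": 0.1, "cold": 0.9}
likelihoods = {"flu": {"fever": 0.9}, "cold": {"fever": 0.2}}
posterior = naive_bayes_posterior(priors, likelihoods, ["fever"])
```

With these toy numbers, observing fever raises P(flu) from 0.1 to 0.09 / (0.09 + 0.18) = 1/3, because the diseases are treated as mutually exclusive and exhaustive.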
Name-server In networked environments such as the Internet, a computer that converts a host name into an IP address before the message is placed on the network.
National Academies The collective name for the National Academy of Engineering, National Academy of Medicine, and National Academy of Sciences, which are private, nonprofit institutions that provide expert advice on some of the most pressing challenges facing the nation and the world. The work of the National Academies helps shape sound policies, inform public opinion, and advance the pursuit of science, engineering, and medicine.
National Academy of Medicine (NAM) An independent organization of eminent professionals from diverse fields including health and medicine; natural, social, and behavioral sciences; and more. Established in 1970 as the Institute of Medicine (IOM); in 2016 the name was changed to the National Academy of Medicine (NAM).
National Center for Biotechnology Information (NCBI) Established in 1988 as a national resource for molecular biology information, the NCBI is a component of the National Library of Medicine that creates public databases, conducts research in computational biology, develops software tools for analyzing genome data, and disseminates biomedical information.
National Committee on Quality Assurance (NCQA) An independent 501(c)(3) nonprofit organization in the United States that works to improve health care quality through the administration of evidence-based standards, measures, programs, and accreditation.
National Guidelines Clearinghouse A public resource, coordinated by the Agency for Healthcare Research and Quality, that collects and distributes evidence-based clinical practice guidelines (see www.guideline.gov).
National Health Information Infrastructure (NHII) A comprehensive knowledge-based network of interoperable systems of clinical, public health, and personal health information that is intended to improve decision-making by making health information available when and where it is needed.
National Health Information Network (NHIN) A set of standards, services, and policies that have been shepherded by the Office of the National Coordinator of Health Information Technology to enable secure health information exchange over the Internet.
National Information Standards Organization (NISO) A non-profit association accredited by the American National Standards Institute (ANSI) that identifies, develops, maintains, and publishes technical standards to manage information (see www.niso.org).
National Institute for Standards and Technology (NIST) A non-regulatory federal agency within the U.S. Commerce Department's Technology Administration; its mission is to develop and promote measurement, standards, and technology to enhance productivity, facilitate trade, and improve the quality of life (see www.nist.gov).
National Library of Medicine (NLM) The government-maintained library of biomedicine that is part of the US National Institutes of Health.
National Quality Forum A not-for-profit organization that develops and implements national strategies for health care quality measurement and reporting.
Nationwide Health Information Network (NwHIN) A set of standards, services, and policies that have been shepherded by the Office of the National Coordinator of Health Information Technology to enable secure health information exchange over the Internet.
Natural language Unfettered spoken or written language; free text.
Natural language processing (NLP) The use of automated methods that represent the relevant information in natural-language text with high validity and reliability, thereby facilitating tasks that depend on that information.
Natural language query A question expressed in unconstrained text, from which meaning must somehow be extracted or inferred so that a suitable response can be generated.
Naturalistic Describes a study in which little if anything is done by the evaluator to alter the setting in which the study is carried out.
NCBI Entrez global query A search interface that allows searching over all data and information resources maintained by NCBI.
NCI Thesaurus A large ontology developed by the National Cancer Institute that describes entities related to cancer biology, clinical oncology, and cancer epidemiology.
NCQA See: National Committee on Quality Assurance.
Needs assessment A study carried out to help understand the users, their context, and their needs and skills, to inform the design of the information resource.
Negative dictionary A list of stop words used in information retrieval.
Negative predictive value (PV–) The probability that the condition of interest is absent if the result is negative—for example, the probability that a specific disease is absent given a negative test result.
Negligence theory A concept from tort law that states that providers of goods and services are expected to uphold the standards of the community, thereby facing claims of negligence if individuals are harmed by substandard goods or services.
Nested structures In natural language processing, a phrase or phrases that are used in place of simple words within other phrases.
Net reclassification improvement (NRI) In classification methods, a measure of the net fraction of reclassifications made in the correct direction, using one method over another method without the designated improvement.
Network access provider A company that builds and maintains high speed networks to which customers can connect, generally to access the Internet (see also Internet service provider).
Network Operations Center (NOC) A centralized monitoring facility for physically distributed computer and/or telecommunications facilities that allows continuous real-time reporting of the status of the connected components.
Network protocol The set of rules or conventions that specifies how data are prepared and transmitted over a network and that governs data communication among the nodes of a network.
Network stack The method within a single machine by which the responsibilities for network communications are divided into different levels, with clear interfaces between the levels, thereby making network software more modular.
Neuroinformatics An emerging subarea of applied biomedical informatics in which the discipline's methods are applied to the management of neurological data sets and the modeling of neural structures and function.
Next Generation Internet Initiative A federally funded research program in the late 1990s and early in the current decade that sought to provide technical enhancements to the Internet to support future applications that currently are infeasible or are incapable of scaling for routine use.
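The negative predictive value (PV–) defined above follows from Bayes' theorem given a test's sensitivity and specificity and the pretest prevalence. A small sketch with illustrative numbers:

```python
def negative_predictive_value(sens: float, spec: float, prevalence: float) -> float:
    """PV- = P(condition absent | negative result), via Bayes' theorem.

    Among those testing negative, true negatives come from the healthy
    (specificity applied to 1 - prevalence) and false negatives from the
    diseased (1 - sensitivity applied to prevalence).
    """
    true_negatives = spec * (1 - prevalence)
    false_negatives = (1 - sens) * prevalence
    return true_negatives / (true_negatives + false_negatives)
```

For example (illustrative values), a test with sensitivity 0.9 and specificity 0.8 applied where prevalence is 0.1 gives PV– = 0.72 / (0.72 + 0.01) ≈ 0.986; a perfectly sensitive test yields PV– = 1, since it produces no false negatives.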
Next generation sequencing methods Technologies for performing high throughput sequencing of large quantities of DNA or RNA. Typically, these technologies determine the sequences of many millions of short segments of DNA that need to be reassembled and interpreted using bioinformatics.
NHIN See: National Health Information Network.
NHIN Connect A software solution that facilitates the exchange of healthcare information at both the local and national level. CONNECT leverages eHealth Exchange standards and governance and Direct Project specifications to help drive interoperability across health information exchanges throughout the country. Initially developed by federal agencies to support specific healthcare-related missions, CONNECT is now available to all organizations as downloadable open source software.
NHIN Direct A set of standards and services to enable the simple, direct, and secure transport of health information between pairs of health care providers; it is a component of the Nationwide Health Information Network and it complements the Network's more sophisticated components.
Noise The component of acquired data that is attributable to factors other than the underlying phenomenon being measured (for example, electromagnetic interference, inaccuracy in sensors, or poor contact between sensor and source).
Nomenclature A system of terms used in a scientific discipline to denote classifications and relationships among objects and processes.
Nosocomial (hospital-acquired) infection An infection acquired by a patient after admission to a hospital for a different reason.
NQF See: National Quality Forum.
Nuclear magnetic resonance (NMR) spectroscopy A spectral technique used in chemistry to characterize chemical compounds by measuring magnetic characteristics of their atomic nuclei.
Nuclear medicine imaging A modality for producing images by measuring the radiation emitted by a radioactive isotope that has been attached to a biologically active compound and injected into the body.
Nursing informatics The application of biomedical informatics methods and techniques to problems derived from the field of nursing. Viewed as a subarea of clinical informatics.
NwHIN Direct A set of standards and services to enable the simple, direct, and secure transport of health information between pairs of health care providers; it is a component of the Nationwide Health Information Network and it complements the Network's more sophisticated components.
Nyquist frequency The minimum sampling rate necessary to achieve reasonable signal quality. In general, it is twice the frequency of the highest-frequency component of interest in a signal.
Object Any part of the perceivable or conceivable world.
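The Nyquist frequency entry above can be demonstrated numerically: sampled below its Nyquist rate, a signal becomes indistinguishable from a lower-frequency alias. A small sketch (the frequencies are chosen for illustration):

```python
import math

def sample(freq_hz: float, fs_hz: float, n: int) -> list:
    """Sample cos(2*pi*freq*t) at sampling rate fs for n samples."""
    return [math.cos(2 * math.pi * freq_hz * k / fs_hz) for k in range(n)]

signal_freq = 3.0                 # highest frequency of interest, in Hz
nyquist_rate = 2 * signal_freq    # minimum adequate sampling rate: 6 Hz

# Sampled at only 4 Hz (below the Nyquist rate), a 3 Hz cosine produces
# exactly the same samples as a 1 Hz cosine -- the alias:
undersampled = sample(3.0, 4.0, 8)
alias = sample(1.0, 4.0, 8)
```

The sample sequences are identical term by term (cos(3πk/2) = cos(πk/2) for integer k), so no processing of the undersampled data can recover the true 3 Hz component.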
Object Constraint Language (OCL) A textual language for describing rules that apply to the elements of a model created in the Unified Modeling Language (UML). OCL specifies constraints on allowable values in the model. OCL also supports queries of UML models (and of models constructed in similar languages). OCL is a standard of the Object Management Group (OMG), and forms the basis of the GELLO query language that may be used in conjunction with the Arden Syntax.
Objectivist approaches Class of evaluation approaches that make use of experimental designs and statistical analyses of quantitative data.
Object-oriented database A database that is structured around individual objects (concepts) that generally include relationships among those objects and, in some cases, executable code that is relevant to the management and/or understanding of that object.
Odds-ratio form An algebraic expression for calculating the posttest odds of a disease, or other condition of interest, if the pretest odds and likelihood ratio are known (an alternative formulation of Bayes' theorem, also called the odds-likelihood form).
Office of the National Coordinator for Health Information Technology (ONC) An agency within the US Department of Health and Human Services that is charged with supporting the adoption of health information technology and promoting nationwide health information exchange to improve health care.
Omics A set of areas of study in biology that use the suffix "-ome", used to connote breadth or completeness of the objects being studied, for example genomics or proteomics.
-omics technologies High throughput experimentation that exhaustively queries a certain biochemical aspect of the state of an organism. Such technologies include proteomics (protein), genomics (gene expression), metabolomics (metabolites), etc.
Online analytic processing (OLAP) A system that focuses on querying across multiple patients simultaneously, typically by few users for infrequent but very complex queries, often for research.
Online transaction processing (OLTP) A system designed for use by thousands of simultaneous users doing repetitive queries.
Ontology A description (like a formal specification of a program) of the concepts and relationships that can exist for an agent or a community of agents. In biomedicine, such ontologies typically specify the meanings and hierarchical relationships among terms and concepts in a domain.
Open access publishing (OA) An approach to publishing where the author or research funder pays the cost of publication and the article is made freely available on the Internet.
Open consent model A legal mechanism by which an individual can disclose their own private health information or genetic information for research use. This mechanism is used by the Personal Genome Project to enable release of entire genomes of identified individuals.
Open source An approach to software development in which programmers can read, redistribute, and modify the source code for a piece of software, resulting in community development of a shared product.
Open standards development policy In a standards group, a policy that allows anyone to become involved in discussing and defining the standard.
OpenNotes An international movement that urges doctors, nurses, therapists, and other clinicians to invite patients to read notes that clinicians write to describe a visit. OpenNotes provides free tools and resources to help clinicians and healthcare systems share notes with patients.
Operating system (OS) A program that allocates computer hardware resources to user programs and that supervises and controls the execution of all other programs.
Optical Character Recognition (OCR) The conversion of typed text within scanned documents to computer-understandable text.
Optical coherence tomography (OCT) An optical signal acquisition and processing method. It captures micrometer-resolution, three-dimensional images from within optical scattering media (e.g., biological tissue).
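The odds-ratio (odds-likelihood) form defined above multiplies pretest odds by the likelihood ratio to obtain posttest odds. A sketch with illustrative numbers:

```python
def posttest_probability(pretest_prob: float, sens: float, spec: float,
                         positive_result: bool) -> float:
    """Apply the odds-likelihood form of Bayes' theorem.

    posttest odds = pretest odds * likelihood ratio, where
    LR+ = sensitivity / (1 - specificity) for a positive result and
    LR- = (1 - sensitivity) / specificity for a negative result.
    """
    pretest_odds = pretest_prob / (1 - pretest_prob)
    lr = sens / (1 - spec) if positive_result else (1 - sens) / spec
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)  # convert odds back to probability
```

For example (illustrative values), with pretest probability 0.1, sensitivity 0.9, and specificity 0.8, a positive result gives odds of (1/9) × 4.5 = 0.5, i.e., a posttest probability of 1/3; a negative result drops the probability to 1/73.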
Optical disk A round, flat plate of plastic or metal that is used to store information. Data are encoded through the use of a laser that marks the surface of the disc.
Order entry The use of a computer system for entering treatments, requests for lab tests or radiologic studies, or other interventions that the attending clinician wishes to have performed for the benefit of a patient.
Orienting issues/questions The initial questions or issues that evaluators seek to answer in a subjectivist study, the answers to which often in turn prompt further questions.
Outcome data Formal information regarding the results of interventions.
Outcome measurements Using metrics that assess the end result of an intervention rather than an intervening process. For example, remembering to check a patient's Hemoglobin A1C is a process measure, whereas reducing the complications of diabetes is an outcome measure.
Outcome variable Similar to "dependent variable," a variable that captures the end result of a health care or educational process; for example, long-term operative complication rate or mastery of a subject area.
Outpatient A patient seen in a clinic rather than in the hospital setting.
Output The results produced when a process is applied to input. Some forms of output are hardcopy documents, images displayed on video display terminals, and calculated values of variables.
P4 medicine A term coined by Dr. Leroy Hood for healthcare that strives to be personalized, predictive, preventive, and participatory.
Packets In networking, a variable-length message containing data plus the network addresses of the sending and receiving nodes, and other control information.
Page A partitioned component of a computer user's programs and data that can be kept in temporary storage and brought into main memory by the operating system as needed.
Pager One of the first mobile devices for electronic communication between a base station (typically a telephone, but later a computer) and an individual person. Initially restricted to receiving only numeric data (e.g., a telephone number), pagers later incorporated the ability to transmit a response (referred to as "two-way pagers") as well as alpha characters so that a message of limited length could be transmitted from a small keyboard. Pagers have been gradually replaced by cellular phones because of their greater flexibility and broader geographical coverage.
PageRank (PR) algorithm In indexing for information retrieval on the Internet, an algorithmic scheme for giving more weight to a Web page when a large number of other pages link to it.
Parallel processing The use of multiple processing units running in parallel to solve a single problem (see multiprocessing).
Parse tree The representation of structural relationships that results when using a grammar (usually context free) to analyze a given sentence.
Partial parsing The analysis of structural relationships that results when using a grammar to analyze a segment of a given sentence.
Partial-match searching An approach to information retrieval that recognizes the inexact nature of both indexing and retrieval, and attempts to return the user content ranked by how close it comes to the user's query.
Participant calendaring The capability of a CRMS to support the tracking of participant compliance with a study schema, usually represented as a calendar of temporal events.
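The PageRank scheme defined earlier weights a page by the ranks of the pages that link to it. A minimal iterative sketch (the damping factor and toy link graph are illustrative, not from the book):

```python
def pagerank(links, damping=0.85, iterations=50):
    """
    links: {page: [pages it links to]}. Returns approximate PageRank scores.
    A page scores higher when many pages -- especially high-ranked ones --
    link to it; rank is redistributed along outbound links each iteration.
    """
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new_rank[q] += damping * rank[p] / len(pages)
            else:
                for q in outs:
                    new_rank[q] += damping * rank[p] / len(outs)
        rank = new_rank
    return rank

# Toy graph: pages a and b both link to c, and c links back to a.
ranks = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
```

In this toy graph, c receives two inbound links and ends up ranked highest, while b (no inbound links) receives only the teleportation share (1 − damping)/N.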
1065 Glossary
Participant screening and registration Participant screening and registration refers to the capability of a CTMS to support the enrollment phase of a clinical study.
Participants The people or organizations who provide data for the study. According to the role of the information resource, these may include patients, friends and family, formal and informal carers, the general public, health professionals, system developers, guideline developers, students, health service managers, etc.
Part-of-speech tags Assignment of syntactic classes to a given sequence of words, e.g., determiner, adjective, noun and verb.
Parts of speech The categories to which words in a sentence are assigned in accordance with their syntactic function.
Patent A specific legal approach for protecting methods used in implementing or instantiating ideas (see intellectual property).
Pathognomonic Distinctively characteristic, and thus, uniquely identifying a condition or object (100% specific).
Patient centered care Clinical care that is based on personal characteristics of the patient in addition to his or her disease. Such characteristics include cultural traditions, preferences and values, family situations and lifestyles.
Patient centered medical home A team-based health care delivery model led by a physician, physician's assistant, or nurse practitioner that provides comprehensive, coordinated, and continuous medical care to patients with the goal of maximizing health outcomes.
Patient engagement Participation of a patient as an active collaborator in his or her health care process.
Patient generated health data Health-related data that are recorded or collected directly by patients.
Patient portal An online application that allows individuals to view health information and otherwise interact with their physicians and hospitals.
Patient record The collection of information traditionally kept by a health care provider or organization about an individual's health status and health care; also referred to as the patient's chart, medical record, or health record, and originally called the "unit record".
Patient safety The reduction in the risk of unnecessary harm associated with health care to an acceptable minimum; also the name of a movement and specific research area.
Patient triage The process of allocating patients to different levels of urgency of care depending upon the complaints or symptoms displayed.
Patient-specific information Information derived and organized from a specific patient.
Patient-tracking applications Applications that monitor patient movement in multistep processes.
Pattern check A procedure applied to entered data to verify that the entered data have a required pattern; e.g., the three digits, hyphen, and four digits of a local telephone number.
Pay for performance Payments to providers that are based on meeting pre-defined expectations for quality.
Per diem Payments to providers (typically hospitals) based on a single day of care.
Perimeter definition Specification of the boundaries of trusted access to an information system, both physically and logically.
Personal clinical electronic communication Web-based messaging solutions that avoid the limitations of email by keeping all interactions within a secure, online environment.
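The pattern check described above (the local telephone number example) can be sketched with a regular expression; the function name and pattern here are illustrative only.

```python
import re

# Pattern check: entered data must be three digits, a hyphen, then four
# digits (the local telephone number example from the glossary entry).
LOCAL_PHONE = re.compile(r"\d{3}-\d{4}")

def pattern_check(value: str) -> bool:
    """Return True if the entered value has the required pattern."""
    return LOCAL_PHONE.fullmatch(value) is not None

print(pattern_check("555-1234"))   # True
print(pattern_check("5551234"))    # False: the hyphen is missing
```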
Personal computer A small, relatively inexpensive, single-user computer.
Personal Digital Assistant (PDA) A small, mobile, handheld device that provides computing and information storage and retrieval capabilities for personal or business use. PDAs can typically run third-party applications.
Personal grid architecture A security methodology that prevents large-scale data loss from a central repository by separately storing and encrypting each person's records. While searching across records must be sequential, reasonable response times can be achieved by massive parallelization of the search process in the cloud.
Personal health application Software for computers, tablet computers, or smart phones that is intended to allow individual patients to monitor their own health or to stimulate their own personal health activities.
Personal health informatics The area of biomedical informatics based on patient-centered care, in which people are able to access care that is coordinated and collaborative.
Personal health record (PHR) A collection of information about an individual's health status and health care that is maintained by the individual (rather than by a health care provider); the data may be entered directly by the patient, captured from a sensing device, or transferred from a laboratory or health care provider. It may include medical information from several independent provider organizations, and may also have health and wellbeing information.
Personal Internetworked Notary and Guardian (PING) An early personally controlled health
record, later known as Indivo.
Personalized medicine Also often called individualized medicine; refers to a medical model in which decisions are custom-tailored to the patient based on that individual's genomic data, preferences, or other considerations. Such decisions may involve diagnosis, treatment, or assessments of prognosis. Also known as precision medicine.
Personally controlled health record (PCHR) Similar to a PHR, the PCHR differs
in the nature of the control offered to the patient, with such features as semantic tags on data elements that can be used to determine the subsets of information that can be shared with specific providers.
Petabyte A unit of information equal to 1000 terabytes or 10^15 bytes.
Pharmacodynamics program (PD) The study of how a drug works, its mechanism of action and pathway of achieving its effect, or "what the drug does to the body".
Pharmacogenetics The study of drug-gene relationships that are dominated by a single gene.
Pharmacogenomic variant A particular genetic variant that affects a drug-genome interaction.
Pharmacokinetic program Pharmacokinetics or PK is the study of how a drug is absorbed, distributed, metabolized and excreted by the body, or "what the body does to the drug".
Pharmacovigilance The pharmacological science relating to the collection, detection, assessment, monitoring, and prevention of adverse effects associated with pharmaceutical products.
Phase In the context of clinical research, study
phases are used to indicate the scientific aim of a given clinical trial. There are 4 phases (Phase I, Phase II, Phase III, and Phase IV). Phase I (clinical trial) Investigators evaluate
a novel therapy in a small group of participants in order to assess overall safety. This safety assessment includes dosing levels in the case of non-interventional therapeutic trials, and potential side effects or adverse effects of the therapy. Often, Phase I trials of non-interventional therapies involve the use of normal volunteers who do not have the disease state targeted by the novel therapy.
Phase II (clinical trial) Investigators evaluate a
novel therapy in a larger group of participants in order to assess the efficacy of the treatment in the targeted disease state. During this phase, assessment of overall safety is continued. Phase III (clinical trial) Investigators evaluate
a novel therapy in an even larger group of participants and compare its performance to a reference standard, which is usually the current standard of care for the targeted disease state. This phase typically employs a randomized controlled design, and often a multi-center RCT, given the number and variation of subjects that must be recruited to adequately test the hypothesis. In general, this is the final study phase to be performed before seeking regulatory approval for the novel therapy and broader use in standard-of-care environments. Phase IV (clinical trial) Investigators study the
performance and safety of a novel therapy after it has been approved and marketed. This type of study is performed in order to detect long-term outcomes and effects of the therapy. It is often called “post-market surveillance” and is, in fact, not an RCT at all, but a less formal, observational study. PheKB.org A web site that houses EHR-based algorithms for determining phenotypes.
Phenotype definition The process of determining the set of observable descriptors that characterize an organism’s phenotype. Phenotype risk score (PheRS) A calculation
of the likelihood of a particular genetic variant being present based on a weighted score of one or more phenotypic characteristics. Phenotypic Refers to the physical characteristics or appearance of an organism. Picture Archive and Communication Systems (PACS) An integrated computer system that acquires, stores, retrieves, and displays digital images. Pixel One of the small picture elements that make up a digital image. The number of pixels per square inch determines the spatial resolution. Pixels can be associated with a single bit to indicate black and white or with multiple bits to indicate color or gray scale.
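The pixel definition lends itself to a quick storage calculation; the image dimensions below are hypothetical.

```python
# Uncompressed storage for a digital image: pixel count times bits per pixel.
width, height = 1024, 1024   # a hypothetical 1024 x 1024 image
bits_per_pixel = 8           # 256 gray levels; 1 bit/pixel would give black and white

size_bytes = width * height * bits_per_pixel // 8
print(size_bytes)            # 1048576 bytes, i.e., 1 MiB
```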
a placebo is a false intervention (e.g., a mock intervention given to a participant that resembles the intervention experienced by individuals receiving the experimental intervention, except that it has no anticipated impact on the individual’s health or other indicated status), usually used in the context of a control group or intervention.
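Returning to the phenotype risk score (PheRS) defined above, the "weighted score" idea can be sketched as a weighted sum over observed phenotypes. The phenotype names and weights below are invented for illustration; a real PheRS derives its weights from population data.

```python
# Hypothetical weights linking phenotypes to a particular genetic variant.
WEIGHTS = {"hearing loss": 2.1, "retinitis": 3.4, "ataxia": 1.2}

def phenotype_risk_score(observed):
    """Sum the weights of the phenotypes observed in a patient's record."""
    return sum(WEIGHTS.get(p, 0.0) for p in observed)

score = phenotype_risk_score({"hearing loss", "retinitis"})
print(round(score, 1))   # 5.5
```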
Phenome characterization Identification of the individual traits of an organism that characterize its phenotype.
Phenome-wide association scan A study that derives case and control populations using the EMR to define clinical phenotypes and then examines the association of those phenotypes with specific genotypes.
Phenome-wide association study (PheWAS) A study that tests for association between a particular genetic variant and a large number of phenotypic characteristics.
Phenotype The observable physical characteristics of an organism, produced by the interaction of genotype with environment.
Plain old telephone service (POTS) The standard low speed, analog telephone service that is still used by many homes and businesses.
Plastination A method of embalming a part of a human body using plastic to suffuse human tissue.
Plug-in A software component that is added to web browsers or other programs to allow them a special functionality, such as an ability to deal with certain kinds of media (e.g., video or audio).
Pointing device A manual device, such as a mouse, light pen, or joy stick, that can be used to specify an area of interest on a computer screen.
Polygenic risk score (PRS) See Genetic risk score.
Population Health is not universally defined but is a commonly used term to organize activities performed by private or public entities for assessing, managing, and improving the well-being and health outcomes of a defined group of individuals. Population may be defined by a specific geographic community or region; enrollees of a health plan; persons residing in a health system's catchment area; or an aggregation of individuals with specific conditions. Population health is based on the underlying assumption that multiple common factors impact the health and well-being of specific populations, and that focused interventions early in the causal chain of disease may save resources, and prevent morbidity and mortality.
Population management Health care practices that assist with a large group of people, including preventive medicine and immunization, screening for disease, and prioritization of interventions based on community needs.
Positive predictive value (PV+) The probability that the condition of interest is true if the result is positive—for example, the probability that a disease is present given a positive test result.
Positron emission tomography A tomographic imaging method that measures the uptake of various metabolic products (generally a combination of a positron-emitting tracer with a chemical such as glucose), e.g., by the functioning brain, heart, or lung.
Postcoordination The combination of two or more terms from one or more terminologies to create a phrase used for coding data; for example, "Acute Inflammation" and "Appendix" combined to code a patient with appendicitis. See also, precoordination.
Posterior probability The updated probability that the condition of interest is present after additional information has been acquired.
Postgenomic database A database that combines molecular and genetic information with data of clinical importance or relevance. Online Mendelian Inheritance in Man (OMIM) is a frequently cited example of such a database.
Post-test probability The updated probability that the disease or other condition under consideration is present after the test result is known (more generally, the posterior probability).
Practice management system The software used by physicians for scheduling, registration, billing, and receivables management in their offices. May increasingly be linked to an EHR.
Pragmatics The study of how contextual information affects the interpretation of the underlying meaning of the language.
Precision The degree of accuracy with which the value of a sampled observation matches the value of the underlying condition, or the exactness with which an operation is performed. In information retrieval, a measure of a system's performance in retrieving relevant information (expressed as the fraction of relevant records among total records retrieved in a search).
Precision Medicine The application of specific diagnostic and therapeutic methods matched to an individual based on highly unique information about the individual, such as their genetic profile or properties of their tumor.
Precoordination A complex phrase in a terminology that can be constructed from multiple terms but is, itself, assigned a unique identifier within the terminology; for example, "Acute Inflammation of the Appendix." See also, postcoordination.
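The probability entries in this glossary (prior/pretest probability, posterior probability, positive predictive value) are tied together by Bayes' theorem. The sensitivity, specificity, and prevalence values below are illustrative, not drawn from any particular test.

```python
# Positive predictive value from prevalence, sensitivity, and specificity.
prevalence = 0.01     # prior (pretest) probability of disease
sensitivity = 0.95    # P(positive test | disease)
specificity = 0.90    # P(negative test | no disease)

# Total probability of a positive test result.
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Posterior probability of disease given a positive test (PV+).
ppv = sensitivity * prevalence / p_positive
print(round(ppv, 3))  # 0.088: at 1% prevalence even a good test yields a low PV+
```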
Predatory journal A name given to journals that publish under the OA model and have minimal or no peer review of submitted papers.
Predicate The part of a sentence or clause containing a verb and stating something about the subject.
Predicate logic In mathematical logic, the generic term for symbolic formal systems like first-order logic, second-order logic, etc.
Predictive value (PV) The posttest probability that a condition is present based on the results of a test (see positive predictive value and negative predictive value).
Preparatory phase In the preparatory phase of a clinical research study, investigators are involved in the initial design and documentation of a study (developing a protocol document), prior to the identification and enrollment of study participants.
President's Emergency Plan for AIDS Relief (PEPFAR) The United States government's response to the global HIV/AIDS epidemic, and the largest commitment by any nation to address a single disease in history. PEPFAR is intended to save and improve millions of lives, accelerating progress toward controlling and ultimately ending the AIDS epidemic as a public health threat. PEPFAR collects and uses data in the most granular manner (disaggregated by sex, age, and at the site level) to do the right things, in the right places, and right now within the highest HIV-burdened populations and geographic locations.
Pretest probability The probability that the disease or other condition under consideration is present before the test result is known (more generally, the prior probability).
Prevalence The frequency of the condition under consideration in the population. For example, we calculate the prevalence of disease by dividing the number of diseased individuals by the number of individuals in the population. Prevalence is the prior probability of a specific condition (or diagnosis), before any other information is available.
Primary knowledge-based information The original source of knowledge, generally in a peer reviewed journal article that reports on a research project's results.
Prior probability The probability that the
condition of interest is present before additional information has been acquired. In a population, the prior probability also is called the prevalence. Privacy A concept that applies to people, rather than documents, in which there is a presumed right to protect that individual from unauthorized divulging of personal data of any kind. Probabilistic context free grammar A context
free grammar in which the possible ways to expand a given symbol have varying probabilities rather than equal weight. Probabilistic relationship Exists when the
occurrence of one chance event affects the probability of the occurrence of another chance event. Probabilistic sensitivity analysis An approach
for understanding how the uncertainty in all (or a large number of) model parameters affects the conclusion of a decision analysis. Probability Informally, a means of expressing
belief in the likelihood of an event. Probability is more precisely defined mathematically in terms of its essential properties. Probabilistic causal network Also known as a
Bayesian network, a statistical model built of directed acyclic graph structures (nodes) that are connected through relationships (edges). The strength of each of the relationships is defined through conditional probabilities.
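A minimal two-node example of such a network (the classic rain/wet-grass toy model, with made-up probabilities) shows how the conditional probabilities on an edge support both predictive and diagnostic queries:

```python
# Two-node Bayesian network: Rain -> WetGrass.
p_rain = 0.2                 # prior at the parent node
p_wet_given_rain = 0.9       # conditional probabilities on the edge
p_wet_given_no_rain = 0.1

# Predictive query: marginal probability that the grass is wet.
p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)

# Diagnostic query via Bayes' theorem: P(Rain | WetGrass).
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(round(p_wet, 2), round(p_rain_given_wet, 3))  # 0.26 0.692
```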
Probes Genetic markers used in genetic assays to determine the presence or absence of a particular variant.
Prognostic scoring system An approach to prediction of patient outcomes based on formal analysis of current variables, generally through methods that compare the patient in some way with large numbers of similar patients from the past.
Problem impact study A study carried out in the field with real users as participants and real tasks to assess the impact of the information resource on the original problem it was designed to resolve. Problem space The range of possible solutions to a problem. Problem-based learning Small groups of
students, supported by a facilitator, learn through discussion of individual case scenarios. Procedural knowledge Knowledge of how to
perform a task (as opposed to factual knowledge about the world). Procedure An action or intervention undertaken during the management of a patient (e.g., starting an IV line, performing surgery). Procedures may also be cognitive. Procedure trainer (Also Part-task trainer).
An on-screen simulation of a surgical or other procedure that is controlled using physical tools such as an endoscope. It allows repeated practice of a specific skill. Process integration An organizational analy-
sis methodology in which a series of tasks are reviewed in terms of their impact on each other rather than being reviewed separately. In a hospital setting, for example, a process integration view would look at patient registration and scheduling as an integrated workflow rather than as separate task areas. The goal is to achieve greater efficiency and effectiveness by focusing on how tasks can better work together rather than optimizing specific areas. Prodrug A chemical that requires transformation in vivo (typically by enzymes) to produce its active drug. Product An object that goes through the processes of design, manufacture, distribution, and sale.
Progressive caution The idea that reason, caution and attention to ethical issues must govern research and expanding applications in the field of biomedical informatics. Proposition An expression, generally in language or other symbolic form, that can be believed, doubted, or denied, and that is either true or false. Prospective study An experiment in which
researchers, before collecting data for analysis, define study questions and hypotheses, the study population, and data to be collected. Prosthesis A device that replaces a body part—e.g., artificial hip or heart. Protected memory A segment of computer
memory that cannot be over-written by the usual means. Protein Data Bank (PDB) A centralized repository of experimentally determined three dimensional protein and nucleic acid structures. Proteomics The study of the protein products
produced by genes in the genome. Protocol A standardized method or approach. Protocol analysis In cognitive psychology,
methods for gathering and interpreting data that are presumed to reveal the mental processes used during problem solving (e.g., analysis of “think-aloud” protocols). Protocol authoring tools A software product
used by researchers to construct a description of a study’s rationale, guidelines, endpoints, and the like. Such descriptions may be structured formally so that they can be manipulated by trial management software.
Protocol management Protocol management
refers to the capability of a CRMS to support the preparatory phase of a clinical study. Provider-profiling system Software that utilizes available data sources to report on patterns of care by one or several providers. Pseudo-identifier A unique identifier substituted for the real identifier to mask the identity, but one that can, under certain circumstances, allow linking back to the original person identifier if needed. Public health The field that deals with monitoring and influencing trends in habits and disease in an effort to protect or enhance the health of a population, from small communities to entire countries.
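One common way to implement a pseudo-identifier is a keyed hash of the real identifier: the mapping is deterministic, but only holders of the key can regenerate it and link a pseudonym back to a person. The key value and identifier format below are invented, and a real deployment would need careful key management.

```python
import hashlib
import hmac

SECRET_KEY = b"a-protected-linkage-key"   # hypothetical; must be kept secure

def pseudo_identifier(real_id: str) -> str:
    """Derive a stable pseudonym from a real identifier via a keyed hash."""
    return hmac.new(SECRET_KEY, real_id.encode(), hashlib.sha256).hexdigest()

# The same person always maps to the same pseudonym, so records can be
# linked across data sets without exposing the original identifier.
print(pseudo_identifier("MRN-0012345") == pseudo_identifier("MRN-0012345"))  # True
print(pseudo_identifier("MRN-0012345") == pseudo_identifier("MRN-0012346"))  # False
```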
Public-key cryptography In
data encryption, a method whereby two keys are used, one to encrypt the information and a second to decrypt it. Because two keys are involved, only one need be kept secret.
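A toy RSA key pair illustrates the two-key idea. The primes here are deliberately tiny so the arithmetic is visible; real keys use primes hundreds of digits long, and this sketch omits the padding schemes that practical RSA requires.

```python
# Build a toy public/private key pair from two small primes.
p, q = 61, 53
n = p * q                   # modulus, shared by both keys (3233)
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent: modular inverse of e (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # only the private-key holder (d, n) can decrypt
print(recovered == message)        # True
```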
Public-private keys A pair of sequences of
characters or digits used in data encryption in which one is kept private and the other is made public. A message encrypted with the public key can only be opened by the holder of the private key, and a message signed with the private key can be verified as authentic by anyone with the public key. PubMed A software environment for search-
ing the Medline database, developed as part of the suite of search packages, known as Entrez, by the NLM’s National Center for Biotechnology Information (NCBI).
Public health informatics An application area
of biomedical informatics in which the field’s methods and techniques are applied to problems drawn from the domain of public health. Public health informatics The
systematic application of informatics methods and tools to support public health goals and outcomes, regardless of the setting.
Public Health Surveillance The ongoing systematic collection, analysis, and interpretation of data (e.g., regarding agent/hazard, risk factor, exposure, health event) essential to the planning, implementation, and evaluation of public health practice, closely integrated with the timely dissemination of these data to those responsible for prevention and control. http://www.aphl.org/Pages/default.aspx. Also see Biosurveillance and Surveillance.
Public Library of Science (PLoS) A family of scientific journals that is published under the open-access model. Publication type One of several classes of
articles or books into which a new publication will fall (e.g., review articles, case reports, original research, textbook, etc.).
PubMed Central (PMC) An effort by the National Library of Medicine to gather the full-text of scientific articles in a freely accessible database, enhancing the value of Medline by providing the full articles in addition to titles, authors, and abstracts. QRS wave In an electrocardiogram (ECG),
the portion of the wave form that represents the time it takes for depolarization of the ventricles. Quality assurance A means for monitoring
and maintaining the goodness of a service, product, or process. Quality Data Model An information model that describes the relationships between patient data and clinical concepts in a standardized format. The model was originally proposed to enable electronic quality-performance measurement and it is now aligned with CDS standards. Quality management A specific effort to let quality of care be the goal that determines changes in processes, staffing, or investments.
Quality measurements Numeric metrics that assess the quality of health care services. Examples of quality measures include the portion of a physician's patients who are screened for breast cancer and 30-day hospital readmission rates. These measurements have traditionally been derived from administrative claims data or paper charts, but there is increasing interest in using clinical data from electronic sources.
Quality-adjusted life year (QALY) A measure of the value of a health outcome that reflects both longevity and morbidity; it is the expected length of life in years, adjusted to account for diminished quality of life due to physical or mental disability, pain, and so on.
Quasi-experiments A quasiexperiment is a non-randomized, observational study design in which conclusions are drawn from the evaluation of naturally occurring and noncontrolled events or cases.
Query The ability to extract information from an EHR based on a set of criteria; e.g., one could query for all patients with diabetes who have missed their follow-up appointments.
Query and Reporting Tool Software that supports both the planned and ad-hoc extraction and aggregation of data sets from multiple data forms or equivalent data capture instruments used within a clinical trials management system.
Query-response cycle For a database system, the process of submitting a single request for information and receiving the results.
Question answering (QA) A computer-based process whereby a user submits a natural language question that is then automatically answered by returning a specific response (as opposed to returning documents).
Question understanding A form of natural language understanding that supports computer-based question answering.
Radiology The medical field that deals with the definition of health conditions through the use of visual images that reflect information from within the human body.
Radiology Information System (RIS) Computer-based information system that supports radiology department operations; includes management of the film library, scheduling of patient examinations, reporting of results, and billing.
Random-access memory (RAM) The portion of a computer's working memory that can be both read and written into. It is used to store the results of intermediate computation, and the programs and data that are currently in use (also called variable memory or core memory).
Randomized clinical trial (RCT) A prospective experiment in which subjects are randomly assigned to study subgroups to compare the effects of alternate treatments.
Randomly Without bias.
Range check A procedure applied to entered data that detects or prevents entry of values that are out of range; e.g., a serum potassium level of 50.0 mmol/L—the normal range for healthy individuals is 3.5–5.0 mmol/L.
Ransomware Malicious software that blocks access to a computer system or its data until a sum of money is paid to the perpetrators.
Read-only memory (ROM) The portion of a computer's working memory that can be read, but not written into.
Really simple syndication (RSS) A form of XML that publishes a list of headlines, article titles or events encoded in a way that can be easily read by another program called a news aggregator or news reader.
Real-time acquisition The continuous measurement and recording of electronic signals through a direct connection with the signal source.
Real-time feedback Feedback given to the learner in response to each action taken. Real-time feedback is particularly useful in the initial steps of learning a topic. As the learner becomes more experienced with a topic, real-time feedback is often withdrawn and summative feedback is provided at the end of a session. Recall In information retrieval, the ability of a system to retrieve relevant information (expressed as the ratio of relevant records retrieved to all relevant records in the database). Receiver In data interchange, the program or system that receives a transmitted message. Receiver operating characteristic (ROC) A graphical plot that depicts the performance of a binary classifier system as its discrimination threshold is varied.
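Recall, together with precision (defined earlier in this glossary), can be computed from a toy retrieval run; the document identifiers below are invented.

```python
# A toy search: four relevant documents exist, three documents are retrieved.
relevant = {"d1", "d2", "d3", "d4"}
retrieved = {"d1", "d2", "d5"}

hits = relevant & retrieved                 # relevant documents actually retrieved
precision = len(hits) / len(retrieved)      # 2/3 of what was retrieved is relevant
recall = len(hits) / len(relevant)          # 2/4 of the relevant set was found
print(round(precision, 2), recall)          # 0.67 0.5
```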
Records In a data file, a group of data fields that collectively represent information about a single entity.
Reductionist approaches An attempt to explain phenomena by reducing them to common, and often simple, first principles.
Reductionist biomedical model A model of medical care that emphasizes pathophysiology and biological principles. The model assumes that diseases can be understood purely in terms of the component biological processes that are altered as a consequence of illness.
Reference Information Model (RIM) The data model for HL7 Version 3.0. The RIM describes the kinds of information that may be transmitted within health-care organizations, and includes acts that may take place (procedures, observations, interventions, and so on), relationships among acts, the manner in which health-care personnel, patients, and other entities may participate in such acts, and the roles that can be assumed by the participants (patient, provider, specimen, and so on).
Reference resolution In NLP, recognizing that two mentions in two different textual locations refer to the same entity.
Reference standard See gold standard test.
Referential expression A sequence of one or more words that refers to a particular person, object or event, e.g., "she," "Dr. Jones," or "that procedure".
Referral bias In evaluation studies, a bias that is introduced when the patients entering a study are in some way atypical of the total population, generally because they have been referred to the study based on criteria that reflect some kind of bias by the referring physicians.
Region of interest (ROI) A selected subset of pixels within an image identified for a particular purpose.
Regional Extension Centers (RECs) In the context of health information technology, the 60+ state and local organizations (initially funded by ONC) to help primary care providers in their designated area adopt and use EHRs through outreach, education, and technical assistance.
Regional Health Information Organization (RHIO) A community-wide, multi-stakeholder organization that utilizes information technology to make more complete patient information and decision support available to authorized users when and where needed.
Regional network A network that provides regional access from local organizations and individuals to the major backbone networks that interconnect regions.
Registers In a computer, a group of electronic switches used to store and manipulate numbers or text.
Registry A data system designed to record and store information about the health status of patients, often including the care that they
receive. Such collections are typically organized to include patients with a specific disease or class of diseases. Regular expression A mathematical model
of a set of strings, defined using characters of an alphabet and the operators concatenation, union and closure (zero or more occurrences of an expression). Regulated Clinical Research Information Management (RCRIM) An HL7 workgroup
that is developing standards to improve information management for preclinical and clinical research. Relations among named entities The characterization of two entities in NLP with respect to the semantic nature of the relationship between them. Relative recall An approach to measuring recall when it is unrealistic to enumerate all the relevant documents in a database. Thus the denominator in the calculation of recall is redefined to represent the number of relevant documents identified by multiple searches on the query topic. Relevance judgment In the context of information retrieval, a judgment of which documents should be retrieved by which topics in a test collection.
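The three operators in the regular-expression definition above (concatenation, union, and closure) map directly onto Python's `re` syntax; a brief sketch:

```python
import re

assert re.fullmatch(r"ab", "ab")          # concatenation: "a" followed by "b"
assert re.fullmatch(r"a|b", "b")          # union: either "a" or "b"
assert re.fullmatch(r"a*", "aaa")         # closure: zero or more occurrences of "a"
assert re.fullmatch(r"a*", "")            # ...including zero occurrences
assert re.fullmatch(r"(a|b)*c", "abbac")  # combining all three operators
print("all patterns matched")
```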
Relevance ranking The degree to which the results are relevant to the information need specified in a query.
Reminder message A computer-generated warning that is generated when a record meets prespecified criteria, often referring to an action that is expected but is frequently forgotten; e.g., a message that a patient is due for an immunization.
Remote access Access to a system or to information therein, typically by telephone or communications network, by a user who is physically removed from the system.
Remote Intensive Care Use of networked communications methods to monitor patients in an intensive care unit from a distance far removed from the patients themselves. See remote monitoring.
Remote interpretation Evaluating tests (especially imaging studies) by having them delivered digitally to a location that may be far removed from the patient.
Remote monitoring The use of electronic devices to monitor the condition of a patient from a distant location. Typically used to refer to the ability to record and review patient data (such as vital signs) by a physician located in his/her office or a hospital while the patient remains at home. See also remote intensive care.
Remote-presence health care The use of video teleconferencing, image transmission, and other technologies that allow clinicians to evaluate and treat patients in other than face-to-face situations.
Report generation A mechanism by which users specify their data requests on the input screen of a program that then produces the actual query, using information stored in a database schema, often at predetermined intervals.
Representation A level of medical data encoding, the process by which as much detail as possible is coded.
Representational effect The phenomenon by which different representations of a common abstract structure can have a significant effect on reasoning and decision making.
Representational state A particular configuration of an information-bearing structure, such as a monitor display, a verbal utterance, or a printed label, that plays some functional role in a process within the system.
Representativeness A heuristic by which a person judges the chance that a condition is true based on the degree of similarity between the current situation and the stereotypical situation in which the condition is true. For example, a physician might estimate the probability that a patient has a particular disease based on the degree to which the patient's symptoms match the classic disease profile.
Request for Proposals A formal notification of a funding opportunity, requiring application through submission of a grant proposal.
Research protocol In clinical research, a prescribed plan for managing subjects that describes what actions to take under specific conditions.
Resource Description Framework (RDF) An emerging standard for cataloging metadata about information resources (such as Web pages) using the Extensible Markup Language (XML).
RESTful API A "lightweight" application programming interface that enables the transfer of data between two Web-based software systems.
Results reporting A software system or subsystem used to allow clinicians to access the results of laboratory, radiology, and other tests for a patient.
Retrieval A process by which queries are compared against an index to create results for the user who specified the query.
Retrospective chart review The use of past data from clinical charts (classically paper records) of selected patients in order to perform research regarding a clinical question. See also retrospective study.
Retrospective study A research study performed by analyzing data that were previously gathered for another purpose, such as patient care. See also retrospective chart review.
Return on investment A metric for the benefits of an investment, equal to the net benefits of an investment divided by its cost.
Review of systems The component of a typical history and physical examination in which the physician asks general questions about each of the body's major organ systems to discover problems that may not have been suggested by the patient's chief complaint.
RFP See: Request for Proposals.
Ribonucleic acid (RNA) A nucleic acid present in all living cells. Its principal role is to act as a messenger carrying instructions from DNA in the production of proteins.
Rich text format (RTF) A format developed to allow the transfer of graphics and formatted text between different applications and operating systems.
RIM See Reference Information Model.
Risk attitude A person's willingness to take risks.
Risk-neutral Having the characteristic of being indifferent between the expected value of a gamble and the gamble itself.
Role-limited access The mechanism by which an individual's access to information in a database, such as a medical record, is limited depending upon that user's job characteristics and their need to have access to the information.
Router/switch In networking, a device that sits on the network, receives messages, and forwards them accordingly to their intended destination.
RS-232 A commonly used standard for serial data communication that defines the number and type of the wire connections, the voltage, and the characteristics of the signal, and thus allows data communication among electronic devices produced by different manufacturers.
RSS feed A bibliographic message stream that provides content from Internet sources.
Rule engine A software component that implements an inference engine that operates on production rules.
Rule-based system A kind of knowledge-based system that performs inference using production rules.
Sampling rate The rate at which the continuously varying values of an analog signal are measured and recorded.
Scenario A method of teaching that presents a clinical problem in a story format.
Schema In a database-management system, a machine-readable definition of the contents and organization of a database.
Schemata Higher-level kinds of knowledge structures.
SCORM Shareable Content Object Reference Model, a standard for interoperability between learning content objects.
Script In software systems, a keystroke-by-keystroke record of the actions performed for later reuse.
SDO See: Standards development organizations.
Search A synonym for information retrieval.
Search engine A computer system that returns content from a search statement entered by a user.
Secondary knowledge-based information Writing that reviews, condenses, and/or synthesizes the primary literature (see primary knowledge-based information).
Secret-key cryptography In data encryption, a method whereby the same key is used to encrypt and to decrypt information. Thus, the key must be kept secret, known to only the sender and intended receiver of information.
Secure Sockets Layer (SSL) A protocol for transmitting private documents via the Internet. It has been replaced by Transport Layer Security. By convention, URLs that require an SSL connection start with https: instead of http:
Security The process of protecting information from destruction or misuse, including both physical and computer-based mechanisms.
Segmentation In image processing, the extraction of selected regions of interest from an image using automated or manual techniques.
Selectivity In data collection and recording, the process that accounts for individual styles, reflecting an ongoing decision-making process, and often reflecting marked distinctions among clinicians.
Self-experimentation Experiments in which experimenters themselves are subjects of their research.
Semantic analysis The study of how symbols or signs are used to designate the meaning of words and the study of how words combine to form or fail to form meaning.
Semantic class In NLP, a broad class that is associated with a specific domain and includes many instances.
Semantic grammar A mathematical model of a set of sentences based on patterns of semantic categories, e.g., patient, doctor, medication, treatment, and diagnosis.
Semantic network A knowledge source in the UMLS that provides a consistent categorization of all concepts represented in the Metathesaurus in which each concept is assigned at least one semantic type.
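The defining property in the Secret-key cryptography entry, that a single shared key both encrypts and decrypts, can be sketched with a toy XOR cipher in Python. This is for illustration only and is not a secure algorithm; the message and key are invented:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying the same
    # operation twice restores the original, since (m ^ k) ^ k == m.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"patient record"
key = b"s3cr3t"  # the shared secret, known only to sender and receiver

ciphertext = xor_cipher(message, key)
recovered = xor_cipher(ciphertext, key)  # same key, same function
print(recovered == message)  # -> True
```

Real secret-key systems (e.g., AES) use far stronger transformations, but the key-management property is the same: whoever holds the key can both encrypt and decrypt.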
Semantic patterns The study of the patterns formed by the co-occurrence of individual words in a phrase or by the co-occurrence of the associated semantic types of the words.
Semantic relations A classification of the meaning of a linguistic relationship, e.g., "treated in 1995" signifies time while "treated in ER" signifies location.
Semantic sense In NLP, the distinction between individual word meanings of terms that may be in the same semantic class.
Semantic types The categorization of words into semantic classes according to meaning. Usually, the classes that are formed are relevant to specific domains.
Semantic Web A future view which envisions the Internet not only as a source of content but also as a source of intelligently linked, agent-driven, structured collections of machine-readable information.
Semantics The meaning of individual words and the meaning of phrases or sentences consisting of combinations of words.
Semi-structured interview An interview in which the investigator specifies in advance a set of topics that he would like to address but is flexible about the order in which these topics are addressed, and is open to discussion of topics not on the pre-specified list.
Sender In data interchange, the program or system that sends a transmitted message.
Sensitivity (of a test) The probability of a positive result, given that the condition under consideration is present—for example, the probability of a positive test result in a person who has the disease under consideration (also called the true-positive rate).
Sentence boundary In NLP, distinguishing the end of one sentence and the beginning of the next.
Sentiment analysis The use of NLP methods to identify and classify the opinions or emotional tone expressed in text.
Sequence alignment An arrangement of two
or more sequences (usually of DNA or RNA), highlighting their similarity. The sequences are padded with gaps (usually denoted by dashes) so that wherever possible, columns contain identical or similar characters from the sequences involved. Sequence database A database that stores the nucleotide or amino acid sequences of genes (or genetic markers) and proteins respectively. Sequence information Information from a
database that captures the sequence of component elements in a biological structure (e.g., the sequence of amino acids in a protein or of nucleotides in a DNA segment). Sequential Bayes A reasoning method based
on a naïve Bayesian model, where Bayes' rule is applied sequentially for each new piece of evidence that is provided to the system. With each application of Bayes' rule, the posterior probability of each diagnostic possibility is used as the new prior probability for that diagnosis the next time Bayes' rule is invoked.
Server A computer that shares its resources with other computers and supports the activities of many users simultaneously within an enterprise.
Service An intangible activity provided to consumers, generally at a price, by a (presumably) qualified individual or system.
Service oriented architectures (SOA) A software design framework that allows specific processing or information functions (services) to run on an independent computing platform that can be called by simple messages from another computer application. Often considered to be more flexible and efficient than more traditional database architectures. The best-known example is the Internet, which is based largely on SOA design principles.
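The iterative scheme described under Sequential Bayes, where each posterior becomes the next prior, can be sketched for a two-hypothesis case (disease present vs. absent); the prevalence and likelihoods below are invented for illustration:

```python
def sequential_bayes(prior, evidence):
    """Apply Bayes' rule once per finding; the posterior after each
    finding becomes the prior for the next application."""
    p = prior
    for p_given_disease, p_given_no_disease in evidence:
        numerator = p_given_disease * p
        p = numerator / (numerator + p_given_no_disease * (1 - p))
    return p

# 1% prior prevalence; each finding is 10x as likely with the disease.
posterior = sequential_bayes(0.01, [(0.8, 0.08), (0.6, 0.06)])
print(round(posterior, 3))  # -> 0.503
```

Two findings, each favoring the disease 10-to-1, raise a 1% prior to roughly even odds, which is the behavior the entry describes.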
Set-based searching Constraining a search to include only documents in a given class or set (e.g., from a given institution or journal).
Set-top box A device, such as a cable box, that converts video content to analog or digital television signals.
Shallow parsing See partial parsing.
Shielding In cabling, refers to an outer layer of insulation covering an inner layer of conducting material. Shielded cable is used to reduce electronic noise and voltage spikes.
Short-term/working memory An emergent property of interaction with the environment; refers to the resources needed to maintain information active during cognitive activity.
Signal processing An area of systems engineering, electrical engineering, and applied mathematics that deals with operations on or analysis of signals, or measurements of time-varying or spatially varying physical quantities.
Simple Mail Transport Protocol (SMTP) The standard protocol used by networked systems, including the Internet, for packaging and distributing email so that it can be processed by a wide variety of software systems.
Simple Object Access Protocol (SOAP) A protocol for information exchange through the HTTP/HTTPS or SMTP transport protocol using web services and utilizing Extensible Markup Language (XML) as the format for messages.
Simulation A system that behaves according to a model of a process or another system; for example, simulation of a patient's response to therapeutic interventions allows a student to learn which techniques are effective without risking human life.
Simulation center Specialized type of learning center, though its governance may reside in an academic department such as anesthesiology or surgery depending on the center's origin and history.
Simultaneous access Access to shared, computer-stored information by multiple concurrent users.
Simultaneous controls Use of participants in a comparative study who are not exposed to the information resource. They can be randomly allocated to access to the information resource or in some other way.
Single nucleotide polymorphism (SNP) A DNA sequence variation, occurring when a single nucleotide in the genome is altered. For example, a SNP might change the nucleotide sequence AAGCCTA to AAGCTTA. A variation must occur in at least 1% of the population to be considered a SNP.
Single-photon emission computed tomography A nuclear medicine tomographic imaging technique using gamma rays. It is very similar to conventional nuclear medicine planar imaging using a gamma camera. However, it is able to provide true 3D information. This information is typically presented as cross-sectional slices through the patient, but can be freely reformatted or manipulated as required.
Single-user systems Computers designed for use by single individuals, such as personal computers, as opposed to servers or other resources that are designed to be shared by multiple people at the same time.
Six sigma A management strategy that seeks to improve the quality of work processes by identifying and removing the causes of defects and minimizing the variability of those processes. Statistically, a six sigma process is one that is free of defects or errors 99.99966% of the time, which equates to operating a process that fits six standard deviations between the mean value of the process and the specification limit of that process.
Slip A type of medical error that occurs when the actor selects the appropriate course of action, but it was executed inappropriately.
Slots In a frame-based representation, the elements that are used to define the semantic characteristics of the frame.
SMART See: Substitutable Medical Applications and Reusable Technologies.
SMART on FHIR An open, standards-based platform for medical apps to access patients' data from electronic medical records. SMART on FHIR builds on two technology efforts: the Substitutable Medical Applications, Reusable Technologies (SMART) Platforms Project and Fast Healthcare Interoperability Resources (FHIR).
Smart phone A mobile telephone that typically integrates voice calls with access to the Internet to enable both access to web sites and the ability to download email and applications that then reside on the device.
Smartwatch A type of wearable computer in the form of a wristwatch. Typically provides health monitoring features, ability to run simple third-party apps, and WiFi or Bluetooth connectivity, in addition to telling time.
SMS messaging The sending of messages using the text communication service component of phone, web, or mobile communication systems (Short Message Service).
SNOMED Systematized Nomenclature of Medicine—A set of standardized medical terms that can be processed electronically; useful for enhancing the standardized use of medical terms in clinical systems.
SNOMED-CT The result of the merger of an earlier version of SNOMED with the Read Clinical Terms.
SNP See Single nucleotide polymorphism.
Social determinants of health Conditions in which people live, learn, work, and play. Negative examples include: poverty, poor access to healthy foods, substandard education, unsafe neighborhoods.
Social networking The use of a dedicated Web site to communicate informally (on the site, by email, or via SMS messages) with other members of the site, typically by posting messages, photographs, etc.
Sociotechnical systems An approach to the study of work in complex settings that emphasizes the interaction between people and technology in workplaces.
Software Computer programs that direct the hardware how to carry out specific automated processes.
Software development life cycle (SDLC) or software development process A framework imposed over software development in order to better ensure a repeatable, predictable process that controls cost and improves quality of a software product.
Software oversight committee A group within an organization that is constituted to oversee computer programs and to assess their safety and efficacy in the local setting.
Software psychology A behavioral approach to understanding and furthering software design, specifically studying human beings' interactions with systems and software. It is the intellectual predecessor to the discipline of human-computer interaction.
Solid state drive (SSD) A data storage device using integrated circuit assemblies as memory to store data persistently. SSDs have no moving mechanical components, which distinguishes them from traditional electromechanical magnetic disks such as hard disk drives (HDDs) or floppy disks, which contain spinning disks and movable read/write heads.
Spamming The process of sending unsolicited email to large numbers of unwilling recipients, typically to sell a product or make a political statement.
Spatial resolution A measure of the ability to distinguish among points that are close to each other (indicated in a digital image by the number of pixels per square inch).
Specialist Lexicon One of three UMLS Knowledge Sources, this lexicon is intended to be a general English lexicon that includes many biomedical terms and supports natural language processing.
Specificity (of a test) The probability of a negative result, given that the condition under consideration is absent—for example, the probability of a negative test result in a person who does not have a disease under consideration (also called the true-negative rate).
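Specificity and the earlier Sensitivity entry correspond to simple ratios from a 2×2 table of test results; a minimal sketch with invented counts:

```python
# Invented counts from a hypothetical diagnostic-test study
tp, fn = 90, 10    # patients WITH the disease: positive / negative tests
tn, fp = 160, 40   # patients WITHOUT the disease: negative / positive tests

sensitivity = tp / (tp + fn)  # true-positive rate
specificity = tn / (tn + fp)  # true-negative rate

print(sensitivity)  # -> 0.9
print(specificity)  # -> 0.8
```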
Spectrum bias Systematic error in the estimate of a study parameter that results when the study population includes only selected subgroups of the clinically relevant population—for example, the systematic error in the estimates of sensitivity and specificity that results when test performance is measured in a study population consisting of only healthy volunteers and patients with advanced disease.
Speech recognition Translation by computer of voice input, spoken using a natural vocabulary and cadence, into appropriate natural language text, codes, and commands.
Spelling check A procedure that checks the spelling of individual words in entered data.
Spirometer An instrument for measuring the air capacity of the lungs.
Standard of care The community-accepted norm for management of a specified clinical problem.
Standard order sets Predefined lists of steps that should be taken to deal with certain recurring situations in the care of patients, typically in hospitals; e.g., orders to be followed routinely when a patient is in the postsurgical recovery room.
Standard-gamble A technique for utility assessment that enables an analyst to determine the utility of an outcome by comparing an individual's preference for a chance event with that for a situation of certain outcome.
Standards development organizations An organization charged with developing a standard that is accepted by the community of affected individuals.
Static In patient simulations, a program that presents a predefined case in detail but which does not vary in its response depending on the actions taken by the learner.
Stemming The process of converting a word to its root form by removing common suffixes from the end.
Stop words In full-text indexing, a list of words that are low in semantic content (e.g., "the", "a", "an") and are generally not useful as mechanisms for retrieving documents.
Storage devices A piece of computer equipment on which information can be stored.
Store-and-forward A telecommunications technique in which information is sent to an intermediate station where it is kept and sent at a later time to the final destination or to another intermediate station.
Strict product liability The principle that states that a product must not be harmful.
Structural alignment The study of methods for organizing and managing diverse sources of information about the physical organization of the body and other physical structures.
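The Stemming and Stop words entries describe two common indexing steps; a deliberately crude sketch follows (a real stemmer, such as Porter's algorithm, handles many more cases):

```python
STOP_WORDS = {"the", "a", "an", "of", "in"}
SUFFIXES = ("ing", "ed", "s")  # crude list; strips at most one suffix

def stem(word):
    # Remove the first matching suffix, keeping a minimal root length.
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def index_terms(text):
    # Drop stop words, then reduce each remaining word to a root form.
    return [stem(w) for w in text.lower().split() if w not in STOP_WORDS]

print(index_terms("the nurse recorded readings in a chart"))
# -> ['nurse', 'record', 'reading', 'chart']
```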
Structural informatics The study of methods for organizing and managing diverse sources of information about the physical organization of the body and other physical structures. Often used synonymously with “imaging informatics”.
Structure validation A study carried out to help understand the needs for an information resource, and demonstrate that its proposed structure makes sense to key stakeholders.
Structured data entry A method of human-computer interaction in which users fill in missing values by making selections from predefined menus. The approach discretizes user input and makes it possible for a computer system to reason directly with the data that are provided.
Structured encounter form A form for collecting and recording specific information during a patient visit.
Structured interview An interview with a schedule of questions that are always presented in the same words and in the same order.
Structured Query Language (SQL) A commonly used syntax for retrieving information from relational databases.
Structured reports A report where the content of the report has coded values for the key information in each pre-specified part of the report, enabling efficient and reliable computation on the report.
Study arm In the context of clinical research, a study arm represents a specific modality of an experimental intervention to which a participant is assigned, usually through a process of randomization (e.g., randomly assigned in a balanced manner to such an arm). Arms are used in clinical study designs where multiple variants of a given experimental intervention are under study, for example, varying the timing or dose of a given medication between arms to determine an optimal therapeutic strategy.
Study population The population of subjects—usually a subset of the clinically relevant population—in whom experimental outcomes (for example, the performance of a diagnostic test) are measured.
Subheadings In MeSH, qualifiers of subject headings that narrow the focus of a term.
Subjectivist approaches Class of approaches to evaluation that rely primarily on qualitative data derived from observation, interview, and analysis of documents and other artifacts. Studies under this rubric focus on description and explanation; they tend to evolve rather than be prescribed in advance.
Sublanguage Language of a specialized domain, such as medicine, biology, or law.
Substitutable Medical Applications and Reusable Technologies (SMART) A technical platform that enables EHR systems to behave as "iPhone-like platforms" through an application programming interface (API) and a set of core services that support easy addition and deletion of third-party apps, such that the core system is stable and the apps are substitutable.
Summarization A process by which a computer system attempts to automatically summarize a larger body of content.
Summary ROC curve A composite ROC curve developed by using estimates from many studies.
Summative evaluation An evaluation conducted after the product is in use; it is valuable both to justify the completed project and to learn from one's mistakes.
Supervised learning An approach to machine learning in which an algorithm uses a set of inputs and corresponding outputs to try to learn a model that will enable prediction of an output when faced with a previously unseen input.
Supervised learning technique A method for determining how data values may suggest classifications, where the possible classifications are enumerated in advance, and the performance of a system is enhanced by evaluating how well the system classifies a training set of data. Statistical regression, neural networks, and support vector machines are forms of supervised learning.
Supervised machine learning A machine learning approach that uses a gold standard set as input to learn classifiers.
Surveillance The ongoing collection, analysis, interpretation, and dissemination of data on health conditions (e.g., breast cancer) and threats to health (e.g., smoking prevalence). In a computer-based medical record system, systematic review of patients' clinical data to detect and flag conditions that merit attention. Also see public health surveillance and biosurveillance.
Symbolic-programming language A programming language in which the program can treat itself, or material like itself, as data. Such programs can write programs (not just as character strings or texts, but as the actual data structures that the program is made of). The best known and most influential of these languages is LISP.
Syndromic surveillance A particular type of public health surveillance. It is an ongoing process of monitoring clinical data, generally from public health, hospital, or outpatient resources, or surrogate data indicating early illness (e.g., school or work absenteeism), with a goal of early identification of outbreaks, new conditions, health threats, or bioterrorist events.
Synonyms Multiple ways of expressing the same concept.
Syntax The grammatical structure of language describing the relations among words in a sentence.
System programs The operating system, compilers, and other software that are included with a computer system and that allow users to operate the hardware.
Systematic review A type of journal article that reviews the literature related to a specific clinical question, analyzing the data in accordance with formal methods to assure that data are suitably compared and pooled.
Systems biology Research on biological networks or biochemical pathways. Often, systems biology analyses take a comprehensive approach to model biological function by taking the interactions (physical, regulatory, similarity, etc.) of a set of genes as a whole.
Tablet Generally refers to a personal computing device that resembles a paper tablet in size and incorporates features such as a touch screen to facilitate data entry.
Tactile feedback In virtual or telepresence environments, the process of providing (through technology) a sensation of touching an object that is imaginary or otherwise beyond the user's reach (see also haptic feedback).
TCP/IP Transmission Control Protocol/Internet Protocol—A set of standard communications protocols used for the Internet and for networks within organizations as well.
Teleconsultation The use of telemedicine techniques to support the interaction between two (or more) clinicians where one is providing advice to the other, typically about a specific patient's care.
Telegraphic In NLP, describes language that does not follow the usual rules of grammar but is compact and efficient. Clinical notes written by hand often demonstrate a "telegraphic style".
Telehealth The use of electronic information and telecommunications technologies to support long-distance clinical health care, patient and professional health-related education, public health, and health administration. See telemedicine.
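A query in the syntax described under Structured Query Language (SQL) can be run against Python's built-in sqlite3 module; the table and values here are invented for illustration:

```python
import sqlite3

# In-memory relational database; schema and rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE labs (patient_id INTEGER, test TEXT, value REAL)")
conn.executemany(
    "INSERT INTO labs VALUES (?, ?, ?)",
    [(1, "glucose", 95.0), (1, "sodium", 140.0), (2, "glucose", 180.0)],
)

# A typical SQL retrieval: all glucose results above a threshold.
rows = conn.execute(
    "SELECT patient_id, value FROM labs WHERE test = ? AND value > ?",
    ("glucose", 120.0),
).fetchall()
print(rows)  # -> [(2, 180.0)]
```

The `?` placeholders are the parameter-binding style sqlite3 uses; binding values this way, rather than concatenating them into the query string, is the standard defense against SQL injection.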
information retrieval, a word or phrase which forms part of the basis for a search request.
Telehome care The use of communications
Term frequency (TF) In information retrieval, a measurement of how frequently a term occurs in a document.
and information technology to deliver health services and to exchange health information to and from the home (or community) when distance separates the participants. Tele-ICU See remote intensive care. Telemedicine A broad term used to describe
the delivery of health care at a distance, increasingly but not exclusively by means of the Internet. Teleophthalmology The use of telemedicine
methods to deliver ophthalmology services. Telepresence A technique of telemedicine in which a viewer can be physically removed from an actual surgery, viewing the abnormality through a video monitor that displays the operative field and allows the observer to participate in the procedure. Telepsychiatry The use of telemedicine meth-
ods to deliver psychiatric services.
Term weighting The assignment of metrics to terms so as to help specify their utility in retrieving documents well matched to a query. Terminal A simple device that has no processing capability of its own but allows a user to access a server. Terminology A set of terms representing the system of concepts of a particular subject field. Terminology authority An entity or mecha-
nism that determines the acceptable term to use for a specific entity, descriptor, or other concept. Terminology services Software methods, typically based on computer-based dictionaries or language systems, that allow other systems to determine the locally acceptable term to use for a given purpose.
Teleradiology The provision of remote inter-
pretations, increasing as a mode of delivery of radiology services. Telesurgery The use of advanced telemedicine methods to allow a doctor to perform surgery on a patient even though he or she is not physically in the operating room. Temporal resolution A metric for how well an imaging modality can distinguish points in time that are very close together.
Test collection In the context of information
retrieval, a collection of real-world content, a sampling of user queries, and relevance judgments that allow system-based evaluation of search systems. Test-interpretation bias Systematic error in the estimates of sensitivity and specificity that results when the index and gold-standard test are not interpreted independently. Test-referral bias Systematic error in the esti-
Terabyte A unit of information equal to one
million million (1012) or strictly, 240 bytes.
mates of sensitivity and specificity that results when subjects with a positive index test are more likely to receive the gold-standard test.
Term A word or phrase. Term Designation of a defined concept by a
linguistic expression in a special language. In
Tethered personal health record An EHR portal that is provided to patients by an institution and can typically be used to manage
1084
Glossary
information only from that provider organization. Text generation Methods that create coherent natural language text from structured data or from textual documents in order to satisfy a communication goal. Text mining The use of large text collections (e.g., medical histories, consultation reports, articles from the literature, web-based resources) and natural language processing to allow inferences to be drawn, often in the form of associations or knowledge that were not previously apparent. See also data mining.
Text processing The analysis of text by computer.

Text readability assessment and simplification An application of NLP in which computational methods are used to assess the clarity of writing for a certain audience or to revise the exposition using simpler terminology and sentence construction.

Text REtrieval Conference (TREC) Organized by NIST, an annual conference on text retrieval that has provided a testbed for evaluation and a forum for presentation of results (see trec.nist.gov).

Text summarization Methods that take one or several documents as input and produce a single, coherent text that synthesizes the main points of the input documents.

Text-comprehension A process in which text can be described at multiple levels of realization, from surface codes (e.g., words and syntax) to deeper levels of semantics.

TF*IDF weighting A specific approach to term weighting that combines the inverse document frequency (IDF) and term frequency (TF).

Thesaurus A set of subject headings or descriptors, usually with a cross-reference system, for use in the organization of a collection of documents for reference and retrieval.

Thick client A computer node in a network or client–server architecture that provides rich functionality independent of the central server. See also thin client.

Thin client A program on a local computer system that mostly provides connectivity to a larger resource over a computer network, thereby providing access to computational power that is not provided by the machine that is local to the user.

Think-aloud protocol In cognitive science, the generation of a description of what a person is thinking or considering as they solve a problem.

Thread The smallest sequence of programmed instructions that can be managed independently by an operating system scheduler.

Three-dimensional printing Construction of a physical model of anatomy or other object by laying down plastic versions of a stack of cross-sectional slices through the object.

Three-dimensional structure information In a biological database, information regarding the three-dimensional relationships among elements in a molecular structure.

Time-sharing networks An historical term describing some of the earliest computer networks allowing remote access to systems.

Time-trade-off A common approach to utility assessment, comparing a better state of health lasting a shorter time with a lesser state of health lasting a longer time. The time-trade-off technique provides a convenient method for valuing outcomes that accounts for gains (or losses) in both length and quality of life.

Tokenization The process of breaking an unstructured sequence of characters into larger units called "tokens," e.g., words, numbers, dates, and punctuation.

Tokens In language processing, the composite entities constructed from individual characters, typically words, numbers, dates, or punctuation.
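As a brief, hedged illustration of tokenization as defined above (the regular expression and function are illustrative only, not a production clinical NLP tokenizer), a minimal Python sketch:

```python
import re

# A toy tokenizer: splits raw text into numbers, words, and punctuation.
# The pattern is illustrative only; real clinical NLP pipelines use far
# richer rules for dates, abbreviations, and units.
TOKEN_PATTERN = re.compile(r"\d+(?:\.\d+)?|\w+|[^\w\s]")

def tokenize(text):
    """Break an unstructured character sequence into a list of tokens."""
    return TOKEN_PATTERN.findall(text)

print(tokenize("BP 120/80 recorded 3/4/2021."))
```

Running this on the sample string yields separate tokens for each word, number, and punctuation mark (e.g., "BP", "120", "/", "80", ...), matching the glossary's sense of tokens as composite entities built from individual characters.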
Top-down In search or analysis, the breaking down of a system to gain insight into its compositional subsystems.

Topology In networking, the overall connectivity of the nodes in a network.

Touch screen A display screen that allows users to select items by touching them on the screen.

Track pad A computer input device for controlling the pointer on a display screen by sliding the finger along a touch-sensitive surface; used chiefly in laptop computers. Also called a touchpad.

Transaction set In data transfer, the full set of information exchanged between a sender and a receiver.

Transcription The conversion of a recording of dictated notes into electronic text by a typist.

Transcriptomics The study of the set of RNA transcripts that are produced by the genome and the context (specific cells or circumstances) in which transcription occurs.

Transition matrix A table of numbers giving the probability of moving from one state in a Markov model into another state, or the state that is reached in a finite-state machine depending on the current character of the alphabet.

Transition probability The probability that a person will transit from one health state to another during a specified time period.

Translational Bioinformatics (TBI) According to AMIA: the development of storage, analytic, and interpretive methods to optimize the transformation of increasingly voluminous biomedical data, and genomic data, into proactive, predictive, preventive, and participatory health.

Translational medicine The process of transferring scientific discoveries into preventive practice and clinical care.

Transmission control protocol/internet protocol (TCP/IP) The standard protocols used for data transmission on the Internet and other common local and wide-area networks.

Transport Layer Security (TLS) A protocol that ensures the privacy of data transmitted over the Internet. It grew out of Secure Sockets Layer.

Treatment threshold probability The probability of disease at which the expected values of withholding or giving treatment are equal. Above the threshold, treatment is recommended; below the threshold, treatment is not recommended and further testing may be warranted.

Trigger event In monitoring, events that cause a set of transactions to be generated.

True negative In assessing a situation, an instance that is classified negatively and is subsequently shown to have been correctly classified.

True positive In assessing a situation, an instance that is classified positively and is subsequently shown to have been correctly classified.

True-negative rate (TNR) The probability of a negative result, given that the condition under consideration is false—for example, the probability of a negative test result in a patient who does not have the disease under consideration (also called specificity).

True-negative result (TN) A negative result when the condition under consideration is false—for example, a negative test result in a patient who does not have the disease under consideration.
True-positive rate (TPR) The probability of a positive result, given that the condition under consideration is true—for example, the probability of a positive test result in a patient who has the disease under consideration (also called sensitivity).

True-positive result (TP) A positive result when the condition under consideration is true—for example, a positive test result in a patient who has the disease under consideration.

Turn-around-time The period for completing a process cycle, commonly expressed as an average of previous such periods.

Tutoring A computer program designed to provide self-directed education to a student or trainee.

Tutoring system A computer program designed to provide self-directed education to a student or trainee. (Also Intelligent Tutoring System.)

Twisted-pair wires The typical copper wiring used for routine telephone service but adaptable for newer communication technologies.

Type-checking In computer programming, the act of checking that the types of values, such as integers, decimal numbers, and strings of characters, match throughout their use.

Typology A way of classifying things to make sense of them, for a certain purpose.

Ubiquitous computing A form of computing and human-computer interaction that seeks to embed computing power invisibly in all facets of life.

Ultrasound A common energy source derived from high-frequency sound waves.

UMLS See Unified Medical Language System.

UMLS Knowledge Sources Components of the Unified Medical Language System that support its use and semantic breadth.

UMLS Semantic Network A knowledge source in the UMLS that provides a consistent categorization of all concepts represented in the Metathesaurus. Each Metathesaurus concept is assigned at least one semantic type from the Semantic Network.

Unicode A character-encoding standard that represents characters needed for the world's languages; in its original form it used up to 16 bits per character.

Unified Medical Language System (UMLS) Project A terminology system, developed under the direction of the National Library of Medicine, to produce a common structure that ties together the various vocabularies that have been created for biomedical domains.

Unified Modeling Language (UML) A standardized general-purpose modeling language developed for object-oriented software engineering that provides a set of graphic notation techniques to create visual models that depict the relationships between actors and activities in the program or process being modeled.

Uniform resource identifier (URI) The combination of a URN and URL, intended to provide persistent access to digital objects.

Uniform resource locator (URL) The address of an information resource on the World Wide Web.

Uniform resource name (URN) A name for a Web page, intended to be more persistent than a URL, which often changes over time as domains evolve or Web sites are reorganized.

Unique health identifier (UHI) A government-provided number that is assigned to an individual for purposes of keeping track of their health information.
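The standard components of a URL can be inspected with Python's standard-library urllib.parse module; a small sketch (the address itself is made up for illustration):

```python
from urllib.parse import urlparse

# Dissect a hypothetical Web address into its standard URL components.
url = "https://www.example.org/glossary/terms?letter=U"
parts = urlparse(url)

print(parts.scheme)  # the protocol, e.g. "https"
print(parts.netloc)  # the host, e.g. "www.example.org"
print(parts.path)    # the resource path, e.g. "/glossary/terms"
print(parts.query)   # optional query parameters, e.g. "letter=U"
```

The scheme-host-path structure is what makes a URL an *address*; a URN, by contrast, names a resource without specifying where it lives.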
Universal Serial Bus (USB) A connection technology for attaching peripheral devices to a computer, providing fast data exchange.

Unobtrusive measures Measures made using the records accrued as part of the routine use of the information resource, including, for example, user log files.

Unstructured interview An interview where there are no predetermined questions.

Unsupervised machine learning A machine learning approach that learns patterns from the data without labeled training sets.

URAC An organization that accredits the quality of information from various sources, including health-related Web sites.

Usability The quality of being able to provide good service to one who wishes to use a product.

Usability testing A class of methods for collecting empirical data of representative users performing representative tasks; considered the gold standard in usability evaluation methods.

User authentication The process of identifying a user of an information resource, and verifying that the user is allowed to access the services of that resource. A standard user authentication method is to collect and verify a username and password.

User-centered design An iterative process in which designers focus on the users and their needs in each phase of the design process. UCD calls for involving users throughout the design process via a variety of research and design techniques to increase the likelihood that the product will be highly usable by its intended users.

User-interface layer The architectural layer of a software environment that handles the interface with users.

Utility In decision making, a number that represents the value of a specific outcome to a decision maker (see, for example, quality-adjusted life-years).

Validity check A set of procedures applied to data entered into an EHR intended to detect or prevent the entry of erroneous data; e.g., range checks and pattern checks.

Value-based reimbursement In health care, an alternative to traditional fee-for-service reimbursement, aimed at rewarding quality rather than quantity of services.

Variable Quantity measured in a study. Variables can be measured at the nominal, ordinal, interval, or ratio levels.

Vector mathematics In the context of information retrieval, mathematical systems for measuring and comparing vector representations of documents and their contents.

Vector-space model A method of full-text indexing in which documents can be conceptualized as vectors of terms, with retrieval based on the cosine similarity of the angle between the query and document vectors.

Vendor-neutral archives (VNA) A technology in which images (and potentially any file of clinical relevance) are stored (archived) in a standard format with a standard interface (e.g., DICOM), such that they can be accessed in a vendor-neutral manner by other systems.

Vertically integrated Refers to an organizational structure in which a variety of products or services are offered within a single chain of command; contrasted with horizontal integration, in which a single type of product is offered in different geographical markets. A hospital that offers a variety of services from obstetrics to geriatrics would be "vertically integrated." A diagnostic imaging organization with multiple sites would be "horizontally integrated."
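The vector-space model and TF*IDF weighting defined in this glossary can be sketched in a few lines of Python. This is a toy corpus with an unsmoothed TF*IDF formula, chosen purely for illustration; production retrieval systems use more refined weighting and indexing:

```python
import math
from collections import Counter

docs = ["heart attack treatment", "heart failure", "stroke treatment"]

def tfidf_vector(text, corpus):
    """Weight each term by term frequency (TF) times inverse document
    frequency (IDF). Assumes every term occurs somewhere in the corpus."""
    tf = Counter(text.split())
    n = len(corpus)
    return {t: tf[t] * math.log(n / sum(t in d.split() for d in corpus))
            for t in tf}

def cosine(u, v):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

# Rank documents by cosine similarity of the angle between query
# and document vectors, as in the vector-space model.
query = tfidf_vector("heart treatment", docs)
ranked = sorted(docs, key=lambda d: cosine(query, tfidf_vector(d, docs)),
                reverse=True)
print(ranked[0])
```

With this toy corpus, the query "heart treatment" ranks "heart attack treatment" first, since it shares both query terms.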
Veterinary informatics The application of biomedical informatics methods and techniques to problems derived from the field of veterinary medicine. Viewed as a subarea of clinical informatics.

Video-display terminal (VDT) A device for displaying input signals as characters on a screen, typically a computer monitor.

View In a database-management system, a logical submodel of the contents and structure of a database used to support one or a subset of applications.

View schemas An application-specific description of a view that supports that program's activities with respect to some general database for which there are multiple views.

Virtual address A technique in memory management such that each address referenced by the CPU goes through an address mapping from the virtual address of the program to a physical address in main memory.

Virtual medical record A standard model of the data elements found in EHR systems. The virtual medical record approach assumes that, even if particular EHR implementations adopt nonstandard data dictionaries and disparate ways of storing clinical data, mapping the contents of each EHR to a canonical model greatly simplifies interoperability with CDS systems and other applications that may need to access the data.

Virtual memory A scheme by which users can access information stored in auxiliary memory as though it were in main memory. Virtual memory addresses are automatically translated into actual addresses by the hardware.

Virtual patient A digital representation of a patient encounter that can range from a simple review of clinical findings to a realistic graphical view of a person who can converse and can be examined for various clinical symptoms and laboratory tests.

Virtual Private Network (VPN) A private communications network, usually used within a company or organization, or by several different companies or organizations, communicating over a public network. VPN message traffic is carried on public networking infrastructure (e.g., the Internet) using standard (often insecure) protocols.

Virtual reality A collection of interface methods that simulate reality more closely than does the standard display monitor, generally with a response to user maneuvers that heightens the sense of being connected to the simulation.

Virtual world A three-dimensional representation of an environment such as a hospital, a clinic, or a home-care location. The represented space usually includes a virtual patient, and interactive equipment and supplies that can be used to examine and care for the patient. Some virtual worlds are multi-user and allow multiple learners to manifest themselves as characters in the virtual world for interaction with each other and the patient.

Virus/worm A software program that is written for malicious purposes to spread from one machine to another and to do some kind of damage. Such programs are generally self-replicating, which has led to the comparison with biological viruses.

Visual-analog scale A method for valuing health outcomes, wherein a person simply rates the quality of life with a health outcome on a scale from 0 to 100.

Vocabulary A dictionary containing the terminology of a subject field.

Volatile A characteristic of a computer's memory, in that contents are changed when the next program runs and are not retained when power is turned off.

Volume rendering A method whereby a computer program projects a two-dimensional image directly from a three-dimensional voxel array by casting rays from the eye of the observer through the volume array to the image plane.

von Neumann machine A computer architecture that comprises a single processing unit, computer memory, and a memory bus.

Voxel A volume element, or small cubic area of a three-dimensional digital image (see pixel).

Washington DC Principles for Free Access to Science An organization of non-profit publishers that aims to balance wide access with the need to maintain sustainable revenue models.

Wearables In the context of mobile health, a range of electronic devices that can be incorporated into clothing or worn on the body, such as smartwatches, activity trackers, and physiological sensors, used to collect health-related data and provide health interventions. Also referred to as wearable devices or wearable technologies.
Web browser A computer program used to access and display information resources on the World Wide Web.

Web catalog Web pages containing mainly links to other Web pages and sites.

Web Services Description Language (WSDL) An XML-based language used to describe the attributes of a web service, such as a SOAP service.

Web-based technologies Computer capabilities that rely on the architecture principles of the Internet for accessing data from remote servers.

Weblogs/blogs A type of Web site that provides discussion or information on various topics.

WebMD An American company that provides web-based health information services.

Whole Slide Digitization The process of capturing an entire specimen on a slide into a digital image. Compared with capturing images of a single field of view from a microscope, this captures the entire specimen, and can be millions of pixels on a side. This allows subsequent or remote review of the specimen without requiring capture of individual fields.

Wide-area networks (WANs) Networks that connect computers owned by independent institutions and distributed over long distances.

Wi-Fi A common wireless networking technology (IEEE 802.11x) that uses radio waves to provide high-speed connections to the Internet and local networks.

Word In computer memory, a sequence of bits that can be accessed as a unit.

Word sense disambiguation (WSD) The process of determining the correct sense of a word in a given context.

Word senses The possible meanings of a term.

Word size The number of bits that define a word in a given computer.

Workstation A powerful desktop computer system designed to support a single user. Workstations provide specialized hardware and software to facilitate the problem-solving and information-processing tasks of professionals in their domains of expertise.

World Intellectual Property Organization (WIPO) An international organization, headquartered in Geneva and dedicated to promoting the use and protection of intellectual property.
World Wide Web (WWW or Web) An application implemented on the Internet in which multimedia information resources are made accessible by any of a number of protocols, the most common of which is the HyperText Transfer Protocol (HTTP).

Worm A self-replicating computer program, similar to a computer virus; a worm is self-contained and does not need to be part of another program to propagate itself.

xAPI The Experience Application Programming Interface; goes beyond interoperability standards, such as SCORM, and supports collection of data about the learner's experience while using the learning object.

XML A metalanguage that allows users to define their own customized markup languages. See Extensible Markup Language.

X-ray crystallography A technique in crystallography in which the pattern produced by the diffraction of x-rays through the closely spaced lattice of atoms in a crystal is recorded and then analyzed to reveal the nature of that lattice, generally leading to an understanding of the material and molecular structure of a substance.
Name Index A Aach, J. 894, 895 Aalbersberg, I. 762 Aarts, J. 21, 156, 833 Abaluck, B. 142, 154, 155, 157, 429, 454 Abbas, U.L. 442 Abbey, B. 649 Abbott, P.A. 122, 726, 832 Abe, R. 958 Abeler, J. 995 Abelson, R. 532 Abend, A. 929 Abernethy, A.P. 995 Abernethy, N.F. 63 Aberra, F. 833 Aboukhalil, A. 709 Abraham, C. 649 Abrahamsen, A. 125 Abrams, K.R.,93 Abril, P. 410 Ackerman, M. 645, 767 Ackers, M. 436, 438, 462 Adam, E.J. 675 Adami, A. 649 Adams, C. 782 Adams, D.R. 951 Adams, J. 759, 975 Adams, K. 168, 579 Adams, M.A. 656 Adams, S.A. 377 Adams Becker, S. 862 Adelman, D.C. 889 Adeoye, O.M. 681 Adida, B. 379 Adlassnig, K.P. 232 Adler-Milstein, J. 63, 201, 380, 518, 537, 802, 804, 973, 974, 977, 978 Adolf, J. 680 Aebersold, M. 855 Afessa, B. 725 Afrin, J.N. 678 Agarwal, P. 643 Agarwal, S. 268 Agrawal, A. 168, 826 Agrawal, M. 337 Agudelo-Londoño, S. 853 Aguirre, G. 304 Agutter, J. 157 Ahern, D. 579 Ahmad, T. 954 Ahmed, A. 711–714, 718, 719, 721 Aine, C. 304 Aizawa, K. 649 Akerkar, R. 772
Akhlaghi, F. 953 Akin, O. 127 Al'Absi, M. 647 Alagoz, O. 106 Alam, R. 681 Alamri, Y. 808 Albayrak, Y. 780 Alberini, J. 304, 309 Alderson, P.O. 245 Aldrich, M.C. 889 Alecu, I. 881 Alexander, C.M. 647 Alexander, M. 857 Algaze, C.A. 808 Ali, A. A. 649 Ali, R. 719 Alipour, A. 649 Alkmim, M.B.M. 367 Allard, F. 134 Allen, M. 160 Allis, C.D. 282 Al-Mahrouqi, H. 808 Almeida, C. 156 Almoguera, B. 945 Almoosa, K.F. 159 Alnadi, N.A. 960 Alpert, S. 402 Alshurafa, N. 649 Alter, G. 768 Alterovitz, G. 873 Althoff, T. 645 Altman, R.B. 253, 280, 878, 883, 884 Altschul, S.F. 284 Altshuler, D.M. 878 Alverson, D.C. 683 Amalberti, R. 141 Amaral, L.A. 719 Amarasingham, R. 700, 972 Ambert, K. 382, 782 Ambite, J.L. 951 Amft, O. 649 Amini, F. 644 Ammenwerth, E. 372, 427, 972 Amodei, D. 258 Amour, J. 680 Anand, V. 477 Anandan, C. 427 Ananiadou, S. 255 Ananthanarayanan, V. 862 Anbäcken, E.-M. 165 Ancker, J. 123, 168, 974 Ancukiewicz, M. 207 Anderson, J. 127, 129, 130, 399, 429, 825 Anderson, K.M. 802 Anderson, N.R. 63 Anderson, S.A. 447
André, B. 329 Andreu-Perez, J. 444 Andrews, D. 785 Andrews, R.D. 714 Angrist, M. 894–896 Angus, D. 700 Anh, D. 703, 711, 714 Anis, A.H. 106 Annas, G.J. 898 Annett, J. 158 Antani, S. 767, 785 Anthony, K. 879 Antieau, L. 247 Anwar, S. 253 Appelboom, G. 372 Appleton, G. 762 Apweiler, R. 887 Araki, K. 379 Aral, S. 762 Araya-Guerra, R. 972 Arbo, M. 681 Ardern-Jones, M. 958 Ardolino, M. 974 Armstrong, L.L. 946, 947 Armstrong, R. 312 Arnold, E. 641 Arocha, J.F. 70, 72, 123, 134, 135, 139, 165, 833 Aronson, A. 244, 245, 264, 773, 886 Arora, N.K. 370 Arora, S. 683 Arsoniadis, E.G. 493 Arulmalar, S. 676 Arumugam, M. 281 Asch, S. 759, 975 Aschenbrener, C.A. 845 Ash, J. 122, 143, 156, 157, 380, 394, 401, 417, 585, 597, 599, 721, 804, 824, 825, 832, 833, 973 Ash, J.A. 832 Ashburner, J. 337 Ashburner, M. 290, 888 Ashley, E.A. 281, 895 Ashworth, M. 977 Asilkan, O. 780 Asnani, M.R. 367 Asselbergs, F.W. 953 Assendelft, W.J. 375 Assila, A. 166 Assimacopoulos, A. 681 Atienza, A. 650, 655 Atkinson, A.J. 873 Atkinson, R.C. 125 Atreya, R. 403 Atwood, A.K. 650, 652, 655 Audebert, H.J. 681 Auer, P.L. 878 Aung, M.H. 650 Aurora, N. 899 Austin, B.T. 365, 375 Austin, D. 381 Austin, J. 245 Auton, A. 944
Avanti, A. 812 Avery, A.J. 417 Avery, C.L. 951 Avillach, P. 915 Avni, Z. 328 Axton, M. 762 Aydin, C. 399 Aydin, C. E. 429 Aziz, T. 399 Azzopardi, L. 782
B Baars, M.J.H. 803 Babior, B.M. 279 Bach, P.B. 972 Bacino, C.A. 951 Bäcker, M. 995 Badawi, O. 723 Baer, R.M. 370 Bagchi, S. 786 Bagoutdinov, R. 925 Bahr, N.J. 881 Bai, C. 272 Bai, X. 649 Bai, Y. 649 Bailey, K.R. 157, 826 Bajcsy, R. 381, 382 Bajorath, J. 884 Baker, D.B. 372 Baker, D.W. 719 Baker, J. 316, 878 Bakken, S. 160, 223, 367, 575, 592, 807 Bakr, W. 641 Bakshi, V. 682 Balas, A. 514 Balas, E. 32, 33, 235 Balasubramanian, S. 809 Baldwin, E. 246 Baldwin, J. 276 Bales, E. 645 Balint, E. 365, 369 Ball, E.V. 878 Ball, J.R. 140 Ball, M. 527 Ball, T.B. 436, 438, 462 Ballard, A.J. 845 Ballard, R.M. 648 Balogh, E.P. 140 Band, G. 63 Bandrowski, A. 931 Bandura, A. 375 Barabasi, A.L. 889–891 Barbalic, M. 953 Barbarino, J.M. 253 Bard, J. 291 Bardon, C.G. 974 Barker, A. 378 Bar-Lev, S. 804, 833 Barnard, F. 302
Barnes, J.I. 109 Barnes, K.C. 878 Barnes, L.E. 651 Barnett, G. 37, 66, 471, 770, 804, 811 Barnett, P.G. 106 Barnett, W.K. 933, 936 Baron, R.J. 801 Barosi, G. 132, 139 Barr, R.M. 64 Barrell, D. 887 Barrett, T. 878 Barry, C. 312 Barry, M. 98, 369, 370, 974 Bartlett, J. 760 Barto, A.G. 656 Bartolomeo, P. 131 Basford, M. 948 Basford, M.A. 944–946, 951, 958 Bastarache, L. 889, 944, 945, 951, 953, 954 Batalas, N. 651 Bates, D. 187, 188, 380, 417, 429, 484, 501, 514, 564, 589, 803, 972–974, 976–979 Baudic, F. 772 Bauer, D. 479 Bauer, J.M. 995 Bauer-Mehren, A. 888 Baumann, B. 312 Bayir, H. 711 Bayley, L. 764 Bayoumi, A.M. 106, 108 Bayraktar, E. 166 Bazan, M. 282 Bazzi, R. 703, 711, 714 Beachkofsky, T. 958 Beal, A.C. 926, 927 Beale, R.J. 704 Beasley, J.W. 801 Beattie, J. 780 Beaudoin, M. 460 Beccaro, M.A.D. 973 Becher, D.A. 167 Bechhofer, S. 343 Bechtel, W. 125 Becich, M. 312, 926 Beck, B.F. 682 Beck, J.R. 108, 109 Beck, K. 189 Becker, J. 323 Becker, K.G. 878 Beckman, T.J. 853 Becnel, L.B. 215 Bedrick, S. 782 Beebe, C.E. 826 Beedle, M. 189 Beesley, J. 952 Begun, J.W. 20 Behmanesh, A. 644 Belenkaya, R. 930 Bell, C.M. 972 Bell, D.S. 263 Bell, E. 873
Bell, H. 167 Bellen, H.J. 951 Bello, J.A. 64 Ben Abacha, A. 247 Ben-Assuli, O. 167 Bender, D. 826, 930 Bengio, Y. 258 Benham-Hutchins, M. 915 Benitez, K. 402, 403 Bennett, A. 247 Bennett, C.L. 974 Bennett, G.G. 367 Bennett, H. 371 Bennett, J.M. 896 Bennett, N.R. 367 Bennett, T. 312 Benson, T.J. 222 Bentley, A.R. 956 Bentley, F. 654 Benyon, D. 158 Berbaum, K.S. 675 Berg, C.D. 443 Berg, M. 156, 973 Berg, R.L. 947, 948, 954 Bergan, E.S. 27, 208 Bergen, M.R. 106 Bergus, G. 760 Berkman, B.E. 960 Berkman, N.D. 374 Bernard, G. 954, 959 Berndt, S.I. 952 Berners-Lee, T. 758 Bernstam, E.V. 66, 933 Bernstein, E. 282 Bernstein, J.A. 951 Beroes, J.M. 372 Berrios, R.A.S. 719 Berry, W.R. 719 Berthelot, M. 444 Bertilsson, L. 956 Berwick, D.M. 365, 366, 368, 716, 976 Berwick, J.R. 641 Besançon, L. 644 Besio, W. 649 Besser, H. 787 Bewersdorf, J. 312 Bezerianos, A. 644 Bhatia, M. 899 Bhatt, D.L. 918, 919 Bhupatiraju, R. 781–782 Bibbins-Domingo, K. 115 Bichsel, J. 849 Bick, A.G. 952 Bickel, R. 590 Bickford, C. 223 Bidgood, W. 314 Bielinski, S.J. 945, 946, 948, 951 Biezunner, T. 282 Bigelow, J. 974 Bigham, A.W. 281 Billington, R.A. 675
Binda, J. 643 Binns, D. 887 Biondich, P.G. 477 Bird, A. 282 Bird, K.T. 670 Bird, S. 262 Birdsall, T. 90 Birney, E. 872 Birren, B. 276 Bisantz, A.M. 156 Bishop, M. 328, 334 Biswal, S. 310, 311 Bittorf, A. 313 Bjeletich, B. 683 Bjerke, T.M. 493 Björne, J. 254 Black, A.D. 427 Black, W. 809 Blackstone, E.H. 447 Blackstone, E.A. 551 Blake, B. 675 Blake, K.D. 374 Blandford, A. 156 Blandford, D. 156 Blascheck, T. 644 Blatt, M.A. 105, 106, 111 Blatter, B. 156, 166 Blau, H.M. 955 Blazing, M.A. 954 Blei, D.M. 250 Bleich, H. 471, 769, 809 Block, P. 670 Blois, M.S. 30, 31, 122, 123, 394 Blom, J.A. 820 Bloom, F. 323 Bloomrosen, M. 122, 143, 155, 167, 497, 804, 829, 973 Blosky, M.A. 715 Bloxham, A. 598 Blum, B.I. 66 Blum, J.M. 707 Blumenthal, D. 185, 405, 707, 801–803, 824, 935, 977 Bluml, S. 304 Blundell, J.E. 641 Blunt, D. 744 Bobe, J. 894, 895 Bober, R.M. 973 Bobrow, M. 893 Boddington, P. 877–878, 893 Bodenheimer, T. 365, 375 Bodenreider, O. 770, 820, 881, 886 Boehm-Davis, D.A. 168 Boergers, J. 657 Boerwinkle, E. 957 Bogun, F. 703, 711, 714 Bohlin, E. 995 Bohm, S.H. 714, 722 Bojanowski, P. 258 Bokun, T. 427 Bolival, B. Jr. 282 Bonacci, B.B. 960 Bonanno, J.B. 294
Bonato, P. 995 Bonazzi, V. 933 Bones, C.B. 711 Bonhomme, V. 704 Bonnardot, L. 688 Bonomi, A. 375 Boockvar, K.S. 168 Bookchin, B. 245 Booth, R. 598 Boren, S. 32, 33, 235, 514 Borgman, C. 759, 786 Boriello, G. 648, 659 Bork, P. 881, 884 Borriello, G. 648 Borycki, E.M. 142 Bosch, A. 329 Bossuyt, P.M. 93 Bostwick, S. 167, 712, 715 Bosworth, K. 370, 371 Bouchard, C. 958 Bouhaddou, O. 881 Bourisquot, B.C. 109 Bourne, P.E. 893, 933 Bousquet, C. 881 Bowden, M. 323 Bowen, M. 972, 973 Bower, A. 974 Bower, G.H. 125 Bowie, J. 37 Bowler, E. 950 Bowles, K.H. 931 Bowton, E. 889, 958 Boxwala, A.A. 826 Boxwalla, A. 832 Boyd, J. 167 Boyer, C. 248 Boyle, M.G. 650, 652, 655 Boynton, J. 779 Brach, C. 365 Bradford, Y. 945, 946 Bradley, V. 168 Bradley, W.G. 898 Bradlyn, A.S. 853, 854 Bradshaw, K.E. 696, 722 Brainin, E. 374 Bratan, T. 978 Braunstein, M. 209 Braunwald, E. 701 Brechner, R.J. 675 Brehmer, M. 644 Breitenstein, M.K. 915 Breizat, A.H. 719 Brender, J. 429 Brennan, P.F. 21, 63, 375, 410, 995 Brennecke, R. 750 Brenner, S.E. 378 Breslow, M. 681, 712, 724 Bresnick, G. 675 Brewster, D.H. 447 Bridewell, W. 246 Bright, T.J. 804, 878
Brightling, C.E. 915 Brilliant, M.H. 951 Brimmer, N. 243 Brin, S. 774 Brinkley, J. 322, 327 Brinkman, R. 872, 886, 931 Britt, H. 881 Britto, J.F. 442 Brochhausen, M. 931 Brodsky, B. 215 Brody, W. 407 Brooke, J. 166 Brooks, D.C. 849 Brooks, L.D. 944 Brotman, D. 974, 975 Brown, A. 375 Brown, B.W. Jr. 375 Brown, D. 282, 317 Brown, E. 786, 881, 931 Brown, J.S. 926, 927 Brown, L.T. 704 Brown, R. 284, 684, 715 Brown, S. 225, 470, 471 Brown, S.T. 106 Brown, T. 896 Brown-Connolly, N.E. 829 Browne, A. 248 Browner, W.S. 917, 918, 920 Brown-Gentry, K. 944, 945, 951 Brownstein, J. 496 Bruce, A. 451 Bruer, J.T. 127 Bruford, E.A. 879 Brunak, S. 884, 891 Brush, M.H. 931 Bruza, P. 784 Bryan, R. 333 Bryant, J. 672 Bryant, S.H. 881 Bu, D. 804 Bublitz, C. 972 Buchanan, A.V. 475 Buchanan, B.G. 246, 440, 448, 812–815 Buchanan, J. 482, 484 Buchanan, N.N. 455 Buchman, T. 469, 682, 703, 709 Buchmeier, N. 893 Buchoff, J. 879 Buckeridge, D. 978 Buckingham, K.J. 281 Buckley, C. 781 Budd, M. 722 Buermeyer, U. 995 Buetow, K.H. 879 Buettner, M. 649 Bug, B. 931 Bug, W. 291 Bugrim, A. 875 Bull, J. 845 Bunt, A. 644 Bunt, H. 248
Buntin, M. 501 Bura-Rivière, A. 803 Burchard, E. 878, 956 Burgdorf, K.S. 281 Burke, H.B. 167 Burke, J. 489 Burke, J.P. 723 Burke, L.E. 649 Burkhoff, D. 704 Burley, S. 294, 599 Burns, A.R. 560 Burnside, E. 316, 341 Burrows, D. 165 Burstin, H. 975 Burton, A.M. 134 Bury, J. 827 Burykin, A. 469, 703, 709 Burzykowski, T. 915 Busch, A.B. 978 Bush, A. 959 Bush, G.W. 568 Bush, W.S. 878 Butson, M. 896 Butte, A. 281, 869, 871, 877, 889, 955 Buxton, R. 307, 714 Buys, S. 443 Buyse, M. 915 Bycroft, C. 63 Byrne, E. 978 Byrnes, J.M. 672
C Cabrera Fernandez, D. 312 Cai, B. 895 Cai, T. 948, 949, 954 Cailliau, R. 758 Caldwell, M.D. 954 Calestani, M. 448 Califano, A. 871 Califf, R.M. 926, 927, 932 Callaghan, F.M. 167 Callier, S. 956 Cameron, D.W. 106 Cameron, J.D. 378 Cameron, T. 845 Cameron, W. 106 Camon, E. 887 Camp, C. 649 Campanella, P. 188 Campbell, D. 441, 455 Campbell, E. 585, 600, 707 Campbell, K.E. 222, 881 Campillos, M. 881, 884 Campion, T. 602 Candler, C. 855 Canese, K. 778 Cannon, C.P. 954 Canto, J.G. 701 Cao, Y. 247
Caputo, B. 329 Caputo, M. 680 Car, J. 427, 654 Carayon, P. 123, 140, 167, 168, 715 Carchidi, P.J. 974 Carcillo, J.A. 187, 711, 973 Card, S.K. 125, 162, 163 Carding-Edgren, S. 857 Carek, A.M. 647 Carey, M. 672 Carey, T. 158 Carleton, B. 958 Carlsen, B. 225 Carlson, R. 723 Carlson, S.A. 648 Carneiro, G. 320 Carpenter, A. 312 Carr, D. 374 Carrell, D. 947 Carroll, A.E. 477 Carroll, D. 597 Carroll, J.K. 379 Carroll, J.M. 129, 154 Carroll, P. 683 Carroll, R.J. 948, 949, 951, 954 Carter, J. 225 Carter, S. 651 Cartmill, R. 167, 168, 715 Casper, G. 375, 410 Cassell, E.J. 369 Castaneda, C. 560 Castillo, C. 762 Castro, D. 527 Cato, K. 167 Caulton, D.A. 165 Cava, A. 404, 410 Cavallari, L.H. 957 Cavalleranno, A.A. 675 Cavallerano, J.D. 675 Caviness, V. 323 Cech, T.R. 280 Cedar, H. 870 Celi, L.A. 712 Cerezo, M. 950 Ceusters, W. 291 Cha, S. 679 Chadowitz, C. 649 Chadwick, M. 27, 208 Chalfin, D.B. 700 Chalmers, M.C. 643 Chambliss, M. 760 Chambrin, M.C. 707 Chan, C.V. 157 Chan, G. 600 Chan, I.S. 803, 899 Chan, K.S. 975 Chan, R.P. 162 Chandler, P. 129 Chandra, S. 711–714, 719 Chang, B.J. 846 Chang, C.T. 639
Chang, J. 686 Chang, M.W. 258 Chang, Y. 744 Channin, D. 316, 317, 319 Chantratita, W. 956 Chao, J. 684 Chapanis, A. 139, 140 Chapman, W.W. 244, 246 Charen, T. 771 Charness, N. 128 Chase, W.G. 133 Chatila, R. 803 Chaudhry, B. 188, 477, 726, 824 Chaudhry, R. 679 Chaudhuri, S. 680 Chauvin, C. 141 Chbani, B. 680 Cheatle-Jarvela, A.M. 951 Check Hayden, E. 898 Chen, C. 896 Chen, E. 596 Chen, H.C. 649 Chen, H.M. 704 Chen, J. 33, 682, 684, 899 Chen, K. 258 Chen, M.Y. 651 Chen, Q. 363–384, 889 Chen, R. 281, 871, 889, 954 Chen, S. 427, 438, 461, 658 Chen, T. 704 Chen, W. 654, 995 Chen, X. 889 Chen, Y. 649, 873, 954 Cheng, K. 168 Cheng, W.-Y. 954 Cheng, Z.H. 639 Chenok, K.E. 975 Chernew, M.E. 976 Cheung, N. 471 Chew, E.Y. 950 Chewning, B. 370, 371 Chi, M.T.H. 133, 134 Chiang, A.P. 877, 889 Chiang, M.F. 63, 162, 667–689, 725 Chibnik, L. 945, 948 Chibucos, M.C. 931 Chichester, C. 248 Chih, M.Y. 650, 652, 655 Child, R. 258 Childs, B. 890, 891 Chin, T. 599 Chipman, S.F. 159 Chiu, K.M. 649 Chiu, M.C. 649 Cho, I. 223 Cho, S.J. 892 Choe, E.K. 637–661 Choi, H. 333 Choi, J. 645, 658 Choi, M. 893 Cholan, R.A. 975
1097 Name Index
Chopra, A. 520 Chou, E.Y. 726 Chou, T. 980 Chou, W.-Y.S. 374 Choudhary, V. 948 Choudhury, T. 648, 650 Chow, A. 106 Chowdry, A.B. 961 Christensen, C. 380, 571 Christensen, L. 246 Chu-Carroll, J. 786 Chueh, H.C. 926 Chunara, R. 684 Chung, C.F. 645 Chung, J. 310 Chung, M.H. 436, 438, 462 Chung, P. 158 Church, G. 893–895 Church, T. 443 Churchill, S. 926 Chused, A.E. 833 Chute, C.G. 63, 221, 246, 886, 944, 950, 951 Cimino, J.J. 156, 157, 160, 245, 247, 263, 316, 380, 582, 679, 760, 804, 807, 808 Cirulli, E.T. 874 Clancey, W.J. 127, 818 Clancy, C.M. 802 Clapp, M. 899 Clark, A.P. 704 Clark, C. 258 Clark, E.N. 703 Clark, R.S. 711 Clark, R.S.B. 973 Clarke, R. 980 Clarysse, P. 327 Classen, D. 417, 489, 501, 723 Clauser, B.E. 860 Claveau, V. 257 Clayton, E.W. 402, 403, 889 Clayton, P.D. 245, 710 Clemmensen, P. 703 Clemmer, T.P. 695–726 Clifford, G. 10, 703, 704, 709 Clifton, D. 995 Clough, S.S. 648 Clyman, S.G. 860 Clyne, B. 488, 501 Cochrane, G. 872 Cocos, C. 780 Cody, S. 682, 713, 714, 724 Coenen, A. 223 Coffey, R. 471 Cohen, A. 142, 154, 155, 157, 429, 454, 781–782 Cohen, D. 108 Cohen, H. 723 Cohen, J. 304, 957 Cohen, K. 785 Cohen, M. 154, 339 Cohen, P.R. 246 Cohen, S.J. 461 Cohen, T. 131, 139, 141, 156, 159, 719
Coiera, E. 156, 452, 784 Coki, O. 687, 725 Col, N.F. 974 Cole, S.W. 853, 854 Coleman, K. 365 Coleman, M.T. 375 Coleman, R.W. 820 Coletti, M. 769 Colevas, A. 931 Colgan, J. 645 Colin, N.V. 979 Collen, M. 26, 370, 471, 590 Collins, D. 333, 337 Collins, F. 206, 367, 715, 891, 894, 899, 928 Collins, L.M. 655 Collins, S. 167, 579, 588, 589, 598, 601, 603, 604 Colombet, I. 803 Comulada, W.S. 377 Conant, J. 647 Conlin, P.R. 675 Connelly, K.H. 641 Connolly, D.C. 704 Conroy, D.E. 377 Consolvo, S. 647, 648, 651 Conway, M. 246, 947 Conwell, Y. 684 Cook, D. 680, 763, 764 Cook, D.A. 853 Cook, J. 645 Cook, N.R. 876, 952 Cook, R.I. 139, 140 Cook-Deegan, R. 896 Cooke, N.J. 159 Cooper, D.N. 878 Cooper, G.F. 246 Cooper, G.M. 954 Cooper, M. 975 Coopersmith, C.M. 682 Coram, M. 955 Corcoran, P. 248 Coren, M. 442 Corina, D. 304 Corley, S. 167, 210 Cornelius, P.J. 719 Cornet, R. 64 Corrado, G. 258 Corrigan, J. 140, 711, 725, 801 Cote, R.A. 221, 222, 881 Cou Serhal, C. 703, 711, 714 Coulet, A. 883 Counts, S. 246 Couture, B. 167 Covell, D. 489, 760, 801 Cowie, C.C. 675 Cox, A. 887 Cox, J. Jr. 702 Cox, N.J. 878 Crabtree, M. 783, 784, 787 Craig, D.W. 893 Craig, S.D. 860 Crandall, E.D. 700
Crathorne, L. 447 Crawford, D.C. 944–946, 950 Crawford, T. 744 Creinin, M.D. 378 Cresswell, K. 427, 976, 977 Creswell, J. 518 Crick, F. 870 Critchfield, A.B. 678 Croghan, I. 679 Croghan, S.M. 683 Cronenwett, L. 592 Cronin, R.M. 363–384, 948 Crossley, G.H. 703 Crosslin, D.R. 944, 945, 950 Crossno, P.F. 725 Crotty, K. 374 Crowell, J. 248 Crowgey, E.L. 915 Crowley, W.F. Jr. 918 Cryan, J.F. 899 Cuadros, J. 675 Cullen, D.J. 974 Cullen, T.A. 167, 210 Culver, E. 367 Cummings, S.R. 917, 918, 920 Cummins, M. 862 Cundick, R.M. Jr. 704 Cupples, L.A. 896 Curfman, G. 761 Curran, A. 684 Curran, W. 413 Currie, L.M. 160 Curry, S.J. 115 Curtis, L.H. 926, 927 Curzon, P. 156 Cusack, C.M. 497, 974 Cushing, H. 699 Cushman, R. 410 Cusick, M.E. 890, 891 Cyrulik, A. 723 Czaja, S.J. 128
D Dachselt, R. 644 D’Agostino, M. 367 D’Agostino, R.B. Jr. 876 Dahlstrom, E. 849 Dai, H. 288, 684 Dai, J. 649 Dai, Q. 889 Daillere, R. 899 Dal Pan, G.J. 447 Daladier, A. 248 Dalai, V.V. 166 Dale, A. 333 Daley, G.Q. 408 Daley, K.A. 16 Dalrymple, P.W. 726 Dalto, J.S. 706, 713, 715 Dameron, O. 345
Damiani, G. 488 Damush, T. 496 Danahey, K. 960 Daneshjou, R. 956 Dang, P. 749 Daniel, J.G. 377 Daniel, M. 517, 975 Daniels, C.E. 711, 721 Daniels, L. 477 Dantzig, G.B. 370 Darmoni, S. 772 Darzi, A. 251 Das, A.K. 116, 804, 821, 827 Das, S. 958 Daskalova, N. 657 Datta, R. 339 Datta, S. 264 Davatzikos, C. 333 Davey Smith, D. 447 Davey Smith, G. 944 Davidson, K.W. 115 Davidson, M. 371 Davies, A.R. 369 Davies, K. 280 Davies, N.M. 944 Davis, A. 862 Davis, C. 375 Davis, J. 782, 981 Davis, K.K. 645 Davis, K.L. 109 Davis, R. 723, 833 Davis, S.E. 367–384 Davis, T. 159 Davison, B. 762 Dawadi, P.N. 680 Dawant, B.M. 725 Day, F.R. 951 Day, M. 879 Day, R.O. 368 Day, T. 370, 371 Dayhoff, M.O. 283 De Choudhury, M.D. 246 de Dombal, F.T. 393, 810 De Jong, M. 165 de Keizer, N. 64 De La Torre-Díez, I. 642 de Oliveira, K.M. 166 De Palo, D. 157 de Silva, C. 649 de Vries, J. 877–878, 893 Dean, J. 258 DeAngelis, C. 768 DeBakey, M. 758 Debevc, M. 166 deBliek, R. 782 Decker, B. 956 Decker, B.S. 958 Decker, S.L. 801 DeClerq, P.A. 820 DeCristofaro, A. 759, 975 Deeks, J.J. 93
Deerwester, S. 250 Deflandre, E. 704 DeGroot, A.T. 133 Del Fiol, G. 490, 808, 825 Delaney, J.T. 957, 958 Deléger, L. 248, 257 delFiol, G. 760 Deligianni, F. 444 Dellinger, E.P. 719 DeLong, E.R. 207 Delphin, E. 371 Delvaux, N. 488, 501 Demaerschalk, B.M. 681 Demiris, G. 366, 372, 375, 442, 458, 667–689 Demner-Fushman, D. 244, 245, 247, 261, 266, 267, 767, 773, 782, 785, 886 Dennis, J. 957 Dennis, M.L. 652 Denny, J.C. 248, 252, 889, 945–948, 950, 951, 954, 959 Dent, K.M. 281 Dente, M.A. 155, 167 Denton, C.A. 164, 169 Dentzer, S. 365, 376 DeRisi, J.L. 276 Dermatis, H. 371 DeSalvo, K. 185, 194 Deselaers, R. 329, 339 Deserno, T. 339 DesRoches, C.M. 707, 801 Desta, Z. 959 Detmer, D. 526 Detsky, A.S. 85 Devaney, S. 995 Devarakonda, S. 650 Devereaux, P. 764 Devine, E.B. 518 Devlin, J. 258 Dewan, A. 950 Dewey, C. 995 Dewey, F.E. 283, 895 Dexter, P. 488, 499 Dey, A.K. 639, 651 Dhurjati, R. 804 Diakopoulos, N. 154 Diamant, J. 657 Diamond, G. 759–760 DiCenso, A. 764 Dick, R. 5, 7 Dickersin, K. 372 Dickinson, L.M. 972 Dickson, S.P. 892 Diener-West, M. 972 Diepgen, T. 313, 762 Die-Trill, M. 371 Dillon, G.F. 860 Dimitrov, D.V. 378 Dimsdale, J.E. 372 Ding, K. 945, 950 Ding, X.R. 995 Dingbaum, A. 375 Dinwiddie, D.L. 960
Dion, D. 683 D'Itri, V.S. 926 Divakar, U. 858 Dixon, A. 369 Dixon, B.E. 33, 499, 830, 832 Dligach, D. 954 Do, C.B. 961 Do, N. 881 Do, R. 954 Doan, S. 889 Dodd, B. 527 Dodd, N. 672 Dodge, J. 845 Doebbeling, B. 169 Dogan, R.I. 886 Doig, A.K. 706 Doing-Harris, K. 595 Dolan, M.E. 878 Doležal, J. 680 Dolin, R. 595, 832 Dolinski, K. 887 Domrachev, M. 878 Donahue, K.E. 374 Donaldson, L. 251 Donaldson, M. 140, 711, 725, 801 Donath, J. 684 Donelan, K. 707 Dong, E. 994 Dong, Y. 721, 725 Doniz, K. 122, 142, 154 Donley, G. 965 Donovan, T. 341 Donovan, W.D. 64 Doolan, D. 591, 599, 977 Dooley, K. 20 Dorman, T. 681, 724 Dorr, D.A. 975 Dorsey, J.L. 471 D'Orsi, C. 317 Doshi-Velez, F. 243, 955 Douyere, M. 772 Dowdy, D. 914–915, 918, 920, 924 Dowling, C. 764 Downes, M.J. 672 Downs, S. 375, 410, 477 Doyle, D.J. 118, 142, 154 Doyle, J. 978 Draghici, S. 887 Drake, I. 952 Draper, S.W. 143 Drazen, J. 761, 768 Drenos, F. 953 Dressler, D.D. 641 Drew, B.J. 700, 703 Drew, J.A. 165 Drews, F.A. 157, 706, 710, 713, 715, 722 Dreyer, K. 749 Drouin, K. 979 Drucker, E.A. 168 Drury, H. 323 Du, H. 994
Du, J. 262 Du, X. 899 Du, Y.E. 687, 725 Duan, C. 378 Duan, S. 878 Dublin, S. 246 Duch, W. 129 Duda, R. 334, 394 Dudek, S. 945, 951 Dudley, J.T. 378, 877 Dugan, T.M. 477 Dugas-Phocion, G. 335 Duggan, D. 893 Duggan, M. 760, 780 Duke, J.D. 499, 825, 927 Dulbandzhyan, R. 471 Dumais, S.T. 250 Dumont, E.P. 372 Dumontier, M. 762 DuMouchel, W. 245 Duncan, B. 169 Duncan, K.D. 158 Duncan, R. 471 Dundas, D.D. 675 Dunham, I. 893 Dunn, A.G. 368 Dunn, M. 933, 973 Dunnenberger, H.M. 956 Dupuits, F.M. 198 Duran-Nelson, A. 780 Durbin, R.M. 878, 945 Durfy, S.J. 278 DuVall, S.L. 253 Dy, S.M. 979 Dykes, P. 167, 575, 596–598, 604 Dyrbye, L.N. 658, 801, 973 Dyson, E. 527 Dziadzko, M.A. 711, 721
E Eadie, M. 648 Eadon, M.T. 959 Earle, K. 493 East, T.D. 706, 710, 713–715, 722 Ebell, M. 115, 760 Eberhardt, J. 782 Ebner, S. 156 Eckert, G.J. 455 Eckert, S.L. 896 Eddy, C. 803 Eddy, D.M. 115 Eddy, S.L. 844 Edelman, L. 680 Eden, K.B. 974 Edgar, E. 475 Edgar, R. 878 Edwards, A. 974 Edwards, B. 577 Edwards, P. 642 Edworthy, J. 707
Efthimiadis, E.N. 167 Egan, D. 782 Egbert, L. 706, 713, 715 Eichstaedt, J.C. 246 Eisenberg, M.A. 973 Ek, A.-C. 165 Ekins, S. 875 Elhadad, N. 248, 785 Elia, M. 641 Elias, J.A. 892 Elias, S. 898 Elkin, P.L. 157 Elledge, S.J. 276 Elliott, L.T. 63 Ellis, K. 651 Elmqvist, N. 154 Elphick, H.E. 953 Elstein, K.A. 36, 67, 68, 70, 72, 127, 137, 139 Ely, J. 760 Emani, S. 976 Embi, P. 167, 802, 914, 915, 933, 936 Emdin, C.A. 952 Eminovic, N. 429 Engel, G.L. 369, 803 Englander, R. 845 Englesakis, M. 371 Ennett, S. 367 Enthoven, A.C. 368 Epstein, D.A. 654, 657 Epstein, R.M. 365, 372, 859 Er, A. 780 Erickson, B. 734–753 Ericsson, K.A. 126, 133, 134, 165 Erlbaum, M.S. 225 Erlich, Y. 63 Ernest, J., III 374 Ertin, E. 647 Esko, T. 952 Espadas, D. 159 Espino, J.U. 246 Estambale, B. 436, 438, 462 Estes, W.K. 125 Esteva, A. 960 Estrin, D. 377 Etchells, E. 972 Etkin, J. 659 Etzioni, R. 115 Evans, B. 651, 657 Evans, D.A. 132, 139, 263 Evans, E. 760 Evans, J. 930 Evans, K. 878 Evans, M.A. 372 Evans, R.S. 188, 288, 488, 489, 706, 710, 713–715, 721–723, 725 Evans, W.E. 63 Everton, A. 380 Every, N. 701 Eyler, A.E. 946, 948, 955 Eysenbach, G. 264, 313, 371, 375, 379, 762 Ezzedine, H. 166
F Fabregat, A. 875 Fabry, P. 248 Fafchamps, D. 479, 486, 801, 972 Fagan, L.M. 641, 815 Fairhurst-Hunter, Z. 955 Fairon, C. 888 Fallacara, L. 188 Fan, J. 786 Fan, S. 899 Fanale, C.V. 681 Fang, S.H. 649 Fang, Z. 899 Farach, F.J. 914, 924 Farcas, C. 873 Fargher, E.A. 803 Farhat, J. 782 Farr, M.J. 132 Farrer, L.A. 896 Farzanfar, R. 641 Fawcett, T. 376 Faxvaag, A. 979 Feblowitz, J. 721 Feeny, D. 106 Fehske, A. 374 Fei-Fei, L. 329 Feijóo, C. 1001 Feinstein, A.R. 91 Feldman, L.S. 974 Fellencer, C.A. 801 Feltovich, P. 135, 137 Feng, X. 163, 168 Feng, Z. 647 Fennie, K.P. 703 Fenton, S. 645, 785 Feolo, M. 925 Ferguson, T. 371, 763 Fernández, X. 767 Fernstrom, J.D. 649 Ferranti, J.M. 231, 476 Ferreira, D. 639, 651 Ferris, T.A. 888 Ferris, T.G. 707 Ferrucci, D. 786 Fettweis, G. 374 Feuer, E.J. 115 Feupe, S.F. 873 Feustel, P.J. 476 Fiala, J. 337 Ficarra, V. 683 Field, J.R. 958 Fields, L. 156 Fields, R.E. 142 Figurska, M. 312 Fiks, A. 604 Fiksdal, A. 780, 785 File, D. 782 Finger, P. 312 Finkelstein, S. 442, 458 Finnell, J.T. 33
Fiol, G.D. 193 Fiorini, N. 778 Firth, J.R. 258 Fischer, B.A. 291 Fischer, S.H. 981 Fisher, A. 165, 378 Fisher, E. 684, 974 Fisk, A.D. 128 Fiszman, M. 245, 248, 785 Fiszman, P. 246 Fitts, P.M. 163 Fitzgerald, D. 782 Fitzmaurice, M. 471 Fitzpatrick, R. 455 Flach, P.A. 945 Flatley, M. 809 Flatt, M.A. 377 Fleischmann, W. 887 Fletcher, J. 825 Fleurence, R.L. 926 Flexner, A. 471, 803 Flin, R. 139 Flint, V.B. 722 Flisher, A.J. 641 Flocke, S. 475 Flowers, C. 106 Flynn, J.T. 676, 687, 725 Fogarty, J. 653, 654, 657 Fontelo, P. 167 Fonteyn, M. 165 Ford, E.W. 977 Fore, I. 768 Forman, H.P. 377 Forster, A.J. 972 Forsythe, D. 440, 448 Fortina, P. 960 Fortis, S. 682 Fougerousse, F. 323 Foulois, B. 710 Foulonneau, M. 758 Fowler, F.J. 369, 370, 373 Fowler, L.P. 165 Fowler, S.E. 112 Fowles, J.B. 975 Fox, C. 774 Fox, E. 776 Fox, J. 427 Fox, P. 304 Fox, S. 759, 760, 780 Foy, R. 979 Fracalanza, S. 683 Frackowiak, R. 304 Frakes, W. 776 Francis, D.K. 367 Frank, M. 761 Frank Chang, F. 167 Franken, E.A. 675 Franklin, A. 159 Franklin, P.D. 975 Frankovich, J. 833 Frantz, G.L. 157
Franz, C. 805 Frase, A. 944 Fred, M. 167, 198, 712, 715 Frederick, P.D. 701 Frederick, P.R. 246 Frederiksen, C.H. 130 Frederiksen, J.R. 131 Free, C. 642 Freeman, A. 862 Freeman, C. 63 Freeman, S. 844 Freigoun, M.T. 656, 657 Freton, A. 312 Fridsma, D. 784, 930 Fried, T.R. 488 Friedberg, M.W. 972, 973 Friedlin, F.J. 262 Friedman, C.P. 244, 245, 248, 259, 394, 425–455, 565, 593, 599, 782–785, 935 Friedman, D. 873 Friedman, J. 288, 289 Friefeld, O. 335 Fries, J. 485 Frikke-Schmidt, R. 953 Frisse, M.E. 377, 493 Friston, K. 304, 333, 337 Fritz, T. 643 Frizelle, F. 808 Froehlich, J. 651 Froomkin, M. A. 410 Frost, J.H. 379 Frueh, F.W. 899 Frühauf, J. 166 Fry, E. 479 Fuchs, V.R. 975 Fuhr, J.P. Jr. 551 Fullerton, S.M. 956 Fulton, J.E. 648 Fultz-Hollis, K. 936, 943 Fung, K. 477 Funk, M. 700, 703, 772 Funk, R.R. 652 Funkesson, K.H. 165 Furnas, G.W. 250 Furniss, D. 156 Furniss, S. 169
G Gaba, D.M. 855 Gabriel, D. 873 Gabril, M. 312 Gadd, C.S. 132, 139 Gainer, V. 926, 939, 945, 954 Gaisford, W.D. 704 Gajic, A.A. 712, 714 Gajic, O. 711–714, 718, 719, 721, 725 Gajos, K.Z. 469, 653 Gallagher, L. 185 Gallego, B. 368
Galluzzi, L. 899 Galt, J. 784 Galuska, D.A. 648 Gamazon, E. 878, 957 Gambhir, S. 310–311 Gandhi, T.K. 167, 210 Ganesan, D. 649 Ganiats, T.G. 106 Ganiz, M.C. 268 Gao, Y. 647, 818, 819 Garber, A.M. 93 Garber, M.E. 889 Gardner, H. 125 Gardner, L. 994 Gardner, M. 258 Gardner, R.M. 399, 403, 415, 416, 475, 489, 499, 696, 701, 703, 704, 706, 710, 713–715, 721–723, 726 Garg, A. 488, 784 Garrison, E.P. 945 Garten, Y. 883 Garvican, L. 428 Gaschnig, J. 429 Gaskin, D.J. 972 Gasser, U. 400, 713 Gatsonis, C. 93 Gavin, A.C. 884 Gawande, A. 558, 719, 803, 808 Gaye, O. 762 Ge, D. 892, 895 Ge, Y. 954 Gehring, D. 750 Geissbuhler, A. 248, 413 Gelman, R. 687 Gelmann, E. 443 Gelmon, L.J. 436, 438, 462 Genel, M. 918 Gentile, C. 272 George, J. 305 George, P.P. 888 Geppert, C.M. 683 Gerber, A. 764 Gerber, M.S. 651 Gernsheimer, T. 648, 659 Gerstner, E. 308 Ghassemi, M. 243 Ghazvinian, A. 884 Ghazvininejad, M. 258 Giannangelo, K. 645 Gibbons, M.C. 372 Gibbs, A. 165 Gibbs, H. 378 Gibson, K. 283 Gibson, R. 394 Gibson, W. 700 Gideon, J. 653 Giger, M. 312 Gignoux, C. 878 Gil, L. 949 Gilbert, M. 374 Giles, J. 767 Gillan, D.J. 122, 128
Gillis, A.E. 683 Gilman, M.W. 115 Gilmour, L. 655 Gilson, M.K. 881 Ginsburg, G.S. 803, 899 Ginter, F. 254 Giordano, R. 645, 658 Giri, J. 721 Girosi, F. 974 Gish, W. 284 Gishen, P. 744 Gittelsohn, A. 369, 370 Giugliano, R.P. 953, 956 Giuliano, K. 702–704 Giuse, D. 471, 954 Gladding, S. 780 Glanville, J. 779 Glaser, J. 404, 803 Glaser, R. 133–135 Glass, L. 684 Glickman, M.L. 215 Glicksberg, B.S. 954 Globerson, T. 144, 145 Glowinski, A.J. 223 Goddard, K. 459 Goel, M. 648, 659 Goelz, E. 994 Goh, K.I. 890, 891 Gold, J. 527 Goldberg, A.D. 282 Goldberg, H.S. 830, 832 Goldberg, S. 317 Goldhaber-Fiebert, J.D. 109 Golding, S.G. 898 Goldmann, D. 711, 975 Goldstein, D.B. 874, 892 Goldstein, I. 267 Goldstein, M. 532, 820 Goldzweig, C. 372, 558 Golomb, B.A. 372 Gombas, P. 312 Gomez, L. 782 Gondek, D. 786 Gong, M.N. 711, 721 Gonzalez, R. 324, 853 Good, W. 744 Goodloe, R. 945 Goodman, K. 391–420, 894 Goodman, L.R. 112 Goodwin, J. 577 Goodwin, R.M. 167 Gorbanev, I. 853 Gorden, S. 763 Gordon, A.S. 944 Gordon, C.J. 860 Gordon, C.M. 381 Gordon, P. 933 Gordon, S.M. 879 Gorges, M. 704, 707–708 Gorman, P. 148, 155–157, 373, 759, 760 Gorry, G.A. 67, 68, 811
Goryachev, S. 947, 948, 953 Gosling, A. 784 Gottesman, O. 947, 953 Gottfried, M. 248 Gottipati, D. 166 Gottlieb, L.M. 558 Gottlieb, S. 979 Gottschalk, A. 112 Gottschalk, H.W. 216 Gould, J. 812, 813 Goulding, M. 246 Gouveia, R. 644 Goyal, N. 258 Grabenkort, W.R. 682 Grad, R. 784 Grady, D.G. 917, 918, 920 Graesser, A.C. 860 Graham, G. 125 Grando, A. 169, 873 Grando, M. 169 Grandt, D. 977 Grant, R. 596 Grave, E. 258 Gray, E. 447 Gray, G.W. 447 Gray, W.D. 168 Greaves, F. 251 Greco, F. 527 Green, B. 493 Green, E.D. 933, 949 Green, R. 719, 896 Green, T. 282 Greenberg, M.D. 980 Greene, J. 369 Greenes, R. 6, 20, 37, 739, 803, 806, 829 Greenhalgh, T. 764, 978 Greenland, S. 441 Greeno, J.G. 125–127 Greenspan, H. 297–347 Greenway, L. 706, 710, 713, 715, 722 Gregg, R.E. 702, 703 Gregg, S.R. 682 Gregory, S. 874 Grethe, J. 768 Griffin, D. 930 Griffin, S.C. 106 Griffin, S.J. 375 Grimm, R.H. 809 Grishman, R. 245 Griswold, W.G. 655 Grobe, S. 165, 580, 582 Groen, G.J. 36, 130, 133–135, 137–139, 165 Groopman, J. 417 Gross, J.B. 647 Gross, T. 447 Grossman, D.C. 115 Grossman, J. 571 Grossman, J.H. 380, 471 Grosz, B. 261 Groves, R.H. Jr. 700 Gruber-Baldini, A. 648
Grumbach, K. 365, 375 Gu, W. 287 Gu, Z. 932 Guerin, P. 762 Guestrin, C. 812 Guglin, M.E. 703 Gulacti, U. 493 Gulbahce, N. 799 Gulshan, V. 955 Guo, J. 377 Guo, X. 658 Gupta, A. 721 Gupta, S. 649 Gur, D. 744, 745 Gurses, A.P. 140, 717 Gurwitz, D. 899 Gusfield, D. 284 Gustafson, D.H. 370, 371, 650, 652, 655 Gustafsson, S. 951 Gutnik, L.A. 131 Gutschow, M.V. 282 Guyatt, G. 763, 764 Guyer, M. 933, 950
H Haasova, M. 447 Habbema, J.D.F. 115 Habibi, M. 253 Habyarimana, J. 436, 438, 462 Hackbarth, A.D. 705 Hackenberg, M. 709 Haddad, P. 562 Haddow, G. 451 Hadley, D. 883 Haefeli, W.E. 488, 501 Haehnel, D. 648 Haendel, M.A. 63 Hagen, P.T. 477 Hagerty, C.G. 116 Haggstrom, D.A. 372 Hagler, S. 381, 382 Hahn, C. 639 Hahn, U. 261 Haines, A. 455 Haislmeier, E. 527 Hakenberg, J. 253 Halamka, J. 395, 418, 471, 973 Hales, C.A. 112 Halford, S. 448 Hall, J.A. 369 Hall, M. 165 Hall, P. 950 Hall Giesinger, C. 862 Hall P.S. 447 Halpern, D.J. 374 Hamed, P. 976 Hammond, K.W. 167 Hammond, W.E. 231, 471, 476 Han, Y.Y. 187, 711, 973
Hanauer, D.A. 21 Hanbury, A. 785 Hanbury, P. 246 Hangiandreou, N. 743 Hannay, T. 879 Hans, P. 704 Hansen, L. 304 Hansen, N. 726, 884 Hanson, G.J. 679 Haque, S.N. 21 Haralick, R. 328, 330, 332, 333 Harkness, H. 122, 142, 154 Harlan, W.R. 809 Harle, C. 762, 971, 972 Harman, D. 781 Harman, S. 812 Harney, A. 310 Harpaz, R. 888 Harrington, L. 122, 167, 210 Harrington, R.A. 418, 492, 833 Harris, K. 337 Harris, L. 379 Harris, L.E. 455 Harris, M.I. 675 Harris, M.A. 225 Harris, R. 109 Harris, S. 248 Harris, T. 248 Harris, Z. 248 Harrison, A.M. 719 Harrison, B. 648, 651 Harrison, L. 379 Harrison, M. 146, 585 Harrison, M.I. 804, 833 Harris-Salamone, K. 394, 599 Harry, E.D. 973 Harshman, R. 250 Hart, A.A.M. 288 Hart, J. 744 Hart, S. 166 Harter, S. 782 Hartman, S. 950 Hartzband, P. 417 Hasan, K. 308, 644 Hasan, O. 658 Hashmat, B. 155, 167 Hassan, E. 712, 723 Hasselberg, M. 684 Hassenzahl, M. 644 Hassol, A. 596 Hastak, S. 215, 930 Hastie, T. 9, 256, 288, 289, 805, 810, 811 Hastings, E. 950 Hatfield, L.A. 976 Hatt, W.J. 382 Haug, C. 761, 768 Haug, C.J. 980 Haug, P. 246 Haun, J. 378 Haun, K. 527 Hauser, D. 974
Haux, R. 598 Hawkins, D. 648, 659 Hawkins, N. 877–878, 893 Hawkins, R. 370, 371 Hawley, W.H. 710, 713, 715 Hawley, W.L. 706, 713, 722 Hayden, A.C. 221 Hayden, J.K. 857 Hayden, M. 878 Hayes, M.G. 944–947, 950 Hayes, R.B. 443 Hayes, T.L. 381, 649 Haynes, A.B. 719 Haynes, C. 950 Haynes, R. 188, 488, 764, 779, 782, 804 Hazlehurst, B. 146, 155, 156 Hazlett, D.E. 379 HealthIT. 977, 978 Hearst, M. 781–782 Heath, B. 680 Heaton, P.S. 971 Hebbring, S. 951, 953 Heckerman, D. 813, 818 Heddle, N.M. 641 Hedeen, A.N. 167 Heeney, C. 877–878, 893 Heer, J. 651 Heiat, A. 33 Heikamp, K. 884 Heinzen, E.L. 892 Heiss, W. 304 Hekler, E.B. 650, 656, 657 Helbling, D. 960 Held, K. 335 Heldt, T. 703, 704, 709 Helfand, M. 759, 760 Helfendbein, E.D. 702, 703 Hellier, E. 707 Hellmich, M. 93 Helsel, D. 645 Helton, E. 215 Heltshe, S. 648, 659 Hemigway, B. 648 Hemmen, T. 681 Henderson, J. 115, 369, 370 Henderson, L.E. 641 Hendrick, H.W. 156 Henley, W.E. 447 Henneman, L. 803 Henriksen, K. 139 Heo, E. 645, 658 Herasevich, V. 695–726 Herbosa, T. 719 Herman, W.H. 675 Herold, C.J. 898 Heron, K.E. 652 Herrera, L. 899 Hersh, J.J. 263 Hersh, W. 327, 339, 373, 521, 757, 760, 764, 780–785, 787
Heselmans, A. 488, 501 Hess, D.C. 681 Hess, O. 750 Hettinger, A.Z. 156, 168 Hewing, N.J. 162 Heywood, S. 878 Hibbard, J.H. 369, 376 Hick, W.E. 164 Hickam, D. 780, 782–784, 787, 835 Hickey, B. 945, 948 Hickman, J.A. 957 Hicks, F. 165 Hicks, J. 759, 975 Hicks, N. 460 Hicks, P. 370 Hiddleson, C.A. 682 Higgins, M. 105, 106, 111, 598 Hilgard, E.R. 125 Hillestad, R. 516, 974, 980, 981 Hilliman, C. 157, 161, 162 Hillman, C. 157, 161, 162 Himmelstein, D. 560 Hindmarsh, M. 375 Hinds, D.A. 961 Hingle, S. 995 Hinshaw, K. 333 Hinton, G. 444 Hintze, B.J. 244 Hirsch, J.A. 64 Hirsch, J.S. 484 Hiruki, T. 36 Hitchcock, K. 378 Hlatky, M.A. 109, 116 Ho, W.T. 953 Ho, Y.-L. 948 Hoang, A. 167 Hoath, J.I. 803 Hobbs, H.H. 957 Hobden, B. 672 Hock, K. 675 Hoehe, M.R. 894, 895 Hoerbst, A. 372 Hoey, J. 768 Hofer, C. 649 Hoffman, B.B. 820 Hoffman, C. 375 Hoffman, J. 310–311 Hoffman, M.A. 803, 873, 899, 915 Hoffman, R.R. 133, 134 Hofmann, T. 250 Hofmann-Wellenhof, R. 166 Hogshire, L. 784 Hohne, K. 323 Holcomb, B.W. Jr. 700 Holden, R.J. 140 Hole, W.T. 225 Holland, J.C. 371 Holland, S. 158 Hollandsworth, H.M. 33 Hollingsworth, J. 475 Holm, K. 165
Holman, H. 373, 375 Holmes, M.V. 953, 954 Holmgren, A.J. 973 Holodniy, M. 93, 106 Holohan, M.K. 894 Holroyd-Leduc, J. 394 Holton, S. 378 Holve, E. 975 Holzinger, A. 166 Hom, J. 809 Homer, N. 893 Homer, R.J. 892 Homerova, D. 878 Honarpour, N. 957 Honerlaw, J. 948 Hong, J.H. 639 Hong, M. 476 Hong, N. 826 Hong, Y.R. 995 Hood, L. 872, 1004 Hoogi, A. 303–347 Hook, J.M. 974 Hoonakker, P. 140, 167, 168, 715 Hooper, F.J. 675 Hopkins, A. 680 Horii, S. 306, 314 Horrocks, J. 343, 810 Horsky, J. 142, 146, 155, 157, 168 Horton, R. 768 Horvath, J.A. 134 Horvath, M. 915 Horvitz, E. 246, 645, 818 Hossain, S.M. 649 House, E. 439, 440 Hoving, C. 379 Hovsepian, K. 647, 649 Howe, S. 725 Howell, B. 995 Howells, K. 878 Howick, J. 764 Howie, L.J. 675 Howley, M.J. 726 Howrylak, J.A. 889 Hricak, H. 898 Hripcsak, G. 167, 187, 198, 244, 245, 259, 367, 467–501, 601, 679, 712, 715, 825, 927 Hsi-Yang Fritz, M. 872 Hu, J.X. 891 Hu, X. Hu, Y. 447 Hu, Z. 312 Huang, E.M. 643 Huang, G. 855 Huang, J. 657 Huang, R.S. 878 Huang, Y. 651 Hudgings, C. 580, 582 Hudson, D. 339 Hudson, H.E. 670 Hudson, K.L. 206, 894 Huerta, M. 302
Huerta, T.R. 367 Huff, S. 246, 263 Hufford, M.R. 650 Hughes, D. 641 Hughey, J.J. 953 Hui, S. 461, 952 Hujcs, M.T. 714 Hull, S.C. 960 Hulley, S.B. 917, 918, 920 Hulot, J.S. 958 Humphreys, B. 66, 225, 246, 757, 770, 786 Hundt, A.S. 142, 167, 168, 715 Hung, A. 244 Hunt, D. 188, 488 Hunt, J.M. 649 Hunter, L. 871 Hunter, N.L. 447 Hupp, J.A. 804 Hurley, A. 588, 597 Huser, V. 927 Hüske-Kraus, D. 248 Hussain, M. 878 Hutchins, E. 143, 146, 156 Hwang, J. 380, 572 Hwang, S.Y. 276 Hwang, T.S. 51 Hyde, C. 447 Hyman, R. 164 Hysen, E. 649, 653 Hysong, S.J. 159
I Ibrahim, S. 379 Ide, N.C. 932 Ienca, M. 401 Iftode, L. 650 Ikari, K. 952 Imai, K. 961 Imhoff, M. 707 Imran, T.F. 948 Inan, O.T. 647 Ingbar, D. 700 Instone, K. 782 Intille, S.S. 641, 655 Ioannidis, J. 764 Ip, H. 320 Ip, I. 209 Irani, P. 644 Irwin, A. 379 Isaac, T. 723, 833 Isaacs, J.N. 378 Isenberg, P. 644 Ishizaki, T. 956 Isla, R. 122, 142, 154 Issell, B. 657 Ivers, M. 590 Iyer, S. 888 Iyyer, M. 258
J Jack, W. 436, 438, 462 Jackson, G.P. 363–384 Jackson, G.T. 860 Jackson, J. 596 Jackson, M.L. 246 Jacobs, J.M. 282 Jacobs, S. 154 Jacobsen, P.B. 371 Jacobsen, S. 98 Jadad, A. 762 Jadhav, A. 780, 785 Jaffe, C. 215 Jahn, B. 108 Jain, A. 818, 819 Jain, P. 954 Jain, R. 995 Jain, S. 671 Jakicic, J.M. 645 James, A. 762 James, B. 560, 599 James, G. 9, 256, 805, 810, 811 Jamoom, E.W. 801 Janamanchi, B. 479 Jang, Y. 978 Janneck, C.D. 268 Jarabek, B. 493 Jarvelin, K. 781 Jarvik, G.P. 63 Jaspan, H.B. 641 Jaspers, M. 154, 487, 600 Jaulent, M.C. 881 Jaworski, M.A. 801 Jay, S.J. 429 Jayakody, A. 672 Jeffries, H.E. 973 Jeffries, P.R. 855, 857 Jena, A.B. 978 Jenders, R. 198, 785 Jensen, L.J. 881, 884 Jensen, M.K. 953 Jeon, J.H. 651 Jerome, R.N. 954 Jha, A. 63, 201, 380, 707, 804, 977 Ji, N. 995 Ji, S. 645, 658 Ji, Z. 262 Jia, W. 649 Jiang, G. 825, 827 Jiang, M. 246, 889, 954 Jiang, Y. 329 Jimison, H. 373, 379, 381, 382, 659 Jin, H. 378 Jin, Y. 925 Jin, Z. 647 Jing, X. 808 Johanson, W. 700 John, B. 855 John, V. 166 Johnson, A.E.W. 725
Johnson, C. 160, 168 Johnson, D.C. 959 Johnson, K. 158, 167, 248, 252, 323, 380, 515 Johnson, K.V. 706, 713, 715, 722, 723, 725 Johnson, L.B. 926, 927 Johnson, P. 137, 781–782 Johnson, R.A. 650, 652, 655 Johnson, S. 21, 168, 245, 471, 479, 486, 591, 914–915, 918, 920, 924 Johnson, T. 66, 128, 142, 154, 158, 160, 435 Johnston, D. 516, 974, 978 Jollis, J.G. 207 Jones, B. 654 Jones, S.S. 188, 971 Jones, W.D. 889 Jonquet, C. 889 Jordan, M.I. 250 Jordt, H. 844 Jorissen, R.N. 881 Joseph, G.M. 139 Joseph, J. 649 Joseph, S. 719 Joshi, A. 261, 647 Joshi, M.S. 801 Joshi, R. 243 Joulin, A. 258 Jouni, H. 944, 950 Joyce, D. 477 Joyce, V.R. 106 Ju, X. 658 Judd, B.K. 860 Jude, J.R. 700 Judson, T.J. 684 Juengst, E.T. 377 Jumper, J. 286 Jung, K. 812 Junker, H. 649 Juraskova, I. 378 Jurie, F. 328 Just, B.H. 473 Justice, A.E. 952
K Kaalaas-Sittig, J. 394, 401 Kabanza, F. 460 Kaber, D.B. 168 Kaelber, D. 531, 974 Kagita, S. 141 Kahali, B. 952 Kahn, C. 317, 785, 813 Kahn, J.M. 714 Kahn, M. 914, 915 Kahn, M.J. 161 Kahn, S.A. 27, 208 Kahneman, D. 86 Kalahasty, G. 703, 711, 714 Kalantarian, H. 649 Kalishman, S. 683 Kallas, H.J. 681
Kallem, C. 645 Kalogerakis, E. 649 Kalyanpur, A. 786 Kamarck, T. 647 Kamaya, A. 444, 446 Kan, M.Y. 248 Kaneko, A. 956 Kang, H. 647 Kang, H.M. 944 Kang, J. 258, 310, 643 Kang, M.J. 892 Kanhere, S.S. 650 Kankaanpaa, J. 429 Kannampallil, T.G. 123, 124, 139, 141, 146, 155, 156, 156n1, 159, 164, 166, 169 Kannry, J. 144, 933 Kanoulas, E. 782 Kantarcioglu, M. 889 Kao, S.C. 675 Kapit, A. 251 Kaplan, B. 394, 599 Kapoor, R.R. 442 Kapur, T. 334 Kar, S. 952 Karahoca, A. 166 Karahoca, D. 166 Karanja, S. 436, 438, 462 Karapanos, E. 644 Karat, C.-M. 154, 157 Kariri, A. 436, 438, 462 Karkar, R. 654, 657 Karlson, E.W. 954 Karplus, M. 283 Karr, J.R. 282 Karras, B.T. 826 Karsh, B. 122, 167, 168 Karsh, B.-T. 140, 726, 832 Karson, A.S. 973 Karuza, J. 684 Kashani, K. 718 Kashyap, R. 719 Kass, M. 333 Kassirer, J.P. 67, 68, 110, 111, 139, 813 Kastor, J. 550 Katalinic, P. 675 Kato, P.M. 853, 854 Kattan, M.W. 876 Kaufman, D.R. 123, 130–132, 134, 135, 139, 141, 144–146, 155, 156, 156n1, 157, 161, 162, 165, 169 Kaufmann, A. 515 Kaukinen, K. 889 Kaul, S. 759–760 Kaushal, R. 572, 804, 974 Kavanagh, J. 762 Kavuluru, R. 254 Kawamoto, K. 193, 231, 476, 803, 825, 830 Kawano, K. 379 Kaye, J. 381, 649, 877–878, 893 Kaziunas, E. 645 Keddington, R.K. 725 Keech, A.C. 957
Keefe, M.R. 706 Keeling, A. 682 Keesey, J. 759, 975 Kekalainen, J. 781 Kelemen, S. 379, 380 Kellerman, R. 365 Kelley, M.A. 700 Kelley, M.J. 244 Kelly, B.J. 209 Kemper, A.R. 115 Kendall, D. 527, 529 Kendall, S. 165 Kennedy, D. 333 Kennelly, R.J. 706, 715 Kenron, D. 375 Kent, H. 682 Kent, W.J. 286 Keren, D. 784 Keren, H. 704 Kerlavage, A.R. 995 Kern, D.E. 853 Kern, L.M. 461, 974 Kern, M.L. 246 Kersey, P. 887 Keselman, A. 248, 373 Kesselheim, A.S. 979 Kevles, B. 305 Keyhani, S. 246 Khajouei, R. 487 Khalid, S. 166 Khan, H. 725 Khan, R. 784 Khan, V.J. 651 Khare, R. 930 Khatri, P. 887, 889 Khayter, M. 930 Khera, A.V. 952 Khera, R. 671 Kho, A. 496, 641, 888, 946–948 Kholodov, M. 878 Khorasani, R. 209 Khosla, C. 889 Kianfar, S. 167 Kibatala, P.L. 719 Kidd, M. 527 Kiefer, A.K. 961 Kiefer, R.C. 825, 951 Kilickaya, O. 718, 721 Kilicoglu, H. 245, 247, 261, 785 Killarney, A. 367 Kiln, F. 378 Kim, D. 258 Kim, H. 596, 873 Kim, J. 645, 658, 873, 877 Kim, M.I. 380 Kim, S. 258, 784 Kim, W. 778 Kim, Y. 645, 651, 658, 723, 885 Kimani, J. 436, 438, 462
Kimborg, D. 304 Kimmel, S.E. 142, 154, 155, 157, 429, 454 Kimura, M. 925 Kinder, T. 722 King, B. 744 King, N. 641 King, W. 311, 645 Kinkaid, B. 701 Kinmonth, A.L. 455 Kinnings, S.L. 893 Kinosian, B. 93 Kintsch, W. 127, 130 Kipper-Schuler, K.C. 246 Kirby, J.C. 948 Kircher, M. 954 Kireev, E. 778 Kirk, L. 365 Kirkpatrick, J. 286 Kish, L.J. 527 Kitamura, K. 649 Kitchens, B. 762 Kitzinger, J. 164–165 Klahr, P. 429 Klann, J.G. 929 Klasnja, P. 637–661 Klavans, J.L. 248 Klein, E. 262 Klein, G. 134 Klein, R.J. 950 Klein, S.W. 37 Klein, T.E. 253, 281 Kleiner, B.M. 156 Kleinman, A. 369 Kleinmuntz, B. 174 Klopfer, D. 135 Knaplund, C. 167 Knaus, W. 406, 812 Kneeland, T. 105, 106 Knickerbocker, G.G. 700 Knight, R. 642, 899 Knosp, B. 936 Knox, C. 881 Ko, J. 955 Koatley, K. 143 Koch, T. 772 Koehler, C. 643 Koehler, S. 246 Koepsell, T.D. 471 Koester, S. 246 Kogan, B.A. 476 Kohane, I.S. 193, 267, 496, 527, 829, 873, 926, 954, 978 Kohn, L. 142, 583, 711, 725, 801 Kok, M. 379 Kolesnikov, N. 878 Kolodner, R.M. 209 Komaroff, A. 809 Komatsoulis, G. 933 Konkashbaev, A. 957 Kononowicz, A.A. 858 Kontos, E. 374 Kooienga, S. 448
Koopman, B. 784 Koppel, R. 142, 154, 155, 157, 167, 405, 429, 454, 585, 597, 804, 833 Kor, D.J. 708 Koreen, S. 687 Korhonen, I. 381 Korinek, E. 656, 657 Kormanec, J. 878 Korner, M. 306 Korsch, B.M. 369 Korsten, H.H. 820 Kosec, P. 166 Koshy, E. 654 Koshy, S. 476 Koslow, S. 302 Kostakos, V. 651 Kostyack, P. 527 Kotzin, S. 773 Kouwenhoven, W.B. 700 Kowalczyk, L. 707 Kozbelt, A. 133, 134 Kozower, B.D. 447 Kraemer, D. 781–782 Kraemer, M.U.G. 995 Krall, M. 394, 401 Krallinger, M. 253 Kralovec, P. 973 Kramer, A.A. 812 Kramer, B.S. 443 Kramer, H. 193 Kraus, V.B. 874 Kreda, D. 193, 405, 829, 873 Kreider, N. 763 Kremkau, F. 307 Krestin, G.P. 898 Kricka, L.J. 961 Krížek, M. 680 Kripalani, S. 641 Krischer, J. 930 Krishnan, L. 888 Krishnaral, A. 744 Krist, A. 115, 527, 576, 596 Kristiansen, K. 286 Krizhevsky, A. 444 Kroemer, G. 899 Krueger, D. 167, 168 Krueger, R.A. 164 Krumholz, H.M. 33, 377, 671 Krupa, S. 879 Kubose, T. 142, 160, 435 Kuhl, D. 683 Kuhls, S. 707 Kuhn, I. 471, 478 Kuhn, M. 881, 884 Kuhn, T. 167 Kuipers, B. 165 Kuivaniemi, H. 944, 945, 950 Kukhareva, P. 193 Kukla, C. 641 Kulikowski, C. 25, 35, 36, 300 Kullo, I.J. 944, 945, 947, 950
Kumar, S. 647, 649, 655 Kumbamu, A. 780, 785 Kuntz, K.M. 115 Kuperman, G. 142, 167, 187, 210, 394, 590, 710, 714, 833 Kuprel, B. 955 Kurillo, G. 381, 382 Kurreeman, F. 945, 948 Kurth, A.E. 115 Kush, R. 215 Kushniruk, A. 142, 144, 145, 157, 162, 168, 380 Kuss, O. 375, 762 Kutejova, E. 878 Kuttler, K.G. 725 Kwasny, M. 719 Kweon, J. 885 Kwon, M. 639 Kwon, Y. 995 Kyaw, B.M. 858 Kyaw, T.H. 703, 709 Kyriakides, T.C. 106
L Labarthe, D.R. 246 Labkoff, S. 537 Lacefield, D. 375 Lacson, R. 209, 827 Lafferty, J. 776 Lahdeaho, M.L. 889 Lai, A.M. 244 Laine, C. 762 Laird, N. 974 Lam, C. 932 Lamb, A. 471 Lamb, J. 877, 878 Lambert, D. 641 Lambrew, C.T. 701 Lanckriet, G. 651 Landauer, T. 250, 782 Landay, J.A. 647, 651 Lander, E.S. 276, 870 Landman, A. 979 Landon, B.E. 976, 979 Landrigan, C.P. 711 Landrigan, P.J. 899 Landrum, M.J. 879 Lane, N.D. 639 Lane, S.J. 641 Lane, W.A. 393 Lang, F. 245, 264, 886 Langer, R. 644 Langer, S. 744 Langheier, J.M. 803 Langlotz, C. 316, 318 Langridge, R. 283 Lanktree, M.B. 953 Lanuza, D. 165 Lapitan, M.C. 719
Lapsia, V. 525, 526 Larabell, C. 312 Larimer, N. 379 Larkin, J. 127, 129, 134, 138 Larmuseau, M.H.D. 896 Larsen, R. 489 Larson, E.C. 648, 659 LaRusso, N.F. 157 Lasater, B. 803 Lash, A.E. 878 Lash, T.L. 441 Lashkari, D.A. 276 Lassar, A.B. 246 Lattin, D.J. 51 Lau, A.Y.S. 381 Lau, F. 64, 167, 501 Lau, I. 859 Lau, L.M. 246 Laurent, M. 767 Laurila, K. 889 Lavallee, D.C. 975 LaVange, L. 447 Lavelle, M. 157 Lavie, C.J. 973 Lawrence, J. 115, 995 Laws, S. 744 Lazar, A. 643, 680 Lazar, M. 377 Le Bihan, D. 308 Leahy, R. 333 Leape, L.L. 188, 801, 974 Leaper, D.J. 810 Leatherman, S. 560, 975 Leavitt, M. 185 Lebiere, C. 129 Lederer, R. 649 Ledley, R. 137, 341, 810 Lee, B. 644, 645, 651 Lee, C. 281, 892 Lee, D. 64, 308 Lee, H. 645, 658 Lee, J. 258, 306, 725, 744, 979 Lee, J.M. 645 Lee, J.Y. 639, 687, 725 Lee, K. 244, 258 Lee, T. 573, 979 Lee, Y. 341, 639 Leeflang, M.M.G. 93 Lefebvre, C. 779 LeFevre, M.L. 115 Lehman, C.U. 372 Lehmann, H.P. 372 Lehmann, T. 339 Leiner, F. 598 Leinonen, R. 872 Leitner, F. 253 Lek, M. 878 Lender, D. 648 Lenert, L.A. 106 Lenfant, C. 871 Lenze, E.J. 933
Leong, A. 312 Leong, F. 312 Leong, K.W. 654 Leong, K.C. 654 LePendu, P. 888 Lerman, B.J. 109 Leroy, G. 373 Leroy, J. 772 Leser, U. 253 Lesgold, A. 135 Leshno, M. 167 Lesk, M. 787 Leslie-Mazwi, T.M. 64 Lesselroth, B.J. 168 Lester, R. 436, 438, 460 Lethman, L.W. 703, 709 Letunic, I. 881 Leu, M. 599 Levas, A. 786 Leventhal, L. 782 Leventhal, R. 979 Levesque, Y. 144, 145, 162 Levick, D. 797, 833 Levin, C.A. 973 Levine, D.M. 976, 979 Levine, R.B. 853 Levis, J. 413 Levitt, M. 280, 283 Levy, B. 760 Levy, K.D. 959 Levy, L. 654 Levy, M. 320, 321, 346 Levy, O. 258 Lewin, J.S. 898 Lewis, C. 161 Lewis, J.L. 976 Lewis, J.R. 165, 166 Lewis, M. 258 Lexe, G. 310 L’Homme, M.-C. 257 Lhotska, L. 680 Li, B. 647 Li, C. 649 Li, D. 782, 826 Li, H. 776 Li, J. 157, 807 Li, L. 244, 954 Li, Q. 246, 704 Li, R. 281, 284 Li, S. 762 Li, W. 648, 659 Li, X. 378 Li, Y. 244, 284, 649 Li, Z. 377, 649, 782 Liang, S. 253 Liang, Y.C. 649 Liao, K. 945, 948, 949, 954 Libicki, M.C. 211 Lichtenbelt, B. 326 Lieberman, M.A. 371 Liebovitz, D.M. 367
Lieto, A. 129 LiKamWa, R. 639 Lillie, E.O. 657 Lilly, C.M. 682, 713, 714, 724 Limdi, N.A. 957 Lin, C. 954 Lin, H. 584, 592, 593, 596, 832–833 Lin, J. 109, 247 Lin, L. 122, 142, 154 Lin, N.X. 447 Lin, S.C. 201, 978 Lin, Y. 639, 881 Lin, Z. 671 Lindauer, J.M. 702, 703 Lindberg, D. 66, 246, 265, 590, 757, 770, 786 Linder, J.A. 824, 976 Lindgren, H. 803 Lindhurst, M.J. 893 Lindström, S. 952 Lindtner, S. 645 Linethal, A.J. 700 Lingsma, H.F. 876 Linnarsson, S. 281 Linton, L.M. 276, 870 Linzer, M. 995 Lipman, D.J. 284 Lipsitz, S.R. 719, 973 Lipton, E. 515 Listgarten, J. 378 Litell, J. 696, 711, 720 Littenberg, B. 93, 106 Littlejohns, P. 428 Liu, D.T.L. 950 Liu, F. 167, 247 Liu, H. 246, 649, 650, 889 Liu, J. 444, 446, 649, 932 Liu, L. 680 Liu, M. 682, 889, 899, 950, 954 Liu, N. 893 Liu, R. 650 Liu, T. 881, 884 Liu, V. 980 Liu, Y. 258, 341, 639, 784, 888, 932 Lloyd, J.F. 723, 725 Lo, B. 444 Localio, A.R. 142, 154, 155, 157, 429, 454 Lochbaum, C. 782 Locke, A.E. 952 Logan, R. 373 Lohman, M.E. 367 Lok, U. 493 Lomatan, E.A. 830 Long, W.J. 978 Longhurst, C.A. 494, 808, 833 Loper, E. 262 López-Coronado, M. 642 Lopez-Meyer, P. 649 Lopo, A.C. 854 LoPresti, M. 372 Lorenzetti, D. 394 Lorenzi, N. 122, 143, 158, 167, 804, 973
Lorig, K. 373, 375 Loscalzo, J. 889 Lottin, P. 144, 145, 162 Loukides, G. 402, 403 Louwerse, M.M. 860 Lovato, E. 188 Love, R.M. 975 Lovell, N.H. 995 Lowe, D. 313, 328, 330 Lown, B. 700 Lowry, S.Z. 167 Lu, S. 860 Lu, Z. 254, 886 Luan, D. 258 Lubarsky, D. 479 Lum, J.K. 956 Lumpkin, B. 253 Lumpkins, C. 105, 106 Lundberg, G. 763 Lundgrén-Laine, H. 165 Lundsgaarde, H. 428, 715 Lunenburg, F.C. 454 Lunshof, J.E. 894, 895 Luo, Y. 268 Luotonen, A. 758 Lupton, D. 643 Lush, M.J. 879 Lussier, Y. 259, 869, 915 Lusted, L. 137, 341, 810 Lyall, D.M. 954 Lye, C.T. 377 Lyman, J.A. 157 Lyman, M. 245, 248 Lynch, J.A. 244 Lynn, J. 406, 812 Lyon, C.R. 722 Lyson, M.C. 659 Lyu, H. 975
M MacArthur, J. 878, 950 MacDonald, D. 333, 338 MacFadden, D. 926 MacFarland, P.W. 703 Machan, C. 972 Maciosek, M. 115 MacKenzie, I.S. 164 Mackinnon, A.D. 675 Macklin, D.N. 282 Macklin, R. 403 MacMahon, H. 312 Macpherson, J.M. 961 MacRae, C. 976 Magder, S. 132 Maglione, M. 477, 726 Magnani, L. 132, 139 Magrane, M. 887 Mahajan, R.P. 707 Mahmood, U. 305, 310
Mahoney, E.R. 369, 376 Maia, J.M. 892 Mailman, M.D. 878, 925 Mainous, I.I.I.A. 995 Maitz, G. 744 Majchrowski, B. 413 Majeed, A. 654 Majersik, J.J. 681 Major, K. 493 Makary, M.A. 517 Makeyev, O. 649 Maki, M. 889 Malani, P. 973 Malik, J. 334 Malin, B. 402, 403, 889 Malviya, S. 705 Mamlin, B. 499 Mamykina, L. 497 Manchanda, R. 560 Mancuso, A. 188 Mandel, J.C. 193, 475, 829 Mandelin, A.M. 948, 949 Mandell, S. 974 Mandl, J. 829 Mandl, K.D. 193, 379, 380, 527, 829, 929, 973 Mane, V.L. 248 Mani, I. 785 Mani, S. 954 Manicavasagar, V. 378 Manichanh, C. 281 Mankoff, J. 651 Mann, D.M. 684 Manning, C.D. 258 Manning, D. 341 Manning, P. 760, 801 Manolio, T.A. 892 Mant, J. 458 Mao, M. 288 Mao, Z.H. 649 Marcetich, J. 773 Marcial, L.H. 803 Marcin, J.P. 681 Marciniak, T.A. 33 Marcolino, M.S. 367 Marcus, M. 267, 645 Margolis, D. 310, 311 Margolis, M.J. 860 Mariakakis, A. 648 Mark, D.B. 207 Mark, R.G. 695–726 Markewitz, B.A. 704, 707–708 Markey, M.K. 36 Markle-Reid, M. 379 Markopoulos, P. 651 Marley, S.M. 675 Marlin, R. 448 Marlo, J. 953 Marone, C. 188 Maroto, M. 246 Marquardt, C. 712 Marquet, G. 345
Marra, C.A. 436, 438, 460 Marroquin, J. 335 Marti, J. 447 Martin, C.A. 656, 657 Martin, D.K. 499, 974 Martin, R. 316, 323 Martínez-Pérez, B. 642 Martinez-Perez, M.E. 687 Martins, S. 822 Marton, K.I. 105, 106, 110 Marwede, D. 317 Masanz, J.J. 246 Masci, P. 156 Masiello, I. 858 Masinter, L. 787 Maskrey, N. 764 Maslen, J. 887 Mason, J. 164 Massagli, M. 379, 723 Massaro, T. 157, 585 Massoud, T. 310 Mastura, I. 654 Masys, D. 63, 372, 514 Mathew, J.P. 803 Mathews, C. 641 Mathews, S.C. 706, 715 Mattern, C. 744 Matthew-Maich, N. 379 Matthijs, K. 896 Mattick, P. 248 Mattison, J.E. 210 Matzner, Y. 279 Maurer, C. 335 May, C. 448 May, J.L. 703 Mayer, A. 156, 960 Mayer, E.A. 899 Mayer-Blackwell, B. 975 Mayes, T.J. 143 Mazmanian, S.K. 899 Mazurek, M.O. 684 Mazzeo, S.E. 658 McAfee, A.P. 380 McAlearney, A.S. 367 McAteer, J. 649 McBride, A. 744 McCabe, G.P. 459 McCagg, A. 954 McCann, A.P. 810 McClellan, M. 802, 804, 976 McConchie, S. 564 McConnell, S. 189 McCool, J. 761 McCormack, L. 378 McCormick, J. 780, 785 McCoy, A.B. 403 McCray, A.T. 246 McCullough, C.M. 394 McCullough, J. 560 McCusker, J.H. 276 McDaid, D. 779
McDermott, J. 127, 129, 134, 138 McDonagh, M. 782 McDonald, C. 167, 210, 244, 455, 459, 467–501, 516, 590, 974 McDonald, D.W. 647 McDonald, T.W. 93 McDonough, M. 844 McEvoy, C.A. 719 McEvoy, M. 371 McFadden, H.G. 377 McGeary, J. 657 McGill, M. 775, 776 McGlynn, E. 514, 759, 976 McGraw, J.J. 372 McGregor, A.M. 143 McInnes, P. 930 McInnis, M. 653 McKanna, J. 382 McKee, S. 477 McKeown, K.R. 248 McKethan, A.N. 976 McKibbon, K. 779, 782, 784 McLachlan, G. 334 McLean, R.S. 137 McManus, J. 598 McMullen, C. 146, 155, 156 McMurry, A.J. 926 McMurry, T.L. 447 McNair, D.S. 812 McNeer, J.F. 401 McNeil, B. 223 McPhee, S. 488 McTavish, F. 370, 650, 652, 655 McWilliams, J.M. 976 Mead, C.N. 930 Meade, M. 763 Meade, T. 310 Meadows, G. 830 Mechouche, A. 320 Meckley, L.M. 900 Medina, J. 700 Meer, P. 334 Mega, J.L. 958 Mehrotra, A. 979 Mehta, C. 918, 919 Mehta, T. 303 Meier, B. 750 Meigs, J. 92 Meili, R. 974 Meinert, E. 932 Meisel, A. 395, 396, 411–413 Meissen, H. 682 Mejino, J. 316 Melnick, E.R. 801 Melnikow, J. 115 Melton, B.L. 427, 438, 459 Melton, G.B. 467–501 Melton III, L. 471 Meltzer, S. 721 Melzer, D. 447 Menachemi, N. 516, 522, 971, 972, 983
Mendelsohn, G.A. 653 Mendenhall, J. 370 Mendis, M. 926 Mendonca, E.A. 157, 954 Mennemeyer, S.T. 977 Merad, M. 899 Merchant, R.M. 246 Merkel, M. 248 Merry, A.F. 719 Merson, L. 762 Mervin, M.C. 672 Metaxa-Kakavouli, D. 657 Metcalf, S.M. 714 Metlay, J.P. 142, 154, 155, 157, 429, 454 Meyer, B.C. 681 Meyer, E. 680 Meystre, S.M. 262 Mgbako, O. 933 Michaelis, J. 442 Michaels, M. 830 Michailidou, K. 952 Michel, M. 822 Michelson, D. 106 Michie, S. 649 Mickish, A. 471 Middleton, B. 155, 167, 417, 721, 797, 802, 804, 813, 825, 830, 831, 974, 978 Miguel-Cruz, A. 680 Mikk, K.A. 528 Mikolov, T. 258 Milam, S.G. 675 Milani, A.R. 973 Milani, R.V. 973 Miles, W. 758 Milius, R.P. 215 Millard, L.A.C. 945 Mille, W. 284 Miller, A. 443, 980 Miller, B.S. 527 Miller, B.T. 140 Miller, C. 248, 684 Miller, E.H. 933 Miller, G.C. 881 Miller, H. 527 Miller, J.C. 898 Miller, M.E. 974 Miller, N.A. 960 Miller, R.A. 248, 252, 391–420, 440, 448, 486, 487, 490, 499, 954 Miller, R.H. 527 Miller, S.M. 413 Miller, T.A. 954 Miller, V. 778 Miller-Jacobs, H. 187 Millett, C. 251, 977 Millhouse, O. 323 Mills, E.J. 436, 438, 460 Milne-Ives, M. 932 Milstein, A. 975 Mimi, O. 654 Min, J. 310, 639
Minor, J.M. 648 Mirhaji, P. 930 Mishra, R. 159 Misitano, A. 785 Misra-Hebert, A.D. 476 Mitchell, C. 408 Mitchell, H.H. 860 Mo, H. 948, 954 Moazed, F. 719 Modayur, B. 333 Moffatt-Bruce, S. 933 Mohamed, A. 258 Mohammadpour, A.H. 953 Moher, D. 761 Moher, E. 761 Mojica, W. 477, 726 Molich, R. 160 Moller, J.H. 137 Mongan, J. 573 Moody, B. 703, 709 Moody, G. 703, 708, 709 Moore, G. 498 Moore, K. 375 Moore, L.B. 683 Moorhead, S.A. 379 Moorthy, K. 719 Moran, T.P. 125, 162, 163 Morea, J. 499 Morel, G. 141 Morgan, A. 253, 877 Morgan, S. 378 Morin, P.C. 157, 161, 162 Mork, J. 244, 773, 886 Morris, A.H. 714, 722 Morrison, A. 643, 954 Morrison, F.P. 244 Morrow, G.R. 896 Mort, M. 878, 888 Mortensen, J.M. 888 Morton, S.C. 187, 477, 726 Mosa, A.S. 933, 936 Moses, L. 93 Mosley, J.D. 951 Mosley, T.H. 957 Mossberger, K. 374 Motik, B. 343 Mougin, F. 881 Mrabet, Y. 247 Muehling, J. 893 Mueller, E. 786 Muhlbaier, L.H. 207 Mulder, N. 887 Mullen, P. 379 Müller, D.J. 956 Müller, H. 339, 782, 785 Mulley, A.G. 369, 370 Mulsant, B.H. 933 Mulvaney, C. 803 Munasinghe, R. 601 Mundkur, M. 167 Munn, Z. 765
Munson, S.A. 644, 645 Munsterberg, A.E. 246 Murarka, T. 141 Murphy, G.C. 643 Murphy, R.L.H. 670 Murphy, S.A. 655, 957 Murphy, S.N. 926 Murphy, T. 638 Murray, E. 455 Musacchio, R. 763 Musen, M. 116, 804, 805, 812, 820, 821, 827, 835, 883, 886, 888, 889, 980 Musser, R.C. 231, 476 Musters, A. 706 Muth, C. 488, 501 Myer, L. 641 Myers, B.A. 154 Myers, E.W. 284 Myers, G. 725 Mynatt, B. 659, 782 Myneni, S. 141
N Nadkarni, P.M. 476 Naessens, J.M. 679 Nahum-Shani, I. 650, 655 Najafzadeh, M. 436, 438, 460 Nakajima, M. 647 Namath, A.F. 276 Namer, F. 257 Nanduri, V. 442 Napel, S. 300, 302, 319, 320, 330, 339 Narayanaswamy, A. 955 Natarajan, P. 952 Nath, B. 650 Nault, V. 464 Naumann, T. 243 Nazi, K.M. 380 Nealy, B.E. 975 Nease, R.F. Jr. 105, 106, 109, 112–115 Nebeker, J.R. 168, 487 Nebeling, L. 650 Needleman, S.B. 283 Neinstein, A.B. 684 Nelsen, L. 780 Nelson, C.P. 953 Nelson, H.D. 115 Nelson, N.C. 713, 714, 723 Nelson, R. 848 Nelson, S.D. 881 Nelson, S.F. 893 Nelson, S.J. 225 Nergiz, M.E. 402, 403 Nesbitt, T.S. 681 Nesson, H.R. 471 Neter, E. 374 Neuman, J. 700 Neumann, M. 258 Neumann, P.J. 106, 900
Neveol, A. 886 Neves, M. 253 Newcomb, C. 763 Newell, A. 125, 126, 128, 162, 163 Newell, M. 317 Newman, T.B. 917, 918, 920 Newman, W. 803 Newmark, L.P. 979 Newnham, H. 378 Newton, K.M. 948 Newton, K.S. 375 Neyman, J. 370 Ng, A.Y. 250 Ng, G. 334 Ng, S.B. 281, 893 Ngo, L. 248 Ngugi, E. 436, 438, 460 Nguyen, D.H. 643 Nguyen, T.C. 711, 973 Nguyên, V.H. 253 Nichol, W.P. 721 Nicholson, D. 767 Nicholson, L. 652 Nicol, G.E. 933 Nicolae, D.L. 878 Niehoff, K.M. 488 Nielsen, B. 312 Nielsen, H. 758 Nielsen, J. 154, 160, 430, 456 Nielsen, L. 709 Nigrin, D.J. 926 Nikolaidis, I. 680 Nikolaou, C.K. 858 Nikolskaya, T. 875 Nikolsky, Y. 875 Niland, J. 914 Nilsen, W. 650, 655 Niv, M. 251 Nixon, L. 780 Nocola, G.N. 64 Nolan, T.W. 716 Noll, A. 960 Norby-Slycord, C. 648, 659 Norden, A. 786 Norfolk, E. 715 Norman, D.A. 130, 146, 154, 156 Norman, L.R. 700 Noronha, J. 649, 653 Norris, P.R. 725 Norris, S. 373 Nosal, S. 974 Nossal, M. 251 Novack, D.H. 896 Novak, L.L. 167 Novara, G. 683 Noveck, H. 784 Novillo-Ortiz, D. 367 Novoa, R.A. 955 Nowak, E. 328, 329 Nowlan, W.A. 223 Noy, N. 818, 819, 886
Nugent, K. 312 Nugent, N. 657 Nusbaum, C. 276 Nygren, P. 373, 448 Nystrom, K.V. 681
O Obdržálek, Š. 381, 382 Obeid, J. 873 O’Brien, E. 954 Ochitill, H. 896 O’Connor, A.M. 974 O’Connor, G.T. 105, 106 Oded, N. 684 Odigie, E. 209, 827 Odisho, A.Y. 684 O’Donnell, P.H. 959 Oentaryo, R.J. 129 Oesterling, J. 92 Oetjens, M.T. 957 Ofli, F. 381, 382 Ogren, P.V. 246 Ogunyemi, O. 826 Oh, S. 956 Ohno-Machado, L. 63, 198, 768, 869, 873 O’Horo, J.C. 711, 719, 721, 726 Ohta, T. 255 Oinn, T. 887 Okada, Y. 952 Okafor, M.C. 648 Okoroafor, N. 844 Okryznski, T. 744 Olchanski, N. 711, 721 O’Leary, K.J. 367 Olender, S. 933 Olesen, F. 974 Oliveira, J.A.Q. 367 Olney, A. 860 Olsen, J. 141 Olson, G.M. 162 Olson, J.S. 162 Olson, N. 225 Olson, S. 896 Oltramari, A. 129 O’Malia, J. 282 O’Malley, A.S. 972 Ong, K. 573 Ong, M. 452 Ong, T. 930 Onigkeit, J. 718 Oniki, T. 710, 713–715 Openchowski, M. 471 Oppenheim, M.I. 146, 155, 157 O’Reilly, R. 954 Orho-Melander, M. 953 Orme, J.F. Jr. 696, 706, 710, 713–715, 722 Ormond, K.E. 895, 896, 899 Ornstein, C. 598, 599 O’Roak, B.J. 954
O’Rourke, K. 784 Orr, R.A. 711 Orshansky, G. 372 Osheroff, J. 440, 448, 760, 797, 825, 830, 833 Osterbur, D. 209 Oury, J.D. 129 Overby, C.L. 803 Overdijkink, S.B. 379 Overhage, J.M. 155, 167, 380, 455, 973, 974 Overhage, M. 515 Overington, J. 881 Owens, D. 93, 106, 108, 109, 112–115 Ozbolt, J.M. 575, 577, 578, 593 Ozdas, A. 488 Ozkaynak, M. 21
P Pacheco, J.A. 946, 947 Paddock, S. 312 Padrón, N.A. 684 Page, D. 951, 954 Page, L. 774 Pageler, N.M. 808, 833 Pagliari, C. 427 Paige, D.L. 142, 160, 435 Paige, N.M. 372 Pakhomov, S. 246 Palchuk, M.B. 824, 927 Palda, V.A. 85 Palfrey, J. 400 Palmer, B.K. 723 Palmer, T.M. 953 Palotti, J. 785 Paltoo, D.N. 63 Pan, E. 804, 974, 978 Pang, J.E. 417 Panicker, S.S. 248 Panos, R.J. 682 Paoletti, X. 915 Papanicolaou, G.J. 63 Parate, A. 649 Parchman, M.L. 975 Park, A. 973 Park, G. 246 Park, H.A. 223 Park, J.W. 639 Parker, A.G. 659 Parker, C. 710 Parker, E.B. 670 Parker, J.S. 889 Parnes, B. 972 Parrish, F. 881 Parry, G.J. 711 Parshuram, C.S. 488, 501 Parsi, S. 648 Parsons, A.S. 394 Partridge, C. 167 Parts, L. 378 Paskin, N. 787
Pasquier, M. 129 Patashnik, E. 764 Patay, B. 657 Patel, A. 371 Patel, B. 985 Patel, R.S. 954 Patel, S. 648, 659 Patel, U. 675 Patel, V. 36, 70, 72, 122–124, 128, 130–135, 137–146, 154–165, 168–170, 380, 435, 479, 484, 499, 719, 807, 833, 925, 973 Paterno, M.D. 830 Patey, R. 139 Pathak, J. 780, 826, 827, 951 Patil, O.R. 647 Patil, V.B. 248 Patra, B.G. 247 Patrick, K. 655 Patterson, E.S. 139, 140 Pauker, S. 68, 108–110, 393, 813, 978 Paul, M.H. 700 Pavel, M. 379, 381, 382, 655, 659 Pavliscsak, H. 375 Payne, P.R. 914–915, 918, 920, 924, 933 Payne, S.J. 131 Payne, T.H. 157, 167, 210 Payton, F. 576 Peabody, G. 79 Pearce, C. 604 Pearson, J.V. 893 Pecina, J.L. 679 Peck, T. 469, 703, 709 Peel, D. 334 Peiperl, L. 762 Peirce, C.S. 139 Peissig, P.L. 948, 954 Peleg, M. 131, 827 Pellegrini, C.A. 377 Peloso, G.M. 953, 954 Pelphrey, K. 914, 924 Pencina, M.J. 876, 915, 954 Pendergrass, S.A. 945, 951 Peng, J. 682 Peng, L. 955 Peng, Y. 254 Pennington, J. 258 Pentecost, J. 782, 783 Percha, B. 883 Pereira, F. 644 Perera, M.A. 957 Perkins, D.N. 134, 144, 145 Perkins, G. 312 Perona, P. 329 Perry, T. 188, 971 Pers, T.H. 952 Persell, S.D. 719 Pestotnik, S.L. 188, 488, 489, 723 Peters, E. 369 Peters, L. 886 Peters, M.P. 258 Peters, S.G. 711, 721, 725
Petersen, C. 377, 975 Petersen, L.A. 974 Peterson, E.D. 915 Peterson, J.F. 959 Peterson, N.B. 889 Peterson, W. 90 Petkova, D. 63 Petratos, G.N. 723 Petrone, J. 927 Petrovski, S. 892 Pettersson, M. 649 Pettit, L. 936 Peute, L. 596 Pevzner, J. 156, 157, 161, 162 Pew 981 Pfammatter, A.F. 377 Pham, H. 334, 972 Phan, L. 878 Phansalkar, S. 486 Phatak, S.S. 656, 657 Phelps, M. 304 Philipose, M. 649 Phillips, A.D. 878 Phillips, E.J. 956 Phillips, R.R. 471 Phillips, S.M. 377 Piaguet-Rossel R. 447 Piccirillo, J.F. 933 Pickering, B. 695–726 Piercy, K.L. 648 Pierson, R.C. 959 Pifer, E. 760 Pignone M.P. 115 Piña, I.L. 954 Pina, L.R. 653, 654, 657 Pinborg, A. 762 Ping, G. 377 Pingree, S. 370 Pinhas, A. 339 Pinsky, P. 443 Piot, J. 772 Pivovarov, R. 248, 785 Plaisant, C. 154, 168 Plantinga, L. 972 Plarre, K. 649 Platt, R. 926, 927 Ploeg, J. 379 Plumer, R. 896 Plummer, F.A. 436, 438, 460 Plummer, H. 471 Plunkett, R.J. 221 Pluye, P. 784 Podchiyska, T. 888 Poissant, L. 475 Poland, G. 157 Pollack, A. 898 Pollock, B.H. 853, 854 Polson, P.G. 161 Poole, J.E. 703 Poon, A. 592, 641 Poon, E. 493, 804, 833
Poon, K.B. 708 Pope, C. 448 Popejoy, A.B. 957 Pople, H. 429 Porter, E. 527 Porter, M. 573 Posadzki, P. 858 Posner, D. 948 Poterack, K.A. 169 Pottenger, W.M. 268 Potts, H.W.W. 978 Potucek, J. 680 Pouratian, N. 305 Pourhomayoun, M. 649 Powe, N.R. 972 Powell, J. 371, 375, 762 Powell, T. 225 Powsner, S. 168 Prail, A. 161 Prasad, R. 649 Prasad, V. 803 Prastawa, M. 335 Pratt, V.M. 959 Pratt, W. 637–661 Preece, J. 127, 154, 158 Pressler, T.R. 915 Prestin, A. 374 Prey, J.E. 367 Price, C. 64 Price, M. 167 Prichard, J. 448 Prior, R.E. 37 Privitera, M.R. 802 Prokscha, S. 918, 920 Pronovost, P.J. 706, 715, 980 Prorok, P. 443 Prosser, L.A. 804, 974 Prothero, J.S. 337 Prothero, J.W. 337 Provost, E.M. 653 Provost, F. 376 Prud’hommeaux, E. 825 Pryor, D. 105, 106, 207 Pryor, T. 198, 471, 710, 714, 721, 722 Przvbvlo, J.A. 493 Pugh, J. 399 Puglielli, S. 786 Pui, C.-H. 958 Pulido, J. 718 Pullara, F. 683 Pulley, J.M. 944, 945, 950, 951, 954, 958, 959 Purcell, G.J. 704 Pycroft, L. 399 Pynadath, A. 750 Pysz, M. 310, 311 Pyysalo, S. 254, 255
Q Qaseem, A. 765
Qin, J. 281 Quaid, K.A. 896 Quan, H. 394 Quigley, A. 651 Quimby, K.R. 367 Quinn, C.C. 648 Qureshi, N.S.II 981
R Raab, F. 655 Rabbi, M. 650 Racunas, S.A. 888 Radford, A. 258 Radford, M.J. 33 Raes, J. 281 Raghavan, V.A. 929 Rahimi, B. 596, 599 Rahman, M.M. 649 Rahmani, R. 339 Rahurkar, S. 971, 972, 977 Raij, A. 647 Raj, T. 952 Raja, A. 209, 827 Rajakannan, T. 768 Rajalakshmi, R. 676 Rajani, A.K. 812, 813 Ralston, J. 596 Raman, R. 681 Ramanathan, N. 377 Ramaprasad, A. 378 Ramelson, H.Z. 168 Ramirez, A.H. 944, 950, 957–959 Ramirez, M.P. 954 Ramirez-Cano, D. 251 Ramnarayan, P. 442 Ramoni, M. 132, 139 Ramoni, R. 193, 995 Ramos, J. 639 Ramsaroop, P. 527 Ramsdell, J. 873 Ramsden, M. 782 Randy Smith, G. Jr. 367 Ransohoff, D.F. 91 Ranta, A. 488 Ranum, D.L. 246 Rao, S. 282, 707 Rapp, B.A. 265 Rappaport, M. 773 Rappelsberger, A. 232 Rasmussen, L.V. 948 Rasmussen-Torvik, L. 945–947 Raths, D. 925 Ratinov, L. 258 Rausch, T. 596 Ravert, R.D. 371 Ravi, D. 444 Raworth, R. 167 Ray, P. 305, 310 Raychaudhuri, S. 948, 949, 954
Raze, C. 245 Read, J.D. 222 Read-Brown, S. 51 Reade, S. 683 Reason, J.T. 141 Rector, A. 315 Rector, A.L. 223 Redd, W.H. 371 Reddy, M. 645 Reding, D. 443 Redman, M. 893 Reed, C.E. 370 Reed, D.A. 853 Reed, W.C. 543–574 Reeder, B. 680 Reeves, J.J. 33 Reeves, M.J. 681 Reginster, J.-Y. 372 Rehm, H.L. 879 Reiber, H. 750 Reich, C. 927, 930 Reid, C. 772 Reiner, B.I. 675 Reiser, S. 468, 497 Reisinger, H.S. 682 Reisner, A.T. 703, 704, 709 Rekapalli, H. 781–782 Relkin, N.R. 896 Relling, M.V. 63, 958 Remde, J. 782 Remien, R.H. 933 Ren, C. 682 Renders, C.M. 375 Rennie, D. 763 Reshef, R. 246 Resnik, P. 251 Retecki, S. 380 Reynolds, P.I. 705 Reynolds, R. 855 Reznick, R.K. 719 Rhee, S.Y. 887 Ribaric, S. 313 Ribeiro, A.L. 367 Ribeiro, M.T. 812 Ribitzky, R. 168 Ricci, M.A. 680 Ricciardi, W. 188 Rice, D. 375 Rich, M. 762 Richardson, J.E. 803 Richardson, J.S. 283 Richesson, R. 915, 930, 933 Richter, G.M. 676 Ridgely, M.S. 981 Ridgeway, C.J. 720 Ridgway, P.F. 683 Rief, J.J. 493 Rieman, J. 161 Rigby, M. 427, 429 Rigden, D. 767 Riley, J. 758
Riley, W.T. 655 Rimoldi, H.J. 137 Rind, D. 380, 471 Rindfleisch, T. 245, 248, 760, 785 Ringertz, H.G. 898 Rios, A. 254 Rios Rincon, A. 680 Ritchie, C. 307 Ritchie, E. 378 Ritchie, M.D. 888, 944–946, 950, 951 Ritter, F.E. 129 Ritter, P. 375 Ritvo, P. 436, 438, 460 Rivera, D.E. 656, 657 Rizer, M.K. 367 Rizo, C. 371 Rizzo, A.A. 855 Robbins, A. 37 Roberts, J.S. 896 Roberts, K. 247, 262, 265, 782 Roberts, L.M. 813 Roberts, P. 781–782 Robertson, J. 413 Robertson, S. 642, 654, 776 Robinson, A.Q.L. 109 Robinson, J.R. 944, 945, 948 Robinson, P. 63, 338 Rocca-Serra, P. 872 Rocha, B.H. 830 Rocha, R. 246 Roden, D. 496, 889, 944, 945, 948, 954, 958 Rodriguez, L. 264 Rodriguez, M. 156 Rodriguez, S. 193 Roentgen, W. 305 Rogers, A.J. 889 Rogers, M.P. 974 Rogers, R.J. 645 Rogers, W. 128, 264 Rogers, W.J. 245, 701 Rogers, Y. 122, 123, 127, 154, 156, 158, 641 RohitKumar, C. 786 Rohlfing, T. 335 Rohlman, D. 782 Rohrer Vitek, C.R. 889 Rohwer, P. 641 Roland, M. 974 Romacker, M. 261 Roman, L.C. 168 Romano, M. 501 Rooksby, J. 643 Rosamond, W.D. 681 Rosati, R.A. 401 Rose, L.L. 703 Rose, M.T. 227 Rosebrock, B.J. 714 Rosemblat, G. 245, 248 Rosen, G. 323 Rosenbaum, S. 707 Rosenbloom, S.T. 363–384, 488, 954 Rosenbloom, T. 497
Rosencrance, L. 598 Rosenfeld, B. 681, 712, 724 Rosenfeld, M. 648, 659 Rosenthal, D. 749 Roski, J. 976 Rosman, A.N. 379 Ross, B. 304 Ross, D.T. 878 Ross, J. 564, 671 Ross, L. 781–782 Rosse, C. 291, 315 Rossi-Mori, A. 223 Rosson, M.B. 154 Rost, M. 643 Roter, D. 369 Roth, E. 156, 477, 726 Rothenberg, J. 788 Rothman, K.J. 441 Rothwell, D.J. 221, 222 Rotimi, C.N. 956 Rotman, B. 598 Rottscheit, C. 954 Roudsari, A. 458 Rough, D. 651 Roy, D. 762 Rozenblit, L. 914, 924 Rozenblum, R. 973, 978, 979 Rozner, M.A. 703 Rubin, D. 299–347 Rubin, L. 370 Rubinson, H. 135 Rubinstein, Y.R. 930 Ruch, P. 248 Ruder, S. 262 Rudin, R.S. 188, 971, 972, 981, 982 Ruiz, M. 339 Rumshisky, A. 243 Rusincovitch, S. 915 Rusk, N. 878 Ruslen, L. 781–782 Russ, A.L. 427, 438, 459 Russell, J. 653, 978 Russell, L.B. 106 Russell, S.A. 427, 438, 459 Ryall, T. 860 Ryan, E.P. 282 Ryan, N. 782 Ryan, P. 930 Ryan, W. 750 Ryckman, T. 248
S Sa, E.-R. 375, 762 Saad, E.D. 915 Sabatine, M.S. 957 Saccavini, C. 527 Sachdeva, B. 975 Sacherek, L. 783, 784, 787 Sackeim, A.D. 367
Sackler, R.S. 950 Saconi, B. 643 Sadatsafavi, M. 436, 438, 460 Sadoughi, F. 644 Sadovnick, A.D. 896 Saeed, M. 703, 704, 709 Saffle, J.R. 680 Safran, C. 471 Sagae, K. 855 Sage, J. 877 Sager, N. 245, 248 Sahni, P. 762 Saitwal, H. 163, 168, 458 Saksono, H. 659 Salakhutdinov, R. 444 Salakoski, T. 254 Salanterä, S. 165 Salber, P. 918 Saldanha, A. 889 Saleem, J. 599 Salerno, R. 680 Sales, A.E. 684 Salomon, G. 144, 145 Salpeter, E.E. 109 Salpeter, S.R. 109 Salton, G. 774–776 Salzberg, C. 972, 978 Samal, L. 372 Samantaray, R. 501 Samore, M.H. 262, 706, 713, 714, 723 Samuel-Hodge, C. 367 Sanchez-Pinto, L.N. 933, 936 Sanctorius, S. 699 Sandberg, A. 399 Sandel, M.T. 560 Sander Connolly, E. 372 Sandercock, P. 455 Sanders, D.S. 51 Sanders, G.D. 106, 109, 116 Sanderson, I. 873 Sandor, S. 333 Sands, D. 380, 723, 833, 973 Sandy, L. 918 Saner, D. 959 Sanghvi, J.C. 282 Sangkuhl, K. 958 Sansone, S. 768 Sanson-Fisher, R. 672 Santorini, B. 265 Santoro, A.F. 933 Santos, S.L. 168 Santos Costa, V. 954 Saperia, D. 471 Saranummi, N. 381 Saria, S. 812, 813, 955 Sarin, S. 649 Sarkar, I.N. 869, 915, 933 Sarrafzadeh, M. 649 Sarrazin, M.V. 682 Sartipi, K. 826, 930 Satele, D. 658, 973
Sathanandam, S. 451 Sato, K. 675 Sauerwein, T.J. 684 Saunders, C.J. 960 Savage, S.J. 686 Savitz, L. 560 Savova, G.K. 246 Savulescu, J. 399 Sawhney, M.K. 159 Saxena, N. 858 Saxton, G.A. Jr. 725 Sayan, O.R. 719 Sayfouri, N. 644 Saygin, Y. 402, 403 Sazonov, E. 649 Scaffidi, M. 784 Scaletti, J.V. 683 Schadt, E.E. 890, 900 Schaefer, C.F. 879 Schaefer, J. 375, 564 Schaffner, K.F. 391, 395, 396, 411–413 Schaffner, M.J. 710 Schaltenbrand, G. 323 Schatzberg, M. 166 Schechter, C.B. 115 Schedlbauer, A. 488, 803 Scheraga, H. 283 Scherting, K. 721, 722 Scherzinger, A. 767 Schieble, T. 371 Schildcrout, J.S. 944, 950, 957–959 Schilling, L. 930 Schimel, A. 312 Schinnar, R. 833 Schmickl, C. 718 Schmidt, A.F. 954 Schmidt, D.E. 380 Schmitter-Edgecombe, M. 680 Schnall, R. 595, 601 Schneider, E.C. 971–973 Schneider, L. 209 Schneider-Kolsky, M. 744 Schnell-Inderst, P. 372, 972 Schnipper, J. 596, 824 Schnitzer, G. 251 Schnock, K. 167 Schoen, R.E. 443 Schoenberg, C. 371 Scholmerich, J. 707 Schoolman, H. 66, 770 Schork, N.J. 657 Schraagen, J.M. 159 Schrag, D. 972 Schreiner, M.N. 725 Schrodi, S.J. 951 Schroeder, J. 654, 657 Schuckers, S. 649 Schuemie, M.J. 927 Schuffham, P.A. 672 Schuger, C. 703, 711, 714 Schulte, F. 479
Schulte, P.J. 954 Schultz, E. 315 Schultz, J. 471 Schulz, S. 261 Schumacher, R.M. 167 Schumm, J.S. 164 Schvaneveldt, R.W. 122, 128 Schwab, M. 958 Schwab, R. 873 Schwamm, L.H. 681 Schwantzer, G. 166 Schwartz, D. 899 Schwartz, H.A. 246 Schwartz, J.S. 93 Schwartz, R.J. 475 Schwartz, S. 134 Schwartz, W.B. 29, 833 Scichilone, R. 645 Scofield, J. 654, 657 Scott, C.K. 652 Scott, G. 429, 436, 438, 442 Scott, J. 93 Scott, P. 109 Scott, S.A. 958 Scott, T. 977 Scoville, R. 974 Sculpher, M. 106 Scwartz, W.B. 68 Seal, R.L. 879 Searcy, T. 973 Secret, A. 758 Seebregts, C. 641 Segal, C.D. 975 Seger, A.C. 833 Seger, D.L. 833 Sejersten, M. 703 Selby, J.V. 926, 927 Senathirajah, Y. 168, 592 Senders J.W. 139 Sengupta, S. 157, 679 Senior, A.W. 286 Seo, J. 651 Sepucha, K.R. 973 Sepúlveda, M. 786 Serrato, C.A. 380 Serrichio, A. 164 Sessler, C.N. 700 Sessums, L.L. 167 Seto, E. 381 Setser, A. 931 Settersten, R.A. Jr. 377 Sevusu, P. 650 Shabo, A. 527 Shabot, M.M. 721, 722 Shabtai, I. 167 Shachter, R.D. 112, 115, 813 Shadbolt, N. 134, 645, 658 Shaffer, K.A. 813 Shafran-Topaz, L. 931 Shagina, L. 259 Shah, A. 198, 889
Shah, N. 368, 888 Shah, N.D. 679 Shah, N.H. 418, 492, 825, 833, 883, 886, 888, 889, 927 Shah, N.R. 833 Shah, S. 647, 874 Shahar, Y. 116, 804, 821, 827 Shakib, J.H. 193 Shalaby, J. 832 Shalev, N. 933 Shalin, V.L. 159 Shanafelt, T.D. 658, 973 Shapiro, D. 93 Shapiro, E. 281 Shapiro, J.S. 164, 169 Shapiro, L. 328, 332, 333 Shar, A. 655 Sharek, P.J. 711 Sharit, J. 128 Sharma, D.K. 825 Sharma, S. 417 Sharp, H. 127, 154, 158 Sharp, K. 63 Shaw, L. 105, 106 Sheikh, A. 440, 955, 976, 977 Sheikh, M.A. 654 Shekelle, P.G. 187, 188, 971, 980 Shen, S. 253, 262 Shen, Z. 649 Shendure, J. 954 Shenson, J.A. 363–384 Sheridan, S.E. 926, 927 Sheridan, S.L. 374 Sherifali, D. 493 Sherman, R.E. 447 Sherry, S.T. 878 Sherwood, L.M. 918 Shi, B. 932 Shi, J. 334 Shi, N. 378 Shi, Y. 932, 957, 959 Shianna, K.V. 892 Shields, A. 707 Shields, D. 193 Shiffman, R.N. 826 Shiffman, S. 650 Shiffrin, R.M. 125 Shih, S. 394 Shihab, H.M. 974 Shihipar, T. 652 Shimoni, K. 809 Shin, D. 245, 248 Shirey-Rice, J.K. 954 Shivade, C. 247 Shneiderman, B. 154, 168 Shooshan, S.E. 264, 886 Shortell, S. 552 Shortliffe, E.H. 6, 11, 12, 13n1, 18, 20, 21, 25, 29, 35, 36, 65, 122, 123, 127, 128, 141, 143, 154, 156, 170, 394, 408, 429, 519, 528, 531, 641, 786, 801, 812–815, 871, 972, 973 Shortliffe, T. 976
Shoultz, D.A. 701 Shubeck, K.T. 860 Shubin, H. 701 Shuldiner, A.R. 882 Shulman, L.S. 36, 67, 68, 70, 72, 127, 137, 139 Shuren, J. 979 Shwe, M.A. 813 Si, Y. 262 Siebert, U. 108, 972 Siebig, S. 707 Siegel, E.L. 675 Siegel, E.R. 265 Siegel, J.E. 106 Siegelaub, A. 370 Siegler, E. 591 Siek, K. 641, 659 Siewert, M. 705 Sifre, L. 286 Silberg, W. 763 Sim, I. 764 Simborg, D.W. 27, 208, 471 Simmons, B. 165 Simmons, D. 714 Simmons, R. 134 Simon, D.P. 127, 129, 134, 138 Simon, H. 125–127, 129, 133, 134, 138, 165 Simon, R. 750 Simon, S.R. 972 Simonaitis, L. 481, 483, 499, 721, 830, 832 Simpson, C.C. 658 Simpson, G. 683 Simpson, K.J. 725 Simpson, M. 767, 782, 785 Simpson, P. 247 Sinagub, J. 164 Sinclair, J. 779 Singer, B.D. 719 Singer, E. 895 Singer, J. 394 Singh, A. 310 Singh, H. 156, 159, 401, 515, 802, 832, 973, 979 Singh, K. 979 Singh, S. 812 Sinsky, C. 475, 521, 658, 801, 802, 973 Sirota, M. 877 Sirotkin, K. 878 Sisk, J.E. 801 Sistrom, C. 749 Sitapati, A. 873 Sittig, D. 146, 156, 159, 394, 401, 417, 515, 721, 722, 802, 804, 832, 833, 973, 979 Siu, A.L. 115 Sivaraman, V. 368 Skube, S.J. 493 Slack, M.A. 215 Slack, W. 370, 471 Slagle, J. 122, 125 Slayton, T.B. 402 Slight, S.P. 429 Sloan, J. 658 Sloboda, J. 134
Smalheiser, N. 782 Smelcer, J. 187 Smeulders, A. 339 Smigielski, E.M. 878 Smiley, R.A. 857 Smith, A.V. 888 Smith, B. 293, 886 Smith, C.A. 373 Smith, C.Y. 944, 950 Smith, D.M. 459 Smith, F.E. 455 Smith, J. 933 Smith, J.A. 36n23 Smith, J.C. 403 Smith, J.W. 66 Smith, L. 80, 82, 86, 253 Smith, M.K. 844 Smith, M.L. 700 Smith, P.C. 972 Smith, R.L. 896 Smith, S.N. 655 Smith, T. 283 Smith, W.L. 675 Smoyer, W.E. 933 Smyth, J.M. 652 Snow, V. 131 Snyder, M.K. 369 Snyder-Halpern, R. 188 Snyderman, R. 803 So, C.H. 258 Sobel, D.S. 375 Sobieszczyk, M.E. 933 Socher, R. 258 Soden, S.E. 960 Sohl, K. 684 Sohn, S. 246 Solbrig, H.R. 827 Sollins, K. 787 Solomon, A. 169 Solomon, M. 305 Somashekhar, S. 786 Somerville, I. 428, 435 Sondik, E. 13n3 Song, R.J. 948 Soni, H.C. 164 Sonnad, S. 93 Sonnenberg, F.A. 108, 109, 115 Sordo, M. 826 Sorensen, A. 308 Sorensen, B. 157 Sorensen, K. 378 Soto, G. 337 Soukoreff, R.W. 164 Soulakis, N.D. 244 South, B.R. 246, 253, 262 Southon, F. 599 Souza, J. 979 Sox, H.C. 96, 97, 105, 106, 110, 760 Soysal, E. 246 Spackman, K. 64, 222, 315, 881 Spasic, I. 248
Specchia, M.L. 188 Speedie, S. 442, 456 Speltz, P. 948 Spickard, A.III. 248, 252 Spiegelhalter, D. 427, 429, 455 Spijker, R. 782 Spilker, B. 918, 920 Spina, J.R. 427, 438, 459 Spineth, M. 232 Spitz, J. 959 Spitzer, V. 337, 767 Splinter, K. 951 Spock, B. 396 Sprafka, S.A. 36, 67, 68, 70, 72, 127, 137, 139 Spring, B. 377, 655 Spruijt-Metz, D. 650, 655 Sprusansky, O. 878 Spuhler, V.J. 706, 713, 715, 716 Spurr, C.D. 974 Squara, P. 704 Squires, D. 975 Srivastava, N. 444 Staccini, P. 381 Stacey, D. 974 Stafford, R. 501 Staggers, N. 188 Stahl, E. 945, 948 Stallings, S.C. 945 Stallings, W. 227 Stanfill, M. 785 Staniland, J.R. 810 Stanley, J. 441 Starchenko, G. 778 Starkes, J.L. 134 Starmer, C.F. 401 Starren, J. 122, 143, 157, 161, 162, 479, 486, 667–689, 725, 804, 914–915, 918, 920, 924, 933, 973 Staveland, L. 166 Stead, W. 209, 454, 455, 471, 584, 592, 596, 832–833 Stearns, M.Q. 64 Steegers-Theunissen, R.P. 379 Steehouder, M. 165 Steele, R.J. 648 Steen, E. 5, 7 Stefanelli, M. 132, 139 Stegle, O. 378 Stein, B. 142 Stein, C.M. 958 Stein, D. 493 Stein, D.M. 974 Stein, L.D. 872, 888 Stein, P.D. 112 Steinberg, D.M. 367 Steinbrook, R. 527 Stenetorp, P. 255 Stensaas, S. 323 Stenson, P.D. 878 Stephan, D.A. 893 Stephens, K. 703 Stephenson, P. 654 Sterling, M. 168
Stern, A. 371 Sternberg, R.J. 133 Stetson, P.D. 833 Stevens, R. 820 Stevens, R.H. 854 Stevermer, J. 760 Stewart, A.L. 375 Stewart, V. 448 Steyerberg, E.W. 876 Stiell, I.G. 972 Stipelman, C.H. 193 Stitziel, N.O. 954 Stockard, J. 369, 376 Stockman, G. 333 Stoeckle, J.D. 365, 369 Stohs, N. 647 Stone, A. 650 Stoner, J. 251 Store, M.-A. 886 Storey, J.D. 289 Stowell, E. 659 Strahan, R. 744 Stramer, K. 978 Strasberg, H.R. 825 Stratton, R.J. 641 Straus, S.E. 394 Street, R.L. Jr. 365, 372, 488 Streeter, A.J. 447 Strickland, N. 744 Strom, B. 144, 154, 155, 157, 429, 454, 469, 833 Stroulia, E. 680 Stubbs, R.J. 641 Stumpe, M.C. 955 Su, E. 762 Su, G.L. 684 Suarez-Kurtz, G. 958 Subramaniam, B. 333 Subramanian, A. 708, 888 Suchman, L.A. 143 Sukasem, C. 956 Sullivan, T. 187 Sumkin, J. 744, 745 Sumner, W. 105, 106 Sun, D. 682 Sun, H. 106 Sun, J.X. 704 Sun, M. 649 Sun, Y. 377, 649 Sun, Z. 859 Sundaram, V. 106 Sundsten, J. 322 Sung, H.-Y. 375 Sung, N.S. 871, 918 Surodina, S. 932 Sussman, S.Y. 125 Suther, T.W. 720 Sutherland, S.M. 833 Sutskever, I. 258, 444 Sutton, A.J. 93 Sutton, R.S. 656 Suzek, T.O. 881
Suzuki, M. 379 Swan, H.J. 704 Swangnetr, M. 168 Swanson, D. 782 Swanson, D.B. 137 Swanson, D.R. 268 Swanson, L. 323 Swanson Kazley, A. 710 Sweeney, L. 11, 403 Sweeney, T.E. 889 Sweet-Cordero, A. 877 Sweller, J. 129 Swendeman, D. 377 Swerdlow, D.I. 954 Swets, J.A. 90 Swetter, S.M. 955 Swigert, T.J. 684 Switzer, J.A. 681 Sykes, L. 394 Syn, T. 378 Syroid, N. 157 Syverson, G.D. 960 Syzdykkova, A. 501 Szczepura, A. 429 Szelinger, S. 893 Szleifer, I. 719 Sznajder, J.I. 719 Szolovits, P. 243, 393, 526, 709, 972, 978
T Tabor, H.K. 281 Tachinardi, U. 933, 936 Taft, T. 193 Taichman, D. 762 Tait, A.R. 705 Tajmir, S. 488 Takahashi, N. 956 Takahashi, P.Y. 679 Takesue, B.Y. 499 Talairach, J. 323 Talbot, T.B. 855 Taljaard, M. 972 Tallett, S. 168 Talmon, J. 429, 455 Talos, I. 346 Tamblyn, M. 978 Tamersoy, A. 403 Tamler, R. 954 Tan, Y.J. 109 Tanabe, L. 253 Tanenbaum, A.S. 227 Tanenbaum, J. 643 Tang, P. 210, 380, 467–501, 763, 801, 972, 973 Tanner, A. 725 Tannock, I.F. 957 Tapper, E.B. 684 Tarczy-Hornoch, P. 36, 803, 869, 871 Tariq, H. 706, 713 Tassini, K. 639
Tate, D.F. 367 Tate, K.E. 721, 722 Tatoglu, E. 166 Tatonetti, N.P. 882, 883, 885 Tavenner, M. 185, 802, 824 Taylor, A. 265 Taylor, B. 719 Taylor, C.F. 872 Taylor, H. 759 Taylor, P.M. 442 Taylor, R. 974 Taylor, S.L. 980 Tbahriti, I. 248 Tehranchi, F. 129 Teich, J. 471, 488 Teich, J.A. 797, 833 Teich, J.M. 797, 825, 830, 974 Teixeira, P.L. 948 Teller, L.E. 647 Tembe, W. 893 Temte, J. 801 Ten Kate, L.P. 803 Tenenbaum, J. 894, 895, 898, 899, 915, 933 Terao, C. 952 Terry, A. 429 Terry, S.F. 893 Terzic, A. 871 Testa, P.A. 684 Tettelbach, W.H. 725 Tewari, A. 655 Thabane, L. 436, 438, 460 Thakuria, J.V. 894, 895 Thatai, D. 703 Theroux, P. 954 Theurer, L. 680 Thielke, S.M. 167 Thiemann, D. 974 Thirion, B. 772 Thoma, G. 767, 785 Thomas, C.E. 891 Thomas, F. 696 Thomas, N.J. 889 Thompson, C.B. 188 Thompson, E.T. 221 Thompson, G. 726 Thompson, H.J. 680 Thompson, J.P. 707 Thompson, W.K. 946–949 Thomsen, G.E. 722 Thongprayoon, C. 719 Thorisson, G.A. 888 Thrall, J. 749, 898 Thrun, S. 955 Tian, W.S. 658 Tibshirani, R. 9, 256, 289, 805, 810, 811 Tidmarsh, P. 783, 784, 787 Tiefenbrunn, A.J. 701 Tiernan, E. 684 Tierney, W. 210, 455, 459, 496, 501, 974 Till, J.E. 262 Tilling, K. 945
Tillisch, K. 899 Tilson, H.H. 914–915, 918, 920, 924 Timpson, N.J. 945 Tiong, I.C. 711, 720, 721 Tirozzi, K.J. 560 Tofias, Z. 380 Toga, A. 304, 323 Tolbert, C.J. 374 Tolchin, S.G. 27, 208 Tollmar, K. 654 Tolson, A.M. 675 Tomczak, A. 887 Tomlinson, A.L. 442 Tommasi, T. 329 Tompkins, R.G. 704 Toms, E. 760 Tong, R. 782 Tonge, P.J. 893 Tonning, J.M. 265 Toomre, D. 312 Topaloglu, U. 927 Topaz, M. 931 Topić, G. 255 Topol, E.J. 379, 527, 657, 952 Toren, R. 251 Torkamani, A. 952 Torok, D. 930 Toronto, A.F. 701, 704, 710 Torrance, G.W. 106 Torriani, F.J. 33 Torstenson, E.S. 945, 951 Toth, M. 833 Tournoux, P. 323 Toutanova, K. 258 Towfigh, A.A. 372 Trafton, J. 822 Tran, A. 657 Tran, M.C. 371 Traub, S. 164, 170 Traugott, F.M. 704 Tremper, K. 705, 707 Trent Rosenbloom, S. 954 Triggs, B. 328, 330 Triola, M.M. 142 Tripathi, M. 395, 418 Troiano, R.P. 648 Tromp, G. 944, 945, 950 Troster, G. 649 True, M.W. 684 Truog, R.D. 408 Truong, K.N. 652 Tryka, K. 925 Trynka, G. 952 Tsai, J.Y. 950 Tsarkov, D. 343 Tse, T. 248, 768, 925, 932 Tsui, C. 784 Tsui, F.R. 246 Tsujii, J.I. 254, 255 Tu, D.C. 51 Tu, S. 116, 804, 821, 827
1126
Name Index
Tucker, C. 980 Tudor Car, L. 858 Tufte, E. 479 Tung, J.Y. 961 Turian, J. 258 Turkman, Y.E. 703 Turn, R. 532 Turnbull, J. 448 Turner, D. 477 Tusler, M. 369, 376 Tuttle, M.S. 222, 225 Tutty, M. 973 Tversky, A. 84 Tyrer, P. 455 Tysyer, D. 414
U Ulhmann, L. 488, 501 Ullman-Cullere, M. 803, 873 Uman, G. 760, 801 Unay, D. 782 Unertl, K.M. 158, 167, 959 Upatising, B. 679 Uscher-Pines, L. 979 Usha, M. 676 Uzuner, Ö. 253, 268
V Vaizman, Y. 651 Valaitis, R. 379 Valerio, M. 378 Vali, M. 246 Valiquette, L. 459 Valk, G.D. 375 Valle, D. 890, 891 Van, T. 684 van Beukering, M.D. 379 Van Busum, K. 972, 973 Van Calster, B. 876 Van Cura, L.J. 370 van de Vijver, M.J. 288 van der Does, E. 805, 835 van der Lei, J. 762, 805, 835 van der Sijs, H. 833 van Dijk, T.A. 127, 130 van Essen, D. 323, 333 van Gennip, E. 429 Van Houten, H. 679 Van Kleek, M. 645, 658 van Leemput, K. 335 van Noorden, S. 311 van Rijsbergen, C. 774 Van Thienen, K. 488, 501 van Walraven, C. 972 van Way, C. 444 Vankipuram, A. 170 VanLehn, K. 860 Van’t Veer, L.J. 288
Vapnik, V. 328 Varma, M. 329 Varmus, H. 891, 928 Varoquiers, C. 655 Vasan, R.S. 876 Vassy, J.L. 948 Vaughn, S. 164 Vawdrey, D. 167, 198, 497, 591, 706, 710, 712, 713, 715, 722 Vayena, E. 401 Vedantam, S. 952 Veinot, T. 659 Velu, A.V. 379 Vender, J.S. 700 Venkataraman, S.T. 711, 973 Venkatesh, V. 155 Venter, J.C. 870 Ventres, W. 448 Ventura, H.O. 973 Ventura, M. 860 Ver Hoef, W. 215 Vergese, A. 418 Vest, J.R. 971, 972 Vetizou, M. 899 Via, M. 878 Viaud, S. 899 Vicente, K.J. 122, 127, 142, 154, 158 Vickers, A.J. 876 Vickers, T. 767 Vidal, M. 890, 891 Viera, A. 374 Vigoda, M. 479 Villarroel, M. 703, 709 Vincent, A. 974 Vinson, D. 760 Visweswaran, S. 926 Vnencak-Jones, C.L. 959 Voepel-Lewis, T. 705 Vogel, L. 543–573 Voight, B.F. 953 Volk, L.A. 972 Voorhees, E. 781, 782 Vorhaus, D.B. 894, 895 Vorobeychik, Y. 889 Voronov, D. 253 Vorst, R.V. 972 Vosoughi, S. 762 Vreeman, D. 473 Vseteckova, J. 858 Vuckovic, N. 448 Vulto, A. 833 Vural, R. 780
W Wac, K. 639 Wacholder, N. 784 Wachter, R. 979, 994 Wachter, S.B. 157 Wager, K.A. 710
Wagner, D.P. 406, 812 Wagner, E.H. 365, 375 Wagner, G.S. 401 Wagner, M. 246, 515 Waitzkin, H. 365, 369 Wakefield, S. 899 Wald, J. 596, 973 Waldman, D.M. 686 Waldman, S.A. 871 Waljee, A.K. 684 Walji, M. 163, 168, 833 Walker, C. 779, 782 Walker, I. 641 Walker, J. 497, 516, 715, 804, 974, 978 Walker, M. 373 Walker, R.L. 246 Walker, S. 776 Wallace, A.G. 401 Wallace, C.J. 714, 722 Wallingford, K.T. 265 Walo, H. 710 Walter, L.C. 246 Walter, S.R. 368 Wan, Z. 889 Wang, A.Y. 64 Wang, C. 784 Wang, D. 70, 72, 378 Wang, E. 648, 649 Wang, J. 188, 246, 284, 339, 341, 477, 726, 824, 881 Wang, J.J. 394 Wang, J.K. 809 Wang, L. 954 Wang, P. 246, 854 Wang, S.A. 878 Wang, S.J. 198, 804, 974 Wang, Y. 33, 135, 881 Ward, D.S. 367 Ward, M.H. 878 Ware, J.E. Jr. 369 Warnekar, P. 881 Warner, H. 471 Warner, H.R. 701, 704, 710, 721 Warner, J. 873, 948 Warner, P.B. 193 Warren, J.J. 223 Warren, W. 323 Washington, V. 185, 194 Wasson, J. 444 Waterman, M. 283 Watson, D. 705 Watson, R.S. 711, 973 Watts, C.M. 719 Weal, M. 658 Wears, R.L. 122, 726 Weaver, C. 210, 497 Weaver, D.L. 283 Weaver, L.K. 714, 722 Weaver, R.R. 368 Weaver, W.D. 703, 711, 714 Webb, T. 649 Weber, G. 525, 926
Weber, L. 253 Weber, R.J. 723 Wechsler, L.R. 681 Weed, L. 471, 590 Wei, L. 280 Wei, W.-Q. 944, 945, 948, 950 Weibel, N. 651 Weibel, S. 772 Weil, M.H. 701 Weil, T. 569 Weilburg, J. 749 Weill, P. 564 Weinberger, M. 459 Weiner, J.B. 944 Weiner, J.P. 975 Weinfurt, P.T. 702 Weingart, S. 380, 723, 833 Weinger, M.B. 122, 125, 157, 158, 726, 832 Weinstein, S. 261 Weinstock, R.S. 157, 161, 162 Weir, B. 893 Weir, C. 193, 476 Weir, C.R. 167, 210 Weir, T.L. 282 Weiser, T.G. 719 Weiss, A. 653 Weiss, C.H. 719 Weiss, K.M. 475 Weiss, L. 684 Weissleder, R. 305, 311 Weissman, A. 167 Weissman, J. 723, 833 Weitzman, E.R. 379, 380 Welch, J.L. 641 Wellek, S. 442 Wells, S. 973 Wen, J.C. 648 Wen, X. 881 Wenderoth, M.P. 844 Weng, C. 914, 915 Wennberg, J. 369, 370, 760 Wenseleers, T. 896 Wensheng, W. 374 Wenzel, R. 762 Wessels, J. 312 West, B.J. 696 West, C.P. 658, 973 West, P. 645, 658 Westbrook, J. 784 Westenskow, D. 157, 704, 707–708 Weston, A.D. 872 Wetherall, D. 649 Wetterneck, T. 140, 167, 168, 715, 801 Wexler, R. 972, 973 Wharton, C. 161 Wheeler, M.T. 281 Whellan, D.J. 954 Whirl-Carrillo, M. 253, 956, 958 White, B.Y. 131 White, J.A. 954 White, K.D. 958
White, M.J. 956 White, P.J. 802, 995 White, R.W. 645 Whitehouse, P.J. 896 Whiting-O’Keefe, Q. 27, 208, 471 Whitlock, D. 337, 767 Whitlock, E.P. 115 Whitlock, W.L. 375 Whittaker, R. 642 Whittington, C. 649 Wickramasinghe, N. 564 Wiegandt, D.L. 253 Wigdor, D.J. 652 Wilbur, J. 253 Wilcox, L. 367 Wilczynski, N. 779, 804 Wilde, C. 725 Wildemuth, B. 782 Wilen, E. 899 Wiley, T. 655 Wilkinson, M. 762, 872 Will, J.C. 675 Willard, H.F. 803 Willems, J.L. 442 Willett, D.L. 482 Willett, L. 784 Williams, A. 133, 134, 684 Williams, D.H. 973 Williams, D. Jr. 995 Williams, M. 785, 803 Williams, R. 768, 925, 932 Williams, S.L. 676 Williamson, S.S. 382, 383 Willmann, J. 310, 311 Willson, D. 714 Wilner, D.G. 714 Wilson, D.J. 51 Wilson, E.J. 374 Wilson, L. 159 Wilson, R.F. 372 Wilson, S.R. 265 Wilson, T. 312 Wilson III, E.J. 374 Wilt, T.J. 115 Wineinger, N.E. 952 Winkler, C.G. 703 Winnenburg, R. 881 Wishart, D.S. 899 Wissner, E. 703, 711, 714 Witkiewitz, K. 655 Witten, D. 9, 256, 805, 810, 811, 954 Wiviott, S.D. 957 Wobbrock, J.O. 648 Wold, D. 215 Wolf, J.A. 458 Wolinsky, F.D. 455 Won, H.-H. 954 Won, W.Y. 639 Wong, A. 804, 935 Wong, B. 322 Wong, C. 444
Wong, H.R. 889 Wood, A.R. 952 Wood, D.L. 157 Wood, E.H. 704 Wood, L. 881, 931 Wood, M. 808, 833 Wood, S. 168, 881, 931 Wood, V. 887 Wood-Harper, T. 598 Woods, D.D. 139, 140 Woods, R. 337 Woods, S. 373, 380 Woolf, S. 527, 576, 596 Woolhandler, S. 560 Woollen, J. 367 Wootton, R. 688 Worthey, E.A. 960 Worzala, C. 801, 973 Wraith, S.M. 815 Wrede, C.E. 707 Wrenn, J. 476 Wright, A. 417, 440, 497, 721, 824, 825, 832 Wright, M.W. 879 Wright, P.C. 146 Wright, S.M. 853 Wright, W.R. 369 Wu, A.H. 956 Wu, B. 972 Wu, D. 952, 955 Wu, H. 776 Wu, J. 168, 258 Wu, S. 262, 477, 726, 824 Wu, Y. 246, 954 Wunderink, R.G. 719 Wunsch, C.D. 283 Wurth, R.C. 659 Wyatt, J. 425–460 Wyatt, S. 444
X Xia, S. 932 Xia, W. 889 Xiao, J. 881 Xiao, Y. 717 Xie, A. 167 Xie, L. 893 Xiong, H. 651 Xu, B. 995 Xu, H. 246, 768, 889, 954, 957 Xu, T. 975 Xu, X. 899 Xuan, D. 649
Y Yale, J.F. 145 Yaman, H. 780 Yamasaki, T. 649 Yamin, C.K. 973
Yan, J. 287 Yang, G.Z. 444 Yang, J. 781–782, 952 Yang, S. 145 Yang, W. 682 Yang, Z. 649 Yardimci, A. 780 Yardley, L. 649 Yasnoff, W. 11, 511–537 Yaviong, L. 956 Yavuz, E. 780 Ye, Y. 246 Ye, Z. 951 Yeager, V.A. 518 Yeh, H.-C. 974 Yildirim, M.A. 890, 891 Yilmaz, M. 725 Yoo, T. 324 Yoon, W. 258 Yoskowitz, N.A. 719 Youle, M. 106 Young, H.F.W. 710, 713, 715 Young, W. 323 Youngner, S. 407 Youngstein, T. 980 Yousef, G. 312 Yu, C. 782 Yu, F. 320 Yu, H. 268 Yu, P. 782 Yu, V.L. 812, 815 Yu, W. 106 Yu, X. 378 Yu, Y. 826 Yue, Y. 649 Yurchak, P. 670
Z Zafar, A. 496 Zahabi, M. 168 Zaharias, G. 847 Zalis, M. 317 Zarin, D. 768, 925, 932 Zarnke, K.B. 972 Zaroukian, M.H. 210 Zary, N. 858 Zayas-Cabán, T. 995 Zech, J. 473 Zeelenberg, C. 750 Zeiss, C. 950 Zeng, B. 899 Zeng, L. 899 Zeng, Q. 248 Zeng-Treitler, Q. 373, 595, 948, 949, 954 Zerhouni, E. 761, 871 Zettlemoyer, L. 258 Zhai, C. 776 Zhang, B. 890 Zhang, H. 248, 311, 335, 649, 653
Zhang, J. 128, 140–142, 154, 158, 160, 163, 168, 435, 719, 833, 881, 925 Zhang, M. 650 Zhang, P. 873 Zhang, R. 476 Zhang, S. 682, 782 Zhang, S.S.-M. 950 Zhang, S.X. 164 Zhang, W. 878 Zhang, X. 653, 782 Zhang, Y. 647, 709 Zhao, C. 950 Zhao, G. 377 Zhao, H. 682, 713, 714, 724 Zhao, W. 953 Zheng, B. 744 Zheng, J. 246 Zheng, K. 21 Zheng, L. 169 Zheng, M. 378 Zheng, P. 899 Zhenyu, H. 339 Zhong, L. 639 Zhou, C. 899 Zhou, L. 592 Zhou, S.H. 702, 703 Zhou, W. 642 Zhou, X. 377 Zhou, X.H. 455 Zhou, X.N. 932 Zhou, Z. 951 Zhu, H. 645 Zhu, J. 890 Zhu, M. 892 Zhu, Q. 892 Zia, J. 645 Zielinski, D. 63 Zielstorff, R. 580, 582 Zigmond, M.J. 290 Zijdenbos, A. 333 Zillessen, H. 995 Zillich, A.J. 427, 438, 459 Zimlichman, E. 978 Zimmerman, B. 20 Zimmerman, J.E. 812 Zimmermann, T. 643 Zink, R. 948, 949, 951 Zisserman, A. 329 Zitvogel, L. 899 Zody, M.C. 276 Zoll, P.M. 700 Zong, W. 708 Zuccon, G. 784 Zunic, A. 248 Zuriff, G.E. 125 Zweigenbaum, P. 248, 257
Subject Index A AAFP See American Academy of Family Physicians (AAFP) AAMC See Association of American Medical Colleges (AAMC) AAMSI See American Association for Medical Systems and Informatics (AAMSI) ACC See American College of Cardiology (ACC) Accelerated Digital Clinical Ecosystem (ADviCE) 1006 Access 786ff Accountable care organizations (ACO) 208, 404, 545, 586, 587, 976 Accredited canvass 214 Accredited Standards Committee (ASC) –– ASC X12 208, 214 ACO See Accountable care organizations (ACO) Active failure 141 ADA See American Dental Association and American Diabetes Association (ADA) Adaptive learning 862 ADE See Adverse drug event (ADE) AdEERS See Adverse Event Expedited Reporting System (AdEERS) Ad hoc standards development 210 ADL See Archetype Definition Language (ADL) ADMD See Administration Management Domain (ADMD) Admission-discharge-transfer (ADT) 207, 565 Advanced Cardiac Life Support 846 Advanced Informatics in Medicine (AIM) 222 Advanced Research Projects Agency (ARPA) 12 Advanced Trauma Life Support 846 Adverse drug event (ADE) 723 Adverse Event Expedited Reporting System (AdEERS) 932 Affordable Care Act of 2010 587, 589 After Scenario Questionnaire (ASQ) 166 Agency for Healthcare Research and Quality (AHRQ) 29, 489, 585, 765, 766, 974 Aggregated content 768 Agile software development model 189 AHA See American Heart Association (AHA) AHIMA See American Health Information Management Association (AHIMA) AHRQ See Agency for Healthcare Research and Quality (AHRQ) AIM See Advanced Informatics in Medicine (AIM) Alarms 701, 707ff Alerts 180, 182, 700, 923 Algorithms 26, 334 –– Smith-Waterman (See Smith-Waterman Algorithm) All Kids Count (AKC) program 626
Allscripts, Inc. 494 Alphanumeric sequence 774, 1019 AMA See American Medical Association (AMA) Amazon Mechanical Turk 649 Ambulatory medical record systems (AMRSs) 550, 579 –– See also Electronic health records American Association for Medical Systems and Informatics (AAMSI) 226 American College of Cardiology 822 American College of Radiology/National Electrical Manufacturers Association (ACR/NEMA) 208, 227, 742 American Dental Association (ADA) Standards 233 American Health Information Management Association (AHIMA) 415, 767 American Heart Association (AHA) 545, 660, 703, 822, 848 American Immunization Registry Association (AIRA) 626 American Medical Association (AMA) 221, 660 American Medical Informatics Association (AMIA) 25, 415, 592, 598, 787, 871 –– Nursing Informatics Special Interest Group 223 –– Summit on Translational Science 871 American National Standards Institute (ANSI) 214, 216, 816, 826, 827 American Nurses Association 415 American Psychiatric Association 221 American Public Health Association (APHA) 614 American Recovery and Reinvestment Act of 2009 (ARRA) 21, 556, 569, 801 American Society for Testing and Materials (ASTM) 208, 214 American Society of Anesthesiologists 847, 856 American Standard Code for Information Interchange (ASCII) 206 AMIA See American Medical Informatics Association (AMIA) Analog signal 51 Anatomical-Therapeutic-Chemical classification (ATC) 223 Anchoring and adjustment 84 Ancillary services 565 Annotated corpora (or content) 767–768 Anonymization 403 ANSI See American National Standards Institute (ANSI) ANSI X12 See ASC X12 under Accredited Standards Committee (ANSI X12) Antibiograms 1020 APACHE III Critical Care Series 581, 721 APIs (Application Programming Interfaces) 200 Apple Computer 414
Application Programming Interfaces (APIs) 645, 979ff Application research 32 Appropriate use 394 –– and educational standards 395 Archetype Definition Language (ADL) 219 Arden Syntax 198, 816ff, 826 Armed Forces Health Longitudinal Technology Application (AHLTA) 163 ARPA See Advanced Research Projects Agency (US Department of Defense) ARPAnet 12 ARRA See American Recovery and Reinvestment Act of 2009 (ARRA) Artificial intelligence (AI) 31, 852, 862, 954ff, 1000, 1001 ASC See Accredited Standards Committee (ASC) ASCII 206 ASC X12 214 Association of Academic Health Center Libraries 415 ASTM (American Society for Testing and Materials) 208, 214 –– ASTM E31 committee 214, 226 –– ASTM standard 1238 227 Asynchronous transfer mode (ATM) 1021 ATA See American Telemedicine Association (ATA) ATC See Anatomical-Therapeutic-Chemical classification (ATC) ATHENA system 820, 821 Atlases 321 ATM See Asynchronous transfer mode (ATM) Auditing 924 Augmented reality (AR) 858ff Availability heuristic 84
B Backbone networks 12 Background question 763 Backward chaining 814 Barcode scanner 716 BARN See Body Awareness Resource Network (BARN) Basic Local Alignment Search Tool (BLAST) 284 Basic research 32 Basic sciences 32 Bayesian diagnosis programs 810ff Bayes’ theorem 71, 93ff, 111, 341, 810, 811 –– cautions in application 98 –– derivation of 116 –– implications of 96 –– odds-ratio form 94 Baylor College of Medicine 960 Bedside monitor 700ff, 707 Bedside terminals 714 Behavioral research 915 Behaviorism 124, 125 Belief networks 112, 811 Best-of-breed systems 190, 191, 208, 547 Beth Israel-Deaconess Medical Center 471
Bibliographic content 764, 765 Bibliographic database 764, 766, 920 BICS See Brigham Integrated Computing System (BICS) Big data 4, 62, 237 Billing and coding systems 495 Billing systems 207 Biobanks 873 Biocomputation 23 –– progress in 869ff Bioinformatics 34, 869ff –– biomedical ontologies 291ff –– building blocks 278 –– clinical informatics 277ff –– curse of dimensionality 289ff –– database structure 293ff –– data sharing 290ff –– EcoCyc project 294 –– epigenetics data 282 –– ethical issues 404, 894, 896 –– expression data 281ff –– future challenges 295ff –– gene expression data 287ff –– genome sequencing data 280ff –– genomics explosion 279 –– information sources 275ff –– integrative database 294ff –– metabolomics data 282 –– metadata standards 291ff –– modern bioinformatics 278ff –– sequence alignment 284ff –– sequence information 279ff –– structural biology 280 –– structure analysis 283ff –– structure and function prediction 286ff –– systems biology 282ff Biomarkers 873ff Biomed Central (BMC) 761 Biomedical computing 23 –– See also Biocomputation Biomedical engineering 38 Biomedical informatics 20, 21ff –– and biomedical engineering 38 –– and biomedical science and medical practice 29ff –– and cognitive science 37 –– component sciences 36ff –– and computer science 37 –– definition 25 –– history 26ff Biomedical information retrieval 123, 247, 758ff, 920 –– See also Information retrieval Biomedical Information Science and Technology Initiative (BISTI) 24 Biomedical Research Integrated Domain Group (BRIDG) 214–215, 930 Biomolecular imaging 34, 738 Biosurveillance system 617 BI-RADS 317 BISTI See Biomedical Information Science and Technology Initiative (BISTI) Blackberry alphanumeric pager 722
BLAST See Basic Local Alignment Search Tool (BLAST) Blinding 917 Blog 767 Blood pressure (BP) monitoring 647ff, 703, 704 Blue Cross 551 –– See also Insurance Blue Shield 551 –– See also Insurance BMC See Biomed Central (BMC) BMJ See British Medical Journal (BMJ) Body Awareness Resource Network (BARN) 370ff Body of knowledge (BOK) 767 Boolean operators 775 Boolean searching 775 Branching logic 809 BRIDG See Biomedical Research Integrated Domain Group (BRIDG) Brigham and Women’s Hospital 592, 596, 597 British Medical Journal (BMJ) 768
C Cancer Biomedical Informatics Grid (caBIG) 319 Cancer Therapy Evaluation Program (CTEP) 931, 932 Canonical form 769 CAP See College of American Pathologists (CAP) Capitation 40 Cardiac imaging 749 Cardiac output 704 Case coordinator 579 Case manager 579 Cataloging 759 Catalogue et Index des Sites Médicaux Francophones (CISMeF) 772 Causal networks 890 CCD See Continuity of Care Document or charge coupled device (CCD) CCHIT See Certification Commission for Health Information Technology (CCHIT) CCOW See Clinical Context Object Workgroup (CCOW) CCR See Continuity of Care Record (CCR) CDA See Clinical Document Architecture (CDA) CDC See Centers for Disease Control and Prevention (CDC) CDER See Center for Drug Evaluation and Research (CDER) CDEs See Common Data Elements (CDEs) CDISC See Clinical Data Interchange Standards Consortium (CDISC) CDL See Constraint Definition Language (CDL) CD Plus 765, 766, 783, 920 –– See also Ovid CDR See Clinical data repository (CDR) CDS integration (of electronic health records) 801 CDSS See Clinical decision-support system (CDSS) CDW See Clinical data warehouse (CDW) Cedars-Sinai Medical Center 722
Cellular imaging 312 Cellular phones 639ff –– See also Smart phones CEN See Comité Européen de Normalisation (CEN) Center for Drug Evaluation and Research (CDER) 932 Center for Healthcare Information Management 415 Centers for Disease Control and Prevention (CDC) 64, 236, 489, 618, 626, 766 Centers for Medicare and Medicaid Services (CMS) 208, 584, 587, 589, 679, 686, 900, 977, 1058 Central computer system 546 Central Dogma of Biology 275, 278 Central processing unit (CPU) 27 Certification Commission for Health Information Technology (CCHIT) 185, 417 Challenge evaluations 781 Change of shift 719ff Check tags 770 Chemical Abstracts 223 CHESS See Comprehensive Health Enhancement Support System (CHESS) CHI See Consumer health informatics (CHI) Chief Information Officers (CIOs) 570 Chief Technology Officers (CTOs) 570 Children’s Hospital of Philadelphia 960 Children’s Mercy Hospital and Clinics 960 CHIN See Community Health Information Networks (CHIN) Chronic care management (CCM) 686 Chunking 133 CIMI See Clinical Information Modeling Initiative (CIMI) CINAHL (Cumulative Index to Nursing and Allied Health Literature) 765 –– Subject Headings 770 CISMeF See Catalogue et Index des Sites Médicaux Francophones (CISMeF) Citation database 767, 768 Citrix 190 Classification 759 Classroom technologies 844 Clinical algorithm 821 –– See also Algorithm Clinical and Translational Science Awards (CTSAs) 871, 925 Clinical Data Interchange Standards Consortium (CDISC) 214ff, 220, 930 Clinical data warehouse (CDW) 10, 920, 921 Clinical decisions 47, 749 –– See also Decision making; decision support Clinical decision-support system (CDSS) 600ff, 618, 700, 713, 797ff, 800, 920 –– See also Decision support systems Clinical Document Architecture (CDA) 231ff Clinical documentation 566 Clinical Element Models 220 Clinical evidence 767 Clinical Genomics Workgroup (HL7) 873 Clinical guidelines 9, 15, 115, 513, 553, 554, 568, 923 Clinical informatics 33, 277ff
Clinical Information Modeling Initiative (CIMI) 220 Clinical information systems 550 Clinical Language Annotation, Modeling, and Processing (CLAMP) 246 Clinically relevant population 91 Clinical models 217ff Clinical pathways 9, 567 Clinical practice guidelines 9, 15, 115, 513, 553, 554, 567, 923 –– See also Clinical guidelines Clinical prediction rules 85 Clinical research 7, 15, 58, 61, 403, 496, 914ff Clinical research informatics (CRI) 915ff Clinical research management systems (CRMS) 924ff Clinical trial management system (CTMS) 921, 923 Clinical trials 7, 10, 870 –– active phase 919 –– enrollment phase 919 –– Phase I 870, 918 –– Phase II 918 –– Phase III 918 –– Phase IV 918 –– randomized 58, 916ff ClinicalTrials.gov 768, 930, 932 Closed-loop therapy 567, 592, 725 Cloud computing 40, 588, 752, 921, 1028 CMS See Centers for Medicare and Medicaid Services (CMS) COBOL 206 Cochrane Database of Systematic Reviews 768 Coded terminologies 217ff Cognitive artifacts 122 Cognitive engineering (CE) 156 Cognitive heuristics 84 Cognitive science 23, 37, 122ff –– explanatory framework 124 Cognitive task analysis (CTA) 142, 158ff Cognitive walkthrough (CW) 146, 160ff Cohort discovery 922 Collaborative technologies 845 Collaborative workspaces 154 College of American Pathologists (CAP) 221 Columbia University 471, 474, 492, 494 Comité Européen de Normalisation (CEN; European Committee for Standardization) 208, 216 –– Technical Committee 251 (TC251) 208, 216ff, 223 Commercial off-the-shelf software (COTS) 185, 191 Commodity Internet 674ff Common Data Elements (CDEs) 930 Common Terminology Criteria for Adverse Events (CTCAE) 931, 932 Communications 12, 1050, 1053, 1056, 1061, 1075 –– See also Data communications Community-based research 914, 915 Community Health Information Networks (CHINs) 516 Comparative studies 444 Comprehensive Health Enhancement Support System (CHESS) 370ff
Computed radiography (CR) 741 Computed tomography (CT) 40, 306, 735 Computer-aided diagnosis (CAD) 745 –– See also Decision-support systems Computer-based monitoring 695 –– intensive care unit 715 Computer-Based Patient Record Institute (CPRI) 415, 516 Computer-based patient records (CPRs) 5, 49, 167ff, 179, 181, 190, 193, 236, 237, 380, 405, 468ff, 515ff, 519, 585, 592, 600ff, 760, 768, 801, 888, 915, 916, 920, 921, 927ff –– See also Electronic health records (EHRs) Computerized provider order-entry (CPOE) 15, 142, 373, 486, 516, 567, 599, 600ff Computerized Unified Patient Interaction Device (CUPID) 591 Computer literacy 41 Computers and Biomedical Research (journal) 24, 25 Computer science 37 Computer-simulated patients 846 Concept 769 Conceptual operators 142 Concept Unique Identifier (CUI) 771 Concordance 88 Conditional independence 98, 99 Conditional Random Fields 253 Confidentiality 11, 399, 413, 557 Confusion matrix 265 Congestive heart failure (CHF) 674 Consensus standards development 210 Consent 529 Constraint Definition Language (CDL) 220 Consumer health informatics (CHI) 34, 409, 669 Content standardization 596 Context 775 Context-specific information retrieval 807 Contingency tables 88 Continuity of Care Document (CCD) 595, 831 Continuous glucose monitor (CGM) data 645 Continuous Learning Health System 1001ff Continuum of care 55, 586 Contract management 568 Contrast radiography 311 Contrast resolution 310 Control intervention 917 Controlled terminologies 769ff, 820 Copyright 414 Coreference resolution 261 Correctional telehealth 669, 678 COSTAR (Computer-Stored Ambulatory Record) 471 Cost-benefit tradeoffs 500, 559, 804 Cost control 41, 973 COTS See Commercial off-the-shelf software (COTS) COVID-19 620, 623, 624 CPOE See Computerized provider order entry (CPOE) CPR See Computer-based patient record (CPR) CPT See Current Procedural Terminology (CPT) Creative Commons 415
CRI See Clinical research informatics (CRI) CRMS See Clinical research management systems (CRMS) Cryoelectron microscopy 279 CTCAE See Common Terminology Criteria for Adverse Events (CTCAE) CTEP See Cancer Therapy Evaluation Program (CTEP) CT imaging See Computed tomography (CT) CTMS See Clinical trial management system (CTMS) CTSA See Clinical and Translational Science Awards (CTSA) CUI See Concept Unique Identifier (CUI) Culture change 561 Cumulative Index to Nursing and Allied Health Literature (CINAHL) 765 CUPID See Computerized Unified Patient Interaction Device (CUPID) Curly braces problem 816 Current Procedural Terminology (CPT) 64, 221, 235
D Dana-Farber Cancer Institute 596 DARPA 12 –– See also Advanced Research Projects Agency (US Department of Defense) Dashboard 597, 748ff, 797 Data 66 –– access 6 –– acquisition 7, 51, 180ff, 580, 712ff, 805 –– capture 472 –– display 181, 479 –– entry 72 –– genetic 403 –– integration and standards 472 –– management 10, 801 –– medical 48, 58, 60, 61, 68, 72ff, 218, 227, 674, 712, 970, 979, 988 –– mining 883, 888 –– presentation 581 –– processing 581 –– quality 712ff –– recording 51 –– reuse 235, 484 –– sharing 399 –– storage 180ff, 576ff, 872ff –– summarization 181, 785 –– timeliness 712ff –– transformation 581 –– validation 479, 805 Data analysis 439ff Database 66 –– EBM (evidence-based medicine) 767, 768 –– genomics 34, 767, 768 –– phenotypic 34 –– regional 15 Database management systems 767 Database of Genome and Phenome See dbGAP Datum Data collection 439ff
Data communications 12, 1051, 1054, 1057, 1062, 1076 –– Internet 4, 12ff, 17ff, 39, 122, 220, 290, 371, 409ff, 453, 517, 546, 670, 679, 684, 751, 759ff, 780, 1051 Data-driven reasoning 139, 833 Data interchange standards 226ff Data standards 645ff dbGAP (Database of Genome and Phenome) 925 DCI See Dossier of Clinical Information (DCI) DCMI See Dublin Core Metadata Initiative (DCMI) Decentralized Hospital Computer Program (DHCP) 590 Decision analysis 813 Decision making 47 Decision nodes 102 Decision science 37 Decision support 749 Decision-support methodology 798, 806ff Decision support systems 5, 9, 16, 180, 182, 396, 488, 513, 559, 567, 797ff –– and alerting 721ff –– data-driven 721 –– human-computer interaction 122 Decision tree 100 deCODE Genetics 960 deCODEme service 960ff De-duplicated immunization 625 de facto standards development 210 De-identified data 515 Demand for trainees 20, 21 Dental informatics 33 Deoxyribonucleic acid (DNA) 278 Departmental systems 27, 547ff Department of Defense 219 Department of Veterans Affairs 396, 469, 473 –– See also Veterans Health Administration Dependent variables 441 Dermatologic imaging 312 DES See Data Encryption Standard (DES) Description logic 343 Descriptive studies 444 Design validation 434 Diagnosis 36, 79, 340ff, 397, 735, 745, 798 Diagnosis-Related Groups (DRG) 221, 587 Diagnostic and Statistical Manual of Mental Disorders (DSM) 221 DICOM See Digital Imaging and Communications in Medicine (DICOM) Differential diagnosis 68, 735, 1034 Digital Anatomist Dynamic Scene Generator 322 Digital Anatomist Interactive Atlases 322 Digital image 742 –– See also Image Digital Imaging and Communications in Medicine (DICOM) 208, 215, 314, 472, 675, 741ff, 747 Digital libraries 757ff, 763, 784, 786ff –– intellectual property 787, 788 –– preservation 787ff Digital Object Identifier (DOI) 787 Digital radiography 741 Digital subtraction angiography 326
Digital technology –– artificial intelligence and adaptive learning 862 –– assessment of learning –– analytics 860ff, 862 –– branching scenario 860 –– formative and summative 859 –– intelligent tutoring, guidance, feedback 860ff –– quizzes, multiple choice questions, flash cards, polls 859ff –– simulations 860 –– digital content –– augmented reality 858ff –– cases, scenarios and problem-based learning 860ff –– games 852ff –– interactive content 852ff –– levels of 851 –– mannequins 856ff –– procedures and surgery 856 –– production and verification 862 –– text/image/video content 851–852 –– three-dimensional (3D) printing 859 –– virtual patients 855ff –– virtual reality 858 –– virtual world 857ff –– interoperability standards 849ff –– just-in-time learning systems 849 –– LCMS 848 –– learner audiences –– health care providers 846ff –– patients, caregivers, and public 847ff –– undergraduate and graduate health care professions students 845ff –– learning environments 844ff –– LMS 848ff, 862ff –– performance support 849 –– real-time feedback 862 –– theories of learning 843ff –– usability and access 850ff Directionality of reasoning 134 Direct-to-consumer (DTC) 959ff Discordance 88, 89 Discrete-event simulation models 114 Distributed cognition 143ff Distributed cognition (DCog) 156 DNA methylation 277 DNS See Domain Name System (DNS) Doctoral dissertations 37 DOI See Digital Object Identifier (DOI) Domain expert 134 Dossier of Clinical Information (DCI) 144 Double-blinded, randomized, placebo-controlled trial 917 Double blinding 61, 917 Draft standard for trial use (DSTU) 213 Drawings in patient records 52 DRG See Diagnosis-Related Groups (DRG) Drug codes 223–225 Drug repurposing 877 DSL See Digital Subscriber Line (DSL) DSM See Diagnostic and Statistical Manual of Mental Disorders (DSM)
DSP See Digital signal processing (DSP) DSTU See Draft standard for trial use (DSTU)
E EBCDIC 206 EBM See Evidence-based medicine (EBM) ECG See Electrocardiogram (ECG) Ecological momentary assessment (EMA) 656ff eCRFs See Electronic case report forms (eCRFs) EDC See Electronic data capture (EDC) EDIFACT See Electronic Data Interchange for Administration, Commerce, and Transport (EDIFACT) Education 18, 35, 738 –– computers in 4ff, 18 EHRs See Electronic health records (EHRs) EKG See Electrocardiogram (EKG) El Camino Hospital 471 Electrocardiogram (ECG or EKG) 698, 705, 708 –– automated analysis of 703 Electrocardiographic Society 703 Electroencephalography 304 Electronic case report forms (eCRFs) 924 Electronic data capture (EDC) 921, 923, 924 Electronic data interchange (EDI) 568 Electronic Data Interchange for Administration, Commerce, and Transport (EDIFACT) 208, 233 Electronic health information exchange (HIE) 621 Electronic health records (EHRs) 5, 49, 167ff, 179, 181, 190, 193, 236, 237, 380, 405, 468ff, 515ff, 519, 585, 592, 600ff, 760, 768, 801, 888, 915, 916, 920, 921, 927ff –– certification 213 –– clinician order entry (See CPOE) –– distributed cognition 143ff –– precision medicine –– genomic discovery 944ff –– research-grade phenotypes 945ff Electronic-long, paper-short 1037 –– See also ELPS (electronic-long, paper-short) Electronic mail 12 –– See also E-mail Electronic medical records (EMRs) 55, 556, 569 –– See also Electronic health records Electronic Medical Records and Genomics (eMERGE) network 945ff Eligibility criteria 821 ELPS (electronic-long, paper-short) 1037 E-mail 12 EMBASE 765 –– EMTREE 770 Emergency department 695 –– See also Emergency room
Emergency department (ED) 679, 680 Emergency room (ER) 695 eMERGE Research Network 888 EMRs (Electronic Medical Records) 55, 556, 569 –– See also Electronic Health Records ENCODE (Encyclopedia of DNA Elements) 893 Encryption 13 ENIAC computer 26 Enrichment analysis 886 Enrollment 919 Enterprise Resource Planning (ERP) systems 569 Entrez system 881, 1038 Entry term 769 ePad 321 Epic Systems 484 Epidemiology 26, 932 Epigenetics 282, 870 Epigenome 277 Epistemologic framework 132, 133 E-prescribing 972 ER See Emergency room (ER) Error 140, 141, 583 Error checking 14 Error matrix 265 Ethernet 743 Ethics 391ff, 894ff, 896ff Ethnography 448 ETL See Extraction-transformation-load (ETL) EudraCT (European Union Drug Regulating Authorities Clinical Trials) 932 European Committee for Standardization Technical Committee 208, 216 –– See also Comité Européen de Normalisation European Union Drug Regulating Authorities Clinical Trials 932 –– See also EudraCT Evaluation 425ff, 536 –– communicating results 452ff –– contract 431 –– data analysis 451 –– data collection 451 –– information retrieval 779ff –– negotiation phase 431 –– study types 437 –– system oriented 780 –– user oriented 782ff Evaluation metrics 265ff Event-condition-action (ECA) rules 816 Event monitors 567 Evidence-based guidelines 9 –– See also Clinical guidelines Evidence-based medicine (EBM) 9, 404, 763, 779, 783 Exact-match retrieval 775 Experimental intervention 917 Experimental robotics 279 Experimental science 32 Expert 134 Expertise 134 Expert systems 394
Extended Binary Coded Decimal Interchange Code See EBCDIC eXtensible Markup Language 321, 826, 932 –– See also XML Extension for Community Healthcare Outcomes (ECHO) 689ff External validity 920 Extraction-transformation-load (ETL) 925 Extreme programming 190
F Falls prevention 597, 598 Falls risk assessment 597 False alarms 708 Fast Healthcare Interoperability Resources (FHIR) 232–233, 237, 475, 646 FDA See Food and Drug Administration (FDA) FDAAA See Food and Drug Administration Amendments Act of 2007 (FDAAA) FDDI See Fiber Distributed Data Interface (FDDI) Feasibility analysis 920 Federal Government 1013ff Federal Trade Commission 414 Feedback 682, 737 –– See also Haptic feedback Fee-for-service 550 Fellowship 931, 932 –– See also National Library of Medicine postdoctoral fellowship FHIR See Fast Healthcare Interoperability Resources (FHIR) Fiber Distributed Data Interface (FDDI) 743 Field function study 436 Field user effect study 436 Fifth generation wireless systems (5G) 677 Financial savings 803 –– See also Cost-benefit tradeoffs Findable, accessible, interoperable and re-usable (FAIR) 1013 Fitts law 163 Flexner report 471 FM See Frequency modulation (FM) Focus groups 164ff Food and Drug Administration (FDA) 225, 413–415, 558, 723, 752, 881, 882, 919, 930ff Food and Drug Administration Amendments Act of 2007 (FDAAA) 932 Force feedback 682 –– See also Haptic feedback Foreground question 763 Forward chaining 814 Foundational Model of Anatomy 316, 322 FTP See File Transfer Protocol (FTP) Full-text content 765ff Functional imaging 303, 738 Functional magnetic resonance imaging (fMRI) 304 Functional mapping 738
1138
Subject Index
Fundus on phone (FOP) 676 Future of EHRs 11, 169 Future perspectives –– COVID-19 pandemic 932ff –– Federal Government 1012ff –– health policy 1005ff –– HIT 1010ff –– informatics education and informatics practice 993ff –– informatics research 993 –– institution-centric electronic health record 1008 –– learning health system 993 –– nursing informatics 995, 1004ff –– payment models and reimbursement 1007 –– precision medicine 995, 997ff –– social networks and telemedicine 1009 –– speech and gesture recognition 1009 –– translational bioinformatics 995, 997ff –– translation and opportunities 995, 1000ff
G GALEN 222, 315 GALEN Representation and Integration Language (GRAIL) 223 GELLO 826 GenBank 279 Gene expression data 287ff, 888, 889 Gene expression microarrays 276 Gene Expression Omnibus (GEO) 279 Gene Ontology (GO) 225, 291 General Electric Corporation 220, 705 Genes 278 Genetic Information Nondiscrimination Act (GINA) 414, 894 Genetic risk score (GRS) 952 Genetic sequencing 279 Genome 278, 870 Genome-wide association studies (GWAS) 873, 882, 944, 946, 950 Genomic medicine 872 Genomics 275, 276 Genomic sequencing 950ff Genotype-based databases 277 Geographic Information System (GIS) 617, 621 Georgetown Home Health Care Classification (HHCC) 223 Germline pharmacogenomics 958ff GINA See Genetic Information Nondiscrimination Act (GINA) GIS See Geographic information systems (GIS) GLIF See GuideLine Interchange Format (GLIF) Global health 631ff Global Health Security Agenda (GHSA) 631 Global processing 325 Global Trachoma Mapping Project (GTMP) 632 GO See Gene Ontology (GO) Goals, Operators, Methods and Selection (GOMS) rules 162ff Google 774, 779
Google Scholar 767 Governance 523 Government roles 15, 210 GPGPU See General purpose graphical processing unit (GPGPU) GPS 645 GPU See Graphical processing unit (GPU) GRAIL See GALEN Representation and Integration Language (GRAIL) Granularity 775 Graphical user interfaces (GUI) 154 Graphs 889 Group Health Cooperative 596 Growth charts 57 GS1 215 Guideline Element Model (GEM) 826, 827 GuideLine Interchange Format (GLIF) 827 Guidelines 9, 15, 115, 513, 553, 554, 566, 923 –– See also Clinical guidelines
H Handovers 715, 719 Haptic feedback 682, 737 Hardcoding clinical algorithms 809 Harvard University 895 HCI See Human-computer interaction (HCI) Healthcare Effectiveness Data and Information Set (HEDIS) 629 Healthcare financing 39 –– See also Insurance Healthcare informatics 20, 21ff –– See also Biomedical informatics Healthcare Information and Management Systems Society (HIMSS) 168, 660 Healthcare information systems (HCIS) 545ff Healthcare organizations (HCOs) 547ff Health care providers 846ff Healthcare team 51 Health disparities 659, 956ff Health Evaluation through Logical Processing system 590, 696, 710, 715, 721 –– See also HELP System Health Industry Business Communications Council (HIBCC) 233 Health Industry Business Communications Council Standards 233 Health information and communications technology (HICT) 669 –– See also Health information technology Health information exchanges (HIEs) 493, 516, 572, 977ff Health information infrastructure (HII) 21, 493, 512ff, 633 Health information technology (HIT) 1005ff, 1010ff –– See also EHRs, CPOE, departmental systems, HIE, Usability Health Information Technology for Economic and Clinical Health Act (HITECH) 517
Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH) 21, 395, 405, 414, 500, 556, 573, 579, 584, 801, 977ff Health information technology (HIT) policy 970ff Health Insurance Portability and Accountability Act of 1996 (HIPAA) 14, 402, 414, 470, 519, 558, 589, 628, 677 Health Level 7 (HL7) 14, 198, 211, 227, 231, 474, 549, 550, 710, 749, 807, 825, 826, 873, 930, 932 –– Clinical Document Architecture (CDA) 930 –– Clinical Genomics Workgroup 873 –– Individual Case Safety Reports 932 –– Reference Information Model (RIM) 219, 232 –– Terminfo 219 Health literacy 374 Health maintenance organizations (HMOs) 208 Health NLP (hNLP) 243ff, 476, 805, 888 –– See also Natural language processing (NLP) Health on the Net Foundation (HON) 763 –– HON Select 765 Health policy 1005ff Health promotion 15 Health Record Banking Alliance (HRBA) 524, 526, 527 Health Record Banks (HRBs) 524ff, 572 Health science education 36, 592, 833, 843ff, 991 –– See also Digital technology HealthVault See Microsoft HealthVault HELP System 590, 696, 710, 715, 721 Heritage Foundation 527 Heuristic evaluations (HE) 160 Heuristics 66 –– See also Cognitive heuristics HGNC See HUGO Gene Nomenclature Committee (HGNC) HHCC See Home Health Care Classification (HHCC) HIBCC See Health Industry Business Communications Council (HIBCC) Hick-Hyman law 164 HICT See Health information and communications technology (HICT) Hidden Markov Models 253 HIEs See Health information exchanges (HIEs) Hierarchical task analysis (HTA) 158 Hierarchy 769 High-bandwidth connections 670 High blood pressure 247 HII See Health information infrastructure (HII) HIMSS See Healthcare Information and Management Systems Society (HIMSS) Hindsight bias 140 HIPAA See Health Insurance Portability and Accountability Act (HIPAA) HIS See Hospital information systems (HIS) HIT See Health information technology (HIT) HITECH See Health Information Technology for Economic and Clinical Health Act (HITECH) HITECH Act of 2009 1003 HL7 See Health Level 7 (HL7)
HL7 Clinical & Administrative Domains 230 HL7 Current Projects & Education 231 HL7 EHR Profiles 231 HL7 Fast Healthcare Interoperability Resources 232 HL7 Foundational Standards 230 HL7 Implementation Guides 231 HL7 Primary Standards 230 HL7 Standards 230 HL7 Standards Rules & References 231 HMOs See Health maintenance organizations Hodgkin’s lymphoma 252 Home Health Care Classification (HHCC) 223 Home telehealth 669, 678ff Home Telemedicine Units (HTU) 679 HON See Health on the Net Foundation (HON) Hospital-acquired infections 696 –– See also Nosocomial infections Hospital information systems 26, 182, 207, 550 HTML (HyperText Markup Language) 772 HTTP See HyperText Transfer Protocol (HTTP) HUGO Gene Nomenclature Committee (HGNC) 225 Human Brain Project 323 Human-computer interaction 122, 154ff –– See also Usability Human-computer interaction (HCI) –– clinical workflow (See Workflow) –– innovative design concepts 154 –– role of 155 –– theoretical foundation 155ff –– usability (See Usability) Human Disease Network 890 Human factors 122, 139ff –– See also Usability Human Gene Mutation Database (HGMD) 277 Human Genome Project 62, 237, 275ff, 870ff Hypertension 247, 258 HyperText Markup Language 772 –– See also HTML (HyperText Markup Language) Hypotension 258 Hypothesis-driven reasoning 137 Hypothesis generation 68, 71 Hypothetico-deductive approach 67, 81, 137
I IAIMS See Integrated Academic Information Management Systems (IAIMS) I2b2 (Informatics for Integrating Bench and Bedside) 929 ICANN See Internet Corporation for Assigned Names and Numbers (ICANN) ICD-9 See International Classification of Diseases, Ninth Revision ICD-10 See International Classification of Diseases, Tenth Revision (ICD-10) ICD-9-CM See International Classification of Diseases, Ninth Revision-Clinical Modifications ICD-10-CM See International Classification of Diseases, Tenth Revision-Clinical Modifications (ICD-10-CM)
ICHPPC See International Classification of Health Problems in Primary Care (ICHPPC) ICMP See Internet Control Message Protocol (ICMP) Icons ICNP See International Classification of Nursing Practice (ICNP) ICPC See International Classification of Primary Care (ICPC) IDEATel See Informatics for Diabetes Education and Telemedicine (IDEATel) IDF See Inverse document frequency (IDF) IDF*TF weighting 774 –– See also TF*IDF weighting IDNs See Integrated delivery networks (IDNs) IEEE See Institute of Electrical and Electronics Engineers (IEEE) IHE See Integrating the Healthcare Enterprise (IHE) IHTSDO See International Health Terminology Standards Development Organization (IHTSDO) Image –– acquisition 305ff, 741 –– biomolecular 738 –– content representation 313 –– content segmentation 332ff –– database 767 –– enhancement 325 –– exchange 751 –– interpretation 301ff, 338ff, 739 –– link-based 774 –– metadata 314 –– patches 328 –– quality 310ff –– quantitation 325, 340 –– registration 336ff –– rendering 325 –– retrieval 338 Image compression 670 Image-guided procedures 737 Image processing 301, 323ff Imaging informatics 33, 300ff, 734, 1047 Imaging systems 734ff IMIA See International Medical Informatics Association (IMIA) Immersive simulated environments 154 Immunization Information Systems (IIS) –– data quality 629 –– functions 625 –– funding and sustainability 628 –– governance issues 628ff –– history, context and success 625ff –– interdisciplinary communication 627ff –– legislative and policy issues 628 –– monitoring 629ff –– provider and program levels 624ff –– stakeholder collaboration 627 –– system design and information architecture 630ff Immunization registries 619, 624, 1048 –– See also Immunization Information Systems (IIS) Implementation science 918
Incentive Programs for Electronic Health Records rule 584 Independent variables 83, 441 Indexing 774ff –– automated 769, 775ff –– controlled terminologies 771ff –– manual 769, 773ff Indexing 618ff, 758 Index Medicus 758 Indirect care 580 Individualized medicine 63, 872, 943 –– See also Personalized medicine Infectious disease monitoring 720 Inference 339 Influence diagrams 813 Infobutton 484, 490, 492, 797, 807 InfoRAD 1049 Informatics for Diabetes Education and Telemedicine (IDEATel) project 679 Informatics for Integrating Bench and Bedside 929 –– See also i2b2 Information 23, 66 –– extraction 785 –– management 545ff –– model 318 –– nature of 30ff –– need 759 –– needs 801 –– requirements 552 –– resources 427 –– science 23, 37 –– structure 30 –– theory 23 Information retrieval (IR) 123, 247, 755ff, 920 –– evaluation 779ff –– exact-match 775ff –– partial-match 776, 777 –– retrieval systems 777ff Inspection methods –– cognitive walkthrough 160ff –– heuristic evaluations 160 –– role of 159ff Institute of Electrical and Electronics Engineers (IEEE) 208, 216, 227 Institute of Medicine (IOM) 500, 583, 584, 591, 615 Institutional Review Boards (IRBs) 403, 919 Insurance –– Blue Cross 551 –– Blue Shield 551 –– Medicaid 551 –– Medicare 551 Integrated circuits 702 Integrated delivery networks (IDNs) 402, 545ff, 585, 586 Integrated Service Digital Network (ISDN) 677 Integrating the Healthcare Enterprise 216 Integration 11, 749 Integrative database 294ff Integrative models 275
Intellectual property 414, 787, 788 Intelligent Tutoring Systems (ITS) 860ff Intensive care unit (ICU) 681ff, 695 –– neonatal (NICU) 695 –– surgical (SICU) 700 Interactions 889 Interactive content 852ff Interdisciplinary care 579 Interface engine 550 Intermediate effect 135 Intermountain Healthcare 488, 706, 716 Internal Revenue Service 208, 220 Internal validity 920 International Classification of Diseases 64, 220, 235, 805, 931 –– Ninth Revision (ICD-9) 220–224, 889 –– Ninth Revision-Clinical Modification (ICD-9-CM) 220, 221 –– Tenth Revision (ICD-10) 64, 220, 224 –– Tenth Revision-Clinical Modification (ICD-10-CM) 220, 224, 931 International Classification of Nursing Practice (ICNP) 223 International Council of Nurses 223 International Health Terminology Standards Development Organization (IHTSDO) 217, 222, 593 International Medical Informatics Association (IMIA) 599 International Organization for Standardization (ISO) 206, 216, 772 –– ISO/CEN 13606 219 –– ISO Standard 1087 220 –– Technical Committee 215 (Health Informatics) (TC215) 208, 216 Internet –– development of 12 Internet of Things (IoT) 558, 674 Interoperability 9, 235, 571, 787, 788, 828–830, 922 Interoperability standards 849ff Intervention 917 Interventional radiology 739 Interviews 164ff, 451 Intravenous (IV) pump 696, 723ff Inverse document frequency (IDF) 774 IOM See Institute of Medicine (IOM) IPA See Individual practice associations iPad 713 iPhone 713 IR See Information retrieval (IR) ISDN See Integrated Services Digital Network (ISDN) ISG See Internet support group (ISG) ISO See International Organization for Standardization (ISO) ISP See Internet service provider (ISP) IV pump See Intravenous (IV) pump
J JAMIA See Journal of the American Medical Informatics Association (JAMIA) JBI See Journal of Biomedical Informatics (JBI) JCAHO 236, 554, 587 –– See also Joint Commission, The JIC See Joint Initiative Council (JIC) Jobs 18, 738, 899 –– See also Training Joint Commission, The (TJC; formerly, Joint Commission for Accreditation of Health Care Organizations; JCAHO) 236, 554, 587 Journal of Biomedical Informatics (JBI) 24, 35, 871 Journal of the American Medical Informatics Association (JAMIA) 871 JSON See JavaScript Object Notation Judgment 58, 81, 83, 84, 123, 141, 157, 210, 395, 405, 407, 412, 448, 457, 601, 783, 786, 797, 800, 859 Just-in-time adaptive interventions (JITAIs) 655ff Just-in-time learning systems 849
K Kaiser Health System 26 KEGG See Kyoto Encyclopedia of Genes and Genomes (KEGG) Key performance indicators (KPIs) 748 Keystroke-Level Model (KLM) 162ff, 169 KF See Knowledge Finder (KF) Knowledge 66 –– acquisition 823 –– discovery 785 –– evidence-based 22 –– organization of 129 –– representation 314 Knowledge base 66, 813, 822 –– reasoning 343 Knowledge-based systems 813 –– See also Expert systems Knowledge Finder (KF) 783
L Labor and delivery suites 695 Laboratory function study 435 Laboratory information system (LIS) 182 Laboratory user effect study 436 LANs See Local-area networks (LANs) Latency 682 Latent conditions 140 Latent Dirichlet Allocation (LDA) 250 Latent failure 141 LCD See Liquid crystal display (LCD) LDS Hospital 26, 489, 718ff, 722, 1043, 1045 Leadership 501 LEAN 562 Learning analytics 861ff, 862 Learning Content Management System (LCMS) 848 Learning health system 16, 20, 833, 936ff
Learning management system (LMS) 851ff, 862ff Learning Tools Interoperability (LTI) 850 LED See Light-emitting diode (LED) Leeds abdominal pain system 835 Legacy systems 592 Legal issues 56, 411 –– See also Regulation Lexical-statistical retrieval 776 Lexicon 747 Liability under tort law 411 Library 786 Library of Congress 788 Light-emitting diodes (LEDs) 647 Likelihood ratios 94 Linguistic knowledge 254 –– document-level representation 260 –– ontological knowledge 256 –– pragmatics 261 –– semantics 259 –– sentence boundary detection 259ff –– spelling variants and errors 258 –– syntactic knowledge representation 259 –– terminological knowledge 256 –– tokens 257ff –– word embedding 257 LIS See Laboratory information system (LIS) Literature reference databases 764 Local Area Networks (LANs) 5, 548, 742 Local health departments (LHDs) 620 Lockheed Corporation 27 LOCKSS project 788 Logical Observation Identifiers Names and Codes (LOINC) 225–227, 235 Logical Record Architecture 220 LOINC See Logical Observation Identifiers Names and Codes (LOINC) Long-term memory 129 Lossless compression 743 Lossy compression 743 Lots of Copies Keep Stuff Safe 788 –– See also LOCKSS project
M M (computer language) 547 –– See also MUMPS Machine learning 725, 811, 955 Magnetic-resonance imaging (MRI) 40, 305, 309, 735 Magnetic-resonance spectroscopy 305 Magnetoencephalography 305 Mainframe computers 546 Malpractice 412 Managed care 40 Management of chronic diseases 679 Management science 37 Mannequins 857ff MAP See Mean average precision (MAP) Markov models 108 Massachusetts General Hospital (MGH) 26, 37, 596
Massive Open Online Courses (MOOCs) 852 Master patient index (MPI) 565 Mastery learning 851 Mayo Clinic 471, 704, 711, 713, 715, 717, 720, 725 Mean average precision (MAP) 781 Meaningful use (of electronic health records) 16, 21, 185, 194, 557, 935, 970 Means-ends analysis 126 Mean time between failures (MTBF) 531 Measurement 441 Medbiquitous 850 MedDRA See Medical Dictionary for Regulatory Activities (MedDRA) Medicaid 208, 584, 587, 589, 679, 686, 900, 977, 1056 –– See also Centers for Medicare and Medicaid Services (CMS); Insurance Medical cognition 124, 132 Medical College of Wisconsin 960 Medical computer science 23 Medical Data Interchange Standards 227 Medical devices 39 Medical Dictionary for Regulatory Activities (MedDRA) 235, 931, 932 Medical errors 123 Medical expertise 137 Medical home 579 –– See also Patient centered medical home Medical informatics 29 –– See also Biomedical informatics Medical Information Bus (MIB) 216, 706, 715 Medical Library Association 415 Medical Literature Analysis and Retrieval System (MEDLARS) 758, 771 Medical logic modules (MLMs) 198, 817ff Medical record committees 403 Medical records 5, 49, 167, 179, 181, 190, 193, 236, 237, 380, 405, 468, 516, 519, 585, 592, 604ff, 760, 768, 801, 888, 915, 916, 920, 921, 929ff –– See also Electronic health records (EHRs) Medical Subject Headings (MeSH) 225, 317, 758, 770, 771ff Medicare 208, 584, 587, 589, 679, 686, 900, 977, 1055 –– See also Centers for Medicare and Medicaid Services (CMS); Insurance Medication reconciliation (MedRec) tools 168 MEDINET project 26 MEDLARS See Medical Literature Analysis and Retrieval System (MEDLARS) MEDLARS Online 581, 758, 764, 777ff, 781ff, 783 MEDLINE 581, 758, 764, 771ff, 783 MEDLINEplus 489, 768, 881 MedWISE 592 Memory (human) 128, 129 Mendelian randomization (MR) 954ff Mental images 131 Mental models 131 Mental representations 127 Merck Medicus 1057 MeSH See Medical Subject Headings (MeSH) Messaging 209 Messenger RNA (mRNA) 278
Meta-analysis 92–93 Metabolome 275, 276 Metabolomics 282, 874 Metadata 758, 872, 921 Metagenome 275 Metagenomics 281, 1057 Metathesaurus (UMLS) 225, 770 MGH See Massachusetts General Hospital (MGH) MGH Utility Multi-Programming System (MUMPS) 547 mHealth 674 –– See also Mobile health MHS See Military Health System (MHS) MiCare 603 Microarray chips 287ff, 888, 889 –– See also Gene expression data Microcomputers 27 Microfluidics 279 Microlearning 851 Micromedex 490, 581 Microprocessor 27, 702 Microsimulation models 114 Microsoft 527 Microsoft Bing 774, 779 Microsoft, Inc. 414 Microsoft Research 1001 MIF See Model Interchange Format (MIF) Military Health System (MHS) 603 MIMIC-II (Multiparameter Intelligent Monitoring in Intensive Care) 725 Minicomputers 547, 702 Misspellings 258 Mistakes 141 MIT See Massachusetts Institute of Technology (MIT) Mixed-initiative dialog 1058 MKSAP See Medical Knowledge Self-Assessment Program (MKSAP) MLMs See Medical logic modules (MLMs) MMS See Massachusetts Medical Society Mobile health (mHealth) 674 Mobile health (mHealth) applications –– advancements 639ff –– clinician work, changes in 658 –– data access 645 –– data standards 645 –– evolution of –– PDAs and cellular phones 642 –– smartphones and tablets 642 –– wearable devices 643 –– future of 660 –– health disparities 659 –– individuals’ daily lives, interventions 654 –– JITAI 665 –– patient work, changes in 658 –– platforms 645 –– privacy and security 657–658 –– regulatory issues 659–660 –– self-experimentation 657 –– self-report data –– data entry 653–654
–– in-situ data collection methods 650–651 –– user interactions 652–653 –– sensor data –– assessing physiological processes 646–648 –– examples 646 –– inferring activities 648–649 –– inferring context 649–650 Mobile health care (mHealth) 377 Mobile networks 677 Mobile phones 713 –– See also Smart phones Mock-ups 185 Model-based approaches 162 Model Interchange Format (MIF) 219 Model organisms databases 768 Moderate to vigorous physical activity (MVPA) 648 Molecular Biology Database Collection 767 Molecular imaging 310 Monotonicity 135 Moore’s Law 28, 498 Morphology 257, 775 Morphometrics 738 Motion artifacts 670 MPI See Master patient index (MPI) MPLS See Multiprotocol label switching (MPLS) MRI See Magnetic resonance imaging (MRI) Multi-axial terminology 224 Multidisciplinary Epidemiology and Translational Research in Intensive Care Data Mart 725 Multimodal interfaces 154 Multiparameter Intelligent Monitoring in Intensive Care database 547, 702 –– See also MIMIC-II Multiphasic screening 471 Multiprotocol label switching (MPLS) 671 MUMPS 547 MYCIN 814
N NAHIT See National Alliance for Health Information Technology (NAHIT) Naïve Bayes 810 Named-entity normalization 246 Named entity recognition (NER) 246, 252 NANDA See North American Nursing Diagnosis Association (NANDA) NAR See Nucleic Acids Research (NAR) database Narrative data 319 NASA See National Aeronautics and Space Administration (NASA) National Academy of Sciences 583–584 National Cancer Institute (NCI) 766, 893, 931, 932 National Center for Biomedical Computing 926 National Center for Biomedical Ontology (NCBO) 886 National Center for Biotechnology Information (NCBI) 225, 765, 925 –– NCBI Bookshelf 766 National Center for Microscopy and Imaging Research 338
National Committee on Vital and Health Statistics (NCVHS) 517 National Council for Prescription Drug Programs (NCPDP) 208, 217 National Digital Information Infrastructure Preservation Program (NDIIPP) 788 National Drug Codes (NDC) 225 National Guidelines Clearinghouse (NGC) 765 National health goals 971 National Health Information Infrastructure (NHII) 21, 493, 517, 633 –– See also Health information infrastructure National Health Information Network (NHIN) 583, 585, 589 –– NHIN Connect 585 –– NHIN Direct 585 National Health Service (UK) (NHS) 222 National Human Genome Research Institute (NHGRI) 893 National Information Standards Organization (NISO) 772 National Institute of Standards and Technology (NIST) 781 National Institutes of Health (NIH) 29, 760, 768, 871, 925, 930, 931, 933 –– NIH Reporter 768 –– Pain Consortium 931 National Library of Medicine (NLM) 29, 225, 489, 758, 765, 766, 773, 777, 779, 881, 886, 932, 934, 1013 –– postdoctoral fellowship 931, 932 National Provider Identifier (NPI) 208 National Quality Forum (NQF) 236 National Research Council 402, 517, 583, 584, 593 National Science Foundation (NSF) 12, 760 Nationwide Health Information Network (NwHIN) 494 Natural history study 916, 918 Naturalistic studies 449 Natural language 243 Natural language processing (NLP) 243, 476, 805, 888 –– applications 245 –– context for 248 –– data annotation 263 –– evaluation 263 –– future of 266 –– good system performance 262 –– labeling approach 249 –– linguistic knowledge (See Linguistic knowledge) –– manual tasks 262 –– motivation 243 –– privacy and ethical concerns 262 –– relation extraction 253 –– sequence labeling 252 –– system interoperability 263 –– template filling 254 –– text labeling 252 –– topic modeling 251ff Natural language query 776 NCBI See National Center for Biotechnology Information (NCBI)
NCBO See National Center for Biomedical Ontology (NCBO) NCI See National Cancer Institute (NCI) NCPDP See National Council for Prescription Drug Programs (NCPDP) NCVHS See National Committee on Vital and Health Statistics (NCVHS) NDC See National Drug Codes (NDC) NDIIPP See National Digital Information Infrastructure Preservation Program (NDIIPP) Needs assessment 434 Negative dictionary 774 Negative predictive value 95 Negligence theory 412 Net reclassification improvement (NRI) 876 Network analysis 889 Networking and Information Technology Research and Development (NITRD) Program 1014 Network operations center (NOC) 525 Neuroinformatics 303, 337 NeuroNames 317 New York Presbyterian Hospital (NYPH) 471, 679 New York State Psychiatric Institute (NYSPI) 678 NGC See National Guidelines Clearinghouse (NGC) NHII See National Health Information Infrastructure (NHII) NHIN See National Health Information Network (NHIN) NHS See National Health Service (UK) NIC See Nursing Interventions Classification (NIC) NICU See Neonatal intensive care unit (NICU) NISO See National Information Standards Organization (NISO) NIST See National Institute for Standards and Technology (NIST) NLP See Natural language processing (NLP) NOC See Nursing Outcomes Classification (NOC) Nomenclature 63 Non-Hodgkin’s lymphoma 252 North American Nursing Diagnosis Association (NANDA) 223 Nosocomial infections 696 NPI See National Provider Identifier (NPI) NSF See National Science Foundation (NSF) Nuclear magnetic resonance (NMR) 309 Nuclear magnetic resonance spectroscopy 279 Nuclear medicine imaging 309 Nucleic Acids Research (NAR) database 767 Numeracy 374 Nurse Licensure Compact (NLC) 686 Nursing care 578 Nursing informatics (NI) 33, 995, 1003 Nursing Interventions Classification (NIC) 223 Nursing Outcomes Classification (NOC) 223 Nursing terminologies 223 Nutritionist 577 NYPH See New York Presbyterian Hospital (NYPH) NYSPI See New York State Psychiatric Institute (NYSPI)
O
OBI See Ontology for Biomedical Investigations Objectivist studies 440 Observational cohort study 917, 945 Occupational therapist 577 Odds 95 Odds ratio 95 Office for Civil Rights 414 Office of the National Coordinator (ONC) 494 Office of the National Coordinator for Health Information Technology (ONC; formerly ONCHIT) 21, 185, 213, 584, 832, 970 OHDSI 217 OLAP See Online analytic processing (OLAP) OLTP See Online transaction processing (OLTP) Omaha system 223 Omics data 869 Omics technologies 282 ONC; formerly ONCHIT See Office of the National Coordinator for Health Information Technology (ONC; formerly ONCHIT) Online Mendelian Inheritance in Man (OMIM) database 766, 767, 881, 891 Ontologies 219ff, 344, 820, 885 Ontology for Biomedical Investigations (OBI) 931 Open access 760 Open consent models 278 Open Directory 772 OpenEHR 217 OpenEHR Foundation 219, 591 Open source 501 Open standards development policy 212 Open Systems Interconnection (OSI) protocol 227, 228 Operating characteristics (of tests) 87 Operating rooms 695 Ophthalmologic imaging 313 Optical character recognition (OCR) 474 Optical coherence tomography 313 ORBIT See Online Registry of Biomedical Informatics Tools (ORBIT) Order-entry systems See Computerized provider order entry (CPOE) Order sets 9 Organizational change 20 Organizational landscape 569 Organizing/grouping information 808 Oscilloscope 702 OSI See Open Systems Interconnection (OSI) protocol Outcome –– data 406, 974 –– variables 441 Overload 182 OVID (formerly, CD Plus) 765, 766, 783, 920 OWL (Web Ontology Language) 219, 344 Oximeter 704 –– See also Pulse oximeter
P
PACS See Picture-archiving and communication systems (PACS) PageRank algorithm 774 Paper records 472 Partial-match searching 776, 777 Partial parsing 1064 Participant recruitment 922 Participant registration 923, 924 Participant screening 924 Partners Health Care 491, 596, 597, 960 PatCIS See Patient Clinical Information System (PatCIS) Patents 414 Paternalism 368 Pathognomonic tests 70 Patient –– identifiers 208, 209 –– portals 560 –– tracking 566 –– triage 568 Patient care 514 Patient care information management 576 Patient care information systems 581 Patient-care systems 576 Patient-centered care 21, 576 –– acute care settings 367 –– ambulatory care 366 –– BARN 371 –– care transitions 367 –– CHESS 370 –– development and incorporation 365 –– doctor-patient communication 369 –– examples 370 –– home setting 367 –– improvements 372 –– Internet 371 –– limitations of 368 –– overview 366 –– paternalism 378 –– personalized medicine 367 –– randomized controlled trials 371 –– written materials 369 Patient centered medical home 579 Patient Centered Outcomes Research Institute (PCORI) 930 Patient centered systems 16, 587, 599 Patient Gateway 596 Patient monitoring 695 –– intensive care units 695, 700 Patient portals 672 Patient Portals and Telehealth 978 Patient safety 123, 139, 429, 979 Patients Over Paperwork 497 Patient tracking 566 Pay for performance 550, 565, 587, 589 PCAST See President’s Council of Advisors on Science and Technology (PCAST)
PCORI See Patient Centered Outcomes Research Institute (PCORI) PCPs See Primary care physicians (PCPs) PDAs See Personal digital assistants (PDAs) PDF See Portable Document Format (PDF) Performance indicators 748 –– See also Key performance indicators Personal Analytics Companion (PACO) 651, 657 Personal clinical electronic communications 672 Personal computers (PCs) 4, 27, 28 Personal Connected Health Alliance (PCHA) 217 Personal digital assistants (PDAs) 639ff Personal Genome Project (PGP) 894 Personal grid architecture 532 Personal health information (PHI) 669 –– aging 375 –– analytic framework 373 –– behavior management 375 –– characteristics 375 –– chronic conditions 374 –– communication 378 –– data science 376 –– digital divide 374 –– EHR 380 –– ethical, legal and social issues 377 –– future opportunities and challenges 383 –– health literacy and numeracy 374 –– mHealth 378 –– patient-centered care (See Patient-centered care) –– PHR 380 –– precision medicine 376 –– sensors 381 –– social network systems 379 Personal health records (PHRs) 380, 409, 498, 602, 603, 972, 980 –– tethered 596 Personalized medicine 63, 872 PET See Positron-emission tomography (PET) Pharmacogenomics 34, 879 Pharmacogenomics Knowledge Base (PharmGKB) 253, 277, 880 PheKB 948 Phenome-wide association scan (PheWAS) 889 Phenome-wide association study (PheWAS) 945, 946 Phenotype 752 Phenotype risk score (PheRS) 953 Phenotypes 277 PHI See Personal health information (PHI) Philips Corporation 705 PHIN See Public Health Information Network (PHIN) Photoplethysmography (PPG) measurement 647 PHRs See Personal health records (PHRs) Physical therapist 577 Picture-archiving and communication systems (PACS) 190, 313, 675, 743 PIER See Physicians' Information and Education Resource (PIER) PITAC See President's Information Technology Advisory Committee (PITAC) Pixels 314
Placebo 917 Placebo-controlled trial 917 Plain old telephone service (POTS) 677 Planning 346 PLoS See Public Library of Science (PLoS) Policy 970 –– See also HIT policy Policy for Optimizing and Innovating with Health IT 977 Polygenic risk score (PRS) 952 Polysemy 775 Population health 614, 617 Population health informatics 614 –– See also Public health informatics Portable Document Format (PDF) 775 Portico 788 Positive predictive value (PPV) 95, 949 Positron-emission tomography (PET) 305 Postdoctoral fellowship 931, 932 –– See also National Library of Medicine postdoctoral fellowship Postgenomic databases 872 Post-Study System Usability Questionnaire (PSSUQ) 166 Post-test probability 83 PPO See Preferred provider organizations (PPO) Practice management systems 550 Pragmatics 261 Precision 780 Precision medicine 376, 752, 802 –– See also Personalized medicine Precision medicine (PM) 995, 996 –– clinical practice 958 –– cancer genomic testing 957 –– disease diagnosis and risk assessment 959 –– germline pharmacogenomics 958 –– cohorts 956 –– definition 943 –– dense genomic and phenomic data –– artificial intelligence 954 –– machine learning 954 –– Mendelian randomization 953 –– redefine disease 954 –– risk scores 954ff –– DTC 960ff –– early in life 960 –– EHRs –– genomic discovery 945ff –– research-grade phenotypes 946f –– goals of 943 –– omic discovery approaches –– genomic sequencing 950 –– GWAS 950 –– investigations 952 –– PheWAS 951 Predicate calculus 130 Prediction of function 883 Predictive biomarkers 874 –– See also Biomarker Predictive value 71 –– negative 95
–– positive 71, 95 Preferred provider organizations 208 President’s Council of Advisors on Science and Technology (PCAST) 517 President’s Emergency Plan for AIDS Relief (PEPFAR) 631 President’s Information Technology Advisory Committee (PITAC) 517 Pretest probability 81 Prevalence 71 Prevention (of disease) 13, 15 Primary care physicians (PCPs) 130, 159, 179, 180, 604, 668, 678, 684, 864, 974 Primary knowledge-based information 759 Prior probability 81 –– See also Pretest probability Privacy 11, 15, 190, 234, 399, 413, 519, 657, 980 PRMD See Private Management Domain (PRMD) Probabilistic systems 810 Probability 71, 79 –– subjective assessment 83 –– threshold 109 Problem-based learning (PBL) 846 Problem impact study 436 Problem-Oriented Medical Information System (PROMIS) 590 Problem solving 799 –– See also Decision making Problem space 126 Procedure-based Payments 569 Process integration 554 Process reengineering 20, 562 Productivity 559 Professional-patient relationship 398, 408 Professional Standards Review Organizations (PSROs) 587 Prognostic scoring system 406 Project HealthDesign Initiative 410 PROMIS See Problem-Oriented Medical Information System (PROMIS) PROSPECT See Prospective Outcome Systems using Patient-specific Electronic Data to Compare Tests and Therapies (PROSPECT) Prospective studies 61, 496 Protégé system 821 Protein Data Bank (PDB) 279, 283 Proteomic mass spectrometry 279 Proteomics 276 Protocol (clinical) 8, 725 Protocol analysis 126 Protocol authoring 923 Protocol management 924 Provider communications 54 Provider-profiling systems 568 PSRO See Professional Standards Review Organizations (PSRO) Public health 403, 799 –– assessment 615 –– assurance 615ff –– informatics 30
–– overview 614 –– policy development 615 –– vs. population health 617 –– practice of 617 –– public health informatics (See Public health informatics) –– services 616 –– surveillance data 617 Public health informatics 614 –– challenges and opportunities 619 –– context 619 –– definition 618 –– epidemiology 619 –– global health 632 –– HII 633 –– IIS –– data quality 629 –– functions 625 –– funding and sustainability 628 –– governance issues 628 –– history, context and success 625 –– interdisciplinary communication 627 –– legislative and policy issues 628 –– monitoring 629 –– provider and program levels 625 –– stakeholder collaboration 627 –– system design and information architecture 630 –– information sharing 620 –– local public health practice 620 –– national public health practice 621 –– non-health organizations 633 –– state public health practice 620–621 Public health surveillance 616 Public Library of Science (PLoS) 761 Public policy 970 –– See also Policy PubMed 489, 761, 767, 920 Pulse oximeter 704
Q QALYs See Quality-adjusted life years (QALYs) QOS See Quality of service (QOS) QSEN See Quality and Safety Education for Nurses (QSEN) Quality –– control 14 –– management 559, 589 –– measurement 980ff –– reporting 497 Quality-adjusted life years 103 Quality and Safety Education for Nurses (QSEN) 592 Quality of service (QOS) 671 Quasi-experiment 916 Query 758 Query and reporting tools 924 Query-response cycle 525 Question answering (QA) 249ff, 786 Questionnaire for User Interface Satisfaction (QUIS) 166
R Radiology 739 Radiological Society of North America (RSNA) 216, 225, 742 Radiology systems 739 –– information systems 744 RadLex 225, 320ff, 347, 747 Randomized clinical trials (RCTs) 58, 446, 916–919 RCRIM See Regulated Clinical Research Information Management (RCRIM) RCTs See Randomized clinical trials (RCTs) RDF See Resource Description Framework (RDF) Read Clinical Codes 64, 218, 316 Read-only memory (ROM) 1072 Really Simple Syndication (RSS) 765 Real-time feedback 862 Real-time monitoring 703 Recall 780 Receiver 227 Receiver-operating characteristic (ROC) curve 90, 91, 875 Record Matching and Linking 981 Records 5, 49, 167, 179, 181, 190, 193, 236, 237, 380, 405, 472ff, 519ff, 585, 592, 603, 760, 768, 801, 888, 915, 916, 920, 921, 928 –– See also Medical records Reductionist approach 275 Reference Information Model (RIM) 219, 232, 825 –– See also Health Level Seven Reference Information Model Referential expression 261 Referral bias 86, 92 Regenstrief Institute 471, 482, 483, 499, 590 Regenstrief Medical Record System (RMRS) 471, 590 Regional extension centers (RECs) 517 Regional Health Information Organizations (RHIOs) 517 Registries 16 Regulation 405, 411, 898, 980 Reinforcement learning (RL) algorithms 656 Relative recall 780, 781 Relevance judgment 781 Relevance ranking 776 Reminder message 495 Reminders 180, 182 Re-Mission game 854 Remote intensive care 669, 681 Remote interpretation –– commodity Internet 674 –– retinopathy screening 674 –– teleophthalmology 674 –– teleradiology 675 Remote monitoring 673 Remote presence healthcare 5, 18, 409, 672 –– See also Telemedicine Representational state 146 Representativeness 84 Research monitoring tools 924 Research planning 921 Resource Description Framework (RDF) 772
REST See Representational State Transfer (REST) Results reporting 566 Retinopathy of prematurity (ROP) 675 Retrieval 758, 775 Retrospective chart review 61 Retrospective research study 58, 496, 916 Return on investment (ROI) 804 Review of systems (ROS) 68 RFDS See Australian Royal Flying Doctor Service (RFDS) RHIO See Regional Health Information Organization (RHIO) Ribonucleic acid (RNA) 278 RIM See Reference Information Model (RIM) Risk attitude 106 Risk-neutral 106 Roadmap for Medical Research (NIH) 871 Robert Wood Johnson Foundation 410, 588 ROC curve See Receiver-operating characteristic (ROC) curve ROM (read-only memory) 1072 Rounds report 718, 720 Royal Flying Doctor Service (RFDS) 670 RS-232 706 RSNA See Radiological Society of North America (RSNA) RTF See Rich Text Format (RTF) Rule-based systems 813 –– See also MYCIN RxNorm 225, 473, 881, 888
S Safety 123, 139, 429, 979 –– See also Patient safety Sampling 443 Sandia National Laboratory 815 Satellite 13 Schema 131 Science Citation Index (SCI) 767 SCO See Standard Development Organizations Charter Committee (SCO) SCOPUS 767 SCORM See Sharable Content Object Reference Model (SCORM) Screening tools 923 SDLC See Software development lifecycle (SDLC) SDOs See Standards development organizations (SDOs) Search 123, 247, 758, 920 –– See also Information retrieval Secondary knowledge-based information 759 Secondary use of data 235, 484 –– See also Data reuse Security 11, 190, 557, 657, 924, 980 Self-experimentation 654, 657 Semantic Network (UMLS) 770 Semantics 259 Semi-structured interviews 164 Sender 227
Sensitivity 70, 79, 949 Sensors –– assessing physiological processes 646 –– examples 646 –– inferring activities 648 –– inferring context 649 Sequence alignment 284 Sequence information 275 Sequencing 885 –– next generation 893 Service-oriented architecture (SOA) 199, 232, 921 Set-based searching 775 Setting-specific factors (SSF) 829 Set-top boxes 13 SFTP See Secure FTP (SFTP) Sharable Content Object Reference Model (SCORM) 849 SHARE models 220 Short message service (SMS) messaging 688 Short-term memory 129 SICU See Intensive care unit, surgical (SICU) Side effects 884 Simulation 920 Single nucleotide polymorphisms 882 Single photon emission computed tomography (SPECT) 310 Single sign-on 193 Six Sigma 562 Slips 141 SMArt (Substitutable Medical Applications, reusable technologies) 220, 646 Smart phones 5, 53, 567, 638, 640, 642 Smartwatch 638, 640 Smith-Waterman matrix 284 SMK See Structured Meta Knowledge (SMK) SMS See Short Message Service (SMS) SMTP See Simple Mail Transfer Protocol (SMTP) SNOMED-CT (Systematized Nomenclature of Medicine-Clinical Terms) 64, 217–219, 222, 225, 235, 316, 317, 473, 593, 805, 881, 888, 931 SNOMED International 217 SNOP (Standardized Nomenclature of Pathology) 64 SNP See Single nucleotide polymorphism (SNP) SOA See Service-oriented architecture (SOA) SOAP See Simple Object Access Protocol (SOAP) Social networking 5 Social network systems 379 Social Science Citation Index (SSCI) 767 Societal Change 572 Sociotechnical systems theory 155 Software 178 –– certification 417 Software development 183, 188–190 –– analysis 184 –– evaluation 188 –– implementation phase 187 –– integration 186, 196 –– lifecycle (SDLC) 183 –– planning 184 –– testing 185 Software oversight committees 415
Software Usability Measurement Inventory (SUMI) 166 Spaced repetition 851 Spatial resolution 311 Specialist Lexicon (UMLS) 770 Specialty Care Access Network-Extension for Community Healthcare Outcomes (SCAN-ECHO) 684 Specificity 70, 79ff SPECT See Single photon emission computed tomography (SPECT) Spectrum bias 92 Speech recognition 476, 744 Spelling variants 258 Spirometers 674 SQL See Structured Query Language (SQL) SSCI See Social Science Citation Index (SSCI) SSH See Secure Shell (SSH) SSL See Secure Sockets Layer (SSL) Standard development organizations 210, 214 Standard of care 397 Standards 9, 11, 206, 499, 830 –– data in clinical research 922, 928 –– data definitions 14 –– data transmission and sharing 14 –– development process 210 Standards and Certification Criteria for Electronic Health Records rule 585 Standards and Systems 234 Standards development organizations (SDO) 210, 593 Stanford University 27, 852, 880, 895 Stat! Ref 766 STEEP See Safe, Timely, Effective, Efficient, Equitable, Patient-centered (STEEP) care Stem cells 885 Stemming 774 Stop words 774 STOR 471 Storage 40 Store-and-forward systems 671 Stratified medicine 872 –– See also Genomic medicine Strict product liability 413 Structural alignment 286 Structural imaging 303 Structural informatics 34 –– See also Imaging informatics Structured data entry 805 Structured form 477 Structured interviews 164 Structured Meta Knowledge (SMK) 223 Structure validation 435 Study arm 917 Subheadings 770 Subjectivist studies 448 Substitutable Medical Applications, reusable technologies (SMArt) 220, 646 Supervised learning 811, 875 Supervised machine learning 954 Support vector machine (SVM) 329 Surescripts 495
Surgical intensive care unit 700 –– See also Intensive care unit, surgical Surveillance 13, 183, 515 –– syndromic 401, 725 Syndromic surveillance 401, 725 –– system 617 Synonymy 769, 775 Syntax 258 Systematic review 763 Systematized Nomenclature of Medicine–Clinical Terms (SNOMED-CT) 64, 217–219, 222, 225, 235, 316, 317, 473, 593, 805, 881, 888, 931 Systematized Nomenclature of Pathology (SNOP) 64 System learnability 123 Systems biology 282 System Usability Scale (SUS) 166
T Tablet computer 638, 640, 642 Tablet computers 53, 567 Tabulating machines 26 Tactile feedback 682, 737 –– See also Haptic feedback TATRC See Telemedicine and Advanced Technology Resource Center (TATRC) TBI See Translational bioinformatics (TBI) TCP/IP See Transmission Control Protocol/Internet Protocol (TCP/IP) Technicon Data Systems (TDS) 27 Telecommunications 12 Teleconsultation 669, 670 Telehealth 973, 979 –– categorization 671 –– challenges 684 –– COVID-19 pandemic 684 –– definition 668 –– electronic messaging 672 –– future of 688 –– historical perspectives 669 –– knowledge networks 683 –– licensure and economics 685 –– logistical requirements 687 –– low resource environments 687 –– remote interpretation 675 –– remote monitoring 672 –– telephone 671 –– telepresence 682 –– video-based telehealth (See Video-based telehealth) Telehome care 679 –– See also Home telehealth Tele-ICU 724 Telemedicine 5, 18, 409, 668 –– See also Telehealth Teleophthalmology 669 Telepresence 409, 669, 682, 737 Telepsychiatry 669, 678 Teleradiology 18, 669, 670, 675, 749 Telestroke 681 Telesurgery 669
Temporal resolution 311 Term 769 Terminologies 11, 220 –– for clinical research 935 Term weighting 774 Test collection 781 Test-interpretation 69 Test-referral bias 92 Text comprehension 127 Text/image/video content 851–852 Text mining 785, 883, 920 Text REtrieval Conference (TREC) 781 Text summarization 785 TF See Term frequency (TF) TF*IDF weighting 774 The Electronic Data Interchange for Administration, Commerce, and Transport Standard 233 The Medical Record (TMR) 471, 590 Therapeutic targeting 874 Thesaurus 769 Thick clients 190 Thin clients 190 Think-aloud protocols 126 Thomson Reuters 875 Three-dimensional (3D) printing 859 Three-dimensional structure information 280 Throughput sequencing methods 281 Timeline flowsheets 481 Time-shared computers 27 Time-sharing networks 1084 Tissue imaging 313 TJC See Joint Commission, The TMIS See Technicon Medical Information System (TMIS) TMR See The Medical Record (TMR) Today's Electronic Health Record (EHR) 6 Today's Reality and Tomorrow's Directions 234 Tokens 256 Toll-like receptor-4 (TLR4) 254 Tort law 411 Tower of Hanoi 125, 126 Training 18, 738, 899 Transaction sets 227 Transcription 476 Transducer 703, 704 Transition probabilities 108, 109 Translating Research into Practice (TRIP) 765 Translational bioinformatics (TBI) 869, 995, 996 Translational medicine 869 –– See also CTSAs Translational research 918 Transmission Control Protocol/Internet Protocol (TCP/IP) 742 TREC See Text REtrieval Conference (TREC) Tree 99 –– See also Decision tree Triage 568 Trigger event 229, 723 TRIMIS See Tri-Service Medical Information System (TRIMIS) TRIP See Translating Research into Practice (TRIP)
Tri-Service Medical Information System (TRIMIS) 590 Twenty-three and Me (23andMe) 897, 898, 960
U Ubiquitous computing 154 UCC See Uniform Code Council (UCC) UCUM 473 UDP See User Datagram Protocol (UDP) Ultrasound 308 UML See Unified Modeling Language (UML) Unified Medical Language System (UMLS) 66, 225, 253, 256, 317, 770, 881, 886 Unified Modeling Language (UML) 219, 220 Uniform Resource Identifier (URI) 787 Uniform Resource Locator (URL) 787 Uniform Resource Name (URN) 787 Unintended consequences 142, 597, 598, 600 Unique health identifier (UHI) 981 Universal Product Number Repository 233 Universal Serial Bus (USB) 706 University of California Los Angeles 324 University of Colorado 410 University of Missouri, Columbia 590 University of Pennsylvania 305 University of Pittsburgh 305 University of Washington 324 Unsupervised machine learning 954 UpToDate 490, 581, 768 URI See Uniform Resource Identifier (URI) URL See Uniform Resource Locator (URL) URN See Uniform Resource Name (URN) Usability 156, 498 –– attributes 157 –– classification 157 –– field and observational methods 166 –– focus groups 164 –– inspection-based evaluation 159 –– interviews 164 –– model-based evaluation 162 –– surveys and questionnaires 166 –– task analysis 158 –– think aloud studies 165 Usability testing 435, 597 USB See Universal Serial Bus (USB) User authentication 190 User interfaces 154 US Food and Drug Administration (FDA) 659 Utility 100, 105, 106, 810, 811, 832
V Validation 187 Validity 920 Value-based payments 569 Vanderbilt University 601 Vector mathematics 776 Vendors (of clinical systems) 405
Ventilator alarms 722 Verification 187 Veterans Health Administration 225, 396, 469, 473, 522, 824 –– National Drug File (VANDF) 225 Veterinary informatics 33 Video-based telehealth –– correctional telehealth 678 –– emergency telemedicine 680 –– home telehealth 678 –– mobile networks 677 –– remote intensive care 681 –– requirement 676 –– telepsychiatry 678 –– video cameras 677 Video display terminals (VDTs) 547 Virginia Commonwealth University 596 Virtual Private Networks (VPNs) 751 Virtual reality (VR) 154, 858 Virtual world simulation 857 Visible Human Project 767 VistA system 396, 473, 820 Visual analog scales 106 Visualization 745, 920 Vital signs 697, 699 Vocabulary 63, 220, 769 Volume rendering 327 Volunteer effect 443 Voxel 305, 307, 337 Voxelman 324 VPNs See Virtual Private Networks (VPNs)
W WANs See Wide-area networks (WANs) Washington DC Principles for free access to science 761 Waterfall software development model 188, 189 Wearable devices 638, 643 Web catalog 765 WebCIS See Web-based Clinical Information System (WebCIS) Web Content Accessibility Guidelines (WCAG) 851 Weblog 767 –– See also Blog Web of Science 767 Web Ontology Language (OWL) 219, 344 WEDI See Workgroup for Electronic Data Interchange (WEDI) WHO See World Health Organization (WHO) WHO-ART See World Health Organization Adverse Reactions Terminology (WHO-ART) Whole slide digitization 750, 1098 WICER See Washington Heights/Inwood Informatics Infrastructure for Comparative Effectiveness Research (WICER) Wide-area networks (WANs) 239, 737, 1089 WIPO See World Intellectual Property Organization (WIPO) Wireless networking 5
WIRM See Web Interfacing Repository Manager (WIRM) WizOrder 488 WONCA See World Organization of National Colleges, Academies and Academic Associations of General Practitioners/Family Physicians (WONCA) Word embedding 257 Workflow –– cognitive overload 168 –– EHR 167 –– MedRec tools 168 –– overview 166 –– vital signs 169 Workflow management 748, 829 Workflow modeling 597, 922 Workforce training 921 Working memory 128 –– See also Short-term memory Workstations 53 World Health Organization (WHO) 64, 220, 723 –– WHO Drug Dictionary 223 World Intellectual Property Organization (WIPO) 787
World Wide Web (WWW) 758 WSD See Word sense disambiguation (WSD) WSDL See Web Services Description Language (WSDL) WWW See World Wide Web (WWW)
X xAPI standard 850 X-linked lymphoproliferative (XLP) syndrome 960 XML (Extensible Markup Language) 322, 826, 932 X-ray crystallography 279
Y Y2K problem 48
Z Z39.84 787 Zynx 768