de Gruyter Studies in Organization 23

Organization, Management, and Expert Systems
de Gruyter Studies in Organization
Innovation, Technology, and Organization

A new international and interdisciplinary book series from de Gruyter presenting comprehensive research on the inter-relationship between organization and innovations, both technical and social. It covers applied topics such as the organization of:
— R&D
— new product and process innovations
— social innovations, such as the development of new forms of work organization and corporate governance structure
and addresses topics of more general interest such as:
— the impact of technical change and other innovations on forms of organization at micro and macro levels
— the development of technologies and other innovations under different organizational conditions at the levels both of the firm and the economy.
The series is designed to stimulate and encourage the exchange of ideas between academic researchers, practitioners, and policy makers, though not all volumes address policy- or practitioner-oriented issues. The volumes present conceptual schema as well as empirical studies and are of interest to students of business policy and organizational behaviour, to economists and sociologists, and to managers and administrators at firm and national level.

Editor: Arthur Francis, The Management School, Imperial College, London, U.K.

Advisory Board:
Prof. Claudio Ciborra, University of Trento, Italy
Dr. Mark Dodgson, Science Policy Research Unit, University of Sussex, GB
Dr. Peter Grootings, CEDEFOP, Berlin, Germany
Prof. Laurie Larwood, Dean, College of Business Administration, University of Nevada, Reno, Nevada
Organization, Management, and Expert Systems
Models of Automated Reasoning

Editor: Michael Masuch
Walter de Gruyter • Berlin • New York 1990
Editor
Michael Masuch
Associate Professor of Computer Science and Organization, and Scientific Director of the Center for Computer Science in Organization and Management (CCSOM), University of Amsterdam, The Netherlands

With 24 figures
Library of Congress Cataloging-in-Publication Data

Organization, management, and expert systems : models of automated reasoning / editor, Michael Masuch.
258 p. ; 23 cm. — (De Gruyter studies in organization ; 23)
Includes bibliographical references and index.
ISBN 0-89925-556-6 (alk. paper)
1. Management — Data processing. 2. Organization — Data processing. 3. Expert systems (Computer science) I. Masuch, Michael, 1949– . II. Series.
Deutsche Bibliothek Cataloging-in-Publication Data

Organization, management, and expert systems : models of automated reasoning / ed.: Michael Masuch. — Berlin ; New York : de Gruyter, 1990
(De Gruyter studies in organization ; 23 : Innovation, technology, and organization)
ISBN 3-11-011942-0
NE: Masuch, Michael [Hrsg.]; GT
Printed on acid-free paper which falls within the guidelines of the ANSI to ensure permanence and durability.

© Copyright 1990 by Walter de Gruyter & Co., D-1000 Berlin 30.
All rights reserved, including those of translation into foreign languages. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. — Printed in Germany.
Typesetting: Knipp Textverarbeitungen, Wetter — Printing: Kupijai & Prochnow, Berlin — Binding: Dieter Mikolai, Berlin — Cover design: Johannes Rother, Berlin.
Acknowledgements
This book would not have happened without the help of many people - too many, in fact, to be mentioned here. Esther Wouters processed the text, designed the graphics, and maintained the day-to-day communication with the authors. Scip Garling of USEnglish in Washington D.C. did the language editing (combining the knowledge of the native English speaker with the wit of a Dartmouth College classics major). Johan Henselmans and Huub Knops kept the computers up and running (so that electronic mail, the lifeline of modern scientific discourse, would never go down). Miranda and Sonja did the same for the FAX machine. Thanks are due, last but not least, to Vice Provost Tom Nieuwenhuis, who provided incessant encouragement and running jokes, and who made the funds available for the language editing of this book.
Michael Masuch Center for Computer Science in Organization and Management, University of Amsterdam
Contents
Introduction: Expert Systems and Organizations
Michael Masuch  1

Chapter 1: Experts, Expert Systems, and Organizations
Jeremiah J. Sullivan  13
1.1 Experts and Expert Systems  13
1.2 Decision-Support Systems and Expert Systems  20
1.3 Integrating Expert Systems in Organizations  24
1.4 Summary  33

Chapter 2: Devising Expert Systems in Organization Theory: The Organizational Consultant
Helmy H. Baligh, Richard M. Burton and Børge Obel  35
2.1 Introduction  35
2.2 Creating Knowledge Bases from the Literature  36
2.3 An Expert System for the Contingency Theory of Organization  39
2.4 Composing the Knowledge Base  44
2.5 The Organizational Consultant for Designing an Organization  45
2.6 An Illustration  47
2.7 Validation  52
2.8 Alternative Expert Systems  56

Chapter 3: Creating an Expert System to Design Organizations: DESIGN 6
Helmy H. Baligh, Richard M. Burton and Børge Obel  59
3.1 Introduction  59
3.2 Design-First  61
3.3 The Expert System "DESIGN 6"  62
3.4 Results  65
3.5 A Comparison of the Two Expert Systems  69
3.6 Validation of the Expert System  70
3.7 The Special Needs of Design-First  72
3.8 Conclusions  74
Appendix: Terms, Definitions and Concepts for DESIGN 6  75

Chapter 4: Formalizing Organizational Theory: A Knowledge-Based Approach
Josh C. Glorie, Michael Masuch and Maarten Marx  79
4.1 Introduction  79
4.2 The Potential of Knowledge-Based Computer Applications  81
4.3 Contingency Theory and The Structuring of Organizations  83
4.4 The Theory's Core Assumptions: Design Hypotheses  85
4.5 The Language of Formalization  86
4.6 The Formalization Process  92
4.7 Observations and Conclusions  96
Appendix  99

Chapter 5: Building an Artificial Intelligence Model of Management Policy Making: A Tool for Exploring Organizational Issues
Roger I. Hall  105
5.1 Introduction  105
5.2 The Conceptual Framework  106
5.3 The Corporate System Model  108
5.4 The Policy Making AI Model  111
5.5 Relevance and Possible Uses of the Modelling Method  120

Chapter 6: Expert Systems Supporting Organization and Information Management
Henk W. M. Gazendam  123
6.1 Introduction  123
6.2 Interactive Object-Oriented Modelling  123
6.3 A Method to Describe Organization Model Types  126
6.4 Prototype Organization Models Based on Different Organization Metaphors  128
6.5 Example Programs and Implementation  134
6.6 Suggestions for Further Research  141
Appendix: Methodology of Model Description  143

Chapter 7: Environmental and Organizational Interactions in the Design of Knowledge Based Systems: The METAL Case
Armand Hatchuel and Benoît Weil  155
7.1 Introduction  155
7.2 Strategic Planning of Off-Shore Drilling Activities: A Challenge for Oil Corporations  157
7.3 METAL's Birth  158
7.4 METAL's Knowledge Base  162
7.5 Conclusions  168

Chapter 8: Casting Managerial Skills into a Knowledge Based System
Tim O. Peterson and David D. van Fleet  171
8.1 The Functions of Management  171
8.2 Performance Appraisal  172
8.3 Performance Mentor  176
8.4 Can Expert Systems Really Help?  179
8.5 Discussion and Implications  182

Chapter 9: A Cost/Benefit Analysis of Expert Systems
H. A. M. Daniels and P. van der Horst  185
9.1 Introduction  185
9.2 Cost/Benefit Analysis  187
9.3 System Development and Project Management  188
9.4 Case Study  190
9.5 Conclusions  192

Chapter 10: An Overview of Expert System Principles
Linda van der Gaag and Peter Lucas  195
10.1 Introduction  195
10.2 Expert System Architecture  197
10.3 Knowledge Representation and Inference  199
10.4 User Interface and Explanation Facilities  219
10.5 Knowledge Engineering  221
10.6 Conclusions  224

About the Authors  225
References  231
Systematic Index  247
Introduction
Expert Systems and Organizations
Michael Masuch
Recent years have seen a surge of publications on business applications of Expert Systems (ES). Usually written by computer scientists, this material has focused predominantly on the technical aspects of building ES for a given business environment. Little has been published about the organizational problems that this new technology creates for managers. In what circumstances can ES replace human experts? Will ES incorporate specific managerial skills, freeing managers from routine tasks? For which kinds of decisions are ES well-suited? What are the limitations of present-day ES? How do organizational constraints impinge on the use of ES? How is this new technology going to change the organization? All these questions puzzle the practitioner who is pondering the future of his or her company. They also interest the scholar who is concerned with the impact of new technologies on modern organizations, or who wants to use these technologies in his or her own work. Organization, Management and Expert Systems, which grew out of the preparations for an international workshop of Amsterdam University's Management Center, approaches these questions head-on.
Human Expertise Versus Machine Expertise in Organizations

The first chapter, written by Jeremiah Sullivan of the University of Washington, examines the differences between human and machine expertise in today's organizations. Experts play an increasingly important role as organizations become more complex. Their knowledge differs from common-sense knowledge in two important respects: First, experts possess a theoretical knowledge of the field, a set of (one would hope) coherent propositions about its domain that guide their reasoning. Second, experts develop mental schemas of the domain. Schemas are knowledge structures that guide fast, useful actions or judgments; experts may take as much as 10 years to develop such schemas. While theories provide explicit knowledge in declarative form, schemas are often implicit, or procedural; they contain those parts of the expert's experience that he or she
cannot learn at school. Indeed, an expert's ability to structure his or her knowledge results from thousands of hours of learning and experience.

The role of experts in modern organizations is quite expansive. Their main function is, of course, to give advice; they provide knowledge unobtainable from other sources. Besides this main function, however, they have other important activities. Experts act as fiduciaries of stakeholders in the organization (for example, accountants representing shareholders). This kind of expertise serves a normative, prescriptive function, comparable with the "voice" in Hirschman's Exit, Voice, and Loyalty (1970). In addition, their expertise may serve to justify and legitimize decisions already reached, or actions already taken. Experts are rallying points around which issues may cluster (Cohen, March and Olsen, 1972; March and Olsen, 1986).

Present-day expert systems may not be able to replace human experts in all these areas. While such systems are already fairly good at encompassing an expert's declarative knowledge (the text-book knowledge), they are not as good at incorporating the expert's procedural knowledge. Knowledge engineers (professional builders of expert systems) may elicit some pieces of this knowledge, while other pieces may resist elicitation, especially those intertwined with the human background knowledge derived from other domains. Consequently, ES may not be able to replace human experts completely.

On the other hand, ES may perform some tasks better than human experts. ES are always on-line; they are accessible even when human experts are not, or are unwilling to risk giving advice. ES may offer a chance to obtain consistent, structured advice when human experts are inconsistent (imagine trying to get consistent advice about buying data-base software for micro-systems). Also, ES may function as training tools for developing expertise in an organization. Finally, ES may complement experts; the user can take the ES's advice as a starting point in consulting with the expert. Moreover, ES may also improve the performance of human experts in organizations. ES may embody the consensus expertise of a group of experts, so that a single expert can test his or her conclusions against the group's. In a similar vein, ES can serve as collegiality mechanisms, each expert adding to the system, and sharing expertise with his or her peers.

What impact will ES have as they become more prevalent in modern organizations? It might be too early to make any definitive predictions about their consequences, but there are three theories worth pondering. First, there is the manager-as-expert theory. To the extent that ES can replace human experts, they may reduce support staff in an organization. The power of experts would also be reduced. The manager-as-expert theory posits flattened, flexible organizations. The second theory (the strong expert theory) foresees growing power for human experts, as ES may encourage the centralization of expertise. Where managers of the past made the rules as they went along, now they may feel compelled to go directly to an expert source (the ES), which depends, ultimately,
on human experts. The third theory (the no-expert theory) foresees rather gloomy consequences. Knowledge, it is argued, is socially constructed and not detachable from people, except for very limited purposes. As experts are replaced by ES, the socializing and context sensitivity of experts will disappear. As experts disappear and ES fail as legitimizing agents, the systems may take on the role of coercion agents. They may create an image of power over phenomena that will make arguing with them impossible. Chapter 1 concludes with the observation that most experts on ES presently support the first theory, "seeing a brave new world of humanistic, flexible organizations." On that view, knowledge is detachable from people, and the power of human experts will be reduced.

This theme is developed in the next four chapters. These chapters do not merely theorize about ES in organizations; they use ES for that very purpose. Chapter 2, written by Helmy Baligh and Richard Burton of Duke University, and Børge Obel of Odense University, presents an expert system for the core knowledge of modern organization and management theory. This system, the Organizational Consultant, is based on contingency theory, currently the leading theory of organizational structure and design.
Contingency Theory

Since contingency theory plays a crucial role in some other chapters as well, here is a brief summary of it. Traditional Organization and Management Theory (OMT), established by Max Weber (1922 [1947]) and others, focused primarily on the organization as a closed system. The aim was to find the ideal type of the efficient organization. Modern OMT, in contrast, became increasingly aware of factors upon which organizational efficiency is contingent (hence the term "contingency factor"). Such factors modify the conditions of organizational efficiency; different environments may require different organizational structures for efficient organizing. While traditional OMT would posit the optimal structure of an organization quite unconditionally, contingency theory makes such structures contingent upon other factors, such as the nature of the organization's environment, the organization's technology, or its maturity. Conversely, designing an efficient organization requires that one take the contingency factors into account. Organizational efficiency comes to depend on the right match between contingency factors and organizational design. There is no single ideal type of efficient organization; there is a multitude of organizational structures, each of which may, or may not, be the best choice, depending on the contingency factors.
The Organizational Consultant and DESIGN 6

The Organizational Consultant covers six contingency factors: the organization's size; its technology; its strategy; the state of its environment (e.g., turbulent or calm); its ownership; and management preferences. The design factors include, among others, the organization's structure, complexity, differentiation, formalization, centralization, and span of control. There are three different criteria for the right match between contingency factors and design factors: effectiveness, efficiency, and viability. The Organizational Consultant contains 250 rules that relate the contingency factors and design parameters. For example, one rule may state: if the environmental uncertainty is stable, then the centralization is high cf20. Since most knowledge about organizations is "soft", or uncertain, the Organizational Consultant uses certainty factors, hence the expression "cf20" at the rule's tail.1

The Organizational Consultant can specify appropriate organizational structures and properties for given organizational situations. The system can serve as a tool in basic research in organization and management theory, both in theory testing and in the generation of new research questions. It can also serve as an educational tool. Finally, the system can serve as a basis for discussing trade-offs in the design of organizations. The Organizational Consultant has already been used by several companies and proven to be an excellent tool for assisting executives in organizational design.

The next chapter presents a complementary ES, DESIGN 6, developed by the same authors. While the Organizational Consultant is a descriptive system, built primarily to describe and analyze, DESIGN 6 is a prescriptive system, built to "interfere and to create." Both systems work on rules stated in terms of structural properties, but each has its own set of properties. DESIGN 6 makes extensive use of performance properties; the Organizational Consultant does not. There is also an important difference in the architecture: the Organizational Consultant is built around the declarative knowledge of contingency theory, whereas DESIGN 6 is built around efficient search procedures. Efficiency is of the essence because for any combination of contingency factors and design parameters, there is a large number of possible design choices that have to be checked. Inferencing with DESIGN 6 begins with the specification of organizational goals (e.g., to operate efficiently). The system then works its way through a set of rules, and produces an organizational structure that attains the organizational goals.

1 The reader is referred to Chapter 10 for all technical details.
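To make the flavor of such rules concrete, here is a minimal sketch, in Python rather than the systems' actual implementation language, of how a certainty-factor rule base of this kind could be represented and fired. The rule contents and factor values are illustrative assumptions; the second rule is invented, not taken from the Organizational Consultant.

```python
# Illustrative certainty-factor rules: condition facts, a conclusion, and a
# certainty factor (cf) on a 0-100 scale, as in "centralization is high cf20".
rules = [
    ({"environmental_uncertainty": "stable"}, ("centralization", "high"), 20),
    ({"size": "large", "technology": "routine"}, ("formalization", "high"), 40),  # invented
]

def infer(facts):
    """Fire every rule whose conditions all match the given facts, combining
    certainty factors when several rules support the same conclusion
    (cf_new = cf_old + cf_rule * (1 - cf_old), with cfs scaled to [0, 1])."""
    conclusions = {}
    for conditions, conclusion, cf in rules:
        if all(facts.get(key) == value for key, value in conditions.items()):
            prior = conclusions.get(conclusion, 0.0)
            conclusions[conclusion] = prior + (cf / 100.0) * (1.0 - prior)
    return conclusions

advice = infer({"environmental_uncertainty": "stable",
                "size": "large", "technology": "routine"})
print(advice)  # {('centralization', 'high'): 0.2, ('formalization', 'high'): 0.4}
```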
Validating Expert Systems

Both Chapters 2 and 3 discuss the important issue of validation. There are six important criteria for validation: (1) the accuracy of the knowledge base; (2) the completeness of the knowledge base; (3) the adequacy of knowledge-base weights; (4) the adequacy of the inference mechanism; (5) the adequacy of condition-decision matches; and (6) the soundness of the inference engine (finding the right answer for the right reasons). It appears that the knowledge base's accuracy can be ascertained by carefully screening the relevant knowledge sources (such as the literature). The knowledge base's completeness seems to be a "more elusive issue"; with limited computational resources, completeness is impossible. However, one can come reasonably close to completeness by observing systematic procedures when examining the knowledge sources. Establishing the adequacy of the knowledge-base weights is also hampered by limited computational resources. The designer must first consider each rule and its accompanying certainty factor independently. Each factor should incorporate a received degree of belief in the strength of the statement. But, although the combinations of two or more rules are also important, they cannot be checked exhaustively because of the combinatorial explosion involved. Chapter 2 suggests a way to circumvent this problem by working backwards from the conclusion(s) of a specific combination of rules.

The adequacy of the inference mechanism (the fourth criterion) is determined by answering such questions as: have the correct rules been applied, and were they applied in the right sequence? As it turns out, these questions can be settled even when an exhaustive examination of all inferences is not possible. The condition-decision matches refer to the external validity of the model. At issue is whether recommended organizational designs are reasonable for a specific combination of contingency factors. For the Organizational Consultant and DESIGN 6, the designers assembled a number of textbook cases. They then, as human experts, solved these cases, and compared their solutions with the solutions provided by the ES. The sixth validation test is to analyze the condition-decision matches for the "right answer for the right reason." Baligh, Burton, and Obel conclude that validation is an ongoing process for an expert system. Given the magnitude of the process, not all of the potential errors will ever be detected and corrected. But a systematic, multifaceted approach seems to yield very reasonable outcomes.
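The condition-decision check, at least, is easy to picture in code. The following sketch reuses the hypothetical infer function from the previous example: it runs assembled test cases through the rule base and flags any case where the system fails to support the experts' own solution. The threshold and case data are invented for illustration.

```python
# Each test case pairs a contingency situation with the designers' own
# (expert) solution for that situation.
test_cases = [
    ({"environmental_uncertainty": "stable"}, ("centralization", "high")),
]

def check_condition_decision_matches(cases, threshold=0.1):
    """Return the cases where the system's support for the expert solution
    falls below the threshold: candidates for knowledge-base repair."""
    return [(facts, expected) for facts, expected in cases
            if infer(facts).get(expected, 0.0) < threshold]

print(check_condition_decision_matches(test_cases))  # [] -> all cases pass
```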
Machine-Based Theorizing

Chapter 4 puts the effort of building an ES into a broader context. The authors make a case for the computer-based formalization of social science theories (including, of course, organization and management theory) by arguing that computer-based inferencing can eliminate the pitfalls of intuitive reasoning. As opposed to intuitive, brain-based inferencing, computer-based inferencing can be made reliable and reproducible. Computer programs provide the same rigor as mathematical theories, that is, errors are traceable to erroneous assumptions. In addition, computer programs allow one to map complex interaction effects between a multitude of factors; they provide a grip on causally complex domains that are inaccessible to intuitive reasoning. Formalization can clarify the logical structure of a theory, drawing attention to inconsistencies hidden underneath the theory's verbal representation, and facilitating a theory's axiomatization; rhetoric will become useless. A machine-based formalization can be used to examine the full theory-space, i.e., the set of all valid propositions that can be derived from the theory's premises, and allows the examination of very complex theories, beyond the grasp of the human brain. Furthermore, such procedures may produce curious heuristics, search procedures that detect counterintuitive propositions - arguably the most interesting propositions, since they provide new knowledge about the theory's domain (if true), or about deficiencies of the theory (if false).

Formalizing a theory is not an easy task, since social science theories tend to have the complex, discursive structure of natural language. Chapter 4 discusses a way of filtering down the discursive text to a set of rules of a manageable size, suggesting that one can concentrate on those assumptions of the theory which are explicitly stated as hypotheses. Empirical theories make propositions about the structure of a real-life domain; hence, they are subject to the imperatives of falsifiability. Since hypotheses are singled out as the empirical touchstone of a theory, they can be regarded as the theory's core assumptions. Like the Organizational Consultant and DESIGN 6, the knowledge bases of the models of Chapter 4, T1 and T2, are based on contingency theory. But T1 and T2 use a different formalism of knowledge representation, namely First Order Predicate Logic (FOPL). FOPL is more cumbersome than the rule-based formalism of the former models, but it makes the formalization process more transparent. Chapter 4 can therefore draw conclusions regarding important gaps in present-day contingency theory.
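As an illustration of what such a formalization looks like, a typical contingency-theoretic design hypothesis ("organizations facing a dynamic environment should adopt an organic structure") might be rendered in FOPL roughly as follows. This rendering is a hypothetical example, not a formula taken from Chapter 4:

$$\forall x \, \bigl( \mathit{Organization}(x) \wedge \mathit{Dynamic}(\mathit{env}(x)) \rightarrow \mathit{Organic}(\mathit{structure}(x)) \bigr)$$

The gain over a rule-plus-certainty-factor notation is that quantifiers, predicates, and function symbols are fully explicit, so every inferential step can be checked mechanically.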
Learning From the Corporate System Model

Unlike Chapter 4, which centers on the theoretical issues of building knowledge-based systems from organization theory, Chapter 5, written by Roger Hall from the University of Manitoba, takes a more practical view. The focus is on how managers can learn from knowledge-based models of organization. Hall's Corporate System Model is informed by behaviorally-based learning techniques. These techniques usually rely on linking together concept variables from the managers' cognitions. Typically, however, these formal methods do not portray the characteristic responses of the management to policy problems based on the organization's internal culture or ideology; such decision making is usually specific to an organization, and intertwined with its history (learning from the past). Hall's system (based on his prize-winning paper "The Natural Logic of Management Policy Making" (1984)) aims at developing a model of the policy making apparatus of organizations generic enough to map the essential processes of policy making, yet adaptable to specific organizations.

While all other models presented in this book are purely declarative (mapping the domain by means of declarative rules), the Corporate System Model provides an interesting combination of procedural and declarative modeling. Essentially, the model has two modules. One module maps the organization as an integrated flow of resources, drawing on the author's previous work in modeling corporate systems. The second module captures the essence of the collective decision making process about that flow of resources. The first module uses a well-known procedural simulation technique, System Dynamics. It allows one to uncover feedback loops with dysfunctional effects hidden in the flow of resources. However, System Dynamics is less effective at mapping the driving forces that would provide, in a corporate setting, the changes in policies (Argyris' second order learning). Providing these driving forces is the task of the second module. The model is built around a generic causal map of the organization; it aims at replicating human reasoning and learning in the organizational context, including the social, political, and cultural processes of organizational decision making.

The Corporate System Model has several uses. It may serve as an alternative policy tool for companies undergoing a crisis (where the normal policy making process is not yielding the desired results, for no obvious reasons). In addition, the model may become a vehicle for uncovering critical success factors in managerial decision making (a subject of particular interest to management information specialists). Finally, the model can serve as a training tool for new executives.
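For readers unfamiliar with System Dynamics, the following toy sketch shows the kind of feedback structure the first module works with: a single stock (staff level) adjusted toward a policy target through a hiring-rate loop. The variables and parameter values are invented for illustration; the actual Corporate System Model is far richer.

```python
# Toy System Dynamics run: one stock, one negative-feedback loop.
staff = 80.0          # stock: current staff level
target = 100.0        # policy target
adjust_time = 4.0     # periods needed to close the gap
dt = 1.0              # simulation time step

for period in range(8):
    hiring_rate = (target - staff) / adjust_time  # feedback: gap drives flow
    staff += hiring_rate * dt                     # integrate the flow
    print(period, round(staff, 1))
# The stock converges smoothly on the target; adding delays or a second
# loop is what produces the hidden dysfunctional effects the text mentions.
```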
Information Management and Models of Organization

Chapters 2 through 5 propose expert systems for deriving models of organization. Chapter 6 reverses this perspective, proposing models of organization for deriving ES. Henk Gazendam from the University of Groningen argues that building an ES for an organization without a suitable model of that organization is likely to lead to a collection of loosely related, possibly inconsistent knowledge rules. To avoid this danger, ES building must be supported by suitable models of organization.

Many models of organization are now available. Chapter 6, in the tradition of Morgan (1986), distinguishes eight basic metaphors from which such models can be derived: machines, organisms, brains, cultures, political systems, psychic prisons, flux, and instruments of domination. Such metaphors are expressions of ways of seeing, and of thinking about, an organization. Traditional approaches are mostly based on the machine metaphor, which, because it lacks flexibility, is largely inadequate for information management. In fact, no single metaphor may provide enough flexibility. To avoid being trapped in a single metaphor, ES should support a variety of metaphors, and, consequently, a variety of models based on these metaphors.

Accordingly, Chapter 6 proposes a variety of models of organization, using a standard formalism for describing each model. This convention makes all models comparable, and facilitates their translation into computer code. So, Chapter 6 contains, in nuce, the full spectrum of present-day organization and management theory, ranging far beyond the contingency theory of the previous chapters. The set of organization models includes the contingency model, the process decomposition model, the system decomposition model, the autopoiesis and chaos model, the knowledge based system model, the learning knowledge based system model, the multi-actor knowledge based model, a classification-type expert system, a multi-actor deduction model, and an object-oriented simulation. Chapter 6 discusses the transposition of the last two models into two working ES for information management (in PROLOG and in SMALLTALK), and tells the story of implementing these ES in a large government department. An appendix gives the model description methodology and the full source code of the PROLOG model.
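To give a feel for the object-oriented style of the SMALLTALK model, here is a minimal sketch (in Python rather than SMALLTALK, and entirely hypothetical: the class names and behavior do not come from Chapter 6) of how an organization model might be built from interacting actor objects:

```python
# Hypothetical object-oriented organization model: actors hold local
# knowledge and pass work to one another, as in a multi-actor simulation.
class Actor:
    def __init__(self, name, can_handle):
        self.name = name
        self.can_handle = set(can_handle)  # task types this actor knows

    def handle(self, task, organization):
        if task in self.can_handle:
            return f"{self.name} handles {task}"
        return organization.delegate(task, exclude=self)

class Organization:
    def __init__(self, actors):
        self.actors = actors

    def delegate(self, task, exclude=None):
        for actor in self.actors:
            if actor is not exclude and task in actor.can_handle:
                return f"delegated: {actor.name} handles {task}"
        return f"no actor can handle {task}"

org = Organization([Actor("registry", ["filing"]), Actor("audit", ["review"])])
print(org.actors[0].handle("review", org))  # delegated: audit handles review
```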
ES in the Corporate Context

Chapter 6 points to the implementational problems of ES development. Chapter 7, written by Armand Hatchuel and Benoît Weil of the École des Mines of Paris, details these problems through the case study of an ES for strategic decision making in a major oil company. Both authors took part
in the project as participant observers, from beginning to end. They present unique material about ES design problems in a corporate context. METAL, as the ES is called, was to relieve the oil corporation of conflicts between its headquarters and its subsidiaries, which were caused by inconsistent standard operating procedures regarding the allocation of drilling rigs. Hatchuel and Weil discovered, however, that such inconsistencies reflected an equilibrium in the power struggle between headquarters and subsidiaries, and advised corporate management of the necessity of resolving this struggle if METAL was to be successful. The company went ahead with METAL, but the power struggle was not resolved; the project foundered in the next crisis.

Chapter 7 provides a concrete example of the intricacies of knowledge engineering in a corporate context. Because the knowledge pertaining to the allocation of drilling rigs was distributed across political factions, and because of the strategic value of this knowledge, it became very difficult to find individuals willing to share their expertise with METAL's developers.
Managerial Skills

METAL's developers ultimately failed at casting managerial skills into a knowledge-based system. Chapter 8, written by Tim Peterson of the US Air Force and David Van Fleet of the State University of Arizona, shows how this can be done. They discuss the merits of the Performance Mentor, an ES for helping managers with one of their most difficult tasks: appraising the performance of subordinates. While the claims of proponents of ES seem promising, those claims have rarely been supported by carefully collected, objective research data. The question remains whether ES can provide managers with the necessary support to perform their jobs more effectively. Peterson and Van Fleet designed an experiment to test the hypothesis that an ES would enable subjects to accomplish performance appraisals more effectively.

Most managers are promoted for their technical skills; they may thus lack the required managerial skills. Performance appraisal involves monitoring and adjusting organizational and individual activities toward goal attainment. Of the many different control activities within an organization, assessing human performance seems to be the most difficult one for managers. Managers are especially reluctant to give performance feedback that is critical or negative. Yet, without that performance feedback, the employee does not know how to do things right. The Performance Mentor is designed to assist managers in the feedback process. Having obtained a management style profile from the user and a profile of the workplace, the Performance Mentor asks about the subordinate's experience on the job, responses to previous performance appraisals, and relevant performance criteria - a total of sixty-six questions. The system then uses all of this information to recommend a performance feedback strategy.
For their experiment, Peterson and Van Fleet used managers within a single, large organization. Half used the Performance Mentor, while the other half did not. The setting was a highly realistic task, involving extensive paper documentation of performance and videotapes. The important point was not the actual performance appraisal evaluation (which was designed to be negative), but the feedback session. The focus was on the subjects' ability to identify the appropriate behavior for a negative performance feedback session. The results clearly support the hypothesis that an ES can help managers do their jobs better. The use of the Performance Mentor almost doubled the selection of correct behaviors and reduced the selection of incorrect behaviors by half.
Costs and Benefits

If Chapters 1 through 8 have a common message, it is that developing ES for organizations is a challenging, difficult task. The task may become so difficult that a cost/benefit analysis is advisable. ES are expensive to develop, despite advances in ES shells. Prudent business practice requires that a decision about the development of an ES be preceded by a cost/benefit analysis. Chapter 9, written by Hennie Daniels of Tilburg University and Pim van der Horst of Credit Lyonnais in The Netherlands, provides a generic methodology for the cost/benefit analysis of ES, taking into account both the qualitative and the quantitative aspects of ES development. The chapter argues that a cost/benefit analysis should serve as a guiding principle for management to evaluate the different opportunities for applying an ES. Insufficient understanding of the benefits may cast such development efforts in a negative light. Based on the substitution principle of capital and labor, the proposed methodology compares the costs of advice with and without the ES. Important factors in the analysis are the frequency of advice, the fees of the human expert, and the fraction of problems which the new ES cannot resolve. The chapter emphasizes the necessity of the prototyping approach to keep costs under control.
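One hypothetical reading of that substitution logic, with invented figures purely for illustration (Chapter 9's actual method is more elaborate):

```python
# Yearly cost of advice without the ES versus with it.
frequency = 400            # consultations per year
expert_fee = 150.0         # cost of one human consultation
unresolved = 0.15          # fraction of problems the ES cannot resolve
es_yearly_cost = 20_000.0  # amortized development plus maintenance

cost_without_es = frequency * expert_fee
cost_with_es = es_yearly_cost + frequency * unresolved * expert_fee
print(cost_without_es, cost_with_es)  # 60000.0 29000.0 -> the ES pays off
```

The sensitivity of the outcome to the unresolved fraction is one reason to stress prototyping: an early prototype yields a realistic estimate of that fraction before the full development costs are sunk.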
Expert System Principles

The last chapter, written by Linda van der Gaag and Peter Lucas of Utrecht University and Amsterdam University, respectively, provides a generic overview of ES terminology and ES principles. ES are still a new technology, so some readers may appreciate a concise introduction. Written by the authors of a new textbook on ES (Addison Wesley, 1990), Chapter 10 provides a comprehensive overview of all relevant aspects. It draws attention to the important distinction between procedural and declarative knowledge, outlines various paradigms of
knowledge representation and inferencing (logic, production rules, semantic nets, frames), and discusses matters of reasoning with uncertainty. In addition, the reader is introduced to user interface and explanation facilities, and receives a crash course in knowledge engineering.
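As a taste of one of those paradigms, the following sketch shows a frame in the usual slots-plus-inheritance style; the frame names and slots are invented here, not drawn from Chapter 10:

```python
# Frames as dictionaries with an "is_a" link; a slot lookup climbs the
# inheritance chain until it finds a filler.
frames = {
    "employee": {"is_a": None, "salaried": True},
    "manager": {"is_a": "employee", "span_of_control": 7},
}

def get_slot(frame_name, slot):
    frame = frames.get(frame_name)
    while frame is not None:
        if slot in frame:
            return frame[slot]
        frame = frames.get(frame["is_a"]) if frame.get("is_a") else None
    return None

print(get_slot("manager", "salaried"))  # True, inherited from "employee"
```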
Chapter 1
Experts, Expert Systems, and Organizations
Jeremiah J. Sullivan
The information needs of organizations in the 1980s are being met through the development of computer hardware and software of increasing sophistication. Early transaction-based systems recorded accounting, financial, and marketing data. Next came management information systems to transform data into information. Recently, decision-support systems (DSS) have emerged which are tailored to the manager's needs for fast, easy access to data, for the manipulation of data, for input to statistical models, for estimating the consequences of decisions, and for actually making decisions. Expert systems are the latest development in business software. In addition to having the capabilities of decision support systems, ES can explain how and why they arrive at a specific decision. Generic expert systems for special tasks within a domain common to most organizations (e.g., tax accounting) are now becoming available, but many ES are still customized creations (Humpert and Holley, 1988; Tang and Adams, 1988; Whitaker and Ostberg, 1988). This chapter examines their uses and limitations in organizations. Our inquiry starts with the nature of expertise in organizations. What is a human expert? What does he or she do? Are experts necessary? What is their real impact on organizational effectiveness? In which respects are ES likely to differ from human experts? What are the effects of bringing this new technology to organizations and their management?
1.1 Experts and Expert Systems Experts in accounting, finance, law, marketing, and economics are entrenched among the management of today's corporations and public organizations. They play an increasingly important role as organizations become more complex, risk-averse, and global in focus. Amateurism and trial-and-error approaches to problem solving are not likely to prosper as management strategies.
According to one influential definition, expertise is (1) domain knowledge, (2) awareness of significant domain problems, and (3) skill at solving those problems (Hayes-Roth, Waterman, and Lenat, 1983). Domain knowledge consists of facts, rules, and inferences accumulated from observation and training, and knowledge of the sources of information in reports, texts, manuals, and so forth. Domain-problem awareness is learned through experience of what constitutes an important relationship between tasks and goals. Problem-solving skill is developed through training, communications with other experts, and observations. It consists of the application of mental models, scripts, schemas, plans, heuristics, theories, and cognitive knowledge structures which satisfy organizational needs.
How Experts Think

Over the last decade, a number of studies have compared the thought processes of experts to those of non-experts in an attempt to understand the nature of expertise. In one interesting study (Bouwman, 1981), accounting experts and novices were asked to examine financial information on a company and to suggest underlying problem areas. Protocol analysis revealed the following about the experts:
• The experts began by developing a total picture (a mental model or schema) of the firm, and then searching for specific items of interest. This active information acquisition contrasted with that of the novices, who analyzed the data in the order it was presented.
• The novices made simple comparisons and did little other analysis. The experts used comparisons across time and accounting categories to form a series of hypotheses which then guided them in their further investigations.
• The experts' guiding hypotheses generated a series of checklist-like questions which sought information to either confirm the hypotheses or revise them.
Research summarized by Glaser (1985) supports Bouwman's findings. Experts have the ability to perceive large meaningful patterns, embodying numerous complex inferences and abstract principles arranged in a coherent and useful manner. They are able to construct cause and effect sequences which lead toward the explanation of a problem or the attainment of a goal within their domain, and their model-building propensities allow them to proceed with relatively little memory search and processing.
Organizational Common Sense and Expertise

Experts in organizations are identified as such because they possess knowledge of important domains in the form of facts, procedures, and information research methods. Expert knowledge transcends common sense, which is embodied, first, in shared fundamental assumptions about the organization (e.g., the goal of the firm is to make profits through mass marketing of branded products). Second, shared maxims and beliefs amplify fundamental assumptions (e.g., price-cutting based on cost control is the best way to maximize revenues). Third, shared ways of interpreting, explaining, and understanding organizational events foster the processing of information (e.g., lowered demand for the firm's products is usually the result of cyclical downturns in the economy). The value of common wisdom lies in its power to cement communal bonds and to reduce uncertainty so that actions can be legitimized and justified (White, 1984). Indeed, organizations with strong, communal cultures and powerful common wisdom may not need or tolerate expertise or ES to the extent that weak, culture-impoverished organizations do. Organizational common sense implies a simplified, constrained way of looking at the world and is relatively uniform across the organization. It facilitates stability, coherence, and consensus. To fly in the face of common wisdom is a daunting task, yet experts often are asked to recommend optimal solutions, not just acceptable ones. While they draw from the common wisdom - they would not survive if they did not - their expertise is embodied in the form of schemas and theories based on specialized experience unavailable to most members, and on statistical theory, neither of which are noteworthy contributors to common sense (Furnham, 1988).

A schema is a knowledge structure residing in memory containing information on the expert's domain, problems in the domain, goals, and procedures for solving problems to attain goals (Fiske and Taylor, 1984). An expert may take as many as ten years to develop domain schemas which he or she invokes in the face of recognizable patterns of events and problems (Gilhooly, 1988). The expert in a domain may be referred to as "schematic" for the domain, and is differentiated from "a-schematics" (non-experts), individuals who have not developed knowledge structures to guide fast, useful actions or judgments (Markus, 1977). An expert's schema directs what phenomena he or she will attend to (Cacioppo, Petty, and Sidera, 1982). The schema suggests steps to take and helps the expert to justify action as consistent, effective, and correct (Hansen and Donoghue, 1977; Fields and Schuman, 1976; Harvey, Wells, and Alvarez, 1978). A schema may be either tacit or conscious (Berry, 1987). Tacit schemas are recoverable by a knowledge engineer. Some tacit schemas, however, may have been implicitly learned. These are not recoverable; a
knowledge engineer must simulate them. A conscious schema, in contrast, is easily recoverable. Knowledge engineers, then, must assess the kind of non-commonsense schemas held by the experts they are studying. Expert systems which are isomorphic to conscious schemas can be constructed. Tacit, learned schemas can also be modeled in a rough one-to-one fashion. For tacit, implicitly learned schemas, no isomorphism can be created. An expert system may then simulate an expert's reasoning without replicating it. If these kinds of expert simulations are developed, they ought not to stand alone but rather be used in concert with an available expert. They would complement, rather than replace, the expert.

Schemas are augmented by theories in an organizational expert's non-commonsensical wisdom. As used here, the term theory refers to metaknowledge, i.e., sets of coherent propositions about domains which guide reasoning and action when the expert cannot invoke a schema because of the novelty or complexity of a problem or the lack of information. A theory differs from a schema in a number of ways. In particular:
• It is less embedded in deep memory. Thus experts can report theories more easily to knowledge engineers than most schemas.
• A theory is often short, consisting of a few general statements rather than a list of linked inferences or procedures.
• It is dependent on formally acquired knowledge, whereas schemas are the result of repeated experiences and trial and error.
• It is easily and quickly revised. Schemas change only slowly.
In the examples noted above on generating revenues from sales of branded products, the common wisdom is that little can be done to increase revenues in a downturn. A marketing expert, however, might hold schemas and theories which differ from the organizational wisdom. She might share the fundamental common sense assumption about relying on branded products, but have developed non-commonsensical knowledge of procedures for developing and selling branded products. These are her schemas. Assume now that a massive recession is beginning, and her expertise is sought out. Conventional wisdom dictates layoffs and other cost-cutting measures, but the expert disagrees. She can't rely solely on her schemas to guide her in the non-normal environment, but she does have a theory, perhaps learned in her MBA training and based on empirical evidence, that branding is one of the best ways to make a company recession-proof. Sales of well-known products maintain revenue levels even in recessions. She hypothesizes that the firm should begin immediately to develop a strong brand image for those products not currently branded. These costly efforts should pay off as the recession deepens by ensuring that revenues stay firm. As she thinks about her theory, however, she realizes that branded products which are highly price-elastic do not maintain stable revenues in recessions. She immediately revises her theory to focus only on developing new branded
products which are moderately price-elastic and/or price-inelastic. The revised theory guides her response to management. It is not based on the conventional wisdom, and it is more general, thoughtful, and revisable than her schemas. Together with her schemas, her theories add up to her expertise, and the organization depends on expertise to guide optimizing actions. For satisficing and routine actions, the conventional wisdom usually will work, and experts will not be needed.

An expert's ability to structure his or her knowledge stems from thousands of hours of learning and experience (Chi et al., 1982). As noted above, this experience becomes summarized in cognitive schemas and theories which direct judgment and advice. These need not lock an expert into a set of inferences, however. Indeed, experts appear to possess developed procedures for revising theories and schemas (Lesgold, 1984). They are goal-driven, and when one cognitive approach does not work, they have the ability to try another. We might expect, then, to see different experts employ a variety of equally fruitful approaches to solve the same problem. In the author's research this has been the case. An ES devised by consulting experts in one firm was submitted for its comments to another in the same market and with the same task. The second firm's partner rejected the ES, explaining that, "We go about it differently here". Because of their flexibility and ceaseless revising of procedures, there often is little similarity in experts' methods for accomplishing the same task and in their communications among themselves and with managers. Expertise is organization- and context-specific.

In sum, in the current theory of expertise, experts are goal-driven and employ schemas and theories to guide their information searches and hypothesizing. They ask questions of people and search for data to test both their models and theory-based hypotheses. Mentally embedded revising procedures are brought into play when tests are not supportive. Eventually, a set of hypotheses is derived that leads to the goal. Expert systems in current use attempt to mimic this process. However, ES developers attempting to model the thinking of a particular expert often find that experts have little experience in articulating their goal attainment process (Olson and Rueter, 1987). In addition, "If one accepts the conclusion that experts tend to have conceptually abstract, pattern-oriented mental models, then one must simultaneously question the accessibility of these models via verbalization methods." (Rouse and Morris, 1985: 29).

H. Dreyfus, a leading critic of artificial intelligence efforts, has described a second theory of expertise that denies the possibility of ES modeling (Dreyfus and Dreyfus, 1984). He focuses on the human capacity to discriminate thousands of special cases - a dimension of expertise beyond the reach of a rule-based system (Dreyfus, 1988). Rather than make inferences, which can be captured in a knowledge engineer's rules, the expert simply may recognize patterns - connected facts, procedures, problems and solutions - which are not
expressible in language. This inexpressibility ensures that an ES will not model an expert with high accuracy. Moreover, even if the inferencing of an expert can be modeled, the ES may capture only the expert's biases and quasi-rational heuristics along with his or her goal-focused thought processes (Kahneman, Slovic, and Tversky, 1982). However, Dreyfus' theory of expertise lacks empirical support.
Experts in Organizations

The role of "expert" in modern organizations is quite encompassing, and ES certainly can mimic many different varieties of expertise. We can home in on the nature of organizational expertise by examining the functions of experts and by contrasting them with non-experts. What do experts do? First, an expert is an uncertainty-reducing organizational entity who substitutes experience and training for empirical observation. Managers receive speedy inferences from such people. Second, an expert also may be a source of patterned responses which can be accessed in no other way in the organization. Expertise is inspired intuition. This is Dreyfus' view, as described above. Third, an expert can be an advocate who represents stakeholders in the organization (e.g., accountants representing shareholders, lawyers representing the courts). This kind of expertise serves a normative, prescriptive function. It also plays at times a "voice" role of the kind described by Hirschman in his influential Exit, Voice, and Loyalty (Hirschman, 1970). Thus a labor expert in the organization can articulate worker dissatisfaction as a spur to corrective action before "exit" of labor occurs. Fourth, an expert can serve as a rallying point around which either pro or contra positions on an issue may cluster (Leonard-Barton, 1985). The expert is a "point man." Fifth, expertise may be used as a justification device to legitimize decisions already reached and actions already taken.

Even if expert systems are not very good at modeling inspired intuition, they can function in the other ways experts do. They can provide uncertainty-reducing information, fill in users on the "party line" of an important power group in the organization, articulate pro and con positions on an issue and recommend one, and provide justification for decisions post hoc. Given the functions of experts, one can locate expertise as occurring somewhere along a continuum with a manager at one end and a professional at the other. Experts possess characteristics of both professionals and managers. They do both routine and non-routine tasks, are both cooperative and competitive with colleagues, help clients with benefits to themselves, hold dual memberships in professional and organizational communities, and focus both on accumulating and using knowledge. In a sense, then, an expert can be defined as a managerial professional within an organizational setting:
Professional:
• Treats his/her work as more than routine.
• Engages in mostly cooperative relations.
• Engages in mostly non-self-serving relations.
• Identifies himself/herself as a member of a professional community.
• Goals often focus on achieving something of benefit to humankind.
• Focus on the accumulation of knowledge.

Manager:
• Often perceives work to be routine.
• Often engages in competitive relations with colleagues.
• Often engages in self-serving relations with clients.
• Identifies himself/herself as a member of an organizational community.
• Goals often focus on achieving something of benefit to the organization.
• Focus on the instrumental use of knowledge.
This definition can be further refined by contrasting the managerial professional with three roles which have become prevalent in modern large organizations:
• The "hot-shot," who possesses elaborate tools but has no real sense of the significant problems to which those tools should be applied.
• The "nerd," who works long and hard to solve non-problems.
• The "cloud-walker," who solves problems he or she perceives to be significant but which in fact do not focus on organizational goal attainment (e.g., the compensation manager who develops a complex compensation system which does not help the company attain its goal of profitability).
All of these roles compete for the title of expert, but a genuine expert possesses complex tools, uses them to solve significant problems, and focuses on problems associated with the attainment of organizational goals.
Expert Systems

Although ES can make attributions, predictions, reasoned inferences, diagnoses, explanations, interpretations, prescriptions, and conclusions, their general function is to give either direct or implied advice to managers who have a problem to solve or a decision to make (Fersko-Weiss, 1985). Within this general context, and given the variety of roles experts play, the full range of specific uses of ES in organizations has not yet become clear. They probably will do most of the following:
• Provide expert advice when an expert is not available or is unwilling to risk giving advice. In one system, for example, a hotel clerk can use an ES to advise a customer without having to waste time while telephoning an expert about an important problem.
• Offer a chance to obtain consistent, structured advice to replace the often inconsistent, ad hoc advice of a collection of on-hand experts who are poorly organized and poor communicators.
• Complement experts so that the user will access an ES and use its advice and reasoning as a starting point in his or her consultation with an expert.
• Embody the consensus expertise of a group of experts so that a single expert can test his or her conclusions against the group's.
• Act as a collegiality mechanism so that an expert can add to the system and thus share his or her expertise with peers.
• Serve as an inexpensive, easy-to-build and repair substitute for an elaborate DSS.
• Function as a training aid in the development of expertise in the organization.
• Integrate expert advice with policies, regulations, goals, and other organizational constraints.
1.2 Decision-Support Systems and Expert Systems

Data bases answer "What is?" questions. Decision-support systems answer "What if?" questions and "Analyze my options" requests. Expert systems focus on "Now what?", "Why?", and "Tell me what to do" (Blanning, 1984). A DSS employs data, stochastic models, and econometric models to develop analyses for decision makers. It doesn't necessarily provide solutions to problems but rather is designed to improve the quality of inferences, judgments, and choices of managers who often must operate in a problem domain in which a set of possible solutions is not always available. Some DSS exist to help managers forecast sales and then to develop capital investment scenarios based on forecasts and production plans. Others focus on portfolio management, comparing held portfolios with useful models, or on analyzing the possible allocations of tasks across the work-force (Ford, 1985).

Expert systems, while capable of doing many DSS tasks, generally focus on structured problems, those where a set of possible solutions exists and the decision task is to choose the appropriate one. "Appropriate" means in accord with the advice of at least one expert or within the constraints of policy, rules, and the existing knowledge base. Where DSS use optimization tools and formal reasoning, ES usually employ goal attainment based on heuristic reasoning. ES can be, but usually are not, used to supply the kind of sensitivity analyses, simulations, and Monte Carlo approaches of DSS. Thus the ES is not really the spur to informed, thoughtful choice that the DSS is. Indeed, it provides advice with only modest amounts of justification. What many (not all) users want from an ES is a tool to do their thinking for them.
Expert Systems Do Things That Decision Support Systems Don't
What is likely to be the value of an ES in comparison with a DSS employing an elaborate statistical model which accesses a database of historical costs, time series data, and other information?
1. The ES does not depend primarily on a large, cumbersome database, although it must seek some data from the user during a session. Instead, it depends on facts and rules established by an expert. Essentially, the expert has filtered much of the input data that a DSS would require and developed knowledge which reduces uncertainty just as a DSS model does. Since the expert would not have retained his or her role as expert without developing a method to filter data and acquire significant knowledge, the advice provided is likely to be timely, relevant, accurate, and credible - or at least, to be perceived as such by users.
2. Expert systems of the 200-300 rule variety - the kind most likely to be used in managerial settings - are relatively easy to build and maintain. A general rule of thumb for expert system developers is that it takes one workhour to produce one validated ES rule, so a 200-rule system would take about 200 workhours to develop, test, and implement. A DSS statistical model to accomplish a similar task might take longer to build and almost certainly would require more time to develop up-to-date data to maintain the system. ES maintenance requires only an interview with the expert and simple rule changes in the system program.
3. Because of their rule-based nature, such systems incorporate a sense of change, something difficult to accomplish in a DSS, which depends on the application of a model to historical data. Experts tend to have an inventory of heuristics which they use not only to make judgments based on historical data, but also to judge when such judgments are likely to be in need of change. They, and the ES which mimic them, have an awareness of when turning points are about to occur. (See the discussion below on irrationality and expert sensing of turning points.)
4. An ES models the decision-making process of the managers who use it as well as that of experts. These processes are not described by a DSS model, which is based on massive data search and retrieval and elaborate, complex analyses. Most managers, however, rely on cognitive scripts, schemas, knowledge structures, and heuristics which reduce their need for data search and formal analysis as a prelude to decisions. An ES captures this way of thinking and thus is likely to have more credibility in its communications than a system which users cannot understand. Its credibility is further
enhanced by an ability to explain its conclusions and, in a simple way, its line of reasoning.
5. An ES allows user-machine communication via a natural-language interface, an important advantage over command-driven DSS. Moreover, the ES creates a sense that behind the machine is another human, an expert in the organization who can be called on for further advice or elaboration - or argument. There is no arguing with a DSS.
In sum, the logic of ES suggests that they will become important computer decision aids in business in the future. In relation to DSS, ES will function in two ways. First, they will be integrated into DSS. An ES can help a manager decide which statistical model to use in analyzing a body of data. The user-system interface can also be made into a natural-language interchange so that little assistance is needed from technical personnel. Second, ES will stand alone, serving decision makers who want quick advice on which decision alternative to follow. DSS, in contrast, will aid those managers who want to be more involved in the process leading to a decision. These managers will want to be more in touch with the database and the inferencing process, and they will prefer a command-driven interaction to a more elaborate dialogue with the system. ES will be preferred when managers either trust the level of expertise in the organization or don't trust or understand statistical models (perhaps because the environment is too turbulent to be easily modeled).
Rhetorical Power
When top management sets objectives for the next year or the next five years, managers responsible for profit centers must make projections, often in the form of income or cash flow statements. ES and DSS can help the manager make forecasts under a range of assumptions. In turn the realism of top management's goals can be tested. In a sense the user asks the computer to make a case for each of a set of scenarios crafted from the executives' goals, the manager's judgments, and the current database. The output has two values. First, it constitutes the inferences about the future which the computer has generated under a set of assumptions. Second, the output is a rhetorical tool which a manager can employ to convince senior executives either to change their goals or to approve manipulation of controllable marketing variables. DSS are good at achieving the first value. ES are good at achieving the second, a rhetorical value. The computer, as a source of information, plays a dual role as staff analyst and expert. As an analyst, it gathers data, processes it, and makes inferences in the form of predictions. This is the role which a DSS plays so well. But an ES can go further. For example, it can (1) suggest actions to take to increase the level of predicted sales and, either implicitly or explicitly, (2) suggest arguments
to persuade others to approve such actions. This second function may be the more important. In a study of DSS use, Lodish (1982) found that the output from an optimization model in the DSS was not accepted by users. Instead, they wanted information, advice, and arguments which they could employ in their negotiations with senior managers. What they wanted, in effect, was an ES to help them persuade senior managers. It is this rhetorical support function which has been lacking in DSS. By the same token, when managers realize how ES can be used as persuasive tools as well as decision aids, their use should grow.
Rules of Thumb and Irrationality
Expert systems often are easier to develop than DSS based on statistical models. ES may also last longer, since the expert's rules of thumb represent long-term experience not captured in a statistical model (Feigenbaum, McCorduck, and Nii, 1988). For example, an expert may forecast sales quite well using as a starting point a heuristic IF (the weather is hot) THEN (people tend to buy more insurance). The heuristic may have predictive value, even though weather and insurance purchases are not directly related. Instead of a DSS model measuring the concept "more," the rule of thumb leaves it vague. Other rules will progressively reduce the ambiguity until "more" takes on a value or a range of values, as the sketch after this list suggests:
• IF (people buy more insurance) THEN (they tend to increase amounts of existing policies rather than buy new policies)
• IF (people increase existing policies) THEN (they usually do it in increments of $1,000, $5,000, or $10,000)
• IF (the economy is on the upswing) THEN (when they increase, it's by $10,000 on average)
• ...
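A minimal sketch in Python of how such a chain of heuristics might be encoded; the function name, the rule structure, and the reuse of the dollar figures above are purely illustrative, not drawn from any deployed system:

# Hypothetical sketch: chaining rules of thumb to narrow a vague
# concept ("buy more insurance") down to a concrete increment.
def forecast_increment(weather_hot, economy_upswing):
    facts = set()
    if weather_hot:
        facts.add("buy more insurance")          # IF hot THEN buy more
    if "buy more insurance" in facts:
        facts.add("increase existing policies")  # rather than new policies
    if "increase existing policies" in facts:
        if economy_upswing:
            return 10000                         # upswing: $10,000 on average
        return [1000, 5000, 10000]               # otherwise, a range of values
    return None

print(forecast_increment(weather_hot=True, economy_upswing=True))   # 10000
print(forecast_increment(weather_hot=True, economy_upswing=False))  # [1000, 5000, 10000]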
Eventually, the application of heuristics would home in on a value useful to a decision maker. The information provided by the ES is not embedded in formulaic relationships. Such relationships can be quite unstable and in need of frequent revision. The expert's rules of thumb, however, are usually quite general and thus need less frequent updating. Updating, moreover, is simply a matter of changing premises and conclusions in rules based on an expert's revised judgments. No elaborate counting and measuring is needed. One interesting advantage an ES may have over MIS and DSS is the fruitful use of serendipity. Sometimes an expert may find that for no rational reason something tends to predict something else. For example, changes in the stork population may tend to predict changes in the human population. A demographic expert may uncover this relationship and quietly incorporate it in an ES he
is devising (ES designers, by the way, have created inference engines which are protected from the prying eyes of users who might be appalled at the strange rules of thumb experts sometimes use). This "whatever-it-takes" approach of experts is not likely to be looked on favorably by MIS and DSS system builders. Not only is the use of serendipitous predictors a boon to experts and ES, but the use of irrational heuristics can help as well. DSS which do forecasts based on statistical or econometric models all have a difficult time dealing with turning points. Sales which have tended to go up and down in predictable cycles may suddenly deviate in a dramatic way from the trend, and sophisticated techniques such as Box-Jenkins or catastrophe theory will be unable to model the change. An expert employing the gambler's fallacy, however, might successfully predict the turning point (Kahneman, Slovic, and Tversky, 1982). In this fallacy, "chance is viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore the equilibrium" (p.7). Assume that sales have been trending upwards for six quarters with no reason to predict a fall. Yet an expert, mumbling something about "the well-known pendulum effect," might decide that what swings one way must eventually swing the other way and thus predict an imminent decline. If she were correct, she - and the ES modelling her wisdom - would be praised as a keen observer worthy of the name expert. If no fall occurred, the expert might simply be labeled conservative in her expectations. Thus the risk of employing the gambler's fallacy is probably more than matched by the expected value of increased rewards. With such incentives, experts and ES would tend to use the gambler's fallacy and other irrational heuristics. Moreover, through trial and error an expert might learn when to use the gambler's fallacy and when not to. Essentially, she would have learned something useful about the environment, and even though she was applying her knowledge through the mechanism of an irrational process, the result would be improved forecasts of some turning points. For the wrong reasons (e.g., the "pendulum effect"), the expert would be communicating right forecasts. In these situations, the presence of irrational inference-making tools in the expert's mind (and the expert system) is a stimulus rather than a hindrance to learning, prediction, and good organizational functioning.
1.3 Integrating Expert Systems in Organizations
An ongoing problem in strategic planning and decision making in organizations is information management. Information must flow to decision makers at a rate which doesn't overload them and in a form which is focused on future outcomes rather than on descriptions of the past. It should communicate inferences rather
than raw data or even transformed data. Also, it should be relevant, timely, reliable, material, comparable, authentic, and congruent. Figure 1.1 describes an expert system in an organizational context which possesses these characteristics. In this configuration, the ES advice is influenced by the user's input, which is developed after accessing context-establishing information about problems, opportunities, goals, and policies. Thus the advice should be relevant and congruent with a user's needs and constraints. Because the ES will be embedded in an information center structure existing within M.I.S., a structure which is managed so that its files, expert systems, and databases are periodically maintained, the advice will be timely and reliable. Moreover, the files of data and information will be in a form allowing uniformity and comparability across departments and decisions. Materiality, the significance of events and information describing events, will be maintained by constant surveying of the expert groups which make up the society of experts. Part of their activities as experts is identifying important organizational problems and opportunities. Finally, authenticity will be established. Authenticity is the degree to which a communication transmits information perceived by a receiver to describe reality. If a user perceives advice to be non-authentic, then it will be rejected, no matter how relevant, timely, etc. Lack of authenticity has been a major problem with DSS software. Expert systems, however, communicate in a user-friendly manner and can make use of rhetorical techniques to convince users that what they have to say is realistic and useful.
Context Dependency
Expert systems are likely to be organization specific. The point here is that expert knowledge captured by an ES has no meaning except within the context of an organization at a particular time in its existence. To maintain meaningfulness, an ES must be adjusted regularly to reflect changes in the environment, the problem domain, the important problems, and the society of experts. Generic ES currently being sold by vendors may have little advisory value, although they may have use as training aids and simulation devices. A homegrown ES should provide explanations and justifications which make sense to users, knowledge engineers, and experts not modeled. If not, they will be ignored. Large, complex, multiple goal commercial ES are not likely to be adept at explaining themselves within the organization's context, and thus their advisory power - which is to say their persuasive power - will be weak.
Figure 1.1: An Expert System in an Organization
The Nature of Problems and ES Costs
Expertise in organizations focuses on problems, which can be defined as the determination of how to move from one state to another through some kind of action. Well-defined problems have specified starting states, ending states, and procedures (Reitman, 1965). The role of expertise requires the ability to handle problems ranging from ill-defined to well-defined.
Ill-defined problem: Do something to increase revenues.
Poorly-defined problem:
A. Only end state specified: Do something to increase revenues for product X within 12 months.
B. End state and starting state specified: Do something to increase revenues for product X from $500,000 annually to $600,000 annually within 12 months.
C. Only starting state specified: Current revenues for product X are $500,000. Improve these.
D. Only procedure specified: Improve revenues by expanding market share through advertising.
E. Starting state and procedure specified: Current revenues of product X are $500,000. Improve these by expanding market share through advertising.
Well-defined problem: Current annual revenues of product X are $500,000. Improve these to $600,000 within 12 months by expanding market share through advertising.
Experts prefer well-defined problems, but they also can cope with poorly or ill-defined problems. For well-defined problems they invoke schemas, often of the conscious variety. These are easily modeled by knowledge engineers. For other kinds of problems experts draw on tacit schemas and theories and even intuitive pattern recognition processes. These problems are hard to model. In addition, as new information is received, experts will revise theories or invoke new schemas to deal with poorly-defined problems. Expert systems which simulate expertise on poorly-defined problems thus will be expensive to build and maintain. Knowledge engineers will spend many hours trying to model how an expert handles a vague problem. Then, because of theory revising, they will have to return several weeks or months later to capture revisions to expertise. The cost of ES development can be brought down considerably by building ES mostly for well-defined problems. Another way of saying this is that, since well-defined problems are modeled by small ES, ten 200-rule systems for a domain are likely to be less costly to maintain than one 2000-rule ES modeling an ill-defined problem.
The Impact on Experts
A number of hypotheses and counter-hypotheses can be developed about the effect of ES on experts in organizations. Typically, within an organization, experts rarely agree completely on a complex course of action. Each approaches the problem domain in a different manner, and each uses different rules, schemas, and scripts to arrive at a goal. When they can't agree, a process of negotiation occurs which develops a consensus on what ought to be done. What impact will ES have on this process? Will they foster negotiation by forcing a decision on how a particular ES should be configured? Or will the forcing lead to the decline of negotiation? These questions will require a good deal of research. In addition, will ES encourage the development of expertise in an organization, or encourage stasis? A case can be made for both outcomes.
Finally, experts appear to perform a nurturing function. Will this hand-holding of managers vanish or increase as a result of ES? Expert systems will have an impact as they become more prevalent in organizations, but how they impact experts' interactions with each other, the development of expertise, and the managerial support function are questions which will stimulate organizational researchers well into the next century (Cupello and Mishelevich, 1988). This paper ends with three theories which describe the impact of ESs on expert functioning in organizations.
Adopting Expert Systems
An organization is likely to consider managerial ES for executives' use in strategic planning and for mid-level managers' needs to diagnose problems; to schedule and assign material resources; to make predictions, bids, and forecasts; and to develop and manage human resources (Blanning, 1984; Feigenbaum, McCorduck, and Nii, 1988; Mockler, 1989). The likelihood of adoption will depend on receptiveness to innovation as well as on cost-benefit analyses (Burbridge and Friedman, 1987; O'Leary and Turban, 1987). Research on the acceptance of information technology suggests that successful implementation depends to some extent on how much potential users are consulted, how fast implementation occurs, and how well users are trained (Morieux and Sutherland, 1988). If ES are implemented well, what will be their impact on organizational culture as it is reflected in roles, leadership and power, communication and information flows, and personnel requirements? Important technology almost always forces changes in the ways people relate to each other in institutions. How ES will drive such changes is another important question for future research.
Will They Be Accepted?
Hough and Duffy (1987) found that most top managers have not heard of decision support systems. DSS - and presumably ES - are used only occasionally by those who have heard of and have access to them. The decision problems of top managers are characterized by time pressure, conflicting objectives and decision criteria, complexity, and a lack of ways to learn of the impact of specific actions. Will these executives come to believe that ES can help them? Answers depend on a number of factors. The acceptance process may involve a designation of ES by managers as either producers of effects or as agents possessing intelligence (Karpat and Schof, 1982). In organizations facing turbulent environments, managers are likely to be performance-oriented. They will want ES to be producers of effects. In less turbulent environments, effects are not as important as processes. Here
the intelligent discourse of an ES in the role of an automated advisor and assistant will come into play. Research is needed relating organizational environments to ES functions. We can hypothesize that ES intelligent discourse capabilities of explanation, justification, and human-machine interaction will be more important in organizations facing placid rather than turbulent environments. This suggests that ES will look and talk differently and be less accepted in a fast-paced consumer products firm in comparison, say, with a defense contractor enjoying long-term government contracts. In firms where the ES's communications facility is important, explanation functions may have to improve. Currently, advanced shells allow explanations in response to a "Why?" query, but ES also ought to be able to explain counter-intuitive recommendations or diagnoses by responding to "Why not X?" queries (Rousset and Safar, 1987). ES lacking communication skills will be resisted (Negrotti, 1987). For example, managers do not employ probabilistic reasoning in the way current ES shells would expect (Bramer, 1988). They tend to use verbal terms rather than numbers to describe states of confidence or uncertainty. Managers I have studied use these terms:
• High probability
• Very likely
• Excellent chance
• Probable
• Likely
• Highly possible
• Reasonably possible
• Possible
• Slight
• Low probability
• Unlikely
• Minimal likelihood
• Remote
• Extremely doubtful
• Very unlikely
The grouping of terms indicates those items which have roughly the same uncertainty meaning. However, different organizational cultures will impose different meanings on those words. Consider the words soon and immediately. In one firm I found the distribution of immediately to be from "right now" to one week. Soon ranged from one day to one year! Expert systems which must model
managers' use of non-numerical terms, if they are to be accepted, will have to be customized to reflect prevailing practices in each firm. Experts often know these things, and knowledge engineering will have to develop ways of extracting such uncertainty terminology from them.
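As an illustration of the kind of customization this implies, a hedged sketch in Python: a firm-specific table mapping verbal terms to the numeric confidences a shell expects. The groupings and the numbers below are invented for illustration and would have to be elicited anew in each firm:

# Hypothetical mapping from one firm's verbal uncertainty terms to
# numeric confidence values (0-100). Values are illustrative only.
VERBAL_CONFIDENCE = {
    "high probability": 90, "very likely": 90, "excellent chance": 90,
    "probable": 70, "likely": 70, "highly possible": 70,
    "reasonably possible": 50, "possible": 50,
    "slight": 30, "low probability": 30, "unlikely": 30,
    "minimal likelihood": 10, "remote": 10,
    "extremely doubtful": 10, "very unlikely": 10,
}

def to_confidence(term):
    # Normalize case so the manager's phrasing matches the table.
    return VERBAL_CONFIDENCE[term.lower()]

print(to_confidence("Very likely"))  # 90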
Impact on Organizations
Three theories can be developed explaining the status of expertise in organizations after ES become integrated. In the manager-as-expert theory, ES cause expertise to be shared with all employees, and expert power declines. In the strong expert theory, ES free up the dangerous tendencies of experts, who become much more influential in the organization. The final theory, the no expert theory, sees organizations losing the important symbolic and ritual functions of experts as ES replace them or force them into less visible positions.
Manager as Expert Theory
In this theory, an ES is likely to encourage reduced support staff in the organization. This "every manager her own expert" perspective will lead to the removal of human experts from frequent, ongoing, and extensive interactions with managers and relegate them to communicating with software producers in conjunction with knowledge engineers. Senior accounting staff, for example, would be less available to managers who need advice on accounting for foreign exchange earnings. Instead, an ES would be on call and regularly updated. The layers of secretarial and support staff which formerly surrounded the accounting expert to help him cope with his elaborate communications network would be reduced. The power of the expert would also be reduced. This theory is behind the well-documented reluctance of some experts to allow themselves to be modeled by an ES (Harmon and King, 1985; Evanson, 1988). In the manager as expert theory, ES will be key support tools for "cluster" organizations (Applegate, Cash, and Mills, 1988). Ad hoc teams working on specific problems will be able to draw on banks of ES which are continually updated by a central unit charged with capturing expertise and sharing it. Whereas formerly expertise had to be built up in a permanent group which was narrowly focused on routine, standardized performance, cluster teams without expertise nevertheless will have it available. Clusters will give flexibility and speed of response to changing environments, yet ES will help them retain the built-up wisdom of permanent teams. With each manager tapping into the organization's expert wisdom, decision-making will be pushed further and further downward until white-collar decisions are made by blue-collar workers (Zuboff, 1988). Moreover, the ability to retain expertise in software will ease the need to
retain a given expert. Indeed, expert turnover may be encouraged as ES builders seek the collective wisdom of multiple expert perspectives. Managerial and professional mobility may increase in the future because of expert systems, and most new hires will be chosen not so much for experience and knowledge, which already reside in the computer, but for their creativity, that is, their ability to offer novel uses of their knowledge. The manager as expert theory posits flattened, flexible organizations made up of clusters of inexperienced but highly creative technical, blue-collar, and managerial employees. Flexible creativity will enable organizations to respond to rapidly changing consumer demands and competitor actions in a global setting of great complexity.
Strong Expert Theory
In the second theoretical position a less positive impact occurs, and experts become more powerful. Here ES encourage the centralization of inference making. Where managers made up the rules as they went along in some situations and consulted their own experience, now they will feel compelled to go directly to an expert source through the easily accessed ES. As more and more demands are put on experts for highly detailed advice, their importance in organizations will grow, as will their staffs and their power. This second theory has ominous implications for the business community, where expertise is currently quite important and may become even more so. One of the problems of expert advice in this perspective is that it will emerge out of a sifting and filtering of detailed data which the expert has felt a need to collect rather than out of any worldly wisdom. Consider the consequences of this approach when expertise becomes embodied in ES which are used in strategic marketing decisions and which are more pervasive and influential than the experts alone ever were.
• Expertise is likely to suggest market penetration rather than market development. It is easier for experts to obtain data on current markets than on new ones.
• Expertise is likely to stress cost management rather than revenue management: cost data is easier to get.
• Expertise will focus on quantifiable economic dimensions of an issue rather than qualitative dimensions.
• Expertise will "earn its keep" by focusing on complex rather than simple products and product mixes.
If ES embodying this approach to expertise become widespread and persuasive, strategic marketing decisions could result in ever more complicated products
competing in overcapitalized and probably oligopolistic markets. Once the experts are let loose in ES, in other words, they may run amok.
No Expert Theory
Knowledge is not a commodity, according to this theory. It is socially constructed and "is not detachable from people except for limited purposes within the domain of relevance" (Stamper, 1988: 4). As experts are centralized and replaced by on-line ES in daily contact with managers, the socializing and context sensitivity of experts will disappear. Experts will have difficulty maintaining their knowledge, which depends on constant interactions with their community (Bloomfield, 1988). Moreover, ES will be unable to perform the ritual, symbolic actions of legitimization which experts often carry out. These rituals license actions in ways ES cannot. As experts disappear and ES fail as legitimizing agents, ES may take on the role of coercion agents. An ES will create an image of power over phenomena which will make arguing with it impossible. As managers comply, they will carry out actions to which they have little commitment. Organizations will suffer as decision-makers become detached from their decisions. As the sense of responsibility, commitment, and legitimacy becomes weak, actions will become insensitive to context, to the long term, and to other actions. Where the strong expert theory sees ES fostering risk-averse, overly complex organizational functioning, the no expert theory sees organizations becoming dysfunctional.
In sum, ES will affect organizations mostly through their impact on the functioning of experts and the uses of expertise. Most theorists support the manager-as-expert theory and see a brave new world of humanistic, flexible, creative organizations resulting from expert system development (in conjunction with other developments focused on strategy, structure, and human resource management). Dark side theories, however, warn of the dangers of expert systems and similar information technology. Shoshanna Zuboff's In the Age of the Smart Machine (1988) provides ample case study evidence to support either positive or negative theories. Clearly, then, some variables are missing from the theories. Researchers need to learn those factors that will foster a positive impact of ES on organizations and those factors that will foster negative impacts. Based on my discussion in this chapter, I hypothesize that positive impacts of ES will outweigh negative impacts in those organizations in which
• The organizational environment is not turbulent.
• Organizational culture is weak.
• ES development focuses on building many small systems for routine, well-defined problems.
• Users are trained to see ES as complements to, not substitutes for, experts; every ES should come with an expert's telephone number attached.
• Experts continue to socialize as well as pontificate.
1.4 Summary
Expertise is mostly organization and context specific. It consists of the recognition of appropriate goals and the development of schemas and theories which lead to goal attainment. Experts, the human embodiment of expertise, are managerial professionals who have the ability to make speedy inferences which no one else in the organization can or will make. They often serve as rallying points for parties in conflict, and are at times used to legitimize decisions already reached. The computer embodiment of expertise, the expert system, also functions along these lines. ES serve managers by offering expert advice, but they also should
• Establish the context in which advice is given. This is an educational function.
• Foster collegiality by allowing experts to tap into the thinking of the society of experts. This is a communications function.
• Build credibility. They should influence users to believe in them as sources of advice.
• Build persuasion. They should influence users to accept their advice. ES should be a persuasive source of persuasive messages.
The dark side of ES in organizations is that they are not really a spur to informed choice. Instead, they stimulate rapid, uninformed choice. As such, their focus is on decision output, not on the decision process. In organizations where elaborate managerial participation in complex, highly communicative decision processes is part and parcel of the organizational culture or its task environment, ES are not suitable. The dark side perspective has been ignored in most of the ES literature, but it should be emphasized that ES serve a significant but limited function, and only in certain types of organizations in certain types of environments.
Chapter 2
Devising Expert Systems in Organization Theory: The Organizational Consultant
Helmy H. Baligh
Richard M. Burton
Børge Obel
2.1 Introduction
Organization theory is a positive science that focuses on the understanding of organizations. It is a multidisciplinary science, in which the separate disciplines have created their own distinct questions, hypotheses, methodologies, and conclusions. These various views do not necessarily fit together into a comprehensive view, nor do they necessarily offer recommendations on how organizations should be designed. Organizational design, in contrast, is a normative science that focuses on creating an organization to obtain given goals. Design "is concerned with how things ought to be, with devising structures to attain goals" (Simon, 1981: 133). Organizational design depends on organization theory - its prescriptive purpose complements the descriptive function of the positive theory. In the past, both the theory and the design of organizations depended primarily on the intuitive activity of human brains. Today, new tools from computer science can assist the designer, helping her to avoid the pitfalls of intuitive reasoning (Baligh et al., 1987; Masuch and LaPotin, 1989). In this paper we create an expert system to serve as a decision support system in the design process of organizations. The expert system incorporates knowledge from the literature on organization theory as well as the expertise of authorities in the field. An expert system is composed of three parts (Harmon and King, 1985): a knowledge base, an inference engine, and a user interface. The knowledge base incorporates the basic concepts of organization theory, such as structure, centralization, formalization, and complexity. Each concept takes on various
values; e.g., an organizational structure can be of functional, product, or matrix form. As we shall demonstrate, the propositions in organization theory can be stated as relatively simple "if-then" rules. An example is "if the organization is large then the formalization is high". The construction of a knowledge base for the ES requires that the several partial theories be stated clearly and, further, that careful consideration be given to composing them into comprehensive and consistent statements. The inference engine generates conclusions by inspecting a number of "if-then" rules. We use the inference engine to search and combine the rules in the knowledge base to recommend designs. An expert system can show which rules were used to generate a conclusion. We use this feature to explain why our system, the Organizational Consultant, reaches particular conclusions. This feature is important both for the validation process and for the actual application of the Organizational Consultant. The Organizational Consultant does not require a sophisticated user interface and communicates with the user by asking factual questions. The user responds by choosing from a prespecified menu of legal answers (e.g., your organization is large, medium, small, or unknown). Obviously, the way these questions are phrased is very important. The next section takes the well-known models of Duncan (1979) and Perrow (1967) and restates them as rule-based expert systems. The issues of translation and composition are discussed. Section 2.3 presents the development of a contingency theory rule-based knowledge base; the several partial theories of the contingency imperatives of size, strategy, technology, etc., are given. Section 2.4 discusses the knotty problem of weaving the several partial theories into a consistent and comprehensive knowledge base. Section 2.5 discusses the Organizational Consultant. In section 2.6, the expert system is applied to an airline company, both before and after deregulation. Section 2.7 discusses validation of an expert system and indicates what we have done to validate this expert system. Section 2.8 mentions an effort to create a complementary expert system for organizational design. For conclusions, the reader is referred to the final section of Chapter 3.
2.2 Creating Knowledge Bases from the Literature
Duncan (1979) and Perrow (1967) use different organization theory models that can be translated into "if-then" rules. We demonstrate the translation of each into a knowledge base. Next, we consider how to put the separate knowledge bases together into a combined knowledge base. This illustrates the complexity of composing a knowledge base from a large, diverse, and sometimes inconsist-
ent organization theory literature. We begin by stating Duncan's (1979) model of environmental contingencies as a rule-based knowledge base. Duncan's knowledge base can be stated as the following six "if-then" rules:
if environmental complexity is simple and environmental change is static then organizational structure is functional.
if environmental complexity is simple and environmental change is dynamic then organizational structure is mixed functional.
if environmental complexity is complex and environmental segmentation is yes and environmental change is static then organizational structure is decentralized.
if environmental complexity is complex and environmental segmentation is yes and environmental change is dynamic then organizational structure is mixed decentralized.
if environmental complexity is complex and environmental segmentation is no and environmental change is static then organizational structure is functional.
if environmental complexity is complex and environmental segmentation is no and environmental change is dynamic then organizational structure is mixed functional.
Duncan himself presents the equivalent of a rule-based knowledge base in the form of a decision tree, rather than "if-then" statements. This knowledge base is small enough to be examined directly without the aid of a computer. Consider the goal of this "expert system": it is to recommend an organizational design which is stated as either functional, mixed functional, decentralized, or mixed decentralized. These are the four possible answers to complete the statement "then the organizational structure is ..." as given in the six rules above. The recommended structure is thus contingent upon the environment, as stated in the "if" part of each rule.
Duncan describes the environment along three dimensions: complexity, change, and segmentation. Environmental complexity is either simple or complex, environmental change is either static or dynamic, and environmental segmentation is either yes or no. An organization can thus find itself in eight (2 x 2 x 2) possible environments. The knowledge base is the set of six "if-then" rules that link the environmental condition with the recommended organizational design, which is contingent upon the organization's environment. To illustrate Duncan's system, consider an airline company in the 60s with a static and simple environment. Using the "if-then" rules in the knowledge base, we find that the recommended structure is functional. Today the environment is complex, probably not segmentable, and certainly dynamic. The recommended structure is now a mixed functional structure, i.e., functional with lateral activities. But if the environment were segmentable, then the structure should be mixed decentralized, i.e., decentralized with lateral activities. The recommended structure is dependent upon the environment and is relatively sensitive to shifts in the environment; as the environment changes, the recommended structure changes as well. We now turn to Perrow's (1967) model of technology contingencies. Like Duncan's model, it can be translated into a knowledge base of "if-then" rules. Its goal is to recommend an organizational design from the choices routine, engineering, craft, or non-routine. The organization's technology has two dimensions: task variability, which can be routine or high; and problem analyzability, which can be ill-defined or analyzable. There are then four (2 x 2) possible technologies, which are the facts for the expert system. Perrow's knowledge base can be stated as the following four statements:
if task variability is routine and problem analyzability is ill-defined then organization structure is craft.
if task variability is high and problem analyzability is ill-defined then organization structure is non-routine.
if task variability is routine and problem analyzability is analyzable then organization structure is routine.
if task variability is high and problem analyzability is analyzable then organization structure is engineering.
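Because each model is an exhaustive table over a few binary dimensions, each translates directly into a lookup structure. A minimal sketch in Python (our illustration for this section, not the encoding used in the Organizational Consultant described later):

# Duncan's six rules, keyed by (complexity, segmentable, change).
# For a simple environment, segmentation does not enter the rules,
# so it is recorded as None.
DUNCAN = {
    ("simple",  None,  "static"):  "functional",
    ("simple",  None,  "dynamic"): "mixed functional",
    ("complex", True,  "static"):  "decentralized",
    ("complex", True,  "dynamic"): "mixed decentralized",
    ("complex", False, "static"):  "functional",
    ("complex", False, "dynamic"): "mixed functional",
}

# Perrow's four rules, keyed by (task variability, problem analyzability).
PERROW = {
    ("routine", "ill-defined"): "craft",
    ("high",    "ill-defined"): "non-routine",
    ("routine", "analyzable"):  "routine",
    ("high",    "analyzable"):  "engineering",
}

# The airline example: simple and static in the 60s ...
print(DUNCAN[("simple", None, "static")])    # functional
# ... complex, segmentable, and dynamic today.
print(DUNCAN[("complex", True, "dynamic")])  # mixed decentralized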
Perrow developed his knowledge by describing and analyzing the real world. Here, these propositions serve as normative statements as to how organizations ought to be structured in order to become efficient. This approach takes the knowledge of positive science and uses it in a normative system that recommends how organizations should be structured. Above, Duncan's (1979) and Perrow's (1967) contingency models were translated into separate expert systems. In Duncan's system, the organizational design is dependent upon the organization's environment; in Perrow's system, the organizational design is dependent upon the technology. Each system is self-contained and independent. Yet the reader may well feel that each one is incomplete and limited. Can the two systems be put together so that the composed system is more complete and more practical? Both the environment and the technology are important contingencies in design (Daft, 1986; Robbins, 1987). But the process of putting the two systems together is not at all obvious. Each approach has its own vocabulary and definitions, and the relations among the definitions are not clear. How do Duncan's functional and decentralized structures combine with Perrow's routine, engineering, craft, and non-routine structures? Are Duncan's four organizational designs simply different names for Perrow's four designs? For example, is a functional organization the same as a routine organization, or are the two sets of recommendations independent? If they are independent, there are sixteen (4 x 4) possible designs, such as routine functional, craft decentralized, engineering mixed functional, etc. There are two simple ways of putting the systems together: the first is to rename the recommended designs; the second is to consider each one independently and additively. There are also a large number of complex ways to put the recommendations together. It is not obvious what an appropriate composition might be. As we shall discuss later, the composition of the literature is a primary concern in the creation of an ES for designing an organization.
2.3 An Expert System for the Contingency Theory of Organization
Contingency theory is a dominant theme in organization theory. In design terms, contingency theory suggests that an appropriate organizational design is contingent upon such factors as size, strategy, technology, environment, and power. There is a large body of literature (e.g., Daft, 1986; Robbins, 1987) that supports this view. Yet the integration of these views into a comprehensive contingency theory of organization is still ad hoc. Miller (1984), e.g., suggests a sequential approach to
integration, where one contingency factor may be important today, but replaced by a different factor in the future. In creating the expert system, we have chosen an alternative approach to composition. We begin by considering several contingency theories of organization. Then we put the theories together into a comprehensive statement. This statement then becomes the knowledge base for the expert system. It is a set of "if-then" statements. These contingency concepts are summarized in Table 2.1, which lists the contingency factors and the design recommendation possibilities. An effective and efficient organizational design constitutes a good fit between the contingency factors and the structure of the organization. The contingency theory of organization is the basis for the knowledge base in the Organizational Consultant. That knowledge base is gleaned from many literature sources. As we argued above, the knowledge from the literature must be translated into meaningful "if-then" rules and composed into a set of consistent rules. Overall, there are approximately 250 rules in the knowledge base, gleaned from a broad spectrum of the contingency-theory literature. As depicted in Table 2.1, these "if-then" rules relate the contingency factors size, technology, strategy, environment, ownership, and management preferences to the organizational design recommendations. The definition of terms must be clear such that the user can respond to the questions and implement the recommendations. The "if-then" rules in the knowledge base should be a clear translation of our knowledge into the rule statements. The definition of concepts must resolve the issue that was raised in the comparison of the recommendations by Duncan and Perrow. We have chosen to use the structural properties shown in Table 2.1 and then translate the recommendations given in the literature into these concepts. We have followed similar attempts to combine knowledge from the literature by Daft (1986) and Robbins (1987), among others. Some concepts have been broken down into sub-concepts to produce more precise recommendations. In the Duncan-Perrow case we will have statements like: if the environmental complexity is high and the environment is static, then formalization should be high and centralization should be low; if task variability is high and the tasks well defined, then formalization should be low and centralization should be high. The Organizational Consultant asks the user about the facts of the size, technology, etc. Then, having applied the rules in the knowledge base to these facts, the system will make a specific recommendation on the structure and properties. For example, it might recommend that the organization should be functional with high formalization, narrow span of control, and lots of rules. The knowledge base (the "if-then" propositions) contains general knowledge about the relations between the contingency factors and the design. The query process gathers the facts about the particular organization. The recommendations follow the application of the rules to the facts.
THE CONTINGENCY FACTORS:
Size
Technology
Strategy
Environment
Ownership
Management preferences

FIT CRITERIA:
Effectiveness
Efficiency
Viability

DESIGN PARAMETERS:
Structure: simple, functional, divisional, machine-bureaucracy, matrix, etc.
Properties: complexity and differentiation, formalization, centralization, span of control, rules, procedures, professionalization, activities, meetings, reports, communications

Table 2.1: The Contingency Model of Organization Theory
Although the knowledge base is too large to include in toto, we capture its essence with a few sample rules. The rules are meant to illustrate how we translated the knowledge from the literature into "if-then" rules for the knowledge base. We begin with size as a contingency factor, and then move on to other factors. It is generally accepted that the size of the organization affects its structure. The literature brims with support (Blau and Schoenherr, 1971; Pugh et al., 1969) and counterargument (Aldrich, 1972). We have taken this idea and translated it into rules that state that the size of the organization should affect its structure. To illustrate, one rule in our knowledge base is:
if size is large then the formalization is high cf20.
Here, an organization is considered large if it has 5,000 or more employees. This is a question of fact for the ES program. An organization is large, or it is not. If it
is large, then the Organizational Consultant will recommend that the formalization of the organization should be high, with well-defined jobs, regulations, and work standardization, and relatively less freedom on the job for subordinates. The cf20 is a qualifier in the statement. The cf stands for "certainty factor" and measures the degree of belief one has in the statement. A certainty factor can range from -100 to 100. A cf20 implies that the statement has a relatively weak effect, but cannot be ignored. A stronger statement would increase the certainty factor, and cf100 would mean total certainty. Negative certainty factors reflect disbelief. Hall et al. (1967) argue that size is an important determinant of structure, but not the only one; hence, we assign it a relatively weak effect. Size affects not only formalization, but also complexity and centralization; these are incorporated in the knowledge base. Technology, too, is a determinant of the appropriate structure (Table 2.1). As discussed earlier, Perrow (1967) investigated the relation between technology and structure. We include the effect of the technology on the structure. This rule illustrates:
if technology is routine then the complexity is low cf20.
The organizational complexity is the degree of horizontal or vertical differentiation in the organization. Robbins (1987: 144) indicates that there is empirical evidence, but it is qualified. Hence, the certainty factor is 20. The goal is to recommend the appropriate level of complexity for the organization. The program queries the user about the technology to determine whether it is routine, or not. If it is routine, then low complexity is recommended. The organization's strategy also determines the organizational design. In 1962, Chandler made his now famous proposition "structure follows strategy." In our knowledge base, we use Miles and Snow's (1978) hypotheses about strategy and structure. The organization's strategy can be categorized as reactor, defender, prospector, or analyzer. A prospector's "domain is usually broad and in a continuous state of development" (Miles and Snow, 1978: 56). This rule from the knowledge base illustrates:
if the strategy is prospector then the centralization is low cf20.
The rationale for the rule statement is that the prospector, who has a large number of diverse activities, requires a decentralized structure; without one, decision bottlenecks tend to slow down organizational activity. The statement is qualified since the strategy is not the only determinant of the structure; hence the relatively low certainty factor (as in previous examples). Organizational design is also determined by the environment. Duncan (1972) provided early empirical evidence of this. For our knowledge base, one rule is:
if the environmental uncertainty is stable then the centralization is high cf20.
Duncan's (1979) model is stated differently, but provides support. Robbins (1987: 17) argues that centralization is possible when the environment is stable, since there is time to process requisite information; nevertheless, the statement is a qualified one. Environmental hostility (Robbins, 1987: 172) is also a determinant of the structure. Greater hostility requires greater centralization of the organization. To illustrate, one rule is:
if the environmental hostility is extreme then the centralization is high cf40.
We are more certain that an extremely hostile environment calls for a unified effort; hence, the certainty factor is 40. The evidence is not only from business, but finds its origins in military lore, where unified command is the norm in battle. Are the needs of management important in the contingency framework? Child (1972) argues that the environment and technology leave some discretion to the managers; that is, they have some choices concerning the organizational design. We include rules that relate the management's desire for power as a determinant of the structure:
if the power desire is high then the centralization is high cf40.
That is, a greater desire for power by management indicates greater centralization. The certainty factor 40 indicates some qualification. In the above rules, we have included the influence of size, technology, strategy, environment, and management preferences on the structure. Many of the other rule statements are similar in form and related in content to those given above. Compound rules, which require two or more contingent conditions, are also part of the knowledge base. For example, one rule requires both high complexity and non-routine technology:
if the complexity is high and the technology is not routine then the horizontal differentiation is high cf60.
If both conditions are met, then the horizontal differentiation should be high with considerable confidence, i.e., cf equals 60.
In our knowledge base, we also include rules that caution the user, but do not give a suggested structure. To illustrate, one rule is:
if the strategy is prospector and the technology is routine then this may cause problems.
Intuitively, one can judge that a prospector strategy and a routine technology are not compatible. The Organizational Consultant leaves open any recommendation for resolution of this situation. This demonstrates that the system is limited in its recommendations.
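Rules of all three kinds - single-condition, compound, and cautionary - share one shape: a conjunction of conditions, a conclusion, and (for design rules) a certainty factor. A minimal sketch in Python of such a representation (our illustration, not the M1 shell's syntax; the negated condition "not routine" is simplified here to the value "non-routine"):

# Illustrative rule store: (conditions, conclusion, cf) triples.
# Conditions are attribute-value pairs that must all hold.
RULES = [
    ({"size": "large"}, ("formalization", "high"), 20),
    ({"technology": "routine"}, ("complexity", "low"), 20),
    ({"strategy": "prospector"}, ("centralization", "low"), 20),
    ({"environmental hostility": "extreme"}, ("centralization", "high"), 40),
    ({"complexity": "high", "technology": "non-routine"},
     ("horizontal differentiation", "high"), 60),
]

# Caution rules carry a warning instead of a design conclusion.
CAUTIONS = [
    ({"strategy": "prospector", "technology": "routine"},
     "a prospector strategy and a routine technology are not compatible"),
]

def fired(rules, facts):
    # Return the conclusions of every rule whose conditions all hold.
    return [(concl, cf) for cond, concl, cf in rules
            if all(facts.get(k) == v for k, v in cond.items())]

facts = {"size": "large", "strategy": "prospector", "technology": "routine"}
print(fired(RULES, facts))
# [(('formalization', 'high'), 20), (('complexity', 'low'), 20),
#  (('centralization', 'low'), 20)]
for cond, warning in CAUTIONS:
    if all(facts.get(k) == v for k, v in cond.items()):
        print("Caution:", warning)   # fires for these facts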
2.4 Composing the Knowledge Base
The construction of a knowledge base for an ES is a statement of our knowledge about organization theory. The above rule statements are clear and can be applied one by one, but they have yet to be woven into a consistent and coherent set of rule statements. Returning to our discussion of the Duncan and Perrow knowledge bases, each one is individually clear, but can we put them together into a consistent and coherent system? It is not obvious how to compose the two independent systems into an integrated ES even if we have resolved the problem of concept definitions. The composition of the rules is essential to the development of the knowledge base. It is particularly important for a complex system of 250 rules. To illustrate the composition issue for our ES, consider that the two rules on environment and strategy above were justified independently. In the Organizational Consultant, rules are not independent but part of the total statement of our knowledge and understanding of organizations. They must be composed, or considered together. The contingency factor influences are then balanced. The certainty factors (cf) are used to compare each statement and summarize our understanding and knowledge. Consider two knowledge-base statements:
if the size is large then the decentralization is high cf30.
if the strategy is prospector then the decentralization is high cf20.
Now let us assume that both antecedents are true, i.e., the size is "large" and the strategy is "prospector"; then the rule for combining the conclusions on the structure gives:
decentralization is high (cf44),
where the calculation follows the rules from the MYCIN concept (M1 Manual, 1987): 30 + (100 - 30) x 20/100 = 44. The resulting conclusion on decentralization is stronger than either statement would be alone. These two rules concerning size and strategy both have a positive relation with decentralization. Each contingency factor adds to the conclusion, but neither contingency factor is sufficient to make a certain conclusion or recommendation. The Organizational Consultant allows the answer "unknown" and can thus make recommendations based on incomplete information. As can be seen from the composition approach, the more information there is, the higher the certainty of the recommendations. We may also have knowledge that tells when a relationship is not positive. For example, one rule might state: if structural complexity is low, then decentralization should not be high. In this ES we can incorporate the negation by using a negative certainty factor:
if structural complexity is low then decentralization is high -cf30.
By combining all three statements we find that decentralization is high cf20. Thus positive and negative effects can be appropriately included. In this system, we are combining the various contingency theories into a unified and consistent statement of our knowledge. The literature is silent on the appropriate combinations (although there is considerable evidence for the separate contingency factors to determine the appropriate structure). Validation of the knowledge base requires ongoing revision through application in real situations and cases; we shall discuss this issue in a later section.
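The combination arithmetic can be made concrete. A minimal sketch in Python, assuming the standard EMYCIN-style formulas on the -100 to 100 scale used here (an assumption on our part; M1's exact implementation may differ in detail). It reproduces the numbers in the text:

# Combine two certainty factors on a -100..100 scale.
def combine(a, b):
    if a >= 0 and b >= 0:
        return a + b * (100 - a) / 100                  # both supportive
    if a <= 0 and b <= 0:
        return a + b * (100 + a) / 100                  # both disconfirming
    return 100 * (a + b) / (100 - min(abs(a), abs(b)))  # mixed signs

cf = combine(30, 20)     # size is large (cf30), strategy is prospector (cf20)
print(cf)                # 44.0
print(combine(cf, -30))  # adding the negative rule (-cf30): 20.0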
2.5 The Organizational Consultant for Designing an Organization
So far, we have only discussed the creation of the rule-based knowledge base. We now consider how the system generates design recommendations. Here, we want to use our knowledge about organizations to specify appropriate organizational structures and properties for given organizational situations, as shown in Table 2.1. So, the Organizational Consultant wants to determine values for the
structure and properties. The inference engine begins a query process, seeking the needed facts from the user about the organization's situation, i.e., answers for the contingent circumstances. Given the goal, it runs backwards through the rules of the knowledge base to determine the facts that it needs to know about the environment, strategy, technology, etc. It can determine the contingent facts only by asking the user. The inference engine begins with a goal and works backwards through the knowledge base, requesting fundamental facts. The ES then recommends a structure and properties - such as a functional structure with high formalization and many rules.

The Organizational Consultant analyzes the current organizational structure by querying the user about 27 facts related to the functioning of the organization. The structure is then described in terms of structure and structure properties (see the output in the next section). Next, the Organizational Consultant asks for input related to 14 major situational variables (see the output in the next section). The input is given by again answering a number of specific questions about the organization's situation - in the current version, a maximum of 25 questions, depending on the particular situation. These answers are then translated into values for the internal concepts used in the system. Based on this input, the system recommends the structure and structure properties that best fit the specified situation. The situation itself is analyzed and possible situational misfits are given. Finally, the current and prescribed organizational structures are compared and possible changes recommended. The system includes a feature that enables the user to change input values and rerun the consultation, thereby providing a way to perform sensitivity analysis.

The knowledge base is divided into four major modules, which have been developed independently. The first module is a data module that seeks some fundamental facts about the organization. The second module relates to the current organization as described above. The third module deals with the organization's situation; it is partitioned into a number of submodules. The first submodule asks the user for relevant information; the second submodule is a database which maps the situation onto overall organizational designs; and, finally, the third submodule analyzes the situation in more detail, both with respect to situational misfits and with respect to more detailed recommendations about the prescribed structure. The fourth module prints the output, compares the prescribed and current organization, and controls the looping necessary for the sensitivity analysis, in which the user can change any input item and get a new analysis.

The knowledge base has been structured in this way for a variety of reasons. When one uses a relatively small ES shell like M1, the knowledge bases often become very complicated to write, debug, and maintain (Rauch-Hindin 1988: 134). Therefore, a specific structure of the knowledge base was developed. Actually, the first versions of the current organizational structure subprogram and the prescribed organizational structure subprogram were developed as independent
ES. Later in the design process, they were merged. However, they can still be run independently if the conclusions made in the first part are transferred to the second part. This suggests another reason for the structuring of the knowledge base. In small PC-based ES shells, the number of rules becomes the bottleneck. We have not reached the limit yet with the Organizational Consultant, but should it happen, we can separate the knowledge base into two without ruining the system.

In principle, M1 uses a backward chaining inference process. M1 allows more than one goal. The separability of the various parts of the knowledge base has also been enhanced by using relatively many goals related to the various subsections of the system. Another advantage has been that the sequence of questions that the system asks has been strictly controlled, so that they appear in a logical order. For example, when the system prompts questions about the environment, it asks all questions about the environment that it estimates to be relevant given the situation. In rare cases, the system may ask a question whose answer does not have any impact on the conclusion.

The use of certainty factors to combine the partial organizational theories was an integral part of our development process. The use of positive and negative certainty factors turned out to have some important effects on the use of our system. The first version of our system used only positive certainty factors. But negative certainty factors capture negation statements from the literature. M1 allows multivalued variables. Using only positive certainty factors enabled us to run each of the multivalued answers on a parallel basis using the same rules. However, when we started adding knowledge about what should not occur (rules with negative certainty factors), this parallelism collapsed, and we had to control the above two situations directly in a more complicated rule structure.
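The goal-driven query process is easy to picture in Prolog terms. The sketch below is a toy backward chainer in the spirit of the Organizational Consultant, not an excerpt from it: the two design rules, the attribute names, and the bookkeeping predicates are all illustrative.

    :- dynamic known/2.

    recommend(Org, machine_bureaucracy) :-       % goal: find a fitting structure
        fact(Org, environment, stable),
        fact(Org, size, large).
    recommend(Org, simple_structure) :-
        fact(Org, environment, dynamic),
        fact(Org, size, small).

    % fact/3 consults remembered answers first and queries the user otherwise,
    % so each question is asked at most once, in goal-driven order.
    fact(Org, Attr, Value) :-
        known(Org-Attr, Value), !.
    fact(Org, Attr, Value) :-
        \+ known(Org-Attr, _),
        format("What is the ~w of ~w? ", [Attr, Org]),
        read(Answer),                            % answers are terms ending in a period
        assertz(known(Org-Attr, Answer)),
        Answer == Value.

    % ?- recommend(acme, S).   prompts for environment and size, then binds S.

The query ?- recommend(acme, S). works backwards from the goal exactly as described above: the structural recommendation is the goal, and the contingency facts are requested from the user only when a rule needs them.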
2.6 An Illustration

Over the last decade, airline companies have gone from very strict regulation to deregulation. In the U.S., the Civil Aeronautics Board has removed price controls and route assignments, but still maintains safety standards. The International Air Transport Association has similarly lost its regulatory powers, if not its jurisdiction.

Consider a large international airline such as SAS, United, or Eastern. In the 1970s, this airline operated in a very regulated environment. The prices were set by a regulatory agency; further, the price was the same for each airline offering the same class of service on a given route. Routes were assigned to the airline, and route reassignments were infrequent, with few surprises. Competition was minimal except when two airlines flew the same route at roughly the same time. Most airlines had similar equipment and offered similar customer service. On
the North Atlantic run, for example, the definition of a sandwich limited the amount of meat that it could contain. Further, only a sandwich, and not a steak, could be served as a snack, and only one meal was permitted per crossing. All the airlines were regulated to eliminate competition on almost all fronts. It is well known that the greater the load factor, i.e., the percentage of occupied seats, the greater the profit. Costs which could be justified in terms of equipment, fuel, personnel, safety, etc. were passed on to the customer. That is, prices were justified to the regulator in terms of costs. The airline had a rather stable and known environment; prices, markets, routes, and passengers changed slowly and in predictable ways. Aircraft technology changed predictably after the jet engine was introduced in the 50s. Airline strategy was limited to cost control and relatively efficient operations. Costs were important but not paramount. Given the very high fixed costs of operations, it was an advantage to give away an extra steak to obtain one more passenger, if possible. Of course, every passenger had his/her own favorite airline, where a smiling attendant could be the margin of difference.

Today's airline exists in a different world. With the exception of safety, the regulations have largely disappeared. An airline can choose the routes to fly, at prices it determines, on its own schedule, and offer what services it wants. It is the airline's choice. There can be more prices available - as a function of service class, duration of stay, purchase time, irrevocability of change, change opportunities, etc. - than there are seats on the plane. The person in the next seat is more likely to have paid a different price than the same one. The airlines compete for passengers. Airlines adjust prices to match competition. The airline has very high fixed costs. Cost control and efficient operations remain important. The load factor remains extremely important - provided the seats are filled with paying passengers. Service is very important, yet one meal per crossing remains the norm for the North Atlantic run. Despite the dramatic change, a smiling attendant may still be the margin of difference for the passenger.

It is reasonable to assume that the structure that was appropriate for the regulated days is no longer so. A new structure might be needed for the new conditions. If so, the airline management faces an important strategic choice: what should the new design be? We use the Organizational Consultant to determine whether a different organizational design is desired. The object is to recommend a design that leads to satisfied customers, efficient operations, and profitability. The regulated airline of the 70s and today's deregulated airline are analyzed and designed by the Organizational Consultant.2
2 The example is run on Version 3.1.
Example

* * * RESULTS FROM THE ORGANIZATIONAL CONSULTANT * * * 030389
EXISTING ORGANIZATIONAL STRUCTURE

Regulated Airline has a functional structure. The organization has a high complexity. The horizontal differentiation is medium. The vertical differentiation is medium and the spatial differentiation is high. The formalization is high and the centralization is high.

Type yes to continue. 1. yes
»1

EXISTING BUSINESS SITUATION

You provided the following information about Regulated Airline's situation:

size (Regulated Airline) = large (100%) because you said so.
age (Regulated Airline) = mature (100%) because you said so.
diversity = few (100%) because kb-63.
technology-routineness (Regulated Airline) = yes (100%) because you said so.
technology-type = 0 (100%) because rule-u.
technology-divisibility = little (100%) because you said so.
strategy = defender (100%) because you said so.
ownership = private (100%) because you said so.
wish-of-power-concentration = yes-indeed (100%) because you said so.
environmental-complexity = simple (100%) because rule-auncer.
environmental-uncertainty = stable (100%) because rule-uncer.
environmental-hostility = medium (100%) because you said so.
capital-requirement = high (100%) because you said so.
product-innovation = low (100%) because you said so.

Type yes to continue 1. yes
»1

THE SUGGESTED NEW ORGANIZATIONAL STRUCTURE

Based on the input you provided it is most likely that the best structural form for Regulated Airline is a machine-bureaucracy structure (cf 85). The formalization should be high (cf 71) and the centralization should be high (cf 65). The complexity should be low (cf 52) with a low vertical differentiation. The span of control should be wide and coordination and control should be obtained through the following means: planning-and-rules and rules-and-procedures. Regulated Airline should have a structure somewhat similar to a functional structure (cf 60). The structure of Regulated Airline should not be a divisional structure (cf -100).

Type yes to continue 1. yes
»1

POTENTIAL PROBLEMS

Current and prescribed structure do not match.
Current and prescribed complexity do not match.

Type yes to continue 1. yes
»1

* * * RESULTS FROM ORGANIZATIONAL CONSULTANT * * * 030389

EXISTING ORGANIZATIONAL STRUCTURE

Deregulated Airline has a functional structure. The organization has a high complexity. The horizontal differentiation is medium. The vertical differentiation is medium and the spatial differentiation is high. The formalization is high and the centralization is high.

Type yes to continue 1. yes
»1

EXISTING BUSINESS SITUATION

You provided the following information about Deregulated Airline's situation:

size (Deregulated Airline) = large (100%) because kb-67.
age (Deregulated Airline) = mature (100%) because you said so.
diversity = some (100%) because kb-79.
technology-routineness (Deregulated Airline) = some (100%)
technology-type = 0 (100%) because rule-u.
technology-divisibility = some (100%) because you said so.
strategy = analyzer (100%) because you said so.
ownership = incorporated (100%) because you said so.
wish-of-power-concentration = yes-to-a-certain-extent (100%) because you said so.
environmental-complexity = some (100%) because rule-auncer.
environmental-uncertainty = changing (100%) because rule-uncer.
environmental-hostility = medium (100%) because you said so.
capital-requirement = high (100%) because you said so.
product-innovation = high (100%) because you said so.

Type yes to continue 1. yes
»1

THE SUGGESTED NEW ORGANIZATIONAL STRUCTURE

Based on the input you provided it is most likely that the best structural form for Deregulated Airline is a functional structure (cf 40). The formalization should be medium (cf 61) and the centralization should be medium (cf 71). The complexity should be medium (cf 61) with a low vertical differentiation. The span of control should be moderate and coordination and control should be obtained through the following means: reports-and-meetings, rules-and-procedures and planning-and-rules.
Type yes to continue 1. yes
»1

POTENTIAL PROBLEMS

Current and prescribed complexity do not match.
Current and prescribed centralization do not match.
Current and prescribed formalization do not match.

Type yes to continue 1. yes
Analysis of the Organizational Consultant's Recommendations

The output summarizes the current organizational structure using terms such as structural form, centralization, complexity, and formalization. This description is the same for both the regulated and the deregulated airline, as we entered the same input. Next, the program lists the overall description of the business situation. This description comes from the user's responses to questions or from conclusions that summarize more detailed responses.

The third part of the output is the actual recommendation. If the airline is regulated, the Organizational Consultant recommends a machine-bureaucracy with a very high certainty factor. This means that the ES has high confidence in its recommendation. It also recommends high formalization and high centralization, both with relatively high certainty factors. These recommendations fit very well with the particular situation of a regulated airline. The airline must be both efficient and very cost effective, and it can be so by using rules and regulations in the regulated environment. The ES also recommends that coordination and control be obtained through the use of planning, rules, and procedures.

The regulated airline is also recommended to have a structure that resembles a functional structure. It is known that a functional structure goes very well with a highly centralized and formalized machine-bureaucracy. A machine-bureaucracy and a functional structure normally have a high complexity. In this case a rather low complexity with a medium certainty factor is recommended. The ES thus recommends that the regulated airline should have a reasonably traditional machine-bureaucracy/functional structure, but that the number of levels in the hierarchy should be lower than normal. It concludes with a high degree of certainty that a divisional structure is not appropriate.

If one wants to analyze any particular recommendation, the system enables one to look into its rules to see why that recommendation was made. In this particular case, one would see that the reason (not shown) for the recommendation of the relatively low complexity is that the leadership style has a tendency toward a centralized structure, where the top-level management has tight control over the actual activities in the organization. This is best obtained
by a relatively flat organization where many units report directly to the top management. This is also seen in the recommendation that the span of control should be wide. Looking into this kind of explanation, one could find that there are factors in the situation that push the organization toward a high complexity. However, the preferred organization structure should have relatively low complexity. This kind of insight also provides a basis for a sensitivity analysis. For example, the input on the leadership style could be changed and the analysis re-run. The sensitivity feature of the Organizational Consultant makes such analyses very easy.

Finally, the ES looks into potential problems with respect to both the situation and the match between the current organization structure and the suggested organization structure. There are no mismatches with respect to the situation, but the prescribed structure and the prescribed complexity do not match the current organizational form.

In the second part of the output, for the deregulated airline, the Organizational Consultant shows the changes that occurred in the situation when the airline was deregulated. The technology is now less routine, but the major changes occurred in environmental complexity and environmental uncertainty. Now the environment is less stable and more complex than it was before. In this situation there is also a requirement for high product innovation.

The recommended structure for the deregulated airline is a functional structure with moderate certainty. Both centralization and formalization are now medium instead of high. The complexity has gone up from low to medium, and the vertical differentiation is still rather low. The leadership style has changed because of the changes in the environment, so that there is a tendency toward a more decentralized organization. This has some effect on the complexity score, as discussed above. With respect to coordination and control, the output suggests that additional means are called for: not only rules and procedures, but also reports and meetings should be used. With the more complex, dynamic environment there is a requirement for flexibility and more information-processing capability. The change in the organizational situation does not change the actual functional form, but it does change the formalization, complexity, and centralization within the same form.
2.7 Validation

Validating an ES can follow the same principles as validating any other model, but it is likely to differ in detail. These principles have been stated in various ways. Feldman and Arnold (1983: 23-24) define validity in terms of content, construct, and criterion-related validity: i.e., does it make sense to a group of experts, is it measuring the underlying characteristics, and is it related to the real-world intent? The classic work on the validity issue in social science is Campbell and Stanley (1963). Cook and Campbell (1976) later developed four concepts: internal validity, statistical conclusion validity, external validity, and construct
validity. Burton and Obel (1984: 44-57) adapted their approach to simulation models in organizational design. Generally, these validity questions arose in a positive science framework. One purpose of an ES is normative - to indicate how the world ought to be. Therefore, one might suggest that the detailed validity operationalization should be different, although related. It has yet to be specified. Whether the ES is correct, consistent, comprehensive, relatively complete, and operational remains fundamental.

We develop a validation approach for the Organizational Consultant which follows O'Leary (1988: 75), who suggests six possibilities for analyzing an ES: "Analyze the knowledge base for accuracy, analyze the knowledge base for completeness, analyze the knowledge base weights, test the inference engine, analyze the condition-decision matches for decision quality, and analyze the condition-decision matches to determine whether the right question was found for the right reasons." (Table 2.2) No one of these six possibilities alone is sufficient to validate the ES. Our approach applies all these criteria, with differing degrees of emphasis. As will become evident, these questions do address the fundamental validity issues and are related to the social science validity operationalizations (albeit in a somewhat different form). Basically, new questions and models do require alternative approaches to validating the system.

The accuracy of the knowledge base has been analyzed in various ways. First, the "if-then" rules were gleaned from the literature, as we described in an earlier section. This process requires care and judgement. Each rule must be supported in its own right, together with its degree of certainty, or the importance of the rule in determining the organizational design. Each rule must make sense to anyone familiar with contingency theory notions, as suggested by content validity. A major issue is the certainty factor (cf) for each rule. The assignment of these certainty factors follows largely from inferences and judgments of the authors, as the literature is silent on their values. To be sure, there are hints, indeed challenges, that the several organizational imperatives are partial answers (Miller, 1987). Nonetheless, the exact numerical assignment of the certainty factor is a matter of judgement, but a matter open to extensive research as well.

The completeness of a knowledge base is a "more elusive issue" (O'Leary, 1988: 76). Basically, one is never certain that the knowledge is complete. Under conditions of bounded rationality, completeness is impossible. Even if it were possible, it would probably not be cost-effective to develop the knowledge base completely, as the return from the last rule would not be worth its cost (the return function is probably concave, suggesting diminishing returns, and the cost function is probably convex). Knowing when "enough development is enough" is also a matter of judgement. For our knowledge base, completeness is addressed by attempting to survey the contingency theory literature in a comprehensive fashion. To say "no article remains unturned" would be an
Table 2.2: Validation Issues for Expert Systems according to O'Leary

a priori:
- analyze the knowledge base for accuracy
- analyze the knowledge base for completeness
- analyze the knowledge base weights

in situ:
- test the inference engine
- analyze the condition-decision matches for decision quality
- analyze the condition-decision matches to determine whether the right question was found for the right reasons
exaggeration. But we have incorporated the main concepts of contingency theory in a systematic way. An equally important test of completeness is through use and application. We have analyzed a number of cases and found that the ES does yield recommended organizational designs. Similarly, in our field tests with executives, we find that the ES yields appropriate recommendations for these real-world situations. One could also dream up unusual situations to try to crash the system; we have not made any systematic attempt to do so. Basically, we have checked the knowledge base ex ante by systematically gathering the literature. And, ex post, the ES's completeness is verified through use in cases and by executives.

Knowledge base weights take on different forms in different systems. The certainty factors (cf's) are very important, as we have argued before. The first level of analysis considers each rule and its certainty factor independently. The certainty factor should incorporate our degree of belief, or the strength of the statement. Second, the composition of two or more rules must be considered. These comparisons for consistency grow rapidly. For example, for ten (n) rules, there are 90 [n(n-1)] two-way comparisons. Three-way and higher-order comparisons quickly become impractical. One must develop a heuristic strategy for checking the consistency between and among the rules.

Our approach begins with the recommendations and works back to the premises. For example, we start with the conclusion that centralization is high. Then, we examine the rules that lead to a conclusion on centralization and analyze whether they fit together in a reasonable way. This approach greatly reduces the number of rules examined for consistency: there are far fewer recommendations than two-way and higher-order rule comparisons. For the Organizational Consultant, there are normally ten recommendations, but there are 150 x 149, or 22,350, two-way rule comparisons. Further, this strategy focuses on those connections which are most directly relevant to the recommendations. We also check to see if the facts that
go into a conclusion on centralization are consistent with the literature. It is a reasonable and systematic approach, but not exhaustive.

Testing the inference engine is a different matter - one tests the algorithm rather than the contents of the knowledge base. Our shell, M1, uses a backward chaining approach, which begins with the goal of determining an organizational design (i.e., a functional structure, a divisional structure, etc.) and works back to the facts on the contingency factors (i.e., size, strategy, etc.). This trace through the knowledge base is managed by the inference engine, and is accessible for examination. Have the correct rules been applied, were they applied in the right sequence, were the certainty factors combined properly? Again, an exhaustive examination is not practical. However, in our limited experience with the ES shell, we found no errors or unreasonable choices in managing the rules.

Another problematic issue is the application of the certainty factors (cf's). The combination of two rules using the certainty factor calculus (as discussed in Chapter 10) relies upon the very reasonable notion that two supporting statements of a reasonable degree of belief should yield a combined statement of strengthened belief. For example, if one statement yields high centralization cf 20, and another statement yields high centralization cf 30, then high centralization is strengthened, and by the rules in the M1 inference engine, centralization is high cf 44. The resulting degree of belief is stronger than either statement alone. This seems quite reasonable, but there are other formulae which also meet the reasonableness test. One could take the maximum cf; here, max [20, 30] is 30. In that case, only the strongest belief is considered, and the others are ignored. Obviously, one could invent many combination rules. The choice of the best rule depends upon the nature of the subject matter in the knowledge base. That is, the inference engine rules should follow the knowledge. We chose the M1 MYCIN combination rules because they make sense for the contingency theory of organization. Obviously, other combination rules can be devised and rules can be discarded as well. One obvious area of future research is the investigation of alternative combination rules to determine their relative merits for the inference engine.

But how do we test whether the system recommends reasonable organizational designs? We have assembled a number of textbook cases that are rich enough in their description to provide input for our system. We - as experts - have then solved these cases, and/or have reviewed the recommended solutions provided from other sources, e.g., the instructor's manual. The system is then confronted with these cases. This first part of the validation is used to tune the composition of the partial theories described earlier.

The second part of the validation is a test using American and European executives from companies of various sizes and in various industries. These include a large U.S. pharmaceutical company, a large telecommunications organization, a small Danish retail store, a division of a large Danish company in the machinery industry, a retail and wholesale chain in the building industry in
Denmark, and a new TV channel. The executive chooses the organization, usually his own firm or division. The ES asks questions, then gives a summary of the current situation and the recommended organizational design. Finally, the executive is asked: "Does it make sense?" We also ask the executive whether the questions are clear and can be answered. Do the questions seem reasonable as input to determine an organizational design? Is the summary correct, and does it represent what you wanted to say? Have new issues surfaced which give you insight, or demand further investigation? Will you implement the recommendations? Why or why not? It is an interactive verbal protocol that brings out the executive's own expertise and juxtaposes it with the ES.

Of course, an executive's agreement with the ES results is more satisfying to the developer, but less constructive than disagreement. With disagreement, something must be wrong: the error may lie with the executive, or with the ES. The developer must apply his judgment after listening to the executive's rationale.

The sixth validation test is to analyze the trace of the condition-decision matches for the "right answer for the right reason." That is, the black box is opened and examined. The trace is an important tool, since any recommendation can be traced from conclusion to premise. Every recommendation is simply the product of a sequence of rules. The developer, together with the executive, can investigate this sequence of rules in detail. Have we captured the knowledge adequately? Usually the general statements are appropriate, but the relations among the rules need to be modified (e.g., as discussed above, the certainty factors must be changed). This often means that more precision is built into more "if-then" rules. On occasion, one finds rules in error or rules with format errors, but it is the composition of the rules which usually creates the problem. In the tests with the executives, some executives found some questions difficult to understand or to answer. Generally, the executives preferred very detailed and specific questions to more general and less specific questions.

Validation is an ongoing process for an ES. Validation, as discussed here, is basically a control process of error detection and correction. Given the magnitude of the process, not all of the potential errors will ever be detected and corrected. But a systematic, multifaceted approach yields very reasonable outcomes.
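The recommendation-first consistency heuristic described above can be sketched in the same Prolog idiom as before; the rule/3 facts are invented stand-ins for the actual knowledge base.

    rule(r1, [size(large)],          centralization(high)).
    rule(r2, [strategy(prospector)], centralization(low)).
    rule(r3, [ownership(private)],   centralization(high)).

    % Collect every rule that concludes on a given design variable, so that
    % pairwise consistency inspection is confined to this much smaller set.
    rules_for(Variable, Ids) :-
        findall(Id, ( rule(Id, _, Conclusion), Conclusion =.. [Variable|_] ), Ids).

    % ?- rules_for(centralization, Ids).   gives Ids = [r1, r2, r3]; only these
    % pairs need comparison, rather than all 22,350 two-way pairs in the base.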
2.8 Alternative Expert Systems

Contingency theory is one approach to organizational design. The notion that an organization's design is contingent upon environmental and other factors is fundamental. But there are many variations on this theme. In the next chapter,
we present an alternative ES where the organizational design is contingent upon the environment (markets) and the technology. A different environmental and technological vocabulary is used, and the recommended design is stated in different terms. Consequently, the "if-then" rules take on a different form. The basic logic of the model is to identify the properties of the environment and the technology, and determine the performance properties needed to do well in those conditions. Then we can determine the structure properties that will produce these required performance properties.
Chapter 3
Creating an Expert System to Design Organizations: DESIGN 6

Helmy H. Baligh
Richard M. Burton
Børge Obel
3.1 Introduction

There are two complementary views of an organization: one is to describe and analyze, the second is to design and create. The first approach begins with a description of the world as we find it. Following the description, we seek to explain observed relationships through analysis. The second approach, design, begins with the identification of what might be. It then searches this set of possibilities to find what ought to be. The statements of knowledge that best serve the purpose of description and analysis may not serve the purpose of design well. However, both require statements on relation, association, and causation. When one's approach is design-first, one needs a systematic process for searching the space of possibilities; one needs operational statements that are implementable, not just measurable. Design-first aims to intervene and to create, not merely to describe, observe, and understand. It needs operational and well-ordered relations between design possibilities and organizational outcomes to be effective and efficient.

Our goal in this paper is to report on the development of a design-first approach to an expert system for organizational design. The expert system in Chapter 2 is based on the analytic mappings of the literature on contingency theory. It uses relations identified in the describe-and-analyze approach. These relations may or may not be what one would have produced for a design-first approach. In this paper, we develop a design-first expert system which uses an analytic base expressly intended for the design process. It is devised as the basis for a set of design rules which can be used to
determine an efficient organization. Creating this ES starts with some concepts obtained from Baligh and Damon (1980) and Baligh, Burton, and Obel (1987), which distinguish organizations on the basis of performance and structure properties. The system's recommendations on structure design are made in terms of performance properties, such as responsiveness, and structure properties, such as the comprehensiveness of the decision rules, their fineness, and the enfranchisement of the users of these rules.

This system, called "DESIGN 6", and the Organizational Consultant of Chapter 2 are similar in some respects, but different in others. Both work on rules stated in terms of structural properties and give their recommendations in these terms. Each has its own set of properties, defined differently and with different degrees of operationality and clarity. Finally, DESIGN 6, the design-first model, makes extensive use of performance properties, while the Organizational Consultant does not.

The literature has placed a greater emphasis on building the analytic base of organization theory than on devising efficient search procedures for the design of organizations. Not only is this latter problem important in a practical sense for business decisions, it is important in a theoretical sense. We need to design structures to test the meaningfulness of the analytic generalizations and find out whether they are operational. Given the large number of design choices, efficient search methods are called for.

The problem of design is a formidable one. Let us assume that we accept the five classes of structures of Mintzberg (1979). From the definitions of these five, one can generate over a million other classes. How does one search such a set? Let us assume that an organizational structure can be defined in terms of whether it is functional, divisional, or matrix, and whether it is centralized or not, formalized or not, and has many or few rules. There are 24 (= 3 x 2 x 2 x 2) possible designs from which to choose. The number of choices grows exponentially as the number of organizational dimensions approaches useful realistic proportions. In a contingency theory of organization, these design alternatives would need to be evaluated for all possible conditions. In the simple case of ten dimensions for the conditions, with only two values allowed for each, we would have to evaluate the 24 classes of structures under 1024 (2^10) different conditions. There is no alternative to this blind search unless we can put the set of all organization structures in some order. The orders developed by Baligh and Burton (1984) are useful, but not strong enough. The alternative is to order the set of structures on the basis of well-defined properties that are operationally defined and take values that can be well ordered (see the Appendix). That done, we can develop design rules which an ES can use to search a very large space efficiently.
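The arithmetic of the small example can be checked mechanically. In the Prolog sketch below, the dimension values are ours, chosen only to mirror the 3 x 2 x 2 x 2 space in the text.

    design(Form, Cent, Formal, Rules) :-
        member(Form,   [functional, divisional, matrix]),
        member(Cent,   [centralized, decentralized]),
        member(Formal, [formalized, not_formalized]),
        member(Rules,  [many_rules, few_rules]).

    % ?- findall(d, design(_, _, _, _), Ds), length(Ds, N).
    % N = 24; a blind search would have to evaluate each of these designs
    % under all 1024 condition profiles, which is exactly the combinatorial
    % burden that a well-ordered property space is meant to avoid.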
3.2 Design-First

Our object is to design, and our concern is "with how things ought to be, with devising artifacts to attain goals" (Simon, 1981: 133). In a parallel research effort, we began with a positive science contingency theory, developed a knowledge base from this literature, and translated it into a set of rules to design an organization. This time, we begin with the need to design organizations first. The two approaches are complementary, each with its own advantages and limitations.

The design-first approach begins with the specification of the organizational goals (e.g., to operate efficiently). The ES works its way through a set of "if-then" rules and produces an organization structure that attains these goals. The rules must be based on what is feasible, and must be stated in terms of concepts which are operational, that is, as variables with values. If these variables are parameters in the system, their values must be observable; if they are design decision variables, they must be described in terms of the "knobs" manipulated to get an actual organizational structure (Baligh and Burton, 1981). In contrast, the description-and-analysis-first approach is built on terms and concepts of variables with values which must be measurable, but which are not necessarily directly related to the "knobs" that translate into a real organization.

The rules in the ES developed below are stated in terms of properties of the environment and the technology of the organization, and of properties of the organization's performance and structure. These properties must be relevant, that is, expected to influence the efficiency of the organization. They must be operational, which means they must be defined in terms of variables with values that are either observable or can actually be set. Finally, the values that these variables take should come from a well-ordered set that allows comparison. The system's design rules must be stated in terms of such properties if our process of design is to be systematic and efficient.

Relevant properties to be used in the analytic conclusions or generalizations must be those that show that different values of a property produce different performances, outcomes, etc. If the property is defined in terms of structure, then its relevance to performance must be shown explicitly by the analysis. If the property is defined in terms of performance, then its relevance in terms of structure components must be shown explicitly by the analysis. In all cases, relevance requires clear statements of the kinds of conditions, if any, on which the relevance depends. Mere association of property values with others is not useful as a basis for design.
3.3 The Expert System "DESIGN 6"

The new ES, which we call DESIGN 6, is based on a new set of performance and structural properties (Baligh and Damon, 1980; Baligh, Burton and Obel, 1987; Baligh, 1989). The rules the system uses are stated in terms of these new properties. One difference between this ES and the one of Chapter 2 is that DESIGN 6 relies heavily on performance properties of structures. It uses more analytic mappings than the other system, and allows for more flexibility in design. It produces results about both performance and structure properties, and may thus be easier to explain to the user. By using performance properties, we can make do with a smaller number of rules in the ES than we could without these properties.

Figure 3.1 describes the model which underlies this new ES. All the property definitions are given in the Appendix. We begin with the identification of the relevant properties of the environment and the technology. Then we determine which performance properties of the organization are required for good performance, given a specific environment and technology. Next, we derive the structure properties of the organization that will produce the required performance properties. Finally, the actual structure must be specified from these properties. Each subsystem in the model sequence is developed in a way that allows us to test it, analyze it, and derive rules about what the next (more comprehensive) system should look like. The task of the ES to design an organization involves a process of sequential ES designs, each step being used to get the next, better model.

There are about 175 rules in the ES. One set of design rules specifies what performance property values are desirable, given environment and technology property values. Another set specifies what values the structure properties ought to have. One example from the first set of rules states:

If the environment raggedness is high
then the organization's responsiveness should be high cf 40.

Raggedness is a property of the environment that is measured by the relative magnitude of the changes that occur in it. High levels of raggedness occur, for example, when competitors change their prices by huge amounts. Responsiveness is measured by the time it takes an organization to make new decisions that fit the new facts, say, to change its prices. This rule stems from the fact that the more quickly one responds to changes, the less time one is stuck with the old decisions, which, though once good, may now be bad (Marschak, 1972; Baligh, 1989). The larger these changes, the worse the old decisions are likely to be. Thus, responsiveness under these
[Figure 3.1: The model underlying DESIGN 6. Properties of the environment and the technology determine the required performance properties; these in turn determine the structure properties from which the actual structure is specified.]
":-" being defined as a special operator representing the material implication "→". The same holds for "&", replacing the "∧". Unfortunately, PROLOG reverses the case conventions of FOPL, so that variables start with an upper-case symbol, while constants begin with a lower-case symbol. We refrain from providing an introduction to PROLOG here. Suffice it to say that PROLOG is so close to the syntax of FOPL that minor adjustments suffice to build an interpreter in PROLOG that looks, and behaves, (almost) like FOPL.
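The translation is mechanical, as a small example (ours, not the chapter's) shows:

    % FOPL: for all Org, age(Org, high) -> predictability_of_work(Org, high)
    predictability_of_work(Org, high) :-   % the consequent becomes the clause head
        age(Org, high).                    % the antecedent becomes the body

    age(org1, high).                       % a ground fact about the constant org1

    % ?- predictability_of_work(org1, V).  gives V = high.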
To obtain the entire theory space of the 14 structural hypotheses, the FOPL formalization was converted to clausal form (making use of an automatic cross-interpreter). Subsequently, 26 new rules were derived on the basis of the available 30 rules, so that the full theory space of the first formalization is shown to contain a total of 56 rules (see the appendix). No contradiction was derived, indicating that the set of assumptions F1 is consistent.
The Need for a Second Formalization

A closer look at Mintzberg shows, however, that the design process may contain contradictions despite the apparent consistency of F1. For example, two or more known contingency factors may place incongruous demands on the same design parameter: an old organization may stay small, so that the work-related variable predictability of work ought to be simultaneously high (because of old age) and low (because of small size). A contradiction results.

In a similar vein, the efficiency assumption underlying the structural hypotheses may not hold for specific organizations. The efficiency assumption states that organizations exhibit a specific match of contingency factors and design parameters because management makes the right choices when designing an organization. In specific cases, however, decision makers may err, so that the match between contingency factors and design parameters is inefficient. Also, the combination of certain design parameters may be infeasible. For example, the use of specific liaison devices, such as matrix structures, is not consistent with strong behavior formalization or high centralization.

In logical terms, either the set of assumptions is inconsistent, or the inference process exhibits non-monotonicity. Non-monotonicity means that one is working with a dynamic set of assumptions such that adding new assumptions to the old set may invalidate previously drawn conclusions. The first problem can be ruled out because of the apparent consistency of F1. The second problem, however, is obviously present, since contradictions arise when one adds new assumptions to the original set (such as the assumption that an organization exists that is simultaneously old and small, something which is not logically entailed by F1).

Non-monotonicity can be dealt with in various ways (see Genesereth and Nilsson, 1988). The most straightforward approach is to add to each general rule the required number of exceptions. Unfortunately, this approach may become quite cumbersome as the number of exceptions grows large. An alternative approach, suggested by McCarthy (Genesereth and Nilsson, 1988), invokes the use of taxonomic hierarchies. A rule is asserted in conjunction with the provision that the rule is only valid if it is not subject to any exception. For example, hypothesis one, written out above, then takes on the form:
1.1 age(Org, V) & equals(V, high) &
    consistent(predictability_of_work(Org, high))
    => predictability_of_work(Org, high).

1.2 predictability_of_work(Org, V) &
    consistent(behavior_formalization(Org, strong))
    => behavior_formalization(Org, strong).8
The term "consistent" is not an ordinary predicate here. Instead, it must be interpreted as an additional operator, subject to a specific interpretation. Consistent is true if no incompatibility is detected between any inference on the basis of the rule under evaluation and original assertion or any earlier derived assertion. The predicate incompatibility refers to a lookup table that provides a list of incompatibilities. For example, the assertion that Organization 1 is old and large.... size(orgl, large), [f(l)]). age(orgl, high), [f (2) ] ) .
...would now lead to the following hypotheses h:

h(bureaucratic_coordination(org1, present), [h(9), r(53), h(2), r(33), h(0), r(1), f(2)]).9
h(unit_grouping(org1, differentiated), [h(8), r(43), h(1), r(5), f(1)]).
h(liaison_devices(org1, much_used), [h(7), r(41), h(1), r(5), f(1)]).
h(planning_and_control_devices(org1, ~much_used), [h(6), r(40), h(1), r(5), f(1)]).
h(planning_and_control_devices(org1, much_used), [h(5), r(39), h(1), r(5), f(1)]).
h(job_specialization(org1, high), [h(4), r(37), h(1), r(5), f(1)]).
h(bureaucratic_coordination(org1, strong), [h(3), r(35), h(0), r(1), f(2)]).
h(behavior_formalization(org1, strong), [h(2), r(33), h(0), r(1), f(2)]).
h(diversity_of_work(org1, high), [h(1), r(5), f(1)]).
h(predictability_of_work(org1, high), [h(0), r(1), f(2)]).
To interpret consistent, a specific interpreter is required. Using this interpreter (the details of which are beyond the scope of this paper), any hypothesis generated through the application of the consistent operator may now evaluate to true or false. A hypothesis evaluates to true if no
8 Prolog permits using predicates as terms.
9 This formula should be read as follows: the functor h indicates a hypothesis, functor r a rule, and functor f a fact (a rule that is unconditionally true). The list [h(9), r(53), h(2), r(33), h(0), r(1), f(2)] indicates the steps via which the hypothesis was derived. So, hypothesis 9 was derived from rules 53 and 33 in conjunction with hypotheses 2 and 0 and fact 2. For the decoding of the numbers, please refer to the appendix.
inconsistency is encountered, in which case it is accepted as a theorem. In the above example there is no incompatibility. Asserting, on the other hand, that an organization is both old and small, the following results (the functor i indicates inconsistent conclusions):

h(predictability_of_work(org1, high), [h(1), r(1), f(2)]).
h(predictability_of_work(org1, low), [h(2), r(4), f(1)]).
h(diversity_of_work(org1, low), [h(3), r(6), f(1)]).
h(behavior_formalization(org1, strong), [h(4), r(33), h(1), r(1), f(2)]).
h(behavior_formalization(org1, ~strong), [h(5), r(34), h(2), r(4), f(1)]).
h(bureaucratic_coordination(org1, strong), [h(6), r(35), h(1), r(1), f(2)]).
h(bureaucratic_coordination(org1, ~strong), [h(7), r(36), h(2), r(4), f(1)]).
h(job_specialization(org1, ~high), [h(8), r(38), h(3), r(6), f(1)]).
h(planning_and_control_devices(org1, ~much_used), [h(9), r(40), h(3), r(6), f(1)]).
h(liaison_devices(org1, ~much_used), [h(10), r(42), h(3), r(6), f(1)]).
h(unit_grouping(org1, ~differentiated), [h(11), r(44), h(3), r(6), f(1)]).
i([predictability_of_work, org1, [high, low], [[h(1), r(1), f(2)], [h(2), r(4), f(1)]]]).
i([behavior_formalization, org1, [strong, ~strong], [[h(4), r(33), h(1), r(1), f(2)], [h(5), r(34), h(2), r(4), f(1)]]]).
i([bureaucratic_coordination, org1, [strong, ~strong], [[h(6), r(35), h(1), r(1), f(2)], [h(7), r(36), h(2), r(4), f(1)]]]).

To take the first example: there are two competing hypotheses for the predictability of work, one derived from the organization's old age and the first rule, the other derived from the organization's small size and the fourth rule. To decide this case, one would need additional rules governing the precedence of one rule over the other. At this point, the formalization needs to have recourse to additional sources.
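Although the details of the interpreter are beyond the scope of the paper, its core idea can be sketched; the predicate names and the incompatibility entries below are our own illustration, not the authors' code.

    :- op(200, fy, ~).               % prefix negation marker, as in the rules above
    :- dynamic derived/3.

    incompatible(high, low).
    incompatible(V, ~V).

    % A new value for an attribute of an organization is consistent if it
    % clashes with no value derived earlier for that same attribute.
    consistent(Attr, Org, Value) :-
        \+ ( derived(Attr, Org, Other),
             ( incompatible(Value, Other) ; incompatible(Other, Value) ) ).

    accept(Attr, Org, Value) :-
        consistent(Attr, Org, Value),
        assertz(derived(Attr, Org, Value)).

    % ?- accept(predictability_of_work, org1, high).    succeeds.
    % ?- accept(predictability_of_work, org1, low).     fails: clash with high.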
4.7 Observations and Conclusions

We have argued in the introduction that the potential of knowledge-based computer applications manifests itself through various aspects of the formalization of theories. Although our formalization of Mintzberg's contingency theory does not cover all these aspects, we can draw some important conclusions. First, we have seen how the formalization process itself clarifies the logical structure
of a theory by forcing the researcher to disentangle the ambiguities of natural discourse. Paradoxically, because of its expressive power, natural language is not the ideal medium for expressing propositional systems. The complexity of language can hobble a theoretician. We encountered terminological problems at various points during the formalization of Mintzberg's theory. For example, the two design parameters "vertical decentralization" and "horizontal decentralization" suggest a unidirectional causality, yet the (de)centralization process may work both ways (as hypotheses XII and XV exemplify). The third design parameter is labeled "behavior formalization"; but the hypotheses (and the surrounding comments) sometimes use the term "formalization" without the qualification "behavior" (although the superconcept's extension exceeds the extension of behavior formalization), so that it remains ambiguous how the theory would treat non-behavior-related formalization. There are comparable conceptual problems with "liaison devices" and "planning and control systems".

Second, the formalization process forces one to identify the core assumptions of the theory. In doing so, one discovers the gap between the empirical value of a theory (as expressed through those concepts required to state the hypotheses) and its rhetorical value (as expressed through all other concepts). In the present case, this gap seems huge, and we have little reason to expect that other, comparable theories are better focused. In fact, Baligh, Burton and Obel have made the same point with respect to Duncan's and Perrow's contingency theories in Chapter 2.

Third, we have also seen how the machine-based reasoning process may help to clarify the structure of the theory. Recasting the dichotomy of descriptive vs. explanatory theory in logical terms, one discovers that a purely descriptive theory is a flat theory - a theory that states everything through its assumptions. Conversely, deep theories allow "deep" inferencing - with the inference process providing the explanation(s). While descriptive theories accommodate linear representations, explanatory theories require inference trees. As a consequence, explanatory theories may suffer under the linear representation of natural discourse - witness the contradictions arising from the non-monotonicity underlying Mintzberg's design hypotheses.

Of even greater importance is the reasoning process itself. Inferencing by means of natural language may go awry, since the logical structure of complex expressions may mislead the reasoning process.10
Potential conclusions may never be discovered, unanswered questions never raised. The linear structure of natural language is particularly unsuited for dealing with nested problems, where conditionals are deeply nested, or each antecedent has several consequents. Mintzberg would have answered more questions, had he been confronted with the inference trees that arise on the basis of his design hypotheses. This implies no criticism of Mintzberg himself. Having worked for a couple of months with his theory, we have not lost our respect for his towering intellectual capacities. But that even the best of organizational theories runs into considerable problems from the very start betrays severe shortcomings in the discipline as a whole.
10 Readers who want to take exception are invited to find out how much time it takes them to determine whether they can derive a specific conclusion from a specific set of assumptions while reasoning in natural language. The assumptions are (the example is from Lewis Carroll): (1) The only animals in this house are cats; (2) every animal is suitable for a pet, that loves to gaze at the moon; (3) when I detest an animal, I avoid it; (4) no animals are carnivorous, unless they prowl at night; (5) no cat fails to kill mice; (6) no animals ever take to me, except what are in this house; (7) kangaroos are not suitable for pets; (8) none but carnivora kill mice; (9) I detest animals that do not take to me; (10) animals that prowl at night always love to gaze at the moon. The question is: can one derive the proposition "I always avoid a kangaroo," and if so how, and if not why not?
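For the curious, the puzzle yields readily to the machinery discussed in this chapter. The Prolog sketch below is ours; since definite clauses do not reason by contraposition on their own, the needed contrapositives of the premises are spelled out by hand.

    avoid(X)               :- detest(X).               % premise (3)
    detest(X)              :- not_takes_to_me(X).      % premise (9)
    not_takes_to_me(X)     :- not_in_house(X).         % contrapositive of (6)
    not_in_house(X)        :- not_cat(X).              % contrapositive of (1)
    not_cat(X)             :- not_kills_mice(X).       % contrapositive of (5)
    not_kills_mice(X)      :- not_carnivorous(X).      % contrapositive of (8)
    not_carnivorous(X)     :- not_prowls_at_night(X).  % contrapositive of (4)
    not_prowls_at_night(X) :- not_gazes_at_moon(X).    % contrapositive of (10)
    not_gazes_at_moon(X)   :- not_suitable_pet(X).     % contrapositive of (2)
    not_suitable_pet(X)    :- kangaroo(X).             % premise (7)

    kangaroo(kanga).

    % ?- avoid(kanga).    succeeds: "I always avoid a kangaroo" is derivable.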
Appendix

1. Rules of the First Formalization (F1)

age(X, V) => predictability_of_work(X, V).
size(X, V) => predictability_of_work(X, V).
size(X, V) => diversity_of_work(X, V).
stability(X, V) => predictability_of_work(X, V).
complexity(X, V) & stability(X, V) => standardization_of_skills(X, V).
complexity(X, ~V) & stability(X, V) => standardization_of_work(X, V).
complexity(X, V) & stability(X, ~V) => mutual_adjustment(X, V).
complexity(X, ~V) & stability(X, ~V) => direct_supervision(X, V).
diversity(X, V) => market_grouping(X, V).
technical_s_regulation(X, V) => predictability_of_work(operating_core(X), V).
technical_s_sophistication(X, V) & equals(V, 5) => automation(X, V).
technical_s_sophistication(X, V) => comprehensibility_of_work(support_staff(X), ~V).
automation(X, V) => organic_coordination(X, V).
horizontal_decentralization(X, V) => liaison_devices(X, V).
power_needs_of_apex(X, V) => centralization(X, V).
outside_control(X, V) => power_needs_of_apex(X, V).
comprehensibility_of_work(X, ~V) => job_specialization(X, V).
comprehensibility_of_work(X, ~V) => horizontal_decentralization(X, V).
comprehensibility_of_work(X, V) => coordination_by_standardization(X, V).
predictability_of_work(X, V) => behavior_formalization(X, V).
predictability_of_work(X, V) => bureaucratic_coordination(X, V).
diversity_of_work(X, V) => job_specialization(X, V).
diversity_of_work(X, V) => planning_and_control_devices(X, V).
diversity_of_work(X, V) => liaison_devices(X, V).
diversity_of_work(X, V) => unit_grouping(X, differentiated).
diversity_of_work(org(X), V) => homogeneity_of_work(units(X), V).
homogeneity_of_work(units(X), V) => unit_size(org(X), V).
coordination_by_standardization(X, V) => bureaucratic_coordination(X, V).
coordination_by_standardization(X, V) => unit_size(X, V).
behavior_formalization(X, V) => bureaucratic_coordination(X, V).
2. The Theory-Space of the First Formalization

The listing below gives the output of the resolution-refutation procedure in clause form convention. The comma replaces the v, so that {~φ, ψ} is equivalent to φ -> ψ.

age(Org, V) & equals(V, high) & consistent(predictability_of_work(Org, high)) => predictability_of_work(Org, high).
age(Org, V) & equals(V, low) & consistent(predictability_of_work(Org, low)) => predictability_of_work(Org, low).
size(Org, V) & equals(V, large) & consistent(predictability_of_work(Org, high)) => predictability_of_work(Org, high).
size(Org, V) & equals(V, small) & consistent(predictability_of_work(Org, low)) => predictability_of_work(Org, low).
size(Org, V) & equals(V, large) & consistent(diversity_of_work(Org, high)) => diversity_of_work(Org, high).
size(Org, V) & equals(V, small) & consistent(diversity_of_work(Org, low)) => diversity_of_work(Org, low).
en_stability(Org, V) & equals(V, high) & consistent(predictability_of_work(Org, high)) => predictability_of_work(Org, high).
en_stability(Org, V) & equals(V, low) & consistent(predictability_of_work(Org, low)) => predictability_of_work(Org, low).
en_complexity(Org, V1) & en_stability(Org, V2) & equals(V1, high) & equals(V2, high) & consistent(standardization_of_skills(Org, high)) => standardization_of_skills(Org, high).
en_complexity(Org, V1) & en_stability(Org, V2) & equals(V1, low) & equals(V2, high) & consistent(standardization_of_work(Org, high)) => standardization_of_work(Org, high).
en_complexity(Org, V1) & en_stability(Org, V2) & equals(V1, high) & equals(V2, low) &
consistent(mutual_adjustment(Org, dominant)) => mutual_adjustment(Org, dominant).
en_complexity(Org, V1) & en_stability(Org, V2) & equals(V1, low) & equals(V2, low) & consistent(direct_supervision(Org, dominant)) => direct_supervision(Org, dominant).
en_diversity(Org, V) & equals(V, high) & consistent(market_grouping(Org, dominant)) => market_grouping(Org, dominant).  % H11
en_diversity(Org, V) & equals(V, low) & consistent(market_grouping(Org, ~dominant)) => market_grouping(Org, ~dominant).
technical_s_regulation(Org, V) & equals(V, high) & consistent(predictability_of_work(operating_core(Org), high)) => predictability_of_work(operating_core(Org), high).
technical_s_regulation(Org, V) & equals(V, low) & consistent(predictability_of_work(operating_core(Org), low)) => predictability_of_work(operating_core(Org), low).
technical_s_sophistication(Org, V) & equals(V, high) & consistent(comprehensibility_of_work(support_staff(Org), low)) => comprehensibility_of_work(support_staff(Org), low).
technical_s_sophistication(Org, V) & equals(V, low) & consistent(comprehensibility_of_work(support_staff(Org), high)) => comprehensibility_of_work(support_staff(Org), high).
automation(part(Org), present) & consistent(organic_coordination(part(Org), dominant)) => organic_coordination(part(Org), dominant).
automation(part(Org), absent) & consistent(organic_coordination(part(Org), ~dominant)) => organic_coordination(part(Org), ~dominant).
horizontal_decentralization(Org, V) & equals(V, high) & consistent(liaison_devices(Org, much_used)) => liaison_devices(Org, much_used).
horizontal_decentralization(Org, V) & equals(V, low) & consistent(liaison_devices(Org, ~much_used)) => liaison_devices(Org, ~much_used).
power_needs_of_apex(Org, V) & equals(V, high) & consistent(centralization(Org, high)) => centralization(Org, high).
power_needs_of_apex(Org, V) & equals(V, low) & consistent(centralization(Org, low)) => centralization(Org, low).
outside_control(Org, V) & equals(V, strong) & consistent(power_needs_of_apex(Org, high)) => power_needs_of_apex(Org, high).
outside_control(Org, V) & equals(V, weak) & consistent(power_needs_of_apex(Org, ~high)) => power_needs_of_apex(Org, ~high).
comprehensibility_of_work(Org, V) & equals(V, low) & consistent(job_specialization(Org, high)) => job_specialization(Org, high).
comprehensibility_of_work(Org, V) & equals(V, high) & consistent(job_specialization(Org, low)) => job_specialization(Org, low).
comprehensibility_of_work(Org, V) & equals(V, low) & consistent(horizontal_decentralization(Org, high)) => horizontal_decentralization(Org, high).
comprehensibility_of_work(Org, V) & equals(V, high) & consistent(horizontal_decentralization(Org, low)) => horizontal_decentralization(Org, low).
comprehensibility_of_work(Org, V) & equals(V, high) & consistent(coordination_by_standardization(Org, strong)) => coordination_by_standardization(Org, strong).
comprehensibility_of_work(Org, V) & equals(V, low) & consistent(coordination_by_standardization(Org, ~strong)) => coordination_by_standardization(Org, ~strong).
predictability_of_work(Org, V) & equals(V, high) & consistent(behavior_formalization(Org, strong)) => behavior_formalization(Org, strong).
predictability_of_work(Org, V) & equals(V, low) & consistent(behavior_formalization(Org, ~strong)) => behavior_formalization(Org, ~strong).
predictability_of_work(Org, V) & equals(V, high) & consistent(bureaucratic_coordination(Org, strong)) => bureaucratic_coordination(Org, strong).
predictability_of_work(Org, V) & equals(V, low) & consistent(bureaucratic_coordination(Org, ~strong)) => bureaucratic_coordination(Org, ~strong).
diversity_of_work(Org, V) & equals(V, high) & consistent(job_specialization(Org, high)) => job_specialization(Org, high).
diversity_of_work(Org, V) & equals(V, low) & consistent(job_specialization(Org, ~high)) => job_specialization(Org, ~high).
diversity_of_work(Org, V) & equals(V, high) & consistent(planning_and_control_devices(Org, much_used)) => planning_and_control_devices(Org, much_used).
diversity_of_work(Org, V) & equals(V, low) & consistent(planning_and_control_devices(Org, ~much_used)) => planning_and_control_devices(Org, ~much_used).
diversity_of_work(Org, V) & equals(V, high) & consistent(liaison_devices(Org, much_used)) => liaison_devices(Org, much_used).
diversity_of_work(Org, V) & equals(V, low) & consistent(liaison_devices(Org, ~much_used)) =>
liaison_devices(Org, ~much_used).
diversity_of_work(Org, V) & equals(V, high) & consistent(unit_grouping(Org, differentiated)) => unit_grouping(Org, differentiated).
diversity_of_work(Org, V) & equals(V, low) & consistent(unit_grouping(Org, ~differentiated)) => unit_grouping(Org, ~differentiated).
diversity_of_work(org(Org), V) & equals(V, high) & consistent(homogeneity_of_work(units(Org), high)) => homogeneity_of_work(units(Org), high).
diversity_of_work(org(Org), V) & equals(V, low) & consistent(homogeneity_of_work(units(Org), low)) => homogeneity_of_work(units(Org), low).
homogeneity_of_work(units(Org), V) & equals(V, high) & consistent(unit_size(org(Org), large)) => unit_size(org(Org), large).
homogeneity_of_work(units(Org), V) & equals(V, low) & consistent(unit_size(org(Org), small)) => unit_size(org(Org), small).
coordination_by_standardization(Org, V) & equals(V, dominant) & consistent(unit_size(Org, large)) => unit_size(Org, large).
coordination_by_standardization(Org, V) & equals(V, ~dominant) & consistent(unit_size(Org, small)) => unit_size(Org, small).
technical_s_sophistication(Org, V) & equals(V, very_high) => automation(Org, V).
coordination_by_standardization(Org, V) & equals(V, present) => bureaucratic_coordination(Org, present).
behavior_formalization(Org, V) & equals(V, strong) => bureaucratic_coordination(Org, present).
Chapter 5
Building an Artificial Intelligence Model of Management Policy Making: A Tool for Exploring Organizational Issues
Roger I. Hall
5.1 Introduction

The objective of this exploratory study is to build an Artificial Intelligence model of the policy making processes within an organization. The model consists of a set of interlocking "generic" processes based on social-psychological and social-political theories and on observations of group decision making. The design of the decision protocols in the model is outlined below. The model is intended as: (1) a research instrument to explore issues of organization and management theory, (2) a training tool to foster organizational learning, and (3) an aid to managers making policy under stressful situations. Several behavior-based learning techniques are available that can help groups of managers improve their understanding of the policy environment. Recent interest in these techniques is centered in the U.K., where they have aided policy makers in companies coping with deregulation, privatization, globalization of markets, and severe competition from freer trade (the "Strategic Options Development and Analysis" concept of Eden: Eden, 1986, 1988; Eden et al., 1986; Smithin and Eden, 1986). A similar technique based on System Dynamics modeling has been used to help North American managers deal with complex and relatively unstructured policy determinations requiring cooperation among the participating managers (the "Strategy Forum" concept of Richmond, 1987; the "Corporate System Modeling" approach, see Hall and Menzies, 1983; the "Flight Simulator" concept of Sterman, see Fiske and Hammond, 1989, Stata, 1989, Solomon, 1989; and the "Management Learning Laboratory" of Senge, 1989). These mapping and modeling methods are also being used to help senior managers redesign the controls of organizations.
These techniques usually rely on some form of formal mapping to represent the managers' policy domains, linking together concept variables from the managers' cognitions in patterns that trap their essential connectedness and mutual influences. These formalized methods have been called, variously, Signed Digraphs (Axelrod, 1976), Cognitive Maps (Eden, 1986), Influence Diagrams (Coyle, 1972; Hall, 1979; Roos and Hall, 1980), and System Flow Diagrams (Lyneis, 1980). However, these formal methods and models usually do not portray the characteristic responses of the management to policy problems realistically enough to enhance understanding and learning. Such decision making characteristics are often specific to an organization and are themselves derived from "the learning of the past". They are the vital forces that drive the organization into the future. The success (or lack thereof) of the policies so derived will set the scene for organizational learning and adaptation, which will, in turn, change the decision making character of the organization. It is the purpose of this project to develop a model of the policy making apparatus of organizations general enough to trap the essential generic processes of policy making, yet adaptable to specific organizations.
5.2 The Conceptual Framework

The AI model of organizational policy making that is being developed to accomplish this design is based on Process Modeling. Process Modeling is borrowed from basic science, where it is used to describe the form, matter, and motion of phenomena in a flow-of-events style (Bohr's model of the atom and Crick and Watson's model of the DNA molecule are examples). In an organizational context, a process model describes how things are decided and done by people and groups. But as Mohr (1982) has pointed out:

Process models are used little in organization theory and even less in many other social science subfields. When they are used, they are often underdeveloped. There is a tendency to present and conceptualize the stages in the process but to omit the forces that drive the movement from one stage to another. The latter, however, are essential (p. 14).
Process modeling provides a rich descriptive theory of the workings of an organization, but not of the driving forces. To capitalize on its strength while avoiding its weakness, the author has developed a two-part modeling methodology that constitutes a conceptual framework (see Figure 5.1). The first part models how the organization works as an integrated flow of resources (the Corporate System Model). It draws on the author's previous successful work in modeling corporate systems (Hall, 1976; Hall and Menzies, 1983). The second part, which is the focus of this paper, fashions an AI model to capture the
essence of the collective decision making behavior of managers in an organizational setting to control the resource flows (the Policy Making Model). This model draws on the author's earlier work in modeling decision making processes (Hall, 1981 and 1984). Its effectiveness has been demonstrated only in a descriptive-qualitative sense (Roos and Hall, 1979; Hall, 1984).
[Figure 5.1: A Conceptual Framework. Two linked panels: (1) a methodology for building the Corporate System Model, using System Dynamics as an Expert System based on influence diagramming techniques, system simulation modeling, statistical inference, and computer simulation; the Corporate System comprises flows and accumulations of customers/clients, products/services, and expenditures and revenues, yielding performance indices and receiving budget plan decisions and controlling actions; (2) a framework for modeling the Policy Making and Controlling System (forming goal priorities, mapping the environment, formulating plans, recognizing problems, choosing and implementing policies), using an Artificial Intelligence approach based on evolutionary learning theory, cognitive mapping, human learning, judgment and intuition, the politics of organizational decision making, the organization's culture, beliefs, and orientation, and the behavioral theory of the firm. The panels are linked by periodic reports and reviews in one direction and budget plan decisions and controlling actions in the other, with the infusion of new ideas and comparative industry performance indices as external inputs.]
5.3 The Corporate System Model

The first part of the method uses System Dynamics¹ and its associated simulation languages.² System Dynamics has precise rules specifying the correct order for assembling the system components and the equations representing them as a unified system. It also provides a kind of Expert System for creating a realistic state-dependent Corporate System Model based on processes (flows of resources and the mechanisms for controlling them). The Stella software (High Performance Systems, 1985), in particular, uses these rules in conjunction with a draw package that allows the user to put together System Flow icons, representing the resource flows and the mechanisms for regulating them, on a monitor screen while the software semi-automatically writes the program. Standard configurations of component parts of a model are also available to assist the model builder (Wolstenholme and Coyle, 1983). Feedback loops of cause-and-effect embedded in the Corporate System Model can be "hidden demons", giving the model a life of its own that may produce unexpected and counter-intuitive results. A major thrust of the System Dynamics methodology is to uncover the feedback loops through experimentation and analysis, and to neutralize any dysfunctional effects on the model. For this project, such models can help convert policy changes into performance indices in a realistic manner. For example, a change in price or promotion budget is converted into the movements of such performance indices as expenses, revenues, and profits, and such system states as the pool of regular customers and the reputation of the firm.³ An Influence Diagram of such a Corporate System Model for a magazine publishing company appears in Figure 5.2. The diagram is used to program a set of "time difference" equations (the Corporate System Simulation Model) that simulate the effect of different values of the policy variables on the performance indices. Time-series plots of the output of this simulation model are shown in Figure 5.3.⁴
1 For an introduction to the use of System Dynamics for policy making purposes, see Lyneis (1980).
2 Dynamo and Professional Dynamo (Pugh-Roberts, 1986), Dysmap (Salford University, 1986), and Stella (High Performance Systems, 1985).
3 For examples of the effects of influence feedback loops (the "hidden demons") on Corporate System performance, see Hall (1976) and Hall and Menzies (1983).
4 A hidden demon buried in the diagram (Figure 5.2) and uncovered by a System Dynamics study (Hall, 1976) is the prime cause of the exponential growth shown in the time-series plots (Figure 5.3). This feedback loop systematically confounded the attempts of the management to control the magazine and probably led to its demise.
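As an aside for readers who want to see what a "time difference" equation looks like in executable form, here is a minimal sketch in standard Prolog. It is our own illustration, not Hall's model: a single stock, the pool of regular customers, is advanced period by period by a promotion inflow and an attrition outflow. All names and coefficients are invented assumptions.

    /* one simulation step of a stock-and-flow "time difference" equation */
    step(Customers0, Promotion, Customers) :-
        Gained is 0.02 * Promotion,      /* assumed: customers won per budget unit */
        Lost is 0.10 * Customers0,       /* assumed: 10% attrition per period */
        Customers is Customers0 + Gained - Lost.

    /* iterate the step to produce a time series of the stock */
    simulate(_, 0, []).
    simulate(C0, T, [C|Cs]) :-
        T > 0,
        step(C0, 1000, C),               /* assumed constant promotion budget */
        T1 is T - 1,
        simulate(C, T1, Cs).

    /* ?- simulate(500.0, 5, Series). */

Feedback loops arise as soon as an inflow or outflow depends on another stock; that is where the "hidden demons" of footnote 3 live.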
[Figure 5.2: Influence Diagram of a Magazine Publishing Company. Taken from Hall (1989). The legend distinguishes accounting relations, factual relations, belief relations, and inter-map relations.]
[Figure 5.3: Time-series plots of the output of the Corporate System Simulation Model of the magazine publishing company.]
However, it can be seen from Figure 5.2 that some of the Policy Variables lack inward arrows of influences (the driving forces) that would, in a corporate setting, bring forth the changes in policies. It is the express purpose of the AI model (outlined below) to simulate these policy changes and thus supply the missing driving forces.
5.4 The Policy Making AI Model

The second part of the method (which is the focus of this paper) tries to find the missing driving forces. It is based on a process modeling approach similar to that of Cyert and March (1963) in their pioneering study of organizational decision making. The process model of organizational policy making (shown in outline form in Figure 5.4) uses cause mapping and decision-behavioral protocols within an evolutionary learning framework. The model is, in effect, a skeletal AI model that converts inputs, such as the periodic reports and reviews of current performance from a Corporate System Simulation model, into simulated budget plans and controlling actions. These plans and actions, in turn, drive the Corporate System Simulation model to new states. The model, if properly designed, should make policy decisions in a way that closely follows the theory and systematic observations on human reasoning and learning, and the social-political and internal cultural processes of organizational decision making (Hall, 1981). The model has been used (albeit in a qualitative fashion) to explain the "natural logic" used by the senior management of a magazine company that resulted in some seemingly strange decisions - strange enough to doom the company (Hall, 1984). Similarly, the cognitive map of a hospital unit head was put together from interviews and observations to provide a credible explanation for his behavior (Roos and Hall, 1980). To the author's knowledge, the building of such a comprehensive AI model of management policy making is a groundbreaking attempt. The theory underlying Figure 5.4 has already been described in detail elsewhere (Hall, 1984). Here, a short summary should suffice.
An Evolutionary Framework

We assume that organizations evolve through (1) enactment, (2) selection, and (3) retention processes that enable them to handle the ambiguous information resulting from the organization's actions on its environments. These processes are driven by (4) the forces of equivocality control (making sense of ambiguous information) and subunit status maintenance or enhancement. The organization
[Figure 5.4: Outline of the process model of organizational policy making. The recoverable panel titles are "Selection Processes" (register nature of residual equivocality and uncertainty) and "Retention Processes" (admit changes to retained set).]
<model> ::= {{<object>, <attribute>+, <relationship 'with other object'>*; <method>*}+, <relationship 'between objects at system level'>*; <method 'at system level'>*}
In this description method, a method is a SMALLTALK-style object method, which can be:
• a rule;
• a formula operating on data;
• a simulated object action.

Methods and relationships may be specified at the top level of the model (so that they are available to all objects), or known only by a specific object (so that they are "private" knowledge of that object). An example of a private relationship is the relationship between a "whole" object and its "part" objects; a minimal sketch of such a part-whole relationship follows the list below. Because a relationship is defined as a partnership between two or more objects, each object can be recursively defined. This means that our model type description method is a context-free language (MacLennan, 1987: 167). A further model specification consists of a description of:
• the type hierarchies of objects, relationships, attributes, methods, conditions, formulas, data, and actions;
• the attributes, other objects, and methods known by specific object types;
• the partners in, and attributes of, relationship types;
• the method specification on a generic level.
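As an illustration of the recursive part-whole relationship just mentioned, here is a minimal sketch in Prolog; the encoding and the object names are ours, not part of the description method above.

    /* private part-whole relationships between objects (invented examples) */
    part_of(wheel, car).
    part_of(engine, car).
    part_of(piston, engine).

    /* an object is a component of a whole directly or through an
       intermediate part - the recursion in the definition */
    component(Part, Whole) :- part_of(Part, Whole).
    component(Part, Whole) :- part_of(Part, Mid), component(Mid, Whole).

    /* ?- component(piston, car).   succeeds via engine */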
6.4 Prototype Organization Models Based on Different Organization Metaphors

Morgan (1986) distinguishes eight organization metaphors (Table 6.1):

Table 6.1: Organization Metaphors Distinguished by Morgan (1986)

METAPHORS | FOCUS ON | LEAD ON
Machine | clockwork-like system | process decomposition model
Organism; Flux and transformation | living system | system attribute model; decomposition model; flux and transformation model
Brains; Culture; Psychic prison | cognitive characteristics of people within organizations | knowledge based system model
Political system; Domination | power plays and cognitive characteristics of people in an organizational network | multi-actor knowledge based system model
From a declarative point of view, two model dimensions can be distinguished (Table 6.2):
Table 6.2: Model Dimensions from a Declarative Point of View

MODEL DIMENSION | MODEL TYPES
object types | system level (global) models; system with subsystem models; system, subsystem, and environment models; person level models; multi-person models
operation types | data models; functional models; simulation models; evolution models
The object types distinguished by the model determine how reality is interpreted or perceived. For example, an information manager reasoning within the framework of the machine model is unable to see the cognitive aspects of his problem. The operation types determine the possible inferences. For example, a functional model does not admit inferences about the time evolution of an organization. Data models contain only objects, attributes, and correlations between the attributes. Functional models are data models enhanced with functions or rules; by means of these rules, unknown attributes can be inferred from known attributes. Simulation models are functional models enhanced with rules or actions, so that the dynamic behavior of objects and attributes, and their interaction, can be inferred. Evolution models are simulation models with the capability of changing rules, functions, or actions during execution time. The combination of five object types and four operation types yields twenty model subtypes. In the following paragraphs, we focus attention on a number of these subtypes.
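To make the distinction concrete, here is a minimal sketch in Prolog (ours, with invented names): the facts alone constitute a data model; adding the rule turns it into a functional model, because an unknown attribute can now be inferred from known ones.

    /* data model: objects, attributes, and observed values (invented) */
    attribute(org1, size, large).
    attribute(org1, environment, stable).

    /* functional model: a rule inferring an unknown attribute */
    attribute(Org, formalization, high) :-
        attribute(Org, size, large),
        attribute(Org, environment, stable).

    /* ?- attribute(org1, formalization, F).   F = high */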
The Machine Metaphor

The machine metaphor (Morgan, 1986: 19) sees organizations as machines that execute their programmed task deterministically. Organizational activity is seen as decomposable into subprocesses. Each subprocess is analyzed to optimize its efficiency. Specific task types ("functions") are assigned to each department or position. All parts (persons, groups, or machines) must stick to their pre-programmed subtasks. The integration of all subprocesses is supposed to lead to a smoothly working machine, a business process. The traditional way to design information systems is to analyze business processes, their relations with organization units, and their information input and output. In this analysis, business processes are decomposed, as well as input and output data sets. The subprocesses to be automated are chosen and redesigned
(by further decomposition and recomposition) with the objective of using data structures and computer programs efficiently. To optimize the redesigned automata, the principle of minimizing information exchange between automata is used. This principle is based on Simon's (1962) concept of nearly decomposable systems. Simon argues that processes subdivided into hierarchically organized subprocesses are more efficient than non-subdivided processes, because the influence of external disturbances during execution remains localized. The traditional information system design process leads to data flow diagrams (models relating processes and data sets), HIPO-charts (models of the decomposition of processes), and data models (models of the decomposition of data sets) (Martin, 1982). Mechanical systems can be optimized by analyzing their dynamic behavior. For instance, waiting queues of material objects (buffers) between processes are necessary when the processes are not synchronized. Synchronizing processes while reducing buffer costs is an objective of just-in-time management (Davis and Olson, 1984: 283). Waiting queue simulation models can be used here (Kreutzer, 1986; Birtwistle, 1979); a small sketch follows the prototype below. A prototype model for this metaphor is:

<machine model> ::= {<process>, <subprocess>+, <input object>+, <output object>, {<active object>, <subtask>}*; <method>+}
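The buffer remark can be made concrete with a minimal sketch in standard Prolog; the encoding and the rates are our invented assumptions, not part of the prototype. When the upstream process produces faster than the downstream one consumes, the queue between them grows linearly.

    /* queue length after T ticks, given production and consumption rates */
    buffer_after(0, B, B).
    buffer_after(T, B0, B) :-
        T > 0,
        B1 is max(0, B0 + 3 - 2),   /* assumed: 3 items produced, 2 consumed per tick */
        T1 is T - 1,
        buffer_after(T1, B1, B).

    /* ?- buffer_after(10, 0, B).   B = 10 */

Synchronizing the two rates drives the buffer, and its holding costs, toward zero; this is exactly the just-in-time objective.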
A prototype using a passive object (input object or output object) is feasible as well. In information systems design, this leads to data-oriented design (Martin, 1982). Processes are seen as operations on data sets. Data sets are ordered according to object types and relationship types. Data sets can have the role of a component-simulator, or of an event-registration. A component-simulator simulates a passive object, passing through all subprocesses during its life cycle. An event-registration contains the time, components, and active objects participating in a single event during this life cycle, and is not changeable. Another viewpoint centers on the active object (organization or individual doing a task). In information systems design, this leads to user interface oriented design. Decision support systems are an example of this orientation (Sprague and Carlson, 1982; Bennett, 1983). A knowledge based system for production planning and control was built by Fox (1983). This system was the basis for the development of the knowledge engineering environment Knowledge Craft. A knowledge based system to assist the development of data-oriented information systems was designed by Jarke, Jeusfeld, and Rose (1988).
Contingency Theory

Contingency theory's aim is to detect correlations or cause-effect relations between (among others) structural, environmental, technical, and geographic attributes of organizations (Kieser and Kubicek, 1983). This leads to models where the organization or system is the only object, and relations between system attributes are expressed in correlations or logical relationships (including equations). The model type is the data model or the functional model. A prototype model for contingency theory is:

<contingency model> ::= {<organization>, <attribute>+; <relationship 'between attributes'>+}
Quasi-continuous models (Kreutzer, 1986: 31) are examples of simulation models that use only one object with attributes. Mathematical chaos models (Peitgen and Richter, 1986) are examples of evolution models using only one object with attributes.
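For readers who want the smallest possible evolution model of this one-object kind, here is a sketch (ours, not from the chaos literature cited above) of the logistic map in Prolog; with the parameter set to 4, the single attribute follows a chaotic trajectory.

    /* logistic map: one object, one attribute, evolving over time */
    evolve(_, 0, []).
    evolve(X, N, [X|Xs]) :-
        N > 0,
        X1 is 4.0 * X * (1.0 - X),   /* r = 4: the chaotic regime */
        N1 is N - 1,
        evolve(X1, N1, Xs).

    /* ?- evolve(0.3, 10, Trajectory). */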
Open Systems Theory

The open systems approach builds on the principle that organizations, like organisms, are open to their environment and must achieve an appropriate relation with that environment if they are to survive (Morgan, 1986: 44). The open systems approach defines an organization in terms of interrelated subsystems. The internal regulatory mechanisms of a system must be as diverse as the possible disturbances from the environment with which it is trying to deal (Ashby's "law of requisite variety"; Emery, 1969: 105). A prototype model for these theories is:

<open systems model> ::= {<system>, {<subsystem>, <function 'of subsystem with respect to system'>}*, {<'environmental' system>, <function 'of system with respect to environmental system'>}*; <work method>*, <coordination method>*, <information processing method>*}
<system> ::= {<organization> | <organization module> | <information system> | <external organization> | <environmental system> | <person> | <other system>}
Living Systems Theories

Flux and transformation theories of living systems see organizations as living systems, creating and adapting themselves in interaction with their ecological
environments. Flux and transformation theories include self-organizing systems theories, lifecycle theories, ecological network theories, and chaos theories. The Prigogine and Stengers (1984) theory of self-organizing dissipative systems connects individual system characteristics with the macro state variables and methods of the ecological system. In this theory, individual systems respond to external forces by regulating the influx and outflux of matter, energy, and information (force/flux relationships). These fluxes create internal system structures. In the Maturana and Varela (1984) autopoiesis theory, the individual living system regulates fluxes on the basis of internal system goals or patterns. The autopoietic living system thus has the capability to transform and reproduce itself. It influences its own environment as well as adapts to it, and it has its own life cycle, inherited by reproduction. The autopoiesis theory is a lifecycle theory as well as an ecological network theory. Chaos theories are mathematical theories about system states that depend on feedback loops or other recurrent behavior patterns. Under certain conditions, recurrent behavior patterns lead to a stable system state (e.g., a normal thermostat). Under other conditions, systems may cycle between two or more definite system states, or even among an infinite number of system states (chaotic behavior) (Peitgen and Richter, 1986). If a system behaves chaotically, its future states are unpredictable by mechanistic models. Chaotic states seem to be necessary in living systems to cope with their environment (Varela, 1989). A prototype model for these theories is:

<living systems model> ::= {<ecological system>, <ecological system state variable>+, <ecological system macro method>+, {<system>, <system goal>*, <system lifecycle state variable>+; <system environment interaction method>+, <system lifecycle method>+}+}
<ecological system macro method> ::= {<force/flux method> | <autopoiesis method> | <ecological cycle method>}
Knowledge-Based Systems Theories

The failure of first generation management information systems (MIS) led to a correction of the more naive systems control theory. Aspects of the information processing capacity of the controlling managers had been overlooked, as had organizational processes that could reduce information needs (Galbraith, 1973). Therefore, theories of human cognition have to be the basis for the second generation of MIS, for which the label knowledge-based systems may be more appropriate.
Knowledge-based system models are system models in which personal knowledge, creative processes, learning processes, goals, and beliefs of individuals and groups are important. The focus is on describing and explaining why individuals and groups behave in a certain manner, and why they change behavior. Systems theory has gradually extended in the direction of soft systems methodology (Checkland, 1981) and is developing further in the direction of knowledge based system theories. Organization culture theory and cognitive theories of management evolved in the process (Morgan, 1986). Organization theory now pays more attention to incremental, organizational learning, and growth models of planning and control (Van Gunsteren, 1976). A prototype model for these theories is:

<knowledge based system model> ::= {<actor>, <goal>*, <'self information' state variable>+, {<'information about' environmental object>, <state variable>+}+; <'self' model>+, <'environmental object' model>+, <'self' control behavior rule>+, <'environmental' control behavior rule>+}
<actor> ::= {<person> | <group of persons>}
<environmental object> ::= {<person> | <group of persons> | <'other' system>}
<model> ::= {<object>, {<state variable>+; <behavior rule>+}}
Multi-Actor Knowledge-Based Systems Theories

Multi-actor theories, such as the transaction cost theory (Williamson, 1985), assume a multitude of actors or organizations. These actors have to cooperate somehow (via the market mechanism, for example). Actors have their own private goals, while their information processing capacities are limited. In the prototype models, a learning and a non-learning variant can be distinguished. We describe the non-learning variant:

<multi-actor knowledge based system model> ::= {{<actor>, <goal>+, <state variable>+; <behavior rule>+, <transaction method>}+, <system attribute>*}
<transaction method> ::= {write <market contract>}
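A minimal Prolog sketch (ours; all names and figures are invented assumptions) of the transaction method above: two actors with private goals close a market contract when their reservation prices overlap.

    /* private goals of two actors (invented) */
    goal(buyer, max_price, 100).
    goal(seller, min_price, 90).

    /* a contract is written when the price ranges overlap */
    contract(Buyer, Seller, Price) :-
        goal(Buyer, max_price, Max),
        goal(Seller, min_price, Min),
        Min =< Max,
        Price is (Min + Max) / 2.    /* split the difference */

    /* ?- contract(buyer, seller, P).   P = 95 */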
A third possibility in PROLOG programming is to use elementary sentence types as predicates:
<object sentence> ::= {<objectid>, {{<dimension>, <value>} | <attribute>}, <during time interval>}.
<relationship sentence> ::= {<relationshipid>, <objectid1>, <objectid2>, {{<dimension>, <value>} | <attribute>}, <during time interval>}.
In this approach, relationship objects can be used as first order binary relationships, as well as higher order relationships that organize more complex relations. Complex sentences, which would lead to complex predicate structures, have to be normalized until the information in the sentence can be written down in the elementary sentences. This option gives the most flexibility, but the PROLOG program becomes more difficult to construct and to read.
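A small sketch (ours, with invented identifiers) of what such elementary sentences look like as Prolog facts:

    /* object sentences: objectid, dimension/value or plain attribute, time interval
       (note the two arities make two distinct predicates, object/4 and object/3) */
    object(emp1, salary, 3000, during(1989, 1990)).
    object(emp1, permanent, during(1988, 1990)).

    /* relationship sentence: relationshipid, two objectids, dimension/value, interval */
    relationship(r1, emp1, dept5, role, programmer, during(1988, 1990)).

    /* ?- object(emp1, salary, V, during(From, To)). */

A complex sentence such as "emp1 works for dept5 as a programmer earning 3000" is first normalized into elementary facts like these before it can be stored.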
The Information Strategy Choice Model in PROLOG

In the PROLOG Information Strategy Choice Model, three alternative information strategies can be chosen:
• expansive
• cutback
• stepwise

In the expansive strategy, an attempt is made to address all user department needs. To do so, external consultants are hired. This leads to higher costs, sometimes exceeding spending limits. If the latter is the case, the cutback strategy takes effect. External consultants are sent away, and internal employees are fired. This leads to lower costs, but also to backlogs. Under the pressure of backlogs, the expansive strategy becomes dominant again. The third strategy, the stepwise strategy, only becomes dominant when the information manager manages to send the message that he intends to follow it. In the stepwise strategy, user department needs are fulfilled stepwise, internal employees are hired stepwise (if necessary), and the number of external consultants is limited to the number of internal employees. This strategy leads in most cases to optimal results. It can, however, be overthrown by the expansive or the cutback strategy. In the PROLOG model, the logic of information strategy choice, the effects of a chosen strategy, and the consequent choice of the same or a new strategy are modelled:
<PROLOG simulation model> ::= {<object>+, simulationtime; <state rule>+, <simulation action>+, <report action>+}
<object> ::= {<objectid>, <attribute>+}
<objectid> ::= {<state object id> | <resource object id> | <infosystems object id>}
<state object id> ::= {minister | law | costs | infomanagement}
<resource object id> ::= {i_employees | e_consultants}
<resource object> ::= {<resource object id>, <attribute>+}
<resource object attribute> ::= {number, productivity, costs}
<infosystems object> ::= {infosystemsid, sizeinfp, readytime, budget, sizeready, costs, cumcosts}
<state rule> ::= {if {<condition>+, <state>} then {<state>}}
<condition> ::= {<state> | <attribute> | <value>+, <formula>+}
<simulation action> ::= {<initialize action> | <decide action> | <calculate action>}
<report action> ::= {<objectid>, <attribute>*, simulationtime}

<simulation object> ::= {<objectid>, <attribute>+, <internal variable>*; <initialization action>*, <task method>+, [<report action>]}
<attribute> ::= {<demand attribute> | <resource attribute> | <result attribute>}
<object set> ::= {<objectid>, <attribute>+}
<demand object set> ::= {functionPoints}
<resource object set> ::= {intEmployees | extConsultants}
<result object set> ::= {functionPointsReady}
<internal variable> ::= {infostrategy}
<simulation object id> ::= {infomanager | department | infosystem | resource object set and system statistics collector}
<simulation method> ::= {<strategy choice method> | <budget calculation method> | <strategy implementation method> | <resource demand method> | <production method> | <reporting action>}
<strategy choice method> ::= {if {<state> | <resource object set> | <internal variable>}+ then {infomanager, strategy}}
<budget calculation method> ::= {if {infomanager, strategy} then {do <formula> with {infostrategy, budget}}}
<strategy implementation method> ::= {if {infomanager, strategy} then {do <formula> with <resource object set>}+}
<resource demand method> ::= {increment <demand object set> at random, increment simulationtime waiting for <result object set> where <demand object set> = <result object set>}
<production method> ::= {{decrement <resource object set>}+, increment simulationtime, produce <result object set>, {increment <resource object set>}+}
<reporting action> ::= {write {simulationtime, <objectid>+, <attribute>+, <value>+}}
The SMALLTALK InfoStrategy model's stability is sensitive to the initial conditions. Under certain conditions, the model cycles chaotically between the three strategies. One interesting pattern is: after a period of "expansive - cutback - expansive - cutback" changes, a period of "stepwise - stepwise - stepwise" appears, which is then broken by a revival of the expansive or the cutback strategy. An interesting feature of the model is the importance of strategy implementation. If most personnel are bound to InfoSystem projects, a cutback of personnel costs can take a long time. For instance, when the stepwise strategy starts under such unfortunate circumstances, it may soon be changed to a cutback strategy.
6.6 Suggestions for Further Research

In the near future, organization theories focusing on multi-actor systems and on cognitive components in organizational planning, control, and development will probably gain importance. Before being applied to information management, these theories have to be tested for their modelling capabilities. Research and development of more comprehensive interactive thought support systems, which can use several organizational metaphors and offer tools for creativity support, holds promise. In this research, knowledge from the DSS, ES, and organization and information systems disciplines must be combined.
In the field of technical tools, a modelling environment combining logic and object-oriented tools is necessary to implement interactive thought support systems. Tools for object-oriented databases and for quick interface development in SMALLTALK are necessary for efficient programming (Boarne, 1989). Special objects can be made to control relationship types between objects, or to act as method libraries (Van Oosten, 1988). Expert objects can use small rule bases to deduce properties of the objects known to them.
Appendix: Methodology of Model Description

A generic model description is:

"specification of model type"
<model> ::= {{<object>, <attribute>+, <relationship 'with other object'>*; <method>*}+, <relationship 'between objects at system level'>*; <method 'at system level'>*}

A generic further model specification is:

"specification of object types"
<object> ::= {<subtype specification>}
<object type> ::= {<objectid>, <attribute>+, <relationship 'with other object at object level'>*; <method 'at object level'>*}

"specification of relationship types at system level"
"specification of relationship types at object level"
<relationship> ::= {<subtype specification>}
<relationship type> ::= {<object>+, <attribute>+}

"specification of object attribute types"
"specification of relationship attribute types"
<attribute> ::= {<subtype specification>}

"specification of methods at system level"
"specification of methods at object level"
<method> ::= {<subtype specification>}
<method type> ::= {if <condition>+ then <method>+ | [assign <value> to] <attribute> of {object | relationship} | do <formula> with <data>+ | <action>}
<condition> ::= {<subtype specification>}
<condition type> ::= {<object> | <relationship> | <attribute> | <method>}+ {exists | not exists}
<formula> ::= {<subtype specification>}
<formula type> ::= {<formula body> [with <'input or output' parameter>+]}
<parameter> ::= {<attribute> [of {<object> | <relationship>}]}+
<data> ::= {<value>}
Listing of the PROLOG model

/* LOGIC OF INFORMATION PLANNING APPROACH CHOICE MODEL */
/* This model gives advice about the choice of an information planning
   approach. Which advice is given depends on organization type and
   information planning experience. */

domains
    time = integer

database
    orgtype(symbol)          /* individual, system, supersystem, network */
    history(time, symbol)    /* pragmatic, bsp, corporate, exchange */
    simulationtime(time)

predicates
    advice(symbol)
    histind(symbol)
    hold(symbol, time)
    implement(symbol)

clauses
    simulationtime(0).

    implement(X) :-
        advice(X), hold(X, 1).

    advice(pragmatic) :-
        not(history(_, corporate)), not(history(_, exchange)),
        simulationtime(X), not(history(X, _)),
        assertz(history(X, pragmatic)).

    advice(bsp) :-
        not(orgtype(individual)) and histind(one),
        not(history(_, corporate)), not(history(_, exchange)),
        simulationtime(X), not(history(X, _)),
        assertz(history(X, bsp)).

    advice(corporate) :-
        histind(two) and orgtype(supersystem),
        not(history(_, exchange)),
        simulationtime(X), not(history(X, _)),
        assertz(history(X, corporate)).

    advice(corporate) :-
        histind(two) and orgtype(network),
        not(history(_, exchange)),
        simulationtime(X), not(history(X, _)),
        assertz(history(X, corporate)).

    advice(exchange) :-
        histind(three) and orgtype(network),
        simulationtime(X), not(history(X, _)),
        assertz(history(X, exchange)).

    histind(one) :-
        history(_, pragmatic).

    histind(two) :-
        history(_, pragmatic) and history(_, bsp).

    histind(three) :-
        history(_, pragmatic) and history(_, bsp) and history(_, corporate).

    hold(_, X) :-
        simulationtime(Y), retractall(simulationtime(_)),
        Z = X + Y, assertz(simulationtime(Z)).
/* INFOSTRATEGY MODEL IN PROLOG*/ /* This model simulates information strategy choice. It calculates the effects of the chosen information strategy. These effects are input for the next information strategy choice. Three basic strategies are distinguished : expansive (traditional), cutback, and stepwise. */ domains object attribute value
= = =
symbol symbol real
database f(object,attribute) fl(object, attribute, value) simulationtime(value) predicates p(object,attribute) go initialize decide strategy calculate(value) reportl report2 clauses /*states and their interconnections*/ P(0,A) p(minister, unhappy) p(minister, unhappy) p(minister, happy) p(law, implementedintime) p(law, unimplementedintime) p(law, implemented) :• p(law, unimplemented) p(infosysterns,ready) :
p(infosystems, readyintime)
p(infosystems, notready) : p(infosystems, notreadyintime) :-
p(costs, high)
f (0,A) . p(law, unimplementedintime). p(costs, high). not(p(minister,unhappy)). p(infosystems, readyintime). p(infosystems, unreadyintime). p(infosystems, ready), p(infosystems, notready). fl(infosystems, sizeinfp, FP) , fl(infosystems,sizeready, FR) , FR >= FP. p(infosystems,ready), simulationtime(T), fl(infosystems,readytime, E), T = E, p(infosystems, notready). fl(infosystems, costs, X),
146
Henk W.M. Gazendam
p(costs, reasonable) /*actions*/ initialize
go
:-
f1(infosystems, budget, Y), X > (Y*1.5). not(p(costs,high)).
r e t r a c t a n (simulationtime (_) ) , r e t r a c t a n (fl{_, ) ) , assertz(simulationtime( 0 ) ) , assertz(fl(infosystems, sizeinfp, 36000)), assertz(f1(i_employees,number,10)), assertz(fl(i_employees, productivity, 3)), assertz(fl(i_employees, costs, 0.5 )), assertz(fl(e_consultants, number, 0)), assertz B, retractall(f(infomanagement,_)), assertz(f(infomanagement,cutback)).
decide
:-
f(infomanagement,stepwise), retractall(f(infomanagement,_)), assertz(f(infomanagement,stepwise)).
decide
:-
not(f(infomanagement,cutback)), not(f(infomanagement,stepwise)), retractall(f(infomanagement,_)), assertz(f(infomanagement,expansive)).
strategy
strategy
p(infomanagement, expansive), retractall(f(succès,_)), assertz(f(succès,false)), simulationtime(T), fl(infosystems, sizeinfp, FP) , fl(infosystems, readytime, RT) , fl(infosystems, sizeready, SR), fl(i_employees, number, IN), fl(i_employees, productivity, IP), f1(e_consultants, productivity, EP) , ENI = ((FP-SR)/(RT-T) - (IN*IP))/EP, ENI > 0, retractall(fl(e_consultants, number, _) ) , assertz ( fl (e_consultants, number, E N D ) , retractall(f(succes,_)), assertz(f(succès,true)). :-
p{infomanagement, stepwise), retractall(f(succès,_)), assertz(f(succès,false)), simulationtime(T), fl(infosystems, sizeinfp, FP) , fl(infosystems, readytime, RT) , fl(infosystems, sizeready, SR),
Expert Supporting Organization and Information Management f 1 ( i _ e m p l o y e e s , p r o d u c t i v i t y , IP), fl(e_consultants, number, EN), f 1 ( e _ c o n s u l t a n t s , p r o d u c t i v i t y , EP) , E N l = E N / 2, E N 1 >0, INI = ( ( F P - S R ) / ( R T - T ) - ( E N 1 * E P ) ) / I P , INI > 0, r e t r a c t a l l ( f l ( e _ c o n s u l t a n t s , n u m b e r , _) ) , assertz(fl(e_consultants, number, INI)}, retractall(f(succes,_)), assertz{f{succes,true)). strategy
p(infomanagement,cutback), retractall(f(succes,_)), assertz(f(succes,false)), f 1 ( i n f o s y s t e m s , b u d g e t , B) , f 1 ( i _ e m p l o y e e s , n u m b e r , IN), f l ( i _ e m p l o y e e s , c o s t s , IC) , fl(e_consultants, costs, EC), INI = 0.8*IN, I N I > 0, E N l = ( B - I N l * I C * 2 0 0 ) / 2 0 0 * E C , E N l > 0, r e t r a c t a l l ( f l ( _ , n u m b e r , _) ) , a s s e r t z (fl ( e _ c o n s u l t a n t s , n u m b e r , E N D ) , assertz(fl{i_employees, number, INI)), retractall(f(succes,_)), assertz(f(succes,true)).
strategy
f(succes,false).
calculate(T)
f l ( i n f o s y s t e m s , s i z e r e a d y , SR) , f1(infosystems, cumcosts, CICUM), f 1 ( i _ e m p l o y e e s , n u m b e r , IN), f l ( i _ e m p l o y e e s , p r o d u c t i v i t y , IP), f 1 ( i _ e m p l o y e e s , c o s t s , IC) , f l ( e _ c o n s u l t a n t s , n u m b e r , EN), f 1 ( e _ c o n s u l t a n t s , p r o d u c t i v i t y , EP) , fl(e_consultants, costs, EC), S R I = S R + (IN*T* IP) + (EN*T*EP) {(EN/10)*T*IP), C I = (IN*T*IC) + ( E N * T * E C ) , C I C U M l = C I C U M + CI, retractall(fl(infosystems, sizeready,_)), r e t r a c t a l l ( f l ( i n f o s y s t e m s , c o s t s , _) ) , retractall(f1(infosystems, cumcosts,_)), assertz(fl(infosystems, sizeready, SRI)), assertz(fl{infosystems, costs, CI)), assertz(fl(infosystems, cumcosts, CICUM-1) simulationtime(ST), retractall(simulationtime(_)), ST1 = S T + T , assertz(simulationtime(STl)),!.
reportl
simulationtime(T), write(Hsimulationtime ",T),ni,fail. f(infomanagement,X), w r i t e ( " i n f o m a n a g e m e n t ",X), n l , f a i l . p ( m i n i s t e r , M) , write("minister : n,M),ni,fail. p ( l a w , L) , write("law : M,L),nl,fail. p(infosystems,I) , write("infosystems : ",I),nl,fail. p(costs,C),write("costs : ",C),nl,fail.
reportl reportl reportl reportl reportl report2 report2
simulationtime(T), write("simulationtime fl(infosystems,X,Y) ,
",T),nl,fail.
148
Henk W.M. Gazendam write("infosystems ",X, " Y) , nl,fail. f1{i_employees, X,Y), write("internal employees ",X, " ",Y), nl,fail. fl(e_consultants, X,Y), write("external consultants ",X, " ",Y),nl,fail.
report2 report2
" INFOSTRATEGY MODEL IN SMALLTALK" "COMMANDS" Input inspect, "modify input parameters" HenkTools infostratSim: 1001 version: '1' . "run the simulation" HenkTools dataReport. "edit the general simulation report" HenkTools traceReport: '1' . "edit the trace report" HenkTools inputReport: '1' . "edit the input report" Statistics inspect, "inspect system state variables" "TOOL METHODS FOR SIMULATION CONTROL" dataReport TextStyle setDefaultTo: #fixed. DataReport edit. TextStyle setDefaultTo: #default. ! infostratSim: aTime version: n aFileStreaml I aSimulation aFileStream := FileStream newFileNamed: DefaultDir r 'inftra' , n EventMonitor file: aFileStream. DataReport := FileStream newFileNamed: DefaultDir , 'infrep' , n aSimulation := InfoStrategy new startup. [aSimulation time < aTime] whileTrue: [aSimulation proceed] . self dataReport. !
, ' . txt' . , 1 .txt' .
inputReport: n I input I input := FileStream fileNamed: DefaultDir , ' infinp' , n, ' .txt' . Input printOn: input. TextStyle setDefaultTo: #&xed. input edit. TextStyle setDefaultTo: #default. ! traceReport: n I trace I trace : = FileStream fileNamed: DefaultDir , ' inftra ' , n, ' . txt1 . TextStyle setDefaultTo: #fixed. trace edit. TextStyle setDefaultTo': ttdefault.! ! "SIMULATION MODEL CLASSES" Simulation subclass: ftlnfoStrategy instanceVariableNames: ' ' classVariableNames: ' 1 poolDictionaries: ' ' category: 'Simulation Applications' ! ! InfoStrategy methods For:
1
initialization' !
defineArrivalSchedule self scheduleArrivalOf: InfoManager new at: 0.0. (Input at: #numberOfDepartments) timesRepeat: [self scheduleArrivalOf: (Department new) at: 0.0.] . ! defineResources self produce: self produce: self produce: self produce:
O o f : ' FunctionPoint 1 . 0 of: 'FunctionPointReady'. (Input at: ttintEmployees) of: 'IntEmployees'. (Input at: #extConsultants) of: 'ExtConsultants'!
Expert Supporting Organization and Information Management init : aKey Statistics at : aKey put : 0 . ! initFromlnput: aKey I aData I aData := Input at: aKey if Absent: [ aData := 0) . Statistics at: aKey put: aData.! initialize super initialize, self init: ttdepartmentsWaiting. self init: #costsPA. self init: ttcostsCUM. self initFromlnput #budget. self initFromlnput #minSize. self initFromlnput #maxSize. self initFromlnput #optEndTime. self initFromlnput #intProductivity. self initFromlnput #extProductivity. self initFromlnput #intCostsPD. self initFromlnput #extCostsPD. self initFromlnput #intEmployees. Statistics at: #intEmployeesDoNothing put: (Input at: #in-tEmployees). #intEmployeesActive. self init: #extConsultants. self initFromlnput #extConsultantsActive. self init: #functionPointsWaiting . self init: #functionPointsInProjects. self init: #functionPointsReady. self init: #infoSystemsWaiting. self init: #infoSystemProj ects. self init: #infoSystemsReady. self init: #internalSystemsReady.! ! self init: EventMonitor subclass : #InfoManager instanceVariableNames: 'history strategy learningTime classVariableNames: ' ' poolDictionaries: ' ' category : ' Simulation Applications ' ! ! InfoManager methodsFor: 'simulation control' chooseStrategy I budget costs internalSystemsI budget := Statistics at :#budget. costs := Statistics at:#costsPA. internalSystems : = Statistics at: #internalSystemsReady. his-tory := strategy. costs > (budget * (Input at: #cutbackCriterium)) ifTrue: [strategy := 'cutback'] ifFalse: ( ((history = 'stepwise') I ((learningTime > 2) & (internalSystems > 1))) ifTrue: (strategy := 'stepwise'] ifFalse: [strategy := 'expansive']. (Statistics at : #functionPointsWaiting) > (Input at: #expan-siveCriterium) ifTrue: [strategy := 'expansive']. 1 • ! determineBudget I a b c e budget I := Statistics at: #functionPointsWaiting. a b Statistics at : #extProductivity. c := Statistics at : #extCostsPD.
150
Henk W.M. Gazendam e := Statistics at : #budget. strategy = 'cutback' ifTrue: [budget := e * (Input at: #budgetraise) . Statistics at:ttbudgetput: budget] . strategy = 'expansive' ifTrue: [budget : = e * (Input at: #budgetraise) + ( (a * c) / (b * 4) ). Statistics at: #budget put: budget] . strategy = 'stepwise' ifTrue: [budget := e * (Input at: #budgetraise) . Statistics at: #budget put: budget]
implementStrategy I budget intCostsPD extCostsPD intEmployees extConsultants n growthRate cutbackRate maxGrowthRate I budget Statistics at: #budget. intCostsPD Statistics at: #intCostsPD. extCostsPD Statistics at: #extCostsPD. intEmployees Statistics at: #intEmployees. extConsultants: Statistics at: #extConsultants. growthRate Input at: #growthRate. cutbackRate In-put at: #cutbackRate. maxGrowthRate := In-put at: #maxGrowthRate.
[
strategy = 'expansive' ifTrue: Statistics at: #minSize put: 1200 ; at: #maxSize put: 6000; at: #optEndTime put: 200. n := (1 + growthRate) * intEmployees * intCostsPD * 200. budget := budget - n. n := (budget / (extCostsPD * 200)). n := {(n - extConsultants) * 1.1) aslnteger. n > 0 ifTrue: [self produce: n ofResource: 'ExtConsultants' . Statistics increment: #extConsultants with: n.]. n := (intEmployees * growthRate) aslnteger. self produce: n ofResource: 'IntEmployees'. Statistics increment: #intEmployees with: n. Statistics increment: #intEmployeesDoNothing with: n.
]strategy = 'cutback' ifTrue: ( Statistics at: #minSize put: 400 ; at: #maxSizeput: 3000; at: #optEndTime put: 150. intEmployees > 8 ifTrue: [n := (intEmployees * cutbackRate) aslnteger. self acquire: n ofResource: 'IntEmployees'. Statistics decrement: #intEmployees with: n. Statistics decrement: #intEmployeesDoNothing with: n.]. extConsultants > intEmployees ifTrue: [n := (extConsultants * 0.5) aslnteger. (extConsultants - n) > 0 ifTrue: [ n timesRepeat: [self acquire: 1 ofResour-ce: 'ExtConsul-tants' withPriority: 7. Statistics decrement: #extConsultants with: 1]] J . strategy = 'stepwise' ifTrue: [ Statistics at : #minSizeput: 400 ; at: #maxSize put : 1500; at: #optEndTime put: 100. n := ((budget * 0.5) / (intCostsPD * 200)) aslnteger.
Expert Supporting Organization and Information Management n := n - intEmployees. n < (intEmployees * maxGrowthRate) ifFalse: ( n := {intEmployees * 0.5} aslnteger]. n > 0 ifTrue: [self produce: nofResource: 'IntEmployees1 . Statistics increment: #intEmployees with: n. Statistics increment: SintEmployeesDoNothing with: n. ]. n : = ( ( (budget * 0.5) / (extCostsPD * 200) ) aslnteger) . extConsultants > n ifTrue: [n := n - extConsultants aslnteger. n timesRepeat: [self acquire: 1 of Resource: 'ExtConsultants' withPriority: Statistics decrement: #extConsultants with: 1] . ex-tConsultants := extConsultants - n.].
].!
report I a b I (Simulation active time) = 0 ifFalse: [Statistics at: #costsPAput: 0. a := Statistics at: #intEmployees. b := Statistics at: #intCostsPD. Statistics increment: #costsPA with: (a * b * 200) . Statistics increment: #costsCUM with: (a * b * 200) . a := Statistics at: #extConsultants. b := Statistics at: #extCostsPD. Statistics increment :#costsPAwith: (a * b * 200} . Statistics increment: #costsCUM with: ( a * b * 200)). DataReport cr. Simulation active time printOn: DataReport. DataReport nextPutAll: ' InfoManager reports: ' . Statistics printOn: DataReport.! reportBudget I budget I budget := Statistics at: #budget. DataReport cr; nextPutAll: 'Budget determined: ' . budget printOn: DataReport. DataReport cr . 45 timesRepeat: [ DataReport nextPutAll: '-'].! reportStrategy DataReport cr; nextPutAll:
1
Strategy determined: '. strategy printOn: DataReport.!
tasks self report. self chooseStrategy. self reportStrategy. learningTime := learningTime + 1. self determineBudget. self reportBudget. Simulation active scheduleArrivalOf: self at: Simulation active time + 200. self implementStrategy.! ! ! InfoManager methodsFor : 1 initialization ' ! initialize super initialize, history : = 1 none'. strategy : = (Input at: #initStrategy). learningTime := 0.! ! EventMonitor subclass: #Department instanceVariableNames: ' infosystems ' classVariableNames: ' ' poolDictionaries: ' category: 1 Simulation Applications ' ! ¡Department methodsFor: 'simulation control 1 !
1
151
152
Henk W.M. Gazendam
tasks I number personnel I Statistics increment: #departmentsWaiting with: 1. number := (Uniform from: 200 to: 8000) next, self produce: number ofResource: 1 FunctionPoint' . Statistics increment: #functionPointsWaiting with: number, personnel := self acquire: 1 ofResource: ' IntEmployees 1 . Statistics decrement: #intEmployeesDoNothing with: 1. Statistics increment: #intEmployeesActive with: 1. self holdFor: ( (Uniform from: 10 to: 40) next) . Simulation active scheduleArrivalOf: (InfoSystem new) at: Simulation active time, self release: personnel. Statistics increment: #intEmployeesDoNothing with: 1. Statistics decrement: #intEmployeesActive with: 1. Simulation active scheduleArrivalOf: self at: ( (Simulation active time) + ( (Uniform from: 30 to: 200) next)), self acquire: number ofResource: ' FunctionPointReady' . Statistics decrement: #departmentsWaiting with: 1.! !
Dictionary variableSubclass: #SimulationDictionary
    instanceVariableNames: ''
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Simulation Applications'!

!SimulationDictionary methodsFor: 'printing'!
edit
    | aStream |
    aStream := FileStream newFileNamed: DefaultDir , 'simstat.txt'.
    self printOn: aStream.
    aStream edit.!

printOn: aStream
    | s |
    aStream cr.
    45 timesRepeat: [aStream nextPutAll: '-'].
    self keys asSortedCollection do: [:key |
        aStream cr.
        key printOn: aStream.
        s := 25 - key size.
        s timesRepeat: [aStream nextPutAll: ' '].
        aStream tab.
        (self at: key) printOn: aStream].
    aStream cr.
    45 timesRepeat: [aStream nextPutAll: '-'].! !

!SimulationDictionary methodsFor: 'accessing'!

decrement: key with: value
    | number |
    number := self at: key ifAbsent: [
        "if the key is missing, initialize it to zero and retry"
        self at: key put: 0.
        ^self decrement: key with: value].
    self at: key put: (number - value).!

increment: key with: value
    | number |
    number := self at: key ifAbsent: [
        "if the key is missing, initialize it to zero and retry"
        self at: key put: 0.
        ^self increment: key with: value].
    self at: key put: (number + value).! !

EventMonitor subclass: #InfoSystem
    instanceVariableNames: 'sizeinfp readyintime budget sizeready costs '
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Simulation Applications'!
!InfoSystem methodsFor: 'simulation control'!

tasks
    | size minSize maxSize intProductivity number optEndTime extProductivity personnel |
    size := Statistics at: #functionPointsWaiting.
    minSize := Statistics at: #minSize.
    maxSize := Statistics at: #maxSize.
    size < minSize ifTrue: [^nil].
    size > maxSize ifTrue: [size := maxSize].
    optEndTime := Statistics at: #optEndTime.
    intProductivity := Statistics at: #intProductivity.
    extProductivity := Statistics at: #extProductivity.
    Statistics increment: #infoSystemsWaiting with: 1.
    self acquire: size ofResource: 'FunctionPoint'.
    Statistics decrement: #infoSystemsWaiting with: 1.
    Statistics increment: #infoSystemProjects with: 1.
    number := (size / optEndTime) quo: intProductivity.
    (self inquireFor: number ofResource: 'IntEmployees')
        ifTrue: [
            personnel := self acquire: number ofResource: 'IntEmployees'.
            Statistics increment: #intEmployeesActive with: number.
            Statistics decrement: #intEmployeesDoNothing with: number.
            Statistics decrement: #functionPointsWaiting with: size.
            Statistics increment: #functionPointsInProjects with: size.
            self holdFor: (size / (number * intProductivity)).
            self release: personnel.
            Statistics decrement: #intEmployeesActive with: number.
            Statistics increment: #intEmployeesDoNothing with: number.
            Statistics increment: #internalSystemsReady with: 1]
        ifFalse: [
            number := (size / optEndTime) quo: extProductivity.
            personnel := self acquire: number ofResource: 'ExtConsultants'.
            Statistics increment: #extConsultantsActive with: number.
            Statistics decrement: #functionPointsWaiting with: size.
            Statistics increment: #functionPointsInProjects with: size.
            self holdFor: (size / (number * extProductivity)).
            self release: personnel.
            Statistics decrement: #extConsultantsActive with: number].
    self produce: size ofResource: 'FunctionPointReady'.
    Statistics decrement: #functionPointsInProjects with: size.
    Statistics increment: #functionPointsReady with: size.
    Statistics decrement: #infoSystemProjects with: 1.
    Statistics increment: #infoSystemsReady with: 1.! !
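A note on the SimulationDictionary just listed: its counter methods initialize a missing key to zero on first use and then retry, so client code never has to pre-register the statistics it updates. The following lines, which can be evaluated in a workspace, demonstrate this behavior (the variable name stats is ours, chosen for the illustration):

    | stats |
    stats := SimulationDictionary new.
    stats increment: #intEmployees with: 5.   "key created at 0, then raised to 5"
    stats decrement: #intEmployees with: 2.   "now 3"
    Transcript show: (stats at: #intEmployees) printString; cr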
Chapter 7
Environmental and Organizational Interactions in the Design of Knowledge Based Systems: The METAL Case

Armand Hatchuel
Benoît Weil
7.1 Introduction

This paper presents a case study of the development of an Expert System for strategic planning in a major oil company (called "MEGA"). METAL, as we call the project, was to coordinate decision making across several decision making layers of the corporate structure. Ultimately, the project failed, the prey of corporate politics. METAL is nonetheless of specific interest, because ES are not yet commonly applied to problems that involve the complex interaction of several decision making units. The authors of this chapter were involved in the project as participant observers from beginning to end. They present unique material about ES design problems in a corporate context.

Students of organization generally agree that today's business environment requires new, innovative forms of decision-making (Huber, 1986). Not only is the variety of stochastic or qualitative variables entering the decision process growing (especially when strategic decisions are concerned), but pressure from middle management and other professionals within the organization to participate in top-level decisions is also growing. These changes require the development of new organizational devices for cooperative decision making. The advance of Artificial Intelligence in recent years has sparked high hopes that ES and other knowledge-based systems might help decision makers consider a wider variety of complex decision alternatives. Yet, research on traditional Operations Research (OR) and Decision Support Systems (DSS) has shown that these traditional systems often fail to improve even routine organizational decision making. The authors of this chapter have argued elsewhere (Hatchuel et al., 1986) that the side effects of OR and DSS
applications (as catalysts for gaining a deeper understanding of the organization) may outclass their intended effects (the generation of better decisions). ES, even when not immediately effective as decision making tools, may play a similar role, helping the student of organizations to understand the conditions of complex decision making in today's organizations.

Viewed as an extension of OR/DSS systems, ES face two important problems in complex decision making. First, the systems need to find their niche in existing organizations. Organizations are networks of actors within structures; decision making in such structures is distributed across many actors. Consequently, ES can have a positive impact only if they establish new forms of communication between actors, yielding better forms of cooperation. Second, the systems require a new approach to identifying organizational knowledge. Which knowledge should an ES include? OR and DSS systems generally rely on their builders for new solutions to the problem under study. In the case of ES, on the other hand, the knowledge resides outside the systems analyst, either as a set of (usually ill-defined) heuristics in the brains of individual experts, or as a set of procedures that guide organizational cooperation and coordination (i.e., as an implicit, distributed knowledge base). Therefore, the design process of organizational ES must retrieve knowledge not only from individual experts (which is difficult enough, especially when such knowledge is procedural), but also from the interaction patterns of organizational cooperation that constitute the collective knowledge of an organization.

These problems require specific precautions on the part of the participant observer. To find an appropriate way of monitoring the project, we introduced a two-level procedure based on the distinction between management and evaluation. The management of an ES project uses traditional structures and hierarchies to control the decision making during the design of the ES under study. The evaluation of an ES project constitutes a scientific effort to analyze, at each step, the complex interactions that shape the project. In theoretical terms, this approach to project monitoring resembles the "meta-control" of innovative processes, where the goals may change during the project's life cycle (Van Gigch, 1986). The ultimate aim of this approach is to advise management on how to reduce unnecessary conflicts and biased decision making. Analyzing the evolution of the project will allow us to discuss the efficiency of our approach to project monitoring and to derive some insights about the place of ES in organizations.
7.2 Strategic Planning of Off-Shore Drilling Activities: A Challenge for Oil Corporations

The exploration and development of new oil fields is a long, complex, and expensive process that involves a great number of participants. Exploration may be abandoned after a few trials, or go on for years. Obviously, drilling activities occupy a critical place in these ventures. Exploration begins when the geologist discovers a "prospect", that is, an area where geological structures containing oil may exist. Once a promising prospect has been found, the exploration of a specific area is allocated by an agreement to an association of oil companies. This agreement usually transfers to one of these companies the responsibility for all technical operations, and defines the conditions of future business in case of proven oil fields. In spite of sophisticated geophysical analysis, evidence on the nature and extent of a field can be obtained only by drilling at specific points of the prospect's area. These activities comprise the major costs of the exploration program. Professionals estimate that no more than 10% of the prospects are successful, that is, lead to an economically exploitable well.

Drilling decisions are apt to be complex. Decision makers face tough questions, such as: "What is a valuable prospect?", "How can a drilling program be justified?", "What are promising results?" Any answer to these questions must deal with many uncertainties. The most important uncertainty during the past two decades has been the price of crude oil, obviously a key variable. Sharp price discontinuities created by the two oil shocks (1973 and 1979) drastically changed the economic value of prospects. Other uncertainties arose from trade-offs in corporate strategy, especially through variations in the company's "portfolio" of prospects. For instance, in recent years, no more than fifty percent of all scheduled drilling activities were actually performed, because newly discovered prospects often outdo prospects discovered earlier, but not yet explored.
Drilling Equipment and the Management of Drilling Activities

As noted before, drilling oil wells requires highly specialized equipment: platforms and rigs with their skilled teams. Rigs are typically not owned by the corporation; they are leased for a limited time period or a fixed number of wells. The owners of this equipment are contractors who manage a fleet of rigs variously adapted to different types of drilling activities. Oil corporations exert the demand for rigs; contractors provide the supply. Price rates and contractual requirements depend on the conditions of this rig market, which, in turn,
depends largely on - ever-changing - exploration plans. These markets are concentrated in the main regions where oil production takes place, since the transfer of rigs from one region to another takes months. Hence, oil corporations have to design rig contracts according to their drilling activities, considering each of these regions separately.

Big oil corporations (such as the one in this study) expand their drilling activities to other countries where they have established subsidiaries. Nominally, these subsidiaries have considerable autonomy, but in reality this autonomy is fairly limited. First, subsidiaries need the consent of local governments (in whose territory the drilling takes place) and of other parties involved (other partners in a specific venture). Second, the exploration budgets of each subsidiary have to fit into the corporation's global budget. Third, the corporation may intervene during the project through technical controls, involving several different departments at the corporation's headquarters. In fact, the role of these departments is fairly complex: they must assist the subsidiary upon request, but at the same time advise corporate top management in the budget negotiations with the subsidiaries. Fourth, although subsidiaries are supposed to contract their own rigs, in many cases such contracts cannot be negotiated without taking into account the activities of other subsidiaries (with which they compete in the market for rigs).
7.3 METAL's Birth

Coordinating drilling activities between headquarters and subsidiaries has proven to be a difficult, error-prone affair. This prompted MEGA to improve its coordinating mechanisms. Disappointed by traditional coordinating tools, MEGA's management turned to the idea of a knowledge-based tool for planning drilling activities.
The Rig Market Crisis

Although the idea of a knowledge-based planning system surfaced in 1984, it was shaped by the oil crises of the seventies. The oil shocks created strong discontinuities in the rig market. Due to the abrupt price increases from politically induced shortages, many potential drilling sites ("prospects") suddenly became very attractive, while windfall profits provided the subsidiaries with large amounts of cash. Expanding the exploration programs seemed the best strategy to augment oil resources and, at the same time, avoid excessive income taxes (as new programs required large capital investments, reducing
pre-tax profits). The demand for rigs rocketed. Since lease rates for rigs were climbing rapidly and the existing equipment had reached full use, rig contractors planned to build new rigs. Companies accepted contracts for rigs at high rates and for long periods (even as long as two years). Although unsuccessful in retrospect, this policy seemed workable at the time, since everybody expected further increases in oil prices (and since acquiring rigs is a precondition for further expansion). Contrary to expectation, however, oil prices fell, and oil corporations were forced to cut down on their exploration plans. This brutalized the rig market. Oil companies were locked into expensive and long-term contracts for rigs (which they could no longer put to profitable use). Contractors faced severe losses as the new rigs were shipped. Utilization ratios were falling, and the subsidiaries were trying by all means to reduce their "stand-by" costs, that is, the cost of idle equipment.
Strengthening Central Coordination

While the price boom lasted, MEGA's headquarters helped subsidiaries spot rigs on the overheated rig market. A task force was formed to monitor the rig market so that no free rig would go undetected. In addition, the task force was supposed to coordinate the use of the rigs among MEGA's subsidiaries at the regional level. When the tide turned and the market collapsed, this task force switched to keeping subsidiaries from contracting rigs on the market whenever idle rigs from other subsidiaries could be used. Complex coordination problems ensued. These problems could become highly controversial, creating conflicts among the subsidiaries or between subsidiaries and MEGA's headquarters. For example, whereas a subsidiary would prefer to contract a new rig at the market rate, arguing that this equipment would be more efficient than the "stand-by" rig offered by other subsidiaries, headquarters, concerned with the corporation's overall revenues, would insist on using idle rigs already contracted.

Top management became sufficiently aware of these problems in mid-1982 to create a standing committee to oversee rig management and recommend a strategic plan for future rig contracting. Chaired by a senior vice president, the committee originally had representatives from all subsidiary companies and from corporate headquarters. Yet its first recommendation, the framework of a strategic plan for drilling activities, was immediately rejected by top management as lacking vision and as a mere collection of past decisions. New contracting rules were issued to limit the decision range of subsidiaries; these regulations required that any rig contract extending beyond a period of six months be approved by MEGA's CEO. In addition, new subcommittees were installed for each region, with top-level representatives from both the corporation and its subsidiaries. It was hoped that any decision about rigs could now be examined appropriately by headquarters.
In due course, however, top management (confronted with the minute details of the rig business) complained about decision overload. Without the expertise to screen the data effectively, top management had to work with incoherent or inconsistent data. The need for new decision-making tools was obvious. Such tools would have to check the relevant data for coherence and completeness, and help top management devise a new drilling strategy that would effectively address the concerns of both headquarters and the subsidiaries. This strategy, it was hoped, would mitigate future conflicts and reduce negotiation biases. As this idea surfaced in late 1984, it was suggested that some kind of knowledge-based system be developed; thus was born the project METAL.¹
The Project Group

In order to understand this decision, recall how AI caught the imagination of the European business community in the early eighties. AI had suddenly become the premier symbol of high-tech industrial progress. Since no corporation of standing would be caught without an AI group, such a group was established at MEGA. This group, unhappily, labored under somewhat unrealistic expectations. When METAL was devised, the AI group had already suffered some failures (when attempts to build ES stalled, for unclear reasons); pressure to produce tangible results - soon - was building. The AI group saw the drilling strategy issue as a two-fold opportunity. First, the project was an opportunity for research in complex planning (Fox, 1983; Lepape, 1988; Descotte and Latombe, 1985; Delesalle and Lagoude, 1989). Second, because of the importance of drilling to the corporation's future, the success of METAL would bring the AI group much-needed recognition. Yet funds had to be found to support the project. MEGA's Research Department agreed to fund METAL's development, but required that the project be monitored closely by two management consultants (the authors of this chapter). The technical and organizational uncertainties of the project were such that permanent feedback was required for making METAL a success or, in case of failure, for obtaining, at least, an accurate record of what went wrong and why. A special project group was installed to monitor the METAL project, consisting of the secretary of the (now decentralized) rig committees, a regional drilling specialist (from the region chosen as METAL's test site), other experts in scouting and drilling, and the computer scientists in charge of METAL's design. We were to participate in the project almost on a day-to-day basis, attending all important meetings of the project group; we were also authorized to
talk to everybody involved in the company's drilling activities. Figure 7.1 summarizes the project life cycle graphically.

¹ We have changed the project's name to secure the corporation's anonymity.
7.4 METAL's Knowledge Base

The first task of the project group was to identify the knowledge and expertise necessary to build an intelligent system. It quickly became apparent that three types of knowledge were required for building METAL:

1. Technical Knowledge - knowledge of the geology of oil fields, drilling, legal requirements, and other relevant domains.

2. Strategic Knowledge - knowledge relevant to negotiating contracts, setting prices, and the like (e.g., the priority schedules for contracted rigs or the estimated productivity of new oil fields). Strategic knowledge was less codified than the technical knowledge. For example, strategic knowledge was implicit in the drilling committee's past decisions, for which formal descriptions could be devised. However, there was no systematic knowledge for selecting relevant rules.

3. Planning Knowledge - knowledge of the methods for generating drilling plans in a corporate setting. Since drilling plans had to respect both technical and "political" constraints, any viable plan would have to be a compromise between potentially conflicting interests. Although knowledge of how to reach such compromises was implicit in past decisions, there were again no explicit rules.

Thus, three types of knowledge were required for putting together a corporate drilling plan, but no single individual at MEGA possessed all three kinds of knowledge (Figure 7.2).
Planning Without Planners

The dearth of strategic and political experts drove us to consult experts from all central departments and representatives of each subsidiary. Also, we examined all available planning instruments used by the (decentralized) drilling committees. Everyone feared another crisis in the rig market and felt that the company had to establish efficient procedures in time to prepare for the next shock. At the same time, however, our interlocutors were less zealous in their faith that an ES could help. They steeled themselves to resist the seductive forecasts of the next inevitable price explosion. The company would have to stop its ears against the sirens of boom, counteracting (or at least discouraging) the expansion of the
Figure 7.2: METAL's Structure
subsidiaries. Almost all experts not directly involved in the project remained indifferent to METAL; only two, who had been liaison managers during the last crisis (informally helping subsidiaries' drilling programs jibe with the available equipment), showed enthusiasm. Although they had never been planners, they would later provide the strategic and planning knowledge for METAL.

Meanwhile, our investigations unearthed an important flaw in the coordination between MEGA's headquarters and the subsidiaries: standard operating procedures forced each subsidiary to propose a yearly exploration budget to headquarters. However, the exploration budget of each prospect was computed on the basis of the changing market rates for rigs. As a consequence, this exploration budget would not take into account the lease rates for idle rigs contracted by other subsidiaries, although these rates could be very different from the market rates. During the slump, efficient planning from headquarters' point of view would amount to re-allocating idle rigs among the subsidiaries, whereas the subsidiaries would, of course, prefer to lease rigs on the market
at the - much lower - market rates. This discrepancy could persist because the exploration budget was fixed by corporate headquarters, while the (decentralized) committees directly in charge of rig allocation were not involved in the budget negotiations. Thus, our investigations highlighted two important organizational implications for METAL. One was the need to identify both the strategic and the planning knowledge in a company where such knowledge was hidden in the procedures for coordinating rig allocation. The second was the need to stimulate a better coordination of prospect policy (which prospects should be scouted) and equipment strategy (which rigs should be used). Addressing both implications would require considerable institutional and technical commitments. However, if these commitments were made, METAL would have a good chance of success.
What is a Performing System?

In only a few months, the METAL project group designed a prototype of the ES, based primarily on the knowledge of the two liaison managers. We give a short outline of the model's architecture:²

The prototype consisted of two modules, one to evaluate assignments and another to generate the plan.

Assignment Evaluation. The prototype's data base contains descriptions of "prospects" and "rigs" as basic objects. The assignment evaluator used various characteristics of these objects (such as the expected pay-off of a prospect, budget requirements, technical parameters, cost, contract duration) to attach an "assignment index" to each ordered pair <p_i, r_j>, with p_i an element from the set of prospects (P), and r_j an element from the set of rigs (R), under consideration. These rules have the structure:

IF (prospect variable I = A, for prospect X)
and IF (rig variable J = B, for rig Y)
THEN (assign prospect X to rig Y with index a ∈ [-1, +1])

As rigs and prospects are specified by a total of twenty characteristics, the assignment evaluation has to aggregate all index values produced by each rule and map them into an assignment value for each pair <p_i, r_j>. The Cartesian product P × R yields a matrix of index values providing an estimate of the expected "value" of using rig r_j for prospect p_i. These index values are,
in fact, multi-attribute utility functions aggregating the economic, legal, and technical aspects of a specific rig-prospect combination. In this way they supplied both the technical and the "political" knowledge required. The matrix of assignment values served as the input to the planning generator.

Planning Generation. The planning generator has to produce an optimal choice of ordered pairs <p_i, r_j> by means of planning rules framed around a constraint satisfaction algorithm. Typical rules would look like: "The budget of the program from a subsidiary must not exceed the allocated budget plus amount X (weight factor W_1)" or "The drilling in prospect A cannot begin before date d (weight factor W_2)." The inference engine is based on best-first search with backtracking, the weight factors determining the order in which the rules are applied (Descotte and Latombe, 1985). METAL's model approach was not the most robust, or the most efficient, or even the most convenient for the subsidiaries. However, by minimizing the total weight of the rules it rejected, METAL's planning mechanism worked towards a compromise through a unifying treatment of heterogeneous constraints.

² A more detailed description can be found in Itmi (1987).
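To make the two modules more concrete, the following sketch, written in the Smalltalk used for the simulation listings elsewhere in this volume, shows how weighted assignment rules and weighted planning rules might interact. It is our reconstruction for illustration only: the attribute names, the rule contents, the weights, and the single-plan penalty scoring are assumptions, not METAL's actual code.

    "Illustrative sketch only. Assignment rules map a prospect-rig pair to an
     index in [-1, +1]; the indices are averaged into an assignment value.
     Planning rules are condition-weight pairs; a violated rule adds its
     weight to a plan's penalty, which the planner tries to minimize."
    | assignmentRules valueFor planningRules prospect rig plan penalty |
    assignmentRules := OrderedCollection new.
    assignmentRules add: [:p :r |
        (p at: #expectedPayOff) > (r at: #leaseCost) ifTrue: [1.0] ifFalse: [-1.0]].
    assignmentRules add: [:p :r |
        (p at: #waterDepth) <= (r at: #maxDepth) ifTrue: [0.5] ifFalse: [-0.5]].
    valueFor := [:p :r |
        (assignmentRules inject: 0 into: [:sum :rule | sum + (rule value: p value: r)])
            / assignmentRules size].
    prospect := Dictionary new.
    prospect at: #expectedPayOff put: 120; at: #waterDepth put: 80; at: #startDate put: 30.
    rig := Dictionary new.
    rig at: #leaseCost put: 90; at: #maxDepth put: 100; at: #availableFrom put: 45.
    Transcript show: 'assignment value: ',
        (valueFor value: prospect value: rig) printString; cr.
    "Score one candidate plan (a collection of prospect-rig pairs)."
    plan := OrderedCollection with: (Array with: prospect with: rig).
    planningRules := OrderedCollection new.
    planningRules add: (Array
        with: [:aPlan | (aPlan inject: 0 into: [:s :pair | s + (pair last at: #leaseCost)]) <= 80]
        with: 10).
    planningRules add: (Array
        with: [:aPlan | (aPlan detect: [:pair |
                    (pair last at: #availableFrom) > (pair first at: #startDate)]
                ifNone: [nil]) isNil]
        with: 3).
    penalty := planningRules inject: 0 into: [:sum :rule |
        (rule first value: plan) ifTrue: [sum] ifFalse: [sum + rule last]].
    Transcript show: 'plan penalty: ', penalty printString; cr

METAL itself did not score a single candidate plan but searched the space of pairings best-first, backtracking and minimizing the total weight of the rules it had to reject; the scoring above only illustrates how heterogeneous constraints can be traded off on one numerical scale.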
A System for Experts? Implementation Alternatives

As soon as the prototype was up and running, METAL's project group began simulations of planning situations, varying the model's weight allotments (thus attributing various weights to a rule). Unfortunately, the domain experts were hard-pressed to give "realistic" estimates for these weights. After numerous tries with different sets of weights, we concluded that there was probably no set of weights sufficiently robust to generate acceptable scenarios for all planning contingencies. Yet, in working with the model, the experts gained an intimate knowledge of METAL's reasoning strategies, which enabled them to devise counterintuitive planning scenarios. So even though the system was merely a prototype, it was already boosting the planning skills of its expert users. On the other hand, it became clear that working with the model would help the experts to craft the additional planning rules needed for a definitive version of the model. We realized that METAL would never be a stand-alone ES in the hands of inexperienced users; at best, it would become a system for experts (an ESS in the terminology of Chapter 6). This conclusion was of great importance for the project's future, as it enabled us to circumscribe METAL's future role without having to wait for the system's final design. The system's users would most likely be experts at MEGA's
headquarters. They could draft plans with the help of the system, and submit them to the (decentralized) rig committees. These committees would then check the feasibility of the draft plans, and (re)negotiate the more political details before forging the final plan. As in the case of traditional DSS (Hatchuel and Molet, 1986), the system would need to be integrated into the organizational structure. Implementing the model would necessitate a central planning team (the team actually using the model) with close ties to the subsidiaries. We were not certain that this new team would fit into the corporate structure. Questions remained: Which procedures could establish the planning team as an "expert" accepted by all the negotiators? How could the prospect data and the rig data be integrated (recall that both sets of data had been treated differently before, with adverse results)? Would METAL's mechanism for arriving at a compromise command sufficient credibility for the subsidiaries to accept its outcomes as a basis for further negotiations? These questions forced upon the company a long overdue review of corporate planning, particularly of its drilling activities.

At this point, our alternatives for METAL were: (A) continue to build a complete system (a stand-alone ES), as originally envisioned, or (B) develop the prototype into a system for experts (in the sense outlined above). Our report to Corporate Headquarters recommended alternative B, arguing that the organizational environment would not support a stand-alone ES. We confirmed that the difficulty in crafting a comprehensive set of planning rules was not merely a technical problem; it was a political one as well. METAL could not solve this problem because of the power struggle between Corporate Headquarters and the subsidiaries. Any compromise in this conflict would have to be forged not by a machine, but by the negotiated consensus of the players involved. Corporate Headquarters had a difficult decision to make. Yet, before it could do so, a new price shock shattered the oil market. In the autumn of 1985, the spot price of crude oil fell by almost 50 per cent.
The Effect of the Oil Depression: What Does Planning Mean in a Cyclical Economy?

As soon as they could, oil corporations lowered their exploration budgets once again. A severe depression in rig demand yawned before contractors; rig lease rates fell to rock bottom. Corporate Headquarters could do little but follow simple, rigid heuristics. Subsidiaries were asked not to engage in any new contracts, and to lease rigs only at the last moment. Since rig lease rates were becoming very cheap, rig planning no longer needed to maximize the use of already contracted rigs. Instead, Corporate Headquarters could lean back, leaving the subsidiaries alone, and wait for the rig lease rates to fall low enough to warrant a strategy of
long-term rig contracts again. Such contracts would entail only small economic risks on the downside, but generate important savings if the rig market recovered. Successfully applied, this strategy would prepare the company for new price hikes, or even a new oil crisis (Figure 7.3).

Figure 7.3: The Cyclical Phases of the Rig Market (an index of the market price for drilling rigs, plotted over the cycle)

The new planning heuristics utilized the very same knowledge already incorporated in METAL's prototype, although some rules and weight assignments would have had to be adapted. Also, the new conditions would have required some extensions and a different setup for using the system. Because decisions to contract rigs for a long period at low lease rates were highly sensitive to both the estimated value of prospects and the robustness of such estimates (which depended, above all, on geological appraisals of the prospects' sites), the basis of these estimates had to be the same for all projects. This standardization would have necessitated a re-centralization of rig planning, since different subsidiaries, left to their own devices, would probably base their estimates on divergent assumptions. In short, without appropriate changes, METAL was useless under the new market conditions, although it was quite obvious that a market upturn would restore METAL's importance. Hence, from a long-term point of view, continuing its development appeared to be the right strategy.
METAL's Death

Unfortunately, the continuing depression of the oil price overruled the long-term view, as the Senior Vice President in charge decided to discontinue the project. When interviewed later, he cited two main reasons for his decision. First, there was the need to cut costs. Due to heavy windfall losses, the company was hobbled by severe cash-flow problems. Many staff positions at Headquarters were eliminated, including those of the two principal experts of METAL's project group. Second, Senior Management was temporarily unable to deal with long-term decision making, since the crisis demanded its full attention. For example, a huge effort had to be made to keep in place the decentralized committees that had been established after the crisis of '80-'81. Under these conditions, the Vice President felt constrained to make the "political" decision to discontinue the project.

In our final evaluation of the project, we advised that future AI research not be abandoned along with METAL, and that the company continue to look for promising application areas. We also alerted the company to the irreversible loss of expertise connected with the termination of the project. We recommended keeping the existing committees, and distributing our evaluation throughout the company. The Headquarters' Research Division further suggested that our case study become an essential part of the training of the company's future managers, to prevent the same amnesia that struck the company after the first oil crisis.
7.5 Conclusions

METAL's case presents some interesting features: the complexity of the problem, its strategic importance, and the turbulence and hostility of MEGA's environment. Nevertheless, these extreme conditions helped to create an experimental setting where causal factors could be clearly discerned.
The Efficiency of the Project Monitoring Concept

As noted earlier, the monitoring of the project received special attention. The task of the authors of this chapter was to participate in the project and report on all its aspects in as great detail as possible. We had direct access to various sources of information, covering all relevant aspects of the project; we could also examine all archival data related to rig contracting since 1975. This abundance of information allowed us to cross-reference from AI techniques to planning concepts, and from the economic analysis to the organizational requirements of the project.
Some players feared that the project group would ignore the organizational issues of the planning problem, so that the project, after a short "underground" life, would die a silent death. Others, however, thought that the project would help to increase the AI expertise within the company regardless of any immediate benefits. Our main contribution was, perhaps, to disentangle these points of view, and to point to the planning factor as the crucial variable, thus distinguishing between short-term views and the long-term investment in AI. We could show that the project provided an enriched understanding of the management of drilling activities in a cyclical economy, as well as the opportunity for a critical examination of the methods used by the AI group. It was not easy for the other participants to address these issues directly, since this required a global policy review. But our regular reports provided an effective stimulus for embarking on this policy review. This became especially clear after the abrupt depression of the oil market in late '85, when it became obvious that the project, as originally conceived, was inadequate, and could survive only if a long-run organizational and technical strategy was adopted. Management killed the project when they gave up on maintaining a consistent long-term strategy under the high pressure of the declining oil market (even though they knew that the project had shown the danger of disappearing knowledge about rig contracting during other phases of the economic cycle).

From a theoretical point of view, our task was to extend the different "causal maps" (the cognitive models maintained by the different actors; Hall, 1984, and Chapter 5) by checking all the technical, economic, and organizational implications, and thus build a more comprehensive causal model of corporate drilling activities. We would not have been able to do so without the information privileges granted to us right from the start. Enabled to work with all levels in the corporate hierarchy, we could reconstruct the various cognitive models, and locate each actor in the overall model, integrating their partial models.

However, we should be careful not to overstress our role. After all, the project did not reach its initial objectives, despite our input. Perhaps these objectives were unattainable - the prospects of complex innovation projects are notoriously hard to predict (Hatchuel et al., 1987). The development of knowledge-based systems in highly complex and changing contexts will always be highly uncertain.
The Nature of Knowledge in Organizations: Expert Systems and Systems for Experts

Organizations are knowledge consumers; they devour technical, legal, and economic knowledge. But they must produce knowledge as well - knowledge needed to solve their own specific problems that have no precedent in the
literature. In our view, strategic planning belongs primarily to the body of specific knowledge, although corporations tend increasingly to codify this knowledge explicitly in handbooks, rules, and regulations. The value of such rules is difficult to assess. They are established by individuals who are recognized as corporate "experts" (Chapter 1). Yet, their knowledge is primarily of a "tactical" nature, with little potential for generalization to a broader set of circumstances. In our case, this became apparent when the "experts" were unable to choose weights for planning rules that would hold across all scenarios, so that specific weights had to be tailored for each specific scenario in order to obtain valid plans. The type of knowledge needed for generating a stand-alone ES was simply not available.

To be able to address complex group decisions, knowledge-based systems of the future will have to account for the kind of tactical knowledge described above. This requires dividing the knowledge base into two parts: a first part containing the domain knowledge about stable relationships and objects, and a second part containing the more political knowledge (the two types of knowledge we earlier labeled "strategic" and "planning"). Being of a procedural, rather than declarative, nature, this knowledge would have to be organized around the "tactic seeking" procedures known from Operations Research.

ES design in the '80s was based on the idea that the complete knowledge of an expert could be captured in a rule-based system, so that stand-alone applications would work in the hands of non-experts. This research suggests that ES for complex group decisions depend on types of knowledge that cannot be completely specified independently of the experts who generate it. As a consequence, such ES remain "systems for experts"; they can help only those who have more knowledge than the ES themselves do.
Knowledge and Actors in Organizations

Our study of METAL represents a particular case of longitudinal research in organizational dynamics. Recent process models of change in organizations (Hall, 1984; Pettigrew, 1985; Chapters 5 and 6) have stressed the necessity of such studies. Our study suggests that important links between historical, contextual, and structural aspects of change can be found by investigating the nature and the knowledge of the actors involved. AI, by centering on the concept of knowledge (Chapter 10), provides useful tools for analyzing organizational change, quite in the same way that OR's focus on rational decisions provided a frame of reference for discovering the importance of non-rational decision making in organizations.
Chapter 8
Casting Managerial Skills into a Knowledge Based System

Tim O. Peterson
David D. Van Fleet
Most people are hired by organizations initially to perform some specialized set of tasks such as accounting, computer programming, or electrical engineering. These types of skills are generally referred to as technical skills. Technical skills provide an individual with expertise to perform specialized tasks within a specific work domain. Quite often, individuals are promoted to managerial positions because they have shown themselves technically competent (Mainiero, 1986). In fact, it has been posited that establishing some level of technical competence by an individual is a prerequisite for promotion to a managerial position (Dalton, Thompson, and Price, 1977). Thus, managers frequently acquire their managerial positions because of technical knowledge and competencies rather than managerial knowledge and competencies (Rosen, Billings, and Turney, 1976; Stumpf and London, 1981). While technical skills provide knowledge on specialized tasks, managerial skills provide expertise on managerial activities.
8.1 The Functions of Management

Advancing from a technical position to a managerial one is difficult because, not only does one have to retain one's technical knowledge and competence, but one must also develop new knowledge and competence called managerial skills (Mainiero, 1986). As specialists move away from their technical area and into managerial positions, they also move away from the use of specialized behavioral tasks and toward the use of more knowledge-based processes such as managerial skills (Gellman, 1985; Tosi and Tosi, 1986). Many management scholars (Carroll and Gillen, 1987; Guglielmino, 1979; Katz, 1955; Kotter, 1982; Mann, 1965; Mintzberg, 1973) firmly believe that it is this set of
managerial skills, coupled with technical skills, which enables managers to manage effectively. Recently, O'Neal (1985: 51) stated that without these fundamental skills, "managers cannot effectively plan, direct, control, or assess work activities." The empirical work of Mann (1965), Mintzberg (1973), and Guglielmino (1978) also confirms the importance of managerial skills.

Since many managers are promoted for their technical skills, they may not have the managerial skills necessary to perform their tasks effectively. Bandura and Schunk (1981) found that people will not attempt tasks if they believe that they do not have the abilities and skills essential to perform them effectively. This means that at least some managers will tend to avoid the managerial aspects of their jobs, preferring instead the technical aspects, since they feel that they have the necessary skills to perform the technical but not the managerial aspects of their jobs. This type of situation seems to be exactly what Laurence J. Peter had in mind when he coined the Peter Principle.

This situation should be a major concern of organizations. They should ensure that managers have the necessary managerial skills for at least three reasons. First, managers require these skills to effectively discharge the functions of management necessary to their jobs; organizations achieve their objectives when managers apply managerial knowledge to managerial functions. Second, when managers possess the managerial skills they need, they can perform their roles correctly and become more managerially self-efficacious. That is, when they are faced with a specific managerial task, they are more likely to attempt to perform the task than avoid it. Finally, the acquisition of managerial skills and increased self-efficacy, in turn, increases the likelihood that managers will perform effectively in the future.
8.2 Performance Appraisal

There is one particular management function that requires all of the manager's skills: the control function, that is, the process of monitoring and adjusting organizational and individual activities toward goal attainment. Of the many different control activities within an organization, one of the most important (Griffin, 1987) and yet most difficult for managers (Albanese and Van Fleet, 1983) is assessing human performance. Such assessments involve the perceptions of both assessor and assessee - which are not likely to match. Furthermore, these evaluative judgments - especially when negative or critical of poor performance - may be challenged. Evaluating employees' performance is normally part of an organization's performance appraisal process. While performance appraisal serves many different purposes within organizations, its
primary purpose is to provide feedback to employees on their work-related performance. In the performance appraisal process, managers seem to struggle most with the final and perhaps most important step: giving performance feedback to subordinates. This is especially true when the performance feedback is critical or negative (Fisher, 1979; Larson, 1984 and 1986). Yet, without that performance feedback (positive or negative), the employee does not know whether (s)he is performing correctly or incorrectly, nor does the employee know how to change inappropriate work behaviors into appropriate ones (Van Fleet, 1988; Griffin, 1987; Albanese and Van Fleet, 1983). Hawkins, Penley, and Peterson (1981) found that employees' performance ratings were higher for those employees who reported receiving high levels of performance communication, or feedback, from their supervisors. Performance communication was defined in the Hawkins, Penley, and Peterson study as "information communicated about the quality of the employee's work." In addition, employees report that the single most valuable interpersonal source of feedback about their performance is their immediate supervisor (Greller and Herold, 1975). Thus, if the supervisor is not providing feedback, or not providing it well, the subordinate is denied the information necessary to improve his or her job performance. When this happens, as the subordinate suffers for lack of feedback, so too the organization suffers from poor performance. Organizations must therefore make certain that managers not only provide performance feedback, but provide it well.

Managers may have difficulty providing negative performance feedback to their group members for a number of reasons. Two are particularly important. First, the managers may realize that they do not have the necessary skills to do the task well (Bandura, 1977). Second, they may have low self-efficacy in their ability to provide negative feedback while still maintaining a positive interpersonal relationship with group members (Lefton, 1985; Carroll, 1982). Both of these deserve the organization's attention. Ever since Maier (1958) wrote that the failure to provide performance feedback was attributable to a lack of managerial skill, organizational practitioners and management scholars alike have tried three different ways to provide inexperienced managers with the necessary knowledge base to conduct effective feedback sessions. These methods - social learning, training, and education - have not been particularly successful. As stated earlier, experienced managers possess domain-specific knowledge bases about the different functions and tasks of management. These knowledge bases have traditionally been developed through education, training, and social learning on the job. This is also true for the performance feedback knowledge base.
The Feedback Process

In the mid-1950s, an adequate performance feedback knowledge base did not exist. The effort to help managers deal with this knowledge base deficiency has produced a flood of research and theories on the feedback process. Over one hundred articles have been published on the topic over the last thirty years. A number of books (e.g., Annett, 1969; King, 1984a; Maier, 1958; Maier, 1976; Nadler, 1977) have also been published on the topic of performance feedback over this same time period, as well as major conceptual pieces (e.g., Cederblom, 1982; Larson, 1984; Taylor, Fisher, and Ilgen, 1984). All of this writing has created a huge knowledge base of facts and rules on how to conduct performance feedback interviews with subordinates. But the vast size of the performance feedback knowledge base itself confuses anyone trying to use it.

Although the knowledge base now exists, that does not mean that individuals have access to it when they become managers. From the research reviewed in this chapter, it seems safe to say that new, inexperienced managers rarely possess this domain-specific knowledge, except for what they have picked up as employees receiving performance feedback. Management scholars' answer to this problem has been to recommend training on performance feedback for new managers (e.g., Kinlaw, 1984; Latham and Kinne, 1974; Miller, 1981; Smith, 1986). Generally, this training has been in the form of declarative knowledge (Greeno, 1973). In fact, a great deal of the popular trade literature on performance feedback is an effort to provide managers with declarative knowledge. So are executive education and MBA programs. While training is the recommended method of providing managers with a performance feedback knowledge base, very little literature has reported any positive effects of this type of training. Since statistically insignificant research results are less likely to be published, it may be that training has not proven to be as effective a method for providing performance feedback knowledge as originally thought. In fact, the size of the knowledge base may be part of the problem. Trainers limit what will be covered in the training sessions, which are time-consuming and costly to run (Davis and Mount, 1984b). Considering the current knowledge base on performance feedback, trainers may find it impossible to furnish managers with a sufficiently large part of the knowledge base in the training time available. Besides, managers, overwhelmed by the size of the knowledge base, may feel that the number of factors, the number of rules, the complexity of the interpersonal relationship, and the many different possible outcomes which must be considered when providing performance feedback are more than any one person can comprehend. For whatever reason, managers - particularly inexperienced managers - still seem to lack a performance feedback
knowledge base, and so are reluctant to conduct performance feedback interviews.

Due to this lack of success in providing managers with a complete declarative knowledge base, management scholars have turned more recently to providing procedural knowledge. Procedural knowledge is knowing how things are actually done (Greeno, 1973). In these training sessions, a piece of the overall knowledge base is extracted and modeled for the manager. Quite often the trainee practices the desired behavior during the session. Several of these training programs (e.g., Allinson, 1977; King, 1984b; Robinson and Robinson, 1978) have been reported. Generally, they appear successful at imparting to managers a portion of the knowledge base and at demonstrating that managers can, with training, successfully provide performance feedback to subordinates (even if it is limited to a small subset of the whole performance feedback knowledge base). One problem with this form of training is that often the trainees can only apply the "how-to" knowledge to situations similar to those experienced in the training session (Mayer, 1975; Mayer and Greeno, 1973; Mayer, Stiehl, and Greeno, 1975). A direct link, it seems, is made between the knowledge, the behavior, and the situation. So the pattern of events must match those presented in the training session very closely to educe from the manager the appropriate behavior. Moreover, this training has failed to provide the manager with a complete knowledge base and a cognitive retrieval method for accessing the knowledge base in new situations.

Until recently, these methods seemed to be the best available for providing inexperienced managers with performance feedback skills. They were far from perfect, but they were the best that management scholars and organizational practitioners could come up with. This volume addresses a fourth potential method for providing managers with the knowledge necessary to carry out their tasks - the use of ES. New knowledge-based ES are now being developed that claim to provide managers with managerial knowledge and skills (Silverman, 1987). The first such systems were developed in accounting (Dungan and Chandler, 1985), production (Bourne and Fox, 1984; Descotte and Latombe, 1981), and finance (Bouwman, 1983; Whalen, Schott, and Ganoe, 1982), but systems focusing on more general managerial tasks are now starting to emerge. Those more general systems are available now to provide the necessary knowledge and skills to perform specific managerial functions such as planning (Duchessi, 1987), organizing (Blanning, 1987), directing (Kearsley, 1987), and controlling (Slagle and Hamburger, 1987). In addition, ES have appeared that claim to provide managers with the necessary knowledge and skills to give performance feedback (Blanning, 1987; Kearsley, 1987). One of those, Performance Mentor (originally developed by AI Mentor, Inc.), seems particularly useful to managers.
8.3 Performance Mentor

Performance Mentor is a commercially available ES. It is designed to run on personal computers in dual floppy, hard disk, or hard disk and one floppy configurations. Performance Mentor has changed owners and is now available through Nova Expert Systems, Inc., in Palo Alto, California. Performance Mentor is a rule-based ES with over 350 rules. According to the company, the advice provided by the system is based on a massive literature search by the developers and utilizes material from over 100 sources (Schlitz, 1986). Performance Mentor is designed for managers and not for human resource personnel. It is designed to assist in the feedback process, not in the evaluation itself. Performance Mentor begins with an opening screen which asks the user to select one of five options: (1) to profile his or her workplace and management style, (2) to profile one or more employees, (3) to go to an advice menu, (4) to go to a reports menu, or (5) to exit the system. Options 3 and 4 are not usable, however, until options 1 and 2 have first been used.
The Profile

The first step for the manager is to profile himself or herself. The managerial style profile is developed by responding with an A for agree or a D for disagree to 59 terms such as:

relaxed, precise, humorous, praising, charming, cautious, informal, outgoing, artistic, thorough, reserved, assertive, calm, warm, jolly, quiet
Similarly, 45 terms are used to profile the workplace. Some of those terms are:

isolated, artistic, friendly, original, pragmatic, impulsive, scholarly, objective, stable, questioning, complex, unclear, precise, low-key, ordered, frustrating, free, solid, rigid, normal
To complete the workplace and style profiles, 38 multiple choice questions are asked. They deal with such things as the user's experience giving performance appraisals, whether or not a union is involved, and the type of performance appraisal instrument used in the organization. This information is stored by the system in a separate database so that it can be updated when changes occur. This way it can also be maintained on a separate floppy disk, thus assuring confidentiality (provided the user keeps that disk secure).
Next, the software asks the manager to profile each of his or her subordinates, using 68 terms similar to those used for the manager and his or her workplace.
Advice

After completing these profiles, the user can then move to obtain advice or to get reports. However, before the system can provide either, it asks a series of 66 multiple choice questions. These deal with such matters as the particular person's experience on the job, his or her responses to previous performance appraisals, and whether that individual's gender is the same as that of most persons who hold that job in the organization. Performance Mentor also asks the manager to describe the immediate work setting, the organizational environment, relevant performance criteria, and the type of feedback which will be provided to the subordinate. The system then uses all of this information to recommend a performance feedback interviewing strategy for each superior-subordinate dyad.

The system's reports typically run two to four single-spaced pages. They are clear, diplomatic, and have a "ring of truth" to them. While reviewing the advice on the screen, one can get explanations of why that advice is being given. Those explanations teach the user the "why" behind the recommendations so that he can eventually make them himself. As an example, in one published review of this system, the user obtained this advice:

"An examination of your responses indicates that although you show an appreciation for the importance of maintaining good relationships with employees, you may not always provide the direction and resources they need to get their work done. In addition, the environment you have described is likely to be somewhat unstructured, so an important managerial task will be to provide enough direction to keep people on track" (Your PC as Personnel Counselor, 1986).
Another reviewer noted that some of the advice is very predictable: if you indicate that you do not have clearly defined objectives for the job involved, the system will tell you to establish them (de Jaager, 1986). Initially, this system is very time-consuming. However, since most of the information about the manager and the workplace does not substantially change from one performance appraisal session to the next, subsequent uses of the system are far less onerous. Indeed, the time involved can be very brief, since one need only note those areas where change has occurred. The system then merely reminds the user of the correct approach to take with each employee. Performance Mentor, then, takes a broad approach to performance appraisal. It incorporates empirical findings from the feedback literature as well as from the literature about human relations, interpersonal communications, and personality.
For these reasons, Performance Mentor seems to provide a complete and realistic recommendation on how to conduct performance feedback interviews. Performance Mentor is not, however, an interactive ES. It cannot respond to questions from the user. Further, the system does not perform the appraisal itself or even assist in that chore; it is strictly designed to aid in the feedback part of the process. This, however, enables it to avoid violating the recommendations of the U.S. Civil Service Reform Act of 1978, in that it does not make interpersonal comparisons, nor does it use forced distributions or bell curves in evaluations.
An Example

To obtain a "feel" for how Performance Mentor operates, the following example was developed (with apologies to George Lucas). A hypothetical manager, Darth Vader, was entered into the program. Among the terms used to describe him were: precise, decisive, assertive, methodical, outspoken, deliberate, and self-reliant. His workplace was described by terms that included: ordered, solid, rigid, complex, bureaucratic, and controlling. The subordinate involved was Princess Liah, who was described with terms such as: daring, strong, striving, excitable, talkative, idealistic, adventurous, independent, and self-reliant. Queried for advice on both objective setting and appraisal feedback, the system produced a report of six and one-half single-spaced pages. The following is part of that report.

"Liah's experience and position will give her insight into such things as appropriate goals, project deadlines, and standards of performance. When you discuss her performance and plan new goals, resist the temptation to control the proceedings. Share decision-making authority. Consider letting Liah take the initiative in identifying goals and deadlines, and submitting them for your approval. Drawing her into the process as a participant rather than an observer will elicit greater effort and higher performance from her.

"Because your evaluation of Liah's performance is generally negative, she may feel disheartened after the appraisal meeting. Still, the setting of objectives is forward looking and it should be a positive, upbeat experience. To help Liah see it this way, separate the setting of objectives from the appraisal function. If you are setting objectives in the context of an appraisal, consider actually scheduling different meetings to reinforce the idea that this is an opportunity for a fresh start.

"Since you and Liah have had difficulties working together, your meeting will have little value unless you create a cooperative atmosphere. If appropriate, be willing to conduct a series of brief meetings rather than attempting to iron out all the problems and discuss all the relevant issues in one marathon session. Deal with the issues one at a time, starting with the least controversial and emphasizing areas of agreement. Avoid carrying discussion to the point of confrontation. Explain to Liah at the outset that you are trying to reconcile past differences, and ask her to participate in the effort.
"It is possible that Liah is disinclined to consider your feedback because she does not see you as a legitimate judge of her work. When you discuss performance with her, make sure to have the technical information necessary to proceed with confidence. For example, gather any sales figures, quality measurements, or profit/loss statistics that are pertinent to Liah's job. You can also check in with her on a more frequent basis, which will help convince her that you know enough about her job to make pertinent observations and comments. Furthermore, more frequent contact could increase the trust between you. "When discussing Liah's work, avoid discussing personality factors unrelated to job performance. As much as possible, use work-related terms and examples to back up your assessment. In this way, the two of you can discuss her strengths and weaknesses productively, while diminishing the likelihood of allegations of unfairness, bias, or personal dislike. Remember, the purpose of an appraisal is to evaluate Liah's competence as an employee, not her character or likableness. "The appraisal meeting is not the place to resolve hostilities between you and Liah the situation is going to be emotionally charged already. While negative feelings are detrimental to your working relationship, it is inappropriate to deal with them in the review meeting. Concentrate on areas of agreement and the exchange of information, and save non-appraisal issues for other sessions. If the two of you have strong disagreements about the quality of performance, do not try to work them out in one sitting. Try to wrap up the evaluation and schedule another meeting (or series of meetings) to resolve differences of opinion. If your company has a Human Resources or Industrial Relations group, ask them for help." This sort of a report reminds good managers of proper approaches to take in providing performance appraisal feedback. It also instructs newer, inexperienced, or less competent managers in those approaches so as to improve their feedback sessions with subordinates. However, the system will be beneficial only to the extent that managers learn how to use it and actually follow the advice given them.
8.4 Can Expert Systems Really Help?

While the claims of proponents of knowledge-based ES such as Performance Mentor seem promising, those claims have so far rested on subjective, anecdotal reports from users rather than on carefully collected, objective research data. The question remains whether these new computer-based systems can provide managers with the necessary knowledge and skills to perform their jobs more effectively. As Dr. Arie Lewin (1986) said at the 1986 national Academy of Management meetings, "Whether these new systems help managers is still a question to be answered. We just don't know. At present, we are working on faith." The fundamental question is: can ES really help managers do their jobs better?
An Experiment

An experiment was designed to test the hypothesis that using an ES would enable subjects to perform more effectively the task of giving negative feedback. The experiment was conducted in a behavioral laboratory at Texas A&M University in the United States and took approximately three hours for each subject. The subjects were all managers within a single, large organization. Half of them used the ES and half did not. The subjects performed the role of a supervisor of three purchasing agents. The setting was a highly realistic task involving extensive paper documentation of performance (20 pages of single-spaced material, which included a general description, a memo from the previous supervisor, other memos, and six months of purchasing records including monthly summaries which noted over- and under-ordering of parts, supplies, and/or materials) as well as a 15-minute videotape showing the individual to be evaluated at work. The important point was not the actual performance appraisal evaluation (which was designed to be negative), but the feedback session. The focus was on the subjects' ability to identify the appropriate behaviors for a negative performance feedback session, not on their ability to complete a performance appraisal form on the subordinate in question. The subjects had to indicate their intentions for the performance feedback session. The task could be evaluated by the subjects' intentions as indicated by their identification of correct behaviors or by their identification of incorrect behaviors which should be avoided. The subjects had to specify what they felt were the ten correct behaviors from a list of 50 possible behavioral responses, which included both correct things to do (such as "acknowledge his experience and tell him you know he can do the job if he wants to" and "keep this initial feedback session brief but tell the individual you will be having follow up meetings with him") and incorrect things to do (such as "demand an explanation for his sloppy ordering over the last six months" and "tell the individual he should be embarrassed at his performance"). In addition, the list of possible behavioral responses also included some neutral behavioral responses, such as "call the individual in immediately." These responses are not intended for sharing information or feelings; they are used only to open the way for more meaningful communication later on (Verderber, 1981).
Results

The results clearly support the hypothesis that an ES can really help managers do their jobs better. As shown in Figure 8.1, managers using the ES were able to
identify more of the correct behavioral responses than managers not using the system (63% versus 39%). The difference between managers using the ES and managers not using it is significant at the .01 level, using Student's t-test as recommended by Ott (1984). Further support for the hypothesis can be gleaned by examining the number of incorrect behaviors chosen by the two groups. Managers using the ES identified fewer of the incorrect behaviors as being appropriate than did the managers not using the system (6% versus 17%). These findings are also significantly different, but at the .05 level. These results are depicted in Figure 8.2. It is interesting to note that the use of the ES almost doubled the selection of correct behaviors and cut the selection of incorrect behaviors to roughly a third. Whether these managers would follow through on their intentions and effectively apply that knowledge is another issue. But the potential is there, since the ES did aid the managers in identifying both the proper and the improper behavioral responses. One thing which was observed, but not directly tested or analyzed, was that the use of the ES, although substantially improving the quality of the process, slowed down the decision process.
Figures 8.1 and 8.2: Identification of Behavioral Responses (percent of correct behavioral responses identified, 63% with the system versus 39% without; percent of incorrect behavioral responses identified as appropriate, 6% with the system versus 17% without)
On the average, it took managers using the ES 15 minutes longer to come to a decision than it did the managers not using the ES. From a cost/benefit perspective, use of the ES still appeared worthwhile, however, considering the increase in appropriate behavioral responses gained for that additional time. Further research will need to examine this trade-off more carefully. This type of information will be essential in evaluating the costs and benefits of the application of ES to managerial tasks (Chapter 9).
8.5 Discussion and Implications

Sullivan, in his chapter in this volume, argues that ES may build credibility and persuasion in users. Observations during this study suggest that, while that seems true for inexperienced managers, experienced managers may react differently. For example, in this study, inexperienced managers seemed much more willing to accept the advice of the ES than the experienced managers, who were clearly more skeptical about the advice. Perhaps experience with computers in general moderates the reaction to ES. Here, too, more research is clearly needed. There would appear to be three major and very different kinds of implications for organizations arising from this study. First, the training involved in this study was extremely brief - about one hour - and yet it yielded good results. If similar brief training sessions yield similar results with other ES, then training time and costs can be reduced by using knowledge-based ES. Obviously, this would depend upon the "user friendliness" of the ES. If the system is well designed and tested before implementation (as in this study), training time and costs may be substantially reduced. This would mean that new managers could more rapidly become as effective as experienced managers. Second, organizations may need to use several different implementation procedures when introducing ES. A major moderator might be experience, not just the managerial experience level of the personnel involved, but also experience with computers, software, and the like. Experience with the task may reduce or eliminate the need for the ES; experience with computers and software may make the training quicker or easier. Third, given that this study demonstrates the effectiveness of an ES in this one area, it is reasonable to assume that ES can be developed to assist in other areas of management. Therefore, organizations should actively seek out such systems or develop them on their own in order to remain competitive. Clearly, then, managers and organizations may benefit from the use of ES even in areas of administration which are not routine and quantifiable. These
results support continuing research into the usefulness of ES technology to management tasks. As more and better ES are developed, organizations will be able to increase their human productivity.
Chapter 9
A Cost/Benefit Analysis of Expert Systems
H.A.M. Daniels and P. van der Horst
9.1 Introduction

ES are among the most promising applications of Artificial Intelligence so far. Their judgmental reasoning abilities are comparable to those of domain experts in some areas (Bobrow et al., 1986). The decline of hardware and software costs is encouraging widespread use of these systems. In the financial industry especially, ES technology is a fast-growing area (Waterman, 1986; Humpert et al., 1987). The early developments were in financial institutions and consultancy firms in the United States (Shpilberg et al., 1986). Europe is catching up with the U.S. in the application of ES. European banks and consultancy agencies are preparing for the new European common market regulations of 1992. Faced with increased competition, these institutions want to incorporate new technologies into their basic activities, and ES play a key role in this effort (Bensimon, 1988). To exploit the full potential of ES, management must identify promising uses for such systems in the organization. A list of questions that should be considered to obtain a sound judgement about a possible application can be found in Waterman (1986). Here we provide a short summary of the issues most relevant to ES in financial services. There are three important questions to ask: (1) Is ES development possible? (2) Is it appropriate? (3) Is it justified? To answer the first question (is ES development possible?) one should make sure that:
• genuine experts exist (problems that cannot be solved by experts can certainly not be solved by ES [e.g., nuclear waste disposal]);
• experts agree on solutions (in domains where the opinions of experts differ to a large extent or are contradictory, an ES cannot provide solutions which are widely accepted);
• the task does not require background knowledge (it is well known that ES stumble badly when faced with tasks that require such knowledge).

To answer the second question (is ES development appropriate?) one should consider whether:
• solving the problem requires heuristics and rules of thumb (in problems where the major part consists of calculation or database consultation, spreadsheets or database management systems will do better than ES);
• uncertainty is part of the problem (conventional programs are not good at handling uncertainty, whereas the inference engine of most ES can reason with uncertainty);
• solving the task is not easy (a common rule is that the duration of the task is between one hour and one week).

And to answer the third question (is ES development justifiable?) one should determine whether:
• the task solution has a high payoff;
• expertise is scarce and can be lost (e.g., when an expert leaves the company because of retirement or job transfer);
• expertise is needed in many or hostile locations.

For fiscal legislation, the answers to these questions are affirmative.

• Question one, is ES development possible?
  • it is clear that genuine experts exist;
  • consultancy agencies have a well-defined policy for tax advice, so experts within one consultancy agency agree on solutions;
  • the knowledge needed is very specific.

• Question two, is ES development appropriate?
  • the fiscal domain is characterized by a large number of rules;
  • there is often uncertainty about future tax rates;
  • the problems involved are not easy.
The issues corresponding to question three are directly related to cost/benefit analysis and will be considered in the next section of this chapter. Apart from these considerations, tax consultancy also has the characteristics described in Leonard-Barton et al. (1988), such as a frequent use of questionnaires and a slowly changing knowledge domain, which are also positive factors. Once the opportunities for applying ES have been determined, the next step is selecting and evaluating profitable projects.
9.2 Cost/Benefit Analysis

A cost/benefit analysis should serve as a guiding principle for management in evaluating the different opportunities to use the system. There is no doubt that insufficient understanding of the benefits, and the inability to quantify them, will frustrate any development effort (Farrel et al., 1988). Decisions concerning the development of ES are based on a cost/benefit analysis ex ante. For projects that have been selected on the basis of the criteria of the previous section, such an analysis can be used as an instrument to discriminate between them. In a cost/benefit analysis of ES one should take into account both qualitative and quantitative aspects. By qualitative benefits we mean those benefits that are difficult or even impossible to quantify. Although difficult to quantify, qualitative benefits may dominate in the long run (Strassman et al., 1985; Shafe et al., 1988). We cite the most important ones, often mentioned in the literature (see, e.g., Farrel et al., 1988; Michaelsen et al., 1986; Rauch-Hindin, 1986):

• structured knowledge conservation;
• knowledge distribution;
• learning system;
• consistent decisions;
• marketing instrument.
These qualitative factors play a role in almost every ES project, though the importance of each factor differs from project to project. The functioning of an ES can be viewed at different organizational levels. The qualitative benefits have an impact on the tactical and strategic levels because they are not directly related to the operational functioning of the ES; in fact, these benefits occur as a side-effect of the development and usage of the ES. Sometimes management is more interested in the benefits that are related directly to the functioning of the ES. It is therefore important to quantify the benefits on the operational level. They can be calculated by using the substitution principle of capital and labor. This analysis ex ante is even more important than a similar ex post evaluation, because the decisions between rival projects depend on it. The method of cost/benefit analysis given below applies particularly to hand-me-down ES (Leonard-Barton et al., 1988). However, it can be generalized easily to ES where the expert himself uses the system. The costs of development and maintenance are relatively easy to quantify: the estimation of development costs for information systems is fairly well established, especially for projects where standard software is used (Kleijnen, 1980). The benefits, however, are more difficult to determine.
The costs of an ES consist of hardware, software, the wages of the expert and the knowledge engineer, and the costs of maintenance throughout the economic lifetime of the ES. To quantify the operational benefits of hand-me-down ES we distinguish between two kinds of experts. The first is the expert who has the knowledge to solve a particular problem; this expert participates in the development of the ES (let us call him E1). E1's knowledge is handed down by means of the system to a second expert (E2), who is going to perform the same task with the ES. The resulting benefits are twofold: E1 is freed to solve more demanding problems, and the costs of performing routine tasks are reduced. In most cases a small fraction F of E1's time, compared to the old situation, is still needed in the new situation, because usually E1 remains the contact person with the client. Let T1 denote the time spent by E1 to deliver one advice without the ES, and let T2 denote the time spent by E2 to perform the same task with the ES. To calculate the cost savings we compare the costs in the old situation to those in the new situation. Let W1 and W2 denote the fees of E1 and E2, respectively. The cost of the ES is C, and the total number of consultations during its economic lifetime is denoted by N; C includes the costs of hardware and software, development, and maintenance. The cost of performing the task in the old situation is given by:

T1 * W1                                    (1)

The cost of performing the task in the new situation is given by:

F * T1 * W1 + T2 * W2 + C/N                (2)

The amount of cost savings per task thus equals:

(T1 * W1) * (1 - F) - (T2 * W2 + C/N)      (3)
Important factors in this analysis are N, W1, and F. Pairs of F and N that result in zero profit form the curve of break-even points. This curve divides the area of (F, N) combinations into two parts: the area of profitable combinations and its complement. Figure 9.4 in section 9.4 shows this curve for a specific expert system, Expattax. Figure 9.3 shows the total profit as a function of F in the case of Expattax.
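For readers who wish to experiment with formulas (1)-(3), the following small Python sketch (our illustration, not part of the original analysis; the function names are our own) computes the savings per advice and the break-even fraction F:

# Illustrative sketch of formulas (1)-(3); variable names follow the text.
def savings_per_task(T1, W1, T2, W2, C, N, F):
    """Cost savings per advice: old cost (1) minus new cost (2), i.e. (3)."""
    old_cost = T1 * W1                        # formula (1)
    new_cost = F * T1 * W1 + T2 * W2 + C / N  # formula (2)
    return old_cost - new_cost

def break_even_F(T1, W1, T2, W2, C, N):
    """The fraction F of E1's time at which the savings drop to zero."""
    return 1 - (T2 * W2 + C / N) / (T1 * W1)

# With the Expattax figures of section 9.4:
print(savings_per_task(T1=8, W1=125, T2=2, W2=65, C=34750, N=120, F=1/8))
# about 455.42 dollars per advice
print(break_even_F(T1=8, W1=125, T2=2, W2=65, C=34750, N=120))
# about 0.58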
9.3 System Development and Project Management

The development of ES is similar to the development of standard information systems. However, there are two main differences: most ES are developed using a prototyping approach, and the domain expert plays an important role in their
development. In conventional software development projects there usually is no such expert. Four project phases can be distinguished: identification, conceptualization, formalization and implementation, and testing and validation (Jackson, 1986; see also Chapter 10). A prototyping approach is practically feasible because high-level development tools like ES shells are available. In a standardized approach these phases follow sequentially; reports at the end of each period mark the starting point of the next. In a linear development process one tries to minimize feedback to preceding phases. In rapid prototyping, the development process consists of a large number of cycles in which all of the phases previously mentioned are present.

Figure 9.1: System Development Cycle (successive cycles 1-6, each covering identification, conceptualization, formalization, implementation, and validation, plotted over the duration of the project)
These cycles reflect the incremental approach of ES development. Different versions of the system are validated by the user and the expert. As the project evolves, the cycles tend to stretch, because the experience of the project team is growing and less feedback is required. When a new version or part of the system becomes available, the expert evaluates the reasoning capabilities of the system and gives feedback to the knowledge engineer. It is important to provide high-level tools for communication between the expert and the knowledge engineer. In this way, the knowledge engineer and the expert can communicate at the conceptual level, and the expert is not burdened with unnecessary details of implementation. For tax experts, directed reasoning graphs and AND/OR trees turned out to be convenient tools. There is no way to keep the knowledge engineer isolated from the knowledge domain. In many cases the knowledge engineer starts arguing with the expert, because experts often cannot describe their expertise consistently. This may impede the progress of the project. One way to avoid this is to share a lexicon in
which the most important concepts of the field are defined. This remedy can avoid many ambiguities. Another way is for the project leader to manage the interaction between the knowledge engineer and the expert. Project management is an important factor in the development of ES. Experts have the scarce knowledge and skills that are valuable to the organization, and they are usually high-profile people within it. On the one hand, these factors contribute positively to the decision process of building an ES (Waterman, 1986), but on the other they tend to influence the development process negatively. Consider the project organization depicted in Figure 9.2a, in which the project leader and the expert operate on the same level. This situation is quite natural in an organizational context as described above, and it is relatively independent of the organizational structure as a whole. However, in this type of project organizational conflicts may arise, because the knowledge engineer receives contradictory information from the project leader and the expert, and because the role of the expert in the project is uncontrollable. Although this project organization constitutes a medium where conflicts naturally arise, there are no instruments within easy reach to resolve them. An alternative approach is the organizational structure of Figure 9.2b, where these problems are avoided. This situation is optimal from a project management point of view, but its organizational relations may obstruct smooth working relationships and create friction. This puts high demands on the capacity of the project leader. Still, we believe that situation b is preferable to a.
9.4 Case Study

Fiscal legislation consists of a large quantity of rules, which can be divided into two parts: jurisprudence and heuristics. The fiscal expert tries to describe the problem of a client in judicial language. Problem solving consists of two phases: problem description and problem solving. These phases can be executed by different experts (Shpilberg et al., 1986). The rules of fiscal legislation are complex and sometimes seem contradictory to novices; only skilled experts can tackle such seemingly contradictory situations. The output of a fiscal expert is an advice, together with the impact of implementing that advice. Expattax is an ES that deals with problems concerning expatriate tax payers in the Netherlands. These problems are scattered over a wide range of fiscal fields; the cases that occur frequently can be handled by Expattax. Expattax is a rule-based system implemented in the shell Xi Plus. The core system consists of four knowledge bases:
1) Resident,
2) Incomtax,
3) Estattax,
4) Securtax.

Figure 9.2: a: Conflict-Arising Type of Project Organization; b: Preferred Project Organization (the parties involved are the expert (E1), the knowledge engineer, and the user (E2))
These separate modules contain isolated chunks of knowledge.
Cost/Benefit Analysis Applied to Expattax

The application of the cost/benefit analysis of section 9.2 to Expattax is straightforward. W1 = $125.00/hour and T1 = 8 hours, so the cost of an advice without Expattax is $1,000. W2 = $65/hour; T2 = 2 hours; F = (time of E1 with Expattax)/(time of E1 without Expattax), estimated at 1/8. The economic lifetime of Expattax is set at 3 years. The number of advices per year is 40, so N = 120. The cost C of Expattax consists of:

hardware                          $  7,500
software                          $  5,000
consultancy of E1 (100 * $125)    $ 12,500
knowledge engineer                $  6,750
maintenance (3 * $1,000)          $  3,000
                                  ---------
Total cost C                      $ 34,750
The cost of hardware consists of a personal computer and a printer. The cost of software is the price of the development tool Xi Plus. The consultancy of E1 is needed during the knowledge acquisition phase. The estimated cost of maintenance is based on $1,500 for the knowledge engineer and $1,500 for E1 consultation. The total development cost of Expattax is relatively low: the system is of moderate size, and the knowledge engineer was a graduate student who could do the job at a relatively low salary. In the calculation above, no costs for training are taken into account, because Expattax serves as a general training tool for experts entering the company. In general, if the introduction of the system requires extra training, these costs should be incorporated. It should be noted that empirical formulas exist to estimate the expenditure on an ES once the size of the system is known. With the current technology, the total duration of the project in days is the number of rules divided by a factor between one and five (in the case of Expattax, 500 rules in 6 months). We now apply the results of section 9.2 to this particular case. The cost of an advice with the ES is 0.125 * 1000 + 2 * 65 + 34,750/120 = $544.60. So the savings are 1000 - 544.60 = $455.40 per advice, and the total savings are N * 455.40 = $54,648 over three years. Important factors are F and TA, the number of advices per year. Usually TA is known to the company, but F has to be estimated. Figure 9.3 shows the relation between the fraction F and the profits per advice with TA kept fixed, TA = N/3 = 40. In Figure 9.4 the curve of break-even points divides the area of (TA, F) pairs into two parts: the area of profitable combinations and its complement.
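As a quick cross-check of Figure 9.4 (our arithmetic, derived from formula (3), not stated in the original text): setting the savings per advice to zero with the figures above gives the break-even fraction

F = 1 - (T2 * W2 + C/N) / (T1 * W1) = 1 - (130 + 289.58) / 1000 ≈ 0.58

so at 40 advices per year, Expattax remains profitable as long as E1 spends less than about 58 percent of his former time per advice; the estimated F of 1/8 lies well inside the profitable region.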
9.5 Conclusions

The domain of fiscal legislation is particularly suitable for ES. In each individual case, a cost/benefit analysis can be carried out to estimate the potential operational benefits; this method can be used to select the most promising projects. To complete the project successfully, an appropriate project structure has to be chosen. Critical success factors are project management, knowledge acquisition, and knowledge representation. In assessing ES, one should take into account both operational and strategic aspects. The qualitative (strategic) benefits described in section 9.2 were all present in the case at hand: the knowledge is geographically distributed to several sites, and the system is used as a learning tool for inexperienced fiscal "experts".
Figure 9.3: Relation Between F and Profits per Advice with Total Advices TA = 40

Figure 9.4: Curve of Break-Even Points with Profit = 0 (TA per year plotted against F, for F between 0.1 and 0.7)
Chapter 10
An Overview of Expert System Principles
Linda van der Gaag and Peter Lucas
10.1 Introduction

Knowledge-based systems are information systems that use some symbolic representation of human knowledge, usually in a way resembling human reasoning. Among knowledge-based systems, Expert Systems (ES) have received the most attention in recent years. ES are capable of offering solutions to specific problems in a given domain, both in a way and at a level comparable to that of human domain experts. Examples of such domains include medical diagnosis, financial advice, and product design. Most present-generation ES are restricted to narrow problem domains. But even in highly restricted domains, ES usually need large amounts of knowledge to perform comparably to human experts. Building expert systems for specific application domains is not trivial and demands great skill and effort from the builder of the system and from the participating domain experts. The art of developing an expert system has even become a separate subject of study known as knowledge engineering. The knowledge for an ES is obtained by consulting various knowledge sources, such as human experts, text books, and databases. The process of collecting and structuring knowledge in a problem domain is called knowledge acquisition.
Origin and Evolution

Artificial Intelligence grew from the efforts of a few computer scientists in the early 50s to use computers for applications other than "number crunching." One of the first areas of attention was problem solving. Researchers such as Herbert Simon and Allen Newell focused on developing programs with a general capability for solving different types of problems. Newell and Simon's General
Problem Solver (GPS) is the best known achievement of that period (Newell, 1963; Ernst, 1969). GPS represents problems in terms of an initial state, a goal (or final) state, and a set of transitions that transform states into other ones. Given this representation, GPS generates a sequence of transitions which transforms the initial state into the desired final state. For example, consider the problem of planning a trip to another city. The traveller finds himself at his home town (the initial state), trying to get to another city (the final state). Solving the problem means reducing the distance between the traveller's present abode and his destination step by step. The first step might involve taking a taxi, the next step taking a plane, etc. Unfortunately, GPS was not very successful. Representing non-trivial problems in terms that GPS could understand proved to be no easy task; also, the system turned out to be rather inefficient in operation. Since it was built as a general problem solver, GPS could not exploit specific knowledge pertaining to the problem at hand when calculating a transition from one state to the next: at each step, GPS had to examine all possible transitions, so that the number of alternatives grew exponentially in the number of steps. However, GPS did provide important insights. Edward Feigenbaum (a student of Newell and Simon) recognized that people perform certain tasks well not by applying some general problem-solving capability, but by using their specific knowledge about a particular domain; as he put it, "In the knowledge lies the power." This insight shifted attention toward specialized knowledge-based systems. Many problems require expert skill and knowledge for their solution. The knowledge of a domain expert is usually not laid down in clear definitions or unambiguous algorithms; it is captured in facts and rules of thumb acquired through years of experience, called heuristics. The success of ES is mainly due to their capability to represent such heuristic knowledge, and to make it accessible to computers. The shift of attention from general problem solvers to ES that use highly specialized domain knowledge to guide the reasoning process is generally viewed as a major breakthrough in AI research.
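The state-transition view is easy to make concrete. The following Python sketch (our toy illustration, not GPS itself, which used a more sophisticated means-ends analysis; the state names are invented) casts the trip-planning example as an initial state, a final state, and a set of transitions, and finds a solution by uninformed breadth-first search, the kind of blind enumeration whose exponential growth made GPS inefficient:

# Illustrative only: a problem as (initial state, goal state, transitions),
# solved by uninformed breadth-first search.
from collections import deque

transitions = {  # state -> states reachable in one step (a hypothetical trip)
    "home": ["taxi-stand", "bus-stop"],
    "taxi-stand": ["airport"],
    "bus-stop": ["train-station"],
    "airport": ["destination-city"],
    "train-station": ["destination-city"],
}

def solve(initial, goal):
    """Return a sequence of states from initial to goal, if one exists."""
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in transitions.get(path[-1], []):
            if nxt not in visited:      # without pruning, the number of
                visited.add(nxt)        # alternatives grows exponentially
                frontier.append(path + [nxt])
    return None

print(solve("home", "destination-city"))
# ['home', 'taxi-stand', 'airport', 'destination-city']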
Examples of Expert Systems

The first ES were developed as early as the late 60s, but large-scale research did not begin until the 70s. The early ES were mostly concerned with medical diagnosis (Clancey, 1984). The best-known ES of that period is MYCIN (Shortliffe, 1976; Buchanan, 1984); its development took place at Stanford University. The MYCIN system assists physicians in diagnosing and treating certain infectious diseases, meningitis and bacterial septicemia in particular. When a patient shows the signs of such a disease, usually a culture of blood and urine is made in order to determine which bacterium species caused the
infection. Usually, it takes 24 to 48 hours before the laboratory results are known. Often, however, the physician cannot wait that long before starting treatment, since otherwise the disease will progress and may actually cause the death of the patient. MYCIN gives an interim indication of the organisms most probably responsible for the patient's disease on the basis of the (possibly incomplete and inexact) patient data available to the system. Given this indication, MYCIN advises on the administration of appropriate drugs to control the infection. MYCIN clearly left its mark on ES development; even today, MYCIN is a source of inspiration for ES researchers. Another well-known expert system is XCON (Kraft, 1984). This system configures VAX, PDP11, and microVAX computer systems from Digital Equipment Corporation (DEC). DEC offers the customer a wide choice of components when purchasing computer equipment, so that each client can be provided with a custom-made system. XCON analyses the customer's order to detect possible flaws, such as missing components, and calculates feasible configurations. Here, the problem is not so much that the information is incomplete or inexact, but that it is subject to rapid change. DEC contracted John McDermott from Carnegie-Mellon University in the late 70s to develop XCON; the system became fully operational in 1981. At present, XCON is supplemented with XSEL, a system that assists DEC sales agents in drawing up orders.
10.2 Expert System Architecture

In the early years, ES were written in a high-level programming language, usually in LISP. Using such a language demands disproportionate attention to implementational aspects of the system unrelated to the problem domain. Moreover, the expert knowledge of the domain and the algorithms for applying this knowledge become densely interwoven. ES built this way resist adaptation to changing views about the domain. Expert knowledge, however, is dynamic: knowledge and experience are continuously changing, requiring modifications of the corresponding ES. Attempts to solve this problem led to the view that the domain knowledge and the algorithms for applying this knowledge should be separated explicitly. This principle constitutes the paradigm of today's ES design:

ES = knowledge + inference
Accordingly, today's ES typically have two basic components: • A knowledge base that captures the domain-specific knowledge;
• An inference engine that consists of algorithms for manipulating the knowledge represented in the knowledge base.

Modern ES are rarely written directly in a high-level programming language. Instead, they are built in a special environment, called an expert system shell. A popular example of such an environment is EMYCIN (Essential MYCIN), which emerged from MYCIN by stripping it of its knowledge (Melle, 1980; Melle, 1981). Other, more recent, examples include ESE, Nexpert, M1,1 and Personal Consultant. ES shells generally are easy to use, and do not require professional programming skills. Besides shells, more versatile tools for building ES have become available. These tools are sometimes called expert system builder tools. Like shells, ES builder tools enforce the separation of knowledge and inference, but they require professional programming skills. Their advantage over ES shells is that they offer greater flexibility and can cover a broader range of applications. Examples include ART (Artificial Reasoning Tool), KEE (Knowledge Engineering Environment), and Knowledge Craft.2 Each ES shell or builder tool offers one or more formalisms for encoding the domain knowledge in the knowledge base. Furthermore, it provides a corresponding inference engine that is capable of manipulating knowledge represented in that formalism. The developer of an ES is thereby shielded from the system's algorithmic aspects; only the domain-specific knowledge has to be provided, expressed in the formalism. Separating domain knowledge from the inference engine has important advantages. The knowledge base can be developed and refined stepwise. Errors and inadequacies can easily be remedied without major changes to the program. Furthermore, a given knowledge base can be substituted by another knowledge base about a different domain, yielding a different ES. The inference engine of a typical ES shell is part of a so-called consultation system. The consultation system further provides a user interface for interaction with the user; this interaction generally takes the form of a question and answer session. Such a session is called a consultation of the system. Furthermore, most ES shells offer a variety of facilities for investigating the content of the knowledge base and the reasoning behavior of the system. The explanation facilities can explain how certain conclusions are arrived at, why a specific question is asked, or why alternative conclusions have not been drawn. By using the trace facilities the reasoning behavior of the system can be followed during a consultation one step at a time. The components of a consultation system are shown in Figure 10.1, which depicts the more or less characteristic architecture of a system built using an ES shell.
1 M1 was used for the ES of Chapter 3.
2 Harmon (1985) offers an overview of present-day shells and ES builder tools.
Figure 10.1: Global Architecture of an Expert System
10.3 Knowledge Representation and Inference

Part of the work of the knowledge engineer concerns the selection of a suitable knowledge-representation formalism for presenting the domain knowledge to the computer in an encoded form. Four such formalisms are usually distinguished:

(1) logic,
(2) production rules,
(3) semantic nets, and
(4) frames.
Specific methods for handling knowledge are associated with each of these formalisms. The early ES shells (particularly the derivatives of the EMYCIN system) mostly used production rules. Although production rules have proven to be suitable for several types of applications, they are not considered the ultimate solution to the difficult problem of knowledge representation. At present there is no consensus on the best knowledge-representation formalism. Many recently introduced ES builder tools, therefore, provide a combination of
representation formalisms. The following sections briefly review each formalism.3

3 The reader is referred to Lucas (1990) for a more thorough discussion of the formalisms and their associated inference methods.
Logic

Logic is one of the earliest formalisms for knowledge representation: its roots can be traced back to the ancient Greeks. Today, logic has a well-defined syntax and semantics, and offers various inference rules for manipulating logical formulas (Enderton, 1972; Kowalski, 1979). Although symbolic logic has rarely been used for building ES, the formalism is very useful for understanding the essentials of the other formalisms. Its basic concepts are best illustrated by propositional logic, which, however, is not expressive enough for most applications. More expressive is first-order predicate logic, on which the programming language PROLOG has been based (Bratko, 1986). However, we restrict the discussion to propositional logic in order to avoid unnecessary mathematical detail. Propositional logic allows one to express statements that are either true or false. Examples of such statements are:

"The profit of the firm has increased"
"10 is greater than 90"

Statements like these are called propositions. Let us denote the first proposition by the letter P and the second one by Q. Simple propositions such as P and Q are called atomic propositions, or atoms for short. By means of so-called logical connectives, atoms can be combined to yield composite propositions. In the language of propositional logic, we have the following five connectives at our disposal:

negation:       ¬   (not)
conjunction:    ∧   (and)
disjunction:    ∨   (or)
implication:    →   (if ... then)
bi-implication: ↔   (if and only if)
Example. When we assume that the propositions P and R have the following meaning:

P = "The profit of the firm has increased"
R = "The stock holders are happy"

then the composite proposition
¬P ∨ R
has the meaning: "Either the profit of the firm has not increased or the stock holders are happy"
Not all formulas that consist of atoms and connectives are (composite) propositions. It is possible to construct formulas from the atoms and the connectives to which no suitable meaning can be ascribed. For example, the formula P¬R is syntactically incorrect. Syntactically correct formulas that do represent propositions are called well-formed formulas. The notion of well-formedness concerns only the syntax of formulas in logic; it provides no information about the truth value of well-formed formulas, that is, it tells us nothing with respect to the meaning of formulas in propositional logic. The meaning of formulas in propositional logic is defined through a function that assigns to each proposition a truth value (either true or false). For example, the truth function may assign the value false to the proposition P of the previous example. Such a function is usually called an interpretation function, or interpretation for short, if it satisfies certain conditions. In particular, it has to meet the properties of the connectives stated in Table 10.1. The first two columns in this table list all possible combinations of truth value assignments for the atomic propositions F and G; the remaining columns define the meanings of the respective connectives. By repeatedly using the properties listed in Table 10.1, it is possible to express the truth value of an arbitrary formula in terms of the truth values of its atomic constituents. Example. Table 10.2 lists all possible combinations of truth values for the atoms in the formula P → (¬Q ∧ R). For each combination, the resulting truth value for this formula is determined; it is shown in the last column of the table. A table like this is called a truth table.
If an interpretation assigns to a given formula F the truth value true, then the interpretation is said to satisfy the formula. Another important notion in logic is that of logical consequence. We say that a formula F is a logical consequence of a set of formulas S if any interpretation satisfying S satisfies the formula F as well. Example. Consider the following set of propositions: S = {P, P → R}, where P again stands for "the profit of the firm has increased" and R for "the stock holders are happy". Now, in order to satisfy this set of propositions S, an interpretation has to assign the truth value true to the proposition P. Furthermore, it follows from Table 10.1 that the interpretation has to assign to R the truth value true as well: if P is true, then P → R can only be true if R is also true. Therefore, one can conclude that R is a logical consequence of the set S. In other words, it follows from S that "the stock holders are happy". Note that this coincides with our intuition.
Table 10.1: The Meaning of the Connectives

F       G       ¬F      F ∧ G   F ∨ G   F → G   F ↔ G
true    true    false   true    true    true    true
true    false   false   false   true    false   false
false   true    true    false   true    true    false
false   false   true    false   false   true    true
Table 10.2: Truth Table for P → (¬Q ∧ R)

P       Q       R       ¬Q      ¬Q ∧ R  P → (¬Q ∧ R)
true    true    true    false   false   false
true    true    false   false   false   false
true    false   true    true    true    true
true    false   false   true    false   false
false   true    true    false   false   true
false   true    false   false   false   true
false   false   true    true    true    true
false   false   false   true    false   true
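These semantic notions are mechanical enough to program directly. The following Python sketch (ours, not part of the chapter) evaluates propositional formulas under an interpretation and tests logical consequence by enumerating all interpretations, which is exactly what the truth tables above do by hand:

# Formulas as nested tuples: ("atom", "P"), ("not", f), ("and", f, g),
# ("or", f, g), ("implies", f, g), ("iff", f, g).
from itertools import product

def truth_value(formula, interp):
    """Evaluate a formula under an interpretation (a dict atom -> bool)."""
    op = formula[0]
    if op == "atom":
        return interp[formula[1]]
    if op == "not":
        return not truth_value(formula[1], interp)
    f, g = truth_value(formula[1], interp), truth_value(formula[2], interp)
    if op == "and":
        return f and g
    if op == "or":
        return f or g
    if op == "implies":
        return (not f) or g
    return f == g  # iff

def interpretations(atoms):
    """Yield every assignment of truth values to the given atoms."""
    for vals in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, vals))

def is_consequence(premises, conclusion, atoms):
    """F is a logical consequence of S iff every interpretation that
    satisfies all formulas in S also satisfies F."""
    return all(truth_value(conclusion, i)
               for i in interpretations(atoms)
               if all(truth_value(p, i) for p in premises))

P, R = ("atom", "P"), ("atom", "R")
print(is_consequence([P, ("implies", P, R)], R, ["P", "R"]))  # True

The final line confirms the example above: R is a logical consequence of {P, P → R}.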
Assigning a meaning to logical formulas (as we have done above) is usually called the declarative semantics of logic. It offers a means for investigating whether or not a given formula is a logical consequence of a set of formulas. But one can answer this question without examining the meaning of the formulas concerned, by applying so-called inference rules. Unlike truth tables, inference rules are purely syntactic operations on formulas, i.e., operations that modify only the form of the elements of a given set of formulas. Inference rules either add, replace, or remove formulas. In general, an inference rule is given as a schema in which meta-variables occur that may be substituted by arbitrary formulas. An example of such an inference rule is shown below:

A, A → B
--------
    B
The formulas above the line are called the premises, and the formula below the line is called the conclusion. This rule is known as modus ponens. The following schema provides another example of an inference rule:

A, B
-----
A ∧ B
Repeated applications of inference rules give rise to what is called a derivation or deduction. Example. Modus ponens can be applied to draw the conclusion S from the two premises P ∧ (Q ∨ R) and P ∧ (Q ∨ R) → S. It is said that there exists a derivation of the formula S from the set of formulas {P ∧ (Q ∨ R), P ∧ (Q ∨ R) → S}. This is denoted by:

{P ∧ (Q ∨ R), P ∧ (Q ∨ R) → S} ⊢ S
The symbol "⊢" is known as the turnstile. Note that in applying the inference rule to derive S there was no need to investigate the meanings of the formulas. Example. Consider the set of formulas {P, Q, P ∧ Q → S}. If the inference rule for conjunction shown above is applied to the formulas P and Q, then the formula P ∧ Q is derived; the subsequent application of modus ponens to P ∧ Q and P ∧ Q → S yields S. So, we have:

{P, Q, P ∧ Q → S} ⊢ S
Now that we have introduced inference rules, it is relevant to investigate how the declarative semantics of a particular class of formulas and its procedural semantics, described by means of inference rules, are interrelated. If these two notions are related, we can assign a meaning to formulas which have been derived using inference rules, simply by applying our knowledge of the declarative meaning of the original set of formulas. On the other hand, when starting with the known meaning of a set of formulas, one can derive only formulas that are consistent with that meaning. These two properties are known as soundness and completeness, respectively, of a collection of inference rules. Example. Modus ponens is an example of a sound inference rule. From the given formulas P, standing for "the profit of the firm has increased", and P → R, standing for "if the profit of the firm has increased, then the stock holders are happy", the formula R, that is, "the stock holders are happy", can be derived by applying modus ponens; formally, we have {P, P → R} ⊢ R. On the other hand, if P → R and P are both true under a particular interpretation, then from Table 10.1 we have that R is true under that interpretation as well.
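Because inference rules operate on form alone, they are just as easy to mechanize. A minimal sketch of modus ponens as a syntactic operation (ours; formulas are written in the same nested-tuple notation as in the previous sketch):

# Modus ponens as a purely syntactic operation: from A and A -> B, derive B.
def modus_ponens(formulas):
    """Return the formulas derivable in one step from the given set."""
    derived = set()
    for f in formulas:
        if f[0] == "implies" and f[1] in formulas:
            derived.add(f[2])
    return derived

P, R = ("atom", "P"), ("atom", "R")
print(modus_ponens({P, ("implies", P, R)}))  # {('atom', 'R')}, i.e. R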
The major application of logic in AI is in theorem proving.4 Research in this area deals with proving theorems automatically from a given set of axioms. The axioms and the theorem to be proven are expressed in logical formulas, and a sound and complete collection of logical inference rules is applied to the formulas in order to prove the theorem. After some initial success in the 1950s and early 60s, it became apparent that the inference rules known at that time were not as suitable for application in digital computers as was hoped. Many AI researchers lost interest in applying logic, and shifted their attention towards other formalisms. Although by now theorem proving has grown into an established research area in AI, logic still plays a secondary role in ES building.

4 The reader is referred to Chapter 4 for an example.
Production Rules and Inference

Most present-day ES employ the formalism of production rules for representing knowledge. Such systems are called rule-based ES or production systems. Production rules were introduced by Allen Newell and Herbert Simon, who used them in their theory of human problem solving (Newell, 1972; Newell, 1973). MYCIN is an early demonstration of the suitability of production rules for diagnostic ES. A production system offers a number of formalisms for representing knowledge. The most important of these is the production-rule formalism itself. The entire set of production rules of a production system represents the domain knowledge of the system; it is called the system's rule base. In addition to the production-rule formalism, a production system provides a means for representing factual information, or data. During a consultation of the ES, new facts are constantly being added, removed, or modified as a result of the application of production rules, of data entered by the user, or as a result of querying some database. These facts are stored in a so-called fact set, also known as the global database or working memory. Facts can be represented in a number of ways. One frequently used way is to represent facts by means of values of attributes, which are grouped in objects. An object gives a description of a sub-domain by means of its attributes. The attributes of an object may be either single-valued or multi-valued. Single-valued attributes are used to represent unique facts; multi-valued attributes are used for representing a collection of interrelated facts. Example. Consider the fact set F = {employee.age = 40, employee.sex = female, employee.job = {computer-scientist, teacher}, salary.rate = 50000}. The fact set comprises four facts: three concerning the object employee, and one concerning the object salary. Three attributes of the object employee have been specified: the attributes age and sex are single-valued, and the attribute job is multi-valued. Furthermore, the attribute rate specified for the object salary is single-valued. The definition of the objects and their attributes, plus the value restrictions on attributes, constitutes the object schema of the system. The rule base and the object schema together form the knowledge base of the production system, containing its consultation-independent information. The fact set, which contains consultation-dependent information, is not part of the knowledge base but is kept apart.
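The object-attribute-value scheme is easy to mirror in code. As a minimal sketch (our notation, not that of any particular production system), the fact set of the example can be stored as a mapping from object-attribute pairs to values, with multi-valued attributes holding sets:

# Illustrative o-a-v fact set: single-valued attributes map to one value,
# multi-valued attributes map to a set of values.
fact_set = {
    ("employee", "age"): 40,
    ("employee", "sex"): "female",
    ("employee", "job"): {"computer-scientist", "teacher"},  # multi-valued
    ("salary", "rate"): 50000,
}

# During a consultation, facts are added, modified, or removed:
fact_set[("employee", "job")].add("consultant")   # extend a multi-valued fact
del fact_set[("employee", "age")]                 # remove a fact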
Production rules are, in fact, the formalized version of everyday heuristic rules or rules of thumb. A heuristic rule relates conditions to conclusions as follows: if certain conditions are fulfilled, then certain conclusions may be drawn. Knowledge engineering transforms such heuristic rules into production rules. A production rule, just like a heuristic rule, consists of a number of conditions and conclusions. These comprise the following elements:
Objects (also called contexts), Attributes (also called parameters), Symbolic and numeric constant values, and Predicates and actions.
Predicates and actions operate on tuples consisting of an object, an attribute, and a value; such tuples are usually called object-attribute-value or o-a-v triples. An important part of the translation process from heuristic rules to production rules concerns the identification of the objects, attributes, and constants distinguishable in the heuristic rules. Example. Consider one of the rules from Chapter 2:

if the complexity of the environment is simple,
   and the environment is subject to dynamic changes
then conclude that the structure of the organization is mixed functional.

In this heuristic rule, we can identify the objects environment and organization. For the object environment, the attributes complexity and change have been specified. The object organization has an attribute named structure. The constant values mentioned in the rule are simple, dynamic, and mixed-functional. The rule may now be formally represented by the following production rule:

if same(environment, complexity, simple) and
   same(environment, change, dynamic)
then add(organization, structure, mixed-functional)
fi

The production rule from the previous example consists of two conditions, delimited between the key-words if and then, and one conclusion following the key-word then. Both conditions in the rule specify the predicate same.
A predicate generally takes three arguments. The first argument is an object, the second an attribute, and the third a (symbolic) constant. The predicate same investigates whether or not the combination of its arguments (object, attribute, value) is feasible. An action specified in a conclusion of a rule also takes three arguments. In the production rule shown above, the conclusion specifies the action add, which, when executed, adds new information in the form of an object-attribute-value triple to the fact set. Roughly speaking, an inference method for a production system selects and subsequently applies production rules from the rule base; in applying the rules, it executes the actions specified in their conclusions. Execution of an action may cause facts to be added to, modified in, or deleted from the fact set. There are two basic inference methods, offering completely different reasoning strategies. We introduce them informally with the help of a simplified example, in which we abstract from the syntactical structure of conditions, conclusions, and facts. Now, consider Table 10.3.

• Top-down inference starts with the statement of one or more goals. In the example, we have just the single goal g. A goal may match the conclusion of one or more production rules in the rule base. All production rules matching the goal are selected for application. In the present case, the only rule selected is R1. Each one of the selected production rules is subsequently applied by considering the conditions of the rule as the new subgoals to be achieved. If there are facts in the fact set which match these new subgoals, then these subgoals are taken as having been achieved; subgoals for which no matching facts can be found are matched against the conclusions of the production rules from the rule base. Again, matching production rules are selected for application. In our example, we derive from the rule R1 the new subgoal b, which in turn causes the selection of the production rule R3. This process is repeated recursively. Note that in top-down inference, production rules are applied in a backward manner. If all the subgoals have been achieved, we say that the rule has succeeded. If, on the other hand, a condition is encountered which is not fulfilled, then the rule is said to have failed. When a production rule succeeds, its actions are executed, possibly changing the fact set; if a rule has failed, then its conclusions are just ignored. Since the subgoal a in the production rule R3 matches with the fact a, the rule's condition is fulfilled; its action is executed, yielding the new fact b. This new fact in turn fulfills the condition b of rule R1, which led to the selection of R3 in the first place. The inference process is terminated once the initially specified goal has been achieved. Note that only production rules relevant for achieving the initial goal are applied. This explains why rule R2 was not used.

• Bottom-up inference starts with a fact set, in our example {a}. The facts in the fact set are matched with the conditions of the production rules from the rule base. If for a specific production rule all conditions are fulfilled, then it is selected for application.
Table 10.3: Production System Before and After Execution

State     Component   Top-down Inference        Bottom-up Inference
Initial   Goal        g                         -
          Facts       {a}                       {a}
          Rules       R1: if b then g fi        R1: if b then g fi
                      R2: if g then c fi        R2: if g then c fi
                      R3: if a then b fi        R3: if a then b fi
Final     Facts'      {a,b,g}                   {a,b,c,g}
selected for application. The rule is applied by executing the actions mentioned in its conclusion. So, in our example the rule R3 will be applied first. The application of the selected production rules is likely to result in changes in the fact set, so that other production rules may become applicable in the next round. In the present case, application of R3 leads to the new fact set {a,b}. As a consequence, R1 becomes applicable. The fact g, added to the fact set as a result of executing the action of rule R1, results in the subsequent application of R2. Thus, the final fact set is equal to {a,b,c,g}. The inference process is terminated as soon as all applicable production rules have been processed. Note the difference in the resulting fact sets in Table 10.3 obtained from applying top-down and bottom-up inference, respectively. Top-down inference is called backward chaining if the selected production rules together with their conditions and conclusions are evaluated in the order in which they have been specified in the rule base. Under a similar condition, bottom-up inference is called forward chaining. As a consequence of their different behavior, the two forms of inference are suitable for building different types of ES. Top-down inference is often used in inference engines of diagnostic ES in which the inference process is controlled by a specific goal and a small amount of data. Bottom-up inference is most suitable for applications with a vast amount of data, and without preset goals.
Example. Consider the object schema depicted in Figure 10.2, which shows two objects, employee and salary, with their associated attributes. Part of the information presented in this object schema is employed in the following tiny rule base. It deals with the selection of the salary scale for personnel, based on items such as position, salary rate, etc. We take the attribute scale of the object salary as the goal of the consultation. All attributes, with the exception of the attribute skills, are assumed to be single-valued. Now, consider the following rule base:
[Figure 10.2: An Object Schema — the objects employee (with attributes age, sex, position, category, and skills) and salary (with attributes scale and rate)]

R1: if same(employee,position,responsible) and
       same(employee,skills,preponderance) and
       notsame(employee,skills,obedience)
    then add(employee,category,management-staff)
fi
R2: if same(employee,position,responsible) and
       same(employee,skills,preponderance) and
       same(employee,skills,obedience)
    then add(employee,category,supervisory-staff)
fi
R3: if notsame(employee,position,responsible) and
       notsame(employee,skills,preponderance) and
       same(employee,skills,obedience)
    then add(employee,category,minor-official)
fi
R4: if same(employee,category,management-staff) and
       greaterthan(salary,rate,100000)
    then add(salary,scale,'13-16')
fi
R5: if same(employee,category,supervisory-staff) and
       lessequal(salary,rate,100000) and
       greaterthan(salary,rate,50000)
    then add(salary,scale,'9-14')
fi
R6: if same(employee,category,minor-official) and
       lessequal(salary,rate,50000) and
       greaterthan(salary,rate,20000)
    then add(salary,scale,'6-10')
fi
The backward-chaining algorithm starts with the selection of the three production rules R4, R5, and R6, since the goal attribute scale of the object salary appears in their respective conclusions. R4 will be the first one to be applied. Its first condition concerns the attribute category of the object employee. The rule base contains three production rules, R1, R2, and R3, concluding on this attribute; hence, these rules are selected. Of these, R1 will be applied first. Since there are no production rules concluding on the attribute position occurring in the first condition of rule R1, the user is asked to enter values for this attribute. We suppose that the user answers by entering the fact employee.position = responsible. It follows that the first condition of R1 succeeds, and the second condition will be evaluated next. Since there are no facts in the fact set about the attribute skills, nor production rules concluding on it, the user is asked once more to provide additional information. Suppose that the user enters the fact employee.skills = {preponderance, obedience}; then the second condition of R1 succeeds as well. Its third condition subsequently tests whether it is true that the constant obedience does not occur in the set of values of the attribute skills. This test evidently fails, so that the entire rule R1 fails. Next, rule R2 will be investigated. All three of its conditions, each concerning facts already present in the fact set, will succeed. Execution of the action in the conclusion of the rule results in the addition of the fact employee.category = supervisory-staff to the fact set. The third rule, R3, subsequently fails due to the failure of its first condition. At this stage, the intermediate fact set F looks as follows: F = {employee.position = responsible, employee.skills = {preponderance, obedience}, employee.category = supervisory-staff}. We recall that the rules R1, R2, and R3 were selected during the evaluation of the first condition of R4 for deriving a fact about the attribute category. The fact that has been derived, however, causes R4 to fail. Recall that for the initial goal R5 and R6 were selected as well. Hence, the next rule to be evaluated is R5. Its first condition succeeds, and subsequently its second condition is evaluated. This leads to prompting the user to enter values for the attribute rate of the object salary. Suppose that the user enters salary.rate = 75000; then both the second and third condition are met. Execution of the action in the conclusion causes the fact salary.scale = '9-14' to be added to the fact set. To conclude, rule R6 fails due to the failure of its first condition. The final fact set F' therefore equals: F' = F ∪ {salary.rate = 75000, salary.scale = '9-14'}.
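To make this control flow concrete, the following sketch re-implements the consultation in Python (a language chosen here purely for readability; ES shells of the period were typically written in LISP or PROLOG). All names are ours, and the ANSWERS table merely simulates the user's replies; a real shell would ask interactively and would select rules and handle multi-valued attributes far more carefully.

# A minimal backward-chaining sketch over o-a-v triples; illustrative only.
# ANSWERS simulates the user's replies for attributes no rule concludes on.
ANSWERS = {
    ("employee", "position"): {"responsible"},
    ("employee", "skills"):   {"preponderance", "obedience"},
    ("salary",   "rate"):     {75000},
}

RULES = [  # (name, conditions, conclusion); a condition is (predicate, object, attribute, constant)
    ("R1", [("same", "employee", "position", "responsible"),
            ("same", "employee", "skills", "preponderance"),
            ("notsame", "employee", "skills", "obedience")],
           ("employee", "category", "management-staff")),
    ("R2", [("same", "employee", "position", "responsible"),
            ("same", "employee", "skills", "preponderance"),
            ("same", "employee", "skills", "obedience")],
           ("employee", "category", "supervisory-staff")),
    ("R3", [("notsame", "employee", "position", "responsible"),
            ("notsame", "employee", "skills", "preponderance"),
            ("same", "employee", "skills", "obedience")],
           ("employee", "category", "minor-official")),
    ("R4", [("same", "employee", "category", "management-staff"),
            ("greaterthan", "salary", "rate", 100000)],
           ("salary", "scale", "13-16")),
    ("R5", [("same", "employee", "category", "supervisory-staff"),
            ("lessequal", "salary", "rate", 100000),
            ("greaterthan", "salary", "rate", 50000)],
           ("salary", "scale", "9-14")),
    ("R6", [("same", "employee", "category", "minor-official"),
            ("lessequal", "salary", "rate", 50000),
            ("greaterthan", "salary", "rate", 20000)],
           ("salary", "scale", "6-10")),
]

def test(predicate, values, constant):
    # Evaluate a predicate against the set of values established for an attribute.
    if predicate == "same":        return constant in values
    if predicate == "notsame":     return constant not in values
    if predicate == "greaterthan": return any(v > constant for v in values)
    if predicate == "lessequal":   return any(v <= constant for v in values)

def evaluate(condition, facts):
    predicate, obj, attr, constant = condition
    trace(obj, attr, facts)                       # first establish values for (obj, attr)
    return test(predicate, facts.get((obj, attr), set()), constant)

def trace(obj, attr, facts):
    # Backward chaining: try every rule concluding on (obj, attr);
    # if no values have been derived, fall back on asking the "user".
    if (obj, attr) in facts:
        return
    for name, conditions, (cobj, cattr, cval) in RULES:
        if (cobj, cattr) == (obj, attr) and all(evaluate(c, facts) for c in conditions):
            facts.setdefault((cobj, cattr), set()).add(cval)   # the rule's add action
    if (obj, attr) not in facts and (obj, attr) in ANSWERS:
        facts[(obj, attr)] = ANSWERS[(obj, attr)]

facts = {}
trace("salary", "scale", facts)
print(facts[("salary", "scale")])                 # -> {'9-14'}

Running this sketch reproduces the consultation above: R1 fails on its notsame condition, R2 establishes the category supervisory-staff, R4 and R6 fail, and R5 adds the scale '9-14'.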
Semantic nets

Semantic nets have rarely been used as a knowledge-representation formalism in building ES. Nevertheless, we shall briefly discuss some of their characteristics, since the semantic net is often viewed as a precursor of the frame formalism. Semantic nets were introduced in the 1960s. One of their first uses was in representing the meaning of English words in terms of links to other words (Quillian, 1968). A semantic net is usually depicted as a directed graph, consisting of vertices and labeled arcs between vertices. Several disciplines contributed to the original idea of a semantic net; each discipline has brought its own interpretation of the vertices and arcs, and each discipline has adapted the notion of the semantic net to its own purposes. Consequently, there is hardly any consensus as to what semantic nets are, nor is there any consistent terminology. Here, we take each vertex in the graphical representation of a semantic net to represent a concept. The arcs of the graph represent binary relations between concepts. Here is an example of how knowledge is represented in a semantic net.
Example. Consider the following (simplified) information: "A firm consists of employees and stock holders. In a firm, all executives are employees. John is an executive." This information involves five concepts: firm, employee, stock-holder, executive and John. Three different relations are used to express the information about these concepts: the part-of relation, the subset-of relation, and the member-of relation. In the first sentence, the part-of relation expresses that a firm is viewed as consisting of employees and stock holders. The subset-of relation in the second sentence is used to state that all executives are employees. Finally, the member-of relation expresses that John is one of the executives of the firm. Using these concepts and relations, the semantic net in Figure 10.3 depicts the information given above.
In the previous example, we have encountered the subset-of and member-of relations. Together they are usually referred to as is-a links. They reflect the two different ways of interpreting a concept (Brachman, 1983). A concept may denote either an individual object or a class of objects: • To express that a class of objects is a subclass of another class of objects, such as in the statement "an executive is an employee" from the previous example, the subset-of relation is used to represent the "is an" part of the statement. So, it defines a relation between two sets of elements.
[Figure 10.3: Some Information Concerning a Firm in a Semantic Net — vertices for the concepts firm, employee, stock-holder, executive, and John, connected by part-of, subset-of, and member-of arcs]
• To express that an individual object is a member of a certain class of objects, such as in the statement "John is an executive" from the previous example, the member-of relation is used to represent the "is an" part of the statement. So, it defines a relation between an element and a set of elements. The subset-of and member-of relations in a semantic net may be exploited to derive new information from the net, that is, they may be used as the basis for an inference engine. We illustrate the use of these relations in reasoning by means of the following example. Example. Consider the semantic net shown in Figure 10.4. The following two statements, among others, are represented in the net: "An executive is an employee" "A chief executive is an executive"
[Figure 10.4: Inheritance — a semantic net in which employees are part of an organization, executives are a subset of the employees, chief executives are a subset of the executives, and John is a member of the executives]
From these two statements we may derive the statement "A chief executive is an employee" exploiting the meaning of the subset-of relation. Furthermore, the statement "John is an employee" can be derived from the net by exploiting the meanings of both the member-of and subset-of relation. Exploiting the meanings of the member-of and subset-of relations in the manner discussed in the preceding example forms the basis of a reasoning mechanism called (property) inheritance: through these relations, a concept inherits the properties of the concepts "higher" in the net. Example. Consider the semantic net depicted in Figure 10.4 once more. Using property inheritance we can for example derive the fact: "A chief executive is a part of an organization" The semantic net is a natural formalism for representing knowledge hierarchically. However, semantic nets proved to be too rigid for modelling most real-life
domains; yet, they paved the way for the much more flexible frame formalism discussed below.
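To illustrate how a program might exploit is-a links, here is a small sketch in Python (our own encoding; semantic-net systems of the time used richer graph structures). Arcs are stored as (source, label, target) triples; inheritance amounts to following member-of and subset-of arcs upward and collecting the remaining arcs found along the way.

# Toy semantic net matching Figures 10.3 and 10.4 (encoding is illustrative).
ARCS = [
    ("employee", "part-of", "organization"),
    ("stock-holder", "part-of", "organization"),
    ("executive", "subset-of", "employee"),
    ("chief-executive", "subset-of", "executive"),
    ("John", "member-of", "chief-executive"),
]

def ancestors(concept):
    # Follow the is-a links (member-of and subset-of) upward through the net.
    found, frontier = [], [concept]
    while frontier:
        c = frontier.pop()
        for src, label, dst in ARCS:
            if src == c and label in ("member-of", "subset-of"):
                found.append(dst)
                frontier.append(dst)
    return found

def inherited_properties(concept):
    # A concept inherits every non-is-a arc attached to a concept "higher" in the net.
    props = []
    for c in [concept] + ancestors(concept):
        props += [(label, dst) for src, label, dst in ARCS
                  if src == c and label not in ("member-of", "subset-of")]
    return props

print(ancestors("John"))             # ['chief-executive', 'executive', 'employee']
print(inherited_properties("John"))  # [('part-of', 'organization')]

The first call derives "John is an employee" from the member-of and subset-of arcs; the second derives, by property inheritance, that John (like any chief executive) is part of an organization.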
Frames

The notion of frames was introduced in the mid-1970s by Marvin Minsky for exerting semantic control in a pattern recognition application (Minsky, 1975). Since then frames have been used in several knowledge-based systems (Fikes, 1985). Frames provide a formalism for explicitly grouping all knowledge concerning the properties of individual objects or classes of objects. Within a frame, some of the properties are specified as reference information to other, more general frames. This reference information is represented by means of is-a links, which are quite similar to the is-a links in semantic nets. In this way, the knowledge is organized hierarchically in a frame taxonomy. Such a taxonomy is often depicted graphically as a directed, acyclic graph which bears a close resemblance to a semantic net. However, in the graph representation of a frame taxonomy only the is-a links are shown; the knowledge about a specific individual object or a class of objects is represented in the internal structure of the vertices of the graph. Different types of knowledge are distinguished explicitly, so that each can be treated differently in the inference process. The inference method associated with frames is based on inheritance, just as in semantic nets. Two types of frames are distinguished:
• Class frames, or generic frames, which represent knowledge about classes of objects;
• Instance frames, which represent knowledge about individual objects.
Class frames have much in common with the record datatype (as provided, for example, in the PASCAL programming language); instance frames are similar to filled-in record variables. Since there are two types of frames, there are also two types of is-a links for reference information:
• An instance-of link, which is an is-a link between an instance frame and a class frame;
• A superclass link, which is an is-a link between two class frames.
These is-a links are similar in meaning to the member-of and subset-of links of semantic nets. It has been noted before that a set of frames is organized hierarchically in a taxonomy. It is by means of an instance-of or superclass link that a frame indicates its relative position in the taxonomy.
Example. Consider the following frame taxonomy:

class employee is
   superclass nil;
   age : int;
   position : {responsible, subordinate};
   firm = Toy-Ltd
end
class executive is
   superclass employee;
   position = responsible
end
class minor-official is
   superclass employee;
   position = subordinate
end
instance John is
   instance-of executive;
   age = 35
end

This set of frames consists of three class frames with the names employee, executive, and minor-official, and one instance frame with the name John. A class frame has the following components:
• A superclass specification, indicating the position of the frame in the taxonomy. For example, the class frame with the name executive indicates that the class frame with the name employee is more general than the executive class frame.
• A collection of attribute-type specifications, indicating which values an attribute is allowed to take. For example, the specification age : int in the class frame employee indicates that the attribute age may adopt only integer values.
• A collection of attribute-value specifications, indicating the actual values the attributes have adopted initially or during the consultation. For example, the employee frame indicates that all employees are of the same firm, Toy-Ltd, by means of the attribute-value specification firm = Toy-Ltd.
An instance frame is similar to a class frame; however, it contains an instance-of specification instead of a superclass specification, and is not allowed to contain attribute-type specifications. In the example instance John, it is specified that John is 35 years of age. The inference method associated with the frame formalism is called inheritance. As the name suggests, this method is similar to the property inheritance associated with semantic nets: less general frames "inherit" the information from the more general frames. In other words, all information that holds for a particular frame can be determined by tracing the taxonomy from the frame itself to the root of the taxonomy (that is, the most general frame), and collecting the attributes with their associated values that are found in each frame along the
way. This procedure terminates as soon as the information in the root has been processed. Example. Consider the frame taxonomy of the previous example once more. In the instance with the name John the attribute-value pair age = 35 is specified. This instance inherits the attribute-value specification position = responsible from the more general frame executive. From the employee frame, the instance further inherits the attribute-value specification firm = Toy-Ltd.
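The tracing procedure just described is easy to express in code. The sketch below (Python; the dictionaries are our own encoding, not the syntax of any particular tool) collects the attribute-value specifications for an instance by walking the taxonomy to the root, letting a more specific frame shadow a more general one.

# Sketch of frame inheritance over the taxonomy of the example above.
CLASSES = {
    "employee":       {"superclass": None,       "values": {"firm": "Toy-Ltd"}},
    "executive":      {"superclass": "employee", "values": {"position": "responsible"}},
    "minor-official": {"superclass": "employee", "values": {"position": "subordinate"}},
}
INSTANCES = {"John": {"instance-of": "executive", "values": {"age": 35}}}

def all_values(instance):
    # Collect attribute-value pairs from the instance up to the taxonomy root;
    # setdefault ensures a value found lower in the taxonomy is not overridden.
    result = dict(INSTANCES[instance]["values"])
    cls = INSTANCES[instance]["instance-of"]
    while cls is not None:
        for attr, val in CLASSES[cls]["values"].items():
            result.setdefault(attr, val)
        cls = CLASSES[cls]["superclass"]
    return result

print(all_values("John"))
# -> {'age': 35, 'position': 'responsible', 'firm': 'Toy-Ltd'}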
Inheritance is a simple, yet effective means for reasoning about concepts. However, the expressive power of the frame formalism presented so far is rather limited because of the restriction to using only attribute-value specifications. Many problems in real life defy representation by means of this formalism. To remedy this shortcoming, many extensions of the basic formalism have been proposed. One extension - now included in many ES builder tools - allows procedures, called demons, to be attached to attributes. Demons provide specific information about an attribute in procedural form; if, while reasoning with the frame taxonomy, an attribute having a demon attached to it is investigated, then the demon is executed in order to compute values for that specific attribute. Providing demons is called procedural attachment. The notion of procedural attachment is not restricted to frames; many production-rule formalisms offer similar possibilities.
Example. Consider the following class frame:

class employee is
   superclass person;
   salary : real;
   tax-rate : demon compute-rate(salary,marital-status)
end

The class frame employee has two attribute-type specifications: one concerns an ordinary real-valued attribute salary, and the other one specifies a demon called tax-rate. When a value for tax-rate is needed during a consultation of the knowledge base, the procedure compute-rate attached to tax-rate is called, and the current values of the attributes salary and marital-status are passed as arguments to that call. Note that the attribute marital-status has not been specified in the class frame employee, so it should have been specified in one of its superclasses. The compute-rate procedure is assumed to be written in some programming language such as PROLOG or LISP. Now, suppose that we have inserted the following instance into the frame taxonomy:

instance David is
   instance-of employee;
   age = 50;
   salary = 50000;
   marital-status = married
end
If we want to know which part of David's salary has to be paid as tax, then the demon tax-rate is activated to compute the current value for the attribute tax-rate, using the values 50000 and married of the instance as arguments.
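A demon can be mimicked by a procedure that is looked up whenever an attribute has no stored value. In the sketch below (Python), compute_rate and its rate numbers are invented purely for illustration; the point is the control flow: stored values take precedence, and a demon is executed on demand with the values of the attributes it names passed as arguments.

# Sketch of procedural attachment; the tax schedule itself is made up.
def compute_rate(salary, marital_status):
    base = 30 if salary > 40000 else 25              # rate in percent
    return base - (5 if marital_status == "married" else 0)

DEMONS = {("employee", "tax-rate"): (compute_rate, ("salary", "marital-status"))}

def get_value(instance_values, frame, attr):
    if attr in instance_values:                      # a stored value takes precedence
        return instance_values[attr]
    if (frame, attr) in DEMONS:                      # otherwise run the attached demon
        procedure, argument_attrs = DEMONS[(frame, attr)]
        args = [get_value(instance_values, frame, a) for a in argument_attrs]
        return procedure(*args)
    return None                                      # a real shell would ask the user here

david = {"age": 50, "salary": 50000, "marital-status": "married"}
print(get_value(david, "employee", "tax-rate"))      # -> 25 (percent)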
Frames have become very popular; many present-day ES builder tools provide frames in conjunction with the production-rule formalism (e.g., Nexpert (Neuron Data, 1988)).
Reasoning with Uncertainty

One of the complications of real life is that information is often uncertain or unreliable. Nevertheless, human experts are often able to use such information and make the right decisions. In order to be useful in environments where only imprecise information is available, ES have to capture not only the highly specialized expert knowledge, but the uncertainties that go with it as well. Research on the representation and manipulation of uncertainty has become a major research area, called inexact or plausible reasoning. Since probability theory is one of the oldest mathematical theories about uncertainty, it is no wonder that in the early 1970s it was chosen as the point of departure for models for handling uncertainty in rule-based ES. It was soon discovered, however, that probability theory could not be applied in this context in a straightforward manner. Subsequent research aimed at developing modifications of probability theory that would overcome these problems and that could be applied efficiently in a rule-based ES. Unfortunately, the results turned out to be disappointing from the mathematical point of view. Since applying probability theory had proved to be rather problematic, many AI researchers soon lost interest and proposed other points of departure for handling uncertainty (Kanal, 1986). So far, research has not yet yielded a model for plausible reasoning that is both mathematically correct and easy to use. In this chapter, we restrict the discussion to the most widely used model, the certainty factor model. The reader is referred to (Lucas, 1990) for a more thorough treatment of plausible reasoning in ES. The certainty factor model was developed by Edward Shortliffe and Bruce Buchanan to introduce the notion of uncertainty in the MYCIN system (Shortliffe, 1975; Gaag, 1989). Although inspired by probability theory, the certainty factor model deviates from probability theory in certain respects. Though mathematically flawed (Gaag, 1988), the model is used in a large number of ES. Its success stems from its simplicity: certainty factors are easy to handle and, in most cases, yield acceptable results. A domain expert expresses his knowledge of the domain in production rules, and quantifies his beliefs about these rules in certainty factors, which are numbers ranging from -1 to +1. A positive certainty factor is associated with a conclusion of a rule if the statement in the conclusion is confirmed to some
degree by the truth of the corresponding conditions (a certainty factor of +1 indicates the definite truth of the statement). A negative certainty factor, in contrast, is associated with a conclusion of a rule if the statement in the conclusion is disconfirmed by the truth of the corresponding conditions. If the fulfillment of the conditions does not influence the confidence in the statement in the corresponding conclusion, a certainty factor equal to zero is associated with this conclusion.
Example. A typical production rule expressing uncertain knowledge is the following rule taken from Chapter 3:

if same(environment,complexity,high) and
   notsame(structure,technology,routine)
then add(environment,differentiation,high) CF = 0.6
fi

This rule says that if the statements in the two conditions are completely true, then there is some evidence (to a degree indicated by the certainty factor 0.6) that the differentiation of the environment is high.
The purpose of employing the certainty factor model is to associate a certainty factor with each fact the system derives. Recall that a certainty factor associated with a conclusion of a production rule was meant to indicate the degree to which fulfillment of the corresponding conditions confirms or disconfirms the statement in that conclusion. When a production rule has succeeded, a new fact is derived. With this fact an appropriate certainty factor has to be associated. If the statements in the condition part of the rule are not absolutely true (that is, uncertain facts are used in fulfilling the conditions), one cannot simply associate the certainty factor of the conclusion with the new fact. The actual certainty factor depends on both the certainty factor of the conclusion and the certainty factors indicating the degrees to which the statements of the conditions have been confirmed. The certainty factor to be associated with the newly derived fact, denoted by CF_fact, is computed using the formula

CF_fact = CF_conclusion · max{0, CF_conditions}

where CF_conclusion denotes the certainty factor associated with the conclusion of the rule. The certainty factors resulting from the conditions are combined into a new certainty factor, denoted by CF_conditions, using the formulas

CF_{c1 and c2 and ... and cn} = min{CF_c1, CF_c2, ..., CF_cn}
CF_{c1 or c2 or ... or cn} = max{CF_c1, CF_c2, ..., CF_cn}

where CF_ci is the certainty factor resulting from condition ci, i = 1, 2, ..., n. Which certainty factor results from a condition is prescribed by the definition of its predicate. Most predicates take the certainty factors of their attribute values into account. The predicate same, for instance, tests whether the attribute value specified in the condition has a certainty factor greater than 0.2; in which case, the predicate returns the certainty factor found in the fact set. When the certainty factor for the value of the attribute is less than or equal to 0.2, the test fails. Since a rule base may contain several production rules having the same conclusion, it may happen that during the application of one of these rules, a fact concerning the same attribute value is already present in the fact set. In that case, the certainty factor of this old fact, CF_old, and the certainty factor of the newly derived fact, CF_new, are combined into a new certainty factor, CF_fact, indicating the net confidence in the fact, by using the following formula: CF_fact = f(CF_old, CF_new), where

f(CF_old, CF_new) =
   CF_old + CF_new · (1 - CF_old)                       if CF_old > 0 and CF_new > 0
   (CF_old + CF_new) / (1 - min{|CF_old|, |CF_new|})    if -1 < CF_old · CF_new < 0
   -f(-CF_old, -CF_new)                                 if CF_old < 0 and CF_new < 0
The following example shows the order in which these formulas are applied.
Example. Consider the following two production rules, in which we once more have abstracted from the internal structure of facts, conditions, and conclusions:

R1: if A then H fi          (CF = 1.00)
R2: if B and C then H fi    (CF = 0.80)

The expert has associated with the conclusion H of the rule R1 the certainty factor 1.00, and with the conclusion H of the rule R2 the certainty factor 0.80. We assume that the certainty factor CF_A = 0.50 has been assigned to the fact A earlier during the inference process, that the certainty factor CF_B = 0.40 has been assigned to the fact B, and the certainty factor CF_C = 1.00 to the fact C. We suppose that H is the goal of the consultation. Using backward chaining, R1 will be selected to be applied first. Since the fact A has not been confirmed with absolute certainty, which is indicated by the certainty factor 0.50 for it in the fact set, the uncertainty in A has to be propagated to the conclusion of the rule. Using the formula CF_fact = CF_conclusion · max{0, CF_conditions}, we have that the application of rule R1 results in the new certainty factor CF_H^1 = 1.00 · 0.50 = 0.50 being associated with the fact H. Subsequently, the production rule R2 is selected. Before we can employ the formula for propagating the uncertainty of B and C to the conclusion H of the rule, first a certainty factor for the composite condition B and C has to be computed: we find that CF_{B and C} = min{CF_B, CF_C} = 0.40. Application of rule R2 therefore yields the certainty factor CF_H^2 = 0.80 · 0.40 = 0.32 for the goal H. Finally, by means of the last formula for combining the results of production rules having the same conclusion, the total confidence in H is computed:

CF_H = CF_H^1 + CF_H^2 · (1 - CF_H^1) = 0.50 + 0.32 · (1 - 0.50) = 0.66.
Note that by combining the two degrees of confirmation of H, the total confidence in H is greater than the partial ones resulting from the different rules.
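The three formulas translate directly into code. The sketch below (Python; the function names are ours) reproduces the computation of the example, up to floating-point rounding.

# Certainty-factor propagation and combination, following the formulas above.
def cf_conjunction(cfs):
    # Certainty factor of a composite 'and' condition.
    return min(cfs)

def propagate(cf_conclusion, cf_conditions):
    # Certainty factor of a fact derived by a single rule.
    return cf_conclusion * max(0.0, cf_conditions)

def combine(cf_old, cf_new):
    # Net confidence when two rules conclude on the same fact.
    if cf_old > 0 and cf_new > 0:
        return cf_old + cf_new * (1 - cf_old)
    if cf_old < 0 and cf_new < 0:
        return -combine(-cf_old, -cf_new)
    return (cf_old + cf_new) / (1 - min(abs(cf_old), abs(cf_new)))

cf1 = propagate(1.00, 0.50)                          # rule R1 yields 0.50
cf2 = propagate(0.80, cf_conjunction([0.40, 1.00]))  # rule R2 yields 0.32
print(combine(cf1, cf2))                             # -> 0.66 (up to rounding)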
10.4 User Interface and Explanation Facilities

More than a suitable knowledge-representation formalism and associated inference method are needed to turn an ES into a useful tool for solving difficult real-life problems. To be useful to the practitioner, an ES has to be able to explain how it arrives at its conclusions, or why it asks certain questions, and to justify its final recommendations. Much research is currently being done to improve the comprehensibility and effectiveness of the reasoning behavior of ES. However, progress has been slow. Most modern ES shells still employ techniques based on methods which were developed for the MYCIN system. Important for the knowledge engineer is the trace facility, which provides a detailed description of the reasoning behavior of the ES during the consultation. This facility visualizes the inference steps at various levels of detail (the reader may be familiar with similar facilities from programming languages such as LISP and PROLOG). Facilities that allow the user to inspect selected parts of the inference are called explanation facilities. Three types of explanation facilities are distinguished:
• The why facility, which allows the user to examine why a certain question is being asked. It shows which production rules are currently being applied by the inference engine, and which subgoal has led to the question.
• The how facility, which offers a means for investigating how values for a particular attribute have been established, that is, which production rules have been applied for deriving its values, and whether a question about the attribute has been put to the user.
• The why-not facility, which is complementary to the how facility. It offers a means for determining during the consultation why a particular value for a specific attribute was not derived. The why-not facility shows the production rules with the concerned object-attribute-value triple in their conclusion.
Each of these types of explanation uses information present in the rule base and in the fact set, and also information about the inference process itself which has been recorded during the consultation. The explanation facilities often apply some stylized form of natural language for displaying information, in which case they are said to provide a quasi-natural language interface. In a quasi-natural language interface, a rather straightforward automated translation of the original representation of facts and
production rules into natural language takes place. Predicates, objects, attributes, and values have associated short phrases, which are combined using simple grammar rules to produce entire sentences describing the contents of conditions and conclusions.
Example. Consider the following condition of an arbitrary production rule:

same(employee,position,responsible)

Suppose that the verbal phrase "is" has been associated with the predicate same. Furthermore, the phrase "the employee" may have been associated with the object employee, the phrase "the position" with the attribute position, and the phrase "responsible" with the constant value responsible. It is now rather simple to translate the condition into the following sentence:

the position of the employee is responsible

just by arranging the phrases into the right order, and by inserting the word "of" at the correct position. This example shows that a quasi-natural language interface yields increased readability. However, in more complicated cases, the sentences may look a bit artificial. To demonstrate the use of the trace and why facilities in a quasi-natural language interface, the knowledge base discussed in the section on production rules and inference above has been consulted by means of an actual ES shell. The program used is discussed in detail in (Lucas, 1990). Comments are enclosed between braces.
Example.

Enter the name of the knowledge base: salary
Do you want trace information? (y or n) y
Tracing: the scale of the salary
Selected rules: R-4 R-5 R-6
Tracing: the category of the employee
Selected rules: R-1 R-2 R-3
Tracing: the position of the employee
What kind of position do you prefer?
-> why
The following rule is being applied to conclude about the category of the employee:
R-1 If the position of the employee is responsible
    and the skill of the employee is preponderance
    and the skill of the employee isnot obedience
    then conclude that the category of the employee is management-staff
-> why   {i.e., why is it necessary to conclude about the employee's category?}
The following rule is being applied to conclude about the scale of the salary:
R-4 If the category of the employee is management-staff
    and the rate of the salary is greater than 100000
    then conclude that the scale of the salary is 13-16
{So, rule R-1 was needed to establish the salary scale of the employee.}
-> ?
What kind of position do you prefer?
-> (responsible)
Tracing: the skill of the employee
Do you have a natural tendency to obedience or to preponderance?
-> (preponderance obedience)
Tracing: the rate of the salary
What do you want to earn?
-> why
The following rule is being applied to conclude about the scale of the salary:
R-5 If the category of the employee is supervisory-staff
    and the rate of the salary is less than or equal to 100000
    and the rate of the salary is greater than 50000
    then conclude that the scale of the salary is 9-14
-> 75000

CONCLUSIONS:
Object: the salary
Attribute: the scale
Value(s): (9-14)

This consultation illustrates the reasoning behavior of a typical ES and its interaction with the user, using backward chaining. The trace facility tells the user each time an attribute is traced. By successively entering the why command, the user can determine which reasoning steps prompted the questions being asked.
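The quasi-natural language sentences in such a consultation can be generated with very little machinery. The sketch below (Python; the phrase tables are illustrative and would in practice be supplied by the knowledge engineer) produces the sentence of the earlier example from a condition triple.

# Sketch of quasi-natural language generation from an o-a-v condition.
PHRASES = {
    "predicates": {"same": "is", "notsame": "isnot",
                   "greaterthan": "is greater than",
                   "lessequal": "is less than or equal to"},
    "objects":    {"employee": "the employee", "salary": "the salary"},
    "attributes": {"position": "the position", "skills": "the skill",
                   "category": "the category", "rate": "the rate",
                   "scale": "the scale"},
}

def verbalize(predicate, obj, attr, value):
    # "the <attribute> of the <object> <predicate phrase> <value>"
    return "%s of %s %s %s" % (PHRASES["attributes"][attr],
                               PHRASES["objects"][obj],
                               PHRASES["predicates"][predicate], value)

print(verbalize("same", "employee", "position", "responsible"))
# -> the position of the employee is responsible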
10.5 Knowledge Engineering

The number of fully operational ES in business and industry is still quite limited (for an overview, see Feigenbaum, McCorduck, and Nii, 1988). The slow
progress is often attributed to the lack of general, sound methodologies for knowledge engineering. Knowledge engineering ranges from the acquisition of domain knowledge, via its formalization using a knowledge-representation formalism, to its final incorporation into an ES. Over the last six years, much research has been devoted to knowledge engineering, and some new methodologies have become available. Each methodology for knowledge engineering aims at providing a systematic and detailed description of the development process of an ES. It is widely acknowledged that such a methodology is essential for the commercial development of ES in the software industry. The first methodology was proposed in (Buchanan, 1983). It described the development of an ES as a cyclic process through five phases, as depicted in Figure 10.5. The following phases were distinguished:
• Identification, which entails the selection of the participants of the project, the characterization of the type of problem to be solved by the ES, and the identification of required resources, such as computing facilities, time, and money.
• Conceptualization, which concerns the analysis of the domain expert's activities in the field. The aim is to gain insight into the nature of the initially given data, the conclusions to be drawn, and the strategy followed in solving the problem. This phase must provide a collection of the most important concepts and their interrelationships in the field.
• Formalization, which transforms the specification of the verbal, informal concepts and relationships resulting from the conceptualization into a knowledge-representation formalism.
• Implementation, which translates the formal specification obtained in the formalization phase into a running program (usually by using one of the available ES shells or ES building tools).
• Testing, which evaluates various characteristics of the new ES, such as its performance measured by means of a database of test cases.

[Figure 10.5: Cyclic Development of an Expert System]

Note that the decomposition of the development of an ES into various stages has much in common with the traditional software life-cycle. Critics of this methodology draw attention to the lack of detail in the phase descriptions of the development cycle, and want more emphasis on its organizational aspects. More recently developed methodologies generally take the software life-cycle as a starting point, and aim at adapting the various stages to the peculiarities of ES building. For example, it is well known that the development of ES differs from the traditional software life-cycle in that the feasibility of the project and the appropriateness of the selected tools are much more difficult to ascertain. Furthermore, unlike traditional software development, in designing an ES it is usually not possible to provide an exhaustive description of the relationships between the expected input of the system and its output. Much more than in traditional software development, requirements analysis and design go hand in hand: exploratory programming, prototyping, and experimentation are fundamental aspects of ES development. Although most research projects try to cover the entire ES life-cycle, there has so far been limited attention to the implementation and evaluation phases; the emphasis has been mainly on the initial phases. Most methodologies for building ES emphasize a systematic approach in the initial phases of the development of an ES to reduce both the cost and the risk of failure. Giovanni Guida and Carlo Tasso, for example, propose an ES life-cycle whose initial phases are a plausibility study, followed by the development of several prototype systems. This way, relevant information about the feasibility of the project (or the suitability of the chosen approach) will become available early on (Guida, 1989).
Many experts agree that knowledge acquisition is the most important bottleneck in the knowledge engineering task, since extracting knowledge from a human expert has proved to be very difficult. Techniques for approaching the knowledge acquisition task in a systematic, structured way have been proposed. Jos Breuker and Bob Wielinga, for example, have proposed a methodology for knowledge acquisition, called KADS, in which the data gathered in the initial phase of an ES project are transformed in several steps into an architecture of an ES, using a conceptual model of the problem domain at hand (Breuker, 1989). In Section 10.2, we mentioned various software tools which can be applied during the implementation phase of the development of an ES. As we have seen, these vary from the ES shells, which are generally easy to use but only suitable for certain types of application, to the ES builder tools, which are more like full programming environments. Besides these tools, however, there are also some specialized - but still experimental - tools available for assisting the knowledge engineer in the knowledge acquisition task. One of these tools is Shelley, developed by Anjo Anjewierden as part of the KADS methodology (Anjewierden, 1988). It basically consists of an integrated collection of special-purpose editors for analyzing the data gathered by interviewing domain experts. A similar tool is Acquist, developed by E. Motta and others (Motta, 1989). Both tools allow the knowledge engineer to carry out an analysis of transcripts obtained from interviewing the expert in the knowledge acquisition process, in order to produce a structured, conceptual model of the problem domain. Both systems apply the technique of hypertext to isolate concepts in the transcripts analyzed. The conceptual model produced is used as a starting point for the design and implementation of the ES.
10.6 Conclusions

This chapter provided a review of the most important topics in the field of ES. Current ES continue to be based on techniques that were developed for the MYCIN system. The field is still evolving, and it is to be expected that future ES may apply different, more sophisticated techniques. Current trends in ES research suggest, for instance, that both logic and probability theory will become more important in the future. Furthermore, until recently, research in knowledge engineering has focused mainly on the initial phases of the development of an ES. Not much research has as yet been done concerning the implementation and testing of ES. Developing a methodology that covers the entire life-cycle of an ES is one of the most interesting challenges facing ES research.
About the Authors
Helmy H. Baligh is Professor of Business Administration at the Fuqua School of Business of Duke University. He received his B.A. in Philosophy, Politics, and Economics from Oxford University, and his M.B.A. and Ph.D. from the University of California, Berkeley. Professor Baligh is co-author of Vertical Market Structures and of articles in Management Science, Organization Science, Journal of Marketing Research, Theory and Decision, and Kyklos. His research work is on economic structures, organizations, and markets.
Richard M. Burton is Professor of Business Administration in The Fuqua School of Business of Duke University. He received his B.S., M.B.A., and D.B.A. from the University of Illinois, Urbana. Professor Burton is the co-editor of Innovation and Entrepreneurship in Organizations and Organizational Responses to the New Business Conditions. He has published many articles in such journals as Management Science, Administrative Science Quarterly, Technovation, Omega, and The Columbia Journal of World Business. He is the Departmental Editor for Organizational Analysis, Performance and Design of Management Science. He is currently doing research on organizational design and expert systems.
Hennie Daniels is Associate Professor of Computer Science at the Economics Department of Tilburg University. He received his M.S. degree in Mathematics from the Technical University of Eindhoven, and a Ph.D. from Groningen University. He was a project manager of several software projects in the Dutch aerospace industry. He has published articles in handbooks like Handbook of CAD/CAM and Handbook of Data and Resource Management, and in many journals, such as International Journal for Numerical Methods in Engineering, Journal of Economic Dynamics and Control, Computer Science in Economics and Management, and in Dutch journals. His current research interest focuses on applications of Computer Science in Economics.
Linda C. van der Gaag is Assistant Professor of Theoretical Computer Science at Utrecht University. She received her M.S. (Computer Science) from the Delft University of Technology, and a Ph.D. from the University of Amsterdam. She is co-author of the book Principles of Expert Systems (Addison Wesley, 1990), and has published articles in such journals as International Journal of Man-Machine Studies, AI Expert, International Journal of Approximate Reasoning, and in
Dutch journals. Her current research interest focuses on probabilistic methods for plausible reasoning in knowledge-based systems.
Henk Gazendam is Associate Professor of Information Systems at Groningen University. He received his M.S. degree in Chemistry and Philosophy from the University of Utrecht. Prior to his appointment in Groningen he worked for 12 years as a senior policy officer in Dutch government agencies. He is the senior author of a Dutch book on strategic information planning (Informatiebeleid, hoe bestaat het). He has also published reports and articles on research management, information planning, decision support systems, and ES. He is a member of the international research group on cognition-based DSS (MIND). Recently, he has focused on the development of a formal language to describe organization and information system theories.
Josh C. Glorie is a senior programmer with the Department of Physics at Dartmouth College. He received a B.S. from the University of Amsterdam. His research interest focuses on computer-based representations of quantum effects.
Roger I. Hall is Professor of Organizational Theory in the Department of Management, University of Manitoba, Canada. His academic qualifications include a B.Sc. (Physics) from the University of Birmingham, a D.I.C. (graduate diploma in Production Engineering and Management) from Imperial College, a Certificate of the International Teachers Programme from the Harvard Business School, and a Ph.D. (Business Administration) from the University of Washington. He has been a recipient of a Bronfman Foundation Senior Faculty Award, the Canadian Pacific Prize of the Administrative Sciences Association of Canada, and a first prize in the International Competition of The Institute of Management Sciences (College on Organization) for the most innovative new contribution to the field of Organizational Analysis and Design. His published articles have appeared in Administrative Science Quarterly, Management Science, and Dynamica. Recently he has focused on developing an Artificial Intelligence model of the collective policy making behavior of managers.
Armand Hatchuel is Professor of Management Science and Production Systems at the École des Mines de Paris (France). A Civil Engineer, he received a Doctorate in Engineering and Management from the École des Mines de Paris. Professor Hatchuel is co-author of books on Production Management and has published several articles in French and international journals. He works on applications of formal decision theories to organizational concepts. He has also developed a methodology to guide the intervention of management scientists in organizations. Recently, he has focused on the development of ES in the light of this methodology.
Pim van der Horst is EDP Auditor at Credit Lyonnais Bank in The Netherlands. He received his M.S. in Management Information Systems from the University of Tilburg, The Netherlands. Pim van der Horst worked on the development of ES at Coopers & Lybrand Tax Consultancy. At the Postbank he was an advisor on office information systems. He has published in several journals and conference proceedings.
Peter J.F. Lucas is Assistant Professor of Medical Computer Science at the University of Amsterdam. He received his M.S. degree in Medicine from the University of Leiden. He started research on expert systems in 1983 at Delft University of Technology, where he developed the DELFI-2 expert system shell. He has been project leader of the ES Research Group at the Center of Mathematics and Computer Science in Amsterdam. He is co-author of the book Principles of Expert Systems (Addison Wesley, 1990), and has published articles in books such as Decision Support Systems and Expert Systems, in journals such as Liver and AI Expert, and in Dutch journals. His current research focuses on ES and their application in medical decision making.
Maarten Marx is a doctoral student at the Center of Computer Science in Organization and Management (CCSOM) of the University of Amsterdam. He holds an M.A. from the University of Amsterdam and has recently completed a report on an ES shell for reasoning about organizations. His research interest focuses on the application of intensional logic to default reasoning.
Michael Masuch is Associate Professor of Organization and Computer Science at the University of Amsterdam and the director of its Center for Computer Science in Organization and Management (CCSOM). He received an M.A. from the Free University, Berlin, and a Ph.D. from the University of Amsterdam. He has published several books, and many articles in such journals as Administrative Science Quarterly, Kölner Zeitschrift für Soziologie und Sozialpsychologie, Studio Organisatione, and in Dutch journals. He received the Thyssen Award for the best paper published in German social science journals in 1983. His present research focuses on machine-based theorizing in the social sciences.
Børge Obel is Professor of Business Administration at the Department of Management, Odense University, Denmark. He received an M.B.A. and a Ph.D. (Economics) from the University of Aarhus, Denmark. Professor Obel has published more than 25 articles in journals such as Management Science, Administrative Science Quarterly, Journal of Management Studies, and Journal of Economic Behavior and Economics. He is the author of five books. His most recent books include Designing Efficient Organizations: Modelling and Experimentation (North Holland, 1984), co-authored by Richard M. Burton, and Organizational Responses to the New Business Conditions: An Empirical
Perspective (Elsevier, 1989), co-authored by Richard M. Burton and John D. Forsyth. His current research focuses on the application of ES in organization theory and design.
Tim Orlo Peterson is the Director of Logistics Training and Knowledge Systems for the Air Force Logistics Management Center at Gunter Air Force Base, Montgomery, Alabama. He received his B.S. degree in Management from the University of Nebraska, Lincoln, his M.B.A. from the University of Texas, San Antonio, and his Ph.D. from Texas A&M University. In 1989, he won the Distinguished Graduate Research Award (Doctoral Level) from Texas A&M University. Dr. Peterson is co-editor of Managing The Most Important Resource: People and Human Resource Management: Readings and Cases. He has published articles in journals such as the Organizational Behavior Teaching Review, Logistics Spectrum, and Personnel Journal, and is co-author of two articles which appeared in Human Factors in Management Information Systems. Recently, he has focused on developing computer-based training and expert systems for Air Force logistics units and on researching the impact of expert systems on managerial performance.
Jeremiah Sullivan is Associate Professor of Business Communications in the Graduate School of Business Administration at the University of Washington. He received his B.S. degree in Marine Transportation from the Maritime College, State University of New York, his M.B.A. from the University of Washington, and his B.S. and Ph.D. from New York University. Professor Sullivan is the author of Handbook of Accounting Communications, Pacific Basin Enterprise and The Changing Law of the Sea, and Foreign Investment in the U.S. Fishing Industry. He has published many articles in such journals as the Academy of Management Journal, Academy of Management Review, Journal of International Business Studies, Journal of Small Business Management, and the Journal of Cross Cultural Psychology. Recently he has focused on developing ES for small business management and on researching the impact of managerial expert systems in large organizations.
David D. Van Fleet is Professor of Management at Arizona State University West Campus. His degrees are from the University of Tennessee, Knoxville. Professor Van Fleet has 26 years of full-time teaching experience, extensive editing work, over 140 publications and presentations, numerous officerships in professional associations, extensive executive education experience, and active consulting both in the United States and abroad. A Fellow of the Academy of Management, Professor Van Fleet has been President of the Southwest Academy of Management and Chair of the Management History Division of the Academy of Management, among other officer roles. His books include Contemporary Management, Military Leadership, and Organizational Behavior. He was the Editor of the Journal of Management from 1986 through 1989.
Benoît Weil is Research Engineer and Assistant Professor in Production Systems at the École des Mines de Paris. He is a Civil Engineer of Mines. He has recently developed a research program on the development of ES in organizations. He is also responsible for the special group on ES at the French Association for Industrial Management (AFGI).
References
Aho, Alfred V., et al. (1974): The Design and Analysis of Computer Algorithms. Reading, MA: Addison-Wesley.
Albanese, R., and D.D. Van Fleet (1983): Organizational Behavior. Hinsdale, IL: Dryden Press.
Aldrich, Howard E. (1972): "Technology and Organization Structure: A Reexamination of the Findings of the Aston Group." Administrative Science Quarterly, 17: 26-43.
Allinson, C.W. (1977): "Training in Performance Appraisal Interviewing - An Evaluation Study." Journal of Management Studies, 14: 179-191.
Annett, J. (1969): Feedback and Human Behavior. Middlesex, England: Penguin.
Applegate, L.M., J.I. Cash, and D. Quinn Mills (1988): "Information Technology and Tomorrow's Manager." Harvard Business Review, 66: 128-136.
Axelrod, R. (1976): The Structure of Decision: The Cognitive Maps of Political Elites. Princeton, N.J.: Princeton University Press.
Baligh, Helmy H. (1989): "Decision Rules Theory and its Use in the Analysis of the Organization's Performance." Working Paper, Fuqua School of Business, Duke University.
Baligh, Helmy H. (1986): "Decision Rules and Transactions, Organization and Markets." Management Science, 32: 1480-1491.
Baligh, Helmy H., and Richard M. Burton (1981): "Describing and Designing Organization Structures and Processes." International Journal of Policy Analysis and Information Systems, 5: 251-266.
Baligh, Helmy H., and Richard M. Burton (1984): "The Process of Designing Organization Structures and Their Information Substructures." In S.K. Chang (ed.), Management and Office Information Systems: 3-25. New York: Plenum Press.
Baligh, Helmy H., and William W. Damon (1980): "Foundations for a Systematic Process of Organizational Structure Design." Journal of Information and Optimization Sciences, 1: 133-165.
Baligh, Helmy H., Richard M. Burton, and Børge Obel (1989): "An Expert System for Organizational Design Using Structures and Their Properties: Beginning Anew."
Baligh, Helmy H., Richard M. Burton, and Børge Obel (1987): "Designing Organizational Structures: An Expert System Method." Presented at the Economics and Artificial Intelligence Conference, Aix-en-Provence, September.
Bandura, A., and D.H. Schunk (1981): "Cultivating Competence, Self-Efficacy, and Intrinsic Interest through Proximal Self-Motivation." Journal of Personality and Social Psychology, 41: 586-598.
Bandura, A. (1977): "Self-Efficacy: Toward a Unifying Theory of Behavioral Change." Psychological Review, 84: 191-215.
Barr, A., and J. Davidson (1980): Representation of Knowledge. In The Handbook of AI, Report No. STAN-CS-80-793, Computer Science Department, Stanford University.
Bennett, J.L. (ed.) (1983): Building Decision Support Systems. Reading, Massachusetts: Addison-Wesley.
Bensimon, M., M. Ducamp, and L. Nguyen (1988): Les Systèmes Experts en Milieu Financier. Proceedings of the Eighth International Workshop on Expert Systems and Their Applications, Vol. 2, Avignon.
Berry, D.C. (1987): "The Problem of Implicit Knowledge." Expert Systems, 4: 144-150.
Birtwistle, G.M. (1979): DEMOS: A System for Discrete Event Modeling in Simula. Houndmills: Macmillan.
Blalock, H.M. (ed.) (1971): Causal Models in the Social Sciences. London: Macmillan.
Blanning, R.W. (1987): "A Survey of Issues in Expert Systems for Management." In B.G. Silverman (ed.), Expert Systems for Business: 24-39. Reading, MA: Addison-Wesley.
Blanning, R.W. (1984): "Management Applications of Expert Systems." Information and Management, 7: 311-316.
Blau, Peter M., and Richard A. Schoenherr (1971): The Structure of Organizations. New York: Basic Books.
Bloomfield, B.P. (1988): "Expert Systems and Human Knowledge: A View from the Sociology of Science." AI & Society, 2: 17-29.
Boarne, B. (1989): "SMALLTALK: Lowering the Threshold." HOOPLA, 2: 6.
Bobrow, D.G., S. Mittal, and M.J. Stefik (1986): "Expert Systems: Perils and Promise." Communications of the ACM, 29: 9.
Bobrow, Daniel G. (1984): "Qualitative Reasoning about Physical Systems: An Introduction." Artificial Intelligence, 24: 1-5.
Bougon, M., K. Weick, and D. Binkhorst (1977): "Cognition in Organizations: An Analysis of the Utrecht Jazz Orchestra." Administrative Science Quarterly, 22: 606-639.
Bourne, L.E., and M.S. Fox (1984): "Autonomous Manufacturing: Automating the Job-Shop." Computer, 17: 76-86.
Bourne, L.E., R.L. Dominowski, E.F. Loftus, and A.F. Healy (1986): Cognitive Processes (2nd Edition). Englewood Cliffs, NJ: Prentice-Hall.
Bouwman, M. (1981): "The Use of Accounting Information: Expert Versus Novice Behavior." In Decision-Making: An Interdisciplinary Inquiry. Office of Naval Research, Contract No. N00014-80-G-0091.
Bouwman, M. (1983): "Human Diagnostic Reasoning by Computer: An Illustration from Financial Analysis." Management Science, 29: 653-672.
Brachman, R., and H. Levesque (eds.) (1985): Readings in Knowledge Representation. Los Altos, CA: Morgan Kaufmann.
Brachman, R.J. (1983): "What IS-A Is and Isn't: An Analysis of Taxonomic Links in Semantic Networks." IEEE Computer, 16 (10): 30-36.
Bradford, University of (1978): Dysmap User's Manual. University of Bradford Management Centre (U.K.).
Bramer, M.A. (1988): "Expert Systems in Business: A British Perspective." Expert Systems, 5: 104-119.
Bratko, I. (1986): PROLOG Programming for Artificial Intelligence, 3: 76-83. Reading, Massachusetts: Addison-Wesley.
Breuker, J., and B. Wielinga (1989): "Models of Expertise in Knowledge Acquisition." In G. Guida and C. Tasso (eds.), Topics in Expert Systems Design: Methodologies and Tools. Amsterdam: North-Holland.
Breuker, J., and B. Wielinga (1986): "Analyse van expertise voor expertsystemen." In A. Nijholt and L. Steels (eds.), Ontwikkelingen in expertsystemen. Den Haag: Academic Service.
Broadbent, Donald E. (1975): "The Magic Number Seven After Fifteen Years." In A. Kennedy and A. Wilkes (eds.), Studies in Long Term Memory. London: John Wiley.
Brodie, M.L., J. Mylopoulos, and J.W. Schmidt (eds.) (1984): On Conceptual Modelling: Perspectives from Artificial Intelligence, Databases, and Programming Languages. New York: Springer.
Buchanan, B.G., and E.H. Shortliffe (1984): Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Reading, Massachusetts: Addison-Wesley.
Buchanan, B.G., D. Barstow, R.B. Bechtal, et al. (1983): "Constructing an Expert System." In F. Hayes-Roth, D.A. Waterman, and D.B. Lenat (eds.), Building Expert Systems. Reading, Massachusetts: Addison-Wesley.
Burbridge, J.I., and W.H. Friedman (1987): "The Integration of Expert Systems in Post-Industrial Organizations." Human Systems Management, 7: 41-48.
Burge, W.H. (1975): Recursive Programming Techniques. Reading, Massachusetts: Addison-Wesley.
Burton, Richard M., and Børge Obel (1984): Designing Efficient Organizations: Modelling and Experimentation. Amsterdam: North-Holland.
Burton, Richard M., and Børge Obel (1980): "A Computer Simulation Test of the M-Form Hypothesis." Administrative Science Quarterly, 25: 457-466.
Cacioppo, J.T., R.E. Petty, and J.A. Sidera (1982): "The Effects of a Salient Self Schema on the Evaluation of Proattitudinal Editorials: Top-down Versus Bottom-up Message Processing." Journal of Experimental Social Psychology, 18: 324-338.
Campbell, D.T., and J.C. Stanley (1963): Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally.
Carroll, S.J., and D.J. Gillen (1987): "Are the Classical Management Functions Useful in Describing Managerial Work?" Academy of Management Review, 12: 38-51.
Carroll, S.J. (1982): The Performance Review and Quality of Work Life Programs. Paper presented at the Annual Meeting of the Eastern Academy of Management, Baltimore, Maryland.
Cederblom, D. (1982): "The Performance Appraisal Interview: A Review, Implications, and Suggestions." Academy of Management Review, 7: 219-227.
Chandler, Alfred D., Jr. (1962): Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Cambridge, Massachusetts: The MIT Press.
Charniak, Eugene, Christopher K. Riesbeck, and Drew V. McDermott (1986): Artificial Intelligence Programming (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
Charniak, Eugene, and Drew McDermott (1985): Introduction to Artificial Intelligence. Reading, Massachusetts: Addison-Wesley. Checkland, P. (1981): Systems Thinking, Systems Practice. Chichester: Wiley. Chi, M.T.H., R. Glaser, and E. Rees (1982): "Expertise in Problem Solving." In R. Sternberg (ed.), Advances in the Psychology of Human Intelligence (Vol. I). Hillsdale, NJ: Erlbaum. Child, John (1972): "Organizational Structure and Strategies of Control: A Replication of the Aston Study." Administrative Science Quarterly, 17: 56-77. Clancey, W.J., and E.H. Shortliffe (eds.) (1984): Readings in Medical Artificial Intelligence: The First Decade. Reading, Massachusetts: Addison-Wesley. Clark, P. (1987): WORC: The ESRC Research Program 82-87. Aston University, Birmingham. Cohen, M.D., J.G. March, and J.P. Olsen (1972): "A Garbage Can Model of Organizational Choice." Administrative Science Quarterly, 17: 1-25. Cook, T.D., and D.T. Campbell (1976): "The Design and Conduct of Quasi-Experiments and True Experiments in Field Settings." In M.D. Dunnette (ed.), Handbook of Industrial and Organizational Psychology. Chicago: Rand McNally. Coser, L.A. (1964): "The Termination of Conflict." In W.J. Gore, and J.W. Dyson (eds.), The Making of Decisions. Glencoe, Illinois: Free Press. Coyle, R.G. (1977): Management System Dynamics. London: Wiley. Cupello, J.M., and D. Mishelevich (1988): "Managing Prototype Knowledge/Expert System Projects." Communications of the ACM, 31: 534-541. Cyert, R.M., and J.G. March (1963): A Behavioral Theory of the Firm. Englewood Cliffs, NJ: Prentice-Hall. Hickson, D.J., C.R. Hinings, and C. Turner (1969): "The Context of Organization Structures." Administrative Science Quarterly, 14: 91-114. Daft, Richard L. (1986): Organization Theory and Design. West Publishing Company. Dalton, G.W., P.H. Thompson, and R.I. Price (1977): "The Four Stages of Professional Careers: A New Look at Performance for Professionals." Organizational Dynamics, 6: 23-42. Date, C.J. (1985): An Introduction to Database Systems. Reading, Massachusetts: Addison-Wesley. Davis, B.L., and M.K. Mount (1984): "Design and Use of Performance Appraisal Feedback System." Personnel Administrator, 29: 91-97. Davis, W.S. (1983): Systems Analysis and Design: A Structured Approach. Reading, Massachusetts: Addison-Wesley. Delesalle, H., and Y. Lagoude (1989): "L'Intelligence Artificielle pour la Planification et la Conduite du Travail en Atelier." Working Paper ITMI. Descotte, Y., and J.C. Latombe (1986): GARI: A Problem Solver that Plans How to Machine Mechanical Parts. Proceedings of the Seventh International Joint Conference on Artificial Intelligence, 300-332. Descotte, Y., and J.C. Latombe (1985): "Compromising Among Antagonist Constraints in a Planner." Artificial Intelligence, 27: 183-217. Douglas, M. (1987): How Institutions Think. London: Routledge & Kegan Paul. Downs, Anthony (1967): Inside Bureaucracy. Boston: Little, Brown. Dreyfus, H.L. (1988): "The Socratic and Platonic Basis of Cognitivism." AI & Society, 2: 99-112.
Dreyfus, H.L., and S.E. Dreyfus (1984): "From Socrates to Expert Systems: The Limits of Calculative Rationality." Technology in Society, 6: 217-233. Duchessi, P. (1987): "The Conceptual Design for a Knowledge-Based System as Applied to the Production Planning Process." In B.G. Silverman (ed.), Expert Systems for Business, 174-194. Reading, Massachusetts: Addison-Wesley. Duncan, Robert B. (1972): "Characteristics of Organizational Environments and Perceived Environmental Uncertainty." Administrative Science Quarterly, 17: 313-327. Duncan, Robert B. (1979): "What is the Right Organization Structure?" Organizational Dynamics, 59-79. Dungan, C.W., and J.S. Chandler (1985): "AUDITOR: A Micro-Computer Based Expert System to Support Auditors in the Field." Expert Systems, 2: 210-221. Eden, Colin, H. Williams, and T. Smithin (1986): "Synthetic Wisdom: The Design of a Mixed Mode Modelling System for Organizational Decision Making." Journal of the Operational Research Society, 37: 233-241. Eden, Colin (1986): "Managing Strategic Ideas: The Role of the Computer." ICL Technical Journal, 173-183. Eden, Colin (1988): "Using Cognitive Mapping for Strategic Options Development and Analysis." In Structuring Decisions. Chichester: Wiley. Einhorn, H.J., and R.M. Hogarth (1978): "Confidence in Judgment: Persistence of the Illusion of Validity." Psychological Review, 85: 395-416. Emery, F.E. (ed.) (1969): Systems Thinking. Harmondsworth: Penguin. Enderton, H.B. (1972): A Mathematical Introduction to Logic. London: Academic Press. Ernst, G., and A. Newell (1969): GPS: A Case Study in Generality and Problem Solving. New York: Academic Press. Evanson, S.E. (1988): "How to Talk to an Expert." AI Expert, February: 36-42. Farrel, P.M., and J. Pingry (1988): Expert Systems in Manufacturing. Proceedings of the Eighth International Workshop on Expert Systems and Their Applications, Vol. 2, Avignon. Feigenbaum, E., P. McCorduck, and H.P. Nii (1988): The Rise of the Expert Company. New York: Times Books. Feldman, D.C., and H.J. Arnold (1983): Managing Individual and Group Behavior in Organizations. New York: McGraw-Hill Book Company. Fersko-Weiss, F. (1985): "Expert Systems: Decision-Making Power." Personal Computing, November. Fields, J.M., and H. Schuman (1976): "Public Beliefs and the Beliefs of the Public." Public Opinion Quarterly, 40: 427-448. Fikes, R., and T. Kehler (1985): "The Role of Frame-Based Representation in Reasoning." Communications of the ACM, 28: 904-920. Fisher, C.D. (1979): "Transmission of Positive and Negative Feedback to Subordinates: A Laboratory Investigation." Journal of Applied Psychology, 64: 533-540. Fisher, Douglas, and Pat Langley (1986): "Conceptual Clustering and its Relation to Numerical Taxonomy." In William A. Gale (ed.), Artificial Intelligence and Statistics. Reading, Massachusetts: Addison-Wesley. Fiske, Edward B., and Bruce Hammond (1989): "Computer Savvy in the B-Schools." Lotus, (September): 60-65. Fiske, S.T., and S.E. Taylor (1984): Social Cognition. New York: Random House. Fletcher, G. (1984): "Psychology and Common Sense." American Psychologist, 39: 203-213. Ford, F.N. (1985): "Decision Support Systems and Expert Systems: A Comparison." Information and Management, 8: 21-26.
Forsyth, R. (1987): "Expertech Xi Plus, Expert Systems." Software Review, 4, Nr. 1. Fox, M.S. (1983): "The Intelligent Management System: An Overview." In H.G. Sol (ed.), Processes and Tools for Decision Support. Amsterdam: North-Holland. Fox, M.S. (1983): Constraint Directed Search: A Case Study of Job-Shop Scheduling. Ph.D. Thesis, Carnegie Mellon University. Freeman, John H. (1978): "The Unit of Analysis in Organizational Research." In Marshall W. Meyer et al. (eds.), Environments and Organizations. San Francisco: Jossey-Bass. Freeman, John H. (1986): "Data Quality and the Development of Organizational Social Science." Administrative Science Quarterly, 31: 298-303. Frost, Richard A. (1986): Introduction to Knowledge Base Systems. London: Collins. Furnham, A.F. (1988): Lay Theories: Everyday Understanding of Problems in the Social Sciences. Oxford: Pergamon Press. Gaag, L.C. van der (1987): "The Certainty Factor Model and Its Basis in Probability Theory." Report No. CS-R8816, Centre for Mathematics and Computer Science, Amsterdam. Gaag, L.C. van der (1989): "A Conceptual Model for Inexact Reasoning in Rule-Based Systems." International Journal of Approximate Reasoning, 3: 239-258. Galbraith, J.R. (1973): Designing Complex Organisations. Reading, Massachusetts: Addison-Wesley. Garson, Barbara (1988): The Electronic Sweatshop: How Computers are Transforming the Office of the Future into the Factory of the Past. New York: Simon and Schuster. Gazendam, H.W.M. (1986): "Beslissingsondersteunend Systeem Besturing O&W." Research Report, Groningen: Faculteit Bedrijfskunde. Gazendam, H.W.M. (1987): "Vier Wijzen van Aanpak van Informatieplanning." In S.K.Th. Boersma, and O.A.M. Fisscher (eds.), Structureren van het Ongestructureerde, 149-166. Groningen: OFIR. Gazendam, H.W.M. (1988): "Evaluatie Departementaal Informatiebeleid O&W 1985-1987." Research Report No. 88-06, Groningen: Faculteit Bedrijfskunde. Gazendam, H.W.M. (project leader) (1985): Besturing Informatievoorziening Onderwijs en Wetenschappen: Visie 1985-1990. Zoetermeer: Ministerie van O&W. Gazendam, H.W.M., J. ter Heegde, J.A. Sturkenboom, and J. Zwier (1987): Informatiebeleid, hoe bestaat het. Amsterdam: NGI-SIC. Gazendam, H.W.M., and W.M. de Jong (1989): Blauwdruk of Bestemmingsplan. Working Paper, Groningen: Faculteit Bedrijfskunde. Gellman, H. (1985, autumn): "Knowledge Workers: It's Your Choice." Business Quarterly, 50: 48-50. Genesereth, Michael, and Nils Nilsson (1988): Logical Foundations of Artificial Intelligence. Los Altos, California: Morgan Kaufmann. Gilhooly, K.J. (1988): Thinking: Directed, Undirected, and Creative. London: Academic Press. Glaser, R. (1985): "Thoughts on Expertise." Technical Report No. 8, Learning Research and Development Center, University of Pittsburgh. Goldberg, A., and D. Robson (1983): SMALLTALK-80: The Language and its Implementation. Reading, Massachusetts: Addison-Wesley.
Goudsblom, Johan (1977): Sociology in the Balance: A Critical Essay. Oxford: Basil Blackwell. Greeno, J.G. (1973): "The Structure of Memory and the Process of Solving Problems." In R.L. Solso (ed.), Contemporary Issues in Cognitive Psychology, 103-133. Washington, D.C.: V.H. Winston & Sons. Greller, M.M., and D.M. Herold (1975): "Sources of Feedback: A Preliminary Investigation." Organizational Behavior and Human Performance, 13: 244-256. Griffin, R.W. (1987): Management. Boston: Houghton Mifflin. Guglielmino, P.J. (1979): "Perceptions of Skills Needed by Mid-Level Managers in the Future and the Implications for Continued Education: A Comparison of the Perceptions of Mid-Level Managers, Professors of Management, and Directors of Training." Dissertation Abstracts International, 39: 1260A. Guida, G., and C. Tasso (1989): "Building Expert Systems: From Life Cycle to Development Methodology." In G. Guida, and C. Tasso (eds.), Topics in Expert System Design: Methodologies and Tools. Amsterdam: North-Holland. Gunsteren, H.R. van (1976): The Quest for Control: A Critique of the Rational-Central-Rule Approach in Public Affairs. London: Wiley. Guru Reference Manual (1985). Micro Data Base Systems, Inc., Lafayette, Indiana. Hagendijk, Robert P., and A.M. Prins (1984): "Referenties en Reverences: Onzekerheid, Afhankelijkheid, en Citeernetwerken in de Nederlandse Sociologie." Mens en Maatschappij, 59: 226-251. Hall, Richard H., J. Eugene Haas, and Norman J. Johnson (1967): "Organizational Size, Complexity, and Formalization." American Sociological Review, 32: 903-912. Hall, Roger I. (1979): "Simple Techniques for Building Explanatory Models of Complex Systems for Policy Analysis." Dynamica, 3: 101-144. Hall, Roger I. (1976): "A System Pathology of an Organization: The Rise and Fall of the Old Saturday Evening Post." Administrative Science Quarterly, 21: 185-211. Hall, Roger I. (1981): "Decision Making in a Complex Organization." In G.W. England, A. Neghandi, and B. Wilpert (eds.), The Functioning of Complex Organizations, Ch. 5: 111-144. Cambridge, Massachusetts: Oelgeschlager, Gunn and Hain. Hall, Roger I., and William Menzies (1983): "A Corporate System Model of a Sports Club: Using Simulation as an Aid to Policy Making in a Crisis." Management Science, 29: 52-64. Hall, Roger I. (1984): "The Natural Logic of Management Policy Making: Its Implications for the Survival of an Organization." Management Science, 30: 905-927. Hall, Roger I., Peter Aitchison, William L. Kocay, and Henian Li (1989): "The Organization's Policy Maps: Types of Maps, Methods for Soliciting and Recording Them, and Techniques for Analysis." Working Paper, Faculty of Management, University of Manitoba, Winnipeg, Canada R3T 2N2. Hall, Roger I., and Malcolm Munro (1989): "Corporate System Modeling as an Aid to Defining Critical Success Factors." Working Paper, Faculty of Management, University of Alberta. Hall, Roger I. (1989a): "An Artificial Intelligence Approach to Building a Process Model of Management Policy Making." In M.C. Jackson, P. Keys, and S.A. Cropper (eds.), Operational Research and the Social Sciences, 439-444. New York: Plenum Press. Hall, Roger I. (1989b): "A Training Game and Behavioral Decision Making Research Tool: An Alternative Use of System Dynamics Simulation." In Peter M. Milling, and Eric O.K. Zahn (eds.), Computer-Based Management of Complex Systems, 221-228. Berlin: Springer-Verlag.
Halpern, Joseph Y. (1986): Theoretical Aspects of Reasoning About Knowledge: Proceedings of the 1986 Conference. Los Altos, California: Morgan Kaufmann. Hansen, R.D., and J.M. Donoghue (1977): "The Power of Consensus: Information Derived from One's Own and Others' Behavior." Journal of Personality and Social Psychology, 35: 294-302. Harmon, P., and D. King (1985): Expert Systems: Artificial Intelligence in Business. New York: John Wiley & Sons. Harvey, J.H., G.L. Wells, and M.D. Alvarez (1978): "Attribution in the Context of Conflict and Separation in Close Relationships." In J.H. Harvey, W. Ickes, and R.F. Kidd (eds.), New Directions in Attribution Research (Vol. 2). Hillsdale, NJ: Erlbaum. Harvey, John H., and Gifford Weary (1984): "Current Issues in Attribution Theory and Research." Annual Review of Psychology, 35: 427-459. Hatchuel, A., and H. Molet (1986): "Rational Modeling in Understanding and Aiding Human Decision Making: About Two Case Studies." European Journal of Operational Research, 24: 178-186. Hatchuel, A., P. Agrell, and J.P. Van Gigch (1987): "Innovation as System Intervention." Systems Research, 4: 5-11. Hawkins, B.L., L.L. Penley, and T.O. Peterson (1981): Developing Behaviorally Oriented Communication Scales for the Study of the Communication-Performance Relationship. Paper presented at the annual meeting of the Academy of Management, San Diego. Hayes, P.J. (1973): Computation and Deduction. Proceedings of the 2nd MFCS Symposium, Czechoslovak Academy of Science, 105-118. Hayes-Roth, F., D. Waterman, and D. Lenat (1983): Building Expert Systems. Reading, Massachusetts: Addison-Wesley. High Performance Systems (1985): Stella. Brochure available from High Performance Systems, Inc., Box B1167, Hanover, NH 03755. Hirschman, A.O. (1970): Exit, Voice, and Loyalty. Cambridge: Harvard University Press. Hough, P.K., and N.M. Duffy (1987): "Top Management Perspective on Decision Support Systems." Information and Management, 12: 21-31. Huber, G.P. (1986): "The Nature and Design of Post-Industrial Organizations." Management Science, 8: 928-951. Humpert, B., and P. Holley (1987): "Expert Systems in Finance Planning." Expert Systems, 5: 78-103. Humphreys, P., and G. Richter (1988): "Design Enquiry: A Sociotechnical Perspective based on a Generic Office Frame of Reference (GOFOR)." Working Paper, London: London School of Economics and Political Science. ITMI (1987): "Naval, une maquette de système expert pour l'élaboration, le suivi et l'harmonisation des plannings prévisionnels de forage en mer." Rapport de synthèse. Jaager, J. de (1986): "Software Evaluations." HRD Review, 4. Jackson, P. (1986): An Introduction to Expert Systems. Addison-Wesley. Jarke, M., M. Jeusfeld, and T. Rose (1988): "Concept Base: A Prototype Knowledge Manager for Software Processes." Working Paper, Passau: Faculty of Informatics, University of Passau. Jung, R. (1965): "Systems of Orientation." In D.M. Kochen (ed.), Some Problems in Information Science. New York: Scarecrow Press.
Kahneman, D., P. Slovic, and A. Tversky (1982): Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press. Kanal, L.N., and J.F. Lemmer (1986): Uncertainty in Artificial Intelligence. Amsterdam: North-Holland. Karpatschof, B. (1982): "Artificial Intelligence or Artificial Signification?" Journal of Pragmatics, 6: 293-304. Kastelein, J. (1985): Modulair Organiseren Doorgelicht. Groningen: Wolters-Noordhoff. Katz, R.L. (1955): "Skills of an Effective Administrator." Harvard Business Review, 33: 33-42. Kaufman, Herbert (1973): Administrative Feedback: Monitoring Subordinates' Behavior. Washington, D.C.: The Brookings Institution. Kearsley, G. (1987): "Software for the Management of Training." Training News, 8: 13-15. Kieser, A., and H. Kubicek (1983): Organisation (2. Auflage). Berlin: De Gruyter. King, P. (1984a): Performance Planning and Appraisal. New York: McGraw-Hill Book Company. King, P. (1984b): "How to Prepare for a Performance Appraisal Interview." Training & Development Journal, 38: 66-69. Kinlaw, D.C. (1984, January): "Performance-Appraisal Training: Obstacles and Opportunities." Training, 21: 43-53. Kleijnen, J.P.C. (1980): Computers and Profits: Quantifying Financial Benefits of Information. Reading, Massachusetts: Addison-Wesley. Kohonen, Teuvo (1977): Associative Memory: A System-Theoretical Approach. Berlin: Springer. Kohonen, Teuvo (1984): Self-Organization and Associative Memory. Berlin: Springer. Kotter, J.P. (1982): "What Effective General Managers Really Do." Harvard Business Review, 60: 156-167. Kowalski, R. (1979): Logic for Problem Solving. New York: North-Holland. Kraft, A. (1984): "XCON: An Expert Configuration System at Digital Equipment Corporation." In P.H. Winston, and K.A. Prendergast (eds.), The AI Business: The Commercial Uses of AI. Cambridge, Massachusetts: The MIT Press. Kreutzer, W. (1986): System Simulation: Programming Styles and Languages. Sydney: Addison-Wesley. Kurbel, K., and W. Pietsch (1988): "Projekt Management bei einer Expertensystem-Entwicklung." Information Management, No. 1. Lakatos, Imre (1970): "Falsification and the Methodology of Scientific Research Programs." In I. Lakatos, and A. Musgrave (eds.), Criticism and the Growth of Knowledge. Cambridge: Cambridge University Press. Larson, J.R., Jr. (1984): "The Performance Feedback Process: A Preliminary Model." Organizational Behavior and Human Performance, 33: 42-76. Larson, J.R., Jr. (1986): "Supervisor's Performance Feedback to Subordinates: The Impact of Subordinate Performance Valence and Outcome Dependence." Organizational Behavior and Human Decision Processes, 37: 391-408. Latham, G.P., and S.B. Kinne, III (1974): "Improving Job Performance through Training in Goal Setting." Journal of Applied Psychology, 59: 187-191. Leeuw, A.C.J. de (1986): Organisaties: Management, Analyse, Ontwerp en Verandering (2e gewijzigde druk). Assen: Van Gorcum.
Lefton, R.E. (1985): "Performance Appraisal: Why They Go Wrong and How to Do Them Right." National Productivity Review, 5: 54-63. Leonard-Barton, D. (1985): "Experts as Negative Opinion Leaders in the Diffusion of a Technological Innovation." Journal of Consumer Research, 11: 914-926. Leonard-Barton, D., and J.J. Sviokla (1988): "Putting Expert Systems to Work." Harvard Business Review, March-April. Lepape, C. (1988): Des systèmes d'ordonnancement flexibles et opportunistes. Thèse de Doctorat en Sciences, Université de Paris Sud. Lesgold, A.M. (1984): "Acquiring Expertise." In J.R. Anderson, and S.M. Kosslyn (eds.), Tutorials in Learning and Memory: Essays in Honor of Gordon Bower. Hillsdale, NJ: Erlbaum. Lewin, A.Y. (1986): Presentation at the Annual Academy of Management Conference. Chicago, Illinois. Lindblom, C.E. (1965): The Intelligence of Democracy. New York: The Free Press. Lindblom, C.E. (1968): The Policy Making Process. Englewood Cliffs, NJ: Prentice-Hall. Lodish, L.M. (1982): "A Marketing Decision Support System for Retailers." Marketing Science, 1: 31-56. Lucas, P.J.F., and L.C. van der Gaag (1990): Principles of Expert Systems. Reading, Massachusetts: Addison-Wesley. Lyneis, James M. (1980): Corporate Planning and Policy Design: A System Dynamics Approach. Cambridge, Massachusetts: MIT Press. MacLennan, B.J. (1987): Principles of Programming Languages: Design, Evaluation, and Implementation. New York: Holt, Rinehart and Winston. Maes, R., and J.E.M. van Dijk (1986): "A Propositional Formalism for a Managerial DSS-Generator." In L.F. Pau (ed.), Artificial Intelligence in Economics and Management, 65-75. New York: Elsevier. Maier, N.R.F. (1958): The Appraisal Interview: Objectives, Methods, and Skills. London: Wiley. Maier, N.R.F. (1976): The Appraisal Interview: Three Basic Approaches. La Jolla, California: University Associates, Inc. Maimon, Z. (1986): "Business Studies and the Development of Managerial Skills." Studies in Educational Evaluation, 6: 83-97. Mainiero, L.A. (1986): "Early Career Factors that Differentiate Technical Management Careers from Technical Professional Careers." Journal of Management, 12: 561-575. Mann, F.C. (1965): "Toward an Understanding of the Leadership Role in Formal Organizations." In R. Dubin, G.C. Homans, F.C. Mann, and D.C. Miller (eds.), Leadership and Productivity, 68-103. San Francisco: Chandler Publishing. March, James G., and Johan P. Olsen (1986): "Garbage Can Models of Decision Making in Organizations." In James G. March, and Roger Weissinger-Baylon (eds.), Ambiguity and Command, 11-53. Massachusetts: Pitman. March, James G., and Herbert A. Simon (1958): Organizations. New York: John Wiley. Markus, H. (1977): "Self-Schemata and Processing Information About the Self." Journal of Personality and Social Psychology, 35: 63-78. Marschak, Thomas A. (1972): "Computation in Organization: The Comparison of Price Mechanisms and Other Adjustment Processes." In C.B. McGuire, and R. Radner (eds.), Decision and Organization, 237-282. New York: North-Holland.
Martin, James (1982): Strategic Data Planning Methodologies. Englewood Cliffs, NJ: Prentice-Hall. Masuch, Michael, Perry J. LaPotin, and R. Verhörst (1987): "Kunstmatige Intelligentie in Organisaties: Een Simulatiemodel van Organisatorische Besluitvorming." Mens en Maatschappij, 62: 358-381. Masuch, Michael (1985): "Vicious Circles in Organizations." Administrative Science Quarterly, 30: 14-33. Masuch, Michael, and Perry J. LaPotin (1989): "Beyond Garbage Cans: An AI Model of Organizational Choice." Administrative Science Quarterly, 34: 38-67. Maturana, H.R., and F.J. Varela (1984): The Tree of Knowledge: The Biological Roots of Human Understanding. Boston: Shambhala. Mayer, R.E., and J.G. Greeno (1973): "Structural Differences between Learning Outcomes Produced by Different Instructional Methods." Journal of Educational Psychology, 63: 165-173. Mayer, R.E. (1975): "Information Processing Variables in Learning to Solve Problems." Review of Educational Research, 45: 525-541. Mayer, R.E., C.C. Stiehl, and J.G. Greeno (1975): "Acquisition of Understanding and Skill in Relation to Subjects' Preparation and Meaningfulness of Instruction." Journal of Educational Psychology, 67: 311-350. Melle, W. van (1980): A Domain-Independent System that Aids in Constructing Knowledge-Based Consultation Programs. Ph.D. dissertation, Report No. STAN-CS-80-820, Computer Science Department, Stanford University. Melle, W. van, A.C. Scott, J.S. Bennett, and M. Peairs (1981): The EMYCIN Manual. Report No. STAN-CS-81-16, Computer Science Department, Stanford University. Michaelsen, R., and D. Michie (1983): "Expert Systems in Business (Recent Developments in Expert Systems Point Toward Success for this Technology in Business Environments)." Datamation, November. Michaelsen, R., and D. Michie (1986): "Prudent Expert Systems Applications Can Provide a Competitive Weapon." Data Management, July. Michalski, Ryszard S., Jaime G. Carbonell, and Tom M. Mitchell (eds.) (1983): Machine Learning: An Artificial Intelligence Approach. Palo Alto, California: Tioga. Miles, Raymond E., and Charles C. Snow (1978): Organizational Strategy, Structure, and Process. New York: McGraw-Hill Book Company. Miller, D.B. (1981): "Training Managers to Stimulate Employee Development." Training & Development Journal, 35: 47-53. Miller, Danny (1987): "The Genesis of Configuration." The Academy of Management Review, 12: 686-701. Miller, George A. (1956): "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information." Psychological Review, 63: 81-97. Miller, R.M. (1986): "Markets as Logic Programs." In L.F. Pau (ed.), Artificial Intelligence in Economics and Management, 129-136. Amsterdam: North-Holland. Minsky, M. (1975): "A Framework for Representing Knowledge." In P.H. Winston (ed.), The Psychology of Computer Vision. New York: McGraw-Hill. Mintzberg, H. (1973): The Nature of Managerial Work. Englewood Cliffs, NJ: Prentice-Hall. Mintzberg, Henry D. (1979): The Structuring of Organizations. Englewood Cliffs, NJ: Prentice-Hall.
Mockler, R.J. (1989): Knowledge-Based Systems for Strategic Planning. Englewood Cliffs, NJ: Prentice-Hall. Mohr, Lawrence B. (1982): Explaining Organizational Behavior. San Francisco: Jossey-Bass. Morgan, G. (1986): Images of Organization. Beverly Hills: Sage. Morieux, Y.V.H., and E. Sutherland (1988): "The Interaction Between the Use of Information Technology and Organizational Culture." Behavior and Information Technology, 7: 205-213. Motta, E., T. Rajan, and M. Eisenstadt (1989): "A Methodology and Tool for Knowledge Acquisition in KEATS-2." In G. Guida, and C. Tasso (eds.), Topics in Expert System Design: Methodologies and Tools. Amsterdam: North-Holland. Mumford, Enid, and Andrew M. Pettigrew (1975): Implementing Strategic Decisions. London: Longman. Munier, B., and M.F. Shakun (eds.) (1985): Compromise, Negotiation and Group Decision. Dordrecht-Boston-Tokyo: Reidel. Nadler, D.A. (1977): Feedback and Organizational Development: Using Data Based Methods. Reading, Massachusetts: Addison-Wesley. Nagel, E. (1961): The Structure of Science: Problems in the Logic of Scientific Explanation. London: Routledge & Kegan Paul. Negrotti, M. (1987): "The AI People's Way of Looking at Man and Machine." Applied Artificial Intelligence, 1: 109-116. Newell, A., and Herbert A. Simon (1963): "GPS, a Program that Simulates Human Thought." In E.A. Feigenbaum, and J. Feldman (eds.), Computers and Thought. New York: McGraw-Hill. Newell, Allen, and Herbert A. Simon (1972): Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall. Newell, A. (1973): "Production Systems: Models of Control Structures." In W.G. Chase (ed.), Visual Information Processing. New York: Academic Press. Newell, Allen, and Herbert A. Simon (1957): "Empirical Explorations of the Logic Theory Machine: A Case Study in Heuristics." In Proceedings of the Western Joint Computer Conference, 218-230. Nonaka, I., and J.K. Johansson (1985): "Japanese Management: What About the Hard Skills?" Academy of Management Review, 10: 181-191. O'Leary, D.E., and E. Turban (1987): "The Organizational Impact of Expert Systems." Human Systems Management, 7: 11-19. O'Leary, Daniel E. (1988): "Methods of Validating Expert Systems." Interfaces, 18: 72-79. O'Neal, M.A. (1985, July): "Managerial Skills and Values - for Today and Tomorrow." Personnel, 62: 49-55. Olson, J.R., and H.H. Rueter (1987): "Extracting Expertise From Experts: Methods for Knowledge Acquisition." Expert Systems, 4: 152-169. Oosten, R.C.H. van (1988): "The Object-Relation Model." Research Report, Groningen: Hanze Polytechnics. Ott, L. (1984): An Introduction to Statistical Methods and Data Analysis (2nd edition). Boston, Massachusetts: Duxbury Press. PC Business Software Review (1986): "Your PC as Personnel Counsellor." September.
Peitgen, H.O., and P.H. Richter (1986): The Beauty of Fractals: Images of Complex Dynamical Systems. Berlin: Springer. Perrow, Charles (1967): "A Framework for Comparative Analysis of Organizations." American Sociological Review, 32: 194-208. Pettigrew, Andrew M. (1985): The Awakening Giant. Oxford: Basil Blackwell. Pettigrew, Andrew M. (1973): The Politics of Organizational Decision Making. London: Tavistock. Pfeffer, Jeffrey (1982): Organizations and Organization Theory. Marshfield, Massachusetts: Pitman. Phillips, Derek L. (1968): Knowledge From What? Theory and Methods in Social Research. San Francisco: Jossey-Bass. Piersol, K.W. (1987): HUMBLE V.2.0 Reference Manual. Pasadena: Xerox Special Information Systems Vista Laboratory. Pinson, L.J., and R.S. Wiener (1988): An Introduction to Object-Oriented Programming and SMALLTALK. Reading, Massachusetts: Addison-Wesley. Popper, Karl (1959): The Logic of Scientific Discovery. London: Hutchinson. Prigogine, I., and I. Stengers (1984): Order Out of Chaos. Toronto: Bantam Books. Pugh, Alexander L. (1983): Dynamo User's Manual. MIT Press. Pugh-Roberts Associates (1986): Professional Dynamo Plus Reference Manual. Brochure available from Pugh-Roberts Associates, Inc., Five Lee St., Cambridge, MA 02139. Quillian, M. (1968): "Semantic Memory." In M. Minsky (ed.), Semantic Information Processing. Cambridge, Massachusetts: MIT Press. Rauch-Hindin, W. (1986): Artificial Intelligence in Business, Science and Industry. Prentice-Hall. Rauch-Hindin, W.B. (1988): A Guide to Commercial Artificial Intelligence: Fundamentals and Real-World Applications. Englewood Cliffs, NJ: Prentice-Hall. Reichmann-Adar, R. (1984): "Extended Person-Machine Interface." Artificial Intelligence, 22: 157-218. Reisig, W. (1985): Systementwurf mit Netzen. Berlin: Springer. Reitman, W.R. (1965): Cognition and Thought. New York: John Wiley. Richmond, Barry (1987): "The Strategic Forum: From Vision to Strategy to Operating Policies and Back Again." Available from High Performance Systems, Inc., 13 Dartmouth College Hwy., Lyme, NH 03768. Robbins, Stephen A. (1987): Organizational Theory: The Structure and Design of Organization. Englewood Cliffs, NJ: Prentice-Hall. Robinson, J.C., and L.C. Robinson (1978): "Modeling Techniques Applied to Performance Feedback and Appraisal." Training & Development Journal, 32: 48-53. Rockart, J.F., and C.V. Bullen (1986): The Rise of Managerial Computing. Homewood, Illinois: Dow Jones-Irwin. Rockart, J.F., and D.W. DeLong (1988): Executive Support Systems. Homewood, Illinois: Dow Jones-Irwin. Roos, Leslie L., and Roger I. Hall (1980): "Influence Diagrams and Organizational Power." Administrative Science Quarterly, 25: 57-71. Rosen, N., R. Billings, and J. Turney (1976): "The Emergence and Allocation of Leadership Resources over Time in a Technical Organization." Academy of Management Journal, 19: 165-183.
Rouse, W.B., and N.M. Morris (1985): "On Looking Into the Black Box: Prospects and Limits in the Search for Mental Models." Center for Man-Machine Systems Research, Georgia Institute of Technology. Rousset, M., and B. Safar (1987): "Negative and Positive Explanations in Expert Systems." Applied Artificial Intelligence, 1: 25-38. Roy, B. (1987): "Meaning and Validity of Interactive Procedures as Tools for Decision-Making." EJOR, 431, 13. Salancik, Gerald R., and J. Pfeffer (1977): "Who Gets Power - and How They Hold on to It: A Strategic-Contingency Model of Power." Organizational Dynamics, 3-21. Salancik, Gerald R., and Huseyin Leblebici (1985): "Towards a Theory of Organizational Form." Working Paper, College of Commerce, University of Illinois. Salford, University of (1986): Dysmap2 for the IBM-PC. Brochure available from University of Salford, Dept. of Business and Management Studies, U.K. M5 4WT. Saris, Willem, and Henk Stronkhorst (1984): Causal Modelling in Nonexperimental Research. Amsterdam: Sociometric Research Foundation. Schlitz, J. (1986): US Government and AI Mentor Inc. Sign Site License Agreement. Press Release, AI Mentor, Inc. Scott, P. (1983): "Knowledge-Oriented Learning." Proceedings In AI, 8: 432-435. Senge, Peter M. (1989): "Organizational Learning: A New Challenge for System Dynamics." In P.M. Milling, and E.O.K. Zahn (eds.), Computer-Based Management of Complex Systems, 229-235. Berlin: Springer-Verlag. Shafe, L., and R. Willocks (1988): The Strategic Implications and Applications in Finance, Implementing Applications: Some Practical Examples. Proceedings of the Eighth International Workshop on Expert Systems and Their Applications, Vol. 2, Avignon. Shortliffe, E.H., and B.G. Buchanan (1975): "A Model of Inexact Reasoning in Medicine." Mathematical Biosciences, 23: 351-379. Shortliffe, E.H. (1976): Computer-Based Medical Consultations: MYCIN. New York: Elsevier. Shpilberg, D., L.E. Graham, and H. Schatz (1986): "ExperTAX: An Expert System for Corporate Tax Planning." Expert Systems, 3. Shrivastava, P. (1982): "Decision Support Systems for Strategic Ill-Structured Problems." In Proceedings of the International Conference on Information Systems, Ann Arbor. Shriver, B., and P. Wegner (eds.) (1987): Research Directions in Object-Oriented Programming. Cambridge, Massachusetts: The MIT Press. Silverman, B.G. (1987): "Should a Manager 'Hire' an Expert System?" In B.G. Silverman (ed.), Expert Systems for Business, 5-23. Reading, Massachusetts: Addison-Wesley. Simon, Herbert A. (1962): "The Architecture of Complexity." Proceedings of the American Philosophical Society, 106: 467-482. Simon, Herbert A. (1981): The Sciences of the Artificial. Cambridge, Massachusetts: The MIT Press. Slagle, J.R., and H. Hamburger (1987): "Resource Allocation by an Expert System." In B.G. Silverman (ed.), Expert Systems for Business, 195-223. Reading, Massachusetts: Addison-Wesley. Smith, D.E. (1986): "Training Programs for Performance Appraisal: A Review." Academy of Management Review, 11: 22-40.
Smithin, Tim, and Colin Eden (1986): "Computer Decision Support for Senior Managers: Encouraging Exploration." International Journal of Man-Machine Studies, 25: 139-152. Sol, H.G. (1982): "Simulation in Information Systems Development." Thesis, Groningen. Solomon, Jolie (1989): "Now, Simulators for Piloting Companies: Computers Let Managers Test Various Policies." Wall Street Journal, 3: B1. Sowa, John F. (1984): Conceptual Structures: Information Processing in Mind and Machine. Reading, Massachusetts: Addison-Wesley. Sprague, R.H., and B.C. McNurlin (1986): Information Systems Management in Practice. Englewood Cliffs: Prentice-Hall. Sprague, R.H., and E.D. Carlson (1982): Building Effective Decision Support Systems. Englewood Cliffs: Prentice-Hall. Stamper, R. (1988): "Pathologies of AI: Responsible Use of Artificial Intelligence in Professional Work." AI & Society, 2: 3-16. Stata, Ray (1989): "Organizational Learning: The Key to Management Innovation." Sloan Management Review, 63-74. Stegmüller, Wolfgang (1969): Probleme und Resultate der Wissenschaftstheorie und analytischen Philosophie. Berlin: Springer. Sterling, Leon, and Ehud Shapiro (1986): The Art of Prolog. Cambridge, Massachusetts: MIT Press. Strassman, P. (1985): Information Payoff: The Transformation of Work in the Electronic Age. New York: Free Press, 100. Stumpf, S.A., and M. London (1981): "Management Promotions: Individual and Organizational Factors Influencing the Decision Process." Academy of Management Review, 6: 539-549. Suomi, R. (1988): "Inter-Organizational Information Systems: A New Area to be Studied." Research Report, Turku: Turku School of Economics and Business Administration. Tang, P.L.L., and S.T. Adams (1988): "Can Computers Be Intelligent? Artificial Intelligence and Conceptual Change." International Journal of Intelligent Systems, 3: 1-17. Tarjan, R. (1972): "Depth-First Search and Linear Graph Algorithms." SIAM Journal on Computing, 1: 146-160. Taylor, M.S., C.D. Fisher, and D.R. Ilgen (1984): "Individual's Reactions to Performance Feedback in Organizations: A Control Theory Perspective." In K.M. Rowland, and G.R. Ferris (eds.), Research in Personnel and Human Resources Management, 2: 81-124. Greenwich, CT: JAI Press. Thoenig, Jean-Claude (1982): "Research Management and Management Research." Organization Studies, 3: 269-275. Tosi, H., and L. Tosi (1986): "What Managers Need to Know about Knowledge-Based Pay." Organizational Dynamics, 14: 52-64. Tukey, John W. (1977): Exploratory Data Analysis. Reading, Massachusetts: Addison-Wesley. Turner, Barry A. (1976): "The Organizational and Interorganizational Development of Disasters." Administrative Science Quarterly, 21: 378-397. Van Fleet, D.D. (1988): Contemporary Management. Boston: Houghton Mifflin.
Van Gigch, J.P. (ed.) (1986): Decision-Making about Decision Making: Metamodels and Metasystems. Tunbridge Wells, Kent: Abacus Press. Varela, F.J. (1989): "Chaos as Self-Renewal." Lecture, Rotterdam: Erasmus University. Verderber, R.F. (1981): Communicate! (3rd edition). Belmont, California: Wadsworth Publishing Company. Waterman, D.A. (1986): A Guide to Expert Systems. Addison-Wesley. Wegner, P. (1987): "The Object-Oriented Classification Paradigm." In B. Shriver, and P. Wegner (eds.), Research Directions in Object-Oriented Programming, 479-560. Cambridge, Massachusetts: The MIT Press. Weick, Karl E. (1969): The Social Psychology of Organizing. Reading, Massachusetts: Addison-Wesley. Whalen, T., B. Schott, and F. Ganoe (1982): Fault Diagnosis in a Fuzzy Network. Proceedings of the International Conference on Cybernetics and Society, 35-39. Whitaker, R., and O. Ostberg (1988): "Channeling Knowledge: Expert Systems as Communications Media." AI & Society, 2: 197-208. White, P. (1984): "A Model of the Lay Person as Pragmatist." Personality and Social Psychology Bulletin, 10: 333-348. Williamson, O.E. (1985): The Economic Institutions of Capitalism. New York: The Free Press. Williamson, Oliver E. (1975): Markets and Hierarchies: Analysis and Antitrust Implications. New York: Free Press. Wolstenholme, E.F., and R.G. Coyle (1983): "The Development of System Dynamics as a Methodology for System Description and Qualitative Analysis." Journal of the Operational Research Society, 34: 565-575. Young, L.F. (1987): "The Metaphor Machine: A Database Method for Creativity Support." Decision Support Systems, 3: 309-317. Young, L.F. (1989): Decision Support and Idea Processing Systems. Dubuque, Iowa: W.C. Brown. Zuboff, S. (1988): In the Age of the Smart Machine. New York: Basic Books.
Systematic index

A
A-schematics 15
ART 198
Autopoiesis theory 132

B
Backward chaining 47, 207
Bottom-up inference 206-207

C
Certainty factor 42-43, 45, 47-50, 64, 216-219
Certainty factor model 216-217
Changeability 66, 72, 75
Class frames 213-214
Class inheritance 125
Clausal Form Logic 89
Cognitive Maps 106
Cognitive schema 17
Completeness 47-48, 64, 66, 76, 89, 92
Compound rule 43, 73
Condition-decision matches 47-48, 50
Connectedness 66, 75
Conscious schema 16
Construct validity 47
Consultation system 198
Content validity 47
Context of discovery 79
Context of justification 79
Contingency factor 40-41, 82-83, 91
Contingency theory 36, 39-40, 47-49, 51, 60-61, 82, 95, 131
Control function 172
Controlledness 66, 68, 76
Coordinatedness 64, 66-68, 76
Coordinating mechanism 82
Core Assumption 84-85
Corporate System Model 108, 118
Cost/Benefit Analysis 185, 187, 191
Criterion-related validity 46

D
Data model 129-131
Declarative semantics 202-203
Demons 108, 215
Description-first approach 70
Design parameter 82-83, 91-93, 95
Design-first approach 59, 61, 70-71

E
Enactment Process 113
Environmental change 37-38
Environmental complexity 37, 46
Environmental segmentation 37-38
Event-registration 130
Evolution model 129, 131
Expert system builder tools 198
Expert system shell 85, 198
Expert systems
  DESIGN 6 59-60, 62, 65, 69-71, 73-75
  ExperTAX 188, 190-192
  MYCIN 49, 196-199, 204, 216, 219, 224
  Organizational Consultant 35-36, 40, 42, 44, 60
  XCON 197
  XSEL 197
Explanation Facilities 219-220

F
Fact set 204, 206-207, 209, 218-220
First order predicate logic 86-87, 127
Flight Simulator 105, 121
FOPL 86, 88-89, 92
Frames 199, 212-216, 224
Functional model 129

G
Gambler's fallacy 24
General Problem Solver 195-196
Generic frames 213
Global database 204
GPS 195-196

H
Heuristic rule 205
Heuristic Simulation 80
Heuristics 14, 18, 21, 23-24, 156, 166, 186, 190, 196
How facility 219

I
In situ test 71
In situ validation 71
Incremental approach 189
Inference engine 24, 35-36, 47-49, 71, 165, 186, 198, 211, 219
Inference mechanism 90
Inference rule 87-89, 202-203
Inference structure 126
Inferencing 18, 22, 84, 89, 92
Influence Diagrams 106
Information Spread 66, 77
Information Strategy Choice Model 134
Inheritance 125, 213-214
Instance frames 213
Instance-of link 213
Is-a link 210, 212-213

K
KEE 198
Knowledge acquisition 192, 195, 224
Knowledge base 20, 35-38, 40-44, 47-49, 61, 70-71, 73, 92, 130, 155-156, 162, 173-175, 190, 197-198, 204, 215, 220
Knowledge Craft 198
Knowledge-based systems 79, 130, 132, 155, 160, 195

L
Long-term memory 80, 113

M
M1 49, 85, 143, 198
Machine metaphor 124-125, 129
Management information 13, 132
Management Learning Laboratory 105
Managerial skills 171-172
Mental schema 14
Metaknowledge 16
Modus ponens 87, 89, 92, 202-203
Modus tollens 87
Multi-valued 204

N
NEXPERT 216
No expert theory 30, 32

O
Object schema 204, 207
Object types 125, 127-130, 138
Object-Oriented Modelling 123, 125-126
Open system decomposition model 131-132
Open Systems Theory 131
Operating core 82, 86, 91
Operation types 129
Organization Metaphor 124, 128
Organizational complexity 42
Organizational design 35-40, 42-43, 47-49, 51, 70-71, 73-74
Organizational Parts 82
Organizational structure 37, 44, 60, 72-74, 190

P
Peter Principle 172
Procedural attachment 215
Procedural semantics 203
Process decomposition model 130
Process Model 106, 111, 114
Process structure 126
Production Rules 204-207, 217-220
Production systems 204
Property inheritance 214
Prototyping approach 189

Q
Quasi-natural language interface 220

R
Retained set 113-114
Retention Process 113
Rule Comprehensiveness 66, 77
Rule Fineness 66, 73, 77
Rule Lumpiness 66, 68, 77

S
Selection Process 113
Semantic nets 199, 210, 213-214
Short-term memory 80
Signed Digraphs 106
Simple rules 73
Simulation models 47, 79, 126, 129-131
Soft systems methodology 133
Soundness 88
Standardized approach 189
Strategy Forum 105
Strong expert theory 30-32
Structural Configurations 83
Superclass link 213-214
System Dynamics 108
System Dynamics modeling 105
System Flow Diagrams 106

T
Tacit schema 15, 27
Technical skills 171
Theorem Proving 80, 203
Theory-space 92, 99
Top-down inference 206-207
Trace facility 219, 222

U
User interface 35-36, 130, 198, 219

V
Validation 36, 46-50, 70-71, 189
Validity 46-47, 70-71

W
Why facility 219
Why-not facility 219
Work-Related Variables 83
Working memory 204
de Gruyter Studies in Organization
An international series by internationally known authors presenting current research in organization

Vol. 20: Capitalism in Contrasting Cultures
Edited by Stewart R. Clegg and S. Gordon Redding, assisted by Monika Cartner
1990. 15.5 x 23 cm. VIII, 451 pages. Cloth. ISBN 3-11-011857-2; 0-89925-525-6 (U.S.)

Vol. 21: The Third Sector: Comparative Studies of Nonprofit Organizations
Edited by Helmut K. Anheier and Wolfgang Seibel
1990. 15.5 x 23 cm. XIV, 413 pages. Cloth. ISBN 3-11-011713-4; 0-89925-486-1 (U.S.)

Vol. 22: The Spirit of Chinese Capitalism
By S. Gordon Redding
1990. 15.5 x 23 cm. XIV, 267 pages. Cloth. ISBN 3-11-012333-9; 0-89925-657-0 (U.S.)

Vol. 24: Symbols and Artifacts: Views of the Corporate Landscape
Edited by Pasquale Gagliardi
1990. 15.5 x 23 cm. XVII, 428 pages. Cloth. ISBN 3-11-012012-7; 0-89925-569-8 (U.S.)

Vol. 26: Human Resource Management: An International Comparison
Edited by Rüdiger Pieper
1990. 15.5 x 23 cm. XII, 285 pages. Cloth. ISBN 3-11-012573-0; 0-89925-720-8 (U.S.)

WALTER DE GRUYTER • BERLIN • NEW YORK
Genthiner Strasse 13, D-1000 Berlin 30, Tel.: (030) 2 60 05-0, Fax 2 60 05-2 51
200 Saw Mill River Road, Hawthorne, N.Y. 10532, Tel.: (914) 747-0110, Fax 747-1326