Theory of Control in Organizations [1 ed.] 9781624178306, 9781624177941

The theory presented in the book deals with methodological and mathematical foundations of control in organizations.


English. 361 pages. 2013.



MANAGEMENT SCIENCE - THEORY AND APPLICATIONS


THEORY OF CONTROL IN ORGANIZATIONS

No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.


MANAGEMENT SCIENCE - THEORY AND APPLICATIONS

Additional books in this series can be found on Nova's website under the Series tab.


Additional e-books in this series can be found on Nova’s website under the e-books tab.



THEORY OF CONTROL IN ORGANIZATIONS


DMITRY NOVIKOV

New York


Copyright © 2013 by Nova Science Publishers, Inc.

All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying, recording or otherwise without the written permission of the Publisher.

For permission to use material from this book please contact us: Telephone 631-231-7269; Fax 631-231-8175; Web Site: http://www.novapublishers.com

NOTICE TO THE READER

The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary damages resulting, in whole or in part, from the readers' use of, or reliance upon, this material. Any parts of this book based on government reports are so indicated and copyright is claimed for those parts to the extent applicable to compilations of such works.


Independent verification should be sought for any data, advice or recommendations contained in this book. In addition, no responsibility is assumed by the publisher for any injury and/or damage to persons or property arising from any methods, products, instructions, ideas or otherwise contained in this publication.

This publication is designed to provide accurate and authoritative information with regard to the subject matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in rendering legal or any other professional services. If legal or any other expert assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS.

Additional color graphics may be available in the e-book version of this book.

Library of Congress Cataloging-in-Publication Data

Theory of control in organizations / author: Dmitry Novikov (Moscow Institute of Physics and Technology (MIPT), Moscow, Russian Federation).
pages cm
Includes bibliographical references and index.
ISBN:  (eBook)
1. Organization. 2. Management. 3. Organizational effectiveness. 4. Communication in organizations. I. Novikov, Dmitry.
HD31.T4865 2013
303.3'3--dc23
2012050342

Published by Nova Science Publishers, Inc. † New York

CONTENTS

Introduction                               vii
Chapter 1    Control in Organizations        1
Chapter 2    Incentive Mechanisms           19
Chapter 3    Planning Mechanisms            87
Chapter 4    Mechanisms of Organizing      121
Chapter 5    Mechanisms of Controlling     159
Chapter 6    Staff Control                 177
Chapter 7    Structure Control             191
Chapter 8    Informational Control         217
Chapter 9    Institutional Control         241
Conclusion                                 263
Appendices                                 267
References                                 331
Index                                      335


INTRODUCTION

Control mechanisms. What are the attributes of an efficient manager? Should he or she be well-educated? That might be so. What about rich experience? That is not obligatory. Education and experience are generally related to what should be done in various situations. On the other hand, how the required actions should be performed is hardly discussed in any university. Yet, learning by one's own mistakes costs dearly. Indeed, numerous problems of control in organizations (firms, enterprises, institutions, etc.) of different scale and domain arise for the following reason. Competent goal-setting is often followed by actions and measures hardly connected with implementation of the declared goals. On a national level, this is evident, e.g., from adopted laws that remain inactive. On an enterprise level, orders of managers often lead to results quite opposite to the planned ones. The underlying reason is that merely adopting a law or signing an order is insufficient. One has to provide appropriate mechanisms for their implementation.

Thus, we have already mentioned the key word of the book, "mechanism." According to its general definition [39], a mechanism¹ is "a system, a device that determines the order of a certain activity." The present book is devoted to the description of control mechanisms in organizational systems (OS).

According to the definition provided by the Merriam-Webster dictionary, an organization is:
1. the condition or manner of being organized;
2. the act or process of organizing or of being organized;
3. an administrative and functional structure (as a business or a political party); also, the personnel of such a structure.

The third meaning of the term "organization" can be extended to the definition of an organizational system in the following way: "an organization is an association of people engaged in joint implementation of a certain program or task, using specific procedures and rules." The set of such procedures and rules is said to be a mechanism of operation. This implies that the term "organization" can indicate a property, a process and an object (see Figure I.1). We will use the last meaning, i.e., understand an organization as an organizational system² representing an association of people engaged in joint implementation of a certain program or task, using specific procedures and rules.

¹ In this book, we italicize the basic notions. For their rigorous definitions, see the Glossary (Appendix 5).
² Evidently, an organizational system possesses a specific organization (see the first meaning of the term "organization"). The latter results from the process of organizing (see the second meaning).


[Figure I.1. Definition of organization: a property (the condition or manner of being organized), a process (the act or process of organizing or of being organized), or an organizational system (an association of people engaged in joint implementation of a certain program or task, acting on the basis of specific procedures and rules, i.e., mechanisms of operation).]

Note that the presence of definite procedures and rules (mechanisms) regulating the joint activity of members of an organization appears of crucial importance; this feature makes it possible to distinguish an organization from a group or a collective. In the sense of organizational systems, a mechanism of operation is a set of rules, laws and procedures regulating the activity of organizational system participants. On the other hand, a control mechanism is a set of management decision-making procedures within an organization. Therefore, the mechanisms of operation and control mechanisms describe the behavior of the members of an organization, as well as their decision-making (see models of activity and decision-making in Section 1.1).

A control subject (a principal) can choose a certain decision-making procedure (a certain control mechanism as a relation between his or her actions, the purpose of the organization and the actions of the controlled subjects, i.e., agents) only if he or she is able to predict the behavior of the agents. In other words, he or she should foresee their reaction to certain control actions. Conducting experiments in real life (applying different control actions and studying the reaction of the agents) is inefficient and almost impossible in practice. One can instead use modeling, i.e., the analysis of control systems based on their models. Provided with an adequate model, it is possible to analyze the reaction of a controlled system (the analysis stage) and then choose and apply a control action which ensures the required reaction (the synthesis stage).

The presence of a definite set of specific control mechanisms in an organization seems appealing, first, for the principal, as it allows for predicting the behavior of the controlled agents and, second, for the agents, since it makes the behavior of the principal predictable. The reduction of uncertainty by virtue of control mechanisms is an essential property of any organization as a social institution.

Background. The late 1960s were remarkable for the rapid development of cybernetics and systems analysis [3, 7, 41, 63], operational research [25, 60, 62], mathematical economic theory [26, 38] and mathematical control theory (automatic control theory, ACT), as well as for the intensive implementation of their results in technology.


[Figure I.2. Evolution of the concept of an organizational system.]

At the same time, many scientific research centers endeavored to apply the general approaches of control theory to design mathematical models of social and economic systems. The theory of active systems (TAS) [9, 11], the theory of hierarchical games (THG) [18] and mechanism design (MD) arose in this context, see Figure I.2. Note that MD, understood broadly, includes agency theory (and Principal-Agent problems) and contract theory [8, 32, 38, 57, 59]. Today, one observes an almost complete coalescence of these theories, resulting in a new synthetic science known as the theory of control in organizations. The corresponding object of research is organizational systems, the subject of research is control mechanisms, and the basic method of research is mathematical modeling³.

A theory is generally a form of organizing reliable scientific knowledge about a specific set of objects, representing a system of interrelated statements and proofs and containing certain techniques for explaining and predicting phenomena and processes in the given problem domain (i.e., all phenomena and processes described by the theory). Any scientific theory (a) includes interrelated structural components and (b) possesses a backbone element as its original basis [49]. A backbone element of control theory (for social systems, organizational systems and other systems) is the category⁴ of organization; indeed, control is a process of organizing (control activity [49]) which leads to the property of organization in a controlled system (see Figure I.1).

³ We emphasize that organizations are studied by many other sciences (for instance, see Figure I.2). The difference consists in the objectives of research (descriptive or prognostic objectives, normative objectives) and in the methods of research (e.g., management science analyzes control mechanisms by observing and systematizing the positive experience accumulated by managers).
⁴ According to the Merriam-Webster dictionary, a category (from Greek katēgoria, predication) is (1) any of several fundamental and distinct classes to which entities or concepts belong and (2) a division within a system of classification. Therefore, it represents a general notion generated by abstracting specific features of objects. Hence, it is impossible to find a generic notion for category. At the same time, this notion possesses minimum content, i.e., fixes a minimum set of attributes of the corresponding objects. Nevertheless, this content reflects fundamental (essential) associations and relations in objective reality and cognition. A specific system of categories is intrinsic to each science.


The structural components⁵ of control theory (see Figure I.3) are:
• control problems;
• a scheme of control activity;
• conditions of control;
• types of control;
• subjects of control;
• methods of control;
• forms of control;
• control tools;
• control functions;
• factors having an impact on control efficiency;
• control principles⁶;
• control mechanisms.

For a detailed description of these components, we refer the reader to [39]. The structure of control theory as the set of stable relations among its components is studied in [12, 39]. This book presents the basic results derived within the theory of control in organizations, viz., the results concerning the development and implementation of mathematical models of control mechanisms in organizational systems. Let us provide a system of classification for these mechanisms.


[Figure I.3. Components of control theory: control problems and control mechanisms; a scheme of control activity; conditions of control; methods and types of control; subjects of control; forms of control; control tools; control functions; control principles.]

⁵ Other important components of control theory (the properties of a control subject/controlled subject and control efficiency criteria) depend on specific features of a corresponding control system and on the goals of control.
⁶ See Appendix 5.

[Figure I.4. Methods (types) of control, a classification: staff control, structure control, institutional control, motivational control and informational control in an OS.]

A classification of control mechanisms. System analysis implies that any system is described by its staff, structure and functions. Participants of an organizational system (OS) have purposeful behavior; consequently, their functions are treated within the framework of decision-making models⁷ (see Chapter 1, Appendices 1 and 3). Accordingly, we define a model of an organizational system by specifying [12, 39] (Figure I.4):
• the staff of the OS (elements or participants of the OS);
• the structure of the OS (a set of informational, control, technological and other relations among the OS participants);
• the sets of feasible actions⁸ (constraints and norms of activity) imposed on the OS participants; these sets reflect institutional, technological and other constraints and norms of their joint activity;
• the preferences (motives, purposes, etc.) of the OS members;
• the information, i.e., data regarding essential parameters available to the OS participants at the moment of decision-making (choosing their strategies);
• the sequence of "moves" (a sequence of data acquisition and strategy choice by the OS participants).

The staff determines "who" is included in the system; the structure describes "who interacts with whom, who is subordinate to whom," etc. Finally, the feasible sets define "who can do what," the goal functions represent "who wants what," and the information states "who knows what." Control is interpreted as an influence exerted on an organizational system to ensure its required behavior. Control may affect each of the parameters listed above, called objects of control; these parameters of an OS are modified during the process of control and as its result.

⁷ According to decision theory, any decision-making model includes, at least, a set of alternatives available for choice, preferences that guide the choice of an alternative by a decision-maker, and information available to him or her.
⁸ In decision theory, the term "strategy" indicates the choice of a decision-maker or a rule which guides him or her during such a choice.
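The OS model specification above translates naturally into a data structure. The following Python sketch is purely illustrative: the class name OSModel, the toy participants and the quadratic goal function are our own inventions, not the book's notation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class OSModel:
    """Illustrative container for the components of an OS model."""
    staff: List[str]                          # "who is included" (object of staff control)
    structure: List[Tuple[str, str]]          # "who interacts with whom" (structure control)
    feasible_actions: Dict[str, List[float]]  # "who can do what" (institutional control)
    goal_functions: Dict[str, Callable[[float], float]]  # "who wants what" (motivational control)
    information: Dict[str, str]               # "who knows what" (informational control)
    move_order: List[str]                     # sequence of data acquisition and strategy choice

# A toy single-principal, single-agent system.
model = OSModel(
    staff=["principal", "agent"],
    structure=[("principal", "agent")],       # the principal is superior to the agent
    feasible_actions={"agent": [0.0, 0.5, 1.0, 1.5, 2.0]},
    goal_functions={"agent": lambda y: 2.0 * y - y ** 2},  # illustrative goal function
    information={"agent": "complete information"},
    move_order=["principal", "agent"],
)
# The agent's most beneficial feasible action under this goal function: 1.0.
print(max(model.feasible_actions["agent"], key=model.goal_functions["agent"]))
```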

Hence, using the object of control as the first basis of classification of control mechanisms in OS, we obtain the following methods (types) of control (see Figure I.4⁹):
• staff control [50];
• structure control [12];
• institutional control (control of constraints and norms of activity) [12];
• motivational control (control of preferences and goals) [50];
• informational control (control of information available to OS participants at the moment of decision-making) [14, 52];
• "moves" sequence control (control of the sequence of data acquisition and strategy choice by the OS participants).

Let us briefly discuss the specific features of the different methods (types) of control. Naturally, in practice it may be difficult to choose a certain type of control explicitly, since some of them could and should be used simultaneously.

Staff control deals with the following issues: who should be a member of an organization, who should be dismissed and who should be recruited. Usually staff control also includes the problems of personnel training and development.

As a rule, the problem of structure control is solved in parallel to that of staff control. Its solution answers several questions, viz., how control functions are distributed among employees, how control and subordination are assigned, etc.

Institutional control appears to be the most stringent: the principal seeks to achieve his or her goals by restricting the sets of feasible actions and results of activity of his or her subordinates. Such restriction may be implemented via explicit or implicit influence (legal acts, directives, orders, etc.) or via mental and ethical norms, corporate culture, etc.

Motivational control seems "softer" than institutional control and consists in the purposeful modification of the preferences (utility functions) of subordinates. This modification is implemented via a certain system of penalties and/or incentives that stimulate the choice of a specific action and/or the attainment of a definite result of activity.

As compared with institutional and motivational control, informational control appears the "softest" (most indirect) type. Yet, it has been least investigated in the sense of formal models. A particular case of informational control is active forecasting.

Thus, we have already given a classification of control (see above) based on those components of the controlled system (to be more precise, of its model) that are influenced by control; the list of these components includes the staff, the structure, the feasible sets, the goal functions and the information. Obviously, in the general case the impact may and should be applied simultaneously to all parameters mentioned above. Seeking optimal control consists in identifying the most efficient feasible combination of all parameters of an OS. Nevertheless, the theory of control in organizations traditionally studies a certain system of nested control problems (solutions to "special" problems are widely used to solve more "general" ones).

Today, there are two common ways to describe an OS model (as well as to formulate and solve the corresponding control problems). They are referred to as the "bottom-top" approach and the "top-bottom" approach.

⁹ Note that game-theoretic models generally consider control of the sequence of moves as a special case of structure control (see Appendix 1). Thus, in the sequel we do not focus on it.


According to the first ("bottom-top") approach, particular problems are solved first; then, using the obtained solutions to the particular problems, general ones are treated. For instance, a particular problem could be that of incentive scheme design. Suppose this problem has been solved for every possible staff of OS participants. Next, one may pose the problem of staff optimization, i.e., choosing a certain staff to maximize efficiency (under the corresponding optimal incentive scheme). An advantage of this approach lies in its constructivity. A shortcoming lies in its high complexity, due to the large number of possible solutions of the upper-level problem, each requiring the solution of the corresponding set of particular subproblems (see the sketch below).

The second ("top-bottom") approach eliminates this shortcoming; this approach states that the upper-level problems must be solved first, while their solutions serve as constraints for the particular lower-level problems. In fact, we doubt whether (e.g., when creating a new department) a manager of a large-scale company would first think over the details of the regulations describing the lowest-level employees' interactions. Quite the contrary, he would delegate this task to the head of the department (providing him with the necessary resources and authority). Construction of an efficient control system for an organization requires combining both approaches, in theory and in applications. This book provides some examples.
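As an illustration of the nested structure of the "bottom-top" approach, here is a hypothetical Python sketch; all names, numbers and the stub subproblem are invented for the example, not taken from the book.

```python
from itertools import combinations

types = {"a1": 1.0, "a2": 1.5, "a3": 0.7}   # hypothetical agent types (efficiencies)

def lower_level_value(staff):
    """Stub for the particular subproblem: the principal's efficiency for a fixed
    staff under the corresponding optimal incentive scheme (here simply the total
    type of the staff minus a fixed employment cost per agent)."""
    return sum(types[a] for a in staff) - 0.6 * len(staff)

# Upper-level problem: enumerate every possible staff, reusing the subproblem solution.
candidates = [staff for k in range(1, len(types) + 1)
              for staff in combinations(sorted(types), k)]
best = max(candidates, key=lower_level_value)
print(best, round(lower_level_value(best), 2))
# The source of complexity: 2**n - 1 candidate staffs, each requiring
# its own lower-level solution.
```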

Let us continue the classification of control in organizational systems. The basic (elementary) model of an OS consists of a single controlled subject (an agent) and a single control subject (a principal); they make decisions once, under complete information. Extensions of the basic model are the following:
• a dynamic OS (includes participants making decisions repeatedly; an extension with respect to the subject of move sequence control);
• a multi-agent OS (includes several agents making decisions simultaneously; an extension with respect to the subject of staff control);
• a multilevel OS (includes three or even more levels of hierarchy; an extension with respect to the subject of structure control);
• a distributed-control OS (includes several principals controlling the same agents; an extension with respect to the subject of structure control);
• an OS with uncertainty (includes participants having incomplete or imperfect information on essential parameters; an extension with respect to the subject of informational control);
• an OS with constraints on joint activity (includes global constraints imposed on the joint choice of agent actions; an extension with respect to the subject of institutional control);
• an OS with private information (includes revelation of private information by agents and/or a principal; an extension with respect to the subject of informational control).

Therefore, the second basis of the classification system could be related to the presence or absence of the following elements:
• dynamics;
• interconnected agents;
• multiple levels;
• multiple subordination;
• uncertainty;
• constraints on joint actions;
• revelation of information.

The third basis of the classification system is given by the modeling method. Using this basis, it seems possible to identify models of control based on optimization methods¹⁰, as well as on a game-theoretic approach. Optimization-based models could be further divided into models involving the following tools: probability theory (including reliability theory, queuing theory and statistical decision theory), operations research (linear and nonlinear programming, stochastic programming, integer and dynamic programming, etc.), the theory of differential equations and optimal control theory, as well as discrete mathematics, generally graph theory (with results in transportation and assignment problems, scheduling problems, location problems, resource distribution problems for networks, etc.). Similarly, the models based on a game-theoretic approach are subdivided into ones involving different game-theoretic frameworks: non-cooperative games [46, 47], cooperative games [45, 55], repeated games [17, 46], hierarchical games [18] or reflexive games [14, 52] (see also Appendix 1).

The fourth basis of the classification system of control problems in OS is provided by the control functions which have to be implemented. Process-based management¹¹ includes the following primary functions: planning, organizing, motivating (stimulating) and controlling. Project-based management [35] has the following stages of a project lifecycle:

• initiation (concept), i.e., data collection and current state analysis; goal setting, task decomposition, criteria selection, definition of requirements and constraints (internal and external), assessment and approval of a project framework;
• planning and design, i.e., team building and training, project charter and scope development, strategic planning, holding tenders, contracting and subcontracting, development and approval of a project draft design;
• execution, i.e., starting up the project management system developed at the previous phase, organization and coordination of project activities, starting up an incentive system for project team members, operating planning, procurement, and operating management;
• monitoring and controlling, i.e., measuring ongoing project activities, monitoring project performance indicators against a project performance baseline and a project plan, performing corrective actions to address issues and risks, and change management (adjusting project plans and budgets);
• completion, i.e., planning the process of project completion, checking and testing project deliverables, customer staff training, formal acceptance of the project, project performance assessment, project closing, and dissolving the project team.

¹⁰ The essence of optimization models consists in searching for optimal values of the varying parameters of the system (i.e., feasible values being the best in the sense of a given criterion). In game-theoretic models, some of these values are chosen by participants of the system possessing personal interests; therefore, the control problem lies in defining "rules of the game" ensuring that the controlled agents choose the required actions.
¹¹ One often distinguishes between process-based management (i.e., control of a regular and repetitive activity) and project-based management (i.e., control of changes). A project is a time-limited purposeful change in a separate system under given requirements concerning the results and feasible consumption of resources; any project possesses a specific organization.

According to these stages, one may speak about the following basic control functions: planning, organizing, motivating and controlling. Finally, psychology identifies the following procedural components of any activity: a motive, a goal, a technology (content and forms, methods and tools) and a result (see Section 1.1 and [12, 34]). These components correspond to the four basic management functions (see Table I.1¹²) [34, 39].

Table I.1. Control types and control functions

Control types                      | Control functions
Process-based management           | planning   | organizing          | motivating | controlling
Project-based management           | initiation | planning and design | execution  | monitoring
Controlled components of activity  | purposes   | technology          | motives    | results

Table I.2. Control functions and control mechanisms

Planning:
• resource allocation mechanisms;
• mechanisms of active expertise;
• transfer pricing mechanisms;
• rank-order tournaments;
• exchange mechanisms.

Organizing:
• mechanisms of joint financing;
• cost-saving mechanisms;
• mechanisms of cost-benefit analysis;
• mechanisms of self-financing;
• mechanisms of insurance;
• mechanisms for production cycle optimization;
• mechanisms of assignment.

Motivating:
• individual incentive schemes;
• collective incentive schemes;
• unified incentive schemes;
• team incentive mechanisms;
• incentive mechanisms for matrix structures.

Controlling:
• integrated rating mechanisms;
• mechanisms of consent;
• multi-channel mechanisms;
• mechanisms of contract renegotiation.

¹² Again, we mention that the controlled components of activity (see Table I.1) are common for any activity, including process-based management and project-based management.


Hence, using the fourth basis of the classification system (control functions), we distinguish planning mechanisms, mechanisms of organizing, incentive mechanisms and mechanisms of controlling [39].

The fifth basis of the control models classification lies in the control issues (or typical business cases) solved with a certain control mechanism. The mechanisms listed in Table I.2 have been historically identified by TCO and represent a sort of "key words" [39]. Moreover, they have well-developed theoretical and experimental grounds and have been successfully implemented in industry. Note these are mostly mechanisms of motivational control. See Chapters 2-5 for a detailed study.

Let us emphasize that the classification illustrated by Table I.2 turns out to be rather formal, for several reasons. On the one hand, the values of the classification attributes are, in fact, the classes of control mechanisms that have been analyzed in detail. On the other hand, the same class of mechanisms may be used to implement several different control functions.

The sixth basis of the classification system for control problems in an OS is, in fact, the size of the real systems where a certain problem generally arises (a country, a region, an enterprise, a structural department within an enterprise, a collective, an individual). Possible fields of application of the control mechanisms are outlined in the Conclusion.

Structure of the book. The introduced classification system determines the structure of the book. In Chapter 1, a general formulation of control problems in organizational systems is given. The material presented further relates to different types of control (see Figure I.5).

[Figure I.5. Structure of the book¹³: Chapter 1 (Control in Organizations) underlies the chapters on motivational control (Chapter 2, Incentive Mechanisms; Chapter 3, Planning Mechanisms; Chapter 4, Mechanisms of Organizing; Chapter 5, Mechanisms of Controlling), followed by Chapter 6 (Staff Control), Chapter 7 (Structure Control), Chapter 8 (Informational Control) and Chapter 9 (Institutional Control), supported by Appendices 1-5.]

¹³ Chapters 2-9 are self-contained and could be studied independently.


Chapters 2-5 are devoted to motivational control (as the type most intensively studied to date). These chapters correspond to the functions of motivating, planning, organizing and controlling, respectively (thus analyzing incentive mechanisms, planning mechanisms, mechanisms of organizing and mechanisms of controlling). Chapter 6 deals with control mechanisms for OS staff, while Chapter 7 addresses control mechanisms for OS structure. Finally, in Chapters 8-9 we focus on the mechanisms of informational control and institutional control, respectively (see Figure I.5).

The complex of typical control mechanisms described below represents a "kit" [39], whose units allow for constructing various (integrated) mechanisms to solve specific control problems in organizational systems. For the suggested control mechanisms, possible fields of application and the experience of their practical implementation are overviewed in the Conclusion. We also discuss some prospects for further investigations there. The list of references includes the basic works dedicated to mathematical models of control in organizations. The appendices provide the necessary prerequisites from game theory (Appendix 1), graph theory (Appendix 2), decision theory (Appendix 3) and fuzzy-set theory (Appendix 4), as well as a glossary of control in organizations (Appendix 5).

Finally, the author wishes to express gratitude to A. Yu. Mazurov, Cand. Sci. (Phys.-Math.), for his careful translation of the book from Russian into English, as well as for his permanent feedback and contribution to the final version of the book.


Chapter 1

CONTROL IN ORGANIZATIONS


In the Introduction, we emphasized that constructing efficient control mechanisms requires a model of the controlled system to analyze its response to certain control actions. Any organizational system (OS) consists of individuals, groups, collectives, etc.; human beings are remarkable for their ability to make decisions independently. Hence, we first have to describe a model of decision-making. Accordingly, the present chapter possesses the following structure. In Section 1.1 we sketch the essence of control activity and provide some models of decision-making, to be used in Section 1.2 for a general statement of the control problem in an OS. Next, Section 1.3 concentrates on the description of control technology for organizational systems, i.e., the basic stages of analysis and synthesis of optimal control mechanisms (including their implementation in practice). Finally, in Section 1.4 we discuss general approaches to control problems in OS.

1.1. CONTROL ACTIVITY AND MODELS OF DECISION-MAKING

Activity. Let us consider the basic structural (procedural) components of any activity of some subject, see Figure 1.1 (below, activity is understood as purposeful human action) [12, 34, 49]. The chain "need → motive → purpose → tasks → technology → action → result," highlighted by thick arrows in Figure 1.1, corresponds to a single "cycle" of activity. For convenience, the margins of a subject (an individual or a collective one) are marked by the dotted rectangle.

Needs are defined as a requirement or lack of a certain entity essential to sustain the vital activity of an organism, an individual, a social group or society as a whole. The needs of social subjects, i.e., an individual, a social group and society as a whole, depend on the development level of a given society and on the specific social conditions of their activity; this is shown by arrow (1) in Figure 1.1. Note that here we are not concerned further with the needs of social subjects.

The needs are stated in concrete terms via motives that make a person or a social group act; in fact, activity is performed for the sake of motives. Motivation means a process of stimulating an individual or a social group to fulfill a specific activity, actions and steps (see arrow (1) in Figure 1.1).


[Figure 1.1. Structural components of activity: the chain need/motive → purpose → tasks → technology (content and forms, methods and tools) → action → result, with assessment of the result against criteria, corrections and self-regulation closing the loop; the external environment sets the conditions, norms, principles and criteria of activity. Numbered arrows (1)-(6) mark the influences discussed in the text.]

Motives cause the formation of a purpose as a subjective image of the desired result of the expected activity or action. The purpose is decomposed, with respect to the conditions, norms and principles of activity, into a set of tasks. Next, taking into account the chosen technology (that is, a system of conditions, forms, methods and tools for solving the tasks), a certain action is chosen; note that technology includes content and forms, methods and tools. The above-mentioned action leads (under the influence of the environment) to a certain result of activity. The result of activity is assessed by the subject according to his or her (internal) criteria, as well as by external subjects (being a part of the environment) according to their own criteria.

A particular position within the activity structure is occupied by the components referred to as either self-regulation (in the case of an individual subject) or control (in the case of a collective subject). Self-regulation represents a closed control loop: during the process of self-regulation the subject modifies the components of his activity based on the assessment of the achieved results (see the thin arrow in Figure 1.1).

The notion of an external environment (illustrated by Figure 1.1) is an essential category in system analysis, which considers human activity as a complex system. The external environment is defined as the set of those objects/subjects lying outside the system such that, first, changes in their properties and/or behavior affect the system under consideration and, second, their properties and/or behavior change depending on the behavior of the system. The following factors (see Figure 1.1) are set by the external environment (with respect to the given subject of activity):
• criteria used to assess the compliance of a result with a purpose;
• norms (legal, ethical, hygienic, etc.) and principles of activity widely adopted within a society or an organization;
• conditions of activity:
  - motivational,
  - personnel-related,
  - material and technical,
  - methodical,
  - financial,
  - organizational,
  - regulatory and legal,
  - informational.

Thus, we have discussed the primary characteristics of activity and the corresponding structural components. Now, let us proceed to control issues.

Control. Modeling a controlled system also requires modeling the corresponding control system. The complex hierarchical structure of systems and the diverse types, methods, styles and forms of control have resulted in a variety of corresponding models. The main content of many models of organization often reduces to a sort of control model. Starting the discussion of models of control, one should precisely formulate what control is; therefore, we give a series of common definitions below:


Control is "the process of checking to make certain that rules or standards are being applied."
Control is "the act or activity of looking after and making decisions about something."
Control is "an influence on a controlled system with the aim of providing the required behavior of the latter."

There exist numerous alternative definitions, which consider control as a certain element, a function, an action, a process, a result, an alternative, and so on. We do not intend to state yet another definition; instead, let us merely emphasize that control implemented by a subject (this is exactly the case for organizational systems) should be considered as an activity. Notably, methodology is a science of organizing activity, while control is viewed as a type of practical activity regarding the organization of activity. Such an approach, in which control is meant as a type of practical activity¹ (control activity, management activity), puts many things into place; in fact, it explains the "versatile character" of control and balances different approaches to this notion.

Let us clarify the last statement. If control is considered as activity, then implementing this activity turns out to be a function of the control system; moreover, the control process corresponds to the process of activity, and a control action corresponds to its result, etc. In other words, within organizational systems (where the principal and the controlled system are both subjects, see Figure 1.3), control is activity regarding the organization of activity. We can further increase the level of reflexion. On the one hand, in a multilevel control system the activity of a top manager may be considered as activity regarding the organization of the activities of his subordinates; in turn, their activity consists in organizing the activity of their subordinates, and so on. On the other hand, a whole army of consultants represents experts in the organization of management activity (first of all, this applies to management consulting).

Consider an elementary input-output model² of a system composed of a control subject (the principal) and a controlled subject (the agent)³, see Figure 1.2. The input includes the control action and external disturbances, while the output consists of the action of the controlled subject. Feedback provides the principal with information on the state of the agent. The primary input-output structure of the control system illustrated by Figure 1.2 is based on the scheme of activity presented in Figure 1.1; the point is that both the principal and the agent carry out the corresponding activity. Combining the structures of both sorts of activity according to Figure 1.1, one obtains the structure of control activity illustrated by Figure 1.3.

¹ At first glance, interpreting control as a sort of practical activity seems a bit surprising. The reader knows that control is traditionally seen as something "lofty" and very general; however, the activity of any manager is organized similarly (satisfies the same general laws) to that of any practitioner, e.g., a teacher, a doctor, an engineer. Moreover, sometimes "control" (or management activity) and "organization" (as a process, i.e., activity oriented to ensure the property of organization) are considered together. Even in this case, methodology as a science of organizing any activity determines the general laws of management activity.
² This model is considered elementary as it involves a single principal controlling a single agent. Generalization of the model is possible by increasing the number of principals or agents, adding new hierarchy levels or external parties; for details, see Sections 1.3-1.4.
³ Let us clarify the difference between a controlled subject and a control object. The term "object" is often used in control theory and decision theory. An object possesses no activity by definition; in contrast, a subject is active and able to make his own decisions. Thus, both terms are applicable, but in saying "subject" we emphasize the humanity of the controlled system. We will use the terms "principal" and "agent" throughout the book, assuming both are active in the stated sense.


[Figure 1.2. The input-output control model: the control subject (principal) applies control to the control object (agent) and receives feedback on the state of the control object; external disturbances act upon the agent.]

[Figure 1.3. Structure of control activity: the activity structure of Figure 1.1 is replicated for the principal and for the agent (each with needs/motives, goal, tasks, technology, action, result, self-regulation, criteria, assessment, corrections, conditions, norms and principles); the result of the principal's activity is the control applied to the agent, the state of the controlled system is fed back to the principal, and external disturbances act upon the agent.]
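As a toy numeric illustration of the feedback loop in Figure 1.2 (the dynamics, numbers and update rule below are our own assumptions, not the book's model): the agent best-responds to the current control, the principal observes the resulting state through feedback and corrects the control toward a target, and a random disturbance stands in for the external environment.

```python
import random

random.seed(1)
target = 3.0       # output the principal wants the agent to produce
control = 0.5      # initial control action (an incentive rate, say)

def agent_response(rate):
    """Agent's best response: maximize rate * y - y**2 / 2 over y, i.e., y = rate."""
    return rate

state = 0.0
for step in range(30):
    disturbance = random.uniform(-0.2, 0.2)        # external disturbance
    state = agent_response(control) + disturbance  # observed state of the agent
    control += 0.5 * (target - state)              # feedback correction by the principal

print(round(control, 2), round(state, 2))          # both settle near the target 3.0
```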

It should be noted that, from the agent's point of view, the principal is a part of the external environment (the numbering of arrows in Figures 1.1 and 1.3 coincides), which exerts an influence for a definite purpose (double arrows (1)-(4) and (6) in Figure 1.1); see Figure 1.3. Some components of the environment's influence may even have a random, nondeterministic character and be beyond the principal's control. Along with the actions of the controlled system, these influences affect the outcome (the state) of the controlled system (double arrow (5) in Figure 1.1); see also the external disturbances in Figure 1.3. In what follows, such influences are modeled as nature uncertainty or game uncertainty.

The structure given by Figure 1.3 may be augmented by adding new hierarchical levels. The principles used to describe control in multilevel systems remain unchanged. However,


multilevel systems have specifics distinguishing them from a serial combination of two-level "blocks."

Models of decision-making. Consider an elementary organizational system which consists of two participants, a principal and an agent. They possess the property of activity, i.e., the ability to behave purposefully according to individual preferences and to make decisions independently. Following the approaches adopted in the theory of hierarchical games [18], contract theory [8, 57] and the theory of active systems [9, 11], we call the player making the first move a principal (a metaplayer who has the right to establish the rules of play for other players). Accordingly, the player making the second move (under the known choice of the principal) is said to be an agent. In control models of socio-economic systems, a principal represents a control authority, while an agent plays the role of a controlled subject. Moreover, these roles may not have been assigned at the beginning of the game.

Let us state the model of the agent's decision-making. For defining the preferences of an agent (and a principal), we introduce the following description of the agent's interaction with his or her environment. The environment includes other agents and principals, as well as other objects and subjects (belonging to the OS considered or being elements of an external environment); for the time being, we are not concerned with identifying the exact boundaries of the OS.

Suppose that an agent can choose actions (strategies, states, etc.) from a set of feasible actions A. Denote an action by y (y ∈ A). By choosing the action y ∈ A in a specific situation, the agent obtains a result of activity z ∈ A₀, where A₀ indicates the set of feasible results of activity. A possible discrepancy between the agent's action and the result of activity is due to the impact of the situation, i.e., the external environment, the actions of other participants of the OS, and so on. The relationship between the agent's action y ∈ A and the result z ∈ A₀ of his or her activity may have an intricate character; for instance, probability distributions or fuzzy information functions may be used (see below).

Assume that an agent possesses preferences over the set of results z ∈ A₀; in other words, the agent compares different results of activity. Let R_{A₀} denote the preferences of the agent and ℜ_{A₀} the set of feasible preferences. Preferences from the set ℜ_{A₀} are often parameterized by a variable r taking values from a subset Ω of the real line (Ω ⊆ ℝ¹). This means that each feasible preference of an agent, R_{A₀} ∈ ℜ_{A₀}, uniquely corresponds to a certain value of the parameter r ∈ Ω (known as the type of the agent). In many applications, the type of an agent is interpreted as the efficiency of his or her activity or as an optimal quantity of a resource, a plan assigned by a principal to the agent in question.

Choosing the action y ∈ A, an agent follows his or her preferences; in addition, the agent takes into account how the action would influence the result of activity z ∈ A₀. Notably, the agent is guided by a certain law W_I(·) representing the variation of the result depending on the action and the situation (the variable I contains information on the latter). The agent's choice is determined by the rule of individual rational choice P_{W_I}(ℜ_{A₀}, A, I) ⊆ A, which separates the set of the most beneficial actions (from the agent's view).
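For instance (with an illustrative choice of functions on our part): if the agent's goal function is f(y) = λ·y − y²/(2r), where λ is a reward rate and the type r is interpreted as efficiency, then the most beneficial action is y* = λ·r, so agents of a higher type choose larger actions. A quick numeric check:

```python
def best_action(lam, r, actions):
    """Most beneficial action for the goal function lam * y - y**2 / (2 * r)."""
    return max(actions, key=lambda y: lam * y - y ** 2 / (2 * r))

A = [k / 100 for k in range(501)]      # feasible action set A = [0, 5], discretized
for r in (0.5, 1.0, 2.0):              # agent types: larger r = higher efficiency
    print(r, best_action(1.5, r, A))   # prints y* = 0.75, 1.5, 3.0 (= 1.5 * r)
```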


According to a deep-rooted tradition in the theory of individual and collective decisions [1, 31, 45, 48], we define the rule of individual rational choice as follows, by proposing two hypotheses [12, 31]:


1) the hypothesis of rational behavior: an agent chooses (under all available information) actions leading to the most beneficial results of his or her activity;
2) the hypothesis of determinism: an agent strives to eliminate the existing uncertainty (based on all available information), so as to make decisions under complete information; equivalently, the final criterion used by a decision-maker (DM) must not involve uncertain parameters.

We have to clarify two expressions: "under all available information" and "the most beneficial results of activity." Let us start with the second one. There are different ways to specify individual preferences. In fact, the following approaches prevail: preference relations and utility functions (see Appendix 3). For a pair of alternatives, a binary relation defines the "best" alternative; on the other hand, a utility function maps each alternative into a real value, i.e., the utility of the alternative. According to the hypothesis of rational behavior, an agent chooses an alternative from the set of "best" alternatives. When utility functions are used, this is the set of alternatives where the utility function attains its maximum.

And so, the matter concerns the "best" alternative. However, generally the agent's preferences are defined on the set of activity results, which depend on the situation (as well as on the actions of the agent). Therefore, in the general case, no unique relationship exists between an action of the agent and the result of his or her activity. Accordingly, when choosing a certain action (making a certain decision), the agent should foresee the possible consequences of his or her actions and analyze the benefits of the corresponding results of activity. Note that the information on the situation (available to the agent) appears essential.

The process of passing from the preferences R_{A₀} defined on the set A₀ to the induced preferences⁴ defined on the set A by means of the law W_I(·) is said to be uncertainty elimination. When the preferences of an agent are a priori described by a utility function, his or her induced preferences are determined by a goal function. The latter maps each action of the agent to a certain real value (his or her "gain" yielded by choosing this action).

Studying mathematical models of decision-making, we will distinguish between objective uncertainty (incomplete information about the parameters of the external environment) and subjective uncertainty (incomplete information about the behavioral principles of other subjects). The corresponding basis for classification is given by the objects and subjects with incomplete information. The uncertainty regarding the parameters describing OS participants is called internal uncertainty, while the uncertainty regarding external parameters is referred to as external uncertainty.

⁴ We introduce the term "induced preferences," since preferences on the set of actions are generated (equivalently, induced) by preferences on the set of activity results and by the law of interdependence between actions and results.


[Figure 1.4. The structure of an agent's decision-making model: the agent, described by the tuple {A, A₀, Θ, w(·), v(·), I}, chooses an action y ∈ A; the controlled subject with technology w(·): A × Θ → A₀ transforms the action, under the situation θ ∈ Θ, into a result z ∈ A₀, which is observed by the agent.]
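The tuple {A, A₀, Θ, w(·), v(·), I} shown in Figure 1.4 maps directly onto a data structure; a minimal sketch with field names of our own choosing:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AgentModel:
    """Components of the decision-making model in Figure 1.4 (our field names)."""
    A: List[float]                        # feasible actions
    Theta: List[float]                    # feasible situations; knowing this set is
                                          # the agent's information I (interval case)
    w: Callable[[float, float], float]    # technology w(y, theta): action -> result in A0
    v: Callable[[float], float]           # utility function over results

# A toy instance; the set A0 of feasible results arises as the image of w.
toy = AgentModel(A=[0.0, 1.0, 2.0], Theta=[0.8, 1.2],
                 w=lambda y, t: y * t, v=lambda z: z - z ** 2 / 4)
```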

Moreover, external objective uncertainty is said to be uncertainty of nature (or uncertainty of the state of nature), while internal subjective uncertainty⁵ is known as game uncertainty.

In the sequel, we adopt the following model of the agent's preferences and awareness. Suppose that the agent's preferences on the set of feasible results of activity are defined by his or her utility function v(·). Assume that the result of activity z ∈ A₀ depends on the action y ∈ A and on the situation θ ∈ Θ: z = w(y, θ), where w(·) is a known function⁶. Then the law W_I(·) is characterized by the function⁷ w(·) (reflecting the structure of a passive controlled subject) and by the information I (available to the agent at the moment of decision-making). The structure of the agent's decision-making model is shown in Figure 1.4.

Note that the so-called input-output model of Figure 1.4 is common in classical control theory, the science studying control problems for passive (technical) systems. In that class of problems (except for models of man-machine systems), the control subject is passive as well.

Let us elucidate the meaning of information and the way a specific type of uncertainty is eliminated. First, consider objective uncertainty (external or internal). In this case, information on the situation is essential to the agent. Accordingly, the possible types of uncertainty are the following⁸:

1) the set of feasible situation values Θ′ ⊆ Θ. The corresponding uncertainty is said to be interval uncertainty; to eliminate it, one should apply the principle of maximum guaranteed result (PMGR), f(y) = min_{θ ∈ Θ′} v(w(y, θ)), the hypothesis of benevolence (HB), f(y) = max_{θ ∈ Θ′} v(w(y, θ)), their combinations, and so on;

⁵ As a rule, external subjective uncertainty is not considered. Indeed, it can easily be eliminated by incorporating the subjects (whose behavioral principles are not completely known to a decision-maker) into the OS. We involve such a description without losing generality. In multi-agent systems, the partners of each agent can be viewed as an external environment for that agent; consequently, their strategies form a "state of nature." However, each agent will then have a specific state of nature.
⁶ For a description of game uncertainty, the reader is referred to Appendix 1.
⁷ A mapping that connects the actions and the situation with the results of activity represents a "technology" of operation of a certain object controlled by the agent (see Fig. 1.1).
⁸ We suppose that the functions mentioned below possess unique points of maximum or minimum.

2) the probability distribution p(·) on the set Θ′ ⊆ Θ. The corresponding uncertainty is called probabilistic uncertainty; to eliminate it, one should apply the mathematical expectation f(y) = ∫_{θ ∈ Θ′} v(w(y, θ)) p(θ) dθ or evaluate possible risks (the variance) and higher-order moments;

3) the membership function μ_{Θ′}(·) of the fuzzy set Θ′ ⊆ Θ. The corresponding uncertainty is known as fuzzy uncertainty; to eliminate it, one should separate out the set of maximally undominated actions (see Appendix 4).

For all the types of uncertainty mentioned above, a "limiting" case is the one in which the result of activity is deterministic, i.e., the result does not depend on the situation (equivalently, the set Θ′ consists of a single element). In other words, each action y ∈ A corresponds to a unique result of activity z = w(y) ∈ A₀; then we assume that the preferences of the agent are defined on the set of his or her actions. Let v(·) represent the utility function of the agent; in the deterministic case, the agent's goal function is defined by f(y) = v(w(y)), and the principle of individual rational choice lies in that the agent chooses actions ensuring the maximum value of his or her goal function:

P_{W_I}(R^{A₀}, A, I) = Arg max_{y ∈ A} f(y).

Therefore, the hypothesis of determinism has the following effect. By eliminating the existing uncertainty (involving the PMGR, the mathematical expectation, the undominance relation, specific assumptions about the behavior of other agents, and so on, depending on the type of uncertainty), the agent passes from preferences that depend on the uncertain parameters to preferences that depend on his or her actions (namely, to the induced preferences). The hypothesis of rational behavior then states that the agent chooses actions that are best according to his or her induced preferences. In other words, the agent strives to maximize his or her utility (e.g., guaranteed utility, expected utility, etc.) by a proper choice of actions.

We have discussed individual decision-making. Now, let us analyze game (internal subjective) uncertainty; here the essential assumptions of an agent concern the set of feasible situations (the actions of the other agents, guided by behavioral principles that are not completely known to the agent in question). To describe collective behavior in a specific multi-agent OS (composed of a principal and several agents), it is not sufficient to define the preferences of the agents and the rules of rational individual choice; we also have to describe a model of their joint (collective) behavior.

The following aspect has been underlined earlier. In a single-agent system, the hypothesis of rational (individual) behavior implies that the agent seeks to maximize his or her goal function by a proper choice of action. In the case of several agents, we should account for their mutual influence. Consequently, a game arises, i.e., an interaction among the agents (participants of a certain OS) such that the utility of each agent (player) depends both

on his or her action (strategy) and on the actions of the other players (his or her opponents). Suppose that (owing to the hypothesis of rational behavior) each agent strives to maximize his or her goal function by choosing a proper strategy (action); evidently, in the case of several players, a rational strategy of each player depends on the strategies of the opponents. The set of such rational strategies, i.e., of stable and predictable outcomes of the game, is said to be a solution (an equilibrium) of the game.

Today, one would hardly find a common (universally recognized) definition of equilibrium in game theory. Different assumptions regarding the rational behavior of a player generate different concepts of equilibrium (see Appendix 1); generally speaking, the same game does not admit equilibria of all types.

Let us associate each of n players (agents) with the payoff function v_i(y), where y = (y₁, ..., y_n) ∈ A′ = ∏_{i ∈ N} A_i is the action vector of all players, and N = {1, 2, ..., n} is the set of players. Following the terminology adopted in game theory, the actions y_i are said to be strategies, while the vector y represents an action profile of the game. For player i, the vector y_{−i} = (y₁, ..., y_{i−1}, y_{i+1}, ..., y_n) is called an opponents' action profile.

Thus, rational collective behavior corresponds to the choice of equilibrium strategies by the players (with an a priori reservation for the type of equilibrium in each concrete game). Note that any concept of equilibrium must agree with the principles of individual rational choice (in the special case n = 1). Moreover, in game-theoretic models we may believe that the opponents' action profile determines the state of nature for the specific player (agent) considered, i.e., θ_i = y_{−i}, i ∈ N. Accordingly, all players have the same result of activity, namely the action profile: z_i = y, i ∈ N.

The information of a player (and the assumptions he or she adopts regarding the behavior of the other players) reflects his or her principle of uncertainty elimination. The whole set of uncertainty elimination principles used by the players generates the type of game equilibrium (for instance, the maximin equilibrium corresponds to the principle of maximum guaranteed result, the Bayesian equilibrium corresponds to the averaging principle, the Nash equilibrium corresponds to the assumption of fixed opponents' action profiles, and so on; see Appendix 1). A game equilibrium is a stable (in a certain sense) set of actions of system participants.

In other words, the subjective (game) uncertainty is often eliminated by making certain assumptions about the behavioral principles of system participants; as a result, their strategies can be uniquely redefined. Notably, we eliminate the subjective uncertainty in two stages: (1) choose the concept of equilibrium and (2) define the principle to be used by the players in choosing specific equilibrium strategies (when several equilibria exist). This could be the hypothesis of benevolence, the principle of guaranteed result, etc. [25, 49].
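As a simple illustration of these concepts, pure-strategy Nash equilibria of a finite two-player game can be found by direct enumeration: a profile is an equilibrium if no player gains by a unilateral deviation. The sketch below uses invented bimatrix payoffs.

# Sketch: pure-strategy Nash equilibria of a finite two-player game.
# The payoff matrices are invented for illustration; v1[y1][y2], v2[y1][y2].
import itertools

v1 = [[3, 0],
      [5, 1]]
v2 = [[3, 5],
      [0, 1]]

n1, n2 = len(v1), len(v1[0])
equilibria = []
for y1, y2 in itertools.product(range(n1), range(n2)):
    best1 = all(v1[y1][y2] >= v1[a][y2] for a in range(n1))  # player 1 cannot gain by deviating
    best2 = all(v2[y1][y2] >= v2[y1][b] for b in range(n2))  # player 2 cannot gain by deviating
    if best1 and best2:
        equilibria.append((y1, y2))

print("pure-strategy Nash equilibria:", equilibria)   # [(1, 1)] for these payoffs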

We have described the models of individual decision-making (models of collective decision-making are also discussed in Appendix 1 and in [1, 45, 48]). Let us proceed to the general formulation of control problems in organizational systems.

1.2. GENERAL CONTROL PROBLEM

Consider the formal statement of a control problem for a certain (passive or active) system.


Any control is always purposeful; therefore, we discuss the following issues. What should be understood as the required behavior of a controlled system? And, first of all, who specifies these "requirements"? When designing and analyzing a model, an expert in operations research generally acts as a control subject, i.e., a principal [12, 18]. Consequently, it is necessary to describe the preferences of the principal, as well as to consider the model of his or her decision-making (the principal's action is to choose a control for a fixed agent⁹).

Suppose that all OS participants know the relationship w(·) between the result of activity (on the one hand) and the chosen action and the existing situation (on the other hand); moreover, assume that this relationship is "frozen." In practice, this means that the technology of the agent's activity is fixed (or the technology of operation of the object controlled by the agent is fixed). Such an assumption is by no means critical; indeed, almost any change in the relationship between the action and the result can be expressed as a dependence of this relationship on the situation. In addition, modifying the "technology" w(·) of operation of the subject controlled by the agent is exactly a problem of classical control theory (control of passive, i.e., technical, systems).

Without loss of generality, we also believe that the set of situations Θ is known to all OS participants and is fixed. To fulfill this assumption, one can always choose a sufficiently large set of situations; in each specific case, the feasible values of the situations are then limited by the information available to the agent.

On the whole, the principal's model of decision-making is similar¹⁰ to the above model of decision-making used by an agent. The principal's model is described by the tuple¹¹ Σ₀ = {U_A, U_v, U_I, A₀, Θ, w(·), v₀(·), I₀}. Let us study the elements of the model (see Figure 1.5). The "actions" of a principal (the strategies chosen by him or her) are the control actions¹² u_A ∈ U_A, u_v ∈ U_v, and u_I ∈ U_I. Denote by u = (u_A, u_v, u_I) ∈ U = U_A × U_v × U_I the vector of control actions (control vector).

The majority of control models for organizational systems proceed from the following idea. The only role of a principal consists in implementing control actions, and he or she possesses no individual (direct) result of activity (in contrast to an agent). Thus, the result of the agent's activity is generally considered to be the result of the principal's activity. Therefore, the structure of a control system for an agent takes the form illustrated by Figure 1.5 (compare it with the structure of the agent's decision-making model in Figure 1.4).

⁹ Evidently, if we study the interaction between a fixed principal and a fixed agent, only the problems of institutional control, motivational control and informational control make sense. Accordingly, in this chapter staff control problems and structure control problems lie beyond consideration.
¹⁰ In the present chapter, the subscript "0" indicates variables chosen by a principal. However, the notation A₀ for the set of results of the agent's activity seems inappropriate (yet, it has been historically established).
¹¹ The uniform framework adopted to describe the models of decision-making in complex (multilevel hierarchical) systems implies the following. A principal may be treated as a subject controlled by a higher-level principal, while an agent represents a principal controlling a lower-level agent (compare Fig. 1.1 and Fig. 1.2).
¹² Here the subscripts correspond to different objects of control, viz., "A" relates to institutional control, "v" deals with motivational control, and "I" concerns informational control.

Figure 1.5. The structure of a control system. (The figure extends Figure 1.4 upward: the principal Σ₀ = {U_A, U_v, U_I, A₀, Θ, w(·), v₀(·), I₀}, using the information I₀, applies the control u to the agent {A, A₀, Θ, w(·), v(·), I}; the agent, using the information I, chooses an action y ∈ A, and the controlled object w(·): A × Θ → A₀ transforms this action and the situation θ ∈ Θ into the result z ∈ A₀.)

Next, recall that the principal's preferences v₀(·) are specified on the set A₀ of feasible results of the agent's activity. The latter depend on the actions of the agent and on the situation¹³. Hence, control lies in that the principal motivates the agent to choose specific actions. What actions should the principal stimulate the agent to choose?

The principal's preferences v₀(·) defined on the set U × A₀ (taking into account the information I₀ available to him or her) induce the corresponding preferences on the set U × A, i.e., the principal's goal function f₀(·). Note that the principal eliminates the existing uncertainty by the same techniques as the agent does (see above).

The rational choice P(·) of the agent (discussed earlier) depends on the control actions u ∈ U applied by the principal. In other words, the set of rational choice of the agent is P(u) = P_{W_I}(R^{A₀}(u_v), A(u_A), I(u_I)) ⊆ A. Thus, the principal can predict that, under a certain control u ∈ U, the agent chooses an action from the set P(u) ⊆ A. If this set consists of several elements (at least two), the principal deals with uncertainty regarding the agent's choice. It can be eliminated by one of the methods described above. For instance, involving the hypothesis of benevolence (or the principle of optimistic assessment) implies that the value of the principal's goal function under the control action u ∈ U constitutes

K(u) = max_{y ∈ P(u)} f₀(u, y).

¹³ Naturally, the situation for the principal (and his or her information about the situation) may differ from that of the agent. Moreover, the control model being considered does not cover the case of incomplete awareness of the principal about the agent (e.g., the type of the agent, the rules of uncertainty elimination and the principles of decision-making used by the agent). Yet, the stated shortcoming can easily be rectified. Incomplete awareness of the principal about the agent's type is incorporated in control mechanisms with revelation of information, which do not contradict the control model in question (see Chapter 3). Still, incomplete awareness of the principal about the principles of decision-making used by the agent has not been intensively investigated to date.

The quantity K(u), u  U, is said to be control efficiency. The essence of the hypothesis of benevolence is that, given a set of rational choice, an agent chooses an action being the most beneficial to the principal. A possible alternative lies in employing the principle of maximum guaranteed result–the principal evaluates the worst-case choice of the agent. As the result, we obtain the following formula of control efficiency (also known as guaranteed efficiency): K(u) = min f0 (u, y). yP (u )

Hence, the control problem for an organizational system can be formally posed as follows: find a feasible control action ensuring the maximum efficiency (an optimal control), i.e.,

K(u) → max_{u ∈ U}.

The model studied here is the basic (elementary) model of control in organizational systems, since it provides a uniform description of decision-making processes for OS participants. Indeed, on the one hand, in multilevel systems the interaction among participants at different levels of control can be described by "vertical" expansion of the structures shown in Figures 1.1-1.2; on the other hand, incorporating several principals or agents into the system corresponds to "horizontal" expansion of these structures.

Thus, we have formulated the control problem in its general setting. To understand how this problem is posed and solved in every specific case, let us consider the general technology of control in organizational systems.

1.3. CONTROL TECHNOLOGY FOR ORGANIZATIONAL SYSTEMS

Let us describe the control technology for organizational systems [39]. We understand technology as a set of methods, operations, procedures, etc., that are successively applied to solve a specific problem. Note that the technology of solving control problems discussed below covers all stages, from construction of the OS model to performance assessment during adoption (see Figure 1.8; for better clarity, the feedback loops between stages are omitted).

The first stage (model construction) consists in the description of a real OS in formal terms, i.e., specification of the staff and structure of the OS, the goal functions and sets of feasible strategies of principals and agents, their information, as well as the order of operation, behavioral principles, etc. This stage substantially involves the apparatus of game theory (see Appendix 1); as a rule, the model is formulated in terms of the latter.

The second stage (model analysis) lies in studying the behavior of principals and agents under certain control mechanisms. To solve a problem of game-theoretic analysis means to find the equilibria of the agents' game for a fixed control mechanism chosen by the principal(s).

Figure 1.8. Control technology for OS. (The diagram shows the stages in sequence. Theoretical study: system description and model construction; model analysis; the control synthesis problem; analysis of solution stability. Model adjustment: identification of the OS; simulation. Implementation: training of the managing staff, implementation, efficiency assessment of application, etc.)

After solving the analysis problem (i.e., knowing the behavior of the agents under various control actions of the principals), one may proceed to the third stage. First, one solves the direct control problem, that is, the problem of optimal control action synthesis (find a feasible control ensuring maximal efficiency). Second, one solves the inverse control problem (find the set of feasible controls driving the OS to the desired state). The control efficiency criterion is represented by the maximal (when the agents can be considered benevolent) or guaranteed (when benevolence cannot be assured) value of the principal's goal function over all equilibrium states of the agents' game. It should be emphasized that, in general, this stage causes major theoretical difficulties and seems the most time-consuming one for a researcher.

When the set of solutions to the control problem has been calculated, one can move to the fourth stage, namely, studying the stability of the solutions. Stability analysis implies solving (at the very least) two problems. The first problem is to study the dependence of optimal solutions on the parameters of the model; in other words, this is the problem of solution stability in its classical representation (well-posedness of an optimization problem, sensitivity, stability of the principles of optimality, etc.). The second problem is specific to mathematical modeling. It consists in a theoretical study of the adequacy of the model with respect to the real system; such a study implies efficiency evaluation for the solutions derived as optimal within the model when they are applied to the real OS (modeling errors cause the model to differ

from the real system). The solution of the adequacy problem is a generalized solution of the control problem, i.e., a family of solutions parameterized by the value of the guaranteed efficiency within a specific set of real OS [51].

Thus, the above-mentioned four stages constitute a general theoretical study of the OS model. To use the results of a theoretical study in practice, the model must be tuned: the system is identified and a series of simulations is conducted (the fifth and the sixth stages, respectively). The system is identified using those generalized solutions that rely only upon the information available in reality.

In many cases the simulation stage appears necessary for several reasons. First, one seldom succeeds in obtaining an analytical solution to the optimal control synthesis problem and in studying its dependence on the parameters of the model; in this situation, simulation gives a tool to derive and assess the solution. Second, simulation allows verifying the validity of the hypotheses adopted while constructing and analyzing the model (in the first place, with respect to the behavioral principles of system members, that is, the procedures used to eliminate uncertainty, the rules of rational choice, etc.). In other words, simulation gives additional information on the adequacy of the model without conducting a natural experiment. Finally (and this is the third reason), employing business games and simulation models in training lets the managing staff master and test the suggested control mechanisms.

The seventh stage, that of implementation, finalizes the process; it includes training of the managing staff, implementation of the control mechanisms (designed and analyzed at the previous stages) in the real OS, subsequent efficiency assessment, correction of the model, and so on.

From the description of the control technology in OS, we pass to general approaches to solving theoretical control problems.

1.4. GENERAL APPROACHES TO CONTROL PROBLEMS IN ORGANIZATIONAL SYSTEMS

For the direct control problem (see Section 1.2)

max_{y ∈ P(u)} f₀(u, y) → max_{u ∈ U},

one often faces two obstacles, viz., (a) determining the set of rational choice P(u) of the agent (in the multi-agent case, the solution set of the agents' game) and (b) searching for a control action which maximizes the efficiency. The existing experience in managing these obstacles is the following. Denote by U(y) = {u ∈ U | y ∈ P(u)} the set of controls implementing a given action y ∈ A. Let G(y) = max_{u ∈ U(y)} f₀(u, y) be the maximal value of the principal's goal function under the action above.

action above. Finally, introduce P(U1) =

 P(u)

as the set of actions being implemented

uU 1

by controls from the set U1 (here U1  U is a certain subset of the set of feasible controls U).


The problem max_{u ∈ U} max_{y ∈ P(u)} f₀(u, y) can be rewritten as max_{y ∈ P(U)} G(y); i.e., we have the optimal control in the form u* ∈ Arg max_{u ∈ U(y*)} f₀(u, y*), where y* stands for the optimal implementable action, y* ∈ Arg max_{y ∈ P(U)} G(y).

Assume that P(U) = A; this means that, under the given control constraints, all feasible actions are implementable. Next, K* = f₀(u*, y*) is the maximal value of the efficiency criterion (the principal's goal function), and û(y) = arg max_{u ∈ U(y)} f₀(u, y) represents a certain solution to the inverse control problem, i.e., a control implementing the action in question and maximizing the principal's goal function. The quantity û(y) is said to be the minimal control "costs" of the principal regarding implementation of the action y ∈ A. Consequently, the control problem can be expressed in the form

y* = arg max_{y ∈ A} f₀(û(y), y).

It would seem that the obtained statement is no easier than the original control problem. However, it allows deriving simple and interpretable solution principles. The central idea is either to guess a class of "elementary" controls containing an optimal solution, or to decrease the number of operations needed to enumerate the set of feasible controls. Evidently, this is possible by imposing additional constraints on the parameters of the OS model.

Guessing "elementary" controls (e.g., parametric control actions or control actions with additional properties) possesses an obvious advantage. Suppose we have demonstrated that for any feasible control there exists a control from a certain class with identical or greater efficiency. Then we may focus on this class of controls and search for an optimal control exactly within it. Classic examples of additional control properties in organizational systems are the concepts of incentive compatibility in an incentive problem [50] (the actions chosen by agents coincide with the plans, i.e., the actions recommended by a principal) and strategy-proofness in a planning problem [12, 39, 45, 46] (agents report the actual values of the parameters unknown to a principal).

Formally, the assertion regarding the optimality of the class of feasible controls U₁ ⊆ U can be rewritten as û(y*) ∈ U₁ or, equivalently, ∃ (u₁ ∈ U₁, y₁ ∈ A): f₀(u₁, y₁) = K*. Apparently, it suffices to require that ∀ y ∈ A ∃ u₁ ∈ U₁: u₁ = û(y). Despite the apparent "roughness" of this sufficient condition, it holds for many classes of controls in a series of (theoretically and practically) important control problems in organizational systems. Examples include compensatory and jump incentive schemes in incentive problems [50], proportional mechanisms of resource allocation and mechanisms of active expertise in planning problems [12, 39, 45, 46], the incentive problem in a multi-agent OS [50], and others.

Thus, in the present chapter we have considered the models of decision-making, provided the general statement of a control problem in OS, and described the technology and common approaches to control problems in OS.


Now, let us continue with the description of specific mechanisms that solve such control problems. First, we address the most intensively investigated ones, i.e., the mechanisms of motivational control: incentive mechanisms (Chapter 2), planning mechanisms (Chapter 3), mechanisms of organizing (Chapter 4), and mechanisms of controlling (Chapter 5). We then discuss the mechanisms of staff control (Chapter 6) and structure control (Chapter 7) in OS, as well as the mechanisms of informational and motivational control (Chapters 8 and 9, respectively).


Chapter 2


INCENTIVE MECHANISMS

An incentive means motivating a subject to perform specific actions; in organizational systems, a principal stimulates an agent by exerting an impact on his or her preferences (i.e., the goal function). Scientific research in the field of formal incentive models (within the framework of control theory) started in the late 1960s. The investigations were organized almost simultaneously and independently in the former USSR and in the USA, the UK, etc. The corresponding basic schools include the theory of active systems [9, 11, 12, 50], the theory of hierarchical games [18] and contract theory [8, 32, 57, 59]. Moreover, we underline that many incentive problems (labor demand, labor supply, etc.) are traditionally studied in labor economics [42]. In addition, applied incentive problems are considered theoretically and widely used in the practice of staff management.

The present chapter focuses on the description of the basic approaches developed and the results obtained for problems of incentive mechanism design. The material presented possesses the following structure. First, in Sections 2.1-2.2 we discuss incentive problems for a single-agent organizational system (in a continuous setting and in a discrete setting). Second, we describe the basic incentive mechanisms reflecting common forms and schemes of payment (Section 2.3). Next, Section 2.4 is devoted to incentive mechanisms in contract theory (notably, we treat an incentive problem for a single-agent system under stochastic uncertainty regarding the results of the agent's activity). Finally, Sections 2.5-2.11 deal with different incentive mechanisms intended for a collective of agents (agents performing a joint activity).

2.1. INCENTIVE MECHANISM: A CONTINUOUS MODEL

In control theory, the major modeling tool for incentive problems is game theory, a branch of applied mathematics which analyzes models of decision-making under noncoinciding interests of opponents (players); each player strives to influence the situation in his or her favor (see Appendix 1). An elementary game model is an interaction between two players: a superior (a principal) and a subordinate (an agent). Such an organizational system has the following structure: the principal occupies the upper level of the hierarchy, while the agent is located at the lower level. For instance, a principal is an employer, an immediate superior of an agent, or an organization which has concluded an


agreement with an agent (e.g., a labor contract, an insurance contract, a works contract, and so on). On the other hand, a wage worker, a subordinate employee, or an organization (being the second party in the corresponding contract) acts as an agent.

The agent's strategy is choosing an action y ∈ A from a set of feasible actions A. In practice, an action means hours worked, product units manufactured, etc. The set of feasible actions represents the set of alternatives available to the agent in his or her choice (e.g., a range of possible working time, a nonnegative production output satisfying certain technological constraints, to name a few).

Let us introduce a series of definitions. An incentive mechanism is a principal's decision-making rule concerning the rewards given to an agent. An incentive mechanism includes an incentive scheme; within the scope of the models considered in this book, an incentive scheme is completely defined by its incentive function. In turn, an incentive function specifies the relationship between the agent's reward (given by the principal) and the actions chosen by the agent. Therefore, in the sequel (to study game-theoretic models) we employ the terms "incentive mechanism," "incentive scheme" and "incentive function" as equivalents.

Given a set of feasible strategies M, the principal's strategy is choosing an incentive function σ(·) ∈ M, mapping each action of the agent into a nonnegative reward paid to the agent, i.e., σ: A → ℝ¹₊. Constraints on the set of feasible rewards can be imposed by various legal acts (e.g., a minimum wage) or adopted on the basis of the economic expedience of the principal's activity; wage-rate schedules may also be used for an agent, and so forth.

By choosing the action y ∈ A, the agent incurs the costs c(y); accordingly, the principal obtains the income H(y). We believe that the agent's cost function c(y) and the principal's income function H(y) are known a priori. The interests of the organizational system participants (the principal and the agent) are determined by their goal functions (alternatively, by their payoff functions or utility functions; we omit the dependence on the principal's strategy). Denote these functions by Φ(y) and f(y), respectively. The agent's goal function constitutes the difference between his or her reward and costs¹:

f(y) = σ(y) − c(y). (1)

At the same time, the principal's goal function represents the difference between his or her income and the costs of motivating the agent, i.e., the reward paid to the latter:

Φ(y) = H(y) − σ(y). (2)
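For illustration, here is a minimal sketch of the goal functions (1)-(2) under a linear incentive function σ(y) = αy; the functional forms follow the example given at the end of this section (H(y) = y, c(y) = y²/2r), while the values of α and r are assumptions.

# Sketch of the goal functions (1)-(2); parameters are illustrative.
r, alpha = 2.0, 0.6

H = lambda y: y                          # principal's income
c = lambda y: y * y / (2 * r)            # agent's costs
sigma = lambda y: alpha * y              # incentive function chosen by the principal

f   = lambda y: sigma(y) - c(y)          # agent's goal function (1)
Phi = lambda y: H(y) - sigma(y)          # principal's goal function (2)

ys = [i / 100 for i in range(0, 301)]    # grid of feasible actions
y_best = max(ys, key=f)                  # agent's rational choice: y = alpha * r
print("agent chooses y =", y_best, "; principal obtains Phi =", Phi(y_best))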

Thus, we have defined the goal functions reflecting the preferences of the OS participants. Now, it seems reasonable to discuss the distinction between financial incentives and non-financial recognition. The presence of a scalar goal function implies the existence of a uniform equivalent used to measure all components of the goal function (the agent's costs, the principal's income and, naturally, the reward itself).

¹ In this book, each section has independent numbering of formulas.


When we speak about financial incentives of an agent, the equivalent is money. Consequently, the principal's income has an obvious interpretation (furthermore, most of the research describing formal incentive models proceeds from expressing the agent's reward and the principal's income in money terms). A somewhat intricate matter concerns the agent's costs: measuring in money, e.g., the agent's satisfaction with his or her work is not always possible. In an economic sense, one may understand the agent's costs as a money equivalent of the efforts required for choosing a specific action. Accordingly, it seems natural to suggest the idea of cost compensation; notably, the reward paid by the principal must (at least) cover the agent's costs. For a formal statement, see below.

Suppose the agent's costs are expressed in terms of some "utility" (e.g., bodily fatigue, "psychic" income, and so on), and such utility has nothing in common with money; moreover, the measuring units of the utility cannot be reduced to units of money by a linear transformation. In this case, summing up and subtracting the utility in the goal function (1) is well-posed only if we define the utility of a reward. For instance, if financial incentives are involved, one may introduce the utility function ũ(σ(y)) that describes the utility of money for the agent. The agent's goal function then takes the form

f(y) = ũ(σ(y)) − c(y).

Let us make the following assumptions, used throughout this chapter (unless otherwise stated). First, suppose that the set of feasible actions of the agent is the positive real line; zero action means the agent's non-participation in the OS (inaction). Second, assume that the cost function is nondecreasing and continuous and vanishes at the origin (sometimes we will also require its convexity and continuous differentiability). Third, suppose that the income function is continuous, possesses nonnegative values, and attains its maximum under nonzero actions of the agent. Fourth, assume that the reward paid to the agent by the principal is nonnegative.

We elucidate these assumptions below. The first assumption implies that the feasible actions of the agent are nonnegative real values, e.g., hours worked, production output, etc. Such a structure of feasible actions determines the "continuous" nature of the model considered in Section 2.1; the corresponding discrete model of incentives (with a finite set of feasible actions) is discussed in Section 2.2.

According to the second assumption, the choice of greater actions requires (at least) the same costs from the agent. For instance, the costs may grow following a rise in production output. In addition, zero action (the agent's inactivity) causes no costs, while marginal costs² increase under greater actions; i.e., each subsequent increment of the action (by a fixed quantity) incurs higher costs.

The third assumption imposes certain constraints on the principal's income function by requiring that the principal benefit from the agent's activity. Evidently, no incentive problem exists otherwise (if the principal's income function attained its maximum under zero action of the agent, the former would pay nothing to the latter and the agent would remain inactive). The fourth assumption means that the principal does not penalize the agent.

² In economics, marginal costs are defined as the derivative of a cost function.


The models of decision-making given in Chapter 1 state the following: rational behavior of an OS participant lies in maximizing his or her goal function (under all available information) by choosing a proper strategy. Let us define the awareness of the players and the sequence of moves. Suppose that at the moment of decision-making (strategy choice) the OS participants know all the goal functions and all the feasible sets. A specific feature of the game-theoretic incentive problem is a fixed sequence of moves (the game Γ₂ with side payments in the theory of hierarchical games, see Appendix 1 and [18]). The principal (a metaplayer) has the right to move first; thus, he or she reports the chosen incentive function to the agent. Under the available information on the principal's strategy, the agent then chooses an action maximizing his or her goal function.

And so, we have described the basic parameters of an OS (staff, structure, feasible sets, goal functions, awareness and sequence of moves). Now, we pose the control problem proper: the problem of optimal incentive mechanism design. Recall that the agent's goal function depends both on his or her strategy (action) and on the incentive function. According to the hypothesis of rational behavior, the agent chooses actions maximizing his or her goal function (for a given incentive function). Apparently, the set of such actions (referred to as the set of implementable actions) depends on the incentive scheme established by the principal.

The central idea of motivation lies in that the principal stimulates the agent's choice of specific actions by varying the incentive scheme. Since the principal's goal function depends on the agent's action, the efficiency of an incentive scheme is the value of the principal's goal function on the set of the agent's actions implemented by the incentive scheme (in other words, on the set of actions chosen by the agent under the given incentive scheme). Consequently, the incentive problem is to find an optimal incentive scheme, i.e., an incentive scheme of maximal efficiency. A formal definition is given below.

The set of the agent's actions maximizing his or her goal function (depending on the incentive scheme adopted by the principal) is said to be the solution set of the game or the set of implementable actions (for the given incentive scheme):

P(σ) = Arg max_{y ∈ A} [σ(y) − c(y)]. (3)

Being aware that the agent chooses actions from the set (3), the principal must find an incentive scheme maximizing his or her goal function. Generally, the set P(σ) is not a singleton; therefore, we have to redefine the agent's choice (in the sense of the principal's preferences regarding the behavior of the agent). Unless otherwise stated, we believe that the hypothesis of benevolence³ (HB) is valid: the agent chooses from the set (3) the action most beneficial to the principal. A possible alternative for the principal is to expect the worst-case choice of the agent. Consequently, the efficiency of the incentive scheme σ ∈ M is defined by

K(σ) = max_{y ∈ P(σ)} Φ(y), (4)

with the function Φ(y) given by formula (2). If the hypothesis of benevolence is rejected, the guaranteed efficiency K_g(σ) of the incentive scheme σ ∈ M equals

K_g(σ) = min_{y ∈ P(σ)} Φ(y).

³ The hypothesis of benevolence consists in the following. If an agent is indifferent in choosing among several actions (e.g., actions ensuring the global maximum of his or her goal function), he or she definitely chooses the action most beneficial to the principal (the action maximizing the principal's goal function).

The direct incentive problem (the problem of optimal incentive scheme design) is to choose a feasible incentive scheme of maximal efficiency:

K(σ) → max_{σ ∈ M}. (5)

The inverse incentive problem is to find the set of incentive schemes implementing a given action or (in a more general setting) a given set of actions A* ⊆ A. For instance, in the case A* = {y*}, the inverse problem lies in finding the set M(y*) of incentive schemes implementing this action: M(y*) = {σ ∈ M | y* ∈ P(σ)}. Once the set M(y*) has been evaluated, the principal can choose from it the "minimal" incentive scheme (in the sense of minimal costs to motivate the agent) or an incentive scheme with certain properties (e.g., monotonicity, linearity, etc.).

It should be emphasized that the above assumptions are consistent. The agent is always able to choose zero action, causing no costs (the second assumption); at the same time, the principal can pay nothing to the agent for such an action. All interpretations of game-theoretic models of incentives presuppose a special alternative for the agent: the agent may preserve the status quo, i.e., not cooperate with the principal (concluding no labor contract with the latter). Not participating in the given OS, the agent obtains no reward from the principal; still, he or she can always choose zero action and guarantee a nonnegative (more specifically, zero) value of the goal function. Suppose that

outside the given OS the agent may receive a guaranteed utility Ū ≥ 0 (in contract theory, this is the unemployment relief constraint or the reservation wage constraint). Accordingly, in the case of participation in the OS considered, the agent should be given (at least) the same level of utility. Taking the reservation utility into account, the set of implementable actions (3) takes the form

P(σ, Ū) = Arg max_{y ∈ A: σ(y) − c(y) ≥ Ū} {σ(y) − c(y)}. (6)
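A small sketch of formula (6); the cost function, the plan x and the value of Ū are invented for illustration. With a jump scheme paying c(x) + Ū at the plan and nothing elsewhere, only the plan passes both the participation constraint and the maximization in (6).

# Sketch: implementable actions (6) under a reservation utility U_bar.
U_bar, x_plan = 0.5, 1.5                 # reservation utility and the plan
c = lambda y: y * y / 2                  # agent's costs (illustrative)
sigma = lambda y: (c(x_plan) + U_bar) if y == x_plan else 0.0

ys = [i / 10 for i in range(0, 31)]      # grid of feasible actions
feasible = [y for y in ys if sigma(y) - c(y) >= U_bar]   # participation constraint
best = max(sigma(y) - c(y) for y in feasible)
P = [y for y in feasible if sigma(y) - c(y) == best]
print("P(sigma, U_bar) =", P)            # only the plan x = 1.5 remains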

In the sequel, we assume zero reservation utility for simplicity of exposition.

Making a small digression, let us discuss the model of the agent's decision-making in greater detail. Suppose that a certain agent wants to get a job at an enterprise. He or she is offered a contract {σ(y), y*}, with a specific relationship σ(·) between the reward and the results y of his or her activity; in addition, the agent is informed of the expected results y* of the activity. Under what conditions does the agent sign the contract? Note that both sides,


viz., the agent and the enterprise (the principal), make the decision regarding signing the contract independently and voluntarily. To answer this question, we start by considering some principles used by the agent.

The first condition, the incentive compatibility constraint, consists in the following: in the case of participation in the contract, the choice of the action y* (only and exactly this action) maximizes the agent's goal function (utility function). In other words, the incentive scheme agrees with the interests and preferences of the agent.

The second condition, the contract participation condition (also known as the individual rationality constraint), claims the following: by signing the given contract, the agent expects to gain a greater utility than in the case of concluding an agreement with another organization (another principal). The agent's beliefs regarding possible income at the labor market are characterized by the reservation wage; accordingly, the reservation wage constraint represents a special case of the individual rationality constraint.

Similar constraints of incentive compatibility and individual rationality can be applied to the principal as well. Imagine there exists a single agent, a contender to sign the contract; then the contract will be beneficial to the principal under two conditions. The first condition is analogous to the incentive compatibility constraint; it reflects the conformity of the incentive scheme with the interests and preferences of the principal: applying exactly the incentive scheme mentioned in the contract attains the maximum of the principal's goal function (utility function), see formula (4). The second condition for the principal is identical to the contract participation constraint used for the agent: signing the contract with the agent appears more beneficial to the principal than preserving the status quo (rejecting the contract). For instance, assume that the profits of the enterprise (the principal's goal function) are zero without the contract; then the profits must be strictly positive in the case of signing it.

Thus, we have discussed the prerequisites of mutually beneficial labor contracts between agents and principals. Getting down to formal analysis, we now solve the incentive problem (5). Note that a "frontal attack" on this problem seems rather difficult. Fortunately, one may guess the optimal incentive scheme based on certain considerations; after that, its optimality can be proved rigorously.

Assume that a certain incentive scheme σ(·) has been used by the principal, and under this scheme the agent has chosen the action x ∈ P(σ(·)). Let another incentive scheme σ̃(·) be involved such that it vanishes everywhere except the plan (the point x) and coincides with the previous incentive scheme at the point x:

σ̃(y) = { σ(x), if y = x; 0, if y ≠ x }.

Then, under the new incentive scheme, the same action of the agent still ensures the maximum of his or her goal function. Let us provide a formal proof of this assertion. The condition that the chosen action x ensures the maximum of the agent's goal function (provided that the incentive scheme σ(·) is used) can be rewritten in the following form: the difference between compensation and costs is not smaller than in the case of choosing any other action:


σ(x) − c(x) ≥ σ(y) − c(y) ∀ y ∈ A.

Now, replace the incentive scheme σ(·) with σ̃(·) and obtain the following. The incentive scheme σ̃(·) still equals the incentive scheme σ(·) at the point x, while the right-hand side of the expression includes the incentive scheme σ̃(·), which vanishes for y ≠ x:

σ(x) − c(x) ≥ 0 − c(y) ∀ y ≠ x.

The validity of the first system of inequalities implies the same for the second one (recall that rewards are nonnegative). Hence, x ∈ P(σ̃(·)), and the proof is completed.

Apparently, under the introduced assumptions the agent obtains (at least) zero utility by participating in the organizational system. The condition of the agent's nonnegative utility,

∀ y ∈ P(σ): f(y) ≥ 0, (7)

forms the individual rationality constraint. Hence, the actions implemented by an incentive scheme necessarily give the agent a nonnegative value of the goal function (see formula (6)):

P₀(σ) = {y ∈ A | σ(y) ≥ c(y)} ⊇ P(σ). (8)

Figure 2.1 shows the curves of the functions H(y) and c(y) + Ū. From the principal's viewpoint, the reward cannot exceed the income gained from the agent's activity (indeed, the principal always obtains zero utility by refusing to cooperate with the agent); hence, a feasible solution lies below the curve of the function H(y). On the other hand, the agent believes the reward cannot be smaller than the sum of the costs and the reservation utility (zero action of the agent always yields the reservation utility); consequently, a feasible solution lies above the curve of the function c(y) + Ū.

In addition, Figure 2.1 demonstrates the domain of actions implementable under the constraints of individual rationality (σ(y*) ≥ c(y*) + Ū) and incentive compatibility (∀ y ∈ A: σ(y*) − c(y*) ≥ σ(y) − c(y)), taking into account the non-negativity of the principal's goal function. The set of the agent's actions and the corresponding values of the goal function satisfying the above-mentioned constraints (incentive compatibility, individual rationality and others, both for the principal and for the agent) is said to be the "domain of compromise" (the shaded area in Figure 2.1). The set of the agent's actions ensuring a nonempty domain of compromise is given by

S = {x ∈ A | H(x) − c(x) − Ū ≥ 0}. (9)

Obviously, under fixed income and cost functions, increasing the quantity Ū gradually makes the domain of compromise degenerate.
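The set S in (9) is easy to compute numerically; in the sketch below, the functions H and c and the value of Ū are assumptions chosen for illustration.

# Sketch: the set S (9) on which the domain of compromise is nonempty.
import numpy as np

U_bar = 0.25
H = lambda y: np.sqrt(y)                 # principal's income (illustrative)
c = lambda y: y * y / 2                  # agent's costs (illustrative)

ys = np.linspace(0.0, 2.0, 2001)
S = ys[H(ys) - c(ys) - U_bar >= 0]       # actions with H(x) - c(x) - U_bar >= 0
print("S is approximately [%.3f, %.3f]" % (S.min(), S.max()))
# Raising U_bar shrinks this segment until the domain of compromise vanishes.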

Figure 2.1. Optimal solution to the incentive problem. (The figure plots the principal's income H(y) and the curve c(y) + Ū against the action y. The domain of compromise is the shaded area between the two curves over the segment S of the action axis; at the optimal action y*, its boundary points A and B mark the utility difference divided between the participants.)

Remember that the principal seeks to minimize payments to the agent, provided that the latter chooses the required action. This means that the point of the optimal contract (under the hypothesis of benevolence) should be located on the lower boundary of the domain of compromise (the shaded domain in Figure 2.1). In other words, the reward should be exactly equal to the sum of the agent's costs and the reservation utility. This important conclusion is referred to as "the principle of cost compensation": to motivate the agent to choose a specific action, the principal has to compensate merely the agent's costs. The principal may also establish a bonus⁴ δ ≥ 0 (in addition to cost compensation). Therefore, for the agent to choose the action x ∈ A, the principal's incentive must be not smaller than

σ(x) = c(x) + Ū + δ. (10)

We draw the reader's attention to the following. If the agent chooses any action differing from the plan x and his or her reward is zero, then the constraints of incentive compatibility and individual rationality are satisfied for the agent, and the reward (10) given by the principal is maximal. Thus, we have proven that a parametric solution to the problem (5) is defined by the incentive scheme

σ_K(x, y) = { c(x) + Ū + δ, if y = x; 0, if y ≠ x }. (11)

Here the parameter is x  S. Such incentive schemes are referred to as compensatory (Ctype) ones. The principle of costs compensation forms a sufficient condition of implementability of the required action.

4

Suppose that the hypothesis of benevolence takes no place; to find the most efficient incentive, the principal applies the principle of maximal guaranteed result on the maxima set of the agent’s goal function. Formally, the bonus must be then strictly positive (but not arbitrarily small!). The hypothesis of benevolence remaining valid, the bonus may be zero. Generally, a bonus reflects the aspects of non-financial recognition.


Now, let us analyze what action should be implemented by the principal, i.e., what value x ∈ S appears optimal. According to (10)-(11), the reward equals the agent's costs; thus, the optimal implementable action y* maximizes (on the set S) the difference between the principal's income and the agent's costs. Hence, the optimal implementable action is a solution to the following standard optimization problem:

y* = arg max_{x ∈ S} {H(x) − c(x)}, (12)

known as the problem of optimal incentive-compatible planning [9, 12]. Actually, the action to be performed by the agent (as the result of the principal's incentive) can be viewed as a plan: an action of the agent desired by the principal. The principle of cost compensation implies that the plan is incentive-compatible (recall that an incentive-compatible plan appears beneficial to the agent), and the principal has to find an incentive-compatible plan due to (11). Within the framework of the hypothesis of benevolence, the value of the principal's goal function under the optimal compensatory incentive scheme constitutes

Φ = max_{x ∈ S} {H(x) − c(x)}.
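A minimal sketch of the two steps just described, with illustrative H and c: the plan (12) is found on a grid, and the compensatory scheme (11) (with Ū = 0 and δ = 0) is built around it.

# Sketch: the optimal plan (12) and the compensatory scheme (11) implementing it.
H = lambda y: y ** 0.5                   # principal's income (illustrative)
c = lambda y: y * y / 2                  # agent's costs (illustrative)

ys = [i / 1000 for i in range(0, 2001)]
y_star = max(ys, key=lambda y: H(y) - c(y))          # plan: argmax of H - c
sigma_K = lambda y: c(y_star) if y == y_star else 0  # C-type scheme (11)

print("optimal plan y* =", y_star)                   # analytically, y* = 2**(-2/3)
print("principal's payoff:", H(y_star) - sigma_K(y_star))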

Assume that the income and cost functions are differentiable, the principal's income function is concave, and the agent's cost function is convex. In the model considered, the optimality condition for the plan y* takes the form dH(y*)/dy = dc(y*)/dy. In economics, the quantity dH(y)/dy is said to be the marginal rate of production (MRP), and dc(y)/dy is the marginal costs (MC). The optimum condition (MRP = MC) determines the action y* and corresponds to the so-called effective wage c(y*) + Ū.

Let us point out an important interpretation of the condition (12). The optimal plan y* maximizes the difference between the principal's income and the agent's costs, i.e., attains the maximum of the sum of the goal functions (1)-(2) of the OS participants. Consequently, it turns out to be efficient in the sense of Pareto.

Note that the compensatory incentive scheme (11) is not the only optimal incentive scheme. One may easily show that, under the hypothesis of benevolence, a solution to the problem (5) is any incentive scheme σ*(·) meeting the conditions σ*(y*) = c(y*) + Ū and ∀ y ≠ y*: σ*(y) ≤ c(y). See Figure 2.2, where three optimal incentive schemes σ₁*, σ₂*, and σ₃* are shown.
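A quick numerical check (quadratic costs and Ū = 0 are assumptions for illustration): a jump scheme and a scheme that follows c(y) below the plan both satisfy the conditions above, so the plan belongs to the agent's set of maxima in both cases and costs the principal the same amount.

# Sketch: two optimal schemes implementing the same plan x (U_bar = 0).
c = lambda y: y * y / 2
x = 1.0                                       # the plan
sigma1 = lambda y: c(x) if y == x else 0.0    # jump (C-type) scheme
sigma2 = lambda y: min(c(y), c(x))            # follows c(y), capped at c(x)

ys = [i / 100 for i in range(0, 201)]
for name, s in (("jump", sigma1), ("capped", sigma2)):
    best = max(s(y) - c(y) for y in ys)
    P = [y for y in ys if s(y) - c(y) == best]
    # The plan x lies in the argmax set; under the hypothesis of benevolence
    # the agent picks x, and the principal pays c(x) in both cases.
    print(name, "scheme: x in P:", x in P, "; reward at the plan:", s(x))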

The notion of a domain of compromise is extremely important methodologically. A nonempty domain of compromise implies the possibility of coordinating the interests of the principal and the agent under the existing conditions. Let us clarify this. In the formal model of incentives, the strategies of the participants are limited by the corresponding feasible sets.

Figure 2.2. Optimal incentive schemes. (The figure shows, against the action y, the curve c(y) + Ū and three optimal incentive schemes σ₁*, σ₂*, and σ₃*, each meeting the above conditions at the plan y*.)

Rigorous consideration of the individual rationality constraints of the agent (we believe that the reservation wage parameter Ū reflects existing restrictions at the labor market) and of the principal (we believe that the nonnegativity of the principal's goal function reflects financial restrictions on the efficiency of the principal's activity, i.e., the costs of motivating the agent must not exceed the income gained from his or her activity), as well as of the other incentive-compatibility conditions, narrows the set of "rational" strategies down to the domain of compromise.

In fact, a compromise between the principal and the agent consists in some allocation of utility (actually, they divide the difference between the utilities at the points A and B, see Figure 2.1). By making the first move (i.e., offering a contract), the principal "appropriates" this difference, compelling the agent to agree to the reservation utility. Consider the opposite situation, when the agent makes the first move and suggests his or her contract to the principal; evidently, zero utility is then obtained by the latter, while the former "collects" the difference between the utilities at the points A and B. Intermediate situations are also possible, with an a priori fixed rule of allocating the "income" AB between the participants (based on a compromise mechanism [39, 50]).

The performed analysis implies that the incentive problem is solved in two stages. Notably, the first stage serves for treating the coordination problem (i.e., that of defining the set of implementable actions under the given constraints). At the second stage, one deals with the problem of optimal incentive-compatible planning (searching for the implementable action most beneficial from the principal's viewpoint). Such a decomposition approach is widely used in many sophisticated control problems for OS.

An essential advantage of compensatory incentive schemes consists in their simplicity and high efficiency. Yet, an appreciable drawback is their absolute instability against possible uncertainties in the model parameters. Indeed, suppose that the principal does not know the agent's cost function exactly; then an arbitrarily small inaccuracy may lead to considerable variations of the actions implemented. The issues of the adequacy of incentive models and the stability of optimal solutions are studied in detail in [18, 50, 51]. The analysis framework and the methods of improving the guaranteed efficiency of incentives (proposed in the above-mentioned works under the information available to the principal) can be directly applied to the models considered below; thus, here we skip the issues of adequacy and stability.


Recall the optimal solution to the incentive problem derived above (i.e., when the principal adopts a compensatory incentive scheme): in the case of plan fulfillment, the value of the agent's goal function equals zero (or the sum of the reservation utility and a bonus). Therefore, special attention (due to its widespread occurrence) should be paid to the following situation. A labor contract (alternatively, an agreement between a customer (a principal) and an executor (an agent)) fixes the agent's profitability norm ξ ≥ 0; in other words, the agent's reward depends on his or her action as

σ_ξ(x, y) = { (1 + ξ) c(x), if y = x; 0, if y ≠ x }.

This is an incentive scheme with profitability norm [39, 50]. By assuming zero reservation utility of the executor, one obtains that the problem of optimal incentive-compatible planning takes the following form (compare with formula (12)): y*() = arg max {H(y) – (1 + ) c(y)}. y A

Hence, the maximum value of the principal’s goal function constitutes:

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

 () = H (y*()) – (1 + ) c (y*()).

Obviously,    0:  ()  . Consider an example. Set H (y) = y, c (y) = y2 / 2 r. Then we have: y*() = r / (1 + ),  () = r / 2 (1 + ). The conditions of individual rationality imply that   0. In the example under consideration, the agent’s income  c (y*()) attains the maximum under  = 1, i.e., the agent benefits twice more from overstating the amount of work performed. According to the principal’s viewpoint, zero profitability is preferable. Thus, we have described the approach for studying the incentive problem based on the analysis of properties being intrinsic to the sets of implementable actions. Yet, there exists an alternative approach to investigate incentive problems. Above we have defined the set of actions implemented within a certain incentive scheme; after that, we have evaluated the maximum of the goal function (on this set) and chosen the corresponding incentive scheme. Note that solution process of the incentive problem has been decomposed into two stages, viz., the stage of interests’ coordination and the stage of incentive-compatible planning. This procedure possesses the following explicit representation. At the first stage, for each feasible incentive scheme   M one evaluates the set of implementable actions P() and “sums” them up: PM =

 P( ) .

At the second stage, one solves the problem of incentive-

 M

compatible planning, i.e., the maximization problem for the principal’s goal function on the set PM (also, see the general approach to control problems in organizations in Section 1.4). Being able to solve the direct incentive problem, we easily find the solution to its inverse counterpart. For instance, the expression (9) allows for determining the minimal constraints to-be-imposed on rewards for implementability of given actions.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

30

Interestingly, we have actually “guessed” the optimal solution without a “frontal attack” of the incentive problem5. The idea to introduce the sets of implementable actions has been of perceptible use here. An alternative technique lies in analyzing the minimal “costs” of the principal to motivate the agents6. Let us discuss it. Suppose that the same action can be implemented within different incentive schemes. Apparently, greater efficiency is gained by an incentive scheme with smaller costs to motivate the agents. In other words, the optimal class of incentive schemes implements any action of the agent under the minimal principal’s costs to motivate the latter. Despite its self-evidence, this statement provides a universal tool to solve incentive problems; below we will use it intensively. Consider a class of feasible incentive schemes M; the minimal feasible reward that would stimulate agent’s choosing a required action y  PM is said to be the minimal principal’s costs to implement the action. This quantity is defined by

 min (y ) = min { (y) | y  P(), H(y) –  (y)  0}.  M

(13)

If actions are not implementable in the class M, the corresponding minimal costs to motivate implementation of these actions are assumed to be infinite:

 min ( y ) =+, y  A \ PM.

(14)

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Hence, the introduced assumptions allow for reformulating the principle of costs compensation as  y  PM :  min ( y ) = с (y). We emphasize that the principle of costs compensation should not be understood verbatim et literatim. The agent’s “costs” may include certain norms of profitability, and so on. The notion of the minimal costs to motivate implementation of actions is of crucial importance. Analysis of these costs enables solving the design problem for optimal incentive function, studying the properties of optimal solutions, etc. [12]. Thus, we have shown that a C-type incentive scheme forms the optimal solution to the incentive problem (under the adopted assumptions). One would think, “What else can be “taken out” from this model?” The whole point is that we have supposed feasibility of compensatory incentive schemes. Unfortunately, in practice a principal may be restricted by a fixed class of incentive schemes. Such restrictions are subject to exogenous factors (e.g., legal norms for wages) or to endogenous factors (e.g., the principal inclines towards piece wage or time-based wage instead of simple compensation of costs–see Section 2.3).

5

We should admit the following aspect. In the theory of control in organizations, a researcher often guesses solutions (based on intuition, meaningful considerations, etc.) and strives for deriving the corresponding analytical solution. The reasons are clear enough; indeed, the analysis of a formal model of an organizational system is not an end in itself for a researcher. Quite the contrary, he or she has to propose the most adequate (reality-consistent and easily interpretable) solution to a control problem. 6 Let us make a remark on terminology. The notion “costs” characterizes the costs of an agent to choose a specific action. On the other part, the notion “costs to motivate implementation of an action” characterizes the costs required for the principal to stimulate implementation of the action.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

31

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

2.2. INCENTIVE MECHANISM: A DISCRETE MODEL Let us formulate and solve the discrete incentive problem in a two-level OS which consists of a principal and a single agent. Actually, this is the discrete counterpart of the problem studied in Section 2.1 (an incentive problem is said to be discrete if the agent’s set of feasible actions appears finite). Solution to the discrete incentive problem. Assume that an agent has a finite set of feasible actions: N = {1, 2, …, n}. In the absence of incentives, the agent’s preferences form the vector q = (q1, q2, …, qn), whose components mean the income gained by the choice of a corresponding action. The principal’s control lies in choosing an incentive scheme  = (1, 2, …, n), i.e., in paying a reward (negative or positive) to the agent for specific actions. We will believe that the rewards are unbounded. The agent’s “goal” function f = (f1, f2, …, fn) represents the sum of the income and the reward: fi = qi + i, i  N. Within the hypothesis of rational behavior (see Section 1.1) and under a known incentive scheme, the agent chooses an action maximizing his or her goal function. Imagine that several such actions exist; in this case, we suppose the agent chooses the action being the most beneficial to the principal (in a certain sense)–the hypothesis of benevolence takes place. The efficiency of an incentive scheme (a control mechanism) is the maximal value of the principal’s goal function on the set of agent’s action implementable under the incentive scheme. The incentive problem consists in establishing a certain incentive scheme (by the principal) such that the agent chooses the most beneficial action to the principal. Solution to this problem seems elementary (see Section 2.1). Indeed, for a fixed incentive scheme , one defines the set of agent’s actions attaining the maximum to the agent’s goal function (referred to as the set of implementable actions): P() = {i  N | fi  fj, j  N}. After that, one searches for an incentive scheme which implements the action being most beneficial to the principal. Assume that the principal’s goal function  = (1, 2, …, n) makes up the difference between the income and the incentive, i.e., i = Hi – i, i  N. Consequently, we obtain the following optimal incentive scheme:

* = arg max max {Hi – i}. iP ( )



Rewrite the set of implementable actions as P() = {i  N | qi + i  qj + j, j  N}. The minimal incentive scheme (i.e., possessing the minimal value at each point) which implements all actions of the agent under the hypothesis of benevolence, is the compensatory incentive scheme K = (  1 , K

 2K , …,  nK ). It is determined by

 Kj = qk – qj, j  N,

(1)

where k = arg max qj. j N

The set of optimal implementable actions (according to the principal’s view) takes the form

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

32

P(, f) = Arg max {Hi –  iK } = Arg max {Hi – qk + qi}.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

i N

i N

(2)

Being a solution to the incentive problem, the compensatory incentive scheme in medias res makes all feasible actions of the agent equivalent (in the sense of his or her goal function values). In other words, this scheme compensates exactly the costs incurred by the agent as the result of choosing the required action (as against choosing the action k which yields the maximum gain in the absence of incentives). Indeed, the principal should not pay excess for the choice of the action k. Thus, if we state the incentive problem in terms of the agent’s goal functions, his or her preferences on a finite set are defined by the vector q. Components of the vector are certain values, and the differences between them (see formula (1)) represent minimal payments that make the corresponding pairs of actions equivalent (in the sense of values of the agent’s goal function values). An alternative approach is to describe the preferences directly on the pairs of agent’s actions. In other words, one can just enumerate n2 – n values (e.g., expert information obtained during paired comparison of different options); they mean relative preference of actions in the sense of minimal excess payments required for equivalence of the corresponding pair. Such technique and its correlation with the method of defining preferences in terms of the goal functions are discussed in the current section. The incentive problem stated in terms of metrized binary relations. The agent’s goal function depends on the incentive scheme established by the principal; accordingly, this function generates a complete antisymmetric transitive binary relation on the set N (see Appendix 3). Moreover, one may always find (at least, a single) alternative (an action) being undominated with respect to this relation. The incentive problem allows for the following statement in terms of the binary relation: find an incentive scheme such that the most beneficial alternative for the principal is undominated. However, the above statement of the problem seems somewhat artificial. First, we loose a practically relevant interpretation of the incentives as the rewards for choosing specific actions (really, making the binary relation explicitly dependent on the incentive vector is rather exotic). Second, the same binary relation can be generated by different goal functions (not necessarily identical up to an additive constant–see Appendix 4). Finally, the way of performing the reverse transition (from a binary relation to a corresponding goal function) is not totally clear. In applications, exactly numerical values of the agent’s rewards play the key role. An intermediate position between “standard” binary relations and goal functions is occupied by the so-called metrized relations (MR). On the set N, a MR is specified by the matrix  = ||ij||, i, j  N. The elements ij of the matrix  (i, j  N) are positive, negative or zero values representing comparative preferences of different alternatives (in our case, these are actions of the agent). Note that we consider complete relations, i.e., incomparability of actions is impossible, and so on. Suppose that if ij < (>) 0, then action i is strictly better (worse, respectively) for the agent than action j (in the absence of incentives). Actions i and j are equivalent for the agent if ij = 0. In practice, ij constitutes the sum to-be-paid excess to the agent for making action i equivalent to action j.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

33

Next, assume that the principal’s control (incentive) lies in modifying the comparative preference of different actions, i.e., elements of the matrix . As usual, the incentive problem is to find a feasible variation of the elements such that the most beneficial action (according to the principal’s view) coincides with the best action for the agent. Let the agent’s preferences satisfy the following property:  i, j, m  N: im + mj = ij (referred to as the internal consistency condition (ICC) of preferences). The ICC implies that ii = 0, ij = –ji, i, j  N (see the section devoted to pseudopotential graphs in Appendix 2). Moreover, the graph which corresponds to the matrix  is a potential one with qi (i  N) as the node potentials. The latter are evaluated up to an additive constant as follows:

qi  

1 n

n

 m 1

im

, i  N.

(3)

The matrix  can be uniquely recovered using the potentials qi, i  N:

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

ij = qj – qi, i, j  N.

(4)

The potentials of actions may be treated as values of the agent’s income function, while elements of the matrix  serve as their first differences. Imagine that the agent’s preferences are specified in the form of a MR which satisfies the ICC. In this case, information on all elements of the matrix  turns out redundant. For instance, a certain row (or column) being known, the ICC enables easy recovery of the rest elements by summing over the corresponding chains. Such property of an internally consistent MR seems attractive in the aspect of the amount of information required for correct identification of OS parameters. For the agent, the best action within the model considered consists in an action k such that kj  0 for all j  N. In the case of internally consistent preferences, the described action (probably, not unique) always exists. Actually, this is the action possessing the maximal potential. Hence, the set of implementable actions forms P() = {k  N | kj  0, j  N}. Consider an arbitrary pair of actions, i and j (i, j  N), and define the “equalizing” j i

operation ( j  i ) for their potentials: q j

 q j  (qi  q j ) . In terms of elements of the

matrix , the operation includes two steps: 1)

 jmj i   jm   ij , m  N;

2)

 mjj i   jm , m  N.

Obviously, action j becomes equivalent to action i (ij = ji = 0), and the internal consistency condition for the agent’s preferences is preserved. For the principal, the costs to perform the operation ( j  i ) constitute ji = qi – qj (see (1)). The idea to solve the incentive problem lies in the following. To stimulate agent’s choosing an action l  N, the principal must pay the agent the reward l meeting the system of inequalities l – i  li, i, l  N. The compensatory incentive scheme

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

34

l = max lj = max (qj – ql) = qk – ql = lk, l  N, j N

j N

(5)

satisfies this system of inequalities. Consequently, if k is the most beneficial action for the agent (in the absence of incentives), then the minimal value of the incentive l required for implementing the action l equals lk, l  N. Again, we emphasize that the compensatory incentive scheme (5) makes all actions of the agent equivalent in his or her opinion. Suppose that the agent’s preferences (in the absence of incentives) are given by a MR, i.e., by the matrix  = ||ij||, i, j  N, and the ICC takes place. One may set up a correspondence between the matrix  and the principal’s income “function”

Hi  

1 n

n

 m 1

im

, i  N. Assume that the reward paid to the agent is deduced from the

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

principal’s income function; implementing the action l, the principal then “looses” lk, l  N. Therefore, the comparative preference of the actions’ pair (k, l) changes in the eyes of the principal, as well. Due to the ICC, the new value is the sum (kl + kl). Hence, the principal’s preferences (taking into account the incentives) are represented by the metrized relation  =  +  = ||ij + ij||, i, j  N. Note that the principal’s preference relation  additively includes his or her own preferences and the agent’s preferences (both preferences are in the absence of incentives). This fact allows for interpreting the incentive as coordination of their interests. Apparently, if the preferences of the agent and of the principal are internally consistent (in the absence of incentives), then the metrized relation  meets the ICC. This leads to the following assertion [50]: the set of optimal implementable actions of the agent is given by (compare with formula (2)) P(, ) = {i  N | ij  ji, j  N}.

The correlation of the incentive problems formulated in terms of goal functions and MR is represented by the following statement [50]: the incentive problems formulated in terms of goal functions and MR that satisfy the ICC are equivalent. The equivalence means mutual reducibility of these problems. Suppose that the incentive problem is stated in terms of goal functions, i.e., we know the agent’s income function q. We believe that the values of the income function are potentials and define the matrix  by formula (4). One would easily see that the ICC is valid. Similarly, under the ICC, the matrix  can serve for restoring the potentials (the income function) by means of the expression (3). Thus, we perform the reverse transition. And so, under the ICC formulas (3)–(4) imply that P(, ) = P(, f ). The conducted analysis shows that MR describe a wider class of the preferences (both for the agent and the principal) than goal functions. In fact, the latter are equivalent to internally consistent MR. Of course, one has no guarantee that an MR (obtained in practice, e.g., by an expertise and reflecting the preferences of a controlled subject) appears internally consistent. The methods to solve the incentive problems stated in terms of MR without the ICC are described in [50].

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

35

2.3. BASIC INCENTIVE MECHANISMS Let us discuss the basic incentive schemes (mechanisms) in single-agent deterministic organizational systems, i.e., the systems operating under complete information on all essential (internal and external) parameters. A compensatory (C-type) incentive scheme is the optimal basic incentive scheme, see Section 2.1. A jump (J-type) incentive scheme is characterized by the following: the agent obtains a fixed reward С provided that his or her action appears not smaller than a planned action x; otherwise, the agent has zero reward (see Figure 2.3):

C , y  x .  0, y  x

J(x, y) = 

(1)

The parameter x  X is said to be a plan, i.e., a state (an action, a result of activity, etc.) of an agent desired by a principal. J-type incentive schemes may be treated as a lump sum payment which corresponds to the reward С under a given result (e.g., an output level being not smaller than a predetermined threshold, working hours, and so on). Another interpretation is when the agent is paid for hours worked; for instance, the reward then corresponds to a fixed wage under full-time occupation.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

J (x, y) C

y 0

x

Figure 2.3. A jump incentive scheme.

Proportional (linear) (L-type) incentive schemes. Fixed reward rates are widely used in practice. For instance, a fixed per-hour rate implies the same wage is paid for every hour worked, while piece-based wage implies a fixed reward for every unit of the manufactured product. In both schemes the agent’s reward is proportional to his action (hours worked, product units manufactured, etc) and the wage rate   0 represents the proportionality coefficient (see Figure 2.4):

L(y) =  y.

(2)

In the general case, some reward is given to an agent regardless of his or her actions, i.e., proportional incentive schemes take the form: Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

36

L(y) = 0 +  y. Suppose that a linear incentive scheme is employed and the cost function of the agent is continuously differentiable, monotonous and convex. Optimal action y* of the agent (maximizing his or her goal function) is defined by the formula y* = c1 (), where c1 () stands for the inverse derivative of the agent’s cost function. Note that the principal more than compensates the agent’s cost by choosing the action y*; actually, the principal overpays the following amount: y*c'(y* ) – c (y* ). For instance, suppose the agent has the income function H(y) = b y, b > 0, while his cost function is convex: c(y) = a y2, a > 0. In this case, for any feasible action of the agent the principal pays twice compared to the optimal payment. Therefore, under a convex cost function of the agent, efficiency of the proportional scheme is not greater than that of the compensatory one. A curve of the agent’s goal function (under a proportional incentive scheme used by the principal) is demonstrated by Figure 2.5. Low efficiency of proportional incentive schemes described by the formula L(y) = y is subject to non-negativity of rewards. Assume that the reward may be negative for some actions (note that these actions are probably never chosen, as shown in Figure 2.6), i.e.,   L ( y ) = 0 +  y, with 0  0. Then, under a convex cost function of the agent, efficiency of the proportional incentive scheme



 L () could equal that of the optimal (compensatory)

incentive scheme.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

L (y)



y

0 Figure 2.4. A proportional incentive scheme.

y y

0 y*

f(y) –c(y)

Figure 2.5. A goal function of the agent: the principal uses an L-type incentive scheme. Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

37



L (y)

y 0

–  0 /

Figure 2.6. A linear incentive scheme.

c(y)



 L ()

y

y* 0



Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

0

f (y)

Figure 2.7. A goal function of the agent: the principal uses the incentive scheme



 L () .

It suffices to involve the following expressions to substantiate the above assertion (see Figure 2.7, as well): y*() = c′ –1(), 0() = c (c′ –1()) –  c′ –1().

The optimal value * of the wage rate is chosen from the maximum condition for the goal function of the principal:



* = arg max [H(y*()) –  L ( y* ( )) ].  0

Incentive schemes based on income redistribution (D-type) employ the following idea. Since the principal represents preferences of the whole system, principal’s income can be equated to that of the whole organizational system. Hence, one may base an incentive scheme of the agent on the income obtained by the principal; in other words, one may set the agent’s reward equal to a certain (e.g., fixed) share   [0; 1] of the principal’s income:

D(y) =  H(y). Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

(3)

Dmitry Novikov

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

38

We underline that C-, J-, L-, and D-type incentive schemes are parametric. Notably, it suffices to choose the pair (x, C) for specifying the jump incentive scheme. Defining the proportional incentive scheme requires designating the wage rate . Finally, one should only select the income share  to describe the incentive scheme based on income distribution. The incentive schemes described above are elementary and serve as blocks of a “kit”; using these blocks, it is possible to construct complex incentive schemes (referred to as secondary schemes with respect to the basic ones). Thus, we should define operations over the basic incentive schemes for making such a “construction” feasible. Dealing with a singleagent deterministic OS, the researcher may confine himself to the following three types of operations. The first-type operation is the transition to a corresponding incentive quasi-scheme, i.e., the reward is assumed to be zero everywhere, with the exception of the planned action. In the complete information framework, “nulling” the incentive in all points (except the plan) does not modify the properties of the incentive scheme under the hypothesis of benevolence. Therefore, in the sequel we will not dwell on differences between a specific incentive scheme and its counterpart (a scheme derived from the initial scheme by means of the first-type operation). The second-type operation is the composition, i.e., employing different basic incentive schemes in different subsets of the set of feasible actions. The resulting incentive systems are called composite. The third-type operation is represented by algebraic addition of two incentive schemes (this is possible as the reward enters the goal functions additively). The result of such operation is referred to as a cumulative incentive scheme. For instance, Figure 2.8 shows an incentive scheme of J+L type (a tariff plus-bonus incentive scheme), derived by summing-up jump and linear incentive schemes. Thus, the basic incentive schemes include the ones of J-, C-, L-, and D- type, as well as any schemes derived from them through the above-mentioned operations. First, it is shown in [50] that the incentive schemes derived for the basic incentive schemes discussed cover all personal wage systems used in practice. Second, the cited work provides some estimates of comparative efficiency for different combinations of the basic incentive schemes.

 J+L(x, y)

J

C

L  0

x

Figure 2.8. An incentive scheme of J+L type (a cumulative incentive scheme). Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

y

Incentive Mechanisms

39

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

2.4. INCENTIVE MECHANISMS IN CONTRACT THEORY Contract theory is a branch of control theory for socio-economic systems, studying game-theoretic models of interaction between a control subject (a principal) and a controlled subject (an agent) under the conditions of external stochastic uncertainty [8]. In models of contract theory, the uncertainty is taken into consideration by the following technique. The result of agent’s activity z  A0 represents a random variable whose realization depends on the agent’s actions y  A and on an external uncertain parameter, known as the state of nature   . A state of nature reflects external conditions of agent’s activity that affect the result of activity, thus making it different from the action. The participants of OS possess the following awareness. At the moment of decisionmaking, they know the probability distribution p ( ) of the state of nature or the conditional probability distribution p (z, y) of the activity result. The principal does not observe actions of the agent; instead, the former merely learns the result of the latter’s activity. At the moment of choosing his or her action, the agent may know either the state of nature (asymmetric awareness) or the corresponding probability distribution (symmetric awareness). It should be emphasized that the second case better fits the incentive models; consequently, we focus on it below. The principal’s strategy is choosing a certain function  () according to the result of agent’s activity. Depending on possible interpretations of the model, this function represents an incentive function (labor contracts), an insurance compensation (insurance contracts), debts or payments (debt contracts), and so on. The agent’s strategy lies in choosing an action under a known strategy of the principal. A contract is a set of strategies of the principal and of the agent. Note there exist explicit contracts being legally valid (e.g., the majority of insurance contracts and debt contracts) and implicit contracts being tacit or not concluded de facto (e.g., labor contracts in some situations). The result of agent’s activity depends on uncertain parameters and defines the utilities of the OS participants. Therefore, we assume that (making their decisions) the participants average the utilities with respect to known probability distribution and choose the strategies by maximizing their expected utility. An optimal contract is the most beneficial to the principal (as attaining the maximum to his or her goal function) provided that the agent benefits from interaction with the principal. From the agent’s viewpoint, this means that the contract participation condition and the individual rationality condition must be satisfied (similarly to the model considered in Section 2.1). The pioneering works on contract theory appeared in the early 1970s. That research involved game-theoretic models as an endeavor to explain the existing contradictions between the results of macroeconomic theories and the actual rates of unemployment and inflation in developed countries at that time. Notably, one of the “contradictions” was the following. There are three types of wage, viz., a market wage (the reservation utility being guaranteed to an employee), an efficient wage (the payments maximizing the employee’s efficiency in the sense of the whole enterprise; as a rule, the efficient wage is defined by the equality between the marginal product yielded by the employee and his or her marginal costs) and an actual wage (the one

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

40

given to the employee). Statistical data indicated that the actual wage differed from its efficient counterpart. The first models of contract theory analyzed the problems of optimal number of employees under the participation condition and fixed strategies of the principal. After that, the investigations were focused on solving control problems (optimal contract design) under the conditions of participation and consistency. Subsequently, the emphasis was put on complex models describing multi-agent and dynamic organizational systems, contract renegotiation, and others (for an overview, see [8, 57]). In the context of the insurance effect (i.e., risk redistribution), we should acknowledge the following conclusion drawn in contract theory. The difference between the efficient wage and the actual one is subject to that a risk-neutral principal (see Section 4.5) insures riskaverse agents against wage variations depending on the state of nature. Notably, wage stability is achieved due to the fact that in favorable7 situations the reward appears smaller than the efficient wage. But on the other hand, in unfavorable situations the reward is higher than it would be without accounting for risk redistribution8. Let us provide an illustrative example. Suppose an agent has two feasible actions, A = {y1; y2}, leading to two results A0 = {z1;

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

z2}; the corresponding probabilities are defined by the matrix P =

p 1 p

1  p , where 1/2 < p

p  1. Hence, in the majority of cases (as far as p > 1/2) the result of agent’s activity “coincides” with the corresponding action. The costs incurred by choosing the first and the second actions constitute c1 and c2, respectively (c2  c1). The expected income of the principal gained by the first (second) action makes H1 (H2, respectively). Next, the agent’s reward for the first and second result of activity is 1 and 2, respectively. The principal’s goal function  represents the difference between the income and the incentive. Finally, the agent’s goal function f is the difference between the incentive and costs. The principal’s problem lies in assigning an incentive scheme to maximize the expected value of his or her goal function9 E (provided that the agent’s action maximizes the expected value Ef of his or her own goal function). Assume that the agent is risk-neutral (i.e., his or her utility function which reflects the attitude towards risk is linear). Consider the incentive scheme to-be-used by the principal for motivating the choice of the action y1 by the agent. Under zero reservation utility, the problem of minimal incentive scheme implementing the action y1 takes the form: p 1 + (1 – p) 2  min

 1 0 ,  2 0

7

(1)

Activity of an enterprise (and the wages of its employees) depends on external macroparameters (seasonality, the periods of economic recession and upswing, world prices, etc.) and microparameters (health status of employees, and so on). 8 Perhaps, exactly this important conclusion has an impact on further development of contract theory–most of models studied include only an external stochastic uncertainty. Indeed, in the deterministic case (or in the case of uncertainty with a risk-neutral agent), the insurance effects disappear and the actual wages coincides with the efficient one. 9 Remind that the symbol “E” stands for the operator of mathematical expectation.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

41

p 1 + (1 – p) 2 – c1  p 2 + (1 – p) 1 – c2

(2)

p 1 + (1 – p) 2 – c1  0.

(3)

The conditions (2)–(3) represent the incentive-compatibility constraint and the condition of agent’s individual rationality, respectively. The problem (1)–(3) is the one of linear programming. In Figure 2.9, the set of rewards that satisfy the conditions (2)–(3) is shaded, while its subset with the minimum value of the expression (1) is marked by heavy line. The contour curve of the function (1)–see the dotted line in Figure 2.9–possesses the same slope as the segment10 А1B1 (the arrow shows the direction of increase). 2 A1 с1 /(1 – p)

C1 (c2 – с1)/(2p – 1)

0

B1

1

с1 /p

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Figure 2.9. The principal implements the action y1 under a risk-neutral agent.

Suppose the hypothesis of benevolence takes place. For definiteness, let us choose the point С1 from the segment C1B1 as the corresponding solution; the point is given by

1 = [p c1 – (1 – p) c2] / (2p – 1),

(4)

2 = [p c2 – (1 – p) c1] / (2p – 1).

(5)

Evidently, the expected costs of the principal E (y1) to implement the action y1 constitute c 1: E (y1) = с1.

(6)

Now, assume that the principal wants to implement the action y2. By solving a problem similar to (1)–(3), we obtain the following (see the point С2 in Figure 2.10):

10

In the case of risk-neutral principal and agent, the existence of a solution set is typical in problems of contract theory. At the same time, a strictly concave utility function of the agent (which describes his or her risk-averse character) leads to uniqueness of the solution, see below.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

42

1 = [p c1 – (1 – p) c2] / (2p – 1),

(7)

2 = [p c2 – (1 – p) c1] / (2p – 1),

(8)

E (y2) = с2.

(9)

2 A2 с2 /p

C2 (c2 – с1)/(2p – 1)

0

B2

1

с2 /(1 – p)

Figure 2.10. The principal implements the action y2 under a risk-neutral agent/

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

At Step 2, the principal chooses what feasible action is more beneficial to implement, i.e., what action maximizes the difference between the principal’s income and the expected costs to implement such action. Therefore, the expected value of the principal’s goal function in the optimal contract equals

* = max {H1 – c1, H2 – c2}. To proceed, let us analyze the insurance effects in the model. Suppose that the agent appears risk-neutral; in other words, he or she estimates uncertain parameters according to a strictly increasing concave utility function u(). Recall that the random variable (the result of agent’s activity) determines his or her reward (the value of the incentive function). Thus, we believe that the agent’s goal function is expressed as f ( (), z, y) = u ( (z)) – c (y).

(10)

Introduce the substitutions11 v1 = u (1) and v2 = u (2), where u–1() is the inverse function to the agent’s utility function (we assume it is nonnegative and vanishes in the origin). Imagine the principal is interested to stimulate the agent’s choice of the action y1. Then the incentive problem takes the form

11

Such change of variables makes it possible to linearize the system of constraints. It is used in the so-called twostep solution method for problems arising in contract theory [48].

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms p u–1(v1) + (1 – p) u–1(v2) 

43

min

(11)

v1  0 , v2  0

p v1 + (1 – p) v2 – c1  p v2 + (1 – p) v1 – c2,

(12)

p v1 + (1 – p) v2 – c1  0.

(13)

The conditions (12)–(13) represent the incentive-compatibility constraint and the condition of agent’s individual rationality, respectively. Clearly, the linear inequalities (12)–(13) coincide with (2)–(3) up to notation. Figure 2.11 below demonstrates the set of feasible values for the variables v1 and v2 (the shaded one). The contour curves of the function (11) (being convex due to concavity of the agent’s utility function) are marked by the dotted line. A strictly concave utility function of the agent leads to the strict convexity of the goal function (11). Consequently, the internal solution to the conditioned minimization problem (11)–(13) is unique. For instance, take the utility function u(t) =  ln(1 +  t), where  and  are positive constants; hence, the solution is given by v1 = c1 + (c1 – c2) (1 – p) / (2p – 1), (14) v2 = c1 + (c2 – c1) p / (2p – 1). (15) Apparently, in this case the incentive scheme (14)–(15) makes the agent’s expected utility (as the result of payments by the principal) equal to the agent’s costs to choose the first action:

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Ev = c1.

(16)

Similarly, one may show that if the principal stimulates the agent to choose the second action, then the agent’s expected utility (as the result of payments by the principal) coincides with the agent’s costs to choose the second action.

v2 A1 с1 /(1 – p)

C1 (c2 – с1)/(2p – 1)

0

B1 с1 /p

Figure 2.11. The principal implements the action y1 under a risk-averse agent. Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

v1

Dmitry Novikov

44

v

B

v2

D

E

c1 v1 A 0

ua() un()

F

C

 1 Ea

En

2

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Figure 2.12. The insurance effect: the principal implements the action y1

Formulas (14)–(15) imply that, in the case of a risk-averse agent, the principal “underpays” for implementation of the first result of activity (v1  c1) and “overpays” for implementation of the second result of activity (v2  c1). Moreover, in the limiting deterministic case12 (as p  1) we have v1  c1. The effect of insurance in the present model is illustrated by Figure 2.12 (implementation of the first action). Here the reader may find the linear utility function un () of the agent (defined up to an additive constant) and the strictly concave utility function ua () of the agent. The segment AB lies to the top and/or to the left of the segment CD, and the expected utility in both cases makes c1. Consequently, under a risk-averse agent the expected payments Ea are smaller than the expected payments En that correspond to a risk-neutral agent (compare the points E and F in Figure 2.12). And so, we have analyzed an example of the insurance effect arising in the models of contract theory. Now, let us describe the incentive problems in multi-agent systems.

2.5. COLLECTIVE INCENTIVE MECHANISMS In the previous sections we have considered individual incentive systems. The present section and the three subsequent ones are dedicated to the description of collective incentive systems, i.e., incentives for a collective of agents. An elementary extension of the basic single-agent model consists in a multi-agent OS with independent (noninteracting) agents. In this case, the incentive problem is decomposed into a set of corresponding single-agent problems. Suppose that common constraints are imposed on the incentive mechanism for all agents or for a certain subset of agents. As the result, we derive the incentive problem in an OS with 12

Note all models with uncertainty must meet the conformity principle: as the uncertainty vanishes (i.e., the limiting transition to a corresponding deterministic system takes place), all the results and estimates must tend to their deterministic counterparts. For instance, the expressions (14)–(15) for p = 1 determine optimal solutions in the deterministic case.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Incentive Mechanisms

45

weakly related agents (discussed below). This problem represents a set of parametric singleagent problems, and one can search for optimal values of the parameters using standard techniques of constrained optimization. If the agents are interrelated, viz., the costs or/and rewards of an agent depend on his or her actions and the actions of the rest agents, one obtains a “full-fledged” multi-agent incentive model. It will be studied in the present section. Note this book provides no coverage of the situation when the same constraints apply to the sets of feasible states, plans or actions of the agents (for its detailed description, the reader is referred to [50]). The sequences of solving the multi-agent and single-agent problems have much in common. At the beginning, it is necessary to construct a compensatory incentive scheme implementing a certain action (an arbitrary feasible action under given constraints). In fact, this is Stage 1–analyzing the incentive compatibility. Under the hypothesis of benevolence, in single-agent OS it suffices to verify that the maximum of the agent’s goal function is attainable (by an implementable action). On the other hand, in multi-agent systems one should demonstrate that the choice of a corresponding action makes up an equilibrium strategy in the game of agents. Imagine there are several equilibria; then we have to verify the hypothesis of rational choice for an action in question. In the majority of cases, it takes only to accept the unanimity axiom (according to the latter, the agents do not choose equilibria being dominated in the sense of Pareto by other equilibria). Sometimes, the principal has to evaluate his or her guaranteed result on the set of equilibrium strategies of the agents, and so on. Further, it is necessary to equate the incentive and the costs and solve a standard optimization problem (find an implementable action to-be-rewarded by the principal). Actually, this is Stage 2–incentive-compatible planning (see Section 2.1). Let us study the above approach in a greater detail. Incentives in OS with weakly related agents. Recall the results derived in Section 2.1 for the single-agent incentive problem. They can be directly generalized to the case of n  2 agents if the following conditions hold true. The agent’s goal functions depend only on their own actions (the case of the so-called separable costs), the incentive of each agent depends exclusively on his or her own actions, and some constraints are imposed on the total incentive of the agents. The formulated model is said to be an OS with weakly related agents. In fact, this is an intermediate case between individual and collective incentive schemes. Let N = {1, 2, …, n} be a set of agents, yi  Ai stand for an action of agent i, and ci (yi) mean his or her costs. Moreover, denote by i (yi) a reward given by a principal to agent i (i  N); accordingly, y = (y1, y2, …, yn) represents an action profile of the agents, y  A′ =

A . i

iN

Assume that the principal gains the income H(y) from agent’s activity. Suppose that the individual rewards of the agents are majorized by the quantities {Ci}i  N; in other words,  yi  Ai: i (yi)  Ci, i  N. The wage fund (WF) being bounded by R (i.e.,

С

iN

i

 R), we obtain that the maximal set of implementable actions of agent i depends on

the corresponding constraint R of the incentive mechanism. Within the framework of the assumptions of Section 2.1, this set makes up Pi (Ci )  [0; yi (Ci )] , where

yi (Ci )  max { y  Ai | ci ( y )  Ci } , i  N. Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

46

Consequently, the optimal solution to the incentive problem in an OS with weakly related agents is defined as follows. One has to maximize the function

 ( R) 

max

{ y i Pi ( Ci )}iN

H ( y1, ..., yn )

by a proper choice of the individual constraints {Ci}i  N that satisfy the budget constraint

С

iN

i

 R. Apparently, this is a standard problem of constrained optimization.

We underline that for a fixed WF the agent’s costs are not extracted from his or her income. At the same time, in the case of a variable WF, the optimal value R* is a solution to the following optimization problem: R* = arg max [ (R) – R]. R0

Example 2.1. Choose the cost function of agent i as ci (yi) = yi2 / 2ri, i  N, and the

principal’s income function as H ( y ) 

  y , where { } iN

i i

i iN

are positive constants.

Under the imposed constraints {Ci}i  N, the maximal implementable action of each agent constitutes yi (Ci ) =

2riCi , i  N. The problem has been reduced to searching for an

optimal set of the constraints { Ci }i  N which meets the budget constraint and maximizes the

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

principal’s goal function:

  i 2riCi  max {C i  0}iN iN .    Ci  R  iN The unique solution to this problem has the form

C i 

ri  i2 R, iI .  r j 2j jN

The optimal WF is R* =

 r iN

i

2 i

/ 2. 

Incentives in OS with strongly related agents. Denote by y–i = (y1, y2, …, yi–1, yi+1, …, yn)  A–i =  A j the opponents’ action profile for agent i. j i

The preferences of the OS participants (the principal and the agents) are expressed by their goal functions. Notably, the principal’s goal function  (, y) represents the difference between his or her income H (y) and the total incentive  (y) paid to the agents:  (y) = Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

47

n

  ( y) , where  (y) is the reward of agent i,  (y) = ( (y),  (y), …,  (y)). On the other i 1

i

i

1

2

n

part, the goal function of agent i, fi (i, y), is determined by the difference between the reward obtained from the principal and the costs ci (y), namely, n

 i ( y) ,

(1)

fi (i, y) = i (y) – ci (y), i  N.

(2)

 (, y) = H(y) –

i 1

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

We acknowledge the following aspect. Both the individual incentive and the individual costs of agent i to choose the action yi generally depend on the actions of all agents (the case of weakly related agents with inseparable costs). Let us adopt the following sequence of moves in the OS. At the moment of their decision-making, the principal and agents know the goal functions and feasible sets of all OS participants. Enjoying the right of the first move, the principal chooses incentive functions and reports them to the agents. Next, under known incentive functions, the agents simultaneously and independently choose their actions to maximize appropriate goal functions. We make a series of assumptions to-be-applied to different parameters of the OS: 1) for each agent, the set of feasible actions coincides with the set of nonnegative real values; 2) the cost functions of the agents are continuous and nonnegative; moreover,  yi  Ai : ci (y) does not decrease with respect to yi and  y–i  A–i: ci (0, y–i) = 0 (i N); 3) the principal’s income function is continuous with respect to all arguments and attains the maximum for nonzero actions of the agents. In essence, Assumption 2 implies that (regardless of the actions of the rest agents) any agent can minimize his or her costs by choosing an appropriate nonzero action. Other assumptions are similar to the single-agent case (see Section 2.1). Both the costs and incentive of every agent generally depend on the actions of all agents. Hence, the agents are involved in a game, where the payoff of each agent depends on the actions of all the others. Suppose that P() is the set of equilibrium strategies of the agents under the incentive scheme () (in fact, this is the set of game solutions). For the time being, we do not specify the type of equilibrium, only presuming that the agents choose their strategies simultaneously and independently (thus, they do not interchange information and utility). Similarly to the single-agent OS discussed in Section 2.1, guaranteed efficiency (or simply “efficiency”) of an incentive scheme represents the minimal value (within the hypothesis of benevolence–the maximal value) of the principal’s goal function over the corresponding set of game solutions:

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

48 K() = min  (, y).

(3)

yP (  )

The problem of optimal incentive function/scheme design lies in searching for a feasible incentive scheme * yielding the maximal efficiency:

* = arg max K().

(4)

 M

The results of Section 2.1 imply the following. In the particular case of independent agents (i.e., the reward and costs of each agent are subject to his or her actions), the compensatory incentive scheme

 ci ( yi* )   i , yi  yi*  i K ( yi )   , i  N, yi  yi* 0, appears optimal (to be correct, -optimal, where  =

(5)

 iN

i

). In the formulas above, {i}i  N

designate arbitrarily small strictly positive constants (bonuses). Moreover, the optimal action y*, being implementable by the incentive scheme (5) as a dominant strategy equilibrium13 (DSE), solves the following problem of optimal incentive-compatible planning: y* = arg max {H(y) –

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

y A

 c ( y ) }. i

iN

i

Suppose that the reward of each agent depends on the actions of all agents (this is exactly the case for collective incentives studied here) and the costs are inseparable (i.e., the costs of each agent generally depend on the actions of all agents, thus reflecting the interrelation of the agents). Then the sets of Nash equilibria14 EN ()  A′ and DSE yd  A′ take the form: EN () = {yN  A |  i  N  yi  Ai ,

(6)

i (yN) – ci ( y N )  i (yi, y Ni ) – ci (yi, y Ni )}. By definition, y i  Ai is a dominant strategy of agent i iff d  yi  Ai,  y–i  A–i: i ( y i , y–i) – ci ( y i , y–i)  i (yi, y–i) – ci (yi, y–i). d

d

13

Recall that a DSE is an action vector such that each agent benefits from choosing a corresponding component (irrespective of the actions chosen by the rest agents–see Appendix 1). 14 Recall that a Nash equilibrium is an action vector such that each agent benefits from choosing a corresponding component provided that the rest agents choose equilibrium actions (see Appendix 1). Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

49

Imagine that a dominant strategy exists for each agent under a given incentive scheme. In this case, the incentive scheme is said to implement the corresponding action vector as a DSE. Let us fix an arbitrary action vector y*  A′ of the agents and consider the following incentive scheme:

ci ( yi* , y i )   i , yi  yi* i (y , y) =  , i  0, i  N. * yi  yi  0, *

(7)

In [50] it was shown that under the incentive scheme (7) used by the principal the vector y* forms a DSE. Moreover, if i > 0, i  N, then y* makes up a unique DSE. The incentive scheme (7) means that the principal adopts the following principle of *

decomposition. He or she suggests to agent i, “Choose the action yi , and I compensate your costs regardless of the actions chosen by the rest agents. Yet, if you choose another action, the reward is zero.” Using such strategy, the principal decomposes the game of agents. Assume that the incentive of each agent depends implicitly only on his or her action. By fixing the opponents’ action profile for each agent, let us pass from (7) to an individual incentive scheme. Notably, fix an arbitrary action vector y* A′ of the agents and define the incentive scheme

c ( y * , y * )   i , yi  yi* i (y*, yi) =  i i i , i  0, i  N. * 0 ,  y y i i 

(8)

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

In this case, we have the following interpretation. The principal suggests to agent i, *

“Choose the action yi , and I compensate your costs, as if the rest agents have chosen the *

corresponding actions y i . Yet, if you choose another action, the reward is zero.” Adhering to such strategy, the principal also decomposes the game of agents, i.e., implements the vector y* as a Nash equilibrium of the game. Note that the incentive scheme (8) depends only on the action of agent i, while y * i enters this function as a parameter. Moreover, in contrast to the incentive scheme (7), that of (8) provides each agent merely with indirect information about the action vector desired by the principal. For the incentive scheme (8) to implement the vector y* as a DSE, additional assumptions should be introduced regarding the cost functions of the agents, see [65]. This is not the case for the incentive scheme (7). It is not out of place to discuss here the role of the nonnegative constants {i}i  N in the expressions (5), (7) and (8). If one needs implementing a certain action as a Nash equilibrium, these constants can be chosen zero. Imagine that the equilibrium must be unique (in particular, the agents are required not to choose zero actions; otherwise, in evaluation of the guaranteed result (3) the principal would be compelled to expect zero actions of the agents). In this case, the agents should be paid excess an arbitrarily small (strictly positive) quantity for choosing the action expected by the principal. Furthermore, the parameters {i}i  N in formulas (5), (7) and (8) appear relevant in the sense of stability of the compensatory

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

50

incentive scheme with respect to the model parameters. For instance, suppose that we know the cost function of agent i up to some constant i  i / 2. Consequently, the compensatory incentive scheme (7) still implements the action y* (see [51]). The vector of optimal implementable actions y*, figuring in the expression (7) or (8) as a parameter, results from solving the following problem of optimal incentive-compatible planning: y* = arg max {H(t) –  (t)},

(9)

t  A

where v(t) =

 c (t ) , and the efficiency of the incentive scheme (7), (9) constitutes iN

K* = H(y*) –

i

c (y ) *

iN

i

– .

It was shown in [50] that the incentive scheme (7), (9) appears optimal, i.e., possesses the maximal efficiency among all incentive schemes in multi-agent OS. Let us consider several examples of designing optimal collective incentive schemes in multi-agent OS. Example 2.2. Solve the incentive problem in an OS with two agents, whose cost functions

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

are ci (y) =

( yi   y3 i ) 2 , i = 1, 2; here  represents an interdependence parameter of the 2ri

agents. Assume that the principal’s income function is defined by H(y) = y1 + y2, and the wage fund is bounded above by R. Under the incentive scheme (7) used by the principal, the incentive problem is to find optimal implementable actions:

 H ( y )  max y 0 .   c ( y ) c ( y ) R 2 1 Applying Lagrange’s multiplier method yields the following solution:

y1* =

2 R  r2  r1 , y 2* = 2 r1  r2   1

2 R  r1  r2 . r1  r2  2  1

Finally, substitute the equilibrium actions of the agents into the principal’s goal function to obtain the optimal value of the WF: R* = arg max [ 2 R (r1  r2 ) /(1 – ) – R] = R0

r1  r2 . 2(  1) 2

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

51

Example 2.3. The second example is the model of joint production. Consider a multiagent two-level OS composed of a principal and n agents. Suppose that the goal function of agent i, fi (y, ri), forms the difference between the income hi (y) gained by the joint activity and the costs ci (y, ri), where ri is the efficiency parameter (type) of the agent. In other words, fi (y, ri) = hi (y) – ci (y, ri), i  N. Choose the following income and cost functions: hi (y) = i  Y, i  N, ci (y, ri) =

yi2

2( ri   i  y j )

, i  N,

j i

where Y =

 y , 

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

iN

i

iN

i

 1. We believe that

y j i

j



ri

i

for the case of minus in the

denominator. A possible interpretation lies in a firm which manufactures the same product sold at a price  on the market. The total income  Y is distributed among the agents according to fixed shares {i}i  N. For each agent, the costs increase with respect to his or her actions, while the agent’s type ri determines the efficiency of activity. Interaction of the agents is modeled by the relationship between the costs (the efficiency of activity) of each agent and actions of the rest agents. The sign “+” in the denominator corresponds to the efficient interaction of the agents (decreasing costs). Indeed, the larger actions are chosen by the rest agents, the smaller costs has the agent in question (the higher is the efficiency of his or her activity). In practice, this means the reduction of fixed incremental charges, sharing of experience or technologies, etc. On the other hand, the sign “–“ in the denominator describes the inefficient interaction of the agents (increasing costs). The larger actions are chosen by the rest agents, the higher costs has the agent in question (accordingly, his or her efficiency drops). In practice, such situation corresponds to the shortage of the basic assets, constraints imposed on secondary indicators (e.g., environmental pollution) and so on. The coefficients {i  0}i  N reflect the interdependence level of the agents. Suppose that all OS participants know the market price . Involve the first-order necessary optimality conditions for the agents’ goal functions to obtain yi = i  (ri  i

 y j ), i  N. j i

By summing up the above expressions, we have the following relationship between the total actions Y+ and the parameter  :

i  ri

Y +( ) =

 1   

i N

i

i

  i 1  i i N 1  i i

.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

52

In fact, incentives mean modifying the parameters {i}i  N, that represent internal (corporate, transfer) prices. Example 2.4. The third example is lump-sum labor payments. Consider an OS with two 2

agents, whose cost functions are ci (yi) = yi / 2ri (ri stands for the type of agent i, yi  Ai =

1 , i = 1, 2). The goal function of agent i is given by the difference between the reward i (y1, y2) paid by a principal and the costs, namely, fi (y) = i (y) – ci (yi), i = 1, 2.

Let the principal adopt the incentive scheme

Ci , y1  y2  x , i = 1, 2.  0, y1  y2  x

i (y1, y2) = 

(10)

Thus, the principal provides a fixed incentive to each agent if their total action is not smaller than a planned value x > 0. Denote by yi =

2riCi , i = 1, 2, Y = {(y1, y2) | yi  yi ,

i = 1, 2, y1 + y2  x}, the set of individually rational actions of the agents. We study four possible combinations of the variables (see Figures 2.13–2.16). In case 1 (Figure 2.13), the set of Nash equilibria forms the segment EN () = [N1; N2].

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Fix an arbitrary equilibrium y* = ( y1* , y 2* )  EN (). Hence, the set of Nash equilibria possesses the cardinality of continuum; this causes some disadvantages in the sense of efficiency of incentives, see below. All points of the segment [N1; N2] are Pareto-efficient from the agents’ point of view. Accordingly, it seems rational to pay excess the agents for choosing specific actions from the segment (a certain strictly positive small incentive).

y2

y 2 x

N1

y 2* Y

0

y1

N2

y1*

x

Figure 2.13.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

y1

Incentive Mechanisms

53

We design an individual incentive scheme according to the results derived above (see formulas (8)-(9)):

C1 , y1  y1* , *  0, y1  y1

~1* (y1) = 1(y1, y 2* ) = 

C 2 , y 2  y 2*

~2* (y2) = 2( y1* , y2) = 

*  0, y 2  y 2

(11)

.

Under such incentive scheme, the point y* = ( y1* , y 2* ) is a unique Nash equilibrium. Notably, by passing from the incentive scheme (10) for each agent (this scheme depends on the actions of all agents) to the incentive scheme (11) (which is completely defined by the action of the agent in question), the principal decomposes the game of agents and implements the unique action. Evidently, the efficiency of the incentive does not decrease; quite the reverse, it can be higher than for the initial incentive scheme.

y2 x N1

y 2 y 2* Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

N2 0

y1*

y1

y1

x

Figure 2.14.

y 2 x

y2 N1

y 2* N2 y1 0

y1*

y1

x

Figure 2.15.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

54

y2 x

y 2

N1

y 2* N2 y1

y1*

0

y1

x

Figure 2.16.

In cases 2 and 3, Nash equilibria are the segments [N1; N2] illustrated by Figs. 2.14 and 2.15, respectively. Finally, in case 4 (see Figure 2.16), the set of Nash equilibria consists of the point (0; 0) and the segment [N1; N2], i.e., EN () = (0; 0)  [N1; N2].

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Moreover, the points of the interval (N1; N2) are (different) undominated equilibria in the sense of Pareto. Now, within the framework of the current example, let the cost functions of the agents be inseparable: ci (y) =

( yi   y3 i ) 2 . 2ri

We define the set of individually rational actions of the agents: Y = {(y1, y2) | ci (y)  Ci, i = 1, 2}. To avoid consideration of all feasible combinations of the parameters {r1, r2, C1, C2, x}, we take the case demonstrated in Figure 2.17.

2 r1C1 / 

y2

x

2r2C2 y 2*

N1 N2 y1

0

y1*

2r1C1 x

Figure 2.17. The set of Nash equilibria [N1; N2] under inseparable costs. Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

2r2C2 / 

Incentive Mechanisms

55

Consequently, the set of Nash equilibria includes the segment [N1; N2]. The incentive scheme

c1 ( y1* , y 2 ), y1  y1* y1  y1*  0,

~1* (y) = 

(12)

c2 ( y1 , y 2* ), y 2  y 2*  0, y 2  y 2*

~2* (y) = 

implements the action y*  [N1; N2] as a dominant strategy equilibrium. We have finished discussion of the incentive mechanisms for individual results of agents’ activity. To proceed, let us describe some collective incentive mechanisms.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

2.6. COLLECTIVE INCENTIVE MECHANISMS (UNOBSERVABLE ACTIONS) The majority of known incentive models consider two types of OS. The first type is when a control subject (a principal) observes the result of activity for all controlled subjects (agents), being uniquely defined by the strategy (action) chosen by an agent. The second type includes OS with uncertainties, where the observed result of agents’ activity depends not only on his or her actions, but also on uncertain and/or random factors (e.g., see the model of contract theory in Section 2.4). The present section provides the statement and solution to the collective incentive problem in a multi-agent deterministic OS, where a principal possesses only some aggregated information about the results of agents’ activity. Recall the model studied in the preceding section. In an n-agent OS, let the result of agents’ activity z  A0 = Q(A′) be a certain function of their actions: z = Q(y) (Q() is referred to as the aggregation function). The preferences of the OS participants, i.e., the principal and agents, are expressed by their goal functions. In particular, the principal’s goal function makes up the difference between his or her income H(z) and the total incentive  (z) paid to the agents:  (z) =

 ( z) , where  (z) stands for the incentive of agent i,  (z) = ( (z), iN

i

i

1

2(z), …, n (z)), i.e.,  ( (), z) = H(z) –

 ( z) . iN

i

(1)

The goal function of agent i represents the difference between the reward given by the principal and the costs ci (y): fi (i (), y) = i (z) – ci (y), i  N.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

(2)

Dmitry Novikov

56

We adopt the following sequence of moves in the OS. At the moment of decisionmaking, the principal and the agents know the goal functions and feasible sets of each other, as well as the aggregation function. The principal’s strategy is choosing incentive functions, while the agents choose their actions. Enjoying the right of the first move, the principal chooses incentive schemes and report them to the agents. Under known incentive functions, the agents subsequently choose their actions by maximizing the corresponding goal functions. Imagine that the principal observes individual actions of the agents (equivalently, the principal can uniquely recover the actions using the observed result of activity). In this case, the principal may employ an incentive scheme being directly dependent on the agents’ actions:  i  N: ~i (y) = i (Q(y)) - how such incentive problems are treated is discussed in the previous section. Therefore, we analyze a situation when the principal observes merely the result of activity in the OS (which predetermines the principal’s income); he or she is unaware of the individual actions of the agents and appears unable to restore this information. In other words, aggregation of information takes place–the principal possesses incomplete information on the action vector y  A′ of the agents. He or she only knows a certain aggregated variable z  A0 (a parameter characterizing the results of joint actions of the agents). In the sequel, we believe that the OS parameters meet the assumptions introduced in Section 2.5. Moreover, assume that the aggregation function is a one-valued continuous function. By analogy to the aforesaid, the efficiency of incentive is treated as the minimal value (or the maximal value–under the hypothesis of benevolence) of the principal’s goal function on the corresponding solution set of the game:

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

K( ()) =

min  ( (), Q (y)).

yP ( ( ))

(3)

The problem of optimal incentive function design lies in searching for a feasible incentive scheme * ensuring the maximal efficiency:

* = arg max K( ()).  ( )

(4)

In the incentive problems investigated in Section 2.5, decomposition of the agent game was based on the principal’s ability to motivate the agents for choosing a specific (observable!) action. Actions of the agents being unobservable, direct application of the decomposition approach seem impossible. Thus, solution of the incentive problems (where the agents’ rewards depend on the aggregated result of activity in the OS) should follow another technique. This technique is rather transparent. Find a set of actions yielding a given result of activity. Then separate a subset with the minimal total costs of the agents (accordingly, with the minimal costs of the principal to stimulate the agents under optimal compensatory incentive functions, see Sections 2.1 and 2.5). Next, construct an incentive scheme implementing this subset of actions. Finally, choose the result of activity with the most beneficial implementation for the principal.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

57

Now, let us give a formal description to the solution of the incentive problem in an OS with aggregation of information about agents’ activity. Define the set of action vectors of the agents, leading to a given result z of activity: Y(z) = {y  A′ | Q(y) = z}  A′, z  A0.

It has been demonstrated above that, in the case of observable actions of the agents, the minimal costs of the principal to implement the action vector y  A′ equal the total costs of the agents

 c ( y) . iN

i

Similarly, we evaluate the minimal total costs of the agents to

~

demonstrate the result of activity z  A0:  ( z ) = min

yY ( z )

action set Y*(z) = Arg min

yY ( z )

 c ( y) , and the corresponding iN

i

 c ( y) , which attains the minimum. iN

i

Fix an arbitrary result of activity x  A0 and an arbitrary vector y*(x)  Y*(x)  Y(x). Let us make a technical assumption as follows:  x  A0,  y′  Y(x),  i  N,  yi  Proji Y(x): the function cj (yi, y′–i) does not decrease with respect to yi, j  N. It was rigorously shown in [50] that: 1) under the incentive scheme

 ix* (z)

ci ( y* ( x))   i , z  x =  , i  N, zx 0,

(5)

the action vector of the agents y*(x) is implementable as a unique equilibrium with the Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

~

minimal costs of the principal to stimulate the agents (these costs constitute  ( x ) + ,  =

 iN

i

);

2) the incentive scheme (5) is -optimal. Hence, Step 1 to solve the incentive problem (4) consists in finding the minimal incentive

~

scheme (5), which leads to the principal’s costs  ( x ) to stimulate the agents and implements the action vector of the agents, leading to the given result of activity x  A0. Therefore, at Step 2 one evaluates the most beneficial (for the principal) result of activity x*  A0 by solving the problem of optimal incentive-compatible planning:

~

x* = arg max [H(x) –  ( x ) ]. x A0

(6)

And so, the expressions (5)–(6) provide the solution to the problem of optimal incentive scheme design in the case of agents’ joint activity.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

58

Let us analyze how the principal’s ignorance (infeasibility of observations) of the agents’ actions affects the efficiency of incentives. As usual, it is assumed that the principal’s income function depends on the result of activity in the OS. Consider two possible cases. 1. the actions of the agents are observable, and the principal is able to motivate the agents based on their actions and on the result of collective activity; 2. the actions of the agents are unobservable, and the incentives could depend on the observed result of collective activity (exclusively). Let us compare the efficiency of incentives in these cases. Under observable actions of the agents, the principal’s costs 1(y) to implement the action vector y  A' of the agents constitute 1(y) =

 c ( y) , and the efficiency of iN

i

incentives is K1 = max {H(Q(y)) – 1(y)} (see Section 2.5, as well). y A

Actions of the agents being unobserved, the minimal costs of the principal 2(z) to implement the result of activity z  A0 are defined by (see (5)-(6)): 2(z) = min

yY ( z )

 c ( y) ; iN

i

accordingly, the efficiency of incentives makes K2 = max {H(z) – 2(z)}.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

z A0

In [50] it was demonstrated that K1 = K2. The described phenomenon could be referred to as the perfect aggregation theorem for incentive models. In addition to the estimates of comparative efficiency, it has an extremely important methodological sense. It turns out that, under collective incentive scheme, the principal ensures the same level of efficiency as in the case of individual incentive scheme! In other words, aggregation of information by no means decreases the operational efficiency of an organizational system. This sounds somewhat paradoxically, since existing uncertainties and aggregation generally reduce the efficiency of management decisions. The model considered includes perfect aggregation. In practice, the interpretation is that the principal does not care what actions are selected by the agents; they must lead to the desired result of activity provided the minimum total costs. Informational load on the principal is decreased (provided the same efficiency of incentives). Therefore, the performed analysis yields the following conclusions. If the principal’s income depends only on the aggregated indicators of agents’ activity, using them is reasonable to motivate the agents. Even if individual actions of the agents are observed by the principal, an incentive scheme based on individual actions of the agents does not increase the efficiency of control (but definitely raises informational load on the principal). Recall that in Section 2.1 we have formulated the principle of costs compensation. For models with data aggregation, the principle is extended in the following way. The minimal costs of the principal to implement a given result of activity in the OS are defined as the minimal total costs of the agents compensated by the principal (provided that the former choose an action vector leading to this result of activity). Let us consider an illustrative example.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

 y , H(z) = z, c (y ) =

Example 2.5. Set z =

iN

i

i

59

yi2 / 2ri, i  N (see also the examples in

i

Section 2.5). We evaluate Y(z) = {y  A′ |

y iN

i

= z}.

Solution to the problem

 c ( y )  min under the constraint  y iN

i

i

y A'

takes the form y i* (x) =

iN

ri

W

x, where W =

=x

i

 r , i  N. The minimal costs to implement iN 2

i

the result of activity x  A0 are equal to  (x) = x / 2 W. By maximizing the principal’s goal function (by estimating max [H(x) –  (x)]), we x0

obtain the optimal plan x* = W and the optimal incentive scheme

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

 i* (W,

2  x z) = ri 2W 2 , z  x , i  N.  0, zx

The efficiency of incentives (the value of the principal’s goal function) is K = W / 2. In Sections 2.5-2.6 we have studied collective incentive schemes with specific (for each agent) relationships between the rewards and actions or results of the agents. In practice, it happens that the principal has to apply the same relationship between the rewards and actions (or results of joint activity) for all agents. To proceed, let us focus on such models.

2.7. UNIFIED INCENTIVE MECHANISMS In personalized (individual and collective) incentive schemes studied above, for any agent the principal chooses a specific relationship between the reward and his or her actions (Section 2.1), the actions of the rest agents (Section 2.5) or the results of their joint activity (Section 2.6). In addition to personalized incentive schemes, there exist unified incentive schemes with an identical (for all agents) relationship between the reward and certain parameters. The necessity of using unified incentives follows from institutional constraints or emerges as the result of principal’s aspiration for “democratic-type” management, suggesting equal opportunities for the agents, etc. Since unified control is a special (“simplified”) case of personalized control, the efficiency of the former is not greater than that of the latter. Hence, the following questions

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

60

arise. What efficiency losses appear as the result of egalitarianism? When the losses actually vanish? Consider two models of collective unified incentives, viz., a unified proportional (linear) incentive scheme and a unified collective incentive scheme for the results of joint activity; note this technique could be used for the analysis of any incentive scheme. In the first model, unification leads to no efficiency losses (just the opposite, unified incentive schemes turn out optimal in the class of all proportional ones). In the second model, the efficiency is considerably lower. Unified proportional incentive schemes. Let us introduce the following assumption regarding the cost functions of the agents: ci (yi, ri) = ri  (yi /ri), i  N,

(1)

where  () is a smooth strictly increasing convex function such that  (0) = 0 (e.g., for the Cobb-Douglas function we have  (t) = t / ,   1), and ri > 0 is the efficieny parameter (type) of agent i. Suppose that the principal uses proportional (L-type) individual incentive schemes: i (yi) = i yi. Then the agents’ goal functions take the form fi (yi) = i yi – ci (yi). Find the action chosen by agent i under a certain fixed incentive scheme applied by the principal:

yi* (i) = ri  ' –1(i), i  N,

(2)

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

with  ' –1() being the reverse function to the derivative of  (). The minimal total costs of the principal to stimulate the agents constitute

L () =

n

 i 1

i

ri  '1 ( i ) ,

(3)

where  = (1, 2, ..., n). The total costs of the agents make up c() =

n

 r  ( ' i 1

i

1

( i )) .

(4)

Note that the above general model of proportionnal incentives covers different statements of specific problems. We study some statements below, interpreting actions of the agents as the amounts of products manufactured. Problem 1. Suppose that the principal is interested in agents’ performing an assigned plan R of the total output under the minimum costs of the agents. We underline the necessity of distinguishing the total costs of the agents and the total costs of the principal to motivate the agents. Then he or she chooses wage rates {i}i  N by solving the following problem:

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

 c( )  min  n .  *  yi ( i )  R  i 1

61

(5)

This problem has the following solution:

 i* =  ′(R / W); yi* = ri (R / W); i  N, c* = W  (R / W);  L* = R  ′(R / W),

where W =

(6)

n

 ri . i 1

Since optimal wage rates turn out identical for all agents, exactly the unified linear incentive scheme is optimal. Problem 2. The problem of total output maximization under the constraints on the total costs of the agents

n *  yi ( i )  max ,   i 1  c( )  R

(7)

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

is “dual” to Problem 1. The solution to (7) is given by:

 i* =  ′( –1(R / W)); yi* = ri  –1(R / W); i  N, c* = R;  L* =  –1(R / W) W  '( –1(R / W)).

(8)

In other words (and naturally enough), the optimal solution again consists in unified proportional incentive schemes. Now, in Problems 1 and 2 substitute the total costs of the agents by the total costs of the principal to motivate them. This generates another pair of dual problems. Problem 3. Suppose that the principal is interested in agents’ implementation of a planned total output R under the minimum total costs to motivate them. The corresponding wage rates are defined by solving the following problem:

 L ( )  min  n .  *  y (  ) R  i i  i 1

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

(9)

Dmitry Novikov

62

Then the unified scheme (6) again provides the solution (exactly as in Problem 1). This is rather a curious result, since the total costs of the agents reflect the interests of the controlled subjects, while the total costs to motivate them correspond to the interests of the principal. Of course, the reason lies in the assumptions made earlier. Problem 4 is to maximize the total output under existing constraints on the total costs to motivate the agents:

n *  yi ( i )  max .   i 1   L ( )  R

(10)

Lagrange’s multiplier method yields the following optimality condition ( is the Lagrange multiplier):

  ' –1(i)  ''(i) + i = 1, i  N. Hence, all wage rates must be identical and satisfy the equation

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

  ' –1() = R / W.

(11)

Therefore, we have derived the following result. In organizational systems with weakly related agents (whose cost functions are given by (1)), unified incentive schemes turn out optimal on the set of proportional incentive schemes. Note that the optimality of unified proportional incentive schemes (UL-type incentive schemes) could also be demonstrated on the set of proportional incentive schemes in OS with weakly related agents and the cost functions (1). Thus, it seems interesting to study their relative efficiency on the set of all possible (not only proportional) incentive schemes. It suffices to compare the minimal costs to motivate the agents, e.g., in Problem 2, with the costs to motivate the agents in the case of the optimal compensatory incentive schemes K (y*) =

n

 r  ( y / r ) (see Sections 2.1 and 2.5). i 1

i

i

i

By solving the choice problem for the vector y*  A' which maximizes K (y*) under the n

constraint

y i 1

* i

= R, one obtains that

* = R  ' (R / K* = W  (R / W). By substituting UL

W) from the expression (6), one evaluates the ratio of the minimal costs to motivate the agents: * * UL / K = R / W  ' (R / W) /  (R / W).

(12)

* / K  1. Moreover, one would easily From the convexity of  () it follows that UL

*

show that for R / W > 0 and strictly convex cost functions the ratio (12) exceeds the unity. The total costs to motivate the agents in unified proportional schemes are higher than in the case Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

63

of “absolutely optimal” compensatory incentive schemes. Accordingly, the former are nonoptimal in the class of all feasible (e.g., nonnegative and monotonic) incentive schemes. The result obtained for multi-agent organizational systems agrees with the conclusion made in Section 2.3 (in single-agent systems the efficiency of proportional incentives does not exceed that of compensatory incentives). Unified incentive schemes for joint activity results. In Section 2.5 we have analyzed personalized incentive schemes of agents for the results of their joint activity. Now, let us apply a unified incentive scheme to this model. Consider the class of unified incentive schemes for joint activity results (see Section 2.5), i.e., the incentive schemes where a principal uses an identical relationship between individual rewards and the result of activity z  A0 for all the agents. We introduce the following function: c (y) = max {ci (y)}. iN

(13)

At Step 1, evaluate the minimal costs U (z) of the principal to implement the result of activity z  A0 under a unified incentive scheme:

U(z) = min c (y). yY ( z )

The set of action vectors minimizing the costs to implement the result of activity z  A0 takes the form Y*(z) = Arg min c (y). yY ( z )

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

By analogy to Section 2.5, one may show that the unified incentive scheme

c( y* ( x))   / n, z  x ix(z) =  , i  N, zx 0,

(14)

where y*(x) is an arbitrary element from the set Y*(x), guarantees the incentive compatibility and implements the result of activity x  A0 under the minimal principal’s costs (in the class of unified incentive schemes). At Step 2 of designing the optimal unified incentive scheme, one evaluates the most beneficial (according to the principal’s viewpoint) result of activity xU* in the OS. For this, solve the problem of optimal incentive-compatible planning:

xU* = arg max [H(z) – n U (z)]. z A0

(15)

Formulas (14)–(15) describe the solution to the problem of optimal unified incentive scheme design in the case of joint activity of the agents. Obviously, the efficiency of the

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

64

unified incentives (14)–(15) does not exceed the efficiency of the personalized incentives (5)– (6). Example 2.6. Recall the first example in Section 2.5 and assume that the principal has to 2

use a unified incentive scheme. Set c(y) = y j / 2rj, where j = arg min {ri}. Then the minimal iN

*

costs to motivate the agents constitute U (z) = z / 2 n rj. The optimal plan xU = n rj yields the 2

efficiency n rj / 2. Generally, it is smaller than the efficiency

 r / 2 ensured by the iN

i

personalized incentive scheme (both efficiencies coincide in the case of identical agents).

2.8. TEAM INCENTIVE MECHANISMS This section is devoted to the description of collective incentive models, notably, team payments. They are remarkable for that an agent (a member of a team) obtains a reward defined by activity participation factor (APF); thus, the reward depends on the action of the agent in comparison with the actions of other agents. Generally, the bonus fund is determined according to an aggregated activity result of the whole team (in a particular case, it is fixed). It is possible to apply different APF design procedures:

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

 

making APF proportional to a wage category (qualification level) of an employee; making APF proportional to activity contribution factor (ACF) of an employee (his or her individual BSC).

Forming APF in a proportion to wage categories means the following. Suppose that a wage category characterizes the activity of each employee (agent). Moreover, the greater is the wage category, the higher is the qualification level of an agent. Hence, a wage category describes the efficiency of each agent and can be involved to assess his or her activity. In the case of ACF, one accounts for the actual contribution of each agent to the overall result of the whole team (depending on individual labor productivity and quality of work). Thus, in a work team the managers possess specific goals and form conditions of functioning to achieve them. Accordingly, the agents have their own goals and strive for attaining them by a proper choice of actions. We believe that, based on the results of its activity, a work team gains a given bonus fund R to-be-distributed among the agents under a chosen incentive scheme. Suppose that agent i is assigned a rate ri reflecting his or her qualification level (the efficiency of activity) and the individual costs of agent i, ci = ci (yi, ri), strictly decrease with respect to the qualification level ri, i  N. A work team composed of agents with an identical qualification level is said to be uniform (and non-uniform if the levels differ). In the sequel, the efficiency of an incentive scheme is defined by the total action of the agents:

 (y) =

y . iN

i

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

65

The procedures based on APF. First, let us consider the case of APF. A fund R is distributed among the agents according to the activity participation coefficients {i}i  N,

 jN

j

 1 . Thus, the bonus of agent i makes up i = i R.

The goal functions of the agents take the form fi (yi) = i – ci (yi, ri), i  N.

(1)

Due to its simplicity, a widespread procedure of APF design is based merely on accounting for the qualification level of agent i, i.e.,

i 

ri

r j N

. Substitute this formula in j

(1) to obtain the following result. APF involving solely the qualification levels of the agents (and not their actual actions) exert no impact on the agents. In particular, such procedures do not motivate the agents to choose, e.g., larger actions. Therefore, we proceed to ACF. The procedures based on ACF. For an agent, a natural and elementary technique to define ACF is making it proportional to the action of the agent:

i 

yi , i  N.  yj

(2)

j N

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Suppose that the cost functions of the agents are linear: ci (yi, ri) = yi / ri. Then the expressions (1)-(2) imply that the goal function of agent i depends on the actions of all agents: fi (y) = R

i 

yi – yi / ri, i  N.  yj

(3)

j N

Hence, the modeled situation represents a game among n players with the payoff functions (3). Uniform teams. We begin with uniform teams (with identical agents). Nash equilibrium actions of the agents take the form:

yi* 

Rr (n  1) , i  N, n2

(4)

leading to the following value of the efficiency: K1(R, r, n) =

Rr (n  1) . n

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

(5)

Dmitry Novikov

66

Formula (4) demonstrates that a larger bonus fund stimulates the choice of greater actions by the agents. According to (5), the efficiency grows linearly as one increases the bonus fund or qualification levels of the agents. In other words, there is no optimal bonus fund which maximizes the efficiency K1 / R of its utilization. At the same time, the actions of the agents being bounded above, there exists an optimal value of the bonus fund. If one knows the upper bound, the optimal value is expressed from (4). It is easy to prove that partitioning a uniform team in smaller subteams (with appropriate splitting up of the bonus fund) would not increase the efficiency of its utilization. Moreover, for a fixed wage fund any reduction of a uniform team causes efficiency losses and the choice of larger actions by the agents. Consider the following issue. Preserving the same bonus fund R, is it possible to improve the total efficiency of a uniform team by a proper design of agents’ ACF? For this, let us study the following ACF design procedure:

i 

yi n . , i  N, 1     n 1  yj

(6)

j N

Note it turns out more sensitive to agents’ differentiation than the procedure (2). Nash equilibrium actions of the agents are

yi*  

Rr (n  1) , i  N, n2

(7)

and appear greater than the actions given by (4). Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Hence, under the constraint 1   

n , one claims that applying the ACF design n 1

procedure (6) improves the efficiency as against the procedure (2) by 1 / (n – 1) per cent. For instance, in a team of 11 employees, the possible gain constitutes 10%. Non-uniform teams. Formulas (2)-(3) show that, in the corresponding non-uniform team, the Nash equilibrium is ensured by the following agents’ actions and efficiency15:

y  * i

1 / r j N

j

 (n  1) / ri

(  1 / rj )

R (n  1) , i  N,

(8)

2

j N



K2(R, r , n) =

y j N

* j



R ( n  1) .  1 / rj j N

15

For identical agents, (8) and (4), as well as (9) and (5) coincide.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

(9)

Incentive Mechanisms

67

Assume that a team includes agents of two types, i.e., m skillful agents with the efficiency r+ and (n – m) “ordinary” agents with the efficiency r–, where r+ > r–. Then one derives

1 / r iN

i

= m / r+ + (n – m) / r–.

Using the expression (8), find the equilibrium actions of the skillful agents: y+ =

R(n  1) (n  1) 1 [1 –  ],   m / r  ( n  m) / r r m / r  ( n  m) / r  

(10)

and of the “ordinary” agents: y– =

1 R(n  1) (n  1) [1 –  ].   m / r  ( n  m) / r r m / r  ( n  m) / r  

(11)

Next, evaluate the efficiency by formula (9): K2(R, m, n) =

R(n  1) . m / r  ( n  m) / r 

(12)



Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Evidently, (8) and (10)-(11) demonstrate that the agents with a higher level of qualification compel the agents with a lower qualification level to choose smaller actions. Consequently, the values of their goal functions are accordingly decreased. In addition, formula (11) indicates of the following. If the number of skillful agents in a

1/ r  team is such that m  , then the “ordinary” agents benefit nothing by increasing 1/ r   1/ r  their actions. Yet, for m = 1, the “ordinary” agents always benefit from increasing their actions. At the same time, simple reasoning shows that skillful agents improve the efficiency of the whole team (despite the choice of small actions by the “ordinary” agents). Let us analyze the feasibility of further increase in the efficiency in a team under a fixed bonus fund R. Partition a non-uniform team into two uniform subteams. Suppose that the first subteam includes m skillful agents, and (n – m) “ordinary” agents form the second one. We also accordingly divide the bonus fund R of the team: R = R+ + R –. In the Nash equilibrium, the efficiency of the first (second) subteam equals

R  r  (m  1) m

(respectively,

R  r  (n  m  1) ). nm Hence, the overall efficiency of the team of n agents is K3(R, m, n) =

R  r  (m  1) R  r  (n  m  1) + . m nm

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

(13)

Dmitry Novikov

68

We have noted earlier that partitioning a uniform team into several subteams by no means improves the total efficiency. Generally, this is not true for a non-uniform team. For instance, compare the expressions (12) and (13) provided that the team includes a half of skillful agents whose efficiency exceeds by two times the efficiency of “ordinary” agents; then separating the first group in a subteam would increase the total efficiency only if the initial team is composed of six agents maximum. Otherwise, reduction of the total efficiency is feasible as the result of partitioning the non-uniform team into two uniform subteams (even in the case of optimal distribution of the bonus fund among the subteams). Individual and collective incentives. To conclude the current section, we compare the efficiency of individual and collective incentives in a series of practically relevant situations. Let the cost functions of the agents be linear: ci (yi, ri) = yi / ri, i  N. Assume there exists a common constraint ymax for the maximum actions of agents: Ai = [0; ymax], i  N. Renumber the agents in the descending order of the efficiencies of their activity: r1  r2  …  rn.

(14) *

Suppose that ymax is such that the action y1 defined by (8) under i = 1 appears feasible.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

In this case, the actions of the rest agents are also feasible under the incentive scheme (2)  based on ACF. The efficiency of team incentive K2(R, r , n) is given by formula (9). Evaluate the efficiency of individual incentive when the principal may motivate the agents independently for individual results of activity (provided that the total incentive does not exceed R). We adopt the principle of costs compensation (see Section 2.1) and the results obtained for the incentive problem in a system with weakly related agents (see Section 2.5). If the principal uses compensatory incentive schemes, it is optimal to compensate the costs of the first k agents in the sequence (14) (alternatively, the costs of the first (k + 1) agents–depending on the parameters): k = min {j  N | ymax

j

j 1

i 1

i 1

1 / ri  R, ymax 1 / ri > R}.

(15)

Formula (15) means that the principal should first employ the agents whose efficiency is maximal. In other words, a nonzero reward is provided to the first k or (k + 1) agents, while the rest receive nothing (employing them seems unreasonable). Thus, the efficiency of individual incentive is



K4(R, r , n) = k ymax + rk+1 (R – ymax

k

1 / r ). i 1

i

(16)

The expressions (9) and (16) serve for analyzing the efficiencies gained by collective and individual incentives. As a rule, individual incentives are more efficient (see Section 2.7). For instance, in the case of uniform teams we have the estimate:

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

69

K4(R, r, n) / K1(R, r, n)  n / (n – 1)  1.

Team payments are close to the so-called rank incentive schemes, where collective rewards involve the procedures of competition, norms, etc. A detailed discussion of this class of collective incentive schemes could be found in [50] and Section 2.10. In what follows, we analyze incentive mechanisms in matrix structures.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

2.9. INCENTIVE MECHANISMS IN MATRIX STRUCTURES In many real organizational systems, the same agent is simultaneously subordinated to several principals located at the same or at different hierarchical levels. The first case is said to be distributed control, while the second one is known as interlevel interaction. Interlevel interaction. The analysis of interlevel interaction models [50] indicates that double subordination of an agent to principals located at different levels of an hierarchy appears inefficient. An indirect confirmation of this fact consists in a famous management principle “a vassal of my vassal is not my vassal.” Therefore, naturally each agent should be subordinated only to his or her immediate superior, a principal located exclusively at the next (higher) level of an hierarchy (see also the models of hierarchies optimization in [44]). The following question arises accordingly. Why in real organizational systems one often observes the effects of interlevel interaction? A possible descriptive explanation (without consideration of normative interaction structure for the participants and institutional constraints) is given below. Generally, efficiency losses are assumed to appear only due to the factors of aggregation, decomposition of control problems and insufficient awareness of a principal about certain agents. In particular, imagine there are informational constraints at an intermediate level (e.g., the amount of information to-be-processed by a principal in a certain system exceeds his or her capabilities). Then some control functions (possibly, in the aggregated form) are transferred to a higher level. Simply speaking, incompetency of an intermediate principal (in the objective sense) usually represents the primary cause of the interlevel interaction observed in practice. Therefore, on the one hand, one should a priori admit the feasibility of interlevel interaction in solving the design problems for institutional, functional, informational and other structures in an OS (still, striving to avoid it as much as possible). On the other hand, the presence of interlevel interaction in a real OS testifies to its nonoptimal functioning; for a manager, this is an indication of the necessity to review the structure (or even the composition) of the system. At the same time, double subordination of agents to some same-level principals may be unavoidable. An example is matrix control structures being remarkable for distributed control. Distributed control. A specific feature of matrix control structures (MCS) is that a single employee appears simultaneously subordinated to several superiors (at the same hierarchical level) performing different functions (e.g., a coordinating function, a supporting function, a controlling function, and others). Note that MCS are natural for project-oriented organizations. For instance, a “horizontal” structure of projects is superimposed on an hierarchical organizational structure (see Figure 2.18).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

70

Top management Functional structure

Projects

Engineering management

R&D management

Employees

Employees

Project Менеджер managers проекта

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Figure 2.18. A matrix control structure of an organization.

In MCS, principals controlling an agent are involved in a “game,” with a complicated equilibrium. In particular, we can separate two stable modes of interaction among the principals–the cooperation mode and the competition mode. In the cooperation mode, the principals act jointly and such strategies ensure the required results of agent’s activity under minimal resources. In the competition mode (note it takes place when the goals of the principals differ appreciably), the resources are spent inefficiently. Let us formulate an elementary model of a matrix control structure; a comprehensive overview of modern research in the field of these control problems could be found in [12, 50]. Suppose that an OS consists of a single agent and k principals. A strategy of the agent lies in choosing an action y  A, which incurs the costs c (y). As the result of the agent’s activity, each principal gains a certain income described by the function Hi (y). Moreover, principal i pays to the agent the reward i (y), i  K = {1, 2, …, k} (K stands for the set of principals). Thus, the goal function of principal i takes the form

i (i (), y) = Hi (y) – i (y), i  K.

(1)

The agent’s goal function is defined by f ({i ()}, y) =

  ( y) – c (y). iK

i

(2)

The sequence of moves is the following. The principals simultaneously and independently choose incentive functions and report them to the agent; the latter then chooses an action. For the game of principals, let us confine the analysis to the set of Pareto-efficient Nash equilibria. In this case, the principals’ strategies are

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

i , y  x , i  K. 0, y  x

i (x, y) = 

71

(3)

This means the principals agree about motivating the agent’s choice of a specific action x  A (referred to as a plan), as well as agree about implementation of joint incentives. Such mode of interaction among the principals is said to be the cooperation mode. The conditions of optimality (in the Pareto sense) imply the following. The total incentive received by the agent from the principals (in the case of plan fulfillment) equals the costs:

 i

= c (x).

(4)

iK

In fact, this is the principle of costs compensation generalized to the systems with distributed control. For each principal, the beneficial cooperation condition may be stated as follows. In the cooperation mode, each principal obtains the utility not smaller than as if he or she motivated the agent independently (by compensating the agent’s costs to choose the most beneficial action for this principal). The utility of principal i from “independent” interaction with the agent is defined by (see Section 2.1): Wi = max [Hi (y) – c (y)], i  K.

(5)

y A

Denote  = (1, 2, …, k) and let S = {x  A |      : Hi (x) – i  Wi, i  K,

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

k

 i

= c (x)}

(6)

iK

be the set of agent’s actions such that their implementation makes cooperation of the principals beneficial. A set of the pairs x  S and corresponding vectors  is called a domain of compromise:

 = {x  A,    k | Hi (x) – i  Wi, i  K,

 i

= c (x)}.

(7)

iK

By definition, the cooperation mode takes place if the domain of compromise is nonempty:   . In the cooperation mode the agents obtain zero utility. Set W0 = max [ y A

 H ( y) iK

i

– c (y)].

Then the domain of compromise is non-empty iff [50]

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

(8)

Dmitry Novikov

72 W0 

Wi .

(9)

iK

Thus, the implementability criterion for the cooperation mode is given by formula (9). This means that, acting jointly, the principals may gain a greater total efficiency as against their single-handed behavior. The difference W0 –  Wi can be interpreted as a measure of

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

iK

interests’ coordination among the principals and as a rate of OS emergence. The condition (9) being not met ( = ), the competition mode takes place for the principals; it is characterized by the so-called auction solution. Sort (renumber) the principals in the ascending order of the quantities {Wi}: W1  W2  …  Wk. The winner is the first principal suggesting the agent a utility being by an arbitrarily small quantity greater than W2 (provided that the agent’s costs are compensated, as well). Let us discuss the obtained results. A drawback of MCS is that under insufficient separation of authorities a conflict may take place between the principals–e.g., project managers and functional managers; both project managers and their functional collegues (i.e., principals located at an intermediate level of an hierarchy) strive for “winning over” the agents being simultaneously controlled by them. Evidently, the whole OS bears certain losses of efficiency, since gaining the support of the agents may require considerable costs. Cooperation of middle-level principals (joint assignment of plans and application of an agreed incentive scheme of the form (3) to motivate the agents) enables avoiding such conflicts and efficiency losses. Passing from the competition mode to the cooperation mode makes it necessary to coordinate the interests of the principals. This can be done by superiors at higher hierarchical levels via certain incentive techniques. We consider a possible model16 below. Earlier, we have analyzed the cases when in an MCS middle-level principals of an hierarchy (e.g., project managers) benefit from cooperation. The principals form a coalition and jointly assign a plan for an agent. Hence, all principals can be treated as a single agent maximizing the goal function

ФK ()   H i ( y )  c( y ) .

(10)

iK

However, is such situation good or bad for a top manager (TM) (see Figure 2.18) representing the interests of the whole organization? Answering the posed question requires defining the interests of TM, as well as the methods of influencing on operation of the organizational system. According to the TM viewpoint, the controlled subject is the set of all middle-level principals and the agent. The principals are described by the income functions Hi (y), i  K, while the agent is described by the cost function c (y). Suppose that the interests of a principal depend only on the result of system functioning, i.e., on the values of income and costs implemented by the agent’s actions. Then the TM goal function is given by F ()  F ( H1 (),..., H k (), c()) . 16

This model has been developed by M.V. Goubko, Cand. Sci. (Tech.).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Incentive Mechanisms

73

Moreover, it seems rational to assume the following. The TM aims for increasing (as much as possible) the income gained by each project (represented by the agents) and for reducing the costs to implement these projects. Thus, the TM goal function increases with respect to the variables H1, H2,…, Hk and decreases with respect to the agent’s costs c. An elementary goal function of TM is a linear convolution of all subgoals (with certain nonnegative weights i), yielding the criterion

F ( y )    i H i ( y )   0c( y ) .

(11)

iK

Compare this formula with the expression (10) defining the goal function in the case of a colition of the principals. Apparently, all the coefficients {  i } being different, the system

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

includes interests’ miscoordination between the TM and middle-level principals (project managers). By maximizing their goal function, the latter implement a “wrong” action of the agent (undesired by the TM). Hence, the TM should exert an impact on middle-level principals for reducing the gap between the implemented action y and the required one (which maximizes the efficiency criterion (11)). A possible technique of influencing the system functioning by the TM consists in internal “taxation.” Notably, certain deduction rates {i} for benefit of the TM are established for the incomes gained by the principals of the middle level {Hi()}. Furthermore (or alternatively), certain deduction rates i can be introduced for the profits {Hi () – i ()}. We will show that complete agreement between the interests of the TM and middle-level principals takes place in the case of a flat rate   [0; max] being applied to the profits of all principals. Under a flat rate for profit tax and a differential rate for income tax, the goal function of the TM and of the middle-level principals’ coalition are respectively rewritten as17

F ( y )   [  i  i H i ( y )   0c( y )]

(12)

Ф( y )  (1   )[ (1   i ) H i ( y )  c( y )] .

(13)

iK

and

iK

To coordinate the interests of the TM and middle-level principals, it suffices that their goal functions attain the maximum at the same point. Formulas (12)-(13) imply that this condition holds true if

 i  i /  0  1   i (i.e., under the income tax rate  i 

The TM is interested in increasing the share in the profits; thus,

1 1  i /0

).

   max . In such taxation

system, complete interests’ coordination between the TM and project managers (middle-level

17

Note that in the models (11) and (12) the interests of TM differ.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

74

Dmitry Novikov

principals) takes place. For instance, the income tax rate is 50% provided that

 i  1 (i  K)

and 0 = 0. Therefore, in multi-level organizational systems, ensuring efficient operation requires that each higher level of the hierarchy performs interests’ coordination with lower-level agents (particularly, by the choice of an appropriate incentive scheme). In other words, normal operation of an MCS requires that the top manager uses control actions such that middle-level principals can elaborate joint policy and assign coordinated plans to the agents.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

2.10. RANK INCENTIVE MECHANISMS

In many incentive schemes, the rewards of agents depend on the absolute values of their actions (see Section 2.1) and/or on the result of activity (see Sections 2.5, 2.7 and 2.8). At the same time, rank incentive schemes (RIS) are widely adopted in practice; an agent's reward is defined either by his or her activity indicator (the action or the result) belonging to a certain given range of values (normative RIS), or by the agent's position in the ordered sequence of activity indicators of all agents (competition RIS). A major advantage of rank incentive schemes is that the principal does not necessarily need to know the actual actions chosen by the agents. Instead, it suffices for him or her to know the feasible ranges the actions belong to, or information on the ordering of the actions.

Normative RIS (NRIS) are remarkable for the existence of procedures assigning certain ranks to the agents depending on their activity indicators (e.g., selected actions, etc.). Let us introduce the following assumptions, valid throughout the current section. First, suppose that the sets of feasible actions of the agents are identical and coincide with the set A of nonnegative real values. Second (similarly to Sections 2.1 and 2.5), assume that the cost functions of the agents are monotonic and vanish in the origin (choosing the zero action leads to zero costs).

Denote by N = {1, 2, …, n} the set of agents and by Λ = {1, 2, ..., m} the set of feasible ranks, where m is the dimension of an NRIS. Next, {qj}, j = 1, …, m, represent a certain set of m nonnegative values of the rewards for the corresponding ranks. Finally, ηi: Ai → Λ, i = 1, …, n, are classification procedures. Then an NRIS is the tuple {m, Λ, {ηi}, {qj}}. For any incentive scheme, there is a corresponding NRIS having not smaller efficiency. Indeed, for any incentive scheme and any agent, one may find an individual procedure of classifying his or her actions such that under the NRIS the agent chooses the same action as under the initial incentive scheme. However, in practice it appears unreasonable (or even impossible) to use an individual classification procedure for each agent. Therefore, consider the case of an identical classification procedure adopted for all agents (known as a unified NRIS, or UNRIS). The problems of incentive scheme identification are also discussed in Section 2.7.

Unified normative rank incentive schemes. When a UNRIS is applied, agents choosing the same actions obtain the same rewards. Consider a vector Y = (Y1, Y2, ..., Ym) such that 0 ≤ Y1 ≤ Y2 ≤ ... ≤ Ym < +∞; it defines a certain partition of the set A. A unified NRIS is determined by the tuple {m, {Yj}, {qj}}, provided that the reward of agent i (denoted by σi) constitutes

\[
\sigma_i(y_i) = \sum_{j=0}^{m} q_j\, I\bigl(y_i \in [Y_j, Y_{j+1})\bigr),
\]

where I(·) means the indicator function, Y0 = 0, q0 = 0 (and Ym+1 = +∞).

A unified NRIS is said to be progressive if the rewards increase with respect to the actions: q0 ≤ q1 ≤ q2 ≤ ... ≤ qm. The curve of a progressive UNRIS is shown in Figure 2.19.

Figure 2.19. An example of a progressive UNRIS: a step function of the action y, jumping to the reward level qj at each threshold Yj.

A UNRIS is a piecewise constant function. Hence, it follows from the monotonicity of the cost functions that the agents choose actions with minimal costs on the corresponding segments. In other words, one may assume that under a fixed incentive scheme the set of feasible actions is Y = {Y1, Y2, ..., Ym}, and q0 = 0 if ci(0) = 0. The action yi* chosen by agent i depends on the pair of vectors (Y, q): yi*(Y, q) = Y_{ki}, where

\[
k_i = \arg\max_{k = 0,\dots,m}\, \{q_k - c_i(Y_k)\}, \quad i \in N. \tag{1}
\]

Set y*(Y, q) = (y1*(Y, q), y2*(Y, q), ..., yn*(Y, q)). The problem of optimal UNRIS design lies in choosing a UNRIS dimension m and vectors q, Y satisfying the given constraints and maximizing the principal's goal function:

\[
\Phi\bigl(y^*(Y, q)\bigr) \to \max_{Y,\, q}. \tag{2}
\]

Fix a certain action vector y* ∈ A′ = Aⁿ desired by the principal as the result of UNRIS implementation. Recall that within a UNRIS the agents choose actions from the set Y. Therefore, the minimal dimension of the incentive scheme must equal the number of pairwise different components of the action vector to be implemented. Consequently, using a UNRIS of dimension greater than n seems unreasonable. We confine ourselves to incentive schemes whose dimension coincides with the number of agents: m = n.
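The best-response computation in (1) is straightforward to implement. Below is a minimal Python sketch; the thresholds, rewards, and quadratic costs are illustrative assumptions rather than data from the book.

```python
# A minimal sketch of the best-response computation (1) under a UNRIS; the
# thresholds, rewards and cost parameters are illustrative assumptions.

def best_response(Y, q, cost):
    """Return Y[k*] with k* = argmax_k {q[k] - cost(Y[k])}, k = 0..m."""
    k_star = max(range(len(Y)), key=lambda k: q[k] - cost(Y[k]))
    return Y[k_star]

Y = [0.0, 1.0, 2.0, 3.0]                 # thresholds Y_0 = 0, Y_1, ..., Y_m
q = [0.0, 0.8, 2.0, 4.0]                 # progressive rewards, q_0 = 0
for r in (0.5, 1.0, 2.0):                # cost c(y) = y^2/(2r): lower r, higher cost
    y = best_response(Y, q, lambda v, r=r: v * v / (2.0 * r))
    print(f"r = {r}: chooses y* = {y}")  # 0.0, 1.0 and 3.0, respectively
```

As expected, agents with lower costs select higher steps of the reward menu.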


Given a fixed action vector y* ∈ A′, set Yi = yi*, i ∈ N, and denote cij = ci(Yj), i, j ∈ N. The definition of an implementable action (see (1)) implies the following. A necessary and sufficient condition for a UNRIS to implement the action vector y* ∈ A′ (motivating the agents to choose the corresponding actions) consists in the following system of inequalities:

\[
q_i - c_{ii} \ge q_j - c_{ij}, \quad i \in N, \; j = 0, 1, \dots, n. \tag{3}
\]

Let

\[
\sigma(y^*) = \sum_{i=1}^{n} q_i(y^*) \tag{4}
\]

be the total costs of implementing the action y* in a UNRIS; here q(y*) satisfies the system (3). The problem of optimal (minimal) UNRIS design is to minimize (4) under the constraints (3). Suppose that the agents can be sorted in the descending order of their costs and marginal costs:

\[
\forall\, y \in A: \quad c_1'(y) \ge c_2'(y) \ge \dots \ge c_n'(y).
\]

To proceed, fix a certain vector y* ∈ A′ such that

\[
y_1^* \le y_2^* \le \dots \le y_n^*. \tag{5}
\]

Notably, the higher the agent's costs, the smaller the action he or she actually chooses. The described assumptions hold true for many common cost functions of the agents widely used in mathematical economics. For instance, these are ci(yi) = ki c(yi) and ci(yi) = ki c(yi / ki), c(0) = 0, where c(·) indicates a monotonic differentiable function and the coefficients are sorted: k1 ≥ k2 ≥ ... ≥ kn (they characterize the efficiency of the agents' activity). Special cases are linear cost functions, the Cobb-Douglas cost functions, and others. In [50] it was shown that:

1) unified normative rank incentive schemes implement only the actions meeting the condition (5);
2) the optimal UNRIS is progressive;
3) optimal rewards can be evaluated using the recurrent formula

\[
q_1 = c_{11}, \qquad q_i = c_{ii} + \max_{j < i}\, \{q_j - c_{ij}\}, \quad i = 2, \dots, n;
\]

4) in a UNRIS implementing the vector y* ∈ A′, the individual rewards satisfy


\[
q_i = \sum_{j=1}^{i} \bigl(c_j(y_j^*) - c_j(y_{j-1}^*)\bigr), \quad i \in N, \tag{6}
\]

where y0* = 0.

The expression (6) serves for analyzing the properties of a UNRIS. It is possible to find optimal rewards, to construct optimal classification procedures, to compare the efficiency of a UNRIS with the efficiency of compensatory incentive schemes, etc.

Competition rank incentive schemes. Let us briefly consider some properties of competition rank incentive schemes (CRIS), where a principal defines the number of classes and the number of available places within each class, as well as the rewards of the agents in a certain class. In other words, in a CRIS an individual reward of an agent does not directly depend on the absolute value of his or her action. Instead, the reward is determined by the place the agent occupies in the ordered sequence of actions (or results of activity) of all agents. It was proved in [50] that:

1) the inequality (5) is a necessary and sufficient condition of implementability of the action vector y* ∈ A′ in the class of CRIS;
2) the above vector is implementable by the following incentive scheme ensuring the minimal costs of the principal:

\[
q_i(y^*) = \sum_{j=2}^{i} \bigl(c_{j-1}(y_j^*) - c_{j-1}(y_{j-1}^*)\bigr), \quad i = 1, \dots, n. \tag{7}
\]


Formula (7) is useful to study the properties of a CRIS (to find optimal rewards, to construct optimal classification procedures, to compare the efficiency of a CRIS with the efficiency of other incentive schemes, etc.).
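As a small illustration of formula (7), the sketch below reuses the same assumed quadratic costs; in this example the principal's total costs under the CRIS (28) exceed those under the optimal UNRIS above (26).

```python
# A sketch of formula (7): minimal rewards implementing y* in the class of
# CRIS (q_1 = 0). Same illustrative quadratic costs as in the UNRIS sketch.

def cris_rewards(y_star, costs):
    q = [0.0]
    for i in range(1, len(y_star)):
        q.append(q[-1] + costs[i - 1](y_star[i]) - costs[i - 1](y_star[i - 1]))
    return q

costs = [lambda y, k=k: k * y * y for k in (3.0, 2.0, 1.0)]
print(cris_rewards([1.0, 2.0, 3.0], costs))   # [0.0, 9.0, 19.0]
```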

2.11. MECHANISMS OF ECONOMIC MOTIVATION

Incentive mechanisms motivate controlled subjects (agents) to perform specific actions for the benefit of a control subject (a principal). In the previous sections of this chapter, we have discussed mechanisms where an incentive lies in a direct reward of an agent by the principal. In contrast, this section focuses on certain mechanisms of economic motivation: a principal controls agents by establishing different norms (e.g., tax rates, profit rates, and so on) of agents' activity. The corresponding examples are internal taxation rates, defining income or profit allocation between units or departments and the whole organization (a corporate principal or a holding company), and external taxation rates, defining the payments of enterprises to regional or municipal budgets.

Consider the following model. An organizational system (a corporation, a firm) includes a principal and n agents. Suppose the costs ci(yi) of agent i are known and depend on his or her action yi ∈ ℝ¹₊ (e.g., on the amount of products manufactured by the agent); i ∈ N = {1, 2, …, n}, where N is the set of agents. Moreover, assume that the cost function is continuous, increasing, convex, and vanishing in the origin. The goal function of agent i represents the difference between his or her income Hi(yi) and the costs ci(yi):

fi(yi) = Hi(yi) − ci(yi), i ∈ N.

Let the cost functions of the agents take the form

ci(yi) = ri φ(yi / ri), i ∈ N,

where φ(·) is an increasing smooth convex function, φ(0) = 0. Denote by ξ(·) = (φ′)⁻¹(·) the inverse function of the derivative φ′(·). In the sequel, we study five mechanisms of economic motivation of the agents, viz.,

1) the deduction mechanism (for income tax);
2) the centralized mechanism;
3) the profitability rate mechanism;
4) the profit tax mechanism;
5) the profit share mechanism.

Deduction mechanism. Suppose that an internal (transfer) price λ is given for a unit product manufactured by the agents. Imagine that the principal uses a certain deduction rate18 α ∈ [0; 1] for the income gained by the agents. Hence, for agent i, the income is Hi(yi) = λ yi and the goal function takes the form

\[
f_i(y_i) = (1 - \alpha)\,\lambda\, y_i - c_i(y_i), \quad i \in N. \tag{1}
\]

The deduction rate α can be interpreted as an income tax rate. Each agent chooses an action maximizing his or her goal function:

\[
y_i(\alpha) = r_i\, \xi\bigl((1 - \alpha)\lambda\bigr), \quad i \in N. \tag{2}
\]

The principal's goal function (the total deduction obtained from the agents) makes up

\[
\Phi(\alpha) = \alpha \lambda H\, \xi\bigl((1 - \alpha)\lambda\bigr), \quad \text{where } H = \sum_{i \in N} r_i. \tag{3}
\]

The principal strives to maximize his or her goal function by choosing an appropriate deduction rate:

\[
\Phi(\alpha) \to \max_{\alpha \in [0;\,1]}. \tag{4}
\]

For the Cobb-Douglas cost functions of the agents, i.e., $c_i(y_i) = \frac{1}{\gamma}\, (y_i)^{\gamma} (r_i)^{1-\gamma}$, γ ≥ 1, i ∈ N, one derives the following solution to the problem (4):

18 Evidently, under the assumptions introduced below, a uniform rate for all agents is optimal; see Section 2.7.

\[
\alpha^*(\gamma) = 1 - 1/\gamma. \tag{5}
\]

Hence, the optimal rate α* is an increasing function of γ. The optimal value of the principal's goal function constitutes

\[
\Phi_\alpha = \frac{\gamma - 1}{\gamma}\, \lambda H\, \xi(\lambda/\gamma), \quad \text{i.e.,} \quad \Phi_\alpha = (\gamma - 1)\, H\, (\lambda/\gamma)^{\gamma/(\gamma-1)},
\]

and the total agents' action is

\[
Y_\alpha = H\, \xi(\lambda/\gamma) = H\, (\lambda/\gamma)^{1/(\gamma-1)}.
\]

The gain of agent i is described by the formula

\[
f_i = r_i\, (1 - 1/\gamma)\, (\lambda/\gamma)^{\gamma/(\gamma-1)}, \quad i \in N,
\]

while the sum of the goal functions of the system participants (the principal and all agents) is

\[
W_\alpha = (\gamma^2 - 1)\, H\, (\lambda/\gamma)^{\gamma/(\gamma-1)} / \gamma.
\]

Centralized mechanism. Let us compare the obtained parameters with the corresponding values in another mechanism of economic motivation, viz., the centralized scheme. The principal "acquires" all income of the agents and compensates the costs incurred by the actions yi provided that the plans xi are fulfilled (a compensatory incentive scheme; see Sections 2.1, 2.3). In this case, the principal's goal function takes the form

\[
\Phi_x(x) = \lambda \sum_{i \in N} x_i - \sum_{i \in N} c_i(x_i). \tag{6}
\]

By solving the problem $\Phi_x(x) \to \max_{\{x_i \ge 0\}}$, the principal evaluates the optimal plans:

\[
x_i = r_i\, \xi(\lambda), \quad i \in N. \tag{7}
\]

For the Cobb-Douglas cost functions of the agents, the optimal value of the principal's goal function makes up

\[
\Phi_x = \lambda^{\gamma/(\gamma-1)}\, H\, (1 - 1/\gamma),
\]

while the sum of agents' actions is

\[
Y_x = H\, \xi(\lambda) = H\, \lambda^{1/(\gamma-1)}.
\]

The gain of agent i identically equals zero, since the principal exactly compensates the agent's costs; the sum of the goal functions of the system participants (the principal and all agents) is Wx = Φx. Compare the results of the deduction mechanism and the centralized mechanism:


• $\Phi_x / \Phi_\alpha = \gamma^{1/(\gamma-1)} \ge 1$ and decreases with respect to γ;
• $Y_x / Y_\alpha = \gamma^{1/(\gamma-1)} \ge 1$ and decreases with respect to γ;
• $W_x / W_\alpha = \gamma^{\gamma/(\gamma-1)} / (\gamma + 1) \ge 1$ and decreases with respect to γ.

Therefore, if the agents have the Cobb-Douglas cost functions, the centralized mechanism of economic motivation is more beneficial for the whole organizational system than the deduction mechanism (indeed, the former ensures a greater total output and a higher total utility of all system participants than the latter). The reservation "for the whole organizational system" seems essential, as in the centralized mechanism the profits (the goal function values) of the agents are zero: all available resources are accumulated by the "metasystem." Such a scheme of interaction between the principal and the agents may be inconvenient for them. Thus, let us analyze a generalized version of the centralized scheme known as the profitability rate mechanism. Here the reward of an agent (provided by the principal) not only compensates the costs under a fulfilled plan, but also includes a certain utility proportional to the costs. The coefficient of proportionality is said to be the profitability rate (see Section 2.1, as well). The above centralized scheme corresponds to a zero value of the profitability rate.

Profitability rate mechanism. Given a profitability rate ρ ≥ 0, the principal's goal function is given by

\[
\Phi_\rho(x) = \lambda \sum_{i \in N} x_i - (1 + \rho) \sum_{i \in N} c_i(x_i). \tag{8}
\]

By solving the problem $\Phi_\rho(x) \to \max_{\{x_i \ge 0\}}$, the principal evaluates the optimal plans:19

\[
x_i = r_i\, \xi\bigl(\lambda / (1 + \rho)\bigr), \quad i \in N. \tag{9}
\]

For the Cobb-Douglas cost functions of the agents, the optimal value of the principal's goal function makes up

\[
\Phi_\rho = \lambda\, \bigl(\lambda/(1+\rho)\bigr)^{1/(\gamma-1)}\, H\, (1 - 1/\gamma),
\]

while the sum of the agents' actions is

\[
Y_\rho = H\, \xi\bigl(\lambda/(1+\rho)\bigr) = H\, \bigl(\lambda/(1+\rho)\bigr)^{1/(\gamma-1)}.
\]

The gain of agent i constitutes

\[
f_i = \rho\, r_i\, \bigl(\lambda/(1+\rho)\bigr)^{\gamma/(\gamma-1)} / \gamma, \quad i \in N,
\]

and the sum of the goal functions of the system participants (the principal and all agents) is

\[
W_\rho = \lambda H\, \bigl(\lambda/(1+\rho)\bigr)^{1/(\gamma-1)}\, \bigl(\gamma - 1/(1+\rho)\bigr) / \gamma.
\]

19 Evidently, the principal has zero optimal profitability rate.


Again, we compare the derived results (note that, for ρ = 0, the formulas for the profitability rate mechanism yield the corresponding formulas for the centralized mechanism):

• $\Phi_x / \Phi_\rho = (1+\rho)^{1/(\gamma-1)} \ge 1$ and increases with respect to ρ;
• $Y_x / Y_\rho = (1+\rho)^{1/(\gamma-1)} \ge 1$ and increases with respect to ρ;
• $W_x / W_\rho = (\gamma-1)\,(1+\rho)^{\gamma/(\gamma-1)} \big/ \bigl(\gamma(1+\rho) - 1\bigr) \ge 1$ and increases with respect to ρ.

Interestingly, the maximal sum of the goal functions of the system participants (the principal and all agents) is attained under a zero profitability rate (i.e., under absolute centralization)! Now, compare the profitability rate mechanism with the deduction mechanism:

• $\Phi_\alpha / \Phi_\rho = \bigl((1+\rho)/\gamma\bigr)^{1/(\gamma-1)}$ and increases with respect to ρ;
• $Y_\alpha / Y_\rho = \bigl((1+\rho)/\gamma\bigr)^{1/(\gamma-1)}$ and increases with respect to ρ;
• $W_\alpha / W_\rho = (\gamma^2-1)\,(1+\rho)^{\gamma/(\gamma-1)} \big/ \bigl(\gamma^{\gamma/(\gamma-1)}\,(\gamma(1+\rho) - 1)\bigr)$ and increases with respect to ρ.

Thus, one draws the following conclusion. For the Cobb-Douglas cost functions of the agents, the profitability rate mechanism with ρ = γ − 1 is equivalent to the deduction mechanism. This assertion follows from the fact that under ρ = γ − 1 all (!) parameters of the profitability rate mechanism coincide with their counterparts in the deduction mechanism: yi(α*) = xi, i ∈ N, Φρ = Φα, Yρ = Yα, fi,ρ = fi,α, i ∈ N, Wρ = Wα. To proceed, let us study the fourth mechanism of economic motivation, the profit tax mechanism.

Profit tax mechanism. Suppose that the tax base is the agent's profit, i.e., his or her goal function (the difference between the income and costs). Under a profit tax rate β ∈ [0; 1], the goal function of agent i takes the form

\[
f_i(y_i) = (1 - \beta)\,\bigl[\lambda y_i - c_i(y_i)\bigr], \quad i \in N. \tag{10}
\]

Accordingly, the principal's goal function is defined by

\[
\Phi_\beta(y) = \beta\,\Bigl[\lambda \sum_{i \in N} y_i - \sum_{i \in N} c_i(y_i)\Bigr]. \tag{11}
\]

In this case, the agents choose the same actions as in the centralized scheme; consequently, one obtains

\[
y_i = r_i\, \xi(\lambda), \quad i \in N. \tag{12}
\]

For the Cobb-Douglas cost functions of the agents, the optimal value of the principal's goal function constitutes20

\[
\Phi_\beta = \beta\, \lambda^{\gamma/(\gamma-1)}\, H\, (1 - 1/\gamma),
\]

while the sum of agents' actions makes up

\[
Y_\beta = H\, \xi(\lambda) = H\, \lambda^{1/(\gamma-1)}.
\]

The gain of agent i equals

\[
f_i = (1 - \beta)\, \lambda^{\gamma/(\gamma-1)}\, r_i\, (1 - 1/\gamma),
\]

and the sum of the goal functions of the system participants (the principal and all agents) is

\[
W_\beta = \lambda^{\gamma/(\gamma-1)}\, H\, (1 - 1/\gamma).
\]
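The following sketch verifies these closed forms for illustrative parameters; the two values of β computed in it anticipate the equivalence properties listed at the end of this subsection.

```python
# A sketch of the profit tax mechanism under Cobb-Douglas costs; gamma, lam
# and H are illustrative values. Note that Phi_beta = beta * Phi_x.

gamma, lam, H = 2.0, 1.0, 10.0
g = gamma / (gamma - 1.0)                       # the exponent gamma/(gamma-1)

Phi_x = H * lam ** g * (1.0 - 1.0 / gamma)      # centralized benchmark
Phi_a = (gamma - 1.0) * H * (lam / gamma) ** g  # optimal deduction mechanism
f_a = H * (1.0 - 1.0 / gamma) * (lam / gamma) ** g   # agents' total gain there

beta_p = 1.0 / gamma ** (1.0 / (gamma - 1.0))   # matches the principal's utility
beta_a = 1.0 - 1.0 / gamma ** g                 # matches the agents' utility

assert abs(beta_p * Phi_x - Phi_a) < 1e-9
assert abs((1.0 - beta_a) * H * lam ** g * (1.0 - 1.0 / gamma) - f_a) < 1e-9
print(beta_p, beta_a)                           # 0.5 and 0.75 for gamma = 2
```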


As usual, we compare the results:

• $\Phi_x / \Phi_\beta = 1/\beta \ge 1$ and decreases with respect to β;
• $Y_x / Y_\beta = 1$;
• $W_x / W_\beta = 1$.

Therefore, the profit tax mechanism leads to the same sum of utilities and the same sum of equilibrium actions of the agents as its centralized counterpart. Yet, the principal's utility appears β times smaller. Hence, the profit tax mechanism may be treated as a compromise mechanism, where the compromise point within the domain of compromise is defined by the profit tax rate (the share in which the system's profits are allocated between the principal and the agents). Compare the profit tax mechanism with the profitability rate mechanism:

• $\Phi_\beta / \Phi_\rho = \beta\,(1+\rho)^{1/(\gamma-1)}$;
• $Y_\beta / Y_\rho = (1+\rho)^{1/(\gamma-1)} \ge 1$;
• $W_\beta / W_\rho = (\gamma-1)\,(1+\rho)^{\gamma/(\gamma-1)} \big/ \bigl(\gamma(1+\rho) - 1\bigr) \ge 1$.

20 Obviously, for the principal the optimal value of the profit tax rate β is unity (accordingly, the profit tax mechanism turns into the centralized mechanism).

Finally, perform the comparison with the deduction mechanism:

• $\Phi_\beta / \Phi_\alpha = \beta\,\gamma^{1/(\gamma-1)}$;
• $Y_\beta / Y_\alpha = \gamma^{1/(\gamma-1)}$;
• $W_\beta / W_\alpha = \gamma^{\gamma/(\gamma-1)} / (\gamma + 1)$.

The analysis indicates that, for agents having the Cobb-Douglas cost functions, the profit tax mechanism possesses the following properties:

• for $\beta = 1/\gamma^{1/(\gamma-1)}$ it is equivalent to the optimal deduction mechanism (according to the principal);
• for $\beta = 1 - 1/\gamma^{\gamma/(\gamma-1)}$ it is equivalent to the optimal deduction mechanism (according to the agents);
• for $\beta = 1/(1+\rho)^{1/(\gamma-1)}$ it is equivalent to the profitability rate mechanism (according to the principal);
• for $\beta = 1 - \rho \big/ \bigl((\gamma-1)(1+\rho)^{\gamma/(\gamma-1)}\bigr)$ it is equivalent to the profitability rate mechanism (according to the agents).

Profit share mechanism. Here the principal gains the profits H(y) as the result of the agents' activity and pays a fixed share ν ∈ [0; 1] of the profits to each agent (the same share is assigned to all agents: profit share mechanisms are unified). For agent i, the goal function takes the form

\[
f_i(y) = \nu\, H(y) - c_i(y_i), \quad i \in N, \tag{13}
\]

while the principal's goal function makes up

\[
\Phi_\nu(y) = (1 - n\nu)\, H(y). \tag{14}
\]

Under the profit share mechanism, the agents choose the actions

\[
y_i = r_i\, \xi(\nu\lambda), \quad i \in N. \tag{15}
\]

Let the principal's profits be a linear function of the agents' actions: $H(y) = \lambda \sum_{i \in N} y_i$. For the Cobb-Douglas cost functions of the agents, the principal's goal function has the value

\[
\Phi_\nu = (1 - n\nu)\, \lambda H\, \xi(\nu\lambda),
\]

and the sum of agents' actions constitutes $Y_\nu = H\, \xi(\nu\lambda)$. The gain of agent i equals

\[
f_i = \nu \lambda H\, \xi(\nu\lambda) - r_i\, \varphi\bigl(\xi(\nu\lambda)\bigr), \quad i \in N,
\]

while the sum of the goal functions of the system participants (the principal and all agents) is

\[
W_\nu = H\, \bigl[\lambda\, \xi(\nu\lambda) - \varphi\bigl(\xi(\nu\lambda)\bigr)\bigr].
\]

In the case of quadratic cost functions of the agents, the optimal profit share (according to the principal) is ν* = 1/(2n).

Discussion. Thus, we have studied five mechanisms of economic motivation. In terms of efficiency (treated as the sum of utilities of all system participants and/or the sum of agents' actions), the best ones are the centralized mechanism and the profit tax mechanism with any tax rate. Indeed, using the deduction mechanism or the profitability rate mechanism yields a lower efficiency. Within the framework of the deduction mechanism, the profitability rate mechanism, or the profit tax mechanism, one obtains a different allocation of the principal's utility and the agents' utility depending on the corresponding parameter (the deduction rate, profitability rate, and profit tax rate, respectively) as compared with the centralized mechanism (see the estimates above). In a specific situation, the derived formulas enable estimating the values of the parameters making the mechanisms equivalent. For instance, for quadratic cost functions (γ = 2) the optimal deduction rate (income tax rate) constitutes α* = 0.5. Full equivalence between the profitability rate mechanism and the deduction mechanism is observed for ρ* = 1.

Table 2.1. The parameters of the mechanisms of economic motivation: the case of quadratic cost functions of the agents

Name of a mechanism          | Φ              | Y          | W                    | Σ fi
Deduction mechanism          | λ²H/4          | λH/2       | 3λ²H/8               | λ²H/8
Centralized mechanism        | λ²H/2          | λH         | λ²H/2                | 0
Profitability rate mechanism | λ²H/(2(1+ρ))   | λH/(1+ρ)   | λ²H(1+2ρ)/(2(1+ρ)²)  | ρλ²H/(2(1+ρ)²)
Profit tax mechanism         | βλ²H/2         | λH         | λ²H/2                | (1−β)λ²H/2
Profit share mechanism       | λ²H/(4n)       | λH/(2n)    | λ²H(2n−1)/(4n²)      | λ²H(n−1)/(4n²)


On the other hand, under β* = 0.5 (β* = 0.75), the profit tax mechanism appears equivalent to both mechanisms mentioned according to the principal (according to the agents, respectively). These conclusions are combined in Table 2.1. A promising direction of further research lies in generalizing the obtained results to other classes of cost functions, as well as in the analysis of the mechanisms of economic motivation in models of organizational systems with uncertain factors and interrelations of the agents.

Therefore, in the present chapter we have overviewed the basic results for incentive problems. The following line of investigation seems promising here: deriving analytical solutions to the problems of optimal incentive mechanism design in multilevel multi-agent dynamical systems with uncertain parameters and distributed control. From the practical standpoint, the results of the theoretical study can be used to develop and adjust software tools for personnel management.


Chapter 3

PLANNING MECHANISMS


In the present chapter the models of planning mechanisms in organizational systems are studied. The specific features of planning mechanisms consist in the following. A principal makes decisions under incomplete information, based on the messages of agents. Being active, the agents may demonstrate strategic behavior (manipulate data), i.e., report untrue information. Section 3.1 deals with a general formulation of the planning problem; in addition, we briefly overview the results in the field of data manipulation analysis. Subsequent sections describe "classic" mechanisms with revelation of information: resource allocation mechanisms (Section 3.2), the mechanisms of active expertise (Section 3.3), transfer pricing mechanisms (Section 3.4), rank-order tournaments (Section 3.5), and the mechanisms of exchange (Section 3.6).

3.1. PLANNING PROBLEM. FAIR PLAY PRINCIPLE

Consider a two-level multi-agent OS composed of a single principal and n agents. A strategy of each agent lies in reporting certain information si ∈ Ωi, i ∈ N = {1, 2, …, n}, to the principal. Based on the reported information, the principal assigns the plans $x_i = \pi_i(s) \in X_i \subseteq \mathbb{R}^1$ to the agents; here $\pi_i: \Omega \to X_i$, i ∈ N, is a planning procedure (mechanism), and $s = (s_1, s_2, \dots, s_n) \in \Omega = \prod_{i \in N} \Omega_i$ means the vector of the agents' messages.

The preference function $\varphi_i(x_i, r_i): \mathbb{R}^2 \to \mathbb{R}^1$ of agent i reflects his or her preferences in planning problems; this function depends on the corresponding component of the plan (assigned by the principal), as well as on a certain parameter, i.e., the type of the agent. As a rule, one understands the agent's type as a maximum point of his or her preference function (in other words, the most beneficial value of the plan in the agent's view). Formally, one may draw the following analogy between incentive problems (discussed in Chapter 2) and planning problems; see Table 3.1. When making decisions, each agent is aware of the following information: the planning procedure, the value of his or her type ri ∈ ℝ¹ (known as an ideal point, a peak point, or "a top"), as well as the goal functions and the feasible sets of all agents.


Table 3.1. Incentive problems vs. planning problems

                         | Incentive problems | Planning problems
Strategy of an agent     | y ∈ A′             | s ∈ Ω
Control                  | σ(y)               | π(s)
Preferences of an agent  | f(y, σ(·))         | φ(s, π(·))

The principal knows the functions φi(xi, ·) and the sets of messages available to the agents; yet, the principal possesses no information on the exact values of the agents' types. The move sequence is as follows. The principal chooses a planning procedure and announces it to the agents; being aware of the planning procedure, the latter report to the former the source information for the planning procedure. The decision made by the principal (the plans assigned to the agents) depends on the information reported by the agents; hence, the agents are able to take advantage of the situation and exert an impact on the decision (by reporting information ensuring the most beneficial plans for them). Naturally, in such conditions the information obtained by the principal may appear spurious. Hence, the problem of strategic behavior arises.

Generally, one studies planning mechanisms (in models of information revelation) under the assumption that the preference functions of the agents are single-peaked with the peak points {ri}i∈N. This implies that the preference function φi(xi, ri) is a continuous function with the following properties: it strictly increases with respect to xi up to the unique maximum point ri and strictly decreases afterwards. The assumption means that the agent's preferences over the set of feasible plans are such that there exists a unique value of the plan being optimal for the agent (a peak point); for any other plan, the preference level monotonically decreases as one moves away from the peak point.

Suppose that the agents do not cooperate, playing dominant or Nash equilibrium strategies. Let s* be the vector of equilibrium strategies.1 Obviously, the equilibrium point s* = s*(r) generally depends on the agents' type profile r = (r1, r2, …, rn). The planning mechanism h(·): ℝⁿ → ℝⁿ is said to be the direct mechanism corresponding to the mechanism π(·): Ω → ℝⁿ if h(r) = π(s*(r)); it maps the vector of peak points of the agents into the vector of plans. Involving the term "direct" is justified by the fact that the agents report their peak points directly; within the framework of the original (indirect) mechanism π(·), they are able to report some indirect information. Imagine truth-telling is an equilibrium strategy of all agents in the corresponding direct mechanism; then the mechanism is called an equivalent direct (strategy-proof) mechanism.

Consider possible approaches to guaranteeing truth-telling. Probably, the suggestion to introduce a certain penalty system for data manipulation seems the most evident one (under the assumption that the principal eventually (ex post) becomes aware of the actual values of the parameters {ri}i∈N). It has been demonstrated that truth-telling is ensured by the adoption of "sufficiently great" penalties. Suppose now that the principal does not expect ex-post information about the parameters {ri}i∈N; then an identification problem (based on the information available to the principal) arises for the unknown parameters. As a result, one also

1 In the case of several equilibria, definite rules should be formulated for choosing a unique equilibrium from any set of equilibria.


deals with the problem of constructing a penalty system for indirect indicators of data manipulation.

The fair play principle (FPP) [9, 12] is a fundamental theoretical result (see also the revelation principle in [38, 45, 46]). The underlying idea is to use a planning procedure that maximizes the goal function of each agent (under the assumption that the agents do not manipulate information). In other words, the principal trusts the agents and believes they would not live up to his or her expectations [10]. This explains another term widely used for the mechanism based on the FPP: "the fair play mechanism." A formal definition is the following. The condition

\[
\varphi_i\bigl(\pi_i(s), s_i\bigr) = \max_{x_i \in X_i(s_{-i})} \varphi_i(x_i, s_i), \quad i \in N, \; s \in \Omega,
\]

where Xi(s−i) represents a set of plans (a decentralizing set) depending on the opponents' action profile s−i = (s1, s2, …, si−1, si+1, …, sn) of agent i, is called the perfect concordance condition. The planning procedure that maximizes the goal function Φ(π, s) of the principal over the set of plans satisfying the perfect concordance condition is referred to as the fair play mechanism. The following result takes place: revelation of information is a dominant strategy of the agents iff the planning mechanism is a fair play mechanism [12, 46].

The above-mentioned statement says nothing about the uniqueness of the equilibrium strategy profile. Of course, with the condition of benevolence being met (if si = ri, i ∈ N, appears a dominant strategy, then the agents reveal true information), using the fair play principle guarantees the agents' truth-telling. Let us formulate a sufficient condition for the uniqueness of the equilibrium strategy profile si = ri, i ∈ N, under a fair play mechanism. For agent i, denote by $E_i(s_i) = \operatorname{Arg\,max}_{x_i \in X_i} \varphi_i(x_i, s_i)$ the set of his or her concerted plans. We will say that the condition of equitable preference functions holds for agent i if the following expression is valid: ∀ si¹ ≠ si²: Ei(si¹) ∩ Ei(si²) = ∅. That is, for all feasible non-coinciding estimates si¹ and si², the corresponding sets of concerted plans do not intersect. The condition of equitable preference functions for all agents is sufficient for the existence of a unique equilibrium strategy profile.

We provide a series of necessary and sufficient conditions for truth-telling as a dominant strategy. A necessary and sufficient condition for truth-telling being a dominant strategy under any agents' type profile r ∈ ℝⁿ is, in fact, the existence of the sets {Xi(s−i)}i∈N satisfying the perfect concordance condition [9, 10, 12]. This assertion could be reformulated as follows: assume that a dominant strategy equilibrium exists in the original planning mechanism; then the corresponding direct mechanism is strategy-proof. Under the FPP, truth-telling in messages is natural, provided that the set of feasible plans of an agent is independent of the reported estimate. Let us now turn to systems with a large number


of agents. Consider a situation when some planned rate λ in an OS with several agents is the same for all agents; in other words, the plan has the form π = (λ, {xi}i∈N). We have to find a control λ beneficial to all agents; if the principle of incentive-compatible planning is involved, a fundamental question concerning the existence of a solution is immediate. Such issues are not crucial for systems with a large number of agents; indeed, the impact of an estimate (provided by a specific agent) on the common control is small. If every agent does not account for the impact exerted by his or her estimate si on λ(s), the hypothesis of weak impact (HWI) is valid. Under the HWI, one has to coordinate the plans only with respect to the individual variables. In [9] it has been shown that truth-telling is a dominant strategy if the HWI is valid and the planning procedure x(s) meets the perfect concordance condition.

Till now, we have been mostly focused on the conditions of truth-telling. It is reasonable to ask how the strategy-proofness and optimality of mechanisms are interconnected. In other words, is it always possible to find a strategy-proof mechanism among the optimal ones? Or is it always possible to find an optimal mechanism among the strategy-proof ones? Answering these questions is critical, since truth-telling per se is a convenient property, but not as important as optimality of the mechanism. Therefore, in the sequel we discuss some results regarding optimality (in the sense of maximal efficiency) of strategy-proof mechanisms.

In [9, 12] it has been shown that, in the case of a single agent, for any mechanism there exists a strategy-proof mechanism with the same or greater efficiency. This fact could be explained in the following way: for a single agent, the decentralizing set is given by the whole set of his or her feasible plans (in other words, a single agent always has a dominant strategy). Consider an organizational system with a greater number of agents (n ≥ 2). The conclusion regarding optimality of incentive-compatible mechanisms holds true merely in some special cases. For instance, similar results have been obtained for the resource allocation mechanisms, the mechanisms of collective expert decisions (the problems of active expertise), and the transfer pricing mechanisms studied below in the present chapter. They have also been obtained for other planning mechanisms. The established facts concerning the relation between optimality and strategy-proofness in planning mechanisms give cause for optimism; the matter is that these properties are not mutually exclusive. At the same time, a number of examples indicate the (general) non-optimality of the mechanisms ensuring revelation of information. Hence, the question of the correlation between optimality and strategy-proofness is still open.

Studying the incentive mechanisms in OS (see Chapter 2), we have used the term "incentive-compatible mechanism" when the agents are motivated to fulfill the assigned plans. Imagine an OS where the agents' strategies consist in choosing both messages and actions (in fact, a "hybrid" incentive-planning problem). If the mechanisms are simultaneously incentive-compatible and strategy-proof, they are said to be correct mechanisms. Of crucial importance is the issue of when one may find an optimal mechanism in the class of correct mechanisms. Some sufficient conditions for optimality of correct control mechanisms in OS are given in [9, 12].

Thus, we have finished a brief description of common problems of designing efficient and strategy-proof planning mechanisms in OS. Now, let us consider a series of classic


planning mechanisms in multi-agent organizational systems, where the fair play principle guarantees optimality.


3.2. RESOURCE ALLOCATION MECHANISMS

Let us state the problem of resource allocation in a two-level organizational system. Assume that a principal owns a divisible resource of quantity R. In the standard formulation of the problem, the principal should allocate the resource among the agents so as to maximize a certain efficiency criterion, e.g., the total efficiency of resource utilization by the agents. Imagine the principal does not know the efficiencies of the agents. Thus, the principal has to use messages of the agents (e.g., what quantity of the resource they need). Evidently, limited resources lead to the problem of manipulation (strategic behavior): generally, the agents would not report true data, striving to obtain an optimal quantity of the resource.

To study the issue of strategic behavior, we provide an elementary example. Suppose that a principal should allocate a resource between two agents. Denote by ri the peak point of agent i, that is, the quantity of the resource ensuring the maximum of the single-peaked preference function of agent i (i = 1, 2). Assume that the principal makes the decision regarding the quantity of allocated resource based on the agents' messages s1 and s2 (here si stands for the quantity requested by agent i). Obviously, if s1 + s2 ≤ R and r1 + r2 ≤ R, no difficulties appear (it suffices to choose x1 = s1, x2 = s2). However, what should the principal do under the condition of resource deficiency (i.e., when r1 + r2 > R)? Let the agents' requests be bounded and the allocated quantity be unit: 0 ≤ si ≤ R = 1, i = 1, 2. In other words, (at the least) an agent may refuse to receive the resource (by reporting the message si = 0); alternatively, an agent may ask for the whole quantity of the resource (by reporting the message si = 1). Imagine that the principal uses the following resource allocation mechanism π (in the general case, planning mechanism):

\[
x_i = \pi_i(s_1, s_2) = \frac{s_i}{s_1 + s_2}\, R, \quad i = 1, 2.
\]

Such a concept of allocation is known as the proportional allocation principle, as far as the principal distributes the resource proportionally to the agents' requests. Note that, for each agent, the obtained quantity of the resource depends on his or her request and on the request reported by the other agent (the agents play a game). Accordingly, the principal acts as a metaplayer establishing "the rules of the game," viz., the mechanism π.

In strictly monotonic resource allocation mechanisms (such that $\partial \pi_i(s) / \partial s_i > 0$, i ∈ N), a

Nash equilibrium of the agents' game possesses the following structure:

1) the agents obtaining in an equilibrium a smaller resource quantity than they actually require submit the maximal possible requests;


2) if an agent in an equilibrium submits a request strictly smaller than the maximal possible one, then he or she obtains an optimal quantity of the resource.

What requests will be submitted by the agents? The following situations may happen.

1. r1 = +∞, r2 = +∞, i.e., both agents are interested in the maximal quantity of the resource ("the higher the quantity, the better the result"). In this case, the equilibrium requests constitute s1* = s2* = 1. Here we understand the equilibrium in the Nash sense (a Nash equilibrium is a point such that a unilateral deviation from it turns out unbeneficial for each agent). Indeed, an agent is unable to report si > 1. By reporting si < 1, agent i receives a strictly smaller quantity of the resource (provided that sj = sj* = 1, j ≠ i). In other words, a unilateral deviation allows an agent only to reduce (certainly not to increase) the value of his or her preference function. Therefore, x1* = π1(s1*, s2*) = x2* = π2(s1*, s2*) = R/2 = 1/2.

Apparently, the equilibrium outcome remains the same if r1 > 1/2, r2 > 1/2.
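The equilibrium analysis is easy to reproduce by best-response iteration; a minimal Python sketch for single-peaked preferences with illustrative peak points (the second pair anticipates case 2 below):

```python
# A sketch of the proportional mechanism game for two agents and R = 1:
# best-response iteration converges to the Nash equilibria described in the
# text. The peak points are illustrative.

R = 1.0

def best_response(ri, sj):
    # x_i = s_i/(s_i + s_j) increases in s_i up to the cap 1/(1 + s_j).
    if ri >= 1.0 / (1.0 + sj):
        return 1.0                       # the peak is unreachable: ask for all
    return ri * sj / (1.0 - ri)          # request hitting the peak exactly

for r in ([0.7, 0.9], [0.4, 0.8]):       # case 1 and case 2 of the text
    s = [1.0, 1.0]
    for _ in range(100):
        s = [best_response(r[0], s[1]), best_response(r[1], s[0])]
    x = [si / sum(s) * R for si in s]
    print(f"r = {r}: equilibrium requests {s}, allocation {x}")
```

For the first pair both agents request 1 and split the resource equally; for the second pair agent 1 becomes a dictator and receives exactly r1 = 0.4.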


2. r1 ≤ 1/2, r2 > 1/2. In this case, the request s2* equals 1, while s1* = r1 / (1 − r1) (one can see that s1* ∈ (0; 1) ⇔ r1 < 1/2). Consequently, x1* = r1 and x2* = 1 − r1; this means agent 1 represents a dictator, receiving an optimal quantity of the resource (see the mechanisms of active expertise in Section 3.3).

In fact, what is the value xi = 1/2 remarkable for? If ri ≥ 1/2 (r1 + r2 > R), then agent i becomes a dictator and obtains exactly the quantity he or she needs. Note that 1/2 = πi(1, 1), i.e., this quantity of the resource is allocated when both agents report the maximal requests. Moreover, the stated mechanism appears to be not strategy-proof; indeed, the equilibrium requests of the agents do not coincide with their actual requirements. However, is it possible to solve the problem of strategic behavior? The answer is positive! Consider the following mechanism.

Within the framework of the current example, construct a corresponding direct mechanism (see Section 3.1). Suppose that the agents report to the principal not the requests si ∈ [0, 1], but the direct estimates r̃i ≥ 0 of the parameters ri describing their preference functions φi(xi, ri). Having received the estimates (r̃1, r̃2), the principal defines the equilibrium point (s1*(r̃1, r̃2); s2*(r̃1, r̃2)) based on the following procedure:

1) if both agents report r̃i > 1/2, then s1* = s2* = 1;
2) if a certain agent reports r̃i ≤ 1/2 (r̃j > 1/2, j ≠ i, r̃1 + r̃2 > 1), then si* = r̃i / (1 − r̃i), sj* = 1;
3) if r̃1 + r̃2 ≤ 1, then s1* = r̃1, s2* = r̃2.


Next, the principal allocates the resource according to the initial mechanism of proportional allocation by substituting the relationship s*(r̃); in other words, the principal utilizes the corresponding direct mechanism (see Section 3.1). Evidently, in the corresponding direct mechanism each agent obtains exactly the same quantity as in the initial mechanism (hence, the efficiencies of these mechanisms coincide). Now, let us analyze whether the corresponding direct mechanism is strategy-proof or not (equivalently, whether the message r̃i = ri, i = 1, 2, is a Nash equilibrium or not). Consider the following cases.

1. If both agents have ri > 1/2, then (by reporting r̃i = ri) they obtain exactly half of the resource. The allocation changes only if r̃i < 1/2; consequently, xi* < 1/2, which means that the gain of agent i decreases (such a situation is unbeneficial to him or her).

2. If ri ≤ 1/2, rj > 1/2, then agent i benefits nothing by a unilateral deviation from the equilibrium (since he or she obtains the optimal quantity ri of the resource). Yet, for agent j such a deviation is unbeneficial as well (the agent receives a smaller quantity than needed). Indeed, if he or she reports r̃j < rj, then the principal "recovers" sj*(ri, r̃j) ≤ 1, and the resulting quantity of the resource is not increased.


3. If r1 + r2 ≤ 1, then x1 = r1, x2 = r2. Notably, each agent receives an optimal quantity of the resource, and strategic behavior yields nothing to him or her (we assume that, truth-telling and data manipulation leading to the same result, an agent prefers reporting actual data).

Thus, the provided example has demonstrated the following. Given a certain initial mechanism of resource allocation, it is possible to construct a corresponding direct mechanism being strategy-proof and ensuring the same efficiency (resulting in the same resource allocation). To proceed, we endeavor to extend this important result to the case of an arbitrary mechanism of resource allocation.

Assume that a principal has a certain resource of quantity R to be allocated among n agents with the preference functions φi(xi, ri), i ∈ N. Suppose that the requests submitted by the agents satisfy the constraints 0 ≤ si ≤ Di, i ∈ N. Denote by {ri}i∈N the ideal points of the single-peaked preference functions of the agents. Imagine that a resource deficiency takes place:

\[
\sum_{i=1}^{n} r_i > R.
\]

Suppose that the principal uses a continuous and monotonic resource allocation mechanism π: xi = πi(s), i ∈ N, with the following properties [10, 12]:

1) the resource is fully allocated, i.e., $\sum_{i=1}^{n} \pi_i(s) = R$ for any s such that $\sum_{i=1}^{n} s_i \ge R$ (the balance property);
2) if an agent has been given some quantity of the resource, then he or she has the opportunity to obtain any smaller quantity by decreasing the request;


3) if the quantity of the resource allocated among a group of agents is increased, then each agent of the group is given (at least) the same quantity of the resource as before.

The listed properties of the resource allocation mechanism seem natural. Indeed, they are common for the majority of resource allocation mechanisms used in practice. The set of all agents N can be partitioned into two subsets, notably, Q and P (Q ∩ P = ∅; Q ∪ P = N). The set of priority consumers Q (known as dictators) is remarkable in that the latter obtain optimal quantities of the resource (recall that the preference function of agent i attains its global maximum at the point ri). The agents belonging to the set P receive strictly smaller quantities of the resource (as against the optimal ones): xi(s*) < ri, i ∈ P, where s* stands for the equilibrium messages of the agents. Evidently, si* = Di, ∀ i ∈ P.

Now, let us construct a corresponding direct mechanism, i.e., the one involving the estimates {r̃i} reported by the agents. For this, it suffices to define the set of priority consumers. Use the following algorithm.

1. Set Q = ∅, P = N and find xi(D), i ∈ N, where D = (D1, ..., Dn); in other words, assume that all agents have reported the maximal requests. If xj(D) ≥ r̃j, then Q := Q ∪ {j}, j ∈ N.

2. Suppose that ∀ i ∈ P: si = Di and allocate the resource $R - \sum_{j \in Q} \tilde{r}_j$ among them: take the resource allocation procedure and substitute there the requests of the agents from the set Q such that they obtain optimal quantities of the resource. This operation is feasible due to the second property of the resource allocation procedure. New priority consumers (if they appear) should be included in the set Q, and Step 2 is repeated accordingly.

Obviously, the algorithm converges in a finite number of iterations. Let us discuss the corresponding practical interpretation. At Step 1, the principal evaluates the quantities to be allocated to each agent in the case of maximal reported requests. Imagine agent j receives more than he or she actually needs (a certain quantity exceeding r̃j); then the surplus quantity of the resource (xj(D) − r̃j) can be given to other agents being short of the resource (this follows from the second and third properties of the procedure π). Next, the priority agents obtain exactly the optimal quantities of the resource, and the remainder is allocated among the rest of the agents.

The direct mechanism determined by the above algorithm involves the messages {r̃i} and leads to the same resource allocation as the initial mechanism π. Moreover, by analogy to the analyzed example, one can demonstrate that the direct mechanism is strategy-proof (truth-telling is a Nash equilibrium)! Since the equivalent direct mechanism provides the same resource allocation as the initial one, the former and the latter possess identical efficiencies. Thus, we have established the following fact. Under the introduced assumptions, for any resource allocation mechanism there exists an equivalent direct mechanism (which is strategy-proof and has the same efficiency). Hence, an optimal mechanism belongs to the


class of strategy-proof mechanisms; i.e., by designing a mechanism ensuring the agents' truth-telling, a principal does not decrease the efficiency.

Anonymous mechanisms. A wide class of resource allocation mechanisms consists of anonymous mechanisms, where any permutation of the agents does not modify the plans assigned to the agents (up to the same permutation). In the case of resource allocation mechanisms, this means that in an anonymous mechanism the sets of feasible messages of the agents are identical, while the planning procedure is symmetric with respect to the agents' requests. Note that the anonymity of a mechanism by no means implies identical agents. Contrariwise, the agents can be totally different; the only (yet rather democratic) requirement imposed on an anonymous planning mechanism lies in a symmetric planning procedure. One can easily show that any anonymous mechanism of resource allocation is equivalent to the mechanism of proportional resource allocation. This directly follows from the fact that all anonymous mechanisms turn out equivalent (any anonymous mechanism is equivalent to a mechanism of serial resource allocation; see below), while the mechanism of proportional resource allocation is anonymous.

Let us describe some common classes of resource allocation mechanisms in greater detail.

Priority-based mechanisms. In priority-based mechanisms of resource allocation, different sorts of agents' priority indexes are used when deciding what quantity of the resource each agent should get. In the general case, a priority-based mechanism is described by the following procedure:

\[
x_i(s) = \begin{cases}
s_i, & \text{if } \sum_{j=1}^{n} s_j \le R;\\[2pt]
\min\{s_i,\; \gamma\, \eta_i(s_i)\}, & \text{if } \sum_{j=1}^{n} s_j > R.
\end{cases}
\]

Here n stands for the number of the agents, {si}i∈N designate their requests, {xi}i∈N are the allocated quantities of the resource, R represents the available quantity of the resource, {ηi(si)}i∈N are the priority functions of the agents, and γ is a certain parameter. The minimization in the previous formula means that an agent never receives more resource than he or she requests. The parameter γ serves for normalization and should fulfill the budget constraint

\[
\sum_{i=1}^{n} \min\{s_i,\; \gamma\, \eta_i(s_i)\} = R.
\]
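Numerically, γ can be found by bisection, since the left-hand side of the budget constraint is monotone in γ. A sketch (the requests are illustrative; the reverse priority functions ηi(si) = Ai / si anticipate the EP-mechanism example later in this section):

```python
# A sketch: find the normalization parameter gamma by bisection so that the
# budget constraint holds, then allocate. The requests are illustrative.

def allocate(s, eta, R, iters=100):
    if sum(s) <= R:
        return list(s)                           # no deficit: grant all requests
    total = lambda g: sum(min(si, g * e(si)) for si, e in zip(s, eta))
    lo, hi = 0.0, 1.0
    while total(hi) < R:                         # bracket gamma from above
        hi *= 2.0
    for _ in range(iters):                       # total(g) is monotone in g
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if total(mid) < R else (lo, mid)
    return [min(si, hi * e(si)) for si, e in zip(s, eta)]

A = [16.0, 9.0, 4.0]                             # constants of the EP example
eta = [lambda si, a=a: a / si for a in A]
print(allocate([9.0, 6.0, 4.0], eta, R=18.0))    # approximately [8.0, 6.0, 4.0]
```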

In other words, under given requests and priority functions (in the conditions of resource deficiency), the available quantity R of the resource should be allocated in full. Depending on the type of priority, such mechanisms can be divided into three classes, notably, straight priority-based mechanisms (ηi(si) is an increasing function of the request si, i ∈ N), absolute priority-based mechanisms (priorities of the agents are fixed and independent


of the reported requests2), and reverse priority-based mechanisms (ηi(si) is a decreasing function of the request si, i ∈ N). Below we discuss straight priority-based mechanisms and reverse priority-based mechanisms of resource allocation.

Straight priority-based mechanisms. Suppose that the agents' preference functions φi(xi, ri) are strictly increasing with respect to xi (i.e., the agents are interested in receiving the maximal quantities of the resource). Since in a straight priority-based mechanism xi is an increasing function of the request si, all agents report the maximal requests for the resource. The described phenomenon (the tendency of growing requests) is well known in economics. Therefore, straight priority-based mechanisms (involving the principle "the more you ask for, the more you actually receive") have been subjected to just criticism. If the preference functions of the agents possess maxima at the points {ri}i∈N, the analysis gets somewhat complicated. However, the conclusion appears the same: under an (even arbitrarily small) deficit

\[
\Delta = \sum_{i=1}^{n} r_i - R > 0,
\]

the tendency of requests growth takes place.

Note that the procedure discussed in the example above (the mechanism of proportional resource allocation) is a straight priority-based mechanism with

\[
\eta_i(s_i) = s_i, \quad i \in N, \qquad \gamma(s) = R \Big/ \sum_{j=1}^{n} s_j.
\]


Recall that an anonymous straight priority-based mechanism is equivalent to a mechanism of serial resource allocation (in fact, this follows from the observation that, by reporting identical requests in an anonymous mechanism, the agents obtain the same quantity of the resource). Consequently, to evaluate the equilibrium requests and an equilibrium allocation of the resource, the principal may utilize the following procedure of serial allocation:

a) sort all agents in the ascending order of their estimates of the peak points;
b) allocate the required quantities of the resource to the agents with the minimal peak point (if the available resource is sufficient; otherwise, allocate the resource in equal parts), and eliminate the agents with the minimal peak point from further consideration;
c) allocate the remainder of the resource to the rest of the agents according to Step b).

Evidently, the stated procedure of resource allocation (the corresponding direct mechanism) appears strategy-proof, i.e., truth-telling is a dominant strategy of each agent.

Let us provide an example. Set n = 5, ri = 0.1 i, and R = 1. Hence, a resource deficiency takes place: Δ = 0.5. At the first step, the principal allocates 0.1 units of the resource to every agent (this is the quantity needed by agent 1). The remainder (0.5 units of the resource) is distributed among agents 2-5. Since 0.5/4 = 0.125 > r2 − 0.1 = 0.1, each of the four mentioned agents receives another 0.1 units of the resource. Subsequently, the residual resource (0.1 units) is allocated among agents 3-5; in the resulting equilibrium (indeed, we have an equilibrium: even agent 3 is given a smaller quantity than the optimal one), each of them obtains 0.23(3) units of the resource.

The equilibrium resource allocation being found, one may evaluate the equilibrium requests of the agents. Assume that the principal uses the mechanism of proportional resource allocation; in the equilibrium, agents 3-5 report the maximal possible requests (in fact, 1), while agents 1-2 report requests ensuring the optimal quantities of the resource. The requests satisfy the system of two algebraic equations with two unknowns:

\[
\frac{s_1}{3 + s_1 + s_2} = 0.1, \qquad \frac{s_2}{3 + s_1 + s_2} = 0.2.
\]

Thus, s1* = 3/7 and s2* = 6/7.

2 In the case of absolute priority-based mechanisms, the plans assigned to the agents do not depend on their requests; hence, any absolute priority-based mechanism could be considered strategy-proof under the hypothesis of benevolence. This class of mechanisms has an obvious drawback: the principal uses no information reported by the agents.
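Both the serial procedure and the recovery of the equilibrium requests are easy to reproduce; a Python sketch of this example:

```python
# A sketch of the serial allocation procedure a)-c) for the example above
# (n = 5, r_i = 0.1 i, R = 1).

def serial_allocation(r, R):
    """Process agents in ascending order of peaks; x_i = min(r_i, equal share)."""
    order = sorted(range(len(r)), key=lambda i: r[i])
    x, rest = [0.0] * len(r), R
    for pos, i in enumerate(order):
        x[i] = min(r[i], rest / (len(r) - pos))
        rest -= x[i]
    return x

r = [0.1 * i for i in range(1, 6)]
print(serial_allocation(r, 1.0))     # [0.1, 0.2, 0.2333..., 0.2333..., 0.2333...]

# Equilibrium requests of agents 1-2 in the proportional mechanism
# (agents 3-5 request the maximum, i.e., 1 each):
s1, s2 = 3 / 7, 6 / 7
print(s1 / (3 + s1 + s2), s2 / (3 + s1 + s2))    # 0.1 and 0.2
```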

Apparently, in the initial mechanism all agents distort information (demonstrate strategic behavior), while in the corresponding direct mechanism they follow truth-telling.

Reverse priority-based mechanisms. These mechanisms [10] involve ηi(si) as a decreasing function of si, i ∈ N; they possess several advantages over straight priority-based mechanisms. Let us analyze a reverse priority-based mechanism with the priority functions ηi = Ai / si, i ∈ N, where {Ai}i∈N are positive constants. The value of Ai characterizes the losses in an OS caused by leaving agent i with no resource at all. Then the rate Ai / si determines the efficiency of resource utilization by agent i. This is why reverse priority-based mechanisms are also known as the mechanisms of efficiency-proportional resource allocation (EP-mechanisms).

Consider three agents (n = 3), with A1 = 16, A2 = 9, A3 = 4, and R = 18. First, suppose that the agents strive to obtain the maximal quantity of the resource. Find a Nash equilibrium outcome. It can easily be observed that the function $x_i(s) = \min\{s_i,\; \gamma\,(A_i/s_i)\}$ attains its maximum value over si at the point satisfying $s_i = \gamma\,(A_i/s_i)$. Hence, $x_i = s_i = \sqrt{\gamma A_i}$. Evaluate γ from the budget constraint $\sum_{i=1}^{n} x_i = \sum_{i=1}^{n} \sqrt{\gamma A_i} = R$. In this case,

\[
\gamma = \Bigl( R \Big/ \sum_{i=1}^{n} \sqrt{A_i} \Bigr)^{2}.
\]

For the present example, we have γ = 4; the corresponding equilibrium requests are computed using the condition

\[
x_i = s_i = R\, \frac{\sqrt{A_i}}{\sum_{j=1}^{n} \sqrt{A_j}}.
\]

They are s1* = 8, s2* = 6, s3* = 4. Now, make sure this is a Nash equilibrium. Take agent 1; if he or she decreases the request (s1 = 7 < s1*), then s1 + s2* + s3* < R. Consequently, x1 = s1 = 7 < x1*. On the other hand, with s1 = 9 > s1* one obtains γ = 4.5 and x1 = 8 = x1*. It is easy to show [10] that the derived strategies are secured ones for the agents; i.e., they maximize the payoffs under the worst-case strategies of the rest of the agents.
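A numerical sketch of this example, evaluating the closed-form expressions above and checking the downward deviation by agent 1:

```python
# A sketch of the EP-mechanism example: A = (16, 9, 4), R = 18.
from math import sqrt

A, R = [16.0, 9.0, 4.0], 18.0
S = sum(sqrt(a) for a in A)                  # sum of sqrt(A_i) = 9
gamma = (R / S) ** 2                         # gamma = 4
s_star = [R * sqrt(a) / S for a in A]        # equilibrium requests [8, 6, 4]
print(gamma, s_star)

# Downward deviation by agent 1: the total request drops below R, so the
# deviator simply receives what he or she asked for (7 < 8).
s_dev = [7.0] + s_star[1:]
assert sum(s_dev) < R                        # x_1 = s_1 = 7 < x_1* = 8
```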


If the preference functions of the agents attain maxima at the points {ri}i∈N and si* > ri, then agent i asks for (and is given) the quantity ri. The matter is that reducing the request leads to a growing priority of an agent. The set of priority consumers of the resource is constructed exactly in this way. Moreover, it is possible to show that (under a sufficiently large number of agents) the reverse priority-based mechanism with penalties for non-coinciding expected and actual effects appears optimal in the sense of the total efficiency [10].

Rank-order tournaments. An important prerequisite of improving control efficiency lies in designing control mechanisms motivating the maximal utilization of the agents' potential (stimulating a competition among them). This explains the wide popularity of the so-called rank-order tournaments; they are remarkable for the following. Agents participate in a certain competition to receive a resource, preferential terms of financing, a project, and so on. Discussing the reverse priority-based mechanisms, we have underlined that the resource is allocated proportionally to the efficiency ηi = φi(xi, ri) / xi of its utilization by the agents. Rank-order tournaments are remarkable in that the resource is provided exclusively to tournament winners (the total quantity of the resource may be not enough for all agents).

Suppose the agents report two parameters to the principal, i.e., the resource request si and the estimate λi of the expected utilization efficiency. In this case, the total expected effect (provided to the OS by the activity of agent i) constitutes wi = λi si, i ∈ N. Let us sort the agents in the descending order of their efficiency estimates: λ1 ≥ λ2 ≥ … ≥ λn. Apparently, the agents may promise a lot in order to receive the resource. Therefore, when applying rank-order tournaments, the principal should take care of a control system to monitor the fulfillment of the commitments by the agents. Introduce a system of penalties

\[
\chi_i = \chi \cdot \max\bigl\{0,\; \lambda_i s_i - \varphi_i(s_i)\bigr\}, \quad \chi > 0, \; i \in N,
\]

proportional to the deviation of the total expected effect λi si = wi from the actual one, φi(si). Note that the quantity (λi si − φi(si)) characterizes the deliberate deception of the agent, used to win the tournament. The goal function of agent i has the form

\[
f_i(\lambda_i, s_i) = \rho\, \varphi_i(s_i) - \chi \cdot \max\bigl\{0,\; \lambda_i s_i - \varphi_i(s_i)\bigr\}, \quad i \in N,
\]

where ρ is the share of the effect remaining at the disposal of the agents (i.e., ρ φi(si) defines the income of agent i). We emphasize that the agent is penalized only in the case when λi si > φi(si). If the actual total effect exceeds the expected one, the penalties are set to zero. The principal owns the resource quantity R and allocates it in the following way. Having the maximal efficiency, agent 1 is given the required quantity s1 of the resource. The agent with the second largest efficiency obtains the resource in the second place (the corresponding quantity makes s2); the procedure continues till the resource is completely consumed. In other words, the principal distributes the resource in the required quantities following the descending order of the efficiency estimates (as long as the resource is available). The agents that have received the required quantity of the resource are known as tournament winners. It seems essential that some agents (e.g., the last one in the ordered sequence of winners) may have obtained less resource than they actually require. Nevertheless, these agents are still able to yield a certain effect to the principal. For the stated reasons, such tournaments are referred to


as divisible-good auctions (in contrast to the discrete competitions considered in Section 3.5; see also the mechanism of cost-benefit analysis in Section 4.3). Note that the described procedure implies that winning the tournament depends only on the value of the efficiency estimate l_i (not on the value of the request s_i). Thus, the agents would seek to maximize their goal functions, i.e., request the quantities attaining the maximal values of their goal functions (if they win the tournament). Denote by m the maximal index of the agents that have won the tournament (in other words, the agents with indices j = 1, …, m are the winners). It can easily be demonstrated that all winners would report identical efficiency estimates in the equilibrium, i.e., l_j* = l*, j = 1, …, m + 1. Moreover, under rather general assumptions imposed on the penalty functions, rank-order tournaments ensure optimal resource allocation [10].

Cost sharing mechanism. Up to here, we have considered resource allocation mechanisms, where agents represent consumers of a resource. The principal's problem has been to find a mechanism satisfying certain properties, e.g., optimality (in terms of maximal efficiency), strategy-proofness and so on. In a certain sense, the cost sharing problem is a dual problem [45]. Suppose that the agents are interested (to a certain degree) in the production (purchase) of a specific public good. For instance, this can be a new technology, manufacturing equipment, an expert, information, etc. The term "public" means that the good can be used by any agent. The cost (price) of this good is fixed; hence, the agents should "club together" to produce (purchase) and use it. We suppose that utilization of the public good yields certain profits to each agent. The whole point is how much each agent has to pay (in other words, what the cost sharing procedure of the agents should be).

Imagine that, for each agent, the principal knows "the level of satisfaction" from public good utilization. It is possible to suggest different principles of cost sharing, e.g., in equal parts, proportionally to the needs in utilization, proportionally to the levels of satisfaction, etc. The question regarding the best principle (in the sense of "justice") is a separate one. Generally, only the agents know their needs. If the agent's costs depend on his or her messages (which are impossible or difficult to verify), then the agent strives for "taking a free ride" at the expense of the others (the so-called free rider problem [45] arises). Hence, the problem of data manipulation (strategic behavior) is immediate in cost sharing mechanisms (similarly to resource allocation mechanisms).

Let us illustrate the cost sharing problem by the following example. Consider two towns (agents) separated by a river. The agents apply to a bridge-building company. The company announces its willingness to erect the bridge at the price (cost) of C; for convenience, choose C = 1. The expected incomes of the towns (as the result of bridge utilization) constitute q_1 = 0.4 and q_2 = 1.2, respectively. Evidently, the bridge (as a public good) is beneficial to both towns, since q_1 + q_2 > C. However, how should the costs be shared between the towns? In other words, how much should town 1 (town 2) pay, i.e., what are C_1 (respectively, C_2)? Naturally, C_1 + C_2 = C. We study several alternatives below.

1. The principle of equal cost sharing. Set C_1 = C_2 = C / 2. If q_1 > C / 2 and q_2 > C / 2, i.e., the goal functions

f_i = q_i − C_i, i = 1, 2,

are nonnegative, then this principle is admissible (the present example provides an exception, since q_1 < C / 2). Note that the corresponding mechanism is strategy-proof (in fact, the principal asks nothing from the agents–absolute priorities are employed). Still, the principle of equal sharing is not always just; if a priori q_1 ≠ q_2 (the expected incomes of public good utilization vary), compelling the agents to share the costs equally seems incorrect.

2. The principle of proportional cost sharing. Adopt the next principle–"an agent being mostly interested in the public good utilization pays more." In other words, let us divide the costs proportionally to the reported incomes: C_i = (s_i / S) C, i = 1, 2, where S = s_1 + s_2 and s_i is

si C , i = 1, 2, where S = s1 + s2 and si is S

the income estimate reported by agent i to the principal. Let us analyze the mechanism of proportional cost sharing. Obviously, one should require s_1 + s_2 ≥ C (otherwise, the total income is smaller than the construction costs, and it seems unreasonable to build the bridge). To make the goal functions nonnegative, require that C_1 ≤ q_1, C_2 ≤ q_2. These inequalities define the admissible range of agents' requests (s_1, s_2). Clearly, both agents strive to reduce their requests. A Nash equilibrium is any set of requests (s_1*, s_2*) belonging to the segment s_1* + s_2* = C, s_1* ≤ q_1, s_2* ≤ q_2. Interestingly, truth-telling in the mechanism of proportional cost sharing is not an equilibrium. Due to the multiplicity and Pareto efficiency of Nash equilibria, "the struggle for the first move" takes place if the agents know the actual incomes of each other. For instance, agent 1 reports s_1 = 0 (C_1 = 0), thus shifting all costs to agent 2 (the latter is compelled to request s_2 = 1, yielding C_2 = 1). One can easily observe that the mechanism of proportional cost sharing is an equal profitability mechanism. Define the profitability of agent i, ρ_i = (s_i − C_i) / C_i, as the ratio of the profit to the costs (the profit is evaluated on the basis of the agent's message s_i). Substituting the procedure of proportional cost sharing, one obtains ρ_1 = ρ_2; notably, the levels of the agents' reported profitability coincide (the actual levels given by (q_i − C_i) / C_i may vary).

3. The principle of equal profits. Consider the following mechanism:

C_1 = C/2 + (s_1 − s_2)/2, C_2 = C/2 + (s_2 − s_1)/2.

The resulting set of Nash equilibria is the same as under the principle of proportional cost sharing. The discussed principles of cost sharing are easily generalized to any finite number of agents. Of course, they do not provide a comprehensive set of all possible alternatives; today, numerous principles are explored in theory and widely used in practice [10, 12, 38, 45]. Most of them do not address the issue of strategic behavior. Nevertheless, in many cases the abovementioned result regarding strategy-proofness (derived for resource allocation mechanisms) can be applied to a certain class of cost sharing mechanisms (being “dual” to resource allocation mechanisms); the details are provided in [45].
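The three principles are easily compared numerically. Below is a short Python sketch (our illustration; the function names are hypothetical) applied to the two-towns example.

```python
def equal_sharing(C, n):
    """Principle 1: everyone pays the same share, regardless of reports."""
    return [C / n] * n

def proportional_sharing(C, s):
    """Principle 2: cost shares proportional to the reported incomes s_i."""
    S = sum(s)
    return [si / S * C for si in s]

def equal_profit_sharing(C, s):
    """Principle 3 (two agents): equalize the reported profits s_i - C_i."""
    s1, s2 = s
    return [C / 2 + (s1 - s2) / 2, C / 2 + (s2 - s1) / 2]

C, q = 1.0, [0.4, 1.2]            # bridge cost and true incomes of the towns

print(equal_sharing(C, 2))         # [0.5, 0.5]: town 1 would get q_1 - 0.5 < 0
print(proportional_sharing(C, q))  # truthful reports: [0.25, 0.75]

# "Struggle for the first move": town 1 reports 0, shifting the whole cost
# onto town 2 (any split with s1* + s2* = C, s_i* <= q_i is a Nash equilibrium).
print(proportional_sharing(C, [0.0, 1.0]))   # [0.0, 1.0]
```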

3.3. THE MECHANISMS OF ACTIVE EXPERTISE


The numerous tasks and problems solved by a manager of an organization, the many subordinates with specific abilities and skills, as well as the various constraints and conditions of an external environment–all these factors require the principal to possess a great amount of information for efficient decision-making. However, the principal's cognitive capabilities are limited, and sometimes he or she cannot obtain the necessary information directly. Hence, there is a need for receiving information from the other participants of the OS, from an external environment, and so on.

In control of socio-economic systems, an important role is played by the expertise mechanisms, viz., mechanisms of data acquisition and processing by experts in specific fields [61]. Today, one can identify multiple mechanisms used for questioning experts and processing their opinions [10, 12, 61]. A detailed description lies beyond the scope of this book. Below we address only a certain property of expert procedures, notably, the possibility of data manipulation by agents (experts).

Imagine the following situation. A principal intends to acquire information, e.g., on the production capabilities of agents. Naturally, the agents know their capabilities and may act as experts. Suppose that the principal asks the agents and, based on the reported information, makes a management decision. Since the decision directly infringes on the agents' interests (and is made on the basis of their messages), then–most probably–each agent reports information to ensure the most beneficial decision (for him or her). An elementary example is when a principal asks agents about the subsidies required for implementing a project. The agents will scarcely tell the truth (especially under insufficient financial resources). In other words, the agents may manipulate data (demonstrate a strategic behavior) according to their interests. Such behavior is known as active; accordingly, in this section we deal with an active expertise. The principal should design a mechanism (a procedure) to guarantee truth-telling of the agents. Is that possible? In some cases, the answer is "yes."

Suppose there are n experts assessing a certain object (e.g., a candidate for a position, possible directions of financing, and so on). Expert i ∈ N reports an estimate s_i ∈ [d; D], where d and D are the minimal and maximal feasible estimates, respectively. The final assessment x = π(s) to be used in decision making is a function of the experts' messages, s = (s_1, s_2, ..., s_n). Denote by r_i the subjective opinion of expert i (this is his or her true belief about the object assessed). Assume that the expertise procedure π(s) is a strictly increasing continuous function meeting the unanimity condition: ∀ a ∈ [d; D]: π(a, a, ..., a) = a.

Generally, the experts are supposed to report their actual opinions {r_i}_{i ∈ N}. If each expert makes a small mistake (unconsciously–depending on his or her qualification), then the average estimate (1/n) Σ_{i=1}^{n} r_i provides an objective and accurate assessment of the object. The experts being concerned with specific expertise results would surely not report their actual opinions. Notably, the mechanism π(·) can be subjected to manipulation (s_i ≠ r_i). Let us formalize the interests of an expert. Assume that each expert (due to his or her professional or/and timeserving interests) strives for making the expertise result x as close to his or her opinion r_i as possible. In other words, the goal function of expert i is defined by

f_i(x, r_i) = −|x − r_i|, i = 1, …, n,

and the expert reports the estimate s_i minimizing the quantity |π(s_1, ..., s_i, ..., s_n) − r_i|.

Consider an example of manipulation. Set n = 3; d = 0; D = 1; r_1 = 0.4; r_2 = 0.5; r_3 = 0.6 and fix the following estimate processing procedure (expertise procedure): x = π(s) = (1/3) Σ_{i=1}^{3} s_i. If s_i = r_i, i = 1, 2, 3 (all agents adhere to truth-telling), then x = 0.5. Moreover, the final result coincides with the actual opinion of expert 2 (the latter is fully satisfied). In contrast, the other experts (1 and 3) are not satisfied, as far as r_1 < 0.5 and r_3 > 0.5. Hence, they try to change the outcome by reporting other estimates s_1 and s_3.

Imagine the experts report s_1* = 0, s_2* = 0.5, s_3* = 1. Then x* = π(s_1*, s_2*, s_3*) = 0.5. Thus, we have derived the same final assessment. Again, experts 1 and 3 are not satisfied; let us analyze whether they can change the situation independently. If s_1 > s_1* and s_2 = s_2*, s_3 = s_3*, then π(s_1, s_2*, s_3*) > x*; thus, by modifying his or her estimate, expert 1 moves the final assessment away from his or her own opinion. This is the case for expert 3, as well: π(s_1*, s_2*, s_3) < x* if s_3 < s_3*. A unilateral deviation from the message s* benefits none of the agents (in the sense of the distance to the subjective opinion). And so, s* = (0; 0.5; 1) is a Nash equilibrium.

Introduce the following values to be interpreted as the opinions of phantom agents (voters): w_1 = π(d, D, D) = π(0, 1, 1) = 2/3; w_2 = π(d, d, D) = π(0, 0, 1) = 1/3 (note that π(0, 0, 0) = 0 and π(1, 1, 1) = 1). Moreover, w_2 ≤ r_2 ≤ w_1 (1/3 ≤ 1/2 ≤ 2/3). In other words, within the segment [w_2; w_1] expert 2 is "a dictator with restricted authority" (the restrictions are, in fact, the boundaries of the segment).

Now, let us construct a mechanism where all experts benefit from truth-telling, and the final assessment coincides with that of the mechanism π(·). The principal may ask the experts for the actual opinions r = {r_i}_{i ∈ N} and use them as follows (an equivalent direct mechanism). First, sort the experts in the ascending order of the reported peak points. Second, if there exists q ∈ {2, …, n} such that w_{q−1} ≥ r_{q−1} and w_q ≤ r_q (note q is easily shown to be unique), then x* = min (w_{q−1}; r_q).

In the present example, q = 2 and x* = 1/2 = min (2/3; 1/2).


Evidently, s_i* = d for i < q and s_i* = D for i > q. Therefore, based on the message r and using the values w_1 and w_2, the principal may find a Nash equilibrium s*. In expertise mechanisms, the described principle of equilibrium evaluation is referred to as the median scheme [10, 45].

Check whether the experts can "improve" the final assessment by reporting r̃_i ≠ r_i. Clearly, expert 2 benefits nothing by changing his or her message, since x*(r_1, r_2, r_3) = r_2. Let expert 1 report r̃_1 < r_1. For definiteness, set r̃_1 = 0.2. The situation remains the same–expert 2 is still a "dictator." If r̃_1 > r_1, then expert 1 may modify the final assessment by becoming a "dictator," i.e., by reporting r̃_1 > r_2. Consequently, the principal evaluates x*(r̃_1, r_2, r_3) = r̃_1, but |r_1 − r̃_1| > |r_1 − r_2| (expert 1 has worsened the final assessment). Hence, by varying the message r̃_1, expert 1 is unable to make the final assessment closer to r_1. Similarly, one can show that expert 3 benefits nothing by data manipulation, as well. Thus, we have demonstrated that in the equivalent direct mechanism truth-telling is a Nash equilibrium for the experts, and the final assessment coincides with its counterpart in the initial mechanism.

Now, let us consider the general case with an arbitrary number of experts. Suppose that all r_i differ and are sorted in the ascending order, i.e., r_1 < r_2 < ... < r_n, and x* forms a Nash equilibrium (x* = π(s*)).

It is easy to show the following: if x* > r_i, then s_i* = d; if x* < r_i, then s_i* = D. In the case d < s_i* < D, one obtains x* = r_i. Note that x* = r_q yields ∀ j < q: s_j* = d and ∀ j > q: s_j* = D. The quantity s_q* is defined by the condition

π(d, d, ..., d, s_q, D, D, ..., D) = r_q,

where s_q is preceded by (q − 1) components equal to d and followed by (n − q) components equal to D. Thus, for evaluating an equilibrium it suffices to find the positive integer q. To succeed, let us define (n + 1) numbers

w_i = π(d, d, ..., d, D, D, ..., D), i = 0, …, n,

with i components equal to d and (n − i) components equal to D, which can be interpreted as the opinions of phantom agents. Here w_0 = D > w_1 > w_2 > ... > w_n = d, and if w_i ≤ r_i ≤ w_{i−1}, then x* = r_i (expert i is a dictator on the segment [w_i; w_{i−1}]). Evidently, there exists a unique expert q such that w_{q−1} ≥ r_{q−1} and w_q ≤ r_q. By defining q, one can evaluate the final equilibrium assessment: x* = min (w_{q−1}, r_q). Again, it is possible to prove that truth-telling (r̃_i = r_i)_{i ∈ N} is a Nash equilibrium in the game of experts.
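The median scheme is straightforward to implement. The Python sketch below (our illustration; it assumes only that π is strictly increasing and satisfies the unanimity condition) computes the phantom opinions w_i, locates the "dictator" index q and returns x* = min (w_{q−1}, r_q); it reproduces the three-expert example.

```python
def direct_expertise(r, d, D, pi):
    """Equivalent direct (strategy-proof) expertise mechanism: the median
    scheme. r: reported peak points sorted in ascending order; pi: strictly
    increasing procedure with pi(a, ..., a) = a."""
    n = len(r)
    # Phantom opinions: w[i] = pi of i copies of d followed by (n-i) copies of D.
    w = [pi([d] * i + [D] * (n - i)) for i in range(n + 1)]
    for q in range(1, n + 1):            # find the unique "dictator" index q
        left_ok = (q == 1) or (w[q - 1] >= r[q - 2])
        if left_ok and w[q] <= r[q - 1]:
            return min(w[q - 1], r[q - 1])
    return w[n]                          # unreachable for valid inputs

mean = lambda s: sum(s) / len(s)
print(direct_expertise([0.4, 0.5, 0.6], 0.0, 1.0, mean))  # 0.5 (expert 2 is dictator)
print(direct_expertise([0.4, 0.5, 0.8], 0.0, 1.0, mean))  # 0.5: misreport by expert 3 has no effect
```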


Recall that any expertise mechanism π(·) has been shown to possess an equivalent direct mechanism, where truth-telling is a Nash equilibrium. This result allows claiming the following. If a principal strives for agents' truth-telling, this can be achieved using a strategy-proof direct mechanism. However, the principal may have other interests. Consider an example where the principal manipulates the experts (in fact, this is a mechanism of informational control–see Chapter 8). For instance, the principal is concerned with making the expertise result as close to a value x_0 ∈ [d; D] as possible. Assume that the principal knows the agents' opinions {r_i ∈ [d; D]}_{i ∈ N}, but each agent knows nothing about the actual opinions of the others. For each agent, reflexive (informational) control by the principal consists in forming specific beliefs about the opponents' opinions such that the information reported by the agents (as a subjective informational equilibrium–see Chapter 8) leads to the most beneficial result according to the principal's view (i.e., as close to x_0 as possible). Denote by x_{0i}(a_i, r_i) a solution to the equation

π(a_i, …, a_i, x_{0i}, a_i, …, a_i) = r_i, (1)

where x_{0i} holds position i, i ∈ N. The condition (1) means that x_{0i} is the best response of agent i to the message a_i unanimously reported by the rest of the agents. The monotonicity and continuity of the mechanism π(·) imply that x_{0i}(a_i, r_i) is a continuous decreasing function of a_i (under a fixed type r_i of agent i). Let x_0 ∈ [d; D]; then, for a type r_i ∈ [d; D], there exists a message a_i inducing x_{0i}(a_i, r_i) = x_0 iff

x_0 ∈ [d_i(r_i); D_i(r_i)], i ∈ N, (2)

where

d_i(r_i) = max {d; x_{0i}(D, r_i)}, D_i(r_i) = min {D; x_{0i}(d, r_i)}, i ∈ N. (3)

Suppose that the principal (coordinator of the expertise) knows the type of each expert (the experts are unaware of the types of their opponents). Any result x_0 such that

x_0 ∈ [max_{i ∈ N} d_i(r_i); min_{i ∈ N} D_i(r_i)] (4)

can be implemented as a unanimous collective decision by a proper reflexive control.

Let us apply this result to the linear anonymous expertise mechanism π(s) = (1/n) Σ_{i ∈ N} s_i,

r_i ∈ [0; 1], i ∈ N (recall that any mechanism being symmetrical with respect to the agents' permutations is said to be anonymous). Evaluate a_i = (n r_i − x_0) / (n − 1), i ∈ N. Since a_i ∈ [0; 1] (see also (2)–(4)), one obtains the range of unanimously implemented collective decisions:

max {0; n (max_{i ∈ N} r_i − 1) + 1} ≤ x_0 ≤ min {1; n min_{i ∈ N} r_i}. (5)


Note that the condition (5) implies the constraint

max_{i ∈ N} r_i − min_{i ∈ N} r_i ≤ 1 − 1/n

to be imposed on the feasible variations of the experts' opinions to ensure the existence of (at least a single) result x_0 implemented by reflexive control as a unanimous collective decision. On the other hand, (5) means that any x_0 ∈ [0; 1] is implementable if

max_{i ∈ N} r_i ≤ 1 − 1/n, min_{i ∈ N} r_i ≥ 1/n.

This indicates that, in a linear anonymous expertise mechanism, the following represents a sufficient condition for the unanimous implementation of any collective decision as the result of reflexive control: there exist no experts with extremely low or extremely high opinions.

Now, let us reject the requirement that collective decisions be implemented unanimously. Consider two vectors: d(r) = (d_1(r_1), d_2(r_2), …, d_n(r_n)), D(r) = (D_1(r_1), D_2(r_2), …, D_n(r_n)).

Suppose that the principal knows the type of each expert (the experts are unaware of the types of their opponents). Then any result x_0 such that

x_0 ∈ [π(d(r)); π(D(r))] (6)

can be implemented as a collective decision by a proper reflexive control.

Again, apply this result to the linear anonymous expertise mechanism π(s) = (1/n) Σ_{i ∈ N} s_i, r_i ∈ [0; 1], i ∈ N. Evaluate the message s_i of expert i being subjectively optimal for the latter under the opponents' action profile s_{−i} (denote S_{−i} = Σ_{j ≠ i} s_j ∈ [0; n − 1]):

s_i(r_i, S_{−i}) = n r_i − S_{−i}, i ∈ N. (7)

Thus, X_i(r_i) = [max {0; 1 − n (1 − r_i)}; min {1; n r_i}], i ∈ N. Under (7), substitute the left and right boundaries of the sets X_i(r_i) into the linear anonymous planning mechanism to obtain

x_0 ∈ [(1/n) Σ_{i ∈ N} max {0; 1 − n (1 − r_i)}; (1/n) Σ_{i ∈ N} min {1; n r_i}]. (8)


Let us study a numerical example with three agents whose peak points are r_1 = 0.4, r_2 = 0.5 and r_3 = 0.6. Set x_0 = 0.8. Under the conditions of truth-telling, the indirect mechanism yields the result x = 0.5 (the same result is provided by the corresponding direct (strategy-proof) mechanism). Yet, the principal would like each agent to report a greater estimate, making the final assessment closer to 0.8. The condition (5) is satisfied. Find the quantities a_i from (1): 0.8 + 2 a_1 = 3 · 0.4 ⟹ a_1 = 0.2; 0.8 + 2 a_2 = 3 · 0.5 ⟹ a_2 = 0.35; 0.8 + 2 a_3 = 3 · 0.6 ⟹ a_3 = 0.5. For agent 1, the principal forms the belief that the types (peak points, actual opinions) of the other agents constitute 0.2, while they believe the type of agent 1 is 0.2 (and this is common knowledge). Similar "beliefs" are formed by the principal for agents 2 and 3 (0.35 and 0.5, respectively). The best response of agent 1 (leading to the collective decision coinciding with his or her peak point) to the message 0.2 reported by the other agents equals 0.8. Moreover, this is the best response of agents 2 and 3, as well. Thus, all agents unanimously report 0.8. In the studied numerical example, the condition (8) holds for any x_0 ∈ [0; 1], i.e., n (max_{i ∈ N} r_i − 1) + 1 ≤ 0 and n min_{i ∈ N} r_i ≥ 1.

Let us consider another example: n = 2; r_1 = 0.2; r_2 = 0.7. Then (5) implies there exists a unique x_0 (actually, 0.4) being implemented as a unanimous collective decision. At the same time, from (6) it follows that the set of implementable collective decisions makes up the segment [0.2; 0.7]. The boundaries of this segment have matched the agents' types incidentally: for instance, for r_1 = 0.1, r_2 = 0.5 the unanimous collective decisions lie within the segment [0; 0.2].

To complete the discussion of reflexive control in the mechanisms of active expertise, we emphasize the following aspect. The derived results proceed from the assumption that the coordinator of the expertise (the principal) knows the type of each agent (but the agents are unaware of the types of each other!). A somewhat more realistic assumption is that each participant (the principal and the experts) possesses specific beliefs about the ranges of the opponents' types, i.e., the managerial capabilities of the principal are limited. In this case, a promising direction of further research consists in the analysis of the set of collective decisions implementable as informational equilibria.

3.4. TRANSFER PRICING MECHANISMS

A classical example, being popular in mathematical economics, is provided by an OS with the Cobb-Douglas cost functions of the agents:

c_i(y_i, r_i) = (1/α) y_i^α r_i^(1−α), α ≥ 1, r_i > 0.

Assume that the principal's problem lies in motivating a collective of agents to choose a set of actions {y_i} with a fixed sum R (for the corresponding interpretation, see below). Let the principal set the price λ; then the goal function of agent i represents the difference between the income λ y_i and the costs:

f_i(y_i, r_i) = λ y_i − c_i(y_i, r_i). (1)

Solving the total costs minimization problem by a proper choice of ({x_i}, λ) provided that x_i ∈ Arg max_{y_i ∈ A_i} f_i(y_i, r_i) and Σ_{i ∈ N} x_i = R, one obtains (see also Section 2.7):

x_i(R, r) = (r_i / W) R, λ(R, r) = (R / W)^(α−1), (2)

where W = Σ_{i ∈ N} r_i, r = (r_1, r_2, ..., r_n).

The solution (2) minimizes the total costs of the agents under a given constraint on the total action of the agents, i.e., it ensures a Pareto-optimal equilibrium for the agents. Note the price λ is actually the Lagrange multiplier. The stated formal model possesses numerous interpretations, among them: assignment of works in a collective (λ stands for a wage rate) [50], resource allocation (λ indicates a resource price) [10], order distribution in a corporation (λ specifies a transfer price) [12], and compensation mechanisms in operational management of projects and industrial production (λ designates a bonus rate for reduced duration of technological operations) [39], to name a few. A common feature of all interpretations consists in the existence of a uniform price used for all agents.

The solution (2) has been derived under the assumption that the principal knows the parameters {r_i}_{i ∈ N} of the agents' cost functions. Otherwise, the parameters are reported by the agents and the problem of manipulation in this planning mechanism is immediate.

The studied model is unique in the following sense. There exists a corresponding equivalent direct mechanism, i.e., a strategy-proof one satisfying the fair play principle. Within this mechanism, truth-telling is a dominant strategy for each agent under certain conditions (discussed below). Let us substantiate this. Suppose that the agents report to the principal the estimates {s_i}_{i ∈ N} of the cost functions' parameters, and the principal uses the following planning mechanism (the fair play mechanism for choosing plans and a price):

Σ_{i ∈ N} x_i(s, λ) = R, (3)

x_i(s, λ) = arg max_{y_i ∈ A_i} {λ(s) y_i − c_i(y_i, s_i)}. (4)


Actually, the principal substitutes the reported estimates (considering them as true messages) into the goal functions of the agents and assigns the most beneficial plans for the agents based on these estimates. The condition (4) is said to be the perfect concordance condition–see Section 3.1. The parameter λ is chosen to meet the balance constraint (3) for the plans x_i(s, λ). The solution to the problem (3)–(4) (the transfer pricing mechanism) takes the form

x_i(R, s) = (s_i / V) R, λ(R, s) = (R / V)^(α−1), (5)

where V = Σ_{i ∈ N} s_i, s = (s_1, s_2, ..., s_n).

We emphasize the obvious similarity of the expressions (5) and (2), to be used in further analysis. Recall the hypothesis of weak contagion (HWC) claims that, under a sufficiently large number of agents, the impact exerted by the message of a specific agent on the common control parameter λ(R, s) turns out negligible. The HWC being true, substitute (5) into (1) to obtain the following result. For any messages of the rest of the agents, the maximal value of the goal function of agent i with respect to his or her message is attained at s_i = r_i. In other words, the HWC makes truth-telling a dominant strategy for each agent.

The transfer pricing mechanism (5) is unique in several respects. First, it is a strategy-proof mechanism (a fair play mechanism) with the same efficiency as the mechanism (2) under the complete awareness conditions. Second, it minimizes the total costs of the agents to perform a common planned task. And third, it admits arbitrary decentralization, i.e., each subsystem may be considered as a single agent whose action is the sum of the actions of the member agents; this aggregated agent has the Cobb-Douglas cost function with the parameter defined by the sum of the corresponding parameters of the member agents.

The presented results can be made even more general, i.e., extended to the case of the agents' cost functions in the form c_i(y_i, r_i) = r_i φ(y_i / r_i), i ∈ N, where φ(·) denotes a smooth increasing convex function. Then the resource price is defined by λ(R, s) = φ′(R / V) (compare with (5)), while the optimal plans are still evaluated by (5). It should be mentioned that perfect aggregation in the considered model is feasible due to the shape of the agents' cost functions and the planning procedure. Generally, the obtained results do not hold for arbitrary cost functions of the agents.

To conclude this section, we analyze the efficiency of the fair play mechanism with transfer prices. Before, the principal's goal function has been supposed to depend on the income yielded by performing the total amount of work R (on the one hand) and on the total costs of the agents to perform this work (on the other hand). Indeed, under a fixed amount of work, the income is constant.
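The following Python sketch (our illustration; the parameter values are arbitrary and the function names are hypothetical) implements the fair play mechanism (5) for Cobb-Douglas costs and shows numerically that, with many agents (HWC), an individual agent gains almost nothing by misreporting.

```python
def fair_play_plans(s, R, alpha):
    """Transfer pricing (fair play) mechanism (5) for Cobb-Douglas costs
    c_i(y_i, r_i) = y_i**alpha * r_i**(1 - alpha) / alpha, alpha >= 1:
    plans proportional to the reported parameters plus a uniform price."""
    V = sum(s)
    plans = [si / V * R for si in s]
    price = (R / V) ** (alpha - 1)
    return plans, price

def agent_payoff(report, others_sum, r, R, alpha):
    """Goal function (1) of one agent of true type r under the mechanism (5)."""
    V = report + others_sum
    x = report / V * R
    price = (R / V) ** (alpha - 1)
    return price * x - x ** alpha * r ** (1 - alpha) / alpha

# Many small agents (HWC): misreporting is (almost) useless.
r, R, alpha = [1.0] * 50, 40.0, 2.0
others = sum(r) - r[0]
for s0 in (0.8, 1.0, 1.2):          # agent 1 varies the report around r_1 = 1
    print(s0, round(agent_payoff(s0, others, r[0], R, alpha), 5))
# truth-telling (s0 = 1.0) yields the largest of the three payoffs
```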


The mechanisms (2) and (5) minimize the total costs of the agents provided that the principal assigns the same price to all agents. Imagine that the principal has individual interests (in addition to plan fulfillment, he or she strives to minimize the total payments to the agents). Then the transfer pricing mechanism may be treated not only as a planning mechanism, but also as a proportional incentive mechanism, where the reward of an agent is proportional to his or her action. The proportionality coefficient then represents a price, e.g., a wage rate (see [50] and the interpretations discussed above). As is generally known, proportional incentive schemes are inefficient under monotonic continuous cost functions. In particular, if the agents have the Cobb-Douglas cost functions, then optimal compensatory incentive schemes are strictly more efficient than their proportional counterparts (see Section 2.3 and [12, 50]). Let us elucidate this.

In a compensatory incentive scheme, the minimal costs σ(x) to implement the action vector x ∈ A make up K(x) = Σ_{i=1}^{n} c_i(x_i). In the case of a proportional incentive scheme, these costs are L(x) = λ Σ_{i=1}^{n} x_i*, where x_i* meets (2).

The ratio L(x) / K(x) = α ≥ 1 does not depend on the action vector and shows how much the principal "overpays" the agents by using a uniform transfer price (in comparison with the minimal costs to implement the action vector in question). Hence, it would be desirable to find a control mechanism with an equivalent fair play mechanism (ensuring strategy-proofness in the case of the principal's incomplete awareness about the agents) with almost the same efficiency as the optimal compensatory incentive scheme. Such a mechanism actually exists. Under incomplete awareness, let the principal use the control mechanism

σ_i(y_i, r_i) = (1/β) y_i^β r_i^(1−β), β ≥ 1. (6)

Then the agent's goal function takes the form (also, see (1)):

f_i(y_i, r_i) = (1/β) y_i^β r_i^(1−β) − c_i(y_i, r_i). (7)

For the agents with the Cobb-Douglas cost functions c_i(y_i, r_i) = (1/α) y_i^α r_i^(1−α), i ∈ N, and β = α − δ, where δ ∈ (0; α), one obtains: a) the mechanism (6) is ε-optimal, where ε = δ / (α − δ); b) within the HWC, the mechanism (6) has an equivalent fair play mechanism. Numerous applications of transfer pricing mechanisms are provided in [39].


3.5. RANK-ORDER TOURNAMENTS³

³ This section was written with the assistance of Prof. V.N. Burkov, Dr. Sci. (Tech.).

The common idea of any tournament lies in the following. Participants are sorted in the descending order on the basis of the available information about them (objective information or data reported directly by the participants). The winner (or winners) is the participant ranked first (or the several top-ranked participants, depending on the conditions of a specific tournament). An immediate problem is that a participant may manipulate the data reported (demonstrate a strategic behavior) to win the tournament.

There exist discrete and continuous tournaments. In the first case, a participant requires a certain quantity of a resource; thus, he or she will not be satisfied with any smaller quantity, leading to zero effect (e.g., a project will not be completed, a product will not be manufactured, etc.). In contrast, in continuous tournaments a participant obtaining a smaller quantity (than he or she actually has requested) may yield a nonzero effect. For instance, consider a proportional relationship between the effect and the allocated quantity of a resource (the efficiency is fixed). We have already analyzed continuous tournaments in Section 3.2. Thus, the present section concentrates on the model of a discrete tournament, using the investment problem as an example.

For project i, denote by l_i the estimate of the expected effect from its implementation and by s_i the estimate of the required investments in the project. Generally, the estimate l_i is provided by an expert commission taking into account different market, economic or social conditions. On the other hand, the estimate s_i is defined by a firm or an organization submitting or implementing the project. Suppose that the estimate l_i turns out rather objective. In principle, one should not rule out the possibility of conscious overstating or understating of the effect estimates by experts interested in a certain project. Concerning the investment estimates, we emphasize the general tendency of overstating the required investments by potential participants. To reduce the negative impact of these phenomena, one often uses rank-order tournaments. A certain estimate of the efficiency (priority levels) of investment projects is introduced, which depends on the expected effect l_i and on the estimate of the required investments s_i. Next, the projects are sorted in the descending order of their efficiencies and are invested accordingly (until the available amount of financing is totally allocated). The following estimates of the efficiency are widespread: q_i = l_i / s_i and q_i = l_i − λ s_i (λ represents a normalizing coefficient commensurating the effect and the costs). Such a tournament is called simple (a first-price auction).

How could the efficiency of a tournament be measured? Denote by r_i the objective estimate of the required investments in project i (we study a discrete tournament–the investments being smaller than r_i, the project takes the risk of failure). Suppose that for all projects one knows the objective estimate of the required investments. It is possible to choose an optimal subset Q_0 from the whole set of projects N by solving the problem

Σ_{i ∈ Q_0} l_i → max, Q_0 ⊆ N, (1)

Σ_{i ∈ Q_0} r_i ≤ R, (2)

where R specifies the available amount (fund) of financing. The maximal effect gained by the solution to the problem (1)–(2) constitutes L_max. Let Q be the set of tournament winners. The total effect gained by the subset of winning projects is given by

L(Q) = Σ_{i ∈ Q} l_i. (3)

Evidently, L(Q) ≤ L_max. The ratio

K = L(Q) / L_max (4)

determines the efficiency of a rank-order tournament. Below we show that the efficiency of simple tournaments may be arbitrarily small.

Example 3.1. Consider two projects, where l_1 = 2ε, r_1 = ε (ε designates a small positive value), l_2 = 150, r_2 = 100. The available amount of financing makes up R = 100. In the case of the estimates q_1 = l_1 / r_1 = 2 and q_2 = l_2 / r_2 = 1.5, the winner is project 1 receiving the investments s_1 = ε. Thus, project 2 is left without financing. Consequently, Q = {1} and L(Q) = 2ε. Clearly, the maximal effect equals L_max = 150 (project 2 being invested). The efficiency of this tournament is K = 2ε / 150 = ε / 75 (in fact, arbitrarily small). When the efficiency is estimated by the formulas q_1 = l_1 − λ r_1 and q_2 = l_2 − λ r_2, for λ = 1.5 one obtains q_1 = 0.5ε, q_2 = 0; hence, project 1 wins for any ε. The efficiency remains the same as in the previous case.

Let us endeavor to derive an estimate of the guaranteed efficiency in a simple tournament, taking into account the following aspects. First, tournament winners may overrate the required investments and, second, the remaining amount may be insufficient for the implementation of a next project. The guaranteed efficiency K of a simple tournament exceeds the following quantity:

K > 1 / (2 − η + 1/(β − 1)), (5)

where η = e_min / e_max (e_min and e_max stand for the minimal and maximal efficiencies of the projects participating in a tournament), and β = R / r (R is the available fund, r means the maximal investments required for a single project).

Below we deduce formula (5). Assume that e_1 is the maximal efficiency of the losing projects, l_2 is the total effect gained by the winners, and s_2 is the total estimate of the required investments for the latter. Obviously, l_2 > e_1 s_2 and R − s_2 < r. Hence, the following estimate is immediate:

l_2 > e_1 s_2 > e_1 (R − r).

Actually, the winning projects can be implemented via smaller investments: r_2 ≥ l_2 / e_max > e_1 (R − r) / e_max, and the residual financial resources Δ = R − r_2 < R − e_1 (R − r) / e_max can be used to implement other projects, yielding an additional effect of at most

e_1 Δ < e_1 (R − e_1 (R − r) / e_max).

Thus, for a simple tournament one obtains the efficiency estimate

K ≥ l_2 / (l_2 + e_1 Δ) > 1 / (1 + (R − e_1 (R − r) / e_max) / (R − r)) ≥ 1 / (2 − η + 1/(β − 1)).

By definition, the maximal efficiency of a simple tournament is attained when e_min = e_max (the projects have almost the same efficiencies) and r is sufficiently small (the projects require low investments). The minimal efficiency takes place for projects whose investments approach the fund R. If r → 0, the above estimate takes a simplified form:

K > 1 / (2 − η) ≥ 1/2.

Therefore, for small projects the efficiency of a simple tournament always exceeds 50%.

Consider a "knapsack" tournament, where the winners are defined by solving the maximal total effect problem

Σ_{i ∈ Q} l_i → max_Q (6)

under the constraint

Σ_{i ∈ Q} s_i ≤ R. (7)

Evidently, the efficiency of a "knapsack" tournament is not smaller than 0.5. This estimate cannot be refined.

Example 3.2. There are two projects with the following parameters: l_1 = 100 + ε, r_1 = 50, l_2 = 100, r_2 = 50 (again, ε specifies an arbitrarily small positive value). Suppose that under the available fund R = 100 the participants have reported the estimates s_1 = 100, s_2 = 50.


Apparently, project 1 wins as the result of solving the problem (6)–(7), i.e., Q = {1}, L(Q) = 100 + ε. At the same time, L_max = 200 + ε and

K = (100 + ε) / (200 + ε) = 0.5 + ε / (400 + 2ε).

Recall ε is arbitrarily small; hence, K can be made arbitrarily close to 0.5.

Let us study a somewhat more sophisticated case–the so-called two-stage tournaments (second-price tournaments). At stage 1, find all solutions to the problem (6)–(7) such that

L(Q) ≥ δ L_0, (8)

where L_0 is the total effect in the optimal solution, and 0 < δ ≤ 1 represents a fixed parameter. In other words, choose all subsets of projects whose total effect is not smaller than a specific share δ of the maximal effect (under the reported estimates {s_i}). At stage 2, consider the resulting subsets (i.e., the ones satisfying the condition (8)) and choose the one requiring the minimal investments. In the described mechanism, a nontrivial issue concerns the choice of δ. To analyze it, we focus on the case of two projects. For a given δ, four situations are then feasible (for definiteness, assume that l_1 ≥ l_2):

a) l_2 / l_1 < δ and r_1 + r_2 > R. At stage 1, only project 1 wins, and the efficiency is K = 1;

b) l_2 / l_1 < δ and r_1 + r_2 ≤ R. At stage 1, again only project 1 wins. However, L_max = l_1 + l_2 leads to the efficiency

K = l_1 / (l_1 + l_2) > 1 / (1 + δ);

c) l_2 / l_1 ≥ δ and r_1 + r_2 > R. At stage 1, two different subsets win (each includes a single project). At stage 2, in the worst case project 2 wins (if r_2 < r_1) and the efficiency makes up

K = l_2 / l_1 ≥ δ;

d) l_2 / l_1 ≥ δ and r_1 + r_2 ≤ R. In the least favorable case, at stage 1 the two subsets win–see variant c); subsequently, project 2 wins at stage 2. This happens if s_1 + s_2 > R and s_2 < s_1. On the other hand, under s_1 = r_1 (the loser reports the minimal estimate) the worst case for the tournament organizer takes place if r_1 > R / 2 and r_2 < r_1. And the efficiency constitutes

K = l_2 / (l_1 + l_2) ≥ δ / (1 + δ).


Clearly, variant d) is remarkable for the lowest efficiency. Since the latter increases with δ, one should choose δ = 1. Thus, again we have constructed a "knapsack" tournament. Under the introduced assumptions, it seems there exist no rank-order tournaments with a guaranteed efficiency over 0.5.

The situation may be improved by admitting other hypotheses regarding the behavior of tournament participants. Till now, we have believed that tournament participants strive for an equilibrium outcome (a Nash point). The advantages of a two-stage tournament show up when tournament participants try to maximize the guaranteed result. Indeed, a participant submitting project 1 definitely wins at stage 2 in one of the following situations: (1) he or she is sure that only a single subset composed of project 1 wins at stage 1; (2) he or she reports the minimal estimate s_1 = r_1 to improve the chances at stage 2. Similarly, the participant representing project 2 reports s_2 = r_2. Hence, in variant d) the least favorable case is impossible and the efficiency equals 1. Therefore, the guaranteed efficiency makes up

K ≥ min {1 / (1 + δ), δ}.

The maximal value is attained at δ = 1 / (1 + δ). By solving the corresponding equation, one obtains the optimal value δ_0:

δ_0 = (√5 − 1) / 2 ≈ 0.6.

The derived estimate of the guaranteed efficiency remains correct when the number of tournament participants exceeds 2, as well. This follows from the assumption that the efficiency does not decrease for a greater number of participants.
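The tournaments of this section are easy to simulate. The Python sketch below (our illustration; the knapsack winners are found by brute force over subsets, hence only for small n, and the greedy variant simply skips unaffordable projects) reproduces Examples 3.1 and 3.2 (with ε = 1 and ε = 0, respectively).

```python
from itertools import combinations

def simple_tournament(l, s, R):
    """Simple (first-price) tournament: fund projects in descending order
    of the efficiency q_i = l_i / s_i while the fund suffices."""
    order = sorted(range(len(l)), key=lambda i: l[i] / s[i], reverse=True)
    winners, left = [], R
    for i in order:
        if s[i] <= left:
            winners.append(i)
            left -= s[i]
    return winners

def knapsack_tournament(l, s, R):
    """"Knapsack" tournament (6)-(7): the winners maximize the total effect."""
    best, best_effect = (), 0.0
    for k in range(len(l) + 1):
        for Q in combinations(range(len(l)), k):
            if sum(s[i] for i in Q) <= R:
                effect = sum(l[i] for i in Q)
                if effect > best_effect:
                    best, best_effect = Q, effect
    return list(best), best_effect

# Example 3.1 (eps = 1): the simple tournament funds the tiny project only.
l, r, R = [2.0, 150.0], [1.0, 100.0], 100.0
Q = simple_tournament(l, r, R)
print(Q, sum(l[i] for i in Q) / 150.0)    # [0] 0.0133... (K = eps / 75)

# Example 3.2 (eps = 0): with strategic reports the knapsack tournament
# achieves only K = 0.5 of the optimum computed under the true r.
l, r, s, R = [100.0, 100.0], [50.0, 50.0], [100.0, 50.0], 100.0
Q, LQ = knapsack_tournament(l, s, R)
_, Lmax = knapsack_tournament(l, r, R)
print(Q, LQ / Lmax)                        # [0] 0.5
```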

3.6. THE MECHANISMS OF EXCHANGE IN PLANNING PROBLEMS AND IN INCENTIVE PROBLEMS⁴

Many economic problems may be stated in the form of an exchange problem. Therefore, in this section we demonstrate the feasibility of applying the design method for strategy-proof mechanisms of exchange to solve planning and incentive problems in organizational systems. The method involves the fair play principle formulated in Section 3.1. This enables obtaining efficient solutions to exchange problems under incomplete awareness of a principal about the parameters of agents. As illustrative examples, we consider the planning problem and the incentive problem (viz., the pricing problem) in an OS with incomplete awareness of a principal.

The model of exchange scheme. Let us introduce the key notions required for the further development of the section.

⁴ This section was written by N.A. Korgin, Cand. Sci. (Tech.).


An exchange is the process of reallocating a resource among participants of an organizational system. The set of exchange options is a set of individually rational allocations of a resource, being feasible within a given organizational system. An exchange scheme is a combination of exchange options (an organizational system with a nonempty set of exchange options). Thus, an exchange scheme represents an organizational system where (at least some) participants are interested in interaction, i.e., in the reallocation of a resource.

In the sequel, we consider the two-element exchange scheme (with two types of resources). Within the framework of the incentive problem (see Chapter 2), the participants of the exchange scheme include an employer (a principal) and an employee (an agent), while the resources are money and a certain product manufactured by the agent. For the planning problem (actually, the pricing problem–see Chapter 3), the participants of the exchange scheme represent a seller (a manufacturer of a product), acting as a principal, and a buyer, acting as an agent; the resources are money and a divisible product.

Both problems possess identical statements. A principal should exchange with an agent in the most beneficial way. The principal does not know the exact "type" of the agent (i.e., the parameter determining the agent's utility function). For instance, the agent's type may be the efficiency of his or her activity, work output, etc. The principal suggests a mechanism of exchange–depending on the type reported by the agent, different exchange options are assigned to the latter. Traditionally, the type of an agent serves to reflect the "utility" of the agent's activity according to the principal's viewpoint; the "better" the type (the higher its absolute value), the greater the profit the principal gains as the result of interaction with the agent. In the incentive problem, the higher the agent's type, the smaller are his or her costs to manufacture a specific quantity of a product. In the pricing problem, the higher the agent's type, the better he or she appreciates the product offered by the principal.

The fundamental principle of designing strategy-proof mechanisms of exchange consists in the perfect concordance condition (see Section 3.1). The exchange option corresponding to the agent's request must be the most beneficial to the agent whose type corresponds to the request (among all options offered). Thus, the principal stimulates the agent's truth-telling.

The mathematical model of an exchange scheme has the following form. The preferences of the OS participants (agent 0 and agent 1) are described by the functions

φ_0(y_01, y_02, r⁰) = r⁰ y_02 + y_01,

φ_1(y_11, y_12, r¹) = y_11 − (Y_2 − y_12)² / (2 r¹),

where y_ij means the quantity of resource j available to agent i; r^i stands for the type of agent i (i = 0, 1; j = 1, 2); y⁰ = (y_ij⁰) with y_01⁰ = Y_1, y_02⁰ = 0, y_11⁰ = 0, y_12⁰ = Y_2 is the initial resource allocation, i.e., agent 0 (agent 1) accumulates the whole quantity of resource 1 (respectively, 2). The individual rationality constraints defining feasible exchange options for each agent take the form:

IR(y⁰) = {y | ∀ i ∈ {0, 1}: φ_i(y_i) ≥ φ_i(y_i⁰)}.

In other words, for each agent rational exchange options do not decrease his or her utility function. Define the transfer of resource j for agent i during the exchange y⁰ → y:

x_ij = y_ij − y_ij⁰.

For agent i, the utility function of exchange is f_i(x_i) = φ_i(x_i + y_i⁰) − φ_i(y_i⁰) [12].

The stated exchange scheme includes no hierarchy of the participants. In the incentive problem and the pricing problem, a certain participant has the status of a principal, while the other is assigned the status of an agent. The former suggests to the latter different exchange options (a "menu"); the agent chooses the best option and reports it to the principal. The exchange takes place according to the option chosen by the agent.

The incentive problem. The proposed model serves for solving the incentive problem under incomplete awareness of a principal about the parameters of an OS. Agent 0 is treated as a principal (an employer), while agent 1 is an agent (an employee). The utility functions of exchange for the principal and for the agent are

f_0(x_1, x_2) = r⁰ x_2 − x_1,

f_1(x_1, x_2) = x_1 − x_2² / (2 r¹).

The transfer of resource 1 represents the principal's payments for the work performed by the agent; the transfer of resource 2 consists in the amount of performed work. Moreover, Y_1 defines the principal's budget constraint and Y_2 indicates the maximal amount of work to be performed by the agent. The principal's problem is to find the mechanism of exchange maximizing his or her expected utility Ef_0(ν(s)) → max_{ν(s)} of the exchange with the agent. The principal is supposed to be unaware of the agent's type (the principal only knows the type is uniformly distributed on Ω¹ = [r¹min, r¹max]). The principal suggests to the agent the mechanism of exchange ν(s) = (x_1(s), x_2(s)), where the amount of performed work and the corresponding payments depend on the agent's message s representing the estimate of his or her type. The mechanism of exchange offered by the principal is a fair play mechanism if it satisfies the perfect concordance condition, see Section 3.1. A detailed discussion of the perfect concordance conditions in the mechanisms of exchange can be found in [12, 38, 42]. In the model of exchange scheme, the mechanism of exchange appears strategy-proof (i.e., meets the perfect concordance condition) iff the following formulas hold true:

dx_1/dr (r) − (x_2(r) / r) dx_2/dr (r) = 0, (1)

(x_2(r) / r²) dx_2/dr (r) ≥ 0, (2)

∀ s ∈ Ω¹: dx_1/ds (s) ≥ 0, dx_2/ds (s) ≥ 0. (3)

The condition (3) specifies a fundamental property of a strategy-proof mechanism of exchange. The amount of performed work and the corresponding payments grow as the employee increases the estimate of his or her type. In other words, the better the reputation an employee has acquired, the greater the amount of work he or she is suggested to perform and the greater the payments offered for this work.

If the mechanism of exchange satisfies the conditions (1)–(3), the agent's profit gained by the exchange (the utility function π_1(r) = f_1(x_1(r), x_2(r), r) of the agent) may be rewritten as

π_1(r) = ∫_{r¹min}^{r} x_2(τ)² / (2τ²) dτ. (4)

The analysis of (4) yields the following. To design the mechanism of exchange maximizing the principal's profit, one should solve the optimization problem

Ef_0(Ω¹) = ∫_{r¹min}^{r¹max} [ r⁰ x_2(r) − x_2(r)² / (2r) − ∫_{r¹min}^{r} x_2(τ)² / (2τ²) dτ ] dr → max_{x_2},

0 ≤ x_2(r) ≤ Y_2, 0 ≤ x_1(r) ≤ Y_1.

Not dwelling on the technicalities, let us provide the final formula for the resulting mechanism of exchange:

x_2(r) = r⁰ r² / r¹max, x_1(r) = (r⁰)² (4r³ − (r¹min)³) / (6 (r¹max)²), r ∈ Ω̃ = [r¹min, r̃],

r̃ = min { r¹max, (r¹max Y_2 / r⁰)^(1/2), ((3/2) (r¹max)² Y_1 + (1/4) (r¹min)³ (r⁰)²)^(1/3) (r⁰)^(−2/3) }.

Here r̃ stands for the maximal type of the agent which is suggested a specific plan of exchange. This type is defined by the budget constraint of the principal (alternatively, by the maximal amount of work). Imagine that the reported type of the agent appears better than r̃; then the agent is offered the plan of exchange for the type r̃.
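A minimal Python sketch of the resulting menu follows (our illustration; it assumes the resource constraints Y_1, Y_2 are non-binding, so that r̃ = r¹max, and the function name is hypothetical). One can verify directly that reporting the true type maximizes the agent's profit x_1 − x_2²/(2r) over the menu.

```python
def incentive_menu(s, r_min, r_max, r0):
    """Optimal strategy-proof exchange plan (x_1 = payment, x_2 = work)
    for a reported type s in [r_min, r_max], uniform prior, principal's
    valuation r0 per unit of work; Y_1, Y_2 assumed non-binding."""
    s = min(s, r_max)                    # types above r_max get the plan for r_max
    x2 = r0 * s ** 2 / r_max
    x1 = r0 ** 2 * (4 * s ** 3 - r_min ** 3) / (6 * r_max ** 2)
    return x1, x2

r_min, r_max, r0 = 1.0, 2.0, 1.0
for s in (1.0, 1.5, 2.0):                # three points of the menu curve
    x1, x2 = incentive_menu(s, r_min, r_max, r0)
    print(s, round(x2, 3), round(x1, 3))
# e.g. the true type r = 1.5 prefers its own plan: payoff 0.099 versus
# 0.042 (mimicking r = 1.0) and -0.042 (mimicking r = 2.0)
```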


Figure 3.1 below illustrates the derived mechanism of exchange. Clearly, improving the type reported by the agent reduces the incremental cost of the work performed by the latter (as the ratio of the reward paid by the principal to the amount of performed work). The figure also demonstrates how the type r̃ depends on the constraints imposed on the resources (here, one deals with the budget constraint of the principal).

Planning (pricing) problem. Similarly to the incentive problem, let us study the pricing problem. A seller of a certain product acts as a principal, whose goal function of the exchange takes the form

f_1(x_1, x_2) = x_1 − x_2² / (2 r¹).

[The figure plots the menu in the (x_2, x_1) plane: a curve of the form x_1 ~ (x_2)^(3/2) rising through the points ν(r¹min), ν(r̃) and ν(r¹max), with the payment axis bounded by Y_1.]

Figure 3.1. A strategy-proof mechanism of exchange in the incentive problem

Accordingly, the goal function of a buyer (serving as an agent) is defined by

f_0(x_1, x_2) = r⁰ x_2 − x_1.

The seller possesses an arbitrarily divisible product in the quantity of Y_2; the buyer has the amount of money Y_1. The seller's problem is to find the mechanism of exchange maximizing his or her expected utility Ef_1(ν(s)) → max_{ν(s)} of the exchange with the buyer.

The principal is supposed to know that the agent's type has a uniform distribution on Ω⁰ = [r⁰min, r⁰max]. By analogy to the incentive problem, the posed problem can be reduced to the design of a strategy-proof mechanism of exchange ν(s) = (x_1(s), x_2(s)), i.e., a fair play mechanism. The


problem is feasible iff the mechanism of exchange satisfies the following requirements [12, 59]:

r dx_2/dr (r) − dx_1/dr (r) = 0, (5)

dx_2/dr (r) ≥ 0, (6)

∀ s ∈ Ω⁰: dx_1/ds (s) ≥ 0, dx_2/ds (s) ≥ 0. (7)

Under the conditions (5)–(7), the agent's profit of exchange (his or her utility function π_0(r) = f_0(x_1(r), x_2(r), r)) can be rewritten as

π_0(r) = ∫_{r⁰min}^{r} x_2(τ) dτ. (8)

The design problem for the mechanism of exchange maximizing the expected profit of the principal is reduced to the optimization problem

Ef_1(Ω⁰) = ∫_{r⁰min}^{r⁰max} [ r x_2(r) − x_2(r)² / (2r¹) − ∫_{r⁰min}^{r} x_2(τ) dτ ] dr → max_{x_2},

0 ≤ x_2(r) ≤ Y_2, 0 ≤ x_1(r) ≤ Y_1.

Again, we show the final result (provided that x_2(r⁰max) ≤ Y_2 and x_1(r⁰max) ≤ Y_1):

x_2(r) = r¹ (2r − r⁰max), r ∈ Ω̃ = [r̂, r⁰max]; x_2(r) = 0, r ∈ Ω⁰ \ Ω̃;

x_1(r) = r¹ (r² − (r⁰max − r̂) r̂), r ∈ Ω̃ = [r̂, r⁰max]; x_1(r) = 0, r ∈ Ω⁰ \ Ω̃;

r̂ = max {r⁰min, r⁰max / 2}.

Here r̂ means the worst-case type of the buyer which is still suggested a nontrivial plan of exchange. If the reported type of the buyer is worse than r̂, the "zero" plan of exchange is offered to the agent (actually, no exchange takes place). Similarly to the mechanism of exchange in the incentive problem, the conditions x_2(r⁰max) ≤ Y_2 and x_1(r⁰max) ≤ Y_1 being false, one evaluates r̃ (the maximal type of the buyer for exchanging with the seller under the above resource constraints).
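Analogously, the pricing menu can be sketched as follows (our illustration; the resource constraints are again assumed non-binding, and the function name is hypothetical). Note the "zero" plan below r̂ and that reporting the true type maximizes the buyer's profit r x_2 − x_1 over the menu.

```python
def pricing_menu(s, r_min, r_max, r1):
    """Optimal strategy-proof exchange plan (x_1 = payment, x_2 = quantity)
    for a buyer of reported type s; seller's type r1; uniform prior on
    [r_min, r_max]; Y_1, Y_2 assumed non-binding."""
    r_hat = max(r_min, r_max / 2)
    if s < r_hat:
        return 0.0, 0.0                  # the "zero" exchange plan
    x2 = r1 * (2 * s - r_max)
    x1 = r1 * (s * s - (r_max - r_hat) * r_hat)
    return x1, x2

r_min, r_max, r1 = 0.2, 1.0, 1.0         # here r_hat = 0.5
for s in (0.4, 0.6, 0.8, 1.0):
    x1, x2 = pricing_menu(s, r_min, r_max, r1)
    print(s, round(x2, 2), round(x1, 2))
# e.g. the type r = 0.8 gets profit 0.8*0.6 - 0.39 = 0.09, which exceeds the
# profit from mimicking 0.6 (0.05) or 1.0 (0.05): truth-telling is optimal
```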


Figure 3.2 illustrates the obtained mechanism of exchange in the pricing problem. Clearly, improving the buyer's reported type decreases the incremental cost of the product offered (e.g., larger purchases enjoy bulk discounts). To conclude this section, we emphasize the following aspect. Solving the abovementioned problems, one need not involve the type of an agent (as a parameter being difficult to interpret in practice). For this, use the following technique. The principal does not ask the agent about the type; instead, he or she simply suggests choosing an exchange option. Hence, in the incentive problem or in the pricing problem the principal offers the agent to choose an option from the menu (contract) described by the curve in Figures 3.1-3.2.

x1

Y1

 (r~ )

 (r 0 max )

x1  (x 2 ) 2

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Figure 3.2. A strategy-proof mechanism of exchange in the pricing problem.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Chapter 4

MECHANISMS OF ORGANIZING The present chapter provides a description for mechanisms of organizing. Much attention is paid to mechanisms of financial control, notably, the mechanisms of joint financing (Section 4.1), counter-expensive mechanisms (Section 4.2), the mechanisms of cost-benefit analysis (Section 4.3), the mechanisms of self-financing (Section 4.4), and the mechanisms of insurance (Section 4.5). Finally, we consider important classes of mechanisms such as the mechanisms of production cycle optimization (Section 4.6) and the mechanisms of assignment (Section 4.7).

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

4.1. MECHANISMS OF JOINT FINANCING1 Generally, large-scale projects have several sources of financing. Project initiators strive for obtaining funds from state, regional or municipal budgets, privately-owned companies, etc. In this case, financing belongs to the class of resource allocation (cost sharing) problems discussed in Section 3.2. Let us analyze the mechanisms of joint financing in projects; to implement the latter, it would be desirable to attract the capital of privately-owned companies (agents). However, projects can be unbeneficial to the agents if their effect (return on investment) is smaller than 1. As a rule, a budget is limited and insufficient to implement all necessary projects. Nevertheless, agents have no objection to receive budget allocations or a preferential credit. The idea of joint financing consists in the following. Budgetary funds or preferential credits are issued provided that an agent undertakes the obligation to invest personal financial resources. In practice, one often deals with a fixed share of the agent’s funds (e.g., a budget provides 20%, while 80% of the investments are allocated by an agent). Still, such approach to project financing possesses definite shortcomings. For instance, under a small share of budgetary funds the corresponding private investments would be low, as well. Certain difficulties appear if the share is too high. First, there would be too many agents interested in financing of a project (thus, an additional procedure for choosing participants is required, e.g., based on rank-order tournaments). Second, the efficiency of budget utilization is decreased. Below we study the mechanism of joint financing with a flexible share of budget funds. 1

This section was written with the assistance of Prof. V.N. Burkov, Dr. Sci. (Tech.).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

122

Let us formulate the design problem for the mechanism of joint financing. Consider n agents representing, e.g., potential investors in the regional or municipal program of social development. There exists a centralized fund intended for the program. Agent i submits a certain project for being included in the social development program; the project requires the total investments Si, i  N = {1, 2, …, n}. The project passes an expertise and is assigned the level of social effect fi (Si), i  N. In addition to its social effect, the project presented by the agent possesses the economic effect i (Si) for the agent. Using the agents’ requests, a principal (regional authority) defines the amounts of financing {xi} for the agents’ projects (generally, xi  Si) under limited budget funds R. The procedure {xi = i (S), i  N}, where S = (S1, S2, …, Sn) stands for the request vector of the agents, is called the mechanism of joint financing. Agent i has to provide the deficient financial resources yi = Si – xi. Hence, the agent’s interests are given by

i (Si) – yi,

(1)

where i (Si) means the agent’s income (yi being a credit with a bank, one should account for the bank interests on credit). The principal’s problem lies in designing the mechanism  (S) ensuring the maximal social effect Ф 

 f S  ; here S n

i 1

 i

i

*

= {Si*} are equilibrium strategies of the agents (a Nash

equilibrium in the corresponding game). Consider the linear case: i (Si) = ai Si, fi (Si) = bi Si, 0 < ai < 1, bi > 0, i  1, n . In fact, for

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

project i, ai specifies the return on investments; the projects are unprofitable: ai < 1 (i  N). We analyze the straight priority-based mechanism

xi ( S ) 

li Si R, i  N , l jS j

(2)

j N

where li stands for the priority of agent i. Without loss of generality, set R = 1. Note it is possible that xi (S) > Si (agent i receives more resources than requested). In this case, assume that the difference xi (S) – Si is preserved by the agent. Evaluate a Nash equilibrium outcome. For this, substitute (2) in (1) and maximize over Si the following expression:

 lS  lS ai Si   Si  i i   i i  (1  ai ) Si , L( S )  L( S )  where L( S ) 

l S j N

j

j

.

Simple computations yield Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Organizing

li Si  L( S )1  qi L( S ) , where qi  The condition

l S

i N

L( S  ) 

where Q =

i

i

123

1  ai . li

 L(S ) leads to

(n  1)  ( n  1)  ( n  1) qi  , Si  , 1 liQ  Q  Q

 q . Clearly, the inequality S

iN

i

* i 

(3)

0 holds true, i.e.,

qi 1  , i  1, n . Q n 1

(4)

This condition being violated, the corresponding agents are removed from project participants. The calculations should be repeated with new values of Q and n. Again, the agents not satisfying the constraint (4) are eliminated from further consideration, and so on. At a finite number of steps, one obtains an equilibrium outcome such that all agents meet (4). Let the agents be sorted in the ascending order of qi: q1  q2  ...  qn. To determine the number of agents for participation in the project (social development program), find the

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

k Qk maximal number k such that qi  , where Qk   q j , i  1, k . k 1 j 1

Consider an example (ai, li, qi are combined in Table 4.1). Table 4.1. Parameters of the mechanism of joint financing

ai li qi

1 0.9 1 0.1

2 0.6 2 0.2

3 0.1 3 0.3

4 0.12 2.2 0.4

5 0.75 0.5 0.5

6 0.1 1,5 0.6

Apparently, the maximal value is k = 2. Indeed,

q1  q2  0.3  q2  0.2 , 1 At the same time,

q1  q2  q3  0.3  q3  0.3 . 2

Thus, agents 1 and 2 are the participants of the program within the mechanism of joint financing. If bi = li for all i, then the total effect gained by the program constitutes Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

124

L( S  ) 

7 (n  1) 1 (recall that R = 1).  3 , and the total investments are S   2 Q3 3 9

7 times exceed the budget funds. The 9 5 2   equilibrium requests of the agents make up S1  2 , S 2  . 9 9 Therefore, the investments in the program by 2

In this example, li = bi, i  N. Let us design a straight priority-based mechanism (see Section 3.2) ensuring the maximal social effect. It is necessary to find the priorities {li}iN maximizing the total social effect. The problem is reduced to evaluating {li  0}iN such that the quantity n

n

i 1

i 1

 bi Si  

bi ( n  1) R  ( n  1) qi  1 liQ  Q 

(5)

attains the maximum. Introduce the notation li = (1 – ai) / qi, qi / Q = i, pi = (1 – ai) / bi to rewrite (5) as n

i

i 1

pi

Ф( )  

1  (n  1)i ,

(6)

where  = (1, 2, …, n).

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Hence, one has to find {i  0}iN,

n

 i 1

i

 1 , maximizing the function (6). Using the

Lagrange multiplier method yields

i0 

1  ( n  2)  i , where  i  2( n  1)

pi , i  1, n .  pj

(7)

j N

Thus, li0 

1  ai

i0

, i  N (up to a constant factor). Interestingly, for two agents optimal

priorities do not depend on b1, b2. Now, evaluate optimal priorities for the previous example. For two agents, 10   20  substitute in (6) to obtain p1 = 0.1; p2 = 0.2; 1 =

1 2 ; 2 = ; 3 3

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

1 ; 2

Mechanisms of Organizing

125

 0  0 3 Ф   1 1  10   2 1   20   3 . p2 4  p1  This is greater than 3 3

1 . Accordingly, the total investments have been increased up to 3

1 . 8

Optimal priorities may change the number of the agents–possible participants of the program. Thus, we should study the case of more agents. Consider three agents: p1 = 0.1; p2 = 0.2; p3 = 0.3; 1 =

10 

1 1 1 ; 2 = ; 3 = ; 6 3 2

1  2 1 0 1  3 3 1  1 7  ;  20   ; 3   . 4 8 4 3 4 24

Since all {  i0 } are smaller than

1 , the conditions (4) hold true. Again, substitute in (6) 2

to obtain

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

 0  0 0 1 Ф  2  1 1  210   2 1  2 20   3 1  230   4 . p2 p3 6  p1  Clearly, the efficiency of the mechanism of joint financing has been improved. Consider four agents: p1 = 1 = 0.1; p2 = 2 = 0.2; p3 = 3 = 0.3; p4 = 4 = 0.4;

10 

1  2 1 1  22 7 4  0,2;  20   ;  30  ;  40  0,3 . 6 6 30 15

Still, the conditions (4) take place; the total social effect constitutes 4 Ô 0  3 i 1  3i0   R i 1 pi

5 1  0.2  0.4 7  0.3  0.5 8  3    0.1  0.3  2.5  4 4 . 30 45 24 6  0.1  The social effect has been again increased; thus, one should check the case n = 5. We have:

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

126

p1 = 0.1; p2 = 0.2; p3 = 0.3; p4 = 0.4; p5 = 0.5;

1 =

1 2 1 4 1 ; 2 = ; 3 = ; 1 = ; 2 = ; 15 15 5 15 3

10 

1  3 1 6 7 8 9 10 .  ;  20  ;  30  ;  40  ;  50  8 40 40 40 40 40

The condition (4) is violated for agent 5. Hence, the optimal solution includes four agents with the total social effect 4

5 . By optimal choice of the mechanism of joint financing, the 24

social effect has been increased by 25% (under the same budget). Now, consider the nonlinear case. For agent i, the effect from project implementation makes up

 i Si  

1



Si ri1 , 0    1 .

(8)

The agent’s interests are described by the formula

 i Si   yi 

1



Si ri1  Si  xi  .

(9)

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Let us analyze the straight priority-based mechanism

 i (S ) 

Si . Sj jN

Admit the hypothesis of weak contagion, which implies that an agent does not account for the impact of his or her request on the common factor

 S 

1

j

. Then the equilibrium

request of agent i is defined by 1 

 ri     Si 

1

1 , i  N, S0

(10)

or

 1 Si  ri 1    S0 



1 1 

, i  N,

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

(11)

Mechanisms of Organizing

127

where S0 is a solution to the equation

 1 H  S0 1    S0 

1 1

, H   rj .

(12)

j N

Apparently, equation (12) possesses a unique solution S 0* > 1. We show that S 0* > H; if H > 1, this follows from 1

1 1  1    1 .  H Thus, the mechanism of joint financing attracts more financial resources of the agents than any mechanism of direct financing by the agents. Indeed, under direct financing agent i gains the maximal profit for the amount of financing Si = ri. And the total attraction of financial resources of the agents in the case of direct financing equals H. It seems interesting to estimate the ratio u = S0 / H depending on the parameter . Making the change of variables S0 = u H in (12), one obtains the following equation for u: 1

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

1 1  u 1    1.  uH 

(13)

The analysis indicates that u increases for growing . Hence, the effect of the mechanism of joint financing is an increasing function of the parameter . To proceed, study the problem of optimal mechanism of joint financing in the linear case on the set of all mechanisms

Si  i (S )  , iN .  S j

(14)

j N

The agent’s profit is then given by

   Si   i ( Si )  Si   i ( S )   ai Si   Si  . S j    jN   The equilibrium request satisfies the system of equations

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

(15)

Dmitry Novikov

128

 Si 1

 S

 1  ai , i  N .

(16)

j

j N

Here we also postulate the hypothesis of weak contagion. The system (16) yields 1

1  Si   1  ai S 0 ( )  1 , where S0 ( )    

 S j

follows from the equation

j N



 1   1 n S ( )   S ( )  1  a j   1 .   j 1

  n We have S0 ( )    1  a j   1   j 1 

  1

. And finally,

1

Si  

1  ai   1 .   1  ai   1 j N

Consequently,

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

total

financing

of

projects

by

all

agents

constitutes

 1  a  i

S0  

the 1  1

jN



 1  ai  1

.

jN

For identical agents ( ai  a , i  N ), the expression takes the simplified form

S0 

 (1  a)

. In other words, growing  increases the total amount of financing. Therefore,

the optimal mechanism of joint financing in medias res corresponds to a rank-order tournament, where the resources are first allocated to an agent submitting the maximal request. The performed analysis has not covered a practically important constraint: an agent obtains the amount of financing being not greater than his or her request. Taking this constraint into account and analyzing the case of non-identical agents seem complicated and require further research.

4.2. COUNTER-EXPENSIVE MECHANISMS In this section we consider a certain class of financial mechanisms that allow a principal to control monopolistic agents.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Organizing

129

Counter-expensive mechanisms stimulate each agent to increase the efficiency of his or her activity as much as possible (i.e., to perform a corresponding work with high quality and minimal costs). Clearly, in the case of many (almost) homogeneous agents in the market, a competition among them would not enable a specific agent to overrate the cost and sale prices for a product. Under existing monopolists, one should apply special control mechanisms ensuring unbeneficial results from overrated costs. Counter-expensive mechanisms are based on the following idea. Assume that the agent’s goal function depends on variables of two types, viz., the parameters chosen by the agent (e.g., the costs) and the parameters defined by a principal (e.g., profitability norms, pricing coefficients, etc.). The principal’s problem lies in choosing control mechanisms (parameters of the second type) such that the agent’s goal function possesses required properties (e.g., increases or decreases with respect to appropriate parameters of the second type). As an example, let us consider the design of a pricing counter-expensive mechanism. The costs of a product manufactured by an agent constitutes C = S + a,

(1)

where a stands for labor expenditures and S designate material costs (including raw materials, equipment depreciation, etc.). The product price is given by P = (1 + ) С,

(2)

with  representing the profitability norm.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

The agent’s profit takes the form

 = P – С =  С.

(3)

Note that the conditions (1)–(3) are written for unit of the product. These expressions hold true for any product output under a fixed profit rate. Imagine that the principal has a certain “device” measuring the minimal costs (MC) required to manufacture unit of the product; consequently, the pricing problem is easily solved. However, only an agent knows the MC; due to his or her activity, an agent may report a cost price exceeding the MC (indeed, under a fixed profitability norm , an agent is interested in overrating the cost price). In other words, the described mechanism does not possess the counter-expensive property. Ensuring the latter is possible, e.g., by making the profitability norm dependent on the efficiency of the agent’s activity. Yet, how should the efficiency of an agent be interpreted? Suppose that the product manufactured by an agent is characterized by the costs С (established by the agent) and by the effect l (defined by a customer or by the principal). This might be a material or an intellectual product, a service. Obviously, the efficiency must be an increasing function of the effect and a decreasing function of the cost price. An elementary function with such properties is the efficiency

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

130 g= l

C

.

(4)

Set the profitability norm as a certain function of the efficiency:  =  (g); find the relationship  () ensuring the counter-expensive property to the corresponding mechanism. Accordingly, the agent’s profit (3) must go down as the cost price rises:

d  0. dC

(5)

At the same time, the product price (2) must grow with respect to the cost price:

dP  0. dC

(6)

The formulas (5)–(6) are said to be the counter-expensive conditions. They can be rewritten as the following constraints:

0 g

d (g)   ( g )  1. dg

(7)

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

A product whose effect equals the costs brings no profits. Thus, imposing the entry condition  (1) = 0, one derives the following relationship ensuring the counter-expensive property of the pricing mechanism [10, 12]: g

(g)  g  1

h( x ) dx , x2

(8)

where h (x) means an arbitrary function taking the values from (0; 1). The closer is h (x) to zero, the stronger is the impact of decreased costs on price reduction (and the weaker is their influence on profit growth). Inversely, the closer is h (x) to the unity, the weaker is the impact of decreased costs on price reduction (the stronger is their influence on profit growth). The principal should choose a proper function in each specific case. Thus, we have considered a certain counter-expensive mechanism (in fact, a pricing counter-expensive mechanism). Let us discuss other possible counter-expensive mechanisms. The presented arguments are valid for planned indicators (profits, bonus funds, etc.). The actual incomes being defined by the above norms, the cost tendencies remain the same. Eliminating them requires introducing a specific norm of assignments to the bonus fund (from the above-plan profits). By choosing appropriate norms, one may avoid the necessity of extending the product line (under the same cost price but different relations between labor and material costs).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Organizing

131

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Assume that the payment fund includes the wage fund and the bonus fund ( , where  specifies a certain coefficient). Even for a fixed coefficient , the counter-expensive property may be guaranteed by setting a variable coefficient  =  (l / C). In many cases of cost reduction, labor and material resources saved can be used for output expansion, i.e., for gaining additional profits. The requirement of cost reduction for labor payments (the latter go up as the costs decrease) is rather strong. Actually, the following is important to create the counter-expensive effect. Decreasing costs increase not the whole payment fund, but labor remuneration for those employees that have achieved the above level of cost reduction. It is possible to design a control mechanism for cost reduction, where efficient agents are intolerant to the agents demonstrating low efficiency [5]. The rank-order tournaments discussed in Chapter 3 are remarkable for that a natural competition amplifies the counter-expensive properties of the mechanism. One would face an opinion that rank-order tournaments (e.g., for contract negotiation) represent an alternative to the counter-expensive mechanisms studied in this section. This is not true. In fact, rank-order tournaments are efficient in the case of “almost identical competitors” (and may fail for monopolists–see Sections 3.2 and 3.5). Thus, rank-order tournaments and counter-expensive mechanisms (intended for monopolistic agents) are rather mutually complementary than exclusive. The latter play the “anti-monopolist role,” being used for existing monopolists (and being “switched off” under efficient operation of rank-order tournaments). Consider the following illustrative example. Suppose that a principal organizes a rank-order tournament among m agents for a project with the effect L. Denote by Сi the costs of work performed by agent i. Assume that the agents are interested in profit maximization; consider the following pricing procedure which ensures cost saving: Pi = (1 +  (gi)) Ci, gi = L / Ci, i  1, m .

(9)

Let xi be the guaranteed profitability norm for agent i (e.g., an agent may negotiate other contracts gaining profits not smaller than xi per unit costs). Apparently, agent i benefits from performing the work, if the price exceeds the quantity Аi = (1 + xi) Ci. We will believe that i (gi) > xi, i  1, m , viz., concluding the contract with the principal is beneficial for all agents. Imagine a single monopolistic agent exists; consequently, the contract will be concluded at the price P1. A competition takes place among several monopolistic agents. Assume that the agents are sorted in the ascending order of Ai: A1  A2  ...  Am. One would easily demonstrate that the tournament winner is, in fact, agent 1 with the minimal price {Ai} defined by P* = min (P1, А2).

(10)

Indeed, if P1  А2, the rest agents benefit nothing from the contract with the price P* (agent 1 appears a monopolist and the counter-expensive “component” of the mechanism “switches on”). In a somewhat complicated case, when a rank-order tournament is organized

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

132

for several projects, the equilibrium contract prices of the winners are determined by analogy to formula (10) (see Section 4.6).

4.3. MECHANISMS OF COST-BENEFIT ANALYSIS

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

In project management, organizing or restructuring of industrial enterprises, there exists the necessity to choose a set of projects to-be-implemented for gaining the maximal effect (of course, under imposed constraints). Consider the mechanism of cost-benefit analysis using the following example. Let a set of possible projects be defined; their parameters are provided by Table 4.2. Renumber the projects for assigning index 1 to the most efficient one (index 2 to the second efficient project, and so on). Subsequently, combine the sorted projects in Table 4.3; note that additional columns with cumulative costs and cumulative effects are included in the table. Thus, Table 4.3 with cumulative values of the effect and costs (and with the projects sorted in the ascending order of their efficiencies) serves for the cost-benefit analysis. The corresponding curve is shown in Figure 4.1. The cost-effect relationship possesses a remarkable property, i.e., it defines the maximal effect yielded by the given set of projects under fixed financing. The actual effect may be smaller due to discrete nature of the projects. Indeed, suppose there are 140 units of financial resources; then the first two projects are not implementable simultaneously (as they require 160 units of the resources). An optimal choice lies in implementing the second and third projects leading to the total effect of 380 units (evidently, smaller than the effect of 480 units indicated by Figure 4.1). Table 4.2. Parameters of the projects Project no. 1 2 3 4

The costs S 40 100 50 60

The effect Q 80 300 50 240

The efficiency g=Q/S 2 3 1 4

Table 4.3. The projects sorted by the efficiency Q / S Project no. 1 2 3 4

The costs S 60 100 40 50

The effect Q 240 300 80 50

The cumulative costs 60 160 200 250

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

The cumulative effect 240 540 620 670

Mechanisms of Organizing

133

Of course, in the case when each project is implementable partially (with a proportional reduction in the costs and effect), the figure would demonstrate the real effect under any level of the costs (see discrete and continuous tournaments in Sections 3.5 and 3.2). For plotting an actual relationship between the costs and effect, one should solve the knapsack problem [60, 62] (see Appendix 2) by setting different amounts of financing R 240 x1 + 300 x2 + 80 x3 + 50 x4  max , xi {0;1}

subject to the constraint 60 x1 + 100 x2 + 40 x3 + 50 x4  R. Dynamic programming [60, 62] represents an efficient technique for solving the posed problem under different R. To apply the technique, first choose a coordinate system such that axis 1 (axis 2) corresponds to the projects (to the amount of financing, respectively)–see Figure 4.2. Axis 1 includes numbers of the projects (1–4). Next, draw two arcs from the origin, viz., a horizontal arc to the point (1, 0) and a slanting arc to the point (1, 60) (here 60 is the amount of financing for project 1). The first arc means that project 1 is out of financing, while the second one corresponds to the opposite situation. Now, use the both points ((1, 0) and (1, 60)) to draw two arcs for project 2. As the result, one obtains four points: (2, 0), (2, 60), (2, 100), and (2, 160); they correspond to four feasible options for projects 1–2. Note just three points appear if these projects require an identical amount of financing. Proceeding in the described way, construct the network shown by Figure 4.2. 700

The effect

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

670 620

600

540

500 400 300 240

200 100 The costs 0

50

100

150160

Figure 4.1. The cost-effect relationship.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

200

250

Dmitry Novikov

134 250

S

[670]

210

[590] [620] [430 ]

[620]

200 190

[540 ]

160

[540 ]

150

[540 ] [370] [380 ]

[380 ]

140

110

[300]

100

[290] [300] [130]

[320]

90

60

[240]

[240]

[240]

50

[240] [50] [80]

[80]

40

[0] 1

[0] 2

[0]

[0] 3

4

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Figure 4.2. Dynamic programming.

Clearly, a path in the network from the origin (0, 0) to a terminal node represents a certain set of projects. And vice versa, any set of projects possesses a definite path in the network linking the origin to a terminal node. A coordinate on axis 2 specifies the amount of financing for a corresponding set of projects. Let the length of horizontal arcs be 0, and make the length of slanting arcs equal to the effects of the corresponding projects. Consequently, the length of a path linking the origin to a terminal node defines the total effect of the corresponding set of projects. Hence, the maximal path from the origin (0, 0) to the point (4, S) describes the set of projects yielding the maximal effect (among all sets of projects requiring the total amount of financing of S units). Thus, one obtains optimal sets of projects under any amounts of financing. The analysis of the derived solutions (see Figure 4.2) discovers the following paradox. For instance, under the amount of financing of 100 units, the ensured effect constitutes 300 units; yet, increasing the amount by 10 units reduces the effect by 10 units (down to 290 units). One easily observes similar phenomenon by comparing the effects yielded by the amounts 200 and 210 units, 140 and 150 units, and so on. The irony is that any sensible person would claim the effect increases for a greater amount of financing (naturally, for an optimal set of projects).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Organizing

135

Table 4.4. The cost-benefit analysis The amount of financing The effect

40 80

60 240

100 300

140 380

160 540

200 620

250 670

The effect 670 620 540

380 300 240

80 The costs 40

60

100

140

160

200

250

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Figure 4.3. The cost-effect relationship (taking into account discrete nature of the problem).

The stated paradox takes place due to discrete nature of the problem. Evidently, the options violating the monotonicity (the so-called paradoxical options) should not be considered. To proceed, fill in Table 4.4 with the values of the maximal effect derived for different amounts of financing. The corresponding curve is demonstrated in Figure 4.3. Here the thin line indicates the previous relationship between the costs and effect (see Figure 4.1). The cost-effect relationship being available, one may treat the problems of attracting additional investments (e.g., crediting). For instance, assume there are 90 units of the financial resources, and the interest rate of a bank makes up 300%. Which amount of credit would lead to the maximal effect? The curve in Figure 4.3 illustrates that we should consider four options, viz., the amounts of credit being equal to 10, 70, 110, and 160 units. In the case of 10-unit credit, the additional effect is 300 – 240 = 60 units (the resulting efficiency constitutes 600% and exceeds the interest rate). Hence, being credited turns out reasonable. Next, 70-unit credit would ensure the additional effect of 540 – 240 = 300 units; the efficiency is 430% (again, exceeding the interest rate). A similar picture is observed for 110-unit credit (the additional effect makes up 620 – 240 = 380 units and the corresponding efficiency 345% is higher than the interest rate). Finally, crediting by 160 units yields the additional effect of 670 – 240 = 430 units; the

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

136

resulting efficiency 281% turns out smaller than the interest rate (the same story takes place for the 50-unit credit). Thus, the optimal credit is 70 units ensuring the effect of 540 units; the interest-free effect equals 540 – 370 = 330 units. The cost-effect relationship characterizes the potential of a project (an enterprise, etc.) in the sense of a corresponding criterion; one may define the minimal amount of financing to attain the formulated goals. And vice versa, under limited financing, it is possible to evaluate the maximal effect with respect to a given criterion. For instance, one has to ensure the effect of 600 units under the above criterion. For the given set of projects, this requires (at least) 200 units of the financial resources (according to the figure, the effect is 620 units). Yet, reducing the amount of financing decreases the effect down to 540 units, and the goal is not achieved. Just 150 units being available, the maximal feasible effect constitutes 380 units (attaining the posed goal requires merely 140 units of the resources).

4.4. MECHANISMS OF SELF-FINANCING A top manager of an organization (or a project manager) faces the problem of production costs (project costs) minimization. Suppose that a manufacturing technique or a project

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

includes n operations with defined costs

i in1 . Thus, the total costs make up

n

   i . i 1

Note that  is independent of the execution order of the above operations. Assume that at the beginning of the project a principal possesses financial resources R0 such that R0  . This amount is sufficient to perform all operations in any feasible order. However, if R0 < , there is an immediate problem to design a mechanism of self-financing; the latter defines an optimal feasible execution order of the operations such that certain operations are partially financed by the incomes gained by execution of the previous operations. Let operation i be described by the tuple (i, i, i), where i  0 stands for the income from operation i, and i means its duration. In the sequel, we will distinguish between profitable (i  i) and unprofitable (i < i) operations. The initial fund being deficient, some operations may be financed using the incomes from the performed operations. Thus, an ideal situation is when a project or a set of operations appears completely autonomous (i.e., self-financing covers the total costs and attracting external sources is unnecessary). For simplicity, suppose there exist no technological constraints imposed on the execution order of the operations and an arbitrary number of operations may run parallel. Denote by ti  0 the starting time of operation i, and by R the amount of external resources. The principal may obtain interest-free credits for any amount and at any time instant (discounting takes no place). For the time instant t, the financial balance is given by n

n

i 1

i 1

f (t )  R0  R   i I (t  ti )    i I (t  ti   i ) ,

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

(1)

Mechanisms of Organizing

137

1, t  ti is the indicator function. 0, t  t i

where I (t  t i )  

Evidently, making the execution of operations possible requires a nonnegative financial balance for any time instants, viz., a feasible balance satisfies the condition f (t)    t  [0, T] (T is the total duration of the operations or project). The framework of the stated model leads to a series of optimization problems. For instance, one may pose the problem of choosing the execution order of operations (i.e., time instants to start them) by minimizing the amount of external resources:

 R  min {t i } .   f (t )  0,t  0

(2)

It is possible to consider the minimization problem for the project duration

T  max {ti   i } involving only internal resources and a fixed share of external resources: i 1, n

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

T  min {t i } .   R  const , f (t )  0, t  0

(3)

Note that, under successive execution of operations, project completion time is independent of the execution order. Therefore, various statements may be employed. All optimization problems lie in finding an optimal execution order of operations (i.e., an optimal mechanism of self-financing). In the case of discounting, by analogy to (3) one can maximize the terminal (discounted) profit, etc. The existing technological constraints can be also added to the problems (2)–(3). Still, there exist no universal and efficient tools to solve the problems belonging to the considered class of network scheduling and planning problems (NSP) [19, 20]. Since the number of feasible options (execution orders) is finite, exhaustive search may be involved. However, even under a sufficiently small number of operations (tens of operations), exhaustive search appears extremely time-consuming. Thus, for solving the network scheduling problems, one often uses the method of goal-directed search, the method of branches and boundaries, and others. As an example, let us consider the following heuristic algorithm to solve the problem (3). 1. Define all combinations of operations that can be started at zero time instant (i.e., are feasible in the sense of the budget constraint). 2. For each feasible combination, after completion of a certain operation define unperformed operations that can be started next. If such operations do not exist, wait for completion of the subsequent operation. This procedure repeats until all operations are completed and/or none can be started.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

138

Steps 1-2 yield all feasible combinations meeting the balance constraint (in fact, a tree of combinations). Dangling vertices may include the ones corresponding to execution not of all operations. By comparing the duration of the options (dangling vertices) that correspond to execution of all project operations, one obtains solution to the problem (3), notably, the options with the minimal duration. Generally, the described algorithm is less time-consuming than exhaustive search; indeed, unfeasible options are eliminated immediately, and the corresponding subtrees are not analyzed. Other heuristic algorithms can be proposed for numerical solution of the problem (3) (their performance would depend on the values of the initial parameters). Analytical methods for obtaining the optimal solution exist merely for the following problem (2). Consider a graph with (n+1) nodes, where nodes 1, 2, ... , n correspond to operations, and node 0 represents zero operation. Suppose that project implementation starts with zero node, its costs and income equal 0 (0 = 0). Let  = (0, i1, i2, ..., in, 0) be an arbitrary Hamilton contour, i.e., a contour passing all node of the graph exactly one time [23, 54] (see the basics of graph theory in Appendix 2). Denote by M j (  ) 

  j

k 1

ik



  ik 1 the total length of the first j arcs in the contour .

An inbound arc to node i ( i  1, n ) requires the costs i, while an outbound arc from node i corresponds to the income i. The studied model admits parallel execution of all operations (there are no technological constraints imposed on the execution order). Hence, the minimal external resources are attracted in the case of successive execution of operations (the project n

duration makes T   i ). Thus, the graph is complete and symmetrical. Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

i 1

As the result, the problem has been reduced to finding an optimal execution order of operations, minimizing the amount of external resources. Successive execution of all operations (none of operations is repeated) is described by a certain Hamilton contour. Let the arc length ij stand for the difference between the costs of operation j and the income from operation i, i.e., ij = j – i (see Figure 4.4). Obviously, the resulting graph is pseudopotential. Indeed, any Hamilton contour corresponds to execution of all operations. Irrespective of the summation order of arc lengths,



one obtains the invariant (contour-independent) quantity   





n

   . Then M () is the i 1

i

j

net income (with the minus sign) gained by execution of the first (j – 1) operations in the contour  and by starting operation j.

– i

i

i

– j

Figure 4.4. A segment of a Hamilton contour.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

j

j

Mechanisms of Organizing

139

On the other hand, Mj () may be interpreted as the deficient internal resources for implementing operation j (in the contour ). If Mj () > 0, exactly this amount should be borrowed. In the case Mj ()  0, the internal resources of the principal are sufficient to perform operation j. Now, assume that the principal’s problem consists in finding an execution order of operations, minimizing the maximal amount of a single loan (under no internal resources, i.e., R0 = 0). Formally, this problem is expressed as follows. Choose a Hamilton contour  with the minimal value

M    max M j   . j 1, n

The stated problem is solved according to the following theorem: there exists an optimal solution to the problem M()  min , 

(4)

such that first are the nodes with i  0 in the ascending order of i, followed by the nodes with i  0 in the descending order of i (Appendix 2); here i = i – i is the “profit” of operation i. Set Mmin = min M (), and denote by  = (0, i1, i2, ... , in, 0) the optimal Hamilton 

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

contour (a solution to the problem (4)). Then the following inequalities hold true:

M min   i1  M min   i1   i2  , M min   i1   i2  i3  ............................ M min   i .....  i   i 1 n 1 n 

(5)

k    M min  max   i1 , max   ik 1    i j  . 1 k  n  j 1  

(6)

and

The system of inequalities (5) and formula (6) admit the following interpretation. The first inequality claims that the minimal external resources attracted exceed the costs of operation executed first. Naturally, we have supposed that the amount of internal resources is zero (otherwise, Mmin should be proportionally decreased). Hence, operation 1 requires the costs  i1 , since none of the operations has been performed before (the incomes gained by the

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

140

previous operations are zero). The second inequality states that the costs  i2 to implement operation 2 are smaller than the borrowed amount Mmin plus the income  i1 from operation 1 (and so on for all operations). Thus, the optimal solution has the following structure:  

sort the profitable operations (such that i  0) in the ascending order of the costs i and add them in the Hamilton contour; add the unprofitable operations (such that i  0) to the above contour in the descending order of the incomes i.

Therefore, optimal solution is to execute first the profitable operations in the ascending order of the costs (starting with the cheapest ones) and then the unprofitable operations in the descending order of the income (starting with the ones yielding the maximal income). Accordingly, the minimal amount of external resources attracted is given by (6). In practice, this means the following. At the very least, one has to borrow (a) the costs of operation 1 (if the income gained by this operation and successive operations is enough to execute the rest operations or the borrow does not exceed  i1 ) or (b) the maximal costs among the rest operations due to deficient internal resources to perform them.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

4.5. MECHANISMS OF INSURANCE Before proceeding to the mechanisms of insurance, let us describe a way of accounting human attitude towards risks. Suppose that a certain individual is offered to invest money with a great profitability (but under high risks). Assume that p stands for the probability of zero income, and (1 – p) is the probability of the income x. Evidently, the expected income constitutes Ex = (1 – p) x. Put the following question. What amount x0 is the individual willing to pay for participating in such lottery (see Appendix 3)? For convenience, individuals are traditionally divided into three groups, namely, 1) risk-neutral individuals, who agree to participate in the lottery for the expected win, i.e., x0 = (1 – p) x; 2) risk-averse individuals, who are going to pay for the lottery a strictly smaller sum than the expected win, i.e., x0 < (1 – p) x; 3) risk-seeking individuals, being ready to participate in the lottery even if the expected win does not exceed their payment, i.e., x0 > (1 – p) x. Examples of the curves x0 (x) for risk-neutral, risk-seeking and risk-averse individuals are provided by Figure 4.5. Recall that utility (see Appendix 3) represents a numerical characteristic of human preferences on a set of options depending on random parameters. Introduce the following notation: x is an option (e.g., a lottery win), u() designates the utility function defined on a set of options. Consequently, risk-neutral individuals have linear utility functions (u′ = Const > 0, u′′ = 0, utility is evaluated up to a monotonous linear transformation), while risk-seeking

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Organizing

141

and risk-averse ones possess convex utility functions (u′ > 0, u′′ > 0) and concave utility functions (u′ > 0, u′′ < 0), respectively.

x0 risk-seeking individuals risk-neutral individuals (x0 = (1 – p) x)

risk-averse individuals

x 0

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Figure 4.5. The “payment-expected utility” curves.

Graphical interpretation of the utility functions of individuals having different attitude towards risks motivates the following example. Imagine that an individual possesses a certain sum of money M0; he or she is offered to participate in a lottery, where the individual wins or looses the sum M with equal probabilities. For the linear utility function u(x) = x, the utility increase as the result of win u1 = M equals (by the absolute value) the utility decrease as the result of loss u2 = M, i.e., the individual is risk-neutral. Under a concave utility function, the utility increase u1 as the result of win (again, by the absolute value) is strictly smaller than the utility decrease u2 in the case of the same loss; thus, an individual with such utility function would avoid risks (and not participate in the lottery). Similarly, a risk-seeking individual (with a convex utility function) has the utility increase (as the result of win) exceeding the utility decrease in the case of loss. Hence, the shape of utility function reflects the attitude towards risks. The following facts are well-known (and proved by numerous investigators). Cash lotteries and risky financial transactions are intended for risk-seeking individuals. As a rule, insurants are risk-averse individuals; by “transferring” their risks to an insurer, they obtain a greater utility than just expected costs compensation, lost income and so on. Generally, insurers are risk-neutral (their risks are decreased by aggregation and diversification of many small risks). Let us analyze several properties of insurance mechanisms, resulting from active behavior of insurants (agents) and/or an insurer (a principal). Insurance serves for risks rearrangement. Imagine several economic subjects face a small risk of a contingency causing considerable losses; consequently, the subjects benefit from “combining their efforts,” i.e., establishing a compensation fund for losses (partial compensation is often used). A “founder” could be economic subjects (mutual insurance with the minimal commercial component), a state (state insurance) or private insurance companies (commercial insurance). A contingency (an insured event) represents a nondeterministic parameter. Even under a known probability distribution (despite involving the expected values in insurance models),

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

142

ruin probability for an insurer in the case of a few homogeneous insurants is higher than in the case of many such insurants. Indeed, increasing stability of an insurance portfolio for a greater number of insurants is a basis of insurance business. Clearly, under a risk-neutral insurant and insurer, insurance makes no sense; an insurant pays to an insurance fund exactly the sum he or she obtains in the case of a contingency (the requirement of compulsory full compensation of losses can be violated, and other procedures of evaluating insurance fees should be used). We give an illustrative example. Consider a set of insurants N = {1, 2, ..., n} having independent contingencies with probabilities {pi}. Hence, one, two,…, or n contingencies may take place. Denote by Hi the income of insurant i in a favorable situation (the income makes zero in the case of a contingency). Let ri be the insurance fee and hi be the insurance indemnity. Finally, pi means the probability of a contingency, and ci are the costs. For insurant i, the expected value of the goal function is

f i  (1  pi ) H i  pi hi  ci  ri , i  N. ~

The insurer obtains the sum R =

(1)

n

 ri

to the insurance fund and pays on the average

i 1 n

R   pi hi . Find out the requirements to-be-satisfied by a mechanism of insurance.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

i 1

1. An insurance scheme must not motivate an insurant to “promote” the occurrence of a contingency (e.g., an insurance indemnity in the case of fire does not exceed the cost of a burnt object). Hence, in a favorable situation the goal function of an insurant must be greater than in a contingency: hi  Hi, i  N. This constraint reflects the moral hazard property (to-be-accounted for in mechanisms of insurance). Really, changing the behavior is natural for humans after avoiding risks (i.e., after “shifting the load to someone’s arms”). For instance, a man insured against car theft is less concerned with car security; similarly, a man insured against summer cottage fire would hardly purchase new fire extinguishers, and so on. The second property of insurance mechanisms lies in the adverse selection problem (potential insurants may possess information unavailable to insurers). An example is when insurance against accidents is beneficial rather to an absent-minded person than to an accurate and careful one. 2. Insurance must be reasonable for an insurant:

ri  pi hi , i  N. (the total balance condition below is weaker). 3. Require that the goal functions of insurants are nonnegative for any situation: Hi – сi – ri  0, hi – сi – ri  0, i  N. 4. Insurance must be reasonable for an insurer:

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Organizing n

n

i 1

i 1

143

 ri   hi pi  0 .

(2)

The above condition implies that the expected insurance indemnities must not exceed the insurance fees. Nevertheless, this does not guarantee security against insurer’s ruin. The fourth constraint may be supplemented with the condition that the probability of indemnities exceeding the insurance fund must not be greater than a given small threshold. Note that zero in the right-hand side of the inequality corresponds to mutual insurance (zero load for net premium). In the case of commercial insurance, an insurer must provide means for his or her activity (i.e., obtain a certain positive expected income). In the considered class, the domain of feasible mechanisms of insurance (see Conditions 1–4) is often empty. Even if not, risks rearrangement (insurance) among risk-neutral subjects makes no sense. Study the case of risk-averse insurants. Consider the model with a single insurer (an agent) and a single insurant (a principal). Suppose the former is risk-averse with a strictly increasing continuously differentiable concave utility function u (), and the latter is risk-neutral with a linear utility function. Assume that the insurant may obtain two income values x  R1: 0 < x1 < x2 (the corresponding probabilities make up (1 – p) and p , where p  [0; 1]). In other words, the probability of a contingency (when the insurant gains a smaller income) equals (1 – p). The principal’s expected utility takes the form

Φ  r  h (1  p) ,

(3)

where r  0 is an insurance fee, and h  0 designates an insurance indemnity. Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

x1  x1  r  h (in The insurance contract being signed, the insurant obtains the income ~

x2  x2  r (otherwise). the case of the contingency), or the income ~ The

expected

utility

U  u( x1 )(1  p )  u( x2 ) p ; ~ U  u~ x1 (1  p )  u( ~ x2 ) p .

of

the

under

insurant the

without

insurance

the

contract,

contract it

is

constitutes defined

by

Suppose that the principal concludes the insurance contract only if it guarantees a certain nonnegative utility H: Ф = H > 0 (actually, this is the participation condition). In the case of noncommercial insurance, the expected utility of the insurer equals zero: H = 0. On the other hand, commercial insurance is remarkable for a strictly positive expected utility of the insurer. Within the stated model, the insurance contract is described by the tuple {h, r, H ; x1, x2, p, u ()}, where x1, x2, p, u () are internal parameters of the insurant, while h, r, H

x1 and ~ x2 ) are the parameters of the insurance scheme chosen by the insurer. (equivalently, ~

A feasible insurance contract represents a set of nonnegative values {h, r, H} such that Ф  H and insurance is beneficial to the insurant; viz., a feasible insurance contract appears beneficial both to the insurant and insurer. This implies that, under the insurance contract

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

144

offered by the insurer, the expected utility of the insurant is not smaller than in the absence of the contract. Now, find the constraints to-be-imposed on the parameters of the insurance contract–the range of feasible values (h, H) making insurance beneficial to the insurant. Substitute Ф = H in the principal’s goal function to express the insurance fee through the insurance indemnity and the expected income of the insurer. Thus, one obtains

~ x1  x1  p h  H ,

(4)

~ x2  x2  (1  p ) h  H .

(5)

The expected incomes of the insurant are Ex  (1  p ) x1  px2 (without the insurance

x  (1  p ) ~ x1  p~ x2 (under the insurance contract). contract) and E~

x  Ex  H . Introduce the following functions (if x = x1 – x2 = 0 or h = Obviously, E~ x, the problem is degenerate): U x  

u x2   u x1  x  u x1  x2  u x2  x1 , x2  x1

x  x1 , x2 ;

u ~x2   u ~x1  x  u ~x1  ~x2  u ~x2  ~x1 , x  ~x , ~x ; ~ U x   1 2 ~ x2  ~ x1





Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

x' ( p)  max x  R1 u  x   U Ex   u 1 U  , with u–1() being the inverse function of the insurant’s utility function. Since Ex  [x1; x2], then the concavity of the utility function implies that  p  [0; 1]: x′(p)  [x1; Ex]. Actually, under x = Ex, U(x) is the expected utility of the insurant from participation in the lottery with the options x1, x2 (the corresponding probabilities are (1 – p)

~

and р). If x = E~ x , then U (x) means the expected utility of the insurant from participation in

x1 , ~ x2 (the probabilities are (1 – p) and р). the lottery with the options ~

The quantity u = u (x) – U (x)  0 may be interpreted as a bonus for risks (in the units of utility). It defines the minimal guaranteed payments to the insurant such that the latter turns out indifferent (in the sense of the expected utility) between participation in the lottery and gaining the income Ex directly. The positive sign of u is caused by risk aversion of the insurant. For a risk-neutral insurant, the above bonus vanishes. If the insurant seeks for risks (i.e., has a convex utility function), then similar arguments lead to the following conclusion. The bonus for risks is nonpositive–a risk-seeking insurant is ready to pay for participation in the lottery (e.g., a differential measure of the attitude towards risks can be the logarithmic derivative of the utility function). Thus, x′(p) is an action being equivalent (in the sense of the expected utility) to participation in the lottery (see Figure 4.6). For the insurant, the condition of beneficial insurance contract takes the form

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Organizing

145

~ U E~ x   U Ex  .

(6)

Formula (6), accompanied with Ф  H, provides the feasibility criterion for the insurance contract. However, using it in optimal insurance contract design seems difficult (the constraints imposed on the mechanism parameters may be rather cumbersome). Hence, below we suggest (simple and constructive) sufficient conditions with a clear interpretation.

u(x)

~ U ( x)

U(x) x 0

x1

~ x1

x’(p)

x’’

Ex

~ x2

x2

Figure 4.6. The insurant’s utility and expected utility.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

The properties of concave functions imply that, in the case of commercial insurance, the system of inequalities

x1  x' ( p)  ~ x1  Ex  ~ x2

(7)

is sufficient for validity of (6). Noncommercial insurance being considered, it suffices to require that

x1  ~ x1  Ex  ~ x2  x2 .

(8)

First, study the elementary case of noncommercial insurance. Under H = 0, one obtains

E~ x = Ex. The rest conditions of the system (8) are met for any mechanism (except the moral x1  ~ x2 one hazard case, when a contingency is beneficial to the insurant, and to guarantee ~ should require h  x). The benefit of commercial insurance for the insurant can be substantiated without the system (7)–(8). The condition (6) holds true; indeed, regardless of the insurance indemnity, the following estimate is valid due to the concave function u ():

[u( x1  ph )  u( x1 )](1  p )  u( x2  h(1  p ))  u( x2 ) p   p(1  p )h[u( x1  ph )  u( x2  h(1  p ))]  0. Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

146

Therefore, within the framework of the stated model, noncommercial insurance is always beneficial to a risk-neutral or risk-averse insurant. This agrees with intuitive idea of insurance as risks rearrangement; under a mutually beneficial mechanism of noncommercial insurance, an insurant transfers some risks to an insurer (this is beneficial to the both, since the latter is risk-averse, while the former is risk-neutral). Evaluate the most beneficial insurance indemnity (according to the insurant). The

~

analysis of U h  yields that (despite the fact that r = h (1 – p) and the insurance fee grows

for increasing insurance indemnity), the optimal value h coincides with the maximal possible x1  ~ x2  E~ x  Ex , and the insurant actually eliminates the one– x. Moreover, ~

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

uncertainty, gaining the expected utility u (Ex). Apparently, u (Ex)  E u (x), i.e., insurance is beneficial to the insurant, while the insurer turns out indifferent to the insurance contract. The following properties of the above mechanism of noncommercial insurance deserve mentioning. The mechanism parameters (constraints and optimal values) are independent of the insurant’s utility function. The mechanism parameters are defined by  x only (and not by the separate incomes). The insurance indemnity does not exceed possible losses  x caused by the contingency. In the limit case of the corresponding deterministic model, one obtains: if  x = 0, then h = r = 0; if p = 0, then h = r =  x; finally, if p = 1, then h =  x, r = 0 (yet, the insurance indemnity is paid with zero probability). For a fixed insurance indemnity, the insurance fee increases with respect to the contingency probability. For a fixed contingency probability, the insurance fee increases with respect to the insurance indemnity. Under a riskneutral insurant, insurance (as risks rearrangement with a risk-neutral principal) appears meaningless. Indeed, the insurant’s expected utility possesses the same value irrespective of the insurance indemnity. Now, consider a mechanism of commercial insurance. Then the system of inequalities (7) enables establishing constraints to-be-imposed on the insurance indemnity depending on the

x1 , ~ x1  Ex , expected income of the insurer. Taking into account the conditions x1  ~ Ex  ~ x2 , one obtains H  p h,

(9)

H  p [h –  x],

(10)

H  (1 – p) [ x – h].

(11)

Formulas (10)-(11) imply h   x.

(12)

x2 < x2. Moreover, the constraints (9)–(12) are Thus, the moral hazard is avoided and ~

x1 (see (7), as well). Note this condition is violated in supplemented with x1  x'  p   ~ Figure 4.6.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Organizing

147

For a linear utility function of the insurant, we have x' ( p )  Ex and the expression (7)

x1  Ex . Due to (12), this leads to H  0; i.e., commercial holds true only if x'  p   ~

insurance is impossible in the case of a risk-neutral insurant (risks rearrangement yields no income). It was shown in [10] that assigning certain boundaries to the mechanism parameters is optimal for the insurer (in the sense of maximal efficiency as the expected utility). This statement has the following reasoning. x1 and ~ x 2 yield The definitions of ~

Ф  p ( x2  ~ x2 )  (1  p )( ~ x1  x1 ) . x1 and ~ x 2 ; the Evidently, the efficiency of the mechanism  is a monotonic function of ~ smaller are the values of these parameters, the higher is the efficiency. On the other hand, the minimal possible values are determined by (7). Thus, it is sufficient to choose the parameters satisfying the formulas

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

~ x1  x' ( p) , ~ x2  Ex .

(13)

Recall the conditions (7) are sufficient. A mechanism meeting (13) appears feasible, yet does not guarantee the maximal possible expected utility of the insurer on the set of all feasible mechanisms (beneficial to the insurer). In practice, formulas (13) correspond to the situation when the insurant is offered a new lottery, where his or her expected utility from minimal possible income exceeds the expected income in the original lottery. Clearly, this is beneficial to the insurant. At the same time, the insurer gains a nonnegative expected utility (a positive expected utility if p  0, p  1, x  0). Generally, the derived estimate could be refined. In other words, using the conditions (13) simplifies the analysis and allows for evaluating the mechanism parameters without complex computations. But simplicity causes possible losses of the efficiency. To illustrate the mentioned ideas, let us study a special case; in the contingency, the insurant’s income equals zero, while the insurance indemnity constitutes x2 (i.e., x1 = 0, h = x2). Denote by  the insurance rate. It is determined by the net premium 0 and insurance load :  = 0 (1 + ). The equivalence principle implies that 0 = 1 – p. For the insurant, the condition of beneficial insurance contract yields the following estimate of the maximal insurance load:

max =

px2  u 1 ( pu ( x2 )) . (1  p) x2

(14)

(the insurer strives for maximizing the load). Clearly, max increases with respect to p and x2 and is concave with respect to x2. Practical interpretations seem evident, as well. If the insurant is risk-neutral, then max = 0 (i.e., the insurer benefits nothing from the insurance

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

148

contract). On the other part, max is strictly positive for a strictly concave utility function of the insurant. For instance, if u(x) = x , it follows from (14) that max = p. The performed analysis of insurance mechanisms demonstrates that the benefit of risks rearrangement is subject to different attitude of the insurant and insurer towards risks. A riskaverse behavior of the insurant seems natural. Therefore, we have to find out why the insurer may be risk-neutral. What are the distinctions of the insurance mechanisms in multi-agent (multi-insurant) models? Let an OS consist of n insurants (index i  1, n corresponds to the number of an agent, n

i.e., an insurant). The total insurance fee of the agents makes up

 r , while the total i 1

i

n

expected insurance indemnity equals

 1  p  h . i 1

i

i

The problem of optimal insurance

contract lies in evaluating a feasible set {ri, hi} maximizing the principal’s expected utility n

Ф   [ pi ( x2 i  ~ x2 i )  (1  pi )( ~ x1i  x1i )] , i 1

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

xi , ri  x2i  ~ x2i . where hi  xi  ~ As is generally known, insurance is beneficial under many insurants; there are several reasons. First, an increasing number of insurants reduces the ruin probability of an insurer (in addition to the expected utility, one should also analyze the second moments–the goal functions and constraints of the mechanism may vary). Second, even a risk-averse insurant may benefit from an insurance contract. We elucidate this below. Suppose there exist n homogeneous insurants, and an insurer possesses the same (strictly concave) utility function as the insurants. If n = 1, nobody benefits from the insurance contract (risks rearrangement among agents with an identical attitude towards risks is pointless). The models considered above indicate that insurance is beneficial when the insurant and insurer have different bonuses for risks. As n grows, under a strictly concave utility function of an insurer, his or her bonus for risks decreases (yet, being fixed for each insurant). Naturally, the system of events (feasible outcomes) gets complicated in comparison with the single-agent case. In other words, risks rearrangement between two agents is mutually beneficial if one of them has “a less concave” utility function than the other. The models of mutual insurance are described in [10]. Let us briefly discuss the basic approaches and results. Consider n insurants. For each insurant, the result of activity is a certain random parameter with two values (corresponding to a favorable situation and a contingency, respectively). The probability of the contingency for insurant i constitutes pi and is known to an “insurer” (this could be a union of insurants–accordingly, all the probabilities is a common knowledge among insurants participating in mutual insurance). Note this model is directly extended to the case of any finite number of possible results of insurants’ activity. For simplicity, assume that a contingency may take place for a certain insurant only.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Organizing

149

In the case of the contingency, insurant i requires the insurance indemnity hi (e.g., the costs of recovery work and compensations to the third parties as the result of damages caused by an enterprise accident). Assume that merely insurant i knows the quantities hi. Designing an insurance mechanism, one has to involve certain estimates of the quantities {hi} (recovered using indirect information–e.g., the results of an ecological expertise or available statistical data) or the messages {si} reported by the insurants. Complete guaranteed coverage of possible damage being required, the reserve max {hi} must be provided. Since {hi} are unknown, the i

reserve (safety fund) makes up max {si}. i

Consider the goal functions of the insurants. Insurant i yields the income Hi and pays the insurance fee ri (s), where s = (s1, ... , sn) stands for the insurants’ message vector. In a favorable situation, the insurant incurs the costs Ci; on the other hand, in the case of the contingency, he or she has the costs (Ci + hi) and obtains the insurance indemnity si. Thus, the expected value of the goal function of insurant i is fi = Hi – ri (s) – Ci + pi (si – hi), i  N = {1, 2, ..., n}.

(15)

Let the insurer use the following procedure of insurance fee evaluation:

ri ( s ) 

( pi si ) n

 (s j 1

j

R, i  N,

(16)

pj)

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

i.e., each insurant pays to the insurance fund a sum being proportional to his or her request (

s

n

 r (s)  R ,  i  N: r i 1

i

i

(s) increases with respect to si). Obviously, under a fixed

opponents’ message profile s–i = (s1, s2, ... , si–1, si+1, ... , sn), the maximum value of (pi si – ri

si  max{s j } . Clearly, truth-telling is generally not a (s)) with respect to si is attained by ~ ji

Nash equilibrium. Moreover, any game outcome, where all agents report identical requests, forms an equilibrium. Formula (16) can be substituted by ri (s) = pi si. Then the insurant’s goal function does not depend on s and (due to the hypothesis of benevolence) he or she reports si = ri, i  N. Hence, each insurant pays to the insurance fund a fee being equal to the expected deficit of the financial resources. However, the total fee can be smaller that the necessary indemnities; i.e., there may exist insurant j such that h j 

n

 p h . This situation should be taken into i 1

i i

account by accurate handling of the expected values. To conclude this section, we make the following remark. The basic “technical” difficulties in the analysis of insurance mechanisms arise due to a nonlinear utility function of an insurant. At the same time, exactly this nonlinearity (reflecting his or her risk-averse property) makes insurance possible and mutually beneficial to an insurant and insurer.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

150

Therefore, to simplify the exposition, let us mention possible ways to account for the riskaverse property without utility functions. For this, introduce a risk bonus in the insurant’s goal function; it would reflect the value of insurance indemnity gained in the case of a contingency. Suppose that g forms the component of the insurant’s goal function being independent of random events. Next, Q are his or her additional costs incurred by a contingency, and h(h) stands for the “value” of the insurance indemnity. Then the expected value of the insurant’s goal function is rewritten as Ef = g – r + p (h(h) – Q), where p indicates the contingency probability. The insurant benefits from the insurance contract if p h(h)  r.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

It follows from the equivalence principle that the load for the net premium makes up (h(h) – h). Hence, the insurance contract is beneficial to the insurer provided that h(h)  h. For instance, set h(h) = h e, where   0 is a constant characterizing the risk-averse property of the insurant (for a risk-neutral insurant,  = 0). Consequently, under small , the Taylor expansion formula yields h(h)  h +  h, i.e.,  can be interpreted as the maximal load for the net premium. Finally, we emphasize that insurance may play not just a “compensating” role, but also a “preventive” role; for a corresponding theoretical background see [39].

4.6. MECHANISMS OF PRODUCTION CYCLE OPTIMIZATION Duration of a production cycle exerts an appreciable impact on the efficiency of a production process and on the amount of circulating assets required. Generally, production cycle optimization is included in the development program of an enterprise as a key task. Consider the problem of optimal coordinated planning for production cycle optimization. A certain production process may be represented as a technological network whose nodes correspond to shop floors (or work cells), while arcs reflect necessary technology of the process. Denote by i the reduction in duration of the production process in critical shop floor i (lying on the critical path in the network [20, 54], see also Appendix 2). Then the total reduction in duration of the production cycle is defined by the sum of reductions in the lengths of critical operations. Note here we assume that after the reduction in the lengths of critical operations the same path remains critical. In two pages below we will refuse this assumption and consider the general case. Study the problem of production cycle reduction by a given value  (in the sense of cycle duration). First, we describe a specific case when a technological network represents a successive chain of n shop floors. Each shop floor (an agent) elaborates and submits to the strategic development department (a principal) a list of actions for production cycle optimization. In

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Organizing

151

the aggregated form, these actions can be expressed by the relationship Si (i) between the costs required to decrease the production cycle on the value i. Consider two mechanisms to solve the posed problem. Mechanism 1. The action plan of production cycle reduction by the value  is defined by n

solving the problem

n

 S    min under the constraint  i 1

i

i

i 1

i

  . Suppose that  i is an

optimal solution to the above problem. Then shop floor i receives (a) the planned value

 i of



production cycle reduction and (b) the amount of financing Si (  i ) to perform the corresponding actions. Mechanism 2. The amount of financing to perform the optimization actions is directly proportional to the value i of production cycle reduction for shop floor i: Si =  i (note  means the amount of financing to perform the reduction by a unit value–see Sections 2.7 and 3.4). To calculate the action plan and estimate , each shop floor submits to the strategic development department a possible value of production cycle optimization in the shop floor (depending on the parameter ). Let i = i () be the value of production cycle optimization suggested by the shop floor under the amount of financing  i. The strategic development department determines  and the planned value of production n

cycle optimization using the condition

     ; i.e., this department finds the minimal i 1

i

value * satisfying the above constraint. Next, each shop floor receives the planned value

 i   i   of production cycle optimization and the corresponding amount of financing

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

 i . To analyze the comparative efficiency of these mechanisms, let us take the CobbDouglas production functions: Si  i  

1

 i ri1 , i  1, n,   1 , where the parameter ri  characterizes the technological efficiency for the actions of production cycle optimization. Suppose that for each shop floor the goal function represents the difference between the amount of financing (received for implementing the actions of production cycle optimization) and the actual amount of financing being necessary for these actions. Evidently, the both mechanisms lead to identical overestimation of the allocated amount of financing (as against the necessary amount); hence, in the above sense the mechanisms possess equivalent efficiency levels. However, a substantial advantage of mechanism 2 lies in the following. It stimulates truth-telling regarding the necessary amount of financing, i.e., makes up a fair play mechanism (see Chapter 3). This property is crucial for developing the corporate culture of an enterprise (fiduciary relations among shop floors and departments). Therefore, the analysis has demonstrated certain benefits of mechanism 2; indeed, it results in the same amount of financing, but ensures such an important property as truthtelling by the agents. Hence, we consider mechanism 2 in the case of an arbitrary technological network. Suppose that all shop floors have reported the relationships i = i (), i  1, n . Let T0 be the length of the critical path. We use the following algorithm to solve the problem.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

152

 Step 1. Evaluate 0 by the formula     S 

 1

(see also Section 3.4); here S is the sum

of the estimates si of operations in the critical path, and ti1 = ti – i (0). Step 2. Find the length of the critical path provided that the corresponding operations have the durations ti1. Denote by T1 the length, and by 1 the critical path. If T1 > T0 – , then compute a new value 1 by the above formula with  = T(1) – T0 + , where T(1) indicates the length of the path 1 under the initial durations of the operations {ti}, and S is the sum of the estimates si of the agents in the path 1. Note that 1 > 0. Find the critical path 2 and its length T(2) under the durations ti2 = ti – i (2) and repeat the procedure. Recall the number of feasible paths in the network is finite; thus, at finitely many steps one obtains a minimal value * such that the critical path length in the network makes (T0 – ) under the durations ti – i (*) for operations of the path  k. Now, for all shop floors it is necessary to define the plans i of production cycle optimization (the durations must satisfy the constraints ti = ti – i (*)  ti – i  ti.) As an optimization criterion, this stage of the algorithm involves the amount of financing n

 

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

i 1



i

. The stated problem is a particular case of the well-known network cost optimization

problem [20]. We have discussed the problem of optimal coordinated planning for a production cycle. Other important classes of problems include commercial cycle planning and logistic cycle planning. Notably, the ultimate financial result (profit-making) directly depends on transfer of products from a manufacturer to a customer (including transportation, warehousing, and sales). The commercial cycle is a sequence of different operations. As a rule, commercial cycle planning should take into account numerous factors such as shipping costs, storage costs, sales costs, cycle duration (production-to-sales), sale income (including barter) and various risks. These factors are interconnected. For instance, by increasing the costs one may reduce the cycle and risks or stimulate a greater demand for a product, etc.

4.7. MECHANISMS OF ASSIGNMENT2 In Section 3.5 we have analyzed rank-order tournaments, where competitors are sorted by a certain criterion. In complex tournaments (e.g., choosing executors for project operations), each agent may pretend to implement different operations. Let Aij be the real minimal price such that agent i agrees to perform operation j, and Sij be the price for the operation declared by agent i (clearly, Sij  Aij). A principal (project manager) has to assign all operations to minimize the total cost of their implementation. Assume that each agent agrees to perform only a single operation. To formalize the principal’s decisionmaking problem, set xij = 1 if operation j is assigned to agent i and xij = 0 otherwise. Then the assignment problem for operations (with m = n) can be expressed in the following form:

2

This section was written with the assistance of Prof. V.N. Burkov, Dr. Sci. (Tech.).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Organizing

153

  xij Sij  min  i, j   xij  1, j  1, m ,  i  x  1, i  1, n ij  j

(1)

where n is the number of agents, and m means the number of operations. Accordingly, for operation j the costs are q j 

n

x S i 1

ij

ij

.

In fact, several rank-order tournaments interlace here (by the number of operations); they are related by the condition that an agent may win only a single tournament (i.e., may perform merely a single operation). The analysis of such rank-order tournament essentially depends on the relationship between the number of operations and the number of agents. One would easily show that a Nash equilibrium outcome corresponds to the assignment of operations which minimizes the total objective costs

C   xij Ai j .

(2)

i, j







Proof. Suppose that {Sij } is an equilibrium outcome. Assume that xij = 1. Set i = Sij – Aij = qj – Aij. Note that if Sik – Aik > i, then the agent would decrease Sik, striving for operation k and the maximal gain. Such tendency preserves until Sik = Aik + i. On the other hand, if Sik < Aik + i, then increasing Sik up to Aik + i does not change the assignment of operations. Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.



Hence, solution of the problem (1) with {Sij } is equivalent to the corresponding solution of this problem under Sij = Aij + i, i , j  1, n . Finally, it seems natural to believe that all agents without assigned operations would report the minimal estimates Sij = Aij (striving to obtain, at least, a certain operation). Hence, the assignment minimizing

( A

ij

 i ) xij definitely

i, j

minimizes

A x

ij ij

. However, the operations may be assigned not at the minimal prices Aij

i, j

(choose sufficiently great values i). This completes the proof. First, consider the case when the numbers of agents and operations coincide. Let S = {Sij} be a certain outcome (a set of prices suggested by the agents), and xij (S) corresponds to the solution of the assignment problem. If an agent increases the prices of all operations at the same value Sij = Sij + i, j  1, n , then the solution of the assignment problem remains unchanged (yet, the agent obtains the same operation at a higher price). Therefore, the tendency of price growth is immediate. When does it finish? Majorize the price of each operation by a certain bound Lj (the so-called limit price). Apparently, (at least) for a single operation an agent suggests the limit price.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

154

Suppose that the agents are renumbered such that (in the optimal solution of the assignment problem) operation i is assigned to agent i provided that Sij = Aij; consequently, qi 0

= Sii. Choose the initial prices qi0 = Li and the initial estimates Sij = Lj ( i, j  1, n ). Next, correct the estimates and prices:

Sij  min {Li , qi  Aij  Aii } , j  i,

(3)

Sii = qi = min {Li, min Sji}.

(4)

j i

Clearly, this procedure is finite and yields the equilibrium estimates {Sij } and equilibrium prices qi = S ii , i  1, n . Of crucial importance is that the procedure starts from the maximal (limit) prices. Moreover, at least a single operation is assigned at the limit price. Thus, the case m = n would hardly be a rank-order tournament. Rather, it belongs to the monopolistic way of financing (the more so if each agent specializes in a certain operation, e.g., agent i specializes in operation i). Example 4.1. Set Li = L; Aii =  < L; Aij = L, j  i. Clearly, for all i, j the equilibrium 

outcome is Sij = L. The corresponding equilibrium solution of the assignment problem takes 

the form xii = 1; xij = 0, j  i; qi* = L, i  1, n . The efficiency of the rank-order tournament (as the ratio of the minimal costs of all operations Smin = n  to their costs in the equilibrium

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

outcome S = L n) equals K 

S min a   1 if  Фk – Ф0. Accordingly, the solution to the optimization problem

 A

ij

  i  xij  min would not coincide with that of the problem

i, j

A x

ij ij

 min .

i, j

Thus, in an equilibrium outcome the condition k  Фk – Ф0 holds true. Since the agents are 

interested in increasing k, then in an equilibrium outcome we have k = Фk – Ф0 and Sij = Aij + Фi – Ф0, Фm+1 = Ф0. The efficiency of the rank-order tournament in the case n = m + 1 is described by the formula:

K

Ф m

 Ф  m  1Ф i 1

i

.

0

All values Фi, i  1, n , are defined using the minimal prices Aij. Hence, the efficiency of the rank-order tournament depends only on the minimal prices (and not on the limit ones) under sufficiently large limit prices. Example 4.3. Within the framework of Example 4.1, add an agent willing to perform operation 1 or operation 2 (as identically beneficial to him or her): A31 = A32 = . Suppose that  <  < L. In this case, Ф0 = 2 , Ф1 = Ф2 =  + , 1 = 2 =  –  and the efficiency of the rankorder tournament makes

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

K

2   , 2 

while the prices for the both operations coincide: q1* = q2* = . Example 4.4. Now, under the conditions of Example 4.2, add an agent with the following

 15 10    parameters: A31 = 40, A32 = 20. One obtains that A =  25 15  , Ф0 = 30, Ф1 = 45, Ф2 = 35,  40 20    1 = 15, 2 = 5. The equilibrium outcome is given by

S11 = 30, S12 =25,   S 21 = 30, S 22 =20,   = 40, S 32 = 20. S 31





The resulting assignment of operations is x11 = x22 = 1, and xij = 0 for the rest i, j. Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Organizing

157

Therefore, agent 1 receives operation 1 at the price q1* = 30, and agent 2 is assigned operation 2 at the price q2* = 20. The efficiency of the rank-order tournament turns out K =

30 = 0.6 (i.e., it has been improved by approximately 4.2 times as against the previous case 50 1 K = ). 7 These examples show that adding just a single agent may completely change the efficiency of the rank-order tournament. The above efficiency attains the maximum under equal competitors: Aij = Aj for all

i  1, n ; consequently, Фk = Ф0, k = 0 (all operations are assigned at the minimal prices Aj, i  1, m ). Thus, increasing the number of participants in the rank-order tournament generally improves (at least, does not reduce) the efficiency of the tournament. If n > m + 1, the rank-order tournament is analyzed similarly to the previous case. However, computations get bulky as n grows. For instance, for n = m + 2 one has to consider

Cm2 problems (as the result of substituting any two agents i, j that have obtained operations for two agents without assigned operations in an equilibrium outcome). Denote

Фij  min 

x

s k i , j

ks

Aks

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

under the constraints

  xks  1, s  1, m k  i , j m  xks  1, k  i , j.  s 1 Then any Pareto-optimal solution of the system

 i   j  Фij  Ф0 , i , j  1, m ,  0   i  Фi  Ф0 , i  1, m , determines an equilibrium outcome. The efficiency of the rank-order tournament can be estimated by introducing max = max



i

. The efficiency constitutes K 

i

Example 4.5. Consider Example 4.4 and add agent 4:

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Ф0 . Ф0   max

Dmitry Novikov

158 15   25 A 40   20 

10   15  . 20   40 

One obtains Ф1 = 35, Ф2 = 30, Ф12 = 40, Ф0 = 30, 1  35 – 30 = 5, 2  30 – 30 = 0. Accordingly, the equilibrium outcome makes up S11 = 20, S21 = 25, S31 = 40, S41 = 20; S21 = 15, S22 = 15, S32 = 20, S42 = 40.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Here there exist two options of assigning operations. In the first one, operation 1 is allocated to agent 1, and operation 2 is given to agent 2. The second option lies in assigning operation 1 to agent 4, and operation 2 to agent 1. In the case of four agents, the efficiency of the rank-order tournament is improved up to K = 0.84 > 0.6. Therefore, in the present section we have discussed the mechanisms of assignment; they may serve for efficient solution of optimal staff problems in OS (e.g., choosing project executors).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Chapter 5

MECHANISMS OF CONTROLLING This chapter provides a description to mechanisms of controlling. In this class, one may emphasize such mechanisms of data acquisition and data processing in a controlled system as integrated rating mechanisms (Section 5.1), mechanisms of consent (Section 5.2) and multichannel mechanisms (Section 5.3). In addition, this class includes certain mechanisms of operational control intended for the principal’s “real time” correction of control actions depending on the states of a controlled system and external disturbances; in particular, we describe certain mechanisms of contract renegotiation (Section 5.4).

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

5.1. INTEGRATED RATING MECHANISMS To choose efficient control actions (beginning with the goal-setting stage and concluding with the stage of operational control), a principal must have sufficient information on the behavior of controlled subjects, including the results of their activity. In complex (multiagent, multi-level, multi-criteria) organizational systems, it seems rational to involve integrated rating mechanisms because of limited capabilities of the principal to process huge amounts of information or due to the absence of detailed information. Such mechanisms enable performing convolution of different indicators, i.e., aggregation of information regarding the results of activity of specific system elements (agents). Generally, large-scale organizational systems (OS) with numerous agents possess a complex hierarchical structure. For the whole system, the activity result has a nontrivial dependence on the actions of the agents. How should a successful functioning of a system be interpreted? What assessment criteria (indexes) should be used for proper control? As a rule, successful functioning of OS requires solving several problems (ensuring successful operation of lower-level subsystems). By-turn, solving these problems is reduced to solving special subproblems, and so on. Sequential analysis of the system’s problem structure yields a decomposition tree known as the goal tree. The root node represents an aggregated quality indicator for the system operation, while dangling nodes correspond to the quality indicators for operation of specific structural units, agents, etc. The attainability level of a certain subgoal (a node of the goal tree) will be assessed using a discrete scale. Consider an illustrative example. Suppose a project lies in the development of an educational institution (EI). As an integrated rating, let us choose “the level of EI

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

160

development,” being defined by “education quality” and “economic state of EI elements” (e.g., branches of the institution). Assume that education quality is described by the criteria called “the quality of general education” and “the quality of special education.” The corresponding1 tree is demonstrated in Figure 5.1. Hence, EI operation is modeled by the goal tree with a certain discrete scale for assessing the attainability levels (see below). At each level, evaluating ratings requires the knowledge of the rules to compute them using lower-level ratings (the lowest-level ratings are provided by experts or defined according to a preset procedure of “converting” the available quantitative or qualitative information to a discrete scale). Thus, the first problem is choosing a certain rule for the rating aggregation. For the system elements to reach specific ratings, the principal must allocate proper resources (e.g., financial funds). Hence, the following problem is immediate. Find out how the costs to reorganize the EI depend on the costs of its elements (in terms of the corresponding resources). Integrated rating mechanism. For each criterion (each node of the goal tree), let us introduce a discrete scale. Establish a correspondence between values of the scale and numbers 1, 2, ... The scale capacity can be arbitrary; choosing the number of different ratings, one may account for (a) the specifics of the EI and criterion being used and (b) the fact that large capacity of the scale increases computational complexity of optimization problems. In the example above, let us take the scale with the following four ratings: “poor” (1), “fair” (2), “good” (3), and “excellent” (4). Now, define the procedure of rating aggregation. Suppose the rating with respect to a generalized (aggregative) criterion depends on the ratings with respect to two lower-level criteria (being aggregated). Consider the matrix A = ||a (i, j)||, where a (i, j) means the ratings with respect to the aggregative criterion under ratings i and j for the criteria being aggregated. Dimensions of the matrix and the number of pairwise different elements depend on the corresponding scales. In the example considered, choose the convolution matrices shown in Figure 5.2.

The development level of EI

The economic state of EI elements

Education quality

The quality of general education

The quality of special education

Figure 5.1. The goal tree for an educational institution.

1

Note in this example all numerical values are rather arbitrary; the example does not claim to provide an exhaustive study of any real EI.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Controlling

161

4| 3 3 3 4 3| 2 2 3 4 2| 2 2 3 3  Education quality (C4) 1| 1 1 2 2 12 34 The quality of special education (C2)

The quality of general education (C1)

4| 2 3 4 4 3| 2 3 3 3 2| 1 2 2 3  1| 1 1 2 2 1 23 4 Education quality (C4)

The economic state of EI elements (C3)

The level of EI development (C)

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Figure 5.2. A convolution matrix.

For instance, rating 3 for the criterion C1 (“the quality of general education”) and rating 2 for the criterion C2 (“the quality of special education”) lead to aggregated rating 2 for the criterion C4 (“education quality”). If rating 4 has been reached for the criterion C3 (“the economic state of EI elements”), the generalized rating for the criterion C (“the level of EI development”) makes 3. The following question is immediate. Who should choose the goal tree structure and rating scales, as well as form the convolution matrices? Generally, the mentioned parameters are chosen by decision-makers (managers of the corresponding element of the EI or a board of administration) and/or by a collective of experts. On the one hand, the matrices must be easily modified for new priorities; on the other hand, such procedure inevitably includes subjectivism. To form the system of convolution matrices, one should adhere to the monotonicity rule: a rating being aggregated, resulting from an increase in (at least) a single aggregative rating, must not be smaller than the initial rating being aggregated. In other words, the ratings do not decrease as one moves from the left bottom of the matrix to the right or to the top. Therefore, we have studied two-dimensional matrices (the so-called binary convolutions), defining the aggregation procedure for two criteria. Clearly, by introducing three-dimensional matrices (or matrices of higher dimensionality), one may aggregate an arbitrary finite number of ratings. The above methods are applicable here, as well. However, exactly the binary convolutions allow for visual demonstration of the preference structures and priorities of decision-makers. Thus, no essential algorithmical distinctions of kind exist between the twodimensional and multidimensional cases. In the sequel, we will focus on the former for the reasons of simplicity. Costs analysis. The next stage is forming the rating tree. Given the goal tree and the set of convolution matrices, find a set of lower-level ratings leading to each (feasible) generalized rating. Descend the goal tree and (at each level) define what combinations of lower-level ratings generate the current rating. In the considered example, the value C = 4 results from the following rating combinations in terms of the criteria (C1, C2, C3): (4; 4; 4); (3; 4; 4); (4; 1; 4); (4; 2; 4); (4; 3; 4); (3; 3; 4); (2; 3; 4); (2; 4; 4). Build similar trees for all other values of ratings with respect to the criterion C being aggregated (generalized ratings).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

162

Dmitry Novikov

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

A set of lower-level ratings leading to a required generalized rating is said to be a development alternative (or an alternative). Suppose that one has the rating tree and the costs to achieve each lower-level rating; hence, it is possible to solve the costs minimization problem for ensuring a certain generalized (aggregative) rating. Starting from the lowest level of the rating tree (under fixed costs to achieve a specific rating), ascend the rating tree to choose the minimal costs alternative. The costs to ensure each generalized rating represent the sum of the costs to achieve the ratings being aggregated. The costs in a branch point (with several alternatives) are defined as the minimal costs of the corresponding alternatives leading to the required rating. The reverse (top-down) method serves to find the minimal costs alternative. Thus, we have described the integrated rating mechanism (including the technique of building the rating tree and defining the costs of a development alternative). Now, it seems necessary to relate these components and study their interdependency (in order to choose the best alternative in a certain sense). Recall that lower-level elements of the assessed EI have been assumed independent. Consequently, we consider one of them. Each alternative is assessed by the criteria of quality and costs. The notion of an optimal alternative appears ambiguous; the suggested model generates a class of optimization problems. Let us describe a search algorithm for feasible values of quality and costs. 1. For each feasible variation of the rating of a lower-level element in the goal tree, find the minimal costs. 2. The amount of financing being finite, choose only those combinations of the costs that meet the budget constraint. 3. For each feasible combination of financing, evaluate the total financing costs and generalized rating. As the result, one obtains a set of points in the space “quality  costs” (a feasible domain). Each point is a feasible alternative of financing. 4. In the feasible domain choose a point (a set of points) being optimal, e.g., in the sense of the maximal quality rating (depending on the posed problem). In large-scale systems, computational complexity of the above algorithm may be high; however, all possible alternatives are embraced (i.e., global optimization takes place). In practice, it is reasonable to involve certain modifications of the algorithm (taking into account the specifics of a particular problem). To illustrate the presented ideas, we consider the method of building the so-called crucial alternatives. A development alternative, where not ensuring the required rating (at least) for a single criterion causes not achieving the required value for the generalized rating, is said to be a crucial alternative. For the rating C = 4, a crucial alternative makes up (C3 = 4; C4 = 3). On the other hand, the alternatives (C1 = 4; C2 = 1) and (C1 = 2; C2 = 3) are crucial for the rating C4 = 3. Crucial alternatives possess some advantages. First, the number of feasible combinations is appreciably decreased (in the example above, one has to consider just two alternatives instead of eight). Second, using crucial alternatives eliminates “system redundancy” (a failure in an element disrupts the whole system); thus, strong reasons exist to believe that crucial alternatives lead to the minimal costs (and minimal risks). 
Crucial alternatives are especially helpful in solving the minimization problem for the amount of financing required to achieve a given generalized rating.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Mechanisms of Controlling

163

X

The level of socio-economic development

X1

X2

The level of economic development of the region X1

The level of investments

The level of social development of the region X2

X12

X21

Average wage

The level of prices

X12

X1

Environmental situation

.

Figure 5.3. A criteria tree.

X

X2 1 2 3 4

1 1 2 2 1

2 2 2 3 2

2 3 3 3 3

3 3 4 4 4 X1

1 2 3 4

1 1 2 2 1

1 2 3 3 2

2 3 3 4 3

2 3 4 4 4 X11

X2

X22

1 2 3 4

1 1 1 2 1

1 2 2 2 2

3 3 3 3 3

3 3 4 4 4 X21

Figure 5.4. Convolution matrices.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Table 5.1. Aggregation of crisp ratings Criteria Х Х1 Х2 Х11 Х12 Х21 Х22

Crisp ratings 3 4 2 4 3 2 3

To conclude this section, let us describe the mechanisms of fuzzy integrated rating on the following example. It is necessary to assess the level of socio-economic development of a certain region (the criterion X in Figure 5.3). This level is defined by the level of economic development (the criterion X1) and by the level of social development (the criterion X2). Byturn, the level of economic development depends on the level of investments (the criterion X11) and on the average wage (the criterion X12), while the level of social development is determined by the level of prices (the criterion X21) and by environmental situation (the criterion X22). Suppose that ratings with respect to each criterion possess a finite number of values (for simplicity, we use 4-point rating scale: 1–“poor,” 2–“fair,” 3–“good,” and 4–“excellent”).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

164

Under available ratings with respect to the lower-level criteria X11, X12, X21, and X22, the problem is to find the aggregated rating with respect to the criterion X. For a binary tree, criteria convolution is performed using convolution matrices, whose elements define the aggregative rating provided that the ratings with respect to the criteria being aggregated are indexes of the corresponding rows and columns (see above). In the present example, apply the convolution matrices of Figure 5.4. For X11 = 4, X12 = 3, X21 = 2, and X22 = 3, one obtains that X1 = 4, X2 = 2, and X = 3 (see Table 5.1). A possible generalization of the above integrated rating mechanism lies in that of fuzzy integrated rating; generally, ratings with respect to each criterion appear fuzzy and are aggregated by the convolution matrices. Fuzzy ratings may correspond to the vectors of experts’ degrees of confidence in appropriate crisp (non-fuzzy) ratings. The resulting aggregated rating is fuzzy, as well; yet, it provides more information than the crisp counterpart. x1 stand for the fuzzy rating with respect to the first criterion, being defined by the Let ~ membership function  ~x1 ( x1 ) on the universal set given by the corresponding scale (in the

x2 is the fuzzy rating with example considered, this set forms {1, 2, 3, 4}). By analogy, ~ respect to the second criterion, being defined by the membership function  ~x 2 ( x2 ) .

x yielded According to the aggregation principle [29] (see Appendix 4), the fuzzy rating ~ by aggregation with respect to the procedure f (,) specified by the convolution matrix is described by the membership function

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

μ~x ( x) =

sup {( x 1 ,x 2 ) | f ( x 1 ,x 2 )  x}

min {  ~x1 ( x1 ) ,  ~x 2 ( x2 ) }.

(1)

In the limit case (aggregation of crisp ratings), the aggregated rating is naturally crisp and coincides with the one generated by the crisp integrated rating mechanism with appropriate convolution matrices. Within the framework of the current example, suppose that fuzzy ratings with respect to the lower-level criteria take the values provided by Table 5.2. Applying the convolution matrices from Figure 5.4 and formula (1), one obtains the following fuzzy ratings with respect to the criteria being aggregated–see Table 5.2. Table 5.2. Aggregation of fuzzy ratings Criteria

Х Х1 Х2 Х11 Х12 Х21 Х22

Fuzzy values 1 0.00 0.00 0.20 0.00 0.00 0.20 0.00

2 0.20 0.10 0.90 0.20 0.10 0.90 0.30

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

3 0.70 0.40 0.30 0.40 1.00 0.30 0.95

4 0.30 0.70 0.10 0.70 0.40 0.10 0.40

Mechanisms of Controlling

165

1,00

0,80 X

0,60 X1

0,40

Х2

0,20

0,00

1

2

3

4

Figure 5.5. Fuzzy ratings with respect to the criteria X, X1 and X2.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Fuzzy ratings with respect to the criteria X, X1 and X2 in the present example are demonstrated by Figure 5.5. We underline simplicity of implementing the aggregation methods; viz., all tables and figures in this section have been imported from a fuzzy integrated rating mechanism realized in Microsoft Excel. By analogy to crucial alternatives in crisp integrated rating mechanisms, one may introduce fuzzy crucial alternatives. Consider a given fuzzy rating vector for a criterion being aggregated; in our example, this is the vector X = (0; 0.2; 0.7; 0.3). The minimal vectors of aggregative ratings leading to a required fuzzy rating vector are said to be crucial. In the example considered, one easily observes these are the vectors X1 = (0; 0; 0.2; 0.7) and X2 = (0.2; 0.7; 0.3; 0). A crucial alternative corresponds to the following set of the lower-level ratings: X11 = (0; 0; 0.2; 0.7), X12 = (0; 0; 0.7; 0), X21 = (0.2; 0.7; 0.3; 0), and X22 = (0; 0; 0.7; 0). The differences between the ratings given in Table 5.3 and the crucial ones may be interpreted as reserves with respect to the corresponding criteria. Thus, it is possible to pose and solve optimization problems for the reserves, costs and risks. Therefore, integrated rating mechanisms represent a flexible and efficient data processing tool for the support of managerial decisions.

5.2. MECHANISMS OF CONSENT Let us consider the mechanism of expert assessment, where the result of a collective decision is a certain allocation of financial resources among agents. The decision is made jointly by an expert commission, whose members (representatives of the agents) act as experts for assessing the necessary amounts of financing. We emphasize that it is possible to involve independent experts, as well.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

166

Obviously, each expert possesses an individual belief about allocation of the available (generally, limited) financial resources and the opinions of different experts would hardly coincide. How the collective decision is made? What should be done to avoid a situation when each expert “pulls the wrap on himself” and manipulates the information? The mechanism of making coordinated decisions under noncoinciding opinions is said to be the mechanism of consent. Consider the mechanisms of financing being widely used in practice, basing on expert assessments; their shortcomings are clear (see Chapter 3). As a rule, the sum of submitted requests exceeds the available financial resources, and the whole burden of “cutting the budgets” falls on the principal. The tendency of overstated requests takes place in the case of independent experts, as well. How can these negative features be overcome? Let us describe the mechanism of consent. The key idea consists in decomposition of the expertise procedure–expert commissions are created for related criteria, and one of them receives the basic status. Consider an example with three criteria, viz., “living standards” (C1), “environmental situation” (C2), and “social development” (C3). For instance, choose the latter to be the basic criterion. In this case, create two expert commissions–each will be responsible for a pair of the criteria. In particular, the first expert commission deals with assessing the criteria C1 and C3, while the second one estimates C2 and C3. Each expert commission elaborates a decision about relative amounts of financing for each criterion (the ratio of financial resources to-be-invested in a given criterion (C1 or C2) to that of the basic criterion (C3)). Denote by 1 и 2 the corresponding estimates. The parameter 1 (2) indicates that the amount of financing x1 (x2) for the criterion C1 (C2) must be by 1 (2) times higher as against the corresponding amount for the criterion C3. In mathematical notation, this is rewritten as 1 = x1 / x3 (2 = x2 / x3); apparently,  i  0, i  1, 2 . Using this information, the principal

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

defines the amount of financing for the criteria:

xi 

i , i  1, 3 , 1

(1)

where  = 1 + 2, 3  1. In the formula above, xi represents a share in the total amount of financing. In other words, if R units of the resources should be allocated among the criteria C1, C2 and C3, the criterion Ci receives xi R units. The suggested mechanism possesses several benefits. First, the principal accounts for opinions of the agents entering expert commissions. Second, separating the basic criterion enables skill-sharing among the agents. And finally (which is of crucial importance), the proposed mechanism of consent appears strategy-proof. We elucidate this using an example. Table 5.3 provides the true relative amounts of financing for the criteria C1, C2 (with respect to the basic criterion C3). Table 5.3. Expert commissions 1 2

Criteria К1 r11 = 3 r21 = 3

К2 r12 = 1 r22 = 4

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

К3 1 1

Mechanisms of Controlling

167

To make the picture complete, we demonstrate opinions of the experts for the criteria being not assessed by them (r12 and r21). Evidently, the experts consider their criteria to be more important than the basic one (r11 = 3 > 1, r21 = 4 > 1). Let the total amount of financing be equal to 100 units. The following allocation of financial resources results from truth-telling of the expert commissions:

3 4 x1 ( r11 , r22 )   100  37.5; x 2 ( r11 , r22 )   100  50 ; 8 8 1 x3 ( r11 , r22 )   100  12.5. 8

Note that the balance constraint is valid for any messages (x1 + x2 + x3 = R). The first expert commission believes that the resources should be allocated in the quantities

3 1 x1 (r11 , r12 )  100  60; x2 ( r11 , r12 )   100  20; 5 5

1 x3 (r11 , r12 )   100  20. 5

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

According to the second expert commission, the resources should be allocated in the quantities

3 4 x1 ( r21 , r22 )   100  37.5; x2 ( r21 , r22 )   100  50; 8 8 1 x3 ( r21 , r22 )   100  12.5. 8

In other words, the allocation of financial resources resulting from truth-telling completely coincides with opinion of the second expert commission. Yet, the first commission believes the criterion C1 should receive much more, while the criteria C2 and C3 should be given appreciably smaller and slightly more, respectively. Clearly, the first expert commission would increase the amount of financing for projects 1 and 3 (at the expense of project 2). Is it possible by data manipulation (by reporting 1  r1)? For instance, suppose that the first expert commission has reported the overrated estimate 1 = 5, leading to the allocation

x1 ( 1 , r22 ) 

5 4 100  50; x2 ( 1 , r22 )  100  40; 10 10

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

168

x3 ( 1 , r22 ) 

1  100  10. 10

Such situation would hardly satisfy representatives of the third criterion. And the first expert commission appears discontented (it prefers increasing the amount of financing for the first and third criteria). Note that exactly the “concern” for the basic criterion ensures strategy-proofness of the mechanism. Consider another variant of data manipulation. Let the first expert commission underrate the estimate (1 = 1). Then

x1 ( 1 , r22 ) 

4 1  100  16.(6); x2 ( 1 , r22 )   100  66.(6); 6 6

x3 ( 1 , r22 ) 

1  100  16.(6). 6

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Moreover, the first expert commission has increased the amount of financing for the third and second criteria (but, what is important, has reduced it for the first criterion!). We have studied the case 1 = 1, 1 = 5. Apparently, by reporting 1  r1, the first expert commission is unable to increase the amounts of financing for the first and third criteria (at the expense of the second one). Now, define the goal function for expert commission i:

 x  f i (ri , xi , x j )  min  j , j  1, 3; i  1, 2 . j r   ij 

(2)

Assume that each expert commission strives for maximizing its goal function. In the current example, the goal function of expert commission 1, f1 = min {x1/r11; x2/r12; x3/r13}, is maximized by reporting s1 = r11. The structure of the goal function (2) is such that each expert commission seeks to minimize the greatest relative deviation between the actual and "fair" amounts of financing (according to its viewpoint), i.e., to maximize the smallest ratio xj / rij. Under rather general assumptions, it is possible to show that truth-telling maximizes the goal functions (2) (i.e., it is a dominant strategy) for an arbitrary number of experts. In particular, a certain assumption (the hypothesis of sufficient interest, HSI) lies in the following. Each expert commission assesses its own criterion higher as against the true estimates provided for this criterion by other expert commissions. Equivalently, each expert commission considers its own criterion to be the most important. In the example above, this hypothesis holds true (r11 = 3 > r12 = 1; r22 = 4 > r21 = 3). Thus, if the expert commissions have the goal functions (2), the mechanism of consent is strategy-proof. With just three criteria being assessed, it is always possible to choose the basic criterion so as to satisfy the HSI. When the number of expert commissions (criteria) exceeds three, one decomposes the expert commissions into a hierarchy of "triplets." The art of a principal consists in partitioning the expert commissions into the "triplets" such that mutually interested experts enter each triplet; evidently, the described mutual interest follows from the shape of the goal function (2).
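To make the strategy-proofness claim tangible, the following minimal sketch simulates the mechanism for the example above, assuming (as reconstructed here) that the resources are allocated proportionally to the reported weight profile (s1, s2, 1) with the basic criterion's weight fixed at 1; all variable names and the grid of reports are illustrative.

```python
# A minimal sketch of the mechanism of consent from the example above, assuming
# that criterion 3 is the basic one with its weight fixed at 1; all names and
# the grid of reports are illustrative.

R = 100.0                          # total amount of resources
r1 = (3.0, 1.0, 1.0)               # true estimates of expert commission 1
r2 = (3.0, 4.0, 1.0)               # true estimates of expert commission 2

def allocate(s1, s2):
    """Allocate R proportionally to the weight profile (s1, s2, 1)."""
    total = s1 + s2 + 1.0
    return [s1 / total * R, s2 / total * R, 1.0 / total * R]

def goal(r, x):
    """Goal function (2): the smallest ratio of actual to 'fair' financing."""
    return min(x[j] / r[j] for j in range(3))

# Commission 2 reports truthfully (s2 = r2[1] = 4); scan commission 1's reports.
reports = [k / 10.0 for k in range(1, 101)]
best = max(reports, key=lambda s1: goal(r1, allocate(s1, r2[1])))
print("best report of commission 1:", best)   # 3.0, i.e., truth-telling s1 = r11
```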


5.3. MULTI-CHANNEL MECHANISMS

Recently, mechanisms of decision-making support (DMS) have gained popularity; they are remarkable for the following feature. Decisions (recommendations) are generated by "adviser" (expert) systems in several parallel channels. Such mechanisms are said to be multi-channel. The underlying reason for their high efficiency is the interaction of the channels (i.e., the interaction of the experts). How can the experts be stimulated to increase the efficiency of the suggested decisions? How should the best control decision be made using their advice? A possible approach is applying certain systems of comparative assessment of the efficiencies yielded by decisions of the channels (and providing rewards to the channels as the result of such assessment). The present section focuses on several models of multi-channel mechanisms of expertise.

Multi-channel mechanisms involving models of a controlled system. Imagine that a principal (an organizer of an expertise, a decision-maker) intends to give rewards to experts based on the efficiency of the decisions proposed by them. Naturally, the principal has to know the results of a control decision suggested by each expert. As a rule, conducting real-time experiments to study the behavior of a controlled system under different controls is impossible. Thus, a certain model of the controlled system should be used. Consider an example. Let the efficiency e of an accepted control decision U depend on parameters of the model and an external environment, i.e., the state of nature q (a priori, these parameters are unknown to the principal). Suppose that e = U − U²/(2q). If the principal chooses the decision U0 and the actual efficiency equals e0, it is possible to estimate the realized value of the unknown parameter: q = U0²/(2(U0 − e0)). Assume there exist n experts. Substitute the above estimate into the initial expression defining the efficiency. Thus, we obtain a formula for the efficiency ei of expert i (as if the control Ui proposed by the latter were used):

ei(Ui) = Ui − (Ui²/U0²)(U0 − e0),  i = 1, …, n.

How should the principal motivate the experts? Perhaps, the estimates ei(Ui) should be employed (note that if Ui = U0, then ei = e0). In other words, the higher the efficiency of the suggested decision, the greater the reward paid to the expert. Introduce the normative efficiency em = max_{i = 1, …, n} {ei}, which equals the maximal efficiency among the channels. In an elementary case, the principal's

reward depends on the efficiency e0 of the chosen decision and on the normative efficiency:

f0(e0) = α (e0 − em), if e0 ≥ em;  f0(e0) = −β (em − e0), if e0 < em;  0 ≤ α ≤ 1, β ≥ 0.


Hence, the principal’s decision being better than the most efficient decision proposed by the experts (e0  em), the principal is paid proportionally to the quantity (e0 – em). At the same time, if the efficiency e0 is smaller, then the reward appears proportional to the quantity (em – e0). The experts are paid by analogy (based on comparison of the values ei and e0, or ei and em):

 (ei  e0 ), if ei  e0 fi  ei   , 0    1,   0.   (e0  ei ), if ei  e0 However, what are the coefficients  and  in the incentive functions? Let us argue in the following way. It may happen that the principal (being able to influence the actual efficiency e0 of his or her decision U0) intentionally reduces this efficiency for modifying the estimated efficiencies of the channel (expert). When is such situation possible? In the majority of models of controlled systems, there exists a monotonic relationship between the efficiency e0 and efficiencies of the channels. Within the framework of the present example, the higher is e0, the greater is ei. If the efficiency e0 of the principal’s decision exceeds the normative one (e0  em), then the principal’s goal function f 0  (1   ) e0   em increases with respect to e0; hence, the principal is not interested in underrating e0. Some intricacies arise if e0 < em, i.e., if the principal’s decision is less efficient (in comparison with the experts). In this case,


f0 = (1 + β) e0 − β em, and the principal may be interested in reducing the efficiencies ei of the channels (accordingly, in decreasing em). In the example considered, the principal's goal function takes the form

f0 = β Um (Um/U0 − 1) + e0 (1 + β − β Um²/U0²).

If Um > U0 (e0 < em) and β is sufficiently large, the principal seeks to reduce the actual efficiency e0 (without changing U0). To eliminate the interest in such a reduction, β should be taken sufficiently small:

β ≤ U0² / (Um² − U0²).
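The following minimal sketch computes the channel efficiencies, the normative efficiency and this bound on β; the values of U0, e0 and the experts' controls are illustrative, not taken from the text.

```python
# A minimal sketch of the multi-channel incentive scheme; U0, e0 and the
# experts' controls below are illustrative assumptions.

U0, e0 = 4.0, 3.0                  # the principal's control and its actual efficiency
U = [3.0, 5.0, 6.0]                # controls suggested by the experts

def eff(u):
    """Estimated channel efficiency e_i(U_i) = U_i - (U_i/U_0)^2 (U_0 - e_0)."""
    return u - (u / U0) ** 2 * (U0 - e0)

e = [eff(u) for u in U]
em = max(e)                        # normative efficiency
Um = U[e.index(em)]                # control of the most efficient channel

alpha, beta = 0.5, 0.1             # bonus and penalty rates
f0 = alpha * (e0 - em) if e0 >= em else -beta * (em - e0)

# When Um > U0, the penalty rate must satisfy beta <= U0^2 / (Um^2 - U0^2);
# otherwise the principal gains by spoiling his or her own efficiency e0.
if Um > U0:
    print("admissible beta bound:", U0 ** 2 / (Um ** 2 - U0 ** 2))
print("channel efficiencies:", [round(x, 3) for x in e], "reward:", round(f0, 3))
```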

Suppose that the principal's decision is worse than the normative one. In this case, heavy penalties (large values of β) are not desirable from the following viewpoint. Trying to avoid "mistakes," the principal may prefer a decision proposed by the experts. Clearly, this leads to a certain "loss" in the principal's independence and initiative.

Autonomous expertise mechanisms. In the previous section, we have considered expertise mechanisms with experts being motivated based on comparison of the efficiencies of the decisions suggested by them (the efficiencies are estimated using a model of a controlled system). However, sometimes a controlled system turns out complex, and designing


an adequate model seems problematic. What should the principal do? A possible approach lies in "shifting the load onto the experts' shoulders," thus insisting on their responsibility for solving the control problem. Accordingly, the principal acquires a single coordinated decision from the experts (instead of several decisions suggested by individual experts) and uses it. Let us study conditions motivating the experts to work independently, coordinate decisions and propose the best one to the principal. Assume that the experts have to generate a decision in a specific situation. Due to different specialization, experience, etc., certain experts possess a higher qualification level in a problem domain than the others (depending on the situation of decision-making, i.e., on the set of feasible situations). Figure 5.6 shows the relationship between the efficiency ei(x) of the decisions suggested by expert i (i = 1, …, n) and the situation x. The principal would desire that in any situation the experts propose the most efficient

decision, i.e., the efficiency of the collective of experts takes the form e(x) = max_{i = 1, …, n} ei(x)

(the corresponding curve represents the envelope in Figure 5.6). Suppose that each expert knows only his or her efficiency ei(x) (and is unaware of the efficiencies of the other experts). Hence, each expert may manipulate the reported information. Moreover, imagine that all experts observe the exact situation x. How can the principal stimulate the experts to choose the most efficient decision in any situation? Consider the following mechanism. The principal tells the experts, "Each of you reports to the other experts the pair (Ui(x), ei(x)), where Ui stands for the suggested control in the situation x, and ei(x) means the efficiency of this decision (provided that in each situation expert i knows the true efficiency of a decision proposed by him or her). Then you report the most efficient decision to me, and I pay rewards proportionally to the efficiency of the proposed decision." Indeed, such a mechanism is simple: the experts jointly choose the decision for the principal (without his or her assistance). A natural question is whether the experts tell the truth. We prove that in this mechanism truth-telling is a Nash equilibrium.

Figure 5.6. The efficiencies of decisions: the curves e1(x), e2(x), …, en(x) over the set of feasible situations x; their envelope is e(x).


If all experts report true opinions (e1(x), ..., en(x)), the principal is suggested the decision of efficiency e(x) = max_{i = 1, …, n} ei(x). Since the experts' rewards are proportional to e(x), the goal function of expert i is described by

fi(e1(x), ..., en(x)) = αi + γi e(x) − |e(x) − ẽ(x)|,  i = 1, …, n,  0 < Σ_{i=1}^{n} γi ≤ 1,

where αi are constants, and ẽ(x) is the actual (realized) efficiency. Now, suppose that expert j demonstrates a strategic behavior by reporting ẽj(x) ≠ ej(x) (in fact, he or she claims that the efficiency of his or her decision in the situation x equals ẽj(x)). If ej(x) = e(x), then (as far as expert j knows the true efficiency ẽ(x) = ej(x)) by reporting ẽj(x) > ej(x) or ẽj(x) < ej(x), this expert decreases the value of his or her goal function. Imagine that ej(x) ≠ e(x), i.e., another expert (e.g., expert k) suggests a decision with a higher efficiency ek(x) = e(x) > ej(x); consequently, by reporting ẽj(x) < ej(x), expert j would not modify the final decision (and the value of his or her goal function). On the other hand, by reporting ẽj(x) > ej(x), i.e., striving for e(x) = ẽj(x), this expert would merely


reduce his or her payoff, since ẽ(x) = ej(x) < ẽj(x). Therefore, we have substantiated that truth-telling is a Nash equilibrium.

Autonomous expertise mechanisms possess the following advantages. First, they decrease the "informational load" on the principal: the latter obtains the optimal decision directly (according to the agents' viewpoint). Second, they guarantee strategy-proofness. Applying autonomous mechanisms, the principal should be certain about (a) correct identification of the situation by the experts and (b) accurate efficiency estimates provided by the agents for their decisions.

A multi-channel structure of a controlled system as a possible way to reduce uncertainty. Suppose that the control efficiency e(u) depends on an unknown parameter, the state of nature q: e(u) = u − u²/(2q). Let the principal be aware that the parameter q belongs to the segment [a; b]; in other words, there exists an uncertainty caused by lack of knowledge regarding this parameter. What control should the principal choose? Different approaches to this problem are applicable. The first approach consists in that the principal chooses control expecting the worst-case value of q (by adopting the principle of maximum guaranteed result). Indeed, if a > 0, then the worst case corresponds to q = a. By choosing u = a, the principal maximizes the efficiency in such a situation and attains the efficiency e(a) = a/2. In some cases, the described approach appears too pessimistic. Now, assume that the probability distribution of the parameter q is given. Consequently, the principal can maximize the expected value of the efficiency by a proper choice of control. For instance, if q has the uniform distribution on [a; b], then the expected value of the goal


ba  u2  b  function constitutes u  ln  . Accordingly, the maximal expected value b 2b  a   a   2 ln

a

ba is reached at u* = . b ln a


Consider situations when the principal does not know the probability distribution, or the application of the principle of maximum guaranteed result leads to a low efficiency. Then it is possible to involve the procedures of expert assessment to acquire additional information (reduce the uncertainty) concerning the parameter q. The experts possessing more information about the parameter q (than the principal does) can be asked to report their parameter estimates directly. The corresponding model of active expertise has been studied earlier (in this case, d = a, D = b, and one can design a strategy-proof mechanism). Yet, an alternative lies in using multi-channel mechanisms. Suppose that the principal knows that the efficiency of a control decision (suggested by the experts) is ei(ui) = ui − ui²/(2q). Moreover, let the principal be confident in the experts (they possess more complete information about the parameter q). Then the principal asks the experts to report not the estimates of the unknown parameter, but the control decisions the experts would choose themselves. Imagine that the principal has acquired information about {ui}, i = 1, …, n, from the experts. Now, he or she may reason as the experts do in order to understand why the latter have chosen the specific values ui. It is possible that, by choosing ui, the experts strive for maximizing their goal functions based on available information about the parameter q (i.e., ui = ui(q)). In this case, under known goal functions of the experts, the principal can use the values ui for "retrieving" the information about q. Note that when the principal's decision infringes on the agents' interests, it is necessary to account for the agents' foresight (a strategic behavior is possible). For a fixed q, the maximum of the expert's goal function is attained by u = q (more specifically, ui = qi, i = 1, …, n). Indeed, each expert may have a specific belief qi about the parameter q. Hence, the knowledge of ei(q) = q/2 (ui = qi) allows the principal to acquire information about (q1, q2, ..., qn). This additional information may assist in reducing the uncertainty and making a more efficient decision. Therefore, using a multi-channel mechanism, the principal can organize an "indirect" expertise (estimate q based on indirect information) by forecasting the experts' behavior; thus, the existing uncertainty is naturally reduced.
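A minimal sketch of such an indirect expertise follows; the experts' beliefs and the aggregation of the retrieved values by averaging are illustrative assumptions.

```python
# A minimal sketch of an "indirect" expertise: for e(u) = u - u^2/(2q), the
# control maximizing an expert's goal function under the belief q_i is
# u_i = q_i, so the principal can read the beliefs off the reported controls.
# Aggregating the retrieved beliefs by averaging is an illustrative assumption.

beliefs = [3.5, 4.0, 5.0]               # experts' private beliefs q_i
reported_controls = beliefs[:]          # u_i = q_i

q_hat = sum(reported_controls) / len(reported_controls)
u = q_hat                               # control optimal under the estimate q_hat
print("retrieved beliefs:", reported_controls, "estimate:", q_hat, "control:", u)
```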

5.4. MECHANISMS OF CONTRACT RENEGOTIATION

Making well-timed management decisions is a subject of intensive research in the theory of control in organizations. The present section illustrates the analysis and synthesis ideology for operational control mechanisms; as an example, we use the mechanisms of contract renegotiation. In practice, one often faces situations when mutually beneficial parameters of a contract become unbeneficial due to altered circumstances, external conditions, forecast errors,


planning errors, etc. Hence, a certain contracting partner (or even both) seeks to modify the parameters of the contract. Such a situation is said to be contract renegotiation [17, 57, 59]. Assume that contract renegotiation takes place only in the following case. Under a new contract, each system participant (the principal and all agents) obtains at least the same utility (value of the goal function) as under the initial contract. Notably, each participant (the principal or an agent) has the right of veto: he or she may reject the new contract (and preserve validity of the initial one) if the former ensures for him or her a strictly smaller utility than the latter. As a matter of fact, the customer generally represents the interests of the whole system (control efficiency is defined by his or her goal function, see the discussion above). Thus, the stated condition of contract renegotiation implies the following. In the case of contract renegotiation, control efficiency increases (at least, remains the same). Hence, to analyze the conditions of contract renegotiation, we have to establish conditions allowing to design a new contract (find parameters of a new contract) ensuring not smaller utilities to all system participants (taking into account the incoming information). In contract theory [57, 59], one distinguishes between contracts with obligations and contracts without obligations. In the first case, violating the conditions of a contract causes considerable penalties imposed on the corresponding participant (making such violation unbeneficial). Accordingly, analyzing the mechanisms of contract renegotiation in contracts with obligations, one should compare two situations: (a) the customer (the principal) and the executor (agent) follow the conditions of the initial contract and (b) both (!) adhere to the conditions of the new contract. On the other hand, in contracts without obligations participants may violate the conditions of the initial contract (choosing their optimal strategies according to incoming information). Below we discuss contracts with obligations. Let the customer's income function H(y, λ) and the executor's cost function c(y, r) depend on uncertain parameters (λ ≥ 0 and r > 0, respectively). Actually, λ can be interpreted as an external price of a product manufactured by the executor, and r possibly means the efficiency of the executor's activity. Suppose that ∀λ ≥ 0: H(0, λ) = 0, and ∀r > 0: c(0, r) = 0. Thus, the goal functions of the system participants are rewritten as (see incentive models in Section 2.1)

Φ(σ(·), y, λ) = H(y, λ) − σ(y),   (1)

f(σ(·), y, r) = σ(y) − c(y, r).   (2)

Suppose that the initial contract has been signed under the (actual or forecasted) values λ0 and r0. Compute the optimal action of the executor (according to the customer's viewpoint):

x*(λ0, r0) = arg max_{y ∈ A} [H(y, λ0) − c(y, r0)].   (3)

Then the optimal parameters of the initial contract² (under a compensatory incentive scheme, see Section 2.1), viz., the executor's action and the reward, make up x*(λ0, r0) and c(x*(λ0, r0), r0). The customer's utility in the initial contract is

Φ(λ0, r0) = H(x*(λ0, r0), λ0) − c(x*(λ0, r0), r0).   (4)

² Recall that (within the framework of game-theoretic models) a contract is described by the pair "an agent's action"–"a reward given by the principal."

At the same time, the executor's utility equals 0 (due to the principle of cost compensation). The actual values of the parameters λ and r may differ from the forecasted ones λ0 and r0. As the result, the actual utilities of the customer and the executor may also differ from the corresponding forecasted utilities. Introduce the following quantities:

Φ(λ0, λ, r0, r) = H(x*(λ0, r0), λ) − c(x*(λ0, r0), r0),   (5)

f(λ0, λ, r0, r) = c(x*(λ0, r0), r0) − c(x*(λ0, r0), r),   (6)

Φ0(λ, r) = H(x*(λ, r), λ) − c(x*(λ, r), r).   (7)

The expression (5) defines the customer's utility under altered conditions in the initial contract. Next, formula (6) specifies the executor's utility under altered conditions in the initial contract. And finally, (7) represents the customer's utility under new conditions (in the new contract being optimal for the altered conditions). Assume that the executor's cost function decreases monotonically with respect to r. Consider two cases. First, let r < r0. Then the executor's utility is f(λ0, λ, r0, r) < 0, and the latter benefits from contract renegotiation. On the other hand, the customer benefits from concluding the contract with the parameters (x*(λ, r); c(x*(λ, r), r)) if

Φ0(λ, r) ≥ Φ(λ0, λ, r0, r).   (8)

Second, let r > r0. Consequently, the executor's utility makes up f(λ0, λ, r0, r) > 0, and he or she benefits from contract renegotiation only when the new contract conditions ensure a utility not smaller than f(λ0, λ, r0, r). Hence, the condition of contract renegotiation for the customer is rewritten as

Φ0(λ, r) − f(λ0, λ, r0, r) ≥ Φ(λ0, λ, r0, r).   (9)

Thus, we have shown the following fact. If the executor's cost function decreases monotonically with respect to r and r < r0 (r > r0), then the condition of contract renegotiation takes the form of inequality (8) (inequality (9), respectively). Concluding the present section, let us consider an illustrative example. Choose the customer's income function H(y, λ) = λy, λ ≥ 0, and the executor's cost function c(y, r) = y²/(2r), r > 0. According to the customer's viewpoint, y0 = λ0 r0 is then the optimal action of the executor under the parameters (λ0; r0). In the initial contract, the payment equals (λ0)² r0 / 2. Moreover, the customer expects the utility Φ0(λ0; r0) = (λ0)² r0 / 2, while the executor is guaranteed zero utility. Now, consider the parameter values (λ; r); if r ≥ r0, in the initial contract the customer and the executor gain the utilities Φ(λ0; λ; r0; r) = λ0 r0 (λ − λ0/2) and f(λ0; λ; r0; r) = (λ0)² r0 (r − r0)/(2r), respectively. Yet, for r < r0, the initial contract yields a negative utility to the executor, and he or she naturally refuses to work by choosing the zero action. Suppose that the new contract is signed with the action y = λr and the payment λ²r/2; then the customer's utility makes up Φ0(λ; r) = λ²r/2, and the executor obtains zero utility. Below we analyze possible alternatives. If r < r0, the executor is indifferent to contract renegotiation: for any values of the parameter λ, he or she gains zero utility. At the same time, the principal benefits from contract renegotiation provided that Φ0(λ; r) ≥ Φ(λ0; λ; r0; r), i.e., λ²r − 2λλ0r0 + r0(λ0)² ≥ 0. In the case r > r0, we have f(λ0; λ; r0; r) = (λ0)² r0 (r − r0)/(2r) ≥ 0. Thus, the executor definitely rejects the plain new contract (preserving the initial one), unless the principal offers a contract with the reward λ²r/2 + (λ0)² r0 (r − r0)/(2r) for choosing the action y = λr. Clearly, making such an offer is always beneficial to the customer, since

Φ(λ0; λ; r0; r) = λ0 r0 (λ − λ0/2) ≤ λ²r/2 − (λ0)² r0 (r − r0)/(2r).

Therefore, if r < r0, contract renegotiation (being beneficial to both sides) takes place under the condition


λ²r − 2λλ0r0 + r0(λ0)² ≥ 0. For the initial conditions λ0 = 8 and r0 = 8, the domain of the parameters λ and r ensuring contract renegotiation is shaded in Figure 5.7. Let us summarize the outcomes of this section. We have described the model of contract renegotiation in a system with a single customer and a single executor. The derived results indicate that, contract renegotiation being feasible, it is necessary to revise the conditions of a contract. The analysis demonstrates that contract renegotiation appears efficient in a wide class of systems. Accordingly, it seems reasonable to use contract renegotiation in practice for studying the conditions of mutually beneficial cooperation. Finally, note that there also exist other common mechanisms of controlling, e.g., mechanisms of predictive self-control [12, 39].

Figure 5.7. The domain of the parameters λ and r ensuring contract renegotiation.
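The renegotiation domain of Figure 5.7 can also be checked computationally; the following sketch evaluates the derived conditions on an illustrative grid of (λ, r).

```python
# A sketch checking the renegotiation conditions of the example for
# lambda0 = 8, r0 = 8 (the domain shaded in Figure 5.7); the grid is illustrative.

lam0, r0 = 8.0, 8.0

def renegotiated(lam, r):
    """For r >= r0 renegotiation is always beneficial; for r < r0 it requires
    lam^2 r - 2 lam lam0 r0 + r0 lam0^2 >= 0."""
    if r >= r0:
        return True
    return lam ** 2 * r - 2 * lam * lam0 * r0 + r0 * lam0 ** 2 >= 0

for lam in (4.0, 8.0, 12.0):
    for r in (2.0, 6.0, 16.0):
        print(f"lambda = {lam:4.1f}, r = {r:4.1f}: {renegotiated(lam, r)}")
```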


Chapter 6


STAFF CONTROL

In the present chapter we consider models and methods of staff control in organizational systems: personnel recruitment, dismissal, etc. Beyond the scope of our study are assignment problems (agents' allocation among different positions and jobs), as well as problems of personnel development (training).

A classification of staff control problems. Let us briefly overview the statements and results obtained for staff optimization problems in OS. Most of the research focused on control in OS (see [12, 39]) proceeds from the following assumption. The staff of an organizational system consists of the set of control subjects (principals) and controlled subjects (agents). In the sequel, to be compact, we will adopt the term "staff." Suppose that one knows a solution to a control problem with a fixed staff (see Chapters 2–5). Hence, it is possible to analyze staff control problems for an OS, i.e., to find an optimal (in a certain sense) set of agents and principals to be included in the system. We list the basic approaches to staff control problems. In contract theory, investigators deal with the models of evaluating the optimal number of (generally) homogeneous employees under the incentive compatibility constraints and the reservation wage constraints [50]. Within the framework of labor economics, the basic result concerning the optimal number of employees reflects the equality of their marginal output and the marginal costs of hiring and keeping the employees. The amount of additional product (income) yielded by a firm as the result of recruiting an additional employee (a labor unit) is said to be the marginal product of labor. The marginal costs are, in fact, the principal's costs to motivate recruiting an additional employee. To maximize the profit (the difference between the principal's income and his or her incentive costs), one should modify the number of employees (increase it if the marginal income exceeds the marginal costs, or decrease it otherwise) until the marginal income coincides with the marginal costs [38, 42]. In organizational economics, the following approach to evaluating the optimal size of an organization appears widespread (see the detailed discussion and references in [40, 42, 43]). On the one hand, there exists a market as a system of exchanging property rights. On the other hand, economic agents form organizations interacting at the market. A possible explanation for existing economic organizations lies in the necessity of a compromise between the transaction costs and organizational costs (the latter are defined by the "coordination costs" in an organization, increasing as the organization grows).


The transaction costs prevent replacement of an organization by the corresponding market, while the organizational costs prevent replacement of the market by an organization. In the analysis of staff formation problems, a central idea of organizational economics is that both the transaction and organizational costs depend on the size of an organization and its structure. Hence, theoretically there exist optimal parameters of an organization, balancing the replacement tendencies discussed above (see Chapter 10). Now, let us discuss the results obtained in the theory of control in organizations. Staff formation problems were considered in [9] for project assignment. As a matter of fact, the appointment problems (where a principal is unaware of the agents' efficiencies at different positions and uses their messages) were treated by many researchers in project management (see Section 4.7). Moreover, the methods of graph theory have become popular in control problems for OS. The problems of reaching the optimal execution sequence for certain operations (e.g., production and commercial cycle optimization, logistics, etc. [39]) can also be interpreted as staff formation problems. Probably, the largest class of OS control mechanisms that can be considered as staff formation problems is represented by rank-order tournaments and auctions. Here a certain resource (alternatively, a set of jobs) is allocated among pretenders by sorting the efficiencies of their activity. Examples are direct, simple and two-stage tournaments, assignment problems (complex tournaments) and others (see Sections 3.5 and 4.7). Systematic statements of staff formation problems (not relating to the aspects of staff control–most of the known models build an OS "starting from scratch") have been discussed recently [50]. The cited work identifies three approaches to solving staff formation problems in OS based on analyzing incentive problems. The first approach consists in a "frontal attack" on all feasible combinations of potential OS participants. A benefit of this approach is guaranteed evaluation of the optimal solution, while high computational complexity makes up a drawback. The second approach involves local optimization methods (examination of all feasible staffs in a certain neighborhood of a given staff). Generally, these heuristic methods do not yield optimal solutions (thus, their guaranteed efficiency should be evaluated). Finally, the third approach lies in eliminating a fortiori inefficient combinations of agents (using the problem specifics). Computational complexity is appreciably reduced, and the exact (optimal) solution can be obtained; unfortunately, this approach is far from being always applicable (one should substantiate its applicability in a specific case). We have completed a brief overview of staff models in OS. Now, let us give a classification of staff control problems. Introduce the following notation:

N0 = {1, 2, …, n} is the actual (initial) staff of an OS, which includes n agents, |N0| = n;
N means the final staff of an OS (a solution to a staff control problem);
N′ stands for the set of potential (actual and pretending) participants of an OS (the universal set): N ⊆ N′, N0 ⊆ N′;
Δ+(N, N0) = N \ N0 represents the set of recruited agents (agents included in the OS staff);
Δ−(N, N0) = N0 \ N indicates the set of dismissed agents (agents eliminated from the OS staff);
Φ(N, N0) designates a functional mapping the initial and final staffs to a real number (staff control efficiency).


We analyze the nature of the above functional in greater detail. As has been mentioned, there exists a control problem for a fixed staff (see Figure 6.1). A solution is a set of the principal's strategies maximizing the control efficiency defined as the guaranteed value of the principal's goal function on a solution set for the game of agents (see Chapters 2–5). Staff formation problems are posed in the following way: find a feasible staff with the maximal control efficiency. Then (explicitly or implicitly) one assumes that an OS is built all over again. At the same time, the matter may concern formation of a new staff for an existing OS (staff optimization), i.e., passing from a staff N0 to a staff N. In this case, the control efficiency criterion depends both on the initial and final staffs (some dismissed employees must obtain a new job or be paid allowances, and so on). Thus, in the complete set of staff control problems in OS one may identify staff formation problems and staff optimization problems (the corresponding basis for classification consists in the presence or absence of the initial staff–see Figure 6.1). Next, staff optimization problems include staff extension problems (|N| > |N0|), staff reduction problems (|N| < |N0|) and staff substitution problems (N ≠ N0) (see Figure 6.1). Below we formulate these classes of staff control problems in OS. The staff formation problem is characterized by the absence of the initial staff (N0 = ∅):

Φ(N, ∅) → max_{N ∈ 2^{N′}}.   (1)

Figure 6.1. Staff control problems in OS: staff formation problems and staff optimization problems; the latter include recruitment and dismissal problems.

The staff optimization problem¹ (under a fixed initial staff N0) is generally posed as

Φ(N, N0) → max_{N ∈ 2^{N′}}.   (2)

The staff extension problem (also known as the recruitment problem) differs from (2) in that Δ− = ∅; moreover, upper or lower bounds can be imposed on the number of agents being included in the OS staff.

¹ Clearly, the staff optimization problem represents the general problem of staff control, while the other problems (staff formation, substitution, etc.) are its special cases. Separating out staff formation problems has been historically established.


For instance, under the initial staff of n agents and the upper bound m, the problem takes the form

Φ(N, N0) → max_{N ∈ 2^{N′}: N0 ⊆ N, |N| ≤ n + m}.   (3)

The staff reduction problem (also referred to as the dismissal problem) is stated by analogy, but the new staff must not exceed the initial one. Notably, one seeks a set Δ− ∈ 2^{N0} maximizing the efficiency provided that Δ+ = ∅. For instance, under the initial staff of n agents and the lower bound m on the number of dismissed agents, the problem is rewritten as

Φ(N, N0) → max_{N = N0 \ Δ−: |Δ−| ≥ m}.   (4)

The staff substitution problem lies in searching for the sets of dismissed and recruited employees maximizing the efficiency. For instance, under the initial staff of n agents and the number m of substituted agents, the problem is posed as

Φ(N, N0) → max_{N ∈ 2^{N′}: |Δ+| = |Δ−| = m}.   (5)

A common shortcoming of these models is that they rest on the hypothesis of complete interchangeability of agents. That the models are infrequently used in practice is due in part to this aspect. Meanwhile, any practicing manager knows that substituting a certain employee by another causes definite losses to a firm (even if both possess equivalent levels of qualification). The underlying reason is that there exist several adaptation stages of an employee to a new OS. Actually, for an employee the maximal possible action ymax (per unit time) is a function of his or her stay time t in an organization (see an example of this function in Figure 6.2).

Figure 6.2. The adaptation process: the maximal action ymax(t) as a function of the stay time t.


The function ymax(t) is a "splicing" of general logistic curves; the adaptation process is representable as the employee's training in new institutional norms (formal and informal) that exist in an OS (see Chapter 9). Note that the presence of a decreasing segment of the "learning curve" ymax(t) is not a common rule. A possible loss in functional abilities of an employee (and in the efficiency of his or her labor) results from aging and depends on additional functions vested in the employee. The growing role of human capital (including specific skills of employees) in industrial processes reduces the adequacy of the models considering employees as interchangeable "cogs in machines." Hence, analyzing the adaptation processes of employees seems an interesting direction for further research. Yet, in control theory the majority of existing models do not account for the adaptation effects. Thus, from general statements of staff control problems in OS we pass to a series of solution examples.

Examples of staff control problems. Consider a model where the optimal number of agents to be included in an OS is interpreted in the sense of informational load on the principal [50]. Example 6.1. Suppose that the incentive problem lies in allocating a wage fund (WF) R among n homogeneous (identical) agents. Let the agents' cost functions be c(y) = y²/2, and the principal's income be proportional to the sum of the agents' actions. Under a fixed WF, the relationship between the incentive efficiency and the number of agents takes the form


Φ*(n) = √(2Rn) − R.

Recall the assumptions concerning the goal functions introduced in Chapters 1–2. Moreover, assume that c′(0) = 0 and H′(y) > 0. Consequently, the principal benefits from involving as many agents as possible (motivating them to perform negligibly small actions). Indeed, in a neighborhood of the action minimizing the costs (y = 0), the marginal costs of each agent are minimal. Note that the maximum of Φ*(n) with respect to R is attained for a WF proportional to the number of agents in the OS: R* = n/2. Hence, under a fixed WF, the principal is interested in an infinitely increasing number of agents (we consider the case when the principal does not necessarily guarantee an arbitrarily small positive utility to the agents). In other words, the maximal staff appears optimal. The optimal solution changes if the principal possesses limited control (cognitive, data processing) capabilities. As a rule, the number of relations among n subordinated agents controlled by the principal is estimated as being of order 2^n. In practice, this estimate corresponds to the number of feasible coalitions among n agents. Let us account for the informational constraints through multiplying Φ*(n) by the rate 2^(−δn) (δ > 0):

Φ(n) = (√(2Rn) − R) 2^(−δn).

The maximum of Φ(n) with respect to n is achieved for n = nmax, where

√(nmax) = √(R/8) (1 + √(1 + 4/(δR ln 2))).
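The closed-form maximizer can be checked against direct enumeration; in the sketch below, R and δ are illustrative values.

```python
# A numerical sketch for Example 6.1: Phi(n) = (sqrt(2 R n) - R) * 2^(-delta n);
# the values of R and delta are illustrative.
import math

R, delta = 4.0, 0.05

def phi(n):
    return (math.sqrt(2 * R * n) - R) * 2 ** (-delta * n)

n_grid = max(range(1, 200), key=phi)

# Closed-form (continuous) maximizer reconstructed above:
# sqrt(n_max) = sqrt(R/8) * (1 + sqrt(1 + 4 / (delta * R * ln 2))).
n_max = (math.sqrt(R / 8) * (1 + math.sqrt(1 + 4 / (delta * R * math.log(2))))) ** 2
print("grid maximum:", n_grid, "closed form:", round(n_max, 2))   # 21 vs ~20.89
```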


Now, assume that for each agent in the OS the principal has to guarantee a certain minimal utility level Ū > 0 (the reservation wage constraint or the unemployment relief constraint–see Chapter 2 and [50]). Solving the problem of evaluating the optimal rewards of the agents, one obtains that under a fixed WF R the incentive efficiency (as a function of the agents' number) takes the form

Φ*(n) = √(2(R − nŪ)n) − R.

This expression is maximized by n* = R/(2Ū), i.e., the reservation wage constraint defines the optimal staff of the organizational system (the optimal size in the case of homogeneous agents). Now, consider several examples where the agents' reservation utility plays a crucial role. Example 6.2. Introduce the following assumptions. The principal's goal function is given

by H(yN) = Σ_{i∈N} yi, and for any y−i ∈ A−i the cost function ci(y) of agent i is convex with respect to yi ∈ Ai; here yN = (yi)_{i∈N}, N = {1, 2, …, n}. Making the above assumptions, we adhere to the following arguments. Imagine the principal's income function is additive, i.e., H(yN) = Σ_{i∈N} Hi(yi), where {Hi(yi)} are concave functions. By passing to H(yN) = Σ_{i∈N} yi (viz., applying a change of variables), one modifies the cost functions of the agents; still, they preserve the convexity property. This is sufficient for ensuring the existence and uniqueness of the maximum of the principal's goal function. In other words, during linearization of the principal's income function, technological links among the agents are accounted for in the separable cost functions of the agents. To proceed, let us study the staff formation problems. We will successively complicate the models, moving from an OS with separable costs of the agents to an OS with inseparable costs of the agents (recall that inseparability of the costs reflects interdependency of the agents–see Section 2.5). First, suppose that the agents' costs are separable: ci = ci(yi), i ∈ N. By solving the incentive problem (see Section 2.5), one obtains that for the staff N the optimal control efficiency makes up

Φ(N) = max_{yN ∈ AN} Σ_{i∈N} [yi − ci(yi)].   (6)

Hence, the optimal staff problem in OS is to find a staff N maximizing the function (6) on the set of nonnegative values. Under the above assumptions, it was shown in [50] that the solution to the problem (6) represents the maximal staff of the OS: N* = N′. This is due to three factors. First, in a neighborhood of the zero action the principal's income grows faster than the agents' costs. Second, the principal's income exhibits constant returns to scale of production (his or her income function is linear, and there exist no technological constraints on the number of agents performing joint activity in the OS). And third, in an equilibrium the agents gain zero utility (in the sense of their goal functions, they are indifferent to participation in the OS, entering it only because of a benevolent attitude to the principal).


Hence, we should analyze a class of models where the optimal and maximal staffs of an OS differ; consider models where one of the three stated factors is violated. Assume that in an equilibrium the principal has to guarantee to agent i the utility level Ui^max (if the agent participates in the OS) and the utility level Ui^min (otherwise), where Ui^max ≥ Ui^min, i ∈ N′.

Recall the results derived in Sections 2.1 and 2.5. Within the framework of the hypothesis of benevolence, for separable costs of the agents the quasi-compensatory incentive scheme

σi(y*, yi) = ci(yi*) + Ui^max, if yi = yi*;  σi(y*, yi) = 0, if yi ≠ yi*;  i ∈ N,   (7)

is the minimal incentive scheme implementing the action y*. Define the following quantities:

Δi* = max_{yi ∈ Ai} {yi − ci(yi) − Ui^max},  i ∈ N.   (8)

Accordingly, the principal's goal function takes the form

Φ(N) = Σ_{i∈N} Δi* − Σ_{i∈N′\N} Ui^min.


Hence, the staff N* = {i ∈ N′ | Δi* ≥ −Ui^min} is optimal. If Φ* = Φ(N*) = Σ_{i∈N*} Δi* − Σ_{i∈N′\N*} Ui^min < 0, all staffs appear infeasible. Thus, the OS staff should include only agents

whose activity yields an income (minus the costs to motivate them) exceeding the costs to compensate their exclusion from the staff. Assume the principal's goal function Φ* on this staff is strictly negative. Then the reservation wages of the agents belonging to N′ are higher than the effect from their participation in the OS.

Example 6.3. Let the agents' cost functions be defined by ci(yi) = yi²/(2ri). Consequently,

Φ(N) = Σ_{i∈N} {ri/2 − Ui^max} − Σ_{i∈N′\N} Ui^min.

First, consider the case of homogeneous agents: ri = r, Ui^max = U^max, Ui^min = U^min, i ∈ N′, U^min ≤ U^max. Then

Φ(n) = n (r/2 − U^max) − (|N′| − n) U^min,  n = 0, 1, …, |N′|.

The problem Φ(n) → max_{0 ≤ n ≤ |N′|} possesses the solution

n* = |N′|, if r ≥ 2(U^max − U^min);  n* = 0, if r < 2(U^max − U^min).

Now, address the case of six nonhomogeneous agents with the following parameters (see Table 6.1).

Table 6.1. Parameters of the agents in Example 6.3

Parameter \ i    1     2     3     4     5     6
ri              12    10     8     6     4     2
Ui^max           4     4     3     1     2     2
Ui^min           1     1     1     1     1     1
Δi*              2     1     1     2     0    −1

Compute the values of the principal's goal function under different staffs of the OS (clearly, Ui^min being identical, the agents should be included in the OS in the descending order of Δi*):

Φ*({1}) = −3, Φ*({1, 4}) = 0, Φ*({1, 2, 4}) = 2, Φ*({1, 2, 3, 4}) = 4, Φ*({1, 2, 3, 4, 5}) = 5, Φ*({1, 2, 3, 4, 5, 6}) = 5.

Therefore, the optimal staff is either the maximal staff of the OS, or agents 1–5. Moreover, the principal appears indifferent to the inclusion of agent 6 in the OS staff². Indeed, we have Δ6* = −U6^min, i.e., the losses caused by his or her participation in the OS coincide with the costs to compensate his or her exclusion from the staff. Now, assume that the "fees for participating in the OS" {Ui^max} have been cancelled, and ∀i: Ui^min = 3 (see Table 6.2).
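Before passing to the new parameters, note that the enumeration above is easily reproduced; the sketch below uses the data of Table 6.1 and the fact that, for the quadratic costs, the quantities (8) equal Δi* = ri/2 − Ui^max.

```python
# A sketch reproducing the enumeration for Table 6.1; for the cost functions
# c_i(y) = y^2 / (2 r_i), the quantities (8) equal Delta_i* = r_i/2 - U_i^max.

r     = [12, 10, 8, 6, 4, 2]
U_max = [4, 4, 3, 1, 2, 2]
U_min = [1, 1, 1, 1, 1, 1]
delta = [ri / 2 - um for ri, um in zip(r, U_max)]    # [2, 1, 1, 2, 0, -1]

def phi(staff):
    """Phi(N): sum of Delta_i* over N minus sum of U_i^min outside N."""
    return (sum(delta[i] for i in staff)
            - sum(U_min[i] for i in range(6) if i not in staff))

N_star = {i for i in range(6) if delta[i] >= -U_min[i]}
print("optimal staff (0-based):", sorted(N_star), "efficiency:", phi(N_star))
# The maximal staff and agents 1-5 both yield the efficiency 5 (agent 6 is borderline).
```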

² Perhaps, in such situations it seems reasonable to invoke the hypothesis of benevolence for the principal's attitude to agents: including an agent in the OS staff (employment) is an important motivation even under zero utility level of the agent.


Table 6.2. New parameters of the agents in Example 6.3

Parameter \ i    1     2     3     4     5     6
ri              12    10     8     6     4     2
Ui^max           0     0     0     0     0     0
Ui^min           3     3     3     3     3     3
Δi*              6     5     4     3     2     1

Evaluate the efficiency criterion for different staffs:

*({1}) = –9, *({1}{2}) = –1, *({1}{2}{3}) = 6, *({1}{2}{3}{4}) = 12, *({1}{2}{3}{4}{5}) = 17,

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

*({1}{2}{3}{4}{5}{6}) = 21. Hence, the principal benefits from including all six agents in the OS staff. Now, consider the staff formation problem in OS, provided that the principal uses a unified proportional incentive scheme with the reward rate3  < 1 (the efficiency estimates and other relevant properties of proportional incentive schemes are discussed in Section 5.1). Then the actions chosen by the agents make up y i* = i (), where i () = ci′–1(), i  N. The principal’s goal function (the difference between the linear income and the costs to motivate the agents) takes the form

 (yN) = (1 – )

 ( ) . i

(9)

i N

Apparently, ηi(α) are continuous increasing concave functions, and the function (9) appears concave as well. And so, for each fixed OS staff N there exists a unique wage rate α*(N) optimal from the principal's viewpoint. In other words, the principal's strategy "include in the OS either all agents or nobody" is optimal. Getting away from the above trivial solution, suppose each agent possesses a specific reservation wage Ūi (not the corresponding reservation utility!). An agent agrees to participate in the OS only if his or her reward exceeds the reservation wage.

³ The principal's income function is directly proportional to the actions of the agents. Thus, wage rates exceeding unity result in negative values of the principal's goal function and its decrease with respect to any feasible actions of the agents.


Thus, for agent i the participation condition is defined by

α ηi(α) ≥ Ūi,  i ∈ N.   (10)

Denote by αi a solution to the equation α ηi(α) = Ūi, i ∈ N, and sort the agents in the ascending order of αi. With the first k agents participating in the OS, the principal's goal function is

Φ(k) = (1 − αk) Σ_{i=1}^{k} ηi(αk),  k = 1, …, |N′|.   (11)

The optimal staff problem for the OS has the solution N* = {1, 2, …, k*}, where

k* = arg max_{k = 1, …, |N′|} Φ(k).   (12)

Therefore, the staff optimization problem has been reduced to sorting the agents in the ascending order of αi and finding the number k* which maximizes the principal's goal function (11). Example 6.4. Within the framework of Example 6.2, consider the agents' cost functions


ci(yi) = yi²/(2ri). Then

ηi(α) = α ri,  Φ(yN) = (1 − α) α Σ_{i∈N} ri.

The minimal wage rates such that the corresponding agents agree to participate in the OS constitute αi = √(Ūi/ri). Suppose there are just five agents pretending to be hired; their parameters are listed in Table 6.3. Consequently, k* = 4, i.e., the optimal staff of the OS consists of agents 1–4 (after appropriate renumbering as the result of sorting αi).

Table 6.3. Parameters of the agents in Example 6.4

Parameter \ i    1        2        3        4        5
ri               1        1        1        1        1
Ūi               0.6      0.7      0.75     0.8      0.9
αi               0.77     0.84     0.87     0.89     0.95
Φ(i)             0.1746   0.2733   0.3481   0.3777   0.2434
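The computations of Table 6.3 can be reproduced as follows (a sketch; the data are those of the table).

```python
# A sketch reproducing Example 6.4: the agents are sorted by the minimal
# acceptable unified rate alpha_i = sqrt(U_i / r_i), and Phi(k) of (11) is
# maximized over k.
import math

r = [1, 1, 1, 1, 1]
U = [0.6, 0.7, 0.75, 0.8, 0.9]          # reservation wages

alphas = sorted(math.sqrt(Ui / ri) for Ui, ri in zip(U, r))

def phi(k):
    """Phi(k) = (1 - alpha_k) * alpha_k * sum of r_i over the first k agents
    (eta_i(alpha) = alpha * r_i for the quadratic costs)."""
    a = alphas[k - 1]                    # the rate must satisfy agent k
    return (1 - a) * a * sum(r[:k])

k_star = max(range(1, len(r) + 1), key=phi)
print("k* =", k_star, "Phi(k*) =", round(phi(k_star), 4))   # k* = 4, ~0.3777
```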


The performed analysis of staff formation problems in a multi-agent OS with separable costs of the agents leads to the following conclusion. In this class of models, the available information can be used to sort the agents and to solve the problem of evaluating the optimal combination of agents on a set of |N′| combinations (and not on the set of all 2^{|N′|} feasible combinations!). Let us reject the assumption of separable costs. Then the optimal staff problem can be rewritten as

N* = arg max_{N ⊆ N′} Φ(N),   (13)

where

Φ(N) = max_{yN ∈ AN} {Σ_{i∈N} yi − Σ_{i∈N} ci(yN)},   (14)

provided that Φ(N*) ≥ 0⁴. We have already noted that two major intricacies arise in the problems (13). The first one is connected with high computational complexity (the maximal efficiencies must be computed and compared for numerous OS staffs). The second intricacy lies in the necessity of a constructive definition of the agents' costs depending on the OS staff and the actions of all member agents. Consider an example illustrating the above specifics. Example 6.5. Assume that the agents are homogeneous and have the following cost functions (|β| ≤ 1/n):


ci(yN) = (yi + β Σ_{j∈N\{i}} yj)² / (2r),  i ∈ N.   (15)

Assume that the principal has to guarantee a utility level Ū to each agent. Then the optimal incentive scheme is quasi-compensatory (see Sections 2.1 and 2.5), leading to the following value of the principal's goal function:

Φ(yN) = g(n) Σ_{i∈N} yi − Σ_{i∈N} ci(yN) − nŪ.   (16)

Here n = |N|, and g(n) is a factor making the principal's income a decreasing function of the number of agents in the OS staff. Evaluate the actions of the agents that are most beneficial to the principal:

y* = r g(n) / (1 + β(n − 1))².

⁴ This constraint is pointless if Φ(∅) = 0 and ∅ ⊆ N′.


Consequently, the relationship between the principal's goal function and the number of agents n is given by

Φ(n) = n g²(n) r / (2(1 + β(n − 1))²) − nŪ.   (17)

Suppose that β < 0; then for g(n) = n^(−1/2) one has

Φ(n) = r / (2(1 + β(n − 1))²) − nŪ.
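Φ(n) can then be maximized numerically, as in the following sketch; the parameter values are illustrative.

```python
# A numerical sketch for Example 6.5 with beta < 0 and g(n) = n^(-1/2), where
# Phi(n) = r / (2 (1 + beta (n - 1))^2) - n * U_bar; the parameter values are
# illustrative, and the range of n respects the constraint |beta| <= 1/n.

r, beta, U_bar = 8.0, -0.1, 2.0

def phi(n):
    return r / (2 * (1 + beta * (n - 1)) ** 2) - n * U_bar

n_bound = int(1 / abs(beta))            # |beta| <= 1/n restricts the OS size
n_star = max(range(1, n_bound + 1), key=phi)
print("optimal OS size:", n_star, "Phi =", round(phi(n_star), 3))
```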

Finally, maximizing Φ(n) (e.g., numerically, as sketched above) yields the optimal OS size.

Concluding this chapter, we study a staff optimization model in OS which uses the results of experimental investigations [50] in the field of individual labor supply strategies. Let us remind the reader of the key outcomes of these investigations. Experimental data indicate that the analysis of real curves τ(α) (the relationships between the daily working time τ desired by an agent and the time-based wage rate α) makes it possible to identify four different types of agents with individual labor supply strategies.

Type 1: the desired working time (almost) does not depend on the wage rate starting from a certain quantity α0;
Type 2: the desired working time is a monotonically increasing function of the wage rate;
Type 3: the desired working time is a monotonically decreasing function of the wage rate;
Type 4: the desired working time first increases with respect to the wage rate and then decreases.

Assume that the problem is to define the staff of an OS manufacturing a certain product. The total size of the order is R, while the unit price at the market constitutes λ. Next, a set of agents N′ operate at the labor market; these agents are able to manufacture the product with a fixed work intensity ρ. The set of potential pretenders N′ is characterized by the shares of agents of each type (see above). Let n10, n20, n30, n40 be the numbers of pretenders of types 1–4, respectively (each type corresponds to a strategy of labor supply). Evidently, n10 + n20 + n30 + n40 = |N′|. Suppose that the following parameters are known for each type of agents: the minimal reservation utility Ūi, i = 1, …, 4 (to be ensured by the principal to this type if employed), and the minimal wage rate α0 (identical for all agents' types). The staff formation problem consists in choosing a set of agents N ⊆ N′ and establishing the vector of agents' wage rates α = (α1, α2, α3, α4) so as to maximize the OS income

Φ(α, N) = λR − Σ_{i∈N} [αi τi(αi) + Ūi],   (18)

where αi stands for the wage rate of agent i (i ∈ N). Formally, the control problem is posed as

Φ(α, N) → max_{α, N}.   (19)

The staff N can be defined via the numbers of agents of each type included in the OS, i.e., N = (n1, n2, n3, n4). Clearly, ni ≤ ni0, i = 1, …, 4. Suppose that for agents of each type one knows the relationship τi(αi), i = 1, …, 4 (the desired working time as a function of the wage rate). Moreover, the definition of type 1 implies that ∀α ≥ α0: τ1(α) = τ1. Denote T = R/ρ. Then the problem (19) takes the form

(α1 τ1(α1) + Ū1) n1 + (α2 τ2(α2) + Ū2) n2 + (α3 τ3(α3) + Ū3) n3 + (α4 τ4(α4) + Ū4) n4 → min_{ni ≤ ni0, αi ≥ α0, i = 1, …, 4}   (20)

subject to the condition

n1 τ1 + τ2(α2) n2 + τ3(α3) n3 + τ4(α4) n4 ≥ T.   (21)

Sometimes the constraint (21) is supplemented with

τi(αi) ≤ 16,  i ∈ N.   (22)

This is an explicit constraint on the maximal daily working time of each agent (at least 8 hours are needed for sleeping, eating, etc.). In the control problem (20)–(22), one also seeks a set of wage rates (generally, specific for each type of agents). At the same time, there exist unified incentive schemes (UIS, see Section 2.7) with common payment conditions for all agents. In the model considered, the unified property means that the wage rate is identical for all agents. Denote it by α and rewrite the staff formation problem:

(α τ1(α) + Ū1) n1 + (α τ2(α) + Ū2) n2 + (α τ3(α) + Ū3) n3 + (α τ4(α) + Ū4) n4 → min_{ni ≤ ni0, i = 1, …, 4; α ≥ α0}   (23)

under the constraint

n1 τ1(α) + τ2(α) n2 + τ3(α) n3 + τ4(α) n4 ≥ T.   (24)

Let K be the optimal value of the goal function (20) and KUIS be the optimal value of the goal function (23). One can easily show that the inequality K ≤ KUIS holds true. The staff formation problem in the system (20)–(21) is somewhat intricate, since it includes a discrete component. Nevertheless, such problems can be solved numerically for a reasonable number of pretenders (some examples are provided in [50]).
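For illustration, the following sketch solves a small instance of the unified-rate problem (23)–(24) by direct enumeration; the labor-supply curves and all numbers are assumptions, not data from [50].

```python
# A brute-force sketch of the unified-rate problem (23)-(24); the labor-supply
# curves tau_i and all numerical values are illustrative assumptions.
from itertools import product

T = 40.0                                # required total working time, T = R / rho
alpha0 = 1.0                            # minimal wage rate
n0 = (3, 3, 3, 3)                       # numbers of pretenders of types 1..4
U_bar = (1.0, 1.0, 1.5, 1.5)            # reservation utilities by type

def tau(i, a):
    """Illustrative labor-supply curves for the four types of agents."""
    return (8.0,                        # type 1: constant working time
            4.0 + 2.0 * a,              # type 2: increasing
            10.0 - 2.0 * a,             # type 3: decreasing
            4.0 + 6.0 * a - 2.0 * a * a # type 4: increasing, then decreasing
            )[i]

best = None
for a in (alpha0 + 0.1 * k for k in range(21)):      # grid over the unified rate
    for n in product(*(range(m + 1) for m in n0)):   # grid over the staff
        if sum(n[i] * tau(i, a) for i in range(4)) >= T:
            cost = sum(n[i] * (a * tau(i, a) + U_bar[i]) for i in range(4))
            if best is None or cost < best[0]:
                best = (round(cost, 2), round(a, 2), n)
print("minimal cost, rate, staff:", best)
```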


Chapter 7


STRUCTURE CONTROL¹

A structure is defined as a set of stable relations between the elements of a certain system. When organizational systems are considered, one deals with informational, control or other relations between system participants (including subordination relations and distribution of decision-making power). Moreover, an organizational structure can be interpreted as a structure of the process of organizing (as a set of temporal, cause-and-effect and other relations between the corresponding stages) or as a structure of the OS. No doubt, the second definition is generally accepted; in the sequel, by adopting the term "organizational structure," we will imply exactly it. Let us identify the following typical structures of OS. First, a degenerate structure (DS), which has no relations between participants. Second, a linear structure (LS), where subordination of OS participants represents a tree, i.e., each participant is subordinated only to a single participant located at a higher level of the hierarchy. In this context, one should emphasize the following aspect. Most of the researchers focused on formal control models in organizational systems consider models of OS described by tree-like structures. And third, a matrix structure (MS), remarkable for that some participants can be simultaneously subordinated to several participants located at the same (or a higher) level of the hierarchy or at different levels of the hierarchy. Accordingly, it is necessary to distinguish among double subordination, interlevel interaction and distributed control. Alternative classifications seem also possible (e.g., based on goals, functions, territorial division, product specialization, etc.). For instance, one may separate out the following types of organizational structures of industrial firms: hierarchical structures (resulting from decomposition of the highest goal of an organization into goals, subgoals, and so on), functional structures (decomposition takes place according to the functions performed: research, production, marketing, etc.), divisional structures (decomposition with respect to relatively independent divisions, each possessing a certain structure), and matrix structures (the "horizontal" responsibility of project managers is superimposed on the functional structure of an organization). Furthermore, there exist "intermediate" structures (e.g., divisional-regional, divisional-technological, divisional-product ones, and so on). Nevertheless, any of the above OS structures can be classified as a typical structure (DS, LS or MS).

1

This chapter was written with the assistance of M.V. Goubko, Cand. Sci. (Tech.) and S.P. Mishin, Dr. Sci. (Tech.).


structures can be classified as a typical structure (DS, LS, or MS). Actually, the criteria used here must reflect the specifics of the problem.

Typical structures express static characteristics of an OS; the notion of a network structure (NS) serves for describing their temporal variations. In an NS, there exist potential relations among all participants; some of these relations are activated (thus generating a linear or matrix structure from a DS) until a system goal is achieved, and are then "terminated" (getting back to the DS until a new goal appears). Notably, network structures may include either double subordination or interlevel interaction, while the same subjects can act as principals and agents (i.e., enter into network interaction). In other words, a network structure is a set of a priori equal agents, in which permanent (e.g., hierarchical) structures may arise due to the problem specifics.

The order of interaction and a control mechanism (a hierarchy) appear in a network structure due to the necessity of specialization, which enables efficient solution of partial problems. For instance, in the process of repeatedly solving similar problems, an LS arises in an NS as a mechanism of transaction cost reduction [40]. That is, the diversity of problems treated generates organizational systems as permanent hierarchies in a degenerate structure. Hence, the type of the observed structure depends on the moment of observation. Within large periods (compared with the interval of changes in external conditions), an OS can be viewed as a network; within small periods, an OS possesses a certain typical structure (DS, LS, or MS).

One may provisionally characterize the typical structures of an OS by the degree of two properties: hierarchy (with distribution as its opposite) and the number of relations. An LS is fully hierarchical, while a DS demonstrates the absence of the hierarchical property; an intermediate position is occupied by an MS (both hierarchies and distributed control are inherent to such structures). As to the number of relations, the minimal number takes place in a DS and the maximal one in an MS, with an LS again holding an intermediate position; recall we have suggested interpreting an MS as a superimposition of several LS.

In what situations are certain structures efficient? Which factors cause transformations of structures? The efficiency and transformation of different structures depend on the existing (variable) external and internal conditions of functioning. External conditions are requirements applied to an OS by an external environment (norms, constraints, expectations, market parameters, social orders, etc.). Internal conditions are defined, first of all, by organizational costs, which depend on the conditions of interaction among OS participants (the costs to organize, perform, and coordinate such interaction: the number of relations, informational load, etc.).

Generally, a structure control problem in an OS is posed as follows: find a structure or a set of structures minimizing the organizational costs (or maximizing a specific functional which reflects, in aggregated form, the preferences of OS participants and/or other subjects) provided that the system satisfies the external requirements.

Let us introduce two assumptions regarding the comparative efficiency of typical structures.
The first assumption sorts the three typical structures with respect to their "complexity" (roughly speaking, the number of relations among OS participants): the "simplest" structure is a DS, the "most complex" one is an MS, and an LS again occupies an intermediate position. The second assumption connects the complexity of a typical structure to its organizational costs and (hence) to its efficiency depending on the


frequency of changes in the external conditions. Notably, suppose that simpler structures involve smaller organizational costs and are efficient under a higher frequency of changes in the external conditions.

The above assumptions imply that new hierarchies appear when an organization faces new goals, projects, etc. (alternatively, when the feasible organizational costs increase). In other words, the structure gets complicated and a "transition" from a DS to an MS takes place (see Figure 7.1a; loops indicate preserving the structure's type). At the same time, when the number of goals is reduced, projects are completed, etc. (or the feasible organizational costs decrease), some existing hierarchies disappear, i.e., the structure gets simplified and an organization "passes" from an MS to a DS (see Figure 7.1b). Similarly, one observes simplification of the structure under a growing frequency of changes in the external conditions.

Therefore, an MS turns out to be efficient under fixed external conditions and high organizational costs. On the other hand, a DS is efficient in the case of varying external conditions and low organizational costs. Finally, an LS occupies an intermediate position (see Figure 7.2).

Now, consider feasible transitions between typical structures and the corresponding reasons. Recall that the reduced efficiency of a certain structure may be caused by varying external conditions and/or changes in the organizational costs.

[Figure 7.1 contains two panels. Panel a), structure complication: DS → LS → MS, driven by increasing organizational costs and a decreasing frequency of changes in the external conditions. Panel b), structure simplification: MS → LS → DS, driven by decreasing organizational costs and an increasing frequency of changes in the external conditions. Loops indicate preserving the structure's type.]

Figure 7.1. Natural laws of complication and simplification in OS structure.

Organizational costs \ Frequency of changes in the external conditions:

                        Low        Medium     High
High costs:             MS         LS, MS     LS
Medium costs:           LS, MS     LS         DS, LS
Low costs:              LS         LS, DS     DS

Figure 7.2. The efficiencies of typical OS structures and natural laws of their transformation.


The transformation process can be described as the appearance or disappearance of hierarchies (elementary linear structures). According to the above natural laws of complication and simplification, the feasible transformations of typical structures (under the assumptions made) are exactly the ones presented in Figure 7.2 (note that LS form a "universal" type).

Thus, within an organization a key task of managers lies in creating a rational organizational structure, i.e., one naturally corresponding to the mission of the organization and ensuring the maximal efficiency of its functioning. Unfortunately, control problems for organizational structures have been little investigated to date. In the first place, this is due to the complexity of the related optimization problem; indeed, the design principles used for organizational structures depend on numerous factors: the size of an organization, the specifics of its functioning, the type of document circulation, constraints regarding data acquisition and processing in a control system, statutory provisions, and so on.

The second difficulty is connected with the following. The design problem for an organizational structure represents a "higher-level problem" with respect to other control problems. Indeed, suppose it is necessary to assess the efficiency of a management structure. Given human resources and hiring capabilities, the problem is to decide who should occupy certain positions in the management structure of an organization so as to maximize the efficiency of its functioning. In fact, this is an optimal staff formation problem (see Chapter 6). Generally speaking, efficiency estimation for a staff requires solving the problem of optimal control mechanism synthesis for the given staff (see Chapters 2–5); notably, one should evaluate an optimal incentive scheme for the given staff. Only in this case is it possible to estimate reliably the efficiency of functioning of an organization with a given structure and staff. Clearly, the mentioned actions must be repeated for each suggested structure! In practice, such a process is almost unrealizable (it serves to compare only two or three organizational structures). Therefore, solving the problem of organizational structure formation requires definite skills in solving the problems of staff formation and optimal control mechanism synthesis.

Efficient treatment of the problem of organizational structure formation (with many feasible structures) presupposes its (somewhat artificial) "separation" from other control problems; one has to seek a rational structure with a "typical" staff and "standard" control mechanisms. Even such simplifications have not yet enabled the development of general methods to solve the problem of organizational structure formation. Nevertheless, below we consider a series of models enabling the solution of certain problems of organizational structure formation [12, 44] and reflecting well-known management rules [43]. Section 7.1 focuses on designing an optimal hierarchy over a technological graph; in Section 7.2 we address the same problem over a technological network. Section 7.3 is dedicated to the issues of choosing the type of organizational structure (linear or matrix), while Section 7.4 deals with models of network structures.

7.1. THE HIERARCHY OVER THE TECHNOLOGICAL GRAPH

As is generally known, it is the structure of technological flows that mostly defines an organizational structure. Hence, we study several simple models of an optimal control structure for technological links in an organization [12, 44].


[Figure: the nodes are acquiring data about suppliers (marketing analyst 1), acquiring data about customers (marketing analyst 2), offer analysis (marketing analyst 3), contract negotiation with suppliers (manager 1), contract negotiation with customers (manager 2), preparation of a draft contract (a company lawyer), and contract visa (an accountant); the arcs are labeled with two-component flow vectors.]

Figure 7.3. A technological graph of contract conclusion in an industrial firm.

A technological graph over a set of nodes N is a directed loop-free graph $T = \langle N, E_T \rangle$, whose arcs $(u, v) \in E_T$ correspond to r-dimensional vectors $l_T(u, v)$ with nonnegative components: $l_T: E_T \to \Re_+^r$. For the basics of graph theory, the reader is referred to Appendix 2.

The nodes of a technological graph represent elementary operations of a technological process in an enterprise or certain employees (workers, work stations). The relation $(u, v) \in E_T$ in a technological graph means that the r-component flow (raw materials, energy, data, etc.) goes from the element u to the element v. The intensity of each component of the flow is described by $l_T(u, v)$. An example of a technological graph with two-component flows is shown in Figure 7.3 (component 1 reflects printed documents circulating in an organization, while component 2 corresponds to "oral data"). For each arc of the technological graph, the numerical values of these components describe the amounts of printed documents and oral data transmitted from one work station to another. In principle, the nodes of this graph can be viewed as work stations or as operations of a technological process.

When organizing such technological processes, one often confines oneself to assigning responsible persons to operations (the so-called responsibility matrix). However, this is not enough for normal system functioning: the technological relations among operations (workers) remain uncontrolled. For instance, take marketing analyst 3. He/she is responsible for assessing the offers of marketing analysts 1–2 to choose the way of supply. But ensuring well-timed data supply from these workers, controlling its completeness and reliability, and choosing the way of data supply are not among the duties of marketing analyst 3. Thus, it is necessary to design a structure (a control system for technological relations) to manage the flows between elements of the technological graph. For brevity, we adopt the term "organization" for such a structure.
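As a data structure, a technological graph is just a set of arcs carrying vector-valued flows. The sketch below uses the node names of Figure 7.3; the numeric flow values are illustrative assumptions, since the figure's labels cannot be recovered reliably:

```python
from typing import Dict, Tuple

Arc = Tuple[str, str]

# two-component flows: (printed documents, oral data); values are assumptions
tech_graph: Dict[Arc, Tuple[float, float]] = {
    ("analyst1", "manager1"):   (0.0, 10.0),
    ("analyst2", "manager2"):   (0.0, 20.0),
    ("analyst1", "analyst3"):   (2.0, 7.0),
    ("analyst2", "analyst3"):   (7.0, 2.0),
    ("analyst3", "manager1"):   (6.0, 1.0),
    ("manager1", "lawyer"):     (1.0, 5.0),
    ("lawyer",   "accountant"): (3.0, 0.0),
}

def total_flow(graph: Dict[Arc, Tuple[float, float]]) -> Tuple[float, ...]:
    """Componentwise sum of all arc flows (the total flow L_T used below)."""
    return tuple(map(sum, zip(*graph.values())))

print(total_flow(tech_graph))
```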


Figure 7.4. An example of a control system structure for technological relations.

"Set" the technological graph in the horizontal plane, as indicated by Figure 7.4. Then constructing an organization can be posed as the problem of building a control hierarchy over the technological network: a tree whose "upper" nodes are managers, while the "lower" ones (leaves) are the nodes of the technological graph. The tree arcs go from a subordinate to a superior. Figure 7.4 shows a control system consisting of nodes 1–7 of the above technological graph (see Figure 7.3) and three additional nodes (managers): I, II, III. The subordinates of node I include all marketing analysts (nodes 1–3 of the technological graph). Next, the subordinates of node II are the two managers and the company lawyer (nodes 4–6). Finally, nodes I, II, and the accountant (node 7) appear subordinated to node III.

Each such tree fully controls all nodes and relations of the technological graph. Indeed, each tree has a root node controlling (possibly through intermediate nodes) all nodes of the technological graph. Different organizations (more specifically, organizational structures: hierarchies over the technological graph) vary in the costs to maintain them; the problem of optimal organization lies in finding a tree with the minimal costs.

To define the costs of the organizational graph, let us introduce the notion of the group controlled by a node of the graph. Consider an organizational graph and choose a node v; the group g(v) of this node is the subset of nodes of the technological graph (leaves of the organizational tree) from which paths lead to the node v in the organizational graph. As an example, in Figure 7.4 the group of node I includes elements {1, 2, 3}, the group of node II consists of elements {4, 5, 6}, and the group of node III coincides with the set N (in Figure 7.4 the groups of these nodes are contoured). The groups of nodes 1–7 each contain just one element (the corresponding node).

Fix an arbitrary node v in an organizational graph and denote by Q(v) the set of nodes directly subordinated to this node. For instance, in Figure 7.4 nodes 1–3 are directly subordinated to node I, nodes 4–6 are directly subordinated to node II, while nodes I, II, and 7 are directly subordinated to node III. Clearly, the group of the node v in the organizational graph is the union of the groups of the nodes directly subordinated to it:

$g(v) = \bigcup_{v' \in Q(v)} g(v')$.

Suppose that the node v of the organizational graph controls technological flows only among the nodes of the subordinate group g(v) (in the sequel, we will also discuss a model where a node controls external flows). Define the vector $l_T(g)$ of the total flow among the nodes of an arbitrary group g:

$l_T(g) := \sum_{u, v \in g,\ (u, v) \in E_T} l_T(u, v)$.

The total flow inside the group g(v) of the node v makes up $l_T(g(v))$; at the same time, the flows $l_T(g(v_1)), \ldots, l_T(g(v_k))$ are controlled by the direct subordinates of the node v. Hence, the node v must directly control only the flow

$L_T(v) = l_T(g(v)) - l_T(g(v_1)) - \ldots - l_T(g(v_k))$,

where $g(v_1), \ldots, g(v_k)$ are the groups of the subordinate nodes $\{v_1, \ldots, v_k\} = Q(v)$.

As a result of constructing the organizational graph, a specific node becomes responsible for (controls) each relation in the technological graph. Assume that the technological graph T is connected; controlling each relation in this graph then requires the existence of a node in the organizational graph whose group coincides with the set of workers N. This is the root node of the organizational tree.

Maintaining each node in the organizational graph causes definite costs. Suppose that the costs to maintain the node v depend on the flow $L_T(v)$ directly controlled by that node. The costs are described by a function $K(L_T(v)) \ge 0$.² Accordingly, the costs P(G) of the whole organizational graph $G = \langle V, E \rangle$ (where V stands for the set of control nodes of the graph, i.e., the nodes having subordinates, and E means the set of arcs defining the mutual subordination of the nodes) represent the sum of the costs of the nodes:

$P(G) = \sum_{v \in V} K(L_T(v))$.

Thus, in the terminology of [44], the problem of defining the structure of an optimal control system for technological relations can be stated as the problem of finding an optimal organizational tree of a single group N on the set of workers N with the costs functional

$P(G) = \sum_{v \in V} K(L_T(v))$.

² For simplicity, the suggested model proceeds from the following idea. The costs to maintain a control node are independent of the person holding the corresponding position. Thus, the model does not cover the issues of manager rewards (the costs to motivate them are included in the total costs to maintain the node).

What are the properties of the cost function K(·)? It describes the costs to maintain a certain node in the organizational graph, and its value depends on the amount (flow) of information directly controlled by the node in question. Consequently, it seems reasonable to believe that the costs grow with an increase in any component of the controlled flow. Assume that a manager is located at each node of the organizational graph. Human beings are able to process only a limited amount of information; notably, assimilating new information becomes more difficult as the accumulated information grows (it requires more time, effort, and a higher level of qualification). Accordingly, the cost function K(·) is convex with respect to each component of the flow. The value of the cost function under zero flow reflects the "initial" costs needed to maintain a manager (the minimal wage, the cost to organize his/her work station, etc.) even in the absence of a controlled flow.

It is easy to show the following [44]. Under zero (negligibly small) initial costs, it appears beneficial to maintain as many managers (nodes of the organizational graph) as possible, with each manager controlling the smallest possible flow. Such an absurd statement (in the sense of practical applications) underlines the important role played by the initial costs to maintain a node in the design of an organizational structure.

Let us provide an example of constructing an almost optimal organization. For this, suppose that the function K(·) depends only on a linear combination of the components of the flow vector:

$K(L) = K'(\alpha_1 L_1 + \ldots + \alpha_r L_r)$,

where $K'(\cdot)$ indicates a convex one-variable function such that $K'(0) \ge 0$.

Consider a certain organizational graph G with the nodes $v_1, \ldots, v_n$ controlling (a) the groups $g_1, \ldots, g_n$ and (b) the flows $L_1, \ldots, L_n$. The costs of such an organization constitute

$P(G) = \sum_{i=1}^n K(L_i)$.

Denote by $L_T = \sum_{(u, v) \in E_T} l_T(u, v)$ the sum of all flows in the technological graph T. An arbitrary organization G meets the equality $\sum_{i=1}^n L_i = L_T$.

On a temporary basis, assume that (the organizational graph being constructed) one may redistribute the flows among its nodes so as to reduce or enhance the load of certain nodes. For a given set of nodes $v_1, \ldots, v_n$, to find the loads $L_1', \ldots, L_n'$ leading to the minimal costs in the control system, one has to solve the problem

$(L_1', \ldots, L_n') = \arg\min_{L_1, \ldots, L_n} \sum_{i=1}^n K(L_i)$

under the condition $\sum_{i=1}^n L_i = L_T$.

Recall that the function K() depends on the linear combination of the components. Substitute the r-component flows lT (u , v)  (lT (u , v) ,..., lT (u , v) ) by the one-component 1

r

flows lT (u , v) : lT (u , v)  1lT (u , v)  ...   r lT (u , v) . '

'

1

r

Hence, we have derived a problem with one-component flows and a convex cost function $K'(\cdot)$. The solution is given by identical loads of the nodes: $L_1' = \ldots = L_n' = L_T / n$ (indeed, by convexity of $K'(\cdot)$ and Jensen's inequality, $\sum_{i=1}^n K'(L_i) \ge n K'(L_T / n)$). The organizational costs are then $n \cdot K'(L_T / n)$, and this is the minimal cost to maintain the total flow $L_T$ with n control nodes.

However, an arbitrary redistribution of loads among control nodes is impossible; indeed, the distribution is completely determined by the groups controlled by the nodes (on the one hand) and by the groups controlled by their direct subordinates (on the other hand). Hence, one may merely modify the organizational graph by passing some subordinates from a certain node to another, which leads to a corresponding redistribution of flows. For instance, take a node v in an organizational graph with two subordinate nodes $v', v'' \in Q(v)$. Modify the organizational graph by making the node v′′ subordinated to the node v′. Imagine that the total costs of the nodes v and v′ (i.e., $K(L_T(v)) + K(L_T(v'))$) in the new graph are smaller than in the initial graph. In this case, the total costs of the new graph decrease in comparison with those of the initial one (the costs of the other nodes remain the same). Moreover, one can face situations when the organizational costs go down as a result of transferring a subordinate node in the opposite direction. Therefore, if a certain organizational graph admits passing subordinates from a more loaded node $v_1$ to a less loaded one $v_2$ (i.e., smoothing the difference in the flows controlled by them), the graph is nonoptimal.

We have established which organizations are not optimal. Yet, how can an optimal organization be found? Even in such a simplified formulation, the problem of evaluating an optimal organizational graph possesses high computational complexity. However, in practice it suffices to find a "good" organization whose maintenance costs slightly exceed the minimal ones. The following algorithm can be suggested for this.

1. Compute an approximate number of nodes in the organizational tree:

$n^* = \arg\min_{n = 1, \ldots, |N| - 1} n \cdot K'(L_T / n)$.

If the flows could be distributed in equal shares among n nodes of the organizational graph, the number $n^*$ would ensure the minimal costs to maintain the organization.

2. Define the "reference" flow $\bar{L} := L_T / n^*$ intended for a single node.

3. Sequentially add new nodes to the organizational graph such that the flow controlled by each is as close to the reference flow $\bar{L}$ as possible; terminate this process when each relation of the technological graph is controlled by a certain node of the organizational graph (a rough sketch of this procedure is given below).
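Below is a rough sketch of this heuristic for scalar flows. Steps 1 and 2 follow the formulas above; for step 3, the sketch greedily packs arcs into control nodes until each node's load approaches the reference flow, ignoring (for brevity) the requirement that the resulting groups nest into a tree:

```python
def almost_optimal_hierarchy(arcs, K_prime, n_workers):
    """arcs: dict (u, v) -> scalar flow; K_prime: convex one-variable cost."""
    L_T = sum(arcs.values())
    # step 1: approximate number of control nodes, n in 1..n_workers-1
    n_star = min(range(1, n_workers), key=lambda n: n * K_prime(L_T / n))
    # step 2: reference flow per control node
    L_ref = L_T / n_star
    # step 3: greedily assign arcs to control nodes up to the reference flow
    nodes, load = [[]], 0.0
    for arc, f in sorted(arcs.items(), key=lambda kv: -kv[1]):
        if load + f > L_ref and nodes[-1]:
            nodes.append([])
            load = 0.0
        nodes[-1].append(arc)
        load += f
    return n_star, L_ref, nodes
```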


Figure 7.5. An example of an almost optimal control system structure for technological relations.

Consider the following example. Within the framework of the technological graph shown in Figure 7.5, choose the function $K(L) = 300 + (L_1 + L_2)^2$; the total flow makes up $L_T = 64$. In this case,

$n^* = 4$, $\bar{L} = 16$, $P^*(n^*) := n^* K'(L_T / n^*) = 2224$.

Figure 7.5 demonstrates an organization constructed according to the proposed algorithm. The loads of the control nodes I–IV make up $L_I = 16$, $L_{II} = 15$, $L_{III} = 13$, and $L_{IV} = 20$. The organizational costs constitute $P(G) = 2250$, thus only slightly exceeding the minimal costs of a system with four control nodes ($P^*(4) = 2224$).
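These numbers are easy to verify directly:

```python
Kp = lambda x: 300 + x * x                  # K'(x) for K(L) = 300 + (L1 + L2)^2
L_T = 64
costs = {n: n * Kp(L_T / n) for n in range(1, 9)}
n_star = min(costs, key=costs.get)          # -> 4, with minimal cost 2224.0
P_G = sum(Kp(L) for L in (16, 15, 13, 20))  # loads of nodes I-IV -> 2250
print(n_star, costs[n_star], P_G)
```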

7.2. THE HIERARCHY OVER THE TECHNOLOGICAL NETWORK

In the model considered above, we have assumed that the manager's costs depend only on the total flow within a controlled group of workers. Consequently, a manager "bears no responsibility" for the flows his/her group exchanges with the rest of the organization or with an external environment. This assumption appreciably restricts the applicability of the model. For instance, consider a convex cost function and zero initial costs; moreover, reject the tree-like structure of the organizational graph and let a subordinate have (at least) two direct superiors. In this case, an optimal hierarchy is the one where each manager controls a separate flow. The stated shortcoming is easily avoided by a slight complication of the model. Consider an illustrative example [12, 44]. Suppose that all flows are scalar (one-component).


Figure 7.6. A symmetric network.


Figure 7.7. A control tree for a chain.

Figure 7.6 shows a technological network for the manufacturing process of a product (alternatively, for rendering a service). The worker w1 receives raw materials from a supplier, performs primary processing, and transfers the semi-manufactured product to the worker w2. The latter performs the next technological operation and transfers the resulting semi-manufactured product further. Once the final technological operation is finished, the last worker (here, w3) transfers the product to a customer. If all flows are identical, such a graph is said to be a symmetric network; denote the common intensity of its flows by λ. This model admits flows between a node of the technological graph and an external environment (see the "hanging" links in Figure 7.6).

Consider the hierarchy in Figure 7.7. Assume the flow between the workers w2 and w3 is disrupted (e.g., due to a conflict). The worker w2 informs his/her direct superior m1 of the problem. The manager m1 turns out unable to settle the conflict, since the worker w3 is not subordinated to him/her. Similarly, the manager m2 cannot cope with the conflict reported by the worker w3. The managers m1 and m2 notify their direct superior m of the problem; the latter makes a decision to solve it. This decision is passed by the managers m1 and m2 to the workers w2 and w3, respectively. By analogy, one can study flow planning for the workers w2 and w3: the manager m transfers a flow plan to the managers m1 and m2, and they communicate it to the workers w2 and w3, respectively. The fact of plan fulfillment is reported to the manager m in the opposite direction.

Thus, the managers m1, m2, and m participate in control of the flow between the workers w2 and w3. At the same time, the flow between w1 and w2 is controlled only by the manager m1 (he/she independently makes all decisions regarding this flow), and just the manager m2 controls the flow between w3 and w4. On the other hand, the flow between the external environment and w1 is controlled by the managers m1 and m (a purchase plan is defined by the manager m, refined by the manager m1, and passed to the worker w1). The external flow from the worker w4 similarly involves the managers m2 and m.


The example demonstrates that the total flow controlled by a manager consists of two parts, viz., 1) an internal flow of the controlled group (similarly to the previous model, it is not controlled by subordinate managers); 2) an external flow (i.e., the flow between the subordinate group and all other workers, as well as the flow between the subordinate group and an external environment).

Let us study all possible hierarchies controlling a technological graph. Notably, there is (at least) a single manager in a hierarchy who controls all workers (directly or via subordinate managers). Then one can easily show the existence of an optimal hierarchy with the following properties:

1) all managers control different groups of workers;
2) only a single manager possesses no superiors; the remaining managers and all workers are subordinated to this manager;
3) immediate subordinates of the same manager do not control each other.

Property 1 means the absence of overlapping, when two managers control exactly the same group of subordinates. Figure 7.8a shows an example of such overlapping; two managers control the same group {w1, w2, w3}. Note that one of these managers can easily be eliminated (the other manager is then made subordinate to all his/her direct superiors) without an increase in costs.

According to Property 2, there exists a single manager m who has no superiors; all workers and the remaining managers within the hierarchy are subordinated to him/her. The manager m is said to be a top manager. This conforms to a typical organization, where only the top manager has the right to make decisions mandatory for all workers (e.g., to settle a conflict situation between any workers). Figure 7.8b shows an example with two managers having no superiors; thus, Property 2 is violated. Obviously, a "superfluous" manager could be eliminated without increasing the costs of the hierarchy.

Property 3 may be interpreted in the following way. Suppose the manager m2 is directly subordinated to the manager m1. Then the latter does not directly control the subordinates of the manager m2. This corresponds to "regular" functioning of an organization, when managers control all workers via immediate subordinates (and not directly).

Figure 7.8. Hierarchies a)–c) violate Properties 1–3, respectively.


Figure 7.8c demonstrates a situation when the top manager m directly controls the workers w2 and w3 (although they are already controlled by the immediate subordinates m1 and m2 of the manager m).

Moreover, in the case of a concave cost function, a two-level hierarchy is optimal (a single superior controls all nodes of the technological network). The same technique can be used to describe small organizations, where two-level hierarchies are common. However, in a growing organization (for a sufficiently large flow), the cost function loses the concavity property: a single top manager is very busy, and his/her marginal costs go up. Notably, each additional flow unit causes an increase in costs; the cost function becomes convex. Below we consider a convex cost function and obtain the shape of an optimal hierarchy controlling the symmetric network; moreover, we show that (in a sufficiently large organization) an optimal hierarchy is multilevel.

Let us demonstrate that sometimes an optimal hierarchy is not a tree. Assume an asymmetric production line has four workers with the flows f(wenv, w1) = 3, f(w1, w2) = 1, f(w2, w3) = 5, f(w3, w4) = 1, and f(w4, wenv) = 3. Consider the cost function of a manager in the form $\varphi(x) = x^3$ (here x is the manager's flow). The optimal hierarchy for this production line is shown in Figure 7.9. The example illustrates the following general rule: the flows with maximal intensity must be controlled at lower levels of a hierarchy. The above example studies an extreme case when a special lower-level manager is allocated to control the maximal flow. The next example indicates that expansion of a technological network does not necessarily increase the management expenses.

Figure 7.9. An optimal hierarchy which controls an asymmetric production line.


Figure 7.10. An organization expansion resulting in a decrease of management costs.


Consider an asymmetric production line composed of four workers with the flows f(wenv, w1) = 1, f(w1, w2) = 5, f(w2, w3) = 1, f(w3, w4) = 5, f(w4, wenv) = 1 and the manager's cost function $\varphi(x) = x^2$ (again, x is the manager's flow). First, assume that only the workers w2 and w3 belong to the organization, i.e., study the technological network N = {w2, w3}. Then there exists a unique hierarchy, illustrated by Figure 7.10a. Now, suppose the organization is extended by adding two workers, w1 and w4. For instance, consider a large-scale wholesale company purchasing a raw materials supplier (the "worker" w1) and a retail store network (the "worker" w4), striving to control the whole chain from the upstream to the downstream. The large flow f(w1, w2) = 5 corresponds, e.g., to the information flow connected with the interaction problems between the company and the raw materials supplier (say, due to a high level of defects). Similarly, the large flow f(w3, w4) = 5 could be connected with the interaction problems between the company and the retail store network (e.g., due to a high volume of product returns).

The extended organization controls the workers w1, w2, w3, and w4. It is possible to modify the control hierarchy as shown in Figure 7.10b (hire two lower-level managers, making them responsible for controlling the large flows). Compare the costs of the hierarchies:

(5 + 1 + 5)² = 121,
(1 + 5 + 1)² + (1 + 5 + 1)² + (1 + 1 + 1)² = 49 + 49 + 9 = 107.

Hence, the management expenses can be reduced by extending the technological network (adding new workers as a part of an external environment). This is a possible reason to purchase a new business that is unprofitable by itself, yet allows reducing the management expenses of a primary business. These examples show that many interesting phenomena of real organizations can be described by introducing small modifications into the model.

Consider the problem of an optimal hierarchy controlling a symmetric network. There exists a tree representing an optimal hierarchy over a symmetric network [44]; in this tree, each manager controls a group of workers located sequentially in the network. In the case of a convex cost function, the numbers of direct subordinates of different managers in the tree vary at most by one. Thus, one should look for an optimal hierarchy only in the class of tree-like structures (according to the above example, this is not true for an asymmetric network). Moreover, one may confine attention to trees where each manager controls a group of workers located sequentially; notably, each manager must control a section of the technological network. An attempt to subordinate unconnected parts of production to a manager increases the costs of the hierarchy and renders the latter nonoptimal.

Suppose the cost function is the power function $\varphi(x) = x^\beta$, $\beta \ge 1$; then it is possible to find an optimal hierarchy analytically. For a symmetric network, an optimal hierarchy is given by any tree where each manager possesses $r^* = (\beta + 1)/(\beta - 1)$ direct subordinates. If the value $r^*$ is noninteger, take the closest integer.


Therefore, we have defined an optimal span of control, i.e., the number of subordinates of a single manager. This parameter of an organization is discussed in many works on management science. Under $\beta \ge 3$, one obtains $(\beta + 1)/(\beta - 1) \le 2$; that is, an optimal tree has the minimal number of direct subordinates, $r^* = 2$. A tree with $r^*$ direct subordinates of each manager exists if $(n - 1)$ is divisible by $(r^* - 1)$ without a remainder. For instance, for $r^* = 3$ we have n = 3, 5, 7, 9, … .

Assume that $n = (r^*)^l$; then one can construct an optimal symmetric tree with l levels of hierarchy, such that each manager at level 1 possesses $r^*$ subordinate workers and each manager at the next level has exactly $r^*$ subordinate managers of the previous level. The case $r^* = 3$, n = 9 is presented in Figure 7.11. The discussed relationship between the optimal number of direct subordinates $r^*$ and the

parameter β is illustrated by Figure 7.12. Clearly, as β approaches 1, the optimal span of control tends to +∞. In other words, as the function φ(·) gradually becomes concave, the two-level hierarchy turns out optimal for greater n, i.e., a single manager may control a higher number of workers. If β = 1, the function φ(·) is concave (linear), so the two-level hierarchy is optimal for any number of workers n; thus, a single manager controls all flows.
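A small check of the closed-form span of control (under the stated assumptions about the power cost function):

```python
def span_of_control(beta: float) -> float:
    """r* = (beta + 1) / (beta - 1) for phi(x) = x**beta on a symmetric network."""
    if beta <= 1:
        return float("inf")  # concave/linear case: a two-level hierarchy
    return (beta + 1) / (beta - 1)

for beta in (1.1, 1.5, 2.0, 3.0, 5.0):
    print(beta, round(span_of_control(beta), 2))
# beta = 2 gives r* = 3; for beta >= 3 the closest integer is the minimal span 2
```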

Figure 7.11. An example of a symmetric tree over a network (workers w1–w9).

Figure 7.12. The optimal span of control r*(β).


The situation fundamentally changes when $\beta \ge 3$. For any n, an optimal hierarchy has the minimal span of control (in fact, 2). This means that each manager has just two direct subordinates; the total number of managers equals (n − 1), i.e., the number of managers attains its maximum and each manager controls the minimal number of flows.

In practice, most organizations represent an intermediate type of hierarchy: each manager has 3–10 direct subordinates (sometimes, the number of direct subordinates reaches several hundred). These cases correspond to the range $1 < \beta < 3$. Figure 7.12 demonstrates the stepwise increase in the optimal span of control as β drops from 3 to 1: for any $r \ge 2$ there exists a certain range of β, where the hierarchy with r direct subordinates of each manager is optimal.

The parameter β can be interpreted as the instability level of an external environment. The value β = 1 describes a completely stable environment (workers handle their duties independently, and a certain manager can directly control any number of workers). As β grows, instability of the external environment increases the manager's costs: the manager has to make well-timed decisions in changing conditions (regarding supplies, sales, production, etc.). Consequently, the manager's costs rise dramatically, and in an optimal hierarchy the whole amount of work is allocated among several managers. The case $\beta \ge 3$ reflects a higher level of instability (a separate manager is required to control each flow).

7.3. CHOOSING THE TYPE OF ORGANIZATIONAL STRUCTURE

Most modern organizations and firms (not only project-oriented companies) face the problem of a rational balance between functional³ and project structures. A linear structure generated by functional specialization appears efficient in the case of process functioning, i.e., under a relatively invariant set of functions performed by a system. On the other hand, within a project structure, system participants get "attached" not to functions but to projects (clearly, the latter are temporary). A certain "hybrid" of functional and project structures is a matrix structure, where each agent is generally subordinated to several managers simultaneously, e.g., to a functional manager and to a project manager.

Thus, in the sequel we consider models accounting for the benefits and drawbacks of different structures; these models enable defining the types of structures optimal in the sense of a certain criterion [12, 44]. The matter concerns exactly the types of structures, since the problem of an optimal hierarchical structure has not been studied comprehensively (see the models above): just elementary two-level "blocks" are analyzed.

"Assignment" model. Consider a system with n agents (workers engaged in corporate projects; N = {1, 2, …, n} stands for the set of agents) and $m \le n$ principals. Each principal corresponds to (is responsible for) a certain type of work. Consequently, a project (chosen at "unit time") can be described by the vector $v = (v_1, v_2, \ldots, v_m)$, where $v_j \ge 0$ means the amount of work for principal j ($j \in M$, the set of works or principals).

³ Roughly speaking, a functional structure is a linear (tree-like) structure, where departments are created according to a specific (e.g., functional, territorial, or product-type) attribute. Note the attributes may vary at different levels of the hierarchy.


Introduce the matrix $\|y_{ij}\|_{i \in N, j \in M}$, whose element $y_{ij} \ge 0$ reflects the amount of work j performed by agent i. Denote by $y_i = (y_{i1}, \ldots, y_{im}) \in \Re_+^m$ the vector of amounts of work performed by agent i ($i \in N$), and by $y = (y_1, \ldots, y_n) \in \Re_+^{mn}$ the vector of work assignment among agents. Let $c_i(y): \Re_+^{mn} \to \Re_+^1$ be the cost function of agent i. The problem of work assignment consists in minimization of the total costs $\sum_{i \in N} c_i(y)$ under the condition of complete fulfillment of each work $j \in M$: $\sum_{i \in N} y_{ij} = v_j$. We emphasize that this model does not cover possible constraints on the maximal amount of work performed by the agents.

Suppose that the cost functions are convex with respect to the corresponding variables; accordingly, one obtains a convex programming problem. Use the notation $C_0(v)$ for the minimal value of the total costs. For instance, if $\sum_{i \in N} c_i(y) = \sum_{i \in N} \sum_{j \in M} y_{ij}^2 / 2 r_{ij}$, $i \in N$, $j \in M$, then the solution makes up $y_{ij} = r_{ij} v_j / r_j$, where $r_j := \sum_{i \in N} r_{ij}$, and $C_0(v) = \sum_{j \in M} v_j^2 / 2 r_j$.

In practice, the stated problem corresponds to defining the structure of interrelations among the agents and principals (recall that each principal is "responsible" for a certain project or work). Generally, each agent is related to each principal, since in an optimal assignment the former performs work of different (or even all) types. For convenience, one may believe that such relations correspond to a matrix control structure, whose efficiency depends on the project v considered and makes up $C_0(v)$. Thus, this problem can be referred to as the problem of optimal matrix structure synthesis.

An alternative is using a functional structure, where each agent is appointed to a single principal (a single project or type of work). To find an optimal functional structure, one must solve the assignment problem. Let us formulate it rigorously.

Assume that the agents' cost functions are separable, i.e., $c_i(y) = \sum_{j \in M} c_{ij}(y_{ij})$. Then the problem of an optimal functional structure lies in partitioning the set of agents N into m nonempty subsets $S = \{S_j\}_{j \in M}$ such that the total costs to perform the whole amount of work in the project are minimal. The work of a specific type is allocated among the elements of the corresponding subset by analogy with the problem of optimal matrix structure synthesis: the assignment problem for work j among the elements of the set $S_j \subseteq N$ consists in minimization of the total costs $\sum_{i \in S_j} c_{ij}(y_{ij})$ under the condition $\sum_{i \in S_j} y_{ij} = v_j$, where $y_{S_j}$ indicates the action vector of the agents belonging to the set $S_j$, $j \in M$. Denote by $C_j(S_j, v_j)$ the minimal value of the total costs to perform work j. Then the problem of an optimal functional structure is to find a partition S minimizing the sum of the costs $\sum_{j \in M} C_j(S_j, v_j)$ over all works. Denote by $C(v)$ the resulting minimal total costs.
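For small systems, the optimal functional structure can be found by brute force: enumerate the assignments of agents to works and, inside each group, use the closed-form allocation for quadratic costs derived above. The numbers are illustrative:

```python
from itertools import product

r = [[1.0, 2.0], [2.0, 3.0], [1.5, 1.0]]  # r_ij for 3 agents, 2 works (assumed)
v = [1.5, 0.5]

def C_functional(assign):  # assign[i] = the single work given to agent i
    total = 0.0
    for j, vj in enumerate(v):
        S_j = [i for i, a in enumerate(assign) if a == j]
        if not S_j:
            return float("inf")  # every work needs a nonempty group
        total += vj ** 2 / (2 * sum(r[i][j] for i in S_j))  # C_j(S_j, v_j)
    return total

best = min(product(range(len(v)), repeat=len(r)), key=C_functional)
print(best, C_functional(best))
```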

Evidently, within the framework of the models studied, we have $C(v) \ge C_0(v)$, i.e., the costs of an optimal functional structure are not smaller than the costs of an optimal matrix

structure. The efficiencies of functional and matrix structures ($C(v)$ and $C_0(v)$, respectively) serve as indirect estimates of the maximal extra control costs justified by passing from a linear (functional) control structure to a matrix control structure. Let us elucidate this. As is well known, a functional structure requires minimal control costs (self-functioning), yet it leads to an inefficient assignment of works among agents. A matrix structure, on the other hand, ensures a more efficient allocation of works, but results in higher control costs. Thus, when choosing (or changing) a structure, one should consider both factors, viz., the control costs and the efficiency of work allocation. Moreover, many real organizations include both matrix and linear substructures; balancing between these types is possible, see the approach described above.

In the mathematical sense, the problem of an optimal functional structure appears rather complicated; solving it for sufficiently large values of m and n can be extremely difficult. Therefore, to make qualitative conclusions, we introduce a series of simplifying assumptions. Consider a special case when the number of agents coincides with the number of work types, the agents' costs are separable, and the incremental costs $c_{ij}$ of agent i to perform work j are constant ($i \in N$, $j \in M$). Consequently, the subsets in the partition S become singletons, and the problem of optimal matrix structure synthesis takes the following form: minimize the total costs $\sum_{i \in N} \sum_{j \in M} c_{ij} y_{ij}$ provided that all works are completely fulfilled, $\sum_{i \in N} y_{ij} = v_j$ for all $j \in M$.

The problem of an optimal allocation of work among agents is then reduced to the following standard assignment problem:

$\sum_{i \in N} \sum_{j \in M} c_{ij} v_j x_{ij} \to \min_{\{x_{ij} \in \{0; 1\}\}}$

under the constraints $\sum_{i \in N} x_{ij} = 1$ ($j \in M$), $\sum_{j \in M} x_{ij} = 1$ ($i \in N$); see Appendix 2.

Due to the linearity of the minimized expression, the solution to the problem of an optimal allocation of work among agents appears trivial: $y_{ij} = v_j$ if $i = \arg\min_{i \in N} c_{ij}$ (and $y_{ij} = 0$ otherwise). In other words, the whole amount of work j is given to the agent performing it with the minimal incremental (marginal) costs. It is possible that all works are allocated to the same agent; such an assignment would be optimal in the sense of the total costs criterion, yet unrealizable in practice. To avoid this trivial (and sometimes non-implementable) solution, let us introduce upper bounds $Y_i$ on the maximal total amount of work agent i can perform ($i \in N$). In this case, the problem is reduced to the following transport problem (see Appendix 2): minimize $\sum_{i \in N} \sum_{j \in M} c_{ij} y_{ij}$ under the constraints $\sum_{i \in N} y_{ij} = v_j$, $j \in M$, and $\sum_{j \in M} y_{ij} \le Y_i$, $i \in N$ (the problem being feasible if $\sum_{i \in N} Y_i \ge \sum_{j \in M} v_j$).

The studied problems have been formulated in the single-project case. Similarly, one can pose and solve the problems of optimal (matrix and linear) structures when an organizational system implements sequentially a set of projects with given parameters (or parameters described by statistical data). Accordingly, a matrix structure corresponds to variable allocations of works among agents (depending on the project being implemented). In this


context, a matrix control structure (defined by solving the assignment problems at each step) is close to a network structure. On the other hand, a linear structure corresponds to allotting agents to specific principals (types of work). The efficiency of a certain structure in the dynamics can be estimated by the sum (or mathematical expectation, if the parameters of work are unknown) of the costs to implement the whole set of projects during a given period. The conclusion that a matrix structure leads to (at most) the same costs of agents as a linear structure holds true in the dynamics.⁴

Consider an example. There are two types of work and two agents, whose incremental costs are defined by the matrix $\begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix}$; $Y_1 = Y_2 = 1$ are the constraints on the amounts of work. Assume there exists a flow of 60 projects with amounts $(v_1, v_2)$ uniformly distributed on the domain $v_1 + v_2 \le 2$. Table 7.1 combines the mean values (over the 60 projects) of the costs obtained in numerical simulation. Figure 7.13 demonstrates the curves of the functions $(c_1 / c) - 1$ and $(c_2 / c) - 1$, characterizing the losses in efficiency due to a constant linear structure (one independent of the specifics of the project implemented).

Therefore, posing and solving the assignment problems enables (a) assessing the comparative efficiency of different structures and the rules of their transformation and (b) choosing an optimal or rational organizational structure depending on the set of projects being implemented within a corporate program.

The model of distributed control. The analysis results for systems with distributed control, where the same agent is simultaneously subordinated to several principals (see Section 2.9), testify that there exist two modes of interaction between principals, viz., a cooperation mode and a competition mode. In cooperation mode, an agent chooses an action beneficial (in a certain sense) to all principals, and the principals control the agent jointly. This situation corresponds to a matrix control structure.

Table 7.1. The mean costs

Constant linear structure 1: 3.08
Constant linear structure 2: 3.02
Optimal matrix structure with upper bounds on the amount of work: 2.81
Optimal linear structure without upper bounds on the amount of work: 2.76
Optimal matrix structure without upper bounds on the amount of work: 2.29

⁴ Note that we do not consider the costs to modify the organizational structure (indeed, using an optimal matrix structure requires involving the corresponding optimal structure for each new project). The models taking into account the costs to "reform" organizational structures are discussed in [19, 37].


Figure 7.13. The relative efficiency of a constant linear structure.

In competition mode, an agent is controlled by a single principal, which is determined by analyzing the auction equilibrium in the game of principals. This situation corresponds to a linear (fan) control structure.

Cooperation mode (and a matrix structure) is feasible under the condition of a nonempty domain of compromise. The domain of compromise appears nonempty iff the maximal value (over the agent's actions) of the sum of the goal functions of all system participants (the principals and the agent) is not smaller than the sum of the maximal values of the principals' goal functions, where each maximum is evaluated under the assumption that the corresponding principal controls the agent independently (see Section 2.9).

Suppose that the goal functions and feasible sets of the system participants depend on some parameters. In this case, it is possible to study the relationship between the system structure and these parameters. Notably, for the parameter values satisfying the above-mentioned condition, one should implement a matrix structure (and a linear structure for the remaining values). The costs to modify these parameters being known, one can pose and solve the development problem: find an optimal change of the parameters, taking into account the costs to modify them and the efficiency of the structures [44].

Consider an example illustrating the application of this approach in the case of an organizational structure with distributed control, which consists of an agent and two principals. The agent's strategy lies in choosing an action $y \in [0; 1]$, interpreted as the share of his/her working time devoted to principal 1; hence, $(1 - y)$ specifies the share of the working time devoted to principal 2. The principals gain incomes depending on the working time of the agent: $H_1(y) = y$, $H_2(y) = 1 - y$. On the other hand, the agent incurs the costs $c(y) = \lambda y^2 / 2 + (1 - y)^2 / 2$, where $\lambda \ge 0$. The minimum of the agent's cost function is attained at the action $1/(1 + \lambda)$.

Define the agent's action most beneficial to principal 1 (as maximizing the difference $H_1(y) - c(y)$):

$y_1^* = \begin{cases} 1, & \lambda \le 1, \\ \dfrac{2}{1 + \lambda}, & \lambda \ge 1. \end{cases}$

For principal 1, the payoff makes up

$W_1 = \begin{cases} 1 - \lambda/2, & \lambda \le 1, \\ \dfrac{3 - \lambda}{2(1 + \lambda)}, & \lambda \ge 1. \end{cases}$

Now, find the agent's action most beneficial to principal 2 (as maximizing the difference $H_2(y) - c(y)$): $y_2^* = 0$. Accordingly, the payoff of principal 2 constitutes $W_2 = 1/2$. Evaluate the action $y_0$ maximizing the expression $H_1(y) + H_2(y) - c(y)$: $y_0 = 1/(1 + \lambda)$. Compute the following quantity (see Section 2.9):

$W_0 = H_1(y_0) + H_2(y_0) - c(y_0) = \dfrac{\lambda + 2}{2(\lambda + 1)}$.

The condition of a nonempty domain of compromise (feasibility of a matrix structure) has the form $W_1 + W_2 \le W_0$. Since $W_1$ and $W_0$ depend on λ, one can find the ranges of this parameter for which the condition holds. The following cases are possible:

1) $\lambda \le 1$: then $W_1 + W_2 \ge W_0$ and $W_1 \ge W_2$; hence, in this range a linear structure is optimal, where the agent is subordinated to principal 1;
2) $\lambda \in [1; 2]$: then $W_1 + W_2 \ge W_0$ and $W_2 \ge W_1$; hence, in this range a linear structure is optimal, where the agent is subordinated to principal 2;
3) $\lambda \ge 2$: then $W_1 + W_2 \le W_0$; hence, in this range a matrix structure is optimal.

By analogy, one can perform the parametric analysis in the dynamics. The knowledge of the optimality domains for different structures (under a forecast of changes in the essential parameters) allows synthesizing a priori an organizational structure with the maximal (expected, feasible, etc.) efficiency.
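The three regimes can be checked numerically from the formulas above:

```python
def regime(lam: float) -> str:
    W1 = 1 - lam / 2 if lam <= 1 else (3 - lam) / (2 * (1 + lam))
    W2 = 0.5
    W0 = (lam + 2) / (2 * (lam + 1))
    if W1 + W2 <= W0:
        return "matrix structure"
    return "linear: principal 1" if W1 >= W2 else "linear: principal 2"

for lam in (0.5, 1.5, 3.0):
    print(lam, regime(lam))
# -> linear: principal 1, linear: principal 2, matrix structure
```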

7.4. NETWORK STRUCTURES

The majority of models discussed in control theory for socio-economic systems proceed from a given subordination of the participants. Let us describe the difference between the "roles" of OS participants from the game-theoretic viewpoint (see Appendix 1). The qualitative distinction between hierarchical games [18] and "standard" nonantagonistic games [17, 46, 55] consists


in the following. The former sort OS participants by the sequence of moves. Recall that an agent’s strategy is a rule used by him/her to choose actions depending on the information being available at the moment of decision making (see Appendix 1). Traditionally, in twolevel systems a principal possesses the right for the first move, i.e., chooses the strategy first and reports it to controlled subjects (agents). A principal may expect to know the agent’s action; thus, a principal chooses his/her strategy as in a “common” game (i.e., mapping the whole available information into the set of actions), as a “function” of the agent’s choice or in a more complex form (see Appendix 1 and [18]). Hence, a principal becomes a meta-agent, defining the “rules of play” for the rest agents (the decision-making power authority). Therefore, a specific participant of a two-level OS belongs to the set of principals if he/she possesses a priority in the sequence of moves and is able to choose his/her strategy as a “function” of actions (in the general case, strategies) of agents having a lower priority. This idea can be extended to the case of multilevel systems. For instance, consider an OS whose participants make decisions successively and there exist three “moments” of decision making. Such an OS can be treated as a three-level hierarchical system. The participants making the first move are interpreted as high-level principals (meta-principals), the ones making the second move represent intermediate-level principals (or simply “principals”), while the participants choosing their actions last are controlled subjects (agents). Here the strategies of meta-principals can be functions of intermediate-level principals ad control subjects, and so on. Consequently, in the game-theoretic model an hierarchical structure of an OS is generated by fixing the sequence of moves, the properties of the sets of feasible actions and awareness of the participants (see Appendix 1). Therefore, in the process of network interaction (see the introduction to this chapter) each participant can generally act as a principal at a certain hierarchical level or as an agent located at the lower level. The actual role of any participant is defined by two factors. The first one consists in the influence of the power authority, i.e., in the institutional feasibility of a specific participant to play a certain role. The second factor lies in reasonability (economic efficiency) of this role, both for the participant in question and for other participants. Let us fix the exogenously given power authority. Consider the efficiency of different roles played by OS participants (in a network structure as a temporary hierarchy constructed from a degenerate structure). In other words, suppose there are several agents (OS participants), each being able (a) to choose his/her strategy at a specific moment and (b) to make his/her action dependent on the strategies of participants choosing their actions afterwards (depending on the sequence of moves adopted in the OS). As the result, one obtains a metagame–a game defining the roles of participants (assume that their payoffs can be evaluated for each fixed role). Such approach can be treated as modeling of selforganization processes in organizational systems. Consider an example of a control mechanism in network structures, viz., the transferpricing mechanism (see Section 3.4), which defines an optimal fan structure. 
In other words, this control mechanism serves to decide which agent (from a given set) should be the principal. There are n agents with the goal functions

fi(λ, yi, ri) = λ yi − ci(yi, ri), i ∈ N.

Here λ stands for the internal price of a unit product manufactured by the agents, yi means the output of agent i, and ri is the efficiency of his/her activity (the parameter of the agent's cost function ci(yi, ri), i ∈ N = {1, 2, …, n}, the latter indicating the set of agents). In practice, the union of agents must ensure the total output R (an external order). Suppose that the agents have the Cobb-Douglas cost functions ci(yi, ri) = ri φ(yi / ri), where φ(⋅) is a monotonic convex function. If an external principal is assigned, then minimization of the total costs of the agents yields the internal price λ = R / H, where H = Σi∈N ri (see Section 3.4).

For simplicity, set φ(z) = z² / 2 and study the problem of an optimal fan structure (recall that a fan structure is a two-level hierarchical structure with a single high-level principal), where an agent appointed to the position of the principal must implement the order by choosing an internal price that is optimal according to his/her view and is the same for him/her and his/her subordinates. Then the principal acts as a negotiator, and the payoff of each system participant (an agent or the principal) is defined by the difference between the internal price of the products manufactured by him/her and his/her costs. Denote by fik(⋅) the goal function of agent i provided that agent k is assigned the principal. The principal's goal function is fk(yk, rk) = λk yk − ck(yk, rk), and the agents' goal functions make up fik(yi) = λk yi − ci(yi, ri), i ∈ N \ {k}. Fix the price λk. Then the action chosen by agent i (i ≠ k) equals yik = λk ri. Hence, the principal has to choose the action yk = R − λk H−k, where Y−k = Σi≠k yi and H−k = Σi≠k ri.

According to the principal, the optimal price maximizes his/her goal function:

λk = RH / (H² − rk²).

Let the efficiency criterion be given by the total value of the goal functions of all n agents entering the OS. Then the solution to the problem of an optimal fan structure lies in assigning as principal the agent with the maximal efficiency (the corresponding interpretations are obvious). For instance, consider 10 agents whose efficiencies are r1 = 1, r2 = 2, …, r10 = 10. Under R = 80, the optimal actions and the total utility of the system participants possess the values in Table 7.2 (each row corresponds to the index of the agent assigned the principal). Moreover, Table 7.2 includes the order price (the product λk R) and the "profit" as the difference between the order price and the total utility of the system participants. According to the principal, the optimal solution is engaging all agents in the order (excluding an agent decreases the order price). Therefore, in the example considered the highest utility of the system participants takes place if agent 10 is the principal (the total utility equals 61.85, see Table 7.2 and Figure 7.14). In the case of an external principal (within the model studied in Section 3.4), the total utility decreases, reaching 58.18. From the customer's view, agent 1 should be assigned the principal, since this would minimize the customer's costs to place the order. Note that the ratio of the total utility to the costs increases with the index of the agent assigned the principal. Thus, if the customer is interested in the maximal "profitability," again agent 10 should be assigned the principal.
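The computations behind Table 7.2 are easy to reproduce. Below is a minimal Python sketch (our own code, not from the book; the function and variable names are illustrative) that evaluates, for every candidate principal, the internal price λk = RH / (H² − rk²), the action vector and the total utility:

    # A sketch (our code) of the first transfer-pricing mechanism with
    # quadratic costs c_i(y_i, r_i) = y_i^2 / (2 r_i).
    def fan_structure(r, R):
        """For each candidate principal k: internal price, actions, total utility."""
        H = sum(r)
        rows = []
        for k in range(len(r)):
            lam = R * H / (H ** 2 - r[k] ** 2)      # optimal internal price
            y = [lam * ri for ri in r]              # subordinates choose y_i = lam * r_i
            y[k] = R - lam * (H - r[k])             # the principal completes the order
            costs = sum(yi ** 2 / (2 * ri) for yi, ri in zip(y, r))
            rows.append((lam, y, lam * R - costs))  # total utility = lam * R - costs
        return rows

    for k, (lam, y, utility) in enumerate(fan_structure(list(range(1, 11)), 80), 1):
        print(f"principal {k}: price {lam:.3f}, total utility {utility:.2f}")
    # Reproduces Table 7.2: the total utility grows from ~58.2 (agent 1 as the
    # principal) to ~61.85 (agent 10 as the principal).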


Table 7.2. Parameters of the transfer-pricing mechanism

 k |  y1   y2   y3   y4   y5   y6    y7    y8    y9   y10 | Total utility | Order price | Profit
 1 | 1.4  2.9  4.4  5.8  7.3  8.7  10.2  11.6  13.1  14.6 |     58.2      |   116.4     |  58.2
 2 | 1.5  2.8  4.4  5.8  7.3  8.7  10.2  11.7  13.1  14.6 |     58.3      |   116.5     |  58.2
 3 | 1.5  2.9  4.1  5.8  7.3  8.8  10.2  11.7  13.1  14.6 |     58.5      |   116.7     |  58.2
 4 | 1.5  2.9  4.4  5.4  7.3  8.8  10.2  11.7  13.2  14.6 |     58.8      |   117.0     |  58.2
 5 | 1.5  2.9  4.4  5.9  6.7  8.8  10.3  11.7  13.2  14.7 |     59.1      |   117.3     |  58.2
 6 | 1.5  2.9  4.4  5.9  7.4  7.9  10.3  11.8  13.3  14.7 |     59.5      |   117.8     |  58.3
 7 | 1.5  3.0  4.4  5.9  7.4  8.9   9.0  11.8  13.3  14.8 |     60.0      |   118.3     |  58.3
 8 | 1.5  3.0  4.5  5.9  7.4  8.9  10.4  10.2  13.4  14.9 |     60.5      |   118.9     |  58.3
 9 | 1.5  3.0  4.5  6.0  7.5  9.0  10.5  12.0  11.3  15.0 |     61.2      |   119.6     |  58.4
10 | 1.5  3.0  4.5  6.0  7.5  9.0  10.5  12.0  13.5  12.3 |     61.9      |   120.3     |  58.5

Here k is the index of the agent assigned the principal, y1, …, y10 are the optimal actions of the agents (the diagonal entries are the principal's own actions), and the profit is the difference between the order price and the total utility.

[Figure 7.14. Efficiency criteria depending on the agent assigned the principal. The figure plots, against the index of the agent assigned the principal, the profit, the costs to place the order, the total utility of the agents, the total costs of the agents, and the total utility of all agents except the principal.]

Thus, in this model the optimal assignment of the principal appears nontrivial and depends on the efficiency criterion used by a decision-maker (see Figure 7.14). Let us study another model of interaction between the OS and an external customer (accordingly, another mechanism of interaction among the system participants). Suppose that one knows a market price λ0 for the unit product (here the market is represented by the customer). Assume that the principal gains the income λ0 R as the result of fulfilling the order R; moreover, the principal incurs the costs ck(yk, rk) and pays the remaining agents by a uniform wage rate λk (his/her costs to motivate the agents constitute λk Y−k). In other words, it has been earlier believed that the principal pays the agents indirectly; now we study the situation when he/she pays the agents explicitly. Therefore, the principal's goal function is fk(yk, rk) = λ0 R − λk Y−k − ck(yk, rk), while the agents' goal functions make up


fik(yi) = λk yi − ci(yi, ri), i ∈ N \ {k}. Note that the action chosen by agent i (i ≠ k) still equals yik = λk ri, while the principal's action is yk = R − λk H−k. According to the principal, the optimal price changes: λk = R / (H + rk). In the case R = 80 and λ0 = 2, the values of the optimal actions, the total utility and the principal's utility for the example considered are presented in Table 7.3. Interestingly, in the second transfer-pricing mechanism a greater share of the order is fulfilled by the principal; at the same time, the allocation of work in the first transfer-pricing mechanism is almost uniform (compare Tables 7.2 and 7.3). Moreover, it turns out that (in the sense of the total utility of all agents) agent 1 must be assigned the principal in the second mechanism (yet, he/she is the least efficient). Such situations happen in real life! In both mechanisms considered, it is possible to reduce the costs to motivate the agents by rejecting the assumption of a uniform wage rate for all agents. In this case, the principal can assign plans to all agents and compensate their costs. The resulting optimal actions would be the same as for a proportional wage rate, the costs would be halved (see Chapter 2), while the total utility of all agents (except the principal) vanishes. Similarly, one can consider the problems of multilevel hierarchical structure synthesis based on transfer-pricing mechanisms (e.g., meta-principals define the amounts of work and the prices for the subordinate groups of principals and agents), as well as extend well-known and intensively investigated control mechanisms for systems with a fixed structure to the case of network interaction.
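Before turning to Table 7.3, the work allocation in the second mechanism can be illustrated with a short sketch (our own code; names are illustrative):

    # A sketch (our code) of the allocation in the second mechanism,
    # where the optimal internal price is lam_k = R / (H + r_k).
    def second_mechanism(r, R):
        H = sum(r)
        for k in range(len(r)):
            lam = R / (H + r[k])               # optimal internal price
            y_k = R - lam * (H - r[k])         # the principal's own output
            print(f"principal {k + 1}: price {lam:.2f}, own share {y_k / R:.0%}")

    second_mechanism(list(range(1, 11)), 80)
    # The principal's own share of the order grows from ~4% (agent 1) to ~31%
    # (agent 10), in contrast to the almost uniform allocation in Table 7.2.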


Table 7.3. Optimal actions and total utility of the agents in the transfer-pricing mechanism

 k |   y1    y2    y3     y4     y5     y6     y7     y8     y9    y10 | Price | Principal's utility | Total utility
 1 | 2.86  2.86  4.29   5.71   7.14   8.57  10.00  11.43  12.86  14.29 | 1.43  |       78.78         |    133.88
 2 | 1.40  5.61  4.21   5.61   7.02   8.42   9.82  11.23  12.63  14.04 | 1.40  |       80.20         |    132.40
 3 | 1.38  2.76  8.28   5.52   6.90   8.28   9.66  11.03  12.41  13.79 | 1.38  |       81.63         |    131.10
 4 | 1.36  2.71  4.07  10.85   6.78   8.14   9.49  10.85  12.20  13.56 | 1.36  |       83.06         |    129.94
 5 | 1.33  2.67  4.00   5.33  13.33   8.00   9.33  10.67  12.00  13.33 | 1.33  |       84.49         |    128.93
 6 | 1.31  2.62  3.93   5.25   6.56  15.74   9.18  10.49  11.80  13.11 | 1.31  |       85.92         |    128.06
 7 | 1.29  2.58  3.87   5.16   6.45   7.74  18.06  10.32  11.61  12.90 | 1.29  |       87.35         |    127.31
 8 | 1.27  2.54  3.81   5.08   6.35   7.62   8.89  20.32  11.43  12.70 | 1.27  |       88.78         |    126.67
 9 | 1.25  2.50  3.75   5.00   6.25   7.50   8.75  10.00  22.50  12.50 | 1.25  |       90.20         |    126.14
10 | 1.23  2.46  3.69   4.92   6.15   7.38   8.62   9.85  11.08  24.62 | 1.23  |       91.63         |    125.72

Here k is the index of the agent assigned the principal; the diagonal entries are the principal's own actions.


Chapter 8


INFORMATIONAL CONTROL¹

¹ This chapter was written with the assistance of Prof. A. G. Chkhartishvili, Dr. Sci. (Phys.-Math.).

The present chapter is dedicated to applied mathematical models of informational control in organizational systems. Prior to the consideration of specific models, let us discuss the meaning of the term "informational control" (the necessary background on the theory of reflexive games is given in Appendix 1).

First, let us endeavor to model the reasoning of a decision-maker, i.e., a principal (see the models of decision making in Section 1.1 and Appendices 1, 3). Suppose that he/she believes the opponents would choose certain actions. Then the principal should choose the best action in the existing situation (under the given action profile of the opponents). However, he/she may treat the opponents as rational (as the principal is). Consequently, he/she assumes that (while choosing their actions) the opponents expect the corresponding behavior from the principal. However, he/she should also take into account that the opponents know that the principal considers them as rational subjects, and so on. In fact, we obtain an infinite chain of "embedded" logical reasoning. How could the infinite chain of reasoning be closed? What decision should be made in the situation of choice? Probably, the most widespread approach lies in involving the concept of the so-called Nash equilibrium. A Nash equilibrium is a strategy profile such that none of the players can benefit by a unilateral deviation (see Appendix 1). In other words, "if all opponents choose the actions from such a strategy profile, I can benefit nothing by unilaterally changing my action" (this statement holds true for each player).

Reflexion. The above-mentioned process and result of an agent's reasoning regarding the opponents' principles of decision making and the actions chosen by them is said to be strategic reflexion [33, 52]. In contrast to strategic reflexion, within the framework of informational reflexion [52] a subject analyzes his/her beliefs about the awareness of other subjects, their beliefs about these beliefs, and so on. The majority of solution concepts in game theory (including that of a Nash equilibrium) imply that the information on the game (i.e., its staff, the sets of strategies and the payoff functions of the players) is a common knowledge among the players. Notably, all players (agents) know the game; everybody knows that all know the game; everybody knows that all players are aware that all players know the game, and so forth (again, this process appears infinite).


Of course, a common knowledge (more specifically, a symmetric common knowledge) is a special case. Generally, the beliefs of agents, their beliefs about the beliefs, …, vary. For instance, an asymmetric common knowledge may take place, where the players understand the game differently (but their subjective understanding forms a common knowledge). In the case of a subjective common knowledge, a player believes a common knowledge takes place (in fact, this could be wrong). A hierarchy of beliefs of agents is said to be an awareness structure. Moreover, a reflexive game² [14, 52] is the model of agents' collective decision-making based on the hierarchy of their beliefs. In such a game each agent models the opponents' behavior using his/her beliefs (thus generating first-level phantom agents, i.e., the agents existing in the minds of real agents). First-level phantom agents model the behavior of their opponents, viz., second-level phantom agents existing in their minds, and so on (exactly the set of mutual beliefs of the real and phantom agents forms an awareness structure). Thus, each agent chooses his/her actions by modeling the interaction with phantom agents (expecting certain actions from the opponents). A stable outcome of such interaction is called an informational equilibrium [14, 52]. However, as soon as real agents have chosen their actions, they can acquire information for estimating (implicitly or explicitly) the actions chosen by the opponents. Therefore, an informational equilibrium can be stable (when all agents, real and phantom ones, make sure their expectations are correct) or unstable (otherwise). Furthermore, there are true and false stable informational equilibria (the former preserve the equilibrium property in the case of complete adequate awareness of the agents, while the latter do not).

² A rigorous definition of a reflexive game (including its properties) is provided in Appendix 1.

Informational control. Let us get back to informational control. An equilibrium in a reflexive game of the agents depends on their awareness structure. By a proper modification of this structure, one can change an informational equilibrium. Hence, we will understand informational control as a control action applied to the awareness structure of the agents to change an informational equilibrium. An informational control problem could be posed in the following (informal, yet qualitative) way: find an awareness structure such that the outcome of the reflexive game of agents would be the most beneficial to a principal (a control subject); note the outcome is calculated according to the concept of informational equilibrium.

To proceed, let us make an important terminological remark. Sometimes informational control is viewed as informational impact, e.g., reporting specific information. In this book, we consider information as a control object (and not as a control means). Notably, we act on the premise that the principal can form a certain awareness structure for the agents (from a given set of structures), and study the result. Beyond our analysis is the issue of how such a structure should be formed.

In each model, the solution procedure for the problem of informational control can be divided into several stages. The first stage (perhaps, the most time-consuming one) lies in modeling the agents' behavior. In fact, one has to study an informational equilibrium, i.e., define the relationship between the outcome of the agents' reflexive game and the structure of their awareness. The second stage consists in solving the control problem: under a given relationship between the informational equilibrium and the awareness structure, find the best feasible


awareness structure (according to the principal's viewpoint). The "best" awareness structure induces the agents to choose the most beneficial action profile (again, according to the principal) as their informational equilibrium. Note that the principal's costs to form such an awareness structure should be considered as well. The third stage is analyzing the properties of informational control: (a) efficiency, defined as the value of the principal's goal function on the set of informational equilibria in the game of agents, (b) stability of the informational equilibrium implemented by the principal, and (c) complexity. The notion of complexity is closely connected with the problem of the maximal reflexion rank; therefore, we address this property of informational control in greater detail.

The problem of the maximal reflexion rank. An awareness structure of agents generally represents an infinite tree: at level 1 there are the beliefs of real agents, at level 2 there are the beliefs of real agents about the beliefs of their opponents (first-level phantom agents), at level 3 there are the beliefs about the beliefs about their beliefs (second-level phantom agents), and so on. Suppose that at a certain level a subtree coincides with the higher-level subtree; in this case, the former subtree (and the descending subtrees) can be rejected. The number of pairwise different subtrees in an awareness structure is said to be its complexity. The maximal depth of the resulting tree (after rejecting all "repetitive" subtrees) is said to be the depth of the awareness structure [14]. The depth of a subtree corresponding to a real agent characterizes his/her reflexion rank (in fact, it exceeds the rank by one). The problem of the maximal reflexion rank includes the following questions. Are there any upper bounds for the reflexion ranks of agents such that a further increase of the ranks appears pointless? What are these bounds in each specific case?

We should clarify the phrase "appears pointless." First, the capabilities of a human being to process data are naturally limited; making decisions, nobody can perform reflexion "ad infinitum." Unfortunately, no rigorous results have been derived in this field to date; practice indicates that people scarcely perform reflexion with a rank exceeding 2-3 (the maximal estimate achieved in experiments is 5-7). Second, within the framework of many mathematical models, increasing the depth of an informational structure in excess of a certain threshold does not yield new informational equilibria [52]. According to the agents' viewpoint, this means that exceeding this reflexion rank turns out unreasonable. For the principal this means that (without any loss in the efficiency of informational control) he/she can consider only the class of awareness structures whose depth is limited by the above threshold. Therefore, an important result in the analysis of informational control problems lies in the evaluation of the maximal reflexion rank of agents (the maximal reasonable rank) such that a principal need not consider higher ranks to form their awareness structures.

Applied models. Chapter 8 deals with general statements and solutions for the problem of informational control in several practical situations. Let us outline the models studied. In Section 8.1 we discuss the model of manufacturers and intermediate sellers, which involves an agent manufacturing a certain product and a principal acting as an intermediate seller between the agent and the market. The intermediate seller is supposed to know the exact market price (actually, the manufacturer does not know it).


The manufacturer and the intermediate seller a priori agree about the income sharing proportion. Afterwards, the intermediate seller reports to the manufacturer (not necessarily true) information on the market price. Finally, the manufacturer chooses an output (the amount of products to be manufactured). The intermediate seller's choice of the message regarding the market price can be viewed as informational control. Stable informational control guarantees that the real income of the manufacturer coincides with the expected one (based on the message reported by the intermediate seller). It appears that (by a proper choice of informational control) the intermediate seller ensures his/her maximal income irrespective of the sharing proportion (in other words, the intermediate seller can agree with any offer of the manufacturer, since he/she would be able to gain almost any payoff). Interestingly, sometimes the manufacturer makes higher profits than in the case of truth-telling by the intermediate seller.

In the model of corruption (see Section 8.2), each governmental official possesses subjective beliefs about the penalties for bribery (these penalties depend on the "total level of corruption"). If the officials observe the total level of corruption, then in a stable outcome this level is independent of the mutual beliefs of the corrupt officials about their types. Whether such beliefs are true or false is actually not important. Consequently, it seems impossible to influence the level of corruption only by modifying the mutual beliefs of the officials; therefore, any stable informational control leads to the same level of corruption.

Section 8.3 focuses on the model of team building, where the uncertain parameters are the efficiency levels of the agents' activity. A team is a collective (a union of people performing a joint activity and possessing common interests) able to achieve a goal in an autonomous and self-coordinated way under minimal control actions. The following couple of aspects are essential in the definition of a team. The first aspect concerns goal achievement, i.e., a final result of a joint activity represents the unifying factor for any team. The second aspect is related to the autonomy and self-coordination of the team activity; notably, each member of a team shows the behavior required under specific conditions (leading to the posed goal), i.e., the behavior expected by the other team members. Notwithstanding the numerous qualitative discussions available in the scientific literature, today one would hardly find formal models of team building with non-trivial mutual beliefs. Thus, in Section 8.3 we suggest a model of team building based on the hierarchies of mutual beliefs of the agents about the efficiencies of their individual activity. According to the existing beliefs, each agent can forecast the actions to be chosen by other agents, the individual "costs" to be incurred by them and the total costs of the team. Assume that the actions are chosen repeatedly and the reality observed by a certain agent differs from his/her beliefs. Then the agent has to correct the beliefs and use the "new" ones in his/her subsequent choice. The analysis of informational equilibria shows the following. It is reasonable to consider a team as a set of agents whose choices agree with the hierarchy of their mutual beliefs about each other. As a matter of fact, such a definition of a team turns out close to the notions of stability and coordination of informational control (both require that the real actions or payoffs of the agents coincide with their expected counterparts).


Moreover, an interesting conclusion is that the stability of a team and the self-coordination of its functioning can be ensured under false beliefs of the agents about each other. To disturb a false equilibrium, one should report to the agents additional information about them. The conducted analysis makes it possible to draw the following conclusion: the models of team building and functioning described in terms of reflexive games reflect the autonomy and coordination of the team activity. Furthermore, they allow for posing and solving control problems for the team building process. Control capabilities include, first, creating various situations of activity (to identify essential characteristics of the agents, a learning model) and, second, ensuring the maximal communication and access of team members to all essential information.

In Section 8.4 we study the hustings model; here informational control is to persuade the voters supporting certain candidates that the latter would not be elected (and that the voters should support other candidates). The resulting informational equilibrium can be stable (and even true). Finally, Section 8.5 is devoted to the model of product advertising; an agent makes the decision regarding product purchase based not only on his/her beliefs, but also on information about the share of agents willing to purchase the product or expecting this agent to purchase the product. Most real advertising campaigns can be described within the framework of the model of informational control with reflexion rank 1 or reflexion rank 2 of the agents.

8.1. MANUFACTURERS AND INTERMEDIATE SELLERS

Let us consider a situation with two participants, viz., an agent manufacturing a certain product and a principal being an intermediate seller of the product. They interact as follows:

1) they agree about the proportions ξ and (1 − ξ) to share the income between the manufacturer and the intermediate seller, respectively, ξ ∈ (0; 1);
2) the intermediate seller reports to the manufacturer the estimate θ̃ of the market price θ;
3) the manufacturer makes a certain amount of the product y ≥ 0 and passes it to the intermediate seller;
4) the intermediate seller vends the product at the market price and gives the income ξ θ y to the manufacturer (the intermediate seller keeps the income (1 − ξ) θ y).

The model proceeds from the assumption that the intermediate seller knows the exact market price (in contrast, the manufacturer possesses no a priori information about the price). The manufacturer is characterized by the cost function c(y), relating the product output to the corresponding manufacturing costs. Suppose there exist no output constraints; any amount of the product can be manufactured.

In the stated situation, one would identify three key parameters, notably, the share ξ, the price θ and the product output y. Both sides negotiate the share in advance, the price is reported by the intermediate seller, while the manufacturer chooses the product output. Now, let us analyze the behavior of the participants as soon as the shares ξ and (1 − ξ) have been settled. Striving to maximize his/her profits, the manufacturer chooses the product output y* depending on his/her cost function, his/her share and the market price reported by the intermediate seller. Assume that the manufacturer trusts the intermediate seller, and the former has no opportunity to verify the truth-telling of the latter. In this case, the intermediate seller may generally report a certain value θ̃ not coinciding with the actual value θ of the market price. The choice of the message θ̃ by the intermediate seller can be treated as informational control. Finally, imagine that the intermediate seller adheres to stable informational control; in other words, he/she tries to guarantee to the manufacturer exactly the income expected by the latter on the basis of the value θ̃. According to the above assumptions, the goal functions of the intermediate seller and the manufacturer make up

f0(y, θ̃) = θ y − ξ θ̃ y,  f(y, θ̃) = ξ θ̃ y − c(y).

Note these goal functions are rewritten taking into account the stabilization effect (income reallocation by the principal, i.e., the intermediate seller). This is done for control stability, see the introduction to the present chapter. Require the cost function to be such that the manufacturer's profits (the difference between the income and the costs) attain a unique maximum at a point y* = y*(θ̃) > 0. It suffices that the function is twice differentiable and satisfies the conditions c(0) = c′(0) = 0, c′(y) > 0, c″(y) > 0 for y > 0, and c′(y) → ∞ as y → ∞. Moreover, suppose that the following property takes place: (y c′(y))′ is a continuous increasing function tending to infinity as y → ∞. Then the following statements hold true.

1) By choosing an optimal value θ̃ (according to his/her viewpoint), the intermediate seller can ensure the maximal value of his/her goal function irrespective of ξ.
2) There exists a quantity ξ* = ξ*(θ) such that
a. if ξ = ξ*, then truth-telling is optimal for the intermediate seller (i.e., θ̃ = θ);
b. if ξ < ξ* (ξ > ξ*), then the manufacturer gains larger (smaller) profits as against the profits ensured in the case θ̃ = θ (i.e., under truth-telling of the intermediate seller).
3) If and only if the cost function is power-type, c(y) = k y^α (k > 0, α > 1), the value ξ* is constant and independent of the price θ: ξ* = 1/α.

Proof. Under the message θ̃ reported by the intermediate seller, the manufacturer maximizes his/her goal function by choosing the product output ỹ = arg max_{y ≥ 0} f(y, θ̃), i.e., using the condition c′(ỹ) = ξ θ̃.

Substitute ỹ into the goal function of the intermediate seller. Applying the first-order necessary optimality conditions and the expression dỹ/dθ̃ = ξ / c″(ỹ), one obtains the equation

c′(ỹ) + ỹ c″(ỹ) = θ.    (1)

This equation admits a unique solution ỹ (the product output ỹ depends only on the actual market price θ), and the corresponding optimal message of the intermediate seller is

θ̃ = c′(ỹ) / ξ.    (2)

Clearly, the resulting utility of the intermediate seller, f0(ỹ, θ̃) = ỹ (θ − c′(ỹ)), appears independent of the share ξ. Furthermore, the manufacturer's profits do not depend on ξ either:

f(ỹ, θ̃) = ỹ c′(ỹ) − c(ỹ).    (3)

Evaluate ξ* in the following way:

ξ* = c′(ỹ) / θ.    (4)

Compare (2) and (4) to observe that, if ξ = ξ*, the message θ̃ = θ is optimal for the intermediate seller. Now, let ξ < ξ*. Then formulas (2) and (4) imply that the optimal message of the intermediate seller constitutes

θ̃ = (ξ*/ξ) θ > θ.    (5)

If the intermediate seller reports θ instead, the manufacturer chooses y* by solving the equation

c′(y*) = ξ θ.    (6)

Therefore, the manufacturer gains the profits

f(θ) = y* c′(y*) − c(y*).    (7)

Compare (2), (5) and (6) to see that ỹ > y* (recall c′(y) is an increasing function). Next, one would easily show that the function y c′(y) − c(y) increases. Hence, formulas (3) and (7) indicate that the message θ reduces the profits of the manufacturer (in comparison with the message θ̃). Similarly, it is possible to prove that, if ξ > ξ*, the converse statement holds true, i.e., the manufacturer's profits are higher under the message θ than under the message θ̃.

Let us establish the condition to be imposed on the cost function c(y) for making the right-hand side of (4) independent of θ. It follows from (1) that it suffices to have (a) c′(ỹ)/θ = k1 and (b) ỹ c″(ỹ)/θ = 1 − k1 (k1 is a constant). Divide (b) by (a) to derive the differential equation

y c″(y) − k2 c′(y) = 0,    (8)

where k2 = (1 − k1)/k1 stands for an arbitrary constant. Solve equation (8) to obtain c(y) = k y^α (k > 0, α > 1). Use formulas (1) and (4) to check that ξ* = 1/α.
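The statements can be verified numerically. Here is a short grid-search sketch (our own code) for the quadratic cost function c(y) = y²/2, for which α = 2 and ξ* = 1/2:

    # A grid-search check (our code) of the intermediate seller's optimal
    # message for c(y) = y^2 / 2 (alpha = 2, xi* = 1/2).
    import numpy as np

    theta = 10.0                                  # the actual market price
    msgs = np.linspace(0.01, 40.0, 40001)         # candidate messages
    for xi in (0.3, 0.5, 0.8):
        y = xi * msgs                             # manufacturer's response: c'(y) = xi * msg
        seller = theta * y - xi * msgs * y        # f0 = theta * y - xi * msg * y
        manuf = xi * msgs * y - y ** 2 / 2        # f = xi * msg * y - c(y)
        i = int(np.argmax(seller))
        print(f"xi = {xi}: best message {msgs[i]:.2f} "
              f"(theory {theta / (2 * xi):.2f}), payoffs "
              f"{seller[i]:.2f} / {manuf[i]:.2f}")
    # For each xi the best message equals theta / (2 * xi); truth-telling
    # arises only at xi = 1/2, while the payoffs (25.0 for the seller and
    # 12.5 for the manufacturer) do not depend on xi.

In line with statement 2, for ξ < 1/2 the optimal message exceeds the actual price, and the manufacturer's profits are then higher than under truth-telling.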

8.2. CORRUPTION

Consider the following game-theoretic model of corruption. There are n agents (government officials); the additional income of each official is proportional to the total bribe xi ≥ 0 taken by him or her, i ∈ N = {1, …, n}. We will assume that a bribe offer is unbounded. Let every agent be characterized by the type ri > 0, i ∈ N, which is known to him or her (and not to the other agents). The type may be interpreted as the agent's subjective perception of the penalty "strength." Irrespective of the scale of corruption activity (xi ≥ 0), agent i may be penalized by the function χi(x, ri), which depends on the actions x = (x1, x2, …, xn) ∈ ℜⁿ₊ of all agents and on the agent's type. Consequently, the goal function of agent i is defined by

fi(x, ri) = xi − χi(x, ri), i ∈ N.    (1)

Suppose the penalty function has the form

χi(x, ri) = χi(xi, Qi(x−i), ri).    (2)

Formula (2) means that the penalty of agent i depends on his or her action and on the aggregated opponents' action profile Qi(x−i) (in the view of agent i, this is the "total level of corruption of the rest of the officials"). Assume that the number of agents and the general form of the goal functions are a common knowledge; moreover, each agent has a hierarchy of beliefs about the parameter r = (r1, r2, …, rn) ∈ ℜⁿ₊. Denote by rij the belief of agent i about the type of agent j, by rijk the belief of agent i about the beliefs of agent j about the type of agent k, and so on (i, j, k ∈ N). In addition, assume that the agents observe the total level of corruption. Thus, informational equilibrium stability takes place under any beliefs about the types of real or phantom agents such that the corresponding informational equilibrium leads to the same value of the aggregate Qi(⋅) for any i ∈ N. In this case, the goal functions (1)-(2) of the agents obviously satisfy the sufficient conditions of a true and stable equilibrium [52]. Hence, for any number of agents and any awareness structure, all stable equilibria of the game are true. In other words, the following statement holds: let the set of actions x*σ, σ ∈ Σ⁺ (Σ⁺ indicates the set of all finite sequences of indexes from N), be a stable informational equilibrium in the game (1)-(2); then this equilibrium is true. Consequently, the level of corruption in a stable opponents' action profile does not depend on the mutual beliefs of the corrupt officials about their types. Whether these beliefs are adequate or not appears unimportant. It is then impossible to influence the level of corruption only by modifying the mutual beliefs. Therefore, any stable informational equilibrium results in the same level of corruption.

Suppose that

χi(xi, Qi(x−i), ri) = xi (Qi(x−i) + xi) / ri,  Qi(x−i) = Σj≠i xj,  i ∈ N,

and all types are identical: r1 = … = rn = r. Evidently, the equilibrium actions of the agents are xi = r / (n + 1), i ∈ N, while the total level of corruption constitutes Σi∈N xi = n r / (n + 1). The latter quantity may be changed only by a direct impact on the agents' types.
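This equilibrium is easily confirmed by a damped best-response iteration (our own code; the damping coefficient is introduced here for convergence only and is not part of the model):

    # A sketch (our code) of damped best-response dynamics for the corruption
    # game with chi_i = x_i * (Q_i + x_i) / r and identical types.
    def corruption_equilibrium(n, r, gamma=0.3, steps=500):
        x = [0.0] * n
        for _ in range(steps):
            total = sum(x)
            # best response to Q_i = total - x_i is x_i = (r - Q_i) / 2
            br = [max(0.0, (r - (total - xi)) / 2) for xi in x]
            x = [xi + gamma * (bi - xi) for xi, bi in zip(x, br)]
        return x

    x = corruption_equilibrium(n=5, r=1.0)
    print([round(v, 4) for v in x], round(sum(x), 4))
    # Converges to x_i = r / (n + 1) = 0.1667 for each agent and to the total
    # level n * r / (n + 1) = 0.8333.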

8.3. TEAM BUILDING

Consider the set of agents N = {1, 2, …, n}. The strategy of agent i is choosing an action yi ≥ 0, which requires the costs ci(yi, ri). Here ri > 0 means the type of the agent, reflecting the efficiency of his/her activity³. In the sequel, we study the Cobb-Douglas cost functions ci(yi, ri) = yi^α ri^(1−α) / α, i ∈ N. Let the goal of the agents' joint activity lie in ensuring the total "action" Σi∈N yi = R under the minimal total costs Σi∈N ci(yi, ri). From the game-theoretic viewpoint, for convenience one may believe that the goal functions of the agents coincide and equal the total costs with negative sign. In practice, this problem possesses the following interpretations: order execution by a production association, performing a given amount of work by a team (a department), and so on.

³ The analysis is premised on the assumption that the costs of an agent are a decreasing function of his/her type.

Without loss of generality, set R = 1. Suppose that the vector r = (r1, r2, …, rn) is a common knowledge. By solving the constrained optimization problem, each agent evaluates the optimal action vector y*(r) = (y1*(r), y2*(r), …, yn*(r)), where

yi*(r) = ri / Σj∈N rj, i ∈ N.    (1)

Now, we discuss several different variants of the agents' awareness about the type vector (the model considered includes the hierarchy of the agents' beliefs about the parameters of each other). Notably, let us be confined to the analysis of two cases. In the first case, each agent has certain first-order beliefs rij > 0 about the types of other agents; in the second one, each agent has certain second-order beliefs rijk > 0 about the types of the agents, i, j, k ∈ N. Making a digression, note there may exist a principal who knows the true types of the agents and performs motivational control. Consequently, regardless of the agents' awareness, each agent would independently choose the corresponding action (1) (see Sections 2.7 and 3.4) if the principal adopts a proportional incentive scheme with the wage rate 1 / Σj∈N rj.

Suppose that each agent knows his/her type. Moreover, the axiom of self-awareness [14, 52] implies that rii = ri, riij = rij, rijj = rij, i, j ∈ N. According to his/her beliefs, each agent can forecast the actions to be chosen by other agents, the individual costs to be incurred by them, and the resulting total costs. Assume that the actions are chosen repeatedly and the reality observed by an agent differs from his/her beliefs. Thus, the agent should correct the beliefs and involve the "new" beliefs for the next choice. The set of parameters observed by agent i is said to be his/her subjective game history and denoted by hi, i ∈ N. Within the framework of the present model, the subjective game history of agent i may include the following information:

1) the actions chosen by other agents (of course, agent i is always aware of the actions chosen by himself/herself): y−i = (y1, y2, …, yi−1, yi+1, …, yn);
2) the actual costs of other agents (and their total costs): c−i = (c1, c2, …, ci−1, ci+1, …, cn);
3) the total costs of all agents: c = Σi∈N ci;
4) the actions and actual costs of other agents (and their total costs): (y−i; c−i);
5) the actions of other agents and the total costs of all agents: (y−i; c).

Apparently, the stated variants are not equivalent. Variant 4 seems the most "informative," variant 3 is less informative in comparison with variant 2, etc. The choice of an awareness variant is a way of informational control by a principal. Two cases of awareness structures (beliefs in the form rij and in the form rijk) and five variants of the subjective game history generate Models 1-10, combined in Table 8.1. We believe that the subjective histories and awareness structures of all agents coincide (otherwise, the number of possible combinations grows exponentially).

Table 8.1. The models of team building

Subjective game history | Awareness structure {rij} | Awareness structure {rijk}
y−i                     | Model 1                   | Model 6
c−i                     | Model 2                   | Model 7
c                       | Model 3                   | Model 8
(y−i; c−i)              | Model 4                   | Model 9
(y−i; c)                | Model 5                   | Model 10

To proceed, let us analyze what decision-making procedures can be used by the agents. Under the awareness structure {rij}, agent i can choose his/her action according to the procedure (1):

yi*({rij}) = ri / Σj∈N rij.    (2)

Alternatively, the agent may first estimate the opponents' actions by means of the procedure (2), then evaluate his/her action leading to the required sum of actions:

yi*({rij}) = 1 − Σk≠i (rik / Σl∈N ril), i ∈ N.    (3)

Evidently, the procedures (2) and (3) are equivalent; a small numerical check is given below.
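    # A tiny check (our code) that procedures (2) and (3) yield the same action.
    r_i = 2.0
    opp_beliefs = [1.3, 2.4, 1.9, 2.8]                  # r_ij for the opponents
    total = r_i + sum(opp_beliefs)                      # the sum of r_il over l in N
    via_2 = r_i / total                                 # formula (2)
    via_3 = 1.0 - sum(b / total for b in opp_beliefs)   # formula (3)
    print(via_2, via_3)                                 # both print 0.19230...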

Under the awareness structure {rijk}, agent i can estimate the opponents' actions using the procedure (1):

yij*({rijk}) = rij / Σl∈N rijl, j ∈ N.    (4)

Then the agent evaluates his/her action ensuring the required sum of actions:

yi*({rijk}) = 1 − Σj≠i (rij / Σq∈N rijq), i ∈ N.    (5)

Thus, we have described the models of the agents' decision-making in statics. Now, let us consider the dynamics of their collective behavior. Suppose that at each step the agents make their decisions simultaneously, using the information about the previous step only. In other words, the subjective game history includes the corresponding values observed during the previous period merely. Such an assumption eliminates the case when decision making takes place on the basis of the whole preceding game trajectory observed by the agent. The underlying reason is that such models of decision making appear extremely complicated and would hardly yield practically relevant conclusions.

Denote by Wiᵗ(hiᵗ) the current goal state of agent i during the time period (step) t, i.e., the beliefs about the opponents' types that could lead to the choices observed by the agent during period t, t = 0, 1, …, i ∈ N. Assume that the agents initially possess the beliefs Ii⁰ and modify them depending on the subjective game history according to the hypothesis of indicator behavior [12]:

Iiᵗ⁺¹ = Iiᵗ + γiᵗ (Wiᵗ(hiᵗ) − Iiᵗ), t = 0, 1, …, i ∈ N,    (6)

where γiᵗ means a vector whose components are real values from the segment [0; 1] ("step sizes"). The beliefs of each agent are described by a finite number of the parameters rij or rijk, i, j, k ∈ N. Hence, we will understand (6) as (the "vector" representation of) the law of independent variations in the awareness structure components.

Now, we can rigorously define the notion of a team. Notably, a team is a set of agents whose choices are coordinated with the hierarchy of their mutual beliefs. In the model under consideration, a team is a set of agents with an awareness structure representing a fixed point of the mapping (6), provided that the actions chosen by the agents according to their awareness structures are given by formula (2) or (5). The introduced definition of a team is close to the properties of stability and coordination of informational control; the latter require that the real actions or payoffs of the agents coincide with the expected actions or payoffs (see the above discussion and [52]). Therefore, in each case the dynamics of the mutual beliefs of the agents is described by the relationship Wiᵗ(⋅) between the current state of the agent's goal and his/her subjective game history.

Model 1. Suppose that agent i with the awareness structure {rij} observes the actions x−i chosen by the opponents.


For agent i, take the set of opponents' types such that their actions chosen according to formula (2) coincide with the observed actions x−i; denote this set by

Ωi¹ = {rij > 0, j ∈ N \ {i} | rij / Σl∈N ril = xj, j ∈ N \ {i}}.    (7)

Next, let wij(x−iᵗ) be the jth projection of the point from the set Ωi¹ being the closest to the point (rijᵗ)j∈N\{i}. Then the dynamics of the beliefs of agent i can be rewritten as

rijᵗ⁺¹ = rijᵗ + γijᵗ (wij(x−iᵗ) − rijᵗ), j ∈ N \ {i}, t = 0, 1, …, i ∈ N.    (8)

And his/her choice of actions would follow the expression (2).

Model 2. Assume that agent i with the awareness structure {rij} observes the costs of other agents c−i. For agent i, take the set of opponents' types such that their costs incurred by the choice of actions according to formula (2) coincide with the observed costs c−i; denote this set by

Ωi² = {rij > 0, j ∈ N \ {i} | cj(rij / Σl∈N ril, rij) = cj, j ∈ N \ {i}}.    (9)

Again, let wij(c−iᵗ) be the jth projection of the point from the set Ωi² being the closest to the point (rijᵗ)j∈N\{i}. Then the dynamics of the beliefs of agent i can be characterized by the procedure (8), and his/her choice of actions agrees with (2). In the sense of informativeness and feasibility of the corresponding system of equations (see formulas (7) and (9)), this case slightly differs from Model 1.

Model 3. Assume that agent i with the awareness structure {rij} observes the costs of all agents c. For agent i, take the set of opponents' types such that the total costs incurred coincide with the observed total costs c; denote this set by

Ωi³ = {rij > 0, j ∈ N \ {i} | ci(yi, ri) + Σj∈N\{i} cj(rij / Σq∈N riq, rij) = c}.    (10)

Similarly, let wij(cᵗ) be the jth projection of the point from the set Ωi³ being the closest to the point (rijᵗ)j∈N\{i}. Then the dynamics of the beliefs of agent i can be modeled by the procedure (8), and his/her choice of actions agrees with (2). As a matter of fact, this case substantially differs from Models 1-2 (in the aspects of informativeness and non-unique solutions to the equation in the definition of the set Ωi³ (see formula (10)), as well as in the aspect of modeling complexity).


Models 4-5 are treated by analogy to Models 1-2; thus, addressing them in greater detail makes no sense.

Model 6. Assume that agent i with the awareness structure {rijk} observes the actions x−i chosen by the opponents. For agent i, take the set of opponents' types such that their actions chosen by the procedure (4) coincide with the observed actions x−i; denote this set by

Ωi⁶ = {rijk > 0, j ∈ N \ {i}, k ∈ N | rij / Σl∈N rijl = xj, j ∈ N \ {i}}.    (11)

Moreover, let wijk(x−iᵗ) be the jkth projection of the point from the set Ωi⁶ being the closest to the point (rijkᵗ)j∈N\{i}. Then the dynamics of the beliefs of agent i can be defined by

rijkᵗ⁺¹ = rijkᵗ + γijkᵗ (wijk(x−iᵗ) − rijkᵗ), j ∈ N \ {i}, t = 0, 1, …, i ∈ N,    (12)

and his/her choice of actions satisfies the expression (5), i.e.,

yiᵗ*({rijkᵗ}) = 1 − Σj≠i (rijᵗ / Σq∈N rijqᵗ), i ∈ N.    (13)

Model 6 appears equivalent to Model 1 in the sense of the description and analysis techniques. On the other hand, Model 7 is equivalent to Model 2, and so on. Therefore, Models 7-10 are not extensively studied here. Thus, for each agent Model 1 includes (n − 1) equations with (n − 1) unknown quantities, Model 2 includes (n − 1) equations with (n − 1) unknown quantities, Model 3 includes 1 equation with (n − 1) unknown quantities, Model 4 includes 2(n − 1) equations with (n − 1) unknown quantities, Model 5 includes n equations with (n − 1) unknown quantities, Model 6 includes (n − 1) equations with n(n − 1) unknown quantities, and so on.

To conclude this section, let us consider the simplest one among Models 1-10, viz., Model 1 in the case of three agents with the separable quadratic cost functions ci(yi, ri) = (yi)² / (2 ri).

Model 1 (an example). It follows from formula (7) that

w13(x2, x3) = x3 r1 / (1 − x2 − x3), w12(x2, x3) = x2 r1 / (1 − x2 − x3),
w21(x1, x3) = x1 r2 / (1 − x1 − x3), w23(x1, x3) = x3 r2 / (1 − x1 − x3),
w31(x1, x2) = x1 r3 / (1 − x1 − x2),


w32(x1, x2) = x2 r3 / (1 − x1 − x2).

Set r1 = 1.8, r2 = 2, r3 = 2.2, and take the initial beliefs of the agents about the types of each other to be identical and equal to 2. The objectively optimal action vector (in the sense of the minimal total costs) makes up (0.30; 0.33; 0.37). Suppose that the agents act in the following way. Based on their own types and their beliefs about the types of the opponents, the agents evaluate the actions attaining the "subjective" minimum of the total costs (i.e., forecast the actions of the opponents) according to the procedure (2). Next, the agents compare the observed actions with the forecasted ones and modify their beliefs about the opponents' types proportionally to the difference between the observed and forecasted actions; the proportionality coefficient constitutes γijᵗ = 0.25, i, j ∈ N, t = 0, 1, …

As a result of this procedure, after 200 steps we obtain the action vector (0.316; 0.339; 0.345) and the following beliefs of the agents about the types of each other: r12 = 1.93 < r2, r13 = 1.94 < r3, r21 = 1.86 > r1, r23 = 2.01 < r3, r31 = 2.02 > r1, and r32 = 2.17 > r2. Despite the evident mismatches between the reality and the existing beliefs of the agents, the outcome appears stable: the expected actions and the observed ones coincide to four digits after the decimal point.
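This run is easy to reproduce; a simulation sketch follows (our own code implementing procedures (2), (7) and (8) for three agents):

    # A simulation sketch (our code) of Model 1 for three agents with
    # quadratic costs c_i = y_i^2 / (2 r_i).
    import numpy as np

    r = np.array([1.8, 2.0, 2.2])           # the true types
    beliefs = np.full((3, 3), 2.0)           # beliefs[i, j] = r_ij
    for i in range(3):
        beliefs[i, i] = r[i]                 # the axiom of self-awareness
    gamma = 0.25

    for t in range(200):
        # each agent chooses an action by procedure (2)
        x = beliefs.diagonal() / beliefs.sum(axis=1)
        for i in range(3):
            others = [j for j in range(3) if j != i]
            rest = 1.0 - x[others].sum()
            for j in others:
                w_ij = x[j] * r[i] / rest    # the solution of system (7)
                beliefs[i, j] += gamma * (w_ij - beliefs[i, j])

    print(x.round(3))                        # close to (0.316, 0.339, 0.345)
    print(beliefs.round(2))                  # stable yet false mutual beliefs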

Now, set r1 = 1.8, r2 = 2, r3 = 2.2 and choose other initial beliefs of the agents about the types of each other: r12⁰ = 2, r13⁰ = 2.5, r21⁰ = 1.5, r23⁰ = 2.5, r31⁰ = 1.5, r32⁰ = 2. Still, the action vector (0.30; 0.33; 0.37) is objectively optimal (in the sense of the minimal total costs). After 200 steps, we arrive at the action vector (0.298; 0.3484; 0.3524) and the following beliefs of the agents about the types of each other: r12 = 2.1 > r2, r13 = 2.12 < r3, r21 = 1.71 < r1, r23 = 2.01 < r3, r31 = 1.85 > r1, and r32 = 2.16 > r2. Again, despite the evident mismatches between the reality and the existing beliefs of the agents, the outcome appears stable: the expected actions and the observed ones coincide to four digits after the decimal point. Under the same initial data, the procedure (8) leads to the action vector (0.318; 0.341; 0.341) and the following beliefs: r12 = 1.93 < r2, r13 = 1.93 < r3, r21 = 1.87 > r1, r23 = 2.00 < r3, r31 = 1.05 < r1, and r32 = 2.2 > r2. Similarly, despite the evident mismatches between the reality and the existing beliefs of the agents, the outcome appears stable: the expected actions and the observed ones coincide to six digits after the decimal point.

The phenomenon of informational equilibrium stability (when the mutual beliefs of the agents do not coincide with the reality) has an easy explanation. The system of equations (7) for all agents with respect to their beliefs and actions possesses non-unique solutions. Indeed, in the case of two agents the system of three equations

r12 / (r1 + r12) = x2,  x1 + x2 = 1,  r21 / (r2 + r21) = x1    (14)

with four unknown quantities r12, r21, x1, x2 admits an infinite set of solutions. Notably, expressing all unknown quantities through x1, one obtains the following family of solutions: r12 = r1 (1/x1 − 1), r21 = r2 x1 / (1 − x1), x2 = 1 − x1, x1 ∈ (0; 1). Substitution of these beliefs of the agents into (2) yields identities. Note that the transition to Model 4, i.e., adding information about the opponents' costs, may considerably reduce the set of solutions of the corresponding system of equations. In this model, simultaneous observation of the costs and actions of an agent enables a unique determination of his/her type (in one step).

We provide an example. There exist two agents with the types r1 = 1.5 and r2 = 2.5. The initial beliefs are r12⁰ = 1.8 and r21⁰ = 2.2 (appreciably "incorrect"). After 200 steps, the resulting mutual beliefs of the agents make up r12 = 1.747 and r21 = 2.147, i.e., still far from the truth. At the same time, the subjectively equilibrium actions constitute x1 = 0.4614 and x2 = 0.5376. The actions observed by the agents form an informational equilibrium: they are coordinated with the individual beliefs of the agents (i.e., satisfy the system of equations (14)).
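A quick numerical check (our own code) confirms that these limit beliefs satisfy the stability condition derived below and that the observations match the forecasts:

    # A check (our code) of the two-agent example against system (14).
    r1, r2 = 1.5, 2.5
    r12, r21 = 1.747, 2.147                   # the limit beliefs reported above
    print(round(r12 * r21, 3), r1 * r2)       # 3.751 vs 3.75 (rounding effects)
    x1 = r1 / (r1 + r12)                      # agent 1's own action
    x2 = r2 / (r2 + r21)                      # agent 2's own action
    forecast = r12 / (r1 + r12)               # agent 1's forecast of x2
    print(round(x1, 4), round(x2, 4), round(forecast, 4))
    # ~0.462, ~0.538, ~0.538: the observed and forecasted actions coincide,
    # so the false beliefs form a stable informational equilibrium.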

[Figure 8.1. The set of subjective equilibria (axes r12 and r21).]

[Figure 8.2. The set of subjective equilibria and their domains of attraction (axes r12 and r21).]


For the example above, the set of subjective equilibria is illustrated by Figure 8.1 (here the circle indicates the initial point, the diamond stands for the actual values of the types, while the arrow shows the direction of variation of the agents' beliefs). The system of equations (14) implies that stable informational equilibria satisfy the following condition:

r12 r21 = r1 r2.    (15)

The set of mutual beliefs (r12; r21) meeting (15) is a hyperbola on the corresponding plane. Figure 8.2 demonstrates an example of such a hyperbola in the case r1 = 2, r2 = 1. The performed analysis makes it possible to define the set of false equilibria (15), as well as to study the corresponding domains of attraction. It follows from (8) that, for identical step sizes γ12ᵗ = γ21ᵗ, the dynamics of the mutual beliefs obeys the equation

r12ᵗ⁺¹ / r21ᵗ⁺¹ = r12ᵗ / r21ᵗ, t = 0, 1, …    (16)

Hence, under fixed and constant "steps" γ, the trajectories of the mutual beliefs are lines passing through the origin. The slope of these lines (i.e., the domains of attraction of the points of intersection with the hyperbola (15)) depends on the initial point. For instance, any initial point lying on the thick line in Figure 8.2 (r12 = r21 / 2) leads to the true equilibrium. This fact is of definite interest in the sense of informational control. For a desired terminal point, the principal may easily evaluate the set of initial points (a line) leading to this terminal point (the agents would definitely reach the equilibrium⁴ required by the principal). Note that the principal must have adequate information about the types of the agents.

The given example leads to the conclusion that stability and congruence of a team can be achieved under false beliefs of team members about each other. Disturbing a false equilibrium requires additional information about the agents. Therefore, the models of team building and functioning described in terms of reflexive games reflect the autonomy and self-coordination of a team. Furthermore, they enable posing and solving control problems for the process of team building. Indeed, the study of Models 1-10 shows that what matters most is which information about the game history is available to the agents. Thus, a control capability lies in (1) creating different situations of activity and (2) ensuring the maximal communication and access to the relevant information. The analysis also testifies that the speed of team building (the rate of convergence to an equilibrium) essentially depends on the parameters {γ} ("step sizes") used in the dynamic procedures of collective behavior of the agents. The impact on these parameters can be treated as control by the principal⁵.

⁴ In the case of variable "steps," the problem is reduced to searching for a trajectory which satisfies (16) and passes through a given point of the set (15).
⁵ Note that increasing the step size improves the rate of convergence; on the other hand, for sufficiently large step sizes the procedure may become unstable.


A natural question arises immediately: how typical is the outcome of a false equilibrium? Let us elucidate this issue by stating the general conditions of its occurrence (within the framework of Model 1). Suppose that the agents' type vector r = (r1, r2, …, rn) is a common knowledge, and there exists a unique optimal action vector y*(r) = (y1*(r), y2*(r), …, yn*(r)). Hence, we have defined n functions φi: r → yi*(r), i ∈ N, mapping the type vector r into an optimal action of agent i (the domains of the functions φi include only the type vectors such that there exists a unique vector of optimal actions). Now, assume that the outcome described above takes place subjectively, i.e., each agent believes that the type vector is a common knowledge. Then the awareness structure of the game is defined by n vectors of the form (ri1, ri2, …, rin), i ∈ N. The informational equilibrium y* = (y1*, y2*, …, yn*) is stable if each agent observes the actions of the opponents expected by him/her. This means the validity of the equalities

φi(rj1, rj2, …, rjn) = yi*, i, j ∈ N.    (17)

The equilibrium y* being arbitrary, formula (17) specifies n² constraints on the awareness structure. Next, if the type of each agent is fixed (and each agent knows his/her type), then to guarantee the expressions (17) one should evaluate n(n − 1) quantities rij, i, j ∈ N, i ≠ j. The system (17) is a fortiori satisfied by the set of quantities rij such that rij = rj for all i and j. Therefore, under a fixed set of types (r1, r2, …, rn), the issue regarding the existence of a false equilibrium is reduced to the following question: does the system (17) admit non-unique solutions? One may advance the following hypothesis: the outcome of a false equilibrium is rather an exception, and its occurrence in the discussed examples is connected with a specific interaction among the agents. This hypothesis is confirmed by some examples in [52] (no false equilibria are found there).

8.4. THE HUSTINGS Let us consider the example of reflexive control in the hustings (an election campaign). There exist three candidacies (a, b, and c). The election procedure is based on the simple majority principle (to win, a candidacy has to poll more than 50 percent of votes. If none of the candidacies has been given the majority of votes, an additional round is organized with other candidacies; denote them by d. Suppose there are three groups of voters whose shares make up 1, 2 and 3, respectively (1 + 2 + 3 = 1). Table 8.2 below shows strict preferences of the groups of voters (this information is a common knowledge). The higher position of a candidacy in the table means he/she is more preferential to the corresponding group of voters. Compare each pair of the candidacies by evaluating the number (share) of voters believing that one candidacy is better than the other: Sab = 1 + 3, Sac = 1, Sba = 2, Sbc = 1 + 2, Sca = 2 + 3, Scb = 3.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Informational Control

235

Table 8.2. The preferences of the groups of voters 1 a b c d

2 b c a d

3 c a b d

Consider the game of voters, where the strategy set of each agent is A = {a, b, c}. Presuming that the vector (1, 2, 3) = (1/3, 1/3, 1/3) is a common knowledge, one obtains that the set of Nash equilibria includes six vectors: (a, a, a)  a, (b, b, b)  b, (c, c, c)  c, (a, b, a)  a, (a, c, c)  c, (b, b, c)  b. Now, let us study the reflexive game, where agent 1 foists on a certain awareness structure on agents 2–3. Agent 1 aims to “elect” the candidacy a. Suppose that the awareness structure corresponds to the graph of the reflexive game6, see Figure 8.3. Group 1 aims, first, to convince group 3 that the most preferential candidacy c (according to their viewpoint) will not be elected (and that this is a common knowledge), and they should support the candidacy a. It suffices that

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

32 + 3 < 1/2, 31 + 3 > 1/2, 31 + 3 + 32 = 1.

(1)

Second, group 1 has to convince group 2 that the candidacy a will be elected and that the actions of group 2 actually affect nothing (group 2 supports the candidacy a in the last place). It suffices that group 2 is adequately informed about the beliefs of group 3 (see Figure 8.3). The election outcome is independent of group 2. Thus, one may believe this group will support the most preferential candidacy (in fact, b); i.e., an informational equilibrium will be defined by the vector (a, b, a). This vector is a stable informational equilibrium. Moreover, since (a, b, a) is a Nash equilibrium under the conditions of a common knowledge (see the discussion above), it is a true equilibrium (although the beliefs of group 3 can be false). 1

2

3

 31

 32

Figure 8.3. The graph of the reflexive game (the hustings). 6

A rigorous description of the graph of a reflexive game is given in Appendix 1.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

236

Dmitry Novikov

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

8.5. PRODUCT ADVERTIZING In this subsection we consider certain models of informational control implemented by the mass media. We involve advertizing and the hustings as the corresponding examples. 1. Suppose there is an agent representing the object of informational impact. The impact should form a required attitude of the agent to a specific object or subject. In the case of advertizing, the agent is a customer, while a certain product or a service acts as the object. It is required that the customer purchases the product or service in question. In the case of the hustings, a voter serves as the agent, and the subject is provided by a candidate. It is required that the voter casts an affirmative vote for the given candidate. Consider agent i. Combine the rest agents to form a single agent (in the sequel, we will use the subscript j for him or her). Denote by    an objective characteristics of the object (being unknown to all agents). For instance, the characteristics could be customer-related properties of the products, personal properties of the candidates, etc. Let i   be the beliefs of agent i about the object, ij   be his or her beliefs about the beliefs of agent j about the object, and so on. For simplicity, let us make the following assumptions. First, the set of feasible actions of the agent consists of two actions, viz, Xi = Xj = {a; r}; the action a (“accept”) means purchasing the product (service) or voting for the candidate, while the action r (“reject”) corresponds to not purchasing the product (service) or voting for alternative candidates. Second, the set  is composed of two elements describing the object’s properties: g (“good”) and b (“bad”), i.e.,  = {g; b}. Below we study several models of agent’s behavior (according to the growing level of their complexity). Model 0 (no reflexion). Suppose the behavior of the agent is described by a mapping Bi() of the set  (object’s properties) into the set Xi (agent’s actions); notably, we have Bi:   Xi. Here is an example of such mapping: Bi(g) = a, Bi(b) = r. In other words, if the agent believes the product (candidate) is good, then he or she purchases the product (votes for this candidate); if not, he or she rejects the product (or the candidate). Within the given model, informational control lies in formation of specific beliefs of the agent about the object, leading to the required choice. In the example above, the agent purchases the product (votes for the required candidate) if the following beliefs have been a priori formed: i = g. Recall the present textbook does not discuss any technologies of informational impact (i.e., the ways to form specific beliefs). Model 1 (first-rank reflexion). Suppose the behavior of the agent is described by a mapping Bi() of the sets   i (object’s properties) and   ij (the beliefs of the agent about the beliefs of the rest agents) into the set Xi of his or her actions, notably, Bi:     Xi. The following mappings are possible examples: Bi(g, g) = a, Bi(g, b) = a, Bi(b, g) = r, Bi(b, b) = r and Bi(g, g) = a, Bi(g, b) = r, Bi(b, g) = a, Bi(b, b) = r. Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Informational Control

237

In the first case, the agent follows his or her personal opinion, while in the second case he or she acts according to the “public opinion” (the opinions of the rest agents). In fact, for the stated model informational impact is reflexive control. It serves to form agent’s beliefs about the object and about the beliefs of the rest agents, leading to the required choice. In the example considered, the agent purchases the product (votes for the required candidate) if the following beliefs have been a priori formed: i = g with arbitrary ij (in the first case) and ij = g with arbitrary i (in the second case). Moreover, informational impact by the mass media not always aims to form ij directly; in the majority of situations, the impact is exerted indirectly when the beliefs about behavior (chosen actions) of the rest agents are formed for the agent in question. Consequently, the latter may use this data to estimate their actual beliefs. Examples of indirect formation of the beliefs ij could be provided by famous advertizing slogans like “Pepsi: The Choice of a New Generation,” “IPod: Everybody Touch” In addition, this could be addressing an opinion of competent people or revelation of information that (according to a public opinion survey) the majority of voters are going to support a given candidate, etc. Model 2 (second-rank reflexion). Suppose the behavior of the agent is described by a mapping Bi() of the sets   i (object’s properties),   ij (the beliefs of the agent about the beliefs of the rest agents) and   iji (the beliefs of the agent about the beliefs of the rest agents about his or her individual interests) into the set Xi of his or her actions, i.e., Bi:       Xi. A corresponding example is the following:

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

    Bi(, , g) = a, Bi(, , b) = r. It demonstrates some properties being uncommon for Models 1 and 2. In this case, the agent acts according to his or her “social role” and makes the choice expected by the others. For this model, informational impact is reflexive control; it consists in formation of agent’s beliefs about the beliefs of the rest agents about his or her individual beliefs (leading to the required choice). In the example above, the agent purchases the product (votes for the required candidate) if the following beliefs have been a priori formed: iji = g. Note that informational impact not always aims to form iji directly. In many situations the impact is exerted indirectly when the beliefs about expectations (actions expected from the agent) of the rest agents are formed for the agent in question. The matter concerns the socalled social impact; numerous examples of this phenomenon are discussed in the textbooks on social psychology (see the models of social impact in [21]). Indirect formation of the beliefs iji could be illustrated by the slogans like “Do you...Yahoo!?”, “It is. Are you??”, “What the well-dressed man is wearing this year” and similar ones. Another example is revelation of information that, according to a public opinion survey, the majority of members in a social group (the agent belongs to or is associated with) would support a given candidate, etc. Therefore, we have analyzed elementary models of informational control by means of the mass media. The models have been formulated in terms of reflexive models of decisionmaking and awareness structures. All these models possess the maximum reflexion of 2. Probably, the exception is a rare situation when informational impact aims to form the whole informational structure (e.g., by thrusting a “common opinion” like “Vote by your heart!”, “This is our choice!” and so on).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

238

It seems difficult to imagine real situations when informational impact aims at the components of the awareness structure that have a greater depth. Thus, a promising direction of further investigations lies in studying formal models of informational control (and the corresponding procedures) for the agents performing collective decision-making under the conditions of interconnected awareness. 2. Now, assume there are two groups of agents. The first group tends to purchase the product regardless of advertizing; the second one would rather not purchase it without advertizing. Denote by   [0; 1] the share of the agents belonging to the first group. The agents in the second group (in fact, their share makes (1 – )) are subjected to the impact of advertizing; however, they do not realize this. We will describe the social impact in the following way. Suppose the agents in the second group choose the action a with probability p() and the action r with probability (1 – p()). The relationship between the choice probability p() and the share of the agents tending to purchase the product reflects agents’ reluctance to be “a square peg in a round hole.” Suppose that the actual share  of the agents belonging to the first group forms a common knowledge. Then exactly  agents will purchase the product; in fact, they observe that the product has been purchased by

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

x() =  + (1 –  ) p ( )

(1)

agents (recall the agents do not realize the impact of advertizing). Since    [0; 1]:   x(), an indirect social impact appears self-sustaining – “Look, it turns out that the product is purchased by more people than we have expected!” Let us analyze asymmetric awareness. The agents of the first group choose their actions independently; hence, they could be treated as being adequately aware of the parameter  and the beliefs of the agents in the second group. Consider the model of informational regulation with the feature that a principal (performing an advertizing campaign) forms the beliefs 2 of the agents in the second group about the value . We digress from the subject for a while to discuss the properties of the function p(). Assume that p() is a non-decreasing function over [0; 1] such that p(0) = , p(1) = 1 –  ; here   [0; 1] and   [0; 1] are constants such that   1 – ). In practice, introducing the parameter  means that some agents of the second group “make mistakes” and purchase the product (even if they think the rest agents belong to the second group). In a certain sense, the constant  characterizes agents’ susceptibility to the impact. Notably, an agent in the second group has a chance to be independent; even if he or she thinks the rest agents would purchase the product, he or she still has a chance to refuse. The special case  = 0,  = 1 corresponds to independent agents in the second group that refuse to purchase the product. The agents do not suspect the principle of strategic behavior. Hence, they expect to observe that 2 agents would purchase the product. Actually, it will be purchased by the following share of the agents: x(, 2) =  + (1 –  ) p (2).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

(2)

Informational Control

239

Let principal’s income be proportional to the share of the agents that have purchased the product. Moreover, suppose that the advertizing costs c(, 2) are a nondecreasing function of 2. Then the goal function of the principal (the income minus the costs) without advertizing is defined by (1). With advertizing being used, it is given by

 (, 2) = x (, 2) – c (, 2).

(3)

Consequently, the efficiency of informational regulation may be specified as the difference between (3) and (1); the problem of informational regulation may be posed as

 (, 2) – x ( )  max .

(4)

2

Now, discuss constraints of the problem (4). The first constraint is 2  [0; 1] (to be more precise, 2  ). Consider an example. Set p() =  , c(, 2) = (2 – ) / 2 a scaling factor. The problem (4) takes the form (1 –  ) (

 2 –  ) – (2 –  ) / 2 r  max .  2 [ ;1]

r , where r > 0 stands for

(5)

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Solution to the problem (5) is provided by 2() = max {; r (1 – )2}; i.e., under  

( 2 r  1)  4 r  1 informational regulation makes no sense for the principal (the advertizing 2r costs are not compensated, since a sufficient share of the agents purchase the product without advertizing). In addition to the constraint 2  [; 1], let us require stability of informational regulation. Suppose that the share of agents purchasing the product is observable. Moreover, assume that the agents in the second group should observe the share of agents purchasing the product in a quantity not smaller than the value reported by the principal. That is, the stability condition could be rewritten as x(, 2)  2. Use formula (2) to obtain

 + (1 –  ) p (2)  2.

(6)

Hence, an optimal stable solution to the problem of informational regulation would be a solution to the maximization problem (4) under the constraint (6). Apparently, any informational regulation in the above example is stable in the sense of (6). Stability could be either viewed as a perfect coincidence between the expected results and the results observed by the agents (i.e., the constraint (6) becomes the equality). In this case,

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

240

Dmitry Novikov

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

the only stable informational regulation is principal’s message that all agents belong to the first group, i.e., 2 = 1. Such tricks are common for advertising. To conclude this section, we note that the solution to the problem (4), (6) is a false equilibrium. Indeed, if the agents in the second group find out the true equilibrium   [0; 1], they can observe that   2 and change the actions.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Chapter 9

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

INSTITUTIONAL CONTROL The present chapter concentrates on the description of institutional control, an important type of control in organizational systems. It represents a purposeful impact on constraints and norms of activity performed by participants of an organizational system. Activity constraints and norms. Analyzing the model of an organizational system (see Chapter 1), we have mentioned five components (staff, structure, preferences, constraints and awareness) corresponding to five types of control. Managing constraints imposed on the activity of OS participants is said to be institutional control. Let us introduce the following notions. An institute is: 1) (in sociology) certain organization of public activity and social relations embodying the norms of economic, political, legal, and moral life of a society, as well as social rules of vital activity and behavior of people; 2) (in law) a set of legal norms regulating any homogeneous and separated social relations. Hence, a norm as a legitimate and mandatory order appears the key aspect in the above definition. Therefore, we will understand institutional control as a purposeful impact directed either to constraints or norms of activity performed by OS participants. There exist explicit norms (e.g., a law, a contract, a job instruction, etc.) and implicit norms (e.g., ethical norms, an organizational or corporate culture, etc.). Generally, explicit norms are limiting, while the implicit ones are stimulating. Notably, the latter reflect the behavior of a subject, being expected by the others; for instance, see the model of team building in Chapter 8. Until last ten years, game-theoretic models of control have almost not addressed the norms of activity. Concerning activity constraints, we acknowledge that control models have been designed for   

games with forbidden situations (forbidden strategy profiles) [18]; manufacturing chains [39], where a certain technology applies constraints on the sequential choice of actions by agents; control mechanisms with revelation of information (see references in [12]); by modifying the set of feasible messages of agents, a principal ensures strategyproofness of a given mechanism (i.e., all agents benefit from truth-telling).

Originally, the role of institutes was studied in economic science, viz., institutional economics. Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

242

Dmitry Novikov

Institutional economics is a branch of economics analyzing the role and influence of institutes; it comprises two scientific directions. First, neoinstitutional economics (including the theories of social choice and property rights) connected with the name of R. Coase. Second, new institutional economics (proposed by D. North, see references in [40]). A set of institutes forms an institutional structure of a society and economics. According to D. North, institutes create basic structures used by people to reduce the uncertainty. D. North believed that institutes act as “the rules of play” in a society, organizing the relations among people. He identified three major components of an institute:

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

1) formal rules (constitutions, laws, administrative acts, statutory norms of right); 2) informal constraints (traditions, mores, gentleman’s agreements, verbal arrangements, voluntary norms of behavior, unwritten codes of honor, dignity, professional self-consciousness, etc.); 3) compulsion mechanisms ensuring abidance by the rules (courts, police, and others). Notwithstanding rather unsystematical character of this enumeration, one would observe that formal rules reflect limiting norms, whereas their informal counterparts describe stimulating norms. The role of institutes lies in decreasing the uncertainty via establishing a stable (not necessarily efficient) structure of interaction among people (in economics – economic agents). Furthermore, it lies in defining and limiting the set of alternatives being available to an individual. Institutional premises exert determinative impact on the following issues. Which organizations appear? How do they develop? At the same time, organizations also influence the process of modifying institutional frameworks. The resulting direction of institutional changes mostly depends on the following factors. First, the “block effect,” arising due to interpenetration of institutes and organizations based on the structure of stimulating motives (generated by these institutes). Second, the inverse influence of changes in the set of opportunities on the perception and response of individuals. D. North and his followers constructed the general concept of institutes and institutional dynamics using the notions of property rights, transaction costs, contract relations and group interests. Adopting these notions by economics has made it possible to study the institutional structure of production (institutes influence economic processes, e.g., by exerting an impact on the costs of exchange and production). A separate important issue considered by institutional economics consists in the role of government regulation in economics. Therefore, institutes are the subject of research in institutional economics. Unfortunately, the absence of corresponding formal models and fruitful results makes this branch of economics useful merely as a methodological basis of institutional control in OS. The structure of exposition. This chapter has the following structure. In Section 9.1, we state a control problem for activity constraints and discuss solution methods. Next, Section 9.2 analyzes the models of simultaneous application of institutional and motivational control. In Section 9.3, we consider the specifics of institutional control in multi-agent systems. Section 9.4 formulates a control problem for activity norms and possible solution methods. Finally, Sections 9.5-9.6 present several examples of solving institutional control problems (the model of lump sum payment and the model of Cournot oligopoly, respectively).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Institutional Control

243

9.1. CONTROL PROBLEM FOR ACTIVITY CONSTRAINTS According to the results obtained in Section 1.1, to maximize his/her goal function f (), an agent chooses an action from the set С (f, A) = Arg max f (y). Consider a given universal y A

set X. A principal’s problem (control problem for activity constraints) lies in choosing a constraint B  X for the set of feasible actions of an agent provided that the latter chooses an action from the set С (f, B) = Arg max f (y). yB

Let the principal’s preferences be defined by a functional  (y, B): X  2X  1. It serves to compare the pairs “an agent’s action–the set of his/her feasible actions.” The relationship between the principal’s preferences and the set B of agent’s feasible actions is subject to the following. Introducing certain constraints may incur definite costs of the principal. If the functional  (y) turns out independent of the feasible set B, then the institutional control problem degenerates. Indeed, it suffices for the principal to choose B = {x}, where x = arg max  (y). y X

Recall the general approach of control theory applied to state a control problem (see Section 1.2). Accordingly, control efficiency for the activity constraints B  X is defined by K(B) = max  (y, B).

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

yC ( f ,B )

(1)

In the above definition, we proceed from agent’s benevolence to the principal. Notably, the former chooses the most beneficial action for the latter (from the set of maxima of the agent’s goal function). Control problem for activity constraints consists in choosing an optimal control B*  X, i.e., a feasible control ensuring the maximal efficiency: K(B)  max . X

(2)

B2

In other words, B* = arg max X B2

max  (y, B).

yC ( f ,B )

(3)

Consider the set X; enumerating all elements in the power set 2X may be a non-trivial and time-consuming problem (even for a finite set X). Moreover, this problem may be infeasible for an infinite set X. Thus, we analyze a series of cases when the specifics of the goal functions and/or feasible sets allow reducing the problem (2) to a certain standard optimization problem. Assume that the agent’s goal function is continuous and real-valued, whereas the set X is compact in m. Introduce the following quantities and sets:

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

244 f – = min f (y),

(4)

f + = max f (y),

(5)

l(w) = {y  X | f (y)  w}, w  [f –; f +],

(6)

h(w) = {y  X | f (y) = w}, w  [f –; f +],

(7)

L(x) = {y  X | f (y)  f (x)}, x  X,

(8)

x(B) = arg maх  (y, B), B  X,

(9)

y X

y X

yC ( f , B )

B(x) = arg

max

B{ D 2 X | xC ( f ,D )}

 (y, B), x  X.

(10)

According to the accepted notation, we have x  C ( f, L(x)), x  X,

(11)

h(w) = C ( f, l (w)), w  [ f –; f +].

(12)

And so, the problem (2)–(3) can be rewritten as

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

B* = B(y*),

(13)

where y* = arg max  (y, B(y)). y X

(14)

An alternative representation of the problem is

 (x (B), B). B* = arg max X B2

(15)

Clearly, the maximization problems (14) and (15) are generally not simpler than the initial one (3). Therefore, consider the case of a parametric system of sets M such that M0 = x0, M1 = X and  0      1, M  M (the corresponding parameters are  [0; 1] and x0  X). The quantity  can be interpreted as the “level of control centralization”; viz.,  = 0 describes complete centralization (“except x0, all actions are prohibited”), while  = 1 reflects complete decentralization (“all actions are allowed”).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Institutional Control

245

Define the functional  (y) =  (y, M), y  X,   [0; 1]. Under a fixed x0  X, institutional control may lie in choosing the parameter . Consequently, control efficiency is assessed by the following rate (compare with (1)): K() =

maх  (y).

(16)

yC ( f , M  )

Within the framework of the current model, institutional control problem takes the form: K()  max .

(17)

 [ 0 ;1]

The optimal value is given by

* = arg max

maх (y).

(18)

 [ 0 ;1] yC ( f , M  )

By analogy with (4)–(14), the problem (17) has an alternative representation. Denote x () = arg

 (x) = arg

max

yC ( f ,M  )

 (y),  [0; 1],

max

 {  [ 0 ;1]| xC ( f ,M  )}

(19)

 (y), x  X,

y* = arg max (y)(y),

(21)

* = arg max  (x ()).

(22)

y X

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

(20)

 [ 0 ;1]

The problems (21)-(22) are the ones of standard optimization. Hence, the major complicacies are connected with evaluation of the relationships (19)-(20). To succeed, we have to find sets used for maximization, i.e., the set of agent’s choice under a given control in (19) and the set of controls leading to the maximum of the agent’s goal function under a given action of the agent (see (20)). Suppose that the continuous function f () possesses n local maxima on the feasible set X (n is finite). The corresponding maximum points will be specified by x1, x2, …, xn (at least, one of them is the global maximum). In addition, reassign their subscripts so as 1  2  …  n, where i = min {  [0; 1] | xi  M}, i = 1, n . Then x () makes up a right continuous function with possible break points {i}. Set ′ = min {  [0; 1] | max f (y) = max f (y)}. y X

yM 

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

246

Consider an example when X  1 and f () is a convex function. Then there exists a unique maximum x1; in addition, x() represents a continuous function for   [0; '] and (22) appears a standard optimization problem. Let X = [0; 1],  (y) = y –  y 2, where  > 0 stands for a constant, M = [0; ], and f (y) = y – y 2.  ,   [0; ' ] Consequently,  ′ = 1/2 and x () =  , while  (x()) = x () –  (x())2 1 / 2,   [0; ' ] 2 =  –   for   [0; 1/2] and  (x()) = 1/2 –  / 4 for   [1/2; 1]. The posed control problem for activity constraints has the solution 1 /(2 ),   1 * =  .   [0; 1] 1 / 2,

9.2. INSTITUTIONAL CONTROL AND MOTIVATIONAL CONTROL Let us include the principal’s costs Q(B), Q: 2X  1 to control constraints B in the principal’s goal function in the following explicit form:

 (y, B) = H (y) – Q(B),

(1)

where H(y), H: X  1, is the principal’s income function. Define the sets

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

D(x) = {y  X | f (y) > f (x)}, x  X.

(2)

Obviously, y  C ( f (), B) if and only if D(y)  B = . Thus, control for activity constraints can be considered either as choosing the set of feasible actions of the agent or as prohibiting the choice of specific actions. The “prohibition costs” are q(x) =

min

{ B  X | B  D ( x )  }

Q(B), x  X.

(3)

The above quantity q(x) can be interpreted as the minimal costs of the principal to control activity constraints provided that the action x  X has been implemented (the agent has been stimulated to choose this action). Suppose that we know the minimal costs of the principal to control activity constraints. The control problem in question is reduced to the one of optimal incentive-compatible planning–find an optimal implementable action of the agent: xI* = arg max [H(y) – q(y)]. y X

The efficiency of institutional control makes up

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

(4)

Institutional Control KI = H(xI*) – q(xI*).

247 (5)

Now, consider motivational control performed by the principal. It lies in stimulating the agent to choose certain actions by introducing a system of payments (depending on the choice). In other words, the principal gives incentives to the agent for choosing required actions (plans). Recall the results derived in Chapter 2; the minimal costs of the principal to implement an action x  X are described by the formula c(x) = max f (y) – f (x), x  X. y X

(6)

Using a compensatory incentive scheme

c ( x )   , y  x , yx  0,

 (x, y) = 

(here  > 0 is an arbitrarily small strictly positive constant), the principal motivates the agent to choose the action x  X as the unique maximum point of his/her goal function f (y) +  (x, y). Given the minimal costs of the principal to implement an action, the motivational control problem is reduced to the one of optimal incentive-compatible planning. Notably, find an optimal implementable action of the agent: xm* = arg max [H(y) – c (y)]. Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

y X

(7)

The efficiency of motivational control is Km = H(xm*) – q(xm*).

(8)

Compare the minimal costs of the principal (the expressions (3) and (6)) to draw conclusions regarding the comparative efficiency of institutional control and motivational control. Therefore, we have KI  Km (institutional control is not less efficient than its motivational counterpart) if  x  X q(x)  c (x).

(9)

Note that the condition (9) appears somewhat rough; moreover, it does not represent a necessary condition. In practice, one applies institutional control and motivational control simultaneously (i.e., the choice of certain actions is prohibited by the principal; at the same time, the latter suggests additional rewards for choosing some of feasible actions). And so, we study a formal model for defining a reasonable balance between institutional control and motivational control.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

248

Being subjected to motivational control, the agent chooses an action maximizing his/her goal function over the set of feasible actions (taking into account the incentive by the principal). On the other hand, “feasible” actions of the agent depend on institutional control used by the principal. By analogy to (6), let us define the minimal costs of the principal to implement an action x  B (to stimulate the agent’s choice of this action): c (x, B) = max f (y) – f (x), x  B. yB

(10)

Consequently, the principal’s goal function (1) can be rewritten as

 (y, B) = H(y) – c (y, B) – Q(B), y  B, B  X.

(11)

The first term indicates the principal’s income, the second one stands for the costs to implement the action y from the set B, and the third term means the costs of institutional control. To proceed, we evaluate the minimal costs of the principal to simultaneously apply institutional and motivational control in order to implement the action x  X (to stimulate the agent’s choice of this action): G(y) =

min {c (y, B) + Q(B)}, y  X.

{ B  X | yB}

(12)

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

Assume that we know the relationship (12). Accordingly, the problem of simultaneous application of motivational and institutional control consists in solving the problem of optimal incentive-compatible planning: x* = arg max [H(y) – G(y)]. y X

(13)

To provide an illustration, recall the example analyzed at the end of the previous section. Set X = [0; 1], H(y) = y, M = [0; ], Q() =  2 ( > 0 is a constant), and f (y) = y – y2. Then we obtain c(y, ) = f (min{; 1/2}) – f (y), G(y) = min {f (min{; 1/2}) – f (y) + Q()},  [ 0; y ]

which yields x* = max [y – min {min{; 1/2} – (min{; 1/2})2 – y + y 2 +  2}]. y[ 0;1]

 [ 0; y ]

Therefore, the results of this section enable comparing the efficiency of institutional control and motivational control. In addition, they give a rational balance between Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Institutional Control

249

prohibitions and motivation of the agent. We also emphasize high complexity of institutional control problems. In practice, they are solved for special cases (situations, where the sets of feasible actions or imposed constraints are finite1). An alternative approach consists in comparing a finite number of control options (this yields not an optimal solution, but a rational option ensuring a necessary level of efficiency to the principal).

9.3. INSTITUTIONAL CONTROL IN MULTI-AGENT SYSTEMS Consider an OS composed of a single principal and n agents with goal functions fi (y), i  N = {1, 2, …, n}, y = (y1, y2, …, yn). Suppose that, in addition to individual constraints on the sets of feasible strategies (yi  Ai, i  N), there exist global constraints B on the choice of states by agents, i.e., y  A′  B, где A′ =

n

 Ai . i 1

One would identify several methods to account global constraints, i.e., methods of reducing game-theoretic models with global constraints on the sets of players’ feasible strategies to models satisfying the hypothesis of independent behavior (HIB). According to the latter, all components of a feasible action vector belong to corresponding feasible sets (no constraints apply, except y  A′ =

n

 Ai ).

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

i 1

The penalties’ method. In the case when the agents’ action vector lies outside the set B, the goal functions of players are set equal to minus infinity (i.e., players are penalized for violated constraints). Consequently, it is possible to study a game with “new” goal functions (admitting no global constraints). Depending on players’ awareness and players violating the global constraints, one can construct the so-called guaranteeing strategies [18]. The method of strategies’ extension. In the initial game, all agents choose their strategies simultaneously and independently, exchanging no information with other players (the feasibility and reasonability of data exchange–informational extensions of the games–are described in [18] for the games with forbidden situations). For instance, consider a game, where each player guesses the choice of the others or their response to a certain strategy. Such games often involve the concept of П-equilibria, which includes maximin equilibria, Nash equilibria and other equilibria as special cases. See also the description of Bayesian equilibria, Stackelberg equilibria, etc. in Appendix 1 and [17, 46, 52]. There are some particular cases with “automatic accounting” of global constraints. Assume that each player possesses a dominant strategy (alternatively, the game has a unique Nash equilibrium). Furthermore, imagine that the game is characterized by complete awareness. Then each player can evaluate the dominant strategies of the rest players (and the corresponding Nash point, as well). If the resulting vector of dominant strategies (or Nash point) meets the global constraints, one faces no problem of constraints’ analysis. We have to underline the following aspect. The method of strategies’ extension often requires introducing assumptions regarding the principles of players’ behavior (being difficult 1

Control problem for activity constraints can be also stated in the following way: given a finite number of feasible constraints, find their optimal combination. Such problem of discrete optimization is solvable via the dynamic programming method.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

250

to substantiate). Similarly to the penalties’ method, the method of strategies’ extension makes no provision of the presence of control applied by a principal. Contrariwise, the next couple of methods take into account global constraints using control capabilities of the principal. The concordance method. The key idea of the concordance method is the following (see also the solution methods for incentive problems and the method of incentive-compatible planning in Chapter 2). At Step 1 of incentive problem solution, for each action vector belonging to the set A′ (global constraints are not considered) the principal seeks for a feasible control ensuring that this vector belongs to the solution set of agents’ game. In the incentive problem, Step 1 yields the set AM of agents’ actions being implemented under existing constraints M imposed on the incentive scheme, AM  A′. Next, at Step 2 the principal searches for the set A* of agents’ actions (a) being implementable (b) satisfying the global constraints B and (c) maximizing his/her goal function. In other words, at Step 2 the principal solves the following problem:

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

A* = Arg

maх

y  A', M M ( y ), BB ( y )

 (y),

(1)

where М(у) indicates the set of controls implementing the action у  A', and В(у) are the constraints including у. The maximal control efficiency constitutes  (y*), where y* means an arbitrary element from the set A*. The method of modifying the sequence of moves. Generally, we believe that (under a known strategy of the principal) agents choose their actions simultaneously and independently. However, the principal (acting as a meta-player) may change the sequence of moves (i.e., the sequence of data acquisition and strategy choice used by the agents). By varying the sequence of strategy choice by agents, one may appreciably simplify the problem of global constraints’ consideration. Suppose there exists a numbering of agents such that feasible sets take the form Ai = Ai (y1, y2, …, yi–1). Then each agent must choose his/her strategy by accounting of (1) the requirements imposed by the global constraint and (2) the strategies chosen by agents with smaller numbers. For instance, in the above sense a feasible sequence of moves in an OS represents a directed graph (without loops). A special case is the sequential choice of strategies by agents (the so-called manufacturing chains). Again, let us focus the reader’s attention on the following. The applicability of the method of modifying the sequence of moves must be stipulated in the “rules of play” (embedded in the OS model). Finally, note that the set of equilibria in a new “hierarchical game” may differ from that in the initial game.

9.4. CONTROL PROBLEM FOR ACTIVITY NORMS Suppose that an OS consists of n agents. The latter choose actions yi  Ai from compact sets Ai and have continuous goal functions fi (, y), where    is a state of nature, y = (y1, y2, …, yn)  A′ =  Ai , i  N, and N = {1, 2, …, n} stands for the set of agents. i N

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Institutional Control

251

An activity norm is a mapping :   A′ of the set of feasible states of nature onto the set of feasible action vectors of agents. Actually, ith component of the vector-function () means the action of agent i, being expected by the rest agents and the principal. Let the principal’s preferences be defined on the set of states of nature, activity norms and actions of the agents:  (,  (), y). The agents are assumed to follow the norms. Denote by K( ()) = F ( (,  (),  ( ))) control efficiency for the activity norms  (), where F () is an uncertainty elimination operator. Depending on principal’s awareness, the uncertainty elimination operator represents the guaranteed result over the set  or the mathematical expectation with respect to a known probability distribution p () over the set  (for a discussion of uncertainty elimination, see Section 1.1). Under constraints M imposed on activity norms, a control problem for activity norms lies in choosing a feasible norm *()  M ensuring the maximal efficiency: *() = arg max K( ()) ()M 

(1)

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

(the agents adhere to the established norms of activity). The last condition should be elucidated. Agents are active and choose their actions independently; hence, the choice of an agent coincides with a norm iff the agent benefits from this. But how we should understand such a benefit? Consider the following model of bounded rationality. Define the parametric Nash equilibrium and rational behavior for each of three types of bounded rationality:

E N0 ( ) = {x  A' |  i  N,  yi  Ai fi (, x)  fi (, x–i, yi)},

(2)

E 1N ( , U ) = {x  A' |  i  N fi (, x)  U i },

(3)

E N2 ( ,  ) = {xA' |  iN,  yi  Ai fi(, x)  fi(, x–i, yi)–i},

(4)

E N3 ( ,  ) ={x  A' |  i  N,  yi  Ai fi (, x)  (1 – i) fi (, x–i, yi)}.

(5)

A norm  () is said to be compatible with type j of rational behavior (j = 0, 3 ) if     E Nj ()   ()  .

(6)

The condition (6) can be interpreted as follows. An activity norm implements a certain equilibrium if for any state of nature the choice defined by the norm does not contradict the rational behavior of agents (i.e., ensures the corresponding gain and/or makes unilateral deviation from the norm unbeneficial). In the sequel, we assume that  () is a single-valued mapping. Hence, principal’s imposing a compatible norm of activity on the agents can be

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

252

considered as narrowing of the equilibria set (a tip regarding the existence of a focal point– see the discussion of the problem of multiple equilibria in [17, 18, 33, 46]). According to this point of view, control for activity norms may be treated as the problem of ensuring correspondence in the group choice, where    is the vector of individual characteristics of agents. The conditions (2) and (6) can be combined as follows. A norm  () is compatible if    ,  i  N,  yi  Ai fi (,  ( ))  fi (, –( ), yi).

(7)

The condition (7) implies that the norm appears compatible with interests of the agents if for any state of nature each agent benefits from obeying the norm (provided that the rest agents demonstrate doing the same). Similarly to (7), one can rewrite the conditions (3)–(5). Now, let us study which awareness structure of the agents is necessary for the existence of a compatible norm. Clearly, the conditions of a game (the set of agents, goal functions, feasible sets, a given activity norm and state of nature) must form a common knowledge. Recall that in game theory a common knowledge [46] is a fact such that

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

a) every player is informed of it; b) every player knows a); c) every player knows b), and so on (generally, this procedure is infinite). Indeed, to “compute” a parametric Nash equilibrium under existing activity norms, each agent must be confident in that the rest agents would compute the same equilibrium. For this, the agent in question should act as the rest agents (they model his/her behavior). A way of creating a common knowledge lies in making a public announcement to all agents collected together. Perhaps, this is why modern firms pay much attention to informal communication of employees and corporate loyalty (to develop a corporate culture and behavior standards). Much time and resources are spent for making employees feel their belonging to a common task, for sharing common values. Indeed, these factors predetermine the existence of a common knowledge. Therefore, institutional control problem (as control for activity norms) represents the problem (1), (7) of finding a norm ensuring the maximal efficiency over the set of feasible and compatible norms. Denote by S the set of norms (different mappings :   A') meeting the condition (7). Consequently, the control problem can be expressed in the form K( ()) 

maх

()M   S

.

(8)

In other words, solving the control problem for activity norms consists in the following: 1) find the set of compatible norms S; 2) find the set of compatible and feasible norms (S  M); 3) in the latter set, choose a norm possessing the maximal efficiency (from the principal’s viewpoint).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Institutional Control

253

The first stage of solving the problem (8) represents the problem of optimal incentivecompatible planning (see Chapter 2). A high computational complexity of this problem is subject to that the desired variables are mappings :   A'. And so, we analyze the problem in a greater detail. Let institutional control be used simultaneously with motivational control; the goal function of agent i takes the form gi (, y, i) = fi (, y) + i (,  (), y), y  X, i  N,

(9)

where i:   M  A'   is the incentive function of agent i. 1

Accordingly, one arrives at the following. a) using the motivational control

 s ( , i ( )), yi  i ( ) i (,  (), y) =  i , i  N, yi  i ( ) 0,

(10)

where si = max fi (, –i( ), yi) – fi (,  ( )) + i, i  N,

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

y i  Ai

(11)

(i > 0 is an arbitrarily small strictly positive constant, i  N), the principal makes the norm  () compatible; b) there exists no other motivational control implementing  () as a unique Nash equilibrium in the game of agents and requiring strictly smaller costs of the principal to motivate the agents. Formula (11) describes the minimal costs of the principal, stimulating agent i to obey the activity norm  (). Summarizing the expression (11) over all agents yields С (, ()) =

 max y A

i N

i

fi (, –i ( ), yi) –

i



fi (,  ( )).

(12)

i N

This is none other than the minimal costs of the principal to perform compatible (institutional and motivational) control. Rewrite the principal’s goal function  (,  (), y) as the difference between the income H(y) and control costs С (,  ()). Then the property of control compatibility leads to

 (,  ()) = H( ( )) – С (,  ()). and the efficiency of institutional control  () makes up K( ()) = F (H( ( )) – С (,  ())), where F () stands for the uncertainty elimination operator.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

(13)

Dmitry Novikov

254

As against the problem (8), the one of institutional control F (H( ( )) – С (,  ()))  max

(14)

()  M 

differs in that (a) maximization takes place over the set of all feasible activity norms and (b) the compatibility condition is accounted for in the criterion being maximized2. A special case of the institutional control problem consists is when the principal has to apply a unified control (i.e., an identical control for all agents). Evidently, the efficiency of a unified control does not exceed that of a personalized control (when each agent is assigned a specific activity norm).

9.5. LUMP-SUM PAYMENTS3 Consider an organizational system composed of a principal and n agents performing a joint activity. The strategy of agent i lies in choosing an action yi  Xi =  , i  N. On the other hand, 1

the principal chooses an incentive scheme which determines a reward of every agent depending on the result of their joint activity. Imagine the following scheme of interaction among the agents. Attaining a necessary result requires that the sum of their actions is not smaller than a given threshold   . In this case, agent i yields a fixed reward i, i  N; if  yi < , the reward vanishes for all agents.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

iN

Implementation of the action yi  0 makes it necessary that agent i incurs the costs ci(y, ri), where ri > 0 denotes his or her type (a parameter representing individual characteristics of the agent), i  N. Concerning the cost functions of the agents, let us assume that ci(y, ri) is a continuous function, increasing with respect to yi and decreasing with respect to ri, and  y-i  A–i,  ri > 0: ci(0, y-i, ri) = 0, i  N. The stated model of interaction will be known as the lump-sum payment. Now, define the set of individual rational actions of the agents: IR = {y  A' |  i  N i (y)  ci (y, ri)}.

(1)

If the agents’ costs are separable (i.e., the costs ci(yi, ri) of every agent depend on his or her actions only), we obtain IR =

 [0; yi ] , where

i N

2

Making a digression, we emphasize that (since an activity norm represents a single-valued mapping) it suffices to apply motivational control with a flexible plan (a plan depending on the state of nature). Notably, in the proposed model for any institutional control there exists a motivational control with (at least) the same efficiency. Moreover, solving the motivational control problem is much easier than solving the corresponding problem of institutional control (indeed, maximization is performed over the set of agents instead of the set of mappings). 3 This section was written with the assistance of A.G. Chkhartishvili, Dr. Sci. (Phys.-Math.).

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Institutional Control

255

yi = max {yi  0 | ci (yi, ri)  i}, i  N.

(2)

Introduce the following notation: Y( ) = {y  A' |

 yi

=  },

(3)

 ci ( y, ri ) .

(4)

iN

Y*( ) = Arg min

yY (  )

i N

Next, we analyze different variants of agents’ awareness about the parameter   . Variant I. Suppose that the value    is a common knowledge. Then the equilibrium in the game of agents is provided by a parametric Nash equilibrium belonging to the set EN( ) = IR  Y( ).

(5)

We also define the set of Pareto-efficient actions of the agents: Par ( ) = IR  Y*( ).

(6)

Since    : Y*()  Y(), the expressions (5)-(6) imply that the set of Pareto-efficient actions represents a Nash equilibrium. However, the set of Nash equilibria may be wider; in particular, under   max yi it always includes the vector of zero actions.

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

i N

Note that the set (6) of Pareto-efficient actions can be made nonempty via motivational control (by choosing a proper incentive vector {i}). The condition    : Par ( )  EN( ) implies that any activity norm  () satisfying the condition    :  ( )  Par ( ),

(7)

is compatible and Pareto-efficient. In practice, using the norm meeting (7), the principal indicates to the agents a concrete Pareto-efficient point in a wide set of Nash equilibria (each agent benefits most from choosing the minimal action belonging to the corresponding projection of the set of Nash equilibria (5)). In other words, the principal minimizes the total costs of the agents to ensure the required result. We give an example. Consider n = 2 agents with the Cobb-Douglas cost functions ci(yi, ri) = ri (yi / ri), where () is a smooth strictly increasing function such that (0) = 0. In this case, the point y*( ) = { yi* ( )}, where yi* ( ) =  ri /

 rj , i  N, is a unique

jN

Pareto-efficient point. Let us evaluate yi = ri  –1(i / ri), i  N. Then under the condition

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

256

 rj ), i  N,

i  ri  ( /

(8)

jN

the Pareto set is nonempty (for different θ ∈ Ω, the Pareto points form a line segment whose slope equals the ratio of the agents' types). Furthermore, the following norm is compatible: ηi(θ) = yi*(θ), i ∈ N. The sets of Nash equilibria in the game of n = 2 agents (with θ2 > θ1) are illustrated by Figure 9.1; note that the point (0; 0) is a Nash equilibrium in both cases.

Thus, we have studied an elementary variant of the agents' awareness structure (corresponding to a situation when the value θ ∈ Ω is a common knowledge). Now, consider the next variant with a greater level of complexity of the agents' awareness; here a common knowledge is formed by the individual beliefs {θi} of the agents about the value θ ∈ Ω.

Variant II. Suppose the beliefs of the agents about the uncertain parameter are pairwise different, yet make up a common knowledge. In other words, an asymmetric common knowledge takes place. Without loss of generality, renumber the agents so that their beliefs form an increasing sequence: θ1 < … < θn. In this situation, the structure of feasible equilibria is defined by the following statement. Consider a lump-sum payment game where θi ≠ θj for i ≠ j; depending on the existing relations among the parameters, the following (n + 1) outcomes may be equilibria: {y* | yi* = 0, i ∈ N} and {y* | yk* = θk, yi* = 0, i ∈ N, i ≠ k}, k ∈ N. In practice, this means that either nobody works, or merely agent k does (and chooses the action θk).

2

y2

1

y 2 EN( 1)

EN( 2) y*( 1) y*( 1)

tg() = r2/r1 0

y1

y 1

Figure 9.1. The parametric Nash equilibrium in agents’ game.

[Figure 9.2 (schematic): the (ȳ1, ȳ2) plane partitioned into regions with the equilibrium outcomes (0, 0), (θ1, 0) and (0, θ2); the quantity θ2 – θ1 marks a boundary.]

Figure 9.2. Equilibria in the game of two agents (the equilibrium-free domain is indicated by the symbol "∅").

Now, consider the issue of the relations among the parameters θi, ȳi, i ∈ N, that ensure each equilibrium stated above. The vector (0, …, 0) is an equilibrium when no agent can independently perform the work that is sufficient (according to his or her view) for receiving the reward (alternatively, his or her action equals ȳi and the gain of agent i remains zero). Formally, the discussed condition is expressed by ȳi ≤ θi for any i. The vector {y* | yk* = θk, yi* = 0, i ≠ k} is an equilibrium if θk ≤ ȳk and all agents with numbers i > k (actually, they believe no reward would be given) are not efficient enough to compensate the quantity (θi – θk) themselves; formally, θk + ȳi ≤ θi for any i > k. Feasible equilibria in the game of two agents are presented in Figure 9.2. It should be emphasized that (in contrast to Variant I) there exists an equilibrium-free domain.

To proceed, consider the general case when the beliefs of the agents may coincide: θ1 ≤ … ≤ θn. Similarly to Variant I, this could lead to a whole domain of equilibria. For instance, suppose that θm = θm+1 = … = θm+p, and θi ≠ θm for i ∉ {m, …, m + p}.

Under the conditions ∑k=m…m+p ȳk ≥ θm and θm + ȳi ≤ θi, i > m, the equilibria are given by any vector y* such that ∑k=m…m+p yk* = θm, yk* ≤ ȳk, k ∈ {m, …, m + p}, and yi* = 0, i ∉ {m, …, m + p}. The corresponding interpretation is as follows: in an equilibrium, all work is performed by the agents with identical beliefs about the volume of work required for being rewarded.
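As a numerical illustration (ours, not from the book), the equilibrium conditions of Variant II are easy to check directly. The Python sketch below enumerates the (n + 1) candidate outcomes for given beliefs θi and maximal actions ȳi; the function name and the sample numbers are invented for the example.

```python
# Candidate equilibria of the lump-sum payment game under an
# asymmetric common knowledge (Variant II): either nobody works,
# or a single agent k works and chooses the action theta_k.

def candidate_equilibria(theta, y_bar):
    """theta[i] -- belief of agent i (increasing); y_bar[i] -- maximal
    action agent i is willing to take given his/her reward."""
    n = len(theta)
    equilibria = []
    # The zero vector is an equilibrium iff y_bar_i <= theta_i for all i.
    if all(y_bar[i] <= theta[i] for i in range(n)):
        equilibria.append([0.0] * n)
    # "Agent k works alone" is an equilibrium iff theta_k <= y_bar_k
    # and theta_k + y_bar_i <= theta_i for every i > k.
    for k in range(n):
        if theta[k] <= y_bar[k] and all(
            theta[k] + y_bar[i] <= theta[i] for i in range(k + 1, n)
        ):
            outcome = [0.0] * n
            outcome[k] = theta[k]
            equilibria.append(outcome)
    return equilibria

# Two agents with beliefs theta = (1, 3): agent 1 can do the whole
# believed volume alone, while agent 2 cannot bridge the gap of 2.
print(candidate_equilibria([1.0, 3.0], y_bar=[1.5, 1.8]))  # [[1.0, 0.0]]
```

Here "agent 1 works alone" is the only candidate that survives the conditions; the zero vector fails because ȳ1 > θ1.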


Variant III. Let the awareness structure have depth 2 and each agent believe that he or she participates in a game with an asymmetric common knowledge. In this case, the set of feasible equilibrium outcomes becomes maximal: ∏i∈N [0; ȳi]. Moreover, the following statement takes place. Consider a lump-sum payment game; for any action vector y* ∈ ∏i∈N [0; ȳi), there exists a certain awareness structure of depth 2 such that the vector y* provides a unique equilibrium, and each agent participates in a game with an asymmetric common knowledge.

The proof is based on the following idea. For any i ∈ N, it suffices to select

θi = yi* if yi* ≠ 0, and θi = ȳi + ε if yi* = 0

(here ε stands for an arbitrary positive number), and to choose any values θij > ∑k∈N ȳk, j ∈ N \ {i}. Then agent i expects zero actions from the opponents, while his or her own subjectively equilibrium action is given by yi*.

Let us make two remarks. First, we have constructed an equilibrium which is (objectively) Pareto-efficient if the sum ∑i∈N yi* equals the actual value of the uncertain parameter θ. Second, the action yi* = ȳi is an equilibrium provided that θi = ȳi; however, under this condition the action yi* = 0 forms an equilibrium as well. In both cases, the subjectively expected gain of agent i equals zero.

Variant IV. Now, imagine that the awareness structure of the game possesses depth 2 and a symmetric common knowledge is at the lower level. In other words, every phantom agent believes that the uncertain parameter equals θ̄ and that this fact represents a common knowledge. It turns out that (even in this case) the set of equilibrium outcomes is maximal: ∏i∈N [0; ȳi]. Furthermore, the following fact is true. Consider a lump-sum payment game with a symmetric common knowledge at the lower level; for any action vector y* ∈ ∏i∈N [0; ȳi), there exists a certain awareness structure of depth 2 such that the vector y* provides a unique equilibrium.

To construct this structure, one should do the following. Take any value θ̄ > ∑i∈N ȳi and suppose it is a common knowledge among the phantom agents. Then the unique equilibrium in the game of phantom agents is the zero action chosen by every agent. Next, for each number i ∈ N we select θi = yi* if yi* ≠ 0 and θi = ȳi + ε if yi* = 0, with ε being an arbitrary positive number. Apparently, the best response of agent i to the zero actions of the opponents (he or she expects exactly such actions) lies in choosing the action yi*.

Therefore, the game known as "lump-sum payment" is remarkable for the complicated relationships between the structure of informational equilibria and the forms of awareness structures and reflexive control. In addition, this game illustrates the role of control of activity norms when the set of equilibria in the agents' game includes (at least) two points.
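To make the constructions of Variants III-IV concrete, here is a small Python sketch (ours, not the book's) that builds the depth-2 beliefs of Variant III for a target vector y*; the function name and the value of ε are our assumptions.

```python
# Variant III construction: given a target vector y_star with
# y_star[i] in [0, y_bar[i]), build depth-2 beliefs (theta_i, theta_ij)
# under which y_star is the unique informational equilibrium.

EPS = 0.1  # an arbitrary positive number

def build_awareness(y_star, y_bar):
    n = len(y_star)
    big = sum(y_bar) + EPS  # any value exceeding the sum of maximal actions
    theta = [y_star[i] if y_star[i] > 0 else y_bar[i] + EPS for i in range(n)]
    # theta_ij > sum of y_bar: agent i believes every opponent j finds
    # the required volume unattainable and therefore chooses zero.
    theta_ij = [[big] * n for _ in range(n)]
    return theta, theta_ij

theta, theta_ij = build_awareness(y_star=[0.5, 0.0], y_bar=[1.0, 1.0])
print(theta)     # [0.5, 1.1] -> agent 1 works 0.5, agent 2 stays idle
print(theta_ij)  # all 2.1 -> each agent expects zero opponents' actions
```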

9.6. THE COURNOT DUOPOLY

Consider an example showing the reasonability of the joint application of informational and institutional control. Recall that an activity norm is a mapping η: Ω → X′ from the set of feasible states of nature into the set of feasible action vectors of the agents. Let the principal's preferences be defined on the set of states of nature, activity norms and actions of the agents: Φ(θ, η(·), y). The agents are assumed to follow the norms. Denote by K(η(·)) = F(Φ(θ, η(·), η(θ))) the efficiency of institutional control η(·), where F(·) is an uncertainty elimination operator. Depending on the principal's awareness, the uncertainty elimination operator represents either the guaranteed result over the set Ω or the mathematical expectation with respect to a known probability distribution p(·) over the set Ω. Under constraints M imposed on activity norms, an institutional control problem lies in choosing a feasible norm η*(·) ∈ M ensuring the maximal efficiency:

η*(·) = arg max_{η(·)∈M} K(η(·))

(the agents adhere to the established norms of activity). As above, a norm η(·) is said to be compatible if the action dictated by the norm represents an informational equilibrium in the agents' game.

One may pose the inverse problem of informational control: given an agents' action vector x* ∈ X′, find the set I(x*) of awareness structures rendering this action vector an informational equilibrium. Suppose that the formulated problem has been solved; consequently, it is possible to state and solve many other (institutional or informational) control problems, e.g., the joint evaluation of an informational structure and a norm implementing the required actions of the agents, etc.

Consider an OS with two agents whose goal functions are

fi(θ, y) = (θ – y1 – y2) yi – (yi)² / 2, i = 1, 2. (1)

The sets of feasible actions coincide with the positive semiaxis, and Ω = [1; 2]. The sets of best responses of the agents are singletons:

BR1(θ1, y2) = (θ1 – y2) / 3, (2)

BR2(θ2, y1) = (θ2 – y1) / 3. (3)

Assume that the subjective beliefs of the agents regarding the state of nature form a common knowledge. Then the parametric Nash equilibrium is defined by

yi*(θ1, θ2) = (3θi – θ3–i) / 8, i = 1, 2. (4)
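A quick way to verify (4): substituting y2 = BR2(θ2, y1) into y1 = BR1(θ1, y2) yields 9y1 = 3θ1 – θ2 + y1, whence y1* = (3θ1 – θ2) / 8. The fragment below (our illustration) confirms this by iterating the best responses (2)-(3) to a fixed point.

```python
# Best-response iteration for the duopoly (1): converges to the
# parametric Nash equilibrium y_i* = (3*theta_i - theta_{3-i}) / 8.

def nash_by_iteration(theta1, theta2, steps=100):
    y1 = y2 = 0.0
    for _ in range(steps):
        # simultaneous (Jacobi) update; the map is a contraction
        y1, y2 = (theta1 - y2) / 3.0, (theta2 - y1) / 3.0
    return y1, y2

theta1, theta2 = 1.0, 2.0
print(nash_by_iteration(theta1, theta2))                      # ~ (0.125, 0.625)
print((3 * theta1 - theta2) / 8, (3 * theta2 - theta1) / 8)   # exact values
```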

Figure 9.3 presents the sets of best responses of the agents (under different θ ∈ Ω), as well as the following sets (see the notation in Chapter 8 and in Section 4 of Appendix 1):

EN⁰ = ∪θ∈Ω EN(θ, θ, …, θ) – the set of different parametric Nash equilibria (segment FG);

EN = ∪(θ1, …, θn)∈Ωⁿ EN(θ1, …, θn) – quadrilateral AGCF; (5)

E = ∏i∈N Proji EN – square ABCD; (6)

E4 (see the definition below) – hexagon KLMNPH.

Let us provide solutions to the inverse problems of informational control in the following situations.

Variant I. The principal performs a unified (uniform) informational control, i.e., the awareness structure of agent i is Ii = θ, i ∈ N, θ ∈ Ω; the value of the state of nature θ reported by the principal makes up a common knowledge. A segment of the graph of the corresponding reflexive game (for agents i and j) takes the form θ ↔ θ and appears independent of the agents.

[Figure 9.3 (schematic): the (y1, y2) plane with the best-response lines BR1(θ1, y2), BR1(θ2, y2), BR2(θ1, y1), BR2(θ2, y1), axis marks at 1/8, 1/6, 1/4, 1/3, 1/2, 7/12, 5/8, 2/3, and the points A, B, C, D, F, G, H, K, L, M, N, P delimiting the sets defined above.]

Figure 9.3. The sets of equilibria.

Then the set of various informational equilibria in the agents' game is the segment from (1/4; 1/4) to (1/2; 1/2). Next, under a fixed θ ∈ [1; 2], the set of informational equilibria is the point (θ / 4; θ / 4). Therefore, the norm ηi¹(θ) = θ / 4, i = 1, 2, is the unique compatible one. The inverse problem possesses the following solution: the identical actions of both agents from the segment [1/4; 1/2] are implementable as informational equilibria. Set θ = 4a, where a ∈ [1/4; 1/2], to guarantee that the agents choose the action vector x¹ = (a, a).

Variant II. Imagine that the principal uses a personalized informational control, i.e., the awareness structure of agent i is Ii = θi, θi ∈ Ω, i ∈ N; moreover, the individual beliefs of the agents regarding the state of nature constitute a common knowledge. A segment of the graph of the corresponding reflexive game (for agents i and j) takes the form θi ↔ θj. In this case, the set of different informational equilibria EN in the agents' game is quadrilateral AGCF (see Figure 9.3); note that it is wider than in Variant I. Under a fixed vector (θ1, θ2) ∈ [1; 2]², the set of informational equilibria is the point with the coordinates (4), and the norm ηi²(θ1, θ2) = (3θi – θ3–i) / 8, i = 1, 2, is the unique compatible one. The inverse problem has the following solution: the actions of both agents from quadrilateral AGCF are implementable as informational equilibria. Set θ1 = 3x1² + x2² and θ2 = x1² + 3x2² (here the superscript 2 indexes the variant rather than squaring) to guarantee that the agents choose the action vector x² = (x1², x2²).

Variant III. Suppose that the principal involves reflexive control by reporting to each agent (a) information on the uncertain parameter and (b) what the rest of the agents think ("know") about the uncertain parameter. That is, the awareness structure of agent i is determined by Ii = {θi, θij}, θi, θij ∈ Ω, i, j ∈ N. For agent i, a segment of the graph of the corresponding reflexive game takes the form θi ↔ θij. Furthermore, the set of different informational equilibria E in the agents' game makes up square ABCD (see Figure 9.3). For instance, consider agent 1. According to his/her subjective point of view, the set of informational equilibria (under a fixed vector (θ1, θ12) ∈ [1; 2]²) is the point whose coordinates are given by (4), i.e.,

y1*(θ1, θ12) = (3θ1 – θ12) / 8, y12*(θ1, θ12) = (3θ12 – θ1) / 8. (7)

Formula (7) implies that agent 1 chooses the action x1³ ∈ X1⁰ = [1/8; 5/8] if the vector (θ1, θ12) satisfies

(3θ1 – θ12) / 8 = x1³, (8)

(3θ12 – θ1) / 8 = BR2(θ12, x1³) = (θ12 – x1³) / 3. (9)

The condition (9) is true due to the definition of an informational equilibrium. Consequently,


13 ( x13 ) = {(1, 12)  [1; 2]2 | (3 1 – 12) / 8 = x13 }.

(10)

Similarly, one obtains for agent 2:

 23 ( x23 ) = {(2, 21)  [1; 2]2 | (3 2 – 21) / 8 = x23 }.

(11)

And the compatible norm takes the form i3(i, ij) = (3 i – ij) / 8, i  j, i, j = 1, 2. Variant IV. This is an alternative to Variant III. Suppose that the principal forms the awareness structure Ii = (i, {ij =  }j  i) for agent i (e.g., by public announcement of the value   , followed by the private message with the value i   ). Denote  i4 = (i,  ) 

 2, i  N. The graph of the corresponding reflexive game (for agent i) has a fragment in the form i     (see Chapter 8 and Appendix 1). The set of Nash equilibria in the game of phantom agents (level 2 and 3 of the awareness structure) is defined by EN (, , …,  ) (see the expression (3)). We underline that this set can be evaluated by all agents. Hence, X i4 (i,  ) = BRi (i, (EN (, , …,  ))–i). Let the set of various informational equilibria in Variant IV be specified by E4 =

  

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.



{y  A' | yi 

   i

X i4 (i,  )}.

(12)



In the current example, the set E4 is hexagon KLMNPH (see Figure 9.3). Now, fix an agents’ action vector x4  X'. Designate  4(x4) to be a set of values of the parameter vectors ({i}i  N,  )   n + 1 such that the above action vector represents an informational equilibrium (solves the inverse problem of informational control):  4(x4) = {({i}i  N,  )   n + 1 |  i  N

xi4  BRi (i, (EN (, , …,  ))–i)}.

(13)

Recall that the awareness of agent i is the vector φi⁴ ∈ Ω². Consequently, in Variant IV the norm ηi⁴(·) turns out compatible if

∀ φi⁴ ∈ Ω²: ηi⁴(φi⁴) ∈ Xi⁴(φi⁴). (14)

The presented results of analyzing the inverse problems of informational control (Variants I-IV) lead to a natural conclusion. In the sense of the sets of informational equilibria, the above-mentioned variants correlate as I ⊂ II ⊂ III, IV ⊂ III, II ⊄ IV and II ∩ IV ≠ ∅. On the other hand, in the sense of the sets of compatible norms, the variants correlate as I ⊂ IV ⊂ III = II.
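As a numerical cross-check of these inclusions (our sketch, not part of the book), one can sample belief vectors over [1; 2]² and trace the equilibrium sets of Variants I and II via (4): Variant I stays on the segment from (1/4, 1/4) to (1/2, 1/2), while Variant II sweeps the wider quadrilateral AGCF.

```python
# Sampling the informational-equilibrium sets of Variants I and II
# for the duopoly (1) via the closed form (4).
import itertools

def equilibrium(theta1, theta2):
    return ((3 * theta1 - theta2) / 8, (3 * theta2 - theta1) / 8)

grid = [1 + k / 10 for k in range(11)]            # Omega = [1; 2]
variant1 = {equilibrium(t, t) for t in grid}       # common knowledge
variant2 = {equilibrium(t1, t2) for t1, t2 in itertools.product(grid, grid)}

print(min(variant1), max(variant1))   # (0.25, 0.25) ... (0.5, 0.5)
print(len(variant1) < len(variant2))  # True: Variant II is wider
```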


CONCLUSION

Applications and practical experience. Theoretical results obtained by the theory of control in organizations (TCO) have been widely used to develop applied models; the latter, in turn, have served to synthesize or modify control mechanisms for real social and economic systems. Many classes of applied mechanisms, with corresponding modifications, have been employed to solve a variety of applied problems. Below we list the fields of application (see Figure C.1) and mention the primary publications describing implementation techniques and the practical experience of using applied models [39].

In the sense of scale, the largest objects of control are regions (as administrative units). The design and implementation of regional development programs adopt (a) integrated rating mechanisms for assessing the state of a region, (b) rank-order tournaments for choosing member enterprises of regional development programs, (c) cost optimization methods for the programs, and (d) allocation mechanisms for financial resources (including the mechanisms of consent and expertise mechanisms).

Perhaps the most abundant experience in implementing the results of control-mechanism modeling has been accumulated in the management of industrial enterprises. Perfecting corporate governance, reforming and restructuring enterprises and corporations, and designing programs of innovative development require the mechanisms of allocating corporate orders and financial funds (including the mechanisms of cost-benefit analysis, transfer pricing mechanisms, incentive mechanisms and operational control mechanisms).

An important field of application of the theoretical results derived for OS control consists in project management (PM) mechanisms; they embrace most PM problems arising during the whole life cycle of a project. Another field of application is represented by security control mechanisms in complex systems; the matter concerns organizational, economic and information security. An independent class of control objects lies in ecological and economic systems. Moreover, thriving Internet-based social networks [21] form a new application domain for TCO; solving control problems in such networks requires supplementing TCO with new analysis techniques (in addition to game-theoretic models and graph theory methods).

Integration with other branches of science. The obtained results testify that control theory methods assist in improving the efficiency of control in different-scale social,


economic and organizational systems (from a team of workers or a small firm to an industry or a region).

[Figure C.1 lists the possible fields of application of TCO: management of enterprises and corporations; corporate governance; management of educational systems; control of social systems; control of ecological and economic systems; project and program management; management of innovations; security control; regional management; control of organizational and technical systems.]

Figure C.1. TCO: Possible fields of application.

At the same time, practice permanently poses new problems for experts in control. Dealing with these problems calls for establishing close (essential and informational) connections with neighboring directions in control theory and practice. In the first place, one should compare different approaches and seek points of contact. For instance, TCO and operations research (OR) [25, 60, 62] involve almost identical approaches to stating and solving optimization problems of organizational control; however, TCO systematically accounts for the purposeful behavior of controlled objects. For example, network planning and scheduling methods [20] are developed within the frameworks of both OR and TCO; but, to solve the complex applied problem of increasing the efficiency of project management, TCO supplements these methods with resource allocation mechanisms under uncertainty, incentive mechanisms for counter plans and incentive mechanisms for project optimization [39].

The object of management theory and organization theory coincides with that of TCO. Nevertheless, the methods of these theories differ dramatically. Management science appears more flexible in the description of psychological aspects [5, 15, 22], whereas TCO reduces all psychological factors to the concept of rational behavior or to the concept of bounded rationality (based on utility theory [16, 47]). As a matter of fact, many conclusions of management science and organization theory supplement the formal analysis of TCO with empirical components; at the present stage of development, these components are not described by formal models (still, they are vital for implementing theoretical results in practice). For instance, formal models of financial incentives (designed in TCO [50]) are successfully supplemented by numerous theories of motivation [24], and so on.

Let us give another example of integration. Integrated rating mechanisms [39] suggested by TCO provide a tool for building efficiency management systems for companies based on the concepts of management by objectives (P. Drucker [15]) or balanced scorecards (D.

Norton and R. Kaplan [28]). On the other hand, incentive mechanisms for operation improvement represent a tool of motivational support for lean production programs [53].

[Figure C.2 (schematic): THEORY (economics, psychology, sociology; models: mechanism design, agency theory, social choice theory) and PRACTICE (empiricism) connected through TCO and management to organizational systems.]

Figure C.2. Control in organizations: Theory and practice.

Next, the methods of project management (developed in OR and in TCO, see [35, 39]) are close to Goldratt's theory of constraints [19]. In addition, Mintzberg's approach to the formation of organizational structures [43] has become an empirical base for mathematical methods of hierarchical structure optimization in the control of organizations [12, 44]. And so on. According to this viewpoint, TCO (in a harmonious combination with management science) may "champion" the results of other scientific directions in the practical improvement of control efficiency in organizational systems (see the dashed arrow in Figure C.2).

Prospects. The following classes of problems seem promising for future research: correct accounting and further development of modern ideas of psychology, economics and sociology in formal models; designing models and synthesis methods for the staff and structure of different OS (including multi-level, dynamic and network control structures); designing models and methods of informational control; designing methods of efficiency assessment and synthesis of complex mechanisms using the "kit" of basic mechanisms studied in this book.

From the practical viewpoint, we should mention the necessity of generalizing the practical experience accumulated for various control mechanisms; the ultimate goal lies in creating applied techniques and automated information (decision support) systems enabling the employment of adequate and efficient control procedures in each specific case. Moreover, the list of relevant organizational problems comprises the following. First, training highly skilled control specialists equipped with the complete arsenal of modern knowledge and experience in the field of control. Second, popularizing theoretical results and establishing closer essential and informational relations with neighboring directions of control theory and practice. Indeed, further successful solution of theoretical and practical control problems in organizational systems can be guaranteed by the concerted efforts of mathematicians, psychologists, economists, sociologists and representatives of other scientific fields.


APPENDICES

APPENDIX 1. THE BASICS OF GAME THEORY

This appendix provides a description of some key concepts and models used in game theory. We briefly consider, inter alia, noncooperative games, cooperative games, hierarchical games, and reflexive games. For a detailed treatment of the problematique and results of applying game-theoretic models in organizational control problems, the reader is referred to a series of textbooks and monographs on "classical game theory" [17, 18, 44, 46], "behavioral game theory" (studying the behavior of real agents experimentally) [13] and "algorithmic game theory" [2, 37] (concentrating on decentralized implementation of game interaction among agents at the junction with the theory of multi-agent systems [58]).


A.1.1. Noncooperative Games

Chapter 1 has described the models of individual decision-making. Now, let us analyze game uncertainty reflecting joint decision-making by several agents (under given controls of a principal). Within the stated framework, of crucial importance are an agent's beliefs about the set of feasible values of the opponents' action profile (i.e., the actions chosen by the other agents under certain principles of behavior). Generally, an agent possesses inaccurate knowledge about the principles of behavior used by his/her opponents. However, to describe the collective behavior of the agents, it is insufficient to define their preferences and the rules of individual rational choice of each agent. Indeed, in a one-agent system the hypothesis of rational (individual) behavior implies that the agent strives to maximize his/her goal function by an appropriate choice of action. Yet, in the case of several agents, one should account for their mutual influence. Accordingly, a game arises, i.e., an interaction where the gain of each agent depends on his/her own action and on the actions chosen by the opponents (the other agents). Imagine that (due to the hypothesis of rational behavior) each agent seeks to maximize his/her goal function by an appropriate action. Obviously, in the case of several agents, an individually rational action of each agent depends on the actions of the others.

Consider the following game-theoretic model of a noncooperative interaction among n agents. Suppose they make decisions being unable to agree about chosen actions, to reallocate the resulting utility (gain), etc.


Each agent chooses an action xi belonging to a feasible set Xi, i ∈ N = {1, 2, …, n} (the set of agents). Moreover, the agents choose their actions one-time, simultaneously and independently. The gain of agent i depends on his/her own action xi ∈ Xi, on the opponents' action profile x–i = (x1, x2, …, xi–1, xi+1, …, xn) ∈ X–i = ∏j∈N\{i} Xj (the actions of the agents belonging to the set N \ {i}), and on the state of nature¹ θ ∈ Ω. Accordingly, the gain of agent i is described by a real-valued gain function fi = fi(θ, x), where x = (xi, x–i) = (x1, x2, …, xn) ∈ X′ = ∏j∈N Xj stands for the action vector of all agents (also known as the action profile). Under a fixed state of nature, the set Г0 = (N, {Xi}i∈N, {fi(·)}i∈N) composed of the set of agents, their feasible actions and goal functions is said to be a normal-form game. A solution to the game (an equilibrium) is a set of stable action vectors of the agents; a precise notion of stability is specified in each particular case.

According to the hypothesis of rational behavior, each agent strives to choose the best action (in the sense of his/her goal function) given the action profile of the others. Fix agent i; for him/her, the environment is the state of nature θ ∈ Ω and the opponents' action profile x–i ∈ X–i. Hence, agent i adopts the following decision-making principle (under a fixed opponents' action profile and state of nature): a rational agent chooses actions from the set

BRi(θ, x–i) = Arg max_{xi∈Xi} fi(θ, xi, x–i), i ∈ N

(here BR designates the best response)². Let us discuss possible principles of the agents' decision-making. Each principle generates a corresponding equilibrium concept, i.e., defines the meaning of stability of the predicted outcome of the game.

Dominant strategy equilibria. Assume that (for a certain agent) the set of best responses is independent of the opponents' action profile; then it represents the set of his/her dominant strategies. Next, the profile of dominant strategies of all agents is called a dominant strategy equilibrium (DSE). When each agent possesses a dominant strategy, the agents can make decisions independently (viz., choose actions without any information or beliefs about the opponents' action profile). Unfortunately, many games admit no DSE. To implement a DSE (if it exists), it suffices that each agent knows his/her own goal function and the feasible sets X′, Ω.

Guaranteeing equilibria. The same awareness of the agents is required for implementing a guaranteeing (maximin) equilibrium, which exists in almost any game:

¹ The state of nature may be a vector whose components reflect the individual characteristics (types) of the agents.
² Here and in the sequel, we believe the corresponding maxima or minima do exist.

xig ∈ Arg max_{xi∈Xi} min_{x–i∈X–i} min_{θ∈Ω} fi(θ, xi, x–i), i ∈ N.

In practice, guaranteeing equilibria assume that each agent reckons on the worst-case action profile.
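For finite action and state sets, this maximin is a direct computation. The sketch below (ours, with an invented gain function and sets) evaluates the guaranteeing action of agent 1.

```python
# Guaranteeing (maximin) action of agent 1: maximize over own actions
# the worst case over opponents' actions and states of nature.

X1 = [0, 1, 2]          # feasible actions of agent 1 (illustrative)
X2 = [0, 1]             # opponent's actions (illustrative)
OMEGA = [0.5, 1.0]      # states of nature (illustrative)

def f1(theta, x1, x2):  # invented gain function
    return theta * x1 - 0.5 * x1 * x1 + x1 * x2

x1_g = max(X1, key=lambda x1: min(f1(th, x1, x2) for th in OMEGA for x2 in X2))
print(x1_g)  # the maximin action of agent 1
```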

Nash equilibria. Define the multi-valued mapping BR(θ, x) = (BR1(θ, x–1), BR2(θ, x–2), …, BRn(θ, x–n)). A Nash equilibrium under a state of nature θ (more specifically, a parametric Nash equilibrium) is a point x*(θ) ∈ X′ satisfying the condition x*(θ) ∈ BR(θ, x*(θ)). This inclusion can be rewritten as: ∀ i ∈ N, ∀ yi ∈ Xi: fi(θ, x*(θ)) ≥ fi(θ, yi, x*–i(θ)). The set of Nash equilibria EN(θ) can be defined as

EN(θ) = {x = (x1, x2, …, xn) ∈ X′ | xi ∈ BRi(θ, x–i), i ∈ N}.

Implementing a Nash equilibrium requires that the rationality of the agents, all parameters of the game and the value of the state of nature form a common knowledge. In other words, each agent is rational and knows the set of game participants, the goal functions and feasible sets of all agents, as well as the value of the state of nature; in addition, each agent knows that the rest of the agents know this, that they know that he/she knows it, and so on (generally, this line of reasoning is infinite). By rejecting the assumption of common knowledge, one reduces a normal-form game to a reflexive game (see Section A.1.4 and [52]).

Subjective equilibria. The discussed types of equilibria represent special cases of subjective equilibria. Notably, a subjective equilibrium is an agents' action vector whose every component describes the best response of the corresponding agent to the opponents' action profile expected by this agent from his/her subjective viewpoint. Let us consider possible cases.

Suppose that agent i reckons on the implementation of an opponents' action profile xB–i and of a state of nature θiB (the superscript "B" means "beliefs"; the term "conjecture" is also applicable). Consequently, agent i chooses

xiB ∈ BRi(θiB, xB–i), i ∈ N.

The vector xB is a point-type subjective equilibrium. Such a definition of an equilibrium avoids the need to substantiate agents' beliefs about the actions of the opponents; in other words, it may happen that ∃ i ∈ N: xB–i ≠ (xjB)j∈N\{i}. A substantiated subjective equilibrium (xB–i = (xjB)j∈N\{i}, i ∈ N) represents a Nash equilibrium. In


particular, it suffices that all parameters of the game form a common knowledge and each agent models the rational behavior of the opponents when constructing xB–i.

Assume that the best response of each agent does not depend on his/her beliefs about the opponents' action profile. Such a subjective equilibrium is a dominant strategy equilibrium.

Now, consider a somewhat more general case. Agent i expects the opponents to choose their actions from a set XB–i ⊆ X–i and counts on the implementation of a state of nature from a set Ωi ⊆ Ω, i ∈ N. The best response then yields a guaranteeing subjective equilibrium:

xi(XB–i, Ωi) ∈ Arg max_{xi∈Xi} min_{x–i∈XB–i} min_{θ∈Ωi} fi(θ, xi, x–i), i ∈ N.

If XB–i = X–i and Ωi = Ω, i ∈ N, then xi(XB–i, Ωi) = xig, i ∈ N; notably, the guaranteeing subjective equilibrium coincides with the "classical" guaranteeing equilibrium.

Let us continue the generalization process. As the best response of agent i, one may take a probability distribution pi(xi), where pi(·) ∈ Δ(Xi) (the set of probability distributions over Xi), maximizing the expected gain of the agent subject to his/her beliefs about (a) the probability distribution μi(x–i) ∈ Δ(X–i) of the actions chosen by the other agents and (b) the probability distribution qi(θ) ∈ Δ(Ω) of the state of nature. In fact, we obtain the Bayesian principle of decision-making [46]:

pi(μi(·), qi(·), ·) ∈ Arg max_{pi∈Δ(Xi)} ∫_{X′×Ω} fi(θ, xi, x–i) pi(xi) qi(θ) μi(x–i) dθ dx, i ∈ N.

Therefore, to implement a subjective equilibrium, one needs the minimal awareness of the agents: each agent must know his/her goal function fi(·) and the feasible sets Ω, X′. However, such awareness may lead to the incompatibility of agents' beliefs regarding the state of nature and the behavior of the opponents. To ensure compatibility (to make the beliefs substantiated), we should involve additional assumptions on the mutual awareness of the agents. Probably, the strongest assumption presumes common knowledge (it transforms a point-type subjective equilibrium into a Nash equilibrium and the set of Bayesian principles of decision-making into a Bayes-Nash equilibrium).

Bayes-Nash equilibria. Consider a game with incomplete information (see [17, 46]); a Bayes game is described by the following parameters:

• a set of agents N;
• a set of agents' feasible types, where the type of agent i is ki ∈ Ki, i ∈ N, and the type vector makes up k = (k1, k2, …, kn) ∈ K′ = ∏i∈N Ki;
• a set of feasible action vectors X′ = ∏i∈N Xi of the agents;
• a set of utility functions ui: K′ × X′ → ℜ¹;
• agents' beliefs μi(·|ki) ∈ Δ(K–i), i ∈ N.


A Bayes-Nash equilibrium in a game with incomplete information is defined as a set of agents' strategies of the form σi: Ki → Xi, i ∈ N, maximizing the expected utilities

Ui(ki, σi(·), σ–i(·)) = ∫_{k–i∈∏j≠i Kj} ui(k, σi(ki), σ–i(k–i)) μi(k–i | ki) dk–i, i ∈ N.

Generally, Bayesian games proceed from the premise that the beliefs {μi(·|·)}i∈N are a common knowledge. It suffices that the beliefs are compatible, i.e., deducible by each agent via the Bayes formula from a distribution μ(k) ∈ Δ(K′) that represents a common knowledge.

This section has discussed some solution concepts of noncooperative games. Next, we provide the basic notions of cooperative games [45, 55]; they model an interaction among agents who are able to form coalitions (i.e., to agree about chosen actions, to reallocate the utility, etc.).

A.1.2. Cooperative Games

A cooperative game is defined by a set of players N = {1, …, n} and a characteristic function v: 2^N → R mapping each coalition of players S ⊆ N onto its gain.

An allocation of a game (N, v) is a vector x = (x1, …, xn) such that ∑i∈N xi = v(N) (the efficiency property) and xi ≥ v({i}), i ∈ N (the property of individual rationality). A solution to a cooperative game often represents a set of allocations implementable under rational behavior of the players; different solution concepts for cooperative games vary in the assumptions regarding this rational behavior.

An allocation x is said to dominate an allocation y with respect to a coalition S (x ≻S y) if ∀ i ∈ S: xi > yi and ∑i∈S xi ≤ v(S). Assume there exists a coalition S such that x ≻S y; in this case, the allocation x dominates the allocation y. The set of undominated allocations of a game is called the core of the game.

Under a given set of players N, a balanced map of a game assigns a number λS ∈ [0, 1] to each coalition S ∈ 2^N \ {N} so that ∑_{S: i∈S} λS = 1 for every player i ∈ N (here we sum over all nonempty coalitions, except N, that include player i). Necessary and sufficient conditions for the core of a game to be nonempty are stated by Bondareva's theorem: the core of a game (N, v) is nonempty iff for any balanced coverage {λS}:

∑_{S⊂N} λS v(S) ≤ v(N).

Games with nonempty cores are called balanced.
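For a small game, membership in the core reduces to finitely many inequalities. The following Python fragment (ours, with an invented characteristic function) tests efficiency, individual rationality and the coalition constraints directly:

```python
# Check whether an allocation lies in the core of a 3-player game:
# efficiency, individual rationality, and no coalition can improve.
from itertools import combinations

N = (1, 2, 3)
v = {(): 0, (1,): 0, (2,): 0, (3,): 0,
     (1, 2): 4, (1, 3): 4, (2, 3): 4, (1, 2, 3): 9}

def in_core(x):  # x -- dict: player -> payoff
    if abs(sum(x.values()) - v[N]) > 1e-9:       # efficiency
        return False
    return all(sum(x[i] for i in S) >= v[S]      # coalition constraints
               for r in range(1, len(N)) for S in combinations(N, r))

print(in_core({1: 3, 2: 3, 3: 3}))  # True: each pair gets 6 >= 4
print(in_core({1: 7, 2: 1, 3: 1}))  # False: coalition (2, 3) gets 2 < 4
```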


A cooperative game is said to be inessential if for any coalition S ⊆ N: v(S) = ∑i∈S v({i}); otherwise, the game is said to be essential. The inessential character of a game means the absence of cooperation among the players.

A strategy profile of a certain game is a strong Nash equilibrium if no coalition benefits by deviating from the equilibrium strategy profile. The set of strong Nash equilibria may be empty. However, if a certain game with transferable utility of players admits a unique strong Nash equilibrium, the corresponding cooperative game appears inessential.

The concept of Aumann-Maschler bargaining sets for cooperative games is based on the following idea. For instance, suppose that in a three-player game the coalition structure {{1, 2}, {3}} is formed, which includes the coalition T = {1, 2} with players 1 and 2. The payoff v({1, 2}) of the coalition {1, 2} is allocated between players 1 and 2; player 1 receives x1, and player 2 receives x2. Moreover, assume that player 1 turns out dissatisfied with such an allocation. Accordingly, he/she tells the partner, "Increase my share or I will form another coalition S = {1, 3} to have a higher gain." If such a coalition S can be made, i.e., player 3 benefits by changing a configuration x to a new configuration y, then the above claim is said to be a threat of player 1 to player 2. At the same time, player 2 may parry player 1 as follows: "If you act like this, I offer to player 3 a configuration z of the coalition structure {{1}, {2, 3}} such that player 3 receives a higher payoff than in the configuration y, whereas I gain (at least) the same as in the initial configuration x." Therefore, player 2 raises a counterthreat to "protect" x2. Hence, an allocation of gains in the coalitions of a coalition structure (among its participants) is an equilibrium subject to threats and counterthreats if for each threat of an arbitrary coalition K against any other coalition L there exists a counterthreat of the coalition L against the coalition K.

A.1.3. Hierarchical Games

In the above models of game uncertainty, we have believed that players (agents) choose their strategies simultaneously and one-time (this book addresses no models of repetitive games or differential games, see [17]). Contrariwise, hierarchical games [18] involve a fixed sequence of moves; viz., the first move is made by a principal, and then the agents choose their strategies. In this sense, hierarchical games provide the most adequate framework to describe control problems in organizational systems. Hierarchical games are remarkable for using the maximal guaranteed result (MGR) as the basic concept of game solution. The "pessimistic" nature of the MGR (evaluating the minimum over the set of uncertain parameters) is compensated by the feasibility of communication among players, which clearly reduces uncertainty during decision-making.

Denote by w1 = f1(x1, x2) and w2 = f2(x1, x2) the efficiency criteria (goal functions) of player 1 and player 2, respectively. The gains of the players depend on their actions x1 and x2 chosen from the action sets X1⁰ and X2⁰.

Generally, models of hierarchical games proceed from the premise that player 1 (the principal) has the right of the first move. His/her move lies in choosing a strategy x̃1. The notion of a


strategy appreciably varies from that of an action. The former is closely related to the awareness of player 1 about the behavior of player 2 (the agent). Here and in the sequel, we understand a strategy as a rule of behavior (i.e., a rule of choosing a specific action depending on the content and the concrete value of information received during the game). The principal can choose his/her action after the agent's choice. The elementary strategy of the principal consists in choosing directly an action x1 (if no additional information about the agent's action is expected during the game). A complex strategy of the principal is to choose a certain function x̃1(x2) (if the above additional information is expected). Moreover, the principal's strategy can be reporting some information to the agent (e.g., a planned behavior depending on the agent's choice). The agent must be sure that player 1 can implement such a strategy (i.e., player 1 would exactly know the realization of the action x2 at the moment of choosing his/her action x1).

For instance, suppose that the agent (choosing his/her strategy second) expects no information about the principal's action. Consequently, the principal's right of the first move can be realized by reporting the function x̃1(x2) to the agent. This can be interpreted as promising to choose the action x1 = x̃1(x2) under the agent's action x2. And so, the agent's strategy consists in choosing an action depending on the principal's message, x2 = x̃2(x̃1(·)). If the agent trusts the principal, he/she must choose the action x2* implementing

max_{x2∈X2⁰} f2(x̃1(x2), x2).

A game with the described sequence of moves is called a game Г2. An example lies in the incentive problem under the principal's awareness about the agent's actions (see Chapter 2) [20].

Now, assume the principal expects no information on the agent's action (and this fact is known to the agent). Then the principal's strategy is to choose a certain action x1*. Accordingly, the agent's strategy consists in choosing x2 = x̃2(x1*) (he/she moves second, being aware of the principal's action). Such a game is referred to as a game Г1 (e.g., recall the same incentive problem where the principal possesses no information about the agent's action) [18]. Let us first consider the game Г1.

A pair of actions (x1*, x2*) in the game Г1 is said to be a Stackelberg equilibrium [46] if

x1* ∈ Arg max_{x1∈X1⁰, x2∈R2(x1)} f1(x1, x2), (1)

x2* ∈ R2(x1*) = Arg max_{x2∈X2⁰} f2(x1*, x2), (2)

i.e., R2(x1) is the agent's best-response correspondence to the principal's action.
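For finite action sets, a Stackelberg equilibrium can be computed by direct enumeration: for each of the principal's actions, evaluate the agent's best-response set, then optimize the principal's payoff over it. A sketch with made-up payoffs (all names are ours):

```python
# Discrete Stackelberg equilibrium in a game Г1: the principal moves
# first, anticipating the agent's best-response set R2(x1).

X1 = [0, 1, 2]   # principal's actions (illustrative)
X2 = [0, 1, 2]   # agent's actions (illustrative)

def f1(x1, x2): return 3 * x2 - x1              # principal's payoff (made up)
def f2(x1, x2): return x1 * x2 - x2 * x2 / 2.0  # agent's payoff (made up)

def R2(x1):  # the set of the agent's best responses to x1
    best = max(f2(x1, x2) for x2 in X2)
    return [x2 for x2 in X2 if f2(x1, x2) == best]

# Optimistic (Stackelberg) selection: within R2(x1), the agent picks
# the response most favorable to the principal.
x1_star = max(X1, key=lambda x1: max(f1(x1, x2) for x2 in R2(x1)))
x2_star = max(R2(x1_star), key=lambda x2: f1(x1_star, x2))
print(x1_star, x2_star)  # -> 2 2 for these payoffs
```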


An equilibrium in a game Г1 differs from the Stackelberg equilibrium (1) in the following: to define the optimal strategy of player 1, one has to evaluate the minimum over the set R2(x1):

x1* ∈ Arg max_{x1∈X1⁰} min_{x2∈R2(x1)} f1(x1, x2).

In a game Г1, the agent chooses an action under complete awareness (knowing the principal's action). Here gain maximization by a proper choice of the action represents a special case of the MGR principle. The principal's action in a Stackelberg equilibrium also ensures the maximal guaranteed result (the principal must be confident that the agent chooses his/her action according to (2) and the principle of benevolence). Therefore, the equilibrium strategies of both the principal and the agent are guaranteeing ones. Still, the situation when the first move provides some advantage is more typical. Imagine that the sequence of moves is defined by the players; naturally, they would struggle for leadership. A two-person normal-form game Г0 corresponds to two games Г1 (first-order games) that vary in the sequence of moves. Then the struggle for leadership (the first move) is determined by the benefits of passing from the initial game to a certain first-order hierarchical game. The following fact is well-known: if a two-player game admits (at least) two different Pareto-optimal Nash equilibria, then the game includes a struggle for the first move.

Nevertheless, in many cases the principal's behavior corresponding to a game Г1 cannot be called efficient. Let us get back to the incentive problem in Section 2.1: if the principal chooses his/her action first (e.g., a reward for the agent, a wage rate) and the agent then chooses his/her action under the given incentive, the only Stackelberg equilibrium is that the principal pays nothing to the agent and the agent does not work. Therefore, if the principal observes the agent's action, he/she is interested in reporting his/her plans (chosen actions) to the agent depending on the agent's action. Thus, the principal benefits from implementing a game Г2.

To proceed, we formulate a theorem on the maximal guaranteed result gained by the principal in a game Г2. Note that many control models are easily reduced to this game (e.g., see the incentive problem under complete awareness in Chapters 2-3). First, we introduce some necessary notions.

Suppose that the goal functions of the players, w1 = f1(x1, x2) and w2 = f2(x1, x2), are continuous on the compact sets X1⁰ and X2⁰ representing the sets of feasible actions of the players. The principal's strategy is x̃1 = x̃1(x2); in other words, consider the following sequence of moves. Enjoying the right of the first move, player 1 (the principal) reports to player 2 (the agent) the planned choice of his/her action depending on the action x2 of player 2. Subsequently, player 2 chooses an action x2 maximizing his/her goal function (with the principal's strategy embedded). Finally, player 1 chooses the action x̃1(x2).

A penalty strategy x1p = x1p(x2) is defined by


f2(x1p(x2), x2) = min_{x1∈X1⁰} f2(x1, x2).

Assume there exist several penalty strategies; the optimal penalty strategy is the one maximizing the gain of player 1. The guaranteed result of player 2 under a penalty strategy of player 1 constitutes

L2 = max_{x2∈X2⁰} f2(x1p(x2), x2) = max_{x2∈X2⁰} min_{x1∈X1⁰} f2(x1, x2).

Consider player 2; the set of actions ensuring his/her maximal gain under a penalty strategy of player 1 makes up E2 = {x2 | f2(x1p(x2), x2) = L2}. The set of attainability

D = {(x1, x2): f2(x1, x2) > L2}

is the contractual set of the above game. Notably, this is the set of combinations of the players' strategies guaranteeing to player 2 a strictly greater result than the worst-case one (when player 1 applies a penalty strategy). The best result of player 1 over the set of attainability is given by

K = sup_{(x1, x2)∈D} f1(x1, x2), D ≠ ∅.

That a players' strategy profile belongs to the set of attainability ensures the implementability of this result via a penalty strategy. Let us evaluate the action of player 1 implementing (K – ε), where ε > 0, provided that player 2 chooses a recommended action from the set D:

f1(x1ε, x2ε) ≥ K – ε, (x1ε, x2ε) ∈ D ≠ ∅.

of a penalty strategy (indeed, strategies of player 2 are bounded by the set E2). a

Now, define the strategy x1 ( x2 ) implementing the best response of the principal to an agent’s action x2 (the accuracy is within ). It is said to be the -dominant strategy:

f1(x1 (x2 ))  sup f1(x1, x2 )  . x1X10

Yu. Germeier’s theorem [18]. In a game Г2, the maximal guaranteed result of the principal makes up max [ K , M ] . If K > M, then the -optimal strategy of the principal is

 x1 , при x2  x2  ~ . If K  M, the principal’s optimal strategy consists in using x1 ( x2 )   н  x1 ( x2 ), при x2  x2 the optimal penalty strategy.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Dmitry Novikov

276

What is the relationship between the principal’s gains in games Г1 and Г2 with identical goal functions? Are there any more rational methods of communication yielding a higher gain to the principal? These questions are settled by considering informational extensions of the game (known as metagames [18]). Imagine that the principal has no plans to obtain independently information on the agent’s action. In this case, the principal may choose an action first (thus, implementing a game Г1). One would also recommend a somewhat complicated behavior to the principal. For instance, the principal may ask the agent to report his/her strategy x2  ~ x2 ( x1 ) (this strategy proceeds from information about the principal’s action, being expected by the agent). Here

~ ~ the principal’s right of the first move consists in reporting the strategy ~ x1 ( x2 ( x1 )) to the agent. The above strategy can be interpreted as the principal’s promise to choose the action

~ ~ x1 ( ~ x2 ( x1 )) (provided that the agent engages for choosing his/her action according to ~ x ( x ) ). And so, one derives a game Г . 2

1

3

If the principal determines the order of data exchange, he/she may choose whether to play a game Г1 or Г3. In the both games, the principal has to choose an action “blindly” (the agent’s action appears unknown). In a certain sense, a game Г3 is a complicated modification of a game Г1. Using similar approach (an additional “feedback loop” transforms a game Г1 to a game Г3), one may complicate a game Г2; as the result, we derive a game Г4. Here (by analogy to a

x1 ( x2 ) from the principal, forms his/her strategy game Г2) the agent expects information ~

Copyright © 2013. Nova Science Publishers, Incorporated. All rights reserved.

~ ~ x2 ( ~ x1 ) and reports it to the principal. The latter enjoys the right of the first move by ~ ~ ~ x1 ( x2 ) to-be-chosen by the adopting the strategies ~ x1 ( ~ x2 ) , which define the function ~ ~ ~ principal (based on the agent’s message x ). 2

This technique assists in constructing a game Г5 from a game Г3, etc. Consider even games Г2m, m = 1, 2, … ; as his/her strategies, the principal adopts certain mappings of the set of agent’s strategies (in this game) onto the set of principal’s strategies in a game Г2m–2. In much the same way, the agent’s strategies are mappings of the set of principal’s strategies (in a game Г2m) onto the set of agent’s strategies in a game Г2m–2. The described reflexion could be constructed infinitely (by passing to more complicated schemes of communication) if the principal’s gain increases (recall that metagames are analyzed for the principal’s benefit). However, the following result takes place. N. Kukushkin’s theorem [18]. The maximal guaranteed result of the principal in a game Г2m (m > 1) coincides with that in the game Г2. On the other hand, the maximal guaranteed result of the principal in a game Г2m+1 (m > 1) equals that in the game Г3. Therefore, while analyzing the principal’s guaranteed result (gain), one can study only games Г1, Г2 and Г3. Furthermore, the following fact has been established [18]. The maximal guaranteed result of the principal in a game Г2 is not smaller than that in the game Г3; by-turn, the latter is not less than the maximal guaranteed result in the game Г1. Accordingly, Г2 is an “ideal” game for the principal. If the principal can define the sequence of moves and the content of data exchange (as well as knows the agent’s choice), he/she should play a game Г2. If the agent’s action is unknown to the principal at the moment of decision making, the principal benefits more from playing a game Г3.

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Appendices

277

Games and structures. Above we have considered the basic notions of game theory. Moreover, we have performed a transition from games where agents choose their actions simultaneously (a game Г0 in normal form or in the form of a characteristic function) to hierarchical games with a fixed sequence of moves (first, a principal chooses his or her action, and then an agent). Further complication of the interaction structure of players is possible. Consider a basic idea (see Figure A.1.1) allowing one to observe the complete picture and to follow the reasoning of the transition from simple problems to their complex counterparts; the latter can be decomposed into the former and thus become more comprehensible. The process of single-subject decision-making (Figure A.1.1a) is explained on the basis of the hypothesis of rational behavior (HRB); see Section 1.1. Notably, a rational agent always strives to maximize his or her goal function via a proper choice of the action (the chosen action should attain the maximum of the goal function). Next, the situation can get complicated by considering several subjects making decisions simultaneously (Figure A.1.1b). This interaction is modeled by a normal-form game Г0. In the situation with two agents interacting "vertically" (Figure A.1.1c), one obtains a game Гi, i = 1, 2, 3. Consider a "single principal - multiple agents" organizational structure (see Figure A.1.1d). Interaction of the agents at the same level of decision-making is modeled by the game Г0, whereas interaction within a "principal-agent" chain is modeled by the game Гi. Hence, such a structure can be formally represented by the game Гi defined "over" the game Г0 and denoted by Гi(Г0).


[Figure A.1.1 (schematic): a) HRB; b) Г0; c) Гi, i = 1, 2, 3; d) Гi(Г0); e) Г0(Гi(Г0)); f) Г0(Гi(…Гi(Г0)…)).]

Figure A.1.1. Games and structures (i = 1, 2, 3).

To proceed, suppose there exist several principals and several agents (see Figure A.1.1e). At the lower level the agents play the game Г0. The principals located "above" the agents play a hierarchical game Гi; at the same time, the principals play the game Г0 at their own level. The resulting game can be denoted by Г0(Гi(Г0)).


We emphasize that complex structures with more complicated interaction are also possible (an example is illustrated by Figure A.1.1f). Such interaction is expressed by a hierarchical game between the levels, with normal-form games at a single level: Г0(Гi(…Гi(Г0)…)). The corresponding analysis technique remains the same: one should decompose a complex organizational structure (a complex game) into a set of simpler ones and use the results derived for the latter. It appears that games and structures possess a strong connection, i.e., the moment of a subject's decision-making determines his or her "place" within an organizational hierarchy.

A.1.4. Reflexive Games³

Consider the set of agents N = {1, 2, …, n}. Denote by θ ∈ Ω the uncertain parameter. The awareness structure Ii of agent i includes the following elements. First, the belief of agent i about the parameter θ; denote it by θi, θi ∈ Ω. Second, the beliefs of agent i about the beliefs of the other agents about the parameter θ; denote them by θij, θij ∈ Ω, j ∈ N. Third, the beliefs of agent i about the beliefs of agent j about the belief of agent k; denote them by θijk, θijk ∈ Ω, j, k ∈ N. And so on. Note that in the sequel we employ the terms informational structure and hierarchy of beliefs as synonyms for the awareness structure. Therefore, the awareness structure Ii of agent i is specified by the set of values θij1…jl, where l runs over the set of nonnegative integers, j1, …, jl ∈ N, and θij1…jl ∈ Ω.


The awareness structure I of the whole game is defined in a similar manner; in particular, the set of the values  i1 ...il is employed, with l running over the set of nonnegative integer numbers, j1, …, jl  N, and  ij1 ... jl  . We emphasize that the agents are not aware of the whole structure I; each of them knows only a substructure Ii. Thus, an awareness structure is an infinite n-tree; the corresponding nodes of the tree describe specific awareness of real agents from the set N, and also phantom agents (complex reflexions of real agents in the mind of their opponents). A reflexive game ГI is a game defined by the following tuple [14, 52]: ГI = {N, (Xi)i  N, fi()i  N, , I}, where N stands for a set of real agents, Xi means a set of feasible actions of agent i, fi():   X’  1 is his or her goal function (i  N);  indicates a set of feasible values of the uncertain parameter and I designates the awareness structure. The following aspect should be underlined, as well. All the elements of a reflexive game (except the awareness structure) are a common knowledge among agents, i.e., 1) all agents know these elements; 2) all agents know 1); 3

³ This section was written with the assistance of A.G. Chkhartishvili, Dr. Sci. (Phys.-Math.).


3) all agents know 2), and so on (generally, the process is infinite).

To proceed and formulate a series of definitions and properties, we introduce the following notation: Σ+ stands for the set of finite sequences of indexes belonging to N; Σ is the union of Σ+ and the empty sequence; |σ| indicates the number of indexes in the sequence σ ∈ Σ (for the empty sequence, it equals zero); this parameter is known as the length of an index sequence.

Imagine θi represents the belief of agent i about the uncertain parameter, while θii means the belief of agent i about his or her own belief. It seems natural that θii = θi. In other words, agent i is well-informed about his or her own beliefs. Moreover, he or she assumes that the rest of the agents possess the same property. Formally, this means that the axiom of self-awareness is satisfied: ∀ i ∈ N, ∀ τ, σ ∈ Σ: θτiiσ = θτiσ. In the sequel we suppose that the axiom of self-awareness holds; in particular, this axiom implies the following. Being aware of θτ for all τ ∈ Σ+ such that |τ| = γ, an agent may explicitly evaluate θτ for all τ ∈ Σ+ with |τ| < γ.

In addition to the awareness structures Ii (i ∈ N), the researcher may also analyze the awareness structures Iij (i.e., the awareness of agent j according to the belief of agent i), Iijk, and so on. Let us identify an awareness structure with the agent characterized by it. In this case, one may claim that n real agents (i-agents, where i ∈ N) having the awareness structures Ii also play with phantom agents (τ-agents, where τ ∈ Σ+, |τ| ≥ 2) having the awareness structures Iτ = {θτσ}, σ ∈ Σ. It should be emphasized that phantom agents exist merely in the minds of real agents; still, they have an impact on real agents' actions, as will be discussed below.

Introduce now identical awareness structures, the basic notion for further analysis. Awareness structures Iλ and Iμ (λ, μ ∈ Σ+) are said to be identical if the following conditions are met: 1) θλσ = θμσ for any σ ∈ Σ; 2) the last indexes in the sequences λ and μ coincide. The identity of these structures is designated by Iλ = Iμ.

The notion of identical awareness structures allows for introducing another relevant property, the complexity of a structure. We underline that (along with the structure I) there exists a denumerable set composed of the awareness structures Iτ, τ ∈ Σ+, such that among them one may separate out certain classes of pairwise nonidentical structures by the identity relation. The number of the mentioned classes is naturally referred to as the complexity of the awareness structure. We will say that the awareness structure I has finite complexity ν = ν(I) if there is a finite set of pairwise nonidentical structures {Iτ1, Iτ2, …, Iτν}, τl ∈ Σ+, l ∈ {1, …, ν}, such that any structure Iσ, σ ∈ Σ+, has an identical structure in this set. Otherwise, the structure I possesses infinite complexity: ν(I) = ∞.


A finite-complexity awareness structure is called finite (however, the corresponding awareness tree is still infinite). Otherwise, the awareness structure is said to be infinite. Obviously, the minimum possible complexity of an awareness structure equals the number of real agents participating in the game (one can check that the awareness structures of real agents are pairwise nonidentical by definition). Any (finite or denumerable) set of pairwise nonidentical structures Iτ, τ ∈ Σ+, such that any structure Iσ, σ ∈ Σ+, is identical to one of them, is referred to as a basis of the awareness structure I. Suppose the awareness structure I has finite complexity; then it is possible to evaluate the maximal length γ of an index sequence such that, given all structures Iτ, τ ∈ Σ+, |τ| = γ, one can find the rest of the structures. In a certain sense, this length characterizes the rank of reflexion necessary to describe the awareness structure. We will say that the awareness structure I, ν(I) < ∞, has finite depth γ = γ(I) when the following conditions hold:


1) for any structure Iσ, σ ∈ Σ+, there exists an identical structure Iτ, τ ∈ Σ+, |τ| ≤ γ;
2) for any positive integer γ′ < γ, there exists a structure Iσ, σ ∈ Σ+, identical to none of the structures Iτ, τ ∈ Σ+, |τ| ≤ γ′.

If ν(I) = ∞, the depth is also considered infinite: γ(I) = ∞. The notions of complexity and depth of awareness structures can be introduced in the σ-subjective context of the game as well. In particular, the depth of the awareness structure of the game in the view of a σ-agent, σ ∈ Σ+, will be called the reflexion rank of the σ-agent.

Assume that the awareness structure I of a game is given; this means that the awareness structures of all (real and phantom) agents are defined as well. Within the framework of the hypothesis of rational behavior, the choice of an action xτ performed by a τ-agent is described by his or her awareness structure Iτ. Hence, with this structure available, one may model the agent's reasoning and evaluate his or her action. On the other hand, while choosing his or her action, the agent models the actions of the other agents (i.e., performs reflexion). Therefore, when estimating the game outcome, we should account for the actions of both real and phantom agents. The set of actions x*τ, τ ∈ Σ+, is called an informational equilibrium [14, 52] if the following conditions are met:

1) the awareness structure I possesses finite complexity ν;
2) ∀ λ, μ ∈ Σ: Iλi = Iμi ⇒ x*λi = x*μi;
3) ∀ i ∈ N, ∀ σ ∈ Σ:

x*i  Arg max f i (i , x*i1 ,..., x*i ,i 1 , xi , x*i ,i 1..., x*i ,n ) . xi  X i

Condition 1 in the definition of informational equilibrium claims that the reflexive game involves a finite number of real and phantom agents. Condition 2 expresses, in fact, the requirement that the agents with identical awareness choose identical actions.


Finally, condition 3 reflects the rational behavior of agents: each agent strives to maximize his or her individual goal function via a proper choice of action. For this, the agent substitutes into his or her goal function those actions of the opponents that are rational from the viewpoint of the considered agent (according to his or her available beliefs about the other agents).

A convenient tool for analyzing informational equilibria is the graph of a reflexive game. Its nodes correspond to real and phantom agents; each node-agent receives arcs from the node-agents whose actions determine the payoff of the agent under consideration (in a subjective equilibrium). The number of such arcs is one less than the number of real agents.

The "classical" concept of Nash equilibrium is remarkable for its self-sustained nature. Notably, assume that a repetitive game takes place and all agents (except agent i) choose the same equilibrium actions. Then agent i benefits nothing by deviating from his or her equilibrium action. Evidently, this feature is directly related to the following: the beliefs of all agents about reality are adequate, i.e., the state of nature is common knowledge. Generally speaking, the situation may change in the case of an informational equilibrium. Indeed, after a single play of the game, some agents (or even all of them) may observe an unexpected outcome due to an inconsistent belief about the state of nature (or due to inadequate awareness of opponents' beliefs). Anyway, the self-sustained nature of the equilibrium is violated; the actions of agents may change as the game is repeated. However, in some cases a self-sustained equilibrium takes place even under differing (generally, incorrect) beliefs of the agents. As a matter of fact, such a situation occurs when each agent (real or phantom) observes exactly the game outcome he or she expects.

To develop a formal framework, let us augment the tuple defining a reflexive game with a set of functions wi(·): Θ × X′ → Wi, i ∈ N, each mapping the vector (θ, x) into an element wi of a certain set Wi. The element wi is exactly what agent i observes as the outcome of the game. In the sequel, the function wi(·) will be referred to as the observation function of agent i. Suppose that the observation functions are common knowledge among the agents. If wi(θ, x) = (θ, x), i.e., Wi = Θ × X′, then agent i observes both the state of nature and the actions of all agents. On the contrary, if the set Wi consists of a single element, agent i observes nothing.

Suppose the reflexive game admits an informational equilibrium xτ, τ ∈ Σ+ (recall that τ is an arbitrary nonempty sequence of indexes belonging to N). Next, fix i ∈ N and consider agent i. He or she expects to observe the following outcome of the game: wi(θi, xi1, …, xi,i−1, xi, xi,i+1, …, xin). Actually, he or she observes wi(θ, x1, …, xi−1, xi, xi+1, …, xn). Therefore, the stability requirement for agent i implies the coincidence of these values. In the general case (i.e., for a σi-agent, σi ∈ Σ+), we introduce the following definition of stability.


An informational equilibrium xσi, σi ∈ Σ+, is said to be stable under a given awareness structure I if for any σi ∈ Σ+ the following equality holds:

wi(θσi, xσi1, …, xσi,i−1, xσi, xσi,i+1, …, xσin) = wi(θσ, xσ1, …, xσ,i−1, xσi, xσ,i+1, …, xσn).

If an informational equilibrium is not stable in this sense, we will call it unstable. Suppose the set of actions xσi, σi ∈ Σ+, represents a stable informational equilibrium. It will be referred to as a true equilibrium if the set (x1, …, xn) is an equilibrium under common knowledge about the state of nature θ (or about the set of the agents' types (r1, …, rn)). In particular, the above definition implies that under common knowledge any informational equilibrium is true. A stable informational equilibrium which is not true in the above sense is said to be a false equilibrium. In other words, a false equilibrium is a stable informational equilibrium which ceases to be an equilibrium in the case of identical awareness of the agents (under common knowledge).
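To make these definitions concrete, below is a minimal numerical sketch (our illustration, not an example from the book): two agents with Cournot-style goal functions fi(θ, x1, x2) = xi·(θ − x1 − x2) and a depth-two awareness structure given by point beliefs. The functional form and all numbers are assumptions chosen for the example.

```python
# A minimal sketch of computing an informational equilibrium (illustrative
# payoffs and beliefs). Payoffs: f_i = x_i * (theta - x1 - x2), so the best
# response to an opponent action y is (theta - y) / 2.

def common_knowledge_action(theta):
    # Under common knowledge of theta, the symmetric Nash equilibrium of
    # this game is x = theta / 3 (solve x = (theta - x) / 2).
    return theta / 3.0

def real_agent_action(theta_i, theta_ij):
    # Agent i believes the state is theta_i and ascribes to the phantom
    # ij-agent the belief that theta_ij is common knowledge; hence the
    # phantom plays theta_ij / 3, and agent i best-responds to it.
    return (theta_i - common_knowledge_action(theta_ij)) / 2.0

# Awareness structure: theta_1 = 12, which agent 1 regards as common
# knowledge; theta_2 = 9, while agent 2 thinks agent 1 believes theta = 12.
x1 = real_agent_action(12.0, 12.0)  # (12 - 4) / 2 = 4.0
x2 = real_agent_action(9.0, 12.0)   # (9 - 4) / 2 = 2.5
print(x1, x2)
```

In this toy structure the complexity equals three (the pairwise nonidentical structures are I1, I2 and I12, while I21 is identical to I1), so conditions 1-3 of the definition can be checked directly.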


APPENDIX 2. THE BASICS OF GRAPH THEORY

As a theoretical discipline, graph theory⁴ can be considered a branch of discrete mathematics studying the properties of finite sets with given relations among their elements (infinite graphs are beyond the scope of this book). As an applied discipline, graph theory serves to describe and analyze many technical, economic, biological, and social systems. Appendix 2 aims at presenting the key notions and some results of graph theory necessary to pose and solve many control problems in organizational systems. For a detailed exposition of graph theory and its applications, the reader is referred to the monographs and textbooks [6, 20, 23, 27, 54, 56, 60, 62].

The material possesses the following structure. In Section A.2.1 we introduce the key notions of graph theory. The problems of extremal paths and loops in graphs are discussed in Section A.2.2. Next, Sections A.2.3 and A.2.4 deal with the properties of pseudopotential graphs and maximal flow problems, respectively. Finally, some problems of network planning, scheduling and control (NPSC) are studied in Section A.2.5.

A.2.1. Key Notions of Graph Theory

A graph can be intuitively viewed as a set of circles and a set of lines connecting them (the geometric representation of a graph; see Figure A.2.1).

⁴ Graph theory dates back to 1736, when L. Euler solved the so-called Königsberg bridge problem. The term "graph" was first introduced by D. Kőnig merely 200 years later (in 1936).



Figure A.2.1. An example of a graph.

In a graph, circles are called nodes; lines with (without) arrows are referred to as arcs (edges, respectively). A graph where the direction of lines is not specified (all lines represent edges) is said to be an undirected graph. On the other hand, if the direction of lines matters (lines are arcs), a graph is said to be directed.

Graph theory can be treated as a branch of discrete mathematics (more precisely, of set theory). A formal definition is provided below. Consider a finite set X consisting of n elements (X = {1, 2, ..., n}) known as the nodes of a graph, and a subset V of the Cartesian product X × X, i.e., V ⊆ X², called the set of arcs. Then a directed graph G is the pair (X, V). Accordingly, an undirected graph is the pair of the set X and a set of unordered pairs of elements, each belonging to the set X. The arc between nodes i and j (i, j ∈ X) is denoted by (i, j). The symbol m (V = (v1, v2, ..., vm)) expresses the number of arcs in a graph.

The framework of graphs is convenient for describing physical, technical, economic, biological, social and other systems. Let us mention some applications of graph theory.

1. Transportation problems, where destination points act as nodes, while roads (highways, railways, etc.) and other transport routes (e.g., major air routes) are the corresponding edges. Another example concerns supply networks (power supply, gas supply, product supply); here nodes include production or consumption points, whereas edges stand for feasible transfer routes (power transmission facilities, gas pipelines, and roads). The corresponding class of optimization problems of freight shipment, allocation of production or consumption points, etc. is also called supply problems or allocation problems. A subclass comprises cargo carriage problems [60, 62].

2. "Technological" problems, where nodes reflect production units (e.g., enterprises, workshops, manufacturing machines), and arcs are responsible for flows among the units (raw materials, semiproducts and finished products). Technological problems consist in evaluating an optimal load of production units and computing the flows ensuring this optimal load [60, 62].

3. Exchange schemes, used to model many phenomena (barter, netting, offsetting, etc.). Here nodes describe the participants of an exchange scheme (chain); arcs correspond to the flows of material and financial resources among the participants. The problem is


to find an optimal exchange chain (e.g., for an exchange organizer) coordinated with the interests of the chain participants and the existing constraints.

4. Project management⁵. According to graph theory, a project [35] is a set of operations and relations among them (a network diagram, see below). A construction project for a certain object makes up a classical example. The set of models and methods involving the framework and results of graph theory and directed towards the solution of project management problems is called network planning, scheduling and control (NPSC) [20, 62]. Notably, NPSC deals with the problems of the optimal sequence of operations and the optimal allocation of resources among them. Optimality is interpreted in the sense of some criteria (project implementation time, costs, risks, etc.).

5. Models of collectives and groups (used in sociology) proceed from the representation of individuals or collectives as nodes, while relations among them (e.g., acquaintance, trust, sympathy, etc.) act as edges or arcs. Such a description assists in solving many problems for social groups, viz., structure analysis, comparison, computation of aggregated rates (reflecting the degree of tension, coordination of interaction, and so on) [21, 27, 56].

6. Models of organizational structures, where nodes are the participants of an organizational system; accordingly, edges or arcs stand for (informational, control, or technological) links among them [12, 43, 44].

Therefore, we have outlined some applications of graph theory. Now, let us present its key notions. A subgraph of a graph G is a graph whose node set is a subset of that of G, including the edges or arcs that connect these nodes. Eliminating some edges or arcs from a given graph yields a partial graph. Two nodes are said to be adjacent if they are connected by an edge or arc. Adjacent nodes are called the boundary nodes of the corresponding edge or arc, and the latter is said to be incident to these nodes.

A path in a directed graph is a sequence of arcs such that the end of a certain arc coincides with the beginning of another one. A simple path is a path with no repeated arcs. An elementary path is a path with no repeated nodes. A loop in a graph is a path whose terminal node coincides with the initial one. Path length (loop length) is the number of arcs in the path (or the total length of the arcs in the path, if these lengths are given). A graph where (i, j) ∈ V implies (j, i) ∈ V is called symmetric. If (i, j) ∈ V implies (j, i) ∉ V, the corresponding graph is said to be antisymmetric.

Consider an undirected graph. A chain is a set of edges that can be rearranged so as to make the end of a certain edge the beginning of another one. Below we provide an alternative definition: a chain is a sequence of adjacent nodes. A closed chain is a cycle. By analogy with simple and elementary paths, one can define simple and elementary chains and cycles, respectively. Any elementary cycle is simple; the inverse statement is generally untrue. An elementary chain (an elementary cycle, path or loop) passing through all nodes of a graph is called a Hamiltonian chain (a Hamiltonian cycle, path or loop, respectively). Next, a simple

⁵ Project management is a branch of management science studying methods and mechanisms of change management (a project is the purposeful creation or modification of a certain system, having a specific organization, under constraints imposed on available time and resources; any project turns out unique, i.e., the corresponding changes are irregular).


chain (a simple cycle, path or loop) containing all edges (arcs) of a graph is called an Euler chain (an Euler cycle, path or loop, respectively).

Suppose that any two nodes of a graph are connectable via a chain; then the graph is said to be connected. Otherwise, the graph can be partitioned into connected subgraphs known as components. Graph connectivity is the minimal number of edges whose elimination makes a given graph disconnected. Assume that any two nodes of a directed graph can be joined by a path; such a graph is called strongly connected. The following results are well known. Graph connectivity does not exceed [2m / n], where [x] indicates the integer part of x. There exist graphs with n nodes and m edges whose connectivity equals [2m / n]. In a strongly connected graph, a loop passes through any two nodes. A connected graph admitting an Euler cycle is said to be an Euler graph.

Consider an undirected graph. The order of node i is the number di of its incident edges. Evidently, di ≤ n − 1, i ∈ X. A graph where every node has order (n − 1) is called complete. A graph with identical orders of all nodes is said to be homogeneous. A node possessing no incident edges (di = 0) is called isolated. A node having a single incident edge (di = 1) is said to be dangling. It has been established that

∑_{i∈X} di = 2m.

This fact is known as the handshaking lemma. Indeed, a handshake involves two hands; hence, for any number of handshakes, the total number of shaken hands is even (provided that a hand is counted in each handshake it actually participated in). In any graph, the number of nodes possessing an odd order is even. A connected graph is an Euler graph iff the orders of all its nodes are even (the Euler theorem). Denote by nk the number of nodes with order k, k = 0, 1, 2, ...; then

∑_{k: nk≠0} k·nk = 2m.


Consider a directed graph. For each node, one can introduce two quantities, namely, the out-degree di⁺ (the number of outgoing arcs) and the in-degree di⁻ (the number of incoming arcs). In the sequel (unless otherwise specially mentioned), we study graphs without loops, i.e., without arcs whose initial and terminal nodes coincide. A well-known formula is

∑_{i∈X} di⁺ = ∑_{i∈X} di⁻ = m;

for an Euler graph, one has di⁺ = di⁻, i = 1, …, n. An Euler graph is a union of loops having (pairwise) no shared edges.

Define the adjacency matrix of a graph as a square n × n matrix whose element aij = 1 if (i, j) ∈ V and aij = 0 if (i, j) ∉ V, i, j ∈ X. An undirected graph has a symmetric adjacency matrix. Next, the edge incidence matrix of a graph is a rectangular n × m matrix whose element rij = 1 if node i is incident to edge j (otherwise, rij = 0); here i = 1, …, n, j = 1, …, m. Similarly, the arc incidence matrix of a graph is a rectangular n × m matrix whose element rij = 1 (rij = −1) if the arc uj comes from (enters, respectively) node i; in the other cases, rij = 0; again, i = 1, …, n, j = 1, …, m.
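As a quick illustration (our sketch, with an arbitrary edge list), the adjacency matrix of an undirected graph and the handshaking lemma can be checked in a few lines:

```python
# Build the adjacency matrix of a small undirected graph (illustrative
# edges) and verify the handshaking lemma: the sum of orders equals 2m.

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1          # symmetric adjacency matrix

orders = [sum(row) for row in A]   # d_i = number of incident edges
assert sum(orders) == 2 * len(edges)
print(orders)                      # [2, 2, 3, 1] -> node 3 is dangling
```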


A tree is a connected graph without simple cycles, having (at least) two nodes. For a tree, one obtains m = n − 1, while the number of dangling nodes constitutes

n1 = 2 + ∑_{i≥2} (i − 2) ni.

Evidently, any two nodes in a tree are connected by a unique chain.

A grandtree is a directed tree where a certain node (known as the root) has no incoming arcs, while the rest of the nodes possess unit in-degrees. A flat (planar) graph is representable on a plane so that different nodes correspond to different circles and there are no intersecting pairs of edges (given a pair of edges, the only possible points of intersection are their endpoints). A planar graph involves the notion of a bound, a part of the corresponding plane limited by edges and containing no nodes or edges. For simplicity of describing bounds, in the sequel we mostly consider graphs without dangling nodes. For instance, a tree has a single external bound (the plane itself). Bound order is the number of boundary edges of a bound (note that dangling edges are counted twice). Let p be the number of bounds in a planar graph, pk the number of its bounds with order k, and qi the order of bound i. Then Euler's formula takes place:

∑_{i=1}^{p} qi = 2m,  ∑_{k: pk≠0} k·pk = 2m,  n + p = m + 2.

The above equalities provide necessary conditions for the existence of planar graphs with given sets of numbers {ni} and {pi}. Any connected planar graph G has a dual connected planar graph G*, defined as follows. Each bound of the graph G corresponds to a node of the graph G*; each edge v of the graph G (being boundary for bounds z1 and z2) corresponds to an edge v* of the graph G* connecting the nodes that correspond to z1 and z2. The definition of a dual graph is closely related to the notion of duality in linear programming.

A.2.2. Extremal Paths and Loops in Graphs

The problems of the longest and shortest paths in graphs arise in various fields of control. First, we study the shortest path problem, and then address the problems of extremal loops.

The shortest path problem. Consider a network consisting of (n + 1) nodes, notably, a directed graph with two distinguished nodes: the input (node 0) and the output (node n). Each arc is described by its length. Path length (loop length) is the total length of all arcs entering a given path (loop, respectively). If the lengths of arcs are not specified, path length (loop length) is defined as the number of arcs in a given path (loop, respectively). The problem is to find the shortest (minimal length) path connecting the input and the output of the network⁶. A well-known result states that the shortest path in a network exists iff the latter includes no loops with negative length. Suppose that the network has no loops. Then its nodes can be renumbered so that j > i for any arc (i, j). Such a numbering is said to be correct. Clearly, in a loop-free network there exists a correct numbering.

⁶ We assume that (1) any node of the network is reachable from the input and (2) the output is reachable from any node of the network. Any nodes not meeting the above requirements are simply eliminated.


Denote by lij the length of the arc (i, j). The shortest path in a network with a correct numbering is found as follows.

Algorithm 1
Step 0. Mark node 0 with the index λ0 = 0.


Figure A.2.2. Finding the shortest path.

Step k. Mark node k with the index λk = min_{i<k} (λi + lik), the minimum being taken over the arcs (i, k) entering node k.


The output index λn equals the length of the shortest path⁷. Figure A.2.2 shows an example of applying Algorithm 1 (numbers at arcs designate their lengths, node indexes are given in square brackets, and double lines illustrate the shortest path). Suppose the indexes (or node potentials) have been assigned. The shortest path is then evaluated by backward induction (from the output to the input). In other words, the shortest path is μ = (0, i1, i2, ..., in−1, n) such that l_{in−1,n} = λn − λn−1, and so on.

The next algorithm allows finding the shortest path in the general case (i.e., under an arbitrary numbering of nodes).

Algorithm 2 (Ford's algorithm)
Step 0. Mark node 0 with the index λ0 = 0 and the rest of the nodes with the indexes λi = +∞, i = 1, …, n.
Step k. Consider all arcs. If for an arc (i, j) we have λj − λi > lij, then compute the new value λj := λi + lij.

The indexes are assigned in a finite number of steps. Denote by {λ*i} the resulting values of the indexes; they possess the following property: the quantity λ*i is the length of the shortest path leading from node 0 to node i. Again, the shortest path from node 0 to node i is calculated by the backward method.

Suppose now that the lengths of all arcs are nonnegative. The following algorithm is then applicable.

Algorithm 3
Step 0. Mark node 0 with the index λ0 = 0.
Step k. Assume that a certain set of nodes has been marked, and let Q be the set of unmarked nodes adjacent to the marked ones. For each node k ∈ Q, evaluate the quantity εk = min (λi + lik); here the minimization takes place over all marked nodes i adjacent to node k.

⁷ In dynamic programming problems, Algorithm 1 reflects the Bellman optimality principle: within the shortest path, the length of any subpath between two intermediate points must also be minimal.


Choose the node k for which εk is minimal, and mark it with the index λk = εk. Repeat the described procedure until node n is marked. The shortest path length equals λn, while the path itself is defined by analogy with the previous algorithms.
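Algorithm 3 is, in essence, Dijkstra's method for nonnegative arc lengths. A hedged sketch on the same illustrative network as above:

```python
# Dijkstra-style marking with a priority queue (illustrative adjacency lists).
import heapq

def dijkstra(n, adj, source=0):
    lam = [float("inf")] * n
    lam[source] = 0.0
    heap = [(0.0, source)]                 # (current index value, node)
    while heap:
        d, k = heapq.heappop(heap)
        if d > lam[k]:
            continue                       # node k was already marked
        for j, lkj in adj.get(k, []):
            if d + lkj < lam[j]:
                lam[j] = d + lkj
                heapq.heappush(heap, (lam[j], j))
    return lam

adj = {0: [(1, 9), (3, 3)], 3: [(1, 5), (2, 7)],
       1: [(2, 1), (4, 8)], 2: [(4, 6)]}
print(dijkstra(5, adj)[4])                 # 15, as with Ford's algorithm
```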

Now, rewrite the shortest path problem as a linear programming problem (LP). Set xij = 1 if the arc (i, j) enters a path⁸ and xij = 0 otherwise; here i, j = 0, …, n. The shortest path problem is expressed in the form⁹:

L(x) = ∑_{i,j=0}^{n} lij xij → min,   (1)

∑_j x0j = 1,  ∑_j xjn = 1,   (2)

∑_i xik = ∑_j xkj,  k = 1, …, n − 1.   (3)

Any solution to the system (2)–(3) defines a path in a loop-free network (not in a network with loops!). Suppose that all loops have a strictly positive length, i.e., there are no loops with a negative or zero length. Consequently, the solution to (1)–(3) determines the shortest path. Let us formulate the LP problem dual to (1)–(3); for this, assign to the constraints (2) the dual variables (λ0, λn) and to the constraints (3) the dual variables {λk}, k = 1, …, n − 1:

λn − λ0 → max,   (4)

λj − λi ≤ lij,  i, j = 0, …, n.   (5)

By the duality theorem of linear programming, the optimal values of the goal functions in the problems (1)–(3) and (4)–(5) coincide. The problem (4)–(5) is called the node potential problem. Its general statement is the following: find potentials of the nodes {λi} meeting the system of inequalities (5) and maximizing a certain function Φ(λ), where λ = (λ0, λ1, ..., λn). An example is the problem of neighboring potentials with Φ(λ) = ∑_j |λj − λ⁰j|; here {λ⁰j} mean the desired potentials.

⁸ We assume that each pair of nodes is connected by two (oppositely directed) arcs; if not, simply add the missing arcs and equate their lengths to infinity to eliminate them from the solution.
⁹ The constraint (2) claims that, in the desired path, the input has a single outgoing arc and the output has a single incoming arc. On the other hand, the condition (3) equates the numbers of incoming and outgoing arcs in any intermediate node.


Similarly, one can pose the longest path problem. It suffices to change the signs of the arc lengths to the opposite and solve the shortest path problem. The longest path problem admits a solution iff there are no loops with positive length. To proceed, consider the problem of the maximally reliable path. Arc lengths can be interpreted as the probabilities of links between the corresponding points. By substituting the arc lengths with their logarithms taken with the opposite sign, one arrives at the following: the maximally reliable path in the initial graph corresponds to the shortest path in the new one.

Far more complex are the problems of elementary shortest (longest) paths in the case when a network includes loops with negative (positive, respectively) length¹⁰. Actually, these problems appear NP-complete¹¹; researchers have not developed efficient algorithms requiring no exhaustive search. One faces the same complexity in the problems of shortest or longest paths (loops) passing through all nodes of a graph. An elementary path (loop) passing through all nodes of a graph is called a Hamiltonian path (loop). A classical example of the Hamiltonian loop problem is the travelling salesman problem, which lies in the following. A travelling salesman has to visit n cities (passing through each city only once) and return to the departure point. Given nonnegative lengths of arcs (interpreted as distances or costs of travel between cities), it is necessary to find a Hamiltonian loop of minimal length. We emphasize that a graph of n nodes admits up to n! Hamiltonian loops.

Solution algorithms for the shortest path problem can be applied to a wide class of discrete optimization problems. As an example, let us study a problem of integer linear programming, viz., the knapsack problem. Many relevant optimization problems subject to constraints (imposed on total weight, square, volume, amount of financing, etc.) are naturally reduced to the knapsack problem.

The knapsack problem. There are n useful items for a trip. The utility of item i is ai, while its weight (or volume) makes up bi. The maximal weight carried by a tourist (the volume of the knapsack) is bounded by R. The problem is to find a set of items ensuring the maximal total utility. Denote by xi a variable taking value 0 (if item i is not put in the knapsack) or value 1 (otherwise). Consequently, the knapsack problem takes the following form:

∑_{i=1}^{n} ai xi → max_{xi ∈ {0, 1}},   (6)

∑_{i=1}^{n} bi xi ≤ R.   (7)

¹⁰ There exist several algorithms to verify the absence of loops with negative (or positive) length. For instance, one can modify the indexes until the number of iterations (algorithm steps) exceeds a maximal threshold (mn). Another approach lies in bounding the node potentials by given values di; under λi ≥ di (λi ≤ di), one has to check whether the resulting value of a potential corresponds to the length of a certain path or there is a path with a negative (positive, respectively) length. And so on.
¹¹ Let n be the number of nodes in a graph, and let the complexity of an exact solution algorithm (the amount of computations, operations, steps, etc.) be proportional to n^α, where α is a positive value. Then the algorithm is said to possess polynomial complexity. If the complexity is instead proportional to α^n, we deal with exponential complexity (NP-completeness).


The upper estimate for the number of feasible combinations constitutes 2ⁿ. However, to solve the knapsack problem (an integer problem), one can employ an efficient algorithm, the dynamic programming method. Notably, construct a network (see examples in [62]) by the following rules. Associate the abscissa axis with the numbers of items and the ordinate axis with their weights. Each point (including the origin) has two outgoing arcs, i.e., a horizontal one (the option "leave the item") and an inclined one (the option "take the item") whose vertical projection equals the weight of the corresponding item. Set the lengths of the inclined arcs equal to the utilities of the items; the lengths of the horizontal arcs are zero. In the resulting network, the terminal node is fictitious, and the length of any arc connecting it with other nodes is zero. The network has the following properties: any solution to the problem (6)-(7) corresponds to a certain path in the network and, vice versa, any path corresponds to a certain solution of the problem. Therefore, the knapsack problem has been reduced to an extremal path problem (a direct dynamic-programming sketch is given after the loop problems below).

The shortest loop problem is treated as follows. Suppose we know that the desired loop contains a certain node. It is then necessary to find the shortest path from this node to itself (using the above algorithms). Generally, the shortest loop may pass through any node of a graph. Hence, one should consider the shortest paths passing through each node and choose the shortest one among them. A simpler approach consists in adopting Algorithm 4. Under an arbitrary numbering of the nodes of a graph, take node 1 and study the network where this node simultaneously represents the beginning and the end. Apply the above-stated algorithms to find a path μ1 of minimal length L(μ1) in this network. Next, reject node 1 and find the shortest path μ2 for the network where node 2 is the initial and terminal point. Afterwards, reject node 2, and so on. The described procedure is repeated for all nodes of the original graph that have a loop passing through them and through nodes with higher numbers. Finally, the shortest loop is μmin, whose length is defined by L(μmin) = min {L(μ1), L(μ2), ..., L(μn)}.

The average shortest loop problem lies in searching for a loop with the minimal ratio of its length to the number of its arcs. This problem is treated by means of Algorithm 5.
1. Define an arbitrary loop. Let L be its length and k be the number of its arcs. Evaluate lav = L / k and add (−lav) to the lengths lij of all arcs.
2. Define a loop with a negative length, repeat Step 1, and so on (until no loops with negative lengths appear).
At each step, the lengths of all arcs vary by the same quantity. Consequently, at the last step the arc lengths equal (lij − Δ), where Δ means the total variation of the length of each arc during all steps. The value of Δ gives the average shortest length of arcs in the loops of the graph. Moreover, the average shortest loop is defined at the penultimate step.
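Returning to the knapsack problem (6)-(7), the sketch below implements the dynamic programming idea directly, with a table over total weight playing the role of the layered network (item utilities, weights, and the capacity are illustrative numbers):

```python
# A minimal 0/1 knapsack sketch via dynamic programming over total weight.

def knapsack(a, b, R):
    # best[w] = maximal total utility achievable with total weight <= w
    best = [0] * (R + 1)
    take = [[False] * (R + 1) for _ in a]
    for i, (ai, bi) in enumerate(zip(a, b)):
        for w in range(R, bi - 1, -1):     # traverse weights downwards
            if best[w - bi] + ai > best[w]:
                best[w] = best[w - bi] + ai
                take[i][w] = True
    # recover the chosen items by moving backwards through the layers
    items, w = [], R
    for i in range(len(a) - 1, -1, -1):
        if take[i][w]:
            items.append(i)
            w -= b[i]
    return best[R], items[::-1]

print(knapsack(a=[60, 100, 120], b=[1, 2, 3], R=5))  # (220, [1, 2])
```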


The maximal efficiency path. Consider a network where each arc (i, j) is assigned two numbers (Zij, Sij): Zij stands for the effect, and Sij specifies the costs of the corresponding operation. The efficiency K(μ) of a path μ is determined by the ratio of its effect Z(μ) = ∑_{(i,j)∈μ} Zij and its costs S(μ) = ∑_{(i,j)∈μ} Sij, i.e., K(μ) = Z(μ) / S(μ). The problem is to seek a path μ* ensuring

the maximal efficiency: K(μ) → max. Suppose that the solution K* = K(μ*) is known; by the definition of K*, we have

∀ μ: Z(μ) − K* S(μ) ≤ 0.   (8)

Hence, the original problem has been reduced to the evaluation of the minimal quantity K* satisfying (8). In other words, find the minimal K* such that all paths in the network with the arc lengths lij(K*) = Zij − K* Sij have a non-positive length (inequality (8) must be met, including for the longest path).

Algorithm 6


1. Set K* = 0. Find the longest path μ1 and set K1 = Z(μ1) / S(μ1) (note that for K = K1 the length of the path μ1 equals 0).
2. Next, find the longest path μ2 under K = K1. If the length L(K1) of the path μ2 equals 0, then the problem is solved. In the case L(K1) > 0, compute K2 = Z(μ2) / S(μ2), find the longest path μ3 under K = K2, and so on.

The maximal efficiency path (the case of penalties). Assume that each arc of an (n + 1)-node network is associated with two numbers, viz., the effect Zij and the time period tij. Each path μ from the initial node to the terminal node characterizes a certain process (e.g., a project). We understand path duration T(μ) as the total time period of its arcs. If the process duration deviates from the required time period T, penalties χ(μ) are inflicted proportionally to the deviation:

χ(μ) = α (T − T(μ)) if T(μ) ≤ T,  and  χ(μ) = β (T(μ) − T) if T(μ) ≥ T.

The coefficients α and β may have arbitrary signs. The problem lies in searching for a path μ* maximizing the difference between the effect and the penalties:

μ* = arg max_μ [Z(μ) − χ(μ)].

Set lij(ξ) = Zij − ξ tij, where ξ is a parameter, and let T(ξ) indicate the duration of an optimal path under the parameter ξ (i.e., of the longest path whose arc lengths are measured by lij(ξ)). Clearly, T(ξ) is a non-increasing function of ξ. Let T(α) and T(β) be the durations of an optimal path for ξ = α and ξ = β, respectively; moreover, denote by μ(α) and μ(β) the corresponding paths. To find them, one should solve two longest path problems. We study six cases as follows (the initial problem can be decomposed into the maximization problems of Z(μ) − χ(μ) subject to T(μ) ≤ T and subject to T(μ) ≥ T). Suppose that α ≤ β; consequently, T(β) ≤ T(α) and:
1) if T(β) ≤ T(α) ≤ T, then μ(β) is the optimal solution;
2) if T ≤ T(β) ≤ T(α), then μ(α) is the optimal solution;


3) if T(β) ≤ T ≤ T(α), compare μ(α) and μ(β) by the lengths l = Z − χ to choose the better path.
Now, assume that α ≥ β; hence, T(α) ≤ T(β) and:
4) if T(α) ≤ T(β) ≤ T, then μ(β) is the optimal solution;
5) if T ≤ T(α) ≤ T(β), then μ(α) is the optimal solution;
6) if T(α) ≤ T ≤ T(β), then the problem has no efficient solution methods (possible approaches can be found in [20]).

A.2.3. Pseudopotential Graphs

A complete symmetric graph with (n + 1) nodes is said to be pseudopotential if the length of any Hamiltonian cycle in it is the same. Denote by lij, i, j = 0, …, n, the lengths of arcs. Necessary and sufficient conditions of graph pseudopotentiality consist in the following: there exist numbers αi, βi, i = 0, …, n, such that lij = βj − αi for all i, j = 0, …, n. Alternatively, a graph is pseudopotential iff any of its subgraphs is pseudopotential. Set δj = βj − αj and suppose that δ0 = 0. Let

Mj(μ) = ∑_{k=1}^{j} (β_{ik} − α_{ik−1})

be the sum of the lengths of the first j arcs in a Hamiltonian cycle μ. Consider the quantity M(μ) = max_{1≤j≤n} Mj(μ).


There exists an optimal solution to the problem

M(μ) → min_μ   (1)

such that the nodes with δi ≥ 0 come first, in the ascending order of δi, followed by the nodes with δi ≤ 0, in the descending order of δi. To prove this assertion, set Mmin = min_μ M(μ) and denote by μ = (0, i1, i2, ..., in, 0) the optimal Hamiltonian cycle (the solution to the problem (1)). Consequently, the following system of inequalities takes place:

Mmin ≥ δ_{i1},
Mmin ≥ δ_{i1} + δ_{i2},
Mmin ≥ δ_{i1} + δ_{i2} + δ_{i3},
............................
Mmin ≥ δ_{i1} + ... + δ_{in−1} + δ_{in}.   (2)

Assume that for a certain index s ∈ {1, 2, ..., n} we have δ_{is−1} ≤ 0, δ_{is} ≥ 0. In this case, formulas (2) imply the corresponding system of inequalities for the next Hamiltonian cycle (0, i1, ..., is−2, is, is−1, is+1, ..., in). Thus, there always exists an optimal solution where one first traverses the nodes with positive values of δ and then the nodes with negative values of δ (a node i with δi = 0 can be included in either group). If δ_{is−1} ≥ 0, δ_{is} ≥ 0 and δ_{is−1} > δ_{is}, then (2) leads to the corresponding system of inequalities for the Hamiltonian cycle (0, i1, ..., is−2, is, is−1, is+1, ..., in). Finally, if δ_{is−1} ≤ 0, δ_{is} ≤ 0 and δ_{is−1} < δ_{is}, then (2) yields the corresponding system of inequalities for the Hamiltonian cycle (0, i1, ..., is−2, is, is−1, is+1, ..., in). For instance, we provide a rigorous proof of the last statement. Using the system of inequalities (2), one can write

Mmin ≥ ∑_{j=1}^{s−2} δ_{ij} + δ_{is−1},

Mmin ≥ ∑_{j=1}^{s−2} δ_{ij} + δ_{is−1} + δ_{is}.

This immediately gives (recall that also Mmin ≥ ∑_{j=1}^{s−2} δ_{ij} by (2))

Mmin ≥ ∑_{j=1}^{s−2} δ_{ij} + δ_{is}, since δ_{is} ≤ 0,

Mmin ≥ ∑_{j=1}^{s−2} δ_{ij} + δ_{is} + δ_{is−1}.

Hence, the partial sums of the cycle with the nodes is−1 and is swapped satisfy the system (2) as well, and the swap does not increase M(μ).

Therefore, suppose that μ = (0, i1, i2, ..., in, 0) is the cycle solving the problem (1), i.e., δ_{ij} ≥ 0, j = 1, 2, ..., s, with δ_{i1} ≤ δ_{i2} ≤ ... ≤ δ_{is}, and δ_{ij} ≤ 0, j = s + 1, ..., n, with δ_{is+1} ≥ δ_{is+2} ≥ ... ≥ δ_{in}. Consequently,

Mmin = max { δ_{i1}, max_{1≤k≤n−1} ( δ_{ik+1} + ∑_{j=1}^{k} δ_{ij} ) }.   (3)


A special case of a pseudopotential graph is a potential graph; the latter is remarkable in that the length of any Hamiltonian cycle equals 0¹². In particular, potential graphs enjoy the following properties:

 the length of any cycle in a potential graph is 0;
 in an n-node potential graph there exist numbers {φi} such that lij = φj − φi, i, j = 1, …, n.

A.2.4. Maximal Flow Problems

Consider a network with (n + 1) nodes. Take each arc (i, j) and assign a number cij to it; this quantity is said to be the capacity of the arc. In a network, a flow x is a set {xij}, where xij means the flow over the arc (i, j), such that

0 ≤ xij ≤ cij, i, j = 0, …, n,  and  ∑_j xij = ∑_k xki, i ≠ 0, n.

The value of the flow x is defined by Φ(x) = ∑_i x0i = ∑_i xin.

The maximal flow problem consists in finding a flow with the maximal value¹³. A cut W of a network is any set of nodes containing the output and not containing the input. The capacity C(W) of a cut W is the total capacity of the arcs entering the cut. The value of any flow does not exceed the capacity of any cut (the Ford-Fulkerson theorem [54]). Assume that one succeeds in finding a flow whose value equals the capacity of a certain cut. Then this flow is maximal, whereas the cut is minimal.

Algorithm 7 (the Ford-Fulkerson algorithm). We illustrate it using the example of the network in Figure A.2.3, where the capacities of all arcs equal 1.
Step 0. Take an arbitrary flow (e.g., x01 = x12 = x25 = 1). Mark the initial node with the index 0. Denote by Z the set of marked nodes.
Step k
1. Mark node j with the index +i if (a) there exists the arc (i, j) and (b) i ∈ Z, j ∉ Z, xij < cij. Suppose that (as the result of such marking) we have marked the output. Then the flow can be increased (at least) by one unit, provided that the cij are integers. Moving backwards, one can find a path whose flow can be increased. However, the example shows this is insufficient to find the maximal flow.

¹² A potential graph can be treated as a model of an electrical circuit, whereas the former's properties could represent graph-theoretic analogs of the Kirchhoff laws.
¹³ A widespread practical interpretation is the transportation of cargo between the initial node and the terminal one through the arcs of a graph; arc capacities describe the maximal quantities of cargo that can be transported through them per unit time.


2. Mark node i with the index −j if (a) there exists the arc (i, j) and (b) j ∈ Z, i ∉ Z, xij > 0. Obviously, first-type marking increases the flow through an arc (in contrast, second-type marking reduces it). Again, if such marking has affected the output, the flow can be increased. Moving backwards, one can find a chain where each node is marked by the number of the preceding node (actually, the sign is not so important). Consider the chain μ = (0, 3, 2, 1, 4, 5) in Figure A.2.3; the flows gained by action 2 are shown in bold type. The described algorithm terminates according to the following criterion: if neither type of marking affects node n, the resulting flow possesses the maximal value.


Figure A.2.3. Searching for the maximal flow.
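The two marking rules can be implemented compactly as breadth-first augmentation over residual capacities (the Edmonds-Karp variant of Algorithm 7). The unit-capacity arc list below is our partial reconstruction of the network in Figure A.2.3 and should be treated as illustrative:

```python
# BFS-based augmentation: forward residual capacity corresponds to
# first-type marking, cancelling opposite flow to second-type marking.
from collections import deque

def max_flow(n, cap, s, t):
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        pred = [None] * n
        pred[s] = s
        q = deque([s])
        while q and pred[t] is None:
            i = q.popleft()
            for j in range(n):
                # residual = unused forward capacity + cancellable flow
                if pred[j] is None and cap[i][j] - flow[i][j] + flow[j][i] > 0:
                    pred[j] = i
                    q.append(j)
        if pred[t] is None:
            return total           # no augmenting chain: the flow is maximal
        j = t
        while j != s:              # move backwards, augmenting by one unit
            i = pred[j]
            if cap[i][j] - flow[i][j] > 0:
                flow[i][j] += 1    # first-type marking: raise forward flow
            else:
                flow[j][i] -= 1    # second-type marking: reduce opposite flow
            j = i
        total += 1

n = 6
cap = [[0] * n for _ in range(n)]
for i, j in [(0, 1), (0, 3), (1, 2), (1, 4), (3, 2), (2, 5), (4, 5)]:
    cap[i][j] = 1                  # unit capacities, as in the example
print(max_flow(n, cap, 0, 5))      # 2
```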


Figure A.2.4. The transport problem.

The minimal cost flow. Study a network with arc capacities cij. Suppose that each arc (i, j) is characterized by a number sij interpreted as costs (e.g., the costs of transporting a unit of cargo from node i to node j). Under a given total flow Φ, the problem of the minimal cost flow is to find a flow distribution among the arcs minimizing the total costs. General solution methods for this problem are discussed in [54]. A special case of the problem of the minimal cost flow is the transport problem. Consider the bipartite graph presented by Figure A.2.4 (a bipartite graph is a graph whose


node set can be divided into two nonintersecting subsets such that edges or arcs of the graph connect nodes only from different subsets). The nodes of the network are split into two groups, namely, m suppliers and n customers. A graph is bipartite iff it contains no cycles of odd length, i.e., all its simple cycles possess even lengths. This result is known as König's theorem. Suppliers are described by given quantities of available goods ai, i = 1, …, m. On the other hand, customers are characterized by necessary quantities of goods bj, j = 1, …, n. Moreover, the costs sij of transferring a unit of goods from supplier i to customer j are given. Assume that the problem is balanced, i.e.,

∑_{i=1}^{m} ai = ∑_{j=1}^{n} bj

(the total offer equals the total demand). Note that any open

transport problem can be rendered balanced by introducing a fictitious supplier or customer. It is required to find the flows of goods from suppliers to customers minimizing the total costs. Formally, the transport problem can be rewritten as

∑_{i=1}^{m} ∑_{j=1}^{n} xij sij → min_{xij ≥ 0},   (1)

∑_{j=1}^{n} xij = ai,  i = 1, …, m,   (2)

∑_{i=1}^{m} xij = bj,  j = 1, …, n.   (3)
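Assuming SciPy is available, a tiny balanced instance of (1)-(3) (with made-up supplies, demands, and costs) can be solved directly as a linear program:

```python
# A hedged sketch: 2 suppliers, 2 customers, decision vector
# x = (x11, x12, x21, x22) flattened row-wise.
import numpy as np
from scipy.optimize import linprog

a = [30, 20]            # supplies
b = [25, 25]            # demands (balanced: sum(a) == sum(b))
s = np.array([[4, 6],   # s[i][j] = unit transportation cost i -> j
              [5, 3]])

A_eq = [[1, 1, 0, 0],   # supplier 1 ships a1 in total
        [0, 0, 1, 1],   # supplier 2 ships a2 in total
        [1, 0, 1, 0],   # customer 1 receives b1 in total
        [0, 1, 0, 1]]   # customer 2 receives b2 in total
res = linprog(c=s.flatten(), A_eq=A_eq, b_eq=a + b, bounds=[(0, None)] * 4)
print(res.x.reshape(2, 2), res.fun)   # optimal plan and its total cost
```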

Add the input 0 and the output z to the bipartite graph; next, connect the input and the output to the other nodes via arcs with the flows x0i = ai, i = 1, …, m, and xjz = bj, j = 1, …, n. Proceeding in this way, one obtains the problem of the minimal cost flow. For a detailed description of solution algorithms for the transport problem (and its dual counterpart), we refer the reader to [60, 62].

A special case of the transport problem is the allocation problem. Consider n employees being able to hold different positions (to perform different work); the number of positions equals the number of employees. Again, we emphasize that adding fictitious positions and/or employees easily reduces an open allocation problem to a balanced one. Let sij be the costs of appointing employee i to position j (e.g., the minimal wage required for such employment). The problem is to find an allocation of employees among positions (each employee is appointed to a single position) minimizing the total costs. Note that sij can instead mean the efficiency of employee i in position j; in this case, an optimal allocation maximizes the total efficiency. Formally, the allocation problem can be expressed in the following form (also compare with (1)-(3)):

∑_{i=1}^{n} ∑_{j=1}^{n} xij sij → min_{xij ∈ {0, 1}},   (4)

∑_{j=1}^{n} xij = 1,  i = 1, …, n,   (5)

∑_{i=1}^{n} xij = 1,  j = 1, …, n.   (6)

There exist many solution methods for the allocation problem [60, 62]. Study one of them using an example. Consider n = 3 employees and n = 3 positions. The corresponding cost matrix is defined by

1 2 3
2 4 7
5 3 8

Algorithm 8
Step 0. Appoint each employee to the least costly position (in Figure A.2.5, such appointments are marked by thin arcs), i.e., set

x⁰ij = 1 if sij = min_k sik, and x⁰ij = 0 otherwise.

If such an allocation appears feasible (all work is performed), the solution is derived.

In the case of a "disbalance" (some work is not performed, i.e., ∃ j1: ∑_{i=1}^{n} x⁰ij1 > 1), proceed to the next step.


Figure A.2.5. The allocation problem.

Step k. Introduce two subsets of arcs: P1 = {(i, j) | xij = 1} and P2 = {(i, j) | xij = 0}. Let the network input be the set of node-positions having several employees appointed, and let the set of node-positions whose work is not performed be the network output.


Reverse the directions of the arcs from the set P1 and equate their lengths to (−sij); moreover, set the lengths of the arcs from the set P2 equal to sij. Find the shortest path μᵏ in the resulting network (the node potentials evaluated when solving this problem are given in square brackets in Figure A.2.5). Next, define

xᵏij = 1 − xᵏ⁻¹ij if (i, j) ∈ μᵏ, and xᵏij = xᵏ⁻¹ij if (i, j) ∉ μᵏ.

In this example, at Step 1 we derive the optimal allocation (differing from the one evaluated at Step 0 in that employee 1 obtains position 3; see the double-line arc in Figure A.2.5). At each step, the number of "disbalances" is reduced by one; hence, the number of steps of the algorithm does not exceed the (finite) number of "disbalances." Similarly, one can solve any transport problem (find the shortest path from the set of nodes with goods surplus to the set of nodes with goods shortage). In the general case, the problem of the minimal cost flow is solved by addressing the dual problem [54].
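For a 3 × 3 instance such as the one above, the optimal allocation can also be verified by brute force (a check of ours, feasible only for small n, since there are n! assignments):

```python
# Enumerate all permutations and pick the cheapest assignment of
# employees to positions for the cost matrix of the example.
from itertools import permutations

s = [[1, 2, 3],
     [2, 4, 7],
     [5, 3, 8]]
best = min(permutations(range(3)),
           key=lambda p: sum(s[i][p[i]] for i in range(3)))
print(best, sum(s[i][best[i]] for i in range(3)))  # (2, 0, 1) with cost 8
```

This confirms the outcome of Algorithm 8: employee 1 takes position 3, and the total cost equals 8.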


A.2.5. Problems of Network Planning, Scheduling and Control

Consider a project consisting of a set of operations (elementary activities). Technological relations among the operations are defined in the form of a network (a network diagram). Here arcs correspond to operations, whereas nodes reflect events (the end moments of a certain operation or several operations). Each operation (i, j) has a given duration tij. The methods of describing and analyzing network diagrams are developed in the theory of network planning, scheduling and control (NPSC) [20, 62].

The problem of project duration (time management). Clearly, project duration is defined by the path of maximal length, referred to as the critical path. Above we have described the methods of finding the longest path. In Figure A.2.6, the critical path is marked by double arcs; its length constitutes 16. Operations belonging to the critical path are called critical. The rest (noncritical) operations possess the so-called slack time (characterized by the maximal delay of an operation which does not affect the project duration); critical operations have zero slack time. Let us provide the corresponding formulas.


Figure A.2.6. Searching for the critical path.

Algorithm 9. Suppose that the implementation of a complex of operations (a project) starts at zero time. Denote by Q0 the set of events not requiring the implementation of any operation (i.e.,


the inputs of a network with a correct numbering). Next, let Qi be the set of events preceding event i (in fact, the set of network nodes j such that the arc (j, i) exists). Set

ti = max_{j∈Qi} (tj + tji), with tj = 0 for j ∈ Q0.   (1)

The quantity ti is called the earliest finish time of event i; it determines the minimal time when the event can happen. The critical path length

T = max_i ti   (2)

is defined by the earliest finish time of the terminal event (i.e., the event of completion of all operations). The latest finish time t̄i of an event is the maximal time when the event can occur without affecting the project duration. Denote by Ri the set of events directly following event i (the set of network nodes j such that the arc (i, j) exists). For each node-event i, evaluate the length li of the longest path from this node to the network output (notably, to the event of completion of all operations):

li = max_{j∈Ri} (lj + tij).   (3)

Now, set t̄i = T − li, i = 1, …, n. The project is implemented at time T iff event i occurs not later than the time t̄i, i = 1, …, n. The complete slack time Δti of event i is the difference between the corresponding latest and earliest finish times:

Δti = t̄i − ti, i = 1, …, n.   (4)
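A short sketch of formulas (1)-(4) on an illustrative event network (not the one of Figure A.2.6): earliest times by a forward pass, latest times via the longest paths to the output, and slacks by (4):

```python
# Nodes 0..4 in correct numbering; succ[i] lists (j, t_ij) for arcs (i, j).
succ = {0: [(1, 3), (2, 5)], 1: [(3, 7)], 2: [(3, 2), (4, 4)],
        3: [(4, 4)], 4: []}
n = 5

t = [0.0] * n                           # earliest finish times, formula (1)
for i in range(n):
    for j, tij in succ[i]:
        t[j] = max(t[j], t[i] + tij)
T = max(t)                              # critical path length, formula (2)

l = [0.0] * n                           # longest paths to the output, (3)
for i in range(n - 1, -1, -1):
    for j, tij in succ[i]:
        l[i] = max(l[i], l[j] + tij)

latest = [T - l[i] for i in range(n)]
slack = [latest[i] - t[i] for i in range(n)]
print(T, slack)                         # critical events have zero slack
```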

Evidently, the complete slack time of critical events (the events belonging to the critical path) equals zero.

The resource allocation problems in networks are (for convenience) considered by describing operations as network nodes and relations among them as arcs. As a matter of fact, the representations "operations-arcs, events-nodes" and "relations-arcs, operations-nodes" are equivalent. Dashed lines may reflect resource relations (the corresponding operations must be performed using the same resources). Examples are the networks demonstrated in Figures A.2.6-A.2.7.


The complete slack time of operation (i, j) is the quantity Δij = tᴸij − tᴱij, where tᴸij stands for the latest start (finish) time of the operation, and tᴱij designates its earliest start (finish) time. To choose the optimal allocation of resources, it is necessary to find the critical paths for each possible allocation and compare the lengths of these paths. Consider the network shown in Figure A.2.7.


Figure A.2.7. The representation “operations-nodes”.


Figure A.2.8. An example of resource allocation.

The operations "0–1" and "0–2" have a common resource; the potentials of nodes corresponding to the two ways of resource utilization (first the operation "0–1" and then the operation "0–2," and vice versa) are demonstrated in Figure A.2.7 in square brackets and without brackets, respectively. Unfortunately, there exist no universal (efficient and accurate) solution methods for resource allocation problems in networks. Again, we focus on the following example (as a special case admitting a simple algorithm). Take the network illustrated by Figure A.2.8. Suppose that we know the latest finish times $\bar t_i$ for three operations. The problem lies in defining the sequence of implementing these operations (provided that all operations are performed using the same resource unit; thus, their simultaneous implementation is impossible). Clearly, in the present example, it is optimal to implement first the operation with the minimal latest finish time $\bar t_i$.
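A small sketch of this sequencing rule (process operations in nondecreasing order of their latest finish times, i.e., earliest deadline first); the durations and deadlines below are hypothetical, chosen only to illustrate the feasibility check.

```python
ops = {"a": (7, 23), "b": (8, 20), "c": (4, 15)}   # name: (duration, latest finish)

def feasible(order):
    """Check that executing operations one by one meets every deadline."""
    now = 0
    for name in order:
        duration, deadline = ops[name]
        now += duration
        if now > deadline:
            return False
    return True

edf = sorted(ops, key=lambda name: ops[name][1])    # sort by latest finish time
print(edf, feasible(edf))                           # ['c', 'b', 'a'] True
```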

Assume that a limited resource quantity has been allocated to implement a project; the problem of its best utilization is immediate then. Let $w_i$ be the amount of operation i and $f_i(v_i)$ the speed of its implementation (depending on the amount of resource $v_i$). Suppose that $f_i(\cdot)$ is a right-continuous nondecreasing function such that $f_i(0) = 0$. If $v_i(t)$ is the resource quantity allotted to operation i at time t, then its finish time $t_i$ is defined as the minimal time satisfying the equation

$\int_0^{t_i} f_i(v_i(t))\, dt = w_i$.

We say that a certain operation is implemented with a fixed intensity if the resource quantity used to implement it is time-invariant. Consequently, operation duration is given by the expression $t_i(v_i) = w_i / f_i(v_i)$. No general algorithms have been designed to date for finding an allocation of limited resources among operations that minimizes the finish time of a project. Thus, we consider a special case. Imagine that all operations are independent and implemented using a single type of resource. The quantity of this resource constitutes R, and $f_i(v_i)$ are continuous strictly monotone concave functions. Accordingly, there exists an optimal solution with the following properties: each operation is implemented with a fixed intensity, and all operations finish simultaneously at time T defined as the minimal time meeting

$\sum_{i=1}^{n} f_i^{-1}\!\left(\frac{w_i}{T}\right) \le R$.

Here f i 1 () stands for the inverse function to fi (), i = 1, n . A wide class of NPSC problems includes the so-called aggregation problems. Notably, it is necessary to represent a complex of operations (a project) as a single operation. Moreover, the problem comprises analyzing the properties of such representations when optimization (within the framework of an aggregated description) yields an optimal solution for the initial (detailed) representation.

APPENDIX 3. PREFERENCE RELATIONS AND UTILITY FUNCTIONS

This appendix deals with a framework for describing the preferences of organizational system participants, viz., preference relations and utility functions. For further reading, we recommend a series of monographs and textbooks [1, 31, 45, 47, 48]. The models of multicriterion decision making are considered, e.g., in [30].

Preference relations. Chapter 1 has emphasized that models of decision-making proceed from the following assumption: facing a choice problem and making a decision (choosing an alternative), a man is guided by personal preferences. In other words, a man chooses an action that would yield the most beneficial result of activity (according to his/her viewpoint). A

formal description for comparison of alternatives can be provided in terms of preference relations and equivalence relations. Let us introduce the necessary definitions.

A binary relation $\rho$ on a set $A_0$ is a subset $\rho \subseteq A_0 \times A_0$, where $A_0 \times A_0$ indicates the set of all ordered pairs (a, b), $a, b \in A_0$. If $(a, b) \in \rho$, we say that the relation $\rho$ holds (equivalently, takes place) for (a, b) and write $a\,\rho\,b$. The fact that a binary relation $\rho$ takes no place for (a, b) is denoted by $a\,\rho^c\,b$.

The preference relation $\succ$ is a binary relation determined by the following property: $a \succ b$ iff a decision maker (DM) prefers a to b.

The equivalence relation $\sim$ takes place for a pair (a, b) if $a \succ^c b$ and $b \succ^c a$ (i.e., neither alternative is strictly preferred to the other).

A relation $\rho$ is called reflexive (antireflexive) if for all $a \in A_0$ we have $a\,\rho\,a$ ($a\,\rho^c\,a$, respectively). A relation $\rho$ is called antisymmetric if $a\,\rho\,b$ and $b\,\rho\,a$ imply $a = b$. Moreover, a relation $\rho$ is called asymmetric if $a\,\rho\,b$ implies $b\,\rho^c\,a$. In what follows, we consider strict preference relations $\succ$ satisfying the asymmetry condition. A relation $\rho$ is called transitive if $a\,\rho\,b$ and $b\,\rho\,c$ imply $a\,\rho\,c$ for all $a, b, c \in A_0$. A relation $\rho$ is called complete if for all $a, b \in A_0$ we have $a\,\rho\,b$ or $b\,\rho\,a$.

Suppose that a DM preference is defined on an outcome set $A_0$; notably, this is a relation $\succ$ holding for a pair of outcomes a, b from the set $A_0$ if the DM prefers a to b. In addition, introduce an action set A. The latter comprises all feasible actions of the DM and consists of elements like "to do something," "to order something," "to buy something," and so on. Yet, stating a decision problem does not mean just defining the sets $A_0$, A and a preference relation on the set $A_0$; one should also establish a linkage between a decision and the corresponding outcome. A decision-making problem is the problem of the DM choosing an action from the set A that leads to the best outcome from the set $A_0$ (according to DM preferences). Solving this problem requires the following: given a preference relation on the outcome set $A_0$, it is necessary to derive a preference relation on the action set A; subsequently, it is necessary to choose the most preferable action.

Assume there exists a certain function $w: A \to A_0$, i.e., a deterministic (single-valued) correspondence between a chosen action and its outcome. In this case, action choice is equivalent to outcome choice. Thus, the problem lies merely in finding an implementable outcome (i.e., an outcome admitting an action which implements it) preferable to the rest of the implementable outcomes. A chosen action would belong to the set

$P(\succ, A) = \{a \in A \mid \nexists\, b \in A: w(b) \succ w(a)\}$.

All actions belonging to the solution yield outcomes that are equivalent in the sense of the relation $\sim$. The formulated problem is said to be a deterministic decision-making problem.
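A minimal Python sketch of the choice rule $P(\succ, A)$ under a deterministic outcome function w; the action set, the outcome map and the strict preference below are hypothetical.

```python
A = ["do", "order", "buy"]                    # feasible actions
w = {"do": "o1", "order": "o2", "buy": "o2"}  # deterministic action -> outcome map

strict = {("o1", "o2")}                       # o1 is strictly preferred to o2

def prefers(a, b):
    return (a, b) in strict

# P(succ, A) = {a in A | there is no b in A with w(b) succ w(a)}
P = [a for a in A if not any(prefers(w[b], w[a]) for b in A)]
print(P)                                      # ['do']
```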
The case is somewhat more complicated if the outcome z of an action y depends not only on DM actions but also on external factors. Accordingly, the relationship between outcomes and actions takes the form $z = w(y, \theta, u)$, where $\theta$ and u are factors not affected by the DM. Denote by $\Theta$ and U the sets of feasible values of these factors. Naturally, with the above factors being known at the moment of decision making, the problem is treated similarly to the previous case. However, an uncertainty arises otherwise. Indeed, a certain action $y^*$ chosen by the DM may lead to many feasible outcomes: any outcome from the set $R(y^*) = \{w(y^*, \theta, u) \mid \theta \in \Theta, u \in U\}$ can be implemented depending on realizations of the external factors $\theta$ and u. To choose an action, the DM has to compare these sets. Unfortunately, the preference relation on the system of sets $R(\cdot)$ has not been given in the problem situation. Therefore, this relation should be derived from the preference relation on the outcome set $A_0$ (perhaps, using some additional assumptions).

For instance, imagine that we know the probability distributions of events belonging to the sets $\Theta$ and U. Hence, it is possible to define occurrence probabilities for different outcomes under the choice of a specific action. Thus, we obtain a decision-making problem under probabilistic uncertainty (see Section 1.1 and [16, 47]). Not much different is the case when the DM possesses no information on the probabilities of relevant events but has some beliefs about them: then the objective probabilities are replaced with subjective ones, and the same solution approach is employed. In the example under consideration, each decision (action) of the DM results in a lottery, i.e., a random process where outcomes are realized with certain probabilities. Passing from preferences on an outcome set to preferences on an action set requires the following: the DM must be able to compare his/her preferences on the set of such lotteries (to define which lottery is better or worse for him/her). And so, the optimal decision would be an action generating the best lottery. The techniques used for such passing are described below.

Utilities and utility functions. As a rule, preference relations are not directly used to describe DM interests in decision-making problems. Actually, the framework of binary relations appears inconvenient for modeling real systems and analyzing them. Accordingly, researchers often involve utility functions. The correspondence between the preference relation $\succ$ and a utility function

f : A0  1 is defined by the condition

a ,b  A0 : f (a) > f (b)  a  b.

(1)

What constraints must be imposed for passing from the preference relation to the utility function? This problem is studied in mathematical utility theory [16, 47]. Recall the preference relation is a binary relation on the outcome set $A_0$ meeting (at least) the asymmetry property. However, additional assumptions regarding the preference relation are necessary for its efficient usage. Making such assumptions, one must obtain a comfortable tool of analysis (applicable to real-life preferences). For years, the stated issue has been the subject of discussions; still, it has not been closed. Indeed, additional assumptions are introduced in the form of axioms or hypotheses regarding the decision-making process and its laws; thus, the nature of certain assumptions is disputable. Let us present a typical combination of such axioms (note some of them appear dependent). Other axiomatics can be found in [16]. Introduce the following utility axioms.

1. Suppose that $\succ$ is an asymmetric preference relation and $\sim$ is an equivalence relation. Then for any outcomes x and y exactly one of the following takes place: $x \succ y$, or $y \succ x$, or $x \sim y$. In other words, for a pair of outcomes, the first outcome is more preferable than the second one, or the second outcome is more preferable than the first one, or these outcomes are equally preferable. If $a \sim b \Leftrightarrow$ ($a \succ^c b$ and $b \succ^c a$), then this axiom is always valid.

2. For any outcome x we have $x \sim x$. In other words, any fixed outcome is
equivalent to itself. This is immediate from the definition of the equivalence relation.

3. If $x \sim y$ and $y \sim z$, then $x \sim z$. This is the transitivity condition of the equivalence relation. In contrast to Axioms 1-2, this one is not so evident; there exist examples of reasonable (sensus communis) relations where Axiom 3 fails (see examples and references in [31, 45]).

4. If $x \succ y$ and $y \succ z$, then $x \succ z$ (the transitivity condition of the preference relation).

5. If $x \succ y$ and $y \sim z$, then $x \succ z$. In other words, if x is more preferable than y and y is equally preferable as z, then x is more beneficial than z. As a matter of fact, this axiom states the assumption of the arbitrarily deep "distinguishing ability" of an agent: the latter can distinguish between arbitrarily close outcomes.

6. If $x \sim y$ and $y \succ z$, then $x \succ z$ (by analogy to Axiom 5).

The given assumptions suffice [47] for introducing a function $f(\cdot)$ such that condition (1) holds. Nevertheless, they do not guarantee the unique determination of this function. Really, in the case of a finite number of outcomes, nonstrict ordering merely allows arranging the outcomes (from the "worst" outcome to the "best" one). This sequence of outcomes can be assigned any sequence of increasing numbers (the values of the utility function correspond to appropriate elements of the sequence). This means that the utility function is defined up to a monotone transform. And so, passing from the preference relation to a utility function (defined up to a linear transform) requires formulating additional axioms (the so-called combining axioms that describe the model of behavior under uncertain conditions).
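The following sketch illustrates the remark on uniqueness: for finitely many outcomes, numbering them from worst to best yields an admissible utility function, and any increasing renumbering represents the same preferences. The outcomes and their ranking are hypothetical.

```python
outcomes = ["x", "y", "z"]
rank = {"z": 0, "y": 1, "x": 2}   # worst -> best; equivalent outcomes share a rank

f = {o: rank[o] for o in outcomes}                 # one admissible utility function
g = {o: 10 * rank[o] ** 3 + 5 for o in outcomes}   # a monotone transform of f

# Both functions represent the same preference relation:
same = all((f[a] > f[b]) == (g[a] > g[b]) for a in outcomes for b in outcomes)
print(f, g, same)                                  # ... True
```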

Let x and y be any outcomes from $A_0$ and $0 \le r, s \le 1$. Then the expression $r x + (1 - r) y$ denotes the outcome representing the following lottery: the outcomes x and y are realized with the probabilities r and (1 - r), respectively. Assume that this lottery satisfies the conditions stated below.

7. For any lottery r with outcomes x and y, we have $r x + (1 - r) y = (1 - r) y + r x$ (the commutative property). This is a technical condition imposing no restrictions on preferences.

8. For any lotteries s and r with outcomes x, y, and $z \in A_0$ we have $r x + (1 - r)(s y + (1 - s) z) = r x + (1 - r) s y + (1 - r)(1 - s) z$. According to this condition, the order of lotteries is not important to a DM.

9. $r x + (1 - r) x = x$ (the reflexive property).

10. If $x \sim z$, then for any y and r we have

$(r x + (1 - r) y) \sim (r z + (1 - r) y)$.

11. If $x \succ z$, then for any $r > 0$ and y we have $(r x + (1 - r) y) \succ (r z + (1 - r) y)$.

12. Suppose that $x \succ z \succ y$. Then there exists $0 \le r \le 1$ such that $(r x + (1 - r) y) \sim z$. This is the axiom of continuity.

The following result was obtained in [47]. If the preference relation $\succ$ meets Axioms 1–12, then there exists a function $f: A_0 \to \mathbb{R}$ such that for any x, y from $A_0$ and any $r \in [0, 1]$:

$f(x) > f(y) \Leftrightarrow x \succ y$, (2)

$f(r x + (1 - r) y) = r f(x) + (1 - r) f(y)$. (3)

The function f is unique up to a positive linear transform, i.e., if a certain function $F(\cdot)$ agrees with conditions (2)-(3), then $F(x) = \alpha f(x) + \beta$, where $\alpha > 0$ and $\beta$ are

constants. Therefore, Axioms 1–12 suffice to construct a utility function from a preference relation uniquely up to a shift and a rescaling of coordinates. Hence, utility can be described as a function $F(x) = \alpha f(x) + \beta$, where $f(x)$ stands for a known function and the constants $\alpha > 0$, $\beta$ are uncertain quantities. Problem definitions in mathematical economics and control almost never involve preference relations directly. Consequently, utility functions are built empirically (more specifically, using well-known results of utility theory [16, 47]). Nevertheless, one should always keep in mind the following aspect: correct application of Neumann-Morgenstern utility functions requires that the corresponding preference relation agrees with Axioms 1–12.

We have constructed the utility function of a separate agent. Yet, decision-making theory and game theory aim at studying the interaction of many agents. Hence, it is interesting to know the correlation among utilities of different agents; in particular, how are the scales of utility measurement of different agents "reduced to a common denominator"? This issue is of crucial importance in game models where players may exchange their utilities (transferable utility games, or TU-games). In contrast, the rules of nontransferable utility games, or NTU-games, ban utility exchanges. We emphasize that utility exchange among players may have the form of payments or transfer of other material assets. Such payments are intended to affect the utility (or payoff) of a certain player. Clearly, in this case the description of outcomes must include the amount of financial resources or material assets to be exchanged (indeed, utility functions are defined on the set of outcomes). The following result takes place [55]. A decrease in utility of a "donor" d (as the result of passing an amount of financial resources) corresponds to a proportional increase in utility of an "acceptor" a, if their utility functions $F_i(\cdot)$ have the form

$F_i(x_i, c_i) = g_i(x_i) + \lambda_i c_i$, $i \in \{d, a\}$. (4)

Here Fi () stands for the utility function of player i and сi means the amount of financial resources being available to player i; in addition, xi represent the rest components of the outcome for player i, and gi () specifies the utility of the outcome components x. Assume that utility functions are defined by (4) for all individuals considered. In this case, we say that there is a divisible linearly transferable good. By a proper choice of scale of preference functions, one can guarantee that utility increments (gained by transferring some amount of financial resources) are not just proportional but equal by the absolute value. The presence of a linearly transferable good simplifies the analysis of control models for organizational systems.

APPENDIX 4. THE BASICS OF FUZZY SET THEORY

This appendix includes the definitions of fuzzy sets, fuzzy relations and the generalization principle. In addition, it describes their properties and a decision-making model based on fuzzy initial information.

Fuzzy sets. Let X be a certain set. A fuzzy subset $\tilde A$ of the set X is a set of pairs $\tilde A = \{(\mu_{\tilde A}(x), x)\}$, where $x \in X$ and $\mu_{\tilde A}(x) \in [0, 1]$. The function $\mu_{\tilde A}: X \to [0, 1]$ is called the membership function of the fuzzy set $\tilde A$; on the other hand, X is referred to as the basic set. Throughout this appendix, we adopt notation with a tilde to indicate fuzzy sets.

The support of the set $\tilde A$ is the subset of the set X whose elements have positive membership values:

$\mathrm{supp}\,\tilde A = \{x \in X \mid \mu_{\tilde A}(x) > 0\}$.

Example A.4.1.¹⁴ For instance, consider the fuzzy set of real numbers being much larger than unity: $\tilde A = \{x \in \mathbb{R}^1 \mid x \gg 1\}$. A possible membership function of this set is demonstrated by Figure A.4.1. For comparison, we draw the membership function of the crisp set of real numbers exceeding unity: $B = \{x \in \mathbb{R}^1 \mid x > 1\}$.

Properties of fuzzy sets

1. A fuzzy set $\tilde A$ is normal if $\sup_{x \in X} \mu_{\tilde A}(x) = 1$.

¹⁴ The presented examples illustrate the correspondence between crisp sets and fuzzy sets.

Figure A.4.1. An example of a crisp set and a fuzzy set.

2. Two fuzzy sets are equal ($\tilde A = \tilde B$) if $\forall x \in X: \mu_{\tilde A}(x) = \mu_{\tilde B}(x)$.

3. A fuzzy set $\tilde A$ contains a fuzzy set $\tilde B$ (equivalently, $\tilde B$ is a subset of $\tilde A$, i.e., $\tilde B \subseteq \tilde A$) if $\forall x \in X: \mu_{\tilde B}(x) \le \mu_{\tilde A}(x)$.

Example A.4.2. The membership functions of the fuzzy subsets A = [1, 5] and B = [3, 4] of the set of real numbers (Figure A.4.2) have the form

$\mu_A(x) = 1$ if $x \in [1, 5]$ and $0$ if $x \notin [1, 5]$; $\mu_B(x) = 1$ if $x \in [3, 4]$ and $0$ if $x \notin [3, 4]$.

Using the above definition of subsets, one obtains $B \subseteq A$.

4. The intersection of fuzzy sets $\tilde A$ and $\tilde B$ ($\tilde A \cap \tilde B$) is the maximal fuzzy set contained in both $\tilde A$ and $\tilde B$; it possesses the membership function

$\mu_{\tilde A \cap \tilde B}(x) = \min\{\mu_{\tilde A}(x), \mu_{\tilde B}(x)\}$, $x \in X$.

Figure A.4.2. Inclusions of fuzzy sets.

Figure A.4.3. The intersection of fuzzy sets.

Example A.4.3. Consider crisp sets A = [1, 4] and B = [3, 5]. According to the above definition, the operation of set intersection is treated in a standard way for crisp sets (see Figure A.4.3).

5. The sum of fuzzy sets $\tilde A$ and $\tilde B$ is the minimal fuzzy set containing both $\tilde A$ and $\tilde B$, with the membership function

$\mu_{\tilde A \cup \tilde B}(x) = \max\{\mu_{\tilde A}(x), \mu_{\tilde B}(x)\}$, $x \in X$.

Example A.4.4. Take crisp subsets A = [1, 4] and B = [3, 5] of the set of real numbers. Again, the above-stated definition yields the standard sum of sets for crisp sets (see Figure A.4.4).

 A B (x )

1

 B (x)

 A (x )

x 1

3

4

5

Figure A.4.4. The sum of fuzzy sets.

6. The complement of a fuzzy set $\tilde A$ in X is the fuzzy set $\neg\tilde A$ having the membership function

$\mu_{\neg\tilde A}(x) = 1 - \mu_{\tilde A}(x)$, $x \in X$.

Example A.4.5. Consider the crisp subset A = [1, 4] of the set of real numbers. Similarly, being applied to crisp sets, the above definition of the complement leads to well-known results (see Figure A.4.5).

Figure A.4.5. The complement of a set.

Figure A.4.6. The set of pairs (x, y) for the relation "≥".

Fuzzy relations. A crisp binary relation defined over a set X means a subset of the set $X \times X$ (see Appendix 3). By transferring the definition of fuzzy sets to fuzzy relations, let us determine a fuzzy relation as a fuzzy subset of $X^2$. Therefore, a fuzzy relation $\tilde R$ is given by a membership function $\mu_{\tilde R}(x, y)$ such that $\mu_{\tilde R}: X \times X \to [0, 1]$. The value of this membership function is interpreted as the degree of validity of the relation $x \tilde R y$.

Example A.4.6. Consider the crisp relation "being equal to or greater than" and denote it by R. Then $R = \{(x, y) \mid x \ge y\}$. The membership function of this crisp binary relation makes up

$\mu_R(x, y) = 1$ if $x \ge y$, and $0$ if $x < y$.

The set R is illustrated by Figure A.4.6.

Properties of fuzzy relations

1. Reflexivity:
– if $\forall x \in X: \mu_{\tilde R}(x, x) = 1$, then a fuzzy relation $\tilde R$ is reflexive in the R1 sense;
– if $\forall x \in X: \mu_{\tilde R}(x, x) \ge 1/2$, then a fuzzy relation $\tilde R$ is reflexive in the R2 sense.

2. Antireflexivity (in the R1 sense). If $\forall x \in X: \mu_{\tilde R}(x, x) = 0$, then a fuzzy relation $\tilde R$ is antireflexive in the R1 sense.

3. Symmetry. If $\forall x, y \in X: \mu_{\tilde R}(x, y) = \mu_{\tilde R}(y, x)$, then a fuzzy relation $\tilde R$ is symmetric.

4. Asymmetry. If for all $x, y \in X$ the condition $\mu_{\tilde R}(x, y) > 0$ implies $\mu_{\tilde R}(y, x) = 0$, then a fuzzy relation $\tilde R$ is asymmetric.

5. Linearity (completeness). A fuzzy relation $\tilde R$ is $\alpha$-linear in the L1 sense if $\forall x, y \in X: \max\{\mu_{\tilde R}(x, y), \mu_{\tilde R}(y, x)\} > \alpha$, where $\alpha \in [0, 1)$. In the case $\alpha = 0$, $\tilde R$ is weakly linear. If $\forall x, y \in X: \max\{\mu_{\tilde R}(x, y), \mu_{\tilde R}(y, x)\} = 1$, then a fuzzy relation $\tilde R$ is strongly linear. A fuzzy relation $\tilde R$ is linear in the L2 sense if $\forall x, y \in X: \mu_{\tilde R}(x, y) \ge 1 - \mu_{\tilde R}(y, x)$.

6. For a fuzzy relation $\tilde R$, the negation $\neg\tilde R$ is defined as a fuzzy relation whose membership function is

$\mu_{\neg\tilde R}(x, y) = 1 - \mu_{\tilde R}(x, y)$, $\forall x, y \in X$.

7. For a fuzzy relation $\tilde R$, the inverse relation $\tilde R^{-1}$ is defined by the following formula:

$\mu_{\tilde R^{-1}}(x, y) = \mu_{\tilde R}(y, x)$, $\forall x, y \in X$.

8. Consider fuzzy relations $\tilde R_1$ and $\tilde R_2$. There exist different notions of the composition of relations (product of relations). Notably, they are defined by the following expressions:

C1 (the maximin composition): $\mu_{\tilde R_1 \circ \tilde R_2}(x, y) = \sup_{z \in X} \min\{\mu_{\tilde R_1}(x, z), \mu_{\tilde R_2}(z, y)\}$;

C2 (the minimax composition): $\mu_{\tilde R_1 \circ \tilde R_2}(x, y) = \inf_{z \in X} \max\{\mu_{\tilde R_1}(x, z), \mu_{\tilde R_2}(z, y)\}$;

C3 (the maxi-multiplicative composition): $\mu_{\tilde R_1 \circ \tilde R_2}(x, y) = \sup_{z \in X} \{\mu_{\tilde R_1}(x, z) \cdot \mu_{\tilde R_2}(z, y)\}$.
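A minimal sketch of the maximin composition (C1) for fuzzy relations given by membership matrices over a finite basic set; the matrices are hypothetical (the other compositions differ only in replacing sup-min by inf-max or by sup-product).

```python
# mu[i][j] is the degree to which "i R j" holds over X = {0, 1, 2}.
R1 = [[1.0, 0.7, 0.2],
      [0.0, 1.0, 0.5],
      [0.3, 0.0, 1.0]]
R2 = [[1.0, 0.4, 0.9],
      [0.6, 1.0, 0.1],
      [0.0, 0.8, 1.0]]

n = len(R1)
# (R1 o R2)(x, y) = sup_z min{ R1(x, z), R2(z, y) }
C = [[max(min(R1[x][z], R2[z][y]) for z in range(n)) for y in range(n)]
     for x in range(n)]
for row in C:
    print(row)
```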

9. Transitivity. The three notions of composition lead to three concepts of transitivity, (T1), (T2) and (T3), via the scheme $\tilde R \circ \tilde R \subseteq \tilde R$. Note that the definition of maximin transitivity (in the case of crisp binary relations) coincides with the corresponding definition given in Appendix 3.

A fuzzy preference relation (FPR) is a fuzzy relation meeting properties (R1), (L1) and (T1).

The generalization principle defines the image of a fuzzy set under a crisp or fuzzy mapping. Recall that the image of a mapping $f: X \to Y$ of a crisp set X onto a set Y is the set of elements of Y possessing preimages in X: $f(X) = \{y \in Y \mid \exists x \in X: f(x) = y\}$. According to the generalization principle (stated by R. Bellman and L. Zadeh), under a crisp mapping $f: X \to Y$ the image of a fuzzy set $\mu_{\tilde A}(x)$, $x \in X$, is the fuzzy set $\mu_{\tilde B}(y)$, $y \in Y$, with the membership function

$\mu_{\tilde B}(y) = \sup_{\{x \in X \mid f(x) = y\}} \mu_{\tilde A}(x)$, $y \in Y$.
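A minimal sketch of the generalization principle for a crisp mapping: the membership of each image point is the supremum of $\mu_{\tilde A}$ over its preimage. The basic set, the mapping and the membership values are hypothetical.

```python
X = [-2, -1, 0, 1, 2]
mu_A = {-2: 0.1, -1: 0.6, 0: 1.0, 1: 0.8, 2: 0.3}

f = lambda x: x * x                     # crisp mapping f: X -> Y

Y = sorted({f(x) for x in X})
# mu_B(y) = sup over the preimage of y of mu_A(x)
mu_B = {y: max(mu_A[x] for x in X if f(x) == y) for y in Y}
print(mu_B)                             # {0: 1.0, 1: 0.8, 4: 0.3}
```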

The generalization principle is a convenient tool for "converting" crisp models and problems to their fuzzy analogs. This principle is widely used in models of decision-making [29].

Models of decision-making under fuzzy initial information. In Section 1.1 and Appendix 3, we have formulated the rule of individual rational choice

$P(R_{A_0}, A_0) = \{z \in A_0 \mid \forall t \in A_0: z R_{A_0} t\}$

for crisp relations. Now, let us restate it in terms of membership functions. The membership function of a crisp binary preference relation R is given by $\mu_R(x, y) = 1$ iff $x R y$. Its strict (i.e., asymmetric, antireflexive and transitive) component is defined by the membership function

$\mu_P(x, y) = \max\{\mu_R(x, y) - \mu_R(y, x), 0\}$

(the strict preference relation). The set of alternatives $x \in A_0$ dominated at least by a single alternative $y \in A_0$ has the membership function $\mu_P(y, x)$. The complement of this set, i.e., the set of alternatives $x \in A_0$ undominated by the given alternative $y \in A_0$, has the membership function $1 - \mu_P(y, x)$. By evaluating the intersection over all $y \in A_0$, we obtain the set of alternatives undominated with respect to the crisp binary relation $R_{A_0}$:

$\mu_{P(R_{A_0}, A_0)}(x) = \inf_{y \in A_0} \{1 - \mu_P(y, x)\} = 1 - \sup_{y \in A_0} \mu_P(y, x)$.

Example A.4.7. Consider the following crisp reflexive, complete, transitive binary relation (preference relation) over the set of three actions y1, y2, and y3: y1 is not less preferable than y2, y2 is not less preferable than y3, and y1 is not less preferable than y3. This crisp preference relation is shown in Table A.4.1 (the entry in row yi and column yj equals 1 iff yi is not less preferable than yj).

Table A.4.1.

     y1  y2  y3
y1   1   1   1
y2   0   1   1
y3   0   0   1

The matrix of the corresponding strict preference relation is presented in Table A.4.2.

Table A.4.2.

     y1  y2  y3
y1   0   1   1
y2   0   0   1
y3   0   0   0

The function $\mu^u_R(x)$ of this preference relation is specified by Table A.4.3.

Table A.4.3.

            y1  y2  y3
$\mu^u_R$   1   0   0

The set of undominated actions is the singleton {y1}. This completes the analysis of the example.

To proceed, let us repeat similar reasoning for fuzzy sets. Consider the case of no uncertainty in the relation between actions and results of activity. Suppose that a fuzzy preference relation is defined on the set A of feasible actions: $\mu_{\tilde R}(x, y)$, $x, y \in A$. Define the fuzzy strict preference relation (FSPR) $\tilde P$ corresponding to the FPR $\tilde R$ by the following formula:

$\mu_{\tilde P}(x, y) = \max\{\mu_{\tilde R}(x, y) - \mu_{\tilde R}(y, x), 0\}$, $x, y \in A$.

Moreover, determine the fuzzy set of undominated alternatives (actions) as

$\mu^u_R(x) = 1 - \sup_{y \in A} \mu_{\tilde P}(y, x)$, $x \in A$.

The quantity $\mu^u_R(x)$ can be interpreted as the degree of undominatedness of the action $x \in A$. Hence, a rational choice includes actions possessing the maximal possible grade of membership in the fuzzy set of undominated alternatives. The set

$A^u(\tilde R) = \{x \in A \mid \mu^u_R(x) = \sup_{z \in A} \mu^u_R(z)\}$

is called the set of maximally undominated actions (the Orlovsky set).

Suppose that an individually rational choice (under the FPR $\tilde R$ over the set of feasible actions) is defined by the following rule: $P(\tilde R, A) = A^u(\tilde R)$. The crisp set

$A^u_\alpha(\tilde R) = \{x \in A \mid \mu^u_R(x) \ge \alpha\}$, $\alpha \in (0, 1]$,

is said to be the set of $\alpha$-undominated actions.
APPENDIX 5. GLOSSARY

Abstracting is the process¹⁵ of forming images of the reality (beliefs, conceptions, judgments) by distraction and supplementation, i.e., by using or assimilating merely some of the corresponding data and adding new information not following directly from this data.
Action is: 1) an arbitrary deed, move or process dedicated to a result, i.e., a process subjected to a perceived goal; a piece of activity directed at achieving a certain goal; 2) (in game-theoretic models) a result of an agent's choice.
Active agent is a (collective or individual) subject possessing the active property.
Active forecast is a purposeful (with the aim of influencing agents' actions) reporting of information on future values of parameters that depend on a state of nature and/or actions of agents (a forecast as a control tool).
Active property is a general characteristic of living organisms, their own dynamics as a source of transforming or sustaining their essential relations with the surrounding world; (in a narrow sense) the ability of independent choice of definite goals and actions (including the choice of states, reporting of information, etc.).
Active system is a system where (at least) one element possesses the active property.

¹⁵ Italicized terms are also defined in the Glossary.

Activity is a specific form of human treatment of the surrounding world; its content consists in a reasonable modification and transformation of the surrounding world for the benefit of people.
Adequate is equal, identical, completely appropriate.
Agent is: 1) a controlled subject (e.g., a man, a collective, an organization); 2) (in game-theoretic models) a player moving second under a given move of a principal.
Aggregation is the process of uniting certain homogeneous rates (quantities) to obtain more general or integrated rates (quantities).
Allocation (in a cooperative game) is a distribution of the maximal coalition payoff among players such that each player gains more in comparison with his/her individual payoff.
Alternative is an option to be selected from two or more variants; a thing that can be possessed or used instead of another thing. A set of alternatives serves for making a choice.
Analogy is the similarity of objects (phenomena, processes, etc.) in some properties or aspects.
Analysis is the procedure of mentally or really decomposing an object, a phenomenon, a process or a relation between subjects into parts and establishing relations among them.
Anonymous mechanism is a procedure of decision making (a mechanism) being symmetric with respect to permutations of agents.
Antagonistic game is a two-player game where the sum of the players' payoffs is fixed for any action profile.
Approach is an initial principle or position used to study an object of research, a basic statement or view (the logical and historical approaches, the substantial and formal approaches, the qualitative and quantitative approaches, the phenomenological and essential approaches, the unitary and general (generalized) approaches – searching for common connections, regularities, typological features).
Assumption is a statement temporarily taken as true (until its validity is established).
Awareness is essential information possessed by a subject at the moment of decision making; it includes an informational structure.
Balanced game is a game in the form of a characteristic function, possessing a non-empty core.
Basics are primary or major statements, foundations.
Behavior is an interaction with the external environment, inherent in all living organisms and mediated by their external (motive) and internal (mental) activity. The supreme level of behavior is represented by human activity.
Binary relation over a certain set is a totality of ordered pairs of elements belonging to this set.
Bounded rationality is a decision-making principle where a subject (under the lack of time and/or information and/or cognitive resources and/or willingness) chooses not optimal but rational actions (i.e., actions being satisfactory for him/her); compare with the hypothesis of rational behavior.
Characteristic function is a function of sets, mapping each coalition onto its payoff.
Choice is an operation entering any purposeful activity and lying in a purpose-based narrowing of a set of feasible alternatives (if possible, down to a single alternative).
Class is a set or group of subjects or phenomena enjoying common properties.

Classification is the distribution of subjects belonging to a certain family into interconnected classes according to essential attributes which are inherent to subjects of a given family and discriminate them from subjects of other families.
Coalition interaction is negotiations, agreements and cooperation among players during formation and operation of coalitions.
Coalition is a subset of the players' set (a set of agents acting jointly).
Collective incentive is an agent's incentive based on actions of the whole collective or on results of a collective activity.
Collective is a set of people united by common interests or common occupation; a group demonstrating high-level development, where interpersonal relations are mediated by socially valuable and personally relevant content of a joint activity.
Commission incentive scheme is an incentive scheme where the agent's incentive is proportional to the principal's income or profit.
Common knowledge is a fact such that: 1) all agents know it; 2) all agents know (1); 3) all agents know (2); and so on (generally, the number of "iterations" is infinite).
Compensatory incentive scheme is an incentive scheme where the agent's incentive equals his/her costs.
Competition mode (in a system with distributed control) is an action profile of a principals' game not belonging to a domain of compromise.
Complex of control mechanisms is a set of coordinated procedures of decision making (mechanisms) to modify parameters of an organizational system.
Concept is an idea reflecting (in a generalized form) real objects and phenomena and their connections by fixing common and specific attributes such as properties of objects and phenomena and their relations.
Condition is anything that exerts an impact on something else (being conditioned).
Connection is the interconditionality of the existence of phenomena with space and/or time division.
Consciousness is the supreme level of activity of humans as social beings, consisting in that reality reflection in the form of perceptible and mental images anticipates practical actions of humans, adding a purposeful character to them.
Content is the essence of something.
Contract is a set of equilibrium efficient strategies (an action and a corresponding reward) of a principal and agents in an incentive problem.
Contract theory is a branch of control theory in social and economic systems, studying game-theoretic interaction between a principal and agents acting under an external probabilistic uncertainty.
Control efficiency (in TCO) is a guaranteed value of a principal's goal function over a set of solutions to agents' game.
Control is: 1) an impact on a controlled system, intended for ensuring its necessary behavior; (in game-theoretic models) 2) a principal's action; 3) a principal's strategy, i.e., a functional mapping actions or activity results of agents onto a principal's action.
Control of sequence of moves is control of an organizational system which lies in establishing the sequence of participants' moves.
Control problem is the problem of finding an optimal or rational feasible control.
Cooperation mode (in a system with distributed control) is an action profile of a principals' game belonging to a domain of compromise.

Cooperative game is a game where players may act jointly (coordinate their actions, exchange information, utility, and so on).
Coordinated control (incentive-compatible control) is control making plan fulfillment beneficial to agents (i.e., plan fulfillment is an equilibrium in agents' game).
Corresponding direct mechanism is a direct planning mechanism constructed by substituting the relationship between equilibrium messages of agents and their types into an initial indirect mechanism.
Criterion is: 1) a means of judging; a standard for comparison; a rule for assessment; a yardstick; 2) a measure of closeness to a goal.
Current control is a control mode for a dynamic organizational system where decisions are made using the short-sighted approach (i.e., for a current period only).
Data manipulation (under strategic behavior) is the process of purposeful (deliberate) distortion of the information reported by agents to a principal.
Decentralizing sets are a system of sets such that, for each agent, his/her set depends exclusively on the actions of his/her opponents.
Decision is the process and result of choosing a goal and a way of performing actions.
Decision making is a purposeful choice over a set of alternatives.
Decomposition is the operation of dividing a whole into parts, preserving the attributes of subordination and belonging.
Degenerate structure is a structure without relations among elements of a corresponding organizational system.
Description is enumerating different attributes of a certain subject that reveal it more or less exhaustively.
Development is an irreversible, directed and consistent change of material and ideal objects.
Dictatorship sets are a partition of the set of feasible types of agents such that, for each element of the resulting partition, there are sets of agents obtaining absolutely optimal plans (such agents are called dictators).
Direct control problem is a problem of finding an optimal control.
Direct mechanism is a planning mechanism where agents directly report their types to a principal.
Distributed control is a structure of an organizational system where the same agent is simultaneously subordinated to several principals.
Distribution of foresights is a characteristic reflecting the way of agents' consideration of the future (see foresight).
Domain of compromise is a set of individually rational Pareto-efficient and Nash-stable actions of principals and agents.
Dominant strategy equilibrium is an action profile of a game where each player chooses his/her dominant strategy.
Dominant strategy is the player's choice of an action ensuring the maximum of his/her goal function under any opponents' action profile.
Dynamic organization is an organizational system where participants make decisions over and over again (the sequence of strategy choice, being inherent to static systems, is repeated several times – see "repeated game").
Element is a component part of something.

Emergence is a property of systems which lies in the following: properties of a whole are not reduced to a set of properties of corresponding components, as well as are not deduced from them.
Environment is a set of objects and subjects, phenomena and processes lying outside a given system but interacting with it. Synonym: "external environment."
Equilibrium choice correspondence is mapping a set of equilibria onto a specific equilibrium.
Equilibrium – see game solution.
Equivalent direct mechanism is a strategy-proof corresponding direct mechanism of planning.
Essence is the internal content of an object, representing the unity of all diverse and contradictory forms of its being.
Exchange is the process of rearranging some resources among participants of an OS.
Exchange scheme is a set of exchange options.
Expertise mechanism is a planning procedure mapping experts' (agents') messages into an expertise result.
External environment – see environment.
Fair play principle (revelation principle) is a planning principle where a principal assigns plans maximizing his/her goal function under the perfect concordance conditions.
Family is a logical characteristic of a class of objects which includes other classes representing kinds of this family.
Fan structure is a two-level linear structure (tree).
Feasible set is a set of actions or controls satisfying all imposed constraints.
Forecast is a concrete prediction, or judgment about the future state of a certain phenomenon or process.
Foresight is the property of a subject to consider future consequences of decisions made today.
Form is a kind, a type, a structure, a system of organization of something, conditioned by a specific content.
Function is: 1) a relation of two (or several) objects such that a change in one object accompanies a corresponding change in another; 2) a duty, a circle of activity, an allocation, or a role.
Functioning is performing one's own functions, acting, being active, working.
Fuzzy uncertainty is an awareness regarding the membership function of feasible values of an uncertain parameter (states of nature, types of other agents, etc.).
Game history is a set of players' choices and/or their payoffs (values of a utility function) and/or states of nature observed by a given subject.
Game is: 1) an interaction of sides whose interests mismatch; 2) a kind of unproductive activity whose motive consists not in the ultimate result but in the process of such activity.
Game solution is a predictable and stable outcome of a game (synonym: equilibrium).
Game theory is a branch of applied mathematics studying models of games, i.e., decision making under the conditions of mismatching interests of sides (players), when each side strives for situation development in its own interests.
Game uncertainty is an incomplete awareness of a certain player regarding actions or decision making principles of other players.

Game Г1 (Stackelberg game) is a hierarchical game where a principal does not expect to observe the choice of an agent.
Game Г2 is a hierarchical game where a principal's strategy consists in mapping a set of feasible actions of agents onto a set of his/her feasible actions.
Generalized solution to a control problem is the parametric family of controls ensuring a given level of guaranteed efficiency on a specific set of models of organizational systems.
Goal function is a real-valued function defined on a set of feasible actions of agents and controls of principals; it reflects preferences and interests of a certain agent (a rational behavior of the latter consists in striving for maximization of a utility function).
Goal is a conscious image of an anticipated result of a certain activity.
Graph of a reflexive game is a graph whose nodes correspond to real and phantom agents; each node has incoming arcs from the nodes-agents affecting the gain of a given agent in an informational equilibrium; the number of incoming arcs is one less than the number of real agents.
Graph theory is a branch of applied mathematics studying the properties of sets (mostly, finite sets) with given relations among their elements.
Group is a set of people united by common interests, by a common profession, by a common activity, etc.
Guaranteeing strategy is the choice of a certain action by a player (agent) ensuring the maximal guaranteed result to him/her.
Hierarchical game is a game with a fixed sequence of moves between principals and agents, where the former possess the right of the first move.
Hierarchical games theory is a branch of game theory studying hierarchical games.
Hierarchy is a principle of structural organization in complex multilevel systems which lies in ordering the interaction between levels (from the top level to the bottom one).
Hypothesis is a supposition, an assumption whose truth value appears indeterminate; an assumption whose validity is not obvious.
Hypothesis of benevolence is the assumption that, given a set of equally preferable alternatives, an agent would choose an alternative being the most beneficial to a principal.
Hypothesis of deterministic behavior is the assumption that a subject strives for eliminating an existing uncertainty and making decisions under the conditions of complete awareness (based on all information available to the subject).
Hypothesis of independent behavior is the assumption that each subject performs the choice of his/her action regardless of what other subjects choose.
Hypothesis of indicator behavior is an assumption regarding the behavior of a certain participant of a dynamic organizational system; it implies that, within each period, the participant "moves" in an action space towards his/her action that would be optimal under the opponents' action profile of the previous period.
Hypothesis of rational behavior is the assumption that a subject (an agent or a principal) chooses actions yielding the most preferable results of activity (based on all information available to the subject).
Hypothesis of weak contagion is the assumption that actions of a separate subject exert almost no impact on definite parameters of an organizational system.

Implementability of social choice correspondence is the existence of an indirect planning mechanism admitting an equivalent direct mechanism leading (for any set of agents' types) to the same outcome as the social choice correspondence.
Implementability theory is a branch of control theory in social and economic systems studying implementability of social choice correspondence.
Incentive function is a function mapping a set of feasible actions of agents onto incentives paid to them by a principal.
Incentive mechanism – see incentive function.
Incentive mechanism for cost reduction is a mechanism stimulating agents to reduce costs (cost prices) and prices.
Incentive problem is the problem of finding an optimal incentive function (a game Г2 with side payments).
Incentive system – see incentive function.
Incentive (stimulation) is an external impact on an organism, a person or a group of people, reflecting in the form of a psychic reaction; motivation to perform a certain action; an influence conditioning the dynamics of psychic states of an individual, being related to the dynamics as an effect to a cause.
Indirect mechanism is a planning mechanism where agents report indirect information about their types to a principal (see direct mechanism).
Individual incentive is an incentive of an agent based on his/her personal results.
Individual is a separate person; a specimen, each independent living organism.
Individual rationality is the property of a subject's decision making which ensures a certain utility not smaller than the utility gained by refusing to make a decision.
Inessential cooperative game is a cooperative game with an additive characteristic function.
Information is: 1) a message or report about the state of affairs, information on something; 2) the reduced or eliminated uncertainty as the result of receiving certain messages; 3) a message inseparably linked with control, signals in the unity of syntactical, semantic and pragmatic characteristics; 4) the transfer and reflection of diversity in any objects and processes (of animate and inanimate nature).
Informational control is control directed towards the awareness of subjects, including their informational reflexion.
Informational equilibrium is an equilibrium in a reflexive game (a generalized Nash equilibrium) with the following assumptions: each (real and phantom) agent evaluates his/her subjective equilibrium (an equilibrium in a game he/she subjectively plays) based on the available hierarchy of beliefs regarding the objective and reflexive reality.
Informational reflexion is the process and result of an agent's thinking about the values of uncertain parameters and about what his/her opponents know or think about these values.
Informational structure (hierarchy of beliefs) is a tree whose nodes correspond to agents' information on essential parameters, beliefs of other agents, beliefs about beliefs, and so on.
Institute is: 1) (in sociology) a definite organization of social activity and social relations, personifying the norms of economic, political, legal, and moral life of a society, as well as social rules of vital activity and behavior of people; 2) (in law) a set of legal norms regulating uniform separate social relations.
Institutional control is a purposeful impact on constraints and norms of activity performed by participants of an organizational system.

Integrated rating mechanism is the aggregation procedure for a set of partial ratings used to obtain an integrated rating.
Interest is a real reason of actions, events, achievements; (in psychology) a motive or motivational state stimulating a certain activity.
Inter-level interaction is the subordination of an agent to a principal located at higher levels of a hierarchy.
Interval uncertainty is an awareness regarding the set of feasible values of an uncertain parameter (states of nature, types of other agents, etc.).
Inverse control problem is finding a set of feasible controls rendering a controlled system to a given state.
Joint activity constraint is a constraint imposed on the joint choice of actions by certain subjects (any situation when the hypothesis of independent behavior takes no place).
Jump incentive scheme is an incentive scheme where an agent gains a nonzero incentive if his/her action or result of activity is not smaller (or greater) than a plan.
Kind is a class of objects which enters a wider class of objects known as a family.
Knowledge is a result of the reality cognition process, an adequate reflection of the reality in one's mind.
Linear incentive scheme is an incentive scheme where the agent's incentive is proportional to his/her action or result of activity.
Linear organizational structure (tree) is a structure of an organizational system where each agent appears subordinated to a single principal.
Marginal utility is the derivative of a utility function.
Matrix organizational structure is a linear functional structure superimposed by a horizontal structure of responsibility for projects implemented in an organizational structure.
Maximal coalition is the coalition which includes all players.
Maximal guaranteed result is the maximal value of the utility function of a subject (a principal or an agent) under the worst-case action profile in a game and/or state of nature.
Maximal reasonable reflexion rank is the minimal reflexion rank to be used by an agent for covering the whole variety of outcomes in a reflexive game.
Maximin equilibrium is an action profile of a game where each player chooses his/her guaranteeing strategy.
Means is a way of action (or a tool) for achieving something.
Mechanism is: 1) a system or device determining the order of a certain activity; 2) the mechanism of functioning is a set of rules, laws and procedures regulating the interaction of organizational system participants; 3) control mechanism is a set of decision-making procedures used by a principal.
Mechanism of consent is a strategy-proof planning mechanism where plans are assigned based on the relative preferences reported by agents.
Mechanism of exchange is: 1) a procedure of resource reallocation among participants of an organizational system; 2) a planning procedure mapping agents' messages about their types into resource quantities offered by a principal for an exchange.
Mechanism of joint financing is a planning mechanism where a principal defines the terms of project financing from different funds.
Method is: 1) (= approach) a way of cognizing or studying phenomena in nature and social life; 2) a trick or a way of action.

Methodological is related to methodology.
Methodology is: 1) the theory of activity organization; 2) the theory of the scientific cognition method; 3) a set of methods applied in a certain science.
Mixed strategy is a probability distribution over a set of feasible actions of a certain player (over a set of pure strategies).
Model is an image of a certain system; an analog (a scheme, a structure or a sign system) of a certain fragment of the natural or social reality, a "substitute" for the original in cognition process and practice.
Model of organization is a set of organizational system participants (staff), its structure, goal functions of the participants, sets of feasible actions of the participants, as well as their awareness and the sequence of moves.
Modeling (simulation) is the method of studying cognized objects using their models, constructing models of real subjects and phenomena.
Motivation is the process of stimulating towards an activity, causing the activity of the subject and defining its direction.
Motivational control is control of agents' preferences (goal functions or utility functions).
Motive is stimulation towards an activity connected with satisfying certain needs of a subject; the set of external or internal conditions causing the activity of the subject and defining its direction.
Multi-agent organization is an organizational system which includes several agents.
Multi-channel mechanism is a mechanism where a principal makes decisions based on the results of parallel data processing by several channels (experts, PCs, etc.).
Multi-level organization is an organizational system possessing a hierarchical structure with three or more levels.
Multiproject is a project consisting of several technologically independent projects with shared (financial and material) resources.
Nash equilibrium is an action profile of a game such that any unilateral deviation from it appears unbeneficial to any player.
Need is a state of a certain individual generated by the lack of something; it represents a source of his/her activity.
Network structure is: 1) a structure with (potentially) existing connections among all elements; to solve a certain problem faced by a system, some connections are temporarily activated (thus transforming a degenerate structure into a linear or matrix structure) and then terminated (restoring the degenerate structure) until new problems appear; 2) a structure without a hierarchy.
Noncooperative game is a game where players may not act jointly.
Norm is: 1) a legal act, a generally accepted order; 2) (in game theory) a mapping of the set of action profiles and states of nature onto an action set of a decision maker.
Normal-form game is the representation of a game in the form of a set of players choosing actions one-time, simultaneously and independently, their goal functions and feasible sets being a common knowledge among them.
Object is an entity opposing a subject in his/her practical and cognitive activity; a part of the objective reality interacting with a subject.
Objective uncertainty is an incomplete awareness regarding states of nature. Synonym: natural uncertainty.
Operation is a set of actions or measures intended to achieve a certain goal.


Opponents are all participants of an organizational system except a given one.
Opponents' action profile (for a certain player) is the vector comprising the actions of all players except the given one.
Optimal compatible planning is a solution to a planning problem over the set of compatible (incentive-compatible) plans.
Optimal control is a feasible control ensuring maximal efficiency.
Order is a realized social or personal necessity of changes, together with the formulation of requirements for such changes.
Organization is 1) the condition or manner of being organized; 2) the act or process of organizing or of being organized; 3) an administrative and functional structure (such as a business or a political party), as well as the personnel of such a structure (see organizational system).
Organizational system is an association of people (e.g., an enterprise, an establishment, a firm) engaged in the joint implementation of a certain program or task, acting by specific procedures and rules (mechanisms).
Pareto equilibrium is an action profile of a game such that there exists no other action profile where all players gain payoffs at least as large and at least one player gains a strictly larger payoff. Synonym: efficient action profile.
Peak point is a maximum point of a goal function or utility function.
Penalty strategy is an action profile of a game (or a control) minimizing an agent's goal function.
Perfect concordance conditions are the conditions of assigning plans to agents by maximizing their utility functions over decentralizing sets.
Personalized control is (in the general case) using a specific control for each agent (see unified control).
Phantom agent is an agent existing in the minds of real and other phantom agents.
Phenomenon is a certain manifestation (expression) of an object, or the external form of its existence.
Plan is 1) a piece of work designated for a certain period, including the specification of its goals, content, amount, methods, implementation sequence and due dates; an idea, a project, major features; 2) (in game-theoretic models) an action or a result of an agent's activity desired by a principal.
Planning horizon is the number of future periods used to define plans in the control of a dynamic organizational system.
Planning is drawing up a plan of a certain activity or of the development of something; defining a plan.
Planning mechanism: see planning procedure.
Planning problem is 1) the problem of evaluating optimal plans; 2) the problem of defining an optimal planning procedure.
Planning procedure is a function mapping the set of feasible messages of agents onto the set of plans.
Preferences are a set of properties and abilities of a certain subject for defining the value or utility of alternatives (actions, results of an activity, etc.) and comparing them.
Principal is 1) a control authority; 2) (in game-theoretic models) a player making the first move (a meta-player establishing the rules of the game for other players).
Principle is 1) a basic statement of a theory, science, etc.; 2) a belief or view of something; 3) a key feature in the structure of something.
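For a finite set of action profiles, the Pareto equilibrium definition above can be checked by a direct dominance test; a minimal Python sketch follows, with illustrative (assumed) payoff vectors.

```python
# A minimal sketch: select Pareto-efficient action profiles from a finite
# set, per the definition above. The payoff vectors are illustrative.

profiles = {"a": (2, 2), "b": (3, 1), "c": (1, 1)}  # profile -> payoffs

def dominates(u, v):
    """u Pareto-dominates v: no player is worse off, someone strictly better."""
    return all(x >= y for x, y in zip(u, v)) and any(x > y for x, y in zip(u, v))

pareto = [p for p, u in profiles.items()
          if not any(dominates(v, u) for q, v in profiles.items() if q != p)]
print(pareto)  # ['a', 'b']: profile "c" is dominated by "a"
```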


Principle of adaptivity is a decision-making principle stating that one must consider all available information on the history of functioning of a controlled system. According to this principle, decisions once made (and the corresponding decision-making principles) must be regularly revised (see principle of well-timed control) following any changes in the states of a controlled system and in the conditions of its functioning.
Principle of adequacy is a principle stating that a control system (its structure, complexity, functions) must be adequate to the controlled system (to its structure, complexity and functions, respectively). The problems to be solved by a controlled system must be adequate to its capabilities.
Principle of agents' game decomposition is a principle stating that a principal applies controls admitting a dominant strategy equilibrium in the agents' game. For instance, in an incentive problem for a multi-agent system, an agent is compensated his/her actual costs only if he/she fulfills his/her own plan (regardless of the actions chosen by other agents).
Principle of complementarity is a principle stating that a high-accuracy description of a system is inconsistent with its high complexity.
Principle of completeness and prediction is a principle stating that, under a given range of external conditions, the set of control actions must ensure the achievement of posed goals (the completeness requirement) in an optimal and/or feasible way, taking into account the possible response of the controlled system to control actions under predicted external conditions.
Principle of coordination is a principle stating that, under existing institutional constraints, control actions must be maximally coordinated with the interests and preferences of controlled subjects.
Principle of costs compensation is a principle stating that, in an incentive problem, an optimal solution exactly compensates the costs of agents.
Principle of decomposition of functioning periods is a principle stating that a principal applies controls making the decisions of agents independent of the game history. For instance, in an incentive problem for a dynamic system, in each period an agent is compensated his/her costs only if he/she fulfills the plan of this period (regardless of the results achieved in preceding periods).
Principle of democratic control (principle of anonymity) is a principle requiring equal conditions and opportunities for all participants of a controlled system (without a priori discrimination in the receipt of informational, material, financial, educational and other resources).
Principle of development is a principle stating that a control action may lie in modifying the control system itself (being induced from within, this can be treated as self-development); the same concerns the development of a controlled system.
Principle of efficiency is a principle stating that a control system must implement the most efficient control actions (from the set of feasible control actions).
Principle of ethics (principle of humanism) is a principle stating that, in management decisions, respecting the ethical norms existing in a society or an organization has a higher priority than other criteria.
Principle of feedback is a principle stating that efficient control generally requires information on the state of the controlled system and on the conditions of its functioning; moreover, the implementation of a control action and its consequences must be monitored by the control subject.
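In standard notation (H for the principal's income function, c for the agent's cost function, A for the action set), the principle of costs compensation leads to the familiar quasi-compensatory incentive scheme; the following is a sketch of that result, with δ denoting an arbitrarily small motivating bonus:

```latex
% A sketch of the quasi-compensatory incentive scheme implied by the
% principle of costs compensation (H -- principal's income, c -- agent's
% costs, x^* -- the plan, \delta -- an arbitrarily small bonus).
\[
  \sigma^*(y) =
  \begin{cases}
    c(x^*) + \delta, & y = x^*,\\[2pt]
    0,               & y \neq x^*,
  \end{cases}
  \qquad
  x^* \in \arg\max_{x \in A}\bigl[H(x) - c(x)\bigr].
\]
```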


Principle of hierarchy is a principle stating that a control system generally has a hierarchical structure. It must agree with the functional structure of the controlled system and must not contradict the hierarchy of (horizontally or vertically) adjacent systems. Tasks and resources supporting the activity of a controlled system must be decomposed according to its structure.
Principle of monotonicity is a principle stating that complex systems (first of all, biological ones) aim at "preserving the achieved."
Principle of non-interference is a principle stating that a principal of any level interferes in a process if and only if his/her direct subordinates appear unable to implement the necessary complex of functions (at present and/or according to a forecast).
Principle of openness is a principle stating that the operation of a control system must be open to information, innovations, etc.
Principle of predictive reflection is a principle stating that a complex adaptive system predicts feasible changes of essential external parameters; consequently, when generating control actions, one should predict and anticipate such changes.
Principle of purposefulness is a principle stating that any impact of a control system on a controlled system must be purposeful.
Principle of rational decentralization is a principle stating that, in any complex multi-level system, there exists a rational level of decentralization of control, authority, responsibility, awareness, resources, etc. Rational decentralization implies an adequate decomposition and aggregation of goals, problems, functions, resources, and so on.
Principle of regulation and resource provision (in managerial activity) is a principle stating that management activity must be regulated (standardized) and must correspond to the constraints set by a meta-system (a system at a higher level of the hierarchy); any management decision must be feasible (in particular, provided with the necessary resources).
Principle of responsibility is a principle stating that a control system is responsible for the decisions made and for the efficiency of controlled system operation.
Principle of social and state control (principle of participation) is a principle stating that control of a social system must aim at the maximal involvement of all interested subjects (society, bodies of state power, individuals and legal entities) in the development and operation of the controlled system.
Principle of sufficient reflexion is a principle stating that the reflexion depth of an agent is defined by his/her awareness.
Principle of trust is a principle stating that an agent trusts the information reported by a principal. For instance, in the case of informational control, the control problem can then be solved (without loss of generality) by modifying the set of feasible strategies of the principal, under the assumption that an agent fully trusts the principal and makes decisions based exactly on the information reported by the principal.
Principle of unification is a principle stating that controlled systems and control systems of all levels must be described and studied using common principles (this applies both to the parameters of their models and to the efficiency criteria of their functioning). However, such principles must not eliminate the necessity of considering the specifics of a concrete system. On the other hand, control inevitably causes specialization (restriction of variety) of control subjects and controlled subjects.


Principle of uniformity is a principle stating that the states of a system must be assessed by considering the states of all its elements (see maximal guaranteed result). In other words, the rate of change in any system is limited, being generally defined by its most inertial elements ("the slowest ship determines the speed of the squadron").
Principle of well-timed control is a principle stating that, in real-time control, the information required for decision making must be supplied at the right time; moreover, management decisions must be made and implemented promptly, following any changes in the controlled system and in the external conditions of its functioning. In other words, the characteristic time of making and implementing management decisions must not exceed the characteristic time of changes in the controlled system (i.e., a control system must be adequate to the controlled processes in the sense of their rate of change).
Probabilistic uncertainty is an awareness of the probability distribution of an uncertain parameter (states of nature, types of other agents, etc.).
Problem domain is a certain domain of objects, a universe of consideration (reasoning), a class (set) of objects studied within a given context.
Problem is 1) something requiring execution or solution; a goal of activity specified under certain conditions; 2) (in science) a theoretical or practical issue requiring study and solution.
Process is the course of a certain phenomenon, a successive change of states, development stages, etc.
Production function is the relationship between the quantities of production factors used (inputs) and the corresponding maximal possible product outputs.
Program control is a control mode in a dynamic organizational system where decisions are initially made for all future periods.
Program is a complex of operations or measures with technological, resource or organizational connections, ensuring the achievement of a certain goal.
Project is 1) a plan or an idea; an elaborated plan of a building or device; a draft version of a document; 2) a time-limited purposeful change of a certain system under given requirements concerning the results and the feasible consumption of resources; any project possesses a specific organization.
Project management is a branch of control theory in social and economic systems studying efficient methods, forms and means of controlling certain changes (projects).
Project organizational structure is a linear structure where decomposition takes place with respect to the projects implemented by an organizational system.
Property is a philosophical category expressing the side of an object that causes its difference from, or similarity to, other objects and is discovered in its relation to other objects.
Quasi-single-peaked function is a function possessing a unique peak point and not increasing to the left and to the right of this point.
Rank incentive scheme is an incentive scheme where an agent's incentive depends either on whether the result of his/her activity belongs to a given set (the so-called normative rank incentive schemes) or on the position occupied by the agent in the ordered sequence of activity results of all agents (the so-called competition rank incentive schemes).
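On a finite grid of argument values, quasi-single-peakedness can be tested directly against the definition above; a simplified Python sketch follows (the sample value sequences are illustrative assumptions).

```python
# A minimal sketch: test whether a sequence of function values on a grid
# is quasi-single-peaked (non-increasing as the argument moves away from
# a maximum point). The sample values are illustrative.

def is_quasi_single_peaked(values):
    peak = max(range(len(values)), key=values.__getitem__)
    left_ok = all(values[i] <= values[i + 1] for i in range(peak))
    right_ok = all(values[i] >= values[i + 1] for i in range(peak, len(values) - 1))
    return left_ok and right_ok

print(is_quasi_single_peaked([1, 2, 5, 5, 3]))  # True
print(is_quasi_single_peaked([1, 4, 2, 3, 0]))  # False: a second local rise
```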


Rank-order tournament (tender) is a planning mechanism where agents are sorted by a principal depending on the rates they report, and the plans or incentives assigned to the agents are defined based on this sorting.
Reflection is a universal property of matter that lies in the reproduction of attributes, properties and relations of a reflected object.
Reflexion is the reflection and analysis of a cognizing act.
Reflexion rank is the level of the tree of an informational structure.
Reflexive control is a purposeful impact on the strategic reflexion of controlled subjects.
Reflexive control problem is the problem of finding an optimal reflexive control.
Reflexive game is a game where the players' awareness is not common knowledge, but is defined by an informational structure, i.e., the hierarchy of their beliefs (beliefs about essential parameters, beliefs about mutual beliefs, etc.).
Reflexivity depth: see reflexion rank.
Relation is a philosophical category characterizing the interdependency of elements in a certain system.
Repeated game is a game where a certain sequence of strategy choice (characteristic for a single-period game, see normal-form game) repeats several times.
Research is the process of obtaining new scientific knowledge, a certain kind of cognizing activity characterized by objectivity, reproducibility, validity and accuracy.
Resource allocation mechanism is a planning procedure mapping agents' messages into the resource quantities allocated to them by a principal.
Result of activity (output) is (in game-theoretic models) a variable whose value is defined by the actions of agents and by the state of nature.
Science is a sphere of human activity whose function lies in the generation and theoretical systematization of objective knowledge about reality.
Self-annihilating forecast is a forecast that becomes untrue only because of having been made and announced.
Self-development is self-motion connected with a transition to a higher level of organization.
Self-implementing forecast is a forecast that becomes true only because of having been made and announced.
Self-motion is a change of a certain object under the influence of its internal contradictions, factors and conditions.
Self-organizing is a process leading to the creation, reproduction or perfection of the organization of a complex system.
Semantics is a branch of semiotics studying the relations among signs and their meanings.
Semiotics is a science studying sign systems.
Sequence is a consecutive course of something; the rules used to do something; an existing organization or mode of something.
Sequence of moves is the order of information acquisition and decision making adopted by the participants of an organizational system.
Set of exchange options is a set of individually rational allocations of resources that can be achieved via an exchange (within the framework of a given organizational system).
Set of implementable actions is a set of agents' actions providing a solution to their game under a given control by a principal.
Side payment is a variable additively entering the goal functions of a principal and an agent (or of different agents).
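As an illustration of a resource allocation mechanism, consider the simplest proportional procedure, where a principal divides a fixed resource quantity R among agents in proportion to their requests; the following Python sketch uses illustrative request values and is not a prescription of the book.

```python
# A minimal sketch of a proportional resource allocation mechanism:
# the principal maps agents' requests (messages) into allocated
# quantities x_i = min(s_i, R * s_i / sum(s)). Requests are illustrative.

R = 100.0                       # total resource held by the principal
requests = {"agent1": 50.0, "agent2": 80.0, "agent3": 30.0}

total = sum(requests.values())
allocation = {name: min(s, R * s / total) for name, s in requests.items()}
print(allocation)  # each agent gets at most its request, pro rata under deficit
```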


Sign is 1) a signal possessing a concrete meaning perceived by humans; 2) a real model of an abstract notion.
Simple active agent is an agent acting under a probabilistic uncertainty such that the result of activity does not exceed the corresponding action.
Single-peaked function is a function possessing a unique peak point and strictly decreasing to the left and to the right of this point.
Sliding control is a control mode for a dynamic organizational system where decisions are made in real time using the long-sighted approach (i.e., simultaneously for several future periods and taking into account forecasts for these periods).
Social choice correspondence is a mapping of the set of agents' types onto the set of alternatives chosen by the agents.
Stable informational control is an informational control justifying the expectations of agents (e.g., regarding their payoffs).
Staff control is control of an organizational system that lies in choosing (or modifying) its staff. Generally, staff control includes recruitment problems, dismissal problems and personnel development problems.
Staff is a set of elements making up a comprehensive whole.
State of nature is a parameter describing the external environment (with respect to an organizational system).
Strategic reflexion is the process and result of an agent's thinking about the decision-making principles used by his/her opponents within the framework of the awareness he/she assigns to them as the result of informational reflexion.
Strategy is (for each moment of decision making) a set of mappings of the game history and the player's awareness onto the set of his/her feasible actions.
Strategy profile is a vector comprising the actions of all players (agents).
Strategy-proof control is a principal's control ensuring that truth-telling about their types is beneficial to the agents (truth-telling becomes an equilibrium in the agents' game) (see data manipulation).
Strong Nash equilibrium is an action profile of a game such that a joint deviation by any coalition appears unbeneficial to that coalition.
Structure control is control of an organizational system that lies in choosing its structure.
Structure is a set of stable connections among the elements of a certain system.
Subject (active subject) is a carrier of practical activity and cognition, a source of activity directed at an object; an individual or a group as a source of cognition and transformation of reality, a carrier of activity.
Subject matter is a category meaning a certain integrity separated from the world of objects in the process of human activity and cognition; anything being in a relation or possessing a property; a side, an aspect, a viewpoint adopted by a researcher for cognizing an integral object (identifying the most essential attributes of the object).
Subjective uncertainty is an incomplete awareness of a certain agent regarding the types of other agents and/or principals in an organizational system.
Superadditive characteristic function is a characteristic function such that the sum of its values for any pair of nonintersecting coalitions does not exceed its value for the union of these coalitions.
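Superadditivity as defined above is a finite check over all pairs of disjoint coalitions; a minimal Python sketch follows, with an illustrative (assumed) characteristic function for a three-player game.

```python
# A minimal sketch: verify superadditivity of a characteristic function v
# given on all coalitions of a three-player game (values are illustrative).
from itertools import combinations

players = (1, 2, 3)
v = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 1, frozenset({3}): 1,
     frozenset({1, 2}): 3, frozenset({1, 3}): 3, frozenset({2, 3}): 3,
     frozenset({1, 2, 3}): 6}

def is_superadditive(v, players):
    coalitions = [frozenset(c) for r in range(len(players) + 1)
                  for c in combinations(players, r)]
    # v(S) + v(T) <= v(S ∪ T) for every pair of disjoint coalitions S, T
    return all(v[s] + v[t] <= v[s | t]
               for s in coalitions for t in coalitions if not s & t)

print(is_superadditive(v, players))  # True for the values above
```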


Synthesis is the integration of different elements or sides of a certain object into a comprehensive whole (a system); this process takes place in practical and cognitive activity.
System is a set of elements with mutual relations and connections that forms a definite unity.
Team is a temporary or permanent organizational unit (possibly an informal one) intended for performing definite tasks, duties or works; a collective able to achieve a goal in an autonomous and coordinated way under minimal control actions.
Technics is 1) a set of skills, tricks or abilities to implement a technology; 2) a totality of the material means of human activity.
Technique is a set of methods used for the reasonable implementation of some work.
Technology is a set of methods, operations, stages, etc. whose successive realization ensures the solution of a certain problem.
The minimal principal's costs to implement a given action are the minimal value of the principal's cost function over the set of controls stimulating an agent to choose the given action.
Theoretical means based on a theory, representing a theory, or related to issues of a certain theory.
Theory is a complex of views, beliefs or ideas directed at interpreting and explaining a certain phenomenon; (in a narrow sense) the supreme and most developed form of organizing scientific knowledge, providing an integral image of the regularities and essential connections in a specific area of reality (the object of this theory).
Theory of active systems is a branch of control theory in social and economic systems (active systems, organizational systems) studying the properties of the mechanisms of their functioning caused by the activity of system elements.
Theory of control in organizations is a branch of control theory studying control problems for organizational systems.
Topicality (relevance) is the importance or significance of something for the present time.
Transfer pricing mechanism is a planning procedure mapping agents' messages into the plans assigned to them by a principal (including a common price for the agents).
Two-step method is a method for solving a control problem that consists in the following. At Step 1, for each vector of agents' actions, find the minimal control (incentive) implementing it (i.e., satisfying the conditions of incentive compatibility and individual rationality). At Step 2, solve the problem of optimal incentive-compatible planning.
Type is a characteristic of an agent uniquely defining all his/her essential characteristics, e.g., preferences.
Uncertainty (uncertain means not precisely determined, not totally clear, equivocal) is an ambiguity of any origin, an incomplete awareness.
Uncertainty elimination is a procedure of passing from preferences depending on uncertain parameters to preferences defined on the set of parameters chosen by a certain subject.
Uncertainty type is classified, by the form of description, into interval, probabilistic and fuzzy uncertainty; and, by its source, into objective (external), subjective (internal) and game uncertainty.
Unified control is control based on using anonymous mechanisms (see personalized control).
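The two-step method admits a direct computational reading for a single agent with a finite action set; the Python sketch below uses illustrative income and cost functions and prices each action by the compensatory (cost-covering) incentive before choosing the optimal plan.

```python
# A minimal sketch of the two-step method on a finite action set.
# Step 1: the minimal incentive implementing action x equals the agent's
# cost c(x) (costs compensation). Step 2: maximize the principal's income
# H(x) net of that minimal incentive. H and c below are illustrative.

actions = [0, 1, 2, 3, 4, 5]
H = lambda x: 10 * x            # principal's income (assumed)
c = lambda x: x ** 2            # agent's cost function (assumed)

min_incentive = {x: c(x) for x in actions}                   # Step 1
plan = max(actions, key=lambda x: H(x) - min_incentive[x])   # Step 2
print(plan, H(plan) - min_incentive[plan])                   # 5, 25
```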


Unified means common, united.
Utility function is a real-valued function defined on the set of feasible activity results and controls of principals; it reflects the preferences and interests of a certain agent (rational behavior of the latter consists in striving to maximize the utility function).
Utility is a conditional characteristic reflecting the degree of a subject's satisfaction with the result of a certain activity; the utility value is defined by a utility function.


REFERENCES

[1] Aizerman M., Aleskerov F. Theory of Choice. – Amsterdam: Elsevier, 1995. – 324 p.
[2] Algorithmic Game Theory (ed. Nisan N., Roughgarden T., Tardos E., and Vazirani V.). – N.Y.: Cambridge University Press, 2009. – 776 p.
[3] Ashby W. An Introduction to Cybernetics. – London: Chapman and Hall, 1956. – 295 p.
[4] Ashimov A., Novikov D. et al. Macroeconomic Analysis and Economic Policy Based on Parametric Control. – Springer, 2011. – 278 p.
[5] Baron R., Greenberg J. Behavior in Organizations. 9th ed. – New Jersey: Pearson Education Inc., 2008. – 775 p.
[6] Berge C. The Theory of Graphs and its Applications. – New York: John Wiley and Sons, Inc., 1962. – 247 p.
[7] Bertalanffy L. General System Theory: Foundations, Development, Applications. – New York: George Braziller, 1968. – 296 p.
[8] Bolton P., Dewatripont M. Contract Theory. – Cambridge: MIT Press, 2005. – 740 p.
[9] Burkov V. Foundations of Mathematical Theory of Active Systems. – Moscow: Nauka, 1977. – 255 p. (in Russian).
[10] Burkov V., Danev B., Enaleev A. et al. Large-scale Systems: Modeling of Organizational Mechanisms. – Moscow: Nauka, 1989. – 245 p. (in Russian).
[11] Burkov V., Enaleev A. Stimulation and Decision-making in the Active Systems Theory: Review of Problems and New Results // Mathematical Social Sciences. 1994. Vol. 27. P. 271–291.
[12] Burkov V., Goubko M., Korgin N., Novikov D. Theory of Control in Organizations: an Introductory Course / Ed. by D. Novikov. 2013 (forthcoming).
[13] Camerer C. Behavioral Game Theory: Experiments in Strategic Interactions. – Princeton: Princeton University Press, 2003. – 544 p.
[14] Chkhartishvili A., Novikov D. Models of Reflexive Decision-Making // Systems Science. 2004. Vol. 30. No. 2. P. 45–59.
[15] Drucker P. The Effective Executive: The Definitive Guide to Getting the Right Things Done. – N.Y.: Collins Business, 2006. – 208 p.
[16] Fishburn P. Utility Theory for Decision Making. – New York: Wiley, 1970. – 234 p.
[17] Fudenberg D., Tirole J. Game Theory. – Cambridge: MIT Press, 1995. – 579 p.
[18] Germeier Yu. Non-Antagonistic Games (1976). – Dordrecht: D. Reidel Publishing Company, 1986. – 331 p.


[19] Goldratt E. Theory of Constraints. – Great Barrington: North River Press, 1999. – 160 p.
[20] Golenko-Ginzburg D. Stochastic Network Models in Innovative Projecting. – Voronezh: Science Book Publishing House, 2011. – 356 p.
[21] Gubanov D., Chkhartishvili A., Novikov D. Social Networks: Models of Informational Influence, Control, and Confrontation / Ed. by D. Novikov. – Moscow: Fizmatlit, 2010. – 228 p. (in Russian).
[22] Handy C. Understanding Organizations. 3rd ed. – Harmondsworth: Penguin Books, 1985. – 448 p.
[23] Harary F. Graph Theory. – Boston: Addison-Wesley Publishing Company, 1969. – 214 p.
[24] Herzberg F., Mausner B. The Motivation to Work. – New Brunswick: Transaction Publishers, 1993. – 180 p.
[25] Hillier F., Lieberman G. Introduction to Operations Research. 8th ed. – Boston: McGraw-Hill, 2005. – 1061 p.
[26] Intriligator M. Mathematical Optimization and Economic Theory. – Upper Saddle River, NJ: Prentice Hall, 1975. – 508 p.
[27] Jackson M. Social and Economic Networks. – Princeton: Princeton University Press, 2008. – 648 p.
[28] Kaplan R., Norton D. Balanced Scorecard: Translating Strategy into Action. – Boston: Harvard Business School Press, 1996. – 336 p.
[29] Kaufmann A. Introduction to Fuzzy Arithmetic. – New York: Van Nostrand Reinhold Company, 1991. – 384 p.
[30] Keeney R., Raiffa H. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. – New York: John Wiley and Sons, 1976. – 516 p.
[31] Kozielecki J. Psychological Decision Theory. – London: Springer, 1982. – 424 p.
[32] Laffont J.-J., Martimort D. The Theory of Incentives: The Principal-Agent Model. – Princeton: Princeton University Press, 2001. – 421 p.
[33] Lefebvre V. Algebra of Conscience. – London: Springer, 2001. – 372 p.
[34] Leontjev A. Activity, Consciousness and Personality. – Englewood Cliffs: Prentice-Hall, 1978. – 192 p.
[35] Lock D. The Essentials of Project Management. – New York: John Wiley and Sons, 2007. – 218 p.
[36] Malinvaud E. Lectures on Microeconomic Theory. – Amsterdam: Elsevier, 1998. – 398 p.
[37] Mansour Y. Computational Game Theory. – Tel Aviv: Tel Aviv University, 2003. – 150 p.
[38] Mas-Colell A., Whinston M., Green J. Microeconomic Theory. – N.Y.: Oxford University Press, 1995. – 981 p.
[39] Mechanism Design and Management / Ed. by D. Novikov. 2013 (forthcoming).
[40] Menard C. Institutions, Contracts and Organizations: Perspectives from New Institutional Economics. – Northampton: Edward Elgar Pub, 2000. – 458 p.
[41] Mesarović M., Macko D., Takahara Y. Theory of Hierarchical Multilevel Systems. – New York: Academic Press, 1970. – 294 p.
[42] Milgrom P., Roberts J. Economics, Organization and Management. – Englewood Cliffs, NJ: Prentice Hall, 1992. – 621 p.


[43] Mintzberg H. Structure in Fives: Designing Effective Organizations. – Englewood Cliffs, NJ: Prentice-Hall, 1983. – 312 p.
[44] Mishin S. Optimal Hierarchies in Firms. – Moscow: PMSOFT, 2004. – 127 p.
[45] Moulin H. Cooperative Microeconomics: A Game-Theoretic Introduction. – Princeton: Princeton University Press, 1995. – 440 p.
[46] Myerson R. Game Theory: Analysis of Conflict. – London: Harvard University Press, 1991. – 568 p.
[47] Neumann J., Morgenstern O. Theory of Games and Economic Behavior. – Princeton: Princeton University Press, 1944. – 776 p.
[48] Nitzan S. Collective Preference and Choice. – Cambridge: Cambridge University Press, 2010. – 274 p.
[49] Novikov A., Novikov D. Research Methodology: from Philosophy of Science to Research Design. – CRC Press, 2013.
[50] Novikov D. Incentives in Organizations. – Moscow: Sinteg, 2003. – 312 p. (in Russian).
[51] Novikov D. Management of Active Systems: Stability or Efficiency // Systems Science. 2001. Vol. 26. No. 2. P. 85–93.
[52] Novikov D., Chkhartishvili A. Reflexive Games. – Moscow: Sinteg, 2003. – 160 p. (in Russian).
[53] Ohno T. Toyota Production System. – N.Y.: Productivity Press, 1988. – 152 p.
[54] Ore O. Theory of Graphs. – Providence: American Mathematical Society, 1972. – 270 p.
[55] Owen G. Game Theory. – Philadelphia: W.B. Saunders Company, 1969. – 228 p.
[56] Roberts F. Discrete Mathematical Models with Applications to Social, Biological, and Environmental Problems. – Englewood Cliffs, NJ: Prentice-Hall, 1976. – 560 p.
[57] Salanie B. The Economics of Contracts. 2nd ed. – Massachusetts: MIT Press, 2005. – 224 p.
[58] Shoham Y., Leyton-Brown K. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. – N.Y.: Cambridge University Press, 2008. – 504 p.
[59] Stole L. Lectures on the Theory of Contracts and Organizations. – Chicago: University of Chicago, 1997. – 104 p.
[60] Taha H. Operations Research: An Introduction. 9th ed. – NY: Prentice Hall, 2011. – 813 p.
[61] The Cambridge Handbook of Expertise and Expert Performance (ed. by K. Ericsson). – Cambridge: Cambridge University Press, 2006. – 918 p.
[62] Wagner H. Principles of Operations Research. 2nd ed. – Upper Saddle River, NJ: Prentice Hall, 1975. – 1039 p.
[63] Wiener N. Cybernetics: or Control and Communication in the Animal and the Machine. 2nd ed. – Massachusetts: The MIT Press, 1965. – 212 p.


INDEX

A
accounting, 40, 65, 140, 206, 249, 250, 264, 265, 323
ACF, 64, 65, 66, 68
acquaintance, 284
adaptation, 180, 181
aggregation, 55, 56, 57, 58, 69, 108, 141, 159, 160, 161, 164, 165, 301, 320, 324
algorithm, 94, 137, 138, 151, 152, 162, 199, 200, 287, 289, 290, 294, 295, 298, 300
applied mathematics, 19, 317, 318
aspiration, 59
assessment, xvi, xvii, 3, 13, 15, 101, 102, 103, 106, 159, 165, 169, 173, 265, 316
assets, 51, 150, 305
authorities, xv, 72, 324
authority, 6, 102, 122, 212, 322
autonomy, 220, 221, 233
awareness, 8, 12, 22, 39, 69, 108, 109, 114, 116, 212, 217, 218, 219, 225, 226, 227, 228, 229, 230, 234, 235, 237, 238, 241, 249, 251, 252, 255, 256, 258, 259, 260, 261, 262, 268, 270, 273, 274, 278, 279, 280, 281, 282, 317, 318, 319, 320, 321, 324, 325, 326, 327, 328

B
ban, 305
bargaining, 272
barter, 152, 283
base, 37, 265
benefits, 7, 21, 29, 39, 48, 93, 102, 103, 131, 147, 148, 150, 151, 166, 175, 176, 181, 185, 206, 251, 252, 255, 272, 274, 276, 281
bonuses, 48, 148
bounds, 179, 208, 209, 219, 286
budget allocation, 121
buyer, 115, 118, 119, 120

C
campaigns, 221
candidates, 221, 236
category d, 64
Chicago, 333
circulation, 194
cities, 289
clarity, 13
classes, xi, xviii, 16, 77, 85, 95, 121, 152, 179, 263, 265, 279, 315, 317
classification, xi, xii, xiii, xiv, xv, xvi, xviii, 7, 74, 77, 177, 178, 179
cognition, xi, 320, 321, 327
cognitive activity, 321, 328
color, 184
commercial, 141, 143, 145, 146, 147, 152, 178
common rule, 181
communication, 221, 233, 252, 272, 276
compatibility, 16, 24, 25, 26, 28, 41, 43, 45, 63, 177, 253, 254, 270, 328
compensation, 21, 24, 26, 27, 30, 39, 58, 68, 71, 107, 141, 142, 175, 323
competition, 69, 70, 72, 74, 77, 98, 129, 131, 209, 210, 325
competitors, 131, 152, 157
complement, 308, 309, 311
complementarity, 323
complexity, xv, 160, 162, 178, 187, 192, 194, 199, 219, 229, 236, 249, 253, 256, 279, 280, 289, 323
compliance, 3
composition, 38, 69, 310, 311
compulsion, 242
computation, 284
computing, 283
concordance, 89, 90, 108, 115, 116, 250, 317, 322
conditioning, 319
configuration, 272
conflict, 72, 201, 202
conformity, 24, 44
congruence, 233
connectivity, 285
consent, xvii, 159, 166, 168, 263, 320
construction, 13, 38, 100, 284
consulting, 4
consumers, 94, 98, 99
consumption, xvi, 283, 325
contingency, 141, 142, 143, 145, 146, 147, 148, 149, 150
contour, 41, 43, 138, 139, 140
convergence, 233
cooperation, 70, 71, 72, 176, 209, 272, 315
coordination, xvi, 28, 29, 34, 72, 73, 74, 177, 220, 221, 228, 233, 284, 323
corporate governance, 263
correlation, 32, 34, 90, 99, 305
corruption, 220, 224, 225
cost, xvii, 20, 21, 25, 27, 28, 36, 46, 47, 49, 50, 51, 52, 54, 60, 62, 65, 68, 72, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 99, 100, 106, 107, 108, 109, 118, 121, 129, 130, 131, 132, 133, 135, 136, 142, 152, 174, 175, 181, 182, 183, 186, 187, 198, 199, 200, 203, 204, 207, 210, 213, 221, 222, 224, 225, 230, 254, 255, 263, 295, 296, 297, 298, 319, 328
covering, 320
criticism, 96
culture, xiv, 151, 241, 252
customers, 296
cycles, 284, 286, 296

D
damages, 149
data processing, 159, 165, 181, 321
debts, 39
decentralization, 108, 244, 324
decision-making process, 13, 303
decomposition, xvi, 28, 49, 56, 69, 159, 166, 191, 323, 324, 325
deduction, 73, 78, 79, 80, 81, 83, 84
defects, 204
deficiency, 91, 93, 95, 96
deficit, 96, 149
degenerate, 25, 144, 191, 192, 212, 321
depreciation, 129
depth, 219, 238, 258, 280, 324, 326
detection, 322
determinism, 7, 9
developed countries, 39
deviation, 92, 93, 98, 102, 168, 217, 251, 291, 321, 327
dignity, 242
dimensionality, 161
discrete mathematics, 282, 283
discrimination, 323
distribution, xiv, xvi, 39, 68, 107, 118, 172, 191, 270, 271, 295, 314, 315
diversification, 141
diversity, 319
dominant strategy, 48, 49, 55, 89, 90, 96, 107, 108, 168, 249, 268, 270, 275, 316, 323
draft, xvi, 325
drawing, 322
duality, 286, 288
dynamical systems, 85

E
economic development, 163
economic efficiency, 212
economic problem, 114
economic systems, xi, 6, 39, 101, 211, 263, 315, 319, 325, 328
economic theory, x, 332
economics, 19, 21, 27, 76, 96, 106, 177, 178, 242, 265, 305
education, 160, 161
efficiency criteria, xii, 272, 324
efficiency level, 151, 220
egalitarianism, 60
election, 234, 235
employees, xv, 40, 66, 131, 177, 179, 180, 181, 195, 252, 296, 297
employment, 184, 296
energy, 195
environment, 3, 5, 6, 101, 206, 268, 317
equality, 39, 177, 198, 239, 282
equilibrium, 10, 14, 45, 47, 48, 49, 50, 52, 55, 57, 67, 70, 82, 88, 89, 91, 92, 93, 94, 96, 97, 99, 100, 103, 104, 107, 114, 122, 123, 124, 126, 127, 132, 149, 153, 154, 155, 156, 157, 158, 182, 183, 210, 217, 218, 219, 221, 225, 231, 232, 233, 234, 235, 240, 251, 252, 255, 256, 257, 258, 259, 261, 262, 268, 269, 270, 272, 273, 274, 280, 281, 282, 315, 316, 317, 318, 319, 320, 322, 323, 327
equipment, 99, 129
ethics, 323
evidence, 30
exclusion, 183, 184
execution, xvi, xvii, 136, 137, 138, 139, 178, 226, 325
expenditures, 129

expertise, xvii, 16, 34, 87, 90, 92, 101, 102, 103, 104, 105, 106, 122, 149, 166, 169, 170, 172, 173, 263, 317
external environment, 3, 5, 6, 7, 8, 101, 169, 192, 200, 201, 202, 204, 206, 314, 317, 327

F
false belief, 221, 233
families, 315
financial, 3, 20, 21, 26, 28, 101, 112, 121, 122, 127, 128, 132, 135, 136, 137, 141, 149, 152, 160, 165, 166, 167, 263, 264, 283, 305, 306, 321, 323
financial resources, 101, 112, 121, 122, 127, 132, 135, 136, 149, 165, 166, 167, 263, 283, 305, 306
financing agent, 127
fixed costs, 162
Ford, 287, 294
forecasting, xiv, 173
formation, 3, 178, 179, 182, 185, 187, 188, 189, 194, 236, 237, 265, 315
formula, 13, 23, 24, 25, 29, 32, 34, 36, 65, 67, 68, 72, 73, 76, 79, 95, 111, 117, 126, 132, 139, 150, 152, 156, 164, 166, 169, 175, 228, 229, 230, 234, 239, 247, 271, 285, 286, 310, 312
foundations, 314
function values, 32
funds, 121, 122, 124, 130, 160, 263, 320
fuzzy sets, 306, 307, 308, 309, 312

G game theory, xix, 10, 13, 19, 217, 252, 267, 277, 305, 318, 321 general education, 160, 161 goal-setting, ix, 159 graph, xvi, xix, 33, 138, 178, 194, 195, 196, 197, 198, 199, 200, 201, 202, 235, 250, 260, 261, 262, 263, 281, 282, 283, 284, 285, 286, 289, 290, 292, 294, 295, 296, 318 group interests, 242 growth, 96, 130, 153 guessing, 16

H
Hamiltonian, 284, 289, 292, 293, 294
health, 40
highways, 283
hiring, 194
history, 226, 227, 228, 233, 317, 323, 327
holding company, 77
house, 332
human, 1, 3, 140, 181, 194, 219, 314, 326, 327, 328
humanism, 323
hybrid, 90, 206
hypothesis, 7, 9, 10, 12, 13, 22, 23, 26, 27, 31, 38, 41, 45, 47, 56, 90, 96, 108, 126, 128, 149, 168, 180, 183, 184, 228, 234, 249, 267, 268, 277, 280, 314, 320

I
ICC, 33, 34
ideal, 87, 93, 136, 276, 316
identification, xiv, 33, 74, 88, 172
identity, 279
ideology, 173
image, 3, 311, 318, 321, 328
images, 313
incentive mechanism design, 19, 22, 85
income, 20, 21, 24, 25, 27, 28, 29, 31, 33, 34, 36, 37, 38, 40, 42, 45, 46, 47, 50, 51, 55, 56, 58, 70, 72, 73, 77, 78, 79, 81, 84, 98, 100, 107, 108, 122, 136, 138, 140, 141, 142, 143, 144, 146, 147, 149, 152, 174, 175, 177, 181, 182, 183, 185, 187, 188, 214, 220, 221, 222, 224, 239, 246, 248, 253, 315
income distribution, 38
income tax, 73, 78, 84
incompatibility, 270
independence, 170
independent living, 319
individual action, 56, 58
individual character, 252, 254, 268
individuals, 1, 140, 141, 242, 284, 306
induction, 287
industry, xviii, 264
inequality, 77, 123, 139, 143, 175, 189, 286, 291
inflation, 39
institutional change, 242
institutional economics, 241, 242
integration, 264
integrity, 327
interdependence, 7, 50, 51
interference, 324
internal consistency, 33
interpersonal relations, 315
interrelations, 85, 207
investments, 110, 111, 112, 113, 121, 122, 124, 125, 135, 163
investors, 122
irony, 134
issues, xiv, xvii, xviii, 3, 11, 28, 90, 194, 197, 242, 328

L
labor economics, 19, 177
labor market, 24, 28, 188
large-scale company, xv
laws, ix, x, 4, 193, 242, 294, 303, 320
lead, ix, 28, 58, 91, 120, 135, 144, 151, 161, 162, 228, 257, 262, 270, 303, 311
leadership, 274
lean production, 265
learning, ix, 181, 221
life cycle, 263
linear function, 84, 320
linear programming, 41, 286, 288, 289
logical reasoning, 217
logistics, 178, 181
loyalty, 252
lying, 3, 150, 233, 314, 317


M
majority, 11, 39, 40, 45, 55, 94, 170, 181, 211, 217, 234, 237
man, 1, 8, 142, 237, 301, 314
management, x, xi, xvi, xvii, 4, 19, 58, 59, 69, 85, 101, 107, 132, 173, 178, 194, 203, 204, 205, 263, 264, 265, 284, 298, 323, 324, 325
manipulation, 87, 88, 91, 93, 99, 101, 102, 103, 107, 167, 168, 316, 327
manufacturing, 99, 136, 188, 201, 219, 221, 241, 250, 283
mapping, 8, 20, 178, 212, 228, 234, 236, 237, 251, 254, 259, 269, 271, 281, 311, 314, 315, 317, 318, 319, 320, 321, 322, 326, 327, 328
marginal costs, 21, 27, 39, 76, 177, 181, 203
marginal product, 39, 177
marketing, 191, 195, 196
mass, 236, 237
material resources, 131
materials, 204
mathematical methods, 265
mathematics, xvi, 282, 283
matrix, xvii, 32, 33, 34, 40, 69, 70, 160, 161, 164, 191, 192, 194, 195, 206, 207, 208, 209, 210, 211, 285, 297, 312, 321
matter, 4, 7, 21, 90, 98, 174, 178, 179, 206, 220, 229, 237, 263, 264, 281, 299, 304, 323, 326
measurement, 305
median, 103
membership, 164, 306, 307, 308, 309, 310, 311, 313, 317
mental image, 315
messages, 87, 88, 89, 90, 91, 94, 95, 99, 101, 108, 149, 167, 178, 241, 316, 317, 319, 320, 322, 326, 328
methodology, 4, 321
Microsoft, 165
minimum wage, 20
mission, 194
models, x, xi, xii, xiii, xiv, xvi, xviii, xix, 1, 3, 6, 7, 8, 10, 11, 15, 16, 19, 20, 21, 22, 23, 28, 39, 40, 44, 55, 58, 59, 60, 64, 69, 73, 85, 87, 88, 141, 148, 169, 170, 174, 177, 178, 180, 181, 182, 183, 187, 191, 194, 206, 207, 209, 211, 217, 218, 219, 220, 221, 227, 228, 233, 236, 237, 238, 241, 242, 249, 263, 264, 265, 267, 270, 272, 274, 280, 284, 301, 305, 306, 311, 313, 314, 315, 317, 318, 321, 322, 324, 326
modifications, 162, 204, 263
moral hazard, 142, 145, 146
Moscow, 331, 332, 333
motivation, 22, 77, 78, 79, 80, 81, 84, 85, 184, 249, 264, 319
multidimensional, 161
multiplier, 50, 62, 107, 124

N
Nash equilibrium, 10, 48, 49, 53, 65, 66, 67, 88, 91, 92, 93, 94, 97, 100, 102, 103, 104, 122, 149, 153, 171, 172, 217, 235, 249, 251, 252, 253, 255, 256, 259, 269, 270, 271, 272, 281, 319, 321, 327
natural laws, 193, 194
negativity, 25, 36
negotiation, 131
neutral, 40, 41, 42, 44, 140, 141, 142, 143, 144, 146, 147, 148, 150
nodes, 138, 139, 150, 159, 195, 196, 197, 198, 199, 200, 203, 278, 281, 283, 284, 285, 286, 287, 288, 289, 290, 292, 293, 294, 296, 297, 298, 299, 300, 318, 319

O
objective reality, xi, 321
objectivity, 326
obstacles, 15
officials, 220, 224, 225
oligopoly, 242
openness, 324
operations, xvi, 11, 13, 16, 38, 107, 136, 137, 138, 139, 140, 150, 152, 153, 154, 156, 157, 158, 178, 195, 264, 284, 289, 298, 299, 300, 301, 325, 328
operations research, xvi, 11, 264

opportunities, 59, 242, 323
optimal control theory, xvi
optimal resource allocation, 99
optimism, 90
optimization, xv, xvi, xvii, 14, 27, 45, 46, 69, 117, 119, 121, 137, 150, 151, 152, 155, 156, 160, 162, 165, 177, 178, 179, 186, 188, 194, 226, 243, 245, 246, 249, 263, 264, 265, 283, 289, 301, 332
organ, 131
organism, 1, 319
organize, 173, 192, 198
output index, 287

P
parallel, xiv, 11, 136, 138, 169, 321
parameter estimates, 173
parameter vectors, 262
Pareto, 27, 45, 52, 54, 70, 71, 100, 107, 157, 255, 256, 258, 274, 316, 322
participants, x, xiii, xiv, xv, xvi, 6, 7, 9, 10, 11, 13, 20, 22, 27, 28, 39, 46, 47, 51, 55, 69, 79, 80, 81, 82, 84, 101, 110, 112, 114, 115, 116, 121, 123, 125, 157, 174, 178, 191, 192, 206, 210, 211, 212, 213, 214, 221, 241, 269, 272, 283, 284, 301, 315, 316, 317, 319, 320, 321, 322, 323, 326
partition, 74, 207, 208, 316
penalties, xiv, 88, 98, 170, 174, 220, 249, 250, 291
Philadelphia, 333
planned action, 35, 38
playing, 88, 276
police, 242
policy, 74
political party, ix, 322
pollution, 51
portfolio, 142
practical activity, 4, 327, 328
price changes, 215
principles, xii, 3, 5, 7, 8, 9, 10, 12, 13, 14, 15, 16, 24, 99, 100, 194, 217, 249, 267, 268, 270, 317, 323, 324, 327
private investment, 121
probability, xvi, 6, 9, 39, 140, 141, 142, 143, 146, 148, 150, 172, 173, 238, 251, 259, 270, 303, 321, 325
process duration, 291
production costs, 136
production function, 151
profit, 73, 77, 78, 81, 82, 83, 84, 85, 100, 115, 117, 119, 127, 129, 130, 131, 137, 139, 152, 177, 213, 315
profitability, 29, 30, 78, 80, 81, 82, 83, 84, 100, 129, 130, 131, 140, 213
programming, xvi, 133, 134, 207, 249, 287, 290
project, xvi, xvii, 69, 72, 73, 98, 101, 110, 111, 113, 114, 121, 122, 123, 126, 131, 132, 133, 136, 137, 138, 152, 158, 159, 167, 178, 191, 206, 207, 208, 209, 263, 264, 265, 284, 291, 298, 299, 300, 301, 320, 321, 322, 325
property rights, 242
proportionality, 35, 80, 109, 231
psychology, xvii, 265, 320
public opinion, 237

Q
questioning, 101
queuing theory, xvi

R
rate of change, 325
rating scale, 161, 163
rationality, 24, 25, 26, 28, 29, 39, 41, 43, 115, 251, 264, 269, 271, 314, 319, 328
raw materials, 129, 195, 201, 204, 283
reading, 301
real income, 220
real numbers, 306, 307, 308
real time, 159, 327
reality, 15, 30, 220, 226, 231, 281, 313, 315, 319, 320, 326, 327, 328
reasoning, 67, 147, 217, 269, 277, 280, 312, 325
recall, 12, 27, 94, 104, 124, 182, 192, 207, 213, 224, 238, 248, 273, 276, 281
recession, 40
recognition, 20, 26
recommendations, 169
recovery, 33, 149
recruiting, 177
redistribution, 37, 40, 199
redundancy, 162
reform, 209
rejection, 24
relevance, 328
reliability, xvi, 195
relief, 23, 182
reproduction, 326
reputation, 117
requirements, xvi, 11, 92, 119, 142, 192, 250, 322, 325
researchers, 178, 191, 289, 303
reserves, 165
resource allocation, xvii, 16, 87, 90, 91, 93, 94, 95, 96, 97, 99, 100, 107, 115, 121, 264, 299, 300

resource utilization, 91, 97, 300
resources, xv, xvi, 70, 80, 91, 97, 115, 118, 122, 127, 128, 132, 136, 137, 138, 139, 140, 160, 166, 167, 252, 284, 299, 300, 301, 314, 317, 321, 323, 324, 325, 326
response, 1, 104, 106, 242, 249, 258, 268, 269, 270, 273, 275, 323
restrictions, 28, 30, 102, 304
restructuring, 132, 263
retail, 204
rewards, 20, 29, 31, 32, 36, 41, 45, 56, 59, 63, 69, 74, 75, 76, 77, 169, 171, 172, 182, 197, 247
rights, 177
risk, 40, 41, 42, 43, 44, 110, 140, 141, 142, 143, 144, 146, 147, 148, 149, 150
risks, xvii, 9, 140, 141, 142, 143, 144, 146, 147, 148, 152, 162, 165, 284
root, 159, 196, 197, 286
roughness, 16
routes, 283
rules, ix, x, xvi, 4, 6, 9, 12, 15, 88, 91, 160, 194, 209, 212, 242, 250, 267, 290, 305, 320, 322, 326


S
safety, 149
scale system, 162
scaling, 239
school, 19
science, xi, 4, 8, 205, 241, 263, 264, 265, 284, 321, 322, 325, 326
scientific knowledge, xi, 326, 328
scope, xvi, 20, 101, 282, 326
seasonality, 40
security, 142, 143, 263
self-awareness, 226, 279
self-consciousness, 242
self-control, 176
self-organization, 212
self-regulation, 3
seller, 115, 118, 119, 219, 220, 221, 222, 223, 224
semiotics, 326
sensitivity, 14
set theory, xix, 283
shape, 108, 141, 169, 203
shortage, 51, 298
showing, 259
signal possessing, 327
signals, 319
signs, 23, 289, 291, 326
simulation, 15, 209, 321
smoothing, 199
social development, 122, 123, 163, 166
social group, 1, 237, 284
social life, 320
social network, 263
social order, 192
social psychology, 237
social reality, 321
social relations, 241, 319
social rules, 241, 319
society, 1, 3, 241, 242, 319, 323, 324
sociology, 241, 265, 284, 319
software, 85
solution, xiv, xv, 10, 14, 15, 16, 22, 25, 26, 27, 29, 30, 32, 41, 42, 43, 46, 50, 55, 56, 57, 61, 62, 63, 72, 78, 90, 104, 107, 108, 111, 113, 126, 127, 138, 139, 140, 151, 153, 154, 155, 156, 157, 158, 177, 178, 179, 181, 182, 183, 185, 186, 192, 194, 199, 207, 208, 213, 217, 218, 223, 239, 240, 242, 246, 249, 250, 261, 265, 268, 271, 272, 284, 288, 289, 290, 291, 292, 293, 295, 296, 297, 300, 301, 302, 303, 317, 318, 322, 323, 325, 326
span of control, 205, 206
special education, 160, 161
specialists, 265
specialization, 171, 191, 192, 206, 325
stability, 14, 28, 40, 49, 142, 219, 220, 221, 222, 225, 228, 231, 233, 239, 268, 281
state, xvi, 4, 5, 6, 8, 10, 14, 22, 32, 35, 39, 40, 91, 121, 141, 160, 161, 169, 172, 228, 242, 243, 250, 251, 252, 254, 259, 260, 261, 263, 264, 268, 269, 270, 281, 282, 313, 317, 319, 320, 321, 323, 324, 326
states, xiii, xv, 6, 14, 45, 140, 159, 249, 251, 259, 268, 286, 304, 313, 317, 319, 320, 321, 323, 325
statutory provisions, 194
stimulation, 319, 321
storage, 152
strategic planning, xvi
structure, ix, xii, xiii, xiv, xv, xviii, xix, 1, 3, 4, 5, 8, 11, 12, 13, 17, 19, 21, 22, 69, 70, 91, 140, 159, 161, 168, 172, 178, 191, 192, 193, 194, 195, 196, 197, 198, 200, 206, 207, 208, 209, 210, 211, 212, 213, 215, 218, 219, 225, 227, 228, 229, 230, 234, 235, 237, 238, 241, 242, 252, 256, 258, 259, 260, 261, 262, 265, 272, 277, 278, 279, 280, 282, 284, 314, 316, 317, 319, 320, 321, 322, 323, 324, 325, 326, 327
substitutes, 108, 281
substitution, 179, 180
substitutions, 42
superimposition, 192
supplementation, 313
supplier, 201, 204, 296
surplus, 94, 298

social group, 1, 237, 284 social life, 320 social network, 263 social order, 192 social psychology, 237 social reality, 321 social relations, 241, 319 social rules, 241, 319 society, 1, 3, 241, 242, 319, 323, 324 sociology, 241, 265, 284, 319 software, 85 solution, xiv, xv, 10, 14, 15, 16, 22, 25, 26, 27, 29, 30, 32, 41, 42, 43, 46, 50, 55, 56, 57, 61, 62, 63, 72, 78, 90, 104, 107, 108, 111, 113, 126, 127, 138, 139, 140, 151, 153, 154, 155, 156, 157, 158, 177, 178, 179, 181, 182, 183, 185, 186, 192, 194, 199, 207, 208, 213, 217, 218, 223, 239, 240, 242, 246, 249, 250, 261, 265, 268, 271, 272, 284, 288, 289, 290, 291, 292, 293, 295, 296, 297, 300, 301, 302, 303, 317, 318, 322, 323, 325, 326 span of control, 205, 206 special education, 160, 161 specialists, 265 specialization, 171, 191, 192, 206, 325 stability, 14, 28, 40, 49, 142, 219, 220, 221, 222, 225, 228, 231, 233, 239, 268, 281 state, xvi, 4, 5, 6, 8, 10, 14, 22, 32, 35, 39, 40, 91, 121, 141, 160, 161, 169, 172, 228, 242, 243, 250, 251, 252, 254, 259, 260, 261, 263, 264, 268, 269, 270, 281, 282, 313, 317, 319, 320, 321, 323, 324, 326 states, xiii, xv, 6, 14, 45, 140, 159, 249, 251, 259, 268, 286, 304, 313, 317, 319, 320, 321, 323, 325 statutory provisions, 194 stimulation, 319, 321 storage, 152 strategic planning, xvi structure, ix, xii, xiii, xiv, xv, xviii, xix, 1, 3, 4, 5, 8, 11, 12, 13, 17, 19, 21, 22, 69, 70, 91, 140, 159, 161, 168, 172, 178, 191, 192, 193, 194, 195, 196, 197, 198, 200, 206, 207, 208, 209, 210, 211, 212, 213, 215, 218, 219, 225, 227, 228, 229, 230, 234, 235, 237, 238, 241, 242, 252, 256, 258, 259, 260, 261, 262, 265, 272, 277, 278, 279, 280, 282, 284, 314, 316, 317, 319, 320, 321, 322, 323, 324, 325, 326, 327 substitutes, 108, 281 substitution, 179, 180 substitutions, 42 superimposition, 192 supplementation, 313 supplier, 201, 204, 296 surplus, 94, 298

Theory of Control in Organizations, Nova Science Publishers, Incorporated, 2013. ProQuest Ebook Central,

Index susceptibility, 238 sympathy, 284 synthesis, x, 1, 14, 15, 173, 194, 207, 208, 215, 265 system analysis, 3


T
tariff, 38
tax rates, 77
taxation, 73, 77
team members, xvi, 220, 221, 233
teams, 65, 66, 68
techniques, xi, 45, 72, 230, 263, 265, 303
technologies, 236
technology, x, xvii, 1, 3, 8, 11, 13, 14, 15, 16, 99, 150, 241, 328
temporal variation, 192
tension, 284
territorial, 191, 206
textbook, 236
textbooks, 237, 267, 282, 301
theft, 142
theory of differential equations, xvi
threats, 272
top-down, 162
total costs, 56, 57, 58, 60, 61, 62, 76, 107, 108, 109, 136, 197, 199, 207, 208, 213, 220, 226, 227, 229, 231, 255, 295, 296
total utility, 80, 213, 215, 289
Toyota, 333
traditions, 242
training, xiv, xvi, xvii, 15, 177, 181, 265
trajectory, 228, 233
transaction costs, 177, 178, 192, 242
transactions, 141
transfer pricing, xvii, 87, 90, 108, 109, 263
transformation, 21, 140, 192, 193, 194, 209, 314, 327
transmission, 283
transport, 208, 283, 295, 296, 298
transportation, xvi, 152, 294
treatment, 194, 267, 314
true belief, 101

U
unification, 60, 285, 324
uniform, 11, 13, 20, 64, 65, 66, 67, 68, 78, 107, 109, 118, 172, 214, 215, 260, 319
unit cost, 131
universe, 325
USA, 19
USSR, 19

V
variables, 11, 42, 43, 52, 73, 90, 127, 129, 182, 207, 253, 288
variations, 6, 28, 40, 105, 228
vector, 10, 11, 31, 32, 48, 49, 50, 56, 57, 58, 62, 74, 75, 76, 77, 87, 88, 109, 122, 149, 165, 188, 197, 198, 206, 207, 226, 228, 231, 234, 235, 249, 250, 251, 252, 255, 257, 258, 259, 261, 262, 268, 269, 270, 271, 281, 322, 327, 328
veto, 174
vote, 236
voters, 102, 221, 234, 235, 237

W
wage rate, 35, 37, 38, 60, 61, 62, 107, 109, 185, 186, 188, 189, 214, 215, 226, 274
wages, 30, 40, 183
wholesale, 204
workers, 195, 197, 200, 201, 202, 203, 204, 205, 206, 264
working hours, 35

Y
yield, 81, 98, 110, 122, 138, 147, 178, 219, 228, 301, 302
