Strategic Management, Decision Theory, and Decision Science: Contributions to Policy Issues 9811613672, 9789811613678

This book contains international perspectives that unify the themes of strategic management, decision theory, and data science.


English Pages 293 [282] Year 2021


Table of contents:
Foreword
Preface
Contents
About the Editors
Sustaining High Economic Growth Requires a Different Strategy: An Integrated Approach for Broad-Based Knowledge Economy
1 Introduction
2 Uneven Process of Economic Development
3 A Few Pointers on Productivity and Competitiveness
4 Importance of Manufacturing
5 A Few Illustrations from Microeconomics for Analytics
5.1 Production Possibility Frontier
5.2 Consumer Surplus and Producer Surplus
5.3 Payment System and Consumer Surplus
6 Example of Untapped Potential in a Rural Setting
7 Bridging Rural-Urban Divide
8 Analytics for Micro-Macro Linkage
9 Institutions and Governance
10 Conclusion
References
The Secretary Problem—An Update
1 Introduction
2 Optimal Stopping Rule
3 Generalized Mukherjee-Mandal Formulation
4 Solution Through ENGS
5 Case of Multiple Rankings
6 Concluding Remarks
References and Suggested Reading
Energy Efficiency in Buildings and Related Assessment Tools Under Indian Perspective
1 Introduction
2 Literature Survey
3 Energy Efficient Building
3.1 Energy Conservation Building Code for Commercial Buildings
3.2 Energy Conservation Building Code for Residential Buildings
4 Green Building
4.1 LEED-India
4.2 Green Rating for Integrated Habitat Assessment (GRIHA)
5 Examples of Energy Efficient Buildings in India
5.1 ECBC-Compliant Composite Zone Located ITC Green Centre, Gurugram
5.2 IGBC Green Building: CII Sohrabji Godrej Centre, Hyderabad
5.3 Indira Paryavaran Bhawan, New Delhi: India’s First Net-Zero Energy Building (Blending of IGBC, GRIHA and ECBC Provisions)
6 Conclusion
References
A Multi-type Branching Process Model for Epidemics with Application to COVID-19 in India
1 Introduction
2 The Basic Model
3 Analysis of the Basic Model
4 Extended Model
5 Application: COVID-19 Epidemic in India
6 Conclusion
References
The Mirra Distribution for Modeling Time-to-Event Data Sets
1 Introduction
2 Synthesis of the Distribution
3 Distributional Properties
3.1 Raw and Central Moments
3.2 Mode
3.3 Generating Functions
4 Survival Properties
4.1 Stress-Strength Reliability
5 Estimation of Parameters
5.1 Method of Moments
5.2 Method of Maximum Likelihood
6 A Simulation Study
7 Application with Real Data Illustration
8 Concluding Remarks
References
Comparison of Local Powers of Some Exact Tests for a Common Normal Mean with Unequal Variances
1 Introduction
2 Review of Six Exact Tests for H0 Versus H1
2.1 P-value Based Exact Tests
2.2 Exact Test Based on a Modified t
2.3 Exact Test Based on a Modified F
3 Expressions of Local Powers of the Six Proposed Tests
3.1 Local Power of Tippett's Test [LP(T)]
3.2 Local Power of Wilkinson's Test [LP(Wr)]
3.3 Local Power of Inverse Normal Test [LP(INN)]
3.4 Local Power of Fisher's Test [LP(F)]
3.5 Local Power of a Modified t Test [LP(T1)]
3.6 Local Power of a Modified F Test [LP(T2)]
4 Comparison of Local Powers
5 Conclusion
References
Lower Bounds for Percentiles of Pivots from a Sample Mean Standardized by S, the GMD, the MAD, or the Range in a Normal Distribution and Miscellany with Data Analysis
1 Introduction
2 Sample Mean from a Normal Distribution Standardized with Sample Standard Deviation S, GMD, MAD or Range
2.1 GMD-Based Pivot
2.2 MAD-Based Pivot
2.3 Range-Based Pivot
2.4 Lower Bounds for the Upper Percentiles
2.5 Closeness of the Lower Bounds to the Percentiles
3 Comparing Normal Means Having Unknown and Unequal Variances: Two Lower Bounds for the Required Upper Percentile
3.1 Two Lower Bounds for the Upper Percentiles
3.2 Closeness of the Two Lower Bounds to the Percentiles
3.3 Limiting Values of LB1,m,α and LB2,m,α
4 Some Concluding Comments
References
Intuitionistic Fuzzy Optimization Technique to Solve a Multiobjective Vendor Selection Problem
1 Introduction
2 Preliminaries
3 Formulation of the Intuitionistic Fuzzy Multiobjective Model
4 Algorithm of Steps
5 Numerical Illustration
6 Results and Discussion
7 Conclusion
References
Testing for the Goodness of Fit for the DMRL Class of Life Distributions
1 Introduction
2 Methodology
2.1 A Measure of Non-decreasingness of MRL
2.2 Form of the Empirical Cumulative MRL and Its LCM
2.3 Consistency and Null Distribution
3 Simulations
3.1 Finding Least Favourable Distribution
3.2 Conservative Cut-Offs
3.3 Power of the Test
3.4 Size and Power for Censored Data
4 Data Analysis
References
Linear Empirical Bayes Prediction of Employment Growth Rates Using the U.S. Current Employment Statistics Survey
1 Introduction
2 The Data
3 Linear Empirical Bayes Prediction of the Final Growth Rate
4 Analysis of the Current Employment Statistics Survey Data
5 Concluding Remarks
References
Constrained Bayesian Rules for Testing Statistical Hypotheses
1 Introduction
2 Constrained Bayesian Method for Testing Different Type of Hypotheses
2.1 Simple Hypotheses
2.2 Composite Hypotheses
2.3 Directional Hypotheses
2.4 Multiple Hypotheses
2.5 Union–Intersection and Intersection–Union Hypotheses
3 Comparative Analysis of Constrained Bayesian and Other Methods
4 Conclusion
References
Application of Path Counting Problem and Survivability Function in Analysis of Social Systems
1 Introduction
2 SPCP and Its Applications
3 PCP and SF
4 SF and Its Application to Analysis of Social Systems
4.1 Measuring the Robustness of the Urban Traffic Road Network System in Tokyo
4.2 Investigating the Relationship Between Vote Share (vS) and Seat Share (SS)
5 Summary and Conclusion
References
Framework for Groundwater Resources Management and Sustainable Development of Groundwater in India
1 Introduction
2 Groundwater Aquifer as Part of Hydrological Cycle
3 Groundwater Exploration
4 Dynamic Groundwater Resources
5 Scope and Practice of Groundwater Resource Management
6 Natural Recharge, Artificial Recharge and Rain Water Harvesting
7 Management of Coastal Aquifers
8 Management of Karstic Aquifers
9 Management of Groundwater in Large-Scale Mining Project Area
10 Concluding Remarks
References
A New Generalized Newsvendor Model with Random Demand and Cost Misspecification
1 Introduction
2 SyGen: Symmetric Generalized Newsvendor Problem
2.1 Optimal Order Quantity for SyGen Newsvendor with Uniform Demand
2.2 Optimal Order Quantity for SyGen Newsvendor with Exponential Demand
3 Estimation of the Optimal Order Quantity
3.1 Estimating Optimal Order Quantity for U(0,b) Demand
3.2 Estimating Optimal Order Quantity for exp(λ) Demand
4 Simulation
5 Performance Comparison of Piecewise Linear and Nonlinear Costs with Misspecification
5.1 Cost Misspecification Under Unif(0,b) Demand
5.2 Cost Misspecification Under exp(λ) Demand
6 Conclusion
References
Analytics Approach to the Improvement of the Management of Hospitals
1 Introduction
2 Waiting Time Problem in a Department of Hospital
3 Operating Room Scheduling
4 Night-Shift Scheduling System for Residents
5 Concluding Remarks
References
A Novel Approach to Estimate the Intrinsic Dimension of fMRI Data Using Independent Component Analysis
1 Introduction
2 Theory and Methods
2.1 The Noisy Mixing Model
2.2 Estimation of Using ICA
2.3 Imaging and Data
3 Results
3.1 Simulated Data
3.2 Real Data
4 Discussion
4.1 Validity and Violations of Assumptions in the Proposed Method
4.2 Initial Choice for the Number of Independent Components
4.3 Sub-sampling for Identification of the Effectively I.I.D. Samples
5 Conclusion
References
Data Science, Decision Theory and Strategic Management: A Tale of Disruption and Some Thoughts About What Could Come Next
1 Introduction
2 Changes in Individual Fields
3 A Giant Leap
4 Where Do We Go from Here?
5 Not to Conclude
References

Bikas Kumar Sinha · Srijib Bhusan Bagchi, Editors

Strategic Management, Decision Theory, and Decision Science: Contributions to Policy Issues

Editors

Bikas Kumar Sinha
Indian Statistical Institute
Kolkata, West Bengal, India

Srijib Bhusan Bagchi
University of Burdwan
Barddhamān, West Bengal, India

ISBN 978-981-16-1367-8    ISBN 978-981-16-1368-5 (eBook)
https://doi.org/10.1007/978-981-16-1368-5

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Foreword

Basic & applied science and management science have lately got intricately intertwined, both in scope and in practice. As humanity strives to orient the advances of S&T and the corresponding development of innovation towards social and economic progress, this intertwining expands manifold. Policy decisions (often disruptive and far-reaching) have to be taken within an environment of uncertainty. Forecasting, validation and strategizing of decision-making have necessitated the application of advanced statistical and quantitative methods in all facets of the S&T management knowledge space.

It is heartening to note that the present conference addresses this issue adequately. Beginning with a new field of research in combinatorial decision-making, the deliberations span wide application areas such as the problem of managing the nation's economy to achieve high growth with social justice by adopting a new strategy; the role of energy management at the micro-level to develop building designs; and the utility of data accrued from surveys conducted by govt./private agencies, to name a few. Data, decision theory and strategic management have been highlighted as key parameters in addressing macro- and meta-level issues of the present day.

I welcome this volume that has been published by Springer. This would surely be a major resource for scientists, statisticians, policymakers and all other stakeholders working in this domain.

K. Muraleedharan
CSIR Emeritus Scientist, CMET, Thrissur & Former Director, CSIR-CGCRI, Kolkata, India


Preface

Recent times have been characterized by phenomenal advances in Science and Technology which have been promoted and supported by major innovations—both benign and disruptive—that provide opportunities to institutions of all kinds to grow and excel on the one hand and pose challenges to the very existence of those that fail to take due cognizance of such innovations. To steer institutions in such difficult environments, strategic policy decisions and their effective implementation have become necessities for the management of enterprises and endeavors.

Policy decisions regarding both core and support functions in any enterprise—engaged in Agriculture and allied activities, in manufacturing and other Industrial activities, or in production and delivery of services—should, can and do derive strength from contemporary developments in management science that is based on Decision Theory and Information Processing. Theory of Constraints, Priority Pointing Procedures, Algorithmic Decision Theory, Multi-criteria Decision-making, Bayesian Methods, Machine Learning Algorithms and a host of similar other developments can all contribute effectively to strategic decision-making. Strategic Management, as distinct from Operational Management, has, no doubt, to be specific to some functions or processes and to the enterprise to be managed. However, this aspect of management, with a focus on developing and deploying policy decisions, can draw upon concepts, methods, models and tools in the Theory of Choice as also in Statistical Decision Theory. Strategic Decisions have to depend more on data than on opinions or gut feelings. And recent developments in the field of Data Science can extend big support to this task.

With this objective in mind, the Indian Association for Productivity, Quality and Reliability [IAPQR] organized an International Conference in collaboration with CSIR—Central Glass and Ceramic Research Institute [CGCRI], Kolkata on Strategic Management, Decision Theory and Data Science (SMDTDS-2020) during January 4–6, 2020, in Kolkata, India. This book is an outcome of the conference. It comprises 17 selected chapters reporting the latest trends in the aforesaid areas.

In his Keynote Address (Chapter Twelve), Oyama looks at various network structured social systems and explores the possibility of using the survivability function derived from path counting procedures to examine the robustness of the systems.

The survivability function has been proposed by the author to approximate the expected edge deletion connectivity function. The method opens up a new field of research in combinatorial decision-making.

Dealing with strategic management, Barman (Chapter "Sustaining High Economic Growth Requires A Different Strategy: An Integrated Approach for Broad-Based Knowledge Economy") takes up the problem of managing the nation's economy to achieve high growth with social justice by adopting a new strategy and not just tinkering with the past policies and practices. The efficiency of resource use to raise productivity and competitiveness, and improved institutions and governance ensuring stability while pushing for structural adjustments, are needed to a large extent to put the Indian economy on a high growth trajectory. The author emphasizes a knowledge economy with transparency and reduced information asymmetry at all levels.

Ghosh (Chapter "Energy Efficiency in Buildings and Related Assessment Tools Under Indian Perspective") discusses the role of energy management at the micro-level to develop building designs which are energy-efficient to reduce greenhouse gas emissions. He refers to the norms for energy-efficient building designs and their applications in the Indian building industry sector. He also mentions assessment ratings like Leadership in Energy and Environmental Designs and Green Rating for Integrated Habitat Assessment in assessing sustainable parameters in green buildings.

Operations Research has been aptly considered by Miller and Starr as "applied decision-making" and, as expected, a few articles contain interesting and useful applications of methods and models in Operations Research in strategic decision-making. The paper by Suzuki (Chapter "Analytics Approach to the Improvement of the Management of Hospitals") takes up the problems of designing the construction of a hospital building with an emphasis on facility layout decisions, of reducing the waiting times of patients visiting a hospital, of scheduling the available operation theatres for surgeries by different departments, and of scheduling night-shift duties of resident doctors. Queuing models and corresponding solutions have been used along with designing an appropriate Management Information System to provide the necessary input data.

In the realm of Decision Theory, Mukherjee (Chapter "The Secretary Problem—An Update") looks back at the Secretary Selection Problem from a different angle and offers a simple solution based on the concept of expected net gain due to sampling, after formulating the classical problem as a non-linear integer programming problem. He also points out several problems associated with ranking of interviewed candidates by several judges. Mukhoti (Chapter "A New Generalized Newsvendor Model with Random Demand and Cost Misspecification") considers the effect of non-linear costs associated with shortage and excess in the classical newspaper boy problem, which had been solved with linear costs earlier. Evidently, optimal order quantities as also the minimum total expected inventory costs differ from their classical counterparts. This could pave the way for introducing non-linear costs in other stochastic inventory models.

Data Science does include developments of new models and methods resulting in new findings which will help data analysts to extract more and better information from data.


It is not confined only to machine learning algorithms, data integration, meta-analysis or similar applications of computationally intensive statistical procedures in existence. And in this Conference, several such papers highlighting current research findings in statistical analysis and inference were presented, and several of those have been selected for this volume [Chapters "A Multi-type Branching Process Model for Epidemics with Application to COVID-19 in India", "Comparison of Local Powers of Some Exact Tests for a Common Normal Mean with Unequal Variances"–"Testing for the Goodness of Fit for the DMRL Class of Life Distributions", "Constrained Bayesian Rules for Testing Statistical Hypotheses", "A Novel Approach to Estimate the Intrinsic Dimension of fMRI Data Using Independent Component Analysis"]. Some of the authors discuss the utility of data accrued from surveys conducted by govt./private agencies. They suggest bringing out changes—some of them drastic in nature—in the overall situation regarding data, decision theory and strategic management over the years, and identifying the potential consequences of these changes in managing macro- and meta-level issues [Chapters "The Mirra Distribution for Modeling Time-to-Event Data Sets", "Linear Empirical Bayes Prediction of Employment Growth Rates Using the U.S. Current Employment Statistics Survey", "Framework for Groundwater Resources Management and Sustainable Development of Groundwater in India" and "Data Science, Decision Theory and Strategic Management: A Tale of Disruption and Some Thoughts About What Could Come Next"].

The Editors acknowledge the Former Director Dr. K. Muraleedharan and all staff of the CSIR-Central Glass and Ceramic Research Institute, Kolkata, for their collaboration in organizing the conference SMDTDS-2020. Also, the Editors express their sincere gratitude to Tata Steel, India for the financial support. The Editors would like to take this opportunity to thank all contributory authors and the reviewers for their kind cooperation during this book project. Thanks are also due to Ms. Nupoor Singh and Daniel for handling the publication of this book with Springer Nature, New Delhi, with extreme courtesy at all stages.

The Conference and the follow-up publication of this volume are the fruits of the tremendous efforts of Professor S P Mukherjee [Mentor of the IAPQR Organization], who had been involved in the project right from its inception. His foresight and thoughtfulness have paved the way towards the success of the project. The Editors would find this venture fruitful if it caters to the needs of researchers, professionals and data analysts.

Kolkata, India
Barddhamān, India
January 2021

Joint Editors
Bikas Kumar Sinha
Srijib Bhusan Bagchi

Contents

Sustaining High Economic Growth Requires a Different Strategy: An Integrated Approach for Broad-Based Knowledge Economy (R. B. Barman), p. 1

The Secretary Problem—An Update (S. P. Mukherjee), p. 21

Energy Efficiency in Buildings and Related Assessment Tools Under Indian Perspective (Avijit Ghosh and Subhasis Neogi), p. 33

A Multi-type Branching Process Model for Epidemics with Application to COVID-19 in India (Arnab Kumar Laha), p. 51

The Mirra Distribution for Modeling Time-to-Event Data Sets (Subhradev Sen, Suman K. Ghosh, and Hazem Al-Mofleh), p. 59

Comparison of Local Powers of Some Exact Tests for a Common Normal Mean with Unequal Variances (Yehenew G. Kifle and Bimal K. Sinha), p. 75

Lower Bounds for Percentiles of Pivots from a Sample Mean Standardized by S, the GMD, the MAD, or the Range in a Normal Distribution and Miscellany with Data Analysis (Nitis Mukhopadhyay), p. 87

Intuitionistic Fuzzy Optimization Technique to Solve a Multiobjective Vendor Selection Problem (Prabjot Kaur and Shreya Singh), p. 109

Testing for the Goodness of Fit for the DMRL Class of Life Distributions (Tanusri Ray and Debasis Sengupta), p. 119

Linear Empirical Bayes Prediction of Employment Growth Rates Using the U.S. Current Employment Statistics Survey (P. Lahiri and Bogong T. Li), p. 145

Constrained Bayesian Rules for Testing Statistical Hypotheses (K. J. Kachiashvili), p. 159

Application of Path Counting Problem and Survivability Function in Analysis of Social Systems (Tatsuo Oyama), p. 177

Framework for Groundwater Resources Management and Sustainable Development of Groundwater in India (Asok Kumar Ghosh), p. 195

A New Generalized Newsvendor Model with Random Demand and Cost Misspecification (Soham Ghosh, Mamta Sahare, and Sujay Mukhoti), p. 211

Analytics Approach to the Improvement of the Management of Hospitals (Atsuo Suzuki), p. 247

A Novel Approach to Estimate the Intrinsic Dimension of fMRI Data Using Independent Component Analysis (Rajesh Ranjan Nandy), p. 257

Data Science, Decision Theory and Strategic Management: A Tale of Disruption and Some Thoughts About What Could Come Next (Federico Brunetti), p. 271

About the Editors

Prof. Bikas Kumar Sinha has been publishing with Springer since 1989 and has to his credit both monographs and edited volumes (including three in the Lecture Notes Series in Statistics on optimal designs—Vol. 54 (1989), Vol. 163 (2002) and Vol. 1028 (2014)), not to mention contributed chapters and articles in journals. As an academic he has been affiliated with the Indian Statistical Institute, Kolkata, and has travelled extensively within the USA, Canada and Europe for collaborative research and teaching assignments. His publications span both statistical theory and applications, and he has more than 120 research articles published in peer-reviewed journals. He has also served on the Editorial Boards of statistical journals including Sankhya, Journal of Statistical Planning and Inference and Calcutta Statistical Association Bulletin. He has served the Government of India as a Member of the National Statistical Commission for the 3-year term [2006–2009]. He was appointed an 'Expert on Mission' for the United Nations-funded Workshop on Survey Methodology in 1991. Of late, he was awarded the Centenary Medal of Excellence by the School of Tropical Medicine, Kolkata.

Prof. Srijib Bhusan Bagchi retired from the University of Burdwan. Subsequently he joined the Department of Mathematics, Bethune College Kolkata as Jyostnamoyee Dey Professor of Actuarial Science. He also served Aliah University as Professor in the Department of Statistics and Informatics. He was an Emeritus Professor of the University of Engineering and Management, Kolkata. He has published more than forty papers in reputed journals and conferences and a book on Statistics. His research interests include reliability theory, SQC, social networks and sampling theory. Currently he is the Chairman of the Indian Association for Productivity, Quality and Reliability (IAPQR).


Sustaining High Economic Growth Requires a Different Strategy: An Integrated Approach for Broad-Based Knowledge Economy R. B. Barman

R. B. Barman, formerly of the National Statistical Commission, New Delhi, India

1 Introduction

The dream of a $5 trillion economy is at best an aspiration, unless backed by details on how to achieve the same. Is it realistic to set such a target to be achieved by 2024–25 from the current level of less than $3 trillion? This will require a sustained growth of over 10% on average, optimizing resource use, raising productivity and competitiveness, and ensuring inclusiveness—benefiting all, so that poverty becomes a thing of the past. We need this momentum of growth to continue for at least 10 years to give us a respectable space in the prevailing global dispensation.

The government has not, as yet, spelt out the strategy to follow to achieve a high-growth economy. The clue, if at all, seems to be the implicit assumption in the 2020–21 central government budget exercise, which gives the impression that there is likely to be heavy dependence on private initiative in a market economy through the interplay of domestic and foreign capital. The economy has many interdependencies. The government on its own contributes about one-fourth of GDP. Its expenditure on development, particularly key infrastructure, is a major facilitator, but constrained by fiscal limitations. The financial sector is the intermediary for all other sectors; it has to come out from under the heavy burden of nonperforming assets and become much more innovative in its business pursuits to inject efficiency. India is a vast country following a federal structure and hence the policy stances of various states also have critical roles.

In a market economy, price is expected to provide signals for allocative efficiency, equilibrating demand and supply and aligning production with customer preferences. But when there is a serious problem affecting consumer demand, supply-side action for raising production to prop up the economy may not yield desired results.
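The arithmetic behind the "over 10%" figure can be checked with a short calculation. The sketch below is only a rough check under stated assumptions: roughly five years to 2024–25, a starting level of about $3 trillion, and no adjustment for exchange-rate or price-deflator effects.

```python
# Rough check of the "over 10%" growth requirement mentioned above.
# Assumptions (not the chapter's own figures): ~$3 trillion now, a $5 trillion
# target, about five years of compounding, no exchange-rate or deflator effects.
current, target, years = 3.0, 5.0, 5   # trillions of US dollars, number of years

required = (target / current) ** (1 / years) - 1
print(f"Required average annual growth: {required:.1%}")   # roughly 10.8%
```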


The rising unemployment and underemployment (National Statistical Office 2019), along with structural bottlenecks, affect the potential for optimal use of available human and material resources, including the demographic dividend. In such a situation, a supply-side-dependent strategy expecting the private sector, along with foreign investment, to share the major responsibility for propping up investment and delivering double-digit GDP growth seems unrealistic. Past performance carries no sign of any such possibility. The Asian miracles were always backed heavily by public sector investment in core sectors including infrastructure, health, education, human development and efficient interventionist governance. The stance for very high economic growth should be backed by a detailed exercise, spelling out the basis of such expectation along with a meticulous plan of action—brick by brick.

To put it simply, the objective of 10% annual growth is highly challenging, given the long-standing track record. In a sense, we may hardly have takers for growth beyond eight percent, even on a very optimistic basis. We never had such high sustained growth in the past. But elsewhere, such as in Japan, Korea, Taiwan, and China, a spell of double-digit growth was achieved in the past. It was Total Quality Management (TQM), innovation, and absorption of advanced technology for both agriculture and industry that helped in transforming those societies to leap to a higher growth regime. Though we follow a different policy regime, we can possibly have similar growth if we embrace efficiency through a system which can set targets for optimal use of resources, raising income for broad-based demand, monitor progress, and evaluate performance at each level of geography. The financial sector should support the asset formation required to raise productivity and business expansion, even for the not so well off, if there is enough scope for optimizing resource use and injecting efficiency. There are other aspects, such as innovations in complex entrepreneurship, science and technology, and strategic problem-solving ability, along with reality checks, as integral components of a coherent framework to deliver on expected lines. In the past, we pointed fingers at vested interests and rent seeking for sub-optimal performance. This can possibly be greatly circumvented through an intelligently designed plan of action, effective monitoring and transparency to contain corrupt practices and inefficiency.

The idea to set actionable targets for such broad-based growth and monitor progress at each level of governance to deliver on expected lines is easier said than done. The historical capital-output ratio implies a demand on saving and investment which appears to be beyond the realm of possibility. Hence, it is a different ball game for which we need a different approach, as will be explained. There can either be a bottom-up or a top-down approach, or even a balanced combination of both, for seriously pursuing a high growth path. We are accustomed to a top-down approach, which is hazy; transmission channels are weak and left largely to the invisible hands of the market. A top-down approach is also difficult to execute in a vast country which is federal in structure. The regime under which the Indian economy operates is layered, which is not so efficient. But we cannot envisage any shortcut to a democratic market economy. What, then, can be the option? As Shermer (2008) noted, "… the long term effects of Adam Smith's (1776) deeper principle, that economies are best structured from the bottom up (consumer driven) and not from the top down (producer driven)." In fact, it is demand-creating, globally competitive technology, supported by innovative scientific advancement in production and in the organization and management of resources, that matters for progress in the real world, as the Asian miracle bears out.


In my view, the main focus of economic policy should be a bottom-up approach, promoting efficiency using microeconomic concepts such as the production possibility frontier, data envelopment analysis and the like, and optimization of the pricing mechanism consistent with the concepts of consumer and producer surplus. In such a case it is also possible to pursue an integrated approach, gauging the macroeconomy on policy priorities and evaluating performance, as will be explained.

For a bottom-up approach, we need data right from the district, if not the gram panchayat and urban block, on key drivers of the economy—income, consumption, investment, internal and external trade, employment and unemployment, productivity, competitiveness, market microstructure and pricing, and so on. We also need information about the people behind such activities, their skills, habitats, and so on to assess potential capability. We need to extensively use geo-coding of data elements for flexibility in accessing and relating data for such an exercise.

We are in the midst of an information revolution and exploratory data analysis—machine learning and artificial intelligence. Along with powerful statistical techniques, these approaches of pattern recognition and adaptive processes for modeling functional relationships capturing dependencies will be powerful for understanding the dynamics of the economy and its complexity under different situations. We can do a much better job of collection of data now, taking advantage of information and communication technology, when network connectivity has penetrated even remote villages. We need to know how to organize the huge mass of data collected as part of digital transactions, e-governance, remote sensing or various surveys. The real sector has much to do even to collate data for an intelligent system of analytics. The operations of the financial sector are already almost fully digitized. The data on the fiscal sector and external sector are mostly available digitally, but require analytics using an intelligent data model for versatility and deeper insight.

The paper touches upon certain handles of the information system in the context of behavioral and institutional efficacy, determining the need for developing an advanced system of analytics, basically for the real sector. It is about the micro-meso-macro economy—micro, relating to performance at the individual or firm level; meso, relating to groups of individuals or an industry, village panchayat, urban block, or district; macro, covering a state or the country. An economy gets ready for rapid transformation when it becomes a knowledge economy. In the process of suggesting the reduction of information asymmetry, I would touch upon some of the practical aspects of analytics to indicate how micro-macro linkage as a framework of economic analysis, as I have been advocating, makes enough sense to support policy for the optimal use of human and material resources for greater efficiency and synergy. I believe that this approach will lend considerable support not only for a much deeper insight into the economy but also to pursue the objective of high growth with socio-economic justice seriously, setting targets for performance at all levels and related accountability.


2 Uneven Process of Economic Development

Economic development refers to processes through which systems of production and distribution of goods and services make a difference in people's income and wealth, affecting human welfare. It covers structural transformation shifting production, along with the labor force, from primary agriculture to industry and services, leading to higher productivity, expansion of business, urbanization, a better standard of living, and social cohesion. While the forces of demand and supply guide the process, nudges and intervention also play their roles in influencing rapid change.

Adelman (2001), in "Fallacies in Development Theory and Their Implications for Policy," identifies three major misconceptions: (a) underdevelopment has but a single cause (whether it be low physical capital, missing entrepreneurship, incorrect relative prices, barriers to international trade, hyperactive government, inadequate human capital, or ineffective government); (b) a single criterion suffices to evaluate development performance; and (c) development is a log-linear process. Adelman maintains that development should be analyzed as a highly multifaceted, nonlinear, path-dependent, dynamic process involving systematically shifting interaction patterns that require changes in policies and institutions over time (see Meier and Stiglitz 2001).

We need to look for opportunities for investment which saturate the potential for growth. The critical issue is to know what the most important growth areas and markets are. The free market is not as frictionless as assumed in theory to address this issue comprehensively. We need to intervene to take care of imbalances in development. Interventionist government policy intends to safeguard the collective interest while providing the environment for innovation as a means of quality improvement. As Hamalainen (2003) observed, the prime instrument in the present Information Age is high-quality information and extraction of knowledge for productivity, competitive advantage, and higher sustained income through enterprise, capital formation, and demand-creating technology. Policy intervention must be based on a sophisticated understanding of value chains in the global context and the potential domestic competitive advantages for a clear strategy, supported by a measurable action plan. It requires high-quality strategic intelligence and its cross-examination by multiple experts and stakeholders for enhancing the quality of strategic choice. It is necessary to know what the most promising growth areas, the most important market and system challenges, and the most effective interventions to deal with them are.

The future of economic progress as suggested by the World Economic Forum (WEF) has as its building blocks (1) Productivity and Competitiveness, (2) Inclusive Growth, (3) Economics of the Fourth Industrial Revolution, (4) Economics of Environmental Sustainability, (5) Globalization, and (6) Taxation and Sustainable Growth. These objectives are further expanded to cover sub-areas requiring focused attention. In the present paper, I will confine myself to the first two areas because growth and competitiveness are fundamental for high growth, and the sharing of its fruits for broad-based social welfare makes it inclusive.


3 A Few Pointers on Productivity and Competitiveness

Though the structure of the Indian economy is heavily tilted toward the services sector, with a share of about 60% of GDP, half the people depend on agriculture, contributing about 16%, as the main source of income. It is no wonder that most of the poor are either marginal farmers or landless laborers, mostly living in villages or urban ghettos. As a result, the fruits of productivity and competitiveness essential for sustained growth in a modern market economy mostly elude them. We need effective policy to transform the economy ensuring inclusiveness.

It is worth noting a few facts about agriculture which require intervention. In 2012, the average yield of paddy was 3,721 kg/ha in India compared with 6,775 kg/ha in China. Average use of fertilizers in India was 164 kg/ha while in China it was 450 kg/ha in 2011. The proportion of people dependent on agriculture was 3.9% in Japan, 4.6% in Australia, and 4.1% in developed countries generally. Their contribution to GDP was also around the same percentage. This clearly shows that agriculture there is a remunerative sector. It is not confined only to developed countries. Even in highly populated China, productivity in agriculture is much higher than in India. The lesson is that it is possible to transform agriculture for doubling farmer income if we modernize cultivation. But we have so far failed to do so, as I will explain with an example of a village.

In India, a major focus of development effort should be to raise the productivity of agriculture by supporting investment, inculcating knowledge of smart farming, and providing critical inputs. This will help in raising income, triggering demand for non-agriculture goods and services. This will also result in structural transformation, paving the way for higher growth elsewhere. Though agriculture is the mainstay for the livelihood of people in rural areas, it is entangled in a cobweb of low productivity and insufficient investment. Modern agriculture is undergoing a dynamic change, using genetically modified varieties, particularly through diversification and commercialization of crops with a distinct spatial flavor as part of smart farming. We need to analyze the input-output relationship, and the entire chain of demand and supply right from the origin of production to the final consumption, both on production and price, along with the players involved, region by region, to prescribe a target-oriented policy and act on it.

In India, almost eighty percent of people live under twice the poverty line (World Bank 2011). It is essential to empower them, augmenting their capability to raise their income, which will increase consumption demand leading to the demand-creating technology required for economic growth. Direct Benefit Transfer Schemes for the poor are designed to help the most vulnerable; but these people in general need opportunity, not doles, for improving their economic condition (teach them how to catch fish). Unfortunately, the rural financial market is not mature enough to service the poor well, leaving them with limited options to escape poverty. As Ravallion (2016) observed, the teaching of economics seems to have become strangely divorced from its applications to real-world problems such as poverty. We need to overcome such shortcomings to improve the living conditions of the poor.


4 Importance of Manufacturing

In the process of structural transformation, manufacturing has a very crucial role. However, modern manufacturing is skill-intensive. Manufacturing in India needs to absorb the latest technology and skills and compete globally for scaling up. As a result of increased capital intensity, manufacturing employment is not growing as much. Robotization of routine activities, especially when these are somewhat hazardous, is another major constraint on the generation of employment. Hence, manufacturing may not necessarily create more jobs, but it can still make a major difference by accelerating growth. Whether such a structural change in Indian manufacturing is enough to change the trajectory of development is a moot question for research. However, we need to recognize that the path to accelerate manufacturing growth requires development of infrastructure, better access to land, application of new technology, education, skill development, and fostering of innovation in frontier technology. The regulatory environment should also be supportive, ensuring ease of doing business.

Large industries are cost-effective and competitive, though they require sizeable investment. However, they do not generate enough employment. Hence, from the employment perspective, the priority should be for setting up micro, small, and medium enterprises (MSMEs), where there is enough scope and incentive for new entrants. There are, of course, various challenges on the availability of raw material, packaging, transporting, and marketing under local conditions. Even the availability of loans is a major concern for setting up such an enterprise. Interventionist government policy should be more effective in overcoming such odds.

The financial sector plays a very important role as a source of money for much needed business capital. There is a two-way nexus between the financial sector and the real sector of the economy. A major part of investment is channelised through intermediary financial services into the real sector. For example, the UK (Lombard Street) and other developed countries were helped, inter alia, by well developed financial systems and markets supporting high investment in innovative technology and business expansion required for sustained high growth right from the early phases of their development. At present the financial sector in India is passing through a difficult time due to a heavy overhang of nonperforming assets constraining the expansion of credit and external funding required for capital formation and current business operations. The regulatory system is grappling with policy options to return the sector to good health, including relaxing certain norms by the Reserve Bank through policy measures, for maintaining and augmenting lending by banks. However, the situation is far more complex than what such inducements can resolve. On regulatory policy on the financial support system, Nachane (2018) said, "The future success of financial reforms in India will be crucially contingent upon how successfully the regulatory architecture adapts to the competing dictates of financial development and financial stability, and the extent to which the regulatory and supervisory system succeeds in maintaining its independence from the government as well as market participants."


Let me add that the financial market for the unorganised sector is relatively underdeveloped, and the risk-return system is hazy, which leaves major financial institutions with limited choices. A glaring example is the low credit-GDP ratio in India compared to other leading Asian economies. There is an urgent need to significantly raise the efficiency and capacity of the financial system to support a higher level of investment. Such accommodation through innovative financial services has the potential to raise productivity by way of modernisation of agriculture, smart farming and sophistication in product quality of small-scale sectors for global competitiveness. These two sectors can act as springboards for supporting and sustaining higher growth in production and employment. The large-scale industries and services have critical roles in leading the growth process through breakthroughs in twenty-first century sunrise industries and cutting-edge technology, reaching commanding heights for global competitiveness in many sectors. Many of them support ancillary sectors under small-scale industry, thus broad-basing the growth process. All these are possible through a well functioning and responsive financial sector.

In the present Information Age, with the availability of transactional data, data science and analytics should be embraced wholeheartedly. It is essential to price risks realistically through sophisticated analytics, giving necessary objectivity and transparency in business operations. The institutional mechanism for deepening and broadening the financial market also requires analytics as an indispensable part of the management tools for designing financial products competitively suiting customer needs. Analytics also helps in successful marketing of these products, competing with micro-finance institutions and newer fin-tech entities. This deeper insight based on empirical evidence, going into asset-liability management bucket by bucket, portfolio choice, performance evaluation and timely accountable action, will transform the present landscape, contributing to much needed higher efficiency in financial intermediation. As this is a fundamental component of a broad-based knowledge economy, this will be elaborated further.

We need to have a much better understanding of micro realities. As Porter (2002) said, "Developing countries, again and again, are tripped up by microeconomic failures … countries can engineer spurts of growth through macroeconomic and financial reforms that bring floods of capital and cause the illusion of progress as construction cranes dot the skyline … Unless firms are fundamentally improving their operations and strategies and competition is moving to a higher level, however, growth will be snuffed out as jobs fail to materialize, wages stagnate, and returns to investment prove disappointing … India heads the list of low income countries with microeconomic capability that could be unlocked by microeconomic and political reforms." This is fundamental when it comes to a framework for the formulation of policy. I consider this observation a frontal indictment of the conventional approach of macroeconomics. This advocacy of micro, if taken to its logical end, leads to the micro-macro linkage so crucial for understanding the dynamics of the economy well, shedding enough light on the economy and policy thereof.


As Hamalainen (2003) observed, "The rapid technological change, increasing mobility of productive resources and growing structural problems of industrialized economies have called into question the validity of traditional neoclassical economic theories. These theories are based on the assumption of efficient markets, no unemployment of productive resources, international immobility of resources and global specialization of production based on competitive advantage. The diminishing policy relevance of macroeconomic theories is gradually shifting the economic policy debate to the microeconomic determinants of economic efficiency, competitiveness and growth." This is a clear statement of the changing focus determining the strategy required for sustained high economic growth. We will do well to leave zombie economics (Quiggin 2010) behind for a paradigm shift in economic policy in the contemporary context.

The post Second World War experience of high growth was also accompanied by a hegemony of a different kind under which developed countries maintained their imperialistic grip on developing countries (Bagchi 2005). This intellectual imperialism derived major support from policy prescriptions based on macroeconomics. The international funding agencies compelled developing countries to conform to conditionalities for granting loans for stabilization, imposing severe restrictions on development expenditure. Such restrictions contained the expansion of the national balance sheet. As this is now getting exposed, the international institutions are revisiting their policy stance. As Nachane (2018) surveyed, the new consensus macroeconomics (NCM) is seriously challenged. This has acquired further support under the weight of rising inequality (Piketty 2014). Hence, it is an inflection point for macroeconomics (Barman 2019).

To develop a new paradigm, we should have data to analyze the factors contributing to growth in a distributional context. We need to analyze the adverse distributional consequences of the free market and their impact on growth and stability. This is essential for the formulation of policy for broad-based growth and social welfare. In a nutshell, models to be of relevance to the new world must rest on two pillars: the micro behavior of individuals, and the structure of their mutual interactions. For this, we need to capture "the process of change as it actually happens through changing connections, networks, structures and processes, which is a complex adaptive system … involved in an economic system through the interplay of the elements and connections that form the network structure of the economy and the dynamic forces changing them through interesting processes" (see Colander et al. 2008; Nachane 2018; Barman 2019). The idea, as such, is to develop a new framework of analysis for an empirically based paradigm capturing the dynamic forces of change in reality, as this will give us much deeper insight for inclusive growth, transforming the society.

5 A Few Illustrations from Microeconomics for Analytics

I am particularly impressed with the ideas contained in the microeconomic theory of the production possibility frontier and of consumer and producer surplus: two concepts covering productivity and pricing for upholding both producer and consumer interest, while containing deadweight loss to the society. As these simple concepts are fundamentals of economics, let me briefly explain them.


5.1 Production Possibility Frontier

The Production Possibility Frontier (PPF), in its simplest form, shows the trade-offs in production volume between two goods, with choices along the curve showing production efficiency for both goods. If a firm falls short of this frontier, with production volume at a point inside the curve, it shows that it needs to improve performance, optimizing resource use to gain comparative advantage. To go beyond the PPF, it needs a breakthrough in technology and organization, given the fixed resources. The analysis can be extended to multiple products. This kind of analysis is also useful to determine what the right mix of goods for profitability is.

An analysis using the efficiency frontier is often termed "frontier analysis"; it envelops the available data, and hence it is called Data Envelopment Analysis (DEA). It relates the performance of firms to the best achieved performance, measuring how distant a firm's production is from the best. It is a data-oriented approach for comparing and evaluating performance, which can help producers to strive for higher productivity. The results can be disseminated widely using algorithms at the backend of the national data system, providing output at granular levels, such as each district and industry. There are other tools which can also be applied to the data for similar analysis.

The data we require for such analysis, in so far as the industrial sector is concerned, is covered in input-output tables as part of national accounting. The accounting identity is as follows:

Total output + Imports = Intermediate Product + Consumption + Asset Formation + Exports.

Input-output analysis is designed to analyze the interdependence of producing and consuming units based on interrelations among different sectors on purchase and sale of goods and services. It produces three main tables: (i) a transactions table, (ii) a table of technical coefficients, and (iii) a table of interdependence coefficients. The breakup of data includes salaries and wages, profits, taxes and subsidies, depreciation, and imports and exports under different industries. Data on final demand includes household and government consumption and capital formation. When we get disaggregated data for each firm by product, industry, and occupation classifications, along with deliveries to all other industries, which are also geo-coded, these data can also be used for DEA and similar analysis. We expect these data to have a sample size sufficiently large to realistically approximate the distribution of productivity at the district level for carrying out analysis at that level. A framework of this kind of analysis will contribute to productivity, enhancing and solidifying the knowledge economy.
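As a minimal sketch of how the three input-output tables relate to one another, the snippet below starts from a hypothetical two-sector transactions table (the numbers are purely illustrative, not from the chapter) and derives the technical coefficients and the interdependence (Leontief inverse) coefficients from it.

```python
import numpy as np

# Hypothetical two-sector illustration of the three input-output tables:
# (i) transactions, (ii) technical coefficients, (iii) interdependence coefficients.
# Z[i, j] = value of sector i's output used as intermediate input by sector j.
Z = np.array([[20.0, 30.0],
              [40.0, 60.0]])
final_demand = np.array([50.0, 100.0])        # consumption + asset formation + net exports
total_output = Z.sum(axis=1) + final_demand   # intermediate deliveries plus final demand

# (ii) Technical coefficients: input of i required per unit of output of j.
A = Z / total_output                          # divides each column j by sector j's output

# (iii) Interdependence coefficients (Leontief inverse): total output needed,
# directly and indirectly, per unit of final demand.
L = np.linalg.inv(np.eye(len(total_output)) - A)

print(np.round(A, 2))                               # [[0.2 0.15] [0.4 0.3 ]]
print(np.round(L, 2))                               # [[1.4 0.3 ] [0.8 1.6 ]]
print(np.allclose(L @ final_demand, total_output))  # consistency check: True
```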


5.2 Consumer Surplus and Producer Surplus

Consumer surplus is the difference between the maximum price consumers are willing to pay for a product and the actual price paid. A price higher than the equilibrium reduces consumer surplus. The consumer surplus is the area under the demand curve and above the equilibrium price, and the producer surplus is the area above the supply curve and below the equilibrium price. Markets function efficiently, allocating resources optimally, when the sum of consumer and producer surplus is at a maximum. This leads to maximum benefit to the society, avoiding deadweight loss.
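A minimal numerical sketch of these two areas, assuming hypothetical linear demand and supply curves (the curves and numbers are illustrative, not taken from the chapter):

```python
# Consumer and producer surplus under hypothetical linear curves
# (illustrative numbers only): demand p = 100 - 2q, supply p = 20 + 2q.
a, b = 100.0, 2.0          # demand intercept and slope
c, d = 20.0, 2.0           # supply intercept and slope

q_eq = (a - c) / (b + d)   # equilibrium quantity: 100 - 2q = 20 + 2q  ->  q = 20
p_eq = a - b * q_eq        # equilibrium price: 60

# Consumer surplus: triangle under the demand curve and above p_eq.
consumer_surplus = 0.5 * q_eq * (a - p_eq)     # 0.5 * 20 * 40 = 400
# Producer surplus: triangle above the supply curve and below p_eq.
producer_surplus = 0.5 * q_eq * (p_eq - c)     # 0.5 * 20 * 40 = 400

print(q_eq, p_eq, consumer_surplus, producer_surplus)
```

Any price pushed away from the equilibrium, for example by a monopolistic markup, shrinks the total of the two triangles; the shortfall is the deadweight loss referred to above.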

5.3 Payment System and Consumer Surplus

The digitization of payments is a very good example of how a large consumer surplus gets lost when private service providers like Visa and Master cards start charging heavily while providing such services. They create brand loyalty and entry barriers, and use market segmentation through customer loyalty programmes and other means, controlling the service network for card-based payment services. This monopolistic strategy restricts customer choice. This e-payment channel, which remained under a virtual duopoly initially, cost the customer dearly. It was through the merchant discount rate (MDR) that customers were made to pay up to 3% of the transaction amount, recovered through malls and shopping outlets. The justification given was that the system for issue of cards, acquiring payments from merchants, use of the telecommunication network, development of applications for instantaneous clearing of transactions, and so on had heavy costs. The amount was shared with banks or other issuers, service providers to merchants called acquirers, and the solution providers like Visa and Master cards. This gives customers anywhere, anytime money and hence it attracts customers. As the merchant recovers this amount from a customer by way of a markup on price, the price thus charged has to be paid by all, even if the transaction is a cash payment.

If we assume that the total volume of transactions in India is double that of gross domestic product (GDP), assuming every good and service is transacted at least twice at the point of purchase, be it wholesaler, retailer or ultimate consumer, the total volume of transactions could be around $6 trillion. Hypothetically, if the entire volume is transacted through digital payments, the MDR even at 1% could be around $60 billion, a huge amount. At this stage, we might have only about 10% of transactions in India done digitally. Even then, the amount works out to $6 billion, a huge sum consumers using cards pay for payment services. As I mentioned, even those paying in cash indirectly pay up this markup. Though transactions in cash are supposed to be free of such a markup for the ultimate customer, the system forces them to pay up indirectly, affecting consumer surplus. RuPay cards dented this duopoly, forcing multinational brands to reduce MDR. RuPay cards and the Unified Payments Interface (UPI) are major products of the National Payments Corporation of India (NPCI).
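The back-of-the-envelope arithmetic above can be restated in a few lines; the figures below are the chapter's rough assumptions, not measured data.

```python
# Restating the rough MDR arithmetic above (assumed figures, not measured data).
gdp = 3e12                    # GDP taken as roughly $3 trillion
transaction_volume = 2 * gdp  # every good/service transacted about twice: ~$6 trillion
mdr = 0.01                    # merchant discount rate assumed at 1%
digital_share = 0.10          # roughly 10% of transactions currently digital

if_fully_digital = transaction_volume * mdr                  # ~$60 billion
at_current_share = transaction_volume * digital_share * mdr  # ~$6 billion

print(f"MDR burden if all payments were digital: ${if_fully_digital / 1e9:.0f} billion")
print(f"MDR burden at ~10% digital penetration:  ${at_current_share / 1e9:.0f} billion")
```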

of the vision for the payment system set out by the Reserve Bank of India. NPCI is set up as a not-for-profit organization and a strategic outfit to popularize digital payments in support of consumer interest (I was strongly advocating and steering this when in RBI). We insisted on this because the payment system is strategically important for the country. This provision as a not-for-profit company also protected NPCI against takeover by powerful private players. It is satisfying that, in the budget just presented, the government has done away with MDR for RuPay card transactions. While the government has rightly taken advantage of NPCI, its own extended arm, the provision should apply equally to others like Visa and Mastercard. Otherwise, merchants will continue with the markup, as they cannot set two prices, thus defeating the very objective of preventing the loss of consumer surplus. We need to keep in mind that the payment system is in the nature of a public good and hence should not be used to cause a loss of consumer surplus (Barman 2018, 2020a). NPCI is an example of how an organization can succeed in overcoming steep market barriers, containing the ill effects of monopolistic pricing. NPCI has emerged as a model internationally, showing how interventionist government policy on critical payments infrastructure can comprehensively protect consumer interest. However, private providers of public utilities like digital payments also need a window as a business proposition, as this provides for competition and innovation. In general, there should be a good balance between consumer and producer surplus. Marshall (1961), who established neoclassical economics showing how production and consumption were determined, said, "The work I have set before me is this—how to get rid of the evils of competition while retaining its advantages." This is very important for public policy on the payment system, which affects each and every person. It also supports interventionist government policy, particularly when a public good is involved. In actual practice, it is difficult to ensure a fair price. On occasion, market intervention by the government is misused, destroying a part of consumer surplus. However, considering that market power can lead to the extraction of supernormal profits, interventionist government policy needs to undertake a balancing act. The availability of price information will reduce information asymmetry, allowing the market to function more competitively. In such a case, price signals are expected to contribute more effectively to promoting allocative efficiency for optimal use of resources.
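The back-of-the-envelope MDR estimate made earlier in this subsection can be reproduced in a few lines; the GDP figure, the transaction multiple, the MDR rate and the digital share are the approximate values assumed in the text, not official statistics.

gdp = 3.0e12                     # assumed GDP of roughly $3 trillion (illustrative)
transaction_multiple = 2         # every good/service transacted about twice
mdr = 0.01                       # merchant discount rate of 1%
digital_share = 0.10             # roughly 10% of transactions done digitally

total_transactions = transaction_multiple * gdp       # about $6 trillion
mdr_if_all_digital = mdr * total_transactions         # about $60 billion
mdr_today = digital_share * mdr_if_all_digital        # about $6 billion
print(total_transactions, mdr_if_all_digital, mdr_today)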

6 Example of Untapped Potential in a Rural Setting I have a place in picturesque rural surroundings, at a riverfront location a hundred kilometers from Mumbai. This is a tribal belt which has seen little change in years. It is representative of a vast stretch of such pockets scattered throughout the country: barren land growing wild trees, with paddy in patches during the monsoon. A small gated community center here serves as a weekend resort for the well-to-do living in the financial capital of India. It has certain facilities to attract them to spend some time

in the natural surroundings, with hillocks, a forest at some distance, and a rain-fed river. The monsoon is particularly attractive and enjoyable. The villagers around the resort, in Pimplas village of Palghar district, are generally very poor, though many of them possess land. When the monsoon sets in, farmers grow paddy in small patches of cultivable land. The few well-to-do ones have deep tube wells to grow vegetables as a second crop in a few plots. This shows there is scope for one more crop if some of the huge volume of water discharged into the river during the monsoon is stored in reservoirs or ponds that can be dug. This is expected to double the income of many villagers, lifting them out of abject poverty. Orchards are a possibility, along with poultry and goat rearing. At present, deep wells are used for such activities, depleting the water table; instead, these activities can be supported by drip and minor irrigation. Thus, in spite of the availability of land and hard-working people to raise income, the capital required to exploit this potential is not forthcoming. This is despite the government giving attractive subsidies; the reasons are lack of knowledge, risk capital, and local enterprise. Most of these poor people seem to have no aspiration in life, having surrendered to their fate. The market economy bypasses them. This is a deficiency which needs to be remedied. We need to recognize that private enterprise will not focus on such investment possibilities, because small is not beautiful for them. If, however, there were a faster mode of communication, some of the people in the megacities might like to stay in these nearby rural surroundings, as they provide clean air and a healthy environment. This will have positive externalities. A clear mandate on development outcomes, making the local authority responsible and accountable for performance, may also help. There are other possibilities. There is a dairy farm in Saudi Arabia named Al Safi Danone (see the Internet for details), deep inside the desert where only small patches of desert herbs used to grow and there was no surface water. A Saudi prince started the enterprise in 1976, digging deep wells to convert desert into grassland, creating air-conditioned sheds for the cattle, automating processes as in Western countries, and entering the Guinness Book of World Records for the highest-yielding milch cow. It still maintains one of the highest yields per animal, setting an example of how even a wasteland can be put to exemplary productive use. Israel is another classic example of how barren lands can be converted through drip irrigation. When we have the manpower and know-how for such possibilities, what stops us from investing capital and training human resources, including local enterprise, to raise productivity and income? If the government steps in with schemes to support development, funds are available commercially, and the projects are well managed, the returns on such investments can be attractive, as one more crop has the potential to double farmer income. This development effort can have multiplier effects through various channels, enabling these villagers to enjoy a good lifestyle.


7 Bridging Rural-Urban Divide The rural areas, where two-thirds of Indians live, may not be contributing more than one-third of GDP (agriculture contributes 16% of GDP and grows at less than three percent). If these people can be empowered to contribute an additional two to three percentage points, taking GDP growth in rural areas to about five to six percent, it will be an important step toward transformation of the economy. In such an eventuality, there is the possibility of a synergy triggering higher demand, which can further accentuate economic growth. Then we may have an average growth of ten percent or more, which can also be sustained. Of course, there should be a support system of various linkages and physical infrastructure for such a synergy. We need to undertake a scenario exercise showing empirically how such a possibility can be pursued. This calls for analytics linking micro with macro in regional and urban-rural contexts to exploit untapped opportunities for higher growth. The above are some of the possibilities for setting impulses for high growth through public policy as a catalyst. The free market has a strength of its own, but unless the market is efficient and command over capital is evenly balanced, the channels of transmission will not work well for those living in the periphery. The efficient market hypothesis lacks sufficient empirical validity (Quiggin 2010). Developing countries need focused and purposeful interventionist government policy to promote collective interests like public utilities and infrastructure, channelling resources into such productive pursuits. This will help in exploiting certain untapped opportunities to set the path for balanced development. We need analytics that give a realistic picture of the ground reality to support such a policy.

8 Analytics for Micro-Macro Linkage We have the advantage of the Information Age to reengineer and redesign the processes of the Indian official statistical system, radically overhauling the collection, collation, dissemination, and analysis of data. It is now possible to collect a huge volume of numerical, text, audio or video data, whether structured or unstructured, for storage in the cloud as part of Big Data technology. These data, when organized for retrieval by various dimensions, can provide empirical evidence down to the ultimate granular level of a household, industry, occupation, organization, or geography. This will offer new possibilities to examine hypotheses or theories for empirical validity. There are empirical issues of nonlinearity, path dependence, and heterogeneity which need careful consideration. We can now take advantage of Big Data, properly structured, as a rich repository for deeper insight, unearthing new knowledge relevant for policy. Through this process, it is possible to develop a system of micro-macro linkages for moving from bottom to top over time and space, and vice versa. Besides providing an empirical basis for policy at various levels of

governance, bringing about transparency, and making a difference in the way policy is formulated and executed and performance is tracked, the system can be versatile for testing various hypotheses and for simulation, including analysis of evolutionary aspects of economic growth and transformation (Dopfer et al. 2004). As Schmidt (2010), former CEO of Google, observed, "There were five exabytes of information created between the dawn of civilization through 2003, but that much information is now created every two days, and the pace is increasing." A new discipline, called Data Science, has grown around exploratory data analysis, enabling the analysis of such vast data for understanding patterns and dependencies. Data Science is basically an approach to develop data-driven theory for solving complex scientific and economic problems and to provide insight for decisions. It tackles complexity through a process of identifying, specifying, representing, and quantifying comprehensively. As Cao (2017) observed, the required intelligence consists of a thinking mindset and the ability to think analytically, creatively, critically, and inquisitively. Artificial intelligence and machine learning are the basic tools for supervised and unsupervised learning, including the mining of text, audio, and video data. We need a spatial Big Data platform and a data warehouse in a cloud environment to extract the data for analysis. Metadata describes the structure of, and some meaning about, data, thereby contributing to their effective use. It serves as a data dictionary for storing and managing data, because the environment must know the specifics of all data to function properly. The most important metadata in a data warehouse is data lineage, which traces data from its exact location in the source system and documents precisely what transformations are applied before it is finally loaded. Data lineage covers the data definition over the entire lifecycle, from the source system database to the final resting place in the data warehouse. Analytics covers the management of the complete data lifecycle, which encompasses collecting, cleansing, organizing, storing, analyzing, and governing data. The term includes the development of analysis methods, scientific techniques, and automated tools. The method of analytic thinking tries to look at the larger whole based on the ultimate granular data in their context. India is a vast country which is highly diverse in terms of topography, weather, habitat, natural resources, factor endowments, skill sets, cultural and linguistic differences, and federal structure. As mentioned, heterogeneity, non-linearity and path dependency are the resultant reality, which must be recognised as major sources of variation in economic outcomes while pursuing the path of balanced regional development, equity in the distribution of the fruits of such development, and social justice (Rawls 1971). There are also differences in governance and institutions stemming from these spatial variations and federalism, even if rules and regulations are expected to be congruent throughout the country, as will be discussed. Considering such imponderables, we need to undertake Multilevel Analysis in a layered manner, bringing out the contribution of different sources of variation. This has the potential to throw up intelligence that can enrich analysis and augment knowledge for such a policy and for more effective implementation of programmes in the regional context, right from the district upwards, with comparison and gap analysis in relation to different levels.
Multilevel Analysis builds on data at different levels of analysis and it has a

way of tackling nesting and intraclass and interclass correlations to improve estimates as part of quantitative techniques of analysis. This will pave the way for a broad-based knowledge economy. A data scientist should have expertise in the subject, in our case economics, as well as in statistics and algorithms, using software like R and Python. Varian (2014) and Hastie et al. (2008, 2017) are very good references on quantitative tools useful for extracting information from big data using new techniques of econometric analysis. Data Science is stimulating for advanced analysis. As an illustration, let me consider certain aspects of agriculture. We are in a position to collect information about land available for agriculture, land use, and yield rates using satellites and drones. We undertake surveys for data on costs of cultivation. We need data on how to balance soil and fertilizer appropriately, on the application of chemicals and water, on the effective use of natural fertilizer, on soil moisture, on the role of pre-harvest irrigation, and on heat and plant stress. We also need information on sustainable farming practices, not only to save water and chemicals but also to assist farmers, through knowledge transfer of successful and profitable ecological practices, in moving away from agro-chemical practices. These data are now collected as a matter of routine by some countries, e.g. Israel, Spain, and South Africa. As mentioned earlier, such data will help us carry out Data Envelopment Analysis showing the best possible combination of inputs for a given output, against which a gap analysis can be undertaken to inform a farmer how to raise productivity to realize the potential. A Multilevel Analysis can be undertaken using suitable algorithms to cover the different layers of geography and administration of the country. An application program interface (API) can pick up information from such analysis for an automated broadcasting system, through which a farmer will know where he stands vis-à-vis the best. This is how a knowledge economy can be promoted. Even if productivity rises, farmers may not get remunerative prices to raise their income. We need data on price formation for analyzing market microstructure and the share of middlemen involved in distributive trade. Surplus crops can go as raw material to the food processing industry, adding value and thus stabilizing prices. There are possibilities for more sophisticated analysis of complex systems using stochastic combinatorics, econophysics, and agent-based models (Barman 2019). Given intrinsic heterogeneity in resource endowments, skill sets, and infrastructure facilities, economic development possibilities are contingent on how investable capital gets allocated. To influence investment, even on consideration of return on investment, the research strategy should be to describe the macroeconomy as composed of possibilities at micro levels. A continuous-time Markov chain, that is, a Markov process with at most countably many states, also called a jump Markov process (Aoki and Yoshikawa 2007), is an example of such complex analysis. In this approach, accommodating probability distributions based on micro data, the analysis is built on state spaces and transition rates so that a jump Markov process describes the behavior of economic agents in their environment. Economics is a behavioral science, and Aoki and Yoshikawa (2007) show how stochastic combinatorics can accommodate behavioral relationships. Combinatorics

is usually considered a body of methods for counting the number of ways in which objectives can be met under certain rules, imposing optimality criteria for the desired solution. It tries to find out in how many ways a prescribed event can occur. In a production possibility curve, one studies the combinations of inputs for an output. A random walk is a simple example of assigning probability to occurrences to approximate dynamic changes. Using these concepts, clusters are formed as state spaces and the dynamics are measured using the Markov process. Extending these concepts, stochastic combinatorics and simulation can be considered as possibilities for economic analysis in a behavioral context.
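To make the jump Markov process idea concrete, here is a minimal simulation sketch; the three states (labelled agriculture, industry and services) and the transition rates in the generator matrix are entirely hypothetical and serve only to illustrate how holding times and jumps are generated.

import random

def simulate_jump_markov(Q, state, t_end, rng=random.Random(0)):
    """Simulate a continuous-time (jump) Markov chain with generator matrix Q.
    Q[i][j] is the transition rate i -> j (i != j); each row of Q sums to zero."""
    t, path = 0.0, [(0.0, state)]
    while True:
        rates = [Q[state][j] if j != state else 0.0 for j in range(len(Q))]
        total = sum(rates)
        if total == 0:
            break                              # absorbing state
        t += rng.expovariate(total)            # exponential holding time
        if t > t_end:
            break
        state = rng.choices(range(len(Q)), weights=rates)[0]
        path.append((t, state))
    return path

# Illustrative 3-state economy: 0 = agriculture, 1 = industry, 2 = services (made-up rates)
Q = [[-0.30, 0.20, 0.10],
     [0.05, -0.15, 0.10],
     [0.02, 0.03, -0.05]]
print(simulate_jump_markov(Q, state=0, t_end=50.0)[:5])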

9 Institutions and Governance An interventionist government policy is best suited to promoting collective interest. However, the role of the public sector is criticized when it comes to the management of business, due to large inefficiencies and rent-seeking. But modern information systems and the associated transparency can help keep a check on irregularities. Private investment flourishes when rules and regulations are clear and transparent, outcomes are predictable, and the dispensation of justice is quick. Hence I feel that a paper of this nature will not be complete without touching on institutions and governance. A policy regime, even if highly credible, must bank on these two aspects as most critical for its success. As Olson (1996) said, "the large differences in per capita income across countries cannot be explained by differences in access to the world's stock of productive knowledge or to its capital markets, by differences in the ratio of population to land or natural resources, or by differences in the quality of marketable human capital or personal culture. (....) The only remaining plausible explanation is that the great differences in the wealth of nations are mainly due to differences in the quality of their institutions and economic policies". A country's income per head rises many times if governance improves, denting rent-seeking and inefficiency. We need analytics to oversee the efficient functioning of institutions and governance. The government has recently launched a programme to rate states using a governance index, which is a very welcome development. The Good Governance Index is expected to provide quantifiable data to compare the state of governance across all states/UTs, capturing citizen-centric and result-driven performance and leading to improved outcomes. The metric covers 60 indicators, each of which is assigned a weight. The sectors covered are agriculture and allied services, commerce and industries, human resource development, public health, public infrastructure and utilities, economic governance, social welfare and development, judicial and public security, environment, and citizen-centric governance. It is hoped that once the system stabilizes, this will be extended to the district level to make district functionaries involved in development work accountable.
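The index construction described here amounts to a weighted aggregation of normalized indicators. A minimal sketch is given below; the indicator values and weights are made up for illustration and are not the actual Good Governance Index weights.

import numpy as np

indicators = np.array([0.62, 0.48, 0.75, 0.55])   # hypothetical normalized indicator values for one state
weights = np.array([0.30, 0.20, 0.30, 0.20])      # hypothetical weights summing to 1
score = float(indicators @ weights)               # composite governance score
print(round(score, 3))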


10 Conclusion India has the potential to grow at around 10% per annum by making optimal use of human and material resources. For this we need a different strategy, policy, and action plan to optimize resource use. Going by the past, with an incremental capital-output ratio of about 4, achieving such a growth target requires investment of about 40% of GDP (4 times 10%). In growth theory, while labor and capital are the basic inputs in the specification of the production function explaining growth, the importance of research and development, the accumulation of knowledge, the rule of law, the development of financial institutions, and so on is analyzed extensively to explain why some countries have become rich and others have not. The whole is, no doubt, the sum total of its parts. However, we need to find a way to explain the whole along with its parts for a well-functioning, efficient system, including the microeconomic behaviour of individual agents (Lucas 1976). In such a case, it will be possible to consider and analyse the economy as an evolutionary process that moves from one stage of development to another through a complex system of competition, causing structural and distributional changes that affect people's income and assets, along with rising inequality and disparity impinging on social welfare. If the use of resources can be optimized, systems and processes are improved, the business environment supports innovation, and a mechanism exists for monitoring performance closely, it should be possible to raise productivity and reduce the investment requirement. This paper explores this idea. Supply-side action to raise output is not enough; we also need demand. China produced goods for export all over the world, but such a model is not likely to succeed now. We should have enough scope for generating internal demand. People should find enough productive employment opportunities to raise their incomes, which will create demand for goods and services. It is demand-creating technology that matters. It is important that decisions affecting people's lives are taken through a real understanding of the factors surrounding their living conditions and potential. A broad-brush approach misses out on many aspects of the economy. The market economy is indispensable, but it also has known limitations. The present system has not corrected the growing inequality, as a tiny minority takes advantage of the market through disproportionate access to pooled financial resources, market segmentation, and power play to serve its interest. The capital needed for raising productivity, the knowledge, and the enterprise elude the poor. Interventionist government policy to promote collective choice, fulfilling the social contract, affects the interest of this silent majority when it remains opaque. Hence, the existing system will not deliver sustained high growth without major transformation, making the poor active partners in such development. We need to analyze how individuals can be empowered to work together, optimizing resource use and exploiting the potential for high growth. As the organization of work is facilitated by enterprise and ease of doing business, it is essential to promote

an environment providing such opportunity. In addition to productivity, competitiveness, and high quality in produced output, we require (1) efficient market infrastructure for market-clearing remunerative prices, and (2) government intervention to support inclusive growth. We need the determination and the mechanism to promote and enforce such an environment. We need fully aligned real, financial, fiscal, and external sectors for assessing the optimal use of resources for balanced growth and stability. The analytics needed to support such a strategy should be multilevel and consistent with overall policy objectives. The strategy should cover the identification of potential, targets, and means of delivery right from the district level, which can be monitored and evaluated. This is a huge task which needs capability building following a clearly defined national commitment. We need to embrace technology with an open mindset, even if the transition involves frictions. We are bound to encounter conflict in reengineering, but creative destruction is how a society progresses. It is innovation and the creative instinct which we need to inculcate as a society to climb and reach the summit. There is an urgent need to come out of the deeply entrenched legacy of the past. I have illustrated this with a relatively simple example based on agriculture, which supports half the people. Institutional reform is very challenging. As a simple example of the possibilities for revamping and modernizing a strategically important institution, consider our statistical system, which I know well. It is a professional statistical setup manned by very competent people, yet it has come under severe scrutiny recently. We have not kept pace with time or with developments elsewhere to modernize our statistical system. Bureaucratic red tape is slow on innovation, which is a major stumbling block. We need to act decisively, guided by a well-thought-out strategy. We must reengineer the official statistical system to develop the analytics required for supporting a high growth strategy. The quality of the policy regime and institutions plays a very critical role in economic development and social welfare. We need much improved institutions and governance to inject greater efficiency and transparency into the system. The government programmes on infrastructure development, industrialization to accelerate the pace of growth, structural transformation to shift surplus workers out of agriculture, sustainable development, and research and development are well meaning. But efficiency in drawing up a plan of action and executing it earnestly remains a major concern. A knowledge economy based on dependable information on economic, social, and institutional factors, providing deeper insight into growth dynamics, is expected to help society become aspirational and empowered, a basic requirement for convergence on growth and balanced development. We need to measure the pace of such convergence for timely action to accelerate the process. A measurable metric covering performance against set objectives is expected to instil a culture of accountability and provide the transparency needed for achieving socio-economic transformation. This is expected to bring about synergy leading to optimal use of human and material resources, raising the potential for broad-based sustained high growth.


We hope that such a vision will find takers, considering that, given the commitment, these goals are achievable.

References
Adelman I (2001) Fallacies in development theory and their implications for policy, in Meier and Stiglitz as cited below
Aoki M, Yoshikawa H (2007) Reconstructing macroeconomics: a perspective from statistical physics and combinatorial stochastic processes. Cambridge University Press, New York
Bagchi AK (2005) Perilous passage: mankind and the global ascendency of capital. Oxford University Press, New Delhi
Barman RB (2018) Opinion: no reason for payments board. Mint (daily newspaper), 15 Oct 2018
Barman RB (2019) In quest of inclusive growth. Econ Polit Wkly LIV(32):44–50
Barman RB (2020a) Potholes on the digital payment superhighway. The Hindu, 22 October 2020
Barman RB (2020b) Re-engineering for credibility: remedies for India's ailing statistical system. Econ Polit Wkly LV(1):31–36
Cao L (2015, 2017) Metasynthetic complexity and engineering of complex systems, Springer; and Understanding data science, Springer
Colander D, Howitt P, Kirman A, Leijonhufvud A, Mehrling P (2008) Beyond DSGE models: towards an empirically based macroeconomics. Am Econ Rev 98(2):236–240
Dopfer K, Foster J, Potts J (2004) Micro-meso-macro. J Evolution Econ 14(3):263–279
GoI (2020) Union budget—state of the economy, Government of India, Chapter 1, 10 Jan 2020
Hamalainen TJ (2003) National competitiveness and economic growth: the changing dimensions of economic performance in the world economy. New horizons in institutional and evolutionary economics series. Edward Elgar Publishing
Hastie T, Tibshirani R, Friedman J (2008, 2017) The elements of statistical learning: data mining, inference, and prediction, 2nd edn. Springer
Lucas RE (1976) Economic policy evaluation: a critique. Carnegie-Rochester Conf Ser Public Policy 1:19–46
Marshall A (1961) Principles of economics. Macmillan, London
Meier GM, Stiglitz JE (2001) Frontiers of development economics: the future in perspective. Oxford University Press, New York
Nachane DM (2018) Critique of the new consensus macroeconomics and implications for India. Springer, New Delhi
National Statistical Office (2019) Reports of periodic labour force survey, 2017–18, Quarterly Bulletin, Oct–Dec 2018, Government of India, May 2019
Olson M (1996) The rise and decline of nations: growth, stagflation, and social rigidities. Yale University Press, ISBN 13:978-03000030792. Also refer to Distinguished lecture on economics in government: big bills left on the sidewalk: why some nations are rich, and others are poor. J Econ Perspect 10(2):24–33 (Spring 1996)
Piketty T (2014) Capital in the twenty-first century. Belknap Press of Harvard University Press
Porter M (2002) Enhancing the microeconomic foundations of prosperity: the current competitiveness index. Harvard Business School, Sept 2002
Quiggin J (2010) Zombie economics: how dead ideas still walk among us. Princeton University Press, Princeton
Ravallion M (2016) Economics of poverty. Oxford University Press, Oxford
Rawls J (1971) A theory of justice. Oxford University Press, Oxford (1999)
Schmidt E (2010) As quoted in Einav L, Levin J (2014) The data revolution and economic analysis. NBER

Shermer M (2008) The mind of the market: compassionate apes, competitive humans, and other tales from evolutionary economics. Times Books, Henry Holt and Company, New York, p 33
Smith A (1776) An inquiry into the nature and causes of the wealth of nations. Clarendon Press, Oxford
Varian HR (2014) Big data: new tricks for econometrics. J Econ Perspect 28(2):3–28
World Bank (2011) World development report

The Secretary Problem—An Update S. P. Mukherjee

1 Introduction The secretary problem (also called the secretary selection problem, the Sultan's dowry problem, the marriage problem, etc.) has been widely studied over the last six decades by a host of authors. Although Gardner (1960) is credited with the first publication on the problem and Lindley (1961) provided the first published solution using dynamic programming and the corresponding recursive relations, Cayley (1875) had much earlier referred to the underlying issue relating to a random sequence. Chow et al. (1984), Ferguson (1989), Gilbert and Mosteller (1966) and others made interesting contributions to the study of this problem as an illustration of a stopping rule. Bruss (1984) came up with the shortest rigorous proof of the optimal properties of the 'odds' algorithm, also called the e-rule, for solving the problem. Freeman (1983) gives a comprehensive account of the problem and its extensions. Krieger and Samuel-Cahn (2016) considered a generalized problem of selecting one of the few (number specified) best candidates with as small a number of interviewed candidates as possible. Stated simply, the problem involves a fixed number n of candidates who can be ranked unambiguously, with no ties, from 1 (best) to n (worst) according to some selection criterion (or criteria) through an interview. We interview the candidates sequentially, one by one in a random order, accepting or rejecting each candidate just after the interview, until we decide to stop at the candidate we select as, say, the secretary. The candidate with whom the interview process is stopped should be better (having a lower rank) than any of the previously interviewed candidates. All these investigations were purely probabilistic (statistical) in nature and attempted to achieve a lower bound on the probability of selecting the 'best' candidate. No consideration was given to the cost of interviewing or to any other cost(s) likely to be involved. Moriguti (1992) formulated the problem as one of determining

optimally a number i of candidates to be interviewed in a random order, choosing the best of these candidates (with rank 1 among them) and introducing some cost considerations. Let r be the rank of the selected candidate in the population of candidates. Evidently, we would like to make the expected rank E(r) as close to unity as possible. Moriguti suggested a cost function to be minimized, which he took as T = E(r) + k·i, where k is the cost of interviewing a candidate. Mukherjee and Mandal (1994) argued that we incur a penalty or opportunity cost by way of a regret that we failed to select the best candidate, if that happens. This could be taken as proportional to (r − 1). They took the expected cost as the objective function T(i) = a + k·i + c[E(r) − 1]. To make the formulation a bit more realistic, they introduced a constraint, viz., Pr{|r − r0| ≥ m} ≤ α (pre-assigned and small), where r0 and m are very small integers and 1 − α is an aspiration level. Moriguti (1992) and Mukherjee and Mandal (1994) formulated the problem as a non-linear integer programming problem. Assessing the extent of agreement among the multiple experts usually involved in a selection process is an extremely difficult task. Suppose p judges are required to independently rank the candidates interviewed. It is quite unlikely that the same candidate will be judged the best by all the experts. Ruling out that unlikely 'fortunate' case, we may possibly allow the experts to discuss among themselves and eventually 'induce' a consensus ranking, or just the consensus choice of the best among the candidates interviewed. There have been several different approaches to finding the overall ranks of the i candidates to identify the 'best' among them. In the next section, a brief review of the standard problem and the optimal stopping rules for solving it is taken up. A generalized version of the Mukherjee-Mandal formulation of the problem is suggested in Sect. 3, while Sect. 4 deals with the use of 'expected net gain due to sampling' as an objective function to work out some reasonable solutions to the problem. Section 5 takes up the case of multiple experts engaged in the interview process and providing multiple rankings of the candidates interviewed. A few concluding remarks are made in the last section.

2 Optimal Stopping Rule Stopping rules to solve the secretary problem define strategies like the following: Reject the first (r−1) candidates and let M be the best among them (with the lowest rank), where r is a pre-fixed number. Select the first candidate thereafter who is better than any of the previously interviewed candidates. Thus, the total number of candidates interviewed is r + p where none of the p candidates from the rth to

the (r + p − 1)th is better than M, while the (r + p)th candidate is better than M. The minimum number interviewed is r and the expected number interviewed is r + E(p). It can be shown that the probability that the best of all n candidates (the one with rank 1) is selected by the above procedure is Pr(r) = [(r − 1)/n] Σ 1/(i − 1), the sum running over i = r, ..., n, and this can be approximated by P(x) = −x ln(x), where x = r/n. This probability is a maximum at x = 1/e, with value 1/e. Thus, the optimal stopping rule is to reject the first n/e candidates and then to select the first candidate better than any of the previously interviewed candidates. This strategy gives a lower bound of 1/e on the 'success' probability, i.e. the probability of selecting the best among all n candidates. Bruss and Samuels (1987) considered the more general case with an unknown number of candidates in the population. The probability of success, namely the probability that we stop at the first candidate who is better than any of the first n/e candidates interviewed earlier and that this candidate is the best in the population (rank 1), was already shown by standard dynamic programming to be about 1/e for moderate values of n. In fact, for small values of n, the number of candidates interviewed and the success probability are as follows:

n     2      3      4      5      6      7      8      9
r     1      2      2      3      3      3      4      4
Pr    0.50   0.50   0.46   0.41   0.43   0.43   0.43   0.41
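A minimal check of these figures is sketched below: it evaluates Pr(r) from the formula above, finds the maximizing r, and confirms the value by Monte Carlo simulation. The choice n = 50 and the number of trials are arbitrary illustrations.

import math, random

def p_exact(n, r):
    """Pr(best selected) = ((r - 1)/n) * sum_{i=r}^{n} 1/(i - 1); for r = 1 the rule picks the first candidate, so Pr = 1/n."""
    if r == 1:
        return 1.0 / n
    return (r - 1) / n * sum(1.0 / (i - 1) for i in range(r, n + 1))

def p_simulated(n, r, trials=100_000, rng=random.Random(1)):
    """Monte Carlo check: reject the first r - 1 candidates, then take the first one better than all of them."""
    wins = 0
    for _ in range(trials):
        ranks = rng.sample(range(1, n + 1), n)            # random interview order, rank 1 = best
        best_seen = min(ranks[:r - 1]) if r > 1 else n + 1
        chosen = next((x for x in ranks[r - 1:] if x < best_seen), None)
        wins += (chosen == 1)
    return wins / trials

n = 50
r_best = max(range(1, n + 1), key=lambda r: p_exact(n, r))    # close to n/e
print(r_best, round(p_exact(n, r_best), 3), round(p_simulated(n, r_best), 3))   # both near 1/e = 0.368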

This procedure, often referred to as the e-rule, has the obvious limitation that it fails to select a candidate if the best among all the candidates appears for interview among the first r − 1 interviewees. To avoid this, the class of strategies considered was extended to include the possibility of recalling all the candidates interviewed and selecting the best among them. Bruss (2000) proved that the e-rule is optimal in the class of rules of this type, and the lower bound of 1/e for the probability of success achieved by it is also the optimal bound. There have been some other rules of this type, including one that rejects the first √n candidates, a number lower than n/e in most cases. Samuels (1993) provided, in a series of articles, several benchmark bounds for the number of candidates to be interviewed before stopping. These bounds include the classical bound of 1/e as well as bounds like 3.8695 and a much lower figure of 2.6003.

24

S. P. Mukherjee

3 Generalized Mukherjee-Mandal Formulation Let r be the rank, in the entire population of n candidates, of the candidate found to be the best among the i candidates interviewed. Its probability distribution is given by the probability mass function p(r) = C(n − r, i − 1)/C(n, i), r = 1, 2, ..., n − i + 1, so that the cumulative probability of rank r or less is F(r) = 1 − C(n − r, i)/C(n, i). The expected rank of the selected candidate (among the population of n candidates) is E(r) = (n + 1)/(i + 1). Mukherjee and Mandal (1994) formulated the problem as one of minimizing the total cost, considering the cost of interviewing i candidates and the cost of the lost opportunity in failing to select the best candidate, i.e. the cost of regret. They considered the regret as linear in the rank difference (r − 1) and took the total cost as the objective function T(i) = c(r − 1) + k·i, involving one decision variable i. This being a random function, we can apply all three classical rules to formulate the corresponding non-stochastic versions, viz., the E-rule (expectation minimization), the V-rule (variance minimization) and the P-rule (exceedance probability minimization). Thus, we get three alternative formulations: to determine an integer i, 1 ≤ i ≤ n, such that E[T(i)] = c(n − i)/(i + 1) + k·i is a minimum, or Var[T(i)] = c²·Var(r) is a minimum (independent of k), or Prob[T(i) > t] is a minimum for some specified t. The objective function in each of these formulations is non-linear in i, and we can add a constraint to each of these minimization problems. A somewhat obvious constraint could be f = i/n < f1 (a specified fraction). Mukherjee and Mandal added a chance constraint to the first formulation to make it more realistic. This is Pr[r ≤ r0] ≥ 1 − α (an aspiration level), where r0 is a specified small integer like 2 or 3. It is possible to consider the squared difference between the rank r of the selected candidate and that of the best candidate, viz., 1, to quantify regret, on the lines of the squared error loss in Bayesian inference. In such a case, we have a generalized version of the Mukherjee-Mandal formulation of the problem as follows: to find i (a positive integer less than n) such that

T(i) = c[E(r) − 1]² + k·i is a minimum, subject to Pr[r ≤ r0] ≥ 1 − α. Thus formulated, this is a non-linear stochastic integer programming problem for which an efficient algorithm awaits development. It is a common practice to select a panel of candidates, instead of a single candidate, to fill a single vacant position, lest the best candidate selected fail to join the post. For such an eventuality, the above formulation can be generalized to minimize the total cost in selecting a panel of, say, three candidates, namely the candidates with ranks 1, 2 and 3 in the sample of i candidates interviewed. The expected total cost function in this case can be taken as E[T(i)] = c1[E(r1) − 1] + c2[E(r2) − 2] + c3[E(r3) − 3] + k·i, where r1, r2 and r3 stand for the ranks, in the total population of n candidates, of the three best candidates interviewed. The coefficients in the cost of regret may be taken to be constant for all three positions, or we may have the choice c1 > c2 > c3. A rational chance constraint could be Pr[r3 > r0] < α (a pre-specified small fraction), where r0 is a small integer, say 4 or 5.
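As a small numerical sketch of this formulation (not part of the original paper), the following enumerates the expected cost over i using E(r) = (n + 1)/(i + 1) and checks the chance constraint through the pmf given above; the values of n, c, k, r0 and α are arbitrary illustrations.

from math import comb

def expected_cost(i, n, c, k, squared=False):
    """Expected cost of interviewing i of n candidates and taking the best of them.
    Linear regret: c[E(r) - 1] + k*i; squared variant: c[E(r) - 1]^2 + k*i."""
    regret = (n - i) / (i + 1)                  # E(r) - 1, with E(r) = (n + 1)/(i + 1)
    return c * (regret ** 2 if squared else regret) + k * i

def chance_constraint_ok(i, n, r0, alpha):
    """Check Pr[r <= r0] >= 1 - alpha, with p(r) = C(n - r, i - 1)/C(n, i)."""
    p = sum(comb(n - r, i - 1) for r in range(1, min(r0, n - i + 1) + 1)) / comb(n, i)
    return p >= 1 - alpha

n, c, k, r0, alpha = 50, 10.0, 1.0, 3, 0.10     # illustrative cost ratio and aspiration level
feasible = [i for i in range(1, n) if chance_constraint_ok(i, n, r0, alpha)]
best_i = min(feasible, key=lambda i: expected_cost(i, n, c, k))
print(best_i, round(expected_cost(best_i, n, c, k), 2))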

4 Solution Through ENGS It has to be admitted that decisions are often made in the face of uncertainty and risk. Unless we have complete information about the states of nature and, correspondingly, about the outcome or payoff of any decision or action, we cannot make the 'best' decision. In this context, the role of information in decision-making becomes quite relevant, and the question of gathering and processing the requisite information to reduce uncertainty does arise. But this implies some additional cost with two components, viz., the cost of gathering information and the cost of processing the information gathered to take the terminal decision. Both these activities may proceed sequentially. The value of perfect information in the context of the secretary problem can be taken as the gain due to selection of the best candidate (in case we can interview all the candidates) compared to the selection of any candidate chosen without an interview. Thus, the value of perfect information is VPI = c(r − 1), where r is the rank of the candidate chosen at random from the n eligible candidates and c is a factor of gain. Since r is random, we can define the expected value of perfect information as EVPI = c[E(r) − 1] = c(n − 1)/2.


Short of gathering a lot of information, adequate to remove uncertainty about the true state of nature completely at a relatively large cost (which in some adverse cases may exceed the value of perfect information), one may think of gathering sample information using a proper sampling procedure to get some reasonable idea about the true state of nature. The cost of sampling will depend on the sample size as also on the way the sample information is processed to yield the desired information about the true state of nature. The excess of the cost of uncertainty over the cost of an immediate decision to gather sample information and use it for reaching the terminal decision is the net gain due to sampling. Averaging over possible sample sizes, we can define the expected net gain due to sampling (ENGS). In fact, this ENGS may be examined to work out the optimum sample size. It may be interesting to note how a consideration of ENGS can yield 'good' solutions to decision problems which would otherwise involve some complex optimization. Looking back at the secretary selection problem, let us define the EVSI in a slightly different manner. If we do not interview any candidate and randomly select any of the n candidates, the selected candidate will have an expected rank (in the entire population of n candidates) given by E(r) = (n + 1)/2, and the regret, assumed to be linear in the difference between this expectation and 1 (the rank of the candidate we wanted to select), stands at c(n − 1)/2. Once we decide to interview i candidates selected at random, the EVSI can be considered as the difference between the two regrets, with and without sampling, and this comes out as c[(n + 1)/2 − (n + 1)/(i + 1)] = c(n + 1)(i − 1)/[2(i + 1)]. Thus, the expected net gain due to sampling comes out as ENGS = c(n + 1)(i − 1)/[2(i + 1)] − k·i. As expected, this quantity increases with n and depends on the cost ratio c/k. ENGS is negative for i = 1, since a sample of one candidate (who is then selected), which has a non-zero cost, adds no information over the situation where no candidate is interviewed and any one of the n candidates is selected at random. Considering ENGS as the objective function and ignoring the chance constraint, direct computation of ENGS gives a fair idea about a satisfactory choice of i. As can be expected, the choice depends on the cost ratio c/k. With c/k exceeding unity to a large extent, we require larger i to maximize ENGS. In fact, with moderate n and c = k = 1, ENGS may come out to be negative for all i, being smaller in magnitude for smaller values of i. For example, with n = 10, ENGS is −0.2 for i = 2 and −3.7 for i = 8. However, for the same n but c = 2k, ENGS is always positive and reaches a maximum of 2.6 at i = 4. With large n, ENGS remains positive for smaller i; thus, with n = 50, ENGS reaches a maximum of about 12.5 at i = 6. Incidentally, treating i as a continuous variable for an approximate optimum, we find that the optimizing value of i is √[c(n + 1)/k] − 1, and this can be verified to corroborate the findings from direct computation of ENGS. In case the cost of regret is more than proportional to the difference in the expected rank of the selected candidate between no sampling and sampling, and involves the squared difference, the expression for ENGS becomes

ENGS = c[{(n + 1)/2}{(i − 1)/(i + 1)}]² − k·i. In this case, we require a larger number of candidates for the best ENGS, unless the cost of the interview is substantially high. With c = k = 1 and n = 10, ENGS increases monotonically with i.
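A minimal sketch for computing ENGS over i under both regret specifications is given below; it reproduces, for example, the maximum of 2.6 at i = 4 quoted above for n = 10 and c = 2k, and the parameter values are only illustrative.

def engs_linear(i, n, c, k):
    """ENGS with linear regret: c(n + 1)(i - 1)/[2(i + 1)] - k*i."""
    return c * (n + 1) * (i - 1) / (2 * (i + 1)) - k * i

def engs_squared(i, n, c, k):
    """ENGS with squared regret: c[{(n + 1)/2}{(i - 1)/(i + 1)}]^2 - k*i."""
    return c * ((n + 1) / 2 * (i - 1) / (i + 1)) ** 2 - k * i

n, c, k = 10, 2.0, 1.0
best = max(range(1, n), key=lambda i: engs_linear(i, n, c, k))
print(best, round(engs_linear(best, n, c, k), 1))   # i = 4, ENGS = 2.6 when c = 2k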

5 Case of Multiple Rankings In many selection processes, several experts are involved, usually required to rank the candidates interviewed independently of one another. Thus, in selecting a candidate for a teaching position in an academic institution, the selection committee may include a few subject specialists, some senior academics in the broad faculty (like Science or Humanities or Commerce and the like), and some representative of the academic administration like the Rector or the Dean. Once all the i candidates have been interviewed and each expert or judge has prepared his/her own ranking of the candidates from 1 to i (assuming no ties, as can be taken for granted in case this number is not large), we may find differences among the rankings and, more importantly, in respect of the candidate judged the best. The problem is more likely to crop up when we are required to select a panel of the m < i best candidates, instead of just the 'best' of all. An attempt is then made to develop something like a consensus ranking or a consensus choice of the candidate(s) to be selected (being the candidate with a consensus rank of 1, or an agreed rank set of 1, 2, ..., m). The term 'consensus ranking', sometimes also known as 'median ranking', is a generic term to imply a ranking that summarizes a set of individual rankings in some defensible way. In such situations, there have been three broad approaches, viz.,

(1) a subjective approach of allowing consultations and discussions among the experts and taking recourse to negotiations and compromises, if needed, to arrive at an 'induced' consensus, in some similarity with Brainstorming or Delphi exercises;
(2) an ad hoc approach considering the given rankings, without modifying or influencing any of them, to come up with ranks based on aggregate 'preferences';
(3) a distance-based approach in which we try to define a measure of the distance between the consensus ranking and the individual rankings, in a bid to ensure that the consensus ranking has the least total distance from all the given expert rankings, or the highest correlations with these given rankings.

Going by the second approach, we can proceed in any of the following ways:

1. Select the candidate ranked 1 by most of the experts (this may pose problems when selecting a panel of, say, the best m candidates).
2. Get the total of ranks assigned to each candidate and rank the candidates on the basis of this aggregate (or average) rank; the candidate with the lowest aggregate rank is selected. An important deficiency of this procedure, suggested by Borda (1871), is that it gives the same importance to all the places (ranks) 1 to i, which need not be appropriate. To overcome this deficiency, one may associate weights with the different places (ranks) in a ranking and use the weighted total. The weighted aggregate score of candidate j, given rank r by pjr experts, can be taken as Sj = Σr pjr wr, where wr is the weight associated with place or rank r. The candidate with the best aggregate score S (or the m best scores, in case of selecting a panel of m candidates) will be selected. One common system of weights has been wr = (n − r + 1); in our case, with i candidates ranked, it becomes wr = (i − r + 1), under which a larger S is better.
3. Each judge can consider pairs of candidates and prefer one to the other within each such pair. We thus get, for each judge, the proportion of cases where a particular candidate is preferred to all others. These proportions are added over judges, and the candidate with the highest aggregate proportion is selected. The data can be presented in terms of a support table, as proposed by Condorcet, and the method bears some analogy with Thurstone's method of paired comparison.
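The weighted (Borda-type) aggregation in item 2 can be sketched as follows; the three judges' rankings are invented purely for illustration, and the default weights are wr = i − r + 1, under which a higher aggregate score is better.

import numpy as np

def borda_scores(rankings, weights=None):
    """Aggregate expert rankings (rows = judges, columns = candidates, entries = ranks 1..i).
    Default weights wr = i - r + 1; the candidate with the highest total score wins."""
    rankings = np.asarray(rankings)
    i = rankings.shape[1]
    if weights is None:
        weights = np.arange(i, 0, -1)            # w_1 = i, ..., w_i = 1
    return weights[rankings - 1].sum(axis=0)     # total weighted score per candidate

# Three judges ranking four candidates (1 = best) -- hypothetical data
R = [[1, 2, 3, 4],
     [2, 1, 3, 4],
     [1, 3, 2, 4]]
scores = borda_scores(R)
print(scores, "-> selected candidate index:", int(np.argmax(scores)))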

In both the above cases, the weights given to different places in the ranking (directly or implicitly) are assigned exogenously, which can invite some criticism of subjectivity. It would be better to derive the weights for these places endogenously from the rankings themselves, using some approach like Data Envelopment Analysis (DEA) and its refinements such as the super-efficiency model. However, use of DEA will result in different weights for the same places or ranks for different candidates, and this may disturb the comparison of total weighted scores to find the best candidate or the few best candidates. In the third approach, Kendall (1962) treated the problem as one of estimation and proposed to rank items according to the mean of the ranks obtained, a method more or less similar to that of Borda (1871). Moreover, he suggested considering the Spearman rank correlation coefficient between two preference rank vectors R and R*, defined as 1 − 6 Σj dj²/(n³ − n), where d²(R, R*) = Σj (Rj − Rj*)² is the squared distance between the rankings R and R*. Kendall also defined his own correlation coefficient by introducing ranking matrices, associating with a rank vector of m objects an m × m matrix with elements aij given by aij = 1 if unit i is preferred to unit j, aij = 0 if units i and j are tied, and aij = −1 if unit j is preferred to unit i. Given another ranking R*, which can likewise be converted into a matrix ((bij)), one can define the generalized correlation coefficient between R and R* as t(R, R*) = Σi Σj aij bij / [(Σi Σj aij²)(Σi Σj bij²)]^(1/2).


It may be noted that in both the above correlation coefficients, ranks have been taken as scores, and either coefficient can be used as a measure of 'similarity' between two rankings. Kemeny and Snell (1962) proposed an axiomatic approach to finding a unique distance measure between two rankings for the purpose of developing a consensus ranking. They introduced the four axioms stated below and proved the existence and uniqueness of a distance metric, known as the Kemeny distance, which satisfies all of them. The Kemeny-Snell axioms are:
1. d(R, R′) satisfies the three standard properties of a distance metric, viz., (a) positivity, i.e. d(R, R′) ≥ 0; (b) symmetry, i.e. d(R, R′) = d(R′, R); and (c) the triangular inequality, i.e. d(R, R″) ≤ d(R, R′) + d(R′, R″) for any three rankings R, R′ and R″, with equality holding if and only if the ranking R′ is between R and R″.
2. Invariance: d(R, R′) = d(R*, R′*), where R* and R′* result from R and R′, respectively, by the same permutation of the alternatives.
3. Consistency in measurement: if two rankings R and R′ agree except for a set S of k elements, which is a segment of both, then d(R, R′) may be computed as if these k objects were the only objects being ranked.
4. The minimum positive distance is 1.

The Kemeny distance between the rankings R and R* (converted into matrices as in Kendall's approach) is now defined as d(R, R*) = (1/2) Σi Σj |aij − bij|. Kemeny and Snell then suggested, as the consensus, the median ranking, i.e. the ranking that shows the best agreement with the set of input rankings. We thus define a consensus ranking as one given by a matrix M such that Σi d(Ri, M) is a minimum, where Ri is the ith individual rank vector. Finding M is an NP-hard problem. When we have n alternatives, there exist n! complete rankings. In case tied ranks are allowed, the analysis gets more complex, as the number of possible rankings becomes approximately (1/2)(1/ln 2)^(n+1) n!. In fact, the complexity of the problem depends directly on the number of alternatives. Bogart (1975) generalized the Kemeny-Snell approach to consider both transitive and intransitive preferences. Cook and Saipe (1976) proposed a branch-and-bound algorithm to determine the median ranking, deriving a solution from adjacent pairwise optimal rankings. Emond and Mason (2002) pointed out that this method does not ensure that all solutions are found, and in some examples it yielded only local optima. Cook (2006) presented a branch-and-bound algorithm in the presence of partial rankings, but not allowing for ties. Emond and Mason (2002) proposed a new rank correlation coefficient that is equivalent to the Kemeny-Snell distance metric. In their matrix representation of a ranking, aij = 1 if alternative i is either ranked ahead of or tied with alternative j, aij = −1 if alternative j is ranked ahead of alternative i, and aij = 0 only if i = j. Their correlation coefficient becomes t′ = Σi Σj aij bij /[n(n − 1)],

which is equivalent to Kendall's coefficient when ties are not allowed. A branch-and-bound algorithm based on this correlation coefficient was suggested by Emond and Mason (2002) to solve the median ranking problem in reasonable computing time when the number of alternatives does not exceed 20. Two computationally more efficient algorithms to find M have been proposed by D'Ambrosio et al. (2012).
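As a rough sketch of these ideas (not taken from any of the cited papers), the snippet below computes the Kemeny distance from the ranking matrices and finds a median ranking by brute force over complete rankings without ties; this is feasible only for a handful of candidates, in line with the NP-hardness noted above, and the judges' rankings are invented.

from itertools import permutations

def score_matrix(rank):
    """Kendall-type matrix: a[i][j] = 1 if i preferred to j, -1 if j preferred to i, 0 on ties/diagonal."""
    m = len(rank)
    return [[(rank[i] < rank[j]) - (rank[i] > rank[j]) for j in range(m)] for i in range(m)]

def kemeny_distance(r1, r2):
    """d(R, R*) = (1/2) * sum |a_ij - b_ij| over the two ranking matrices."""
    A, B = score_matrix(r1), score_matrix(r2)
    m = len(r1)
    return sum(abs(A[i][j] - B[i][j]) for i in range(m) for j in range(m)) / 2

def median_ranking(rankings):
    """Brute-force consensus: the complete (tie-free) ranking minimizing total Kemeny distance; small m only."""
    m = len(rankings[0])
    candidates = [list(p) for p in permutations(range(1, m + 1))]
    return min(candidates, key=lambda M: sum(kemeny_distance(R, M) for R in rankings))

R = [[1, 2, 3, 4], [2, 1, 3, 4], [1, 3, 2, 4]]   # three judges, four candidates (hypothetical)
print(median_ranking(R))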

6 Concluding Remarks It is sometimes said, and not without reason, that decision problems are never finally solved. Most decision problems lend themselves to multiple approaches for their formulation and solution, and different approaches open up new problems of data, data analysis and even decision-making. An open problem in connection with secretary selection is posed by a real-life situation that disputes the basic assumption that the interview process ensures an unambiguous ranking of the candidates. In fact, a more realistic assumption would be that the conditional probability of a candidate being assigned rank p on the basis of the interview is highest when the true rank is p, with the conditional probabilities on either side of p behaving in some smooth manner. Of course, we may better consider the case of a single expert conducting the interview in this general uncertainty setup, to avoid a lot of computational complexity. Acknowledgements The author thankfully acknowledges the suggestion of an anonymous referee to accommodate two references for making the content of this paper more inclusive.

References and Suggested Reading
Borda JC (1871) Memoire sur les elections au scrutin. In: Histoire de l'Academie Royale des Sciences, Paris
Bogart KP (1975) Preference structures II: distances between asymmetric relations. SIAM J App Math 29:254–262
Bruss FT (1984) A unified approach to a class of best choice problems with an unknown number of options. Ann Probab 12(3):882–889
Bruss FT, Samuels SM (1987) A unified approach to a class of optimal selection problems. Ann Probab 15(2):824–830
Bruss FT (2000) Sum the odds to one and stop. Ann Probab 28(3):1859–1961
Bruss FT (2003) A note on the bounds for the odds theorem of optimum stopping. Ann Probab 31(4):1859–1961
Buchbinder N, Jain K, Singh M (2014) Secretary problems via linear programming. Math Oper Res 39(1):190–206
Cayley A (1875) Mathematical questions and their solutions. Edu Times 22:18–19
Chow YS, Moriguti S, Robbins H, Samuel S (1984) Optimal selection based on relative rank. Israel J Math 2:81–90

The Secretary Problem—An Update

31

Condorcet M (1785) Essai sur L’applicationde L’analyse a la probabilite des Decisions Rendues a la Pluralite des Voix, Paris Cook WD, Saipe AL (1976) Committee approach to priority planning: the median ranking method. Cahiers du Centre d’Etudees de Recherche Operationelle 18(3):337–351 Cook WD, Kress MA (1990) A data envelopment model for aggregating preference rankings. Manage Sci 36(11):1302–1310 Cook WD (2006) Distance-based and adhoc consensus models in ordinal preference ranking. Eur J Oper Res 172:369–385 D’Ambrosio A, Aria M, Siciliano R (2012) Accurate tree-based missing data imputation and data fusion within the statistical learning paradigm. J Classif 29(2):227–258 Ebrahimnejad A (2012) A new approach for ranking of candidates in voting systems. Opsearch 49(2):103–115 Emond EJ, Mason D (2002) A new rank correlation coefficient with application to the consensus ranking problem. J Multi-Criteria Decis Anal 11(1):17–28 Ferguson TS (1989) Who solved the secretary problem? Stat Sci 4:282–296 Freeman PR (1983) The secretary problem and its extensions: A survey. Int Stat Rev 51:189–206 Gardner M (1960) Mathematical games. Sci Am:150–153 Gilbert J, Mosteller F (1966) Recognizing the maximum of a sequence. J Am Stat Assoc 61(2):882– 889 Kemeny JG, Snell JL (1962) Preference ranking: an axiomatic approach. In: Mathematical models in the social sciences, Chapter 2. Ginn, Boston, USA, pp 9–23 Kendall MG (1962) Rank correlation methods. Hafner, New York Krieger A, Samuel-Cahn E (2016) A generalized secretary problem. Sequential Anal 35:145–157 Lindley DV (1961) Dynamic programming and decision theory. Appl Stat 19:39–51 Moriguti S (1992) A selection problem with cost-secretary problem when unlimited recall is allowed. J Oper Res Soc Japan 35:373–382 Mukherjee SP, Mandal A (1994) Secretary selection problem with a chance constraint. J Ind Stat Assoc 32:29–34 Samuels SM (1993) Secretary problems as a source of benchmark bounds. IMS Lecture Notes Monograph, vol 22, pp 371–388

Energy Efficiency in Buildings and Related Assessment Tools Under Indian Perspective Avijit Ghosh and Subhasis Neogi

1 Introduction The world’s energy supply remains the same carbon intensive as it was two decades ago. Following an increase of 1.6% in 2017, energy-related CO2 emission rose by 1.7% in 2018. This comes after 3 years of emissions staying flat. The global buildings sector accounted for about 28% of total energy-related CO2 emissions, two-thirds of which is attributable to emissions from electricity generation for use in buildings. Warming and changing climate has a strong influence on energy use by the building sector globally. Electricity usage in buildings grew five times faster than improvements in the carbon intensity of power generation since 2000, and rising demand for equipment such as air conditioners is putting pressure on electricity systems (IEA 2019). Indian electricity demand scenario in different sectors during the period from 2012 till 2047 (100 years after Independence) is shown below. From Fig. 1, it is clearly evident that electricity demand by the building sector comprising both residential and commercial is surpassing that by the industrial sector from 2022 onwards and in 2047, it is almost double that of the industrial sector. In that projection, energy efficient equipment and appliances are considered for both residential and commercial divisions and considerable percentage of buildings shall be following smart energy building technology. To address the issue of energy efficiency in buildings, Bureau of Energy Efficiency (BEE) under Ministry of Power, Govt. of India, had introduced Energy Conservation Building Code (ECBC) in 2007 for Indian Commercial buildings, which was revised subsequently in 2017 (BEE 2017). ECBC was introduced under Energy Conservation Act 2001 with the aim of controlling ever-increasing energy consumption in the A. Ghosh (B) CSIR-Central Glass & Ceramic Research Institute, Kolkata, West Bengal, India S. Neogi Aliah University, Kolkata, West Bengal, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 B. K. Sinha and S. B. Bagchi (eds.), Strategic Management, Decision Theory, and Decision Science, https://doi.org/10.1007/978-981-16-1368-5_3

33

34

A. Ghosh and S. Neogi Sector wise Electricity Consumption Projection during 2012,2022,2030 & 2047 2000

Industry Residential Commercial Agricultural Others

Electricity Consumption(TWh)

1800 1600 1400 1200 1000 800 600 400 200 0 2010

2015

2020

2025

2030

2035

2040

2045

2050

Fig. 1 Electricity consumption by different sectors in India. (Source NITI Aayog, 2015)

commercial building sector mandatorily. The process of making ECBC mandatory on a PAN India basis is under way, and some States have already made it mandatory and for others, the process is ON. For energy conservation in Residential buildings, another guideline under the title “Eco-Niwas Samhita Part-1” (ENS) (BEE 2018) for covering building envelop was introduced by BEE in 2018. The Part-2, covering Lighting, Electrical Power and HVAC, would be introduced shortly. By incorporating these Codes in building Bye-laws on a PAN India basis, energy consumption could be restricted without compromising the functional and comfort needs, respectively. Both the Codes for Commercial and Residential buildings in India are covering Building Envelop, Lighting, Power (for Motors, Transformers and Distribution system) including Renewable energy and HVAC, respectively. Green Rating for Integrated Habitat Assessment (GRIHA) by The Energy Resources Institute (TERI) and Ministry of New and Renewable Energy (MNRE) and Green Building Rating System by Indian Green Building Council (IGBC) are two separate certification systems available in India to establish sustainability in buildings (Commercial and Residential both). Dedicated Codes are followed across the globe to follow sustainability and energy efficiency in buildings, as listed in the following Table 1 (Young 2014). In all the above Codes, the following seven parameters have been given due importance for compliance and point earning, respectively: 1. 2. 3.

Heating and Cooling requirements; Envelop—Insulation in Walls and Ceiling, Window U-factor and Shading/Solar Heat Gain Coefficient; Electrical Motors, Pumps, Transformers and Renewable Energy including Solar Power and Solar Thermal for Hot Water;

Energy Efficiency in Buildings and Related Assessment …

35

Table 1 Energy efficient codes in developing and developed countries Country

Residential

Commercial

Stringency Coverage

Stringency Coverage

Australia

Mandatory One family; Multi family Mandatory All commercial and public buildings

Brazil

Voluntary

N/A

Canada

Mixed

One family; Multi family Mixed

All commercial and public buildings

China

Mandatory One family; Multi family Mixed

All commercial and public buildings

France

Mandatory One family; Multi family Mandatory All commercial and public buildings

Germany

Mandatory One family; Multi family Mandatory Commercial: Offices, Hotels, Hospitals; Public Buildings: Offices, Hospitals

India

Voluntary

Italy

Mandatory One family; Multi family Mandatory All commercial and public buildings

Japan

Mixed

One family; Multi family Mandatory All commercial and public buildings

Mexico

None

N/A

Russia

Mandatory One family; Multi family Mandatory All commercial and public buildings

Spain

Mandatory One family; Multi family Mandatory All commercial and public buildings

South Korea

Mandatory One family; Multi family Mandatory All commercial and public buildings

N/A

None

N/A

Mandatory All commercial and public buildings

Mandatory All commercial and public buildings

United Kingdom Mandatory One family; Multi family Mandatory All commercial and public buildings United States

4. 5. 6. 7.

Mixed

One family; Multi family Mixed

All commercial and public buildings

Lighting Efficiency; Air Sealing; Technical Installations; Design, Position and Orientation.

Sl.1–4 are covered elaborately in ECBC 2017 (India), and in fact Sl.6 and 7 are primarily essential for compliance with respective Code provision. Considering the developmental need and economic growth of India without jeopardizing the global warming and climate change phenomena, environmental sustainability is essential. The building sector being the second highest energy consumer

36

A. Ghosh and S. Neogi

after the industry sector in our country (even going to be the highest in 2047, as projected by NITI Aayog), energy efficiency and conservation are of utmost importance and also to reduce the ecological footprint, various green building methodologies are encouraged for adoption by Government of India. All such methodologies being practiced in our country are described briefly with some examples in the following sections.

2 Literature Survey Young (2014) had undertaken an exercise to compare energy efficient building code provisions contained in 15 countries including some large economy countries across the globe. The effort was to determine highly effective building code. By highlighting the building energy code stringency, technical parameter requirements, enforcement, compliance and energy intensity in buildings of all countries, the most effective policies were tried to be traced. Abdelkader et al. (2014) analysed the context of green architecture practices in Egypt, by comparing with other countries who have achieved remarkable success in applying green architecture principles. The two such selected countries are United States of America and India. By analysing the contexts of green architecture practices in these countries, it was observed that the progress of these practices depends largely on several factors such as stakeholders response and role, energy codes, supportive factors, incentives, leading projects and rating systems. This study is compared to figure out the potentials and deficiencies of each factor in the Egyptian context with respect to other two countries. Joshi et al. (2016) had studied various energy concepts for green buildings. Use of green building materials, financial advantages with designing and planning of green buildings and the present status of green buildings in India were analysed thoroughly. Manna and Banerjee (2019) had reviewed the effect of eco-friendly constructions or green building projects to reduce the significant impact of the construction industry on the environment. Brief overview on the Indian Green Building rating system and their process of certification, barriers in implementation and the rank under global scale were evaluated in this article. Sundar (2013) had studied the benefits and barriers in the implementation of green building techniques in real estate project. Besides tangible and intangible benefits accrued out of such green building project, initial higher construction cost proves to be a deterrent factor in adopting such technology by the real estate developers. Gupta (2015) had undertaken a literature review on green building design aspects. The fundamental point of this paper is green design of contemporary architecture. It aimed to look at environmental and physical design approaches for green buildings in India. In this paper, an analysis of ideology of green architecture, theories and viewpoints outlined in the field at the backdrop of successful cases of environment friendly buildings in India was presented. Vyas et al. (2014) had explored the role of Bureau of Energy Efficiency in Green Buildings in India. In the light of a case study of a building at IIT-Kanpur, the energy efficiency aspect as planned and practised was described. The Bureau’s efforts through the energy conservation building

Energy Efficiency in Buildings and Related Assessment …

37

code provisions and star rating system in buildings could help the country to achieve sustainability in the building sector. Geelani et al. (2012) studied the issues related to energy expenditure, recycling, biodegradability, environmental and sustainability during the manufacture and use of any new building material. The building life cycle for achieving sustainable development was focused in the study. Energy consumption and associated greenhouse gas emissions restriction would be the final aim of green building construction and maintenance. Awasthi et al. (2016) had explored different rating systems, applicable for green buildings in India, and the effect on actual environment friendliness on a long-term basis. The author also analysed the embedded energy content during manufacturing and operational phases of the building. Gupta et al. (2014) had presented the pedagogical context of green buildings in India. The author studied the present, future and traditional pedagogy of green buildings in India. It was observed that the stepwise improvements in pedagogical practice impact the results in advanced innovative technological research supporting the construction industry. Importance of knowledge transfer through planned approach of conveying the green building, material and construction technology at various levels of formal learning could be imparted for the adoption of green building technology in the country. Meir et al. (2014) had studied the cost benefit analysis of an Israeli Office building, which was constructed by following stipulations from voluntary Israeli Green standard (IS 5281). Code compliance cost vis-à-vis benefits accrued for the builder and the users were analysed under 20 years’ time period. The additional cost involved in those cases were found to be in line with that of other countries, which is within a bracket of 0–10%, and the direct investment payment period found to be varied between 3 and 4 years, applicable for different size buildings.

3 Energy Efficient Building 3.1 Energy Conservation Building Code for Commercial Buildings Bureau of Energy Efficiency (BEE) has been established as the nodal agency under Ministry of Power, Govt. of India, with a clear mandate to restrict uncontrolled energy consumption in India by all the sectors. Energy Conservation Building Code (ECBC) had been introduced by BEE in 2007 for all Commercial buildings under voluntary compliance basis initially, and subsequently becoming mandatory for all Indian States. ECBC set the minimum energy efficiency standards for design and construction of commercial buildings without compromising the functional aspect of the building or ignoring comfort, health or the productivity by the user group, and at the same time maintaining proper regard to economic considerations. The Code was revised subsequently, and published during June 2017 with some changes incorporated therein. The Code provisions are applicable for such commercial buildings with a connected load ≥100 KW or contract demand ≥120 kV. Besides normal

38

A. Ghosh and S. Neogi

provisions, some more stringent provisions are also contained in ECBC 2017 edition for three categories of commercial buildings, namely ECBC Compliant (all provisions are mandatory), ECBC+Compliant (all provisions are voluntary) and Super ECBC Compliant (all provisions are voluntary), respectively. The Code is mainly subdivided into seven major chapters, which are as follows: 1. 2. 3. 4. 5. 6. 7.

Purpose, Scope, Compliance and Approach, Building Envelop (Wall, Roof, Fenestrations), Comfort Systems and Controls (Ventilation, Space Conditioning, Service Water Heating), Lighting and Controls and Electrical (Transformers, DG Sets, Motors, etc.) and Renewable Energy Systems.

India has been subdivided into five Climatic zones, viz., Hot-Dry, Warm-Humid, Composite, Temperate and Cold, as shown in Fig. 2. Different design values have been prescribed in the ECBC 2017 for different Climatic zones. It is estimated that ECBC-compliant buildings can save around 25% of energy, and ECBC+ and Super ECBC-compliant buildings are capable of energy saving to the tune of around 35% and 50%, respectively (AEEE 2017). For buildings to qualify under either of ECBC, ECBC+ or Super ECBC category, the Whole Building Performance method shall have to be followed for the Standard Design. The Proposed Design should meet the mandatory provisions under different Sections of the code, as applicable. Finally, the Energy Performance Index (EPI) Ratio for the Proposed ECBC/ECBC+/Super ECBC building should be ≤ respective EPI Ratio listed under the applicable climate zone in the code.

3.2 Energy Conservation Building Code for Residential Buildings BEE had introduced the Energy Conservation Building Code—Residential (ECBCR) during December 2018 as a voluntary adoptive measure, which is planned to be made Mandatory subsequently. ECBC-R has been renamed as Eco-Niwas Samhita (ENS) and has been divided into two parts, viz., ENS Part-I: Building Envelop and ENS Part-II: Electro Mechanical Systems. The Code is applicable to all residential buildings and residential parts of “Mixed Land-use Projects”, both built on a plot area of ≥500 m2 . However, States and Municipal bodies may also reduce the size of the plot, as per prevalent practices in that particular State. ENS Part-I has been prepared to set minimum building envelop performance criteria to limit heat gains in Hot/Warm-Humid/Composite zones and to limit heat loss in Cold/Moderate zones, respectively. Further, adequate natural ventilation is the additional criteria for Hot/Warm-Humid/Composite zones and harnessing

Energy Efficiency in Buildings and Related Assessment …

39

Fig. 2 Different climatic zones of India (Source ECBC 2017)

Daylighting potential up to the maximum extent is another important criteria for all the zones for ENS Part-I. The Code allows flexible design innovations to vary envelop components (wall type, window size, glazing type, external shading placement, etc.). Residential Envelop Transmittance Value (RETV) demonstrates the thermal performance by the building envelop (except Roof) components (Wall, Glazing, Doors, Windows, Skylights, Ventilators), and limiting the value is aimed in the design by following the code stipulations. For Roof, the thermal transmittance value (Uvalue) is calculated separately, which should also satisfy the code stipulated value (depending on different layers of construction with different materials). The RETV

40

A. Ghosh and S. Neogi

for building envelop for Composite, Hot-Dry, Warm-Humid and Temperate climatic zones are restricted to 15 W/m2 . The aim of ENS Part-II is to minimize the emission intensity arising from the operation of residential buildings, thereby prescribing minimum performance requirement for different electromechanical systems which are required to be installed before occupation, without compromising the desired levels of lighting and thermal comfort by the users. Integration of Renewable energy in the building energy requirement is also an essential part of this code. To comply with ECBC 2017, the Proposed Design shall (a) (b)

Meet all the provisions of the code prescribed under different sections, or Achieve an Energy Performance Index Ratio (EPI Ratio) of less than or equal to 1, for each section, as applicable. The Energy Performance Index (EPI) Ratio of a building is the ratio of the EPI of the Proposed Design to the EPI of the Reference design

EPI Ratio =

EPI of Proposed Design EPI of Reference Design

where (a) (b)

Proposed Design is consistent with the actual design of the building and complies with all the mandatory provisions of the code prescribed. Reference Design is a standardized building that has the same building floor area, gross wall area and gross roof area as the Proposed Design, and complies with all the provisions of the code prescribed.

The EPI of the Proposed design and Reference Design shall be established through Whole Building Performance Method, as described in the code.

4 Green Building The term Green Building reflects a structure within a macro-climate with least disturbance to its natural order, and utilizing Sun, Wind and Rainfall up to its full potential for the intended functionality. The materials and methodology of construction of such green building should be environment friendly and resource-efficient. Green building uses less energy, water and natural resources, as compared to the conventional building. In India, two programs are in vogue for more sustainable pattern of development: LEED-India and Green Rating for Integrated Habitat Assessment (GRIHA). These programs create more efficient urban forms through better planning, design and engineering by using India’s limited resources more efficiently and improving user’s overall quality of life.

Energy Efficiency in Buildings and Related Assessment …

41

4.1 LEED-India LEED-India is a privately managed green rating system that seeks to encourage the construction of sustainable buildings in India. Indian Green Building Council (IGBC) had formulated LEED-India rating system as an offshoot of the United States Green Building Council (USGBC) LEED program. The aim of the IGBC is to facilitate India to be one of the global leaders in sustainable built environment by 2025. IGBC has been instrumental in creating 7.14 billion sq.ft Green Building footprint, and 5,77,102 acres of large development (source: IGBC website). The rating system is based on encouraging sustainable design and construction techniques in new buildings. Points are awarded to projects based on performance related to sustainable site development, water savings, energy efficiency, material selection and indoor environmental quality, respectively. LEED-India awards the certification for four different categories as Certified/Silver/Gold/Platinum. For residential development, IGBC offers the Green Homes Rating System that specifically focuses on energy and water savings. For larger projects, IGBC has created the Green Townships Rating System. “The ‘IGBC Green Townships Rating System’ is designed to address the issues of urban sprawl, automobile dependency, social and environmental disconnect, etc. and they are evaluated on the basis of environmental planning, land-use planning, resource management and community development”. IGBC has also developed a program for industrial buildings called the Green Factory Building Rating System. A brief about Green Rating points are summarized in Table 2.

4.2 Green Rating for Integrated Habitat Assessment (GRIHA) To restrict natural resource consumption, reduce greenhouse gas emissions and enhance the use of renewable and recycled resources by the building sector, The Energy Resources Institute (TERI) has made convergence of various initiatives, essential for effective implementation and mainstreaming of sustainable habitats in India. With over two decades of experience in green and energy efficient buildings, TERI has developed Green Rating for Integrated Habitat Assessment (GRIHA), which was adopted as the national rating system for green buildings by the Government of India in 2007. All new building construction projects with built-up area ≥ 2500 m2 (excluding parking, basement area and typical buildings) are eligible for certification under GRIHA v. 2019. In a bid to fight climate change threats through improved energy efficiency, all new buildings of government and PSUs will have to mandatorily comply with the requirement of at least “3 star-rating” under the Green Rating for Integrated Habitat Assessment (GRIHA) for efficiency compliance by The Energy and Resources Institute (teri). Till date 52502869 sq.mtr. of GRIHA

42 Table 2 IGBC rating system

A. Ghosh and S. Neogi IGBC green new building rating system: checklist

Points available Owner occupied buildings

Tenant occupied buildings

Modules

100

100

Module 1: Sustainable Architecture & Design

5

5

Module 2: Site Selection & Planning

14

14

Module 3: Water Conservation 18

19

Module 4: Energy Efficiency

28

30

Module 5: Building Materials and Resources

16

16

Module 6: Indoor Environmental Quality

12

9

Module 7: Innovation and Development

7

7

Threshold criteria for certification levels Certification level

Owner occupied buildings

Tenant occupied buildings

Recognition

Certified

40–49

40–49

Best practice

Silver

50–59

50–59

Outstanding performance

Gold

60–74

60–74

National excellence

Platinum

75–100

75–100

Global leadership

(Source IGBC 2016)

footprint is available in India. Climate resiliency have been addressed in the rating system adequately. It is based on ECBC 2017 and NBC 2016 code stipulations. Under altogether 11 Sections and 30 Criterions, the GRIHA Rating system consists of 105 points (maximum). A brief glance towards the Sections and Criterions vis-à-vis points carrying, is displayed in Table 3. The Sectional weightages are graphically presented in Fig. 3 for at-a-glance viewing. The limiting values for attaining GRIHA Star standard are depicted in Table 4. The GRIHA Rating process includes Registration, Orientation Workshop, Due Diligence I, Due Diligence II, Submission of Documents, Preliminary Evaluation, Final Due Diligence and Final Evaluation. Final Rating is awarded on the basis of

Energy Efficiency in Buildings and Related Assessment …

43

Table 3 GRIHA rating system Section 1. Sustainable Site Planning (SSP): Total Points 12

Criterion no.

Details of criterion

Maximum points

1

Green Infrastructure

5

2

Low Impact Design Strategies

5

3

Design to mitigate UHIE

2

4

Air and Soil Pollution Control

1

5

Top Soil Preservation

1

6

Construction Mgmt. Practices

2

7

Energy Optimization

12

8

Renewable Energy Utilization

5

9

Low ODP and GWP materials

1

10

Visual Comfort

4

11

Thermal and Acoustic Comfort

2

12

Maintaining Good IAQ

6

13

Water Demand Reduction

3

14

Waste Water Treatment

3

15

Rainwater Management

5

16

Water Quality and Self Sufficiency

5

17

Waste Management (Post Occupancy)

4

18

Organic Waste Treatment On-Site

2

19

Utilization of Alternative Materials in Buildings

5

20

Reduction in GWP through Life Cycle Assessment

5

21

Alt. Materials for External Site Dev.

2

8. Life Cycle Costing (LCC): Total Point 5

22

Life Cycle Cost Analysis

5

9. Socio-Economic Strategies (SES): Total Points 8

23

Safety and Sanitation for Construction Workers

1

24

Universal Accessibility

2

25

Dedicated facilities for Service Staff

2

2. Construction Management (CM): Total Points 4

3. Energy Optimization (EO): Total Points 18

4. Occupant Comfort (OC): Total Points 12

5. Water Management (WM): Total Points 16

6. Solid Waste Management (SWM): Total Points 6

7. Sustainable Building Materials (SBM): Total Points 12

(continued)

44

A. Ghosh and S. Neogi

Table 3 (continued) Section

Criterion no.

Details of criterion

26

Positive Social Impact

3

Commissioning for final Rating

7

Smart Metering and Monitoring

0

29

Operation and Maintenance Protocol

0

30

Innovation

10. Performance Metering and 27 Monitoring (PMM): Total Points 10 28

Total Points

Maximum points

100

11. Innovation*

5 100 + 5

Grand Total

Weightage (%)

Note Sect. 11 is Optional, Point value 0 indicates Mandatory; Source: GRIHA v 2019 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0

Weightage

t t t s n g ort ies ing men ing en en ial tio tin mf nn teg nitor ter e em os iza agem o a a a l g g C r C m a a M o ti n n n St le e P ant yc ic Op ing Ma Ma & M e Ma p Sit t om ring e C tion cu ter Build able f s rgy c n i a a e o L O c W te W in En Ec tru ble Me olid sta io ns na S oc ance Su Co tai S s rm Su rfo Pe Sections

Fig. 3 Sectional weightage for GRIHA rating

Energy Efficiency in Buildings and Related Assessment … Table 4 GRIHA star rating

45

Percentile limit

Star

25–40

*

41–55

**

56–70

***

71–85

****

86 and above

*****

final evaluation, which remains valid for the next 5 years. The Health Care, Hospitality, Institutional, Office, Residential, Retail and Transit Terminals are the applicable building typologies. Besides GRIHA certification, there are other variants as described hereunder.

4.2.1

Simple Versatile Affordable GRIHA (SVAGRIHA)

It is an extension of GRIHA with only 14 criteria and applicable for projects with built-up area 0 and 0 < p < 1 based on the above data. We assume that the first COVID-19 case is detected in the second generation to be consistent with model assumption that the infection process begins with a single infected undetected individual. We use the method of least squares to estimate  the value kof the unknown paramk−2 2 ) eters. In particular, we minimize L(λ, p) = 14 k=2 (x C,k − λ p(2 − p)(1 − p) where xC,k is the number of confirmed cases observed in generation k, with respect

56

A. K. Laha

Table 1 Growth in COVID-19 cases in India between Feb 1 and May 31, 2020 Date Generation no. Cumulative no. of No. of new cases in cases last 10 days Feb-01 Feb-11 Feb-21 Mar-02 Mar-12 Mar-22 Apr-01 Apr-11 Apr-21 May-01 May-11 May-21 May-31

2 3 4 5 6 7 8 9 10 11 12 13 14

1 3 3 5 82 402 2059 8453 20080 37257 70767 118225 190649

1 2 0 2 77 320 1657 6394 11627 17177 33510 47458 72424

Fig. 1 The actual and fitted values obtained using the basic model

to λ and p. Toward this, we compute ∂∂λL and ∂∂ Lp and put them both equal to zero. Letting rk = xC,k − λk p(2 − p)(1 − p)k−2 , αk = k(λ(1 − p))k−2 , and βk = + 2), we get the following system of non-linear equations: (λ(1 − p))k−3 (kp 2 − 2kp  14 14 r α = 0 and r k=2 k k k=2 k βk = 0. The GRG Nonlinear solver in MS-Excel is used with multiple starting solutions to obtain the least-squares estimates λˆ = 17.068 and pˆ = 0.907. The Root Mean Squared Error (RMSE) value is 1834.82. The actual and fitted values obtained from the model are shown in Fig. 1 below.

A Multi-type Branching Process Model for Epidemics …

57

Fig. 2 The actual and fitted values obtained using the extended model with change in parameters happening at k = 7

As discussed in Sect. 4, the basic model cannot take into account the possible change in the parameter values λ and p due to the national lockdown announced on March 25, 2020. We now use the extended model discussed in Sect. 4 where we can take into account the possible change in the parameter values before and after the national lockdown. Thus, we assume that up to generation k = 7, i.e., before the period when the national lockdown was announced the parameter values were λ1 and p1 whereas those from generation k = 8 onwards are λ2 and p2 . Since the aim of declaring the lockdown was to reduce the spread of the infection we impose the constraint λ2 ≤ λ1 . Further, since plans for more intense effort toward detecting the COVID-19 cases were also put in place we impose p1 ≤ p2 as another constraint. As before, the GRG nonlinear solver in MS-Excel is used with multiple starting solutions to obtain the least-squares estimates of the four parameters which are given below. The estimated parameters are λ1 = 3.167, λ2 = 2.946, p1 = 0.123, and p2 = 0.471. The RMSE for this model is 1251.49 which is much lesser than the basic model indicating a better fit. The same conclusion can be drawn by visually inspecting Fig. 2. An interesting feature of this model is that it can be used to understand the impact of an intervention such as the national lockdown. This can be obtained by calculating the difference in the cumulative number of cases that would have happened if the parameter values remained same all through (i.e., if λ2 = λ1 and p2 = p1 ) and when the parameters are allowed to differ. Using the above idea, we find that without the lockdown the expected cumulative number of confirmed cases would have been 567865 more.

58

A. K. Laha

6 Conclusion In this paper, we have proposed two MBP models for the propagation of an infectious disease epidemic that takes into account quarantine and contact tracing, the second model being an extension of the first. The extended model allows for determining the impact of interventions aimed at slowing down the spread of the disease. The models are then applied to the publicly available data on the COVID-19 epidemic in India. The extended model is seen to fit the available data better than the basic model. These models can be further extended to take into account immigration, emigration, and imperfectly implemented quarantine and contact tracing. We intend to study these in a future paper.

Acknowledgements The author would like to thank the anonymous reviewer for his/her comments that helped in improving this paper.

References Athreya KB, Ney PE (1972) Branching processes. Springer-Verlag Athreya KB, Vidyashankar AN (2001) Branching processes. In: Stochastic processes: theory and methods. Handbook of Statistics, vol 19. Elsevier, pp 35 – 53 Jacob C (2010) Branching processes: their role in epidemiology. Int J Environ Res Public Health 7(3):1186–1204 Lauer SA, Grantz KH, Bi Q, Jones FK, Zheng Q, Meredith HR, Azman AS, Reich NG, Lessler J (2020) The incubation period of coronavirus disease 2019 (covid-19) from publicly reported confirmed cases: estimation and application. Ann Intern Med 172(9):577–582 Pénisson S (2014) Estimation of the infection parameter of an epidemic modeled by a branching process. Electron J Stat 8(2):2158–2187 Yadav SK (2019) Branching processes. In: Laha A (eds) Advances in analytics and applications. Springer, pp 31–41

The Mirra Distribution for Modeling Time-to-Event Data Sets Subhradev Sen, Suman K. Ghosh, and Hazem Al-Mofleh

AMS subject classification: MSC 62E10, MSC 60K10, MSC 60N05

1 Introduction In statistical literature, exponential and gamma are well-established probability distributions, better to be termed as life distributions or standard lifetime distributions, for modeling time-to-event data sets arising from different areas, especially in reliability and/or survival analysis. Finite mixture of standard probability distributions is an effective tool for obtaining newer probability models that are sometimes better in the goodness of fit for data sets and on the aspects of distributional properties as compared to the baseline densities and other comparative models. In many real-life situations, finite mixture models are being utilized considerably for statistical analysis. A chronological survey on the applications of finite mixture of distributions can be seen in Sen (2018a) and the references therein. Recently, Sen et al. (2016) proposed and studied xgamma distribution, a life distribution with gaining popularity, as a special finite mixture of exponential distribution with mean 1/θ and gamma distribution with scale θ and shape 3. In last 3 years, several extensions of xgamma distributions can be found in the literature, such as, Sen and Chandra (2017a), Sen et al. (2017b, 2018b, c, 2019) to list a few. The S. Sen · S. K. Ghosh (B) Alliance School of Business, Alliance University, Bengaluru, India e-mail: [email protected] S. Sen e-mail: [email protected] H. Al-Mofleh Department of Mathematics, Tafila Technical University, Tafila, Jordan e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 B. K. Sinha and S. B. Bagchi (eds.), Strategic Management, Decision Theory, and Decision Science, https://doi.org/10.1007/978-981-16-1368-5_5

59

60

S. Sen et al.

xgamma distribution has properties analogous to the Lindley distribution (Lindley 1958) and has a very close mathematical form because of a similar process of synthesis. However, the xgamma random variables are stochastically larger than those of Lindley (see Sen et al. 2018b), both the distributions are the special finite mixtures of exponential and gamma distributions and both can effectively be utilized in analyzing positively skewed data sets. For a given data set the problem of selecting either Lindley or xgamma distribution has been studied by Sen et al. (2020). In this article, our attempt is to generate an extension of xgamma distribution by using a special kind of finite mixing on same component distributions as in the case of xgamma distribution, but with different mixing proportions. We propose and study the new probability distribution, named as the Mirra distribution with two unknown parameters, that includes xgamma distribution and few more one-parameter models as special cases. The rest of the article is organized as follows. Section 2 describes the synthesis of the proposed probability distribution. The different important distributional properties, such as moments, shape, and generating functions, are studied in Sect. 3. Section 4 along with the dedicated subsection deals with the properties related to survival and reliability. In Sect. 5, two classical methods, viz. method of moments and method of maximum likelihood have been proposed for the unknown parameters under complete sample situation. A simulation algorithm along with a comprehensive simulation study has been performed in Sect. 6 to understand the variations in estimates for different sample sizes and parametric considerations. Section 7 illustrates a real data analysis for showing the applicability of the proposed distribution and finally the Sect. 8 concludes the article.

2 Synthesis of the Distribution As mentioned in the previous section, for synthesizing the proposed probability distribution we consider special finite mixtures of two probability distributions, exponential with mean 1θ and gamma distribution with scale parameter θ and shape parameter θ2 α 3, with mixing proportions (α+θ 2 ) and (α+θ 2 ) , respectively, to obtain the form of a new probability distribution named as the Mirra distribution with two unknown parameters in the density function. The name “Mirra” is given as a tribute to the spiritual leader and mystic, Mirra Alfassa, popularly known as The Mother. We have the following definition for the two-parameter Mirra distribution. Definition 1 A continuous random variable, X , is said to follow the two-parameter Mirra (TPM) distribution with parameters α > 0 and θ > 0 if its pdf is of the form f (x) =

 α 2  −θx θ3 1 + x e for x > 0. (α + θ2 ) 2

We denote it by X ∼ T P M(α, θ).

(1)

The Mirra Distribution for Modeling Time-to-Event Data Sets

61

Fig. 1 Density plots of Mirra distribution for different values of α and θ

Special cases: 1. When we put α = 1 in (1), we get the one-parameter family of Mirra distributions, named as Mirra-silence (MS I) distribution, with pdf   1 2 −θx θ3 1+ x e f (x) = for x > 0, θ > 0. (1 + θ2 ) 2

(2)

We can denote it by X ∼ M S I (θ). 2. On the other hand, if we put θ = 1 in (1), we get another one-parameter family of Mirra distributions, named as Mirra-surrender (MS II) distribution, with pdf f (x) =

 α  1 1 + x 2 e−x for x > 0, α > 0. (1 + α) 2

(3)

It can be denoted by X ∼ M S I I (α) 3. On putting α = θ in (1), we get the xgamma distribution (Sen et al. 2016). The density plot of T P M(α, θ) for different values of α and θ is shown in Fig. 1. The cdf corresponding to (1) is given by

62

S. Sen et al.

F(x) = 1 −

 θ2 1 +

 + αθ x + α2 x 2 −θx e for x > 0. (α + θ2 ) α θ2

(4)

The characteristic function (CF) of X is derived as  φ X (t) = E eit X =

√  θ3 (θ − it)−1 + α(θ − it)−3 ; t ∈ R, i = −1. 2 (α + θ ) (5)

3 Distributional Properties This section is devoted to studying important distributional properties, such as moments, shape, and generating functions of T P M(α, θ).

3.1 Raw and Central Moments The r th -order raw moment of T P M(α, θ) is obtained by 

μr = E(X r ) =

α r! 1 + (1 + r )(2 + r ) for r = 1, 2, 3, . . . . (6) θr −2 (α + θ2 ) 2θ2

In particular, we have, 

μ1 = E(X ) =

(3α + θ2 ) 2(6α + θ2 )  = μ (say) ; μ2 = E(X 2 ) = 2 . 2 θ(α + θ ) θ (α + θ2 )

(7)

The variance of X ∼ T P M(α, θ) is thus obtained as μ2 = V (X ) =

2 1 α + 2 . θ2 θ(α + θ2 )

(8)

3.2 Mode The following theorem shows that T P M(α, θ) is unimodal. Theorem 1 For α < 2θ2 , the pdf of X ∼ T P M(α, θ) , as given in (1), is decreasing in x. Proof We have from (1) the first derivative of f (x) with respect to x as

The Mirra Distribution for Modeling Time-to-Event Data Sets

63

  αθ 2 −θx θ3 αx − θ − x e . f (x) = (α + θ2 ) 2 



f (x) is negative in x when θ2 > α/2, and hence the proof.

 

So, we have from the above Theorem 1, for θ2 ≤ α/2, ddx f (x) = 0 implies that    θ2 1 + 1 − 2 α /θ is the unique critical point at which f (x) is maximized. Hence, the mode of T P M(α, θ) is given by Mode(X ) =

⎧  ⎨ 1+ 1− 2θα2 ⎩0 ,

θ

, if 0 < θ2 ≤ α/2. otherwise.

(9)

3.3 Generating Functions The moment generating function (MGF) of X is derived as  M X (t) = E et X =

 θ3 (θ − t)−1 + α(θ − t)−3 ; t ∈ R. 2 (α + θ )

(10)

The cumulant generating function (CGF) of X is obtained as  K X (t) = ln M X (t) = 3 ln θ + ln (θ − t)−1 + α(θ − t)−3 − ln(α + θ2 ); t ∈ R. (11)

4 Survival Properties The survival function of T P M(α, θ) is obtained as S(x) =

 θ2 1 +

 + αθ x + α2 x 2 −θx e for x > 0, (α + θ2 ) α θ2

(12)

so that the hazard rate (failure rate) function is derived as   θ 1 + α2 x 2 f (x)  for x > 0, = h(x) = S(x) 1 + θα2 + αθ x + α2 x 2

(13)

It is to be noted that, the hazard rate function in (13) is bounded by the following bounds:

64

S. Sen et al.

Fig. 2 Hazard rate function plots of Mirra distribution for different values of α and θ

2θ3

< h(x) < θ. √ (α + 2θ2 + θ 2α) Plots of hazard rate function for different values of α and θ is shown in Fig. 2. The following theorem shows that the hazard rate function of T P M(α, θ) is increasing and decreasing depending on a certain range of x. Theorem 2 The failurerate, h(x), as given in (13) is increasing failure rate (IFR) in distribution for x >  x < α2 .

2 α

and is decreasing failure rate (DFR) in distribution for

Proof From the pdf given in (1), we have  ln f (x) = ln

   α 2  −θx θ3 α  e = 3 ln θ − ln(α + θ2 ) + ln 1 + x 2 − θx. 1 + x 2 (α + θ ) 2 2

Differentiating twice with respect to x, we have α − α 2x d2 ln f (x) =  2 , 2 dx 1 + α2 x 2 2 2

The Mirra Distribution for Modeling Time-to-Event Data Sets

 and is positive if x < α2 . Therefore, the pdf of   T P M(α, θ) is log-concave for x > α2 and log-convex for x < α2 . Hence the proof.  

which is negative if x >



65

2 α

The mean residual life (MRL) function of X is derived as m(x) = E(X − x|X > x) =

1 S(x)





S(u)du =

x

1 α(2 + θx) . +  θ θ α + θ2 + αθx + α2 θ2 x 2

(14)

  It should be noted that m(x) ↑ x for all 0 < x < α2 as h(x) ↓ x for all 0 < x < α2   and m(x) ↓ x for all x > α2 as h(x) ↑ x for all x > α2 . The lower bound for MRL is obtained as lim m(x) =

x→∞

1 < m(x) ∀x > 0. θ

Plots of MRL function for different values of α and θ is shown in Fig. 3.

4.1 Stress-Strength Reliability Let X ∼ T P M(α1 , θ1 ) be the stress applied and Y ∼ T P M(α2 , θ2 ) be the strength of the system to sustain the stress. Then the stress-strength reliability, denoted as R = Pr(Y > X ). When the random variables X and Y are independent, the stress–strength reliability R is obtained as  α2 + θ22 α2 θ13 θ22    + R=  2 2 2 α1 + θ1 α2 + θ2 θ2 (θ1 + θ2 ) θ2 (θ1 + θ2 )2    1 α1 α2 3α1 α2 6α1 α2 (15) + α1 + α2 + 2 + + θ2 θ2 (θ1 + θ2 )4 (θ1 + θ2 )3 (θ1 + θ2 )5 Special cases: 1. When X and Y are independently and identically distributed (i.i.d) T P M(α, θ), then   θ5 2α + θ2 1 α2 R= = . + 2 3 5 2θ 2θ 2 α + θ2 2. When X ∼ T P M(α, θ1 ), Y ∼ T P M(α, θ2 ) are independent, then

66

S. Sen et al.

Fig. 3 MRL function plots for different values of α and θ

 α + θ22 θ13 θ22 α    R=  + 2 2 2 θ + θ (θ ) α + θ1 α + θ2 θ2 (θ1 + θ2 )2 1 2 2    1 α2 3α2 6α2 . + 2α + 2 + + θ2 (θ1 + θ2 )3 θ2 (θ1 + θ2 )4 (θ1 + θ2 )5 3. When X ∼ T P M(α1 , θ), Y ∼ T P M(α2 , θ) are independent, then R=

 4 4θ + (α1 + 7α2 ) θ2 + 4α1 α2    . 8 α1 + θ2 α2 + θ2

5 Estimation of Parameters In this section, we propose moment estimators and maximum likelihood estimators for two unknown parameters α and θ when X ∼ T P M(α, θ) for complete sample situation. Let X 1 , X 2 , ..., X n be a random sample of size n drawn from T P M(α, θ).

The Mirra Distribution for Modeling Time-to-Event Data Sets

67

5.1 Method of Moments If X¯ denotes the sample mean, then by applying the method of moments, we have n

3α + θ2  and X¯ =  θ α + θ2

i=1

X i2

n

  2 6α + θ2 . = 2 θ α + θ2

Now, let us consider the ratio of sample moments as k=

1 n

n i=1 X¯ 2

X i2

.

By substituting θ2 = cα, c > 0, we obtain a quadratic equation in c as (k − 2)c2 + (6k − 14)c + (9k − 12) = 0.

(16)

If we denote αˆ M and θˆM as the method of moment estimator for α and θ, respectively, then we have (3 + c)2 . (17) αˆ M = c(1 + c)2 X¯ 2 θˆM =



cαˆ M ,

(18)

where c can be obtained by solving the quadratic equation (16).

5.2 Method of Maximum Likelihood Let x˜ = (x1 , x2 , . . . , xn ) be sample observation on X 1 , X 2 , . . . , X n . The likelihood function of α and θ given x˜ is then written as L(α, θ|x) ˜ =

n  i=1

 α 2  −θxi θ3 1 + x e . (α + θ2 ) 2 i

The log-likelihood function is given by  log L(α, θ|x) ˜ = n log

θ3 α + θ2

 +

n   α  log 1 + xi2 − θ xi . 2 i=1 i=1

n 

(19)

Differentiating equation (19) partially with respect to α and θ, respectively, and equating with zero, we have the loglikelihood equations as

68

S. Sen et al. n  i=1

and

xi2 n − =0 2 α + θ2 2 + αxi

 2nθ 3n − − xi . θ α + θ2 i=1

(20)

n

(21)

The Eqs. (20) and (21) cannot be solved analytically, however, for finding the maximum likelihood estimators for α and θ, we apply numerical method such as Newton– Raphson. For simplicity, from Eq. (21) for fixed θ, we can obtain  α(θ) as   α(θ) = θ2

 θ x¯ − 1 . 3 − θ x¯

(22)

The MLE of θ is denoted by  θ. This estimate can be obtained numerically by solving the following non-linear equation n  i=1

xi2 n − = 0. 2  α(θ) + θ2 2+ α(θ)xi

(23)

After the numerically iterative techniques are used to compute  θ from (23), the MLE of α,  α(θ), can be computed from (22) as  α( θ).

6 A Simulation Study To generate a random sample of size n from T P M(α, θ), we have the following simulation algorithm. (i) (ii) (iii) (iv)

Generate Ui ∼ uni f or m(0, 1), i = 1, 2, . . . , n. Generate Vi ∼ ex ponential(θ), i = 1, 2, . . . , n. Generate Wi ∼ gamma(3, θ), i = 1, 2, . . . , n. θ2 If Ui ≤ α+θ 2 , then set X i = Vi , otherwise, set X i = Wi .

A simulation study is performed to understand the trend of the estimates. The following procedures are adopted to generate N = 20, 000 pseudo-random samples from TPM distribution. • Generate pseudo-random values from the TPM distribution with size n using the above-mentioned simulation algorithm. ˆ • Using the obtained samples in step 1, calculate MLEs αˆ and θ. • Repeat the steps 1 and 2, N times.

The Mirra Distribution for Modeling Time-to-Event Data Sets Table 1 Simulation results for φ = (α = 0.5, θ = 1.5) Sample size Parameters |B I AS| 30

αˆ θˆ

50

αˆ θˆ

80

αˆ θˆ

120

αˆ θˆ

180

αˆ θˆ

69

RMSE

0.36121

0.51329

0.04337 0.28011

0.22242 0.37487

0.01081 0.21628

0.11229 0.28348

0.00149 0.17411

0.04207 0.22388

0.00006 0.14320

0.00837 0.18240

0.00000

0.00000

In connection to maximum likelihood estiamtes (MLEs), we calculate absolute bias and root mean squared error as measures of estimation accuracy. These measures are obtained by using the following formulas: Average of absolute biases (|Bias|),  = |Bias(φ)|

N 1   |φ − φ|, N i=1

and the root mean squared error (RMSE),   N 1   − φ)2 , where φ = (α, θ). (φ RMSE =  N i=1 We generate N = 20,000 samples of the TPM distribution, where n = {30, 50, 80, 120, 180}, and by choosing the all parameter combinations from α = (0.5, 0.7, 1.0, 2.5) and θ = (1.5, 2.4, 3.5, 5.0). Out of the 16 different possible parameter combinations,  and R M S E. we have judicially selected four combinations to evaluate: |Bias(φ)| The simulation study is performed using R software (version 3.6.1). In Tables 1, 2, 3, and 4, we report the values of |Bias| and R M S E of the corresponding values of MLEs. It is clear from the Tables 1 , 2, 3, and 4 that the biases and R M S E values gradually decrease with increasing sample sizes and thus the estimates behave in a standard manner for different values of α and θ.

70

S. Sen et al.

Table 2 Simulation results for φ = (α = 0.5, θ = 5.0) Sample size Parameters |B I AS| 30

αˆ θˆ

50

αˆ θˆ

80

αˆ θˆ

120

αˆ θˆ

180

αˆ θˆ

1.41670

2.28396

1.92582 1.13506

2.94612 1.71898

1.76402 0.92679

2.8038 1.32150

1.63094 0.76745

2.71280 1.04285

1.49956 0.65570

2.60236 0.85856

1.28807

2.41322

Table 3 Simulation results for φ = (α = 2.5, θ = 1.5) Sample size Parameters |B I AS| 30

αˆ θˆ

50

αˆ θˆ

80

αˆ θˆ

120

αˆ θˆ

180

αˆ θˆ

αˆ θˆ

50

αˆ θˆ

80

αˆ θˆ

120

αˆ θˆ

180

αˆ θˆ

RMSE

2.38693

4.48172

0.00285 1.27315

0.05339 3.51559

0.00020 0.89048

0.01414 2.37867

0.00000 0.68035

0.00000 0.93864

0.00000 0.54324

0.00000 0.71994

0.00000

0.00000

Table 4 Simulation results for φ = (α = 2.5, θ = 5.0) Sample size Parameters |B I AS| 30

RMSE

RMSE

2.71154

3.71386

0.28333 2.11456

0.99511 2.77581

0.12588 1.68853

0.66420 2.17652

0.03970 1.36739

0.37233 1.74655

0.01026 1.12341

0.18520 1.42540

0.00144

0.05941

The Mirra Distribution for Modeling Time-to-Event Data Sets

71

7 Application with Real Data Illustration This section deals with the applicability of the proposed distribution. As an illustration, we consider a data set on the lifetimes of a device reported in Aarset (1987). The following data represent 50 lifetimes of a device: 0.1, 0.2, 1, 1, 1, 1, 1, 2, 3, 6, 7, 11, 12, 18, 18,18, 18, 18, 21, 32, 36, 40, 45, 46, 47, 50, 55, 60, 63, 63,67,67, 67, 67, 72, 75, 79, 82, 82, 83, 84, 84, 84, 85, 85,85, 85, 85, 86, 86. TPM distribution is compared with the following life distributions with two parameters: Gamma (Ga) distribution with shape α and rate θ; Weibull (W) distribution with shape α and scale λ; Exponetiated exponential (EEX) dsitribution with parameters α and λ (Gupta and Kundu 2001); Lognormal (LN) distribution with parameters μ and σ; xgamma (XG) distribution with parameter α (Sen et al. 2016) and xgamma geometric (XGGc) distribution with parameters α and p (Sen et al. 2019). As model comparison criteria, we have considered two times of negative loglikelihood (−2logL) value, Akaike information criteria (AIC), and Bayesian information criteria (BIC). Related to a particular data modeling, the smaller the value of AIC or BIC, the better the corresponding model. Kolmogorov–Smirnov (K-S) statistic along with corresponding p-values are also calculated for each distribution on the considered data set. The following Table 5 provides maximum likelihood (ML) estimates (standard errors in parenthesis) of the parameters for different distributions fitted with the above data set and corresponding −2logL, AIC, BIC, K-S statistic value, and P-value. We can also compare the proposed distribution with its sub-model using likelihood ratio (LR) statistics, the results can be shown in Table 6. Table 6 shows that TPM model outperforms its sub-model based on the LR test at the 5% significance level. Hence, we reject the H0 hypothesis in favor of the TPM model.

8 Concluding Remarks In this article, a new two-parameter probability distribution is proposed and studied. The distribution, named as two-parameter Mirra distribution, is synthesized as a generalization of the xgamma distribution. The flexibility of the distribution is established by studying its important distributional and survival properties. With such simplicity, the estimates for unknown parameters are been obtained by two classical methods of estimation, namely, method of moments and method of maximum likelihood. A simulation study confirms the good behavior of the estimates for increasing sample sizes and applicability of the proposed model is accomplished with a reallife data illustration. As a future investigation, the estimation aspects of the proposed distribution under different censoring mechanisms and under the Bayesian approach

72

S. Sen et al.

Table 5 ML estimates and model selection criteria for lifetimes of device data Distributions Estimate(Std. Error) −2logL AIC BIC K-S Statistic Ga(α, θ)

W(α, λ)

LN(μ, σ)

EEX(α, λ)

XG(α) XGGc(α, p)

TPM(α, θ)

α=0.79902 ˆ (0.13751) ˆ θ=0.01748 (0.00407) α=0.94915 ˆ (0.11957) ˆ λ=44.9193 (6.94585) μ=3.07898 ˆ (0.24722) σ=1.74811 ˆ (0.17481) α=0.77984 ˆ (0.13499) ˆ λ=0.01870 (0.00361) α=0.05977 ˆ (0.00510) α=0.04948 ˆ (0.01428) p=0.48087 ˆ (0.43914) α=0.00322 ˆ (0.00132) ˆ θ=0.04760 (0.00491)

Table 6 LR statistics for device data Model Hypothesis TPM vs XG

P-value

480.380

484.380

488.205

0.20223

0.03349

482.003

486.003

489.828

0.19280

0.04860

505.646

509.6459

513.47

0.22143

0.08832

479.990

483.990

487.814

0.20417

0.03095

499.936

501.936

503.848

0.23341

0.00861

499.168

503.168

506.992

0.20619

0.02848

473.468

477.468

481.292

0.17069

0.10854

Test statistic

H0 : α = 26.468 θ & H1 : H0 false

95% CI

P-value

(0.05877, 0.06077)

2.679 ×10−7

can be thought of. We believe that the Mirra distribution can be used successfully in modeling time-to-event data in the diverse area of applications and as a generalization of xgamma model.

References Aarset MV (1987) How to identify a bathtub hazard rate. IEEE Trans Reliab 36(1):106–108 Gupta RD, Kundu D (2001) Exponentiated exponential family: an alternative to gamma and Weibull distributions. Biometr J Math Methods in Biosci 43(1):117–130

The Mirra Distribution for Modeling Time-to-Event Data Sets

73

Lindley DV (1958) Fiducial distributions and Bayes’ theorem. J R Stat Soc Series B (Method):102– 107 Sen S, Maiti SS, Chandra N (2016) The xgamma Distribution: statistical properties and application. J Mod Appl Stat Method 15(1):774–788 Sen S, Chandra N (2017a) The quasi xgamma distribution with application in bladder cancer data. J Data Sci 15(1):61–76 Sen S, Chandra N, Maiti SS (2017b) The weighted xgamma distribution: properties and application. J Reliab Stat Stud 10(1):43–58 Sen S (2018a) Some new life distributions: survival properties and applications, chapter 1. Doctoral Dissertation, Department of Statistics, Pondicherry University, India, 25–28 Sen S, Chandra N, Maiti SS (2018b) Survival estimation in xgamma distribution under progressively type-II right censored scheme. Model Assist Stat Appl 13(2):107–121 Sen S, Korkmaz MC, Yousof HM (2018c) The quasi xgamma-Poisson distribution: properties and application. ˙Istat˙Ist˙Ik-J Turkish Stat Assoc 11(3):65–76 Sen S, Afify Ahmed Z, Mofleh Hazem-Al, Ahsanullah M (2019) The quasi xgamma-geometric distribution with application in medicine. FILOMAT (University of Nis) 33(16):5291–5330 Sen S, Mofleh Hazem-Al, Maiti SS (2020) On discrimination between the Lindley and xgamma distributions. Ann Data Sci (online first) https://doi.org/10.1007/s40745-020-00243-7

Comparison of Local Powers of Some Exact Tests for a Common Normal Mean with Unequal Variances Yehenew G. Kifle and Bimal K. Sinha

1 Introduction The inferential problem of drawing inference about a common mean μ of several independent normal populations with unequal variances has drawn universal attention, and there are many exact tests for testing a null hypothesis H0 : μ = μ0 against both-sided alternatives H1 : μ = μ0 . In this paper, we provide a review of their local power and a comparison. A well-known context of this problem occurred when Meier (1953) was approached to draw inference about the mean of albumin in plasma protein in human subjects based on results from four experiments, reproduced below (Table 1). Another scenario happened when Eberhardt et al. (1989) had results from four experiments about nonfat milk powder and the problem was to draw inference about the common mean Selenium in nonfat milk powder by combining the results from four methods (Table 2). A similar situation arises in the context of environmental data analysis when upon identifying a hot spot in a contaminated area, samples are drawn and sent to several labs and then the resulting data are combined for eventual analysis. This is especially important for subsequent adoption of remedial actions in case the mean contamination level at the site is found to exceed a certain threshold. A general formulation of the problem can be stated as follows. There are k normal populations with a common mean μ and different variances σ12 , . . . , σk2 . Based on a sample of size n i from the ith population, we want to test H0 : μ = μ0 versus

Y. G. Kifle · B. K. Sinha (B) Department of Mathematics and Statistics, University of Maryland Baltimore County, Maryland, USA e-mail: [email protected] Y. G. Kifle e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 B. K. Sinha and S. B. Bagchi (eds.), Strategic Management, Decision Theory, and Decision Science, https://doi.org/10.1007/978-981-16-1368-5_6

75

76

Y. G. Kifle and B. K. Sinha

Table 1 Percentage of albumin in plasma protein of four different experiments Experiment ni Mean Variance A B C D

12 15 7 16

62.3 60.3 59.5 61.5

12.986 7.84 33.433 18.51

Table 2 Selenium content in nonfat milk powder using four methods Methods ni Mean Atomic absorption Spectrometry Neutron activation: Instrumental Neutron activation: Radiochemical Isotope dilution mass spectrometry

8

Variance

105

85.711

12

109.75

20.748

14

109.5

2.729

8

113.25

33.64

√ n ( X¯ −μ)

H1 : μ = μ0 . Obviously, there exist k independent t-tests based on ti = i Sii that follows central t distribution with νi = (n i − 1) degrees of freedom. The natural meta-analysis question now is how to combine the results from the k independent t-tests? As one can expect, there are many ways of accomplishing this task based on some exact and some asymptotic procedures. Let us first briefly review those asymptotic procedures for testing hypothesis about common mean μ. In the trivial case when the k population variances are completely known, the common mean μ can easily be estimated using the maximum likelihood estimator  k ni −1  k ni n ˆ = . This estimator μˆ is X¯ ][ kj=1 σ 2j ]−1 with V ar (μ) μˆ = [ i=1 i=1 σi2 σi2 i j the minimum variance unbiased estimator under normality as well as the best linear unbiased estimator without normality for estimating μ. A simple test based on standard normal z is obvious in this case. However, in most cases, these k population variances are unknown. In this context, a familiar estimate, known as the Graybill–Deal estimate can be used Graybill and Deal (1959). This unbiased estimator μˆ G D together with its variance is given as k

ni ¯ i=1 Si2 X i nj j=1 S 2j

μˆ G D = k

with Var(μˆ G D ) = E

 k i=1

n i σi2 Si4

  

k ni 2 S2 i=1 i

Khatri and Shah (1974) proposed exact variance expression for μˆ G D , which is complicated and cannot be easily implemented. Although the exact distribution of μˆ G D is known Nair (1980), so far there is no exact test of H0 based on the Graybill–

Comparison of Local Powers of Some Exact Tests …

77

Deal estimate of μ. To address this inferential problem, Meier (1953) derived a first order approximation of the variance of μˆ G D as Var(μˆ G D ) =

 k i=1

ni σi2

−1  k  1+2 i=1



 k 1 1 ci (1 − ci ) + O ; ni − 1 (n i − 1)2 i=1

n i /σi2

where ci = k

j=1

n j /σ j2

In line with this Sinha (1985) derived an unbiased estimator of the variance of μˆ G D that is a convergent series. A first order approximation of this estimator is (1) (μˆ G D ) =  Var k

1

ni i=1 Si2

1+

k  i=1

4 ni + 1

n i /Si2

k

j=1

n j /S 2j

n i2 /Si4



− k ( j=1 (n j /S 2j )2

The above estimator is comparable to Meier’s (1953) approximate estimator (2) (μˆ G D ) =  Var k

1

ni i=1 Si2

1+

k  i=1

4 ni − 1

n i /Si2

k

j=1

n j /S 2j

n i2 /Si4



− k ( j=1 (n j /S 2j )2

(3) (μˆ G D ) and approximate The “classical” meta-analysis variance estimator, Var (4) (μˆ G D ) are the two other varivariance estimator proposed by Hartung (1999) Var ance estimators of μˆ G D which are given us (3) (μˆ G D ) =  Var k

1

ni i=1 Si2

(4) (μˆ G D ) = Var

and

 k  n i /Si2 1  ( X¯ i − μˆ G D )2 k 2 k − 1 i=1 j=1 n j /S j

As mentioned earlier, the central focus of this paper is to critically examine some exact tests for the common mean. A power comparison of these available tests is then a natural desire. In this paper, we compare six exact tests based on their local powers. The organization of the paper is as follows. In Sect. 2, we provide a brief description of the six exact tests with their references. The pdf of non-central t which plays a pivotal role in studying power of t tests is given along with its local expansion (in terms of its non-centrality parameter). Section 3, a core section of the paper, provides expressions of local powers of all the proposed tests. We omit the proofs and refer to a technical report for details. Section 4 contains some numerical (power) comparisons in the case of equal sample sizes. We conclude this paper with some remarks in Sect. 5.

78

Y. G. Kifle and B. K. Sinha

2 Review of Six Exact Tests for H0 Versus H1 Consider k independent normal populations where the ith population follows a normal distribution with mean μ ∈ R and variance σi2 > 0. Let X¯ i denote the sample mean, Si2 the sample variance, and n i the sample size of the i th population. Then, 2 (n −1)S 2 we have X¯ i ∼ N (μ, σ ) and i 2 i ∼ χν2 , where νi = (n i − 1) and i = 1, . . . , k. ni

σi

i

Note that, these two statistics, X¯ i and Si2 , are all mutually independent. √ A generic notation for a t statistic based on a sample of size n is tobs = n(x¯ − μ0 )/s. We can refer to this t computed from a given data set as the observed value of our test statistic, and reject H0 when |tobs | > tν;α/2 , where ν is the degrees of freedom and α is Type I error level. A test for H0 based on a P-value on the other hand is based on Pobs = P[|tν | > |tobs |], and we reject H0 at level α if Pobs < α. Here tν stands for the central t variable with ν degrees of freedom and tν;α/2 stands for the upper α/2 critical value of tν . It is easy to check that the two approaches are obviously equivalent. A random P-value which has a Uniform(0,1) distribution under H0 is defined √ as Pran = P[|tν | > |tran |], where tran = n( X¯ − μ0 )/S. All suggested tests for H0 are based on Pobs and tobs values and their properties, including size and power, are studied under Pran and tran . To simplify notations, we will denote Pobs by small p and Pran by large P. Six exact tests based on tobs and p values from k independent studies as available in the literature are listed below.

2.1 P-value Based Exact Tests 2.1.1

Tippett’s Test

This minimum P-value test was proposed by Tippett (1931), who noted that, if P1 , . . . , Pk are independent p-values from continuous test statistics, then each has a uniform distribution under H0 . Suppose that P(1) , . . . , P(k) are ordered p-values for testing individual k hypotheses H(0i) , i = 1, . . . , k. According to this method, the common H0 : μ = μ0 is rejected at α level of significance   mean null 1hypothesis if P(1) < 1 − (1 − α) k . Incidentally, this test is equivalent to the test based on Mt = max1≤i≤k |ti | suggested by Cohen and Sackrowitz (1984).

2.1.2

Wilkinson’s Test

This test statistic proposed by Wilkinson (1951) is a generalization of Tippett’s test that uses not just the smallest but the rth smallest p-value (P(r ) ) as a test statistic, where P(1) , P(2) , . . . , P(k) are the ordered p-values (order statistics). The common mean null hypothesis will be rejected if P(r ) < dr,α , where P(r ) follows

Comparison of Local Powers of Some Exact Tests …

79

a Beta distribution with parameters r and (k − r + 1) under H0 and dr,α satisfies Pr{P(r ) < dr,α |H0 } = α. In Sect. 4, we have indicated an optimum choice of r for specified values of n, k and α = 0.05.

2.1.3

Inverse Normal Test

This exact procedure which involves transforming each p-value to the corresponding normal score was proposed independently by Stouffer et al. (1949) and Lipták (1958). Using this inverse normal method, hypotheses about the common μ will be rejected √ −1  k −1 k < −z α , where Φ −1 denotes the at α level of significance if i=1 Φ (Pi ) inverse of the cdf of the standard normal distribution and z α stands for the upper α level cutoff point of a standard normal distribution.

2.1.4

Fisher’s Inverse χ 2 -test

This inverse χ 2 -test is one of the most widely used exact test procedure of combining k independent p-values proposed by Fisher (1932). This procedure uses the  k i=1 Pi to combine the k independent p-values. Then, using the connection between 2 the hypotheses about the common μ will be rejected uni f or mk and χ distributions, 2 2 , where χ2k,α denotes the upper α critical value of a χ 2 if −2 i=1 ln(Pi ) > χ2k,α distribution with 2k degrees of freedom.

2.2 Exact Test Based on a Modified t Fairweather (1972) considered a test based on a weighted linear combination of the ti ’s. In this paper, we considered a variation of this test based on a weighted linear combination of |t i | as we are testing a non-directional alternative. Our test  statisk w1i |ti |, where w1i ∝ [Var(|ti |)]−1 with Var(|ti |) = [νi (νi − tic T1 is given as i=1 2   √ √ 2)−1 ] − [Γ ( νi 2−1 ) νi ][Γ ( ν2i ) π ]−1 . The null hypothesis will be rejected if T1 > d1α , where Pr {T1 > d1α |H0 } = α. In applications d1α is computed by simulation.

2.3 Exact Test Based on a Modified F Jordan and Krishnamoorthy (1996) considered a weighted linear combination of the k w2i Fi , where Fi ∼ F(1, νi ), F-test statistics Fi , namely T2 , which is given as i=1 and w2i ∝ [Var(Fi )]−1 with Var(Fi ) = [2νi2 (νi − 1)][(νi − 2)2 (νi − 4)]−1 . The null

80

Y. G. Kifle and B. K. Sinha

hypothesis will be rejected if T2 > d2α , where Pr{T2 > d2α |H0 } = α. In applications d2α is computed by simulation. We mention in passing that Philip et al. (1999) studied some properties of the confidence interval for the common mean μ based on Fisher’s test and inverse normal test. The pdfs of t statistic under the null and alternative hypothesis which will be √ required under the sequel are given below. δ = n(μ1 − μ0 )/σ stands for the noncentrality parameter when μ1 is chosen as an alternative value. Later, we will denote (μ1 − μ0 ) by Δ.  ν+1  Γ ( ν+1 ) t2 − 2 2 1 + f ν (t) = √ ν νπ Γ ( ν2 )  −νδ2  ν

  ∞ ν 2 exp 2(t 2 +ν) δt 2 1 ν dy f ν;δ (t) = √ y exp − y − √  ν+1 ν−1  2 t2 + ν π Γ ( ν2 )2 2 t 2 + ν 2 0 First and second derivatives of f ν;δ (t) evaluated at Δ = 0 which will play a pivotal role in the study of local powers of the proposed tests appear below. ∂ f ν;δ (t)  t =√ 2   ν+2 δ=0 t ∂δ 2π ν + 1 2

 2 Γ ( ν+1 ) ∂ 2 f ν;δ (t)  t −1 2 = √  δ=0 ∂δ 2 Γ ( ν2 ) νπ ( t 2 + 1) ν+3 2 ν

3 Expressions of Local Powers of the Six Proposed Tests In this section, we provide the expressions of local powers of the suggested tests. A common premise is that we derive an expression of the power of a test under Δ = 0, and carry out its Taylor expansion around Δ = 0. It turns out that due to both-sided nature of our tests, the first term vanishes, and we retain terms of order O(Δ2 ). Local power of a test stands for its Taylor series expansion of power with respect to Δ around 0 and the coefficient of Δ2 in the expansion. It turns out that the coefficient of Δ is 0 for all the proposed tests. To simplify the notations, we consider of equal sample size n, degrees of k the case [σi2 ]−1 . The final expressions of the local freedom ν = (n − 1), and write Ψ = i=1 powers of the proposed tests are given below without proof. For detailed proofs of these exact tests, we refer to the Technical Report Kifle et al. (2020).

Comparison of Local Powers of Some Exact Tests …

81

3.1 Local Power of Tippett’s Test [L P(T )]  L P(T ) ≈ α + where ξνT (aα ) = ξνT (aα ) < 0.

 tν ( a2α ) −tν ( a2α

nΔ2 Ψ 2



∂ 2 f ν;δ (t)  dt; ) ∂δ 2 δ=0

 (1 − α)

k−1 k

|ξνT (aα )|

(1)

1

aα = [1 − (1 − α) k ]. It turns out that

3.2 Local Power of Wilkinson’s Test [L P(Wr )]  L P(Wr ) ≈ α +

nΔ2 Ψ 2

  k−1 −1 |ξνW (dr ;α )|drr;α (1 − dr ;α )k−r r −1

(2)

where ξνW (dr ;α ) is equivalent to ξνT (aα ) with aα = dr ;α . It turns out that ξνW (dr ;α ) < 0. Remark: For the special case r = 1, L P(Wr ) = L P(T ), as expected, 1 k−1 because d1;α = [1 − (1 − α) k ], implying (1 − d1;α )k−1 = (1 − α) k

3.3 Local Power of Inverse Normal Test [L P(I N N)]



 ν nΔ2 z α [Bν − Cν ] (3) Ψ √ φ(z α ) L P(I N N ) ≈ α + − Aν √ 2 k 2 k ∞ ∞ Aν = −∞ uφ(u)Q ν (u)du; Bν = −∞ u 2 φ(u)Q ν (u)du;  2  ∞ −1 ; φ(u) is standard normal Cν = −∞ φ(u)Q ν (u)du; Q ν (u) = xx 2 +ν c 

pdf and Φ(u) is standard normal cdf.

x=tν ( 2 ),c=Φ(u)

3.4 Local Power of Fisher’s Test [L P(F)]  L P(F) ≈ α +



 ν Dν  nΔ2 2 Ψ E{(ln (T /2))I{T ≥χ2k;α } − α D 0 } 2 2

(4)

82

Y. G. Kifle and B. K. Sinha

    D0 = E log (q) ; Dν = E  2U ψν (U ) ; U ∼exp[2]; q ∼gamma[1, k]; −1 T ∼gamma[2, k]; ψν (u) = xx 2 +ν c u x=tν ( 2 ), c=exp (− 2 )

3.5 Local Power of a Modified t Test [L P(T1 )]  L P(T1 ) ≈ α +

 2



 (t1 − 1)ν nΔ2 k Ψ E H0 I { i=1 |ti |>d1α } 2 t12 + ν

(5)

3.6 Local Power of a Modified F Test [L P(T2 )] 



 (F1 − 1)ν nΔ2  k Ψ E H0 I{ i=1 L P(T2 ) ≈ α + Fi >d2α } 2 F1 + ν 

(6)

4 Comparison of Local Powers It is interesting to observe from the above expressions that in the special case of equal sample size, local powers can be readily compared, irrespective of the values of the unknown variances (involved through Ψ , which is a common factor in all the expressions of local power). Table 3 represents values of the second  2 term of local power given above in Eqs. 1 to 6, apart from the common term nΔ2 Ψ for different values of k, n and choices of r (≤ k) with maximum local power. Similarly, local powers of the second term of Wilkinson’s test for different values of n, k, and r (≤ k) is presented in Table 4. All throughout we used α = 5%.

Comparison of Local Powers of Some Exact Tests …

83

0.20

Exact Tests

0.10

alpha=0.05

0.00

0.05

Local Power

0.15

Modified F Modified t Inverse Normal Wilkinson (r=2) Tippett Fisher

−0.4

−0.2

0.0

0.2

0.4

Delta

Fig. 1 Comparison of local powers of six exact tests for n = 15, k = 5, and Ψ = 1

Here are some interesting observations: comparing Tippett’s and Wilkinson’s tests, we note that Wilkinson’s test for some r > 1 always outperforms Tippett’s test, and the √optimal choice of r seems to increase with k (Table 4 and Fig. 2), it is just above k. Among the other tests, Fig. 1 with Ψ = 1 reveals that both modified F and modified t test fares the best uniformly in the design parameters n and k. Inverse normal based exact test also performs reasonably well for all values of k and n (Table 3). Another advantage of this test is that cutoff point can be readily obtained without any simulation.

Table 3 Comparison of the 2nd term of local powers [without nΔ2 Ψ/2] of six exact tests for different values of k and n (equal sample size) Exact Test k=5 k = 10 k = 15 n = 15 n = 25 n = 40 n = 15 n = 25 n = 40 n = 15 n = 25 n = 40 Tippett Wilkinson Inv Normal Fisher Modified t Modified F

0.0575 0.0633 0.0667 0.0575 0.0737 0.0752

0.0633 0.0664 0.0683 0.0579 0.0759 0.0784

0.0667 0.0681 0.0699 0.0602 0.0768 0.0807

0.0322 0.0412 0.0438 0.0421 0.0486 0.0495

0.0361 0.0430 0.0454 0.0429 0.0511 0.0527

0.0383 0.0441 0.0462 0.0441 0.0516 0.0533

0.0227 0.0324 0.0349 0.0321 0.0391 0.0388

0.0257 0.0338 0.0355 0.0343 0.0409 0.0401

0.0275 0.0346 0.0369 0.0363 0.0412 0.0411

84

Y. G. Kifle and B. K. Sinha

Table 4 Comparison of the 2nd term of local powers [without nΔ2 Ψ/2] of Wilkinson’s test for different values of n, k, and r (≤ k) r k=5 k = 10 k = 15 n = 15 n = 25 n = 40 n = 15 n = 25 n = 40 n = 15 n = 25 n = 40 0.0575 0.0633 0.0587 0.0494 0.0359

0.0633 0.0664 0.0603 0.0501 0.0361

0.0667 0.0681 0.0611 0.0504 0.0362

0.20

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

0.0322 0.0395 0.0412 0.0404 0.0384 0.0355 0.0320 0.0279 0.0230 0.0168

0.0383 0.0437 0.0441 0.0425 0.0398 0.0364 0.0326 0.0282 0.0231 0.0168

0.0227 0.0292 0.0317 0.0324 0.0322 0.0314 0.0302 0.0287 0.0270 0.0250 0.0229 0.0205 0.0178 0.0147 0.0108

0.0257 0.0315 0.0335 0.0338 0.0333 0.0323 0.0309 0.0292 0.0273 0.0253 0.0230 0.0206 0.0179 0.0148 0.0109

0.0275 0.0328 0.0345 0.0346 0.0339 0.0327 0.0312 0.0295 0.0275 0.0254 0.0231 0.0206 0.0179 0.0148 0.0109

Wilkinson’s Exact Test (k=5)

0.10

0.15

Wilkinson: r=1 Wilkinson: r=2 Wilkinson: r=3 Wilkinson: r=4 Wilkinson: r=5

alpha=0.05

0.00

0.05

Local Power

0.0361 0.0422 0.0430 0.0417 0.0393 0.0361 0.0324 0.0281 0.0231 0.0168

−0.4

−0.2

0.0

0.2

0.4

Delta

Fig. 2 Comparison of local powers of Wilkinson’s exact test for n = 15, k = 5 and Ψ = 1

Comparison of Local Powers of Some Exact Tests …

85

5 Conclusion Based on our computations of local powers of the available exact tests, we have noted that a uniform comparison of them, irrespective of the values of the unknown variances, can be readily made in case of equal sample size, and it turns out that both modified F and modified t tests perform the best. Inverse normal based exact test also performs reasonably well in the case of equal sample size with the added advantage that its cutoff point can be readily obtained without any simulation. Some limited results for k = 2, 3 and 4 in case of unequal sample sizes are reported in the Technical Report Kifle et al. (2020). Acknowledgements We thank two anonymous reviewers for some helpful comments.

References Cohen A, Sackrowitz H (1984) Testing hypotheses about the common mean of normal distributions. J Stat Plan Infer 9(2):207–227 Eberhardt KR, Reeve CP, Spiegelman CH (1989) A minimax approach to combining means, with practical examples. Chemometr Intell Lab Syst 5(2):129–148 Fairweather WR (1972) A method of obtaining an exact confidence interval for the common mean of several normal populations. J Roy Stat Soc: Ser C (Appl Stat) 21(3):229–233 Fisher R (1932) Statistical methods for research workers. 4th edn. Edinburgh and London, p 307 Graybill FA, Deal R (1959) Combining unbiased estimators. Biometrics 15(4):543–550 Hartung J (1999) An alternative method for meta-analysis. Biometrical J: J Math Meth Biosci 41(8):901–916 Jordan SM, Krishnamoorthy K (1996) Exact confidence intervals for the common mean of several normal populations. Biometrics 52(1):77–86 Khatri C, Shah K (1974) Estimation of location parameters from two linear models under normality. Commun Stat-Theor Meth 3(7):647–663 Kifle Y, Moluh A, Sinha B (2020) Comparison of local powers of some exact tests for a common normal mean with unequal variances. Technical Report Lipták T (1958) On the combination of independent tests. Magyar Tud Akad Mat Kutato Int Kozl 3:171–197 Meier P (1953) Variance of a weighted mean. Biometrics 9(1):59–73 Nair K (1980) Variance and distribution of the Graybill-Deal estimator of the common mean of two normal populations. The Ann Stat 8(1):212–216 Philip L, Sun Y, Sinha BK (1999) On exact confidence intervals for the common mean of several normal populations. J Stat Plan Infer 81(2):263–277 Sinha BK (1985) Unbiased estimation of the variance of the Graybill-Deal estimator of the common mean of several normal populations. Can J Stat 13(3):243–247 Stouffer SA, Suchman EA, DeVinney LC, Star SA, Williams RM Jr (1949) The American soldier: adjustment during army life, vol I. Princeton University Press, Princeton Tippett LHC et al (1931) The methods of statistics. The Methods of Statisticz Wilkinson B (1951) A statistical consideration in psychological research. Psychol Bull 48(2):156

Lower Bounds for Percentiles of Pivots from a Sample Mean Standardized by S, the GMD, the MAD, or the Range in a Normal Distribution and Miscellany with Data Analysis Nitis Mukhopadhyay

Subject Classifications: 60E15; 62E17; 62L12; 62H10

1 Introduction Let us begin with a standard normal random variable Z and another random variable Y having a central Student’s t distribution with ν degrees of freedom, ν = 1, 2, . . .. We denote the customary upper 100α% points of these two distributions by z α and tν,α , respectively, that is, we may write 1 − α = P(Z ≤ z α ) = P(Y ≤ tν,α ), 0 < α
z α whatever be ν; (b) tν,α is decreasing in ν; (c) tν,α → z α as ν → ∞; and (d) the Cornish-Fisher expansion of tν,α involving ν and z α . Theoretical justifications for (a)–(b) and other pertinent information may be obtained from Wallace (1959), Ghosh (1973), and DasGupta and Perlman (1974). Wallace (1959) also gave bounds for the cumulative distribution function of tν . It may be worthwhile to review Chu (1956). The resolution (c) is easy to verify, whereas a Cornish-Fisher expansion (d) of tν,α can be reviewed from Johnson and Kotz (1970, p. 102). Fisher (1926) originally led N. Mukhopadhyay (B) Department of Statistics, U-4120, University of Connecticut, Storrs, CT 06269-4120, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 B. K. Sinha and S. B. Bagchi (eds.), Strategic Management, Decision Theory, and Decision Science, https://doi.org/10.1007/978-981-16-1368-5_7

87

88

N. Mukhopadhyay

the foundation by expressing integrals associated with the Student’s tν distribution in successive powers of ν −1 . We should mention that Fisher and Cornish (1960) proposed significantly updated adjustments to their original expansion from Cornish and Fisher (1937) for tν,α involving both ν and z α . George and Sivaram (1987, Eq. 2.13) gave a modified Fisher and Cornish (1960) approximation for tν,α involving both ν and z α . In addition to these citations, one could also refer to Ling (1978) and Koehler (1983) for a review in this area. More recently, Mukhopadhyay (2010, p. 613) gave a very short and tight proof of (a). Indeed, Mukhopadhyay (2010, Theorem 3.1) gave the following results:  (i) tν,α > bν z α where bν =

     −1 1 1 1 ; ν  ν  (ν + 1) 2 2 2

(1.2)

(ii) bν > 1. In other words, tν,α > bν z α > z α so that we have (i) an explicit lower bound for tν,α which strictly exceeds z α , and also strikingly (ii) bν does not involve α so that tν,α z α−1 > bν uniformly over all 0 < α < 21 . Mukhopadhyay (2010) conjectured that bν should be expected to decrease as ν increases, but a proof of this conjecture and other interesting results came in a latter paper of Gut and Mukhopadhyay (2010).

2 Sample Mean from a Normal Distribution Standardized with Sample Standard Deviation S, GMD, MAD or Range Suppose that X 1 , . . . , X m are independent and identically distributed (i.i.d.) normal random variables having the population mean μ and a standard deviation σ . We assume that μ, σ are both unknown parameters and (μ, σ ) ∈  × + . Suppose that we denote m 2 m X m ≡ m −1 i=1 X i , Sm2 ≡ Tm,1 = (m − 1)−1 i=1 (X i − X m )2 , m ≥ 2,

which, respectively, stand for the sample mean and sample variance. Obviously, Tm,1 ≡ Sm is the sample standard deviation. Next, let us emphasize the customary standardized random variable √

m(X m − μ) with bm,α,1 ≡ tm−1,α ; and Tm,1 ≡ bν defined in (1.2), ν = m − 1.

Pivot 1: Hm,1 = dm,1

Clearly, Hm,1 is distributed as tm−1 .

(2.1)

Lower Bounds for Percentiles of Pivots from a Sample Mean …

89

In this section, in the spirit of Mukhopadhyay and Chattopadhyay (2012), we handle a standardized sample mean along the line of (2.1), but now one may contemplate that the denominator is first replaced by a suitable multiple of Gini’s mean difference (GMD). The GMD is useful as a measure of a scale parameter in many areas of science including actuary, economics, and finance. When samples arrive from a normal population, one customarily uses a t-statistic obtained from (2.1) in order to test for μ or to construct a confidence interval for μ. As an alternative measure of standard deviation, Nair (1936), Downton (1966), Barnett et al. (1967), and D’Agostino (1970) argued in favor of using a multiple of GMD instead of S for estimating σ when samples arrive from a normal parent population with suspect outliers. Recently, Mukhopadhyay and Chattopadhyay (2011) have considerably broadened this area. When m is small, the presence of few anomalous observations is not uncommon in physical experiments while the assumption of a normal parent distribution may be quite reasonable (Huxley 1932; Herrey 1965). Tukey (1960) presented detailed treatments with mixture distributions and commented, “…nearly imperceptible nonnormalities may make conventional relative efficiencies of scale and location entirely useless …” (Tukey 1960, p. 474, point #7). Yitzhaki (2003) also discussed GMD as a measure of variability by stressing its use in situations where data may depart slightly from normality, and we quote: “…Of all measures of variability, the variance is by far the most popular … GMD, an alternative index of variability, shares many properties with the variance, but can be more informative about the properties of distributions that depart from normality. Its superiority over the variance is important whenever one is interested in one or more of the following properties … the GMD can be used to form necessary conditions for stochastic dominance, while the variance cannot; This property protects the investigator from making embarrassing mistakes in the fields of decision-making under risk and social welfare …” (Yitzhaki 2003, p. 285).

2.1 GMD-Based Pivot Now, we will replace Sm from the denominator of Hm,1 in (2.1) with a suitable multiple of GMD. In a normal population, an unbiased estimator of σ 2 based on GMD (≡ G m ) is given by 2 Tm,2

=

−2 cm,2 G 2m

 −1    m Xi − X j  , where G m = 2 1≤i< j≤m

(2.2)

with  cm,2 =

π

1/2 √ 4 , (m + 1) + 2(m − 2) 3 + m 2 − 5m + 6 mπ(m − 1) 3

(2.3)

90

N. Mukhopadhyay

2 since E(G 2m ) = V (G m ) + E 2 (G m ) ≡ cm,2 σ 2 . This came from the exact expressions of V (G m ) and E(G m ) originally derived by Nair (1936). See also Mukhopadhyay and Chattopadhyay (2011). Next, let us define the following standardized version of the sample mean:

√ Pivot 2: Hm,2 =

m(X m − μ) , Tm,2

(2.4)

which may be used for testing or constructing a confidence interval for μ instead of using the Student’s t-pivot Hm,1 from (2.1). Indeed, Barnett et al. (1967) had introduced a similar pivotal entity which can also be used to construct confidence intervals for μ or tests for μ. Obviously, both Hm,1 and Hm,2 have distributions symmetric around zero not involving the unknown parameter σ . The distribution of Hm,1 is completely known, however, there is no analytically closed expression for the probability distribution of Hm,2 . We denote bm,α,2 , the upper 100α% point of Hm,2 , as follows: 1 P Hm,2 ≤ bm,α,2 = 1 − α, 0 < α < . 2

(2.5)

Barnett et al. (1967) opted to replace bm,α,2 with z α assuming m was moderately large or large. This would be valid since the distribution of Hm,2 converges in law to a standard normal distribution when m is large. Mukhopadhyay and Chattopadhyay (2012) and Chattopadhyay and Mukhopadhyay’s (2013) developed techniques to approximately determine bm,α,2 in the spirits of Cornish-Fisher expansion involving z α and reciprocal powers of m without knowing an exact probability density function (p.d.f.) of Hm,2 .

2.2 MAD-Based Pivot Fisher (1920, p. 193) had a footnote attributed to Eddington saying, “I think it accords with the general experience of astronomers that, for the errors commonly occurring in practice, the mean error is a safer criterion of accuracy than the mean square error, especially if any doubtful observations have been rejected.” Tukey (1960), also quoting Eddington, recommended the use of mean absolute deviation (MAD) for smaller samples as a useful compromise. Now, we will replace Sm from the denominator of Hm,1 in (2.1) with a suitable multiple of MAD. In a normal population, an unbiased estimator of σ 2 based on MAD (≡ Mm ) is given by   −2 2 m  X i − X m , = cm,3 Mm2 where Mm = m −1 i=1 Tm,3 with

(2.6)

Lower Bounds for Percentiles of Pivots from a Sample Mean …



91

 

2(m − 1) 2(m − 1) 1/2 − m + m(m − 2) + . cm,3 = m2π πm (2.7) The expression of cm,3 comes from Herrey (1965). Next, in the spirit of Chattopadhyay and Mukhopadhyay’s (2013), let us now define the following standardized version of the sample mean: 

π + sin−1 2

1 m−1



√ Pivot 3: Hm,3 =

m(X m − μ) , Tm,3

(2.8)

which may be used for testing or constructing a confidence interval for μ instead of using the Student’s tν -pivot Hm,1 from (2.1). Barnett et al. (1967) had used a similar pivotal entity which can also be used to construct confidence intervals for μ or test μ. Obviously, Hm,3 has a distribution symmetric around zero not involving the unknown parameter σ, but again there is no analytically closed expression for the distribution of Hm,3 . We denote bm,α,3 , the upper 100α% point of Hm,3 , as follows: 1 P Hm,3 ≤ bm,α,3 = 1 − α, 0 < α < . 2

(2.9)

Mukhopadhyay and Chattopadhyay (2012) and Chattopadhyay and Mukhopadhyay’s (2013) developed techniques to approximately determine bm,α,3 without knowing an exact p.d.f. of Hm,3 .

2.3 Range-Based Pivot The range is widely used in quality control (Cox 1949). Earlier, Lord (1947) discussed the role of a pivot analogous to Hm,1 , Hm,2 or Hm,3 by replacing the denominator of the pivot with an appropriate multiple of the sample range statistic. In the context of online positioning, user service utility reports the positioning (that is, geodesic latitude, longitude, and elevation/height) data, often assumed normally distributed, but as a measure of accuracy, the range is used instead of a sample standard deviation (Schwarz 2006). Let X m:1 ≤ X m:2 ≤ ... ≤ X m:m stand for the order statistics. Next, we will replace Sm from the denominator of Hm,1 in (2.1) with a suitable multiple of the range statistic. In a normal population, an unbiased estimator of σ 2 based on the sample range (≡ Rm ) is given by −2 2 2 = cm,4 Rm where Rm = X m:m − X m:1 , Tm,4

(2.10)

92

N. Mukhopadhyay

with  ∞   1/2  ∞ cm,4 = m(m − 1) w2 [(x + w) − (x)]m−2 φ(x)φ(x + w)d x dw , 0

−∞

(2.11)

where we denote: φ(x) = (2π )−1/2 exp −x 2 /2 , (x) =



x −∞

φ(y)dy, −∞ < x < ∞.

The expression of cm,4 comes from Owen (1962, p. 140). Then, in the spirits of Mukhopadhyay and Chattopadhyay (2012) and Chattopadhyay and Mukhopadhyay’s (2013), let us now define the following standardized version of the sample mean: √ Pivot 4: Hm,4 =

m(X m − μ) , Tm,4

(2.12)

which may be used for testing or constructing a confidence interval for μ instead of using the Student’s t-pivot Hm,1 from (2.1). Again, Hm,4 has a distribution symmetric around zero not involving the unknown parameter σ, however, there is no analytically closed expression for the distribution of Hm,4 . We denote bm,α,4 , the upper 100α% point of Hm,4 , as follows: 1 P Hm,4 ≤ bm,α,4 = 1 − α, 0 < α < . 2

(2.13)

Mukhopadhyay and Chattopadhyay (2012) and Chattopadhyay and Mukhopadhyay’s (2013) developed techniques to approximately determine bm,α,4 without knowing an exact p.d.f. of Hm,4 .

2.4 Lower Bounds for the Upper Percentiles Let us temporarily drop the label “i” from the subscripts of Tm,i , Hm,i , bm,α,i and simply write instead Tm , Hm , and bm,α respectively. In the case of each pivot from (2.1), (2.4), (2.8), and (2.12), the two statistics, namely, X m (in the numerator) and Tm (in the denominator) are independently distributed since Tm depends only on the location invariant statistic (X 1 − X m , X 2 − X m , . . . , X m−1 − X m ). In a subtle way, we may invoke Basu’s (1955) theorem as follows: First, let us pretend fixing σ ≡ σ0 (> 0). Then, in the model P0 = {N (μ, σ02 ), −∞ < μ < ∞, σ0 (> 0) fixed}, the statistic X m is complete and sufficient for μ and Tm is ancillary for μ so that X m , Tm must be independent. But, this conclusion remains true for all fixed σ0 values. Thus, the two statistics X m and Tm are independently distributed

Lower Bounds for Percentiles of Pivots from a Sample Mean …

93

under the full model P = {N (μ, σ 2 ), −∞ < μ < ∞, 0 < σ < ∞} in the spirits of Mukhopadhyay (2000, Examples 6.6.15–6.6.17, pp. 325–327). Next, the pivotal upper 100α-percentiles of Hm from (2.1), (2.4), (2.8), and (2.12) are handled by assuming μ = 0 and σ = 1, in all subsequent calculations in Sect. 2.3. Now, then, we can express: 1−α ≡ P

√

  √    m X m /Tm ≤ bm,α = E P m X m ≤ bm,α Tm | Tm = E  bm,α Tm .

(2.14)

  At this point, we may E  bm,α T m = E[g(Tm )] where g(x) is identified √ view with either (i)  bm,α x or (ii)  bm,α x for x > 0. However, these g(x) functions are both strictly concave for x > 0, and hence by applying Jensen’s (1906) inequality in (2.14), we can claim the following in a straightforward fashion (with x > 0):   ⎧   ⎨(i)  bm,α E[Tm2 ]  1 − α = E  bm,α Tm < ⎩ (ii)  bm,α E[Tm ]

√ with g(x) ≡  bm,α x , with g(x) ≡  bm,α x ,

(2.15)

But, we know that  (z α ) = 1 − α which implies immediately that (i) bm,α E[Tm2 ] > z α and also (ii) bm,α E[Tm ] > z α . Jensen’s (1906) inequality is found in many sources including Lehmann and Casella (1998, pp. 46–47) and Mukhopadhyay (2000, pp. 152–156). Thus, from (2.15), we clearly obtain two distinct choices for the lower bound of bm,α :  (i) z α bm,α > (2.16) (ii) z α E −1 [Tm ]. Part (i) follows since E[Tm2 ] = 1. Remark 2.1 A referee remarked that surely one could write       2 E  bm,α E[Tm ] ≤ E  bm,α E[Tm ] , so that part (i) in (2.15) may appear rather redundant. Such a sentiment is completely justified. Put differently, the same redundancy reflects on a kind of trivial lower bound for bm,α in part (i) of (2.16). We decide to include this lower bound much in the spirit of a customary but very useful claim: tν,α > z α . Part (ii) from (2.15) and (2.16) lead to a sharper lower bound for bm,α . 1/r

Remark 2.2 Along the lines of Remark 2.1, one could in fact look at Tm as (Tm )r for 0 < r < 1. Surely, the function g(x) = x r will continue to remain concave when 1/r x > 0. If we have a tractable and analytical expression available for E[Tm ], then we will come up with new and useful lower bounds for bm,α . For the result quoted in (1.2) part (i) in the context of tν -percentiles, such considerations were emphasized and amply treated in Mukhopadhyay (2010, Sect. 3.2 and Fig. 3).

94

N. Mukhopadhyay

Theorem 2.1 Consider the pivots from (2.1), (2.4), (2.8), and (2.12). Then, for each fixed m and 0 < α < 21 , the upper 100α% point bm,α,i defined via (2.1), (2.5), (2.9), and (2.13) associated with the pivotal distribution of Hm,i strictly exceeds the upper 100α% point z α corresponding to the standard normal distribution for each i = 1, 2, 3, 4. Proof The proof follows from part (i) laid down in (2.16). In the case i = 1, this independently validates the result (1.2) part (i): tν,α > z α .  Theorem 2.2 Consider the pivots from (2.1), (2.4), (2.8) and (2.12). Then, for each fixed m and 0 < α < 21 , the upper 100α% point bm,α,i defined via (2.1), (2.5), (2.9) and (2.13) associated with the pivotal distribution of Hm,i strictly exceeds the following lower bounds:  i = 1 : Eq(2.1) : dm,1 z α with dm,1 ≡

1/2      −1 1 1 1 (m − 1) (m − 1)  m  ; 2 2 2

1 1/2 π cm,2 ; 2  1/2 1 i = 3 : Eq(2.9) : dm,3 z α with dm,3 ≡ cm,3 ; π m(m − 1)−1 2

i = 2 : Eq(2.5) : dm,2 z α with dm,2 ≡

i = 4 : Eq(2.13) : dm,4 z α with dm,4 ≡ E −1 [Rm ]cm,4 ;

with cm,2 , cm,3 , cm,4 respectively coming from (2.3), (2.7), (2.11) and E[Rm ] subsequently coming from (2.21). Again, z α stands for the upper 100α% point of a standard normal distribution. The expression E[Rm ] is clearly interpreted as E σ =1 [Rm ]. Proof Here, we will exploit part (ii) from (2.16). 2 Case 1: i = 1: Eq (2.1): Let Y ∼ χm−1 . Then, E[Tm,1 ] = E[Sm ]

  = 21/2 (m − 1)−1/2 E Y 1/2 m   m − 1 −1 1/2 −1/2   , = 2 (m − 1) 2 2

(2.17)

which leads to (1.2), part (i), originally due to Mukhopadhyay (2010), when we replace ν with m − 1. Case 2: i = 2: Eq (2.5): Let Y ∼ N (0, 1). Then, −1 −1 1/2 −1 E[G m ] = cm,2 2 E[|Y |] = 2π −1/2 cm,2 . E[Tm,2 ] = cm,2

(2.18)

Case 3: i = 3: Eq (2.9): Clearly (X 1 , X m ) ∼ N (0, 0, 1, m −1 , ρ = m −1/2 ) with ρ standing for the correlation coefficient between X 1 , X m . Thus, X 1 − X m ∼ N (0, m −1 (m − 1)). Then,

Lower Bounds for Percentiles of Pivots from a Sample Mean …

E[Tm,3 ] =

−1 cm,3 E[Mm ]

=

−1 cm,3 E

  X1 − X m =



1 π m(m − 1)−1 2

95

1/2

−1 cm,3 .

(2.19) Case 4: i = 4: Eq (2.13): First, we need to obtain an expression for E[Rm ]. Now, then, we may write: ∞ m−1 φ(u)du m:m ] = m −∞ u {(u)}  ∞and thus E[X m:1 ] = E[X ∞ m−1 φ(u)du = −m −∞ u {(u)}m−1 φ(u)du m −∞ u {1 − (u)} ⇒ E[Rm ] = 2E[X m:m ].

(2.20)

From (2.20), we may alternatively express:  E[Rm ] = 2m



  u ((u))m−1 − (1 − (u))m−1 φ(u)du.

(2.21)

0

Now, the proof is complete.



Remark 2.3 By the way, is E[X m:m ] necessarily positive and if so, is it possible to prove it without referring to (2.21)? Certainly, E[X m:m ] must be positive via Jensen’s inequality since X m:m is a strict convex function of X 1 , ..., X m with zero means. Remark 2.4 Recall that the probability distributions of the pivots do not involve μ, σ and hence bm,α,i ’s as well as their lower bounds from Theorems 2.1 and 2.2 will not involve μ, σ either. These lower bounds work even though exact analytical forms for the p.d.f.’s of the pivots, Hm,2 , Hm,3 , and Hm,4 remain technically out of reach.

2.5 Closeness of the Lower Bounds to the Percentiles We recall Theorem 2.2, but in order to explore the closeness between bm,α,i ’s and their respective lower bounds dm,i z α ’s, we clearly need the cm,i values. These values were extensively used by Mukhopadhyay and Chattopadhyay (2012), Chattopadhyay and Mukhopadhyay’s (2013), Mukhopadhyay and Hu (2017, 2018) as well as Hu and Mukhopadhyay (2019), but these were hidden inside the computer codes in order to implement their methodologies. However, at this juncture, we believe that such entities are of sufficient independent interest. We have now used MAPLE to obtain the cm,i ’s and evaluate E[Rm ] from (2.21) when m = 2(1)20(5)40, 50, 60. These entries are shown in Table 1 for completeness. This table has not appeared elsewhere in our previous research. Next, we explain Table 2 which presents the values of bm,α,1 (from a t-table) and those of dm,1 and dm,1 z α using (2.1) in columns 2-3 when m = 10, 15, 20, 30 and α = 0.05, 0.025, 0.005. Overall, the lower bound dm,1 z α , which obviously exceeds z α , looks close to bm,α,1 , especially when m = 20 or 30. This feature is consistent with the conclusions from Mukhopadhyay (2010).

96

N. Mukhopadhyay

Table 1 Values of dm,1 , cm,2 , cm,3 , cm,4 and E[Rm ] m (2.1) (2.3) (2.7) dm,1 cm,2 cm,3 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 25 30 35 40 50 60

1.2533 1.1284 1.0854 1.0638 1.0509 1.0424 1.0362 1.0317 1.0281 1.0253 1.0230 1.0210 1.0194 1.0180 1.0168 1.0157 1.0148 1.0140 1.0132 1.0105 1.0087 1.0074 1.0064 1.0051 1.0042

1.4142 1.2744 1.2259 1.2015 1.1868 1.1770 1.1700 1.1648 1.1607 1.1575 1.1548 1.1526 1.1507 1.1491 1.1477 1.1465 1.1454 1.1445 1.1436 1.1404 1.1384 1.1369 1.1358 1.1343 1.1333

0.7071 0.7358 0.7521 0.7617 0.7680 0.7725 0.7757 0.7783 0.7803 0.7819 0.7833 0.7844 0.7854 0.7863 0.7870 0.7877 0.7882 0.7887 0.7892 0.7909 0.7921 0.7929 0.7936 0.7944 0.7950

(2.11) cm,4

(2.21) E[Rm ]

1.4142 1.9115 2.2389 2.4812 2.6725 2.8298 2.9629 3.0779 3.1790 3.2691 3.3502 3.4238 3.4912 3.5532 3.6107 3.6642 3.7142 3.7612 3.8054 3.9940 4.1438 4.2677 4.3731 4.5452 4.6824

1.1284 1.6926 2.0588 2.3259 2.5344 2.7044 2.8472 2.9700 3.0775 3.1729 3.2585 3.3360 3.4068 3.4718 3.5320 3.5879 3.6401 3.6890 3.7350 3.9306 4.0855 4.2132 4.3216 4.4981 4.6386

The rest of Table 2 handles bm,α,i , dm,i and dm,i z α values, i = 2, 3, 4. Chattopadhyay and Mukhopadhyay (2013, Table 1) gave the estimated values  bm,α,i and their estimated standard errors, sbm,α,i , each obtained from 10,000 independent replications. The estimated  bm,α,i and sbm,α,i values, compiled from Chattopadhyay and Mukhopadhyay’s (2013) Table 1, are included for completeness in columns 4, 6, and 8 in our Table 2 corresponding to i = 2, 3, 4 respectively. We should emphasize that we have included rather accurate estimated values  bm,α,i instead of true bm,α,i values because exact analytical forms for the p.d.f.’s of the pivot Hm,i remain out of reach, i = 2, 3, 4. Columns 5, 7, and 9 show the values of dm,2 , dm,2 z α (Theorem 2.2, i = 2), dm,3 , dm,3 z α (Theorem 2.2, i = 3), and dm,4 , dm,4 z α (Theorem 2.2, i = 4) respectively. Overall, the lower bound dm,i z α , bm,α,i , when m = 20 or 30. which obviously exceeds z α , stays close to 

Lower Bounds for Percentiles of Pivots from a Sample Mean …

97

Table 2 Values of bm,α,1 , dm,1 z α and the estimated values of bm,α,2 , bm,α,3 , bm,α,4 and comparing their respective lower bounds from Theorem 2.2 Sm dm,1 bm,α,1

dm,1 z α

m

 bm,α,2 s b

GMDm

MADm

dm,2

dm,3

dm,2 z α  b−

 bm,α,3 s b

3.09s b

dm,3 z α  b−

Rangem dm,4  bm,α,4 s b

dm,4 z α  b− 3.09s b

1.8671

1.6993

3.09s b

α = 0.05, z α = 1.645 10

1.0281 1.8331

15

1.6912

1.0286 1.8346

20

1.6746

30

1.6667

0.00009 1.8564 1.0201

1.7667

1.6593

1.6753

1.7737

1.6781

0.00008 1.7665

0.00008 1.7735

1.0135

1.0148

1.7303

1.6672

1.7398

1.6693

0.00008 1.7301

0.00008 1.7396

1.0089

1.0097

1.0087 1.6991

1.6958

1.0184

1.0132 1.7291

1.0309 1.8567

0.00009 1.8343 1.0180 1.7613

1.6920

1.7003

1.6596

0.00007 1.7001

1.7082

1.6610

0.00007 1.7080

1.0330 0.00089 1.8643 1.0234 1.7972

1.6835

0.00082 1.7947 1.0188 1.7648

1.6759

0.00079 1.7624 1.0143 1.7364

1.6685

0.00076 1.7341

α = 0.025, z α = 1.96 10

1.0281 2.2622

15

2.0151

1.0286 2.2678

20

1.9953

30

1.9859

0.00012 2.2910 1.0201

2.1485

1.9771

1.9961

2.1651

1.9994

0.00011 2.1482

0.00011 2.1648

1.0135

1.0148

2.0959

1.9865

2.1088

1.9890

0.00010 2.0956

0.00010 2.1085

1.0089

1.0097

1.0087 2.0452

2.0206

1.0184

1.0132 2.0930

1.0309 2.2914

0.00012 2.2674 1.0180 2.1448

2.0161

2.0469

1.9774

0.00009 2.0466

2.0559

1.9790

0.00010 2.0556

1.0330 2.3130

2.0247

0.00012 2.3126 1.0234 2.1990

2.0059

0.00011 2.1987 1.0188 2.1480

1.9968

0.00011 2.1477 1.0143 2.0996

1.9880

0.00010 2.0993

α = 0.005, z α = 2.575 10

1.0281 3.2498

15

2.6474

1.0286 3.2612

20

2.6214

30

2.6090

0.00029 3.3090 1.0201

2.9842

2.5974

2.6224

3.0185

2.6268

0.00023 2.9835

0.00023 3.0178

1.0135

1.0148

2.8667

2.6098

2.8933

2.6131

0.00020 2.8661

0.00021 2.8927

1.0089

1.0097

1.0087 2.7564

2.6546

1.0184

1.0132 2.8609

1.0309 3.3099

0.00028 3.2603 1.0180 2.9768

2.6486

2.7598

2.5979

0.00019 2.7592

2.7784

2.6000

0.00019 2.7778

1.0330 3.3507

2.6600

0.00029 3.3498 1.0234 3.0950

2.6353

0.00024 3.0943 1.0188 2.9707

2.6234

0.00022 2.9700 1.0143 2.8645

2.6118

0.00020 2.8639

98

N. Mukhopadhyay

There is one sticky point that remains hidden: Theorem 2.2 showed that bm,α,i must exceed dm,i z α , i = 2, 3, 4. However, columns 5, 7, 9 validate reasonably well that  bm,α,i values exceed dm,i z α values, i = 2, 3, 4. We obtained approximate 99.9% one-sided confidence intervals [ bm,α,i − 3.09sbm,α,i , ∞) for bm,α,i . In Table 2, note that we write sb instead of sbm,α,i . In each block, under columns 5, 7, 9, the entry below dm,i , dm,i z α is the lower confidence limit,  bm,α,i − 3.09sbm,α,i . In other words, even if the true bm,α,i value sat  close to bm,α,i − 3.09sbm,α,i , empirically speaking, there is approximately 99.01% bm,α,i − 3.09sbm,α,i which clearly exceeds chance that the true bm,α,i would exceed  dm,i z α as seen from Table 2. We additionally calculated more extreme lower confidence limits, for example,  bm,α,i − 4.2sbm,α,i values exceeded dm,i z α all bm,α,i − 4.2sbm,α,i , and we observed that  across Table 2. However, one may note that (4.2) ≈ 0.99999 which is practically as close to 1.0 as one may wish to see! We feel nearly certain that our empirical data-validation confirms the theoretical lower bounds in the face of the unknown nature of the true values of bm,α,i , i = 2, 3, 4.

3 Comparing Normal Means Having Unknown and Unequal Variances: Two Lower Bounds for the Required Upper Percentile We suppose that X i1 , X i2 , . . . , X ini , . . . denote i.i.d. random variables having a common N (μi , σi2 ) distribution with both parameters μi , σi2 unknown, i = 1, 2. It is also assumed that σ1 , σ2 are unequal, a scenario commonly described as one within the broad class of Behrens-Fisher problems. We denote θ = (μ1 , μ2 , σ1 , σ2 ) ∈ 2 × +2 and additionally assume that the X 1 ’s are independent of the X 2 ’s. The problem is one of estimating δ = μ1 − μ2 with some appropriately constructed fixed-width confidence interval (FWCI). More generally, however, one may like to estimate an arbitrary linear function of k-means from k(≥ 2) treatments, but for brevity, we focus on a customary two-sample problem. One may refer to Hochberg and Tamhane (1987), Hsu (1996), Bechhofer et al. (1995), and Aoshima and Mukhopadhyay (2002). Additionally, Liu (1995), Ghosh et al. (1997, Sect. 3), Aoshima (2001), and Mukhopadhyay and de Silva (2009, Chap. 13) cited other relevant sources. Having recorded the observations X i1 , X i2 , . . . , X ini of size n i (≥ 2) from the ith treatment, we denote the sample mean and sample variance: 2 i i X ini = n i−1  nj=1 X i j and Sin = (n i − 1)−1  nj=1 (X i j − X ini )2 , i

i = 1, 2. Then, we set out to explore a confidence interval Jn ≡ [(X 1n 1 − X 2n 2 ) ± d] for δ,

(3.1)

Lower Bounds for Percentiles of Pivots from a Sample Mean …

99

having its width 2d (> 0, fixed in advance) and the associated confidence coefficient Pθ {δ ∈ Jn } ≥ 1 − α, (0 < α < 1, fixed in advance) with n = (n 1 , n 2 ). Let < u > stand for the largest integer < u, u > 0. It is well-known, however, that no fixed-sample-size procedure would provide a solution for the problem on hand (Dantzig 1940; Ghosh et al. 1997, Sect. 3.7). Following along the footsteps of Stein-type (Stein 1945, 1949) two-stage estimation methodologies, Chapman’s (1950) developed a break-through estimation technique by proposing to gather observations in two stages which was summarized in Ghosh et al. (1997, p. 186). One begins with pilot observations X i1 , X i2 , . . . , X im i of size m i (≥ 2) from the ith treatment and define the two-stage stopping times:     2 /d 2 + 1 , i = 1, 2, Ni = max m i , h 2m,α/2 Sim i

(3.2)

with h m,α/2 (> 0) determined via the subsequent equation (3.3). Let T1 , T2 be independent random variables, Ti ∼ tm i −1 , i = 1, 2 and then we determine h m ≡ h m,α (> 0) as follows:   P T1 − T2 ≤ h m,α = 1 − α.

(3.3)

Note that the probability distribution of T1 − T2 is symmetric around zero. Thus, in (3.2), one uses the upper 100(α/2)% point of the probability distribution of T1 − T2 . If Ni = m i , we do not record additional observation(s) from the ith treatment in the second stage, but if Ni > m i , then we record additional (Ni − m i ) observations from the ith treatment in the second stage, i = 1, 2. Based on such fully observed data {Ni , X i1 , X i2 , . . . , X i Ni }, by combining observations from both stages 1 and 2 on the ith treatment, i = 1, 2, we propose the following terminal FWCI: JN ≡ [(X 1N1 − X 2N2 ) ± d] for δ,

(3.4)

where N = (N1 , N2 ). In his path-breaking paper, Chapman’s (1950) proved the following result: Pθ {δ ∈ JN } ≥ 1 − α for all fixed α, θ, d, and m 1 , m 2 .

(3.5)

We emphasize that this result is neither asymptotic nor is it based on any kind of approximation. Chapman’s (1950) introduced this two-stage procedure which was further investigated by Ghosh (1975a, b).

100

N. Mukhopadhyay

Chapman’s (1950) included a table providing the required h m,α -values whereas Ghosh (1975a, b) derived its Cornish-Fisher expansion when m 1 = m 2 . Referring to Chapman’s (1950) tabulated h m,α -values, Ghosh (1975b, p. 463) noted that “many of his values are incorrect.” Aoshima and Mukhopadhyay (2002, Theorem 2.1) gave a detailed and more accurate Cornish-Fisher expansion for the h m,α -values when the pilot sizes m 1 , m 2 may or may not be equal.

3.1 Two Lower Bounds for the Upper Percentiles Let us explore the upper 100α% point h m,α from (3.3) for the distribution of T1 − T2 where T1 , T2 are independent random variables, Ti ∼ tm i −1 , i = 1, 2, 0 < α < 21 . Now, let us bring in four independent random variables Y ∼ N (0, 1), Z ∼ N (0, 1), U ∼ χν2 , and W ∼ χκ2 with ν = m 1 − 1, κ = m 2 − 1. Then, we may express T1 = Y (ν −1 U )−1/2 and T2 = Z (κ −1 W )−1/2 so that we may write 1−α   = P T1 − T2 ≤ h m,α    (3.6) = E P Y (νU −1 )1/2 − Z (κ W −1 )1/2 ≤ h m,α | U, W 

  −1/2 = E  h m,α νU −1 + κ W −1 . Now, since (x 1/2 ) with x > 0 is concave, we may invoke Jensen’s inequality in the last step in (3.6) to obtain     −1 1/2 1 − α = E  h m,α νU −1 + κ W −1    −1  1/2 , <  h m,α E νU −1 + κ W −1

(3.7)

which would immediately imply h m,α > z α E −1/2

 −1  νU −1 + κ W −1 = LB1,m,α , say,

(3.8)

the first lower bound for h m,α . Let us denote −1 (m−2)/2  x exp(−x/2)I (x > 0), f (x; m) = 2m/2 (m/2) the p.d.f. of a χm2  random variable. In order to evaluate the term  −1 seen inside (3.8), we used MAPLE as follows: E νU −1 + κ W −1

Lower Bounds for Percentiles of Pivots from a Sample Mean …

101

 −1  obtained from (3.9) with U ∼ χν2 , W ∼ χκ2 , and U, W Table 3 Values of E νU −1 + κ W −1 distributed independently: ν = m 1 − 1, κ = m 2 − 1 m2 m1 5 10 15 20 25 30 5 10 15 20 25 30

0.40000 0.42383 0.43125 0.43486 0.43700 0.43841

 0





∞ 0



0.42383 0.45000 0.45820 0.46220 0.46457 0.46614

0.43125 0.45820 0.46667 0.47080 0.47325 0.47487

m1 − 1 m2 − 1 + u w

−1

0.43486 0.46220 0.47080 0.47500 0.47749 0.47913

0.43700 0.46457 0.47325 0.47749 0.48000 0.48166

0.43841 0.46614 0.47487 0.47913 0.48166 0.48333

f (u; m 1 − 1) f (w; m 2 − 1)dudw,

(3.9)

for m 1 , m 2 = 5(5)30. These values are summarized in Table 3. We note that (3.8)–(3.9) do not lead to an analytically tractable expression to evaluate the lower bound, LB1,m,α , for h m,α . At the same time, the expression in (3.9) lends itself to provide accurate approximation via numerical integration. Before we proceed further, let us now denote a function  g(q) ≡ q

1/4



1 q 2

   1 1 −1  q+ , q > 0, 2 4

(3.10)

Theorem 3.1 For each fixed m = (m 1 , m 2 ) and 0 < α < 21 , with ν = m 1 − 1, κ = m 2 − 1, we have: (i) h m,α > 21/2 z α ; (ii) h m,α > z α E −1/2



νU −1 + κ W −1

−1 

≡ LB1,m,α ;

(iii) h m,α > g(ν)g(κ)z α ≡ LB2,m,α . Here, U ∼ χν2 and W ∼ χκ2 , they are independent, and g(.) function was defined in (3.10). Proof We briefly outline a proof. Part (i): We note that a + b > 2(ab)1/2 with a > 0, b > 0 and then we invoke Jensen’s inequality to the last expression from (3.6) to claim:

102

N. Mukhopadhyay

  −1/2  E  h m,α νU −1 + κ W −1  1/4 1/4 !  < E  2−1/2 h m,α U ν −1 W κ −1  1/4 1/4 

<  2−1/2 h m,α E U ν −1 . W κ −1

(3.11)

Next, since U, W are independently distributed, (3.11) leads to  1/4   1/4 

E W κ −1 , 1 − α <  2−1/2 h m,α E U ν −1

(3.12)

so that we have: h m,α > 21/2 z α E −1

 1/4  −1  1/4  U ν −1 E W κ −1 = g(ν)g(κ)z α ,

(3.13)

with g(.) coming from (3.10). But, since U ∼ χν2 , we can alternatively look at  2

1/4



1 1 ν+ 2 4

   −1   1  = E U 1/4 < E 1/4 [U ] = ν 1/4 , ν 2

in view of Jensen’s inequality since a(x) = x 1/4 , x > 0, is concave. In other words, g(x) > 21/4 for all x > 0, so that part (i) follows from (3.13). Parts (ii)–(iii): We verified parts (ii) and (iii) on way to conclude (3.6)–(3.8) and (3.11)–(3.13), respectively. Further details are omitted. 

3.2 Closeness of the Two Lower Bounds to the Percentiles In the light of Theorem 2.1 in Aoshima and Mukhopadhyay (2002), we looked at their Table 1 which gave the values of qm satisfying   1 P T1 − T2 ≤ 21/2 qm = 1 − α, 2 when m 1 , m 2 = 5(5)30, 40, 50 and α = 0.05, 0.01. In other words, our upper 100α% point h m,α corresponds to 21/2 qm having fixed α = 0.025, 0.005, respectively. Our Table 4 incorporates the pilot sizes m 1 , m 2 = 5(5)30 and α = 0.025, 0.005. On the first row of each block, we show our requisite h m,α values when α = 0.025, 0.005. On the next two rows of each block we show LB1,m,α and LB2,m,α from Theorem 3.1, parts (ii) and (iii), respectively. √ We have also supplied the values of 2z α . The values of h m,α exceed the values √ of 2z α across the board. Also,√the two newly found lower bounds of h m,α , namely LB1,m,α and LB2,m,α , exceed 2z α across the board. The lower bounds, LB1,m,α

Lower Bounds for Percentiles of Pivots from a Sample Mean …

103

Table 4 Values of h m,α from (3.6) followed by its #1 lower bound (LB1,m,α ) from Theorem 3.1 (ii) and #2 lower bound (LB2,m,α ) from Theorem 3.1 (iii): α = 0.025, 0.005 m2 5 10 15 20 25 30 √ m1 α = 0.025, z α = 1.96; 2z α = 2.7719 5 3.9301 3.5582 3.4790 3.4450 3.4266 3.4139 LB1,m,α 3.0990 3.0106 2.9846 2.9722 2.9649 2.9602 LB2,m,α 3.0537 2.9717 2.9490 2.9385 2.9323 2.9284 10 3.5582 3.1806 3.0985 3.0646 3.0448 3.0335 LB1,m,α 3.0106 2.9218 2.8955 2.8830 2.8756 2.8708 LB2,m,α 2.9717 2.8920 2.8698 2.8596 2.8536 2.8498 15 3.4790 3.0985 3.0179 2.9826 2.9628 2.9500 LB1,m,α 2.9846 2.8955 2.8691 2.8565 2.8491 2.8443 LB2,m,α 2.9490 2.8698 2.8479 2.8377 2.8318 2.8280 20 3.4450 3.0646 2.9826 2.9472 2.9274 2.9161 LB1,m,α 2.9722 2.8830 2.8565 2.8439 2.8364 2.8316 LB2,m,α 2.9385 2.8596 2.8377 2.8276 2.8217 2.8179 25 3.4266 3.0448 2.9628 2.9274 2.9076 2.8963 LB1,m,α 2.9649 2.8756 2.8491 2.8364 2.8290 2.8241 LB2,m,α 2.9323 2.8536 2.8318 2.8217 2.8158 2.8121 30 3.4139 3.0335 2.9500 2.9161 2.8963 2.8836 LB1,m,α 2.9602 2.8708 2.8443 2.8316 2.8241 2.8193 LB2,m,α 2.9284 2.8498 2.8280 2.8179 2.8121 2.8083 √ α = 0.005, z α = 2.575, 2z α = 3.6416 5 6.0500 5.2892 5.1661 5.1180 5.0940 5.0784 LB1,m,α 4.0714 3.9553 3.9211 3.9048 3.8953 3.8890 LB2,m,α 4.0119 3.9042 3.8743 3.8605 3.8524 3.8473 10 5.2892 4.4039 5.0784 4.1889 4.1564 4.1352 LB1,m,α 3.9553 3.8386 3.8041 3.7876 3.7779 3.7715 LB2,m,α 3.9042 3.7994 3.7703 3.7569 3.7490 3.7440 15 5.1661 5.0784 4.0885 4.0234 3.9881 3.9669 LB1,m,α 3.9211 3.8041 3.7694 3.7528 3.7431 3.7367 LB2,m,α 3.8743 3.7703 3.7414 3.7281 3.7203 3.7154 20 5.1180 4.1889 4.0234 3.9570 3.9216 3.8990 LB1,m,α 3.9048 3.7876 3.7528 3.7362 3.7264 3.7201 LB2,m,α 3.8605 3.7569 3.7281 3.7148 3.7071 3.7021 25 5.0940 4.1564 3.9881 3.9216 3.8848 3.8622 LB1,m,α 3.8953 3.7779 3.7431 3.7264 3.7167 3.7103 LB2,m,α 3.8524 3.7490 3.7203 3.7071 3.6994 3.6944 30 5.0784 4.1352 3.9669 3.8990 3.8622 3.8396 LB1,m,α 3.8890 3.7715 3.7367 3.7201 3.7103 3.7039 LB2,m,α 3.8473 3.7440 3.7154 3.7021 3.6944 3.6895

104

N. Mukhopadhyay

and LB2,m,α , are reasonably close to each other, especially when m 1 , m 2 = 25, 30. These are encouraging data-validation of our conclusions laid down in Theorem 3.1. We must raise one other important point: All across Table 4, we cannot escape without observing that LB1,m,α values have always exceeded ever so slightly the corresponding values of LB2,m,α . We admit, however, that we have only exhibited numerically calculated values of LB1,m,α and LB2,m,α . But, such a consistent trend is hard to miss or ignore. So, we are very inclined to propose Conjecture : LB1,m,α > LB2,m,α for fixed m 1 , m 2 , α,

(3.14)

a worthwhile technicality that we have not yet been able to resolve one way or the other.

3.3 Limiting Values of LB1,m,α and LB2,m,α A referee felt strongly that the following could be true: lim lim LB1,m,α = lim lim LB2,m,α =

m 1 →∞m 2 →∞

m 1 →∞m 2 →∞

√ 2z α .

(3.15)

A direct proof of (3.15) is not very involved, but it may be instructive. (i): First, let us handle LB1,m,α . The limiting result will follow if we can verify   −1  = 1. lim lim E 2 νU −1 + κ W −1

ν→∞κ→∞

(3.16)

Obviously, however, lim νU −1 = 1 and lim κ W −1 = 1, both in probability or ν→∞ κ→∞ w.p.1. But, (3.16) may not necessarily follow right away. Imagine having U1 , ..., Uν , ... and W1 , ..., Wκ , ... i.i.d. χ12 random variables, with ν, κ = 1, 2, ... . Surely, U/ν has the same distribution as that of the sample mean ν U ν = ν −1 i=1 Ui and also W/κ has the same distribution as that of the sample mean W κ . Let us denote U ∗ = supν≥1 U ν and W ∗ = supκ≥1 W κ . Now, we use the fact that the geometric mean is no smaller than the harmonic mean of positive numbers, that is, we have (a1 a2 )1/2 ≥ 2(a1−1 + a2−1 )−1 for all a1 > 0, a2 > 0. D

Let us write X ∗ = Y∗ to indicate that the random variables X ∗ , Y∗ have identical probability distributions. This leads us to express −1  1/2 1/2 D ≤ (U/ν)1/2 (W/κ)1/2 = U ν W κ ≤ U ∗ W ∗ , 2 νU −1 + κ W −1

(3.17)

Lower Bounds for Percentiles of Pivots from a Sample Mean …

105

whereas it is clear that the term U ∗ W ∗ on the extreme right-hand side of (3.17) does not involve ν, κ. Wiener’s (1939) ergodic theorem will show that both E [U ∗ ] < ∞ and E [W ∗ ] < ∞. Indeed all positive moments of U ∗ , W ∗ are finite since the Ui ’s and W j ’s have all positive moments finite. Then, by Cauchy-Schwartz’s inequality, we have E[U ∗ W ∗ ] ≤ E 1/2 [U ∗2 ]E 1/2 [W ∗2 ] < ∞. In this proof, we have not used the fact that the Ui ’s and W j ’s are independent. Hence, Lebesgue dominated convergence theorem and (3.17) will allow us to claim    −1  −1   = E lim lim 2 νU −1 + κ W −1 = 1, lim lim E 2 νU −1 + κ W −1 ν→∞κ→∞

ν→∞κ→∞

which completes the proof of (3.16).  (ii): Next, let us handle LB2,m,α which is equivalent to the expression g(ν)g(κ)z α . We recall g(q) from (3.10), and clearly, it will suffice to prove lim g(q) = 21/4 .

q→∞

(3.18)

We apply formula 6.1.47 from Abramowitz and Stegun (1972, p. 257) and quote (with q > −a, q > −b):

q^{b−a} Γ(q + a)/Γ(q + b) = 1 + (1/2)(a − b)(a + b − 1) q^{-1} + O(q^{-2}),

as q → ∞. Identifying a = 0 and b = 1/4, we can express:

g(q) = 2^{1/4} (q/2)^{1/4} Γ(q/2) [Γ(q/2 + 1/4)]^{-1}
     = 2^{1/4} (q/2)^{b−a} Γ(q/2 + a) [Γ(q/2 + b)]^{-1}
     = 2^{1/4} [1 + (1/2)(−1/4)(1/4 − 1) q^{-1} + O(q^{-2})]
     = 2^{1/4} [1 + (3/32) q^{-1} + O(q^{-2})],    (3.19)

which converges to 2^{1/4} as q → ∞. That shows the validity of (3.18). Of course, nothing would be lost had we given the expansion in (3.19) only up to the order O(q^{-1}).
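As a quick numerical illustration (not part of the original derivation), the convergence in (3.18)–(3.19) can be checked directly, assuming g(q) = 2^{1/4}(q/2)^{1/4} Γ(q/2)/Γ(q/2 + 1/4) as reconstructed above; the helper below is purely illustrative.

```python
# Minimal sketch (not from the paper): numerically verify that g(q) -> 2^(1/4),
# assuming g(q) = 2^(1/4) * (q/2)^(1/4) * Gamma(q/2) / Gamma(q/2 + 1/4) as in (3.19).
import numpy as np
from scipy.special import gammaln

def g(q):
    # Work on the log scale so that Gamma(q/2) does not overflow for large q.
    log_g = 0.25 * np.log(2.0) + 0.25 * np.log(q / 2.0) \
            + gammaln(q / 2.0) - gammaln(q / 2.0 + 0.25)
    return np.exp(log_g)

target = 2.0 ** 0.25
for q in (5, 10, 30, 100, 1000, 10000):
    print(f"q = {q:6d}   g(q) = {g(q):.6f}   |g(q) - 2^(1/4)| = {abs(g(q) - target):.2e}")
```

The gap shrinks at a rate roughly proportional to 1/q, consistent with the order of the expansion retained in (3.19).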


4 Some Concluding Comments

This paper began with a one-sample normal problem and considered a number of pivots obtained from the sample mean successively standardized by the sample standard deviation, the GMD, the MAD, and the range. In each case, we developed useful and explicit lower bounds for the upper 100α% point of the pivot's probability distribution. We reiterate that we are able to do so even though we do not have explicitly tractable probability distributions of the pivots when a sample mean is standardized by the GMD, the MAD, or the range. Next, we explored a corresponding two-sample normal problem under unknown and unequal variances, and we developed two useful and explicit lower bounds for the upper 100α% point of the requisite pivotal distribution associated with the difference of two independent and central Student's t random variables. In these problems, we have carefully examined the closeness of our new-found lower bounds to the upper 100α% point (or its estimate) associated with the pivotal distributions.

A common undercurrent prevalent in the preparation of this paper may be summarized in two words: Jensen's inequality. This work shows a number of uncommon and not-so-straightforward applications of this elegant inequality.

At this moment, when we are nearly ready to bring down the curtain, a referee raised an important question: should one use the t_ν-based pivot, or instead one of the GMD-based, MAD-based, or range-based pivots? A pivot that is incorporated within data analysis must depend on intrinsic features inherent in the data structure at hand. Let us reiterate a part of a large body of published literature: Barnett et al. (1967), D'Agostino (1970), Downton (1966), Herrey (1965), Huxley (1932), Lord (1947), Nair (1936), Schwarz (2006), Tukey (1960), and Yitzhaki (2003). These researchers, and a host of others, encountered nearly Gaussian data structures in which the t_ν-based pivot did not perform well, but one of the GMD-based, MAD-based, or range-based pivots did, withstanding the serious havoc caused by possible outliers. That is the moot point. We included selected quotes from the literature before Sect. 2.1 and within Sects. 2.2 and 2.3. No one declared one pivot a decisive winner over the other choices, while everyone found one or more of these pivots indispensable in analyzing his or her practical data.

Acknowledgements Professor Jun Hu, Oakland University, Michigan, shared some insight regarding Table 1, and I take this opportunity to thank him. Two reviewers asked substantive questions. Comments from both reviewers have led to significant improvements in this revised presentation, and I remain indebted to them. I also thank Professor Bikas K. Sinha for his constant encouragement.


References Abramowitz M, Stegun IA (1972) Handbook of mathematical functions, 10th edn. Dover, New York Aoshima M (2001) Sample size determination for multiple comparisons with components of a linear function of mean vectors. Commun Stat-Theor Meth 30:1773–1788 Aoshima M, Mukhopadhyay N (2002) Two-stage estimation of a linear function of normal means with second-order approximations. Seq Anal 21:109–144 Barnett FC, Mullen K, Saw JG (1967) Linear estimates of a population scale parameter. Biometrika 54:551–554 Basu D (1955) On statistics independent of a complete sufficient statistic. Sankhya¯ 15:377–380 Bechhofer RE, Santner TJ, Goldsman DM (1995) Design and analysis of experiments for statistical selection, screening, and multiple comparisons. Wiley, New York Chapman DG (1950) Some two-sample tests. Ann Math Stat 21:601–606 Chattopadhyay B, Mukhopadhyay N (2013) Two-stage fixed-width confidence intervals for a normal mean in the presence of suspect outliers. Seq Anal 32:134–157 Chu JT (1956) Errors in normal approximations to the t, τ , and similar types of distribution. Ann Math Stat 27:780–789 Cornish EA, Fisher RA (1937) Moments and cumulants in the specification of distributions. Rev de l’Inst de Stat 5:307–322 Cox DR (1949) The use of the range in sequential analysis. J Roy Stat Soc Ser B 11:101–114 D’Agostino RB (1970) Linear estimation of the normal distribution standard deviation. Am Stat 24:14–15 Dantzig GB (1940) On the non-existence of tests of Student’s hypothesis having power functions independent of σ . Ann Math Stat 11:186–192 DasGupta S, Perlman M (1974) Power of the noncentral F-test: effect of additional variates on Hotelling’s T2 -test. J Am Stat Assoc 69:174–180 Downton F (1966) Linear estimates with polynomial coefficients. Biometrika 53:129–141 Fisher RA (1920) A mathematical examination of the methods of determining the accuracy of an observation by the mean error, and by the mean square error. Roy Astronom Soc (Monthly Notes) 80:758–769 Fisher RA (1926) Expansion of “Student’s” integral in powers of n −1 . Metron 5:109–112 Fisher RA, Cornish EA (1960) The percentile points of distributions having known cumulants. Technometrics 2:209–225 George EO, Sivaram M (1987) A modification of the Fisher-Cornish approximation for the Student t percentiles. Commun Stat Simul 16:1123–1132 Ghosh BK (1973) Some monotonicity theorems for χ 2 , F and t distributions with applications. J Roy Stat Soc Ser B 35:480–492 Ghosh BK (1975a) A two-stage procedure for the Behrens-Fisher problem. J Am Stat Assoc 70:457– 462 Ghosh BK (1975b) On the distribution of the difference of two t-variables. J Am Stat Assoc 70:463– 467 Ghosh M, Mukhopadhyay N, Sen PK (1997) Sequential estimation. Wiley, New York Gut A, Mukhopadhyay N (2010) On asymptotic and strict monotonicity of a sharper lower bound for Student’s t percentiles. Methodol Comput Appl Probab 12:647–657 Herrey EMJ (1965) Confidence intervals based on the mean absolute deviation of a normal sample. J Am Stat Assoc 60:257–269 Hochberg Y, Tamhane AC (1987) Multiple comparison procedure. Wiley, New York Hsu JC (1996) Multiple comparisons. Chapman & Hall, New York Hu J, Mukhopadhyay N (2019) Second-order asymptotics in a class of purely sequential minimum risk point estimation (MRPE) methodologies. Jpn J Stat Data Sci 2:81–104 Huxley JS (1932) Problems of relative growth. Dover, London, New York


Jensen J (1906) Sur les Fonctions Convexes et les Inégalités Entre les Valeurs Moyennes. Acta Mathematica 30:175–193 Johnson NL, Kotz S (1970) Continuous univariate distributions-2. Wiley, New York Koehler KJ (1983) A simple approximation for the percentiles of the t distribution. Technometrics 25:103–105 Lehmann EL, Casella G (1998) Theory of point estimation, 2nd edn. Springer, New York Ling RF (1978) A study of the accuracy of some approximations for t, χ 2 , and F tail probabilities. J Am Stat Assoc 73:274–283 Liu W (1995) Fixed-width simultaneous confidence intervals for all pairwise comparisons. Comput Stat Data Anal 20:35–44 Lord E (1947) The use of range in place of standard deviation in the t-test. Biometrika 34:41–67 Mukhopadhyay N (2000) Probability and statistical inference. Dekker, New York Mukhopadhyay N (2010) On a sharper lower bound for a percentile of a Students t distribution with an application. Methodol Comput Appl Probab 12:609–622 Mukhopadhyay N, Chattopadhyay B (2011) Estimating a standard deviation with U-statistics of degree more than two: the normal case. J Stat Adv Theor Appl 5:93–130 Mukhopadhyay N, Chattopadhyay B (2012) Asymptotic expansion of the percentiles for a sample mean standardized by GMD in a normal case with applications. J Jpn Stat Soc 42:165–184 Mukhopadhyay N, de Silva BM (2009) Sequential methods and their applications. CRC, New York Mukhopadhyay N, Hu J (2017) Confidence intervals and point estimators for a normal mean under purely sequential strategies involving Gini’s mean difference and mean absolute deviation. Seq Anal 36:210–239 Mukhopadhyay N, Hu J (2018) Two-stage estimation for a normal mean having a known lower bound of variance with final sample size defined via Gini’s mean difference and mean absolute deviation. Seq Anal 37:204–221 Nair US (1936) The standard error of Gini’s mean difference. Biometrika 28:428–436 Owen DB (1962) Handbook of statistical tables. Addison Wesley, Reading Schwarz CR (2006) Statistics of range of a set of normally distributed numbers. J Surv Eng 132:155– 159 Stein C (1945) A two sample test for a linear hypothesis whose power is independent of the variance. Ann Math Stat 16:243–258 Stein C (1949) Some problems in sequential estimation (abstract). Econometrica 17:77–78 Tukey JW (1960) A survey of sampling from contaminated distributions. In: Olkin I, et al (eds) Contributions to Probability and Statistics. Stanford University Press, Stanford, pp 448–485 Wallace DL (1959) Bounds on normal approximations to Student’s and the Chi-square distributions. Ann Math Stat 30:1121–1130 Wiener N (1939) The ergodic theorem. Duke Math J 5:1–18 Yitzhaki S (2003) Gini’s mean difference: a superior measure of variability for non-normal distributions. Metron 61:285–316

Intuitionistic Fuzzy Optimization Technique to Solve a Multiobjective Vendor Selection Problem Prabjot Kaur and Shreya Singh

1 Introduction

The vendor selection problem is a multiobjective decision-making problem in which the right vendors are selected and the right order quantities are allocated to them, taking into account quality, cost, and other attributes. The multiobjective nature of the problem is constrained by conflicting qualitative and quantitative criteria. Such conflict and vagueness are commonly handled with fuzzy sets. Fuzzy sets, however, are described only by membership functions and provide no means of representing non-membership or hesitancy. Intuitionistic fuzzy sets capture both membership and non-membership. In this paper, we use intuitionistic fuzzy sets to handle uncertainty in the multiobjective vendor selection problem. The analysis of selection criteria and the measurement of vendor performance have been the focus of many academicians and purchasing practitioners (Weber et al. 1991). The earliest study using a multiobjective optimization approach in the VSP was by Weber and Current (1993). Weber and Desai (1998) applied multi-criteria decision analysis, multiobjective programming, and data envelopment analysis for selection and for non-cooperative negotiation strategies with vendors not selected, where the selection of one vendor results in another vendor being left out. Dahel (2003) presented a multiobjective mixed integer programming approach to select vendors and to allocate orders among them in multiple-supplier competitive sourcing environments. Deterministic models do not represent the uncertainty present in real-life problems; such uncertainty is represented by fuzzy sets. Madronero, Peidro, and Vasant (2010) proposed an interactive fuzzy multiobjective approach for the vendor selection problem using modified s-curve membership functions. Babic and Peric (2014) proposed a fuzzy multiobjective programming approach for

P. Kaur (B) · S. Singh
Department of Mathematics, Birla Institute of Technology, Mesra, Ranchi, Jharkhand, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
B. K. Sinha and S. B. Bagchi (eds.), Strategic Management, Decision Theory, and Decision Science, https://doi.org/10.1007/978-981-16-1368-5_8


multiproduct vendor selection with volume discounts. A more advanced form of fuzzy sets is the intuitionistic fuzzy set (Atanassov 1986). Applications of IFS to multiobjective problems can be found in agricultural production planning (Bharati and Singh 2014), transportation problems (Jana and Roy 2007; Roy et al. 2018), reliability problems (Garg et al. 2014), and inventory problems (Chakraborty et al. 2013). Very few papers take an intuitionistic fuzzy multiobjective approach to the VSP. Shahrokhi, Bernard, and Shidpour (2011) gave an integrated approach combining IFS and linear programming for the supplier selection problem. Kaur and Rachana (2016) gave an intuitionistic fuzzy multiobjective approach to the vendor selection problem. Our approach reformulates the fuzzy optimization problem of Kumar et al. (2006) as an IFO model. We construct membership and non-membership functions for the various objective functions, formulate an IFO model, and convert it to a linear programming problem for solution. The paper is organized as follows: Section One states the problem and reviews the literature. Section Two explains the basic concepts of IFS and MOLPP. Section Three presents the methodology. Section Four discusses the steps of the algorithm. Section Five illustrates a numerical example. Sections Six and Seven give the results and discussion of our model, and the conclusions.

2 Preliminaries

Definition 1: Let X = {x_1, x_2, …, x_n} be a finite universal set. An Atanassov intuitionistic fuzzy set (IFS) (Atanassov 1986) in a given universal set X is an expression given by

A = {(x_i, μ_A(x_i), ν_A(x_i)) : x_i ∈ X},

where the functions

μ_A : X → [0, 1], x_i ∈ X ↦ μ_A(x_i) ∈ [0, 1], and
ν_A : X → [0, 1], x_i ∈ X ↦ ν_A(x_i) ∈ [0, 1],

define the degree of membership and the degree of non-membership of an element x_i ∈ X to the set A ⊆ X, respectively, such that they satisfy the following condition


for every x_i ∈ X:

0 ≤ μ_A(x_i) + ν_A(x_i) ≤ 1.

Let π_A(x_i) = 1 − μ_A(x_i) − ν_A(x_i), which is called the Atanassov intuitionistic index of the element x_i in the set A. It is the degree of indeterminacy of the membership of the element x_i in the set A. Obviously, 0 ≤ π_A(x_i) ≤ 1.

Definition 2: Multiobjective Linear Programming Problem (Chakraborty et al. 2013). A multiobjective optimization problem with p objectives, q constraints, and n decision variables is defined as follows:

Max Z_1(X), Z_2(X), ..., Z_p(X)
such that
g_j(X) ≤ 0, j = 1, 2, ..., q,
X = {X_1, X_2, ..., X_n},
X_i > 0, i = 1, 2, ..., n.    (1)

Definition 3: Complete solution of an MOLP (Chakraborty et al. 2013). x⁰ ∈ X is said to be a completely optimal solution for problem (1) if f_k(x⁰) ≥ f_k(x) for all x ∈ X and all k. However, if the objective functions are conflicting in nature, a complete solution that maximizes all of the objective functions does not exist. In such a situation, a solution concept called Pareto optimality is used in MOLP.

Definition 4: Pareto optimality (Chakraborty et al. 2013). x⁰ ∈ X is said to be a Pareto optimal solution for problem (1) if there does not exist another x ∈ X such that f_k(x) ≥ f_k(x⁰) for all k = 1, 2, …, p and f_j(x) > f_j(x⁰) for at least one j = 1, 2, …, p.


3 Formulation of the Intuitionistic Fuzzy Multiobjective Model

The IFO (Angelov 1997) model for the VSP comprises several objective functions subject to constraints. In this model, we treat the objective functions as intuitionistic fuzzy sets. The algorithm for solving the model is as follows.

4 Algorithm of Steps

Step 1: Given the three objective functions subject to the various constraints of our problem, we solve one objective function at a time to obtain its solutions.

Step 2: From Step 1, we obtain the maximum (upper) and minimum (lower) values of the three objective functions. For membership:

U_p^μ = max(Z_r(x)),  L_p^μ = min(Z_r(x)).    (2)

Step 3: We construct the membership and non-membership functions for the IFO formulation (Bharati and Singh 2014; Kaur and Rachna 2016).

μ_p(Z_k(x)) = 0                                      if Z_k(x) ≤ L_p^μ,
              (Z_k(x) − L_p^μ)/(U_p^μ − L_p^μ)        if L_p^μ < Z_k(x) < U_p^μ,
              1                                      if Z_k(x) ≥ U_p^μ;

ν_p(Z_k(x)) = 1                                      if Z_k(x) ≤ L_p^ν,
              (U_p^ν − Z_k(x))/(U_p^ν − L_p^ν)        if L_p^ν < Z_k(x) < U_p^ν,
              0                                      if Z_k(x) ≥ U_p^ν.    (3)

Step 4: The IFO MOLP (Angelov 1997) mathematical formulation by using steps (2) and (3), we obtain the model as:


Table 1 Vendor data

Vendor no   p_i   q_i    l_i    U_i      r_i    f_i    B_i
1           3     0.05   0.04   5000     0.88   0.02   25,000
2           2     0.03   0.02   15,000   0.91   0.01   100,000
3           7     0      0.08   6000     0.97   0.06   35,000
4           1     0.02   0.01   3000     0.85   0.04   5500

Maximize (α − β)
subject to:
μ_p(x) ≥ α, p = 1, 2, ..., p + q,
ν_p(x) ≤ β, p = 1, 2, ..., p + q,
α + β ≤ 1,
α ≥ β,
β ≥ 0,
x ∈ X.    (4)

5 Numerical Illustration

A manufacturer dealing with auto parts wants to improve its purchasing process and reconsider its sourcing strategies. Four vendors are shortlisted and evaluated on various criteria. The vendor profile (Kumar et al. 2006) is given in Table 1. The mathematical formulation of the problem (Kumar et al. 2006) is as follows:


Minimize Z1 = 3x1 + 2x2 + 7x3 + x4
Minimize Z2 = 0.05x1 + 0.03x2 + 0.02x4
Minimize Z3 = 0.04x1 + 0.02x2 + 0.08x3 + 0.01x4
subject to the constraints:
x1 + x2 + x3 + x4 = 20,000
x1 ≤ 5000
x2 ≤ 15,000
x3 ≤ 6000
x4 ≤ 3000
0.88x1 + 0.91x2 + 0.97x3 + 0.85x4 ≥ 18,400
0.02x1 + 0.01x2 + 0.06x3 + 0.04x4 ≤ 600
3x1 ≤ 25,000
2x2 ≤ 100,000
7x3 ≤ 35,000
x4 ≤ 5500
x1, x2, x3, x4 ≥ 0    (5)

The solution procedure for the above formulation is as follows. For λ = 0.2, Steps 1 and 2 yield the optimal values: Max Z1 = −60,000, Min Z1 = −63,333.33, Max Z2 = −433.33, Min Z2 = −1283.33, Max Z3 = −641.67, Min Z3 = −683.33. Using Steps 2 and 3, we obtain the membership as well as the non-membership function for each objective function. Using Step 4, we obtain the deterministic form of the IFO problem as

Max (α − β)
subject to:
3333.33α + 3x1 + 2x2 + 7x3 + x4 ≤ 63333.33
850α + 0.05x1 + 0.03x2 + 0.02x4 ≤ 1283.33
41.66α + 0.04x1 + 0.02x2 + 0.08x3 + 0.01x4 ≤ 683.33
2666.67β + 3x1 + 2x2 + 7x3 + x4 ≤ 60666.66
680β + 0.05x1 + 0.03x2 + 0.02x4 ≤ 603.33
33.328β + 0.04x1 + 0.02x2 + 0.08x3 + 0.01x4 ≤ 650.002
α + β ≤ 1
α − β ≥ 0
α, β ≥ 0
x_i ≥ 0, i = 1, 2, 3, 4;  λ ∈ [0, 1]    (6)

After solving the above formulation, we obtain the optimal solution as



Fig. 1 Comparison of objective function values in IFO and Fuzzy approach

α = 0.95, β = 0, x1 = 0, x2 = 14918.23, x3 = 4207.55, x4 = 874.22. Using the above algorithm and solving for the various values λ = 0.2, 0.49, and 0.99, we obtain the results shown in Table 2.
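For readers who wish to reproduce this step, the following is a minimal sketch (not the authors' code) of solving the crisp formulation (6) with scipy.optimize.linprog. It assumes that the demand and capacity constraints of (5) are retained alongside the membership and non-membership constraints of (6); because of the rounded bounds, the computed optimum may differ slightly from the values reported above.

```python
# Minimal sketch (assumption: constraints of (5) are carried over together with (6)).
# Decision vector: [x1, x2, x3, x4, alpha, beta]; objective: maximize alpha - beta.
import numpy as np
from scipy.optimize import linprog

c = np.array([0, 0, 0, 0, -1.0, 1.0])          # minimize -(alpha - beta)

A_ub = [
    [3, 2, 7, 1, 3333.33, 0],                  # membership constraints from (6)
    [0.05, 0.03, 0, 0.02, 850.0, 0],
    [0.04, 0.02, 0.08, 0.01, 41.66, 0],
    [3, 2, 7, 1, 0, 2666.67],                  # non-membership constraints from (6)
    [0.05, 0.03, 0, 0.02, 0, 680.0],
    [0.04, 0.02, 0.08, 0.01, 0, 33.328],
    [0, 0, 0, 0, 1, 1],                        # alpha + beta <= 1
    [0, 0, 0, 0, -1, 1],                       # alpha >= beta
    [-0.88, -0.91, -0.97, -0.85, 0, 0],        # quality constraint from (5), >= 18400
    [0.02, 0.01, 0.06, 0.04, 0, 0],            # late-delivery constraint from (5)
]
b_ub = [63333.33, 1283.33, 683.33, 60666.66, 603.33, 650.002, 1, 0, -18400, 600]

A_eq = [[1, 1, 1, 1, 0, 0]]                    # total demand
b_eq = [20000]

bounds = [
    (0, 5000),    # x1 <= 5000 (capacity); budget 3*x1 <= 25000 is looser
    (0, 15000),   # x2 <= 15000 (capacity); budget 2*x2 <= 100000 is looser
    (0, 5000),    # x3 <= min(6000, 35000/7)
    (0, 3000),    # x4 <= min(3000, 5500)
    (0, 1),       # alpha
    (0, 1),       # beta
]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
x1, x2, x3, x4, alpha, beta = res.x
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
print(f"x = ({x1:.2f}, {x2:.2f}, {x3:.2f}, {x4:.2f})")
```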

6 Results and Discussion

Comparing our IFS approach with the fuzzy approach of Kumar et al. (2006), we get a better degree of achievement, α = 0.96, which is higher than λ = 0.675. The values of the objective functions Z1, Z2, and Z3 are better than in the previous study. In the fuzzy approach, 70% of the allocation went to vendor 2, 23% to vendor 3, and 6.4% to vendor 4. In IFO, 74% of the order allocation went to vendor 2, 21% to vendor 3, and 4.3% to vendor 4. From Fig. 1, we see that the IFO approach gives better results for Z1 and Z3 compared to the fuzzy approach. MOLP handles the conflicting objectives efficiently and tactfully. IFS handles the vagueness of information in the vendor criteria with both membership and non-membership functions.

7 Conclusion

In this paper, an IFO model for the VSP was developed, and the IFO method was used to solve the resulting MOLP problem. A comparison between the fuzzy and IFO approaches shows that IFO performs better than the fuzzy approach. The advantage of the IFO method over the fuzzy approach is that it gives a greater degree of satisfaction to decision makers with respect to the


Table 2 Comparison of two approaches

Value of objective functions   λ = 0.2      λ = 0.49     λ = 0.99     Kumar et al. (2006) approach
Z1                             60,136.53    60,125.82    60,125.82    61,818
Z2                             465          465.41       465.41       448
Z3                             643.7108     643.2        643.23       665

solution of the problem. With intuitionistic fuzzy sets in IFO, we have two degrees of freedom: maximum acceptance (membership function) and minimum rejection (non-membership function) of a solution. Future extensions of this work include using other types of non-linear membership and non-membership functions (parabolic, tangential, or piecewise), which may give better results, or extending this model. These models can also be applied to other areas of managerial decision-making, such as agricultural production planning, investment, and biomedical engineering.

References Angelov P (1997) Optimization in an intutionistic fuzzy environment. Fuzzy Sets Syst 86:299–306 Atanassov K (1986) Intuitionistic fuzzy sets. Fuzzy Sets Syst 20:87–96 Bharati SK, Singh SR (2014) Intutionistic fuzzy optimization techniques in agriculture production planning: a small farm holders prospective. Int J Comp Appl 89(6) Chakraborty S, Pal M, Nayak PK (2013) Intutionistic fuzzy optimization technique for Pareto optimal solution of manufacturing inventory models with shortages. Eur J Oper Res 228(2):381– 387 Dahel N (2003) Vendor selection and order quantity allocation in volume discount environments. Surg Endosc Other Interv Tech 8(4):335–342 Díaz-madroñero P, MD and Vasant, P, (2010) Fuzzy multi-objective vendor selection problem with modified s-curve membership function. AIP Conf Proc 1239:278 Garg H, Rani M, Sharma SP, Vishwakarma Y (2014) Intutionistic fuzzy optimization technique for solving multi-objective reliability optimization problems in interval environment. Expert Syst Appl 41:3157–3167 Jana B, Roy TK (2007) Multiobjective intutionistic fuzzy linear programming and its application in transportation model. Notes Intutionistic Fuzzy Sets 13(1), 34–51 Kaur P, Rachna KlN (2016) An intutionistic fuzzy optimization approach to vendor selection problem. Persp Sci 8:348–350 Kumar M, Vrat P, Shankar R (2006) A fuzzy programming approach for vendor selection problem in a supply chain. Int J Prod Econ 101(2):273–285 Shahrokhi, Bernard and Shidpour (2011) An integrated method using intuitionistic fuzzy set and linear programming for supplier selection problem,18th IFAC World Congress Milano (Italy) August 28−September 2 Weber CA, Current JR (1993) A multiobjective approach to vendor selection. Eur J Oper Res 68:173–184


Weber CA, Desai A (1998) Non-cooperative negotiation strategies for vendor selection. Eur J Oper Res 108(1):208–223 Weber CA, Current JR, Benton WC (1991) Vendor selection criteria and methods. Eur J Oper Res 50(1):2–18 Zoran B, Tunjo P (2014) Multiproduct vendor selection with volume discounts as the fuzzy multiobjective programming problem. Int J Prod Res 52(14):4315–4333

Testing for the Goodness of Fit for the DMRL Class of Life Distributions Tanusri Ray and Debasis Sengupta

AMS Subject Classification: Primary 62N05 · Secondary 62G10, 62N03

1 Introduction

Suppose X is a non-negative-valued random variable representing a lifetime. Let X have the distribution F and the survival function F̄ = 1 − F. The mean residual life of X at age x is the mean of the conditional distribution of X − x given X > x, given by

m(x) = (1/F̄(x)) ∫_x^∞ F̄(u) du.    (1)

Bryson and Siddiqui (1969) introduced the Decreasing Mean Residual Life (DMRL) class of life distributions, for which m(x) is decreasing in x. The name of the class signifies a special type of ageing that sets it apart from other forms of ageing, viz. Increasing Failure Rate (IFR), Increasing Failure Rate Average (IFRA), New Better than Used (NBU), New Better than Used in Expectation (NBUE), Harmonically New Better than Used in Expectation (HNBUE) and so on. This class includes the IFR class, is included in the NBUE and HNBUE classes, and has partial overlap with the IFRA and NBU classes.

T. Ray Maharaja Manindra Chandra College, Kolkata, India D. Sengupta (B) Indian Statistical Institute, Kolkata, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 B. K. Sinha and S. B. Bagchi (eds.), Strategic Management, Decision Theory, and Decision Science, https://doi.org/10.1007/978-981-16-1368-5_9


The question of whether a life distribution belongs to the DMRL class becomes significant in view of the membership benefits of the class. These benefits come in the form of several useful results. The results given by Klefsjö (1982a) imply that a DMRL survival function F̄ with mean μ is bounded from above and below as follows:

max{(1 − t/μ), 0} ≤ F̄(t) ≤ exp(−max{(t/μ − 1), 0}), t ≥ 0.

Sengupta and Das (2016) improved these results to provide the sharpest possible bounds. Another result is a characterization related to the renewal process governing successive failures of a unit undergoing instantaneous and perfect repair (which amounts to replacement by an independent unit with an identically distributed lifetime). The equilibrium distribution of such a process is the steady-state distribution of the time from any randomly chosen inspection time to the time of the next failure of the unit, computed as

F_e(t) = (1/μ) ∫_0^t F̄(u) du, t > 0,

where F̄ and μ are the survival function and the mean, respectively, of the lifetime distribution F of the units at the time of replacement. It follows from the definition of the DMRL class that F_e is IFR if and only if F is DMRL. Klefsjö (1982b) provided yet another characterization of the DMRL class, in terms of the Total Time on Test (TTT) transform φ of a life distribution F with mean μ, defined as

φ(p) = μ^{-1} ∫_0^{F^{-1}(p)} F̄(x) dx, 0 < p < 1.

He showed that 1 − φ is star-shaped, i.e. (1 − φ(p))/(1 − p) is decreasing in p, if and only if F is DMRL. Cheng and He (1989) provided a bound for the maximum deviation of a DMRL distribution from the exponential distribution with an identical mean. Abu-Youssef (2002) provided a lower bound on the mean lifetime of a series system with two independent and identical DMRL components. Further properties of the DMRL class may be found in Willmot and Lin (2001). The results mentioned above become usable when a life distribution happens to be a member of the DMRL class. In order to settle the question of this membership empirically, researchers have proposed various tests of a hypothesis based on a censored sample from the underlying distribution. Typically for these tests, the null hypothesis is that the distribution is exponential, while the alternative is that it is DMRL. Hollander and Proschan (1975) had proposed a test based on a linear function of order statistics. Klefsjö (1983) proposed a test based on the empirical TTT plot. Bandyopadhyay and Basu (1990) proposed a U-statistic test based on a measure of DMRL-ness. A more general test was proposed by Bergman and Klefsjö (1989). Ahmad (1992) proposed another U-statistic test based on a simpler measure of DMRL-ness. Lim and Park (1993) adapted Ahmad's test to randomly right-censored


data. Lim and Park (1997) gave a more general test. Abu-Youssef (2002) proposed a test based on a moment inequality. Ahmad and Mugdadi (2010) proposed a test based on a more general moment inequality. Anis (2010) proposed a test based on the fact that the F is DMRL if and only if its equilibrium distribution is IFR. Rejection of a test of exponentiality against the DMRL alternative does not necessarily mean that the underlying distribution is DMRL. Any test statistic essentially capitalizes on merely an aspect of the DMRL class. For example, every DMRL distribution is also an NBUE distribution. Therefore, a test of exponentiality against the NBUE alternative is also a test of exponentiality against the DMRL alternative. Rejection of such a test based on a sample from an NBUE distribution does not necessarily mean that it is indeed DMRL. It transpires that membership of the underlying life distribution of a sample to a particular class is not automatically established by the rejection of the exponential null hypothesis against that particular alternative. A confirmatory goodness-of-fit test for that class of distribution has to be used to fill the gap. Rejection of the first test, together with non-rejection of the second would be an appropriate empirical basis for treating a particular distribution as a member of that class. There are many tests of exponentiality against various ageing alternatives; see Stephens (1986); Ascher (1990); Lai (1994); Lai & Xie (2006) for reviews of these tests. The observation made above calls for complementing them with goodness-offit tests for these ageing classes. Not many tests of this kind exist in the literature. Tenga and Santner (1984) proposed a test for goodness of fit to the IFR class. Santner and Tenga (1984) extended it to the cases of Type II and Type I censoring. Srivastava et al. (2012) proposed tests for goodness of fit to IFRA and NBU classes. Sengupta (2013) reviewed these tests. In this paper, we propose a goodness-of-fit test for the DMRL class of life distributions. In the next section, we develop the test for censored data. In Section 3, we present the results of Monte Carlo simulations to support the theory. We conclude the paper by presenting an illustrative analysis of two data sets in Section 4. Throughout this paper, the term ‘decreasing’ would mean ‘non-increasing’.

2 Methodology

2.1 A Measure of Non-decreasingness of MRL

Let M(x) = ∫_0^x m(u) du be the integrated MRL function, which is a member of the class C[0, ∞) of continuous real-valued functions on [0, ∞). For any g ∈ C[0, ∞), let L_g be the least concave majorant (LCM) of g, i.e.

L_g = inf{t ∈ C[0, ∞) : t(x) ≥ g(x) for x ∈ [0, ∞) and t is concave}.


If an MRL function m is decreasing, then its integrated version M would be concave, and therefore L_M would coincide with M. For any other MRL, the function L_M − M is non-negative over the interval [0, ∞). Therefore, a scaled measure of non-decreasingness of an MRL m is

Δ(m, τ) = sup_{x∈[0,τ]} (1/m²(0)) [L_M(x) − M(x)],

where τ is a finite number. This measure is zero when m is decreasing over [0, τ], and positive when it is not. Therefore, we can use an empirical version of this measure for testing the goodness of fit of the class of DMRL distributions to a given data set. In order that the shape of m is captured as much as possible, τ should preferably be a large number.

Suppose X_1, X_2, ..., X_n is a sample of size n from the life distribution F, and F_n is a nonparametric estimator of this distribution function. The empirical version of the mean residual life function defined in (1) is

m̂(x) = (1/F̄_n(x)) ∫_x^∞ F̄_n(u) du = Σ_{i=1}^n max{X_i − x, 0} / Σ_{i=1}^n I(X_i > x),    (2)

where F̄_n = 1 − F_n and I(·) is the indicator function. We can define the integrated empirical MRL as

M̂(x) = ∫_0^x m̂(u) du = ∫_0^x [Σ_{i=1}^n max{X_i − u, 0} / Σ_{i=1}^n I(X_i > u)] du.    (3)

For randomly right-censored data, we can interpret F̄_n as a modified version of the Kaplan–Meier estimator, where the estimator is forced to have the value 0 at the largest uncensored observation. Suppose X_(1), X_(2), ..., X_(n_u) are the ordered values of the uncensored observations, and K_1, K_2, ..., K_{n_u} are the values of the Kaplan–Meier estimator at those points. Then, by defining X_(0) = 0, K_0 = 1 and K_{n_u} = 0, we can write

F̄_n(x) = Σ_{i=0}^{n_u−1} K_i I(X_(i) ≤ x < X_(i+1))
        = { K_i  for x ∈ [X_(i), X_(i+1)), i = 0, 1, ..., n_u − 1;
            0    for x ≥ X_(n_u). }    (4)

Consequently, we have (2) replaced by

m̂(x) = { Σ_{j=i}^{n_u−1} (K_j/K_i)(X_(j+1) − X_(j)) − (x − X_(i))   if X_(i) ≤ x < X_(i+1), i = 0, 1, ..., n_u − 1;
          0                                                         if x ≥ X_(n_u), }    (5)

and (3) adjusted accordingly.

123

For any real x inside the support of F, the quantity m(x) ˆ is a consistent estimator of ˆ m(x). Therefore, under mild conditions (e.g. boundedness of m), the integral M(x) x should converge in probability to M(x) = 0 m(u)du. We propose the test statistic w(x) ˆ [L Mˆ (x) − M(x)], 2 m x∈[0,τ ] ˆ (0)

(m, ˆ τ ) = sup

(6)

where w(x) is a suitable positive valued weight function, for goodness of fit of the DMRL class. We shall show later that as long as τ is larger than the largest order statistic, its exact value does not matter, and it may practically be regarded as infinity.

2.2 Form of the Empirical Cumulative MRL and Its LCM According to (5), mˆ is piecewise linear. Therefore, Mˆ should be continuous and piecewise parabolic. It follows that for x ∈ [X (i) , X (i+1) ), i = 1, 2, . . . , n u − 1, one ˆ can write M(x) as  x ˆ ˆ m(u)du. ˆ M(x) = M(X (i) ) + X (i)

By repeating this calculation for the interval [0, X (1) ) and formally defining X (0) = 0, we obtain   1 2 ˆ M(x) = Mi + bi x − X (i) − x − X (i) 2

for X (i) ≤ x < X (i+1) , i = 0, 1 . . . , n u − 1,

(7)

where   M0 = Mˆ X (0) = 0,    1 2  X (i+1) − X (i) , i = 0, 1, . . . , n u − 1 Mi+1 = Mˆ X (i+1) = Mi + bi X (i+1) − X (i) − 2 n u −1   1  bi = K j X ( j+1) − X ( j) , i = 0, 1, . . . , n u − 1. Ki j=i

Note that bi may be interpreted as estimated mean remaining life at X (i) . In the absence of censoring, we have the simplification bi =

n 1  X ( j) , i = 0, 1, . . . , n − 1. n − i j=i+1

ˆ We now study some properties of M.

124

T. Ray and D. Sengupta

Proposition 1 The empirical cumulative MRL defined in (7) has the following properties. (a) It has a positive derivative in between observed times of event. (b) At all observed times of event except the last one, its right derivative is greater than the left derivative. The empirical integrated MRL Mˆ consists of the sequence of parabolic functions    2 Pi (x) = Mi + bi x − X (i) − 21 x − X (i) for x ∈ [X (i) , X (i+1) ], i = 0, 1, . . . , n u − 1. In the sequel, we refer to the parabolic segment defined over the interval [X (i) , X (i+1) ) by the notation Pi , and to the entire parabola as ‘extended Pi ’. Part (b) of Proposition 1 shows that Mˆ is not concave at the observed times of event. ˆ Tangents to the parabolas need to be studied in order to identify the LCM of M. Proposition 2 For 0 ≤ i < j ≤ n u − 1, the following statements hold. (a) For a fixed point on P j , there is a unique tangent that passes through that point and touches extended Pi at a point that lies below and to the left of the fixed point. (b) For a fixed point on Pi , there is a unique tangent that passes through that point and touches extended P j at a point that lies above and to the right of the fixed point. We are now in a position to identify the tangents of interest. Proposition 3 For 0 ≤ i < j ≤ n u − 1, the following statements hold. (a) The unique  point on extended Pi lying below and to the left of the point  X ( j) , M j , such that the connector of the two points is a tangent to extended Pi , is (xi,l j , yi,l j ), where xi,l j = X ( j) −



X ( j) − X (i) − bi

2

− bi2 + 2(M j − Mi )



1/2

,

1/2   2 yi,l j = Mi + bi X ( j) − X (i) − bi X ( j) − X (i) − bi − bi2 + 2(M j − Mi )   1/2 2 2 1 2 X ( j) − X (i) − X ( j) − X (i) − bi − bi + 2(M j − Mi ) − . 2

(b) The unique  point on extended P j lying above and to the right of the point  X (i) , Mi , such that the connector of the two points is a tangent to extended P j , is (xi,r j , yi,r j ), where

 1/2 2 b j + X ( j) − X (i) − b2j − 2(M j − Mi ) ,

1/2   2 r = M − b X 2 yi, j j ( j) − X (i) + b j b j + X ( j) − X (i) − b j − 2(M j − Mi ) j   1/2 2 2 1 2 X ( j) − X (i) + b j + X ( j) − X (i) − b j − 2(M j − Mi ) − . 2

r = X xi, (i) + j

Testing for the Goodness of Fit for the DMRL Class of Life Distributions

125

(c) The unique common tangent to extended Pi and extended P j touches the two curves at the points (xi,mlj , yi,mlj ) and (xi,mrj , yi,mrj ), respectively, where xi,mlj

  Mi − M j + bi X ( j) − X (i) − 21 (b j − bi )2 = X (i) + , b j − bi + X ( j) − X (i)

yi,mlj = Pi (xi,mlj ), xi,mrj yi,mrj

  Mi − M j + b j X ( j) − X (i) − 21 (b j − bi )2 = X ( j) + , b j − bi + X ( j) − X (i) = P j (xi,mrj ).

ˆ we now consider the With the ultimate objective of identifying the LCM of M, LCM of a pair of parabolic segments Pi and P j , where 0 ≤ i < j ≤ n u − 1. The desired LCM would in general consist of a part of Pi , a part of P j and a linear part in between. The left end of the linear segment may lie at the left end (L), middle (M) or right end (R) of Pi , while the right end of that segment may lie at the left end (L), middle (M) or right end (R) of P j . Thus, there are nine scenarios that may be denoted as LL, LM, LR, ML, MM, MR, RL, RM and RR, as illustrated in Fig. 1. The next proposition defines the LCM by considering exhaustively these nine cases. Proposition 4 For 0 ≤ i < j ≤ n u − 1, let Li j denote the LCM of Pi and P j , and     M −M si, j = X ( j)j −X (i)i denote the linear connector between X (i) , Mi and X ( j) , M j . Then   Li j has the following form over the interval X (i) , X ( j+1) . Case LR:

If si, j+1 ≥ bi and si, j+1 ≤ b j − (X ( j+1) − X ( j) ), then

Li j (x) = Mi + Case LL:

If si, j+1 ≥ bi , si, j+1 > b j − (X ( j+1) − X ( j) ) and xi,r j ≤ X ( j) , then  Li j (x) =

Case LM: then

 M j+1 − Mi  x − X (i) for X (i) ≤ x ≤ X ( j+1) . X ( j+1) − X (i)

Mi + P j (x)

M j −Mi X ( j) −X (i)



x − X (i)



for X (i) ≤ x ≤ X ( j) , for X ( j) < x ≤ X ( j+1) .

If si, j+1 ≥ bi , si, j+1 > b j − (X ( j+1) − X ( j) ) and X ( j) < xi,r j < X ( j+1) ,   y r −Mi  Mi + x ri, j−X (i) x − X (i) for X (i) ≤ x ≤ xi,r j , i, j Li j (x) = P j (x) for xi,r j < x ≤ X ( j+1) .

126

T. Ray and D. Sengupta

Fig. 1 Illustrations of nine possible shapes LCM of two parabolic segments

Case RR:

If si, j+1 < bi , si, j+1 ≤ b j − (X ( j+1) − X ( j) ) and xi,l j+1 ≥ X (i+1) , then 

Li j (x) =

Pi (x) Mi+1 +

M j+1 −Mi+1 X ( j+1) −X (i+1)



x − X (i+1)



for X (i) ≤ x ≤ X (i+1) , for X (i+1) < x ≤ X ( j+1) .

Case MR: If si, j+1 < bi , si, j+1 ≤ b j − (X ( j+1) − X ( j) ) and X (i) < xi,l j+1 < X (i+1) , then Li j (x) =

⎧ ⎨ Pi (x) ⎩ yi,l j+1 +

M j+1 −yi,l j+1 X ( j+1) −xi,l j+1



x − xi,l j+1



for X (i) ≤ x ≤ xi,l j+1 , for xi,l j+1 < x ≤ X ( j+1) .

Testing for the Goodness of Fit for the DMRL Class of Life Distributions

127

Case MM: If si, j+1 < bi , si, j+1 > b j − (X ( j+1) − X ( j) ), X (i) < xi,mlj < X (i+1) and X ( j) < xi,mrj < X ( j+1) , then ⎧ ⎪ P (x) ⎪ ⎨ i Li j (x) = yi,mlj + ⎪ ⎪ ⎩ P (x)



yi,mrj −yi,mlj xi,mrj −xi,mlj

x − xi,mlj



for X (i) ≤ x ≤ xi,mlj , for xi,mlj < x ≤ xi,mrj , for xi,mrj < x ≤ X ( j+1) .

j

Case RL: If si, j+1 < bi , si, j+1 > b j − (X ( j+1) − X ( j) ), si+1, j ≤ bi − (X (i+1) − X (i) ) and si+1, j ≥ b j , then ⎧ ⎪ ⎨ Pi (x) Li j (x) = Mi+1 + ⎪ ⎩ P j (x) Case ML: M j −yi,l j X ( j) −xi,l j

M j −Mi+1 X ( j) −X (i+1)



x − X (i+1)



for X (i) ≤ x ≤ X (i+1) , for X (i+1) < x ≤ X ( j) , for X ( j) < x ≤ X ( j+1) .

If si, j+1 < bi , si, j+1 > b j − (X ( j+1) − X ( j) ), X (i) < xi,l j < X (i+1) and ≥ b j , then ⎧ ⎪ P (x) ⎪ ⎨ i Li j (x) = yi,l j + ⎪ ⎪ ⎩ P (x)

M j −yi,l j X ( j) −xi,l j



x − xi,l j



for X (i) ≤ x ≤ xi,l j , for xi,l j < x ≤ X ( j) , for X ( j) < x ≤ X ( j+1) .

j

Case RM: If si, j+1 < bi , si, j+1 > b j − (X ( j+1) − X ( j) ),   X (i+1) − X (i) and X ( j) < xi,r j < X ( j+1) , then ⎧ ⎪ ⎨ Pi (x) Li j (x) = Mi+1 + ⎪ ⎩ P j (x)

yi,r j −Mi+1 xi,r j −X (i+1)



x − X (i+1)



yi,r j −Mi+1 xi,r j −X (i+1)

≤ bi −

for X (i) ≤ x ≤ X (i+1) , for X (i+1) < x ≤ xi,r j , for xi,r j < x ≤ X ( j+1) .

Remark 1 For the special cases 0 < i + 1 = j ≤ n u , the only possible cases among those listed in Proposition 4 are LM, LR, MM and MR. Remark 2 The concave functions  L i j , 0 ≤ i < j < n u can be extended from their  stipulated domain X (i) , X ( j+1) to 0, X (n u ) , by extending the linear or parabolic segments to either side, as appropriate. These extensions will retain their concavity. We are now ready to provide the LCM of the empirical cumulative MRL. Proposition 5 The LCM of the empirical cumulative MRL Mˆ defined in (7) is L Mˆ (x) =

sup

i, j∈{0,1,...,n u }:X (i) ≤x≤X ( j+1)

Li j (x), 0 < x < X (n u ) ,

(8)

128

T. Ray and D. Sengupta

where Li j , 0 ≤ i < j < n u are as described in Proposition 4 and extended as in Remark 2.

2.3 Consistency and Null Distribution We present the consistency of the test in the uncensored case as follows. Proposition 6 Suppose (i) the distribution F has a finite mean m(0), (ii) the positive weight function w(x) in (6) is almost surely bounded from above and bounded away from 0,  F(τ ) (iii) the integral 0 (τ − F −1 ( p)/(1 − p)dp is finite. Then, in the absence of censoring and with a suitable choice of threshold, the test (6) is consistent. In the case of censored data, one has to replace the empirical survival function with the Kaplan–Meier estimator, whose uniform consistency under appropriate conditions is well established (Andersen et al., 1993). However, a crucial step that is central to the proof of Proposition 6 is to establish uniform convergence of mˆ in the case of censored data, the difficulty of which has been on record (Csörg˝o & Zitikis, 1996). In order to control the probability of type I error of the test, we need to identify the ‘least favorable’ null distribution for this purpose. This is indeed a challenging question. We do not have a concrete answer to this question but have some indications of what the answer might look like. Suppose m 1 is a non-increasing MRL that is strictly decreasing over at least some partof its support. Suppose m 2 (t) = m 1 (0) for all t. The first integrated MRL, t M1 (t) = 0 m 1 (u)du, is a concave function. The second integrated MRL is M2 (t) = m 1 (0)t. It follows that both the integrated MRLs coincide with their respective LCMs. However, being a linear function, M2 (t) lies at the border between concavity and convexity at all values of t. Therefore, the estimated version of M2 (t) should be more likely than that of M1 (t) to have local fluctuations in the direction of convexity. Such fluctuations, leading to a gap between the estimated M2 (t) and its LCM, would be picked up by the supremum statistic (6). From these considerations, it appears that the exponential distribution, which has a constant MRL, may be the least favorable distribution as far as the type I error probability is concerned. Since we do not have a theoretical proof of this conjecture, we will rely on simulations to get an indication of its validity. If simulations support the conjecture, then the null distribution for the given sample size may be obtained by simulation of repeated samples of that size from the exponential distribution with scale parameter 1. This could be used for computing the cut-off or p-value of a given data set.

Testing for the Goodness of Fit for the DMRL Class of Life Distributions

129

3 Simulations 3.1 Finding Least Favourable Distribution We simulated samples from the Weibull distribution with shape parameter α ≥ 1 and scale parameter 1 and the gamma distribution with shape parameter α ≥ 1 and scale parameter 1. In both cases, α = 1 corresponds to the exponential distribution. We had 10,000 runs of simulation for each of the sample sizes n = 10, 30, 50, 100, to compute the 90, 95 and 99 percentiles of two versions of the test statistic having ˆ¯ mˆ 2 (0)/ M(t). ˆ weight functions w(t) = 1 and w(t) = F(t) The second weight funcˆ tion considers the relative gap between M(t) and its LCM, and weights it further by the Kaplan-Meier estimator. The rationale behind using a decreasing weight function is that the MRL function is less precisely estimated at larger age due to diminished sample size. Tables 1 and 2 show the simulated percentage points of the statistic with weight ˆ¯ mˆ 2 (0)/ M(t), ˆ respectively, when the data follow functions w(t) = 1 and w(t) = F(t) the Weibull distribution. Tables 3 and 4 show the corresponding percentage points in the case of the Gamma distribution. It is found that in each table the column for α = 1 contains the largest value in any given row. These findings indicate that the exponential distribution is possibly the least favorable within the class of null distributions, for computation of cut-off or p-values.

Table 1 Empirical percentile points of the test statistic with w(t) = 1 computed from data of various sample sizes n from the Weibull distribution with different shape parameters α n percentile α=1 α=2 α=3 α=5 α = 10 10 10 10 30 30 30 50 50 50 100 100 100

90% 95% 99% 90% 95% 99% 90% 95% 99% 90% 95% 99%

0.4069 0.6311 1.1717 0.6226 1.0075 2.1098 0.7445 1.1685 2.4832 0.8728 1.3909 3.1201

0.0473 0.0755 0.165 0.0423 0.0695 0.1612 0.0406 0.0704 0.1531 0.0386 0.0659 0.1579

0.0144 0.0225 0.0521 0.0110 0.0178 0.0413 0.0097 0.0158 0.0351 0.0089 0.0144 0.0331

0.0038 0.0059 0.0125 0.0025 0.0039 0.0087 0.0022 0.0034 0.0074 0.0019 0.003 0.0062

0.0007 0.001 0.0022 0.0004 0.0007 0.0015 0.0004 0.0006 0.0012 0.0003 0.0005 0.0011

130

T. Ray and D. Sengupta

ˆ¯ mˆ 2 (0)/ M(t) ˆ Table 2 Empirical percentile points of the test statistic with w(t) = F(t) computed from data of various sample sizes n from the Weibull distribution with different shape parameters α n

percentile

α=1

α=2

α=3

α=5

α = 10

10 10 10 30 30 30 50 50 50 100 100 100

90% 95% 99% 90% 95% 99% 90% 95% 99% 90% 95% 99%

0.4625 0.6708 1.2375 0.4691 0.6902 1.3815 0.4441 0.6629 1.2239 0.4130 0.6174 1.1667

0.0321 0.0529 0.1301 0.0117 0.0189 0.0500 0.0072 0.0120 0.0322 0.0000 0.0000 0.0000

0.0095 0.0139 0.0303 0.0031 0.0048 0.0113 0.0020 0.0031 0.0081 0.0010 0.0016 0.0042

0.0026 0.0038 0.0076 0.0008 0.0012 0.0026 0.0005 0.0007 0.0015 0.0002 0.0004 0.0008

0.0006 0.0008 0.0016 0.0001 0.0002 0.0004 0.0001 0.0001 0.0003 0.0000 0.0001 0.0001

Table 3 Empirical percentile points of the test statistic with w(t) = 1 computed from data of various sample sizes n from the Gamma distribution with different shape parameters α n percentile α=1 α=2 α=3 α=5 α = 10 10 10 10 30 30 30 50 50 50 100 100 100

90% 95% 99% 90% 95% 99% 90% 95% 99% 90% 95% 99%

0.4069 0.6311 1.1717 0.6226 1.0075 2.1098 0.7445 1.1685 2.4832 0.8728 1.3909 3.1201

0.1636 0.2775 0.5956 0.2138 0.3584 0.8319 0.2369 0.3991 0.9617 0.2539 0.4210 0.9890

0.0879 0.1489 0.3459 0.1128 0.1960 0.4491 0.1166 0.1955 0.4785 0.1270 0.2227 0.5487

0.0438 0.0748 0.1719 0.0511 0.0873 0.2069 0.0538 0.0928 0.2181 0.0549 0.0976 0.2560

0.0179 0.0294 0.0655 0.0194 0.0321 0.0803 0.0203 0.0341 0.0829 0.0198 0.0350 0.0890

3.2 Conservative Cut-Offs On the basis of the above findings, we now run more simulations for computation of several percentile points of the (presumed) least favourable null distribution of the test statistic. Tables 5 and 6 show various empirical percentage points of the statistic ˆ¯ mˆ 2 (0)/ M(t), ˆ with weight functions w(t) = 1 and w(t) = F(t) respectively, computed from samples of five different sizes, with data simulated from the exponential

Testing for the Goodness of Fit for the DMRL Class of Life Distributions

131

ˆ¯ mˆ 2 (0)/ M(t) ˆ Table 4 Empirical percentile points of the test statistic with w(t) = F(t) computed from data of various sample sizes n from the Gamma distribution with different shape parameters α n

percentile

α=1

α=2

α=3

α=5

α = 10

10 10 10 30 30 30 50 50 50 100 100 100

90% 95% 99% 90% 95% 99% 90% 95% 99% 90% 95% 99%

0.4625 0.6708 1.2375 0.4691 0.6902 1.3815 0.4441 0.6629 1.2239 0.4130 0.6174 1.1667

0.1431 0.2547 0.5598 0.0856 0.1792 0.4633 0.0653 0.1363 0.4203 0.0498 0.1076 0.3694

0.0645 0.1130 0.2853 0.0378 0.0713 0.2251 0.0293 0.0562 0.1988 0.0199 0.0400 0.1381

0.0280 0.0463 0.1283 0.0171 0.0300 0.0839 0.0123 0.0236 0.0682 0.0084 0.0165 0.0562

0.0112 0.0181 0.0497 0.0062 0.0109 0.0291 0.0047 0.0083 0.0254 0.0031 0.0056 0.0169

Table 5 Empirical percentile points of the test statistic with w(t) = 1 computed from data of various sample sizes n from the Exponential distribution percentile n = 10 n = 20 n = 30 n = 50 n = 100 80% 82% 84% 86% 88% 90% 91% 92% 93% 94% 95% 96% 97% 98% 99%

0.2222 0.2499 0.2813 0.3168 0.3558 0.4069 0.4432 0.4813 0.524 0.5751 0.6312 0.7157 0.8037 0.9616 1.1718

0.2867 0.3222 0.362 0.4118 0.4705 0.5529 0.5895 0.6461 0.7145 0.7693 0.8799 0.9962 1.1576 1.3457 1.7506

0.3358 0.3735 0.4194 0.4704 0.5368 0.6226 0.6794 0.7453 0.8138 0.905 1.0075 1.131 1.3227 1.5818 2.1098

0.3892 0.4369 0.4917 0.5618 0.6414 0.7445 0.8077 0.8765 0.9588 1.0482 1.1685 1.3444 1.5559 1.8794 2.4832

0.4464 0.5026 0.5688 0.6417 0.7325 0.8728 0.9381 1.0159 1.1101 1.2393 1.3909 1.5717 1.845 2.3157 3.1201

132

T. Ray and D. Sengupta

ˆ¯ mˆ 2 (0)/ M(t) ˆ Table 6 Empirical percentile points of the test statistic with w(t) = F(t) computed from data of various sample sizes n from the Exponential distribution percentile n = 10 n = 20 n = 30 n = 50 n = 100 80% 82% 84% 86% 88% 90% 91% 92% 93% 94% 95% 96% 97% 98% 99%

0.2797 0.3048 0.3335 0.3699 0.4119 0.4625 0.4936 0.5268 0.5684 0.6138 0.6708 0.7471 0.8514 1.0063 1.2375

0.2801 0.3119 0.3419 0.3804 0.4303 0.4906 0.5209 0.5598 0.6068 0.6641 0.7231 0.8182 0.9283 1.0936 1.4016

0.2813 0.3098 0.3418 0.3779 0.4184 0.4691 0.5007 0.5424 0.5812 0.6344 0.6902 0.7803 0.8952 1.0388 1.3815

0.2631 0.2903 0.3183 0.3529 0.3969 0.4441 0.4722 0.5133 0.5550 0.6048 0.6629 0.7369 0.8349 0.9710 1.2239

0.2437 0.2673 0.2960 0.3285 0.3655 0.4130 0.4426 0.4782 0.5149 0.5588 0.6174 0.7072 0.8034 0.9331 1.1667

distribution with scale parameter 1. All the values reported in the two tables are based on 10,000 simulation runs. These thresholds will be used in sequel for computations of simulated power and p-values of the test statistic for real data sets.

3.3 Power of the Test In order to get an idea about the power of the proposed test, we computed the test statistic for data simulated from the Weibull and gamma distributions with shape parameter α < 1 and calculated the fraction of time it exceeded the threshold for nominal level 0.05. Tables 7 and 8 exhibit the empirical powers of the test computed in this manner, when the underlying distributions are Weibull and Gamma, respectively, for a range of sample sizes and shape parameter values smaller than 1. The powers are computed on the basis of 1000 simulation runs. The power is generally seen to increase with a decrease in α and an increase in sample size. The test with weight ˆ¯ mˆ 2 (0)/ M(t) ˆ is found to have higher power than the one with function w(t) = F(t) uniform weight function (i.e. unweighted statistic).

Testing for the Goodness of Fit for the DMRL Class of Life Distributions

133

Table 7 Empirical power, at nominal level 0.05, of the test statistic computed from data of various sample sizes n from the Weibull distribution with different shape parameters α w(t) n α = 0.9 α = 0.8 α = 0.6 α = 0.4 α = 0.3 α = 0.2 1 1 1 1 ˆ¯ mˆ 2 (0)/ M(t) ˆ F(t) ˆ 2 ¯ ˆ F(t)mˆ (0)/ M(t)

10 30 50 100

0.078 0.098 0.096 0.095

0.113 0.155 0.148 0.200

0.215 0.339 0.413 0.482

0.417 0.670 0.730 0.849

0.485 0.761 0.867 0.958

0.532 0.873 0.956 0.986

10

0.088

0.153

0.330

0.684

0.839

0.955

0.100

0.217

0.646

0.951

0.993

1.000

0.130

0.260

0.744

0.991

1.000

1.000

0.136

0.345

0.900

1.000

1.000

1.000

30 ˆ¯ mˆ 2 (0)/ M(t) ˆ F(t) 50 ˆ 2 ¯ ˆ F(t)mˆ (0)/ M(t) 100

Table 8 Empirical power, at nominal level 0.05, of the test statistic computed from data of various sample sizes n from the Gamma distribution with different shape parameters α w(t) n α = 0.9 α = 0.8 α = 0.6 α = 0.4 α = 0.3 α = 0.2 1 1 1 1 ˆ¯ mˆ 2 (0)/ M(t) ˆ F(t) ˆ 2 ¯ ˆ F(t)mˆ (0)/ M(t)

10 30 50 100

0.064 0.069 0.068 0.060

0.066 0.097 0.093 0.090

0.121 0.142 0.158 0.167

0.185 0.240 0.267 0.314

0.249 0.351 0.372 0.380

0.335 0.481 0.521 0.573

10

0.071

0.081

0.163

0.334

0.485

0.687

0.070

0.100

0.222

0.536

0.763

0.951

0.071

0.101

0.267

0.630

0.860

0.987

0.074

0.140

0.338

0.801

0.977

1.000

30 ˆ¯ mˆ 2 (0)/ M(t) ˆ F(t) 50 ˆ 2 ¯ ˆ F(t)mˆ (0)/ M(t) 100

3.4 Size and Power for Censored Data For simulating the performance of the test in case of the censored data, we generate censoring times, independently of the simulated lifetimes, from an exponential distribution. While the lifetimes are generated under the conditions described in the previous section for complete samples, the parameter of the said exponential distribution is chosen such that its 20th percentile matches the 90th percentile of the lifetime distribution. This choice produced 6–10% censoring in the data. Tables 9 and 10 exhibit the empirical powers (computed from 1000 simulation runs) of the test at nominal level 0.05, for Weibull and Gamma lifetime distributions, respectively, for various sample sizes and shape parameters. The power is generally seen to increase with a decrease in α and an increase in sample size. The test with weight ˆ¯ mˆ 2 (0)/ M(t) ˆ is found to have higher power than the one with function w(t) = F(t) uniform weight function (i.e. unweighted statistic). There appears to be a marginal

134

T. Ray and D. Sengupta

Table 9 Empirical power, at nominal level 0.05, of the test statistic computed from censored data of various sample sizes n from the Weibull distribution with different shape parameters α w(t) n α = 1.0 α = 0.9 α = 0.8 α = 0.6 α = 0.4 α = 0.3 α = 0.2 1 1 1 1 ˆ¯ mˆ 2 (0)/ M(t) ˆ F(t) ˆ 2 ¯ ˆ F(t)mˆ (0)/ M(t)

10 30 50 100

0.029 0.038 0.032 0.036

0.046 0.061 0.073 0.084

0.082 0.100 0.113 0.110

0.156 0.204 0.291 0.318

0.261 0.435 0.498 0.626

0.371 0.523 0.615 0.738

0.420 0.576 0.651 0.715

10

0.036

0.058

0.091

0.273

0.600

0.811

0.929

0.034

0.094

0.190

0.521

0.912

0.996

1.000

0.039

0.094

0.200

0.652

0.982

1.000

1.000

0.052

0.107

0.251

0.808

1.000

1.000

1.000

30 ˆ¯ mˆ 2 (0)/ M(t) ˆ F(t) 50 ˆ 2 ¯ ˆ F(t)mˆ (0)/ M(t) 100

Table 10 Empirical power, at nominal level 0.05, of the test statistic computed from censored data of various sample sizes n from the Gamma distribution with different shape parameters α w(t) n α = 1.0 α = 0.9 α = 0.8 α = 0.6 α = 0.4 α = 0.3 α = 0.2 1 1 1 1 ˆ¯ mˆ 2 (0)/ M(t) ˆ F(t) ˆ 2 ¯ ˆ F(t)mˆ (0)/ M(t)

10 30 50 100

0.029 0.038 0.032 0.036

0.028 0.045 0.047 0.048

0.048 0.059 0.061 0.063

0.089 0.095 0.098 0.116

0.156 0.178 0.203 0.209

0.194 0.242 0.262 0.295

0.253 0.344 0.356 0.436

10

0.036

0.051

0.056

0.147

0.250

0.402

0.641

0.034

0.054

0.079

0.179

0.476

0.708

0.928

0.039

0.063

0.091

0.218

0.569

0.829

0.986

0.052

0.069

0.090

0.256

0.740

1.000

1.000

30 ˆ¯ mˆ 2 (0)/ M(t) ˆ F(t) 50 ˆ 2 ¯ ˆ F(t)mˆ (0)/ M(t) 100

loss of power due to censoring. The conclusions drawn in the previous section for the case of complete samples continue to hold.

4 Data Analysis We present an illustrative analysis of two data sets to demonstrate how the proposed test can be useful. We first consider the data set on the number of1000 s of cycles to failure for electrical appliances in a life test, used as an example by (Lawless, 2003). The sorted (complete) data on failure time are as follows: 0.014, 0.034, 0.059, 0.061, 0.069, 0.08, 0.123, 0.142, 0.165, 0.21, 0.381, 0.464, 0.479, 0.556, 0.574, 0.839, 0.917, 0.969, 0.991, 1.064, 1.088, 1.091, 1.174, 1.27, 1.275, 1.355, 1.397, 1.477, 1.578, 1.649, 1.702, 1.893, 1.932, 2.001, 2.161, 2.292,

1000000

Integrated MRL

1500000

135

500000

10 0

Estimated integrated MRL Least concave majorant 0

2

4

6 Time

8

10

Estimated integrated MRL Least concave majorant

0

5

Integrated MRL

15

Testing for the Goodness of Fit for the DMRL Class of Life Distributions

0

500

1000

1500

2000

Time

Fig. 2 Estimated integrated MRL and its least concave majorant for the electrical appliance data (left) and the insulating fluid data (right)

2.326, 2.337, 2.628, 2.785, 2.811, 2.886, 2.993, 3.122, 3.248, 3.715, 3.79, 3.857, 3.912, 4.1, 4.106, 4.116, 4.315, 4.51, 4.58, 5.267, 5.299, 5.583, 6.065, 9.701. Hollander and Proschan’s (1975) test for this data produces the p-value 0.008, indicating rejection of the hypothesis of exponentiality in favour of the DMRL alternative. The plot given in the left panel of Fig. 2 shows some gap between the integrated MRL and its LCM. However, the proposed test statistic with weight ˆ¯ mˆ 2 (0)/ M(t) ˆ happens to have p-value 0.546. Thus, there is insufficient w(t) = F(t) evidence of the distribution not belonging to the DMRL family. As another example, we consider the data set on hours to electrical breakdown of an insulating fluid, available as the data frame ifluid as the reliability data set in the R package survival. The sorted (complete) data set is given below: 0.09,0.19, 0.39, 0.47, 0.73, 0.74, 0.78, 0.96, 1.13, 1.31, 1.40, 2.38, 2.78, 3.16, 4.15, 4.67, 4.85, 5.79, 6.50, 7.35, 7.74, 8.01, 8.27, 12.06, 17.05, 20.46, 21.02, 22.66, 31.75, 32.52, 33.91, 36.71, 43.40, 47.30, 72.89, 139.07, 144.12, 175.88, 194.90, 1579.52, 2323.70. The p-value obtained by Hollander and Proschan’s (1975) test for this data is 0.999, indicating non-rejection of the null hypothesis of the exponential distribution. The right panel of Fig. 2 shows that the integrated MRL is far from that of exponential (which should be linear) and that there is a gap between the integrated MRL and its ˆ¯ mˆ 2 (0)/ M(t) ˆ produces a LCM. The proposed test statistic with weight w(t) = F(t) −4 p-value smaller than 10 , confirming that the distribution is not DMRL.


Appendix: Proofs of Theoretical Results

Proof of Proposition 1. It follows from (7) that for $X_{(i)} < x < X_{(i+1)}$, $i = 0, 1, \ldots, n_u - 1$,
$$\hat M'(x) = b_i - \big(x - X_{(i)}\big) > b_i - \big(X_{(i+1)} - X_{(i)}\big) = \sum_{j=i+1}^{n_u-1} \frac{K_j}{K_i}\big(X_{(j+1)} - X_{(j)}\big) \ge 0.$$
This proves part (a). Further, observe that for $i = 1, \ldots, n_u - 1$,
$$\hat M'\big(X_{(i)}+\big) = b_i = \sum_{j=i}^{n_u-1} \frac{K_j}{K_i}\big(X_{(j+1)} - X_{(j)}\big) > \sum_{j=i}^{n_u-1} \frac{K_j}{K_{i-1}}\big(X_{(j+1)} - X_{(j)}\big) = \sum_{j=i-1}^{n_u-1} \frac{K_j}{K_{i-1}}\big(X_{(j+1)} - X_{(j)}\big) - \big(X_{(i)} - X_{(i-1)}\big) = b_{i-1} - \big(X_{(i)} - X_{(i-1)}\big) = \hat M'\big(X_{(i)}-\big).$$
This completes the proof. □

Proof of Proposition 2. It follows from Proposition 1 that, whenever $x > X_{(i)}$,
$$\begin{aligned}
P_i(x) &= M_i + b_i\big(x - X_{(i)}\big) - \tfrac12\big(x - X_{(i)}\big)^2 \\
&= M_{i-1} + b_{i-1}\big(X_{(i)} - X_{(i-1)}\big) - \tfrac12\big(X_{(i)} - X_{(i-1)}\big)^2 + b_i\big(x - X_{(i)}\big) - \tfrac12\big(x - X_{(i)}\big)^2 \\
&> M_{i-1} + b_{i-1}\big(X_{(i)} - X_{(i-1)}\big) - \tfrac12\big(X_{(i)} - X_{(i-1)}\big)^2 + b_{i-1}\big(x - X_{(i)}\big) - \tfrac12\big(x - X_{(i)}\big)^2 - \big(X_{(i)} - X_{(i-1)}\big)\big(x - X_{(i)}\big) \\
&= M_{i-1} + b_{i-1}\big[\big(X_{(i)} - X_{(i-1)}\big) + \big(x - X_{(i)}\big)\big] - \tfrac12\big[\big(X_{(i)} - X_{(i-1)}\big) + \big(x - X_{(i)}\big)\big]^2 \\
&= M_{i-1} + b_{i-1}\big(x - X_{(i-1)}\big) - \tfrac12\big(x - X_{(i-1)}\big)^2 = P_{i-1}(x).
\end{aligned}$$
Therefore, for $0 \le i < j \le n_u - 1$, we have $P_j(x) > P_i(x)$ whenever $x > X_{(j)}$, i.e. all points on $P_j$ lie above the extended parabola $P_i$. The statement of Part (a) follows. When $x < X_{(i+1)}$, it follows from Proposition 1 that
$$\begin{aligned}
P_{i+1}(x) &= M_{i+1} + b_{i+1}\big(x - X_{(i+1)}\big) - \tfrac12\big(x - X_{(i+1)}\big)^2 \\
&= M_i + b_i\big(X_{(i+1)} - X_{(i)}\big) - \tfrac12\big(X_{(i+1)} - X_{(i)}\big)^2 + b_{i+1}\big(x - X_{(i+1)}\big) - \tfrac12\big(x - X_{(i+1)}\big)^2 \\
&< M_i + b_i\big(X_{(i+1)} - X_{(i)}\big) - \tfrac12\big(X_{(i+1)} - X_{(i)}\big)^2 + b_i\big(x - X_{(i+1)}\big) - \tfrac12\big(x - X_{(i+1)}\big)^2 - \big(X_{(i+1)} - X_{(i)}\big)\big(x - X_{(i+1)}\big) \\
&= M_i + b_i\big[\big(X_{(i+1)} - X_{(i)}\big) + \big(x - X_{(i+1)}\big)\big] - \tfrac12\big[\big(X_{(i+1)} - X_{(i)}\big) + \big(x - X_{(i+1)}\big)\big]^2 \\
&= M_i + b_i\big(x - X_{(i)}\big) - \tfrac12\big(x - X_{(i)}\big)^2 = P_i(x).
\end{aligned}$$
Therefore, if $x < X_{(i+1)}$, we have for $0 \le i < j \le n_u - 1$, $P_i(x) > P_{i+1}(x) > \cdots > P_j(x)$, i.e. all points on $P_i$ lie above the extended parabola $P_j$. The statement of Part (b) follows. □

Proof of Proposition 3. By equating $P_i'(x)$ with the slope of the linear connector of $\big(x, P_i(x)\big)$ and $\big(X_{(j)}, M_j\big)$, we obtain
$$b_i - \big(x - X_{(i)}\big) = \frac{M_j - M_i - b_i\big(x - X_{(i)}\big) + \tfrac12\big(x - X_{(i)}\big)^2}{X_{(j)} - x},$$
i.e.,
$$b_i\big(X_{(j)} - x\big) - \big(X_{(j)} - x\big)\big(x - X_{(i)}\big) = M_j - M_i - b_i\big(x - X_{(i)}\big) + \tfrac12\big(x - X_{(i)}\big)^2,$$
i.e.,
$$\big(x - X_{(j)}\big)\big(x - X_{(i)}\big) = M_j - M_i - b_i\big(X_{(j)} - X_{(i)}\big) + \tfrac12\big(x - X_{(i)}\big)^2,$$
i.e.,
$$\tfrac12 x^2 - X_{(j)}x - M_j + M_i + b_i\big(X_{(j)} - X_{(i)}\big) + X_{(i)}X_{(j)} - \tfrac12 X_{(i)}^2 = 0,$$
i.e.,
$$x = X_{(j)} \pm \Big[\big(X_{(j)} - X_{(i)} - b_i\big)^2 - b_i^2 + 2\big(M_j - M_i\big)\Big]^{1/2}.$$
Proposition 2 implies that real solutions exist and the requisite solution $x^{ml}_{i,j}$ is obtained by using the smaller of the two roots, and $y^{ml}_{i,j} = P_i\big(x^{ml}_{i,j}\big)$. The result stated in Part (a) follows.
By interchanging $i$ and $j$, we get the points of contact of the two tangents from $\big(X_{(i)}, M_i\big)$ to extended $P_j$, if they exist, as
$$x = X_{(i)} \pm \Big[\big(b_j + X_{(j)} - X_{(i)}\big)^2 - b_j^2 - 2\big(M_j - M_i\big)\Big]^{1/2}.$$
Existence of the solution is guaranteed by Proposition 2, which also implies that the larger of these two roots is $x^{lm}_{i,j}$, and $y^{lm}_{i,j} = P_j\big(x^{lm}_{i,j}\big)$. The result stated in Part (b) follows.


In order to identify a common tangent to the two parabolas, we equate $P_i'(x_1)$ and $P_j'(x_2)$ with one another and with the slope of the linear connector of $\big(x_1, P_i(x_1)\big)$ and $\big(x_2, P_j(x_2)\big)$, to obtain
$$b_i - \big(x_1 - X_{(i)}\big) = b_j - \big(x_2 - X_{(j)}\big) = \frac{M_j + b_j\big(x_2 - X_{(j)}\big) - \tfrac12\big(x_2 - X_{(j)}\big)^2 - M_i - b_i\big(x_1 - X_{(i)}\big) + \tfrac12\big(x_1 - X_{(i)}\big)^2}{x_2 - x_1} = \frac{M_j - M_i + \tfrac12 b_j^2 - \tfrac12 b_i^2 - \tfrac12\big(b_j - x_2 + X_{(j)}\big)^2 + \tfrac12\big(b_i - x_1 + X_{(i)}\big)^2}{x_2 - x_1}.$$
By making multiple use of the first equality for simplification of the last expression, we have
$$b_i - \big(x_1 - X_{(i)}\big) = b_j - \big(x_2 - X_{(j)}\big) = \frac{M_j - M_i + \tfrac12 b_j^2 - \tfrac12 b_i^2}{b_j - b_i + X_{(j)} - X_{(i)}}.$$
These equations lead to the unique solution (signifying uniqueness of the common tangent)
$$x_1 = b_i + X_{(i)} - \frac{M_j - M_i + \tfrac12 b_j^2 - \tfrac12 b_i^2}{b_j - b_i + X_{(j)} - X_{(i)}}, \qquad
x_2 = b_j + X_{(j)} - \frac{M_j - M_i + \tfrac12 b_j^2 - \tfrac12 b_i^2}{b_j - b_i + X_{(j)} - X_{(i)}},$$
which simplify further to the expressions of $x^l_{i,j}$ and $x^r_{i,j}$ given in Part (c). The corresponding ordinates are obtained from the equations of the respective parabolas. □

Proof of Proposition 4. The rationale behind choosing the nine cases has been explained before the statement of the proposition. We only have to justify the mathematical descriptions of the conditions and the expressions of $\mathcal L_{ij}$ given in each case.

Case I (LR): The LCM would consist of a single line if both $P_i$ and $P_j$ lie below the line segment connecting the points $\big(X_{(i)}, M_i\big)$ and $\big(X_{(j+1)}, M_{j+1}\big)$. By comparing slopes, we conclude that this happens if and only if $s_{i,j+1} \ge b_i$ and $s_{i,j+1} \le b_j - \big(X_{(j+1)} - X_{(j)}\big)$. It is easy to see that, in this case, the above line segment itself is the LCM. We would now consider three other cases by reversing one or both of the inequalities, together with further subcases.

Case II: Suppose $s_{i,j+1} \ge b_i$ and $s_{i,j+1} > b_j - \big(X_{(j+1)} - X_{(j)}\big)$. In this case, $P_i$ is entirely below the line segment connecting the points $\big(X_{(i)}, M_i\big)$ and $\big(X_{(j+1)}, M_{j+1}\big)$, but $P_j$ is not. It follows from Proposition 3(b) that another line passing through $\big(X_{(i)}, M_i\big)$ with a greater slope would touch extended $P_j$ at $x^r_{i,j} < X_{(j+1)}$.

Subcase LL: $x^r_{i,j} \le X_{(j)}$. In this case $P_j$ would lie entirely below the tangent drawn from $\big(X_{(i)}, M_i\big)$ to extended $P_j$ as per Proposition 3(b). The line with smallest slope passing through $\big(X_{(i)}, M_i\big)$, under which $P_j$ lies, is the one that passes through $\big(X_{(j)}, M_j\big)$. Therefore, this line segment together with $P_j$ forms the LCM.

Subcase LM: $X_{(j)} < x^r_{i,j} < X_{(j+1)}$. In this case the tangent to extended $P_j$ drawn from $\big(X_{(i)}, M_i\big)$ touches the parabolic segment $P_j$, which means the LCM consists of this tangent and the part of $P_j$ lying to the right of the point of contact.

Case III: Suppose $s_{i,j+1} < b_i$ and $s_{i,j+1} \le b_j - \big(X_{(j+1)} - X_{(j)}\big)$. In this case, $P_j$ is entirely below the line segment connecting the points $\big(X_{(i)}, M_i\big)$ and $\big(X_{(j+1)}, M_{j+1}\big)$, but $P_i$ is not. It follows from Proposition 3(a) that another line passing through $\big(X_{(j+1)}, M_{j+1}\big)$ with a smaller slope would touch extended $P_i$ at $x^l_{i,j+1} > X_{(i)}$.

Subcase RR: $x^l_{i,j+1} \ge X_{(i+1)}$. In this case, $P_i$ would lie entirely below the tangent drawn from $\big(X_{(j+1)}, M_{j+1}\big)$ to extended $P_i$ as per Proposition 3(a). The line with highest slope passing through $\big(X_{(j+1)}, M_{j+1}\big)$, under which $P_i$ lies, is the one that passes through $\big(X_{(i+1)}, M_{i+1}\big)$. Therefore, this line segment together with $P_i$ forms the LCM.

Subcase MR: $X_{(i)} < x^l_{i,j+1} < X_{(i+1)}$. In this case, the tangent to extended $P_i$ drawn from $\big(X_{(j+1)}, M_{j+1}\big)$ touches the parabolic segment $P_i$, which means the LCM consists of this tangent and the part of $P_i$ lying to the left of the point of contact.

Case IV: Suppose $s_{i,j+1} < b_i$ and $s_{i,j+1} > b_j - \big(X_{(j+1)} - X_{(j)}\big)$. In this case, neither $P_i$ nor $P_j$ is entirely below the line segment connecting the points $\big(X_{(i)}, M_i\big)$ and $\big(X_{(j+1)}, M_{j+1}\big)$. It follows that at least some part of each of the parabolic segments $P_i$ and $P_j$ will form parts of the LCM. There can be four mutually exclusive and exhaustive cases delineating whether the whole or a part of the two segments will be involved.

Subcase MM: Only parts of $P_i$ and $P_j$ coincide with the LCM. In this case, the common tangent (with positive slope) to extended $P_i$ and extended $P_j$ happens to touch both the segments (not their extended versions). The middle part of the LCM should then consist of that portion of the common tangent which lies in between the points of contact with $P_i$ and $P_j$. The mathematical condition for this scenario and the expression of the common tangent follow from Proposition 3(c).

Subcase RL: The entire segments $P_i$ and $P_j$ are part of the LCM. This case may be characterized by the slope $s_{i+1,j}$ of the linear connector of the points

$\big(X_{(i+1)}, M_{i+1}\big)$ and $\big(X_{(j)}, M_j\big)$, which should be smaller than the slope of $P_i$ at its right end-point and greater than the slope of $P_j$ at its left end-point. The LCM should consist of the said linear connector, $P_i$ to its left and $P_j$ to its right.

Subcase ML: A part of $P_i$ and the entire segment $P_j$ coincide with the LCM. The in-between part should be the tangent drawn from $\big(X_{(j)}, M_j\big)$ to extended $P_i$, which touches the latter within the segment $P_i$. The expression for the tangent and its point of contact are obtained from Proposition 3(a). This case is characterized by the location of the point of contact and the slope of the tangent, which should be greater than the slope of $P_j$ at its left end-point.

Subcase RM: The entire segment $P_i$ and a part of $P_j$ coincide with the LCM. The in-between part should be the tangent drawn from $\big(X_{(i+1)}, M_{i+1}\big)$ to extended $P_j$, which touches the latter within the segment $P_j$. The expression for the tangent and its point of contact are obtained from Proposition 3(b). This case is characterized by the location of the point of contact and the slope of the tangent, which should be less than the slope of $P_i$ at its right end-point. □

Proof of Proposition 5. Since each of the functions $\mathcal L_{ij}$ for $0 \le i < j < n_u$ is concave, their supremum is concave too. Further,
$$L_{\hat M}(x) \ge \mathcal L_{ij}(x) \ge \max\{P_i(x), P_j(x)\} \ge \hat M(x) \quad \text{for } X_{(i)} \le x \le X_{(j+1)}.$$
Finally, if there is another concave majorant $\tilde M$ of $\hat M$ that is smaller than $L_{\hat M}$ at some point $x_0 \in \big(0, X_{(n_u)}\big)$, then there exists an $\epsilon$-neighbourhood of $x_0$ where $\tilde M$ is smaller than $L_{\hat M}$, and therefore smaller than $\mathcal L_{ij}$ for some $0 \le i < j < n_u$. This means $\tilde M$ cannot be a concave majorant of $P_i$ and $P_j$, which is a contradiction. This completes the proof. □

Proof of Proposition 6. It suffices to show that the test statistic converges in probability to 0 under the null hypothesis, and does not do so under the alternative hypothesis. Under conditions (i) and (iii), uniform convergence of $\hat m$ to $m$ in probability over the interval $[0, \tau)$ follows from Theorem 1.1 of Csörgő and Zitikis (1996) and the subsequent discussion. Then,
$$\sup_{x\in[0,\tau)} \big|\hat M(x) - M(x)\big| \le \sup_{x\in[0,\tau)} \int_0^x \big|\hat m(u) - m(u)\big|\,du = \int_0^\tau \big|\hat m(u) - m(u)\big|\,du \le \tau \sup_{x\in[0,\tau)} \big|\hat m(x) - m(x)\big|.$$
Therefore, if $\epsilon > 0$, we have
$$P\Big(\sup_{x\in[0,\tau)} \big|\hat M(x) - M(x)\big| > \epsilon\Big) \le P\Big(\sup_{x\in[0,\tau)} \big|\hat m(x) - m(x)\big| > \frac{\epsilon}{\tau}\Big).$$
In other words, uniform convergence of $\hat m$ to $m$ over $[0, \tau)$ in probability implies uniform convergence of $\hat M$ to $M$ over that interval in probability.

Suppose $F$ is a DMRL distribution. Given $\epsilon > 0$,
$$\sup_{x\in[0,\tau)} \big|\hat M(x) - M(x)\big| \le \frac{\epsilon}{2} \;\Rightarrow\; M(x) - \frac{\epsilon}{2} \le \hat M(x) \le M(x) + \frac{\epsilon}{2} \quad \forall x\in[0,\tau).$$
Since $M + \epsilon/2$ is a concave function, whenever it lies above $\hat M$, it has to lie above $L_{\hat M}$ too. Therefore,
$$\sup_{x\in[0,\tau)} \big|\hat M(x) - M(x)\big| \le \frac{\epsilon}{2} \;\Rightarrow\; M(x) - \epsilon/2 \le \hat M(x) \le L_{\hat M}(x) \le M(x) + \epsilon/2 \ \ \forall x\in[0,\tau)$$
$$\Rightarrow\; \sup_{x\in[0,\tau)} \big(L_{\hat M}(x) - \hat M(x)\big) \le \epsilon \;\Rightarrow\; \sup_{x\in[0,\tau)} w(x)\big(L_{\hat M}(x) - \hat M(x)\big) \le w_b\,\epsilon,$$
where $w_b$ is the upper bound of $w$ mentioned in condition (ii). Thus,
$$P\Big(\sup_{x\in[0,\tau)} w(x)\big(L_{\hat M}(x) - \hat M(x)\big) > \epsilon\Big) \le P\Big(\sup_{x\in[0,\tau)} \big|\hat M(x) - M(x)\big| > \frac{\epsilon}{w_b}\Big).$$
It follows that the probability limit of the test statistic, scaled by $\hat m^2(0)$, is 0 under the conditions stated in the proposition. Uniform convergence of $\hat m$ to $m$ in probability further ensures that in such a case the test statistic itself converges to 0 in probability.

Now suppose $F$ is a non-DMRL distribution. For sufficiently large $\tau$, there exist points $x_1 < x_2 < x_3$ in $[0, \tau)$ and a $\delta > 0$ such that
$$M(x_2) < \frac{x_2 - x_1}{x_3 - x_1} M(x_3) + \frac{x_3 - x_2}{x_3 - x_1} M(x_1) - 4\delta.$$
Therefore, whenever $\big|\hat M(x_i) - M(x_i)\big| < \delta$, $i = 1, 2, 3$, we have
$$L_{\hat M}(x_2) \ge \frac{x_2 - x_1}{x_3 - x_1} \hat M(x_3) + \frac{x_3 - x_2}{x_3 - x_1} \hat M(x_1) > \frac{x_2 - x_1}{x_3 - x_1} M(x_3) + \frac{x_3 - x_2}{x_3 - x_1} M(x_1) - 2\delta > M(x_2) + 2\delta > \hat M(x_2) + \delta.$$
Thus,
$$P\Big(\sup_{x\in[0,\tau)} \big|L_{\hat M}(x) - \hat M(x)\big| > \delta\Big) \ge P\big(L_{\hat M}(x_2) - \hat M(x_2) > \delta\big) \ge P\big(\big|\hat M(x_i) - M(x_i)\big| < \delta,\ i = 1, 2, 3\big) \ge P\Big(\sup_{x\in[0,\tau)} \big|\hat M(x) - M(x)\big| < \delta\Big),$$
which goes to 1 as $n$ goes to infinity because of the uniform convergence of $\hat M$ to $M$ over $[0, \tau)$ in probability. Therefore, as $n \to \infty$,
$$P\Big(\sup_{x\in[0,\tau)} w(x)\,\big|L_{\hat M}(x) - \hat M(x)\big| > w_a\,\delta\Big) \to 1,$$
where $w_a$ is the positive lower bound of $w$ mentioned in condition (ii). It follows that the test statistic does not converge to 0 in probability. □

References Abu-Youssef SE (2002) A moment inequality for decreasing (increasing) mean residual life distributions with hypothesis testing application. Stat Probab Lett 57:171–177 Ahmad IA (1992) A new test for mean residual life times. Biometrika 79:416–419 Ahmad IA, Mugdadi AR (2004) Further moments inequalities of life distributions with hypothesis testing applications: the IFRA, NBUE and DMRL classes. J Stat Plan Infer 120:1–12 Andersen PK, Borgan Ø, Gill RD, Keiding N (1993) Statistical models based on counting processes. Springer-Verlag, New York Anis MZ (2010) On testing exponentiality against DMRL alternatives. Econ Qual Control 25:281– 299 Ascher S (1990) A survey of tests for exponentiality. Comm Stat Theor Meth 19:1811–1825 Bandyopadhyay D, Basu AP (1990) A class of tests for exponentiality against decreasing mean residual life alternatives. Comm Stat Theor Meth 19:905–920 Bergman B, Klefsjö B (1989) A family of test statistics for detecting monotone mean residual life. J Stat Plan Inf 21:161–178 Bryson MC, Siddiqui MM (1969) Some criteria for aging. J Am Stat Assoc 64:1472–1483 Cheng K, He Z (1989) On proximity between exponential and DMRL distributions. Stat Probab Lett 8:55–57 Csörg˝o M, Zitikis R (1996) Mean residual life processes. Ann Stat 24:1717–1739 Hollander M, Proschan F (1975) Tests for the mean residual life. Biometrika 62:585–593. (Amendments and corrections: 67(1):259, 1980) Klefsjö B (1982a) The hnbue and hnwue classes of life distributions. Naval Res Logistics 29:331– 345 Klefsjö B (1982b) On aging properties and total time on test transforms. Scandinavian J Stat 9:37–41 Klefsjö B (1983) Some tests against aging based on the total time on test transform. Comm Stat Theor Meth 12:907–927 Lai C-D (1994) Tests of univariate and bivariate stochastic ageing. IEEE Trans Rel 43:233–241 Lai C-D, Xie M (2006) Stochastic ageing and dependence for reliability. Springer, New York Lawless JF (2003) Statistical models and methods for lifetime data. Wiley, Hoboken, New Jersey Lim J-H, Park DH (1993) Test for DMRL using censored data. J Nonparametric Stat 3:167–173 Lim J-H, Park DH (1997) A family of test statistics for DMRL (IMRL) alternatives. J Nonparametric Stat 8:379–392 Santner TJ, Tenga R (1984) Testing goodness of fit to the increasing failure rate family with censored data. Naval Res Logistics 31:631–646 Sengupta D (2013) Testing for and against ageing. J Ind Stat Assoc 51:231–252 Sengupta D, Das S (2016) Sharp bounds on DMRL and IMRL classes of life distributions with specified mean. Stat Probab Lett 119:101–107


Srivastava, R., Li, P. & Sengupta, D. (2012). Testing for membership to the IFRA and the NBU classes of distributions. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2012, La Palma, Canary Islands, Spain, April 21-23, 2012, N. D. Lawrence & M. A. Girolami, eds., vol. XX Stephens MA (1986) Tests for the exponential distribution. In: D’Agostino RB, Stephens MA (eds) Goodness-of-fit techniques. Statistics textbooks and monographs, vol. 68. Marcel Dekker, New York, pp 421–460 Tenga R, Santner TJ (1984) Testing goodness of fit to the increasing failure rate family. Naval Res Logistics 31:617–630 Willmot GE, Lin S (2001) Lundberg approximations for compound distributions with insurance applications. Lecture Notes in Statistics, vol. 156. Springer, New York

Linear Empirical Bayes Prediction of Employment Growth Rates Using the U.S. Current Employment Statistics Survey

P. Lahiri and Bogong T. Li

1 Introduction

Tukey (1979) put forward the following argument to make the point that measuring change is often more important than measuring level in economic series: "If you tell those who do not know what the unemployment rate is today that in 1980 it will be 6.5%, they will not know whether to be sad or glad. If you tell them it will be 1 or 2 or 3 percent less than today, they will know how to react." A simple measure of change in economic series is the growth rate, defined as the ratio of the level at the current time to that at the previous time point. The growth rate is easy to interpret. For example, a growth rate of 1.02 simply means that the level has increased by 2%. On the other hand, a growth rate of 0.98 stands for a decline of 2% in level. An alternate measure of change is the difference between the levels of two consecutive time points. However, given the previous time point level, these two measures are equivalent. The standard design-based ratio estimator (see, e.g., Cochran 1977) can be used to estimate the growth rate. The estimated growth rate, when multiplied by the estimated previous time point level, produces the current time point level estimate. The difference between this current time point and the previous time point level estimates is an estimate of the alternative difference measure of change. In the context of the U.S. Current Employment Statistics (CES) survey, the U.S. Bureau of Labor Statistics publishes such difference measures, commonly known as the month-to-month estimates, to measure payroll employment changes.



Analysts and public policymakers are often interested in some disaggregated analyses. For example, payroll employment for certain sectors of industry (e.g., manufacturing, construction, etc.) or certain geographic regions (e.g., states or provinces) is of interest. The growth rate and the associated level estimate can be computed separately for each such cell. These level estimates for cells can then be combined to produce the national estimate. The U.S. Bureau of Labor Statistics publishes level estimates and month-to-month changes of the payroll employment for different sectors of industry (cells) and the nation. An early knowledge of the economic conditions is useful to public policymakers and financial investors, among others. In order to meet such demands, preliminary estimates are very common in economic series. These are usually based on incomplete data and are subject to more sampling errors and perhaps systematic late reporting errors if the late reporters are very different from the early reporters. But, according to Tukey (1979), “We should: Measure what is needed for policy guidance, even if it can only be measured poorly, …..If the price is first a preliminary value and then a revision, let us pay it.” In fact, revisions of preliminary estimates are common in economic series. One should apply caution in measuring changes in the presence of revisions. In order to produce the preliminary estimates, Tukey (1979) suggested the form “last period final PLUS best estimate of change.” This general principle was followed in Rao et al. (1989) who gave a comprehensive theory of estimation of level and change. In the USA, preliminary estimates of the payroll employment using the Current Employment Statistics Survey, a nationwide survey of business establishments, is probably the most-watched employment data. Preliminary estimates are used by different macroeconomists in academia, government and private sectors in understanding the U.S. business cycles. The preliminary estimates are revised several times. Late reporting and nonresponse could make these series different, sometimes by a considerable amount. According to Neumark and Wascher (1991), “several large and well-publicized revisions during 1988 and 1989 led to concerns about the general quality of government statistics and to some criticism of the BLS.” Over the years, the BLS has taken several steps in various stages of its survey operation for continually improving upon the quality of preliminary estimates, including switching the sampling from a nonprobability sampling design to a probability sampling design. In Sect. 2 of this note, we describe the data we use for this paper. Much of the theory on parametric empirical Bayes was developed by Efron and Morris (1973) and Morris (1983). The parametric empirical Bayes method has been found to be effective in combining information from various sources. Some important applications of parametric empirical Bayes method can be found in Efron and Morris (1975), Carter and Rolph (1974), Fay and Herriot (1979), Stokes and Yeh (1988), Gershunskaya and Lahiri (2018), and others. Parametric empirical Bayes approach has been robustified using the linear empirical Bayes approach where the full specification of the parametric distribution is replaced by certain moment assumptions (see, e.g., Lahiri 1986; Ghosh and Lahiri 1987; Ghosh and Meeden 1997). 
In this paper, we propose a linear empirical Bayes prediction (LEBP) approach that aims to reduce the gap between the preliminary and final estimates using historical


data. To this end, in Sect. 3, we propose a two-level robust Bayesian model. The purpose of the first level is to link the preliminary design-based estimates to the corresponding final revisions. The second level is used to understand the effects of geographical area, industry, and the month of a year on the final design-based estimates. Historical data are used to fit the two-level model. Copeland (2003a; b), Copeland and Valliant (2007) considered an alternate approach that requires modeling at the establishment level to impute employment for the late responders. Our model is much simpler than theirs and avoids the difficult modeling exercise at the establishment level. Previously, Neumark and Wascher (1991) considered a similar problem on a different economic series using the old CES survey that does not use a probability sampling design. In Sect. 4, we use CES data on the preliminary and final design-based estimates for the period March 2003–March 2004 to fit the robust Bayesian model. We compare the proposed LEBP with the preliminary design-based estimator in predicting final revisions for April 2004. Since final revisions for April 2004 are available, such evaluation offers a robust method. Our data analysis shows that our simple robust LEBP method is quite effective in reducing the revision. When compared to the BLS design-based preliminary estimator, LEBP achieves an average 30% reduction in revision. In some cases, the reduction in revision can be more than 45%.

2 The Data The Current Employment Statistics (CES) survey is an important monthly survey conducted by the U.S. Bureau of Labor Statistics (BLS). The survey provides monthly data on employment, earnings, and hours of nonagricultural establishments in the USA. The monthly data refers to the pay period that includes the 12th of the month, a period that is standard for all Federal agencies collecting employment data from business establishments. Employment covers all employees and subcategories of workers for certain industries, e.g., production workers in manufacturing and mining industries, construction workers in construction industry, and nonsupervisory workers for the remaining private sector industries. Aggregate payroll, i.e., the income sum from work before tax and other deductions, is used to estimate total earnings by U.S. workers reported on establishment payroll. The survey also contains information on total hours worked, paid overtime hours, average hourly earnings, real earnings, and straight-time average hourly earnings. The CES provides the above statistics in considerable geographic and industrial details. At the national level, estimates of employment, earnings, and hours are provided for 5200 North American Industrial Classification System (NAICS) industries, representing 92% of four-digit, 86% of five-digit, and 44% of six-digit NAICS industries. For the 50 states, District of Columbia, Puerto Rico, the Virgin Islands, and 288 metropolitan areas, detailed NAICS industry series are published by the BLS and State Employment Security Agencies (ESA) that cooperate with the BLS in collecting the state and area information.


The sample design of CES is a stratified, simple random sample of establishments, clustered by unemployment insurance (UI) accounts. Strata are defined by state, NAICS industry, and employment size. Sampling rates for the strata are determined through an optimum allocation formula. In 2003, the CES sample included about 160,000 businesses and government agencies representing approximately 400,000 individual worksites. This is a sample from 8 million nonfarm business establishments (defined as an economic unit that produces goods or services) in the USA. The active CES sample covers approximately one-third of all nonfarm payroll workers. CES uses a weighted link-relative estimator to estimate the total employment, earnings, and hours. Weighted link-relative estimator uses a weighted sample trend within an estimation cell to move the prior month’s estimate forward for that cell. Each month the CES program releases major economic indicators. The uses of the survey are significant both in terms of their impact on the national economic policy and private business decisions. It supplies a significant component in the Index of Coincident Economic Indicators that measures current economic activity, and leading economic indicators forecasting changes in the business cycle. The CES earning component is used to estimate preliminary personal income of the National Income and Product Accounts. U.S. productivity measures are based on the CES aggregate hours data. The BLS and state ESAs conduct employment projections based on the CES data. Business firms, labor unions, universities, trade associations, and private research organizations use the CES data to study economic conditions and to develop plans for the future. Preliminary CES estimates are generated 3–4 weeks after the survey reference period, a pay period containing the 12th of the month, or 5 business days after the deadline to hand in the requested information. The speed of delivery can increase late response and nonresponse resulting in large revisions of preliminary estimates in the subsequent months. Currently, preliminary estimates are based on only about 74% of the total CES sample. Two subsequent revisions in the next 2 months, however, incorporate the late reporters. Though the final revisions, also called the third closing estimates, are released 2 months later, the preliminary estimates are the most critical in terms of different uses and tend to receive the highest visibility. Many shortterm financial decisions are made based on preliminary estimates. Current economic conditions are assessed based on these immediately available data. Large revisions in the subsequent months help obtaining the most accurate statistics, though some damage may have already been done by relatively inaccurate preliminary estimates. Revisions also cause confusions among users who may regard the difference as sampling errors. Some users, on the other hand, perceive the survey performance based on the magnitude of the revisions. The size of revisions varies across geography and depends on the industry, time of a year, location, and other factors. The percent revisions at the state and local levels are generally higher than those at the national level. However, even a very tiny percentage revision at the national level could change the employment situation dramatically. The total U.S. nonfarm employment in 2005 stands about 130 million and the average monthly change in employment (mostly increase) since 1995 is about 131,000. 
Therefore, roughly a 0.1% revision can turn a job increase into a job


decrease situation. At the state and area levels, the average revision is about 1%, and so a more significant level of revision is expected. Since at state level, revision could be positive or negative, at national level, the gross revision should be lower. Compared with the national level, state level estimates generally have higher sampling errors. The CES program has made efforts to make preliminary estimates as accurate as possible in order to avoid large subsequent revisions. For example, since the size of revisions is associated with late reporting and nonresponse, program office has taken steps to improve on response rates through updating and completing current establishment address information, sending advanced notice, providing nonresponse prompts, improving marketing of the survey, expanding survey collection modes (e.g. internet), etc. Efforts have been also made in estimation through seasonal adjustments and business birth/death imputations. For further details on the CES, see Butani et al. (1997) and BLS Handout of Methods (2002, Chap. 2).

3 Linear Empirical Bayes Prediction of the Final Growth Rate

The payroll employment growth rate for industry i, geographical area j, and month t is defined as
$$R_{ijt} = \frac{Y_{ijt}}{Y_{ijt-1}},$$
where $Y_{ijt}$ denotes the true payroll employment for industry i, geographical area j, and month t. The growth rate is a good measure of the change of employment for two consecutive months and can be easily interpreted. Let $y_{ijkt}$ and $w_{ijk}$ denote the month-t employment and the associated sampling weight for establishment k belonging to industry i and geographical area j in the CES monthly sample s ($i = 1, \ldots, I$; $j = 1, \ldots, J_i$; $k = 1, \ldots, K_{ij}$; $t = 1, \ldots, T$). The sampling weight of a sampled establishment is simply the inverse of the inclusion probability of the establishment, i.e., the probability that the sampling design includes the establishment in the sample. Roughly speaking, the sampling weight of a sampled establishment represents a certain number of establishments in the finite population of all establishments. The sampling weights account for unequal probabilities of selection from different sampling units and are often incorporated in developing a statistical procedure in order to avoid selection bias. Note that the sampling weight for a sampling unit does not change over time. Let $s_t^{(P)} \subset s_t^{(F)} \subset s$, where $s_t^{(P)}$ ($s_t^{(F)}$) denotes the set of sampled establishments that respond in month t when the preliminary (final) estimates are produced. The preliminary and final design-based estimates of the employment growth rate $R_{ijt}$ are given by
$$R^{(P)}_{ijt} = \frac{\sum_{k\in s^{(P)}_t} w_{ijk}\, y_{ijkt}}{\sum_{k\in s^{(P)}_t} w_{ijk}\, y_{ijkt-1}}, \qquad R^{(F)}_{ijt} = \frac{\sum_{k\in s^{(F)}_t} w_{ijk}\, y_{ijkt}}{\sum_{k\in s^{(F)}_t} w_{ijk}\, y_{ijkt-1}}.$$
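These ratio estimates can be computed directly from establishment-level records. The following minimal Python sketch assumes a tidy data frame for a single estimation cell, with hypothetical column names; it is an illustration, not the BLS production system.

import pandas as pd

def growth_rate_estimate(cell, respondents):
    # Weighted ratio estimate sum(w * y_t) / sum(w * y_{t-1}) over the
    # establishments that have reported by the relevant closing date.
    # `cell` has columns ['estab', 'w', 'y_curr', 'y_prev']; `respondents`
    # is the set of establishment ids in s_t^(P) or s_t^(F).
    d = cell[cell["estab"].isin(respondents)]
    return (d["w"] * d["y_curr"]).sum() / (d["w"] * d["y_prev"]).sum()

# R_prelim = growth_rate_estimate(cell_df, early_reporters)   # uses s_t^(P)
# R_final  = growth_rate_estimate(cell_df, all_reporters)     # uses s_t^(F)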

For the current month t = T, we have the preliminary estimate $R^{(P)}_{ijT}$, but not the final estimate $R^{(F)}_{ijT}$. We are interested in making an adjustment to $R^{(P)}_{ijT}$ so that the adjusted $R^{(P)}_{ijT}$, say $\hat R^{(F)}_{ijT}$, and $R^{(F)}_{ijT}$ are as close as possible. We propose to achieve this goal by applying a suitable Bayesian model. To this end, define $z^{(P)}_{ijt} = \log\big(R^{(P)}_{ijt}\big)$ and $z^{(F)}_{ijt} = \log\big(R^{(F)}_{ijt}\big)$. Assume the following robust Bayesian model:

Model:
Level 1: $z^{(P)}_{ijt} \mid z^{(F)}_{ijt} \overset{ind}{\sim} \big[a_{ijt} + b_{ijt}\, z^{(F)}_{ijt},\ \sigma^2_{ijt}\big]$;
Level 2: A priori, $z^{(F)}_{ijt} \overset{ind}{\sim} \big[\eta_{ijt},\ \tau^2_{ijt}\big]$,

where $[m, v]$ denotes a probability distribution with mean m and variance v.

Remark 1 We use Level 1 to describe the relationship between preliminary and final design-based estimates and Level 2 to describe the relationship of the final design-based estimates with different labor market factors such as the month of a year, industry group, and geography. One can think of an alternative model in which the roles of the preliminary and the final estimates in Level 1 are reversed and a model on the preliminary estimates is placed in Level 2. But note that the final estimates are much more stable than the preliminary estimates, and so we feel that our proposed model is preferred to this alternate model.

We assume that $E\big[z^{(F)}_{ijt} \mid z^{(P)}_{ijt}\big] = c_{ijt} + d_{ijt}\, z^{(P)}_{ijt}$, where $c_{ijt}$ and $d_{ijt}$ are two unknown constants. This is the so-called posterior linearity assumption used by Goldstein (1975), Lahiri (1986), Ghosh and Lahiri (1987), Ghosh and Meeden (1997), among others. Under the above Bayesian model, squared error loss, and the posterior linearity condition, we obtain the following linear Bayes predictor (LBP) of $z^{(F)}_{ijT}$:
$$\hat z^{BP(F)}_{ijT} = w_{ijT}\, z^{*(P)}_{ijT} + \big(1 - w_{ijT}\big)\eta_{ijT},$$
where
$$w_{ijT} = \frac{b^2_{ijT}\tau^2_{ijT}}{b^2_{ijT}\tau^2_{ijT} + \sigma^2_{ijT}}, \qquad z^{*(P)}_{ijT} = \frac{z^{(P)}_{ijT} - a_{ijT}}{b_{ijT}}.$$
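As a purely numerical illustration of the shrinkage in this predictor (all values below are invented for clarity and are not CES estimates):

import numpy as np

b, tau2, sigma2 = 0.9, 0.0004, 0.0009          # slope, prior and Level-1 variances (toy values)
a, eta = 0.0, np.log(1.005)                    # Level-1 intercept and prior mean
zP = np.log(1.02)                              # logged preliminary growth rate
w = b ** 2 * tau2 / (b ** 2 * tau2 + sigma2)   # shrinkage weight, about 0.26 here
z_star = (zP - a) / b
z_bp = w * z_star + (1 - w) * eta              # linear Bayes predictor of z^(F)
print(np.exp(z_bp))                            # back on the growth-rate scale, about 1.0095

The noisier the preliminary estimate relative to the prior (the larger $\sigma^2_{ijT}$), the smaller the weight $w_{ijT}$ and the more the predictor is pulled toward the prior mean $\eta_{ijT}$.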

Remark 2 The Level 1 model parameters $a_{ijT}$, $b_{ijT}$, $\sigma^2_{ijT}$ and the Level 2 model parameters $\eta_{ijt}$ and $\tau^2_{ijT}$ of the Bayesian model are generally unknown and need to


Table 1 Model selection and model evaluation criteria for five most effective models

Model                                            Model selection criteria                   Model evaluation criteria
                                                 $\sqrt{MSE}$   BIC       $R^2_{adj}$       $PAAR_P$   $PAAR_{LEBP}$   PRI
1. $E(z^{(P)}|z^{(F)}) = b_i z^{(F)}_{it}$       0.015          −11,253   0.695             1.045      0.740           29.2
2. $E(z^{(P)}|z^{(F)}) = b_t z^{(F)}_{it}$       0.015          −11,261   0.699             1.045      0.738           29.4
3. $E(z^{(P)}|z^{(F)}) = b_i z^{(F)}_{it}$       0.015          −11,256   0.694             1.045      0.741           29.1
4. $E(z^{(P)}|z^{(F)}) = b_{it} z^{(F)}_{it}$    0.015          −11,298   0.708             1.045      0.690           34.0
5. Working Model                                 0.015          −11,304   0.708             1.045      0.691           33.9

be estimated. Note that these model parameters are not estimable unless we make some simplifying assumptions. We have carefully investigated several possibilities. In Table 1, we provide certain model selection and model evaluation criteria for some of the promising plausible models. Model selection criteria are based on the data available at the time of making the prediction. In contrast, our model evaluation criteria can be obtained only once the final estimates of the current month are available. We select our final model based on the Bayesian Information Criterion (BIC). But it turns out that the selected model also does a reasonable job compared to the other plausible models in terms of the evaluation criteria (to be explained in Sect. 4). In any case, this serves as a working model, which may be improved upon as the BLS CES program gathers more data.

Working Model:
Level 1: $z^{(P)}_{ijt} \mid z^{(F)}_{ijt} \overset{ind}{\sim} \big[b_{it}\, z^{(F)}_{ijt},\ \sigma^2_{ijt} = \sigma^2_i\big]$;
Level 2: A priori, $z^{(F)}_{ijt} \overset{ind}{\sim} \big[\eta_{ijt} = \mu + \alpha_i + \beta_j + \gamma_t,\ \tau^2_{ijt} = \tau^2\big]$,

where $\mu$ is the overall effect; $\alpha_i$ is the fixed effect due to the ith industry; $\beta_j$ is the fixed effect due to the jth state; $\gamma_t$ is the fixed effect due to the tth month. We make all the standard restrictions on the fixed effects. To estimate the model parameters of our working model, we use all the available data, i.e. $\{z^{(P)}_{ijt},\ i = 1, \ldots, I;\ j = 1, \ldots, J_i;\ t = 1, \ldots, T\}$ and $\{z^{(F)}_{ijt},\ i = 1, \ldots, I;\ j = 1, \ldots, J_i;\ t = 1, \ldots, T-2\}$. Here we assume that final revisions have a backlog of 2 months. Standard statistical packages can be used for estimating the model parameters.


Plugging in the estimators of all the model parameters, we get the following LEBP of $z^{(F)}_{ijT}$:
$$\hat z^{LEBP(F)}_{ijT} = \hat l_{iT}\, z^{*(P)}_{ijT} + \big(1 - \hat l_{iT}\big)\hat\eta_{ijT},$$
where
$$\hat l_{iT} = \frac{\hat b^2_{iT}\hat\tau^2}{\hat b^2_{iT}\hat\tau^2 + \hat\sigma^2_i}, \qquad z^{*(P)}_{ijT} = \frac{z^{(P)}_{ijT}}{\hat b_{iT}}, \qquad \hat\eta_{ijT} = \hat\mu + \hat\alpha_i + \hat\beta_j + \hat\gamma_T, \quad \text{for all } i \text{ and } j.$$
We take a simple back-transformation to predict the final revision for the current month $R^{(F)}_{ijT}$, i.e., $\hat R^{(F)}_{ijT} = \exp\big(\hat z^{LEBP(F)}_{ijT}\big)$.

Note that we have already presented the basic ingredients for adjusting the preliminary employment estimates. To illustrate this, suppose we are interested in predicting the final revision of payroll employment for industry i at current time T, say $Y^{(F)}_{iT}$, at the time we make the preliminary estimates for month T. Note that $Y_{iT} = \sum_{j=1}^{J_i} Y_{ijT} = \sum_{j=1}^{J_i} R_{ijT}\, Y_{ijT-1}$, where $Y_{ijt}$ denotes the true total payroll employment for industry i, geographical area j and time t. An estimator of $Y_{iT}$ is thus obtained as $\hat Y_{iT} = \sum_{j=1}^{J_i} \hat R^{(F)}_{ijT}\, \hat Y_{ijT-1}$. The month-to-month change estimate for the current month can then be produced using $\hat Y_{iT} - \hat Y_{iT-1}$.
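The recipe above can be turned into a few lines of code. The sketch below is only a simplified illustration of the LEBP calculation, not the authors' production procedure: the Level-1 slope and error variance are taken constant within industry (rather than varying with the month), $\hat\tau^2$ is taken as the Level-2 OLS residual variance, and the data-frame column names are hypothetical.

import numpy as np
import statsmodels.formula.api as smf

def lebp_adjust(hist, curr):
    # `hist` has columns ['industry', 'state', 'month', 'zP', 'zF'] for months whose
    # final revisions exist; `curr` has ['industry', 'state', 'month', 'zP'] for month T.
    # Returns `curr` with an added column 'R_lebp' on the growth-rate scale.
    out = curr.copy()

    # Level 2: eta = mu + industry + state + month effects, common variance tau^2
    lvl2 = smf.ols("zF ~ C(industry) + C(state) + C(month)", data=hist).fit()
    tau2 = lvl2.resid.var(ddof=1)
    out["eta"] = lvl2.predict(out)          # requires categories already seen in `hist`

    # Level 1 within each industry: zP = b_i * zF + e,  Var(e) = sigma_i^2
    b, s2 = {}, {}
    for ind, g in hist.groupby("industry"):
        bi = (g["zP"] * g["zF"]).sum() / (g["zF"] ** 2).sum()   # no-intercept LS slope
        b[ind] = bi
        s2[ind] = (g["zP"] - bi * g["zF"]).var(ddof=1)
    out["b"] = out["industry"].map(b)
    out["sigma2"] = out["industry"].map(s2)

    # Shrinkage weight, LEBP on the log scale, and back-transformation
    l = out["b"] ** 2 * tau2 / (out["b"] ** 2 * tau2 + out["sigma2"])
    z_lebp = l * (out["zP"] / out["b"]) + (1 - l) * out["eta"]
    out["R_lebp"] = np.exp(z_lebp)
    return out

# Example usage (hypothetical data frames):
# april_pred = lebp_adjust(hist_df, april_df)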

4 Analysis of the Current Employment Statistics Survey Data In this section, we compare the proposed LEBP method with the design-based estimator using a data set obtained from the BLS CES program. The data set contains preliminary and final design-based growth rate estimates of total payroll employment for 2652 estimation cells formed by all combinations of four two-digit NAICS industries (Mining, Construction, Manufacturing and Wholesale Trade) in all the 50 states and the District of Columbia for the period April 2003–April 2004. Figure 1 displays scatter plots of the final versus the preliminary growth rate design-based estimates by different industrial classifications. Figure 2 displays similar scatter plots by months for the period April 2003–March 2004. For each scatter plot, a dot represents an estimation cell. An ordinary least-squares regression line, a robust least-squares line, and a 45-degree line are plotted in each scatter plot. The robust least-squares line discounts influences from potential outliers by downweighting estimation cells that are 1.5 standard deviations away from the mean, see Rousseeuw and Leroy (1987). These scatter plots aid the model building process. We note the following: (a)

The preliminary and final design-based growth estimates have strong linear relationship, regardless of the industry and month.


Fig. 1 Scatter plots of preliminary and final design-based growth rate estimates by industrial classification

(b) Figure 1 suggests a possible industry effect on the slope and error variance of the regression. For example, the Construction and Wholesale Trade industries have weaker correlations than Mining and Manufacturing.

(c) Figure 2 suggests a possible month effect on the slope but not on the error variance. The relation is much weaker in November and December compared with the other months. The BLS uses a fresh sample each January and releases the final estimate for November using the new sample. Thus, the preliminary and the final estimates for November are based on two different samples. It is possible that there are some overlaps between these two samples, but not enough to get a strong relationship. The same argument applies to justify the weak linear relationship in December. For this reason, we exclude the months of November and December from our data analysis; they need further investigations.

For eight estimation cells, the difference between preliminary and corresponding final design-based estimates exceeds 15%. We treat these cases as possible erroneous outliers and so exclude from data analysis. In the usual production setting under the BLS CES program, these discrepancies will be further investigated on a case-by-case basis. Such an editing is expected to improve our results.


Fig. 2 Scatter plots of preliminary and final design-based growth rate estimates by month for March 2003–March 2004

For the evaluation purpose, we split the data set into two parts. The data for the period April 2003–March 2004 (training data) are used to fit our working model. The data for April 2004 (evaluation data) are used to compare our proposed LEBP and design-based preliminary estimates with the actual final design-based estimates of the growth rate. Note that our evaluation method is a variant of the usual cross-validation method and is useful to assess the predictive power of a model. For details on the cross-validation method, we refer the readers to Efron and Tibshirani (1993, Chap. 17). For a given cell c, the absolute revision of an arbitrary predictor $\hat R^{(pred)}_c$ of the actual final revision $R^{(F)}_c$ is given by $rev_c = \big|\hat R^{(pred)}_c - R^{(F)}_c\big|$. The percent average absolute relative revision (PAAR) and percent relative improvement (PRI) are used to measure the overall performance of a predictor for a given domain. The $PAAR_{pred}$ and PRI for an arbitrary domain are given by
$$PAAR_{pred} = 100 \times \frac{1}{C}\sum_{c=1}^{C} rev_c, \qquad PRI = 100 \times \frac{\big|PAAR_{LEBP} - PAAR_{P}\big|}{PAAR_{P}},$$


where C is the number of estimation cells for the domain in the evaluation data set, and "pred" can be substituted by "P" for the design-based estimate and "LEBP" for the LEBP predictor. The PRI measures the percent average reduction in revision achieved by using LEBP instead of the design-based estimator. Obviously, the larger the PRI, the better is the LEBP. We have tested several possible models. In Table 1, we provide different model selection and evaluation criteria for a few promising models that include our working model (Model 5). In all the models, the Level 2 assumptions are exactly the same as those of the working model. In the first four models, the variance structure for Level 1 is identical (homoscedastic) and is different from the heteroscedastic variance structure of the working model. The first four models differ in terms of the Level 1 mean structure. In terms of the $\sqrt{MSE}$ criterion, we cannot distinguish the five models. In terms of the $R^2_{adj}$ criterion, the models are comparable, with Model 4 and our working model providing slightly higher $R^2_{adj}$. The BIC measures for all the models are also quite comparable, with Model 4 and the working model providing the two lower values. Based on the BIC criterion, we select our working model. At the time of model selection, the model evaluation measures would not be available. Nonetheless, it is interesting to observe that Model 4 and our working model do a slightly better job compared with the rest in terms of our evaluation criterion.

Table 2 reports $PRI_{LEBP}$ for different industrial sectors and selected states. The $PRI_{LEBP}$ ranges between 10 and 40%, the overall average reduction in revision being about 29%. Some states or two-digit industries exhibit large $PRI_{LEBP}$ (e.g., Alaska: 37.7% and Wholesale Trade: 31.8%). Figure 3 displays the final and the corresponding preliminary design-based growth rate estimates and LEBP for different estimation cells in the evaluation data. Compared with the design-based preliminary estimates, the LEBPs appear to be closer to the final design-based growth rate estimates. The directions of revisions suggested by the preliminary design-based estimates and LEBP tend to agree for most estimation cells. This figure provides a nice visual illustration of the superiority of LEBP over the preliminary estimator currently used by the BLS.

Table 2 Summary statistics of the percent reduction in employment revisions by the LEBP

                        Maximum   Average   1st Quartile   Median   3rd Quartile
Industry
  Mining                24.120     9.192     2.756          6.128    9.536
  Construction          42.230    20.400    10.992         23.588   36.480
  Manufacturing         26.120    16.544     4.980          9.816   19.456
  Wholesale Trade       46.040    31.776    12.612         20.244   44.280
State
  Alabama               38.080    18.064     4.916         12.040   25.192
  Alaska                47.840    37.720    20.868         33.680   45.520
  Arizona               24.720    16.888     4.648         19.568   22.804
  West Virginia         25.876    10.944     4.164          7.548   14.328
  Wisconsin             46.880    19.380     7.180         14.704   26.904
  Wyoming               34.840    20.508     8.300         13.044   25.252
All                     47.840    28.976     5.328         29.372   39.128

Fig. 3 Plot of final design-based growth rates and the corresponding preliminary design-based and LEBP growth rate estimates
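For completeness, the revision metrics used in this section are straightforward to compute once the final estimates become available. A small sketch (the inputs are arrays of cell-level growth rates for the evaluation month; these helper names are ours):

import numpy as np

def paar(pred, final):
    # Percent average absolute revision of a predictor over the cells of a domain
    return 100.0 * np.mean(np.abs(np.asarray(pred) - np.asarray(final)))

def pri(lebp, prelim, final):
    # Percent relative improvement of LEBP over the preliminary design-based estimator
    p_lebp, p_prelim = paar(lebp, final), paar(prelim, final)
    return 100.0 * abs(p_lebp - p_prelim) / p_prelim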

5 Concluding Remarks In this paper, we attempt to exploit the relationship between preliminary and final design-based estimates and the historical data on these two estimates to improve the preliminary design-based estimates for the current month. We have achieved significant reduction in revision (30% on the average) of the preliminary estimates. Both preliminary and final design-based estimators are subject to the sampling errors, which we have ignored in this paper primarily because of the unavailability of reliable sampling standard error estimators of these estimators at the time we completed the research. It is possible to develop a hierarchical Bayes method using a fully specified parametric prior distribution. But in a real production environment where the speed of delivery is critical, such method would be difficult to implement since this method requires careful examination of different parametric assumptions of the prior. While it is possible to improve on our working robust model using additional more recent data, our paper offers a promising framework for making possible improvements on the preliminary design-based estimator currently used by the U.S. Bureau of Labor Statistics.


Acknowledgements The research was supported by a U.S. Bureau of Labor Statistics Fellowship awarded to the first author. This chapter is a revision of Lahiri and Li (2005). The authors wish to thank Ms. Julie Gershunskaya of the U.S. Bureau of Labor Statistics for helpful discussions.

References BLS Handout of Methods (2002) Chapter 2, Employment, hours, and earnings from the Establishment survey. http://www.bls.gov/opub/hom/pdf/homch2.pdf Butani S, Stamas G, Brick M (1997) Sample redesign for the current employment statistics survey. In: Proceedings of the survey research methods section. American Statistical Association, pp 517–522 Carter GM, Rolph JE (1974) Empirical Bayes methods applied to estimating fire alarm probabilities. J Am Stat Assoc 74:269–277 Cochran WG (1977) Sampling techniques, 3rd edn. Wiley, New York Copeland KR (2003a) Nonresponse adjustment in the current employment statistics survey. In: Proceedings of the federal committee on statistical methodology conference, pp 80–90. http:// www.fcsm.gov/events/papers2003.html Copeland KR (2003b) Reporting patterns in the current employment statistics survey. In: Proceedings of the section on survey research methods. American Statistical Association, pp 1052–1057 Copeland KR, Valliant R (2007) Imputing for late reporting in the current employment statistics survey. Official Statistics 23:69–90 Efron B, Morris C (1973) Stein’s estimation rule and its competitors- an empirical Bayes approach. J Am Stat Assoc 68:117–130 Efron B, Morris C (1975) Data analysis using Stein’s estimator and its generalizations. J Am Stat Assoc 70:311–319 Efron B, Tibshirani RJ (1993) An introduction to the bootstrap. Chapman & Hall Fay RE, Herriot RA (1979) Estimates of income for small places: an application of James-Stein procedure to census data. J Am Stat Assoc 74:269–277 Gershunskaya J, Lahiri P (2018) Robust empirical best small area finite population mean estimation using a mixture model. Calcutta Statist Assoc Bull 69(2):183–204. https://doi.org/10.1177/000 8068317722297 Ghosh M, Lahiri P (1987) Robust empirical Bayes estimation of means from stratified samples. J Am Stat Assoc 82:1153–1162 Ghosh M, Meeden G (1997) Bayesian Methods for Finite Population Sampling. Chapman and Hall, London Goldstein M (1975) Approximate Bayes solution to some nonparametric problems. Ann Stat 3:512– 517 Lahiri P (1986) Robust empirical Bayes estimation in finite population sampling, PhD dissertation, University of Florida, Gainesville Lahiri P, Li BT (2005) Estimation of the change in total employment using the U.S. current employment statistics survey. In: Proceedings of the section on survey research methods. American Statistical Association, pp 1268–1274 Morris C (1983) Parametric empirical Bayes inference: theory and applications (with discussions). J Am Stat Assoc 78:47–65 Neumark D, Wascher WL (1991) Can we improve upon preliminary estimates of payroll employment growth? J Bus Econ Stat 9:197–205 Rao JNK, Srinath KP, Quenneville B (1989) Estimation of level and change using current preliminary data (with discussion). In: Kasprzyk, et al (eds) Panel surveys, pp 457–485 Rousseeuw PJ, Leroy AM (1987) Robust regression and outlier detection. Wiley, New York


Stokes SL, Yeh M (1988) Searching for causes of interviewer effects in telephone surveys. In: Groves R et al (eds) Telephone survey methodology. Wiley, New York, pp 357–373 Tukey JW (1979) Methodology, and the Statistician’s responsibility for BOTH accuracy and relevance. J Am Stat Assoc 74:786–793

Constrained Bayesian Rules for Testing Statistical Hypotheses

K. J. Kachiashvili

1 Introduction

One of the basic branches of mathematical statistics is statistical hypothesis testing, the development of which started at the beginning of the last century. It is believed that the first result in this direction belongs to Student, who discovered the t test in 1908 (Lehmann 1993). The fundamental results of the modern theory of statistical hypothesis testing belong to the cohort of famous statisticians of this period: Fisher, Neyman–Pearson, Jeffreys and Wald (Fisher 1925; Neyman and Pearson 1928, 1933; Jeffreys 1939; Wald 1947a, b). Many other bright scientists have brought their invaluable contributions to the development of this theory and practice. As a result of their efforts, we have many brilliant methods for different suppositions about the character of random phenomena under study and their applications for solving very complicated and diverse modern problems. The mentioned four approaches (philosophies) have their pros and cons, which often lead to differences of opinion between different researchers. The modern, complicated scientific environment often requires the unification of the positive aspects of these ideas for overcoming existing problems (Goritskiy et al. 1977; Kachiashvili et al. 2009; Kachiashvili 2018). To confirm the said, we quote the well-known statistician Prof. B. Efron of Stanford University, from the 164th ASA presidential address, delivered at the awards ceremony in Toronto on August 10, 2004: "Broadly speaking, nineteenth-century statistics was Bayesian, while the twentieth century was frequentist, at least from the point of view of most scientific practitioners. Here in the twenty-first century


scientists are bringing statisticians much bigger problems to solve, often comprising millions of data points and thousands of parameters. Which statistical philosophy will dominate practice? My guess, backed up with some recent examples, is that a combination of Bayesian and frequentist ideas will be needed to deal with our increasingly intense scientific environment. This will be a challenging period for statisticians, both applied and theoretical, but it also opens the opportunity for a new golden age, rivaling that of Fisher, Neyman, and the other giants of the early 1900s” (Efron 2005, p. 1). Since the mid-70s of the last century, the author of this paper has been engaged in the development of the methods of statistical hypothesis testing and their applications for solving practical problems from different spheres of human activity (Kachiashvili 1980, 2018; Kachiashvili et al. 2009). As a result of this activity, there developed a new approach to the solution of the considered problem, which was later named Constrained Bayesian Methods (CBM) of statistical hypotheses testing. The results obtained in Kachiashvili (1989, 2003, 2011), Kachiashvili et al. (2012a, b), Kachiashvili and Mueed (2013) and Kachiashvili (2019b, 2018) show that the Bayesian approach to hypothesis testing, formulated as a constrained optimization problem, allows us to combine the best features of both Bayesian and Neyman–Pearson approaches. Moreover, it is the data-dependent method similar to the Fisher’s test and gives a decision rule with new, more common properties than a usual decision rule does. This approach opens new opportunities in inference theory and practice. In particular, it allows us, on the basis of a common platform, to develop a set of reliable, flexible, economic and universal methods for testing statistical hypotheses of different types. The present paper is devoted to an overview of the results obtained by us, using this method, when examining different types of hypotheses under different criteria. The rest of the article is organized as follows. The next section discusses CBM for testing different types of hypotheses, comparative analysis of constrained Bayesian and other methods are presented in Section 3 and conclusions are offered in Section 4.

2 Constrained Bayesian Method for Testing Different Types of Hypotheses

Let the sample $x^T = (x_1, \ldots, x_n)$ be generated from $p(x; \theta)$, and let the problem of interest be to test $H_i: \theta_i \in \Theta_i$, $i = 1, 2, \ldots, S$, where $\Theta_i \subset R^m$, $i = 1, 2, \ldots, S$. The number of tested hypotheses is S. Let the prior on $\theta$ be denoted by $\sum_{i=1}^{S} \pi(\theta|H_i)\, p(H_i)$, where, for each $i = 1, 2, \ldots, S$, $p(H_i)$ is the a priori probability of hypothesis $H_i$ and $\pi(\theta|H_i)$ is a prior density with support $\Theta_i$; $p(x|H_i)$ denotes the marginal density of x given $H_i$, i.e. $p(x|H_i) = \int_{\Theta_i} p(x|\theta)\,\pi(\theta|H_i)\,d\theta$; and $D = \{d\}$ is the set of solutions, where $d = \{d_1, \ldots, d_S\}$, it being so that
$$d_i = \begin{cases} 1, & \text{if hypothesis } H_i \text{ is accepted}, \\ 0, & \text{otherwise}; \end{cases}$$
$\delta(x) = \{\delta_1(x), \delta_2(x), \ldots, \delta_S(x)\}$ is the decision function that associates each observation vector x with a certain decision
$$x \xrightarrow{\ \delta(x)\ } d \in D$$
(notation: depending upon the choice of x, there is a possibility that $\delta_j(x) = 1$ for more than one j, or $\delta_j(x) = 0$ for all $j = 1, \ldots, S$). $\Gamma_j$ is the region of acceptance of hypothesis $H_j$, i.e. $\Gamma_j = \{x : \delta_j(x) = 1\}$. It is obvious that $\delta(x)$ is completely determined by the $\Gamma_j$ regions, i.e. $\delta(x) = \{\Gamma_1, \Gamma_2, \ldots, \Gamma_S\}$.

Let us introduce the loss function $L(H_i, \delta(x))$ that determines the value of loss in the case when the sample has the probability distribution corresponding to hypothesis $H_i$ but, because of random errors, decision $\delta(x)$ is made. In the general case, the loss function $L(H_i, \delta(x))$ consists of two components:
$$L(H_i, \delta(x)) = \sum_{j=1}^{S} L_1\big(H_i, \delta_j(x) = 1\big) + \sum_{j=1}^{S} L_2\big(H_i, \delta_j(x) = 0\big), \tag{1}$$
i.e. the loss function $L(H_i, \delta(x))$ is the total loss caused by incorrectly accepted and incorrectly rejected hypotheses. Taking into account the loss (1), let us formulate the hypothesis testing problem, instead of the unconstrained optimization problem of the classical Bayesian approach in which the risk is obtained by averaging (1), as the constrained optimization problem in which we minimize the averaged loss of incorrectly accepted hypotheses
$$r_\delta = \min_{\{\Gamma_j\}} \sum_{i=1}^{S} p(H_i) \sum_{j=1}^{S} \int_{\Gamma_j} L_1\big(H_i, \delta_j(x) = 1\big)\, p(x|H_i)\,dx, \tag{2}$$
subject to the averaged loss of incorrectly rejected hypotheses
$$\sum_{i=1}^{S} p(H_i) \sum_{j=1}^{S} \int_{R^n - \Gamma_j} L_2\big(H_i, \delta_j(x) = 0\big)\, p(x|H_i)\,dx \le r, \tag{3}$$
where r is some real number determining the level of the averaged loss of incorrectly rejected hypotheses. By solving problem (2)–(3), we have
$$\Gamma_j = \Big\{x : \sum_{i=1}^{S} L_1\big(H_i, \delta_j(x) = 1\big)\, p(H_i)\, p(x|H_i) < \lambda \sum_{i=1}^{S} L_2\big(H_i, \delta_j(x) = 0\big)\, p(H_i)\, p(x|H_i)\Big\}, \quad j = 1, \ldots, S, \tag{4}$$
where $\lambda$ ($\lambda > 0$) is defined so that in (3) the equality takes place. Depending on the target function (2) and the restriction condition (3), it is possible to formulate seven different statements of CBM (Kachiashvili 2003, 2018; Kachiashvili et al. 2012a, b), some of which we will present below, at the consideration of different types of hypotheses.

In CBM, in contradiction to the existing methods, the decision-making areas have unique properties. In particular, there are not only regions of making decisions in the decision-making space but also regions of impossibility of making a decision. This provides a unique way to solve simultaneous and sequential analysis problems with a single methodology, which significantly increases the theoretical and practical value of the methodology (Kachiashvili and Hashmi 2010). Unlike other methodologies, the principal difficulties in implementing CBM do not increase with an increase in the number of hypotheses and the dimension of the observational results. When the number and dimension of hypotheses increase, only the time required for the calculation increases in CBM, without major complications. In the usual situations, it is comparable to the existing methodologies.

Let us define the summary risk (SR) of making an incorrect decision at hypothesis testing as the weighted sum of probabilities of making incorrect decisions, i.e.
$$r_S(\Gamma) = \sum_{i=1}^{S} \sum_{j=1, j\ne i}^{S} L(H_i, H_j)\, p(H_i) \int_{\Gamma_j} p(x|H_i)\,dx. \tag{5}$$

It is clear that, for given losses and probabilities, SR depends on the regions of making decisions. Let us denote by Γ_CBM and Γ_B the hypotheses acceptance regions in CBM and in the Bayes rule, respectively. Then the SR for the CBM and Bayes rules are r_S(Γ_CBM) and r_S(Γ_B), respectively. The following theorem is proved in Kachiashvili et al. (2018) and Kachiashvili (2018).

Theorem 2.1 For given losses and probabilities, the SR of making an incorrect decision in CBM is a convex function of λ with maximum at λ = 1. At increasing or decreasing λ, SR decreases and, in the limit, i.e. at λ → ∞ or λ → 0, SR tends to zero.

Corollary 2.1 Under the same conditions, the SR of making an incorrect decision in CBM is less than or equal to the SR of the Bayesian decision rule, i.e. r_S(Γ_CBM) ≤ r_S(Γ_B).

The justification of this corollary is obvious from Theorem 2.1. Providing arguments similar to Theorem 2.1, it is not difficult to show that the SR of making an incorrect decision in CBM is also less than or equal to the SR of the frequentist decision rule, i.e. r_S(Γ_CBM) ≤ r_S(Γ_f), where r_S(Γ_f) is the SR for the frequentist method.
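As a purely illustrative check of Theorem 2.1, the following sketch evaluates the summary risk (5) as a function of λ under hypothetical assumptions of our own: two simple hypotheses N(0,1) versus N(δ,1), equal a priori probabilities, and 0–1 losses, so that the regions (4) reduce to likelihood-ratio thresholds. The names and parameter values are illustrative, not the author's computations.

```python
import numpy as np
from scipy.stats import norm

# Two simple hypotheses: H1: X ~ N(0,1), H2: X ~ N(delta,1),
# equal priors p(H1) = p(H2) = 0.5 and 0-1 losses, so regions (4) become
#   Gamma_1 = {x : f2(x) < lambda * f1(x)} = {x < delta/2 + ln(lambda)/delta}
#   Gamma_2 = {x : f1(x) < lambda * f2(x)} = {x > delta/2 - ln(lambda)/delta}
delta = 1.0

def summary_risk(lam):
    """Summary risk (5): weighted probability of accepting a wrong hypothesis."""
    c1 = delta / 2 + np.log(lam) / delta   # right end of Gamma_1
    c2 = delta / 2 - np.log(lam) / delta   # left end of Gamma_2
    # P(accept H2 | H1 true) + P(accept H1 | H2 true), each weighted by its prior 0.5
    return 0.5 * norm.sf(c2, loc=0.0) + 0.5 * norm.cdf(c1, loc=delta)

for lam in [0.05, 0.2, 0.5, 1.0, 2.0, 5.0, 20.0]:
    print(f"lambda = {lam:5.2f}  SR = {summary_risk(lam):.4f}")
# The printed SR is largest at lambda = 1 (the Bayes rule) and decreases towards
# zero as lambda -> 0 or lambda -> infinity, in line with Theorem 2.1.
```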


Let us consider very briefly the application of CBM for testing different types of hypotheses.

2.1 Simple Hypotheses

Let us consider the case when the hypotheses H_i, i = 1, 2, ..., S, are simple, i.e. when the sets Θ_i, i = 1, 2, ..., S, degenerate into points. In this case, for step-wise loss functions, the hypothesis acceptance regions (4) take the form

$$\Gamma_j = \left\{ x : \sum_{i=1, i \ne j}^{S} p(H_i)\, p(x|H_i) < \lambda\, p(H_j)\, p(x|H_j) \right\}, \quad j = 1, \ldots, S. \tag{6}$$

Concerning the properties of decision rules developed on the basis of (6), the following theorems are proved (Kachiashvili and Mueed 2013).

Theorem 2.2 Let us assume that the probability distributions p(x|H_i), i = 1, ..., S, are such that, as the number of repeated observations n increases, the entropy concerning the distribution parameter θ, in relation to which the hypotheses are formulated, decreases. In such a case, for all given values 0 < α_1 < 1 and 0 < β_1 < 1, there always exists a finite positive integer n* such that the inequalities α(x̄_n) < α_1 and β(x̄_n) < β_1 hold whenever the number of repeated observations is n > n*. Here α(x̄_n) and β(x̄_n) are the probabilities of errors of the first and second types, respectively, in the conditional Bayesian task when the decision is made on the basis of the arithmetic mean x̄_n of the observation results calculated from n repeated observations.

Theorem 2.3 Let us assume that, for given hypotheses H_i, i = 1, ..., S, and significance level of the criterion α, the inequality λ > λ∗2(x) takes place and, at the same time, the observation result x belongs to more than one of the hypothesis acceptance regions Γ_i, i = 1, ..., S, i.e. it is impossible to make a simple decision on the basis of x. In such a case, by increasing α, the fulfillment of one of the conditions x ∈ Γ_i, i = 1, ..., S, i.e. the acceptance of one of the tested hypotheses, can always be achieved.

Theorem 2.4 Let us assume that, for given hypotheses H_i, i = 1, ..., S, and significance level of the criterion α, the inequality λ < λ∗1(x) takes place and x ∈ R^n − ∪_{i=1}^{S} Γ_i, i.e. it is impossible to make a decision on the basis of x. In such a case, by decreasing α, i.e. at α → 0, the fulfillment of one of the conditions x ∈ Γ_i, i = 1, ..., S, i.e. the acceptance of one of the tested hypotheses, can always be achieved.

Computation results for concrete examples confirming the correctness of these theorems are given in Kachiashvili and Mueed (2013) and Kachiashvili et al. (2012a).
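The following sketch is our own illustration of how the acceptance regions (6) behave for a small simple-hypotheses problem; the Gaussian means, the equal priors, and the value of λ are hypothetical choices, not the author's computations. For a given λ it marks, for each observation x, which regions Γ_j contain it, exposing the no-decision and multiple-decision regions that are characteristic of CBM.

```python
import numpy as np
from scipy.stats import norm

# Three simple hypotheses about the mean of N(theta, 1) data (hypothetical example):
means = [-2.0, 0.0, 2.0]          # H1, H2, H3
priors = [1.0 / 3] * 3            # equal a priori probabilities
lam = 0.8                         # placeholder; in CBM it is tuned so that (3) holds with equality

def accepted_hypotheses(x, lam):
    """Indices j whose region (6) contains x: sum_{i != j} p(Hi) p(x|Hi) < lam * p(Hj) p(x|Hj)."""
    dens = np.array([p * norm.pdf(x, loc=m) for p, m in zip(priors, means)])
    total = dens.sum()
    return [j for j in range(3) if total - dens[j] < lam * dens[j]]

for x in [-3.0, -1.2, -1.0, 0.0, 1.0, 3.0]:
    accepted = accepted_hypotheses(x, lam)
    print(f"x = {x:+.1f}  accepted regions: {accepted if accepted else 'none (no decision possible)'}")
# With lam < 1 some x fall into no region at all, and with lam > 1 some x may fall into
# several regions -- exactly the impossibility-of-decision regions discussed above.
```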


2.2 Composite Hypotheses

When the hypotheses H_i, i = 1, 2, ..., S, are composite, i.e. when each or some of the sets Θ_i contain more than one point, it is shown that CBM keeps its optimal properties and therefore offers great opportunities for making reliable decisions in theory and practice. The necessity of testing composite hypotheses arises very often in practical problems; in particular, they are very important in many medical and biological applications (Alberton et al. 2019; Bansal et al. 2015; Dmitrienko et al. 2009; Chen and Sarkar 2004). In Kachiashvili (2016), it is shown that CBM keeps the optimal properties at testing composite hypotheses, similarly to the case of simple hypotheses, in both simultaneous and sequential experiments. In particular, it easily, without special efforts, overcomes Lindley's paradox, which arises when testing a simple hypothesis versus a composite one. Lindley's paradox occurs when the prior density of the parameter contained in the posterior probability becomes increasingly flat and the posterior odds approach 0; the basic hypothesis is then always accepted, independently of the observed results (Bernardo 1980; Marden 2000). In Kachiashvili (2016), CBM is compared with the classical Bayesian test under normal conditions and with the case when the a priori probabilities of the latter are selected in a special manner to overcome Lindley's paradox. The superiority of CBM over these tests is shown theoretically and demonstrated by simulation, and the theoretical derivation is supported by extensive computational results for different characteristics of the considered methods.

2.3 Directional Hypotheses

Directional hypotheses are comparatively new relative to traditional hypotheses. For parametric models, the problem can be stated as H_0: θ = θ_0 versus H_−: θ < θ_0 or H_+: θ > θ_0, where θ is the parameter of the model and θ_0 is known (see, e.g. Bansal and Sheng 2010). The consideration of directional hypotheses started in the 1950s; the earliest works on this problem were Lehmann (1950, 1957a, b) and Bahadur (1952). Since then, interest in the problem has not diminished (see, e.g. Kaiser 1960; Leventhal and Huynh 1996; Finner 1999; Jones and Tukey 2000; Shaffer 2002; Bansal and Sheng 2010). For solving this problem, authors have used traditional methods based on p-values, frequentist or Bayesian approaches, and their modifications. A compact but exhaustive review of these works is given in Bansal and Sheng (2010), where a Bayesian decision-theoretic methodology for testing directional hypotheses was developed and compared with the frequentist method. In the same work, the decision-theoretic methodology was used for testing multiple directional hypotheses. The cases of multiple experiments for directional hypotheses were also considered in Bansal and Miescke (2013) and Bansal et al. (2015). The choice


of a loss function related to the Kullback–Leibler divergence in a general Bayesian framework for testing directional hypotheses is considered in Bansal et al. (2012). The paper Kachiashvili et al. (2018) discusses the generalization of CBM to arbitrary loss functions and its application to testing directional hypotheses. The problem is stated in terms of false and true discovery rates; one more criterion of the quality of directional hypothesis tests, the Type III error rate, is considered, as well as the ratio of discovery rates and the Type III error rate in CBM. The advantage of CBM in comparison with the Bayes and frequentist methods is theoretically proved and clearly demonstrated by a concrete computed example. The advantages of using CBM for testing directional hypotheses are: (1) alongside a priori probabilities and loss functions, it uses the significance levels of hypotheses for sharpening the sensitivity concerning direction; (2) it makes decisions more carefully and with given reliability; (3) smaller values of SR and of the Type III error rate correspond to it. CBM allows making a decision with the required reliability if the existing information is sufficient; otherwise, it is necessary to increase the information or to reduce the required reliability of the decision. For the same information, CBM surpasses the Bayes and frequentist methods with guaranteed reliability of the decisions made.

CBM and the concept of false discovery rates (FDR) for testing directional hypotheses are considered in Kachiashvili et al. (2019). There it is shown that the direct application of CBM allows us to control FDR at the desired level. Theoretically, it is proved that mixed directional false discovery rates (mdFDR) are restricted at the desired levels by a suitable choice of restriction levels in the different statements of CBM. The correctness of the obtained theoretical results is confirmed by computational results for concrete examples. It is also shown that the following assertion is correct in general, for all possible statements of CBM.

Theorem 2.5 Let us assume that the probability distributions p(x|H_i), i ∈ J, where J ≡ {−, 0, +}, are such that, as the number of observations n in the sample increases, the entropy concerning the distribution parameter θ, in relation to which the hypotheses are formulated, decreases. In such a case, for a given set of hypotheses H_0, H_− and H_+, there always exists a sample of size n on the basis of which a decision concerning the tested hypotheses can be made with given reliability, when the Lagrange multipliers in the decision-making regions are determined for n = 1 and the condition mdFDR ≤ q is satisfied.

This fact is demonstrated by computation of concrete examples, using CBM (Kachiashvili et al. 2019).

2.4 Multiple Hypotheses

Over the last several decades, multiple hypothesis testing problems have attracted significant attention because of their numerous applications in many fields (Shaffer 1995).


Examples include the rapid detection of targets in multichannel and multisensor distributed systems and the building of high-speed anomaly detection systems for early detection of intrusions in large-scale distributed computer networks (Tartakovsky and Veeravalli 2004; Tartakovsky et al. 2003); clinical trials in which two sets of objects are compared for identity by a certain set of measured parameters (O'Brien 1984; Pocock et al. 1987; De and Baron 2012a, b; Dmitrienko et al. 2009); DNA (deoxyribonucleic acid) microarray experiments for the identification of differentially expressed genes, that is, genes whose expression levels are associated with a response or covariate of interest (Dudoit et al. 2003); image representation of the brain (Alberton et al. 2019); the problem of identifying the lowest dose level (Tamhane et al. 1996); comparison of different treatments with a control in terms of binary response rates in pharmaceutical research (Chen and Sarkar 2004); acceptance sampling techniques with several different criteria, used for monitoring the accuracy of gas meters (Hamilton and Lesperance 1991); multiple hypothesis testing procedures in empirical econometric research (Savin 1980, 1984; Dhrymes 1978; Seber 1977); and so on.

A lot of various procedures for testing multiple hypotheses have been developed so far. A review of earlier works was given in Savin (1984). Reviews of multiple hypothesis testing methods and of the special problems arising from the multiplicity aspect were given in Shaffer (1986, 1995). As was mentioned there, "except in the ranking and selection area, there were no other than Miller's (1966) book-length treatments until 1986, when a series of book-length publications began to appear (Klockars and Sax 1986; Hochberg and Tamhane 1987; Bauer et al. 1988; Toothaker 1991; Hoppe 1993; Westfall and Young 1993; Braun 1994; Hsu 1996)." Different aspects of testing multiple hypotheses using the classical methods, and the consideration of the obtained results, were presented in Efron (2004), Gómez-Villegas and González-Pérez (2011), Gómez-Villegas et al. (2009), Salehi et al. (2019), Chang and Berger (2020), Berger et al. (2014), Bogdan et al. (2007), and Dunnett and Tamhane (1991, 1992). A compact but interesting review of the methods, classified by the kinds of tested hypotheses, was given in De and Baron (2012a), where a more general statement of the multiple hypothesis testing problem was also introduced. In particular, there were considered d individual hypotheses about the parameters θ_1, ..., θ_d of sequentially observed vectors X_1, X_2, ...:

$$\begin{aligned} H_0^{(1)} &: \theta_1 \in \Theta_{01} \ \text{vs} \ H_A^{(1)} : \theta_1 \in \Theta_{11}, \\ H_0^{(2)} &: \theta_2 \in \Theta_{02} \ \text{vs} \ H_A^{(2)} : \theta_2 \in \Theta_{12}, \\ &\;\;\vdots \\ H_0^{(d)} &: \theta_d \in \Theta_{0d} \ \text{vs} \ H_A^{(d)} : \theta_d \in \Theta_{1d}. \end{aligned} \tag{7}$$

A set of sequential procedures controlling the family-wise error rate was described in Shaffer (1995). They used different kinds of restrictions on the significance levels of the individual sequential tests. The control of both kinds of error rates is important in many practical applications of multiple comparisons (De and Baron 2012a, b; He et al. 2015). A procedure controlling both family-wise error rates I and II at multiple testing was developed in De and Baron (2012a, b), where the family-wise error rates are defined as

$$FWER_I = \max_{\Phi \ne \emptyset} P\left\{ \bigcup_{j \in \Phi} \text{reject } H_0^{(j)} \,\middle|\, \Phi \right\}, \tag{8}$$

$$FWER_{II} = \max_{F \ne \emptyset} P\left\{ \bigcup_{j \in \bar{\Phi}} \text{accept } H_0^{(j)} \,\middle|\, \Phi \right\}, \tag{9}$$

where Φ ⊂ {1, ..., d} is the index set of the true null hypotheses and F, the complement of Φ, is the index set of the false null hypotheses. In particular, the step-down Holm and step-up Benjamini–Hochberg methods for multiple comparisons were generalized to the sequential setting. A more economical procedure than that given in De and Baron (2012a, b) was developed in Kachiashvili (2014, 2015) for solving the above-stated problem with control of both family-wise error rates I and II, and it was compared with the methods considered in De and Baron (2012a, b). In particular, there was considered a set of one-sided (right-tail) tests about the parameters θ_1, ..., θ_d:

$$H_0^{(j)} : \theta_j = \theta_0^{(j)} \quad \text{vs} \quad H_1^{(j)} : \theta_j = \theta_1^{(j)}, \qquad j = 1, \ldots, d, \quad \theta_0^{(j)} < \theta_1^{(j)}. \tag{10}$$

This case covers the situation when the hypotheses are one-sided composite, i.e. when they have the following forms (De and Baron 2012a):

$$H_0^{(j)} : \theta_j \le \theta_0^{(j)} \quad \text{vs} \quad H_1^{(j)} : \theta_j \ge \theta_1^{(j)}, \qquad j = 1, \ldots, d. \tag{11}$$

A stopping rule T with decision functions δ_j = δ_j(X_{1j}, ..., X_{Tj}), j = 1, ..., d, must be defined that guarantees acceptance or rejection of each of the null hypotheses H_0^{(1)}, ..., H_0^{(d)}. It must control both family-wise error rates (8) and (9), i.e. the conditions

$$FWER_I \le \alpha \quad \text{and} \quad FWER_{II} \le \beta \tag{12}$$

must be satisfied for α, β ∈ (0, 1). It is possible to use the Bonferroni inequality in the appropriate sequential test. That means that if, for the jth hypothesis, we control the probabilities of Type I and Type II errors at the levels α_j = α/d and β_j = β/d, we immediately obtain (12) by the Bonferroni inequality. The stopping rule in the sequential test is defined as


$$T = \inf\left\{ m : \bigcap_{j=1}^{d} \left\{ \Lambda_m^{(j)} \notin \left( \lambda_{*j}, \lambda_j^{*} \right) \right\} \right\}, \tag{13}$$

where λ_{*j} and λ_j^* are defined by CBM and Λ_m^{(j)} is the likelihood ratio for the jth parameter (j = 1, ..., d) calculated on the basis of m sequentially obtained observation results.

Lemma 2.1 For any pairs (λ_{*j}, λ_j^*), the stopping rule defined by (13) with the thresholds λ_* = min(λ_{2*}, λ_{5*}) and λ^* = max(λ_2^*, λ_5^*), where the indices of the Lagrange multipliers indicate the number of the appropriate CBM statement (Kachiashvili 2003, 2018; Kachiashvili et al. 2012a, b), is proper.

Accepting or rejecting the jth null hypothesis at time T depending on whether Λ_T^{(j)} < λ_{*j} or Λ_T^{(j)} > λ_j^*, we obtain strong control of the probabilities of Type I and Type II errors by the Bonferroni inequality:

$$P\{\text{at least one Type I error}\} \le \sum_{j=1}^{d} P\left\{ \Lambda_T^{(j)} \ge \lambda_j^* \right\} = \sum_{j=1}^{d} \alpha_j = \alpha,$$

$$P\{\text{at least one Type II error}\} \le \sum_{j=1}^{d} P\left\{ \Lambda_T^{(j)} \le \lambda_{*j} \right\} = \sum_{j=1}^{d} \beta_j = \beta. \tag{14}$$
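A minimal sketch of the stopping logic in (13)–(14), under our own simplifying assumptions: the per-hypothesis thresholds (λ_{*j}, λ_j^*) are taken as given (in CBM they come from solving the restriction conditions; here SPRT-like placeholder values are used purely for illustration), and the likelihood ratios are for d independent Gaussian streams under hypotheses of the form (10). It only shows how the Bonferroni allocation α_j = α/d, β_j = β/d and the rule (13) fit together.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3                       # number of individual hypotheses (10)
theta0, theta1 = 0.0, 1.0   # H0^(j): theta_j = 0  vs  H1^(j): theta_j = 1, unit variance
alpha, beta = 0.05, 0.05
alpha_j, beta_j = alpha / d, beta / d        # Bonferroni allocation used in (12)

# Placeholder thresholds (lambda_{*j}, lambda_j^*); in CBM they are obtained from the
# restriction conditions of the chosen CBM statement.
lam_low = beta_j / (1 - alpha_j)
lam_high = (1 - beta_j) / alpha_j

true_theta = np.array([0.0, 1.0, 0.0])       # hypothetical ground truth for the d streams
log_lr = np.zeros(d)                         # log likelihood ratios log Lambda_m^(j)
undecided = np.ones(d, dtype=bool)
m = 0
while undecided.any():                       # rule (13): stop once every ratio has left its interval
    m += 1
    x = rng.normal(true_theta, 1.0)
    log_lr += (theta1 - theta0) * x - (theta1**2 - theta0**2) / 2
    undecided = (log_lr > np.log(lam_low)) & (log_lr < np.log(lam_high))

decisions = np.where(log_lr >= np.log(lam_high), "reject H0", "accept H0")
print(f"stopped at m = {m}; decisions: {list(decisions)}")
```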

Theorem 2.6 The sequential Bonferroni procedure for testing the multiple hypotheses (11) with stopping rule (13), rejection regions Λ_T^{(j)} > λ_j^*, and acceptance regions Λ_T^{(j)} < λ_{*j}, where λ_j^* and λ_{*j} are defined on the basis of CBM 2 and CBM 5 for β_j = β/d and α_j = α/d, controls both error rates at the levels FWER_I ≤ α and FWER_II ≤ β.

Next we recall a scheme for making decisions that is based on the likelihood ratio statistics ordered non-increasingly, i.e. on Λ_m^{[1]} ≥ Λ_m^{[2]} ≥ ··· ≥ Λ_m^{[d]}, with the errors of both types for the individual tests given by

$$\alpha_j = \frac{\alpha}{d + 1 - j} \quad \text{and} \quad \beta_j = \frac{\beta}{d + 1 - j} \quad \text{for } j = 1, \ldots, d. \tag{15}$$

The stopping rule corresponding to this scheme is

$$T_1 = \inf\left\{ m : \bigcap_{j=1}^{d} \left\{ \Lambda_m^{[j]} \notin \left( \lambda_{*[j]}, \lambda_{[j]}^{*} \right) \right\} \right\}, \tag{16}$$

and the following theorem is proved.


Theorem 2.7 The stopping rule T_1 is proper, and the offered scheme strongly controls both the Type I and Type II family-wise error rates. That is, for any set of true hypotheses,

$$P\{T_1 < \infty\} = 1, \quad P\{\text{at least one Type I error}\} \le \alpha, \quad P\{\text{at least one Type II error}\} \le \beta.$$

It is shown in Kachiashvili et al. (2020) that the direct application of CBM allows us to control FDR at the desired level both for one set of directional hypotheses and for the multiple case when we consider m (m > 1) sets of directional hypotheses, i.e. when the tested hypotheses are the following (Bansal et al. 2016; Bansal and Miescke 2013; Benjamini and Hochberg 1995; Finner 1999; Shaffer 2002; Sarkar 2002, 2008):

$$H_i^{(0)} : \theta_i = 0 \quad \text{vs} \quad H_i^{(-)} : \theta_i < 0 \ \text{ or } \ H_i^{(+)} : \theta_i > 0, \qquad i = 1, \ldots, m, \tag{17}$$

where m is the number of individual hypotheses about the parameters θ_1, ..., θ_m that must be tested by the test statistics X = (X_1, X_2, ..., X_m), where X_i ~ f(x_i|θ_i). Let us consider the case when the components of the vector X = (X_1, X_2, ..., X_m) can be observed independently and, consequently, decisions concerning the sub-sets of hypotheses can be made independently. Suppose that for the test statistic X_i ~ f(x_i|θ_i), i ∈ (1, ..., m), a decision cannot be made on the basis of the observation result x_i^{(1)} (the upper index indicates the number of the observation in the sequence). Let x_i^{(2)}, x_i^{(3)}, ... be the sequence of independent observations of X_i. Then the common density of the sample x_i^{(1)}, x_i^{(2)}, ... is f(x_i^{(1)}|θ_i) · f(x_i^{(2)}|θ_i) · ···. Using these densities in the decision-making regions of the appropriate CBM, we continue making observations until one of the tested directional hypotheses is accepted, i.e. the stopping rule in this sequential test is

$$T = \max_{i \in (1, \ldots, m)} T_i, \quad \text{where} \quad T_i = \inf\left\{ n_i : t_i\!\left( x_i^{(1)}, x_i^{(2)}, \ldots, x_i^{(n_i)} \right) \in \text{only one of } \Gamma_j^i, \ j \in J \right\}. \tag{18}$$


Here n_i is the number of sequentially obtained observations needed for the final decision in the ith sub-set of multiple hypotheses, and J is the set of tested hypotheses in each sub-set, i.e. J ≡ {−, 0, +}. If independent observation of the components of the vector X = (X_1, X_2, ..., X_m) is impossible, then the stopping rule is

$$T = \inf\left\{ n : \bigcap_{i=1}^{m} \left\{ t_i\!\left( \mathbf{X}^{(1)}, \mathbf{X}^{(2)}, \ldots, \mathbf{X}^{(n)} \right) \in \text{only one of } \Gamma_j^i, \ j \in J \right\} \right\}, \tag{19}$$

where X^{(i)}, i = 1, ..., n, are independent m-dimensional observation vectors. The following theorem is proved in Kachiashvili et al. (2020).

Theorem 2.8 The stopping rules (18) and (19) are proper, and the decision-making scheme developed on the basis of CBM strongly controls the mixed directional false discovery rate. That is, for any set ψ of true hypotheses in the directional hypotheses (17),

$$P\{T < \infty\} = 1 \quad \text{and} \quad tmdFDR \le q.$$

Here ψ = (ψ_1, ψ_2, ..., ψ_m) is the index set of the true hypotheses, ψ_i ∈ {−, 0, +}. Computational outcomes for concrete examples, given in Kachiashvili et al. (2020), confirm the correctness of the theoretical results.
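The sketch below is our own toy version of the sequential rule (18) for a single stream of directional hypotheses about a Gaussian mean; the acceptance regions are built from the simple-hypothesis form (6) with the directional alternatives collapsed to point values ±δ, and the value of λ is a placeholder rather than one solved from the CBM restriction conditions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
delta = 1.0                      # H-: theta = -delta, H0: theta = 0, H+: theta = +delta (point surrogates)
priors = {"-": 1/3, "0": 1/3, "+": 1/3}
means = {"-": -delta, "0": 0.0, "+": delta}
lam = 0.9                        # placeholder Lagrange multiplier
true_theta = 0.7                 # hypothetical data-generating mean

def accepted(xs):
    """Hypotheses whose region of form (6) contains the sample xs (joint density of all observations)."""
    logdens = {j: np.log(priors[j]) + norm.logpdf(xs, loc=means[j]).sum() for j in priors}
    shift = max(logdens.values())                     # normalize to avoid numerical underflow
    dens = {j: np.exp(v - shift) for j, v in logdens.items()}
    total = sum(dens.values())
    return [j for j in priors if total - dens[j] < lam * dens[j]]

xs = []
while True:                      # stopping rule (18): stop when exactly one region contains the statistic
    xs.append(rng.normal(true_theta, 1.0))
    acc = accepted(np.array(xs))
    if len(acc) == 1:
        print(f"stopped after n = {len(xs)} observations, accepted H{acc[0]}")
        break
```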

2.5 Union–Intersection and Intersection–Union Hypotheses

In many applications, problems arise in which the basic and/or alternative hypotheses can be represented as the union or intersection of several sub-hypotheses (SenGupta 2007; Roy 1953). The general methodology of testing union–intersection hypotheses was offered in Roy (1953). The idea of this methodology consists of considering pairs of sub-hypotheses, each pair chosen one by one from the basic and alternative sub-sets of hypotheses, respectively; the final regions for acceptance of the basic and alternative hypotheses are defined as the intersection of the appropriate sub-regions. The reverse scenario, when the basic and/or alternative hypotheses are presented as the intersection of appropriate sub-sets of hypotheses, is considered in SenGupta (2007), where a special methodology giving a powerful decision rule based on the Pivotal Parametric Product (P3) is offered. The application of CBM, developed by the author (Kachiashvili 2018), to the


considered types of hypotheses in one concrete case is given in Kachiashvili (2019a). There it is shown that the obtained decision rule allows us to restrict the Type-I and Type-II error rates at the desired levels in this case. In particular, the following problem is considered as an example. Let a random variable X follow the distribution f(x; θ), where θ is a scalar parameter, and consider testing

$$H_0 : \theta \le \theta_1 \ \text{ or } \ \theta \ge \theta_2 \quad \text{vs} \quad H_1 : \theta_1 < \theta < \theta_2. \tag{20}$$

Hypothesis H_0 can be represented as ∪_{i=1}^{2} H_{0i}, where the sub-hypotheses H_{01} and H_{02} are given by H_{01}: θ ≤ θ_1 and H_{02}: θ ≥ θ_2. That means we have to test the basic hypothesis H_0 ≡ ∪_{i=1}^{2} H_{0i} against the alternative H_1 when the condition H_0 ∪ H_1 = R^m is fulfilled. One of the possible statements of CBM, namely Task 2 (Kachiashvili 2018), is used for solving this problem. The solution of the stated problem gives

$$\begin{aligned} \Gamma_{01} &= \{x : K_1 \cdot (p(H_{02}|x) + p(H_1|x)) < K_0 \cdot \lambda_{01} \cdot p(H_{01}|x)\}, \\ \Gamma_{02} &= \{x : K_1 \cdot (p(H_{01}|x) + p(H_1|x)) < K_0 \cdot \lambda_{02} \cdot p(H_{02}|x)\}, \\ \Gamma_{1} &= \{x : K_1 \cdot (p(H_{01}|x) + p(H_{02}|x)) < K_0 \cdot \lambda_{1} \cdot p(H_1|x)\}, \end{aligned} \tag{21}$$

where the Lagrange multipliers λ_{01}, λ_{02} and λ_1 are determined so that in the restriction conditions

$$\int_{\Gamma_{01}} p(x|H_{01})\, dx \ge 1 - \frac{r_{201}}{K_0 \cdot p(H_{01})}, \quad \int_{\Gamma_{02}} p(x|H_{02})\, dx \ge 1 - \frac{r_{202}}{K_0 \cdot p(H_{02})}, \quad \int_{\Gamma_{1}} p(x|H_{1})\, dx \ge 1 - \frac{r_{21}}{K_0 \cdot p(H_{1})} \tag{22}$$

equalities take place (restriction conditions of CBM 2). In this case, the Type-I and Type-II error rates are

$$\alpha = \int_{\Gamma_1} p(x|H_{01})\, dx + \int_{\Gamma_1} p(x|H_{02})\, dx = p(x \in \Gamma_1|H_{01}) + p(x \in \Gamma_1|H_{02}),$$

$$\beta = \int_{\Gamma_{01}} p(x|H_1)\, dx + \int_{\Gamma_{02}} p(x|H_1)\, dx = p(x \in \Gamma_{01}|H_1) + p(x \in \Gamma_{02}|H_1). \tag{23}$$

The following theorem is proved.


Theorem 2.9 Testing hypotheses (20) using CBM 2 with the restriction levels of (22) ensures a decision rule (21) with Type-I and Type-II error rates restricted by the following inequalities:

$$\alpha \le \frac{r_{201}}{K_0 \cdot p(H_{01})} + \frac{r_{202}}{K_0 \cdot p(H_{02})}, \qquad \beta \le \frac{r_{21}}{K_0 \cdot p(H_1)}. \tag{24}$$

3 Comparative Analysis of Constrained Bayesian and Other Methods

In practically all of the above-mentioned papers of the author, the quality of the results obtained by CBM was compared, both theoretically and experimentally, with the results of other efficient existing methods for solving the same problems. The results of these comparisons have always been favorable for CBM. Presenting those results here would unjustifiably increase the volume of the article; therefore, interested readers are referred to the original works for the details.

4 Conclusion

On the basis of the presented material, we conclude that CBM exceeds the existing methodologies in the following respects: (1) it uses all the information that is used in the existing methodologies; (2) all the peculiarities of the formalization of the existing methodologies are taken into account in the formalization of CBM. In particular, the constrained Bayesian methodology uses not only the loss function and a priori probabilities to make a decision, as the classical Bayesian methodology does, but also the significance level of the criterion, as the frequentist (Neyman–Pearson) methodology does, and it is data dependent, similar to Fisher's methodology. The combination of these capabilities increases the quality of the decisions made by the constrained Bayesian methodology compared with other methodologies.

We have used the constrained Bayesian methodology for testing different types of hypotheses, such as simple, composite, directional, and multiple hypotheses, for which the advantage of CBM over existing methods is theoretically proven in the form of


theorems and practically shown by the computation of many practical examples. When testing these hypotheses, the constrained Bayesian methodology allows us to obtain decision rules that naturally, without any forcing, minimize all the existing criteria of optimal decision making. Such criteria are: (1) the Type-I, Type-II and Type-III error rates; (2) the false discovery rate (FDR); (3) the pure directional false discovery rate (pdFDR); (4) the mixed directional false discovery rate (mdFDR); (5) the Type-I and Type-II family-wise error rates (FWER_I, FWER_II); and so on.

I would like to point out that the constrained Bayesian methodology not only gives better results than other well-known methodologies but also makes it possible to solve problems that are difficult to overcome with the existing methodologies. As examples, I list some practical problems solved by the mentioned methodology, which are presented in the appendices of the monograph (Kachiashvili 2018): (1) detection and tracking of moving objects on the basis of radiolocation information; (2) identification of river water emergency pollution sources, depending on the current situation, by giving preference to ecological or economic factors; (3) sustainable development of production in accordance with the set strategy; (4) a system ensuring the safe sailing of ships; (5) verification in biometric systems with simultaneous limiting of the two types of errors at the desired levels.

References Alberton BAV, Nichols TE, Gamba HR, Winkler AM (2019) Multiple testing correction over contrasts for brain imaging, bioRxiv preprint first posted online Sep. 19, 2019. doi: http://dx. doi.org/https://doi.org/10.1101/775106 Bahadur RR (1952) A property of the t-statistics. Sankhya 12:79–88 Bansal NK, Hamedani GG, Maadooliat M (2015) Testing multiple hypotheses with skewed alternatives. Biometrics 72(2):494–502 Bansal NK, Hamedani GG, Maadooliat M (2016) Testing multiple hypotheses with skewed alternatives. Biometrics 72(2):494–502 Bansal NK, Hamedani GG, Sheng R (2012) Bayesian analysis of hypothesis testing problems for general population: a Kullback-Leibler alternative. J Statis Plan Inf 142:1991–1998 Bansal NK, Sheng R (2010) Beyesian decision theoretic approach to hypothesis problems with skewed alternatives. J Statis Plan Inf 140:2894–2903 Bansal NK, Miescke KJ (2013) A bayesian decision theoretic approach to directional multiple hypotheses problems. J Multivar Anal 120:205–215 Bauer P et al. (1988) Multiple hypothesenprüfung. (Multiple Hypotheses Testing.). Berlin, SpringerVerlag (In German and English) Benjamini Y, Hochberg Y (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Statist Sco B 57(1):289–300 Berger JO, Wang X, Shen L (2014) A Bayesian approach to subgroup identification. J Biopharmaceut Stat 24:110–129 Bernardo JM (1980) A Bayesian analysis of classical hypothesis testing, Universidad de Valencia, pp 605617 Bogdan M, Ghosh JK, Ochman A, Tokdar ST (2007) On the empirical Bayes approach to the problem of multiple testing. Qual Reliab Eng Int 23:727–739 Braun HI (1994) The collected works of John W Tukey, vol. VIII, Multiple Comparisons: 19481983. New York, Chapman & Hall


Chen J, Sarkar SK (2004) Multiple testing of response rates with a control: a Bayesian stepwise approach. J Statis Plan Inf 125(1–2):3–16 Chang S, Berger JO (2020) Frequentist properties of bayesian multiplicity control for multiple testing of normal means. Sankhya A. https://doi.org/10.1007/s13171-019-00192-1 De ShK, Baron M (2012a) Step-up and step-down methods for testing multiple hypotheses in sequential experiments. J Statis Plan Inf 142:2059–2070 De ShK, Baron M (2012b) Sequential bonferroni methods for multiple hypothesis testing with strong control of family-wise error rates I and II. Seq Anal 31:238–262 Dmitrienko A, Tamhane AC, Bretz F (2009) Multiple testing problems in pharmaceutical statistics. CRC Press, Taylor & Francis Group, A Chapman & Hall Book Dhrymes PJ (1978) Introductory econometrics. Springer Verlag, New York Dudoit S, Shaffer JP, Boldrick JC (2003) Multiple hypothesis testing in microarray experiment. Stat Sci 18:71–103 Dunnett CW, Tamhane AC (1991) Step-down multiple tests for comparing treatments with a control in unbalanced one-way layouts. Stat Med 10(6):939–947 Dunnett CW, Tamhane AC (1992) A step-up multiple test procedure. J Am Stat Assoc 87(417):162– 170 Efron B (2004) Large-scale simultaneous hypothesis testing. J Am Stat Assoc 99(465):96–104 Efron B (2005) Bayesians, frequentists, and scientists. J Am Stat Assoc 100(469):1–5 Finner H (1999) Stepwise multiple test procedures and control of directional errors. Ann Stat 27(1):274–289 Fisher RA (1925) Statistical methods for research workers. Oliver and Boyd, London Gómez-Villegas MA, González-Pérez B (2011) A Bayesian analysis for the homogeneity testing problem using–contaminated priors. Commun Statis Theory Methods 40(6):1049–1062 Gómez-Villegas MA, Maín P, Sanz L (2009) A Bayesian analysis for the multivariate point null testing problem. Statistics 43(4):379–391 Goritskiy YA, Kachiashvili KJ, Datuashvili MN (1977) Application of the generalized NeymanPearson criterion for estimating the number of false decisions at isolation of true trajectories. Techn Cyber Proce Georg Techn Inst, Tbilisi 7(198):79–85 Hamilton DC, Lesperance ML (1991) A consulting problem involving bivariate acceptance sampling by variables. Canadian J Statis 19:109–117 He L, Sarkar SK, Zhao Z (2015) Capturing the severity of type II errors in high-dimensional multiple testing. J Multivar Anal 142:106–116 Hochberg Y, Tamhance AC (1987) Multiple comparison procedures. Wiley, New York Hoppe FM (1993) Multiple comparisons. Selection, and Applications in Biometry. New York, Dekker Hsu JC (1996) Multiple comparisons: theory and methods. chapman & hall, New York Jeffreys H (1939) Theory of probability, 1st ed. Oxford, The Clarendon Press Jones LV, Tukey JW (2000) A sensible formulation of the significance test. Psychol Methods 5(4):411–414 Kachiashvili KJ (1980) Algorithms and programs for determination of lower limit of the mean number of false objects at restrictions on the appropriate numbers of omitted at processing of the radar-tracking information. Deposited in TsNIITEIpriborostroenia No. 1282. DM 1282. Bibl. Indicator VINITI “Deposited manuscripts”, No. 7. 62 Kachiashvili KJ (1989) Bayesian algorithms of many hypothesis testing, Tbilisi, Ganatleba Kachiashvili KJ (2003) Generalization of bayesian rule of many simple hypotheses testing. Int J Inf Technol Decis Mak 2(1):41–70 Kachiashvili KJ (2011) Investigation and computation of unconditional and conditional bayesian problems of hypothesis testing. 
ARPN J Syst Software 1(2):47–59 Kachiashvili KJ (2014) The methods of sequential analysis of bayesian type for the multiple testing problem. Seq Anal 33(1):23–38. https://doi.org/10.1080/07474946.2013.843318


Kachiashvili KJ (2015) Constrained bayesian method for testing multiple hypotheses in sequential experiments. Seq Analy Des Methods Appl 34(2):171–186. https://doi.org/10.1080/07474946. 2015.1030973 Kachiashvili KJ (2016) Constrained bayesian method of composite hypotheses testing: singularities and capabilities. Int J Statis Med Res 5(3):135–167 Kachiashvili KJ (2018) Constrained bayesian methods of hypotheses testing: a new philosophy of hypotheses testing in parallel and sequential experiments. Nova Science Publishers Inc., New York, p 361 Kachiashvili KJ (2019a) An example of application of CBM to intersection-union hypotheses testing. Biomed J Sci & Tech Res 19(3), 14345–14346. BJSTR. MS.ID.003304 Kachiashvili KJ (2019b) Modern state of statistical hypotheses testing and perspectives of its development. Biostat Biometrics Open Acc J 9(2), 555759. 14. Doi: https://doi.org/10.19080/BBOAJ. 2019.09.55575902 Kachiashvili KJ, Bansal NK, Prangishvili IA (2018) Constrained bayesian method for testing the directional hypotheses. J Mathem Syst Sci 8:96–118. https://doi.org/10.17265/2159-5291/2018. 04.002 Kachiashvili GK, Kachiashvili KJ, Mueed A (2012a) Specific features of regions of acceptance of hypotheses in conditional bayesian problems of statistical hypotheses testing. Sankhya: Indian J Statis 74(1), 112125 Kachiashvili KJ, Kachiashvili JK, Prangishvili IA (2020) CBM for testing multiple hypotheses with directional alternatives in sequential experiments. Seq Anal 39(1):115–131. https://doi.org/ 10.1080/07474946.2020.1727166 Kachiashvili KJ, Hashmi MA (2010) About using sequential analysis approach for testing many hypotheses. Bull Georg Acad Sci 4(2):20–25 Kachiashvili KJ, Hashmi MA, Mueed A (2009) Bayesian methods of statistical hypothesis testing for solving different problems of human activity. Appl Mathem Inf (AMIM) 14(2):3–17 Kachiashvili KJ, Hashmi MA, Mueed A (2012b) Sensitivity analysis of classical and conditional bayesian problems of many hypotheses testing. Commun Statis Theory Methods 41(4), 591605 Kachiashvili KJ, Mueed A (2013) Conditional bayesian task of testing many hypotheses. Statistics 47(2):274–293 Kachiashvili KJ, Prangishvili IA, Kachiashvili JK (2019) Constrained bayesian methods for testing directional hypotheses restricted false discovery Rates. Biostat Biometrics Open Acc J. 9(3), BBOAJ.MS.ID.555761. https://juniperpublishers.com/bboaj/articleinpress-bboaj.php Kaiser HF (1960) Directional statistical decisions. Psychol Rev 67:160–167 Klockars AJ, Sax G (1986) Multiple comparison. Sage, Newbury Park, CA Lehmann EL (1950) Some principles of the theory of the theory of testing hypotheses. Ann Math Stat 20(1):1–26 Lehmann EL (1957a) A theory of some multiple decision problems I. Ann Math Stat 28:1–25 Lehmann EL (1957b) A theory of some multiple decision problems II. Ann Math Stat 28:547–572 Lehmann EL (1993) The fisher, neyman-pearson theories of testing hypotheses: one theory or two? Am Statis Assoc J Theory Methods 88(424):1242–1249 Leventhal L, Huynh C (1996) Directional decisions for two-tailed tests: Power, error rates, and sample size. Psychol Methods 1:278–292 Marden JI (2000) Hypothesis testing: from p values to bayes factors. Am Stat Assoc 95:1316–1320. https://doi.org/10.2307/2669779 Miller RG (1966) Simultaneous Statistical Inference. Wiley, New York Neyman J, Pearson E (1928) On the use and interpretation of certain test criteria for purposes of statistical inference. 
Part i, Biometrica 20A:175–240 Neyman J, Pearson E (1933) On the problem of the most efficient tests of statistical hypotheses. Philos Trans Roy Soc Ser A 231:289–337 O’Brien PC (1984) Procedures for comparing samples with multiple endpoints. Biometrics 40:1079–1087


Pocock SJ, Geller NL, Tsiatis AA (1987) The analysis of multiple endpoints in clinical trials. Biometrics 43:487–498 Roy SN (1953) On a heuristic method of test construction and its use in multivariate analysis. Ann Math Stat 24:220–238 Salehi M, Mohammadpour A, Mengersen K (2019) A new f -test applicable to large-scale data. J Statis Theory Appl 18(4):439–449 Sarkar SK (2002) Some results on false discovery rate in stepwise multiple testing procedures. Ann Stat 30(1):239–257 Sarkar SK (2008) On methods controlling the false discovery rate. Sankhya: Indian J Statis Series A 70(2), 135168 Savin NE (1980) The Bonferroni and Scheffe multiple comparison procedures. Rev Econ Stud 47:255–213 Savin NE (1984) Multiple hypothesis testing, Trinity college, Cambridge, chapter 14 of handbook of econometrics, vol. II, Edited by Z. Griliches and M.D. Intriligator, Elsevier Science Publishers Seber GAF (1977) Linear regression analysis. Wiley, New York SenGupta A (2007) P3 Approach to intersection-union testing of hypotheses. Invited paper prepared for the S.N. Roy centenary volume of Journal of Statistical Planning and Inference, 137(11), 37533766 Shaffer JP (1986) Modified sequentially rejective multiple procedures. J Am Stat Assoc 81(395):826–831 Shaffer JP (1995) Multiple hypothesis testing. Annu Rev Psychol 46, 56184 Shaffer JP (2002) Multiplicity, directional (Type III) errors, and the null hypothesis. Psychol Methods 7(3):356–369 Tamhane AC, Hochberg Y, Dunnett CW (1996) Multiple test procedures for dose finding. Biometrics 52(1):21–37 Tartakovsky AG, Veeravalli VV (2004) Change-point detection in multichannel and distributed systems with applications. In: Mukhopadhyay N, Datta S, Chattopadhyay S (eds) Applications of sequential methodologies. Marcel Dekker Inc., New York, pp 339–370 Tartakovsky AG, Li XR, Yaralov G (2003) Sequential detection of targets in multichannel systems. IEEE Trans Inf Theory 49(2):425–445 Toothaker LE (1991) Multiple comparisons for researchers. Sage, NewBury Park, CA Wald A (1947a) Sequential analysis. Wiley, New-York Wald A (1947b) Foundations of a general theory of sequential decision functions. Econometrica 15:279–313 Westfall PH, Young SS (1993) Resampling-based multiple testing. Wiley, New York

Application of Path Counting Problem and Survivability Function in Analysis of Social Systems

Tatsuo Oyama

1 Introduction

In our daily lives, we are surrounded by various types of "lifeline networks," such as traffic roads, electric power transmissions, city gas supply pipelines, and water supply pipelines. All these lifeline networks consist of sets of nodes and edges, i.e., they are network-structured systems. If some nodes or edges are "broken" or "blocked," the system may not operate properly. Whether the system can operate properly or not depends on the extent to which the system is "broken"; hence, questions arise as to how sets of "broken" nodes or edges are related to each other and what types of network-structured systems are easy or difficult to break. Thus, we are interested in quantitatively measuring the "robustness" of network-structured systems. If a quantitative evaluation method were established, it would aid us in building more reliable and stable lifeline networks. In Sect. 2, we introduce the shortest path counting problem (SPCP) and its applications. In Sect. 3, we define the path counting problem (PCP) and the survivability function (SF), which we proposed recently for quantitatively measuring the robustness of a general network system. The SF was derived from the PCP through the edge deletion connectivity function (EDCF) and the expected edge deletion connectivity function (EEDCF). In Sect. 4, we introduce the SF and its applications to the analyses of social systems. Finally, in Sect. 5, we provide conclusions and summarize the paper.


2 SPCP and Its Applications

Given an undirected network N = (V, E) with node set V = {1, ..., n} consisting of n nodes and edge set E = {(i, j) | i ∈ V, j ∈ V, i ≠ j} consisting of m edges, let each edge (i, j) ∈ E in the network N = (V, E) have a "length" d_ij. The shortest path between nodes i ∈ V and j ∈ V in the network N = (V, E) is defined as the path with the shortest length, i.e., the smallest sum of the lengths of all edges contained in the path between nodes i and j. The SPCP, which was originally proposed by Oyama and Taguchi (1991a, b, 1992), can then be stated as follows: in how many shortest paths is each edge of the network N = (V, E) contained? There are n(n − 1)/2 shortest paths between pairs of different nodes in the undirected network; among all these shortest paths, the SPCP requires us to count the number of shortest paths passing through each edge. By applying the SPCP approach, we considered that we can estimate the "importance" of each road segment in a road network (Oyama and Taguchi 1996; Taguchi and Oyama 1993; Oyama 2000; Oyama and Morohosi 2003, 2004). The SPCP has a long history; it can be said to originate from the lattice counting problem proposed in 1979 (refer to Mohanty 1979). Since then, many interesting applications of graph-theoretic analyses, including path counting problems, to the analysis of social systems have appeared (refer to Wasserman and Faust 1994; Bandyopadhyay et al. 2011). Wasserman and Faust (1994) present a comprehensive discussion of social network methodology dealing with social network theory and application. In the recent book by Bandyopadhyay et al. (2011), various types of statistical methods and models to deal with social problems are proposed together with their actual applications. In the area of graph theory, various concepts such as cliques, reciprocity, degrees, and eccentricity, in addition to paths, could be applicable to social systems analyses. We expect that such new applications will be proposed in the near future.

Consider an SPCP for a grid network G(m, n), which consists of (m + 1)(n + 1) grid points (Fig. 1). The network G(m, n) contains m edges vertically and n edges horizontally, all with the same length. Consider also an SPCP for a circular-type network K(m, n) with a polar angle θ, consisting of (m + 1)(n + 1) grid points (Fig. 2). The network K(m, n) contains m edges radially and n edges circularly, all with the same length. Figures 3 and 4 depict a grid network G(4, 5) and a circular network K(2, 8), respectively, with θ = 2π/3, including the "weights (distances)" of all edges in the respective networks. These values indicate the number of shortest paths that pass through the corresponding edge, over all shortest paths between any two different nodes. Theoretical results for the number of shortest paths passing through each edge of the networks G(m, n) and K(m, n) can be obtained (Oyama and Morohosi 2003, 2004). In determining the shortest path between two distinct grid points, suppose that more than one shortest path with the same total length exists. We can apply the

Fig. 1 Grid network G(m, n)

Fig. 2 Circular network K(m, n)

Fig. 3 Grid network G(4, 5)


Fig. 4 Circular network K(2, 8)

following rules, which result in a unique shortest path between any two distinct grid points:

(i) We minimize the number of "turns" in selecting the shortest path.
(ii) If two shortest paths with an equal number of turns exist, we select the left-turn route.
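As an illustration of the SPCP computation described above, the following sketch counts, for every edge of a small grid network, how many shortest paths (one per pair of nodes) pass through it. It uses the networkx library and breaks ties simply by taking the first shortest path returned, rather than by the turn-minimizing rules (i)–(ii); the function and variable names are ours.

```python
import itertools
from collections import Counter

import networkx as nx

def spcp_edge_weights(G, weight="weight"):
    """For each edge, count the shortest paths (one per node pair) passing through it."""
    counts = Counter()
    for s, t in itertools.combinations(G.nodes, 2):
        path = nx.shortest_path(G, s, t, weight=weight)   # ties broken arbitrarily, not by rules (i)-(ii)
        for u, v in zip(path, path[1:]):
            counts[frozenset((u, v))] += 1
    return counts

# A small grid network G(4, 5) in the sense of Fig. 3: 5 x 6 grid points, unit edge lengths.
G = nx.grid_2d_graph(5, 6)
weights = spcp_edge_weights(G)
busiest = max(weights, key=weights.get)
print(f"{G.number_of_nodes()} nodes, {G.number_of_edges()} edges")
print(f"most-used edge {tuple(busiest)} lies on {weights[busiest]} shortest paths")
```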

The road network in the Tokyo metropolitan area is shown in Fig. 5, consisting of 529 nodes and 855 edges. Applying the SPCP to the road network in Fig. 5, we can obtain the frequency distribution of the weights of edges for the Tokyo network (e.g., Oyama and Morohosi 2004, pp. 560–561).

Fig. 5 Road network of metropolitan Tokyo area. Source Oyama and Morohosi (2004), p. 560


Fig. 6 Cumulative distribution of the weights of edges (Tokyo). Source Oyama and Morohosi (2004), p. 561


Figure 6 shows a cumulative distribution indicating, for each weight, the set of edges with that weight or higher for the Tokyo metropolitan area. This cumulative distribution confirms that the road segments with the largest weights virtually coincide with the most congested road segments in each area of Tokyo. Figure 6 indicates that the congested areas in this road network are the central parts of the radial roads emanating from the center to the surrounding areas and the circular roads, which are also near the center. Importantly, this implies that traffic congestion can be estimated without using, for example, actual traffic survey data. In other words, the measure can be considered to indicate, or correspond to, the "importance" of each edge segment.

3 PCP and SF

In contrast to the SPCP, we proposed the PCP, which asks how many paths, i.e., how many pairs of different nodes connected by a path, remain in the network after deleting an arbitrary number of edges or nodes from the original network. Oyama and Morohosi (2002, 2003, 2004) and Kobayashi et al. (2009) applied the PCP to evaluate the reliability and stability, i.e., the strength of connectivity, of a network-structured system. Defining a connectivity function for a network to quantify the strength of connectivity of a network-structured system, we demonstrated computational results for Japanese traffic road networks. We provided an approximate function for the case when some of the road segments are deleted and characterized the properties of the road networks of many regions of Japan using the estimated parameter values. Here, we attempt to evaluate the properties of the entire network by applying the PCP; we focus on the entire network rather than on each of its segments. Oyama and Morohosi (2004) and Kobayashi et al. (2009) described the use of the PCP. We define the EDCF and attempt to obtain the EEDCF of the network. Applying the Monte Carlo method, we estimate the EEDCF when an arbitrary number of edges or nodes are deleted from the original network. We attempt to approximate the EEDCF by an appropriate nonlinear function with two parameters. Additionally, we show the numerical results of applying the path counting method to evaluate the connectivity of a special type of network called a grid network. We propose a method to quantitatively measure the reliability, stability, and strength of connectedness of a network-structured system, and subsequently we provide several explicit results and properties for some networks with special structures (see also Oyama and Morohosi 2004).

Research on network reliability has a long history, and various methodologies involving computational experiments have been proposed, including those by Ball (1979) and Ball and Golden (1989). Additionally, network robustness problems were investigated by Scott et al. (2006), who proposed a new method to identify critical links in a transportation network and thus attempted to evaluate network performance considering network flow, link capacity, and topology. Their methodology was based on that of Bell and Iida (1997) and Bell (2000), in which the network reliability concept was introduced as a network performance measure. Bell (2000) investigated the


removal or blockage of one or more network links, i.e., the disruption of origin–destination routes, to measure the effect of "rerouting" on the importance of links. The approaches mentioned above focus on the connectivity between two specified nodes, whereas our approach focuses on the whole network; in this sense, our approach can be categorized as "macro" rather than "micro."

Consider the undirected connected network N = (V, E) with node set V = {1, ..., n} consisting of n nodes and edge set E = {(i, j) | i ∈ V, j ∈ V, i ≠ j} consisting of m edges. We can select n(n − 1)/2 paths between pairs of different nodes in the network N = (V, E). Assuming that d out of the m edges in the network N = (V, E) are deleted, we denote the number of paths connecting two different nodes by c_m(N, d). Let the ratio be denoted by

$$S_m(N, d) = \frac{c_m(N, d)}{c_m(N, 0)}, \quad d \in D = \{1, 2, \ldots, m\}. \tag{1}$$

In general, S_m(N, d) is not unique, as C(m, d) = m!/(d!(m − d)!) possibilities exist for selecting d out of the m edges in E. We term the function S_m(N, d) the EDCF. Note that this edge deletion connectivity function corresponds to the "stable connection function" in Oyama and Morohosi (2004). Assuming that each deletion pattern occurs with equal probability, we denote the expected value of the function S_m(N, d) by

$$\bar{S}_m(N, d) = E\{S_m(N, d)\}. \tag{2}$$

We term the above function the EEDCF. Similarly, we can define the expected node deletion connectivity function. When the network is simple enough, we can plot the graph of the expected edge deletion connectivity function exactly. Figure 7 shows the graph of the EDCF for the cycle network C_4 consisting of four nodes, while Fig. 8 depicts the graph of the EDCF for the complete network K_4 consisting of four nodes. The horizontal coordinates of these graphs indicate the number of edges deleted from the original graph.

Fig. 7 EDCF S_m(C_4, d)


Fig. 8 EDCF S_m(K_4, d)
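A small, self-contained sketch of how the EDCF values behind Figs. 7 and 8 can be enumerated exactly for tiny networks: for every way of deleting d edges, it counts the connected node pairs and averages the ratio (1). The networks C_4 and K_4 match the figures; the code itself is our illustration, not the authors'.

```python
import itertools

import networkx as nx

def connected_pairs(G):
    """Number of node pairs joined by a path, computed from connected components."""
    return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(G))

def exact_eedcf(G, d):
    """Average of S_m(N, d) in (1) over all C(m, d) ways of deleting d edges."""
    edges, c0 = list(G.edges), connected_pairs(G)
    ratios = []
    for removed in itertools.combinations(edges, d):
        H = G.copy()
        H.remove_edges_from(removed)
        ratios.append(connected_pairs(H) / c0)
    return sum(ratios) / len(ratios)

for name, G in [("C4", nx.cycle_graph(4)), ("K4", nx.complete_graph(4))]:
    values = [round(exact_eedcf(G, d), 3) for d in range(G.number_of_edges() + 1)]
    print(f"{name}: EEDCF for d = 0..{G.number_of_edges()}: {values}")
```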

As the size of the network increases, computing the EEDCF by enumerating all possible deletion patterns becomes impractical. To evaluate the value of S̄_m(N, d) for a large network, we utilized a Monte Carlo simulation. Our Monte Carlo simulation algorithm to estimate the EEDCF is described below.

Monte Carlo simulation algorithm
1. Iterate the following operations r times for each number of d:
   (a) By deleting d edges (nodes) randomly from the network N, build a new network N_d.
   (b) Calculate the number of paths connecting two different nodes in N_d.
2. Calculate the mean of the r sampled values together with their empirical distribution.

In our Monte Carlo simulation, we iterated the above calculation 100,000 times for each number of d in the first step of the algorithm whenever the number of enumeration patterns, given by C(m, d), was larger than 100,000. We applied the Monte Carlo simulation technique to a special type of network, the grid network GR(p, q), consisting of p and q nodes in the vertical and horizontal directions, respectively, i.e., p × q nodes and 2pq − p − q edges in the entire graph. Figure 9 shows the EDCF S_m(GR(20, 20), d) for deletion ratios ranging between 0 and 1; the parameter d indicates the ratio of the number of deleted edges to the total number of edges. Figure 9 shows the maximum, minimum, lower and upper quartiles, and the mean of the EDCF S_m(GR(20, 20), d). We conducted at most 100,000 calculations for each scenario with the Monte Carlo simulation. The figure indicates that the mean curve lies in the middle and the minimum curve is always significantly lower than the mean curve, while the lower and upper quartile curves are always near the mean curve. The scenarios in which the minimum values are attained can be regarded as rare. Moreover, since the lower and upper quartiles are very close to the mean, about 50% of all scenarios can be approximated by the mean value.
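The following sketch is a minimal version of the Monte Carlo algorithm above, applied to a grid network; the iteration count is kept small so it runs quickly, and the implementation choices (networkx grid graph, uniform sampling of edge subsets) are ours.

```python
import random

import networkx as nx

def connected_pairs(G):
    return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(G))

def mc_eedcf(G, d, reps=2000, seed=0):
    """Monte Carlo estimate of the EEDCF (2): mean of S_m(N, d) over random d-edge deletions."""
    rng = random.Random(seed)
    edges, c0 = list(G.edges), connected_pairs(G)
    total = 0.0
    for _ in range(reps):
        H = G.copy()
        H.remove_edges_from(rng.sample(edges, d))
        total += connected_pairs(H) / c0
    return total / reps

G = nx.grid_2d_graph(10, 10)                 # GR(10, 10): 100 nodes, 180 edges
m = G.number_of_edges()
for ratio in [0.1, 0.3, 0.5, 0.7]:
    d = int(ratio * m)
    print(f"deletion ratio {ratio:.1f} (d = {d}): EEDCF ~ {mc_eedcf(G, d):.3f}")
```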


Fig. 9 EDCF Sm (GR(20, 20), d) for the Tokyo road network. Source Oyama and Morohosi (2009), p. 376

We calculated the value of the EEDCF S̄_m(N, d) for the networks N = GR(p, p) with p = 5, 10, 20, and 30; the graphs are shown in Fig. 10. When the deletion ratio was in the range [0, 0.45], the value of the EEDCF increased as the value of p increased. In contrast, when the deletion ratio was in the range [0.45, 1], the value of the EEDCF decreased as p increased; moreover, in this range the differences between the EEDCF values were larger. When the deletion ratio was in the range [0.65, 1.0], the EEDCF values decreased as the value of p increased. The threshold value separating these intervals, given as 0.65 in the above scenarios, was slightly larger than that in other scenarios such as G(p, p), where the corresponding value was 0.45. We attempted to approximate the mean curve, the EEDCF, by an appropriate nonlinear function with two parameters. The approximating function with two parameters t and k, which we termed the survivability function, is given as follows:

Fig. 10 EEDCF S̄_m(GR(20, 20), d) for the Tokyo road network. Source Oyama and Morohosi (2009), p. 378


$$f(x) = \frac{x^{tk}}{x^{tk} + (1 - x^{t})^{k}} \tag{3}$$

Table 1 Estimates of parameters t̂ and k̂

Graph      t̂     k̂     Residual
GR(5,5)    0.87   2.85   0.0019
GR(10,10)  0.88   3.95   0.0032
GR(20,20)  0.89   5.66   0.0042
GR(30,30)  0.90   6.84   0.0081

The residual sum of squares is computed from the observed values (x_i, y_i) and the estimated parameters t̂ and k̂ as

$$\sum_{i=1}^{n} \left( y_i - f(x_i; \hat{t}, \hat{k}) \right)^2. \tag{4}$$

In the above survivability function, the special case t = 1 and k = 3 corresponds to the "cubic law" used in the social science of politics to explain the relationship between the number of votes and the corresponding number of seats each political party obtains in a general election when two main political parties are dominant. Table 1 shows the estimates of t and k for the grid networks GR(p, p), p = 5, 10, 20, 30. These results indicate that t̂ ranged approximately between 0.85 and 0.95, while k̂ ranged more widely, between 2.80 and 7.00 (see Morohosi and Oyama (2009), p. 381). In Eq. (3), the parameter t̂ indicates "the time at which the function starts decreasing," i.e., the smaller (larger) the value of t̂, the earlier (later) the decrease begins. In contrast, the parameter k̂ corresponds to "the speed at which the function decreases," i.e., the larger (smaller) the value of k̂, the higher (lower) the decreasing speed.
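A hedged sketch of how estimates such as those in Table 1 can be obtained: it fits the survivability function (3) to observed (x, EEDCF) pairs by least squares, minimizing the residual (4) with scipy's curve_fit. The synthetic data points below are placeholders standing in for the Monte Carlo estimates, not the authors' data.

```python
import numpy as np
from scipy.optimize import curve_fit

def survivability(x, t, k):
    """Survivability function (3)."""
    return x ** (t * k) / (x ** (t * k) + (1.0 - x ** t) ** k)

# Placeholder observations standing in for the Monte Carlo EEDCF estimates;
# x is the argument of (3) and y the corresponding EEDCF value.
rng = np.random.default_rng(2)
x_obs = np.linspace(0.05, 0.95, 19)
y_obs = survivability(x_obs, 0.89, 5.66) + rng.normal(0.0, 0.01, x_obs.size)

(t_hat, k_hat), _ = curve_fit(survivability, x_obs, y_obs, p0=(1.0, 3.0), bounds=(0, np.inf))
residual = np.sum((y_obs - survivability(x_obs, t_hat, k_hat)) ** 2)  # the residual (4)
print(f"t_hat = {t_hat:.2f}, k_hat = {k_hat:.2f}, residual sum of squares = {residual:.4f}")
```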

4 SF and Its Application to Analysis of Social Systems

4.1 Measuring the Robustness of the Urban Traffic Road Network System in Tokyo

We aimed at measuring the robustness of an urban traffic road network system by applying the EDCF and EEDCF approaches to the actual road network of Tokyo in the metropolitan area of Japan. We used the road network of Tokyo shown in Fig. 11, which consists of 438 nodes and 798 edges. Applying the Monte Carlo simulation technique to this network, we obtained the EEDCF shown in Fig. 12. Figure 12 shows both the approximate EEDCF values and the actual


Fig. 11 Road map of Tokyo

Fig. 12 EEDCF and actual observed data

observed data. The horizontal coordinate of the figure divides the interval from 0 to 1 into 50 equal parts. The figure indicates a high goodness of fit with the observed data, except for a small gap observed between approximately 12 and 14, corresponding to 0.24 and 0.28 on the horizontal coordinate. The estimates of the parameters t and k


Table 2 Degree frequency and characteristics of each graph

Degree     Tokyo   GR(5,5)  GR(10,10)  GR(20,20)  GR(30,30)
1          1       0        0          0          0
2          4       4        4          4          4
3          175     12       32         72         112
4          229     9        64         324        784
5          28      0        0          0          0
6          1       0        0          0          0
Total      1596    80       360        1520       3480
Nodes      438     25       100        400        900
Edges      798     40       180        760        1740
Avg. Deg   3.644   3.200    3.600      3.800      3.867

for the survivability function in Eq. (3) were t̂ = 0.83 and k̂ = 5.65. We know that in Eq. (3), t̂ indicates the time at which the function begins to decrease, i.e., the smaller (larger) the value of t̂, the earlier (later) the decrease begins. Furthermore, k̂ is the speed at which the function decreases, that is, the larger (smaller) the value of k̂, the higher (lower) the decreasing speed. Thus, these estimates of t̂ and k̂ can be compared with those provided in Table 1 to study their relation to each other. In other words, from the perspective of robustness, the road network of Tokyo shown in Fig. 11 may correspond to, or be nearly equivalent to, the grid network GR(20,20), as their estimated values of t̂ and k̂ are similar. Table 2 shows the degree frequency and characteristics of each graph of the Tokyo road network and several grid networks. We consider that close relationships may exist between the form of the EEDCF and the degree distribution of the network. That is, when the network is rather "dense," i.e., more edges are connected to each node, it is considered to be more robust, i.e., the EEDCF values are "larger," and thus the estimates of t̂ and k̂ may be large. We will now investigate the relationships between the form of the EEDCF and the degree distribution of the network, how they relate to each other, how more robust networks can be created, and so on.
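The grid-network columns of Table 2 can be reproduced with a few lines of code; the sketch below tabulates the degree frequencies, edge counts, and average degree for GR(p, p) using networkx (the Tokyo column, of course, requires the actual road network data, which is not included here).

```python
from collections import Counter

import networkx as nx

for p in (5, 10, 20, 30):
    G = nx.grid_2d_graph(p, p)                       # GR(p, p)
    freq = Counter(dict(G.degree()).values())        # degree -> number of nodes
    n, m = G.number_of_nodes(), G.number_of_edges()
    print(f"GR({p},{p}): degrees {dict(sorted(freq.items()))}, "
          f"nodes {n}, edges {m}, avg. deg {2 * m / n:.3f}")
```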
4.2 Investigating the Relationship Between Vote Share (VS) and Seat Share (SS)

The Parliament of Japan, the National Diet, is composed of the House of Representatives (HR) (Lower House) and the House of Councilors (HC) (Upper House). In 1994, the Japanese Diet changed the electoral system for the House of Representatives from a federal election system to a hybrid single-member and proportional representation system. Before the political reform of the electoral system, the HR had 511 seats in total; after 1994 this was reduced, with 300 seats elected from single-member


system was accepted; the remaining 180 seats were allocated using the proportional system. The 300 seats of the single-member electoral system are now allocated to each prefecture in proportion to its population, after one seat has first been distributed to each prefecture. General elections for the HR are held every four years (unless the lower house is dissolved earlier). Each voter has the right to cast two votes, one for a single-seat constituency and the other for a proportional seat. Each political party draws up a candidate list for the proportional seats, which are allocated to the parties based on their proportional share of votes following the D'Hondt method. Elections for the HC are held every three years to select one-half of its members. The HC has 242 members (elected for a 6-year term): 146 members in 47 single- and multi-seat constituencies (prefectures) by single transferable vote and 96 members by proportional representation (using the D'Hondt method) at the national level. The proportional election to the HC enables voters to cast a preference vote for a single candidate on a party list; the preference votes exclusively determine the ranking of candidates on the party lists. In this section, we approximate the relationship between vote share (VS) and seat share (SS) for Japanese national elections. Tables 3 and 4 show the vote shares (VS) and seat shares (SS) of recent national elections for the HR and HC in Japan. The empirical cube law for election results states that the ratio of the seats won by the two major parties in an election is approximately the cube of the ratio of their votes (Kendall and Stuart 1950). With the increase in minor-party seats in recent elections around the world, many researchers have modified or generalized the cube law (Balu 2004; Grofman 1983; King and Browning 1987; Neimi and Pett 1986; Tufte 1973). The generalizations were in two different directions: one for election rules and the other for the number of contending parties (Taagepera 1973, 1986). We approximated the VS–SS relationship using the survivability function, whereas the cubic law is a special case

Table 3 VS and SS in the HC: party-wise vote share (VS) and seat share (SS), in per cent, for the 2007, 2010 and 2013 House of Councilors elections. For example, the LDP obtained VS/SS of 31.35/31.51 in 2007, 33.4/53.4 in 2010 and 42.7/64.38 in 2013. LDP: Liberal Democratic Party, NK: New Komeito, DP: Democratic Party, JR: Japan Restoration, JCP: Japanese Communist Party, PNP: People's New Party, YP: Your Party, OP: Other Parties
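The cube law mentioned above can be stated concretely for a two-party contest: the ratio of seats is approximately the cube of the ratio of votes, so a party with vote share v obtains a seat share of roughly v³/(v³ + (1 − v)³). The short Python sketch below (an illustration added here; the exponent k is a simple textbook generalization and is not the survivability function of Eq. (3)) shows how strongly the law amplifies small leads in vote share.

```python
# Classical cube law for a two-party contest: seat share implied by vote share.
# k = 3 gives the cube law; other k values are a simple "k-th power" variant.
def power_law_seat_share(vote_share: float, k: float = 3.0) -> float:
    v = vote_share
    return v**k / (v**k + (1.0 - v)**k)

if __name__ == "__main__":
    for vs in (0.45, 0.50, 0.55, 0.60):
        print(f"vote share {vs:.2f} -> implied seat share {power_law_seat_share(vs):.3f}")
```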


Table 4 VS and SS in the HR: party-wise vote share (VS) and seat share (SS), in per cent, for the 2005, 2009, 2012 and 2014 House of Representatives elections, covering the LDP, DP, JR, NK, YP, TPJ, SDP, NPD, PNP, NPN, JIP, PPR and other parties (OP). For example, the LDP obtained VS/SS of 47.8/73 in 2005, 38.68/21.3 in 2009, 43.01/79 in 2012 and 48.1/75.6 in 2014. TPJ: Tomorrow Party of Japan, SDP: Social Democratic Party, NPD: New Party DAICHI, NPN: New Party Nippon, JIP: Japan Innovation Party.

of this function (Oyama and Morohosi 2004; Kobayashi et al. 2009). We have been working on this problem to investigate the relationship between them. To characterize the VS–SS relationship, we attempted to express the relationship using the survivability function given in Eq. (3). Table 5 shows parameter estimates; t and k indicate that t ranged between 0.66 and 0.83, whereas k ranged relatively more widely, between 1.48 and 6.93, in all elections. In all the scenarios, R2 was significantly high, and χ 2 statistics was approximately zero. Figure 13a–g shows the approximation of the VS–SS relationship for recent Japanese national election results. 



Table 5 Parameter estimates for t and k

House    Year    t       k       χ²       R²
HR       2005    0.83    5.81    0.001    0.9996
HR       2009    0.83    6.93    0        0.9999
HR       2012    0.66    4.48    0        0.9988
HR       2014    0.71    2.95    0.002    0.9991
HC       2007    0.73    2.67    0.001    0.9996
HC       2010    0.76    1.48    0.03     0.9775
HC       2013    0.67    2.16    0.002    0.9989



Fig. 13 a Fitting for the election in 2005, HR. b Fitting for the election in 2009, HR. c Fitting for the election in 2012, HR. d Fitting for the election in 2014, HR. e Fitting for the election in 2007, HC. f Fitting for the election in 2010, HC. g Fitting for the election in 2013, HC


5 Summary and Conclusion

The survivability function was originally proposed by Oyama et al. (1996, 2006) to measure the robustness of a network-structured system. Subsequently, the authors have been attempting to apply it to various types of problems in society (Oyama et al. 2006). In this paper, we have focused on the SF and its application to analyzing social systems. The SF originated from the PCP, which can be considered a generalization of the SPCP. Section 2 introduced the SPCP and its applications for measuring the "importance" of the edges of a network, applying the SPCP to various types of networks, including the road map of the Tokyo metropolitan area. Section 3 defined the PCP by generalizing the SPCP; the PCP was then applied to various types of network-structured systems to measure their robustness. Given a network, by deleting any number of edges from the original network, we can count the number of paths between any two different nodes and obtain the ratio of the number of remaining paths to the total number of paths. Defining the EDCF and the EEDCF obtained by applying the Monte Carlo simulation approach, we approximate the mean curve EEDCF using the nonlinear survivability function with two parameters. Section 4 introduced the application of the SF to the analysis of social systems, selecting two problems: (i) measuring the robustness of the urban traffic road network system in Tokyo and (ii) investigating the relationship between VS and SS. In both problems, we demonstrated that the SF is applicable. We consider that the function is applicable to plurality voting elections and to various types of social and natural phenomena exhibiting a similar trend. In future studies, we plan to further investigate the application of the SF to the analysis of social systems; other types of social network system analyses, for example of communication network systems and their reliability, stability and resilience, would be natural candidates. Furthermore, the SF may be related to what is termed the logistic curve, so investigating the theoretical aspects of the SF could be another interesting topic. We have been investigating these problems.

Acknowledgments The author would like to express his thanks to the anonymous reviewers who have given invaluable comments to improve this paper.

References

Ball MO (1979) Computing network reliability. Oper Res 27:823–838
Ball MO, Golden BL (1989) Finding the most vital arcs in a network. Oper Res Lett 8:73–76
Balu A (2004) A quadruple whammy for first-past-the-post. Elect Stud 23:431–453
Bandyopadhyay S, Rao AR, Sinha BK (2011) Models for social networks with statistical applications. Sage Publications, Inc.
Bell MGH, Iida Y (1997) Transportation network analysis. John Wiley and Sons, New York


Bell MGH (2000) A game theoretic approach to measuring the performance reliability of transportation networks. Transp Res Part B: Methodol 34:533–545
Grofman BN (1983) Measures of bias and proportionality in seats-votes relationships. Polit Methodol 10:295–327
Kendall MG, Stuart A (1950) The law of the cubic proportion in election results. Br J Sociol 1(3):183–196
King G, Browning RX (1987) Democratic representation and partisan bias in congressional elections. Am Polit Sci Rev 81(Dec):1251–1273
Kobayashi K, Morohosi H, Oyama T (2009) Applying path-counting methods for measuring the robustness of the network-structured system. Int Trans Oper Res 16:371–389
Mohanty SG (1979) Lattice path counting and its applications. Academic Press, Inc., London
Neimi RG, Pett P (1986) The swing ratio: an explanation and an assessment. Legisl Stud Q XI 1(Feb):75–90
Oyama T, Taguchi A (1991a) On some results of the shortest path counting problem. Abstracts of the OR Society Spring Meeting (Kitakyushu), pp 102–103
Oyama T, Taguchi A (1991b) Further results on the shortest path counting problem. Abstracts of the OR Society Fall Meeting (Osaka), pp 166–167
Oyama T, Taguchi A (1992) Shortest path counting problem and evaluating the congestion of road segments. PAT News, The Institute of Statistical Research, no. 1, pp 13–19 (in Japanese)
Oyama T, Taguchi A (1996) Application of the shortest path counting problem to evaluate the importance of the city road segments in Japan. In: Traffic YM, Fushimi M (eds) Perspectives of advanced technology society 3: urban life. Maruzen Planet Co., Tokyo, Japan, pp 3–20
Oyama T (2000) Weight of shortest path analysis for the optimal location problem. J Oper Res Soc Japan 43(1):176–196
Oyama T, Morohosi H (2003) Applying the shortest path counting problem to evaluate the stable connectedness of the network-structured system. In: Operational research and its applications: recent trends, proceedings of APORS 2003, New Delhi, India, pp 375–384
Oyama T, Morohosi H (2004) Applying the shortest path counting problem to evaluate the importance of city road segments and the connectedness of network-structured system. Int Trans Oper Res 11(5):555–574
Taagepera R (1973) Seats and votes: a generalization of the cube law of elections. Soc Sci Res 2:257–275
Taagepera R (1986) Reformulating the cube law for proportional representation elections. Am Polit Sci Rev 80:489–504
Taguchi A, Oyama T (1993) Evaluating the importance of the city road segments based on the network structure: application on the city road network. Commun Oper Res 38(9):465–470 (in Japanese)
Tufte ER (1973) The relationship between seats and votes in two-party systems. Am Polit Sci Rev 67:540–554
Wasserman S, Faust K (1994) Social network analysis: methods and applications. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511815478

Framework for Groundwater Resources Management and Sustainable Development of Groundwater in India

Asok Kumar Ghosh

1 Introduction

Groundwater is a very important resource and is a part of a larger system of water resources and the hydrological cycle. Groundwater is utilized as a source of drinking water as well as for other uses such as irrigation and industry. India being a monsoon-fed country, surface water is available during the rainy season. However, during the rainy season, excess water runs off as floodwater and is gradually disposed of to the sea or recharges the aquifer below the surface. Groundwater, in contrast, is available for use at distant places and throughout the year. Storage of surface water requires dams and big reservoirs, which are capital intensive and need regional planning. For small consumers or for agricultural use during the dry season, bore-wells or tube-wells are the only sources of water. In terms of quality, the salt content of groundwater is higher than that of surface water, and due to overexploitation of groundwater the quality often deteriorates. Although groundwater is a very important resource, it is also a very important part of the overall environment. Any improper handling of this valuable resource can cause serious damage to the ground strata which hold groundwater, i.e. the aquifer body. This paper attempts to describe the systematic, step-by-step approach to be undertaken for proper management of groundwater so that the resource can remain in a healthy, utilizable condition for years to come. Another important feature is that groundwater occurs as part of underground strata, just as many important mineral resources do, and excavation of these mineral resources can cause serious damage to aquifer bodies. Since minerals command a price while water is not assigned any price, mineral economics is not required to accommodate the loss due to damage to the aquifer. Since most mining projects are in forest and rural areas, the rural population suffers due to damage of the aquifer in mining areas.

A. K. Ghosh (B) Former Technical Director & Head Water System, M. N. Dastur & Co. Calcutta, Mission Row Extension, Calcutta, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 B. K. Sinha and S. B. Bagchi (eds.), Strategic Management, Decision Theory, and Decision Science, https://doi.org/10.1007/978-981-16-1368-5_13


This aspect needs careful assessment of the hydrogeological problems associated with mining activities.

2 Groundwater Aquifer as Part of the Hydrological Cycle

Water is part of a very important natural global cycle termed the hydrological cycle, with the sun as the prime mover supplying the energy for water to evaporate from the oceans and other surface water bodies. The ultimate source of groundwater replenishment is rainfall, which enters the aquifer through the soil water zone below ground by infiltration or by seepage of water from lakes or streams. Depending on water content and water movement, underground rock strata can be classified as aquifer, aquitard, aquiclude and aquifuge. Depending on the placement of these different types of strata, aquifers are termed unconfined, perched, confined, leaky and semi-unconfined. In India, the underground strata holding groundwater can be classified as:

1. Porous rock formations:
   (a) Unconsolidated formations
   (b) Semi-consolidated formations
2. Hard rock/consolidated formations:
   (a) Fracture- and fissure-bearing rocks
   (b) Karstic formations with caves and solution cavities.

The various geological formations holding groundwater are:

1. Alluvial aquifers
2. Laterite aquifers
3. Sandstone/shale aquifers
4. Limestone aquifers
5. Basalt aquifers
6. Crystalline aquifers.

The extraction of groundwater from an aquifer needs the construction of a structure such as a dug well, bore-well, infiltration gallery or tube well. The yield from a water extraction structure depends on the storage and flow characteristics of water in the aquifer. These characteristics are quantitatively estimated on the basis of the potentiometric (piezometric) surface, transmissivity, storativity and specific capacity. Groundwater flow in the subsurface is driven by differences in energy or head: water flows from a high-energy/high-head zone to a low-energy/low-head zone. The total head of flowing groundwater is governed by Bernoulli's equation and comprises gravitational potential energy (datum head) plus pressure head plus velocity head. The movement of a fluid (groundwater) through a porous medium is a subject by itself


and has been presented in detail by Scheidegger (1974), Polubarinova-Kochina and De Wiest (1962) and Glover (Engineering Monograph 31). Groundwater flow can be either laminar, if the Reynolds number is less than 2200, or turbulent, if the Reynolds number is higher than 2200. For laminar flow, the flow velocity is directly proportional to the hydraulic gradient; this type of groundwater flow is governed by Darcy's law. In unconsolidated or semi-consolidated rocks, the Reynolds number is mostly below 2200 and the flow is laminar and follows Darcy's law:

q = Q/A = −K (dh/dl)     (1)

where q = specific discharge, Q = discharge, A = cross-sectional area, dh/dl = hydraulic gradient and K = hydraulic conductivity.

Turbulent groundwater flow can occur in many aquifers through relatively large interconnected porosity. Turbulent flow is characterized by streamlines flowing in random, complex patterns (eddies) because the viscous forces of the water are overcome by shear stresses within the water. Turbulence in groundwater flow has been examined by Smith and Nelson (1964). The problem of turbulent flow was analysed by using pipes of circular section as flow channels of underground rocks, with pipe diameters of 0.25–150 mm; critical velocities were calculated from Reynolds' equation. The results show that the groundwater flow is turbulent in the larger pipes. Qian, Zhan, Zhao and Sun (2005) made an experimental study of turbulent unconfined groundwater flow in a single fracture: the average flow velocity was approximated by an empirical exponential function of the hydraulic gradient, and the power index of the fitted function was close to 0.5. The effect of turbulent groundwater flow on hydraulic heads and parameter sensitivities was examined for the Biscayne aquifer in southern Florida using the Conduit Flow Process (CFP) for MODFLOW-2005 (Shoemaker 2009). Medici, West and Banwart (2019) studied the implications of groundwater flow velocities in a fractured carbonate aquifer for the transport of contaminants. For their case study aquifer, the workflow predicts hydraulic apertures ranging from 0.10 to 0.54 mm; the corresponding groundwater flow velocities range from 13 to 242 m/day, while hydraulic conductivity ranges from 0.30 to 2.85 m/day. The turbulent flow of groundwater through fractures in hard rock or karstic limestone aquifers can be likened to flow through an interconnected pipe network and can therefore be compared with the equation for pipe flow:

V = 0.85 C R^0.633 S^0.54     (2)

Here the power of slope factor s is significant. Varalakshmi et al. (2014) generated a model of the hard rock aquifer comprising granites, basalts and a small amount of laterites in Osmansagar and Himayathsagar

198

A. K. Ghosh

catchments with the help of MODFLOW software and studied recharge and discharge to evaluate the effect on the water table.
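As a small numerical illustration of Eqs. (1) and (2) above (a sketch added here, not taken from the chapter; all parameter values are assumed purely for illustration), the laminar Darcy flux and the pipe-flow-type velocity can be computed as follows.

```python
# Illustrative calculation of Eq. (1) (Darcy specific discharge, laminar flow)
# and Eq. (2) (pipe-flow-type velocity used as an analogue for turbulent flow
# in fractured/karstic rock). All input values are assumed for illustration.
def darcy_specific_discharge(K, dh, dl):
    """Eq. (1): q = -K * (dh/dl); K in m/day, dh/dl dimensionless."""
    return -K * (dh / dl)

def pipe_flow_velocity(C, R, S):
    """Eq. (2): V = 0.85 * C * R**0.633 * S**0.54."""
    return 0.85 * C * R**0.633 * S**0.54

if __name__ == "__main__":
    # Assumed: hydraulic conductivity 2.0 m/day, head drop of 5 m over 1000 m
    q = darcy_specific_discharge(K=2.0, dh=-5.0, dl=1000.0)
    print(f"Darcy specific discharge: {q:.4f} m/day")

    # Assumed: roughness coefficient C = 100, hydraulic radius R = 0.05 m, slope S = 0.005
    v = pipe_flow_velocity(C=100.0, R=0.05, S=0.005)
    print(f"Pipe-flow velocity analogue: {v:.3f} m/s")
```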

3 Groundwater Exploration

Surface water bodies are open to our eyes, and we can extract water from these sources without any major investigation. Groundwater, however, is a dynamic resource existing below ground and needs detailed investigation before water can be extracted economically. The following step-by-step exploration methodology needs to be applied when planning groundwater extraction without initiating any environmental hazard (Balsubramanian 2007; Badrinarayanan).

A. Regional and detailed survey:
1. Basin-related hydrological studies.
2. Geological and geomorphological studies.
3. Surface geophysical studies, particularly electro-resistivity surveys.
4. Geo-botanical and geotechnical surveys.
5. Subsurface hydrogeological survey, including an inventory of existing dug wells and bore wells.
6. Borehole logging.
7. Tracer studies.
8. Aerial photo-geologic methods.
9. Aerial topographic survey.
10. Landsat imagery and infrared imagery.
11. Remote sensing and satellite imagery.
12. Electromagnetic techniques.
13. Esoteric studies like water divining.
14. Water balance studies.
15. Regional water quality studies.

B. Detailed studies and location of bore wells:
1. Identification of potential well fields.
2. Selection of locations for exploratory-cum-production bore-wells.
3. Study of the lithological log of the exploratory bore well.
4. Pump testing: (a) step drawdown test and (b) aquifer performance test.
5. Water sample collection and analysis of chemical and bacteriological parameters.

4 Dynamic Groundwater Resources

Groundwater is an annually replenishable resource, but its availability is non-uniform over space and time. The ultimate source of replenishment of underground water is precipitation or rainfall. Rainfall is again unevenly distributed over space and


time. Another interesting aspect is that the demand for groundwater is highest when replenishment is lowest or practically nil. The dynamic behaviour of groundwater aquifers therefore needs detailed quantitative study for different regions and different aquifers. Since the availability and utilization scenario changes frequently, periodic assessment of groundwater level and quality in different aquifers is needed. The methodologies for study and the groundwater resource potential for different states and regions have been presented in detail in the Report of the Ground Water Resource Estimation Committee (GEC 2015) prepared by the Ministry of Water Resources, River Development & Ganga Rejuvenation, Government of India, New Delhi, October 2017. The salient features of the methodologies followed are presented below:

1. In those aquifers where the aquifer geometry is yet to be established for the unconfined aquifer, the in-storage groundwater resources have to be assessed in the alluvial zones up to the depth of bedrock or 300 m, whichever is less.
2. In the case of hard rock aquifers, the depth of assessment would be limited to 100 m.
3. For a confined aquifer with no withdrawal of groundwater, only in-storage resources are estimated.
4. For a confined aquifer with withdrawal of groundwater, the dynamic as well as the in-storage resources are to be estimated.
5. The periodicity of assessment is recommended as 3 years.
6. For all aquifers, the lateral as well as the vertical extent should be demarcated along with the disposition of the different aquifers.
7. For all unconfined aquifers, groundwater level fluctuation should be monitored.
8. Recharge of water from rain and various surface water bodies should be assessed.
9. Annual water balance studies should be carried out for all aquifers.

Dynamic Groundwater Resources 2017 has been presented in a compilation report of CGWB (July 2019). These are presented below:

1. Total annual groundwater recharge: 431.86 billion cubic metres.
2. Total extractable groundwater resources: 392.70 billion cubic metres.
3. Annual groundwater extraction: 248.69 billion cubic metres.
4. Stage of groundwater extraction: 63.33%.
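The reported stage of extraction follows directly from the figures above, since it is simply the annual extraction expressed as a percentage of the extractable resources; a short check (added here for illustration):

```python
# Stage of groundwater extraction (%) = annual extraction / extractable resources * 100,
# using the CGWB 2017 compilation figures quoted above.
extractable_bcm = 392.70   # billion cubic metres
extraction_bcm = 248.69    # billion cubic metres

stage_percent = extraction_bcm / extractable_bcm * 100
print(f"Stage of groundwater extraction: {stage_percent:.2f}%")   # approximately 63.33%
```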

The above figures present the overall picture of India, but there is wide variation in the availability and extraction of groundwater in various assessment units. These are presented below:

1. Total number of assessed units: 6881
2. Safe: 4310
3. Semi-critical: 972
4. Critical: 313
5. Overexploited: 1186
6. Saline: 100


Jha and Sinha of CGWB have indicated that the groundwater extraction in North Western Plain states is 98% and in Eastern Plain states is 43% and Central Plain states is 42%. Efficient management of groundwater resources needs comprehensive water balance studies on an aquifer with a proper demarcated basin boundary. Dhungel and Fiedler (2016) have observed that groundwater depletion in the face of growth is a well-known problem particularly in areas that have grown to become dependent on a declining resource. The authors presented a system dynamics approach for watershed management related to water balance, recharge and groundwater extraction.

5 Scope and Practice of Groundwater Resource Management

Albert Tuinhof et al., in a briefing note series of the World Bank, have presented the basic concepts of sustainable utilization and the methodology of groundwater resources management. The key issues of groundwater resource management are presented below:

1. The flow boundaries of groundwater in space and depth are difficult to define and vary with time.
2. Groundwater resource management has to deal with balancing the recharge with an increase in demand for water.
3. The call for groundwater management does not usually arise until a decline in water table, well yield or water quality affects at least one stakeholder group.
4. Uncontrolled pumping damages the aquifer body itself and also causes seawater intrusion in coastal aquifers. In urban areas, excess pumping can cause subsidence of the land surface and damage to urban structures.
5. Interaction between groundwater and surface water is an important issue and is a guide for the conjunctive use of groundwater and surface water.
6. Important aspects of demand management are social developmental goals, drought management and regulatory interventions.

Integrated Groundwater Resource Management: The vicious cycle generated by increasing demand and uncontrolled abstraction is presented in Fig. 1. By adopting proper management tools and monitoring and regulatory steps, the vicious cycle can be transformed into a virtuous cycle. Figure 2 presents the basics of a virtuous cycle. The groundwater management tools are presented in Table 1.

Level 0

Level 1

Level 2

Level 3

Level 4

Base line

Incipient stress

Significant stress

Unstable development

Stable high development (continued)


(continued) Level 0

Level 2

Level 3

Level 4

Adequate Growth of availability of aquifer good quality pumping groundwater

Level 1

Rapid increase in abstraction

Excessive uncontrolled abstraction permanent damage to aquifer

High level of abstraction sound balance between stakeholder demand and ecosystem needs

Registration of wells and springs

Well spacing hours of pumping pump capacity

Increase in dependence of stakeholder regulatory framework

Regulatory framework demand management artificial recharge

Integrated resource management with high level of user self-regulation guided by aquifer modelling and monitoring

Resource assessment

Basic knowledge of aquifer

Conceptual Numerical Models linked to decision model of field model with support and used for planning data simulation of and management different abstraction data

Quality evaluation

No quality constraints

Quality variability issue in allocation

Water quality Quality integrated to allocation process plan understood

Aquifer monitoring

No regular monitoring programme

Project monitoring

Regular monitoring

Monitoring programme used for decision-making

Technical tools

Institutional tools Water rights

Customary water rights

Court cases clarification

Societal changes and customary water rights

Dynamic rights based on management plans

Regulatory provisions

Only social regulation

Licensing well and new drilling

Active regulation through agency

Facilitation and control of stakeholder self-regulation

Water legislation

No water legislation

Nominal legislation

Legal provisions for groundwater users

Full legal framework for aquifer management

(continued)


(continued) Level 0

Level 1

Stakeholder participation

Little interaction between regulator and water users

Development Aquifer of user council with organization members from stakeholder

Level 2

Stakeholders and regulator share responsibility for aquifer management

Awareness and education

Groundwater is considered an infinite and free resource

Campaign for Economy water integrated conservation system and protection

Effective interaction and communication between stakeholders

Economic tools

Economically Symbolic externalities charges subsidy

Recognition of economic value

Level 3

Level 4

Economic value recognized adequate charges and possibility of reallocation

Management activities Prevention of Little concern Recognition Preventive side effects of side effects measures

Mechanism to balance extractive uses and in-situ values

Resource allocation

Limited constraints

Equitable allocation of extractive uses and in-situ values

Pollution control

Nominal steps Land surface for pollution zoning no control proactive control

Competition Priorities between users defined for abstraction Control over new pollution source

Control of all points and diffusive source of pollution and mitigation existing contaminations

Table 1 above can be used as a diagnostic instrument to assess the adequacy of existing groundwater management arrangements for a given level of resource development. On the basis of the World Bank Report on the National Groundwater Management Improvement Programme (September 29, 2016), the Central Government of India has undertaken a comprehensive programme on management of the groundwater system. The basic objectives of the programme are:

1. Identification of societal needs and development objectives.
2. Improved investments and management actions for addressing groundwater depletion and degradation.
3. Arresting the decline of groundwater levels in selected areas.
4. Community-led water security plans and their implementation in selected blocks.
5. Strengthening of groundwater management units in selected states.
6. Operationalization of participatory groundwater management.
7. Strengthening of institutional capacity.

The various agencies and programmes for the implementation of the above improvement programme are listed below:

1. NGMIP = National Groundwater Management Improvement Programme
2. P for R = Programme for Results financing system


Fig. 1 Vicious cycle of groundwater abstraction: an unregulated resource and unrestricted demand lead to increasing demand and contaminant load; the aquifer is impacted and its quality and yield deteriorate; supply is reduced and costs increase; water users become dissatisfied

Fig. 2 Virtuous cycle of groundwater management: resource evaluation, a regulatory framework, definition of water rights, resource allocation, hazard assessment and pollution control, stakeholder participation and the use of economic tools lead to acceptable demand and contaminant load and a protected aquifer; quality and quantity stabilize; supply is secured at reasonable cost; water users are satisfied

3. ESSA = Environmental and Social Systems Assessment
4. MOWR, RD & GR = Ministry of Water Resources, River Development and Ganga Rejuvenation
5. CGWB = Central Groundwater Board
6. GMWR = Groundwater Management and Regulation
7. NAQUIM = National programme on Aquifer Mapping and Management
8. PGWM = Participatory Groundwater Management

Fig. 3 NGMIP: national institutions (MOWR, RD & GR; CGWB) and the PMKSY irrigation initiative feed investments at the state level, namely (1) monitoring wells, (2) upgrading of the monitoring system for water level and quality, (3) data acquisition and management, (4) aquifer mapping, (5) delineation of management areas and (6) participatory groundwater management, which in turn support state/district development plans and water security plans across agri-irrigation (artificial recharge, traditional and advanced irrigation, planning and pricing), drinking water (rural water supply), pollution control (waste water monitoring) and rural development

9. PMKSY = Pradhan Mantri Krishi Sinchayee Yojana

The planned investments and activities to be supported by NGMIP are presented in Fig. 3.

6 Natural Recharge, Artificial Recharge and Rain Water Harvesting

The most important supply-side component of aquifer management is the recharge of groundwater in the aquifer body. With developmental projects like road making, industrialization and urbanization, the recharge area is getting reduced. Further, increased "drought proofing" of the agricultural economy has led to overexploitation of groundwater and a consequent lowering of the water table in hard rock as well as in alluvial aquifers. In order to make up for this deficiency, rainwater


harvesting and artificial recharging of aquifers are essential. Artificial recharge of an unconfined aquifer is simple and straightforward, similar to filling up an open reservoir, and can be implemented through simple surface-spreading methods such as flooding, ditches or furrows. Recharge can also be implemented through various runoff conservation structures like bench terracing, contour bunds, contour trenches, check dams and percolation tanks. Confined aquifers can be recharged at their natural recharge zones exposed to the surface or by using gravity-head recharge wells pushing water at a pressure higher than the piezometric head of the aquifer at that place. Recharging of the aquifer is often associated with aquifer modification techniques, like bore blasting or hydro-fracturing, aimed at increasing the permeability and storage of the aquifer body. The Manual on Artificial Recharge of Ground Water (September 2007) presents a detailed treatise on the methodology of artificial recharge. CGWB (2013) has prepared a master plan for artificial recharge to groundwater in India. The salient aspects of the master plan are presented below:

1. Area identified for artificial recharge: 9,41,541 sq. km.
2. Volume of water to be recharged: 85,565 million cubic metres.
3. Total number of structures proposed: (a) rural areas: 22,83,000; (b) urban areas: 87,99,000.
4. Estimated cost (Rs. crore): rural areas: 61,192; urban areas: 17,986.

7 Management of Coastal Aquifers

Coastal aquifers play an important role in all coastal regions. India has a very long coastline, and 25% of the Indian population lives in the coastal region, where water demand is high. Thus, it is important that coastal aquifers are efficiently managed. As a result of the density difference between seawater and freshwater in coastal aquifers, a transition zone between the two fluids is formed. This situation is presented in Fig. 6 below, as elaborated by Mehdi Nezhad Naderi et al. (2013). Vincent Post et al. (2018), in a manual on groundwater management in coastal areas, have presented strategies and solutions for sustainable groundwater governance and management in coastal zones. The authors have identified the demand-side problems with agricultural growth, population growth, land subsidence and tourism, and the solutions have been identified with monitoring, metering, enhanced recharge and optimised abstraction.


Manivannan Vengadesan and Elango Lakshmanan (2018), in the book Coastal Management: Global Challenges and Innovations edited by R.R. Krishnamurthy, M.P. Jonathan, Seshachalam Srinivasalu and Bernhard Glaeser (2019), have presented the following mitigation measures to prevent or reverse seawater intrusion in coastal aquifers:

1. Reduction in pumping to keep drawdown at a low level.
2. Rearranging of pumping wells over space and time.
3. Increasing groundwater recharge.
4. Use of injection wells for injecting fresh water into the aquifer.
5. Pumping out saltwater and discharging it at a distance into the sea.
6. Construction of subsurface barriers to prevent seawater intrusion.

Dr. S.C. Dhiman and D.S. Thambi of the Central Ground Water Board have presented a study of the problems associated with coastal aquifers in India and have concluded that in coastal areas the problem of groundwater management is complex and related to seawater intrusion, salinity from aquifer material, global warming, tidal influence and pollution. The authors have suggested the following studies, monitoring and mitigation programmes for the management of coastal aquifers:

1. The aquifer geometry and the distribution of fresh and saline water have to be studied in detail.
2. Constant monitoring of water levels, pumping systems and groundwater quality.
3. Accurate monitoring of tidal effects.
4. Evaluation of the exact safe yield and restriction on the drilling of wells and extraction of water.
5. Artificial recharge.

8 Management of Karstic Aquifers

In desert countries like Oman and Libya, the coastal zone gets some rainfall, which infiltrates underground and has formed solution cavities in the coastal limestone beds. The solution cavities, after joining with one another, form a piping-type network. Successful bore wells that tap a cavity zone can be pumped heavily because the permeability is very high and the drawdown is negligible. However, these aquifers are very prone to seawater intrusion and deterioration of water quality. In the Benghazi plain area, due to heavy pumping, the quality of water deteriorated very fast, and ultimately desalination of seawater and of saline aquifer water had to be resorted to. To reduce seawater intrusion, the main cavity was identified and a dam-like obstruction was constructed to reduce the entry of seawater.


9 Management of Groundwater in Large-Scale Mining Project Areas

Mining projects, particularly deep open-cast mines, cause huge and sometimes irreversible damage to the underground aquifer system. Since groundwater has not been attributed any price in terms of money, this aspect used to get neglected in the past and started drawing attention only after environmental degradation became visible in the world. Karmakar and Das (2012) have identified the following damaging effects of mining on groundwater:

1. Lowering of the water table.
2. Subsidence.
3. Reduction in moisture content in soil and atmosphere.

An advanced study course on Ground Water Management in Mining Areas was conducted in Pecs, Hungary during June 23–27, 2003 to give an insight into the problem of the impact of mining activities on groundwater resources and remedial measures to be undertaken. Younger, P. L. presented a brief overview of the impacts of mining on physical hydrogeology. Banks, D. presented various geochemical processes, which control mine water pollution and ultimately affect groundwater quality. Roehl, K. E presented a case of passive in situ remediation methodology of contaminated groundwater by providing permeable reactive barriers. In India, largescale mining projects are causing extensive damage to both unconfined and confined aquifers. These have to be arrested by providing extensive remedial measures.

10 Concluding Remarks

Groundwater is a very important national resource. It is a renewable resource, dynamically balanced with the overall natural ecological set-up, and improper or unplanned use can damage the resource itself. The resource has multiple competing stakeholders and thus needs participatory management for its efficient utilization. A proper framework for management, backed up by regulatory stipulations and efficient governance, is therefore essential for the efficient handling of this resource.

References Badrinarayanan TS. Ground water exploration: an introduction, geoscientist, B square Geotech services, Kalidam, Tamilnadu, 609102 Balsubramanian A (2007) Methods of ground water exploration: technical report April 2007. Centre for Advanced Studies in Earth Science, University of Mysore, Research Gate


Banks D (2003) Geochemical processes controlling mine water pollution, ground water management in mining areas, Proceedings of the 2nd Image-Train Advanced Study Course, Pecs, Hungary, June 23–27 Dhiman Dr SC, Thambi DS. Ground water management in coastal areas, central ground water board Dhungel R, Fritz F (2016) Water balance to recharge calculation: implications for watershed management using system dynamics approach. Hydrology 3(13), 1–19 Glover RE. Ground water movement, United States department of the interior, Denver, Colorado, Bureau of Reclamation, Monograph no 31, p 79 Govt. of India, Ministry of Water Resources, River Development & Ganga Rejuvenation (2017) Report of the groundwater resource estimation committee (GEC—2015), New Delhi, October 2017 Govt. of India, Ministry of Jala Shakti, Department of Water Resources, River Development & Ganga Rejuvination, C G W B (2019) National compilation on dynamic ground water resources of India 2017, Faridabad, July 2019 Govt. of India, Ministry of Water Resources, River Development & Ganga Rejuvenation, CGWB (2007) Manual on artificial recharge of ground water, September, 2007 Govt. of India, Ministry of Water Resources, River Development & Ganga Rejuvenation (2013) Master plan for artificial recharge to ground water In India, New Delhi, 2013 Jha BM Chairman, Sinha JK, Scientist D. Towards better management of ground water resources in India, central ground water board Karmakar HN, Das PK (2012) Impact of mining on ground & surface waters. Int Mine Water Assoc 2012:187–198 Medici G, West LJ, Banwart SA (2019) Ground water flow velocities in a fractured carbonate aquifer type : implications for contaminant transport. J Contam Hydrol 222(2019):1–16 Naderi Nezhad Naderi, Masoud Reza, Hebasmi Kermani & Gholam Abbas Barani (2013) Sea water intrusion & ground water resources management in coastal aquifers. European J Experim Biol 3(3), 80–94 NGWA (1999) Aquifer, Chapter 14, Ground Water Hydrology for Water well Contractors Polubarinova-kochina P Ya, Roger Dewiest JM (1962) Theory of ground water movement. Princeton New Jersey, p 613 Post V, Eichholz M, Brentfuhrer R (2018) Ground water management in coastal areas, German federal institute for geosciences and natural resources, BGR -2018 Qian J, Zhan H, Zhao W, Sun F (2005) Experimental study of turbulent unconfined ground water flow in a single fracture. J Hydrol 311(1–4):134–142 Roehl KE (2003) Passive Insitu Remediation of Contaminated Ground Water: Permeable Reactive barriers. Proceedings of the 2nd Image –Train Advanced Study Course, Oecs, Hungary, June 23 – 27, 2003. Scheidegger AE (1974) The physics of flow through porous media. University of Toronto Press P372:1974 Shoemaker WB (2009) Effect of turbulent ground water flow on hydraulic heads and parameter sensitive in preferential ground water flow layers within the Biscayne aquifer in Southeastern Florida, Geophysical Union, fall Meeting 2009, Abstract id 43E -1070, dt 12/2009 Smith WO, Nelson Sayre A (1964) Turbulence in ground water flow, geological survey, Professional Paper, 402–E, Washington, 1964 Tuinhof, Albert, Charles Dumars, Stephen Foster, Karin Kemper, Hector Garduno, Marcella Nanni, Ground Water Resource Management: An Introduction to Its Scope & Practice, G.W. Mate Core Group, Briefing Note 1: Briefing Note Series World Bank. Varalakshmi V, Venkateswara Rao B, Surinaidu L, Tejaswini M (2014) Groundwater flow modelling of a hard rock aquifer: case study. 
J Hydrol Eng 19(2014):877–886 Vengadesan, Manivannan, Elango Lakshmanan (2018) Management of coastal resources, Chapter 17 in coastal management: global challenges and innovations. In: Krishnan RR, Jonathan MP, Seshachalam Srinivasalu, Bernhard Glaeser, Elsevier Academic Press, November 2018


World Bank (2016) National ground water management improvement programme, Environmental & Social Systems Assessment, September 29, 2016
Younger PL (2003) Impacts of mining on physical hydrogeology: ground water management in mining areas, proceedings of the 2nd Image-Train advanced study course, Pecs, Hungary, June 23–27, 2003

A New Generalized Newsvendor Model with Random Demand and Cost Misspecification

Soham Ghosh, Mamta Sahare, and Sujay Mukhoti

1 Introduction

The newsvendor problem is one of the most extensively discussed problems in the inventory management literature due to its applicability in different fields (Silver et al. 1998). This inventory problem relies on offsetting the shortage and leftover costs in order to obtain the optimal order quantity. In the standard newsvendor problem, each of the shortage and leftover costs is assumed to be proportional to the quantity lost, i.e., linear in the order quantity and demand. Also, demand is assumed to be a random variable with a known probability distribution. However, there may be situations where the cost is more than the usual piecewise linear one. Further, in real-life situations, the demand distribution is often unknown. In this paper, we generalize the classical newsvendor problem with piecewise nonlinear costs and study the estimation of the optimal order quantity subject to random demand with unknown parameters.

In the classical newsvendor problem, we consider a newsvendor selling a perishable commodity procured from a single supplier, with demand a random variable. The newsvendor takes a one-time decision on how much quantity she should order from the supplier. The newsvendor faces a leftover cost if the demand is lower than the inventory, as a penalty for ordering too much. Similarly, a shortage cost is faced if the demand is higher than the inventory, as a penalty for ordering too little. The shortage (leftover) cost is computed in currency units as the product of the per unit shortage (leftover) cost and the quantity lost, i.e., the difference between demand and inventory. Thus, the cost is linear in the difference between demand and inventory. To determine the optimal order quantity, the newsvendor has to minimize the total cost or maximize the total profit. This classical version of the newsvendor problem has been generalized, application-wise, in many directions since its inception. Veinott (1965) generalized the newsvendor model for multiple time periods. Extension of the newsvendor problem

S. Ghosh · M. Sahare · S. Mukhoti (B) Operations Management and Quantitative Techniques Area, Indian Institute of Management, Indore, Rau-Pithampur Road, Rau, Indore 453556, Madhya Pradesh, India
e-mail: [email protected]
M. Sahare e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 B. K. Sinha and S. B. Bagchi (eds.), Strategic Management, Decision Theory, and Decision Science, https://doi.org/10.1007/978-981-16-1368-5_14


for multiple products is being studied extensively in the literature (e.g., see Chernonog and Goldberg 2018, and the references therein). We refer to a recent review by Qin et al. (2011) for extensions of newsvendor problem that adds dimensions like marketing efforts, buyer’s risk appetite, etc. Many real life situations warrant cost to be higher than the quantity lost. For example, chemotherapy drugs are critical for administering it to a patient on the scheduled days. Shortage of the drug on that day would result in breaking the cycle of treatment. Hence, the loss is not merely the quantity but more than that. Similarly, in case of excess inventory, not only the excess amount of the drug, but its disposal method also contributes to the corresponding cost. This is due to the chance of vast environmental and microbial hazards that may be created through improper disposal, like creation antibiotic resistant bacteria or super-bugs. Thus, the excess and shortage costs could be rationalized as nonlinear. An alternative way to look at the nonlinear shortage and excess costs is to consider a power type cost function (per unit) instead of constant cost as assumed in classical inventory problem. Newsvendor problem with nonlinear costs, however, remains not much addressed. Chandra and Mukherjee (2005) have considered optimization of different risk alternatives of the expected cost function, like cost volatility and Value-at-Risk, which results in nonlinear objective functions. Parlar and Rempala (1992) considered the periodic review inventory problem and derived the solution for a quadratic cost function for both shortage and leftover. Gerchak and Wang (1997) proposed a newsvendor model using power type cost function for asset allocation. Their work assumes asymmetric costs with linear excess but power type shortage cost with details only up to quadratic cost function. In this paper, we consider a generalization of the newsvendor problem in the line of Parlar and Rempala (1992) and Gerchak and Wang (1997), i.e., power type costs. In particular, we consider the shortage and excess costs as general power function of same degree. Determination of optimal order quantity from such a generalized newsvendor problem still remains unexplored to the best of our knowledge. Since the power of excess and shortage costs are same, we refer to this problem as symmetric generalized (SyGen) newsvendor problem. An interesting observation that can be made from the newsvendor literature is that majority of related work considers a completely specified demand distribution, whereas in reality, it is seldom known. In such cases, the optimal order quantity needs to be estimated. Dvoretzky et al. (1952) first addressed the estimation problem in classical newsvendor setup using Wald’s decision theoretic approach. Scarf (1959), Hayes (1969), and Fukuda (1960) considered estimation problems in inventory control using maximum likelihood estimation (MLE) under different parametric demand distributions. Conrad (1976) estimated the demand and hence the optimal order quantity for Poisson distribution. Nahmias (1994) estimated the optimal order quantity for Normal demand with unknown parameters and Agrawal and Smith (1996) estimated the order quantity for negative binomial demand. Sok (1980) in his master’s thesis, presented estimators of the optimal order quantity based on order statistics for


parametric distributions including uniform and exponential. Rossi et al. (2014) has given bounds on the optimal order quantity using confidence interval for parametric demand distributions. On the other hand, non-parametric estimation of optimal order quantity in a classical newsvendor problem is comparatively recent. For example, Bookbinder and Lordahl (1989) considered bootstrap estimator of the optimal order quantity and Pal (1996) discussed construction of asymptotic confidence interval of the cost using bootstrapping. Another important non-parametric data-driven approach is the sampling average approximation (SAA) (Kleywegt et al. 2001). In this method, the expected cost is replaced by the sample average of the corresponding objective function and then optimized. Levi et al. (2015) provides bounds of the relative bias of estimated optimal cost using SAA based on full sample data. Bertsimas and Thiele (2005) ranks the objective functions evaluated at each demand data in sample and shows that the trimmed mean of the ordered objective functions leads to a convex problem ensuring robust and tractable solution. In this paper, we consider estimation of the optimal order quantity for parametric demand distributions under SyGen newsvendor setup. In particular, we consider two specific demand distributions, viz. uniform and exponential. We present here the optimal order quantity and its estimators using: (1) full data in a random sample and (2) order statistic from a full sample. We investigate the feasibility of the problem by establishing existence of the optimal order quantity and its estimators. Further, we do a simulation study to gauge the performance of different estimators in terms of bias and mean square error (MSE) for different shortage to excess cost ratio, power of nonlinear cost and sample size. Rest of this paper is organized as follows. Section 2 describes determination of optimal order quantities for uniform and exponential demand distribution using full sample and order statistics, along with the existence condition for the same, wherever required. In Sect. 3, we investigate estimation of the optimal order quantity using full sample and order statistics. Results of simulation study is presented in Sect. 4. Section 5 is the concluding section with discussion on the future problems.

2 SyGen: Symmetric Generalized Newsvendor Problem

In a single-period classical newsvendor problem, the vendor has to order the inventory before observing the demand so that the excess and shortage costs are balanced out. Let us denote the demand by a positive random variable X with cumulative distribution function (CDF) F_θ(x), θ ∈ Θ, and probability density function f_θ(x). We further assume that the first moment of X exists finitely. Suppose Q ∈ R+ is the inventory level at the beginning of the period. Then the shortage quantity is defined as (X − Q)+ and the excess quantity as (Q − X)+, where d+ = max(d, 0). Let Cs and Ce denote the per unit shortage and excess costs (constants), respectively. Then the corresponding costs are defined as Cs(X − Q)+ and Ce(Q − X)+. Thus, the total cost can be written as a piecewise linear function in the following manner:


χ = Cs(X − Q) if X > Q;  Ce(Q − X) if X ≤ Q     (1)

The optimal order quantity (Q∗) is obtained by minimizing the expected total cost, E[χ]. The analytical solution of this problem is given by Q∗ = F_θ^(−1)(Cs/(Ce + Cs)), that is, the γ-th quantile of the demand distribution, where γ = Cs/(Cs + Ce). We propose the following extension of the classical newsvendor problem, replacing the piecewise linear cost function by a piecewise power cost function with the same degree of importance for the shortage and excess quantities. Thus, we define the generalized cost function as

χm = Cs(X − Q)^m if X > Q;  Ce(Q − X)^m if X ≤ Q     (2)
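To make the SyGen cost in Eq. (2) concrete, the following short Python sketch (added for illustration; it is not part of the paper, and the demand distribution, cost values and power m are assumed) approximates the expected cost E[χm] for a few candidate order quantities by simple Monte Carlo averaging; the minimizing Q should lie close to the optimal order quantities derived analytically below.

```python
# Monte Carlo approximation of the SyGen expected cost E[chi_m] in Eq. (2).
# Shortage and excess are mutually exclusive, so summing the two power terms
# reproduces the piecewise cost exactly. All parameter values are assumed.
import numpy as np

def sygen_cost(demand: np.ndarray, Q: float, Cs: float, Ce: float, m: int) -> float:
    """Average piecewise power cost over a sample of demand realizations."""
    shortage = np.maximum(demand - Q, 0.0)
    excess = np.maximum(Q - demand, 0.0)
    return float(np.mean(Cs * shortage**m + Ce * excess**m))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demand = rng.uniform(0.0, 10.0, size=100_000)   # Uniform(0, b) demand with b = 10
    Cs, Ce, m = 2.0, 1.0, 3
    for Q in (3.0, 4.0, 5.0, 6.0, 7.0):
        print(f"Q = {Q:.1f} -> estimated E[chi_m] = {sygen_cost(demand, Q, Cs, Ce, m):.3f}")
```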

The powers of the lost quantity in the nonlinear cost function are taken to be the same on both sides of Q on the support of X; hence we refer to this problem as the symmetric generalized (SyGen) newsvendor problem. The particular case m = 1 recovers the usual cost function of the classical newsvendor problem. The dimension of this generalized cost function is currency-unit^(m−1). For example, for m = 3 and quantity measured in milligrams (mg), the generalized cost will be reported in Rs.-mg². The expected total cost in the SyGen newsvendor problem is given by

E[χm] = ∫_0^Q Ce(Q − x)^m f_θ(x) dx + ∫_Q^∞ Cs(x − Q)^m f_θ(x) dx

In the subsequent sections, we show that the expected cost admits a minimum for uniform and exponential distributions. In order to do so, the first order condition (FOC) is given by

∂E[χm]/∂Q = m Ce ∫_0^Q (Q − x)^(m−1) f_θ(x) dx − m Cs ∫_Q^∞ (x − Q)^(m−1) f_θ(x) dx = 0
⇒ Ce ∫_0^Q (Q − x)^(m−1) f_θ(x) dx = Cs ∫_Q^∞ (x − Q)^(m−1) f_θ(x) dx     (3)

It is difficult to provide further insight without more information on f θ (x). In the following section, we assume two choices for the demand distribution—(i) Uniform and (ii) Exponential.
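For a fully specified f_θ, the optimal order quantity can also be obtained numerically by evaluating E[χm] with numerical integration and minimizing over Q. The sketch below (added for illustration, with assumed parameter values) does this for Uniform(0, b) demand and compares the result with the closed form Q∗ = b/(1 + (Ce/Cs)^(1/m)) derived in the next subsection.

```python
# Numerical search for the SyGen optimal order quantity: evaluate E[chi_m]
# by quadrature and minimize over Q. Parameter values are assumed.
from scipy import integrate, optimize

def expected_cost(Q, pdf, lower, upper, Cs, Ce, m):
    excess, _ = integrate.quad(lambda x: Ce * (Q - x)**m * pdf(x), lower, Q)
    shortage, _ = integrate.quad(lambda x: Cs * (x - Q)**m * pdf(x), Q, upper)
    return excess + shortage

if __name__ == "__main__":
    b, Cs, Ce, m = 10.0, 2.0, 1.0, 3                 # illustrative values
    uniform_pdf = lambda x: 1.0 / b
    res = optimize.minimize_scalar(expected_cost, bounds=(0.0, b), method="bounded",
                                   args=(uniform_pdf, 0.0, b, Cs, Ce, m))
    print(f"numerical Q* = {res.x:.4f}, closed form = {b / (1 + (Ce / Cs)**(1 / m)):.4f}")
```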


2.1 Optimal Order Quantity for SyGen Newsvendor with Uniform Demand

In this section, we consider the problem of determining the optimal order quantity in the SyGen newsvendor setup, where the demand is assumed to be a Uniform random variable over the support (0, b). The pdf of the demand distribution is given by

if 0 < x < b 0; otherwise 1 b

The minimum demand here is assumed to be zero without loss of generality, because any non-zero lower limit of the support of demand could be considered as a pre-order and hence can be pre-booked. Using Leibnitz rule and routine algeb , where bra, it can be shown that the optimal order quantity is given by Q ∗ = 1+α m   m1 bm αm = CCes . Corresponding optimal cost is Cs × m+1 ×  1/m Ce 1/m m . Notice that Ce +Cs m  m  1/m 1/m 1/m Ce + Cs = Cs 1 + α1 ≥ Cs , and hence we get an upper bound of m

b the optimal cost as Ce m+1 . Notice, large αm would imply small order quantity. In other words, if the cost of excess inventory is much larger than the shortage cost, then the newsvendor would order less quantity to avoid high penalty. Similarly, small αm would result in ordering closer to the maximum possible demand (b) to avoid high shortage penalty. However, if the degree of cost (m) is very high, then the optimum choice would be to order half the maximum possible demand, i.e., Q ∗ → b2 , as m → ∞. Next we compare the optimal costs Under piecewise linear and power cost functions. Let OCNL and OCL denote the optimal expected cost under piecewise nonCe bm linear and linear costs respectively. It can be easily shown that OCNL = m+1 (1+αm )m Ce and OC L = b2 (1+α . The assumption of nonlinearity is advantageous if the optimal 1) expected cost under this model is less than that under linear loss. In other words, it can be written as

OCNL 0, λ > 0 λ

In this case the expected cost function becomes 

Q

E[χm ] =

 Ce (Q − x) f θ (x)d x + m

0



Cs (x − Q)m f θ (x)d x

(4)

Q

In the following theorem, we derive the first order condition for optimal inventory in SyGen newsvendor problem with exponential demand. Theorem 1 Let the demand in a SyGen newsvendor problem be an exponential random variable X with mean λ > 0. Then the first order condition for minimizing the expected cost is given by ψ(Q/λ) = e− λ

Q

where ψ

Q λ

=

m−1 j=0

(−1) j

Q λ

m− j−1



Cs − (−1)m Ce

1 . (m − j − 1)!

 (5)

A New Generalized Newsvendor Model with Random Demand …

217

Proof Let us define I_m = \int_0^Q (Q-x)^{m-1}\frac{1}{\lambda}e^{-x/\lambda}\,dx and J_m = \int_Q^\infty (x-Q)^{m-1}\frac{1}{\lambda}e^{-x/\lambda}\,dx. Hence, the FOC in Eq. (3) becomes

C_e I_m = C_s J_m \qquad (6)

Substituting Q - x = u, we get

I_m = \frac{e^{-Q/\lambda}}{\lambda}\int_0^Q u^{m-1}e^{u/\lambda}\,du
    = \frac{e^{-Q/\lambda}}{\lambda}\left[\lambda e^{u/\lambda}u^{m-1}\right]_0^Q - \lambda(m-1)I_{m-1} \quad \text{(integrating by parts)}
    = Q^{m-1} - \lambda(m-1)Q^{m-2} + \lambda^2(m-1)(m-2)I_{m-2}
    = \cdots
    = \sum_{j=0}^{m-1} Q^{m-1-j}(-\lambda)^j \frac{(m-1)!}{(m-j-1)!} + e^{-Q/\lambda}\lambda^{m-1}(-1)^m(m-1)!

Now letting x - Q = v in J_m, we get

J_m = \frac{e^{-Q/\lambda}}{\lambda}\int_0^\infty v^{m-1}e^{-v/\lambda}\,dv = e^{-Q/\lambda}\,\Gamma(m)\,\lambda^{m-1}

Thus, from Eq. (6), C_e I_m = C_s J_m gives, on dividing both sides by C_e(m-1)!\,\lambda^{m-1},

\sum_{j=0}^{m-1}(-1)^j\left(\frac{Q}{\lambda}\right)^{m-j-1}\frac{1}{(m-j-1)!} = e^{-Q/\lambda}\left[\frac{C_s}{C_e}-(-1)^m\right] = \gamma_m e^{-Q/\lambda}, \quad \text{where } \gamma_m = \frac{C_s}{C_e}-(-1)^m.



As a consequence of the above FOC for exponential demand, we need to inspect the existence of non-negative zeroes of Eq. (5). We first provide lower and upper bounds for ψ(Q/λ) in the following theorem.

Theorem 2 For γ_m > 0 and m ≥ 4, the following inequality holds:

-(1+\gamma_m)\left(\frac{u^2}{2}-u+1\right) + S_{m-4} \;<\; \psi(u) \;<\; (1+\gamma_m)(u-1) + S_{m-3}

where \psi(u) = S_{m-1} - e^{-u}\gamma_m and S_{m-k} = \sum_{j=0}^{m-k}(-1)^j\frac{u^{m-j-1}}{(m-j-1)!}; the lower and upper bounds are referred to as (7) and (8), respectively.

Proof Letting Q/λ = u, we obtain from Eq. (5),

\psi(u) = \sum_{j=0}^{m-1}(-1)^j\frac{u^{m-j-1}}{(m-j-1)!} - e^{-u}\gamma_m = 0, \quad \text{where } \gamma_m = \frac{C_s}{C_e} - (-1)^m.

Since e^{-u} > 1 - u, bounding the exponential term yields, for m ≥ 3, the lower and upper bounds on ψ(u) stated in (7) and (8).

Remark 2 If m = 2k + 1, then γ_m > 0. In that case, we get from the lower boundary function in Eq. (7)

g_L(u) = \frac{u^{2k}}{(2k)!} - \frac{u^{2k-1}}{(2k-1)!} + \cdots + \frac{u^4}{4!} - \frac{u^3}{3!} - (1+\gamma_m)\frac{u^2}{2} + u(1+\gamma_m) - (1+\gamma_m).

Therefore, by Descartes' rule of signs, the maximum number of positive real roots is 2k − 1, and hence there exists at least one positive root. Further, from the upper boundary function in Eq. (8),

g_U(u) = \frac{u^{2k}}{(2k)!} - \frac{u^{2k-1}}{(2k-1)!} + \cdots + \frac{u^4}{4!} - \frac{u^3}{3!} + \frac{u^2}{2} + u(1+\gamma_m) - (1+\gamma_m).

Hence, by a similar argument as for g_L(u), at least one positive root of g_U(u) exists. Thus, both boundary functions have a positive root, leading to the existence of a positive root of ψ(u).

Remark 3 If m (≥ 4) is even, then, following a similar line of argument as in the previous remark, it can be shown that both g_L(u) and g_U(u) have 2k − 1 sign changes and hence at least one positive root.

Remark 4 In the particular case m = 3, the boundary functions reduce to

g_L(u) = \frac{u^2}{2} + \frac{C_s}{C_e}\left(\frac{u^2}{2} - u + 1\right) \quad \text{and} \quad g_U(u) = (u-1)(\gamma_m+1) + \frac{u^2}{2}.

Clearly, no real root of the lower boundary function exists, and the positive root of the upper boundary function is given by -(1+\gamma_m) + \sqrt{\gamma_m^2 + 4\gamma_m + 3}.

Since Eq. (5) is a transcendental equation in Q, numerical methods are required to find its zeros, which, in turn, provide the optimal order quantity. The solution, however, depends on the ratio of the shortage and excess costs. In what follows, we describe the nature of the solutions in different scenarios for α_1 = C_s/C_e.


Table 1 Optimal order quantity for C_s = C_e

m     Q*
2     λ
3     1.3008λ
4     1.5961λ
10    3.33755λ
20    6.17753λ

In the case of equal shortage and excess costs (C_s = C_e), α_1 = 1 and the corresponding optimal order quantities Q* are given in Table 1 for different m. On the other hand, if the shortage cost is much lower than the excess cost (α_1 → 0), then the optimal order quantity Q* is very small (tends to 0). In the case 0 < α_1 < 1, the optimum order quantity can be computed from the following equation:

\sum_{j=0}^{m-1} (-1)^j \left(\frac{Q}{\lambda}\right)^{m-j-1} \frac{1}{(m-j-1)!} = \gamma_m e^{-Q/\lambda}
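As a hedged illustration of how such values can be obtained, the sketch below solves the reconstructed FOC numerically by simple bisection (it is not the authors' code; the bracket [0, 50λ] and the bisection routine are my own choices, and bisection returns one root in the bracket).

```python
import math

# Minimal sketch: solve sum_{j=0}^{m-1} (-1)^j u^(m-j-1)/(m-j-1)! = gamma_m*exp(-u),
# with gamma_m = Cs/Ce - (-1)^m and u = Q/lambda, by bisection.

def foc(u, m, gamma_m):
    s = sum((-1) ** j * u ** (m - j - 1) / math.factorial(m - j - 1)
            for j in range(m))
    return s - gamma_m * math.exp(-u)

def optimal_u(m, ce, cs, lo=1e-9, hi=50.0, tol=1e-8):
    gamma_m = cs / ce - (-1) ** m
    a, b = lo, hi
    fa = foc(a, m, gamma_m)
    for _ in range(200):                 # bisection on [a, b]
        mid = 0.5 * (a + b)
        fm = foc(mid, m, gamma_m)
        if fa * fm <= 0:
            b = mid
        else:
            a, fa = mid, fm
        if b - a < tol:
            break
    return 0.5 * (a + b)

if __name__ == "__main__":
    for m in (2, 3, 4, 10, 20):          # Cs = Ce, cf. Table 1
        print(m, round(optimal_u(m, 1.0, 1.0), 4))   # Q* as a multiple of lambda
```

With C_s = C_e this reproduces the pattern of Table 1 (for instance u = 1 for m = 2 and u ≈ 1.3008 for m = 3).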

The two panels of Fig. 1 in Appendix II show the optimal order quantity as a multiple of the average demand (λ) obtained for α_1 ∈ (0, 1) and m = 2, 3, 4, 5, 10, 20, 30, 40, 50, 100. Notice that the differences between the optimal order quantities corresponding to different values of α_1 shrink as m increases. Here we may interpret m as the degree of importance of the nonlinear cost. Hence, the observation above may be restated as follows: as the degree of importance of the cost increases, the newsvendor becomes indifferent to the two costs; that is, beyond a certain risk level the newsvendor does not react much to an increase in either type of cost and orders at a steady level, i.e., becomes risk neutral. In the following section, we consider the parameters of the demand distributions discussed above to be unknown and study the estimation of the optimal order quantity.

3 Estimation of the Optimal Order Quantity In this section, we consider the problem of estimating the optimal order quantity in SyGen newsvendor setup, based on a random sample of fixed size on demand, when the parameters of the two demand distributions discussed above are unknown.


3.1 Estimating Optimal Order Quantity for U(0, b) Demand

As in the previous section, we first consider the demand distribution to be Uniform(0, b), where b is unknown. We elaborate on the estimation of the optimal order quantity in the SyGen newsvendor problem based on i.i.d. demand observations X_1, X_2, ..., X_n. A method-of-moments type estimator of the optimal order quantity is constructed by plugging the moment estimator of the unknown parameter b into Q* = b/(1+α_m):

\hat{Q}_1 = \frac{2\bar{X}}{1+\alpha_m} \qquad (9)

which is an unbiased estimator as well, with variance V(\hat{Q}_1) = \frac{b^2}{3n(1+\alpha_m)^2}. In fact, the uniformly minimum variance unbiased estimator (UMVUE) of the optimal order quantity can be obtained using the order statistics X_{(1)} < X_{(2)} < \cdots < X_{(n)}; the UMVUE is

\hat{Q}_2 = \frac{(n+1)X_{(n)}}{n(1+\alpha_m)} \qquad (10)

with variance V(\hat{Q}_2) = \frac{b^2}{n(n+2)(1+\alpha_m)^2}. Since X_{(n)} is the maximum likelihood estimator (MLE) of b, the MLE of the optimal order quantity is

\hat{Q}_3 = \frac{X_{(n)}}{1+\alpha_m} \qquad (11)

Note that \hat{Q}_3 is biased, with Bias(\hat{Q}_3) = -\frac{Q^*}{n+1} and mean square error (MSE) = \frac{2Q^{*2}}{(n+1)(n+2)}. Comparing \hat{Q}_1, \hat{Q}_2 and \hat{Q}_3 in terms of their variances (MSE for \hat{Q}_3), it is easily seen that the UMVUE is the best of the three; in particular, V(\hat{Q}_2) < MSE(\hat{Q}_3) < V(\hat{Q}_1) for all b > 0.
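A quick Monte Carlo check of this ordering can be written in a few lines; the sketch below is purely illustrative (the sample size, costs and support are arbitrary choices, not values used in the chapter).

```python
import random

# Illustrative check of the three estimators of Q* under Uniform(0, b) demand:
# moment-type Q1 (Eq. 9), UMVUE Q2 (Eq. 10) and MLE plug-in Q3 (Eq. 11).

def simulate(b=10.0, ce=2.0, cs=3.0, m=3, n=50, reps=20000, seed=1):
    random.seed(seed)
    alpha_m = (ce / cs) ** (1.0 / m)
    q_star = b / (1.0 + alpha_m)
    mse = [0.0, 0.0, 0.0]
    for _ in range(reps):
        x = [random.uniform(0.0, b) for _ in range(n)]
        xbar, xmax = sum(x) / n, max(x)
        q1 = 2.0 * xbar / (1.0 + alpha_m)                  # moment-type (9)
        q2 = (n + 1) * xmax / (n * (1.0 + alpha_m))        # UMVUE (10)
        q3 = xmax / (1.0 + alpha_m)                        # MLE plug-in (11)
        for k, q in enumerate((q1, q2, q3)):
            mse[k] += (q - q_star) ** 2 / reps
    return mse

if __name__ == "__main__":
    print([round(v, 5) for v in simulate()])   # expect MSE(Q2) < MSE(Q3) < MSE(Q1)
```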

3.2 Estimating Optimal Order Quantity for exp(λ) Demand Let us now consider the demand distribution to be exponential with unknown parameter λ > 0, and let X_1, X_2, ..., X_n be a random sample on demand. The problem thus becomes that of estimating the optimal order quantity in this SyGen setup.

3.2.1

Estimating Equation Based on Full Sample

In order to provide a good estimator of the optimal order quantity based on the full sample, we replace the parametric functions of λ involved in the FOC Eq. (5) by their suitable estimators. In particular, we focus on replacing (i) λ by its MLE, (ii)


e^{-Q/\lambda} = \bar{F}(Q) by its corresponding UMVUE. The performance of the resulting estimators of the optimal order quantity Q^* is measured by their bias and MSE.

First Estimating Equation: Replacing λ in Eq. (5) by its MLE \bar{X}, the first estimating equation is obtained as

\sum_{j=0}^{m-1} (-1)^j \left(\frac{Q}{\bar{X}}\right)^{m-j-1} \frac{1}{(m-j-1)!} = \gamma_m e^{-Q/\bar{X}} \qquad (12)

Let the solution of this estimating equation be denoted by \hat{Q}^*_1. Though \hat{Q}^*_1 is a plug-in estimator, being a function of the MLE it can be expected to perform well in terms of bias and MSE.

Second Estimating Equation: The UMVUE of e^{-Q/\lambda} = \bar{F}(Q) based on the SRS X_1, X_2, \ldots, X_n drawn from the exp(λ) population is given by

T_{SRS} = \left(1 - \frac{Q}{W}\right)_+^{n-1}

where W = \sum_{i=1}^n X_i = n\bar{X} and (d)_+ = \max(d, 0). We first replace e^{-Q/\lambda} by T_{SRS} in Eq. (5). Also, the UMVUE of λ, appearing in the coefficients of Q^{m-j-1}, 0 \le j \le m-1, in Eq. (5), is given by \bar{X}. Replacing these two estimators in place of their corresponding estimands in the FOC, the second estimating equation is obtained as

\sum_{j=0}^{m-1} (-1)^j \left(\frac{Q}{\bar{X}}\right)^{m-j-1} \frac{1}{(m-j-1)!} = \gamma_m \left(1 - \frac{Q}{W}\right)_+^{n-1} \qquad (13)

Let the solution of the above estimating equation be denoted by \hat{Q}^*_2. This is also a plug-in estimator, and we compare it with \hat{Q}^*_1 in terms of bias and MSE in Sect. 4.
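The sketch below solves the second estimating equation as reconstructed above for a simulated demand sample; it is an assumption-laden illustration (bisection, the bracket, and the simulated exp(1) data are my choices), not the authors' implementation.

```python
import math, random

# Sketch: estimating equation (13) replaces exp(-Q/lambda) by its UMVUE
# T_SRS = (1 - Q/W)_+^(n-1), W = sum of the sample, and lambda by x_bar.

def eq13(q, sample, m, gamma_m):
    n = len(sample)
    w = sum(sample)
    xbar = w / n
    u = q / xbar
    lhs = sum((-1) ** j * u ** (m - j - 1) / math.factorial(m - j - 1)
              for j in range(m))
    rhs = gamma_m * max(1.0 - q / w, 0.0) ** (n - 1)
    return lhs - rhs

def q_hat_2(sample, m, ce, cs):
    gamma_m = cs / ce - (-1) ** m
    xbar = sum(sample) / len(sample)
    a, b = 1e-9 * xbar, 50.0 * xbar          # bracket for bisection
    for _ in range(200):
        mid = 0.5 * (a + b)
        if eq13(a, sample, m, gamma_m) * eq13(mid, sample, m, gamma_m) <= 0:
            b = mid
        else:
            a = mid
    return 0.5 * (a + b)

if __name__ == "__main__":
    random.seed(0)
    demand = [random.expovariate(1.0) for _ in range(100)]   # lambda = 1
    print(round(q_hat_2(demand, m=3, ce=1.0, cs=1.0), 4))     # compare with 1.3008
```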

3.2.2

Estimating Equation Based on Order Statistics

In case the full data is not available, it is more likely that the seller would be able to recall the worst day or the best day in terms of demand. The worst day demand would be represented by the smallest order statistic X (1) , and the best day would be counted as the largest order statistic X (n) . In general, if the observation on the ith smallest demand out of a sample of size n is available, viz. X (i) , then we may consider the following two possible ways of estimating the optimal order quantity. One is to use estimating equation obtained by replacing λ with its unbiased estimator based on X (i) in both sides of the Eq. (5) (see Sengupta and Mukhuti 2006, and references therein). Another estimating equation can be obtained by replacing λ by its unbiased


estimator based on X_{(i)} on both sides of Eq. (5) (see Sengupta and Mukhuti 2006, and references therein). Another estimating equation can be obtained by replacing λ by its unbiased estimator based on X_{(i)} on the left-hand side of Eq. (5) and an X_{(i)}-based unbiased estimator of \bar{F}(Q) on the right-hand side of the same equation.

Sinha et al. (2006) provided an unbiased estimator of \bar{F}(Q) based on X_{(i+1)}, which is as follows:

h_{i+1}(Z_{i+1}) = \sum_{j_1=0}^{\infty}\sum_{j_2=0}^{\infty}\cdots\sum_{j_i=0}^{\infty} d_{j_1 j_2 \ldots j_i}\, I\!\left(Z_{i+1} > \alpha_1^{j_1}\alpha_2^{j_2}\cdots\alpha_i^{j_i} Q\right) \qquad (14)

where Z_i = (n-i+1)X_{(i)}, \alpha_k = \frac{n-i+k}{n-i} for k = 1, 2, \ldots, i, and the coefficients d_{j_1 j_2 \ldots j_i} carry the sign factor (-1)^{\sum_k j_k}, the sum extending over all even suffixes of j (the exact form of the coefficients is given in Sinha et al. 2006). Further, an unbiased estimator of λ based on X_{(i)} is given by

\hat{\lambda} = \frac{X_{(i)}}{a_i}, \quad \text{where } a_i = \sum_{j=1}^{i} \frac{1}{n-j+1}.

The first approach to estimating Q^* is through the estimating equation obtained by replacing λ with \hat{\lambda} in Eq. (5); let the corresponding solution be denoted by \hat{Q}^*_{(1)}. The second approach is to replace \bar{F}(Q) by h_i(Z_i) and λ by \hat{\lambda} in Eq. (5); we denote the corresponding estimator by \hat{Q}^*_{(2)}. In the case where only the best-day demand is available, the largest order statistic X_{(n)} is observed; however, owing to the high computational complexity, we do not investigate this case in detail.
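For concreteness, a small sketch of the order-statistic-based estimator of λ is given below (illustrative only; the sample and the choice i = 2 are hypothetical). The resulting \hat{\lambda} can then be plugged into Eq. (5) exactly as \bar{X} was above.

```python
import random

# Unbiased estimator of lambda from the i-th order statistic:
# lambda_hat = X_(i) / a_i, with a_i = sum_{j=1}^{i} 1/(n-j+1).

def lambda_hat_from_order_stat(sample, i):
    """i is 1-based: use the i-th smallest observed demand."""
    n = len(sample)
    x_i = sorted(sample)[i - 1]
    a_i = sum(1.0 / (n - j + 1) for j in range(1, i + 1))
    return x_i / a_i

if __name__ == "__main__":
    random.seed(3)
    demand = [random.expovariate(1.0) for _ in range(50)]
    print(round(lambda_hat_from_order_stat(demand, i=2), 4))   # based on X_(2)
```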

4 Simulation In this section, we present simulation studies for exponential demand in order to assess the estimation of the optimum order quantity from the estimating equations discussed above. We consider the standard exponential demand distribution (exp(1)) and observe the performance of the estimated optimal order quantities corresponding to the different estimating equations over different values of m and α_m. In the simulation we draw samples of size n = 10, 50, 100, 500, 1000, 5000, 10000. For a given sample size n, the estimated optimal quantities are determined from each of the estimating equations described in the previous section, and this procedure is repeated 1000 times. We compute the bias of the estimated optimal order quantity as the average of (Q̂* − Q*) over these 1000 repetitions, where Q* is the true optimal order quantity and Q̂* is its estimate. Tables 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 and 13 report the bias and MSE of the different estimators of the optimal order quantity. Full-sample bias and MSE of Q̂*_1 are reported in Tables 2, 3, and 4, respectively, and the same for Q̂*_2 in Tables 5, 6, and 7. It can be observed from the tables that the bias and MSE of these two estimators of Q* are comparable. Moreover, neither of the two estimators uniformly outperforms the other in terms of absolute bias


or MSE across the given degrees of importance of the nonlinear cost, m, in either small or large samples. Comparing the bias and MSE of the estimators of Q* based on the 2nd order statistic (viz. Q̂*_(1) and Q̂*_(2)), given in Tables 8, 9, 10, 11, 12 and 13, similar observations can be made as in the full-sample case. However, in this case the margins in bias and/or MSE for certain (α_1, m, n) are much larger. For example, the MSE of Q̂*_(1) is considerably smaller than that of Q̂*_(2) for m = 50 over all the considered sample sizes when α_1 = 2, whereas similar margins favor Q̂*_(2) for α_1 = 1 and m = 10 for all sample sizes except 100 and 1000. Thus, neither of the two estimators outperforms the other.
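A minimal skeleton of this simulation protocol is sketched below. It is not the authors' code: the estimator passed in is a simple plug-in Q̂ = 1.3008·X̄ (using the Table 1 value for m = 3, C_s = C_e as the true Q*), chosen only to show the repetition loop and the bias/MSE bookkeeping.

```python
import random, statistics

# Skeleton of the simulation: for each sample size, repeat 1000 times,
# estimate Q*, and record bias and MSE of the estimates.

def bias_mse(estimator, q_star, n, reps=1000, rate=1.0, seed=42):
    rng = random.Random(seed)
    est = [estimator([rng.expovariate(rate) for _ in range(n)])
           for _ in range(reps)]
    bias = statistics.fmean(est) - q_star
    mse = statistics.fmean([(e - q_star) ** 2 for e in est])
    return bias, mse

if __name__ == "__main__":
    q_star_true = 1.3008          # m = 3, Cs = Ce, lambda = 1 (cf. Table 1)
    for n in (10, 50, 100, 500):
        print(n, bias_mse(lambda s: 1.3008 * sum(s) / len(s), q_star_true, n))
```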

5 Performance Comparison of Piecewise Linear and Nonlinear Costs with Misspecification In this section, we compare the performance of the piecewise linear and nonlinear costs when the power of the cost function in the newsvendor problem is misspecified. We consider two types of misspecification: the piecewise nonlinear model misspecified as piecewise linear, and vice versa. We study the performance of the estimators proposed in Sect. 3.1 based on their respective MSEs under the misspecified models.

5.1 Cost Misspecification Under Unif(0, b) Demand

Let us consider the piecewise nonlinear cost function to be the appropriate one, with Q^* = \frac{b}{1+\alpha_m}. If m is misspecified as unity, then the optimal order quantity for uniform demand is estimated by any of the proposed estimators \hat{Q}_1, \hat{Q}_2 and \hat{Q}_3 with m = 1, as detailed in Sect. 3. Let the corresponding mean square errors be MSE_i, i = 1, 2, 3. The expressions of the MSEs are as follows:

MSE_1 = E\left[\left(\frac{2\bar{X}}{1+\alpha_1} - \frac{b}{1+\alpha_m}\right)^2\right]
      = \frac{b^2}{(1+\alpha_1)^2(1+\alpha_m)^2}\left[\left(1+\frac{1}{3n}\right)(1+\alpha_m)^2 - 2(1+\alpha_1)(1+\alpha_m) + (1+\alpha_1)^2\right]

MSE_2 = E\left[\left(\frac{(n+1)X_{(n)}}{n(1+\alpha_1)} - \frac{b}{1+\alpha_m}\right)^2\right]
      = \frac{b^2}{(1+\alpha_1)^2(1+\alpha_m)^2}\left[\frac{(n+1)^2(1+\alpha_m)^2}{n(n+2)} - 2(1+\alpha_1)(1+\alpha_m) + (1+\alpha_1)^2\right]

MSE_3 = E\left[\left(\frac{X_{(n)}}{1+\alpha_1} - \frac{b}{1+\alpha_m}\right)^2\right]
      = \frac{b^2}{(1+\alpha_1)^2(1+\alpha_m)^2}\left[\frac{n(1+\alpha_m)^2}{n+2} - 2\frac{n}{n+1}(1+\alpha_1)(1+\alpha_m) + (1+\alpha_1)^2\right]

Using routine algebra, it can readily be seen that MSE_1 is strictly greater than MSE_2. Numerical comparisons between MSE_1, MSE_2, and MSE_3 are shown in Figs. 2, 3, and 4 in Appendix II for different values of m, α_1 and n. From Fig. 2, it can be seen that for α_1 = 0.5 the piecewise linearity assumption of the cost worsens the performance of Q̂_1 relative to Q̂_2 as m grows, implying that Q̂_2 is better than Q̂_1. On the other hand, for α_1 = 2 the same assumption improves the performance of Q̂_1, although Q̂_2 remains the better estimator. Similarly, from Fig. 3, Q̂_3 remains a better estimator than Q̂_1 for α_1 = 0.5. However, Fig. 4 shows that at α_1 = 0.5, Q̂_3 is better than Q̂_2 in small samples if the specification is correct, i.e., m = 1; otherwise Q̂_3 is worse than Q̂_2 under all misspecifications at the same α_1. If α_1 = 2, then Q̂_3 performs better than Q̂_2 in small samples and becomes the best in the MSE sense under correct specification. For α_1 = 1, Q̂_3 is better than Q̂_2 in small samples. The difference in MSE between the two estimators tends to zero with increasing sample size.
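The sketch below evaluates these comparisons numerically, assuming the MSE expressions reconstructed above and the uniform-demand convention α_k = (C_e/C_s)^{1/k}; all parameter values are hypothetical.

```python
# Numerical comparison of MSE_1, MSE_2, MSE_3 when the cost degree m is
# misspecified as 1, for Uniform(0, b) demand.

def mse_under_misspec(b, ce, cs, m, n):
    a1 = ce / cs                       # alpha_1 used by the misspecified estimators
    am = (ce / cs) ** (1.0 / m)        # alpha_m of the true cost degree
    c = b ** 2 / ((1 + a1) ** 2 * (1 + am) ** 2)
    mse1 = c * ((1 + 1.0 / (3 * n)) * (1 + am) ** 2
                - 2 * (1 + a1) * (1 + am) + (1 + a1) ** 2)
    mse2 = c * ((n + 1) ** 2 * (1 + am) ** 2 / (n * (n + 2))
                - 2 * (1 + a1) * (1 + am) + (1 + a1) ** 2)
    mse3 = c * (n * (1 + am) ** 2 / (n + 2)
                - 2 * n * (1 + a1) * (1 + am) / (n + 1) + (1 + a1) ** 2)
    return mse1, mse2, mse3

if __name__ == "__main__":
    for n in (10, 50, 200):
        print(n, [round(v, 4) for v in mse_under_misspec(10.0, 1.0, 2.0, m=3, n=n)])
```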

5.2 Cost Misspecification Under exp(λ) Demand In this section, we present simulation studies for standard exponential demand under power-misspecified cost functions. In particular, we first take the true m to be unity and obtain the optimal order quantity Q* numerically from Eq. (5). Next, we estimate the optimal order quantity from the correctly specified model (i.e., m = 1) and from different power-misspecified models (m = 2, 3, 4, 5, 10, 20 and 50). Similarly, the MSEs of the misspecified estimators are calculated when the correct power specification is m = 5. Logarithms of the ratios of the MSEs of the misspecified estimators to those under correct specification are shown for the estimators Q̂*_i, i = 1, 2 in Figs. 5, 6, 7, and 8 (see Appendix II). Similarly, the performance of the estimated optimal order quantities using ordered demand data under power-misspecified models is presented in Figs. 9, 10, 11, and 12 in Appendix II. For the simulation we have taken λ = 1. The performance of the estimators of the optimal order quantity varies across the different misspecified models. For instance, m = 4 yields a better estimator Q̂*_1 than other powers when the correct specification is m = 1, provided α_1 = 0.5; for α_1 = 1, 2, m = 10 provides a better choice (Fig. 5). Q̂*_2 is better for m = 4, for α_1 = 0.5, 1, 2, when the correct specification is m = 1 (Fig. 6). Both Q̂*_(1) and Q̂*_(2) provide better estimators for m = 4, 10 and 3, corresponding to α_1 = 0.5, 1, 2 respectively, under correct specification of m = 1 (see Figs. 7 and 8). Under the correct specification of m = 5, Q̂*_1 performs better with m = 20 for α_1 = 0.5, 1, 2; Q̂*_2, on the other hand, performs better for m = 50, 20 and 2, respectively, for α_1 = 0.5, 1, 2 under this misspecification. The estimators based on ordered


demand data, viz. Q̂*_(1) and Q̂*_(2), perform better for m = 4 and 10, depending on the value of α_1. In particular, Q̂*_(1) is better under this piecewise nonlinear misspecified cost for m = 10 when α_1 = 0.5 or 1, and for m = 4 when α_1 = 2; Q̂*_(2) shows better performance for m = 4 when α_1 = 0.5 or 1, and for m = 10 when α_1 = 2.

6 Conclusion The contributions of this paper are twofold. First, we have proposed a generalization of the standard newsvendor problem with random demand and a higher degree of shortage and excess cost. In particular, we have developed a symmetric generalized newsvendor cost structure using power costs of the same degree for both shortage and excess inventory, and we have presented the method for determining the optimal order quantity. For uniform demand we have given the analytical expression of the optimal order quantity; for exponential demand, determining the optimal order quantity requires finding the zeros of a transcendental equation, and we have proven the existence of real roots of that equation, ensuring that a realistic solution to the proposed general model exists. Our second contribution is to provide different estimators of the optimal order quantity, using (i) full-sample and (ii) broken-sample data on demand. Whereas the analytical form of the estimator and its properties are easy to verify for the uniform demand distribution, this is difficult for the exponential distribution. We have therefore presented a simulation study, based on the full sample as well as on broken samples such as a single order statistic, to compare the different estimators and gauge their performance in terms of bias and MSE. Finally, we have compared the performance of the piecewise linear and nonlinear cost functions under misspecified importance. The numerical comparisons show that, under importance misspecification, no single degree of importance in the cost function dominates the others; rather, the suitable degree of importance under misspecified models varies with the ratio of the shortage and excess costs (per unit). A natural extension of this work would be to consider asymmetric shortage and excess costs. However, asymmetric power-type shortage and excess costs would have different dimensions, making them incomparable; an unpublished manuscript by Baraiya and Mukhoti (2019) proposes an inventory model for such a case. Acknowledgements The authors would like to thank the anonymous referees for their valuable comments. The third author remains deeply indebted to Late Prof. S. Sengupta and Prof. Bikas K. Sinha for their intriguing discussions and guidance on broken sample estimation. The research of the first author is funded by Indian Institute of Management Indore (FDA/SM/2255). The authors gratefully acknowledge the valuable comments of the participants of the SMDTDS-2020 conference organized by IAPQR at Kolkata.


Appendix I Tables

Table 2 Bias (in 1st row) and MSE (in 2nd row) of Qˆ ∗1 for Cs = 2Ce m

n = 10

2

0.0052 −0.0070 −0.0010 0.0011 −0.0007 0.0004 −0.0006 0.0597 0.0113 0.0060 0.0012 0.0005 0.0001 0.0001 0.6206 0.5937 0.6070 0.6116 0.6076 0.6099 0.6079 0.6775 0.4074 0.3976 0.3800 0.3718 0.3726 0.3698 13.1606 12.9312 13.0446 13.0836 13.0495 13.0695 13.0523 194.3676 171.1996 172.2751 171.6106 170.4826 170.8554 170.3837 −0.0927 −0.1180 −0.1055 −0.1012 −0.1050 −0.1028 −0.1047 0.2664 0.0624 0.0369 0.0155 0.0134 0.0111 0.0112 3.0587 2.9607 3.0092 3.0258 3.0112 3.0198 3.0124 13.2180 9.4926 9.4406 9.2340 9.1028 9.1270 9.0786 −4.4660 −4.4904 −4.4784 −4.4742 −4.4778 −4.4757 −4.4776 20.1848 20.2090 20.0797 20.0235 20.0533 20.0325 20.0487 −10.9240 −10.9797 −10.9522 −10.9427 −10.9510 −10.9461 −10.9503 120.5828 120.7890 120.0743 119.7677 119.9350 119.8197 119.9100

3 4 5 10 20 50

n = 50

n = 100

n = 500

n = 1000 n = 5000

n = 10000

Table 3 Bias (in 1st row) and MSE (in 2nd row) of Qˆ ∗1 for Cs = Ce m

n = 10

2

0.0978 0.0805 0.0891 0.0920 0.0895 0.0910 0.1299 0.0291 0.0200 0.0109 0.0091 0.0085 1.8820 1.8318 1.8566 1.8651 1.8577 1.8621 4.5529 3.5459 3.5480 3.4993 3.4603 3.4694 −0.5893 −0.6052 −0.5973 −0.5946 −0.5970 −0.5956 0.4485 0.3853 0.3669 0.3556 0.3573 0.3549 0.0128 −0.0172 −0.0024 0.0027 −0.0017 0.0009 0.3614 0.0683 0.0361 0.0074 0.0033 0.0007 11.3833 11.1515 11.2661 11.3055 11.2711 11.2913 151.2083 128.4259 129.0847 128.2545 127.2342 127.5375 −4.3518 −4.3806 −4.3664 −4.3615 −4.3657 −4.3632 19.2711 19.2521 19.0983 19.0292 19.0627 19.0385 −8.2308 −8.3314 −8.2817 −8.2646 −8.2795 −8.2708 71.8199 70.1792 68.9928 68.3861 68.5878 68.4136

3 4 5 10 20 50

n = 50

n = 100

n = 500

n = 1000 n = 5000

n = 10000 0.0897 0.0082 1.8583 3.4543 −0.5968 0.3563 −0.0014 0.0004 11.2739 127.1223 −4.3654 19.0570 −8.2783 68.5345


Table 4 Bias (in 1st row) and MSE (in 2nd row) of Qˆ ∗1 for Cs = 0.5Ce m

n = 10

n = 50

n = 100

n = 500

n = 1000 n = 5000

n = 10000

2

0.1296 0.2148 4.5171 24.0654 −0.5046 0.4257 1.2617 2.7166 −2.2265 5.1226 −4.2357 18.3848 0.1000 22.1078

0.1074 0.0488 4.4217 20.2408 −0.5252 0.3080 1.2088 1.6729 −2.2468 5.0790 −4.2689 18.3069 −0.1344 4.1763

0.1184 0.0338 4.4689 20.3366 −0.5150 0.2823 1.2349 1.6374 −2.2367 5.0195 −4.2525 18.1277 −0.0185 2.2061

0.1222 0.0189 4.4851 20.1907 −0.5115 0.2651 1.2439 1.5702 −2.2333 4.9909 −4.2468 18.0444 0.0213 0.4496

0.1189 0.0159 4.4709 20.0227 −0.5146 0.2664 1.2361 1.5381 −2.2363 5.0025 −4.2517 18.0814 −0.0135 0.2016

0.1191 0.0144 4.4721 20.0034 −0.5143 0.2647 1.2367 1.5306 −2.2361 5.0001 −4.2513 18.0743 −0.0107 0.0223

3 4 5 10 20 50

0.1208 0.0150 4.4793 20.0712 −0.5128 0.2633 1.2407 1.5416 −2.2345 4.9934 −4.2488 18.0536 0.0069 0.0449

Table 5 Bias (in 1st row) and MSE (in 2nd row) of Q̂*_2 for Cs = 2Ce

n = 10

n = 50

n = 100

n = 500

n = 1000 n = 5000

n = 10000

2

−0.0040 0.0566 0.6517 0.7187 13.4388 201.9324 −0.0795

−0.0062 0.0110 0.6058 0.4211 13.1458 176.7958 −0.1103 0.0597

0.0024 0.0060 0.6197 0.4138 13.2482 177.6949 −0.0954 0.0352

0.0000 0.0012 0.6102 0.3785 13.1041 172.1608 −0.1028 0.0160

−0.0003 0.0006 0.6091 0.3740 13.0796 171.2964 −0.1037 0.0134

−0.0002 0.0001 0.6087 0.3709 13.0613 170.6197 −0.1039 0.0111

3 4 5

10 20 50

−0.0001 0.0001 0.6090 0.3715 13.0654 170.7480 −0.1036 0.0113

0.2606 3.1096 2.9906 3.0482 3.0195 3.0161 3.0164 3.0154 13.4791 9.6562 9.6832 9.1984 9.1368 9.1061 9.0969 −4.4347 −4.4792 −4.4667 −4.4754 −4.4765 −4.4765 −4.4768 19.9084 20.1073 19.9757 20.0342 20.0411 20.0399 20.0419 −10.9395 −10.9700 −10.9336 −10.9470 −10.9486 −10.9481 −10.9486 120.8739 120.5714 119.6690 119.8620 119.8843 119.8641 119.8738


Table 6 Bias (in 1st row) and MSE (in 2nd row) of Q̂*_2 for Cs = Ce

n = 10

2

0.1491 0.0942 0.1002 0.0918 0.0907 0.0904 0.1502 0.0314 0.0223 0.0109 0.0095 0.0084 1.9619 1.8564 1.8812 1.8628 1.8606 1.8604 4.8798 3.6337 3.6416 3.4912 3.4723 3.4630 −0.5811 −0.6003 −0.5910 −0.5956 −0.5962 −0.5962 0.4374 0.3791 0.3595 0.3569 0.3565 0.3556 0.0309 −0.0070 0.0101 0.0009 −0.0002 −0.0002 0.3582 0.0668 0.0368 0.0076 0.0038 0.0007 11.5039 11.2222 11.3586 11.2907 11.2825 11.2832 153.6697 129.9282 131.2102 127.9323 127.5194 127.3534 −4.3348 −4.3713 −4.3546 −4.3633 −4.3643 −4.3642 19.1194 19.1696 18.9964 19.0449 19.0506 19.0472 −8.3287 −8.3343 −8.2585 −8.2744 −8.2763 −8.2746 73.1996 70.2044 68.6129 68.5503 68.5387 68.4776

3 4 5 10 20 50

n = 50

n = 100

n = 500

n = 1000 n = 5000

n = 10000 0.0902 0.0083 1.8599 3.4602 −0.5963 0.3557 −0.0004 0.0004 11.2810 127.2837 −4.3645 19.0494 −8.2754 68.4869

Table 7 Bias (in 1st row) and MSE (in 2nd row) of Q̂*_2 for Cs = 0.5Ce

n = 10

n = 50

n = 100

n = 500

n = 1000 n = 5000

n = 10000

2

0.1288 0.2084 4.7220 26.0939 −0.4644 0.3920 1.2892 2.7712 −2.2054 5.0296 −4.2275 18.3061 0.0614 21.3335

0.1119 0.0489 4.5005 20.9409 −0.5131 0.2951 1.2249 1.7080 −2.2386 5.0418 −4.2598 18.2278 −0.1994 4.0419

0.1261 0.0359 4.5335 20.9274 −0.5038 0.2713 1.2560 1.6916 −2.2276 4.9791 −4.2397 18.0200 −0.0146 2.2135

0.1205 0.0187 4.4846 20.1881 −0.5122 0.2660 1.2405 1.5625 −2.2344 4.9959 −4.2490 18.0635 −0.0146 0.4609

0.1198 0.0164 4.4784 20.0944 −0.5132 0.2652 1.2387 1.5460 −2.2352 4.9978 −4.2502 18.0684 −0.0126 0.2292

0.1198 0.0146 4.4753 20.0321 −0.5137 0.2640 1.2383 1.5347 −2.2354 4.9973 −4.2503 18.0658 −0.0046 0.0239

3 4 5 10 20 50

0.1200 0.0148 4.4765 20.0462 −0.5134 0.2639 1.2388 1.5369 −2.2352 4.9965 −4.2500 18.0636 −0.0034 0.0443


Table 8 Bias (in 1st row) and MSE (in 2nd row) of Qˆ ∗(1) for Cs = 2Ce m

n = 10

2

0.0311 0.0057 0.0044 0.0118 0.0130 0.0042 0.0257 0.3344 0.2931 0.2813 0.3175 0.2928 0.3210 0.3657 0.6780 0.6219 0.6189 0.6353 0.6379 0.6185 0.6660 2.0924 1.8219 1.7604 1.9575 1.8396 1.9542 2.2309 13.6486 13.1712 13.1460 13.2854 13.3079 13.1422 13.5463 304.4965 277.3833 272.5336 289.0052 280.8245 286.5041 312.9112 −0.0388 −0.0915 −0.0943 −0.0789 −0.0765 −0.0947 −0.0501 1.4415 1.2740 1.2235 1.3767 1.2693 1.3950 1.5789 3.2672 3.0633 3.0525 3.1120 3.1216 3.0508 3.2235 32.2458 28.3435 27.5137 30.2142 28.6722 30.0715 34.0053 −4.4141 −4.4649 −4.4676 −4.4528 −4.4504 −4.4680 −4.4250 20.8213 21.1102 21.0868 21.0992 20.9786 21.2496 21.0440 −10.8054 −10.9214 −10.9275 −10.8936 −10.8882 −10.9285 −10.8303 123.7353 125.4102 125.2968 125.3128 124.6758 126.1481 124.9338

3 4 5 10 20 50

n = 50

n = 100

n = 500

n = 1000 n = 5000

n = 10000

Table 9 Bias (in 1st row) and MSE (in 2nd row) of Qˆ ∗(1) for Cs = Ce m

n = 10

2

0.1346 0.0986 0.0967 0.1073 0.1089 0.0965 0.6903 0.6005 0.5763 0.6512 0.6016 0.6563 1.9886 1.8843 1.8788 1.9092 1.9141 1.8779 9.6012 8.5136 8.2929 9.0191 8.6186 8.9619 −0.5556 −0.5886 −0.5903 −0.5807 −0.5791 −0.5906 0.8739 0.8432 0.8253 0.8751 0.8314 0.8929 0.0765 0.0142 0.0109 0.0291 0.0320 0.0104 2.0236 1.7737 1.7021 1.9211 1.7715 1.9423 11.8767 11.3942 11.3687 11.5096 11.5323 11.3648 261.8440 235.9926 231.1351 247.4247 238.9786 245.4252 −4.2906 −4.3505 −4.3536 −4.3362 −4.3333 −4.3541 20.2683 20.5605 20.5222 20.5714 20.4089 20.7477 −8.0167 −8.2261 −8.2372 −8.1760 −8.1662 −8.2389 87.0193 87.6663 87.0425 88.5002 86.6497 89.7789

3 4 5 10 20 50

n = 50

n = 100

n = 500

n = 1000 n = 5000

n = 10000 0.1269 0.7519 1.9663 10.0476 −0.5626 0.9354 0.0632 2.2128 11.7733 270.8390 −4.3034 20.5546 −8.0616 89.8955


Table 10 Bias (in 1st row) and MSE (in 2nd row) of Qˆ ∗(1) for Cs = 0.5Ce m

n = 10

2

0.1768 0.1306 0.1282 0.1417 0.1439 0.1278 1.1370 0.9889 0.9491 1.0724 0.9909 1.0807 4.7201 4.5216 4.5111 4.5691 4.5784 4.5095 42.7250 38.4152 37.5964 40.3345 38.9017 40.0158 −0.4607 −0.5036 −0.5059 −0.4933 −0.4913 −0.5062 1.1676 1.0934 1.0618 1.1526 1.0797 1.1759 1.3742 1.2641 1.2583 1.2905 1.2956 1.2574 8.1699 7.1191 6.8820 7.6434 7.1903 7.6275 −2.1833 −2.2255 −2.2278 −2.2154 −2.2135 −2.2281 5.6909 5.7651 5.7423 5.7875 5.7101 5.8538 −4.1650 −4.2341 −4.2378 −4.2176 −4.2143 −4.2383 19.8266 20.1071 20.0502 20.1477 19.9362 20.3501 0.5986 0.1109 0.0852 0.2276 0.2505 0.0812 123.7725 108.4855 104.1096 117.5048 108.3515 118.8006

3 4 5 10 20 50

n = 50

n = 100

n = 500

n = 1000 n = 5000

n = 10000 0.1669 1.2383 4.6776 44.2617 −0.4699 1.2667 1.3506 8.7005 −2.1924 5.8179 −4.1798 20.1851 0.4941 135.3470

Table 11 Bias (in 1st row) and MSE (in 2nd row) of Qˆ ∗(2) for Cs = 2Ce m

n = 10

2

−0.0625 −0.1219 −0.0799 −0.1007 −0.1041 −0.1025 −0.1056 0.2761 0.2108 0.2245 0.2384 0.2546 0.2371 0.2576 0.5977 0.4855 0.5937 0.5472 0.5395 0.5451 0.5376 1.9156 1.4015 1.6598 1.6739 1.7599 1.6658 1.7779 13.2132 12.6096 13.6049 13.2154 13.1489 13.1931 13.1255 291.3362 251.0958 288.8477 284.1045 289.9123 282.9913 290.7649 −0.1042 −0.1701 −0.0614 −0.1039 −0.1112 −0.1064 −0.1138 1.4026 1.1268 1.2407 1.3157 1.4074 1.3100 1.4255 3.0143 2.7592 3.1798 3.0152 2.9871 3.0058 2.9772 29.9356 24.0600 28.6405 28.6393 29.8210 28.4890 30.0241 −4.5136 −4.5327 −4.4134 −4.4535 −4.4608 −4.4556 −4.4627 21.6035 21.5750 20.6595 21.0824 21.2326 21.0951 21.2678 −11.1292 −11.2668 −11.0399 −11.1287 −11.1439 −11.1338 −11.1492 129.9304 131.7310 127.2745 129.5401 130.2713 129.6265 130.4667

3 4 5 10 20 50

n = 50

n = 100

n = 500

n = 1000 n = 5000

n = 10000


Table 12 Bias (in 1st row) and MSE (in 2nd row) of Qˆ ∗(2) for Cs = Ce m

n = 10

2

0.1857 0.2157 0.3067 0.2795 0.2740 0.2783 0.8032 0.7401 0.8810 0.9173 0.9725 0.9135 2.0321 1.8945 2.1214 2.0326 2.0174 2.0275 10.2007 8.3781 9.8960 9.8237 10.1555 9.7758 −0.5965 −0.6378 −0.5697 −0.5963 −0.6009 −0.5979 0.9022 0.8377 0.8101 0.8679 0.9087 0.8672 −0.2011 −0.2469 −0.1175 −0.1674 −0.1741 −0.1685 1.5988 1.3269 1.4608 1.5476 1.6572 1.5434 11.2783 10.6747 11.6700 11.2805 11.2140 11.2581 243.9466 206.0419 239.9422 236.7064 242.7717 235.6797 −4.4437 −4.4980 −4.3694 −4.4175 −4.4258 −4.4206 21.3903 21.5558 20.5983 21.1017 21.2846 21.1207 −8.4431 −8.6981 −8.2775 −8.4421 −8.4703 −8.4516 92.1351 92.1044 87.0470 90.8175 92.6434 90.8839

3 4 5 10 20 50

n = 50

n = 100

n = 500

n = 1000 n = 5000

n = 10000 0.2725 0.9837 2.0121 10.2103 −0.6025 0.9175 −0.1766 1.6790 11.1905 243.7150 −4.4287 21.3313 −8.4802 93.0735

Table 13 Bias (in 1st row) and MSE (in 2nd row) of Qˆ ∗(2) for Cs = 0.5Ce m

n = 10

n = 50

n = 100

2

0.0783 1.0075 4.6372 42.3529 −0.4795 1.2028 1.2377 7.6032 −2.1798 5.7243 −4.4662 21.8757 −0.1642 116.7738

−0.0096 0.7557 4.3821 35.6494 −0.4729 1.0670 1.1001 5.9993 −2.2379 5.7721 −4.6036 22.6154 −0.7678 92.6822

0.0751 0.0353 0.0286 0.0322 0.8501 0.8859 0.9455 0.8799 4.8027 4.6381 4.6100 4.6286 41.5951 41.0597 42.1501 40.8788 −0.3671 −0.4008 −0.4066 −0.4019 1.0989 1.1834 1.2596 1.1810 1.3270 1.2382 1.2230 1.2331 7.1566 7.2254 7.5813 7.1856 −2.1481 −2.1815 −2.1869 −2.1827 5.4737 5.6682 5.7558 5.6704 −4.4881 −4.5374 −4.5444 −4.5388 21.7307 22.2611 22.4430 22.2681 0.2275 −0.1620 −0.2285 −0.1843 103.8056 109.4833 117.0710 108.9685

3 4 5 10 20 50

n = 500

n = 1000 n = 5000

n = 10000 0.0261 0.9566 4.6001 42.3211 −0.4084 1.2757 1.2177 7.6446 −2.1889 5.7768 −4.5471 22.4902 −0.2519 118.5504

Fig. 1 Optimal order quantity (Q*) for different α_1 ∈ (0, 1) and m (two panels, one curve per m ∈ {2, 3, 4, 5, 10, 20, 30, 40, 50, 100}; horizontal axis α_1, vertical axis Q*)

Appendix II Figures

Fig. 2 MSE_2/MSE_1 for α = 0.5, 1, 2

Fig. 3 MSE_3/MSE_1 for α = 0.5, 1, 2

Fig. 4 MSE_3/MSE_2 for α = 0.5, 1, 2

Fig. 5 log(MSE_m(Q̂*_1)/MSE_1(Q̂*_1)) for α = 0.5, 1, 2 and m = 2, 3, 4, 5, 10, 20, 50

Fig. 6 log(MSE_m(Q̂*_2)/MSE_1(Q̂*_2)) for α = 0.5, 1, 2 and m = 2, 3, 4, 5, 10, 20, 50

Fig. 7 log(MSE_m(Q̂*_(1))/MSE_1(Q̂*_(1))) for α = 0.5, 1, 2 and m = 2, 3, 4, 5, 10, 20, 50

Fig. 8 log(MSE_m(Q̂*_(2))/MSE_1(Q̂*_(2))) for α = 0.5, 1, 2 and m = 2, 3, 4, 5, 10, 20, 50

Fig. 9 log(MSE_m(Q̂*_1)/MSE_5(Q̂*_1)) for α = 0.5, 1, 2 and m = 1, 2, 3, 4, 10, 20, 50

Fig. 10 log(MSE_m(Q̂*_2)/MSE_5(Q̂*_2)) for α = 0.5, 1, 2 and m = 1, 2, 3, 4, 10, 20, 50

Fig. 11 log(MSE_m(Q̂*_(1))/MSE_5(Q̂*_(1))) for α = 0.5, 1, 2 and m = 1, 2, 3, 4, 10, 20, 50

Fig. 12 log(MSE_m(Q̂*_(2))/MSE_5(Q̂*_(2))) for α = 0.5, 1, 2 and m = 1, 2, 3, 4, 10, 20, 50


References Agrawal N, Smith SA (1996) Estimating negative binomial demand for retail inventory management with unobservable lost sales. Naval Res Logist (NRL) 43(6):839–861 Baraiya R, Mukhoti S (2019) Generalization of the newsvendor problem with gamma demand distribution by asymmetric losses. Technical Report WP/03/2019-20/OM&QT, Indian Institute of Management Indore, India Bertsimas D, Thiele A (2005) A data-driven approach to newsvendor problems. Working Paper, Massachusetts Institute of Technology Bookbinder JH, Lordahl AE (1989) Estimation of inventory reorder level using the bootstrap statistical procedure. IIE Trans 21:302–312 Chandra A, Mukherjee SP (2005) Some alternative methods of finding the optimal order quantity in inventory models. Calcutta Statist Assoc Bull 57(1–2):121 Chernonog T, Goldberg N (2018) On the multi-product newsvendor with bounded demand distributions. Int J Prod Econ 203:38–47 Conrad SA (1976) Sales data and the estimation of demand. J Oper Res Soc 27(1):123–127 Dvoretzky A, Kiefer J, Wolfowitz J (1952) The inventory problem: II. Case of unknown distributions of demand. Econom J Econ Soc 450–466 Fukuda Y (1960) Estimation problems in inventory control. Technical report, California Univ Los Angeles Numerical Analysis Research Gerchak Y, Wang S (1997) Liquid asset allocation using “newsvendor” models with convex shortage costs. Insur Math Econ 20(1):17–21 Hayes RH (1969) Statistical estimation problems in inventory control. Manag Sci 15(11):686–701 Kleywegt AJ, Shapiro A, Homem-De-Mello T (2001) The sample average approximation method for stochastic discrete optimization. SIAM J Optim 12:479–502 Levi R, Perakis G, Uichanco J (2015) The data-driven newsvendor problem: new bounds and insights. Oper Res 63(6):1294–1306 Nahmias S (1994) Demand estimation in lost sales inventory systems. Naval Res Logist (NRL) 41(6):739–757 Pal M (1996) Asymptotic confidence intervals for the optimal cost in newsboy problem. Calcutta Statist Assoc Bull 46(3–4):245–252 Parlar M, Rempala R (1992) A stochastic inventory problem with piecewise quadratic costs. Int J Prod Econ 26(1–3):327–332 Qin Y, Wang R, Vakharia AJ, Chen Y, Seref MMH (2011) The newsvendor problem: review and directions for future research. Eur J Oper Res 213(2):361–374 Rossi R, Prestwich S, Armagan Tarim S, Hnich B (2014) Confidence-based optimisation for the newsvendor problem under binomial, Poisson and exponential demand. Eur J Oper Res 239(3):674–684 Scarf H (1959) The optimality of (5, 5) policies in the dynamic inventory problem Sengupta S, Mukhuti S (2006) Unbiased variance estimation in a simple exponential population using ranked set samples. J Stat Plan Inference 136(4):1526–1553 Silver EA, Pyke DF, Peterson R et al (1998) Inventory management and production planning and scheduling, vol 3. Wiley, New York Sinha BK, Sengupta S, Mukhuti S (2006) Unbiased estimation of the distribution function of an exponential population using order statistics with application in ranked set sampling. Commun Stat Theory Methods 35(9):1655–1670. https://doi.org/10.1080/03610920600683663 Sok Y (1980) A study of alternative quantile estimation methods in newsboy-type problems. Technical report, Naval Postgraduate School Monterey, CA Veinott AF Jr (1965) Optimal policy for a multi-product, dynamic, nonstationary inventory problem. Manag Sci 12(3):206–222

Analytics Approach to the Improvement of the Management of Hospitals Atsuo Suzuki

A. Suzuki (B) Nanzan University, 18 Yamazato-cho, Showa-ku, Nagoya 466-8673, Japan. e-mail: [email protected]

1 Introduction

According to a survey by the Japan Hospital Federation, about 73% of Japanese hospitals ran a deficit in 2019. Because the Japanese government introduced the Diagnosis Procedure Combination (DPC) system to reduce public healthcare expenditure, many hospitals are suffering financial difficulties. DPC defines the standard treatment for each illness and thereby determines a hospital's income from social security. Before its introduction, many hospitals tended to gain profit from the social security system through unnecessary treatments and medicines. Although most Japanese people know that the sustainability of Japan's healthcare system is a very serious problem, the financial situation of hospitals may not improve even in the future.

On the other hand, owing to the aging of society, the number of patients is increasing in Japan. The proportion of the population over sixty-five years old is about 28%, and the Statistics Bureau of Japan forecasts that it will increase to about 35% within 20 years. Hospitals need to take care of more patients than ever, so the deficit may place a heavy burden on doctors, nurses, and other medical staff. Both the financial situation and the workload of hospital staff are becoming more serious in Japan.

To overcome this situation, many hospitals are trying to improve their management, for example by introducing ICT to increase its effectiveness. However, hardly any hospitals are introducing Analytics, here used synonymously with Operations Research (OR), although healthcare management has OR aspects ranging from the strategic level to the daily operational level (see Fig. 1).

Fig. 1 Healthcare problems from strategic to operational level: location of hospitals; assignment of departments; design of hospitals (facilities, layout, information system); appointment systems (waiting time problem); scheduling (operating rooms; doctors, residents, nurses and staff)

At the strategic level, the location of hospitals and the assignment of departments are included. Strategic-level problems are mainly for national or local government, and political issues also need to be considered. At the mid level, the design of hospitals is the main topic; it includes many OR problems that may be solved by queuing theory, inventory theory, and scheduling theory. At the operational level, there are many problems that can be solved by OR. Among them, we present three topics: the improvement of the appointment system, operating room scheduling, and resident scheduling. The reasons why we introduce these daily operational topics are that they are urgent in Japan and that we, as OR people, can contribute to them immediately without any political considerations. If you attend a conference of an academic society of medical management, such as the Japan Hospital Association, you recognize that, although there are many problems in hospitals that could be solved by OR, almost nobody there knows about OR. For these reasons, we would like to introduce our experience with the projects we have worked on recently.

The topics are based on projects carried out with my Ph.D. students. The results on the waiting time problem with Ichihara are published in Ichihara et al. (2020) and form part of his Ph.D. thesis. The operating room scheduling problem was studied with Ito, and the results are published in Ito et al. (2016); the resident scheduling problem was studied with Ito and Ohnishi, and the results were published in Ito et al. (2017). The rest of the paper is organized as follows. In Sect. 2, we introduce the waiting time problem. In Sects. 3 and 4, we describe two scheduling problems: the operating room scheduling problem and the resident scheduling problem. We conclude the paper with final remarks in Sect. 5.

2 Waiting Time Problem in a Department of a Hospital

For most hospitals, the long waiting time of patients is a serious issue. When we go to a hospital, we sometimes encounter a scene in which a patient gets angry and shouts, "Why do you make me wait so long? I have an appointment." Even for a hospital that has recently been renovated and has introduced a new information system, the situation is the same. For example, in one hospital, once you arrive and go to the reception desk you are handed a tablet, which shows a rough estimate of the waiting time and the place the patient should go on its screen. In spite of this information system, the waiting time problem is not resolved. One of the reasons is that the administrative staff of the hospital do not know how to estimate the waiting time, analytically or even numerically, from data such as the arrival rate of patients, the distribution of examination times, and the appointment system. We therefore decided to develop a way to estimate the waiting time of patients using queuing theory. It is a strong tool for determining the number of doctors, the appointment system, and so on, so as to keep the waiting time within a reasonable range. The result requires some mathematical calculation; for the details of the derivation, please refer to Ichihara et al. (2020).

Our situation is as follows. Most Japanese hospitals adopt modified block scheduling. There are two types of patients. The first type is appointment patients, who have appointments at specific times. The other type is new patients, who come to the hospital with a reference letter from a clinic doctor. In most hospitals, appointment patients have priority over new patients. Figure 2 shows the outline of the appointment scheme: there are time frames of length T, N appointment patients are assigned to each frame, and the appointments for the l-th frame are made at time (l − 1)T. We assume that new patients arrive according to a Poisson process and that appointment patients arrive before their appointment times. We also assume that service times follow Erlang distributions and that the service time of new patients is longer than that of appointment patients.

Fig. 2 Appointment scheme of hospitals with the modified block scheduling

Figure 3 shows the scheme we use to obtain the distribution of the waiting time. First, we calculate the moment generating function. Then, by the inverse Laplace transform, we calculate the density function, and by integrating the density function we obtain the distribution function. Note that the density function and the moment generating function are in one-to-one correspondence. By this method, we obtain the distribution function analytically. Because the derivation requires straightforward but lengthy mathematical transformations, we skip the details; they are given in Ichihara et al. (2020).

Using the result, we computed the probability of the waiting time for many cases. One of the results is shown in Table 1, rounded off to the second decimal place. The columns correspond to the number of patients that can be assigned to a time frame, and the rows to the frame number. For each combination, we calculated the probability that the waiting time is less than 40 min for appointment patients and less than 90 min for new patients.


Fig. 3 Calculation of the distribution function of the waiting time

Table 1 Example of the calculation of the probability: for each number of appointment patients per time frame and each frame number, the probability that the waiting time of an appointment patient is less than 40 min and that the waiting time of a new (first-visit) patient is less than 90 min

The results in Table 1 show that with 3 appointment patients in each time frame, the probability that the waiting time for one examination is less than 40 min is 100%, and even in the sixth frame the probability that an appointment patient waits less than 40 min is 63%. On the other hand, if the number of appointment patients per frame is 5, the probability that the waiting time is less than 40 min drops to 1%. Using the distribution function, we can calculate the probability for any combination of the number of appointment patients per time frame and the number of time frames within a short CPU time, and by calculating the probability repeatedly we can set the number of appointment patients appropriately; this means we can reduce the waiting time of the patients. We also compared our results with actual data from a hospital. Our method estimates the waiting time reasonably well except in special cases in which the doctors change the order of examination for some reason, for example when a new patient's condition is serious and the doctor needs to change the priority; this can happen and cannot be forecast beforehand. Apart from such special cases, our result is useful for the management of the hospital.
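Although the chapter's result is analytic, the same quantities can be approximated by a rough Monte Carlo sketch of the appointment scheme described above. The sketch below is hypothetical (single doctor, arbitrary frame length, arrival rate, and Erlang service parameters) and is only meant to illustrate the mechanism, not to reproduce Table 1.

```python
import random

# Rough simulation of modified block scheduling: N appointment patients arrive
# at the start of each frame, new patients arrive by a Poisson process,
# appointment patients have priority, service times are Erlang.

def erlang(k, rate, rng):
    return sum(rng.expovariate(rate) for _ in range(k))

def simulate_day(n_frames=6, n_app=3, frame_len=40.0, new_rate=1 / 30,
                 app_service=(2, 2 / 15), new_service=(2, 2 / 25), rng=None):
    rng = rng or random.Random()
    horizon = n_frames * frame_len
    arrivals = [(f * frame_len, True) for f in range(n_frames) for _ in range(n_app)]
    t = 0.0
    while True:                                   # Poisson arrivals of new patients
        t += rng.expovariate(new_rate)
        if t > horizon:
            break
        arrivals.append((t, False))
    arrivals.sort(key=lambda a: a[0])
    waiting_app, waiting_new, waits = [], [], []
    clock, i = 0.0, 0
    while i < len(arrivals) or waiting_app or waiting_new:
        while i < len(arrivals) and arrivals[i][0] <= clock:   # admit arrivals
            (waiting_app if arrivals[i][1] else waiting_new).append(arrivals[i][0])
            i += 1
        if not waiting_app and not waiting_new:
            clock = arrivals[i][0]                # idle until next arrival
            continue
        if waiting_app:                           # appointment patients first
            arr, (k, r) = waiting_app.pop(0), app_service
            waits.append(("app", clock - arr))
        else:
            arr, (k, r) = waiting_new.pop(0), new_service
            waits.append(("new", clock - arr))
        clock += erlang(k, r, rng)
    return waits

if __name__ == "__main__":
    rng = random.Random(7)
    app = [w for _ in range(500) for kind, w in simulate_day(rng=rng) if kind == "app"]
    print("P(appointment wait < 40 min) ~", sum(w < 40 for w in app) / len(app))
```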


3 Operating Room Scheduling

Operating room management is important both for the quality of patient treatment and for hospital finances. Many patients experience long waits before their operations, and because a patient's condition tends to worsen while waiting, patients generally want to have the operation as soon as possible. For this reason, efficient management that reduces the waiting time for operations increases both the quality of treatment and patient satisfaction. The other important factor is that hospitals make a profit on operations; in Japan, the profit from operations is a main source of hospital income, so more operations mean more profit, and efficient management that increases the number of operations is important for the hospital's finances. For these reasons, many hospitals have introduced so-called operating room information systems, but even with such systems the operation schedules are generated manually. We therefore decided to build a system that creates operating room schedules automatically in order to implement better management.

The usual way of generating operating room schedules in most Japanese hospitals is as follows. First, surgeons propose operations together with their expected durations. Then, the head nurse generates a 5-day schedule based on the proposed durations. Finally, she adjusts the schedule at a weekly meeting with the surgeons representing the departments. This practice has two drawbacks. The first is that the head nurse spends a lot of time on this work: it takes two and a half hours to create a 5-day schedule and two hours to adjust it at the meeting, which five to ten surgeons attend, so it is a waste of time for the surgeons as well. The second is that, in spite of the head nurse's effort, there is no guarantee that the schedules are good for management. In fact, the overtime workload is a serious problem, and schedule changes and reassignments of operating rooms happen very often, reducing the efficiency of operating room management. To overcome these drawbacks and increase efficiency, we built a scheduling system that reduces overtime, schedule changes, and operating room reassignments.

We formulated the problem as a mixed integer programming problem. We outline the formulation, mainly the objective function; for the details please refer to Ito et al. (2016). The objective function consists of three terms, the first of which is the sum of overtime:

\min.\; \alpha \sum_{j\in J}\sum_{i\in I} O_{ij} + \beta \sum_{j\in J}\sum_{l\in L} z_{lj} + \gamma \sum_{j\in J} R_j \qquad (1)

where I, J, and L are the index sets of operations, operating rooms, and departments, respectively, and O_{ij}, z_{lj}, and R_j are variables representing overtime, the overflow time for a department, and the difference of the standard deviations of consecutive operations' duration times, respectively.


Fig. 4 Schedule obtained by the system Set the first day for a 5-day schedule

Create a 5-day schedule using CPLEX

Begin es ma on of opera on dura ons

Fig. 5 Interface of the operating rooms scheduling system

The first term of (1) is the sum of the overtime of all operations and operating rooms. The second term represents the sum of overflow times that department l uses operating room j. The third term is the sum of differences of standard deviations of two consecutive operations in operating room j. There are more than ten constraints of the formulation, and the details of the formulation are in Ito et al. (2016). We solve the problem formulated in this way using PC and implement it as a system running on a EXCEL based interface. In the system, CPU time to solve the problem is thirteen seconds using CPLEX. Figure 4 shows a schedule obtained by the system. We used the estimated operation duration time. Figure 5 shows the operating room scheduling system’s user interface. The system can automatically create schedules by simple operations. The system operates in two stages. In the first stage, we estimate operation duration time by a regression analysis. “Compile data” button compiles a data. “Estimate duration” button begins estimation of operation duration time. After clicking these buttons, the date, patient ID, department, proposed time, and estimated time are listed in a table.


And, it makes the graph of operation duration time. The vertical axis is operation duration time. The horizontal axis is the operation number. The solid line is estimated duration time of the operations and the dotted line is proposed duration time proposed by the doctors. If we find differences between the estimated and proposed duration times, we can adjust operation duration times by consulting with the doctor. In the second stage, we solve the operating rooms scheduling problem formulated as a mixed integer programming problem. “Set initial date” button sets the first day for a weekly schedule. “Schedules” button creates a weekly schedule using a optimization software CPLEX. In the trial use of the system in the Aichi Medical University Hospital, the system reduced the lord of the head nurse. It also reduced the overtime, scheduling change of the operations and reassignment of the operating rooms. Details of the results are in Ito et al. (2016).

4 Night-Shift Scheduling System for Residents Next, we show a case study of the night-shift scheduling system for residents at Aichi Medical University Hospital developed by our research group. The organization managing the scheduling in the hospital is the postgraduate clinical training center of Aichi Medical University Hospital, and the night-shift scheduling for residents introduced here is an example of its management. The scheduling problem is to determine both the night shifts on weekdays and the daytime and night shifts on weekends and holidays. The current way of scheduling has many drawbacks. For example, the schedule planner makes the schedule by hand and spends 24 h per month on it. In spite of this effort, there are two main conditions that cannot be taken into account and are not satisfied in the schedule. First, the night shifts on weekends and holidays are not distributed equally among the residents: one resident may have to work three or more weekend or holiday shifts while another works only one. Second, the night shifts on weekdays are imbalanced among the residents: one resident may have to work the Friday night shift three times while another has no Friday night shift at all. As a result, the residents adjust the schedule among themselves on an individual basis. The goals of this case study are therefore to generate schedules that increase the satisfaction of the residents and to reduce the workload of the schedule planner in creating them. We formulated the problem as a 0–1 integer programming problem. The constraints considered in the formulation have weights that express their priorities, but these priorities were not clear to the schedule planner, so we use AHP to clarify them. We implemented a night-shift scheduling system for residents whose interface looks like Fig. 6.


Fig. 6 Interface of the residents scheduling system

The system generates the schedule automatically by clicking the buttons in the interface. The first button sets the month for which the schedule is created; it displays the calendar for that month and the department assigned to it, because residents rotate through different departments monthly. The second button sets each resident's off-duty days, which are decided beforehand; after clicking the button, an "×" mark is entered. The third button sets the data such as the calendar and the residents' names. The fourth button generates a schedule using CPLEX by solving the night-shift scheduling problem for residents formulated as a 0–1 integer programming problem. The fifth button generates the output; after clicking these buttons, the allocation of duties is shown in the table.

We now show the formulation of the scheduling problem as a 0–1 integer programming problem. We present the objective function; the details of the constraints are given in Ito et al. (2017). The objective function is the minimization of a weighted sum of the variables p_l, l = 1, ..., 5, where p_l represents the violation of soft constraint l. The soft constraints have the following meanings:

p_1: off-duty days of the residents
p_2, p_3: balance of annual night shifts for each resident
p_4: balance of night shifts on weekends and holidays
p_5: balance of night shifts on weekdays

The objective function is

\min.\; \alpha p_1 + \beta(p_2 + p_3) + \gamma p_4 + \delta p_5 \qquad (2)

To determine the values of the parameters α, β, γ, δ, we use the Analytic Hierarchy Process (AHP). Figure 7 shows the hierarchy diagram; the goal of the AHP is to decide the priority of the constraints. We assume four evaluation criteria: the first is the job, i.e., the hardness of the job content on the previous day, the day itself, and the next day; the second is health, such as sleep and overwork; the third is salary, such as overtime payment and holiday work payment; and the fourth is private life, such as travel and hobbies. For the second level of the hierarchy, we set the four constraints represented in the formulation.

Fig. 7 Hierarchy diagram of the AHP: overall goal "decide the priority of constraints"; evaluation criteria Job, Health, Salary, and Private; alternatives constraint (1) off-duty shifts (priority 0.46), constraint (2) number of night shifts (0.21), constraint (3) night shifts on weekends and holidays (0.18), and constraint (4) night shifts on weekdays (0.14)

As a result, we recognized that p_1 has the highest priority; the numbers in Fig. 7 represent the priorities. This result is contrary to the knowledge that the schedule planner had been using for the scheduling. Based on the new priorities, we obtain a better schedule for the residents. Using the system, we reduce the workload of creating the schedules: the set-up time is 15 min and the CPU time is 5 s. Furthermore, we can take into account the two conditions mentioned before, which could not be handled by manual scheduling: first, the weekend and holiday night shifts are equal for all residents; second, the weekday night shifts are balanced. The system is in trial use in the hospital and will be used as part of the hospital's information system.
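The AHP step itself is easy to sketch in code. The pairwise comparison matrix below is made up for illustration (it is not the judgments used in the case study), so the resulting weights are only indicative of the mechanism; the chapter's weights in Fig. 7 came from the planner's actual comparisons.

```python
# Derive priority weights for the four criteria (Job, Health, Salary, Private)
# from a pairwise comparison matrix via the principal eigenvector (power iteration).

A = [
    [1.0, 3.0, 3.0, 3.0],   # Job compared with (Job, Health, Salary, Private)
    [1 / 3, 1.0, 1.0, 2.0],  # Health
    [1 / 3, 1.0, 1.0, 1.0],  # Salary
    [1 / 3, 1 / 2, 1.0, 1.0],  # Private
]

def ahp_weights(matrix, iters=100):
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):                       # power iteration
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w_new)
        w = [v / s for v in w_new]
    return w

if __name__ == "__main__":
    for name, weight in zip(["Job", "Health", "Salary", "Private"], ahp_weights(A)):
        print(f"{name}: {weight:.2f}")
```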

5 Concluding Remarks We have introduced three topics in hospital management that we have addressed over the past five years. The first is the calculation of the distribution function of patients' waiting times, which is used to design the appointment system of a hospital department. We can reduce the waiting time of patients by choosing the number of patients assigned to a specific appointment time and the number of doctors in a department on the basis of the waiting-time probabilities estimated from our result.


The second topic is operating room scheduling. The system we implemented will increase the effectiveness of operating room management, which benefits the hospital both in the quality of treatment and financially. This year we are carrying out a joint project with a major developer of operating room systems to include our system in their product; in the project we are rewriting the code in Python and using the free optimization software MIPCL. The third topic is resident scheduling, which concerns the quality of the residents' on-the-job training. Our system will increase the quality of the training and, as a result, more residents will apply to the hospital for training, the reputation of the hospital will improve, and more good residents will join the hospital.

References

Ichihara H, Suzuki A, Miura H (2020) On the distribution of the waiting time of the patient. Trans Math Model Appl 13(1):23–37
Ito M, Ohnishi A, Suzuki A et al (2017) The resident scheduling problem. J Jpn Ind Manag Assoc 68(4E):259–272
Ito M, Suzuki A, Fujiwara Y (2016) A prototype operating room scheduling system—a case study at Aichi Medical University Hospital. J Jpn Ind Manag Assoc 67(2E):202–214

A Novel Approach to Estimate the Intrinsic Dimension of fMRI Data Using Independent Component Analysis Rajesh Ranjan Nandy

1 Introduction

Estimating the true dimension of multivariate data in the presence of noise has always been an interesting problem with a wide range of applications in real life. The primary motivation for estimating the true dimension is data reduction, in order to be able to separate true signal from noise. In the absence of noise, the problem is trivial, since the true dimension can be obtained simply by evaluating the rank of a matrix. In the presence of noise, though, the data are always full rank and the true dimension has to be estimated. There has been an extensive body of work addressing this problem, primarily using information-theoretic approaches, for signal processing in general as well as specifically for fMRI data (Wax and Kailath 1985, Minka 2000, Beckmann and Smith 2004, Cordes and Nandy 2006, Li et al. 2007, Ulfarsson and Solo 2008, Yourganov et al. 2011). However, almost all the existing methods of dimensionality estimation rely on the assumption that the signal and noise are both normally distributed and that the noise is white. For correlated noise, one has to specify the covariance structure of the noise. Cordes and Nandy (2006) partially addressed the problem, but their approach is limited to a simple AR(1) model. In general, these limitations are not necessarily prohibitive and, often, the noise covariance structure can be reasonably estimated separately if data with pure noise can be obtained. However, these assumptions do pose several challenges when applied to fMRI data. First, the noise in fMRI data is strongly correlated, with a very complex autocorrelation structure that cannot be modeled with simple autoregressive structures. Uncorrected information-theoretic methods are in general moderately robust to the violation of the white noise assumption, provided the signal-to-noise ratio is not too low or the correlation structure is not too complex.


Unfortunately, in most cases, for fMRI data, neither does the noise have a simple structure nor is the signal strong enough to compensate for the complex correlation structure. In these circumstances, we will show that when the popular information-theoretic methods are implemented without any proper adjustment for the correlation, the estimates can be heavily biased in the positive direction. Furthermore, as the dimension of the noise in the observed data grows without increasing the true dimension of the signal, the estimates of the true dimension tend to grow as well. For example, we have consistently observed that in real resting-state fMRI data, a common characteristic of each of these methods is to provide dimensionality estimates that grow almost linearly with the number of time frames used in the data (Cordes and Nandy 2006). Although the true dimension may increase when more data are acquired in time, it is unlikely that the dimension will grow at such a large rate. Simply arguing that "the brain is doing more" when the scanning time is doubled does not provide a sufficient explanation of the fact that almost twice the number of dimensions are obtained. This effect is actually quite strong with the BIC approach and its Laplace approximation as implemented in the MELODIC program of the popular FSL package. Yourganov et al. (2011) also reported this observation in a recent article. However, they did not observe this effect using the MDL approach as implemented in the GIFT package (Li et al. 2007). They correctly noted that this could be due to the use of subsampled data in the GIFT package. We agree that the problem is less severe with the GIFT package, but it will be demonstrated later that the stability is more a consequence of discarding a large number of voxels than of the MDL procedure. The subsampling approach is implemented in the GIFT package to address the problem of spatial correlation in fMRI data, which is more pronounced when the data are spatially smooth. Li et al. (2007), in their subsampling approach, try to identify a smaller subset of voxels that can be treated as independent by calculating the entropy of the subsampled data for different levels of subsampling. This approach is implemented in the GIFT package, and it tends to provide more stable estimates of the intrinsic dimensionality. However, there is a strong trade-off involved here, since we are discarding a large amount (typically 75%) of valid data and hence run the risk of missing true components. In general, we have mixed feelings about the sub-sampling approach, and our proposed method can be used with or without sub-sampling.

Finally, the most popular use of dimension estimation in fMRI data is to provide the reduced data for ICA. ICA works only if all the sources, except possibly one, do not follow a Gaussian distribution. Hence, the assumption of a Gaussian distribution for the signal in the information-theoretic approaches is clearly violated. However, unlike the violation of the white noise assumption, we have observed empirically that the information-theoretic methods are quite robust to this violation. Our extensive simulation results also confirm this robustness. A similar observation was made by Beckmann and Smith (2004). The method that we propose actually makes use of the fact that the sources are non-Gaussian. We begin with a guess value of the true dimension and estimate those independent source components by implementing the fastICA algorithm (Hyvärinen 1999). The residual is then used to estimate the noise covariance structure, which is in turn used to partially whiten the noise.


Usual dimension estimation techniques can then be used on the partially whitened data. It will be shown that the approach is extremely robust to the choice of the initial guess value for the dimension.

2 Theory and Methods

2.1 The Noisy Mixing Model

Since, even in the absence of any true dynamic signal in the brain, the static brain image remains, we prefer to subtract the mean brain image from the time-series data. We then also remove the mean over all voxels at each time point, so that the mean of each signal over all voxels is zero. The noisy mixture can then be written as

$$x_i = A s_i + \varepsilon_i, \quad i = 1, \ldots, N, \tag{1}$$

where x_i is the observed time course at voxel i with T time points, A is the T × p dimensional mixing matrix, s_i is the signal vector (zero mean) with p components at voxel i, ε_i is the noise vector that follows a multivariate Gaussian distribution with zero mean and covariance matrix Σ, and N is the number of voxels. Due to the presence of noise, the observation matrix X = [x_1 ··· x_N] is always of full rank T, irrespective of the number of components of the signal. If T > p, which is reasonable to assume for fMRI data, the dimension of the full data set is larger than the number of "true" biological components.

2.2 Estimation of Σ Using ICA

As mentioned previously, the primary problem with the use of standard model order selection procedures is the presence of correlated noise. If the structure of the covariance matrix is known, one can simply whiten the noise and then implement standard procedures on the transformed data. Unfortunately, for fMRI data, not only is the structure unknown, but even most autoregressive moving average (ARMA) models are inadequate to identify the correct structure of the noise covariance matrix. Hence, we propose a very flexible noise covariance structure, which requires only that the noise is stationary and that the temporal autocorrelation is non-negative and decreasing for increasing values of the lag. In other words, the temporal autocorrelation of the noise depends only on the lag between two time points, is non-negative, and gets weaker as the temporal lag increases. One can relax some of these requirements, but the method implemented in this article satisfies them. Hence, the noise correlation structure has the following form:




$$
\begin{bmatrix}
1 & \rho_1 & \rho_2 & \cdots & \rho_{T-1} \\
\rho_1 & 1 & \rho_1 & \cdots & \rho_{T-2} \\
\vdots & & \ddots & & \vdots \\
\rho_{T-1} & \rho_{T-2} & \cdots & \rho_1 & 1
\end{bmatrix} \tag{2}
$$

where 1 ≥ ρ_1 ≥ ρ_2 ≥ ··· ≥ ρ_{T−1} ≥ 0. Hence, we need to estimate the parameters ρ_1, ρ_2, ..., ρ_{T−1}. We propose to estimate these parameters by implementing ICA on the observed data, the assumption being that the true signals do not follow a Gaussian distribution. We need to start with a guess value for the true dimension of the data. The idea is that once we have estimated the sources corresponding to the initial guess value for the dimension, we can separate the signal and the noise, in the form of the residual, from the observed data, and can then obtain an estimate of the noise covariance from the observed residuals. Of course, this separation depends on the initial guess value. Intuitively, it would appear that if the initial guess is too low, some of the true signal will be identified as noise and, post-whitening, we may get an underestimate of the true dimension. Similarly, if the initial guess is too high, some of the noise will be identified as true signal and the whitening may not work very well. In that sense, rather than missing some true signal, it may be preferable to choose higher values for the initial guess. While this intuition is not entirely wrong, it will be demonstrated in the results section that the method is remarkably robust to this initial choice.

Formally, let the initial guess for the dimensionality be p. We then run fastICA on the observed data X = [x_1 ··· x_N] and obtain the first p components without any initial dimension reduction, by retaining all the eigenvalues. Let $\tilde{A}$ be the estimated T × p dimensional mixing matrix and $\tilde{S} = [\tilde{s}_1 \cdots \tilde{s}_N]$ be the estimated sources for the N voxels. Observe that each $\tilde{s}_i$ is the signal vector with p components at voxel i. Then the observed residual can be written as

$$E = [e_1 \cdots e_N] = [x_1 \cdots x_N] - \tilde{A}\,[\tilde{s}_1 \cdots \tilde{s}_N]. \tag{3}$$



We also denote by Σ and R, respectively, the sample residual covariance and correlation matrices, with (i, j)th entries denoted by σ_ij and r_ij. To comply with the stationarity and the other conditions imposed on the noise correlation, we define

$$r(k) = \frac{\sum_{i-j=k} r_{ij}}{T-k}, \tag{4}$$

where r(k) is an estimate for the temporal correlation of the noise at lag k. We then force all negative r(k)'s to be zero and redefine each r(k) as the minimum of its current value and r(k − 1). Finally, we define a correlation matrix $\tilde{R}$ such that its (i, j)th entry is r(|i − j|). The matrix $\tilde{R}$ then satisfies all the conditions imposed on the true


noise correlation matrix. We can then obtain the whitening stationary covariance matrix $\tilde{\Sigma}$ with entries $\tilde{\sigma}_{ij}$ defined as

$$\tilde{\sigma}_{ij} = r(|i-j|)\,\sqrt{\sigma_{ii}\,\sigma_{jj}}. \tag{5}$$

The transformed data now have the form

$$Y = [y_1 \cdots y_N] = \tilde{\Sigma}^{-1/2}\,[x_1 \cdots x_N], \tag{6}$$

and we implement the usual model order selection methods on Y.
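To summarize the procedure in Eqs. (3)–(6), the following is a minimal sketch of the residual-based whitening step. It assumes NumPy, SciPy and scikit-learn's FastICA (version 1.1 or later for the whiten="unit-variance" option) as stand-ins for the authors' implementation; the function name, the use of scikit-learn, and the eigenvalue-based matrix square root are illustrative choices, not the authors' code.

# Sketch of the residual-based partial whitening (Eqs. 3-6), under the stated
# assumptions: X has shape (T, N), with one time course per column (voxel).
import numpy as np
from scipy.linalg import toeplitz
from sklearn.decomposition import FastICA


def partially_whiten(X, p_guess):
    T, N = X.shape

    # Eq. (3): estimate p_guess ICA sources and form the residual E = X - A~ S~.
    # (scikit-learn's FastICA performs an internal PCA reduction to p_guess
    # components; the paper instead retains all the eigenvalues.)
    ica = FastICA(n_components=p_guess, whiten="unit-variance", random_state=0)
    S = ica.fit_transform(X.T)                        # (N, p_guess) estimated sources
    X_hat = ica.mixing_ @ S.T + ica.mean_[:, None]    # reconstruction, shape (T, N)
    E = X - X_hat

    # Sample residual covariance and correlation over voxels (T x T).
    Sigma_res = np.cov(E)
    d = np.sqrt(np.diag(Sigma_res))
    R_res = Sigma_res / np.outer(d, d)

    # Eq. (4): average the correlations along each diagonal (lag k), then
    # enforce non-negativity and monotone decay in the lag.
    r = np.array([np.mean(np.diag(R_res, k)) for k in range(T)])
    r = np.maximum(r, 0.0)
    r = np.minimum.accumulate(r)

    # Eq. (5): stationary (Toeplitz) whitening covariance matrix.
    Sigma_tilde = toeplitz(r) * np.outer(d, d)

    # Eq. (6): Y = Sigma_tilde^{-1/2} X via an eigenvalue decomposition.
    w, V = np.linalg.eigh(Sigma_tilde)
    inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T
    return inv_sqrt @ X


# The usual model order selection methods (eigenspectrum, BIC, MDL) are then
# applied to the returned array in place of the raw data.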

2.3 Imaging and Data

The data generation for the simulations is described in the "Results" section. For the real data, fMRI was performed at the Radiology Imaging Science Center at the University of Washington (UW) on a commercial 1.5 T G.E. (General Electric, Waukesha, WI) MRI scanner equipped with echospeed gradients to allow echo-planar BOLD contrast acquisition. We collected two different resting-state datasets. The first dataset was collected from a healthy male subject with a fast repetition time (TR) of 400 ms, FOV 24 cm × 24 cm, slice thickness 6 mm, gap 1 mm, 4 axial slices covering the central part of the brain, and a 64 × 64 imaging matrix; 2415 scans were collected. The second dataset was also collected from a healthy male subject with a repetition time (TR) of 2 s, FOV 24 cm × 24 cm, slice thickness 6 mm, gap 1 mm, 20 axial slices covering the whole brain, and a 64 × 64 imaging matrix; 320 scans were collected. We dropped the first few scans of each dataset to ensure stability of the acquired signal. For both datasets, no temporal or spatial preprocessing was performed except applying standard masking procedures to eliminate voxels outside the brain.

3 Results

3.1 Simulated Data

Although the motivation behind the proposed new approach is the failure of the existing methods to address or model the complex correlation structure of the noise in fMRI data, it is useful to apply the proposed method to simulated data that incorporate non-Gaussian signals and noise with higher-order autocorrelation and moving average structure. We will compare the results from our proposed method with the MDL and BIC approaches, which seem to work best for fMRI data among existing


methods. We first demonstrate the effect of correlated noise on the model order selection for two different types of noise structures with varying noise strength. For our simulations, we chose N = 5000, p = 30 and T = 150. Each of the independent sources is super-Gaussian and generated from an exponential distribution with parameter 1. The sign of each observation for each source is then randomly assigned as positive or negative with probability 0.5, which ensures that the expected value of each source is zero. The entries of the T × p dimensional mixing matrix A are drawn randomly from a Uniform distribution on the interval [0, 1]. Finally, the observed data X = [x_1 ··· x_N] are obtained by adding noise with the specified structure and strength to the mixed signal As. We first use AR(1) noise with AR parameter 0.35 and variance 1. In Fig. 1, we present the eigenspectrum, BIC, and MDL estimators of the dimensionality for various noise scale factors.
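For concreteness, a minimal sketch of this simulated-data generation is given below, under the assumptions just stated (exponential sources with random signs, a uniform mixing matrix, AR(1) noise rescaled to unit variance). It is an illustration of the described setup, not the authors' actual simulation code.

# Sketch of the simulation setup described above: super-Gaussian sources,
# a uniform random mixing matrix, and AR(1) noise standardized to variance 1.
import numpy as np

rng = np.random.default_rng(0)
N, p, T = 5000, 30, 150
ar_coef, noise_scale = 0.35, 2.0        # the noise scale factor is varied in the paper

# Super-Gaussian sources: exponential(1) magnitudes with random +/- signs.
S = rng.exponential(1.0, size=(p, N)) * rng.choice([-1.0, 1.0], size=(p, N))

# Mixing matrix with entries uniform on [0, 1]; mixed signal is A S (T x N).
A = rng.uniform(0.0, 1.0, size=(T, p))
signal = A @ S

# AR(1) noise per voxel, started at its stationary distribution and then
# rescaled so that the stationary variance is 1.
eps = rng.standard_normal((T, N))
noise = np.zeros((T, N))
noise[0] = eps[0] / np.sqrt(1.0 - ar_coef**2)
for t in range(1, T):
    noise[t] = ar_coef * noise[t - 1] + eps[t]
noise *= np.sqrt(1.0 - ar_coef**2)      # stationary variance of AR(1) is 1/(1 - phi^2)

X = signal + noise_scale * noise        # observed data, shape (T, N)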


Fig. 1 Performance of eigenspectrum, BIC and MDL in the identification of true dimension of multivariate data with AR(1) noise (AR parameter 0.35) for different noise scale factors. In row 1, results are presented for no ICA pre-whitening. In rows 2–4, results are presented for pre-whitened data with 50, 70 and 90 ICA components, respectively


Only the relevant portions of the graphs are displayed, in transformed scales, for clarity. Note that we have used the adjusted eigenspectrum to control the spread of the eigenspectrum in a finite sample (Beckmann and Smith 2004). It is quite evident that even for simple AR(1) noise, without any pre-whitening using ICA components (Fig. 1, first row), all the dimension estimation procedures fail for noise scale factors larger than 2. In fact, for noise scale factor 3, the visual dimension estimate from the eigenspectrum is quite accurate, but not the BIC or MDL estimates. For noise scale factor 4, the dimension is simply not estimable. Nevertheless, for data whitened using the noise correlation estimated from the residuals by the method described in the previous section, the estimation works remarkably well. In fact, it is extremely robust to the initial guess of the number of dimensions. For a very high noise scale factor, the MDL and BIC estimates tend to be smaller than the true value, but the eigenspectrum still does a remarkable job even for the highest noise scale factor of 4.


Fig. 2 Performance of eigenspectrum, BIC and MDL in the identification of true dimension of multivariate data with ARMA noise (AR parameters 0.35, 0.2 and MA parameters 0.2) for different noise scale factors. In row 1, results are presented for no ICA pre-whitening. In rows 2–4, results are presented for pre-whitened data with 50, 70 and 90 ICA components, respectively


In Fig. 2, we performed the same simulation but with a more complex ARMA correlation structure, with AR parameters 0.35 and 0.2 and MA parameter 0.2. As before, we used noise scale factors 1, 2 and 3. The noise is again standardized to variance 1. This time, without any pre-whitening, all the methods fail beyond noise scale factor 1. But for the whitened data, we observe the same estimation accuracy as with the simpler AR(1) model. Once again, the estimation is robust to the initial guess of the number of dimensions. We have performed similar simulations with even higher-order ARMA models and observed very similar results. To avoid repetition, those results are not presented here.

3.2 Real Data

Although the results from the simulated data provide strong evidence of the improvements offered by the proposed method, we need to apply the method to real fMRI data to assess its performance. Unlike the simulated data, there is no direct way to make that assessment, since the ground truth is unknown. However, as has been noted before, the dimension estimates from real fMRI data tend to grow as the number of fMRI scans increases. Part of this can be explained by the fact that not all underlying processes in the brain, which serve as sources of the observed signal, may be active all the time. Hence, for longer scans, a few extra components are expected. However, the presence of too many additional components indicates a possible failure of the estimation process. Keeping this in mind, we used the first dataset (described earlier) with a very small TR of 400 ms and a large number of acquisitions. For a small TR, we expect the correlation to be stronger, and hence the dataset should be useful for studying the effectiveness of the model order selection procedures with a complex correlation structure for the noise. With a large number of acquisitions, we can assess the stability of the dimension estimates by dropping a large number of acquisitions. The results are presented in Fig. 3. In the first row, the results are shown when no pre-whitening is performed, for different numbers of retained time points. In the second, third and fourth rows, we present the corresponding results for pre-whitening using 50, 70 and 90 ICA components, respectively. For each of these four cases, in the first column all the time points are retained, followed by 1000, 100 and 50 time points in the second, third and fourth columns, respectively. There is no result with 50 time points for the third and fourth rows, since more than 50 components cannot be estimated from 50 time points. Unlike the simulated case, visual inspection of the eigenspectrum plots offers no meaningful estimate of the dimension in any of the panels. It appears that for 50 time points there is not enough information for a proper estimate of the dimension, as the extreme value is attained at the boundary 50. For all other cases, both BIC and MDL provide the same estimates. However, for the analysis without pre-whitening, the estimated dimensions are 36, 29 and 19 for all, 1000 and 100 retained time points, respectively, indicating a steady drop in the estimate with a reduction in the number of retained time points. For data whitened using the noise correlation estimated from the residuals, the corresponding estimates are 26, 21 and 19, respectively. Clearly, the estimates are smaller and more stable compared to


Fig. 3 Performance of eigenspectrum, BIC and MDL in the identification of true dimension of fMRI data with TR 400 ms from 4 axial slices covering the central part of the brain. 2415 scans were collected. In row 1, results are presented for no ICA pre-whitening. In rows 2–4, results are presented for pre-whitened data with 50, 70 and 90 ICA components, respectively

when no pre-whitening is performed. Furthermore, as in the simulated case, the results are extremely robust to the initial guess of the number of ICA components. Although the first real example clearly indicates an improvement with the new ICA-based approach, due to the small TR only four slices were acquired, with approximately 5000 voxels retained inside the brain. In the second dataset, 20 slices were acquired for the whole brain, with approximately 20,000 voxels retained. The correlation for TR = 2 s is weaker compared to the data with TR = 400 ms, but the dataset is still useful for evaluating the performances. The results are presented in Fig. 4 for no pre-whitening and for whitening using 50 ICA components. The results are no different for other choices of the number of ICA components but are not shown to avoid replication. Here too we study the behavior of the estimates for different numbers of retained time points. In the first column, all the time points are retained, followed by 200, 150 and 100 time points in the second, third and fourth columns, respectively.


Fig. 4 Performance of eigenspectrum, BIC and MDL in the identification of true dimension of fMRI data with TR 2 s from 20 axial slices covering the whole brain. 320 scans were collected. In row 1, results are presented for no ICA pre-whitening. In row 2, results are presented for pre-whitened data with 50 ICA components

As in the first example, the methods fail for 50 time points, so those results are not presented. For the analysis without pre-whitening, the estimated dimensions are 59, 53, 49 and 48 for all, 200, 150 and 100 retained time points, respectively, again indicating a drop in the estimate with a reduction in the number of retained time points. For data whitened using the noise correlation estimated from the residuals, the corresponding estimates are 53, 51, 49 and 48, respectively, which appear more stable compared with their counterparts. Finally, we provide the results for the same dataset after performing sub-sampling (as implemented in the GIFT package), which addresses the issue of spatial correlation. The idea is to use a spatially separated sub-sample of all the voxels that are statistically independent, as estimated from an entropy calculation. Approximately 2200 voxels are retained after the sub-sampling procedure is implemented. The results are presented in Fig. 5 for no pre-whitening and for whitening using 50 ICA components. For the analysis without pre-whitening, the estimated dimensions are 20, 19, 18 and 16, respectively. For data whitened using the noise correlation estimated from the residuals, the corresponding estimates are 16, 16, 16 and 15, respectively. Once again, the whitening provides a more stable result.


Fig. 5 Same as Fig. 4, except the fMRI data have been subsampled to reduce spatial correlation

4 Discussion

4.1 Validity and Violations of Assumptions in the Proposed Method

As in any statistical analysis of fMRI data, we have made a few assumptions in our proposed method. The most critical assumption in our model is the stationarity of the noise. It has been reported that, under certain situations, fMRI noise can be non-stationary. Although, due to space constraints, we could only report results from two real fMRI datasets, we have tried our method on a wide range of datasets (both resting and activation) and the results were mostly similar to what is reported in the results section. However, on a few occasions we did find non-stationary behavior in the residual correlation matrix. In such situations, it is preferable to avoid any whitening. Modifying the proposed method for non-stationary data is currently a work in progress.

4.2 Initial Choice for the Number of Independent Components

In the proposed method, it is necessary to make an initial guess for the number of independent components. This could have been a potential problem if the initial guess had a significant impact on the final dimension estimate. Fortunately, as is evident from the results for simulated and real data, the estimates are remarkably consistent over a wide range of values for the initial choice of the dimension. This can


be explained by the fact that the objective of performing the ICA is to obtain an estimate of the correlation of the stationary noise at different time lags. The initial choice for the number of components does have an impact on the rank of the residual matrix and of its covariance matrix. For data with T time points and an initial guess l for the number of components, the rank of the residual matrix, as well as of its correlation matrix, is T − l. But with stationarity imposed, only T − 1 parameters have to be estimated for the T − 1 time lags, and hence the choice of the initial guess l has little effect on these T − 1 parameters unless l is very close to T.
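The rank statement above can be checked numerically with a small experiment of the following kind; the snippet is an illustrative sanity check under the stated assumptions (random data, scikit-learn's FastICA), not an experiment reported in the paper.

# Quick numerical check of the rank argument: after removing l estimated ICA
# components, the T x N residual matrix has rank approximately T - l.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
T, N, l = 60, 2000, 20
X = rng.standard_normal((T, N))

ica = FastICA(n_components=l, whiten="unit-variance", random_state=0)
S = ica.fit_transform(X.T)                              # (N, l) estimated sources
residual = X - (ica.mixing_ @ S.T + ica.mean_[:, None])

# tol discards singular values that are only numerical noise
print(np.linalg.matrix_rank(residual, tol=1e-6))        # expected: about T - l = 40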

4.3 Sub-sampling for Identification of the Effectively I.I.D. Samples

Li et al. proposed the use of the Entropy Rate Matching Principle for identification of the effectively i.i.d. samples used to sub-sample the whole-brain fMRI data as the first step of model order selection. The motivation for subsampling is the fact that the negative log-likelihood function has larger values for spatially correlated voxels and hence the penalty term for a larger model is not strong enough, which leads to an over-estimation of the dimension. Although our method can be implemented with or without sub-sampling, we have a few reservations about the use of sub-sampling. First, the distribution of a source over all the voxels is not truly random. Any component corresponding to a specific brain function is heavily skewed toward the small fraction of voxels inside the brain responsible for that function, and these same voxels will have higher values for the same component in a repeat fMRI experiment. While this does not pose a serious problem for the implementation of the ICA algorithm, it may prove quite detrimental for dimension estimation. In the subsampling approach, up to 85% of the voxels may be discarded, and the discarded voxels may well include a large number of the voxels that are responsible for a particular component. This may lead to severe underestimation of the number of components. More importantly, it is clear from the results that sub-sampling results in a significant reduction in the dimension estimate. But it is not clear whether the reduction in the number of estimated components is due to a gain in accuracy from the selection of effectively i.i.d. samples or simply due to the reduction in the number of voxels. We have performed a simple experiment on the second real dataset in the results section. As described earlier, the complete dataset had approximately 20,000 voxels and the sub-sampled data had approximately 2200 spatially separated, effectively i.i.d. samples. As a test, we also applied the dimension estimation procedure (with and without whitening) to 2200 contiguous voxels in the same brain, which are clearly not effectively i.i.d. However, the dimension estimates are extremely close to what we obtain from the effectively i.i.d. samples. This indicates that the sub-sampling approach, while theoretically appealing, can severely underestimate the number of true components.


5 Conclusion

We have presented a novel scheme for the estimation of the true dimension of the signals in fMRI data. The proposed methods are easy to implement and are not computationally costly. While the method has its limitations, primarily for non-stationary noise, it still provides a major improvement over existing methods. We believe that the new methods will be beneficial to the fMRI community.

References

Beckmann CF, Smith SM (2004) Probabilistic independent component analysis for functional medical imaging. IEEE Trans Med Imaging 23(2):137–152
Cordes D, Nandy R (2006) Estimation of intrinsic dimensionality of fMRI data. Neuroimage 29:145–154
Hyvärinen A (1999) Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans Neural Netw 10(3):626–634
Li YO, Adali T, Calhoun VD (2007) Estimating the number of independent components for functional magnetic resonance imaging data. Hum Brain Mapp 28:1251–1266
Minka TP (2000) Automatic choice of dimensionality for PCA. Technical Report 514, MIT Media Laboratory, Perceptual Computing Section
Ulfarsson MO, Solo V (2008) Dimension estimation in noisy PCA with SURE and random matrix theory. IEEE Trans Signal Process 56(12):5804–5816
Wax M, Kailath T (1985) Detection of signals by information theoretic criteria. IEEE Trans Acoust Speech Signal Process ASSP-33(2):387–392
Yourganov G, Chen X, Lukic AS, Grady CL, Small SL, Wernick MN, Strother SC (2011) Dimensionality estimation for optimal detection of functional networks in BOLD fMRI data. Neuroimage 56:531–543

Data Science, Decision Theory and Strategic Management: A Tale of Disruption and Some Thoughts About What Could Come Next Federico Brunetti

1 Introduction

The Conference deals with three highly relevant topics: Strategic Management (SM), Decision Theory (DT) and Data Science (DS). SM, DT and DS are actually among the most important topics in their respective domains: strategy and Strategic Management in business management, decision-making and Decision Theory in economics and psychology, Data Science in information technology. Moreover, each of them is a distinct and important field of research. No doubt they are lively fields of research, each encompassing a wide range of topics. They are fields of research with their own communities of scholars, departments, journals and conferences, and each of SM, DT and DS could easily be the subject matter of a single, ad hoc conference. However, they are interrelated fields too, because forging strategies implies taking decisions, and decisions are based on data. So, SM, DT and DS are not marginally or accidentally linked together, and it makes perfect sense to consider them jointly. This paper therefore intends to take a comprehensive and sufficiently far-sighted perspective on the pillar themes of the Conference. Rather than discussing this or that individual topic, it deliberately tries to link these themes together. In doing so, it especially highlights some of the effects the revolution in data has on decision-making and, subsequently, on business strategies. While specialization in research is fundamental, and knowledge develops in increasingly specific directions, here a non-specialized approach is followed. Reductionism and fragmentation are at the very base of the scientific endeavor, but sometimes it is worth assuming a broader view, getting out of a siloed vision. When one walks on the ground it is possible to see many things and notice details, but it is only when



one is in a higher position that other things become clear or can even be seen. Details are important, but the big picture also deserves some attention at times. Besides the reasons of interest lying in a comprehensive perspective, one more thing has to be considered. A great change has occurred, and is still occurring, in the quantity and quality of data available to decision-makers in general and, obviously, to company strategists. Data, now big data, are an essential part of every human activity and, even more importantly, of every business context. More in detail, the purpose of the paper is, first, to examine the (r)evolution that occurred in SM, DT and DS; then, to discuss the effects of the leap that happened in the process by which data are linked to decisions and these, in turn, to strategies; and, finally, to develop some reflections about the challenges posed by the power of data and about the relationship between (digital) technologies and business management.

2 Changes in Individual Fields

As the old adage says, the only constant in our world is change. So, acknowledging the role of change is more or less like reading yesterday's paper. In recent times, however, the pace of change has undoubtedly increased, and it can now be appropriately said not only that the times they are changing, but that we live in truly turbulent times. The practical domains of data, decisions and strategies—and their corresponding theoretical domains—do not escape this trend. DS, DT and SM are experiencing profound changes that considerably alter what was deemed true in the past. In general terms, data moved from a situation of scarcity, fragmentation and poor quality to a situation of abundance, extreme richness and structure; Decision Theory moved from Olympic rationality and certainty to bounded rationality, uncertainty and cognitive biases; and Strategic Management moved from a clear and straightforward course of action to an open, incremental and many "best ways" process.

With respect to data, what first emerges is a shift from scarcity to abundance. It is a matter of quantity. Every phenomenon is now measured, recorded and stored, thus generating big amounts of data. The quantity of data available is now so huge that it often exceeds processing capacity and capability. It is now common to read or hear of information overload, meant as the situation where there are simply too many data. Second comes a matter of quality. Data underwent a shift from poor quality to richness. For every phenomenon, many different data (by kind, by format, by source) are available. The understanding of any phenomenon that can be reached is much deeper and more comprehensive. Third, data moved from fragmentation to structure. Here, a matter of connection is in place. People, objects and systems are now linked together, and data from multiple sources are increasingly gathered, stored and structured together. Connection is at the very base of understanding and, as a natural consequence, intelligence arises. Intelligence, in fact, is the capacity of reading (lègere) inside (intus) something, and the more the connection, the more the insight.


With respect to Decision Theory, the changes are no less significant. In the beginning, DT was fully comprised within Olympic rationality. Over time, several streams of research questioned Olympic rationality and its assumptions, and alternative theoretical approaches emerged: Bounded rationality, Behavioral economics, Ecological rationality. Olympic rationality was the paradigm in DT for many years. It is based on the maximization of Subjective Expected Utility (SEU). The decision-maker has his own utility function, and the decision he takes (or should take) is "simply" the highest outcome of a calculus in which all the variables are known. Although it was long considered a description of decision processes, it later became clear that it was rather something of a prescriptive kind. Olympic rationality, far from providing a representation of what was actually going on during decision processes, offered a framework for taking rational decisions according to some basic and quantified assumptions. It was Herbert Simon who, in the mid-1950s, questioned Olympic rationality, suggesting that real decision-makers were instead applying a bounded rationality. In this approach, the assumptions underpinning SEU are completely subverted, and Simon provided evidence that the decision-maker suffers from limitations of thinking capacity, available information, time constraints and pressure, and emotional interferences (Simon 2013). The bounded rationality approach—which granted a Nobel prize in Economics to its founder—"simply" introduced the idea that—as human beings, made of flesh and blood—people are satisficers, not optimizers. People in their ordinary behavior just choose and want what is good enough for them; they are absolutely not interested in performing the whole set of complex calculations needed to maximize the utility function, supposedly (even if wrongly) meant as the guiding driver of human behavior.

Later, Behavioral economics took the stage in DT as a better representation of real decision processes, accurately describing the way people actually behave rather than the way they should act. Within Behavioral economics, Prospect Theory changed the way we look at the very tenets of SEU: the utility function and the weighting of probabilities (Eiser and van der Pligt 2015). Preferences are not absolute and detached from the context, but depend on a reference point they are compared with; people are differently sensitive to gains and losses, namely they suffer from losses more than they enjoy gains of the same amount. Moreover, people are not good at handling probabilities, and they show a—not completely rational, but perfectly human—preference for complete certainty (or uncertainty). For real people—unless something is definitely sure or definitely impossible—a 20% chance, for instance, is not that different from an 80% chance, even if in purely numerical terms it is four times higher! In parallel, research streams on Heuristics and Biases uncovered inconsistencies, fallacies and mistakes affecting every decision-maker (Ariely and Jones 2008). There is no space here to analyze the whole range of heuristics and biases; suffice it to say that there are indeed many of them and they affect pretty much every stage of the decision-making process. In short, thanks to Behavioral economics, the representation we now have of decision-making processes is far more accurate and realistic. Finally, Ecological rationality takes a step further and, in a certain way, leaves the domain of rationality—be it Olympic, bounded or behavioral—to enter a pretty


different way of looking at the entire decision issue. Intuition does play a role in decision-making; intuition does not come from whim but from experience; and experience can be understood as an unconscious form of intelligence (Todd and Gigerenzer 2012). Emotions as well play a role; emotions do not just interfere with rationality; they are, and have to be, part of decision processes (Damasio 1994; LeDoux 2005). As a result, a new form of rationality emerges: fast, frugal and multidimensional. In a word, since it resembles the natural way things go, ecological.

With respect to Strategic Management, many changes have likewise taken place in terms of time horizon, strategic intention and the very nature of strategy. Theories and practices, in their mutual interdependence, moved from approaches like Long-range planning, Deliberate strategy and a focus on Analysis to approaches like Strategic Agility or Flexibility, Emergent strategy, Incrementalism and a focus on Execution. What comes out of putting them all together is that, before, the organization was stronger than its external environment, whereas afterwards the external influence on the organization is stronger (Pfeffer and Salancik 1978). In terms of time horizon, until not long ago the further a company was able to plan into the future, the more competitive it was. Since the external environment was quite stable, it made sense to try to foresee and plan accordingly, because the company could gain an advantage from defining in advance what had to be done. Long-range planning epitomized this kind of mindset and approach. Rather quickly, the environment started to become volatile, uncertain, complex and ambiguous (VUCA). Nowadays, the external environment has turned turbulent, and the long term, in the sense of foretelling and possibly controlling what is going to happen decades or at least years ahead, does not exist anymore. How can anybody plan more than some months ahead now? And, even if it were possible, would it make sense, since nearly everything is bound to change significantly? Forecasting and planning left space for flexibility and adaptation as key skills and competencies in the firm's knowledge. In terms of intention, not every strategy that is decided can be implemented as intended: emergent strategies emerge (Mintzberg and Waters 1985). Several factors come up along the way—both positive and negative—that divert the company from its strategic intention. Practitioners and scholars realized that, in the unfolding of strategy, the ex-post path may differ a lot from the ex-ante intention. Deliberate strategies gave way to emergent strategies, making it clear that not only is it difficult to manage the company over a long enough time span, but even implementing a same-year strategy can result in surprises and unexpected events. From the standpoint of the nature of strategy, strategic decision-making used to mirror the Olympic rationality model and follow a rather strong analytical approach. Collecting data, analyzing them, devising strategic alternatives, assessing them and finally choosing the best option was the common way strategy was looked at. Nowadays, such a view is in crisis. Strategy is definitely something more of a step-by-step journey than a grand master plan. An incremental approach to strategy gained momentum (Quinn 1980), and a trial-and-error mindset replaced the idea of strategy as a sequence of well-defined steps clearly defined in advance. In the same vein, execution became as important as analysis, subverting the deeply rooted idea that


mind, thinking and analysis are somehow superior to "body", acting and execution. The psychological features of decision-makers come into play. In a nutshell, strategy moved from being a carefully planned and rationality-driven process, from a procedural point of view, and a clear plan of action, from a substantive point of view, to being a mindset in procedural terms and a purpose and a vision in substantive terms. In its new conception, strategy gets closer to the ability of uncovering and creating business opportunities (blurring with entrepreneurship) and of switching quickly from one to another; in this respect, the surfer metaphor has been suggested.

3 A Giant Leap

As we have tried to show, even though in a very condensed way, each domain underwent relevant transformations over the last years, and the way SM, DT and DS look now is very different from how they looked just a few decades ago. Although everybody knows that change is a constant in our world, it is rather surprising to notice the magnitude and depth of change in the fields of DT and SM. What was held true at the beginning has now been nearly completely subverted, and the core principles in these fields are now quite the opposite of the early ones. In the evolution of our interconnected domains of interest, it is particularly interesting to look at data and the role they play, because data are the raw material for decision-making, and decision-making is at the heart of strategic management. Therefore, what is going on in the domain of data necessarily exerts a great impact both on decision theory and on strategic management. We have entered the era of the so-called digital revolution or digital transformation, in which society and the economy are heavily shaped by the opportunities ushered in by information and communication technologies. The innovations that occurred within a few decades in hardware, software, data storage, connecting technologies, the internet and the web, social media, user-generated content, artificial intelligence and the internet of things created such a new framework both for companies and for people that revolution or transformation are the only words able to capture the magnitude of such change. With respect to data, in particular, once upon a time (few) data were the input for decisions based on sound criteria that, in turn, led to a definite route to be followed by the organization. Nowadays, a multitude of data are available to entrepreneurs and managers whose decisions are (more or less knowingly) flawed, with the whole strategic process ending in simply claiming the need for adaptation. What happened is that data turned into Big Data (and intelligence turned into the artificial intelligence needed to handle them) and started playing a hegemonic role. It can be said that this is the era of data-driven everything. Data, Big Data, promise, or maybe foster the illusion, that everything can be known and controlled, leading to a growing faith in Data Science as the new panacea. Whether such faith is well placed or unreasonable falls outside the scope of this paper, but it is undeniable that today a kind of belief in data as a source of salvation is all too alive and kicking.


Big Data and Data Science took the driver's seat and ended up conquering the other domains. Keeping close to the domains most relevant to us, decision-making became Data-driven decision-making and strategic management evolved into Data-driven Strategic Management. Some call this the "Dictatorship of Data" (Cukier and Mayer-Schönberger 2013); others call it the "tyranny of metrics" (Muller 2018). While it is perfectly normal to rely on data in order to make decisions, and reasonably on as many and as good data as possible, something starts to go wrong when data no longer play an instrumental role but become the starter of the process and often the very reason to act. Some have even conceptualized the so-called Digitalization Spell: digital technologies push us to do things not because they are beneficial, but (simply) because they make them possible. Why did all this happen? Among other reasons, it is because data are, by their very nature, perceived as objective, neutral and truthful (Porter 1996). Numbers just come out of counting, numbers are the same for everyone, and numbers do not lie. Data are supposed to be indisputable, since they convey a sense of certainty (Baccarani 2005). Since data usually come in the shape of numbers, they exert a fascination, because we are all used to considering numbers real, unbiased and veritable. Someone calls this the "quantification bias" (Wang 2013). So, data stormed into DT and SM and, from an instrumental role and position, they acquired the power to shape the world of DT and SM. Data are the playbook that has to be played and, in some way, dictate behaviors.

4 Where Do We Go from Here?

Connecting the dots of the changes that occurred in DS, DT and SM, it comes out that these fields are now linked together more than they used to be, and that data are not only a shared element but have taken a pivotal role. Having become aware of the power of data, and of their implications for Decision Theory and Strategic Management (and, if one looks at the bigger picture, even beyond), some reflections are introduced about the challenges companies and society are facing from now on and about the way a beneficial relationship between business and technology can be established. Data, Big Data, Data Science, Data-driven decision-making and Data-driven strategic management are no doubt valuable and important in business and in other domains. At first glance, the more the data, the better the decisions and the strategies. Relying on (good) data as a base for decisions has always been an undisputed principle. On deeper scrutiny, however, data should not be idolized. Surprising as it may be, data are no panacea. Data per se do not provide a solution to every problem. Rather, it seems a kind of "too much of a good thing" principle is operating in these circumstances.


In the following paragraphs, four challenges arising from the "data dictatorship", "data fundamentalism", "power of data", or however one wants to call it, are briefly discussed.

The first challenge is that data—even Big Data—are not enough. As has been said, data are just raw material. In our society and companies there are plenty of data; the quantity of data is no longer an issue. But every decision-maker—be it an institution, a company or an individual person—needs information, not data. Information is processed data, and in order to properly process data, knowledge and judgment are needed. It seems a kind of paradox is at work here: data comprise all the possible answers, but if we do not have the right questions, answers are pointless, and data—even Big Data—unfortunately do not per se provide any questions. On the contrary, the more the data, the more the complexity; there is still a missing link, then, between (big) data and decision-making and, even more, strategic management. Frameworks for (big) data applications in business are needed (Crawford 2013). Furthermore, since it has been realized that Big Data are not enough, new ideas are emerging, such as those of Small Data (Lindstrom 2016) or Thick Data (Wang 2013).

The second challenge is concerned with the way Big Data and Artificial Intelligence operate. They basically work by crunching enormous amounts of data, but these data necessarily refer to phenomena that have already taken place. In a word, as sophisticated as these technologies can be, even when applied for predictive purposes, they extrapolate the past. In addition, as has been said, due to their numerical nature, they benefit from a kind of positive prejudice. In business, however, relying on past events can hinder creativity and innovation. In order for companies to be competitive and possibly disrupt their respective industries, out-of-the-box thinking is required. Such a mindset, in turn, is favored if and when people can imagine with little or no constraint and with little or no conditioning from past events or "past-based" knowledge.

Challenge number 3 deals with the ethical side of technology. AI, machine learning and deep learning—which are required to manage Big Data—are all too often hidden in black boxes. No one but the people involved in their development knows how the algorithms actually work (some even say that over time algorithms will start working on their own, beyond the control of their very programmers too). The American mathematician Cathy O'Neil (2016) called algorithms "weapons of math destruction", in the sense that an ever greater part of life—in business and outside it—is determined by rules that are not known and agreed upon. Transparency and explainability of the rules followed to process data are urgently needed. It is less a matter of privacy than of being aware of the criteria and procedures that lead to the final decision.

The final challenge considers some of the consequences for human existence that an excessive trust in technology can lead to. It is all too obvious that machines, automated systems and technology are purposefully designed and realized to spare human work. From the very beginning of the human presence on earth, men did everything they could to replace human work—be it physical or intellectual—with machine work and took advantage of any technological device. But, as Borgmann (2000) calls it, the "device paradigm" has not only lights but also shadows. First of all, if machines are trusted too much, mistakes can happen, as some disasters in the airline industry have proved.


Second, in the long run, human skill deprivation can occur. People can lose the ability to perform certain tasks—think of making calculations in their heads or orienting themselves on unknown ground—resulting in a complete loss of basic aspects of life. Finally, taken to its ultimate consequences, a total dependence on technology risks establishing itself, leaving people unable to live their lives without technological support.

5 Not to Conclude

In a wider perspective, the evolution described here forges a new episode in the same old fight between a positive and a negative view of technology. In some respects, there is nothing new under the sun, since technology has always brought hope for a better future but at the same time fear of the unknown. The point is that such a fight is now exacerbated by the absolutely pervasive and disruptive nature of digital technologies. Digital technologies are deeply changing food and agriculture, health and medicine, economic activity and business management, human relationships and society, to name just a few. Nearly no realm of human life and activity is spared from the impact of digital. Moreover, digital technologies promise to alter human experience not only in its physical terms but also in cognitive and maybe emotional ones. It is more and more under examination what it means that both our brains and our senses are heavily supported by digital devices. What is left of human experience—at least as it has been understood until recently—when digital devices are so deeply entrenched in our most intimate processes? With these ideas circulating, a kind of so-called techno-anxiety is gaining momentum. The scenarios resulting from the advances in digital technologies, bio-technologies and artificial intelligence are far from reassuring. While, once upon a time, human progress was inextricably linked to the idea of the future, now tomorrow is not necessarily perceived as a better time than yesterday. For many people, these days nostalgia seems more comfortable than science fiction. Vintage products and brands are precisely an expression of such a desire to find solace in the past. The greater emphasis on worries about (digital) technologies is justified, on one side, by the loss of human experience as it has been known until now and, on the other, by the fear that these technologies are to some extent getting out of control. Getting closer to our (not) concluding remarks, it has to be acknowledged that technology is inseparable from human evolution; we cannot escape it. Humankind from its very origins has always been looking for devices able to give relief from effort, fatigue and labor. A fork is simply a tool that makes it easier to eat, and no one now would dare blame a fork. So, no a priori refusal of technology is allowed. Following a dialectical line of reasoning, however, the commonly accepted idea of technology neutrality should be carefully considered. The idea that technology is neutral, that everything depends just on the way it is used and, implicitly, that its use is exclusively up to humans and not to technology should be scrutinized with great attention. Technology has an inherent power and an inherent direction; it

If something is possible from a technical point of view, sooner or later someone will do it. Technology exerts an influence on human behavior simply because it makes something possible, and this, in turn, will almost irresistibly drive someone to use it. Technology carries an inertia of its own and generates a slippery slope that is very difficult to escape. Two authors perfectly champion this dialectic, Kelly and Borgmann, and both are convincing, each with sound and strong arguments. Kelly (2010) sees technology as a force for good for humanity, since it expands our possibilities. Borgmann (2000) argues that the device paradigm makes us lose focal things and practices, impoverishing the human experience. Therefore, the issue should not be framed as an either-or game. On one hand, we must embrace technology. On the other hand, we should not be blinded by it, accepting it unconditionally. The consequences and, more importantly, the hidden consequences and side effects of technology should be taken into consideration as much as possible. As commonplace and, at the same time, as paradoxical as it may seem, the solution to this dilemma is always a matter of balancing. Keeping an eye on business management, Data, Big Data and Data Science can certainly prove useful in improving business performance and the overall contribution firms make to society. However, on one side, scientists, IT pundits and engineers should not presume to treat business management as a computational problem while, on the other side, executives should not rely too much on technology, thereby escaping their responsibilities. This is joint work to be done, and events like this Conference commendably provide a very good setting for starting such a delicate job.

References

Ariely D, Jones S (2008) Predictably irrational. Harper Audio, New York
Baccarani C (2005) Diario di viaggio sul treno che non va in nessun posto: riflessioni per chi vive l’impresa. Giappichelli, Torino
Borgmann A (2000) The moral complexion of consumption. J Consum Res 26(4):418–422
Crawford K (2013) The hidden biases in big data. Harv Bus Rev 1(4):814
Cukier K, Mayer-Schönberger V (2013) The dictatorship of data. MIT Technol Rev
Damasio AR (1994) Descartes’ error: emotion, reason, and the human brain. Putnam, New York
Eiser JR, van der Pligt J (2015) Attitudes and decisions. Psychology Press, New York
Kelly K (2010) What technology wants. Penguin, New York
LeDoux J (2005) Le cerveau des émotions. Odile Jacob
Lindstrom M (2016) Small data: the tiny clues that uncover huge trends. St. Martin’s Press
Mintzberg H, Waters JA (1985) Of strategies, deliberate and emergent. Strat Manag J 6(3):257–272
Muller JZ (2018) The tyranny of metrics. Princeton University Press, Princeton
O’Neil C (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. Broadway Books, New York
Pfeffer J, Salancik JR (1978) The external control of organizations: a resource dependence perspective. New York
Porter TM (1996) Trust in numbers: the pursuit of objectivity in science and public life. Princeton University Press, Princeton
Quinn JB (1980) An incremental approach to strategic change. McKinsey Q 16(4):34–52
Simon HA (2013) Administrative behavior. Simon and Schuster, New York
Todd PM, Gigerenzer GE (2012) Ecological rationality: intelligence in the world. Oxford University Press, Oxford
Wang T (2013) Big data needs thick data. Ethn Matter 13