The Economics of Digital Transformation: Approaching Non-stable and Uncertain Digitalized Production Systems (Studies on Entrepreneurship, Structural Change and Industrial Dynamics) 3030599582, 9783030599584



Table of contents :
The Economics of Digital Transformation
Contents
The Economics of Digital Transformation: Approaching Non-stable and Uncertain Digitalized Production Systems
1 Introduction
1.1 The Arguments
The Book
Social and Economic Consequences of Large-scale Digitization and Robotization of the Modern Economy
1 Introduction
2 New Trends in the Development of the Modern Capitalist Economy
3 Models of Capital Accumulation and Technological Progress in the Twenty-First Century
4 Models of Employment and Income Allowing for Technological Replacement of Jobs
5 Models of Income Distribution in Society. Growth of Income Inequality
6 Conclusions
References
Revisited Economic Theory or How to Describe the Processes of Disequilibrium and Instability of Modern Economic Systems
1 Introduction
2 The Schumpeter-Kondratiev Innovation-Cyclic Theory of Economic Development
3 On the Digital Economy Dynamics
4 Synergetic and Digital Economies. Keynes-Minsky Theory of Stabilization of a Nonequilibrium and Unstable Economy
5 Conclusion
References
Technological Development: Models of Economic Growth and Distribution of Income
1 Introduction
2 Technological Development and Inequality
3 Empirical Data on Growth of Inequality in the USA and How it Can Be Explained
4 Models of Growth and Income Distribution in Modern Society
5 Conclusion
References
Breakthrough Technologies and Labor Market Transformation: How It Works and Some Evidence from the Economies of Developed Countries
1 Introduction
2 Are New Technologies Always Driving Employment?
3 ICT and Structural Shifts
4 Labor-Saving Technologies and Structural Shifts in US Manufacturing
5 Conclusion
References
Technological Substitution of Jobs in the Digital Economy and Shift in Labor Demand Towards Advanced Qualifications
1 Introduction
2 Human Capital and Its Profitability
3 Technological Substitution of Jobs in the Economy
4 Optimal Salary Growth
5 Conclusion
References
Oil Shocks and Stock Market Performance: Evidence from the Eurozone and the USA
1 Introduction
2 Literature Review and Research Hypotheses
3 Methodology
3.1 Econometric Model
3.2 Data Description and Model Specification
4 Results and Discussion
4.1 Empirical Evidence
4.2 Discussion
5 Conclusions
5.1 Evidence and Implications
5.2 Limitations and Future Research
References
Reinforcement Learning Approach for Dynamic Pricing
1 Introduction
2 The Practice of Dynamic Pricing Usage
3 The Mathematical Formulation of the Dynamic Pricing Problem
4 Demand Surface Reconstruction and Agent Training
5 Conclusion
References
Convergent Evolution of IT Security Paradigm: From Access Control to Cyber-Defense
1 Introduction
2 Related Work
3 Evolution of Security Technologies
4 Classification of Security Technologies in Terms of Control Theory
5 Digital Transformation of Control
6 Digital Transformation of Control
6.1 Cyber-Resilience as a Development of a Dynamic Technology Security Paradigm
6.2 Example of Cyber-Resilience Maintaining Using Homeostasis Control
6.3 Example of CPS Resilience Evaluation
7 Conclusion
References
AI Methods for Neutralizing Cyber Threats at Unmanned Vehicular Ecosystem of Smart City
1 Introduction
2 The Related Works
3 Hybrid AI-Based Detection of Security Threats
3.1 An Artificial Swarm Algorithm
3.2 A Deep Neural Network
4 The Experiments and Results
4.1 Network Routing Anomalies Detection
4.2 FDI Detection
5 Conclusion
References
Cybersecurity and Control Sustainability in Digital Economy and Advanced Production
1 Introduction
2 Related Works
3 Approach to CPS Security
3.1 Model of CPS Functioning
3.2 Estimating of Sustainability Area
4 Modeling Destructive Influences
4.1 Analyzing the Sensitivity of the Criterion to Cyber Attacks
4.2 Approach to System Self-Adaptation Recovery
5 Conclusion
References
Blockchain for Cybersecurity of Government E-Services: Decentralized Architecture Benefits and Challenges
1 Introduction
2 Centralized Government E-Services Issues
3 Government E-Services Security Requirements
4 Blockchain for Government E-Services
4.1 Blockchain as Data Exchange Infrastructure
4.2 Blockchain-Based Certifications Database
4.3 Common Issues and Solutions
5 Conclusion
References
Green Energy Markets: Current Gaps and Development Perspectives in the Russian Federation
1 Introduction
2 Literature Review
2.1 Trends in Development of Green Energy Sources
2.2 Costs of Technological Base for Green Energy
3 Methodology
4 Discussion
5 Conclusions
References
Energy Efficiency in Urban Districts: Case from Polytechnic University
1 Introduction
2 Literature Review
2.1 Technology Aspects of the Energy Supply in Housing
2.2 Economic Efficiency of RES
2.3 Social Aspects of Energy-Saving Measures
3 Methodology
4 Results and Discussion
4.1 EID Selection
4.2 EID Analysis
5 Conclusions
References
An Architectural Approach to Managing the Digital Transformation of a Medical Organization
1 Introduction
2 Research Methods
3 Literature Review
3.1 Principles of Value and Personalized Medicine
3.2 Description of Digital Technologies and the Concept of “End-to-end Digital Technologies”
3.2.1 Big Data
3.2.2 Neurotechnology
3.2.3 Artificial Intelligence
3.2.4 Blockchain
3.2.5 New Production Technologies
3.2.6 Industrial Internet of Things
3.2.7 Robotics
3.2.8 Wireless Communication
3.2.9 Virtual Reality
3.2.10 Augmented Reality
4 Results
4.1 Business Architecture of Modern Medical Organization
4.2 Requirements for Functionality of Informational System, Including Medical Informational System
4.3 Requirements Formation for a Reference Architecture Model
4.3.1 Business Services Requirements
4.3.2 IT Services Requirements
4.3.3 Options for Selecting BI Systems
4.3.4 Possible Integration Options for ERP, MIS, and BI Systems
5 Discussion and Conclusion
References
Online document
Aluminum Production and Aviation: An Interesting Case of an Interwoven Rebound Effect in a Digital Transforming World
1 Aluminum: From Noble Metal to Cheap Commodity
2 The Cream of the Crop
3 Comments on Al-Alloys Prices
4 Incredible Versatility
5 Snapshots of the Aluminum World Production and Consumption
6 Signals of Backfire
7 The Future: A Prospective Analysis
References

Studies on Entrepreneurship, Structural Change and Industrial Dynamics

Tessaleno Devezas João Leitão Askar Sarygulov   Editors

The Economics of Digital Transformation Approaching Non-stable and Uncertain Digitalized Production Systems

Studies on Entrepreneurship, Structural Change and Industrial Dynamics

Series Editors
João Leitão, University of Beira Interior, Covilhã, Portugal
Tessaleno Devezas, Atlantica—Instituto Universitário, Oeiras, Lisbon, Portugal; C-MAST (Center for Aerospace Science and Technologies)—FCT, Lisbon, Portugal

The ‘Studies on Entrepreneurship, Structural Change and Industrial Dynamics’ series showcases exceptional scholarly work being developed on the still unexplored complex relationship between entrepreneurship, structural change and industrial dynamics, by addressing structural and technological determinants of the evolutionary pathway of innovative and entrepreneurial activity. The series invites proposals based on sound research methodologies and approaches to the above topics. Volumes in the series may include research monographs and edited/contributed works.

More information about this series at http://www.springer.com/series/15330

Tessaleno Devezas • João Leitão • Askar Sarygulov Editors

The Economics of Digital Transformation Approaching Non-stable and Uncertain Digitalized Production Systems

Editors
Tessaleno Devezas
Atlantica—Instituto Universitário, Oeiras, Lisbon, Portugal
C-MAST (Center for Aerospace Science and Technologies)—FCT, Lisbon, Portugal

João Leitão
University of Beira Interior, Covilhã, Portugal

Askar Sarygulov
Saint Petersburg State University of Economics, Saint Petersburg, Russia

ISSN 2511-2023 ISSN 2511-2031 (electronic) Studies on Entrepreneurship, Structural Change and Industrial Dynamics ISBN 978-3-030-59958-4 ISBN 978-3-030-59959-1 (eBook) https://doi.org/10.1007/978-3-030-59959-1 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Contents

The Economics of Digital Transformation: Approaching Non-stable and Uncertain Digitalized Production Systems . . . 1
Tessaleno Devezas, João Leitão, and Askar Sarygulov

Social and Economic Consequences of Large-scale Digitization and Robotization of the Modern Economy . . . 5
Askar Akaev, Andrey Rudskoy, and Tessaleno Devezas

Revisited Economic Theory or How to Describe the Processes of Disequilibrium and Instability of Modern Economic Systems . . . 25
A. A. Akaev and V. A. Sadovnichiy

Technological Development: Models of Economic Growth and Distribution of Income . . . 45
Askar Akaev, Askar Sarygulov, and Valentin Sokolov

Breakthrough Technologies and Labor Market Transformation: How It Works and Some Evidence from the Economies of Developed Countries . . . 67
Elena Gorbashko, Irina Golovtsova, Dmitry Desyatko, and Viktorya Rapgof

Technological Substitution of Jobs in the Digital Economy and Shift in Labor Demand Towards Advanced Qualifications . . . 85
A. A. Akaev, A. I. Rudskoy, and Tessaleno Devezas

Oil Shocks and Stock Market Performance: Evidence from the Eurozone and the USA . . . 105
João Leitão and Joaquim Ferreira

Reinforcement Learning Approach for Dynamic Pricing . . . 123
Maksim Balashov, Anton Kiselev, and Alena Kuryleva

Convergent Evolution of IT Security Paradigm: From Access Control to Cyber-Defense . . . 143
Dmitry P. Zegzhda

AI Methods for Neutralizing Cyber Threats at Unmanned Vehicular Ecosystem of Smart City . . . 157
Maxim Kalinin, Vasiliy Krundyshev, and Dmitry Zegzhda

Cybersecurity and Control Sustainability in Digital Economy and Advanced Production . . . 173
Dmitry P. Zegzhda, Evgeny Pavlenko, and Anna Shtyrkina

Blockchain for Cybersecurity of Government E-Services: Decentralized Architecture Benefits and Challenges . . . 187
Alexey Busygin and Artem Konoplev

Green Energy Markets: Current Gaps and Development Perspectives in the Russian Federation . . . 199
Yury Nurulin, Inga Skvortsova, and Elena Vinogradova

Energy Efficiency in Urban Districts: Case from Polytechnic University . . . 211
Yury Nurulin, Vitaliy Sergeev, Inga Skvortsova, and Olga Kaltchenko

An Architectural Approach to Managing the Digital Transformation of a Medical Organization . . . 227
Igor Ilin, Oksana Iliashenko, and Victoriia Iliashenko

Aluminum Production and Aviation: An Interesting Case of an Interwoven Rebound Effect in a Digital Transforming World . . . 251
Tessaleno Devezas and Hugo Ruão

The Economics of Digital Transformation: Approaching Non-stable and Uncertain Digitalized Production Systems

Tessaleno Devezas, João Leitão, and Askar Sarygulov

1 Introduction

1.1 The Arguments

The recent economic crisis that lasted between 2007 and 2009 was predetermined not only by the presence of “bubbles” in the financial sector, but also by the allometric character of the sectorial development. This structural imbalance, which accumulated over the years, was the result of a lack of conjugation between the new technological platform, forms and methods of management and practices of social development. It can be expected that a new technological breakthrough will bring with it not only “roses of prosperity” but also “prickly thorns” of disappointment. The key problem will not only be to ensure a new character of economic growth, but also to solve the unemployment problem and to ensure conjugation of the new technology platform and the information and social infrastructure of society.

New technologies provide ample opportunities for rapid and direct exchange of goods and services between the producer and the consumer, which are often separated by thousands of kilometres. This forms a new type of economic relationship, without intermediaries, with a high level of individualization, targeting and accuracy by the producer to maximize satisfaction.

T. Devezas (*)
Atlantica—Instituto Universitário, Oeiras, Lisbon, Portugal
C-MAST (Center for Aerospace Science and Technologies)—FCT, Lisbon, Portugal
e-mail: [email protected]

J. Leitão
University of Beira Interior, Covilhã, Portugal

A. Sarygulov
Saint Petersburg State University of Economics, St. Petersburg, Russia

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Devezas et al. (eds.), The Economics of Digital Transformation, Studies on Entrepreneurship, Structural Change and Industrial Dynamics, https://doi.org/10.1007/978-3-030-59959-1_1

In classical economic systems, there was a clear division of labour into living and materialized labour, the latter having a purely productive, not a creative, character. Modern complex software products (which are not yet artificial intelligence) are able to replace a person in the analysis of standard banking transactions or in stock trading on the stock exchange or, as Uber's experience shows, even to replace entire companies. In carrying out such functions, materialized work changes its character and becomes capable of variability, generating greater added value. More complex systems, such as those based on pattern recognition, can control technology (e.g. cars), and in this case the scale of replacement of labour looks frightening.

The importance of the economics of digital transformation can be measured not only by means of its unequivocal contribution to reinforce and speed up the dynamics of economic activity, but also by its role for the sustainable growth of the economy as a whole. In this innovative and disruptive context, people, companies and governments must be able to improve and acquire skills in order to stay ahead of the fast-changing game of technological change, thus improving their economic, environmental and social performance.

In recent decades, digital transformation has impacted the totality of dimensions of the organization of economic, social and public activity, simplifying the modalities of communication, reinforcing the value attributed to leisure and entertainment activities, transforming the economic value of distances and raising new challenges to security, especially in terms of cybersecurity. However, the most visible face of technological change is shaped in the evolution of Industry 4.0 to subsequent versions, i.e. Industry 5.0 and others to come, which can radically change and dematerialize production processes, supply chains, logistics support services, distribution channels, intelligent forms of marketing, customer relationship management, data mining and big data management.

On the supply-push side, the industrial sectors show major changes towards the introduction of technological advances in the interrelated fields of robotics, domotics, automation, nanotechnologies, renewable energy and artificial intelligence. In addition, it is worth emphasizing the greater intelligence and speed integrated through the coupling of industrial production systems, computerized maintenance management systems and big data, collectively known as the Internet of Things (IoT), which has led to a sharp trend of increasing “servitization” of industry, bringing additional challenges to the slowdown in productivity.

From a demand-pull perspective, consumers demand connected products/services, which should be more responsive and a lot smarter than they are. The development of smarter products/services means we can track our exercise habits and how much we eat; find out the answer to almost anything on the Internet within seconds; control our home from work, or vice versa; and even put the kettle on before we open our front doors at home. These game-changing products come with an endless list of advantages, from maximizing home security and conserving energy to saving costs and helping us enjoy happier, healthier and lower-cost lifestyles. In general, they offer the consumer a lot more interactions and control, and can be tailored to everyone's individual needs and preferences.


In terms of the challenges to be faced through the collective dissemination of innovations working on intangible platforms, it is worth highlighting the possibilities of strengthening equity in access to data and applications through cloud computing, that is, Internet-based computing that integrates applications accessed over the Internet rather than only through physical devices or servers. This frees up the memory and computing power of individual computers, and also saves money on the purchase and maintenance of computers and other peripheral and connectable equipment.

Undoubtedly, one of the biggest challenges to security in this digital transformation age is cybercrime. It can have an irreversible effect on corruption perception and institutions' quality, at the government level, or on a company's revenue and customer loyalty and recommendation, so security really is a critical problem to be solved. Here again, cloud computing can give citizens, companies and governments the possibility to add more security measures than they could usually have on a normal computer, improving data protection and restricting access to sensitive information stored in the cloud.

In addition, digital tools can help with sourcing talent and help put people in the right jobs. For example, artificial intelligence is already widely used to speed up recruitment and reduce inherent human bias throughout the process, which does not correspond to a natural selection process of species, but in fact to a learning and iterative matching process that human beings should be aware of. However, recovering an evolutionary lens of writing, the key here is to adapt, adopt and live, not only survive. Today's workers should be educated and trained to embrace and work with new technologies in turbulent scenarios, learn new skills such as web engineering, coding and the ins and outs of data protection, and ensure a greater quality of working life and a balanced trade-off between life and profession.

This highly turbulent and uncertain type of environment requires moving forward the knowledge on the socio-economic changes of digital transformation in distinct sectors and activities. For doing so, it is necessary to revisit “old wine in new bottles,” bringing together different lenses of analysis, including structural change, socio-economic change, digital production systems, knowledge and technological competences and skills, which are critical levers of endogenous growth based on innovation and sustainability. To be able to successfully surf the turbulent “waves” of uncertainty and change, it is necessary to design and strategically predict different scenarios of implementation of advanced/intelligent/learning technologies, cybersecurity tools, new materials and green energies, providing new insightful implications for driving economies towards the desired path of sustainable growth.

This editorial endeavour contributes to the literature of reference on entrepreneurship, structural change and industrial dynamics, by providing innovative answers to still unexplored analysis topics, namely: (1) the socio-economic changes associated with the digital transformation of production systems; (2) the impacts of digital transformation on the sustainable functioning of socio-economic and environmental systems; (3) the adoption of intelligent/learning systems affecting the substitution of human labour force and smart digital management/security of cities, places and people; and (4) the type of materials and energy innovations leading to sustainable change.

The Book

This book is organized into 15 chapters that cover the four topics mentioned above, contributing new insights that can help the worldwide effort to better adapt to the reality of the new digital universe dominated nowadays by intangible platforms. Some of the authors of the chapters of this book are carrying out research activities within the project “Transformation of socio-economic and technological systems: a new comprehension of the role of man, machines and governance” supported by the Russian Science Foundation (Grant No. 18-18-00099) led by Professor Askar Akaev from the Moscow State University. All chapters presented in this book were planned to be presented during the Vth Scientific International Conference “Technological Transformation: A New Role for Human and Machines Management” that should have been held on 27–29 May 2020 at Peter the Great Saint-Petersburg Polytechnic University but was postponed to 16–18 September 2020 as a consequence of the SARS-CoV-2 pandemic.

June 2020

Social and Economic Consequences of Large-scale Digitization and Robotization of the Modern Economy

Askar Akaev, Andrey Rudskoy, and Tessaleno Devezas

Abstract The authors analyze the trends in the development of modern capitalist economies such as the growth of capital share in the national income (GDP) and corresponding decrease in the share of labor; lower wages and the reduction of employment; steady growth of income inequality and polarization of labor. The models for the dynamics of employment and income are developed considering the acceleration of technological replacement of jobs. Models for the distribution of the household annual income in the USA based on the exponential law, Rayleigh distribution law, and power-law distribution are discussed in the work. The forecasted distribution curves of the annual household income of the middle class, including the poor, and of the rich families in the USA are built for 2030 and 2050 and compared with the corresponding data for 2017. They reflect the forecasted growth of income polarization: there are practically no families with income ranging from 300 thousand dollars to 600 thousand dollars per year. This indicates the gradual disappearance of the middle class, which is the basis of democracy and stability.

Keywords Technological replacement of workplaces · Polarization of labor and income · NBIC technologies

A. Akaev
Institute for Mathematical Research of Complex Systems, M.V. Lomonosov Moscow State University, Moscow, Russia

A. Rudskoy
Peter the Great Saint Petersburg Polytechnic University, Saint Petersburg, Russia
e-mail: [email protected]

T. Devezas (*)
Atlantica—Instituto Universitário, Oeiras, Lisbon, Portugal
C-MAST (Center for Aerospace Science and Technologies)—FCT, Lisbon, Portugal
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Devezas et al. (eds.), The Economics of Digital Transformation, Studies on Entrepreneurship, Structural Change and Industrial Dynamics, https://doi.org/10.1007/978-3-030-59959-1_2


1 Introduction

Akaev and Rudskoy (2015) proposed a mathematical model for estimating the synergistic effect caused by the convergence of NBIC technologies, which allowed us to calculate the long-term projection, and have shown that NBIC technologies can accelerate technological growth in the developed economies by an additional 0.5–1.2% per year. Thus, due to their mutual convergence, NBIC technologies generate a significant synergistic effect contributing to the increase in aggregate factor productivity (AFP).

The revolution based on NBIC technologies has led to the creation of many innovative industrial technologies and consumer products that can generate large-scale changes in production facilities, which are many times superior to the achievements of the third industrial revolution based on microelectronics and information technology. Therefore, the fourth industrial revolution (Schwab 2016) and the creation of Industry 4.0 (Kagermann et al. 2013) have sparked discussions all over the world about their possible consequences on AFP growth. The main infrastructure of Industry 4.0 is the Industrial Internet (Gringard 2015)—a digital platform based on the Internet that ensures the effective interaction of all industrial production facilities. Internet technologies allow to automate the process of manufacturing goods from the stage of production of components and assembly of goods to electronic order and delivery of the finished products to the end user, displacing human workers from these processes.

On the other hand, digital technologies have their shortcomings, and the main disadvantage is their intensive labor-saving character. This feature is also inherent to additive technologies and robotics. For example, every industrial robot today replaces six workers. Consequently, the digital economy will inevitably lead to numerous job cuts and the elimination of entire professions. Such professions as bank clerks, notaries, lawyers, personnel officers, accountants, financial analysts, writers, journalists, tutors, drivers of vehicles, and many others shall be performed by machines (Schwab 2016). Therefore, in the coming decades, humankind will have to face a large-scale reduction of jobs and, as a result, a sharp decline in household income. Eric Brynjolfsson and Andrew McAfee (2016), scientists at the Massachusetts Institute of Technology, argue: «The economy in the twenty-first century will qualitatively change. There will be mass replacement of qualified labor by the capital and, consequently, sharp decrease in labor activity».

2 New Trends in the Development of the Modern Capitalist Economy

The twentieth century saw long-term economic growth during which a number of empirical laws appeared and were proven in the long run, when the effects of various economic and financial shocks and crises were mitigated. Six of them were first formulated by Nicholas Kaldor (1961), and some of them are still valid now. However, some of these empirical laws ceased to function, indicating changes in the development trends of the modern capitalist economy.

Fig. 1 Dynamics of US GDP growth over the past 65 years (1950–2016) (authors' creation)

For our further analysis, the most interesting are two central laws formulated by Kaldor:

1. The ratio of physical capital to output is approximately constant.
2. The shares of labor and physical capital in the national income are approximately constant.

To these two laws of developed economies we add a third law, which directly results from the second:

3. The wages of workers grow in proportion to labor productivity, and the share of labor in GDP remains constant.

Let us take the classical production function (PF) of the Cobb–Douglas type with labor-saving technological progress as a basic model to describe long-term economic growth:

$$Y = \gamma \, K^{\alpha} \, (A \, L)^{1-\alpha+\delta} \tag{1}$$

where Y(t) is the current national income (GDP); K(t) is the physical capital; L(t) is the number of the employed; A(t) is the technological progress; α is the share of capital in GDP; δ is a parameter characterizing the increasing return on the scale of production (δ > 0); γ is a constant normalizing coefficient. Verification of the classical production function (PF) given by Eq. (1), conducted on the example of the economic development of the USA for the period from 1950 to 2016, proves its validity (see Fig. 1). Parameters of the PF were estimated by the method of least squares (LS method): γ = 2.37; α = 0.38; δ = 0.24. The numerical values of the production factors (K, L, and A), as well as GDP (Y), were taken from the World Bank database (http://data.worldbank.org/) and were verified with the data of the University of Groningen (http://febpwt.webhosting.rug.nl/).

But, in order to use Eq. (1) for predictive calculations, it is necessary to take into account changes in the development trends of today's economy. The second law implies that in Eq. (1) the parameter α, which characterizes the capital share, and the parameter (1 − α), which characterizes the share of labor, have constant values. The first of the above laws is expressed by the following:

$$\text{(a)}\ K = \sigma Y,\ \sigma = \mathrm{const};\quad \text{(b)}\ Y = \kappa K,\ \kappa = \mathrm{const};\quad \text{(c)}\ \sigma = \kappa^{-1} \tag{2}$$
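The least-squares fit mentioned above can be reproduced in outline with a short script. The sketch below is not the authors' code: it regresses ln Y on ln K and ln(A·L) to recover γ, α and δ from Eq. (1), and it uses synthetic stand-in series (generated from assumed growth rates) in place of the actual World Bank/Groningen data.

```python
import numpy as np

# Least-squares estimation of the Cobb-Douglas PF (Eq. 1):
#   Y = gamma * K**alpha * (A*L)**(1 - alpha + delta)
# In log form: ln Y = ln gamma + alpha*ln K + (1 - alpha + delta)*ln(A*L),
# so an OLS fit of ln Y on [1, ln K, ln(A*L)] gives ln gamma, alpha and
# b2 = 1 - alpha + delta, from which delta = b2 - 1 + alpha.
rng = np.random.default_rng(0)

# Synthetic stand-ins for the 1950-2016 US series used in the chapter.
years = np.arange(1950, 2017)
K = 3000 * np.exp(0.03 * (years - 1950))        # physical capital
L = 60 * np.exp(0.015 * (years - 1950))         # employment
A = np.exp(0.02 * (years - 1950))               # technological progress
gamma_true, alpha_true, delta_true = 2.37, 0.38, 0.24
Y = (gamma_true * K**alpha_true * (A * L)**(1 - alpha_true + delta_true)
     * np.exp(rng.normal(0, 0.02, years.size)))  # multiplicative noise

# OLS on logs
X = np.column_stack([np.ones_like(K), np.log(K), np.log(A * L)])
b, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
gamma_hat, alpha_hat = np.exp(b[0]), b[1]
delta_hat = b[2] - 1 + alpha_hat
print(f"gamma={gamma_hat:.2f}, alpha={alpha_hat:.2f}, delta={delta_hat:.2f}")
```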

Here σ is the coefficient of capital intensity; κ is the coefficient of capital income. Using data from the World Bank statistics for K(t) and Y(t) of the US economy for the period from 1985 to 2016, we estimated the coefficients σ and κ and obtained: σ = 3.2; κ = 0.31. The third law can be expressed by the following formula:

$$\bar{w} = \frac{A}{1+\eta} \tag{3}$$

where w is the real average wage of workers and η > 0 is the increment set by firms to reduce the proportionality factor to their own advantage.

The trends in the accumulation of capital, economic growth, and income inequality, which developed at the beginning of the twenty-first century, were thoroughly studied by the French economist Thomas Piketti. The fundamental results of his studies are described in his famous book “Capital in the 21st Century” (Piketti 2013). First of all, Piketti showed that in the most industrialized countries (the USA, Great Britain, Germany, France, etc.) the value of capital intensity σ moved along a huge U-shaped curve and at the beginning of the twenty-first century returned to its maximum value, close to those that were observed at the end of the nineteenth century. In the eighteenth and nineteenth centuries, the value of σ in the leading European economies was quite stable and was equal to 7 in France and Great Britain, and equal to 6.5 in Germany. By the middle of the twentieth century the value of σ in these countries had dropped to its minimum of 3.0–3.5. In the US, capital intensity reached its quasi-stability at the level σ = 4.5 in the early twentieth century and then stabilized at the level of σ ≈ 4 starting from the mid-twentieth century. It is obvious that changes in the capital intensity in the USA during the twentieth century were quite limited, unlike in the countries of Western Europe, and, therefore, made an impression that the first empirical law stated by Kaldor was still working. The fact that capital intensity σ in the industrialized countries in the twenty-first century returned to its maximum means that it is likely to stabilize again at least until the middle of this century. However, Piketti claimed that for the world economy the capital intensity would increase to 6 by the middle of this century from its current value of σ = 5.


So, concluding from the research done by Piketti, it follows that in the first half of the twenty-first century the first empirical Kaldor law (2), Y = κK, will still be a key factor for the transformation of the production function PF given by Eq. (1). As for the second empirical law of Kaldor, it does not function in the twenty-first century: the share of capital income in GDP will not remain constant, but it tends to grow, according to Piketti. This follows from the first fundamental law of capitalism (Piketti 2013):

$$\alpha = r \times \sigma \tag{4}$$

where r is the average capital income. The average capital income (r) was 5–6% in the eighteenth and nineteenth centuries, then it grew to 7–8% in the mid-twentieth century, due to the breakthrough technological innovations of the fourth Kondratieff Wave, and then it fell to 4–5% at the dawn of the twenty-first century (Piketti 2013). The capital share (α) in the GDP of Western European countries amounted to 35–40% in the nineteenth century, later it fell to 20–25% in the middle of the twentieth century, and it rose to 25–30% by the beginning of the twenty-first century (Piketti 2013). As we see from the data above, the in-depth analysis made by Piketti shows that in the long term the share of capital income in GDP changed significantly. The renewed growth of the profitability of capital in the developed economies can be expected in the coming decades due to the revolutionary technological innovations of the sixth Kondratieff wave. The average capital income in the industrialized countries may grow to 6–7% from its current 4–5%, which, in accordance with formula (4), will lead to an increase in the share of capital income to 40–50% and a subsequent decrease in the share of labor in GDP by 15–20%. For example, the share of labor in GDP in the USA fell from 65% to 55% in the period from 1960 to 2015 (Ford 2015).

The third law (3) ceased to function in the mid-1970s, when the growth trajectories of labor productivity (A) and the real average wage w went apart: the former continued to grow steadily, and the latter at first stagnated, then dropped noticeably at the beginning of the twenty-first century (Ford 2015). In the golden age of «economic prosperity» (1948–1973), the growth of wages was directly proportional to the growth of labor productivity, which led to an unprecedented expansion and strengthening of the middle class in the developed countries. Most economists believe that the main culprit of this negative trend is the rapid technological changes that cause ever-increasing unemployment, which contributes to the increase in income inequality. Indeed, in the period from 1980 to 2000, the average wage of low-skilled workers decreased by 15%, and the wage of highly qualified employees increased by 20%. The main reason for the increase in the wage of highly skilled workers and the decrease in the wages of low-skilled workers is a growing demand for highly qualified specialists and a lower demand for low-skilled workers. These factors are observed in the USA, but they are typical for all developed countries. These two negative trends will only be reinforced in the future: digital technologies, as well as smart computers and robots, will trigger an intensive and large-scale replacement of low-skilled workers.


Intensive technological replacement of jobs will cause fierce competition for well-paid jobs, which will ultimately lead to a decrease in real wages. The reduction of the real income of households, subsequently, will lead to the drop of the demand for goods and services and to the output decline, followed by the slowdown in the economic growth predicted by Piketti. In order to maintain demand at the level of potential output, the developed countries will have to introduce the universal basic income (UBI), which was proposed by Friedrich Hayek, who believed that UBI would serve as the “economic safety cushion” (Hayek 1976).

3 Models of Capital Accumulation and Technological Progress in the Twenty-First Century

The accumulation of capital and innovative technologies of the fourth industrial revolution will become the driving forces affecting the economic development in the first half of the twenty-first century. The growth of capital was the most important feature of capitalism in the nineteenth and twentieth century and will accelerate in the twenty-first century, according to Piketti. Let us consider the patterns of capital accumulation in the twenty-first century and use the classical equation of capital accumulation:

$$\dot{K}(t) = I(t) - \mu \, K(t) \tag{5}$$

where I(t) is the production investment; μ is the rate of capital retirement. We consider the depreciation of infrastructure capital within one Kondratieff wave. We estimated the value of μ within the fifth Kondratieff wave (1982–2016) as the regression coefficient in Eq. (5), and we obtained μ = 0.037, i.e., 3.7% per year. Since I(t) = s × Y(t), Eq. (5) is transformed to:

$$\dot{K}(t) = s \, Y(t) - \mu \, K(t) \tag{6}$$

Considering that the first empirical law of Kaldor (2) remains valid in the first half of the twenty-first century, Eq. (6) will be simplified to the following:

$$\dot{K}(t) = (s\kappa - \mu) \, K(t) \tag{7}$$

The solution to this simplest differential equation is the following:

$$K(t) = K_0 \times \exp\left[(s\kappa - \mu)(t - T_0)\right] \tag{8}$$

From the World Bank statistics, the following values were used: K0 = 65,558 billion dollars at T0 = 2017 and s = 0.18. The parameters κ = 0.32 and μ = 0.037 were determined earlier. Consequently, sκ − μ = 0.0206 and the accumulation of capital will occur exponentially with an annual growth rate of 2.06%. Consequently, under the conditions of the empirical law given by expression (2), there will be an exponential growth of the accumulated capital in the twenty-first century. However, according to the theory of Kondratieff waves, the downward stage in the 2040s should see the effect of capital saturation; therefore, the accumulation of capital will go along a logistic trajectory:

$$K(t) = K_1 + \frac{K_2}{1 + u_K \exp\left[-\vartheta_K (t - T_0)\right]} \tag{9}$$

where K1, K2, uK and ϑK are constant parameters; T0 = 2018. All these parameters are easily determined using the least squares method, from the condition of the same growth trajectories of capital accumulation (8 and 9) at the initial stage of the upward stage of the Kondratieff wave (2018–2034). The results of the calculations for the US economy give the following figures: K1 = 48,464 billion dollars; K2 = 105,000 billion dollars; uK = 5; ϑK = 0.1. It is extremely important to estimate precisely the numerical values of the parameters s, κ and μ in formula (8). Knowing the trajectory of the capital accumulation (9) and using the formula (2), we can now calculate the growth trajectory of the expected output:

$$Y_p(t) = \kappa \, K(t) \tag{10}$$
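To illustrate Eqs. (8)–(10), the following sketch evaluates the exponential and logistic capital trajectories with the parameter values quoted above. It is an illustrative calculation rather than the authors' forecasting code, and the evaluation years are an arbitrary choice.

```python
import numpy as np

# Capital accumulation paths from Eqs. (8) and (9), parameter values as
# quoted in the text (capital in billions of USD).
s, kappa, mu = 0.18, 0.32, 0.037
K0, T0_exp = 65558.0, 2017
K1, K2, uK, thetaK, T0_log = 48464.0, 105000.0, 5.0, 0.1, 2018

def K_exponential(t):
    """Eq. (8): exponential growth at rate s*kappa - mu (about 2.06%/yr)."""
    return K0 * np.exp((s * kappa - mu) * (t - T0_exp))

def K_logistic(t):
    """Eq. (9): logistic trajectory with capital saturation in the 2040s."""
    return K1 + K2 / (1.0 + uK * np.exp(-thetaK * (t - T0_log)))

def Y_expected(t):
    """Eq. (10): expected output implied by the logistic capital path."""
    return kappa * K_logistic(t)

for t in (2025, 2035, 2050):
    print(t, round(K_exponential(t)), round(K_logistic(t)), round(Y_expected(t)))
```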

Let us choose the model for the description of the technical progress A(t). The main innovative technologies of the sixth Kondratieff wave are the general-purpose technologies (GPT), so they will be implemented throughout the economy contributing to the accelerated accumulation of capital. In the conditions of accelerated accumulation of the capital (9), the most suitable model to describe technological progress is the model that allows for knowledge, skills, and experience acquired by employees in the process of their work on the new production equipment. The latter, obviously, depends on the amount of newly invested capital. This model was proposed by Kenneth Arrow in 1962 and was called the “model of training in the production process” (Arrow 1962):

$$A(t) = a \left(\frac{K}{L}\right)^{\theta} = a \, k^{\theta}(t) \tag{11}$$

where θ is the parameter characterizing the efficiency of training (0 < θ ≤ 1); a is the normalizing coefficient. Arrow estimated the value of the parameter θ for the aviation industry and obtained θ ≈ 0.7. Such high values of θ are typical only for high-tech industries; for traditional industries, the typical values are of the order θ = 0.1–0.3. The value of θ for the US economy for the period from 1985 to 2015 was estimated at θ ≈ 0.19 by the regression formula ln A = θ × (ln K − ln L) + ln a that follows from the Arrow model (11).
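The regression for θ can be sketched the same way as the PF fit above. The example below uses synthetic stand-in series rather than the actual 1985–2015 US data, so only the procedure, not the reported value θ ≈ 0.19, should be read from it.

```python
import numpy as np

# Estimating the learning parameter theta in the Arrow model (Eq. 11)
# from the log form  ln A = theta*(ln K - ln L) + ln a.
rng = np.random.default_rng(1)
years = np.arange(1985, 2016)
K = 20000 * np.exp(0.035 * (years - 1985))   # synthetic capital series
L = 110 * np.exp(0.012 * (years - 1985))     # synthetic employment series
theta_true, a_true = 0.19, 0.9
A = a_true * (K / L)**theta_true * np.exp(rng.normal(0, 0.01, years.size))

x = np.log(K) - np.log(L)
theta_hat, ln_a_hat = np.polyfit(x, np.log(A), 1)
print(f"theta = {theta_hat:.3f}, a = {np.exp(ln_a_hat):.3f}")
```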


Arrow also showed that the knowledge acquired in the production process in one firm or industry is shared by the workers of other firms/industries, i.e., there is a spreading effect of knowledge. Therefore, other firms (industries) also benefit from this process, as an external effect of raising the level of capital-labor ratio (capital intensity). Consequently, the Arrow model (11) is most suitable for describing the average technical progress throughout the economy. In the coming decades, the capital-labor ratio k = K/L of the workplace will steadily grow due to accelerated accumulation of capital (K) and a simultaneous decrease of labor (L), i.e., the number of workers and employees involved in the real economy. Thus, during the period of the sixth Kondratieff wave, the nature of capital accumulation in the twenty-first century will radically change, followed by the large-scale replacement of labor with capital, whereas in previous centuries capital accumulation entailed the increase in the number of jobs.

4 Models of Employment and Income Allowing for Technological Replacement of Jobs

Above we established the relations (2) and (11) between the capital and output, and also between the capital and technological progress, which correspond to the development trends of high-yield economies in the first half of the twenty-first century. A forecast logistic function for the calculation of capital accumulation was also obtained (9). Substituting formulas (2 and 11) in the PF (1) and solving it with respect to L(t), we obtain the following formula for calculating the value of labor:

$$L(t) = \lambda \left(\frac{\kappa}{\gamma}\right)^{\frac{1}{(1-\theta)(1-\tilde{\alpha}+\delta)}} \times K(t)^{\frac{1-\tilde{\alpha}-\theta}{(1-\theta)(1-\tilde{\alpha}+\delta)}} \tag{12}$$

Here, λ = a^(−1−θ) is the normalizing coefficient. In this formula, all parameters (κ, θ, δ) have constant values, except for the capital share α̃ in GDP, which will increase in accordance with formula (13). Therefore, the parameter α is equipped with the ~ symbol above, indicating that it is a variable. If we assume the constant current value α̃ = α0 = 0.38 in formula (12), then we can forecast the dynamics of the potential number of jobs in the US economy—Lp(t). The corresponding graph is shown in Fig. 2. In accordance with Piketti's theory, we assume that by the middle of this century the capital share in US GDP will increase by about 7%, up to α̃ = αm = 0.45, as it has already risen by 3% since the beginning of this century. Then we can use equation (12) to calculate the dynamics of change in the number of working places, allowing for the technological replacement of labor by capital—LCK(t), with the capital share α̃(t) increasing along the logistic trajectory expressed by equation (13):


Fig. 2 Projections of the number of the employed in the US economy, allowing for the technological replacement of jobs (LCK) and production robotization (LCKR) (authors’ creation)

$$\tilde{\alpha}(t) = \alpha_1 + \frac{\alpha_2}{1 + u_{\alpha} \exp\left[-\vartheta_{\alpha} (t - T_0)\right]} \tag{13}$$

where α1, α2, uα, and ϑα are constant parameters, which can be calculated given its initial (α0 = 0.38; T0 = 2018) and final (αm = 0.45; Tm = 2050) values: α1 = 0.38; α2 = 0.07; uα = 20; ϑα = 0.18. The corresponding forecast curve describing the dynamics of changes in the number of jobs in the US economy, LCK(t), is given in Fig. 2, along with the growth trajectory of the projected number of jobs Lp(t). This curve also shows the retrospective data on the growth in the number of jobs in the US for the period from 1980 to 2016. At the point where the forecast curves join the retrospective one, we obtain the following value of the normalizing coefficient: λ = 1.996. As can be inferred from Fig. 2, in the framework of the current economic model, an additional 100 million jobs could be created by 2050, whereas in accordance with the new development model based on the widespread digitization of all spheres of life, there will be a slight increase in the number of jobs until 2025, and later their number will gradually decrease to a minimum level of about 140 million jobs, which is close to the level of 2009, when the crisis occurred.
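A compact way to see how Eqs. (9), (12) and (13) interact is to code them directly. The sketch below uses the parameter values quoted in the text, but it is only an outline: Eq. (12) is implemented as reconstructed above, and the exact employment levels depend on the units of K(t) and on the normalization λ, so it should not be expected to reproduce the curves of Fig. 2.

```python
import numpy as np

# Jobs forecast combining Eqs. (9), (12) and (13): the capital share rises
# along a logistic path, capital follows its logistic trajectory, and
# employment L_CK(t) is obtained from the labor formula (12).
gamma, delta, theta, kappa, lam = 2.37, 0.24, 0.19, 0.32, 1.996
alpha1, alpha2, u_a, th_a, T0 = 0.38, 0.07, 20.0, 0.18, 2018
K1, K2, uK, thK = 48464.0, 105000.0, 5.0, 0.1     # billions of USD

def K_t(t):                      # Eq. (9)
    return K1 + K2 / (1 + uK * np.exp(-thK * (t - T0)))

def alpha_t(t):                  # Eq. (13)
    return alpha1 + alpha2 / (1 + u_a * np.exp(-th_a * (t - T0)))

def L_CK(t):                     # Eq. (12), as reconstructed in the text
    a = alpha_t(t)
    denom = (1 - theta) * (1 - a + delta)
    return lam * (kappa / gamma)**(1 / denom) * K_t(t)**((1 - a - theta) / denom)

for t in (2018, 2025, 2035, 2050):
    print(t, round(float(L_CK(t)), 1))
```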


An additional reduction in the number of jobs will take place in various sectors of the economy due to the intensive use of robots, whose number can also be forecasted with the help of a logistic function:

$$R(t) = R_1 + \frac{R_2}{1 + u_R \exp\left[-\vartheta_R (t - T_{BR})\right]} \tag{14}$$

where R1, R2, uR, ϑR are constant parameters. To determine the numerical values of these parameters we have used the data from the International Federation of Robotics (IFR) on the number of robots operating in the US economy: 0.3 million robots in 1995 and 1.5 million robots in 2015. According to the forecast for 2025, there will be 4 million robots in the US economy. Taking into account these three points on the logistic trajectory given by equation (14), we find the numerical values of the constant parameters: R1 = 0.17 mln; R2 = 17.5 mln; uR = 132; ϑR = 0.121; TBR = 1995. So, knowing the numerical values of the constant parameters and using equation (14), we can calculate the forecast growth dynamics of the number of robots that will be used in the US economy until 2050.

Automation using robots aimed at performing both physical and mental work increases the demand for highly skilled personnel. However, the balance is negative for both employment and wages: they both decline, according to Acemoğlu and Restrepo (2016). The above authors analyzed the US labor market and formulated the following empirical law: one new robot per 1000 workers reduces employment by 0.18%, even taking into account the growth of employment in related industries, and the average pay is reduced by 0.25%. Later, these authors rounded these figures to 0.34% for employment and to 0.5% for wages, respectively. The empirical patterns for the US economy proposed by Acemoglu and Restrepo are the following:

$$\text{(a)}\ L_{CKR}(t) = L_{CK}(t) \times \left\{1 - \varepsilon_L \left[R(t) - R_0\right]\right\},\quad T_0 = 2018$$
$$\text{(b)}\ \bar{w}(t) = \bar{w}_0 \times \left\{1 - \varepsilon_w \left[R(t) - R_0\right]\right\} \times \exp\left[q_p (t - T_0)\right] \tag{15}$$

Here w̄0 is the average annual wage of an employee (in the USA in 2016, w̄0 = $60,000); qp is the average forecast inflation rate, which can be taken as 2% per year (qp = 0.02); R0 = 1.88 million robots in the US economy at the beginning of 2018; εL = (0.18–0.34) × 10⁻⁷ and εw = (0.25–0.5) × 10⁻⁷ are empirical coefficients. Since w̄(t) is the nominal average wage of one employee, the forecast dynamics of the total income of all workers can be calculated using both empirical patterns given by equation (15). Then:

$$\bar{Y}_{ph}(t) = \bar{w}(t) \times L_{CKR}(t) \tag{16}$$
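The robot trajectory (14) and the adjustments (15)–(16) can likewise be scripted. In the sketch below the coefficients εL and εw are applied per robot, so the difference R(t) − R0, held in millions, is rescaled by 10⁶; this unit convention, the use of the upper-bound coefficients, and the placeholder employment input are assumptions of the sketch, not the authors' calibration.

```python
import numpy as np

# Robotization effects on jobs and wages, Eqs. (14)-(16).
R1, R2, uR, thR, T_BR = 0.17, 17.5, 132.0, 0.121, 1995   # millions of robots
w0, qp, T0, R0 = 60000.0, 0.02, 2018, 1.88
eps_L, eps_w = 0.34e-7, 0.5e-7        # upper-bound empirical coefficients

def robots(t):
    """Eq. (14): forecast stock of robots, in millions."""
    return R1 + R2 / (1 + uR * np.exp(-thR * (t - T_BR)))

def employment_with_robots(t, L_CK):
    """Eq. (15a); (R - R0) converted from millions to individual robots."""
    return L_CK * (1 - eps_L * (robots(t) - R0) * 1e6)

def nominal_wage(t):
    """Eq. (15b): robot-depressed wage with 2%/yr nominal inflation."""
    return w0 * (1 - eps_w * (robots(t) - R0) * 1e6) * np.exp(qp * (t - T0))

def total_labor_income(t, L_CK):
    """Eq. (16): total nominal labor income of all workers."""
    return nominal_wage(t) * employment_with_robots(t, L_CK)

# Placeholder employment input of 150 million workers, for illustration only.
print(robots(2030), nominal_wage(2030), total_labor_income(2030, 150e6))
```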

The additional reduction of jobs caused by the automation of production with robots, calculated by equation (15a), is expressed by the curve LCKR in Fig. 2.


As can be seen from this figure, digital technology leads to the reduction of four times more jobs than automation with robots, although the effect of digital technology takes place in the virtual world, while we can see the work of robots in reality.

The real demand for goods and services depends mainly on the real income of private households Ȳph(t) as expressed by equation (16). To determine the relationship between the income of private households and the real aggregate demand for goods and services, let us use the basic economic formula:

$$Y = C + I + G + NX \tag{17}$$

where C(t) is the total consumption of households; I(t) is gross investment; G(t) is public expenditure; NX(t) is net exports. Assuming that C(t) = c̄ Ȳph (c̄ is the average household consumption rate), I = s × Y, G = τ × Y (τ is the average tax rate), and NX = 0 (in the case of balanced international trade), we obtain the following relationship between the aggregate real demand for goods and services (Yrd) and the real total annual household income (Ȳph):

$$Y_{rd}(t) = c_0 \times \bar{Y}_{ph}(t); \qquad c_0 = \frac{\bar{c}}{1 - s - \tau} \tag{18}$$
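As a quick arithmetic check of Eq. (18), using the US values quoted in the next sentence (c̄ = 0.82, s = 0.18, τ = 0.11), the demand multiplier works out as:

$$c_0 = \frac{\bar{c}}{1 - s - \tau} = \frac{0.82}{1 - 0.18 - 0.11} = \frac{0.82}{0.71} \approx 1.155 \approx 1.16$$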

For the USA, the following figures are relevant today: c̄ = 0.82; s = 0.18; τ = 0.11; c0 = 1.16. Figure 3 shows the forecasted growth trajectories of the potential US GDP up to 2050, calculated using the PF (1) under different scenarios of employment reduction Lp, LCK, LCKR (Fig. 2) and accelerated capital accumulation K expressed by equation (9). Technological progress was calculated using the Arrow model (equation 11). As can be seen in the graphs in Fig. 3, the technological replacement of labor with capital reduces the potential level of GDP to a lesser extent than automation with robots (YCKR). However, the curve Yrd, which reflects the real aggregate demand for goods and services, calculated by equations (15–18), goes up until 2025 and then begins to fall. As a result, the gap between the potential output and the aggregate real demand will rise to 20 trillion dollars, or 54% of GDP, by 2050. Therefore, in the digital economy, it is the demand that will play the key role, being a major constraint on GDP growth.

The question arises: how can we ensure consumer demand at the level of the expected output of goods and services? We have already stated above that most economists propose the introduction of a 'universal basic income' (UBI) for all adult citizens of the country, which should serve as a guaranteed minimum income, regardless of any other sources of income. The UBI is considered to be the most effective solution to the problem of decreasing income of citizens due to the technological replacement of jobs (Ford 2015). Moreover, it does not require large administrative expenses. To finance UBI, the redistribution of the tax burden from labor earnings to capital revenues would be required. If this is not done in time, the US economy will face a recession caused by falling consumer demand, the steady growth of structural unemployment, and explosive growth of social tension.


Fig. 3 Forecast of GDP dynamics allowing for technological replacement of jobs (YCK, YCKR) and falling demand from the households (Yrd) (authors’ creation)

Let us now estimate the value of UBI for the USA. First, it is necessary to forecast the growth dynamics of the US population, which can be calculated by a logistic function:

$$N(t) = N_1 + \frac{N_2}{1 + u_N \exp\left[-\vartheta_N (t - T_{bN})\right]} \tag{19}$$

where N1, N2, uN, ϑN are constant parameters; TbN is the base year from which the approximation of population growth begins. We assume that TbN = 1950 and, using the retrospective data of the population growth in the USA for the period from 1950 to 2016, we determine the numerical values of the parameters: N1 = 0.009 million people; N2 = 617.76 million people; uN = 2.93; ϑN = 0.018. We also found that equation (19) is verified with a high correlation coefficient.

Furthermore, the gap between the potential output YCK (taking into account the forthcoming technological replacement of labor by capital) and the real aggregate demand Yrd is compensated by the growing UBI, whose dynamics is also approximated by a logistic function:

$$r_b(t) = \frac{r_{b0}}{1 + u_b \exp\left[-\vartheta_b (t - T_{b0})\right]} \tag{20}$$

where rb0, ub, ϑb are constant parameters; Tb0 is the year of the introduction of the UBI. The total UBI received by the adult population of the USA is determined by the expression:

$$Y_{Nb}(t) = \psi \times N(t) \times r_b(t) \tag{21}$$

where ψ is the coefficient that allows for the share of the adult population receiving the UBI (ψ ≤ 1). Now, the aggregate demand on the part of households, considering both the labor income Ȳph (16) and the UBI (21), will be calculated as follows:

$$Y_{rdb}(t) = c_0 \, \bar{Y}_{ph}(t) + Y_{Nb}(t) \tag{22}$$

Taking Tb0 = 2018 as the initial year of the UBI introduction and allowing for the coincidence of the trajectories YCK(t) and Yrdb(t), we calculated the parameters of the forecasted growth function of UBI given by equation (21): rb0 = $50; ub = 50; ϑb = 0.22. The growth trend of the aggregate household demand, which allows calculating the UBI, given by equation (22), and the growth curve of the UBI given by equation (20) are shown in Fig. 4. As can be seen from Fig. 4b, the starting point should be $50 per person per month, which should later be increased along a logistic path: 500 dollars per person per month in 2025; 1000 dollars per person per month in 2030; more than 2000 dollars per person per month in 2035; more than 3000 dollars per person per month in 2040; and more than 4000 dollars per person per month in 2050. It should be noted that this task is feasible given there is the political will of the US government. For comparison, in 2016 Switzerland held a referendum on granting a UBI amounting to 2500 Swiss francs, which is approximately $3000, to each adult citizen of the country. However, to the astonishment of the whole world, the Swiss voted against the introduction of the UBI, fearing that this would discourage the motivation of their children to receive higher education and good qualifications.

5 Models of Income Distribution in Society. Growth of Income Inequality

Dragulesku and Yakovenko (2001) have shown that the distribution of the annual income of families with one or two working parents is expressed by the exponential and Rayleigh distribution laws:


Fig. 4 Recovery of the aggregate household demand (Yrdb) by the introduction of the basic income (YNb) (a) Influence of the demand on GDP dynamics (b) logistic dynamics of the required basic income (authors’ creation)

$$\text{(a)}\ Y_{ph1} = \frac{1}{r_{m1}} \exp\!\left(-\frac{r}{r_{m1}}\right); \qquad \text{(b)}\ Y_{ph2} = \frac{r}{r_{m2}^{2}} \exp\!\left(-\frac{r}{r_{m2}}\right) \tag{23}$$

where r is the annual family income, in thousands of US dollars; rm1 and rm2 are the mathematical expectations (average values) of family income. It turned out that these distributions are also valid for incomes up to 120 thousand US dollars. As expected, the distribution of income for wealthy families with incomes over $120,000 turned out to follow the Pareto law with an exponent of h = 2.7:

$$Y_{sp}(t, r) = \frac{h(t) - 1}{Y_{sp0}} \times \left(\frac{Y_{sp0}}{r}\right)^{h(t)} \tag{24}$$

where Ysp0 is the lower limit of income for wealthy families. According to our estimates, the numerical values of the parameters turned out to be: h = 2.5 (for 2016); Ysp0 = 570 thousand dollars. Obviously, they have changed significantly over the past 20 years. The total income of private households Ȳph(t), determined by us earlier (16), is distributed between households with one and two employees:

$$Y_{ph}(t, r) = \bar{Y}_{ph}(t) \times \left[\nu_1(t) \, Y_{ph1}(r) + \nu_2(t) \, Y_{ph2}(r)\right] \tag{25}$$
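Equations (23)–(25), together with the Pareto tail (24), can be combined into a small numerical sketch of the income density. The mean incomes, shares and income grid below are illustrative placeholders rather than the fitted US values:

```python
import numpy as np

# Household income density sketch: exponential (one earner) and Rayleigh
# (two earners) components of Eq. (23), their mixture (Eq. 25), and the
# Pareto tail for wealthy families (Eq. 24). Incomes in thousands of USD.
def y_ph1(r, rm1):                 # exponential density, Eq. (23a)
    return np.exp(-r / rm1) / rm1

def y_ph2(r, rm2):                 # Rayleigh density, Eq. (23b)
    return r / rm2**2 * np.exp(-r / rm2)

def y_middle(r, total_income, nu1, rm1, rm2):   # mixture, Eq. (25)
    return total_income * (nu1 * y_ph1(r, rm1) + (1 - nu1) * y_ph2(r, rm2))

def y_pareto(r, h, y_sp0):         # wealthy-family tail, Eq. (24)
    return np.where(r >= y_sp0, (h - 1) / y_sp0 * (y_sp0 / r)**h, 0.0)

r = np.linspace(1, 1000, 2000)     # annual income grid, thousand USD
middle = y_middle(r, total_income=1.0, nu1=0.5, rm1=60, rm2=55)
rich = y_pareto(r, h=2.5, y_sp0=570)
print(middle[:3], rich[-3:])
```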

where ν1 and ν2 are the shares of families with one and two employees, with ν1 = 1 − ν2. It is a well-known fact that in 1996 the income of American households was described by the approximation 0.45Yph1 + 0.55Yph2 (Dragulesku and Yakovenko 2001). In 2010, the approximation was different: 0.5Yph1 + 0.5Yph2. The peak of the economic activity of women in the US took place in 2000, when 60% of women worked on a par with men (Ford 2015, p. 68). Since then, this indicator has been decreasing. Due to the intensive technological replacement of jobs, the number of families with two workers ν2 will only decrease in the future, and the number of families with one employee will increase. Let us assume that this replacement will follow the logistic law:

$$\nu_1(t) = \frac{\nu_{10}}{\nu_{10} + (1 - \nu_{10}) \exp\left[-\vartheta_{\nu} (t - T_{0\nu})\right]} \tag{26}$$

where ν10 and ϑν are constant parameters. Let us assume that by 2050 most families will have just one worker; then, knowing that ν10 ≈ 0.5 in 2010, we find from (26) that ϑν ≈ 0.067.
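This calibration is easy to reproduce; the short sketch below (an illustrative script of ours, not part of the original model) evaluates the logistic share (26) with ν10 = 0.5, T_0ν = 2010, and ϑν = 0.067:

```python
# Logistic share of one-earner families, Eq. (26), with nu10 = 0.5 (year 2010)
# and theta_nu = 0.067 as derived in the text. Illustrative script only.
import math

def nu1(t, nu10=0.5, theta_nu=0.067, t0=2010):
    """Share of families with one employee in year t, Eq. (26)."""
    return nu10 / (nu10 + (1.0 - nu10) * math.exp(-theta_nu * (t - t0)))

for year in (2010, 2030, 2050):
    print(year, round(nu1(year), 2))
# -> 0.50 in 2010, about 0.79 in 2030 and about 0.94 in 2050,
#    i.e., by 2050 most families indeed have only one worker.
```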

Eventually, we obtained the distribution of household income of the middle class, including the poor (25), and the distribution of household income of wealthy families (24). Their mathematical expectations are important for comparative analysis and can be obtained from the above distributions:

(a)  r_{mph}(t) = \nu_1(t)\, r_{m1} + 2\nu_2(t)\, r_{m2} = \left[\nu_1(t) + 2z\,\nu_2(t)\right] r_{m1}; \qquad (b)  r_{msp}(t) = \frac{h(t) - 1}{h(t) - 2}\, Y_{sp0}; \qquad r_{m2} = z\, r_{m1}    (27)

Here, z is a coefficient that accounts for the difference between the wages of men and women in a family; if they have the same income, then z = 1. As r_m1 we can take the average annual forecast wage of an employee, which was determined earlier in (15b):

r_{m1} = w_0 \left\{1 - \varepsilon_w \left[R(t) - R_0\right]\right\} \exp\left[q_p (t - T_0)\right]    (28)

This average income may nominally increase slightly, although in real terms it will only decrease. If we calculate the average income of households belonging to the middle class (27a), also including the poor, we will see that it will gradually diminish as the number of families with two workers decreases. As for wealthy families, their average income (27b) will continue to grow, because the Pareto index h(t) will decrease, asymptotically approaching 2. Y_sp0 will grow slowly, but the law of its growth is still unknown. It should be noted that h is inversely proportional to the capital share α̃ in GDP (13):

h(t) = \frac{h_1}{\tilde{\alpha}(t)}, \qquad h_1 = \text{const}    (29)

Based on retrospective data for the Pareto index h(t), we estimated the value h1 = 0.95. We introduce an approximate gauge of income inequality as the ratio of the average income of rich families (27b) to the average income of middle-class families, including the poor (27a):

\xi(t) = \frac{r_{msp}(t)}{r_{mph}(t)} = \frac{h(t) - 1}{h(t) - 2} \cdot \frac{Y_{sp0}}{\left[\nu_1 + 2z(1 - \nu_1)\right] r_{m1}}    (30)

Here, h(t) is calculated by Eqs. (29) and (13), ν1 by Eq. (26), and r_m1 by Eq. (28). The growth curve of income inequality ξ(t), calculated by expression (30) with the constant value Y_sp0 = 570 thousand dollars and the minimum value εw = 0.25 × 10^{-7}, is shown in Fig. 5a. As can be seen from the graph, if no radical measures are taken to redistribute income, then income inequality, which already exceeds a tenfold gap, will reach a 20-fold gap in the 2030s and a 50-fold gap by 2050. The fact that the lower limit of income of wealthy families, Y_sp0, tends to increase is neglected here. The polarization of income can be seen in Fig. 5b, which shows the distribution curves of the annual household income of the middle class, including the poor (on the left in Fig. 5b), and of rich families (on the right in Fig. 5b), obtained from equations (24) and (25) for 2017, 2030, and 2050, respectively.


Fig. 5 Growth of inequality and polarization of income of the US population. (a) Growth of income inequality. (b) Distribution of households by income (authors' creation)

The graph shows that families with an average annual income ranging from $300,000 to $600,000 tend to disappear by 2050. This signals the future disappearance of the middle class, which has always been the pillar of democracy and stability.
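Readers who want to experiment with the inequality gauge can chain Eqs. (26), (27), and (30) directly. In the following sketch the yearly values of h and r_m1 are placeholder assumptions of ours (in the chapter they follow from Eqs. (13), (28), and (29)), so the printed numbers only illustrate the mechanics of the formula, not the calibrated curve in Fig. 5a:

```python
# Illustrative evaluation of the inequality gauge xi(t), Eq. (30).
# The (h, r_m1) snapshots below are placeholder assumptions chosen only to
# exercise the formula; in the chapter they follow from Eqs. (13), (28), (29).
import math

Y_SP0 = 570.0   # lower income limit of wealthy families, thousand $
Z = 1.0         # equal wages of men and women assumed

def nu1(t, nu10=0.5, theta_nu=0.067, t0=2010):
    """Share of one-earner families, Eq. (26)."""
    return nu10 / (nu10 + (1.0 - nu10) * math.exp(-theta_nu * (t - t0)))

def xi(t, h, r_m1):
    """Ratio of mean rich income (27b) to mean middle-class income (27a), Eq. (30)."""
    r_msp = (h - 1.0) / (h - 2.0) * Y_SP0
    r_mph = (nu1(t) + 2.0 * Z * (1.0 - nu1(t))) * r_m1
    return r_msp / r_mph

# Placeholder parameter snapshots (NOT the chapter's calibrated trajectories):
for year, h, r_m1 in [(2017, 2.50, 90.0), (2030, 2.35, 80.0), (2050, 2.15, 70.0)]:
    print(year, round(xi(year, h, r_m1), 1))   # roughly 14, 23 and 59
```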

In his famous book "The Great Divide," J. Stiglitz revealed the true picture of income inequality in the US and the polarization of American society, as well as their negative consequences for the American people and for the whole world (Stiglitz 2015). He brought the problem of inequality to the center of the public debate and suggested ways out of the current depression toward further economic recovery. Stiglitz argues that "markets should be tamed and ennobled so that they work for the benefit of most citizens" (Stiglitz 2015). He calls for real decisions to be taken to maintain the middle class: raising taxes on corporations and the rich; more investment in education and health, as well as in science; help provided to households, not banks; economic recovery and full employment. However, these problems are unlikely to be solved in the era of the digital economy.

6 Conclusions

1. The era of large-scale digitalization, computerization, and robotization of all spheres of the economy and public life will lead to rapid growth of aggregate factor productivity. Industry 4.0, based on digital technologies and platforms, additive technologies, and robotics, will gradually solve the problem of shifting from the mass production of standard goods and services to the creation of diverse, high-quality goods and services that meet individual preferences and needs. This best corresponds to the growing demand for a variety of goods and services and leads to the extension of their range, thereby accommodating changes in consumer behavior.

2. Large-scale digitalization, computerization, and robotization of all spheres of economic and social life in the coming decades will boost the automation of production with robots and the technological replacement of labor with capital. The new stage of automation begins at a higher intellectual level of machines that can learn and improve during the production process. So far, automation has displaced human workers from the field of manual labor; now the advances in machine learning and artificial intelligence (AI) will enable a large-scale displacement of human workers in the sphere of intellectual work, often performed by representatives of the middle class, the pillar of democracy and stability in modern societies.

3. There will be a growing tendency to reduce the employment and wages of workers due to the large-scale technological replacement of jobs by intelligent computers and robots. By 2050, in the USA alone, about 150 million jobs will be technologically displaced, and only about 100 million jobs will be performed by people, which corresponds to the level of 1980. Moreover, most of these jobs will be performed by low-skilled and low-paid workers.

4. The decrease in real household income, caused by continuous job cuts and lower wages, will lead to a steady drop in consumer demand for much of the US population. This will subsequently lead to a drop in production and a slowdown of economic growth. For aggregate demand to recover to the level of the expected output of goods and services, it will be necessary to introduce a universal basic income (UBI) for all adult citizens or, alternatively, targeted


support for the poor, which will entail high expenses since it will cover most of the population.

5. Polarization of the labor force will intensify, which will lead to the growth of income inequality and social tensions. In the labor market, there will be a growing demand for highly skilled (highly paid) and creative specialists capable of programming computers and robots, and for low-skilled, low-paid workers whose work is unprofitable to automate. The jobs requiring medium qualifications will be performed by computers and robots equipped with AI elements. Consequently, the middle class, which has already begun to diminish in recent decades, will disappear.

Obviously, with the beginning of the large-scale application of digital technologies and robots, the world around us will change dramatically. Therefore, the urgent problem today is timely and efficient adaptation to future changes. Unfortunately, these changes entail not only new extraordinary opportunities but also big risks. The development of effective social innovations is necessary to overcome the negative consequences of the new era of intelligent automation of production and management.

Acknowledgments This article was prepared with the financial support of the Russian Science Foundation (Grant No. 18-18-00099).

References

Acemoğlu, D., & Restrepo, P. (2016). Robots and jobs: Evidence from the US. https://voxeu.org/article/robots-and-jobs-evidence-us
Akaev, A. A., & Rudskoy, A. I. (2015). A mathematical model for predictive calculations of the synergetic effect of NBIC-technologies and an estimation of its influence on economic growth in the first half of the 21st century. DAN, 461(4), 383–386.
Arrow, K. (1962). The economic implications of learning-by-doing. Review of Economic Studies, 29(80), 155–173.
Brynjolfsson, E., & McAfee, A. (2016). The second machine age: Work, progress and prosperity in a time of brilliant technologies (2nd ed.). New York: Norton.
Dragulesku, A., & Yakovenko, V. M. (2001). Exponential and power-law distributions of wealth and income in the United Kingdom and the United States. Physica A, 299, 213–221.
Ford, M. (2015). The rise of the robots. New York: Basic Books.
Gringard, S. (2015). The internet of things (MIT Press essential knowledge series). London: MIT Press.
Hayek, F. (1976). Law, legislation and liberty. Chicago, IL: University of Chicago Press.
Kagermann, H., Wahlster, W., & Helbig, J. (2013). Recommendations for implementing the strategic initiative INDUSTRIE 4.0. Frankfurt/Main. http://www.acatech.de/fileadmin/user_upload/Baumstruktur_nach_Website/Acatech/root/de/Material_fuel_Sonderseiten/Industrie_4.0/Final_report_Industrie_4.0_accessible.pdf
Kaldor, N. (1961). Capital accumulation and economic growth. In F. Lutz & D. Hague (Eds.), The theory of economic growth (pp. 177–222). New York: St. Martin's Press.
Piketty, T. (2013). Le Capital au XXIe Siècle. Paris: Éditions du Seuil.
Schwab, K. (2016). The fourth industrial revolution. Cologny/Geneva: World Economic Forum.
Stiglitz, J. (2015). The great divide. New York: W. W. Norton.

Revisited Economic Theory or How to Describe the Processes of Disequilibrium and Instability of Modern Economic Systems

A. A. Akaev and V. A. Sadovnichiy

Abstract The world financial and economic crisis of 2008–2009 convincingly showed that the modern market economy is unstable, unbalanced, and develops cyclically. The article discusses the dynamics of the digital economy, generated by the innovations of the NBIC technological and fourth industrial revolutions and forming the basis of the sixth Kondratiev Big Cycle (2018–2050). The digital economy solves the epochal task of moving from the mass production of standard goods to original goods that meet individual needs and preferences, which reflects current trends in demand. The article also examines the relationship between synergetics and the digital economy, since synergetics deals with unstable and nonequilibrium systems and focuses on nonlinear phenomena in economic evolution, such as structural changes, bifurcations, and chaos, that will accompany the process of digital economy formation. In the era of the digital economy, capital markets are neither stable nor self-optimizing, and they need supervision and management. In this regard, the rethinking and new reading of the ideas of J. Keynes and H. Minsky on the role of the state in ensuring effective governance are extremely relevant.

Keywords Digital economy · Synergetics · Cycles · Anti-crisis theories · Unstable economy

A. A. Akaev (*)
Institute for Mathematical Research of Complex Systems, Lomonosov Moscow State University, Moscow, Russia

V. A. Sadovnichiy
Moscow State University named after M.V. Lomonosov, Moscow, Russia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Devezas et al. (eds.), The Economics of Digital Transformation, Studies on Entrepreneurship, Structural Change and Industrial Dynamics, https://doi.org/10.1007/978-3-030-59959-1_3


1 Introduction

The global financial and economic crisis of 2008–2009 convincingly showed that the modern market economy is unstable, unbalanced, and develops cyclically. Now there are signs that in the 2020s, developed economies, due to intensive and ubiquitous digitalization and robotization, will become more and more unstable and will develop mainly in conditions of disequilibrium. In our opinion, the Schumpeter–Kondratiev innovation-cyclic theory of economic development, the Prigogine–Haken synergetic principles for describing the dynamics of nonequilibrium systems, and the Keynes–Minsky stabilization theory, based on stimulating economic activity through an effective fiscal policy of the government and monetary policy of the central bank, are the most suitable for describing such economic dynamics.

In the second quarter of the last century, N. Kondratiev (1925) and J. Schumpeter (1934) proposed a theory describing the cyclical processes of economic dynamics, known as the Schumpeter–Kondratiev cyclical theory of economic development. According to this theory, cycles in the economy are generated by an uneven innovation process, and the cyclical movement of output is a form of deviation from equilibrium, so that most of the time the economy develops in conditions of disequilibrium. Later, the Schumpeter–Kondratiev theory was comprehensively substantiated by the works of G. Mensch (1979), C. Freeman (1987), and M. Hirooka (2006), and received practical confirmation during the implementation of the fourth (1945–1982) and fifth (1982–2018) Kondratiev waves.

A distinctive feature of modern economic systems is their instability and disequilibrium, which necessitates considering the relationship between synergetics and the digital economy from a new angle. Synergetics deals with unstable and nonequilibrium systems and focuses on nonlinear phenomena in economic evolution, such as structural changes, bifurcations, and chaos, that will accompany the process of formation of the digital economy. Two outstanding scientists of the twentieth century, I. Prigogine (1980) and G. Haken (1983, 1988), laid the foundations of synergetics. Synergetics shows that in an economic system that is in a state of disequilibrium, close to an unstable critical point, self-organization results in stable self-sustained oscillations. Thus, synergetic management methods allow the dynamic development of the economy in conditions of instability and disequilibrium by maintaining dynamic stability.

Keynes, in his investment theory of the cycle (Keynes 1936), was the first to explain the reasons why the capitalist economy is subject to cyclical fluctuations, crises, and depression. Keynes's theory laid the basis for the theory of financial instability, rejecting the efficient market hypothesis, which is the cornerstone of orthodox economic theory. This hypothesis claimed that markets come to a stable state of equilibrium on their own if they are not exposed to external influences, and that this equilibrium state is optimal in terms of resource allocation. To get out of the state of depression, Keynes proposed the idea of stimulating aggregate demand in the economy through deficit financing of government spending by the


government and easing monetary policy by the central bank. The idea was very fruitful and today it is widely used. The American economist H. Minsky (1986) creatively accepted and successfully continued Keynes’s research, comprehensively substantiating the endogenous tendency of the capitalist economy to financial instability. Minsky developed the financial theory of investments (1970s), according to which the economic dynamics are largely determined by how firms finance their investments in fixed assets and are also subject to instability. In this regard, Minsky supposed that the main task of the Central Bank is to ensure the financial stability of the credit system. Since capital markets are neither stable nor self-optimizing, they need supervision and management. Therefore, both Keynes and Minsky paid special attention to the role of the state in ensuring effective governance. This role of the state in the era of the digital economy is only growing. The authors of (Akaev and Sadovnichiy 2018) developed for the first time a closed mathematical model for describing and calculating the nonequilibrium trajectory of a long wave of Kondratiev’s economic development lasting 30–40 years, and also obtained a nonlinear differential equation of macroeconomic dynamics that describes an innovation-cyclic theory of Schumpeter–Kondratiev.

2 The Schumpeter–Kondratiev Innovation-Cyclic Theory of Economic Development

The global crisis of the financial and economic system of 2008–2009, which led to the "Great Recession" in the USA and a recession in most developed economies of the world, as well as the protracted global depression that lasted for almost 10 years, once again reminded politicians, economists, and businessmen of the uneven, unstable, and cyclical nature of the development of a market economy. Studying the laws of long-term phenomena taking place in the world economy, in the 1920s the Russian economist N. Kondratiev discovered large cycles of economic conjuncture, each lasting about half a century (Kondratiev 1922), which came to be called "Kondratiev's big cycles." At the beginning of the twenty-first century, the Japanese scientist M. Hirooka rightly called this "the epoch-making discovery of Kondratiev" (Hirooka 2006). Kondratiev himself identified the first (1780/90–1845/51) and the second (1845/51–1890/96) supercycles and suggested that a cyclical crisis and depression would occur in the late 1920s (Kondratiev 1925), which was confirmed in 1929 and the 1930s. Kondratiev's supercycles consist of upward and downward stages, which in turn comprise the phases of recovery (renewal), growth, recession (decline), and depression. The upward stage covers a period of some 20–25 years during which high economic conditions prevail in the world economy and it develops dynamically, easily overcoming short-term shallow recessions. According to Kondratiev's theory, crises, recessions, and depression must precede the beginning of the upward stage of the next


Kondratiev’s supercycle. Depression plays not only a negative role, inhibiting the economy, but also a positive one—stimulating the search for innovation. It was during this period that the technological structure and basic production capital changed, infrastructure replacement began and new organizational structures were introduced. And the rise begins again, starting the new supercycle. Kondratiev argued that large cycles are endogenous, i.e., they are intrinsic to the capitalist economy. It is important to note that he was the first to understand that wave-like cyclical movements of the economy are a process of deviations from the equilibrium state, to which the capitalist economy is allegedly striving. Consequently, most of the time, a healthy, dynamic economy develops in conditions of disequilibrium, while the classical economic theory has argued the opposite. Another great economist of the twentieth century, Joseph Schumpeter, developed Kondratiev’s doctrine of the supercycles and developed an innovative theory of long waves, integrating it into his general innovative theory of economic development (Schumpeter 1939). Schumpeter considered cycles as a direct consequence of the innovation process due to technological progress, and he, like Kondratiev, considered the cyclical movement of output to be a form of deviation from equilibrium. It is important to note that Schumpeter emphasized in his works that the main driving force of the capitalist economy is innovation and entrepreneurship, and not pure capital, as many economists of that era believed. Schumpeter argued that capital without innovation and initiative, will, and perseverance of the entrepreneur is useless and powerless to cause economic growth. Both Kondratiev and Schumpeter believed that there are three types of equilibrium, and therefore three oscillatory movements, which consist of short-term Kitchin cycles (3–5 years) caused by fluctuations in inventories; medium-term industrial cycles of Juglar (7–11 years), and large cycles of Kondratiev. The superposition of these three waves on the trend path of economic growth and their superposition gives, according to Schumpeter, the general state of the situation at every moment in time (Schumpeter 1939). Schumpeter first made the key assumption that innovations appear unevenly in time, then happen spontaneously in the form of bundles (clusters) of innovation. He distinguished between basic and improving innovations. He especially emphasized the key role of basic innovations in the cyclical dynamics of long waves of economic development, considering them as the main engine of the capitalist economy. Since the Kondratiev’s supercycle concept plays a major role in Schumpeter’s innovative theory of economic development (Schumpeter 1934), and also taking into account the fact that Schumpeter himself considered it the cornerstone of his own theory, we entitled it “Schumpeter-Kondratiev’s innovative cyclical theory of economic development” (Akayev 2013). However, by the time Schumpeter’s capital monograph Business Cycles was published in 1939 (Schumpeter 1939), developed western countries, including the USA, had already adopted Keynes’s teachings, which encouraged active government intervention in the economy to stimulate aggregate demand and create favorable conditions for attracting private investment. So, the Schumpeter–Kondratiev theory turned out to be outside the mainstream of economic science of the twentieth century. Now its time has come.


During the second research wave of Kondratiev’s supercycle in 1970–1990, the Schumpeter–Kondratiev theory was decisively substantiated in the writings of three economists of the late twentieth century—the German scientist G. Mensch, the English scientist K. Freeman and the Japanese scientist M. Hirooka. The main merit of G. Mensch’s contribution was empirical evidence of the fact that it is powerful clusters of basic innovations that are formed by self-organization during periods of depression that launch the next supercycle (Mensch 1979). Mensch entitled this fact “the trigger effect of depression,” bearing in mind that it is depression that contributes to the launch of an innovative process that ensures the transition of the economy from stagnation to recovery and further dynamic growth. Thus, Mensch indicated the endogenous mechanism of the transition of the lower turning point of the supercycle, from the depression phase to the upward stage of the new supercycle. Mensch also substantiated the endogenous mechanism of the upper turning point from upward to downward stages, when existing technologies no longer allow maintenance of high economic growth rates and satisfactory profit levels, and new technologies are not yet able to serve as a sufficiently powerful source of economic growth. This situation, which Mensch called the “technological stalemate,” leads to instability in development and crisis recessions. C. Freeman showed the importance of the process of diffusion of a cluster of innovations in food markets in the formation of cyclical fluctuations and that it serves as a specific mechanism that causes a long rise, covering the entire upward stage of the supercycle (Freeman 1987). The diffusion period of innovations in the modern era lasts about 20–25 years, until the market reaches a saturation state, which determines the duration of the upward stage of the supercycle. M. Hirooka was the first to establish that the diffusion of the innovation cluster is strictly synchronized with the supercycle upward wave and reaches its saturation in the region of the highest peak of the cycle, and also proved the close correlation of the diffusion of innovations to the markets and the rise of the upward stage of the supercycle (Hirooka 2006). Another outstanding achievement of Hirooka was the development of a new innovation paradigm that allows us to predict the dynamics of the receipt and distribution of innovations in the markets for 20–25 years, even at the stage of development of innovative basic technologies of the future technological structure (Hirooka 2006). Hirooka claims that the supercycle concept not only retains its strength in the twentyfirst century, but also acquires special significance. Schumpeter–Kondratiev’s innovative-cyclical theory of economic development is valuable in that it offers an effective mechanism for overcoming the global cyclical crisis and the ensuing depression through the “launch and comprehensive stimulation of the storm of a new generation of highly effective basic technological innovations” (Mensch 1979), in order to replace obsolete production technologies and forms of organization of production. Significantly, this theory in a certain way indicates the onset of a period of crisis and depression, and also has an innovative paradigm for predicting the beginning of a new cycle (Hirooka 2006). The innovative cyclic theory of Schumpeter–Kondratiev was fully confirmed during the implementation of the fifth supercycle (1982–2018). 
The core of the fifth technological structure is microelectronics, personal computers, information


and communication technologies (ICT) and biotechnologies (Hirooka 2006). Microprocessors and computers have become technologies of wide consumption and their widespread use has provided a revolution in the technology of production of goods in all sectors of the economy and the management of dynamic objects. In the work (Akayev 2013) it was shown that the beginning of the fifth supercycle dates back to 1982. Indeed, it was in 1982 that the world economy began to recover, which then grew into a long period (1982–1994) of stable and fairly rapid economic growth with an average annual rate of 3.4%, which ended in a slight decline in 1995. Further, from 1996 to 2006, genuine prosperity was observed as the growth rate of labor productivity reached 2.8% per year and almost doubled that of the previous decade (1985–1995). These achievements were the result of the widespread use of ICTs and an unprecedented surge in investment in this area. This explains the phenomenon of spasmodic growth in labor productivity in the second half of the 1990s in developed countries. In developed countries, it was then that they started talking about the emergence of a new economy—the “knowledge economy.” However, by the mid-2000s, ICT-driven productivity growth had ended. According to Mensh, this meant that the fifth supercycle reached the top turning point and it was already necessary to begin the search for innovative technologies and products of the next generation. In 2006–2007 the economic growth rates in the OECD countries have already begun to decline, which meant a transition from the upward stage of the fifth BCC to the downward one. Thus, 2006 was the top turning point of the fifth BCC. The duration of the upward stage of the fifth BCC, as expected, was 24 years (1982–2006). Not even three years had passed before the global financial and economic crisis of 2008–2009, which entitled the Great Recession, erupted. In the work (Akayev et al. 2009), based on the Schumpeter–Kondratiev theory, we predicted that the post-crisis depression that swept the developed countries would be protracted and would last until 2017–2018, which was later confirmed. In the work (Akaev et al. 2011) we also showed that the forerunner of global cyclical financial and economic crises is the explosive rise in prices for highly liquid commodities such as oil and gold and have developed a nonlinear dynamic model to predict the time of the onset of the crisis. Using this model, consisting of accelerating log-periodic fluctuations superimposed on an exponential increasing price trend, described by a power-law function with quasi-singularity at a finite moment in time, we successfully predicted the start date of the second wave of the global financial crisis that occurred on August 4, 2011, 9 months before the crisis with an error of only two weeks (Akaev et al. 2011). Further, with the use of Hirooka’s innovative paradigm (Hirooka 2006), we also described the trajectory of developing basic technologies for the sixth technological paradigm and predicted the start date for the rise of the sixth supercycle—2018. Then, we calculated the economic potential of NBIC technologies (Akaev and Rudskoi 2013, 2015, 2016). Since NBIC technologies are mutually convergent, thanks to their cooperative action, a significant synergistic effect is achieved, which will accelerate the pace of technological progress in developed countries to 3% and higher by 2030, which is much better than the same indicator during the rise


of the fifth supercycle (1982–1994), which is equal to 2.3% (Akaev and Rudskoi 2015, 2016). Obviously, since 2018 we have been observing the rise of the sixth supercycle, which can last twenty years. The question arises: will the current synchronous growth of the developed economies of the world be sustainable in the medium term? Kondratiev and Schumpeter noted in their classic works that economic growth in the initial period of recovery at the upward stage of the supercycle is subject to various risks that make it unstable, and recommended that governments actively assist entrepreneurs in overcoming them. In (Akayev and Korotaev 2019), we described the main risks that stand in the way of the current sustainable economic recovery: the huge accumulated debt of governments, households, corporate, and financial sectors; the widening gap between the real economy and the financial sector; the accelerating growth of excess income inequality; acute shortage of consumer demand; instability of the financial system and a sharp increase in protectionism by developed countries, resulting in trade wars, as well as increased environmental threats. Therefore, there is reason to expect that the rise of the sixth supercycle that has begun will be rather slow and unstable and may be interrupted by crisis recessions, albeit short-lived, and not as deep as the Great Recession of 2009. Thus, in accordance with the theory of Schumpeter–Kondratiev, the rise of the sixth supercycle in the 2020s will occur as a result of diffusion into the economy of innovative technologies and products of the fourth industrial revolution (Schwab 2016; Schwab and Davis 2018), amid growing imbalances and unsustainable economic growth. Consequently, the economic policies of governments should contribute to the broadest implementation of innovations in the economy and maintain dynamic stability at high growth rates. Although private investment is a direct driver of economic growth and job creation, favorable conditions for this are created by public investment in the public sector and, above all, in education and the creation of new infrastructure. Countries with highly developed ICT infrastructure are prone to higher rates of economic growth. Consequently, the importance of ICT in the next decade is increasing dramatically. Since the start-up time of the innovation process takes a significant period, covering the depression phase, the recovery phase, and partly the recovery phase, the next few years, until 2025, will remain favorable for the development and implementation of the richest cluster of a new wave of basic NBIC technologies, as well as digital technologies and platforms. Naturally, all this will require a transition to a soft stimulating monetary policy. So, finally, the Schumpeter–Kondratiev theory, which most adequately describes the nonequilibrium and uneven cyclical economic development, should be in the mainstream of economic science and become the basis of the real economic policy of responsible governments. We do not contrast Schumpeter–Kondratiev’s theory with other economic theories—neo-Keynesianism, neoclassical synthesis, monetarism, etc., but we believe that it should become a long-term pivot with which the measures required by a specific situation are combined in specific periods (phases and stages of the supercycle) arising from other classical theories (Akaev 2011). Our key idea is that governments, in shaping their economic and financial policies, must rely on the


Schumpeter–Kondratiev innovation cyclic theory as a basic long-term development strategy. In various phases of development throughout the supercycle, the role of the state is different, it changes; The economic and financial policies of the government in managing economic development are also changing, as shown in (Akayev 2011, 2013).

3 On the Digital Economy Dynamics

On the boundary between the twentieth and twenty-first centuries, the NBIC technological revolution began (Bainbridge and Roko 2006; Roko 2011), which led to the breakthrough creation of information and digital technologies, advanced computers and robots with elements of artificial intelligence (AI), smart touch devices (sensors), and the Internet of things. All these technologies are capable of generating large-scale changes in productive forces, many times surpassing the achievements of the third industrial revolution, based on microelectronics and ICT (Schwab 2016). Therefore, the world started talking about the technologies of the fourth industrial revolution (Schwab 2016; Schwab and Davis 2018) and the creation of Industry 4.0 (Kagermann et al. 2013). The industrial Internet is becoming the basic infrastructure of Industry 4.0 (Gringard 2015): a digital platform that provides effective interaction between all Internet-based industrial production facilities.

Undoubtedly, among the main innovations generated by the NBIC technological revolution are nanochips and biochips, as well as quantum computers, which led to the creation of highly efficient digital technologies with elements of artificial intelligence. They increased the computational power of computers by orders of magnitude, which led to breakthrough achievements in the field of machine learning and AI. The practical implementation of the most sophisticated digital technologies for analyzing and processing big data, as well as cloud computing through services that provide computing and software resources via the Internet, has become possible. A multifunctional digital information technology intended for the reliable accounting of assets and operations with them has also appeared: Blockchain technology, which can become a reliable economic shell on the Internet, serve online payments, decentralize digital asset exchange, and issue and use "smart contracts" (Swan 2015).

Industry 4.0, or the digital economy, solves the problem of moving from the mass production of standard goods to the creation of high-quality goods and services that meet individual needs and preferences. In describing economic dynamics in the digital age, technological progress will play a key role, determining aggregate factor productivity (AFP). Moreover, it is important that it is directly determined by the dynamics of the production of technological information, since the main factors of the digital economy are knowledge and know-how embodied in digital technologies, i.e., technological information. This is exactly what we were able to do in (Akaev and Sadovnichiy 2018).


Therefore, the equations of economic dynamics in the digital age must be presented depending on the main control variable—the volume of technological information, which embodies the knowledge and know-how used in the production of goods and services. Since the digital economy means the transition from the mass production of standard goods to the production of single samples that meet individual consumer preferences, we believe that the Ramsey–Kass–Kupmans mathematical model of economic growth with the optimization of the usefulness of consumption of a representative household is most suitable for describing the dynamics of the digital economy (Barro and Sala i Martin 1990). From a review of this model, it follows that further economic progress will require an exponential increase in technological information (Akaev and Sadovnichiy 2019), which may well be provided by exponential digital technologies (Ford 2015; Kurzweil 2005; Schwab and Davis 2018). It is the exponential growth of technological information and our ability to process it in real time that will determine economic growth and prosperity in the twenty-first century. An extensive empirical study conducted by the Economist Intelligence Unit in 2003 (EIU 2003) made it possible to formulate the following conclusion regarding the impact of ICT on productivity and economic growth: “There is a significant time lag between investments in the ICT sector and the emergence of a positive impact of ICT on economic development and labor productivity.” Indeed, the economic effect of the widespread use of ICT in the form of a spasmodic increase in labor productivity in developed economies from 0.8 to 1.5 percentage points was observed in the second half of the 1990s, although the share of investments in this sphere was quite high (about 4–6%) since the 1970s. In (Brynjolfsson and Adam 2010) it was shown that this is natural and is explained by the hypothesis of ICTs as general-purpose technologies that require a certain time lag to form their technical and economic environment (paradigm), where their full impact is manifested. This lag time is about 14 years (Akaev and Sadovnichiy 2018). A similar effect will be observed in the era of the digital economy, since digital technologies also belong to general-purpose technologies and will be widely applied in all areas of the economy, management, and public life. Moreover, the digital economy is a natural extension of the information industries of the modern economy. That is why, the next 5 years, will be devoted to the vigorous formation of a digital platform infrastructure in order to overcome the critical threshold barrier, beyond which the digital economy will begin to increase labor productivity and accelerate economic growth. In our work (Akaev and Sadovnichiy 2018), in particular, it is predicted for the US economy that this will happen between 2022–2026 in the form of an increase in labor productivity by 1.1 percentage points. At the same time, digital technologies also have disadvantages, the main of which is a very intensive labor-saving property. From the very beginning, digital computer technologies automated many routine tasks, as a result of which the once countless traditional office jobs related to accounting, reporting, and bookkeeping disappeared. Robots replaced workers in serial conveyor production. Large-scale digitalization, computerization, and robotization of all spheres of economic and


social life in the coming decades will accelerate the process of automation of production and technological replacement of labor by capital. In fact, a new stage of automation begins at a high intellectual level of machines that can learn and improve in the process of production activity. If until now automation has forced a person out of the sphere of routine physical labor and services, now it will force out a person from the sphere of mental labor, replacing representatives of routine intellectual labor, i.e., many of which are middle-class specialists of average qualification. The calculations performed by us according to the model of employment taking into account the technological replacement of jobs show that in the USA alone, about 20 million jobs will be technologically replaced by 2030, and people will get about 150 million jobs, i.e., at the 2015 level. This means that about a third of the total number of employees today could be left without work. According to the forecast of experts of the company Mckinsey Global Institute (MGI 2013), by 2055 half of the existing jobs in all countries of the world will be eliminated thanks to the full automation of production. Robots could leave 1.1 billion workers around the world without work and deprive them of their salaries in the value of 15.8 trillion $. As a result of such a reduction in jobs, labor productivity on a global scale will grow steadily, increasing by 0.8–1.4% per year (MGI 2013). The technological progress generated by the fourth industrial revolution will steadily increase the productivity of the main factors (capital and labor) of economic growth. Due to the growth of aggregate factor productivity (SPF), national income (GDP) will also grow. However, workers’ incomes are unlikely to increase. The fact is that median income in a number of developed countries stopped growing in the 1980s, although before that it had been growing for decades in proportion to productivity, following its growth (Brinolfsson and McAfee 2014, p.176–178). This was the result of the transition to a neoliberal model of the development of capitalism and a gradual departure from the social orientation of a market economy. Accordingly, the process of growing income inequality in society has also begun. This process accelerated after the 2000s. If stagnation of the median salary was observed before 2000, now it has already begun to decline, although SPF in developed economies has been steadily growing all this time (Brinolfsson and McAfee 2014, p.136–140). It should be expected that in the 2020s this process will only worsen, as digital technologies will destroy jobs faster than create them, thereby increasing unemployment and causing a further decrease in median income. The fourth industrial revolution is irreversibly gaining momentum. So, the fourth industrial revolution, along with a positive phenomenon—the complete automation of production, as well as accelerating the growth of productivity and GDP, also has very negative social consequences—a sharp reduction in secondary jobs for the middle class and further increasing income inequality in society. The reduction of the middle class, which is the pillar of democracy and stability in developed societies, threatens social explosions, and to destabilize the situation in developed countries. On the other hand, the decline in real household incomes caused by job cuts and lower wages will lead to a steady decline in consumer demand for most of the population. And this, in turn, will lead to a


reduction in production and an economic downturn. There is a danger that most economies, both developed and developing, will develop in the 2020s in the face of a constant shortage of demand. However, it is demand that will determine the sustainability of the growth of the digital economy. Consequently, governments will have to re-learn Keynesian recipes to increase aggregate demand in order to sustain the potential digital growth of economies.

4 Synergetic and Digital Economies. Keynes–Minsky Theory of Stabilization of a Nonequilibrium and Unstable Economy

We have already noted above that in the 2020s the economies of advanced countries, under the influence of a powerful cluster of digital technologies, will develop under conditions of disequilibrium, deviating ever further from the equilibrium state in which they were during the years of depression (2010–2016). Moreover, the nonequilibrium periods of economic development, in accordance with the supercycle theory, are much longer than the equilibrium states, which are observed mainly during the years of stagnation and depression. On the contrary, traditional classical economic theory proceeded from the postulate of the predominant gravitation of the economy toward equilibrium. Adam Smith used the term "invisible hand" to mark the emergence of public benefit from the desire of people to obtain exclusively their own benefit. In the twentieth century, this term has been unreasonably widely used to describe the effectiveness of markets, meaning that markets have innate properties of self-regulation and a natural tendency toward a stable balance. In fairness, it should be noted that market stability was strictly proved only for the market of goods and services (Arrow 1972) and for some abstract economies. As for capital markets, they no longer have this property. Capital markets do not strive for a stable equilibrium; on the contrary, they are prone to self-fulfilling cycles of unlimited expansion and contraction.

J. Keynes was the first to realize that during the Great Depression of the 1930s the capitalist economy did not strive for a stable equilibrium, as the orthodox theory of market efficiency predicted. This prompted him to develop a new economic theory that rejected the efficient market hypothesis. Keynes (1936) assigned a central role to investments, the ways of financing them through the banking system, and the impact of the financial obligations of investing firms. He then created an investment theory that most adequately explained the reasons why the capitalist economy is subject to cyclical fluctuations and crises. And most importantly, he proposed specific measures that, in practice, make it possible to get out of the trap of extremely low economic activity inherent in the phase of


depression. These measures came down to state stimulation of aggregate demand in two ways: on the one hand, by increasing government spending without raising tax rates, i.e., through deficit financing, and on the other hand, by lowering the refinancing rate on the part of the Central Bank, thereby increasing consumption through the expansion of bank borrowing. The USA and Great Britain launched financialization in the 1980s after globalization in order to control and manage investment flows, capital flows, and world trade, turned commodity and commodity markets into financial ones. Therefore, it is not surprising that the USA and Great Britain, with the onset of globalization, simultaneously became pricing centers for resources and raw materials for the entire global economy, and the financial centers of these countries concentrated most of the global profits. With the transition to the digital economy, the role of information as one of the most important production resources has also increased dramatically. Information has become the object of everyday economic activity and has turned into a product itself and into other digital production products: databases; software; intellectual services; games and entertainment, etc. Business success has now become determined by the efficiency of work with a large amount of data and the speed of information processing. Information, like commodities, is now rapidly being financialized. And this dominance of financial capital in the world creates the conditions under which the increased risks inherent in all financial markets will manifest themselves in the real economy, and it will become even more unstable. In the classic economy, equilibrium and stability are basic properties that are universal in nature. However, they will be very limited in modern, and especially in the future, digital economy. For a long time, prominent economists understood this. For example, P. Drucker wrote about this as follows: “Economic theory assumes that the goal of economic policy is an equilibrium characterized by full employment. However, it is impossible to achieve a sustainable economic equilibrium. The only thing that can ensure full employment is dynamic imbalance. The economy is like a bicycle: it can maintain equilibrium only while driving. Growth is always volatile, but only a growing economy can be in equilibrium” (Drucker 1969). Namely, growth is the main goal of the modern economy, especially in developing countries. Kondratiev and Schumpeter, as noted above, while developing the supercycle theory and the innovation-cyclical theory of economic development, they believed that economic growth mainly occurs in conditions of disequilibrium, when moving from one equilibrium state to another. Classical figures of economics like K. Marx and J. Keynes also viewed competitive economics as an unstable system, while the neoclassical theory of growth assumed that the capitalist economy was knowingly stable. According to Keynes, the free market is not able to consistently maintain aggregate demand at a level that ensures full employment. One of the prominent representatives of the Post Keynesian school, H. Minsky (1986) borrowed Keynes’s “investment theory of the cycle” and developed it by adding his own “financial theory of investment,” according to which he argued that competition and financial crises lead to the financial sector of the economy and, first of all, various ways of speculative financing of investments, when, to pay off the existing debt, investor


firms require more and more borrowing, the extreme manifestation of which he called the “Ponzi scheme.” It is a Ponzi scheme, launched by certain structural changes, that creates unbearable debts and leads to the termination of lending by banks and a reduction in investment by firms, and also leads to the “Minsky moment” when a financial bubble bursts and a crisis sets in. The banking system is also collapsing. As a result, the state is forced to intervene in the process to save the banking system. According to Minsky’s financial instability hypothesis, financial markets can create their own (endogenous) driving forces that generate self-fulfilling cycles of credit expansion and inflation of asset prices, followed by cycles of credit reduction and asset depreciation. Minsky has shown for the first time that it is in conditions of stable economic development that investment financing regimes spontaneously evolve from predominantly wealthy regimes to an increasing share of speculative and even Ponzi financing, which ultimately leads to a financial crisis. Consequently, investment financing is the main source of instability in a capitalist economy. Minsky argues that a capitalist economy, which uses highly sophisticated methods of conducting financial activity, is volatile by its very nature (Minsky 1986). Moreover, this instability is the result of exclusively internal processes taking place in the capitalist financial system and economy. Keynes’s general theory revealed two fundamental flaws in the capitalist system—chronic unemployment and excessive income inequality. Minsky added a third to them: instability, which is an innate (endogenous) property of modern financial capitalism. Minsky reasonably believed that financial crises arise with a sharp reduction in financial sector regulation, which happened in the 1980–2000s that preceded the 2008–2009 financial crisis. Indeed, the policy of deregulation of the financial and banking systems, which was actively pursued in that period in the USA and a number of other developed countries, only intensified this instability. H. Minsky argued that instability is a normal result of modern financial capital. He believed that instability is also intrinsic to a dynamic capitalist economy, and not just to the financial system. That is why Minsky advocated active macroeconomic and institutional intervention by the state in the person of the “Big Government” and the “Big Bank” in order to limit the negative consequences of instability. Minsky considered the macroeconomic role of the state, first of all, as a way to prevent financial collapse during periods of recession and depression, through a stimulating fiscal and monetary policy. It is such a large-scale intervention of the state and the central bank in 2008–2009 that saved the USA and EU countries from the second Great Depression. It was in the mechanism of financing ownership of capital assets and investments that Minsky saw a key destabilizing factor in the financial and economic system. He believed that if a consumer-oriented economy uses less capital-intensive technology, then it will be less prone to financial instability (Minsky 1986). Therefore, since capital-intensive and labor-saving technologies will be mainly used in the digital economy, it follows that in the era of digital technologies, financial instability will only increase.


It is also crucial that Minsky did not see financial markets as self-optimizing. According to Minsky, the financial system can be in two self-fulfilling states: expansion and reduction of lending. It is noteworthy that the point in time at which the credit cycle moves from the expansion phase to the reduction phase was subsequently called the “Minsky moment.” So, the lending process of investing firms is the key to economic growth and improving the well-being of people. At the same time, as it was first discovered and comprehensively substantiated by Minsky (46), it has an endogenous tendency to instability. Therefore, Minsky believed that the main goal of the Central Bank is to ensure the financial stability of the credit system and only secondarily to control price stability. Recently, Cooper (2008) proposed to get rid of financial crises by allowing more small credit cycles, for which the Central Bank should periodically interrupt credit expansion in order to restrain the inherent instability of financial systems and thereby increase the longterm ability of the economy to grow well-being of citizens. The dynamics of nonequilibrium complex systems is the subject of a branch of science called “synergetics.” The foundations of synergetic science were laid by I. Prigogine (1980) and G. Haken (1983) and were developed in detail by representatives of their scientific schools. Haken defined synergetics as the collective effect of the interaction of a large number of subsystems, leading to self-organization in complex systems (Haken 1988), the spontaneous formation of stable spatial, temporal, or functional structures in them. The key to understanding the essence of synergetics, according to Haken, is the concept of “self-organization.” Prigogine preferred not to use the term “synergetics,” although in terms of its internal content, his research undoubtedly refers to the synergetic theory of evolution and selforganization of complex systems. Stable structures that can spontaneously arise and develop in active, scattering (dissipative) media in states far from equilibrium, he proposed to call the wonderful term “dissipative structures” (Prigogine 1980), which was entrenched in science. It is such structures that economists have to deal with when an economy develops dynamically in conditions far from equilibrium. Therefore, it is no coincidence that synergetic economics have also emerged (Zhang 1988), which deals with unstable and nonequilibrium systems and focuses on nonlinear phenomena in economic evolution, such as structural changes, bifurcations, and chaos. The process of self-organization is just a transition from a more chaotic to a more ordered state, or, in short, “a transition from chaos to order” (Prigogine and Stengers 1997). Depression in the economy is the state of chaos through which the transition to a new order is carried out. Synergetics arose in response to the crisis of stereotyped linear thinking that operates with a set of outdated postulates: (1) chaos is an exclusively destructive principle; (2) nonequilibrium and instability—states that must be overcome, since they play a destructive role; (3) the world is connected by rigid causal relationships. Synergetics, on the contrary, argues that nonequilibrium is the same fundamental property of complex systems as equilibrium: it determines the free choice of the best option from a whole range of possible directions of system evolution. 
As for equilibrium systems, it is obvious that they are not capable of dynamic development and self-organization, since they suppress any


deviations from their stationary state, while development and self-organization presuppose qualitative changes. In addition, in a state of stable equilibrium, in accordance with neoclassical theory itself, profit disappears and capital, as Schumpeter once proved, is depreciated. One can only be surprised that the model of such an unproductive economic state still serves as the basis of the mainstream of the economic theory of capitalism. It is better to have an unstable but growing economy than a stable and stagnant one.

Synergetics also shows that instability plays a key role in self-organization processes. Nonlinear systems at an unstable singular point are very sensitive to small changes in the control parameters. If the economic system is in such an unstable state, small fluctuations can cause a structural reorganization of the entire system and have a significant impact on the further dynamics of economic development. The stability of the dissipative structures that arise in this case is guaranteed by a certain balance of nonlinearity and dissipation. Market economies are open, self-organizing systems. Structural changes occur in such a system when it is near a critical point, where it is unstable and where small fluctuations are amplified by positive feedback until they reach a macroscopic level. A market economy is a nonequilibrium system. Systems studies show that the determining condition for the optimal behavior of economic systems is their nonequilibrium self-organization, that is, functional stability in nonequilibrium states. That is why the synergetic approach allows us to find effective ways to manage nonequilibrium economic systems by nudging them toward self-organization. The consequence of the process of self-organization is the formation of attractors, which attract the trajectories of economic systems. Using methods of directed self-organization, it is possible to steer an economic system toward the desired attractor, one of its asymptotically stable states.

The question may arise: why synergetics, when there are very effective and well-developed methods of cybernetics? The point is that the key concept of cybernetics is negative feedback (NF). If a system has NF, then in response to any external influence it reacts in such a way as to compensate for it, reduce its influence to zero, and maintain a given equilibrium state. The classical stable equilibrium economy is such a system. However, in a dynamically developing economy there is positive feedback (PF), which amplifies deviations from the equilibrium state and generates even larger deviations. If such an economy has no mechanisms for controlling and restricting destructive "market bubbles," the latter will undergo explosive growth, with subsequent bursting of the bubbles and extremely negative consequences for the financial and economic system. The economy in general, and finance in particular, have long been based on the erroneous concept of a natural tendency toward a state of equilibrium, in which any deviation is assumed to activate built-in NF forces that return the system to equilibrium. But self-reinforcing economic mechanisms with PF are typical for the high-tech sectors of the modern knowledge-based economy. The development and production of high-tech products such as computers, airplanes, and software require expensive research and experimentation; however, once they enter the market, increasing their output is relatively cheap, and therefore profits will only grow.
Therefore, modern high-tech industries should be described in dynamic models as generators of profit growth with positive feedback. Thus, if cybernetics focused on systems with negative feedback, synergetics focuses on self-organization in systems with PF. So, to manage a nonequilibrium and unstable economic system in the digital age, it is precisely the methods of synergetics that are required.

An economic system with an investment accelerator and a consumption multiplier (Allen 1956) is a typical PF system: if the PF gain is large enough, a self-sustaining oscillatory process arises in the system. Here the gain is the power of the accelerator, and the external energy source is the independent investment continuously flowing in from outside. It is the accelerator power that is the controlling parameter and has a decisive impact on the dynamics of the system. A key role is also played by inventories in the economy, which are drawn down in order to respond to emerging demand without delay; these stocks are then replenished as investments become available. The accelerator power, in turn, is determined by business activity in the economy, which is best managed with the help of a focused and flexible policy of economic incentives by the state, in accordance with the Keynes–Minsky stabilization theory.

Dissipative structures arise only in systems described by nonlinear equations for macroscopic variables. If such dissipative structures generate undamped oscillations whose form and properties are determined by the system itself and do not depend on the initial conditions, they are called self-oscillating. Modern developed economies are precisely self-oscillating systems. Therefore, for the application of synergetic control methods it is extremely important, first of all, to obtain a nonlinear differential equation that describes the interaction of the investment accelerator and the consumption multiplier under nonequilibrium dynamics. The authors were the first to obtain the general nonlinear differential equation of macroeconomic dynamics that describes the interaction of long-term economic growth and cyclical business fluctuations (Akaev 2007); it was subsequently verified and used for long-term economic forecasting (Akaev and Sadovnichiy 2012).
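To make the role of the PF gain concrete, the following minimal numerical sketch simulates a Van der Pol-type oscillator, a standard textbook stand-in for accelerator-multiplier dynamics. It is not the authors' equation from Akaev (2007); the parameter mu is only an illustrative proxy for the accelerator power.

```python
import numpy as np

def simulate(mu, x0=0.1, v0=0.0, dt=0.001, T=60.0):
    """Euler integration of a Van der Pol-type oscillator x'' - mu*(1 - x^2)*x' + x = 0.
    For mu < 0 the equilibrium is stable (negative feedback dominates); for mu > 0
    small deviations are amplified and the system settles on a self-sustained cycle."""
    n = int(T / dt)
    x, v = x0, v0
    xs = np.empty(n)
    for i in range(n):
        a = mu * (1.0 - x * x) * v - x   # nonlinear "damping": PF for small x, NF for large x
        x, v = x + v * dt, v + a * dt
        xs[i] = x
    return xs

damped, self_oscillating = simulate(mu=-0.5), simulate(mu=1.5)
# peak-to-peak amplitude over the last 10 time units:
print(round(float(np.ptp(damped[-10000:])), 3),
      round(float(np.ptp(self_oscillating[-10000:])), 3))
# ~0.0 for the damped case, ~4 (limit cycle of amplitude ~2) for the self-oscillating case
```

The qualitative point is that the control parameter, here mu, determines whether deviations die out or grow into stable self-oscillations, which is exactly the kind of threshold behavior attributed above to the accelerator power.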

5 Conclusion

1. The paper shows that in the 2020s developed economies, due to intensive and ubiquitous digitalization and robotization, will become more unstable and will develop mainly under conditions of disequilibrium. The Schumpeter–Kondratiev innovation-cyclic theory of economic development is most suitable for describing such economic dynamics. The authors have developed nonlinear mathematical models that adequately describe the Schumpeter–Kondratiev theory and allow one to calculate both long-term forecasted trend trajectories of economic growth and cyclical fluctuations. Since the main synergetic model was obtained by taking into account the interaction of the growth trend and cyclical fluctuations, it makes it possible to predict the bifurcation point and the breakdown point in a crisis recession.

2. Self-reinforcing economic mechanisms with positive feedback (PF) are typical for the high-tech sectors of the modern economy, based on knowledge and know-how. Their share in the digital economy will only increase. The dynamics of nonequilibrium complex systems with PF, which include the modern developed economy, is the subject of the field of science called "synergetics." Synergetic economics deals with unstable and nonequilibrium dynamics and focuses on nonlinear phenomena in economic evolution, such as structural changes, bifurcations, and chaos. Obviously, for optimal economic management in the digital age, it is necessary to master the synergetic methods of managing complex systems and adapt them to the modern economy.

3. For the practical management of unstable and nonequilibrium economic dynamics, the Keynes–Minsky theory of stimulating economic activity with the help of an effective fiscal policy of the government and the monetary policy of the central bank is most suitable. The time has come when the main task of the central bank should be to ensure the financial stability of the credit system and only secondarily to control price stability. In the future, the central bank will have to reorient its main efforts from managing markets for consumer goods and services, which are stable in nature, to managing capital markets, which are essentially unstable.

Acknowledgments This paper was prepared with the financial support of the Russian Science Foundation (Grant No. 18-18-00099).

References

Akaev, A. A. (2007). Derivation of the general macroeconomic dynamics equation describing the joint interaction of long-term growth and business cycles. Doklady Mathematics, 76(3), 879–881.
Akaev, A. A. (2011). Strategic management of sustainable development based on the Schumpeter-Kondratiev theory of innovation-cyclical economic growth. In Modeling and forecasting of global, regional and national development. Moscow: The LIBROCOM House.
Akaev, A. A., & Rudskoi, A. I. (2013). Analysis and forecast of the influence of the sixth technological mode on the dynamics of world economic development. In World dynamics: Patterns, trends, perspectives. Moscow: Krasand.
Akaev, A. A., & Rudskoi, A. I. (2015). A mathematical model for predictive computations of the synergy effect of NBIC technologies and the evaluation of its influence on the economic growth in the first half of the 21st century. Doklady Mathematics, 91(2), 182–185.
Akaev, A., & Rudskoi, A. (2016). Economic potential of breakthrough technologies and its social consequences. In T. Devezas, J. Leitao, & A. Sarygulov (Eds.), Industry 4.0 – Entrepreneurship and structural change in the new digital landscape. Berlin: Springer Verlag.
Akaev, A. A., & Sadovnichiy, V. A. (2012). Mathematical modeling of global processes, taking into account the effects of cyclic fluctuations. Moscow: Book House "LIBROCOM".
Akaev, A. A., & Sadovnichiy, V. A. (2018). Mathematical models for calculating the development dynamics in the era of digital economy. Doklady Mathematics, 98(2), 526–531. https://doi.org/10.1134/S106456241806011X
Akaev, A. A., & Sadovnichiy, V. A. (2019). On the choice of mathematical models for describing the dynamics of digital economy. Differential Equations, 55(5), 729–738.
Akaev, A. A., Sadovnichiy, V. A., & Korotaev, A. V. (2011). Huge rise in gold and oil prices as a precursor of a global financial and economic crisis. Doklady Mathematics, 83(2), 243–246.
Akayev, A. A. (2013). Big business cycles and the innovation-cyclical theory of Schumpeter-Kondratiev's economic development. The Economics of Modern Russia, 2(61), 7–28.
Akayev, A. A., & Korotaev, A. B. (2019). On the beginning of the phase of the rise of the sixth Kondratiev wave and the problems of global sustainable growth. The Century of Globalization, 1, 3–17.
Akayev, A. A., Pantin, V. I., & Aivazov, A. E. (2009). Analysis of the dynamics of the global economic crisis based on the theory of cycles. Report at the First Russian Economic Congress. Moscow: Lomonosov Moscow State University.
Allen, R. G. D. (1956). Mathematical economics. London: The Macmillan Press.
Arrow, K. J. (1972). General economic equilibrium: Purpose, analytic techniques, collective choice. Lecture to the Memory of Alfred Nobel, December 12, 1972.
Bainbridge, W. S., & Roko, M. (2006). Managing nano-bio-info-cogno innovations. Dordrecht: Springer.
Barro, R. J., & Sala-i-Martin, X. (1990). Economic growth and convergence across the United States (NBER Working Paper No. 3419). National Bureau of Economic Research.
Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress and prosperity in a time of brilliant technologies. London: W. W. Norton.
Brynjolfsson, E., & Saunders, A. (2010). Wired for innovation: How information technology is reshaping the economy. London: MIT Press.
Cooper, G. (2008). The origin of financial crises. London: Harriman House.
Drucker, P. F. (1969). The age of discontinuity: Guidelines to our changing society. Oxford: Butterworth-Heinemann.
EIU. (2003). The 2003 e-readiness rankings. Available from http://graphics.eiu.com/files/ad_pdfs/eReady_2003.pdf
Ford, M. (2015). The rise of the robots. New York: Basic Books.
Freeman, C. (1987). Technical innovation, diffusion and long cycles of economic development. In The long-wave debate. Berlin: Springer.
Greengard, S. (2015). The internet of things (Essential Knowledge series). Cambridge, MA: MIT Press.
Haken, H. (1983). Synergetics: An introduction (Springer Series in Synergetics, Vol. 1, 3rd ed.). Heidelberg: Springer.
Haken, H. (1988). Information and self-organization: A macroscopic approach to complex systems. Berlin: Springer.
Hirooka, M. (2006). Innovation dynamism and economic growth: A nonlinear perspective. Cheltenham: Edward Elgar.
Kagermann, H., Wahlster, W., & Helbig, J. (2013). Recommendations for implementing the strategic initiative INDUSTRIE 4.0. Available from https://www.acatech.de/wp-content/uploads/2018/03/Final-report-Industrie_4.0_accessible.pdf
Keynes, J. M. (1936). The general theory of employment, interest and money. London: Palgrave Macmillan.
Kondratiev, N. D. (1922). The world economy and its conjuncture during and after the war. Vologda: Regional Department of the State Publishing House.
Kondratiev, N. D. (1925). Big business cycles. Market Issues, 1(1), 28–79.
Kurzweil, R. (2005). The singularity is near. New York: Viking Books.
McKinsey Global Institute. (2013). Disruptive technologies: Advances that will transform life, business, and the global economy. New York: McKinsey & Company.
Mensch, G. (1979). Stalemate in technology. Cambridge: Cambridge University Press.
Minsky, H. P. (1986). Stabilizing an unstable economy. New York: McGraw-Hill.
Prigogine, I. (1980). From being to becoming: Time and complexity in the physical sciences. San Francisco: W. H. Freeman.
Prigogine, I., & Stengers, I. (1997). The end of certainty. New York: Free Press.
Roko, M. C. (2011). The long view of nanotechnology development: The National Nanotechnology Initiative at 10 years. Journal of Nanoparticle Research, 13, 427–445.
Schumpeter, J. A. (1934). The theory of economic development: An inquiry into profits, capital, credit, interest and the business cycle. New Brunswick, NJ: Transaction Books.
Schumpeter, J. A. (1939). Business cycles: A theoretical, historical and statistical analysis of the capitalist process. New York: McGraw-Hill.
Schwab, K. (2016). The fourth industrial revolution. London: World Economic Forum.
Schwab, K., & Davis, N. (2018). Shaping the fourth industrial revolution. London: World Economic Forum.
Swan, M. (2015). Blockchain: Blueprint for a new economy. Sebastopol, CA: O'Reilly Media.
Zhang, W.-B. (1988). Synergetic economics. Heidelberg: Springer.

Technological Development: Models of Economic Growth and Distribution of Income

Askar Akaev, Askar Sarygulov, and Valentin Sokolov

Abstract The economic development of the vanguard countries during the last 40 years has contributed to a large disturbance of the two key empirical regularities that underlie neoclassical economic theory: the effect of the "Bowley Law," one of the "stylized facts" of Kaldor, is becoming increasingly less apparent, and there is more and more empirical evidence that the famous Kuznets curve is no longer valid. Income inequality is growing in all developed countries, particularly in the USA, Great Britain, and Canada. Hence, it should be expected that a new stage of technological development, in the form of digital technologies, will contribute to the reinforcement of these trends. We propose modified neoclassical models of income growth and distribution, which take into account the new empirical regularities. Our results clearly show that if state institutions do not interfere with the existing trends in the increase of inequality, its growth will continue, as there are no endogenous economic mechanisms that could restrict this process.

Keywords Empirical patterns · Income inequality · Technological development · Income distribution

1 Introduction

The neoliberal economic model of development, adopted by the developed capitalist states in the 1980s, contributed to a large disturbance of the two key empirical regularities that underlie neoclassical economic theory. First, the effect of the "Bowley Law" or one of the "stylized facts" of Kaldor (1961), which consists of
the constant shares of capital and labor in national income, is becoming increasingly less apparent. The new empirical pattern consists in an increase in the share of income from capital and, accordingly, a decrease in the share of labor in national income. The latter has already led to stagnation and decline in the median income of workers in developed countries, significantly reducing the size of the middle class. Secondly, there is more and more empirical evidence that the famous Kuznets curve (1955), which claims that income inequality will decline to an acceptable level at a mature stage of development of an industrial economy, is no longer valid. Income inequality in all developed countries, particularly in the USA, Great Britain, and Canada, began to grow rapidly in the 1980s. In the USA, it has almost reached the historical maximum observed in 1929–1933, at the beginning of the Great Depression. Hence, it should be expected that a new stage of technological development, in the form of digital technologies, will contribute to the reinforcement of these trends. In this chapter, we propose modified neoclassical models of income growth and distribution, which take into account the new empirical regularities. Confirmation of our models is carried out on the basis of statistical data for the US economy from 1980 to 2018. The modified models allow us to forecast US economic growth and income distribution by population groups up to 2050. Our results clearly show that if state institutions do not interfere with the existing trends in the increase of inequality, its growth will continue, as there are no endogenous economic mechanisms that could restrict this process. We also show that the middle class may disappear entirely over time if the distribution of income in society takes a bimodal form, with American society divided into a small group of super-rich families and a huge mass of poor, primarily single-earner, households. To find a way out of this situation, which potentially threatens major social conflicts or even cataclysms, the developed countries need to revive the key elements of the welfare state, for example, by strengthening progressive taxation and adapting the mechanisms of income redistribution to the new conditions.

2 Technological Development and Inequality

One of the key issues of modern development is the role of technologies: are they the drivers of universal progress, the spreaders of opportunities and increased social mobility, or are they a means for the further concentration of power and wealth? Today's technological development has exacerbated this dualism. On the one hand, we are witnessing an unprecedented growth in digital technologies and their spread into almost all areas of the economy, where everyone can benefit from them. On the other hand, we are witnessing an unprecedented and rapid growth of capital and profits across the largest digital technology companies. In fairness, it should be noted that the growth of companies associated with widely used digital technologies does not seem to be sustainable. Consider
the IT crisis of 2000–2001, which led to a drop in the market capitalization of internet and telecommunications companies by 1.775 trillion dollars and to the mass bankruptcy of approximately 300 large companies (Kleinbard 2000). The crisis occurred not only because of widespread market speculation in the shares of IT companies, but also because of the unreasonably high expectations of investors and consumers regarding the IT services sector, the lack of a developed digital infrastructure, and the absence of powerful information retrieval systems capable of real-time operation and oriented towards the mass consumer. There was also a lack of high-tech and affordable gadgets, which significantly narrowed the usage of the new technologies. But now, after almost 15 years, this technological gap has been overcome. Over the past 5–10 years, the practice of industrial development and management has shown concrete examples of technological breakthroughs: the appearance of autonomous vehicles (Google, Tesla), voice recognition technologies (Siri, Alexa), a new generation of warehouse robots (Amazon Kiva), and an exponential increase in microchip density, in data loading and processing speed, in information storage capacity, and in energy efficiency. The rapid development of the internet and the associated growth in electronic commerce have led to the creation of new banking transaction technologies, such as blockchain, which made it possible to reduce the time for transferring funds from 5–6 banking days to 20 seconds (Shiba 2017). The growth of digital technologies, social networks, and smartphones has contributed to the faster and more targeted satisfaction of consumer demands, the creation of new companies (such as Uber, Amazon, Alibaba, Facebook, Netflix, iTunes) and the bankruptcy of others (such as Borders, Blockbuster, and Kodak, which resulted in a loss of 250 thousand jobs). Of particular note is the rapid growth in the number of robots and systems with artificial intelligence (AI). On the one hand, global investment in startups related to the development of AI reached its five-year peak in 2016 and amounted to $5.02 billion (Ogawa 2017). On the other hand, the American practice of growing high-tech industries has shown that only 8.8% of the workforce in the 1980s, 4.4% in the 1990s, and 0.5% in the 2000s switched to working in these sectors (Lin 2011). The banking sector, sensitive to innovations, has already announced impending large-scale staff cuts: US Citibank will cut 30% of employees by 2025, while the three largest Japanese banks (Mizuho Financial Group Inc., Sumitomo Mitsui Financial Group Inc., and Mitsubishi UFJ Financial Group Inc.) will make 33,000 employees redundant over the same period. American researchers believe that almost 47% of jobs are directly threatened by automation and replacement by robots, software products, or artificial intelligence (Frey and Osborne 2013); for Germany this indicator is estimated even higher, at up to 59% (Brzeski and Burk 2015), while for the countries of Southern Europe it ranges from 45 to 60% (Bowles 2014). Experts from MIT estimate that in the near future each robot will replace from 3.2 to 5.4 workers (Acemoglu and Restrepo 2017). It should also be noted that robots are increasingly being used not only in industry, but also in services and households. According to the International Federation of Robotics, in 2019, 2.6 million industrial robots and 42 million service and household robots will be in use worldwide.

Thus, the technological transformation will inevitably lead to the transformation of social and economic systems. Traditional and classical industrial systems that have been dominant to date have mainly been based on the principle of replacing human muscular energy (e.g., with steam power, the internal combustion engine, electricity). Human intellectual abilities were used to the full, and their replacement by machines in the process of production, management, or design was minimal. However, the technological changes of today raise the question of the extent to which human labor will be crowded out, both in the sphere of production and in the sphere of management and services. And this question radically changes the entire economic and social landscape. Modern technologies offer great opportunities for self-employment, which has already led to the creation of a parallel labor market, blurring the line between the concepts of "work" and "home." This process raises the question of revisiting many regulations in the areas of employment, taxation, medical insurance, and pension coverage. For example, in the EU almost 55% of the self-employed (representing 13% of the total workforce) are at risk of losing entitlement to unemployment benefits, and 37.5% are at risk of losing sickness benefits (Matsaganis et al. 2016). Meanwhile, the modern giants of the digital industry (Google, Amazon, Twitter) are actively using information generated by their users for commercial purposes; this process is called Digital Taylorism, or crowdsourcing. As can be seen, technological development in the digital age is becoming a double-edged sword, especially since economic inequality in developed countries has increased and wealth has shifted from real to financial assets over the past 40 years. The processes of globalization, beginning in the 1980s, have turned commodity and product markets into financial ones. With the transition to a digital economy, the role of information as a vital and important productive resource has increased dramatically. Information is becoming the object of everyday economic activity and is turning into a product itself and into other digital products: databases, software, intellectual services, games and entertainment, etc. Business success is now determined by the effectiveness of working with large amounts of data and the speed of information processing. Information, like commodities, is rapidly acquiring all the attributes of a financial resource. And this global dominance of financial capital creates the conditions under which the increased risks inherent in all financial markets will start to manifest themselves in the real economy, making it even more unstable. It is these processes that are prompting an increasing number of researchers across the globe to study the relationship between technology and inequality, the wealth created by technology, and its distribution in society. Our literature analysis shows that the most common view is that technological innovation initially leads to greater inequality, but as technology becomes widespread, more people are able to receive higher incomes and benefits, reducing inequality (Barro 1999).
A number of researchers have considered either already established business practices (Allen 2017), or empirical data on the polarization of wages associated with the boom in information technology (Sanchez-Paramo and Shady 2003; Galbraith and Hale 2006; Wang 2007), or data on long-term trends in the income gap between different skill levels that arises from differences in education
levels (Card and DiNardo 2002). Some researchers pay special attention to the endogenous nature of technological progress, which results in the influence of technologies on gaps in wage levels being more significant than the processes of globalization themselves (Gancia 2012), while others are convinced that the existing socioeconomic inequality will increase with the advent of more complex information technologies (Wyatt et al. 2000). Other papers suggest that the process of technological diffusion acts as a mechanism in the creation of an inverted U-shaped relationship between technology and inequality (Greenwood 1997). There are also attempts to assess the impact of specific types of technologies on inequality. For example, according to (Helpman 1998), information and communication technologies are more likely to have an inverted U-shaped trajectory in their diffusion processes, while biotechnologies can show such a relationship in the form of a non-inverted U due to the huge large-scale investments required for breakthroughs in the early stages of innovation. Currently, the widespread view is that when technological innovation is stimulated by large firms that can afford large investments in R&D and, therefore, receive huge profits from R&D (Malerba and Orsenigo 1995), the existing inequalities only grow, since its benefits can be used mainly by those who have significant physical or human capital. The U-shaped connection between technology and inequality at a very high level of development is also noted in (Conceição and Galbraith 2000). The presence of the U-shaped relationship between technology and inequality is also emphasized in (Kim 2012) and the same paper proposes a government redistribution policy to reduce technology-related inequalities. Recently, the problem of income inequality has become central to economic analysis. Its causes and consequences have been comprehensively investigated by (Aghion and Williamson 2009; Stiglitz 2012, 2015; Piketty 2014; Deaton 2013; Dorling 2014; Milanović 2016). The general conclusion of these works can be reduced to two points: (1) the further growth of inequality in the vanguard countries and in the world poses a threat to stable economic development; (2) market forces alone will be insufficient to reduce inequality, focused economic policy measures on the part of the state are required.

3 Empirical Data on Growth of Inequality in the USA and How it Can Be Explained

According to most economists, the share of income received by the top 1% of the population is the most informative of all possible measures of inequality. J. Stiglitz believes that it is the "one percent" that has a decisive influence on inequality in modern society, since the entire top leadership in the USA consists of representatives of the "one percent" and works, above all, in the interests of the "one percent" (Stiglitz 2015). Stiglitz demonstrates the perniciousness of growing inequality for the future of the USA and emphasizes that this is the choice of the "one percent." Otherwise, if the "one
percent" representatives adhered to rational egoism, they would be concerned about inequality and would try to take at least some measures to reduce it (Stiglitz 2015). Although inequality has worsened in most countries in recent decades, in the United States it has reached a historic high not seen since the late 1920s and early 1930s. Inequality is dangerous because, like corrosion, it eats at society from within, creating social conflicts (Dorling 2014). At the beginning of the twentieth century, the richest 1% of the population received an income that was 10–20 times higher than the average (median) income almost everywhere in the world. Beginning around the 1930s, income inequality declined throughout the world, and by 1980 the income of the top one percent did not exceed ten times the median income almost everywhere; inequality stood at a moderate ratio of 4 to 10 in all developed countries. Since the 1980s, when developed countries moved to the neoclassical model of economic development, inequality began to grow again. In a number of leading countries (USA, UK, Canada) it again reached a ratio of 16–20 (Dorling 2014), while in less capitalist states (Denmark, the Netherlands, Sweden, Finland, Switzerland, Germany, Japan) it remained at an acceptable level, equal to a ratio of 6–8 (Dorling 2014). The latter group can also include China and India, the two countries with the fastest-growing large economies.

What does the dynamics of income inequality over the past hundred years in the USA illustrate? Using information from the World Inequality Database, we constructed graphs illustrating the dynamics of changes in the share of income (wealth) of the upper decile (top 10%), the upper percentile (top 1%), the top 0.1%, and the top 0.01%. Changes in the share of pre-tax and post-tax income of the upper percentile in the USA (the "one percent") over the past hundred years (1913–2015) are presented in Fig. 1. We also calculated the dynamics of the Gini inequality index based on the proportional distribution of income (wealth); a graphical illustration of the change in the Gini index in the USA for the period from 1962 to 2015 is presented in Fig. 2. Income inequality in the USA reached its peak value of 0.5 at the beginning of the Great Depression (1929–1933). Then, due to the socially oriented economic policy of Roosevelt, inequality in the USA steadily declined to its historical minimum of 0.37 by 1970 and remained at that level until 1980 (see Fig. 2b). After that, it began to grow steadily again, and today it has almost reached its historical maximum of 1930. As can be seen from a simple extrapolation forecast, 2020 may become turbulent for the USA, and an economic crisis may arise due to excessive income inequality.
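For readers who wish to reproduce such calculations, the sketch below shows one standard way to compute a Gini index from grouped income shares (one minus twice the area under the Lorenz curve). The decile shares used here are hypothetical round numbers, not the World Inequality Database series underlying Fig. 2.

```python
import numpy as np

def gini_from_shares(pop_shares, income_shares):
    """Gini index from grouped data: 1 minus twice the area under the Lorenz curve."""
    p = np.concatenate(([0.0], np.cumsum(pop_shares)))     # cumulative population share
    L = np.concatenate(([0.0], np.cumsum(income_shares)))   # cumulative income share
    L = L / L[-1]                                           # normalize to end at 1
    area = np.sum((L[1:] + L[:-1]) * np.diff(p)) / 2.0      # trapezoid rule
    return 1.0 - 2.0 * area

# Hypothetical decile income shares, ordered from poorest to richest:
pop = np.full(10, 0.1)
inc = np.array([0.02, 0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.13, 0.18, 0.31])
print(round(gini_from_shares(pop, inc), 3))   # about 0.43 for these illustrative shares
```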

Fig. 1 Graphs of changes in the share of income of the upper percentile in the USA over the past 100 years (1913–2015): (a) pre-tax national income and (b) post-tax national income

Analysis of the graphs in Figs. 1 and 2 shows that a sharp increase in inequality began after 1980. How long will this trend of increasing inequality last, and what will ultimately cause it to stop? First of all, we note that the new increase in inequality was preceded by the breakdown of the two most important long-term regularities in the development of the capitalist economy, stable empirical laws on the basis of which the classical theories of economic growth and income distribution were built (Kanbur and Stiglitz 2015). The first of these patterns, regarding the distribution of national income, was established back in the 1930s by the English economist Arthur Bowley. Bowley, analyzing the results of the first empirical study of the distribution of the national income of the United Kingdom, put forward a hypothesis that


Fig. 2 Graphs of changes in the Gini inequality index in the USA for the period from 1962 to 2015: (a) pre-tax national income and (b) post-tax national income

postulated the constant shares of labor and capital in national income (Bowley 1937). This pattern was highly valued by Keynes as one of the main advantages of the capitalist economy and came to be described as the "Bowley Law." In the 1950s, the "Bowley Law" received significant confirmation on the basis of statistical data for the American economy, leading Kaldor to include it among the basic empirical laws that he used to build his economic theory (Kaldor 1961). The most important consequence of the "Bowley/Kaldor Law" was that the median (average) wages of workers should increase in proportion to labor productivity, i.e., economic growth under capitalism leads to an increase in the welfare of both capitalists and workers. This was indeed the case in the period from 1950 to 1980 across all developed countries of the world. However, since 1980 this law has ceased to function. As Thomas Piketty showed in his famous book "Capital in the 21st
Century" (Piketty 2014), the share of capital in national income (α) in the twenty-first century will only grow. The share of capital (α) in the GDP of Western European countries was 35–40% (α = 0.35–0.40) in the nineteenth century, decreased to 20–25% in the middle of the twentieth century, and by the beginning of the twenty-first century had increased again to 25–30% (Piketty 2014). As can be seen from these data and from the thorough long-term analysis conducted by Piketty, the share of capital income in GDP has changed significantly. It is natural to forecast that capital returns in developed economies will continue to grow in the coming decades due to the revolutionary innovations of the sixth technological mode. Piketty suggests that the share of capital income in GDP at the global level will reach 30–40% by the middle of the century, i.e., a level close to the indicators of the eighteenth–nineteenth centuries, with the potential to surpass it (Piketty 2014). As a consequence of the growth of the share of capital, there has been a corresponding decrease in the share of labor (1 − α), and therefore, since the mid-1970s, the growth paths of labor productivity [A(t)] and the real average wages [ω(t)] of workers have diverged: the first continues to grow steadily, while the second stagnated at first and, from the beginning of the twenty-first century, began to decline markedly. This has led to the erosion of the middle class in developed countries, especially in the USA. Most economists believe that the main culprit in this negative trend is rapid technological change, which is causing ever-increasing chronic structural unemployment and thereby contributes to increasing income inequality in society. Aghion showed that technological progress is one of the main factors in increasing inequality (Aghion and Williamson 2009, Ch. 3.5). It was technological progress that began to reduce the share of labor in the final product, which led to a steady decrease in the wages of workers. And since the share of capital in GDP is growing, more income goes to the owners of capital. Unsurprisingly, the owners of capital are not inclined to share it voluntarily with anyone. The second regularity, regarding income inequality in capitalist societies, was described by Kuznets in the 1950s. Kuznets considered changes in the distribution of income associated with economic growth as a consequence of it. To establish this, he was the first to undertake a thorough study of all national data on the distribution of income in the USA from 1913 to 1948. The results of this study brought good news: during this period, inequality peaked in 1929–1930 and then steadily decreased. Based on these results, Kuznets hypothesized that, under modern conditions, long-run economic growth would see an increase in inequality at the early industrial stage of development, followed by a stage of decreasing inequality as a mature industry develops (Kuznets 1955). Moreover, Kuznets believed that inequality would decline at the mature stages of development of industrial capitalism spontaneously, regardless of the current economic policy and the characteristics of a particular country, and then stabilize at a certain level. As it turns out, he was mistaken. Kuznets' hypothesis quickly found widespread support and became the basis of the main economic theory of income inequality. Such a theory was very useful at the time in order to establish the advantages of capitalism over socialism.
In fact, this reduction in inequality across developed countries occurred primarily for political
rather than economic reasons, due to the influence of the successes of the socialist countries during the competition between the capitalist and socialist systems. Many leading capitalist states were forced to develop a socially oriented economy, contributing to the relative equalization of incomes in society. Indeed, with the collapse of the socialist system in the 1980s, social programs in the leading capitalist countries that supported the redistribution of income from rich to poor began to be curtailed, and taxes on the rich began to decline. The consequence of this was the disappearance of the regularity Kuznets observed, as inequality began to grow rapidly from 1980 in all developed countries, with the exception of a small number of more socialist states, which included the Scandinavian countries, Germany, Switzerland, and Japan. In his work, Piketty unequivocally showed that the reduction in income inequality observed in the developed countries in the twentieth century was, first of all, the result of wars, revolutions, and the socially oriented policies that states began to pursue after those conflicts; it was not determined by economic mechanisms (Piketty 2014). In the same way, the increase in inequality that began again in the 1980s was largely predetermined by political changes, namely, the liberalization of tax and financial policies exclusively in favor of the rich (Piketty 2014). In general, Piketty argues that in the conditions of "normal" capitalism that we are witnessing today, inequality should only grow. But to what extent? Excessive growth of inequality causes social conflicts, which negatively affect investment and economic growth. Milanović formulated an extension of Kuznets' hypothesis, which he called Kuznets waves, or cycles (Milanović 2016). Milanović argued that there are alternating periods of increasing and decreasing inequality, which is consistent with empirical evidence from the time of the first industrial revolution. He also introduced the concept of a boundary of possible inequality, applicable where the average income is only slightly higher than the level necessary for survival; the boundaries of possible inequality show the maximum level that inequality can reach for different levels of average income. Historically, as Milanović showed, inequality in all countries peaks at about 50–55 Gini points (Milanović 2016, Table 2.2). Milanović argues that the Kuznets cycles of inequality arise as a result of the interaction between economic and political factors. Indeed, after World War II, the strength of trade unions and the influence of socialist forces limited political measures in favor of the rich and limited the power of capital in the developed capitalist countries. But after that, this political pressure weakened, and economic conditions became more favorable for capital, completely changing the situation and leading developed countries into a period of growing inequality. One of the important conclusions of Milanović is that, in the long run, economic growth does not require growth of inequality (Milanović 2016), while the reduction of excess inequality almost always fosters the acceleration of economic growth.

4 Models of Growth and Income Distribution in Modern Society

To predict the dynamics of income distribution and inequality in modern society, new models are required, since the old models based on the Bowley/Kaldor Law and the Kuznets hypothesis no longer work (Kanbur and Stiglitz 2015). To achieve this, we modified the classical models of economic growth and of the distribution of national income among various population groups, taking into account the new key trends emerging at the beginning of the twenty-first century: the accelerating technological replacement of jobs, the stagnation and decline of median wages, and the increase in the share of capital in national income. As a basic model for describing long-term economic growth, we use the classic Cobb-Douglas production function with labor-saving technical progress:

Y = \gamma\, K^{\alpha} (A \cdot L)^{1-\alpha+\delta} \qquad (1)

where Y(t) is the current national income (GDP); K(t) is physical capital; L(t) is the number of workers in the economy; A(t) is technological progress; α is the capital share of GDP; δ is a parameter characterizing increasing returns to scale in production (δ > 0); and γ is a constant rate factor. Verification of the production function model (1), performed on data on US economic development from 1950 to 2017, showed that it works well. The numerical values of the factors of production (K, L, A) and of GDP (Y) were taken from the World Bank database (http://data.worldbank.org/) and checked against data from the University of Groningen (http://febpwt.webhosting.rug.nl/). The parameters of the production function were estimated by the least squares method: γ = 2.37; α = 0.38; δ = 0.24. The approximation error does not exceed 0.3%.
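As an illustration of this estimation step, a minimal sketch is given below: the log-linearized form of Eq. (1) is fitted by ordinary least squares. The file name and column layout are hypothetical placeholders, not the authors' actual data pipeline.

```python
import numpy as np

# Hypothetical annual series for the US economy, 1950-2017 (columns Y, K, L, A);
# the file name and layout are assumptions for illustration only.
data = np.genfromtxt("usa_1950_2017.csv", delimiter=",", names=True)
Y, K, L, A = data["Y"], data["K"], data["L"], data["A"]

# Log-linearize Eq. (1): ln Y = ln(gamma) + alpha*ln(K) + (1 - alpha + delta)*ln(A*L)
X = np.column_stack([np.ones_like(Y), np.log(K), np.log(A * L)])
b, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)

gamma, alpha = np.exp(b[0]), b[1]
delta = b[2] - 1.0 + alpha          # the second exponent equals 1 - alpha + delta
print(f"gamma = {gamma:.2f}, alpha = {alpha:.2f}, delta = {delta:.2f}")
```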

The first of the empirical laws of Kaldor is formalized as follows:

(a)\; K = \partial\, Y; \qquad (b)\; Y = \varphi K; \qquad (c)\; \partial = \varphi^{-1} \qquad (2)

Here ∂ is the capital ratio (capital intensity) and ϕ is the ratio of capital return. Based on World Bank data for K(t) and Y(t) for the US economy for the period from 1985 to 2017, we estimated these coefficients with the following results: ∂ = 3.2; ϕ = 0.34. The second law of Kaldor means that in the production function (1) the parameter α, characterizing the share of capital, and hence (1 − α), the share of labor, are constant values. The share of capital (and of labor) in GDP did indeed remain quite constant throughout most of the twentieth century, both in the Western European countries and in the USA. However, by the beginning of the twenty-first century, the share of capital had increased and the share of labor had, accordingly, decreased. The third law of Kaldor was formalized as follows:
\omega(t) = \frac{A(t)}{1+\eta}, \qquad \eta > 0 \qquad (3)

where ω(t) is the real average wage of workers and η is a premium established by firms to shift the ratio in their favor. As can be seen from (3), the average wage of workers should increase in proportion to labor productivity A(t) in (1). This law could be observed in the third quarter of the twentieth century across all developed and most developing countries of the world.

The trends of capital accumulation, economic growth, and income inequality emerging at the beginning of the twenty-first century were comprehensively studied by the French economist Thomas Piketty, who obtained significant results described in detail in his book "Capital in the 21st Century" (Piketty 2014). First of all, Piketty convincingly showed that in the most developed countries (USA, UK, Germany, France, etc.) the capital intensity ∂ (2) followed a large U-shaped curve over the twentieth century and at the beginning of the twenty-first century returned to maximum values close to those observed at the end of the nineteenth century (Piketty 2014). Moreover, in the eighteenth–nineteenth centuries the value of ∂ in the leading European economies was quite stable and amounted to 7 (∂ = 7) in France and Great Britain and 6.5 in Germany (Piketty 2014). In the USA, capital intensity reached quasi-stability at the beginning of the twentieth century at the level of ∂ ≅ 7 and then, starting from the middle of the twentieth century, stabilized (Piketty 2014). The return of the capital intensity ∂ in developed countries in the twenty-first century to its maximum value means that it will now stabilize at least until the middle of the century. And this means that in the first half of the twenty-first century Kaldor's first law, K = ∂Y, retains its significance and will be decisive for the transformation of the production function (1). As for Kaldor's second empirical law, in the twenty-first century it ceases to hold: the share of capital income in GDP will not remain constant but will grow, as Piketty argues. This follows from the first fundamental law of capitalism (Piketty 2014):

\alpha = r \cdot \partial \qquad (4)

where r is the average return on capital. The average return on capital was 5–6% in the eighteenth–nineteenth centuries, rose to 7–8% in the middle of the twentieth century thanks to the era-defining technological innovations of the post-war period, and then fell to 4–5% at the turn of the twentieth–twenty-first centuries (Piketty 2014). Naturally, it is expected that the innovative technologies of the Fourth Industrial Revolution can once more increase the average return on capital r to 6–7%, despite the fact that this value has already been close to the maximum. Therefore, for developed economies it is possible to forecast a growth of the share of income from capital (4) to 40–50% and a corresponding decrease in the share of labor in GDP by 15–20%. This trend is confirmed by the fact that in the USA the share of labor in
GDP has already fallen from 65 to 55% between 1980 and 2015 (Ford 2015). As income from capital increases and the share of labor in GDP decreases, the capitalists will grow richer, while the wages of workers will decrease or, at best, stagnate. Thus, in the next 20–30 years, American society will experience a trend of increasing income inequality. Piketty suggests that the share of capital income in GDP at the global level will reach a level close to that of the eighteenth–nineteenth centuries (30–40%) by the middle of the twenty-first century, and possibly even surpass it, despite an average global return on capital of 4–5% (Piketty 2014). In economic history this has already happened: an increase of 10%, from 35–40% at the turn of the eighteenth–nineteenth centuries to 45–50% in the middle of the nineteenth century (Piketty 2014). Since, according to Piketty's description, the growth in the share of capital income occurs gradually (a slow start, then rapid growth, followed by stabilization for about fifty years), we can propose a forecasted logistic trajectory of its growth for the period until the middle of the twenty-first century:

\tilde{\alpha}(t) = \alpha_1 + \frac{\alpha_2}{1 + U_{\alpha}\exp[-\vartheta_{\alpha}(t - T_{\alpha 0})]} \qquad (5)

where α1, α2, Uα, and ϑα are constant parameters. For the initial value α0 = α̃(t = Tα0) it is advisable to take the already estimated value α0 = 0.38, with Tα0 = 2018 (see Eq. 1). Therefore, to determine the parameters of the logistic function (5), it suffices to set the value αm = α̃(t = Tαm), where Tαm = 2050. Following Piketty's assumption that by the middle of the century the USA will see a 10% increase of α(t) relative to the beginning of the century, we take αm = 0.45; here we have taken into account that over the past 18 years α(t) has already increased by 3%. Under these conditions, using the least squares method, we obtain the following estimates for the constant parameters of the logistic equation (5): α1 = 0.38; α2 = 0.07; Uα = 20; ϑα = 0.18. Now, for long-term forecasting of economic growth using the production function (1), it is necessary to use the variable α(t) = α̃(t) from (5); a small numerical check of this trajectory is sketched below.

The third law (3) has also ceased to hold true since the mid-1970s, when the growth trajectories of labor productivity (A) and the real average wages (ω) of workers diverged: the first continued and continues to grow steadily, while the second stagnated at first and, from the beginning of the twenty-first century, began to decline markedly (Ford 2015). In the golden era of "economic prosperity" (1948–1973), wage growth was directly proportional to the growth of labor productivity, which led to an unprecedented expansion and strengthening of the middle class in the advanced countries. In the last 30–40 years, due to the stagnation of the median wages of workers in developed countries, the middle class began to erode. Yet it should be noted that the middle class is a bastion of sociopolitical stability in modern society. Most economists believe that rapidly accelerating technological progress, which causes ever-increasing chronic unemployment, is the main culprit in this negative trend (Brynjolfsson and McAfee 2013), and the latter contributes to increasing income inequality in society.
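The numerical check referred to above is sketched here: Eq. (5) is evaluated with the parameter estimates just quoted; the grid of years is arbitrary.

```python
import numpy as np

def capital_share(t, a1=0.38, a2=0.07, U=20.0, theta=0.18, t0=2018):
    """Logistic trajectory of the capital share, Eq. (5), with the estimates from the text."""
    return a1 + a2 / (1.0 + U * np.exp(-theta * (t - t0)))

for y in [2018, 2030, 2040, 2050]:
    print(y, round(capital_share(y), 3))
# roughly 0.383, 0.401, 0.431, 0.446: the share rises from about 0.38 towards 0.45 by 2050
```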

Indeed, from 1980 to 2016, the average salary of low-skilled workers decreased by about 30%, while the salary of highly qualified specialists increased by more than 40%. Although these figures refer to the USA, the trend is characteristic of all developed countries. The main reason for the increase in the salaries of highly qualified specialists and the decrease in the salaries of low-skilled workers is the growing demand for the former and the permanently declining demand for the latter. The two negative trends mentioned above will only intensify in the future: digital technologies, as well as intelligent computers and robots, will begin the intensive and large-scale replacement of workers with low and medium qualifications. Active technological substitution of jobs will cause sharp competition for well-paid jobs, which will ultimately lead to a further decrease in the real wages of workers. In turn, the decline in real household incomes will lead to a drop in demand for goods and services and a further decrease in production, as well as to the slowdown in economic growth predicted by Piketty.

On the basis of extensive statistical data, it was shown that the distribution of the annual income of families with one and two working parents is fairly well described by the exponential and Rayleigh distribution laws, respectively (Dragulesku and Yakovenko 2001):

(a)\; Y_{ph1}(r) = \frac{1}{r_{m1}}\exp\!\left(-\frac{r}{r_{m1}}\right); \qquad (b)\; Y_{ph2}(r) = \frac{r}{r_{m2}^{2}}\exp\!\left(-\frac{r}{r_{m2}}\right) \qquad (6)

where r is the annual household income in thousands of US dollars, and r_m1 and r_m2 are the average values of family income. At the end of the 1990s, these distributions held for incomes of up to 120 thousand US dollars. As expected, the income distribution of wealthy families with incomes over 120 thousand dollars turned out to follow a Pareto (power) law with an exponent h = 2.7:

\tilde{Y}_{rp}(r) = \frac{h(t)-1}{Y_{rpo}}\left(\frac{Y_{rpo}}{r}\right)^{h(t)} \qquad (7)

where Y_rpo is the lower limit of the annual income of wealthy families (Y_rpo = 120 thousand dollars in 2000). All three distributions are presented in Fig. 3. The left border of the distribution of incomes of wealthy citizens is smoothed out by a curve similar to the left half of the normal Gaussian distribution:

\tilde{Y}_{rg}(r) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left[-\frac{1}{2}\left(\frac{r - Y_{rpo}}{\sigma}\right)^{2}\right] \qquad (8)

where σ is the standard deviation, selected so that the incomes under the Gaussian curve do not exceed 10% of the incomes of all wealthy citizens. The total income of private households Ȳ_ph(t), excluding the super-rich families (see (7) and (8)), is distributed between households with one and two employees as follows:

Fig. 3 Models of the income distribution (2015)

\bar{Y}_{ph}(t, r) = \bar{Y}_{ph}(t)\,\bigl[\nu_1(t)\,Y_{ph1}(r) + \nu_2(t)\,Y_{ph2}(r)\bigr] \qquad (9)

where ν1(t) and ν2(t) are the shares of families with one and two employees, ν2 = 1 − ν1. It is known that in 1996 the incomes of American households were well described by the approximation 0.45·Y_ph1 + 0.55·Y_ph2 (Dragulesku and Yakovenko 2001); in the 2010s this relation took the form 0.5·Y_ph1 + 0.5·Y_ph2. The peak of women's economic activity in the USA occurred in 2000, when 60% of them worked on par with men (Ford 2015); since then, this figure has continued to fall. Due to the intensive technological replacement of jobs, the number of families with two employees, ν2(t), will only decrease in the future, and the number of families with one employee will increase. Suppose that this substitution occurs according to the following logistic law:

\nu_1(t) = \frac{\nu_{10}}{\nu_{10} + (1 - \nu_{10})\exp[-\vartheta_{\nu}(t - T_{\nu 0})]} \qquad (10)

where ν10 and ϑν are constant parameters. This form of the logistic function is most suitable, since the value of ν1 has already exceeded 0.5. If we take Tν0 = 2010, then ν10 ≅ 0.5. Assuming that by 2050 most families will have only one worker, we obtain the estimate ϑν ≅ 0.067. We denote the total household income, including wealthy households, by

Y_{hk}(t) = C(t) + I(t) \qquad (11)

where C(t) is total household consumption and I(t) is gross investment in the national economy. Hence, the income of wealthy households Ȳ_rp(t) and the incomes of middle-income and poor households Ȳ_ph(t) are calculated through Y_hk(t) (11):

\bar{Y}_{rp}(t) = \xi_1 Y_{hk}(t); \quad \bar{Y}_{ph}(t) = \xi_0 Y_{hk}(t); \quad \xi_0 = 1 - \xi_1; \quad \xi_0, \xi_1 > 0 \qquad (12)

The distribution of the total income Y_hk(t) among all private households, taking into account the notation (12), takes the following form:

Y_{hk}(r) = \xi_0\bigl[\nu_1(t)\,Y_{ph1}(r) + \nu_2(t)\,Y_{ph2}(r)\bigr] + \xi_1^{0}\,\frac{h(t)-1}{Y_{rpo}}\left(\frac{Y_{rpo}}{r}\right)^{h(t)}\mathbf{1}(r - Y_{rpo}) + \frac{1-\xi_0-\xi_1^{0}}{\sigma\sqrt{2\pi}}\,e^{-\frac{1}{2}\left(\frac{r-Y_{rpo}}{\sigma}\right)^{2}}\bigl[1 - \mathbf{1}(r - Y_{rpo})\bigr] \qquad (13)

Here 1(r − Y_rpo) is the unit step function: 1(r − Y_rpo) = 1 for r ≥ Y_rpo and 1(r − Y_rpo) = 0 for 0 < r < Y_rpo. The coefficient ξ1^0 is slightly smaller than ξ1: the difference, not exceeding 0.1·ξ1, is the small part of the incomes of wealthy families transferred to smooth the left border of the Pareto distribution (7) using the normal distribution curve (8). The numerical values of the coefficients ξ0 and ξ1, the average values of household incomes r_mph, as well as the lower boundary of the incomes of wealthy families Y_rpo, can be found in the World Inequality Report (2018). For example, if we take the top 1% of the population as rich families, then their income in 1980 amounted to 10.7% of GDP, or 14.2% of Y_hk, and in 2015 it amounted to 20% of GDP, or 23.2% of Y_hk (World Inequality Report 2018, p. 12). By extrapolating the corresponding curve to 2050, we obtained a forecast estimate for the income share of the top 1% of super-rich families of 33.2% of Y_hk. We also need formulas for the expectations of the distributions (7) and (9), which have the form:

(a)\; r_{mrp}(t) = \frac{h(t)-1}{h(t)-2}\,Y_{rpo}; \qquad (b)\; r_{mph}(t) = [\nu_1(t) + 2\nu_2(t)]\,r_{m1} = [2 - \nu_1(t)]\,r_{m1}, \quad r_{m2} = 2 r_{m1} \qquad (14)

From here we obtain formulas for calculating the expectations r_m1 and r_m2, as well as the lower boundary of the incomes of super-rich families, from the average incomes of the various population groups available in the World Inequality Report (2018):

r mph h–1 ; Y ¼ r mpo : h–2 2 – ν1 rpo

ð15Þ
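As a quick check of the first relation in (15), the average income of middle-class and poor households, r_mph, and the share ν1 listed in Table 1 below reproduce the r_m1 column. A minimal Python sketch, with values taken from Table 1 (thousand dollars); the small discrepancies are due to rounding of the tabulated entries:

rows = {1980: (55, 0.40), 2015: (94, 0.50), 2050: (93, 0.94)}  # year: (r_mph, nu1)

for year, (r_mph, nu1) in rows.items():
    r_m1 = r_mph / (2 - nu1)   # eq. (15)
    print(year, round(r_m1, 1))
# 1980 -> 34.4, 2015 -> 62.7, 2050 -> 87.7 (Table 1 rounds these to 34, 62, 87)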

Using the extrapolation method, we found the values of h(t): they are h = 2.64 for 2015 and h = 2.55 for 2050. The values of h were already known for 1980 and 2000 (1980:

Table 1 Values of the coefficients and parameters

Year    ν1     h      ξ1      r_m1 (thousand $)   Y_rpo (thousand $)   r_mph (thousand $)   η
1980    0.4    3.28   0.142   34                  330                  55                   10.9
2015    0.5    2.64   0.232   62                  720                  94                   19.9
2050    0.94   2.55   0.332   87                  1080                 93                   32.7

h = 3, 2000: h = 2.7, with an asymptotic value of h = 2.5) (Dragulesku and Yakovenko 2001). We also introduce a crude measure of income inequality as the ratio of the average income of rich families (14a) to the average income of families from the middle class and the poor (14b):

η(t) = [(h(t) − 1) / (h(t) − 2)] · Y_rpo(t) / {[2 − ν1(t)] · r_m1}    (16)
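For reference, (16) can be evaluated directly from the entries of Table 1. The illustrative Python sketch below (function name ours) reproduces the η row up to rounding of the tabulated inputs:

def eta(h, Y_rpo, nu1, r_m1):
    # Crude inequality measure, eq. (16).
    return (h - 1) / (h - 2) * Y_rpo / ((2 - nu1) * r_m1)

table1 = {  # year: (h, Y_rpo, nu1, r_m1); incomes in thousand $
    1980: (3.28, 330, 0.40, 34),
    2015: (2.64, 720, 0.50, 62),
    2050: (2.55, 1080, 0.94, 87),
}
for year, args in table1.items():
    print(year, round(eta(*args), 1))
# 1980 -> 10.8, 2015 -> 19.8, 2050 -> 33.0 (Table 1 lists 10.9, 19.9, 32.7)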

We calculated the distribution of income among all private households (13) for three time slices: I, 1980; II, 2015; III, 2050. The numerical values of the coefficients and parameters calculated by formulas (10), (14), and (15), using the values of average income taken from the World Inequality Report (2018), are summarized in Table 1. The forecast values of US GDP by 2050 were calculated according to the verified production function (1), and to calculate the projected average incomes for the various groups of households, the predicted numbers of households were obtained from current demographic data. A graphic illustration of the distribution of households by income level is presented in Fig. 4. These graphs show the distribution for 1980 (Fig. 4a), 2015 (Fig. 4b), and the forecast for 2050 (Fig. 4c). The graphs of the distribution of income for all households show how the polarization of society occurs: in 2015 and 2050, the distributions already have a bimodal form, with effectively separate income distributions for the super-rich and the poor. As for the middle class, in the 1980 income distribution it is present in a significant form, whereas in 2015 it is rather blurred, and in 2050 it practically disappears. As can be seen from the last row of Table 1, income inequality, which already constitutes a 20-fold gap today, will grow and by 2050 will exceed a 30-fold gap. As can be seen from Fig. 4a, in 1980 the middle class, with an income of over 50 thousand dollars per person, made up a significant portion of the US population. Super-rich families were located to the right of 320 thousand dollars per person. People who have earned their income through their talents and work are located around this mark and are distributed according to the Gaussian law. But already in 2015 (see Fig. 4b) the middle class had lost its presence, and wealthy families had a much higher annual income, 700 thousand dollars per person, compared with 320 thousand dollars per person in 1980. The forecast for 2050 (see Fig. 4c) shows that there will be a bimodal distribution, where there is practically no middle


Fig. 4 Distribution of households by levels of income: (a) 1980, (b) 2015, (c) 2050


class with incomes ranging from 400 thousand to one million dollars per person. The minimum annual income of wealthy people will exceed one million dollars per person.

5 Conclusion

It is necessary to revive and modernize the key social functions of the state, particularly progressive taxation. It is the state that can become the main tool for equalizing incomes in society and supporting the middle class, which is vital for democracy and social cohesion in developed countries. An additional measure to consider is the introduction of a capital tax, as Piketty suggests (Piketty 2014, Ch. 14). Otherwise, the growth of inequality and poverty will continue, both in the USA and around the world, with accelerated concentration of resources and capital in the financial sector and with increasing economic imbalances. This in turn can lead to a sharp decline in the real economy, since the overconcentration of capital can easily reduce the effectiveness of investing in innovation. The neoliberal economic model not only contributes to the withdrawal of real production to countries with cheap labor, but also helps to redistribute incomes from the general population to financial elites, thereby increasing inequality in society. The process of financialization, which has been booming since the 1980s and has turned commodity and product markets into financial markets, has led to unprecedented growth in the scale and profitability of the financial sector at the expense of the material economy. This was largely due to the complete liberalization of financial-sector regulation and weaker control over its income. Globalization and financialization have turned the developed capitalist states into post-industrial economies, drastically reducing material industrial production there. Financial capital, together with transnational corporations, moved industrial production to developing countries with cheap labor, while establishing strict control by capital owners over the entire chain of creation of goods and services. The corporations themselves focused on sales and service, which bring the lion's share of the revenue. Digital technology is the best fit for scaling such a business. Therefore, the identified trend of rising inequality will clearly continue, reducing and eroding the middle class. The attempt by the President of the USA, Donald Trump, to return industrial production to America and begin reindustrialization has not yet yielded tangible results and does not seem to have caused a significant import substitution effect. New digital technologies only reinforce the current trend, because today they effectively serve only the service, finance, and trade sectors, as well as management and public administration, i.e., sectors that focus on the reallocation of resources, not their creation. Attempts to apply digital technologies in material industrial production, i.e., in Industry 4.0, can so far be observed only in Germany, Japan, and South Korea, and on a limited scale. Furthermore, digital information and Big Data are currently being financialized and commercialized, like


traditional commodities. As a result, working in financial markets gives capital owners more profit than industrial production. As capital seeks to get a quick return on investment, further growth in inequality in developed countries and the world is consequently inevitable.

Acknowledgments This chapter was prepared under the financial support of the Russian Science Foundation (Grant No. 18-18-00099).

References Acemoglu D., Restrepo P. (2017). Robots and jobs: Evidence from US labor markets. NBER working paper 23285 Aghion, P., & Williamson, J. G. (2009). Growth, inequality, and globalization. Theory, history, and policy. Cambridge: Cambridge University Press. Allen J.P. (2017). Technology and Inequality. Springer International Publishing AG. https://link. springer.com/book/10.1007%2F978-3-319-56958-1 Barro, R.J. (1999). Inequality, growth, and investment (NBER Working Paper 7038). Bowles J. (2014). The computerisation of European jobs Bruegel. http://gesd.free.fr/bowles714.pdf Bowley, A. L. (1937). Wages and income in the United Kingdom since 1860. Cambridge: Cambridge University Press. Brynjolfsson, E., & Mcafee, A. (2013). The second machine age. New York: W. W. Norton & Company, Inc. Brzeski, C., & Burk, I. (2015). Die Roboter kommen. Economic Research: Folgen der Automatisierung für den deutschen Arbeitsmarkt. https://ingwb.de/media/1398074/ing-dibaeconomic-research-die-roboter-kommen.pdf. Card, D., & DiNardo, J. E. (2002). Technology and U.S. wage inequality: A brief look. Federal Reserve Bank of Atlanta Economic Review, 87, 45–62. Conceição, P., & Galbraith, J. K. (2000). Technology and inequality: Empirical evidence from a selection of OECD Countries. Proceedings of the 33rd Hawaiian International Conference on System Sciences. Deaton, A. (2013). The great escape. Health, wealth, and the origins of inequality. Princeton, NJ: Princeton University Press. Dorling, D. (2014). Inequality and the 1%. NY: Verso. Dragulesku, A., & Yakovenko, V. M. (2001). Exponential and power-low distributions of wealth and income in the United Kingdom and the United States. Physica A, 299, 213–221. Ford, M. (2015). The rise of the robots. New York: Basic Books. Frey, C.B. and Osborne, M. A., (2013). The future of employment: How susceptible are jobs to computerisation? Oxford Martin school working paper. https://www.oxfordmartin.ox.ac.uk/ downloads/academic/future-of-employment.pdf Galbraith, J. K., & Hale, T. (2006). The changing geography of American inequality: From IT bust to big government boom (University of Texas Inequality Project Working Paper 40). Gancia G. (2012). Globalization, technology and inequality. http://www.crei.cat/wp-content/ uploads/users/working-papers/GTI_OpCREI.pdf Greenwood, J. (1997). The third industrial revolution: Technology, productivity, and income inequality. Washington, DC: American Enterprise Institute for Public Policy Research. Helpman, E. (Ed.). (1998). General purpose technologies and economic growth. Cambridge, MA: MIT Press. Kaldor, N. (1961). Capital accumulation and economic growth. New York: St. Martin’s Press.


Kanbur, R., & Stiglits, J. (2015). Wealth and income distribution: New theories needed for a new era. VoxEu. Kim, S. Y. (2012). Technological Kuznets curve? Technology, income inequality, and government policy. Asian Research Policy, 3, 33–49. Kleinbard D. (2000). The 1.7 trilliondot. comlesson, CNNMoney. https://edition.cnn.com/2000/fyi/ news/11/13/dot.com.economics Kuznets, S. (1955). Economic growth and income inequality. The American Economic Review, 45 (1), 1–28. Lin, J. (2011). Technological adaptation, cities, and new work. Review of Economics and Statistics, 93(2), 554–574. Malerba, F., & Orsenigo, L. (1995). Schumpeterian patterns of innovation. Cambridge Journal of Economics, 19, 47–65. Matsaganis, M., Özdemir, E., Ward, T., & Zavakou, A. (2016). Non-standard employment and access to social security benefits (Research Note 8/2015). Brussels: European Commission. Milanović, B. (2016). Global inequality. A new approach for the age of globalization. Cambridge: The Hebelknap Press of Harvard University Press. Ogawa, J. (2017). Global AI startup financing hit $5bn in 2016 [Internet]. Nikkei. Retrieved from http://asia.nikkei.com/Business/Trends/Global-AI-startup-financing-hit-5bn-in-2016 Piketty, T. (2014). Capital in the twenty-first century. Cambridge: Harvard University Press. Sanchez-Paramo, C., & Shady, N. (2003). Off and running? Technology, trade and the rising demand for skilled workers in Latin America. (World Bank Working Paper 3015). Shiba K. (2017). Central Banks’ new approach to AI: New settlement system applying blockchain technology and issuance of digital currency. Institute for International Monetary Affairs (IIMA). Newsletter #3 Stiglitz, J. (2012). The price of inequality. How today divided society endangers our future. New York: W. W. Norton Company. Stiglitz, J. (2015). The great divide. New York: W. W. Norton Company. Wang, W. C. (2007). Information society and inequality: Wage polarization, unemployment, and occupation transition in Taiwan since 1980 (University of Texas Inequality Project Working Paper 44). Williamson, J. G. (2015). Latin American inequality: Colonial origins, commodity booms, or a missed 20th century leveling? Working paper 20915. Cambridge, MA: National Bureau of Economic Research. World Inequality Database. World Inequality Report (2018). https://wid.world/data/ Wyatt, S., et al. (2000). Technology and inequality: Questioning information society. New York: Routledge.

Breakthrough Technologies and Labor Market Transformation: How It Works and Some Evidence from the Economies of Developed Countries Elena Gorbashko, Irina Golovtsova, Dmitry Desyatko, and Viktorya Rapgof

Abstract The proliferation of digital technology and growing economic inequality have sharpened the question of the boundaries of using breakthrough technologies. Economic practice shows that under the influence of new technologies the labor market is constantly transformed, and these changes are usually associated with job cuts in the manufacturing industry. An analysis of empirical data on the US economy shows that job cuts in the industrial sector and growth in the services sector are a long-term and sustainable trend. Such a process of structural transformation cannot be managed by market mechanisms alone. Broad involvement of the state as an institution in the formation and financing of training and retraining programs for personnel is required to mitigate the consequences of profound structural changes in the labor market.

Keywords Labor-saving technology · Labor market · Structural changes · Economic inequality

1 Introduction

One of the main reasons for changes in the employment structure is technological development. Since technological changes are ongoing, changes in the structure of employment are inevitable. However, in certain periods, when breakthrough technologies significantly change the technological base, fundamental changes in the labor market occur, and their consequences are reflected most strongly in the development of economic systems.

E. Gorbashko · I. Golovtsova · D. Desyatko Saint Petersburg State University of Economics, Saint-Petersburg, Russia e-mail: [email protected] V. Rapgof (*) Peter the Great St. Petersburg Polytechnic University, Saint-Petersburg, Russia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Devezas et al. (eds.), The Economics of Digital Transformation, Studies on Entrepreneurship, Structural Change and Industrial Dynamics, https://doi.org/10.1007/978-3-030-59959-1_5


Nowadays, negative expectations for the labor market are largely associated with the development of digital technologies: estimates of the share of employment at risk from broad automation range from 9% of those employed in the European economy (Arntz et al. 2016) to 47% in the US economy (Frey and Osborne 2013). Even if we accept that these estimates are polar opposites, it must be recognized that the impact of breakthrough technologies on the labor market is significant. The acute discussion of this issue is also largely related to the growth of income inequality in developed economies over the past 30 years (Piketty 2014; Stiglitz 2012) and the long-term decline in the share of manufacturing in the industrialized countries (OECD 2017). The widespread use of information and communication technologies has been especially painful for world industry. On the one hand, it has contributed to lower costs and the growth of outsourcing; on the other hand, it laid the foundations for the broad substitution of many types of activities with software products capable of performing complex technological processes. In real industrial systems this manifested itself already at the beginning of the twenty-first century, and a comparatively massive washing out of many industrial professions from the labor market is currently taking place. In this research, we want to show, based on empirical data, that the influence of new technologies on the labor market is twofold, and that the measures taken to mitigate the pressure of these technologies on the employment system usually come too late.

2 Are New Technologies Always Driving Employment?

An analysis of the literature shows that there are two opposing points of view on how new technologies are changing the labor market and the employment system. The first point of view holds that new technologies in real-world economic systems do not threaten the employment system, since, while displacing some activities, they simultaneously contribute to the appearance of others. Analyzing the American labor market for the period from 1850 to 2015, the authors of one study argue that the level of occupational churn in the USA is now at a historic minimum, and no more than 10% of jobs in the US economy are exposed to a real threat of automation (Atkinson and Wu 2017). Referring to such landmark innovations as electricity, the internal combustion engine, the computer, and the Internet, the authors note that changes are almost always smoother than many people think. As other researchers note, the boundaries of automation are expanding rapidly, but the problems of replacing human workers with machines in tasks requiring flexibility, judgment, and common sense remain enormous. And although in many cases machines replace and supplement human labor, they add value to those tasks that are solved thanks to the unique qualities of workers (Autor 2015). Another study notes that, because technological progress is unbalanced and non-routine manual tasks cannot be replaced by information technology, there is


an increase in wages and employment in the low-skilled services sector (Autor and Dorn 2013). Other researchers in Europe are also inclined not to dramatize the consequences of widespread job automation (Arntz et al. 2016; Pouliakas 2018). In Germany, according to research, no more than 13–15% of employees are at risk of automation (Arnold et al. 2016; Dengler and Matthes 2018). OECD researchers also tend to believe that no more than 10% of those employed in the US economy are at risk of automation (Nedelkoska and Quintini 2018). Studying the practical application of robots and artificial intelligence (Vermeulen et al. 2018), a group of authors notes that this is a “normal structural change,” since the use of new technologies in their application sectors is compensated by intersectoral effects and job creation in other sectors, since professions affected by the effect of new technologies make up no more than 20% of jobs. In fairness, it should be noted that there are pessimists who believe that technological progress is inevitable, but technology will not improve US economic performance in the long run, because headwinds such as demographics, education, inequality, and public debt will slow down GDP growth (Gordon 2014). Analyzing German industrial practice regarding the use of robots from 1994 to 2014. (Wolfgang et al. 2018), the authors note that the use of robots led to the loss of jobs in production, but this was offset by successes in the business services sector, i.e., robots were not killers of jobs, although their use affected the composition of aggregate employment: industrial robots supplanted labor in the manufacturing sector in Germany, but at the same time there was a compensating effect of employment in industries that complement the tasks performed by robots. Another, the opposite point of view believes that new technologies, primarily digital, as well as robots and artificial intelligence, pose real threats to the employment system. The most pessimistic estimates were given in (Frey and Osborne 2013), according to which 47% of those employed in the US economy are at risk of automation. One of the consequences of the widespread use of digital technologies is a decrease in average wages (Brynjolfsson and McAfee 2014) due to the widespread use of information and communication technologies, and as a result, their gradual reduction in cost. As noted by Acemoglu and Resrepo (2017), technological innovations can affect employment in two main ways: (a) by directly displacing workers from their previously performed tasks (crowding out effect); (b) by increasing the demand for labor in industries or jobs that arise or develop as a result of technological progress (productivity effect). In this regard, their assessment of the use of industrial robots from 1990 to 2007 in local US labor markets showed that robots can reduce employment and wages: one robot per thousand workers reduces the employmentto-population ratio by about 0.18–0.34%, and wages—by 0.25–0.5%. According to other estimates on the use of robots in the EU countries, one robot per thousand workers reduces the level of employment by 0.16–0.20%, i.e., crowding out dominates (Chiacchio et al. 2018). The use of industrial robots in the global economy (Carbonero et al. 2018) also poses significant threats: the estimates indicate a longterm reduction in employment by about 1.3% due to an increase in the number of robots by 24% from 2005 to 2014. 
In developed countries, this decrease in employment is just over 0.5%, while in countries with developing economies it reaches


almost 14%. A concomitant effect of the use of robots in developed countries is a slowdown in the offshoring of business, which has already led to a 5% decrease in employment in developing economies from 2005 to 2014. The impact of automation on the transformation of the labor market, according to some experts, is long-term, and this transformation, in which artificial intelligence and robots quickly develop the ability to perform both the cognitive and the physical work of most of the workforce, leads in the long run to a significant reduction in the labor share of income and to rising inequality: automation is very good for economic growth and very bad for equality (Berg et al. 2017). There are already attempts to assess the global impact of the widespread adoption of computers and digital technologies on employment. According to a survey conducted by the World Economic Forum (WEF 2016), between 2015 and 2020 net losses in the employment market will amount to 5.1 million jobs (total losses will amount to 7.1 million, and the number of new jobs will not exceed 2 million). One of the key issues in the context of the impact of disruptive technologies on labor substitution is growing inequality in income distribution. Such inequality increases as a result of two forces: (a) a growing gap between labor income and capital income, and (b) a growing gap between high-income and low-income families (Leipziger and Dodev 2016). Appealing to data on the US economy, these authors emphasize that the share of domestic income that goes to wages has been declining since the beginning of the 1970s, while the share that goes to capital (interest, dividends, realized investment income, capital gains) has increased. This ever-decreasing share of gross domestic income earmarked for labor (wages) since its peak in 1970 explains the expansion of inequality in the USA in recent decades. As can be seen, breakthrough technologies act as a double-edged sword: on the one hand, they contribute to the progress of society as a whole, but on the other hand, they strongly transform the labor market, aggravate the problem of economic inequality, and create real threats of job loss for representatives of many professions. The economies of developed countries, especially the American one, provide us with sufficient material to analyze how breakthrough technologies specifically transform the labor market.

3 ICT and Structural Shifts For all industrialized countries over the past 50 years, the dominant economic development has been a decline in the share of manufacturing and an increase in the share of the services sector in the national economy. Our studies only confirm this steady trend (Figs. 1 and 2). If in 1970 the share of manufacturing in GDP ranged from 21% (Canada) to 30% (Japan), then by 2015 this figure was significantly lower: 10% (the UK and Canada) and 22% (Germany). This drop in the share of the industry as a whole occurred against the backdrop of the growth of high-tech sectors in the manufacturing industry itself: if in 1970 it ranged from 5 to 9%, then in 2015 it accounted for 13% (Canada) to 19% (USA).

Fig. 1 The share dynamics of manufacturing in the G-7 countries in 1970–2015 (authors' creation). [Scatter plot of the G-7 countries in 1970 and 2015: horizontal axis, share of manufacturing in value added (%); vertical axis, value-added ratio of high-tech industries (%).]

The growth in the share of services in GDP was accompanied by an increase in the number of people employed in this segment of the economy in all developed countries. For example, from 1994 to 2016, the share of people employed in the services sector increased: in the USA from 72.7 to 81.4%; in Italy from 64.7 to 73.3%; in Japan from 60.5 to 72.2%; in Germany from 64.2 to 74.3%, in France from 72.8 to 80.1%. It was the growth of people employed in the services sector that made it possible to compensate for the drop in employment in the industrial sector, and moreover, to ensure the growth of employment in the economy as a whole (Table 1). However, if we consider individual economies, then with a general increase in the number of employed, there are significant structural changes in the number of employed in certain groups of professions. The analysis we conducted on the US and UK economies for the period 2000–2017 showed that there is a significant group of professions that have experienced rapid growth during this period, or vice versa, a significant reduction (Tables 2 and 3). As it can be seen from Tables 2 and 3, the employed balance is positive. For the 12 fastest growing and fastest declining professional groups in the US economy, the

Fig. 2 The share of the services dynamics in G-7 countries in 1970–2018 (authors' creation). [Line chart: share of services in GDP (%) for the G-7 countries, 1970–2018.]

Table 1 Employment dynamics in economically developed countries (thousand people)

Country    2000      2005      2010      2015      2017      Growth rate (%)
USA        129,739   130,138   127,097   137,897   142,549   0.6
UK         28,549    30,150    30,299    32,786    33,307    0.9
Germany    35,883    35,846    37,503    40,173    41,607    0.9
Italy      20,894    22,452    23,207    22,466    23,024    0.6
Canada     14,760    16,124    16,964    17,947    18,417    1.3
Japan      64,100    62,900    62,000    62,700    64,200    0.0

Sources of data: https://www.bls.gov/oes/tables.htm, https://www.gov.uk/government/publica tions/, http://www.ilo.org/ilostat/, http://www5.statcan.gc.ca/cansim/, http://www.stat.go.jp/ english/data/roudou/lngindex.htm
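The growth-rate column of Table 1 appears to be the compound annual growth rate of employment over 2000–2017; that reading is our assumption, since the table does not state it explicitly. A minimal Python sketch reproducing the column from the 2000 and 2017 values:

employment = {  # thousand people, (2000, 2017)
    "USA": (129_739, 142_549), "UK": (28_549, 33_307), "Germany": (35_883, 41_607),
    "Italy": (20_894, 23_024), "Canada": (14_760, 18_417), "Japan": (64_100, 64_200),
}
for country, (e2000, e2017) in employment.items():
    cagr = (e2017 / e2000) ** (1 / 17) - 1   # compound annual growth rate over 17 years
    print(f"{country}: {100 * cagr:.1f}%")
# USA 0.6, UK 0.9, Germany 0.9, Italy 0.6, Canada 1.3, Japan 0.0 (as in Table 1)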

growth in employed totaled 2.382 million. Obviously, the reduction of professions such as telephone operators, computer operators, word processors and typists, data entry keyers, and switchboard operators is directly related to the rapid growth of information and communication technologies (ICT). Obviously, ICTs are basic or general-purpose technologies that are associated not only with the manufacturing or service sector but also with the mass consumer. Among the rapidly growing professions, it is safe to say that one profession—software developers—is also closely associated with the growth of ICT. In this case, there is a creation of new jobs in the breakthrough technology sector, but this only makes up 9% of the total number of newly created jobs. This suggests that the structural shift due to the influence of ICT does not mean that there is proportional creation of new jobs in the breakthrough



Table 2 The dynamics of employed in the US economy by rapidly declining and rapidly growing professions in 2000–2017. (Calculated by the authors according to https://www.bls.gov/oes/tables. htm) USA Fastest declining occupations Dismissal, thousand people Occupation title Telephone operators –46,0

Annual rate (%) –11.7

Computer operators Word processors and typists

–146,0 –192,0

–8.6 –7.8

Tool grinders, filers, and sharpeners Advertising and promotions managers Switchboard operators, including answering service Machine feeders and offbearers

–20,0

–7.1

–65,0

–6.8

–163,0

–6.3

–140,0

–6.0

Sewing machine operators Data entry keyers

–225,0

–5.6

–279,0

–5.4

Chief executives

–310,0

–5.2

Telemarketers Executive secretaries and executive administrative assistants Total

–272,0 –774,0

–5.1 –4.8

–2632

Fastest growing professions Growth, thousand people Occupation title Market research +497,0 analysts and marketing specialists Personal care aides +1665 Sales representa+812,0 tives, services, all other Massage therapists +78,0

Annual rate (%) +11.1

+10.5 +10.2

+8.8

Manicurists and pedicurists Coaches and scouts

+76,0

+8.0

+167,0

+7.6

Medical scientists, except epidemiologists Personal financial advisors Health specialties teachers, postsecondary Software developers, applications Medical secretaries Cooks, restaurant

+76,0

+7.0

+124,0

+5.8

+116,0

+5.5

+474,0

+4.9

+ 294,0 +635,0

+4.3 +4.1

Total

+5014

technology sector. Most likely, new technologies create new jobs, primarily due to the emergence of new niches for professional activities, which would have been impossible with the old technological systems. We observe more modest indicators in the UK economy: there is practically an equilibrium in nine groups of professions. Here, unlike in the USA, two groups of professions—programmers and software development professionals and information technology and telecommunications directors—have rapidly developed in the ICT sector, and their share in newly created jobs is significant—25%. In Figs. 3 and 4 we give examples of occupational shifts for individual professions in the USA and Great Britain. For example, the first generation of computers, created in the 1950–1960s, led to the emergence of such

74


Table 3 The dynamics of employed in the UK economy by rapidly declining and rapidly growing professions in 2000–2017 (Calculated by the authors according to https://www.ons.gov.uk/ employmentandlabourmarket/peopleinwork/) UK Fastest declining occupations Dismissal, thousand people Occupation title Telephone –81,0 salespersons Typists and related –95,0 keyboard occupations Library clerks and –45,0 assistants

Annual rate (%) –7.6

Fastest growing professions Growth, thousand people Occupation title Paramedics +20,0

Annual rate (%) +8.6

–6.6

Educational support assistants

+126,0

+7.8

–5.2

Information technology and telecommunications directors Engineering professionals n.e.c.4 Business and related associate professionals n.e.c.4 Human resource managers and directors Customer service managers and supervisors

+72,0

7.2

+86,0

+5.6

+109,0

+5.4

+122,0

+5.3

+102,0

+4.6

Marketing associate professionals Programmers and software development professionals Total

+92,0

+3.2

+145,0

+3.2

Printers

–45,0

–4.5

Bank and post office clerks

–121,0

–4.2

Personal assistants and other secretaries National government administrative occupations Business sales executives Retail cashiers and check-out operators

–190,0

–3.9

–125,0

–3.1

–59,0

–2.2

–74,0

–1.7

Total

–835,0

+874,0

a new profession as a computer programmer. However, the advent of personal computers and packages of standard software applications led to a sharp reduction in their number, and, for example, in the USA their number has a steady downward trend, having decreased from 525 thousand people in 2000 to 250 thousand people in 2017. On the other hand, the advent of social networks and global information retrieval systems, as well as a new generation of gadgets for the mass consumer (smartphones), has led to an increase in the number of programmers involved in the development of various applications. So, the number of developers of software applications has grown in the USA from 375 thousand people, in 2000, up to 849 thousand people in 2017, as shown in Fig. 3a. The series of Fig. 3b–f exhibit several other examples of occupational shifts in the same vein as Fig. 3a, both for the USA and UK. The data of Tables 2 and 3, as well as Figs. 3 and 4 show that the structural shift is not toward the sectors producing goods, but toward the services sector. In this regard, it should be noted that the results obtained in the course of our study are


75

Fig. 3 (a) Occupational shifts in the USA: (A) Applications and Computer Programmers; (B) Software Developers (authors’ creation). (b) Occupational shifts in the USA: (A) Nonfarm Animal Caretakers; (B) Mail Clerks and Mail Machine Operators (authors’ creation). (c) Occupational shifts in the USA: (A) Coaches and Scouts; (B) Sewing Machine Operators (authors’ creation). (d) Occupational shifts in the UK: (A) Graphic designers; (B) Printers (authors’ creation). (e) Occupational shifts in the UK: (A) Product, clothing and related designers; (B) Typists and related keyboard occupations (authors’ creation). (f) Occupational shifts in the UK: A) Artists; B) Library clerks and assistants (authors’ creation)

76

Fig. 3 (continued)

E. Gorbashko et al.


77

Fig. 4 US manufacturing production and occupations dynamic according to data from www.bls. gov/oes/tables.htm and stats.oecd.org/index.aspx?DatasetCode¼SNA_TABLE6A#

completely consistent with the conclusions contained in the work of (Spiezia 2016). This study notes that in OECD countries between 1995 and 2012. ICT use has had the greatest impact on declining employment in manufacturing and has caused growth in sectors such as culture, leisure, and construction. The conclusions contained in (Kehoe et al. 2013) are also consistent with our results. In a broader context, these empirical data, even if they are fragmented, confirm that structural shifts are due to significant differences between sectors in multifactor productivity, and as a result caused by these differences in final prices of goods (Acemoglu and Guerrieri 2008).

4 Labor-Saving Technologies and Structural Shifts in US Manufacturing It has already been noted above that in all developed countries over the past 40 years there has been a reduction in the number of people employed in the industry. One of the most striking examples is the US manufacturing industry shown in Fig. 4, where there are two mixed trends: a decrease in the number of employees and an increase in production volumes. As can be seen from this graph, from 2000 to 2010, the number of employees decreased from 12 to 8.2 million people, approximately one third. Here, of course, the impact of the 2008–2009 crisis was notable and although by 2016 the number of employees in the sector increased to 9.1 million people, it is obvious that employment in the sector has significantly decreased. Over the same period, production increased 1.4 times, which is an indirect confirmation that the multifactor productivity (MFP) in the industry has increased.

78


A logical question arises: due to what is this happening? First, let us note that there is the use of labor-saving technologies when the share of capital grows faster than the share of labor in the production of final products. Apart from ICT, such technologies include other technologies, the use of which cannot be as wide as ICT, but whose impact on individual sectors of the economy is significant. Such technologies include industrial robots (IR), additive technologies (AT) in industry, automation in the broadest context. Industrial robots have become an important part of production, and currently there are more than 1.3 million robots in the world, most of which (about 350 thousand) are used in Japan, about 250 thousand in China, and almost 200 thousand in the USA, however, the highest density of their use (35 robots per 10,000 inhabitants) is observed in Korea (De Backer et al. 2018). Approximately 60% of robots are used to perform operations such as the handling of components, stamping, bending, measuring, quality inspection, and packaging and placing. For most economies, a considerable share of robot purchases is also used for welding and soldering. It is interesting to note the results of a study based on the experience of using robots by Spanish companies in the period from 1990 to 2016. According to the results obtained during the survey, 10% of jobs in non-adopting firms are destroyed when the share of sales attributable to robot-using firms in their industries increases from zero to one half (Koch et al. 2019). It should be noted that the density of industrial robots (the number of robots per 10,000 employees) in the manufacturing sector in Spain and the USA is approximately the same—160 and 189, respectively (Crowe 2018). However, given that the total number of robots in the US industry in 2016 was 170 thousand units, we can assume a greater effect from their use of reducing jobs. Additive manufacturing (AM) processes are a types of methods that fabricate parts by adding elements and segments of a designed feedstock material. These materials can range from polymeric and plastic to metallic and ceramic. Based on different needs, a specific method can be implemented. Different methods utilize different deposition techniques. Some of them melt the materials and some change the materials into semi-solid form. Different heating sources such as laser and resistance heaters can be used to change material states. Some of the benefits of AM processes can be summarized as: (1) no need for tool design, (2) no need for separate machines and (3) less waste of materials and final cost (Dehghanghadikolaei et al. 2018). One of the distinguishing features of this technology is a significant reduction in the involvement of human labor in the production process (Thomas 2013). This technology also opens up new opportunities such as mass customization, complexity for free, design for function, shorter time to market, supply chain simplification, waste reduction, and less pollution, The technology itself has good growth prospects in segments such as automotive, aerospace, and medical industries (van Barneveld and Jansson 2017). At the same time, the barriers that exist at this time in the technology itself are noted: slow build rates, high production costs, the considerable effort required for application design and for setting process parameters, manufacturing process, discontinuous production process, and limited component size (Berger 2013). 
However, it should be noted that technology is rapidly improving, and in


79

2016 HP launched the Multi Jet Fusion 3D printer, which is 50% cheaper and ten times faster than existing models to produce plastic products (Baumers et al. 2018). The main findings indicate that AM (1) contributes to job creation in both the manufacturing sector and in the service sector, (2) does not bring back mass production jobs from emerging economies such as BRIC, (3) contributes to job creation in product development stages (e.g., rapid prototyping), and (4) contributes to job creation in production stages of low-volume batches, mainly of complex products (Kianiana et al. 2015). One of the world leaders in both manufacturing and the use of additive technologies (AT) is the USA. According to a study (Ford 2014), in 2011 the USA accounted for 38.3% of all installed systems in the world using additive technologies, and it produced 64% of the total global production of such systems. At the same time, as can be seen from this study, additive technologies provided the output of only 0.01% of the total industrial production in the country. For example, in the aerospace industry, 12.1% of installed systems use additive technologies, but they support production only in the amount of $ 29.8 million, or 0.05% of all production in the industry. One of the new trends in the global development of these systems has been a sharp increase in sales of additive systems related to the processing of metal materials: their sales increased four times compared to 2013 and reached 1800 systems in 2017 (Wohlers 2018). We have analyzed the dynamics of number of employed by working groups of professions that were fundamental for the period of rapid industrial development.1 The dynamics changes in the number of these professional groups are shown in Table 4. As can be seen from Table 4, since 2000 all these professions have been rapidly washed out of the labor market. In some professions, the average annual rate of retirement amounted to: Assemblers and Fabricators, All Other—1.4%, Grinding, Lapping, Polishing, and Buffing Machine Tool Setters, Operators, and Tenders, Metal and Plastic—2.9%, Milling and Planning Machine Setters, Operators, and Tenders, Metal and Plastic—4.0%, Lathe and Turning Machine Tool Setters, Operators, and Tenders, Metal and Plastic—5.9%, Forging Machine Setters, Operators, and Tenders, Metal and Plastic—6.2%, Drilling and Boring Machine Tool Setters, Operators, and Tenders, Metal and Plastic—10.4%. The series of Fig. 5a–d shows the trajectories of individual workers’ retirement from the US labor market. The theoretical curve of job retirement is built based on a logistic function of the form of eq. 4.1: y¼

y = a / [1 + b · exp(−c · (t − T0))] + δy    (1)

The Logistic Function Parameters are shown in Table 5.
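As an illustration, eq. (1) can be evaluated directly with the parameter sets of Table 5. The Python sketch below uses the Fig. 5c set (tool and die makers), with employment in thousands of people; the reading of the equation as y = a/(1 + b·e^(−c(t−T0))) + δy and the function name are our assumptions.

import math

def retirement_curve(t, a, b, c, dy, T0=2000):
    # Theoretical job-retirement trajectory, eq. (1).
    return a / (1 + b * math.exp(-c * (t - T0))) + dy

# Parameters for Fig. 5c (tool and die makers), taken from Table 5.
a, b, c, dy = 2811.6, 46.392, -0.1667, 67.716
for year in (2000, 2005, 2010, 2017):
    print(year, round(retirement_curve(year, a, b, c, dy)))
# 2000 -> 127, 2005 -> 94, 2010 -> 79, 2017 -> 71
# (Table 4 reports 131, 100, 67, and 74 thousand employed in those years.)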

1

Assemblers and Fabricators; Engine and Other Machine Assemblers; Electrical, Electronic, and Electromechanical Assemblers; Extruding and Drawing Machine Setters, Operators, and Tenders, Metal and Plastic; Forging Machine Setters, Operators, and Tenders, Metal and Plastic; Rolling Machine Setters, Operators, and Tenders, Metal and Plastic; Cutting, Punching, and Press Machine Setters, Operators, and Tenders etc.

80


Table 4 Structural shifts in US manufacturing sector due to the diffusion of Labor-saving technologies (Calculated by the authors according to the source: https://www.bls.gov/oes/tables. htm)

Occupation title Drilling and boring machine tool setters, operators, and tenders, metal and plastic Forging machine setters, operators, and tenders, metal and plastic Lathe and turning machine tool setters, operators, and tenders, metal and plastic Milling and planning machine setters, operators, and tenders, metal and plastic Rolling machine setters, operators, and tenders, metal and plastic Cutting, punching, and press machine setters, operators, and tenders, metal and plastic Tool and die makers Electrical, electronic, and electromechanical assemblers, except coil winders, tapers, and finishers Grinding, lapping, polishing, and buffing machine tool setters, operators, and tenders, metal and plastic Extruding and drawing machine setters, operators, and tenders, metal and plastic Engine and other machine assemblers Assemblers and fabricators, all other Computer-controlled machine tool operators, metal, and plastic Machinists Welders, cutters, solderers, and brazers Molding, core making, and casting machine setters, operators, and tenders, metal and plastic Total

Number of employed, thousand people 2000 2005 2010 2015 2017 71 43 22 15 11

Rate of drawdown (%) –10.4

54

34

22

20

18

–6.2

84

71

41

40

30

–5.9

36

29

21

20

18

–4.0

50

38

32

32

26

–3.8

351

265

182

195

189

–3.6

131 66

100 49

67 33

75 39

74 38

–3.3 –3.2

124

102

70

74

75

–2.9

114

87

76

72

74

–2.6

101 1674 162

93 1501 136

80 1176 124

80 1344 147

78 1306 145

–1.6 –1.4 –0.7

420 414 158

368 358 157

353 314 115

399 386 136

378 377 155

–0.6 –0.5 –0.1

4010

3431

2728

3074

2992

–1018

Calculated by the authors according to the source: https://www.bls.gov/oes/tables.htm

In total for the four professional groups shown in Fig. 5, 322,000 jobs have been lost for over 17 years (Table 6). For comparison, we can give some data on a decrease in the number of people employed in the US textile and steel industry for the period 1958–2011: their number over the 53 years decreased from 300 thousand to 16 thousand in the textile and from 500 thousand to 100 thousand in the steel industry (Bessen 2017).


81

Fig. 5 (a) Disappearance from the US labor market: lathe and turning machine tool setters, operators, and tenders (metal and plastic) (authors’ creation). (b) Disappearance from the US labor market: cutting, punching, and press machine setters, operators, and tenders (metal and plastic) (authors’ creation). (c) Disappearance from the US labor market: tool and die makers (authors’ creation). (d) Disappearance from the US labor market: grinding, lapping, polishing, and buffing machine tool setters, operators, and tenders (metal and plastic) (authors’ creation)

82


Fig. 5 (continued)

Table 5 Logistic function parameters (authors’ creation)

Parameters T0 a b c δy

Fig. 5a 2000 19.928 259.02 –0.0714 9.9959

Fig. 5b 2000 9004.6 52.929 –0.1345 164.29

Fig. 5c 2000 2811.6 46.392 –0.1667 67.716

Fig. 5d 2000 130 1.2069 –0.1765 65.85

Table 6 Job cutting by occupation in US manufacturing (authors’ creation) Occupational title Lathe and turning machine tool setters, operators, and tenders (metal and plastic) Cutting, punching, and press machine setters, operators, and tenders (metal and plastic) Tool and die makers Grinding, lapping, polishing, and buffing machine tool setters, operators, and tenders (metal and plastic) Total

2000 84,000

2017 30,000

Balance –54,000

351,000

189,000

–162,000

131,000 124,000

74,000 75,000

–57,000 –49,000

690,000

368,000

–322,000

5 Conclusion Empirical data based on processing statistics from the US manufacturing industry indicate a profound transformation of the labor market. The main reason for this is new technologies that contribute to broad automation, based primarily on information technology. It should be noted that automation is itself at the initial stages and its use contributes to the growth of employment, since it solves the problem of unmet


83

demand. However, these days we are witnessing that inelastic demand has led to the dominance of labor-saving technologies (Bessen 2019). As a result of this, there is a significant redistribution of jobs from industry to the service sector. Since the decline in employment in the industrial sector is a long-term and stable trend for all developed countries, it is necessary to agree that the economies of developed countries are undergoing a profound structural transformation similar to that which occurred in the prewar years in the USA, when employment in agriculture plummeted, providing the influx of labor into industry, construction, transportation, and services (Stiglitz 2017). This process will be lengthy and associated with high unemployment in shrinking industries. To overcome this negative trend, one does not need to rely on the “invisible hand of the market;” a strong state participation in retraining and retraining programs is necessary. Acknowledgment This chapter was prepared under the financial support of the Russian Science Foundation (Grant No. 18-18-00099).

References Acemoglu, D., & Guerrieri, V. (2008). Capital deepening and non-balanced economic growth. Journal of Political Economy, 116(3), 467–498. Acemoglu, D. & Resrepo, P. (2017). Robots and jobs: Evidence from US labor markets (NBER Working Paper No 23285). Arnold, D., Arntz, M., Gregory, T., Steffes, S., & Zierahn, U. (2016). Herausforderungen der digitalisierung für die zukunft der arbeitswelt. ZEW Policy Brief Nr., 16–08. Arntz, M., Gregory, T. & Zierahn, U. (2016). The risk of automation for jobs in OECD countries: A comparative analysis. (OECD Social, Employment and Migration Working Papers No 189). Atkinson, R.D. & Wu, J. (2017) False Alarmism: technological disruption and the U.S. Labor Market, 1850–2015. Information Technology & Innovation Foundation. http://www2.itif.org/ 2017-false-alarmism-technological-disruption.pdf Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30. Autor, D. H., & Dorn, D. (2013). The Growth of Low-Skill Service Jobs and the Polarization of the US Labor Market. American Economic Review, 103(5), 1553–1597. Baumers, M., et al. (2018). Adding it up: The economic impact of additive manufacturing. https:// eiuperspectives.economist.com/sites/default/files/Addingitup_WebVersion.pdf Berg, A., Buffie, E., & Zanna, F. (2017). Should we fear the Robot revolution? (The correct answer is yes) (IMF Working Paper No 18/116). Berger R (2013, November). Additive manufacturing. A game changer for the manufacturing industry? Munich: Roland Berger. Bessen, J. (2017). Automation and Jobs: When technology boosts employment. https://papers.ssrn. com/sol3/papers.cfm?abstract_id¼2935003 Bessen, J. (2019). Automation and jobs: When technology boosts employment. https://voxeu.org/ article/automation-and-jobs-when-technology-boosts-employment Brynjolfsson, E., & McAfee, A. (2014). The second machine age: work, progress, and prosperity in a time of brilliant technologies. New York: WW Norton & Company. Carbonero, F., Ernst, E., & Weber, E. (2018). Robots worldwide: The impact of automation on employment and trade (ILO Working Paper No 36).

84


Chiacchio, F., Petropoulos, G., & Pichler, D. (2018). The impact of industrial robots on EU employment and wages: A local labour market approach (Working Paper No 02). Crowe, S. (2018). 10 most automated countries in the World. https://www.therobotreport.com/10automated-countries-in-the-world/ De Backer, K., De Stefano, T., Menon, C., Suh, J. R. (2018). Industrial robotics and the global organization of production (OECD Science, Technology and Industry Working Papers 2018/ 03). Dehghanghadikolaei, A., Namdari, N., Mohammadian, B., & Fotovvati, B. (2018). Additive Manufacturing Methods: A Brief Overview. Journal of Scientific and Engineering Research, 5(8), 123–131. Dengler, K., & Matthes, B. (2018). The impacts of digital transformation on the labour market: Substitution potentials of occupations in Germany. Technological Forecasting and Social Change, 137, 304–316. Ford, S. (2014). Additive manufacturing technology: Potential implications for U.S. manufacturing competitiveness. Journal of International Commerce and Economics. https://ssrn.com/ abstract¼2501065 Frey, C. B., & Osborne, M. A. (2013). The Future of Employment: How Susceptible are Jobs to Computerization? Oxford: University of Oxford. Gordon, R. (2014). US economic growth is over: The short run meets the long run. The Brookings Institution. Growth, convergence and income distribution: The road from the Brisbane G-20 Summit, pp. 173–180. Kehoe, T. J., Ruhl, K. J., & Steinberg, J. B. (2013). Global imbalances and Structural Change in the United States (NBER Working Paper No 19339). Kianiana, B., Tavassolib, S., & Larsson, T. C. (2015). The Role of Additive Manufacturing Technology in job creation: an exploratory case study of suppliers of Additive Manufacturing in Sweden. Procedia CIRP, 26, 93–98. Koch, M., Manuylov, I., Smolka, M. (2019). Robots and firms. VOX CEPR Policy Portal. https:// voxeu.org/article/robots-and-firms Leipziger, D, & Dodev, V. (2016). Disruptive technologies and their implications for economic policy: Some preliminary observations (Institute for International Economic Policy Working Paper Series WP-2016-13). Nedelkoska, L. & Quintini, G. (2018). Automation, skill use and training (OECD Social, Employment and Migration Working Papers No 202). OECD. (2017). OECD employment outlook 2017. http://www.oecd.org/employment/outlook/ Piketty, T. (2014). Capital in the twenty-first century. London: Harvard University Press. Pouliakas, K. (2018). Determinants of automation risk in the eu labour market: A skills-needs approach (IZA Discussion Paper No 11829). Spiezia, V. (2016). ICT and jobs: Complements or substitutes? The effects of ICT investment on labour demand by skill and by industry in selected OECD countries. Paris: OECD. Stiglitz, J. E. (2012). The price of inequality: How today’s divided society endangers our future. New York: W.W. Norton. Stiglitz, J. E. (2017). Structural transformation, deep downturns, and government policy (NBER Working Paper No 23794). Thomas, D.S. (2013, August). Economics of the U.S. Additive Manufacturing Industry (NIST Special Publication 1163). https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP. 1163.pdf van Barneveld, J., & Jansson, T. (2017). Additive manufacturing: A layered revolution. European Foundation for the Improvement of Living and Working Conditions. http://www.technopolisgroup.com/wp-content/uploads/2018/08/wpfomeef18002.pdf Vermeulen, B., Kesselhut, J., Pyka, A., & Saviotti, P. P. (2018). 
The Impact of Automation on Employment: Just the Usual Structural Change? Sustainability, 10, 1–27. Wohlers Report (2018). http://wohlersassociates.com/2018report.htm Wolfgang, D, Findeisenz, S, Suedekum, J, Woessner, N. (2018). Adjusting to Robots: Worker-Level Evidence. Opportunity and Inclusive Growth Institute (Institute Working Paper No 13).

Technological Substitution of Jobs in the Digital Economy and Shift in Labor Demand Towards Advanced Qualifications А. А. Akaev, A. I. Rudskoy, and Tessaleno Devezas

Abstract This chapter addresses three key labor market challenges that occur as a result of the digital transformation of the economy: shift in labor demand towards advanced qualifications; the continuous growth of structural technological unemployment; and the polarization of labor into high- and low-skilled positions, while middle-skilled jobs are being eroded. The authors propose mathematical models that allow to observe the distribution of labor by skill level, as well as the distribution of the probability curve of technological substitution of labor dependent on the skill level, and calculate the effective share of low-, medium-, and high-skilled workers before and after the digital transformation of the economy. A solution to the differential equation describing the increase in productivity of a worker in the digital economy, where they interact in symbiosis with intelligent machines, is also presented in this chapter. This solution allows us to make forecasts of optimal wage growth for workers in the digital age, assuming that it is proportional to productivity growth. It is shown that the optimal increase in the salary of a high-skilled worker should occur at a rate of 7% per year and double in 10 years. Keywords Technological substitution of jobs · Technological shift in labor demand · Polarization of labor · Labor productivity · Optimal wage

А. А. Akaev Institute for Mathematical Research of Complex Systems, Moscow State University named after M.V. Lomonosov, Moscow, Russia A. I. Rudskoy Peter the Great Saint Petersburg Polytechnic University, Saint Petersburg, Russia e-mail: [email protected] T. Devezas (*) Atlantica—Instituto Universitário, Oeiras, Lisbon, Portugal C-MAST (Center for Aerospace Science and Technologies)—FCT, Lisbon, Portugal e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Devezas et al. (eds.), The Economics of Digital Transformation, Studies on Entrepreneurship, Structural Change and Industrial Dynamics, https://doi.org/10.1007/978-3-030-59959-1_6

85

86

А. А. Akaev et al.

1 Introduction Technological progress has always created three problems in the field of demand for labor: a shift in demand for labor towards low or high qualifications; technological unemployment and, recently, the polarization of labor. Let us briefly consider each of these problems in the context of the emerging digital economy, which has a number of unique features. In particular, it is assumed that it will sharply aggravate all of the above three problems in the labor sphere. The milestone of the emergence of the digital economy is considered to be 2007, when mobile Internet started operating, which was developing so rapidly that it took up almost half of all Internet traffic in the world by 2017. Over this period, breakthroughs have occurred in key digital technologies—cloud computing, big data analytics, artificial intelligence (AI), Internet of things (IoT), blockchain, and robotics with AI elements, as well as intelligent computing technologies. Their convergent development and rapid integration with the real economy began as a result from the presence of a developed ICT infrastructure that originated from the previous information revolution. The world started talking about the onset of the fourth industrial revolution and the digital economy formation (Schwab 2016). Today, a number of developed and avant-garde developing countries have already laid the foundations of the digital economy, which has now become one of the main driving forces of economic growth. The growth rates of the digital economy in the leading countries (USA, Japan, Great Britain, China, etc.) over the past 5 years averaged 5–7% per year (Huaten et al. 2019, p. 31). Thus, the digital economy is already contributing to increased labor productivity in production, creating new markets, and new growth points in the economy. How will this affect the labor market? One of the important examples of the technological progress direction is the shift in demand for labor towards low or high qualifications (Acemoğlu 2009, Chap. 15). In the nineteenth and first half of the twentieth century, technological progress was shifted towards low-skilled labor. This was facilitated by the division of labor and the conveyor organization of production. In many ways, this shift in technological change was due to the extremely low supply of skilled labor, even in developed countries. In the second half of the twentieth century, the supply of skilled labor increased sharply in the labor markets of developed countries, and the wages offered by firms also increased continuously. This dynamic in the labor market was explained by the fact that the innovative technologies of the third information and industrial revolution, related to the 5th (1946–1982) and 6th (1982–2018) technological system (TS), already had a shift towards skilled labor. It was shown that this process was endogenous (Acemoğlu 2009, pp. 773–776). There is a hypothesis that this will be accelerated in the era of the digital economy. The problems associated with the possible technological replacement of jobs and the structural unemployment growth have been discussed from the very beginning of the first industrial revolution (Ford 2015, p. 10). Indeed, when the previous technical conditions changed, problems of short-term structural unemployment arose during


transitional periods, but they never turned into systemic or chronic ones. The new technological structure destroyed many jobs associated with the old TS, but created a lot of new jobs, often better paid, but requiring higher qualifications from the workers. Therefore, a significant part of the unemployed could get work in a new segment of the labor market after additional and advanced training, as well as the acquisition of certain skills on new technological equipment. The dissemination of new technologies in the past was a rather slow process, which took two to three decades or more, so workers had enough time to master a new profession and adapt to new working conditions. Today, most economists are convinced that structural unemployment is always temporary and does not pose a serious problem. However, the great economist of the twentieth century, John Maynard Keynes, took a completely different point of view on this problem and predicted the following (Keynes 1963, p. 358): "We are overcome by a disease that some readers may not have heard of, but which will be much discussed in the coming years, namely technological unemployment. It arises because the speed with which we discover labor-saving technologies surpasses our ability to find new use for liberated labor." Now, this time has come.
The technologies of the fourth industrial revolution are exponential and will penetrate the life of society much faster than the technologies of previous revolutions, since they will be distributed through ready-made digital networks, the infrastructure of the third industrial revolution (1950–2010) (Schwab 2018, p. 34). Indeed, digital technologies are developing and spreading at an exponential rate, so it will be difficult for most people to adapt in time to the changes in the labor market that arise due to the rapid development of new technologies. Consequently, in the era of the digital economy, the likelihood of continuous growth in technological unemployment rises sharply. M. Ford figuratively formulated this fundamental shift in the world of work as follows: "In the past, machines served as a means of increasing the productivity of workers. And now, machines themselves are turning into workers, forcing people out of the economy" (Ford 2015, p. 11).
A very unpleasant consequence of the emergence of intelligent computers and robots and their irrepressible expansion into all spheres of public life and economics is the possibility of mass replacement of people engaged in routine work not only of a physical but also of a cognitive nature. New digital technologies can dramatically reduce jobs for people of average skill, destroying entire professions (Schwab 2018, p. 95). It is assumed that in the next 10 years, the number of translators, journalists, assistants, drivers, sellers, accountants, brokers, as well as representatives of many other programmable professions will greatly decrease. At the same time, digital technologies are expected to create many new jobs in such new fields as big data analytics, training and management of AI, development of intelligent computing technologies and software, training and management of intelligent robots, and many others. However, jobs in these new industries will require deep and diverse technical and mathematical knowledge and skills.
Consequently, the digital economy will increase the demand for highly skilled workers in the STEM-fields (scientific researchers, specialists in new technologies and engineering, mathematicians specializing in the fields of digital technologies). In


addition, highly skilled robotics engineers, AI and machine learning specialists, and virtual and augmented reality architects will be required. In developed countries, one of the laws of the future digital era is already observed, which is that the demand for less-skilled labor is decreasing and the demand for highly skilled labor is growing (Brynjolfsson and McAfee 2016, p. 181). Moreover, demand falls most heavily for jobs related to routine cognitive work, which are usually occupied by middle-class representatives with an average level of skill. Most unskilled jobs in the service sector are reserved for people, since it is simply economically unprofitable to replace them with expensive intelligent robots. A decrease in demand for low and medium-skilled workers means that the trend towards a further decrease in their wages, which began back in the 1980s, will continue. The workers with a high school education or vocational secondary education in the future can only rely on low paid jobs in the service sector. As a result, labor will be polarized: middle-skilled jobs with an average level of wages will be intensively reduced, and employment will be increasingly concentrated both in the most highly skilled and highly paid, as well as in the least qualified and low paid segments of labor. This process is called “polarization of jobs” (Brynjolfsson and McAfee 2016, p. 186). According to most experts, this process is caused by information technology and it will receive acceleration with the widespread adoption of digital technology. Thus, digital technologies will increase both unemployment and income inequality in society. Highly skilled workers have always been in demand. But today this need is felt especially sharply, as the requirements for the level of training of specialists have increased sharply. The degree of technological development of most companies is so high that soon all the routine (programmable) work of a cognitive nature will be performed by intelligent computers and robots. Experts believe that in the coming years, every second employee of large companies will be forced to reorient or improve their knowledge and skills to match the level of technology of the fourth industrial revolution. As intelligent machines (IMs) are becoming ubiquitous, more and more workers with mathematical and engineering thinking are required. This means that workers in the digital age will need to receive predominantly higher education, and this trend will only be strengthened further. Moreover, in modern labor market conditions, continuing education should be an ongoing process. Only in this way a person can play a key role in the symbiosis of “human-IM,” which will become the main driving force of the digital era. At the same time, a person should strive to be a leader and solve his tasks in cooperation with machines, passing them their routine, but labor-intensive segments. A spectacular example of such a “human-IM” symbiosis is the symbiosis of a sufficiently strong chess player and a modest chess computer that can beat any chess player, including the world champion, as well as the most powerful chess supercomputer machine, or a human doctor and a computer “Doctor Watson,” which turns out to be much more reliable and effective than each of them individually (Brynjolfsson and McAfee 2016, p. 246). Scientists and developers need to ensure that IMs are extremely friendly to people and serve to improve human labor, strengthening its cognitive activity. 
In the words of a prominent futurologist Kevin


Kelly: “in the future, the level of your salary will depend on how efficiently you can work together with robots” (Kelly 2017, p. 75). This means that success awaits those who are best able to establish the process of working together with intelligent machines. Given the growing role of highly qualified personnel for the digital economy, today it is necessary to plan for increasing investments in education and R&D for the 2020s. With the widespread penetration of digital technology in all areas of the economy and management, all employees are increasingly faced with additional requirements: the presence of professional competence and a good knowledge of digital technology. Today, an employee with a higher level of knowledge of digital technologies has an undeniable advantage in the labor markets. In response to this challenge, at Peter the Great St. Petersburg Polytechnic University we create such a competitive advantage in our graduates at the educational stage, by bringing together training in professional knowledge and skills in working with digital technologies and intelligent machines. We are convinced that this is an excellent model of education that is the best way for a future employee to keep up with life in the digital age. Today there is no more important task than an accelerated adaptation of the education system to exponential technological changes. First of all, it should educate and develop the advantages that a person has over AI and IM: creative imagination; ability to recognize patterns in complex systems and phenomena; the ability to collaborate with people and intelligent machines in solving complex problems; the ability to constantly adapt and assimilate new knowledge and work skills, and other higher cognitive abilities. In this regard, the adoption of optimal decisions for the educational system reform requires medium- and long-term forecasting of various aspects of the digital future, in particular changes related to the labor market, and the role of qualifications in technological progress. In this article, we propose mathematical models for prognosing the assessment of the demand for labor resources and its bias towards high qualifications. They give quantitative answers to how the demand for workers of low, medium, and high qualifications will change, which will allow timely adjustments to the structure of personnel training in order to have more workers who are in greater demand in the future.

2 Human Capital and Its Profitability

The most widely used method of measuring human capital is the income-based approach. It makes it possible to evaluate a person's contribution to the economy, depending on the level of his education, on the basis of his salary. Therefore, human capital is usually defined as a measure of the ability embodied in a person to bring him income and benefit to society. These abilities include innate talent, the basic knowledge and qualifications gained as a result of education, and the work skills acquired in the process of practical


activity. At the end of education, a person has a relative stock of knowledge, which is described by the formula (Jones and Vollrath 2013, p. 61):

h = \exp(\psi \cdot u),   (1)

where u is the average number of years of study and ψ is the return on education. Empirical estimates of the coefficient ψ gave the following result (Acemoğlu 2009, p. 557; Jones and Vollrath 2013, p. 64):

0.06 \le \psi \le 0.1.   (2)

This means that an additional year of education increases human capital by 6–10%. From formula (1) it also follows that even an illiterate person (u = 0) has a nonzero capital stock, h = 1. So, h = h(u) denotes the stock of knowledge and skills, or human capital, accumulated over the years of training and professional practice. It is assumed that, under free competition in the labor market, the employee's salary should be proportional to the size of his human capital. This assumption is confirmed by an empirical pattern: the employee's salary in logarithmic terms is proportional to the duration of his education (Acemoğlu 2009, p. 135). Hence,

w = w_0 \cdot \exp(\psi \cdot u),   (3)

where w_0 is the average salary of an unskilled worker. It follows that labor productivity and employee wages also grow by 6–10% for an additional year of education. The dynamics of the accumulation of human capital over the years of work after graduation is determined by the differential equation (Acemoğlu 2009, p. 555):

\dot{h}(t) = q_h \cdot h(t),   (4)

where q_h is the growth rate of human capital. Formulas (1), (3), and (4) underlie our further analysis. Let us consider the main stages of a typical education system and the corresponding skill levels of an employee. Human capital h (1) can be considered as the relative qualification level of a person who has been learning for u years. The person with the lowest qualifications has only graduated from elementary school (4 years of study), which provides a minimum amount of knowledge that allows reading, writing, and counting using only arithmetic operations. Then comes the next stage of education, incomplete secondary school (average duration of study is 9 years). Next comes graduating from high school or getting secondary vocational education (average duration of study is 12 years). These lower three levels of education mainly prepare workers for unskilled or low-skilled work.


Table 1 Education levels and skill levels of employee (authors' creation)

Education levels                                             u    ψ      h_i    Value   Skill level
Elementary                                                   4    0.06   h_m    1.27    Low
Incomplete secondary education                               9    0.06   h_1    1.72    Low
Comprehensive education or secondary vocational education   12    0.06   h_2    2.05    Low
Bachelor's program                                           16    0.08   h_3    3.60    Medium
Master's degree program                                      18    0.1    h_4    6.05    High
Postgraduate program                                         21    0.1    h_M    8.17    High
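As an illustration (not part of the original chapter), the h values in Table 1 can be reproduced directly from formula (1) using the (u, ψ) pairs listed in the table; the following short Python sketch performs this check.

import math

# Check of formula (1), h = exp(psi * u), against the skill levels listed in Table 1.
# The (u, psi) pairs are taken from the table itself.
levels = [
    ("Elementary",                     4,  0.06),
    ("Incomplete secondary education", 9,  0.06),
    ("Secondary / vocational",         12, 0.06),
    ("Bachelor's program",             16, 0.08),
    ("Master's degree program",        18, 0.10),
    ("Postgraduate program",           21, 0.10),
]

for name, u, psi in levels:
    h = math.exp(psi * u)          # relative human capital, formula (1)
    print(f"{name:<32} u={u:>2}  psi={psi:.2f}  h={h:.2f}")
# The printed h column reproduces Table 1: 1.27, 1.72, 2.05, 3.60, 6.05, 8.17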

Next are the upper three levels of education: undergraduate, graduate, and postgraduate programs. Undergraduate programs (average duration of education 16 years) produce graduates of average qualifications, and master's programs (average duration 18 years) produce graduates of high qualification. The highest qualification is achieved after graduate school (average duration of study is 21 years), which prepares researchers for work in the R&D system or for pedagogical work in higher educational institutions. The numerical values of workers' skill levels, calculated by formula (1), are presented in Table 1. Note that scientists with the highest qualifications, professors and academicians, are not represented here, as they make up a relatively small part of the workforce, although their contribution to scientific and technological progress is undeniably decisive. They generate revolutionary ideas and lead R&D in order to turn them into innovations.
It is reasonable to assume that the distribution of the relative labor force by skill level, l(h) = L_h / L (here L_h is the number of workers with skill level h and L is the total number of labor resources), obeys the normal law (Monsik and Skrynnikov 2012, pp. 127–128):

l(h) = \frac{1}{\sigma_h \sqrt{2\pi}} \exp\left[ -\frac{(h - h_\mu)^2}{2\sigma_h^2} \right],   (5)

where σ_h is the standard deviation of the random variable h from its average value h_μ, σ_h = (h_M − h_m)/6, and h_μ is the average human capital in the economy, the mathematical expectation of the random variable h, h_μ = (h_m + h_M)/2. Of course, distribution (5) is only a good approximation, since by definition (1) the quantity h is strictly positive and greater than or equal to 1, i.e., h ≥ 1. It is possible that a Rayleigh-Rice type distribution would be better (Monsik and Skrynnikov 2012, p. 147):

l_1(h) = \begin{cases} h \cdot \exp(-h/2), & h > 0, \\ 0, & h \le 0. \end{cases}   (6)


Fig. 1 Distribution of labor supply by skill level (authors’ creation)

Unfortunately, the authors do not know whether a study has been conducted on this topic. Since, according to the "three sigma" rule (Monsik and Skrynnikov 2012, p. 138), more than 99.7% of the workforce is located in the range h_m ≤ h ≤ h_M, the normal distribution law (5) is the best first approximation. Let us estimate the numerical characteristics of the random variable h (1). For σ_h we obtain the estimate σ_h = (h_M − h_m)/6 ≈ 1.15. The average value of h, or its mathematical expectation, is h_μ = (h_m + h_M)/2 ≈ 4.72. It is easy to see from Table 1 that workers with average qualifications fit in the range from h_3 to h_4, which amounts to approximately 2σ_h. These are mainly undergraduate graduates. This fact also works in favor of choosing the normal distribution (5). Highly skilled workers are located to the right of h_4; these are mainly graduates of master's and postgraduate programs. Unskilled workers are located to the left of h_3; these are mainly people with secondary general or vocational education. The share of workers with average qualifications is 68%, the share of highly skilled workers is 16%, and the share of low-skilled workers is also 16%. This follows from the assumption of a normal distribution l(h) (5) (Monsik and Skrynnikov 2012, p. 139). The distribution of labor by skill level, in accordance with the normal distribution hypothesis (5), is presented in graphical form in Fig. 1. It characterizes primarily the supply of labor. Since unemployment in most avant-garde countries today does not exceed the natural level, it can be considered that the indicated shares of low- (16%), average- (68%), and highly skilled workers (16%) approximately correspond to the qualification shares of labor force employment in the modern economy. In the future, under the influence of digital technologies, both the distribution of labor by skill level (Fig. 1) and the proportion of workers employed in low-, medium-, and high-qualification jobs will naturally change, as we will see in the further analysis. The distribution of labor by skill level (5) and its total number will change over time. Therefore, in the general case, the distribution has the form:


Fig. 2 Distribution of effective labor force by skill level (authors’ creation)

L(h, t) = l(h) \cdot L(t),   (7)

where L(t) is the dynamics of the total labor force. In practice, what matters is the level of effective labor, which has the following distribution (Jones and Vollrath 2013, p. 60):

L_{ef}(h, t) = h \cdot l(h) \cdot L(t).   (8)

The distribution of effective labor by skill level, l_{ef}(h) = h \cdot l(h), is shown in Fig. 2. The level of effective labor is calculated by the formula:

L_{ef}(t) = L(t) \int_{1}^{\infty} h\, l(h)\, dh \approx L(t) \int_{-\infty}^{\infty} h\, l(h)\, dh = h_\mu L(t).   (9)

This result was expected, because by extending the lower limit of the integral to −∞ we obtain the formula for the mathematical expectation of the random variable h. So, we have an approximate formula for calculating L_{ef}(t) ≈ h_μ \cdot L(t), if L(t) is known. The shares of effective workers with low (1), medium (2), and high (3) qualifications (Fig. 2) can be calculated using the formulas:


(a)\ \lambda_{ef}^{(l)} = \frac{1}{l_{ef}^{(a)}} \int_{h_m}^{h_3} h\, l(h)\, dh; \quad (b)\ \lambda_{ef}^{(m)} = \frac{1}{l_{ef}^{(a)}} \int_{h_3}^{h_4} h\, l(h)\, dh; \quad (c)\ \lambda_{ef}^{(h)} = \frac{1}{l_{ef}^{(a)}} \int_{h_4}^{h_M} h\, l(h)\, dh; \quad (d)\ l_{ef}^{(a)} = \int_{h_m}^{h_M} h\, l(h)\, dh.   (10)

Having performed calculations using these formulas, we have obtained:

(a)\ \lambda_{ef}^{(l)} = 0.105\ (10.5\%); \quad (b)\ \lambda_{ef}^{(m)} = 0.724\ (72.4\%); \quad (c)\ \lambda_{ef}^{(h)} = 0.172\ (17.2\%).

As can be seen, the effective labor shares differ from the corresponding labor shares calculated without taking the level of human capital into account.
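These shares can be checked by straightforward numerical integration of h·l(h) over the qualification segments of Table 1; the sketch below (not part of the original chapter) uses the normal law (5) with the parameters derived above and should land near the 10.5%, 72.4%, and 17.2% just reported.

import numpy as np

# Numerical check of formulas (10) under the normal supply distribution (5)
# with the Table 1 boundaries h_m = 1.27, h_3 = 3.60, h_4 = 6.05, h_M = 8.17.
h_m, h3, h4, h_M = 1.27, 3.60, 6.05, 8.17
sigma_h = (h_M - h_m) / 6            # about 1.15
h_mu = (h_m + h_M) / 2               # about 4.72

def l(h):                            # normal density (5)
    return np.exp(-(h - h_mu) ** 2 / (2 * sigma_h ** 2)) / (sigma_h * np.sqrt(2 * np.pi))

def eff_mass(a, b, n=20001):         # integral of h * l(h) over [a, b]
    h = np.linspace(a, b, n)
    return np.trapz(h * l(h), h)

total = eff_mass(h_m, h_M)
for name, (a, b) in {"low": (h_m, h3), "medium": (h3, h4), "high": (h4, h_M)}.items():
    print(name, round(eff_mass(a, b) / total, 3))
# Expected to come out close to the 0.105, 0.724, 0.172 reported in the text.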

3 Technological Substitution of Jobs in the Economy

As is well known, production at the industrial stage of economic development was characterized by a gradual replacement of living labor. Thus, the general trend in the development of real production is to reduce labor intensity and at the same time to increase the capital intensity of products. In the digital economy, this process will only accelerate; moreover, we will also observe a trend of growing knowledge intensity. Becoming more knowledge-intensive, production attracts large volumes of investment in fixed assets and, conversely, the expanded renewal of fixed assets is an important factor in the development of R&D. The latter, in turn, increases the demand for research experts. Consequently, knowledge intensity and capital intensity are mutually reinforcing processes in the course of digital transformation.
The above-mentioned trend has led to fundamental changes in the distribution of national income. Technological changes aimed at deepening capital and stimulating the replacement of human labor with machines increase the amount of profit that capital owners receive and reduce the share of income of workers. Thus, the capital share in the national income of most developed countries began to noticeably increase from the 1980s, and the labor share decreased, which was most fully and clearly shown in the famous book of the French economist Thomas Piketty, "Capital in the Twenty-First Century" (Piketty 2014). Moreover, Piketty claims that in the first half of the twenty-first century this trend will only accelerate. This trend originated with the beginning of the information era in the 1980s, with the increasing role of ICT in the economy, and is undoubtedly a direct consequence of the widespread use of information technologies in society (Ford 2015, p. 10).
Let us move on to the mathematical description of the process of technological replacement of jobs in the digital economy. Today, the share of the digital economy


Fig. 3 Probability curve of technological replacement of labor (authors’ creation)

in developed countries is approximately 5–10%. It is estimated that the digital transformation of the entire economy will take approximately 10–12 years (Huaten et al. 2019; Schwab 2016). Consequently, already in the 2030s, a full-blown digital economy will function in most countries of the world, and industry will transform into a fully automated digital industry 4.0. So, we denote the probability of substitution of labor with skill level h with the help of smart machines (SM) by p(h), with 0 ≤ p ≤ 1. We have already noted above that it is unprofitable to replace unskilled workers with SM simply for economic reasons. Of course, we are talking about cognitive but routine labor requiring only low qualifications, since physical routine labor requiring only low qualifications will certainly be replaced by robots, which, moreover, are steadily becoming cheaper every year. So, the probability of substitution of the low-skilled labor force engaged in partly non-routine labor activity can be taken equal to zero, i.e. (Table 1):

p(h) = 0 \quad \text{for } h_m \le h \le h_3.   (11)

Of course, as SM becomes cheaper, the replacement of human labor will begin in this segment as well. Therefore, one can also consider the increase in the probability of substitution according to quadratic or linear laws within h_m ≤ h ≤ h_3:

(a)\ p_3(h) = 0.166\,(h - h_m)^2; \qquad (b)\ p_4(h) = -0.495 + 0.39\,h.   (12)

All these three options (11 and 12) are presented graphically in Fig. 3, respectively, under numbers 1, 3, and 4. SM will learn to actively replace middle-skilled workers engaged in routine cognitive work, and gradually, as the intellectual level rises, they will move on to replace highly skilled workers. Moreover, if the probability of substitution of


workers’ SM in the lower segment of secondary qualification (h > h3 ¼ 3.6) today is equal to 1 ( p(h3) ¼ 1), then the probability of replacement of researchers (h ≥ hM ¼ 8.17) even after 10 years in the year 2030 will be close to zero ( p (hM) ¼ 0), since the onset of the singularity, when AI will surpass human intelligence, is expected only in the 2040s. Since substitution processes, as a rule, follow the logistic law, the probability distribution curve connecting the points h3 and hM, can be represented by a logistic curve: pð hÞ ¼

1 )⌉ : ⌈ ( M 1 þ exp ϑh h – h þh 2

ð13Þ

The parameter \vartheta_h can be found from the standard conditions p(h_3) = 0.9 and p(h_M) = 0.1. Solving Eq. (13) either with h = h_3 or with h = h_M, we obtain \vartheta_h ≈ 0.963. For further analysis, in the low-skill area (h_m ≤ h ≤ h_3) we will take option (11), where p(h) = 0. As a result, we obtain the following distribution curve describing the probabilities of technological replacement of labor in the current decade:

p(h) = \begin{cases} 0, & h_m \le h \le h_3, \\ \dfrac{1}{1 + \exp\left[ \vartheta_h \left( h - \frac{h_3 + h_M}{2} \right) \right]}, & h_3 < h \le h_M. \end{cases}   (14)
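A short numerical sketch of the curve (13)-(14) follows (not original code); \vartheta_h is recovered from the condition p(h_3) = 0.9, and by the symmetry of the logistic curve around (h_3 + h_M)/2 the condition p(h_M) = 0.1 is then satisfied automatically.

import math

# Replacement-probability curve (13)-(14) with the Table 1 boundaries.
h_m, h3, h_M = 1.27, 3.60, 8.17
mid = (h3 + h_M) / 2

theta_h = math.log(9.0) / (mid - h3)       # solves 1 / (1 + exp(theta*(h3 - mid))) = 0.9
print(round(theta_h, 3))                   # approximately 0.96, as reported in the text

def p(h):
    if h_m <= h < h3:
        return 0.0                          # option (11): low-skilled labor is not substituted
    return 1.0 / (1.0 + math.exp(theta_h * (h - mid)))

for h in (2.0, 3.6, 4.72, 6.05, 8.17):
    print(h, round(p(h), 2))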

Now we can obtain the distribution of the effective labor force employed in the economy after the completion of the digital transformation in the 2030s. It is written in the form:

l_{ef}^{\,e}(h) = [1 - p(h)]\, h\, l(h).   (15)

This distribution is presented in graphical form in Fig. 4. As can be seen from this figure, as a result of the digital transformation of the economy, labor is polarized into highly skilled and low-skilled jobs, with a sharp reduction in medium-skilled jobs. The dynamics of the number of effective workers employed in the digital economy is written as:

L_{ef}^{\,e}(t) = L(t) \int_{1}^{\infty} [1 - p(h)]\, h\, l(h)\, dh.   (16)

This integral can be calculated by narrowing the lower and upper limits of integration to the interval (hm ≤ h ≤ hM), in which all practically possible values (99.7%) of l(h) are contained. Then we have:


Fig. 4 Distribution of effective workforce employed in the digital economy (authors’ creation)
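A numerical sketch of how the post-transformation distribution (15) reshapes the effective-labor shares, using the same parameters as above, is given below (not part of the original chapter); it illustrates the polarization visible in Fig. 4, the exact percentages depending on the integration details.

import numpy as np

# Post-transformation effective-labor shares implied by (5), (14), and (16).
h_m, h3, h4, h_M = 1.27, 3.60, 6.05, 8.17
sigma_h, h_mu = (h_M - h_m) / 6, (h_m + h_M) / 2
theta_h, mid = 0.963, (h3 + h_M) / 2

def l(h):                                       # normal density (5)
    return np.exp(-(h - h_mu) ** 2 / (2 * sigma_h ** 2)) / (sigma_h * np.sqrt(2 * np.pi))

def p(h):                                       # piecewise replacement probability (14)
    return np.where(h < h3, 0.0, 1.0 / (1.0 + np.exp(theta_h * (h - mid))))

def employed_mass(a, b, n=20001):               # integral of (1 - p) * h * l over [a, b]
    h = np.linspace(a, b, n)
    return np.trapz((1 - p(h)) * h * l(h), h)

total = employed_mass(h_m, h_M)
for name, seg in {"low": (h_m, h3), "medium": (h3, h4), "high": (h4, h_M)}.items():
    print(name, round(employed_mass(*seg) / total, 3))
# Relative to formula (10), the low and high shares rise while the medium share falls.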

the influence of SM predominates. So, choosing γ = 1/2, we specialize equation (21):

\dot{h}(t) = q_h A^{1/2}(t)\, h^{1/2}(t) = q_h A_0^{1/2}\, h^{1/2}(t)\, e^{\frac{q_A}{2} t}.   (25)

This equation is easily solved and has the following solution:

h(t) = \left[ h_0^{1/2} + \frac{q_h}{q_A}\, A_0^{1/2} \left( e^{\frac{q_A}{2} t} - 1 \right) \right]^2.   (26)

Since the current distribution of h (Table 1) acts as h_0, we have the following formulas for calculating the growth dynamics of the labor productivity (skill level) of average- and high-skilled workers in the process of the digital transformation of the economy:

(a)\ h_\mu(t) = \left[ h_{\mu 0}^{1/2} + \frac{q_h}{q_A}\, A_0^{1/2} \left( e^{\frac{q_A}{2} t} - 1 \right) \right]^2; \qquad (b)\ h_M(t) = \left[ h_{M0}^{1/2} + \frac{q_h}{q_A}\, A_0^{1/2} \left( e^{\frac{q_A}{2} t} - 1 \right) \right]^2.   (27)

The dynamics of the labor productivity of low-skilled workers, h_m(t), is described by equation (23), since they do not interact with SM. Let us estimate the numerical values of the constant coefficients (h_0, A_0) and the parameters (q_h, q_A) of Eq. (27). The initial levels of human capital h are known (Table 1): h_{μ0} = h_μ = 4.72; h_{M0} = h_M = 8.17. We can equate the initial value A_0 to the qualification level of a master's degree, i.e., A_0 = 6, which means that digital technologies are developed by specialists with a master's qualification level or higher. It is known that the growth rate of the digital economy in developed countries over the past few years has been 5–7% (Huaten et al. 2019, p. 31). Let us assume that these observed growth rates in advanced technologies will continue in the 2020s. Therefore, we can take q_A = 0.06. It remains to evaluate the value of q_h. For this purpose, we differentiate formula (1):

\dot{h} = \psi \exp(\psi u)\, \dot{u}(t) = \psi\, h(t)\, \dot{u}(t).   (28)

It follows that q_h = \dot{h}/h = \psi\, \dot{u}(t). If we assume that the education process is continuous, then u(t) is proportional to the time t, and \dot{u}(t) = 1 in the years when a person received education. Thus, we obtain the estimate q_h ≈ ψ. Given that 0.06 ≤ ψ ≤ 0.1 and taking the average value of ψ, we get q_h ≈ 0.08. Since the optimal level of salary should increase in proportion to the employee's labor productivity (or skill level), multiplying formulas (27) by the initial salary level, we obtain formulas for forecast calculations of the dynamics of the salaries of middle- and high-skilled workers in the digital economy:


Fig. 5 The growth dynamics of the workers’ salaries of medium and higher qualifications working in symbiosis with SM (authors’ creation)

(a)\ w_\mu(t) = \frac{w_{\mu 0}}{h_{\mu 0}} \left[ h_{\mu 0}^{1/2} + \frac{q_h}{q_A}\, A_0^{1/2} \left( e^{\frac{q_A}{2} t} - 1 \right) \right]^2; \qquad (b)\ w_M(t) = \frac{w_{M0}}{h_{M0}} \left[ h_{M0}^{1/2} + \frac{q_h}{q_A}\, A_0^{1/2} \left( e^{\frac{q_A}{2} t} - 1 \right) \right]^2.   (29)

For the initial average wage w_{μ0}, it is reasonable to take the median salary of an employee at the current time. For example, in the USA today it is 60 thousand dollars, i.e., w_{μ0} = 60 thousand dollars. Then w_{M0} = w_{μ0} \cdot (h_{M0}/h_{μ0}) ≈ 100 thousand dollars. The predicted dynamics of the optimal salary for workers of average and higher qualifications, calculated according to formulas (29), is presented in Fig. 5 in graphical form. As can be seen from the growth paths of the optimal salary (Fig. 5), it should grow at a rate of about 7% per year and double in 10 years.
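The salary trajectories (29) are easy to evaluate numerically; the following sketch (not original code) uses the parameter values quoted above (q_h = 0.08, q_A = 0.06, A_0 = 6, w_{μ0} = 60 thousand dollars) and shows the roughly twofold growth of the high-qualification wage over ten years.

import math

# Wage forecast (29) under the parameter values quoted in the text.
q_h, q_A, A0 = 0.08, 0.06, 6.0
h_mu0, h_M0 = 4.72, 8.17
w_mu0 = 60.0                            # thousand USD
w_M0 = w_mu0 * h_M0 / h_mu0             # about 100 thousand USD

def h_path(h0, t):                      # formula (26)/(27)
    return (math.sqrt(h0) + (q_h / q_A) * math.sqrt(A0) * (math.exp(q_A * t / 2) - 1)) ** 2

def wage(w0, h0, t):                    # formula (29): wage grows with productivity
    return w0 / h0 * h_path(h0, t)

for t in (0, 5, 10):
    print(t, round(wage(w_mu0, h_mu0, t), 1), round(wage(w_M0, h_M0, t), 1))
# The high-qualification wage roughly doubles over ten years, i.e. about 7% per year.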

5 Conclusion

Digital technologies are universal and will penetrate all spheres of society, as they are general-purpose technologies. As digital technologies are introduced into the modern economy, in almost all of its sectors there will be a decrease in the need for human labor. Moreover, it will decline very quickly, because innovative technologies, for the first time in the history of the industrial era, will be distributed over a ready-made infrastructure: digital information and communication networks. Of course, this infrastructure will improve and become more broadband and high speed. For millions of workers, the technologies of the fourth industrial revolution will cease to be a means of increasing labor productivity; instead, these technologies will simply replace them in all jobs associated with routine cognitive activity. This will naturally lead to a


continuous increase in technological unemployment. If the growth of the qualifications of some workers does not keep pace with the increase in the level of technology, the proportion of the economically active part of the population will decrease. Already today there are fully automated production lines. For example, modern intelligent systems based on nano- and biochips can be produced exclusively by computerized and robotic systems. On the one hand, all this is good for developed countries with a high level of human capital, since it provides economic growth with a decreasing labor force due to an aging population. On the other hand, an increase in the number of unemployed, together with stagnating salaries of medium- and low-skilled workers, will lead to a sharp reduction in demand for goods and services, which, in turn, leads to a recession in the economy. The solution to this problem is, on the one hand, the revival of continuous growth of workers' wages in proportion to the growth of labor productivity, and, on the other, the provision of a "universal basic income" to all adult citizens, which guarantees everyone minimum living standards. If people want to improve their position, they begin to study and improve their skills. Incidentally, this view was shared by such prominent economists and Nobel Prize laureates as F. Hayek, P. Samuelson, J. Tobin, and others (Brynjolfsson and McAfee 2016, pp. 299–300). All the above problems can be resolved, but only if there is the right economic policy and the political will of the leaders of peoples and governments for its practical implementation. Indeed, the famous professor Klaus Schwab is convinced that the evolution of digital technologies is wholly in the power of mankind, which must ensure: (1) an equitable distribution of the benefits of the fourth industrial revolution; (2) control of its negative consequences and risks; (3) that the fourth industrial revolution unfolds in the interests of man and under the control of man (Schwab 2016, pp. 24–25).
In conclusion, in the present work we obtained the following specific results:
1. In the 2020s, there will be a steady technological shift in the demand for labor towards the highly skilled. It is shown that in the 2030s, the share of highly skilled workers with a master's degree will be 26.5% of the labor force employed in the economy, against the current 17.2%, and the share of middle-skilled workers with a bachelor's degree will decrease to 49.2%, from the current 72.4%. Therefore, there is a need for a significant increase in the training of specialists with a master's degree.
2. The share of low-skilled workers will also increase from the current 10.5% to 24.3% in the 2030s. Consequently, during the digital transformation, there will be a significant polarization of labor into high- and low-skilled, with a sharp reduction in medium-skilled jobs.
3. For the sustainable development of the economy, it is necessary to revive the proportional growth of workers' salaries and their labor productivity. It is shown that in the digital economy the wages of highly skilled workers should grow at a rate of about 7% per year, doubling over a decade.


Acknowledgments This article was prepared under the financial support of the Russian Foundation for Basic Research (Grant No. 20-010-00279 “A comprehensive assessment system and labor market forecast at the stage of transition to a digital economy in developed and developing countries”).

References

Acemoğlu, D. (2009). Introduction to modern economic growth. Princeton, NJ: Princeton University Press.
Brynjolfsson, E., & McAfee, A. (2016). The second machine age: Work, progress and prosperity in a time of brilliant technologies (2nd ed.). New York: Norton.
Ford, M. (2015). The rise of the robots. New York: Basic Books.
Huaten, M. A., Zhaoli, M., Delhi, Y., & Huali, W. (2019). The digital transformation of China [Cifrovaja transformacija Kitaja]. Moscow: Intellectual Literature.
Jones, C. I., & Vollrath, D. (2013). Introduction to economic growth. London: W. W. Norton.
Kelly, K. (2017). The inevitable: Understanding the 12 technological forces that will shape our future. London: Penguin Books.
Keynes, J. M. (1963). Essays in persuasion. New York: W.W. Norton & Company.
Monsik, V. B., & Skrynnikov, A. A. (2012). Probability and statistics [Veroyatnost i statistika]. Moscow: BINOM. Knowledge Laboratory.
Piketty, T. (2014). Capital in the twenty-first century. Cambridge: Harvard University Press.
Schwab, K. (2016). The fourth industrial revolution. Cologny/Geneva: World Economic Forum.
Schwab, K. (2018). Shaping the fourth industrial revolution. Cologny/Geneva: World Economic Forum.

Oil Shocks and Stock Market Performance: Evidence from the Eurozone and the USA João Leitão and Joaquim Ferreira

Abstract Recent oil price developments contribute to renewed interest in the subject of oil shocks regarding international stock market performance. In this connection, this study uses a structural VAR (SVAR) model to evaluate the impact of BRENT and WTI crude oil price effects on the Dow Jones, DAX, CAC, ATHENS Composite, and PSI20 performance. The dynamic analysis gives support to the nonsignificant effects of oil prices on the US and Eurozone markets. Nevertheless, in the context of stock markets, the findings suggest that Dow Jones becomes the prevailing market-driver of Eurozone market performance. Furthermore, there is only a unidirectional, significant relationship in Eurozone stock markets, from the CAC to the PSI20. The outstanding role of the US stock market reflects the fact of this being the world’s largest economy in addition to being closely linked to other economies worldwide. The impact of the French stock market on the Portuguese stock market is justified by the integration of both stock markets in the Euronext Stock Exchange. Keywords Oil shocks · Stock markets · SVAR JEL Classification C32 · G15 · Q4

J. Leitão (*) Faculty of Social and Human Sciences, NECE–Research Center in Business Sciences, University of Beira Interior, Covilhã, Portugal Research Center in Business Sciences (NECE), University of Beira Interior, Covilhã, Portugal Centre for Management Studies of Instituto Superior Técnico (CEG-IST), University of Lisbon, Lisboa, Portugal Instituto de Ciências Sociais, ICS, University of Lisbon, Lisboa, Portugal e-mail: [email protected] J. Ferreira Research Center in Business Sciences (NECE), University of Beira Interior, Covilhã, Portugal e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Devezas et al. (eds.), The Economics of Digital Transformation, Studies on Entrepreneurship, Structural Change and Industrial Dynamics, https://doi.org/10.1007/978-3-030-59959-1_7



1 Introduction Given the world’s largest economies’ great dependence on “black gold” and demand exceeding the restricted supply of this commodity, there has been a substantial rise in the price of oil. However, this price increase may have been spurred by the growing demand from emerging economies, mainly China and India, as these economies need oil to maintain their growth trajectories. According to the International Energy Agency (2009), oil consumption is expected to grow by 2030 in China and India by 3.5% and 3.9%, respectively. Thus, the continued rise of oil prices could lead to effects on the most commodity-dependent economies and, consequently, on financial markets. Lardic and Mignon (2006) argue that oil shocks have asymmetric effects in different European countries. Their results point out that a rise in oil prices leads to a greater effect on the economy than a decrease in oil prices, as a mechanism to stimulate economic activity. Jiménez-Rodríguez (2008) analyzed the impact of oil shocks on production in the main sectors of European economies (Germany, France, Italy, and Spain) and Anglo-Saxon countries (the USA and UK). In the European context, the effects of oil shocks are heterogeneous, i.e., rising oil prices register divergent (positive and negative) effects on sectors. Concerning Anglo-Saxon countries, the effects of oil prices are homogeneous, showing particularly a negative effect. Oil shocks can lead to an impact on the economy through inflationary effects. Álvarez et al. (2011) claim that an oil shock causes direct and indirect impacts, and second-round effects (macroeconomic behavior). Direct impacts relate to changes in the price of refined products. Indirect impacts affect the costs of producing goods and services using products from this raw material, leading to changes in retail prices. The second-round effects of an oil impact lead to forecasting revisions of inflation, leading to wage readjustments. The authors also reveal that direct impacts have been growing in relation to indirect impacts and second-round effects. This is due to an increase in household spending on refined products. It is pointed out that some volatility or stock market behavior affects not only companies’ stock prices but also economies’ overall performance. However, it is interesting to evaluate whether oil price shocks contribute significantly to international stock market performance. Arouri and Nguyen (2010), Arouri (2011), and Arouri et al. (2012) reveal that the impact of oil shocks varies according to stock market sectors. Furthermore, the authors note that oil shocks cause asymmetric effects, in addition to underlining the spread of spillover effects from oil shocks to financial markets. The current paper aims to inquire into the existence, in the short-term, of a spread from oil shocks (WTI and BRENT) to the USA (Dow Jones), German (DAX), French (CAC), Greek (ATHENS), and Portuguese (PSI20) stock markets, also gaging the relationship among the stock markets. A multivariate structural VAR (SVAR) model is applied to assess the existence of short-term structural shocks by imposing contemporaneous restrictions. The results


confirm no impact of oil prices on stock market performance while revealing that the Dow Jones performance has a dominant role in European stock market performance. The remainder of the paper is structured as follows. Section 2, presents a review of the literature on the impact of oil shocks on the level of economic activity and, mainly, on stock market performance. Section 3 outlines the econometric model applied and empirical specification. Section 4 examines the empirical results and discusses them. Section 5 presents concluding remarks.

2 Literature Review and Research Hypotheses Analyzing financial markets and their performance is essential from the point of view of the investor and the policymaker. This practice has been related to finding “patterns” in stock performance to minimize risk and apply policies. Nowadays, growth and economic-financial development depend heavily on the consumption of mineral fuels, most notably crude oil. From this standpoint, crude oil emerges as the main source of global energy, and therefore some research has attempted to understand whether oil price behavior affects the financial market’s performance. In this context, it is worth mentioning the pioneering studies by Jones and Kaul (1996), Huang et al. (1996), and Ciner (2001). Jones and Kaul (1996) made use of an APT typology model, whereas Ciner (2001) applied the Vector Autoregressive model (VAR) with linear and nonlinear Granger-causality tests and Huang et al. (1996) employed a VAR model with linear Granger-causality tests and correlation tests. These studies conclude that stock markets (in the USA, Canada, the UK, and Japan) are negatively affected by oil prices (Jones and Kaul 1996), whereas US stock markets and oil prices show a linear and nonlinear feedback relationship (Ciner 2001). Meanwhile, oil prices and US oil-related stocks show a higher correlation and causal-effect (Huang et al. 1996). Following this type of methodology, Park and Ratti (2008), in the European and American context, demonstrate the negative effects of oil shocks on stock market returns, excluding the Norwegian case. Regarding VEC models, the empirical work by Miller and Ratti (2009) and Cunado and Perez de Gracia (2014) indicates that oil prices affect stock markets negatively, underlining that supply shocks are most prevalent in that negative influence (Cunado and Perez de Gracia 2014). In addition, oil shocks contribute to a decrease in the behavior of some macroeconomic fundamentals, such as the employment rate (Papapetrou 2001), economic activity, and stock markets (Filis 2010; Papapetrou 2001; Sadorsky 1999). According to Markov-switching VAR/VEC (MS-VAR/VEC) methodology, oil price movements have a negative effect on US stock markets, leading to a market sentiment change from a bullish to a bearish market (Chen 2010). Therefore, Jammazi and Aloui (2010) argue that in the Japanese case an increase in oil prices contributes to a negative effect on stock markets in both the expansion state and recession state, being slightly different in the British and French context where a rise


in oil prices has a negative effect on stock markets in the steady-state and expansion state, but temporarily. Considering SVAR models, Kang and Ratti (2013) find a positive effect on stock market movements through exogenous shocks. It is also noteworthy that the authors find that the idiosyncratic oil-demand shock and the aggregate-demand shock to be most influential, showing a negative and positive influence on stock markets, respectively. Narayan and Sharma (2011), employing GARCH and Threshold models, show that US stock sectors less closely related to the oil commodity respond negatively to oil prices, ascertaining, by no means least, a threshold effect from oil prices to stock returns. Nevertheless, Nandha and Faff (2008) using an AR model, emphasize the symmetry in the relationship between oil prices and stock markets integrated into the FTSE Global Classification System, in general, since there is a negative sign in the high and low oil prices. Furthermore, among studies considering macroeconomic fundamentals, it is worth mentioning, firstly, that the results of Hamilton (1983), using causality tests, reveal that an increase in oil prices contributed to economic recessions in the US post World War II. Subsequently, Hamilton (2003) finds that the negative effect increases when the rise in oil prices comes after a period of stability. Thus, in line with these references the following hypothesis is formulated: H1: Oil shocks have a significant and negative impact on stock market performance. On the basis of cointegration panel data analysis, Li et al. (2012) and Zhu et al. (2011) note the existence of a positive and significant effect on the long-term relationship between oil prices and stock markets. However, it should be noted that regardless of whether the country is a non-producer (e.g., Economic and Monetary Union) or producer (US), oil price movements affect oil and gas-related stocks positively (Elyasiani et al. 2011; Oberndorfer 2009), respectively, while oil volatility implies negative effects (Oberndorfer 2009). Thus, using a GARCH-type model, Oberndorfer (2009) emphasizes that the oil market is the main utility market affecting utilities-related stocks, which does not mean that the existence of risk factors1 may not present a greater preponderance in the markets than oil prices and oil volatility (Elyasiani et al. 2011). In line with Elyasiani et al. (2011), Broadstock et al. (2012), for the Chinese context, and using a GARCH-BEKK model, find that oil prices and Fama and French factors (1993) (RM and SMB) have a significant and positive impact on energy-related stocks, although oil prices become significant during post-financial turmoil. 1

The three factors of Fama and French (1993) are described as: (i) RM (Return Market)—Return on a portfolio minus one-month Treasury yield; (ii) SMB (Small minus Big)—average return on the difference in a portfolio between companies’ stocks with small market capitalization and high market capitalization; (iii) High minus low (HML)—average return on the difference in a portfolio of stocks between companies with a high and low book-to-market ratio.


Arouri et al. (2011), applying a VAR-GARCH model in the GCC context, observe a positive effect of oil prices on stocks during financial turmoil, since in the period of financial stability, oil prices become independent of stock market prices. In the European context, Arouri et al. (2012) highlight spillover effects from oil prices to stock markets.2 Thus, the research hypothesis to consider is as follows: H2: Oil shocks have a significant and positive impact on stock market performance. Following VAR methodology, Cong et al. (2008), in the case of Chinese sector indices, suggest that oil prices and their volatility do not affect stock markets, with the exception of oil, petrochemical, and mining-related stocks, which have a positive effect, contrasting with the results of Li et al. (2012). In turn, Lee et al. (2012), do not find significant effects of oil prices on stock market performance.3 However, those authors concluding that stock markets (composite and sectoral), on balance, cause positive effects on oil prices through economic growth. Considering SVAR models, Apergis and Miller (2009), despite the positive effects of the aggregate-demand shock and the negative effects of the idiosyncratic oil-demand shock together with a marginal influence of the oil supply shock on stock markets, the authors find that those shocks induce a reduced effect on stock price movements. Basher et al. (2012), in a context of emerging economies, suggest in line with Apergis and Miller (2009) a lower preponderance of the oil shock on stock markets, proving that macroeconomic variables such as interest and exchange rates are relevant factors in stock market behavior. However, the authors show that oil prices respond positively to markets and economic activity. Sadorsky (2012), implementing different GARCH-type models (BEKK, diagonal, CCC, and DCC) argues that technology stocks have a positive correlation with renewable energy stocks in addition to having a positive influence. In turn, oil prices have a weaker effect on both stock sectors. In short, these studies find that oil shocks do not affect stock market performance significantly or that they have a less prominent role. Thus, the following hypothesis is formulated: H3: Oil shocks do not have a significant impact on stock market performance. Several studies have also considered the interdependence of stock markets, considering the US market as having the most influence on European and Japanese markets. Such outcomes may be found in Eun and Shim (1989) and Hamao et al. (1990). In this vein, Eun and Shim (1989) show the US market behavior as the main driver influencing the performance of a group of major European and non-European stock markets, besides showing a positive effect and correlation with remaining

2 The authors also note spillover effects from stock markets to oil prices, mainly from financial-related stocks and utilities-related stocks.
3 This nonsignificant effect refers to composite indices for G7 member countries. Therefore, oil prices foresee certain sectorial indices' performances as having mainly a negative sign.


markets. Hamao et al. (1990) show that spillover effects from the US market have the greatest (positive) effect and are the fastest regarding transmission to the British and Japanese markets. The authors also verify asymmetries in the spillover effects. Lin et al. (1994) and Karolyi and Stulz (2016) contrast the results of the aforementioned authors, namely Hamao et al. (1990), showing the existence of symmetries between the American and Japanese markets, with the authors indicating that a context of growing stock returns fosters an increase of correlations and interdependencies in both markets. In the same geographical context, including the German stock market, Morana and Beltratti (2008) describe the increase of comovements between the European and US markets, through increased linkages concerning volatility, correlation, and stock returns. Ozdemir (2009) finds that the American and German markets respond positively to general stock market shocks, while the British and Japanese markets respond negatively to the German and British stock markets, respectively. Kim et al. (2005) find that Eurozone countries respond more significantly to a Eurozone stock market shock than to a specific stock market shock (both European and non-European stock markets). Nevertheless, Nielsson (2009) reveals that the Euronext stock market merely contributes to higher returns and the liquidity of large companies in relation to increased sales turnover in foreign markets. In turn, Khan and Vieito (2012) advocate that the integration of the Portuguese stock market into the Euronext stock market contributed to improved market efficiency. Albuquerque and Vega (2009) demonstrate that information from the US market and macroeconomic fundamentals affect the Portuguese stock market positively as well as its liquidity. This leads to the following hypothesis: H4: Stock market shocks have a significant and positive impact on stock market performance.

3 Methodology

3.1 Econometric Model

In order to achieve the main purpose of the current paper, through the research hypotheses, an SVAR model with contemporaneous restrictions (Amisano and Giannini 1997) is estimated to analyze the effects and dynamics of innovations, bearing in mind the literature of reference. Thus, the vector of K endogenous variables is represented as follows: y_t = {brent, wti, dow, dax, cac, psi20, athens}. The reduced-form VAR(p) process is expressed as follows:


y_t = c + \sum_{i=1}^{p} A_i\, y_{t-i} + D_t + \mu_t   (1)

where c is a K-vector of constants; p denotes the maximum lag order; A_i are the K × K coefficient matrices; D_t is a dummy variable (see footnote 4) included to smooth the effect of the subprime crisis and the European sovereign debt crisis; and μ_t is the vector of error terms, which follow a white noise process. Therefore, the structural representation is specified by:

p X

A*i yt–i þ Bεt

ð2Þ

i¼1

Where εt represents the vector of structural shocks; A*i perform the K × K coefficient matrices for i ¼ 1, . . . , p, which produces structural coefficients that differ from the reduced form. The dummy variable is not integrated into the structural equation. Assuming B¼IK, let E(εt ε0t ) ¼ DK, where DK is a K-dimensional diagonal matrix that is restricted to the structural shocks, for there to be serially and mutually uncorrelated innovations, in which by processing the reduced form of the VAR with the results of matrix A, the relationship between the reduced-form errors μt and the structural shocks εt is expressed as follows: ( ) Aut ¼ εt and DK ¼ ðAÞ E ut u0t ðA0 Þ

ð3Þ

After normalization of the diagonal K elements of matrix A to 1 s, K-dimensional structural system identification K(K – 1)/2 restrictions permit orthogonalizing the shocks.

3.2

Data Description and Model Specification

This paper aims to analyze the impact of oil shocks (WTI and Brent5) on stock markets in the Eurozone (Germany, France, Greece, and Portugal) and the USA, as well as ascertaining the idiosyncratic stock markets shocks on their own stock

4 The dummy variable assumes the value of 0 between December 30, 1994 and September 12, 2008, being equal to 1 between September 15, 2008 and May 10, 2010. 5 West Texas Intermediate (WTI) corresponds to the US benchmark oil prices traded on the New York Mercantile Exchange. This product is extracted from the Gulf of Mexico. Brent represents Europe’s benchmark oil prices traded on the Intercontinental Exchange Brent Futures which is extracted, mainly, from the North Sea. However, it should be noted that Brent is the most important reference for the category of sweet light crude, referred to as the best quality oil.

112

J. Leitão and J. Ferreira

market performance. For this purpose, time series with daily frequency6 is used spanning from December 30, 1994 to May 10, 2010. Daily data are selected for the current study due to the need to study the effects of high-frequency innovations concerning market transactions trading daily. Therefore, the time span is due to covering several episodes (for instance, the dotcom bubble and the subprime crisis) of importance for stock market performance. This sample comprises four Eurozone indices (DAX30, CAC40, ATHENSCOMPOSITE, and PSI20) and one index from North America (Dow Jones Industrial Average). To ensure further convergence of the estimated coefficients, all variables are expressed in natural logarithms. The specifications of the variables included in the study are described as follows: brentt corresponds to the benchmark for European oil prices; wtit corresponds to the benchmark of US oil prices; dowt is the US reference stock index; daxt, refers to the German reference stock index; cact describes the French reference stock index; psi20t concerns the Portuguese stock index; and Athenst refers to the Greek reference stock index. To identify the structural parameters, a lower triangular matrix is used, incorporating the respective contemporaneous restrictions among the variables, based on assumptions from the economic literature and ad hoc economic interpretations backed up by economic theory (Basher et al. 2012). To calculate the respective coefficients of matrix A, the following matrix notation is considered: 2

α1,1

6 6 α2,1 6 6 α3,1 6 6 A ¼ 6 α4,1 6 6 α5,1 6 6 4 α6,1 α7,1

0

0

0

0

0

α2,2

0

0

0

0

α3,2 α4,2

α3,3 α4,3

0 α4,4

0 0

0 0

α5,2 α6,2

α5,3 α6,3

α5,4 α6,4

α5,5 α6,5

0 α6,6

α7,2

α7,3

α7,4

α7,5

α7,6

0

3

7 0 7 7 0 7 7 7 0 7 7 0 7 7 7 0 5 α7,7

ð4Þ

Oil price shocks are assumed to be absolutely exogenous, i.e., they do not react, in contemporary terms to the other variables. Zivot and Andrews (1992) and Kilian (2008) argue that oil prices are treated as exogenous before events and economic activity, respectively. According to the US Energy Information Administration (2012), it is assumed that Brent is more exogenous than the West Texas Intermediate (WTI) benchmark. The stock markets react simultaneously to oil shocks within the same period, given that the countries linked to the stock indices under analysis are oil-importing/dependent. Arouri et al. (2011) suggest that oil prices affect financial markets, and thus react contemporaneously to oil shocks. Glezakos et al. (2007) 6

All data with the exception of the WTI variable were sourced from the Datastream database. The daily data for the WTI variable was sourced from the US Energy Information Administration website.

Oil Shocks and Stock Market Performance: Evidence from the Eurozone and the USA

113

Table 1 Unit root tests with trend and constant (author’s creation) Stock markets WTI BRENT DOWJONES DAX CAC ATHEN PSI20

ADF –3.05 –2.69 –2.45 –1.94 –1.68 –1.07 –1.49

Level PP –2.80 –2.55 –2.44 –1.87 –1.51 –1.00 –1.53

KPSS 0.50* 0.54* 0.78* 0.64* 0.90* 0.76* 0.63*

ADF –65.95* –66.22* –48.31* –64.92* –40.14* –55.49* –56.52*

First differences PP –66.37* –65.31* –66.56* –65.02* –65.23* –55.50* –56.86*

KPSS 0.04 0.04 0.08 0.10 0.08 0.10 0.13

Notes: * indicates significance at 1%

emphasize that Dow Jones performance responds solely to domestic factors, exerting a prevailing influence on other stock markets. On the one hand, the authors also reveal that DAX responds to both domestic factors and Dow Jones performance, and on the other hand, affects the other stock indices.7 Thus, it is assumed that the German, French, Greek, and Portuguese indices react simultaneously within the same period to the US index. The French index (CAC), which is one of the most influential in Europe, is assumed to contemporaneously affect the Portuguese index (PSI20). Filis and Leon (2006) conclude that both the Portuguese and French stock markets affect the Greek market, within the same period, i.e., there is a transmission of the information originating from these markets to the Greek market.

4 Results and Discussion

4.1 Empirical Evidence

To estimate an SVAR model, stationarity testing is performed first. The unit root tests of Augmented Dickey and Fuller (ADF) (Dickey and Fuller 1979), Phillips and Perron (PP) (Phillips and Perron 1988), and Kwiatkowski, Phillips, Schmidt, and Shin (KPSS) (Kwiatkowski et al. 1992) are applied, and some variables are transformed by differencing in order to estimate the model solely with stationary, I(0), variables, as shown in Table 1.8 Then, the optimal lag length (pmax) is selected, using five information criteria to obtain the p-th order of the reduced-form VAR process. Considering the

7 The European indices considered in the study are the following: BEL (Belgium); CAC (France); DAX (Germany); FTSE (UK); ATHENS (Greece); AEX (Netherlands); IBEX (Spain); and MIB (Italy). 8 Concerning the ADF and PP unit root tests, the null hypothesis is that the variable has a unit root. In the KPSS test, the null hypothesis is that the variable is stationary.


Table 2 Lag order selection criteria (author's creation)

Lag   LogL        LR           FPE          AIC         SBC          HQ
0     81960.48    NA           3.71e-27     –40.9937    –40.9717     –40.9859
1     82710.72    1497.1160    2.61e-27     –41.3445    –41.2454*    –41.3094*
2     82785.88    149.7062     2.58e-27     –41.3576    –41.1813     –41.2951
3     82846.08    119.7182     2.56e-27     –41.3632    –41.1098     –41.2734
4     82892.01    91.1699      2.57e-27     –41.3617    –41.0311     –41.2445
5     82950.09    115.0761     2.56e-27     –41.3662    –40.9585     –41.2217
6     82990.36    79.6648      2.57e-27     –41.3619    –40.8770     –41.1900
7     83055.32    128.2501     2.55e-27*    –41.3698*   –40.8079     –41.1706
8     83099.08    86.2517*     2.55e-27     –41.3672    –40.7281     –41.1407

Notes: * Indicates lag order selected by the criterion; NA—Not Available

Schwarz Information Criterion (SBC), the VAR model is set with 1 lag (see Table 2). The Granger-causality test (Granger 1969) of the variables in the study identifies both unidirectional and bidirectional causal relationships (see Table 3). As indicated in Table 3, for a significance level of 5%, the Brent benchmark is preceded by both the WTI benchmark and the Dow Jones stock market performance, whereas the WTI benchmark is only preceded by the Dow Jones performance, at a 1% level of significance. Regarding causality among stock markets, the Dow Jones stock market is preceded by the PSI20 stock market performance at a 5% level of significance, while the Dow Jones stock market Granger-causes all European stock market performances at a 1% level of significance. Furthermore, the French stock market is preceded by the Portuguese stock market performance at a 1% level of significance. Likewise, the Granger-causality test shows that the PSI20 stock market is preceded, with statistical significance at 1%, by the CAC market performance, as is the Athens stock market. Last but not least, the Brent benchmark precedes the Athens stock market performance at a 1% level of significance. In short, the Granger-causality test confirms the presence of feedback relationships such as ΔDOW ↔ ΔPSI20 and ΔCAC ↔ ΔPSI20. Regarding unidirectional relationships, ΔBRENT → ΔATHENS, ΔWTI → ΔBRENT, ΔDOW → ΔBRENT, ΔDOW → ΔWTI, ΔDOW → ΔDAX, ΔDOW → ΔCAC, ΔDOW → ΔATHENS, ΔCAC → ΔPSI20, and ΔCAC → ΔATHENS are identified. Forecasting techniques such as impulse-response functions (hereafter, IRF) and forecast error variance decompositions (hereafter, FEVD) are applied, allowing analysis of the behavior of variables in an SVAR system.9 Sadorsky (1999) advocates that the IRF reproduces the responsiveness of an endogenous variable to a certain exogenous shock. Regarding the FEVD, the author states that

9 Due to space constraints, the results are not presented here. However, they can be obtained on request from the authors.

Table 3 Granger-causality block exogeneity Wald test (author's creation)

Dependent   Δbrent      Δwti       Δdow         Δdax       Δcac         Δpsi20      Δathens    Block
Δbrent      –           22.6253*   32.4378*     0.1726     2.2824       3.5881      3.0672     65.5080*
Δwti        2.3165      –          18.7652*     0.2240     1.0947       1.5745      3.2956     25.9810*
Δdow        1.4546      0.2902     –            2.7954     0.3180       6.0363**    3.4780     17.1615*
Δdax        0.0601      1.7425     360.9805*    –          0.3175       1.8954      1.6946     366.6186*
Δcac        8.79E-05    1.4291     509.5407*    0.4417     –            8.5174*     0.9957     555.7506*
Δpsi20      3.3117      0.0171     356.8699*    0.4717     345.3244*    –           0.0009     379.1295*
Δathens     10.0906*    2.7895     218.4473*    1.6709     9.0147*      1.5621      –          284.9291*

Notes: (i) The variable in each row is the dependent variable (the destination of the causality); the variable in each column, or the block of all variables jointly, is the independent variable (the origin of the causality). (ii) The contrasts of the causality of the variables are made using the χ2 statistic with one degree of freedom. (iii) * indicates significance at 1%; ** indicates significance at 5%



Table 4 Dynamic analysis of the causalities' directions (author's creation)

                              4-step-ahead         8-step-ahead         10-step-ahead
Direction of causality        FEVD      AIRF       FEVD      AIRF       FEVD      AIRF       Sign
Brent Shock → ΔATHENS         0.4846    0.0008     0.4846    0.0008     0.4846    0.0008     +
WTI Shock → ΔBRENT            0.6090    0.0014     0.6090    0.0014     0.6090    0.0014     +
DOW Shock → ΔBRENT            0.7192    0.0015     0.7194    0.0014     0.7194    0.0014     +
DOW Shock → ΔWTI              0.3222    0.0011     0.3224    0.0011     0.3224    0.0011     +
DOW Shock → ΔDAX*             35.6289   0.0112     35.6293   0.0112     35.6293   0.0112     +
DOW Shock → ΔCAC*             34.9800   0.0107     34.9810   0.0106     34.9810   0.0106     +
DOW Shock → ΔPSI20*           21.3031   0.0070     21.3035   0.0069     21.3035   0.0069     +
DOW Shock → ΔATHENS*          12.3765   0.0083     12.3765   0.0083     12.3765   0.0083     +
CAC Shock → ΔPSI20*           6.3476    0.0021     6.3476    0.0021     6.3476    0.0021     +
CAC Shock → ΔATHENS           1.4397    0.0015     1.4397    0.0015     1.4397    0.0015     +
PSI20 Shock → ΔDOW            0.1139    –0.0004    0.1139    –0.0004    0.1139    –0.0004    –
PSI20 Shock → ΔCAC            0.1753    –0.0007    0.1753    –0.0007    0.1753    –0.0007    –

Notes: (i) FEVD, Forecast Error Variance Decomposition; AIRF, Accumulated Impulse-Response Function. The causality sign is obtained from the accumulated value of the 10-steps-ahead coefficients, since from that period the coefficients reach the necessary stability (Goux 1996). (ii) * Denotes a significant impact, i.e., an impact above 5% (Goux 1996)

this technique determines the proportion in which a certain innovation explains the n-steps-ahead forecast error variance of a certain endogenous variable. Building on the FEVD and IRF results, a dynamic analysis (see Table 4) is applied, which combines the three previous analyses, i.e., the Granger-causality test, the FEVD, and the IRF.10 Given the results of the dynamic analysis for the SVAR model (Table 4), only unidirectional relationships are significant. In this line, the Dow Jones stock market prevails over European stock markets' performance, since it is the origin of the causal relationships.

10 According to Sims (1980), Goux (1996), and Lütkepohl (1999), the dynamic analysis must be complemented with the use of these two forecasting techniques.
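For readers who wish to reproduce this type of workflow, the following sketch (hypothetical file and column names; not the authors' code) shows how the unit root tests, lag selection, a reduced-form VAR, Granger-causality tests, and the IRF and FEVD analysis can be run with statsmodels:

```python
# Illustrative workflow with assumed data file and column names.
import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss
from statsmodels.tsa.api import VAR

prices = pd.read_csv("oil_and_indices.csv", index_col=0, parse_dates=True)  # log levels
returns = prices.diff().dropna()                      # first differences of the logs

for col in returns:                                   # ADF and KPSS p-values per series
    print(col, adfuller(returns[col])[1], kpss(returns[col], regression="c")[1])

model = VAR(returns)
print(model.select_order(maxlags=8).summary())        # AIC, SBC (BIC), HQ, FPE
res = model.fit(1)                                    # 1 lag, as selected by SBC

# Block-exogeneity style Wald test: does the Dow Jones Granger-cause the DAX?
print(res.test_causality("dax", ["dow"], kind="wald").summary())

res.irf(10).plot()                                    # impulse-response functions
res.fevd(10).summary()                                # forecast error variance decomposition
```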


Thus, the causal relationship established between the Dow Jones stock market shock and the DAX stock market is significant, since after 4 steps ahead the shock effect accounts for 35.63%, whereas for 8 and 10 steps ahead it stabilizes at around 35.6%. Moreover, the relationship has a positive sign. Likewise, regarding the causal relationship between the Dow Jones stock market shock and CAC stock market performance, after 4 steps ahead an impact of 34.98% is found; the coefficients for subsequent steps indicate stability of the shock effect, and the sign of this causal relationship is also positive. Regarding the relationship between the American stock market shock and Portuguese stock market performance, the effect of the shock accounts for 21.30% across the three forecasting time horizons. As in the previous causal relationships, the sign is positive. It is also evident that, for the causal relationship between the Dow Jones stock market shock and the Athens stock market, as well as for the causal relationship between the CAC stock market shock and the PSI20 stock market, there is stability, since throughout the steps ahead the impact accounts for 12.4% and 6.35%, respectively. For both causal relationships, the sign is positive.

4.2 Discussion

Bearing in mind H1, which postulates that oil shocks have a significant and negative impact on US and European stock market performance, the results do not reveal interdependent relationships among the variables, indicating rejection of H1 and contrasting with previous literature, namely, Sadorsky (1999), Papapetrou (2001), Park and Ratti (2008), Miller and Ratti (2009), Filis (2010), and Cunado and Perez de Gracia (2014). As regards H2, which suggests that oil shocks affect stock market performance positively and significantly, the results also indicate rejection of the hypothesis. Hence, the results are not in line with those of Oberndorfer (2009), Elyasiani et al. (2011), and Zhu et al. (2011). Hypothesis H3 states that oil shocks do not have a significant impact on stock market performance. The empirical evidence fails to reject this hypothesis, in agreement with Apergis and Miller (2009) and Lee et al. (2012). Hypothesis H4 postulates that stock market shocks impact positively and significantly on the European and American stock markets. The results obtained are consistent with the expected results and with the evidence of Eun and Shim (1989), Nielsson (2009), and Yang et al. (2003).11

11 This reference is presented because the authors conclude that the most relevant stock markets, in the context of the monetary union, predominantly influence the performance of less relevant stock markets. References specifically explaining the relationship between the French and Portuguese stock markets were not identified in the literature review.


Given the results obtained, half of the hypotheses formulated were not rejected. Authors who have shown that oil shocks impact negatively on stock market performance have focused on the interpretation that oil shocks act indirectly, for instance, via macroeconomic fundamentals (Filis 2010; Miller and Ratti 2009; Papapetrou 2001; Park and Ratti 2008), via changes in the cash flows of listed companies (Sadorsky 1999), or via changes in crude oil (Cunado and Perez de Gracia 2014). Oberndorfer (2009), Elyasiani et al. (2011), and Zhu et al. (2011) suggest positive effects of oil shocks on stock markets, via oil-related stocks. Nevertheless, this paper provides new evidence of the non-occurrence of significant effects of oil shocks on stock market performance. These results are therefore in line with Apergis and Miller (2009) and Lee et al. (2012), who both argue that the exchange rate or economic activity contributes to stock market performance. Given this evidence and the empirical results obtained, the current paper supports that view, since no significant relationship between oil shocks and stock market performance is found. Eun and Shim (1989), Nielsson (2009), and Yang et al. (2003) show that the US stock market is the driving market, prevailing over other stock markets' performance, along with an increase in liquidity among Euronext stock markets. In line with this view, this paper presents similar interdependent relationships between the US market and European stock markets, showing an identical reaction from European stock markets to US stock market innovations.

5 Conclusions

5.1 Evidence and Implications

For the crude oil benchmarks in international markets, it appears that both WTI and BRENT shocks show no impact on the international stock markets studied, indicating rejection of H1. The dynamic analysis suggests that stock market shocks do not affect oil price movements significantly, indicating rejection of H2. US stock market innovation produces a significant and positive effect on Eurozone stock market performance, highlighting the American stock market as the driving market determining European stock markets' performance, and thus H3 is not rejected. In the Eurozone context, a higher number of integration relationships among the stock markets was expected. However, according to the empirical results obtained, French stock market innovation has a positive and significant impact on Portuguese stock market performance, signifying non-rejection of H4. It is therefore concluded that oil prices do not significantly influence the US and European stock markets. Additionally, there is no evidence of effects of stock market innovations on oil price movements. Indeed, as mentioned, stock market


performance may depend on the behavior of other variables (for instance, interest rates), possibly as a consequence of the boom in economic activity until the beginning of the millennium, or of the credit derivatives bubble. Furthermore, the rise in oil prices may be influenced by an increase in demand from economies such as China and India (Li and Lin 2011), in addition to the reduction in US energy consumption per dollar of real gross domestic product (Hamilton 2009). The preponderant role played by the US stock market with respect to the other stock markets is fundamentally justified by the globalization of markets, based on the different relations among world economies, and by the US economy being the largest. In turn, the impact of the French stock market on the Portuguese stock market is due to the integration of both stock markets in the Euronext Stock Exchange. The current evidence has implications for market regulators and for investors. Concerning regulators, taking into account that the USA is the prevailing market for the other stock markets, broad oversight by market regulators is necessary, considering speculation and market manipulation. Regarding investors, the results obtained may be important for the purpose of asset allocation. This paper provides new evidence for the literature, considering a context of interdependence between the US stock market and the Eurozone stock markets, which include the two Eurozone countries with a higher risk of default. In addition, it contributes an analysis that addresses gaps in the study of the effects of oil price shocks on international stock market performance, particularly by revealing the absence of significant effects in a Granger-causality framework.

5.2 Limitations and Future Research

As regards the limitations of the paper, the first is the non-inclusion of non-commercial agents as a factor of market speculation. The second is the non-inclusion of crude oil production as a variable for studying the hypothetical effect of an oil supply shock. The third is the non-inclusion of the three factors of Fama and French (1993), which would allow a more thorough analysis of the oil market and stock market performance. It should be emphasized, however, that these limitations are due to the impossibility of accessing data that could address them within the scope of the current research. In terms of future research, it is important to analyze whether the refined products market has a greater weight in stock markets than the crude oil market. It is also necessary to study the effects of an increase in the prices of oil and oil-related products on oil-related stock sectors in the Eurozone context, aiming to understand which sectors are significantly affected by oil price movements, as well as to verify the different effects of oil market and refined products market movements on oil-related stock sectors.


References Albuquerque, R., & Vega, C. (2009). Economic news and international stock market co-movement. Review of Finance, 13(3), 401–465. Álvarez, L. J., Hurtado, S., Sánchez, I., & Thomas, C. (2011). The impact of oil price changes on Spanish and euro area consumer price inflation. Economic Modelling, 28(1–2), 422–431. Amisano, G., & Giannini, C. (1997). Topics in structural VAR econometrics (2nd ed.). Berlin: Springer. Apergis, N., & Miller, S. M. (2009). Do structural oil-market shocks affect stock prices? Energy Economics, 31(4), 569–575. Arouri, M. E. H. (2011). Does crude oil move stock markets in Europe? A sector investigation. Economic Modelling, 28(4), 1716–1725. Arouri, M. E. H., & Nguyen, D. K. (2010). Oil prices, stock markets and portfolio investment: Evidence from sector analysis in Europe over the last decade. Energy Policy, 38(8), 4528–4539. Arouri, M. E. H., Lahiani, A., & Nguyen, D. K. (2011). Return and volatility transmission between world oil prices and stock markets of the GCC countries. Economic Modelling, 28(4), 1815–1825. Arouri, M. E. H., Jouini, J., & Nguyen, D. K. (2012). On the impacts of oil price fluctuations on European equity markets: Volatility spillover and hedging effectiveness. Energy Economics, 34 (2), 611–617. Basher, S. A., Haug, A. A., & Sadorsky, P. (2012). Oil prices, exchange rates and emerging stock markets. Energy Economics, 34(1), 227–240. Broadstock, D. C., Cao, H., & Zhang, D. (2012). Oil shocks and their impact on energy related stocks in China. Energy Economics, 34(6), 1888–1895. Chen, S. S. (2010). Do higher oil prices push the stock market into bear territory? Energy Economics, 32(2), 490–495. Ciner, C. (2001). Energy shocks and financial markets: Nonlinear linkages. Studies in Nonlinear Dynamics and Econometrics, 5(3), 203–212. Cong, R. G., Wei, Y. M., Jiao, J. L., & Fan, Y. (2008). Relationships between oil price shocks and stock market: An empirical analysis from China. Energy Policy, 36(9), 3544–3553. Cunado, J., & Perez de Gracia, F. (2014). Oil price shocks and stock market returns: Evidence for some European countries. Energy Economics, 42, 365–377. Dickey, D. A., & Fuller, W. A. (1979). Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 74(366), 427. Elyasiani, E., Mansur, I., & Odusami, B. (2011). Oil price shocks and industry stock returns. Energy Economics, 33(5), 966–974. Eun, C. S., & Shim, S. (1989). International transmission of stock market movements. The Journal of Financial and Quantitative Analysis, 24(2), 241–256. Fama, E. F., & French, K. R. (1993). Common risk factors in the returns on stocks and bonds. Journal of Financial Economics, 33(1), 3–56. Filis, G. (2010). Macro economy, stock market and oil prices: Do meaningful relationships exist among their cyclical fluctuations? Energy Economics, 32(4), 877–886. Filis, G., & Leon, C. (2006). Time-varying dynamics in the Greek stock market integration with the EMU stock markets. In Proceedings of the 7th WSEAS International Conference on Mathematics & Computers in Business & Economics. Glezakos, M., Merika, A., & Kaligosfiris, H. (2007). Interdependence of major world stock exchanges: How is the Athens stock exchange affected. International Research Journal of Finance and Economics, 7(7), 24–39. Goux, J.-F. (1996). Le canal étroit du crédit en France: Essai de vérification macroéconomique 1970-1994. Revue d’économie Politique, 106(4), 655–681. Granger, C. W. J. (1969). 
Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37(3), 424–438.


Hamao, Y., Masulis, R. W., & Ng, V. (1990). Correlations in Price changes and volatility across international stock markets. Review of Financial Studies, 3(2), 281–307. Hamilton, J. D. (1983). Oil and the macroeconomy since World War II. Journal of Political Economy, 91(2), 228–248. Hamilton, J. D. (2003). What is an oil shock? Journal of Econometrics, 113(2), 363–398. Hamilton, J. D. (2009). Causes and consequences of the oil shock of 2007-08. Brookings Papers on Economic Activity, Economic Studies Program, The Brookings Institution, 40(1), 215–283. Huang, R. D., Masulis, R. W., & Stoll, H. R. (1996). Energy shocks and financial markets. Journal of Futures Markets, 16(1), 1–27. International Energy Agency. (2009). World energy outlook (pp. 489–503). Paris: International Energy Agency. Jammazi, R., & Aloui, C. (2010). Wavelet decomposition and regime shifts: Assessing the effects of crude oil shocks on stock market returns. Energy Policy, 38(3), 1415–1435. Jiménez-Rodríguez, R. (2008). The impact of oil price shocks: Evidence from the industries of six OECD countries. Energy Economics, 30(6), 3095–3108. Jones, C. M., & Kaul, G. (1996). Oil and the stock markets. Journal of Finance, 51(2), 463–491. Kang, W., & Ratti, R. A. (2013). Oil shocks, policy uncertainty and stock market return. Journal of International Financial Markets, Institutions and Money, 26(1), 305–318. Karolyi, A., Stulz, M. R., & American Finance Association. (2016). Why do markets move together? An investigation of U. S. Japan Stock Return Comovements. The Journal of Finance, 51(3), 951–986. Khan, W., & Vieito, J. P. (2012). Stock exchange mergers and weak form of market efficiency: The case of Euronext Lisbon. International Review of Economics and Finance, 22(1), 173–189. Kilian, L. (2008). Exogenous oil supply shocks: How big are they and how much do they matter for the U.S. economy? Review of Economics and Statistics, 90(2), 216–240. Kim, S.-J., Moshirian, F., & Wu, E. (2005). Dynamic stock market integration driven by the European Monetary Union: An empirical analysis. Journal of Banking & Finance, 29(10), 2475–2502. Kwiatkowski, D., Phillips, P. C. B., Schmidt, P., & Shin, Y. (1992). Testing the null hypothesis of stationarity against the alternative of a unit root. How sure are we that economic time series have a unit root? Journal of Econometrics, 54(1–3), 159–178. Lardic, S., & Mignon, V. (2006). The impact of oil prices on GDP in European countries: An empirical investigation based on asymmetric cointegration. Energy Policy, 34(18), 3910–3915. Lee, B. J., Yang, C. W., & Huang, B. N. (2012). Oil price movements and stock markets revisited: A case of sector stock price indexes in the G-7 countries. Energy Economics, 34(5), 1284–1300. Li, H., & Lin, X. S. (2011). Do emerging markets matter in the world oil pricing system? Evidence of imported crude by China and India. Energy Policy, 39(8), 4624–4630. Li, S. F., Zhu, H. M., & Yu, K. (2012). Oil prices and stock market in China: A sector analysis using panel cointegration with multiple breaks. Energy Economics, 34(6), 1951–1958. Lin, W.-L., Engle, R. F., & Ito, T. (1994). Do bulls and bears move across Borders? International transmission of stock returns and volatility. Review of Financial Studies, 7(3), 507–538. Lütkepohl, H. (1999). Vector autoregressions. SFB 373 Discussion Papers 1999, 4, Humboldt University of Berlin. Interdisciplinary Research Project 373: Quantification and Simulation of Economic Processes. Miller, J. I., & Ratti, R. A. (2009). 
Crude oil and stock markets: Stability, instability, and bubbles. Energy Economics, 31(4), 559–568. Morana, C., & Beltratti, A. (2008). Comovements in international stock markets. Journal of International Financial Markets, Institutions and Money, 18(1), 31–45. Nandha, M., & Faff, R. (2008). Does oil move equity prices? A global view. Energy Economics, 30 (3), 986–997. Narayan, P. K., & Sharma, S. S. (2011). New evidence on oil price and firm returns. Journal of Banking and Finance, 35(12), 3253–3262.


Nielsson, U. (2009). Stock exchange merger and liquidity: The case of Euronext. Journal of Financial Markets, 12(2), 229–267. https://doi.org/10.1016/j.finmar.2008.07.002. Oberndorfer, U. (2009). Energy prices, volatility, and the stock market: Evidence from the Eurozone. Energy Policy, 37(12), 5787–5795. Ozdemir, Z. A. (2009). Linkages between international stock markets: A multivariate long-memory approach. Physica A: Statistical Mechanics and its Applications, 388(12), 2461–2468. Papapetrou, E. (2001). Oil price shocks, stock market, economic activity and employment in Greece. Energy Economics, 23(5), 511–532. Park, J., & Ratti, R. A. (2008). Oil price shocks and stock markets in the U.S. and 13 European countries. Energy Economics, 30(5), 2587–2608. Phillips, P. C. B., & Perron, P. (1988). Testing for a unit root in time series regression. Biometrika, 75(2), 335–346. Sadorsky, P. (1999). Oil price shocks and stock market activity. Energy Economics, 21(5), 449–469. Sadorsky, P. (2012). Correlations and volatility spillovers between oil prices and the stock prices of clean energy and technology companies. Energy Economics, 34(1), 248–255. Sims, C. A. (1980). Macroeconomics and reality. Econometrica, 48(1), 1–48. US Energy Information Administration. (2012). Retrieved from independent statistics & analysis. Website https://www.eia.gov/outlooks/steo/special/pdf/2012_sp_02.pdf Yang, J., Min, I., & Li, Q. (2003). European stock market integration: Does EMU matter? Journal of Business Finance and Accounting, 30(9–10), 1253–1276. Zhu, H. M., Li, S. F., & Yu, K. (2011). Crude oil shocks and stock markets: A panel threshold cointegration approach. Energy Economics, 33(5), 987–994. Zivot, E., & Andrews, D. W. K. (1992). Further evidence on the great crash, the oil-price shock, and the unit-root hypothesis. Journal of Business & Economic Statistics, 10(3), 251–270.

Reinforcement Learning Approach for Dynamic Pricing

Maksim Balashov, Anton Kiselev, and Alena Kuryleva

Abstract With the introduction of digital technologies, it has become easier for customers to compare prices and choose the product that is most profitable for them. This leads to instability of demand, which means that market players need to review their pricing policies in favor of one that takes into account the characteristics of the producer's resources and the current state of demand. Dynamic pricing seems to be an adequate solution to this problem, as it is adaptive to customer expectations. In addition, the digitalization of the economy creates unique opportunities for using this apparatus. The purpose of this study is to evaluate the possibility of applying the concept of dynamic pricing to traditional retail. The goal of solving the dynamic pricing problem in the framework of this study is to maximize profits from the sale of a specific associated product at an automatic gas station. To solve this problem, the authors propose using machine learning approaches that adapt to the external environment, one of which is reinforcement learning (RL). At the same time, an approach is proposed for reconstructing the demand surface for subsequent training of the agent. Keywords Dynamic pricing · Reinforcement learning · Neural network · Deep learning

M. Balashov PJSC Gazpromneft, St. Petersburg, Russia ITMO University, St. Petersburg, Russia e-mail: [email protected] A. Kiselev · A. Kuryleva (*) PJSC Gazpromneft, St. Petersburg, Russia e-mail: [email protected]; [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Devezas et al. (eds.), The Economics of Digital Transformation, Studies on Entrepreneurship, Structural Change and Industrial Dynamics, https://doi.org/10.1007/978-3-030-59959-1_8


1 Introduction

Pricing is a complex and multifaceted process that requires rethinking due to the widespread digitalization of the economy. The development of digital platforms in many areas of economic activity can reduce the cost of products and bring them to a new qualitative level, which is a prerequisite for a new understanding of pricing. Implementing pricing as the traditional sum of distribution costs and estimated profit can no longer be the optimal strategy of the seller due to high competition in the market; it leads to a decrease in customer loyalty and in the competitive ability of market players who adhere to this approach. Qualitative changes in the economic sphere necessitate the transition of pricing for goods and services from a fixed price model to a more flexible system: a dynamic pricing model. There are several important reasons for moving from a fixed pricing mechanism to a dynamic approach: dynamic pricing allows customer flows to be redistributed, generating more revenue and thereby allowing businesses to work more efficiently. Besides, a flexible pricing system can be a good tool for increasing customer loyalty, and therefore for establishing strong competitive advantages for the seller. Dynamic pricing refers to a pricing approach that involves the use of flexible strategies and tactics depending on historical data, current events, the competitive environment, demand, and the goals of the seller. The question of the possibility of using flexible pricing was first raised in the 1970s by Rothstein (1971) and Littlewood (1972), who proposed dynamic pricing for flights and for the hotel sector. In 1978, the law on the deregulation of the airline sector passed by the US Congress served as a powerful prerequisite for the development of their ideas not only from a theoretical but also from a practical point of view. Later, from the beginning of the 1990s, studies of dynamic prices received publicity in the field of car rental (Carroll and Grimes 1995), in the railway sector (Ciancimino et al. 1999), in the hotel sector (Hayes and Miller 2011), etc. In the 1990s, the idea of flexible prices also reached retail trade (Subrahmanyan and Shoemaker 1996). In later works on dynamic pricing, more attention is paid not to the theoretical understanding of the problem but to the empirical development and evaluation of strategies, as well as to the modeling of factors affecting the price. Touching upon the question of the factors determining the price of a product, it is important to note that there is no consensus on this. The main factors most often found in studies on dynamic pricing are the following (Deksnyte and Lydeka 2012):

1. Behavior and characteristics of customers: the level of customers' knowledge is important for assessing their market behavior strategies.
2. Fair prices: "the consumer's assessment and understanding of whether the difference between the prices of the seller and the other party is reasonable, acceptable or justifiable" (Xia et al. 2004).
3. Market structure: the competitive environment and the position of a particular product on the market, taking into account the specific reaction to price changes.
4. Demand: the most important factor, according to the authors. Demand modeling is given special attention in this work, since its stochastic nature significantly complicates the adoption of a dynamic pricing strategy.
5. Perception of product value: customers often put off purchases to get a better deal in the future.
6. Seasonality: researchers note that price changes and seasonal fluctuations affect some products more than others.

In studies on pricing, the idea of modeling the demand curve itself is popular, since many researchers agree that demand is the determining factor in setting prices. The authors also believe that an important aspect of determining the pricing strategy is the modeling of stochastic demand. Demand is often modeled as an exogenously given random process with a known probability distribution, but such models have several disadvantages: firstly, they are completely determined by the demand parameters, and secondly, they do not allow demand to be refined when additional information is obtained. In their work, Carvalho and Puterman (2003) pay attention to the problem of dynamic pricing in which the form of the demand function is known, but the parameters, which are updated over time using Kalman filters, are not. Aviv and Pazgal (2002) show that there is a trade-off between a low price, which leads to loss of income, and a high price, which reduces the likelihood of a purchase, while demand remains undetermined for a long time. Another method of estimating demand is the use of neural networks. Thus, Liu and Wang (2013) use neural networks to model different demand scenarios and conclude that this tool demonstrates higher modeling accuracy compared to other methods while being adaptive to various environmental behaviors. The literature describes many tools and mathematical models for choosing dynamic pricing methods. The mathematical formulation of the problem of determining the price usually comes down to solving an optimization problem. Due to the possibility of analyzing large arrays of data and factors, machine learning methods can, according to the authors, be considered one of the most effective tools for solving problems of changing price strategies. For example, Lawrence (2003) considered the approach of applying machine learning to modeling the assessment of profit from a pricing strategy based on marked-up transaction data, while Spedicato et al. (2018) optimized the pricing strategy in the insurance market. Improving machine learning technologies and expanding the data available for analysis allows dynamic pricing to go beyond the traditional management function


and allows sellers to understand demand behavior. Pricing is becoming reasonable and adaptive to changing consumer behavior. The concept of dynamic pricing uses algorithms that can significantly increase the speed of response to price changes. Automation of the process makes it possible to track many factors, such as time, traffic, weather, news events, historical data, etc. A well-established and popular machine learning method for solving the dynamic pricing problem under demand uncertainty is reinforcement learning. For example, Gupta et al. (2002) use reinforcement learning in an uncertain, non-stationary auction environment. Jintian and Lei (2009) address the issue of dynamic pricing in a duopoly in electronic retail markets; the authors conduct experiments with reinforcement learning (simulated annealing Q-learning) as well as with WoLF-PHC algorithms (win-or-learn-fast policy hill climbing). In the work of Lu et al. (2018), reinforcement learning is used for electricity pricing, in which the dynamic pricing problem is formulated as a discrete finite Markov decision process, and Q-learning is adopted to solve it. Despite the variety of studies on dynamic pricing, in practice this strategy is most often applied to goods and services sold on electronic platforms. This direction remains relatively new for offline transactions, such as traditional retail. In this regard, the authors propose to consider the possibility of applying the concept of dynamic pricing to the sale of related products at automatic gas stations.

2 The Practice of Dynamic Pricing Usage

It is believed that in practice the idea of dynamic pricing was introduced in the 1980s by American Airlines, in connection with the law on the deregulation of the airline sector adopted by the US Congress. The company positioned its pricing strategy for airline tickets as one in which fares changed according to the availability of seats, passenger demand, and whether tickets were bought in advance (Lane 2019). The same company is credited with one of the first and most frequently quoted definitions of dynamic pricing: a tool to maximize revenue by "selling the right product to the right customer at the right price" (Weatherford and Bodily 1992); later, this definition was supplemented with the words "at the right time". Dynamic pricing is widely used in the housing rental market: in 2015, Airbnb, VRBO, and HomeAway, which offered short-term apartment rentals, switched to a flexible pricing strategy by introducing PriceMethod. The creators of the platform claim that their software helps homeowners increase profits by 20–40%: they are able to increase rental prices in the high season and lower tariffs when demand decreases (Larina 2015). Another well-known example of using the dynamic pricing concept in practice is Uber. Uber uses machine learning to build its pricing system. Based on machine learning algorithms, the company generates demand


forecasts in an existing market, taking into account the sensitivity of the system to external factors (Didur 2018). A convenient platform for applying the concept of dynamic pricing is online stores. Prices for goods and services sold through the Internet can change constantly according to the number and characteristics of customers, competitors' prices, and other factors, without any technical difficulties. Platforms like Walmart, Amazon, and Taobao currently sell millions and billions of products, and it is not feasible to set competitive prices for such a quantity of goods manually. Amazon has developed its own automatic pricing system that can change prices every 15 min. Amazon's pricing strategies have been studied empirically and significant pricing factors have been formulated (Chen et al. 2016). Currently, the concept of dynamic pricing is widely reflected in the areas of air travel, rail transportation, hotels, and tourism, as well as on e-commerce platforms, i.e., platforms where financial organizations and clients interact to conclude financial transactions. The task of the platform is to automate the interaction of the parties and ensure the convenience of operations. The authors propose to address the issue of transferring the principles of flexible pricing strategies to retail at physical sites. Summarizing the results of the practical application of dynamic pricing, we can say that, using this concept in practice, sellers can increase their income by selling their products at prices adapted to customer demand, market conditions, and the seller's offer at the time of the transaction (Dimicco et al. 2003). Since a high increase in profitability can be achieved by improving dynamic pricing, the problem of optimal dynamic pricing is of great intellectual, economic, and practical interest (Liu and Wang 2013).

3 The Mathematical Formulation of the Dynamic Pricing Problem

The problem of dynamic pricing can be formulated as a Markov decision process (MDP). An MDP is a process of consecutive decision-making under uncertainty for a fully observed environment with a Markov transition model and additional rewards. MDPs are useful for studying optimization problems solved using dynamic programming and reinforcement learning, and have been known since at least the 1950s. The idea is that at each moment the process is in a certain state s, and the agent (the decision-maker) can choose any action available in the current state. At the next moment the system responds by randomly moving to a new state s', giving the agent some reward Ra(s, s'). The probability that the system enters a new state depends on the action selected by the agent and is determined by the transition function Pa(s, s'). Thus, the new state s' depends on the current state of the system s and the action selected by the agent, but it does not depend on previous states and actions; in other words, the transitions satisfy the Markov property.
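A toy, self-contained sketch of this interaction loop (the state names, transition probabilities, and rewards are illustrative assumptions, not the authors' model) might look as follows:

```python
# Toy sketch of the MDP interaction: in state s the agent picks an action a,
# the system moves to s' with probability P_a(s, s') and returns a reward
# (here simplified to depend only on (s, a)).
import random

STATES = ["low_demand", "high_demand"]
ACTIONS = [0, 1]                                   # e.g., index of a low or a high price

P = {  # transition probabilities P_a(s, s'), toy values
    ("low_demand", 0): {"low_demand": 0.3, "high_demand": 0.7},
    ("low_demand", 1): {"low_demand": 0.6, "high_demand": 0.4},
    ("high_demand", 0): {"low_demand": 0.2, "high_demand": 0.8},
    ("high_demand", 1): {"low_demand": 0.5, "high_demand": 0.5},
}
R = {  # expected immediate rewards, toy values
    ("low_demand", 0): 1.0, ("low_demand", 1): 0.5,
    ("high_demand", 0): 1.5, ("high_demand", 1): 3.0,
}

def step(state, action):
    probs = P[(state, action)]
    next_state = random.choices(list(probs), weights=list(probs.values()))[0]
    return next_state, R[(state, action)]

state = random.choice(STATES)
for _ in range(5):                                  # the agent observes tuples (s, a, r, s')
    action = random.choice(ACTIONS)
    next_state, reward = step(state, action)
    print((state, action, reward, next_state))
    state = next_state
```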


The difference between Markov decision processes and Markov chains is that in the former there are actions (representing the agent's choices) and rewards (the motivation provided by the environment). When there is only one action for each state of the system and all rewards are the same, a Markov decision process reduces to a Markov chain. Mathematically, a Markov decision process is represented by the following four components (the theory of Markov decision processes does not require S or A to be finite, but the basic algorithms based on this approach assume that they are):

- S, a finite set of states of the environment;
- A, a finite set of alternative actions, where A_s is the set of actions available in state s;
- P_a(s, s') = P(s_{t+1} = s' | s_t = s, a_t = a), the probability that action a taken in state s at time t leads to state s' at time t + 1;
- r = R_a(s, s'), the expected immediate reward received after the transition from state s to state s' due to action a.

In this way, the agent receives a tuple (s, a, r, s') at each step. In this study, the agent sets prices. The system, or the environment with which the agent interacts, is the local market, which generates the demand for the products being sold. The set of environment states reflects the current situation in the local market. The set of agent actions consists of the prices for the next period (1 day, 8 h, 4 h, or 1 h). The reward function is the profit from selling the products over a certain period. There are several approaches to solving the dynamic pricing problem in the literature. The first of these is evolutionary algorithms. The essence of their application is to optimize the parameters of heuristic pricing strategies that are adaptive to changes in the market: the strategies under consideration are adaptive in making decisions, but their parameters are tuned offline. In practice (Ramezani et al. 2011), these algorithms have shown that optimized adaptive strategies can achieve good results under different changes in the market environment. Particle swarm optimization is also a good solution to the dynamic pricing problem due to the exploratory nature of this method (Mullen et al. 2006). In determining the appropriate balance between the exploitation and exploration phases, dynamic pricing is also associated with measurable related costs that must be considered. Because the environment changes dynamically over time, using only an exploitation strategy can lead to worse results over time. On the other hand, exploration can extract valuable information about the nature of the market, which can have a positive impact on future exploitation. From the consumer's point of view, exploration and exploitation are indistinguishable, because both are expressed through pricing. With sufficiently strong exploration, there are risks of opportunity costs, since prices deviate from the optimal price. Due to the unknown structure of the environment and of the behavior of agents in the market, the authors considered it appropriate to use reinforcement learning (RL) in the modeling. The term RL refers to a family of machine learning methods in which the agent interacts with a certain environment. RL does not require specific information about the environment but uses a «reinforcing signal» from it,


which indicates whether the agent's action was in the «right» direction or not. RL procedures are recognized as powerful and practical methods for solving MDPs. The dynamic pricing problem reduces to the problem of maximizing the profit of the retail store; in RL terminology, the agent must earn as much reward as possible. This is an optimization problem whose objective function depends on the time horizon of the agent's activity. If the agent is given a certain time horizon h (a model with a finite horizon) to achieve the goal, then the agent's gain is represented as (1):

$$
R = E\left[r_0 + r_1 + \ldots + r_h\right] = E\left[\sum_{t=0}^{h} r_t\right], \qquad (1)
$$

where r_t is the agent's reward at time t. If no time horizon is given to the agent and the task is to maximize the reward over all future time (a model with an infinite horizon), then the discounted reward can be represented as (2):

$$
R = E\left[r_0 + \gamma r_1 + \gamma^2 r_2 + \ldots + \gamma^k r_k + \ldots\right] = E\left[\sum_{t=0}^{\infty} \gamma^{t} r_t\right], \qquad (2)
$$

where γ is a discount factor, which represents the fact that the value of a reward decreases over time. Thus, in our task, at each step the agent tries to maximize the discounted reward R_t at time t while interacting with the environment (3):

$$
R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'} \to \max. \qquad (3)
$$
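A small sketch of these return definitions, with hypothetical per-period profits standing in for the rewards, is:

```python
# Sketch of the return definitions (1)-(3) for one realized reward sequence.
import numpy as np

def finite_horizon_return(rewards):
    """Eq. (1): R = r_0 + r_1 + ... + r_h, here for a single trajectory."""
    return float(np.sum(rewards))

def discounted_return(rewards, gamma=0.99, t=0):
    """Eqs. (2)-(3): R_t = sum_{t' >= t} gamma^(t' - t) * r_{t'}."""
    tail = np.asarray(rewards[t:], dtype=float)
    return float(np.dot(gamma ** np.arange(len(tail)), tail))

hourly_profit = [120.0, 80.0, 95.0, 60.0]            # hypothetical rewards r_t
print(finite_horizon_return(hourly_profit))          # 355.0
print(discounted_return(hourly_profit, gamma=0.9))   # 120 + 0.9*80 + 0.81*95 + 0.729*60
```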

Besides the reward function R, it is important to determine the function of the expected total reward that can be gained starting from a current state s with a certain action a. For an MDP this can be formalized in the following form (4):

$$
Q^{\pi}(s, a) = E_{\pi}\left[R_t \mid s_t = s, a_t = a\right] = E_{\pi}\left[\sum_{t'=t}^{T} \gamma^{t'-t} r_{t'} \,\middle|\, s_t = s, a_t = a\right], \qquad (4)
$$

where π is a policy, a function that returns a probability distribution over the set of actions A for a given current state s. Thus, the Q-function (Q : S × A → R) is a function that reflects the expected total reward if the agent, being in state s, performs action a. In the current formulation of the problem, the Q-function is what needs to be evaluated, because knowing it the agent in the current state s can choose the action a that maximizes Q(s, a). The optimum of the action-value function, Q*(s, a), is the maximum expected reward over all a ∈ A, s ∈ S (5):

$$
Q^{*}(s, a) = \max_{\pi} E\left[R_t \mid s_t = s, a_t = a, \pi\right]. \qquad (5)
$$

The optimal action-value function satisfies an important identity known as the Bellman equation (6):

$$
Q^{*}(s, a) = E_{s' \sim \mathcal{E}}\left[r + \gamma \max_{a'} Q^{*}(s', a') \,\middle|\, s, a\right]. \qquad (6)
$$

In other words, if we know the optimal value of the action-value function Q*(s', a') at the next step s' for all possible actions a', then the optimal strategy is to choose the action a' maximizing the expectation r + γ · Q*(s', a'). The basic idea of many RL algorithms is to estimate the action-value function, which is updated iteratively:

$$
Q_{i+1}(s, a) = E\left[r + \gamma \max_{a'} Q_i(s', a') \,\middle|\, s, a\right].
$$

Such algorithms converge to the optimal action-value function, Q_i → Q* as i → ∞. In practice, this basic approach is impractical (Nikolenko et al. 2020). The goal of RL is not to evaluate the Q-function but to determine the optimal policy for the agent's behavior, π*. Many modern approaches to learning are based on the principle of TD-learning, in which state values are learned from the estimates for subsequent states: at each transition, the value of the Q-function for the state–action pair we left is updated using the estimate for the state–action pair we arrive at. Nowadays, in practice, TD-learning turns out to be faster and more efficient than other strategies. This article uses off-policy TD-learning of the Q-function, also called Q-learning, in which the Bellman equation is solved for the maximum (7):

$$
Q(s_t, a_t) := Q(s_t, a_t) + \alpha\left[r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t)\right]. \qquad (7)
$$

Q-learning is a method that allows agents to learn how to act optimally in situations that can be represented as an MDP. It is an incremental form of dynamic programming that imposes limited computational requirements and works by incrementally improving its estimates. Given the rewards that the agent receives from the environment, the function Q is formed, which allows the agent not to choose a behavior strategy randomly but to use the experience of its previous interaction with the environment. One of the advantages of Q-learning is that it can compare the expected rewards of the available actions without having a model of the environment. The optimal policy is represented by formula (8):

$$
\pi^{*}(s) = \arg\max_{a} Q^{*}(s, a). \qquad (8)
$$
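A minimal tabular Q-learning sketch implementing update (7) and the greedy policy (8) is given below; the environment interface (reset/step) is an assumed convention, not the authors' implementation:

```python
# Tabular Q-learning sketch: update (7) and greedy policy (8).
import random
from collections import defaultdict

def q_learning(env, n_actions, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)                              # Q[(state, action)] -> value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < eps:                   # epsilon-greedy exploration
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in range(n_actions))
            # Eq. (7): Q(s,a) := Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    # Eq. (8): pi*(s) = argmax_a Q(s, a)
    return lambda s: max(range(n_actions), key=lambda a: Q[(s, a)])
```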


Sutton and Barto (2018) present the Q-learning algorithm for finding a policy that is close to optimal. Often in practice, as in our case, the number of state–action pairs (s, a) is huge (if not infinite), so it is not possible to train the function Q* on all possible inputs (s, a). In this case, the function Q(s, a) can be represented by a parametric machine learning model Q(s, a; θ) that accepts features of s and a as input and returns a real number (the agent's expected reward). The parameters θ of this function can be estimated using machine learning models. According to the TD-learning principle, every transition (s_t, a_t, r_{t+1}, s_{t+1}) is an input for learning the model, and each learning step looks like this: the agent performs an action a from a state s, transitioning to the state s' and receiving the reward r, and then takes one step of learning the function Q(s, a; θ) with the input (s, a) and the target output r + γ max_{a'} Q(s', a'; θ). Mnih et al. (2015) present the ideas that formed the basis of the Deep Q-Network (DQN) algorithm and also propose the algorithm itself. The authors believe that it is appropriate to use neural networks, namely deep Q-networks (DQN), as the machine learning model for approximating Q(s, a). The idea of using a Q-network reduces to the non-linear approximation (9):

$$
Q(s, a; \theta) \approx Q^{*}(s, a). \qquad (9)
$$

In practice, the approximation is performed by minimizing the loss function L_i(θ_i), which is represented in the form (10) for each iteration i:

$$
L_i(\theta_i) = E_{s, a \sim p(\cdot)}\left[\left(y_i - Q(s, a; \theta_i)\right)^2\right], \qquad (10)
$$

h i where yi ¼ s0~E r þ γ ∙ max Q* ðs0 , a0 ; θi–1 Þ is the target value for i-th iteration, p(s, a

a) is the probability distribution over the states s and the actions a, called the behavior distribution. Gradient respect to the weights (Q-learning algorithm) is represented as follows (11): ∇θi Li ðθi Þ ¼ s,a~pð.Þ; s0~E

h⎧ ⎫ i r þ γ ∙ max Q* ðs0 , a0 ; θi–1 Þ – Qðs, a; θi Þ ∇θi Qðs, a; θi Þ : a

ð11Þ In this study, we used DQN-architecture supposing that the action space is discrete. The action space in the context of this article is a finite set of prices that an agent can set. On the input layer, the neural network has several neurons equals to the number of elements in the state vector. The number of neurons in the output layer equals to the number of actions available for the agent. In our particular case, we feed the network with the states of the environment, the network output returns the predicted values of the Q-function for each action. The actions can be selected as follows:


1. Select points with a given step within the price range.
2. Choose sub-intervals of actions (i.e., actions are represented by non-intersecting segments of the overall price range) with a given step. For example, we divide the price range of the i-th item, (P_{i,min}, P_{i,max}), into K separate segments (12):

$$
P_{i,\min} + \frac{P_{i,\max} - P_{i,\min}}{K} \cdot (k - 1) \le p_{i,k} < P_{i,\min} + \frac{P_{i,\max} - P_{i,\min}}{K} \cdot k. \qquad (12)
$$

The final price can be chosen randomly from the final sub-interval (Liu et al. 2018). In e-commerce markets, the possibility of successfully applying the RL method for pricing was demonstrated by Raju et al. (2006), Chinthalapati et al. (2006), Jintian and Lei (2009). In this regard, the authors suggest that this tool can give good results when it is used in retail.
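As an illustration, the following sketch builds the discrete action space of Eq. (12) and a small Q-network head; PyTorch, the layer sizes, and the example price range (chosen roughly to match the range visible in Fig. 2) are assumptions, not the authors' architecture:

```python
# Sketch of the discrete action space from Eq. (12) and a small DQN head.
import numpy as np
import torch
import torch.nn as nn

def price_segments(p_min, p_max, K):
    """Split [p_min, p_max) into K non-intersecting sub-intervals (Eq. (12))."""
    edges = p_min + (p_max - p_min) / K * np.arange(K + 1)
    return list(zip(edges[:-1], edges[1:]))           # action k -> (lower_k, upper_k)

def sample_price(segment, rng=np.random):
    low, high = segment
    return rng.uniform(low, high)                     # final price drawn inside the segment

class DQN(nn.Module):
    """Maps a state vector to one Q-value per discrete price action."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state):
        return self.net(state)

segments = price_segments(37.0, 42.0, K=10)           # illustrative price range
q_net = DQN(state_dim=2, n_actions=len(segments))
state = torch.tensor([[3.0, 14.0]])                   # e.g., (day of week, hour)
action = int(q_net(state).argmax(dim=1))              # greedy choice of price segment
print(segments[action], sample_price(segments[action]))
```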

4 Demand Surface Reconstruction and Agent Training

When solving the problem of dynamic pricing under stochastic demand, a frequent approach is to use a simulated environment. For example, Dimicco et al. (2003) presented a market simulator that helps analyze agent pricing strategies in markets with limited time horizons and stochastic consumer demand. The simulator is based on a simulation approach and helps to understand agent pricing strategies; analyzing different pricing strategies under different market conditions helps the seller exploit fluctuations in customer demand, generate more revenue, and sell more units. IBM researchers have achieved significant results (Kephart et al. 2000) in market research by modeling information-product markets. They discovered some likely pitfalls of using dynamic pricing, such as price wars, and suggest that sellers use learning-curve simulations to study complex pricing strategies. One of the main advantages of using a simulated environment is the absence of monetary and reputational risks. The second reason for its use is the ability to train and test several models. At the same time, the use of a simulated environment imposes certain restrictions. If it were possible to create a model that predicts demand fairly well over the price range, then the solution to the problem would be to choose the price that gives the greatest value of the objective function; in this case, the problem reduces to an optimization problem over the model parameters. But demand is too complex to be captured by such specific models. Also, the historical data can be characterized as the aggregate demand over a period for a selected product at a specific price and time. This means that for each time step there is only one observable aggregate demand value, at a single fixed price, whereas for our case we want to be able to choose different prices at each time step. For these reasons, it was


Fig. 1 Sales time series of goods in one of the five stores at various gas stations (authors’ creation)

decided to build a simulation of the demand surface that is not based on a specific analytical model. PJSC Gazpromneft provided access to anonymized data describing the total hourly sales of specific related products in stores at similar gas stations. In addition to the sales data, historical prices for the goods in question and competitors' prices for goods from stores at each gas station were available. Data were provided for 6 months. For agent training and validation, there has to be an environment (product demand data) to interact with. In this study, it is proposed to use simulated demand curves at each moment as this medium, combined into a demand surface (the time series of hourly sales of related products during the week for one of five stores at various gas stations is shown in Fig. 1). It is assumed that each time period corresponds to the aggregated sales for this period. When modeling the environment, it is important to consider that prices in each store change over time (Fig. 2). Combining the dynamics of sales of goods for all five stores at the gas stations (the dynamics of sales at one of the five stores is shown in Fig. 1) and the changes in price (Fig. 2), we can obtain a sales curve in the space of time and price (Fig. 3). The next modeling step is to combine the sales dynamics of several similar stores (Fig. 4). It is assumed that for each moment of time there is a demand curve for the product as a function of price, and also that the demand curves at different times may differ from each other. Thus, from a sequence of demand curves, we can deduce the demand surface, which is a function of sales on time and price. It is also important to take into account


Fig. 2 Change in product prices from stores at five different gas stations over a long period (authors’ creation)


Fig. 3 Sales curve in space, depending on the time and price (authors’ creation)

that the demand surface has seasonality; in the context of this article, we consider daily and weekly seasonality. The next step is to reconstruct the demand surface using the following algorithm:

1. Selection of similar time series.
2. Elimination of the trend and of seasonality with a period longer than a week.
3. "Cutting" the time series into weeks.


Fig. 4 Dynamics of sales of goods from stores at five similar gas stations (authors’ creation)

Fig. 5 Weekly points placement on the demand surface (authors’ creation)

4. Placement of points on the surface with a length of 1 week (Fig. 5). 5. Interpolation of the demand surface (Fig. 6). 6. Returning of the removed trend and seasonality. As a result of the algorithm, we obtain a demand behavior pattern during the week, and depending on the price. Interpolation can also be performed on a certain part of randomly selected points to obtain slightly different surfaces, but with a single pattern of behavior. After reconstructing the demand surface for a given period for modeling the agent’s behavior, it is necessary to set the cost price of the product at each time point. For the experiment conducted in our work a 16-week-long demand surface was generated, the first 15 of which were used to train the agent, and the last one for a test. During the learning process, the agent on each step selects a price for the next period. As a result of the chosen action, the agent receives the demand for the

Fig. 6 Interpolation of demand surface (authors’ creation)

product for the previous step. As a reward function, it is proposed to use the profit earned by the agent for the past period according to formula (13):

r = (price − cost) · volume    (13)

where price is set by the agent, cost is the cost of the product, and volume is the amount of product sold over the past period at the given price. Since we know the demand surface, the surface of rewards can be calculated (Fig. 7). In Fig. 7, the horizontal axis shows product prices (rubles), the vertical axis shows the time period (hour), and the color shows the volume of demand for a given price and time. The agent was trained on the first 15 weeks of the generated surface. After that, its performance was evaluated on the last (test) week. In our case, the agent reached 94.38% of the optimal profit value, i.e., the maximum value that can be obtained on this demand surface. The path of the agent on the surface is close to the optimal path (Fig. 8). Besides that, the agent learns seasonal patterns, such as a nightly decrease in demand, which corresponds to intuition. This leads to the conclusion that the agent remembers the pattern of beneficial behavior (in terms of maximizing the reward function). When experimenting with training multiple agent models and combining the prices offered by them, it was possible to achieve 95.58% of the optimal profit value. It can be noticed that the average path of the agents in this case is smoother and, what is more important, closer to the optimal one (Fig. 9). As a result of modeling within the MDP framework, the authors concluded that the agent can learn close-to-optimal pricing strategies by making decisions and basing

Fig. 7 Heat maps of the demand surface (left) and awards (right) (authors’ creation)

Fig. 8 The optimal way of choosing the price according to the model and the way obtained when training the agent (authors’ creation)

only on information related to the time of making the decision (day of the week and hour).
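To make the training loop concrete, the following minimal sketch replaces the chapter’s DQN with a simpler tabular Q-learning agent over a discretized state (hour of the week) and action (price index). The toy demand surface, the price grid, and the unit cost are invented stand-ins for the reconstructed surface and real data; only the reward follows formula (13).

```python
import numpy as np

HOURS_PER_WEEK = 168
PRICES = np.linspace(37.0, 42.0, 11)   # discrete price grid in rubles (assumed values)
COST = 30.0                            # assumed unit cost

def demand(hour_of_week, price):
    """Toy demand surface: daily seasonality plus a downward-sloping price response."""
    daily = 100.0 + 60.0 * np.sin(2 * np.pi * (hour_of_week % 24) / 24)
    return max(0.0, daily - 8.0 * (price - PRICES[0]))

# Tabular Q-learning: state = hour of the week, action = index into PRICES.
Q = np.zeros((HOURS_PER_WEEK, len(PRICES)))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for step in range(15 * HOURS_PER_WEEK):          # 15 "training weeks"
    s = step % HOURS_PER_WEEK
    a = rng.integers(len(PRICES)) if rng.random() < eps else int(np.argmax(Q[s]))
    price = PRICES[a]
    volume = demand(s, price)
    r = (price - COST) * volume                  # reward from formula (13)
    s_next = (s + 1) % HOURS_PER_WEEK
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

greedy_prices = PRICES[np.argmax(Q, axis=1)]     # learned pricing policy for the test week
print(greedy_prices[:24])
```

In the chapter’s setting, the Q-table is replaced by a deep network (DQN), which generalizes over richer state vectors; the per-step price selection, the exploration, and the profit-based reward are the same.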

5 Conclusion The paper examined the possibility of applying the concept of dynamic pricing in traditional retail. The theoretical and practical aspects of applying a flexible price strategy were considered, and the most important factors affecting pricing were described. The relevance of the transition to more flexible pricing practices was assessed, and examples of successful practical implementation of this concept were given. The task of developing a dynamic pricing strategy was formulated as an MDP. The expediency of applying this approach was shown using the example of dynamic price regulation for one of the related products sold in stores at gas stations. When solving the problem for a specific product, an environment was created for demand modeling, a methodology for reconstructing the demand surface was proposed, and a DQN agent model was introduced.

Fig. 9 The optimal way of setting the price according to the model and the way obtained when training 50 agent models

The implemented model was trained and tested in a simulated environment. The developed DQN agent receives 94–95% of the maximum possible total reward. Thus, within the framework of MDP, an agent can learn almost optimal pricing strategies based only on information related to the decision time (day of the week and hour). Speaking about the practical significance of the study, the authors note that the results demonstrate the possibility of using a flexible price system for traditional retail goods. In solving the specific applied problem posed in the study, it proved possible to maximize profits from the sale of goods by modeling its price in a stochastic market. In the future, it is planned to define a more complex environment, including more variables, such as competitor prices, gas station geolocation, weather, and other factors. Also, only one specific product was considered in this study; in the future, it is planned to expand the simulation to a wider range of products, since their sales may depend on each other. In addition, it is planned to move from a discrete to a continuous space of actions using the DDPG architecture, because price is a continuous quantity. With a significant increase in the number of elements in the state vector, difficulties may arise in constructing the demand surface. In this case, it is proposed to pay attention to learning from historical data; recent studies show that such training can be very effective (Hester et al. 2018).
Acknowledgment This paper was prepared under the financial support of the Russian Science Foundation (Grant No. 18-18-00099).

References Aviv, Y., & Pazgal, A. (2002). Pricing of short life-cycle products through active learning. Under revision for Management Science. Carroll, W. J., & Grimes, R. C. (1995). Evolutionary change in product management: Experiences in the car rental industry. Interfaces, t., 25(5), 84–104. Carvalho, A. X., Puterman, M. L. (2003). Dynamic pricing and learning over short time horizons. Sander School of Business, UBC, Working Paper. Chen, L., Mislove, A., & Wilson, C. (2016). An empirical analysis of algorithmic pricing on amazon marketplace. In Proceedings of the 25th international conference on world wide web (pp. 1339–1349). International World Wide Web Conferences Steering Committee. Chinthalapati, V. L. R., Yadati, N., & Karumanchi, R. (2006). Learning dynamic prices in multiseller electronic retail markets with price-sensitive customers, stochastic demands, and inventory replenishments. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 36(1), 92–106. Ciancimino, A., et al. (1999). A mathematical programming approach for the solution of the railway yield management problem. Transportation Science., 33(2), 168–181. Deksnyte, I., Lydeka, Z. (2012). Dynamic pricing and its forming factors. International Journal of Business and Social Science, 3, no. 23. Didur, I. (2018). Dynamic pricing algorithms on Uber and Lyft. https://datarootlabs.com/uber-liftgett-surge-pricing-algorithms/

Dimicco, J. M., Maes, P., & Greenwald, A. (2003). Learning curve: A simulation-based approach to dynamic pricing. Electronic Commerce Research, 3(3–4), 245–276. Gupta, M., Ravikumar, K., & Kumar, M. (2002). Adaptive strategies for price markdown in a multiunit descending price auction: A comparative study. IEEE International Conference on Systems, Man and Cybernetics, 1, 373–378. Hayes, D. K., & Miller, A. A. (2011). Revenue Management for the Hospitality Industry. Hoboken, NJ: Wiley. Hester, T. et al. (2018). Deep q-learning from demonstrations. Thirty-second AAAI conference on artificial intelligence. Jintian, W., & Lei, Z. (2009). Application of reinforcement learning in dynamic pricing algorithms. In IEEE international conference on automation and logistics (pp. 419–423). Kephart, J. O., Hanson, J. E., & Greenwald, A. R. (2000). Dynamic pricing by software agents. Computer Networks, 32(6), 731–752. Lane, A. (2019). How Dynamic Pricing Is Revolutionizing Retail, The use of analytics to set prices in real time is becoming widespread. https://channels.theinnovationenterprise.com/articles/ how-dynamic-pricing-is-revolutionizing-retail Larina, E. (2015). Airbnb and Home Away switch to dynamic pricing. https://buyingbusinesstravel. com.ru/news/accomodation/3498-airbnb-i-homeaway-perekhodyat-na-dinamicheskoetsenoobrazovanie-/ Lawrence, R. D. (2003). A machine-learning approach to optimal bid pricing. Computational modeling and problem solving in the networked world, Springer, Boston, MA, pp. 97–118. Littlewood, K. (1972). Forecasting and control of passenger bookings. Airline Group International Federation of Operational Research Societies Proceedings, t. 12 (pp. 95–117). Liu, G., & Wang, H. (2013). An online sequential feed-forward network model for demand curve prediction. Journal of information & computational science, 10(10), 3063–3069. Liu, J., et al. (2018). Dynamic Pricing on E-commerce Platform with Deep Reinforcement Learning. International conference on learning representations. New Orleans, Louisiana, United States. Lu, R., Hong, S. H., & Zhang, X. (2018). A dynamic pricing demand response algorithm for smart grid: Reinforcement learning approach. Applied Energy, 220, 220–230. Mnih, V., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518 (7540), 529. Mullen, P. B. et al. (2006). Particle swarm optimization in dynamic pricing. IEEE International Conference on Evolutionary Computation, pp. 1232–1239. Nikolenko, S., Arhangelskaya, E., & Kadurin, A. (2020). Deep learning. Dive into the world of neural networks. Sankt-Petersburg: Publishing house Piter (in Russian). [Николенко С., Архангельская Е., Кадурин А. Глубокое обучение. Погружение в мир нейронных сетей. Санкт-Петербург, Издательство Питер, 2020 г. 480 с.]. Raju, C. V. L., Narahari, Y., & Ravikumar, K. (2006). Learning dynamic prices in electronic retail markets with customer segmentation. Annals of Operations Research, 143(1), 59–75. Ramezani, S., Bosman, P. A. N., & La Poutré, H. (2011). Adaptive strategies for dynamic pricing agents. Proceedings of the 2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology-Volume 02. IEEE Computer Society, pp. 323–328. Rothstein, M. (1971). An airline overbooking model. Transportation Science, 5(2), 180–192. Spedicato, G. A., Dutang, C., & Petrini, L. (2018). Machine learning methods to perform pricing optimization. A comparison with standard GLMs. Casualty actuarial society, 12(1), 69–89. 
Subrahmanyan, S., & Shoemaker, R. (1996). Developing optimal pricing and inventory policies for retailers who face uncertain demand. Journal of Retailing, 72(1), 7–30. Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press. Weatherford, L. R., & Bodily, S. E. (1992). A taxonomy and research overview of perishable-asset revenue management: Yield management, overbooking, and pricing. Operations Research, 40 (5), 831–844. Xia, L., Monroe, K. B., & Cox, J. L. (2004). The price is unfair! A conceptual framework of price fairness perceptions. Journal of Marketing, 68(4), 1–15.

Convergent Evolution of IT Security Paradigm: From Access Control to Cyber-Defense Dmitry P. Zegzhda

Abstract The information technology revolution (Industry 4.0) has led to the creation of the concept of cyber-physical systems. Digitalization has made information security problems highly urgent, since the efficiency of modern production now depends on targeted and random destructive impacts, which lead to hidden, remote, and difficult-to-detect effects that can cause catastrophic consequences. The information security problems of cyber-physical systems require the development of a new security methodology. In this paper, it is proposed to interpret the security of cyber-physical systems as the preservation of the sustainable functioning of the cyber-physical system under a targeted destructive impact on its information components. Cybersecurity methodology extends the objects of protection from data or key information to control systems such as telecommunications equipment and actuators in energy and manufacturing. This paper describes the process of transition from access control to cyber-defense for securing cyber-physical systems. Keywords Cyber-physical systems · Cyber security · Convergent evolution · Cyber-defense · Security control

1 Introduction

D. P. Zegzhda (*) Saint Petersburg Peter The Great Polytechnic University, Saint-Petersburg, Russia, e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Devezas et al. (eds.), The Economics of Digital Transformation, Studies on Entrepreneurship, Structural Change and Industrial Dynamics, https://doi.org/10.1007/978-3-030-59959-1_9

There is a clear tendency toward an increasing level of distribution and heterogeneity of computer systems in the evolution of information technology. The evolution of information security technologies is driven by the growing rate of systems development, the higher level of users’ technical literacy, the expansion of automated systems’ external perimeters, and so on (Gupta et al. 2009; Zhou and Jiang 2012). Modern cybersecurity technologies are required to address the new goals of attackers
who can now gain access to industrial, transport, and banking systems; to react quickly to security incidents; and to account for the network environment states that determine the size of the system phase space (Jasim et al. 2015). Here the author proposes an approach to identifying implicit parameters of the evolution of security technologies, namely the expansion of the pre-formed set of attacker capabilities and the narrowing of the set of assumptions about protection effectiveness. The proposed view allows classifying security technologies in terms of control theory. This classification makes it possible to use well-developed control theory for solving cybersecurity problems. Moreover, this approach can be used for predicting future steps of cybersecurity evolution. Consider the process of forming the current cybersecurity evolutionary step in a historical perspective.

2 Related Work The beginning of the 1970s (Anderson 1972), when the first access control models—Harrison–Ruzzo–Ullman (HRU) and Bell–LaPadula (BLP) (Zegzhda and Zegzhda 2001; Zegzhda and Pavlenko 2017; Bishop 2003)—were proposed, can be considered the starting point of security methodology. The main goal of these models was to build a formal base for proving the security of the system (Bishop 2003). These models assume low attacker creativity; thus, it is enough to prevent the implementation of a predefined set of actions. Firewalls and signature antiviruses work by this principle. Over time, new protected objects appeared that did not belong to state institutions. For these systems, commercial aspects of security became one of the most important. This led to the use of risk management methods (borrowed from economics) in the field of information security (Rattner 2010). In the 1990s–2000s, several standards were developed that describe risk management for information processing systems (including ISO 17799, ISO 27001, etc.) (Calder 2009). Risk management in practice comes down to a simple formula: RISK = P_incident · COST_FAILURE, where P_incident is the probability of an incident and COST_FAILURE is the cost of the possible consequences. However, IT risks depend only slightly on relationships between people and mainly on the “relationships” of technical entities. This significantly reduces the adequacy of risk management methods in the IT security field. The growing sophistication of attackers’ actions required more complex approaches. In the 2000s, posterior methods appeared (Corin et al. 2005). The main goal of these methods is to guarantee the identification of the attacker; thus, monitoring of the system states and of the protection systems was required. Historically, the first representatives of this approach were intrusion detection systems (IDS), which track an attack and store data about the attack source. Then DLP (Clayton 2009) and SIEM systems (Rothman 2010) were developed.

The development and accessibility of information technology have led to the appearance of new segments that were originally isolated and had no protection against external influences. A classic example is SCADA systems, which have multiple vulnerabilities that could not be exploited before they were connected to the Internet. In addition, more non-industrial cyber-physical systems are connecting to the Internet (transport, medical, and other networks). An important factor is a new class of threats—APT (Advanced Persistent Threat) (Lacey 2013). This term was introduced by the US Air Force in 2006 to describe complex attacks (often directed against a specific industry; for example, Stuxnet was aimed at nuclear power enterprises) implemented by using a combination of technical methods of penetration, social engineering, and industrial espionage. In response to the appearance of APT, the Cyber Threat Alliance was formed in 2014. This step once again reminds us of the need to develop a unified framework for building protection systems, both technological and theoretical. Many tools to protect against APT attacks include firewalls, next-generation firewalls, and intrusion detection systems from Palo Alto, Fortinet, Cisco, Blue Coat, and others. However, there are no fully automated tools that implement a complex approach to protecting against all types of cyber threats (expert analysis still plays the main role in cybersecurity), and the need for them is great, especially with the advent of Big Data.

3 Evolution of Security Technologies The historical review given above reveals some regularities, the knowledge and understanding of which will be useful in the development of new types of security techniques. Initially, security was based on a priori estimates that are unchangeable. Evolution or degradation of the protected system led to a change of behavior that would not be considered by the protection system due to the lack of a feedback loop. At the stage of introducing risk management methods into information security, the system became non-static due to the redefinition of the risk level. However, both listed methods remain a priori. The posterior methods have a greater level of adaptability—the monitoring results are interpreted by the defense system as feedback for reconfiguration. Moreover, in a posteriori defense systems, the non-marking of processes is taken into account. The most modern methods focus on the heterogeneity and distribution of protected objects. The evolution process of the system depends not only on its internal state but also on external influences. Therefore, a modern approach to information security should include methods aimed at predicting the system state depending on the dynamics of changes in internal and external factors. Thus, we come to the concept of a system with predictable behavior. Now, security tools should predict the system behavior and, in the future, predict the dynamics of external influences. Approaches to solving such problems are just beginning to appear; however, they solve only specialized subproblems (Stepanova 2012).

The main regularity is the transition from reactive to proactive defenses. Now security tools, especially with an increase in the criticality level of protected resources, must resist all potential threats. Such a requirement entails the necessity of monitoring the system, the defense system, and the communication with the environment. The revealed regularities allowed the authors to conclude that there are similarities between the described evolution of protection technologies and the evolution of systems observed in control theory. Despite the difference in the goals of control theory and information security theory, there is a similarity in the approaches used to achieve these goals, aimed at keeping the system in a certain set of states. Moreover, control theory has a longer history and a richer terminological base. Therefore, by comparing the evolution of both theories, the authors attempted to adapt the classification of methods and technologies of control theory for information security theory.

4 Classification of Security Technologies in Terms of Control Theory Criteria used to classify control methods include the following (Aström and Murray 2010; Kilian 2006): the existence of feedback, the existence of an adaptive control loop, and the possibility of predicting the system state in the feedback loop. Based on the listed criteria, it is possible to classify existing information security technologies as static, active, adaptive, and dynamic. In static protection technologies, control functions do not change in time and system operation is described by constant control parameters; there is no feedback, adaptive control, or predicting module. Active technology can be described as a static technology with a feedback loop. Adaptive technology additionally requires an adaptive control loop for changing the system’s constant parameters. In dynamic technology, the main goal is dynamic compensation of suspicious changes based on interacting with both the protected object and its infrastructure. A comparative analysis allows classifying information security technologies based on the degree of action and system state monitoring (see Table 1).

Table 1 Characteristics of building information security technologies (authors’ creation)

Technology | Monitoring of the system state | Monitoring of the defense system state | Sharing with environment | Advantage
Static     | None    | None    | Partial                                           | Adequacy to threats
Active     | Partial | None    | Incoming information analysis                     | Low false/positive rate
Adaptive   | Partial | Partial | Incoming information analysis                     | Sustainability management
Dynamic    | Full    | Full    | Incoming information and communications analysis  | Invariance of protection

Table 2 Projection of ACSs on the classification of information security technologies (authors’ creation)

Inf. security technology | Control system
Static   | Linear, discrete, stationary
Active   | Linear, non-stationary, discrete
Adaptive | Nonlinear, non-stationary, discrete
Dynamic  | Nonlinear, non-stationary, continuous

Table 3 Formal description of information security control systems (authors’ creation)

Inf. security technology | Formal description
Static   | C0·y(iTn) + C1·Δ¹y(iTn) + … + Cn·Δⁿy(iTn) = b0·x(iTn) + … + bm·Δᵐx(iTn), parameters of the system are time-independent
Active   | C0·y(iTn) + C1·Δ¹y(iTn) + … + Cn·Δⁿy(iTn) = b0·x(iTn) + … + bm·Δᵐx(iTn), parameters of the system are time-dependent
Adaptive | x(k + 1) = f(k, x(k), v(k)), y(k) = φ(k, x(k), v(k))
Dynamic  | ẋ = f(t, x, v), y(t) = φ(t, x, v)

In the most general form, the mathematical model of an automated control system (ACS) is an operator of the influence transformation. A system operator is a transformation that is formally denoted as y(t) = s(t)x(t), where s(t) is the system operator acting on a control input. ACSs can be classified into linear and nonlinear, stationary and non-stationary, continuous and discrete systems. For a linear system, the superposition principle is valid. For the system operator, this principle can be written as follows: y(t) = s(t)[x1(t) + x2(t)] = s(t)x1(t) + s(t)x2(t). All other systems are nonlinear. According to information signals, systems can be continuous, discrete, or hybrid (both continuous and discrete types of signals occur in the system). If the properties of the system change over time and the operator of the system s(t) depends on time, then the system is called non-stationary, and if the properties of the operator are constant, it is called stationary. The above classification of control systems can be projected onto the previously proposed classification of protection technologies (see Table 2). A key point of control theory is the creation of a mathematical model. A controlled process can be described using the following parameters: g—vector of input actions, f—disturbance input, u—vector of control actions, y—vector of controlled actions, CE—comparing element with deviation ε = g − y. The general form of control is determined by the formula u = U(g, ε, f, y, x). Thus, security management systems can be formally described as follows (see Table 3). The more refined form of the equations presented in Table 3 depends on the specific protection system. Since it was shown that effective cybersecurity solutions should implement dynamic technologies, suitable methods of control
theory would be, in particular, parametric methods based on nonlinear differential equations and NARMAX models (Chen and Billings 1989), as well as methods of harmonic and statistical linearization. The use of these methods will increase the level of autonomy, stability, and intelligence of cybersecurity systems operating under an expanded set of assumptions about the capabilities of an attacker.
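As a quick illustration of the linearity criterion discussed above, the sketch below applies an example discrete system operator (a 3-tap FIR filter, chosen arbitrarily for this illustration) to two signals and verifies the superposition property s[x1 + x2] = s[x1] + s[x2].

```python
import numpy as np

def s(x):
    """Example linear, stationary, discrete system operator: a 3-tap FIR filter."""
    h = np.array([0.5, 0.3, 0.2])                # impulse response (illustrative values)
    return np.convolve(x, h, mode="full")

rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=50), rng.normal(size=50)

lhs = s(x1 + x2)             # s[x1 + x2]
rhs = s(x1) + s(x2)          # s[x1] + s[x2]
print(np.allclose(lhs, rhs)) # True -> the operator satisfies superposition, i.e. it is linear
```

An operator whose output depends nonlinearly on the input would fail this check, which is what separates the nonlinear rows of Tables 2 and 3 from the linear ones.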

5 Digital Transformation of Control The rapid development of information technologies and their implementation in industrial systems lead to the digital transformation of traditional control systems. Control systems are no longer limited to the physical level, but also include the information level of control and data processing. With the development of digitalization, control systems change from simple parameter control to self-organizing systems. In this case, the control object becomes a distributed multicomponent system that implements the target function. Sustainability is determined by the correct implementation of the target function. The main goal of the control system is the planning of system resource allocation. At the same time, there is a tendency to build the control planner on the basis of bioinspired technologies (self-similarity, homeostasis, mutations, and evolution), as shown in different papers on ensuring cybersecurity monitoring and control (Pavlenko et al. 2017; Zegzhda et al. 2018; Lavrova et al. 2018; Zegzhda and Pavlenko 2017). There is an obvious connection between cyber sustainability, automatic control theory, and the concept of dynamic system stability (Bellman 2008). However, the control object in modern cyber-physical systems (CPS) differs from the control object in automatic control systems (ACS). Based on the systematization of CPS (Zegzhda and Pavlenko 2017), their common properties are a multi-element structure, redundancy, distribution, and a significant degree of virtualization. Consider a comparison of a typical ACS with feedback and a CPS. ACSs without feedback are excluded from consideration. Figure 1 shows the functional diagram of the ACS, where CS is the comparing summator, CD is the control device, CO is the control object, FL is the feedback loop, g is the input action bringing the system to the initial state x0(t), r(t) is the deviation indicator, u(t) is the control action, e(t) is the feedback signal, x(t) is the output value, and f is the perturbation. Each ACS is an element of the CPS, and g, f, x(t) may have both physical and informational semantics. A CPS is a superposition of similar elements connected by a set of information dependencies, with shared vectors G′ (initial information), X′(t) (output parameters), and F′ (vector of external disturbances). Controlling a large set of such elements is a difficult task. Thus, the new security task is controlling the full CPS, not every individual element (see Fig. 2).

Fig. 1 Digital transformation of control (authors’ creation)

Fig. 2 CPS with control of sustainability (authors’ creation)

Maintaining the sustainability of the CPS, expressed as Φ(G′) = X′(t) = X′opt(t) ± ΔX, can be implemented both by parametric regulation and by regulating the relationships between the elements and the architecture, using forecasting technologies. To maintain the sustainability of the CPS, methods for predicting the system state are needed. Implementing prediction systems requires creating a digital twin of the system. The control system must have a significant degree of intellectualization in order to make decisions in real time without operator intervention; the operator’s role is to monitor and identify emergencies. For this purpose, a homeostatic strategy can be applied.
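A minimal sketch of the sustainability condition X′(t) = X′opt(t) ± ΔX described above: the observed CPS output vector is compared against the optimal trajectory within a tolerance. All numbers are illustrative.

```python
import numpy as np

def is_sustainable(x_out, x_opt, delta_x):
    """True if every component of the CPS output stays within ±delta_x of the optimum."""
    return bool(np.all(np.abs(np.asarray(x_out) - np.asarray(x_opt)) <= delta_x))

x_opt = np.array([1.0, 0.5, 2.0])       # target output X'_opt(t), example values
x_out = np.array([1.02, 0.47, 1.95])    # observed output X'(t), example values
print(is_sustainable(x_out, x_opt, delta_x=0.1))   # True: the CPS stays within tolerance
```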

6 Digital Transformation of Control The main components of digital production are cyber-physical systems. The stable operation of such systems is essential to ensure the security of digital production. In accordance with the considered evolutionary changes, to secure a CPS it is necessary to take into account the following features:
– The existence of a transitive closure of all CPS components, which is both a source of security threats and a possibility to maintain the resilience of the CPS through duplication of its components.
– CPS self-regulation that is uncontrolled by a person can lead to loss of system reliability.
– Distribution of the functions to be implemented by the CPS among the set of its components.
– The possible existence of several conflicting control loops (Zegzhda and Pavlenko 2017).
Digital production systems are subject to both external and internal threats and need to be protected from all types of impacts, which cannot be specified completely. Therefore, security tools that are invariant to the form of threats are necessary.

6.1 Cyber-Resilience as a Development of a Dynamic Technology Security Paradigm

An analysis of existing trends in the development of security tools allows us to conclude that the protection paradigm based on static, active, adaptive, and dynamic protection technologies (Zegzhda and Zegzhda 2001) is changing. The idea of this classification of protection technologies is taken from control theory and includes the following features: feedback existence; adaptive control loop existence; the presence of system state predicting functions in the feedback loop. These features can be used to classify existing protection technologies, each class containing a certain combination of the listed loops. Describing the security state of the system is not enough to ensure its cybersecurity. It is necessary to be able to predict the behavior of the system in a given environment and to predict the dynamics of external impact on the system in the future. The development of the dynamic approach consists in the application of the concept of resilience (Zegzhda 2016). Maintaining resilience allows determining the set of acceptable CPS states and developing measures to keep the system within its boundaries. CPS resilience evaluation consists in evaluating the possibility of the system to be in a stable stationary state (Zegzhda and Pavlenko 2017). The resilience indicators given in (Zegzhda et al. 2017) are applicable for this purpose.

Self-similarity is a criterion for the correct functioning of the system and is determined by the fractal method based on the Hurst coefficient H = 1 − β/2 and the Fano factor Φ(n) = δ²(n)/m(n), where δ²(n) is the variance and m(n) is the mathematical expectation. To evaluate CPS security, it is also necessary to take into account its self-adaptation ability, which is determined by cross-correlation relationships within the system: r(t, τ) = E[c_n^T(t + τ)·X(t)], where τ is the time delay, X = {x_i} is the vector of the physical-level actuator state, and c_n is the command stream coming from the control level, as well as the following characterizations:

Controllability:

CTp(t) = ( Σ_{i=1}^{N} path_num_l_i ) / ( Σ_{i=1}^{N} path_num_i ), if Σ_{i=1}^{N} path_num_i > 0; CTp(t) = 0 otherwise,    (1)

where N is the total number of nodes, path_num_i is the number of paths between a node and the control center, and path_num_l_i is the number of paths between a node and the control center whose length is less than or equal to l. The length l is estimated in units of time Δt required to transmit a message between neighboring nodes.

Operating constancy: Op = ∂Res_sp/∂t, where Res_sp is the capacity of the required resources and t is time.

Scalability rate of the system: M = (σ_CTp(t), σ_Rmax, σ_Op, σ_T), where T is the time interval and Rmax is the level of critical destructive impacts. The metric reflects the ability of the network to correctly perform its functions both with a small and a large number of nodes.

The specific features of CPS and the use of resilience and self-adaptation evaluations require changes in the concept of security management of digital production systems. In accordance with these features, the CPS protection strategy should provide: maintaining the performance of the process under destructive impacts; the ability to adapt the parameters and structure of the CPS in order to maintain process resilience; and maintaining the ability to analyze the state of the environment and the CPS in order to perform advanced adaptation. The concept of homeostatic control corresponds to the identified requirements. This concept consists in the use of self-regulation mechanisms to maintain the system equilibrium. Thus, the ability of a system to confront threats lies in its ability to homeostasis.
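The Fano factor is the simplest of the listed indicators to compute; the sketch below estimates Φ(n) = δ²(n)/m(n) on event counts aggregated into windows of n samples. The synthetic Poisson trace and the window sizes are assumptions made for illustration: for a memoryless stream the factor stays near 1, while growth with n points to self-similar, bursty behavior.

```python
import numpy as np

def fano_factor(events, n):
    """Fano factor of event counts aggregated into non-overlapping windows of n samples."""
    usable = len(events) - len(events) % n
    counts = events[:usable].reshape(-1, n).sum(axis=1)
    return counts.var() / counts.mean()

rng = np.random.default_rng(2)
traffic = rng.poisson(lam=5.0, size=10_000)       # synthetic packet counts per time slot
for n in (10, 100, 1000):
    print(n, round(fano_factor(traffic, n), 3))   # stays near 1 for this memoryless trace
```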

6.2 Example of Cyber-Resilience Maintaining Using Homeostasis Control

The technology of homeostatic control includes a set of mechanisms that ensure the permanence of the system’s internal environment and the structural and functional resistance to external destructive impact. The mechanism of homeostasis is based on the inheritance of the general properties of the system: the property of regeneration—self-regulation aimed at maintaining the object’s features; and the property of immune reactivity, which aims at adapting the parameters and structure of the system to ensure resilience to external destructive impact. The main mechanisms for the implementation of homeostasis in CPS are the following (a minimal sketch of the first mechanism is given after the examples below):
1. Interchangeability. If a CPS component is not able to perform some function, it tries to delegate this function to the closest homogeneous component.
2. Isolation of a broken component. Any out-of-operation component is excluded from the CPS, and the work that was performed by this component is distributed among other nodes.
3. Changes in system behavior and operation algorithms. If the CPS components are not able to perform their target function, it is necessary to change the algorithms of their operation to eliminate downtime.
An example of the interchangeability mechanism: smart traffic lights provide traffic information; when a failure occurs, the traffic light determines the closest homogeneous component to perform its function. An example of the isolation mechanism: if one of the power plant dampers fails, the water disposal function is automatically redistributed between the nearest enabled dampers. An example of the behavior change mechanism: running out of consumable materials in a system of robots requires a robot to interrupt its current activities and replenish the materials.
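The sketch referenced above illustrates the interchangeability mechanism in its simplest form: when a component fails, its function is delegated to the closest operational component of the same type. The node representation and the one-dimensional distance are assumptions made only for this example.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    kind: str          # component type, e.g. "traffic_light"
    position: float    # simplified 1-D coordinate
    alive: bool = True

def delegate(failed: Node, nodes: list[Node]) -> Node | None:
    """Interchangeability: pick the closest operational homogeneous component."""
    candidates = [n for n in nodes if n.alive and n.kind == failed.kind and n is not failed]
    return min(candidates, key=lambda n: abs(n.position - failed.position), default=None)

lights = [Node("tl1", "traffic_light", 0.0),
          Node("tl2", "traffic_light", 1.0),
          Node("tl3", "traffic_light", 3.0)]
lights[0].alive = False
print(delegate(lights[0], lights).name)   # tl2 takes over tl1's function
```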

6.3 Example of CPS Resilience Evaluation

Let us describe the CPS as a heterogeneous network or graph G with a set of vertices V = {v1, . . ., vn}, a set of edges E = {e1, . . ., em}, and a set of vertex types B = {b1, b2, . . ., bk}. The graph G reflects the network structure of the cyber-physical system, the vertices V represent active components, and the edges E represent the connections existing between these components. The type of a vertex determines the particular cyber-physical object involved in the work of the system as a whole. Let us denote a working route as R = v_{t1} . . . v_{tγ}. The stable operation of the system directly depends on the existence of at least one working route. The more alternative working routes exist in the system, the less damage an attacker can cause by removing interconnections. Let us denote the number of working routes as countR
and the total number of routes in the system as countV. The resilience criterion of a computer network can then be represented as the relation S = 1 − countR/countV. The equality of this criterion to zero means that all the vertices of the system are involved in useful routes, so system resilience is equal to zero. The equality of this criterion to 1 means that the system does not perform any useful work; in this way it is resilient. The runtime of the route search algorithm for graphs of different dimensions and different ratios of the number of nodes to the number of edges was analyzed during the computational experiment. To generate random graphs, the Barabási–Albert model, which is an algorithm for generating scale-free networks (Zadorozhnyi and Yudin 2012), was chosen. The number of nodes N and the degree of each node e are used as the algorithm input. These parameters are used to construct edges with the preferential attachment method. The results of the experiment are presented in Fig. 3, which was produced using the Python programming language. To reduce the time spent, it is necessary to significantly increase the performance of the network management node. Intelligent controllers in software-defined networks can be used for this purpose.
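A sketch of the resilience evaluation on a Barabási–Albert graph. The reading of the metric here is an assumption: “working routes” are taken to be the simple paths between designated source and destination components, and countV is taken as the total number of simple paths between all ordered vertex pairs; networkx is used for graph generation and path enumeration, and the graph is kept small so the exhaustive search stays cheap.

```python
import itertools
import networkx as nx

def resilience(G, sources, targets):
    """S = 1 - count_R / count_V, with count_R the 'working' routes (source-to-target simple
    paths) and count_V all simple paths between ordered vertex pairs (illustrative reading)."""
    count_r = sum(1 for s in sources for t in targets if s != t
                  for _ in nx.all_simple_paths(G, s, t))
    count_v = sum(1 for s, t in itertools.permutations(G.nodes, 2)
                  for _ in nx.all_simple_paths(G, s, t))
    return 1 - count_r / count_v

G = nx.barabasi_albert_graph(n=10, m=2, seed=3)   # scale-free CPS-like topology
print(round(resilience(G, sources=[0, 1], targets=[8, 9]), 3))
```

For large graphs, exhaustive path enumeration quickly becomes infeasible, which is consistent with the runtime growth shown in Fig. 3; bounded-length path counting or sampling would be needed in practice.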

Fig. 3 Correspondence between graph size and algorithm runtime (authors’ creation)

7 Conclusion The functional structure of a cyber-physical system differs from that of traditional automated control systems, so the use of traditional security mechanisms is not enough. It is necessary to use a mechanism that will be able to ensure resilient management under destructive impacts. The proposed evolutionary approach to the presentation of CPS information security allowed us not only to identify the hidden laws and development trends of the considered area but also to conclude that the evolution of information security has led to the transformation of information security theory into one of the subclasses of control theory. For this reason, the well-developed methodological base of the corresponding methods of control theory can be adapted for solving cybersecurity problems. The main advantages of the proposed approach include the following: expansion of the cybersecurity theoretical base; increasing the protected system’s resilience to external harmful impact; the possibility of developing mechanisms aimed at protection from a wider range of harmful impacts; and the ability to determine changes that should be added to protection functions. It is proposed to ensure the cyber resilience of CPS using homeostatic control and three homeostasis mechanisms: interchangeability, isolation, and behavior change.

References Anderson, J. P. (1972). ‘Computer security technology planning study’. Electronic systems division, air force systems command. Bedford, MA: Hanscom Field. Aström, K. J., & Murray, R.,. M. (2010). Feedback systems: An introduction for scientists and engineers. Princeton University Press. Bellman, R. (2008). Stability theory of differential equations. Courier Corporation. Bishop, M. (2003). Computer security: Art and science. Boston: Addison Wesley. Calder, A. (2009). Information security based on ISO 27001/ISO 27002: A management guide–best practice. Hertogenbosch: Van Haren Publishing. Chen, S., & Billings, S. A. (1989). Representations of nonlinear system: The NARMAX model. International Journal of Control, 49(3), 1013–1032. Clayton, G. E. (2009). Data loss prevention and monitoring in the workplace: Best practice guide. Dallas, USA: Privacy Compliance Group, Inc. Corin, R., Etalle, S., den Hartog, J., Lenzini, G., & Staicu, I. (2005). A logic for auditing accountability in decentralized systems. In T. Dimitrakos & F. Martinelli (Eds.), Formal aspects in security and trust. IFIP WCC TC1 2004. IFIP International Federation for Information Processing (Vol. 173). Boston, MA: Springer. Gupta, A., Kuppili, P., Akella, A., & Barford, P. (2009). An empirical study of malware evolution. 2009 First International Communication Systems and Networks and Workshops. Jasim, O. K., Abbas, S., & Salem, A. B. M. (2015). Evolution of an emerging symmetric quantum cryptographic algorithm. Journal of Information Security, 6, 82–91. Kilian, C. T. (2006). Modern control technology: Components and systems. Thompson Delmar Learning. Lacey, D. (2013). Advanced persistent threats: How to manage the risk to your business. ISACA.

Lavrova, D. S., Alekseev, I. V., & Shtyrkina, A. A. (2018). Security analysis based on controlling dependences of network traffic parameters by wavelet transformation. Automatic Control and Computer Sciences, 52(8), 931–935. Pavlenko, E. Y., Yarmak, A. V., & Moskvin, D. A. (2017). Hierarchical approach to analyzing security breaches in information systems. Automatic Control and Computer Sciences, 51(8), 829–834. Rattner, D. (2010). Risk Assessments. Security management. Boston: Northeastern University. Rothman, M. (2010). Understanding and Selecting SIEM/Log Management. Securosis, Blog https://securosis.com/blog/understanding-and-selecting-siem-log-management-introduction Stepanova, T. (2012). Ensuring sustainability of multi-agent protection systems under the impact of distributed security threats. Ph. D. Thesis, SPbSTU. Zadorozhnyi, V. N., & Yudin, E. B. (2012). Structural properties of the scale-free Barabasi-Albert graph. Automation and Remote Control, 73(4), 702–716. Zegzhda, D. P. (2016). Sustainability as a criterion for information security in cyber-physical systems. Automatic Control and Computer Sciences, 50(8), 813–819. Zegzhda, P. D., Lavrova, D. S., & Shtyrkina, A. A. (2018). Multifractal analysis of internet backbone traffic for detecting denial of service attacks. Automatic Control and Computer Sciences, 52(8), 936–944. Zegzhda, D. P., & Pavlenko, E. Y. (2017). Cyber-physical system homeostatic security management. Automatic Control and Computer Sciences, 51(8), 805–816. Zegzhda, D. P., Poltavtseva, M. A., & Lavrova, D. S. (2017). Systematization and security assessment of cyber-physical systems. Automatic Control and Computer Sciences, 51(8), 835–843. Zegzhda, P. D., & Zegzhda, D. P. (2001). Secure systems design technology. In V. I. Gorodetski, V. A. Skormin, & L. J. Popyack (Eds.), Information Assurance in Computer Networks. MMM-ACNS 2001. Lecture notes in computer science (Vol. 2052). Berlin, Heidelberg: Springer. Zhou, Y., & Jiang, X. (2012). Dissecting android malware: Characterization and evolution. 2012 IEEE symposium on security and privacy (pp. 95–109).

AI Methods for Neutralizing Cyber Threats at Unmanned Vehicular Ecosystem of Smart City Maxim Kalinin, Vasiliy Krundyshev, and Dmitry Zegzhda

Abstract Due to the increased mobility of the infrastructural topology and the growing amount of data being processed, traditional protection methods become ineffective. Security weaknesses cause disruption of control, malfunction of transportation, smart building equipment failures, traffic jams, etc. New methods to ensure cyber security for new digital platforms are required. The article analyzes the existing approaches to ensuring cyber security in modern dynamic networks and reveals their main advantages and disadvantages. The authors propose the application of new AI methods (swarm algorithms and neural networks) to ensure the security of the network in the infrastructure of the intelligent transport system (ITS), a sample of new digital platforms. The paper assesses the possibility of their use for preventing cyber threats in the digital infrastructures of V2X. The results of experiments assessing the effectiveness of the proposed approach, obtained using supercomputer modeling, are given. The achievements are ready for application in other smart environments: IoT, IIoT, WSN, mesh networks, and M2M networks. Keywords Artificial intelligence · Cyber security · Digital platform · Intrusion detection

Selected portions of this chapter have appeared in V. Krundyshev, M. Kalinin and P. Zegzhda, “Artificial swarm algorithm for VANET protection against routing attacks,” 2018 IEEE Industrial Cyber-Physical Systems (ICPS), St. Petersburg, 2018, pp. 795-800, doi: 10.1109/ ICPHYS.2018.8390808. Used with permission. M. Kalinin (*) · V. Krundyshev · D. Zegzhda Peter the Great St.Petersburg Polytechnic University, Russia, St.Petersburg e-mail: [email protected]; [email protected]; [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Devezas et al. (eds.), The Economics of Digital Transformation, Studies on Entrepreneurship, Structural Change and Industrial Dynamics, https://doi.org/10.1007/978-3-030-59959-1_10

1 Introduction In 2011, at the Hannover Messe, Germany for the first time spoke out about the need to develop a strategy for the development of industry in accordance with the trends of the new industrial era. The founder and chairman of the World Economic Forum, Klaus Schwab, first introduced the term “Industry 4.0.” As the mainstream of the fourth industrial revolution, which is also called digital, he pointed out the convergence of technologies and the blurring of boundaries between the digital, biological, and physical spheres (Schwab 2016). The main value in the new economy is data; economic growth is now based not on capital and natural resources, but on innovation and human imagination. The basis of the digital economy is the most promising technologies that will, according to researchers at PricewaterhouseCoopers (PwC) (Klau 2017), have the most significant influence on various fields of activity in all countries of the world: artificial intelligence, augmented reality, virtual reality, unmanned vehicles, blockchain, the “Internet of Things,” 3D printing, and robotics. The German initiative was taken up with interest in other industrialized countries, where similar programs were adopted following the German example: in the UK—“High Value Manufacturing Catapult,” in France—“Alliance Industrie du Futur,” in Italy—“Fabbrica del Futuro,” in the Netherlands—“Smart Factory,” in Belgium—“Made Different,” in Russia—“National Technology Initiative,” in China—“Made in China 2025,” etc. (Zhou 2015). In 2014, AT&T, Cisco, GE, IBM, and Intel in the USA created the Industrial Internet Consortium (IIC), a nonprofit open membership group that seeks to remove barriers between different technologies to maximize access to big data and improve the integration of physical and digital environments. The goal of the consortium is to assist in connecting and optimizing resources, operations, and data in order to disclose the value of business in all industries (Zhou et al. 2018). The created ecosystem of companies, research centers, and government agencies is designed to stimulate the implementation of industrial Internet applications. Now these consortia are developing technical specifications and agreeing on IIoT development standards so that disparate smart devices can be combined into a common cyber environment. According to McKinsey’s forecast, by 2025 the total economic effect of the industrial Internet will be up to $11 trillion per year (McKinsey 2015). Huawei forecasts that by 2025 there will be 100 billion connected devices, used in every area of business and life (Huawei 2018). Moreover, the “connectivity” of objects with the Internet carries a number of risks, primarily related to safety and security. An example close to ordinary consumers: smart water and electric power meters, although they will allow more efficient use of resources, will hold all the information about their owners. So, the service company will know at what time people leave their houses and come back, whether parents leave their kids alone for the night, etc. Leaking such information can have the most unpleasant consequences—from breaking into apartments at a time when the owners are definitely not at home, to blackmailing the owners with information about them, to cyber invasion of the smart home
supply equipment, breaking down IT-controlled systems and vehicles through device-addressed DDoS. More serious threats are related to the fact that objects appeared on the Internet, unexpectedly for engineers, that were not designed with this possibility in mind, which means that they are vulnerable by nature. In particular, a computer on the ISS was infected with a virus written for gamers, and it later turned out that this was not the only case: computers in space periodically “catch” viruses, for example, from infected external memory devices. In industrial systems, business reliability and continuity are critical. Moreover, many industrial enterprises are critical infrastructure in terms of human security and environmental impact, so the consequences of a hacker intrusion or a failure in IIoT could potentially affect not only the economy but also physical security (Kaspersky 2019). To reduce these risks, researchers are considering modern methods of ensuring cyber security that would allow counteracting a wide range of new cyber threats. Many of them have focused on the application of artificial intelligence (AI) methods to identify hidden patterns and deviations: in medicine (Kulikowski 1980), in IoT (Poniszewska-Maranda and Kaczmarek 2015), in power systems (Laughton 1997), etc. The technologies of machine learning, data mining, and artificial intelligence support the effective processing of intensively arriving unstructured data of super-high volume and the extraction of knowledge from them. Several solutions have already shown their effectiveness in the task of detecting cyber threats, and the fact that the world’s largest companies spend impressive funds on research and development in this area only confirms the promise of the AI-based approach. The purpose of the paper is to assess the possibility of using new methods of artificial intelligence to neutralize cyber threats in a smart cyber-physical ecosystem, using the sample of unmanned vehicles in a smart city. The paper is organized in the following manner: Section 2 reviews the related works on the capabilities of artificial intelligence methods; Section 3 presents our hybrid approach to detecting a wide spectrum of new cyber threats; Section 4 presents the results of our experiments on detecting network routing anomalies and FDI in the smart infrastructure of connected vehicles; Section 5 concludes our contribution.

2 The Related Works An analysis of different research works on methods of protecting modern digital platforms, including the unmanned transport ecosystem, was carried out to generalize modern trends and achievements. In their work, the authors (Singh and Nand 2016) note the serious danger of attacks aimed at dynamic network routing. The authors (Smith and Schuchard 2018) confirm this thesis by citing statistics of successful cyber attacks. Finally, in their work, the authors (Kamble et al. 2017) note the serious consequences of network routing threats. A large number of studies are devoted to protection against a new
type of security threats in smart ecosystems targeted at false data injection (FDI). According to the authors, this problem is equally relevant for large smart infrastructures (digital factories, unmanned intelligent transport networks, smart cities) (Xie et al. 2010) and for small smart home networks and digital sensor networks (Kallitsis et al. 2015). For example, by injecting false data into a smart home system, the heating, power supply, and smart meters can be disrupted. Based on this analysis, it was concluded that the most acute problem is the detection of security threats aimed at network routing and FDI. These threats are simple to realize, their consequences are critical in the sensitive infrastructure of modern digital ecosystems, and there are no effective methods of counteracting them. Next, we need to determine the methods that can be used to detect the cyber threats under consideration. In various research spheres, machine learning methods have proven themselves well: Bayesian networks, support vector machines, artificial neural networks, swarm algorithms, etc. In (Yu et al. 2016), the authors proposed an approach to fault diagnosis based on an improved k-nearest neighbor algorithm. The mathematical structure at the heart of the method is the calculation of a distance function. The fault detection rate is 81% (it is utilized to detect malfunctions of an oil-filled power transformer). (Andre et al. 2013) proposed a combination of support vector machine (SVM) and k-nearest neighbors for the detection of functional failures. Construction of a hyperplane for a set of points is applied, and the fault detection rate of this method is 75.3% (it is utilized to detect failures in machines). SVM is used for detection, and the kNN method is attached to improve it. (Chen et al. 2011) proposed an SVM-based method for anomaly detection in a thermal power plant. It searches for a hyperplane for a set of points, and the fault detection rate is 91.93% (it is applied for the detection of failures at a thermal power plant). (Yu et al. 2018) explored the possibility of online false data injection detection with wavelet transform and a deep neural network. The detection accuracy is around 90% on new unknown data not used in training. This result demonstrates the satisfactory generalization capability of different AI methods. Based on an analysis of existing research on detecting FDI, it was decided to use deep neural networks. Deep networks cope better with increasing amounts of data than classic machine learning algorithms. For structural security issues (e.g., network routing anomalies), the mathematical and machine learning methods mentioned above are less suitable, but artificial intelligence gives us other opportunities. The first area is focused on the application of intelligence for ad hoc network security assurance. (Das et al. 2011) considered security issues in MANET; their algorithm is directed at analyzing and improving the protection of the Ad hoc On-Demand Distance Vector (AODV) protocol, the popular routing protocol for MANET. However, that approach is effective only with a static network topology: when a new node connects and a new route is opened, the packet delivery delay greatly increases. (Al-Shurman et al. 2004) introduced an approach based on the on-demand distance vector (AODV) protocol to handle the black hole attack. It is used to find more than one route to the destination and also to exploit the packet sequence number included in any packet header. This solution provides a quick and reliable way to determine a suspicious reply. No additional
overhead is added to the channel because the sequence number itself is included in every packet in the base protocol. However, it has an issue with group attacks. The second group of research consists in complex protection from network attacks and improvement of the quality of packet routing using artificial swarms. The modified routing protocol improves the throughput and decreases the packet loss. (Dengiz et al. 2011) proposed a particle swarm optimization (PSO) algorithm that uses the maximum flow objective to choose optimal locations of the agents during each time step of network operation. Testing results have shown that the proposed approach is effective in improving the connectivity of MANET. However, this method is not as effective for highly dynamic topologies (e.g., an unmanned vehicular ecosystem or flying networks) because of node movement and constantly reconnecting links. (Sirola et al. 2014) analyzed the existing network routing threats in moving networks and gave brief theoretical recommendations on countering the threats considered. In our work, we followed these recommendations and offer a new method of protection against various network routing threats with the application of swarm intelligence.

3 Hybrid AI-Based Detection of Security Threats It is proposed to apply artificial swarm intelligence to detect attacks on dynamic network routing, and deep learning networks to identify false data injections. Together they form the hybrid AI-based detection scheme shown in Fig. 1.

3.1 An Artificial Swarm Algorithm

A swarm algorithm is proposed for detecting attacks aimed at dynamic routing in the unmanned vehicular ecosystem of a smart city. The algorithm is based on intelligent water drops (IWD) (Hamed 2007) and uses a trust model. IWD has the following advantages: it is used in applications that can adapt to changes and has

Fig. 1 Hybrid IDS in unmanned vehicular ecosystem (authors’ creation)

a high speed of operation, which is very important in dynamically moving networks where nodes move at high speeds and the network topology is extremely unstable. Trust is the key element in creating a trusted vehicular environment that promotes security in vehicular networks. A few trust models have been introduced to enforce honest information sharing between communicating nodes (Zhang 2011). Current trust management schemes for the unmanned vehicular ecosystem establish trust by voting on the reports received. This is time-consuming for time-critical applications and not practical in real life, especially in dense areas. All known algorithms are applicable to the case when the inter-car network has a fixed topology and all cars are trusted. If cars become intruders in this trusted topology, no assumptions about intruders are made. To solve this issue and improve the existing approach to building the trust model for the unmanned vehicular ecosystem, we combined it with artificial swarm intelligence. The algorithm is designed for moving vehicular networks of up to 100,000 nodes with node speeds of 0–140 km/h; it is suitable for both urban and highway scenarios and is adapted to the case when a car connected to other cars becomes an intruder. The algorithm has two types of parameters: static and dynamic. Static parameters are constant during the operation of the IWD algorithm. Dynamic parameters are reinitialized after each iteration of the IWD algorithm. The algorithm consists of three phases: (1) build a route, (2) maintain existing routes, and (3) update the trust estimation to detect anomalies in routing and build a new route bypassing the suspicious node, which are described in detail below:
1 To transfer the message M from node S to node D, a route from S to D is built according to the following algorithm:
1.1 S generates an RFIWD packet.
1.2 Whenever a node X receives an RFIWD/RBIWD packet, it sends it to the most trusted one-hop neighbor on the way to D.
1.3 S broadcasts a Trust Request packet to determine a trust estimate for its nearest neighbors and collects responses from the nodes.
1.4 For all one-hop neighbors with a positive level of trust, multicast RFIWD is used to determine the best route. If there are no such nodes, then the next hop is determined by the highest level of trust.
1.5 On the way, the packet collects the following metrics: delay, bandwidth, and number of replays.
1.6 If the next-hop node is D, then it creates an RBIWD packet in which it stores the statistics collected by the packet, and sends it to S.
1.7 On the return path, RBIWD computes the value of Soil and updates the trust estimate on the nodes.
1.8 As soon as RBIWD reaches the source, node S forms a direct estimate of the trust to each one-hop neighbor according to formula (1):

Trust(0) = Σ_{i=1}^{M} w_i·C_i + (1/N)·Σ_{j=1}^{N} IndirectTrust_j    (1)


where C_i is a pheromone (bandwidth, delay, etc.), w_i is the pheromone weight, M is the total number of pheromones, and N is the number of neighbors.
   1.9 If RBIWD does not come from a one-hop neighbor, then the reply exponent increases and the trust is recalculated.
2. To maintain the existing routes of node S and regularly update the confidence estimate, the following algorithm is applied:
   2.1 After a fixed time interval expires, proactive routing is initiated.
   2.2 On node S, PFIWD packets are created and sent to the destination nodes according to the routing table.
   2.3 The PFIWD packet collects the same network parameters as RFIWD and updates the Soil and Velocity values on all visited nodes. Additionally, it saves the node IDs to restore the backward route.
   2.4 The next-hop node is defined by Velocity. If there is no such node, a message about the loss of the route is sent to S.
   2.5 If the next-hop node is the destination node, then a PBIWD packet is generated and sent back along the saved backward route.
   2.6 On the return path, PBIWD starts the routing update timers on each node.
   2.7 If the PBIWD packet does not arrive, the corresponding entry is deleted from the routing table.
3. The confidence estimation on node A for all one-hop neighbors B1, . . ., BN, where N is the number of such neighbors, and the subsequent detection of security threats are performed according to the following algorithm:
   3.1 A table of confidence factors for node A is constructed for all processed parameters K = [delay, reply, bandwidth, ∑].
   3.2 For each Bi, where i = 1, . . ., N, the confidence coefficient of A to Bi is calculated using formula (2), where w is the weighting coefficient of the parameter and K_j is the parameter:

Trust(t) = Trust(t − 1) + w · K_j            (2)

   3.3 If the trust factor of node A to node Bi exceeds the threshold value, then the algorithm finishes its work.
   3.4 If the trust factor of node A to node Bi does not exceed the threshold value, then A generates a message about suspicious activity of Bi and sends it to the nearest central node (BS or RSU) or, in the absence thereof, to all Bj with j ≠ i.
   3.5 The central node or node Bj that received a warning message from node A lowers its confidence rating of Bi.

Advantages of this approach:
• A completely decentralized approach, and hence independence from the ITS (intelligent transportation systems) architecture.
• Independence from the selected routing protocol.
• Scalability.


• Simple expansion of the detection pheromones by adding additionally collected statistics.
• Localization of the intruder nodes.
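To make the trust computation concrete, the following minimal Python sketch illustrates the direct trust estimate of formula (1) and the incremental update of formula (2). The pheromone names, weights, and threshold value are illustrative assumptions, not parameters fixed by the algorithm above.

```python
# Illustrative sketch of the trust model from formulas (1) and (2).
# Pheromone names, weights, and the threshold are assumed values.

PHEROMONE_WEIGHTS = {"bandwidth": 0.5, "delay": 0.3, "reply": 0.2}  # w_i
TRUST_THRESHOLD = 0.4  # assumed detection threshold


def direct_trust(pheromones, indirect_trusts):
    """Formula (1): weighted pheromones plus averaged indirect trust."""
    weighted = sum(PHEROMONE_WEIGHTS[name] * value
                   for name, value in pheromones.items())
    if indirect_trusts:
        weighted += sum(indirect_trusts) / len(indirect_trusts)
    return weighted


def update_trust(previous_trust, parameter_value, weight):
    """Formula (2): Trust(t) = Trust(t-1) + w * K_j."""
    return previous_trust + weight * parameter_value


# Example: node S estimates trust to a one-hop neighbor.
pheromones = {"bandwidth": 0.8, "delay": -0.2, "reply": 0.1}   # C_i collected by RBIWD
indirect = [0.6, 0.4, 0.5]                                      # reports of other neighbors
trust = direct_trust(pheromones, indirect)

# Phase 3: incremental update and anomaly check.
trust = update_trust(trust, parameter_value=-0.5, weight=0.3)   # e.g. replay anomaly observed
if trust <= TRUST_THRESHOLD:
    print("Suspicious activity: warn the central node (BS/RSU) or all neighbors")
```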

3.2 A Deep Neural Network

Deep neural networks are networks that have more than two hidden layers. A neural network is capable of extracting from a large number of input parameters the knowledge needed to perform its intended task. A feature of deep neural networks is the presence of a subsampling layer. To train the neural network, the backpropagation (error back-propagation) method was chosen. The main idea of this method is to propagate error signals from the network outputs to its inputs, in the direction opposite to the forward propagation of signals in normal operation. In order to use the backpropagation method, the transfer function of the neurons must be differentiable. The method is a modification of the classical gradient descent method. An ELU is used as the activation function. The method has two advantages that make it very popular:
• It is easy to compute locally.
• It implements stochastic gradient descent in the space of weights.
The backpropagation algorithm uses the so-called locality constraint: the calculations in a neuron that other neurons act on are kept separate. This property increases fault tolerance and allows the efficient use of parallel architectures.

Training algorithm. Notation: S is the number of steps; w_{i,j} are the weights; Children(j) are the nodes, not at the last level, to which node j has outgoing connections; D is the number of inputs; t is the training example output; o is the node output.
1. Initialize {w_{i,j}} with small random values and set {Δw_{i,j}} = 0.
2. Repeat S times:
   (a) Submit {x^d_i} to the input of the network and compute the outputs o_i of each node.
   (b) For all output nodes k: δ_k = o_k (1 − o_k)(t_k − o_k).
   (c) Proceed level by level, starting from the penultimate level.
   (d) For each node j of the current level, calculate δ_j = o_j (1 − o_j) Σ_{k ∈ Children(j)} δ_k w_{j,k}.
   (e) For each edge of the network:


Δw_{i,j}(n) = α Δw_{i,j}(n − 1) + (1 − α) η δ_j o_i
w_{i,j}(n) = w_{i,j}(n − 1) + Δw_{i,j}(n)
3. Return the values w_{i,j}.

Adamax optimization is implemented, that is, the weights are corrected after each training example and the algorithm thus "moves" in the multidimensional space of weights. To "get" to the minimum of the error, it is necessary to "move" in the direction opposite to the gradient, that is, based on each group of correct answers, to add Δw_{i,j} to each weight. The implemented neural network solves the problem of determining the presence of FDI. This task comes down to binary classification: the set of network states is divided into two subsets, the normal state and the attack state. Receiving an arbitrary state of the network, represented by a set of data from sensors transmitted in the network over a finite period of time, the neural network returns the class to which this state belongs, thereby answering the question of whether an attack is performed in a given period of time. This is a classification problem, since the discrete function of the network state, which takes a binary value, is approximated.

Detection algorithm:
Input: sensor and network data.
Output: the fact of a security anomaly, or the fact of its absence.
1. Calculation of statistics, recording the result in the vector X = (x1, x2, x3, x4, x5, x6, x7, x8, x9, x10), where x1, . . ., x5 are extracted from .routes files and x6, . . ., x10 from .sensor files.
2. The vector X is fed to the input layer of the deep learning neural network.
3. Multiplication of X by the weights w of the neurons: H_inp = ∑ x_i w_i.
4. The value H_inp is passed to the activation function f_act: H_outp = f_act(H_inp), where the activation function is the hyperbolic tangent.
5. The values of H_outp are applied to the next layer; go to step 3. If H_outp is received from the output layer of the neural network, go to step 6.
6. H_outp ∈ {0, 1, 2, 3, 4}, where H_outp = 0 means no anomaly and H_outp = 1 means FDI.
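For illustration, a minimal NumPy sketch of one backpropagation step following the delta rules and the momentum weight update above is given below. It assumes a small fully connected network with sigmoid units (so that the derivative matches the o(1 − o) factor); the layer sizes, learning rate η, and momentum coefficient α are assumed values, not parameters reported in the chapter.

```python
# Minimal sketch of one backpropagation step per the formulas above.
# Layer sizes, eta (learning rate), and alpha (momentum) are assumed values.
import numpy as np

rng = np.random.default_rng(0)
sizes = [10, 16, 1]                       # 10 input statistics -> hidden -> 1 output
W = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
dW_prev = [np.zeros_like(w) for w in W]   # previous weight increments
eta, alpha = 0.05, 0.9                    # assumed learning rate and momentum


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def train_step(x, t):
    # Forward pass: store the output o of every layer.
    outputs = [x]
    for w in W:
        outputs.append(sigmoid(outputs[-1] @ w))

    # Output layer: delta_k = o_k (1 - o_k)(t_k - o_k).
    delta = outputs[-1] * (1 - outputs[-1]) * (t - outputs[-1])

    # Backward pass, starting from the penultimate level.
    for layer in reversed(range(len(W))):
        o_prev = outputs[layer]
        # delta_j = o_j (1 - o_j) * sum_k delta_k * w_{j,k} (for the level below).
        delta_prev = o_prev * (1 - o_prev) * (W[layer] @ delta)
        # Momentum update: dW(n) = alpha*dW(n-1) + (1-alpha)*eta*delta_j*o_i.
        dW = alpha * dW_prev[layer] + (1 - alpha) * eta * np.outer(o_prev, delta)
        W[layer] += dW
        dW_prev[layer] = dW
        delta = delta_prev


# Example: one training example (statistics vector and a "no FDI" label).
train_step(x=rng.random(10), t=np.array([0.0]))
```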

4 The Experiments and Results

To demonstrate the effectiveness of the developed AI algorithms, Network Simulator (NS-3) v. 3.25 has been used because of its open-source nature, simplicity, and free accessibility. NS-3 allows setting up wireless dynamic routing of the network and adjusting the node speed and transmitter power. To implement the developed neural network model, the Keras library with the TensorFlow backend was used. Keras is an open neural network library written in Python. It is an add-on for the frameworks Deeplearning4j, TensorFlow, and Theano. It is aimed at rapid work with deep learning networks, while being designed to be compact, modular, and


expandable. This library contains numerous implementations of the widely used building blocks of neural networks, such as layers, target and transfer functions, optimizers, and many tools to simplify working with images and text.

4.1 Network Routing Anomalies Detection

For the simulation, the Black Hole attack was selected, in which a malicious node can destroy all the packets that it receives for subsequent transmission. This type of security threat is especially effective when the node is also a collection point. This combination may be the reason for stopping the transfer of a large amount of data. The developed algorithm was implemented as a distributed IDS. The simulation parameters are presented in Table 1. The purpose of the experiments is to evaluate the effectiveness of the developed swarm algorithm in protecting the unmanned vehicular ecosystem against routing anomalies by the selected criterion—the packet delivery ratio. Figure 2 shows the proportion of received packets (RP) and lost packets (LP) throughout the entire simulation without attack; in total, one million packets were sent. The location of nodes and the sending of messages occur randomly. Then a series of experiments was carried out under the influence of a threat. From the very beginning of the simulation, malicious nodes start sending fake information and discarding data packets. The experiments were conducted without implemented protection and with the IDS based on swarm intelligence. Figures 3, 4, and 5 show the results for 1, 10, and 100 malicious nodes, respectively. The total number of nodes remained unchanged—100,000. Even with the presence of one intruder in the network, the number of successfully delivered packets decreases by 50,000. If there are 100 intruders in the network, the number of lost packets begins to exceed the number of packets received. When simulating with the developed intelligent IDS, it was possible to maintain routing at an acceptable level.

Table 1 Simulation parameters

Parameter                Value
NS version               3.25
Channel                  Wireless
Routing protocol         AODV
Simulation area          5000 m × 5000 m
Number of nodes          10,000
Number of malicious      1–100
Maximum nodes velocity   30 m/s
Network traffic          CBR
Transport protocol       UDP
Data rate                250 kbps
Packet size              1024 bytes
Simulation time          600 sec


Fig. 2 Normal proportion of RP and LP (authors’ creation)

Fig. 3 Proportion of RP and LP with 1 malicious (authors’ creation)

The developed algorithm quickly finds malicious nodes thanks to the developed trust mechanism and builds alternative routes, thereby isolating intruders.


Fig. 4 Proportion of RP and LP with 10 malicious (authors’ creation)

Fig. 5 Proportion of RP and LP with 100 malicious (authors’ creation)

4.2 FDI Detection

After the test launch of the deep learning network, it became clear that the parameter that most affects the learning outcome is the number of layers. With an increase in this parameter (and with a large scatter of the others), high accuracy of the constructed neural network (>90%) was always achieved. The best


Fig. 6 FDI detection accuracy (authors’ creation)

number of layers is 5. With a fixed number of layers, the result began to depend strongly on another parameter—the number of epochs. This also affected the rate of convergence of the neural network, so the number of epochs is best set independently. With the previously obtained number of layers, we tested the neural network with different numbers of epochs to select the optimal value. The best number of epochs is 10. There are various ways to optimize a neural network in the Keras library, among them SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, and Nadam. Figure 6 shows the results of FDI detection using the various optimization methods. As can be seen from the graph, Adamax optimization turned out to be the most suitable; it showed the highest result of 94.93%.
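A minimal Keras sketch of a configuration consistent with the description above (ten input statistics, five hidden layers, hyperbolic tangent activations, the Adamax optimizer, ten epochs) might look as follows; the hidden layer widths and batch size are assumptions, not values reported in the experiments.

```python
# Sketch of an FDI detector in Keras with the configuration described above.
# Hidden layer widths and batch size are assumed; real training data would be
# the statistics vectors extracted from the .routes/.sensor files with labels.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(
    [keras.Input(shape=(10,))]                                   # x1..x10 statistics vector
    + [layers.Dense(32, activation="tanh") for _ in range(5)]    # 5 hidden layers
    + [layers.Dense(1, activation="sigmoid")]                    # 1 = FDI, 0 = no anomaly
)
model.compile(optimizer=keras.optimizers.Adamax(),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Placeholder data just to show the training call; the experiments use
# statistics collected from the NS-3 simulation instead.
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=(1000, 1))
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
```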

5 Conclusion

The paper discusses the task of detecting cyber threats in new digital platforms. The purpose of the study is to detect and identify new types of security threats specific to dynamic routing anomalies and FDI threats in the unmanned vehicular ecosystem of a smart city with AI-based methods. The work reviews various methods for detecting such threats, their basic features, and their drawbacks. To detect security threats on network routing, it was proposed to use an artificial swarm algorithm, and deep neural networks to detect FDI. The swarm algorithm is based on intelligent water drops and the trust model. The efficiency of our approach was confirmed by experimental studies. Using a network simulator, a dynamic network of an unmanned vehicular ecosystem was built and a Black Hole attack was modeled. The developed artificial swarm algorithm quickly finds malicious nodes thanks to the developed trust mechanism and builds alternative routes, thereby isolating intruders; due to this, a larger number of sent packets reaches the recipients. Thanks to the experiments, the best configuration of


the developed neural network was determined. The proposed neural network method provides 94% detection accuracy. These results suggest that this method can be applied to detect FDI in different smart infrastructures of IIoT, smart buildings, WSN, VANET, smart cities, and other kinds of sensitive environments of cyberspace. For future work, it is planned to continue to explore the possibility of using artificial intelligence methods to ensure ubiquitous cyber security in new digital environments. Acknowledgments The reported study was funded by RFBR according to the research project № 18-29-03102.

References

Al-Shurman, M., Yoo, S., & Park, S. (2004). Black hole attack in mobile ad hoc networks. Proceedings of the 42nd Annual Southeast Regional Conference, ACM, USA, April 2004, pp. 96–97.
Andre, A., Beltrame, E., & Wainer, J. (2013). A combination of support vector machine and k-nearest neighbors for machine fault detection. https://doi.org/10.1080/08839514.2013.747370.
Chen, K. Y., Chen, L. S., Chen, M. C., & Lee, C. L. (2011). Using SVM based method for equipment fault detection in a thermal power plant. Computers in Industry, 62(1), 42–50.
Das, R., Purkayastha, B., & Das, P. (2011). Security measures for black hole attack in MANET: An approach. International Journal of Engineering Science and Technology, 3(4), 2832–2838.
Dengiz, O., Konak, A., & Smith, A. (2011). Connectivity management in mobile ad hoc networks using particle swarm optimization. Ad Hoc Networks, 9(7), 1312–1326.
Hamed, S. (2007). Problem solving by intelligent water drops. Proceedings of the IEEE Congress on Evolutionary Computation, Singapore, 2007, pp. 3226–3231.
Huawei Global Industry Vision. (2018). Unfolding the industry blueprint of an intelligent world.
Kallitsis, M., Michailidis, G., & Tout, S. (2015). Correlative monitoring for detection of false data injection attacks in smart grids. 2015 IEEE International Conference on Smart Grid Communications (SmartGridComm), Miami, FL, pp. 386–391.
Kamble, A., Malemath, V., & Patil, D. (2017). Security attacks and secure routing protocols in RPL-based internet of things: Survey. 2017 International Conference on Emerging Trends & Innovation in ICT (ICEI), Pune, pp. 33–39.
Kaspersky Lab. (2019). Industry 4.0.
Klau, T. (2017). As the boards of directors of companies decide on the introduction of advanced technologies. Joint Stock Company: Corporate Governance Issues, 1(2), 30–31.
Kulikowski, C. (1980). Artificial intelligence methods and systems for medical consultation. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-2(5), 464–476.
Laughton, M. (1997). Artificial intelligence techniques in power systems. IEE Colloquium on Artificial Intelligence Techniques in Power Systems (Digest No: 1997/354), London, UK, pp. 1/1–119.
McKinsey Global Institute. (2015). The internet of things: Mapping the value beyond the hype.


Poniszewska-Maranda, A., & Kaczmarek, D. (2015). Selected methods of artificial intelligence for Internet of Things conception. 2015 Federated Conference on Computer Science and Information Systems (FedCSIS), Lodz, pp. 1343–1348.
Schwab, K. (2016). The fourth industrial revolution.
Singh, R., & Nand, P. (2016). Literature review of routing attacks in MANET. 2016 International Conference on Computing, Communication and Automation (ICCCA), Noida, pp. 525–530.
Sirola, P., Joshi, A., & Purohit, K. (2014). An analytical study of routing attacks in vehicular ad-hoc networks (VANETs). International Journal of Computer Science Engineering (IJCSE), 3(4).
Smith, J., & Schuchard, M. (2018). Routing around congestion: Defeating DDoS attacks and adverse network conditions via reactive BGP routing. 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, pp. 599–617.
Xie, L., Mo, Y., & Sinopoli, B. (2010). False data injection attacks in electricity markets. 2010 First IEEE International Conference on Smart Grid Communications, Gaithersburg, MD, pp. 226–231.
Yu, J., Hou, Y., & Li, V. (2018). Online false data injection attack detection with wavelet transform and deep neural networks. IEEE Transactions on Industrial Informatics, 14(7), 3271–3280.
Yu, F., Liu, J., & Liu, D. (2016). An approach for fault diagnosis based on an improved k-nearest neighbor algorithm. 2016 35th Chinese Control Conference (CCC), Chengdu, pp. 6521–6525.
Zhang, J. (2011). A survey on trust management for VANETs. International Conference on Advanced Information Networking and Applications, pp. 105–112.
Zhou, J. (2015). Intelligent manufacturing-main direction of 'Made in China 2025'. China Mech. Eng., 26(17), 2273–2284.
Zhou, L., Yeh, K., Hancke, G., Liu, Z., & Su, C. (2018). Security and privacy for the industrial internet of things: An overview of approaches to safeguarding endpoints. IEEE Signal Processing Magazine, 35(5), 76–87.

Cybersecurity and Control Sustainability in Digital Economy and Advanced Production

Dmitry P. Zegzhda, Evgeny Pavlenko, and Anna Shtyrkina

Abstract This paper describes the challenge of ensuring the cybersecurity of modern digital systems. The development of intelligent information technologies has made it possible to create systems that implement physical and economic processes through information exchange between the components of the system. The integration of such systems with critical industries and their accessibility from the Internet provide ample opportunities for the implementation of cyber attacks. The authors show the need to expand the concept of information security for the systems under consideration with the concept of sustainability, understood as the ability to function under computer attacks. It is shown that an important feature of modern digital systems is that ensuring the correct operation of the entire system takes priority over ensuring the safety of individual components. The proposed approach to ensuring cybersecurity and controlling cyber sustainability, based on the self-adaptation of the system to the conditions of operation, is described.

Keywords Cyber-physical system · Cybersecurity · Cyber sustainability · Digital systems · Self-adaptation

1 Introduction

Digital technologies are being introduced in all sectors of the economy. They penetrate both as digital assets in the form of new business models and in the form of the industrial Internet of things. This creates the conditions for the formation of large amounts of data, both industry-specific and inter-industry. Technology penetrates into the social sphere in the form of connections and communications, in which almost everything around becomes part of the global digital space, forming



the prerequisites for applying the relevant data to assess and predict the economic development of society. Digitalization has transformed the information and computer systems of all industries, turning them into cyber-physical systems (CPS). CPS is a technological concept that provides close coordination between computing and physical resources. In general, CPS support the maintenance of real-world processes using regular monitoring and a feedback loop. Examples of CPS are industrial systems associated with all areas of human activity: transport, energy, finance, medicine, etc. Modern advanced production and digital economy systems are CPS. Unauthorized interference with such systems can lead to great financial losses and disastrous consequences; therefore, the question of CPS security is extremely important nowadays. The problem of CPS security is considered in detail in (Pavlenko et al. 2017; Zegzhda et al. 2018; Pavlenko and Zegzhda 2018). The close integration of physical and information processes means that CPS security is not provided by the classical concepts of confidentiality, integrity, and availability of the information circulated in the system. The protection of CPS from destructive impact is also important, since the physical processes implemented by the system are irreversible. In this regard, the problem of maintaining the functional sustainability of CPS in the context of destructive interventions comes to the fore. By the cyber sustainability of CPS, the authors understand the system's ability to function correctly under cyber attacks. The creation of approaches to ensuring the cyber sustainability of CPS began in 2010: one of the first official documents addressing the problem of resistance to attacks was the Australian government's Strategy for Critical Infrastructure Sustainability; then large corporations joined in: the Big4 (PWC, EY), analytical agencies (Gartner), vendors (IBM, Symantec), as well as NIST, the Central Bank of the Russian Federation, and the European Central Bank. At the moment, there is a fairly large number of documents that define the concept of cyber sustainability and measures to ensure it: guidance on cyber resilience for financial market infrastructures from IOSCO (the International Organization of Securities Commissions), methods of the US Department of Homeland Security, US-CERT methodologies, etc. However, none of these documents provides a methodology for measuring cyber sustainability, approaches to its assessment, or reference values. The proposed studies are aimed at eliminating this problem. In this paper, the authors propose an approach to ensuring cybersecurity and controlling cyber sustainability, based on the self-adaptation of the system to the conditions of operation. The results of experimental studies, which showed the high efficiency of the proposed approach, are presented.


2 Related Works

There are not many scientific approaches to maintaining the sustainability of CPS. One promising approach uses the biological concept of homeostasis—the mechanism that maintains the constancy of internal organism processes. This approach provides adaptation and self-regulation mechanisms for complex dynamic systems. Such features allow autonomous control and maintenance of the state of the system. The homeostatic approach for CPS was proposed by Gerostathopoulos et al. (2016a, b) as an ability of self-adaptation. However, the authors of these papers focused on operation correctness rather than on security aspects. Moreover, the proposed model is not applicable in the case of large dynamic systems because of the high complexity of its monitoring algorithm. Another paper (Muccini and Vaidhyanathan 2019) moves from self-adaptive architectures to self-learning architectures that learn and improve QoS parameters over time. However, such an approach does not take into account the structural parameters of CPS, only time series and data streams. Thus, due to the dynamic behavior of CPS, the homeostatic strategy can be separated into three stages: system monitoring, sustainability estimation, and making decisions on system recovery. To implement this strategy, a method is needed to evaluate the sustainability of the CPS at the current time, as well as to predict the maximum destructive load that will lead to a complete loss of system functionality. The second stage can be realized by different methods using mathematical statistics, game theory, and so on. Yong et al. (2016) proposed a novel algorithm for estimating the system state that is resilient to different types of attacks. The proposed method uses principles of robust optimization and gives a "frequentist" robust estimator. However, such methods do not take into account the structure of the CPS, which can be represented as a network of devices. He et al. (2013) proposed a game-theoretic concept for estimating system sustainability. This approach defines sustainability as a power-form product of the survival probabilities of the cyber and physical spaces, each with a corresponding correlation coefficient. Such methods do not take into account the structure of the system and might not be as flexible as needed for providing cybersecurity. Thiede (2018) proposes a methodology to estimate the environmental sustainability of CPS. This approach is scalable and takes an economic perspective; however, due to simplifications, some failures can be missed. Wei and Ji (2013) proposed to estimate CPS sustainability as the rate of system recovery; however, this method is a posteriori, so the model only allows restoring the system after destructive influences.


3 Approach to CPS Security

3.1 Model of CPS Functioning

The homeostasis strategy was applied to the security of CPS in (Zegzhda and Pavlenko 2017) and continued in terms of the systematization and security assessment of cyber-physical systems (Zegzhda et al. 2017) and the sustainability of cyber-physical systems in the context of targeted destructive influences (Zegzhda and Pavlenko 2018). The method of estimating CPS sustainability is determined by the way the system is represented and simulated. In the case of CPS, one of the most common representations is a model based on graph theory. Graph theory allows us to consider not only the network of devices within an integrated CPS, but also the interaction of CPS components with each other. Since the processes in the CPS are carried out by exchanging data between devices, each process can be represented as a route on a graph. The presence of a large number of such routes, as well as their quality, determines the system's ability to function, thereby giving an assessment of its stability. Lavrova (2016) proposed a graph model, according to which a CPS is a graph G = <V, E>, where V = {v_1, v_2, . . ., v_n} is the set of graph vertices representing the devices, and E = {e_1, e_2, . . ., e_d} is the set of edges representing connections between system components. Each vertex is characterized by a tuple, which contains the characteristics depending on its type. An important parameter of the vertex is the device performance performance_{v_i}, where i is the node identifier. In addition to typical parameters, each vertex corresponds to a set of functions that it can perform, F_{v_i} = (f_1, f_2, . . ., f_k). The set of functions that can be performed by components of the CPS is not homogeneous: it can include both trivial functions and functions that are more complex in terms of implementation. Therefore, it is advisable to introduce a measure for each of the functions that determines its complexity, f_i → complexity_{f_i}. Knowing the node performance and the complexity of the functions it performs, the execution time of function f_j on device v_i can be found through the relation:

time_{v_i, f_j} = complexity_{f_j} / performance_{v_i}            (1)

Each edge also has a parameter characterizing the data rate between vertices v_i and v_j: time_{v_i, v_j}. A process running in a CPS is characterized by a sequence of functions that are performed by the vertices of the graph, R_process = {f_1, f_2, . . ., f_m}. It should be noted that complex functions can be decomposed into a sequence of simpler ones, which allows the route to be effectively reconfigured under destructive effects. This fact, as well as the fact that each function can correspond to several vertices of the graph with different performance, means that each process process_i in the CPS corresponds to a set of working routes path_j differing in their characteristics. As parameters of the routes, it is proposed to consider:


– Route length l = |path_j|.
– Total route complexity F_{path_j}.
– Total route performance Perf_{path_j}.
– Time of route execution Time_{path_j}.
– Energy characteristics of the vertices, determined by device type.

Thus, when calculating the characteristics of the route, all connections between the components of the system are taken into account, as well as the characteristics of the vertices that perform the functions included in the process. Intermediate nodes are not counted in the summation. The presence of high-quality routes, for example, with a short execution time, determines the cyber sustainability of the CPS under destructive influences, since the reduction of such routes will lead to system downtime, which can cause failures and the loss of the target function.
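As an illustration of this graph model, the following Python sketch (using networkx) computes the execution time of a working route according to formula (1) and its total performance; the device performances, function complexities, and the example route are assumed values.

```python
# Sketch of the CPS graph model: vertices are devices with a performance value
# and a set of functions; a working route is a sequence of (device, function)
# pairs. All numbers are illustrative assumptions.
import networkx as nx

G = nx.Graph()
G.add_node("v1", performance=4.0, functions={"f1": 2.0})            # complexity of f1
G.add_node("v2", performance=2.0, functions={"f2": 3.0, "f3": 6.0})
G.add_node("v3", performance=8.0, functions={"f3": 6.0})
G.add_edge("v1", "v2", time=0.1)   # data transfer time between devices
G.add_edge("v2", "v3", time=0.2)


def route_time(graph, route):
    """Total execution time: formula (1) per step plus link transfer times."""
    total = 0.0
    for k, (device, func) in enumerate(route):
        node = graph.nodes[device]
        total += node["functions"][func] / node["performance"]      # complexity / performance
        if k > 0:
            prev_device = route[k - 1][0]
            total += graph.edges[prev_device, device]["time"]
    return total


def route_performance(graph, route):
    """Total route performance: sum of performances of the executing devices."""
    return sum(graph.nodes[device]["performance"] for device, _ in route)


# Example working route R_process = {f1, f2, f3} executed on v1 -> v2 -> v3.
route = [("v1", "f1"), ("v2", "f2"), ("v3", "f3")]
print(route_time(G, route), route_performance(G, route))
```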

3.2 Estimating the Sustainability Area

When ensuring CPS security, important parameters are the times of attack detection and of CPS rebuilding to neutralize destructive impacts. Therefore, the time of route execution and the total route performance were chosen as the characteristics of route quality. As a part of the study, a working route was defined, represented as a sequence of functions. To estimate CPS sustainability, an algorithm was developed that searches for various routes on the graph that include a sequence of vertices performing functions from the working route. The characteristics of the intermediate vertices were not taken into account. For each route found, time and performance were calculated. To estimate the number of routes depending on the time of their execution, a cumulative function was built (Fig. 1). The argument of this function is an ordered set of time values, and the function values are the number of routes that have an execution time less than the value of the argument Time_{path_j}. Thus, judging from Fig. 1, the number of routes that have an execution time less than 19 is approximately 100,000. In the case of performance estimation, the best-quality route will have a large total performance value. Therefore, the cumulative function for route performance is constructed as follows: the number of routes whose performance is greater than a given performance value is taken as the value of the function, as shown in Fig. 2. For further analysis, the values of performance and execution time of the routes were normalized. The graphs for both characteristics were combined, and then the intersection point was found, as can be seen in Fig. 3. The left area of the graph corresponds to routes with the lowest performance; the right area corresponds to routes with the longest execution time. Thus, routes in the middle part of the plot in Fig. 4 can be interpreted as an area of system sustainability. It is proposed to limit the sustainability area by symmetric intervals of length 0.25


Fig. 1 Cumulative function for route time execution (authors’ creation)

Fig. 2 Cumulative function for route performance (authors’ creation)



Fig. 3 Intersection of execution and performance curves for working routes (authors’ creation)

Fig. 4 Number of routes for the fixed values of execution time and performance of working route (authors’ creation)


from the intersection point. The right boundary refers to the execution time of the routes—that is, routes from the sustainability area should not run for longer than a certain time. The left border, respectively, refers to route performance. For fixed values of execution time and performance of the working route on the x-axis, the number of routes suiting such characteristics was calculated. The largest value is observed at the intersection point of the two curves, as shown in Fig. 4. Since the number of routes is also a quality criterion, to limit the area of sustainability, it is proposed to cut off the part with characteristics for which the number of routes is less than 20,000. Thus, this paper proposes a criterion for CPS sustainability: the number of working routes in the system with optimal values of execution time and performance. In order to check the applicability of the criterion, it is necessary to simulate destructive influences and to check the reaction of the criterion to changes in the system structure. To estimate the CPS sustainability, the information system was modeled as a graph. The graph was constructed using the Erdős and Rényi (1960) model with the number of nodes equal to 30 and the probability of edge appearance equal to 0.35. Each vertex of the graph was mapped to:
– The set of functions that the vertex can perform and their complexity.
– The performance of the device.
– The time of function execution on the device.
Each edge is associated with a data rate time_{v_i, v_j} between v_i and v_j.
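The following sketch illustrates how the sustainability area could be estimated from the normalized cumulative curves for route time and performance. The random graph parameters match those stated above, while the route enumeration and the interpretation of the 0.25-wide band are simplified assumptions of this sketch rather than the exact procedure of the experiments.

```python
# Sketch of the sustainability-area estimation: build an Erdos-Renyi graph
# (30 nodes, edge probability 0.35), enumerate candidate working routes,
# normalize their execution time and total performance, and count routes
# falling into the middle "sustainability" band.
import itertools
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
G = nx.gnp_random_graph(30, 0.35, seed=1)
for v in G.nodes:
    G.nodes[v]["performance"] = rng.uniform(1.0, 10.0)
for u, v in G.edges:
    G.edges[u, v]["time"] = rng.uniform(0.05, 0.5)

complexity = {"f1": 2.0, "f2": 4.0, "f3": 6.0}       # working route R = (f1, f2, f3)

# Candidate routes: ordered triples of distinct adjacent vertices (simplified).
times, perfs = [], []
for a, b, c in itertools.permutations(G.nodes, 3):
    if G.has_edge(a, b) and G.has_edge(b, c):
        t = sum(complexity[f] / G.nodes[v]["performance"]
                for v, f in zip((a, b, c), ("f1", "f2", "f3")))
        t += G.edges[a, b]["time"] + G.edges[b, c]["time"]
        times.append(t)
        perfs.append(sum(G.nodes[v]["performance"] for v in (a, b, c)))

times, perfs = np.array(times), np.array(perfs)
t_norm = (times - times.min()) / (times.max() - times.min())
p_norm = (perfs - perfs.min()) / (perfs.max() - perfs.min())

# Cumulative curves: share of routes faster than x, and share with performance above x.
x = np.linspace(0, 1, 101)
cum_time = np.array([(t_norm < xi).mean() for xi in x])
cum_perf = np.array([(p_norm > xi).mean() for xi in x])

# Intersection point of the two curves and a symmetric band of width 0.25 around it.
cross = x[np.argmin(np.abs(cum_time - cum_perf))]
in_area = ((t_norm < cross + 0.25) & (p_norm > cross - 0.25)).sum()
print(f"intersection at {cross:.2f}, routes in sustainability area: {in_area}")
```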

4 Modeling Destructive Influences

4.1 Analyzing the Sensitivity of the Criterion to Cyber Attacks

As part of the study, an attack was modeled, consisting in the sequential removal of half of the vertices. For the resulting graph, the number of routes with time and performance characteristics within the area of system sustainability was calculated; the results are shown in Fig. 5. The second model of attack influence is to delete a vertex that has a certain degree of criticality. As an indicator of vertex criticality, it is proposed to use the ratio of the number of working routes passing through the vertex to the total number of working routes:

crit_{v_i} = numR_{process}(v_i) / numR_{process}            (2)

The number of routes depending on the criticality of the deleted vertex was evaluated for fixed values of route execution time and performance; the results are shown in Fig. 6.


Fig. 5 Number of routes in the sustainability area depending on the number of deleted vertices (authors’ creation)

As the experiments show, at a certain vertex criticality, the number of routes in the sustainability area reaches zero, which indicates the complete inability of the system to function along the given sequence of functions. During the simulation of attacking influences, the proposed criterion of sustainability showed high sensitivity to structural changes in the CPS.
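The vertex criticality of formula (2) and the vertex-removal attack can be sketched as follows; the toy list of working routes is an assumed input, whereas in the study the routes come from the route search of Sect. 3.2.

```python
# Sketch of the criticality indicator (formula (2)) and the vertex-removal
# attack: removing the most critical vertex destroys the most working routes.
working_routes = [
    ["v1", "v2", "v4"],
    ["v1", "v3", "v4"],
    ["v1", "v2", "v5"],
    ["v2", "v3", "v5"],
]


def criticality(vertex, routes):
    """crit_{v_i} = numR_process(v_i) / numR_process."""
    return sum(vertex in r for r in routes) / len(routes)


def remove_vertex(vertex, routes):
    """Model the attack: every route passing through the vertex is destroyed."""
    return [r for r in routes if vertex not in r]


for v in ("v1", "v2", "v3", "v4", "v5"):
    print(v, criticality(v, working_routes))

# Delete the most critical vertex and observe the drop in the criterion
# (the number of remaining working routes).
most_critical = max({v for r in working_routes for v in r},
                    key=lambda v: criticality(v, working_routes))
remaining = remove_vertex(most_critical, working_routes)
print(f"removed {most_critical}: {len(working_routes)} -> {len(remaining)} routes")
```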

4.2 Approach to System Self-Adaptation Recovery

Taking into account the proposed criterion, recovery of system functionality reduces to the problem of changing the graph in such a way that the number of routes satisfying the given characteristics increases. An increase in the number of routes is possible through the implementation of various scenarios:
– Rebuilding and reconfiguration of the CPS to improve the graph connectivity, which will lead to the emergence of new routes or change their length [12].
– Definition of a new sequence of performing the target function, due to the possibility of representing functions as a decomposition of other functions.
– Improving device characteristics, in particular, increasing the performance of a certain type of devices.


Fig. 6 Number of routes in the sustainability area depending on the criticality of the deleted vertex (authors’ creation)

Obviously, due to the varying complexity of the functions performed by devices, an increase in the performance of different types of vertices affects the number of suitable routes in different ways. As part of the work, an experiment was conducted that consisted in doubling the performance of vertices of a certain type; the results are presented in Fig. 7. The abscissa axis indicates the types of functions that can be performed by system components, arranged in order of increasing complexity. The first point on the plot corresponds to the initial value of the number of routes in the graph, without changing the performance of devices of any particular type. It should be noted that the observed linear relationship is determined by the fact that the sequence of functions includes all the functions performed by the system. If, however, we increase the length of the working route and duplicate the occurrence of the f3 function, then a small jump is observed precisely with an increase in the performance of the devices implementing this function, as shown in Fig. 8. Thus, for effective CPS recovery and an increase in the number of suitable routes that satisfy the specified characteristics, preference should be given to the types of devices that perform more complex functions if the ratio of functions of different types in a given sequence is approximately the same. System recovery should occur automatically; the system should be capable of self-adaptation, for which the proposed sustainability criterion should be used. The decision on which route to take should be based on the calculation of the criterion.


Fig. 7 Sustainability criterion depending on the change in performance of vertices of a certain type (authors' creation)

5 Conclusion

CPS security reduces to maintaining system sustainability. To solve this problem, a sustainability criterion is needed. This criterion should take into account not only the information and physical parameters of system devices but also the structural characteristics of the CPS network. Using a graph representation of the CPS, the processes in the system can be represented as a set of routes that include a given sequence of vertices, each of which performs a set of specific functions. Mapping a set of qualitative characteristics to vertices and connections makes it simple to evaluate the optimality of a route as the total value of the characteristics of the vertices and links contained in the route. Thus, the number of routes with optimal values of the quality characteristics determines the sustainability of the CPS. The applicability of this criterion was verified by modeling destructive effects, as a result of which the proposed sustainability assessment demonstrated high sensitivity to changes in the graph describing the CPS. For self-adaptive recovery of the CPS during a strong decrease in the number of suitable working routes, it is proposed to increase the performance of devices that are involved in performing complex functions, if the ratio of functions in a given sequence is approximately the same. Otherwise, it is recommended to increase the performance of nodes for those functions that are more common in the working route.


Fig. 8 Sustainability criterion depending on the change in performance of vertices of a certain type (authors' creation)

References

Erdős, P., & Rényi, A. (1960). On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci, 5(1), 17–60.
Gerostathopoulos, I., Bures, T., Hnetynka, P., Keznikl, J., Kit, M., Plasil, F., & Plouzeau, N. (2016a). Self-adaptation in software-intensive cyber-physical systems: From system goals to architecture configurations. Journal of Systems and Software, 122, 378–397.
Gerostathopoulos, I., Skoda, D., Plasil, F., Bures, T., & Knauss, A. (2016b). Architectural homeostasis in self-adaptive software-intensive cyber-physical systems. In European conference on software architecture (pp. 113–128). Cham: Springer.
He, F., Zhuang, J., Rao, N. S., Ma, C. Y., & Yau, D. K. (2013). Game-theoretic resilience analysis of cyber-physical systems. In 2013 IEEE 1st international conference on cyber-physical systems, networks, and applications (CPSNA) (pp. 90–95). IEEE.
Lavrova, D. S. (2016). An approach to developing the SIEM system for the internet of things. Automatic Control and Computer Sciences, 50(8), 673–681.
Muccini, H., & Vaidhyanathan, K. (2019). A machine learning-driven approach for proactive decision making in adaptive architectures. In 2019 IEEE international conference on software architecture companion (ICSA-C) (pp. 242–245). IEEE.
Pavlenko, E. Y., Yarmak, A. V., & Moskvin, D. A. (2017). Hierarchical approach to analyzing security breaches in information systems. Automatic Control and Computer Sciences, 51(8), 829–834.
Pavlenko, E., & Zegzhda, D. (2018, May). Sustainability of cyber-physical systems in the context of targeted destructive influences. In 2018 IEEE Industrial Cyber-Physical Systems (ICPS) (pp. 830–834). IEEE.


Thiede, S. (2018). Environmental sustainability of cyber physical production systems. Procedia CIRP, 69, 644–649.
Wei, D., & Ji, K. (2013). U.S. patent application no. 13/703 (p. 158).
Yong, S. Z., Foo, M. Q., & Frazzoli, E. (2016). Robust and resilient estimation for cyber-physical systems under adversarial attacks. In 2016 American control conference (ACC) (pp. 308–315). IEEE.
Zegzhda, P. D., Lavrova, D. S., & Shtyrkina, A. A. (2018). Multifractal analysis of internet backbone traffic for detecting denial of service attacks. Automatic Control and Computer Sciences, 52(8), 936–944.
Zegzhda, D. P., & Pavlenko, E. Y. (2017). Cyber-physical system homeostatic security management. Automatic Control and Computer Sciences, 51(8), 805–816.
Zegzhda, D. P., & Pavlenko, E. Y. (2018). Digital manufacturing security indicators. Automatic Control and Computer Sciences, 52(8), 1150–1159.
Zegzhda, D. P., Poltavtseva, M. A., & Lavrova, D. S. (2017). Systematization and security assessment of cyber-physical systems. Automatic Control and Computer Sciences, 51(8), 835–843.

Blockchain for Cybersecurity of Government E-Services: Decentralized Architecture Benefits and Challenges

Alexey Busygin and Artem Konoplev

Abstract Traditional government e-services are based on centralized data stores, processing, and management systems and solutions, which have several constraints, for example, a single point of failure, trust concentration, and an absence of transparency, tracking, and verifiability. The paper discusses the applicability of a decentralized approach to building government e-services. The financial, operational, and security benefits of the decentralized approach compared to the centralized one are outlined. The main cybersecurity challenges are identified. The authors present how blockchain technology utilization could solve the identified issues.

Keywords E-government · E-services · Cybersecurity · Blockchain · Decentralized architecture

1 Introduction

E-government and government e-services play an important role in cost reduction and in the convenience of interaction between citizens, businesses, and other government institutions across major scopes of government activity. E-government consists of digital interactions between citizen and government (C2G), government and citizen (G2C), government and businesses (G2B), government and government employees (G2E), and government and other government agencies (G2G) at the local, national, or international state level (Hai and Ibrahim 2007). These interactions involve confidential, sensitive, or critical information exchanges, critical automated information processing, evaluation, and decision-making. Therefore, the reliability, necessary transparency, verifiability, trustworthiness, and security of government e-services become an issue of current interest.



The strategy of utilizing information technology and information and communication technologies for the development of government services has several issues. The issues related to cybersecurity are discussed in the following papers. Manoharan and Holzer identify reputation and performance as the key factors of trust in government e-services and, therefore, of successful e-government operation. A good reputation depends on the ability of the e-government system to provide a good track record. Performance is defined as the ability of the e-government system to resist cybersecurity issues and recover from failures (Manoharan and Holzer 2011). Government e-services are largely based on commonly applied information and communication technologies such as the Web (Hai and Ibrahim 2007). Therefore, e-government systems inherit the cybersecurity threats of typical distributed information systems: network probes, malware, internet infrastructure attacks, denial-of-service attacks, etc. (Singh and Karaulia 2011). Common security countermeasures are proposed: cryptography, firewalls, operating system security, data backups, security monitoring and auditing (Al-Azazi 2008; Backer 2005), and honeypots (Nikkhahan et al. 2009). Blockchain technology has been proposed as an infrastructure for document handling in e-government services (Ølnes and Jansen 2017). Potential technical and economic benefits of blockchain application to e-government have been highlighted (Ølnes et al. 2017).

2 Centralized Government E-Services Issues

Traditional government e-services are based on centralized systems, for example, centralized data stores, databases and directories, processing, monitoring, and management systems, public key infrastructures, and certification authorities. Such base systems with a centralized architecture have a number of constraints caused by their major design property: all operations are performed by one or a few entities (organizations). The general flaw of the centralized approach to building government e-services is a single physical or logical point of failure. Additional effort is required to increase service and infrastructure redundancy to eliminate or lower the possible downtime of critical e-government services such as voting systems. Centralized systems have lower resistance to DDoS attacks and to errors and failures not related to information security. Another disadvantage of centralized systems is an overconcentration of trust, causing the absence of formal and technical guarantees of information processing and decision-making transparency and of the ability to track, monitor, and verify e-service activities in a way that is public and independent of the managing entity.


3 Government E-Services Security Requirements

Among the most critical properties of a government e-service are availability and transparency (it should be possible to track and verify government activities). A government e-service is a distributed information system. The availability of a distributed information system depends on the availability of its components and communication channels. The factors of availability of a government e-service are presented in Fig. 1. The availability of government e-service components at the system level is provided by redundancy/failover, load balancing, and vertical and horizontal scaling methods and instruments. The availability of government e-service communication channels is provided by methods similar to those for component availability (redundancy/failover, load balancing, scaling). It is achieved by proper network planning, implementation, and maintenance and is outside the scope of this article. Government e-service component redundancy/failover and load balancing could be provided by the application of blockchain technology. Although scalability is one of the main disadvantages of blockchain technology, the authors propose in Sect. 4.3 some modifications to the base blockchain that could be utilized to overcome this issue. The information system implementing a government e-service should satisfy the following requirements.
• Data should be available in a distributed way to increase performance by providing multiple service access points.

Fig. 1 Government e-service availability factors


• Data should be available in a decentralized way to decrease the number of points of failure caused by a centralized service model.
• Not only data about the current state of e-service processes should be available, but also data about previous states (i.e., the history of state transitions). This condition is required to achieve the transparency of government e-service processes.
• Data should be protected against forgery, modification, and deletion.
• The information system should be resistant to the destructive actions of multiple intruders, including colluding ones (Byzantine fault tolerance).
• The information system should provide the ability to scale with the growth of loads.
The approach to the implementation of government e-services with the specified properties, based on the application of blockchain technology, is presented below.

4 Blockchain for Government E-Services

The application of blockchain technology to information systems provides:
• Decentralized data exchange, processing, and storage.
• Access to the full history of system state changes.
• Data protection against forgery, corruption, and deletion.
• Byzantine fault tolerant distributed consensus.

Decentralized data processing and storage with public blockchain nodes has significant financial and operational benefits compared to the centralized approach to data processing with on-premises or cloud servers: there is no need to spend resources on infrastructure maintenance, because some of the processing tasks can be delegated to public blockchain nodes. The security benefit of decentralized data processing with blockchain is an increased resiliency against DDoS attacks. There is no single point of failure (server, datacenter, communication channel), and there is no need to spend extra resources on DDoS protection. Access to the full append-only history of system state changes provides a framework for more accurate reputation assessment and data analysis. This also allows for e-government transparency and accountability.

4.1 Blockchain as Data Exchange Infrastructure

In this section, we describe the place of blockchain in the government e-services architecture and the common tasks solved by blockchain protocols. Figure 2 illustrates the generic architecture of a government e-service. The proposed government e-service model has a layered architecture. The foundation of the information system implementing government e-services is peer-to-


Fig. 2 Decentralized government e-service architecture

peer communication between the blockchain nodes. Layer 1 of the e-service model is formed by nodes exchanging data by blockchain means: all data is presented as blockchain transactions and propagated to other network peers via the blockchain gossip protocol. Valid blocks of transactions are fixed in the blockchain via a decentralized consensus protocol. The transaction data can be of any message type, for example, application-layer queries, responses, or notifications exchanged by different logical components of government e-services. As a result, a ledger of heterogeneous data is formed on each node in a decentralized way (layer 2). The blockchain ledger acts as a backend store for the database management systems, registries, directories, message queues, and other necessary infrastructure of the government e-services residing at layer 3. Layer 4 is represented by government e-service components reading data from layer 3, processing it, and writing results back to layer 3. Layer 3 transforms the results received from e-service components into blockchain transactions and relays the transactions to available peers via the blockchain gossip protocol. The decentralized consensus protocol is used to commit valid transactions to the blockchain. In the described decentralized approach, every node has its own copy of the distributed transaction ledger and of the databases (registries, message queues, etc.) built as a result of the execution of transactions from the ledger. The consistency of the ledger and of the databases built is guaranteed by the blockchain protocol. Hence, the government e-service components running on a node have most of the data resources available locally and are independent of the components residing on the other nodes. The significant advantage of this approach is the absence of a single point of failure, resulting in the high availability of government e-services. There is no central database or application server. For example, a DDoS attack on decentralized government e-service nodes does not result in failure of the whole service, because service components can easily be replicated on any of the other blockchain nodes.
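As a very rough illustration of layers 1 and 2 of this architecture, the following sketch shows how each node could replay an append-only list of transactions into its own local registry; the transaction format and the registry logic are simplified assumptions and do not reflect any particular blockchain implementation.

```python
# Simplified illustration: every node holds a copy of the append-only ledger
# (layer 2) and deterministically replays it into local registries (layer 3).
# The transaction schema ("register"/"update" messages with a key and payload)
# is an assumed example, not a format defined in the chapter.
from dataclasses import dataclass


@dataclass(frozen=True)
class Transaction:
    kind: str      # e.g. "register" or "update"
    key: str       # registry entry identifier
    payload: dict  # application-layer message content


def replay(ledger):
    """Build the node-local registry by executing transactions in ledger order."""
    registry = {}
    for tx in ledger:
        if tx.kind == "register" and tx.key not in registry:
            registry[tx.key] = tx.payload
        elif tx.kind == "update" and tx.key in registry:
            registry[tx.key] = {**registry[tx.key], **tx.payload}
        # invalid transactions are ignored by every node in the same way,
        # so all nodes end up with an identical registry
    return registry


# The same ledger replayed on any node yields the same state.
ledger = [
    Transaction("register", "citizen:42", {"name": "A. Citizen"}),
    Transaction("update", "citizen:42", {"address": "Main St. 1"}),
]
print(replay(ledger))
```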


All transactions are checked and executed by every node in a unified and formalized way. Furthermore, every node has access to all transactions stored in the blockchain, i.e., to the whole history of heterogeneous data exchanges between government e-service components. These features ensure the transparency of government activities, resulting in a high degree of citizen trust. An example of an information system that could be built according to the presented architecture is given in the following section.

4.2 Blockchain-Based Certifications Database

In this section, a decentralized certifications database is presented as an alternative way of building certification authorities' registries. The model described in (Konoplev et al. 2018) is adapted to the decentralized certifications database implementation. In this section, we use the following notation. Let Validsig(σ, k, x) denote the predicate taking the true value if and only if σ is a valid signature calculated over x with the private key corresponding to public key k. Let Trusted(k) denote the predicate taking the true value if and only if public key k is trusted in terms of the PGP Web of Trust model (Callas et al. 2007), i.e., the owner of key k is a trusted introducer. The described model also has the layered architecture presented in Fig. 3. Layer 1 provides the blockchain-based transaction log. The transaction log L is defined as:

L ⊂ N × TS × TX            (1)

N is a set of transaction numbers. Transaction numbers are stored in the blockchain implicitly and are defined by the transaction order in the blockchain. TS is a set of

Fig. 3 Decentralized certification e-service layered architecture


transaction timestamps. TX is a set of transactions. The transactions from L are uniquely identified by their numbers:

(∀⟨n1, t1, a1⟩ ∈ L)(∀⟨n2, t2, a2⟩ ∈ L)((n1 = n2) → (t1 = t2) ∧ (a1 = a2))            (2)

The transaction set is defined as follows:

TX ⊂ CERT × KEY × KEY × ATT × COND × SIG            (3)

KEY is a set of public keys. ATT is a set of attributes associated with a public key. COND is a set of conditional expressions. SIG is a set of signatures. CERT = {true, false} is a set of claims (certification and revocation, respectively). For example, ⟨true, k1, k2, a, c, σ⟩ ∈ TX is a certification transaction. By issuing this transaction, the owner of public key k1 ∈ KEY certifies that the owner of public key k2 ∈ KEY has attribute a ∈ ATT if conditional expression c ∈ COND is true. The certification is committed with signature σ calculated on true, k1, k2, a, and c using the private key corresponding to public key k1. Condition c ∈ COND could be used, for example, to specify the certification claim validity period. Attribute a ∈ ATT could be used to specify, for example, the name, identifier, role, etc. of the owner of key k2. Transaction ⟨false, k1, k2, a, c, σ″⟩ ∈ TX is used to revoke the previously issued certification ⟨true, k1, k2, a, c, σ′⟩ ∈ TX. Signature σ″ is calculated on false, k1, k2, a, and c using the private key corresponding to public key k1.

(∀⟨n, t, false, k1, k2, a, c, σ″⟩ ∈ L)(∃! ⟨n′, t′, true, k1, k2, a, c, σ′⟩ ∈ L)((n′ < n) ∧ (t′ < t))            (4)

Transaction ⟨n, t, x, k1, k2, a, c, σ⟩ is added to the blockchain if the following requirements are met:
• It meets the common requirements for valid blockchain transactions (correct formatting, proof-of-work/proof-of-stake, etc.).
• The transaction's signature is valid:

(∀⟨n, t, x, k1, k2, a, c, σ⟩ ∈ L) Validsig(σ, k1, ⟨x, k1, k2, a, c⟩)            (5)

Every blockchain node performs serial execution of transactions from L and constructs the certifications database. The certifications database DB is defined as follows:

DB ⊂ KEY × KEY × ATT × COND            (6)

Execution of transaction ⟨true, k1, k2, a, c, σ⟩ ∈ TX results in the creation of a new database entry ⟨k1, k2, a, c⟩ if, for the executing node, the predicate Trusted(k1) is true.


Fig. 4 Examples of effective certifications database entries

Execution of transaction ⟨false, k1, k2, a, c, σ⟩ ∈ TX results in the deletion of the database entry ⟨k1, k2, a, c⟩ if ⟨k1, k2, a, c⟩ ∈ DB. Finally, certifications database DB is transformed into an effective view DBe defined as follows:

DBe ⊂ KEY × ATT    (7)

DBe is created by the following rule:

(∀⟨k1, k2, a, c⟩ ∈ DB)((c = true) → (⟨k2, a⟩ ∈ DBe))    (8)
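For concreteness, the following is a minimal sketch (our addition, with assumed field names) of how a node could execute the ordered log into DB and derive the effective view DBe according to (6)-(8). The Trusted predicate is reduced to a stub, and the conditional expression is assumed to be already evaluated to a boolean.

```python
from typing import Iterable, NamedTuple, Set, Tuple


class LogEntry(NamedTuple):
    """One element of L: <n, t, cert, k1, k2, a, c, sig>, cf. Eqs. (1)-(3)."""
    n: int        # transaction number (implicit position in the blockchain)
    t: float      # timestamp
    cert: bool    # True = certification, False = revocation
    k1: bytes     # issuer public key
    k2: bytes     # certified public key
    a: str        # attribute
    c: bool       # conditional expression, pre-evaluated to a boolean for brevity
    sig: bytes    # signature (assumed already checked on admission, Eq. (5))


DbEntry = Tuple[bytes, bytes, str, bool]


def trusted(k1: bytes) -> bool:
    """Stub for Trusted(k1): whether the owner of k1 is a trusted introducer."""
    return True  # a real node would consult its Web-of-Trust configuration


def build_db(log: Iterable[LogEntry]) -> Set[DbEntry]:
    """Serial execution of the log: certifications add entries, revocations remove them."""
    db: Set[DbEntry] = set()
    for e in sorted(log, key=lambda entry: entry.n):  # transactions are totally ordered by n
        entry = (e.k1, e.k2, e.a, e.c)
        if e.cert and trusted(e.k1):
            db.add(entry)
        elif not e.cert:
            db.discard(entry)  # deletion takes effect only if the entry exists
    return db


def effective_view(db: Set[DbEntry]) -> Set[Tuple[bytes, str]]:
    """Rule (8): keep <k2, a> for every entry whose condition evaluates to true."""
    return {(k2, a) for (_k1, k2, a, c) in db if c}
```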

The resulting DBe could be thought of as a certified document registry (or a document store in terms of non-relational databases) provided by layer 2 of the model (Fig. 3). Examples of documents stored in DBe are presented in Fig. 4. Layer 3 is an application layer. All other components of a government e-service reside at this layer. The absence of authoritative nodes acting as CAs and directories requires all blockchain nodes to maintain an independent copy of the certifications database. The described model has the following features:

• Load balancing between database (blockchain) nodes can be performed easily.
• Decentralization. There is no single point of failure. The database is well protected against DDoS attacks by design.
• Complex decentralized trust relationships. Some certification requests to central government certification authorities could be delegated to other authoritative government departments.
• The history of database modifications is available. This provides certification transparency, helping to avoid security incidents caused by certification authority mistakes and misbehavior such as those described in (Wiesinger 2011).

The primary challenge in building a decentralized certifications database according to the defined model is the layer 1 implementation. The creation of a secure, read- and append-only public ledger is a nontrivial task. Applying the base blockchain technology brings some issues discussed in the next section.

4.3 Common Issues and Solutions

The blockchain technology properties and benefits listed above open a new way of building secure and trusted e-government services, for example, citizen identification, certifications, state registries, publishing, online tax and fine processing, elections, etc. However, blockchain technology has significant issues and constraints described below.

A "young" blockchain insecurity. No single miner (the subjects directly writing to the blockchain) should control more than half of the hashing power of the network. New blockchains usually do not have many independent miners; thus, it is easier for a malicious miner to gain control of the majority of the network hashing power (via cooperation with other miners or by purchasing additional computing power) and perform blockchain modification via a 51% attack. This vulnerability is mitigated by involving the computation power of other "mature" blockchains, for example as proposed in (Sanchez and Fisher 2018). Also, security issues related to the 51% attack could be eliminated by applying a consortium blockchain instead of a public blockchain. A consortium blockchain permits public read access, allowing public monitoring and auditing of government e-services, but imposes additional restrictions on write operations and permits authenticated writes only.

Appending a new transaction to a blockchain may take from several minutes to several hours (Ali et al. 2016). The slow writes issue could be solved with a blockchain technology extension having a directed acyclic graph structure (Popov 2017). Multiple interconnected chains of blocks allow for parallel writes to the blockchain (see Fig. 5).
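A toy sketch (our illustration; the structure and field names are assumptions, not the design of (Popov 2017)) of the idea behind such a DAG-structured ledger: each new block may confirm several unconfirmed parent blocks ("tips"), so independent writers can append concurrently instead of competing for a single chain head.

```python
import hashlib
from dataclasses import dataclass
from typing import Dict, List


@dataclass(frozen=True)
class Block:
    parents: tuple    # hashes of the parent blocks this block confirms
    payload: str      # application data, e.g. a serialized transaction

    def block_hash(self) -> str:
        return hashlib.sha256((";".join(self.parents) + "|" + self.payload).encode()).hexdigest()


class BlockDag:
    def __init__(self) -> None:
        genesis = Block((), "genesis")
        self.blocks: Dict[str, Block] = {genesis.block_hash(): genesis}
        self.tips: List[str] = [genesis.block_hash()]  # blocks not yet confirmed by others

    def append(self, payload: str, confirm: int = 2) -> str:
        """Attach a new block that confirms up to `confirm` current tips."""
        parents = tuple(self.tips[:confirm])
        block = Block(parents, payload)
        h = block.block_hash()
        self.blocks[h] = block
        # confirmed parents stop being tips; the new block becomes one
        self.tips = [t for t in self.tips if t not in parents] + [h]
        return h
```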

Fig. 5 Directed acyclic graph of blocks


Fig. 6 Basic blockchain

Fig. 7 Blockchain with floating genesis block

All transactions are readable by all nodes. Privacy issues of public and consortium blockchains could be solved with homomorphic encryption and secret sharing schemes.

A blockchain's size grows linearly with time without the ability to remove old data. Consequences:

• Challenges with blockchain usage on devices with limited storage. Currently, the size of the Bitcoin blockchain exceeds 240 gigabytes.
• Increasing initialization time for new nodes. New nodes have to download and verify the whole blockchain. Presently, these operations for the Bitcoin blockchain could take several days (Ali et al. 2016).

The unlimited blockchain size growth issue could be solved by the floating genesis block enhancement (Bruce 2017; Busygin et al. 2018). In a basic blockchain (Fig. 6), old data cannot be pruned because the current state si = s0 + Δs1 + … + Δsi−1 can be computed only if the initial state s0 from the genesis block and all previous state changes from previous blocks are known. The presented organization of the block header provides a way to trim outdated data from the blockchain without significant security losses, as presented in Fig. 7. A blockchain with a floating genesis block stores not only state changes (Δsi) but also state snapshots (si). With this header structure, outdated data can be pruned (see Fig. 8). This modification can be made without significant security implications because the block header with proof-of-work is preserved. To summarize, it can be stated that mitigation methods and techniques are currently proposed for all identified blockchain issues.
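The pruning idea can be illustrated with the following simplified sketch (our addition; field names and the integer state are assumptions): blocks carry state changes, periodic blocks also carry a full state snapshot, and everything older than the latest snapshot block can be discarded because that block can serve as the new, "floating" genesis.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Block:
    header: str                     # header with proof-of-work, always preserved
    delta: int                      # state change Δs_i (an integer for simplicity)
    snapshot: Optional[int] = None  # full state s_i, present only in snapshot blocks


def current_state(chain: List[Block]) -> int:
    """Replay the chain from the most recent snapshot (or from the genesis state 0)."""
    state, start = 0, 0
    for i, block in enumerate(chain):
        if block.snapshot is not None:
            state, start = block.snapshot, i + 1
    for block in chain[start:]:
        state += block.delta
    return state


def prune(chain: List[Block]) -> List[Block]:
    """Drop everything before the latest snapshot block, which becomes the new genesis."""
    last = max((i for i, b in enumerate(chain) if b.snapshot is not None), default=0)
    return chain[last:]


# Usage: the replayed state is the same before and after pruning.
chain = [Block("h0", 0, snapshot=0), Block("h1", +5), Block("h2", -2),
         Block("h3", 0, snapshot=3), Block("h4", +4)]
assert current_state(chain) == current_state(prune(chain)) == 7
```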


Fig. 8 Pruning excess data from Blockchain with floating genesis block

5 Conclusion

The paper discussed the applicability of a decentralized blockchain approach to building government e-services. Financial, operational, and security benefits of the decentralized approach compared to the centralized one are outlined. The main cybersecurity challenges are identified. The authors show how blockchain distributed ledger technology, introducing data redundancy, information processing transparency, cryptographically guaranteed data immutability, and a distributed consensus mechanism, could be utilized to solve the identified issues.

References

Al-Azazi, S. (2008). A multi-layer model for e-government information security assessment. PhD Thesis, Cranfield University.
Ali, M., Nelson, J., Shea, R., & Freedman, M. (2016). Blockstack: Design and implementation of a global naming system with blockchains. USENIX Annual Technical Conference, Denver, CO, U.S.
Backer, W. C. (2005). E-government security issues and measures. Hoboken, NJ: Wiley.
Bruce, J. (2017). The mini-blockchain scheme. Cryptonite. http://cryptonite.info/files/mbc-schemerev3.pdf
Busygin, A., Konoplev, A., Kalinin, M., & Zegzhda, D. (2018). The floating genesis block enhancement for blockchain based routing between unmanned vehicle ad-hoc networks and elastic computing security services. SIN'18 Proceedings of the 11th International Conference on Security of Information and Networks, Article no. 24.
Callas, J., Donnerhacke, L., Finney, H., Shaw, D., & Thayer, R. (2007). OpenPGP message format. IETF Tools. https://tools.ietf.org/html/rfc4880
Hai, J., & Ibrahim, C. (2007). Fundamental of development administration. Selangor: Scholar Press.
Konoplev, A., Busygin, A., & Zegzhda, D. (2018). A blockchain decentralized public key infrastructure model. Automatic Control and Computer Sciences, 52(8), 1017–1021.
Manoharan, A., & Holzer, M. (2011). E-governance and civic engagement: Factors and determinants of e-democracy. Hershey, PA: IGI Global.


Nikkhahan, B., Aghdam, A. J., & Sohrabi, S. (2009). E-government security: A honeynet approach. International Journal of Advanced Science and Technology, 5.
Ølnes, S., & Jansen, A. (2017). Blockchain technology as a support infrastructure in e-government. Electronic Government. EGOV 2017. Lecture Notes in Computer Science, 10428, 215–227.
Ølnes, S., Ubacht, J., & Janssen, M. (2017). Blockchain in government: Benefits and implications of distributed ledger technology for information sharing. Government Information Quarterly, 34(3), 355–364.
Popov, S. (2017). The tangle. https://www.iotatoken.com/IOTA_Whitepaper.pdf
Sanchez, M., & Fisher, J. (2018). Proof-of-proof: A decentralized, trustless, transparent, and scalable means of inheriting proof-of-work security. Veriblock. https://www.veriblock.org/wp-content/uploads/2018/03/PoP-White-Paper.pdf
Singh, S., & Karaulia, D. S. (2011). E-governance: Information security issues. International Conference on Computer Science and Information Technology.
Wiesinger, S. (2011). Remove Trustwave certificate(s) from trusted root certificates. Bugzilla. https://bugzilla.mozilla.org/show_bug.cgi?id=724929

Green Energy Markets: Current Gaps and Development Perspectives in the Russian Federation

Yury Nurulin, Inga Skvortsova, and Elena Vinogradova

Abstract The article is devoted to the analysis of the current state and prospects for the development of the renewable (green) energy market. The achieved level of technologies and equipment for the production of renewable energy and the main trends in the development of these issues are analyzed. The demand for green energy is analyzed from the point of view of free-market niches for this type of energy. The accessibility of energy systems is considered a key factor for the development of the green energy market. The technical, organizational, and economic problems of ensuring the accessibility of energy systems are analyzed. Existing barriers that prevent the level of grid accessibility necessary for the formation of an effective green energy market are also analyzed. The Getting Electricity index, a component of the Doing Business rating, is used for a comparative analysis of the availability of electricity grids in the world economies and Russia. The Smart Grid concept is analyzed from the point of view of developing the grid's availability.

Keywords Renewables · Green energy · Smart Grids · Energy consumption · Grids accessibility

Selected portions of this chapter have appeared in Nurulin Y.R., Skvortsova I., Vinogradova E. (2020) "On the Issue of the Green Energy Markets Development." In: Arseniev D., Overmeyer L., Kälviäinen H., Katalinić B. (eds) Cyber-Physical Systems and Control. CPS&C 2019. Lecture Notes in Networks and Systems, vol 95. Springer, Cham.

Y. Nurulin (*)
Institute of Computer Science and Technology, Peter the Great Saint-Petersburg Polytechnic University, Saint-Petersburg, Russia

I. Skvortsova
Institute of Industrial Management, Economy and Trade, Peter the Great Saint-Petersburg Polytechnic University, Saint-Petersburg, Russia

E. Vinogradova
Department of Economy and Trade, Peter the Great Saint-Petersburg Polytechnic University, Saint-Petersburg, Russia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Devezas et al. (eds.), The Economics of Digital Transformation, Studies on Entrepreneurship, Structural Change and Industrial Dynamics, https://doi.org/10.1007/978-3-030-59959-1_13


1 Introduction

The key components of any market are product manufacturers, product consumers, and the organizational and technical conditions for the mutually beneficial sale and purchase of a product. A key factor for the first component is product manufacturing technology. Technological progress provides the emergence of new technological solutions that underlie successful innovative projects in the field of renewable and clean energy (Renewables Information 2019; REN21 2018; Energy Efficiency and Renewable Energy 2018). This process is described by a model that Freeman called technological push and which reflects the importance of technology for innovation (Freeman 1995). As technology advances, it stimulates the growth of market demand for technology-based products. The market begins to pull technology (Kline and Rosenberg 1986). In the field of renewable energy, technology is pulled to the market not so much by economic demand as by political requests, which are based on environmental issues, rational nature management, and climate change mitigation. The number of countries with renewable energy targets and support policies increases, and jurisdictions make their existing targets more and more ambitious (Makarov et al. 2016). As a result, the classic market for renewable energy sources, based on the principles of free, mutually beneficial interaction between suppliers and consumers, takes new forms, and the organizational and economic components of this market become most important. In addition to economic mechanisms, political and environmental factors play an important role in the green energy market. Although formally not economic, these factors have a major impact on the green energy economy and help overcome the existing imbalance between demand and supply for green energy. The existence of this imbalance is confirmed by the opinions of many experts, as well as by the widespread use of subsidies and benefits for producers and suppliers of green energy (Islam 2018). Given the above, the article is structured as follows. The first section provides a literature review of the current state of the green energy market components. The next section focuses on grids, which are one of the key elements of the power system. The final sections provide an analysis of the current procedures for connecting to existing networks and present the general conclusion of the study.

2 Literature Review

2.1 Trends in Development of Green Energy Sources

Global trends in renewable energy development are in the focus of expert assessments around the world (Verbeke 2018; Renewables Information 2019). International Energy Agency (IEA) experts point out that, at the moment, renewable energy shows the highest growth rates among all energy carriers.


Fig. 1 Average annual growth rates of world renewables from 1990 to 2017 (Renewables Information 2019)

Solar photovoltaic (PV) power shows the greatest growth dynamics, almost two times more than wind power, which is in second place (Fig. 1). REN21 experts underline that more solar PV capacity was added in 2017 than the net additions of coal, gas, and nuclear combined (Renewables 2018). According to IEA experts, in 2017 global renewable energy consumption increased by more than 5%, three times faster than total final energy consumption. In the power sector, renewables accounted for half of annual global electricity generation growth, led by wind, solar PV, and hydropower. The share of renewable technologies meeting global energy demand is expected to increase by a fifth, reaching 12.4% in 2023, a faster rate of progress than in the 2012–17 period. Renewables cover 40% of global energy consumption growth. Their use continues to increase most rapidly in the electricity sector, reaching 30% of total world electricity generation in 2023 (Renewables Information 2019). While a general increase in energy demand is forecast at the level of 47% by 2040, the consumption of renewable energy sources will increase by 93% in the same period and will reach 17% of the world energy balance (Makarov et al. 2016). It should be noted, however, that these figures include data on all types of renewables, including hydropower and energy production based on solid biomass, which drives renewable energy in India, Brazil, and other countries. The term "green energy," which is included in the title of this article, means that renewable energy sources that have a minimal environmental impact will be the subject of further analysis. Among all renewables, wind and solar PV generators meet this criterion.


Analysis of numerous publications revealed the following feature. Experts from organizations that promote green energy ideas and technologies mainly appeal to relative indicators that show the high dynamics of green energy production growth. Their reports look quite optimistic and make a convincing case for the prospects of green energy both in terms of achieving sustainable development goals and in terms of business (Vetroenergetika Rossii 2018). As an example, according to their conclusions, the technical electric power potential of wind energy in Russia is about 17,101 billion kWh/year, which is more than 10 times higher than the amount of electricity generated by all power plants in the country in 2018 (1090.9 billion kWh/year). Such comparisons, which are present in publications of experts from different countries, seem incorrect to us, since the question remains open as to which part of the existing wind or solar energy potential can be effectively used with existing technologies and in existing economic conditions. These conclusions are based on the analysis of increasing demand for renewables but do not analyze the nature of this demand (Regulatory and Business Model Reform 2018). It consists of two independent components. The first is the increase in energy consumption as a whole, and the second is environmental protection. These components are independent and often contradictory from an economic point of view. The second component has a non-market nature and influences the market indirectly through non-economic mechanisms (policy, stakeholder behavior, etc.). The special status of renewables from the point of view of achieving sustainable development goals forms an additional influx of budget financing into green energy projects. This makes renewable energy attractive in a number of countries, even in cases where the initial economic indicators (excluding support mechanisms, taxation, etc.) are more than 50% worse than when using fossil fuels (Cheung 2018). Experts from organizations that develop traditional energy mainly analyze the total amount of energy produced by traditional and renewable sources. Their conclusions can be summarized as follows: despite the existing growth, in the near future renewable energy will remain significantly inferior to traditional forms of energy production in its effect on global energy consumption.

2.2 Costs of Technological Base for Green Energy

The main trend in the improvement of renewable energy technologies is reducing costs while increasing capacity and reliability. The focus of further analysis will be on solar and wind generators, which correspond to this trend (Jäger-Waldau 2018). According to BNEF, there has been a 28.5% drop in the price of crystalline-silicon PV modules for every doubling of cumulative capacity since 1976 (Bloomberg NEF 2018). This trend reflects the general pattern of reducing the cost of the microelectronics component base while increasing its functionality as serial production grows, which is clearly demonstrated by the current market for computer technology.
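As a purely illustrative note (our sketch, with hypothetical reference values rather than data from the cited report), the arithmetic behind such experience-curve figures is simple: with a learning rate of 28.5%, the price falls by the factor (1 - 0.285) for each doubling of cumulative capacity.

```python
from math import log2


def module_price(cumulative_capacity: float,
                 reference_capacity: float,
                 reference_price: float,
                 learning_rate: float = 0.285) -> float:
    """Price per unit after cumulative capacity grows from the reference level."""
    doublings = log2(cumulative_capacity / reference_capacity)
    return reference_price * (1.0 - learning_rate) ** doublings


# Hypothetical example: ten doublings of cumulative capacity reduce the price
# to roughly (1 - 0.285)^10, i.e. about 3.5% of the reference price.
print(round(module_price(1024.0, 1.0, 1.0), 4))  # ~0.0349
```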


The use of solar and wind energy creates additional challenges that are absent, or far less pronounced, in traditional energy. First of all, this refers to the uneven production of these types of energy. The intensity of wind and solar energy can drop to zero during the day. In addition to daily irregularities in electricity production at renewable energy facilities, seasonal unevenness caused by changes in the intensity of solar radiation and wind energy in different periods of the year should be noted. Because of this, the output power of installed generators may differ several times over, which creates a rather high threat to the sustainable functioning of energy systems, as forecasting hourly power generation within the day-ahead market calculations becomes impossible. This poses additional challenges for ensuring the flexibility of the power system, associated with the need for redundant power storage. In fact, this establishes an additional hidden investment component in energy sector projects, which is often not taken into account when directly comparing the costs of generating electricity from various sources. Batteries, which are necessary to replenish energy in the dark or in calm weather, have become one of the limiting factors for improving the efficiency of green energy. Their production and disposal can have a negative impact on the environment, and their limited service life and the need for replacement cause additional costs during operation. The effectiveness of batteries is increasing and their price is decreasing because of technology development and increasing demand. Both green energy technologies and electric vehicles (EVs), where batteries are also a key element, have a significant impact on this process. BNEF expects EV sales to rise from 1.1 million at the moment to 30 million by 2030, which will cause a substantial decline in battery pack costs (Bloomberg New Energy Outlook 2018). In addition to traditional batteries, thermal batteries are also used to store energy generated by solar or wind generators. They store thermal energy at times of reduced demand and supply it to the heating network when heat consumption increases. Such an increase, for example, is regularly observed in the housing sector, when many consumers take a bath or shower in the mornings at the same time. Traditional heat energy sources do not have the necessary dynamics to meet changes in demand, and providing an additional supply of thermal power requires additional costs for maintaining a necessary reserve of heat.

The most important trend in the development of world energy is the increase in the share of electricity in final energy consumption. Electricity is the most convenient form of energy to use and will crowd out all others. Therefore, the demand for electricity is growing in all countries of the world, without exception, even in those OECD countries that stabilize their primary energy consumption. The electric power industry has a pronounced regional character, and electricity is mainly produced either in the regions of consumption or in the regions where sources exist. This is also true for renewable energy sources. We can see two main types of solar and wind power projects. These are either projects of large wind and solar parks, or projects of individual generators in isolated regions where there is demand but no generation. The question remains open under what conditions projects of individual (or small) solar or wind generators in regions where electricity supply networks already exist can be economically profitable.


The electric power industry is the sector in which the main competition between all types of fuel occurs, and this competition will intensify. The improvement of new technologies leads to a decrease in the specific cost of renewable energy, while the rise in prices of traditional energy resources, on the contrary, will push up the costs of gas and coal generation, bringing all technologies into a rather narrow range of competition. Wind and solar have reached grid price parity and are moving closer to performance parity with conventional sources (Motyka et al. 2019).

3 Methodology

In the energy market in general, and in the green electricity market in particular, grids play an essential role, acting as an equal and sometimes even the most important player. Like any commodity, energy is needed not where it is produced but where it is consumed. When goods are delivered, the consumer is an active participant in the process, choosing the place where he buys the goods and moving them at his discretion. When the supply of energy is organized, the consumer participates in the process only at the initial stage, when the connection to the grid occurs. Thereafter, his participation in the supply is limited to paying for the supplied energy, and the possibility of changing the supplier is limited. Thus, the internal properties of the grid (technical parameters, business processes, etc.) as well as the regulatory framework have a significant impact on the efficiency (or even on the very existence) of the energy market. Substantial development of these internal properties is provided in the framework of the Smart Grid concept, which has the following attributes (European SmartGrids Technology Platform 2006):

Flexibility The grid must adapt to the changing needs of electricity producers and consumers. Technically, it means online reconfiguration of the grid if necessary. This property of a smart grid can compensate for the shortcomings of solar and wind energy mentioned above.

Reliability The grid must ensure the security and quality of the electricity supply in accordance with the requirements of the digital society. The possible use of common communication equipment and process control systems for groups of RES power plants creates a risk of a common point of failure in the communication channel diagram for groups of power plants. It follows that even now, with the current insignificant share of renewable energy sources in the total output of the energy system, the entities of operational dispatch control may impose certain requirements on the operational maintenance of renewable energy sources. So, when parameters of the electric power regime of the system go out of the range of permissible values, the control center for wind and solar power plants should provide the ability to change or disconnect the load by means of remote telecontrol from the dispatch center in whose operating area it is located. This applies both to technical and organizational issues of RES integration into the grid.


Cost-Effectiveness A synergistic effect should be provided by a combination of innovative technologies for building the grid together with effective business models and management of the grid functioning.

Availability The main problems in connecting to the existing grid are caused by the lack of free capacity at the connection point. From an organizational point of view, availability means that electricity suppliers are obliged both to provide electricity to all consumers who have contacted them and to conclude energy sale and purchase agreements with the owners of microgeneration, including renewables. The grid should be accessible to new users, and new users could be generating sources, including sources with zero or reduced CO2 emissions.

Together with the regulatory framework, the last attribute (availability) will be the focus of the further analysis, which is based on the Doing Business rating developed by the World Bank (Doing Business 2018). The subject of analysis will be the Getting Electricity indicator for the Russian Federation.

4 Discussion

According to data from the Russian Association of Wind Industry, by the end of 2018 there were 21 wind electricity stations in Russia with installed capacities from 275 kW to 35 MW and a total capacity of 139 MW (Vetroenergetika Rossii 2018). By 2023, the total capacity of wind electricity stations installed in Russia will be 3.5 GW, which corresponds to the world trend (high growth with a relatively low share of total energy consumption). Similar data are available for solar electricity stations. For comparison, in 2018 the total capacity of wind power plants in the world reached the level of 539 GW. This means that the conditions in which green energy is developing differ significantly between Russia and other countries. Russia is one of the largest suppliers of energy resources to the world market. A unique factor that has a significant impact on the development of the Russian energy sector is its developed hydro and nuclear energy. In addition to large explored deposits of oil and gas, this determines relatively low electricity prices in Russia, which do not provide sufficient market pull for the development of green energy. Nevertheless, by 2024, dozens of wind farms with installed capacities of 16 to 200 MW will be built in a number of Russian regions. Three and a half years after the advent of the Paris Climate Agreement, Russia ratified it in September 2019. As a result, the attention of Russian authorities to the development of energy, and wind energy in particular, will increase. In 2018, the Government of the Russian Federation approved the Rules for the Technological Operation of Electric Power Systems (Postanovlenie Pravitelstva 2018). This comprehensive document formulates the basic principles and requirements for the operation of the energy system as a single technologically complex facility. It is noteworthy that this regulatory document of the Russian energy sector takes into account the steadily growing influence of renewable energy sources on the electric power industry and includes the fundamental requirements for wind farms as generating facilities.


Table 1 Changes in getting electricity procedures in Russia (Doing Business 2018) (authors' creation)

Year  Reform
2012  Getting electricity became cheaper due to lower tariffs for connection.
2014  Getting electricity became simpler and less costly due to setting standard connection tariffs and eliminating many procedures previously required.
2016  The process of obtaining an electricity connection became simpler, faster, and less costly due to eliminating a meter inspection by electricity providers and revising connection tariffs.
2018  The number of procedures necessary for getting electricity and the time to complete them were reduced.

We can say that, with the approval of the Rules, renewable generation facilities began their transition from the status of "non-traditional" sources, which they had in the recent past, to the status of full participants in the energy system. It is difficult to imagine more meaningful and effective state support of renewable energy in the regulatory environment than recognition of their role in the form of a high-level government act. According to the IEA study, special measures for renewable energy integration in the energy system are usually not required when its share in the annual output does not exceed 3%, unless the renewable energy sources are very localized in the energy system. In the second stage, when the share of renewables reaches 3–15%, it is necessary to adapt the available regulatory resources, technologies, and methods for managing the energy system. At the third stage, when the share of renewable energy exceeds 15% of the annual output, as well as at further stages, a deep restructuring of the energy system and the introduction of new tools to support the operation of the energy system are already required (Renewables Information 2019). Despite the fact that the annual production of electricity in the UES of Russia from renewable energy currently corresponds to the first stage, a possible problem of strong localization of renewable energy sources is already coming to the fore. According to experts, by 2024 a wholesale market of electricity produced by wind power stations with a volume of 750 billion rubles and a market for high-tech power engineering with an investment potential of up to 250 billion rubles will be fully formed in the Russian Federation, and a mature innovative wind turbine manufacturing industry and a developed service infrastructure for the wind energy market will be created (Obzor Rossiyskogo Vetroenergeticheskogo Rynka 2018). For these forecasts to become reality, an appropriate regulatory framework that regulates procedures for connecting new suppliers and consumers of green energy to existing electricity grids is needed. One of the indicators that show the level of development of this regulatory framework is the grid openness (availability) for users. During the last 7 years, the Russian Federation conducted a series of reforms to raise the availability of existing grids for new customers (Table 1).


Table 2 Getting Electricity, Saint-Petersburg, standardized connection (Doing Business 2018) (authors' creation)

Indicator                                                      Saint-Petersburg   Europe and Central Asia   OECD high income   Best regulatory performance
Procedures (number)                                            2                  5.3                       4.5                3 (25 economies)
Time (days)                                                    80                 110.3                     77.2               18 (3 economies)
Cost (% of income per capita)                                  6.5                325.1                     64.2               0.0 (3 economies)
Reliability of supply and transparency of tariff index (0–8)   8                  5.5                       7.5                8.0 (27 economies)

As a result, the Russian Federation's rank in the Doing Business rating increased substantially, and in 2019 only one indicator (the time for connection) remained far from the desired value (Table 2). Obtaining an electricity connection is essential to enable a business to conduct its most basic operations. Whether electricity is readily available or not, the first step for a customer is always to gain access by obtaining a connection to the electric grid. In many economies, the connection process is complicated by the multiple laws and regulations involved, covering service quality, general safety, technical standards, procurement practices, and internal wiring installations. The Doing Business rating provides a common average picture of the problems that a consumer faces when connecting in a standard way. In reality, the consumer may face unexpected problems that can significantly increase the cost and time to connect. A survey of 52 organizations that implemented connection projects in 2017–2019, conducted in St. Petersburg, showed that these problems can appear before the first formal procedure (submit an application to the responsible organization and await technical conditions and a connection contract). More than half of the companies surveyed noted the difficulty of collecting the documents necessary for technological connection. Once the connection contract is signed by the customer, the responsible company prepares the project design for building a network for connection, obtains all approvals for construction work (such as a permit required for the laying of underground or overhead lines), and carries out all the external works according to the contract and the technical conditions. In an urban setting, another typical difficulty named by the companies surveyed is the presence of a large number of other communications, which entails the search for an extraordinary design solution for the passage of intersections with heating lines, water supply, telephony, electricity, sewers, and highways. As a result, the typical duration and cost may be exceeded several times. The second formal procedure (receive external works and final connection) may also cause unexpected difficulties in the absence of sufficient capacity and the resulting need for the construction of new or modernization of existing substations.


Sixty-two percent of surveyed companies used a temporary connection for the period of work so that the business would not stand idle while waiting for the technological connection. Most of them rented traditional mobile diesel power plants, and only three used wind and biomass generators. All respondents from this group noted difficulties in choosing a specific option for implementing a temporary connection due to the lack of necessary information. Formally, the conducted survey is not related to the green energy market because it concerns only electricity consumers, who are generally indifferent to the type of energy source: the connected electricity must be cheap, high-quality, and reliable. At the same time, it clearly shows that prospective green energy suppliers will face similar or even greater difficulties when connecting to existing grids to sell electricity. By analogy with the problems of connecting new consumers, it can be assumed that most of the problems for green energy suppliers will be of an organizational nature.

5 Conclusions

1. At the moment, the green energy market in Russia is at the initial stage and requires strong state support in the economic and legislative spheres.
2. The level of openness and accessibility of existing grids for new customers, and especially for new suppliers, could be a restraining factor for the development of green energy markets. To counter this, targeted organizational measures are needed to develop grid accessibility. Best practices in this sphere should be studied and benchmarked.
3. The Doing Business ranking approach, which has proved its effectiveness for the analysis of different aspects of business, should be extended to the accessibility of existing grids for green electricity suppliers.

Acknowledgments The article is prepared in the frame and with the financial support of the KS1024 project of the CBC ENI program.

References

Bloomberg New Energy Outlook. (2018). Accessed Sep 25, 2019, from https://bnef.turtl.co/story/neo2018?src=pressrelease&utm_source=pressrelease
Cheung, A. (2018). Power markets today. Bloomberg NEF.
Doing Business. (2018). Accessed Sep 25, 2019, from http://www.doingbusiness.org/en/reports/
Energy Efficiency & Renewable Energy. (2018). Renewable electricity generation. Accessed Sep 25, 2019, from https://www.energy.gov/eere/office-energy-efficiency-renewable-energy
European SmartGrids Technology Platform. (2006). Vision and strategy for Europe's electricity networks of the future. Luxembourg: Office for Official Publications of the European Communities.
Freeman, C. (1995). The 'National System of Innovation' in historic perspective. Cambridge Journal of Economics, 19, 5–24.


Jäger-Waldau, A. (2018). Rooftop PV and self consumption of electricity in Europe: Benefits for the climate and local economies. EEI, 3, 16–20.
Islam, S. (2018). Key European solar markets trends from the installers' perspective. EEI, 3, 24–25.
Kline, S. J., & Rosenberg, N. (1986). An overview of innovation. In R. Landau & N. Rosenberg (Eds.), The positive sum strategy (pp. 275–305). Washington: National Academy Press.
Makarov, A. A., Grigoriev, L. M., & Mitrova, T. A. (2016). Prognoz razvitiya energetiki mira i Rossii. Moscow: INEI RAN pri pravitelstve RF, 200 p.
Motyka, M., Slaughter, A., & Amon, C. (2019). Global renewable energy trends. Accessed Sep 25, 2019, from https://www2.deloitte.com/insights/us/en/industry/power-and-utilities/global-renewable-energy-trends.html
Obzor Rossiyskogo Vetroenergeticheskogo Rynka za 2018 God. (2018). Accessed Sep 25, 2019, from https://rawi.ru/wp-content/uploads/2019/03/rawi-report-2018-full.pdf
Postanovlenie Pravitelstva RF. (2018). N 937 Ob utverdgdenii Pravil tehnologocheskogo funktsionirovaniya elektroenergeticheskih sistem. Accessed Sep 25, 2019, from http://www.consultant.ru/document/cons_doc_LAW_304807/
Regulatory and Business Model Reform. (2018). Accessed Sep 25, 2019, from https://rmi.org/ourwork/electricity/regulatory-business-model-reform/
Renewables 2018 Global Status Report. (2018). Accessed Sep 25, 2019, from http://www.ren21.net/gsr-2018
Renewables Information. (2019). Overview. Accessed Sep 25, 2019, from https://webstore.iea.org/renewables-information-2019-overview
Verbeke, S. (2018). Realising the clean energy revolution in the existing building stock. EEI, 28–31.
Vetroenergetika Rossii, V. T. (2018). Accessed Sep 25, 2019, from https://rawi.ru/wp-content/uploads/2019/03/rawi-report-2018-full.pdf

Energy Efficiency in Urban Districts: Case from Polytechnic University

Yury Nurulin, Vitaliy Sergeev, Inga Skvortsova, and Olga Kaltchenko

Abstract The article presents the results of a case study of the Peter the Great St. Petersburg Polytechnic University (SPbPU) campus in the frame of the INTERREG BSR project AREA21. SPbPU is considered as an Energy Improvement District (EID), an object where energy efficiency measures are applied to real estate objects in private, state, and regional ownership, which is typical of a large number of territorial entities of St. Petersburg. Using the stakeholder identification method, the article analyzes the behavior of the EID stakeholders at SPbPU. The special features of the EID that hinder motivating the end users to save resources are highlighted. In combination with the SWOT analysis, this allows us to identify gaps and problems in creating a resource-saving motivation system for the main users in the EID. The results of the study form the basis for the development of an EID strategic plan with the close collaboration of key stakeholders. The invariant components of the case study are of interest to other EIDs.

Keywords Energy efficiency · Stakeholder motivation · Strategic energy planning

Y. Nurulin (*)
Institute of Computer Science and Technology, Peter the Great Saint-Petersburg Polytechnic University, Saint-Petersburg, Russia

V. Sergeev
Peter the Great Saint-Petersburg Polytechnic University, Saint-Petersburg, Russia
e-mail: [email protected]

I. Skvortsova · O. Kaltchenko
Institute of Industrial Management, Economy and Trade, Peter the Great Saint-Petersburg Polytechnic University, Saint-Petersburg, Russia

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Devezas et al. (eds.), The Economics of Digital Transformation, Studies on Entrepreneurship, Structural Change and Industrial Dynamics, https://doi.org/10.1007/978-3-030-59959-1_14


1 Introduction

"On average, every €1 invested in energy efficiency saves €3 over the lifespan of a technology" (Frassony 2019). Various versions of similar statements can be found both in advertising brochures and in analytical studies. They reflect the unanimous expert opinion about the prospects of energy-saving technologies, and at the same time indicate the lack of tools that would allow calculating the economic effect of energy-saving measures. The reason for this is not a lack of research in the area of energy production, transfer, and use, but rather the complexity and versatility of the problems that form the potential for energy saving. Energy consumption in the residential sector is implemented in the framework of complex systems that belong to the class of socio-technical systems and have technical, economic, and social subsystems.

The technical components of the system are the equipment that ensures the supply and metering of energy consumption by end users. Modern innovation technologies provide a rather high level of development of the technical components. Smart Light, Smart Heating, and Ventilation: these titles reflect the main direction of the development of the technical subsystem of energy consumption in the residential sector. From the point of view of energy saving, the objective function of this subsystem is to minimize energy consumption through a general increase of equipment efficiency, as well as a reduction (up to a complete cessation) of energy consumption in the absence of a consumer. The necessary condition is the requirement to ensure the supply of energy in the mode of peak consumption and average loads.

The economic components of the system are financial and organizational mechanisms that regulate the consumption of energy resources in the context of consumption volumes, time, and spatial frameworks of energy consumption by end users. From the point of view of energy saving, the objective function of this subsystem is to stimulate the minimization of energy consumption. A necessary condition for the effective functioning of this subsystem is the compliance of the general level of the financial burden on the energy consumer with the general level of economic development and the financial condition of energy end users. Too high tariffs or financial restrictions on the use of resources can be a barrier to the development of the economy or worsen the living conditions of the consumer (the greatest energy savings will be achieved if you refuse to use any electrical appliances due to the high cost of electricity). On the contrary, a financial burden of 0.5–1% is unlikely to serve as an effective economic incentive to reduce energy consumption.

It is necessary to underline the lack of balance of economic interests among all stakeholders of the "production-supply-use" energy system. In fact, only end users who pay for consumed resources have a direct economic interest in energy saving, since it leads to a reduction of their payments. On the contrary, energy manufacturers and suppliers are interested in increasing energy consumption since it leads to increasing their profit.


The task of ensuring a balance of interests for all stakeholders in the energy system is even more complicated in cases where end users of energy resources do not pay for the consumed resources. This is typical for users of public buildings, where structural units of organizations, as well as individual users (university students, visitors to polyclinics, schoolchildren, etc.), do not pay directly for the energy consumed.

The social components of the system reflect the behavioral characteristics of the end users of energy resources. They are significantly affected by the financial and economic mechanisms that regulate energy consumption. However, issues of end-user behavior cannot be reduced only to the issues of tariff optimization for consumed energy. Obviously, in order to optimize the activities of this subsystem, it is necessary to maximally activate non-economic mechanisms of stimulating energy saving, which could be considered the objective function of this subsystem.

The above shows that to obtain the maximum effect from energy-saving measures at a district level, a systematic approach is needed, based on a comprehensive analysis of all the components of the EID as a socio-technical system of energy saving.

2 Literature Review

Cities and urban districts with their vast building stock and infrastructure play a key role in reaching European energy efficiency targets. They represent crucial spaces with great energy efficiency potential. Nevertheless, the transition towards high energy efficiency cities is often hampered by sectoral fragmentation and lack of cooperation between public authorities, energy utilities, and property owners. Currently, a number of project initiatives aimed at a comprehensive solution to the problem of improving energy efficiency are being implemented all over the world, including in Europe and Russia. Different aspects of energy-saving problems, including the housing and infrastructure sectors, are studied in these projects.

2.1 Technology Aspects of the Energy Supply in Housing

The common trend in the development of energy supply for housing is a combination of centralized and distributed energy sources (Directive 2018). In general, the cost of producing a unit of energy decreases as the volume of production increases. This is one of the main reasons for the attractiveness of large centralized energy plants. At the same time, energy is needed at the place where it is consumed, not at the place where it is produced. The cost of transferring energy from producer to consumer may exceed the savings that a large remote producer provides compared to a small producer located nearby. In combination with the requirements of environmental protection and climate change mitigation, this creates a steady demand for the deployment of small generators based on renewable energy in urban areas.


Progress in the fields of digitization, renewable energy production, electrical energy storage, materials, and technologies for the manufacturing of components for energy production and use provides an opportunity to increase the efficiency of energy consumption in the residential sector (Renewables 2018). All experts unanimously note that at the moment renewable energy shows the highest growth rates among all energy carriers. While a general increase in energy demand is forecast at the level of 47% by 2040, the consumption of renewable energy sources (RES) will increase by 93% in the same period (Makarov et al. 2016). Among all renewables, solar and wind generators show the highest growth dynamics. An important trend in their development is reducing costs while increasing the capacity and reliability of solar and wind generators (Jäger-Waldau 2018). According to BNEF experts, there has been a 28.5% drop in the price of crystalline-silicon PV modules for every doubling of cumulative capacity since 1976 (Cheung 2018). This trend forms economic conditions for the effective use not only of large solar plants, but of distributed solar generators as well.

Solar and wind generators have known drawbacks caused by possible interruptions in generation during dark hours or in the absence of wind. In addition to the daily irregularity of electricity production by these types of energy sources, a seasonal component should be noted, caused by changes in the intensity of solar radiation and wind energy at different times of the year. It is also worth noting the significantly different potential of solar and wind energy in different geographical areas (Renewables 2019). Because of this, the average daily power production may vary by several times. Since the energy system must withstand peak loads, this causes additional hidden investment costs, which are often not taken into account when comparing the costs of generating electricity from different sources. Batteries that are needed to replenish energy in dark or calm weather have become one of the limiting factors for increasing the efficiency of solar and wind generators. Their production and disposal can have a negative impact on the environment, and their limited service life and the need for replacement cause additional costs during RES operation. As BNEF points out, the efficiency of batteries increases and their price decreases due to the development of technology and growing demand (Bloomberg 2018). In addition to RES, this demand is formed by electric vehicles, where batteries are one of the key elements. BNEF expects that sales of electric vehicles will increase from 1.1 million currently to 30 million by 2030, which will lead to a decrease in the basic cost of batteries.

In centralized energy supply systems, energy production is part not of the housing sector but of the industrial sector. The housing and utilities sector consumes energy resources, and it does not matter how the energy has been produced. The main things are the volume, price, and reliability of supply. RES development changes the situation: energy generators come to the housing sector and become a part of the housing infrastructure. Solar energy, wood, and wind are the most widely used RES. Wood is used as fuel for district heating, both centralized and local, and for the heating of individual buildings. Solar and wind generators are used for electricity production.


Almost all of the electricity generated by local communities and located in urban areas comes from renewable and environmentally friendly energy sources, whereas the remaining electricity is generated by combined heat and power plants working in cogeneration mode. In addition to the technical issues mentioned above, the problem of connecting to existing energy grids is essential for the effective use of distributed RES in housing. This problem is not so much technical as organizational and economic in nature. The problem is absent if the energy produced by the RES is completely consumed by its owner (for heating separate buildings, for lighting common premises or house territories, etc.). As mentioned above, in order to increase the economic efficiency of renewable energy sources, it is necessary to increase their capacity, and when the generated capacity exceeds the owner's own consumption, the task of selling surplus capacity to the grid arises. The transition to a decentralized energy supply system provides a number of advantages but raises additional questions for the economic and social subsystems of the complex energy-saving system.

2.2 Economic Efficiency of RES

The main discussions on the economic efficiency of RES that can be used in housing revolve around the cost of the energy produced. While some experts state that "renewables are reaching price and performance parity on the grid and at the socket" (Motyka et al. 2018), others underline that high rates of growth in the production of renewable energy are largely due to significant state support (Giannakopoulou 2017; Islam 2018). It should be noted that currently there are no convincing calculations of the cost of energy produced by RES that take into account the entire life cycle of generators and the system as a whole. Most experts are of the opinion that in the current economic conditions and in the near future, effective support measures from the state are necessary to ensure the economic efficiency of RES. These measures are both economic and non-economic (organizational) in nature. Economic measures reduce the financial burden on renewable energy producers and owners (targeted subsidies, preferential tariffs, tax preferences). In the EU, these issues are well developed and continue to evolve in the frame of INTERREG BSR, ENI CBC, and Horizon 2020 projects. The project EFFECT4buildings develops financial tools and methods to improve profitability, facilitate the financing of energy investments, and lower the risks of implementing energy efficiency measures (retrofitting, upgrading, and deep renovation) in buildings (EFFECT4buildings 2018). The CBC ENI project Green Regional Market Development (Green ReMark) is aimed at the study of the current state of the green energy regional market components, as well as of the perspectives and existing barriers for this market's development (Green ReMark 2019).


Compared with the EU, the instruments of state support for RES are not widely represented in Russia. Expensive "green tariffs" for the supply of electricity apply only to suppliers who have passed a strict competitive selection and will receive appropriate support from the state (Kommersant 2018). Small producers of green energy cannot participate in this selection and are therefore deprived of the ability to compete with traditional suppliers. Relatively low tariffs for energy supplied for municipal needs also do not stimulate the use of RES.

2.3 Social Aspects of Energy-Saving Measures

The importance of social aspects for energy efficiency is confirmed by many publications, which focus on the problem of end users' motivation for energy saving (Nurulin and Glazunova 2015), on the analysis of stakeholders involved in the supply and use of energy resources (Reed et al. 2009; Cuppen et al. 2010), and on the role of instruments for social communication in forming an active position of end users of energy (Chung and Crawford 2016). The following aspects of social topics related to energy efficiency are studied (Prieto 2018).

1. Organizational structure of social innovation for energy efficiency. These studies focus on consumer and producer communities and energy conservation associations. A significant portion of these communities is organized into well-structured networks that bring together citizens to share involvement in renewable energy or energy efficiency projects.
2. Measures to ensure comfortable living conditions in conditions of energy shortage. Today, energy scarcity still affects several million households in Europe, where residents are forced to live in homes that are not protected from moisture and mold, while paying large utility bills to ensure proper heating. This situation largely depends on the geographical location and social status of energy consumers, which leads to significant differences in the levels of access to efficient energy supply and modern technologies.
3. Social aspects of financing energy-saving projects with the attraction of citizens' investments:
   – Shareholding participation in energy communities.
   – Crowdfunding initiatives in the sectors of renewable energy and energy efficiency.
   – Citizens who finance and implement innovation projects in the field of RES.
   – Allocation of public resources via vouchers or grants.
   – Donations of business angels, i.e., citizens voluntarily providing financial support to energy efficiency projects without expecting any return.


4. Educational aspects, referring to raising awareness and contributing to increasing the acceptance of renewable energy and energy efficiency measures and projects. This mode of social innovation, which can exploit synergies with cultural and entertainment activities addressing general messages such as climate change mitigation and environmental protection, is studied in a number of EU projects. The Horizon 2020 project eTEACHER aims to reduce energy consumption through more conscious energy behavior of energy end users in a wide range of buildings. The project seeks to raise the energy awareness of building users by tailored methods and strategies and empowers end users to achieve energy savings and improve indoor comfort (eTEACHER 2017). The INTERREG project Baltic Smart Energy Areas for the twenty-first century (AREA21) seeks to model energy-efficient urban areas of the future, adopting collaborative stakeholder engagement processes in the strategic planning and implementation of energy solutions. Specifically, the project brings together public authorities, energy providers, property owners, and citizens to find and to apply the best solutions for saving energy to decrease CO2 emissions (AREA21 2017).

3 Methodology

The AREA21 approach to the complex problem of strategic planning of energy-saving measures at the level of urban districts is based on two keystones:

– The concept of an Energy Improvement District (EID) as a limited area where both consumers and producers or suppliers of energy resources consumed by the housing sector are compactly located. The focus of the EID concept lies in the development of a framework for the promotion of joint activities of both private and public stakeholders in energy efficiency planning and implementation, focused on a certain urban area. The concept promotes the pooling of competences, activities, and ideas for energy efficiency planning and implementation. By promoting network and consensus-building activities, the concept fosters the identification of tailor-made solutions for the area, the piloting of new projects, and the establishment of both informal cooperation and formalized partnerships. The concept offers the opportunity to involve public property owners and citizens as building owners and users in the initiation of energy efficiency measures (AREA21 2017).
– The comparative analysis of seven pilot areas located in different countries of the Baltic Sea region. Peter the Great Saint-Petersburg Polytechnic University (SPbPU) is a Partner of the AREA21 project, and one of these pilots is the SPbPU campus, which is the subject of further analysis.


The selection of an EID is based on the following criteria developed by the project Partners.

Criteria 1. Housing area with a preference for multi-family buildings: the presence of cooperatives and associations, which reduces the number of separate owners.
Criteria 2. Existence of public buildings within the EID.
Criteria 3. Accessibility of data on the area, including energy demand and consumption (current and future), age and refurbishment of buildings, etc.
Criteria 4. Strong interest and commitment of stakeholders in the area.
Criteria 5. Integration in current and/or future redevelopment plans or strategic documents. Capacity of involved parties to influence the planning and implementation of the EID.
Criteria 6. Selection and identification of the EID through mapping areas with the highest energy losses and/or highest energy consumption, with distinct identifiable and unifying features such as similar building typology, the spatial configuration of the urban structure, common goals, etc.
Criteria 7. Unifying features, a common goal, or a need which considers the building stock, the infrastructure, and the people as one system.
Criteria 8. Area of a manageable size that considers the unifying features listed above.

In order to provide comparable data, the analysis of each EID was done in the following steps.

Step 1. Status Quo (Collecting the Information on the Object of Analysis):
• City Level and Administrative Unit Level.
• EID Level: Site analysis (urban structure; urban infrastructure; demographic structure; existent plans). Stakeholders framework (stakeholder identification; existent cooperation formats). Climate conditions. Energy baseline situation (energy structure; district’s energy flows; state of the supply network).
• Building Level (sample-based approach): Current building performance (building overview; comfort status quo; general energy performance status quo). Occupant/end users’ behavior (final energy consumption; energy cost; CO2 emissions).

Step 2. Analysis of the EID Planning Process:
• Stakeholders (stakeholder analysis; integration of multiple property owners).
• Energy Planning Process (passive efficiency potential; renewable energy production capacity; energy efficiency potential; analysis of the EID’s new/planned energy flows).


• End users’ behavior (behavioral strategies; ICT tools).

Step 3. Assessment of EID Potential:
• Barriers and Interpretations (barriers from existing instruments; new formal instruments; new informal instruments and networks).
• Energy Assessment (energy-efficient measures; energy-saving potential; energy balance comparison: status quo versus EID).
• Integrated Energy Planning Assessment (local condition analysis for EID integrated energy planning; benefits of implementation; assessment: policy framework versus anticipated results; stakeholder involvement).

Step 4. Goal Formulation:
• Goals check (goal formulation).
• Stakeholders’ responsibilities towards the goals.

Step 5. Action Plan Development.

The first of the above steps is aimed at the analysis of the EID’s external environment. It provides data on opportunities and threats for the SWOT analysis carried out at Step 3. The second step provides the necessary input for defining the strengths and weaknesses of the analyzed EID from the point of view of energy efficiency. The standard SWOT procedures realized at Step 3 provide the formulation of goals and the development of an action plan to achieve these goals (Steps 4 and 5). Since the social aspects of energy saving are considered one of the main focus areas of the AREA21 project, the analysis of stakeholders involved in strategic energy planning at the district level is one of the key tasks. To solve this task, the Stakeholder Map was developed as a special instrument for visualizing stakeholders’ roles according to their main intervention in the life cycle of an EID, as primary or secondary stakeholders (Reed et al. 2009). The following Intervention Categories in the EID life cycle were selected as the main elements of the Stakeholder Map (Castillejos et al. 2018):

Category 1: Strategic Policy Development. This category comprises energy sector policy making on environmental protection and spatial planning to promote EID goals and development. It has a strategic focus, identifying the needs of stakeholders and using stakeholder consultation to review, evaluate, and monitor the policy in order to make further improvements.

Category 2: Regulation and Financing. This category comprises practical rules for everyday work in the fields of energy, environmental protection, and spatial planning, as well as financing models for EID development. Financing includes private investments, incentives, and other funding programs.

Category 3: Cooperative Energy Planning. This category includes the communications that will be undertaken with EID stakeholders, including notification of project updates and cooperative formats


such as round table discussions and online forums. The communication material will address all matters related to energy planning for the EID.

Category 4: Implementation. This category includes the necessary technology delivery and energy supply for the EID, as well as the people and organizations implementing the EID.

Category 5: End Use, Management, and Maintenance. This category involves end users such as beneficiaries and users of the EID, as well as the groups responsible for the management and maintenance of the EID.

4 Results and Discussion

4.1 EID Selection

EID Polytechnic, which is one of the pilot sites selected for comparative analysis in the frame of AREA21, is located in the northern part of Saint-Petersburg, in Kalininskiy district, Academichesky municipality, at the SPbPU campus (EID Polytechnic 2018). The EID was selected as a result of the analysis of different areas of Saint-Petersburg on the basis of the selection criteria listed above.

Criteria 7: Unifying Features, Common Goal, or Need. The following technical subsystems play a key role for energy efficiency in Polytechnic.

Electricity Supply Subsystem. There are no electricity generators; electricity is supplied from the city, and small solar and wind generators are used only for testing and research. Centralized electricity supply: 24 electricity transformer substations and 42 transformers, 31 of which are more than 45 years old while their lifetime is 20–25 years. Cable lines of 6 kV with a length of around 25 km and cable lines of 0.3 kV with a length of around 50 km, 90% of which have an average exploitation age of more than 45 years.

Heat and Water Supply Subsystem. The length of the external serviced heating networks is 20 km. The length of the outdoor networks of drinking and fire water supply and sewerage systems is 30 km. The length of the internal heat supply, water supply, and sewerage systems is more than 150 km. The number of heat nodes and water-measuring units is more than 140. The number of sewers and water wells is more than 800. 80% of all external and internal water supply and sewerage systems have physical wear or exceeded service life. The subsystem is a combination of centralized and decentralized systems.

The responsibility for the maintenance and use of these energy supply subsystems lies with a single legal entity, SPbPU, which makes it natural to consider the different technical elements of the energy supply infrastructure in Polytechnic as subsystems of one energy resources supply system. The technical condition of the elements of this system creates the most dangerous threats to the achievement of the EID goals.


Criteria 3: Accessibility of Data on the Area, Including Energy Demand and Consumption. The availability of data is one of the prerequisites for the analysis. Energy consumption data belong to the category of personal data, which in some cases creates additional barriers to obtaining them. The centralized management system in EID Polytechnic greatly simplifies the task of obtaining energy consumption data for the EID analysis.

Criteria 5: Integration in Current and/or Future Redevelopment Plans or Strategic Documents. The conducted analysis has shown that St. Petersburg has a sufficiently developed regulatory framework at the federal and regional levels on energy efficiency and energy saving in housing. SPbPU develops and realizes its own programs for the renovation and development of the outdoor infrastructure and indoor equipment for the energy supply of buildings on the campus.

4.2 EID Analysis

Following the above methodology, a stakeholder map for EID Polytechnic was developed and the stakeholders’ motivation for energy saving was studied (see Table 1). According to the methodology described above, a SWOT analysis of EID Polytechnic was done to provide a systematic view on energy efficiency in SPbPU (Fig. 1). Social aspects of energy-saving measures were selected as priority topics for energy saving in SPbPU, and the motivation for energy saving of the different stakeholders was discussed. Since most stakeholders in EID Polytechnic do not have a direct economic motivation for energy saving, to realize the EID goals it is necessary to focus on behavioral aspects and to provide measures which will develop the internal motivation of SPbPU students and employees for energy saving when they use SPbPU facilities and infrastructure. The result of the analysis and discussion was an action plan that contains a list of tasks to be solved to achieve the goals of the EID (see Fig. 2). Since this action plan was developed in cooperation with all parties that will be involved in its realization, the prospects for its implementation are significantly increased.

5 Conclusions

1. Energy saving is by its nature a complex problem, which requires the systematic analysis of technical, organizational, economic, and socio-psychological aspects. The EID concept developed in the frame of the AREA21 project provides a systematic view on the complex problems of energy saving in urban districts. This provides additional impetus for engaging all stakeholders in energy-saving planning and realization.


Table 1 Stakeholder motivation in EID Polytechnic (authors’ creation)

Stakeholder | Degree of interest in energy saving | Possibilities of influencing the final result | Main problems within the framework of existing opportunities
Managers of the object | Direct interest in the frame of their job responsibilities | High (development of technical aspects, increase of work efficiency) | Absence of direct links between salary and saved energy; insufficient capacity to analyze resource consumption in separate buildings
Apartment owners | Direct interest (cost saving, quality of life) | Relatively small (their number in EID Polytechnic is insignificant) | Lack of technical capacity to regulate heat consumption in the old apartments
SPbPU teachers and researchers as users of buildings and infrastructure | Indirect interest (practically absent, only quality of conditions for work) | Practically absent, although they are end users of most of the consumed resources (more than 40 buildings) | Lack of technical capacity to regulate heat consumption in old buildings; inability to collect data on resources consumed by the unit
SPbPU students as users of buildings and infrastructure | Indirect interest (practically absent, only quality of conditions for study) | At the moment practically absent, although they are end users of most of the consumed resources (more than 20 buildings) | Absence of information on SPbPU plans and actions for energy saving
SPbPU students living in dormitories | Indirect interest (they pay by tariffs, not for consumed resources) | At the moment practically absent, although they are end users of energy resources in 13 dormitories | Absence of the possibility to collect data on consumed resources
Companies—suppliers of goods and services | Direct interest (business) | High (quality of products and services for energy saving) | Restrictions of the contracting system

2. Stakeholder behavior aspects are crucial for energy saving, especially in cases where stakeholders have no direct economic motivation. As a university, SPbPU belongs to this category, which is reflected in the developed action plan for energy-saving measures in EID Polytechnic.
3. ICT and education should be considered as effective tools to promote and motivate energy-saving measures for end users in public buildings such as universities, hospitals, and offices of large companies.


Fig. 1 EID Polytechnic SWOT analysis (authors’ creation)

Strengths (internal origin, attributes of the area):
– Cooperation formats already exist within the district.
– Own scientific and service departments that conduct benchmarking of technical and organizational solutions for realizing the unused potential of energy saving.
– Centralized management system with a high concentration of the main stakeholders’ responsibility; the EID territory is under SPbPU management, and SPbPU has a direct motivation to reduce payments for consumed energy resources.
– Partial independence from external heat resources (combination of centralized and decentralized heat supply systems).

Weaknesses (internal origin, attributes of the area):
– SPbPU departments responsible for the maintenance of the energy infrastructure have no direct motivation for energy saving (salaries do not depend on saved energy).
– Most end users of premises and infrastructure are not motivated (and partly have no technical possibility) to save energy.
– Most of the internal energy infrastructure (grids, substations, etc.) has physical wear or has exceeded its service life.

Opportunities (external origin, attributes of the environment):
– Local energy policy focused on energy saving; SPbPU plans for the modernization of the communal infrastructure (updated annually).
– Big potential for the implementation of energy- and resource-saving measures.
– Use of the EID for raising the quality of education at SPbPU and for developing new types of scientific and technical services provided by SPbPU.

Threats (external origin, attributes of the environment):
– Energy suppliers have no interest in energy-saving measures (these are economically harmful for them).
– Lack of effective financing instruments; formal barriers to the use of regional financing in an EID which is federal property.


Fig. 2 Action plan for the development of the EID Polytechnic (authors’ creation)

Acknowledgments The article is prepared in the frame of and with the financial support of the AREA21 project of the BSR INTERREG program.


References

AREA21. (2017). Accessed Oct 1, 2019, from https://area21-project.eu/
Bloomberg New Energy Outlook. (2018). Accessed Sep 30, 2019, from https://bnef.turtl.co/story/neo2018?src=pressrelease&utm_source=pressrelease
Castillejos, Z. R., Vladova, G., Hannes, M., Marcks, J., & Motyka, M. (2018). Analyzing stakeholders for energy improvement districts: Framework. Accessed Sep 30, 2019, from https://area21-project.eu/
Cheung, A. (2018). Power markets today. BNEF.
Chung, K., & Crawford, L. (2016). The role of social networks theory and methodology for project stakeholder management. Procedia - Social and Behavioral Sciences, 226, 372–380.
Cuppen, E., Breukers, S., Hisschemoeller, M., & Bergsma, E. (2010). Q methodology to select participants for a stakeholder dialogue on energy options from biomass in the Netherlands. Ecological Economics, 69, 579–591. https://doi.org/10.1016/j.ecolecon.2009.09.005
Directive. (2018). Directive 2018/2001 of the European Parliament and of the Council of 11 December 2018 on the promotion of the use of energy from renewable sources (recast). Accessed Sep 30, 2019, from https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32018L2001&from=EN
EFFECT4buildings. (2018). Accessed Oct 1, 2019, from http://www.effect4buildings.se/en/Pages/About.aspx
EID Polytechnic. (2018). Accessed Sep 30, 2019, from https://area21-project.eu/pilot-areas/st-petersburg/
eTEACHER. (2017). Accessed Oct 1, 2019, from http://www.eteacher-project.eu/about-the-project/
Frassony, M. (2019). The biggest innovation in energy is efficiency. Accessed Sep 30, 2019, from http://www.europeanenergyinnovation.eu/OnlinePublication/Summer2019/mobile/index.html#p=16
Giannakopoulou, E. (2017). The power transition – Trends and the future. Bloomberg New Energy Outlook.
Green ReMark. (2019). Accessed Oct 1, 2019, from http://greenremark.com/
Islam, S. (2018). Key European solar markets trends from the installers’ perspective. EEI, 3, 24–25.
Jäger-Waldau, A. (2018). Rooftop PV and self consumption of electricity in Europe: Benefits for the climate and local economies. EEI, 3, 16–20.
Kommersant: milliardy na veter. (2018). Accessed from https://rawi.ru/2018/03/kommersant-milliardyi-na-veter/
Makarov, A., Grigoriev, L., & Mitrova, T. (2016). Prognoz razvitiya energetiki mira i Rossii [Forecast of the development of world and Russian energy]. Moscow: INEI RAN, 200 p.
Motyka, M., Slaughter, A., & Amo, C. (2018). Global renewable energy trends. Accessed Sep 30, 2019, from https://www2.deloitte.com/insights/us/en/industry/power-and-utilities/global-renewable-energy-trends.html
Nurulin, Y., & Glazunova, T. (2015). Razvitie sistemy motivatsii resursosberedgeniya v GKH [Development of the system of motivation for resource saving in housing and communal services]. ComCon-2015, pp. 415–418.
Prieto, J. G. (2018). Social innovation in energy, implications for smart specialisation. Accessed Sep 30, 2019, from http://www.europeanenergyinnovation.eu/Portals/0/publications/EuropeanEnergyInnovation-Winter2018.pdf
Reed, M., Graves, A., Dandy, N., Posthumus, H., Hubacek, M. J., Prell, C., Quinn, C., & Stringer, L. (2009). Who’s in and why? A typology of stakeholder analysis methods for natural resource management. Journal of Environmental Management, 90, 1933–1949. https://doi.org/10.1016/j.jenvman.2009.01.001
Renewables. (2018). Global status report. Accessed Sep 30, 2019, from http://www.ren21.net/gsr2018/
Renewables Information. (2019). Overview. Accessed Sep 25, 2019, from https://webstore.iea.org/renewables-information-2019-overview

An Architectural Approach to Managing the Digital Transformation of a Medical Organization

Igor Ilin, Oksana Iliashenko, and Victoriia Iliashenko

Abstract The modern medical management system is influenced by modern medical concepts (value medicine, predictive medicine), on the one hand, and by technologies that provide digital transformation (IoT, Big Data, blockchain, etc.), on the other hand. Digital transformation involves the implementation of fundamental changes in the activities of the organization using digital technology. At the same time, business models, medical organization development strategies, business process systems, IT architecture, services architecture, data architecture, etc., change. The object of the research is medical organizations; the subject of the research is the digital transformation process of a medical organization. The hypothesis of the research is that the application of the architectural approach to managing the digital transformation of a medical organization will allow the formation of a reference architectural model of the upper level, as well as of the upper level of the digital transformation roadmap of the medical organization. The main result of the research is the formation of the architectural model of the upper level, as well as the formation of the upper level of the transition plan for the digital transformation of the medical organization.

Keywords Architectural approach · Digital transformation · Medical organization

Selected portions of this chapter have appeared in, and built upon, the Master’s thesis of the co-author Victoriia Iliashenko titled “Development of Requirements for the BI System for the Analysis of Key Performance Indicators of a Medical Organization,” successfully completed at the School of Engineering Science, Lappeenranta-Lahti University of Technology, Finland, 2020. Selected portions of this chapter have appeared in Iliashenko O.Y., Iliashenko V.M., Dubgorn A. (2020) IT Architecture Development Approach in Implementing BI Systems in Medicine. In: Arseniev D., Overmeyer L., Kälviäinen H., Katalinić B. (eds) Cyber-Physical Systems and Control. CPS&C 2019. Lecture Notes in Networks and Systems, vol 95. Springer, Cham. Used with permission.

I. Ilin · O. Iliashenko (*) · V. Iliashenko
Graduate School of Business and Management, The Institute of Industrial Management, Economics and Trade, Peter the Great St.Petersburg Polytechnic University, Saint-Petersburg, Russia

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Devezas et al. (eds.), The Economics of Digital Transformation, Studies on Entrepreneurship, Structural Change and Industrial Dynamics, https://doi.org/10.1007/978-3-030-59959-1_15


1 Introduction

Due to the digitalization trend in the healthcare system as one of the key sectors of the economy, the implementation of the concepts of value and personalized medicine (that is, medicine that bases decisions on the individual characteristics of patients, not on average indicators for the entire population (Fröhlich et al. 2018)), as well as the increasing independence of medical organizations, management issues in healthcare remain relevant. The main goal of modern management systems in health care is to reduce the losses of society from morbidity, disability, and mortality. To achieve this goal, the effective activity of both the entire healthcare system and each medical organization is required (Begun and Begun 2017; Adzhiyev 2013). At the same time, as noted in the analyzed publications, the digital transformation that the healthcare sector is undergoing today is a key component for improving patient treatment results, and its influence on all aspects of managing a medical organization will only increase (Zhao et al. 2017; Gopal et al. 2019; Iljashenko et al. 2019). Thus, the development of a reference architectural model of the top level of a modern medical organization, as well as the formation of a top-level transition plan for the digital transformation of a medical organization and the study of the possibilities of using the model in the context of digitalization of the Russian healthcare system, is relevant. The purpose of this study is the formation of an architectural model of the upper level of a modern medical organization, as well as the formation of the upper level of a transition plan for the digital transformation of a medical organization. The reference model is being developed with the aim of creating the basis for modeling the levels of digital maturity of the enterprise through the formation of a relevant system of indicators. The reference model will also allow modeling various levels of architecture of a given medical organization (business process system, IT architecture, technological infrastructure) in the context of digital transformation, reviewing and improving existing components, and introducing new ones.

2 Research Methods

The methodology of this study is a combination of the following steps:

1. Analysis. As part of the research, information on the processes of a medical organization was collected and analyzed on the basis of interviewing and questioning the personnel of medical organizations of various forms of ownership, and a description of these processes was created. Next, an analysis was made of modern medical concepts (value, predictive medicine), on the one hand, and of the technologies that provide digital transformation (Internet of Things, Big Data, blockchain, etc.), on the other hand. End-to-end digital technologies were highlighted, which are of key importance in various projects when implementing the digital transformation of a medical organization.


2. Generalization. To create a reference architectural model of the upper level, a description of the processes was presented in a general (universal for medical organizations) form, together with a description of the information systems that provide IT support for the process system; the layer of the technological infrastructure is also presented in a general form.

3. Modeling Based on an Architectural Approach. The methodological basis of the study is the architectural approach to the design of control systems (Greefhorst and Proper 2011; Lankhorst 2013; OPENGROUP. TOGAF standard), which allows one to implement and visually present the possibilities of aligning the business and IT strategies of company development. The emergence of this approach more than 30 years ago is due to the ever-increasing dependence of business on implemented IT support. The architectural approach allows us to consider such elements as business processes, organizational structure, functional structure, information systems and applications, IT services, and IT infrastructure in their interconnection and interaction. These concepts have been developed further, regarding approaches to the design of individual elements of enterprise architecture and industry-specific methods and models, in the studies of modern scientists (Greefhorst and Proper 2011; Ilin et al. 2018; Ilyashenko et al. 2018; Lankhorst 2013). Today, four main approaches to the design of enterprise architecture are distinguished: the Federal Enterprise Architecture Framework (Federal Enterprise Architecture Consolidated Reference Model, Version 2.3, October 2007), Gartner (Gartner Research Process 2019), the Zachman Framework (Sowa and Zachman 1992), and TOGAF (Welcome to TOGAF, Version 9.1 Enterprise Edition, 2019). After analyzing these approaches, we came to the conclusion that the TOGAF standard and the Architecture Development Method (ADM) are the most suitable for the formation of business requirements for the company’s personnel policy management system. The choice in favor of this method is due to its focus on the process of managing architectural changes in the organization and the ability to manage changes, taking into account the changing external and internal business environment. In this study, we applied the TOGAF architectural standard (the ADM method and the ArchiMate language) to build business requirements for the top-level architectural model in the digital transformation of the medical organization and to form the top-level architectural model.

4. Project Management and Project Programs. The introduction of end-to-end digital technology is a project program. Modern project and program management methodologies, such as PRINCE2 (PRINCE2 2017), MSP (Whelan and Meaden 2016), and Agile (Duka 2013), are aimed at systematizing the management and administration of individual projects and project programs and provide the opportunity to build effective communications among the participants and stakeholders of projects and project programs, the optimal allocation of resources, system monitoring, analysis, and control of the implementation of projects and project programs, as well as project prioritization within the programs.


3 Literature Review

Today, value-based and personalized medicine is called the medicine of the future. In recent years, a number of European countries and the USA have been intensively discussing the possible transition of the healthcare systems of these countries to the principles of value medicine—that is, medicine that focuses on the outcome of the disease and involves the use of economic levers that motivate individual physicians and medical organizations to achieve the positive outcomes necessary for the patient himself (Shlyakhto and Yakovenko 2017). In addition, one of the most important challenges for twenty-first century medicine is to provide effective treatments that are adapted to the biological state of a person, providing so-called “personalized healthcare solutions” (Nicholson 2006). Personalization of medicine makes it possible to integrate individual genetic and other information for the prevention and treatment of complex disorders (Dedov et al. 2012) and is used to increase the safety of pharmacotherapy, helping to solve the problem of the frequent development of undesirable drug reactions (Kukes and Sychev 2010). The term “value” was first coined by a team of researchers led by Dr. Brown at the Center for Value Medicine at the University of Pennsylvania. The ultimate goal pursued by value medicine is to provide a cost-effective and scientifically sound medical service that takes into account patient value (Bae 2015; Lebedev et al. 2019). On the other hand, the greatest value for the patient can be offered through the personalization of medicine. Personalized medicine can be defined as a modern approach to healthcare based on the individual characteristics of each person. This approach implies a deep, detailed, and complete study of the patient’s health status, which allows for the use of individually selected treatment methods and timely prophylaxis of diseases.

3.1 Principles of Value and Personalized Medicine

The authors of research in the value medicine field distinguish the following 10 principles of value medicine:

1. All decisions, including decisions in the field of diagnostics, should be based both on values and on facts.
2. It must be remembered that most often values become noticeable only when they are heterogeneous or contradictory and therefore can be problematic.
3. Thanks to the opportunities offered by scientific progress, the full diversity of human values is being increasingly involved in all areas of health care.
4. The priority source of information in value-based practice is the point of view of the patient or group of patients to whom the decision in question applies.
5. In a value-based practice, value conflicts are resolved not primarily with the help of rules prescribing a “true” result, but with the help of processes designed to maintain a balance of differing points of view.


6. Careful attention to the use of language in this context is one of the powerful methods to raise awareness of values and of differences in values.
7. It is necessary to take into account the rich resource of both empirical and philosophical methods available to improve knowledge of people’s values.
8. In a value-based practice, ethical reasoning is used primarily to study differences in values, and not to determine what is “right.”
9. In a value-based practice, communication skills play a fundamental, not just an executive, role.
10. A value-based practice, although it involves a partnership with ethics specialists and lawyers, brings the decision-making process back to where it belongs: to the real working conditions of the clinic, with clients and suppliers (Petrova et al. 2006).

In turn, the following four principles of personalized medicine stand out:

1. Predictability—the ability to “predict” the disease, identifying predispositions based on a “health passport,” creating a long-term prognosis and recommendations.
2. Prevention or a significant reduction in the risk of developing a disease.
3. Personalization—an individual treatment of each patient; targeted diagnosis and possible subsequent treatment of the patient, based on clinical, genetic, genomic, and environmental factors.
4. Partnership, participatory activity—the direct participation of the patient in the process of prevention and treatment; explanation of all prescriptions and medical manipulations.

3.2 Description of Digital Technologies and the Concept of “End-to-end Digital Technologies”

Digital transformation in companies is associated with a significant increase in the amount of data that can be converted into information that is valuable for a specific business purpose. Digital technology, which is the main driver of digital transformation, affects business and makes it possible to provide unique value to consumers. The use of digital technologies brings organizational changes that can significantly increase the productivity (efficiency) of companies. Digital transformation is the process of integrating digital technologies into all aspects of business activity, requiring fundamental changes in technology and in the principles of creating new products and services. The cross-cutting (end-to-end) technologies of the digital economy are Big Data, neurotechnologies, artificial intelligence, distributed ledger systems (blockchain), quantum technologies, new production technologies, the industrial Internet, robotics, sensorics, wireless communications, and virtual and augmented reality.


3.2.1 Big Data

Big Data is a designation for structured and unstructured data of huge volume and significant diversity, effectively handled by horizontally scaled (scale-out) software tools that appeared in the late 2000s as an alternative to traditional database management systems and Business Intelligence solutions. In a broad sense, “big data” is spoken of as a socioeconomic phenomenon associated with the advent of technological capabilities to analyze huge amounts of data (in some problem areas, the entire world data volume) and the transformational consequences arising from this. Big data involves more than just analyzing vast amounts of information. The problem is not that organizations create huge amounts of data, but that most of it is presented in a format that does not correspond well to the traditional structured database format: web logs, video recordings, text documents, machine code, or, for example, geospatial data. All this is stored in a wide variety of repositories, sometimes even outside the organization. As a result, corporations may have access to a huge amount of their data and not have the necessary tools to establish relationships between these data and draw significant conclusions from them. Add to this the fact that data is now being updated more and more often, and you get a situation in which traditional methods of information analysis cannot keep up with huge volumes of constantly updated data, which ultimately opens the way for big data technologies.

The use of Big Data technology is based on five basic principles (Hsieh et al. 2013):

• Velocity.
• Volume.
• Variety.
• Value.
• Veracity.

Table 1 compares a traditional database with a Big Data database on the main characteristics. The concept of Big Data implies working with information of huge volume and diverse composition, which is frequently updated and located in different sources, in order to increase work efficiency, create new products, and increase competitiveness. The consulting company Forrester gives a short wording: Big Data combines techniques and technologies that extract meaning from data at the extreme limit of practicality.


Table 1 Comparison of traditional databases and Big Data databases (authors’ creation)

KPI | Traditional database | Big Data database
Volume of information | From gigabytes to terabytes | From petabytes to exabytes
Storage method | Centralized | Decentralized
Data structure | Structured | Semi-structured or unstructured
Data storage and processing model | Vertical connection | Horizontal connection
Data relationship | Strong | Weak
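To make the contrast in Table 1 more concrete, the following is a minimal sketch of schema-on-read processing of semi-structured records with a horizontally scaled engine. It assumes Apache Spark (PySpark) and a hypothetical file of newline-delimited JSON visit records; neither the tool nor the field names are prescribed by the chapter.

```python
# Minimal sketch: schema-on-read aggregation of semi-structured records with a
# horizontally scalable engine (PySpark). The file name and fields
# (visits.json, department, cost) are illustrative assumptions only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bigdata-sketch").getOrCreate()

# The schema is inferred at read time; individual records may omit or add
# fields (semi-structured data), unlike a rigid relational schema.
visits = spark.read.json("visits.json")

# The same code runs on one machine or on a cluster of many nodes:
# Spark partitions the data and scales out horizontally.
summary = (visits
           .groupBy("department")
           .agg(F.count("*").alias("visit_count"),
                F.avg("cost").alias("avg_cost")))

summary.show()
spark.stop()
```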

3.2.2 Neurotechnology

One of the definitions of neurotechnology is the totality of technologies created on the basis of the principles of the functioning of the nervous system (Morales and Clément 2018):

1. Neurotechnologies consider the brain as a neural network, that is, a set of interconnected neurons. Neural networks can be divided into two types: “wet” and “dry.” “Wet” networks are the biological neural networks in our heads, while “dry” networks are artificial ones: mathematical models built on the principle of biological neural networks, capable of solving very complex problems and of self-learning (a minimal sketch of such a “dry” network is given after this list).
2. The most promising branches of neurotechnology are:
– Neuropharmacology. The development of gene and cell therapy, early personalized diagnosis, treatment, and prevention of neurodegenerative diseases (senile dementia, Alzheimer’s disease, etc.), as well as improving mental abilities in healthy people.
– Neuromedtech. The development of neuroprosthetics of organs, including artificial sensory organs, and the development of means for rehabilitation using neurotechnologies that help restore a limb that has lost mobility.
– Neuroeducation. The development of neural interfaces and virtual and augmented reality technologies in training, the development of educational programs and devices, and the creation of devices to enhance memory and analyze the use of brain resources.
– Neuroentertainment and sports. The development of brain-fitness exercises, and the creation of games using neurogadgets, including neurodeveloping games.
– Neurocommunications and marketing. The development of neuromarketing technologies (a set of methods for studying the behavior of buyers, the possibilities of influencing it, and the reactions to such influence using neurotechnologies), and the prediction of behavior based on neuro- and biometric data.
– Neuroassistants. The development of natural language understanding technology, the development of deep machine learning (machine learning based on neural networks that help improve algorithms such as speech recognition,


computer vision, and natural language processing), the creation of personal electronic assistants (web services or applications that play the role of virtual secretary) and hybrid human-machine intelligence.
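As a minimal illustration of the “dry” (artificial) neural network mentioned above, the sketch below implements a single forward pass through a tiny one-hidden-layer network in NumPy; the layer sizes, weights, and input values are arbitrary assumptions for demonstration only, and training (adjusting the weights, i.e., the self-learning aspect) is not shown.

```python
# Minimal sketch of a "dry" neural network: a feedforward pass through one
# hidden layer. Sizes, weights, and inputs are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    """Smooth activation, loosely analogous to a neuron's firing response."""
    return 1.0 / (1.0 + np.exp(-x))

# Weights connecting 3 inputs -> 4 hidden "neurons" -> 1 output neuron.
w_hidden = rng.normal(size=(3, 4))
w_output = rng.normal(size=(4, 1))

def forward(inputs: np.ndarray) -> np.ndarray:
    hidden = sigmoid(inputs @ w_hidden)   # each hidden neuron sums weighted inputs
    return sigmoid(hidden @ w_output)     # the output neuron does the same

print(forward(np.array([0.2, 0.7, 0.1])))  # a score between 0 and 1
```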

3.2.3 Artificial Intelligence

Artificial intelligence (AI) is the science and technology of creating intelligent machines, especially intelligent computer programs. Nowadays AI includes a number of algorithms and software systems whose distinguishing feature is that they can solve certain problems the way a person thinking about their solution would do it. The main properties of AI are language comprehension, learning, and the ability to think and, importantly, to act. AI is a complex of related technologies and processes developing qualitatively and rapidly, for example:

• Natural language text processing.
• Machine learning.
• Expert systems.
• Virtual agents.
• Recommendation systems.

This helps to build a qualitatively new customer experience and interaction process (Geger 2016). Two areas of AI development can be distinguished:

• Solving problems related to the approximation of specialized AI systems to human capabilities, and their integration, which is implemented by human nature.
• The creation of artificial intelligence representing the integration of already created AI systems into a single system that can solve the problems of mankind.

The main areas of AI application are:

• Automatic translation.
• Medical intelligent systems.
• Business intelligence.
• Visual recognition.
• Expert systems.
• Text recognition.
• Information retrieval.
• Understanding and analysis of natural language texts.
• Image analysis.
• Intelligent information security systems.
• Speech recognition.
• Robotics.

Figure 1 presents a diagram of the existing technological areas of AI.


Fig. 1 Technological areas of development of AI (authors’ creation)

3.2.4 Blockchain

A blockchain is a distributed database in which the storage devices are not connected to a common server. This database stores an ever-growing list of ordered records called blocks. Each block contains a timestamp and a link to the previous block. The use of encryption ensures that users can change only those parts of the blockchain that they “own,” in the sense that they have the private keys without which writing to the file is impossible. In addition, encryption ensures the synchronization of copies of the distributed blockchain for all users. Blockchain technology incorporates security at the database level from the outset. The concept of blockchains was proposed in 2008 by Satoshi Nakamoto. It was first implemented in 2009 as a component of the digital currency bitcoin, where the blockchain plays the role of the main general registry for all operations with bitcoins. Thanks to blockchain technology, bitcoin became the first digital currency that solves the double-spending problem (unlike physical coins or tokens, electronic files can be duplicated and spent twice) without using any authoritative body or central server. Security in blockchain technology is provided through a decentralized server that affixes timestamps and through peer-to-peer network connections. As a result, a database is formed which is managed autonomously, without a single center. This makes blockchains very convenient for recording events (for example, making medical records) and data operations, for identity management, and for source authentication. In the case of bitcoins, such keys are used to access addresses that store amounts of currency of direct financial value; this implements the function of registering the transfer of funds, a role usually played by banks. In addition, another important function is implemented: the establishment of a relationship of trust and the confirmation of a person’s identity, because no one can change the chain of blocks without the corresponding keys. Changes not approved by these keys are rejected. Of course, keys (like physical currency) can theoretically be stolen, but


protecting a few lines of computer code usually does not cost much. This means that the main functions performed by banks, namely identity verification (to prevent fraud) and the subsequent registration of transactions (after which they become legal), can be performed by a chain of blocks faster and more accurately. Blockchain technology offers a tempting opportunity to get rid of intermediaries. It can take on all three important roles that the financial services sector traditionally plays: registering transactions, verifying identity, and concluding contracts (Alhadhrami et al. 2017; Zhu et al. 2019). In world practice, distributed registry technology is used to increase the efficiency of various business processes in medical organizations. Most studies are conducted in the USA and are mainly devoted to the description of individual projects and the experience of introducing certain technologies into the practice of medical organizations. For example, Change Healthcare offers software, analytics, services, and network solutions based on innovative healthcare technologies. The company’s mission is to modernize the American healthcare system in order to increase economic efficiency. The company has implemented a project that uses blockchain technology to process hundreds of medical transactions per second. To achieve this goal, a technology implementation project was carried out on the Hyperledger Fabric platform. The implementation took several months, and since January 2018 the test network has demonstrated the ability to process up to 50 million transactions per day with a throughput of up to 550 transactions per second. This was enough to handle all the transactions that occurred on the network (Platform Hyperledger 2019). The consulting company McKinsey, after conducting a study, suggests that the US healthcare system could save up to $450 billion a year by using updated processes and technologies (Mckinsey research 2019). The Dutch technology company Royal Philips has decided to expand the possibilities of using a distributed registry. A division of the company, Philips Healthcare, has launched the Blockchain Research Lab, which is exploring applications for the blockchain in medicine. In 2015, the company, in partnership with the startup Tierion, already conducted the first studies on the possibility of using a distributed registry in the healthcare industry. Tierion released a report on the introduction of the technology (October 5, 2016); however, there is no practical use yet. Estonia has a blockchain platform in which patient history can be seen in real time. The technology of the Guardtime blockchain and the eHealth Foundation ensures the safety, transparency, and integrity of medical information, protecting data from unforeseen changes or deletion due to hacker attacks, system crashes, and malicious programs.
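The chain-of-blocks structure described above (each block carrying a timestamp and a cryptographic link to its predecessor, so that unauthorized retroactive changes are rejected) can be illustrated with the following minimal sketch. The record contents are hypothetical, and real platforms such as Hyperledger Fabric add consensus, peer-to-peer replication, and access control on top of this basic idea.

```python
# Minimal sketch of the data structure behind a blockchain: each block stores
# a timestamp, some data, and the hash of the previous block, so any
# retroactive edit breaks the chain. Record contents are illustrative only.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: dict) -> None:
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "data": data,
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    }
    chain.append(block)

def is_valid(chain: list) -> bool:
    # Every stored link must still match the current hash of the previous block.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
add_block(chain, {"patient": "P-001", "event": "examination recorded"})
add_block(chain, {"patient": "P-001", "event": "prescription issued"})

print(is_valid(chain))            # True
chain[0]["data"]["event"] = "x"   # tamper with an earlier block...
print(is_valid(chain))            # ...and validation fails: False
```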

3.2.5 New Production Technologies

New production technologies are complex processes of designing and manufacturing at the modern technological level of customized (individualized) material objects


(goods) of varying complexity, the cost of which is comparable to the cost of mass-produced goods. They include:

• New materials.
• Digital design and modeling, including bionic design.
• Supercomputer engineering.
• Additive and hybrid technologies.

3.2.6 Industrial Internet of Things

The Industrial Internet of Things (IIoT) is a concept of building info-communication infrastructures which leads to the formation of new business models for creating goods and services and delivering them to consumers. The key driver for the implementation of the “Industrial Internet” concept is increasing the efficiency of existing production and technological processes and reducing the need for capital costs. The company resources released in this way form the demand for industrial Internet solutions. Today, all the links necessary for its functioning are involved in the Internet of Things system: manufacturers of sensors and other devices, software developers, system integrators, customer organizations (both B2B and B2G), and communication operators. The introduction of the industrial Internet has a significant impact on the economies of individual companies and of the country as a whole, contributes to increased labor productivity and growth of the gross national product, and has a positive effect on working conditions and the professional growth of employees. The service model of the economy that is created during this transition is based on the digitalization of production and other traditional industries, the exchange of data between the various actors in the production process, and the analysis of large amounts of data.

3.2.7 Robotics

Robotics is an applied science engaged in the development of automated systems and is the most important technical basis for the intensification of production. A robot is a programmable mechanical device capable of performing tasks and interacting with the external environment without human assistance. Robotics is based on such disciplines as electronics, mechanics, telemechanics, mechatronics, and computer science, as well as radio engineering and electrical engineering. Building, industrial, domestic, medical, aviation, and extreme (military, space, underwater) robotics are distinguished.


3.2.8 Wireless Communication

Wireless communication is communication that dispenses with wires or other physical transmission media. For example, the Bluetooth wireless data protocol works “over the air” over a short distance; Wi-Fi is another way to transfer data (Internet access) over the air; cellular communication is also wireless. Although wireless protocols are improving from year to year, in terms of their basic indicators and transmission speed they have not yet surpassed wired communications, although high hopes in this field are placed on LTE networks and their latest iterations.

3.2.9 Virtual Reality

Virtual reality (VR) is a world created by technical means (objects and subjects), transmitted to a person through his sensations: sight, hearing, smell, touch, and others. Virtual reality simulates both exposure and the response to exposure. To create a convincing complex of sensations of reality, a computer synthesis of the properties and reactions of virtual reality is performed in real time. Virtual reality objects usually behave close to the behavior of similar objects of material reality. The user can act on these objects in accordance with the real laws of physics (gravity, water properties, collision with objects, reflection, etc.). However, often for entertainment purposes, users of virtual worlds are allowed more than is possible in real life (for example, flying or creating arbitrary objects). “Virtual reality” systems are devices that imitate interaction with the virtual environment more fully than conventional computer systems, by affecting all five sensory organs that a person has.

3.2.10 Augmented Reality

Augmented reality (AR) is the result of introducing additional sensory data into the perception field in order to supplement information about the environment and improve the perception of information. Augmented reality is a perceived mixed reality created by “complementing” perceived reality with computer-generated elements (when real objects are mounted into the field of perception). Among the most common examples of such augmentation are the colored line showing the position of the field player closest to the goal when watching football matches on television, arrows showing the distance from a penalty kick to the goal, the “drawn” trajectory of the puck during a hockey match, and the mixture of real and fictional objects in films and in computer or gadget games. There are several definitions of augmented reality; the researcher Ronald Azuma in 1997 defined it as a system that:


• Combines the virtual and the real.
• Interacts in real time.
• Works in 3D.

4 Results

In our study, we propose an approach to implementing the digital transformation of the medical organization (Fig. 2). The approach is the development and implementation of a comprehensive architectural solution and includes the following steps:

• Development of a business process system.
• Development of requirements for information systems.
• Configuring information systems.
• Development of additional modules of information systems.

We see the project product of digital transformation as a comprehensive architectural solution that includes the following components:

• Business architecture.
• Specification of IT services supported by digital technologies.
• IT architecture.
• Configured information systems.
• Developed additional modules of information systems.

Let us consider each point in more detail.

Fig. 2 Digital transformation stages for the modern medical organization (authors’ creation)


4.1 Business Architecture of a Modern Medical Organization

Business processes determine the enterprise’s organizational structure. The organizational structure is a stable set of interrelated and mutually subordinate organizational units for coordinating the enterprise workforce. A project is traditionally defined as “a temporary organization created to solve unique problems/obtain unique results.” As a temporary organization, a project does not have a permanent organizational structure—instead, there is a structure of roles and responsibilities that are filled on a temporary, role basis by performers from the organizational structure of the company. Business architecture involves a description of all groups of processes (main, managerial, and supporting) with an appropriate level of decomposition into subprocesses (Dubgorn et al. 2019). The business process model, as a key element of business architecture, is the starting point for the analysis and reengineering of the company’s management system, because:

• It provides an understanding of the company.
• It provides a visual model for analysis, benchmarking, and identification of optimization potential.
• It defines the organizational structure and the information, material, and cash flows.
• It serves as the basis for identifying the needs for IT support, forming the requirements for IT services, and subsequently forming the IT architecture landscape.

Let us try to model a business architecture for a medical organization; it is worth remembering, however, that when modeling the activities of medical organizations there are a number of specific features of this industry that affect the choice of approach to identifying processes:

• A patient-oriented approach that determines the cross-functionality of activities in business processes for patient care.
• A pronounced matrix management system based on functional and administrative subordination.
• The individual path of treatment of the patient, leading to a high degree of flexibility and variability of business processes for patient care.
• A high degree of regulation of the processes of medical care and related processes, including specific requirements for document management in healthcare.

In connection with this specificity, the main processes of medical activity were identified not by medical specialization, but by the form of medical care and services. Below is a detailed description of the typical functions of a medical organization (Fig. 3). The proposed models contain an exhaustive list of functions of medical organizations obtained in the course of an analysis of existing practices of modeling the activities of medical organizations and in consultation with large medical organizations in Russia.


Fig. 3 The typical business functions of a medical organization (authors’ creation)

The functional models in Fig. 3 can be considered as reference models and adapted to the conditions of implementation of the activities of a particular medical organization.

4.2 Requirements for Functionality of the Information System, Including the Medical Information System

During the analysis of the medical organization’s top-level processes, it is easy to see that the support of the control processes requires accounting information systems and business analysis as well. Information support of all medical processes requires the use of a medical information system integrated with the accounting and business intelligence systems. The term “medical information system” (MIS) is understood as a system of workflow automation for medical institutions which combines a clinical decision support system, the patient’s electronic accounting information, patient live monitoring data from medical devices, means of communication between medical specialists, and financial and administrative information. The introduction of an information system in medical institutions is necessitated by the huge and constantly growing amount of information used in solving diagnostic, therapeutic, statistical,


managerial, and other tasks. Ultimately, the main aim of introducing a medical information system is to increase the quality of medical care. The reference model for business processes, on the one hand, makes it possible to formulate requirements for information system functionality, including requirements for the functionality of the medical information system needed to support the main activity of the medical institution; on the other hand, it helps substantiate the choice of a specific MIS as the basis for the subsequent formulation of a unified MIS model with the required functionality. Based on the presented reference model, the following functionality required of an MIS for the information support of medical processes can be outlined (a minimal illustrative data-model sketch follows the list):
• Registration of patient movement (admission, transfers, discharge).
• Accounting of information about the patient (examinations, studies, consultations, surgical interventions) in accordance with the regulatory documents on electronic medical records.
• Accounting of the activities of medical personnel.
• Formation of medical and financial reporting.
• Administration of the main activities of a medical organization.
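To make the first two items in the list above more concrete, the following is a minimal, purely illustrative sketch of how patient-movement events could be recorded; all class and field names are assumptions introduced for this example and are not part of any specific MIS.

```python
# Illustrative sketch only: a minimal, hypothetical data model for the
# "registration of patient movement" function listed above.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import List


class MovementType(Enum):
    ADMISSION = "admission"
    TRANSFER = "transfer"
    DISCHARGE = "discharge"


@dataclass
class PatientMovement:
    patient_id: str
    movement: MovementType
    department: str
    timestamp: datetime


@dataclass
class ElectronicMedicalRecord:
    patient_id: str
    movements: List[PatientMovement] = field(default_factory=list)

    def register(self, movement: PatientMovement) -> None:
        # Every movement is appended to the record, so medical and financial
        # reporting can later be derived from the same event history.
        self.movements.append(movement)


emr = ElectronicMedicalRecord(patient_id="P-0001")
emr.register(PatientMovement("P-0001", MovementType.ADMISSION,
                             "cardiology", datetime(2020, 1, 10, 9, 30)))
emr.register(PatientMovement("P-0001", MovementType.DISCHARGE,
                             "cardiology", datetime(2020, 1, 15, 12, 0)))
print(len(emr.movements))  # -> 2
```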

4.3

Requirements Formation for a Reference Architecture Model

To achieve the digitalization of medical organizations, we propose an approach (Ilyin and Ilyashenko 2018; Ilyin et al. 2019) consisting of the following stages:
• Explore the IT architecture that currently exists in the enterprise.
• Form the requirements for business, IT, and infrastructure services needed to implement the BI system and IoT in the existing IT architecture.
• Analyze the existing BI systems and choose a platform.
• Explore the BI system architecture on the selected platform.
• Suggest possible ways to integrate the BI system with the existing architectural solution.
• Formulate the criteria on which the choice of integration technology is based.
• Evaluate each of the alternatives according to the established criteria, taking into account their significance (a minimal scoring sketch is given after this list).
• After analyzing the estimates obtained, determine the most rational way of introducing the BI system in the enterprise.
• Analyze how to implement IoT devices (Lebedev et al. 2019) in the existing IT architecture: analyze the types of IoT architecture and the possibilities of integrating one of them into the existing IT architecture.
• Develop the target IT architecture.
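As referenced in the evaluation stage above, one simple way to compare alternatives against criteria of different significance is a weighted sum. The sketch below is illustrative only; the criteria names, weights, and scores are assumptions made for this example and do not reproduce the survey results discussed later in the chapter.

```python
# Illustrative sketch only: weighted-sum evaluation of integration alternatives
# against criteria with significance weights. All values are hypothetical.
criteria_weights = {"TCO": 0.3, "implementation_time": 0.2,
                    "data_quality": 0.3, "failure_resilience": 0.2}

alternatives = {
    "Excel upload":       {"TCO": 9, "implementation_time": 9, "data_quality": 3, "failure_resilience": 3},
    "Direct ODBC/OLE DB": {"TCO": 7, "implementation_time": 6, "data_quality": 7, "failure_resilience": 6},
    "Special connector":  {"TCO": 5, "implementation_time": 5, "data_quality": 9, "failure_resilience": 8},
}


def weighted_score(scores: dict) -> float:
    # Each criterion score (1..10) is multiplied by its significance weight.
    return sum(criteria_weights[c] * s for c, s in scores.items())


best = max(alternatives, key=lambda name: weighted_score(alternatives[name]))
for name, scores in alternatives.items():
    print(f"{name}: {weighted_score(scores):.2f}")
print("Most rational option under these assumptions:", best)
```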


4.3.1


Business Services Requirements

The key business service requiring the use of the BI system is the ability to obtain analytical reports on key aspects of the organization’s activities in the following areas:
• Monitoring of patient health indicators.
• Report on the financial results of activity: data on the volume of income and expenses of the medical organization in the context of analytical codes of income (credits) and expenses as of January 1 of the year following the reporting year.
• Financial statements: balance sheet.
• Statistical medical reporting: a report on the structure and correlation of diseases, a report on the results of patient treatment, internal reporting on the workload of hospital beds, etc.
• Reporting on the development of medical organization personnel.
These business services allow a reliable assessment of the results of the medical organization’s activities and help identify ways of more rational use of funds and their most efficient distribution. The implementation of business services is carried out through the functioning of the relevant IT services. IT services should be implemented taking into account the requirements for BI systems and the specifics of medical organizations’ activities as well.

4.3.2

IT Services Requirements

• Integration requirements—the presence of integration components for implementing the relationships between the individual modules of the system.
• Access requirements—providing access to data and to the system’s data processing functions.
• Interface requirements—availability of a single interface environment, using the principle of “data navigation.”
• Methodological requirements—the use of modern standards in the medical field in the development of semantically interoperable information systems, in order to obtain a result that corresponds to the current stage of development of IT in medicine.
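Regarding the methodological requirement above, HL7 FHIR is one widely used standard for semantically interoperable healthcare data exchange; the chapter itself does not prescribe a particular standard, so the following is only an illustrative sketch of a minimal FHIR-style Patient resource with hypothetical identifier and demographic values.

```python
# Illustrative sketch only: a minimal FHIR-style Patient resource as JSON.
# The identifier system and demographic values are hypothetical.
import json

patient_resource = {
    "resourceType": "Patient",
    "identifier": [{"system": "urn:example:mrn", "value": "P-0001"}],
    "name": [{"family": "Ivanov", "given": ["Ivan"]}],
    "gender": "male",
    "birthDate": "1980-05-21",
}

print(json.dumps(patient_resource, indent=2))
```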

4.3.3

Options for Selecting BI Systems

Based on the formulated requirements for business and IT services, we can offer options for choosing BI systems for solving business analytics problems in medical organizations.


In order to make a choice, a survey was conducted of the responsible person who makes the decision to implement the analytical system in the organization. Based on the information collected, the requirements for the future architecture were formulated and ten parameters were identified, with the help of which the systems integration technologies were evaluated (Troyansky et al. 2015; Kroenke et al. 2017; Kroenke et al. 2018):
• Quantitative:
– Total Cost of Ownership (TCO) of the BI system implemented in the IT architecture of the company.
– Time spent on implementing the solution.
– The speed of downloading data from the ERP to the BI system.
– The speed of downloading data from the MIS to the BI system.
– Potential risk of rework.
• Qualitative:
– Ability to integrate various information systems.
– Ensuring the required data quality and integrity.
– Low complexity of technical implementation and maintenance.
– Resilience to information system failures.

4.3.4

Possible Integration Options for ERP, MIS, and BI Systems

In our study, we consider three main ways of possible integration of the ERP and BI systems and of the MIS and BI systems; the main advantages and disadvantages of each of the integration technologies are highlighted:
1. Using Excel uploads from the reporting system.
Advantages: low cost; ease of use; does not require special skills; ease of implementation; minimal intervention by external experts.
Disadvantages: low degree of automation of the data loading process; low data security; the need to create new files, which leads to a high probability of system failures and, as a result, threatens data integrity and completeness; the need to create new requests to the ERP database depending on the user’s requirements.
2. Connecting the BI system directly to the DBMS using a standard ODBC/OLE DB interface.
Advantages: support of external data sources (OLE DB); works with 32-bit and 64-bit drivers (ODBC); relative cheapness of the solution; does not require additional modules; no violation of data integrity and completeness; constant access to the ERP database.


Disadvantages: encryption of the names of tables and fields during the formation of the ERP database leads to difficulties in creating queries to the DBMS and in relating the names of ERP directories and registers to the tables in the ERP database.
3. Uploading data from the ERP and MIS systems via the Xtract BI software component.
Advantages: the ability to extract data from ERP system tables and views; use of the Xtract BI business application programming interface (BAPI) component to access data from BAPIs and function modules and to use the output directly in the BI system; support for dynamic SQL statements with variables; data extraction can be processed in packets to handle Big Data; no significant effect on the production system.
Disadvantages: requires the cost of using and connecting additional modules.
Connector functionality includes:
• Generation of data models for various BI systems.
• Creation of a direct selection from the DBMS for loading data model tables.
• Automation of the process of assigning to data fields names that are correct and understandable to users and IT professionals and that are ready to be used in the headers of the visual interface of a BI application, including from the point of view of organizing associative links.
• Acting as a reference service for data structures and relationships of ERP tables.
Summing up, it can be argued that an ERP, MIS, and BI integration system using special connectors can accurately process and convert data and support the company in the operational monitoring of the enterprise’s sales. For IT specialists engaged in the formation of data models for BI applications, there are three levels of work in the browser:
• Level 1: provides basic functionality for selecting data from the ERP and MIS into the BI system.
• Level 2: creates complex SQL views for tabular objects of the ERP configuration, which can be used for the automatic creation of data models, including at the tracking level.
• Level 3: the level of automation, with the ability to create a variety of data models for various BI applications based on SQL views, without requiring direct access to the database.
Figure 4 shows one of the options for the model of uploading data from the ERP system to the BI system depending on the type of data source. Each of the above methods allows data to be uploaded for further aggregation, depending on the type and source of data.
The target IT architecture with integration of the BI, ERP, and MIS systems. We propose to develop the target IT architecture based on the integration of the ERP, MIS, and BI systems using a special connector, which will eliminate previously existing shortcomings in the organization of information exchange and


Fig. 4 Reference architectural model with integration of IoT technologies and the ERP, BI, and MIS systems (authors’ creation)

automate the processes of extracting, converting, and loading data as much as possible. In our opinion, this provides reliable data for the data analysis services of the collaborative BI system and improves their accuracy and efficiency, ensuring operational monitoring of the medical organization’s main indicators. The target IT architecture model, expressed in the ArchiMate language, is presented in Fig. 4. The user or developer no longer interacts with the ERP or MIS systems at any stage of the process; user actions relate only to working in the BI application. Users can view reports, make the necessary selections, create additional visualizations or stories, and upload the necessary data in the right formats. The stages of extracting, processing, and loading data into a BI application have thus changed significantly.
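To make the extract–transform–load step described above more tangible, the sketch below shows one possible shape of such a pipeline over a generic ODBC connection (integration option 2). The connection strings, table names, and column names are assumptions made for this example; in the connector-based target architecture, a dedicated component such as Xtract BI would take the place of the hand-written extraction.

```python
# Illustrative sketch only: moving data from an ERP/MIS database into a BI
# staging area through generic ODBC. All DSNs, tables, and columns are
# hypothetical.
import pyodbc

erp_conn = pyodbc.connect("DSN=erp_db;UID=bi_reader;PWD=secret")     # hypothetical DSN
bi_conn = pyodbc.connect("DSN=bi_staging;UID=bi_loader;PWD=secret")  # hypothetical DSN

# Extract: read raw admission records from the ERP/MIS side.
rows = erp_conn.cursor().execute(
    "SELECT patient_id, department, admitted_at, discharged_at FROM admissions"
).fetchall()

# Transform: compute length of stay, a typical input for bed-workload reports.
transformed = [
    (r.patient_id, r.department, (r.discharged_at - r.admitted_at).days)
    for r in rows if r.discharged_at is not None
]

# Load: write the prepared rows into the BI staging table.
cur = bi_conn.cursor()
cur.executemany(
    "INSERT INTO stg_length_of_stay (patient_id, department, days) VALUES (?, ?, ?)",
    transformed,
)
bi_conn.commit()
```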

5 Discussion and Conclusion

We proposed an approach to forming the transition process for the implementation of digital transformation projects in a medical organization. From our point of view, the transition to digital transformation in a medical organization involves the following stages: analysis of modern medical and IT technologies (what is already implemented in the organization and what is still advisable to implement), the formation of a system of requirements for the architectural model, and the formation of the architectural model itself.


The main research result is the formation of the upper-level architectural model, as well as the upper level of the transition plan for digital transformation in a medical organization. In the future, we plan to consider the levels of digital maturity of the enterprise through the formation of a system of relevant indicators. Based on these indicators, we plan to develop a methodology for assessing the levels of digital maturity of a medical organization and to give recommendations on the feasibility of moving to the next stage of digital transformation.
Acknowledgments The reported study was funded by RFBR according to research project № 19-010-00579.

References

Adzhiyev, M. Y. (2013). Main problems of quality management system of a medical organization. Young Scientist, (12), 561–562.
Alhadhrami, Z., Alghfeli, S., Alghfeli, M., Abedlla, J. A., & Shuaib, K. (2017). Introducing blockchains for healthcare. In Electrical and Computing Technologies and Applications (ICECTA) 2017 International Conference, no. 19, pp. 1–4.
Bae, J. M. (2015). Value-based medicine: Concepts and application. Epidemiology and Health, 37.
Begun, T. V., & Begun, D. N. (2017). Modern problems of management in healthcare. Young Scientist, 22, 416–418.
Dubgorn, A. S., Levina, A. I., & Lepekhin, A. A. (2019). The reference model of the functional structure of a medical organization. Management Research Journal, 5(1), 29–36.
Duka, D. (2013). Adoption of agile methodology in software development. In 36th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO).
Fröhlich, H., et al. (2018). From hype to reality: Data science enabling personalized medicine. BMC Medicine, 16, 150.
Geger, Y. V. (2016). Information technologies in management of medical care quality. Modern High Technologies, 2-1, 9–12.
Greefhorst, D., & Proper, E. (2011). Architecture principles. Berlin, Heidelberg: Springer.
Gopal, G., Suter-Crazzolara, C., Toldo, L., & Eberhardt, W. (2019). Digital transformation in healthcare – Architectures of present and future information technologies. Clinical Chemistry and Laboratory Medicine, 57(3), 328–335.
Hsieh, J. C., Li, A. H., & Yang, C. C. (2013). Mobile, cloud, and big data computing: Contributions, challenges, and new directions in telecardiology. International Journal of Environmental Research and Public Health, 10(11), 6131–6153.
Ilin, I., Iliyaschenko, O., & Konradi, A. (2018). Business model for smart hospital health organization. SHS Web of Conferences, 44, 00041.
Ilyashenko, O., Ilin, I., & Kurapeev, D. (2018). Smart hospital concept and its implementation capabilities based on the incentive extension. SHS Web of Conferences, 44, 00040.
Iljashenko, O., Bagaeva, I., & Levina, A. (2019). Strategy for establishment of personnel KPI at health care organization digital transformation. IOP Conference Series: Materials Science and Engineering, vol. 497, conference 1.
Ilyin, I. V., & Ilyashenko, V. M. (2018). Formation of requirements for a reference architectural model for the digital transformation of a medical organization. Sci. Bull. South. Inst. Manag., 82–88.


Ilyin, I. V., Ilyashenko, O. Y., & Ilyashenko, V. M. (2019). An architectural approach to the development of a medical organization in the context of healthcare digitalization. Journal of Management Studies, 5(1), 37–47.
Kroenke, D. M., Auer, D. J., & Yoder, R. C. (2018). Database processing: Fundamentals, design, and implementation (15th ed.). New York, NY: Pearson.
Kroenke, D. M., Auer, D. J., Vandenberg, S. L., & Yoder, R. C. (2017). Database concepts. New York, NY: Pearson.
Lankhorst, M. (2013). Enterprise architecture at work. The Enterprise Engineering Series. Berlin, Heidelberg: Springer.
Morales, J. M. H., & Clément, C. (2018). Technical challenges of active implantable medical devices for neurotechnology. In 2018 IEEE CPMT Symposium Japan (ICSJ), pp. 77–80.
Nicholson, J. K. (2006). Global systems biology, personalized medicine and molecular epidemiology. Molecular Systems Biology, 2, 52–61.
Petrova, M., Dale, J., & Fulford, B. K. (2006). Values-based practice in primary care: Easing the tensions between individual values, ethical principles and best evidence. The British Journal of General Practice, 56(530), 703–709.
PRINCE2 (2017). Handbook. Axelos.
Shlyakhto, E. V., & Yakovenko, I. V. (2017). Medicine oriented to the outcome of the disease. Translational Medicine, 4(1), 6–10.
Sowa, J. F., & Zachman, J. (1992). Extending and formalizing the framework for information systems architecture. IBM Systems Journal, 31(3), 590–616.
Troyansky, O., Gibson, T., & Leichtweis, C. (2015). QlikView your business: An expert guide to business discovery with QlikView® and Qlik Sense®. Wiley. 759 pp.
Whelan, J., & Meaden, G. (2016). Business architecture: A practical guide (p. 658). Routledge.
Zhao, W., Luo, X., & Qiu, T. (2017). Smart healthcare (editorial). Applied Sciences (Switzerland), 7(11), 1176.
Zhu, X., Shi, J., & Lu, C. (2019). Cloud health resource sharing based on consensus-oriented blockchain technology: Case study on a breast tumor diagnosis service. Journal of Medical Internet Research, 21(7), e13767.

Online Documents

Dedov, I. I., Tyulpakov, A. N., Chekhonin, V. P., Baklaushev, V. P., Archakov, A. I., & Moshkovsky, S. A. (2012). Personalized medicine: Current status and prospects. Vestnik RAMS, no. 12. Accessed Aug 13, 2019, from https://cyberleninka.ru/article/n/personalizirovannaya-meditsinasovremennoe-sostoyanie-i-perspektivy
Federal Enterprise Architecture Consolidated Reference Model. (2007). Version 2.3, October. Accessed Aug 18, 2019, from https://www.whitehouse.gov/omb
Gartner Research Process. (2019). Accessed Aug 18, 2019, from http://www.gartner.com/technology/research/methodologies/research_process.jsp
Kukes, V., & Sychev, D. A. (2010). Personalized medicine: New opportunities for increasing safety of pharmacotherapy. Remedium, no. 1. Accessed Aug 15, 2019, from https://cyberleninka.ru/article/n/personalizirovannaya-meditsina-novye-vozmozhnosti-dlya-povysheniya-bezopasnostifarmakoterapii
Lebedev, G. S., Shaderkin, I. A., Fomina, I. V., et al. (2019). Evolution of internet technologies in healthcare. CyberLeninka. Accessed Aug 28, 2019, from https://cyberleninka.ru/article/n/evolyutsiya-internettehnologiy-v-sisteme-zdravoohraneniya
McKinsey Research. (2019). Accessed Aug 07, 2019, from https://healthcare.mckinsey.com/sites/default/files/The_big-data_revolution_in_US_health_care_Accelerating_value_and_innovation%5B1%5D.pdf


OPENGROUP TOGAF (n.d.). Standard [electronic resource]. Accessed Jan 18, 2019, from https://www.opengroup.org
Platform Hyperledger. (2019). Accessed Aug 07, 2019, from https://www.hyperledger.org/resources/publications/changehealthcare-case-study
Welcome to TOGAF. (2019). Version 9.1 Enterprise Edition, The Open Group. Accessed Aug 18, 2019, from http://www.opengroup.org/togaf

Aluminum Production and Aviation: An Interesting Case of an Interwoven Rebound Effect in a Digital Transforming World Tessaleno Devezas and Hugo Ruão

Abstract This chapter presents an extensive prospective analysis of worldwide aluminum production. It discusses in detail the six major reasons why aluminum is not only one of the fundamental materials of modern civilization but could also still immensely expand its range of applications and volume of production globally. The high demand for aluminum from all industry sectors can endure for a couple of decades from now, but probably at a lower growth rate, most likely following the path of a logistic curve. Today’s high per capita consumption in highly industrialized countries (between 15 kg and 30 kg per capita) will probably persist for the next two decades, considering that these countries also have significant automobile and aircraft production plants. A very important aspect of aluminum production/consumption is that recycling of scrap aluminum will very probably gain renewed momentum as environmental concerns worsen; in this regard, it is important to take into account that the aluminum in circulation today is almost enough to satisfy global demand through recycling.

1 Aluminum: From Noble Metal to Cheap Commodity The four engineering materials most produced/consumed by mankind today are, in descending order: cement/concrete, iron/steel, plastics, and aluminum. Table 1 below exhibits, for these four materials, the percentage change in production at each decade since 1960 as well as the tonnage produced in 2018.


Table 1 Percentage change in production by decade and tonnage produced in 2018 (Devezas et al. 2017)

Material     1960–1970  1970–1980  1980–1990  1990–2000  2000–2010  2010–2018  Tonnage 2018 (10^6 metric tons)
Concrete         81         54         18         59         98         30        4300
Iron/steel       72         21          8         10         68         75        1808
Plastics        366         90         81         50         66         32         350
Aluminum        115         60         25         26         70         56        64.3

While concrete and iron are old companions of man, plastics and aluminum are, in comparison, newcomers of the twentieth century. Metallic aluminum does not occur as a pure metal on earth, and the earliest attempts to synthesize the bright silvery metal date back to the mid-nineteenth century, when some chemists demonstrated that it was possible to obtain the metal by leaching and subsequent electrolysis of white clays rich in alumina. In these early times, aluminum was considered more valuable than gold (about US$ 545 per pound), and some small ingots of aluminum were exhibited for the first time at the Exposition Universelle of 1855 in Paris, presented to the general public as ‘the silver from clay’ (Richards 2018). This fact aroused the attention of the French emperor Napoleon III, who glimpsed the potential of this light metal for military applications and generously financed the French chemist Henri Étienne Sainte-Claire Deville to continue his research and the industrial production of this precious metal. At the next Paris fair, in 1867, visitors were presented with aluminum wire, foil, and plates. Examples of important facts to be noted from these early times were Jules Verne’s extraordinary futurist vision of an aluminum-made spacecraft in his famous novel From the Earth to the Moon, and the use of a pure aluminum pyramid of about 3 kg as the capstone of the Washington Monument in 1884. But truly cost-effective industrial production of aluminum was made possible only in 1886, when two young scientists, the American student Charles Hall (founder of ALCOA in 1888, Pittsburgh) and the French engineer Paul Héroult, working separately, devised an inexpensive electrolysis process by which aluminum could be extracted from aluminum oxide. In the following year, the Austrian chemist Carl Josef Bayer developed a chemical process by which alumina could be extracted from bauxite ore. Both the Bayer process and the so-called Hall-Héroult process are still used today to produce primary aluminum all over the world. The start of industrial aluminum production by the end of the nineteenth century was still very modest, because of the complexities and energy-intensive processes involved in refining the metal from ore. On the other hand, the abundance of bauxite ore throughout the world was already common knowledge, as was the fact that aluminum is the most abundant metallic element in the earth’s crust. Then, at the dawn of the twentieth century, the myth of aluminum as a noble metal dissipated, and the metal entered the century traded on the stock exchanges at a rate of about US$ 19/kg (inflation corrected).


Fig. 1 Price of aluminum per metric ton since 1900 in inflation-adjusted 2015 US$ (data from IAI, World Aluminium Statistics 2018)

Aluminum found early industrial use in engines, as was the case of the one built in 1903 by the Wright brothers to power their first airplane (Sheller 2014). Aluminum foil entered the market in 1910, and soon alloy development, begun in 1911 (Richards 2018), improved physical properties and opened many new industrial uses for the metal. Fig. 1 below exhibits the price behavior of aluminum since 1900 (average of data from the US Geological Survey 2018, Kitco Corp., and the International Aluminium Institute). In the early years of the twentieth century, the metal became widely used in many everyday items like jewelry, eyeglass frames, and optical instruments, and rapidly supplanted copper and cast iron in the production of tableware, which caused a first price peak in 1908 for the still hard-to-produce aluminum. But the development of Al-alloys and the growth in production allowed a sharp decline in price in the years before WW I. During the war, however, nations in conflict demanded huge quantities of aluminum for war equipment, explosives (aluminum powder is extremely flammable), and above all for the construction of light but strong airframes for the emergent aviation industry (Zimmering 2017). Such intensive demand could not be satisfied by the still growing aluminum industry, and prices rocketed in a very short period, as demonstrated by the second strong peak in Fig. 1 (~US$ 27/kg around 1916). Production grew exponentially from about 7000 metric tons in 1900 up to about 100,000 metric tons in 1916, when intensive recycling also started in the USA and Europe. Soon after the war, prices reached a historical minimum of about US$ 5/kg in 1921, probably because production slightly exceeded demand. But the entrenching of the Great Depression in the 1930s and the following troubled years again brought radical changes in the aluminum industry:
– Rising prices from 1930 until 1940 (peak in 1933 of about US$ 9/kg).
– In the USA, because of the economic downturn, the Works Progress Administration (WPA) advanced projects to expand hydroelectric generation capacity, which in turn increased the production capacity of primary aluminum. During this time, the Aluminum Association was formed, and the Association’s first meeting was held in New York City in 1935.
– Energy prices began to be largely subsidized by the state, not only in the USA but also in Europe. This context brought incentives for the use of aluminum as a civil engineering material, used in both basic construction and building interiors.
– Also of paramount importance at that time was the development of aircraft engineering, both in the USA and Europe. The fierce competition between Boeing and Douglas led to the birth of modern commercial aviation, with the launch of the Boeing 247 operated by United Airlines (1932) and the DC-3 operated by TWA (1935)—both aircraft producers (Boeing and Douglas Aircraft) used precipitation-hardened Al-alloys (then known as Duralumin—an Al-Cu alloy) in the construction of fuselages.¹
– But the definitive thrust for the development of strong and light Al-alloys, as well as for cost-effective ways of producing aluminum, came from the approaching wartime (Zimmering 2017).

These two last points are closely related. Short before the Second World War aluminum was declared a strategic material of extreme importance, mainly due to its intensive use in aircraft production. The US administration urged Alcoa (since 1910 the official name of the original Hall’s Pittsburgh Reduction Company and the aluminum production monopolist in the USA at the time) to expand its production. Germany, then the world’s leading producer of aluminum, envisioned the metal as its edge in the war. After the United Kingdom was attacked in 1940, it started an ambitious program of aluminum recycling and appealed to the public to donate any household aluminum for airplane building—what indeed happened in large scale. During WW II the production peaked, first exceeding 1000,000 metric tons in 1941. A great share of the aluminum produced in the USA and Great Britain was sent to the Soviet Union (over 320,000 metric tons between 1941 and 1945) to be used in military engineering for both airplanes and tank engines. But as we can observe in Fig. 1, the war has not provoked a new increase in price, much on the contrary, prices plummeted in the 1940s, from ~ US$ 7/kg in 1940 down to—US$ 4/kg in 1945. Government incentives and the war effort eliminated price competition; electric power availability allowed to satisfy demand. Such low-level trend endured all over the 1950s and 1960s. In the late 1950s aluminum entered the Space Race—the first artificial satellite, the Russian Sputnik, was produced with joined aluminum hemispheres. Since then satellites and spacecraft used aluminum in great extent. At that time aluminum

¹ The revolutionary design of the Boeing 247 (1932), DC-1 (1933), and DC-3 (1935) models is credited as a legitimate representative of modern airplane engineering, whose fundamental characteristics are still used today. The DC-3 is considered history’s most successful airplane; although the DC-3 passenger version was discontinued in 1943, some of these original aircraft are still in service on small regional cargo routes around the world—more than 75 years later!


consumption grew at a rate of about 10%/year, a trend that lasted for the following decades, boosted by the production of wires and cables, structures for high-speed trains, parts for automobile engines, beverage cans, and many other new applications. Production exceeded 10 × 10^6 metric tons in 1971 and prices fell until 1973 (~US$ 3/kg), supported by the decline of extraction and processing costs allied with ever-increasing technological progress in the development of new alloys and properties. This increased demand for aluminum made it an exchange commodity—it entered the London Metal Exchange in 1978 (Zimmering 2017). In the second half of the 1970s producers became aware of environmental concerns about waste from industrial production and ore extraction, as well as greenhouse emissions, and governments enforced a series of regulations favoring recycling and waste disposal. Therefore, prices increased slightly and fluctuated in the following years, and this also contributed to the rise of aluminum recycling. As we can infer from Table 1, the most compelling growth in production for all four most consumed materials happened in the decade 1960–1970. In the three following decades, the rate of growth in production declined significantly—that was a long period of successive recessions, declining GDP growth rates, worsening environmental concerns, etc. But again, in the decade 2000–2010 things changed dramatically: a new growth phase in consumption emerged, for reasons to be commented on later in a following section. Prices, however, continued relentlessly on a downward trend, as evidenced in Fig. 1, and this trend was not only a feature of aluminum but of most commodities in general (Vaz 2018). The historical prices for aluminum exhibited in Fig. 1 concern mainly the value of pure aluminum for the transformation industry. There are some important differences regarding different alloys and the corresponding alloying elements that will be covered in sect. 3.

2 The Cream of the Crop Amongst the most used materials presented in Table 1, aluminum is undoubtedly the most environmentally friendly. We need, however, to be cautious in claiming that any material is environmentally friendly. The production of any material implies an assault on the environment throughout its entire life cycle—extraction, production, use, and disposal—and in any of these phases we may have some type of environmental degradation and possible emission of greenhouse gases. Extraction of ores is always destructive, requiring the removal of large amounts of overburden material in open-pit processes that lead to deforestation and leave behind the so-called “red mud” lakes that can overflow and pollute local groundwater. But the extraction of bauxite (a sedimentary clay-like rock rich in alumina) is far simpler than the extraction of iron ores. Adriaanse et al. (1997) estimated, for instance, a rate of 0.48 tons of overburden per ton of bauxite, compared to 2 tons of overburden per ton of iron ore. ‘Red mud’ lakes in the case of iron ore imply the


Fig. 2 Comparative % contribution of material industries to CO2 emissions (IPCC 2018)

creation of large dams that can eventually break, causing enormous natural catastrophes, as was the recent case of Brumadinho in Brazil. Regarding disposal, both steel and aluminum are to date the two most recycled materials, but recycling aluminum is far more economical (requiring only about 10% of the energy required to produce primary aluminum) than recycling steel (about 40% of the energy required for primary production). Disposal of plastics is probably one of the most serious environmental concerns today. Concerning CO2 emissions, a recent IPCC report (2018) presented the contribution of material industries, which is reproduced in Fig. 2—as can be seen, the contribution of aluminum is minimal when compared to all other material groups. On the other hand, we must recognize that all of these materials are needed for our infrastructures, our comfort, and our survival—let us say they are the necessary crop for our existence as an advanced civilization. Under this focus, it can be said that aluminum is the cream of the crop. The major problem with aluminum is that its primary production is still very energy-intensive—the aluminum atom has a very strong chemical affinity with the oxygen atom, and breaking their very strong covalent-ionic bonds is the price that nature charges us to use in practice one of the most abundant elements of the earth’s crust. We can present a list of at least six major reasons to believe that aluminum is not only one of the fundamental materials of modern civilization, but could still immensely expand its range of applications and volume of production globally:
1. Aluminum is the third most abundant element of the earth’s crust. Expressed in weight % of the crust (down to 5 km deep) we have, in decreasing order: oxygen (47%), silicon (27%), aluminum (8%), iron (5%).
2. There are still immense recoverable bauxite resources all over the planet. According to Mining Resources (2019), the estimated resources amount to over 25,000 Mton, distributed mainly among Australia, Guinea, Brazil, China, Russia, India, Indonesia, Jamaica, Kazakhstan, and Suriname (to name the 10 major

reserves, the so-called 10 bauxite behemoths).² Considering the rate of 4 to 1 for aluminum production from bauxite (4 tons of bauxite produce 2 tons of alumina, which produce 1 ton of aluminum) and today’s yearly production of about 60 Mton of aluminum, such reserves can satisfy demand for the rest of the century or more (a back-of-the-envelope check follows this list; a detailed forecast is presented in the last section of this paper).
3. Aluminum is 100% recyclable and can be recycled indefinitely with no loss in quality. As will be shown in the next section, today about one-third of the world aluminum production comes from scrap and/or recycling of end-used aluminum products (cans, cars, windows, etc.). In theory, we can state that we have an inexhaustible supply of aluminum, for what we have of it in circulation now could be recycled to satisfy world demand. Moreover, as pointed out in the previous section, the energy required for recycling (production of secondary aluminum) is only about 10%, or less, of the energy required to produce primary aluminum.
4. Aluminum alloys have a specific strength (strength-to-weight ratio) much higher than that of steels and nearly as high as that of titanium alloys, but they are essentially lighter and cheaper. Moreover, aluminum presents a suitable elasticity (Young’s modulus between 70 and 90 GPa) to be used as a structural material in applications that require a compromise between high strength and some capacity for elastic deformation in service, as in cars, airplanes, bridges, etc. Add to that their exceptional corrosion resistance, which can be further enhanced by specific heat and surface treatments.
5. Aluminum is a relatively very cheap material. The price of pure aluminum and of the most used Al-alloys lies under US$ 2/kg. Cheaper than aluminum we find only the immense family of the most common steels and cast iron. Based on data from Granta CES EduPack (2019), Fig. 3³ exhibits a comparison among the most common metals and alloys, and Fig. 4 presents the price range of the larger family of Al-alloys.
6. Aluminum is nowadays the material with the broadest spectrum of applications in industry, perhaps even broader than plastics. This theme is discussed in sect. 4 and in the final discussion about the future. Looking at the shorter-term future, we can see that aluminum is the promising material that will enable some very important technological achievements, such as the electric car (already in course), the space elevator (graphene or carbon nanotube tether plus Al-alloy climber), or space propulsion with solar sails (very thin pure aluminum foils, already produced).
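As referenced in point 2 above, the claim that the known bauxite reserves can cover demand for roughly the rest of the century follows from a simple calculation using only the figures quoted in the text; the sketch below is a rough check, not a forecast.

```python
# Back-of-the-envelope check of point 2, using only figures quoted in the text:
# ~25,000 Mt of recoverable bauxite, a 4:1 bauxite-to-aluminum ratio, and
# ~60 Mt of primary aluminum produced per year.
bauxite_reserves_mt = 25_000      # Mt of recoverable bauxite
bauxite_per_aluminum = 4          # 4 t bauxite -> 2 t alumina -> 1 t aluminum
annual_production_mt = 60         # Mt primary aluminum per year

potential_aluminum_mt = bauxite_reserves_mt / bauxite_per_aluminum  # ~6,250 Mt
years_of_supply = potential_aluminum_mt / annual_production_mt
print(round(years_of_supply))  # ~104 years, i.e. roughly the rest of the century
```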

² The order presented is not the order of the largest producers, but rather of the largest reserves. The order by volume of production is: Australia, China, Brazil, Indonesia, Guinea, India, Jamaica, Kazakhstan, Russia, and Suriname.
³ The prices in this figure are given in Euros/kg, as presented by CES EduPack.

Fig. 3 Price comparison among several groups of metal alloys whose prices lie under Euro 20/kg (authors’ creation based on data and software from Granta CES EduPack 2019)


Fig. 4 Price difference among several types of aluminum alloys (author’s creation based on data and software from Granta CES EduPack 2019)


3 Comments on Al-Alloys Prices Using the same data from Granta CES EduPack (2019), we can construct the graph exhibited in Fig. 4, which presents the different prices for several types of aluminum alloys. Aluminum alloys are classified into two major groups, wrought alloys (85% of global production) and cast alloys, both of which may or may not be subjected to heat treatments to improve their mechanical properties, depending on their respective alloying elements. The designation presented corresponds to the International Alloy Designation System (IADS), developed in 1970 by the USA’s Aluminum Association (ref). According to this designation system, wrought alloys are specified by four digits, distributed as nine families; the first digit specifies the major alloying element, and the other digits represent a classification number, mostly related to the quantity of the alloying elements. It is thus: 1XXX (pure aluminum), 2XXX (Cu), 3XXX (Mn), 4XXX (Si), 5XXX (Mg), 6XXX (Mg, Si), 7XXX (Zn), 8XXX (mainly Li), 9XXX (reserved for future alloys).⁴ Cast alloys are also specified with four digits, the last digit separated by a dot: 1XXX.X, 2XXX.X, etc. As can be observed in Fig. 4, we have a large group located at ~Euro 2/kg, which corresponds to pure aluminum, all cast alloys (except A201), several 2XXX and 6XXX alloys, and all 3XXX, 4XXX, and 5XXX alloys. Another group follows between Euro 3–4/kg, which contains the whole 7XXX family and some 6XXX alloys. Then, at about Euro 5–6/kg, appears the special A-201.0-T7 alloy, an Al-Cu alloy with a small amount of silver (0.5 to 1%), solution heat treated but overaged (a long time at ~300 °C), used for several casting applications in the auto and aerospace industries. This alloy stands out for its excellent weldability and is also used to produce Al-rivets. Above Euro 10/kg we have four very special alloys for aerospace applications—the major reason for their higher price is the use of lithium. The main characteristics of this group of alloys are their higher Young’s modulus (80 to 85 GPa), higher tensile strength (450 up to 600 MPa), and lower density (2.6 g/cm3 in contrast with 2.8 g/cm3 for the other Al-alloys). The 2297-T87 and 2090-T83 are both Al-Cu alloys with 1.7% and 2.5% Li respectively, solution heat treated and cold worked. The 8090-T851 and 8091-T6 are pure Al-Li alloys (2.5% and 2.8% Li respectively), developed in the second half of the 1980s by the Royal Aircraft Establishment and ALCOA, and are generically known by the trade name Lital. Presently, these Al-Li alloys constitute the highest-performance Al-alloys used in airplane fuselages. Another interesting measure was proposed recently by Vaz (2018), which, on the one hand, confirms a general trend towards lower material costs in society, but on the other hand may contribute to its counterpart—the rebound effect. The author coined these measures GEME (Global Effort of Materials in Economy) and IEME (Individual Effort of Materials in Economy).

⁴ These four digits are followed by letters that specify the type of temper treatment: F (as fabricated), H (strain hardened, followed by numbers that specify whether the alloy was submitted to other heat treatments), O (fully annealed/soft), T (heat treated, followed by numbers that specify the type of treatment), W (solution heat treated). Cast alloys also use the same letter codes.


Fig. 5 Global Effort of Materials in Economy (GEME) calculated for a basket of the 79 most used materials between 1960 and 2015. The Y-axis expresses the percentage relative to global GDP (Vaz 2018)

Fig. 6 Individual Effort of Materials in Economy (IEME) calculated for a basket of the 79 most used materials between 1960 and 2015 and normalized with relation to 1960 (1960 = 1) (Vaz 2018)

Fig. 7 IEME calculated for aluminum in the period 1960–2015, normalized to 1960 (1960 = 1) (Vaz 2018)

The first one is calculated by multiplying the value (price) of a given material by its produced tonnage and dividing this product by global GDP; the second one is calculated by dividing GEME by the world population (the same as dividing price × volume by affluence).
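The two definitions above can be written directly as two small functions; the sketch below is illustrative only, and the numeric inputs (price, tonnage, global GDP, population) are rounded, hypothetical values rather than Vaz’s original data.

```python
# Illustrative sketch only: the GEME and IEME measures as defined above,
# computed for a single material with hypothetical, rounded input values.
def geme(price_per_ton: float, tonnage_tons: float, global_gdp: float) -> float:
    # GEME = (price x produced tonnage) / global GDP
    return price_per_ton * tonnage_tons / global_gdp


def ieme(price_per_ton: float, tonnage_tons: float, global_gdp: float,
         population: float) -> float:
    # IEME = GEME / world population (i.e. price x volume divided by affluence)
    return geme(price_per_ton, tonnage_tons, global_gdp) / population


# Hypothetical aluminum figures: ~US$ 2,000/t, ~64 Mt produced per year,
# global GDP ~US$ 80 trillion, world population ~7.6 billion.
g = geme(2_000, 64e6, 80e12)
print(f"GEME ~ {g:.4%} of global GDP")                    # ~0.16%
print(f"IEME ~ {ieme(2_000, 64e6, 80e12, 7.6e9):.2e}")    # per capita share
```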


Vaz calculated the GEME and IEME for a basket of the 79 most used materials in the world economy in the period 1960–2015. Both graphs are presented in Figs. 5 and 6 and clearly reveal a downward trend—the strong oscillation between 2000 and 2010 (also evident in Table 1) will be discussed in sect. 5. Figure 7 shows the IEME calculated for aluminum between 1960 and 2015. Notably, the strong oscillation in the period 2000–2010 is not so evident here, reflecting the stability of the aluminum price mentioned previously. The oscillations between the 1970s and 1980s also appear in Fig. 1 (on a shorter scale).

4 Incredible Versatility Aluminum is indeed a wonder material. Lightweight, corrosion-resistant, and 100% recyclable, aluminum has become a foundation of modern civilization (Smil 2014). If the eras of human history are to be named after the metal in use (like the Bronze Age and the Iron Age), we could perfectly well say that we are entering an Aluminum Era. Its spectrum of applications is extraordinarily large, even larger than that of plastics if we consider its high-performance applications in several industry branches, for instance aerospace and automotive. Figure 8 below exhibits the global demand in 2017 for semi-finished aluminum products, with a breakdown by sector (data from Statista 2018). It is important to note that this has been a very dynamic demand market. Only 10 years earlier (2008) the same breakdown was very different, with smaller shares for construction (12%), electrical (7%), and machinery (7%). Consequently, the shares for packaging and other applications were much higher (22% and 18%, respectively), in part because the demand market was not yet as well defined as today. Although the share for transportation has remained roughly the same, there has been increased demand for rolled sheets sold to auto manufacturers and the aeronautic industry.

Fig. 8 Aluminum global demand in 2018 for semi-finished products—breakdown by sector (Statista 2018): transport 26%, construction 26%, electrical 14%, machinery/equipment 10%, foil stock 8%, packaging 7%, other 5%, consumer durables 4%


Fig. 9 Net weight (kg) of aluminum used in American cars. The value for 2020 is extrapolated (Aluminum Transportation Group 2018)

The jump observed in construction was essentially due to the surge in construction that took place in China over the last two decades, as will be discussed in sect. 5. But the industry segment that is really driving aluminum demand these days is the transportation sector, and we can add that it is also the one with the greatest growth potential. This driving force is acting not only on the demand side (and production volume) but is also pulling the technological improvement of aluminum alloys. As we know, the transport sector, the most technologically intensive downstream segment of the aluminum industry, is keenly focused on reducing weight and fuel costs, and the best way to do that is to replace iron/steel with aluminum alloys, which are about 30% lighter. As we have seen in the previous section, there has been a notable effort in developing the new Al-Li alloys, which are about 7–10% lighter than all other Al-alloys. Such a reduction may not seem like much, but it is significant if we consider that, in the roughly 20 tons of fuselage of a medium-sized passenger aircraft, such an achievement would imply a reduction in weight of around 1.4 to 2 tonnes, which would in turn imply a significant reduction in fuel consumption. It is very important to highlight here the growing relevance of the use of aluminum in the automotive industry. A recent report of the Aluminum Transportation Group (2018) calls attention to the fact that the use of aluminum in cars in the USA has steadily increased since 1975. In that year about 38 kg of the metal was used in an average American car; today this weight is about 200 kg. Figure 9 presents the numbers of this increase since then. Very probably we are just now experiencing a quantum leap in this kind of measure (weight of aluminum per car). Ford and Tesla have already launched on the market models whose bodies are almost entirely made of aluminum: the Ford pickups F-150 (2015) and F-250 (2017), and the Tesla Model S (2012). Perhaps a great incentive for this sector was the introduction in 2014 of Alcoa’s Micromill Technology, which produces, as has been reported, aluminum alloy sheets with 40% greater formability and 30% greater strength than those produced in traditional rolling mills.

Fig. 10 Evolution of world production of primary aluminum since 1960 (data from IAI and USGS)

It is also almost certain that soon all electric and/or self-driving cars will be built for the greatest part in aluminum. As already highlighted in sect. 1, aluminum has always played a major role in the aeronautical industry, and this role is still progressing unabated, despite the competition with CFRP. We turn to this theme in sect. 6.

5 Snapshots of the Aluminum World Production and Consumption Figure 10 below shows the evolution of primary aluminum production since 1960 (data from IAI and USGS 2018⁵). Two different growth periods can be distinguished: a first, slower one between 1960 and 2000, followed by an acceleration after 2000; this acceleration progressed unabated even after the hesitation during the recession of 2009. The world map of primary production has radically changed in the last 20 years, as can be drawn from Table 2 below. Major producers in North and South America, Western Europe, Eastern Europe, and Oceania significantly reduced their share, reflecting the entry “in force” of new private producers and/or conglomerates in China, India, and the Arabian Peninsula. In light blue are marked the regions with a share decrease and in light red the regions with an important share increase. An enormous change happened in just one decade. As an example, Table 3 presents the list of the eight biggest producers (with their respective % shares) in 2008 (production over 1000 × 10³ metric tons) (for the sake of simplicity let us use from here onwards

⁵ All data of the curves presented in this section stem from IAI and USGS.


Table 2 Primary aluminum production—percentual share comparison among the world’s regions (data from IAI, World Aluminium Statistics, 2018)

Region                % 2000 (26,055 × 10³ t)   % 2010 (42,353 × 10³ t)   % 2018 (64,336 × 10³ t)
North America                25.8                       11.0                       5.8
South America                 8.8                        5.5                       1.8
Western Europe               15.6                        9.0                       5.8
East/Central Europe          15.0                       10.0                       6.3
China                        11.0                       41.0                      56.7
Asia (non-China)              8.5                        6.0                       6.8
Africa                        4.6                        4.0                       2.6
Oceania                       8.5                        5.4                       3.0
GCC(a)                        ND                         6.4                       8.3
Non reported                  2.2                        2.7                       2.9

(a) GCC—Gulf cooperation countries—Saudi Arabia, Kuwait, United Arab Emirates, Qatar, Bahrain, Oman

Table 3 Comparison between the eight bigger aluminum producers in 2008 and 2018 (data from IAI, World Aluminium Statistics, 2018)

Rank (producers with output over 1000 × 10³ metric tons)   2008 (40,131 × 10³ t)   2018 (64,336 × 10³ t)
1                                                          China (34)              China (56)
2                                                          Russia (10)             Russia (6)
3                                                          Canada (8)              Canada (5.3)
4                                                          USA (7)                 India (5.3)
5                                                          Australia (5)           United Arab Emirates (4.3)
6                                                          Brazil (4)              Australia (2.5)
7                                                          Norway (3)              Norway (2)
8                                                          India (3)               Bahrain (1.6)

TMT for ‘thousand metric tons’) compared with the same list in 2018. As can be seen, China almost doubled its share, now producing over 36,000 TMT, distributed among nine different producers. Brazil and the USA disappeared from the list and were displaced to 11th and 13th place respectively—Brazilian production shrank from 1600 TMT to 800 TMT, and US production shrank from 2650 TMT to 790 TMT! We also note the entry on the stage of two new players from the GCC. Such a transition also brought another important change regarding the primary energy source used in the production of the metal: while in the twentieth century the main energy source was essentially hydroelectric power, we now have a huge contribution from coal (China) and natural gas (Arab countries and Russia).

Fig. 11 Global and China’s primary aluminum production, compared with the linear growth obtained by subtracting China from the global total (authors’ creation)

Fig. 12 China’s share in global primary aluminum production over the last 20 years (authors’ creation)

The rise of China is indeed the most impressive aspect of this change in the worldwide scenario, and, as already pointed out by Devezas et al. (2017), China’s growth is fundamentally disrupting any reasonable dematerialization analysis. Figure 11 clearly demonstrates this disruption—if we subtract China’s contribution from the global production of primary aluminum, we can see that the growth in production of the rest of the world is practically linear, more or less in tandem with the economic growth of the other economies. In other words, the change in direction of the growth curve of aluminum production observed at the dawn of this century was essentially the result of the Chinese boom. The same phenomenon was also


demonstrated by Devezas et al. (2017) for the worldwide production of cement—almost exponential after 2000, but smoothly linear when China is subtracted from global production. On the other hand, if we plot China’s share over the last 20 years, we get a typical sigmoid (logistic) curve, which suggests that this share has already reached its ceiling, as shown in Fig. 12. This is also visible in the upward unfolding of both curves of Fig. 11 and does not mean that production will reach a ceiling—growth may and certainly will continue in the coming decades, for we still have the growing contribution of all other players. The information to be drawn from this curve is only that China’s slice might remain constant, while the whole cake of aluminum production continues to grow. At this point, it is important to address the relevant issue of the production of secondary aluminum or, perhaps better, the management of manufacturing scrap and end-used goods (recycling). It is important to note that all information and data commented on until now refer to primary aluminum production, in other words, the metal resulting from the electrolytic reduction of alumina. If we wish to evaluate global consumption, it is necessary to add to those figures the tonnage of metal resulting from recovered scrap. Unfortunately, surveying data on the production of secondary aluminum on a global scale is not an easy task. In some regions of the planet, China included, much of the recovered aluminum may enter the material flow as primary production, which certainly distorts the available statistics. Few countries publish trustworthy statistics on secondary production, and there is some confusion in distinguishing between recovering manufacturing scrap and recycling end-used products. It is relevant to observe that the two countries that dropped out of the list of big producers (Table 3)—Brazil and the USA—are now world leaders in aluminum recycling. The best example is the USA: according to data from USGS (2018), total production of aluminum in the country in 2018 was 4590 TMT, of which 890 TMT was primary and 2700 TMT secondary.

Fig. 13 Global aluminum production—primary and estimated secondary production (authors’ creation)


Fig. 14 Share of primary and secondary aluminum production (authors’ creation)

Of the latter, 1600 TMT resulted from old scrap (recovery of discarded aluminum goods) and 2100 TMT came from new scrap (manufacturing residues). Brazil now recycles 98.2% of its aluminum can production, equivalent to 14.7 billion cans per year, ranking first in the world (Japan follows with a rate of 82.5%). Several sources (IAI 2018; Zimmering 2017; Smil 2014) indicate that today’s rate of secondary aluminum production is around one-third of primary production. Figure 13 compares the production of primary aluminum with the estimated unfolding of secondary aluminum. Surprisingly, the share of secondary aluminum has remained practically constant over the last two decades, as appears in Fig. 14. It is indeed difficult to draw a conclusion about the cause of such a constant rate. One possible reason is the point already discussed about missing and/or inexact numbers for recycled aluminum. Another hypothesis is perhaps related to some technological limitations of recycling the metal. Zimmering (2017), in his interesting new book “Aluminum Upcycled,” makes the point that recycling is not as clean and easy as people think: there are alloying elements and other impurities that must be removed using chemicals like chlorine, and there are toxic fumes and chemical releases. But, in our view, the phenomenon is most probably an economic question related to commodity value: overproducing recycled aluminum may lead to a further decline in price and decrease the profit margins of investors and of primary aluminum producers. Finally, it is not to be overlooked that there is a lack of more effective legislation limiting bauxite exploitation quotas, as well as the absence in several countries of more efficient political and commercial incentives to recycle the metal. What is certain is that aluminum figures nowadays among the most recycled materials, surpassing even plastics. Figure 15 presents an estimated comparison among the most recycled materials—the percentage indicates the recycling rate for a given material.


Fig. 15 Estimated comparison among the most recycled materials (steel, rubber/tires, paper, aluminum, plastic) (authors' creation)

Fig. 16 Evolution of the per capita world aluminum production since 1960 (authors' creation)

Global per capita production of primary aluminum is today of the order of 8.5 kg, which we obtain simply by dividing the world production by the world population in a given year. Figure 16 shows its evolution since 1960, and again we can distinguish two growth periods (before and after 2000) and a continuing growth trend. If we wish to estimate the per capita consumption, it is necessary to add to this number the estimated secondary aluminum production of about one-third of the primary production, which then results in a consumption of ca. 11 kg/cap. This quantity represents what is called "industrial consumption", i.e., the quantity of material that goes into manufactured goods. This figure is easier to calculate than the so-called "final consumption", which means the quantity of material in goods consumed by the population; such consumption is very difficult to calculate, for it requires a minimum knowledge of the immense panoply of goods consumed and their respective aluminum content.


Fig. 17 Aluminum consumption per capita (kg/cap) versus GDP per capita (PPP, 2010) for some populous countries (Germany, USA, Japan, France, China, Turkey, Indonesia, Russia, India, Mexico, Brazil) in 2010, according to data from USGS (2018)

Industrial consumption differs largely among countries, according to the average national income level as measured by GDP per capita, or affluence (Smil 2014). In general, countries with a GDP per capita of less than US$5000 consume less than 5 kg/cap; countries with a GDP per capita between US$5000 and US$15,000 consume between 5 and 10 kg/cap; and countries with incomes greater than US$25,000 consume between 15 and 35 kg/cap. Figure 17 shows the consumption for some populous countries with large differences in income level, as calculated from USGS (2018) data for 2010. Considering then the global GDP per capita of US$10,748 in 2017, our estimated value of about 11 kg/cap for the global consumption of aluminum is very reasonable. Such a change in per capita consumption with increasing affluence is an important factor to take into account when speculating about likely future global consumption (Sect. 7).
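To make the arithmetic behind these per capita figures explicit, the short Python sketch below reproduces the calculation under the chapter's stated assumption of secondary production at roughly one-third of primary production; the production, population, and GDP threshold values are illustrative inputs taken from the text and figures, not an official dataset.

```python
# Illustrative sketch of the per capita aluminum consumption arithmetic
# described in the text. Input values are approximate figures quoted in
# the chapter (world primary production ~64,000 TMT, population ~7.6 billion).

def per_capita_consumption(primary_tmt, population, secondary_ratio=1/3):
    """Return (primary kg/cap, industrial consumption kg/cap)."""
    primary_kg = primary_tmt * 1_000_000          # 1 TMT (thousand metric tons) = 1,000,000 kg
    primary_per_cap = primary_kg / population
    industrial_per_cap = primary_per_cap * (1 + secondary_ratio)
    return primary_per_cap, industrial_per_cap

def affluence_band(gdp_per_cap_usd):
    """Rough consumption band (kg/cap) by GDP per capita, as given in the text."""
    if gdp_per_cap_usd < 5_000:
        return "< 5 kg/cap"
    elif gdp_per_cap_usd < 15_000:
        return "5-10 kg/cap"
    elif gdp_per_cap_usd > 25_000:
        return "15-35 kg/cap"
    return "10-15 kg/cap (intermediate range, not detailed in the text)"

primary, industrial = per_capita_consumption(64_000, 7.6e9)
print(f"primary: {primary:.1f} kg/cap, industrial: {industrial:.1f} kg/cap")
print("global band for GDP/cap of US$10,748:", affluence_band(10_748))
```

Run with these inputs, the sketch returns roughly 8.4 kg/cap of primary metal and about 11 kg/cap of industrial consumption, matching the orders of magnitude quoted above.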

6 Signals of Backfire

How should we interpret the behavior of the curves appearing in Fig. 18? Are they just two spikes without any relation between them, or are they a hint of a possible backfire, or rebound effect, in the sense of the Jevons paradox? The latter is a phenomenon that occurs when technological progress or other measures increase the efficiency with which a resource is used, but the rate of consumption of that resource rises in tandem with increasing demand.


Fig. 18 Comparison between the unfolding of aluminum price and primary aluminum production since 1900. The scale on the Y-axis is valid for both curves (Price in US$/metric ton and production in thousand metric tons) (authors’ creation)

Such spikes abound along the last two centuries, be they related to socioeconomic measures (population, GDP, etc.) or, mainly, to technological improvement. As Vernor Vinge (1993) stated 25 years ago, "the acceleration of technological progress has been the central feature of this century." Indeed, the spike in the production and consumption of aluminum is a direct consequence of the continuous improvement of its properties and is also related to the need for an exceptionally light and corrosion-resistant material for the development of other high-tech industry sectors. But, turning to the initial question, is the spike in production related to the decreasing price? In our view this is not a hypothesis that can be ruled out. Certainly, we would not have this remarkable increase in production if the metal had remained the expensive noble metal (and poor in mechanical properties) that it was at the dawn of the twentieth century. Undoubtedly the prevailing driving force for the development of exceptional aluminum alloys was the aeronautical industry. As is very well known, this has been a commonplace in aeronautical engineering: the need for high-specific-strength and corrosion-resistant materials for the fuselages and high-temperature-resistant materials to improve the efficiency of propulsion engines. Intense research focused on meeting the specific needs of airplanes has resulted in the development of an extensive family of aluminum alloys, titanium alloys, Ni-Cr superalloys, and, more recently, the almost magical highly resistant CFRPs (carbon fiber reinforced polymers). All these developments together over at least the last 50 years have resulted in the construction of increasingly lighter and more fuel-efficient aircraft. In the case of aluminum alloys, this aeronautical-industry-related progress continues unabated. It started, as pointed out in Sect. 1, at the height of WWI, when the still incipient military aviation needed to use a light metal instead of wood for the structure of combat aircraft.


But the definitive impulse arose with the burgeoning of commercial aviation in the 1930s, when several new types of Al-alloys of the 2XXX and 7XXX families were developed. Since then an immense family of new Al-alloys and/or aluminum sheet laminates has been developed for specific applications on aircraft fuselages. In this context, we now have an interesting competition between Al-alloys, aluminum laminates, and CFRP composites.

Until the 1970s major commercial passenger aircraft (for instance the Boeing 707 and 747) used essentially Al-alloys (over 80%) in their fuselages. But after the successive oil shocks of the 1970s, with the consequent sharp increase in fuel prices, the need to reduce the weight of aircraft intensified, and the best solution was the more extensive use of composites as structural components. Aircraft manufacturers began to invest more in composite research and use, and in the 1980s the Airbus A310 was the first airplane to use composites in the fuselage: its composite vertical stabilizer allowed a reduction of 400 kg when compared to aluminum, about 20% lighter. Since then composites, mainly CFRP, have been replacing the metal structures of airplanes, but this growth was relatively slow, mainly due to the high development cost and delays in certification, as well as to the high investment cost in new production machines.

But there was also another important factor behind this slow growth: the development of a new family of aluminum alloys and the close cooperation between the aeronautical industry and key suppliers of aluminum material. Airbus pioneered this approach, putting in place in 2002 three strategic cooperations with Alcan, Alcoa, and Corus. Within the framework of each separate cooperation, multidisciplinary teams performed investigations for the fuselage as well as for the wing, covering the fields of materials, design principles, technologies, validation programs, and costs. The main advantage for the suppliers was the in-depth understanding of the airframe manufacturer's requirements, while Airbus got materials better tailored to the intended application.

When Airbus launched its new wide-body aircraft A380 in 2005 to compete in the long-haul market with the already famous Boeing 747, it introduced into the aeronautical industry a fully new concept in airplane construction: the targeted material use for each fuselage component, or application-specific material. This new approach was a direct consequence of the abovementioned cooperation contracts. The material breakdown of this aircraft looks like a kaleidoscope of colors, each color representing a given class of materials. This complex hybrid fuselage uses composites (which jumped from ~10% in the A320/330 to 22%, mainly CFRP), 10% titanium, and 61% Al-alloys (20 different alloys, compared to only 6 used in the A320/330 family). Another important novelty introduced by Airbus in the original A380 program was the use, for the first time in civil aircraft fuselages, of Al-Li (aluminum-lithium) alloys, specifically developed by Alcan and Alcoa. As already pointed out in Sect. 3, Al-Li alloys are more expensive than other Al-alloys used in aeronautics (families 2XXX and 7XXX), but their higher specific strength, allied to better corrosion and fatigue resistance, justifies their extensive use in fuselages. The alloys shown in Fig. 4 belong to the so-called second generation of Al-Li alloys, namely pure Al-Li alloys (8090 and 8091) or Al-Cu-Li alloys (2090 and 2297).


But now, the strong competition between Airbus and Boeing when launching their new advanced twin-engine aircraft, the A350 XWB (2013) and the Boeing 787 "Dreamliner" (2009), led to the development and use of the so-called third-generation Al-Li alloys. These new alloys are also Al-Cu-Li alloys but have a lower concentration of lithium (0.75 wt% to 1.8 wt%) when compared to 2090 and 2297 (1.9 wt% to 2.7 wt%). The lower Li content, although implying a slightly higher density, together with further optimized thermomechanical treatment gives these alloys improved thermal stability compared to the previous generations of Al-Li alloys. These materials also have superior corrosion resistance, higher than that of current Al-alloys used for skin applications, which will permit the use of unclad material, resulting in cost and weight savings.

But it is of utmost importance to make a point here about both aircraft models mentioned above: both mark an important turning point in the history of commercial aeronautical construction, at which the use of composites reached the milestone of 50% of the total fuselage mass. Al-alloys (mainly Al-Li) still figure as about 20% of the total mass, but the evolution of material usage in aeronautics has unfolded as an authentic logistic material substitution process, as demonstrated in Fig. 19, calculated with data from Boeing, Airbus, and Bombardier (C Series) using IIASA's LSM II technological substitution software (IIASA 2019). As can be observed, aluminum was gradually replaced by composites, as well as in a small amount by titanium, and its use decreased sharply in recent years, following a typical logistic substitution process.

One can then inquire: what is the limit for this substitution? Can we have a fuselage made with 100% composite? The answer is a resounding NO: there are several critical parts (stringers, corner cutouts, bolts, rivets, the front skin of the wings, the frontal nose, etc.) which cannot be substituted. The major problem with composites is that they have a very low resistance to impact (because of delamination) and are not weldable (and rivets cannot be used, so the sections need to be joined with stringers, metal straps, or something similar). Moreover, there is another caveat with the use of composites in the fuselage: lightning! A composite material is a bad electrical conductor, so it is necessary to include in the laminate a layer of conductive material (copper) that can dissipate the charge in the event of a lightning strike; this inclusion increases the weight of the piece, so that the potential weight reduction is impaired.

The curve for composites in Fig. 19 suggests that their use is practically reaching its ceiling. Using all the sets of available data and the LSM II software, we can construct the logistic curve for the usage of composite materials in aircraft, which is shown in Fig. 20 below. This result shows clearly that the limit of the use of composites lies between 60 and 70%, a limit to be reached within the next decade. Or perhaps not: the process may revert in favor of aluminum. Recently, Djukanovic (2017) published an interesting article with the suggestive title "Aluminium-Lithium Alloys Fight Back" in which, referring to the new aircraft mentioned above, he asserts: "This explains why, when Boeing and Airbus introduced these two crafts several years ago, most experts thought that the next generation of planes would be made out of composites, a trend that would then expand to include smaller jets – but as it turns out, they were wrong. Boeing's latest 777-9 will have composite wings but will support a mostly aluminium fuselage. The reason for this change is the emergence of advanced third generation aluminium-lithium alloys, which are not just cheaper than both CFRP and titanium alloys but are also lighter and stronger than previous iterations. As a result, Al-Li alloy-intensive aircraft have better fuel efficiency and lower maintenance costs."

Fig. 19 Material substitution process in aeronautical engineering in the last 50 years, calculated with data from Airbus, Boeing, and Bombardier (C Series) (authors' creation using IIASA's LSM II program)



Fig. 20 Logistic growth of the use of composites in commercial aircraft (authors’ creation)
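As a rough way to reproduce the kind of saturation estimate shown in Fig. 20 without access to IIASA's LSM II package, one can fit a three-parameter logistic to the composite mass fraction of successive aircraft generations. The Python sketch below does exactly that; the aircraft dates and share values are approximate, illustrative inputs and not the exact dataset used by the authors.

```python
# Minimal sketch: fit a saturating logistic f(t) = K / (1 + exp(-r (t - t0)))
# to approximate composite mass fractions of successive airliner generations,
# to obtain a rough estimate of the ceiling K discussed in the text (~60-70%).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

# Entry-into-service year and approximate composite share of structural mass
# (illustrative values only).
years = np.array([1970, 1983, 1988, 1995, 2005, 2009, 2013], dtype=float)
share = np.array([0.01, 0.05, 0.10, 0.12, 0.22, 0.50, 0.53])

popt, _ = curve_fit(logistic, years, share, p0=[0.65, 0.15, 2007], maxfev=20000)
K, r, t0 = popt
year_95pct = t0 + np.log(0.95 / 0.05) / r   # year at which 95% of the ceiling is reached

print(f"estimated ceiling: {K:.0%}, midpoint year: {t0:.0f}, ~95% of ceiling by {year_95pct:.0f}")
```

The fitted ceiling is of course only as good as the assumed share values; the sketch serves to illustrate the procedure rather than to reproduce the authors' 60-70% result.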

In fact, not only the new Boeing 777 generation but also several other new models of present and future aircraft, e.g., the A400, the A220 (former Bombardier C Series), and Embraer models (KC-390, Legacy 500, E175-E2, etc.), are confirming this trend: an aluminum fuselage combined with wings and vertical or horizontal stabilizers in composites. In other words, the aeronautical market will remain closely attached to the use of aluminum. It is important to point out that AA 2198 (a third-generation Al-Li alloy) is the material of choice for the SpaceX launchers Falcon 9 and Falcon Heavy.

Closing this section, we would like to bring attention to another fact related to the phenomenon of backfire: it is relatively easy to identify the working of the Jevons paradox in aviation. The logic is very simple: increasingly light and efficient airplanes → cheaper flights → more people flying. The graphs in Figs. 21, 22, and 23 demonstrate this effect clearly. In the last 50 years, the rate of aircraft fuel consumption per seat (passenger) decreased by 70%, but the number of people flying and the distance covered (designated as RPK, revenue passenger kilometer) grew exponentially, reaching the mark of 7 trillion RPK in 2016.


Fig. 21 Reduction of aircraft fuel consumption per seat (%); baseline 100 = Comet 4 (authors' creation)

This corresponds today to about 4 billion passengers⁶ carried every year, as shown in the graph of Fig. 23, based on data from the World Bank (2018). In other words, in 2018 we have ten times more people flying than in 1970, when the legendary wide-body aircraft B747 was launched, which in some sense can be considered a rebound effect of the search for light and fuel-efficient airplanes. The reader may then ask: what is the connection between this rebound effect in aviation and the use of aluminum? Again, the logic is simple: more flights → more airplanes → more aluminum. The curves presented in Figs. 22 and 23 do not show any sign of a ceiling; consequently, the numbers related to the intensity of flying should continue at the same growth rate for the next couple of years. But perhaps the most interesting aspect to consider in this context is that these curves present the same spike-like appearance as the curve for global aluminum production and, therefore, will probably follow the same logistic growth trend, to be discussed in the next section.
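The rebound arithmetic sketched above can be made explicit in a few lines of Python; the 70% per-seat efficiency gain and the roughly tenfold traffic growth are the figures quoted in the text, and the resulting net multiplier is only a back-of-the-envelope illustration of the Jevons-type effect, not a fuel-burn model.

```python
# Back-of-the-envelope rebound (Jevons) calculation using the figures quoted
# in the text: fuel burn per seat fell ~70% over ~50 years, while the number
# of passengers carried grew roughly tenfold.
fuel_per_seat_1970 = 1.0          # normalized baseline (early jet era = 1.0)
efficiency_gain = 0.70            # 70% reduction in fuel burn per seat
traffic_multiplier = 10.0         # ~10x more passengers carried than in 1970

fuel_per_seat_today = fuel_per_seat_1970 * (1.0 - efficiency_gain)
total_fuel_today = fuel_per_seat_today * traffic_multiplier   # relative to the 1970 total

print(f"fuel per seat today: {fuel_per_seat_today:.2f}x the 1970 level")
print(f"total fuel burned today: ~{total_fuel_today:.1f}x the 1970 level")
# Despite a 70% efficiency gain per seat, total consumption roughly triples:
# the efficiency gain is more than offset by the growth in demand.
```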

⁶ It is important to observe that this yearly number refers to booked flights and does not reflect the number of distinct people flying. For instance, a person who flies once a week, every week, will count as 52 booked flights (passengers) in that year. A passenger flying on both the international and domestic stages of the same flight should be counted as both a domestic and an international passenger, hence as two booked flights.

Fig. 22 Aircraft transportation (revenue passenger kilometer and freight ton-kilometer) according to data from IATA (2018)



Fig. 23 Yearly number of passengers carried globally (authors' creation according to data from World Bank 2018)

7 The Future: A Prospective Analysis

Considering the global aluminum production presented in Fig. 10, it is possible to verify whether these data fit a logistic growth trend, or in other words, whether such a growth trend may converge to a given limit. Using IIASA's LSM II software (IIASA 2019), it was found that the data fit a logistic trend well, signaling a possible ceiling of about 120 × 10³ thousand metric tons, roughly double the current production, as can be seen in Fig. 24 below. This result indicates that in 2020 we are reaching half of the ceiling; considering that the fitted ΔT (the time elapsed between 10% and 90% of the maximum growth) is about 70 years, the maximum production may be reached around 2050. This result is in line with the recent findings of Roper (2018), who estimated that the maximum of bauxite extraction (~600 × 10⁶ tons/year) may be reached around 2040; this author also found that by 2050 the amount of bauxite already extracted will be equivalent to the amount left to be extracted. Roper has also pointed out that the recent rise in the extraction rate is unsustainable for more than a few decades from now.

Regarding this aspect of unsustainability, it is important to observe that the recent rise in aluminum production was essentially due to the industrialization and construction boom in China in the last 20 years, as demonstrated in Fig. 11. But on the other hand, as observed in Fig. 12, China's share in global production is probably reaching a ceiling, signaling that much of this trend is a one-time building boom that will subside as Chinese housing stocks and infrastructures reach modern standards. As can be seen, the high demand for aluminum from all industry sectors can still endure for a couple of decades from now, but probably at a lower growth rate, following the logistic trajectory shown in Fig. 24. Today's high per capita consumption presented by highly industrialized countries (the USA, Japan, Germany, France; see Fig. 17) will probably persist for the next two decades, considering that these countries also host significant automobile and aircraft production plants.

Fig. 24 Logistic fit of the global aluminum production using IIASA's LSM II (IIASA 2019) (R² = 0.96) (authors' creation)
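For readers without access to IIASA's LSM II package, a simple single-logistic fit can be reproduced in a few lines of Python. The production values below are coarse, decade-resolution approximations of the series discussed in the text, so the fitted ceiling and ΔT should be read as an illustration of the method rather than a reproduction of the 120 × 10³ TMT estimate.

```python
# Illustrative single-logistic fit to approximate global primary aluminum
# production (thousand metric tons). Not the LSM II model itself.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # K: saturation level (ceiling), r: growth rate, t0: inflection year
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.array([1970, 1980, 1990, 2000, 2010, 2018], dtype=float)
production = np.array([10000, 16000, 19500, 24500, 42000, 64000], dtype=float)  # approx. TMT

popt, _ = curve_fit(logistic, years, production, p0=[120000, 0.06, 2020], maxfev=20000)
K, r, t0 = popt

delta_t = np.log(81) / r            # time between 10% and 90% of the ceiling
year_90pct = t0 + np.log(9) / r     # year at which 90% of the ceiling is reached

print(f"fitted ceiling: {K:,.0f} TMT, inflection year: {t0:.0f}")
print(f"Delta-T (10% -> 90%): {delta_t:.0f} years, 90% of ceiling around {year_90pct:.0f}")
```

Because a single logistic cannot capture the two distinct growth periods noted earlier (before and after 2000), the fitted parameters will differ somewhat from those obtained with the full annual dataset and the LSM II procedure.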



Closing this section on speculations about the future of aluminum production and consumption, it is important to consider two relevant aspects:
1. It is highly probable that recycling scrap aluminum will gain renewed momentum as environmental concerns worsen; regarding this point, it is important to take into account that the aluminum already in circulation today is almost enough to satisfy global demand through recycling.
2. Supporters of the Kondratieff waves hypothesis speculate that by 2025–2030 we will reach the peak of the current growth phase of the fifth K-wave (Devezas and Corredine 2001; Devezas 2010). If this in fact happens, after 2030⁷ we will probably enter a decade-long global recession phase like the previous recession phases observed in 1930–1950 and 1970–1990 (third and fourth K-waves, respectively). In this case, any further speculation about the future of aluminum production and its implications for the global demand for aeronautical transportation becomes innocuous and meaningless.

8 Final Remark

At this point, the reader may ask: what is the relationship between the digital transformation and this prospective analysis of the worldwide usage of aluminum and its alloys? In fact, most of what has been presented in this chapter, regarding for instance the development of advanced Al-alloys or their extensive application in the aeronautical/aerospace industry, is an indisputable consequence of the exponential expansion of our digital possibilities and the consequent acceleration of technological development observed in the last decades. The "signals of backfire" pointed out in Sect. 6 would not have happened without the working of the strong forces of the digital transformation we have been experiencing along the last three decades.

⁷ This foreseen recession may have been anticipated because of the SARS-CoV-2 pandemic, which is still spreading around the world at the time we are concluding this chapter.

References

Adriaanse, A., et al. (1997). Resource flows: The material basis of industrial economies. World Resources Institute.
Aluminum Transportation Group. (2018). The Aluminum Association: Growth of aluminum's use in the light vehicle market. Accessed March 2019, from https://www.aluminum.org/members-area/committees/aluminum-transportation-group
Devezas, T. (2010). Crisis, depressions, expansions: Global trends and secular analysis. Technological Forecasting and Social Change, 77, 739–761.
Devezas, T., & Corredine, J. (2001). The biological determinants of long wave behavior in socioeconomic growth and development. Technological Forecasting and Social Change, 68, 1–57.
Devezas, T., Vaz, A., & Magee, C. (2017). Global pattern in materials consumption: An empirical study (Chapter 13). In T. Devezas, J. Leitão, & A. Sarygulov (Eds.), Industry 4.0: Entrepreneurship and structural change in the new digital landscape. Heidelberg, Germany: Springer.
Djukanovic, G. (2017). Aluminium-lithium alloys fight back. Accessed March 2019, from https://aluminiuminsider.com/aluminium-lithium-alloys-fight-back/
Granta CES Edupack. (2019). Granta Design Limited – special license to Atlantica University.
IATA. (2018). International Air Transport Association: World air transport statistics. Accessed March 2019, from https://www.iata.org/en/publications/store/world-air-transport-statistics/
IIASA. (2019). International Institute for Applied Systems Analysis: LSM II – Logistic Substitution Model. Accessed March 2019, from https://iiasa.ac.at/web/home/research/researchPrograms/TransitionstoNewTechnologies/download.en.html
Intergovernmental Panel on Climate Change. (2018). Global warming of 1.5 °C: Summary for policymakers. Accessed March 2019, from https://report.ipcc.ch/sr15/pdf/sr15_spm_final.pdf
International Aluminium Institute. (2018). World aluminium statistics. Accessed March 2019, from http://www.worldaluminium.org/statistics/
Richards, J. W. (2018). Aluminum: Its history, occurrence, properties, metallurgy, and applications, including its alloys. Franklin Classics.
Sheller, M. (2014). Aluminum dreams: The making of light modernity. Cambridge, MA: MIT Press.
Smil, V. (2014). Making the modern world. Wiley.
Statista. (2018). Global demand for aluminum products in 2018 by sector. Accessed March 2019, from https://www.statista.com/statistics/280983/share-of-aluminum-consumption-by-sector/
United States Geological Survey. (2018). Aluminum statistics and information. Accessed March 2019, from https://www.usgs.gov/centers/nmic/aluminum-statistics-and-information
Vaz, A. (2018). Dematerialization and the effect of intangibles on sustainability and global materials consumption. PhD thesis (in Portuguese), University of Beira Interior.
Vinge, V. (1993). The coming technological singularity. VISION-21 Symposium, NASA Lewis Research Center and Ohio Aerospace Institute, March 30–31, 1993.
World Bank. (2018). Air transport, passengers carried. Accessed March 2019, from https://data.worldbank.org/indicator/IS.AIR.PSGR
Zimmering, C. A. (2017). Aluminum upcycled: Sustainable design in historical perspective. Johns Hopkins University Press.