Lecture Notes in Operations Research
Xianliang Shi · Gábor Bohács · Yixuan Ma · Daqing Gong · Xiaopu Shang Editors
LISS 2021 Proceedings of the 11th International Conference on Logistics, Informatics and Service Sciences
Lecture Notes in Operations Research

Editorial Board Members
Ana Paula Barbosa-Povoa, University of Lisbon, Lisboa, Portugal
Adiel Teixeira de Almeida, Federal University of Pernambuco, Recife, Brazil
Noah Gans, The Wharton School, University of Pennsylvania, Philadelphia, USA
Jatinder N. D. Gupta, University of Alabama in Huntsville, Huntsville, USA
Gregory R. Heim, Mays Business School, Texas A&M University, College Station, USA
Guowei Hua, Beijing Jiaotong University, Beijing, China
Alf Kimms, University of Duisburg-Essen, Duisburg, Germany
Xiang Li, Beijing University of Chemical Technology, Beijing, China
Hatem Masri, University of Bahrain, Sakhir, Bahrain
Stefan Nickel, Karlsruhe Institute of Technology, Karlsruhe, Germany
Robin Qiu, Pennsylvania State University, Malvern, USA
Ravi Shankar, Indian Institute of Technology, New Delhi, India
Roman Slowiński, Poznań University of Technology, Poznan, Poland
Christopher S. Tang, Anderson School, University of California Los Angeles, Los Angeles, USA
Yuzhe Wu, Zhejiang University, Hangzhou, China
Joe Zhu, Foisie Business School, Worcester Polytechnic Institute, Worcester, USA
Constantin Zopounidis, Technical University of Crete, Chania, Greece
Lecture Notes in Operations Research is an interdisciplinary book series which provides a platform for the cutting-edge research and developments in both operations research and operations management field. The purview of this series is global, encompassing all nations and areas of the world. It comprises for instance, mathematical optimization, mathematical modeling, statistical analysis, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, decision analysis, supply chain management, transportation logistics, process design, operations strategy, facilities planning, production planning and inventory control. LNOR publishes edited conference proceedings, contributed volumes that present firsthand information on the latest research results and pioneering innovations as well as new perspectives on classical fields. The target audience of LNOR consists of students, researchers as well as industry professionals.
More information about this series at https://link.springer.com/bookseries/16741
Editors
Xianliang Shi, Beijing Jiaotong University, Beijing, China
Gábor Bohács, Budapest University of Technology and Economics, Budapest, Hungary
Yixuan Ma, Beijing Jiaotong University, Beijing, China
Daqing Gong, Beijing Jiaotong University, Beijing, China
Xiaopu Shang, Beijing Jiaotong University, Beijing, China
ISSN 2731-040X  ISSN 2731-0418 (electronic)
Lecture Notes in Operations Research
ISBN 978-981-16-8655-9  ISBN 978-981-16-8656-6 (eBook)
https://doi.org/10.1007/978-981-16-8656-6

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Organization
11th International Conference on Logistics, Informatics and Service Sciences (LISS 2021) 23–26 July 2021
In Cooperation with
Budapest University of Technology and Economics, Hungary
Informatics Research Centre, University of Reading, UK
School of Management, Shandong University, China

Hosted by
IEEE SMC Technical Committee on Logistics Informatics and Industrial Security Systems
The International Center for Informatics Research, Beijing Jiaotong University, China
China Center for Industrial Security Research of Beijing Jiaotong University, China
School of Economics and Management of Beijing Jiaotong University, China
Organizing Committee

Honorary Co-chairs
James M. Tien, Member of the US National Academy of Engineering, Past President of IEEE SMC, IEEE Fellow, USA
C. L. Philip Chen, Past President of IEEE SMC, IEEE Fellow, AAAS Fellow, South China University of Technology, China
T. C. E. Cheng, Chair Professor of Management at The Hong Kong Polytechnic University, Hong Kong, China
Tak Wu Sam Kwong, IEEE Fellow, Chair Professor of Computer Science, City University of Hong Kong, China
General Co-chairs
Runtong Zhang, Beijing Jiaotong University, China
Zhongliang Guan, Beijing Jiaotong University, China
Kecheng Liu, University of Reading, UK
Gabor Bohacs, Budapest University of Technology and Economics, Hungary
Qingchun Meng, Shandong University, China

Program Co-chairs
Zhenji Zhang, Beijing Jiaotong University, China
Peixin Zhao, Shandong University, China
Juliang Zhang, Beijing Jiaotong University, China
Xianliang Shi, Beijing Jiaotong University, China
Adam Torok, Budapest University of Technology and Economics, Hungary

Organization Co-chairs
Jianghua Zhang, Shandong University, China
Xiaochun Lu, Beijing Jiaotong University, China

Publication Co-chairs
Guowei Hua, Beijing Jiaotong University, China
Yisong Li, Beijing Jiaotong University, China
Xiaopu Shang, Beijing Jiaotong University, China

Finance Co-chairs
Shifeng Liu, Beijing Jiaotong University, China
Dan Chang, Beijing Jiaotong University, China
Jingci Xie, Shandong University, China

Special Session/Workshop Co-chairs
Hongjie Lan, Beijing Jiaotong University, China
Anqiang Huang, Beijing Jiaotong University, China

Publicity Co-chairs
Daqing Gong, Beijing Jiaotong University, China
Guodong Yu, Shandong University, China
Web Master Zikui Lin, Beijing Jiaotong University, China
International Steering Committee

Co-chairs
James M. Tien, University of Miami, USA
C. L. Philip Chen, South China University of Technology, China

Members
Francisco Vallverdu Bayes, Universitat Politècnica de Catalunya Barcelona Tech, Spain
Jian Chen, Tsinghua University, China
Yu Chen, Renmin University of China, China
T. C. Edwin Cheng, The Hong Kong Polytechnic University, China
Waiman Cheung, The Chinese University of Hong Kong, China
Shicheng D., Royal Institute of Technology-KTH, Sweden
Dingyi Dai, China Federation of Logistics & Purchasing, China
Xuedong Gao, University of Science and Technology Beijing, China
David Gonzalez-Prieto, Universitat Politècnica de Catalunya Barcelona Tech, Spain
Liming He, China Federation of Logistics & Purchasing, China
Harald M. Hjelle, Molde University College, Norway
Joachim Juhn, Anglia Ruskin University, UK
Kim Kap-Hwan, Pusan National University, Korea
Harold Krikke, Tilburg University, The Netherlands
Erwin van der Laan, RSM Erasmus University, The Netherlands
Der-Horng Lee, The National University of Singapore, Singapore
Dong Li, University of Liverpool, UK
Menggang Li, China Center for Industrial Security Research, China
Cheng-Chang Lin, National Cheng Kung University, Taipei, China
Kecheng Liu, University of Reading, UK
Oriol Lordan, Universitat Politècnica de Catalunya Barcelona Tech, Spain
Qining Mai, The University of Hong Kong, China
Drenser Martin, University of Maryland, USA
Theo Notteboom, University of Antwerp, Belgium
Yannis A. Phillis, The Technical University of Crete, Greece
Robin Qiu, Pennsylvania State University, USA
Jurgita Raudeliuniene, Vilnius Gediminas Technical University, Lithuania
Linda Rosenman, Victoria University, Australia
Kurosu Seiji, Waseda University, Japan
Zuojun (Max) Shen, University of California, Berkeley, USA
Carlos Sicilia, Universitat Politècnica de Catalunya Barcelona Tech, Spain
Pep Simo, Universitat Politècnica de Catalunya Barcelona Tech, Spain
Dong-Wook Song, Edinburgh Napier University, UK
Jelena Stankeviciene, Vilnius Gediminas Technical University, Lithuania
Jaume Valls-Pasola, Universitat Politècnica de Catalunya Barcelona Tech, Spain
David C. Yen, Miami University, USA
International Program Committee
Muna Alhammad, University of Reading, UK
Bernhard Bauer, University of Augsburg, Germany
Michael Bell, The University of Sydney, Australia
Ida Bifulco, University of Salerno, Italy
Christoph Bruns, Fraunhofer Institute for Material Flow and Logistics, Germany
Luis M. Camarinha-Matos, New University of Lisbon, Portugal
Mei Cao, University of Wisconsin, Superior, USA
Maiga Chang, Athabasca University, Canada
Sohail Chaudhry, Villanova University, USA
Gong Cheng, University of Reading, UK
Alessandro Creazza, University of Hull, UK
Lu Cui, Hohai University, China
Kamil Dimililer, Near East University, Cyprus
Chunxiao Fan, Beijing University of Posts and Telecommunications, China
Zhi Fang, Beijing Institute of Technology, China
Vicenc Fernandez, Universitat Politècnica de Catalunya Barcelona Tech, Spain
Juan Flores, Universidad Michoacana de San Nicolás de Hidalgo, Mexico
Alexander Gelbukh, National Polytechnic Institute, Mexico
Joseph Giampapa, Carnegie Mellon University, USA
Wladyslaw Homenda, Warsaw University of Technology, Poland
Wei-Chiang Hong, Oriental Institute of Technology, Taipei, China
Alexander Ivannikov, State Research Institute for Information Technologies and Telecommunications, Russian Federation
Taihoon Kim, Sungshin Women's University, Korea
Rob Kusters, Eindhoven University of Technology, The Netherlands
Der-Horng Lee, The National University of Singapore, Singapore
Kauko Leiviskä, University of Oulu, Finland
Zhi-Chun Li, Huazhong University of Science and Technology, China
Da-Yin Liao, National Chi-Nan University, Taipei, China
Cheng-Chang Lin, National Cheng Kung University, Taipei, China
Shixiong Liu, University of Reading, UK
Miguel R. Luaces, Universidade da Coruña, Spain
Zhimin Lv, University of Science Technology Beijing, China
Nuno Mamede, INESC-ID/IST, Portugal
Vaughan Michell, Reading University, UK
Jennifer Min, Ming Chuan University, Taipei, China
Nada Nadhrah, University of Reading, UK
Paolo Napoletano, University of Salerno, Italy
David L. Olson, University of Nebraska, USA
Stephen Opoku-Anokye, University of Reading, UK
Rodrigo Paredes, Universidad de Talca, Chile
Puvanasvaran A. Perumal, Universiti Teknikal Malaysia Melaka, Malaysia
Dana Petcu, West University of Timisoara, Romania
Vo Ngoc Phu, Duy Tan University, Vietnam
Henryk Piech, The Technical University of Częstochowa, Poland
Geert Poels, Ghent University, Belgium
Elena Prosvirkina, National Research University, Russian Federation
Michele Risi, University of Salerno, Italy
Baasem Roushdy, Arab Academy for Science and Technology, Egypt
Ozgur Koray Sahingoz, Turkish Air Force Academy, Turkey
Shaoyi Liao, City University of Hong Kong, Hong Kong
Cleyton Slaviero, Universidade Federal Fluminense, Brazil
Lily Sun, University of Reading, UK
Ryszard Tadeusiewicz, AGH University of Science and Technology, Poland
Shaolong Tang, Hong Kong Baptist University, China
Vladimir Tarasov, Jönköping University, Sweden
Arthur Tatnall, Victoria University, Australia
Theodoros Tzouramanis, University of the Aegean, Greece
Wenjie Wang, Donghua University, China
Zhong Wen, Tsinghua University, China
Martin Wheatman, Brightace Ltd, UK
Sen Wu, University of Science Technology Beijing, China
Yuexin Wu, Beijing University of Posts and Telecommunications, China
Li Xiong, Shanghai University, China
Hangjun Yang, University of International Business and Economics, China
Jie Yang, University of Houston-Victoria, USA
Muhamet Yildiz, Northwestern University, USA
Wen-Yen Wu, I-Shou University, Taipei, China
Jiashen You, University of California Los Angeles, USA
Rui Zhang, Donghua University, China
Yong Zhang, Southeast University, China
Matthew Zeidenberg, Columbia University, USA
Eugenio Zimeo, University of Sannio, Italy
Zhongxiang Zhang, Fudan University, China
Bo Zou, University of Illinois at Chicago, USA
Contents
Supply Chain Coordination of Loss-Averse Retailer for Fresh Produce with Option Contracts . . . 1
Deng Jia and Chong Wang
Research on the Innovation Mechanism of Enterprise Business Model in the Internet Environment . . . 11
Jiayuan Wang, Yue Zhang, and Lei Xu
Cooperation Strategy of Intellectual Property Securitization in Supply Chain from Risk Perspective . . . 24
Cheng Liu, Wenjing Xie, Qiuyuan Lei, and Xinzhong Bao
The Evolution Game of Government and Enterprise in Green Production—The Perspective of Opportunity Income and Media Supervision . . . 35
Yanhong Ma, Zezhi Zheng, and Chunhua Jin
QL4POMR Interface as a Graph-Based Clinical Diagnosis Web Service . . . 43
Sabah Mohammed, Jinan Fiaidhi, and Darien Sawyer
Thick Data Analytics for Small Training Samples Using Siamese Neural Network and Image Augmentation . . . 57
Jinan Fiaidhi, Darien Sawyer, and Sabah Mohammed
How Does the Extended Promotion Period Improve Supply Chain Efficiency? Evidence from China's Online Shopping Festival . . . 67
Yang Chen and Hengyu Liu
Pricing Strategy of Dual-Channel Supply Chain for Alcoholic Products with Platform Subsidy . . . 81
Dongyan Chen, Yi Zhang, and Shouting Zhao
Research Summary of Intelligent Optimization Algorithm for Warehouse AGV Path Planning . . . 96
Ye Liu, Yanping Du, Shuihai Dou, Lizhi Peng, and Xianyang Su
Research on Time Window Prediction and Scoring Model for Trauma-Related Sepsis . . . 111
Ke Luo, Jing Li, and Yuzhuo Zhao
Intelligent Emergency Medical QA System Based on Deep Reinforcement Learning . . . 124
Zihao Wang and Xuedong Chen
Prediction of Airway Management of Trauma Patients Based on Machine Learning . . . 132
Zheyuan Yu, Jing Li, and Yuzhuo Zhao
Design and Implementation of Intelligent Decision Support System for THS in Large-Scale Events . . . 142
Zhaohong Wang, Xuedong Chen, and Yuzhuo Zhao
Recognition of Key Injuries in Winter Sports Events Based on Rough Set and Cellular Genetic Algorithm . . . 151
Xiucheng Li, Jing Li, and Yuzhuo Zhao
The Model for Pneumothorax Knowledge Extraction Based on Dependency Syntactic Analysis . . . 160
Xiangge Liu, Jing Li, and Yuzhuo Zhao
Construction of Winter Olympic Games Emergency Medical Security Ontology . . . 169
Zhenxia Zhao, Chunfang Guo, and Xuedong Chen
Design of Mobile Terminal for Acute Hypotension Risk Scoring System . . . 179
Tingting Li, Jing Li, and Yuzhuo Zhao
Research on Labor Costs Analysis Model for Condition-Based Maintenance of Railway Freight Cars Based on the Integration of Business and Finance . . . 189
Junxin Gao, Yongmei Cui, Xuemei Li, and Yuhui Sun
Research on the Work Style Construction and Safety Performance of Civil Aviation Safety Practitioners . . . 196
Yuhan Wang, Ruijian Liu, and Zhidong Yang
News Recommendation Method Based on Topic Extraction and User Interest Transfer . . . 208
Yimeng Wei, Guiying Wei, and Sen Wu
Molecule Classification Based on GCN Network . . . 220
Xiaozhang Huang
Research on Classification of Pipeline Operator Clusters Based on PHMSA-IM Database . . . 228
Xiangting Yin, Dezhi Tang, Hongjian Chen, Xiaodong Zhang, and Gu Tan
A Novel Multi-attribute Group Decision-Making Method Based on Interval-Valued q-rung Dual Hesitant Fuzzy Sets and TOPSIS . . . 237
Jun Wang, Wuhuan Xu, Yuan Xu, Li Li, and Xue Feng
Blockchain-Based Sea Waybill Trust Mechanism . . . 248
Yuan Feng and Wei Liu
Managing Supply Chain Risks by Using Incomplete Contract . . . 259
Hongjin Ju and Juliang Zhang
Evaluation of the Application of YOLO Algorithm in Insulator Identification . . . 273
Yaopeng Chu
Sentiment Analysis of News on the Stock Market . . . 284
Huimin Zong, Sen Wu, and Guiying Wei
Measurement and Clustering Analysis of Interprovincial Employment Quality in China . . . 297
Yingxue Pan and Xuedong Gao
Operation Architecture Planning Method of Information System and Person-Job Fit Application . . . 311
Ai Wang and Xuedong Gao
Dynamic Adjustment Method of Space Product Material Classification Based on ID5R Algorithm . . . 321
Tingting Zhou and Xuedong Gao
The Influence of Investor Sentiment on Stock Market Based on Sentiment Analysis . . . 333
Danyu Lan, Sen Wu, and Guiying Wei
Mental State Prediction of College Students Based on Decision Tree . . . 345
Qixin Bo and Xuedong Gao
Research on the Current Situation of Work Style Construction in Civil Aviation Industry . . . 358
Zhidong Yang, Ruijian Liu, and Yuhan Wang
Research on the Ecological Quality Improvement Path of Existing Urban Residential Area . . . 369
Zhidong Zhang, Boyang Liu, and Yang Mao
Research on the Self-organization Model of the Internet Public Opinion . . . 379
Xiaolan Guan
Multi-objective Butterfly Optimization Algorithm for Solving Constrained Optimization Problems . . . 389
Mohammed M. Ahmed, Aboul Ella Hassanien, and Mincong Tang
The Evolution and Development of Public Opinion Analysis in China——From the Perspective of Bibliometric Analysis . . . 401
Tong Zhao, Chunhua Jin, and Xiaoxiao Zhai
Deep Belief Neural Networks for Eye Localization Based Speeded up Robust Features and Local Binary Pattern . . . 415
Mahmoud Y. Shams, Aboul Ella Hassanien, and Mincong Tang
Analysis on Spatial-Temporal Distribution Evolution Characteristics of Regional Cold Chain Logistics Facilities: A Case Study of BJE . . . 431
Qiuxia Zhang and Hanping Hou
Development of a Low-Cost Material Handling Vehicle Solution Using Teleoperation and Marker Recognition . . . 442
Gábor Bohács, András Máté Horváth, and Dániel Gáspár
Quality Management of Social-Enabled Retailing E-commerce . . . 454
Shuchen Li
Decision-Making and Impact of Blockchain on Accounts Receivable Financing . . . 465
Mengqi Hao and Jingzhi Ding
A Two-Stage Heuristic Approach for Split Vehicle Routing Problem with Deliveries and Pickups . . . 478
Jianing Min, Lijun Lu, and Cheng Jin
Logistics Cost Analysis of Small and Medium-Sized Agricultural Planting Enterprises Under the Mode of "Agricultural Super Docking" . . . 491
Yiwen Deng and Chong Wang
Research on the Optimization of Intelligent Emergency Medical Treatment Decision System for Beijing 2022 Winter Olympic Games . . . 504
Yujie Cai, Jing Li, Lei Fan, and Jiayi Jiang
Analysis of the "One Pallet" Model of Fast Consumer Goods in Post Epidemic Period . . . 511
Yiqing Zhang and Wei Liu
Parameter Optimization for Neighbor Discovery Probability of Ad Hoc Network Using Directional Antennas . . . 523
Ruiyan Qin and Xu Li
An Optimal Online Distributed Auction Algorithm for Multi-UAV Task Allocation . . . 537
Xinhang Li and Yanan Liang
Research on the Influence of Different Network Environments on the Performance of Unicast and Broadcast in Layer 2 Routing . . . 549
Mingwei Wang, Tao Jing, and Wenjun Huang
Performance Analysis of MAC Layer Protocol Based on Topological Residence Time Limitation . . . 562
Yangkun Wang, Tao Jing, and Wenjun Huang
Improved Channel Estimation Algorithm Based on Compressed Sensing . . . 575
Binyu Wang and Xu Li
Research on Network Overhead of Two Kinds of Wireless Ad Hoc Networks Based on Network Fluctuations . . . 584
Jianhua Shang, Ying Liu, and Xin Tong
Environmental Monitoring and Temperature Control of Aquatic Products Cold Chain Transport Carriage . . . 596
Peiyuan Jiang, E. Xu, Meiling Shi, Shenghui Zhao, Yang Wang, Jianglong Cao, and Zhenpeng Dai
Development and Evaluation of Configurations and Control Strategies to Coordinate Several Stacker Cranes in a Single Aisle for a New Dynamic Hybrid Pallet Warehouse . . . 606
Giulia Siciliano, Anna Durek-Linn, and Johannes Fottner
Research on the Coordination Mechanism of the Integration and Optimization of High-Quality Online Teaching Resources in Universities in the Post-Epidemic Period . . . 627
Xiaolan Guan
Research on Evaluation Technology of Electric Power Company Strategy Implementation . . . 636
Fangcheng Tang, Ruijian Liu, and Yang Yang
Framework Design of Data Assets Bank for Railway Enterprises . . . 649
Cheng Zhang and Xiang Xie
Research on Emergency Rescue Strategy of Mountain Cross-Country Race . . . 660
Qianqian Han, Zhenping Li, and Kang Wang
Rail Passenger Flow Prediction Combining Social Media Data for Rail Passenger . . . 672
Jiren Shen
A Novel Optimized Convolutional Neural Network Based on Marine Predators Algorithm for Citrus Fruit Quality Classification . . . 682
Gehad Ismail Sayed, Aboul Ella Hassanien, and Mincong Tang
A Hybrid Quantum Deep Learning Approach Based on Intelligent Optimization to Predict the Broiler Energies . . . 693
Ibrahim Gad, Aboul Ella Hassanien, Ashraf Darwish, and Mincong Tang
Modeling Satellites Transactions as Space Digital Tokens (SDT): A Novel Blockchain-Based Approach . . . 705
Mohamed Torky, Tarek Gaber, Essam Goda, Aboul Ella Hassanien, and Mincong Tang
A Derain System Based on GAN and Wavelet . . . 714
Xiaozhang Huang
Research and Development Thoughts of Intelligent Transportation System . . . 722
Wenhao Zong
The Construction of Risk Model (PDRC Model) for Collaborative Network Organization . . . 735
Xiadi Cui and Juanqiong Gou
Temporal and Spatial Patterns of Ship Accidents in Arctic Waters from 2006 to 2019 . . . 747
Qiaoyun Luo and Wei Liu
A Summary of User Profile Research Based on Clustering Algorithm . . . 758
Lizhi Peng, Yangping Du, Shuihai Dou, Ta Na, Xianyang Su, and Ye Liu
Research on the Evaluation Method of Storage Location Assignment Based on ABC-Random Storage-COI-AHP—Take a Logistics Park as an Example . . . 770
Xianyang Su, Shuihai Dou, Yanping Du, Qiuru Chen, Ye Liu, and Lizhi Peng
Author Index . . . 785
Supply Chain Coordination of Loss-Averse Retailer for Fresh Produce with Option Contracts

Deng Jia1 and Chong Wang2(B)

1 College of Management, Sichuan Agricultural University, Chengdu, China
2 Business and Tourism School, Sichuan Agricultural University, Chengdu, China
[email protected]
Abstract. Considering the huge circulation losses of fresh produce, we investigate ordering and cooperation strategies with option contracts in a two-stage supply chain in which a risk-neutral fresh produce supplier sells to a loss-averse fresh produce retailer. The optimal option ordering policy for the loss-averse retailer and the optimal production strategy for the risk-neutral supplier are given when the retailer orders only options. Benchmarking against the integrated supply chain, we discuss the conditions for coordinating the fresh produce supply chain.

Keywords: Supply chain coordination · Loss aversion · Fresh produce · Option contracts
1 Introduction

As a representative perishable commodity, fresh produce (including fresh fruit, fresh vegetables, live seafood, cut flowers, meat, and eggs) plays a crucial role in people's lives. Compared with other perishable products, fresh produce suffers large circulation losses, has no salvage value after the selling season, and has a long lead time and a short life cycle. Moreover, because of uncertain factors such as climate, land fertility, pests and diseases, and transportation losses, the fresh produce supply chain faces high risks, including uncertain market demand and random yield, which often result in the retailer facing overage or shortage and thus bearing great losses. Further, fresh produce suffers dramatic losses in circulation. Thus, it is a major challenge to manage fresh produce in view of its inherent properties and the high risks of the supply chain.

Recently, as an effective instrument for managing and hedging risks such as unreliable supply, price fluctuations, and unpredictable demand, option contracts have become more and more popular in supply chain governance. There are call, put, and bidirectional options. Option contracts are widely applied in multiple industries, such as telecommunications, IT, semiconductors, and electricity (Anderson et al. 2017; Wu and Kleindorfer 2005). In supply chain management with option contracts,
ordering policies, pricing strategies (such as retail and option pricing), and supply chain cooperation issues are the most significant decisions. Many scholars have found that option contracts can deal with many different risks in the agricultural supply chain (Wang and Chen 2017; Yang et al. 2017; Wang and Chen 2018), and the use of option contracts in the management of fresh produce supply chains has gradually increased (Wang and Chen 2017; Zhao et al. 2013).

In most supply chain models, both members of the supply chain are considered to be risk-neutral, and their goals are consistent, namely to maximize their own profits. However, many studies show that the decision-making behaviors of supply chain members deviate from profit maximization in a way that is consistent with loss aversion (Ho and Zhang 2008; Feng et al. 2011). Loss aversion means that the supplier or retailer is more sensitive to losses than to gains of the same size; such members no longer pursue profit maximization but instead maximize their loss-aversion utility. This is a striking and interesting phenomenon. We want to know how to coordinate the fresh produce supply chain with a loss-averse retailer and option contracts; this is the motivation of this research. This paper studies a fresh produce supply chain in which a risk-neutral fresh produce supplier sells options to a loss-averse fresh produce retailer. After establishing the loss-aversion utility function of the loss-averse retailer, we derive the optimal ordering policy for the loss-averse retailer and the optimal production strategy for the risk-neutral supplier under option contracts and fresh produce circulation losses.

The rest of this paper is organized as follows. Section 2 reviews fresh produce supply chain management and supply chain contracts with options. Section 3 gives the detailed problem description and assumptions. Sections 4 and 5 discuss the optimal decisions of the loss-averse retailer and the risk-neutral supplier, respectively, with detailed proofs. Section 6 discusses, based on the integrated supply chain, several conditions under which the fresh produce supply chain achieves coordination. Section 7 summarizes the conclusions, points out the limitations of this work, and suggests possible future work.
2 Literature Review

2.1 Fresh Produce Supply Chain Management

Fresh produce supply chain management has recently attracted widespread attention from scholars. In studying the ordering decisions of the fresh produce supply chain, a large body of literature considers the deterioration of fresh produce over time. Blackburn and Scudder (2009) consider the rate at which fresh produce loses value as time passes in the supply chain and develop supply chain strategies that maximize the value of the supply chain.

In studying the pricing strategy of the fresh produce supply chain, many studies have considered the quantity loss and quality drop of fresh produce. Cai et al. (2010) argue that supply chains of perishable products such as fresh produce incur quantity loss and quality drop during long-distance transportation, which influences the profits of the supply chain. Xiao and Chen (2012) consider the quantity loss of fresh commodities in long-distance transportation, and compare the pull model and
the push model to derive the optimal ordering and pricing decisions of both supply chain members. Qin et al. (2014) consider perishable products such as fresh produce and food, whose quality and physical quantity tend to deteriorate over time, with deterioration rates proportional to time. The members of the supply chain spare no effort to slow down the rate of quantity loss and quality drop through freshness-keeping and other efforts in order to ensure higher profits. Cai et al. (2010) believe that the distributor must try to maintain the freshness of the commodities during the transportation of fresh products, ensuring that the quality drop and quantity loss of fresh products are minimized when they reach the target market. Considering that the quantity and quality of perishable products may be damaged during transportation, Cai et al. (2013) study the long-distance transportation of fresh products to the target market by a third-party logistics provider and derive the best decisions for the three supply chain members. There are also many papers studying supply chain cooperation strategies with contracts (Blackburn and Scudder 2009; Cai et al. 2010; Xiao and Chen 2012; Cai et al. 2013; Wang and Chen 2013; Wang and Chen 2017; Anderson and Monjardino 2019). However, very few papers study fresh produce supply chain cooperation from the perspective of option contracts.

2.2 Supply Chain Contracts with Options

Operations management and financial measures are important means of managing supply chain risks. As an important financial derivative, the option has received the attention of theoretical scholars and has been widely used by practitioners in many industries. The application of option contracts such as call, put, and bidirectional options in supply chain risk management has gradually attracted wide attention.

Some literature shows the flexibility and effectiveness of options and illustrates how they provide flexibility to respond to dramatic market changes. Christiansen and Wallace (1998) explain the adaptability and practicality of options through methods for modeling dynamic programming under uncertainty. Barnes-Schuster et al. (2002) study how supply contracts with options offer the buyer flexibility to cope with market changes in the second stage. Some scholars have proved that option contracts benefit both supply chain members and can achieve supply chain coordination. Wu and Kleindorfer (2005) explore B2B supply chain problems with options in the spot market, verify the existence of market equilibrium, and describe the framework of market equilibrium. Zhao et al. (2010) study cooperation with option contracts in the supply chain and demonstrate, using a cooperative game approach, that option contracts can coordinate the supply chain and achieve Pareto improvement. Chen and Shen (2012) analyze a supply chain with service requirements and option contracts, prove that option contracts are beneficial to both members of the supply chain, and derive a contract that coordinates the supply chain with option contracts and service requirements. Several scholars use option contracts to extend existing models and derive the members' optimal decisions and the conditions for cooperation.
Nosoohi and Nookabadi (2014) use a coordination contract based on the option mechanism in a supply chain comprising a supplier and a manufacturer, and obtain the manufacturer's optimal order quantity,
the supplier's optimal production quantity, and the necessary conditions for cooperation between them in the described environment. Hu et al. (2014) study a manufacturer-retailer model with random yield and demand uncertainty, and derive the retailer's optimal ordering strategy and the manufacturer's corresponding production decision. Chen et al. (2014) use option contracts in a risk-averse supply chain consisting of a loss-averse retailer and a risk-neutral supplier, and derive the optimal strategies of each under option contracts. Wang and Chen (2015) extend the newsvendor model with option contracts and observe that optimal ordering and pricing strategies differ between single ordering (the retailer orders through either a wholesale price contract or an option contract) and mixed ordering (the retailer uses both a wholesale price contract and an option contract).

A large number of studies have used call options (Wang and Chen 2015; Luo and Chen 2017), put options (Chen and Parlar 2007; Wang et al. 2019; Chen et al. 2020), and bidirectional options (Yang et al. 2017; Wang et al. 2019) to study pricing strategy (Wang and Chen 2015), ordering strategy (Wang and Chen 2015), return policy (Wang et al. 2019; Wang et al. 2019), and supply chain coordination. Chen and Parlar (2007) use put options in a supply chain with stochastic demand and a risk-averse newsvendor to reduce losses caused by low demand. Wang and Chen (2015) study the optimal ordering and pricing strategy with call options in the newsvendor model under demand uncertainty, and compare single ordering with mixed ordering. Luo and Chen (2017) study the effect of call options on a supplier-manufacturer supply chain with random yield and a specific market, and derive the conditions for supply chain cooperation and Pareto improvement for both supply chain members. Yang et al. (2017) study option contracts in a supplier-retailer supply chain where sales effort determines market demand, and find that call option contracts can decrease shortage risk, put option contracts can decrease inventory risk, and bidirectional option contracts can decrease bilateral risks. Wang et al. (2019) explore the influence of customer returns on optimal pricing strategies and ordering decisions under put options, as well as on the profits of the supplier and the retailer; they also discuss a coordination mechanism that can coordinate the supply chain and guarantee higher profits for both the supplier and the retailer. Wang et al. (2019) discuss the influence of customer returns and bidirectional option contracts on the newsvendor's refund price and order decisions, and prove that bidirectional option contracts can increase the firm's profits and reduce the negative influence of customer returns under high demand uncertainty. Chen et al. (2020) derive the optimal ordering and production strategies of a supply chain with put options under a service level constraint, and find that the retailer can earn higher profits and provide a higher service level through put options.

Few scholars (Wang and Chen 2013; Wang and Chen 2017) have introduced option contracts into the fresh produce supply chain, and few papers concentrate on option contracts for the coordination of the fresh produce supply chain.
3 Problem Description and Assumptions

This study considers a one-period, two-stage fresh produce supply chain consisting of a loss-averse retailer and a risk-neutral supplier. The supplier produces fresh
produce, and the retailer orders call option contracts from the supplier and sells the fresh produce to customers facing uncertain demand in the selling season.

The following introduces the parameters used in this paper. The subscripts r, s, and I denote the retailer, the supplier, and the integrated supply chain, respectively. The retailer buys $Q_r$ options at option price $o$ per unit. After observing the market demand, each option gives the retailer the right (but not the obligation) to purchase one unit of fresh produce at the exercise price $e$. The supplier produces the fresh produce at cost $c$ per unit, and the retailer sells it at a fixed retail price $p$ per unit. Because fresh produce deteriorates and is lost in circulation, we introduce $\beta$ ($0 < \beta < 1$) to denote the proportional circulation loss in quantity, and the supplier bears the cost of this loss. Fresh produce is assumed to have no salvage value after the selling season. The retailer's shortage penalty cost is $g_r$ and the supplier's shortage penalty cost is $g_s$. Let the random variable $D$ denote the market demand, which is continuous and non-negative with mean $u$, probability density function $f(x)$, and cumulative distribution function $F(x)$; $F(x)$ is differentiable and strictly increasing. Let $F(0) = 0$, $E[D] = u$, and $\bar{F}(x) = 1 - F(x)$. To rule out trivial cases, let $p > e + o > c$, $g_r > e$, and $g_s > c$. Notation: $(x)^+ = \max(0, x)$.

Following Chen et al. (2014), define $\pi$ as profit. We consider a simple piecewise-linear loss-aversion utility function for the retailer:

$$U(\pi) = \begin{cases} \pi, & \pi \ge 0 \\ \lambda\pi, & \pi < 0 \end{cases} \qquad (1)$$

where $\lambda$ is the retailer's loss-aversion index. It is assumed that $\lambda > 1$, which means that the retailer is more sensitive to losses than to gains of the same size; a higher value of $\lambda$ corresponds to a higher degree of loss aversion.
4 Loss-Averse Retailer's Optimal Ordering Policy

We now consider the loss-averse retailer's optimal ordering policy under option contracts. The retailer's profit, denoted $\pi_r(Q_r)$, is

$$\pi_r(Q_r) = \begin{cases} \pi_1(D; Q_r) = (p - e)D - oQ_r, & D \le Q_r(1-\beta) \\ \pi_2(D; Q_r) = \left[(p + g_r - e)(1-\beta) - o\right]Q_r - g_r D, & D > Q_r(1-\beta) \end{cases} \qquad (2)$$

The retailer's expected profit, denoted $E[\pi_r(Q_r)]$, is

$$E[\pi_r(Q_r)] = E\Big[p\min\{D, Q_r(1-\beta)\} - e\min\{D, Q_r(1-\beta)\} - oQ_r - g_r\big(D - Q_r(1-\beta)\big)^+\Big]$$

The first term is the sales revenue, which is limited by demand and by the ordered options net of circulation loss. The second term is the option exercise cost, and the third term is the option ordering cost. The last term is the retailer's shortage penalty cost. Using $E[\min\{D, A\}] = \int_0^{A}\bar{F}(x)dx$, the loss-averse retailer's expected profit can be written as

$$E[\pi_r(Q_r)] = (p + g_r - e)\int_0^{Q_r(1-\beta)}\bar{F}(x)dx - oQ_r - g_r u \qquad (3)$$
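As an aside, the following minimal numerical sketch (not part of the original paper) evaluates the expected profit in Eq. (3) for a few order quantities. The exponential demand distribution, the parameter values, and the function names are illustrative assumptions chosen only to respect $p > e + o > c$ and $g_r > e$.

```python
# Illustrative sketch of Eq. (3): retailer's expected profit under option contracts.
# Assumed exponential demand with mean u; all parameter values are hypothetical.
import numpy as np
from scipy.integrate import quad

p, e, o = 10.0, 4.0, 1.0          # retail price, exercise price, option price (p > e + o)
g_r, beta, u = 5.0, 0.1, 100.0    # shortage penalty (g_r > e), circulation loss, mean demand

F_bar = lambda x: np.exp(-x / u)  # survival function of exponential demand

def expected_profit(Q):
    """E[pi_r(Q)] = (p + g_r - e) * int_0^{Q(1-beta)} F_bar(x) dx - o*Q - g_r*u."""
    integral, _ = quad(F_bar, 0.0, Q * (1.0 - beta))
    return (p + g_r - e) * integral - o * Q - g_r * u

for Q in (50, 100, 150, 200):
    print(f"Q_r = {Q:>3}: E[pi_r] = {expected_profit(Q):8.2f}")
```

With these assumed values, the risk-neutral maximizer of (3) solves $(p + g_r - e)(1-\beta)\bar{F}[Q_r(1-\beta)] = o$; Theorem 1 below generalizes this condition to the loss-averse case.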
Next, we discuss the break-even quantities of the realized demand. Let $d$ be the realized demand.

Case 1. If $d \le Q_r(1-\beta)$, the retailer's profit is $\pi_1(d; Q_r) = (p - e)d - oQ_r$. Setting $\pi_1(d; Q_r) = 0$ gives $d_1(Q_r) = \frac{oQ_r}{p-e}$. Since $\pi_1(d; Q_r)$ is strictly increasing in $d$, if $d > d_1(Q_r)$ then $\pi_1(d; Q_r) > 0$, and if $d < d_1(Q_r)$ then $\pi_1(d; Q_r) < 0$.

Case 2. If $d > Q_r(1-\beta)$, the retailer's profit is $\pi_2(d; Q_r) = (p + g_r - e)Q_r(1-\beta) - oQ_r - g_r d$. Setting $\pi_2(d; Q_r) = 0$ gives $d_2(Q_r) = \frac{[(p + g_r - e)(1-\beta) - o]Q_r}{g_r}$. Since $\pi_2(d; Q_r)$ is strictly decreasing in $d$, if $d > d_2(Q_r)$ then $\pi_2(d; Q_r) < 0$, and if $d < d_2(Q_r)$ then $\pi_2(d; Q_r) > 0$.

Hence, we obtain Lemma 1.

Lemma 1. The retailer has two break-even quantities of realized demand, $d_1(Q_r) = \frac{oQ_r}{p-e}$ and $d_2(Q_r) = \frac{[(p + g_r - e)(1-\beta) - o]Q_r}{g_r}$: if $D < d_1(Q_r)$ or $D > d_2(Q_r)$, the retailer's realized profit is negative, and if $d_1(Q_r) < D < d_2(Q_r)$, the retailer's realized profit is positive.

Lemma 1 indicates that if the realized demand is too low ($D < d_1(Q_r)$) or too high ($D > d_2(Q_r)$), the retailer incurs losses; only when the realized demand lies between $d_1(Q_r)$ and $d_2(Q_r)$ does the retailer earn a profit. Therefore, the expected utility of the loss-averse retailer, denoted $E[U(\pi_r(Q_r))]$, can be written as

$$E[U(\pi_r(Q_r))] = E[\pi_r(Q_r)] + (\lambda - 1)\left[\int_0^{d_1(Q_r)}\pi_1(x; Q_r)f(x)dx + \int_{d_2(Q_r)}^{+\infty}\pi_2(x; Q_r)f(x)dx\right] \qquad (4)$$

From (4), we can derive Theorem 1 on the loss-averse retailer's optimal order quantity.

Theorem 1. $E[U(\pi_r(Q_r))]$ is concave in $Q_r$, and there is a unique optimal order quantity $Q_r^\lambda$ that maximizes the loss-averse retailer's expected utility and satisfies

$$\big\{(p + g_r - e)(1-\beta)\bar{F}\big[Q_r^\lambda(1-\beta)\big] - o\big\} + (\lambda - 1)\big\{\big[(p + g_r - e)(1-\beta) - o\big]\bar{F}\big[d_2(Q_r^\lambda)\big] - oF\big[d_1(Q_r^\lambda)\big]\big\} = 0 \qquad (5)$$

Proof. From (4), we get

$$\frac{dE[U(\pi_r(Q_r))]}{dQ_r} = (p + g_r - e)(1-\beta)\bar{F}\big[Q_r(1-\beta)\big] - o + (\lambda - 1)\big\{\big[(p + g_r - e)(1-\beta) - o\big]\bar{F}\big[d_2(Q_r)\big] - oF\big[d_1(Q_r)\big]\big\}$$

and

$$\frac{d^2E[U(\pi_r(Q_r))]}{dQ_r^2} = -(p + g_r - e)(1-\beta)^2 f\big[Q_r(1-\beta)\big] - (\lambda - 1)\left\{\frac{\big[(p + g_r - e)(1-\beta) - o\big]^2}{g_r}f\big[d_2(Q_r)\big] + \frac{o^2}{p-e}f\big[d_1(Q_r)\big]\right\} < 0$$
That is, the expected utility of the loss-averse retailer, $E[U(\pi_r(Q_r))]$, is concave in $Q_r$. Letting $\frac{dE[U(\pi_r(Q_r))]}{dQ_r} = 0$, we obtain the optimal order quantity $Q_r^\lambda$ satisfying

$$\big\{(p + g_r - e)(1-\beta)\bar{F}\big[Q_r^\lambda(1-\beta)\big] - o\big\} + (\lambda - 1)\big\{\big[(p + g_r - e)(1-\beta) - o\big]\bar{F}\big[d_2(Q_r^\lambda)\big] - oF\big[d_1(Q_r^\lambda)\big]\big\} = 0$$

The proof is completed.
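For concreteness, the sketch below (an illustration, not part of the paper) solves the first-order condition (5) numerically for $Q_r^\lambda$ under an assumed exponential demand distribution and hypothetical parameter values; with $\lambda = 1$ the condition reduces to the risk-neutral benchmark.

```python
# Illustrative numerical solution of condition (5) for Q_r^lambda (Theorem 1).
# Exponential demand with mean u and all parameter values are assumptions.
import numpy as np
from scipy.optimize import brentq

p, e, o = 10.0, 4.0, 1.0
g_r, beta, u, lam = 5.0, 0.1, 100.0, 2.0

F = lambda x: 1.0 - np.exp(-x / u)     # demand cdf
F_bar = lambda x: np.exp(-x / u)       # demand survival function

m = (p + g_r - e) * (1.0 - beta)       # shorthand for (p + g_r - e)(1 - beta)
d1 = lambda Q: o * Q / (p - e)         # lower break-even demand (Lemma 1)
d2 = lambda Q: (m - o) * Q / g_r       # upper break-even demand (Lemma 1)

def foc(Q):
    """Left-hand side of condition (5); decreasing in Q by concavity of E[U]."""
    return (m * F_bar(Q * (1.0 - beta)) - o
            + (lam - 1.0) * ((m - o) * F_bar(d2(Q)) - o * F(d1(Q))))

Q_r_lam = brentq(foc, 1e-6, 10 * u)    # unique root on a wide bracket
print(f"Loss-averse optimal order quantity Q_r^lambda ~= {Q_r_lam:.2f}")
```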
5 Risk-Neutral Supplier's Optimal Production Strategy

Before the selling season, the risk-neutral supplier must decide how many units $Q_s$ to produce on the basis of the loss-averse retailer's optimal order quantity $Q_r^\lambda$. To avoid the risk of overproduction, the risk-neutral supplier will never produce more than the loss-averse retailer's optimal order quantity, so $Q_s \le Q_r^\lambda$. Because the supplier is risk-neutral, the supplier's utility function is equivalent to its profit function. The risk-neutral supplier's expected profit, denoted $E[\pi_s(Q_s)]$, is

$$E[\pi_s(Q_s)] = E\Big[oQ_r^\lambda + e\min\{D, Q_r^\lambda(1-\beta)\} - cQ_s - g_s\big(\min\{D, Q_r^\lambda(1-\beta)\} - Q_s(1-\beta)\big)^+\Big]$$

The first term is the option sales revenue. The second term is the option exercise revenue. The third term is the production cost, and the last term is the supplier's shortage penalty cost. Therefore, the risk-neutral supplier's expected profit can be written as

$$E[\pi_s(Q_s)] = oQ_r^\lambda + (e - g_s)\left[Q_r^\lambda(1-\beta) - \int_0^{Q_r^\lambda(1-\beta)}F(x)dx\right] + g_s Q_s(1-\beta) - g_s\int_0^{Q_s(1-\beta)}F(x)dx - cQ_s \qquad (6)$$

From (6), we get

$$\frac{dE[\pi_s(Q_s)]}{dQ_s} = g_s(1-\beta) - g_s(1-\beta)F\big[Q_s(1-\beta)\big] - c$$

and

$$\frac{d^2E[\pi_s(Q_s)]}{dQ_s^2} = -g_s(1-\beta)^2 f\big[Q_s(1-\beta)\big] < 0$$

Therefore $E[\pi_s(Q_s)]$ is concave in $Q_s$. Letting $\frac{dE[\pi_s(Q_s)]}{dQ_s} = 0$, the unconstrained optimal production quantity of the risk-neutral supplier is

$$Q_s^* = \frac{1}{1-\beta}F^{-1}\left(1 - \frac{c}{g_s(1-\beta)}\right)$$

Considering the constraint $Q_s \le Q_r^\lambda$, we derive Theorem 2 as follows:

Theorem 2. The risk-neutral supplier's optimal production quantity $Q_s^\lambda$ satisfies $Q_s^\lambda = \min\{Q_s^*, Q_r^\lambda\}$.
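As a small illustration of Theorem 2 (again under an assumed exponential demand distribution and hypothetical parameter values, with a placeholder value standing in for $Q_r^\lambda$), the unconstrained quantity $Q_s^*$ has the closed form above and the supplier simply takes the minimum:

```python
# Illustrative computation of Q_s* and Q_s^lambda = min{Q_s*, Q_r^lambda} (Theorem 2).
# Exponential demand with mean u; parameter values and Q_r_lam are assumptions.
import numpy as np

c, g_s, beta, u = 3.0, 6.0, 0.1, 100.0   # production cost (g_s > c), supplier penalty, loss, mean demand

F_inv = lambda q: -u * np.log(1.0 - q)   # quantile function of exponential demand

# Unconstrained optimum: F(Q_s*(1 - beta)) = 1 - c / (g_s * (1 - beta))
Q_s_star = F_inv(1.0 - c / (g_s * (1.0 - beta))) / (1.0 - beta)

Q_r_lam = 120.0                          # placeholder for the retailer's order from condition (5)
Q_s_lam = min(Q_s_star, Q_r_lam)
print(f"Q_s* = {Q_s_star:.2f},  Q_s^lambda = {Q_s_lam:.2f}")
```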
6 Fresh Produce Supply Chain Coordination

We now discuss the coordination of the fresh produce supply chain. Let $Q_I$ denote the production quantity of the integrated supply chain, and let $g_r$ be the shortage penalty cost of the risk-neutral integrated supply chain. The integrated supply chain's expected profit, denoted $E[\pi_I(Q_I)]$, is

$$E[\pi_I(Q_I)] = E\Big[p\min\{D, Q_I(1-\beta)\} - cQ_I - g_r\big(D - Q_I(1-\beta)\big)^+\Big]$$

The first term is the total revenue, the second term is the production cost, and the last term is the total shortage penalty cost of the integrated supply chain. Thus, the integrated supply chain's expected profit can be written as

$$E[\pi_I(Q_I)] = \big[(p + g_r)(1-\beta) - c\big]Q_I - (p + g_r)\int_0^{Q_I(1-\beta)}F(x)dx - g_r u \qquad (7)$$

From the above equation, we get

$$\frac{dE[\pi_I(Q_I)]}{dQ_I} = (p + g_r)(1-\beta) - c - (p + g_r)(1-\beta)F\big[Q_I(1-\beta)\big]$$

and

$$\frac{d^2E[\pi_I(Q_I)]}{dQ_I^2} = -(p + g_r)(1-\beta)^2 f\big[Q_I(1-\beta)\big] < 0$$

Thus $E[\pi_I(Q_I)]$ is concave in $Q_I$. Letting $\frac{dE[\pi_I(Q_I)]}{dQ_I} = 0$, Lemma 2 is derived as follows:

Lemma 2. The risk-neutral integrated supply chain's optimal production quantity $Q_I^\lambda$ satisfies

$$Q_I^\lambda = \frac{1}{1-\beta}F^{-1}\left(1 - \frac{c}{(p + g_r)(1-\beta)}\right)$$

To coordinate the supply chain, both the loss-averse retailer's order quantity and the risk-neutral supplier's production quantity need to be coordinated. According to Theorem 2, $Q_s^\lambda = Q_r^\lambda$ requires $Q_s^* \ge Q_r^\lambda$; therefore, when $Q_s^* \ge Q_r^\lambda = Q_I^\lambda$, we need $g_s \ge p + g_r$. Theorem 2 and Lemma 2 thus give Condition 1, which induces the supplier to produce the coordinated quantity in the fresh produce supply chain:

Condition 1. When $g_s \ge p + g_r$, the supplier's production quantity in the fresh produce supply chain is coordinated.

In addition, Theorem 1 and Lemma 2 give Condition 2, which guarantees that the retailer's order quantity is coordinated:

Condition 2. When the retailer's order quantity satisfies $\{(p + g_r - e)(1-\beta)\bar{F}[Q_I^\lambda(1-\beta)] - o\} + (\lambda - 1)\{[(p + g_r - e)(1-\beta) - o]\bar{F}[d_2(Q_I^\lambda)] - oF[d_1(Q_I^\lambda)]\} = 0$, the retailer's order quantity in the fresh produce supply chain is coordinated.
A combination of the above two conditions is sufficient for fresh produce supply chain coordination. Therefore, we can derive Theorem 3 on fresh produce supply chain coordination:

Theorem 3. The fresh produce supply chain can be coordinated when $\{(g_s - e)(1-\beta)\bar{F}[Q_I^\lambda(1-\beta)] - o\} + (\lambda - 1)\{[(g_s - e)(1-\beta) - o]\bar{F}[d_2(Q_I^\lambda)] - oF[d_1(Q_I^\lambda)]\} = 0$ is satisfied.
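The following sketch (an illustration under assumed parameter values and exponential demand, not the paper's procedure) ties the pieces together: it computes $Q_I^\lambda$ from Lemma 2, reports whether Condition 1 holds, and searches for an option price $o$ at which Condition 2 is satisfied at $Q_I^\lambda$.

```python
# Illustrative check of the coordination conditions in Sect. 6.
# Exponential demand with mean u; every parameter value is an assumption.
import numpy as np
from scipy.optimize import brentq

p, e, c = 10.0, 4.0, 3.0
g_r, g_s, beta, u, lam = 5.0, 6.0, 0.1, 100.0, 2.0

F = lambda x: 1.0 - np.exp(-x / u)
F_bar = lambda x: np.exp(-x / u)
F_inv = lambda q: -u * np.log(1.0 - q)

# Lemma 2: integrated supply chain's optimal production quantity
Q_I = F_inv(1.0 - c / ((p + g_r) * (1.0 - beta))) / (1.0 - beta)
print(f"Q_I^lambda = {Q_I:.2f};  Condition 1 (g_s >= p + g_r): {g_s >= p + g_r}")

# Condition 2: option price o at which the retailer's first-order condition holds at Q_I
m = (p + g_r - e) * (1.0 - beta)

def condition2(o):
    d1 = o * Q_I / (p - e)
    d2 = (m - o) * Q_I / g_r
    return (m * F_bar(Q_I * (1.0 - beta)) - o
            + (lam - 1.0) * ((m - o) * F_bar(d2) - o * F(d1)))

o_star = brentq(condition2, 1e-6, p - e)   # root should also respect p > e + o > c
print(f"Option price satisfying Condition 2 at Q_I^lambda: o ~= {o_star:.3f}")
```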
7 Conclusion and Suggestions for Further Research

In this study, considering the circulation loss of fresh produce, we build a fresh produce supply chain model that includes a loss-averse retailer and option contracts. We derive the optimal ordering strategy of the loss-averse retailer and the optimal production strategy of the risk-neutral supplier under option contracts, and we discuss the conditions for fresh produce supply chain coordination. A limitation of this paper is that we do not analyze how the loss-averse retailer's optimal order quantity varies with parameters such as cost and price. This research can be extended to problems with put options, bidirectional options, random yield, and asymmetric information.

Acknowledgment. The authors thank the editor and the referees for carefully reading the paper. This research is partially supported by the National Natural Science Foundation of China (No. 71972136).
References
Christiansen, D.S., Wallace, S.W.: Option theory and modeling under uncertainty. Ann. Oper. Res. 82(1), 59–82 (1998)
Barnes-Schuster, D., Bassok, Y., Anupindi, R.: Coordination and flexibility in supply contracts with options. Manuf. Serv. Oper. Manag. 4(3), 171–207 (2002)
Wu, D.J., Kleindorfer, P.R.: Competitive options, supply contracting, and electronic markets. Manag. Sci. 51(3), 452–466 (2005)
Chen, F., Parlar, M.: Value of a put option to the risk-averse newsvendor. IIE Trans. 39(5), 481–500 (2007)
Ho, T.-H., Zhang, J.: Designing price contracts for boundedly rational customers: does the framing of the fixed fee matter? Manag. Sci. 54(4), 686–700 (2008)
Blackburn, J., Scudder, G.: Supply chain strategies for perishable products: the case of fresh produce. Prod. Oper. Manag. 18(2), 129–137 (2009)
Cai, X., Chen, J., Xiao, Y., Xu, X.: Optimization and coordination of fresh product supply chains with freshness-keeping effort. Prod. Oper. Manag. 19(3), 261–278 (2010)
Zhao, Y., Wang, S., Cheng, T.E., Yang, X., Huang, Z.: Coordination of supply chains by option contracts: a cooperative game theory approach. Eur. J. Oper. Res. 207(2), 668–675 (2010)
Feng, T., Keller, L.R., Zheng, X.: Decision making in the newsvendor problem: a cross-national laboratory study. Omega-Int. J. Manag. Sci. 39, 41–50 (2011)
Xiao, Y., Chen, J.: Supply chain management of fresh products with producer transportation. Decis. Sci. 43(5), 785–815 (2012)
Chen, X., Shen, Z.J.: An analysis of a supply chain with options contracts and service requirements. IIE Trans. 44(10), 805–819 (2012)
Zhao, Y., Ma, L., Xie, G., Cheng, T.C.E.: Coordination of supply chains with bidirectional option contracts. Eur. J. Oper. Res. 229(2), 375–381 (2013)
Cai, X., Chen, J., Xiao, Y., Xu, X., Yu, G.: Fresh-product supply chain management with logistics outsourcing. Omega-Int. J. Manag. Sci. 41(4), 752–765 (2013)
Wang, C., Chen, X.: Option contracts in fresh produce supply chain with circulation loss. J. Ind. Eng. Manag. 6(1), 104–112 (2013)
Chen, X., Hao, G., Li, L.: Channel coordination with a loss-averse retailer and option contracts. Int. J. Prod. Econ. 150, 52–57 (2014)
Qin, Y., Wang, J., Wei, C.: Joint pricing and inventory control for fresh produce and foods with quality and physical quantity deteriorating simultaneously. Int. J. Prod. Econ. 152(1), 42–48 (2014)
Nosoohi, I., Nookabadi, A.S.: Designing a supply contract to coordinate supplier's production, considering customer oriented production. Comput. Ind. Eng. 74(1), 26–36 (2014)
Hu, F., Lim, C.C., Lu, Z.: Optimal production and procurement decisions in a supply chain with an option contract and partial backordering under uncertainties. Appl. Math. Comput. 232(1), 1225–1234 (2014)
Wang, C., Chen, X.: Optimal ordering policy for a price-setting newsvendor with option contracts under demand uncertainty. Int. J. Prod. Res. 53(20), 6279–6293 (2015)
Anderson, E.J., Chen, B., Shao, L.: Supplier competition with option contracts for discrete blocks of capacity. Oper. Res. 65(4), 952–956 (2017)
Wang, C., Chen, X.: Option pricing and coordination in the fresh produce supply chain with portfolio contracts. Ann. Oper. Res. 248(1–2), 471–491 (2017). https://doi.org/10.1007/s10479-016-2167-7
Luo, J., Chen, X.: Risk hedging via option contracts in a random yield supply chain. Ann. Oper. Res. 257(1–2), 697–719 (2017). https://doi.org/10.1007/s10479-015-1964-8
Yang, L., Tang, R., Chen, K.: Call, put and bidirectional option contracts in agricultural supply chains with sales effort. Appl. Math. Model. 47, 1–16 (2017)
Wang, C., Chen, X., Wang, L., Luo, J.: Joint order and pricing decisions for fresh produce with put option contracts. J. Oper. Res. Soc. 69(3), 474–484 (2018)
Anderson, E., Monjardino, M.: Contract design in agriculture supply chains with random yield. Eur. J. Oper. Res. 277(3), 1072–1082 (2019)
Wang, C., Chen, J., Chen, X.: The impact of customer returns and bidirectional option contract on refund price and order decisions. Eur. J. Oper. Res. 274(1), 267–279 (2019)
Chen, X., Luo, J., Wang, X., Yang, D.: Supply chain risk management considering put options and service level constraints. Comput. Ind. Eng. 140, 106228 (2020)
Research on the Innovation Mechanism of Enterprise Business Model in the Internet Environment

Jiayuan Wang, Yue Zhang, and Lei Xu(B)

School of Economics and Management, Beijing Jiaotong University, Beijing, China
{yue1108,19113074}@bjtu.edu.cn
Abstract. Under the influence of the Internet, business model innovation has become one of the effective ways to ensure the healthy development of enterprises as they respond to complex market environments and individualized customer needs. However, faced with the new phenomena of enterprises in the Internet era, traditional business model innovation theory still has shortcomings and struggles to support enterprises' business model change. How can business model innovation be carried out successfully? This article takes the MI enterprise as the research object to explore the main driving factors of enterprise business model innovation in the Internet environment, and refines a dynamic business model innovation model based on value logic, which complements the theoretical research on business model innovation. The conclusion shows that the driving factor of business model innovation is the market change brought about by the Internet: through the evolution and reorganization of value logic, companies form new business models and finally realize business model innovation.

Keywords: Internet environment · Business model innovation · Value logic
1 Introduction

As a management theory of the 21st century, the business model has only recently emerged internationally and represents the core business philosophy of a company. With the development of the Internet, information technology has undergone a revolution. The barriers to information communication between enterprises and consumers have been broken, which has changed enterprises' business philosophy and made the original business model rigid and difficult to adapt to fierce market competition. In response to this complex market environment, business model innovation is regarded as one of the important core capabilities for the survival and development of enterprises in the 21st century [1]. However, how does a business model innovate? In other words, what is the internal mechanism of business model innovation? This is one of the problems that the academic community and enterprises currently need to address.
Although the terms Internet and business model innovation frequently appear in authoritative media such as corporate project reports and news reports, academia still lacks corresponding research results. This contrasts with the innovative practices of Chinese companies. Existing research has the following two shortcomings: first, in the Internet environment, traditional theories face new phenomena among enterprises in the Internet era, and research on the formation mechanism of business models and on the mechanism of their development process is lacking; second, there is a lack of research on the relationship between Internet factors and the business model innovation process. From a dynamic perspective, a "black box" remains in the process of enterprise business model innovation in the Internet era. Therefore, studying business model innovation in the Internet environment has great theoretical and practical significance. In order to further explore the process of business model innovation, this study selects MI, an enterprise that focuses on Internet products and has a unique business model, as the main case study object, and analyzes the business model innovation process at the value logic level from a dynamic perspective, to explore the complete mechanism of business model construction. This paper selects the value logic theory based on the dynamic perspective for the following reasons: (1) the business model of an enterprise changes constantly as the enterprise develops, and its evolution is a dynamic development process; (2) the essence of business model innovation lies in the transformation and restructuring of value logic [2], and the development of the Internet is the main driving force that changes value logic. The structure of this paper is as follows: first, the article discusses the theoretical background of the model framework and illustrates the process of data collection and analysis; second, taking the MI enterprise as an example, it analyzes the process of business model innovation; finally, it discusses the theoretical and practical significance of the research results.
2 Theoretical Background 2.1 Business Model Theory With the rapid development of e-commerce at the end of the 20th century, the concept of the business model became well known to enterprises, and its development has attracted the attention of many academic researchers. Owing to its importance and uniqueness, the business model has gradually become a subject of extensive dissemination and research in academia. Foreign scholars have defined business models from different perspectives. Based on the existing theoretical results, definitions of the business model in the academic community fall mainly into the following three viewpoints: First, the business model from the perspective of strategy. The business model is the specific performance or implementation of corporate strategy. Scholars who hold this view believe that the business model is the feedback that enterprises give in response to the complex market environment and is the epitome of the company's future development strategy. Through business models, companies can accurately identify the market and choose the right operating model to create value and profit [3].
Second, the business model from the perspective of governance. Scholars who hold this view believe that the design of the business model indirectly affects the configuration of the internal organization, thus conforming to the core business philosophy of the enterprise. In Hawkins' [4] study on corporate performance, business models are viewed by firms as an effective management approach that can rationally allocate resources for market competition, while researchers such as Morris [5] believe that business models are a design method that helps units in the enterprise achieve organic integration, which can ensure a virtuous cycle and normal operation of the business and ultimately help the company obtain a continuous profit flow. Finally, the business model from the perspective of value logic. Scholars who hold this view believe that the competitiveness of the business model is mainly reflected in its value logic, namely value proposition, value creation and value acquisition [6]. In the existing literature, research on business model theory from the perspective of value creation is abundant and has had the greatest impact [7], because by researching value creation it is possible to understand the meaning of business models dynamically and systematically [8]. As research on the value theory of business models has deepened, practical experience has shown that the success of a business model depends not only on value creation [1]; the value acquisition of the enterprise is also one of the important determinants [9]. With the improvement of business model theory, scholars have gradually merged the three value logic concepts on the basis of the original research. They emphasize that the core of the business model lies in the value proposition, which is the foundation; focusing on the target customer, the enterprise then chooses the right value creation and value acquisition methods [10], which constitute the core value logic of the business model. 2.2 The Business Model in the Internet Environment The maturity of Internet technology has promoted the deep integration of emerging technologies represented by the Internet with other fields, which has greatly changed the innovation modes of society and business. Krishnakumar [11] found that the Internet can bring about an increase in the value of hardware and software innovation. Dong [12] analyzed the business model of MI and pointed out that, with the help of Internet technology, enterprises form a new business model in which users participate in product design. The market environment faced by enterprises in the Internet era has undergone tremendous changes. The rapid development of information technology has broken the regional restrictions on market transactions, and the original business models of enterprises have gradually become rigid and lost their vitality in market competition [13]. Berman [14] believes that the traditional business model works like a pipeline, and that under the impetus of Internet development the new business model is represented by platform operation. Putz [15] believes that the application of business models under the influence of the Internet can effectively promote the transformation and upgrading of traditional enterprises towards the new economic model, thus achieving sustainable development. Therefore, in order to cope with the complex competitive environment in the Internet context, business model innovation is one of the indispensable capabilities of enterprises.
Based on the existing business model literature, current research on business models has the following shortcomings. First, previous business model literature focused on its microscopic and macroscopic attributes [16] and lacks research on the mechanism of its formation and evolution. Second, current research on changes of business models in the Internet context mostly treats Internet information technology as an external tool that affects the business model of the enterprise, which does not match the current development of enterprise business models, because the Internet has been deeply embedded in the business model and has become an important internal factor affecting it. Finally, given the rapid pace of development and the resource constraints caused by rapid changes in the social environment, business model innovation is one of the effective ways for enterprises to relieve the pressure of market competition and ensure sustainable development. In summary, research on the dynamic innovation mechanism of enterprise business models in the Internet context has important reference significance and academic research value for the future market development of enterprises. Although existing theoretical research on business models has laid a solid foundation for this paper, the literature studying business model innovation from a dynamic perspective still has shortcomings. Due to the lack of process research, debates about business models remain in the academic community. Therefore, this paper studies the evolution of enterprise value logic in business model innovation from a dynamic perspective and reveals how business model innovation can be conducted in the context of the Internet. The research framework of the article is shown in Fig. 1:
Fig. 1. The framework of research: under the Internet impact, the value logic of the enterprise business model (value proposition, value creation, value acquisition) evolves, driving the dynamic innovation of the business model in the Internet context.
3 Research Methods 3.1 Method and Case Selection In order to achieve the research goal of this paper, we adopt the standard research method (SPS) based on a single case study, combining theory with the case to carry out
problem research. The reasons are as follows. First, the purpose of this paper is to study the dynamic process of how the business model evolves during the development of the enterprise, so single-case exploratory research is compatible with the research objectives of this paper. Second, the academic literature on business model innovation is relatively inadequate: compared with business model practice, research focusing on the dynamic innovation of business models is rare, and exploratory case studies are suited to analyzing such research gaps [17]. Based on the following two criteria, this paper selects the MI enterprise as the research object: (1) The choice of case object takes into account the principles of importance and representativeness. As an Internet manufacturing company, MI focuses on the innovative research and development of high-tech products and is one of the typical representatives of China's high-tech industry. At the same time, it has constantly improved its business model during its development to match the development goals of the company. As a case study object, this company is in line with the theme of the dynamic evolution of the business model in this paper. (2) The choice takes into account the principle of consistency between theoretical goals and corporate best practice. The success of the company is largely due to its innovative business model; how MI faces challenges and innovates its business model is consistent with the research goal of building an enterprise business model based on corporate development goals. 3.2 Data Collection This study follows the case study process of "Defining Research Questions → Theoretical Review → Case Study Draft Design → Data Collection → Data Analysis". Through cyclic analysis of theory and data and repeated focusing, theoretical innovations can be found [17, 18]. (1) Refining research questions based on key data. The research question originated from the rapid success of MI under the influence of the Internet. Throughout its history, MI has continuously changed its business model to overcome the resource constraints brought about by changes in the market environment. However, in the context of the Internet, why do some companies gain the ability to sustain development by changing their business models, while others cannot or show no obvious improvement? The reason is the key to how companies can innovate their business models. (2) Focus on business model theory and value logic theory. In the process of researching the MI enterprise, the article focuses on business model innovation and value logic changes. This paper assumes that the reason for the enterprise's business model innovation is closely related to the change of its value logic. Through in-depth study of the relevant literature on business model innovation, we combine the case materials to derive the main theoretical analysis framework of this paper. (3) Case data collection. The main data collection methods combine formal research interviews with informal surveys, and the collected data are repeatedly compared to ensure the validity of the research data. First, the research team interviewed several core management personnel of the enterprise and obtained a large amount of data. Second, the results of the project
papers produced through participation in the Beijing High-tech Enterprise Innovation Research Project effectively supplement the case data. Third, the article collects case information from second-hand sources such as the corporate official website and books (refer to Table 1 for details).1
Table 1. Data collection and analysis

Data Collection
- Preliminary data collection: searched secondary data from multiple sources; gathered internal archival data.
- Onsite data collection: interviewed departments and business units; conducted group discussions with other researchers.
- Offsite data collection: searched and selected data that support the construction of business model innovation theory and processes; ensured consistency of the data and theoretical models through discussion.

Data Analysis
- Preliminary data analysis: summarized the timeline of the evolution of business models; reviewed a large body of literature on business model innovation.

Mechanisms to establish reliability
- Prepared semi-structured interview guides with open-ended and relevant questions.
- Collected data from multiple sources to enable triangulation and cross-validation.
- All interviews were taped and transcribed to ensure the accuracy and completeness of the data [17].

Mechanisms to establish validity
- Set up an interview panel of five researchers to enable the validation of interpretations and observations [17].
- Theoretical constructs were repeatedly confirmed with informants to ensure data-model alignment [18].
- Ensured that emergent process models and final conclusions were supported by the literature.
1 Supported by the key project of the Beijing Social Science Foundation (No. 18GLA008): Study on the Path and Strategy of Beijing-Tianjin-Hebei Innovation-Driven Development.
(4) Case data analysis process. The case study is based on the collected first-hand and second-hand case data, combined with the existing literature, from which the paper derives the theoretical model of the business model. First, we summarize the collected case data and learn more about the business situation of the case enterprise; the team then compares, filters and classifies the data to establish a preliminary case database. Second, we compare the case data with the literature to find matching theoretical concepts, and classify and rearrange the data according to the theory to form a theoretical database. Finally, the team extracts the important theoretical constructs from the theoretical database, compares them with the case phenomena, and fuses these elements into a model. Through this data processing, the case data gradually changes from disorder to order.
4 Case Study 4.1 Case Description In order to ensure the integrity of the case data, this paper collects first-hand information about the development of MI's business model. Combined with the existing literature, the development is roughly divided into the following stages (as shown in Fig. 2): (1) Single product development stage: 2011–2013. At the beginning, most of the employees in the company came from the software industry and lacked experience in the hardware manufacturing of smart phones. Therefore, the company did not immediately develop new smart phones, but concentrated its resources on improving the smart phone operating system to serve users in the domestic market. Out of this concept, MI gave up the hardware-focused tradition of most smart phone companies and proposed a new, unique business model of "Hardware + Software + Internet", in which "hardware" refers to smart phones, "software" refers to the smart phone operating system MIUI, and the Internet refers to value-added services built on top of the MIUI system and smart phones. (2) Product expansion stage: 2014–2016. As time went on, MI attracted a large number of market users by embracing the Internet and minimizing the overall cost of sales. It used the mobile phone as the entrance for traffic and built a communication platform to secure users' brand loyalty, thus bringing users into its own fan base. Based on the original products, it continuously expanded its product boundaries, from smart phones to smart TVs and smart routers, and from the mobile phone MIUI system to tablet and computer operating systems, which brought the enterprise and consumers closer. Through this development strategy, MI obtained opportunities for rapid development and expansion, cultivated and accumulated a wide range of potential consumer groups, and achieved rapid and stable development.
(3) Product ecology stage: 2017–2018. With the business model innovation of "Hardware + Software + Internet", MI's main business achieved rapid development. However, as many powerful technology companies gradually entered the field of smart phones, weaknesses began to appear, and competitors such as Huawei and ZTE brought considerable pressure. In order to solve these problems, MI upgraded its "triathlon" business model from the original three sections of "Internet + software + hardware" to "Internet + hardware + new retail". In the context of the new retail era, MI began to cooperate vigorously with third-party enterprises to develop offline experience stores, expand offline sales channels, and successfully achieve online and offline linkage, which greatly promoted the company's turnaround.
Fig. 2. Concept hierarchy of the three development stages (single product development, product expansion, product ecology), showing the hardware, software and Internet components at each stage (MIUI mobile phone, MIUI TV, MIUI tablet, router, box, cloud service, Rice chat, Millet page, mutual entertainment, medical, film) and the ecological-chain partners involved (Huami, Zhimi, Lianchuang, Jinshan, Thunder, Youku, Huace Film).
4.2 Dynamic Evolution of Value Proposition While the digital technology brought by the Internet has shortened the communication distance between enterprises and consumers, the competition and resource constraints brought about by the rapidly changing market environment have also had a huge impact on the original business models of enterprises. In order to adapt to market changes, the business focus of the company has gradually evolved from "profit-oriented" to "customer-centered". For MI, in order to adapt to the market opportunities and competitive pressure brought by the rapid development of Internet technology, the company constantly changed its value proposition to support business model innovation. The changes in the value proposition at different stages are as follows:
(1) Single product development stage. Unlike other smart phone manufacturers that started from the production of mobile phone accessories, MI set its research and development focus on the improvement of the intelligent operating system, followed by the production of smart phones. The reason is that in the newly established company the internal core staff had no experience in the mobile phone industry: "In the senior management team, there is a Dr. Zhou who came from Motorola, and the rest have not done mobile phones. They are all software and management people." It can be seen that in the Internet environment, enterprises are affected not only by market demand; the constraints of internal resources also greatly restrict their development. Therefore, the value proposition of the enterprise at the beginning was mainly: using the limited resources at hand to realize the innovation of the mobile phone system. (2) Product expansion stage. With the popularity of domestic smart phones, consumers were no longer concerned only with the price and ease of use of smart phones, but began to demand that mobile phones meet various individual needs. Therefore, how to meet the new individualized needs of market consumers became an important factor in market competition, and the development at this stage mainly focused on product optimization and customer satisfaction: "For MI, quality is the top priority, user satisfaction is the top priority, and user reputation is the top priority." Therefore, in the product expansion stage, the value proposition of the enterprise was mainly: optimizing the product and meeting the individual needs of users. (3) Product ecology stage. As Internet technology matured, market competition began to diversify. In order to enhance competitiveness, the company gradually turned to third-party partners to acquire more external resources and strengthen its market position. After six years of continuous development, MI had huge market resources; "born with fever" had become a popular term among many fans in the Internet age. In order to safeguard the company's stable development in the future, it expanded its sphere of influence by constructing an ecological chain and continuously expanding its investment fields, further narrowing the distance between the enterprise and consumers. Therefore, in the product ecology stage, the value proposition of the enterprise was mainly: investing in third-party enterprises and building a product ecosystem. 4.3 Dynamic Evolution of Value Creation With the constant changes in the external environment and internal resources, MI constantly changed its value proposition to confirm the correct market development direction. In order to match different value propositions, the company needed to select the appropriate value creation logic to support its stable development. In the Internet context, the value creation methods at different stages were as follows:
(1) Single product development stage. Owing to the immaturity of Internet technology at the time and the lack of relevant core resources, MI's research focused on the innovative design of the smart phone operating system. First, it invested all its resources in system innovation and launched the MIUI system, which successfully filled the gap in the domestic smart phone system market. Second, for the manufacture of smart phones, due to the lack of hardware experience, it abandoned the custom of traditional mobile phone manufacturers and completely outsourced the manufacturing process to reduce production costs. Therefore, in the early stage of development, it mainly relied on the launch of the MIUI system and new products such as mobile phones to compete in the market, thus creating the value of the enterprise. (2) Product expansion stage. The enterprise made full use of the "rice fans" resources brought by mobile phones to create value. First, in order to meet the individual needs of consumers, MI built a user communication platform to optimize products with user suggestions. Second, based on the original products, it continuously expanded its product boundaries, from smart phones to smart TVs and smart routers, and from the mobile phone MIUI system to tablet and computer operating systems, which brought the enterprise and consumers closer. Therefore, in the product expansion stage, in order to meet the needs of consumers, it mainly constructed a community communication platform to optimize and improve existing products, enhance user satisfaction, and then realize product value creation. (3) Product ecology stage. In response to the competitive pressure brought about by the rapid development of technologies such as Internet big data, MI began to seek new models to cope with future market competition. First, in order to create an ecological circle, MI continued to expand the scope of its products, aiming to create a product ecosystem around consumers and integrate fully into their lives. Second, it continued to cooperate with third parties in the form of investment to form derivative industries under the brand; while enhancing brand awareness, this also ensured the balanced development of the enterprise in many aspects. Finally, in order to cope with future market development, it invested more in cultivating scientific and technological talents and constantly carried out product improvement and innovation, for example in medical, film and television products. It can be seen that at this stage MI adjusted the structure of the original model, integrated third-party enterprises to expand the boundaries of its products, and finally formed a complete product ecosystem. 4.4 Dynamic Evolution of Value Acquisition For MI, at different stages of development, due to internal and external factors, it needed to constantly change its value proposition, adopt different ways to create market value, and capture that value in time to stabilize its existing market position. The value acquisition of the enterprise is analyzed as follows:
(1) Single product development stage. Using Internet technology, MI launched the MIUI operating system and the smart phone on the basis of existing products to compete in the market. Its products were sold purely through online direct sales. Owing to limited production resources, the enterprise had to adopt "hunger marketing", selling products through advance reservations. Therefore, in the single product development stage, revenue mainly came from the sales of smart phones and the mobile phone system, and the cost was mainly the earlier R&D investment. (2) Product expansion stage. With the help of the communication platform, the enterprise understood users' needs and improved user satisfaction and market competitiveness by recommending and perfecting products. In terms of sales, while continuing online sales, MI expanded its marketing area on the strength of good user reputation and began to enter foreign markets. Therefore, at this stage, the main revenue came from the operation of the information platform and product sales, and the cost lay in the maintenance of the platform and the optimization and upgrading of products. (3) Product ecology stage. In order to prepare for future market competition, MI began to use its rich resources to cooperate with third-party companies through investment. In this way, it expanded the scope of its products to form a complete product ecosystem and reduce market risks. In addition, based on the original online sales, MI continued to increase investment in the construction of experience stores to actively develop offline channels. Therefore, in the ecological stage, the income of the enterprise was characterized by returns from investments in third parties and from opening up offline channels, and the cost came mainly from these investments and the expenditure on building the product ecosystem.
5 Business Model Innovation Mechanism Under the influence of the Internet, with changes in the environment and in resources, the original transaction structure of the market has changed, which erodes the original value network of the enterprise and has a huge impact on its business model. In order to enhance market competitiveness and ensure sustainable development, enterprises need to continuously innovate their business models. Combined with the previous analysis, the essence of business model innovation lies in the transformation and reorganization of the different value logics, finally building a new value logic chain. MI carried out the evolution and restructuring of value logic at different stages to realize the innovation of its business model (Fig. 3). 5.1 Theoretical and Practical Contributions This case study highlights implications for researchers and practitioners. By combing through the relevant information of the case enterprise and combining it with research conclusions on business model innovation, the paper discusses the business model innovation process in the Internet environment to discover regular patterns and provide a reference for Chinese enterprises carrying out business model innovation. The conclusions of this paper are as follows:
Fig. 3. Innovation mechanism of the business model. Across the development stage (2011–2014), the expansion stage (2014–2016) and the ecology stage (2017–2018): the value proposition evolves from innovative products adapting to the market, to a focus on product quality, to working together to create an ecosystem; value creation evolves from system innovation and the production of smart phones, to the construction of an information platform and product optimization, to business cooperation and the derivation of products; value acquisition evolves from mobile phone sales revenue and mobile phone system service revenue, to related product sales revenue and community operating income, to sales revenue and equity income. Accordingly, the enterprise's business model evolves from a single product, to multiple products linked to users through an information platform, to a product ecosystem involving third-party enterprises.
First, the study opens the "black box" of enterprise business model innovation in the Internet environment and supplements existing research. The essence of enterprise business model innovation lies in the change of the original value logic: in order to cope with the complex competitive environment and the individualized needs of customers in the Internet era, enterprises need to reconstruct the relevant value logic to achieve business model innovation and thereby ensure their capability for sustainable development. Second, based on the dynamic perspective, this paper introduces value logic into the research on business model innovation. Based on the case material analysis, the paper extracts a business model innovation model based on value logic to help guide enterprises in building business models. In addition, the practical significance of the research lies in combining business model innovation research with Internet development strategy and providing relevant ideas for promoting business model innovation. Through the case study, the article refines an innovation model based on value logic, which lifts the construction of enterprise business models beyond the level of management suggestions and makes it more disciplined and operational. 5.2 Limitations Although this article is based on the single case study method, which is a "typical and legitimate endeavor" in qualitative research, it is associated with problems of generalizability and external validity. We acknowledge that statistical generalization is impossible from a single case; future research could validate the propositions of this study statistically, so that the boundary conditions of our findings can be better refined.
References 1. Chesbrough, H.: business model innovation: opportunities and barriers. Long Range Plan. 43(2–3), 354–363 (2010) 2. Geissdoerfer, M., Morioka, S., Marly, M.D.C., et al.: Business models and supply chains for the circular economy. J. Cleaner Prod. 190, 712–721 (2018) 3. Sitoh, M.K., Pan, S.L., Yu, C.Y.: Business models and tactics in new product creation: the interplay of effectuation and causation processes. IEEE Trans. Eng. Manag. 61(2), 213–224 (2014) 4. Hawkins, W.G., Gagnon, D.: Source-assisted attenuation correction for emission computed tomography. Picker International (2001) 5. Business, M.L., Innovation, M.: The strategy of business breakthroughs. Int. J. Innov. Sci. 1(4), 191–204 (2009) 6. Wei, J.: Connotation of business model and construction of research framework. Res. Manag. V33(5), 107–114 (2012). (in Chinese) 7. Teece, D.J.: Managing intellectual capital: organizational, strategic, and policy dimensions. Technovation 59(3), 767–770 (2010) 8. Zott, C., Amit, R.: The fit between product market strategy and business model: implications for firm performance. Strateg. Manag. J. 29(1), 1–26 (2008) 9. Sosna, M., Trevinyo-Rodríguez, R.N., Velamuri, S.R.: Business model innovation through trial-and-error learning: the naturhouse case. Long Range Plan. 43(2–3), 383–407 (2010) 10. Bocken, N.: Sustainable business: the new business as usual. J. Ind. Ecol. 18(5), 3 (2014) 11. Krishnakumar, S., Prasanna Devi, S., Surya, P.R.K.: A business dynamics model in entrepreneurial orientation for employees. Ind. Commer. Train. 45(1), 36–50 (2013) 12. Dong, J., Chen, J. How do manufacturers reshape the relationship with users in the Internet era: a case study based on MI’s business model. China Soft Sci. 2015(8) (2015). (in Chinese) 13. Lu, S.: The innovation of business model based on value creation in the internet age – taking “Jingdong” as an example. Hebei Enterp. 2019(02), 63–64 (2019). (in Chinese) 14. Berman, S.J., Kesterson-Townes, L., Marshall, A., et al.: How cloud computing enables process and business model innovation. Strategy Leadersh. 40(4), 27–35 (2012) 15. Putzc, L.M.: The influence of business models and development trends in the field of logistics to promote eco-friendly transport (2017) 16. Kantur, D.: Strategic entrepreneurship: mediating the entrepreneurial orientation-performance link. Manag. Decis. 54(1), 24–43 (2016) 17. Yin, R.K.: Case study research: design and methods. J. Adv. Nurs. 44(1), 108 (2010) 18. Pan, S.L., Pan, G.S.C., Leidner, D.E.: Crisis response information networks. J. Assoc. Inf. Syst. 13(1), 31–56 (2012)
Cooperation Strategy of Intellectual Property Securitization in Supply Chain from Risk Perspective Cheng Liu1 , Wenjing Xie1 , Qiuyuan Lei1 , and Xinzhong Bao2(B) 1 University of Science and Technology Beijing, Beijing, China 2 Beijing Union University, Beijing, China
[email protected]
Abstract. This paper uses the evolutionary game method to study the strategic choices of SMEs, financial institutions and investors in the financing of intellectual property supply chain securitization from the perspective of risk governance, and discusses the influence of various risk factors on the stable equilibrium. The results show that no matter how the initial strategy is chosen, the game finally evolves to "SMEs pay the accounts on time, financial institutions perform the contract, and investors buy subordinated bonds". The credit risk of SMEs, the intellectual property value evaluation risk and the investor preference risk affect the stable equilibrium of the game system. Keywords: Intellectual property securitization · Supply chain · Tripartite evolutionary game · Risk governance
1 Introduction The successful issuance of the "Qiyi Century supply chain intellectual property ABS" in 2018 indicates that China has begun to actively explore new ways of supply chain financing. The securitization of the intellectual property supply chain helps to improve the capital turnover efficiency of the supply chain and promote the progress and scale of intellectual property creation, and it also provides greater development space and opportunities for enterprises in the supply chain, which is conducive to the development of the industry. However, the risks among supply chain enterprises and in the intellectual property itself seriously restrict the successful development of intellectual property supply chain securitization. Asset securitization creates a favorable market environment for the implementation of intellectual property supply chain securitization [1]. The timeliness and intangibility of intellectual property make its securitization not only complex but also risky [2]. The trust mechanism between enterprises in the supply chain and enterprise credit risk also seriously restrict the development of intellectual property supply chain securitization [3]. The large scale of China's intellectual property assets and the strong market demand for securitization are favorable factors that support the smooth development of securitization in the intellectual property supply chain [4].
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 X. Shi et al. (Eds.): LISS 2021, LNOR, pp. 24–34, 2022. https://doi.org/10.1007/978-981-16-8656-6_3
However, in the process of implementing
the securitization of the intellectual property supply chain, there are real problems and obstacles such as insufficient legal policies and protection mechanisms, a large volume of intellectual property rights of low quality, an imperfect intellectual property licensing and trading market, the lagging development of third-party intermediary service agencies, and the chaotic internal organization and management of some enterprises [5–7]. China should actively take advantage of the favorable conditions for the development of intellectual property securitization, overcome the unfavorable conditions that hinder its development, and develop the securitization model of the intellectual property supply chain. To sum up, scholars' research on the securitization of the intellectual property supply chain has the following deficiencies: (1) there are relatively few studies on the cooperation strategies of the participants in the securitization of the intellectual property supply chain; (2) in the existing research, few scholars use risk factors to measure the players' income and study the impact of risk factors on the stable equilibrium of the game.
2 The Construction of an Evolutionary Game Model of Risk Sharing in Intellectual Property Supply Chain Securitization This paper takes the Qiyi Century intellectual property supply chain financial asset-backed special plan, the first intellectual property supply chain securitization product in China, as the research object, and studies the income balance of all parties from the perspective of risk sharing. 2.1 Introduction to the Securitization Process of the Intellectual Property Supply Chain
Fig. 1. Operation process of the intellectual property supply chain ABS in Qiyi Century: supply chain enterprises assign their receivables to the factoring company Juliangbaoli (the original equity holder), which packages the accounts receivable claims as the basic assets of the asset-backed special plan; the plan offers ABS to priority and secondary investors to obtain funding, intermediary service agencies (lawyers, accountants, credit rating, etc.) supply services, the financial institution provides deficiency payment, and the returned funds flow back to investors as capital and interest payments under supervision, escrow and payment agency arrangements.
The basic process of Qiyi Century's intellectual property supply chain ABS is as follows: Qiyi Century and its suppliers (enterprises in the supply chain) generate accounts
receivable claims due to intellectual property services; the Juliang factoring company takes over the accounts receivable claims to become the original equity holder of the intellectual property securitization project, and packages and transfers the factoring assets to the asset-backed special plan. Managers raise funds through the asset-backed special plan, purchase the accounts receivable claims, and intermediaries provide the relevant services. The specific financing process is shown in Fig. 1. In the start-up stage of intellectual property securitization, when the financing party transfers the intellectual property accounts receivable to financial institutions, the main risk is the intellectual property value evaluation risk. Financial institutions then set up the special asset plan with the accounts receivable claims as basic assets and entrust intermediary service agencies to carry out the rating; in this rating stage there are operational risks of the intermediary service agencies. Next comes the design and issuance stage: the asset-backed special plan issues bonds, investors buy the bonds, and the plan obtains the issuance income; this stage mainly involves credit risk and investment preference risk. Finally, during the duration of the plan, enterprise credit risk, investment preference risk and intellectual property impairment risk are all present, and these risks seriously constrain the strategy choices of the game players. Facing the accounts receivable, the enterprise can pay or not pay; depending on whether the financial institution makes up the balance of the special asset plan, its strategy choice is performance or nonperformance; investors, subject to investment preference risk, decide whether to buy subordinated bonds or priority bonds. There are eight strategy combinations of the three players, and the meaning of each strategy is shown in Table 1.

Table 1. Strategy combination matrix

I. Payment, performance, purchase of subordinated bonds: enterprises pay the accounts, financial institutions perform the contract, and investors buy subordinated bonds.
II. Payment, performance, purchase of priority bonds: enterprises pay the accounts, financial institutions perform the contract, and investors buy priority bonds.
III. Payment, default, purchase of subordinated bonds: enterprises pay the accounts, financial institutions default, accelerated liquidation events may occur, and investors' interest income is affected.
IV. Payment, default, purchase of priority bonds: enterprises pay the accounts, financial institutions default, accelerated liquidation events may occur, and investors' income is affected.
V. Nonpayment, performance, purchase of subordinated bonds: when an accelerated liquidation event occurs, the financial institution makes up the funds and pays investors' income in the order of liquidation; investors who buy subordinated bonds obtain part of the income, but less than when the cash flow is normal.
VI. Nonpayment, performance, purchase of priority bonds: when an accelerated liquidation event occurs and the financial institution performs the contract, investors who buy priority bonds obtain interest income, which is less than when the cash flow is normal.
VII. Nonpayment, default, purchase of subordinated bonds: the enterprise does not pay the accounts, the financial institution does not make up the difference and pays the corresponding price, and the income of investors who buy subordinated bonds is greatly reduced.
VIII. Nonpayment, default, purchase of priority bonds: the enterprise does not pay, the financial institution does not make up the difference and pays the price; the income of investors who buy priority bonds is affected, but it is higher than the interest income of subordinated bond buyers.
2.2 Parameter Setting For the financing party, transferring the creditor's rights provides funds that meet its financing needs. The financial institution undertakes the collection of accounts receivable and the interest payments to investors, entrusts managers to manage the basic assets, collects and transfers the cash flow, and makes up the special funds when needed; if it does not make up the funds, it pays a price. For investors, the risk of buying priority bonds is low: interest is paid on time and the principal is repaid. If they choose to buy subordinated bonds, the risk is higher and the yield uncertain, but the yield is generally higher than that of priority bonds. In order to better reflect the income of the game players, the relevant parameters are set; the parameters and definitions are shown in Table 2.

Table 2. Parameter matrix

- δ: the cost of intellectual property securitization for the original equity holder (all other costs of the business borne on behalf of the original owner).
- v: the intellectual property valuation, which is related to the level of risk.
- B1: the investment amount of priority investors, i.e. the principal of a senior bond investor.
- B2: the investment amount of secondary investors, i.e. the principal of a subordinated bond investor.
- γεv: the financing amount, where ε is the ratio of intellectual property fees and γ is the intellectual property value evaluation risk.
- θ1 r: the investment income of financial institutions, where θ1 is the income risk, θ1 ∈ [0, 1], and r is the highest expected rate of return.
- ipn (n = 1, 2, 3): the expected yield of the senior bond in the normal state, the accelerated liquidation state and the default state, respectively.
- ism (m = 1, 3, 4): the expected total yield of the subordinated bond at maturity in the normal state, the accelerated liquidation state and the default state, respectively.
- is2: the yield of the subordinated bond during the holding period, which shall not exceed 1% per year.
- π: the balance payment, i.e. the expense the difference payer needs to make up when the cash flow is abnormal.
- D: the loss from non-payment; if the enterprise does not pay the account, its right to use the intellectual property is restricted, which brings losses to the enterprise.
- F: the loss from default of the financial institution, incurred if the financial institution defaults and does not inform the manager of the cash flow problem.
- ϕβ: the recovery ratio of accounts receivable, where ϕ represents the enterprise credit risk and β the highest recovery rate of accounts.
- μl (l = 1, 2): the investment preference risk, representing the risk to the investor's earnings; l = 1 and l = 2 correspond to the preference risks of priority and secondary investors respectively, with μ1 < μ2.
2.3 Model Assumptions In order to establish a reasonable evolutionary game model, this paper makes the following assumptions. • Hypothesis 1: the three players are boundedly rational and information is asymmetric. All yields in this paper ignore the time value of funds, and bonds are assumed to be issued at par. • Hypothesis 2: when the enterprise does not pay the account, the balance made up by the difference payer can repay the interest and principal of the priority investors, and the investors and the original equity holder need to bear the credit risk of the enterprise and the impairment risk of the intellectual property. • Hypothesis 3: under normal conditions, the interest income of investors purchasing subordinated bonds is greater than that of investors purchasing priority bonds; however, when an accelerated liquidation event or a default event occurs, the interest income of investors purchasing subordinated bonds is assumed to be less than that of investors purchasing priority bonds, and investors recover their principal in any case. The default cost of financial institutions is greater than the difference that needs to be made up. The probability that the accounts receivable are paid is x and that they are not paid is 1 − x; the performance probability of financial institutions is y and their default probability is 1 − y; the probability that investors invest in subordinated bonds is z, and the probability that they invest in priority bonds is 1 − z.
3 The Solution of the Equilibrium Points and the Analysis of the Stability Strategy of the Three-Party Evolutionary Game In this part, firstly, according to the assumptions of the model and the parameter setting, we obtain the income matrix of the three players. Secondly, from the income matrix we derive the replicator dynamic equations of the three players. Finally, we obtain the equilibrium points of the evolutionary game system and explore their stability. 3.1 Income Matrix According to the assumptions and the parameter matrix of the model, the capital asset pricing model is adapted: the risk coefficient and the income are multiplied to express the income of each participant, and the income matrix of each participant is determined. Table 3 reflects the income of each participant; in each cell the entries correspond to the financing party, the financial institution and the investor, respectively.
Table 3. The game income matrix of the financing party, the financial institution and the investor (in each cell the three entries are the payoffs of the financing party, the financial institution and the investor, in that order)

When the investor purchases subordinated bonds:
- Payment, performance: γεv; ϕβγεv − μ1 ip1 B1 − μ2 is2 B2 − B1 − B2 − μ2 is1 B2 − δ − (1 − ϕβ)π; μ2 is2 B2 + μ2 is1 B2
- Payment, breach of contract: γεv; ϕβγεv − μ1 ip2 B1 − B1 − B2 − μ2 is3 B2 − δ − (1 − ϕβ)F; μ2 is3 B2
- Nonpayment, performance: γεv(1 + θ1 r) − D; θ2 v − δ − π − B1 − μ1 ip2 B1 − B2 − μ2 is3 B2; μ2 is3 B2
- Nonpayment, breach of contract: γεv(1 + θ1 r) − D; θ2 v − δ − B1 − μ1 ip3 B1 − B2 − μ2 is4 B2 − F; μ2 is4 B2

When the investor purchases priority (senior) bonds:
- Payment, performance: γεv · θ1 r; ϕβγεv − μ1 ip1 B1 − μ2 is2 B2 − B1 − B2 − μ2 is1 B2 − δ − (1 − ϕβ)π; μ1 ip1 B1
- Payment, breach of contract: γεv · θ1 r; ϕβγεv − μ1 ip2 B1 − B1 − B2 − μ2 is3 B2 − δ − (1 − ϕβ)F; μ1 ip2 B1
- Nonpayment, performance: γεv(1 + θ1 r) − D; θ2 v − δ − π − B1 − μ1 ip2 B1 − B2 − μ2 is3 B2; μ1 ip2 B1
- Nonpayment, breach of contract: γεv(1 + θ1 r) − D; θ2 v − δ − B1 − μ1 ip3 B1 − B2 − μ2 is4 B2 − F; μ1 ip3 B1
3.2 Replicator Dynamic Equations From the income matrix in Table 3, we can obtain the replicator dynamic equations of the financing party, the financial institution and the investor:

F(x) = dx/dt = x(VE1 − V̄E) = x(1 − x)[D − γεv(K + 1)]   (1)

F(y) = dy/dt = y(VB1 − V̄B) = y(1 − y){x[μ1 ip2 B1 + μ2 is3 B2 − μ1 ip1 B1 − μ2 is2 B2 − μ2 is1 B2 + (1 − ϕβ)(F − π)] + (1 − x)[μ1 ip3 B1 − μ1 ip2 B1 + μ2 is4 B2 − μ2 is3 B2 + F − π]}   (2)

F(z) = dz/dt = z(VI1 − V̄I) = z(1 − z){xy[μ2 is2 B2 + μ2 is1 B2 − μ1 ip1 B1] + (x + y − 2xy)[μ2 is3 B2 − μ1 ip2 B1] + (1 − x)(1 − y)[μ2 is4 B2 − μ1 ip3 B1]}   (3)

Setting F(x) = dx/dt = 0, F(y) = dy/dt = 0 and F(z) = dz/dt = 0, the equilibrium points of the evolutionary game dynamic process are E1(0, 0, 0), E2(0, 0, 1), E3(0, 1, 0), E4(0, 1, 1), E5(1, 0, 0), E6(1, 0, 1), E7(1, 1, 0), E8(1, 1, 1) and E9(p*, q*, m*), where E9(p*, q*, m*) is the mixed-strategy equilibrium point.
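To make the dynamics of Eqs. (1)-(3) concrete, the following sketch integrates the three replicator equations numerically. It is only an illustration: every numerical value (B1, B2, the yields, μ1, μ2, ϕβ, D, π, F, and K, which is taken here as θ1·r) is hypothetical, chosen so that the hypotheses of Sect. 2.3 hold, and is not drawn from the Qiyi Century transaction.

```python
# Minimal numerical sketch of the replicator equations (1)-(3).
# All parameter values below are hypothetical, chosen only to satisfy the
# paper's assumptions (F > pi, D - gamma*eps*v*(K+1) > 0, normal-state
# subordinated income > priority income, etc.).
B1, B2 = 100.0, 50.0                      # principals of priority / subordinated investors
ip1, ip2, ip3 = 0.06, 0.04, 0.02          # priority-bond yields: normal / accelerated / default
is1, is2, is3, is4 = 0.10, 0.01, 0.03, 0.01   # subordinated-bond yields
mu1, mu2 = 0.8, 0.9                       # preference risks of priority / subordinated investors
phi, beta = 0.7, 0.9                      # enterprise credit risk, highest recovery rate
gamma, eps, v = 0.8, 0.3, 1000.0          # valuation risk, IP fee ratio, IP valuation
K = 0.05                                  # taken here as theta1 * r (assumption)
D = gamma * eps * v * (K + 1) + 20.0      # non-payment loss, so that D - gamma*eps*v*(K+1) > 0
pi, F = 30.0, 60.0                        # balance payment and default loss (F > pi)

def replicator(x, y, z):
    """Right-hand sides F(x), F(y), F(z) of the replicator system."""
    fx = x * (1 - x) * (D - gamma * eps * v * (K + 1))
    A = (mu1*ip2*B1 + mu2*is3*B2 - mu1*ip1*B1 - mu2*is2*B2 - mu2*is1*B2
         + (1 - phi*beta) * (F - pi))                      # perform - default, given payment
    Bcoef = mu1*ip3*B1 - mu1*ip2*B1 + mu2*is4*B2 - mu2*is3*B2 + F - pi  # given non-payment
    fy = y * (1 - y) * (x * A + (1 - x) * Bcoef)
    fz = z * (1 - z) * (x*y*(mu2*is2*B2 + mu2*is1*B2 - mu1*ip1*B1)
                        + (x + y - 2*x*y) * (mu2*is3*B2 - mu1*ip2*B1)
                        + (1 - x)*(1 - y) * (mu2*is4*B2 - mu1*ip3*B1))
    return fx, fy, fz

# Forward-Euler integration from an arbitrary interior starting point
x, y, z = 0.2, 0.3, 0.4
for _ in range(20000):
    fx, fy, fz = replicator(x, y, z)
    x, y, z = x + 0.01*fx, y + 0.01*fy, z + 0.01*fz

print(round(x, 3), round(y, 3), round(z, 3))   # approaches (1, 1, 1), i.e. the point E8
```

With these numbers the trajectory converges to (1, 1, 1), the point E8 whose stability is analyzed below.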
3.3 Discussion of the Stability of the Equilibrium Points and Analysis of the Stability Strategy of the Evolutionary Game In an asymmetric game, a mixed-strategy equilibrium is not an evolutionarily stable equilibrium, so we only need to discuss the asymptotic stability of the pure-strategy equilibria. The asymptotic stability of an equilibrium point is determined by the Lyapunov criterion (indirect method)1: an equilibrium point is stable when every eigenvalue λ of the Jacobian matrix J satisfies λ ≤ 0. The enterprise strategy "0" means that the accounts receivable are not paid, while strategy "1" means the opposite; the financial institution strategy "0" stands for default and "1" for performance; the investor strategy "0" represents the purchase of priority bonds, and "1" represents the purchase of subordinated bonds. Judging the signs of the eigenvalues, it is found that the system has a unique evolutionarily stable equilibrium E8(1, 1, 1). The evolutionary game phase diagrams of the three players are shown in Fig. 2.
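The indirect (eigenvalue) test can also be carried out numerically. The sketch below is self-contained: it rebuilds the replicator system with the same hypothetical parameter values as the previous listing, forms the Jacobian by central differences at each of the eight pure-strategy points E1-E8, and inspects the eigenvalue signs. It illustrates the Lyapunov criterion for this particular set of assumed numbers and is not a substitute for the general analysis.

```python
# Lyapunov indirect-method check at the eight pure-strategy points E1..E8,
# using the same hypothetical parameters as the previous sketch.
import itertools
import numpy as np

B1, B2, mu1, mu2, phi, beta = 100.0, 50.0, 0.8, 0.9, 0.7, 0.9
ip1, ip2, ip3 = 0.06, 0.04, 0.02
is1, is2, is3, is4 = 0.10, 0.01, 0.03, 0.01
gamma, eps, v, K, pi, F = 0.8, 0.3, 1000.0, 0.05, 30.0, 60.0
D = gamma * eps * v * (K + 1) + 20.0        # keeps D - gamma*eps*v*(K+1) > 0

def rhs(p):
    x, y, z = p
    fx = x*(1 - x)*(D - gamma*eps*v*(K + 1))
    A = (mu1*ip2*B1 + mu2*is3*B2 - mu1*ip1*B1 - mu2*is2*B2 - mu2*is1*B2
         + (1 - phi*beta)*(F - pi))
    Bcoef = mu1*ip3*B1 - mu1*ip2*B1 + mu2*is4*B2 - mu2*is3*B2 + F - pi
    fy = y*(1 - y)*(x*A + (1 - x)*Bcoef)
    fz = z*(1 - z)*(x*y*(mu2*is2*B2 + mu2*is1*B2 - mu1*ip1*B1)
                    + (x + y - 2*x*y)*(mu2*is3*B2 - mu1*ip2*B1)
                    + (1 - x)*(1 - y)*(mu2*is4*B2 - mu1*ip3*B1))
    return np.array([fx, fy, fz])

def jacobian(p, h=1e-6):
    """Jacobian of the replicator system at point p, by central differences."""
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (rhs(np.array(p) + e) - rhs(np.array(p) - e)) / (2*h)
    return J

for eq in itertools.product([0.0, 1.0], repeat=3):    # the eight pure-strategy points
    lam = np.linalg.eigvals(jacobian(eq)).real
    print(eq, np.round(lam, 3), "stable" if np.all(lam < 0) else "not stable")
# With these numbers only (1.0, 1.0, 1.0), i.e. E8, has all eigenvalues negative.
```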
Fig. 2. Phase diagrams of the evolutionary game: (a) SMEs, (b) financial institutions, (c) investors
From the replicator dynamic equations and the Jacobian matrix of the three players it can be seen that, because D − γεv(K + 1) is assumed to be greater than 0, no matter what the initial strategy of the financing party is, it eventually stabilizes at "1", that is, the accounts receivable are paid. Let x* = [μ1 B1(ip2 − ip3) + μ2 B2(is3 − is4) + π − F] / [μ1 B1(2ip2 − ip3 − ip1) + μ2 B2(2is3 − is4 − is2 − is1) + ϕβ(π − F)]. When x > x* and the initial strategy lies in region S1, y = 0 is the equilibrium point of the evolutionary game, that is, financial institutions choose to default; when x < x* and the initial strategy lies in region S2, y = 1 is the equilibrium point, that is, financial institutions choose to perform. It can be seen from the investors' phase diagram in Fig. 2 that when the initial state lies in region S3, z = 0 is the equilibrium point of the evolutionary game, that is, investors buy priority bonds; when the initial state lies in region S4, z = 1 is the equilibrium point. According to the previous hypotheses, x < x* must hold and the initial strategy choice of investors must lie in S4, so the evolutionary game finally stabilizes at E8(1, 1, 1).
1 The Lyapunov criterion was established in 1892 by A. M. Lyapunov, a Russian mathematician and mechanician, to analyze the stability of a system.
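The switching threshold x* for the financial institution quoted above can be evaluated directly once numbers are fixed. The short sketch below computes it with the same hypothetical parameters as the earlier listings; because both perform-minus-default payoff gaps are positive under those numbers, the threshold exceeds 1, so performing is the better response for every feasible x, in line with the argument above.

```python
# Evaluate the institution's switching threshold x* from F(y) = 0,
# with the same hypothetical parameter values as the earlier sketches.
B1, B2, mu1, mu2, phi, beta = 100.0, 50.0, 0.8, 0.9, 0.7, 0.9
ip1, ip2, ip3 = 0.06, 0.04, 0.02
is1, is2, is3, is4 = 0.10, 0.01, 0.03, 0.01
pi, F = 30.0, 60.0

num = mu1*B1*(ip2 - ip3) + mu2*B2*(is3 - is4) + pi - F
den = mu1*B1*(2*ip2 - ip3 - ip1) + mu2*B2*(2*is3 - is4 - is2 - is1) + phi*beta*(pi - F)
x_star = num / den
print(round(x_star, 3))   # about 1.273 here: above 1, so "perform" is favored for every x in [0, 1]
```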
4 Analysis of Factors Influencing the Stability Strategy of the Tripartite Evolutionary Game The stable equilibrium of the above three-party evolutionary game model is based on a series of assumptions; if the assumptions change, the behavior of the game players is seriously affected. 4.1 The Impact of Intellectual Property Valuation Risk on the Evolutionary Game Equilibrium The intellectual property valuation risk γ mainly affects the amount of financing. When γ is very high, the financing party obtains additional funds, which increases the amount of financing. When γ > D/[εv(K + 1)], we have D − γεv(K + 1) < 0 and the stability point changes immediately: E3(0, 1, 0) becomes the stable point, in which case financial institutions pay no attention to the recovery of accounts after the game. However, because the default loss of financial institutions is greater than the difference that needs to be made up, financial institutions will not easily make default decisions and will choose to perform the contract after the game. For investors, when the intellectual property value evaluation risk is high, they tend to buy priority bonds after the game.
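A quick numerical check of this critical valuation-risk level is shown below; D, ε, v and K keep the hypothetical values used in the earlier sketches.

```python
# Sect. 4.1 check: the financing party's incentive flips when the valuation
# risk gamma exceeds D / (eps * v * (K + 1)). All numbers are hypothetical.
eps, v, K, D = 0.3, 1000.0, 0.05, 272.0
gamma_crit = D / (eps * v * (K + 1))
print(round(gamma_crit, 3))                      # about 0.863 with these numbers
for gamma in (0.80, 0.95):                       # below vs. above the critical level
    print(gamma, D - gamma * eps * v * (K + 1))  # positive -> pay; negative -> not pay
```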
5 Conclusion This paper constructs a risk sharing model of Intellectual Property Securitization Based on financiers, financial institutions and investors, and draws the following conclusions: (1) in order to promote the effective development of intellectual property supply chain securitization, through the game of its own income, the evolutionary equilibrium is finally stable at E8 (1,1,1). (2) Intellectual property value evaluation risk and enterprise credit risk affect the equilibrium of evolutionary game. When the risk of intellectual property valuation is high, the account is likely not to be paid, and investors will consider the
risk factors and choose to buy the low-risk priority bonds. The credit risk of enterprises mainly affects the strategic choice of financial institutions and investors: if the recovered accounts cannot make up for the investment losses of investors, financial institutions will choose to default, and investors will also tend to buy priority bonds in order to avoid risk.
Acknowledgements. This research is partially supported by the Major Projects of the Beijing Social Science Foundation (No. 20ZDA03).
The Evolution Game of Government and Enterprise in Green Production—The Perspective of Opportunity Income and Media Supervision Yanhong Ma, Zezhi Zheng, and Chunhua Jin(B) Beijing Information Science and Technology University, Beijing, China {mayanhong,20000119}@bistu.edu.cn
Abstract. The green production problem is still an important research topic in the field of environment management. This paper analyzes the strategic interaction between the government and the enterprise by building an evolutionary game model. In the built model, the impact of the opportunity income generated by the green production input costs is considered. Also, since media supervision is now widely used, the probability that the media can discover the enterprise's non-green production behavior is considered when building the game model. How this probability affects the system is discussed, and advice on how to urge the enterprise to implement green production is proposed according to the effects. Keywords: Green production · Evolution game · Opportunity income · Media supervision
1 Introduction
As environmental issues are among the key concerns of many countries, green production has become an essential requirement for enterprises. However, because there exist many influencing factors, non-green production behavior still happens. For example, the occurrence of random environmental accidents is an obvious factor [1]. The locations of environmental accidents are always dispersed, which is another obvious factor [2, 3]. Therefore, how to urge enterprises to implement green production has become a fundamental problem to be solved.
There are two stakeholders in the problem of green production management: the government and the enterprise. The interaction between the government and the enterprise is usually described by a supervision game, and traditional games are commonly used to study this interaction [4]. In these game models, many factors are considered, including government policies (tax policies and subsidies), consumer behavior, and rebound effects [5].
Funding: Social Science Foundation of Beijing (19GLC066).
Some other influencing factors of the supervision game, such as subsidies, the cooperative contract between the manufacturer and the retailer, and the social welfare of an enterprise's green production decision, are also analyzed [6–8]. The traditional game assumes that all players are completely rational when making decisions, which is unreasonable; in practice, the players should be assumed to be boundedly rational [9]. In order to address bounded rationality, evolutionary games have been proposed and used in many studies. For example, China's coal-mine safety inspection system has been extensively analyzed using evolutionary games [10–12]. As for the problem studied in this paper, carbon taxes, subsidies and some other factors have been considered when building evolutionary game models of green production supervision [13, 14]. Although many factors have been considered in the green production supervision game, there are other factors that have not been studied. From the perspective of the enterprise, the factors that have been studied can be defined as external influencing factors (government subsidies, tax policies, consumer behavior, public participation). From the internal point of view of the enterprise, some internal factors also affect the strategic interaction between the government and the enterprise. Because the resources of the enterprise are limited, the resources invested in green production could instead generate production profit if they were invested in production. As the non-green production behavior may not be discovered by the government, opportunistic behavior may be taken by the enterprise. Therefore, the opportunity income generated by the green production cost is considered in this paper. Besides, the probability that non-green production behavior is discovered by the media is also considered, because media supervision has become a widespread form of supervision. Hence, this paper studies the evolution game of government and enterprise in green production from the perspective of opportunity income and media supervision.
This paper is organized as follows. Section 2 provides the problem description, assumptions, and notations of this paper. Section 3 builds the evolutionary game model of the green production supervision problem. Section 4 provides the analysis of the evolutionarily stable strategies of the game. Last, conclusions are presented in Sect. 5.
2 Problem Description, Notations, and Assumptions
2.1 Problem Description
In the game studied in this paper, there are two players: the manufacturing enterprise and the government regulator. The enterprise chooses whether or not to implement green production, so the strategies of the enterprise are {Green, Not Green}. If all of the resources of the enterprise are invested in production, the profit is I. Assume the green production input cost is Sc and the output coefficient is h (h > 0); these resources would generate production profit Sch if they were invested into production instead. Therefore, the enterprise's profit is I − Sch when it implements green production. On the other hand, all of the resources are invested into production, which makes profit I, when it implements non-green production. Normally, the government takes regulatory measures for the sake of reducing the non-green behavior of the enterprise. The regulatory measures include inspections, fines, policies, etc. Consequently, the strategy space of the government is assumed to be {Strong supervision, Weak supervision}. The cost of strong supervision is assumed to be Sg and the cost of weak supervision is 0.
Figure 1 shows the relationship between the enterprise and the government in the above game.

Fig. 1. The relationship between the game participants: the government (strategy space: Strong Supervision, Weak Supervision) supervises the enterprise (strategy space: Green, Not Green); factors influencing the game decisions include production profit, pollution treatment cost and supervision from the media.
2.2 Notations
The notations of the variables and the parameters used in this paper are as follows:
Sc: green production cost of the enterprise.
Sg: strong supervision cost of the government.
f: discovery probability of non-green production behavior of the enterprise.
h: output coefficient of the enterprise's production.
I: the profit generated by all of the resources of the enterprise.
rc: rewards for the enterprise. Normally, if the enterprise complies with the green production guidelines when the government takes the strong supervision strategy, the enterprise can get some policy or funding support from the government.
Pc: penalty imposed on the enterprise. Once the non-green production behavior of the enterprise is discovered, the government will impose some penalty, for example a fine, on the enterprise.
Pg: loss of the government. If the local government's nonfeasance is discovered by the media, there will be some loss to the local government, including the decline of credibility, the reduction of funds from the central government, etc.
Tc: cost of pollution treatment. Once the pollution brought by the enterprise's non-green production behavior is discovered by the media, the enterprise has to treat the pollution, which generates cost Tc.
2.3 Assumptions
We propose three assumptions before building the evolutionary game model, which are as follows.
Assumption 1: When the enterprise takes non-green behavior and the government takes weak supervision, environmental pollution happens. In other cases, green production equipment is installed actively or passively.
As mentioned in Sect. 1, the locations of environmental accidents are always dispersed, which makes the pollution difficult to discover. Consequently, Assumption 2 is proposed.
Assumption 2: Environmental pollution is discovered with probability f.
Assumption 3: Pg > Pc. Assumption 3 ensures that the government has the incentive to supervise the enterprise.
The goals of the players are all assumed to be profit maximization.
3 Model
According to the above analysis, the payoff matrix of the green production supervision game in this paper is given in Table 1.

Table 1. The payoff matrix of the stage game

                        Government: Strong supervision    Government: Weak supervision
Enterprise: Green       I − Sch + rc ; −Sg                I − Sch ; 0
Enterprise: Not Green   I − Sch − Pc ; Pc − Sg            I − f(Pc + Tc) ; fPc − fPg

Note: In each payoff cell, the first expression is the enterprise's payoff and the second is the government's payoff.
In practice, the players are all bounded rational. That is to say, they will adjust their decision gradually according to the game results of the previous stage. After analyzing the payoff matrix of the stage game, we built the evolutionary game model for the problem studied in this paper. Suppose the players were drawn randomly in pairs from two populations and received the expected payoffs noted in Table 1. y represents the probability that the government takes the pure strategy "Strong Supervision" and x represents the current proportion of the population of the enterprise taking the pure strategy "Green." Subsequently, the expected payoffs of the enterprise and the government are denoted as shown in Eq. (1) and Eq. (2), respectively:

uc(x, y) = (x, 1 − x) A (y, 1 − y)^T    (1)

ug(x, y) = (y, 1 − y) B (x, 1 − x)^T    (2)
where A = [I − Sch + rc, I − Sch; I − Sch − Pc, I − f(Pc + Tc)] (rows: Green, Not Green; columns: Strong supervision, Weak supervision) and B = [−Sg, Pc − Sg; 0, f(Pc − Pg)] (rows: Strong supervision, Weak supervision; columns: Green, Not Green).
Then, the standard two-population replicator dynamics model of the green production supervision game can be written as follows:

F(x, y) = dx/dt = x(1 − x){y[Sch + rc + (1 − f)Pc − fTc] − Sch + fPc + fTc}    (3)

G(x, y) = dy/dt = y(1 − y){−x[(1 − f)Pc + fPg] + (1 − f)Pc + fPg − Sg}    (4)
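As a hedged illustration of Eqs. (1)–(4), the following Python sketch builds the two payoff matrices and the replicator right-hand sides. The numeric values are placeholders: Sc, h, rc, Pc, Tc, Sg and Pg are the values later used for Fig. 2, f is set to 0.5 as in one of the Fig. 2 cases, and I is arbitrary since it cancels out of Eqs. (3) and (4).

import numpy as np

# Placeholder parameters (Fig. 2 values where available; I is arbitrary)
Sc, h, rc, Pc, Tc, Sg, Pg, I, f = 2, 2, 1, 3, 3, 4, 6, 10, 0.5

A = np.array([[I - Sc*h + rc, I - Sc*h],            # enterprise payoffs: Green row
              [I - Sc*h - Pc, I - f*(Pc + Tc)]])    # Not Green row
B = np.array([[-Sg, Pc - Sg],                       # government payoffs: Strong row
              [0.0, f*(Pc - Pg)]])                  # Weak row

def u_c(x, y):   # Eq. (1): expected payoff of the enterprise population
    return np.array([x, 1 - x]) @ A @ np.array([y, 1 - y])

def u_g(x, y):   # Eq. (2): expected payoff of the government population
    return np.array([y, 1 - y]) @ B @ np.array([x, 1 - x])

def F(x, y):     # Eq. (3): replicator dynamics of the enterprise proportion x
    return x*(1 - x)*(y*(Sc*h + rc + (1 - f)*Pc - f*Tc) - Sc*h + f*Pc + f*Tc)

def G(x, y):     # Eq. (4): replicator dynamics of the government proportion y
    return y*(1 - y)*(-x*((1 - f)*Pc + f*Pg) + (1 - f)*Pc + f*Pg - Sg)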
4 Results and Analysis
Through solving dx/dt = 0 and dy/dt = 0, we obtained five local equilibriums of the two-population replicator dynamics model. They are (x*, y*), (0, 0), (0, 1), (1, 0) and (1, 1), where x* and y* are denoted as follows:

x* = 1 − Sg/[Pc + f(Pg − Pc)],   y* = 1 − (rc + Pc)/[Sch + rc + Pc − f(Pc + Tc)]

Although we have obtained five local equilibriums of the studied problem, a local equilibrium may not be an evolutionarily stable strategy (ESS) of the replicator dynamics system. One commonly used method to determine the global equilibrium is the method proposed by Friedman [9]. This method judges whether a local equilibrium is an ESS by analyzing the values of the trace and determinant of the Jacobian matrix of the differential equations. The Jacobian matrix J of the differential equations is as follows:

J = [∂F(x, y)/∂x, ∂F(x, y)/∂y; ∂G(x, y)/∂x, ∂G(x, y)/∂y] = [a11, a12; a21, a22]    (5)
If the determinant and the trace satisfy the following two conditions, then the corresponding local equilibrium is an ESS.
Condition (1): det J = a11 a22 − a12 a21 > 0, and
Condition (2): tr J = a11 + a22 < 0,
where:
a11 = (1 − 2x){y[Sch + rc + (1 − f)Pc − fTc] − Sch + fPc + fTc},
a12 = x(1 − x)[Sch + rc + (1 − f)Pc − fTc],
a21 = −y(1 − y)[(1 − f)Pc + fPg], and
a22 = (1 − 2y){−x[(1 − f)Pc + fPg] + (1 − f)Pc + fPg − Sg}.
The values of a11, a12, a21, and a22 at each local equilibrium are given in Table 2.

Table 2. The values of a11, a12, a21, and a22 at each local equilibrium

Local equilibrium   a11                  a12   a21   a22
(0, 0)              −Sch + fPc + fTc     0     0     −Sg + Pc − f(Pc − Pg)
(0, 1)              rc + Pc              0     0     Sg − Pc + f(Pc − Pg)
(1, 0)              Sch − fPc − fTc      0     0     −Sg
(1, 1)              −rc − Pc             0     0     Sg
(x*, y*)            0                    A     B     0
In Table 2, A = x*(1 − x*)[Sch + rc + (1 − f)Pc − fTc] and B = y*(1 − y*)[(1 − f)Pc + fPg]. It is obvious that the probability that the enterprise's non-green production behavior is discovered by the media significantly affects the ESS of the system. Therefore, the ESS of the system based on the value of f is given in Proposition 1.
Proposition 1:
(a) If 0 ≤ f < min{Sch/(Pc + Tc), (Sg − Pc)/(Pg − Pc)}, the ESS of the model is (0, 0);
(b) If (Sg − Pc)/(Pg − Pc) < f < Sch/(Pc + Tc), no local equilibrium satisfies the ESS conditions and the replicator dynamics fluctuate around (x*, y*);
(c) If f > Sch/(Pc + Tc), the ESS of the model is (1, 0).
To better understand the trends of evolution, a simulation analysis is shown in Fig. 2.
Fig. 2. The evolutionary trend of the green production supervision game for (a) f = 0.2, (b) f = 0.5 and (c) f = 0.8 (each panel plots the enterprise and government proportions against time/step), with h = 2, rc = 1, Sc = 2, Pc = 3, Tc = 3, Sg = 4, Pg = 6
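A rough sketch of how the trends in Fig. 2 can be reproduced by forward-Euler integration of Eqs. (3) and (4) with the caption's parameters; the initial proportions, step size and number of steps are assumptions made here.

import numpy as np

Sc, h, rc, Pc, Tc, Sg, Pg = 2, 2, 1, 3, 3, 4, 6   # Fig. 2 parameters

def simulate(f, x0=0.5, y0=0.5, dt=0.01, steps=4000):
    """Forward-Euler integration of the replicator dynamics for a given f."""
    x, y = x0, y0
    trajectory = [(x, y)]
    for _ in range(steps):
        dx = x*(1 - x)*(y*(Sc*h + rc + (1 - f)*Pc - f*Tc) - Sc*h + f*Pc + f*Tc)
        dy = y*(1 - y)*(-x*((1 - f)*Pc + f*Pg) + (1 - f)*Pc + f*Pg - Sg)
        x = min(max(x + dt*dx, 0.0), 1.0)   # keep the proportions inside [0, 1]
        y = min(max(y + dt*dy, 0.0), 1.0)
        trajectory.append((x, y))
    return np.array(trajectory)

for f in (0.2, 0.5, 0.8):
    x_end, y_end = simulate(f)[-1]
    print(f"f = {f}: final (enterprise, government) proportions = ({x_end:.2f}, {y_end:.2f})")

For f = 0.2 the trajectory settles near (0, 0) and for f = 0.8 near (1, 0), while for f = 0.5 it keeps circling around (x*, y*) instead of settling, which is the fluctuating behaviour discussed below.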
Proposition 1 and Fig. 2 show the ESS of the green production supervision game. When the probability that the non-green behavior is discovered by the media is relatively low, which is case (a), the ESS is (0, 0). When this probability is a little larger, which is case (b), the replicator dynamic system shows a fluctuating trend; under this circumstance the system has a stable limit cycle, but it is not asymptotically stable [10]. Last, when the probability is large enough to satisfy the condition in case (c), the ESS is (1, 0). It is assumed that environmental pollution only happens when the government and the enterprise are both negative on green production. Consequently, the state of case (c) is better than that of case (b), and case (b) is better than that of case (a). Based on the above analysis, encouraging the media to discover non-green production behavior is an effective way to realize green production. For example, the government can reward the media that pay more attention to green production behavior with money. The government can also improve the credibility of these media to encourage other media to focus on green production.
5 Conclusion
Green production is still an important concern of many countries in the world. Because green production relies on the decision of the enterprise itself and on the government's supervision decision, this paper analyzes the strategic interaction between the enterprise's decision and the government's decision by building an evolutionary game model. In this model, the opportunity income generated by the green production input costs and the probability that non-green production behavior is discovered by the media are considered. Through solving the model, the ESS of the system is given, and how the probability of discovery by the media affects the system is analyzed. Results show that encouraging the media to report the enterprise's non-green behavior can help to realize green production.
Acknowledgment. We thank the anonymous reviewers, whose advice helped improve the quality of this paper. This study was also supported by the Social Science Foundation of Beijing (19GLC066).
References 1. Beavis, B., Walker, M.: Random wastes, imperfect monitoring and environmental quality standards. J. Public Econ. 21(3), 377–387 (1983) 2. Kim, S.H.: Time to come clean? disclosure and inspection policies for green production. Oper. Res. 63(1), 1–20 (2015) 3. Wang, S., Sun, P., De Vericourt, F.: Inducing environmental disclosures: a dynamic mechanism design approach. Oper. Res. 64(2), 371–389 (2016) 4. Schol, J.T.: Cooperative regulatory enforcement and the politics of administrative effectiveness. Am. Polit. Sci. Rev. 85(1), 115–137 (1991) 5. Soroush, S., Morteza, R.B.: A game theoretic approach for assessing residential energyefficiency program considering rebound, consumer behavior, and government policies. Appl. Energy 233–234, 44–61 (2019) 6. Ma, W., Zhang, R., Chai, S.: What drives green innovation? a game theoretic analysis of government subsidy and cooperation contract. Sustainability. 11, 5584 (2019) 7. Jeong, E.B., Park, G.W., Yoo, S.H.: Incentive mechanism for sustainable improvement in a supply chain. Sustainability 11, 3508 (2019) 8. Zhang, Z., Wang, Y., Meng, Q., Luan, X.: Impacts of green production decision on social welfare. Sustainability 11, 453 (2019) 9. Friedman, D.: On economic applications of evolutionary game theory. J. Evol. Econ. 8(1), 15–43 (1998) 10. Liu, D., Xiao, X., Li, H., Wang, W.: Historical evolution and benefit-cost explanation of periodical fluctuation in coal mine safety supervision: an evolutionary game analysis framework. Eur. J. Oper. Res. 243(3), 974–984 (2015) 11. Liu, Q., Li, X., Hassall, M.: Evolutionary game analysis and stability control scenarios of coal mine safety inspection system in China based on system dynamics. Saf. Sci. 80, 13–22 (2015) 12. Liu, Q., Li, X., Meng, X.: Effectiveness research on the multi-player evolutionary game of coal-mine T safety regulation in China based on system dynamics. Saf. Sci. 111, 224–233 (2019) 13. Chen, W., Hu, Z.H.: Using evolutionary game theory to study governments and manufacturers’ behavioral strategies under various carbon taxes and subsidies. J. Clean. Prod. 201, 123–141 (2018) 14. Fan, R., Dong, L., Yang, W., Sun, J.: Study on the optimal supervision strategy of government low-carbon subsidy and the corresponding efficiency and stability in the small-world network context. J. Clean. Prod. 168, 536–550 (2017)
QL4POMR Interface as a Graph-Based Clinical Diagnosis Web Service Sabah Mohammed, Jinan Fiaidhi(B) , and Darien Sawyer Department of Computer Science, Lakehead University, Thunder Bay, ON, Canada {mohammed,jfiaidhi,dsawyer}@lakeheadu.ca
Abstract. Most experienced physicians follow the SOAP note structure in documenting patient cases and their care journey. The SOAP note originated from the problem-oriented medical record (POMR) developed nearly 60 years ago by Lawrence Weed, MD. However, the POMR/SOAP is not commonly found in the electronic medical records (EMR) used today, due to the flexible nature of building patient cases. Previous EMRs, as well as clinical decision-making software, require information to follow a strict schema, which pushed the POMR away from being implemented. However, the development of GraphQL is a wind of change that offers a way to build a flexible interface and to query data that has a semi-structured schema like the POMR. This article develops QL4POMR as a GraphQL implementation of the POMR SOAP note. Physicians can use this interface to create any patient case and present it for the purpose of diagnosis and prognosis with varying backends. QL4POMR implements a mapping module to map graphs from POMR to HL7 FHIR and vice versa. Neo4j is used as the backend to integrate all the data regardless of its nature, as long as the data is identified by objects and relations. The progress reported in this article is quite encouraging and advocates for further enhancements. Keywords: POMR · SOAP · GraphQL · Neo4J · e-Diagnosis · Graph Databases
1 Introduction
Over the past five decades, healthcare all over the world has been undergoing an immense transformation to tap into the technological advancements that allow more pervasive connections with people and devices. The goals of this transformation include sensitive issues like promoting patient independence, improving patient outcomes, reducing risk, enforcing important policies like patient privacy, minimizing avoidable services, focusing on prevention, understanding complex healthcare data, decreasing medical errors, reducing cost, increasing integration and partnership, consolidating clinical workflows, and preventing the spread of diseases in a timely manner. Connectivity in healthcare needs to go beyond the enabling connectivity infrastructure, including wireless, mobile, cloud or any form of tele-health, to include information connectivity. Much of the stated goals are only possible with highly connected information that goes beyond the hospital database silos. To make sense of the information connections and leverage the connections within the existing healthcare
data, one needs a holistic approach that uses graph representation and graph databases [1, 2]. Designing connected information based on graph databases produces efficient data management and data services at the same time [3]. In legacy healthcare systems, data services are often a missing component: data management is the only functionality, used to build error-tolerant and non-redundant database systems with traditional relational database query capability. Data services, however, provide retrieval, analytic and data mining query capabilities that encompass a high degree of relationships. There are other advantages to supporting graph databases, as they possess the ability to accommodate unstructured or schema-less data as well as structured data with a strongly typed schema. In this research we propose a method to deal with all forms of data (with or without schema) as well as with healthcare data having a flexible schema, that is, a defined upper structure whose subcomponents vary. To deal with flexible schemas, our method uses an interface based on the popular GraphQL to build a Web API for clinical diagnosis that uses a flexible schema like the POMR SOAP [4]. The GraphQL interface is connected to the Neo4j schema-less graph database. The GraphQL schema is built around the popular problem-oriented medical record SOAP note that was introduced by Lawrence Weed [5], which describes medical cases based on the subjective observations as presented by the patient and the objective examinations conducted by the physician. Based on these two attributes the physician builds the assessment (i.e. diagnosis) and the plan to cure the patient case. SOAP is a kind of diagnostic schema and a cognitive model that allows physicians to systematically approach a diagnostic problem by providing a structured scaffold for representing the clinical problem and all the associated clinical scenarios, including the physical exam, lab tests, assessments and planning. Based on the SOAP schema, physicians can reason about the chief complaint presented by the patient and identify the causes of the presented encounters. By approaching SOAP this way, physicians can systematically access and explore individual illness scripts as potential diagnoses. SOAP is a kind of clinical reasoning upper schema that uses a semi-structured model, where the problem representation cannot be accommodated in a relational database model that requires a strict and well-structured schema. SOAP provides a well-defined upper structure, but the shape and contents of the lower structures used depend on the purpose and the problem case. Clinicians highly support this approach as it provides several advantages, including [6]:
• flexible clinical case representation, with lower problem structures that can easily be changed according to the differential diagnosis and the assessment progress;
• helping to manage the physician's cognitive load and maintain effective problem-solving, since a flexible schema helps trigger clinicians to perform differentiating historical or physical exam maneuvers to refine the differential diagnosis;
• helping to teach others how to approach the diagnosis of a clinical problem;
• allowing clinicians to adapt, refine and individualize their diagnosis schema by modifying or collapsing certain categories, or creating new ones, which allows a schema to "work" best for them;
• providing proper record keeping that improves patient care and enhances communication between the provider and other parties: claims personnel, peer reviewers, case managers, attorneys, and other physicians or providers who may assume the care of the patients.
However, the major drawback of using a semi-structured model is that the clinical data cannot be saved and retrieved as efficiently as in a more constrained structure, such as the relational model. Records generated by the semi-structured schema can only be accommodated using generic representations such as XML, JSON or OEM, where clinical elements need to have a unique ID or tag to enable their future reuse and retrieval. Based on these representations, clinicians can produce a range of different types of diagnosis problem cases, in efforts to meet different patients' encounters. To manage the data exchange for such semi-structured data, applications must follow the ISO/IEC 11179 Metadata Registry (MDR) standard (https://en.wikipedia.org/wiki/ISO/IEC_11179), which paves the way to represent, share and query such semi-structured data. The basic principle of data modeling in this standard is the combination of an object class and a characteristic. For example, the high-level concept "Chest Pain" is combined with the object class "Patient ID" to form the data element concept "Patient ID with Chest Pain". Note that "Patient ID with Chest Pain" is more specific than "Chest Pain". Efforts to implement ISO 11179 in representing clinical cases resulted in using a GraphQL API [7, 8]. In this article we describe an interface based on GraphQL for the POMR SOAP note which follows the guidelines of ISO 11179. The proposed interface is called QL4POMR and defines objects, fields, queries and mutation types. Entry points within the schema define the path through the graph to enable search functionalities, and the exchange is also promoted by mutation types, which allow creating, updating and deleting of metadata. QL4POMR is the foundation for the uniform interface, which is implemented in a modern web-based interface prototype. The QL4POMR interface is linked to the Neo4j graph database.
2 The POMR Flexible Diagnostic Schema
The problem-oriented medical record (POMR) was introduced in 1968 by Lawrence Weed, MD, as an attempt to address the most common problems in diagnosis. It has been widely used by the medical community since then and was approved by the American Institute of Medicine in 1974. Although there was no standard for it and no real implementation, from 2015 the medical community repeatedly restarted its efforts to implement it, as the POMR fits the trend of care becoming more patient-centered [9]. A problem-oriented approach is also useful because patients and physicians can relate easily to the problems on the problem list and assess, update and respond to them [10]. Moreover, a problem-oriented approach is seen as a good solution to the bottleneck of interoperability by focusing on care services [11]. This design ideology has been supported by the recent HL7 FHIR healthcare record representation, based on a service-oriented architecture for seamless information exchange. In this direction, POMR can be viewed as the upper uniform interface for defining every diagnostic problem. It uses a flexible schema where the top nodes are well defined and the lower-level nodes vary according to the patient case. Figure 1 illustrates our view of the POMR schema, where the yellow nodes (i.e. subjective, objective, assessment and plan) must exist while the remaining sub-nodes, marked green, vary according to the patient's problem. GraphQL is a query language for graph-structured data, which is a common standard for querying semi-structured data like the POMR. It has gained wide popularity with
Fig. 1. The POMR flexible schema layout.
major IT vendors like Facebook and Netflix [12]. Technically, GraphQL is a query language for APIs, not databases. It can be considered an abstraction layer providing a single API endpoint both for queries and mutations (i.e. data changes) over any database, including NoSQL. The query objects are defined using a GraphQL schema, which has an expressive syntax to define objects and supports inheritance, interfaces, custom types and attribute constraints. A GraphQL schema is created by supplying the root types of each type of operation, query and mutation (the latter optional):

class GraphQLSchema {
  constructor(config: GraphQLSchemaConfig)
}

type GraphQLSchemaConfig = {
  query: GraphQLObjectType;
  mutation?: ?GraphQLObjectType;
}
The GraphQL server uses the defined schema to describe the shape of the semi-structured data graph. The defined schema describes a hierarchy of types with fields that are populated from the back-end data stores. The schema also specifies exactly which queries and mutations (changes) are available for clients to execute against the described data graph. GraphQL uses the Schema Definition Language (SDL) and buildSchema to define queries and their resolvers. In the following example we use the GraphQL SDL to fetch a Patient object with name and age as attributes.
import { buildSchema } from 'graphql';

const typeDefs = buildSchema(`
  type Patient {
    name: String
    age: Int
  }
  type Query {
    getUser: Patient
  }
`);

const resolvers = {
  Query: {
    getUser: () => ({ name: 'Sabah Mohammed', age: 66 }),
  },
};
GraphQL provides two important capabilities: building a type schema and serving queries against that type schema. Developers first need to build the GraphQL type schema, which can be mapped to the required codebase. In this sense, a GraphQL function executes a GraphQL query against a schema that in itself already contains structure as well as behavior. The main role of GraphQL is thus to orchestrate the invocations of the resolver functions and package the response data according to the shape of the provided query. Such an approach is generally known as schema-first programming. In our case we design our POMR SOAP schema this way. The GraphQL API provides a set of tools (graphql-tools) as a basic thin layer, which includes parse and buildASTSchema, GraphQLSchema, validate, execute and printSchema. By using graphql-tools we can connect the schema types with resolvers that may change their content.
const { makeExecutableSchema } = require('graphql-tools')

const typeDefs = `
  type Query {
    user(id: ID!): Patient
  }

  # The SOAP note types (Subjective, Objective, Assessment, Plan) are defined
  # elsewhere in the QL4POMR schema and are referenced from the Patient type.
  type Patient {
    id: ID!
    name: String
    subjective: Subjective
    objective: Objective
    assessment: Assessment
    plan: Plan
  }
`

const resolvers = {
  Query: {
    // fetchUserById is a backend lookup supplied by the data layer
    user: (root, args, context, info) => {
      return fetchUserById(args.id)
    },
  },
}

const schema = makeExecutableSchema({
  typeDefs,
  resolvers,
})
Based on these schema design principles, we can build on what was reported for QL4MDR [7] to create our own QL4POMR schema that complies with the MDR ISO 11179. Figure 2 illustrates our overall QL4POMR ecosystem, which defines a diagnosis service interface that can work with the POMR semi-structured schema and have all the nodes created and stored in a graph database like Neo4j. The QL4POMR interface is reinforced with a CRUD Agent to manage the complexity of the schema and resolvers as well as to support structured querying through four modules (Create, Read, Update and Delete). For this we use the graphql-modules library, which lets each CRUD module package its own type definitions and resolvers.
Fig. 2. QL4POMR diagnostic web service interface.
However, exchanging information between the QL4POMR interface and the Neo4j backend, as well as with HL7 FHIR, requires additional primitive modules to manage the matching and mapping operations needed for information exchange. In the next two sections we describe these primitive operations.
3 Interfacing QL4POMR with Neo4j
To connect QL4POMR to Neo4j we need a programming language environment that has APIs for such connectivity. The best API for this connectivity is the nxneo4j Python 3.8 library, which enables you to use NetworkX-style commands to interact with Neo4j. To demonstrate this connectivity we use the following snippets from a Python Jupyter notebook:
(1) Connect to Neo4j
(2) Create some patient nodes:
(3) Create three POMR SOAP Nodes according to QL4POMR Schema:
(4) Now we can see the three patient SOAPs displayed in the Jupyter notebook (not persistent, see Fig. 3-a) and in Neo4j (persistent, see Fig. 3-b)
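A rough sketch of steps (1)–(3) using the plain neo4j Python driver (rather than nxneo4j) is given below; the connection details, node labels, property names and relationship type are placeholders, not the exact QL4POMR naming convention.

from neo4j import GraphDatabase

# Placeholder connection details for a local Neo4j instance
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

create_soap = """
MERGE (p:Patient {id: $pid})
CREATE (s:SOAP {subjective: $subjective, objective: $objective,
                assessment: $assessment, plan: $plan})
CREATE (p)-[:HAS_SOAP]->(s)
"""

with driver.session() as session:
    # One illustrative POMR SOAP note attached to a patient node
    session.run(create_soap, pid="ID34",
                subjective="joint pain and redness",
                objective="skin test, blood test",
                assessment="Cellulitis",
                plan="Primsol")
driver.close()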
Fig. 3. POMR patient data as displayed by the Jupyter notebook (not persistent) and the Neo4j desktop (persistent).
We followed this style of connectivity to build our Neo4j graph database with POMR patient cases using our CRUD modules. Moreover, the POMR patient data that resides on the Neo4j platform gives clinicians additional capabilities to monitor and update patient cases through the Neo4j Cypher query language. For example, executing the following Cypher query will show which conditions the current patients have encountered (see Fig. 4): MATCH p = ()-[r:Encounter]->() RETURN p LIMIT 25.
Fig. 4. Executing a Neo4j Cypher query to browse what conditions patients have encountered.
4 Interfacing QL4POMR with HL7 FHIR
Connecting with HL7 FHIR requires mapping the QL4POMR types to the FHIR Resource types, which are the common building blocks for all information exchanges with FHIR. A single resource (e.g., Condition, see http://hl7.org/implement/standards/fhir/condition.html) contains several Element Definitions (e.g., Encounter), each of which has a data type (e.g., String) and a cardinality associated with it (see Fig. 5). Figure 6 illustrates the process of mapping the data types between the FHIR Resource and QL4POMR; this is part of the matcher and mapper modules. Additionally, we need to use the graphql-fhir API from Asymmetrik (https://github.com/Asymmetrik/graphql-fhir) to connect to the FHIR server. Prior to connecting to a FHIR server, we find that experimenting with the matcher and mapper on a FHIR sample record is a useful practice; FHIR provides a JSON patient record example (https://www.hl7.org/fhir/patient-example.json) for this purpose.
Fig. 5. FHIR condition data type.
Fig. 6. Mapping between a FHIR resource and QL4POMR.
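As a rough illustration of the mapper's direction of travel, the sketch below turns a minimal QL4POMR assessment into a Condition-like FHIR JSON fragment; the fields and date are simplified placeholders, not a complete validated FHIR resource and not the authors' actual mapper.

import json

def pomr_assessment_to_fhir_condition(patient_id, assessment_text):
    """Map a QL4POMR assessment node to a simplified FHIR Condition resource.

    Only a handful of Condition elements are filled in; a fuller mapper would
    also carry coding systems (e.g. SNOMED CT), onset and clinical status.
    """
    return {
        "resourceType": "Condition",
        "subject": {"reference": f"Patient/{patient_id}"},
        "code": {"text": assessment_text},
        "recordedDate": "2021-06-01",   # placeholder date
    }

print(json.dumps(pomr_assessment_to_fhir_condition("ID34", "Cellulitis"), indent=2))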
5 Connecting with Clinicians via Arrows
The QL4POMR interface allows the use of the Arrows app (https://github.com/neo4j-labs/arrows.app), a new tool from Neo4j Labs. It enables clinicians who have been trained on the POMR types and naming convention to draw graphs for patient cases using any web or mobile browser. The goal is not only to present the patient case but also to export its representation as a JSON or Cypher file, which can be integrated with the remaining Neo4j POMR cases. If the POMR naming convention is used correctly, the matcher and mapper modules will be able to map the drawn graphs to the Neo4j graph database. Figure 7 illustrates how a physician can use the visual facilities of the Arrows app to describe a patient with ID34 who encountered joint pain and redness, whose assessment with the skin test and blood test demonstrated a positive Cellulitis diagnosis, and who has been prescribed Primsol as medication. Arrows can export the visual graph into several formats, like Cypher, which can be used to integrate the described care with the POMR patient cases. Figure 8 illustrates how the described case of patient ID34 can be exported to Cypher. Once the Cypher file has been saved to local storage, the QL4POMR interface will be able to use the matcher and mapper modules to integrate it with the Neo4j graph database. Having this visual tool enables clinicians to focus on care design rather than on training themselves in the information technology system that they are using. Figure 9 illustrates the patient ID34 case after mapping it to Neo4j.
Fig. 7. Physician using arrows to describe a POMR patient case.
Fig. 8. Exporting the Arrows visual graph into Cypher.
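Assuming the exported Cypher file has been saved locally (the file name below is hypothetical), one straightforward way to replay it against the Neo4j backend is with the Python driver:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Hypothetical path to the Cypher file exported from the Arrows app
with open("patient_ID34_arrows_export.cypher") as fh:
    # The export is typically a single CREATE statement; splitting on ';'
    # also copes with files that contain several statements.
    statements = [s.strip() for s in fh.read().split(";") if s.strip()]

with driver.session() as session:
    for statement in statements:
        session.run(statement)
driver.close()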
Fig. 9. Mapping a patient case from arrows to Neo4J.
6 Conclusions
Since the release of GraphQL as an open-source API for web development by Facebook in 2015, the list of GraphQL users has been increasing exponentially (e.g. Netflix, The New York Times, Airbnb, Atlassian, Coursera, NBC, GitHub, Shopify, and Starbucks), as this API enables the development of scalable systems that can deal with data interoperability. By adopting GraphQL, developers can add new types and fields to the API, and it is similarly straightforward for clients to begin using those fields. GraphQL's declarative model also helps to create a consistent, predictable API that can be used across all clients. Developers can add, remove, and migrate back-end data stores without the API changing from the client's perspective. Moreover, the declarative structure of GraphQL allows developers to accommodate data of all sorts, including unstructured and semi-structured data, which is a common case in healthcare applications. The framework includes a graph query language whose semantics has been specified only informally, which allows it to deal with the unstructured nature of the data and the heterogeneity of the sources. In this direction, GraphQL fits the design of clinical diagnostic services that collect all the patient information from heterogeneous data sources, input it into the system, and allow physicians to query and reason about it. This research paper illustrated how GraphQL can be used to implement the problem-oriented medical record (POMR) and describe patient cases through its SOAP note. QL4POMR is a framework to systematically describe the patient care journey and to integrate it with the other cases in the clinical practice. QL4POMR implements a CRUD interface for describing patient cases as well as for querying the graph data via Neo4j Cypher. QL4POMR is also able to integrate with the HL7 FHIR healthcare record and map POMR patient cases into FHIR records and vice versa. Our efforts to fully build QL4POMR are supported by two national grants, and we anticipate more details to be published on this research project in the coming months.
Acknowledgment. This research is funded by the first author's NSERC DDG-2021-00014 and the first and second authors' MITACS Accelerate Grant IT22305 of 2021.
References 1. Kundu, G., Mukherjee, N., Mondal, S.: Building a graph database for storing heterogeneous healthcare data. In: Senjyu, T., Mahalle, P.N., Perumal, T., Joshi, A. (eds.) ICTIS 2020. SIST, vol. 196, pp. 193–201. Springer, Singapore (2021). https://doi.org/10.1007/978-981-15-70629_19 2. Singh, M., Kaur, K.: SQL2Neo: moving health-care data from relational to graph databases. In: 2015 IEEE International Advance Computing Conference (IACC), pp. 721–725. IEEE (2015) 3. Park, Y., Shankar, M., Park, B.H., Ghosh, J.: Graph databases for large-scale healthcare systems: a framework for efficient data management and data services. In: 2014 IEEE 30th International Conference on Data Engineering Workshops, pp. 12–19. IEEE (2014) 4. Mowery, D., Wiebe, J., Visweswaran, S., Harkema, H., Chapman, W.W.: Building an automated SOAP classifier for emergency department reports. J. Biomed. Inf. 45(1), 71–81 (2012) 5. Cameron, S., Turtle-Song, I.: Learning to write case notes using the SOAP format. J. Counsel. Dev. 80(3), 286–292 (2002) 6. Bayegan, E., Tu, S.: The helpful patient record system: problem oriented and knowledge based. In: Proceedings of the AMIA Symposium, p. 36. American Medical Informatics Association (2002) 7. Ulrich, H., et al.: QL 4 MDR: a GraphQL query language for ISO 11179-based metadata repositories. BMC Med. Inf. Decis. Mak. 19(1), 1–7 (2019) 8. Ulrich, H., Kern, J., Kock-Schoppenhauer, A.-K., Lablans, M., Ingenerf, J.: Towards a federation of metadata repositories: addressing technical interoperability. In: GMDS, pp. 74–80 (2019) 9. Salmon, P., Rappaport, A., Bainbridge, M., Hayes, G., Williams, J.: Taking the problem oriented medical record forward. In: Proceedings of the AMIA Annual Fall Symposium, p. 463. American Medical Informatics Association (1996) 10. Simons, S.M.J., Cillessen, F.H.J.M., Hazelzet, J.A.: Determinants of a successful problem list to support the implementation of the problem-oriented medical record according to recent literature. BMC Med. Inf. Decis. Mak. 16(1), 1–9 (2016) 11. Mukhiya, S.K., Rabbi, F., Pun, V.K.I., Rutle, A., Lamo, Y.: A GraphQL approach to healthcare information exchange with HL7 FHIR. Procedia Comput. Sci. 160, 338–345 (2019) 12. Landeiro, M.I., Azevedo, I.: Analyzing GraphQL performance: a case study. In: Software Engineering for Agile Application Development, pp. 109–140. IGI Global (2020)
Thick Data Analytics for Small Training Samples Using Siamese Neural Network and Image Augmentation Jinan Fiaidhi(B) , Darien Sawyer, and Sabah Mohammed Department of Computer Science, Lakehead University, Thunder Bay, Canada {jfiadhi,dsawyer,mohammed}@lakeheadu.ca
Abstract. Although machine learning and deep learning have provided solutions and effective predictions for a variety of complex tasks, they need to be trained with large amounts of labeled data in order for the learning models to perform with high accuracy. In many applications, such as healthcare and medical imaging, collecting large amounts of data is sometimes not feasible. Thick data analytics is an attempt to solve this challenge by incorporating additional qualitative interventions, such as involving experts' heuristics to annotate and augment the training data. In this article, we investigate involving the heuristics of a human radiologist in identifying COVID-19 from a few CT-scan images through the use of groups of image annotation and augmentation techniques. The identification of new COVID-19 cases is carried out using the unique structure of a Siamese network to rank the similarity between new COVID-19 CT-scan images and images determined as COVID by the radiologist. The Siamese network extracts the features of the augmented images and compares them with the new CT-scan image to determine whether the new image is COVID-19 positive using a similarity ratio. The results show that the proposed model, using the augmentation heuristics and trained on a small dataset, outperforms advanced models that are trained on datasets containing large numbers of samples. This article starts by answering key questions on why we need CT scans for COVID-19 diagnosis, what the notion of Thick Data is and how image augmentation can serve as heuristics, and what the role of the Siamese neural network is in learning from small samples. Based on the answers to these questions, the analytics method described in this paper will have better justification. Keywords: Thick data analytics · Image annotation and augmentation · Siamese neural network · COVID-19 CT-scan imaging · Few shot learning
1 Why CT-Scan Imaging for COVID-19 Testing?
COVID-19 point-of-care testing (POCT) has been the subject of huge attention due to the high transmissibility of COVID-19 and its effect on human health. COVID-19 POCT has also brought large frustration due to the fact that there is no perfect test. Most of the well-known COVID-19 POCT tests produce false negatives, depending on the SARS-CoV-2 virus concentration in patients. For example, the popular reverse transcription polymerase
chain reaction (RT-PCR) test may produce 50% false negatives [1], and thus many institutions, including the FDA, advise that antigen tests and tests like RT-PCR on nasopharyngeal swabs, sputum and other lower respiratory tract secretion samples should not be the only POCT used to exclude COVID-19 [2]. Other tests like RT-LAMP, RT-RPA/RAA and CRISPR have not been found superior to RT-PCR, as there was little information on the viral load associated with the clinical studies that used these POCT tests [27]. Similar sensitivity was found with molecular tests like the ID NOW and Xpert Xpress tests, as they are less accurate with patients without symptoms. Because of this variability, the WHO states that ruling out COVID-19 using a POCT requires proven high accuracy ('acceptable' sensitivity ≥ 80% and specificity ≥ 97%) for it to be used as the sole test to confirm or rule out COVID-19 cases [28]. However, the diagnostic accuracy of COVID-19 using CT scans reaches 98%, satisfying the WHO requirement for adopting it as a conclusive test in ruling out or confirming COVID-19 cases [3]. CT scans can accurately identify COVID-19 cases because of unique features like:
• the existence of peripheral, bilateral GGO (ground glass opacity) with or without consolidation or crazy paving;
• the existence of multifocal GGO of rounded morphology with or without consolidation or crazy paving;
• the existence of reverse halo signs.
Based on these COVID-19 identifying features, the sensitivity of the CT scan reaches 92% and the specificity reaches 97% [4]. Figure 1 illustrates the existence of some of the COVID-19 features in the lung lobes of a CT scan.
Fig. 1. Arrows pointing to the existence of COVID-19 identifying features in a CT scan.
More interestingly, because of the availability of these features, especially in the lung lobes, radiologists are able to score the severity of COVID-19 cases [29] using the CTSS scale. Table 1 illustrates severity scores according to the availability of the COVID-19 identifying features in the lung lobes.
Table 1. CTSS scoring scale for COVID-19: each lung lobe is scored by the extent of COVID-19 features present, from 0 (no features in the lobe) up to the highest score for more than 75% of lobe involvement.
The high accuracy of CT scans and the availability of severity scoring are often required for evaluating COVID-19 complications and predicting the prognosis of COVID-19 cases. In this article we focus on how to increase the accuracy and sensitivity of diagnosing COVID-19 cases using CT scans by employing thick data techniques.
2 Thick Data as Augmentation Heuristics for Medical Imaging?
Thick data analytics has been defined as the process of augmenting context and heuristics for the training data of machine learning algorithms in order to provide more focused, relevant and robust predictions, especially for small datasets [5]. The original notion of Thick Data was popularized in 2016 by Tricia Wang, who works in the field of ethnography and anthropology and advocates for integrating "socialware" into the process of interpreting data analytics [6, 30]. For Tricia Wang, the focal point is to embed social heuristics, in the form of qualitative data extracted from testing spotlights, to verify the correctness of the contextual interpretation of quantitative data analytics when projected on these spotlight testbeds [6, 31]. In this direction, the main goal is not to over-rely on quantitative data analytics, as statistics are often biased, misleading, and overly narrow in identifying hidden social insights. However, the concept of Thick Data analytics has faced several interrogations from scientists questioning the context of the data and the meaning that needs to be given to it [7]. Context augmentation needs to be further investigated beyond the notion of socialware to generate more actionable and automated qualitative methods for all sorts of data. Several attempts have been made by the first author to define such contextual methods for harvesting and harnessing Thick Data [8–11]. Thick data analytics can be applied to all types of data to learn interesting patterns that are common within a particular sample of data. However, this concept needs to be divided into two distinct categories: Overarching Analytics (OA) and Specific Analytics (SA). On one hand, OA analytics comprises the contextual augmentation techniques that are applied to all types of data, a concept that resembles Tricia Wang's socialware approach. On the other hand, SA analytics is concerned with integrating data augmentation techniques that are particular to a specific type of data. OA analytics uses general techniques to generate the thick data heuristics, like:
• Behavioral heuristics: qualitative methods for learning behavioral patterns from data.
• Shared heuristics: the metadata generated to share data with others.
• Symbolic heuristics: tagging, folksonomy and labelling generated to give meaning to the data items.
• Holistic heuristics: comprehension techniques that treat the parts of the data as intimately interconnected by reference to the whole.
• Integrated heuristics: analytic techniques that work together to identify general hidden patterns.
• System heuristics: analytics that work together to provide user insights.
In this article, however, we are more interested in using SA analytics to generate families of contextual methods and heuristics related to medical image data. Image augmentation techniques can be used to generate or transform the training data by using heuristic transformations that are found effective in identifying regions of interest according to the eye of imaging experts like radiologists. These heuristics can take many forms, including automated standard ones like rescaling, flipping and cropping the original images. Many other customized image augmentation techniques can be used along with the standard ones [21–24]. However, no prior research has been conducted on using image augmentation to study its effect on the identification of a medical disease like COVID-19. In this article we start this investigation. In Fig. 2, we have classified the SA analytics for medical imaging based on image augmentation into four major classes: the use of basic transformation filters, pipeline filters, domain-specific filters and image embeddings.
Fig. 2. Medical imaging thick data analytics techniques.
3 Why a Siamese Neural Network for Learning Diagnostic Heuristics from the Radiologist?
Machine learning algorithms, including deep learning, fail to work well if they are trained with a small COVID-19 sample [20]. The reason is quite simple: learning networks rely in their training on big data to avoid overfitting (i.e. to perform well on new, unseen data). Unfortunately, in many application domains, such as medical image analysis, there is no access to big data and overfitting cannot be avoided. The previous section presented image augmentation as one solution to enhance the training data, especially with a small sample of images. However, we still need to find a suitable network structure that learns effectively with small datasets. Convolutional neural networks (CNN) commonly show worse performance than traditional machine learning methods (e.g. shallow neural networks and support vector machines) when used with small training data [12]. Recently, techniques such as one-shot learning, few-shot learning, and Siamese neural networks have been proposed to provide the type of convolutional network structure that can learn effectively from a small training data sample [13]. Actually, the Siamese neural network (SNN) structure is a good fit for identifying similarity between images with reference to the targeted class, as it shares weights across its sub-networks. Figure 3 illustrates the structure of a typical Siamese neural network.
Fig. 3. The structure of siamese neural network (SNN).
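A minimal Keras sketch of the structure in Fig. 3, in which two inputs pass through one shared encoder and the absolute difference of the twin embeddings feeds a sigmoid similarity score; the input size and layer widths are illustrative assumptions, not the exact network used in our experiments.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_encoder(input_shape=(224, 224, 1)):
    # Shared sub-network: both CT images are embedded by the same weights
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)
    return Model(inp, x, name="shared_encoder")

encoder = build_encoder()
img_a = layers.Input(shape=(224, 224, 1))
img_b = layers.Input(shape=(224, 224, 1))
emb_a, emb_b = encoder(img_a), encoder(img_b)

# L1 distance between the twin embeddings, then a similarity score in [0, 1]
l1 = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
score = layers.Dense(1, activation="sigmoid")(l1)

siamese = Model([img_a, img_b], score, name="siamese")
siamese.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])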
The SNN has been widely used to score the severity of COVID-19 [14] but rarely for diagnosing COVID-19 in unseen cases [15]. In this article we use image augmentation on a small COVID-19 CT-scan imaging dataset and employ the Siamese neural network for learning from the augmented dataset. Other researchers experimented with the Siamese neural network without using thick data analytic techniques like image augmentation and found it quite effective for learning from small data, but losing focus on identifying regions of interest or scoring the severity of COVID-19 cases [16]. The small datasets we used are the COVID-CT dataset [17] and the SARS-CoV-2 CT-scan database [18]. The SNN passes through two stages (a learning stage and an assessment stage of the severity clustering) [19].
4 Experimenting with Image-Based Thick Data Analytics: Identifying Augmentation Techniques that Enhance COVID-19 Identification
To test how image augmentation techniques might affect the capabilities of a Siamese network, we experimented with several of the augmentation techniques outlined in Fig. 2, specifically: center cropping, contrast, random erasing, sharpening kernel, bounding boxes, key points, and landmarking similarity. Our SNN was trained using 40 training images, 20 classified as COVID positive and 20 classified as COVID negative. The network was then tested using 5 testing images, 3 classified as COVID positive and 2 classified as COVID negative. The testing images are compared to a COVID-positive image and a COVID-negative image. A classification is considered correct if the smaller similarity score is between the images with the same classification. Overall accuracy is calculated by counting how many predictions are correct after 5 runs of the 5 test images, for a total of 25 tests per augmentation.

Table 2. Prediction accuracy of the SNN after different image augmentations

Augmentation      Accuracy (%)
                  Test 1   Test 2   Test 3   Test 4   Test 5   Average
No filters        60       60       0        20       60       40
Center crop       60       20       40       60       40       44
Contrast          40       60       100      80       60       68
Random erasing    60       40       60       100      60       60
Sharpen kernel    60       60       40       60       40       52
Bounding boxes    60       60       60       100      40       64
As we can see in Table 2 and Fig. 4, adding a single augmentation filter enhances the predictive capabilities of the SNN from 40% (with no augmentations) to as much as 68% in the case of increasing image contrast. The SNN used in our experiment is a sequential one that uses seven convolutional layers, as sketched below.
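A hedged Keras reconstruction of such a sequential, VGG-style encoder with seven 3 × 3 convolutional layers and max pooling is given below; the filter counts, input size and embedding width are assumptions.

from tensorflow.keras import layers, models

def build_vgg_style_encoder(input_shape=(224, 224, 1)):
    # Seven 3x3 convolutional layers with interleaved max pooling; the exact
    # filter counts are assumptions, not the authors' configuration
    encoder = models.Sequential(name="vgg_style_encoder")
    encoder.add(layers.Input(shape=input_shape))
    for filters, pool in [(32, True), (32, False), (64, True), (64, False),
                          (128, True), (128, False), (256, True)]:
        encoder.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
        if pool:
            encoder.add(layers.MaxPooling2D())
    encoder.add(layers.Flatten())
    encoder.add(layers.Dense(128, activation="relu"))   # embedding fed to the twins
    return encoder

This encoder plays the role of the shared sub-network feeding the two branches of the Siamese pair described in Sect. 3.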
Fig. 4. Applying image augmentation filters.
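The single-filter augmentations of Table 2 and Fig. 4 can be reproduced with standard imaging libraries; the sketch below uses Pillow and NumPy, with the contrast factor, crop fraction and size of the erased rectangle chosen as assumptions rather than the exact experimental settings.

import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

def center_crop(img, frac=0.8):
    w, h = img.size
    cw, ch = int(w * frac), int(h * frac)
    left, top = (w - cw) // 2, (h - ch) // 2
    return img.crop((left, top, left + cw, top + ch)).resize((w, h))

def boost_contrast(img, factor=1.8):
    return ImageEnhance.Contrast(img).enhance(factor)

def sharpen(img):
    # 3x3 sharpening kernel
    return img.filter(ImageFilter.Kernel((3, 3), [0, -1, 0, -1, 5, -1, 0, -1, 0]))

def random_erase(img, frac=0.2, seed=None):
    rng = np.random.default_rng(seed)
    arr = np.array(img)
    h, w = arr.shape[:2]
    eh, ew = int(h * frac), int(w * frac)
    top, left = rng.integers(0, h - eh), rng.integers(0, w - ew)
    arr[top:top + eh, left:left + ew] = 0     # blank out a random rectangle
    return Image.fromarray(arr)

ct = Image.open("covid_ct_example.png").convert("L")   # hypothetical file name
augmented = [center_crop(ct), boost_contrast(ct), sharpen(ct), random_erase(ct, seed=0)]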
The VGG convolutional neural network architecture used for our SNN model, with seven convolutional layers that use small filters (e.g. 3 × 3 pixels) followed by a max pooling layer, proved able to learn the similarity with the given target. This similarity increases with certain types of augmentation. For example, ground glass opacities (GGO), the region of interest (ROI) that identifies the existence of COVID-19, become easier for the SNN to identify under higher contrast augmentation.
5 Conclusion
This research describes our initial work to identify regions of interest in COVID-19 CT scans by learning from the radiologist what heuristics they use to identify COVID cases. In our experiments we used a Siamese neural network to learn from a small sample of data. We exposed the VGG SNN to original images and to images that had been augmented with image filters like contrast and cropping, and found that augmentations like contrast enhance the recognition of COVID-19 by about 20%, based on identifying the ground glass opacity (GGO) feature of the image. In our future research we are going to continue investigating higher-level filters that can bring more data thickness by identifying more heuristics used by the radiologist. In this direction we are setting up our experiments to identify additional features like well-aerated regions, crazy paving and linear opacity consolidation. Figure 5 illustrates some of these new radiologist heuristics as augmented for one CT scan captured from reference [25]. We are also setting up other experiments to build a pre-trained SNN model that learns from large collections of COVID-19 CT-scan images, as well as to use advanced image annotation techniques like COCO. Moreover, we are considering extending our SNN model to a Siamese Generative Adversarial Network [26] to better identify the COVID-19 CT-scan regions of interest. This is all left to our future research.
Fig. 5. New thick data heuristics captured by augmentation (The green is GGO, the yellow is the crazy paving and the blue is the well-aerated regions).
Acknowledgment. The first author would like to thank NSERC for supporting this research through NSERC DDG-2020–00037.
References
1. Shmerling, R.H.: Which test is best for COVID-19? Harvard Medical School. Accessed 30 Sept 2020, https://www.health.harvard.edu/blog/which-test-is-best-for-covid19-2020081020734
2. FDA: Potential for False Positive Results with Antigen Tests for Rapid Detection of SARS-CoV-2 - Letter to Clinical Laboratory Staff and Health Care Providers (2020). Accessed 11 Mar 2020, https://www.fda.gov/medical-devices/letters-health-care-providers/potential-false-positive-results-antigen-tests-rapid-detection-sars-cov-2-letter-clinical-laboratory
3. Radiological Society of North America: CT provides best diagnosis for COVID-19. Accessed 26 Feb 2020, www.sciencedaily.com/releases/2020/02/200226151951.htm
4. Begley, S.: Covid-19 testing issues could sink plans to re-open the country. Might CT scans help? Accessed 16 Apr 2020, https://www.statnews.com/2020/04/16/ct-scans-alternative-to-inaccurate-coronavirus-tests/
5. Fiaidhi, J.: Envisioning insight-driven learning based on thick data analytics with focus on healthcare. IEEE Access 8, 114998–115004 (2020)
6. Wang, T.: Big data needs thick data. Ethnography Matters 13 (2013). https://medium.com/ethnography-matters/why-big-data-needs-thick-data-b4b3e75e3d7
7. Der, J.: What are thick data? Medium.com. Accessed 5 Nov 2017, https://medium.com/@jder00/what-are-thick-data-6ed5178d1dd
8. Grosjean, S., Mallowan, M., Marcon, C.: Methods and strategies of information management by organizations: from big data to "thick data". In: ACFAS Congress, 11–12 May 2017 (2017). https://www.acfas.ca/evenements/congres/programme/85/400/405/c?ancre=522
9. Fiaidhi, J., Mohammed, S., Fong, S.S.: Orchestration of thick data analytics based on conversational workflows in healthcare community of practice. In: IEEE Big Data 2020 Conference, 3rd SI on HealthCare Data, 10–13 December 2020 (2020)
10. Fiaidhi, J., Mohammed, S.: Submitted to the 2020 WS-9 SAC Symposium on e-Health, IEEE International Conference on Communications (IEEE ICC 2021), Montreal, Canada, 14–18 June 2021 (2021)
11. Zhao, J., Zhang, Y., He, X., Xie, P.: COVID-CT-dataset: a CT scan dataset about COVID-19. arXiv preprint arXiv:2003.13865 (2020)
12. Feng, S., Zhou, H., Dong, H.: Using deep neural network with small dataset to predict material defects. Mater. Des. 162, 300–310 (2019)
13. Figueroa-Mata, G., Mata-Montero, E.: Using a convolutional siamese network for image-based plant species identification with small datasets. Biomimetics 5(1), 8 (2020)
14. Li, M.D., et al.: Automated assessment and tracking of COVID-19 pulmonary disease severity on chest radiographs using convolutional siamese neural networks. Radiol. Artif. Intell. 2(4), e200079 (2020)
15. Imani, M.: Automatic diagnosis of coronavirus (COVID-19) using shape and texture characteristics extracted from X-Ray and CT-Scan images. Biomed. Signal Process. Control 68, 102602 (2021)
16. Mohammad, S., Hossain, M.S.: MetaCOVID: a siamese neural network framework with contrastive loss for n-shot diagnosis of COVID-19 patients. Pattern Recogn. 113, 107700 (2021)
17. Zhao, J., Zhang, Y., He, X., Xie, P.: COVID-CT-Dataset: a CT scan dataset about COVID-19. arXiv preprint arXiv:2003.13865 (2020)
18. Eduardo, S., Angelov, P., Biaso, S., Froes, M.H., Abe, D.K.: SARS-CoV-2 CT-scan dataset: a large dataset of real patients CT scans for SARS-CoV-2 identification. medRxiv (2020)
19. Mishra, A.K., Das, S.K., Roy, P., Bandyopadhyay, S.: Identifying COVID19 from chest CT images: a deep convolutional neural networks based approach. J. Healthcare Eng. 2020, 1–7 (2020)
20. Silva, P., et al.: COVID-19 detection in CT images with deep learning. Inf. Med. Unlocked 20, 100427 (2020)
21. Shorten, C., Khoshgoftaar, T.M.: A survey on image data augmentation for deep learning. J. Big Data 6(1), 1–48 (2019)
22. Casado-García, Á., et al.: CLoDSA: a tool for augmentation in classification, localization, detection, semantic segmentation and instance segmentation tasks. BMC Bioinf. 20(1), 1–14 (2019)
23. Yun, S., Han, D., Oh, S.J., Chun, S., Choe, J., Yoo, Y.: CutMix: regularization strategy to train strong classifiers with localizable features. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6023–6032 (2019)
24. Kiela, D., Bottou, L.: Learning image embeddings using convolutional neural networks for improved multi-modal semantics. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 36–45 (2014)
25. Carvalho, A.R.S., et al.: COVID-19 chest computed tomography to stratify severity and disease extension by artificial neural network computer-aided diagnosis. Front. Med. 7 (2020)
26. Hsu, C.-C., Lin, C.-W., Weng-Tai, S., Cheung, G.: SiGAN: siamese generative adversarial network for identity-preserving face hallucination. IEEE Trans. Image Process. 28(12), 6225–6236 (2019)
27. Subsoontorn, P., Lohitnavy, M., Kongkaew, C.: The diagnostic accuracy of isothermal nucleic acid point-of-care tests for human coronaviruses: a systematic review and meta-analysis. Sci. Rep. 10(1), 1–13 (2020)
28. Dinnes, J., Deeks, J.J., Berhane, S.: How accurate are rapid tests for diagnosing COVID-19? Cochrane Podcast. Accessed 24 Mar 2021, https://www.cochrane.org/CD013705/INFECTN_how-accurate-are-rapid-tests-diagnosing-covid-19
29. Francone, M., et al.: Chest CT score in COVID-19 patients: correlation with disease severity and short-term prognosis. Eur. Radiol. 30(12), 6808–6817 (2020). https://doi.org/10.1007/s00330-020-07033-y
30. Wang, T.: We need to invest in socialware just as much as we invest in hardware. Accessed 30 Aug 2021, https://www.triciawang.com/about
31. Wang, T.: Big data needs thick data. Ethnogr. Matters 13 (2013)
How Does the Extended Promotion Period Improve Supply Chain Efficiency? Evidence from China's Online Shopping Festival
Yang Chen and Hengyu Liu(B)
Department of Management Science and Engineering, School of Economics and Management, Beijing University of Posts and Telecommunications, Beijing, China
{2017212056,hengyu_liu}@bupt.edu.cn
Abstract. In 2020, the “Double Eleven” shopping festival, the world’s largest online shopping festival, extended its promotion period from 1 day to 20 days for the first time, topping nearly $74.1 billion in sales. Extending the promotion period can have a profound impact on supply chain efficiency, e.g., sales and returns. On the one hand, an extended promotion period facilitates the public praise effect to produce stronger purchase power in the market, thus increasing sales. On the other hand, an extended promotion period can effectively reduce returns as it enables consumers to make more rational purchase decisions. In this work, we analyze the impact of the extended promotion period on supply chain efficiency. More specifically, we first develop a modified susceptible infected recovered (SIR) model to derive the total number of consumers who are willing to participate in the online shopping festival, which is time dependent. Then, we use the consumer utility model to calculate the respective numbers of consumers who purchase and return during the extended promotion period. Moreover, we numerically analyze the impacts of different parameters on the platform’s supply chain efficiency during the online shopping festival. Finally, a case study based on Taobao’s “Labor Day” online shopping festival in 2021 is conducted to derive more managerial insights from the analytical findings. Our results suggest that extending the promotion period can increase the Taobao’s sales revenue by 37.9% and can decrease its costs due to returns by 9%. Keywords: Promotion period · Supply chain efficiency · Online shopping festival · Modified SIR model
1 Introduction
Since the release of China's first five-year e-commerce development plan in 2007, the country's e-commerce has moved from rapid growth to a new stage of high-quality development. In 2019, China's e-tail sales reached 10.6 trillion yuan, completing the 10 trillion yuan target set in the 13th Five-Year Plan one year ahead of schedule and making China the world's largest e-tail market for many years in a row. The development of e-commerce has given rise to various online shopping festivals, such as "Double Eleven"
and “June 18”. The “Double Eleven”, created in 2009, for example, grew from an initial 50 million yuan to 268.4 billion yuan in 2019, an increase of more than 5,000 times. It is noteworthy that although the sales volume of “Double Eleven” has been rising year after year, its growth rate has been slowing down. As shown in Fig. 1, from 2016 to 2019, the growth rates of sales during Taobao and Tmall’s “Double Eleven” were below 0.5 percentage points.
Fig. 1. Taobao and Tmall “Double Eleven” sales from 2009 to 2019
One of the main reasons for the slowing sales growth of online shopping festivals is that the promotion period is too short, normally limited to one day. On the one hand, the short promotion period creates great "buying time pressure" for consumers. As a result, many consumers blindly buy a large number of products because they worry about missing the discounts, and the transaction volume of e-commerce platforms surges on the promotion day. However, in addition to the extended logistics time caused by the limited processing capacity of the courier industry, consumers often change their minds after purchase and return their impulsively purchased products. According to Xinhua News Agency, the return rate of daily online shopping is nearly 10%, while it can reach about 30% during the "Double Eleven". On the other hand, some consumers are unable to fully capture all kinds of promotional information on the platform because the promotion period is too short; therefore, the online shops are unable to make full use of the "public praise effect" to broaden their potential buying base. In 2020, for the first time, China's major e-commerce platforms (e.g., Taobao, Tmall, Jingdong) extended the promotion period to more than 20 days during the "Double Eleven". Jingdong, for example, started its "Double Eleven" campaign on October 21 and ran it through November 13, a total of 24 days. In the end, the sales during the "Double Eleven" reached 498.2 billion yuan for Taobao and Tmall and 271.5 billion yuan for Jingdong, an increase of 85.5% and 32.8% year-over-year, respectively. In addition to the significant increase in sales, consumers' purchase and logistics experience improved considerably. The extension of the promotion period enabled the online shops to use presale data to lay out logistics and warehousing in advance and to start delivering orders when the event officially started, and many consumers experienced "next-day" logistics service. Moreover, because consumers have sufficient time to compare prices and learn about product information, and most of the promotional products require prepayment, consumers' impulse purchases are effectively reduced, thus
relieving the logistics pressure during the promotion period and helping reduce the platforms' return rate (Table 1).

Table 1. The schedule of Jingdong's "Double Eleven" in 2020

Pre-sale period    Oct 21, 2020 00:00:00 - Oct 31, 2020 23:59:59
Special period     Nov 1, 2020 00:00:00 - Nov 8, 2020 23:59:59
Climax period      Nov 9, 2020 00:00:00 - Nov 11, 2020 23:59:59
Renewal period     Nov 12, 2020 00:00:00 - Nov 13, 2020 23:59:59
Obviously, the extension of the online shopping festival promotion period not only reduces the platforms' return rates by reducing consumers' impulse consumption but also helps to fully utilize the "word-of-mouth effect" of consumers to increase sales. Both the return rates (or return costs) and sales revenues are important indicators of the supply chain efficiency of an online shopping festival. Questions about how the length of the promotion period affects the number of buyers and returns, how it affects the supply chain efficiency of an online shopping festival, and what the conversion relationships among different consumer categories are remain to be answered. Many studies have been conducted to investigate the impact of promotions on online shopping. For example, Jiang et al. (2015) demonstrated that online price promotion and production recommendations should be jointly considered in a bid to increase the platform's profit. Liu et al. (2015) analyzed the impact of online purchaser segmentation on promotion strategies. Yan et al. (2016) studied unplanned consumption and its influencing factors based on the SOR model and theories of self-regulation. Li and Du (2017) proposed a framework to identify opinion leaders and to maximize the dissemination of promotion messages by analyzing existing microblogs. Kim and Krishnan (2019) built a hidden Markov model and estimated it with individual-level transaction data collected from a premier online retailer. They found that price promotion can strengthen consumers' loyalty only for consumers who are at least moderately loyal, but it is not effective at changing non-loyal consumers' attitudes toward an online retailer. Recently, Xiao et al. (2020) introduced a latent factor model to build a time sensitivity-based predictive model to seek popularity on Twitter, which enables the platform to conduct more accurate online promotions. Based on the real historical data of VIP.com, Zhou et al. (2020) utilized machine learning to investigate the main elements that influence the sales of different brands of products. However, there are few studies analyzing how the length of the promotion period affects supply chain efficiency. In this paper, we analyze the above problems by using mathematical modeling, numerical experiments, and a case study. More specifically, we first use the modified susceptible infected recovered (SIR) model to characterize the dynamic conversion relationships among different consumer categories (i.e., potential consumers, repeat buyers, public praise spreaders, and festival quitters) during the promotion period, and analyze the purchase rate and return rate of the consumers based on the consumer utility function. Next, we numerically analyze the impacts of the potential consumer conversion rates,
propagation conversion rate, and exit rates on supply chain efficiency. Finally, we conduct a case study on Taobao’s “Labor Day” online shopping festival in 2021 to study the impact of the extended promotion period on Taobao’s supply chain efficiency. The results of the case study show that extending the promotion period can increase Taobao’s sales revenue by approximately 37.9% and can reduce its return costs by approximately 9%.
2 Model Setup
2.1 The Number of Consumers Participating in Promotions
Considering the limitations of information dissemination media, the promotion information cannot reach every potential consumer in time. As time passes, the promotion information will eventually spread among the consumers. Moreover, some of the purchasing consumers will repeatedly purchase (i.e., repeat buyers) and also play the role of "promotion information disseminators" (i.e., public praise spreaders), prompting more consumers to join in the promotion activities. In addition, as the promotion period comes to an end, some consumers will eventually quit the purchasing and promotion groups (i.e., festival quitters). Therefore, the number of customers participating in an online shopping festival changes dynamically during the promotion period. Many scholars have used the SIR model to characterize information dissemination, for example, Bass (1994, 2004) and Chan (2011). The SIR model usually classifies the target population into three categories, i.e., susceptible, infected, and recovered. In this paper, these three groups represent the potential consumers, repeat buyers, and festival quitters, respectively. Specifically, these three categories of consumers are interchanged over time. However, the SIR model fails to reflect the "word-of-mouth effect", which is especially important for online shops during the promotion period. Therefore, this paper modifies the SIR model (i.e., the modified SIR model) by considering the "public praise spreaders". To this end, the consumers in the modified SIR model can be subdivided into four categories, i.e., potential consumers, repeat buyers, public praise spreaders, and festival quitters. The conversion relationships among the four are as follows (see Fig. 2):
• All consumers on the e-commerce platform who have not yet received the promotion information at moment t can be viewed as potential consumers, denoted as S(t), and the number of potential consumers at the beginning of the promotion period is assumed to be S(0) = n − k, where n is a constant indicating the total number of consumers on the e-commerce platform and k indicates the initial number of public praise spreaders (e.g., previous consumers). Potential consumers will convert into repeat buyers or public praise spreaders with conversion rates (i.e., potential consumer conversion rates) of α1 and α2, respectively, where 0 ≤ αi ≤ 1, i = 1, 2.
• The repeat buyers on the e-commerce platform at moment t are denoted as I(t), and we assume I(0) = 0, i.e., no consumers participate in the festival at the beginning of the promotion. At moment t, a portion of the repeat buyers will withdraw from purchasing; among them, the consumers with ratio γ1 (i.e., exit rate) will no longer disseminate the promotion information but convert into festival quitters, denoted as R(t), while the consumers with ratio β will convert into public praise spreaders, denoted as T(t). Assume that R(0) = 0 and T(0) = k. Therefore, γ1 ≥
0, β ≥ 0, γ1 + β ≤ 1, and I(t) • (1 − γ1 − β) measures the repeat buyers at moment t. In addition, at moment t, public praise spreaders with the ratio of γ2 (0 ≤ γ2 ≤ 1) will also convert into festival quitters.
Fig. 2. Conversion among four consumers categories at moment t
Based on the above analysis, we can derive the following differential equations to characterize the conversion relationships among the four consumer categories at moment t:

dS(t)/dt = −α1 S(t)T(t)/n − α2 S(t)I(t)/n    (1)

dI(t)/dt = α1 S(t)T(t)/n + α2 S(t)I(t)/n − (β + γ1)I(t)    (2)

dT(t)/dt = βI(t) − γ2 T(t)    (3)

dR(t)/dt = γ1 I(t) + γ2 T(t)    (4)

S(0) = n − k, I(0) = 0, T(0) = k, R(0) = 0,    (5)

where Eq. (5) gives the initial conditions.
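For readers who want to trace the dynamics numerically, a minimal sketch of integrating Eqs. (1)–(5) with SciPy is given below. The parameter values are placeholders drawn from the ranges used later in Sect. 3, and the initial number of spreaders k is an illustrative assumption rather than an estimated quantity.

import numpy as np
from scipy.integrate import solve_ivp

def modified_sir(t, y, n, alpha1, alpha2, beta, gamma1, gamma2):
    # y = (S, I, T, R): potential consumers, repeat buyers, spreaders, quitters
    S, I, T, R = y
    dS = -alpha1 * S * T / n - alpha2 * S * I / n
    dI = alpha1 * S * T / n + alpha2 * S * I / n - (beta + gamma1) * I
    dT = beta * I - gamma2 * T
    dR = gamma1 * I + gamma2 * T
    return [dS, dI, dT, dR]

n, k = 8e8, 1e6                                 # total consumers; k is illustrative
params = (n, 0.75, 0.75, 0.55, 0.125, 0.125)    # alpha1, alpha2, beta, gamma1, gamma2
sol = solve_ivp(modified_sir, (0, 11), [n - k, 0.0, k, 0.0],
                args=params, dense_output=True)
S_t, I_t, T_t, R_t = sol.sol(np.linspace(0, 11, 111))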
2.2 Purchase and Return Rates
In this subsection, we use the consumer utility function to derive the "purchase rate" and "return rate" of consumers during the promotion period. Many works (e.g., Sriram et al. 2010 and Ursu 2018) use the perceived value to evaluate consumers' purchasing behavior. Specifically, the perceived value comprehensively evaluates the benefits and costs of acquiring a product or service. Denote by x the number of potential consumers on an e-commerce platform and assume it follows a CDF F(x) and a PDF f(x). Moreover, assume a consumer's valuation of the product V follows a uniform distribution on the interval (\underline{V}, \overline{V}), and assume a consumer's perceived difference D of the product after
receiving it follows a uniform distribution on the interval (\underline{D}, \overline{D}), where \underline{D} < 0. Therefore, the comprehensive perceived value of a product can be expressed as T = V + D, and its joint PDF, g(V, D), can be expressed as

g(V, D) = 1/[(\overline{V} − \underline{V})(\overline{D} − \underline{D})] if \underline{V} < V < \overline{V} and \underline{D} < D < \overline{D}, and g(V, D) = 0 otherwise.    (6)

Assume that the original price of the goods is p and its discount during the promotion period is c (1/2 < c < 1). Then, a consumer will purchase the product if his perceived value exceeds the discounted price, i.e., V ≥ cp. Therefore, a consumer's purchase rate can be expressed as

W1(c) = ∫_{\underline{D}}^{\overline{D}} ∫_{cp}^{\overline{V}} 1/[(\overline{V} − \underline{V})(\overline{D} − \underline{D})] dV dD = (\overline{V} − cp)/(\overline{V} − \underline{V})    (7)
To better characterize a consumer's motivation for returning a product, assume that an alternative product B is available to him when purchasing product A, and that both are sold at the discounted price cp. In addition, we assume that when the consumer returns product A, product B has been restored to its original price p, i.e., the promotion period has finished. Therefore, a consumer's opportunity cost of returning a product is p(1 − c), indicating that the utility of a consumer's return behavior is cp − p(1 − c) = 2cp − p. In this case, a consumer will return the product if V + D < 2cp − p. Then we can derive a consumer's return rate as

W2(c) = ∫_{\underline{D}}^{\overline{D}} ∫_{\underline{V}}^{(2c−1)p−D} g(V, D) dV dD = [(c − 1)p − \underline{D}]² / [2(\overline{V} − \underline{V})(\overline{D} − \underline{D})]    (8)
By Eqs. (7) and (8) and combining the results derived in Subsect. 2.1, we can calculate the continuously changing numbers of consumers that purchase and return during the promotion period.
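As a quick illustration, the closed forms in Eqs. (7) and (8) are straightforward to evaluate in code. In the sketch below the bounds of the uniform distributions and the discount level are illustrative placeholders, not calibrated values.

def purchase_rate(c, p, v_lo, v_hi):
    # Eq. (7): share of consumers whose valuation V exceeds the discounted price cp
    return (v_hi - c * p) / (v_hi - v_lo)

def return_rate(c, p, v_lo, v_hi, d_lo, d_hi):
    # Eq. (8): closed form for the share of consumers whose comprehensive perceived
    # value falls below the return threshold 2cp - p
    return ((c - 1) * p - d_lo) ** 2 / (2 * (v_hi - v_lo) * (d_hi - d_lo))

p, c = 400, 0.8                            # original price and discount (illustrative)
print(purchase_rate(c, p, 0, p))           # valuation assumed uniform on (0, p)
print(return_rate(c, p, 0, p, -c * p, 0))  # perceived difference uniform on (-cp, 0)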
3 Sensitivity Analysis The conversion relationships among the four consumers categories during the promotion period have been portrayed in the previous section, which are closely related to the potential consumers conversion rates (α1 and α2 ), the propagation conversion rate (β), and the exit rates (γ1 and γ2 ) of the consumers. In this section, we shall use numerical studies to analyze how these parameters affect these four consumers categories as well as the platform’s supply chain efficiency. 3.1 Parameters Setting In 2020, the payment time of the “Double Eleven” on multiple e-commerce platforms was limited to November 1 to November 11, a total of 11 days, so we set the value
of t between 0 and 11. Moreover, according to news reports, a total of more than 800 million consumers participated in this online shopping festival in 2020, so we set n = 800,000,000. In addition, the news reported that the sales on November 1, 2020, exceeded those of the whole day on November 11, 2019, which amounted to $268,444,058,381 when more than 600 million consumers participated in the one-day "Double Eleven". Then we can calculate the average expenditure of the consumers as $406. Furthermore, we set α1 = α2 and vary them between 0.6–0.9, set γ1 = γ2 and vary them between 0.05–0.2, and vary β between 0.4–0.7. The specific parameter settings of the numerical studies in this section are given in Table 2.

Table 2. Parameters setting

          α1 = α2   β      γ1 = γ2
Study 1   0.6       0.55   0.125
          0.7       0.55   0.125
          0.8       0.55   0.125
          0.9       0.55   0.125
Study 2   0.75      0.4    0.125
          0.75      0.5    0.125
          0.75      0.6    0.125
          0.75      0.7    0.125
Study 3   0.75      0.55   0.05
          0.75      0.55   0.1
          0.75      0.55   0.15
          0.75      0.55   0.2
3.2 The Impact of Potential Consumers Conversion Rates In this subsection, we investigate the impact of the potential consumers conversion rates on supply chain efficiency. Specifically, we vary α1 and α2 between 0.6 and 0.9 with an interval 0.1, and set β = 0.55 and γ1 = γ2 = 0.125. The results are shown in Fig. 3. In particular, the four sets of curves in different colors represent the four consumer categories, i.e., blue for repeat buyers, orange for potential consumers, green for festival quitters, and red for public praise spreaders. As shown in Fig. 3, as the promotion period extends, the number of repeat buyers decreases but at a slower rate. This is because, for a fixed potential consumers conversion rate, the number of potential consumers decreases in this case, and the number of potential consumers joining in the online shopping festival is much smaller than that quitting out the festival. In addition, as the number of repeat buyers continues to decrease, the number of quitting consumers further shrinks, so the rate of decline slows down. Figure 3 also
shows that the number of public praise spreaders first increases and then decreases as the promotion period extends. This trend is explained by the interplay between the decrease in the number of repeat buyers and the increase in the number of public praise spreaders. As a result, more (fewer) consumers join in (quit) the online shopping festival in this case.

Fig. 3. Impact of potential consumers conversion rate

Table 3 below reports the numbers of repeat buyers, public praise spreaders, and the total sales of the platform under different potential consumer conversion rates. The results show that the impacts of the conversion rates on the three indexes are not significant. More specifically, the sales at a rate of 0.9 are only approximately 0.4% higher than those at a rate of 0.6. This is because the extended promotion period can fully convert the potential consumers into repeat buyers.

Table 3. Platform supply chain efficiency under different consumption conversion rates

α1 = α2   Number of repeat buyers   Number of public praise spreaders   Total sales (yuan)
0.6       1,181,306,336             3,793,881,002                       550,488,752,533
0.7       1,183,256,890             3,807,163,585                       551,397,710,748
0.8       1,184,282,113             3,816,988,308                       551,875,464,571
0.9       1,184,790,546             3,824,337,600                       552,112,394,518
3.3 The Impact of Propagation Conversion Rate In this subsection, we study the impact of the propagation conversion rate on supply chain efficiency. Specifically, we vary β between 0.4 and 0.7 with an interval 0.1, and set α1 = α2 = 0.75 and γ1 = γ2 = 0.125. The results are shown in Fig. 4. We can observe the similar trends of the four consumers categories in Fig. 4 to those in Fig. 3. Also, we calculate the numbers of repeat buyers, public praise spreaders and the total sales of the platform under different propagation conversion rates. The results show that the total sales increase only approximately 0.25% when the propagation
conversion rate increases from 0.4 to 0.6, indicating the weak impact of the propagation conversion rate on sales revenue. It is worth noting that the total number of public praise spreaders increases markedly as this rate increases. Specifically, the number of spreaders increases by approximately 33.7% as this rate increases from 0.4 to 0.6. The above analyses indicate that a small increase in the propagation conversion rate greatly benefits the dissemination of promotion information among potential consumers, thus stimulating sales on the platform. Therefore, the platform has an incentive to reward the promotion information disseminators, i.e., public praise spreaders. In fact, the platforms have developed reward mechanisms, such as mini-games and friend invitation rewards, to facilitate promotion information dissemination (Table 4).

Fig. 4. Impact of propagation conversion rate

Table 4. Platform supply chain efficiency under different propagation conversion rates

β     Number of repeat buyers   Number of public praise spreaders   Total sales (yuan)
0.4   1,181,712,624             2,773,785,001                       550,678,082,570
0.5   1,183,857,302             3,479,697,222                       551,677,502,565
0.6   1,184,734,021             4,185,441,713                       552,086,053,998
0.7   1,142,700,294             4,729,036,891                       532,498,337,189
3.4 The Impact of Exit Rates
Finally, we analyze the impact of the exit rates on supply chain efficiency. In particular, we vary γ1 and γ2 between 0.05 and 0.2 with an interval of 0.05, and set α1 = α2 = 0.75 and β = 0.55. The results are shown in Fig. 5. Again, the trends of the four consumer categories in Fig. 5 are similar to those in Figs. 3 and 4. Moreover, Table 5 summarizes the numbers of repeat buyers, public praise spreaders, and the total sales of the platform under different exit rates. Table 5 shows that changes in the exit rates have great impacts on these three indexes. In particular, as the exit rates increase from 0.05 to 0.2, the platform's total sales decrease
by approximately 20%. This can be explained by the fact that a higher exit rate means more public praise spreaders convert into festival quitters. The above analysis also explains why the e-commerce platforms try to minimize consumers' exit rates by setting up multistage purchase processes such as deposits and final payments, since a high exit rate will result in poor sales revenues.

Fig. 5. Impact of exit rates

Table 5. Platform supply chain efficiency under different exit rates

γ1 = γ2   Number of repeat buyers   Number of public praise spreaders   Total sales (yuan)
0.05      1,333,208,345             5,823,365,778                       621,275,088,938
0.1       1,230,454,676             4,393,914,442                       573,391,879,194
0.15      1,141,932,535             3,363,882,439                       532,140,561,479
0.2       1,064,633,963             2,617,480,395                       496,119,426,599
4 Case Study
To stimulate the consumer market during the COVID-19 epidemic, the Chinese government extended the Labor Day holiday from three days to five days in 2021. In this case, the major e-commerce platforms in China, such as Jingdong, Tmall, and Taobao, launched their promotions (e.g., presale activities) about two days before the holiday, and the whole promotion period lasted for more than seven days. We shall conduct a case study on Taobao's promotion activities during the Labor Day holiday to examine whether the extended promotion period can help increase its sales and reduce return costs. In this section, we first explain the data sources and parameter settings for the case study; then we derive the potential consumer conversion rates, propagation conversion rate, and exit rates based on the collected data; finally, by setting the supply chain efficiency
indexes of the 2019 "Double Eleven" as the benchmark, we analyze the impact of the extended promotion period on the corresponding efficiency of Taobao.
4.1 Data Sources and Parameters Setting
In this section, we use the selenium package in Python to crawl Taobao.com for three types of data during its "Labor Day" online shopping festival in 2021, i.e., the number of collections, product sales, and product reviews. The three types of data reflect the popularity of products, the actual buyers, and consumers' awareness of promotion information, respectively. Further, the potential consumer conversion rates and the propagation conversion rate can be approximated by the following equations:
αi = product sales/collections, i = 1, 2
(9)
β = product reviews/product sales.
(10)
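A minimal sketch of this estimation step is shown below. The file name and column names are hypothetical, and counts reported with a trailing "+" are replaced by a random draw from (n, 2n), in line with the assumption described next.

import re
import numpy as np
import pandas as pd

rng = np.random.default_rng(2021)
df = pd.read_csv("taobao_labor_day_crawl.csv")   # hypothetical crawl output

def parse_count(value):
    # "5000+" -> uniform draw in (5000, 10000); plain numbers pass through unchanged
    m = re.fullmatch(r"(\d+)\+", str(value).strip())
    return rng.uniform(int(m.group(1)), 2 * int(m.group(1))) if m else float(value)

for col in ["collections", "sales", "reviews"]:
    df[col] = df[col].map(parse_count)

alpha = (df["sales"] / df["collections"]).mean()   # Eq. (9), taken as alpha_1 = alpha_2
beta = (df["reviews"] / df["sales"]).mean()        # Eq. (10)
print(round(alpha, 2), round(beta, 2))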
The case study crawled data for a total of 509 brand stores, mainly in the snack sector. Since some of the data are reported in the form of "n+", we assume that the real sales take a random value between (n, 2n). Moreover, by setting α1 = α2 as in Sect. 3, we obtain the potential consumer conversion rates of the crawled products as 0.58 and the propagation conversion rate as 0.33. Considering that the "Double Eleven" online shopping festival in 2019 had not yet extended the promotion period, the case study will use the 2019 network-wide "Double Eleven" data as the benchmark. Specifically, based on the press data, approximately 660,000,000 consumers participated in the 2019 "Double Eleven" with total sales of $268,444,058,381, i.e., the average expenditure per customer was $406. Since the exit rates during the promotion period are not available, we will use the exit rates set in the sensitivity analysis, i.e., γ1 = γ2 = 0.125.
4.2 The Impact of Extended Promotion Period on Sales Revenue
We first analyze the impact of the extended promotion period on Taobao's sales revenue during its "Labor Day" online shopping festival in 2021. The results are shown in Fig. 6. We can observe from Fig. 6 that as the promotion period extends, the number of public praise spreaders first increases and then decreases, as analyzed in Sect. 3. Therefore, the maximum sales revenue is achieved when the number of public praise spreaders reaches its maximum (e.g., the sixth day in Fig. 6). We can derive the total number of repeat buyers on the sixth day as 909,315,954, and the corresponding sales revenue is 370,091,593,278 yuan. Then, by comparing this sales revenue to that of the 2019 "Double Eleven", we can evaluate the impact of the extended promotion period on sales revenue as follows:

(370,091,593,278 − 268,444,058,381) / 268,444,058,381 = 37.9%    (11)

In particular, Eq. (11) shows that extending the promotion period can increase Taobao's sales revenue during its "Labor Day" online shopping festival in 2021 by 37.9%.
Fig. 6. The impact of extended promotion period on Taobao’s sales revenue
4.3 The Impact of Extended Promotion Period on Return Costs
Now we analyze the impact of the extended promotion period on Taobao's return costs during its "Labor Day" online shopping festival in 2021. According to the analysis in Sect. 2.2, we can calculate the platform's return rate by Eq. (8). For analytical simplicity, we assume the lower bound of the consumers' perceived value to be 0, i.e., the product brings no value to the consumers, and the upper bound of this value to equal the product's original price p, i.e., consumers are not willing to pay more than the original price. In addition, we assume that the minimum value of the consumer's perceived difference is −cp, and its maximum value is 0. In the 2019 "Double Eleven", in addition to price discounts, many e-commerce platforms implemented incentives such as shops' full discounts of 400-20, store coupons of 400-20, and shopping allowances of 400-50. To this end, we set the original price of the product to be 400 yuan; after deducting the discounts, the final sales price is 400 − 20 − 20 − 50 = 310 yuan. Finally, by Eq. (8), we can derive the return rate of Taobao during its "Labor Day" online shopping festival in 2021 as 19.8%, while the return rate of the 2019 "Double Eleven" was 30% based on the report by Xinhua News Agency. To this end, we can compare Taobao's respective return costs in the two online shopping festivals as follows:

(80,533,217,514.3 − 73,278,135,469.0) / 80,533,217,514.3 = 9.0%    (12)
Specifically, Eq. (12) shows that extending the promotion period can decrease Taobao's return costs for its "Labor Day" online shopping festival by 9.0%.
5 Conclusions
China has the world's largest e-commerce market, and the "Double Eleven" has grown into the world's largest online shopping festival. In the 2020 "Double Eleven", China's major e-commerce platforms such as Jingdong, Tmall, and Taobao extended their one-day-only promotions to more than 20 days. To analyze the impact of the extended promotion period on the platform's supply chain efficiency during the online shopping festivals,
we set up a modified SIR model to characterize the dynamic conversion relationships among the four consumer categories, i.e., potential consumers, repeat buyers, public praise spreaders, and festival quitters, over the promotion period. Next, we analyze the consumers' purchase rate and return rate based on the consumer utility function, which in turn enables us to calculate the platform's sales revenue and return costs. After that, three groups of numerical experiments are conducted to analyze the impacts of the potential consumer conversion rates, propagation conversion rate, and exit rates on the supply chain efficiency. The experimental results show that as the promotion period extends: (1) changes in the potential consumer conversion rates do not significantly affect the numbers of repeat buyers and public praise spreaders or the platform's sales revenue; (2) an increase in the propagation conversion rate does not obviously increase the number of repeat buyers or the sales revenue, but it does increase the total number of public praise spreaders; and (3) changes in the exit rates have significant impacts on all three indexes. Finally, we conduct a case study on Taobao's "Labor Day" online shopping festival and demonstrate that the extended promotion period can increase Taobao's sales revenue by approximately 37.9% and can reduce its return costs by approximately 9%. There are several interesting extensions of this work. First, we use sales revenue and return costs to measure the supply chain efficiency of the e-commerce platform. It would be interesting to consider other supply chain efficiency indicators (e.g., the delivery time) in our models. Second, to facilitate the analysis, we assume that consumers' product preferences follow a uniform distribution. It would be interesting to analyze a more generalized form of distribution to characterize this preference. Third, in addition to price discounts, e-commerce platforms offer various forms of promotions to attract consumers, e.g., free shipping, free freight insurance, and rebates. It would be interesting to analyze how these promotions affect e-commerce platforms' supply chain efficiency.
Acknowledgment. This work was supported in part by the National Natural Science Foundation of China under grant number 72001028, and the Fundamental Research Funds for the Central Universities under grant number 2020RC32.
References
Eason, G., Noble, B., Sneddon, I.N.: On certain integrals of Lipschitz-Hankel type involving products of Bessel functions. Phil. Trans. Roy. Soc. Lond. Ser. A Math. Phys. Sci. 247(935), 529–551 (1955)
Bass, F.: Comments on "a new product growth for model consumer durables the Bass model." Manag. Sci. 50(12_supplement), 1833–1840 (2004)
Bass, F.M., Krishnan, T.V., Jain, D.C.: Why the Bass model fits without decision variables. Mark. Sci. 13(3), 203–223 (1994)
Chan, J.W.: Enhancing organizational resilience: application of viable system model and MCDA in a small Hong Kong company. Int. J. Prod. Res. 49(18), 5545–5563 (2011)
Jiang, Y., Shang, J., Liu, Y., May, J.: Redesigning promotion strategy for e-commerce competitiveness through pricing and recommendation. Int. J. Prod. Econ. 167, 257–270 (2015)
Kim, Y., Krishnan, R.: The dynamics of online consumers' response to price promotion. Inf. Syst. Res. 30(1), 175–190 (2019)
Li, F., Du, T.C.: Maximizing micro-blog influence in online promotion. Expert Syst. Appl. 70, 52–66 (2017)
Liu, Y., Li, H., Peng, G., Lv, B., Zhang, C.: Online purchaser segmentation and promotion strategy selection: evidence from Chinese E-commerce market. Ann. Oper. Res. 233(1), 263–279 (2015). https://doi.org/10.1007/s10479-013-1443-z
Sriram, S., Chintagunta, P.K., Agarwal, M.K.: Investigating consumer purchase behavior in related technology product categories. Mark. Sci. 29(2), 291–314 (2010)
Ursu, R.M.: The power of rankings: quantifying the effect of rankings on online consumer search and purchase decisions. Mark. Sci. 37(4), 530–552 (2018)
Xiao, C., Liu, C., Ma, Y., Li, Z., Luo, X.: Time sensitivity-based popularity prediction for online promotion on Twitter. Inf. Sci. 525, 82–92 (2020)
Yan, Q., Wang, L., Chen, W., Cho, J.: Study on the influencing factors of unplanned consumption in a large online promotion activity. Electron. Commer. Res. 16(4), 453–477 (2016). https://doi.org/10.1007/s10660-016-9215-x
Zhou, Y.-W., Chen, C., Zhong, Y., Cao, B.: The allocation optimization of promotion budget and traffic volume for an online flash-sales platform. Ann. Oper. Res. 291(1–2), 1183–1207 (2020). https://doi.org/10.1007/s10479-018-3065-y
Pricing Strategy of Dual-Channel Supply Chain for Alcoholic Products with Platform Subsidy
Dongyan Chen1(B), Yi Zhang2, and Shouting Zhao2
1 Beijing Wuzi University, Beijing, China
2 Beijing Wuzi University (Purchasing Management), Beijing, China
[email protected]
Abstract. The manufacturers in the alcohol industry used to sell products using the traditional channel, i.e., the products are sold to end customers through different wholesalers and retailers. With the increasing development of China’s e-commerce retailing, the manufacturers in the alcohol industry are trying to open direct online channels based on third-party online platforms, e.g., JD, Buy Together, Tmall, etc., to take advantage of online channel. However, the direct online channel and the traditional channel will compete with each other, which brings channel conflict. Besides, platform subsidy may also affect the dual pricing strategy. The existing studies cannot solve the research problem faced in this paper. Then we explore the effect of platform subsidy on pricing strategy and the channel conflict of dual-channel supply chain for alcoholic products. We use the Stackelberg game model to solve the optimal pricing strategy under two scenarios, e.g., the manufacturer implements single traditional channel, and the manufacturer implements dual channel with platform subsidy. Taking A as an example, the paper explores the effect of different factors on the optimal dual-channel pricing decisions and profit of manufacturers under the two scenarios. The results show that manufacturers can achieve more profit from dual channel. And with the increase of the platform subsidy and the proportion of subsidies borne by manufacturers, manufacturers’ online direct sales prices will increase. Also the platform subsidy intensifies the channel conflict. Keywords: Platform subsidy · Dual-channel supply chain · Game
1 Introduction
With online shopping becoming an essential form of consumption, the liquor industry is no longer bound to traditional offline marketing and has begun to expand to online marketing. In 2019, the core commodities of the mid-year promotions of various e-commerce platforms gradually became alcoholic products, which achieved remarkable sales results. We learned from JD's 618 liquor product sales data that on June 18, only one minute after the opening of the sale, Wuliangye's sales on the JD e-commerce platform had exceeded 5 million yuan. And 30 min after the opening of the sale, the total sales of liquor products on the JD e-commerce platform had increased to five times those of the same period last year. According to Nielsen monitoring data, in 2019, a total of 34
domestic online and offline merchants of liquor products saw their sales revenue increase by seven percent. Among them, the sales revenue of online merchants increased by 26 percent, which is thirteen times the offline increase. Therefore, it is expected that more and more offline merchants will move online, and the sales profit of online merchants will continue to grow as e-commerce platforms continue to develop and expand their influence. The paper is motivated by the dual-channel strategy of traditional liquor production company A in China, which has a history of more than 800 years. With the growing scale of China's Internet marketing development, A fully recognizes that it is difficult to adapt to the current development trend with only offline traditional marketing channels, and that Internet marketing is very important and necessary for its long-term survival and development. However, if the same product of A exists in two sales channels, there is bound to be sales competition and channel conflict. In particular, if the retailer also implements a dual-channel strategy, a more intense vertical channel conflict will arise between the manufacturer's dual channel and the retailer's dual channel. Besides, the third-party online platforms, e.g., JD, Buy Together, Tmall, etc., which often use subsidy strategies, may also affect the dual-channel strategy and the channel conflict. Therefore, the question of whether to open online direct channels and how to set pricing strategies after opening online channels when there exists a platform subsidy becomes an urgent issue to be solved. The existing research at home and abroad mainly focuses on channel selection [1], pricing decisions [4], and supply chain coordination [10]. However, research on the channel selection and pricing problems of dual-channel supply chains considering platform subsidy is limited, and only a few studies discuss this problem [16]. They cannot solve the research problem faced in this paper, in which channel selection is made in the alcohol industry and the subsidy comes from the platform, so our paper differs from them in terms of the channel situation and the source of the subsidy. To address the pricing strategy of A's dual-channel supply chain for alcoholic products, this paper introduces the Stackelberg game model based on two scenarios, i.e., the manufacturer's single channel, and the manufacturer's dual channel considering platform subsidy. Then this paper solves these two game models to derive the optimal pricing strategies for the manufacturer and the retailer. Finally, we conduct a numerical analysis to explore the effect of different factors on the optimal dual-channel pricing decisions and profit of the manufacturer under the two scenarios. The results show that manufacturers can achieve more profit from the dual channel. And with the increase of the platform subsidy and the proportion of subsidies borne by manufacturers, manufacturers' online direct sales prices will increase. Also, the platform subsidy intensifies the channel conflict.
2 Literature Review 2.1 Channel Selection in Dual-Channel Supply Chain For the channel selection problem of dual-channel supply chains, many scholars at home and abroad have done extensive research on different decision premises. For example, Yumeng and Behzad (2020) argue that manufacturers’ channel preferences depend not
only on channel operating costs and consumers’ channel preferences, but also on competitors’ channel strategies [1]. Yongmei et al. (2017) argued that, in a dual-channel structure with free-rider behavior, improvements in traditional services benefit the traditional channel and quickly increase online revenue, offsetting service costs and generating revenue for both channels [2]. Jiayi et al. (2021) found that consumers’ fairness attitudes determine manufacturers’ channel selection strategies [3]. 2.2 Pricing in Dual-Channel Supply Chains For the pricing decision problem of dual-channel supply chains, many scholars at home and abroad have found that the pricing strategy under centralized decision making is better than decentralized decision making. Lidan et al. (2012) argued that only dominant manufacturers will necessarily benefit from a wholesale price-led strategy [4]. According to Shujuan and Aijun (2014), when channel substitution increases and the market share of the retailer’s direct channel is small, the retailer should choose to abandon the direct channel and focus on sales in the retail channel [5]. Jian and Ning (2019) found that with a centralization strategy, online retailers have lower prices but higher margins [6]. Arpita et al. (2016) argued that dual channel has a significant effect on the pricing strategy and effort level of each subject in the supply chain, which is always beneficial to each member of the supply chain in an integrated system [7]. Using game theory and fuzzy theory, K et al. (2019) found that wholesale prices, retail prices, direct prices and remanufacturing efforts will be optimal under centralized decisions [8]. Ata et al. (2020) found that centralized structure is effective in achieving the highest expected profit, setting the lowest selling price, considering environmental view and resource utilization (achieving the highest recycling rate) are superior to decentralized structures [9]. 2.3 Dual-Channel Supply Chain Coordination For the coordination of dual-channel supply chains, many scholars at home and abroad have done extensive research using revenue sharing contracts, promotion sharing mechanisms, switching contracts and sharing information, respectively. For example, Xiaoning (2017) argued that quantity discount contract mechanism, two-part pricing contract, wholesale price coordination contract and hybrid coordination contract can achieve supply chain coordination [10]. Huimin and Hui (2016) argued that if the dominant retailer and supplier have different levels of trust in $c$ value, then the retailer cannot use markup contracts to coordinate the channel when only the supplier cares about fairness [11]. Xinyang et al. (2016) argued that, in a situation of intense competition among tour operators, horizontal coordination is more advantageous than vertical coordination. promotional cost-sharing mechanism can achieve Pareto improvement in profit levels of both manufacturers and brick-and-mortar stores [12]. Philipp et al. (2016) explained the concept of designing a tactical collaborative planning for planning the maintenance capabilities of relevant participants based on IMS-supported predictive information [13]. Amin and Jafar (2019) found that transshipment contract can not only coordinate the supply chain but also ensure the profitability of each member of the supply chain capacity [14]. Rofin and Biswajit (2020) explored the impact of information asymmetry in retailers’ greening costs on manufacturers’ and retailers’ performance and they found
84
D. Chen et al.
that sharing greening cost information among retailers can help both manufacturers and retailers prove profit [15]. 2.4 Dual-Channel Supply Chain Pricing Issues with Considering Subsidy There are limited literature exploring the dual-channel supply chain pricing decision problem considering subsidies. Chao (2017) argued that the manufacturer’s wholesale price and the retailer’s selling price are a decreasing function of the government subsidy coefficient [16]. Gaikai (2020) argued that the merchant’s optimal pricing then decreases as the optimal subsidy amount of the platform rises [17]. Kunkun (2017) argued that third-party platform subsidies increase total supply chain profits, and third-party platforms are more willing to subsidize suppliers dual-channel network consumers [18]. By analyzing and comparing the change of system profit under the two scenarios of centralized and decentralized decision making, it is found that the efficiency of the system can be optimized under centralized decision making.
3 Dual-Channel Supply Chain Pricing Strategy for Alcoholic Beverage Products
3.1 Problem Description
A's offline traditional channels have matured in terms of pricing, forming a stable wholesale price pricing mechanism with its retailers. Due to the development of e-commerce, A decided to open online direct sales stores on various platforms. However, A received many complaints from retailers in the offline channel due to the subsidy strategy of the online platform, and pricing for the complicated channels considering the platform subsidy is the key problem in this situation. A believed that the online platform had disrupted its pricing system, leading to A's inability to determine whether it should open online direct sales. To solve the above problem, this chapter establishes Stackelberg game models for the manufacturer's traditional offline channel and for the manufacturer's dual channels, respectively, under the premise of dual channels for retailers, and compares their optimal pricing decisions and profit results to provide some suggestions.
3.2 Stackelberg Game of Single-Channel Supply Chain
1) Model description
Fig. 1. The traditional offline selling channel
Considering a two-level supply chain consisting of a manufacturer and a retailer as shown in Fig. 1. Assume that the market demand for the product in each channel is random and that both the manufacturer and the retailer are risk-neutral and perfectly rational, i.e., both will make decisions based on the principle of maximizing their own expected profits. Dt and De are the manufacturer’s demand from retailer’s traditional and network channel respectively. πm and πr are the profit of manufacturer and retailer respectively. The manufacturer makes its own optimal decision with the goal of profit maximization, and wholesales the product with unit production cost c to the retailer at unit price w in the physical channel. While the retailer sells the product at unit price Pe and Pt through its network and traditional channels, respectively, based on the manufacturer’s decision. Then, the demand functions for the retailer’s physical channel and online channel are as follows: Dt = θ a − SPt + σ Pe
(1)
De = (1−θ )a − SPe + σ Pt
(2)
Given Eq. (1) and Eq. (2), the profit functions of the manufacturer and the retailer, respectively, are πm = (w − c)(De + Dt )
(3)
πr = (Pe − w)De + (Pt − w)Dt
(4)
Where a (a > 0) is the total market demand for the product; θ (0 < θ < 1) is the coefficient of consumer preference for the retailer's physical channel; S is the price elasticity of demand coefficient for the same channel; σ is the price shift coefficient of demand across channels, where 0 < σ < 1 < S since the impact of any channel's price on its own demand is greater than its impact on the other channel's demand; θa − SPt > 0 and (1 − θ)a − SPe > 0 are required, which implies that each channel has its own loyal customers.
2) Game analysis
Since the manufacturer has the priority of decision making, a Stackelberg game model with the strong manufacturer as the dominant party is established, i.e., firstly, the manufacturer decides the wholesale price of the product, and subsequently, the dual-channel retailer determines the sales prices of its products in both the online and offline channels based on the manufacturer's decision. As mutually independent business entities, the manufacturer and the retailer make decisions with the goal of maximizing their respective profits. The model is solved using backward induction. Letting ∂πr/∂Pe = 0 and ∂πr/∂Pt = 0, the Hessian matrix H1 of the retailer's profit function πr with respect to the retailer's online platform price Pe and physical channel price Pt is obtained as

H1 = | −2S   2σ  |
     |  2σ   −2S |    (5)
The first-order leading principal minor of the matrix satisfies |H11| = −2S < 0. Therefore, when |H12| > 0, i.e., when S² − σ² > 0 is satisfied, the retailer's profit function πr is strictly concave in the online platform price Pe and the physical channel price Pt, for which there exists a unique optimal solution. Solving the first-order conditions jointly yields

Pt = a(σ + Sθ − σθ)/[2(S² − σ²)] + w/2    (6)

Pe = a(S − Sθ + σθ)/[2(S² − σ²)] + w/2    (7)
Substituting Eqs. (6) and (7) into Eq. (3) and letting ∂πm/∂w = 0, the manufacturer's optimal wholesale price is given by

w* = [a + 2c(S − σ)]/[4(S − σ)]    (8)
Substituting Eq. (8) into Eqs. (6) and (7) yields the equilibrium prices for the retailer's traditional and online channels, respectively:

Pt* = [a + 2c(S − σ)]/[8(S − σ)] + a(σ + Sθ − σθ)/[2(S² − σ²)]    (9)

Pe* = [a + 2c(S − σ)]/[8(S − σ)] + a(S − Sθ + σθ)/[2(S² − σ²)]    (10)
Substituting Eqs. (9) and (10) into Eqs. (3) and (4) yields the equilibrium profits of the manufacturer and the retailer, respectively:

πm* = [a − 2c(S − σ)]²/[16(S − σ)]    (11)

πr* = a²/[32(S − σ)] − c(a − Sc + σc)/8 + a²(2θ − 1)²/[8(S + σ)]    (12)
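These closed forms can be checked mechanically. The small sketch below uses SymPy, with symbols named after the paper's notation, to re-derive the retailer's best responses, the optimal wholesale price, and the equilibrium manufacturer profit from Eqs. (1)-(4); it is offered only as a verification aid, not as part of the original analysis.

import sympy as sp

a, S, sigma, theta, c, w, Pe, Pt = sp.symbols('a S sigma theta c w P_e P_t', positive=True)

Dt = theta*a - S*Pt + sigma*Pe            # Eq. (1)
De = (1 - theta)*a - S*Pe + sigma*Pt      # Eq. (2)
pi_r = (Pe - w)*De + (Pt - w)*Dt          # Eq. (4)

# Retailer's best responses, Eqs. (6)-(7): solve the first-order conditions jointly.
best_resp = sp.solve([sp.diff(pi_r, Pe), sp.diff(pi_r, Pt)], [Pe, Pt], dict=True)[0]

# Manufacturer anticipates the responses and maximizes Eq. (3) over w, giving Eq. (8).
pi_m = ((w - c)*(De + Dt)).subs(best_resp)
w_star = sp.solve(sp.diff(pi_m, w), w)[0]

print(sp.simplify(w_star))              # equals [a + 2c(S - sigma)] / [4(S - sigma)]
print(sp.factor(pi_m.subs(w, w_star)))  # equilibrium manufacturer profit, Eq. (11)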
Fig. 2. Manufacturer’s dual-channel supply chain
3.3 Stackelberg Game of Dual-Channel Supply Chain 1) Model description We consider a two-level supply chain model consisting of a manufacturer and a retailer with the organizational structure and distribution channels shown in Fig. 2 in this part. Assume that the market demand for the product in each channel is stochastic and that both the manufacturer and the retailer are risk-neutral and perfectly rational, i.e., both will make decisions based on the principle of maximizing their own expected profits. Dt and De are the manufacturer’s demand from retailer’s traditional and network channel respectively, while Dm is the manufacturer’s demand from direct online channel. πm and πr are the profit of manufacturer and retailer respectively. The manufacturer wholesales the product with unit production cost c to the retailer at unit price w in the physical channel, and sells the product through the direct online channel at unit price Pm , while the retailer sells the product at unit price Pe and Pt through his two channels, respectively, based on the manufacturer’s decision. According to the above, the demand functions for the retailer’s two channels and the manufacturer’s direct sales channel be De = θe a − SPe + σ Pt + η(Pm − p)
(13)
Dt = θt a − SPt + σ Pe + ξ(Pm − p)
(14)
Dm = (1 − θe − θt )a − S(Pm − p) + ηPe + ξ Pt
(15)
Given Eqs. (13)–(15), the profit functions of the manufacturer and the retailer are as follows: πm = (w − c)(De + Dt ) + (Pm − c − bp)Dm
(16)
πr = (Pe − w)De + (Pt − w)Dt
(17)
where a (a > 0) is the total market demand for the product; p (p ≥ 0) is the subsidy offered by the platform; b (0 ≤ b ≤ 1) is the proportion of the subsidy borne by the manufacturer; θe and θt (0 < θe, θt < 1) are the coefficients of consumer preference for the retailer's online platform and the retailer's physical channel, respectively; S is the price elasticity coefficient of demand within a channel; and σ, η, ξ are the cross-price elasticity coefficients among the three sales channels. We assume that 0 < σ, η, ξ < 1 < S, since the impact of any channel's price on its own demand is greater than its impact on another channel's demand. Besides, the assumptions θe a − SPe > 0, θt a − SPt > 0 and (1 − θe − θt)a − S(Pm − p) > 0 imply that each channel has its own loyal customers.
2) Game analysis
Since the strong manufacturer has decision-making priority, a Stackelberg (leader–follower) game model with the strong manufacturer as the dominant
party is established, i.e., the manufacturer first determines the wholesale price and the price of the online direct sales channel, and the retailer subsequently determines the prices of its traditional channel and its electronic channel. The manufacturer and the retailer each make decisions with the goal of maximizing their respective profits. The model is solved by backward induction. First, taking the derivatives of πr with respect to Pe and Pt and setting them to zero, we obtain the optimal Pe and Pt. Substituting the results into Eq. (16) and taking the derivatives of πm with respect to Pm and w, we obtain the optimal decisions Pm and w. The concavity of the profit functions can be proved using the Hessian matrices H2 and H3, respectively:

H2 = [[−2S, 2σ], [2σ, −2S]]          (18)

H3 = [[−2(S − σ), η + ξ], [η + ξ, −(2S(S² − σ²) − ξ²S − η²S − 2σηξ)/(S² − σ²)]]          (19)

It is easy to verify that the first-order principal minor of H2 is |H21| = −2S < 0, so when |H22| > 0, i.e., when S² − σ² > 0, the retailer's profit function πr is strictly concave in the online platform price Pe and the physical channel price Pt, and a unique optimal solution exists. The first-order principal minor of H3 is |H31| = −2(S − σ) < 0, so when 2(S − σ)[2S(S² − σ²) − ξ²S − η²S − 2σηξ]/(S² − σ²) − (η + ξ)² > 0 we have |H32| > 0, the manufacturer's profit function πm is strictly concave in the wholesale price w and the direct online price Pm, and a unique optimal solution Pm*, w* exists. Solving jointly yields
Pm* = M/(4S³ − 4Sσ² − 3Sη² − ση² − 2Sηξ − 6σηξ − 3Sξ² − σξ²)          (20)

w* = N/(4S³ − 4Sσ² − 3Sη² − ση² − 2Sηξ − 6σηξ − 3Sξ² − σξ²)          (21)
where
M = (4cS³ + 4pS³ + 4bpS³ + 4aS² − 4cSσ² − 4pSσ² − 4bpSσ² − 4aσ² − 3cSη² − 3bpSη² − 3pSη² − cση² − bpση² − pση² − 2cSηξ − 2bpSηξ − 2pSηξ − 6cσηξ − 6bpσηξ − 6pσηξ − 3cSξ² − 3bpSξ² − 3pSξ² − cσξ² − bpσξ² − pσξ² − 4S²aθe + 4aσ²θe + 3Saηθe + aσηθe + Saξθe + 3aσξθe − 4S²aθt + 4aσ²θt + Saηθt + 3aσηθt + 3Saξθt + aσξθt)/2
N = (4cS³ − 4cSσ² + 2aSη + 2aση − 3cSη² − cση² + 2Saξ + 2aσξ − 2cSηξ − 6cσηξ − 3cSξ² − cσξ² + 2S²aθe + 2Saσξθe − 2Saηθe − 2aσηθe − 2Saξθe − 2aσξθe + aηξθe − ξ²aθe + 2S²aθt + 2Saσθt − 2Saηθt − 2aσηθt − aη²θt
− 2Saξθt − 2aσξθt + aηξθt)/2
Further, the optimal online price and the optimal physical channel price for the retailer can be obtained as

Pe* = [(Sθe + σθt)a + (σξ + ηS)Pm* + (S² − σ²)w* − pSη − pσξ]/(2(S² − σ²))          (22)

Pt* = [(σθe + Sθt)a + (ση + ξS)Pm* + (S² − σ²)w* − pSξ − σηp]/(2(S² − σ²))          (23)
Taking Eqs. (22) and (23) into Eqs. (16) and (17), we obtain the optimal profits πm* and πr* as follows:

πm* = (w* − c)[θe a − SPe* + σPt* + η(Pm* − p) + θt a − SPt* + σPe* + ξ(Pm* − p)] + (Pm* − c − bp)[(1 − θe − θt)a − S(Pm* − p) + ηPe* + ξPt*]          (24)

πr* = (Pe* − w*)[θe a − SPe* + σPt* + η(Pm* − p)] + (Pt* − w*)[θt a − SPt* + σPe* + ξ(Pm* − p)]          (25)
4 Numerical Analysis
Since the manufacturer's and retailer's profits solved in the previous section are complicated and the two scenarios are not easy to compare analytically, this section uses information obtained from a research visit to A to conduct a numerical analysis.
4.1 Comparative Analysis of the Results in the Two Scenarios
The parameters are as follows: the market demand a = 100, the unit production cost c = 5, and the price elasticity coefficient of demand within a channel S = 2. After the manufacturer opens the online direct sales channel, the cross-channel price shift coefficients are σ = η = ξ = 0.8 and the coefficient of consumer preference for the retailer's traditional channel is θt = 0.5. When consumers choose to buy online, the preference coefficient for the retailer's online channel is θe = 0.3 and the preference coefficient for the manufacturer's online channel is 1 − θt − θe = 0.2.
Taking the above parameters into the above equations, we obtain the optimal pricing decisions and profits in the two scenarios, as shown in Table 1.

Table 1. Equilibrium solution in different scenarios

Decision | Pt | Pe | Pm | w  | Dt | De  | Dm | πm*  | πr*
Single   | 34 | 31 | –  | 23 | 16 | 26  | –  | 770  | 367
Dual     | 55 | 52 | 42 | 45 | 15 | 4.5 | 22 | 1564 | 168
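The "Dual" row of Table 1 can be reproduced by solving the two-stage game numerically with the stated parameters. The sketch below is not the authors' program; it assumes the baseline comparison is made without a platform subsidy (p = 0), since Table 1 does not state the subsidy level.

```python
# Numerical backward-induction check of the dual-channel equilibrium (Table 1, "Dual" row),
# assuming p = 0 for the baseline scenario.
import sympy as sp

a, c, S = 100, 5, 2
sig = eta = xi = 0.8
th_e, th_t, p, b = 0.3, 0.5, 0, 0.6

Pe, Pt, Pm, w = sp.symbols('P_e P_t P_m w')
De = th_e*a - S*Pe + sig*Pt + eta*(Pm - p)                      # Eq. (13)
Dt = th_t*a - S*Pt + sig*Pe + xi*(Pm - p)                       # Eq. (14)
Dm = (1 - th_e - th_t)*a - S*(Pm - p) + eta*Pe + xi*Pt          # Eq. (15)

pi_r = (Pe - w)*De + (Pt - w)*Dt                                # Eq. (17)
br = sp.solve([sp.diff(pi_r, Pe), sp.diff(pi_r, Pt)], [Pe, Pt], dict=True)[0]   # stage 2

pi_m = ((w - c)*(De + Dt) + (Pm - c - b*p)*Dm).subs(br)         # Eq. (16), stage 1
sol = sp.solve([sp.diff(pi_m, w), sp.diff(pi_m, Pm)], [w, Pm], dict=True)[0]

prices = {**sol, Pe: br[Pe].subs(sol), Pt: br[Pt].subs(sol)}
print({str(k): round(float(v), 1) for k, v in prices.items()})
print('pi_m* =', round(float(pi_m.subs(sol))), ' pi_r* =', round(float(pi_r.subs(prices))))
```

This gives w ≈ 45.4, Pm ≈ 41.8, Pe ≈ 51.5, Pt ≈ 55.1 and profits of about 1564 and 168, matching the rounded entries in Table 1.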
1) The impact of manufacturer’s direct selling channel on retailer’ pricing decisions The retailer’s price increased after the manufacturer opened the online direct sales channel. This is due to the fact that the manufacturer, after opening the online direct sales channel, has increased the wholesale price in order to induce offline consumers to shift to online consumption and to induce consumption from offline to online consumption, which has led the retailer to increase their pricing. 2) The impact of manufacturer’s direct selling channel on the demand of the retailer When the manufacturer opens online direct sales channels, the demand for the retailer’s online channel is reduced. Because the manufacturer’s online direct sales price is much lower than the retailer’s online retail price, there is a price advantage, resulting in consumers who originally preferred the retailer’s online channel switch to buy it on the manufacturer’s online channel. Then the retailer’s online demand plummeted, leading to conflicts between the manufacturer and retailer. 3) The impact of manufacturer’s direct selling channel on retailer’s profit The retailer’s profit is reduced as manufacturer opens online direct selling channel. As manufacturer opens online direct selling channel, they not only harvested some new users and shifted offline consumers to online, but also stole online demand from retailer, resulting in a reduction in demand from both retailer’s online and offline channels. And with the manufacturer’s wholesale prices increasing, the retailer’s online and offline retail prices are also increasing above the manufacturer’s online prices in order to maintain a certain level of profit for itself, resulting in a significant reduction in the retailer’s profit. 4.2 The Impact of Price Subsidies on Manufacturer’s Decisions and Profits It is clear from the above that the manufacturer have seen an increase in profits after opening online direct selling channel. However, in reality, especially due to the epidemic and the growing global economic downturn, the online and offline economies of various industries have been affected. In order to stimulate consumption and stimulate consumers’ consumption potential, major platforms have introduced millions of subsidies
to achieve the effect of price reduction, thus attracting consumers to buy and promoting market circulation. In order to enable the manufacturer to adjust the price subsidies provided by the platform selectively and to obtain the optimal profit after opening the online direct sales channel, this section analyzes the impact of different factors on the profit difference between the two scenarios, as well as the impact of the percentage of the subsidy borne by the manufacturer on the manufacturer's online direct sales price.
1) The impact of price subsidies on the manufacturer's online direct selling price
We set p ∈ (0, 5) and b = 0.6, with the other parameters taking the same values as in the previous section. The effect of the platform subsidy on the manufacturer's online direct selling price, for a fixed percentage of the subsidy borne by the manufacturer, is obtained numerically and shown in Fig. 3.
Fig. 3. The impact of price subsidy on manufacturer’s direct selling price
Figure 3 shows that the manufacturer's online direct selling price increases with the growth of the platform subsidy. This indicates that, after opening the online direct selling channel, the manufacturer will raise the online direct selling price in the presence of a price subsidy in order to maintain its profit.
2) The effect of the percentage of the subsidy borne by the manufacturer on its online direct selling price
We set b ∈ (0, 1) and p = 5, with the other parameters taking the same values as in the previous section. The effect of the subsidy percentage borne by the manufacturer on the manufacturer's online direct sales price is shown in Fig. 4. Figure 4 shows that, for a given price subsidy, the manufacturer's online direct sales price increases with the manufacturer's share of the subsidy b. This indicates that, after the manufacturer opens the online direct sales channel, if the manufacturer and the platform subsidize the goods at the same time, the manufacturer will appropriately increase the online direct sales price to compensate for the cost of the price subsidy it bears and to ensure a certain profit.
Fig. 4. The effect of the percentage of subsidy borne by the manufacturer on the manufacturer’s online direct sales price
3) The impact of price subsidies on the manufacturer's profit
We set p ∈ (0, 5) and b = 0.6, with the other parameters taking the same values as in the previous section. The effect of the price subsidy on the profit difference between the two scenarios is obtained numerically and shown in Fig. 5, where Δπm = πm*(Dual) − πm*(Single).
Fig. 5. The effect of price subsidies on the profit difference between the two scenarios
Figure 5 shows that, for a given percentage of the subsidy borne by the manufacturer, the profit difference grows as the platform subsidy increases. This suggests that, after opening the online direct sales channel, the manufacturer is willing to share a larger part of the price subsidy with the platform, since doing so increases its own profit and stimulates consumers' shopping demand.
4) The impact of the price subsidy on the retailer's profit
We set p ∈ (0, 5) and b = 0.6, with the other parameters taking the same values as in the previous section. The effect of the price subsidy on the retailer's profit difference between the two scenarios is shown in Fig. 6, where Δπr = πr*(Dual) − πr*(Single).
Fig. 6. The effect of price subsidies on the retailer's profit difference between the two scenarios
Figure 6 shows that, for a given percentage of the subsidy borne by the manufacturer, the retailer's profit difference between the two scenarios decreases as the price subsidy increases. This indicates that if the manufacturer and the platform subsidize the goods at the same time after the manufacturer opens the online direct sales channel, the retailer will lose profit and the conflict of interest between the manufacturer and the retailer will intensify.
5) Suggestions and discussion
a) Increase the online direct selling price when the online platform offers a subsidy. After opening the online direct sales channel, A will, to some extent, participate in various promotions of the e-commerce platform, especially when the platform issues large subsidies. If A needs to share the price subsidies with the e-commerce platform, then A needs to raise its online direct sales prices to make up for the share of the subsidy it bears and keep its own profit stable.
b) The platform subsidy intensifies the channel conflict. After A opened the online direct sales channel and subsidized prices with the help of the platform, it not only shifted some of the retailer's offline consumers online, but also took away a large share of the retailer's online consumer market. This increased A's own profit but led to a sharp decrease in the retailer's profit, intensifying the channel conflict between A and the retailer.
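The sweeps behind Figs. 3 and 5 can be reproduced with the same backward-induction routine. A minimal sketch follows; the function name is ours, and the single-channel benchmark profit (≈770) is taken from Table 1 rather than from the closed-form expression.

```python
# Sweep the platform subsidy p at b = 0.6 and trace Pm* (Fig. 3) and
# Delta_pi_m = pi_m*(Dual) - pi_m*(Single) (Fig. 5).
import sympy as sp

def dual_equilibrium(p, b, a=100, c=5, S=2, sig=0.8, eta=0.8, xi=0.8, th_e=0.3, th_t=0.5):
    Pe, Pt, Pm, w = sp.symbols('P_e P_t P_m w')
    De = th_e*a - S*Pe + sig*Pt + eta*(Pm - p)
    Dt = th_t*a - S*Pt + sig*Pe + xi*(Pm - p)
    Dm = (1 - th_e - th_t)*a - S*(Pm - p) + eta*Pe + xi*Pt
    pi_r = (Pe - w)*De + (Pt - w)*Dt
    br = sp.solve([sp.diff(pi_r, Pe), sp.diff(pi_r, Pt)], [Pe, Pt], dict=True)[0]
    pi_m = ((w - c)*(De + Dt) + (Pm - c - b*p)*Dm).subs(br)
    sol = sp.solve([sp.diff(pi_m, w), sp.diff(pi_m, Pm)], [w, Pm], dict=True)[0]
    return float(sol[Pm]), float(pi_m.subs(sol))

pi_m_single = 770   # single-channel benchmark profit, read from Table 1
for p in range(0, 6):
    Pm_star, pi_m_dual = dual_equilibrium(p, b=0.6)
    print(f"p={p}: Pm*={Pm_star:.1f}  Delta_pi_m={pi_m_dual - pi_m_single:.0f}")
```

At b = 0.6 both Pm* and Δπm rise as p grows from 0 to 5, consistent with the trends shown in Figs. 3 and 5.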
5 Conclusion
This paper addresses the questions of whether A should open an online direct sales channel, what happens when the online platform adopts a subsidy strategy, and how the wholesale price and the online direct sales price should be set after an online direct sales channel is opened. Because the existing studies cannot solve the research
problem faced in this paper, the study uses a Stackelberg game model to explore the pricing and profit changes when the manufacturer opens the online direct selling channel, especially when the subsidy cannot be ignored. The results show that the manufacturer's profit increases after it opens the online direct sales channel. The numerical analysis further reveals that the manufacturer's online direct selling price increases as the subsidy and the percentage of the subsidy borne by the manufacturer increase. For this reason, it is recommended that A increase its attractiveness to the platform and gain a dominant position in its relationship with the online platform, so as to keep the initiative in pricing and maximize its own profit. At the same time, since the platform subsidy increases channel conflicts, the manufacturer should reach a revenue/profit-sharing contract with the retailer to mitigate the conflict. The application area of this paper is the liquor industry. In future research, the model can be adjusted and applied to other fields, and the coordination contract between the manufacturer and the retailer can be explored further.
Acknowledgment. This research was supported by the National Natural Science Foundation of China (NSFC) under grant number 72002015.
Research Summary of Intelligent Optimization Algorithm for Warehouse AGV Path Planning Ye Liu, Yanping Du, Shuihai Dou(B) , Lizhi Peng, and Xianyang Su Beijing Institute of Graphic Communication, Beijing, China {duyanping,doushuihai}@bigc.edu.cn
Abstract. Automated Guided Vehicle (AGV) path planning is the core technology of warehouse AGV. Reasonable path planning is helpful to maximize the benefits of warehouse space and time. Scholars at home and abroad have already made extensive and in-depth research on warehouse AGV path planning, and have achieved fruitful research results. In this paper, the models and environmental modeling methods of warehouse AGV path planning are summarized. It turned out that the cell method is intuitive and easy to model, the geometric method is safe, but difficult to update, and the artificial potential field method is easy to solve, but easy to fall into local optimum. The optimization methods of genetic algorithm, ant colony algorithm and particle swarm optimization algorithm in AGV path planning are emphatically summarized. It is found that genetic algorithm is suitable for complex and highly nonlinear path planning problems, ant colony algorithm is suitable for discrete path planning problems, and particle swarm algorithm is suitable for real number path planning problems. The research summary of this paper provides reference value for the research of intelligent optimization algorithm of AGV path planning and new ideas for broadening the application field of AGV path planning. Keywords: Warehouse · AGV · Path planning · Intelligent optimization algorithm
1 Introduction
With the rapid development of artificial intelligence, the Automated Guided Vehicle (AGV) is gradually developing towards miniaturization and intelligence. As a modern tool for logistics handling and production assembly, AGVs will replace manual labor in logistics, manufacturing, tobacco, pharmaceutical and other enterprises. By optimizing the running paths of warehouse AGVs, we can reduce errors caused by human negligence, improve the utilization rate of storage space and logistics efficiency, reduce logistics costs, improve the technological content and competitiveness of enterprises, and promote the intelligent development of the logistics industry in China.
Funding: project to design and develop an intelligent book management platform for the physical bookstore scene (27170121001/025).
In order to promote research on AGV path optimization in the warehouse environment, this paper summarizes the research status of intelligent optimization algorithms for warehouse AGV path planning in recent years. It introduces the basic model of warehouse AGV path planning, discusses the genetic algorithm, ant colony algorithm and particle swarm algorithm commonly used in AGV path optimization, summarizes the improvement methods of each algorithm, analyzes the advantages, disadvantages and applicable scope of each algorithm, and looks forward to future research on warehouse AGV path planning.
2 Overview of the Warehouse AGV Path Planning Model
Path planning refers to finding a feasible and optimal path between task points during movement while avoiding obstacles in the environment along the way. Path planning includes environment modeling, path searching and path smoothing [1]. The following is introduced from three aspects: environment modeling, objective function and constraint conditions.
2.1 Environmental Modeling
The process of environmental modeling is to transform the external environment in its original form into an internal mathematical model suitable for planning through a series of treatments. In order to simplify the problem, the three-dimensional space environment is usually converted to two dimensions for modeling. Environmental modeling is mainly the representation of obstacles, starting points and target points [2]. The method of environmental modeling determines the choice of path planning method and search algorithm, and different environmental modeling methods lead to different path planning methods. Commonly used environmental modeling methods include the cell method, the geometric method and the artificial potential field method.
1) Cell Method
The cell method divides space into individual cells of appropriate granularity and assigns corresponding values. It mainly includes the grid method and the unit tree method, which differ mainly in the size of the cells [3]. The grid method divides the spatial environment using cells of the same size and represents the environment with arrays, in which obstacles are represented as 1 and free space as 0 [4]. Each grid point is in obstacle space or free space; mixed grid points are classified as free space or obstacle space according to the proportion they occupy. Figure 1(a) shows the grid method. The unit tree method divides the environment space into units of different sizes to describe the environment. Generally, the environment is first divided into larger units, and smaller units are created where the space needs to be refined. The working space of a divided unit may be free space, obstacle space or mixed space. Its advantage is better adaptability. Figure 1(b) shows the unit tree method.
(a) grid method
(b) unit tree method
Fig. 1. Grid method and unit tree method.
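As a concrete illustration of the grid representation described above, a minimal occupancy-grid sketch (the map size, obstacle positions and cell indices are illustrative):

```python
# A minimal occupancy-grid sketch of the cell (grid) method: 1 = obstacle cell, 0 = free cell.
import numpy as np

grid = np.zeros((6, 6), dtype=int)      # 6 x 6 warehouse map, all free
grid[2, 1:4] = 1                        # a shelf / obstacle occupying three cells
grid[4, 4] = 1                          # another obstacle cell

def is_free(cell):
    r, c = cell
    return 0 <= r < grid.shape[0] and 0 <= c < grid.shape[1] and grid[r, c] == 0

def neighbors(cell):
    """4-connected free neighbors of a grid cell."""
    r, c = cell
    steps = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [s for s in steps if is_free(s)]

print(neighbors((3, 2)))   # free cells adjacent to (3, 2); (2, 2) is blocked by the shelf
```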
Scholars at home and abroad choose the two-dimensional or three-dimensional cell method to model the environment according to actual needs. References [5, 6] adopted the grid method to model the environment in three-dimensional space. Reference [7] realized the transformation from 3D to 2D, which vastly reduces the number and size of grids. Reference [8] combined the two-dimensional grid method with a cell decomposition method based on edge base points to model the environment, which not only resolves the contradiction between grid resolution and planning speed, but also ensures the effectiveness of whole-region traversal. The grid method is simple, but its solution accuracy is limited. The unit tree method has better self-adaptability, but the cost of calculating the adjacency relationships between units is large and its algorithm is more complex than the grid method.
2) Geometric Method
The geometric method extracts the geometric features of the environment and maps the environment space to a weighted graph using their combined characteristics, so that the obstacle-avoiding path planning problem is transformed into a simple graph search problem [9]. The main methods are the visibility graph and the Voronoi diagram method. The visibility graph connects all the vertices of the obstacles (set V0), the starting point S and the target point G with straight lines that do not pass through the obstacles, i.e., the straight lines are visible. The graph G(V, E) is then constructed by giving weights to the edges, and the optimal path is planned by some search method [10]. The visibility graph is shown in Fig. 2. Reference [11] adopted the visibility graph method to model the environment, simplifying the three-dimensional motion space into two-dimensional space. Reference [12] put forward an improved visibility graph, which is only applicable to static global path planning with a known working environment. The visibility graph is intuitive in concept and simple to implement, but it lacks flexibility: once the starting point or the target point is changed, the visibility graph must be reconstructed, and the computation becomes too heavy when the number or shapes of obstacles are complex [13].
Fig. 2. Visibility graph.
The Voronoi diagram method is defined by a series of nodes that are equidistant from the edges of two or more nearby obstacles. The space is divided into several regions, each of which contains the edge of only one obstacle [14]. The Voronoi diagram method is shown in Fig. 3.
Fig. 3. Voronoi diagram method.
Reference [15] proposed a path planning algorithm based on the Voronoi diagram. Reference [16] proposed an algorithm to calculate the generalized Voronoi diagram and its channel widths in a crowded obstacle environment. Reference [17] proposed an incremental construction method based on the Voronoi diagram. The path security of the Voronoi diagram method is high, but the path is not necessarily optimal and the computation is large.
3) Artificial Potential Field Method
The artificial potential field method is a spatial planning method expressed in terms of field characteristics. The basic idea is to abstract the motion of the AGV in the environment as motion in an artificial potential field, in which the target points attract the AGV and obstacles repel it; the corresponding path is obtained from the direction of the resultant force on the AGV, so that the AGV can effectively avoid obstacles in real time and move to
the target points along a collision-free path [18]. The artificial potential field method has high timeliness and generates smooth paths, but it lacks macro self-regulation ability in the global environment, so it easily falls into local optima [19]. By adding virtual target points and improving the potential field function, this tendency can be alleviated to a certain extent. Reference [20] proposed an improved artificial potential field method based on virtual target points and environmental judgment parameters to realize local path planning of mobile robots. Reference [21] enabled the artificial potential field algorithm to realize automatic obstacle-avoidance path planning for robots in dynamic environments by improving the repulsive and attractive force functions. In order to improve the performance of the artificial potential field method, scholars at home and abroad have proposed optimization algorithms combining it with the rolling window method [22], fuzzy control [23], simulated annealing [24] and particle swarm optimization [25, 26], which mainly solve the local minimum problem and enhance navigation ability in complex environments. The advantages and disadvantages of the cell method, the geometric method and the artificial potential field method are shown in Table 1.

Table 1. Comparison of advantages and disadvantages of environmental modeling methods
Modeling method | Cell method | Geometric method | Artificial potential field method
Advantage | Intuitive and easy to model | High security | Easy to solve
Disadvantage | Inefficiency | The update is difficult and the accuracy is low | May not find the path; easy to fall into local optimum
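A minimal sketch of the attraction/repulsion idea behind the artificial potential field method discussed above; the gain values, influence radius and obstacle layout are illustrative, not taken from the cited works.

```python
# Minimal artificial potential field step: attractive pull toward the goal plus
# repulsive push away from obstacles inside an influence radius (gains are illustrative).
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0, rho0=2.0, step=0.1):
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                      # attractive force
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        rho = np.linalg.norm(diff)
        if 0 < rho < rho0:                            # repulsion only within the influence radius
            force += k_rep * (1/rho - 1/rho0) / rho**3 * diff
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

pos = np.array([0.0, 0.0])
for _ in range(200):                                  # move the AGV toward the goal
    pos = apf_step(pos, goal=(8, 8), obstacles=[(4, 4.5)])
print(pos)                                            # ends near the goal (8, 8)
```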
2.2 Objective Function According to different task requirements, AGV path planning can construct different objective functions. The planning model can be single objective or multi-objective, and generally takes travel time, path length and obstacle avoidance as objective functions. Researchers usually choose single-objective optimization under certain conditions. Reference [27] took minimizing AGVs delay time as the optimization goal under the condition of given task allocation. Reference [28] took the shortest driving path as the optimization objective under the condition of meeting the requirements of obstacle avoidance. The single objective function can be solved accurately and quickly, but it is difficult to meet the actual needs. Compared with single objective function, multi-objective function is more consistent with complex environment and has practical significance. Reference [29] took minimizing path length and maximizing path smoothness as optimization objectives. Reference
[30] took energy consumption, path smoothness and the shortest task completion time as optimization objectives. Reference [31] took task allocation waiting time and conflict-free paths as optimization objectives. Reference [32] took path length, safety and smoothness as optimization objectives. Multi-objective optimization comes at the cost of a slower solution process, the solution is not unique, and not every sub-objective can be optimized simultaneously.
2.3 Constraints
The constraints of AGV path planning include self-constraints and environmental constraints. Generally, self-constraints are AGV endurance, driving speed, driving acceleration [33], waiting time [34], etc.; there may be other constraints for different task requirements. Environmental constraints include path boundaries, dynamic obstacles, terrain constraints [35], etc., and intelligent algorithms are selected to plan AGV paths under these environmental constraints. Satisfying certain constraints is the premise of AGV path planning, and the constraints are often coordinated with, and in competition with, each other. Compared with the constraints of the single-AGV path planning problem, the biggest feature of multi-AGV path planning is that cluster constraints are also taken into account, which usually include spatial cooperation constraints such as the safe driving distance between AGVs, as well as task cooperation constraints.
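To make the objective functions of Sect. 2.2 and the constraints of Sect. 2.3 concrete, a weighted-sum cost sketch; the weights and the particular smoothness and risk terms are illustrative choices, not a formulation from the cited references.

```python
# Weighted-sum form of a multi-objective path cost (weights are illustrative):
# J = w1 * length + w2 * turning + w3 * risk (soft constraint on safe distance)
import numpy as np

def path_cost(waypoints, w1=1.0, w2=0.5, w3=2.0, obstacles=(), safe_r=1.0):
    pts = np.asarray(waypoints, float)
    segs = np.diff(pts, axis=0)
    length = np.linalg.norm(segs, axis=1).sum()
    # smoothness: penalize sharp turns via the angle between consecutive segments
    turn = 0.0
    for a, b in zip(segs[:-1], segs[1:]):
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        turn += 1 - cosang
    # risk: count waypoints that violate the safety distance to any obstacle
    risk = sum(np.linalg.norm(p - np.asarray(o)) < safe_r for p in pts for o in obstacles)
    return w1 * length + w2 * turn + w3 * risk

print(path_cost([(0, 0), (2, 0), (4, 3), (6, 6)], obstacles=[(4, 2.5)]))
```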
3 Overview of Algorithms for Solving the Path Planning of Warehouse AGV
Common path planning algorithms include traditional algorithms and intelligent optimization algorithms. Traditional methods mainly include the mixed integer linear programming method [36, 37], the A* algorithm [38], the Dijkstra algorithm [39], the artificial potential field method [40], the dynamic window algorithm [41], etc., which have limitations in solving path planning problems. For example, the A* algorithm can find the optimal path quickly and effectively, but its efficiency decreases as the search space grows, and it is mostly used in single-AGV path planning. Although the Dijkstra algorithm can find the shortest path, it contains many redundant operations, which increases memory usage and decreases efficiency. Route planning by the artificial potential field method depends on the construction of the potential field: if the attractive and repulsive forces cancel out at many positions, it may fall into a local optimum, and if obstacles are close to the target point, a feasible path may not be found. The dynamic window algorithm has good obstacle avoidance ability and produces smooth paths, but it easily falls into local optima and may not reach the designated target along the globally optimal path. In recent years, intelligent and bionic algorithms have increasingly been applied to path planning, such as the genetic algorithm, ant colony algorithm, particle swarm algorithm, neural network algorithms, bee colony algorithm and reinforcement learning, among which the genetic algorithm, ant colony algorithm and particle swarm algorithm are the most widely used. This paper mainly summarizes these three algorithms.
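Before turning to the three intelligent algorithms, a compact A* sketch on an occupancy grid (4-connected moves, Manhattan heuristic) illustrates the traditional baseline mentioned above; the map and start/goal cells are illustrative.

```python
# Compact A* on a small occupancy grid: '.' = free cell, '#' = obstacle.
import heapq

grid = [
    "......",
    ".####.",
    "......",
    ".#.##.",
    "......",
]

def astar(start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]                # (f, g, cell, path)
    best_g = {}
    while open_set:
        f, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if best_g.get(cur, float("inf")) <= g:
            continue
        best_g[cur] = g
        r, c = cur
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == ".":
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None

print(astar((0, 0), (4, 5)))   # shortest collision-free sequence of cells
```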
3.1 Genetic Algorithm
The genetic algorithm (GA), which originated from Darwin's theory of evolution, is an intelligent optimization algorithm that imitates the genetic crossover and mutation of natural species, and it is widely used in medicine, agriculture, industry and other fields [42]. Many scholars have used the genetic algorithm to study the path planning of a single AGV. Guo Erdong [43] solved the path planning problem of a single laser-navigation AGV using a genetic algorithm. Dang Hongshe [44] used a genetic algorithm to address the complex driving paths and application limitations of AGVs in factories. Gu Yong [45] put forward a multi-target-point path planning method for the multi-robot coordinated sorting process in an intelligent warehouse. These studies effectively shorten path length and travel time, but do not consider the constraint relationships within a fleet. In practical applications, multi-AGV path planning is more often involved, and more and more scholars have begun to study it. Li Qingxin [46] studied genetic algorithms for path planning from single-AGV to multi-AGV settings and analyzed several types of path planning according to the complexity and amount of information in the running environment. Li Ming [47] designed an improved genetic algorithm and applied it to single-robot and multi-robot path planning.
Premature convergence is a problem when genetic algorithms are applied to path planning, and many scholars address it by adding operators and improving the fitness function. Chaymaa Lamini [48] put forward an improved crossover operator and a new fitness function considering distance, security and energy, which makes the algorithm converge faster but increases the path search time during crossover. Milad Nazarahari [49] improved the initial paths using five customized crossover and mutation operators to eliminate possible collisions between paths. Cheng Liang [50] avoided premature convergence by adding a smoothing operator and a deletion operator. Yang C [51] proposed an adaptive operator and a supervised operator that add or delete path nodes according to map complexity, obtaining an optimal path considering length, smoothness and security that is safer than other methods. These methods accelerate convergence and reduce the time spent on AGV path planning, but the improved algorithms are mainly suitable for static environments. Some scholars have introduced the idea of simulated annealing into the population selection operation to plan the optimal AGV path [52–54]. Compared with a single algorithm, the combination of the genetic algorithm and simulated annealing converges faster, escapes poor local solutions, and improves the efficiency of AGV global path search.
3.2 Ant Colony Algorithm
Ant colony optimization (ACO) was proposed by the Italian scholar Dorigo et al. in the 1990s. By simulating the behavior of an ant colony searching for food, ACO transforms a combinatorial optimization problem into a path optimization problem [55]. ACO was originally used to solve the TSP. After years of development, it has gradually penetrated other fields, such as graph coloring, large-scale integrated circuit
design, routing in communication networks, load balancing, vehicle scheduling and so on. The traditional ant colony algorithm is affected by the empirical setting of its initial parameters, and its efficiency in path optimization is low. Because pheromones are not updated in time, the path search obtains only a locally, rather than globally, optimal solution. To address these problems, scholars at home and abroad have made improvements in two respects: the initial parameter settings and the pheromone update. By optimizing the initialization parameters, convergence in path planning can be accelerated and the global optimal path can be obtained while avoiding local optima [56]. Chen Yuanyi [57] improved the algorithm by adjusting heuristic factors to avoid falling into local minima. Masoumi Zohreh [58] modified the relevant parts of the "ant decision rules", which suits path search in complex terrain environments. Ali Zain Anwar [59] enhanced the Max-Min Ant Colony Optimization (MMACO) algorithm by adding a Cauchy mutation (CM) operator, eliminating limitations of the classical ACO and MMACO algorithms. Li Xue [60] improved the search ability at the initial stage, expanded the search range and added roulette operators by adaptively changing the volatilization coefficient, effectively improving solution quality and convergence speed.
Many scholars have also combined the ant colony algorithm with other intelligent algorithms, such as the genetic algorithm [65], particle swarm optimization [66] and the artificial potential field method [67], to improve global and local search ability, speed up convergence, and realize autonomous navigation, obstacle avoidance and path optimization of AGVs. Jiao Deqiang [65] used a genetic algorithm and nonlinear optimization to optimize the ant colony algorithm, with the genetic algorithm improving global search ability and nonlinear optimization improving local search ability. Chunyan Jiang [66] combined heuristic strategies of particle swarm optimization and the ant colony algorithm, adopting different search strategies at different stages of the algorithm, which converges quickly, has strong optimization ability and obtains better results. Wang Yu [67] added a local artificial potential field search to the ant colony algorithm and realized autonomous navigation, obstacle avoidance and path optimization of an AGV trolley in an automated workshop environment.
3.3 Particle Swarm Optimization
Particle swarm optimization (PSO) is an evolutionary computation technique proposed by Eberhart and Kennedy in 1995, originating from the study of bird predation behavior. The algorithm uses swarm intelligence to establish a simplified model and exploits the sharing of information by individuals in the swarm, so that the movement of the whole swarm evolves from disorder to order in the problem-solving space, thus obtaining the optimal solution [69].
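A minimal PSO sketch in the path planning setting: each particle encodes one intermediate waypoint, and the fitness is the path length through that waypoint plus an obstacle penalty. The inertia and learning factors (0.7, 1.5, 1.5), the penalty value and the scenario are illustrative.

```python
# Minimal PSO for placing one via-point between a start and a goal while avoiding an obstacle.
import numpy as np

rng = np.random.default_rng(0)
start, goal, obstacle, safe_r = np.array([0, 0]), np.array([10, 10]), np.array([5, 5]), 1.5

def fitness(pt):  # lower is better: path length plus a penalty for unsafe waypoints
    length = np.linalg.norm(pt - start) + np.linalg.norm(goal - pt)
    return length + (100.0 if np.linalg.norm(pt - obstacle) < safe_r else 0.0)

n, iters = 30, 100
x = rng.uniform(0, 10, (n, 2))                 # particle positions (candidate waypoints)
v = np.zeros((n, 2))
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)   # velocity update
    x = x + v                                                        # position update
    f = np.array([fitness(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(gbest, fitness(gbest))   # a waypoint that skirts the obstacle's safety radius
```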
The PSO algorithm has the characteristics of simple modeling, easy implementation, few parameters, high precision and fast convergence, which has attracted many scholars' attention. However, when applied to path planning, the traditional PSO algorithm suffers from slow convergence in the later stage and premature convergence. To address slow late-stage convergence, Guangsheng Li [70] proposed adaptively selecting the most suitable search strategy at different stages, which improved the AGV path search ability. Nianyin Zeng [71] analyzed the path planning problem of switched local evolutionary particle swarm optimization based on a non-homogeneous Markov chain and DE, overcoming the contradiction between local search and global search. To address premature convergence, scholars at home and abroad have put forward several solutions. Peram Thanmaya [72] optimized the PSO algorithm according to the fitness-distance ratio. Liang J J [73] used the best historical information to update the particle velocity. Shao Peng [74] introduced a sinusoidal factor with periodic oscillation, giving every particle position a periodic oscillation and expanding the search space. All three methods effectively avoid premature convergence. Chen Jialin [75] solved the premature phenomenon in smooth path planning by improving the population evolution state strategy, adaptive inertia weight, adaptive learning factor strategy and group jump strategy. In addition to the search and path optimization ability of the algorithm, scholars have also studied its obstacle avoidance ability and the smoothness of the path. Ma [76] proposed a random disturbance particle swarm optimization algorithm and a simulated annealing particle swarm optimization algorithm to study collision-free path planning in a dynamic double-warehouse environment. Das [77] used a bee colony operator to enhance an improved particle swarm algorithm and calculated the optimal collision-free trajectory of a robot in a complex environment. Baoye Song [69] added multi-modal delay information to the velocity update model and proposed a multi-modal delayed particle swarm optimization algorithm for global smooth path planning of mobile robots. At present, in addition to optimizing the PSO path planning algorithm by improving parameters, many scholars have studied the application of PSO combined with the ant colony algorithm [78], simulated annealing [79] and the genetic algorithm [80] in path planning. A single algorithm has limitations in path optimization, and the final result of a hybrid algorithm is better than that of PSO alone. The fusion of intelligent algorithms can effectively exploit the advantages of each algorithm and improve the quality, speed and security of finding the optimal path; its disadvantage is that the hybrid algorithm takes a long time to compute. To sum up, most scholars use the genetic algorithm, ant colony algorithm and particle swarm algorithm for AGV path planning. These algorithms have inherent parallelism, high robustness and few parameters, and scholars have proposed improved methods for each of them in practical application scenarios. A specific analysis and comparison of the algorithms is shown in Table 2.
Table 2. Algorithm analysis and comparison

Algorithm | Advantages | Disadvantages | Improvement methods | Applicable problems
Genetic algorithm | Global optimization ability; intrinsic parallelism | Slow convergence; the choice of initial values affects convergence | Adding operators and improving the fitness function; combining with simulated annealing | Complex and highly nonlinear problems
Ant colony algorithm | Positive feedback; strong robustness; strong global optimization ability | Large amount of calculation and long running time; easy to fall into local optima | Improving the heuristic function; updating pheromones; combining with other intelligent algorithms | Discrete problems
Particle swarm optimization | Fast search; few parameters; simple implementation | Slow convergence in the later period; prone to premature convergence | Improving the weight coefficient and learning factors; optimizing the search strategy; combining with other intelligent algorithms | Real-number problems
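As a counterpart to the comparison in Table 2, a real-coded GA sketch for the same toy waypoint problem used in the PSO sketch above, with tournament selection, arithmetic crossover and Gaussian mutation; the operator settings are illustrative.

```python
# Real-coded GA sketch: evolve one intermediate waypoint between start and goal.
import numpy as np

rng = np.random.default_rng(1)
start, goal, obstacle, safe_r = np.array([0, 0]), np.array([10, 10]), np.array([5, 5]), 1.5

def fitness(pt):  # lower is better: path length plus obstacle penalty
    length = np.linalg.norm(pt - start) + np.linalg.norm(goal - pt)
    return length + (100.0 if np.linalg.norm(pt - obstacle) < safe_r else 0.0)

pop = rng.uniform(0, 10, (40, 2))
for _ in range(100):
    f = np.array([fitness(p) for p in pop])
    def pick():                                      # tournament selection of one parent
        i, j = rng.integers(0, len(pop), 2)
        return pop[i] if f[i] < f[j] else pop[j]
    children = []
    for _ in range(len(pop)):
        a, b = pick(), pick()
        alpha = rng.random()
        child = alpha * a + (1 - alpha) * b          # arithmetic crossover
        if rng.random() < 0.2:                       # Gaussian mutation
            child = child + rng.normal(0, 0.5, 2)
        children.append(child)
    pop = np.array(children)

best = min(pop, key=fitness)
print(best, fitness(best))   # a waypoint near the obstacle's safety radius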
4 Summary
As one of the main pieces of equipment in modern warehousing, the AGV and its path planning technology are key to realizing intelligent warehousing. Research on AGV path planning algorithms can improve the ability of an AGV to plan its own path and shorten the path search time, which is of practical significance for warehousing applications. In this paper, the models and environmental modeling methods of warehouse AGV path planning are summarized. It is found that the cell method is intuitive and easy to model, the geometric method is safe but difficult to update, and the artificial potential field method is easy to solve but prone to local optima. The genetic algorithm, ant colony algorithm and particle swarm optimization algorithm, which are widely used in AGV path planning, are also summarized. The comparison shows
that the genetic algorithm has strong global optimization ability but slow convergence, making it suitable for complex and highly nonlinear path planning problems; the ant colony algorithm has strong robustness and global optimization ability but easily falls into local optima, making it suitable for discrete path planning problems; and PSO has fast search speed, few parameters and simple implementation but slow late-stage convergence and a tendency to premature convergence, making it suitable for real-number path planning problems. The three algorithms can be fused with one another to different degrees to form new algorithms; such fused intelligent algorithms can effectively exploit the advantages of each algorithm and make up for their shortcomings.
The environment of large-scale warehousing is complex and changeable, warehousing tasks are highly random and dynamic, and the task scale is large. Although fruitful research results on AGV path planning have been achieved, the warehouse environment places higher requirements on AGV numbers, cooperation ability, responsiveness, fault tolerance, performance and system efficiency. Research on multi-AGV path planning still cannot meet the urgent needs of the warehousing environment, and further research and system development are needed. With the continuous progress of science and technology, multi-AGV cooperation and multi-algorithm integration will become the development direction of intelligent warehouse AGV path planning.
References 1. Fetanat, M., Haghzad, S., Shouraki, S.B., Optimization of dynamic mobile robot path planning based on evolutionary methods. AI & Robotics (IRANOPEN), IEEE, pp. 1–7 (2015) 2. Lv, Z., Yang, L., He, Y., Liu, Z., Han, Z.: 3D Environment Modeling with Height Dimension Reduction and Path Planning for UAV, Kunming University of Science and Technology, IEEE Control System Society Beijing Chapter, IEEE Beijing Section, Proceedings of 2017 9th International Conference on Onnd Technology. IEEE, Beijing Section, p. 6 (2017) 3. Liu, X.L., Jian, L., Jin, Z.F.: Mobile robot path planning based on environment modeling of grid method in unstructured environment. Mach. Tool Hydraulics 44(17), 1–7 (2016) 4. Dang, V.-H., Viet, H.H., Thang, N.D., Vien, N.A., Tuan, L.A.: Improving path planning methods in 2D grid maps. J. Comput. 1, 15 (2020) 5. Xiao, S., Tan, X., Wang, J.: A simulated annealing algorithm and grid map-based UAV coverage path planning method for 3D reconstruction. Electronics 10 (2021) 6. Xiong, C.: Improvement of ant colony algorithm and its application in path planning. Chongqing University of Posts and telecommunications (2020) 7. Lu, Z., Yang, L.Y., He, Y.Q.: 3D environment modeling with height dimension reduction and path planning for UA. In: The 2017 9th International Conference on Modelling, Identifification and Control, Kunming, China, pp. 734–739. IEEE (2017) 8. Wang, W.F., Wu, Y.C., Zhang, X.: Research of the unit decomposing traversal method based on grid method of the mobile robot. Tech. Autom. Appl. 32, 34–38 (2013) 9. Dai, G.: Algorithm research on obstacle avoidance path planning, Huazhong University of science and technology (2004) 10. Liu, Y.: Obstacle avoidance path generation and optimization based on visual graph method. Kunming University of science and technology (2012) 11. Sheng, J.: Virtual human path planning method and its application in virtual environment. East China University of science and technology (2011) 12. Junlan, N., Qingjie, Z., Yanfen, W.: Flight path planning of UAV based on weighted voronoi diagram. Flight Dyn. 33(4), 339–343 (2015)
13. Feng, C.: Application of improved immune algorithm in multi robot formation control. Guangxi University of science and technology (2019) 14. Chen, X., Wu, Y.: Research on path planning algorithm of UAV attacking multiple moving targets based on Voronoi diagram. Inf. Commun. 06, 36–37 (2020) 15. Shao, W., Luo, Z.: Application of improved visual graph method in path planning. J. Nanyang Normal Univ. 17(04), 38–42 (2018) 16. Feng, H., Bao, J., Jin, Y.: Generalized Voronoi diagram for multi robot motion planning. Comput. Eng. Appl. 46(22), 1–3 + 19 (2010) 17. Haibin, W., Yi, L.: Online path planning of mobile robot based on improved Voronoi diagram. Chinese J. Constr. Mach. 01, 117–121 (2007) 18. Wang, H., Hao, C.E., Zhang, P., Zhangmingquan, yinpengheng, zhangyongshun, Path planning of mobile robot based on a~* algorithm and artificial potential field method, vol. 30, pp. 2489–2496 (2019) 19. Wang, Y.: Improvement of artificial potential field algorithm for robots in different environments, Nanjing University of information engineering (2020) 20. Di, W., Caihong, L., Na, G., Tengteng, G., Guoming, L.: Local path planning of mobile robot based on improved artificial potential field method. J. Shandong Univ. Technol. (NATURAL SCIENCE EDITION) 35, 1–6 (2021) 21. Huang, L., Geng, Y.: Research on mobile robot path planning based on dynamic artificial potential field method. Comput. Meas. Control 25, 164–166 (2017) 22. Zhang, Y.L., Liu, Z.H., Chang, L.: A new adaptive artificial potential field and rolling window method for mobile robot path planning. In: Editorial Department of control and decision making, 2017 29th Chinese Control and Decision Conference (CCDC), Chongqing, pp. 7144– 7148. IEEE (2017) 23. Abdalla, T.Y., Abed, A.A., Ahmed, A.A.: Mobile robot navigation using PSO-optimized fuzzy artificial potential field with fuzzy control. J. Intell. Fuzzy Syst. 32, 3893–3908 (2016) 24. Ying, Z., Yuanpeng, L., Yawan, Z., Weijian, L.: Path planning of handling robot based on improved artificial potential field method. Electron. Meas. Technol. 43, 101–104 (2020) 25. Liu, Z.: Research and application of AGV path planning based on particle swarm optimization and artificial potential field method. Shenzhen University (2018) 26. Xu, Y.: Hybrid path planning for mobile robot based on particle swarm optimization and improved artificial potential field method. Zhejiang University (2013) 27. Zhong, M., Yang, Y., Dessouky, Y., Postolache, O.: Multi-AGV scheduling for conflict-free path planning in automated container terminals. Comput. Ind. Eng. 142, 106371 (2020) 28. Zhijun, W.: Dynamic refinement of robot navigation path and planning of flower pollination algorithm. Mech. Des. Manuf. 03, 288–292 (2021) 29. Thi Thoa Mac: A hierarchical global path planning approach for mobile robots based on multi-objective particle swarm optimization. Appl. Soft Comput. 59, 68–76 (2017) 30. Xuan, Y.: Laser ablation manipulator coverage path planning method based on an improved ant colony algorithm. Appl. Sci. 10, 8641 (2020) 31. Zhang, Z., zhangbohui, representative contention, “multi AGV conflict free path planning based on dynamic priority strategy,” Computer application research, pp. 1–5. https://doi.org/ 10.19734/j.issn.1001-3695.2020.08.0221 32. Xue, Y., Jian-Qiao, S.: Solving the path planning problem in mobile robotics with the multiobjective evolutionary algorithm. Appl. Sci. 8, 9 (2018) 33. Liao, K.: Research on multi AGV path planning optimization algorithm and scheduling system. 
Hefei University of technology (2020) 34. Jie, W.: Research on path planning and collision avoidance strategy of multi AGV in intelligent warehouse, Shandong University of science and technology (2020)
35. Hu, Z., Cheng, L., Zhang, J., Wang, C.: Path planning of mobile robot based on improved genetic algorithm under multiple constraints. J. Chongqing Univ. Posts Telecommun. 06, 1–8 (2021) 36. Kim, K.H., Bae, J.W.: A Look-Ahead Dispatching Method for Automated Guided Vehicles in Automated Port Container Terminals. Inforvis (2004) 37. Lopes, T.C., Sikora, C., Molina, R.G.: Balancing a robotic spot welding manufacturing line: an industrial case study. Eur. J. Oper. Res. 263, 1033–1048 (2017) 38. Yuan, R., Dong, T., Li, J.: Research on the collision-free path planning of multi-AGVs system based on improved A*algorithm, Inventi Impact - Algorithm (2017) 39. Chen, Q.: Research on optimal path planning combined with obstacle avoidance and its application in delivery car. Guangdong University of technology (2019) 40. Gao, Y., Wei, Z., Gong, F.: Dynamic path planning for underwater vehicles based on modified artificial potential field method. In: Proceeding of 2013 Fourth International Conference on Digital Manufacturing and Automation (ICDMA), Shinan. IEEE (2013) 41. Jiao, C., Jia, C., Qing, L.: Path planning of mobile robot based on improved a * and dynamic window method. Computer integrated manufacturing system, pp. 1–17 (2021). http://kns. cnki.net/kcms/detail/11.5946.TP.20201026.1053.026.html 42. He, R.: Research on vehicle routing planning algorithm based on genetic algorithm. Beijing Jiaotong University (2020) 43. Guo, E., Liu, N., Wu, L., Wu, Z.: An AGV path planning method based on genetic algorithm. Sci. Technol. Innov. Prod. 08, 87–88 + 91 (2016) 44. Dang, H., Sun, X.: Research on AGV path optimization based on genetic algorithm. Electron. Products World 27, 48–51 + 73 (2020) 45. Gu, Y., Duan, J., Yuan, Y., Su, Y.: Multi objective path planning method for storage robot based on genetic algorithm. Logistics Technol. 39, 100–105 (2020) 46. Li, Q.: Genetic algorithm for path planning of AGV. Guangdong University of technology (2011) 47. Li, M.: Research on path planning of mobile robot based on improved genetic algorithm. Anhui Engineering University (2017) 48. Lamini, C., Benhlima, S., Elbekri, A.: Genetic algorithm based approach for autonomous mobile robot path planning. Procedia Comput. Sci. 127, 127 (2018) 49. Nazarahari, M., Khanmirza, E., Doostie, S.: Multi-objective multi-robot path planning in continuous environment using an enhanced genetic algorithm. Expert Syst. Appl. 115, 106– 120 (2019) 50. Liang, C.: Path planning and navigation based on improved genetic algorithm under multiple constraints. Chongqing University of Posts and Telecommunications (2020) 51. Yang, C., Zhang, T., Pan, X., Hu, M.: Multi-objective mobile robot path planning algorithm based on adaptive genetic algorithm. Technical Committee on control theory, Chinese Association of Automation, pp. 7 (2019) 52. Crossland, A.F., Jones, D., Wade, N.S.: Planning the location and rating of distributed energy storage in LV networks using a genetic algorithm with simulated annealing. Int. J. Electr. Power Energy Syst. 59, 103–110 (2014) 53. Bo, S., Jiang, P., Genrong, Z., Dianyong, D.: AGV path planning based on improved genetic algorithm. Comput. Eng. Des. 41, 550–556 (2020) 54. Yang, L.: Research on Robot Path Planning Based on Genetic Algorithm. Yunnan University (2019) 55. Deng, X., Zhang, L., Lin, H.: Pheromone mark ant colony optimization with a hybrid nodebased pheromone update strategy. Neurocomputing 143, 46–53 (2015) 56. 
Chen, C.-C., Shen, L.P.: Improve the accuracy of recurrent fuzzy system design using an efficient continuous ant colony optimization. Int. J. Fuzzy Syst. 20, 817–834 (2018)
57. Yuanyi, C., Xiangming, Z.: Path planning of robot based on improved ant colony algorithm in computer technology. J. Phys. Conf. Ser. 1744, 4 (2021) 58. Zohreh, M., Van John, G., Abolghasem, S.N.: An improved ant colony optimizationbased algorithm for user-centric multi-objective path planning for ubiquitous environments. Geocarto Int. 36, 137–154 (2021) 59. Anwar, A.Z., Han, Z., Bo, H.W.: Cooperative path planning of multiple UAVs by using max– min ant colony optimization along with cauchy mutant operator. Fluctuation Noise Lett. 20, 01 (2021) 60. Li, X.: Research on the application of improved ant colony algorithm in intelligent car path planning, Anhui Engineering University (2020) 61. Hsu, C.-C., Wang, W.-Y., Chien, Y.-H., Hou, R.-Y.: FPGA implementation of improved and colony optimization algorithm based on pheromone diffusion mechanism for path planning. J. Marine Sci. Technol. 26, 170–179 (2018) 62. Sangeetha, V., Krishankumar, R., Ravichandran, K.S., Kar, S.: Energy-efficient green ant colony optimization for path planning in dynamic 3D environments. Soft Comput. 25, 1–21 (2021) 63. Boxin, G., Yuhai, Z., Yuan, L.: An ant colony optimization based on information entropy for constraint satisfaction problems. Entropy (Basel, Switzerland) 21, 8 (2019) 64. Jing, Y.: Mobile robot path planning based on improved ant colony optimization algorithm. In: Proceedings of the 39th China Control Conference, vol. 2 (2020) 65. Deqiang, J., Che, L., Zerui, L., Dinghao, W.: An improved ant colony algorithm for TSP application. J. Phys: Conf. Ser. 1802, 3 (2021) 66. Jiang, C., Fu, J., Liu, W.: Research on vehicle routing planning based on adaptive ant colony and particle swarm optimization algorithm. Int. J. Intell. Transp. Syst. Res. 19, 1–9 (2020) 67. Wang, Y., Feng, X., Yulei, L., Xiang, Z.: Research on path planning of autopilot car based on improved potential field ant colony algorithm. Manuf. Autom. 41, 70–74 (2019) 68. Mandava, R.K., Bondada, S., Vundavilli, P.R.: An optimized path planning for the mobile robot using potential field method and PSO algorithm. Soft Computing for Problem Solving, pp. 139–150. Springer, Berlin (2019) 69. Song, B., Wang, Z., Zou, L.: On global smooth path planning for mobile robots using a novel multimodal delayed PSO algorithm. Cogn. Comput. 9, 5–17 (2017) 70. Li, G., Chou, W.: Path planning for mobile robot using self-adaptive learning particle swarm optimization. Sci. China (Inf. Sci.) 61, 267–284 (2018) 71. Zeng, N.: Path planning for intelligent robot based on switching local evolutionary PSO algorithm. Assembly Autom. 36, 120–126 (2016) 72. Thanmaya, P., Veeramachaneni, K., Mohan, C.K.: Fitness-distance-ratio based particle swarm optimization. In: Proceedings of the IEEE Congress on Swarm, Intelligence Symposium, vol. 2, pp. 174–181 (2003) 73. Liang, J.J., Qin, A.K., Suganthan, P.N.: Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 10, 281–295 (2006) 74. Shao, P., Wu, Z.: An improved particle swam optimization algorithm based on trigonometric sine factor. J. Chinese Comput. Syst. 36, 156–161 (2015) 75. Jialin, C., Guoliang, W., Tian, X.: Smooth path planning of mobile robot based on improved particle swarm optimization algorithm. Miniature Microcomput. Syst. 40, 2550–2555 (2019) 76. Ma, Y., Wang, H., Xie, Y., Guo, M.: Path planning for multiple mobile robots under doublewarehouse. Inf. Sci. 278, 357–379 (2014) 77. 
Das, P.K., Jena, P.K.: Multi-robot path planning using improved particle swarm optimization algorithm through novel evolutionary operators. Appl. Soft Comput. J. 92, July 2020 78. Ma, Y., Li, C.: Path planning and tracking for multi-robot system based on improved PSO algorithm. In: 2011 International Conference on Mechatronic Science, Electric Engineering and Computer, Jilin, pp. 1667–1670 (2011)
110
Y. Liu et al.
79. Zhang, Y., Lu, G.: Research on logistics distribution path optimization based on hybrid particle swarm optimization. Packag. Eng. 05, 10–12 (2007) 80. Mousavi, M., Yap, H.J., Musa, S.N., Tahriri, F., Dawal, S.Z.M.: Multi-Objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization. PLoS ONE 12, 16–17 (2017)
Research on Time Window Prediction and Scoring Model for Trauma-Related Sepsis
Ke Luo1, Jing Li1(B), and Yuzhuo Zhao2
1 School of Economics and Management, Beijing Jiaotong University, Beijing, China
{17711020,jingli}@bjtu.edu.cn
2 Department of Emergency, Chinese PLA General Hospital, Beijing, China
Abstract. Based on the MIMIC-III database of the Massachusetts Institute of Technology, this paper studies and analyzes the symptoms of trauma-related sepsis. The SOFA score is used as the inclusion and exclusion criterion, and the relevant patient medical index data are extracted under the guidance of a professional clinician. Sequential forward search is applied to find the optimal index combination based on the eXtreme Gradient Boosting (XGBoost) algorithm. Twenty independent replicates are performed to obtain 7 key risk indicators (Urea Nitrogen, Prothrombin Time, PO2, Sodium, Red Blood Cells, Carbon Dioxide, International Normalized Ratio). The time window prediction model is built with four machine learning algorithms (decision tree, random forest, decision-tree-based adaptive boosting (Adaboost), and XGBoost). The results show that the time window prediction model of trauma-related sepsis has good generalization ability, and the prediction effect of the random forest and XGBoost algorithms is better than that of the other two. Finally, the multi-factor logistic regression method is used to build the risk scoring tool for sepsis induced by trauma-related infection based on the key risk indicators and the opinions of professional clinicians. The results show that the data-driven risk scoring tool can effectively predict the outcome of patients with trauma-related sepsis, which has high clinical significance.

Keywords: Trauma-related sepsis · Big data · Key risk indicators · Time window forecast · Risk score · Machine learning
1 Introduction

With the development of medicine and the application of smart healthcare, many diseases are under control, but the number of trauma patients is increasing, and the number of deaths is also increasing. Trauma has become the first cause of death for patients between 1 and 44 years old [1]. Trauma-related infection is one of the most common complications for trauma patients. Due to differences in the traumatic environment and the degree of trauma, most trauma patients will have different degrees of infection, among which sepsis is the result of out-of-control infection of trauma patients in the late stage. Difficulty in prediction, diagnosis, and treatment is the reason for the extremely high mortality of sepsis; the treatment rate can be greatly improved if sepsis is predicted at an early stage. Many researchers have applied smart healthcare to the study of sepsis, combining computing, big data, and clinical medicine to analyze the risk factors for sepsis and to find the relationship between specific indicators and sepsis; however, most of this work judges sepsis at a single time point and lacks prediction over time intervals [2]. This paper focuses on the possibility of sepsis in trauma patients over time. Combined with the MIMIC-III database, this paper conducts data mining, statistical analysis, and machine-learning modeling to construct time window prediction and risk scoring tools for trauma-related sepsis, in order to reduce the risk of sepsis and improve the working efficiency of medical staff.

This work was partly supported by the National Key Research and Development Plan for Science and Technology Winter Olympics of the Ministry of Science and Technology of China (2019YFF0302301).
2 Methods

2.1 Study Population and Data Sources

All data used in this study were obtained from the Medical Information Mart for Intensive Care III (MIMIC-III) database. The data are from patients aged 18 years or older who had been in the ICU for 4 h or more due to trauma. If the patient uses antibiotics first, blood culture should be performed within 24 h; if blood culture is performed first, antibiotics should be used within 72 h, and the time of the earlier event is recorded as Tsuspicion. When the SOFA score of a patient at 12 h after Tsuspicion minus the SOFA score at 24 h before Tsuspicion is higher than 2, the patient is identified as having sepsis and assigned to the experimental group; otherwise, the patient belongs to the control group.

2.2 Data Processing

The SOFA score is used as the inclusion and exclusion criterion of the research experiment. Data are preprocessed by data transposition, outlier processing, missing value analysis, and data filling. The missingno module of Python is used as the main tool for missing value analysis, and indicators with a missing ratio of more than 80% are removed. The time series data are then filled along two dimensions, namely linear interpolation and distance filling.

2.3 Feature Selection and Machine Learning

Feature selection is a preliminary step of machine learning and data mining and a process of data preprocessing. It eliminates redundant or irrelevant features to identify the most important features, thus reducing the complexity of the problem [3]. In this study, a greedy algorithm is used to design the feature selection procedure, and the XGBoost algorithm is used to select features from 35 indicators with a sequential forward search strategy. XGBoost has good anti-over-fitting characteristics and high computational efficiency [4]; its tree model provides a basis for quantitative feature selection and forms an encapsulated (wrapper) feature selection. The time-series data of trauma-related sepsis are input into the key indicator screening model for iteration, and the results of each iteration are recorded to select the index combination with the highest performance. Compared with black-box models (uninterpretable algorithms, neural networks being a typical example), the decision tree is based on if-then-else rules and is easier to understand, implement, explain and visualize [5]. The neural network (as a representative black-box model) has certain defects: it is difficult to optimize, may converge to a local rather than a global optimum, and its low generalization leads to overfitting problems. Therefore, this study uses the decision tree algorithm to build the time window prediction model, and also uses the random forest and boosting methods, which are derived from the decision tree, to carry out multiple groups of experiments to improve the accuracy of time window prediction of trauma-related sepsis [6]. In this study, grid parameter iteration is used for parameter adjustment: given a set of parameters, the enumeration search method iterates over all possibilities to select the best result (Fig. 1).
Fig. 1. Time prediction model parameter adjustment and grid search setting.
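As an illustration of the screening and tuning procedure just described, the following Python sketch performs a sequential forward search scored by a cross-validated XGBoost classifier together with an enumeration (grid) search over hyper-parameters. The feature matrix X, the label y, the hyper-parameter grid and all function names are assumptions for illustration and are not the authors' code.

# Minimal sketch of sequential forward search scored by XGBoost, plus grid search.
# X is assumed to be a pandas DataFrame of the 35 candidate indicators, y a binary label.
from sklearn.model_selection import GridSearchCV, cross_val_score
from xgboost import XGBClassifier

def sequential_forward_search(X, y, max_features=None, cv=5):
    """Greedy forward selection: add the indicator that most improves CV accuracy."""
    remaining = list(X.columns)
    selected, best_scores = [], []
    max_features = max_features or len(remaining)
    while remaining and len(selected) < max_features:
        trial_scores = {}
        for feat in remaining:
            cols = selected + [feat]
            model = XGBClassifier(n_estimators=100, eval_metric="logloss")
            trial_scores[feat] = cross_val_score(model, X[cols], y, cv=cv).mean()
        best_feat = max(trial_scores, key=trial_scores.get)
        # stop when adding another indicator no longer improves the score
        if best_scores and trial_scores[best_feat] <= best_scores[-1]:
            break
        selected.append(best_feat)
        remaining.remove(best_feat)
        best_scores.append(trial_scores[best_feat])
    return selected, best_scores

def tune_by_grid_search(X, y):
    """Exhaustive enumeration over a small illustrative hyper-parameter grid."""
    param_grid = {"max_depth": [3, 5, 7],
                  "n_estimators": [100, 200],
                  "learning_rate": [0.05, 0.1]}
    search = GridSearchCV(XGBClassifier(eval_metric="logloss"),
                          param_grid, cv=5, scoring="roc_auc")
    search.fit(X, y)
    return search.best_params_, search.best_score_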
The logistic regression model is a multivariate statistical method for studying the relationship between explanatory variables and observed results. In this study, the scoring tool is based on the key index set; the multi-factor logistic regression method and the clinical grading consensus on the indicators are used to quantitatively calculate the severity of the patient's illness, namely the score, which is then mapped to an outcome probability. The specific steps of model construction are as follows:
1) Calculate the risk regression coefficient βi of each index;
2) Combine the inherent medical knowledge to determine the scoring thresholds of each index, and determine the reference value wij of each group;
3) Determine the basic risk reference value wiREF of each risk indicator. In the subsequent scoring model construction, wiREF is recorded as 0 points; values above wiREF are recorded as positive points, and the higher the score, the higher the risk;
4) Calculate the distance D between the reference value wij of each risk indicator group and the basic risk reference value wiREF:

D = (wij − wiREF) × βi    (1)

5) Set the constant B, the change in the index value corresponding to a change of 1 point in the risk scoring tool;
6) Calculate the score Pointsij of each group of risk indicators, and round the calculated value as the corresponding score value of this group:

Pointsij = D/B = (wij − wiREF) × βi / B    (2)

7) Calculate the total score and the risk prediction probability:

p = 1 / (1 + exp(βi Xi))    (3)
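Where helpful, steps 4)–7) can be expressed directly in code. The following is a minimal Python sketch, assuming the coefficients βi come from a fitted multi-factor logistic regression and that the group boundaries, reference values and the constant B are supplied by the clinical grading consensus; the Urea Nitrogen entry reuses the values reported later in Table 5, and everything else is illustrative rather than the authors' implementation.

# Minimal sketch of the scoring-tool construction (Eqs. (1)-(3)); concrete numbers are placeholders.
import math

def group_points(w_ij, w_iref, beta_i, B):
    """Eqs. (1)-(2): distance from the reference group, scaled by B and rounded."""
    D = (w_ij - w_iref) * beta_i
    return round(D / B)

def total_score(patient_values, scoring_table, B):
    """Sum the rounded points of the group each measured value falls into."""
    score = 0
    for indicator, value in patient_values.items():
        w_iref, beta_i, groups = scoring_table[indicator]
        for (low, high), w_ij in groups:
            if low <= value < high:
                score += group_points(w_ij, w_iref, beta_i, B)
                break
    return score

def outcome_probability(beta_and_x):
    """Eq. (3) as written in the text: map the weighted indicator sum to a probability."""
    s = sum(beta_i * x_i for beta_i, x_i in beta_and_x)
    return 1.0 / (1.0 + math.exp(s))

# Illustrative scoring entry: (w_iREF, beta, [((low, high), w_ij), ...]), values from Table 5.
scoring_table = {
    "Urea Nitrogen": (8.9, 0.0942, [((0, 1.8), 1.79), ((1.8, 7.1), 8.9),
                                    ((7.1, 21), 14), ((21, float("inf")), 25.5)])
}
print(total_score({"Urea Nitrogen": 23.0}, scoring_table, B=1.0185))  # -> 2 points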
3 Results

This study uses PostgreSQL to extract data from the MIMIC-III database, and finally obtained data of 177 patients in the experimental group and 369 patients in the control group, with 35 examination and laboratory indicators, amounting to 201,189 records. The Hosmer–Lemeshow goodness-of-fit index (H-L) [7] is used to verify the time series data after filling; the significance of the result is 0.553, which is greater than 0.05. There is no significant difference between the predicted values and the observed values, which proves that the model has a good fit.

3.1 Key Risk Indicators

The key indexes of trauma-related sepsis are extracted by feature selection. After 20 separate repeated experiments, the key indexes retained 16 times or more are Urea Nitrogen, PTT, PO2, Sodium, Red Blood Cells, CO2, and INR. The key indicators retained between 12 and 16 times are Lactate and White Blood Cells. Hematocrit, Chloride, Hemoglobin, Temperature, and Base Excess are the key indicators retained between 10 and 12 times. Key indicators retained between 8 and 10 times are Heart Rate, PCO2, Glucose, Platelet, Creatinine, and Calcium. The more times a single indicator is retained, the greater the influence of this indicator on the outcome and the stronger its ability to identify patients. In this study, the feature_importances function of XGBoost is used to obtain the feature importance of each index; the results are shown in Table 1. Comparing the calculation results, Urea Nitrogen and PTT, the key indicators with the most retention times, rank first and sixth in feature importance, respectively, which indicates that the key indicators are related to the accuracy of the predicted results of trauma-related sepsis. The weight of feature importance does not represent the degree of correlation between an indicator and the predicted result, but it shows that the indicators are related to the predicted results, which lays a foundation for the subsequent prediction of trauma-related sepsis (Table 2).
Table 1. Indicator weights and rankings

Rank | Label | Feature_importances | Rank | Label | Feature_importances
1 | Urea Nitrogen | 0.06 | 14 | Heart Rate | 0.039
2 | Hemoglobin | 0.059 | 15 | Calcium | 0.038
3 | Lactate | 0.052 | 16 | Chloride | 0.037
4 | CO2 | 0.049 | 17 | Potassium | 0.036
5 | INR | 0.048 | 18 | Magnesium | 0.035
6 | PTT | 0.047 | 19 | Red Blood Cells | 0.033
7 | PO2 | 0.046 | 20 | Base Excess | 0.031
8 | White Blood Cells | 0.046 | 21 | Hematocrit | 0.03
9 | Glucose | 0.045 | 22 | ph | 0.03
10 | Creatinine | 0.045 | 23 | Respiratory Rate | 0.024
11 | Platelet | 0.042 | 24 | Systolic Pressure | 0.02
12 | Sodium | 0.041 | 25 | Diastolic Pressure | 0.016
13 | PCO2 | 0.04 | 26 | Temperature | 0.01
Table 2. Forecast the distribution of key indicators

Category | Vital signs | Coagulation function | Arterial blood gas | Blood routine | Blood biochemical
Number | 2 | 2 | 4 | 5 | 7
Total Weight | 0.049 | 0.095 | 0.169 | 0.21 | 0.308
By summing up the weights of all the key indicators, the calculation results show that the weight sums of Blood Biochemical and Blood Routine are the highest, at 0.308 and 0.21, respectively. To a certain extent, this proves the important value of Blood Biochemical and Blood Routine in the prediction of trauma-related sepsis, followed by Arterial Blood Gas and Coagulation Function, while Vital Signs have the least influence on the prediction of trauma-related sepsis (Fig. 2).
3.2 Time Window Forecast

Under the time window model of the full index data set, the accuracy rate, recall rate, and precision rate of the four model algorithms are between 64% and 83%, which meets the requirements of clinical medicine. The best prediction effect is achieved by Random Forest. From the perspective of time, although the prediction effect fluctuates slightly, the overall accuracy decreases as time increases, which is in line with the actual prediction logic. Moreover, the overall model performance increases with time, but the changing trend is not obvious, which proves the stability of the model and is more conducive to earlier prediction and early warning of trauma-related sepsis (Table 3, Fig. 3).

Table 3. Comparison of prediction time parameters of all indexes at different times

Predicted time | Method | F1.5 | Acc | Pre | Rec
1 h | Decision Tree | 0.6645 | 0.6452 | 0.6364 | 0.6785
1 h | Random Forest | 0.8119 | 0.7941 | 0.7762 | 0.8299
1 h | Adaboost | 0.6818 | 0.6681 | 0.6618 | 0.6921
1 h | XGBoost | 0.7770 | 0.7492 | 0.7266 | 0.8028
2 h | Decision Tree | 0.6631 | 0.6514 | 0.6464 | 0.6718
2 h | Random Forest | 0.8123 | 0.7881 | 0.7637 | 0.8367
2 h | Adaboost | 0.6586 | 0.6562 | 0.6559 | 0.6610
2 h | XGBoost | 0.7753 | 0.7452 | 0.7209 | 0.8028
3 h | Decision Tree | 0.6559 | 0.6427 | 0.6366 | 0.6655
3 h | Random Forest | 0.8099 | 0.7895 | 0.7704 | 0.8305
3 h | Adaboost | 0.6808 | 0.6633 | 0.6541 | 0.6944
3 h | XGBoost | 0.7718 | 0.7492 | 0.7331 | 0.7921
4 h | Decision Tree | 0.6598 | 0.6511 | 0.6477 | 0.6667
4 h | Random Forest | 0.8073 | 0.7845 | 0.7634 | 0.8299
4 h | Adaboost | 0.6752 | 0.6684 | 0.6653 | 0.6802
4 h | XGBoost | 0.7690 | 0.7469 | 0.7296 | 0.7887
According to the model AUC values, all the models are higher than 0.64 under different time window parameter settings, which can meet the dynamic requirements (Table 4). Under the time window models of Key Indicator Sets 1–4, the performance of each machine learning method is still above 63%, and the accuracy, recall, and precision of Random Forest are all the best. Although the effect of different indicator sets fluctuates slightly, in general the prediction effect decreases as the number of key indicators decreases (Fig. 4). According to the model AUC values, all models are higher than 0.63 under different time window parameter settings, among which Random Forest performs the best, followed by XGBoost, Adaboost, and Decision Tree. With the decrease of the amount
Fig. 2. ROC curve and AUC of internal validation of each model in the full index data set (panels for the 1 h, 2 h, 3 h and 4 h time windows).
Fig. 3. AUC comparison of internal validation among models in the full indicator set.
of key index data, the prediction effect has a certain tendency to decrease, but the decrease is not obvious and does not affect the application requirements of the dynamic real-time time window prediction model for trauma-related sepsis; this is in line with clinical practice and proves the generalization ability of the time window prediction model for trauma-related sepsis (Fig. 5).
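For reference, the internal validation reported in Tables 3 and 4 can be reproduced in outline with the following Python sketch, which trains the four classifiers and computes F1.5 (the F-beta score with beta = 1.5), accuracy, precision, recall and AUC. The prepared train/test splits and the model settings are assumptions for illustration, not taken from the paper.

# Minimal sketch of the four-model evaluation; X_train/X_test/y_train/y_test prepared elsewhere.
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import (fbeta_score, accuracy_score, precision_score,
                             recall_score, roc_auc_score)
from xgboost import XGBClassifier

def evaluate(models, X_train, y_train, X_test, y_test):
    rows = []
    for name, model in models.items():
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        proba = model.predict_proba(X_test)[:, 1]
        rows.append({"Method": name,
                     "F1.5": fbeta_score(y_test, pred, beta=1.5),
                     "Acc": accuracy_score(y_test, pred),
                     "Pre": precision_score(y_test, pred),
                     "Rec": recall_score(y_test, pred),
                     "AUC": roc_auc_score(y_test, proba)})
    return rows

models = {"Decision Tree": DecisionTreeClassifier(),
          "Random Forest": RandomForestClassifier(n_estimators=200),
          "Adaboost": AdaBoostClassifier(),  # decision-tree-based boosting by default
          "XGBoost": XGBClassifier(eval_metric="logloss")}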
Table 4. Comparison of prediction time parameters of key index datasets in different sets

Method | Set | N | F1.5 | Acc | Pre | Rec
Decision Tree | 1 | 20 | 0.6468 | 0.6339 | 0.6284 | 0.6559
Random Forest | 1 | 20 | 0.8132 | 0.7910 | 0.7695 | 0.8356
Adaboost | 1 | 20 | 0.6774 | 0.6655 | 0.6585 | 0.6870
XGBoost | 1 | 20 | 0.7834 | 0.7545 | 0.7317 | 0.8102
Decision Tree | 2 | 14 | 0.6645 | 0.6506 | 0.6449 | 0.6740
Random Forest | 2 | 14 | 0.8106 | 0.7881 | 0.7672 | 0.8328
Adaboost | 2 | 14 | 0.6748 | 0.6562 | 0.6473 | 0.6881
XGBoost | 2 | 14 | 0.7729 | 0.7486 | 0.7279 | 0.7955
Decision Tree | 3 | 9 | 0.6817 | 0.6610 | 0.6509 | 0.6966
Random Forest | 3 | 9 | 0.8083 | 0.7898 | 0.7716 | 0.8266
Adaboost | 3 | 9 | 0.6887 | 0.6740 | 0.6669 | 0.7006
XGBoost | 3 | 9 | 0.7873 | 0.7565 | 0.7317 | 0.8164
Decision Tree | 4 | 7 | 0.6834 | 0.6548 | 0.6424 | 0.7040
Random Forest | 4 | 7 | 0.7966 | 0.7712 | 0.7485 | 0.8209
Adaboost | 4 | 7 | 0.6655 | 0.6455 | 0.6363 | 0.6797
XGBoost | 4 | 7 | 0.7658 | 0.7398 | 0.7200 | 0.7893
Fig. 4. ROC curve and AUC of internal validation of each model in different sets (panels for Key Indicator Sets 1–4).
Fig. 5. AUC comparison of internal validation among models in the key indicator set.
3.3 Risk Score

In this study, the two categories with the highest weighted sum of feature importance, Blood Biochemical and Blood Routine, are selected to construct a risk scoring tool. Creatinine is taken as a constant reference index, and the results are as follows (Table 5).

Table 5. Trauma-related sepsis risk scoring tool

Key indicators | Group | wij | wiREF | β | D | B | S
Platelet | 0 ≤ x < 20 | 10 | 250 | −0.0107 | 2.5650 | 1.0185 | 3
Platelet | 20 ≤ x < 50 | 35 | 250 | −0.0107 | 2.2978 | 1.0185 | 2
Platelet | 50 ≤ x < 100 | 75 | 250 | −0.0107 | 1.8703 | 1.0185 | 2
Platelet | 100 ≤ x < 400 | 250 | 250 | −0.0107 | 0 | 1.0185 | 0
Platelet | x ≥ 400 | 658 | 250 | −0.0107 | −4.3605 | 1.0185 | −4
Creatinine | 0 ≤ x < 88 | 44 | 132 | 0.0078 | −0.6894 | 1.0185 | −1
Creatinine | 88 ≤ x < 176 | 132 | 132 | 0.0078 | 0 | 1.0185 | 0
Creatinine | 176 ≤ x < 308 | 242 | 132 | 0.0078 | 0.8618 | 1.0185 | 1
Creatinine | 308 ≤ x < 440 | 374 | 132 | 0.0078 | 1.8960 | 1.0185 | 2
Creatinine | x ≥ 440 | 410 | 132 | 0.0078 | 2.1780 | 1.0185 | 2
Urea Nitrogen | 0 ≤ x < 1.8 | 1.79 | 8.9 | 0.0942 | 0.6701 | 1.0185 | 1
Urea Nitrogen | 1.8 ≤ x < 7.1 | 8.9 | 8.9 | 0.0942 | 0 | 1.0185 | 0
Urea Nitrogen | 7.1 ≤ x < 21 | 14 | 8.9 | 0.0942 | 0.4806 | 1.0185 | 0
Urea Nitrogen | x ≥ 21 | 25.5 | 8.9 | 0.0942 | 1.5645 | 1.0185 | 2
Sodium | 0 ≤ x < 135 | 107.5 | … | … | … | … | …
Sodium | 135 ≤ x ≤ 145 | 140 | … | … | … | … | …
Sodium | x > 145 | 155 | … | … | … | … | …
(continued)
(pi − ri)DiS, ki > (pi − ri)/(pi − ri + Fi).
Fig. 2. The supplier's strategy in different circumstances (regions I–III plotted against the probability of recognition).
From Proposition 1, our main observations are as follows. First, if the supplier's production cost is relatively small, producing qualified products will not bring a severe loss, so he will choose to produce qualified products. As the production cost gradually increases, if consumers' recognition probability is relatively low, the penalty imposed by the platform for producing inferior products is relatively small, so he will choose to produce inferior products. Finally, if both the supplier's production cost and consumers' recognition probability are high, either quality choice would make the supplier's short-term profit non-positive; as a result, he will choose not to produce any products (Fig. 2). In the long run, once one supplier provides a low-quality product to consumers, consumers' trust in the sharing salespersons will be reduced if they can recognize the inferior quality. After that, when salespersons recommend the other supplier's product, consumers' valuation of this new product will also drop greatly. In other words, the
supplier's long-term profit is simultaneously affected by the earlier quality choices of the two suppliers. Consequently, the long-term profit πiL of supplier i is

πiL = [1 − (1 − qi)ki]πi − (1 − qi)ki ni − (1 − q3−i)k3−i n3−i    (6)
Let ni denote the loss of externality for producing inferior products, and for analytical tractability, we assume that the total profit πiT of supplier i is πiT = πiS + πiL. Table 1 presents the quality choice game, where the first expression in each cell is the expected profit of supplier 1 and the second is that of supplier 2.

Table 1. The suppliers' expected profit in quality choice game.

q1 = 1, q2 = 1: (p1 − r1)(1 − p1 + α²r1) − C1 + π1 ; (p2 − r2)(1 − p2 + α²r2) − C2 + π2
q1 = 0, q2 = 1: [(1 − k1)(p1 − r1) − k1F1](1 − p1 + α²r1) + (1 − k1)π1 − k1n1 ; (p2 − r2)(1 − p2 + α²r2) − C2 + π2 − k1n1
q1 = 1, q2 = 0: (p1 − r1)(1 − p1 + α²r1) − C1 + π1 − k2n2 ; [(1 − k2)(p2 − r2) − k2F2](1 − p2 + α²r2) + (1 − k2)π2 − k2n2
q1 = 0, q2 = 0: [(1 − k1)(p1 − r1) − k1F1](1 − p1 + α²r1) + (1 − k1)π1 − k1n1 − k2n2 ; [(1 − k2)(p2 − r2) − k2F2](1 − p2 + α²r2) + (1 − k2)π2 − k1n1 − k2n2
Let β and γ denote the probabilities that the two suppliers produce qualified products, respectively. Lemma 1 characterizes all the possible Nash equilibria of the quality choice game.

Lemma 1. The Nash equilibria of the suppliers' quality choice game are characterized as follows: there exist a lower bound Ci = min{(pi − ri)DiS + πi − k3−i n3−i, ki[(pi − ri + Fi)DiS + ni + πi]} and an upper bound C̄i = min{(pi − ri)DiS + πi, ki[(pi − ri + Fi)DiS + ni + πi]} on the supplier's production cost, such that the quality choice game has four pure-strategy Nash equilibria and one mixed-strategy Nash equilibrium.

Proposition 2. On the social-enabled retailing e-commerce platform, one supplier's quality choice affects the total profit of both himself and the other supplier on the platform. If one supplier chooses to produce inferior products, the probabilities of the other supplier choosing to produce qualified products or inferior products both decrease; in other words, the other supplier becomes more willing not to produce any products.

It can be concluded from Proposition 2 that if one supplier chooses to produce inferior products (see Fig. 3b), the critical value of the production cost decreases owing to the loss of externality caused by inferior products, so the probability of the other supplier producing qualified products decreases; this observation is consistent with our common sense.
Fig. 3. The influence of supplier's quality choice on equilibrium strategy: (a) supplier 1 chooses qualified products; (b) supplier 1 chooses inferior products.
However, the probability of the other supplier producing inferior products also decreases, which is counterintuitive. Usually, if one supplier chooses to produce inferior products, the other is also willing to produce inferior products because it is more beneficial for him. But it should be noted that the choice of supplier 1 greatly decreases supplier 2's profit through the loss of externality, possibly to a non-positive level; therefore, supplier 2 will be more willing to choose not to produce. According to Lemma 1 and Proposition 2, we can find that if the platform only relies on consumers' recognition to punish suppliers for producing inferior products, this will lead to continued fraud, because consumers cannot recognize inferior products completely. In addition, because all the suppliers share the salespersons on the social-enabled retailing e-commerce platform, once one supplier commits fraud, the other suppliers on the platform will be unwilling to produce. If this happens, it will tremendously hinder the development of the entire platform. As a result, the platform needs to design a quality incentive mechanism to manage the product quality of the suppliers, which is the key to developing social-enabled retailing e-commerce in the long run.
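To make the equilibrium analysis above concrete, the following Python sketch checks the pure-strategy Nash equilibria of the 2 × 2 quality choice game in Table 1. The payoff entries are assumed to be computed elsewhere from the suppliers' short- and long-term profits, and the numeric values in the example are purely hypothetical.

# Minimal sketch: pure-strategy Nash equilibria of a 2 x 2 quality choice game.
from itertools import product

def pure_nash_equilibria(payoffs):
    """payoffs[(q1, q2)] = (profit of supplier 1, profit of supplier 2)."""
    equilibria = []
    for q1, q2 in product((1, 0), repeat=2):
        u1, u2 = payoffs[(q1, q2)]
        # no profitable unilateral deviation for either supplier
        if u1 >= payoffs[(1 - q1, q2)][0] and u2 >= payoffs[(q1, 1 - q2)][1]:
            equilibria.append((q1, q2))
    return equilibria

# Hypothetical numeric payoffs purely for illustration.
example = {(1, 1): (3.0, 2.5), (1, 0): (1.0, 3.0),
           (0, 1): (3.5, 0.5), (0, 0): (0.8, 0.7)}
print(pure_nash_equilibria(example))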
4 Quality Inspection Game

In the "outer" game, the platform (she) first decides on the punishment contract according to the inspection cost. After that, the suppliers choose product quality according to production cost and consumers' recognition probability. Subsequently, the platform decides whether or not to inspect the suppliers; if she chooses to inspect, inferior products are detected with a certain probability and the platform imposes penalties on the suppliers. Finally, after the products are delivered to consumers, if consumers recognize the inferior products and request a refund, the platform punishes the suppliers once again. The situation where the platform inspects two suppliers at the same time can be decomposed into two circumstances in which the platform inspects the product quality of each supplier separately, so this paper only concentrates on the game between the platform and supplier i.
For simplicity, we suppose that the supplier (he) only produces 1 unit of product, priced at pi. He has two quality choices: he can produce the qualified product (qi = 1) at production cost Ci > 0, or the inferior product (qi = 0) at production cost Ci = 0. The platform also has two options. She can choose to inspect the supplier (T = 1) with inspection cost Cϕ; in that case the qualified product passes the inspection test, while the inferior product is detected with probability ϕi and the platform imposes penalty Fi on the supplier, so the inferior product is delivered to consumers with probability 1 − ϕi. She can also choose not to inspect (T = 0) with no inspection cost; in that case the inferior product is certainly delivered to consumers. After the product is delivered, consumers can recognize the inferior product with probability ki and request a refund; the platform then incurs an economic loss E (including return cost, reputation loss, etc.) and punishes the supplier Fi once again. Table 2 presents the quality inspection game, where the first expression in each cell is the expected profit of supplier i and the second is the expected profit of the platform.

Table 2. The supplier's and platform's expected profit in quality inspection game.

qi = 1, T = 1: pi − ri + πi − Ci ; −Cϕ
qi = 1, T = 0: pi − ri + πi − Ci ; 0
qi = 0, T = 1: (1 − ϕ)(1 − ki)(pi − ri + πi) − ϕFi − (1 − ϕ)ki(ni + Fi) ; −Cϕ + [ϕ + (1 − ϕ)ki]Fi − (1 − ϕ)ki E
qi = 0, T = 0: (1 − ki)(pi − ri + πi) − ki ni − ki Fi ; ki Fi − ki E
Let ρ and μ denote the probabilities of supplier i producing qualified products and of the platform inspecting the supplier, respectively. The expected profits of the supplier and the platform then follow from Table 2. Lemma 2 characterizes all the possible Nash equilibria of the quality inspection game.
Lemma 2. The Nash equilibria of the quality inspection game are characterized for all the regions as follows:
Region I: ρ = 1, μ = 0 is a pure strategy Nash equilibrium if 0 < Ci ≤ ki (pi − ri + πi + ni + Fi ). Region II: ρ = 0, μ = 0 is a pure strategy Nash equilibrium if Ci > ki (pi − ri + πi + ni + Fi ), Cϕ > ϕ[(1 − ki )Fi + ki E]. Region III: ρ = ρ0 , μ = μ0 is a mixed strategy Nash equilibrium if ki (pi − ri + πi + ni + Fi ) < Ci ≤ (ϕ + ki − ϕki )(pi − ri + πi + Fi ) + (1 − ϕ)ki ni , Cϕ < ϕ[(1 − ki )Fi + ki E].
Region IV: ρ = 0, μ = 1 is a pure strategy Nash equilibrium if Ci > (ϕ + ki − ϕki)(pi − ri + πi + Fi) + (1 − ϕ)ki ni, Cϕ < ϕ[(1 − ki)Fi + ki E].

Fig. 4. The equilibrium strategy in quality inspection game (regions I–IV over the supplier's production cost and the platform's inspection cost).
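A minimal Python sketch of Lemma 2 is given below: given the supplier's production cost Ci and the platform's inspection cost Cϕ, it returns the equilibrium region of the quality inspection game. The parameter names mirror the text, and any numeric inputs used in a call are illustrative only.

# Minimal sketch of the Lemma 2 region classification.
def inspection_game_region(C_i, C_phi, p_i, r_i, pi_i, n_i, F_i, k_i, phi, E):
    margin = p_i - r_i + pi_i
    c_low = k_i * (margin + n_i + F_i)                                   # Region I boundary
    c_high = (phi + k_i - phi * k_i) * (margin + F_i) + (1 - phi) * k_i * n_i
    inspect_gain = phi * ((1 - k_i) * F_i + k_i * E)                     # threshold on C_phi
    if C_i <= c_low:
        return "Region I: (rho=1, mu=0) - qualified product, no inspection"
    if C_phi > inspect_gain:
        return "Region II: (rho=0, mu=0) - inferior product, no inspection"
    if C_i <= c_high:
        return "Region III: mixed strategies (rho_0, mu_0)"
    return "Region IV: (rho=0, mu=1) - inferior product, full inspection"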
From Lemma 2, our main observations are as follows. First, if the supplier's production cost is low enough, the supplier will produce the qualified product and the platform will choose not to inspect. In addition, if the platform's inspection cost is high enough, the platform will choose not to inspect, so the supplier will produce the inferior product. When the platform's inspection cost is relatively small, as the production cost gradually increases, the supplier changes from producing the inferior product with a certain probability to always producing the inferior product, and the platform changes from inspecting with a certain probability to a complete inspection (Fig. 4). After characterizing all the Nash equilibria of the quality inspection game, the next step is to find the optimal punishment contract that induces each Nash equilibrium. Consequently, the optimal punishment contract can be obtained by solving the following mathematical program:

max_{Fi ≥ 0} πP(Fi | ρ, μ)    (7)
s.t. πS(Fi | 1, μ) ≥ 0    (8)
πS(Fi | 1, μ) ≥ πS(Fi | 0, μ)    (9)
Notice that the objective function (7) is the expected profit of the platform, constraint (8) is the individual rationality constraint that ensures a non-negative profit for the supplier choosing the qualified product, and constraint (9) is the incentive compatibility constraint that ensures the supplier earns more profit by choosing the qualified product. Proposition 3 characterizes the optimal punishment contract in different circumstances.
Proposition 3. The optimal punishment contract can be described as follows:
Region I: Fi* = 0 if a1 Ci + b1 ≤ Cϕ ≤ a2 Ci + b2.
Region II: Fi* = Fi1 = Ci/ki − (pi − ri + πi + ni) if Cϕ > a2 Ci + b2.
Region III: Fi* = Fi2 = [Ci − (1 − ϕ)ki ni]/(ϕ − ϕki + ki) − (pi − ri + πi) if Cϕ < min{a1 Ci + b1, a2 Ci + b2}.
For the detail of the above parameters, see below:
a1 = ϕ(1 − ki)/(ϕ − ϕki + ki), a2 = ϕ(1 − ki)/ki,
b1 = ϕ[−(1 − ki)(pi − ri + πi + ni) + ki E],
b2 = ϕ[−(1 − ki)(pi − ri + πi) − ki(1 − ki)(1 − ϕ)ni/(ϕ − ϕki + ki) + ki E].

Fig. 5. The optimal punishment contract (regions I–III over the supplier's production cost and the platform's inspection cost).
It can be concluded from Proposition 3 that Fi1 > Fi2 if ni < (1 − ki)(pi − ri + πi + Fi)/ki; in other words, if the loss of externality is relatively small, the platform will punish the supplier more when she chooses not to inspect (Fig. 5). Besides, we can find that the optimal punishment increases with the supplier's production cost, owing to the fact that a higher production cost pushes the supplier toward producing the inferior product; as a result, the optimal punishment should be higher to prevent the supplier from committing fraud. However, the optimal punishment decreases with the loss of externality, which is counterintuitive. Usually, if the loss of externality is higher, the punishment should be much higher. But it should be noted that a higher loss of externality reduces the supplier's profit, possibly to a non-positive level; at that point, a higher punishment for preventing fraud is unnecessary.
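Under the same caveat, the optimal punishment contract of Proposition 3 can be evaluated numerically as follows. The thresholds a1, a2, b1, b2 use the parameter definitions as reconstructed above, so this is a sketch rather than a definitive implementation, and any numeric inputs are illustrative.

# Minimal sketch of the optimal punishment F_i* per Proposition 3.
def optimal_punishment(C_i, C_phi, p_i, r_i, pi_i, n_i, k_i, phi, E):
    margin = p_i - r_i + pi_i
    a1 = phi * (1 - k_i) / (phi - phi * k_i + k_i)
    a2 = phi * (1 - k_i) / k_i
    b1 = phi * (-(1 - k_i) * (margin + n_i) + k_i * E)
    b2 = phi * (-(1 - k_i) * margin
                - k_i * (1 - k_i) * (1 - phi) * n_i / (phi - phi * k_i + k_i)
                + k_i * E)
    if a1 * C_i + b1 <= C_phi <= a2 * C_i + b2:
        return 0.0                                                       # Region I
    if C_phi > a2 * C_i + b2:
        return C_i / k_i - (margin + n_i)                                # Region II: F_i^1
    return (C_i - (1 - phi) * k_i * n_i) / (phi - phi * k_i + k_i) - margin  # Region III: F_i^2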
5 Conclusions

On the social-enabled retailing e-commerce platform, if the platform only relies on consumers' recognition to punish suppliers for producing inferior products, this will lead to continued fraud because consumers cannot recognize inferior products completely. Because all the suppliers share the salespersons on the social-enabled retailing
e-commerce platform, once one supplier commits fraud, the other suppliers on the platform will be unwilling to produce. As a result, the platform needs to design a quality incentive mechanism to manage the product quality of the suppliers. In the optimal punishment contract, if the loss of externality is relatively small and the supplier prefers the inferior product, the platform will punish the supplier more when she chooses not to inspect. In addition, when the production cost is relatively high or the loss of externality is rather small, the platform will also punish the supplier much more.
Decision-Making and Impact of Blockchain on Accounts Receivable Financing
Mengqi Hao(B) and Jingzhi Ding
School of Economics and Management, Beijing Jiaotong University, Beijing, China
[email protected]
Abstract. This paper discusses the decision-making and influence of blockchain technology on accounts receivable financing in supply chain finance. In recent years, supply chain finance has become an important way to alleviate the financing difficulties of small and medium-sized enterprises, but problems such as high financing costs and frequent fraud have still not been solved. As the underlying technology of digital currency, blockchain has the characteristics of immutability, traceability and smart contracts, which can help supply chain finance solve these problems. We take accounts receivable financing (ARF) in supply chain finance as the research object, sort out the ARF transaction process with and without blockchain technology, and then investigate two collaborative accounts receivable financing decision-making schemes: accounts receivable (AR) financing and blockchain-embed accounts receivable (BAR) financing. The mean-variance method is used to discuss the optimal performance of the two financing decisions. The results show that blockchain-embed accounts receivable financing leads to lower financing risks, and when the cost of credit investigation and the additional benefits are high enough, the use of blockchain can bring higher expected benefits to financing participants.

Keywords: Blockchain · Accounts receivable financing · Mean-variance
1 Introduction

In recent years, with increasingly fierce market competition, the supply chain has become more complex. Supply chain finance (SCF) has become an important factor in promoting global supply chain development and has great potential. The 2021 BCR research report shows that due to the impact of COVID-19, the growth rate of the global supply chain finance business increased significantly in 2020, and the scale of the global supply chain finance market in 2020 increased by 35% compared with 2019 [1]. Supply chain finance refers to relying on core enterprise credit to provide financial support for small, medium and micro enterprises in the supply chain, enhance the viability of the entire supply chain, and improve the efficiency of supply chain capital operation. SCF has three main financing models: accounts receivable financing, advance payment financing, and inventory financing. Among them, accounts receivable financing (ARF) is the most widely used. For example, Hofmann [2] establishes an
accounts receivable platform (ARP) program to investigate whether SCF solutions have the potential to create tripartite value in the international trade arena. Peng [3] studies the optimal financing and operations strategy with accounts receivable in a supply chain with uncertain yields. However, some related studies show that the optimal financing decision under supply chain coordination can maximize the expected benefits of the entire supply chain, and a typical method of supply chain coordination is the revenue sharing contract. Li et al. [4] indicate that the revenue-sharing coordination mechanism based on the retailer's advance payment can achieve supply chain coordination and maximize the profits of the supply chain. Cao et al. [5] show that a capital-constrained retailer can earn more profit than a well-funded retailer when coordinating through a revenue sharing contract. Nevertheless, in recent years, frequent financing fraud incidents have seriously hindered the development of supply chain finance. For instance, a pharmaceutical company defrauded nearly 40 billion yuan through forged accounts receivable invoices in 2018, and in 2019 a trading company also defrauded 3.4 billion yuan in loans through forged accounts receivable documents. In fact, the fundamental reasons for these incidents are that the authenticity of the trade between the two parties cannot be confirmed, company information is not interoperable, documents are forged, and the use and repayment of financing funds are uncontrollable. As a significant emerging technology, blockchain has attracted the attention of many industrial participants since its emergence and has been applied to many fields such as the Internet of Things, security services, finance and supply chain [6]. Blockchain technology has the characteristics of decentralization, non-tampering, traceability, transparency, and smart contracts [7], which can help solve the above problems in SCF. Chod et al. [8, 9] find that blockchain technology furnishes the ability to secure favorable financing terms at lower signaling costs. Hofmann et al. [10] show that the blockchain could deliver substantial benefits for all parties involved in an SCF transaction, promising to expedite the processes and lower the overall costs of financing. Ma et al. [11] introduce a privacy protection mechanism applied in SCF and use a supply chain finance case to explain it. Du et al. [12] take advantage of blockchain to build a novel type of SCF platform, which establishes trust, reduces costs, and provides better financial services to the relevant parties in the supply chain. It can be found that most of the above literature discusses the mechanism design, advantages and disadvantages of blockchain in SCF. We develop mathematical models to analyze the influence of blockchain technology on the decision-making of accounts receivable financing in supply chain finance, which is of deep concern to managers and scholars. In the supply chain, owing to market uncertainty, including demand uncertainty and supply uncertainty, the performance of the supply chain becomes unstable, so decision makers often have to decide under risk. Li et al. [13] use the mean-variance method to study the impact of the retailer's risk-averse behavior on the expected profits and the utilities of supply chain members. Zhuo et al.
[14] investigate the conditions for coordinating the supply chain by using option contracts in a two-echelon supply chain under the mean-variance framework. Choi [15] uses the mean-variance model to compare the optimal profit decisions of the supply chain before and after the adoption of blockchain technology. He proves that, under the mean-risk dominating policy,
blockchain can bring a higher expected profit and a lower risk for the whole supply chain and its participants. Different from Choi's study, we focus on accounts receivable financing (ARF) in supply chain finance and study the decision-making and impact of blockchain on ARF under supply chain coordination. Firstly, we analyze the accounts receivable financing transaction processes with and without blockchain technology. Secondly, we construct two financing decision models: accounts receivable (AR) financing and blockchain-embed accounts receivable (BAR) financing. Thirdly, we discuss the optimal decision when coordination of the financing supply chain is achieved. Finally, we use the mean-variance model to analyze the impact of blockchain on the expected benefits and risk of the accounts receivable financing supply chain.
2 Accounts Receivable Financing Decision-Making Without Blockchain

2.1 Receivables Financing Processes

Accounts receivable financing refers to the arrangement in which the lender provides operating funds to financing enterprises against the accounts receivable generated by actual trade contracts signed between the financing enterprise (the seller) and the core enterprise (the buyer). In accounts receivable financing without blockchain, namely Model AR, we assume that the financing enterprise entrusts the core enterprise to pay the financing principal and interest to the lender at the end of the repayment period, and the financing contract is completed after the lender receives the funds. Figure 1 depicts the transaction process of accounts receivable financing.
Fig. 1. Transaction process with receivables financing
We consider a two-echelon supply chain system (e.g., the retail industry) including a supplier and a retailer under the revenue sharing contract. The supplier refers to the seller in the upstream of the supply chain, and the retailer refers to the buyer in the downstream. The retailer orders products of quantity q at a unit wholesale price w and sells them at a unit price p, and the remaining products are disposed of at a unit salvage value v. The market demand of the products is x, with probability density function f(x) and cumulative distribution function F(x). The supplier and the retailer are
long-term partners. In order to obtain more profits and achieve supply chain coordination, the retailer and the supplier sign a revenue sharing contract and negotiate the revenue sharing coefficient α and the wholesale price w. After the supplier completes the order, the retailer receives the products and sends the accounts receivable document to the supplier. When the accounts receivable remain unpaid for a long time and the supplier's remaining funds cannot maintain operations, the supplier pledges the accounts receivable to obtain a loan from the lender. After reviewing the transaction and the supplier's credit, the lender agrees to the loan and determines the loan-to-value rate θ. At the end, the retailer pays the loan principal and interest to the lender and the remaining revenue to the supplier.

2.2 Notations and Assumptions

The notations used are as follows:
S, R, L, SC: the supplier, the retailer, the lender, and the financing supply chain;
q: ordering quantity of the retailer;
p: unit retail price, including delivery costs;
w: unit wholesale price;
c: unit product cost;
v: unit salvage value, at which the retailer disposes of leftovers at the end of the sale period, where v < w < c < p;
x: the market demand for the product, which is random;
f(x) and F(x): the probability density function and cumulative distribution function of market demand;
r: the lending rate;
r0: the risk-free rate;
θ: the loan-to-value ratio, 0 ≤ θ ≤ 1;
T: the lending time;
M: the credit investigation cost of the lender;
α: the revenue share proportion granted to the supplier, 0 ≤ α ≤ 1.

Assumptions are made as follows:
Assumption 1. The participants are risk-neutral and completely rational.
Assumption 2. The retailer and the supplier sign a revenue sharing contract and negotiate to determine the wholesale price w (very low) and the revenue sharing coefficient α.
Assumption 3. The lender provides financing based on the supplier's sales revenue, excluding sharing revenue.
Assumption 4. Default by suppliers and retailers is not considered.
2.3 Optimal Decision-Making Under Model AR

The revenue sharing contract aims to improve the performance of the supply chain and maximize its total profit. Therefore, under the revenue sharing contract, this section constructs the expected profit function with the ordering quantity q as the decision variable; the purpose is to maximize the expected profit of the accounts receivable financing supply chain without blockchain. The financing supply chain includes the retailer, the supplier and the lender. Under the revenue sharing contract, when the ordering quantity is q, the remaining amount at the end of the sale period is ∫₀^q (q − x)f(x)dx. According to Assumption 3, the receivables pledged to the lender should be wq, which is the supplier's sales revenue; the supplier's sharing revenue is pα[q − ∫₀^q (q − x)f(x)dx], the product cost is cq, and the financing interest is wqθTr. Then we can get the expected profit function of each participant in the financing supply chain as follows. The supplier's expected profit function is:

E(πS) = wq + pα[q − ∫₀^q (q − x)f(x)dx] − cq − wqθTr    (1)

Similarly, the retailer's expected profit function can be expressed as:

E(πR) = [p(1 − α) − w]q − [p(1 − α) − v]∫₀^q (q − x)f(x)dx    (2)

By ∂E(πR)/∂q = p(1 − α) − w − [p(1 − α) − v]F(q) and ∂²E(πR)/∂q² < 0, letting ∂E(πR)/∂q = 0, we can obtain the retailer's optimal ordering quantity qR* as:

qR* = F⁻¹((p(1 − α) − w)/(p(1 − α) − v))    (3)

In ARF, the lender needs to review the repayment ability and willingness of the supplier and the retailer before making a loan; we define this as the credit investigation cost M. The expected profit function of the lender is:

E(πL) = wqθT(r − r0) − M    (4)

Therefore, from Eqs. (1), (2) and (4), we can express the expected profit function of Model AR as follows:

E(πSC) = E(πR) + E(πS) + E(πL) = (p − c − wθTr0)q − (p − v)∫₀^q (q − x)f(x)dx − M    (5)
The first derivative with respect to q is ∂E(πSC)/∂q = (p − c − wθTr0) − (p − v)F(q), and it is easy to find ∂²E(πSC)/∂q² < 0. So the optimal ordering quantity qSC* for Model AR is:

qSC* = F⁻¹((p − c − wθTr0)/(p − v))    (6)
Under the coordinated management of the ARF, the optimal ordering quantities of the retailer and of the whole supply chain should be the same, qR* = qSC*. So we can get the following equation:

(p(1 − α) − w)/(p(1 − α) − v) = (p − c − wθTr0)/(p − v)

Solving the above equation, the relationship between the wholesale price w and the revenue sharing coefficient α can be obtained as:

wSC = [c(p − v) − pα(c − v)]/[(p − v)(1 − θTr0) + pαθTr0]    (7)

Taking the derivative of Eq. (7), we have dwSC/dα = {−(c − v)[(p − v)(1 − θTr0) + pαθTr0(1 + p)] + c(p − v)pθTr0}/[(p − v)(1 − θTr0) + pαθTr0]² < 0. Thus, under the revenue sharing contract, if α decreases, wSC increases.
Proposition 1. For the accounts receivable financing supply chain, the critical parameters of the revenue sharing contract can achieve supply chain coordination by setting the following conditions: wSC = [c(p − v) − pα(c − v)]/[(p − v)(1 − θTr0) + pαθTr0], and 0 < α < 1. Moreover, the wholesale price wSC is decreasing in α.

Proposition 1 shows the relationship between the parameters of the revenue sharing contract under Model AR coordination. The wholesale price and the revenue sharing coefficient are important factors that affect the coordination of the financing supply chain. Proposition 1 indicates that the higher the revenue sharing coefficient, the lower the wholesale price. In fact, the retailer is the core enterprise and has greater bargaining power in the supply chain, which leads to a large shared revenue (1 − α). As a follower, the supplier can only get a small revenue share α, but it can obtain a higher wholesale price w. This is beneficial to all financing participants: retailers can optimize their cash flow and obtain more shared benefits, suppliers can obtain considerable financing funds and more profits from the cooperation, and lenders can also earn more profits. However, in actual accounts receivable financing transactions, the high financing risk forces the lender to spend more on credit investigation. This is not beneficial for lenders, which means that sometimes they do not lend to smaller companies. Under coordination, from Eqs. (6) and (7), we can find that the optimal ordering quantity and the wholesale price are influenced by the loan-to-value rate (we omit the discussion of the relationship between the revenue sharing coefficient and the loan-to-value rate). The first derivatives with respect to θ are dqSC*/dθ = −wTr0/[(p − v)f(qSC*)] < 0 and dwSC/dθ = Tr0(p − pα − v)[(p − pα − v)c + pαv]/[(p − v)(1 − θTr0) + pαθTr0]² > 0. The results are given in the following corollary.
Corollary 1. In Model AR, the optimal ordering quantity qSC* is decreasing in θ, but the wholesale price is increasing in θ.
Corollary 1 implies the impact of the loan-to-value rate on the financing supply chain members. An excessively high loan-to-value rate will increase the wholesale price, which is beneficial to suppliers and lenders: it can increase suppliers' financing funds and lenders' profits. The accompanying reduction of the revenue sharing coefficient may affect the revenue share of suppliers. In addition, a higher loan-to-value ratio also reduces the retailer's optimal ordering quantity. Therefore, it is very important to set a reasonable loan-to-value ratio in ARF, which can optimize the profits of the entire supply chain. Substituting Eqs. (6) and (7) into Eq. (5), we can obtain the maximum expected profit function of Model AR under the revenue sharing contract:

E(πSC)* = (p − c − wSC θTr0)qSC* − (p − v)∫₀^{qSC*} (qSC* − x)f(x)dx − M    (8)
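A minimal numerical sketch of the Model AR results in Eqs. (6)–(8) is given below, assuming a uniform market demand on [0, 1] so that F(x) = x and F⁻¹(z) = z; all parameter values are illustrative and are not taken from the paper.

# Minimal sketch of Model AR under a uniform demand on [0, 1].
def model_ar_optimum(p, c, v, alpha, theta, T, r0, M):
    # Eq. (7): coordinating wholesale price for a given revenue share alpha
    w = (c * (p - v) - p * alpha * (c - v)) / ((p - v) * (1 - theta * T * r0)
                                               + p * alpha * theta * T * r0)
    # Eq. (6): optimal ordering quantity, F^{-1}(z) = z for uniform demand on [0, 1]
    q = (p - c - w * theta * T * r0) / (p - v)
    # Eq. (8): expected supply-chain profit, with int_0^q (q - x) dx = q^2 / 2
    profit = (p - c - w * theta * T * r0) * q - (p - v) * q ** 2 / 2 - M
    return w, q, profit

print(model_ar_optimum(p=10, c=6, v=2, alpha=0.2, theta=0.8, T=1, r0=0.05, M=0.3))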
3 Accounts Receivable Financing Decision-Making with Blockchain

3.1 Receivables Financing Processes with Blockchain

Blockchain technology can simplify the ARF processes. After retailers and suppliers upload transaction information and electronic accounts receivable to the Blockchain-embed SCF Platform (BcSCFP), suppliers can apply for financing and obtain loans online through the platform, which effectively improves financing efficiency. In blockchain-embed accounts receivable financing, namely Model BAR, the blockchain can record transaction information in the supply chain at any time, and its non-tamperable feature ensures the authenticity and transparency of the data. Receivables financing participants can keep abreast of transactions and capital usage in the supply chain, and the lender can avoid fraudulent behaviors and breach of contract in the financing process. In addition, the lender can complete the review of the supplier's credit through BcSCFP without paying other costs, which greatly simplifies the financing process and reduces financing costs. Suppliers can obtain financing funds within a short period (possibly on the day of the financing application). Retailers can pass their credit to more suppliers to maintain the stability of the supply chain. Both suppliers and retailers can focus on their business development and obtain more benefits. The specific transaction process is shown in Fig. 2.

This section considers the accounts receivable financing supply chain based on BcSCFP. The supplier signs a transaction contract with the retailer and generates a smart contract through BcSCFP. After the retailer receives the products, it sends the e-voucher of the accounts receivable to the supplier. If the supplier's funds are insufficient and it wants financing, the supplier can use the electronic accounts receivable to apply for financing on BcSCFP. The lender then evaluates the supplier's credit based on the information from BcSCFP, signs a financing agreement, and generates a financing smart contract with the supplier, after which the lender pays the financing funds wqθ to the supplier. In the financing process, both the supplier and the lender use BcSCFP and need to pay the cost of using the blockchain b. After the financing expires, BcSCFP automatically deducts the loan principal and interest wqθTr from the retailer's account and transfers the remaining revenue to the supplier.
Fig. 2. Transaction process with receivables financing based on blockchain
3.2 Optimal Decision-Making Model Under BAR

Because of BcSCFP, we assume that the additional revenue of the supplier and the retailer is DS and DR, with DSC^B = DS + DR (DS = DR). We use a superscript B to denote the decisions associated with blockchain in Model BAR. The lender only pays the use cost b of BcSCFP; there is no high credit investigation cost M, and b ≪ M. At the same time, the supplier that applies for financing through the platform also needs to pay the cost b. Similar to Model AR, the purpose of this section is to maximize the expected profit of the blockchain-embed accounts receivable financing supply chain. Similar to Model AR, we can get the expected profit functions of the supplier, the retailer, and the lender in Model BAR as follows:

E(πS^B) = (w + pα)q − pα∫₀^q (q − x)f(x)dx − wqθTr − (c + b)q + DS    (9)

E(πR^B) = [p(1 − α) − w]q − [p(1 − α) − v]∫₀^q (q − x)f(x)dx + DR    (10)

E(πL^B) = wθT(r − r0)q − bq    (11)

Due to dE(πR^B)/dq = p(1 − α) − w − [p(1 − α) − v]F(q) and d²E(πR^B)/dq² < 0, it is easy to obtain the optimal ordering quantity of the retailer:

qR^B* = F⁻¹((p(1 − α) − w)/(p(1 − α) − v))    (12)

The expected profit function of Model BAR is:

E(πSC^B) = (p − c − 2b)q − (p − v)∫₀^q (q − x)f(x)dx − wθTr0 q + DSC^B    (13)

The first derivative with respect to q is ∂E(πSC^B)/∂q = (p − c − 2b − wθTr0) − (p − v)F(q), and ∂²E(πSC^B)/∂q² < 0. The optimal ordering quantity of Model BAR is:

qSC^B* = F⁻¹((p − c − 2b − wθTr0)/(p − v))    (14)
Similar to Model AR, qR^B* and qSC^B* under Model BAR should also be the same:

(p(1 − α) − w)/(p(1 − α) − v) = (p − c − 2b − wθTr0)/(p − v)

Then we can obtain the relationship between the wholesale price and the revenue sharing coefficient under coordination:

wSC^B = [(p − v)(c + 2b) − pα(c + 2b − v)]/[(p − v)(1 − θTr0) + pαθTr0]    (15)
Proposition 2. For the blockchain-embed accounts receivable financing supply chain, the critical parameters of the revenue sharing contract can achieve supply chain coordination by setting the following conditions: wSC^B = [(p − v)(c + 2b) − pα(c + 2b − v)]/[(p − v)(1 − θTr0) + pαθTr0], and 0 < α < 1.

Proposition 2 is similar to Proposition 1. It also shows the relationship between the parameters of the revenue sharing contract under coordination of Model BAR. Different from Model AR, the use cost of blockchain can affect the optimal ordering quantity and wholesale price of the financing supply chain in Model BAR, and ultimately influence the optimal decision of the financing supply chain. From Eqs. (14) and (15), we have dqSC^B*/db = −2/[(p − v)f(qSC^B*)] < 0 and dwSC^B/db = 2(p − pα − v)/[(p − v)(1 − θTr0) + pαθTr0] > 0. So we can state the influence of the blockchain on the optimal ordering quantity and the wholesale price as follows.

Corollary 2. In Model BAR, the optimal ordering quantity qSC^B* is decreasing in b, but the wholesale price w is increasing in b.
Corollary 2 reflects the impact of blockchain on financing participants. When the cost of using blockchain is too high, the optimal ordering quantity of the financing supply chain will be reduced, which is detrimental to the financing supply chain because it reduces the overall profit. Moreover, an increase in the cost of using the blockchain will increase the wholesale price and reduce the revenue sharing coefficient, which affects the revenue of suppliers and retailers. Then, substituting Eqs. (14) and (15) into Eq. (13), we can obtain the maximum expected profit of BAR under the revenue sharing contract:

E(πSC^B)* = (p − c − 2b − wSC^B θTr0)qSC^B* − (p − v)∫₀^{qSC^B*} (qSC^B* − x)f(x)dx + DSC^B    (16)
4 Impact Analysis of Blockchain on ARF

In this section, we analyze the impact of blockchain technology on the optimal ordering quantity, expected benefits, and risks of the financing supply chain.
4.1 Impact on Optimal Order Quantity

Based on the optimal ordering quantities q* of the ARF obtained in Sects. 2 and 3, this section compares the impact of blockchain in Model AR and Model BAR. The results are summarized in Proposition 3. From Eqs. (6) and (14), the optimal ordering quantities of Model AR and Model BAR satisfy F(qSC*) = (p − c − wθTr0)/(p − v) and F(qSC^B*) = (p − c − 2b − wθTr0)/(p − v). Then we can get the following equation:

F(qSC^B*) − F(qSC*) = −2b/(p − v)

So qSC^B* − qSC* = F⁻¹(−2b/(p − v)).

Proposition 3. In ARF, the optimal ordering quantities satisfy qSC^B* ≤ qSC*.
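Proposition 3 can be illustrated numerically as follows, again assuming a uniform demand on [0, 1] (F(x) = x) and holding the wholesale price fixed for the comparison; the inputs are illustrative only.

# Minimal sketch: the blockchain-embed model orders less, by exactly 2b / (p - v) under uniform demand.
def order_quantities(p, c, v, w, theta, T, r0, b):
    q_ar = (p - c - w * theta * T * r0) / (p - v)            # Eq. (6)
    q_bar = (p - c - 2 * b - w * theta * T * r0) / (p - v)   # Eq. (14)
    return q_ar, q_bar

q_ar, q_bar = order_quantities(p=10, c=6, v=2, w=5.0, theta=0.8, T=1, r0=0.05, b=0.2)
print(q_ar, q_bar, q_bar <= q_ar)  # the gap equals 2b / (p - v) = 0.05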
Proposition 3 reveals the impact of blockchain on the optimal ordering quantity of ARF under supply chain coordination. From the results, we can find that the use of blockchain is beneficial to ARF, because BcSCFP improves the transparency and authenticity of information and reduces the impact of the bullwhip effect on ARF participants. This enables retailers to place orders closer to the real market demand and to reduce inventory costs. In addition, retailers and suppliers can obtain more benefits through the financing platform.

4.2 Impact on Expected Benefits and Risks

The use of blockchain may not only bring benefits to ARF participants but may also bring some risks. This section uses the mean-variance method to analyze the expected benefits and risks of using blockchain in ARF. The expected benefit of using the blockchain, E^B, is given by Proposition 4.

Proposition 4. E^B > 0 if and only if Z^B < DSC^B + M, where Z^B = (2b − δθTr0)qSC^B* + (p − c − wSC^B θTr0 − δθTr0)γ + (p − v)∫₀^γ F(x)dx.
First, we define the expected benefits of using the blockchain as $E^{B} = E(\pi_{SC}^{B})^{*} - E(\pi_{SC})^{*}$. And define $q_{SC}^{*} = q_{SC}^{B*} + \gamma$, where $\gamma = F^{-1}\!\left(\frac{2b}{p-v}\right) > 0$; similar to $q_{SC}^{*}$, $w_{SC} = w_{SC}^{B} + \delta$, where $\delta = \frac{-2b(p-p\alpha-v)}{(p-v)(1-\theta T r_0)+p\alpha\theta T r_0} < 0$.
From Eqs. (8) and (16), we can get $E(\pi_{SC})^{*}$ and $E(\pi_{SC}^{B})^{*}$:
$$E(\pi_{SC})^{*} = (p-c-w_{SC}\theta T r_0)\,q_{SC}^{*} - (p-v)\int_{0}^{q_{SC}^{*}}(q_{SC}^{*}-x)f(x)\,dx - M$$
$$E(\pi_{SC}^{B})^{*} = (p-c-2b-w_{SC}^{B}\theta T r_0)\,q_{SC}^{B*} - (p-v)\int_{0}^{q_{SC}^{B*}}(q_{SC}^{B*}-x)f(x)\,dx + D_{SC}^{B}$$
So
$$E^{B} = E(\pi_{SC}^{B})^{*} - E(\pi_{SC})^{*} = \left[(p-c-2b-w_{SC}^{B}\theta T r_0)\,q_{SC}^{B*} - (p-v)\int_{0}^{q_{SC}^{B*}}(q_{SC}^{B*}-x)f(x)\,dx + D_{SC}^{B}\right] - \left[(p-c-w_{SC}\theta T r_0)\,q_{SC}^{*} - (p-v)\int_{0}^{q_{SC}^{*}}(q_{SC}^{*}-x)f(x)\,dx - M\right]$$
Then the expected benefits of using the blockchain $E^{B}$ are:
$$E^{B} = -Z^{B} + D_{SC}^{B} + M \qquad (17)$$
where $Z^{B} = (2b-\delta\theta T r_0)q_{SC}^{B*} + (p-c-w_{SC}^{B}\theta T r_0 - \delta\theta T r_0)\gamma + (p-v)\int_{0}^{\gamma}F(x)\,dx$.
From Proposition 4, it is easy to find the relationship between the expected benefits of blockchain, the additional revenue of BAR, and the credit cost of the lender. This means that when $D_{SC}^{B}$ and M are constant, the smaller the cost of using the blockchain, the greater the expected benefits of ARF. Similarly, if $D_{SC}^{B}$ or M is higher, the expected benefits will also be greater. This is beneficial to ARF participants because the use of blockchain technology saves a lot of financing costs. Lenders can avoid the high cost of credit investigations, save more financing costs, optimize the use of funds, and provide financing services to more companies. Suppliers and retailers can focus on their own business, the supply chain will be more stable, and they can even obtain some additional benefits. Therefore, when the expected benefits are positive, ARF participants should be willing to use the blockchain for financing. Conversely, if the expected benefits are negative, it means that the cost of using the blockchain may be high, or the additional benefits and credit costs are low. This will not appeal to the participants, and whether or not to use the blockchain for financing then needs to be weighed. Next we analyze the risk that ARF participants bear with and without blockchain. The risk is generally expressed by the variance of the expected benefits. From Eqs. (8) and (16), we also can obtain the expected profits' variance V of ARF:
$$V_{SC}(q) = (p-v)^{2}\left[2q\int_{0}^{q}F(x)\,dx - 2\int_{0}^{q}xF(x)\,dx - \left(\int_{0}^{q}F(x)\,dx\right)^{2}\right] \qquad (18)$$
$$V_{SC}^{B}(q) = (p-v)^{2}\left[2q\int_{0}^{q}F(x)\,dx - 2\int_{0}^{q}xF(x)\,dx - \left(\int_{0}^{q}F(x)\,dx\right)^{2}\right] \qquad (19)$$
Putting $q_{SC}^{*}$ and $q_{SC}^{B*}$ into Eqs. (18) and (19), we can get the conclusion of Proposition 5.
Proposition 5. In ARF, $V_{SC}^{B}(q) < V_{SC}(q)$.
Proposition 5 shows that the use of blockchain in ARF is beneficial to participants because it reduces the financing risk of participants. In fact, if participants are more concerned about financing risk, they should consider online financing on BcSCFP, which
can reduce financing risk. The conclusion of Proposition 5 is valid for retailers, suppliers, and lenders. In summary, we can conclude that the impact of blockchain technology on ARF is positive, because it can increase the expected benefits of ARF while reducing financing risk. However, as the cost of using the blockchain decreases, the optimal order quantity, expected benefits, and financing risk of ARF all increase. This is not necessarily the optimal result for ARF participants, because they may be taking greater financing risks when they obtain more benefits. Therefore, before using the blockchain, ARF participants should also evaluate the expected benefits and risks based on credit investigation costs, blockchain usage costs, and additional benefits.
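For a concrete feel of Eqs. (18)–(19) and Proposition 5, the brief C sketch below evaluates the variance expression under an assumed uniform demand on $[0, U]$, for two illustrative order quantities with $q_{SC}^{B*} \le q_{SC}^{*}$ (as in Proposition 3). The closed forms $\int_0^q F = q^2/(2U)$ and $\int_0^q xF = q^3/(3U)$ and all numeric values are assumptions for demonstration, not results of this paper.

```c
/* Sketch of the variance formula of Eqs. (18)-(19) under Uniform(0, U) demand.
 * int_0^q F(x) dx = q^2/(2U), int_0^q x F(x) dx = q^3/(3U). Values illustrative. */
#include <stdio.h>

static double variance(double q, double p, double v, double U) {
    double iF  = q * q / (2.0 * U);       /* int_0^q F(x) dx   */
    double ixF = q * q * q / (3.0 * U);   /* int_0^q x F(x) dx */
    return (p - v) * (p - v) * (2.0 * q * iF - 2.0 * ixF - iF * iF);
}

int main(void) {
    double p = 10.0, v = 1.0, U = 100.0;
    double q_star = 60.0;   /* q_SC^*  (Model AR), illustrative            */
    double q_B    = 55.0;   /* q_SC^B* (Model BAR), smaller by Proposition 3 */

    printf("V_SC(q*)    = %.2f\n", variance(q_star, p, v, U));
    printf("V_SC^B(qB*) = %.2f\n", variance(q_B, p, v, U)); /* smaller, cf. Proposition 5 */
    return 0;
}
```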
5 Management Insights and Conclusion
5.1 Management Insights
For core enterprises, if they want to maintain a long-term and stable cooperative relationship with suppliers, they should consider using blockchain technology to provide suppliers with payment commitments. Through BcSCFP, more suppliers can obtain financing funds with the credit of core enterprises and maintain the stability of the supply chain. More importantly, retailers can get an order quantity closer to the real market demand, which effectively reduces the retailers' operating costs. In actual ARF, the lender usually refuses to lend to some small-scale suppliers, or only provides lower financing funds to avoid risks, because the lender and the supplier cannot trust each other. The advantages of blockchain can strengthen the trust relationship between them. Therefore, in ARF, suppliers and lenders will be willing to finance on BcSCFP, because through BcSCFP, suppliers can obtain more financing funds and additional revenue, while lenders can obtain more financing interest and bear lower risks. Therefore, lenders can reasonably improve the loan-to-value rate under the financing risk that they can bear. For ARF, blockchain technology can reduce the total costs of financing and reduce financing risks. Simultaneously, it can also increase the expected benefits of ARF participants, which is beneficial to ARF. However, if the credit investigation costs and additional benefits are too low, ARF participants should hesitate to use the blockchain, because in this case the application of the blockchain reduces the financing risk and the total profits together.
5.2 Conclusion
Blockchain, with its unique advantages, has been extensively used in supply chain management, particularly in the SCF industry. We started with a discussion of the impact of blockchain on the ARF transaction processes. Then, based on the revenue sharing contract, financing decision-making models for the expected profits under ARF were established. Finally, according to the results, the impact of blockchain on ARF is discussed in terms of benefits and risks. We mainly draw the following conclusions: Firstly, under the revenue sharing contract, the use of blockchain will reduce the retailer's optimal order quantity (bringing it closer to the actual market demand); secondly, if the cost of using the blockchain
is lower, or the credit investigation costs and additional revenue are higher, the expected benefits of the blockchain to ARF are greater; thirdly, we find that the use of blockchain will reduce the financing risk of ARF. Therefore, we think that blockchain technology can be used in accounts receivable financing when financing costs and financing risks are high. However, in ARF, retailers or suppliers may breach the contract, which may be taken into consideration in future research.
A Two-Stage Heuristic Approach for Split Vehicle Routing Problem with Deliveries and Pickups
Jianing Min1, Lijun Lu2, and Cheng Jin3(B)
1 School of Business, Taihu University of Wuxi, Wuxi, China
2 School of Management, Nanjing University, Nanjing, China
[email protected]
3 University Office, Taihu University of Wuxi, Wuxi, China
Abstract. The vehicle routing problem with split deliveries and pickups is an important research topic in which deliveries and pickups to and from each customer are split into multiple visits. The objective is to minimize the travel distance while using the fewest vehicles. Therefore, this paper proposes a two-stage constructive heuristic approach to solve this problem. First, revised sweep algorithms are adopted to partition the customer domain into sub-domains limited in vehicle capacity and customer deliveries and pickups. Second, a modified Clarke-Wright saving algorithm is used to optimize the route in each sub-domain from the delivery and pickup demands of each point. The reconstructed Solomon benchmark datasets were adopted to evaluate the proposed algorithms. The computational results indicate that the proposed algorithms are feasible and efficient in reducing the total travel distance, decreasing the number of vehicles used, and increasing the average loading rate. Keywords: Split deliveries and pickups · Vehicle routing problems · Two-stage heuristic approach · Reconstructed benchmark datasets
1 Introduction
The vehicle routing problem (VRP) is essential in modern logistics. The VRP with simultaneous deliveries and pickups (VRPSDP) reduces the energy consumption of empty vehicles in single deliveries and pickups, reduces transportation costs, and increases the benefits to enterprises [1]. When the constraint imposed in the classic VRPSDP that each customer is visited only once is relaxed and each customer is allowed to be visited multiple times, greater vehicle reduction and path savings can be achieved [2]. Therefore, VRPs accounting for split demands have received increasing attention.
This work was financed by the National Natural Science Foundation of China (grant no. 61872077) and the Natural Science Fund of Jiangsu Province Education Commission (grant no. 17KJB520040).
There are two split demands in these problems: split delivery demands, called the split
delivery VRP (SDVRP), and combined delivery and pickup demands, called the split VRP with deliveries and pickups (VRPSPDP). Both of them are variants of the classic VRP. The SDVRP was first introduced in [2-4], where deliveries to a point were split among any number of vehicles. Since then, researchers have developed several procedures for solving the SDVRP [5-10]. In the VRPSPDP, introduced in [11, 12], a vehicle starts from the depot, serves customers with both delivery and pickup demands, then returns to the depot. There is no limit to the number of deliveries and pickups, and each customer can be served by the same vehicle or different vehicles multiple times. The goal is to find a group of vehicle paths that minimizes the total travel distance under the vehicle capacity. To address this problem, a mixed-integer linear programming formulation was proposed and a route construction heuristic based on the cheapest insertion criterion with the fewest vehicles was developed in [11]. A parallel clustering technique and a new route construction heuristic were presented in [12]. Building on [11, 12], four constructive heuristics were designed in [13], two of which do not limit the number of vehicles while the other two do. A mathematical model with two special preconditions for the VRPSPDP was proposed in [14]: the first imposes a maximum travel-distance constraint, and the second requires that each customer can be split only once. A two-stage hybrid heuristic method was developed to solve the VRPSPDP in [15]. A model in which the pickup and delivery quantities could be served in two separate visits was proposed in [16]. A parallel approach based on variable neighborhood search was presented in [17]. A tabu search algorithm based on specially designed batch combination and item creation operations was developed in [18]. An arc-based mixed-integer formulation was introduced in [19], and a branch-and-cut algorithm was proposed to solve a pickup and delivery problem with split demands. To date, researchers have focused mainly on the SDVRP, and only a few articles have addressed the VRPSPDP. Thus, there remains considerable room for improving VRPSPDP solutions in both optimization effect and computing time. With these goals in mind, algorithms based on a two-stage approach are proposed in this paper to minimize the travel distance and transportation cost and to improve the loading rate for the VRPSPDP. The first stage of these algorithms involves clustering customers and finding the split points and values, and the second stage conducts route optimization in each cluster. Reconstructed Solomon benchmark datasets were used to evaluate the feasibility and effectiveness of the proposed algorithms, and the computational results were analyzed. The remainder of this paper is organized as follows: Sect. 2 describes the VRPSPDP. Section 3 describes the two-stage heuristic approach in detail. Section 4 presents and discusses the computational results obtained using the reconstructed benchmark datasets. Finally, Sect. 5 presents the conclusions.
2 Problem Description
In the VRPSPDP, a depot meets the delivery and pickup demands of several customers with a fleet of homogeneous vehicles. Each vehicle leaves and returns to the depot with all deliveries and pickups, respectively, within its capacity limit in one tour. Each customer has both delivery and pickup demands and is met by one or multiple vehicle stops.
For simplicity, there are no time windows and no restrictions on the maximum route times and distances. We schedule the vehicle routes to minimize the total travel distance while using the fewest vehicles to fulfill the delivery and pickup demands of all the customers. The notations and parameters used in this paper are as follows.

Symbol — Definition
i, j — Depot and customer indices; i, j = 0, 1, 2, ..., n, where 0 corresponds to the depot
k — Vehicle index; k = 1, 2, ..., m
Q — Vehicle capacity
d_i — Delivery demand of customer i
p_i — Pickup demand of customer i
c_ij — Distance between customer i and point j; c_ii = 0, c_ij = c_ji
d_ij — Quantity of delivery load moved from customer i to customer j; d_ij ≥ 0
p_ij — Quantity of pickup load moved from customer i to customer j; p_ij ≥ 0
x_ijk — If vehicle k goes from customer i to customer j, x_ijk = 1; otherwise x_ijk = 0
y_ik — If point i is served by vehicle k, y_ik = 1; otherwise y_ik = 0
Let n and m denote the number of customers and vehicles in the VRPSPDP, respectively. The minimum number of vehicles used is $\lceil \max(\sum_{i=1}^{n} d_i, \sum_{i=1}^{n} p_i)/Q \rceil$ [11, 12], where $\lceil x \rceil$ denotes the smallest integer equal to or greater than x. In $\max(\sum_{i=1}^{n} d_i, \sum_{i=1}^{n} p_i)$, the terms $\sum_{i=1}^{n} d_i$ and $\sum_{i=1}^{n} p_i$ are the maximum cumulative values (MCV) of deliveries and pickups from point 1 to n, respectively, denoted as $MCV_d^n$ and $MCV_p^n$. The objective function minimizes the total travel distance, and the formulation is as follows:
$$\min \sum_{i=0}^{n}\sum_{j=0}^{n}\sum_{k=1}^{m} c_{ij} x_{ijk} \qquad (1)$$
subject to:
$$\sum_{k=1}^{m} d_{0j}\, y_{jk} = d_j, \quad j = 1, 2, \ldots, n; \qquad (2)$$
$$\sum_{k=1}^{m} p_{j0}\, y_{jk} = p_j, \quad j = 1, 2, \ldots, n; \qquad (3)$$
$$\sum_{i=1}^{n} d_{0i}\, y_{ik} \le Q, \quad k = 1, 2, \ldots, m; \qquad (4)$$
$$\sum_{i=1}^{n} p_{i0}\, y_{ik} \le Q, \quad k = 1, 2, \ldots, m; \qquad (5)$$
$$\sum_{i=0}^{\theta} p_i\, y_{ik} + \sum_{i=\theta+1}^{n} d_i\, y_{ik} \le Q, \quad i = 0, 1, \ldots, \theta, \theta+1, \ldots, n;\; k = 1, 2, \ldots, m; \qquad (6)$$
$$\sum_{i=0}^{n} d_{i0} = 0 \qquad (7)$$
$$\sum_{i=0}^{n} p_{0i} = 0 \qquad (8)$$
$$\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} \left(x_{ijk} - x_{j(j+1)k}\right) = 0, \quad k = 1, 2, \ldots, m; \qquad (9)$$
$$\sum_{j=0}^{n} x_{0jk} = 1, \quad \sum_{j=0}^{n} x_{j0k} = 1, \quad k = 1, 2, \ldots, m; \qquad (10)$$
Equation (1) is the objective function to minimize the total travel distance. Equations (2) and (3) ensure that the delivery/pickup demands of customer j are satisfied by multiple visits. Equations (4) and (5) ensure that the delivery/pickup quantities of a vehicle are within the capacity of the vehicle. Equation (6) ensures that the gross loading of both the delivery and pickup at any node falls within the vehicle’s capacity in one tour. The sum of the picked-up quantity before customer node θ (including the node θ ) and the delivery quantity after customer node θ (starting from the node θ +1) along the route of vehicle k cannot exceed the vehicle capacity. Both delivery and pickup can occur at one node. Equations (7) and (8) ensure that no delivery or pickup loads are directed to and from the depot, respectively. Equation (9) ensures that a vehicle that arrives at customer location j also leaves this location. Equation (10) states that each vehicle enters/exits the depot only once per tour.
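To make the fleet-size bound and the mixed-load condition of Eq. (6) concrete, the brief C sketch below checks both for a single candidate route. It is only an illustration of the constraints as stated above: the demand values are hypothetical, and the helper name route_load_feasible is ours rather than part of the authors' implementation.

```c
/* Sketch: minimum fleet size ceil(max(sum d_i, sum p_i)/Q) and the Eq. (6)
 * check along one route. Data values are illustrative placeholders. */
#include <stdio.h>
#include <math.h>

/* Eq. (6): for every split point theta on the route, pickups collected up to
 * and including theta plus deliveries still to be made after theta must fit in Q. */
static int route_load_feasible(const double *del, const double *pick, int n, double Q) {
    for (int theta = 0; theta < n; theta++) {
        double load = 0.0;
        for (int i = 0; i <= theta; i++)    load += pick[i];
        for (int i = theta + 1; i < n; i++) load += del[i];
        if (load > Q) return 0;
    }
    return 1;
}

int main(void) {
    double del[]  = {30, 25, 40, 20};   /* delivery demands along the route */
    double pick[] = {20, 35, 15, 30};   /* pickup demands along the route   */
    int n = 4;
    double Q = 120.0, sum_d = 0.0, sum_p = 0.0;

    for (int i = 0; i < n; i++) { sum_d += del[i]; sum_p += pick[i]; }
    int m_min = (int)ceil(fmax(sum_d, sum_p) / Q);

    printf("minimum number of vehicles = %d\n", m_min);
    printf("route satisfies Eq. (6): %s\n",
           route_load_feasible(del, pick, n, Q) ? "yes" : "no");
    return 0;
}
```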
3 Proposed Two-Stage Approach
A two-stage approach based on "clustering first and routing later" is proposed to solve the VRPSPDP. First, an improved sweep algorithm that accounts for splitting both demands is adopted to cluster customers and to determine the split points and values; the problem then becomes a VRPSDP in each cluster. Second, a Clarke-Wright (C-W) saving algorithm modified to meet the VRPSDP requirements is employed to optimize the travel distance.
3.1 Multi-restart-Iterative-Sweep-Algorithm for VRPSDP
A sweep algorithm is a kind of constructive heuristic, which essentially groups the nearest points into a cluster. An improved multi-restart-iterative-sweep-algorithm was introduced for the VRPSDP (MRISA-VRPSDP) to cluster the customer domain into sub-domains under the vehicle capacity limitation [20]. It improves the primitive sweep algorithm by sweeping from each point in the domain in both the clockwise and anti-clockwise directions.
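As a minimal, self-contained sketch of the sweep preparation used by the procedure below (polar coordinate system around the depot, coordinate conversion, and angle sorting), the following C fragment may help; the coordinates and the names PolarPoint and by_angle are illustrative and are not taken from the authors' code.

```c
/* Sketch of the sweep preparation: convert customer coordinates to polar
 * angles around the depot and sort them in ascending order (the sweep order).
 * Coordinates below are illustrative, not a Solomon instance. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

typedef struct { int id; double angle; } PolarPoint;

static int by_angle(const void *a, const void *b) {
    double d = ((const PolarPoint *)a)->angle - ((const PolarPoint *)b)->angle;
    return (d > 0) - (d < 0);
}

int main(void) {
    const double TWO_PI = 2.0 * acos(-1.0);
    double depot_x = 40.0, depot_y = 50.0;      /* depot = origin of the polar system */
    double x[] = {45.0, 42.0, 35.0, 38.0};
    double y[] = {68.0, 66.0, 45.0, 70.0};
    PolarPoint pts[4];
    int n = 4;

    for (int i = 0; i < n; i++) {                           /* rectangular -> polar */
        double a = atan2(y[i] - depot_y, x[i] - depot_x);
        if (a < 0.0) a += TWO_PI;                           /* map to [0, 2*pi) */
        pts[i].id = i + 1;
        pts[i].angle = a;
    }
    qsort(pts, (size_t)n, sizeof pts[0], by_angle);         /* ascending angles */

    for (int i = 0; i < n; i++)
        printf("customer %d: angle %.3f rad\n", pts[i].id, pts[i].angle);
    return 0;
}
```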
In the VRPSDP, a vehicle visits a point only once for delivery and pickup, so customer demands cannot be split. The MRISA-VRPSDP is also called unsplit-MRISA (in the following sections, the two names are used interchangeably). The clustering procedure of the unsplit-MRISA is described as follows.
1) Create a polar coordinate system: We establish a polar coordinate system having the depot as the origin and define a line as the 0° angle of the polar coordinate system.
2) Convert rectangular coordinates to polar coordinates: We transform all customer points into the polar coordinate system and sort the angles of all customer points in ascending order.
3) Set the first starting point: We appoint the point with angle 0° as the first starting point and ensure that the demand of each point is less than the vehicle capacity Q.
4) Start sweeping clockwise:
a) Judgment 1: When ((MCV_d^i < Q) and (MCV_p^i < Q)) and ((p_i ≤ d_i) or (MCV_p^i ≤ MCV_d^i)) at point i, sweep the point gradually into an initial group.
b) Judgment 2: When ((MCV_d^i = Q) or (MCV_p^i = Q)), the current point i becomes the last point (lp) of this cluster, denoted as (MCV_d^lp = Q) or (MCV_p^lp = Q). Then three situations exist:
• When ((MCV_d^i = Q) and (MCV_p^i = Q)), the current point i becomes the last point (lp) of the cluster, denoted as (MCV_d^lp = Q) and (MCV_p^lp = Q).
• When ((MCV_d^i = Q) and (MCV_p^i < Q)) and ((p_i ≤ d_i) or (MCV_p^i ≤ MCV_d^i)) at point i, the current point i becomes lp of this cluster, then ((MCV_d^lp = Q) and (MCV_p^lp = MCV_p^i)).
• When ((MCV_d^i < Q) and (MCV_p^i = Q)) and (MCV_d^i ≤ MCV_p^i) at point i, the current point i becomes lp of this cluster, then ((MCV_d^lp = MCV_d^i) and (MCV_p^lp = Q)).
lp
point i becomes lp of this group, and then, (MCVd = MCVdi − di ) or (MCVp = MCVpi − pi ). There are three situations: • When ((MCVdi > Q) and (MCVpi > Q)), the point preceding the current point i lp
lp
becomes the lp of this group, and then (MCVd = MCVdi − di ) and (MCVp = MCVpi − pi ). • When ((MCVdi > Q) and (MCVpi ≤ Q)) and ((di ≥ pi ) or (MCVdi ≥ MCVpi )), the point preceding the current point i becomes the lp of this group, and then lp lp (MCVd = MCVdi − di ) and (MCVp = MCVpi − pi ).
A Two-Stage Heuristic Approach for Split Vehicle Routing Problem
483
• When ((MCVpi > Q) and (MCVdi ≤ Q)) and (MCVdi ≤ MCVpi ), the point prelp
ceding the current point i is the lp of this group, so then (MCVd = MCVdi − di ) lp
and (MCVp = MCVpi − pi ). 5) Continue sweeping: We take the current point i as the first point of the next group, repeat 4) until the endpoint is swept. 6) Compute the total distance: We calculate the total travel distance of each cluster based on the spatial distribution of points and store it into storage_pool. 7) Execute A 4)-6) clockwise: We execute unsplit-MRISA iteratively starting from each customer points clockwise. Furthermore, we compare the total travel distance of the current starting customer point with that of the last starting customer point and store the smaller one into storage_pool. 8) Execute A 4)-7) anti-clockwise: We execute A 4)-7) in the anti-clockwise direction. The smallest value of the total travel distance will be into storage_pool. 3.2 B-Split-MRISA Some modifications are performed on unsplit-MRISA because both split deliveries and pickups should be met (briefly, B-split-MRISA). Thus, the B-split-MRISA is then employed to determine the split points and both values in each cluster of the VRPSPDP. The procedure of the B-split-MRISA is detailed as follows. 1) Prepare a polar coordinate system: Execute A 1), 2) and 3). 2) Execute sweep: gradually into an initial group in a clockwise Sweep the points lp lp ≥ Q is reached at the last point (lp). There d , p direction until max i=1 i i=1 i are four judgments: lp lp a) Judgment 1: If (MCVd = Q or (MCVp = Q)), end the grouping at point lp. Point lp is not split. Then there are three situations: lp
lp
• If ((MCV_p^lp = Q) and (MCV_d^lp = Q)), then d_lp = Q and p_lp = Q.
• If ((MCV_d^lp = Q) and (MCV_d^lp > MCV_p^lp)), then d_lp = Q and p_lp = MCV_p^lp.
• If ((MCV_p^lp = Q) and (MCV_d^lp < MCV_p^lp)), then p_lp = Q and d_lp = MCV_d^lp.
b) Judgment 2: If ((MCVd > Q) and (MCVp < Q)), end the grouping at point lp and lp
split lp into lp1 and lp2 on deliveries. Thus, dlp1 = Q, dlp2 = MCVd −Q, plp1 = lp
MCVp , plp2 = 0. lp2 becomes the starting point in the next sweeping procedure. lp lp c) Judgment 3: If ((MCVp > Q) and (MCVd < Q)), end the grouping at point lp lp
and split lp into lp1 and lp2 on pickups. Thus, plp1 = Q, plp2 = MCVp −Q, dlp1 = lp MCVd , dlp2 = 0. lp2 becomes the starting point in the next sweeping procedure. lp
lp
d) Judgment 4: If ((MCVd > Q) and (MCVp > Q)), end the grouping at point lp. Split lp into lp1 and lp2 into both deliveries and pickups. Therefore, dlp1 = Q, dlp2 =
484
J. Min et al. lp
lp
MCVd −Q, plp1 = Q, plp2 = MCVp −Q. lp2 becomes the starting point in the next sweeping procedure. 3) Continue sweep: We start from point lp2 (with the demand d lp2 and plp2 ), and repeat B 2) until the endpoint of the customer area is swept. 4) Compare each cluster distance: We calculate the total travel distance of each cluster from the spatial distribution of points and the smaller is stored into storage_pool. 5) Sweep in clockwise: Similarly, we execute B 2)-4) in the anti-clockwise direction. 3.3 Route Optimization The clustering operation paves a route in each cluster, and each customer point has certain delivery and pickup demands. Thus, the problem becomes a VRPSDP in each route. A modified C-W algorithm is adopted to meet the VRPSDP requirements, and the procedure is as follows. 1) Set up the initial solution: Form an initial solution set L = {Li }, ∀i ∈ {1, 2, . . . , n}, where L i indicates that point i is dispatched directly. 2) Calculate and sort: We compute the distance-saving degree cij = 2(c0i + c0j )−(c0i + c0j + cij ) = c0i + c0j − cij , cij = c0i + c0j − cij and sort the resulting cij (i, j = 1, 2, . . . , n) in descending order Dcij . 3) Set up the initial route L0 and load quantities R0 : We construct a new route set L0 = ∅ and the load quantities = 0 for both the delivery demands d i and pickup demands pi . 4) Check the merge possibility: Scan the descending order Dcij from the top to the bottom, and check the merge possibility of the current point-pair (i, j) according to the following criteria during scanning: a) Pass by 1: Go to the next point-pair in the descending order Dcij if both points (i, j) are not in the route, i.e., Li ⊂ L0 and Lj ⊂ L0 . b) Pass by 2: Go to the next point-pair in the descending order Dcij if both points (i, j) are in the route, i.e., Li ⊆ L0 and Lj ⊆ L0 . c) Merge: Merge the one point j or i into the route L0 and calculate the route distance if one of the points (i, j) is in the route L0 , and another point is not in the L0 , i.e., ((Li ⊆ L0 ) and (Lj ⊂ L0 )) or ((Lj ⊆ L0 ) and (Li ⊂ L0 )). Then, there are five scenarios: • • • • •
Merge point j behind point i if i is the end of the route. Merge point j before point i if i is the beginning of the route. Merge point i behind point j if j is the end of the route. Merge point i before point j if j is the beginning of the route. Otherwise, give up the operations on the current point-pair, go to the next point-pair and execute C 4) until the bottom of Dcij .
d) Check the load constraints: We check the load constraints for each point because the gross loading of the vehicle fluctuates with the increasing or decreasing load along
A Two-Stage Heuristic Approach for Split Vehicle Routing Problem
485
the route. If the load in the current route meets the following conditions, go to the next point-pair: • The load quantity at the starting point equals Rso = ni=1 di yik ≤ Q. • The load quantity at the endpoint equals Reo = ni=1 pi yik ≤ Q.
• The load quantity at the current point i equals R0,i = R0,(i−1) − di + pi + R(i+1),0 ≤ Q. • Otherwise, abandon the operations on the current point-pair, go to the next point-pair, and execute C 4) until the bottom of Dcij .
4 Case Study Computational experiments were performed to verify the feasibility and effectiveness of the proposed algorithm. The Solomon benchmark datasets of VRP Web were used and 25-, 50-, and 100-customer datasets are considered. The Solomon datasets cannot be used directly in the VRPSPDP because they do not include pickup demand data. Therefore, we constructed new datasets by extracting the delivery demand from one dataset and placing it into another dataset as the pickup demand data. For example, the delivery and pickup demands of C101 and R101, respectively, are constructed into a new dataset CR101. The experiments were implemented using C in a 64-bit Windows 7 machine with an Intel (R) Core processor of 2.50 GHz and 8 GB of memory. 4.1 Execution on the Originally Constructed Datasets Our experiments were on the originally constructed dataset. The results of B-splitMRISA (B-s-MRISA) for VRPSPDP and unsplit-MRISA (Unsplit) for VRPSDP are shown in Table 1. The distance reduction Dist.% is given by: Dist.% = |(Dist. of B-s-MRISA−Dist. of Unsplit)| / (Dist. of Unsplit) × 100% (11) The route reduction Rt. % is given by: Rt.% = |(Routes of B-s-MRISA−Routes of Unsplit)| / (Routes of Unsplit) × 100% (12) The results do not show significant benefits of splitting: • The average distance reduction of the B-s-MRISA is 1.91% and Dist % is between 0.00% and 3.80%. • Only three points are split in the instances of RCR101-100 and one point is split in the instance of R_C101-100 among all 12 instances.
486
J. Min et al. Table 1. Results performed on the original dataset
Point
25
50
100
Instance
Demands (D/P)
B-s-MRISA Distance
Un-split
Route
Dist. %
Rt. %
Distance
Route
CR101-25
460/332
236.363
3
3.21
0
244.2
3
CR201-25
460/332
265.727
3
2.27
0
271.9
3
R_C101-25
332/460
394.761
3
2.94
0
406.7
3
RCR101-25
540/332
311.098
3
2.87
0
320.3
3
CR101-50
860/721
488.421
5
0.42
0
490.5
5
CR201-50
860/721
633.314
5
2.81
0
651.6
5
R_C101-50
721/860
677.277
5
0.00
0
677.3
5
RCR101-50
970/721
580.435
5
0.54
0
583. 6
5
CR101-100
1810/1458
975.848
10
3.32
0
1009.4
10
CR201-100
1810/1458
1034.111
10
0.45
0
1038.8
10
R_C101-100
1458/1810
1032.253
10
0.33
0
1035.7
10
RCR101-100
1724/1458
1405.597
9
3.80
0
1461.17
9
Average
–
–
–
1.91
0
–
–
These results can be subject to the average demand values being too small in the original dataset, with averages of 9.2%, 8.6%, and 9.1% of the vehicle capacity for the delivery values and 6.64%, 7.21%, and 7.29% of the vehicle capacity for the pickup values in the 25-, 50-, and 100-customer cases, respectively. All these values are below 10% of the vehicle capacity. 4.2 Application to Newly Generated Dataset 1 To verify the benefits of splitting, we generated a new Dataset 1 from the original dataset. We retained the locations of the data but changed the delivery and pickup values by multiplying all values by four and adding 0.1·Q to all points, where Q denotes the capacity of the vehicle similar to the method in [16]. Thus, the new dataset 1 had delivery and pickup values between 12% and 110% of the vehicle capacity, with averages of 46.8%, 44.4%, and 46.2% of the vehicle capacity for the delivery values and 36.56%, 38.84%, and 39.16% of the vehicle capacity for the pickup values in the 25-, 50-, and 100-customer cases, respectively. The results of applying the B-s-MRISA and unsplit to new Dataset 1 are shown in Table 2. The results show that B-s-MRISA has significant advantages compared to unsplit and outlined as follows:
A Two-Stage Heuristic Approach for Split Vehicle Routing Problem
487
Table 2. Results performed on the new dataset 1 Point Instance
25
50
100
Demands (D/P)
B-s-MRISA
Un-split
Distance Route Lord Dist. Rt. rate % %
Distance Route Lord rate
CR101-4X-25
2340/1828 643
12
0.98
10.57
14.29 719
14
CR201-4X-25
2340/1828 781.7
12
0.98
R_C101-4X-25
1828/2340 830.8
12
0.98
RCR101-4X-25
2660/1828 1246
14
CR101-4X-50
4440/3884 1356
CR201-4X-50
4440/3884 1592
R_C101-4X-50
0.84
5.03
14.29 823.1
14
0.84
9.00
25.00 913
16
0.73
0.95
3.71
12.50 1294
16
0.83
23
0.97
14.93
20.69 1594
29
0.77
23
0.97
10.66
20.69 1782
29
0.77
3884/4440 1685
24
0.93
8.32
20.00 1838
30
0.74
RCR101-4X-50
4880/3884 2354.1
25
0.98
16.61
21.88 2823
32
0.76
CR101-4X-100
9240/7832 3437.3
48
0.96
16.65
25.00 4124
64
0.72
CR201-4X-100
9240/7832 3647.5
48
0.96
15.74
25.00 4329
64
0.72
R_C101-4X-100
0.73
7832/9240 3280.2
48
0.96
12.88
23.81 3765
63
RCR101-4X-100 8896/7832 4021.1
47
0.95
15.59
25.40 4764
63
0.71
Average
–
0.96
11.64
20.71 –
–
0.76
–
–
• The distance reduction of the B-s-MRISA Dist.% is between 3.71% and 16.65%, with an average of 11.64%. This distance reduction directly decreases the transportation cost. • The average “load rate” obtained with the B-s-MRISA and unsplit are 0.96 and 0.76, respectively. Thus, the “load rate” increased by 26.32% when B-s-MRISA was employed. • The route reduction of the B-s-MRISA Rt.% is between 12.50% and 25.40%, with an average of 20.71%. Thus, the number of vehicles required according to the B-sMRISA and the vehicle startup and manpower costs would be reduced, all of which decreases the transportation cost. • Most routes are after applying B-s-MRISA became short. 4.3 Applications to Newly Generated Dataset 2 To better understand the benefits of splitting, we generated a new Dataset 2 from the original dataset. We retained the locations from the dataset but changed the delivery and pickup values by adding 0.75·Q and 0.2·Q to the delivery and pickup of each odd point. Also, we added 0.2·Q and 0.75·Q to the delivery and pickup of each even point [16]. Thus, the new dataset 2 had either delivery or pickup values between 20.5% and 95% of the vehicle capacity. The results of applying B-s-MRISA and unsplit to new Dataset 2 are shown in Table 3.
488
J. Min et al. Table 3. Results performed on the new dataset 2
Point Instance
25
50
100
Demands (D/P)
B-s-MRISA
Un-split
Distance Route Lord Dist. Rt. rate % %
Distance Route Lord rate
CR101-OE-25
2890/2652
890.7
16
0.9
21.32
36.00 1132
25
0.58
CR201-OE-25
2890/2652
1091
16
0.9
14.43
36.00 1275
25
0.58
R_C101-OE-25
2762/2780
981.6
15
0.93
21.22
40.00 1246
25
0.56
RCR101-OE-25
2970/2652
1485
17
0.87
21.30
32.00 1887
25
0.59
CR101-OE-50
5610/5471
1807
31
0.91
25.05
38.00 2411
50
0.56
CR201-OE-50
5610/5471
2250
32
0.88
16.01
36.00 2679
50
0.56
R_C101-OE-50
5471/5610
2319
34
0.83
11.66
32.00 2625
50
0.56
RCR101-OE-50
5720/5471
2970
33
0.87
26.85
34.00 4060
50
0.57
CR101-OE -100
11310/10958 4455
63
0.90
22.80
37.00 5771
100
0.57
CR201-OE -100
11310/10958 4773
64
0.88
19.69
36.00 5943
100
0.57
R_C101-OE-100
10958/11310 4156
65
0.87
16.70
35.00 4989
100
0.57
RCR101-OE-100 11224/10958 5182
64
0.88
21.70
36.00 6618
100
0.56
Average
–
0.89
19.89
–
0.57
–
–
–
–
• The distance reduction of the B-s-MRISA relative to Dist.% of the Unsplit is between 11.66% and 26.85%, with an average of 19.89%. These distance reductions directly decrease the transportation cost. • The number of routes in the B-s-MRISA case reduced by approximately 36% compared with that in the Unsplit case. • The average “load rate” obtained using the B-s-MRISA and the Unsplit is 0.89 and 0.57, respectively. The average “load rate” increased by 56.14% in the B-s-MRISA case.
5 Conclusion This study developed a mathematical model for the VRPSPDP and a two-stage constructive heuristic approach using the strategy of “clustering first and routing later” is proposed. In stage one, Both-split-MRISA and Unsplit-MRISA algorithms partitioned the domain of customer points into sub-domains and determined the split points and values. The number of sub-domains depends on the minimum vehicle usage. The split points and values in each sub-domain are finally determined using MRISA. In stage two, a modified C-W saving algorithm is adopted to optimize the route distance from the VRPSDP requirements in each sub-domain. The constructed Solomon datasets were employed to verify the feasibility and effectiveness of the proposed approach. The experimental results indicate that: • The proposed two-stage algorithms Both-split-MRISA for the VRPSPDP reduce the total travel distance, the number of vehicles used, and increase the average loading rate.
A Two-Stage Heuristic Approach for Split Vehicle Routing Problem
489
• Both-split-MRISA exhibited larger advantages over the Unsplit-MRISA when applied to new Dataset 2. • By applying Both-split-MRISA, the number of vehicles, vehicle startup, and manpower costs notably reduced, therefore substantially decreased the transportation cost. Future research on solutions of VRPSPDP will include the VRP with divisible deliveries and pickups in various splitting cases. Acknowledgment. We would like to send our special thanks to the reviewers and editors, who have carefully reviewed the manuscript and provided pertinent and useful comments and suggestions.
References 1. Parragh, S.N., Doerner, K.F., Hartl, R.F.: A survey on pickup and delivery problems Part I: Transportation between customers and depot. JfB. Springer, vol.58, no. 2, pp. 21–51 (2008). https://doi.org/10.1007/s11301-008-0036-4 2. Dror, M., Trudeau, P.: Split delivery routing. Transp. Sci. 23(2), 141–145 (1989) 3. Dror, M., Trudeau, P.: Savings by split delivery routing. Naval Res. Logistics 37(3), 383–402 (1990) 4. Dror, M., Laporte, G., Trudeau, P.: Vehicle routing with split deliveries. Discrete Appl. Math. 50(3), 239–254 (1994) 5. Archettic, C., Hertz, A., Speranza, M.G.: A tabu search algorithm for the split delivery vehicle routing problem. Transp. Sci. 40(1), 64–73 (2006) 6. Archettic, C., Savelsbergh, M.W.P., Speranza, M.G.: An optimization-based heuristic for the split delivery vehicle routing problem. Transp. Sci. 42(1), 22–31 (2008) 7. Archettic, C., Speranza, M.G.: Vehicle routing problems with split deliveries. Int. Trans. Oper. Res. 19, 3–22 (2012) 8. Archetti, C., Bianchessi, N., Speranza, M.G.: A branch-price- and-cut algorithm for the commodity constrained split delivery vehicle routing problem. Comput. Oper. Res. 64, 1–10 (2015) 9. Ozbaygin, G., Karasan, O., Yaman, H.: New exact solution approaches for the split delivery vehicle routing problem. EURO J. Comput. Optim. 6(1), 85–115 (2018) 10. Min, J.N., Jin, C., Lu, L.J.: Maximum-minimum distance clustering method for split delivery vehicle routing problem: Case studies and comparisons. Adv. Prod. Eng. Manage. 14(1), 125–135 (2019) 11. Mitra, S.: An algorithm for the generalized vehicle routing problem with backhauling. AsiaPacific J. Oper. Res. 22(2), 153–169 (2005) 12. Mitra, S.: A parallel clustering technique for the vehicle routing problem with split deliveries and pickups. J. Oper. Res. Soc. 59(11), 1532–1546 (2008) 13. Wang, K.F.: Research of vehicle routing problem with nodes having double demands. Ph.D. thesis, University of Shanghai for Science and Technology, Shanghai, China (2012) 14. Yin, C., Bu, L., Gong, H.: Mathematical model and algorithm of split load vehicle routing problem with simultaneous delivery and pickup. Int. J. Innov. Comput. Inf. Control 9(11), 4497–4508 (2013)
490
J. Min et al.
15. Wang, Y., Ma, X.L., Lao, Y.T., Yu, H.Y., Liu, Y.: A two-stage heuristic method for vehicle routing problem with split deliveries and pickups. J. Zhejiang Univ.-Sci. C (Comput. Electron.) 15(3), 200–210 (2014) 16. Nagy, G., Wassan, N.A., Speranza, M.G., Archetti, C.: The vehicle routing problem with divisible deliveries and pickups. Transp. Sci. 49(2), 271–294 (2015) 17. Polat, O.: A parallel variable neighborhood search for the vehicle routing problem with divisible deliveries and pickups. Comput. Oper. Res. 85, 71–86 (2017) 18. Qiu, M., Fu, Z., Eglese, R., Tang, Q.: A tabu search algorithm for the vehicle routing problem with discrete split deliveries and pickups. Comput. Oper. Res. 100, 102–116 (2018) 19. David, W., Juan-José, S.G.: The pickup and delivery problem with split loads and transshipments: a branch-and-cut solution approach. Eur. J. Oper. Res. 289, 470–484 (2021) 20. Min, J.N., Jin, C., Lu, L.J.: Split-delivery vehicle routing problems based on a multi-restart improved sweep approach. Int. J. Simul. Modell. 18(4), 708–719 (2019)
Logistics Cost Analysis of Small and Medium-Sized Agricultural Planting Enterprises Under the Mode of “Agricultural Super Docking” Yiwen Deng(B) and Chong Wang Sichuan Agricultural University Name of Organization (of Affiliation), Chengdu, People’s Republic of China
Abstract. Taking some small and medium-sized agricultural planting enterprises in Chengdu City of Sichuan Province as samples, this paper investigates and analyzes their logistics mode selection and cost composition data. The results show that the third party logistics has become the logistics mode of most small and medium-sized agricultural planting enterprises, and the region and policy, product type, modernization degree and other aspects are important factors affecting the size and composition of logistics cost. Taking Chengdu as an example, this paper analyzes the size and composition of logistics cost under the mode of “agricultural supermarket docking”, so as to provide guidance on logistics cost optimization for enterprises who are willing to participate in the mode of “agricultural supermarket docking”. Keywords: Agricultural super docking · Agricultural logistics cost · Small and medium-sized agricultural enterprises
1 Introduction Agriculture is the primary industry and plays an extremely important role. Agricultural commercialization is a major issue in China’s rural economic development and reform, and it is also a core issue related to how to realize agricultural modernization under the actual conditions of China (Chen and Yang 2017). Small and medium-sized agricultural enterprises play an important role in China’s agricultural development. “Agricultural supermarket docking” is a major direction for small and medium-sized agricultural companies to develop sales channels in recent years. It means that large chain supermarkets directly purchase agricultural products from farmers’ professional cooperatives or farmers, and integrate the industrial chain of The authors are supported by the National Natural Science Foundation of China (No. 71602134, 71972136), Humanities and Social Sciences of Ministry of Education of China (No. 17YJC630098,19YJC630063), and Innovation Team of Education Department of Sichuan Province (No. 18TD0009). © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 X. Shi et al. (Eds.): LISS 2021, LNOR, pp. 491–503, 2022. https://doi.org/10.1007/978-981-16-8656-6_44
492
Y. Deng and C. Wang
enterprises and the supply chain of agricultural products (Deng et al. 2013). Specifically, the mode refers to a new circulation mode in which farmers sign a letter of intent with businesses to directly supply agricultural products to supermarkets, vegetable markets and convenience stores, mainly to build a platform for high-quality agricultural products to enter supermarkets. Sichuan Province is a big agricultural province, so it is very representative to study the agriculture of Sichuan Province. Chengdu is the capital city of the province, and the degree of agricultural enterprises in the surrounding areas is higher than that in other cities, so there are more samples for research.
2 Literature Review 2.1 Agricultural Supermarket Docking Han (2011) pointed out that “agricultural supermarket docking” can shorten the length of the supply chain, effectively reduce logistics costs and improve logistics efficiency. Ren (2012) pointed out that in order to change the main body of “agriculture”, produce more concentrated and large-scale agricultural professional cooperatives and leading enterprises of agricultural industrialization, “super” should not be limited to large supermarkets, but should be expanded to chain supermarkets to facilitate retail and other directions that go deep into people’s lives. Yang (2014) also analyzed the logistics operation mode of “agricultural supermarket docking”, which can be divided into four logistics modes: cooperative self-logistics, cooperative logistics, cooperative logistics, cooperative logistics, joint distribution (the mode that multiple cooperatives or agricultural enterprises jointly distribute to small and medium-sized retail enterprises or supermarkets in a region) and professional third-party logistics. Delong (2017) divided the logistics mode of “agricultural supermarket docking” into one-way channel and twoway channel. Liu (2018) found that through direct purchase, the circulation cost can be reduced by 20%-30%, bringing real benefits to consumers. 2.2 Small and Medium Sized Agricultural Enterprises He (1990) pointed out that compared with micro enterprises, the production scale of small and medium-sized enterprises is larger, that is, enterprises with highly concentrated labor force, labor mode, labor object and product production. Zhao and Liu (2016) pointed out that the construction of consumption channels of small and medium-sized agricultural enterprises is unreasonable, and most agricultural products have to go through various intermediary channels in the sales process, which makes the sales channels single and unimpeded, and leads to the meaningless enterprise brand construction. Han (2017) believed that the financial management concept of agricultural enterprises is single and the framework is simple. Most enterprises have no idea of financial risk or financial control management. The smaller the scale of the enterprise, the more such problems. In addition, the production line of agricultural small and medium-sized enterprises is generally long, the capital return is slow, and the short-term profit is not high, which brings great pressure to the financial management of enterprises.
Logistics Cost Analysis of Small and Medium-Sized Agricultural Planting
493
In terms of logistics of small and medium-sized agricultural enterprises, Deng (2007) believed that due to different reasons, agricultural enterprises will adopt three management modes of self-support, outsourcing and Logistics Alliance for logistics management, and small and medium-sized enterprises are more suitable for the logistics alliance mode of "group heating”. Taking Linyi City as an example, Sun (2016) pointed out that the reason why most agricultural enterprises choose self-logistics is that they strengthen the control of supply and distribution channels, and the product line and customer distribution area are single. 2.3 Agricultural Logistics Cost Ji (2014) believed that the cost of agricultural logistics can be divided into three categories: transportation cost, inventory cost and logistics management cost. The logistics cost of fresh agricultural products refers to the total cost of various logistics activities involved in the process of purchasing, warehousing and sales of fresh agricultural products. Specifically, Jin (2015) thought it is the sum of all labor, financial and material resources paid in the circulation, such as transportation, storage, packaging, loading and unloading, circulation processing and distribution. There are many factors that affect the price stability of agricultural products, such as the supply chain and so on. The conventional factors that affect the logistics cost include weight, transportation distance, cargo volume, etc. The special feature of agricultural products is that whether it is grain or fresh vegetables, we should consider the problem of preservation. In addition to conventional factors, McKinnon (2003) pointed out some influencing factors of fresh food transportation by analyzing the supply efficiency of British food transportation chain. Tage (2003) analyzed the formation and maintenance of logistics relationship based on asset specificity and transaction cost theory. Li (2007) thought that logistics efficiency is an important factor affecting agricultural logistics cost, and the influencing factors include infrastructure construction, logistics channels, policies and accounting. Chen (2014) believed that the logistics cost of agricultural products is also related to the length of the supply chain.
3 Research Methodology 3.1 Sample Selection The sample selection for this survey is near Chengdu, including Dayi County, Peng Zhou, Chong Zhou and other places. The business income of the selected enterprises is less than 200 million yuan, which is in line with the previous definition of small and medium-sized agricultural enterprises. The product types of the selected enterprises are different, and the main planting types are vegetable crops and food crops. The transportation modes include one-way logistics, third-party logistics, combined transportation logistics and other logistics modes. In addition, the first mock exam was conducted by some interested enterprises who participated in the model and obtained market data collected by them. Most of the enterprises surveyed are small enterprises with an operating income of about 5 million yuan, and a few are medium-sized enterprises with an operating income of more than 10 million yuan.
494
Y. Deng and C. Wang
3.2 Investigation Implementation The survey collected data by telephone interview, actual report data integration and network survey. Because the volume, scale and operating revenue of each enterprise are not exactly the same, the accounting unit in the survey process is the unit price per kilogram, that is, yuan/kg. Because of different regions, the logistics service prices of each region are different, and finally all of them are converted into percentages for comparison. In the process of investigation, telephone interview and direct interview are often used. The problems mainly involve the following aspects: • Basic information of the enterprise: such as business income, planting type and scale, output. • Logistics mode selection: including logistics types and partners. • Salary and service expenses of relevant personnel: including harvesting and loading personnel, warehouse inventory and custody personnel, relevant management personnel, etc. • Third party service purchase fee: including transportation service and warehousing service. • Packaging related: including the packaging materials used, price and specifications. As there are few companies only engaged in planting business, and most of them are enterprises integrating planting, service and processing, finally, we obtained the data of eight local representative planting enterprises, including 4 vegetable planting enterprises, 4 grain planting enterprises, 3 enterprises that have actually participated in the “agricultural supermarket docking”, and 5 enterprises that have not actually participated but have collected relevant data, The scale of most enterprises is about 5 million yuan of annual revenue. Most of them tend to use the third party logistics. In addition, through on-the-spot investigation, we made a survey on the location of the interviewed enterprises. After collecting the first-hand data, we calculate the percentage of each part in the logistics cost by calculating the unit mean value. After that, we compared and analyzed the logistics costs under different logistics forms, and combined with the interview content, we analyzed the reasons for the differences.
4 Data Analysis and Results As the actual situation and data obtained from this survey are different from the assumption, the analysis part will be carried out from two aspects, the first is the description of data calculation, and the second is the display and analysis of data analysis results. 4.1 Data Description 1) For tool cost: In the actual investigation, we found that there is no one-way logistics enterprise using self-management, so the cost accounting of one-way logistics is based on the existing calculation model and the actual quotation.
Logistics Cost Analysis of Small and Medium-Sized Agricultural Planting
495
Among them, the tool cost is converted into cost according to the legal depreciation period, that is, four years, and the residual value rate is 5%. Therefore, the calculation method of tool cost is as follows: Tool usage fee = {[(1-5%) × Car purchase cost]/4 + annual total maintenance cost + annual fuel cost}/annual total transportation volume. Among them: a) Purchase cost: Most of the trucks in Chengdu are 40 tons, so the average price is 280000 yuan/vehicle. b) Vehicle maintenance cost: Based on the network data and the actual survey of the average situation of the transport fleet, it is roughly determined that the annual maintenance cost of freight cars is 61500 yuan/vehicle a year (in the case of pure agriculture). c) Fuel cost: it is calculated according to the average fuel consumption of 30 L per 100 km, and the oil price is calculated according to 5.7 yuan/L. 2) For Labor cost: In addition, because most enterprises use direct loading and transportation after harvest, and this process is completed by the same group of people manually, in order to facilitate calculation, their wages are included in the circulation cost labor. This standard is used to unify the algorithm. 3) For Vegetable packaging: Vegetables are special. For the purpose of preservation, it is supposed to be stored in cold storage. But in fact, under the mode of “agriculture supermarket docking”, vegetable crops are directly loaded and transported to the supermarket after harvest without storage, so as to maximize the freshness of vegetables and save the cost related to inventory. At the same time, most of the vegetable planting enterprises which mainly focus on the volume of vegetables are relatively simple in terms of packaging. They are usually simple strapped or unpacked. Only a few vegetable enterprises which mainly focus on “green vegetables” or special varieties will carry out fine packaging. The samples selected in this paper are mostly small and medium-sized vegetable planting enterprises without packaging. For vegetable enterprises, the mode of agricultural supermarket docking can help them save a lot of storage costs, and let the products enter the market in a fresher state, so as to improve the competitiveness and profit space of the products. At the same time, because of simple packaging or even no packaging, it can bring certain environmental protection effect. 4.2 Calculation Results 1) Self-operated logistics cost: In the hypothesis, according to the existing literature, one-way self-operated logistics is divided into enterprise logistics and supermarket logistics. However, in the actual investigation, it is found that there is almost no oneway self-operated logistics in Chengdu. Because most small enterprises are difficult to support the prophase investment of self-logistics, including the purchase cost of transportation equipment such as vehicles, and most medium-sized enterprises with
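As a minimal sketch of the tool usage fee arithmetic described above, the following C fragment lays out the calculation. The purchase cost, maintenance cost, fuel consumption, oil price, depreciation period, and residual rate follow the figures quoted in the text; the annual mileage and annual transport volume are assumptions added only for illustration, since the text does not fix them.

```c
/* Sketch of: fee = {[(1 - 5%) x purchase]/4 + annual maintenance + annual fuel}
 *                  / annual transport volume.
 * km_per_year and tonnes_year are assumed values for illustration only. */
#include <stdio.h>

int main(void) {
    double purchase     = 280000.0;  /* yuan per 40-t truck (from the text)      */
    double residual     = 0.05;      /* residual value rate                      */
    double years        = 4.0;       /* statutory depreciation period            */
    double maintenance  = 61500.0;   /* yuan per vehicle per year (from the text)*/
    double litres_100km = 30.0;      /* fuel consumption per 100 km              */
    double oil_price    = 5.7;       /* yuan per litre                           */
    double km_per_year  = 60000.0;   /* assumed annual distance (km)             */
    double tonnes_year  = 4000.0;    /* assumed annual transport volume (t)      */

    double depreciation = (1.0 - residual) * purchase / years;
    double fuel = km_per_year / 100.0 * litres_100km * oil_price;
    double fee_per_kg = (depreciation + maintenance + fuel) / (tonnes_year * 1000.0);

    printf("tool usage fee = %.3f yuan/kg\n", fee_per_kg);
    return 0;
}
```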
496
Y. Deng and C. Wang
little capital are unwilling to bear the cost of vehicle maintenance, the calculation of the cost proportion of one-way logistics is an estimated value. The results are as follows: Table 1. Self- operated logistics cost Measure
Self-support logistics
Type
Vegetables
Grain
Labor
52.41%
40.74%
Tools
30.12%
23.41%
Service
–
–
Labor
–
13.58%
Warehouse
–
5.43%
Packing
–
3.26%
18.63%
13.58%
Circulation cost
Inventory cost
Administration cost
And this mode will also bring some other expenses not included in the logistics cost, such as the training expenses of logistics related personnel. 2) Third party logistics cost: In the actual investigation, the vast majority of enterprises that have participated in the docking mode of agricultural supermarkets adopt the way of third-party logistics. The degree of application is different, including the mode of complete outsourcing from transportation to storage, the mode of separate outsourcing with higher degree of specialization, and the mode of partial outsourcing. In order to facilitate the comparison with other modes, the surveyed enterprises basically adopt the complete outsourcing mode. The business income of the six enterprises surveyed ranged from 3 million yuan to 20 million yuan, mainly from Dayi County, Chong Zhou, Peng Zhou and other places. Among them, there are 3 vegetable planting enterprises and 3 grain planting enterprises. The average value of each expense is entered and calculated. The results are as follows: Table 2. Third party logistics cost Measure
Third party logistics
Type
Grain
Vegetables
Labor
40.44%
61.91%
Tools
–
–
Service
14.71%
19.05%
Labor
18.38%
–
Warehouse
7.35%
–
Packing
4.41%
–
14.71%
19.05%
Circulation cost
Inventory cost
Administration cost
Logistics Cost Analysis of Small and Medium-Sized Agricultural Planting
497
It can be seen that more than 60% of the cost comes from the labor cost of each part, while the expected large third-party service purchase cost and packaging cost actually account for a small proportion. At the same time, it can be seen from the actual data and proportion that compared with the tool cost of one-way logistics, the service cost of purchasing third-party services has been saved by nearly 50%. This is mainly due to the professional level of the third party logistics. The professional third party logistics can manage the logistics cost more effectively. At the same time, due to the existence of competition, the service price has been unified to the unified level of the industry. However, from some other data, due to the fact that the third-party distribution mode has more process links than one-way logistics, the time consumed in the process will be longer. Due to the relative lack of supervision in the time and process, the loss rate will rise by about 1–3%. But thanks to its low cost, the overall rate of return is still much higher than one-way logistics. 3) Combined transportation logistics cost: There are two situations in combined transportation: first, joint purchase of third-party services; second, organization of joint fleet. The tool and service cost algorithm of the two cases follows the above oneway transportation and third-party logistics algorithm. In the actual situation, the vast majority of cases use the third party logistics (including logistics companies and professional teams, etc.), and the example of joint teams is rare. The annual business income of the two enterprises surveyed is about 4 million yuan, which is one of the participating enterprises rather than a consortium (the consortium is mostly led by cooperatives, etc.). The results are as follows:
Table 3. Combined transportation logistics cost

Measure               Type        Grain    Vegetables
Circulation cost      Labor       47.45%   65.00%
                      Tools       –        –
                      Service     16.53%   20.00%
Inventory cost        Labor       12.40%   –
                      Warehouse   8.26%    –
                      Packing     4.96%    –
Administration cost               12.40%   15.00%
The actual investigation shows that, thanks to the scale effect of several companies sharing management and warehouse-related costs, the management cost and warehouse-related labor cost are lower than in the previous two logistics modes. However, there are also risks such as a higher loss rate caused by centralized distribution. And because the participating enterprises already have a certain scale, combined transportation may bring many
other management problems, such as the cost bearing and benefit distribution issues involved in signing the cooperation contract. This mode is relatively little used among enterprises and exists more between cooperatives and farmers.

4.3 Overall Analysis

The above results can be summarized as follows:

Table 4. Summary of logistics cost composition
                                  Self-support logistics   Third party logistics   Combined transportation logistics
Measure               Type        Grain      Vegetables    Grain      Vegetables   Grain      Vegetables
Circulation cost      Labor       40.74%     52.41%        40.44%     61.91%       47.45%     65.00%
                      Tools       23.41%     30.12%        –          –            –          –
                      Service     –          –             14.71%     19.05%       16.53%     20.00%
Inventory cost        Labor       13.58%     –             18.38%     –            12.40%     –
                      Warehouse   5.43%      –             7.35%      –            8.26%      –
                      Packing     3.26%      –             4.41%      –            4.96%      –
Administration cost               13.58%     18.63%        14.71%     19.05%       12.40%     15.00%
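The observation above that labor accounts for the largest share can be re-checked by summing the labor-related items in Table 4. The short Python sketch below is purely illustrative and simply re-encodes the percentages reported in the table.

```python
# Illustrative only: labor-related shares (percent of total logistics cost) from Table 4.
# None marks items reported as "-" in the survey tables.
labor_shares = {
    # mode: {crop: (circulation labor, inventory labor)}
    "self-operated": {"grain": (40.74, 13.58), "vegetables": (52.41, None)},
    "third-party":   {"grain": (40.44, 18.38), "vegetables": (61.91, None)},
    "combined":      {"grain": (47.45, 12.40), "vegetables": (65.00, None)},
}

for mode, crops in labor_shares.items():
    for crop, (circ_labor, inv_labor) in crops.items():
        total = circ_labor + (inv_labor or 0.0)
        print(f"{mode:13s} {crop:10s} labor-related share: {total:5.2f}%")
```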
From the survey results, the differences in the composition of enterprise logistics costs under the "agricultural supermarket docking" mode mainly come from different planting types and logistics methods. The logistics modes actually chosen also differ considerably from the preferences and recommendations for small and medium-sized agricultural enterprises reported by scholars a few years ago in the literature review. Therefore, logistics mode selection and cost components are analyzed separately.

1) Analysis of logistics mode selection: In general, whether for one-way logistics or joint logistics, most enterprises prefer to purchase third-party services. The main reasons are as follows.

a) Technical factors: Services provided by professional third parties have a higher degree of specialization and can effectively save costs. Third-party logistics service and quality levels usually follow industry or regional standards, whereas an enterprise would have to spend considerable manpower and material resources on professional training to reach the same standard itself.

b) Cost factors: Self-operated logistics involves the purchase and maintenance cost of transportation equipment; compared with third-party logistics, the cost of one-way logistics is nearly twice as high. Although self-operated logistics can save delivery time through fewer process links, and reduce the loss rate through
more effective in-process supervision, for small and medium-sized agricultural planting enterprises the benefit of this reduced loss rate is far less than the cost saved by purchasing third-party services. Spending a large amount on transportation equipment also ties up working capital, which to a certain extent reduces the enterprise's ability to withstand risk and raises its risk cost. In addition, the license required for heavy trucks generally needs professional training, so transportation must be undertaken by dedicated personnel; the salaries of these additional staff raise the human resource cost of self-operated logistics, and the extra equipment and personnel also raise management cost. Therefore, whether for combined or separate transportation, small and medium-sized agricultural planting enterprises mostly purchase third-party services.

c) Other factors: With the development of science and technology, the view put forward by university researchers in past years that "third-party logistics has more links, so its efficiency is lower" has been overcome by continually updated technology. In fact, third-party logistics now performs better in both delivery-time efficiency and economic benefit, so agricultural enterprises prefer third-party logistics under the "agricultural supermarket docking" mode.

2) Analysis of cost components:

a) General pattern: The survey data show that the proportion of labor cost is the highest in every mode. In Chengdu, purely manual harvesting and loading costs about 0.13 yuan/kg, while mechanized harvesting and loading costs only about 0.11 yuan/kg, so mechanization lowers this cost and its proportion. The management cost shows a small but consistent gap: one-way logistics (about 0.05 yuan/kg, computed as the average total salary of relevant management personnel divided by total transportation volume) > third-party logistics (about 0.04 yuan/kg) > combined transportation (about 0.03 yuan/kg).

b) Differences between planting types: There are significant differences in cost composition between vegetable and grain planting enterprises. The investigated vegetable enterprises basically omit packaging, or use only simple packaging such as bundling, to keep vegetables fresh and save cost. Grain crops usually need to be packed because they are loose and not easy to stack directly, and they usually need to be stored because they are more durable than fresh crops and have a longer sales cycle.

c) Differences between transportation modes: The difference between transportation modes lies mainly in the composition of transportation cost. For one-way logistics and organized joint fleets, transportation cost consists mainly of labor and tool costs, while for enterprises purchasing third-party services it consists of
labor and service purchase costs. After accounting, the cost of tools is much higher than the cost of purchasing third-party services, and the cost proportions also differ significantly.
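The per-kilogram figures quoted in point 2a) can be turned into a quick back-of-the-envelope comparison. The snippet below is only an illustration that re-uses the survey numbers reported in this section; it is not part of the original study.

```python
# Illustrative arithmetic using the per-kg figures reported in the survey (yuan/kg).
manual_harvest_load = 0.13      # purely manual harvesting and loading
mechanized_harvest_load = 0.11  # mechanized harvesting and loading

saving = manual_harvest_load - mechanized_harvest_load
print(f"Mechanization saves {saving:.2f} yuan/kg "
      f"({saving / manual_harvest_load:.0%} of the manual cost)")

# Management cost per kg (average management salary / total transport volume).
management_cost = {"one-way": 0.05, "third-party": 0.04, "combined": 0.03}
for mode, cost in sorted(management_cost.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{mode:12s} management cost: {cost:.2f} yuan/kg")
```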
5 Influencing Factors

There are still some subtle differences in the composition of logistics cost under the "agricultural supermarket docking" mode. From the analysis of the actual data and related information, the influencing factors can be roughly divided into the following points:

1) Regional or social factors: Even around Chengdu, there are subtle differences in the composition of logistics costs across districts and counties. Analysis shows that this difference mainly comes from differences in labor cost: according to the actual data, among otherwise similar districts and counties, labor cost is higher in the more economically developed ones. Cui (2019) pointed out that the level of economic development is positively correlated with the cost of social logistics. In addition, storage cost is also linked to the level of economic development: the more developed the infrastructure, the higher the storage cost.

2) Product type: Under the "agricultural supermarket docking" mode, for the sake of preservation, vegetables are basically not stored but are picked and transported directly to the supermarket. Ordinary vegetables are not packaged at all or are simply bundled; only some brands of green organic vegetables use higher-cost plastic packaging. Grain enterprises need to process their crops simply, such as milling rice or grinding wheat into flour, and to package them before transportation or shelving; because output and local equipment conditions differ, short-term inventory is usually held during processing.

3) Degree of modernization, which covers the following aspects:

a) Degree of mechanization: Comparison of actual data shows that, under the premise of purchasing social services, the cost of mechanized harvesting and loading/unloading (0.11 yuan/kg) is lower than that of fully manual harvesting and loading/unloading (0.13 yuan/kg).

b) Degree of informatization: In practice, the higher the degree of informatization and the more transparent the transaction, the more effectively logistics loss can be reduced, logistics cost saved and logistics efficiency improved. Among the surveyed enterprises, two are also involved in online agricultural platform services in addition to "agricultural supermarket docking". The
logistics loss and logistics efficiency of these two enterprises are the best in the sample. Zheng (2018) described the process of the information-based "agricultural supermarket docking" mode as: the supermarket releases supply and demand information online → agricultural products are listed for sale → orders are placed online → offline logistics distribution, with the online platform responsible for order management and settlement. At present, with the promotion of rural revitalization and rural e-commerce, this mode is gradually integrating with the existing "agricultural supermarket docking" mode, and the conclusion of that paper also supports this point.

c) Application degree of social services: Yuan (2019) pointed out that, given the current situation of enterprises in China, an enterprise can better establish a logistics information management system by cooperating with relevant logistics businesses and third-party logistics, connecting its suppliers and customers to create a complete logistics service system. This is complementary to mechanization and informatization.
6 Conclusions

After more than 12 years of development, the logistics modes used under "agricultural supermarket docking" have gradually stabilized. The main modes are one-way self-operated logistics, third-party logistics and regional combined transportation; the latter two are more widely used than one-way logistics, and third-party logistics is the most widely used among small and medium-sized agricultural planting enterprises. It is worth mentioning that one key point of China's recent drive for agricultural modernization and industrialization is the purchase of social services. This approach is used in all aspects of agricultural production, so combined transportation also widely takes the form of purchased third-party logistics services. From the actual data, third-party transportation service prices have become unified within the city, and the remaining differences mainly come from differences in labor cost and storage cost.

1) Suggestions:

a) Improve the level of modernization: Labor costs can be saved and efficiency improved by using large-scale agricultural machinery or purchasing agricultural machinery services. Enterprises should also gradually raise their level of informatization and standardize sales orders and logistics processes; this can effectively improve the efficiency and accuracy of logistics and reduce the loss rate through effective monitoring.

b) Simplify packaging: Vegetable-based enterprises can reduce costs by simplifying packaging as much as possible. This not only optimizes cost but also improves the freshness of crops by saving the time needed for packaging, thereby improving market competitiveness. In addition, simplified packaging has a certain environmental benefit.
For enterprises with certain brand benefits and packaging needs, it is suggested to use simple and environmentally friendly packaging as far as possible (for example, vegetables and fruits can be bundled and labeled, and grain planting enterprises can choose paper or degradable packaging). Although elaborate packaging can raise the added value of products, it may also reduce their preservation ability.

c) Select an appropriate logistics mode, covering both logistics type and partners: Enterprises should weigh the distance between themselves and the supermarket; when the distance is short, cold-chain transportation can be reduced or omitted to cut costs. More professional third-party logistics should be used for transportation, and partners with a higher degree of informatization, more transparent processes and higher efficiency should be chosen, so that the state of the goods can be supervised effectively and the risk of loss reduced.

2) Limitations: There are still several deficiencies in this study. First, the sample does not include fruit planting enterprises. Second, although the data are first-hand, the number of samples is small and the coverage area limited, so the applicability and generalizability of the results need further verification. Future research could examine fruit enterprises and compare them with vegetable and grain enterprises, and could survey more samples from other provinces and even other countries to further verify the conclusions of this paper.
References

Chen, Y., Yang, Y.: Agricultural enterprise is an important way of rural modernization in China. Agric. Technol. 2017(8), 250 (2017)
Ning, D., Changzhi, L., Guoying, J., Jianxi, C.: Research on supply chain management under the mode of "agricultural supermarket docking" – based on the perspective of agricultural products circulation. J. Yunnan Agric. Univ. (Social Science Edition) 7(05), 56–59 (2013)
Han, Yan, X.: Fresh agricultural products logistics under "farmer-supermarket direct-purchase": problems and suggestions analysis. Appl. Mech. Mater. 97–98, 1046–1049 (2011)
Ren, X.: New definition of the implementation subject of "agricultural supermarket docking". Rural Work Communication 2012(2), 34 (2012)
Cuixiang, Y., Yongqing, Y.: Analysis and prospect of "agriculture supermarket docking" logistics operation mode. Logistics Sci. Technol. 12, 73–74 (2014)
Delong, J.: The operational efficiency measurement of agro-food supply chains: the single "farmer-supermarket direct purchase" vs. dual channel. Chinese J. Manage. Sci. (2017)
He, S.: Dictionary of Finance and Economics. China Finance and Economics Press (1990)
Zhao, Y., Liu, Y.: Thinking of brand building of small and medium agricultural enterprises under the background of "Internet plus". Modernization of Shopping Malls 2016(24), 86–87 (2016)
Yang, H.: The path to improve the financial management level of modern small and medium-sized agricultural enterprises. Jilin Agriculture 10, 59 (2018)
Deng, Y., Zhao, X.: Discussion on the choice of logistics mode of agricultural enterprises. Group Economic Research 2007(14) (2007)
Linglong, S., Yang, G., Qin, W., et al.: Research on logistics development strategies of agricultural leading enterprises – taking Linyi City as an example. China Market 28, 32–33 (2016)
Xiaoping, J.: Construction of agricultural logistics cost budget management system. Econ. Res. Guide 18, 143–144 (2014)
Jin, H., Zhang, Y., Lu, W., et al.: Study on fresh farm produce logistics cost control based on system dynamics. Logistics Technology (2015)
McKinnon, A.: Analysis of Transport Efficiency in the UK Food Supply Chain. Logistics Research Center, Heriot-Watt University (2003)
Skjoett-Larsen, T., Thernøe, C., Andresen, C.: Supply chain collaboration: theoretical perspectives and empirical evidence. Int. J. Phys. Distrib. Logistics Manage. 33(6), 531–549 (2003)
Li, S., Wu, Y., Zhu, K.: Research on the countermeasures to reduce the cost of agricultural logistics. China Township Enterp. Account. 2007(8), 41 (2007)
Wenli, C.: Research on cost control of vegetable logistics. Agric. Econ. 4, 117–118 (2014)
Cui, T.: Research on the influencing factors of social logistics cost in China. Master's thesis (2019)
Yu, Z.C., Guo, L.X., Sheng, Z.X.: Problems and countermeasures in the construction of modern circulation system of agricultural products in China. Econ. J. 389(04), 131–134 (2018)
Shuilin, Y.: Multiple linear regression analysis of the impact of enterprise logistics cost on enterprise benefit. Stat. Decis. 35(04), 188 (2019)
Research on the Optimization of Intelligent Emergency Medical Treatment Decision System for Beijing 2022 Winter Olympic Games Yujie Cai, Jing Li(B) , Lei Fan, and Jiayi Jiang School of Economics and Management, Beijing Jiaotong University, Beijing, China {19241054,jingli,19241057,19271183}@bjtu.edu.cn
Abstract. In order to respond promptly to possible accidents during the Winter Olympics and reduce the risk of injury and death, it is essential to upgrade and optimize the emergency medical support system. Through real-time monitoring of patients' physical data, and based on existing algorithms and the platform already built, early warning, prediction and intelligent evaluation of existing or high-risk diseases can be carried out. However, the current system still has some defects. Taking traumatic hemorrhagic shock as an example, we identify these problems and propose solutions and optimizations: combining structured and unstructured data, improving practicability and comfort, and making information storage and transmission smoother. Keywords: Emergency medical security · Web design · HTML page design · Floating window · Video docking
1 Introduction

As shown in Fig. 1, we establish an E-R diagram model based on the underlying relations of the platform. At present, the main catalog of the system is divided into eight categories of severe disease, and the functions realized are system login, description of the warning and prediction system, data collection and analysis, early warning and prediction, and risk assessment. Our improvements cover three aspects: HTML page design, data entry and function realization. In HTML page design, the entry page will add disease classification and introductions to the diseases and functions; in data entry, we will fix source code vulnerabilities, add units to chart data, and adjust details such as the casualty ID; in function realization, two major functions will be added: the suspension (floating) window and remote video docking.
2 Emergency Medical Disposal Requirements

Based on the current development status of the system, we have analyzed the system by reference to some literature, and according to the needs of the actual medical staff,
Fig. 1. E-R diagram of emergency medical insurance system.
and the needs of the system, we have found some parts that need to be optimized in the future [1–4]. For severe trauma and its complications, the system should provide real-time forecasting and warning: medical personnel enter patient data manually, or athletes wear wrist devices that capture physical data in real time, and from these two data sources the system produces predictions and evaluations, so that paramedics learn the probability of injury in advance and can carry out risk scoring. For common crises and severe cases, relevant data should be input for risk scoring and intelligent monitoring, to determine whether the disease has occurred and to support medical security, diagnosis and treatment.
3 The Shortcomings of the Current System

Figure 2 below lists the current problems in our system; we address these issues in three aspects below.
Fig. 2. Problems in our system.
3.1 HTML Web Page Design

• Symptoms are not classified. In the current system, when a doctor enters an account and password and clicks login, the eight types of disease are displayed directly in the left menu bar. The entry page does not classify these eight types of severe disease, which hinders information retrieval and later classified data entry.

• Lack of introductions to the diseases. Although the eight types of condition are listed on the home page, clicking a condition and then clicking "disease early warning and prediction" at the bottom of the page leads to the same page, which only introduces the forecasting model; there is no introduction to the conditions themselves or to the analysis models used by each function, so the system is difficult to operate for physicians and other staff who are not specialists in it.

• Lack of introduction to the three features. The system supports three major functions: prediction and early warning, risk scoring and intelligent monitoring. However, the definitions of the three functions and the relationships among them are not outlined, and non-professional users may not be able to distinguish them. Prediction and early warning judge what is about to happen based on existing data; risk scoring quantifies the disease process and gives a score so that prevention can be carried out in advance; intelligent evaluation determines what the condition actually is.

3.2 The Function Is Not Running Smoothly

In the algorithm models corresponding to the three functions, operation is not smooth and the connections between pages are weak. There are also small defects in the page design: the source code still has minor vulnerabilities, the data charts lack units, the casualty ID information is not prominent enough, and there is no return button. Flexibility and comfort are therefore poor.

3.3 Unable to Alert and Lack of Visual Data

At present, the system can complete real-time monitoring, but it cannot issue a reminder or warning for monitored subjects who are in shock or at high risk of shock, and it is difficult for background monitoring personnel to spot abnormalities in a large amount of data and take timely measures. Ideally, warning should be realized at the front and rear simultaneously: the front end can rely on the vibration alarm of the wearable device, supported by flashing and audible alarms on the PC terminal, which the existing system cannot provide.
• The forward ambulance personnel cannot communicate with the terminal monitoring personnel. Under the current system, frontline ambulance personnel can only transfer structured data to the background through equipment monitoring and manual input, so terminal monitoring personnel cannot obtain unstructured data such as images; this lack of visual data hinders correct diagnosis and timely treatment. Conversely, the rear monitoring personnel and authoritative experts can only transmit text to the forward rescue personnel and cannot give voice guidance, action demonstrations or on-site direction. Communication between front and rear that relies only on text may cause semantic deviation and lead to serious consequences.
4 System Optimization

Figure 3 below shows our system optimization flow chart, which is divided into six processes for optimizing the system.
Fig. 3. System optimization flow chart
4.1 HTML Web Page Design

• Classifying symptoms. In optimizing the web design, we draw on existing research on the development and design of medical systems [5–7]. After medical personnel enter their account and password, a new page will classify the original eight types of severe disease into two groups: severe trauma and its complications, and common critical diseases. Choosing "severe trauma and its complications" leads to a system interface covering five conditions: uncontrolled hemorrhagic shock, airway obstruction, lethal traumatic pneumothorax, post-traumatic coagulation disorder and post-traumatic infection. Choosing "common critical diseases" leads to an interface covering severe respiratory infectious diseases, fracture and frostbite. The additional page makes it quicker to find the corresponding condition for entering data, running real-time prediction and other related operations, and supports classified storage and many other functions.
• The introduction of the eight diseases. The eight types of serious disease currently lack detailed introductions. We will add the key content for each category, including descriptions of early manifestations and symptoms, so that rescue personnel can quickly judge the on-site condition, identify the type of disease, choose the correct category and enter the relevant injury data within a short time, shorten diagnosis and treatment time, obtain prediction and scoring quickly, and judge whether severe trauma and complications will occur, preparing for treatment and reducing the number of patients. The disease introductions are summarized and collated from the medical literature [8, 9].

• The introduction of the three features. The three functions also lack detailed introductions. We will add pages explaining the three functions and the logical relationships among them, so that on-site rescue personnel can judge the situation in time, use each function appropriately within a short time, and complete the goals of prediction, early warning and timely rescue.

4.2 Details Aspects

The approximate framework of the system is already in place, but the modules are not connected closely enough. We will add connections between the modules: each interface will receive buttons to return to the main interface and to the previous step, and mutual jump buttons will be added between the prediction and early warning, scoring and intelligent monitoring modules, so that users can jump quickly and return to a function's interface at the right moment, improving the practicality and rationality of the system. While testing the whole system in real time, we also found errors in functions such as data entry and buttons with insufficient functionality; these details will be optimized to better support data entry, prediction and early warning, and risk scoring.

4.3 Alerting and Adding Visual Data

As noted in Sect. 3.3, the system can already complete real-time monitoring but cannot issue a reminder or warning for monitored subjects who are in shock or at high risk of shock, and background personnel struggle to spot abnormalities in a large amount of data in time. Warning should therefore be realized at the front and rear simultaneously: the front end relies on the vibration alarm of the wearable device, supported by flashing and audible alarms on the PC terminal.

• Setting the suspension window. The PC end is currently not closely connected with the handheld mobile end. To better support on-site treatment and improve cooperation between front-line rescue personnel and rear medical personnel, we will add a new feature, the suspension (floating) window, to the mobile end. The system already
implements real-time detection of casualty information and early warning, but for major trauma prediction the front ambulance staff and rear medical staff cannot obtain the results in real time. The suspension window addresses this: for an injured person whose prediction is positive, meaning the model indicates a likely critical condition, the handheld end will scroll a real-time reminder so that medical staff follow that casualty, and the large screen at the rear PC end will display the same information in real time, so that rear personnel are prepared and assistance is readily available.

• Establishing the video function. For some common conditions, such as frostbite and fracture, we currently have no prediction algorithm model for early warning, so a video function is added. In addition to entering structured data such as the patient's heart rate, blood pressure and body temperature into the system, medical staff can transmit the on-site situation and the specific injury back to the rear by video. Remote medical personnel can then observe the scene, conduct real-time diagnosis and treatment, and provide visual support such as on-site direction, action demonstration and voice guidance, improving front-to-rear cooperation in real time and jointly reducing casualties. To achieve remote monitoring by video, we have designed corresponding schemes based on existing research [10–12].
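To make the intended behavior of the suspension-window warning concrete, the following Python sketch shows one way the dispatch logic could look. The disease grouping mirrors Sect. 4.1, while the probability threshold, the field names and the push_* helpers are hypothetical placeholders rather than parts of the actual system.

```python
from dataclasses import dataclass

# Two-level classification from the redesigned entry page (Sect. 4.1).
DISEASE_GROUPS = {
    "severe trauma and its complications": [
        "uncontrolled hemorrhagic shock", "airway obstruction",
        "lethal traumatic pneumothorax", "post-traumatic coagulation disorder",
        "post-traumatic infection",
    ],
    "common critical diseases": [
        "severe respiratory infectious diseases", "fracture", "frostbite",
    ],
}

@dataclass
class Casualty:
    casualty_id: str
    condition: str
    risk_probability: float  # output of the existing prediction model, 0..1

def dispatch_warning(c: Casualty, threshold: float = 0.7) -> bool:
    """Push a 'positive' prediction to both the handheld floating window and the PC screen.

    The 0.7 threshold and the push targets are illustrative assumptions.
    """
    if c.risk_probability < threshold:
        return False
    group = next((g for g, conds in DISEASE_GROUPS.items() if c.condition in conds),
                 "unclassified")
    message = (f"Casualty {c.casualty_id} [{group}]: possible {c.condition} "
               f"(p={c.risk_probability:.2f})")
    push_to_handheld_floating_window(message)  # scrolling reminder on the mobile end
    push_to_pc_large_screen(message)           # real-time display on the rear PC screen
    return True

# Placeholder transports; the real system would use its own communication channels.
def push_to_handheld_floating_window(msg: str) -> None:
    print("[handheld]", msg)

def push_to_pc_large_screen(msg: str) -> None:
    print("[PC screen]", msg)

if __name__ == "__main__":
    dispatch_warning(Casualty("A-012", "uncontrolled hemorrhagic shock", 0.83))
```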
5 Conclusion

We have optimized the emergency medical security system for the Winter Olympics. Through the optimized web design module, introductions to the eight types of severe disease and the three functions are added; detail and navigation buttons are added in four parts of the system to realize interaction between the modules; the newly designed suspension window combines structured and semi-structured data; and by
adding the front-and-rear real-time video function, real-time diagnosis and treatment are realized. We will strive to improve the fluency of the entire system and platform, serve the emergency medical security system, meet the needs of front-line medical personnel, and ultimately reduce casualties.

Acknowledgments. This work was partly supported by the National Key Research and Development Plan for Science and Technology Winter Olympics of the Ministry of Science and Technology of China (2019YFF0302301).
References

1. Xu, J.: Construction of remote medical emergency information system in 5G network for Beijing international horticultural exhibition. Chin. Digit. Med. 15(01), 19–21 (2020)
2. Muhammad, N., Muhammad, N., Idar, M.: A low cost wearable medical device for vital signs monitoring in low-resource settings. Int. J. Electr. Comput. Eng. (IJECE) 9(4), 2321–2327 (2019)
3. Niculescu, M.-S., Florescu, A., Pasca, S.: LabConcept – a new mobile healthcare platform for standardizing patient results in telemedicine. Appl. Sci. 11(4), 1935 (2021)
4. Zhang, P., Jing, C., Zhao, H., Wang, Y.: The application of 5G in the Beijing Winter Olympics Chongli division. Chin. Digit. Med. 15(01), 16–18+32 (2020)
5. Xu, Y.: Research on mobile medical information integration platform construction. Chin. Comput. Commun. 33(08), 31–33 (2021)
6. Bai, X., Liu, Z., Zhang, G., Li, J., Sun, R., Luo, X.: Design and development of intelligent health medical system based on Android platform. Electron. Des. Eng. 29(04), 107–111 (2021)
7. Lu, H.: On the risks and countermeasures of the third-party platform of internet medical. J. Hohai Univ. (Philos. Soc. Sci.) 23(03), 89–96+108 (2021)
8. Bawankar, B.U., Dharmik, R.C., Telrandhe, S.: Nadi pariksha: IOT-based patient monitoring and disease prediction system. J. Phys. Conf. Ser. 1913(1), 1–4 (2021)
9. Wu, C., et al.: Acute exacerbation of a chronic obstructive pulmonary disease prediction system using wearable device data, machine learning, and deep learning: development and cohort study. JMIR Mhealth Uhealth 9(5), e22591 (2021)
10. Kang, M.H., Lee, G.J., Yun, J.H., Song, Y.M.: NFC-based wearable optoelectronics working with smartphone application for Untact healthcare. Sensors 21(3), 878 (2021)
11. Wang, K., Zhou, G.: Research on mobile APP development strategy based on Android platform. Comput. Eng. Softw. 42(04), 144–146 (2021)
12. Zhai, W., Li, Z.: Research on remote monitoring based on WiFi wireless transmission. Electron. Sci. Technol. 29(09), 68–71 (2016)
Analysis of the “One Pallet” Model of Fast Consumer Goods in Post Epidemic Period Yiqing Zhang and Wei Liu(B) Shanghai Maritime University, Shanghai, China [email protected]
Abstract. The "one pallet" model describes the situation in which an order placed by a consumer on the terminal platform is transferred to the central platform, which then carries out unified control, picks up the goods from the general warehouse and distributes them to consumers. This model can reduce the excess inventory that arises when goods are distributed from the general warehouse to sub-warehouses. As an example, U Company sells FMCG (fast moving consumer goods). This paper uses system dynamics to analyze the key links from the platform terminal to the distribution terminal and puts forward a "one pallet" supply chain mode with "contactless distribution" based on existing technology. At the same time, it explores a plan to optimize U Company's supply chain in the post epidemic period. Keywords: Post epidemic situation · FMCG · Supply chain · System dynamics
1 Introduction

Since the end of 2019, COVID-19 has spread rapidly throughout the world. People's lives and health have been seriously affected, and the global economy also faces great challenges. According to the statistical bulletin of national economic and social development of the People's Republic of China in 2020 issued by the National Bureau of Statistics, the total volume of freight transportation in 2020 reached 46.3 billion tons [1]. According to the report on the operation of Shanghai's national economy in 2020, the total retail sales of social consumer goods in Shanghai reached RMB 1593.25 billion in 2020, an increase of 0.5% over the previous year, and the city's online retail sales reached RMB 260.639 billion, an increase of 10.2% over 2019 [2]. FMCG is an indispensable part of people's lives, including food and beverages, daily chemical products, etc. FMCG has some special characteristics: a short turnover period, high utilization rate, a large number of competitors and low consumer loyalty. All of these mean that the FMCG supply chain must not be broken: an out-of-stock position at any point in the chain could reduce company profit and lose existing customers. In addition to these general characteristics, the epidemic has brought about significant changes in demand for FMCG: (1) demand for FMCG such as personal cleaning products, condiments, etc. grew rapidly during the epidemic period, and this growth could substantially outlast the epidemic. (2)
The demand for skin care products decreased rapidly during the epidemic period and recovered rapidly afterwards. Due to the epidemic's impact, there was a move towards online shopping, and e-commerce logistics have benefited. Research showed that, during the epidemic, many Chinese consumers adopted online shopping. In the first three quarters of 2020, e-commerce channels accounted for 26.7% of sales, up 4.8% over 2019, and live e-commerce channels accounted for 7%. Although e-commerce logistics is developing like a gushing well, this does not mean that the industry's future development will be smooth sailing. On the contrary, a series of problems will follow, the most obvious of which is delivery delay; and as more consumers rely on online shopping post epidemic, the delivery delay problem becomes more acute. To alleviate these problems, we propose a "one pallet" supply chain model. An FMCG supply chain and logistics summit put forward a plan for joint city distribution (2014), and Zhenlin Wei and other scholars proposed common warehousing and distribution for small and medium-sized e-commerce services (2015). Research on system dynamics (SD) was founded by the American scholar J. W. Forrester in 1956; based on this method, Xu Li discussed the influencing factors and solutions for improving supply chain inventory management, taking the beer game as an example (2019) [3]. The rudiments of the modern UAV were the redundant or retired aircraft refitted after the Second World War in 1945; with the progress of electronic technology, UAVs were mainly used in the military field and rarely in the commercial field [4]. In fact, the use of UAVs in business can help improve transportation efficiency.
2 Problem Analysis

With the development of the economy and the diversification of marketing means and sales channels, from the beginning of the epidemic to the present post epidemic period the demand for fast-moving consumer goods has grown rapidly, orders on e-commerce platforms are increasing quickly, inventory pressure is rising, and logistics delays seem endless. In addition, in the post epidemic period a few sporadic areas are still under closed management due to outbreaks. So, for many companies, how to minimize inventory turnover days and quickly deliver goods to consumers without contact is a problem to be solved.

Take U Company as an example. The company plays an important role in the FMCG market; every day, all over the world, people come into contact with U Company's products. At present, U Company's main business has three categories: food planning, food, and household and personal care products. According to the survey, during the epidemic period after 2020 U Company's FMCG sales volume generally increased, and sales in September, October and November of 2019 and of 2020 differ markedly (Fig. 1, Table 1). Overall, total sales in September, October and November 2020 were about 149% of the total for the same three months of 2019. U Company has now entered the post epidemic period, in which the demand for fast-moving consumer goods continues to grow, placing huge pressure on its supply chain.
Table 1. Types of FMCG of U Company (unit: RMB 10000)

Serial number   Type                         Total for three months in 2019   Total for three months in 2020
➀               Deodorants fragrances        880.1                            982.2
➁               Fabric sensations category   9391.6                           21798.3
➂               Fabric solutions category    26489                            36479.4
➃               Hair care                    40257.3                          67435.4
➄               Home & hygiene               11786.8                          2034.8
➅               Oral care                    1722.9                           2960.2
➆               Skin care                    30518.5                          46335.6
➇               Skin cleansing               35391.6                          57080.5
➈               Tea                          2902.5                           2073.3
Fig. 1. Sales growth of U Company in 2019 and 2020 (unit: RMB 10000)
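As a quick arithmetic check on the 149% figure quoted above, the column totals of Table 1 can be summed directly; the snippet below simply reproduces that calculation.

```python
# Totals for September-November (RMB 10,000), taken row by row from Table 1.
sales_2019 = [880.1, 9391.6, 26489, 40257.3, 11786.8, 1722.9, 30518.5, 35391.6, 2902.5]
sales_2020 = [982.2, 21798.3, 36479.4, 67435.4, 2034.8, 2960.2, 46335.6, 57080.5, 2073.3]

total_2019, total_2020 = sum(sales_2019), sum(sales_2020)
print(f"2019 total: {total_2019:.1f}, 2020 total: {total_2020:.1f}")
print(f"2020 as a share of 2019: {total_2020 / total_2019:.0%}")  # roughly 149%
```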
Investigation reveals some problems in U Company's supply chain system: (a) the average inventory turnover is 20 days, leaving room for reduction; (b) because of the increasing pressure on U Company's supply chain, delivery delays are widespread; (c) its distribution mode still risks spreading the virus.
3 Assumption of “One Pallet” Supply Chain of FMCG In order to solve the above-mentioned problems and make U Company’s supply chain operate efficiently during the post epidemic period, it is proposed to connect the terminal e-commerce platform, general warehouse, transportation, distribution and other links of FMCG through an Internet smart platform. This paper explores how to use the “one
pallet” mode to reduce the waste of goods turnover and puts forward the scheme of “contactless distribution” through the central platform monitoring. 3.1 Framework of “One Pallet of Goods” In order to ensure the continuous supply of goods and reduce the cost of the supply chain as much as possible, the principle of system dynamics is used to design the “one pallet” supply chain mode in the post epidemic period [5]. The model is constructed by combining the relationship between terminal e-commerce platform, inventory management, logistics distribution and terminal distribution, with system dynamics (Fig. 2).
Fig. 2. Schematic diagram of “one pallet” structure
3.2 Concept of “one Pallet” In a traditional supply chain, the terminal e-commerce platform is not directly connected with the general warehouse, but with each sub warehouse, and finally connected to the general warehouse. This is the “multi-inventory” supply chain model that most large companies use. “One pallet” supply chain mode means that, when consumers place an order on the terminal platform, the terminal platform will transfer the information to the central platform and the central platform carries out unified control, picks up the goods from the general warehouse and distributes them to consumers. To avoid unnecessary waste in the multi-inventory supply chain mode, we can transform it into one inventory mode. The specific differences between the multi-pallets mode and the one pallet mode are as follows: in multi-pallets transportation and distribution mode, after consumers place orders on the platform, the platform transmits the order information to each sub warehouse, and then distributes the goods to consumers from the sub warehouse. When the goods are removed from the sub warehouse, the sub warehouse sends shortage information to the general warehouse and the general warehouse distributes the goods to the sub warehouse. The mode of “multiple pallets” causes production waste, overstocking and virus spread. In contrast, under the one pallet mode, after consumers place orders on the platform, the goods are taken from the general warehouse at one time and distributed to consumers (Fig. 3). Compared with the previous multi pallets supply chain mode, one pallet mode can help integrated management, reduce the risk of unsalable FMCG, reduce inventory costs, and improve order delivery efficiency.
Fig. 3. Comparison between “multi pallets” and “one pallet”
In the multi pallets mode, the company needs to consider safety stock, in transit stock, turnover stock and so on at each sub warehouse. Safety stock is buffer stock held against uncertain future material supply or demand, such as a large number of sudden orders, etc. An increase in safety stock increases holding cost. Turnover inventory refers to the inventory of goods needed to maintain a certain amount of turnover according to the circulation link and speed of goods in order to ensure normal supply to the market. In transit inventory refers to the inventory that has not yet arrived at the destination, is being transported or is waiting for transportation. The one pallet mode can reduce this inventory by putting all the goods on one pallet, opening up all sales channels and realizing unified allocation from one inventory. At the same time, the optimized terminal distribution in one pallet mode can help safe and reliable distribution in the post epidemic emergency situation. 3.3 Integration of One Pallet System dynamics will be used to intuitively and clearly analyze the positive and negative feedback relationship of “one pallet” mode. The system science thought of system dynamics (SD) is “Every system must have structure, and system structure determines system function”. The internal components of the system are the feedback of cause and effect to each other. We can find the link of the problem from the internal structure of the system [5]. A one pallet model is built using system dynamics to help improve the management efficiency from the internal micro-structure of the platform system. First, the terminal e-commerce platform collects the consumer’s data and sends it to the general warehouse; then, the goods are directly distributed from the general warehouse to the terminal consumers. In order to ensure that the goods in the general warehouse are matched with demand, the company needs an accurate forecast of the demand so that it can arrange the production; finally, in the terminal distribution stage,
in order to achieve contactless distribution, the UAV and intelligent car can work together to complete the last kilometer of distribution at the fastest speed. The following model includes certain assumptions: (a) It is assumed that operating only from an e-commerce platform reduces inventory and optimizes distribution. (b) It is assumed that the big data collected are accurate. (c) It is assumed that the relevant information flow can be obtained in a timely manner. (d) It is assumed that the UAV, intelligent car and robot comply with the relevant regulations and will not damage the environment (Fig. 4).
Fig. 4. Schematic diagram of the “one pallet” model of modern FMCG
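Before turning to the subsystems, the inventory-reduction claim behind the one pallet mode can be illustrated with the classical risk-pooling (square-root) calculation below. This is a deliberately simplified sketch with invented numbers and an independence assumption across regions; it is not the paper's system dynamics model or U Company data.

```python
import math

# Toy illustration of why pooling stock in one general warehouse can cut safety stock.
# All numbers are invented for illustration only.
n_subwarehouses = 5
demand_std = 20.0   # weekly demand standard deviation per region, assumed independent
z = 1.65            # service-level factor (roughly a 95% service level)

# Multi-pallet: every sub-warehouse holds its own safety stock.
safety_multi = n_subwarehouses * z * demand_std

# One pallet: a single general warehouse pools the regional demand variability.
safety_one = z * demand_std * math.sqrt(n_subwarehouses)

print(f"Safety stock, multi-pallet mode: {safety_multi:.0f} units")
print(f"Safety stock, one pallet mode:   {safety_one:.0f} units")
print(f"Reduction from pooling:          {1 - safety_one / safety_multi:.0%}")
```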
According to the theory of system dynamics, the one pallet mode is divided into upstream subsystem and downstream subsystem. Then the causal feedback of each subsystem is analyzed, and suggestions for the integration of each subsystem and optimization of logistics transportation are put forward. 1) Upstream Subsystem The upstream of a pallet of goods includes FMCG production. As the characteristics of FMCG determine that there must be no out of stock phenomenon, the challenge of one pallet in the post epidemic period is to minimize inventory while meeting consumers’
needs through effective inventory management. According to the principle of system dynamics, the key to the one pallet supply chain mode is that the administrator can clearly grasp the real market information, inventory information and goods information. According to the principle of system dynamics, VMI (Vendor Managed Inventory) system collects data after consumers place an order, compares the actual and the planned inventories, predicts how long the current inventory can last, and transmits the information to the central platform and the production department (Fig. 5).
Fig. 5. VMI working diagram
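A minimal sketch of the VMI check just described (compare actual with planned inventory and estimate how many days the current stock will last) might look like the Python below; the field names, the demand forecast input and the reorder threshold are assumptions for illustration, not details of U Company's system.

```python
from dataclasses import dataclass

@dataclass
class StockStatus:
    sku: str
    actual_units: float
    planned_units: float
    forecast_daily_demand: float  # demand forecast fed in from the sales platform

def vmi_check(s: StockStatus, reorder_cover_days: float = 7.0) -> dict:
    """Compare actual vs planned stock and estimate days of cover (illustrative)."""
    cover_days = (s.actual_units / s.forecast_daily_demand
                  if s.forecast_daily_demand > 0 else float("inf"))
    report = {
        "sku": s.sku,
        "gap_vs_plan": round(s.actual_units - s.planned_units, 1),
        "days_of_cover": round(cover_days, 1),
        "replenish": cover_days < reorder_cover_days,
    }
    # In the real system this report would be sent to the central platform and the
    # production department; here it is simply printed.
    print(report)
    return report

vmi_check(StockStatus("shampoo-400ml", actual_units=1200,
                      planned_units=1500, forecast_daily_demand=260))
```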
A convenient and efficient information system that keeps the inventory transparent is the prerequisite for achieving one pallet. Because of the special environment of the epidemic, not only the complementarity and substitution of products but also possible FMCG growth trends must be considered, and the VMI system should be constantly optimized and improved according to these changes. A sound inventory management system helps managers understand the current inventory, the quantity of goods in transit and their expected arrival time, and can also predict possible future demand. Part of the sales pressure can also be transferred to live broadcast platforms: before a live broadcast, the company negotiates with the platform about the quantity and price of the goods on the shelves, which shares part of the terminal-to-terminal information flow, reducing production waste and improving profits.

2) Downstream Subsystem
The downstream of one pallet includes terminal distribution and related links. In the post epidemic period, emergency areas still exist; to ensure daily life in these areas, contactless delivery based on intelligent cars and UAVs is particularly important. Complete contactless delivery means that the distributor has no contact with the consumer: the consumer places an order on the online e-commerce platform and does not meet the distributor face to face before receiving the goods.
The idea of complete Contactless Delivery is: if consumers choose to pick up the goods from the modern receiving point, the smart car sends the goods to the deposit box and the corresponding robot deposits the goods in the empty deposit box. After storing the goods, the robot automatically sends the pick-up code to the receiver’s mobile phone, and the consumer can pick up the goods according to the number; if the consumer chooses UAV delivery, the UAV delivers the goods to the customer’s home if the weather, building structure and other conditions permit. According to the principle of system dynamics, in order to ensure that consumers can receive goods delivered by UAV, the consumers must be contacted in advance and an appointment made before completing the last kilometer delivery. At the same time, the UAV is required to take off automatically at the specified time (specified time = receiving time − delivery time). If the goods are not signed in, the UAV sends the goods to the deposit box (Fig. 6).
Fig. 6. UAV terminal delivery flow chart
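The take-off rule quoted above (specified time = receiving time − delivery time), together with the fall-back to the deposit box when the goods are not signed for, can be sketched as follows; the datetime values and helper names are illustrative assumptions.

```python
from datetime import datetime, timedelta

def plan_uav_takeoff(receiving_time: datetime, delivery_time: timedelta) -> datetime:
    """Specified take-off time = receiving time - delivery (flight) time."""
    return receiving_time - delivery_time

def deliver(receiving_time: datetime, delivery_time: timedelta, signed_for: bool) -> str:
    takeoff = plan_uav_takeoff(receiving_time, delivery_time)
    if signed_for:
        return f"UAV takes off at {takeoff:%H:%M} and hands the parcel to the consumer"
    # If the goods are not signed for, fall back to the deposit box.
    return f"UAV takes off at {takeoff:%H:%M} and leaves the parcel in the deposit box"

appointment = datetime(2021, 5, 1, 18, 0)   # receiving time agreed with the consumer
print(deliver(appointment, timedelta(minutes=25), signed_for=True))
print(deliver(appointment, timedelta(minutes=25), signed_for=False))
```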
UAV technology is not perfect and is limited by weather, the environment and high buildings, and UAV flights cannot take place at low altitude in bad weather and areas with a large number of high buildings. Therefore, when the host recognizes that the current environment is not suitable for UAV transportation, the e-commerce platform reminds the user and automatically changes the selected transportation mode to intelligent vehicle transportation + UAV transportation mode. When the goods arrive at the collection point, the truck can be used as the UAV take-off point, and the UAV takes off from the truck to deliver the goods to consumers. This means that freight cars need to be refitted. Light freight trucks (length less than 7 m and height 2.5–4 m) can be used. The specific methods are as follows: (a) a layer of space is added between the cargo rack and the cab to store the UAV, and the remaining space is used to place the goods. (b) The side of the container can be opened from top to bottom and parallel to the bottom layer of the container to form a horizontal platform for UAV take-off. At the same time, the side can also be opened from top to bottom to facilitate unloading (Fig. 7). 3) Integration of Various Subsystems To improve the company’s brand image, build consumers’ loyalty, and avoid lack of goods due to opaque information, it is necessary to build an Internet smart platform,
Fig. 7. Schematic diagram of container warehouse
connect the terminal platform with the terminal distribution, and monitor the amount of inventory and the flow direction of goods (Fig. 8).
Fig. 8. Elements of Internet smart platform
Building a transparent and efficient one pallet central control system helps the company better understand the current state of the supply chain: if a consumer refund is due, after the money is returned to the consumer, management personnel can quickly respond and readjust the distribution plan as soon as possible; if there is a consumer complaint, it helps supply chain management personnel to inquire whether the supply chain is in good condition. If there are problems in any link, a timely reminder will lead relevant personnel to solve them quickly. 3.4 One Pallet Suggestions for U Company (a) Because VMI requires a high degree of mutual trust between e-commerce, general warehouse and production, it is difficult to realize a completely integrated supply chain. As a result, managers cannot make timely adjustments to production, so that the orders cannot be implemented quickly. To alleviate this, we can use big data combined with VMI and other technologies. Based on the big data collected by the terminal e-commerce platform, manufacturers can reasonably classify consumers and adjust production plans accordingly. (b) According to the survey, the distribution of U Company depends on third-party logistics. Because the main business of U Company is not distribution, its technical distribution system is not mature. Choosing third-party logistics for distribution is cheaper in the short term than spending a lot to build a distribution system, but in the long run selfoperated distribution may be a better choice for U Company for the following reasons: Generally, U Company has a huge demand for FMCG and frequent logistics distribution. Compared with third-party logistics, the cost of self-operated logistics is lower. Self-operated logistics can help companies to have a stronger control over all aspects of
the supply chain, so that managers can more comprehensively grasp the status of goods, shorten the terminal delivery time, and alleviate delivery delay and other problems. (c) U company has a huge demand for FMCG, and there are many kinds of ecommerce platforms to sell its goods. In order to realize personalized service, reasonable production and fast distribution, it is necessary to ensure that the information flow is completely smooth, transparent and accurate. The one pallet mode involves a variety of businesses mentioned above. These businesses form a complete service system from upstream platform terminal to distribution terminal. The data for each subsystem in each business layer are unified into the data layer to form a reliable database. These data can help to grasp the epidemic situation, predict the production scale, analyze the market trend, understand the transportation status, and solve the problems existing in the distribution. To enable administrators to obtain information in time, each business layer subsystem involved needs to clearly show the data to the end users when necessary (Fig. 9).
Fig. 9. Content contained in Internet smart platform
To facilitate managers to grasp the real-time information of the terminal platform and terminal distribution, and to quickly control the intelligent car and UAV remotely in case of an accident, it is necessary to set the terminal search and operation interface on the mobile terminal. The interface should be clear, concise and easy to operate (Fig. 10). The terminal platform interface is mainly to help administrators understand the commodity inventory and sales price of each platform. The terminal distribution interface not only lets the administrator know the time, place and status related to the goods, but also facilitates the administrator to remotely transfer back the intelligent car and UAV, so as to avoid the continuation of wrong transportation and the occurrence of traffic danger to a certain extent.
Fig. 10. Schematic diagram of mobile phone interface
4 Conclusion
The one pallet model helps FMCG companies reduce inventory turnover days and goods turnover costs. Relying on the Internet, the upstream production, transportation and distribution system can reduce transportation turnover as much as possible, shorten the distance from production to the terminal, and achieve the "last kilometer distribution" within a few hours. Building a transparent and efficient e-commerce terminal helps to reduce consumer complaints and improve after-sales service. Moreover, the model can complete fast and contactless end-to-end distribution, and help the company reduce delivery delay, inventory accumulation, the risk of virus transmission and the probability of consumer complaints. In order to carry out the one pallet mode smoothly, it is necessary to ensure smooth connection between the subsystems, transparent information and simple operation. Although this study gives some enlightenment on reducing inventory backlog, there are still open problems. The combination of the supply chain and the Internet urgently needs to ensure information security, so as to prevent criminals from obtaining business secrets and controlling intelligent vehicles and UAVs after breaking into the information system. Acknowledgments. In the design process of this paper, Professor Wei Liu gave careful guidance and instruction to each link of the paper from topic selection and conception to the final draft, which enabled me to finally complete the paper.
References 1. Statistical bulletin of national economic and social development of the people’s Republic of China in 2020. National Bureau of Statistics (2021). http://www.stats.gov.cn/tjsj/zxfb/202 102/t20210227_1814154.html. Accessed 28 Feb 2021
2. The operation of Shanghai’s national economy in 2020. Government OnlineOffline Shanghai. http://www.shanghai.gov.cn/nw31406/20210126/e0e5b258ed954d56ac 2ebed69bf0d099.html. Accessed 26 Jan 2021 3. Huang, C., Hong, J., Fu, J., You, X.: Review on the application of system dynamics in inventory management. China Storage and Transportation Network (2020). http://www.chinachuyun. com. Accessed Apr 2020 4. Wang: Overview of driverless technology. In: Magnetism. Information Technology Exploration, pp. 147–148. Academic (2019) 5. Zhong, J., Li: System Dynamics. Magnetism Science Press (2019) 6. Pan: Design of container logistics information platform for open sea intermodal transportation. In: Magnetism. Computer Knowledge and Technology, vol. 16, pp. 269–270. Academic (2020) 7. Ran, Li: Research on profit distribution mechanism of three echelon supply chain VMI mode based on third party logistics. In: Magnetism. Logistics Engineering and Management, vol. 40, pp. 79–83. Academic (2018) 8. Guan: Research on the relationship between enterprise logistics turnover and inventory management. In: Magnetism. Modern Economic Information, p. 104. Academic (2018) 9. Hong, Z., Nie, W.: A review of logistics ‘last mile’ distribution. In: Magnetism, Logistics Technology, pp. 22–24. Academic (2018) 10. Ning, Chuan: Research on inventory management optimization of fast moving consumer goods dealers. In: Magnetism. Science and Technology and Economy in Inner Mongolia, pp. 8–11. Academic (2016)
Parameter Optimization for Neighbor Discovery Probability of Ad Hoc Network Using Directional Antennas Ruiyan Qin(B) and Xu Li School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China {19120108,xli}@bjtu.edu.cn
Abstract. One of the problems in distributed wireless ad hoc networks is to design an efficient neighbor discovery mechanism. However, the parameter analysis of directional neighbor discovery probability models that can be applied in engineering is still unclear. In this paper, the successful discovery probability of the directional neighbor discovery mechanism based on election multiple access (DND-EMA) and the directional neighbor discovery mechanism based on competition multiple access (DND-CMA) are modeled and analyzed respectively. We obtain the expressions of the access slot transmission probability and the transmission success probability, combine them with a geometric calculation method to obtain the expression of the probability of successful discovery, and analyze the jointly optimized parameters according to this probability expression. Numerical analysis results show that the efficiency of the neighbor discovery mechanism also depends on the choice of the number of antenna sectors, and that the number of directional antenna sectors equipped on nodes should be adjusted according to the network scale. Keywords: Ad hoc network · Directional antennas · Neighbor discovery mechanism · Probability of successful discovery · Parameter optimization
1 Introduction
In recent years, directional antenna technology has provided various benefits for wireless communication systems. However, wireless ad hoc networks using directional antennas will also encounter some new problems, such as hidden and exposed terminals, inaudible problems, and neighbor acquisition problems [1].
This work was supported by the National Key R&D Program of China under Grant 2017YFF0206201. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 X. Shi et al. (Eds.): LISS 2021, LNOR, pp. 523–536, 2022. https://doi.org/10.1007/978-981-16-8656-6_47
Neighbor discovery is the basis and premise of the networking procedure of a wireless ad hoc network [2]. The higher the probability of successful neighbor discovery, the more reliably basic functions such as route discovery and data transmission can be provided [3]. However, the directional neighbor discovery problem becomes more challenging for the following reasons: (1) the energy of a directional antenna is concentrated, and the beam width can only cover a small local spatial range; (2) in order to find each other, nodes must point the correct beam at each other at the correct time, and their receiving and sending modes must be complementary [4]. In wireless ad hoc networks using directional antennas, [5,6] use omnidirectional antennas to assist the neighbor discovery process, but the gain asymmetry of directional and omnidirectional antennas may lead to the discovery of different neighbor sets. The neighbor discovery mechanisms proposed in [7,8] use directional transmission but do not consider collisions; however, when there are multiple transmitting neighbors in the same beam range of the receiving node, conflicts may occur. [9–11] theoretically analyze random neighbor discovery mechanisms using directional antennas and give mathematical models; however, these papers focus more on theoretical analysis than on practical application and do not optimize the parameters of the mechanisms. Aiming at the problem of neighbor discovery using directional antennas, we propose a neighbor discovery probability model under two mechanisms: DND-EMA and DND-CMA. First, the two mechanisms are analyzed and compared with the omnidirectional neighbor discovery mechanism based on election multiple access (OND-EMA). Then, the probability of successful neighbor discovery of the DND-EMA mechanism is numerically simulated and analyzed, and the parameters in the mechanism are optimized to increase the probability of successful neighbor discovery, so that the neighbor discovery mechanism proposed in this paper can achieve higher performance. The rest of the paper is structured as follows: Sect. 2 introduces the protocol framework of distributed wireless ad hoc networks using directional antennas. Sections 3 and 4 propose two neighbor discovery mechanisms and the corresponding successful discovery probability models. Section 5 simulates and analyzes the neighbor discovery mechanisms and optimizes the design. Section 6 summarizes the paper.
2 Protocol Framework of Distributed Wireless Ad Hoc Networks Using Directional Antennas
Distributed wireless ad hoc network does not require central nodes to uniformly allocate channel resources. Each node only needs to know the scheduling information of neighbor nodes to coordinate the data transmission time and directly establish a communication link.
Fig. 1. Distributed wireless ad hoc network frame structure based on directional antenna.
Figure 1 shows the frame structure of the distributed wireless ad hoc network protocol in the TDMA system. The scheduling period of the MAC layer contains O multiframes, and each multiframe contains F frames. The frame structure is divided into two parts, a control subframe and a data subframe, with a total of C+D slots. Among them, C control slots for transmitting network signaling constitute the control subframe, and the other D slots for transmitting data constitute the data subframe. DND-EMA adopts a hybrid access mechanism to realize the sharing of channel resources, and its neighbor discovery phase adopts a random access mechanism based on a local election algorithm. DND-CMA sends control messages and data in a competitive way: before sending a control message, a node first randomly selects a back-off value within the back-off window range, starts backing off, and broadcasts a control message carrying the node's information after the back-off ends, which causes more serious collisions.
3 DND-EMA Mechanism and Network Performance Modeling
The antenna sector of DND-EMA adopts polling and traversal scanning for sending and receiving. Each node relies on the broadcast transmission of control messages to discover and maintain neighbor information. If the node is elected successfully in a certain slot, the current slot state is set to the sending state; otherwise it is the receiving state. We assume that a pair of nodes can find each other only if the antenna beams of the two nodes point at each other at the same time and the nodes are in complementary transmit/receive modes.
3.1 Access Slot Transmission Probability
Access slot transmission probability is the probability that the node will win the election when electing the access right of a certain control slot, expressed by Pt . If the node is elected successfully in a certain time slot, the consecutive K time slots after the current time slot are all the sending state slots of the node, where K is the number of directional antenna sectors.
Fig. 2. Access state transition diagram.
The access state transition is shown in Fig. 2, and x is used to represent the number of times a node participates in the election during a certain period of access. In state x, the node wins the transmission right of the control slot with probability Pt and then enters the back-off state; the node fails the election with probability 1 − Pt, then enters state x + 1 and continues to participate in the election competition of the next control slot. When a node competes for a certain control slot, it will first select competitors from its neighbors to determine the competition set (including the node itself) according to the election algorithm. The number of nodes in the competition set is represented by N_CP, 1 ≤ N_CP ≤ N_BR + 1, where N_BR = πR²λ − 1. Each node has an equal probability of winning in each election:
$$ P_t = \frac{1}{N_{CP}} \tag{1} $$
The back-off period is determined by a fixed index X_B = 4 and a dynamic back-off index X_E; the back-off period lasts 2^{X_B+X_E} control slots. Let S represent the duration of the competition, with the number of control slots as the unit. In a certain round of the access process, the first x − 1 elections fail and the xth election succeeds with probability:
$$ P_{elc}(S = x) = (1 - P_t)^{x-1}\, P_t \tag{2} $$
Without limiting the number of competitions, the competition process obeys the geometric distribution with parameter Pt, S ∼ G(Pt). According to the characteristics of the geometric distribution, the expected value of the competition duration (in control slots) is:
$$ E(S) = \sum_{i=1}^{+\infty} i \cdot P_{elc}(S = i) = \frac{1}{P_t} \tag{3} $$
Because nodes must back off before the election, the back-off time is also included in the length of the average election period:
$$ E(T_{elc}) = 2^{X_B + X_E} + E(S) \tag{4} $$
In the process of determining effective competing nodes, neighbors can be divided into nodes with unknown scheduling parameters and nodes with known scheduling parameters. For neighbor nodes with unknown scheduling parameters,
they directly become effective competing nodes. For nodes with known scheduling parameters, the average length of the contention interval is 2^{X_E} + E(S) control slots. Therefore, the probability of a neighbor node with known scheduling parameters participating in the election competition is:
$$ p_{ac} = \frac{2^{X_E} + E(S)}{2^{X_B + X_E} + E(S)} \tag{5} $$
Using the symbols N_Unknown and N_Known to denote the number of nodes with unknown scheduling parameters and the number of nodes with known scheduling parameters, respectively, the total number of nodes in the competition set is:
$$ N_{CP} = N_{Known}\,\frac{2^{X_E} + E(S)}{2^{X_B + X_E} + E(S)} + N_{Unknown} + 1 \tag{6} $$
According to the above, we can obtain the equation for Pt:
$$ \frac{1}{P_t} = N_{Known}\,\frac{2^{X_E} + \frac{1}{P_t}}{2^{X_B + X_E} + \frac{1}{P_t}} + N_{Unknown} + 1 \tag{7} $$
According to the above formula, the access slot transmission probability can be obtained.
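As an illustration of how Eq. (7) can be evaluated in practice, the short Python sketch below solves the fixed-point equation for E(S) = 1/Pt by simple iteration. The parameter values (N_Known, N_Unknown, X_B, X_E) are placeholders chosen for illustration only and are not taken from the paper.

```python
# Minimal sketch: solve Eq. (7) for the access slot transmission probability P_t
# by fixed-point iteration on E(S) = 1 / P_t. Parameter values are illustrative.

def access_slot_probability(n_known, n_unknown, x_b=4, x_e=2,
                            tol=1e-10, max_iter=10_000):
    e_s = float(n_known + n_unknown + 1)  # initial guess for E(S) = 1/P_t
    for _ in range(max_iter):
        new_e_s = (n_known * (2**x_e + e_s) / (2**(x_b + x_e) + e_s)
                   + n_unknown + 1)       # right-hand side of Eq. (7)
        if abs(new_e_s - e_s) < tol:
            e_s = new_e_s
            break
        e_s = new_e_s
    return 1.0 / e_s                      # P_t = 1 / E(S)

if __name__ == "__main__":
    p_t = access_slot_probability(n_known=10, n_unknown=5)
    print(f"P_t ~= {p_t:.4f}")
```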
3.2 Transmission Success Probability
Nodes using directional antennas can correctly discover a neighbor only if the control message is transmitted correctly, from which the transmission success probability can be derived. Suppose that the nodes in the network are randomly distributed with node density λ. If the transmission channel is a Rayleigh channel, the transmission success probability can be obtained starting from the probability density function of the signal-to-interference ratio (SIR). The probability density function of the SIR under the Rayleigh fading channel is:
$$ f_{SIR}(x) = q\,\tau\, x^{\tau - 1}\, e^{-q x^{\tau}} \tag{8} $$
where τ = h/α, q = Q_h d^h λ_dir Γ(1 + τ)Γ(1 − τ), h refers to the spatial dimension, Q_h represents the capacity of the unit sphere in h-dimensional space, and λ_H = (1 − e^{−λπR²})/(πR²) indicates the density of interference nodes [7]. This paper studies a two-dimensional plane network, so Q_h d^h = πd². The probability density function of the signal-to-interference ratio is therefore:
$$ f_{SIR}(x) = \pi d^2 \lambda_H\, \Gamma\!\left(1+\tfrac{2}{\alpha}\right)\Gamma\!\left(1-\tfrac{2}{\alpha}\right)\,\tfrac{2}{\alpha}\, x^{\frac{2}{\alpha}-1}\, e^{-\pi d^2 \lambda_H \Gamma\left(1+\frac{2}{\alpha}\right)\Gamma\left(1-\frac{2}{\alpha}\right)\, x^{\frac{2}{\alpha}}} \tag{9} $$
The receiving signal-to-interference ratio threshold of the receiver is β. The transmission success probability under Rayleigh channel conditions, p_suc, is the probability that the received signal-to-interference ratio is greater than β:
$$ p_{suc} = \int_{\beta}^{+\infty} f_{SIR}(x)\,dx = \exp\!\left(-d^2 \beta^{\frac{2}{\alpha}}\, \frac{1 - e^{-\lambda\pi R^2}}{R^2}\, \Gamma\!\left(1+\tfrac{2}{\alpha}\right)\Gamma\!\left(1-\tfrac{2}{\alpha}\right)\right) \tag{10} $$
3.3 Probability of Successful Discovery
Consider DND-EMA in the planar situation. The schematic diagram of the beams with which node A and node B in the network find each other is shown in Fig. 3.
Fig. 3. Schematic diagram of beams discovering each other between neighbor nodes in a distributed directional ad hoc network.
The areas of the shaded region and the non-shaded region are A_S = (1/2) R² tan(θ/2) and A_NS = θR² − R² tan(θ/2), so the proportion of the shaded area is η = A_S / (A_S + A_NS). If the election is successful in a certain control slot, the node will poll and send messages in the next K slots; if the node is not in the sending state, it is in the directional receiving state. Then the necessary and sufficient conditions for node A to successfully discover another neighbor node B in the agreed slots are: (1) the beams of node A and node B are aligned with each other; (2) the antenna beams of other nodes within the directional transmission range of node B do not point to node B, or the beams point to node B but are in the receiving state; (3) the beams of other nodes within the directional transmission range of node A do not point to node A, or the beams point to node A but are in the receiving state. Node B successfully receives the control message transmitted from node A during the neighbor discovery phase, but no other node does. Then the other nodes within the transmission range of A and B should meet the above conditions (2) and (3), and the probability is:
2 1 − Pt K
η
πR2 K
λ−2
1 1 − Pt K
η
πR2 K
λ−2
(11)
Node A and node B discover each other in two situations: A sends a control message first, or B sends a control message first. Then the probability of node A successfully discovering node B in a single slot is:
$$ P_{sig} = \frac{2}{K^2}\, P_t (1 - P_t)\, q_e\, p_{suc} \tag{12} $$
The probability that node A finds node B in t slots is:
$$ P_{cap}(t) = 1 - (1 - P_{sig})^{t} \tag{13} $$
Node A finds its m neighbors in t time slots in one of the following two ways: (1) node A finds m − 1 neighbors in the first t − 1 slots, and finds any one of the remaining N − m + 1 neighbors in the tth slot; (2) node A finds m neighbors in the first t − 1 slots, and finds none of the remaining N − m neighbors in the tth slot. Therefore, the probability that node A finds m neighbors within time t satisfies:
$$ P_{EMA}(m, t) = P_{EMA}(m-1, t-1)\,(N - m + 1)\,P_{sig} + P_{EMA}(m, t-1)\,\left[1 - (N - m)\,P_{sig}\right] \tag{14} $$
Fig. 4. Markov chain model of random back-off process.
The boundary conditions of the recurrence relationship are:
$$ P_{EMA}(m, t) = 0,\ m > t; \qquad P_{EMA}(0, t) = 1 \tag{15} $$
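To make the chain of formulas (10)–(15) concrete, the following Python sketch evaluates the single-slot discovery probability and the recursion P_EMA(m, t) under assumed parameter values; K, λ, R, d, α, β, P_t and the number of neighbors N are illustrative placeholders, not values taken from the paper.

```python
import math

# Illustrative parameters (placeholders, not values from the paper)
K, R, lam, d, alpha, beta = 6, 100.0, 1e-3, 80.0, 3.0, 2.0
P_t, N = 0.2, 8
theta = 2 * math.pi / K                      # beam width per sector

# Shaded-area ratio eta and p_suc (Eq. (10) and the geometry before Eq. (11))
A_s = 0.5 * R**2 * math.tan(theta / 2)
A_ns = theta * R**2 - R**2 * math.tan(theta / 2)
eta = A_s / (A_s + A_ns)
gam = math.gamma(1 + 2 / alpha) * math.gamma(1 - 2 / alpha)
p_suc = math.exp(-d**2 * beta**(2 / alpha)
                 * (1 - math.exp(-lam * math.pi * R**2)) / R**2 * gam)

# Single-slot probabilities, Eqs. (11)-(13)
expo = math.pi * R**2 * lam / K - 2
q_e = ((1 - 2 / K * P_t * eta) ** expo) * ((1 - 1 / K * P_t * eta) ** expo)
P_sig = 2 / K**2 * P_t * (1 - P_t) * q_e * p_suc

# Recursion (14) with boundary conditions (15)
def p_ema(m, t, memo={}):
    if m == 0:
        return 1.0
    if m > t:
        return 0.0
    if (m, t) not in memo:
        memo[(m, t)] = (p_ema(m - 1, t - 1) * (N - m + 1) * P_sig
                        + p_ema(m, t - 1) * (1 - (N - m) * P_sig))
    return memo[(m, t)]

print(f"P_sig = {P_sig:.6f},  P_EMA(N, 500 slots) = {p_ema(N, 500):.4f}")
```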
4 DND-CMA Mechanism and Performance Modeling
Before sending messages, the node randomly selects a value of the back-off window W for back-off. After the back-off is over, a sector is randomly selected to send the message.
4.1 Back-off Process Analysis
In the stage of sending a control message, a node randomly selects a value for a given W window to back-off. After the back-off ends, the control message is broadcasted. If the current node receives a control message sent by another node before the end of the back-off, the value of the current back-off counter is frozen,
and the back-off is continued after the channel is idle again. W is the back-off parameter of the distributed wireless ad hoc network using directional antennas. The decrement time interval of the back-off counter of the node is the basic slot unit δ. Figure 4 shows the Markov chain model of the random back-off process. The probability of a node choosing each back-off value is 1/W. Any state in the back-off process of a network node can be represented by a random variable b(t), the current remaining back-off value of the node. Using p(i|j) to represent the one-step state transition probability of a node from state j to state i, we have:
$$ P\{j \mid j+1\} = 1, \quad j \in [0, W-2] \tag{16} $$
Define b_j as the probability that the remaining value of the back-off counter is j when the node is in the back-off phase, namely:
$$ b_j = \lim_{t \to \infty} P\{b(t) = j\}, \quad j \in [0, W-1] \tag{17} $$
From the analysis of the above model, we can get:
$$ b_k = \frac{W - k}{W}\, b_0, \quad k \in [0, W-1] \tag{18} $$
It can be seen from the above formula that any state probability b_k can be expressed as a function of b_0. According to the normalization condition \(\sum_{k=0}^{W-1} b_k = 1\), the stationary probabilities are finally obtained as:
$$ b_k = \frac{2(W - k)}{W(W + 1)}, \quad k \in [0, W-1] \tag{19} $$
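A quick numerical check of Eq. (19) — for example with an assumed back-off window W = 16 — can be done with a few lines of Python; the window size here is only an illustrative choice.

```python
# Minimal check of the stationary back-off distribution in Eq. (19).
W = 16  # assumed back-off window size (illustrative)

b = [2 * (W - k) / (W * (W + 1)) for k in range(W)]

print(f"sum of b_k = {sum(b):.6f}")          # should be 1.0
print(f"b_0 = {b[0]:.4f}, b_{W-1} = {b[-1]:.4f}")
```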
4.2 Probability of Successful Discovery
Assuming that node A and node B scan randomly after entering the network, the necessary and sufficient conditions for node A to successfully discover another neighbor node B within agreed slots are: (1) The beams of node A and node B are aligned with each other; (2) The antenna beams of other nodes within the directional transmission range of node B do not point to node B or the beam points to node B but the back-off value is greater than that of node A; (3) The beams of other nodes within the directional transmission range of node A do not point to node A or the beam points to node A but the back-off value is greater than the backoff value of node A; (4) The back-off values of other nodes within the directional transmission range of node A that receive the control message of node A are greater than the back-off value of node B.
Node B is the first to successfully receive the control message transmitted by node A. Then the other nodes within the transmission range of A and B should meet the above conditions (2) and (3), and the probability is:
$$ q_{c1} = \left(1 - \frac{2}{K}\sum_{n=0}^{i} b_n\, \eta\right)^{\frac{\pi R^2}{K}\lambda - 2} \left(1 - \frac{1}{K}\sum_{n=0}^{i} b_n\, \eta\right)^{\frac{\pi R^2}{K}\lambda - 2} \tag{20} $$
where i represents the back-off value of node A. Node A is also the first to successfully receive the control message from node B, but not from other nodes. Then the other nodes within the transmission range of node A should meet the above condition (4), and the probability is:
$$ q_{c2} = \sum_{u=0}^{W-1} b_u \left(1 - \frac{1}{K}\sum_{v=0}^{i} b_v\, \eta\right)^{\frac{\pi R^2}{K}\lambda - 2} \tag{21} $$
Node A and node B discover each other in the agreed slots in two situations: A sends a control message first, or B sends a control message first. Since the above conditions (1)–(4) need to be met at the same time, the probability that node A finds node B in a single slot is:
$$ P_{sig} = \frac{2}{K^2}\, q_{c1}\, q_{c2}\, p_{suc} \tag{22} $$
The probability that node A finds node B in t slots is:
$$ P_{cap}(t) = 1 - (1 - P_{sig})^{t} \tag{23} $$
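Analogously to the DND-EMA case, the sketch below evaluates q_c1, q_c2 and the single-slot probability of Eq. (22) for DND-CMA, reusing the stationary back-off distribution of Eq. (19); all numeric parameters (K, W, λ, R, η, p_suc, the back-off value i) are illustrative assumptions, not values from the paper.

```python
import math

# Illustrative parameters (assumptions, not values from the paper)
K, W, R, lam = 6, 16, 100.0, 1e-3
eta, p_suc, i = 0.38, 0.85, 3   # assumed geometry ratio, Eq. (10) value, back-off value of A

b = [2 * (W - k) / (W * (W + 1)) for k in range(W)]   # Eq. (19)
expo = math.pi * R**2 * lam / K - 2
cum = sum(b[: i + 1])                                  # sum_{n=0}^{i} b_n

# Eqs. (20)-(22)
q_c1 = (1 - 2 / K * cum * eta) ** expo * (1 - 1 / K * cum * eta) ** expo
q_c2 = sum(b_u * (1 - 1 / K * cum * eta) ** expo for b_u in b)
P_sig_cma = 2 / K**2 * q_c1 * q_c2 * p_suc

print(f"q_c1 = {q_c1:.4f}, q_c2 = {q_c2:.4f}, P_sig (CMA) = {P_sig_cma:.6f}")
```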
According to Sect. 3.3, the probability that a node finds all m neighbors within time t satisfies:
$$ P_{CMA}(m, t) = P_{CMA}(m-1, t-1)\,(N - m + 1)\,P_{sig} + P_{CMA}(m, t-1)\,\left[1 - (N - m)\,P_{sig}\right] \tag{24} $$
5 Performance Analysis and Protocol Parameter Optimization
This section first compares the performance of the three mechanisms, and then conducts numerical analysis and parameter optimization on the model of DND-EMA. The parameters that affect the probability of successful neighbor discovery include the neighbor node density, the directional antenna beam width, the number of agreed slots, the back-off index, etc.
Fig. 5. The relationship between the probability of successful neighbor discovery and the number of slots for different mechanisms.
5.1 Analysis and Comparison of Probability Models of Successful Discovery of Three Mechanisms
This section analyzes and compares the successful discovery probability models of the three mechanisms. As shown in Fig. 5, the successful discovery probability of OND-EMA is always higher than those of DND-EMA and DND-CMA. However, due to the poor confidentiality and anti-interference ability of omnidirectional antennas, OND-EMA is not discussed further in this article. At the same time, when the number of agreed slots is small, the successful discovery probability of DND-CMA is higher than that of DND-EMA. However, as the number of agreed slots increases, the probability of successful discovery of DND-EMA increases faster than that of DND-CMA and ultimately far exceeds it.
5.2 Directional Antenna Beam Width
As is shown in Fig. 6, while keeping other parameters constant, the probability of discovering all neighboring nodes first increases and then decreases as the beam width of the directional antenna increases. This is because when the beam width of the directional antenna is narrow, the probability of beam alignment between two neighboring nodes is low, but the number of neighboring nodes that cause collision and interference at the same time is also reduced; when the directional antenna beam is wider, the probability of beam alignment between neighbor nodes is increased, and the number of neighbor nodes that cause collisions also increases.
Fig. 6. The relationship between the directional antenna beamwidth parameter and the probability of successful neighbor discovery.
Fig. 7. The relationship between the neighbor node density parameter and the probability of successful neighbor discovery.
5.3 Neighbor Node Density
Keeping other network parameters constant, the probability of successful discovery decreases as the density of network nodes increases in Fig. 7. This is because the greater the density of network nodes, the greater the probability of collisions in the neighbor discovery process. At the same time, the network transmission success probability and the access slot sending probability model are related to the node density, and they all decrease with the increase of the node density, which will also lead to a decrease in the probability of successful neighbor discovery.
Fig. 8. The relationship between the density of neighbor nodes and the probability of successful discovery under different back-off indexes.
Fig. 9. The relationship between the density of neighbor nodes and the probability of successful discovery under different numbers of antenna sectors.
5.4 Back-off Index
The optimal access slot transmission probability can obtain the greatest probability of successful discovery. And the back-off index is the key factor of access slot transmission probability. It can be seen from Fig. 8 that the probability of successful discovery decreases as the density of network neighbor nodes increases which is analyzed in Sect. 5.3. It can also be seen from the figure that when the network node density is constant, with the increase of the back-off index XE , the transmission probability of the node access slot increases. This is because the back-off index increases, and the back-off time becomes larger, thereby effectively alleviating the intensity of competition and increasing the probability of successful node discovery.
5.5 The Method to Determine the Optimal Number of Antenna Sectors
Figure 9 shows the curves of the probability of successful discovery versus node density under different numbers of antenna sectors. It can be seen from the figure that no matter how many antenna sectors a node is configured with, the probability of successful discovery always decreases as the node density increases. When the node density is small, a smaller number of directional antenna sectors allows a node to maintain a higher probability of successful neighbor discovery during the neighbor discovery process. However, as the node density increases, a smaller number of directional antenna sectors can no longer meet the requirement that the node maintain a high probability of successful neighbor discovery. Therefore, the number of directional antenna sectors needs to be selected according to the network scale to obtain the best performance.
6 Conclusion
This paper proposes DND-EMA and DND-CMA, derives their successful discovery probabilities based on plane geometry and probability theory, and compares them with OND-EMA. Numerical simulation results show that the probability of successful discovery using omnidirectional antennas is higher than that using directional antennas; due to the large interference and low security of omnidirectional antennas, they are not discussed further in this paper. Of the two mechanisms proposed in this paper using directional antennas, the probability of successful discovery of DND-EMA is higher than that of DND-CMA when the number of agreed slots is large. The transmission probability, transmission success probability, and successful discovery probability models established in this paper can also effectively reflect the influence of network parameters and protocol parameters on network performance. At the same time, the optimal number of antenna sectors under different network node densities is obtained from the numerical simulation results to ensure a higher discovery success rate. This paper thus provides a reference for selecting the appropriate number of antenna sectors according to the density of network nodes in engineering practice. This work is carried out under the condition that nodes remain static in the neighbor discovery phase; follow-up work will build on this neighbor discovery protocol and incorporate a MAC protocol that supports node mobility.
References 1. Feng, O., Qiang, L., Qi, H., Tingting, L.: Summary of mobile self-networking technology based on directional antennas. TV Technol. 41(004), 148–153 (2017) 2. Liao, Y., Peng, L., Xu, R., Li, A., Ge, L.: Neighbor discovery algorithm with collision avoidance in ad hoc network using directional antenna. In: 2020 IEEE 6th International Conference on Computer and Communications (ICCC), pp. 458–462 (2020)
3. Qin, Z., Gan, X., Wang, J., Fu, L., Wang, X.: Capacity of social-aware wireless networks with directional antennas. IEEE Trans. Commun. 65(11), 4831–4844 (2017) 4. Felemban, E., Murawski, R., Ekici, E., Park, S.: SAND: sectored-antenna neighbor discovery protocol for wireless networks. In: IEEE Communications Society Conference on Sensor (2010) 5. Burghal, D., Tehrani, A., Molisch, A.: On expected neighbor discovery time with prior information: modeling, bounds and optimization. IEEE Trans. Wirel. Commun. 99, 1–1 (2017) 6. Russell, A., Vasudevan, S., Wang, B., Zeng, W., Chen, X., Wei, W.: Neighbor discovery in wireless networks with multipacket reception. IEEE Trans. Parallel Distrib. Syst. 26, 1984–1998 (2015) 7. Tehrani, A.S., Molisch, A.F., Caire, G.: Directional zigzag: neighbor discovery with directional antennas. In: 2015 IEEE Global Communications Conference, GLOBECOM 2015 (2014) 8. Mir, Z.H., Jung, W.S., Ko, Y.B.: Continuous neighbor discovery protocol in wireless ad hoc networks with sectored-antennas. IEEE (2015) 9. Khamlichi, B.E., Nguyen, D., Abbadi, J.E., Rowe, N.W., Kumar, S.: Collisionaware neighbor discovery with directional antennas. In: 2018 International Conference on Computing, Networking and Communications (ICNC) (2018) 10. Liu, L., Peng, L., Xu, R., Zhao, W.: A neighbor discovery algorithm for flying ad hoc network using directional antennas. In: 2019 28th Wireless and Optical Communications Conference (WOCC) (2019) 11. Li, H., Xu, Z.: Self-adaptive neighbor discovery in mobile ad hoc networks with directional antennas, pp. 1–6 (2018)
An Optimal Online Distributed Auction Algorithm for Multi-UAV Task Allocation Xinhang Li(B) and Yanan Liang School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China [email protected]
Abstract. Collaborative task allocation is a key component of multi-UAV combat systems in the battlefield environment. The combat effect can be maximized by allocating UAV resources to the corresponding targets in an optimal way. Since the battlefield environment is dynamic and changeable, current online task allocation algorithms mainly consider the rapid deployment of new tasks on the basis of existing assignments, but it is difficult to ensure the maximum payoff of the re-planning results. Based on the distributed auction algorithm, this paper introduces a result update mechanism, which resets some of the original assignments and lets them participate in the auction together with the new tasks, obtaining the re-planning result with maximum payoffs. The simulation results show that, compared with other online task allocation algorithms, the introduced mechanism not only meets the timeliness requirement of the algorithm, but also ensures the maximum payoffs of the assignments, which makes it more suitable for the dynamic and changeable battlefield environment. Keywords: Online task allocation algorithm · Multi-UAV systems · Distributed algorithm · Auction
1 Introduction
Faced with the increasingly complex battlefield environment, a single unmanned aerial vehicle (UAV) is restricted by load, combat range and other factors, so it is difficult for it to meet the task requirements of multi-target operations; hence it is necessary to use multiple UAVs to cooperate to complete the tasks together. The premise of multi-UAV cooperative combat is the ability to carry out effective task allocation. However, with the emergence of new targets in the battlefield environment and the increasing uncertainty of the environment, the original offline task allocation algorithm cannot guarantee that the assignments meet the scene requirements in real time.
This work was supported by the National Key R&D Program of China under Grant 2017YFF0206201. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 X. Shi et al. (Eds.): LISS 2021, LNOR, pp. 537–548, 2022. https://doi.org/10.1007/978-981-16-8656-6_48
Collaborative Operations in Denied Environment (CODE), hosted by the Defense Advanced Research Projects Agency (DARPA),
is based on continuous monitoring tasks, which are usually applied to intelligence, surveillance and reconnaissance (ISR) tasks. CODE proposes that UAV clusters need to find targets and engage them according to established rules of engagement under autonomous cooperation, and adapt to the dynamic environment in time. UAVs can dynamically combine formations and sub-formations through autonomous cooperation to adapt to task changes in the updated environment. One such situation is that the task requirements change (the number of tasks increases or decreases, or the task payoffs change) while the UAVs are performing tasks. An online multi-UAV task allocation system should take timely measures to deal with such uncertain events and reassign the tasks of each UAV to obtain re-planned assignments. In order to meet the requirements of effectiveness and timeliness of online task allocation algorithms, there are two main strategies. One is to improve centralized task allocation algorithms by reducing the running time of the algorithm [1–3]. The other is to apply market mechanisms to dynamically assign new tasks to appropriate UAVs. Reference [4] extended the contract network protocol to realize complete multi-task auction allocation. Reference [5] adopted an allocation algorithm based on a task sequence mechanism, which provided conflict-free task allocation for UAVs in real time. Reference [6] proposed an auction algorithm based on multiple hops, in which the robot generates its task cost by probability; this algorithm can be applied to scenarios where resources are consumed and replenished when the robot performs tasks. References [7,8] improved the auction algorithm, enabling multiple robots to assign tasks efficiently and autonomously in a dynamic environment. However, when the number of UAVs increases to a certain scale, it is difficult for centralized task allocation algorithms to meet the timeliness requirements. Besides, for large-scale task allocation problems, the total number of auction bids increases rapidly, which makes it difficult to ensure the effectiveness and optimality of the re-planned assignments. The assignment process in [9] relies on an appropriate selection of bids that determine the task prices, which provides an almost optimal solution to the allocation problem. The existing online task allocation algorithms based on this offline auction algorithm mainly consider the grouped auction of newly arrived tasks without modifying the assigned task sets. In this paper, under the scenario that the remaining budget of each UAV is not less than the number of newly added tasks, a task update mechanism is introduced on the basis of the distributed task allocation auction algorithm, which resets some of the tasks that have already been assigned to UAVs and auctions them at the same time as the new tasks. Under the condition that the convergence time of the algorithm meets the timeliness requirement, the efficiency of the assignments obtained by the proposed algorithm is better than that of the current online task allocation algorithms. This paper is organized as follows: In Sect. 2, we give the definition of the problems to be solved. In Sect. 3, we present an online shared memory auction algorithm for multi-UAV task allocation and discuss its performance, and we present its completely distributed form in Sect. 4. In Sect. 5, we demonstrate the performance of our algorithm with some example simulations. Finally, in Sect. 6, we present our conclusion.
2 Problem Statement
Firstly, we give the formal definition of the task allocation problem of multiple UAVs. Suppose that there are n_t tasks T = {t_1, t_2, ..., t_{n_t}} and n_r UAVs U = {u_1, u_2, ..., u_{n_r}}. Let a_ij (a_ij > 0) be the payoff for assignment (u_i, t_j). Without loss of generality, we assume that any UAV can be assigned to any task. Every task must be carried out completely by one UAV, and each UAV can perform at most N_i tasks (we call N_i the budget of UAV u_i), so we should have \(\sum_{i=1}^{n_r} N_i \ge n_t\) for all tasks to be performed. Let f_ij be a binary variable that indicates whether u_i is assigned to t_j. The overall goal is to allocate all tasks to UAVs so as to maximize the total payoffs. The task allocation problem of multiple UAVs can be mathematically expressed as follows:
$$ \max_{\{f_{ij}\}} \sum_{i=1}^{n_r} \sum_{j=1}^{n_t} a_{ij}\, f_{ij} \tag{1} $$
$$ \text{s.t.} \quad \sum_{i=1}^{n_r} f_{ij} = 1 \quad \forall j = 1, \ldots, n_t \tag{2} $$
$$ \sum_{j=1}^{n_t} f_{ij} \le N_i \quad \forall i = 1, \ldots, n_r \tag{3} $$
$$ f_{ij} \in \{0, 1\} \quad \forall i, j \tag{4} $$
Equation (2) indicates that each task must be assigned to one UAV. Equation (3) gives the budget constraint of each UAV. After the task allocation is completed, the payoffs of some tasks may change, or the number of tasks may increase or decrease. Online task allocation adopts an emergency mechanism to handle such events and obtains task assignments that match the updated scene. Next, we give the formal definition of the online multi-UAV task allocation problem. It is assumed that, after the allocation of tasks and while the tasks are being executed in the order given by the optimal assignment, n_c tasks T_c = {t_1, t_2, ..., t_{n_c}} change their payoffs, which become \(\hat{a}_{ij}\) for (u_i, t_j), t_j ∈ T_c. Let \(\bar{a}_{ij}\) be the payoff for each task after the change: \(\bar{a}_{ij} = \hat{a}_{ij}\) for t_j ∈ T_c and \(\bar{a}_{ij} = a_{ij}\) for t_j ∈ T \ T_c. The overall goal is to redistribute the unfinished tasks to the UAVs so as to maximize the overall payoffs:
$$ \max_{\{\bar{f}_{ij}\}} \sum_{i=1}^{n_r} \sum_{j=1}^{n_t} \bar{a}_{ij}\, \bar{f}_{ij} \tag{5} $$
$$ \text{s.t.} \quad \sum_{i=1}^{n_r} \bar{f}_{ij} = 1 \quad \forall j = 1, \ldots, n_t \tag{6} $$
$$ \sum_{j=1}^{n_t} \bar{f}_{ij} \le N_i \quad \forall i = 1, \ldots, n_r \tag{7} $$
$$ \bar{f}_{ij} \in \{0, 1\} \quad \forall i, j \tag{8} $$
To solve the above problem, we first assume that the number of tasks that UAV u_i can perform is exactly N_i. Because the total budget of all UAVs must be no less than the total number of tasks, \(\sum_i N_i \ge n_t\), we add \(\sum_i N_i - n_t\) virtual tasks to the original tasks. The payoff between any virtual task and any UAV is set to zero. After adding virtual tasks, the budget constraints (3) and (7) can be rewritten as (9) and (10). In Sect. 3, we will explain how to remove this assumption and recover the original problem.
$$ \sum_{j=1}^{n_t} f_{ij} = N_i \quad \forall i = 1, \ldots, n_r \tag{9} $$
$$ \sum_{j=1}^{n_t} \bar{f}_{ij} = N_i \quad \forall i = 1, \ldots, n_r \tag{10} $$
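As a sanity check of the formulation (1)–(4) and the virtual-task padding described above, the following Python sketch solves a small static instance with SciPy's Hungarian-method solver after duplicating each UAV according to its budget. This is only an illustration of the problem structure under randomly generated payoffs; it is a centralized reference solution, not the distributed auction algorithm proposed in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_r, n_t = 3, 5                       # UAVs and tasks (illustrative)
budgets = [2, 2, 2]                   # N_i; sum(budgets) >= n_t
payoff = rng.uniform(0, 100, size=(n_r, n_t))

# Virtual tasks with zero payoff pad the problem so every budget slot is used.
n_virtual = sum(budgets) - n_t
padded = np.hstack([payoff, np.zeros((n_r, n_virtual))])

# Duplicate each UAV row once per unit of budget, then solve the
# resulting one-to-one assignment (maximization) with the Hungarian method.
rows = np.repeat(np.arange(n_r), budgets)
cost = -padded[rows, :]               # negate payoffs to maximize
r_idx, c_idx = linear_sum_assignment(cost)

assignment = [(int(rows[r]), int(c)) for r, c in zip(r_idx, c_idx) if c < n_t]
total = sum(payoff[i, j] for i, j in assignment)
print(assignment, f"total payoff = {total:.1f}")
```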
3 Online Shared Memory Auction Algorithm for Multi-UAV Task Allocation
For the online multi-UAV task allocation problem (5), our method is an extension of the distributed auction algorithm [9]. Firstly, we give a distributed algorithm based on a shared memory model, which is called the shared memory auction algorithm. In Sect. 4, we will introduce its completely distributed implementation form. In the dynamic scenario we set, as described in Sect. 2, we add \(\sum_i N_i - n_t\) virtual tasks whose payoffs are zero. When existing tasks disappear, their payoffs change to zero, and when new tasks are added, the payoffs of some virtual tasks change, i.e., if x tasks are added, the payoffs of x virtual tasks are no longer equal to zero. In other words, we treat the increase or decrease of the number of tasks and the change of payoffs in a unified way, and the following algorithm no longer distinguishes these two cases. The auction process of the online shared memory multi-UAV task allocation auction algorithm is as follows:
a) Initialization: set τ = 0, and initialize the task assignments already obtained, J_i(τ−1) ∀i = 1, ..., n_r, and the price vector p_j(τ−1) ∀j = 1, ..., n_t.
b) Result Update Step: UAV u_i updates the task assignments already obtained, J_i(τ−1) ∀i = 1, ..., n_r, and computes the updated prices p_ij(τ+1) ∀j = 1, ..., n_t; then go to step d).
c) Bidding Step: using the price vector p_j(τ) ∀j = 1, ..., n_t, UAV u_i computes the set of tasks J_i(τ) that it will bid for and computes the updated prices p_ij(τ+1) ∀j = 1, ..., n_t.
d) Improved Price Agglomeration Step: p_j(τ+1) = max_{k∈n_r} p_kj(τ+1), and the special flag is processed so that the corresponding tasks are re-auctioned.
e) Convergence Condition: if p_j(τ+1) = p_j(τ) ∀j = 1, ..., n_t, stop iterating; otherwise, go to step c).
Let the price for task t_j of UAV u_i at time (or iteration) τ be p_ij(τ). The value of task t_j for UAV u_i at time τ is v_ij(τ) = a_ij − p_j(τ). The index set of the tasks assigned to UAV u_i at time τ is J_i(τ).
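For concreteness, the sketch below shows what a single Bidding Step could look like for one UAV under the price and value definitions just given: the UAV keeps the tasks of highest value up to its budget and raises their prices by at least ε. This is a simplified, assumption-laden reading of the auction mechanism of [9], not the authors' exact implementation.

```python
import numpy as np

def bidding_step(a_i, prices, budget, eps=1.0):
    """One UAV's bid: pick the `budget` highest-value tasks and
    compute new prices for them (value = payoff - current price)."""
    values = a_i - prices
    chosen = np.argsort(values)[::-1][:budget]          # best `budget` tasks
    # classic auction bid increment: own value minus best excluded value, plus eps
    excluded = np.delete(values, chosen)
    next_best = excluded.max() if excluded.size else 0.0
    new_prices = prices.copy()
    for j in chosen:
        new_prices[j] = prices[j] + (values[j] - next_best) + eps
    return set(chosen.tolist()), new_prices

# toy example with assumed payoffs and zero initial prices
a_i = np.array([90.0, 40.0, 75.0, 10.0])
tasks, p_new = bidding_step(a_i, np.zeros(4), budget=2)
print(tasks, p_new)
```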
In each iteration, for a given task price vector, each UAV solves the optimization problem and bids for the task with the greatest value (subject to constraints) [9] at Bidding Step. In bidding, the bidding prices of each UAV for tasks in its task set are determined according to the price update rules. The price vector is updated through the shared memory at Improved Price Agglomeration Step, and then the bidding process is repeated until the bidding remains unchanged at Convergence Condition. The price update rule ensures that each UAV trying to maximize its own payoffs converges to the overall assignments which have maximum of overall payoffs and satisfies all constraints at the same time. Algorithm 1 describes the Result Update Step for each UAV in detail.
Algorithm 1. Result Update Procedure for UAV u_i
1: Input: a_ij ∀j; p_i(τ); J_i(τ−1);
2: Output: J_i(τ); p_i(τ+1);
3: for j = 1, ..., n_t do
4:   if j ∈ J_change then
5:     J_i(τ−1) = J_i(τ−1) \ {j}
6:   end if
7: end for
8: J_min = the n − (N_i − |J_i(τ−1)|) tasks j ∈ J_i(τ) with the smallest v_ij(τ)
9: J_i(τ−1) = J_i(τ−1) \ J_min
10: for j = 1, ..., n_t do
11:   if j ∈ J_min or j ∈ J_change then
12:     p_ij(τ+1) = −1
13:   else
14:     p_ij(τ+1) = p_ij(τ)
15:   end if
16: end for
Suppose that there are n changed tasks T_change = {t_c1, t_c2, ..., t_cn} with index set J_change = {c1, c2, ..., cn}, n ≤ n_t. In the first part of Algorithm 1 (lines 3–7), the changed tasks are deleted from the selected task sets. If fewer than n tasks are deleted in this way, the task values v_ij are sorted and a further n − (N_i − |J_i(τ−1)|) tasks are deleted from the task set (lines 8–9). The deleted tasks will be re-auctioned in the subsequent auction process. In the second part of Algorithm 1 (lines 10–16), the task price of each deleted task is set to the special value −1, which indicates that it needs to be auctioned again.
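A compact Python rendering of Algorithm 1 is sketched below; the data structures (dictionaries for prices and values, sets for task indices) are implementation assumptions, not part of the original pseudocode.

```python
def result_update(values_i, prices_i, assigned_i, j_change, budget_i, n_changed):
    """Result Update Step for one UAV (sketch of Algorithm 1).
    values_i:  dict task -> v_ij(tau);  prices_i: dict task -> p_ij(tau)
    assigned_i: set of tasks J_i(tau-1); j_change: set of changed task indices."""
    # lines 3-7: drop changed tasks from the UAV's own assignment
    assigned = assigned_i - j_change
    # lines 8-9: additionally drop the lowest-value kept tasks so that
    # n_changed budget slots are freed in total
    n_drop = n_changed - (budget_i - len(assigned))
    j_min = set(sorted(assigned, key=lambda j: values_i[j])[:max(n_drop, 0)])
    assigned -= j_min
    # lines 10-16: flag released tasks with price -1 so they are re-auctioned
    new_prices = {j: (-1.0 if (j in j_min or j in j_change) else prices_i[j])
                  for j in prices_i}
    return assigned, new_prices

# toy example (all numbers are illustrative)
assigned, prices = result_update(
    values_i={0: 5.0, 1: 9.0, 2: 2.0}, prices_i={0: 1.0, 1: 3.0, 2: 0.5},
    assigned_i={0, 1, 2}, j_change={1}, budget_i=3, n_changed=2)
print(assigned, prices)
```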
Algorithm 2. Improved Price Agglomeration for all UAVs
1: Input: p_i(τ+1)
2: Output: p(τ+1)
3: for i = 1, ..., n_r do
4:   for j = 1, ..., n_t do
5:     if p_ij(τ+1) = −1 then
6:       p_j(τ+1) = 0
7:     else
8:       p_j(τ+1) = max_{i∈n_r} p_ij(τ+1)
9:     end if
10:   end for
11: end for
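A direct translation of Algorithm 2 into Python might look as follows; it simply resets the price of any task flagged with −1 and otherwise takes the maximum price over all UAVs (the array shapes are assumptions for illustration).

```python
import numpy as np

def price_agglomeration(p_all):
    """p_all: array of shape (n_r, n_t) with p_ij(tau+1); returns p(tau+1)."""
    released = (p_all == -1).any(axis=0)      # tasks flagged for re-auction
    p = p_all.max(axis=0)                     # max price over all UAVs
    p[released] = 0.0                         # re-auctioned tasks restart at price 0
    return p

p_all = np.array([[4.0, -1.0, 2.0],
                  [3.0,  5.0, 6.0]])
print(price_agglomeration(p_all))             # -> [4. 0. 6.]
```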
In the basic problem we assumed that the number of tasks UAV u_i can perform is exactly N_i. We now relax this constraint so that each UAV performs at most N_i tasks, as indicated in (7). Virtual tasks are auxiliary: they exist only in the input of the algorithm and are deleted from the output assignments. If the UAVs are assigned z virtual tasks after the algorithm terminates, the UAVs have z units of unused budget. Deleting virtual tasks from the output assignments ensures that any feasible solution under constraint (10) is still feasible under constraint (7), which guarantees the soundness of the method. Conversely, from any feasible solution of the problem with constraint (7), a feasible solution of the padded problem with constraint (10) can be obtained, which gives the completeness of the method. We now answer the following questions about the performance of the above online task allocation auction algorithm: 1) Will the algorithm end with a feasible allocation solution in a finite number of iterations? 2) How good is the solution when the task allocation algorithm terminates? For the first problem, we will prove that when the algorithm terminates the solution is feasible, and then that the algorithm terminates in a finite number of iterations. We first define the tasks that remain in the task set J_i after step b) of the online task allocation algorithm as reserved tasks R_i, and the tasks that are removed from the task set as deleted tasks D_i. Theorem 1.1: When the algorithm terminates, the realized assignments must be a feasible solution of Problem 2. Proof: Reference [9] proves that the shared memory auction algorithm obtains a feasible solution of Problem 1. During the execution of tasks in the order given by the optimal assignment, in the Result Update Step after the task changes, the deleted tasks are eventually added back to the task sets of the UAVs, which ensures that each UAV bids for N_i tasks, i.e., constraint (10) is always satisfied after each iteration. As in the proof for Problem 1 [9], when the algorithm terminates, a task has been assigned to one UAV and no other UAV has a higher bid for it. Hence each task can still only be
assigned to one UAV, and (6) is still satisfied; i.e., all constraints are satisfied, and the realized assignment is a feasible solution of Problem 2. Theorem 1.2: If there is at least one feasible solution to Problem 2, then the algorithms of all UAVs end in a finite number of iterations. Proof: Reference [9] proves that the shared memory auction algorithm ends in a finite number of iterations. Because the Bidding Step used by the proposed online task allocation algorithm in solving Problem 2 is the same as that used in solving Problem 1, the proof of Theorem 1.2 is the same as for Problem 1. For the second question, we want to characterize the optimization effect of the online task allocation auction algorithm, including the optimality of the payoffs of the assignment and the convergence time of the algorithm. Theorem 2.1: When the algorithm terminates, the achieved assignments {(i, (t_{i1}, ..., t_{iN_i})) | i = 1, ..., n_r} must be within \(\sum_{i=1}^{n_r} N_i \varepsilon\) of an optimal solution. Proof: Reference [9] proves the performance of the auction algorithm. It shows that for a small enough ε, the final assignments are optimal. For example, if all payoffs a_ij are integers, the total payoff \(\sum_{i=1}^{n_r} a_{iJ(i)}\) of any assignment is also an integer. Hence if nε < 1, assignments that are within nε of being optimal must be optimal. Based on Theorem 2.1, we illustrate the performance of the online task allocation auction algorithm. Theorem 2.2: The reserved tasks of UAV u_i are still in the task set J_i when the algorithm terminates. Proof: First, according to the Result Update Step, the prices of the reserved tasks of UAV u_i are not updated (lines 11 to 16 of Algorithm 1). Therefore, the reserved tasks R_i ∀i = 1, ..., n_r will not be cancelled from the task set J_i when their prices change due to the bids of other UAVs. Secondly, for the selection of the remaining tasks to join the task set J_i, the reserved tasks will not be selected, which is determined by the sorting selection range of the task values v_ij. This causes the prices of the reserved tasks to remain unchanged when the prices are updated. It can be seen from the iterative process of the Bidding Step that the reserved tasks R_i of UAV u_i are still in the task set J_i when the auction terminates. Theorem 2.3: When the convergence condition is reached, the task sets where the deleted tasks are located are the task sets that maximize the total payoffs. Proof: Firstly, the precondition is explained. The remaining budget is the difference between the maximum budget of a UAV and the number of tasks it has already executed. This precondition excludes the situation in which no tasks have started to be executed and the UAV's own budget is lower than n. When the precondition is met, the deleted tasks D_i ∀i = 1, ..., n_r will have the opportunity to be
added to the task set of any UAV J_i ∀i = 1, ..., n_r, which means that the task sets in which these tasks end up are the task sets that maximize the overall payoffs when the task prices reach a stable equilibrium after competition with every UAV that may bid. If the precondition does not hold, i.e., the remaining budget of a UAV is lower than the number of changed tasks, the changed tasks may not be added to one of the task sets of that UAV. However, in that case the UAV's tasks have not been executed and its own budget is lower than the number of changed tasks, which is the same as the situation in which the tasks are unchanged: the result is limited by the UAV's own budget, and the final overall payoffs are not affected. Theorem 2.4: Under the precondition of Theorem 2.3, when the online task allocation auction algorithm reaches the convergence condition, the final task sets are the choice with the greatest total payoffs, i.e., the achieved assignments {(i, (t_{i1}, ..., t_{iN_i})) | i = 1, ..., n_r} must be within \(\sum_{i=1}^{n_r} N_i \varepsilon\) of an optimal solution. Proof: This theorem means that no shift of tasks between the task sets obtained by the algorithm can yield a better decision. The reserved tasks R_i of UAV u_i cannot increase the total payoffs by moving to the task set of another UAV J_k, k ≠ i, because the optimal result obtained in Theorem 1.2 is the convergence achieved after all UAVs participate in the bidding. Secondly, the deleted tasks D_i of a UAV cannot increase the total payoffs by moving to the task set of another UAV J_k, k ≠ i, because the optimal result obtained in Theorem 2.3 is also the convergence result after all UAVs participate. Finally, because UAV u_i has no requirement on the execution sequence of the tasks in its task set J_i, no task j ∈ R_i ∪ D_i can find a better position in the task set to increase the total payoffs of the assignments. Moreover, UAVs execute tasks in the order given by the optimal assignment, and the remaining budget of each UAV is not lower than the number of changed tasks, so there is no conflict between the final assignments and the tasks actually being executed. Furthermore, we can choose an appropriate control factor ε to make the final assignments obtained by the auction algorithm lie within \(\sum_{i=1}^{n_r} N_i \varepsilon\) of an optimal solution. Theorem 3: The online shared memory auction algorithm terminates in \(O\!\left(n_r n_t n \frac{\max\{a_{ij}\} - \min\{a_{ij}\}}{\varepsilon}\right)\) iterations. Proof: O(n_t) is the running time of the Bidding Step of each UAV, and \(n \frac{\max\{a_{ij}\} - \min\{a_{ij}\}}{\varepsilon}\), for j ∈ J_change ∪ J_min, is the maximum number of rounds for all UAVs to run the Bidding Step, because the upper limit of the total increase of the bid task prices is n(max{a_ij} − min{a_ij}). To sum up, the online task allocation auction algorithm is an algorithm with an optimality guarantee in polynomial time under the condition that the remaining budget of each UAV is not lower than the number of changed tasks.
4 Online Distributed Auction Algorithm for Multi-UAV Task Allocation
In this part, we combine our algorithm with a consensus technique to achieve maximum consistency in the multi-UAV system and make the algorithm completely distributed. Due to the distributed multi-hop communication among UAVs, at any given moment there are UAVs that have not yet learned the actual prices of every task and UAVs that have; we call these UAVs uninformed and informed, respectively. Note that, due to the maximum price agglomeration rule and the connection structure of the network, each uninformed UAV becomes informed within a finite number of communication cycles, which depends on its distance (expressed in number of links) from the nearest informed UAV. This means that uninformed agents may make lower bids for expensive tasks; however, they eventually become informed and bid for an attractive task correctly.
Fig. 1. Total payoffs vs. iterations for offline and online shared memory auction algorithm
Considering a connected UAV network, if two UAVs can communicate with each other, there is a link between them. At iteration τ, each UAV u_i holds a price p_ij(τ) for task t_j. Starting from the initial value p_ij(0), each UAV updates its price at the Improved Price Agglomeration Step as
$$ p_{ij}(\tau + 1) = \max_{k \in \Upsilon_i^{+}} p_{kj}(\tau) \quad \forall j = 1, \ldots, n_t,\ \forall i = 1, \ldots, n_r, $$
where \(\Upsilon_i^{+} = \{i\} \cup \Upsilon_i\) and \(\Upsilon_i\) is the neighbor set of u_i in the network G. In the end, each UAV u_i obtains the maximum task price p_j of task t_j. Therefore, combined with this consensus technique, the optimality of the algorithm does not change, but the convergence time may be delayed by a factor Δ, the diameter of the UAV network, Δ ≤ n_r.
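The max-consensus price update above is easy to simulate; the sketch below propagates the maximum price of a single task over an assumed line network and shows that every UAV holds the maximum after a number of rounds bounded by the network diameter. The topology and prices are illustrative assumptions.

```python
# Max-consensus propagation of a task price over a line network (illustrative).
n_r = 6
neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < n_r] for i in range(n_r)}
p = [0.0, 0.0, 7.5, 0.0, 0.0, 0.0]     # only UAV 2 knows the current max price

rounds = 0
while len(set(p)) > 1:                  # until all UAVs agree
    p = [max([p[i]] + [p[k] for k in neighbors[i]]) for i in range(n_r)]
    rounds += 1

print(f"converged to {p[0]} after {rounds} rounds (diameter = {n_r - 1})")
```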
5 Simulation Results
We first set the size of Multi-UAV group nr to 10, in which each UAV performs two tasks from a group of tasks nt = 20. The upper limit of the capability of the UAV group is 30. After the task allocation of UAV group is completed, the task
changes in the process of task execution. Assuming that a new task is added, the payoffs of all tasks are randomly generated from the uniform distribution in (0, 100) and set ε = 1. We simulated the auction process of algorithms for shared memory model and fully distributed model with different network diameters Δ on the premise that the remaining budget of each UAV is not less than one. All algorithms were implemented in MATLAB and run on an Intel Core i7 Duo 1.60 GHz processor with 8 GB RAM. As shown in Fig. 1, according to the above scenario, we compare the convergence process of auction between offline auction algorithm and online auction algorithm. As shown in the figure, when the total payoffs reach a stable value, the algorithm obtains the final task assignment. It can be seen from the figure that the online auction algorithm reduces a certain number of auction rounds compared with the offline auction algorithm in the iterative process after the task changes, and the final auction result (total payoffs) is the same as the latter, achieving the best performance in the proof.
(a) Total payoffs vs. problem size for different network topologies. (b) Time (sec) vs. problem size for different network topologies.
Fig. 2. Total payoffs and convergence time of online distributed auction algorithm
As shown in Fig. 2, we simulate the completion time of online task allocation auction algorithm with the change of network diameter under completely distributed conditions. The network structure is set as linear network, random network and fully connected network. We can see that the optimality of task assignments is almost the same for different UAV network structures, and the convergence time depends on the UAV network structure (diameter Δ). Further research shows that the slow convergence speed in large diameter networks is mainly due to the fact that even after most UAVs have converged to their assigned tasks, the final task prices still propagate to uninformed UAVs in the network.
(a) Total payoffs vs. problem size for different online auction algorithms. (b) Time vs. problem size for different online auction algorithms.
Fig. 3. Comparison of the total payoffs and convergence time of the proposed online auction algorithm with sequential auction algorithm
Finally, we expanded the scale of the number of tasks and the number of changed tasks. The multi-UAV group has n_r = 10 UAVs, each UAV performs 6 tasks from a group of n_t = 60 tasks, and its maximum budget is 7. The number of new tasks ranges from one to five. When the tasks change, the remaining budget of each UAV meets the precondition of being not less than one to five, respectively. We counted the algorithm convergence time and total payoffs over one hundred samples, and compared the proposed online task allocation distributed auction algorithm with other online task allocation algorithms. When applying the online sequential auction algorithm, the task scale of the UAVs and the number of changed tasks are set the same as in our algorithm scenario, the control factor is ε = 1, and the payoffs of all tasks are randomly generated from the uniform distribution on (0, 100), which is used to obtain the average performance of the online sequential auction algorithm. As shown in Fig. 3, the proposed online task allocation auction algorithm achieves higher total payoffs than the sequential auction algorithm [10], and with the increase of the number of changed tasks, the former has a more obvious advantage over the latter; this is due to the increased possibility that the optimal assignment of a changed task lies in the assigned but unexecuted task sets of the UAVs, while the online sequential auction algorithm does not modify the assigned sets. With the number of changed tasks increasing, the convergence time of both the online task allocation auction algorithm and the sequential auction algorithm increases, and the time of the former increases more, which is due to the increase in the number of tasks participating in the re-auction. Under the same topology, the convergence time is within the acceptable range of the timeliness requirements.
6 Conclusion
In this paper, in view of the dynamic and changing targets in the battlefield environment, and the fact that the assignments of existing online task allocation algorithms cannot be guaranteed to be optimal, we introduced a result update mechanism
based on the distributed auction algorithm. In our problem model, the number of tasks that each UAV can perform is limited by its budget, and every task must be carried out completely by a UAV. During the execution process after the task allocation, the tasks change, and the goal is to find a new assignment, so as to maximize the sum of payoffs while respecting all constraints. On the premise that the remaining budget of each UAV is not lower than the number of changed tasks, we prove that our algorithm always ends with a limited number of iterations, and the solution within the factor of O(nt ε) of the optimal solution is obtained. We first propose our algorithm using a shared memory model, and then point out how to use consensus algorithm to make it a completely distributed algorithm. We also give the simulation results that the performance of the proposed algorithm is better than the current online sequence task allocation algorithm. The algorithm is suitable for multi-agent task allocation system and other decision planning systems, and future work will continue to study the improvement of the algorithm in other online application scenarios.
References 1. Yi, X., Zhu, A., Yang, S.X., Luo, C.: A bio-inspired approach to task assignment of swarm robots in 3-d dynamic environments. IEEE Trans. Cybern. 47(4), 974–983 (2017) 2. Wang, J., Wang, J., Che, H.: Task assignment for multivehicle systems based on collaborative neurodynamic optimization. IEEE Trans. Neural Netw. Learn. Syst. 31(4), 1145–1154 (2020) 3. Chen, Y., Yang, D., Yu, J.: Multi-UAV task assignment with parameter and timesensitive uncertainties using modified two-part wolf pack search algorithm. IEEE Trans. Aeros. Electron. Syst. 54(6), 2853–2872 (2018) 4. Liang, H., Kang, F.: A novel task optimal allocation approach based on contract net protocol for agent-oriented UUV swarm system modeling. Optik 127(8), 3928– 3933 (2016) 5. Fu, X., Feng, P., Gao, X.: Swarm UAVs task and resource dynamic assignment algorithm based on task sequence mechanism. IEEE Access 7, 41090–41100 (2019) 6. Lee, D., Zaheer, S.A., Kim, J.: A resource-oriented, decentralized auction algorithm for multirobot task allocation. IEEE Trans. Autom. Sci. Eng 12(4), 1469–1481 (2015) 7. Duan, X., Liu, H., Tang, H., Cai, Q., Zhang, F., Han, X.: A novel hybrid auction algorithm for multi-UAVs dynamic task assignment. IEEE Access 8, 86207–86222 (2020) 8. Chen, X., Zhang, P., Du, G., Li, F.: A distributed method for dynamic multi-robot task allocation problems with critical time constraints. Rob. Auton. Syst. 118, 31–46 (2019) 9. Zavlanos, M.M., Spesivtsev, L., Pappas, G.J.: A distributed auction algorithm for the assignment problem. In: 2008 47th IEEE Conference on Decision and Control, pp. 1212–1217 (2008) 10. Luo, L., Chakraborty, N., Sycara, K.: Competitive analysis of repeated greedy auction algorithm for online multi-robot task assignment. In: IEEE International Conference on Robotics and Automation 2012, pp. 4792–4799 (2012)
Research on the Influence of Different Network Environments on the Performance of Unicast and Broadcast in Layer 2 Routing
Mingwei Wang(B), Tao Jing, and Wenjun Huang
School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
{19120131,tjing,16111027}@bjtu.edu.cn
Abstract. Distributed wireless ad hoc network is a wireless communication network composed of some communication devices with transceiver functions. It has the advantages of flexible networking, strong scalability, and low operation and maintenance costs. In the distributed wireless ad hoc network, the routing mechanism plays a vital role in the resource allocation and data exchange between neighbors. Compared with the traditional three-layer routing, the two-layer routing has the advantages of low delay and high stability, so this article adopts the two-layer routing. There are two routing methods, unicast and broadcast, in Layer 2 routing. Unicast routing needs to establish the whole network topology, and then establish routing according to the whole network topology. Broadcast routing adopts the hop-by-hop broadcast method. The two applicable network environments are different. Therefore, this article studies the effects of node density, network hops, service hops, and topology residence time on unicast and broadcast routing in Layer 2 routing, and establishes unicast routing maintenance consumption, unicast and broadcast data transmission consumption models. Finally, determine the optimal routing method in different network environments by maximizing the amount of data that can be transmitted during the residency time.
Keywords: Layer 2 routing · Unicast · Broadcast · Network environments
1 Introduction
The wireless ad hoc network is a wireless communication network composed of some communication devices with transceiver functions. It does not require a fixed communication infrastructure to support the communication of the entire
network. And the networking is fast and simple, the nodes can be moved at will, and the network can be freely moved [1]. Compared with a centralized ad hoc network, the distributed wireless ad hoc network does not require the setting of a central node, and has the advantages of flexible networking, strong scalability, and low operation and maintenance costs. It will have broad application prospects in distributed wireless collaboration networks in 5G ultra-dense networking, distributed D2D and industrial wireless sensor networks. Therefore, the research and development of distributed network are imperative. In the distributed wireless ad hoc network, the wireless network routing protocol plays a very important role, and its quality directly affects the performance of the entire distributed wireless ad hoc network [2]. Therefore, choosing a good wireless network routing protocol is particularly important. The IEEE802.11s draft clearly proposes the advantages of Layer 2 routing compared to Layer 3 routing. First of all, because data packets are forwarded through the second layer, the number of splitting and encapsulation of data packets in the protocol stack is reduced, which can effectively reduce the data forwarding delay, so as to meet the requirements of some delay-sensitive services [3]. Second, the Layer 2 forwarding function is transparent to the upper layer, which can avoid the impact of changes in the lower layer protocol on the upper layer protocol, thereby enhancing system stability [4]. There are two routing methods, unicast and broadcast, in Layer 2 routing, and the two are applicable to different network environments. In the network environment where the node density is small, the number of network hops and service hops are small, and the topology is rapidly changing, the number of neighbors of a node is small, the broadcast data transmission consumption is low, and the maintenance frequency of unicast routing is high. So the Broadcasting can transmit more data during the topology residence time. In the network environment where the density of nodes is high, the number of network hops and service hops are large, and the topology is slowly changing, the number of neighbor nodes in one hop of a node is large, and the consumption of broadcast data transmission increases sharply. So unicast can transmit more data within the topology residence time. At present, scholars at home and abroad have carried out research on the Layer 2 routing mechanism and its performance. Luo models and analyzes the network throughput of Layer 2 routing based on different Ethernet spanning tree protocols [5], but it analyzes based on a single network environment and does not consider the optimal Layer 2 routing method under different network environments. Liu proposed a new Layer 2 routing mechanism, and compared the traditional Layer 3 routing and Layer 2 routing establishment time [6], but it was only based on the analysis of the unicast mechanism in Layer 2 routing, and did not analyze the broadcast mechanism. Amin analyzes the route convergence time and route establishment cost in Layer 2 routing under high-capacity
data scenarios, and adjusts the corresponding parameters for optimization, which reduces the route establishment cost to a certain extent [7], but it does not consider low-capacity data scenarios and the impact of topology changes on route convergence time and overhead. Habara analyzes the routing overhead and the amount of data that can be transmitted in Layer 2 routing, and proposes an optimized load balancing algorithm to achieve a low-power network [8], but it does not consider the impact of different network environments on the route establishment overhead and the amount of data that can be transmitted. Qian proposes a scalable and flexible two-layer routing system protocol, which increases the average number of data transmissions and reduces the route establishment time to a certain extent [9]. However, it only analyzes the performance of Layer 2 routing in a fast-changing topology, and does not consider the route establishment time of Layer 2 routing and the average number of data transmissions in different network environments. Therefore, from the existing literature, there is no literature to study the influence of unicast and broadcast routing in Layer 2 routing under different network environments. However, in actual applications, node density, network hop count, service hop count, and topology residence time are not fixed. If the same routing method is selected under different network environments, the best performance cannot be achieved. Therefore, it is necessary to study the influence of node density, network hops, service hops, and topology residence time on unicast and broadcast routing in Layer 2 routing, so that the optimal routing can be selected under different network environments. The main results of this paper: This paper studies the effects of node density, network hops, service hops, and topology residence time on unicast and broadcast routing in Layer 2 routing, and establishes the model of unicast routing maintenance consumption, unicast and broadcast data transmission consumption. And then, in this paper, we can determine the optimal routing method in different network environments by maximizing the amount of data that can be transmitted during the residence time.
2 System Model
As shown in Fig. 1, the topology used in this paper is the distributed multi-hop mesh topology. In this topology, all nodes have equal status and the effective communication radius of the node is d. In the network with node density ρ, the number of one-hop neighbor nodes of a node is:

N_1 = \pi d^2 \rho - 1    (1)
Fig. 1. Distributed multi-hop mesh topology.
This article uses the election-based resource scheduling mechanism, and both control messages and data messages are sent through elections. The election mechanism is fair and can ensure reasonable scheduling and collision-free transmission of control and data messages. The basic process is as follows:
1) The node election is successful, and then the node backs off, waiting for the next time to send.
2) When the node receives a neighbor's control message, it determines the effective contending nodes according to the neighbor's NXT (NextXmtTime) and ESXT (EarliestSubsequentXmtTime) in the neighbor message.
3) After the node backoff is completed, it executes the Mesh Election algorithm with all valid competing nodes. If the node election is successful, the node will send control or data messages in the election time slot. If the node election fails, the time slot is increased by 1, and the node continues to elect.
In the distributed wireless ad hoc network, the node election cycle E(τ) can be expressed as [10]:

E(\tau) = H + E(Q)    (2)

In (2), H is the back-off time, and E(Q) can be expressed as:

E(Q) = \frac{1}{p_s} - \left(\frac{1}{p_s} + 2^{Exp}\right)(1 - p_s)^{2^{Exp}}    (3)
In (3), ps is the election success probability of the node in the election cycle, which can be expressed as [11]:

\frac{1}{p_s} = \left(\pi d^2 h_c \rho - 1\right)\left[\frac{1}{E(\tau)} + \frac{2^{Exp}}{E(\tau)\, p_s\,(2 - p_s)}\right] + 1    (4)
In (4), hc is the number of neighbor maintenance hops in the distributed election mechanism.
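The coupled relations (2)-(4) have no closed-form solution for ps, but they can be solved numerically by fixed-point iteration, as in the illustrative Python sketch below. The starting guesses, the damping factor and the example parameter values are assumptions, not values from the paper.

```python
import math

def election_fixed_point(rho, d, h_c, H, Exp, iters=200):
    """Solve Eqs. (2)-(4) for the election success probability p_s and
    the election cycle E(tau) by damped fixed-point iteration (sketch)."""
    n_compete = math.pi * d**2 * h_c * rho - 1      # competing nodes in (4)
    N = 2 ** Exp
    p_s, E_tau = 0.5, float(H + 1)                  # rough starting guesses
    for _ in range(iters):
        E_Q = 1/p_s - (1/p_s + N) * (1 - p_s) ** N                        # Eq. (3)
        E_tau = H + E_Q                                                    # Eq. (2)
        inv_ps = n_compete * (1/E_tau + N/(E_tau * p_s * (2 - p_s))) + 1   # Eq. (4)
        p_s = 0.5 * p_s + 0.5 / inv_ps               # damped update for stability
    return p_s, E_tau

# node density given in nodes per square metre (20 nodes/km^2), radius 250 m
print(election_fixed_point(rho=20e-6, d=250, h_c=2, H=16, Exp=3))
```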
2.1 Comparison and Analysis of Layer 2 Routing and Layer 3 Routing
In the distributed wireless ad hoc network, the traditional three-layer routing mechanism uses the AODV protocol. The most basic routing control messages of AODV mainly include three types: RREQ (Route Request), RREP (Route Reply), and RERR (Route Error). Its pathfinding process is as follows: 1) After receiving the service request, the source node first stores the service package in its own cache queue, and then fills in the RREQ message according to the source node address, destination node address, and service type in the service package, and broadcasts it periodically. 2) After the intermediate node receives the RREQ routing message sent by the service source node, it first creates a routing entry to maintain the reverse route from itself to the source node, and then judges whether it is the service destination node based on the ip address of the service destination node carried in the RREQ message. If it is not the destination node, the node will continue to broadcast the RREQ message. 3) After the service destination node receives the RREQ message forwarded by the intermediate node, it will first create a route with the destination node being the service source node, and then generate a route response message RREP based on the information, and reply to the service source node through the reverse unicast path sent by RREQ. Since then, the two-way route from the service source node to the destination node has been established. Compared with the three-layer routing, the establishment of Layer 2 routing in the distributed wireless ad hoc network is relatively simple. The establishment of Layer 2 routing is completed according to the topological connection relationship in the whole network, and the whole network topology connection relationship is completed according to the control message interaction of the nodes on the network. The source node traverses the topological connection relationship of the entire network according to the source and destination nodes of the business, and then determines the next hop node according to the topological connection relationship. The next hop node determines the next hop node of it according to the topology of the entire network. Until the next hop of the node is the destination node, the route establishment is complete. In the distributed wireless ad hoc network, Layer 2 routing has great advantages over Layer 3 routing. First of all, the establishment of Layer 2 routing
does not need to interact with the network layer, and the number of splitting and encapsulation of data packets in the protocol stack is reduced, which can effectively reduce the data forwarding delay. Secondly, the cost of establishing Layer 2 routing is the cost of maintaining the topology of the entire network. The maintenance consumption of the whole network topology is the consumption of nodes receiving maintenance messages of all the nodes in the network. Therefore, compared with the interactive process of request and response of messages through broadcast in Layer 3 routing, the routing maintenance cost of Layer 2 routing is lower. Finally, compared to Layer 3 routing, the route maintenance in Layer 2 routing is transparent to the upper layer, which can avoid the impact of changes in the lower-layer protocol on the upper-layer protocol, so the system is more stable in Layer 2 routing.
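As an illustration of this route establishment by traversing the maintained whole-network topology, the short Python sketch below derives a next-hop table with a breadth-first traversal. The dictionary representation of the topology and the traversal itself are assumptions made for the example; they are not part of the protocol specification.

```python
from collections import deque

def next_hop_table(topology, source):
    """Derive next-hop entries from `source` to every reachable node by
    breadth-first search over the maintained whole-network topology.
    topology: dict mapping node -> iterable of neighbour nodes."""
    next_hop, visited, queue = {}, {source}, deque()
    for nbr in topology[source]:          # one-hop neighbours are their own next hop
        next_hop[nbr] = nbr
        visited.add(nbr)
        queue.append(nbr)
    while queue:
        node = queue.popleft()
        for nbr in topology[node]:
            if nbr not in visited:
                visited.add(nbr)
                next_hop[nbr] = next_hop[node]   # inherit the first hop
                queue.append(nbr)
    return next_hop

# toy 4-node mesh: chain 0-1-2-3 with an extra 0-2 link
topo = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(next_hop_table(topo, 0))   # {1: 1, 2: 2, 3: 2}
```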
2.2 Unicast Routing Maintenance Consumption Model
This article adopts the two-layer routing mechanism. The establishment of the route is completed according to the whole network topology maintained by the nodes on the network, so the cost of route establishment in unicast routing is the cost of nodes maintaining the whole network topology. The maintenance of the entire network topology is accomplished through the interaction of control messages between nodes. Therefore, the time for the node to maintain the topology of the entire network is the time in which the node receives the maintenance information of all nodes on the network. If the node maintains neighbors within the hc hop range, the election cycle E(τ) of a node on the network can be understood as the period in which every node within the hc hop range has successfully elected at least once and has sent a control message once. In the election cycle, assume that a node receives maintenance messages from all one-hop neighbor nodes in the first k time slots of the election cycle. During the election cycle, the success probability of each one-hop neighbor node in each time slot remains unchanged, so the probability that the node completes the maintenance of the neighbor information of all one-hop neighbor nodes in time slot k can be expressed as:

P(T_1 = k) = \frac{C_1^1\, C_{k-1}^{N_1 - 1}}{C_{E(\tau)}^{N_1}}    (5)

According to the probability of the node receiving all one-hop neighbor node information in k time slots, it can be concluded that the expected time for the node to receive all one-hop neighbor node information is:
E(T_1) = \sum_{k=N_1}^{E(\tau)} k \times P(T_1 = k) = \sum_{k=N_1}^{E(\tau)} \frac{k \times C_{k-1}^{N_1 - 1}}{C_{E(\tau)}^{N_1}}    (6)
In the distributed wireless ad hoc network, each node has the same status, and each node has the same chance of occupying the control channel, so the neighbors within the hc hop range of the node have the same election success
probability in the election cycle E(τ). There may be the case where the node's h1 hop neighbor has elected successfully and has sent a control message, but the node's h1 + 1 hop neighbor has not yet sent a control message. This leads to the situation that the neighbors are not maintained in time. In this case, the maintenance of the node's h1 + 1 hop neighbor information needs to be carried out in the next election cycle, so the expected time for the node to receive all hc hop neighbor information is:

E(T_{h_c}) = E\left((h_c - 1)\tau + T_1\right) = (h_c - 1)\, E(\tau) + E(T_1)    (7)
The above analysis of the node maintaining neighbors within the hc hop range does not consider retransmission after a control message transmission failure. Assume that the probability of control message loss is pfc. In the wireless ad hoc network, the network access of new nodes, the interaction between nodes on the network and neighbor nodes, and the maintenance of topology information all need to be completed through the interaction of control messages. In order to ensure the normal maintenance of the network and the collision-free transmission of data, the success probability of control message transmission must be greater than 0.99. According to the control message loss rate pfc, the maximum number of retransmissions M must meet:

\sum_{k=0}^{M} (1 - p_{fc})\, p_{fc}^{k} \ge 0.99    (8)
Considering retransmission after a control message transmission failure, the probability that the node retransmits m times is:

P(N_u = m) = (1 - p_{fc})\, p_{fc}^{m}    (9)
The probability of the neighbor control message transmission failure within the hc hop range of the node is equal, so the expected neighbor maintenance time due to the node's retransmission is:

E(T_{N_u}) = h_c \times \sum_{m=1}^{M} m \times P(N_u = m) \times E(\tau)    (10)
Considering the probability of control message transmission failure, the time expectation for the node to maintain the topology within the hc hop range is:

E(T_{tph_c}) = (1 - p_{fc})\, E(T_{h_c}) + E(T_{N_u})    (11)
If the number of network hops meets h = hc + 1, the time for the node to maintain the topology of the entire network is the sum of the time for the node to maintain the topology within the hc hop range and the time for the node's hc hop neighbor to receive the h = hc + 1 hop neighbor information:

E(T_{tph}) = E(T_{tph_c}) + E(T_1)    (12)
If the number of network hops meets h = hc + 2, the time for the node to maintain the topology of the entire network is the sum of the time for the node to maintain the topology within the hc hop range and the time for the node's hc hop neighbor to receive the h = hc + 2 hop neighbor information:

E(T_{tph}) = E(T_{tph_c}) + E(T_1) + E(\tau)    (13)
By analogy, the time expectation for the node to maintain the topology of the entire network is:

E(T_{tph}) = \begin{cases} \dfrac{h}{h_c}\, E(T_{tph_c}), & \dfrac{h}{h_c} \in Z \\ \left\lfloor \dfrac{h}{h_c} \right\rfloor E(T_{tph_c}) + \left(\operatorname{mod}(h, h_c) - 1\right) E(\tau) + E(T_1), & \dfrac{h}{h_c} \notin Z \end{cases}    (14)
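The maintenance-time model of Eqs. (5)-(14) can be evaluated directly, for example with the illustrative Python sketch below. E(τ) is treated as an integer number of slots, and the input values in the example call are placeholders rather than simulation settings from the paper.

```python
from math import comb

def maintenance_time(N1, E_tau, h_c, h, p_fc):
    """Expected whole-network topology maintenance time in slots, Eqs. (5)-(14)."""
    # Eq. (6): expected slots until all N1 one-hop neighbours have been heard
    E_T1 = sum(k * comb(k - 1, N1 - 1) for k in range(N1, E_tau + 1)) / comb(E_tau, N1)
    # Eq. (7): hop-by-hop propagation over the h_c maintenance hops
    E_Thc = (h_c - 1) * E_tau + E_T1
    # Eq. (8): smallest M whose cumulative success probability reaches 0.99
    M = 0
    while 1 - p_fc ** (M + 1) < 0.99:
        M += 1
    # Eqs. (9)-(10): expected extra time caused by retransmissions
    E_TNu = h_c * sum(m * (1 - p_fc) * p_fc ** m * E_tau for m in range(1, M + 1))
    # Eq. (11)
    E_tphc = (1 - p_fc) * E_Thc + E_TNu
    # Eq. (14): extend from the h_c-hop range to the whole network of h hops
    if h % h_c == 0:
        return (h // h_c) * E_tphc
    return (h // h_c) * E_tphc + (h % h_c - 1) * E_tau + E_T1

print(maintenance_time(N1=8, E_tau=40, h_c=2, h=6, p_fc=0.1))
```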
2.3 Unicast Data Transmission Consumption Model
This article uses the election-based resource scheduling mechanism, and data messages are sent through the election mechanism. In unicast routing, the node establishes the routing table according to the maintained network topology connection relationship, and data transmission is performed according to the established routing table after the node election is successful. Assuming that the node receives the service from the network layer, the service needs to go through hD hops from the current node to the destination node. Since both the data message and the control message are sent using the election mechanism, the transmission failure probability of the data message and the control message is the same, and the maximum number of retransmissions of the control message and the data message is also the same. According to the probability of successful election, the probability of successful election of this node in the i-th time slot is:

P(T_{D1} = i) = (1 - p_s)^{i-1}\, p_s    (15)
Then the expected value of the data time slot sending time of this node is:

E(T_{D1}) = \sum_{i=1}^{E(\tau)} i \times (1 - p_s)^{i-1}\, p_s    (16)
Considering the retransmission of the data message, it can be concluded that the time required for the data to be transmitted from the current node to the one-hop neighbor forwarding node is:

E(T_{D1M}) = (1 - p_{fd})\, E(T_{D1}) + \sum_{m=1}^{M} (1 - p_{fd})\, p_{fd}^{m} \times m\, E(\tau)    (17)
Then the total time for data transmission from the current node to the destination node is: E (TD ) = hD × E (TD1M ) (18)
Therefore, during the topology residence time, the amount of data that a node can transmit through unicast is:

D_s = \frac{T_{sch} - E(T_{tph}) \times T_{con}}{E(T_D) \times T_{con}}\, d_a    (19)
2.4 Broadcast Data Transmission Consumption Model
This article uses an election-based resource scheduling mechanism, and both control messages and data messages are sent through the election mechanism. The data message is sent by broadcast: the node election is successful and then the data is broadcast to all one-hop neighbors, and then the one-hop neighbors broadcast to the two-hop neighbors, until the destination node receives the data message. Therefore, the total time expected for data transmission from the current node to the destination node is:

E(T_D) = \left(1 + N_1 + N_1^2 + \cdots + N_1^{h_D - 1}\right) \times E(T_{D1M})    (20)
Fig. 2. The impact of node density and message transmission failure probability on unicast/broadcast.
During the topology residence time, the amount of data that a node can transmit through broadcast is:

D_g = \frac{T_{sch}}{E(T_D) \times T_{con}}\, d_a    (21)
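Putting Eqs. (15)-(21) together, the routing choice studied in the next section — take whichever method moves more data within the residence time — can be sketched as below. This is an illustrative Python fragment; every numeric argument in the example call is a placeholder rather than a value taken from the paper, and all times except Tsch and Tcon are counted in election slots.

```python
def pick_routing(p_s, p_fd, M, E_tau, h_D, N1, E_tph, T_sch, T_con, d_a):
    """Compare unicast (Eq. 19) and broadcast (Eq. 21) data volumes."""
    # Eq. (16): expected slots until the node wins a data slot
    E_TD1 = sum(i * (1 - p_s) ** (i - 1) * p_s for i in range(1, E_tau + 1))
    # Eq. (17): add the per-hop retransmission overhead
    E_TD1M = (1 - p_fd) * E_TD1 + sum((1 - p_fd) * p_fd ** m * m * E_tau
                                      for m in range(1, M + 1))
    E_TD_uni = h_D * E_TD1M                                    # Eq. (18)
    E_TD_bro = sum(N1 ** k for k in range(h_D)) * E_TD1M       # Eq. (20)
    D_s = (T_sch - E_tph * T_con) / (E_TD_uni * T_con) * d_a   # Eq. (19)
    D_g = T_sch / (E_TD_bro * T_con) * d_a                     # Eq. (21)
    return ("unicast", D_s) if D_s >= D_g else ("broadcast", D_g)

print(pick_routing(p_s=0.2, p_fd=0.1, M=1, E_tau=40, h_D=3, N1=6,
                   E_tph=120, T_sch=0.2, T_con=1e-3, d_a=1.0))
```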
3 Simulation Analysis
According to the results of the unicast routing maintenance consumption model and the unicast and broadcast data transmission consumption models, the model is simulated using MATLAB. We then determine the optimal routing method in different network environments according to the maximum amount of data that can be transmitted during the topology residence time. Table 1 shows some important parameters used in the simulation.

Table 1. Values of simulation parameters

Letter          | Parameter                               | Value
ρ / node/(km)²  | Node density                            | 5–40
d / m           | Communication radius                    | 250
h               | Network hops                            | 5–7
hD              | Service hops                            | 1–5
hc              | Neighbor maintenance hops               | 2
Tsch / s        | Topological residence time              | 0.1–0.3
pfc / pfd       | Control/data message packet loss rate   | 0–0.2
As shown in Fig. 2, we can see the influence of node density and message transmission failure probability on unicast and broadcast. With the increase of node density, the amount of transmitted data for both unicast and broadcast decreases in the topology resident time. This is because as the node density increases, the number of one-hop neighbor nodes of the node increases, and the number of effective competing nodes of the node increases, resulting in the
Fig. 3. The impact of service hops and topology residence time on unicast/broadcast.
decrease in the success probability of the node election. Then it leads to the consumption of unicast routing maintenance, the consumption of unicast and broadcast data transmission becomes larger, and the amount of data that can be transmitted within the residence time becomes smaller. It can also be found from Fig. 2 that when the node density is low, the amount of data that can be transmitted by broadcast is larger, and when the node density is high, the amount of data that can be transmitted by unicast is larger. This is because as the density of nodes increases, the consumption of broadcast data transmission will increase exponentially, and the amount of data that can be transmitted will drop sharply. Therefore, it is possible to select appropriate routing methods according to different network scales. In the case of low node density, use broadcast routing, and in the case of high node density, select unicast routing. As shown in Fig. 3, it can be seen that the number of service hops and topology residence time affect unicast/broadcast. As the number of service hops increases, the amount of data that can be transmitted for unicast and broadcast during the topology residence time is reduced. This is because as the number of service hops increases, the consumption of unicast and broadcast data transmission increases, resulting in the decrease of the amount of data that can be transmitted during the residence time. It can be seen from Fig. 3 that under the condition that the topological residence time remains unchanged, there is the intersection between the unicast and broadcast transmittable data volume curves. When the number of service hops is small, the amount of data that can be transmitted by broadcast is larger. When the number is large, the amount of data that can be transmitted by unicast is larger. It can be found that with the increase of the residence time, the intersection of the unicast and broadcast transmittable data volume curves gradually becomes larger. This is because as the topology residence time becomes longer, the proportion of unicast routing maintenance consumption becomes smaller, and the amount of data that can be
Fig. 4. The impact of network hops on unicast/broadcast.
transmitted becomes larger. Therefore, according to different service hops and topology residence time, we can choose a suitable routing method. Broadcast routing is used when the number of service hops is small and the topology residence time is small, and unicast routing is used when the number of service hops is large and the topology residence time is large. As shown in Fig. 4, it can be seen that the number of network hops affects unicast/broadcast. As the number of network hops increases, the amount of data that can be transmitted by unicast in the topology residence time becomes smaller, and the amount of data that can be transmitted by broadcast remains unchanged. This is because as the number of network hops increases, unicast routing maintains the entire network topology, so unicast routing maintenance consumes more. While broadcast data transmission is hop-by-hop broadcast, the amount of data that can be transmitted is related to the number of service hops and the amount of data that can be transmitted doesn’t depend on the number of network hops. Therefore, we can select the appropriate routing method according to different network hop counts. In the case of high node density, choose unicast routing; in the case of low node density, if the number of network hops is large, choose broadcast routing, if the number of network hops is small, choose unicast routing.
4 Conclusion
In the distributed wireless ad hoc network, node density, network hop count, service hop count, and topology residence time have a great influence on unicast and broadcast routing in Layer 2 routing. This paper analyzes the consumption of unicast routing establishment and the consumption of unicast and broadcast data transmission in distributed wireless ad hoc networks, and deeply considers the influence of node density, network hop count, service hop count, and topology residence time on routing consumption and data transmission consumption. The results show that the unicast routing maintenance consumption, unicast and broadcast data transmission consumption models established in this paper can effectively reflect the influence of node density, network hops, and service hops on the amount of data that can be transmitted by unicast and broadcast within the topology residence time. According to the simulation results, the appropriate routing method can be selected under different node density, network hop count, service hop count, and topology residence time. This article provides a reference for how to choose a suitable routing method according to the network environment in engineering practice, and it is suitable for a variety of different network scenarios such as dense networks.
References 1. Kaysina, I.A., Vasiliev, D.S., Abilov, A., Meitis, D.S., Kaysin, A.E.: Performance evaluation testbed for emerging relaying and coding algorithms in Flying Ad Hoc Networks. In: 2018 Moscow Workshop on Electronic and Networking Technologies (MWENT), Moscow (2018)
2. Thiagarajan, R., Moorthi, M.: Efficient routing protocols for mobile ad hoc network. In: 2017 3rd International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB), Chennai, India, pp. 427–431 (2017). https://doi.org/10.1109/AEEICB.2017.7972346 3. IEEE 802.11 Working Group of the LAN/MAN Committee: IEEE P802.1ls D2.0 Part 11:Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. Amendment 10:Mesh Networking (2008) 4. Akyildiz, I.F., Wang, X., Wang, W.: Wireless mesh networks: a survey. Comput. Netw. 47(04), 445–487 (2005) 5. Luo, Z., Suh, C.: An analytical model for evaluating spanning tree based layer 2 routing schemes. In: 2016 18th International Conference on Advanced Communication Technology (ICACT), PyeongChang, South Korea, pp. 527–531 (2016). https://doi.org/10.1109/ICACT.2016.7423458 6. Lei, L., Yunfu, Z., Hongbo, C., et al.: Research and implementation of wireless network layer two switching routing protocol. Communication Technology (2018) 7. Amin, R., Goff, T., Cheng, B.: An evaluation of layer 2 and layer 3 routing on high capacity aerial directional links. In: 2016 IEEE Military Communications Conference, MILCOM 2016, Baltimore, MD, USA, pp. 331–336 (2016). https:// doi.org/10.1109/MILCOM.2016.7795348 8. Habara, T., Mizutani, K., Harada, H.: A load balancing algorithm for layer 2 routing based Wi-SUN systems. In: 2017 IEEE 85th Vehicular Technology Conference (VTC Spring), Sydney, NSW, Australia, pp. 1–5 (2017). https://doi.org/10.1109/ VTCSpring.2017.8108661 9. Qian, C., Lam, S.S.: A scalable and resilient layer-2 network with Ethernet compatibility. IEEE/ACM Trans. Netw. 24(1), 231–244 (2016). https://doi.org/10.1109/ TNET.2014.2361773 10. Li, X., Li, X.: Research on the impact of interaction times in distributed wireless ad hoc networks. IEEE Access 8, 111742–111750 (2020). https://doi.org/10.1109/ ACCESS.2020.2998575 11. Peng, J., Li, X., Li, X.: Research on election interval of distributed wireless ad hoc networks. IEEE Access 8, 110164–110171 (2020). https://doi.org/10.1109/ ACCESS.2020.2993645
Performance Analysis of MAC Layer Protocol Based on Topological Residence Time Limitation
Yangkun Wang(B), Tao Jing, and Wenjun Huang
School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
{19120140,tjing,16111027}@bjtu.edu.cn
Abstract. The MAC layer protocol is an important factor affecting the performance of mobile ad hoc networks. Existing researches mostly analyze the performance of the MAC layer protocol in an ideal state, that is, when the topological residence time is long enough. However, topological dynamics is a major feature of mobile ad hoc networks and cannot be ignored. In this paper, considering the topological dynamics, that is, when the topological residence time is limited, the performance of the MAC layer protocol using the election mechanism is analyzed. The analysis in this paper finds that the topological residence time decreases as the number of one-hop nodes increases, while the network maintenance time is the opposite. When the network maintenance time exceeds the topological residence time, the network wouldn’t work normally. Under certain network parameters and protocol parameters, the upper limit of the number of one-hop nodes to enable the network to work normally is obtained. This provides a theoretical analysis method for node deployment in actual projects. Keywords: Topological dynamics · The topological residence time (TRT) · The network maintenance time (NMT)
1 Introduction
The MAC (Media Access Control) layer protocol is an important factor affecting the performance of the mobile ad hoc network. It mainly provides the channel access mechanism of the node, so that the node can share the limited wireless bandwidth resource efficiently [1]. Many MAC layer protocols have been proposed so far, which are mainly divided into competition type and allocation type. Among them, the allocation protocol adopts the synchronous communication mode and uses a certain allocation algorithm to determine certain time slots, in which a node accesses the channel. Then a time slot table is formed [2].
The allocation protocol has received widespread attention because of its ability to ensure collision-free transmission. However, the allocation protocol needs to occupy wireless resources to sense the network structure, which causes consumption for network maintenance. So, the performance analysis of the MAC layer protocol is imperative. Due to the topological dynamics of the mobile ad hoc network, the maintenance of network information needs to be performed again when the topology changes during the maintenance. This paper introduces the concept of the topological residence time (TRT). When the nodes and links remain in their original state, the topology remains unchanged. The expected time that the topology remains unchanged is the TRT. It is more practical to analyze the protocol performance when the topology changes dynamically, which means the TRT is limited. However, most of the current papers analyze the performance of the MAC layer protocol under ideal conditions, that is, assuming that the TRT is long enough. Considering the number of neighbor nodes, the number of total nodes in network and the number of channels, the influence on the performance of the protocol is mainly analyzed in [3]. The influence of node density and network range on the network maintenance time (NMT) and delay is analyzed in [4]. Considering the energy consumption, the performance of the protocol is analyzed in an ideal state in [5]. The influence of the election interval and the ratio of the number of control time slots to the number of data time slots on the protocol performance is mainly analyzed in [6]. None of the above papers considered the influence of dynamic factors such as nodes’ movement and changes of channel condition. There are relatively few papers on protocol performance analysis with the dynamic change of network topology. In [7], time delay is analyzed under the condition of node movement, but the channel fading is ignored. In [8], the system throughput is analyzed under the conditions of node mobility and channel
Fig. 1. The network maintenance.
fading, but the influence of node failure is not considered. In [9], considering the channel fading, the influence of SNR on the bit error rate and the influence of distinguishing service priority on the delay are analyzed, but the influence of node mobility is not considered. Although the above papers have considered the topology dynamics of the mobile ad hoc network, it fails to fully consider all the factors that causes network dynamic changes. The topological dynamics of mobile ad hoc networks are produced by changes in node status, including failure and movement, and in link status. The above factors will be comprehensively considered to analyze the performance of the allocation protocol in this paper. In this paper, the topological residence time (TRT) and the network maintenance time (NMT) are deduced. Moreover, the influence of protocol parameters of the election mechanism on the performance is analyzed. The analysis in this paper provides a theoretical basis for the application of the MAC layer protocol using the election mechanism in large-scale network scenarios such as the Internet of Things.
2 Modeling
The allocation protocol of the MAC layer needs to consume resources to maintain network structure information. Then time slot resources are allocated based on the maintained information. In this paper, the nodes use an election mechanism to determine the time slot to send maintenance messages. The election range is two hops. All nodes participating in the election use a pseudo-random algorithm to determine the occupancy of time slot resources. If the election fails but does not reach the upper limit of the election interval, it will participate in the election of the next time slot. If the election fails and reaches the upper limit of the election interval, it will enter the backoff stage. If the election is successful, the node sends a message for network maintenance, and then enters the backoff phase. The election will continue after the backoff ends. The election period, E(τ ), is the interval between two successful elections of the node. When the information of the boundary node in the network is maintained by the node at the other boundary, it is considered that the network has completed a round of information maintenance. As shown in Fig. 1, when the information of the boundary node 1 is maintained by the boundary node 2 at the other end, the network has completed a round of information maintenance. During the election period, nodes can fairly occupy time slot resources to send maintenance messages. Each node can maintain all nodes’ information within one hop in an election period, that is, in an election period, nodes’ information can be transmitted by one hop. So the network maintenance time (NMT) is: E(Tmsuc ) = n · E(τ )
(1)
Where n, which is 4 in Fig. 1, is the ratio of the diameter of network area to the radius of one-hop area. When analyzing the election period, the impact of dynamic changes in topology needs to be considered. Topological changes may cause nodes to fail to
maintain the latest network information in time. So, it is necessary to analyze the topological residence time (TRT) first. Only if the TRT is greater than the NMT can the network work normally, and only then is network performance worth discussing. Next, the topological residence time (TRT) and the network maintenance time (NMT) are analyzed.
2.1 The Topological Residence Time (TRT)
Topological fluctuations are caused by node fluctuations and link fluctuations. Node fluctuations are caused by node failures. Link fluctuations are caused by node movement and changes in channel conditions. Firstly, the probability of node survival is analyzed. Considering the random fluctuations of the node status caused by weather problems and power supply problems, the Weibull distribution is used to model the node survival time [10]. The probability that the node vi is working normally at time t is defined as:

C_{v_i}(t) = P(v_i(t) = 1) = e^{-(t/\theta)^{\beta}}    (2)
Where θ and β, which can be obtained based on experience, are the scale parameter and the shape parameter of the Weibull distribution. It is considered that the states of the nodes are independent of each other. Considering the impact of targeted attacks [11], the failure probability of each node is prand. So, the probability of node failure at time t is:

P_{v_i\text{-}fail}(t) = 1 - C_{v_i}(t) \cdot (1 - p_{rand}) = 1 - e^{-(t/\theta)^{\beta}} \cdot (1 - p_{rand})    (3)
So, the probability that a single node is still alive at time t is:

P_{v_i\text{-}live}(t) = \exp\left(\int_0^{t} \ln\left(1 - P_{v_i\text{-}fail}(\tau)\right) d\tau\right)    (4)
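A small numerical sketch of Eqs. (2)-(4) is shown below (Python). The trapezoidal integration and all parameter values are illustrative assumptions.

```python
import numpy as np

def node_live_probability(t, theta, beta, p_rand, steps=2000):
    """Probability that a node is still alive at time t, Eqs. (2)-(4)."""
    tau = np.linspace(0.0, t, steps)
    c = np.exp(-(tau / theta) ** beta)          # Eq. (2): C_vi(tau)
    p_fail = 1.0 - c * (1.0 - p_rand)           # Eq. (3)
    vals = np.log(1.0 - p_fail)                 # integrand of Eq. (4)
    dt = tau[1] - tau[0]
    integral = dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))   # trapezoid rule
    return float(np.exp(integral))

print(node_live_probability(t=5.0, theta=100.0, beta=1.5, p_rand=0.01))
```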
The probability that the link exists is considered only when the nodes at both ends of the link exist. The distance between two nodes at time t is denoted as R(t); then the probability density function of R(t) is [12]:

f_R(r; t, L_0) = \frac{r}{g(t)} \exp\left(-\frac{r^2 + L_0^2}{2 g(t)}\right) I_0\left(\frac{r L_0}{g(t)}\right)    (5)

Where L_0 is the initial distance between the two nodes and g(t) = D\tau\left(1 - \exp(-2t/\tau)\right). The parameters τ and D are called the relaxation time and the diffusion coefficient, respectively. D controls the fluctuation of the node position along each coordinate axis, and 1/τ controls the rate at which the device returns to the expected (initial) position. I_0 is the zero-order modified Bessel function of the first kind.
So, the steady-state probability that the link state between two nodes is broken is:

p_{mbl} = P(\Lambda_1 = 0) = \int_{r \in I_0} f_R(r)\, dr = \int_{L}^{\infty} f_R(r)\, dr    (6)
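The link-break probability of Eq. (6) can be approximated by numerically integrating the Rice-type density of Eq. (5) beyond the coverage radius, as in this illustrative Python sketch. SciPy's exponentially scaled Bessel function i0e is used for numerical stability; the truncation of the upper limit and the example values are assumptions.

```python
import numpy as np
from scipy.special import i0e

def link_break_probability(t, L0, D, tau_relax, radius, grid=4000):
    """Integrate the Eq. (5) density from the coverage radius outwards."""
    g = D * tau_relax * (1.0 - np.exp(-2.0 * t / tau_relax))
    r = np.linspace(radius, radius + L0 + 20.0 * np.sqrt(g), grid)
    # f_R(r) = (r/g) exp(-(r^2+L0^2)/(2g)) I0(r*L0/g), rewritten with i0e
    pdf = (r / g) * np.exp(-((r - L0) ** 2) / (2.0 * g)) * i0e(r * L0 / g)
    dr = r[1] - r[0]
    return float(dr * (pdf.sum() - 0.5 * (pdf[0] + pdf[-1])))   # trapezoid rule

print(link_break_probability(t=1.0, L0=150.0, D=500.0, tau_relax=2.0, radius=250.0))
```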
(7)
Where pmbl (t, L0 ) represents the probability of link failure when the distance exceeds the coverage radius due to node mobility; psinr (L0 ) represents the probability that the link is interrupted due to the randomness of the SINR when the link exists and the distance does not exceed the coverage radius. It’s believed that the states of each link are independent of each other, and each node can form a link with all nodes within one hop range. For a network with a node density of λ and a network diameter of n · d (d is the coverage of the node), the number of all links is: ⎛ ⎝
Ne =
2π 0
⎛ ⎝
( n −1)d 2 0
a·(πd2 λ−1)da+
nd
( n −1)d 2
⎞
⎞
a·(Sλ−1)da⎠dθ ⎠·λ
(8)
2
Where a is the distance between the node and the center of the network. The two parts of the formula are the numbers of links that can be formed by the middle nodes and by the edge nodes within the network. The distance between a middle node and the network center is 0 to (n/2 − 1)d. The distance between an edge node and the network center is (n/2 − 1)d to nd/2. As shown in Fig. 2, S is the area of the intersection of the one-hop range of the node and the network range. For a middle node, S = \pi d^2, and for an edge node, S = \alpha d^2 + \beta \left(\frac{nd}{2}\right)^2 - d\, a\, \sin(\alpha).
d2 +a2 −( nd 2 ) 2ad
2
2
, cos(β) =
2 2 ( nd 2 ) +a −d . 2·( nd 2 )·a
When the nodes and links remain in their original state, the topology remains unchanged. So, the topological residence time (TRT) is:

E(t_{maintain}) = \int_0^{\infty} \tau \cdot \left(P_{v_i\text{-}live}(\tau)\right)^{N_{node}} \cdot \left(\exp\int_0^{\tau} \ln\left(P_{e_i\text{-}live}(k)\right) dk\right)^{N_{el}} \cdot \left(\exp\int_0^{\tau} \ln\left(1 - P_{e_i\text{-}live}(k)\right) dk\right)^{N_{ef}} d\tau    (9)
Where Nnode is the original number of nodes in the topology. Nel is the number of links that were originally connected in the topology, and Nef is the number of links that were originally disconnected in the topology, Nef = Ne − Nel . The three internal parts of the integral are the probability that the node remains alive and the link remains in the original state (connected and disconnected) within τ .
Fig. 2. The intersection of edge node coverage and network range.
2.2 The Network Maintenance Time (NMT)
The election period is analyzed first. Suppose that the fixed backoff index is BASIC, the backoff index is EXP, the backoff period is H = 2^{(BASIC+EXP)}, the election interval is 2^{EXP}, and the number of nodes participating in the election is Ncompete. Due to the fairness of the pseudo-random algorithm, the success probability of node access is:

p_s = \frac{1}{N_{compete}}    (10)
The probability density function of the slots number Q from the beginning of the election to the successful election is expressed as follows:

P(Q = k) = (1 - p_s)^{k-1} \cdot p_s, \quad 1 \le k \le 2^{EXP}    (11)
So, the expected value of Q is:

E(Q) = \sum_{k=1}^{2^{EXP}} k \cdot P(Q = k) = \frac{1}{p_s} - \left(\frac{1}{p_s} + 2^{EXP}\right)(1 - p_s)^{2^{EXP}}    (12)
Adding the backoff period, the election period is:

E(\tau) = E(Q) + H    (13)
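The closed form in Eq. (12) can be cross-checked against the direct sum over Eq. (11), for instance with the short Python sketch below (the example values are placeholders).

```python
def election_cycle(p_s, EXP, H):
    """E(tau) = E(Q) + H with E(Q) from Eq. (12); the direct sum over
    Eq. (11) is returned alongside as a numerical cross-check."""
    N = 2 ** EXP
    closed = 1 / p_s - (1 / p_s + N) * (1 - p_s) ** N            # Eq. (12)
    direct = sum(k * (1 - p_s) ** (k - 1) * p_s for k in range(1, N + 1))
    return closed + H, direct + H

print(election_cycle(p_s=0.2, EXP=4, H=32))   # the two values coincide
```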
The nodes participating in the election satisfy that the next election interval (N XT ) includes the election time slot or the next earliest transmission time of the next time (ESXT ) is before the election time slot. For a one-hop neighbor node, when the neighbor node moves beyond the coverage of the node or the link fails due to channel conditions, the NCFG
568
Y. Wang et al.
message cannot be exchanged normally in time. Then the node will determine that the election time slot is after the next earliest transmission time of the next time (ESXT ) when the neighbor node sends the NCFG message. The neighboring node will be regarded as a competing node. As shown in Fig. 3, if the neighbor node normally exchanges NCFG messages with the current node, the time slot when the current node and the neighbor node sends NCFG message are CXT 0 and CXT 1 respectively, and the difference is u. When the next election interval N XT 0 of the current node is repeated with the next election interval N XT 1 or the next earliest transmission time of the next time ESXT 1 and the following time slots of the neighbor node, it is judged that the neighbor node participates in the election.
Fig. 3. Determine whether the node participates in the election.
So, when the neighbor node normally exchanges NCFG messages with the current node, the probability of a node participating in the election is: ⎧ 2EXP −u 1 ≤ u < 2EXP ⎪ EXP ⎪ ⎨ 2EXP 02 ≤ u < H + 1 − 2EXP Pcpmpete (u) = 2EXP −H+u (14) EXP ⎪ ≤u αμt }. γt = abs[AT rt−1 ], which represents the ith element that satisfies the condition selected in the tth iteration. μt = max(|AT rt−1 |), which represents the maximum value of the inner product at the ith iteration; (3) Jt = Ft−1 ∪ St , Ft−1 is the support set of previous iteration, Jt is candidate set of this iteration. (4) Calculate θt = (At T At )−1 At T y and F = max(|θt |, L), θt is the least squares solution corresponding to the index. (5) Get residual: rnew = y − (At T At )−1 At T y. (6) Calculate: εt = ||rnew − HLS ||2 . (7) If εt > εt−1 , output reconstructed signal; otherwise, skip to step 8; (8) if ||rnew ||2 ≥ ||rt−1 ||2 , size increases; otherwise, size will not change; (9) Number of iteration increases and continue the loop from step 2.
Fig. 2. MSAMP algorithm flow chart.
In the algorithm, step 1 is the initialization process. Step 2 is the preliminary stage of atom selection: by setting an appropriate correlation coefficient, the atoms with less correlation are eliminated, which reduces the probability of subsequent non-ideal atoms entering the support set. Steps 3 to 4 are the secondary screening stage of atoms. By introducing the backtracking idea, the atoms initially screened in step 2 are screened again to further improve the accuracy of the system. Step 5 calculates the residual, and steps 6 and 7 obtain the iteration stop parameters to decide whether to break out of the loop. That is, whether to continue the iteration is determined by the two-norm of the difference between the residual and the channel information obtained by LS. If ε_t > ε_{t−1}, the stop condition has been met, so the loop is terminated immediately and the result is output; otherwise, continue to step 8. Steps 8 and 9 update the step length, the support set and the residual. If ||r_new||_2 ≥ ||r_{t−1}||_2, the step size L is increased by the minimum step 1 and the residual of this iteration is replaced by the last residual; otherwise, the best atoms under this step size have not yet been selected, and the iteration continues with the same step size. The flow chart of the algorithm is shown in Fig. 2.
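An illustrative Python sketch of this loop is given below. It is a reconstruction for exposition only, not the authors' code: the stop metric uses the residual of the LS estimate, y − A·h_LS, as a dimension-consistent stand-in for the ||r_new − H_LS|| test in step 6, and acceptance of the new estimate in the "otherwise" branch of step 8 is likewise an assumption.

```python
import numpy as np

def msamp(A, y, h_ls, alpha=0.8, max_iter=50):
    """Sketch of the modified SAMP recovery loop described above."""
    n = A.shape[1]
    L = 1                                   # initial step size (step 1)
    x = np.zeros(n, dtype=complex)
    r = y.astype(complex)                   # initial residual
    r_ls = y - A @ h_ls                     # residual of the LS estimate
    eps_prev = np.inf
    for _ in range(max_iter):
        corr = np.abs(A.conj().T @ r)
        if corr.max() == 0:
            break
        cand = np.flatnonzero(corr > alpha * corr.max())     # step 2: screening
        J = np.union1d(np.flatnonzero(x), cand)              # step 3: candidate set
        theta = np.linalg.lstsq(A[:, J], y, rcond=None)[0]   # step 4: LS on J
        keep = J[np.argsort(np.abs(theta))[-L:]]             # keep L largest entries
        x_new = np.zeros(n, dtype=complex)
        x_new[keep] = np.linalg.lstsq(A[:, keep], y, rcond=None)[0]
        r_new = y - A @ x_new                                # step 5: residual
        eps = np.linalg.norm(r_new - r_ls)                   # step 6 (see note above)
        if eps > eps_prev:                                   # step 7: stop test
            break
        if np.linalg.norm(r_new) >= np.linalg.norm(r):       # step 8: enlarge step
            L += 1
        else:                                                # accept this estimate
            x, r = x_new, r_new
        eps_prev = eps                                       # step 9: next round
    return x
```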
40
35
N
30
25
20
15
10
5
6
12
18
24
30
K
Fig. 3. Comparison of iteration times with 2 algorithms.
3 Algorithm Simulation and Analysis
Based on MATLAB, the number of iterations, mean square error and bit error rate are compared with other existing algorithms. The simulation parameters of the system are set as follows: the total number of subcarriers is 256; 192 subcarriers are used to transmit data, and the remaining are zero. Quadrature phase shift keying is used and the cyclic prefix is a quarter of the total number of subcarriers. The sampling frequency is 96 MHz. The normalized MSE calculation formula for frequency response estimation is as follows:

MSE = \frac{E\left[\sum_k |H(k) - \hat{H}(k)|^2\right]}{E\left[\sum_k |H(k)|^2\right]}    (9)
H(k) is the kth element of the channel frequency domain response vector H, and Ĥ(k) is the kth element of the estimated channel frequency domain response vector Ĥ. For fairness, the LS, SAMP and MSAMP algorithms all use the same long preamble, and the same iterative stop conditions are set for the SAMP and MSAMP algorithms.
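For reference, the normalised MSE of Eq. (9) can be computed as in the short Python sketch below; the assumed array layout (one row per channel realisation) is an illustration, not a constraint from the paper.

```python
import numpy as np

def normalized_mse_db(H_true, H_est):
    """Normalised channel-estimation MSE of Eq. (9), in dB."""
    num = np.mean(np.sum(np.abs(H_true - H_est) ** 2, axis=-1))
    den = np.mean(np.sum(np.abs(H_true) ** 2, axis=-1))
    return 10.0 * np.log10(num / den)
```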
3.1 The Influence of Threshold Parameters α on Algorithm Reconstruction Time Under Different Sparsity
Figure 3 shows the influence of threshold parameters α on algorithm reconstruction time under different sparsity. As shown in the Figure, the red line is the SAMP algorithm curve with a step length of 1, and the remaining curves are the MSAMP algorithm curves under different screening coefficients. In the case of
SNR [dB]
Fig. 4. The comparison of mean square error under different screening coefficients.
less sparsity, the number of iterations required is the least when α is 0.8, and the number of iterations is the most when α is 0.2. When α is 0.8, only few atoms with the greatest correlation are selected for each iteration, thereby reducing the probability of irrelevant atoms entering the candidate set; when α = 0.2, more irrelevant atoms are introduced in each iteration, which requires multiple iterations to filter out the atom that satisfies the iteration stop condition. If the sparsity is large, the number of iterations required is the least when α is 0.2, and the number of iterations required for other coefficients is more. The reason is that in this condition, multiple atoms can be selected at a time to expand the support set faster to find the atoms that meet the judgment conditions. Moreover, the number of iterations under all screening coefficients in MSAMP algorithm is less than that of the SAMP algorithm with a step size of 1, so the reconstruction speed of the proposed algorithm is higher than that of the SAMP algorithm. 3.2
The Influence of Threshold Parameter α on Algorithm Reconstruction Performance
Figure 4 is the comparison diagram of mean square error under different screening coefficients in a 10-path channel. When α is 0.2, the mean square error is the highest; When α is 0.8, the mean square error is the smallest. Because with the increase of α, the screening threshold in the preliminary screening of atoms increases. Only a few atoms with higher correlation can be selected into the index set, and the remaining atoms are discarded. Thus it can reduce the impact of irrelevant atoms on the system accuracy. In order to achieve the best channel estimation performance, the value of α in the following simulations are all set to 0.8.
3.3
Performance Analysis of Mean Square Error and Bit Error Rate
Figure 5 shows the mean square error under different signal-to-noise ratio in 6path channel. When signal-to-noise ratio is low, the noise of the channel is larger, and the MSAMP algorithm is slightly better than the SAMP algorithm. As the signal-to-noise ratio increases, the influence of noise gradually decreases, so the performance advantage of the MSAMP algorithm is reflected. We can see that when the signal-to-noise ratio is 20, the MSE of MSAMP algorithm is about 2dB lower than that of the SAMP algorithm. -5
SNR [dB]
Fig. 5. The comparison of mean square error of three algorithms.
SNR [dB]
Fig. 6. The comparison of bit error rate of three algorithms.
Figure 6 shows the bit error rate under different signal-to-noise ratio in 6-path channel. When the BER is 2.3e−5, the SNR of traditional LS algorithm is 21 dB,
the SNR of SAMP algorithm is 17 dB, and the SNR of MSAMP algorithm is 16 dB. Compared with LS algorithm, the SNR of MSAMP algorithm is reduced by 5 dB, and compared with SAMP algorithm, the SNR is reduced by 1 dB.
4 Conclusion
This paper proposes a new kind of algorithm in view of unknown sparsity and slow reconstruction time in compressed sensing channel estimation of OFDM system. The algorithm uses estimation result of LS to obtain channel sparsity and introduces it into compressed sensing. By improving the atomic screening and setting step length, it improves system performance. Simulation analysis shows that the reconstruction time of MSAMP algorithm is lower than that of the SAMP algorithm under the same conditions. And the reconstruction accuracy of the MSAMP algorithm is better than that of the LS algorithm and the SAMP algorithm.
References 1. G´ omez-Cuba, F., Goldsmith, A.J.: Compressed sensing channel estimation for OFDM with non-Gaussian multipath gains. IEEE Trans. Wireless Commun. 19(1), 47–61 (2020) 2. Wu, W.-R., Chiueh, R.-C., Tseng, F.-S.: Channel estimation for OFDM systems with subspace persuit algorithm, pp. 269–272 (2010) 3. Lee, H.-C., Gong, C.-S.A., Chen, P.-Y.: A compressed sensing estimation technique for doubly selective channel in OFDM systems. IEEE Access 7, 115–199 (2019) 4. Donoho, D.L., Tsaig, Y., Drori, I., Starck, J.-L.: Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theory 58(2), 1094–1121 (2012) 5. Liu, J., Zhang, C., Pan, C.: Priori-information hold subspace pursuit: a compressive sensing-based channel estimation for layer modulated TDS-OFDM. IEEE Trans. Broadcast. 64(1), 119–127 (2018) 6. Wang, J., Kwon, S., Shim, B.: Generalized orthogonal matching pursuit. IEEE Trans. Signal Process. 60(12), 6202–6216 (2012) 7. Do, T.T., Gan, L., Nguyen, N., Tran, T.D.: Sparsity adaptive matching pursuit algorithm for practical compressed sensing. In: 2008 42nd Asilomar Conference on Signals, Systems and Computers, pp. 581–587. IEEE (2008) 8. Bi, X., Chen, X.-d., Zhang, Y., Yang, J.: Variable step size stagewise adaptive matching pursuit algorithm for image compressed sensing. In: 2013 IEEE International Conference on Signal Processing, Communication and Computing (ICSPCC 2013), pp. 1–4. IEEE (2013) 9. Li, G., Li, T.: An improved SAMP scheme for sparse OFDM channel estimation. In: 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), pp. 713–716 (2019)
Research on Network Overhead of Two Kinds of Wireless Ad Hoc Networks Based on Network Fluctuations
Jianhua Shang(B), Ying Liu, and Xin Tong
School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
[email protected]
Abstract. Clustering is a method to solve the limited scale of the planar ad hoc network. The network overhead is one of the important performance indicators for judging whether the current planar ad hoc network should be clustered. We can obtain the applicable network scale of the clustered network by combining the routing mechanism, scheduling mechanism and cluster head election mechanism of the ad hoc network, comprehensively considering network fluctuations, channel quality and network scale, and finally comparing and analyzing the overhead performance of the planar network and the clustered network. The simulation results show that the clustered ad hoc network is more suitable for large-scale network scenarios than the planar structure ad hoc network.

Keywords: Ad hoc network · Clustered network · Network overhead · Routing mechanism · Cluster head election

1 Introduction
The wireless ad hoc network is a new type of network with the characteristics of multi-hop, self-organization, self-healing, low cost and independence from pre-arranged basic communication facilities. The planar ad hoc network has a single type of resource scheduling method. The central node of a centralized network maintains the entire network's nodes according to the scheduling tree structure; the tree structure therefore grows as the scale of the network increases, which leads to increased maintenance costs. A distributed network does not require a central node to maintain the entire network, but as the scale increases, the routing overhead of the distributed network also increases. Therefore, the scale of the network is an important factor affecting network performance. The clustering mechanism [1] is a topology management method that can reduce wireless network communication overhead and increase network scale.
This work was supported by the National Key R&D Program of China under Grant 2017YFF0206201.
This mechanism divides the network into several clusters; each cluster is composed of a cluster head and multiple common nodes, and the cluster heads form the upper-level network. The cluster head is the core of subnet control. The functions of ordinary members in the cluster are relatively simple compared with those of the cluster head nodes, and there is no need for them to maintain huge and complex routing information. The clustering network divides the large-scale network into small networks that limit the number of network hops and the broadcast range, so that the routing information of intra-cluster communication is localized [2]. In the study of clustered networks, cluster routing has always been a hot spot. Existing research has shown that cluster routing mechanisms can improve the efficiency of ad hoc networks, effectively reduce network complexity and reduce routing overhead [3–5]; nevertheless, research on cluster routing is still continuing. The literature [6–9] mainly considers cluster routing from the perspective of energy. These works focus on how to balance network energy consumption, extend network life and improve cluster stability, and do not explain or study the cluster routing mechanism in depth. The literature [10] analyzes the difficulties of standard cluster routing and, based on the existing standard cluster routing protocol CBRP, proposes an improved CBRP protocol based on time-series prediction of node location information. The simulation results show that the protocol reduces, to a certain extent, the impact on network performance of chain disconnection caused by network fluctuations. The literature [10,11] proposes a routing protocol based on area division, in which different routing protocols are used inside and outside the area. The simulation results show that this routing protocol can reduce the routing maintenance overhead. That routing protocol is only analyzed and researched on the planar ad hoc network, and whether it is applicable to clustered networks is not known; however, according to the characteristics of clustered networks, the routing of clustered networks can be studied based on this idea. Combining planar routing with clustering technology, the literature [13] takes advantage of base station energy to communicate with base stations in non-clustered areas; otherwise, a node with energy greater than the average is selected as the cluster head to balance the network's energy consumption. However, whether this algorithm is effective for other aspects of network performance has not been verified or analyzed. The cluster routing idea in this paper is similar to that of [10,11]. The literature [14] proposes an intra-cluster/inter-cluster routing mechanism that uses proactive routing within a cluster and reactive routing among clusters. Although that paper tested and verified the mechanism, it did not simulate the impact of this type of routing on network performance. Although a lot of research has been done on clustered networks, most of it focuses on how to cluster and how to prolong the life of the network. Although some studies have examined the mechanism of cluster routing and its contribution to improving network performance, they focus only on horizontal comparisons between clustered networks; the existing research only studies the impact of cluster routing on network performance and does
not combine specific scheduling mechanisms, cluster head re-election and other mechanisms for a comprehensive analysis. There are mainly two problems in the existing research. One is that the planar network and the clustered network are not compared and analyzed. The other is that, although most studies mention that clustering is a measure to increase the scale of the network, no research has been found on the network scales to which the planar network and the clustered network are respectively adapted. This article focuses on these two issues: based on network fluctuations and channel quality, it comprehensively considers the cluster routing process, the three-way handshake scheduling process and the cluster head election mechanism to analyze and compare the network overhead of the two network structures, and finally obtains the network scale scenario to which the clustered network is adapted.
2 System Model
Assuming that the total number of nodes in the network is N, that the nodes of the planar ad hoc network are distributed in a circle with a diameter of H hops, that the spatial distribution obeys a Poisson point process with density λ_p, and that the communication radius of the nodes is r, the total number of nodes is N = λ_p π H^2 r^2. The planar ad hoc network is based on a distributed scheduling method, and the transmission of data services is based on the three-way handshake scheduling mechanism. Let A = 2^(4+X_E). Ignoring the service transmission delay overhead, the single-hop overhead of a data service is the three-way handshake scheduling overhead E_0 [15]:

E_0 = 3 [ (1 − p_ele)/(A p_ele) ∫_0^(1−p_ele) x^(A−1)/(1 − x) dx + A(A + 1) p_ele / (2(1 − p_ele)) ]    (1)

where p_ele is the probability of election success and X_E is the random backoff index. The network uses an on-demand routing protocol. The idea is that the source node floods the Rreq message to the nodes in the entire network, and the nodes along the way record their path to the source node. When the destination node receives the flooded message, it unicasts the Rrep message to the source node in the reverse direction. The message transmission is based on a pure election mechanism, that is, the election of the transmission time slot is performed within the maintenance range, and then the message is sent out. Ignoring the message transmission delay cost, the single-hop pathfinding cost T_ele is the cost of the election process [15]:

T_ele = 1/p_ele + 2^(4+X_E)    (2)
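As a small illustration of the planar-network model and Eq. (2), the quantities above can be evaluated in Python for assumed parameter values (these are illustrative only and are not the settings used in the paper's simulations):

```python
import math

# Assumed, illustrative parameter values (not the paper's simulation settings).
lambda_p = 0.002   # node density of the Poisson point process
r = 50.0           # communication radius of a node
H = 4              # network diameter in hops
p_ele = 0.6        # probability of election success
X_E = 1            # random backoff index

N = lambda_p * math.pi * H**2 * r**2    # total number of nodes, N = lambda_p * pi * H^2 * r^2
T_ele = 1.0 / p_ele + 2 ** (4 + X_E)    # single-hop pathfinding (election) cost, Eq. (2)

print(f"N = {N:.0f} nodes, T_ele = {T_ele:.2f} time slots")
```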
The clustered ad hoc network is a two-level network consisting of an intra-cluster level and an inter-cluster level, both of which use distributed scheduling, and the analysis of data service transmission is consistent with that of the planar ad hoc network. In view of the characteristics of the clustered network, the network adopts a hybrid routing protocol: on-demand routing is used within a cluster, and proactive routing is used between clusters, that is, each cluster head maintains routing table information for the routes between cluster heads. Assume that the number of neighbor hops maintained by a cluster head node of the clustered network is the same as the number of neighbor hops maintained by the planar network, that the spatial distribution of the nodes obeys a Poisson point process with density λ_p, and that the communication radius of the nodes is r. Assuming further that the cluster head of each cluster is at the center of the cluster, the coverage of a single cluster can be regarded as a circle centered on the cluster head with radius h·r, and the number of member nodes in a single cluster is obtained as:

n_m = λ_p π h^2 r^2,  N ≥ n_m    (3)
According to the literature [16,17], the density of cluster head nodes and the number of cluster heads are as follows:

λ_c = (1 − e^(−n_m)) / (π h^2 r^2)    (4)

n_1 = N λ_c / λ_p = N (1 − e^(−n_m)) / (λ_p π h^2 r^2)    (5)
According to the literature [18], for a network with N nodes in which the number of nodes within the one-hop coverage of a single node is n, the mathematical expectation of the number of hops between nodes in the planar structure is:

H_p ≈ ln(N) / ln(n) = ln(N) / ln(λ_p π r^2)    (6)

The average number of forwarding hops within a cluster and the number of forwarding hops between cluster heads are:

H_c ≈ ln(λ_p π h^2 r^2) / ln(n) = ln(n_m) / ln(λ_p π r^2)    (7)

H_int = 14 h ln(N) / (9 ln(196 λ_p π h^2 r^2 / 81))    (8)
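To make the scale relationships in Eqs. (3)–(8) concrete, the following sketch evaluates the cluster parameters and expected hop counts for an assumed parameter set (again, illustrative values rather than the paper's own):

```python
import math

lambda_p, r, h, H = 0.002, 50.0, 2, 4    # assumed density, radio range, cluster radius (hops), network diameter (hops)

n = lambda_p * math.pi * r**2            # nodes within one-hop coverage of a single node
N = lambda_p * math.pi * H**2 * r**2     # total nodes in the planar network
n_m = lambda_p * math.pi * h**2 * r**2   # member nodes per cluster, Eq. (3)

lambda_c = (1 - math.exp(-n_m)) / (math.pi * h**2 * r**2)   # cluster head density, Eq. (4)
n_1 = N * lambda_c / lambda_p                               # number of cluster heads, Eq. (5)

H_p = math.log(N) / math.log(n)          # expected hops between nodes, planar structure, Eq. (6)
H_c = math.log(n_m) / math.log(n)        # expected forwarding hops within a cluster, Eq. (7)
H_int = 14 * h * math.log(N) / (9 * math.log(196 * lambda_p * math.pi * h**2 * r**2 / 81))  # Eq. (8)

print(f"n = {n:.1f}, N = {N:.0f}, n_m = {n_m:.1f}, n_1 = {n_1:.1f}")
print(f"H_p = {H_p:.2f}, H_c = {H_c:.2f}, H_int = {H_int:.2f}")
```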
When there are factors such as interference in the channel, the service may not be successfully transmitted to the destination in one attempt, so retransmissions are needed. The channel quality is represented by the service transmission success probability p. Assuming that the
service is successfully transmitted on the i-th attempt, the probability of successful transmission P_i is:

P_i = (1 − p)^(i−1) p    (9)

The average number of transmissions N_r is:

N_r = Σ_{i=1}^{N} i (1 − p)^(i−1) p = 1/p    (10)
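Equation (10) can be checked numerically; for an assumed success probability p, the expected number of transmissions of the geometric retry model is close to 1/p:

```python
p, N = 0.8, 50   # assumed success probability and number of summed terms
N_r = sum(i * (1 - p) ** (i - 1) * p for i in range(1, N + 1))
print(N_r, 1 / p)   # both evaluate to approximately 1.25 transmissions
```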
3 Network Performance Analysis
When the network fluctuates, that is, when the nodes or links in the network are unstable, the topology of the network changes. The time for which a link stably exists in the network is called the link residence time, and the time during which all links in the network stably exist is called the topology residence time. We use the link residence time and the topology residence time to characterize network fluctuations. The length of the residence time affects the number of routing updates, which in turn affects the performance overhead of the network. In the analysis process, the message transmission delay is ignored; therefore, the network delay cost mainly consists of the pathfinding cost and the scheduling cost. When the influence of fluctuations on the cluster head is considered, the clustered network incurs an additional cluster head election overhead; if the impact of network fluctuations on the cluster head is not considered, the overhead caused by the election of the cluster head is ignored. This article assumes that the service arrives at time zero.

3.1 Analysis of Network Overhead of Ad Hoc Network with Plane Structure
When the topology is stable and unchanged, according to the idea of on-demand routing, a complete pathfinding consists of the flooding overhead T_flood and the reverse unicast overhead T_danbo. Considering the retransmissions caused by the channel quality and the number of service hops, the complete routing cost T_1 and the complete scheduling cost E_1 are obtained as:

T_1 = T_flood + T_danbo = T_ele H_p N_r + T_ele H_p N_r = 2 T_ele H_p N_r    (11)

E_1 = E_0 H_p N_r    (12)
When the network fluctuates, a route that has already been found will become invalid and pathfinding must be performed again. For a route in the planar structure network, if the unstable link occurs on a non-routed path, there is no need to find the route again, so only the average link residence time needs to be considered; otherwise, pathfinding must be performed again. Assuming that the average link residence time is T_link and that link fluctuations do not affect the
number of hops between the source and destination nodes, when the total time of the first complete routing is longer than the link residence time and the link changes before a pathfinding is completed, the routing will fail and data services cannot be sent and received. In order to ensure that services are ultimately transmitted to the destination nodes, not only must a complete routing be completed at least once during the residence time, but at least one scheduling must also be ensured. The following constraint is obtained:

T_1 + E_0 N_r ≤ T_link    (13)

When the link residence time is long enough, it can not only satisfy the first complete pathfinding but also complete all scheduling; that is, when E_1 + T_1 ≤ T_link, the total cost is:

E_2 = E_1 + T_1    (14)

When T_1 + E_0 N_r ≤ T_link < E_1 + T_1, multiple pathfindings and multiple schedulings are required, and the number of remaining service hops gradually decreases each time until the service reaches the destination node. Assume that T_i is the i-th pathfinding overhead, that T_link − T_i is the remaining link residence time that can be used for scheduling, and that (T_link − T_i)/(E_0 N_r) is the number of hops the service has passed after the i-th pathfinding, so that the remaining hops are H_p − Σ_{i=1}^{k} (T_link − T_i)/(E_0 N_r), where k is the number of re-pathfindings. The derivation process is as follows.

When k = 1:
Routing cost: E_2^1 = 2 T_ele H_p N_r
Passed hops: (T_link − E_2^1)/(E_0 N_r)
Remaining hops: n_1 = H_p − (T_link − E_2^1)/(E_0 N_r)

When k > 1:
Routing cost: E_2^k = 2 T_ele n_{k−1} N_r
Passed hops: (T_link − E_2^k)/(E_0 N_r)
Remaining hops: n_k = n_{k−1} − (T_link − E_2^k)/(E_0 N_r)

Here k satisfies H_p − Σ_{i=1}^{k} (T_link − E_2^i)/(E_0 N_r) < 1. The total pathfinding cost is:

E_pr = Σ_{i=1}^{k} E_2^i = 2 k T_ele H_p N_r − 2 T_ele N_r Σ_{i=1}^{k} (k − i)(T_link − E_2^i)/(E_0 N_r)    (15)
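The case analysis in Eqs. (11)–(15), together with the total-overhead expression summarized in Eq. (16) below, can be sketched as a small Python routine. The function follows the reconstruction above, and the parameter values in the example call are assumed for illustration:

```python
import math

def planar_overhead(T_ele, E0, H_p, N_r, T_link):
    """Planar ad hoc network overhead, following Eqs. (11)-(16) as reconstructed above."""
    T1 = 2 * T_ele * H_p * N_r            # complete routing cost, Eq. (11)
    E1 = E0 * H_p * N_r                   # complete scheduling cost, Eq. (12)
    if E1 + T1 <= T_link:                 # link lives long enough for routing and all scheduling
        return E1 + T1                    # Eq. (14)
    if T1 + E0 * N_r > T_link:            # constraint (13) violated: service cannot be delivered
        return math.inf
    remaining, routing, k = H_p, 0.0, 0   # otherwise iterate the re-pathfinding process
    while remaining >= 1:                 # k satisfies H_p - sum(...) < 1
        k += 1
        E2_k = 2 * T_ele * remaining * N_r         # k-th pathfinding cost
        routing += E2_k
        remaining -= (T_link - E2_k) / (E0 * N_r)  # hops covered before the link changes
    return routing + E0 * N_r             # total routing cost, Eq. (15), plus one hop of scheduling

# Example with assumed numbers:
print(planar_overhead(T_ele=33.7, E0=60.0, H_p=4.0, N_r=1.25, T_link=500.0))
```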
In summary, the network overhead of the planar ad hoc network is:

E_p = (2 T_ele + E_0) H_p N_r,  when (2 T_ele + E_0) H_p N_r ≤ T_link

E_p = 2 (k H_p − Σ_{i=1}^{k} (k − i)(T_link − E_2^i)/(E_0 N_r)) T_ele N_r + E_0 N_r,  when (2 T_ele H_p + E_0) N_r ≤ T_link < (2 T_ele + E_0) H_p N_r    (16)

where k satisfies H_p − Σ_{i=1}^{k} (T_link − E_2^i)/(E_0 N_r) < 1.

3.2 Analysis of Network Overhead of Ad Hoc Network with Cluster Structure
T_t, the following constraint must hold to ensure the normal arrival of services: 2 T_ele H_c N_r + E_0 N_r + C ≤ T_t; that is, at least one complete pathfinding, one hop of scheduling and one new cluster head election must be completed. After the pathfinding is completed, if a link fluctuation (route expiration) occurs while the service is being sent, pathfinding will continue. The number of hops that the service passes before each pathfinding is (T_t − T_i − C)/(E_0 N_r); T_t − T_i − C indicates the remaining topology residence time that can be used for scheduling, and the remaining hop count satisfies H_c − Σ_{i=1}^{k} (T_t − T_i − C)/(E_0 N_r) < 1. The pathfinding cost is:

E_cr = 2 (k H_c − Σ_{i=1}^{k} (k − i)(T_t − E_2^i − C)/(E_0 N_r)) T_ele N_r    (19)
Each pathfinding is accompanied by the cluster head election overhead, so the total cluster head election overhead is k·C. The total overhead within the cluster is:

E_c0 = 2 (k H_c − Σ_{i=1}^{k} (k − i)(T_t − E_2^i − C)/(E_0 N_r)) T_ele N_r + E_0 H_c N_r + k C    (20)

If the number of forwarding hops among cluster heads is H_int, the inter-cluster overhead is E_0 H_int N_r, and the total network overhead of the cluster structure is obtained as:

E_c = 2 (2 T_ele + E_0) H_c N_r + E_0 H_int N_r,  when (2 T_ele + E_0) H_c N_r ≤ T_t

E_c = 4 (k H_c − Σ_{i=1}^{k} (k − i)(T_t − E_2^i − C)/(E_0 N_r)) T_ele N_r + 2 E_0 N_r + 2 k C + E_0 H_int N_r,  when (2 T_ele H_c + E_0) N_r + C ≤ T_t < (T_ele + E_0) H_c N_r    (21)

where k satisfies H_c − Σ_{i=1}^{k} (T_t − E_2^i − C)/(E_0 N_r) < 1.
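For comparison with the planar case, the clustered-network overhead of Eqs. (19)–(21) can be sketched in the same way; C denotes the cluster head election overhead, and all values in the example call are assumed for illustration:

```python
import math

def clustered_overhead(T_ele, E0, H_c, H_int, N_r, T_t, C):
    """Clustered ad hoc network overhead, following Eqs. (19)-(21) as reconstructed above."""
    if (2 * T_ele + E0) * H_c * N_r <= T_t:          # stable topology: first case of Eq. (21)
        return 2 * (2 * T_ele + E0) * H_c * N_r + E0 * H_int * N_r
    if (2 * T_ele * H_c + E0) * N_r + C > T_t:       # constraint cannot be met
        return math.inf
    remaining, routing, k = H_c, 0.0, 0
    while remaining >= 1:                            # k satisfies H_c - sum(...) < 1
        k += 1
        E2_k = 2 * T_ele * remaining * N_r           # k-th intra-cluster pathfinding cost
        routing += E2_k
        remaining -= (T_t - E2_k - C) / (E0 * N_r)   # hops covered before the topology changes
    # Second case of Eq. (21): intra-cluster routing counted twice, plus scheduling,
    # cluster head elections, and the inter-cluster overhead E0 * H_int * N_r.
    return 2 * routing + 2 * E0 * N_r + 2 * k * C + E0 * H_int * N_r

# Example with assumed numbers:
print(clustered_overhead(T_ele=33.7, E0=60.0, H_c=1.5, H_int=3.4, N_r=1.25, T_t=230.0, C=20.0))
```

Sweeping the node count N (and hence H_p, H_c and H_int) in the two sketches gives the kind of planar-versus-clustered overhead comparison with which the simulations in this paper are concerned.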
“1 + 1 > 2”, which indicates that the synergy and cooperation between the subsystems in a system produce greater efficacy than the sum of the functions of the subsystems. Coordination can make multi-party cooperative systems more effective: for multi-party cooperation, coordination can not only enable all parties to obtain more benefits, but also enhance the overall system. The integration and optimization of high-quality online teaching resources in universities aims to achieve in-depth cooperation between different organizations [3]. The purpose is to use Synergetics theory in knowledge innovation to obtain synergistic effects, realize the integration and optimization of high-quality resources, and achieve knowledge innovation. However, if real and meaningful coordination cannot be achieved in the integration and optimization of high-quality teaching resources in universities, it will not only be unable to create more results and benefits, but will bring about the opposite effect: the effect of “1 + 1 > 2” cannot be achieved, and the real effect of synergy cannot be realized. Therefore, if we want to give full play to the positive role of synergy, we must use appropriate methods and means to achieve the goal of integration and optimization of high-quality teaching resources in universities [4]. With the integration and optimization of high-quality teaching resources in universities, the relationship between the direct subjects that occupy resources, such as universities and research institutes, will eventually transform into a process of self-organization. The integration and optimization of high-quality teaching resources in universities relies
on mutual cooperation and coordination between the internal subjects of the system to share resources and achieve key common innovations, rather than on interference from external conditions. In the integration and optimization of high-quality teaching resources in universities, the whole process, from the establishment of the system to the realization of its goals, should strive to find a balance point of the system. This balance point can take into account the interests of universities, scientific research institutes and other subjects, and it can also improve the efficiency of the integration and optimization of high-quality teaching resources in universities [5]. The integration and optimization system of high-quality teaching resources in universities fully meets the requirements of Synergetics for the research system. It adopts the mode of collaborative innovation because the members cannot complete knowledge innovation independently; only by sharing resources and exchanging materials and information with other members of the system and with the outside of the system can they achieve the goal of integration and optimization of resources [6]. However, in the process of cooperation and sharing among the university members, there are not only benefits but also risks, so a reasonable coordination mechanism is the key guarantee for increasing cooperation output, reducing risks and handling conflicts in the process of cooperation between universities. Once the participating universities locate the online teaching resources that they need, they will reach a willingness to cooperate through the coordination mechanism under the guidance of common goals and interests.
3 Research on the Coordination Mechanism of the Integration and Optimization of High-Quality Online Teaching Resources in Universities

The integration and optimization of high-quality online teaching resources in universities is a systematic project with the characteristics of complexity. The whole process is completed by the participating universities through coordination, negotiation and collaboration. As Malone put forward in 1994, the management of coordination and dependencies beyond organizational boundaries constitutes coordination between different organizations [7]. In the integration and optimization process of high-quality online teaching resources in universities, the coordination and sharing of resources is an important form through which the universities participating in the integration and optimization network of high-quality online teaching resources spontaneously interact and communicate. Since all the universities participating in the integration and optimization network of high-quality online teaching resources have the characteristics of autonomy and active adaptability, when a university discovers a market opportunity but does not have a relatively complete system of high-quality online teaching resources and teaching capabilities, it will use the online teaching resource network to search and position itself in order to find the resources that are lacking in the internal and external integration and optimization network of high-quality online teaching resources in universities [8]. For the integration and optimization network of high-quality online teaching resources in universities, scientific planning and benefit distribution are the guarantee for the healthy development of coordination and cooperation, and practical coordination technology and
effective communication are its support. The coordination mechanism for the integration and optimization network of high-quality online teaching resources in universities is shown in Fig. 1, which mainly includes relationship coordination, resource coordination and benefit coordination.
Fig. 1. The coordination mechanism of the integration and optimization of high-quality online teaching resources in universities
(1) Relationship Coordination. Within the integration and optimization network of high-quality online teaching resources in universities, the establishment of capital relationships plays a positive role in improving the utility and performance of the integration and optimization network. It also helps to expand the relationship network of the integration and optimization subjects and provides certain relationship resources that make future business development easier. Based on a standardized integration and optimization order, the integration and optimization relationships are coordinated and the degree of trust is improved through the coordination mechanism. A positive impression of cooperation then forms among the integration and optimization subjects, which in turn strengthens the possibility of further cooperation.
(2) Resource Coordination. A large number of relatively dispersed high-quality online teaching resources exist within the integration and optimization network of high-quality online teaching resources in universities. By building an organization and coordination mechanism, this part of the resources can be better integrated and optimized, which coordinates services and sharing, improves the resource utilization rate, and maximizes the benefit of the resources. This not only reduces the pressure on the government and other administrative departments, but also greatly improves the utilization efficiency of idle resources that were not being used effectively, thereby improving the overall resource allocation efficiency of the teaching resource network.
(3) Benefit Coordination. The main purpose of the universities participating in the integration and optimization of high-quality online teaching resources is to pursue the benefits of integration and optimization. Through the formulation of a series of management rules and regulations, the coordination mechanism clearly defines the rights and responsibilities of the subjects and guarantees the maximization of benefits. With
a reasonable collaborative plan and results, these benefits are fed back to the universities participating in the integration and optimization of high-quality online teaching resources, which provides the greatest incentive for the integration and optimization subjects.
4 Modeling on the Coordination Process of the Integration and Optimization of High-Quality Online Teaching Resources in Universities

4.1 Modeling on the Coordination Network of the Integration and Optimization of High-Quality Online Teaching Resources in Universities

We use S to denote the integration and optimization network of high-quality online teaching resources in universities; the coordination mechanism of the network is composed of the interaction of heterogeneous or homogeneous subsystems {S_1, S_2, ..., S_m} at all levels. According to the basic principles of Synergetics, there are exchanges of material, energy and information between the systems in the integration and optimization network of high-quality online teaching resources in universities and the external environment, as well as mutual cooperation. The evolution of the system has the characteristic of instability, so we can find the order parameter of the system through the principle of dominance and then grasp the evolution law of the system. To facilitate the discussion, we only consider the first-level subsystems S_n (n = 1, 2, ..., m) of the integration and optimization network of high-quality online teaching resources in universities. For example, the first-level subsystem of key colleges and universities is represented by S_1, and the first-level subsystem of ordinary universities is represented by S_2. Therefore, the coordination model of the system can be further expressed as formula (1):

S = H(S_1, S_2, ..., S_m)    (1)
In formula (1), H refers to the coordination factor. The main coordination aim of the integration and optimization network of high-quality online teaching resources in universities is to find an effective coordination m ∈ M based on the functional structure and other characteristics of the network. According to certain evaluation criteria, the coordination should make the high-quality online teaching resources V(S) in the integration and optimization network S, or the overall effectiveness of the network, greater than the sum of the resources or effectiveness of the individual subsystems under the action of the various subsystems, so that S takes the maximum value, as shown in formula (2):

max_{m∈M} [ V_m(S) − Σ_{n=1}^{m} V_f(S_n) ],  s.t. q ∈ Q_0, m(q) ∈ Q_1    (2)

In formula (2), q refers to the order parameter or state parameter of the integration and optimization network of high-quality online teaching resources in universities, Q_0 refers to the initial state conditions of the network, and Q_1 refers to the evolutionary constraints
of the network. The coordination mechanism of the integration and optimization network S of high-quality online teaching resources in universities that satisfies formula (2) is the coordination m, and the collection of such coordinations m constitutes the coordination mechanism of the integration and optimization network of high-quality online teaching resources in universities, which is expressed as M.

4.2 Order Parameter Analysis of the Integration and Optimization of High-Quality Online Teaching Resources in Universities

Due to the intricate interactions between the main bodies of the participating universities and the environment in the integration and optimization network of high-quality online teaching resources in universities, the evolution process of the network is complex and there are fluctuations in the network. When a certain rise or fall occurs in the integration and optimization network of high-quality online teaching resources in universities, the original structure of the network will be lost and a new structural state will be produced. This process explains the reason for the fluctuations using the order parameter of the network and characterizes the orderly nature and degree of the system after the phase change, as shown in Fig. 2.
Fig. 2. Instability of the integration and optimization of high-quality online teaching resources in universities
The integration and optimization network of high-quality online teaching resources in universities is a large, complex and non-linear system, and its evolution has many possible directions and outcomes, which are not fixed. For a period of time, a single participating university subsystem will obtain a relatively stable temporal, spatial or functional structure under specific external interventions, such as social study needs, government plans, market guidance, etc. However, the teaching resources controlled by the subsystems will also change with changes in these forces and external interventions; the subsystems will then integrate existing resources under a coordination mechanism, give full play to their respective advantages and learn strengths from each other, so as to maximize the utilization of teaching resources.
4.3 Research on the Coordination Process of the Integration and Optimization of High-Quality Online Teaching Resources in Universities

(1) Analysis on the Forces of the Integration and Optimization of High-Quality Online Teaching Resources in Universities

➀ Friction Between Subsystems
Due to differences in school scale, resource distribution, and academic level among participating universities, there will inevitably be frictions. The direction of friction between subsystems is always opposite to the direction of the integration and optimization network of high-quality online teaching resources in universities. Factors such as the scale integration degree between universities, the complementary degree of resource distribution, and the academic integration degree will determine the magnitude of frictional resistance, and the friction size is inversely proportional to the development direction of the integration and optimization network of high-quality online teaching resources in universities, which can be expressed as formula (3):

f = −xv = −x (dv/dt)    (3)
In formula (3), f refers to the friction between subsystems, xv refers to the change of teaching resources in the integration and optimization network of high-quality online teaching resources in universities, and the negative sign means that the friction hinders the development process of the integration and optimization network of high-quality online teaching resources in universities.

➁ Coordination Between Subsystems
Under the action of friction, the number of high-quality online teaching resources participating in the integration and optimization will gradually decrease, which is not conducive to the integration and optimization network and may even cause it to disintegrate. Therefore, in order to reduce the role and influence of friction, the integration and optimization network of high-quality online teaching resources in universities produces coordination among the participating universities, such as rationally distributing resources, merging with each other, and gathering talents, thereby gradually increasing the quality of the network. The coordination of the integration and optimization network of high-quality online teaching resources in universities is opposite to the friction. The more consistent the interests of the integration and optimization network of high-quality online teaching resources, the greater the coordination of the network:

F = λ ω_1 ω_2    (4)
In formula (4), F refers to the coordination among subsystems, ω_1 refers to the high-quality online teaching resources, ω_2 refers to the coordination benefits, and λ refers to the coefficient of positive coordination.

(2) Coordination Equation Analysis
Under certain conditions, the non-linear interactions can enable the subsystems of the integration and optimization network of high-quality online teaching resources in universities to produce coherent effects and synergistic effects, which results in a new system structure and relatively orderly functions. The general operation equation
for the integration and optimization network of high-quality online teaching resources in universities, that is, the integration and optimization network activities of high-quality online teaching resources in universities, can be expressed as formula (5):

∂v/∂t = (F − f) v − k v^3 + a    (5)
In formula (5), −kv^3 refers to the non-linear characteristics of the integration and optimization of high-quality online teaching resources in universities, the constant coefficient of v^3 is k, and the direct integration and optimization capability of the network is expressed by a, which is a first-order constant. According to formula (5), the potential function equation for the integration and optimization network of high-quality online teaching resources in universities is shown as formula (6):

E(v) = −(1/2)(F − f) v^2 + (1/4) k v^4    (6)
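To illustrate how formulas (3)–(6) interact, the following Python sketch integrates the operation equation with a simple explicit Euler step; every parameter value is assumed for illustration only and is not taken from this paper:

```python
# Assumed, illustrative parameters for the coordination dynamics.
lam, w1, w2 = 0.8, 1.2, 1.0     # coefficient of positive coordination and its two inputs
x, k, a = 0.3, 0.05, 0.1        # friction coefficient, non-linear coefficient, direct capability

F = lam * w1 * w2               # coordination among subsystems, formula (4)

v, dt = 0.1, 0.01               # initial amount of high-quality resources and time step
for _ in range(5000):           # explicit Euler integration of formula (5)
    f = -x * v                  # friction between subsystems, formula (3)
    v += dt * ((F - f) * v - k * v**3 + a)

E = -0.5 * (F - f) * v**2 + 0.25 * k * v**4   # potential function, formula (6)
print(f"steady-state v = {v:.3f}, potential E(v) = {E:.3f}")
```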
(3) Coordination Results Analysis
It can be seen that the good operation of the integration and optimization network of high-quality online teaching resources in universities is inseparable from coordination, and the relationship among relationship coordination, resource coordination and benefit coordination must be handled well. In order to ensure the good operation of the network, we should increase the coordination F as much as possible, for example by rationally distributing resources, merging with each other and gathering talents, and reduce the friction f, which is determined by factors such as the scale integration degree between universities, the complementary degree of resource distribution and the academic integration degree. We should also strengthen control over the order parameters of the network, namely the integration and optimization culture and the integration and optimization values, in order to promote the evolution of the network from disorder to order. While controlling the total amount, we have to plan to increase the quantity of high-quality online teaching resources v and, at the same time, continue to improve the direct integration and optimization capability a of the network.
5 Conclusions
Using the relevant theories and methods of Synergetics, this paper analyzed the reasons for coordination and the coordination mechanism of the integration and optimization of high-quality online teaching resources in universities from the perspective of systems engineering, and conducted a modeling study of the coordination process of the integration and optimization of high-quality online teaching resources in universities. At the same time, the integration and optimization of high-quality online teaching resources in universities is a long-term and complex project, and there are still many problems that need to be further studied and deepened in future research work.
References 1. Dong, X.Y.: Integration and application of higher education information teaching resources under the background of epidemic prevention and control. Shaanxi Educ. (High. Educ.) 04, 25–26 (2021) 2. Huang, Y.T.: Research on the sharing of educational resources in American colleges and universities. Mod. Educ. Sci. (High. Educ. Res.) 06, 132–136 (2012) 3. Ding, S.L.: The Main principles and implementation of online teaching in schools during the epidemic prevention and control period. Exp. Teach. Instrum. 03, 3–7 (2020) 4. Huang, Y.F.: The significance and tactics of high-quality teaching resources sharing in universities. Econ. Perspect. 12, 57–58 (2011) 5. Shen, J.Q., Zhao, J.W.: Some thoughts on the sharing of teaching resources in universities. Lab. Res. Explor. 01, 185–187 (2012) 6. Huang, X.E.: Countermeasures to improve the effectiveness of online teaching in the context of the prevention and control of the new crown pneumonia Epidemic. Guangxi Educ. 23, 10–11+18 (2020) 7. Wang, J.L.: Research on the Issues of University Strategic Alliance. Southwest University (2013) 8. Chen, F.: Strategic analysis of methods and approaches to integrate and share teaching resources. Softw. Guide Educ. Technol. 11, 79–80 (2011)
Research on Evaluation Technology of Electric Power Company Strategy Implementation
Fangcheng Tang1, Ruijian Liu1(B), and Yang Yang2
1 School of Economics and Management, Beijing University of Chemical Technology, Beijing, China
[email protected]
2 Energy Development Research Department, State Grid Hebei Economy Research Institute, Shijiazhuang, China
Abstract. With rapid economic development and the acceleration of globalization, the strategy of energy-based enterprises in the new era is not static; instead, corresponding measures must be taken based on the evaluation of strategy implementation to ensure the final implementation result. This article takes the development strategy of an electric power company as the research object, constructs a corporate strategy implementation evaluation framework, combines corporate strategy management levels, determines the goals and principles of corporate strategy evaluation, and analyzes the key links and evaluation levels of corporate strategy implementation; it then constructs a corporate strategy evaluation model, studies the evaluation index system of corporate strategy execution, proposes evaluation methods and evaluation standards, and forms a complete corporate strategy evaluation technology.

Keywords: Electric power company · Strategy execution · Execution evaluation · Control strategy
1 Introduction
With the rapid changes in the competitive environment and the development of emerging technologies, it is increasingly difficult for companies to adopt a planned and stable strategy, and they need to keep changing in an uncertain state. Moving from planning to execution is the core content of strategic management, and strategy execution evaluation is the key link for checking whether the execution is in place [1]. The implementation of the corporate strategy must therefore be evaluated: if the performance of the company is lower than expected during the implementation of the strategy, corrective measures must be taken to make strategic adjustments; if new situations arise outside the company, corresponding measures must be taken to make adjustments. Therefore, the evaluation of strategy execution is necessary for all types of enterprises. Research on strategy execution abroad is relatively mature and widely applied. In particular, the modern balanced scorecard theory proposed by Robert Kaplan and David Norton combines three elements: the balanced scorecard, strategic maps, and
strategic management and organization. The integrated system strengthens the key links of strategic planning and strategy execution evaluation, so as to promote the improvement of the intermediate link of strategy execution. Some domestic scholars believe that a scientific performance management system is the guarantee for the effective execution of strategies [2, 3]. In addition to research on performance management, Yu [4] believes that an effective and complete strategy execution system should be established, and that an effective strategy evaluation and tracking mechanism should be established from both the review of the strategic foundation and the measurement of corporate performance. Evaluation research on strategy execution is necessary for all types of enterprises. Combining the competitive basis of the industry with the driving factors of Nespresso's strategy formulation, scholars used the theoretical model of the balanced scorecard to summarize and evaluate the strategy-driven execution of Nespresso, an Australian coffee brand, from four dimensions: customer, finance, internal process, and learning and growth [5]. Based on research into the balanced scorecard, strategic map, and strategic management and organization theoretical systems, and combined with the characteristics of a survey and design enterprise, Yi [1] constructed a strategy implementation evaluation system designed from four levels: strategic execution performance, process, effectiveness, and management and organization. After the 18th National Congress of the Communist Party of China, in the face of new changes in energy supply and demand patterns and new trends in international energy development, General Secretary Xi Jinping proposed a new energy security strategy of "four revolutions, one cooperation" from the overall perspective of ensuring national energy security. Continuing to deepen research on strategic issues, improving the "four beams and eight pillars", and building the evaluation technology of corporate strategy implementation around the key links of strategy execution of electric power companies in the new era are of great significance for analyzing the implementation of corporate strategy. The significance of carrying out research on the company's strategy implementation evaluation and risk control technology is as follows. First, the research enriches the quantitative analysis tools in the field of company strategy, provides a theoretical basis for the company's strategic risk management, and enhances the company's quantitative decision-making in strategy. Secondly, the research on the connotation of the company's strategic goals and the construction of the index system will provide more powerful support for the company to further promote the strategy, and provide clearer target guidance for the company's units to deeply understand the strategic connotation and implement the strategic requirements. Finally, the research will improve the professional and systematic level of the company's strategic management, which can be applied to all aspects of company strategy formulation, strategy execution, strategy evaluation and strategy optimization, promote the formation of closed-loop strategic management in the company, and provide support for the company's development.
Taking an electric power company (A company) as an example, this paper selects strategy execution evaluation dimensions suitable for electric power companies for the key aspects of strategy execution, and constructs a corporate strategy execution evaluation model to form a complete evaluation system for the implementation of corporate strategies.
2 The Status of Corporate Strategy Implementation

2.1 Advantages of Corporate Strategy Execution Management

• Actively reach strategic consensus internally and externally. The enterprise unifies its thoughts and actions internally, so that the majority of employees understand the strategic goals of enterprise development and their creativity and enthusiasm are stimulated; externally, it demonstrates the responsibility of the enterprise, establishes a good corporate image, reaches an external consensus, and creates a harmonious and favorable development environment.
• A good degree of strategic coordination. The enterprise organization structure is reasonable. In view of the new strategic goals, A company is involved in enterprise planning, plan formulation, resource allocation, budget allocation, assessment and other aspects; each unit performs its duties, strengthens horizontal coordination, decomposes tasks level by level, implements them at each level, and jointly promotes the realization of the strategic goals.
• An open and transparent corporate culture. A company clearly specifies the corporate purpose, core values and spirit of the enterprise, so that the public can truly understand the culture of A company.
• Attention to strategic management and control. The construction and application of information systems have been promoted in an orderly manner, unified management and deployment of resources across the company have been realized, the level of power services has been improved, and the effective implementation of the new strategic goals has been promoted, ensuring that the enterprise always strides forward under the guidance of a scientific strategy.
• Accelerated implementation of the strategy. In order to meet the needs of energy internet construction, A company actively promotes the implementation of the strategy and strives to build infrastructure and services that support enterprise digital banking, smart grid upgrades, and ecological integration innovation.

2.2 Insufficiency of Corporate Strategy Execution Management

A company's implementation management system has been continuously improved, and the reporting of strategic dynamic information, the analysis of strategic operations, and exchanges and discussions on major strategic issues have been carried out in an orderly manner. At the same time, however, the enterprise still has certain deficiencies in the understanding and implementation of the new strategy, which are mainly manifested in the following aspects.
• For the strategic goals, the enterprise has not yet formed a systematic understanding, and the whole enterprise has not yet unified its understanding of the strategic goals.
• The enterprise pays more attention to strategy formulation and performance, and lacks monitoring, tracking, feedback and research on strategy implementation.
• In the process of advancing the implementation of the new strategy, the enterprise will face a more complex and changeable external environment, which puts forward higher requirements for the continuous adaptation of the corporate strategy to the new situation.
At present, corporate strategy is mainly based on qualitative analysis, but with the increasing complexity of the internal and external environment, the requirements for strategic management decision-making and supporting technology have increased.
3 Construction of an Evaluation Model for Corporate Strategy Execution

The strategy execution process generally includes strategy formulation, strategy clarification, strategy communication, target decomposition, plan formulation, resource allocation, strategy implementation, performance incentives, evaluation feedback and other links. This article focuses on the evaluation of A company's strategy implementation. The key aspects of strategy implementation evaluation include the determination of the strategy implementation elements, the construction of the index system, the selection and use of research methods, and the construction of structural models. From the key links of strategy execution, we see that strategy execution is a process of achieving strategic goals step by step on the basis of the existing strategy, combined with strategic guarantee mechanisms acting together. When the internal environment, the external environment or other factors change, the strategy execution plan should also respond to such changes, and corresponding plans should be formulated, so that the process of strategy execution can effectively organize corporate resources and ultimately achieve the strategic goals. This is a dynamic process.

3.1 Elements of Strategy Execution
By combing through research results at home and abroad, we found that different scholars have different understandings of the elements of strategy implementation, but they can be summarized into the following categories. (1) Strategic dimension. For example, the three major elements proposed by Pearce and Robinson [6] for strategy execution (strategy refinement, strategic adjustment, and strategic control) are all related to strategy; Veth [7] proposed a strategy execution model including strategic processes; strategic consensus, strategic coordination, and strategic control are also related to strategy. (2) Organizational dimension. This dimension includes organizational culture, system, structure, and human resources. Scholars also take organizational structure, corporate culture, human resources, and systems as key variables for strategy execution, and similar variables are classified into the organizational dimension. (3) System and process dimensions. The feedback system included in the strategy execution components analyzed by Kaplan and Norton [8] belongs to this dimension. Based on the status quo of corporate strategy implementation and the external environment, this study analyzes the strategy implementation management process to sort out the influencing factors of strategy implementation in advance, and selects strategy implementation evaluation dimensions suitable for A company in accordance with each link of the strategy implementation flowchart. They are mainly classified into six factors: strategic evaluation, strategic consensus, strategic coordination, execution culture, strategic performance, and strategic control.
3.1.1 Strategic Evaluation
In this study, the strategic evaluation field is defined as the assessment of the strategy itself and of strategy decomposition. First of all, the strategy itself is the premise of strategy implementation. The quality and category of the strategy are prerequisites of strategy execution and also directly affect the effect of strategy execution; the clarity of the strategy and the participation in its formulation are important factors affecting the implementation of the strategy. Secondly, strategic decomposition is the "link between the preceding and the following" in the process of strategy execution. According to the operational characteristics of the organization and the scope of power and responsibility of each department, top management decomposes the business strategy into short-term department goals and tasks and manages them through the execution of department plans and business processes. If the department objectives and tasks after strategic decomposition can effectively guide the relevant responsible departments to complete their work and at the same time match the overall strategy, then all business activities of the enterprise will be carried out methodically around the strategic objectives. Therefore, whether the business strategy is decomposed and the quality of the decomposition are key contents of evaluating the implementation of the strategy.

3.1.2 Strategic Consensus
In this study, strategic consensus is defined as a four-faceted assessment covering senior managers, middle managers, grass-roots managers and external institutions. Strategic consensus is an important factor affecting the strategy execution and performance of an enterprise. It means that employees at all levels reach consensus on their understanding and recognition of, and willingness to implement, the enterprise's strategy, and that all managers of the enterprise clearly know the strategic objectives of the enterprise, so as to ensure the smooth progress of the subsequent strategy implementation process.

3.1.3 Strategic Synergy
In this study, strategic synergy is defined as the assessment of operational synergy, organizational synergy and communication. Strategic synergy is the key to strategy implementation; it reflects the coordination between business activities, organizational forms and strategies that the enterprise realizes through target decomposition, planning, resource allocation and strategic actions.

3.1.4 Execution Culture
This research defines execution culture as an evaluation of three aspects: justice culture, passion culture, and ideal culture. Execution culture is a comprehensive reflection of the awareness of execution, execution style, execution ability, execution speed, and execution quality advocated by business operators, recognized by all employees, and embodied in the implementation of business strategy thinking, target plans, and systems and regulations. It is the key to the construction of corporate culture. Only by forming a strong execution culture can all employees work for the strategy consciously and voluntarily, thereby improving the efficiency and effectiveness of strategy execution.
3.1.5 Strategic Performance
This study defines strategic performance as an assessment from four aspects: finance, customer, internal process, and learning and growth. Strategy-oriented performance evaluation is based on the enterprise strategy, which runs through the whole process of setting performance indicators, implementing performance evaluation and applying the results of performance evaluation. This study conducts a comprehensive, integrated performance evaluation of the enterprise from the four causally related dimensions of finance, customer, internal process, and learning and growth: the financial dimension is the ultimate goal, the customer dimension is the key, the internal process dimension is the foundation, and the learning and growth dimension is the core; the four dimensions depend on, support and balance each other, and constitute a causally linked value chain.

3.1.6 Strategic Control
In this study, strategic control is defined as information control and behavior control. Strategic control is an important link in the process of strategy execution. In the implementation of the business strategy, in order to check the progress of the various activities carried out by the enterprise to achieve its targets, the actual performance is compared with the established targets, and deviations are corrected in time to ensure the smooth implementation of the strategy.

3.2 Evaluation Index System of Enterprise Strategy Execution
Based on the above section, we obtain the evaluation index system of enterprise strategy execution, as shown in Table 1.

3.3 Strategy Implementation Evaluation Research Method
This article uses the analytic hierarchy process (AHP) to evaluate the implementation of corporate strategies. AHP is a systematic and hierarchical analysis method that combines qualitative and quantitative analysis. It includes five steps: establishing the hierarchical structure model, constructing the judgment matrix, calculating the weight vector, performing the consistency check, and analyzing the data.

3.3.1 Build a Hierarchical Model
First, establish a hierarchical structure model. On the basis of an in-depth analysis of the actual problem, the relevant factors are decomposed into several levels from top to bottom according to different attributes. The hierarchical structure is generally divided into the highest level, the middle level and the lowest level. For the strategy execution evaluation index weights in this article, the highest level corresponds to the strategic goals of A company; the middle level is the theoretical framework, which corresponds to the six factors of strategic evaluation, strategic consensus, strategic coordination, execution culture, strategic performance, and strategic control; the bottom layer consists of the specific indicators for the evaluation of corporate strategy implementation under each factor, as shown in Table 1.
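The judgment-matrix construction, weight calculation and consistency check described in Sects. 3.3.2–3.3.4 can be sketched compactly in Python. The 6 × 6 judgment matrix below is hypothetical (it is not the expert scoring result of this study), and the random index RI for n = 6 is the standard AHP table value:

```python
import numpy as np

# Hypothetical 6x6 judgment matrix for the six factors (strategic evaluation, consensus,
# synergy, execution culture, performance, control); values are illustrative, a_ij = 1/a_ji.
A = np.array([
    [1,   2,   3,   2,   1/2, 2  ],
    [1/2, 1,   2,   1,   1/3, 1  ],
    [1/3, 1/2, 1,   1,   1/4, 1/2],
    [1/2, 1,   1,   1,   1/3, 1  ],
    [2,   3,   4,   3,   1,   3  ],
    [1/2, 1,   2,   1,   1/3, 1  ],
])
n = A.shape[0]

# Square root (geometric mean) method: n-th root of each row product, then normalize.
w = np.prod(A, axis=1) ** (1 / n)
w = w / w.sum()                        # weight vector W

# Consistency check: lambda_max, CI = (lambda_max - n) / (n - 1), CR = CI / RI.
lambda_max = np.mean((A @ w) / w)
CI = (lambda_max - n) / (n - 1)
RI = 1.24                              # standard random index for n = 6
CR = CI / RI
print(np.round(w, 3), round(CI, 4), round(CR, 4), "acceptable" if CR < 0.1 else "revise the matrix")
```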
Table 1. Evaluation index system of enterprise strategy execution
Factors / Sub-factors / Index

Strategic assessment
- Strategy itself: A1. The clarity of the strategy itself; A2. Procedural impartiality or participation in the strategy development process; A3. Strategic decision itself and actual environment analysis; A4. Strategic flexibility
- Strategic decomposition: A5. The strategic tasks of the department after the decomposition of the company's strategic objectives; A6. The department has made a work plan for the tasks after strategic decomposition; A7. Whether a corresponding task flow exists for a decomposed department objective; A8. Degree of alignment between business processes and strategic objectives; A9. Decomposition of strategic objectives and combination of organizational framework and rights and responsibilities; A10. The members of the department have their own strategic goals

Strategic consensus
- Senior management: R1. Whether senior management participates in strategic decision-making discussions; R2. Whether senior management agrees on the outcome of strategic decisions; R3. Whether top management agrees on the strategic execution plan
- Middle management: R4. Whether middle management participates in joint discussions to make strategic decisions; R5. Middle managers agree on the outcome of the strategic decision; R6. Middle management agrees on the strategic execution plan
- Grass-roots management: R7. Grass-roots management understands the company's strategic decisions; R8. Grass-roots management agrees on the results of the major decisions; R9. Whether the company held strategic promotion meetings
- External consensus: R10. External stakeholders endorse corporate decisions

Strategic synergy
- Operating synergy: S1. Matching degree of daily activities and strategy; S2. Matching degree of key links and strategic objectives; S3. Whether all links of operation activities are coordinated; S4. Matching degree of resource allocation and strategic objectives
- Organization synergy: S5. The degree of cohesion between different departments; S6. The functional design of key posts is beneficial to the implementation of the strategy; S7. Responsibilities of the department are clear and reasonable
- Communication: S8. Smooth internal information communication; S9. Smooth external communication

Executive culture
- Justice culture: C1. Listen to staff; C2. Transparency in system operation; C3. Don't pass the buck
- Passion culture: C4. Strong team spirit; C5. Strong work enthusiasm
- Idea culture: C6. Satisfaction rate of employees in party style and clean government construction; C7. Poverty alleviation and public welfare investment; C8. Power protection, disaster prevention and other social contributions

Strategic performance
- Financial: P1. Total asset tax rate; P2. Clean Alternative Market Development Index; P3. Electricity Alternative Market Development Index; P4. Renewable energy utilization; P5. Return on equity; P6. Asset liability rate; P7. Growth rate of main business income; P8. Increased revenue from unit grid investment; P9. Main business income growth rate; P10. Increased revenue from unit grid investment
- Client: P11. Average user outage time; P12. Comprehensive voltage qualification rate; P13. Customer demand response time; P14. Customer satisfaction (quality service evaluation index)
- Internal operation: P15. 220 ~ 750 kV grid capacity ratio; P16. 220 ~ 750 kV maximum load rate of equipment; P17. Clean energy consumption rate; P18. Cybersecurity index; P19. Information system operation rate; P20. Power asset code coverage; P21. 110(66) ~ 10 kV equipment N-1 pass rate; P22. Line loss rate
- Learning and growth: P23. Talent equivalence density; P24. R&D investment intensity; P25. Employee job satisfaction; P26. Staff training rate; P27. Key core employee retention rate

Strategic control
- Information control: M1. Companies can effectively track the effect of strategy execution; M2. Companies can effectively track strategic deviations; M3. Companies can effectively track remedial measures
- Behavior control: M4. Employee performance is linked to strategy execution performance; M5. There is a special supervision department in the process of strategy implementation
3.3.2 Construct a Judgment Matrix
The judgment matrix is the information basis and decision-making basis of the analytic hierarchy process, and constructing it is the most critical step of the method. First, the relative importance of each pair of indicators is determined and expressed numerically, and these values form the judgment matrix. This article uses expert scoring to rate the six dimensions and the specific indicators within each dimension according to their degree of importance, on a scale of 1 to 9. The relative importance of each pair of indicators is then judged for the six factors from strategic assessment to strategic control, the survey statistics are filled into the corresponding tables, and the judgment matrix for strategy execution plus the judgment matrices of the six dimensions are formed, seven matrices in total. The characteristic root of each matrix is also calculated for later use in the consistency check. Taking the judgment matrix for strategic execution evaluation as an example, a judgment matrix is formed from the expert scores and the relative importance scale, where a_ij = 1/a_ji.

3.3.3 Calculate the Weight Vector
Let the weight vector be W, and use the square root (row geometric mean) method to obtain it: calculate the product of the elements in each row of the judgment matrix, take the n-th root, and then normalize to obtain the weight vector of each judgment matrix.

3.3.4 Consistency Check
To avoid logical errors in the above calculations, a consistency check is necessary. In the analytic hierarchy process, the consistency index CI is usually used: the smaller the CI value, the better the consistency; the larger the CI, the worse the consistency. To eliminate the influence of random factors, the consistency ratio CR is also calculated. Generally, when CR < 0.1, the consistency of the matrix meets the requirement and the calculation result is valid. The indices are calculated as follows:

CI = \frac{\lambda_{\max} - n}{n - 1}, \qquad CR = \frac{CI}{RI} \qquad (1)
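The weight-vector and consistency calculations in Sects. 3.3.3 and 3.3.4 can be illustrated with a short script. The following is a minimal sketch, assuming NumPy and a small illustrative 3 x 3 judgment matrix rather than the paper's expert-scored matrices; the RI values listed are the standard Saaty values for small orders only (as noted below, RI for larger orders, up to n = 19 here, is taken from the standard table).

```python
import numpy as np

def ahp_weights_and_consistency(A, ri_table=None):
    """Square-root (row geometric mean) weights plus the CI/CR consistency check of Eq. (1)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)     # n-th root of each row product
    w = w / w.sum()                          # normalize to obtain the weight vector
    lam_max = float(np.mean((A @ w) / w))    # approximation of the largest characteristic root
    ci = (lam_max - n) / (n - 1)
    ri_table = ri_table or {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}
    ri = ri_table.get(n)
    cr = ci / ri if ri else None             # CR < 0.1 means acceptable consistency
    return w, lam_max, ci, cr

# Illustrative judgment matrix with a_ij = 1/a_ji (not data from the paper).
judgment = [[1, 3, 5],
            [1/3, 1, 3],
            [1/5, 1/3, 1]]
weights, lam_max, ci, cr = ahp_weights_and_consistency(judgment)
print(weights, round(ci, 4), round(cr, 4))
```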
λmax is the largest characteristic root of the judgment matrix, and RI is the random consistency index, obtained by looking up the standard RI table according to the order of the matrix. In this paper, the largest matrix is the 19th-order judgment matrix corresponding to the internal process dimension of strategic performance, so RI values are needed up to n = 19.

3.3.5 Data Analysis
From the calculation, the index weight table is obtained and the results are analyzed.

3.4 Strategy Execution Evaluation Model
3.4.1 Enterprise Strategy Execution Evaluation Structure Model
Based on the strategy execution evaluation system constructed above and the weight analysis, A company's strategy execution evaluation structure model is obtained, as shown in Fig. 1.
Fig. 1. Enterprise strategy execution evaluation structure model
3.4.2 Digital Model of Corporate Strategy Execution Evaluation
According to the design of the strategy execution evaluation model and the weights calculated for each evaluation factor and evaluation index, the mathematical model of strategy execution evaluation is obtained, and the evaluation result Y of A company's strategy execution is determined:

Y = \sum_{i=1}^{10} (A_i \times Ak_i) + \sum_{i=1}^{10} (R_i \times Rk_i) + \sum_{i=1}^{9} (S_i \times Sk_i) + \sum_{i=1}^{8} (C_i \times Ck_i) + \sum_{i=1}^{27} (P_i \times Pk_i) + \sum_{i=1}^{5} (M_i \times Mk_i) \qquad (2)
Among them:
Y: comprehensive evaluation result of strategy implementation.
Ai (assessment): evaluation value of the i-th indicator of the strategic assessment factor; Aki: weight of the i-th indicator of the strategic assessment factor.
Ri (recognition): evaluation value of the i-th indicator of the strategic consensus factor; Rki: weight of the i-th indicator of the strategic consensus factor.
Si (synergy): evaluation value of the i-th indicator of the strategic synergy factor; Ski: weight of the i-th indicator of the strategic synergy factor.
Ci (culture): evaluation value of the i-th indicator of the executive culture factor; Cki: weight of the i-th indicator of the executive culture factor.
Pi (performance): evaluation value of the i-th indicator of the strategic performance factor; Pki: weight of the i-th indicator of the strategic performance factor.
Mi (monitor): evaluation value of the i-th indicator of the strategic control factor; Mki: weight of the i-th indicator of the strategic control factor.
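As a small worked illustration of Eq. (2), the composite score is simply the weighted sum of all indicator scores across the six factors. The numbers below are placeholders, not the paper's survey data or AHP weights.

```python
# Indicator scores per factor (A, R, S, C, P, M) and their weights; all values are made up.
scores = {"A": [4] * 10, "R": [3] * 10, "S": [4] * 9, "C": [5] * 8, "P": [3] * 27, "M": [4] * 5}
weights = {k: [1.0 / len(v)] * len(v) for k, v in scores.items()}  # placeholder equal weights

Y = sum(s * w for k in scores for s, w in zip(scores[k], weights[k]))
print(Y)  # comprehensive evaluation result of strategy implementation
```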
4 Conclusion
This paper reviews the theory of corporate strategy execution evaluation, analyzes the current status of corporate strategy evaluation, summarizes the key links of corporate strategy execution, and builds an effective corporate strategy execution evaluation model. The model can be applied throughout strategy formulation, strategy execution, strategy evaluation and strategy optimization, promoting closed-loop strategic management of the enterprise, providing decision-makers with the best choice of plan, and supporting the development of the enterprise.
Acknowledgment. This research was financially supported by Science and Technology Project of SGCC “Research on Construction of Strategic Goal System and Key Technology of Strategy Implementation for SGCC in the New Era”.
References
1. Yi, M.: Research on evaluation of strategic implementation of survey and design enterprises. Technol. Entrepreneurship Monthly 032(4), 157–159 (2019)
2. Qu, W.Z.: On the fusion construction of performance management and execution. Investment Entrepreneurship 7, 128–129 (2020)
3. Fang, X.Y.: Analysis on the implementation of Chinese enterprise strategy. Business 2, 154 (2013)
4. Yu, L.W.: Thinking on the effective operation of enterprise strategy execution system. J. Zhengzhou Inst. Aeronaut. Ind. Manage. 23(3), 52–55 (2005)
5. Wu, Z.H., Wu, Y.F.: The application of balanced scorecard in the evaluation and adjustment of enterprise strategy implementation. Market Res. 9, 35–36 (2017)
6. Pearce, J.A., Robinson, R.B.: Formulation, implementation, and control of competitive strategy. J. Comp. Neurol. 416(1), 101–111 (2015)
7. Veth, G.: Building the platform for strategy execution. DM Rev. 16(4), 31–45 (2006)
8. Kaplan, R.S., Norton, D.P.: Having trouble with your strategy? Then map it. Harvard Bus. Rev. 78(5), 167–176 (2000)
Framework Design of Data Assets Bank for Railway Enterprises Cheng Zhang1(B) and Xiang Xie2 1 School of Economics and Management, Beijing Jiaotong University Weihai Campus, Beijing, China [email protected] 2 School of Economics and Management, Beijing Jiaotong University, Beijing, China [email protected]
Abstract. In China, data has been incorporated into the production factors, and data assets have increasingly become the engine that drives the total value and growth of modern enterprises. Building a data asset bank is therefore important for enhancing the value of railway enterprises and securing their future success. This paper analyzes the value and significance of establishing a data asset bank and summarizes the "MDIPOV" framework, which covers six aspects: data management, data deposit, data integration, data process, data operation and data valuation. In view of the current interest in data valuation, the paper also analyzes five considerations of data value and provides five valuation methods to meet the needs of different types of data. These results can help railway enterprises lay a solid data foundation, accumulate data assets scientifically and realize digital transformation effectively. Keywords: Data asset bank · Data management · Data integration · Data process · Data operation · Data valuation
1 Introduction
In China, data has been incorporated into the production factors, which reflects that the digital economy is an extremely important pillar of national economic development. Effective data is not only an important part of productivity but also the foundation for the development of many emerging industries. The capitalization of data will take a solid step forward, because data can be listed as an intangible asset in accounting catalogues, asset value evaluation, investment transfer, financing and loans. Increasingly, data assets are the engine driving the total value and growth of modern organizations.
The railway industry involves transportation organization, operation and development, safety monitoring, engineering construction and other aspects, and a large amount of data has accumulated. However, many railway enterprises and departments find it difficult to obtain the data they need because there is no data sharing mechanism. The China railway group has carried out a great deal of data management work and issued relevant management measures and documents, such as the Interim Measures for the management of the railway master data management platform, the IT planning of railway, the implementation plan of railway big data application, and the Interim Measures for railway data management, which have played a positive role. However, most of these measures and documents are issued in the form of administrative management, so it is necessary to further study how to establish an effective data exchange/transaction mechanism based on data value.
2 The Value of Data Asset Bank
Data has become a core asset and an important production factor of railway enterprises. In the data-driven age, railway enterprises should manage and operate their core business data in order to optimize products, develop new market channels and build core competitiveness. However, there are many pain points in data governance. For example, data standards are not uniform, data responsibilities are not implemented, data quality is uneven, and data integrity cannot be guaranteed without a unified data management platform; data is scattered across different IT systems and more and more "data isolated islands" are forming; there is a lack of effective connection channels, so data cannot be interconnected; and there is a lack of effective data management mechanisms, data standards between platforms are not unified, and data management processes are unclear.
As assets, the value of data needs to be validated. However, many enterprises understand neither the value of their existing data assets nor the underlying levers that can increase that value. This can mean, in turn, that they miss out on the competitive advantages and shareholder value that their data assets can generate. To capture and harvest the value of data over time, organizations must first seek clarity on how to value data as an asset, then follow through with a comprehensive data strategy to drive value enhancement.
A data bank refers to the collection, calculation, storage and processing of massive amounts of data through data technology, with unified standards and calibers. After the data bank unifies the data, it forms standard data, stores it to form a big data asset layer, and then provides customers with efficient services. These services are strongly related to the business of the enterprise; they are unique and reusable, and they are the precipitation of the enterprise's business and data. They not only reduce the cost of repeated construction and of "chimney" style collaboration, but also embody where the differentiated competitive advantage lies. The data bank includes the data technology for collecting, calculating, storing and processing data within the enterprise, and feeds the processed data back into the business of the enterprise.
The major value of the data asset bank is described below.

2.1 To Break the "Data Isolated Island" and Realize Data Sharing
Based on a big data platform, the data bank gathers structured and unstructured data within the enterprise, including business systems, financial systems and human resources, applies multiple systems for data storage, and attaches labels to each data item.
As data become more and more complex, fragmented and massive, the prominent role of the data bank is to make full use of internal and external data, break the status quo of data silos, and continuously increase the value of data assets. On this basis, it can lower the entry barrier for data services, enrich the data service ecosystem, and realize a closed value loop in which data becomes more valuable the more it is used. These services are strongly related to the business of the enterprise and are unique and reusable; they are the precipitation of business and data, and they not only reduce the cost of repeated construction and chimney-style cooperation but also form a differentiated competitive advantage.

2.2 To Accelerate the Service Generation Process from Data to Value
Different application development project groups in an enterprise may call the same data model and data service but process the same data in different ways, so the results differ and some are wrong; development is slow, and the data results are inaccurate and of low quality. This is the problem that application development and data development faced in the past, and it is the problem the data bank is meant to solve. The data bank turns reusable data models and their data reuse capability into a data capability platform, so that data specialists can focus on the data, turn data into Lego-like building blocks, and provide data services for application development. Different application development project groups can then call the same SaaS data service, which ensures data quality and consistency, accelerates the service generation process from data to value, and supports a more responsive and intelligent business.

2.3 To Provide a Data Base for Business Model Innovation
In a broad sense, the data bank is not only about technology; it is also an important piece of infrastructure for business entering the DT era. It should integrate the company's strategic determination, organizational structure and technical structure, take business value as the guide, and expand business boundaries with technology. Only by relying on data and algorithms can insights extracted from massive data be transformed into actions, which promotes large-scale business innovation. It is remarkable for the data bank to transform insight into action and realize large-scale commercial innovation through algorithms. On the other hand, one of the reasons why data cannot be used by the business is that it cannot be read and understood: IT staff do not understand the business, and business staff do not understand the data, which makes it very difficult to apply data to business. The data bank needs to break the barriers between IT and business: IT staff turn data into readable and easy-to-understand content, and business staff can quickly integrate it into the business after seeing it, which better supports business model innovation. In addition, the data bank provides standard data access capabilities, simplifies integration complexity and promotes interoperability, which is also favored by enterprise CIOs. At the same time, the data bank plays an important role in the rapid construction of service capacity, accelerating business innovation and improving business adaptability.
3 'MDIPOV' Framework Design for Data Asset Bank
Based on the analysis of past experience, this paper summarizes and refines the 'MDIPOV' framework, which can serve as the basic framework for constructing a data asset bank. MDIPOV stands for data Management, data Deposit, data Integration, data Process, data Operation and data Valuation, see Fig. 1.
Fig. 1. The framework design for data asset bank
3.1 Data Management
Data management includes, but is not limited to, data standards, metadata, master data, data models, data distribution, data storage, data exchange, data lifecycle management, data quality, data security and data sharing services. This is the basic management content of the data asset bank, and all of these fields need to be combined organically. Through the management of data standards, the legitimacy and compliance of data can be improved, data quality can be further improved, and data production problems can be reduced. On the basis of metadata management, data lifecycle management can be carried out to effectively control the scale of online data, improve the access efficiency of production data and reduce the waste of system resources. Through metadata and data model management, data resources such as tables and files can be classified by subject, which clarifies the ownership and distribution of the main data sources for parties, products, agreements and other related data, and effectively implements the planning and governance of data distribution.
3.2 Data Deposit
The data in the enterprise is diverse, widely distributed and large in quantity. From the point of view of data sources, it can be divided into structured data and unstructured data. The data collection of the enterprise data bank is based on a web platform and supports user-based, networked and dynamic data discovery, sharing and feedback, in which each individual is a data producer, manager, user and supervisor. Structured and unstructured data are gathered through many approaches, such as online/offline data transmission, automatic sensor collection and third-party data source exchange, and all data are stored in the data bank to reduce the cost of repeated collection. According to the previous inventory of data assets, the data sources in the enterprise mainly include:
654
C. Zhang and X. Xie
• Business data, refers to the data generated in the process of business processing, such as order data, customer data, commodity data and supply chain data. This kind of data is generated by the business information system and has been stored in the existing information system, such as ERP, TMIS, etc. • Online monitoring data, such as the user’s media behavior log data can be obtained by deploying code on the enterprise’s own media; or the monitoring data generated by sensors in real time. This kind of data needs real-time online service to receive and record the corresponding log data. • Third party platform data, data exist on third party platforms, such as WeChat official account, Alipay and other platform data. This kind of data platform often provides API to pull data. There are many ways of data aggregation, it can be divided into file transmission, data extraction, message push and so on. According to the general practice, data can be stored according to R & D, transport, marketing, sales, procurement, quality, human resources, finance and other topics. 3.4 Data Processing Data processing is a derivative calculation task of data fields and indicators, which provides a visual or coding environment for data developers to manage and implement processing rules. It is an important part of data capitalization. The typical tasks of data processing include user tag calculation, value chain connection and collaborative index calculation, object-oriented link, typical e-commerce purchase index calculation, etc. The main contents of data processing include: value chain data processing, objectoriented data processing, index system management, intelligent label management, algorithm model management; • Value chain data processing: take the value chain such as R & D, transport, supply, sales, after-sales service, human resources, finance and so on as the main story line, carry out the whole process data connection and collaboration. • Object-oriented data processing: Based on the actual internal and external market demand of the enterprise, select and determine the specific object, and then process the data in a full cycle. • Indicator system management: Panoramic planning, indicator system, definition of indicators, addition, deletion, modification, etc. • Intelligent label management: multi-dimensional and multi perspective label setting and automatic processing. • Algorithm model management: computing model definition, scheduling and other configuration management. Data processing will form different business domains and enterprise data maps. The data in these domains can be used directly.
Framework Design of Data Assets Bank for Railway Enterprises
655
3.5 Data Asset Operation Data operation is a very important part of data bank, and the goal of operation is to realize data business. From a technical point of view, data operation includes some business-related and reusable public technology components or products, such as data catalog, data label, data analysis, data open interface, machine learning algorithm model, etc. which can use SaaS to provide services directly to the outside world, or which can use smaller granularity, such as API, message interface, file interface, service interface, SDK software package etc. Those mode only provides component capabilities or data services, and the internal or external third-party applications do not need to care about the underlying data preparation. They can directly call the service interface provided by the data service module to facilitate the secondary development and enhance their own capabilities. From the business point of view, we can consider three directions of data asset management: 3.5.1 Data Asset Productization It is an important operation method to integrate the value of data to develop a data product to sell. Enterprises can sell their own data directly, or sell it after “deep processing” on the basis of their own data, or enterprises can buy data from multiple companies to provide better services after integration. For example, the Internet platform provides intelligent recommendation advertising for online marketing customers, precision marketing based on big data, financial big data risk control credit reporting products and enterprise credit reports provided by credit reference companies. 3.5.2 Data Asset Capitalization Based on big data, we can carry out various professional services to obtain the realization income. For example, the Internet e-commerce platform provides financial services to platform merchants through the settled transaction data and scenario data. Another example, logistics supply chain enterprises provide financial services for the cargo owners and upstream and downstream based on the full information chain of logistics, warehousing and supply chain. 3.5.3 Data Asset Enabling The optimization of enterprise process by data service has penetrated into almost every link, such as marketing process, member management process, product management process, human resource optimization and so on. 3.6 Data Asset Valuation Generally speaking, the larger amount of data, the higher the timeliness, and the higher the accuracy of the data, often have higher value. From an economic point of view, the data that can be transformed with low cost and high efficiency, as well as the data that
656
C. Zhang and X. Xie
can bring higher income in the market, have greater value. It can be considered from the following aspects as Table 1. Table 1. Key considerations of data value No.
Dimension
KPI
1
Quality value
• Accuracy • Accessibility • Integrity
• Authenticity • Trustworthiness
2
Business value
• Insubstitutability • Real time
• Large in amount • Multi dimensional
3
Cost
• Collection cost • Governance cost • Storage cost
• Operating cost • Sales cost
4
Economic value
• Sales revenue • Rental income
• Exchange data revenue
5
Market value
• Competitor price • Open market price
• Uniqueness • Alternative price
Data is similar to other intangible assets. While the attributes of any set of data may be unique, traditional valuation approaches—which incorporate growth, profitability and risk elements—can be used, along with a strong understanding of the data’s attributes, to value data. These approaches include: 3.6.1 The Market Approach Today, companies are using advanced analytics to more fully understand their data, and to identify ways to license it to third parties. In addition, within various ecosystems, data exchanges are being developed so market participants can aggregate and trade data assets, and participating companies can exchange data to create even more value for their enterprises. As companies continue to mine their data and develop models to transact in this asset category, these transactions can be used to derive market indications of value. As with other assets, value comparability challenges will exist—but as markets mature and companies identify more ways to transact, we believe data transactions will be commonly used to value data assets. The calculation formula is as follows, Appraisal Value = Transaction Volume of Comparable Data Assets × nk=1 Correction Coefficient
(1)
The transaction volume of comparable data assets refers to the transaction volume of the same or similar data assets under active public trading.
Framework Design of Data Assets Bank for Railway Enterprises
657
• Correction coefficient: used to correct the differences between the underlying data assets and comparable cases 3.6.2 Multi-Period Excess Earnings Method (MPEEM) An income-approach methodology that measures economic benefits by calculating the cash flow attributable to an asset after deducting “contributory asset charges” (CACs), which are appropriate returns for contributory assets used by the business in generating the data asset’s revenue and earnings. The calculation formula is as follows, Return Appraisal Value = nk=1 Excess (1+i)n (2) + Income Tax Amortization Income • Excess return: the excess return of a data asset is the increase in income or decrease in cost due to the holding of the data asset; • Discount rate (I): the necessary rate of return required by data asset holders; • Service life (n): the service life of data assets; • Income tax amortization income: at present, data assets cannot be recognized as intangible assets, so relevant tax amortization income cannot be recognized. 3.6.3 With-and-Without Method A method for estimating the value of data assets by quantifying the impact on cash flows if the data assets needed to be replaced (assuming all of the other assets required to operate the business are in place and have the same productive capacity). The projected revenues, operating expenses and cash flows are calculated in scenarios “with” and “without” the data, and the difference between the cash flows in the two scenarios is used to estimate the data’s value. The calculation formula is as follows, Cash Flow Appraisal Value = nk=1 Incremental (1+i)n (3) + Income Tax Amortization Income • Incremental cash flow = cash flow (under application data asset scenario) • Cash flow ‘(without data assets); – discount rate (I): the necessary rate of return required by data assets holders; • Service life (n): the service life of data assets; • Income tax amortization income: at present, data assets cannot be recognized as intangible assets, so relevant tax amortization income cannot be recognized 3.6.4 Relief from Royalty Method A method built on the assumption that if the company doesn’t own the data asset, it might be willing to license the data from a hypothetical third party who does. In this
658
C. Zhang and X. Xie
method, the company would forgo a certain amount of profitability to license the data from a third party over a certain lifecycle. The calculation formula is as follows, Appraisal Value = nk=1 LicenseFee (1+i)n (4) + Income Tax Amortization Income • License fee: the license fee that can be charged for authorizing others to use the data asset, which is usually calculated according to the ratio of income, that is, license fee = data asset related income × license rate; • Discount rate (I): the necessary rate of return required by data asset holders; • Service life (n): the service life of data assets; • Income tax amortization income: at present, data assets cannot be recognized as intangible assets, so relevant tax amortization income cannot be recognized. 3.6.5 The Cost Approach A method that uses the concept of replacement cost as an indicator of value. The premise is that an investor would pay no more for an asset than the amount for which the utility of the asset could be replaced, plus a required profit/return to incent a third party to replace the asset. The calculation formula is as follows, Appraisal value = replacement cost − depreciation factor, Appraisal Value = Replacement cost − Depreciation factor
(5)
Appraisal Value = Replacement Cost × Newness Rate
(6)
or
• Replacement cost: the reasonable cost, tax and profit for the formation of data assets. For the data assets generated and collected within the company, the explicit cost mainly includes the labor cost and equipment cost of data collection, storage and processing, while the implicit cost mainly includes the R & D cost and labor cost of the business to which the data is attached; for the purchased data assets, the replacement cost is the amount to be paid to obtain the same data asset under the current market conditions; • Depreciation factors: in the traditional cost method, the depreciation factors of physical assets are mainly divided into economic depreciation, physical depreciation and functional depreciation. However, for data assets that do not have physical form and are not used as functional assets, the depreciation factors mainly come from the economic depreciation caused by the loss of timeliness of data assets.
Framework Design of Data Assets Bank for Railway Enterprises
659
4 Conclusion It is the general trend for railway enterprises to promote the construction of data asset bank in an orderly way by using ‘MDIPOV’ framework, which includes data management, data deposit, data integration, data process, data operation and data valuation. With scientific data asset bank, it is very helpful for railway enterprises to strengthen the data base, break the “data isolated island”, and effectively realize data sharing. Meanwhile, it will be great to deeply mine the value of data and accelerate the development of data products, data services and data empowerment from the perspective of operation in order to promote the digital transformation of railway enterprises. However, the construction of data asset bank needs to be gradual and will face many challenges and pressures. As an intangible asset, the value evaluation of data asset is difficult to quantify so far. Therefore, it is necessary to classify and refine the data asset, sort out the constraints and application scenarios in the future.
References 1. Wang, P., Ma, X., Wang, Z., Zou, D., Liu, M.: Design and application of storage architecture of railway data service platform. Railway Comput. Appl. 30(5), 48 (2021). (In Chinese) 2. Wang, Z.: China academy of railway sciences, ‘on top-level design for China railway’s big data application & case study.’ China Railway 1, 9 (2017). (In Chinese) 3. Bing, G., et al.: Personal data bank: a new mode of personal big data asset management and value-added services based on bank architecture. Chinese J. Comput. 40(1), 126–143 2017 (In Chinese) 4. Iluberty, M.: Awaiting the second big data revolution: from digital noise to value creation. J. Ind. Compet. Trade 15(1), 35–47 (2015) 5. Zhang, Z.-G., Yang, S.-S., Wu, H.-X.: Research and application for appraisal model of data asset value. Modern Electron. Technol. 08, 44, 47–51 (2015) (In Chinese) 6. Zhang, Y.-M., Mu, W.-J.: Characteristics and value analysis of financial data assets in the era of big data. Finance Res. 08, 78–80 (2015) 7. Deloitte: How Deloitte evaluates data assets: Valuation and industry practice (2019)
Research on Emergency Rescue Strategy of Mountain Cross-Country Race Qianqian Han1(B) , Zhenping Li1 , and Kang Wang2 1 Capital University of Economics and Business School of Management and Engineering,
Beijing Wuzi University School of Information, Beijing, China 2 Beijing Wuzi University School of Information, Beijing, China
Abstract. Due to the long track and poor road conditions of mountain crosscountry race, it is easy to happen dangerous situations such as accidental injury of participants. So, this paper studies the location of emergency rescue sites and the allocation of rescue equipment under stochastic demand. According to the road conditions, the track is divided into several segments, and each segment is divided into multiple demand points. For any demand point in each segment, a two-stage stochastic programming model is established with the constraint that the rescuers can arrive in the golden rescue time after sending the distress signal, and the objective function is to minimize the sum of the fixed cost of setting the rescue site, the variable cost of equipping the rescue equipment and the expected rescue cost after the accident, the GUROBI solver and heuristic algorithm are used to solve the problem. Taking the location of rescue sites and the allocation of portable mobile AED as examples, several groups of examples are generated, and the effectiveness of the model is verified by solving the examples. Finally, the results of the examples are analyzed to verify the impact of the segment road conditions and random demand distribution on the emergency rescue site selection and rescue equipment allocation scheme, and the emergency rescue strategy of key monitoring and classified management is put forward. The results of this paper can help the organizers optimize the emergency rescue plan and avoid or reduce the occurrence of accidents. Keywords: Cross-country race · Rescue site · AED · Stochastic demand · Stochastic programming
1 Introduction Cross-country running, long-distance running and hiking in an open natural environment, including mountains, Gobi, snow and other complex and difficult environment, which determines the high-risk coefficient of this sport and the difficulty of rescue. According to the Big Data Analysis Report of China Marathon in 2019 released by the Chinese Athletic Association [1], China’s marathon and related road running events continue to grow at a high speed. And cross-country running events have also increased steadily for three consecutive years. With the increase of the number of events, its security problems © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 X. Shi et al. (Eds.): LISS 2021, LNOR, pp. 660–671, 2022. https://doi.org/10.1007/978-981-16-8656-6_58
Research on Emergency Rescue Strategy
661
are gradually exposed. In the yellow river stone forest mountain marathon 100-km crosscountry race accident, due to the non-standard emergency rescue plan, the rescue sites in the racetrack are few and scattered, and the shortage of rescue personnel and equipment, resulting in heavy casualties. At present, the research on marathon and related events at home and abroad mainly focuses on the following aspects. Research on risk management and control of sports events: such as risk prevention strategy [2], establishment of medical insurance evaluation index [3] and methods to improve medical aid level [4], etc. Research on participants: such as participation motivation [5], participation behavior [6], participation satisfaction [7] and participants’ consumption behavior [8]. Research on physical needs: such as physiological needs [9, 10], supply needs [11] and rescue materials needs [12, 13]. In summary, very few papers consider the emergency rescue and quantitative analysis, mainly focus on sports discipline research and physiological research. Therefore, this paper intends to study how to set up the emergency rescue sites and equip them with rescue equipment scientifically and reasonably, so as to provide assistance to the participants timely in case of unexpected incidents and avoid casualties. In this paper, the optimize of rescue site location and equipment allocation for emergency rescue under stochastic demand is studied. A discrete two-stage stochastic programming model is established under the constraint of arriving at the demand point within the best time for rescue. The objective is to minimize the sum of the fixed cost of setting the rescue sites, the variable cost of equipping the rescue equipment and the expected rescue cost after the accident. The GUROBI solver is used to solve the model, which provides a decision reference for the problem of rescue site selection and equipment allocation, and further enriches the theoretical research results of this problem.
2 Mathematical Model of Rescue Site Setting Problem 2.1 Problem Description and Analysis The route sketch of the cross-country race is shown in Fig. 1, consisting of start point, end point, several check points (CP), and segments between nodes. The participants start from the starting point and pass through each CP in turn. In order to ensure the life safety of the participants, a sufficient number of rescue sites should be set up to provide the necessary rescue materials. Due to geographical restrictions in the wild, the road conditions and weather conditions of each segment may vary greatly, and there are great differences in the traffic speed of the rescue vehicles. Therefore, this paper studies the site location and material allocation under different road conditions and weather conditions. The known information includes: the number of candidate rescue sites in each segment and the set of demand points covered by each candidate rescue site, the fixed construction cost and maximum storage capacity of the candidate rescue site, the unit material allocation cost, the unit delivery cost and shortage cost of rescue materials, and the probability distribution of the demand of each demand point under different weather conditions. The question is how to determine the location of the rescue site and the material equipment of the rescue site, so that: (1) The rescue sites can provide rescue services for each demand point in each segment; (2) Minimize the sum of the fixed
662
Q. Han et al.
construction cost, material allocation cost and the expected rescue cost under various scenarios.
Fig. 1. Route sketch of the cross-country race
2.2 Model Assumptions To simplify the problem, this paper makes the following assumptions: 1) Break the continuous demand points in each segment into multiple discrete demand points, and only study the material demand of single mobile medical aid equipment, such as AED. 2) Weather scenarios include good, normal and bad, corresponding to low, medium and high emergency material demand respectively. Assuming that the weather scenario is known at the same segment, the demand for each demand point corresponds to the uniform distribution of the same parameters. 3) When the demand is greater than the delivery volume, other methods of delivery are taken as the shortage cost in the second stage. It can also be regarded as the number of potential rescue failures, which is reflected in the objective function to minimize the penalty cost of rescue failure. 4) Due to environmental constraints, there are differences in the storage capacity of rescue sites that can be set in each segment, and the fixed construction costs are also different. 5) In each segment, the set of demand points covered by candidate rescue sites accessible within the golden rescue time is known, which can be calculated in advance according to the nodes distance and rescue vehicles travel speed (In order to cover all demand points, the parameter is set to the speed in bad weather). 2.3 Notation Description Parameters: N : Set of demand points, N = { 1, 2, ..., n} S: Set of candidate sites, S = { 1, 2, ..., s} A(j): The set of demand points covered by candidate rescue site j within the golden rescue time limit B(i): The set of rescue points that can provide rescue for demand point i within the golden rescue time limit Cj : Maximum storage capacity of rescue site j hj : Fixed construction costs of the rescue site j
Research on Emergency Rescue Strategy
663
b: Unit material allocation cost f1 : Unit material delivery cost f2 : Unit material shortage cost M : A large positive number : The set of scenes, = {ξ |ξ = ξ1 , ξ2 , ...ξl }. di (ξ ): The amount of rescue material required at demand point i when the scene ξ occurs Decision variables: xj = yij =
1, Set up a rescue site at the candidate site j 0, Otherwise
1, The rescue site j provides rescue to the demand point i 0, Otherwise
zj : The amount of material reserves at rescue site j pij (ξ ): The amount of rescue materials delivered from the rescue site j to the demand point i when the scene ξ occurs. ωi (ξ ): The amount of shortage at demand point i when the scene ξ occurs 2.4 Two-Stage Stochastic Programming Model This paper takes each segment as the independent research object. For each segment, the following two-stage stochastic programming model can be established as follows: min
s
hj xj +
j=1
s
bzj + Eξ Q(x, y, z, ξ )
(1)
j=1
yij = 1, ∀i ∈ N
(2)
yij ≤ Mxj , ∀j ∈ S
(3)
j∈B(i)
i∈A(j)
zj ≤ Cj xj , ∀j ∈ S
(4)
xj ∈ {0, 1}, ∀j ∈ S
(5)
yij ∈ {0, 1}, ∀i ∈ N , j ∈ S
(6)
zj ≥ 0, ∀j ∈ S
(7)
For any given realization ξ , Q(x, y, z, ξ ) = min f1
n s i=1 j=1
pij (ξ ) + f2
n i=1
ωi (ξ )
(8)
664
Q. Han et al.
pij (ξ ) ≤ Myij , ∀i ∈ N , j ∈ S n
(9)
pij (ξ ) ≤ zj , ∀j ∈ S
(10)
pij (ξ ) + wi (ξ ) = di (ξ ), ∀i ∈ N
(11)
i=1 s j=1
pij (ξ ), wi (ξ ) ≥ 0, ∀i ∈ N , j ∈ S
(12)
The objective function (1) represents the minimum the sum of the fixed construction cost and material allocation cost of the rescue site and the expected value of rescue cost corresponding to the optimal solution for a specific random scene ξ . Constraints (2) indicate that each demand point should be covered by rescue site. Constraints (3) state that if the rescue site j serves the demand point, it must be established. Constraints (4) mean that the material reserve of each rescue site does not exceed its maximum storage capacity. Constraints (5)−(7) represent the value constraints of decision variables in the first stage. The objective function (8) represents the minimization of the delivery cost at the rescue site and the shortage cost at the demand point; Constraints (9) guarantee that only when rescue site j is responsible for the rescue work of demand point i, the rescue materials will be delivered to demand point i; Constraints (10) mean that the amount of material delivered at each rescue site does not exceed its maximum storage capacity; Constraints (11) limit that the demand for rescue material at each demand point must be met; Constraints (12) represent the value constraint of decision variables in the second stage.
3 Numerical Experiments and Result Analysis In order to verify the effectiveness of the two-stage stochastic programming model, this section uses the GUROBI solver to obtain the exact optimal solution through simulation generation of examples, and analyzes the results. A heuristic algorithm is designed, and the results of the algorithm are compared with the exact solution to verify the effectiveness of the algorithm. 3.1 Generation of Test Instances A 50-km cross-country race course consists of 6 segments, each of which is 10 km, 8 km, 11 km, 5 km, 7 km and 9 km long. Among them, the ambulance can pass through segment 1 and segment 2, and its terrain is flat. The climbing height of segment 3 and segment 4 is high and the terrain is rugged, so the medical staff can only walk through. Segment 5 and segment 6 are relatively rugged paths, which can be passed by small vehicles such as motorcycles. In order to prevent the participants from major accidents, it is necessary to set up rescue sites along the track to provide mobile AED medical rescue
Research on Emergency Rescue Strategy
665
services. In each segment, some rescue sites are selected from multiple candidate sites to provide timely rescue services for each demand point. At the same time, considering the influence of weather conditions on the demand of rescue equipment, the weather conditions are roughly divided into three scenarios: good, normal and bad. In general, the worse the weather is, the more rescue equipment is needed. Rules for Setting Demand Point: Every kilometer in the segment is regarded as a demand point. In the Compilation of China Marathon Management Documents (2021) issued by Chinese Athletic Association, it is stipulated that the portable mobile AED shall not be less than 1 per 1.5 km in the evaluation method of marathon and related sports graded events, so as to ensure the coverage of AED throughout the race [14]. The 2019 Ningbo International Marathon has a total of 80 AEDs installed on the whole 40 km track as the guarantee for the race [15]. According to the above two data, the demand for portable AED at each demand point is simulated and generated. In good weather, the demand obeys the uniform distribution of parameter [1, 2], in general weather, the demand obeys the uniform distribution of parameter [2, 3], in bad weather, the demand obeys the uniform distribution of parameter [3, 4]. Rules for Setting Candidate Sites: In the selection of candidate sites, professionals need to comprehensively consider the terrain, personnel distribution, dangerous areas and other factors, and then preliminarily determine the location of several candidate sites. Here we assume that the information of candidate sites is known. Generally speaking, the start point and end point are equipped with a sufficient amount of mobile rescue equipment, and there are several aid points (AP) to store a small amount of rescue equipment. Therefore, this paper divides the rescue sites into two categories: start and end point, and AP point. The fixed construction cost (including labor cost, auxiliary rescue equipment, etc.) and the maximum storage capacity (equivalent to the number of medical professionals) of the two types of rescue sites are different. Parameter settings are shown in Table 1. Table 1. Information about candidate rescue sites Type of rescue Fixed site construction cost
Capacity
Unit allocation cost
Unit delivery cost
Unit shortage cost
Start and end point
1500
10
5000
500
20000
500
4
5000
500
20000
AP
Rules for Setting Candidate Rescue Site Coverage Set: When the candidate rescue site j can reach the demand point i within 4 min which is the golden rescue time, the rescue site j is defined to cover the demand point i, namely i ∈ A(j). Assuming that the road conditions are the same in each segment, and the traffic speed of rescue tools is the same and known in each segment, then the set of demand points that can be reached by candidate rescue sites within the golden rescue time is known. The set of demand points that can be covered by candidate rescue sites in each segment is shown in Table 2.
666
Q. Han et al. Table 2. The set of demand points that can be covered by candidate rescue sites
Segment of the race
Number of candidate rescue sites
Number of demand points
Set of demand points that can be covered by candidate rescue sites
Segment 1
10
10
A(1) = {1, 2, 3, 4} A(2) = {1, 2, 3, 4, 5} A(3) = {1, 2, 3, 4, 5, 6} A(4) = {1, 2, 3, 4, 5, 6, 7} A(5) = {1, 2, 3, 4, 5, 6, 7, 8} A(6) = {2, 3, 4, 5, 6, 7, 8, 9} A(7) = {3, 4, 5, 6, 7, 8, 9, 10} A(8) = {4, 5, 6, 7, 8, 9, 10} A(9) = {5, 6, 7, 8, 9, 10} A(10) = {6, 7, 8, 9, 10}
Segment 2
6
8
A(11) = {11, 12, 13} A(12) = {11, 12, 13, 14, 15} A(13) = {11, 12, 13, 14, 15, 16} A(14) = {11, 12, 13, 14, 15, 16, 17, 18} A(15) = {12, 13, 14, 15, 16, 17, 18} A(16) = {14, 15, 16, 17, 18}
Segment 3
13
11
A(17) = {19} A(18) = {19, 20} A(19) = {20} A(20) = {20, 21} A(21) = {22} A(22) = {22, 23} A(23) = {23, 24} A(24) = {24, 25} A(25) = {25, 26} A(26) = {26} A(27) = {27} A(28) = {27, 28} A(29) = {28, 29}
Segment 4
8
5
A(30) = {30} A(31) = {30, 31} A(32) = {31} A(33) = {32} A(34) = {32, 33} A(35) = {33} A(36) = {33} A(37) = {33, 34}
Segment 5
8
7
A(38) = {35, 36} A(39) = {35, 36, 37} A(40) = {35, 36, 37} A(41) = {35, 36, 37, 38} A(42) = {36, 37, 38, 39} A(43) = {37, 38, 39, 40} A(44) = {39, 40, 41} A(45) = {39, 40, 41} (continued)
Research on Emergency Rescue Strategy
667
Table 2. (continued) Segment of the race
Number of candidate rescue sites
Number of demand points
Set of demand points that can be covered by candidate rescue sites
Segment 6
11
9
A(46) = {42, 43} A(47) = {42, 43, 44} A(48) = {42, 43, 44, 45} A(49) = {42, 43, 44, 45, 46} A(50) = {43, 44, 45, 46} A(51) = {44, 45, 46, 47} A(52) = {45, 46, 47, 48} A(53) = {46, 47, 48, 49} A(54) = {47, 48, 49, 50} A(55) = {48, 49, 50} A(56) = {49, 50}
Rules for Setting Segment Weather Scenes: Due to the complex terrain of the crosscountry race track and the uncertainty of the short-term weather forecast information in local areas. Therefore, the probability of various weather scenes on each segment is different. The weather scene information on each segment is shown in Table 3.
Table 3. Weather scene information of each segment Good weather
General weather
Bad weather
Segment 1
3/4
1/6
1/12
Segment 2
4/5
1/10
1/10
Segment 3
2/7
4/7
1/7
Segment 4
1/2
1/5
3/10
Segment 5
3/20
7/10
3/20
Segment 6
18/25
2/25
1/5
3.2 Analysis of Instances Results According to the above data, use the GUROBI solver to solve the problem, the rescue sites set in each segment and the total cost are shown in Table 4. A total of 36 rescue sites are set in the whole race, and the locations of rescue sites are distributed along the race between the start point and the end point. A total of 115 mobile AEDs is equipped, and the total cost is 815,149.
668
Q. Han et al.
On average, segment 1 and segment 2 set up an emergency rescue site every 2.5 km, and each rescue site is equipped with 3 to 4 AED equipment. The two segments are set up in a manner similar to the medical stations used in urban marathons, as the road conditions are the same for both races, allowing ambulances to travel. Therefore, the distance between the neighboring rescue sites is farther than that of other segments. The road conditions of segment 3 and segment 4 are the most complex, and more rescue sites need to be set, with an average of one rescue set per kilometer. On average, 2.6 AEDs are allocated to each rescue site in segment 3, while 3.6 AEDs are required in segment 4. This is because the probability of bad weather in segment 4 is higher than that in segment 3, so the average demand of segment 4 is higher, and the rescue site requires more equipment to be allocated to deal with emergencies. Due to the limited road conditions, the coverage distance of each rescue site in segment 5 and segment 6 is relatively short. One rescue site needs to be set every 1.2 km on average, and each rescue site is equipped with 3.2 AEDs on average. Segment 1 and segment 2 require 1.3 to 1.5 AED units per kilometer, while segment 5 and segment 6 require 2.5 to 2.7 units. Although the length of these segments is similar, due to the influence of road conditions and weather factors, the AED allocation rate per kilometer of segment 5 and segment 6 is relatively high. By analyzing the results, we found that different segment types and different material requirements will affect the number of rescue sites and the setting of the stocking volume. The more complicated the road conditions and the worse the weather conditions, the more rescue sites and supplies needed. In real events, the organizer can classify and manage different types of segments according to the actual research information and demand of the segments, and focus on monitoring the segments with high risk factor and changeable weather. At the same time, the external environmental factors can be combined with qualitative and quantitative means to determine the optimal emergency rescue strategy, in order to ensure the safety of participants and reduce the total cost of preparation. Table 4. Instances results Segment
Number of rescue sets
Rescue site number (and storage capacity) Total cost
Segment 1
4
2(4) 5(3) 6(4) 7(4)
139500
Segment 2
3
11(4) 14(3) 16(4)
114000
Segment 3
11
17(2) 18(3) 20(2) 22(2) 23(3) 24(3) 25(3) 26(3) 27(3) 28(3) 29(2)
185714
Segment 4
5
30(4) 31(3) 33(4) 34(3) 37(4)
Segment 5
6
39(3) 40(3) 42(4) 43(3) 44(3) 45(3)
118825
Segment 6
7
47(3) 49(3) 50(4) 52(3) 53(3) 54(3) 55(4)
158760
98350
In addition, we design a greedy adding heuristic algorithm, which can solve large-scale instances quickly. The main idea of this algorithm is to treat each segment as an independent research object and to let d̄ = E[d(ξ)] represent
the expected value of the stochastic demand d(ξ); the stochastic demand of each demand point is then replaced by d̄ (in the experiment we set d̄ = E[d(ξ)], so the expected demand in the six segments is d̄ = 2, 2, 3, 3, 3, 2). The heuristic selects, step by step, the candidate rescue site with the largest ratio of coverage quantity to construction cost until all demand points are covered. The basic steps of the algorithm are as follows.

Input: the set of demand points each candidate rescue site can cover, A(j); the storage capacity of each candidate rescue site, C_j; the fixed construction cost of each candidate rescue site, h_j; the set of candidate rescue sites S; the set of demand points N.
Initialization: the set of selected rescue sites F = ∅; the set of covered demand points Y = ∅. Set q_j = min{|A(j)|, C_j / d̄} and let D(j) be the first q_j elements of A(j).
Step 1. For each j ∈ S, calculate P_j = q_j / h_j.
Step 2. Set j* = argmax_{j∈S} P_j (tie breaker: minimum index), and update F ← F ∪ {j*}, Y ← Y ∪ D(j*), C_{j*} ← C_{j*} − q_{j*} d̄. For each j ∈ S, A(j) ← A(j) \ D(j*).
Step 3. If |Y| = |N|, stop; otherwise, go to Step 1.
Output: F and D(j). The amount of material reserved at rescue site j can then be calculated as z_j = |D(j)| d̄. The delivery quantity of each rescue site and the shortage at each demand point can be calculated from the stochastic demand under the different weather conditions. The greedy adding heuristic is applied to the example in Sect. 3.1, and the results are shown in Table 5.
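A minimal Python sketch of the greedy adding heuristic described above is given next; recomputing q_j from the remaining coverage sets and capacities at each iteration is our reading of the steps, and the data structures are illustrative.

```python
def greedy_adding(S, N, A, C, h, d_bar):
    """Greedy adding heuristic (sketch): repeatedly open the candidate site with
    the largest ratio of newly covered demand points to fixed cost until all
    demand points are covered. d_bar is the expected demand per demand point."""
    A = {j: list(A[j]) for j in S}          # remaining coverable points per site
    C = dict(C)                              # remaining capacity per site
    F, D = [], {}                            # selected sites and their assigned points
    covered = set()

    while len(covered) < len(N):
        best, best_ratio = None, -1.0
        for j in sorted(S):                  # ascending order gives the minimum-index tie breaker
            if j in F or not A[j]:
                continue
            q_j = min(len(A[j]), int(C[j] // d_bar))
            if q_j > 0 and q_j / h[j] > best_ratio:
                best, best_ratio = j, q_j / h[j]
        if best is None:                     # no remaining site can cover anything more
            break
        q = min(len(A[best]), int(C[best] // d_bar))
        D[best] = A[best][:q]                # the first q coverable points of the chosen site
        F.append(best)
        covered.update(D[best])
        C[best] -= q * d_bar
        for j in S:                          # drop the newly covered points from every A(j)
            A[j] = [i for i in A[j] if i not in D[best]]

    z = {j: len(D[j]) * d_bar for j in F}    # material reserve z_j = |D(j)| * d_bar
    return F, D, z
```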
Table 5. Results of the heuristic algorithm

| Segment   | Number of rescue sites | Rescue site number (and storage capacity)                          | Total cost |
|-----------|------------------------|----------------------------------------------------------------------|------------|
| Segment 1 | 5                      | 2(4) 3(4) 4(4) 5(4) 7(4)                                             | 152291     |
| Segment 2 | 4                      | 11(4) 12(4) 13(4) 14(4)                                              | 132800     |
| Segment 3 | 11                     | 17(3) 18(3) 20(3) 21(3) 22(3) 23(3) 24(3) 25(3) 27(3) 28(3) 29(3)    | 203714     |
| Segment 4 | 5                      | 30(3) 31(3) 33(3) 34(3) 37(3)                                        | 108450     |
| Segment 5 | 7                      | 38(3) 39(3) 40(3) 41(3) 42(3) 43(3) 44(3)                            | 126625     |
| Segment 6 | 5                      | 46(4) 48(4) 51(4) 53(4) 54(2)                                        | 172860     |
The results show that a total of 37 medical rescue sites and 123 mobile AEDs are needed on the track, with a total cost of 896,740. Compared with the exact solution, the heuristic requires 1 more rescue site and 8 more mobile AEDs, resulting in about 9% higher cost. This suggests that an exact solver can be used
to find the optimal solution for small-scale problems, while for larger instances, where the solver becomes inefficient, the heuristic algorithm can obtain an approximate solution in a short time.
4 Conclusion
This paper studies the setting of emergency rescue sites in cross-country races, considering segment type and random demand. For each segment, a discrete two-stage stochastic programming model is established and solved with the GUROBI solver. A heuristic algorithm is also designed, and its results are compared with the exact solution. By simulating the demand for portable mobile AED equipment in a 50-km cross-country race, the location and allocation plans of rescue sites in the 6 segments are calculated, and the effectiveness of the model is verified. Through the classification of different segments, the influence of road conditions and weather conditions on the setting of rescue sites is confirmed. The quantitative analysis in this paper can help event organizers make scientific decisions based on the actual survey information. This paper only studies the demand for a single product and does not consider the situation in which AP points can provide rescue services for demand points in different segments; in further study we will establish a more practical mathematical model to provide a more accurate decision basis for practical application. In addition, only a heuristic algorithm is designed here and the design of an exact algorithm is not discussed, so designing fast and effective exact algorithms is another future research direction.
Rail Passenger Flow Prediction Combining Social Media Data for Rail Passenger
Jiren Shen(B)
Beijing Capital Agribusiness and Food Group, Beijing, China
[email protected]
Abstract. Comprehensive characterization and scientific prediction of urban rail transit passenger flow plays a very important role in the process of urban rail transit planning, construction and management operation. This study combines social media data to characterize urban rail transit passenger flow under the influence of different social events, and achieves a comprehensive characterization of rail transit passenger flow and scientific prediction of passenger flow. Keywords: Social media · Big data · Rail transit · Passenger flow prediction
1 Introduction
Urban rail transit originated in London, where it was built in the 19th century. In the theoretical study of rail transit, research on methods and models for passenger flow prediction started early, and more and more scholars have gradually turned to short-term urban rail transit passenger flow prediction. In 1984, Okutani [1] applied the Kalman filter approach to dynamic short-term passenger flow forecasting, and in 1987 Yakowitz first proposed that a new combined model could be built by combining the nearest-neighbor forecasting method with the time series method [2]. Nonparametric regression was later applied to short-term passenger flow forecasting [3] and achieved good results, because the k-nearest-neighbor algorithm requires neither parameter assumptions nor a parameter adjustment process, although errors introduced by the theory remain. The study by Oswald et al. mainly focused on reducing the time consumption of the algorithm [4]. In 2014, Kindzersket et al. investigated the use of passenger flow data from adjacent time observation points for the prediction of traffic flow data [5, 6]. Sangsoo applied the ARIMA prediction model to short-term rail transit passenger flow prediction; the algorithm also integrated factors reflecting unexpected events that the basic model could not address, and the results were good [7]. In 1997, van Arem et al. [8], after emphasizing the need for short-term traffic flow prediction, summarized the then-existing theory and noted that the accuracy of short-term traffic flow forecasting methods still needed to be improved. Mathematical models of neurons were proposed as early as 1943. In 2001, Dia et al. [10] proposed a goal-oriented neural network prediction model and used it to predict short-term passenger flow with good results. Chen studied a short-term traffic flow
prediction method based on a dynamic sequential learning neural network model [11]. In 2005, Eleni et al. optimized the neural network prediction model and applied multilayer structure optimization theory to road traffic flow prediction; multi-sectional prediction was achieved, and the results effectively reflected the stochastic variability of road traffic flow [12]. In 2012, Zheng et al. [13] combined Bayes' rule and conditional probability theory to optimize different neural networks and applied the optimized model to short-term highway traffic flow prediction; empirical validation showed that the optimized predictions were more accurate than those of a single neural network model. Also in 2012, a hybrid EMD-BPN short-term passenger flow forecasting method for urban rail transit systems was proposed [14]. In 2013, Ozerova used a bilinear correlation method to investigate the factors influencing intercity passenger flows and then predicted urban commuter flows using linear regression [15]. In 2014, Hrushevska discussed passenger flow patterns in detail from the perspective of train schedules and other aspects, and on this basis predicted and studied the daily passenger flow of suburban railroads [16]. In recent years, combined forecasting models have become the mainstream of passenger flow forecasting; they combine several algorithms that perform well, thereby avoiding the limitations of individual algorithms and improving forecasting accuracy. Wang et al. used combined models to analyze in detail the variation pattern of short-term passenger flow and the trend of short-term historical passenger flow in urban rail transit [17].
2 Research Methodology and Data
2.1 Research Methodology
Time series modeling is a common method for analyzing the statistical properties of historical data through parameter estimation and curve fitting. Time series models must be built on stationary data, and the model type is determined by the stationarity and seasonality of the series before modeling: if the series is stationary, the ARMA(p, q) model is used; if the series S_n (n = 1, 2, …, t) is non-stationary without seasonality and needs to be differenced d times to become stationary, after which the data can be fitted with an ARMA model, the ARIMA(p, d, q) model is used; if the series is non-stationary with seasonality, the ARIMA(p, d, q)(P, D, Q)_S model is used. The ARIMA model reflects the dependence of passenger flow on the time series while also incorporating some stochastic volatility, and it has high prediction accuracy for short-term passenger flow trends. The ARMA model is calculated as

S_t = μ + Σ_{i=1}^{p} b_i S_{t−i} + ε_t + Σ_{i=1}^{q} θ_i ε_{t−i}   (1)
where S_t is the current value, μ the constant term, p the autoregressive order, q the moving-average order, b_i the autocorrelation coefficients, θ_i the moving-average coefficients, and ε_t the error term. On this basis, the non-stationary series is differenced and the ARIMA(p, d, q) model is used, where d is the order of differencing.

2.2 Data
As the most important transportation mode for cities, especially super tier-1 cities, rail transit is of great significance to the operation and development of cities. The factors influencing rail traffic are complex and varied. In addition to regular urban patterns, many other factors, such as low and high travel seasons, large gatherings, or major examinations, can affect passenger flow and passenger distribution [52]. In terms of holiday travel, annual holidays such as May Day and National Day can have a huge impact on urban rail traffic flow. In terms of large gathering events, the number of large events held in Beijing is large, involving a rich variety of types, a wide range of participant categories and activities, and complex passenger flow handling. According to statistical data, Beijing hosts nearly 10,000 large-scale events with more than 6,000 people each year, covering art, sports, politics, economy, and education [53]. In addition, weather, industry events, natural disasters, and many other factors can also affect passenger flow. Social media data were crawled through the Weibo web client. In order to correspond to the analysis and prediction of the original rail traffic passenger flow characteristics in Sects. 2 and 3, microblog event data for the same three-month period, November 2017 to January 2018, were selected. Keywords in microblog comments can be used to assist the data crawling and event classification process, and event information can also be obtained through news clients. Three types of social media event data were obtained for this study. The first category is event data related to tourist attractions, including the Forbidden City and Tiananmen Square tours during the holidays and the Happy Valley tour during the holidays. The second category is concert-related event data, covering all microblog data related to seven concerts and three soccer events. The third category is holiday-related event data, including microblog comments about Christmas and New Year's Day. For the obtained social media data, the number of microblogs, comments, and keyword frequencies of a given event within a given period can be counted to represent the heat of the event, providing data support for analyzing the passenger flow changes brought by the event.
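As an illustration of the modeling step in Sect. 2.1, the following sketch fits an ARIMA model to a daily passenger-flow series; the use of statsmodels, the file name, and the column names are assumptions for illustration, since the study does not name its software.

```python
# Illustrative sketch of fitting the ARIMA model from Sect. 2.1 to a daily
# passenger-flow series (statsmodels and the data layout are assumptions).
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

flow = pd.read_csv("network_daily_flow.csv", parse_dates=["date"], index_col="date")
series = flow["passengers"]

# ARIMA(p, d, q): difference once (d = 1) if the series is non-stationary.
model = ARIMA(series, order=(2, 1, 1))   # p = 2, q = 1 chosen only for illustration
fitted = model.fit()

# Forecast the next 7 days of network passenger flow.
print(fitted.forecast(steps=7))
```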
3 Combining Social Media Data to Characterize Passenger Flow
3.1 Analysis of Station Passenger Flow Characteristics of Events Related to Tourist Attractions
Table 1 presents the microblog data related to the two major tourist attraction groups in Beijing in the half month before and during the New Year's Day holiday, as well as the rail transit stations involved. The first group consists of the Forbidden City, Tiananmen Square, the Great Hall of the People, and the National Museum of China; the rail transit stations involved are Tiananmen East Station and Tiananmen West Station. The second is the Beijing Happy Valley scenic area, whose rail station is Happy Valley Scenic Area Station.

Table 1. Event information of tourist attractions

| Tourist attraction keywords | Subway stations                | Time period of occurrence | Weibo buzz (articles/day) | Usual Weibo buzz (articles/day) |
|-----------------------------|--------------------------------|---------------------------|---------------------------|---------------------------------|
| The Forbidden City          | Tiananmen East, Tiananmen West | 12.30–1.1                 | 40,000                    | 1050                            |
| Tiananmen Square            | Tiananmen East, Tiananmen West | 12.30–1.1                 | 800                       | 28                              |
| Great Hall of the People    | Tiananmen East, Tiananmen West | 12.30–1.1                 | 520                       | 53                              |
| National Museum of China    | Tiananmen East, Tiananmen West | 12.30–1.1                 | 240                       | 31                              |
| Happy Valley Scenic Area    | Happy Valley Scenic Area       | 12.30–1.1                 | 280                       | 89                              |
As shown in Table 1, the search results for the keywords "Forbidden City, Tiananmen Square, Great Hall of the People, National Museum of China" before and during the New Year's Day holiday were far hotter than normal; in particular, the keyword "Forbidden City" was about 40 times hotter than usual. A significant part of the search results are check-in posts about visiting the attractions. Therefore, from the social media data related to the Forbidden City, Tiananmen Square, the Great Hall of the People, and the National Museum of China, the number of visitors to these attractions should increase significantly during the New Year's Day holiday, corresponding to a significant peak in passenger flow at Tiananmen East and Tiananmen West stations. Likewise, the search results for the keyword "Happy Valley scenic area" before and during the New Year's Day holiday were more popular than normal, about three times hotter than usual, and a significant portion of the results were about the Happy Valley attractions. Therefore, from the results of the social media analysis related
to the Happy Valley scenic area, the number of visitors to this attraction should also increase significantly during the New Year's Day holiday, and correspondingly the passenger flow of Happy Valley Scenic Area Station should also increase significantly.

3.2 Analysis of Station Passenger Flow Characteristics of Concert-Related Events
Table 2 presents the microblog data of seven concerts, all held at the Beijing Workers' Gymnasium, using the average heat over the half month before each concert. This heat can be compared with the social media heat analysis of Dongsishitiao Station, so that social media data can rationally explain the fluctuations of station passenger flow caused by these events and provide a basis for qualitatively adjusting the results of the passenger flow prediction model.

Table 2. Concert event information

| Concert name (keywords)           | Venue                      | Hold time | Weibo buzz (articles/day) |
|-----------------------------------|----------------------------|-----------|---------------------------|
| Huang Qishan personal concert     | Beijing Workers' Gymnasium | Nov. 6    | 52                        |
| Reflections concert               | Beijing Workers' Gymnasium | Nov. 11   | 21                        |
| Floating Zhao Lei concert         | Beijing Workers' Gymnasium | Nov. 18   | 31                        |
| Yu Quan concert                   | Beijing Workers' Gymnasium | Dec. 24   | 101                       |
| Hip Hop Park concert              | Beijing Workers' Gymnasium | Dec. 31   | 10                        |
| One Party                         | Beijing Workers' Gymnasium | Jan. 1    | 70                        |
| NetEase Cloud Music Original Gala | Beijing Workers' Gymnasium | Jan. 17   | 19                        |
Because the Workers' Gymnasium often holds concerts and other entertainment activities on weekends, ordinary concerts have become normal events and do not cause abnormal passenger flow. Therefore, the Yu Quan concert, which is by far the most popular on Weibo, can be taken as a special event, and its data can be studied for passenger flow characteristic analysis. The popularity of the Yu Quan concert on Weibo is several times that of the other concerts. Therefore, from the analysis of the related social media data, it can be inferred that on December 24, the day of the Yu Quan concert, the crowd at the Workers' Gymnasium should be significantly larger, and correspondingly the passenger flow of Dongsishitiao Station should also increase significantly.
3.3 Analysis of Characteristics of Online and Online Passenger Flow of Festival-Related Events
Within the scope of the research data there are two important festivals, Christmas and New Year's Day. New Year's Day involves a three-day holiday plus the day before it, i.e., December 29 to January 1. In terms of Weibo popularity, buzz reached 200,000 posts per day in the three days before Christmas and 640,000 posts per day in the week before the New Year's Day holiday. The Weibo popularity of both festivals is very high, and the attention paid to New Year's Day is significantly higher than that paid to Christmas. Because festival events have nationwide influence, corresponding in this study to the whole city, the social media popularity of the two festivals can be matched with the passenger flow of the entire subway network. At the same time, to reflect long-distance travel and homecoming during the holidays, the festival event heat can also be matched with the passenger flow of Beijing West Railway Station, one of the important transportation hubs in Beijing. Therefore, judging from the related social media data, on Christmas Day (December 25) the passenger flow of the entire network should increase significantly, and during the New Year's Day holiday (December 29 to January 1) the passenger flow of the entire network and of the hub station Beijing West Railway Station should also increase significantly (Table 3).

Table 3. Holiday event information

| Holiday event keywords | Rail transit station                                                    | Time period | Weibo buzz (articles/day) |
|------------------------|-------------------------------------------------------------------------|-------------|---------------------------|
| Christmas              | Passenger flow across the entire network, Beijing West Railway Station  | 12.24–12.25 | 200,000                   |
| New Year               | Passenger flow across the entire network, Beijing West Railway Station  | 12.29–1.1   | 640,000                   |
| New Year's holiday     | Passenger flow across the entire network, Beijing West Railway Station  | 12.29–1.1   | 100,000                   |
Therefore, it can be concluded that holiday events, especially legal holiday events, do lead to a significant increase in traffic at the relevant sites, and that the social media buzz associated with holiday events is positively correlated with, and can be used to judge and analyze, the traffic characteristics of the relevant sites. It can also rationally explain the abnormal traffic changes these events bring and help make objective outlier adjustments to the traffic forecast results. A simple addition to the above analysis can be made using news-client social media data: for holiday events it is also necessary to consider the impact of weather. The weather in Beijing on January 1, 2018 was not good, with cloudy skies and a force 4 wind. According to Beijing tourism data, on January 1, all
151 tourist attractions monitored by the city received 470,000 visitors, a value that was 17.2% lower than the previous year. Park-type scenic spots received 150,000 people, a decrease of 5.2% year-on-year; entertainment-type scenic spots received 105,000 people, a decrease of 50.8% year-on-year; ski resort-type scenic spots received 16,000 people, a decrease of 4.2% year-on-year, and museum-type scenic spots received 25,000 people, an increase of 22.1% year-on-year [54]. Combined with the cloudy weather conditions with a force 4 wind on January 1, it can be assumed that a large number of tourists exchanged their trips to outdoor attractions for trips to indoor attractions or abandoned their trips. Corresponding to this social media data is the change in passenger traffic on the whole line on January 1, with a value of 6.7 million passengers, which is a significant increase over the usual regular traffic, but its increase is smaller than that of the first two days of the holiday. Therefore, weather is also an important influencing factor on rail traffic flow, especially holiday travel traffic, and needs to be taken into account in the process of rail traffic management and operation.
4 Analysis of Prediction Results
In the previous sections, this study analyzed and summarized the characteristics of rail traffic flow under the influence of three types of events using social media data, and verified that the social media heat of a specific event and the corresponding rail passenger flow are positively correlated. Based on these feature analysis results, the outputs of the established prediction model are now adjusted to bring them closer to the real data. The aim is to rationally explain passenger flow anomalies, so that the adjusted prediction results can play a more scientific and accurate guiding role in rail traffic management and operation.
4.1 Line Network Passenger Flow Time Series Model Forecast Results Adjustment
Using the passenger flow characteristics of festival-related events summarized from social media data, the forecast can be adjusted in advance according to the social media heat, increasing the predicted value appropriately so as to anticipate abnormal changes in passenger flow. In the social media data, Christmas Day is quite popular, so the passenger flow on that day is expected to be greater than the model prediction. The popularity of New Year's Day increases markedly, so the passenger flow during the New Year's Day legal holiday is expected to be substantially larger than the model prediction. Management and operational measures, such as increasing train frequency and strengthening passenger flow diversion at station entrances, can be arranged accordingly. Table 4 shows the relationship between the actual passenger flow and the model predictions: the actual passenger flow on Christmas Day is indeed larger than the predicted flow, and the actual flow on New Year's Day is indeed substantially larger than predicted. This verifies that the forecast adjustments based on social media heat data are correct and meaningful.
Table 4. Comparison of line network passenger flow forecast results adjustment

| Line network passenger flow special point | Model forecast traffic | Actual passenger flow | Forecast adjustment (difference) |
|-------------------------------------------|------------------------|-----------------------|----------------------------------|
| Christmas Day, December 25                | 3.9 million            | 4 million             | 100,000                          |
| December 31, New Year's Day holiday       | 6.3 million            | 6.7 million           | 400,000                          |
| January 1, New Year's Day holiday         | 4 million              | 4.6 million           | 600,000                          |
4.2 Site Passenger Flow Time Series Model Prediction Results Adjustment
Using the passenger flow characteristics of concert-related events summarized from social media data, the forecast can likewise be adjusted in advance according to the social media heat, increasing the predicted value appropriately. In the social media heat data, the Yu Quan concert is comparatively hot, so the passenger flow at Dongsishitiao Station on that day is expected to be substantially greater than the model prediction, while the passenger flow for concerts of ordinary popularity is expected to be only moderately greater than the model prediction. Based on these predictions, measures such as additional security checks and passenger flow diversion at the station during the corresponding time periods can be arranged. Table 5 compares the actual passenger flow with the model predictions: the actual flow on the days of the Hip Hop Park concert and the One Party concert was indeed larger than predicted, while the actual flow on the day of the Yu Quan concert was larger than predicted by a larger margin. This again verifies that the forecast adjustments based on social media heat data are correct and meaningful.

Table 5. Comparison of site passenger flow forecast results adjustment

| Site traffic special point | Model forecast traffic | Actual passenger flow | Forecast adjustment (difference) |
|----------------------------|------------------------|-----------------------|----------------------------------|
| Yu Quan concert day        | 13,000                 | 20,000                | 7,000                            |
| Hip Hop Park concert day   | 20,000                 | 22,000                | 2,000                            |
| One Party day              | 40,000                 | 45,000                | 5,000                            |
Based on the above discussion and research, a methodological idea of rail traffic passenger flow prediction combining social media data can be summarized into two
major parts: model passenger flow prediction and social media data passenger flow prediction. What is needed is a period of raw rail traffic passenger flow data and the corresponding social media data as input. After a series of steps of processing, the final passenger flow prediction results are derived to provide reference for rail traffic management and operation.
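One simple way to encode the qualitative adjustment step described above is to raise the model forecast when an event's Weibo buzz greatly exceeds its usual level; the rule, thresholds, and adjustment amounts in the sketch below are purely illustrative and are not values from this study.

```python
def adjust_forecast(model_forecast, event_buzz, usual_buzz,
                    ratio_threshold=3.0, step=100_000, cap=600_000):
    """Raise the time-series forecast when an event's Weibo buzz is abnormally
    high relative to its usual level (illustrative rule; thresholds are hypothetical)."""
    if usual_buzz <= 0:
        return model_forecast
    ratio = event_buzz / usual_buzz
    if ratio < ratio_threshold:              # buzz close to normal: keep the model forecast
        return model_forecast
    extra = min(cap, step * int(ratio // ratio_threshold))
    return model_forecast + extra

# Example: a 3.9-million network forecast with holiday buzz 10x the usual level.
print(adjust_forecast(3_900_000, 200_000, 20_000))   # -> 4,200,000
```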
5 Conclusion
China's urban rail transit has been developing for decades and is still advancing at high speed. Urbanization, urban development planning, and urban travel demand all place higher requirements on rail transit construction and operation management. A reasonable and thorough analysis of passenger flow characteristics and a scientific prediction scheme provide important guidance for rail transit construction and operation management. This study analyzes and summarizes passenger flow characteristics using historical rail transit data, establishes an accurate passenger flow prediction model, and then uses multivariate data such as social media to rationally adjust the model predictions, providing a comprehensive quantitative and qualitative description and prediction of rail transit passenger flow.
References 1. Okutani, I.: Dynamic prediction of traffic volume through Kalman filtering theory. Transp. Res. Part B Methodol. 18(1), 10–11 (1984) 2. Jerome, H.F., Jon, L.B., Raphael, A.F.: An algorithm for finding best matches in logarithmic expected time. ACM Trans. Math. Softw. (TOMS) 3(3), 219–226 (1977) 3. Davis, G.A., Nihan, N.L.: Nonparametric regression and short—term freeway traffic forecasting. J. Transp. Eng. 11, 178–188 (1991) 4. Smith, B.L., Williams, B.M., Oswald, R.K.: Comparison of parametric and nonparametric models for traffic flow forecasting. Transp. Res. Part C 10(4), 303–321 (2002) 5. Turochy, R.E.: Enhancing short-term traffic forecasting with traffic condition information. J. Transp. Eng. 132(6), 469–474 (2006) 6. Hodge, V.J., Krishnan, R., Austin, J., et al.: Short-term prediction of traffic flow using a binary neural network. Neural Comput. Appl. 25(7–8), 1639–1655 (2014) 7. Setyawati, B.R., Creese, R.C.: Genetic algorithm for neural networks optimization. Proc. SPIE Int. Soc. Opt. Eng. 5605, 54–61 (2004) 8. Bart, V.A., Howard, R.K., Martie, J.M., Van, D.V., Joe, C.W.: Recent advances and applications in the field of short-term traffic forecasting. Int. J. Forecast. 13(1), 32–33 (1997) 9. Brian, L.S., Billy, M.W., Oswald, R.K.: Comparison of parametric and nonparametric models for traffic flow forecasting. Transp. Res. Part C 10, 303–321 (2002) 10. Dia, H.: An object-oriented neural network approach to short-term traffic forecasting. Eur. J. Oper. Res. 131, 253–261 (2001) 11. Chen, H.B., Grant-Muller, S.: Use of sequential learning for short-term traffic flow forecasting. Transp. Res. 9, 319–336 (2001) 12. Eleni, I.V., Matthew, G.K., John, C.G.: Optimized and meta-optimizedneural networks for short-term traffic flow prediction: a genetic approach. Transp. Res. Part C 13, 211–234 (2005)
13. Zheng, W., Lee, D.-H., Shi, Q.: Short-term freeway traffic flow prediction: Bayesian combined neural network approach. J. Transp. Eng. 132(2), 114–121 (2011) 14. Sugiyama, Y., Matsubara, H., Myojo, S., et al.: An approach for real time estimation of railway passenger flow. Q. Rep. TRTI 51, 82–88 (2010) 15. Ozerova, O.O.: Passenger flows prediction in major transportation hubs. Nauka Ta Progres Transportuv 6, 72–80 (2013) 16. Hrushevska, T.M.: Research of regularities of passenger flows in the rail suburban traffic. Nauka Ta Progres Transportuv 5, 39–47 (2014) 17. Chen, M.C., Wei, Y.: Exploring time variants for short-term passenger flow. J. Transp. Geogr. 19(4), 488–498 (2011)
A Novel Optimized Convolutional Neural Network Based on Marine Predators Algorithm for Citrus Fruit Quality Classification
Gehad Ismail Sayed1, Aboul Ella Hassanien1, and Mincong Tang2(B)
1 Faculty of Computers and AI and Scientific Research Group in Egypt (SRGE), Giza, Egypt
[email protected]
2 International Center for Informatics Research, Beijing Jiaotong University, Beijing, China
[email protected]
Abstract. Plant diseases have a huge impact on the reduction of agricultural production and may lead to economic losses. Citrus is one of the major sources of nutrients, such as vitamin C. Over the last decade, machine learning algorithms have been widely used for the classification of plant diseases. In this paper, a new hybrid approach based on the marine predators algorithm (MPA) and a convolutional neural network is proposed for the classification of citrus diseases. MPA is used to find the optimal values of the batch size, drop-out rate, drop-out period, and maximum number of epochs. The experimental results show that the proposed MPA-optimized ResNet50 is superior, achieving an overall accuracy of 100% for citrus disease classification.
Keywords: Marine predators algorithm · Convolutional neural network · Citrus diseases · Fruit quality

1 Introduction
Plant diseases are one of the major problems in the agriculture industry. Plants are not only a source of food but also an important source of vitamins, energy-rich compounds, and minerals. Citrus is one of the plants with high nutritional content and commercial value [1]. Because the symptoms of different plant diseases vary only slightly, expert opinion is needed to diagnose them, and misdiagnosis can cause great economic losses for farmers. Over the years, machine learning algorithms have proved their efficiency for the automatic diagnosis of plant diseases, whereas manual diagnosis is error-prone and costly [2]. In the past decade, some studies have considered deep learning for agricultural disease monitoring. Deep learning is part of machine learning [3]. It
has proved its efficiency in obtaining high accuracy for many traditional recognition tasks [4, 5]. Recently, it has been used not only for image classification problems but also in many other fields such as agriculture. In contrast to traditional machine learning algorithms, deep learning algorithms can yield better classification accuracy and can therefore be considered among the most promising approaches. A deep network resembles a shallow neural network but has many hidden layers, and it can obtain better accuracy than a shallow network. The authors in [6] used the MobileNetV2 deep learning architecture to classify and identify agricultural disease; their citrus disease detection based on MobileNetV2 obtained an overall accuracy of 87.28%. In [7], the authors used the VGG-19 architecture with bridge connections to identify pests and diseases in citrus plantations; the proposed hybrid model achieved an accuracy of 95.47%, and comparison with pre-trained VGG-16 and VGG-19 deep neural network architectures showed that it obtained better results than the other pre-trained networks. The authors in [8] used the InceptionV3 deep learning architecture with GAN-based data augmentation for citrus disease classification, and the proposed model achieved 92.60%. In this paper, we propose another hybrid model, based on the Marine Predators Algorithm (MPA) and the ResNet50 deep learning architecture, for citrus disease classification. MPA is one of the recently proposed algorithms, first introduced in [9]. Its main inspiration comes from the biological behavior of predators and prey, with the Brownian and Lévy movements of predators in the ocean. It has proved its efficiency for solving many optimization problems, has shown superior performance on a number of well-known multi-objective optimization problems [10], and has been used to find the optimal electrical parameters of the triple-diode photovoltaic model [11]. In this work, MPA is used to find proper values for the drop-out rate, batch size, drop-out period, and maximum number of epochs. The main contributions of the paper can be summarized as follows: (1) a new hybrid model based on MPA and the ResNet50 deep learning architecture is proposed; (2) the proposed hybrid model is applied to a citrus fruit dataset containing four citrus diseases; (3) the performance of the proposed hybrid model is compared with three other well-known pre-trained neural network architectures. The rest of the paper is organized as follows: Sect. 2 gives a brief description of the MPA algorithm; Sect. 3 describes the citrus dataset; the proposed citrus disease classification model based on ResNet50 and MPA is introduced in Sect. 4; Sect. 5 provides the experimental results and discussion; finally, conclusions and future work are presented in Sect. 6.
2 Marine Predators Algorithm
2.1 Inspiration
The Marine Predators Algorithm (MPA) is one of the recent algorithms, published in 2020 [9]. Its main inspiration comes from the biological behavior of predators and prey, with the Brownian and Lévy movements of predators in the ocean; Brownian and Lévy movements are widespread foraging strategies. MPA follows the natural rules of the optimal foraging strategy and considers the encounter-rate policy between predator and prey in marine ecosystems.
2.2 Mathematical Model
MPA is similar to other population-based metaheuristic algorithms, in which the population is uniformly distributed over the search space. The position of each search agent is initialized as in Eq. (1):

Y_i = LB_i + r × (UB_i − LB_i)   (1)

where LB_i and UB_i are the lower and upper boundary vectors of the variables, and r is a random number in the range 0 to 1. Following the theory of survival of the fittest, the top predator Y^1 is considered the most talented forager; it forms the Elite matrix Y^1_{i,j}, i = 1, 2, …, N, j = 1, 2, …, D, where N is the number of search agents and D is the number of dimensions. At each iteration, the fitness value of each predator is calculated, and the top predator is the one with the best fitness value. MPA maintains another matrix, called Prey, which is used to update the positions of the predators; the fittest member of the initial prey is taken as the predator. The optimization process of MPA is divided into three main phases: (1) when prey and predator move at the same pace, (2) when the prey moves faster than the predator, and (3) when the predator moves faster than the prey. All of these phases mimic the movement behavior of prey and predators in nature. The first phase, when predator and prey move at the same pace, is modeled for the first half of the population as

Prey_i = Prey_i + 0.5 R ⊗ S_i,  i = 1, 2, …, N/2   (2)
S_i = R_L ⊗ (Elite_i − R_L ⊗ Prey_i)   (3)

where R_L is a vector of random numbers drawn from a Lévy distribution, R is a random number from 0 to 1, and ⊗ denotes entry-wise multiplication. In this phase, half of the population is designated for exploitation and the other half for exploration. The second half of the population is described by

Prey_i = Prey_i + 0.5 CF ⊗ S_i,  i = N/2, …, N   (4)
S_i = R_L ⊗ (Elite_i − R_L ⊗ Prey_i)   (5)

where CF is a parameter that controls the step size of the predators' movement. In phase 2, when the predator moves faster than the prey, the phase has high exploration capability:

Prey_i = Prey_i + 0.5 CF ⊗ S_i,  i = 1, …, N   (6)
S_i = R_L ⊗ (Elite_i − R_L ⊗ Prey_i)   (7)

The movement of prey in the third phase, in which the predator moves faster than the prey, is defined as

Prey_i = Prey_i + 0.5 R ⊗ S_i,  i = 1, 2, …, N   (8)
S_i = R_B ⊗ (Elite_i − R_B ⊗ Prey_i)   (9)

where R_B is a vector of random numbers drawn from a Brownian distribution. Fish Aggregating Devices (FADs) are another factor that causes behavior change in the marine predator environment. The movement of prey with respect to FADs is represented as

Prey_i = Prey_i + CF [LB_i + R ⊗ (UB_i − LB_i)],  if r ≤ FADs
Prey_i = Prey_i + [FADs (1 − r) + r] (Prey_{r1} − Prey_{r2}),  if r > FADs   (10)

where FADs = 0.2 is the FADs probability, r is a random number between 0 and 1, and r1 and r2 are two random indexes of the prey matrix.
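The update rules above can be condensed into a short NumPy sketch; scheduling the three phases over successive thirds of the iterations follows the reference MPA formulation [9], and the Lévy-step generator is a simple stand-in, so this is an illustrative reading rather than the authors' implementation.

```python
import numpy as np

def levy_steps(shape, beta=1.5):
    # Heavy-tailed random steps used as a simple stand-in for Levy flights.
    u = np.random.normal(0.0, 1.0, shape)
    v = np.abs(np.random.normal(0.0, 1.0, shape)) + 1e-12
    return u / v ** (1.0 / beta)

def mpa_update(prey, elite, lb, ub, it, max_it, fads=0.2):
    """One position update of the prey matrix, following Eqs. (2)-(10) above."""
    n, d = prey.shape
    cf = (1.0 - it / max_it) ** (2.0 * it / max_it)      # step-size control parameter CF
    r = np.random.rand(n, d)
    rl = levy_steps((n, d))                               # R_L: Levy-distributed numbers
    rb = np.random.normal(0.0, 1.0, (n, d))               # R_B: Brownian-distributed numbers
    new = prey.copy()

    if it < max_it / 3:                                    # phase 1, Eqs. (2)-(5)
        half = n // 2
        s = rl * (elite - rl * prey)
        new[:half] = prey[:half] + 0.5 * r[:half] * s[:half]
        new[half:] = prey[half:] + 0.5 * cf * s[half:]
    elif it < 2 * max_it / 3:                              # phase 2, Eqs. (6)-(7)
        s = rl * (elite - rl * prey)
        new = prey + 0.5 * cf * s
    else:                                                  # phase 3, Eqs. (8)-(9)
        s = rb * (elite - rb * prey)
        new = prey + 0.5 * r * s

    # FADs effect, Eq. (10)
    fad_mask = np.random.rand(n, 1) <= fads
    jump = cf * (lb + np.random.rand(n, d) * (ub - lb))
    r1, r2 = np.random.permutation(n), np.random.permutation(n)
    mix = (fads * (1 - np.random.rand(n, 1)) + np.random.rand(n, 1)) * (prey[r1] - prey[r2])
    new = np.where(fad_mask, new + jump, new + mix)
    return np.clip(new, lb, ub)
```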
3 Citrus Fruit Dataset
The dataset used in this study was acquired manually with a DSLR camera (Canon EOS 1300D) with a CMOS sensor and a resolution of 5202 × 3465 pixels, from the Sargodha region, Pakistan. All adopted images have dimensions of 256 × 256 at 72 dpi. The dataset consists of five categories: four categories of citrus fruit diseases and one category of healthy citrus fruits. The citrus disease categories are Canker, Black spot, Greening, Scab, and Melanose [12]. Table 1 gives a detailed description of the citrus fruit dataset, and Fig. 1 shows samples from it.
Table 1. The description of the citrus fruit dataset

| Disease    | No. of images |
|------------|---------------|
| Black spot | 19            |
| Canker     | 78            |
| Greening   | 16            |
| Scab       | 15            |
| Healthy    | 22            |

4 The Proposed Citrus Disease Classification Model
The proposed citrus disease classification model comprises two main phases: a data preprocessing phase and a hyper-parameter tuning phase for ResNet50 using MPA. In the data preprocessing phase, all images in the dataset are resized to 224 × 224 × 3 to fit the input layer of the ResNet50 deep learning architecture. Then random over-sampling is used to overcome the unbalanced-dataset problem: new samples are randomly generated from the minority classes and added to the original dataset. In this dataset, the canker class has the largest sample size, so the sample sizes of the other classes are increased to match it. The modified dataset is then divided into training, testing, and validation sets with percentages of 70%, 15%, and 15%, respectively. Various data augmentation techniques are used to improve generalization and reduce overfitting, such as x-translation, y-translation, x-reflection, y-reflection, x-shear, y-shear, x-scale, and y-scale (Fig. 2). ResNet50 is short for residual network, where 50 indicates the number of layers. It is a convolutional neural network (CNN) commonly used for image classification; it consists of 48 convolutional layers, 1 average-pooling layer, and 1 max-pooling layer [13]. In this paper, four main training parameters of ResNet50 are optimized with the MPA algorithm. In the second phase, the training and validation sets are used in the MPA optimization process. MPA starts with a random initialization of the search agents; each search agent (candidate solution) holds four randomly generated values for the drop-out rate, batch size, drop-out period, and maximum number of epochs, with the search space set as in Table 2. The population size is set to 30 and the maximum number of iterations to 20. Each search agent is evaluated, and the one with the highest accuracy is taken as the best position; in this study, accuracy is used to evaluate how good a search agent is. This process is repeated until a termination criterion is met; here, the optimization stops when the maximum number of iterations is reached.
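To make the search concrete, the sketch below shows how a candidate position produced by MPA can be decoded into ResNet50 training hyper-parameters and scored by validation accuracy; the bounds follow Table 2, and `train_and_validate_resnet50` is a hypothetical stand-in for the actual MATLAB training routine used in the paper.

```python
# Illustrative decoding of an MPA search agent into ResNet50 hyper-parameters.
BOUNDS = {
    "batch_size":     (1, 64),
    "dropout_rate":   (0.1, 0.9),
    "dropout_period": (2, 10),
    "max_epochs":     (2, 20),
}

def decode(agent):
    """Map a 4-dimensional position in [0, 1]^4 onto the Table 2 search space."""
    params = {}
    for value, key in zip(agent, BOUNDS):
        lo, hi = BOUNDS[key]
        params[key] = lo + value * (hi - lo)
    for key in ("batch_size", "dropout_period", "max_epochs"):
        params[key] = int(round(params[key]))      # integer-valued hyper-parameters
    return params

def fitness(agent, train_set, val_set):
    """Higher is better: validation accuracy of ResNet50 trained with the decoded
    hyper-parameters. train_and_validate_resnet50 is a hypothetical stand-in."""
    params = decode(agent)
    return train_and_validate_resnet50(train_set, val_set, **params)
```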
Fig. 1. Samples of each disease of citrus fruits
5 Experimental Results and Discussion
In this section, the results obtained with the proposed citrus disease classification model are analyzed. All experiments were executed in MATLAB 2020 on a Core i7 machine with an NVIDIA GeForce RTX GPU.
Fig. 2. The proposed citrus disease classification model
Table 2. Search space boundary of MPA

| Parameter       | Range      |
|-----------------|------------|
| Batch size      | [1, 64]    |
| Drop-out rate   | [0.1, 0.9] |
| Drop-out period | [2, 10]    |
| Maximum epochs  | [2, 20]    |
Table 3 shows the best values of drop-out rate, batch size, drop-out period, and maximum epochs determined by MPA. These values will be further used
to evaluate the performance of ResNet50 on the testing dataset. Table 4 compares the accuracy of the proposed MPA-optimized version of ResNet50 with that of the original ResNet50 without parameter tuning. It can be observed that the optimized version of ResNet50 obtained higher accuracy than the original ResNet50.

Table 3. Optimal values of drop-out rate, batch size, drop-out period, and maximum epochs determined by MPA

| Parameter       | Value |
|-----------------|-------|
| Batch size      | 32    |
| Drop-out rate   | 0.8   |
| Drop-out period | 2     |
| Maximum epochs  | 20    |

Table 4. ResNet50 results with and without using MPA

| Model        | Accuracy (%) |
|--------------|--------------|
| ResNet50     | 96.15        |
| MPA-ResNet50 | 100          |
Figure 3 shows the training progress of the optimized ResNet50 architecture based on MPA during the optimization process in terms of accuracy. It can be observed that the hybrid model based on ResNet50 and MPA obtained an overall accuracy of 98.18% during this process, and that the model reaches higher accuracy values as the number of iterations increases. In this study, the number of iterations and the number of epochs are set by the MPA optimization algorithm.
Fig. 3. The training progress of the optimized ResNet50 architecture
Table 5. Optimized ResNet50 based on MPA vs. other deep learning architectures

| Architecture | Accuracy (%) |
|--------------|--------------|
| SqueezeNet   | 94.16        |
| InceptionV3  | 98.33        |
| DenseNet201  | 92.73        |
| MPA-ResNet50 | 100          |
For further evaluation, the proposed hybrid model is compared with other well-known deep learning architectures, namely SqueezeNet, InceptionV3, and DenseNet201. Table 5 compares the accuracy of the MPA-optimized ResNet50 with these architectures. Note that the default parameter values of the other deep learning architectures are left unchanged, because this paper focuses on boosting the performance of ResNet50. As can be observed from Table 5, the proposed MPA-optimized ResNet50 is superior, obtaining an overall accuracy of 100%. Figure 4 shows the confusion matrix of the proposed citrus disease classification model based on the optimized ResNet50 using MPA. As can be seen, the proposed model is very promising and can effectively classify all citrus diseases.
Fig. 4. The confusion matrix of the proposed citrus disease classification model based on MPA and ResNet50
6 Conclusion and Future Works
This paper introduced a hybrid model based on ResNet50 and MPA for citrus disease classification. The proposed hybrid model consists of two main phases: a data pre-processing phase and a hyper-parameter tuning phase for ResNet50 using MPA. MPA is used to find the optimal values of the drop-out rate, batch size, drop-out period, and maximum-epoch parameters of ResNet50. The proposed optimized ResNet50 is compared with three other deep learning architectures, SqueezeNet, InceptionV3, and DenseNet201, and the results show its effectiveness in detecting citrus diseases. In the future, the proposed hybrid model can be applied to other datasets.
References 1. Sharif, M., Khan, M., Iqbal, Z., Azam, M., Lali, M., Javed, M.: Detection and classification of citrus diseases in agriculture based on optimized weighted segmentation and feature selection. Comput. Electron. Agric. 150, 220–234 (2018). https://doi.org/10.1016/j.compag.2018.04.023 2. Ali, H., Lali, M.I., Nawaz, M.Z., Sharif, M., Saleem, B.A.: Symptom based automated detection of citrus diseases using color histogram and textural descriptors”. Comput. Electron. Agric. 138, 92–104 (2017). https://doi.org/10.1016/j.compag. 2017.04.008 3. Fountsop, A., Fendji, J.E.K., Atemkeng, M.: Deep learning models compression for agricultural plants. Appl. Sci. 10(19), 6866 (2020). https://doi.org/10.3390/ app10196866 4. Liu, J., Wang, X.: Plant diseases and pests detection based on deep learning: a review. Plant Methods 17(22), 1–22 (2021). https://doi.org/10.1186/s13007-02100722-9 5. Hasan, R., Yusuf, S., Alzubaidi, L.: Review of the state of the art of deep learning for plant diseases: a broad analysis and discussion. Plants 9(10), 1302 (2020). https://doi.org/10.3390/plants9101302 6. Liu, Z., Xiang, X., Qin, J., Tan, Y., Zhang, Q., Xiong, N.: Image recognition of citrus diseases based on deep learning. Comput. Mater. Continua 66(1), 457–466 (2021). https://doi.org/10.32604/cmc.2020.012165 7. Xing, S., Lee, M.: Classification accuracy improvement for small-size citrus pests and diseases using bridge connections in deep neural networks. Sensors 20(17), 1–16 (2020). https://doi.org/10.3390/s20174992 8. Zeng, Q., Ma, X., Cheng, B., Zhou, E., Pang, W.: Gans-based data augmentation for citrus disease severity detection using deep learning. IEEE Access 8, 172882– 172891 (2020). https://doi.org/10.1109/access.2020.3025196 9. Faramarzi, A., Heidarinejad, M., Mirjalili, S., Gandomi, A.: Marine predators algorithm: a nature-inspired metaheuristic. Expert Syst. Appl. 152, 113377 (2020). https://doi.org/10.1016/j.eswa.2020.113377 10. Abdel-Basset, M., Mohamed, R., Mirjalili, S., Chakrabortty, R., Ryan, M.: An efficient marine predators algorithm for solving multi-objective optimization problems: analysis and validations. IEEE Access 9, 42817–42844 (2021). https://doi. org/10.1109/ACCESS.2021.3066323
11. Soliman, M., Hasanien, H., Alkuhayli, A.: Marine predators algorithm for parameters identification of triple-diode photovoltaic models. IEEE Access 8, 155832– 155842 (2020) 12. Rauf, H., Saleem, B., Lali, M., Khan, M., Sharif, M., Bukhari, S.: A citrus fruits and leaves dataset for detection and classification of citrus diseases through machine learning. Data Brief 26, 104340 (2019). https://doi.org/10.1016/j.dib.2019.104340 13. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 770–778. https://doi.org/10.1109/CVPR.2016.90
A Hybrid Quantum Deep Learning Approach Based on Intelligent Optimization to Predict the Broiler Energies
Ibrahim Gad1,5(B), Aboul Ella Hassanien2,5, Ashraf Darwish3,5, and Mincong Tang4
1 Faculty of Science, Tanta University, Tanta, Egypt
2 Faculty of Computers and AI, Cairo University, Cairo, Egypt
[email protected]
3 Faculty of Science, Helwan University, Cairo, Egypt
[email protected]
4 International Center for Informatics Research, Beijing Jiaotong University, Beijing, China
[email protected]
5 Scientific Research Group in Egypt (SRGE), Giza, Egypt
Abstract. The global population has undergone rapid growth, particularly in the second half of the last century; consequently, total meat production is increasing rapidly. Deep neural networks have been used to solve many problems in different areas, and the parameters of a deep learning model have a significant impact on its ability to map relationships between input and output data, so many techniques are used to determine and optimize these parameters. Quantum computing is a rapidly growing discipline that attracts the attention of a large number of researchers; although classical computers have limitations, quantum computing helps to overcome them and promises a step change in computational performance. This paper proposes combining a quantum deep learning (QDL) model with the genetic algorithm (GA) to determine the best values of the parameters of QDL networks. Data were collected for this study from 210 broiler farms in Mazandaran, Iran. The results show that the R2 indexes of the QDL model at the test stage for broiler meat and manure outputs are 0.81 and 0.79, respectively.

Keywords: Deep learning · Quantum computing · Genetic algorithms · Prediction

1 Introduction
Chicken meat has recently become one of the most popular foods worldwide, serving as a critical source of protein for a large number of people, especially
in highly populated countries. It is a good source of nutrients, proteins, and vitamins that contribute to a balanced diet [1, 2]. Global demand for meat keeps increasing, and meat production has more than tripled in the last 50 years; every year the world produces more than 340 million tonnes. Although pork and beef are the world's most common meats, poultry production is growing at the fastest rate. According to the FAO, 69 billion chickens, 1.5 billion pigs, 656 million turkeys, 574 million sheep, 479 million goats, and 302 million cattle were slaughtered for meat in 2018 [3]. Poultry is thus one of the world's main sectors, and its growth, development, and production are critical as the world's population rises and the demand for white meat increases [1, 4]. Due to the global increase in agricultural and livestock production, agricultural energy management is vital. An energy balance for a farming system can be carried out by analyzing and comparing its energy inputs and outputs. Energy is consumed in a variety of ways in the broiler industry; the average energy consumption of this sector is about 5% of that of other sectors, although this figure rises to 16–20% when waste is considered [1, 5]. Energy management therefore has the potential to increase efficiency, improve productivity, and reduce waste in this production. Several researchers have examined and evaluated the energy balance and production pattern of broilers. In [6], the authors found that using poultry manure in the United States can save 283 million gallons of fuel; in general, a poultry farm with a capacity of 10,000–110,000 chickens can produce up to 125 tonnes of manure per production period. Another study, in Nigeria, compared mechanized and semi-mechanized systems with traditional systems in terms of energy savings [1, 5]. Deep Neural Networks (DNNs), introduced in recent years, are becoming a powerful tool for modeling nonlinear and complex systems. Inspired by biological neural systems, DNNs are mathematically simplified representations of them; these networks can understand and generalize to yield meaningful solutions to complex problems even when the dataset is incomplete. Deep learning has proven itself as a cutting-edge technology for extensive data analysis, with exciting applications such as speech recognition, image processing, and object detection, and it has lately been used in food science, agriculture, meteorology, and engineering [1, 7]. Quantum computing is a quickly growing field that offers a gentle introduction to quantum mechanics [8]; it simplifies the fundamental principles of quantum mechanics by abstracting away the real world's complexities [9]. While manufacturers and researchers continue to struggle with the weaknesses of traditional processors (CPUs), quantum computing removes some of these constraints and offers a revolution in capacity, efficiency, computational power, and performance. Quantum machine learning explores the implications of using the new quantum computers for machine learning by introducing an entirely different computational system, the quantum computer, into the machine learning hardware pool. Variational quantum circuits, alternatively referred to as parameterized quantum circuits, are quantum algorithms that depend on
tunable parameters [10]. Since variational circuits operate on a principle similar to that of neural networks, they play a critical role in quantum machine learning [11–13]. Deep neural networks have had a significant effect on machine learning (ML) and the field of artificial intelligence in general over the last decade. Simultaneously, quantum algorithms have shown their ability to tackle some of the most complex problems encountered on conventional computers. By optimizing the corresponding objective function, quantum computing may offer a much more effective platform for deep learning than current classical systems. Quantum deep learning is an area of research that aims to develop neural network models that can leverage quantum information flowing through the network. In brief, quantum deep learning networks are composed of a finite number of quantum layers constructed entirely of quantum gates [14]. In this work, we propose a hybrid quantum deep learning model based on genetic algorithm optimization to predict the broiler output energies. The proposed model consists of hybrid quantum-classical neural networks, which combine classical and quantum components. Genetic algorithms are used to optimize the deep learning architectures whose quantum parts are formulated entirely with quantum gates. This paper is organized as follows. Section 2 presents the related work, followed by the methodology in Sect. 3. Section 4 presents the experimental observations and a detailed discussion. Finally, the conclusion and possible future work are presented in Sect. 5.
2 Related Work
Several researchers have developed various techniques to solve the problems associated with prediction analysis in poultry farming and energy forecasting. Taki et al. [15] carried out a study in Nigeria comparing mechanized and semi-mechanized systems to traditional systems in terms of energy savings. The findings indicated that the traditional system consumes the most energy at 50.36 MJ, followed by the semi-mechanized system at 28.4 MJ and the mechanized system at 17.83 MJ. Amini et al. [1] proposed an optimized RBF neural network model to forecast broiler production energies in Iran. The findings demonstrated that an improved RBF network can model broiler outputs with high precision and can be applied in the coming years. Pari et al. [16] conducted a study in Iran and concluded that approximately 16.5 million calories of energy are involved in producing one kilogram of meat; the energy efficiency of the units evaluated in that analysis was roughly 0.28 on average. Similarly, Chen et al. [17] performed a study to determine the broiler production and energy performance of poultry farms in China, using the Life Cycle Assessment (LCA) method to determine the energy usage and emission levels correlated with broiler production. Finally, several researchers have established robust models for predicting output energy for broiler production, such as Mahmoud et al. [18], Amid et al. [19] and Sefat et al. [20].
3 The Proposed Hybrid Approach

3.1 The Genetic Algorithms
The genetic algorithm is a stochastic global optimization algorithm and one of the most common and well-known biologically inspired algorithms. It is inspired by the biological concept of evolution by natural selection: it is an evolutionary algorithm in the sense that it implements an optimization step using a gene representation and basic operators based on genetic crossover and mutation [21,22]. The GA's main ingredients are a genetic representation for each solution, a fitness function, a crossover (genetic recombination) operator, and a mutation operator, as shown in Fig. 1. The algorithm starts by generating a population of random solutions of fixed size. The algorithm's main loop is repeated for a given number of iterations or until no further change in the best solution takes place. First, each candidate solution in the population is evaluated using the objective function representing the solution's fitness, which is intended to be minimized. Following that, parents are chosen depending on their fitness values. Any given solution can be used zero or several times as a parent to produce new children. A straightforward and efficient selection strategy involves randomly choosing a fixed number of individuals from the population and selecting the fittest member of the group. In a genetic algorithm, the parents are used for generating the next generation of individuals: two parents are required to produce two children. Recombination is accomplished using a crossover operator. This is achieved by selecting a random split point on the bit string and then creating a child from the first parent's bits up to the split point and the remainder of the string from the second parent; this is reversed for the second child. This is referred to as a one-point crossover, and the operator has several other variants. For each pair of parents, the crossover is applied with a given probability; otherwise, copies of the parents are used as children instead of applying the recombination operator. The mutation is the process of flipping bits in newly generated candidate solutions [23].
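To make these operators concrete, the following minimal sketch (an illustrative assumption, not the authors' implementation) shows tournament selection, one-point crossover, and bit-flip mutation over fixed-length bit strings, with a user-supplied fitness function to be minimized.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=20, generations=50,
                      crossover_rate=0.9, mutation_rate=0.02, tournament=3):
    # Random initial population of fixed-length bit strings
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        def select():
            # Tournament selection: fittest (lowest objective) of a few random picks
            return min(random.sample(pop, tournament), key=fitness)
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:
                point = random.randint(1, n_bits - 1)      # one-point crossover
                c1 = p1[:point] + p2[point:]
                c2 = p2[:point] + p1[point:]
            else:
                c1, c2 = p1[:], p2[:]                      # copies of the parents
            for child in (c1, c2):
                for i in range(n_bits):                    # bit-flip mutation
                    if random.random() < mutation_rate:
                        child[i] = 1 - child[i]
                children.append(child)
        pop = children[:pop_size]
        best = min(pop + [best], key=fitness)
    return best

# Example: minimize the number of ones in the bit string
print(genetic_algorithm(lambda bits: sum(bits)))
```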
3.2 Hybrid Quantum-Classical Deep Learning (QDL) Model
A hybrid QDL has a finite number of hidden quantum layers in the form of parameterized quantum circuits. These circuits are composed of quantum gates that perform operations on the qubits that characterize the quantum layer. The way the quantum gates modify the state of the qubits in a given layer is determined by the outputs of a preceding classical circuit, which serve as parameters for the rotation gates. The proposed QDL model is composed of a sequence of classical neural networks and quantum layers. This kind of model is widespread and suitable in many situations.
Fig. 1. Flowchart of the proposed framework.
However, more control is sometimes needed over how the model is constructed, for example when there are multiple inputs and outputs or when the output of one layer must be distributed to multiple subsequent layers. The proposed hybrid model consists of a K-neuron fully connected classical layer, a 2-qubit quantum layer connected to the first part of the neurons of that classical layer, a 2-qubit quantum layer connected to the second part of the neurons of that classical layer, a K-neuron fully connected classical layer which takes a 4-dimensional input from the combination of the previous quantum layers, and an activation function to produce the output [Y_1, Y_2] vector. The model's diagram is shown in Fig. 2. The input values [x_1, x_2, x_3, ..., x_6]^T of classical circuit 1 are fed into the hybrid quantum-classical neural network through the hidden layer activations [h_1^1, h_2^1, ..., h_{nodes}^1]^T. Equations 1 and 2 show the mathematical operations used to determine the layer's output:

h_1^1 = \sigma(x_1 w_{11}^1 + x_2 w_{12}^1 + \dots + x_6 w_{16}^1)    (1)

h_{nodes}^1 = \sigma(x_1 w_{nodes,1}^1 + x_2 w_{nodes,2}^1 + \dots + x_6 w_{nodes,6}^1)    (2)
where \sigma(\cdot) refers to the activation function and w denotes the weights. The hidden activations h_1, h_2, ..., h_{nodes} act as the rotation angle parameters of the gates R_x and R_y in the quantum circuit, and they modify the initial states |\Psi_1\rangle and |\Psi_2\rangle of the two qubits as follows:

|\Psi_3\rangle = R(h_1)\,|\Psi_1\rangle    (3)
The gate RX(\phi, wires) is used for a single-qubit X rotation. The mathematical formula of RX(\phi, wires) is described as follows:

R_x(\phi) = e^{-i\phi\sigma_x/2} = \begin{pmatrix} \cos(\phi/2) & -i\sin(\phi/2) \\ -i\sin(\phi/2) & \cos(\phi/2) \end{pmatrix}    (4)

The gate RY(\phi, wires) is used for a single-qubit Y rotation. The mathematical formulation of RY(\phi, wires) is given in Eq. 5:

R_y(\phi) = e^{-i\phi\sigma_y/2} = \begin{pmatrix} \cos(\phi/2) & -\sin(\phi/2) \\ \sin(\phi/2) & \cos(\phi/2) \end{pmatrix}    (5)
The gate Rot(\phi, \theta, \omega, wires) is an arbitrary single-qubit rotation, where \phi, \theta, \omega are rotation angles:

Rot(\phi, \theta, \omega) = RZ(\omega)\,RY(\theta)\,RZ(\phi) = \begin{pmatrix} e^{-i(\phi+\omega)/2}\cos(\theta/2) & -e^{i(\phi-\omega)/2}\sin(\theta/2) \\ e^{-i(\phi-\omega)/2}\sin(\theta/2) & e^{i(\phi+\omega)/2}\cos(\theta/2) \end{pmatrix}    (6)
CNOT(wires) is the controlled-NOT operator, where the first wire provided corresponds to the control qubit. The mathematical formula of CNOT is given in Eq. 7:

CNOT = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}    (7)

The PennyLane package can extract different measurement results from quantum devices: the expectation value of an observable, its variance, samples of a single measurement, or computational basis state probabilities. The measurement functions used here are: expval(op), which computes the expected value of the supplied observable; in addition, the combined Pauli-Z measurement of the first and second qubit returns a list of two lists, each containing the measurement results of the respective qubit. PauliZ(wires) represents the Pauli Z operator, whose mathematical formula is presented in Eq. 8:

\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}    (8)

The measurement of the qubits follows the unitary transforms of Eq. 3 in the computational basis. The measurement collapses the quantum information stored in the two qubits to classical information given by h_1, h_2, ..., h_{nodes}, respectively. The information in the last classical layer is used to produce the final predicted outputs Y_1 and Y_2 as follows:

Y_1 = \sigma(h_1^2 w_{nodes,1}^2 + h_2^2 w_{nodes,2}^2 + \dots + h_{nodes}^2 w_{nodes,6}^2)    (9)
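As a concrete illustration of the layer stack described above, the sketch below builds a small hybrid network with PennyLane's KerasLayer on top of TensorFlow/Keras. The embedding template (AngleEmbedding), the entangler layers, the number of variational layers, and the layer sizes are illustrative assumptions; only the overall structure (a classical dense layer feeding two 2-qubit quantum layers whose Pauli-Z expectations are concatenated and mapped to [Y1, Y2]) follows the text.

```python
import pennylane as qml
import tensorflow as tf

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_circuit(inputs, weights):
    # Encode the classical activations as rotation angles
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    # Trainable rotations and CNOT entanglement
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    # Pauli-Z expectation values collapse the qubits back to classical values
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (3, n_qubits)}  # 3 variational layers (assumption)

inputs = tf.keras.Input(shape=(6,))                      # X1..X6
h = tf.keras.layers.Dense(4, activation="elu")(inputs)   # classical layer
q1 = qml.qnn.KerasLayer(quantum_circuit, weight_shapes, output_dim=n_qubits)(h[:, :2])
q2 = qml.qnn.KerasLayer(quantum_circuit, weight_shapes, output_dim=n_qubits)(h[:, 2:])
merged = tf.keras.layers.Concatenate()([q1, q2])         # 4-dimensional input
outputs = tf.keras.layers.Dense(2, activation="sigmoid")(merged)  # [Y1, Y2]

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```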
3.3 GAQDL Model
Figure 1 shows the diagram of the proposed GAQDL model. The basic idea is to select different values for generation and population. Then, the first step is to create a population of random QDL models based on the setting values. QDL models’ parameters are represented by a range of values such as epochs, batch size, number of nodes, or activation function. So, the QDL model represents a candidate solution in the population, and it is described as a list of values (epochs, batch size, number of nodes, or activation function). An initial population of random QDL models can be created as follows, where “Generation” is
Fig. 2. An illustration of the hybrid quantum-classical neural networks.
a hyperparameter that controls the generation size, "Population" is a hyperparameter that controls the population size, and the list of values (epochs, batch size, number of nodes, or activation function) defines the parameters of a single candidate solution. The first step in each GA iteration is to evaluate all QDL individuals. R2 is used as the general objective function and is evaluated to obtain a fitness score, which we maximize. Following that, parents are chosen depending on their R2 fitness values. Any given QDL model can be used zero or several times as a parent to produce new children. A straightforward and efficient selection strategy involves randomly choosing a fixed number of individuals from the population and selecting the member of the group with the highest fitness. The main loop of the algorithm is repeated for a given number of iterations to find the best solution, i.e., the best R2 fitness value, as shown in Fig. 1.
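To make the candidate encoding concrete, the following sketch (hypothetical names, not the authors' code) represents each QDL candidate as a dictionary of hyperparameters drawn from the ranges of Table 2 and uses R2 on held-out data as the fitness to maximize; build_and_train is assumed to construct and fit a hybrid QDL model from such a dictionary.

```python
import random
from sklearn.metrics import r2_score

SEARCH_SPACE = {
    "nodes": list(range(10, 300, 100)),             # 10, 110, 210
    "activation": ["softmax", "relu", "elu", "tanh", "sigmoid"],
    "learning_rate": [0.00001, 0.001, 0.1],
    "batch_size": list(range(10, 30, 10)),           # 10, 20
    "epochs": list(range(50, 300, 50)),
    "validation_split": [0.2, 0.3],
}

def random_candidate():
    # One candidate solution = one QDL hyperparameter configuration
    return {name: random.choice(values) for name, values in SEARCH_SPACE.items()}

def initial_population(population_size):
    return [random_candidate() for _ in range(population_size)]

def fitness(candidate, build_and_train, X_val, y_val):
    # build_and_train is assumed to return a trained hybrid QDL model
    model = build_and_train(**candidate)
    return r2_score(y_val, model.predict(X_val))

def tournament_select(population, scores, k=3):
    picks = random.sample(list(zip(population, scores)), k)
    return max(picks, key=lambda pair: pair[1])[0]   # highest R2 wins
```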
4 Experimental Results and Evaluations
1) Dataset Description: The dataset used in this study was collected in Qaemshahr, Iran, between July and September 2016. Qaemshahr was chosen as the sample population due to its substantial poultry population in Mazandaran province, Iran. The dataset contains details about poultry farms, including the various inputs used to measure their energy equivalents, such as the amount of fuel consumed, the amount of electricity consumed, the amount of human labor used during the production process, and the machinery used. In this study, all input energies are classified as direct energy (chickens (X1), human power (X2), diesel fuel (X4), food (X5), and electricity (X6)) or indirect energy (including day-old chicks (X1) and machinery (X3)). A poultry farm system's significant outputs are the meat obtained (Y1) and the litter, which includes manure (Y2), feathers, and also some food. Table 1 presents sample data taken from the dataset used in this research.

Table 1. Dataset description

   Chick (X1)  Labor (X2)  Machinery (X3)  Diesel fuel (X4)  Feed (X5)  Electricity (X6)  Broiler (Y1)  Manure (Y2)
1  48.6671     62.5887     1.50606         1323.78           2972.41    2581.18           2225.13       1692.53
2  41.2588     93.1777     1.7822          1662.12           4222.17    2840.6            1243.49       2439.61
3  35.4692     76.4165     2.0289          1483.24           4178.19    3092.01           1753.05       743.831
4  41.5885     72.0903     2.35251         1733.51           4800.48    3437.47           2177.75       2034.73
5  41.516      105.721     1.96237         1751.59           4015.7     2901.22           1768.24       1524.09
4.1 Results Analysis
This section discusses the experimental findings obtained with the proposed GAQDL framework. To obtain comparative results for various generations using the proposed model, several experiments were performed on the selected Iranian data. Using a multicore CPU, the genetic algorithms run in parallel and independently of one another. Additionally, each core has its own range of generations and its own population of QDL models; as a result, these models are distinct from those of another core due to their unique parameter values for each individual, mutation rate, and crossover rate (a parallel-evaluation sketch is given after Table 2). In Fig. 2, G represents the number of generations for each core, and P represents the corresponding population size. Each core is assigned a specific pair (G, P); then the initial population of individuals (QDL models) is randomly generated, and a set of parameters for these models is initialized. The values and combinations of the different parameters of the proposed model are shown in Table 2. The GA operators are applied to this population, and the best R2 is saved. After that, the GA is used to determine the optimum fitness values across several populations. The following step is to choose the parameter combination that produces the smallest error or the best R2, which is hence assigned to the best model. Moreover, the experiments were conducted on an IBM Quantum computer with the following specifications: 8 CPUs, 32 gigabytes of RAM, and two qubits [24]. The IBM Quantum platform is based on QASM, the Quantum Assembly language, and the Qiskit library, a higher-level quantum library that can be used within the Python programming language; Qiskit has been available as open source since its introduction in 2017. A typical workflow submits a job from a classical computer to an IBM quantum computer; the quantum computer then executes the job and returns the quantum measurement results to the classical computer. For the deep learning part, in addition to leading open-source frameworks such as TensorFlow, Keras, fast.ai, and PyTorch, the PennyLane-Qiskit plugin supports integration with Qiskit, IBM's open-source quantum computation framework, and provides device support for the Qiskit Aer quantum simulators and IBM Q hardware devices. PennyLane is an open-source, cross-platform Python library for differentiable programming of quantum computers, allowing a quantum
computer to be trained in much the same way as a neural network. PennyLane provides open-source tools for quantum machine learning and facilitates the training of variational quantum circuits [13,25].

Table 2. The setting values for different parameters of the proposed model.

Parameter            Values
Nodes                Range(10, 300, 100)
Activation function  [Softmax, Relu, Elu, Tanh, Sigmoid]
Learning rate        [0.00001, 0.001, 0.1]
Batch size           Range(10, 30, 10)
Epochs               Range(50, 300, 50)
Validation split     [0.2, 0.3]
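The per-core assignment of (G, P) pairs described above can be sketched with Python's multiprocessing module; run_ga is a placeholder for one complete GA search over QDL models (e.g. built from the operators in Sect. 3.1) that returns the best R2 it found for its pair.

```python
from multiprocessing import Pool

def run_ga(config):
    generations, population = config
    # Placeholder: run one complete GA search over QDL candidates for this
    # (G, P) pair and return the best validation R2 it finds.
    best_r2 = 0.0
    return generations, population, best_r2

if __name__ == "__main__":
    configs = [(g, p) for g in (10, 20, 50, 100)
               for p in (10, 50, 100, 200, 300, 400, 500)]
    with Pool(processes=8) as pool:           # one worker per CPU core
        results = pool.map(run_ga, configs)   # independent GA runs in parallel
    print(max(results, key=lambda r: r[2]))   # configuration with the best R2
```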
Table 3 summarises the error metrics obtained for each generation and population. Each of the evaluated generations was measured against several populations to determine the population's maximum R2 value. Concerning the first generation, 10, and the first population, 10, the model's metric R2 has a maximum value of 81%. The top model's parameters (number of nodes, activation function, learning rate, batch size, epochs, validation split, R2 Y1, MSE Y1, RMSE Y1, MAE Y1, R2 Y2, MSE Y2, RMSE Y2, MAE Y2) are 210, elu, 0.1, 10, 200, 0.2, 0.8117, 0.0178, 0.1333, 0.1115, 0.7915, 0.0197, 0.1404, 0.1142. In the second population, 50, the model's R2 reaches a maximal value of 0.7856. In the third population, R2 has a value of 0.7546 for QDL. Finally, with population 500, QDL reaches a value of 0.6952. In general, the fitness values for the populations of the first generation vary, and some of these R2 values are very close to each other. For example, R2 has nearly equivalent values for populations 200 and 400, indicating that the values of R2 become very similar as the population size grows. Similarly, the model's R2 has a maximum value of 0.7987 for the second generation, 20, and the first population, 10. For population 300, the model's R2 has a minimum value of 0.6316 for QDL. In general, the majority of fitness function (R2) values are very close to each other for the selected generations, as these values are almost identical when all generations are included. R2 values are highly sensitive to the population and generation size due to the random selection of the original population for each generation. As a result of this randomization, determining the appropriate population size and individuals is very complicated; thus, it is critical to choose the proper population and generation size. In this study, we considered a generation of 10 with a population of 10 because it produces the highest R2 values, as measured by the fitness (R2) value of 81% for the QDL model.
Table 3. The results of GAQDL performance for broiler meat and manure energy prediction (columns: Generations, Population, Nodes, Activation function, Learning rate, Batch size, Epochs, Validation split, R2 Y1, MSE Y1, RMSE Y1, MAE Y1, R2 Y2, MSE Y2, RMSE Y2, MAE Y2)
Although the proposed GAQDL enhanced the performance of the QDL model, it did not always find the optimal solution, because genetic algorithms generate solutions only within the search space of the problem. From Table 3, GAQDL produces statistically significant results, with RMSE values of 0.1333 and 0.1404 for broiler meat and manure energy, respectively, as compared to the RBF neural network model [1], which achieved 0.18 and 0.01 for broiler meat and manure energy on the same dataset.
5 Conclusion
The predictive capacity of classical deep learning methods is limited when estimating future values, due to the large number of parameters that must be chosen and the long time needed to train the model. In this paper, the proposed quantum deep learning model uses both classical and quantum layers in its architecture. One of the advantages of quantum deep learning is that, if the quantum gates are wisely chosen, it uses fewer parameters than its classical counterpart. Genetic algorithms enable the estimation of these parameters with high confidence and are used to estimate the parameters of the QDL model in this work. As a result, the proposed GAQDL forecasting model improves energy prediction accuracy. GAQDL produces statistically significant results for broiler meat and manure energy of 0.1333 and 0.1404, respectively.
References 1. Amini, S., Taki, M., Rohani, A.: Applied improved RBF neural network model for predicting the broiler output energies. Appl. Soft Comput. 87, 106006 (2020). https://doi.org/10.1016/j.asoc.2019.106006 2. Omomule, T.G., Ajayi, O.O., Orogun, A.O.: Fuzzy prediction and pattern analysis of poultry egg production. Comput. Electron. Agric. 171, 105307 (2020). https:// doi.org/10.1016/j.compag.2020.105301 3. FAOSTAT. Food and agriculture organization of the united nations (fao), production of chicken meat (2018). http://www.fao.org/faostat/en/?#data/, Accessed 2020 4. Parastar, H., van Kollenburg, G., Weesepoel, Y., van den Doel, A., Buydens, L., Jansen, J.: Integration of handheld NIR and machine learning to “measure & monitor” chicken meat authenticity. Food Control 112, 107149 (2020). https:// doi.org/10.1016/j.foodcont.2020.107149 5. Fluck, R.C.: Energy in Farm Production. Elsevier, Amsterdam (1992). https:// doi.org/10.1016/c2009-0-00488-7 6. Kalhor, T., Rajabipour, A., Akram, A., Sharifi, M.: Modeling of energy ratio index in broiler production units using artificial neural networks. Sustain. Energy Technol. Assess. 17, 50–55 (2016). https://doi.org/10.1016/j.seta.2016.09.002 7. Gad, I., Hosahalli, D., Manjunatha, B.R., Ghoneim, O.A.: A robust deep learning model for missing value imputation in big NCDC dataset. Iran J. Comput. Sci. 4, 67–84 (2020). https://doi.org/10.1007/s42044-020-00065-z
8. Ferry, D.: An introduction to quantum computing. In: Quantum Mechanics, pp. 267–293. CRC Press (2020). https://doi.org/10.4324/9781003031949-11 9. Scherer, W.: Basic notions of quantum mechanics. In: Mathematics of Quantum Computing, pp. 11–75. Springer, Cham (2019). https://doi.org/10.1007/978-3030-12358-1 2 10. Huembeli, P., Dauphin, A.: Characterizing the loss landscape of variational quantum circuits. Quant. Sci. Technol. 6(2), 025011 (2021). https://doi.org/10.1088/ 2058-9565/abdbc9 11. Gruyter, D.: Introduction to quantum machine learning. In: Quantum Machine Learning, pp. 1–10 (2020). https://doi.org/10.1515/9783110670707-001 12. Stokes, J., Izaac, J., Killoran, N., Carleo, G.: Quantum natural gradient. Quantum 4, 269 (2020). https://doi.org/10.22331/q-2020-05-25-269 13. Bergholm, V., et al.: Pennylane: automatic differentiation of hybrid quantumclassical computations. arXiv arXiv:1811.04968 (2018) 14. Pattanayak, S.: Quantum deep learning. In: Quantum Machine Learning with Python, pp. 281–306. Apress (2021). https://doi.org/10.1007/978-1-4842-6522-2 6 15. Taki, M., Ajabshirchi, Y., Ranjbar, S.F., Rohani, A., Matloobi, M.: Modeling and experimental validation of heat transfer and energy consumption in an innovative greenhouse structure. Inf. Process. Agric. 3(3), 157–174 (2016). https://doi.org/ 10.1016/j.inpa.2016.06.002 16. Sefeedpari, P., Rafiee, S., Akram, A., Chau, K.W., Pishgar-Komleh, S.H.: Prophesying egg production based on energy consumption using multi-layered adaptive neural fuzzy inference system approach. Comput. Electron. Agric. 131, 10–19 (2016). https://doi.org/10.1016/j.compag.2016.11.004 17. Chen, L., Xing, L., Han, L.: Rapid evaluation of poultry manure content using artificial neural networks (ANNs) method. Biosyst. Eng. 101(3), 341–350 (2008). https://doi.org/10.1016/j.biosystemseng.2008.09.005 18. Omid, M., Khanali, M., Zand, S.: Energy analysis and greenhouse gas emission in broiler farms: a case study in Alborz province, Iran. Agric. Eng. Int. 19(4), 183–190 (2018). https://cigrjournal.org/index.php/Ejounral/article/view/4157 19. Amid, S., Gundoshmian, T.M.: Prediction of output energies for broiler production using linear regression, ANN (MLP, RBF), and ANFIS models. Environ. Prog. Sustain. Energy 36(2), 577–585 (2016). https://doi.org/10.1002/ep.12448 20. Sefat, M.Y.: Application of artificial neural network (ANN) for modelling the economic efficiency of broiler production units. Indian J. Sci. Technol 7(11), 1820–1826 (2014). https://doi.org/10.17485/ijst/2014/v7i11.17 21. Farsi, M., et al.: Parallel genetic algorithms for optimizing the SARIMA model for better forecasting of the NCDC weather data. Alexandria Eng. J. 60(1), 1299–1316 (2021). https://doi.org/10.1016/j.aej.2020.10.052 22. Maghawry, A., Hodhod, R., Omar, Y., Kholief, M.: An approach for optimizing multi-objective problems using hybrid genetic algorithms. Soft Comput. 25(1), 389–405 (2020). https://doi.org/10.1007/s00500-020-05149-3 23. Yoon, H.: Fitness-orientated mutation operators in genetic algorithms. Int. J. Innov. Technol. Explor. Eng. 9(4), 1769–1772 (2020). https://doi.org/10.35940/ ijitee.d1692.029420 24. IBM. 5-qubit backend, IBM Q team. IBM Q 5 yorktown backend specification v0.12.3 (2021). https://quantum-computing.ibm.com 25. Mari, A., Bromley, T.R., Izaac, J., Schuld, M., Killoran, N.: Transfer learning in hybrid classical-quantum neural networks. Quantum 4, 340 (2020). https://doi. org/10.22331/q-2020-10-09-340
Modeling Satellites Transactions as Space Digital Tokens (SDT): A Novel Blockchain-Based Approach Mohamed Torky1(B) , Tarek Gaber2,3 , Essam Goda4 , Aboul Ella Hassanien5 , and Mincong Tang6 1 Department of Computer Science, Higher Institute of Computer Science and Information
Systems, Culture and Science City, 6th October City, Egypt [email protected] 2 School of Science, Engineering, and Environment, University of Salford, Salford, UK 3 Faculty of Computers and Informatics, Suez Canal University, Ismailia 41522, Egypt 4 Department of Computer Science, College of Computing and Information Technology (CCIT), Arab Academy for Science, Technology, and Maritime Transport (AASTMT), Cairo, Egypt 5 Faculty of Computer and Artificial Ingelligence, Cairo University, Cairo, Egypt [email protected] 6 International Center for Informatics Research, Beijing Jiaotong University, Beijing, China [email protected]
Abstract. Blockchain technology can play a novel role in the space industry and satellite communications. This disruptive technology can be used for building decentralized and secure techniques for processing and manipulating satellite transactions using a new concept called space digital tokens (SDT). Tokenizing space transactions within a satellite swarm in the form of SDTs will enable a variety of blockchain-based applications in the space mining industry, such as designing new blockchain protocols for tracking, managing, and securing all space transactions and communications in a transparent, verifiable, and immutable way. In this paper, a novel blockchain-based model is proposed for modeling satellite transactions as Space Digital Tokens (SDT). The Ethereum blockchain was used in the implementation of this model, and its performance was evaluated using transaction time, transaction latency, read latency, and Ethereum GAS usage. Keywords: Blockchain · Space digital tokens (SDT) · Satellites transactions
M. Torky, T. Gaber, E. Goda, and A.E. Hassanien—Scientific Research Group in Egypt (SRGE). www.egyptscience.net

1 Introduction

Blockchain technology has quickly become one of the emergent technologies to address many challenges in the global space industry [1–3]. It has been used in various applications such as Blockstream Satellite [4], managing and securing satellite communications
[5], broadcasting various patterns of space data [6], and managing new procedures of Internet access based on satellite internet providers [7]. The main idea of such applications is to model satellites as nodes in the blockchain network. Each satellite node can work as a participating node that stores data or as a validating node that verifies and validates space data and then adds a block to the blockchain. In this scenario, satellites can store, receive, and broadcast blockchain data and apps. Hence, satellite networks can be established as the infrastructure on which end-users store data and perform transactions. Moreover, data sharing between any two satellites or other two entities in space will be easier and more secure, and Blockchain will enhance the navigation and accuracy of shared data [8]. The first blockchain-based satellite was Blockstream [4], launched by a company in 2017 to broadcast Bitcoin worldwide. On the security side, in 2018, the Singapore Space and Technology Association (SSTA) also launched the first open-source satellite network, SpaceChain [9], which enables end-users to develop and run decentralized blockchain-based applications in space. As Blockchain moves into outer space applications, the SPACE website in [10] referred to the capability of Blockchain to tokenize (digitize physical space assets so that they can be processed digitally) spacecraft and satellite payloads, which could help in massive upcoming space projects such as the Lunar Gateway project, the space station NASA wants to build in lunar orbit [11]. This paper investigates the possibility of modeling satellite transactions using a new space digital token (SDT) concept. By tokenizing space transactions within satellite swarms in the form of SDTs and processing them using Blockchain, we believe it will be possible to develop a variety of decentralized applications in the space mining industry, such as tracking, managing, and securing all space transactions and communications in a transparent, verifiable, and immutable way. Therefore, in this paper, a new conceptual blockchain model is suggested together with the novel SDT concept. To the best of the authors' knowledge, this model is the first proposal that uses blockchain technology for modeling space transactions as space digital tokens (SDT). The proposed model has been simulated using the Ethereum blockchain platform and evaluated through four metrics: transaction time, transaction latency, read latency, and Ethereum GAS usage. The rest of this paper is organized as follows: Sect. 2 discusses the related work, Sect. 3 presents the proposed SDT concept and how it can be implemented through a proposed blockchain model, Sect. 4 presents the simulation results, and finally, Sect. 5 provides the conclusion of this study.
2 Literature Review

Although using Blockchain in the space industry is still in its early stages, the literature has provided some research efforts that discussed the adoption of Blockchain to solve various challenges in the space industry and satellite communications [12]. In terms of the security and privacy of satellite communications, a token-based access control mechanism has been proposed to implement smart blockchain-based intrusion detection to detect attacks against satellite communication systems [13]. Feng and Hao Xu in [5] studied the security problem for mobile satellite communication networks (MSNET). The authors proposed a new security framework for securing mobile satellite communication
networks based on reformulating satellite communication networks as delay-tolerant networks (DTN). The Blockchain is used with the DTN to 1) secure data transactions and 2) resist unexpected cyber-attacks that target mobile satellite networks, by integrating Blockchain with a practical satellite constellation management algorithm. Songjie Wei et al. in [14] proposed a fast and efficient access verification protocol called the Blockchain-based Access Verification Protocol (BAVP) that depends on integrating identity-based encryption and blockchain technology for authenticating LEO satellite constellations. BAVP represents a good alternative to the traditional centralized authentication protocols that work for MEO/GEO satellite networks. The simulation results on the OPNET platform demonstrated the reliability, effectiveness, and fast-switching efficiency of the proposed protocol. Ronghua Xu et al. in [15] proposed a blockchain-based access control mechanism for addressing both access authorization issues and identity authentication in the distributed space network environment. The authors implemented the proposed mechanism on both resource-constrained edge devices and more powerful devices and deployed it on a local private Ethereum blockchain network for evaluating the computational and timeliness performance of the proposed access control mechanism. Dawei Li et al. in [16] proposed a secure and anonymous data transmission system to enhance the space information network (SIN). The simulation results of this system indicate that it is semantically secure, anonymous, and traceable, with a perfect zero-knowledge proof, and its efficient performance shows that it can be applied in space information networks (SIN) [17, 18]. Satellite data broadcasting is another problem that has been investigated in blockchain adoption in the space industry. Zhang YH and Liu XF in [19] introduced a new blockchain protocol that works based on satellite broadcasting communications instead of the traditional Internet for data dissemination. Simulation results showed that the proposed technique achieved a lower communication cost and can improve the throughput of the blockchain system to 6,000,000 TPS with a 20 Gbps satellite bandwidth. Lillian Clark et al. in [20] proposed a blockchain-based reputation system for developing a decentralized system for satellite relay networks. The authors tested the efficiency of the proposed system by investigating and validating the network performance in terms of average latency, computational complexity, and storage considerations for various use cases. The results showed that the proposed system reduces the average data latency across the satellite relay network. Space debris collision is also an important problem that can be mitigated by leveraging Blockchain in space tracking systems. Swapnil A Surdi in [21] investigated the possibility of using Blockchain for tracking space debris based on decentralized sensing networks instead of predictions by distant systems. The new blockchain-based network prioritizes space information about the distances between satellites and orbital debris and also shares predicted collisions or close pass-bys. This data supports the statistical analysis for the conjunction assessment between satellites and orbital debris.
Although these research works represent a good step towards adopting Blockchain in the space industry, the term space transactions, on which Blockchain protocols should be built, is still vague and needs more standardization. We address this in this study by
proposing a new concept called space digital tokens (SDTs) that can be used as a base digital token for space transactions, which novel blockchain protocols can process.
3 Space Digital Tokens (SDT)-Based Model

This section proposes a new blockchain model based on a new concept called space digital tokens (SDT). SDT refers to tokenizing space transactions as digital tokens that can be processed using a blockchain protocol for authenticating space transactions between two satellites, between a satellite and a spacecraft, or between a satellite/spacecraft and a ground station. To establish a blockchain network for managing space transactions, we have to formulate some important assumptions:

1) Satellites revolve around the earth within a swarm (or a constellation). Each swarm of satellites is managed by a country or a space agency.
2) The communication between satellites within a specific swarm is conducted through a peer-to-peer network.
3) Space transaction patterns are represented as SDTs that can be verified using a blockchain protocol (e.g. Ethereum). An SDT can be a satellite-to-satellite transaction or sensing data between a satellite and orbital debris.

According to the above assumptions, Blockchain can play two important roles:

1) Blockchain as an authenticator and verifier: Through this role, Blockchain verifies all space transaction patterns that can occur within a specific satellite swarm.
2) Blockchain as a tracer: SDTs can also represent sensing data between satellites and orbital debris. Hence, Blockchain can work in this scenario as a tracking system for detecting expected collisions between satellites and close orbital debris, so that a specific maneuver algorithm can be executed before the collision.

Figure 1 depicts a proposed model explaining how space transactions can be modeled as Space Digital Tokens (SDT) within two swarms (or two constellations) of satellites. In this scenario, two kinds of space transactions can occur within those swarms: a satellite-to-satellite transaction or a satellite-to-debris transaction. The Blockchain protocol is responsible for verifying these transactions and then updating the blockchain with a new valid block. Thus, the space stakeholders of each satellite swarm become able to access the newly added blocks through the dashboard connected to the blockchain platform. Algorithm 1 specifies how the Blockchain can be constructed based on processing SDTs in space. First, each space transaction has to be converted to a space digital token (SDT). Then, this new transaction (i.e., the SDT) has to be verified using a blockchain protocol (or a consensus) to confirm the validity of the specific transaction between two satellites in the same constellation. Finally, if the handled SDT is valid, a new block is added to the Blockchain. This new block will contain all the needed details of the new space transaction.
Fig. 1. Modeling space digital tokens (SDT) using Blockchain

Algorithm 1: Processing Space Digital Tokens (SDT) using Blockchain
Input: Space Transactions: ST = (T1, T2, T3, ..., Tn)
Output: Blockchain + new Block
Procedure: Processing Space Digital Tokens
  For all transactions Ti in ST
    Convert Ti to a Space Digital Token (SDT)
    Blockchain Consensus(SDT)
    If SDT is valid
      New Block = Add(SDT)
      Blockchain = Blockchain + New Block
    Else
      Print "invalid space digital token"
End Procedure
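A minimal Python rendering of Algorithm 1 is sketched below; the SDT fields, the validity check, and the hashing scheme are illustrative assumptions rather than the protocol actually deployed on Ethereum.

```python
import hashlib
import json
import time

def to_sdt(transaction):
    # Tokenize a space transaction as a Space Digital Token (SDT)
    return {"payload": transaction, "timestamp": time.time()}

def consensus_is_valid(sdt):
    # Placeholder for the blockchain consensus / verification step
    return bool(sdt["payload"])

def add_block(blockchain, sdt):
    previous_hash = blockchain[-1]["hash"] if blockchain else "0" * 64
    body = {"sdt": sdt, "previous_hash": previous_hash}
    block_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    blockchain.append({**body, "hash": block_hash})

blockchain = []
space_transactions = [{"from": "S1", "to": "S2"}, {"from": "S3", "to": "S5"}]
for t in space_transactions:
    sdt = to_sdt(t)
    if consensus_is_valid(sdt):
        add_block(blockchain, sdt)
    else:
        print("invalid space digital token")
print(len(blockchain), "blocks added")
```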
4 Results

To implement the proposed concept of modeling satellite transactions as space digital tokens (SDT), we used the Ethereum blockchain to simulate it. We created a dataset consisting of five satellite transactions that occurred within a swarm consisting of five satellites. To assess the blockchain performance in processing the handled satellite transactions, we evaluated four key metrics [22]:
1) Transaction time: the time in milliseconds used to execute a transaction between two satellites. Transaction time is calculated automatically by Ethereum while implementing the handled transactions.
2) Transaction Latency (TL): a network-wide view of the amount of time taken for a transaction's effect to be usable across the network. The measurement includes the time from which the transaction is created between two satellites to the point at which the result is widely available in the blockchain network. TL is calculated as in Eq. 1:

TL = Confirmation time − Submit time    (1)

3) Read Latency (RL): the time between when the read request is submitted and when the reply is received. RL is calculated as in Eq. 2:

RL = Time when the response is received − Submit time    (2)
4) Ethereum GAS usage: the term gas refers to the smallest unit of work that is processed on the Ethereum blockchain network. Confirming and verifying transactions on the Ethereum blockchain requires a certain amount of gas, depending on the size and type of each transaction. Gas thus measures the amount of work miners need to do to include transactions in a block, and it is calculated automatically by Ethereum while implementing the handled transactions (a measurement sketch using these definitions is given after Table 1).

Table 1 summarizes the obtained results of evaluating blockchain performance while implementing five satellite transactions using Ethereum.

Table 1. Blockchain performance results of simulating five satellite transactions (i.e. five SDTs)

Transactions  Time    TL         RL         GAS
S1 to S2      173 ms  00.143017  00.006868  22024
S3 to S5      160 ms  00.255181  00.006397  21064
S2 to S4      169 ms  00.257728  00.008280  22024
S4 to S3      180 ms  00.179175  00.006009  21064
S4 to S1      165 ms  00.201464  00.004335  21064
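The latency figures reported in Table 1 can be collected by timestamping a transaction's life cycle. The sketch below uses web3.py against a local Ethereum node; the endpoint, accounts, and transaction payload are placeholders, and an unlocked development node is assumed.

```python
import time
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))    # local Ethereum node
sender, receiver = w3.eth.accounts[0], w3.eth.accounts[1]

submit_time = time.time()
tx_hash = w3.eth.send_transaction({"from": sender, "to": receiver, "value": 1})

receipt = w3.eth.wait_for_transaction_receipt(tx_hash)    # confirmation
confirmation_time = time.time()

read_start = time.time()
_ = w3.eth.get_transaction(tx_hash)                       # read the transaction back
read_reply_time = time.time()

print("Transaction latency (TL):", confirmation_time - submit_time)   # Eq. (1)
print("Read latency (RL):", read_reply_time - read_start)             # Eq. (2)
print("Gas used:", receipt.gasUsed)
```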
Figure 2 depicts the transaction times in milliseconds (ms), Fig. 3 depicts the transaction latency, and Fig. 4 depicts the read latency of the five transactions in our experiment.
Fig. 2. Transaction time (in ms) results.
Fig. 3. Transaction latency (TL) results.
Fig. 4. Read latency (RL) results.
5 Conclusion

This study aimed to investigate the possibility of modeling space transactions using Blockchain. The study introduced a new concept called space digital tokens (SDT) for representing satellite transactions and processing them using a blockchain protocol. A new conceptual blockchain model is proposed for realizing the SDT concept, and its functionality has been simulated and tested using the Ethereum blockchain platform. The simulation is conducted to test the transaction time, transaction latency, read latency, and Ethereum GAS usage of five SDTs representing five different satellite transactions. Although the obtained results are initial, they represent a basic and real step toward using blockchain technology in the space industry and satellite communications. Moreover, they can be considered a good indicator for developing new blockchain protocols for
managing various patterns of space transactions and communications. This is what we aim to achieve in future work.
References 1. Molesky, M.J., Cameron, E.A., Jones, J., Esposito, M., Cohen, L., Beauregard, C.: Blockchain network for space object location gathering. In: 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), 1 Nov 2018, pp. 1226–1232. IEEE (2018) 2. Torky, M., Gaber, T., Hassanien, A.E.: Blockchain in Space Industry: Challenges and Solutions (27 Feb 2020). arXiv preprint, arXiv:2002.12878 3. Cheng, S., Gao, Y., Li, X., Du, Y., Du, Y., Hu, S.: Blockchain application in space information network security. In: International Conference on Space Information Network 9 Aug 2018, pp. 3–9. Springer, Singapore (2018) 4. Blockstream: The Bitcoin blockchain from space. https://blockstream.com/satellite/#:~:text= Blockstream%20Satellite%20broadcasts%20the%20Bitcoin,Bitcoin’s%20dependency% 20on%20internet%20access.&text=By%20eliminating%20cost%20barriers%2C%20peop le,participate%20in%20the%20Bitcoin%20network. Accessed 20 May 2021 5. Feng, M., Xu, H.: MSNET-blockchain: a new framework for securing mobile satellite communication network. In: 2019 16th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), 10 Jun 2019, pp. 1–9. IEEE (2019) 6. Zhang, Y.H., Liu, X.F.: Satellite broadcasting enabled blockchain protocol: a preliminary study. In: 2020 Information Communication Technologies Conference (ICTC), 29 May 2020, pp. 118–124. IEEE (2020) 7. Michael, S.: SpaceX: Over 500,000 Orders for Starlink Satellite Internet Service Received to Date. https://www.cnbc.com/2021/05/04/spacex-over-500000-orders-for-starlink-satell ite-internet-service.html. Accessed 21 May 2021 8. Elizabeth, H.: Could Blockchain Tech Launch Spacefaring Nations Into a Data-Sharing Frontier. https://www.space.com/blockchain-for-space-cooperation.html. Accessed 21 May 2021 9. Spacechain, decentralized space agency: White paper v1.0 2018 1049. https://spacechain. com/wp-content/uploads/2018/09/1050SpaceChain-Technical-White-Paper.pdf. Accessed 21 May 2021 10. Elizabeth, H.: How Blockchain can change the space industry. https://www.space.com/blo ckchain-cryptography-change-space-industry.html. Accessed 21 May 2021 11. NASA, Lunar Gateway. https://sacd.larc.nasa.gov/smab/smab-projects/lunar-gateway/. Accessed 21 May 2021 12. Ahmad, R.W., Hasan, H., Yaqoob, I., Salah, K., Jayaraman, R., Omar, M.: Blockchain for aerospace and defense: Opportunities and open research challenges. Comput. Ind. Eng. 151, 106982 (2021) 13. Cao, S., Dang, S., Zhang, Y., Wang, W., Cheng, N.: A blockchain-based access control and intrusion detection framework for satellite communication systems. Comput. Commun. 15(172), 216–225 (2021) 14. Wei, S., Li, S., Liu, P., Liu, M.: BAVP: blockchain-based access verification protocol in LEO constellation using IBE keys. Secur. Commun. Netw. 14, 2018 (2018) 15. Xu, R., Chen, Y., Blasch, E., Chen, G.: Exploration of blockchain-enabled decentralized capability-based access control strategy for space situational awareness. Opt. Eng. 58(4), 041609 (2019)
16. Li, D., Liu, J., Liu, W.: Secure and anonymous data transmission system for cluster organized space information network. In: 2016 IEEE International Conference on Smart Cloud (SmartCloud), 18 Nov 2016, pp. 228–233. IEEE (2016) 17. Zhao, W., Zhang, A., Li, J., Wu, X., Liu, Y.: Analysis and design of an authentication protocol for space information network. In: MILCOM 2016–2016 IEEE Military Communications Conference, 1 Nov 2016, pp. 43–48. IEEE (2016) 18. Cheng, S., Gao, Y., Li, X., Du, Y., Du, Y., Hu, S.: Blockchain application in space information network security. In: International Conference on Space Information Network, 9 Aug 2018, pp. 3–9. Springer, Singapore (2018) 19. Zhang, Y.H., Liu, X.F.: Satellite broadcasting enabled blockchain protocol: a preliminary study. In: 2020 Information Communication Technologies Conference (ICTC), 29 May 2020, pp. 118–124. IEEE (2020) 20. Clark, L., Tung, Y.C., Clark, M., Zapanta, L.: A blockchain-based reputation system for small satellite relay networks. In: 2020 IEEE Aerospace Conference, 7 Mar 2020, pp. 1–8. IEEE (2020) 21. Surdi, S.A.: Space situational awareness through blockchain technology. J. Space Safety Eng. 7(3), 295–301 (2020) 22. Hyperledger Blockchain Performance Metrics. https://www.hyperledger.org/learn/publicati ons/blockchain-performance-metrics. Accessed 19 May 2021
A Derain System Based on GAN and Wavelet Xiaozhang Huang(B) School of Economics and Management, Beijing Institute of Graphic Communication, Beijing, China [email protected]
Abstract. This paper proposes an image rain removal method based on wavelet thresholding and generative adversarial networks for removing rain traces on the target image to restore the original image. This paper first uses wavelet threshold denoising to pre-process the image to remove the non-rain noise and part of the image’s rain noise. After that, this paper uses the generator in the adversarial generative network to complete the rain-containing images’ de-rain operation. The generators and discriminators in the adversarial generative network need to be trained in advance, and the artificially generated dataset is used for training in this paper. Finally, this paper conducts experiments on the actual rain trace image dataset and artificially generated rain trace image dataset, respectively, to verify the method’s effectiveness in this paper. Keywords: Image derain · GAN · Wavelet threshold denoising
This work was supported by the grant from the Program of Beijing Social Science Foundation Project (Grant No. 20XCB009).

1 Introduction

With the rapid development of computer vision and IoT technologies, intelligent vision systems are increasingly used in our lives, such as smart surveillance, driverless systems, intelligent transportation systems, and so on. These intelligent systems use sensor networks to collect data, upload the data to a server, and then use the deep learning models deployed on the server to complete tasks such as recognition and detection. However, these intelligent vision systems have high requirements for the quality of the input data. When encountering bad weather such as rain, snow, and fog, the images sampled by the sensors will contain a large amount of noise, making the quality of the input data very low; the accuracy of the whole system will be greatly reduced as a result. The current methods of image de-raining can be roughly divided into two categories. The first class of methods uses the optimization of a cost function to achieve de-raining: various kinds of prior knowledge about image de-raining are added to the cost function to represent the characteristics of rain traces and image background scenes [1]. The second class of methods is based on deep learning and uses convolutional neural networks, recurrent neural networks, and generative adversarial networks to achieve image de-raining [6, 7]. The first type of methods requires elaborate prior knowledge in the
cost function to describe the rain traces in images. This elaborate prior knowledge may only apply to specific patterns, not to the irregular distributions in real images, and thus lacks generalization ability. In addition, the optimization process of the first class of methods usually involves a large number of computational iterations, which makes it often slow in practical applications. Convolutional neural networks and recurrent neural networks require a large number of training samples, and the generalization ability of these methods is not satisfactory due to the existence of overfitting problems. Compared with convolutional and recurrent neural networks, generative adversarial networks [2] require a relatively small number of samples and can largely alleviate the problem of model overfitting. However, in image de-raining, the generative adversarial network model is more difficult to converge and is prone to problems such as model collapse. This is due to the fact that rain trace images also contain a lot of other noise in addition to rain traces, which makes the model difficult to converge. To address these problems, this paper proposes a method based on wavelet threshold denoising and adversarial generative networks for removing rain stains on images. The rain traces on an image can be roughly divided into two categories: one is discrete and irregular, and the other is linear and regular. The first type of rain stains and other noise on the image can be removed first using wavelet threshold denoising. After wavelet denoising, we can get a higher quality rain stain image and then use it to train the generative adversarial network, which makes the generative adversarial network converge better. Finally, the generator in the generative adversarial network is used to achieve the image's rain removal.
2 Wavelet Denoise

The original rain trace image contains a large amount of noise, and applying the wavelet transform to this image yields coefficients with different statistics. The energy of the image's useful information is concentrated in a few wavelet coefficients with large absolute values, whereas the energy of the noise is spread over many wavelet coefficients with small absolute values. It can be seen that, after the wavelet transform, the absolute values of the useful information coefficients of the noisy image are larger than those of the useless noise coefficients, so a suitable threshold can be selected to distinguish the useful information from the useless noise. Using this threshold, the wavelet coefficients corresponding to the image's noise can be reduced to zero, and only the useful information is retained, so that the image is denoised. In this paper, wavelet denoising is used to pre-process the rain trace images. The wavelet denoising preprocessing can be divided into the following three steps: first, the original image is decomposed by the wavelet transform using the sym5 wavelet basis, and the corresponding wavelet coefficients are obtained. After that, the wavelet coefficients are further processed using the set threshold value to remove the invalid noise from the original image and keep the useful information. Finally, the wavelet coefficients are inverse transformed, and the denoised image is reconstructed. In this denoising process, the selection of the threshold value is crucial, and a suitable threshold has a decisive influence on the denoising result. Considering that the pixel values of the rain traces in rain trace images are often very high, the VisuShrink threshold is chosen in this paper. The VisuShrink threshold is:

\lambda = d\sqrt{2\log(WH)}    (1)

where d is the noise standard deviation, W is the width of the image, and H is the height of the image. After determining the threshold value, two different threshold functions are used for denoising in this paper: the hard threshold function and the soft threshold function. The hard thresholding function is:

W'_{i,j} = \begin{cases} W_{i,j}, & |W_{i,j}| \ge \lambda \\ 0, & |W_{i,j}| < \lambda \end{cases}    (2)

As can be seen from Eq. (2), the hard thresholding function sets the wavelet coefficients whose absolute value is less than the threshold to 0, while the part greater than the threshold is not processed. The soft thresholding function can be written as:

W'_{i,j} = \begin{cases} \dfrac{W_{i,j}}{|W_{i,j}|}\,(|W_{i,j}| - \lambda), & |W_{i,j}| \ge \lambda \\ 0, & |W_{i,j}| < \lambda \end{cases}    (3)
As can be seen from Eq. (3), the soft threshold function sets the coefficients whose absolute value is less than the threshold to 0. It changes the coefficients whose absolute value is greater than the threshold to the difference between their absolute value and the threshold.
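A minimal sketch of this preprocessing step is given below, assuming the PyWavelets (pywt) package and a grayscale image stored as a float NumPy array; the MAD-based estimate of the noise standard deviation d is a common convention and is not prescribed by the paper. The mode argument switches between the hard and soft threshold functions of Eqs. (2)–(3).

```python
# Sketch of the wavelet-threshold preprocessing described above (assumptions noted in comments).
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="sym5", level=2, mode="soft"):
    # 1. Decompose the image into wavelet coefficients with the sym5 basis.
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)

    # 2. VisuShrink threshold, Eq. (1): lambda = d * sqrt(2 * log(W * H)),
    #    with d estimated from the finest-scale detail coefficients (assumption).
    detail = coeffs[-1][-1]
    d = np.median(np.abs(detail)) / 0.6745
    lam = d * np.sqrt(2.0 * np.log(img.shape[0] * img.shape[1]))

    # 3. Shrink the detail coefficients with the hard or soft rule, Eqs. (2)-(3);
    #    the approximation coefficients coeffs[0] are kept unchanged.
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(c, lam, mode=mode) for c in band) for band in coeffs[1:]
    ]

    # 4. Reconstruct the denoised image by the inverse wavelet transform.
    return pywt.waverec2(shrunk, wavelet=wavelet)
```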
3 GAN Denoise
After pre-processing the images with wavelet threshold denoising, this paper uses a generative adversarial network for further denoising. The CycleGAN [5] model is chosen for this purpose. The model consists of four parts: two generators and two discriminators. The two generators learn to remove rain traces from the original image and to add rain traces, respectively, while the two discriminators judge whether an image contains rain stains or not. The overall structure of the model is shown in Fig. 1.
Fig. 1. GAN Framework
G_A2B denotes the de-rain generator, D_A denotes the no-rain picture discriminator, G_B2A denotes the rain-adding generator, and D_B denotes the rain picture discriminator. realA denotes the original rain trace picture, fakeB denotes the generated de-rained picture, realB is the original no-rain picture, and fakeA denotes the generated rain-containing image. The structures of the generator and discriminator in this model are shown in Fig. 2 and Fig. 3, respectively.
Fig. 2. Generator
Fig. 3. Discriminator
In the model's training phase, three losses are used for the generator: the recognition loss, the generation loss, and the cycle (recurrent) loss; for the discriminators, the simplest L1 loss is used. The recognition loss encourages the generators to produce de-rained/rain-containing images that can deceive the discriminators, and can be written as
$L_{IA}=L_1(\mathrm{Real}_A, G_{BA}(\mathrm{Real}_B)),\quad L_{IB}=L_1(\mathrm{Real}_B, G_{AB}(\mathrm{Real}_A))$  (4)
where $L_1$ is the L1 loss function and $G_{BA}$, $G_{AB}$ are the two generators. The generation loss measures the similarity between the original image and the generated image and guides the generators to produce images closer to the originals:
$L_{GA}=\mathrm{MSE}(\mathrm{Real}_A, G_{BA}(\mathrm{Real}_B)),\quad L_{GB}=\mathrm{MSE}(\mathrm{Real}_B, G_{AB}(\mathrm{Real}_A))$  (5)
where MSE is the squared-error loss function. Finally, the cycle loss compares the similarity between the recovered image and the original image:
$L_{CA}=L_1(\mathrm{Real}_A, G_{BA}(G_{AB}(\mathrm{Real}_A))),\quad L_{CB}=L_1(\mathrm{Real}_B, G_{AB}(G_{BA}(\mathrm{Real}_B)))$  (6)
The training process of the whole model is as follows: first initialize the network parameters; then sample a batch of data from the dataset, compute the generator loss and update the generator parameters with the Adam [4] optimizer; then fix the generator parameters, compute the discriminator loss and update the discriminator parameters; repeat until the whole model converges.
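A hedged sketch of one such training step is given below, written against Eqs. (4)–(6) as printed. The modules G_A2B, G_B2A, D_A, D_B and the optimizers are assumed to exist; the 0/1 discriminator targets and the explicit adversarial "fooling" term are assumptions, since the paper only states that the discriminators use an L1 loss and should be deceived.

```python
# Sketch of one training step; D_A judges no-rain images and D_B judges rain
# images, following the roles stated for Fig. 1. Targets and the adversarial
# term are assumptions, not taken verbatim from the paper.
import torch
import torch.nn.functional as F

def train_step(G_A2B, G_B2A, D_A, D_B, opt_G, opt_D, real_A, real_B):
    # ----- generator update -----
    opt_G.zero_grad()
    fake_B = G_A2B(real_A)                                   # de-rained image
    fake_A = G_B2A(real_B)                                   # rain-added image
    loss_I = F.l1_loss(fake_A, real_A) + F.l1_loss(fake_B, real_B)      # Eq. (4)
    loss_G = F.mse_loss(fake_A, real_A) + F.mse_loss(fake_B, real_B)    # Eq. (5)
    loss_C = (F.l1_loss(G_B2A(fake_B), real_A)
              + F.l1_loss(G_A2B(fake_A), real_B))                       # Eq. (6)
    p_fake_B, p_fake_A = D_A(fake_B), D_B(fake_A)
    loss_adv = (F.l1_loss(p_fake_B, torch.ones_like(p_fake_B))
                + F.l1_loss(p_fake_A, torch.ones_like(p_fake_A)))       # deceive D (assumed form)
    (loss_I + loss_G + loss_C + loss_adv).backward()
    opt_G.step()                                             # Adam [4], as described

    # ----- discriminator update (generators fixed via detach) -----
    opt_D.zero_grad()
    p_rB, p_fB = D_A(real_B), D_A(fake_B.detach())
    p_rA, p_fA = D_B(real_A), D_B(fake_A.detach())
    loss_D = (F.l1_loss(p_rB, torch.ones_like(p_rB)) + F.l1_loss(p_fB, torch.zeros_like(p_fB))
              + F.l1_loss(p_rA, torch.ones_like(p_rA)) + F.l1_loss(p_fA, torch.zeros_like(p_fA)))
    loss_D.backward()
    opt_D.step()
```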
4 Experiments
The wavelet analysis in this paper is based on the PyWavelets (pywt) package in Python, with the sym5 wavelet as the basis, two decomposition levels, and the VisuShrink threshold.
In addition, the generative adversarial network in this paper is implemented with the PyTorch framework. The experimental hardware is an Intel(R) Core(TM) i7-8750 CPU and an RTX 2070 GPU, where the GPU is mainly used to train the adversarial network. This paper first discusses the rain removal effect of the whole model. Two datasets, Rain12 [6] and the Real-world Paired Rain Dataset [7], are used to verify the effect of the model; Rain12 consists of synthetic rain images, while the Real-world Paired Rain Dataset contains real-world rain images. The final experimental results are shown in Fig. 4.
Fig. 4. Derain result
In Fig. 4 the first column shows the original rain images, the second column shows the images obtained after wavelet denoising with the hard threshold function, the third column shows the images obtained after wavelet denoising with the soft threshold function, and the last column shows the final de-raining result. The figure shows that the proposed method can effectively remove most of the rain traces in rain-containing images and restore the original images with high quality. In addition, this paper also compares the loss curves of the adversarial network during training. The generator loss is shown in Fig. 5, and the discriminator loss is shown in Fig. 6.
Fig. 5. Generator loss
From Fig. 5, it can be seen that the generator starts with a fast loss change during the training process and then changes smoothly until convergence. This reflects the role
of wavelet threshold denoising in the first part of this paper: wavelet threshold denoising improves the quality of the input images, thus allowing the generator to converge faster and better during training.
Fig. 6. Discriminator loss
As can be seen from Fig. 6, the loss of the de-rained image discriminator is always higher than that of the rain-containing image discriminator, which may be because it is more difficult to discriminate de-rained images than rain-containing ones. However, the overall training of the discriminators is not stable enough, and the losses of both discriminators fluctuate somewhat.
5 Conclusion
This paper proposes an image rain removal method based on wavelet threshold denoising and a generative adversarial network. First, the wavelet transform is applied to the original rain-containing image to obtain the wavelet coefficients; the noise and the discrete rain traces are then initially removed by thresholding, and the image is reconstructed. After that, the whole image is de-rained by the generative adversarial network. This paper also conducts experiments on real and synthetic data to verify the effectiveness of the method. Although the method is effective, it still cannot remove all of the rain traces, the model training time is long, and the pre-processing is cumbersome. We will address these problems in follow-up work.
References
1. Ren, G.: Research on video image de-rain method under dynamic weather. Changchun University of Technology (2019)
2. Goodfellow, I., et al.: Generative adversarial nets. In: Advances in Neural Information Processing Systems, pp. 2672–2680 (2014)
3. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
4. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
5. Zhu, J.-Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232 (2017)
6. Li, Y., Tan, R.T., Guo, X., Lu, J., Brown, M.S.: Rain streak removal using layer priors. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, pp. 2736–2744 (2016). https://doi.org/10.1109/CVPR.2016.299
7. Wang, T., Yang, X., Xu, K., et al.: Spatial attentive single-image deraining with a high quality real rain dataset. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE (2020)
Research and Development Thoughts of Intelligent Transportation System
Wenhao Zong(B)
Shanghai Maritime University, College of Transport and Communications, Shanghai, China
Abstract. With the continuous development of the economy and the dramatic growth of traffic volume, intelligent transportation systems are being used to improve the utilization of roads, which has become a central direction and goal of transportation development. This article reviews and analyzes research on intelligent transportation systems and, on that basis, puts forward thoughts on the development of the intelligent transportation system in China. Keywords: Intelligent transportation system · Research · Development · Challenges · Future directions
1 Introduction
Since the reform and opening up, road traffic infrastructure in China has achieved unprecedented development, accelerating the process of urbanization and motorization. The increase in the number of motor vehicles has outstripped the expansion of the road network. According to the Texas Transportation Institute, people spend almost 42 h a year stuck in traffic, and the public transit system wastes nearly 3 billion gallons of fuel at an estimated cost of nearly $160 billion [1]. Traffic congestion, frequent traffic accidents, waste of resources, environmental pollution, economic losses and many other traffic problems follow. According to statistics [2], with the application of intelligent transportation the number of traffic deaths each year can be reduced by more than 30%, and the efficiency of vehicle use can be improved by more than 50%. There is still a large gap between the development of intelligent transportation systems in China and in developed countries. During the 13th Five-Year Plan period, obvious progress was made overall in vehicle-road coordination and autonomous driving, as well as in information collection, travel services, and command and dispatch, and transport business services improved considerably. However, the various directions of the intelligent transportation field in China remain relatively fragmented, and the overall development of the industry is seriously unbalanced. As a huge ecosystem, the intelligent transportation industry needs all parties to pool their strengths. It is therefore hoped that the IT giants can achieve differentiated development through competition, and that they will state clearly what they expect the government to do. Issues that are raised by many voices and are of common concern must be urgently addressed by the industry authorities. Intelligent
transportation systems involve cloud computing, big data, artificial intelligence and other advanced technologies to make full use of various transportation resources and to effectively reduce transportation costs. The intelligent transportation system (ITS) plays an irreplaceable role in alleviating traffic congestion and reducing traffic accidents. In addition, ITS can improve traffic efficiency, effectively relieve traffic pressure, reduce the traffic accident rate, and thereby protect the environment and save energy, which has important practical significance for building China into a "transportation power".
2 The Basic Concept of Intelligent Transportation System
ITS is a transportation service system based on modern electronic information technology that can provide diversified services for traffic participants. Its outstanding characteristics are the collection, processing, release, exchange, analysis and use of information.
2.1 Definition of Intelligent Transportation System
A simple definition of the intelligent transportation system is the construction of an efficient, green and convenient transportation system by using advanced Internet technology and big-data storage, including data communication technology, artificial intelligence methods and electronic control theory [3, 4]. It mainly includes seven fields: the advanced traffic information service system, advanced traffic management system, advanced public transportation system, advanced vehicle control system, freight management system, electronic toll collection system and emergency rescue system [5].
2.2 Main Connotation of Intelligent Transportation System
ITS is essentially the integration of intelligence, information and high technology. In the new era, ITS is not a single technology or a collection of technologies; it is a new mode of thinking and a new concept. Artificial intelligence and edge computing are the key technologies supporting the landing of intelligent transportation applications. The development of 5G will lead a new round of technology integration and innovation, comprehensively enabling autonomous driving and intelligent transportation and realizing the low delay, high reliability and high speed required for autonomous driving, as well as the collaborative interconnection between people, vehicles, roads and clouds. This will meet people's basic travel needs to the greatest extent, allow people to reach their destinations in the shortest time, and speed up the construction of a modern society.
3 Development Status of Intelligent Transportation According to wang [7], the most important feature of intelligent transportation is that it can timely use road data to manage and guide traffic conditions, and can make reasonable distinctions among cars, people and roads, while ensuring the safety of the three. Intelligent transportation system can not only maintain traffic order efficiently, but also
solve problems as quickly as possible after accidents happen. The following summarizes a few common and important existing intelligent transportation systems. Sadiku et al. [8] studied Electronic Fare Collection (EFC), which reduces the manpower and material resources required for vehicle license plate recognition and charging in traffic management and accelerates parking charging and toll collection. In practice, however, microwave communications interfered with each other, and vehicles that strayed into ETC lanes could not complete transactions, so vehicles passed too fast or escaped fees; reasonable placement of equipment can be used to avoid microwave communication interference. According to Wang [7], with precise vehicle navigation the driver gets more timely information during driving and can make more accurate and effective judgments to avoid accidents; the disadvantage was that the vehicle travel recorder had a single function and poor communication capability. In the future, considering the development trend of intelligent transportation systems, RFID technology can be innovatively applied to short-range wireless communication between the onboard terminal and the roadside vehicle management service system. Wang [7] also noted that an intelligent driving system can calculate and process the situation around the car; however, the technology is not yet mature and carries certain safety risks, since the sensors cannot ensure 100% accuracy and need to be integrated with high-precision maps. Patel et al. [9] studied advanced traffic monitoring systems and advanced traffic management systems: if someone breaks the rules, an E-Challenge is generated by the monitoring system, while the management system can improve the overall efficiency of transportation. Their flaws were that issuing the E-Challenge was a manual process and the components did not connect well; automatic number plate recognition and an information-technology-led approach can eliminate such defects. Cunha et al. [10] found that a data collection system improves driving safety, performance and fuel consumption, although some of the data collected from vehicle sensors does not represent relevant information; correlating internal and external variables is an effective remedy.
3.1 Development Status Abroad
The United States is the country with the most advanced intelligent transportation system technology and the most developed industry in the world. The number of ITS service areas has expanded from 8 to 12, and two new service areas, "support" and "sustainable transportation", have been added, highlighting the importance of information security and green transportation. At the same time, "parking management" and "weather" have been separated from their original service fields, which indicates that the status of these two services is improving against the background of the increasing number of motor vehicles and the increasing proportion of motorized travel. The evolution of the US ITS service field is shown in Table 1. The US lays out its ITS development strategy in five-year plans. Its vision and mission show continuity, and it is committed to improving the safety and efficiency of transportation and promoting the overall progress of society through the application and expansion of ITS technology.
In terms of strategic focus, since 2010, the United States has made clear that the strategic theme of ITS is to comprehensively
promote the integrated development of multi-mode, Internet of Vehicles (IOV)-based integrated transportation. The 2010–2014 version of the strategy emphasizes transportation connectivity, the 2015–2019 version emphasizes vehicle automation and infrastructure connectivity, and the 2020–2025 version shifts from an emphasis on single-point breakthroughs in autonomous driving and intelligent connectivity to the layout of comprehensive innovation in emerging technologies, improving the development strategy on the basis of the technology life cycle. Emphasis is placed on promoting the demonstration and application of new technologies throughout the whole process of R&D, implementation and evaluation. The development history of US ITS strategic planning is shown in Table 2.
Table 1. The evolution of the US ITS service field
Serial number | National ITS Architecture (V7.1) | Connected Vehicle Reference ITS Architecture (CVRIA), V2.2 | Architecture Reference for Cooperative and Intelligent Transportation (ARC-IT), V8.3
1 | Advanced Traffic Management Systems (ATMS) | Traffic signals; Traffic network; Border | Traffic Management (TM)
2 | Advanced Traveler Information Systems (ATIS) | Traveler Information | Traveler Information (TI)
3 | Advanced Public Transportation Systems (APTS) | Transit; Transit safety; Electronic payment | Public Transportation (PT)
4 | Emergency Management (EM) | Public safety | Public Safety (PS)
5 | Commercial Vehicle Operations (CVO) | Commercial Vehicle Fleet Operations; Commercial Vehicle Roadside Operations; Freight Advanced Traveler Information Systems | Commercial Vehicle Operations (CVO)
6 | Maintenance and Construction (MC) | – | Maintenance and Construction (MC)
7 | Archived Data (AD) | Planning and Performance Monitoring | Data Management (DM)
8 | Advanced Vehicle Safety Systems (AVSS) | V2I Safety; V2V Safety | Vehicle Safety (VS)
9 | – | Core Services; Security | Support (SU)
10 | – | Sustainable transport | Sustainable transport
11 | Previously included in ATMS | – | Parking Management (PM)
12 | Previously included in MC | Road Weather | Weather (WX)
Table 2. The development history of US ITS strategic planning
Strategy | 2010–2014 | 2015–2019 | 2020–2025
Prospect | Provide a nationally connected transportation system for the United States | Changing the way society works (integrating transport and other social services) | Accelerate popularization of ITS application to change the direction of society
Mission | To provide countries with transport infrastructure systems, technologies and applications with connectivity | Carry out ITS research and development and promotion, promote the application of information and communication technology, and make the society move forward more safely and effectively | Develop and use ITS to move people and goods more safely and efficiently
Technology lifecycle | Three phases: research, development and application | Five phases: confirmation and evaluation, coordination and leading R&D, value presentation, application promotion and ITS application maintenance | –
Strategic priority | Connectivity of transportation | Achieve advanced automation and interconnection of vehicles | Five-stage strategy based on the closed loop of the technology lifecycle
The most representative highway in the United States is the "driving Wanda" highway, which consists of two parts: smart roads and smart cars. The whole system is centered on a computer-based monitoring and command center, which analyzes speed information from speed sensors, traffic remote-control detectors, GPS vehicle navigation equipment and the various traffic flows on the road, displays the traffic flow on the electronic map of the in-car guidance system, and broadcasts it by radio waves. Along the road, traffic radar remote-control monitors are set up, and electronic safety boards with warning signs are installed in areas with a high incidence of accidents, so as to release safety information about the road ahead in real time. In the car, the driver can also receive voice prompts from the monitoring center through a frequency phase-locked FM broadcast. Drivers on the "Wanda" road therefore generally have a sense of ease and security, as they can obtain guidance and help from the computer monitoring center at any time during the driving process. Japan is also one of the early countries to conduct research on intelligent transportation systems. With the close cooperation of the relevant departments, Japan developed the Implementation Policy of Information Technology in the Field of Highway, Transportation and Vehicle, which consists of 9 development fields, such as navigation systems, automatic toll collection systems and safe-driving assistance systems, and 20 user service functions. It can also provide drivers and travelers with the most convenient driving routes and various kinds of road condition information [4]. Japan has realized full coverage of high-speed wireless data communication on all expressways, and more than 90% of vehicles have installed the second-generation ETC2.0 terminal of the non-stop charging system, with functions such as non-stop charging and real-time road data collection, analysis and early warning, which greatly reduces traffic accidents and traffic congestion. With the development and innovation of new technologies, more and more attention has been paid to the application of intelligent and information technology, especially in the field of transportation, which is closely related to human life. In accordance with the relevant regulatory provisions, Europe aims to achieve the innovation and application of connected-vehicle technology and vehicle automation, develop five strategic themes, build safer road and vehicle management technology, and improve the operational efficiency of transport systems. At the same time, it pays attention to the development and application of technology, integrates the green transportation system with high-efficiency and energy-saving technologies, and establishes a complete road network and traffic network so that travel can be convenient and fast.
3.2 Domestic Development Status
The construction of intelligent transportation systems in China has made preliminary achievements, and the traffic and transportation industries of most cities have applied some intelligent products. For example, the highway electronic toll collection (ETC) system has spread across the country: adding electronic toll collection to existing lanes can increase lane capacity by 3–5 times, thousands of dedicated electronic toll collection lanes built to the national standard cover 29 provinces and regions, and the number of covered users has reached more than one thousand. At the same time, the government has given strong support to the development of intelligent transportation. Travel modes based on the mobile Internet and the related industry have been constantly innovating, and the large-scale investment in cloud computing, big data and the mobile Internet, together with the deep integration of intelligent technology and the Internet, has brought new opportunities for the intelligent development of urban transportation. For example, with the wide application of Didi, the car-hailing service has covered more than 300 cities nationwide, with more than 200 million registered users and over 10 million orders received on the platform every day [6]. With the application and popularization of electric vehicle technology, China has gradually begun to replace the original diesel vehicles with electric vehicles and has developed different types of electric cars and electric buses to be put into the transportation market. The development and popularization of the electric vehicle industry have brought a broad market prospect and space for the transportation market. In the face of international competition in transportation modes, clean energy should be taken as the main basis for development, a mechanism of independent innovation coexisting with cooperative development should be emphasized, and the development and utilization of China's new energy automobile industry should be accelerated so that it can gain a place in fierce international competition and promote the development and progress of an intelligent society. In 2020, the Ministry of Science and Technology issued a letter supporting Jinan, Xi'an, Chengdu and Chongqing in building national new-generation artificial intelligence innovation and development pilot zones. Among the four application demonstrations required to be carried out in Chongqing, intelligent transportation stands out. At traffic speeds of up to 40 km per hour, the system takes over, allowing drivers to take care of other things instead of wasting time stuck in traffic. This is not science fiction: Traffic Jam Pilot (TJP), the first L3-level automatic driving core technology released on a real vehicle and ready for mass production in China, was released in Chongqing, and its L3-level automatic driving car has also gone into mass production. At present, China's intelligent transportation system has been applied in some cities and on some highways, but compared with advanced foreign technology there is still quite a large gap in the overall technology and application level.
In the future, there will be five major directions for the development of intelligent transportation in China: intelligent coordination and service of comprehensive transportation, intelligent guarantee of the safe operation of the transportation system, intelligent vehicle-road coordination and autonomous driving, promotion of information technology development based on the special requirements of intelligent transportation, and cross-border integration of the intelligent transportation industry ecosystem [13].
4 Challenges with the Development of Intelligent Transportation System in China
Practice has proved that, relying on independent technological innovation, China will step onto a road of intelligent transportation development with Chinese characteristics. However, some obstacles may still restrict the development of intelligent transportation systems in China.
4.1 Lack of Basic Theoretical Research and Limited Room for Improvement
Agachai et al. [11] found that intelligent mobility and its environment, and the various assessment, prediction, management and control methods, must operate in real time on the information available from sensors and stakeholders. Traffic-related problems are characterized by a large number of parameter relationships that are not well understood, large amounts of incomplete data, and unclear goals and constraints, so that artificial intelligence cannot make fully rational decisions or achieve its goals to the maximum extent.
4.2 Resources Partitioned by Industry Barriers and a Low Degree of Sharing
Huang et al. [12] pointed out that, for institutional and market reasons, several major stakeholders in China still lag behind in resource integration, information sharing, multi-sector and multi-user intelligent traffic management and control, and service platform development. Because industry data cannot serve society, universities, enterprises and research institutions with the relevant technology cannot obtain real industry data, which leads to technology mismatch and the waste of many social resources.
4.3 Imbalanced Industrial Development and Obstacles in Core Technologies
There are great differences in the development of cities in China. Some cities are entering a stage of deep development, while most cities in the central and western regions have only just entered the early stage of development. Typical mixed traffic is the core characteristic of China's traffic system, which is obviously different from foreign traffic systems. Part of the original core technology still depends on imported products.
4.4 The Industrial Innovation Investment Mode Is Facing the Capital-Driven "New Normal Development Mode"
Under the new normal of China's economic development, major changes will take place in the mode of government investment in public-welfare fields such as transportation. With the increase of public financial input, and in order to relieve short-term financial pressure, some local governments will tend to implement intelligent transportation construction projects that obtain social capital through financial models
such as Public-Private Partnership (PPP). In this way, enterprises in the intelligent transportation industry will shift from the research of single technology to the comprehensive development under the new normal development mode. The intelligent transportation industry is more inclined to become an integrated industry solution provider to enhance the comprehensive competitiveness of the industry.
5 Direction and Trend of Development of Intelligent Transportation System in China
Under the new situation, higher requirements have been put forward for the functions of the transportation system. We can call the next generation "intelligent transportation" or "intelligent transportation system 2.0". Future innovation in China's intelligent transportation should focus on strengthening the aspects described in the following sections [14–16].
5.1 Combination of Basic Research on Scientific and Technological Innovation with Top-Level Design
The strategic planning and top-level design of intelligent transportation determine the direction of development and implementation of urban intelligent transportation. In March 2020, the Standing Committee of the Political Bureau of the CPC Central Committee held a meeting and proposed to speed up the construction of new infrastructure, marking the fast track of China's new infrastructure development. At present, the science-and-technology-based deployment of intelligent transportation infrastructure should be actively promoted: through the construction of "hard" (digital) new infrastructure such as smart expressways and smart hubs, and "soft" (digital) new infrastructure such as urban traffic and smart parking cloud platforms, a new round of high-quality economic development can be supported. The government needs to strengthen guidance at the macro level. At the national level, we should attach importance to the sharing, opening and integrated application of transportation big data, build a hierarchical open-sharing mechanism for traffic data covering different data types, objects and rights, and promote the connectivity of mass transportation data such as railways and aviation with urban traffic data such as subways and buses. At the regional and city level, economically developed urban agglomerations and metropolitan areas should build regional traffic big-data centers, and different cities should choose differentiated distributed or centralized construction modes for urban traffic data platforms according to the scale of their governance scenarios and their economic and financial conditions, which can promote data monitoring and the management of bus operations, facilities and transportation [17–19].
5.2 Promote Regional and Intra-Regional Integration of the Transportation System
Regional integration has become a national development strategy. The full integration of regional traffic information resources will become the new trend of ITS development and will also produce new traffic modes and industrial structures. Integration and resource sharing within a region should be fully realized to jointly promote research on key and
difficult issues and break through technical barriers. At the same time, the development of intelligent transportation innovation should follow the pattern of modern transportation development and take a broad view of global technological development.
5.3 Establish a Sound Efficiency Evaluation System for the Intelligent Transportation System
The evaluation procedure of the US Department of Transportation for its intelligent transportation system consists of six aspects: intelligent transportation system research evaluation, deployment tracking investigation, post-deployment evaluation, project evaluation, information management and information transformation. Specifically, during the construction study phase of a project, the evaluation process mainly serves a supervisory function, ensuring that the methodology of the project study is consistent with the relevant guidelines of the federal government; during the project deployment phase, the evaluation procedure analyzes and accumulates a large amount of data and material through continuous follow-up surveys. The purpose of the evaluation procedure is to assess the benefits of deploying and applying the intelligent transportation system and the value of the project investment. It is of great significance for ensuring that project construction stays focused on the vision and achieves the goals established at the beginning of construction. At the same time, the evaluation process can better quantify the value, benefit and impact of ITS projects and promote the continuous improvement of the ITS strategy.
5.4 Summarize the Development Mode and Develop the ITS Concept
The development of new means of transportation and new technologies has brought about great changes in the transportation system. Large-capacity intelligent transportation vehicles, new rail transit, cross-regional transportation, cross-industry electronic payment, electronic identification and vehicle networking will have a far-reaching impact on the development of China's ITS. In the face of these changes that are taking place or will take place in the future, we should respond actively and deeply study the resulting operating mechanisms and change patterns of the traffic system, so as to perfect and enrich the ITS concept. We should pay attention to the application of new technologies and actively explore them to introduce new models and new ideas. The emergence of network-based ride services against the background of the sharing economy poses a severe challenge to the "Internet +" mode of transportation industry management and service. In addition, the deeper application of big data and artificial intelligence will bring even greater changes. The theory of green sustainable development will further penetrate the field of intelligent transportation. Therefore, we should actively explore changes and create a new ITS through the development of new technologies to promote sustainable development.
6 Conclusion
With the use and popularization of big data, artificial intelligence, the mobile Internet and other new-generation information technologies, ITS has become an important direction of
modern transportation, affecting urban traffic, traffic safety and travel efficiency. ITS makes the city more intelligent and more humanized, and it is of obvious strategic significance, especially for building China into a transportation power. China is in a period of structural transformation, and the transportation industry also faces important development opportunities arising from this transformation. Therefore, we must seize this important strategic period and grasp the development trend and direction. In the future, the intelligent transportation system will gradually show the following trends [20, 21].
1) Forming a new traffic management system. The "intelligent" development of traffic management systems is not just a matter of technology. Based on an understanding of the social and technical attributes of the urban traffic system, the traffic system should be systematically constructed with new thinking and concepts, using technical means comprehensively and with the extensive participation of various personnel, to realize processes and systems of co-construction, sharing and collaborative governance. The new traffic management system comprehensively takes into account factors such as hardware, software, organization and process, integrates human intelligence and artificial intelligence, is technologically data-driven and supported by intelligent technology, and can be realized through cloud control systems, parallel systems and other similar approaches [22, 23].
2) Paying equal attention to improving efficiency and adjusting demand. In recent years, one of the contradictions of the traffic system has been that the growth rate of demand far exceeds the improvement of supply capacity. To realize the good operation of the traffic system, the connotation of the intelligent traffic management system also needs to be extended. Based on the rapid development of information technology, the construction of the intelligent traffic management system needs to focus not only on improving the operational efficiency of the physical system (transportation infrastructure, etc.) but also on adjusting travel behavior, for example through the integration of Mobility as a Service (MaaS) and active traffic management systems, so as to optimize the total amount of traffic and the choices of travel time, mode and path, and thus better achieve balance across the traffic system [24–26].
3) Moving toward smart mobility. Smart mobility develops on the basis of ITS and is the inevitable trend of its development. Smart mobility has the characteristics of integration, real-time flexibility, demand orientation and personalization. Integration is the foundation of smart mobility: it integrates information on "people-vehicle-road-land" and other elements in the same network, reflecting the real-time interaction status and process of all elements. Real-time flexibility is reflected in the fact that, through Internet of Things technology, passengers can access information about various modes of transportation at any time, reducing uncertainty. Demand orientation and personalization enable passengers to change passivity into initiative and to guide services with their demand. The integrated platform created by smart mobility provides users with personalized choices based on different value orientations, turning transportation from a means into a service.
Under smart mobility, people's demand for travel is no longer satisfied merely by arriving at the destination; enjoying the whole process of the travel service becomes equally important, which reflects the high elasticity of intelligent transportation [27–30].
Acknowledgment. Upon the completion of the paper, I would like to thank my supervisor Mr. Zheng for his guidance. From the topic selection, conception, writing to the final draft of the paper, the teacher has given me careful guidance and enthusiastic help, so that my paper can be successfully completed. Mr. Zheng’s serious and responsible work, academic spirit and rigorous style of study are worthy of my lifelong learning, which also guide me to practice in the future work and study.
References 1. Ladha, A., Bhattacharya, P., Chaubey, N., Bodkhe, U.: IGPTS: IoT-based framework for intelligent green public transportation system. In: Singh, P., Pawłowski, W., Tanwar, S., Kumar, N., Rodrigues, J., Obaidat, M. (eds.) Proceedings of First International Conference on Computing, Communications, and Cyber-Security (IC4S 2019). Lecture Notes in Networks and Systems, vol. 121, pp. 183–195. Springer, Singapore (2020). https://doi.org/10.1007/978981-15-3369-3_14 2. Yuxuan, L., Tong, Z.: Thinking about the development of intelligent transportation in China. Technol. Outlook 26(028), 310 (2016) 3. Li, L.: Talking about intelligent transportation system. Sci. Public (Sci. Educ.) 2019(8), 197+170 (2019) 4. Jin, Z.: Discussion on the development of intelligent transportation system. Era Agric. Mach. 43(010), 77–78 (2016) 5. Shuang, W., Han, Y.: Development and application of intelligent transportation system. Traffic World 21(007), 8–9 (2019) 6. Wenjing, W.: Development strategy of intelligent transportation system for future intelligent society. Global Mark. Inf. Guide 1, 118–119 (2017) 7. Wang, Y.: Thoughts on the development trend of intelligent transportation and the development of intelligent vehicles. In: Jain, V., Patnaik, S., Vl˘adicescu, F.P., Sethi, I.K. (eds.) Recent Trends in Intelligent Computing, Communication and Devices: Proceedings of ICCD 2018, pp. 523–529. Springer Singapore, Singapore (2020). https://doi.org/10.1007/978-98113-9406-5_63 8. Sadiku, M.N.O., Gupta, N., Patel, K.K., Musa, S.M.: An overview of intelligent transportation systems in the context of internet of vehicles. In: Gupta, N., Prakash, A., Tripathi, R. (eds.) Internet of Vehicles and its Applications in Autonomous Driving, pp. 3–11. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-46335-9_1 9. Patel, P., Narmawala, Z., Thakkar, A.: A survey on intelligent transportation system using internet of things. In: Shetty, N.R., Patnaik, L.M., Nagaraj, H.C., Hamsavath, P.N., Nalini, N. (eds.) Emerging Research in Computing, Information, Communication and Applications: ERCICA 2018, Volume 1, pp. 231–240. Springer, Singapore (2019). https://doi.org/10.1007/ 978-981-13-5953-8_20 10. Cunha, F., et al.: Vehicular networks to intelligent transportation systems. In: Arya, K., Bhadoria, R., Chaudhari, N. (eds.) Emerging Wireless Communication and Network Technologies, pp. 297–315. Springer, Singapore (2018). https://doi.org/10.1007/978-981-13-0396-8_15 11. Agachai, S., Wai, H.H.: Smarter and more connected: future intelligent transportation system. IATSS Res. 42, 67–71 (2018) 12. Huang, W., Wei, Y., Guo, J., et al.: Next-generation innovation and development of intelligent transportation system in China. Sci. Chin. Inf. Sci. 60(11), 1–11 (2017) 13. Yijin, Z.: Research on the development of intelligent transportation system in China. Hous. Real Estate 528(06), 75 (2019)
14. Mfenjou, M.L., Ari, A., Abdou, W., et al.: Methodology and trends for an intelligent transport system in developing countries. Sustainable Comput. Inf. Syst. 19, 96–111 (2018) 15. Das, J.H., Tom, S.: Futuristic intelligent transportation system architecture for sustainable road transportation in developing countries (2016) 16. Yaqoob, I., et al.: Internet of things architecture: recent advances, taxonomy, requirements, and open challenges. IEEE Wirel. Commun. 24(3), 10–16(2017) 17. Wang, H.: The application of artificial intelligence in intelligent transportation under the background of big data. Comput. Knowl. Technol. 17(12), 198–199 (2021) 18. Zhou, D.: Intelligent transportation application system of cloud computing. Office Autom. 25(24), 10+58–59 (2020). No. 437 19. Zhao, P., Zhu, J.: The development and challenges of smart mobility. Contemp. Archit. 12(12), 46–48 (2020) 20. Li, R., Wang, C.: Development trend of intelligent transportation management system. J. Tsinghua Univ. (Natural Science) 1–7, 03 July 2021. https://doi.org/10.16511/j.cnki.qhdxxb. 2021.26.023 21. Wang, X., Zhang, J., Song, X., et al.: International research and development hotspots of intelligent transportationsystem. Sci. Technol. Rev. 37(6), 36–43 (2019) 22. U.S. Department of Transportation. Architecture reference for cooperative and intelligent transportation [EB/OL]. 30 Nov 2020, 06 Jan 2021. https://local.iteris.com/arc-it/ 23. Zhou, R.: The application prospect of artificial intelligence technology in urban intelligent traffic management. Inf. Recording Mater. 21(5), 1–3 (2020) 24. Lv, H.: Research on the application of big data in smart city traffic system. Inf. Recording Mater. 21(5), 106–107 (2020) 25. Ming, Z., Zhang, J., Jian, L.: Urban traffic big data analysis platform based on cloud computing. Geospatial Inf. 18(2), 16–20 (2020) 26. Wenxue Duan, H., Ming, Z.Q., et al.: A review of cloud computing system reliability research. Comput. Res. Dev. 57(01), 102–123 (2020). https://doi.org/10.7544/issn1000-1239.2020.201 80675 27. Porru, S., Misso, F.E., Pani, F.E., et al.: Smart mobility and public transport: opportunities and challenges in rural and urban areas. J. Traffic Transp. Eng. (English Edition) 7(1), 88–97 (2020) 28. Ge, T., Pei, L.: Smart community construction and agile governance change in high-risk society. Theory Reform 2020(5), 85–96 29. Zhang, X.: Research on the application of big data in smart city traffic system. Shanxi Architect. 46(1), 197–198 (2020) 30. Chen, L.: Intelligent transportation research in the era of big data artificial intelligence. Smart City 5(5), 88–98 (2019)
The Construction of Risk Model (PDRC Model) for Collaborative Network Organization
Xiadi Cui(B) and Juanqiong Gou
School of Economics and Management, Beijing Jiaotong University, Beijing, China {19120563,jqgou}@bjtu.edu.cn
Abstract. Modern enterprises generally believe that the scientific implementation of internal control risk management is an important guarantee for building an efficient internal control management system. However, in an increasingly competitive market environment, enterprises inevitably need to seek cooperation with partners, so relying only on a strong control mode cannot support the long-term development of enterprises. Guided by this perspective, this paper analyzes the construction method of a collaborative risk management domain model, adds organizational elements to the risk model, constructs the PDRC model (Partner/Danger/Risk/Consequence chain) and gives a method of element identification. The paper also expands five types of application forms of the model to analyze the relationship between entity categories and risk management strategies, and then tries to summarize their inherent laws. Finally, two business scenarios of a large petrochemical trading company in China are given to verify the proposed model and its application rules. This study can provide a model basis for the collaborative risk management of future organizations. Keywords: Collaborative network · Risk management · PDRC model · Entity properties · Element identification
1 Introduction
As the market economy matures, the development of enterprises faces various kinds of competition and risks. In order to seek a long-term sustainable development path, in addition to continuously enhancing market competitiveness, enterprises often strengthen internal control to avoid and deal with possible risks. However, there are many problems in the practical application of internal control methods: controlling every behavior that may produce risks makes this passive management method and its rigid risk control measures affect the effectiveness of internal control management to varying degrees, exposing the shortcomings of internal control risk management in both ideas and measures [1]. In order to adapt to the rapidly changing environment and fierce market competition, enterprises have begun to seek effective methods suitable for their own risk control: in today's business ecosystem, joining collaborative networks and similar structured cooperative organizations has gradually become a way to help enterprises
grasp market opportunities and respond to competitiveness. Since then, the enterprises risk management has gradually changed from traditional internal control to collaborative control. Collaborative network is composed of a number of heterogeneous actors. Actors cooperate with each other through computer network to complement and share core resources, so as to better achieve common or compatible goals and obtain differentiated market competitive advantages [2, 3]. Although enterprises are more and more aware of the importance of collaborative risk management, this work still has a large room for improvement. On the one hand, risk itself is a complex and multi-dimensional concept containing causal relationship. And risk identification is a process to identify the nature of various risks. It is necessary to explore the sources of risks faced by enterprises and potential risks, namely the main factors of risk generation and triggering, and then to judge the possible consequences [4]. On the other hand, in a dynamically changing collaborative network environment, risks are ubiquitous and highly correlated [5, 6]. Each participant in the network is not only the subject of risk initiation and transmission, but also the receptor that bears the influence of risk evolution. Therefore, it is necessary to strengthen the cooperation among the participants in the collaborative network, so that all stakeholders are involved in risk identification and finding solutions for risk mitigation. To sum up, in such complex collaborative networks, whether we can understand the mechanism of risk triggering and recognize the importance of participating entities jointly coping with risks will directly affect the formulation of risk prevention strategies and the implementation of relevant measures. In view of the current research status, this study aims to introduce a formal and structured model to characterize the risks of inter-organizational cooperation. On the basis of considering the causal process of “risk source-risk-consequence” (DRC), a risk model including organizational relationship (PDRC) is constructed to strengthen the concept of participants in collaborative network, which pay special attention to various subjects in collaborative organization. This study can provide a model basis for collaborative risk management in future organizations.
2 Literature Review Internal control usually refers to the development of a series of procedures, rules and regulations, etc. to prevent in advance, control in the process and supervise and correct the relevant risks that may appear in the process of enterprise business activities, so as to ensure the maximization of enterprise value. However, researchers have found the disadvantages of this control method: enterprises do not realize the dynamic cascade of risks when carrying out risk management, and blindly constrain risks will often cause the loss of business opportunities and even lead to the generation of new risks. Based on this, enterprises began to explore more effective risk management methods. In recent years, the business form of mutual cooperation among enterprises has constituted a collaborative network. The collaborative network environment is in dynamic change, and risks are ubiquitous and highly correlated. This kind of complex correlation makes the risk no longer exist independently after the occurrence, but produce cascade
effect. In other words, the occurrence of one risk may trigger the occurrence of more risks, giving rise to a series of chain reactions, and with the spread of interference, the scope and severity of its influence will increase several times [4, 7]. Based on this, more and more scholars have realized the importance of collaborative risk management. So far, some of the academic work in collaborative risk management has focused on risk modeling. Das and Teng [8] proposed a dynamic alliance model with management risk perception as the core, which consists of the following parts: the antecedents of risk perception, relational risk and performance risk, risk perception and structural preference, and preference decomposition. Huang et al. [9] proposed Distributed Decision Making (DDM) for the risk management of dynamic alliances. The model is divided into two levels, respectively describing the decision-making process of the owners and partners of dynamic alliances, which can be seen as a combination of top-down and bottom-up methods. Yang et al. [10] constructed the risk network model of green building project participants to deeply understand the key risk network, collected data through two case studies of projects undertaken in China and Australia, and then used the social network analysis (SNA) method to deconstruct the project risk complexity that is often ignored in the traditional single risk impact analysis. Zeng and yen [4] refined the concept of risk chain and built a supply chain risk system model to study the role of collaborative partnership in supply chain risk management. The research results show that the level of collaboration between partners affects the elasticity of supply chain, which means that partnership can positively affect the integration of supply chain risk system, which is more conducive to the operation of supply chain. In conclusion, although there are a wide range of research fields and problems in the current risk model from the perspective of synergy, there is a lack of consideration of risk causes and cascading relationships. The DRC model constructed by Li et al. describes the whole process of risk from generation to trigger, which promotes the understanding of risk with causal relationship. In addition, three kinds of propagation characteristics of cascading effects have been proposed based on this model [11]. This study provides a more accurate picture of the dynamics of the risk formation process and its spread impact. However, the purpose of risk identification lies in effective control. To achieve this purpose, it is necessary not only to perceive risks, but also to know which participants are involved in the formation and transmission of risks, so as to achieve accurate prevention. To sum up, the future research needs to solve two problems: (1) there are some drawbacks in the internal control oriented organizational risk management, and it is necessary to seek some methods to transform to collaborative risk management; (2) the existing collaborative risk management model lacks the association with the organization to some extent, which makes the risk response measures lack pertinence.
3 Research Methodology
3.1 PDRC Model (Partner/Danger/Risk/Consequence Chain) Building
Li et al. put forward the Danger/Risk/Consequence chain (DRCc) based on research on collaborative networks and risk cascading effect theory, and constructed a conceptual model of the risk cascading effect.
Among the core concepts of DRC, “Stake” attracts our attention. “Stake” can be seen as item, thing or entity that has potential or actual value to an organization and potential susceptibility to dangers [12]. It can be seen that in the DRC model, “Stake” represents the undertaker of risks or consequences. ISO 7010:2019 [13] defines “Stake” as a tangible entity that has potential or actual value for a specific environment or system, and as a resource or tangible asset owned by a specific environment or system. When focusing on the enterprise collaborative network, Stake refers to the participating organizations in the collaborative network and the individual members in the organization, or called stakeholders. By comparing the two definitions, we find that the DRC model involves only one participant, which weakens the complexity characteristics of the collaborative network. In fact, collaborative networks involve multi participants. Considering the triggering process of risk: the risk source originates from an entity, but the entity that bears the loss after the risk occurs may not be the same as the entity that bears the risk source. That is to say, the downstream cooperative enterprise may bear the consequences of the risk due to the risk source of the upstream enterprise. In other words, there is entity transformation in the process of risk triggering.
Fig. 1. Five-factor model
Fig. 2. PDRC model
Based on this, and on the basis of the original model logic (Fig. 1), we replace "stake" with the more universal concept "partner". At the same time, we define the entity attributes of this concept: the root entity (i.e., subject) and the target entity (i.e., receptor), as shown in Fig. 2. From the associations between the concepts of the model, associations between their entity attributes are derived: in the process of risk initiation, the risk source causes the risk and carries the subject attribute, while the consequence is caused by the risk and carries the receptor attribute. The other concepts are defined as follows:
• Danger: characteristics of events that may result in losses or hazardous conditions in the system environment.
• Risk: the possibility that an entity is susceptible to the consequences of one or more dangers.
• Consequence: the result of an event that affects the target.
The significance of this optimized model is as follows:
• It optimizes the collaborative network risk management scheme: based on the entity association mode, the development of risk source monitoring, risk early warning and pre-control, and consequence treatment strategies becomes more targeted.
• It provides a model basis for organizations' future collaborative risk management.
3.2 Recognition of Model Elements
The management documents of supply chain enterprises (such as management systems, management reports, etc.) contain a large amount of initial risk information. In particular, documents aimed at risk management and control contain direct and identifiable risk-related descriptions (similar to Table 1). The data therefore mainly come from enterprise risk management documentation. This section focuses on how to identify the five elements from these data.
Table 1. Risk description raw data
III. Market risk: market risk refers to the risk caused by changes in the market price of the underlying assets or in competitive relations.
Price risk / Procurement price risk: without continuous monitoring and scientific prediction of the market price level, the purchase quantity, purchase price, and purchase time point cannot be reasonably grasped, which leads to an increase in purchase cost
…
Price risk / Selling price risk: not sensitive to market price changes and unable to adjust the sales price in time, which is not conducive to the company's business development
…
…
Firstly, entities are extracted from the risk description text by manual annotation, and each entity is matched to an element. Then, missing risk elements are supplemented with actual business information and sorted according to the causal order of the PDRC model. After that, special connectives such as "due to", "influence", "have", "trigger", and "cause" are added between the entities to facilitate the division of sentences. Finally, the divided entities are classified and stored to form structured data. The following example shows the process of model element recognition (Fig. 3).
Fig. 3. Identification schematic of model elements
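As a rough illustration of the structuring step just described, the sketch below splits an annotated risk sentence on the special connectives and maps the fragments onto the PDRC elements in causal order. The connective handling, the field names, and the example sentence are assumptions made for this sketch, not the authors' implementation.

```python
# Split an annotated risk sentence (already ordered Partner -> Danger ->
# Risk -> Consequence and joined by the special connectives) and store it
# as one structured record. Connectives and field names are illustrative.
import re

ELEMENT_ORDER = ["partner", "danger", "risk", "consequence"]
CONNECTIVES = ["due to", "influence", "have", "trigger", "cause"]

def structure_risk_sentence(sentence: str) -> dict:
    """Split on the connectives and map fragments onto PDRC elements."""
    pattern = r"\s*\b(?:" + "|".join(re.escape(c) for c in CONNECTIVES) + r")\b\s*"
    fragments = [f for f in re.split(pattern, sentence) if f]
    # Missing elements stay None and would be supplemented manually
    # with the actual business information.
    record = dict.fromkeys(ELEMENT_ORDER)
    for element, fragment in zip(ELEMENT_ORDER, fragments):
        record[element] = fragment
    return record

example = ("procurement department due to lack of market price monitoring "
           "have procurement price risk cause increase of purchase cost")
print(structure_risk_sentence(example))
```

A real pipeline would add the manual annotation and element-matching steps before this splitting, as the paper describes.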
3.3 Application of the Model
The PDRC model can be used as a reference model for risk analysis; it represents the process of risk triggering and belongs to the theoretical level. This section turns to the practical level, analyzes several situations encountered in the specific application of the model, and summarizes general rules for risk management strategies. It should be noted that the "event" in the above model is the risk-driver element, the motivation that stimulates and triggers the occurrence of risks; that is, once a consequence occurs, there must be an event that triggered the risk. Therefore, in the specific application of the model, the event element is not considered (Table 2 and Fig. 4). This paper analyzes the risk-triggering situations under the five cases shown in Fig. 4 and summarizes the following rules (a compact code sketch of these rules follows Table 3):
i. The many-to-one cases (Table 2) are as follows:
• When the subjects Su_1, …, Su_n are all different and none of them equals Re: any subject that finds the risk source gives early warning to the downstream enterprise Re;
• When the subjects are all different but some Su_i = Re: enterprise i carries out self-pre-control;
• When the subjects contain a duplicated Su_i and none of them equals Re: strengthen the monitoring of enterprise i and give early warning to enterprise Re;
• When the subjects contain a duplicated Su_i and Su_i is not equal to Re: strengthen the monitoring of enterprise i and give early warning to enterprise Re;
• When the subjects contain a duplicated Su_i and Su_i is equal to Re: strengthen the monitoring of enterprise i and carry out pre-control at enterprise i.
ii. The one-to-many cases (Table 3) are as follows:
• When the receptors Re_1, …, Re_n are all different and none of them equals Su: whoever discovers the risk source shall give early warning to all Re;
Fig. 4. Application form of the PDRC model (cases (a)–(e))
• When the receptors are all different but some Re_i = Su: the discovery of the risk source requires early warning of all Re and self-pre-control of Su;
• When the receptors contain a duplicated Re_i and none of them equals Su: the discovery of the risk source shall provide early warning for all Re, with key early warning for enterprise Re_i;
• When the receptors contain a duplicated Re_i and Re_i is not equal to Su: the discovery of the risk source shall give early warning to all Re and pre-control enterprise Re_i;
• When the receptors contain a duplicated Re_i and Re_i is equal to Su: the discovery of the risk source shall give early warning to all Re, focus early warning on enterprise Re_i, and Su shall carry out self-pre-control.
Table 2. Many-to-one situation: risk subjects (Su_1, …, Su_n) and corresponding strategies
• Su_1 ∩ Su_2 ∩ … ∩ Su_n = Ø: the subject who finds the risk source should give early warning to Re.
• Su_1 ∩ Su_2 ∩ … ∩ Su_n = Ø: Su_i performs self-pre-control.
• Su_1 ∩ Su_2 ∩ … ∩ Su_n = Su_i: strengthen the monitoring of Su_i and give early warning to Re.
• Su_1 ∩ Su_2 ∩ … ∩ Su_n = Su_i: strengthen the monitoring of Su_i; Su_1 conducts self-pre-control and provides early warning for Re.
• Su_1 ∩ Su_2 ∩ … ∩ Su_n = Su_i: strengthen the monitoring of Su_i and conduct self-pre-control.
Note: the phenomenon of "many to one" (multiple risk sources lead to one risk receptor) requires mutual warning between the two enterprises.
Table 3. One-to-many situation: risk receptors (Re_1, …, Re_n) and corresponding strategies
• Re_1 ∩ Re_2 ∩ … ∩ Re_n = Ø: early warning for all Re.
• Re_1 ∩ Re_2 ∩ … ∩ Re_n = Ø: Re_i conducts self-pre-control.
• Re_1 ∩ Re_2 ∩ … ∩ Re_n = Re_i: key early warning for Re_i.
• Re_1 ∩ Re_2 ∩ … ∩ Re_n = Re_i: key early warning for Re_i; Su conducts self-pre-control.
• Re_1 ∩ Re_2 ∩ … ∩ Re_n = Re_i: Re_i conducts self-pre-control.
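The sketch below is one compact, illustrative way to encode the response rules of Tables 2 and 3 as two decision functions; the strategy texts are paraphrased from the tables, Su_i denotes the duplicated subject and Re_i the duplicated receptor, and none of this is the authors' own implementation.

```python
# Illustrative encoding of the many-to-one (Table 2) and one-to-many
# (Table 3) response rules. Strategy wording is paraphrased.
from collections import Counter

def many_to_one_strategy(subjects, receptor):
    """Subjects Su_1..Su_n share one receptor Re (Table 2)."""
    duplicated = [s for s, n in Counter(subjects).items() if n > 1]
    if not duplicated:
        if receptor in subjects:
            return "the subject equal to Re performs self-pre-control"
        return "the subject who finds the risk source gives early warning to Re"
    su_i = duplicated[0]
    if su_i == receptor:
        return "strengthen the monitoring of Su_i and conduct self-pre-control"
    if receptor in subjects:
        return ("strengthen the monitoring of Su_i; the subject equal to Re "
                "conducts self-pre-control and early warning is given to Re")
    return "strengthen the monitoring of Su_i and give early warning to Re"

def one_to_many_strategy(subject, receptors):
    """One subject Su affects receptors Re_1..Re_n (Table 3)."""
    duplicated = [r for r, n in Counter(receptors).items() if n > 1]
    if not duplicated:
        if subject in receptors:
            return "give early warning to all Re; Su conducts self-pre-control"
        return "give early warning to all Re"
    re_i = duplicated[0]
    if re_i == subject:
        return ("give early warning to all Re, focus early warning on Re_i, "
                "and Su conducts self-pre-control")
    if subject in receptors:
        return ("give early warning to all Re with key early warning for Re_i; "
                "Su conducts self-pre-control")
    return "give early warning to all Re, with key early warning for Re_i"

# e.g. a duplicated carrier as risk subject, the project department as receptor
print(many_to_one_strategy(["supplier", "carrier", "carrier"], "project department"))
```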
4 Case Analysis
In general, enterprises establish an internal control system to transform the business problems they face into risk descriptions and put forward control measures according to relevant rules, which helps to control risks. However, considering the characteristics of collaborative networks, an enterprise, as one participant in a collaborative network, is inevitably affected by risks from other partners, and risk control measures taken from a purely internal perspective are insufficient to cover risks within the scope of the collaborative network. Therefore, the focus here is to demonstrate, with the help of cases, how the PDRC model describes the risk-triggering process and to propose risk response strategies through systematic risk mining. These strategies are then compared with the enterprises' internal controls to achieve
the purpose of strengthening enterprises' cooperative awareness in collaborative risk management. The case was not chosen at random: it describes a typical business collaboration scenario of relatively high complexity, which makes the risk descriptions more meaningful. The core enterprise of the case is a large petrochemical products trading company in China, which mainly provides petrochemical resources such as refined oil, lubricating oil, and asphalt for the projects of cooperating enterprises. The participants involved in the specific collaborative business scenarios mainly include product suppliers, carrier companies, and project departments. Through an investigation of the oil company, we sorted out the main business scenarios and problems and transformed them into risk descriptions. Combined with the risk model above, we select two main business scenarios: procurement and transportation, and capital settlement.
4.1 Case 1: Procurement and Transportation Business
In the procurement and transportation stage, the inventory status and the carrier's business capabilities are the decisive factors for the on-time supply of resources. The process is as follows: the oil company accepts the order from the project department, purchases from the supplier, and entrusts the carrier with transportation (Table 3).
Interview Record Collation:
• Risk description: the order quantity cannot be completed on time and the transportation cycle is long, which means the materials cannot be delivered in full and on quality.
• Internal control methods: optimizing the logistics system and strengthening the oil company's transportation supervision system.
Analysis of the risk chain (Tables 4 and 5):
Fig. 5. Risk analysis of procurement and transportation business
Table 4. Risk analysis and countermeasures
Risk identification: late delivery of goods; shortage of transport vehicles.
Risk source identification: supplier's production equipment failure; the carrier's transportation plan is unreasonable; the oil company accepts urgent orders.
Risk response strategy: suppliers give early warning to the project department and carry out emergency procurement in advance; the oil company monitors the carrier's ability and arranges orders reasonably; carriers adjust transportation plans and give early warning to the oil products company.
4.2 Case 2: Reconciliation and Settlement Business
In the reconciliation and settlement stage, a goods-before-payment settlement method is usually adopted, with the result that accounts receivable cannot be recovered in time and accounts payable cannot be paid regularly. This is a major pain point among enterprises' financial problems (Fig. 5).
Interview Record Collation:
• Risk description: the order quantity cannot be completed on time and the transportation cycle is long, which means the materials cannot be delivered in full and on quality.
• Internal control methods: optimize the logistics system and strengthen the oil company's transportation supervision system.
Analysis of the risk chain (Fig. 6):
Fig. 6. Risk analysis of reconciliation and settlement business
Table 5. Risk analysis and countermeasures
Risk identification: cash flow risk; payment risk; procurement risk.
Risk source identification: accounts receivable cannot be recovered on time.
Risk response strategy: the oil products company must keep abreast of the customer's property and credit status; give early warning to the project department and initiate emergency purchases.
The case analysis shows that when we focus only on a single enterprise, the risk strategy tends toward internal control of that enterprise. However, when we deconstruct the risk, trace it back to its source, and focus on the collaborative participants, we can often formulate more targeted risk response strategies and improve the efficiency of risk management.
5 Conclusion
The complexity of risk in a collaborative network is related to many factors, including risk elements, risk attributes, risk relationships, entity diversity, and uncertainty. For collaborative network risk management, partners are not only the participants in risk initiation and dissemination but also the recipients of risk evolution. Therefore, participants' cognition of risk and their coping behavior directly affect the evolution of risk and the effect of risk management, a perspective that is lacking in existing research. The DRC model is a conceptual structure representing risk. Li et al. provided a more specific definition of its core concepts and described the risk-triggering process, but the participants in the collaborative response to risk were left unexplained. On the basis of that study, this paper adds entity attributes to the concept of participant and highlights the subject and receptor concepts. In addition, this study provides an identification method for the model elements, expands on the application of the optimized model in detail, and summarizes some general risk response rules. Finally, we used an enterprise case to verify the proposed method. This study provides a model-based path for risk management to move from an internal control mode to a collaborative management mode, which can be used to mitigate risks in collaborative networks. In order to enhance the universality of the model, future work will focus on the following aspects:
• Build a knowledge base of risk elements and establish the causal relationship "risk source-risk-consequence".
• In view of the dynamic spread of risk, further study the cascading effect and establish the cascading relationship "consequence-risk-risk source", so as to provide a more efficient reference for risk mitigation.
Acknowledgment. The presented research works have been supported by “the National Natural Science Foundation of China”. The authors would like to thank the project partners for their advice and comments.
References 1. Li, Q.Z.: Enterprise internal control risk management research. Enterp. Reform Manage. 22(11), 21–22 (2020) 2. Afsarmanesh, H., Ermilova, E., Msanjila, S.S., Camarinha-Matos, L.M.: Modeling and management of information supporting functional dimension of collaborative networks. In: Hameurlain, A., Küng, J., Wagner, R. (eds.) Transactions on Large-Scale Data- and Knowledge-Centered Systems I. LNCS, vol. 5740, pp. 1–37. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03722-1_1 3. Wulan, M., Peterovic, D.: A fuzzy logic based system for risk analysis and evaluation within enterprise collaborations. Comput. Ind. 63(8), 739–748 (2017) 4. Zeng, B., Yen, B.P-C.: Rethinking the role of partnerships in global supply chains: a risk-based perspective. Int. J. Prod. Econ. 185, 52–62 (2017) 5. Jamshidi, A., Abbasgholizadeh Rahimi, S., Ait-kadi, D., Ruiz, A.: A new decision support tool for dynamic risks analysis in collaborative networks. In: Camarinha-Matos, L.M., Bénaben, F., Picard, W. (eds.) PRO-VE 2015. IAICT, vol. 463, pp. 53–62. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24141-8_5 6. Fynes, B., Deburca, S., Voss, C.: Supply chain relationship quality, the competitive environment and performance. Int. J. Prod. Res. 43(16), 3303–3320 (2005) 7. Ojha, R., Ghadge, A., Tiwari, M.K.: Bayesian network modelling for supply chain risk propagation. Int. J. Prod. Res. 56(17), 5795–5819 (2018) 8. Das, T.K., Teng, B.-S.: A risk perception model of alliance structuring. J. Int. Manage. 7(1), 1–29 (2001) 9. Huang, M., Lu, F.Q., Ching, W.K.: A distributed decision making model for risk management of virtual enterprise. Expert Syst. Appl. 38(10), 13208–13215 (2011) 10. Yang, R.J., Zou, P.X., Wang, J.: Modelling stakeholder-associated risk networks in green building projects. Int. J. Project Manage. 34(1), 66–81 (2016) 11. Li, J.Y., Benaben, F., Gou, J.Q., Mu, W.X.: A proposal for risk identification approach in collaborative networks considering susceptibility to danger. In: 19th IFIP Working Conference on Virtual Enterprises (PRO-VE 2018), vol. 534, pp. 74–84 (2018) 12. Li, J.Y., Gou, J.Q., Mu, W.X.: Research on association reasoning of railway potential risk based on the situation of accident chain. J. Chin. Railway Soc. 11(39), 8–14 (2017) 13. ISO 7010: Graphical symbols- Safety colours and safety signs [DB/OL] (2019)
Temporal and Spatial Patterns of Ship Accidents in Arctic Waters from 2006 to 2019 Qiaoyun Luo and Wei Liu(B) College of Transport and Communications, Shanghai Maritime University, Shanghai, China [email protected]
Abstract. The aim of this paper is to investigate ship accidents in Arctic waters from 2006 to 2019 and reveal their temporal and spatial patterns. Maritime activity in the Arctic is increasing due to multiple factors, such as the melting of sea ice and oil and gas development. To improve maritime safety, analysis of previous ship accidents is required. Based on the Lloyds Casualty Archive database, this paper analyzed the ship accidents occurring north of 66°34′ N. The Kernel Density Estimation (KDE) method was used to identify accident-prone seas. The results show that in Arctic waters, fishing ships are the most accident-prone ship type. The most accident-prone type of non-fishing ship is the passenger ship, followed by the general cargo ship. From 2006 to 2019, the number of ship accidents in Arctic waters fluctuated, and it began to show a downward trend in 2017. Most ship accidents occur in the Eastern Hemisphere part of Arctic waters, and the most accident-intensive sea area is the water near the port of Murmansk. Although ship accidents in Arctic waters rarely cause pollution, the proportion of serious accidents is relatively high, accounting for about half of all accidents. These findings can be helpful for accident prevention in Arctic waters.
Keywords: Arctic · Ship accidents · Polar Code · Temporal patterns · Spatial patterns · Kernel Density Estimation (KDE) method
1 Introduction
With the melting of Arctic sea ice, Arctic routes have become more attractive to the shipping industry [1]. Traveling through the Arctic routes can shorten the sailing distance and save shipping costs [2]. Maritime traffic in Arctic waters is increasing [3]. However, harsh weather conditions and a fragile environment make it difficult for ships to sail in Arctic waters [4]. Sea ice, low temperature, high latitude, and polar day and night are the main threats [5].
W. Liu—This study was funded by the Major Project of the National Social Science Fund of China, Grant Number 20&ZD070.
Once an accident happens, it may lead to casualties, economic losses, and environmental pollution. Therefore, there is a need to analyze ship accidents to enhance maritime safety in Arctic waters. Safe navigation in Arctic waters is an urgent issue. Afenyo et al. [6] utilized Bayesian Networks to analyze a probable ship-iceberg collision during Arctic shipping. Zhang et al. [7] analyzed the risks of getting stuck in ice and of collision between a ship and ice. Kum and Sahin [8] used root cause analysis to investigate the causes of maritime accidents in the Arctic from 1993 to 2011. Fedi et al. [9] analyzed maritime accidents in the Russian Arctic from 2004 to 2017. Fu et al. [10] identified associations between ship and accident attributes based on Arctic ship accidents from 2008 to 2017. Kruke and Auestad [11] analyzed the challenges of operating in Arctic waters and the preparedness equipment. Khan et al. [12] proposed a Dynamic Bayesian Network model to assess the ship-ice collision risk. Vanhatalo et al. [13] proposed an approach to assess the probability of a ship besetting in ice from real-life data. However, most studies have assessed the probability of accidents in Arctic waters, and there are few geospatial analyses of actual ship accidents [14]. The aim of this study is twofold. First, it investigates ship accidents in Arctic waters over a long time span; it seems that analyses of ship accidents covering such a span with the latest data are not found in the existing literature. Second, the accident-prone spots are identified; spatial analysis of ship accidents in Arctic waters is also rarely seen. Exploring the characteristics and recognizing the accident-prone spots can contribute to developing safety strategies and preventing maritime accidents in Arctic waters. We utilize the Lloyds Casualty Archive database and extract ship accidents occurring north of 66°34′ N from 2006 to 2019. After processing the dataset, we analyze the characteristics and the temporal and spatial patterns of the accidents. To identify the accident-prone seas, the Kernel Density Estimation (KDE) method is introduced and used. The rest of the paper is structured as follows: Sect. 2 introduces the data source and methods. Section 3 describes the characteristics of ship accidents in Arctic waters. Section 4 shows the temporal and spatial patterns of ship accidents and discusses the results. Section 5 is the conclusion.
2 Materials and Methods
2.1 Data Source and Processing
This study employed the Lloyds Casualty Archive database (https://www.lloydslistintelligence.com/) maintained by Lloyd's List. First, the accidents occurring north of 66°34′ N were extracted. The dataset was then processed before further analysis. In total, 265 accident records with 12 fields were retained after all processing steps. The data structure is shown in Table 1. The main processing steps are as follows:
(1) Accident area. According to the classification of accident areas in the database, no accidents were recorded in the North Pole area, so this region was omitted. Consequently, the three regions of Russian Arctic and Bering Sea, Iceland and Northern Norway, and Canadian Arctic and Alaska were selected.
(2) Accident year. These regions did not have continuous records before 2006, which could affect the temporal pattern investigation, so records before 2006 were omitted. At the time of the present study, records for accidents occurring in 2020 were incomplete, so they were excluded.
(3) Ship type. Ships involved in the accidents were reclassified into ten categories; details can be found in [15].
(4) Field. The original database contains a total of 23 fields, some of which were not relevant to this study (for example, the build place of the ship) and were removed. Some fields with only sparse data were also omitted.
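A minimal pandas sketch of these selection steps is given below. The column names and the toy records are assumptions for illustration; the real records would come from the database export rather than an inline table.

```python
# Filter casualty records to the study scope: north of 66°34' N,
# years 2006-2019, and the three retained accident areas.
import pandas as pd

ARCTIC_CIRCLE = 66 + 34 / 60      # 66 deg 34 min N
KEPT_AREAS = {"Russian Arctic and Bering Sea",
              "Iceland and Northern Norway",
              "Canadian Arctic and Alaska"}

# Toy stand-in for the exported casualty records; a real run would read
# the export file instead, e.g. pd.read_csv("casualty_export.csv").
records = pd.DataFrame({
    "year":     [2005, 2013, 2019, 2020, 2017],
    "latitude": [70.1, 68.9, 66.0, 71.2, 69.5],
    "area":     ["Iceland and Northern Norway", "Russian Arctic and Bering Sea",
                 "Canadian Arctic and Alaska", "Russian Arctic and Bering Sea",
                 "North Pole"],
})

subset = records[
    (records["latitude"] >= ARCTIC_CIRCLE)        # north of 66 deg 34 min N
    & records["year"].between(2006, 2019)         # drop pre-2006 and 2020 records
    & records["area"].isin(KEPT_AREAS)            # omit the (empty) North Pole area
]
print(subset)
```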
2.2 Methods
Based on the processed accident dataset, descriptive analysis was carried out from several aspects to explore the overall characteristics of the ship accidents. Then, the temporal distributions of the accidents were analyzed. To investigate the spatial distributions of the accidents, the Kernel Density Estimation (KDE) method was used to identify the accident-prone seas; it was implemented in ArcGIS 10.2. KDE is one of the spatial analysis methods, proposed by Rosenblatt [16] and Parzen [17]. KDE is a non-parametric method that does not require an assumption about the distribution of the dataset. Focusing on the distribution of the dataset itself, KDE has received more and more attention both in theory and in application [18]. KDE is a popular method in traffic accident analysis [19]. The general form of KDE can be expressed as follows:

\lambda(s) = \frac{1}{\pi r^{2}} \sum_{i=1}^{n} k\left(\frac{d_{is}}{r}\right)   (1)
where λ(s) denotes the estimated density at location s; r denotes the search radius or bandwidth; k is the kernel function, representing the weight of a point i at distance d_{is} from the location s; and n denotes the number of accident locations. Previous studies suggested that the kernel function k and the bandwidth r are the two key parameters of KDE [18, 19]. The bandwidth r determines the smoothness of the estimated density [19]: the larger the value of r, the smoother the estimate. Compared with the value of r, the form of the kernel function has little impact on the estimate [18]. Therefore, this study did not discuss the effects of different kernel functions on the results. The kernel function of ArcGIS 10.2 is based on the quadratic kernel function [20]. The selected search radius was the default radius of ArcGIS 10.2. The resulting density surfaces displayed well.
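The small numpy sketch below evaluates Eq. (1) on a grid, using the quartic ("quadratic") kernel from Silverman [20] as the weight function. It is only an illustration of the estimator itself; it does not reproduce the ArcGIS workflow (no map projection, default-bandwidth logic, or raster output), and the toy point cloud is invented.

```python
# Evaluate the kernel density estimate of Eq. (1) on a lon/lat grid.
import numpy as np

def kernel_density(points, grid_x, grid_y, r):
    """points: (n, 2) array of accident coordinates; r: search radius."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(gx, dtype=float)
    for px, py in points:
        d = np.hypot(gx - px, gy - py)            # distance d_is to location s
        u = d / r
        inside = u < 1.0
        # quartic kernel weight k(d_is / r), zero outside the search radius
        density[inside] += (3.0 / np.pi) * (1.0 - u[inside] ** 2) ** 2
    return density / (np.pi * r ** 2)             # 1 / (pi r^2) factor of Eq. (1)

rng = np.random.default_rng(0)
pts = rng.normal(loc=[33.0, 69.0], scale=0.5, size=(265, 2))   # toy accident cloud
dens = kernel_density(pts, np.linspace(31, 35, 100), np.linspace(67, 71, 100), r=0.8)
print(dens.max())
```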
Table 1. Data structure
Attribute | Description | Value
Accident area | The exact area where the accidents happened | Russian Arctic and Bering Sea; Iceland and Northern Norway; Canadian Arctic and Alaska
Accident year | The exact year of the accident | 2006–2019
Accident month | The exact month of the accident | 1–12
Accident quarter | The exact quarter of the accident | 1–4
Cause of accident | The cause of the accident | Collision, Fire/explosion, Foundering, Hull damage, Machinery damage/failure, Wreck/stranding, Other
Severity of accident | Whether the accident is a serious accident | Serious, not serious
Pollution status of accident | Whether the accident is a pollution accident | Pollution, no pollution
Ship type | Ships that are involved in the accident | See Fig. 1
Ship class | The classification society that a ship is classed with | DNV GL, Russian, Lloyds Register, others
Age of ship | Service life of a ship (years) | (0,10], (10,20], (20,30], above 30
Latitude | The exact latitude of the accident
Longitude | The exact longitude of the accident
3 Characteristics of Arctic Maritime Accidents
From 2006 to 2019, as shown in Fig. 1, out of a total of 265 accidents, fishing ships are the most accident-prone ship type, accounting for 41.50%. The most accident-prone types of non-fishing ships are passenger ships, followed by general cargo ships; these two ship types account for 13.96% and 13.58% of the total, respectively. Tanker ships (10.94%) and government vessels and icebreakers (7.54%) are also relatively prone to accidents.
As for classification societies, the Russian Maritime Register of Shipping accounts for the most ships, with a share of 39.62% (105/265), followed by DNV GL (38.11%) and Lloyds Register (6.04%). These three classification societies collectively account for 83.77% of the total. Other classification societies include American Bureau of Shipping, Bureau Veritas, Korean Register, China Classification Society, etc. Moreover, the classification societies of the ships involved in the accidents vary with ship type: most fishing vessels (63.63%) are classed with the Russian Maritime Register of Shipping, while most passenger ships (83.78%) are classed with DNV GL. Spotting the causes of ship accidents that occur more often than others is essential for exploring and preventing accidents [18]. The most common initial events of ship accidents in Arctic waters are machinery damage (for example, a lost rudder or a fouled propeller), accounting for approximately 51%, followed by wreck/stranding (13%) and collision (12%) (Fig. 2). Most of the ships involved in the accidents are between 20 and 30 years old (Fig. 3).
Fig. 1. Distribution of ship accidents in the Arctic by ship type and class.
Fig. 2. Distribution of ship accidents in Arctic waters by cause of accident.
Fig. 3. Distribution of ship accidents in Arctic waters by age of ship.
Fig. 4. Distribution of ship accidents in Arctic waters by severity of casualty.
Fig. 5. Distribution of ship accidents in Arctic waters by status of pollution.
As shown in Fig. 4, serious and non-serious ship accidents in Arctic waters each account for roughly half of the total. The rate of serious ship accidents in Arctic waters is much higher than that of global maritime accidents [18]. On the other hand, pollution caused by ship accidents in Arctic waters is relatively rare, accounting for 4% of the total (Fig. 5).
4 Results and Discussion
4.1 Temporal Patterns
Based on the processed dataset, the temporal distributions of Arctic ship accidents were analyzed. The results are shown in Fig. 6 (by year), Fig. 7 (quarter) and Fig. 8 (month).
Fig. 6. Temporal distribution of ship accidents in Arctic waters by year.
Fig. 7. Temporal distribution of ship accidents in Arctic waters by quarter.
Fig. 8. Temporal distribution of ship accidents in Arctic waters by month.
As shown in Fig. 6, from 2006 to 2012 the number of ship accidents in the Arctic showed a steady increase, but it rose sharply in 2013. In 2013, several Arctic countries issued policies related to the Arctic region. In particular, on May 10, 2013, the White House announced the National Strategy for the Arctic Region [21], the first Arctic strategy of the United States, and in February 2013 the Russian President approved The Strategy for the Development of the Arctic Zone of the Russian Federation and National Security for the Period up to 2020. All of this brought the Arctic region greater attention than before, which may have increased vessel traffic in Arctic waters and resulted in a higher probability of accidents [22]. Since 2013, the number of ship accidents has fluctuated, and it started to head downward in 2017, indicating that Arctic ship accidents have been brought under control. Considering that the Polar Code came into force in 2017, this also implies that the Polar Code has played a role in improving the safety of Arctic shipping. However, the impact of the Polar Code needs to be studied further, which is beyond the scope of this study. As shown in Fig. 7, the third quarter witnessed the highest number of ship accidents in the Arctic, accounting for 29.81% of the total; the third quarter is the summer season, with heavier vessel traffic compared to the other quarters. Ship accidents mainly occurred in March, July, February, September, and November, as shown in Fig. 8. The number of accidents that occurred in these five months accounted for more than half of the total. Specifically, March and July are the biggest contributors, accounting for 12.07% and 10.94%, respectively.
4.2 Spatial Patterns
The spatial distribution of ship accidents in Arctic waters and the KDE results are divided into two parts: the Eastern Hemisphere (including the Russian Arctic, the Bering Sea, Iceland, and Northern Norway) and the Western Hemisphere (including the Canadian Arctic and Alaska), as shown in Fig. 9 and Fig. 10, respectively. The KDE values were divided into ten categories; the darker/redder the color, the greater the value.
Fig. 9. Spatial distribution of ship accidents in the Eastern Hemisphere in Arctic waters and KDE results.
Fig. 10. Spatial distribution of ship accidents in the Western Hemisphere in Arctic waters and KDE results.
Overall, most ship accidents occur in the Eastern Hemisphere part of Arctic waters, accounting for 91.69% of the total. In the Eastern Hemisphere, the region most prone to accidents is the waters near the port of Murmansk, followed by Northern Norway waters. In the Western Hemisphere, the distribution of ship accidents is relatively scattered; they mainly occur within the waters of the Canadian Archipelago, followed by Western Greenland waters. The spatial patterns of serious accidents and pollution accidents in the three accident areas were also analyzed. As shown in Fig. 11, the accident severity rate in all three regions is approximately 50%. As shown in Fig. 12, 70% of pollution accidents occur in Canadian Arctic and Alaska waters.
Fig. 11. Spatial distribution of ship accidents in Arctic waters by severity of accident.
Fig. 12. Spatial distribution of ship accidents in Arctic waters by pollution status of accident.
5 Conclusion
Based on the Lloyds Casualty Archive database, the characteristics and the temporal and spatial patterns of ship accidents in Arctic waters from 2006 to 2019 were analyzed. Using the KDE method, sea areas prone to accidents were identified. The results show that in Arctic waters, fishing ships are the most accident-prone ship type. The most accident-prone type of non-fishing ship is the passenger ship, followed by the general cargo ship. The most common initial event of ship accidents is machinery damage. From 2006 to 2019, the number of ship accidents in Arctic waters fluctuated, and it began to show a downward trend in 2017, indicating that the Polar Code has played a role. Most ship accidents occur in the Eastern Hemisphere part of Arctic waters, and the most accident-intensive sea area is the water near the port of Murmansk. Although ship accidents in Arctic waters rarely cause pollution, the proportion of serious accidents is relatively high. Relevant management departments can propose targeted measures and allocate resources accordingly. These characteristics and patterns may also serve as a basis for further research investigating the rates and causes of accidents. The impact of the Polar Code needs to be studied further.
References 1. Fu, S., Goerlandt, F., Xi, Y.: Arctic shipping risk management: a bibliometric analysis and a systematic review of risk influencing factors of navigational accidents. Saf. Sci. 139(77), 105254 (2021) 2. Lasserre, F., Pelletier, S.: Polar super seaways? Maritime transport in the arctic: an analysis of shipowners’ intentions. J. Transp. Geogr. 19(6), 1465–1473 (2011) 3. Gunnarsson, B.: Recent ship traffic and developing shipping trends on the northern sea route-policy implications for future arctic shipping. Mar. Policy 124(573), 104369 (2021) 4. Dehghani-Sanij, A.R., Dehghani, S.R., Naterer, G.F., Muzychka, Y.S.: Sea spray icing phenomena on marine vessels and offshore structures: review and formulation. Ocean Eng. 132, 25–39 (2017)
5. Aune, M., Aniceto, A.S., Biuw, M., et al.: Seasonal ecology in ice-covered arctic seas - considerations for spill response decision making. Mar. Environ. Res. 141, 275–288 (2018) 6. Afenyo, M., Khan, F., Veitch, B., et al.: Arctic shipping accident scenario analysis using Bayesian network approach. Ocean Eng. 133, 224–230 (2017) 7. Zhang, C., Zhang, D., Zhang, M., Lang, X., Mao, W.: An integrated risk assessment model for safe arctic navigation. Transp. Res. Part A Policy Pract. 142, 101–114 (2020) 8. Kum, S., Sahin, B.: A root cause analysis for arctic marine accidents from 1993 to 2011. Saf. Sci. 74, 206–220 (2015) 9. Fedi, L., Faury, O., Etienne, L.: Mapping and analysis of maritime accidents in the Russian arctic through the lens of the polar code and POLARIS system. Mar. Policy 118, 103984 (2020) 10. Fu, S., Liu, Y.: Feature analysis and association rule mining of ship accidents in arctic waters. Chin. J. Polar Res. 32(1), 102–111 (2020) 11. Kruke, B.I., Auestad, A.C.: Emergency preparedness and rescue in arctic waters. Saf. Sci. 136, 105163 (2021) 12. Khan, B., Khan, F., Veitch, B.: A dynamic Bayesian network model for ship-ice collision risk in the arctic waters. Saf. Sci. 130, 104858 (2020) 13. Vanhatalo, J., Huuhtanen, J., Bergström, M., et al.: Probability of a ship becoming beset in ice along the northern sea route-a Bayesian analysis of real-life data. Cold Reg. Sci. Technol. 184, 103238 (2021) 14. Huang, D., Hu, H., Li, Y.: Spatial analysis of maritime accidents using the geographic information system. Transp. Res. Rec. J. Transp. Res. Board 2326(1), 39–44 (2013) 15. Arctic Council: Arctic Marine Shipping Assessment 2009 Report. Arctic Council, April 2009 16. Rosenblatt, M.: Remarks on some nonparametric estimates of a density function. Ann. Math. Stat. 27(3), 832–837 (1956) 17. Parzen, E.: On estimation of a probability density function and mode. Ann. Math. Stat. 33(3), 1065–1076 (1962) 18. Zhang, Y., Sun, X., Chen, J., Cheng, C.: Spatial patterns and characteristics of global maritime accidents. Reliab. Eng. Syst. Saf. 206, 107310 (2020) 19. Xie, Z., Yan, J.: Kernel density estimation of traffic accidents in a network space. Comput. Environ. Urban Syst. 32(5), 396–406 (2008) 20. Silverman, B.W.: Density Estimation for Statistics and Data Analysis. Chapman and Hall, New York (1986) 21. National Strategy for the Arctic Region Announced. The White House, May 2013. https://obamawhitehouse.archives.gov/blog/2013/05/10/national-strategy-arctic-region-announced. Accessed 2 Apr 2021 22. Budanov, I.: Resources and conditions of infrastructure development in the Russian federation. Stud. Russ. Econ. Dev. 24(5), 422–432 (2013)
A Summary of User Profile Research Based on Clustering Algorithm Lizhi Peng, Yangping Du, Shuihai Dou(B) , Ta Na, Xianyang Su, and Ye Liu Beijing Institute of Graphic Communication, Beijing, China {duyanping,doushuihai}@bigc.edu.cn
Abstract. Clustering algorithms are well suited to computing and analyzing the latent characteristics in user data. The clustering results can be used to analyze the features of a user profile, digitize them, and construct a new user profile, which is an important basis for precise marketing and services and for improving the user experience in many fields. This article provides an overview of the definition of the user profile and of classical clustering algorithms, summarizes the application of clustering algorithms to user profiles, sorts out the advantages and disadvantages of the algorithms in these applications, points out some current problems of applying clustering algorithms to user profiles, and looks ahead to future research directions. The review in this paper can support subsequent research on user profiles based on clustering algorithms. Keywords: User profile · Clustering algorithm · Algorithm comparison
1 Introduction
With the rapid development of new technologies such as cloud computing, mobile technology, big data, and the Internet, the data generated have shown explosive growth [1]. Facing these massive data, how to serve users accurately by using the data to acquire user needs and interests has become a key research direction in many fields, and it is against this background that user profiles have emerged. As a tool for achieving precise services, the user profile has been widely used in finance, libraries, tourism, e-commerce, and other fields. The construction of a user profile generally contains three basic steps: data collection, data mining, and data labeling [2]. Data mining is the core module among them; it mines the user features hidden in the data using techniques based on clustering algorithms [3], statistical analysis [4], Bayesian networks [5], topic models [6], rule-based definitions [7], and so on. Techniques based on clustering algorithms are among the most commonly used for building user profiles.
Funding source of this paper: project to design and develop an intelligent book management platform in the physical bookstore scene (27170121001/025).
For example, the K-means algorithm uses data on user behaviors and information characteristics, mining them through clustering to build user profiles and thus provide accurate services to users. The rest of the article is arranged as follows. First, we introduce the user profile and its definition. Second, we sort out the classification of classical clustering algorithms and their representative methods and compare the advantages and disadvantages of the classic clustering algorithms and of their application to user profiles, which provides a reference for future related research. Finally, we discuss the future development of research on user profiles based on clustering algorithms.
2 User Profile Overview
The concept of the User Profile was first introduced in 1995 by Alan Cooper [8], the father of interaction design, who believed that a user profile is a model of the target user built from a set of real data, a virtual representation of real users that ultimately serves the user. Some researchers describe user profiles as labeled user models. R. M. Quintana [9] and Massanari [10] consider it a user image model based on a large amount of user data, including names, hobbies, preferences, and other information. Mengjie Yu [11] defined the user profile as an information product that labels users' information. Xu Man [12] considers the user profile a collection of a user's personality characteristics that reflects the user to some extent. You Minghui, Yin Yafeng, Xie Lei, and Lu Sanglu [13] view the user profile as an abstract, labeled user model constructed from information about the user's personality traits, state of mind, and demographic attributes. Chuanming Yu, Tian Xin, Guo Yajing, and An Lu [14] consider the user profile a tagged user model obtained by summarizing, mining, and abstracting the social relationships and behavior patterns of users. Other researchers describe the user profile from the perspective of user object construction and user characteristics. G. Amato and U. Straccia [15] define a user profile as a structured collection of images that can describe any individual's needs, interests, and personalized preferences. Travis [16] proposes seven basic features, namely basicity, empathy, authenticity, uniqueness, target, quantity, and applicability; the English initials of these seven features constitute "Persona", i.e., the user profile. To sum up, one of the key issues of the user profile is user labeling: through the labeling of the user image, the general "outline" of a user's information can be traced to understand the user's behavior, interests, consumption habits, etc., which can be used to analyze important aspects of the user's psychology. From the perspective of user object construction and user characteristics, the user profile is a virtual user model based on a large amount of real user data that describes user needs and preferences, and it has the characteristics of comprehensiveness, authenticity, representativeness, and dynamism. It can be seen that the user profile puts the user in the dominant position and can highlight the personalized needs of users.
3 A Review of User Profiles Based on Clustering Algorithms
3.1 Overview of Clustering Algorithms
A clustering algorithm groups a set of objects into multiple classes composed of similar objects. A generated cluster is a set of data objects: objects in the same cluster are similar to each other, and objects in different clusters differ from each other. Classical clustering algorithms mainly include partition-based algorithms [17], hierarchy-based algorithms [18], grid-based algorithms [19], density-based algorithms [20], and model-based algorithms [21]. Figure 1 shows this classification and the representative algorithms of each category.
Fig. 1. The specific classification and the representative algorithm of each classification.
Fig. 2. Schematic diagram of the partition-based clustering process.
Fig. 3. Schematic diagram of the hierarchical cluster structure (agglomerative and divisive directions).
1) Partition-based clustering algorithms
The basic idea of partition-based clustering is as follows: given a data set M containing m data objects, a partitioning method constructs n partitions of the data, each partition representing a class, with n ≤ m. According to the attributes of M, mutually exclusive cluster divisions are usually adopted, and an iterative relocation technique based on distance is applied so that data objects in the same class are as "close" as possible, while data objects in different classes are as "far" apart as possible. Figure 2 shows a schematic diagram of partition-based clustering. Most partition-based clustering algorithms are relatively efficient, but the number of clusters is difficult to determine and needs to be given before the cluster analysis. The K-means algorithm [22] is the best known of the partition-based clustering algorithms; other commonly used ones include the K-medoid algorithm [23], the K-modes algorithm [24], and the K-prototypes algorithm [25].
2) Hierarchical clustering algorithms
The goal of hierarchical clustering is to create a series of nested divisions, generally represented by a tree hierarchy [26]. According to how the tree structure is constructed, these methods can be divided into two categories: agglomerative and divisive. Agglomerative hierarchical clustering adopts a bottom-up approach, iteratively merging points until all points are grouped into one cluster. Divisive hierarchical clustering adopts a top-down approach, starting from a cluster containing all points and gradually dividing it until all points are in different clusters. Hierarchical clustering algorithms mainly include the CURE algorithm [27], the BIRCH algorithm [28], and the Chameleon algorithm [29]. Figure 3 shows the structure diagram of hierarchical clustering.
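As a brief, generic illustration of these two families, the sketch below runs K-means (partition-based, with the cluster count fixed in advance) and agglomerative (bottom-up hierarchical) clustering on the same toy data using scikit-learn; the data and parameters are arbitrary and only meant to show the interfaces.

```python
# K-means vs. agglomerative clustering on synthetic 2-D data.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=42)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
agglo = AgglomerativeClustering(n_clusters=3, linkage="ward").fit(X)

print("K-means cluster sizes:", np.bincount(kmeans.labels_))
print("Agglomerative cluster sizes:", np.bincount(agglo.labels_))
```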
3) Model-based clustering algorithms
The basic idea of model-based clustering is to assume, from a probabilistic perspective, a suitable mathematical model for each class; the construction of the model generally depends on the density distribution function (or another function) of the data set in space. To obtain good clustering results, the fit of the data to the model can be optimized. The main representative algorithms include the COBWEB algorithm [30], the SOM algorithm [31], the GMM algorithm [32], and so on.
4) Density-based clustering algorithms
The basic idea of density-based clustering is that clusters are dense regions in the data space separated by sparse regions, and data elements of the same cluster are closely connected. Its distinguishing feature is that classification is based on data density rather than on a distance measure. The density-based DBSCAN algorithm [33] can find clusters of any shape; it is not sensitive to noise points and can even identify them, but it is sensitive to the neighborhood parameters. In general, it is necessary to calculate the distance between any pair of points, so the time and space complexity is high, and different parameter settings lead to different results. In addition to DBSCAN, common density-based algorithms include the OPTICS algorithm [34] and the DENCLUE algorithm [35].
5) Grid-based clustering algorithms
Grid-based algorithms are space-driven: they adopt a grid spatial structure that divides the data space into several disjoint virtual grid cells, and all clustering operations are performed on the grid structure rather than on individual data objects [36]. Grid-based clustering reduces the complexity of the clustering algorithm; the processing time is independent of the number of data points and of the order of data input, so it is suitable for large-scale data. Common grid-based clustering algorithms include the STING algorithm [35], the WaveCluster algorithm [37], and the CLIQUE algorithm [38].
The typical representative algorithms of the above classic clustering algorithm families are enumerated, and their advantages and disadvantages are analyzed and compared; the results are shown in Table 1.
Table 1. Comparison of advantages and disadvantages of traditional classic clustering algorithms
• Partition-based clustering algorithms (K-means, K-medoid, K-prototypes): easy to understand and implement; easy to calculate; efficient. Disadvantages: not easy to find data with non-convex shapes; affected by the number of initial clusters.
• Hierarchy-based clustering algorithms (BIRCH, CURE, CHAMELEON): mine deeper data; no need to set the number of categories before clustering. Disadvantages: need to calculate point-to-point distances, so efficiency is low; poor scalability and irreversibility.
• Grid-based clustering algorithms (STING, WaveCluster, CLIQUE): high efficiency; can automatically eliminate outliers. Disadvantages: depend on the number of grid cells; the quality and accuracy of the clustering results are low.
• Density-based clustering algorithms (DBSCAN, OPTICS, DENCLUE): no need to know the number of clusters; clusters of any shape can be found. Disadvantages: large amount of calculation; high time and space complexity.
• Model-based clustering algorithms (COBWEB, SOM, GMM): clusters can be elliptical and mixtures are supported. Disadvantage: low execution efficiency.
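To complement Table 1, the short sketch below contrasts a density-based and a model-based method on toy data: DBSCAN needs no cluster count and flags noise points, while a Gaussian mixture assumes elliptical components and returns soft (probabilistic) assignments. The data and parameter values are arbitrary illustrations, not recommendations.

```python
# DBSCAN (density-based) vs. Gaussian mixture (model-based) on toy data.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons
from sklearn.mixture import GaussianMixture

X, _ = make_moons(n_samples=400, noise=0.07, random_state=0)

db = DBSCAN(eps=0.2, min_samples=5).fit(X)
noise = int(np.sum(db.labels_ == -1))
clusters = len(set(db.labels_)) - (1 if noise else 0)
print("DBSCAN clusters:", clusters, "noise points:", noise)

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
proba = gmm.predict_proba(X)          # soft assignment of each point
print("GMM mean assignment confidence:", round(float(proba.max(axis=1).mean()), 3))
```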
3.2 Application of User Profiles Based on Clustering Algorithms
Using cluster analysis to construct group user profiles can dig out the similarities and associations between users' hidden characteristics and those of other users, thereby identifying important potential user groups and target user groups and helping to understand the distribution of user interests. It can help enterprises and the market quickly and accurately understand the target group of users and achieve precise marketing and precise services. Huaming Yu, Zhang Zhi, and Zhu Yiting [39] used clustering algorithms to build personal profiles of students along different dimensions; the profiles help students monitor themselves and adjust their learning behaviors in time, while allowing teachers to personalize their services to students based on the profiles. Godoy and Amandi [40] constructed user profiles by obtaining feature vectors of the websites visited by the user in the browser, feeding these feature vectors into a clustering algorithm, and personalizing the user through clustering to complete the process of user label matching. The clustering approaches most often used for user profiles are user profiles based on partition clustering, user profiles based on hierarchical clustering, and user profiles based on model clustering.
1) User profiles based on partition clustering
The K-means algorithm, a partition-based clustering algorithm, is commonly used for user profiles because of its simple calculation and ease of implementation. Jinquan Zhao, Xia Xue, Liu Ziwen, Xu Chunlei, Su Dawei, and Xinxin [41] used the K-means clustering algorithm to construct user profiles so that business personnel can understand users' electricity consumption behavior more accurately and conveniently and determine user categories. Bing LeiChang, Zhigang, and Zhong Zhen [42] used the K-means clustering algorithm to cluster users and realized the construction of user profiles for online shop groups. To improve the informatization level of logistics parks, Kai Liu [43] introduced user profiles and, after considering the time complexity, clustering effect, and robustness of candidate algorithms, proposed using the K-means clustering algorithm as the data mining method for constructing them. The traditional K-means algorithm has problems of its own, such as sensitivity to noise, so some researchers have proposed different improvements. Liwen Xu, Wang Moyu, and Shen Xiaolu [44] improved the traditional K-means clustering algorithm with respect to noise sensitivity and other problems, and then used the improved algorithm to obtain three-dimensional and clear student profiles. Yunfei Hu [45] used a bisecting K-means clustering algorithm with Mahalanobis distance to extract a reader feature system from different dimensions and perspectives to construct profiles, and this method improved the accuracy and effectiveness of the profiles. Yanyi Li [46] used the K-means clustering algorithm and the Apriori association analysis method to identify customers' characteristics, needs, and preferences and build a customer profile that provides a basis for the sales department to formulate differentiated strategies. Haiyan Kang and Li Hao [47] combined the TF-IDF algorithm and the K-means algorithm to mine the attribute features of group users, selecting user attribute features as central data points to find user groups with similar characteristics, and proposed a concept of group user profiles built on individual user profiles. Fei Hong and Liao Guangzhong [48] used the K-medoid algorithm, an improvement on K-means, to cluster hacker profiles and construct hacker group profiles. To sum up, although the partition-based K-means algorithm makes it easy to implement clustering for user profiles, it suffers from local optima and is easily affected by outliers; it needs to be combined with outlier-factor methods, the TF-IDF algorithm, Mahalanobis distance, or other techniques to overcome these shortcomings.
2) User profiles based on hierarchical clustering
Two-step clustering and the Chameleon algorithm are two common hierarchical clustering approaches. The two-step clustering algorithm, an improvement of the BIRCH hierarchical clustering algorithm, compares favorably with K-means in terms of noise sensitivity, efficiency, and accuracy. The Chameleon algorithm is good at handling high-dimensional data, can find high-quality clusters of different sizes and shapes, and can cluster automatically without relying on static patterns provided by the user. Some researchers have used the two-step clustering algorithm or the Chameleon algorithm to
765
analyze data samples and construct user portraits. Chengyi Le, Wang Xi [49] used a two-step clustering algorithm based on a hierarchical clustering algorithm to construct a high-efficiency library user profile to analyze library reader information and behavioral data to gain in-depth understanding of user needs and achieve precision for the library the service provides a reference. Xiaosong Chen, Cui Zhiming [50] used the Chameleon algorithm to not only take interconnection into consideration, but also cluster users in the Web servers, explore users with very similar access interests, and provide users with more convenient and high-quality personalized services. Ye Wang [51] used the Chameleon clustering algorithm to mine the characteristics of users in the network community, construct the user profile, and provide new ideas and ideas to solve the current operational problems of the network communities. Bi Chongwu, Ye Guanghui, Li Mingqian, Zeng Jieyan [52] used the agglomerative hierarchical clustering algorithm to extract the characteristics of the city from the massive social labels, and constructed the city profile with hierarchical structure. Therefore, it can be seen that the hierarchical algorithm used for user profile can mine deeper data, and it is not necessary to determine the number of categories before clustering. At the same time, it can solve the shortcomings of K-means algorithm. But the algorithm based on hierarchy has poor scalability, has irreversibility and is low efficiency. 3) User profile based on model clustering Gaussian Mixture Algorithm (GMM) is one of the model-based clustering algorithms, which is usually used as a probabilistic model to describe multivariate data with good clustering effect and easy to handle multidimensional data, and is often applied to the construction of user portraits. Xue Zhou [53] combined the GMM algorithm and the K-means algorithm to implement a two-stage clustering algorithm, grouping user interest and making user profile. Yao Wang [54] proposed a semi-supervised Gaussian mixture model based on inverse simulated annealing using the EM algorithm of semi-supervised Gaussian mixture model (SGMM) with the inverse simulated annealing algorithm to construct user portraits. Rongrong Li, Wang Guijuan, Deng Haotian, Chen Huarong, Wu Yadong [55] proposed the method of combining GMM with hierarchical clustering, discovered the functional information of urban areas, analyzed the law of regional user behavior, and constructed a profile of the main flow and movement of urban people. Quna Cai, Liu Shijie, Lu Qiuyu [56] used GMM to extract the typical daily load curve of user’s load data to characterize the user’s electricity consumption patterns, and build a user profile to facilitate the management of the users’ electricity consumption. Although the model-based clustering algorithm GMM algorithm can solve the problem of low accuracy of the K-means algorithm, there are still problems that are prone to rely heavily on initial conditions and fall into local optimal problems, which is also relatively dependent on the initial parameters of the model and has low execution efficiency. Therefore, it needs to be combined with other relevant algorithms to solve the problem.
3.3 Comparison of the Algorithm Application
Different clustering algorithms applied to user profiling have their own advantages and disadvantages. To obtain a good clustering effect, a suitable clustering algorithm must be chosen according to the actual situation. This paper analyzes and compares the advantages and disadvantages of user profile research based on clustering algorithms, which makes it easier to choose an appropriate clustering algorithm for constructing user profiles. The analysis is shown in Table 2.

Table 2. Analysis of the advantages and disadvantages of user profile based on clustering algorithm

| Method | Algorithm | Advantage | Disadvantage |
|---|---|---|---|
| Clustering algorithm based on division | K-means algorithm | Easy to understand and implement; easy to calculate; efficient | May converge to a local minimum; slow convergence on large-scale data sets |
| Hierarchy-based clustering algorithm | Two-step clustering | Mines deeper data; no need to set the number of categories before clustering | Poor scalability |
| | Chameleon algorithm | Clusters can be merged automatically and adaptively and can cope with all kinds of irregular shapes; considers interconnectivity and proximity | More parameters need to be set; high time complexity |
| | Agglomerative (condensed) hierarchical clustering algorithm | | Limited ability to handle clusters of different sizes; large calculation volume |
| Model-based clustering algorithm | GMM algorithm | Good clustering effect | Complex model and slow operation |
4 Conclusions and Prospects
With the deepening of the information age, the identity of users has changed significantly: they are now both the demanders of information resources and their creators. The core of a user profile is the "user". Based on the user's application environment, user data are collected to form a multi-dimensional, comprehensive
understanding of the user; the data are then mined and analyzed through clustering, classification, association and other methods, the user's characteristics and preferences are extracted, and a user profile model is constructed to provide accurate services for the user. At present, there are still some problems in applying clustering algorithms to user profiling: (1) It is not easy to mine a single user. A general clustering algorithm can discover the group characteristics of users, which is convenient for understanding the distribution of user interests, but it is difficult to dig into individual users. (2) The limitations of the algorithm itself. For example, the K-means algorithm is sensitive to abnormal points and easily falls into local optima. Data mining for user profiling therefore usually needs improved algorithms, for example combining the Mahalanobis distance and other methods to remedy the defects of the K-means algorithm itself. For future research, two directions can be considered: (1) applying clustering algorithms more deeply in user profiling; (2) improving the accuracy and validity of clustering results in user profiling applications. On the basis of analyzing and comparing the problems of classical clustering algorithms, this paper introduces and analyzes the related research on user profiles based on clustering algorithms, compares the problems of partition-based, hierarchy-based and model-based clustering algorithms in user profile construction, and discusses future development directions. We hope it can provide useful help to researchers in related fields.
References 1. George, G., Osinga, E.C., Lavie, D., et al.: Big data and data science methods for management research. Acad. Manage. J. 59(5), 1493–1507 (2016) 2. An, L., Yiwen, Z.: profile and comparison of microblog messages and comments in the context of terrorist incidents. Inf. Sci. 38(04), 9–16 (2020) 3. Shan, W., Lei, C., Jinhua, W., et al.: Social user portal modeling based on KD-Tree clustering. Comput. Sci. 46(Z1), 442–445,467 (2019) 4. Yi, Z.: Practical analysis of the statistical methods of user profile in the context of big data. Mod. Bus. (06), 9–10 (2020) 5. Xiaoke, Z., Wenming, S., Cuifeng, D.: Research on user profile construction based on Bayesian network. Mob. Commun. 40(22), 22–26 (2016) 6. Xingshang, Y., Yingsheng, W.: Research on library user profile oriented to user cognitive needs. Library (02), 57–62 (2021) 7. Xinghai, J., Gang, Z., Jing, W., Fengjuan, Z.: Research on user profile construction technology. J. Inf. Eng. Univ. 21(02), 242–250 (2020) 8. Cooper, A.: The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity. The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity (2nd Edition) (2004) 9. Quintana, R.M., Haley, S.R., Levick, A., et al.: The persona party: using personas to design for learning at scale. In: Chi Conference Extended (2017) 10. Massanari, A.L.: Designing for imaginary friends: information architecture, personas and the politics of user-centered design. New Media Soc. 12(3), 401–416 (2010) 11. Mengjie, Y.: Data modeling of user profile in product development-from concrete to abstract. Des. Art Res. 4(06), 60–64 (2014) 12. Man, X.: Research on optimization of knowledge recommendation service in university library based on user profile. Publishing Wide Angle (01), 76–78 (2021)
13. Minghui, Y., Yafeng, Y., Lei, X., Sanglu, L.: User profile technology based on behavior perception. J. Zhejiang Univ. (Engineering Science Edition), pp. 1–8, 14 Apr 2021 14. Chuanming, Y., Xin, T., Yajing, G., Lu, A.: Research on user profile based on behavior-content fusion model. Library Inf. Serv. 62(13), 54–63 (2018) 15. Amato, G., Straccia, U.: User profile modeling and applications to digital libraries. In: Abiteboul, S., Vercoustre, A.-M. (eds.) Research and Advanced Technology for Digital Libraries, pp. 184–197. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-48155-9_13 16. Travis, D.: E-Commerce Usability: Tools and Techniques to Perfect the On-Line Experience. Routledge, Milton Park (2002) 17. Yu, H.: Research and application of clustering algorithm based on partition. Comput. Knowl. Technol. 13(16), 55–56 (2017) 18. Yan, Q., Hong, W., Quanhua, Z.: Research on clustering algorithm in data mining. Netw. Secur. Technol. Appl. (01), 65–66 (2014) 19. Ling, F., Kejian, L., Fuxi, T., Qingrui, M.: An improved DBSCAN algorithm based on grid query. J. Xihua Univ. (Natural Science Edition) 35(05), 25–29 (2016) 20. Zhixiu, L., Feng, H., Weibin, D., Hong, Y.: Active learning method based on density clustering and neighborhood. J. Shanxi Univ. (Natural Science Edition) 43(04), 850–857 (2020) 21. Mo, H.: Summary of big data clustering algorithms. Comput. Sci. 43(S1), 380–383 (2016) 22. Zhao, W., Ma, H., He, Q.: Parallel K-means clustering based on MapReduce. In: Jaatun, M.G., Zhao, G., Rong, C. (eds.) Cloud Computing: First International Conference, CloudCom 2009, Beijing, China, December 1-4, 2009. Proceedings, pp. 674–679. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-10665-1_71 23. Ali, et al. Finding Groups in Data: An Introduction to Cluster Analysis by Kaufman, L., Rousseeuw, P.J. Technometrics (1992) 24. Wang, D.-W., Cui, W.-Q., Qin, B.: CK-modes clustering algorithm based on node cohesion in labeled property graph. J. Comput. Sci. Technol. 34(5), 1152–1166 (2019) 25. Jang, H.-J., Kim, B., Kim, J., Jung, S.-Y.: An efficient grid-based k-prototypes algorithm for sustainable decision-making on spatial objects. Sustainability 10(8), 2614 (2018) 26. Luping, L., Xiaobing, Z.: A survey of research on topic discovery methods based on text clustering. Inf. Res. (11), 121–127 (2020) 27. Cure, G.S.: An efficient clustering algorithm for large databases. In: Proceedings of SIGMOD 1998 (1998) 28. Xiuzhang, Y., Huan, X., Xiaomin, Y., Shuai, W., Ziru, Z., Yueqi, D.: Research on Chinese encyclopedia text clustering based on characteristic dictionary construction and BIRCH algorithm. Comput. Times (11), 23–27+31 (2019) 29. Zhang, Y., Ding, S., Wang, L., Wang, Y., Ding, L.: Chameleon algorithm based on mutual k-nearest neighbors. Appl. Intell. 51(4), 2031–2044 (2020). https://doi.org/10.1007/s10489020-01926-7 30. Shutao, Z., Liwei, X., Jianning, S., Zilin, R., Kai, Q.: Research on product image evolution algorithm based on spider web structure. Mod. Manuf. Eng. (11), 7–13 (2018) 31. Yue, H., Chengjun, G.: Underwater sensor data acquisition method based on K-means and SOM. Data Acquisition Proc. 36(02), 280–288 (2021) 32. Wan, L., Guangning, L.: Research on turnout fault diagnosis based on GMM clustering and PNN. Control Eng. China 28(03), 429–434 (2021) 33. Campello, R.J.G.B., Moulavi, D., Sander, J.: Density-based clustering based on hierarchical density estimates. In: Pei, J., Tseng, V.S., Cao, L., Motoda, H., Guandong, X. (eds.) 
Advances in Knowledge Discovery and Data Mining, pp. 160–172. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-37456-2_14 34. Ning, W., Ling, W., Gen, X., Ronghua, D., Xiang, Z.: An angle-only navigation target detection algorithm based on OPTICS clustering. Space Control Technol. Appl. 47(01), 47–54 (2021)
35. Xie Conghua, L., Wanyu, X., Yuqing, S.: Research on medical image clustering division based on dynamic step size. Microelectron. Comput. 04, 66–68 (2007) 36. Zhifeng, L., Yan, Z.: Analysis and evaluation of clustering analysis algorithm. Electron. Technol. Softw. Eng. (07), 157 (2019) 37. Liguo, D., Aiping, L., Xiao, C.: Research on a dataset simplified algorithm based on wavelet clustering. J. Taiyuan Univ. Technol. 05, 532–535 (2006) 38. Nandhakumar, R., Thanamani, A.S.: Clustering high dimensional non-linear data with Denclue, optics and clique algorithms. Int. J. Recent Technol. Eng. (IJRTE) 8(3), 8844–8848 (2019) 39. Minghua, Y., Zhi, Z., Yiting, Z.: Research on the construction of research learning student profile based on visual learning analysis. China Electr. Educ. 12, 36–43 (2020) 40. Godoy, D., Amandi, A.: User profiling for web page filtering. IEEE Internet Comput. 9(4), 56–64 (2005) 41. Jinquan, Z., Xue, X., Ziwen, L., Chunlei, X., Dawei, S., Xinxin.: Selection and behavior of power users. Power Grid Technol. 44(09), 3488–3496 (2020) 42. Bing, L., Zhigang, C., Zhen, Z.: Research on the Construction of Ggroup user profile based on online store order data. J. Henan Univ. Technol. (Social Science Edition) 15(01), 52–59 (2019) 43. Kai, L.: Analysis of the user profile of the logistics park based on k-means clustering. Logistics Eng. Manage. 42(3), 52–54 (2020) 44. Liwen, X., Moyu, W., Xiaolu Stay, S.: Study on the three-dimensional profile of university students based on the label system. Changjiang Inf. Commun. 34(03), 155–158 (2021) 45. Yunfei, H.: College Library User profile Based on Reader Behavior Analysis and MultiClualgorithm. Zhejiang University of Technology (2019) 46. Yanyi, L.: Analysis of Automotive Customer profile and Customer Loss Forecasting Based on Data Mining Method. South China University of Technology (2017) 47. Haiyan, K., Hao, L.: Study on personality prediction and group profile methods based on web log. J. Zhengzhou Univ. (Science Edition) 52(01), 39–46 (2020) 48. Fei, H., Guangzhong, L.: Hacker profile Warning model based on K-medoide clustering. Comput. Eng. Des. 201,42(05),1244–1249 49. Chengyi, L., Xi, W.: Research on user profile of college library based on improving RFM clustering. Library Theory Pract. (02), 75–79+93 (2020) 50. Xiaosong, C., Zhiming, C.: Design and implementation of user clustering based on chameleon algorithm. Microcomput. Dev. 15(4), 48–50 (2005) 51. Ye, W.: Feature Mining of Network Community Users Based on Chameleon Algorithm. Harbin Engineering University (2018) 52. Chongwu, B., Guanghui, Y., Mingqian, L., Jieyan, Z.: Research on city profile perception based on tag semantic mining. Data Anal. Knowl. Discov. 3(12), 41–51 (2019) 53. Xue, Z.: Analysis and Research on User Behavior Based on Web Logs. Beijing University of Posts and Telecommunications (2017) 54. Wang, Y.: Research and Application of Active Semi-supervised Gaussian Mixture Model Clustering Algorithm. Hebei University of Geosciences (2018) 55. Rongrong, L., Guijuan, W., Haotian, D., Huarong, C., Yadong, W.: Visual analysis of urban movement patterns based on regional semantics. Comput. Appl. Res. 1–9 (2021) 56. Qiuna, C., Shijie, L., Qiuyu, L.: Classification and identification method of user loading industry based on GMM clustering and SVM. Guangdong Electr. Power 30(12), 91–96 (2017)
Research on the Evaluation Method of Storage Location Assignment Based on ABC-Random Storage-COI-AHP—Take a Logistics Park as an Example Xianyang Su, Shuihai Dou(B) , Yanping Du, Qiuru Chen, Ye Liu, and Lizhi Peng Beijing Institute of Graphic Communication, Beijing, China {doushuihai,duyanping}@bigc.edu.cn
Abstract. Reasonable storage location assignment is the basis for improving warehouse efficiency. In order to shorten picking times and reduce storage costs, this paper focuses on the storage location assignment of goods in the warehouse, considering factors such as the correlation of goods, space utilization, and outbound frequency. The ABC (Activity Based Classification) method, the COI (cube-per-order index) method, the random storage strategy and the AHP (Analytic Hierarchy Process) are combined into a storage location assignment evaluation method, the stacking layout of the goods is obtained, and the storage location assignment of steel goods in a logistics park is taken as an example for empirical analysis. The results show that the COI method yields the optimal storage location assignment plan for the logistics park and a better stacking layout, thereby improving the efficiency of yard picking and enhancing the accuracy and punctuality of storage picking operations. Keywords: Position optimization · AHP · Storage strategy · COI method · ABC method
This work was supported in part by the Scientific Research Program of Beijing Education Commission under Grant KM201910015004.
1 Introduction
As one of the important factors affecting modern logistics activities and social material production, warehousing plays an important role in the overall operating environment of enterprises [1]. Storage location assignment of goods is an important part of warehousing operations, and it plays an important role in improving the efficiency of warehousing operations and enhancing the competitiveness of logistics companies. Therefore, a reasonable storage location assignment of goods is the key to the overall development of the warehouse and will become an important indicator of future competition. Many scholars at home and abroad have conducted research on the issue of storage location assignment. At present, it mainly expands from the following two aspects. On the
one hand, it is carried out in terms of cargo distribution strategy or cargo space. Heskett J. L. [2] first proposed the COI (cube-per-order index), a ratio between the storage space a good requires and its turnover frequency, which reflects the turnover rate of the goods. Li Yafang [3] applied commodity association rules to storage location assignment and processed orders in batches to improve picking efficiency. The other aspect is to establish an optimization model based on bionic or genetic algorithms. Liu Jingwen [4] took the picking area of Company A as the research object, combined it with historical outbound order data, used the Apriori algorithm from data mining theory to analyze the data, obtained the rules between related goods, and established a cargo location optimization model based on a genetic algorithm. Yang Liebing [5] introduced the particle swarm algorithm and the grey wolf algorithm into the cargo location optimization model; by analyzing the weights, the multi-objective cargo location optimization problem was converted into a single-objective problem, demonstrating the feasibility of the algorithms and the strengths and weaknesses of different algorithms for the cargo location optimization problem. To sum up, this paper considers the correlation between goods, the turnover rate of goods, inventory and other factors, combines different methods to carry out storage location assignment, and evaluates them to obtain the optimal storage location assignment plan, so as to effectively improve picking efficiency and the efficiency of inbound and outbound operations. At the same time, it has some reference significance for the outdoor storage location assignment of bulk goods.
2 ABC-Random Storage-COI-AHP Model Establishment
The goal of this paper is to solve the storage location assignment problem by establishing an ABC-random storage-COI-AHP model and to improve the efficiency of goods moving in and out of storage. Through analysis of the historical order data of the goods, the outbound frequency, order quantity and other information are obtained; combining the ABC (Activity Based Classification) method, the random storage strategy and the COI method to allocate goods positions, three storage location assignment schemes are obtained. Finally, an evaluation index system is established and AHP is used to evaluate the three schemes, yielding the optimal storage location assignment plan. The process of the storage location assignment evaluation method is shown in Fig. 1. The specific steps are as follows:
Step 1: Data preprocessing. Calculate the outbound frequency, order quantity, daily inventory and other information based on historical order data.
Step 2: Calculation of the goods correlation degree. After preprocessing the data, use SPSS to output the 0–1 matrix of each order under the different specifications of each steel grade, and then establish the material demand correlation model [6] from the 0–1 matrix:

r_{m_1 m_2} = \frac{m_1 m_2^{T}}{m_1 m_1^{T} + m_2 m_2^{T} - m_1 m_2^{T}}

and the demand correlation degree between goods is obtained by solving it. Among them: S denotes the total number of outbound distribution bills, s = 1, 2, 3, …, S.
Fig. 1. Flow chart of the evaluation method of storage location assignment
M: material number, m = 1, 2, 3, …, M, m_1, m_2 ∈ M; K_{sm}: 0–1 parameter indicating whether order s contains material m (if order s contains material m, then K_{sm} = 1, otherwise K_{sm} = 0); m: the 1 × S 0–1 vector recording the appearance of material m on each order, m = (K_{1m}, K_{2m}, …, K_{Sm}); r_{m_1 m_2} is the demand correlation between materials m_1 and m_2.
Step 3: Calculation of the number of cargo spaces. According to the daily inventory, the principle of goods placement and the size of the goods, calculate the number of cargo spaces required. The following model determines the fixed number of positions for each single product on day i, once per day. Let the total incoming quantity be R_i, the total outgoing quantity C_i, the historical inventory l_i, the number of cargo spaces y_i, and the maximum storage capacity of a storage space under well-shaped stacking ρ; a denotes a day on which the balance becomes non-negative, i.e. it satisfies (1), and y_i is given by (2):

\begin{cases} R_{a-1} + l_{a-1} - C_{a-1} < 0 \\ R_a + l_a - C_a \ge 0 \end{cases}   (1)

y_i = \begin{cases} 0, & R_i + l_i - C_i \le 0 \\ \dfrac{\sum_{a}^{i} (R_a + l_a - C_a)}{\rho} + \dfrac{R_i}{\rho}, & R_i + l_i - C_i > 0 \end{cases}   (2)
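As an illustration of Step 2 (a minimal sketch, not the authors' SPSS workflow), the snippet below computes the demand correlation directly from a hypothetical 0–1 order matrix; the material rows and order columns are made-up example data.

```python
# Demand correlation r between materials from the 0-1 order matrix:
# K[m, s] = 1 if outbound order s contains material m, else 0.
import numpy as np

K = np.array([
    [1, 0, 1, 1, 0, 1],   # material m1 (hypothetical)
    [1, 0, 1, 0, 0, 1],   # material m2 (hypothetical)
    [0, 1, 0, 0, 1, 0],   # material m3 (hypothetical)
], dtype=float)

def demand_correlation(m1: np.ndarray, m2: np.ndarray) -> float:
    """r = (m1 m2^T) / (m1 m1^T + m2 m2^T - m1 m2^T): orders containing both
    materials divided by orders containing either one (a Jaccard-type ratio)."""
    both = m1 @ m2
    return both / (m1 @ m1 + m2 @ m2 - both)

n = K.shape[0]
r = np.eye(n)
for i in range(n):
    for j in range(i + 1, n):
        r[i, j] = r[j, i] = demand_correlation(K[i], K[j])

print(np.round(r, 3))   # material pairs with large r should be stored adjacently
```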
Step 4: Determine the storage location assignment plans. First, according to the outbound frequency and order quantity of the goods, the goods are divided into three categories, A, B and C; then, combined with the degree of correlation between the goods, goods with a higher degree of correlation are placed adjacent to each other to obtain the ABC storage location assignment plan. After that, considering the order quantity, the size of the goods and so on, and taking the highest overall space utilization rate of the warehouse as the objective function, the cargo location planning model is established, and the cargo location plan with the highest space utilization rate is obtained. Combining the frequency of outbound
and the degree of correlation between the goods, the goods are stored randomly, and a random storage location allocation plan is obtained. Finally, the daily inventory, outbound and inbound quantities and other data are analyzed to obtain the monthly average number of positions required for each kind of goods, which is the required storage space. Combined with the outbound frequency of the goods, the formula COI_i = c_i / f_i is used to calculate the COI value of each good, the goods are classified accordingly, and the storage location assignment is then carried out based on the relevance of the cargo. Here, c_i refers to the storage space required for the stored good i in a certain period of time, and f_i refers to the frequency with which good i leaves the warehouse in the same period.
Step 5: Program evaluation. The first thing to be considered in the evaluation is packing the goods of an order and completing delivery in the shortest time; the space utilization rate of the warehouse and the reasonable division of the goods in the warehouse are also factors to be considered. At the same time, the cost of inventory is one of the factors that affect the storage location assignment of goods. Based on the analysis of the storage location assignment of goods, a comprehensive evaluation index system is established considering space utilization, reasonable access rate, inventory cost and a reasonable number of positions; the judgment matrix is constructed by the expert scoring method, and the analytic hierarchy process is then used to evaluate the schemes and obtain the best plan.
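A minimal sketch of Step 4's COI calculation is shown below; the product codes and figures are hypothetical, and the 2:3:5 split applied to the ranked list follows the ratio used later in the case study.

```python
# COI_i = c_i / f_i: required storage locations divided by outbound frequency.
# Smaller COI values are assigned positions nearer the exit.
import pandas as pd

goods = pd.DataFrame({
    "code":      ["1.1", "1.2", "2.7", "6.18"],   # hypothetical codes
    "locations": [255, 1432, 610, 127],            # c_i, storage locations required
    "frequency": [37.0, 920.7, 310.5, 0.5],        # f_i, outbound frequency
})

goods["COI"] = goods["locations"] / goods["frequency"]
goods = goods.sort_values("COI").reset_index(drop=True)

# Split the ranked list in the 2:3:5 proportion to obtain the A/B/C classes
# (class A = fastest-turning goods, stored closest to the exit).
n = len(goods)
bounds = [round(n * 0.2), round(n * 0.5)]
goods["class"] = ["A" if i < bounds[0] else "B" if i < bounds[1] else "C"
                  for i in range(n)]
print(goods)
```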
3 Case Application and Analysis
3.1 Data Preprocessing
At present, there are 9 types of steel products stored in a logistics park. Cargo yards No. 1–5 are suitable for outdoor storage of products such as rebar, high-speed wire and industrial wire, and all rebar is stored in yards 1–5; cold-rolled, color-coated, galvanized and similar products are suitable for indoor storage. This paper allocates storage space for the steel suitable for outdoor storage. After sorting out the outbound frequency of the steel materials in the logistics park, Fig. 2 is obtained.
Fig. 2. Outbound frequency of steel
At the same time, in order to store highly correlated steel together, the order data of the logistics park need to be processed. In the coding X.Y, the integer part X = 1, 2, …, 6 represents the steel grades HRB400, HRB400E Seismic Resistance, HRB500, HRB500E Seismic Resistance, HTRB600 and HTRB600E Seismic Resistance respectively; the decimal part Y = 1, 2, 3, …, 24 represents the specifications 12*12, 12*9, 14*12, …, 22*7, 22*9, as shown in Table 1.

Table 1. Specification sheet of steel

| Code | Types of steel | Code | Types of steel |
|---|---|---|---|
| 1.1 | HRB400 12*12 a | 4.17 | HRB500E Seismic Resistance 25*9 |
| 1.2 | HRB400 12*9 | 4.18 | HRB500E Seismic Resistance 28*12 |
| … | … | … | … |
| 1.21 | HRB400 32*9 | 6.24 | HTRB600E Seismic Resistance 22*9 |

a The data of this paper come from the 6th National College Students' logistics design competition of "Magang Cup".
After the data are processed, the types of steel with strong correlation are obtained through calculation (arranged from high to low, left to right), as shown in Table 2.

Table 2. List of steel products with high demand relevance

| Types of steel | Correlation | Types of steel | Correlation |
|---|---|---|---|
| 5.1–6.3 | 0.500 | 2.2–2.10 | 0.122 |
| 6.3–6.5 | 0.313 | 4.8–4.14 | 0.119 |
| 4.11–4.14 | 0.184 | 2.19–2.21 | 0.101 |
| …… | | | |
3.2 Storage Location Assignment Scheme of Goods
1) ABC's Allocation Scheme
Because there is little difference between the types of steel in the steel logistics park and the quantity of steel is huge, this article uses the inbound/outbound frequency of steel as the classification basis. According to the ABC method, the steel is divided into three categories, and the number of items in each category after classification is shown in Table 3.
Table 3. Classification of steel products

| Category | Number | Percentage | Cumulative percentage |
|---|---|---|---|
| A | 14 | 20.6% | 20.6% |
| B | 22 | 32.4% | 53% |
| C | 32 | 47% | 100% |
By processing the data, we can obtain the relationship between the number of items, the outbound frequency and the percentage of the order quantity, as shown in Table 4.

Table 4. Classification table of order data

| Category | Total outbound frequency | Percentage of outbound frequency | Cumulative percentage | Order quantity (ton) | Percentage of order quantity | Cumulative percentage |
|---|---|---|---|---|---|---|
| A | 64028 | 83.1% | 83.1% | 748080.2 | 82.59% | 82.59% |
| B | 8192 | 10.7% | 93.8% | 145674.6 | 16.07% | 98.66% |
| C | 4797 | 6.2% | 100% | 12113.5 | 1.34% | 100% |
| Total | 77017 | 100% | | 905768.3 | | |
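The ABC split behind Tables 3 and 4 can be reproduced with a few lines of pandas; the cut-offs of roughly 80% and 95% of cumulative outbound frequency are illustrative assumptions, and the item data below are made up.

```python
# Rank items by outbound frequency, accumulate the share of total frequency,
# and cut at ~80% / ~95% to obtain classes A, B and C.
import pandas as pd

orders = pd.DataFrame({
    "item":      ["1.2", "1.7", "2.2", "1.13", "4.17", "6.18"],  # hypothetical
    "frequency": [920.7, 610.0, 455.0, 300.0, 35.0, 0.5],
})

orders = orders.sort_values("frequency", ascending=False).reset_index(drop=True)
orders["cum_share"] = orders["frequency"].cumsum() / orders["frequency"].sum()

def abc_class(share: float) -> str:
    if share <= 0.80:
        return "A"
    if share <= 0.95:
        return "B"
    return "C"

orders["class"] = orders["cum_share"].apply(abc_class)
print(orders)
```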
The steel is classified according to the above data, and some of the classification results are shown in Table 5.

Table 5. Table of ABC classification

| Category | Types of steel |
|---|---|
| A | HRB400 12*9, HRB400 14*9, HRB400 16*9, HRB400E Seismic Resistance 12*9, HRB400E Seismic Resistance 14*9, … |
| B | HRB400 12*12, HRB400E Seismic Resistance 12*12, HRB400 14*12, HRB400E Seismic Resistance 14*12, HRB400 16*12, … |
| C | HRB400 20*7, HRB400E Seismic Resistance 16*7, HRB500 12*9, HRB500E Seismic Resistance 14*9, HRB600 12*12, … |
According to the order data for 2017–2018, the type, quantity and outbound frequency of steel in each order are counted, and the percentages are calculated based on the number of steel items, as shown in Table 3. Table 4 shows the statistics relating the various types of steel to the outbound frequency. It can be seen from Table 4 that Type A steel accounts for about 20% of the items, its outbound frequency accounts for over 80%, and its outgoing quantity accounts
for more than 80%. According to the ABC classification method, Class A goods need key management, and improving their management raises the operating efficiency of the entire storage area [7]. At the same time, combined with the order data and the steel pairs with a correlation degree greater than 0.1 in Table 2, the locations of highly correlated steels of the same type are set up in proximity, thereby reducing the circulation of loading vehicles in the cargo yard and improving the efficiency of loading and unloading. The degree of correlation between Type A steels is shown in Fig. 3.
Fig. 3. Correlation degree of A-type steel
According to the ABC method, the steel is classified, and cargo locations are allocated on the basis of the classification. Class A steel is characterized by fewer items, a high outbound frequency and a large order quantity; it needs focused attention and is placed near the door [7]. The number of items and order quantities of Class B goods are moderate. Class C goods have a large number of items and a small order quantity; hence, in the storage location assignment, fewer cargo spaces are allocated to them and they are placed away from the door so as not to hinder the storage of Class A and B goods. At the same time, combined with the outbound volume data, the cargo spaces are allocated proportionally, and the resulting scheme is shown in Fig. 4.
Fig. 4. ABC’s allocation scheme for goods
2) Random Storage Allocation Scheme
First, the cargo locations are planned to find the optimal number of positions; then the positions are coded according to the four-character numbering method; finally, the positions are allocated randomly according to the planned positions and the correlation between the steels. In this paper, x_{npq} denotes the number of stacks of rebar of length q (q = 9, 12, 14) metres on side p (p = 1 for the north side, p = 2 for the south side) of yard n (n = 1, 2, …, 5). From the daily inventory data of the historical orders, the daily number of stacks of each steel specification can be calculated. The objective function maximizes the total length occupied over all storage yards. The constraints are that, for each yard side, the sum of the stack lengths plus the 0.8 m intervals between positions must not exceed the effective storage length, and that the total number of well-shaped stacking positions must be at least the maximum daily total of 108 positions observed from 1 October 2017 to 8 November 2018. The model is then solved with LINGO, and the optimized numbers of positions are used to draw the location map. The cargo location planning model is given by (3)–(16):

\max \; \sum_{q} \sum_{p=1}^{2} \sum_{n=1}^{5} q \, x_{npq}, \quad q = 9, 12, 14   (3)

\sum_{q} q \, x_{1pq} + 0.8\Big(\sum_{q} x_{1pq} - 1\Big) \le 98, \quad p = 1, 2   (4)–(5)

\sum_{q} q \, x_{2pq} + 0.8\Big(\sum_{q} x_{2pq} - 1\Big) \le 111, \quad p = 1, 2   (6)–(7)

\sum_{q} q \, x_{npq} + 0.8\Big(\sum_{q} x_{npq} - 1\Big) \le 125, \quad n = 3, 4, 5, \; p = 1, 2   (8)–(13)

\sum_{p=1}^{2} \sum_{n=1}^{5} (x_{np12} + x_{np9}) \ge 108   (14)

\sum_{p=1}^{2} \sum_{n=1}^{5} x_{np12} \ge 29   (15)

\sum_{p=1}^{2} \sum_{n=1}^{5} x_{np14} = 1   (16)
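The location-planning model can also be solved with an open-source solver; the sketch below restates (3)–(15) in PuLP (constraint (16), whose exact form is ambiguous in the source, is omitted), with the yard lengths, the 0.8 m gap and the pile requirements taken from the text and everything else an assumption.

```python
# Minimal PuLP sketch of the cargo-location planning model (not the LINGO file).
import pulp

yards = [1, 2, 3, 4, 5]
sides = [1, 2]                                  # 1 = north, 2 = south
lengths = [9, 12, 14]                           # rebar lengths q in metres
L = {1: 98, 2: 111, 3: 125, 4: 125, 5: 125}     # effective yard lengths

prob = pulp.LpProblem("storage_location_planning", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (yards, sides, lengths), lowBound=0, cat="Integer")

# (3) maximise the total length of rebar that can be stacked
prob += pulp.lpSum(q * x[n][p][q] for n in yards for p in sides for q in lengths)

# (4)-(13) stack lengths plus 0.8 m gaps must fit each yard side
for n in yards:
    for p in sides:
        prob += (pulp.lpSum(q * x[n][p][q] for q in lengths)
                 + 0.8 * (pulp.lpSum(x[n][p][q] for q in lengths) - 1) <= L[n])

# (14)-(15) minimum numbers of piles required by the daily inventory peak
prob += pulp.lpSum(x[n][p][12] + x[n][p][9] for n in yards for p in sides) >= 108
prob += pulp.lpSum(x[n][p][12] for n in yards for p in sides) >= 29

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for n in yards:
    for p in sides:
        print(n, p, {q: int(x[n][p][q].value()) for q in lengths})
```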
The resulting cargo location plan is shown in Fig. 5.
Fig. 5. Random storage allocation scheme for goods
The allocation follows the degree of correlation between the types of steel and their inbound and outbound frequency. When goods enter the warehouse, the types of steel with higher frequency are placed in turn near the door; if the correlation between two steel materials in the same incoming batch is relatively high, they are put together; steel with lower frequency is placed farther from the door, reducing labor costs and enhancing the turnover efficiency of the goods.
3) COI's Allocation Scheme
According to the original warehouse layout, the COI method is used to classify the steel during storage location assignment. The items are sorted by COI from small to large, and the classification ratio is 2:3:5; the steel and the storage areas are then divided according to the number of categories [8]. Category A is the first two parts (two tenths) of the total amount of goods, category B the next three parts, and category C the remaining five parts [9]. The COI value of the steel is calculated from the order data combined with the principle of steel stacking, and the results are shown in Table 6.
Table 6. COI value of steel

| Steel types | Number of locations of goods required | Outbound frequency | Annual outbound volume (ton) | The value of COI |
|---|---|---|---|---|
| 1.1 | 255 | 37 | 7176.34 | 6.89 |
| 1.2 | 1432 | 920.7 | 114161 | 1.56 |
| … | … | … | … | … |
| 6.18 | 127 | 0.5 | 106.93 | 254 |
Each good has a COI value; a smaller COI value indicates that the good needs less storage space and turns over faster [10]. The classification results are given in Table 7.

Table 7. Classification table of COI

| Category | Steel types |
|---|---|
| A | 1.7, 1.4, 1.2, 1.13, 2.2, 2.13, 1.10, 2.7, 2.4, 1.24, 2.17, 1.23, 1.17 |
| B | 2.1, 2.24, 2.16, 2.14, 1.12, 1.14, 1.5, 2.11, 1.19, 1.1, 1.11, 1.22, 2.19, 2.22, 1.8, 1.3, 1.18, 2.21, 2.18, 2.8, 2.9 |
| C | 2.15, 2.5, 2.10, 2.3, 4.14, 4.22, 4.11, 4.17, 2.20, 1.21, 1.2, 4.8, 4.18, 6.11, 4.13, 4.2, 6.5, 6.14, 6.22, 3.14, 4.4, 6.8, 6.3, 3.5, 5.17, 3.11, 6.24, 6.17, 4.24, 6.13, 6.18, 3.2, 5.1, 2.6 |
The relevance of the different types of goods is analyzed separately, with the focus on the Class A goods. Figure 6 clearly shows the degree of relevance between each pair of goods.
Fig. 6. Correlation diagram of type A steel
The goods are allocated to cargo spaces according to their COI value, the distance of the position from the warehouse exit, and the number of positions required. In the storage location assignment, the cargo spaces are allocated proportionally according to the order quantity of the various steel materials and the relevance of the goods. The storage allocation result is shown in Fig. 7.
Fig. 7. COI’s allocation scheme for goods
3.3 Program Evaluation
1) Build a hierarchy model
According to the analysis of the storage location assignment problem, the three elements of the problem are clarified: the decision goal, the factors to be considered (decision criteria) and the decision objects [11]. The target layer is the storage location assignment problem; the decision objects are the three storage location assignment schemes; and the decision criteria are the evaluation indexes, which mainly include the following aspects.
Space utilization rate (M1): the ratio of the space required for steel storage to the total available space in the warehouse.
Reasonable access rate (M2): the convenience with which steel stored in the warehouse can be moved between the door and the storage location when entering and leaving the warehouse.
Inventory cost (M3): the storage costs, labor costs and other costs incurred in storing the steel.
Reasonable number of positions (M4): whether the number of cargo spaces can store the total amount of steel on a given day.
According to the above analysis, the hierarchical structure of the storage location assignment schemes can be obtained, as shown in Fig. 8.
Fig. 8. Hierarchy diagram of optimization scheme
2) Construct a judgment matrix
The importance of each index to the storage location assignment plan is used as the basis for assigning values to establish a judgment matrix. The obtained judgment matrix is shown in Table 8.

Table 8. Judgement matrix for comprehensive evaluation of storage location assignment plan

| Comprehensive evaluation of storage location assignment plan | M1 | M2 | M3 | M4 | Value |
|---|---|---|---|---|---|
| Space utilization rate M1 | 1 | 1/3 | 4 | 2 | 0.25 |
| The frequency of access is reasonable M2 | 3 | 1 | 6 | 3 | 0.52 |
| Inventory cost M3 | 1/4 | 1/6 | 1 | 1/4 | 0.06 |
| The number of positions is reasonable M4 | 1/2 | 1/3 | 4 | 1 | 0.17 |
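For reference, the weight column of Table 8 and the consistency check can be reproduced from the judgment matrix with a few lines of Python; the random-index constants are the standard Saaty values, and the eigenvector method used here is one common way of deriving AHP weights.

```python
# Derive AHP weights from the Table 8 judgement matrix via the principal
# eigenvector and check consistency (CR < 0.1 is acceptable; defined for n >= 3).
import numpy as np

A = np.array([
    [1,   1/3, 4, 2  ],   # M1 space utilisation
    [3,   1,   6, 3  ],   # M2 reasonable access frequency
    [1/4, 1/6, 1, 1/4],   # M3 inventory cost
    [1/2, 1/3, 4, 1  ],   # M4 reasonable number of positions
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                       # normalised weight vector

lam_max = eigvals[k].real
n = A.shape[0]
CI = (lam_max - n) / (n - 1)          # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty random index
CR = CI / RI                          # consistency ratio

print("weights:", np.round(w, 2))     # roughly (0.25, 0.52, 0.06, 0.17)
print("CR:", round(CR, 4), "-> consistent" if CR < 0.1 else "-> revise matrix")
```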
3) Consistency test and scheme results
According to Saaty's scaling method [12], a judgment matrix is established for the four factors that affect the optimization of cargo locations. A represents the storage location assignment plan obtained by the ABC method, B the plan using the random storage strategy, and C the plan obtained by the COI method; the weight coefficients and the test numbers are shown in Table 9. It can be seen from Table 9 that all the test numbers are less than 0.1, so the requirements are met and the consistency test is passed. By solving the AHP evaluation model, the corresponding weights of each plan and index are multiplied pairwise and the products are summed to obtain the combined weight of the target, as shown in (17):

Z_j = \sum_{i=1}^{4} \alpha_i M_{ij}, \quad j = 1, 2, 3   (17)
Table 9. Weight coefficient of comprehensive evaluation index for storage location assignment plan

| | A | B | C | Test number |
|---|---|---|---|---|
| M1 (Weight Coefficient) | 0.11 | 0.31 | 0.58 | 0.0018 |
| M2 (Weight Coefficient) | 0.33 | 0.10 | 0.57 | 0.0123 |
| M3 (Weight Coefficient) | 0.55 | 0.21 | 0.24 | 0.0091 |
| M4 (Weight Coefficient) | 0.25 | 0.59 | 0.16 | 0.0268 |
Among them, Z_j, j = 1, 2, 3 represents the total ranking weight of the comprehensive evaluation of the three allocation schemes, α_i represents the weight coefficient of index i, and M_{ij} represents the weight of the j-th scheme under the i-th evaluation index. Through this calculation, the total ranking weights of the comprehensive evaluation of schemes A, B and C are obtained, as shown in Table 10.

Table 10. Hierarchical total sorting table of storage location assignment scheme

| Evaluation index | Space utilization rate M1 | The frequency of access is reasonable M2 | Inventory cost M3 | The number of positions is reasonable M4 | Comprehensive evaluation total ranking weight |
|---|---|---|---|---|---|
| Weight coefficient | 0.25 | 0.52 | 0.06 | 0.17 | |
| A | 0.11 | 0.33 | 0.55 | 0.25 | 0.28 |
| B | 0.31 | 0.10 | 0.21 | 0.59 | 0.24 |
| C | 0.58 | 0.57 | 0.24 | 0.16 | 0.48 |
It can be clearly seen from Table 10 that, among the three schemes, scheme C has the highest overall ranking value in the comprehensive evaluation, indicating that the allocation scheme obtained with the COI method is the best for the storage location assignment of goods. The consistency test value of the overall hierarchical ranking obtained with MATLAB is 0.0208 < 0.1, so the requirements are met, the consistency test is passed, and the result is valid.
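Equation (17) can be verified directly from Tables 8 and 9; the short calculation below reproduces the overall ranking weights of Table 10 up to rounding.

```python
# Combine the criterion weights (Table 8) with the per-criterion scheme
# weights (Table 9) to obtain Z_j = sum_i alpha_i * M_ij.
import numpy as np

alpha = np.array([0.25, 0.52, 0.06, 0.17])   # criterion weights M1-M4
M = np.array([                                # rows = M1-M4, columns = A, B, C
    [0.11, 0.31, 0.58],
    [0.33, 0.10, 0.57],
    [0.55, 0.21, 0.24],
    [0.25, 0.59, 0.16],
])

Z = alpha @ M
for name, z in zip(["A (ABC)", "B (random storage)", "C (COI)"], Z):
    print(f"{name}: {z:.2f}")
# Scheme C comes out highest, consistent with the paper's conclusion.
```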
4 Conclusion
This paper comprehensively considers the relevance of goods, the priority of the turnover rate and other principles, studies the problem of storage location assignment among similar goods, and establishes a storage location assignment evaluation model based on the ABC method,
the COI method, the random storage strategy and AHP. The example verification shows that the model can be applied well to the storage location assignment problem, solving the assignment of similar goods in the warehouse, improving the operating efficiency of the warehouse, reducing warehousing costs, and realizing cost reduction and efficiency improvement for the logistics industry.
References 1. Guo, G.Q., Su, X.J., Liu, X.: Design and implementation of logistics intelligent management and control systems for steel enterprises. Appl. Mech. Mater. 2617, 716–721 (2013) 2. Heskett, J.L.: Cube-per-order index: a key to warehouse stock location. Transp. Distrib. Manage. 3, 27–31 (1963) 3. Li, Y.F.: Storage assignment and order batching optimization based on association rule. Beijing Jiaotong University (2019) 4. Liu, J.W.: Research on optimization of warehouse location of a company based on genetic algorithm. Intern. Combust. Engine Parts 22, 172–174 (2019) 5. Yang, L.B.: Comparative research on location optimization of particle swarm optimization and grey wolf algorithm. Kunming University of Science and Technology (2019) 6. Li, M.K., Jiang, X.Y.: A method for storage location assignment considering COI and demand correlations. Oper. Res. Manage. Sci. 27, 22–32 (2018) 7. Liang, Y., Yang, S.: Warehouse optimization design based on ABC classification and FP-tree algorithm. Logistics Eng. Manage. 41, 60–63 (2019) 8. Zhang, X.Y.: Research on slotting optimization strategies for B2C e-commerce logistics based on Flexsim. Logistics Mater. Handling 24, 127–131 (2019) 9. Li, J.M.: Research on warehouse management of GC logistics company. Hebei University of Science & Technology (2018) 10. Li, M.K., Zhang, Y.P.: A study of workload distribution and COI-based storage policies. Ind. Eng. J. 18, 37–41 (2015) 11. He, F.R.: Application of AHP in construction bidding. Southwest Jiaotong University (2014) 12. Lu, F.Q., Xue, Y.S.: Stochastic analytic hierarchy process based risk evaluation for virtual enterprise. Inf. Control 41, 110–116 (2012)
Author Index
A Ahmed, Mohammed M., 389 B Bao, Xinzhong, 24 Bo, Qixin, 345 Bohács, Gábor, 442 C Cai, Yujie, 504 Cao, Jianglong, 596 Chen, Dongyan, 81 Chen, Hongjian, 228 Chen, Qiuru, 770 Chen, Xuedong, 124, 142, 169 Chen, Yang, 67 Chu, Yaopeng, 273 Cui, Xiadi, 735 Cui, Yongmei, 189 D Dai, Zhenpeng, 596 Darwish, Ashraf, 693 Deng, Yiwen, 491 Ding, Jingzhi, 465 Dou, Shuihai, 96, 758, 770 Du, Yangping, 758 Du, Yanping, 96, 770 Durek-Linn, Anna, 606
F Fan, Lei, 504 Feng, Xue, 237 Feng, Yuan, 248 Fiaidhi, Jinan, 43, 57 Fottner, Johannes, 606 G Gaber, Tarek, 705 Gad, Ibrahim, 693 Gao, Junxin, 189 Gao, Xuedong, 297, 311, 321, 345 Gáspár, Dániel, 442 Goda, Essam, 705 Gou, Juanqiong, 735 Guan, Xiaolan, 379, 627 Guo, Chunfang, 169 H Han, Qianqian, 660 Hao, Mengqi, 465 Hassanien, Aboul Ella, 389, 415, 682, 693, 705 Horváth, András Máté, 442 Hou, Hanping, 431 Huang, Wenjun, 549, 562 Huang, Xiaozhang, 220, 714 J Jia, Deng, 1 Jiang, Jiayi, 504 Jiang, Peiyuan, 596 Jin, Cheng, 478
786 Jin, Chunhua, 35, 401 Jing, Tao, 549, 562 Ju, Hongjin, 259 L Lan, Danyu, 333 Lei, Qiuyuan, 24 Li, Jing, 111, 132, 151, 160, 179, 504 Li, Li, 237 Li, Shuchen, 454 Li, Tingting, 179 Li, Xinhang, 537 Li, Xiucheng, 151 Li, Xu, 523, 575 Li, Xuemei, 189 Li, Zhenping, 660 Liang, Yanan, 537 Liu, Boyang, 369 Liu, Cheng, 24 Liu, Hengyu, 67 Liu, Ruijian, 196, 358, 636 Liu, Wei, 248, 511, 747 Liu, Xiangge, 160 Liu, Ye, 96, 758, 770 Liu, Ying, 584 Lu, Lijun, 478 Luo, Ke, 111 Luo, Qiaoyun, 747 M Ma, Yanhong, 35 Mao, Yang, 369 Min, Jianing, 478 Mohammed, Sabah, 43, 57 N Na, Ta, 758 P Pan, Yingxue, 297 Peng, Lizhi, 96, 758, 770 Q Qin, Ruiyan, 523 S Sawyer, Darien, 43, 57 Sayed, Gehad Ismail, 682 Shams, Mahmoud Y., 415 Shang, Jianhua, 584 Shen, Jiren, 672
Author Index Shi, Meiling, 596 Siciliano, Giulia, 606 Su, Xianyang, 96, 758, 770 Sun, Yuhui, 189 T Tan, Gu, 228 Tang, Dezhi, 228 Tang, Fangcheng, 636 Tang, Mincong, 389, 415, 682, 693, 705 Tong, Xin, 584 Torky, Mohamed, 705 W Wang, Ai, 311 Wang, Binyu, 575 Wang, Chong, 1, 491 Wang, Jiayuan, 11 Wang, Jun, 237 Wang, Kang, 660 Wang, Mingwei, 549 Wang, Yang, 596 Wang, Yangkun, 562 Wang, Yuhan, 196, 358 Wang, Zhaohong, 142 Wang, Zihao, 124 Wei, Guiying, 208, 284, 333 Wei, Yimeng, 208 Wu, Sen, 208, 284, 333 X Xie, Wenjing, 24 Xie, Xiang, 649 Xu, E., 596 Xu, Lei, 11 Xu, Wuhuan, 237 Xu, Yuan, 237 Y Yang, Yang, 636 Yang, Zhidong, 196, 358 Yin, Xiangting, 228 Yu, Zheyuan, 132 Z Zhai, Xiaoxiao, 401 Zhang, Cheng, 649 Zhang, Juliang, 259 Zhang, Qiuxia, 431 Zhang, Xiaodong, 228 Zhang, Yi, 81 Zhang, Yiqing, 511 Zhang, Yue, 11 Zhang, Zhidong, 369 Zhao, Shenghui, 596
Author Index Zhao, Shouting, 81 Zhao, Tong, 401 Zhao, Yuzhuo, 111, 132, 142, 151, 160, 179 Zheng, Zezhi, 35
Zhao, Zhenxia, 169 Zhou, Tingting, 321 Zong, Huimin, 284 Zong, Wenhao, 722