Lecture Notes on Data Engineering and Communications Technologies 172
Zheng Xu · Saed Alrabaee · Octavio Loyola-González · Niken Dwi Wahyu Cahyani · Nurul Hidayah Ab Rahman Editors
Cyber Security Intelligence and Analytics The 5th International Conference on Cyber Security Intelligence and Analytics (CSIA 2023), Volume 1
Lecture Notes on Data Engineering and Communications Technologies, Volume 172
Series Editor: Fatos Xhafa, Technical University of Catalonia, Barcelona, Spain
The aim of the book series is to present cutting-edge engineering approaches to data technologies and communications. It publishes the latest advances on the engineering task of building and deploying distributed, scalable and reliable data infrastructures and communication systems. The series has a prominent applied focus on data technologies and communications, with the aim of bridging fundamental research on data science and networking to the data engineering and communications that lead to industry products, business knowledge and standardisation. Indexed by SCOPUS, INSPEC, EI Compendex. All books published in the series are submitted for consideration in Web of Science.
Editors Zheng Xu Shanghai Polytechnic University Shanghai, China
Saed Alrabaee United Arab Emirates University Abu Dhabi, United Arab Emirates
Octavio Loyola-González Stratesys Madrid, Spain
Niken Dwi Wahyu Cahyani Telkom University Cileunyi, Jawa Barat, Indonesia
Nurul Hidayah Ab Rahman Universiti Tun Hussein Onn Malaysia Parit Raja, Malaysia
ISSN 2367-4512 ISSN 2367-4520 (electronic) Lecture Notes on Data Engineering and Communications Technologies ISBN 978-3-031-31859-7 ISBN 978-3-031-31860-3 (eBook) https://doi.org/10.1007/978-3-031-31860-3 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
The 5th International Conference on Cyber Security Intelligence and Analytics (CSIA 2023) is an international conference dedicated to promoting novel theoretical and applied research advances in the interdisciplinary agenda of cyber security, particularly focusing on threat intelligence and analytics and countering cybercrime. Cyber security experts, including those in data analytics, incident response and digital forensics, need to be able to rapidly detect, analyze and defend against a diverse range of cyber threats in near real-time conditions. For example, when a significant amount of data is collected from or generated by different security monitoring solutions, intelligent and next-generation big data analytical techniques are necessary to mine, interpret and extract knowledge from these (big) data. Cyber threat intelligence and analytics are among the fastest-growing interdisciplinary fields of research, bringing together researchers from fields such as digital forensics, political and security studies, criminology, cyber security, big data analytics and machine learning to detect, contain and mitigate advanced persistent threats and fight organized cybercrime.

The 5th International Conference on Cyber Security Intelligence and Analytics (CSIA 2023), building on the success of the online meetings held from 2020 to 2022 (due to COVID-19) and of the conference in Wuhu, China (2019), is proud to be in its 5th consecutive year. CSIA 2023 is being organized at the Radisson Blu Shanghai Pudong Jinqiao Hotel. It features a technical program of refereed papers selected by the international program committee, as well as keynote addresses. Each paper was reviewed by at least two independent experts.

The conference would not have been a reality without the contributions of the authors. We sincerely thank all the authors for their valuable contributions. We would like to express our appreciation to all members of the program committee for their valuable efforts in the review process, which helped us guarantee the highest quality of the selected papers for the conference. Our special thanks are due also to the editors of the Springer book series "Lecture Notes on Data Engineering and Communications Technologies," Thomas Ditzinger and Suresh Dharmalingam, for their assistance throughout the publication process.
Organization
Steering Committee Chair
Kim-Kwang Raymond Choo, University of Texas at San Antonio, USA
General Chair
Zheng Xu, Shanghai Polytechnic University, China
Program Committee Chairs
Saed Alrabaee, United Arab Emirates University, UAE
Octavio Loyola-González, Stratesys, Spain
Niken Dwi Wahyu Cahyani, Telkom University, Indonesia
Nurul Hidayah Ab Rahman, Universiti Tun Hussein Onn Malaysia, Malaysia
Publication Chairs
Juan Du, Shanghai University, China
Shunxiang Zhang, Anhui University of Science & Technology, China
Publicity Chairs
Neil Y. Yen, University of Aizu, Japan
Vijayan Sugumaran, Oakland University, USA
Junchi Yan, Shanghai Jiao Tong University, China
Local Organizing Chairs
Chuang Ma, Shanghai Polytechnic University, China
Xiumei Wu, Shanghai Polytechnic University, China
Program Committee Members
Guangli Zhu, Anhui University of Science & Technology, China
Tao Liao, Anhui University of Science & Technology, China
Xiaobo Yin, Anhui University of Science & Technology, China
Xiangfeng Luo, Shanghai University, China
Xiao Wei, Shanghai University, China
Huan Du, Shanghai University, China
Zhiguo Yan, Fudan University, China
Zhiming Ding, Beijing University of Technology, China
Jianhui Li, Chinese Academy of Sciences, China
Yi Liu, Tsinghua University, China
Kuien Liu, Pivotal Inc, USA
Feng Lu, Chinese Academy of Sciences, China
Wei Xu, Renmin University of China, China
Ming Hu, Shanghai University, China
Abdelrahman Abouarqoub, Middle East University, Jordan
Sana Belguith, University of Auckland, New Zealand
Rozita Dara, University of Guelph, Canada
Reza Esmaili, Amsterdam University of Applied Science, Netherlands
Ibrahim Ghafir, Loughborough University, UK
Fadi Hamad, Isra University, Jordan
Sajjad Homayoun, Shiraz University of Technology, Iran
Nesrine Kaaniche, Telecom SudParis, France
Steven Moffat, Manchester Metropolitan University, UK
Charlie Obimbo, University of Guelph, Canada
Jibran Saleem, Manchester Metropolitan University, UK
Maryam Shahpasand, Asia Pacific University Malaysia, Malaysia
Steven Walker-Roberts, Manchester Metropolitan University, UK
Contents
Image Retrieval Algorithm of Intangible Cultural Heritage Based on Artificial Intelligence Technology . . . 1
Lan Lu, Cheng Li, and Cheng Zhou
Network Lossless Information Hiding Communication Method Based on Big Data . . . 10
Fei Chen
Anomaly Recognition Method of Network Media Large Data Stream Based on Feature Learning . . . 20
Fei Chen
Design and Implementation of Automatic Quick Report Function for WeChat on Earthquake Information . . . 29
Yina Yin, Liang Huang, and A C Ramachandra
Application of BIM + GIS Technology in Smart City 3D Design System . . . 37
Congyue Qi, Hongwei Zhou, Lijun Yuan, Ping Li, and Yongfeng Qi
Myopic Retailer's Cooperative Advertising Strategies in Supply Chain Based on Differential Games . . . 46
Chonghao Zhang
Hardware System of Thermal Imaging Distribution Line Temperature Monitor Based on Digital Technology . . . 56
Tie Zhou, Ji Liu, Weihao Gu, Zhimin Lu, and Linchuan Guo
Applications of Key Automation Technologies in Machine Manufacturing Industry . . . 66
Qifeng Xu
The Application of VR Technology in Han Embroidery Digital Museum . . . 76
Lun Yang
Analysis of International Trade Relations Forecasting Model Based on Digital Trade . . . 87
Haiyan Ruan
Empirical Analysis of Online Broadcast Based on TAM Model . . . 96
Mengwei Liu, Feng Gong, Jiahui Zhou, and V. Sridhar
A High Resolution Wavelet Chaos Algorithm for Optimization of Image Separation Processing in Graphic Design . . . 107
Jingying Wei and Yong Tan
Design and Implementation of Smart Tourism Scenic Spot Monitoring System Based on STM32 . . . 116
Kewei Lei and Lei Tian
Herd Effect Analysis of Stock Market Based on Big Data Intelligent Algorithm . . . 129
E. Zhang, Xu Zhang, and Piyush Kumar Pareek
Intelligent Recommendation System for Early Childhood Learning Platform Based on Big Data and Machine Learning Algorithm . . . 140
Yabo Yang
Big Data Technology Driven 5G Network Optimization Analysis . . . 151
Xiujie Zhang, Xiaolin Zhang, and Zhongwei Jin
An Enterprise Financial Statement Identification Method Based on Support Vector Machine . . . 160
Chunkai Ding
An AIS Trusted Requirements Model for Cloud Accounting Based on Complex Network . . . 170
Chunkai Ding
A Novel Method of Enterprise Financial Early Warning Based on Wavelet Chaos Algorithm . . . 180
Lu Zhou
Application of Panoramic Characterization Function Based on Artificial Intelligence Configuration Operation State . . . 189
Xiaofeng Zhou, Zhigang Lu, Ruifeng Zhao, Zhanqiang Xu, and Hong Zhang
A Decision-Making Method Based on Dynamic Programming Algorithm for Distribution Network Scheduling . . . 198
Hongrong Zhai, Ruifeng Zhao, Haobin Li, Longteng Wu, and Xunwang Chen
Application of Computer Vision Technology Based on Neural Network in Path Planning . . . 207
Jinghao Wen, Jiashun Chen, Jiatong Jiang, Zekai Bi, and Jintao Wei
A Comprehensive Evaluation Method of Computer Algorithm and Network Flow Techniques . . . 217
Zhiwei Huang
Consumer Evaluation System in Webcast Based on Data Analysis . . . 227
Xia Yan and Anita Binti Rosli
Optimal Allocation Method of Microgrid Dispatching Based on Multi-objective Function . . . 236
Yang Xuan, Xiaojie Zhou, Lisong Bi, Zhanzequn Yuan, Miao Wang, and B. Rai Karunakara
Application of Improved SDAE Network Algorithm in Enterprise Financial Risk Prediction . . . 245
Liyun Ding and P Rashmi
The Design of Supply Chain Logistics Management Platform Based on Ant Colony Optimization Algorithm . . . 255
Bin Wang
Management and Application of Power Grid Infrastructure Project Based on Immune Fuzzy Algorithm . . . 264
Yimin Tang, Zheng Zhang, Jian Hu, and Sen He
Optimization of Power Grid Infrastructure Project Management Based on Improved PSO Algorithm . . . 272
Yimin Tang, Zheng Zhang, Sen He, and Jian Hu
Data Analysis of Lightning Location and Warning System Based on Cluster Analysis . . . 281
Yanxia Zhang, Jialu Li, and Ruyu Yan
Application of Embedded Computer Digital Simulation Technology in Landscape Environment Design . . . 291
Ning Xian and Shutan Wang
Short-Term Traffic Flow Prediction Model Based on BP Neural Network Algorithm . . . 300
Dandan Zhang and Qian Lu
Multi-modal Interactive Experience Design of Automotive Central Control System . . . 309
Ming Lv, Zhuo Chen, Cen Guo, and K. L. Hemalatha
E-commerce Across Borders Logistics Platform System Based on Blockchain Techniques . . . 321
Qiuping Zhang, Yuxuan Fan, and B. P. Upendra Roy
Application of Chaos Neural Network Algorithm in Computer Monitoring System of Power Supply Network . . . 330
Ruibang Gong, Gong Cheng, and Yuchen Wang
Design of Bus Isolated Step-Down Power Supply System Based on Ant Colony Algorithm . . . 339
Jiangbo Gu, Ruibang Gong, Guanghui An, and Yuchen Wang
Translation Accuracy Correction Algorithm for English Translation Software . . . 349
Qiongqiong Yang
Development of Ground Target Image Recognition System Based on Deep Learning Technology . . . 358
Rui Jiang, Yuan Jiang, and Qiaoling Chen
A Mobile Application-Based Tower Network Digital Twin Management . . . 369
Kang Jiao, Haiyuan Xu, Letao Ling, and Zhiwen Fang
Optimization of Intelligent Logistics System Based on Big Data Collection Techniques . . . 378
Qiuping Zhang, Lei Shi, and Shujie Sun
Anomaly Detection Algorithm of Cerebral Infarction CT Image Based on Data Mining . . . 388
Yun Zhang
Design of Decoration Construction Technology Management System Based on Intelligent Optimization Algorithm . . . 399
Xiaohu Li
Construction of Power Supply Security System for Winter Olympic Venues in the Era of Intelligent Technology . . . 409
Jianwu Ding, Mingchun Li, Shuo Huang, Yu Zhao, Yiming Ding, and Shiqi Han
Multi Objective Optimization Management Model of Dynamic Logistics Network Based on Memetic Algorithm . . . 420
Pingbo Qu
Design of Intelligent Logistics Monitoring System Based on Data Mining Technology . . . 430
Qiuping Zhang, Meng Wang, and Pushpendra Shafi
English Speech Recognition Hybrid Algorithm Based on BP Neural Network . . . 440
Feiyan Wang and Shixue Sun
Optimization of Ev Charging Pile Layout on Account of Ant Colony Algorithm . . . 450
Juan Wang
Quantitative Analysis Method of C4.5 Algorithm in Enterprise Management Service . . . 459
Zihao Wang and Jun Li
Rural Landscape Simulation System Based on Virtual Reality Technology . . . 469
Xiaoqing Geng and Shaobiao Deng
C Language Programming System Based on Data Analysis Algorithm . . . 479
Yuhang Lang
Application of Association Rule Mining Algorithm in Civil Engineering Optimization Design . . . 489
Congyue Qi, Hongwei Zhou, Ming Li, Zhihua Zhang, Lijun Yuan, and Peng Zhang
The Application of Digital Construction Based on BIM Technology in Housing Complex . . . 498
Yundi Peng and Lili Peng
The Research on Computer On-line Teaching Simulation System Based on OBE Concept . . . 509
Yi Liu and Xiaobo Liu
Data Interaction of Scheduling Duty Log Based on B/S Structure and Speech Recognition . . . 520
Changlun Hu, Xiaoyan Qi, Hengjie Liu, Lei Zhao, and Longfei Liang
Food Safety Big Data Classification Technology Based on BP Neural Network . . . 530
Dongfeng Jiang
Rethinking and Reconstructing the Life Education System of Universities Based on Big Data Analysis . . . 540
Fan Xiaohu and Xu He
Unsupervised Data Anomaly Detection Based on Graph Neural Network . . . 552
Ning Wang, Zheng Wang, Yongwen Gong, Zhenlin Huang, Xing Wen, and Haitao Zeng
Efficient Data Transfer Based on Unsupervised Data Grading . . . 565
Liuqi Zhao, Zheng Wang, Yongwen Gong, Zhenlin Huang, Xing Wen, Haitao Zeng, and Ning Wang
Author Index . . . 577
Image Retrieval Algorithm of Intangible Cultural Heritage Based on Artificial Intelligence Technology

Lan Lu1, Cheng Li2, and Cheng Zhou1(B)

1 College of Art and Design, XiangNan University, Chenzhou, Hunan, China
[email protected]
2 School of Chemical Engineering and Technology, Yantai Nanshan University, Yantai, Shandong, China
Abstract. Nowadays, digital images are widely used in the national defence industry, the military, manufacturing, medical treatment, the news media, mass entertainment and daily family life. This wide use has produced a large number of different image databases. At the same time, the protection of intangible cultural heritage has entered the digital era, and many regions in China have done extensive research and in-depth excavation work on the digital preservation of intangible historical and cultural heritage. In order to better preserve and disseminate intangible historical and cultural heritage, the retrieval and management of image database systems has become an urgent topic.

Keywords: artificial intelligence · Intangible culture · image retrieval
1 Introduction

The research and application of content-based image database technology and related digital information technology in the preservation and dissemination of intangible historical and cultural heritage have led to a major breakthrough in image retrieval algorithms for this material. Such research improves the display and fidelity of intangible historical and cultural heritage, provides a basis for its safe and sustainable preservation and dissemination, and takes an exploratory step in that direction. The purpose of image retrieval research is to retrieve, query and manage images automatically and intelligently, so that searchers can find the required data easily, quickly and accurately, and so that managers can be freed from a large amount of monotonous and inefficient manual management work.
2 Definition of the Concept of Intangible Culture

Intangible historical and cultural heritage refers to the various forms of spiritual expression of fine traditional culture handed down from generation to generation by the people of all ethnic groups in China, together with the items and venues related to those expressions. Intangible historical and cultural heritage is a main symbol of the achievements of a country's and a nation's historical civilization and a main component of excellent traditional history and culture. "Intangible cultural heritage" stands opposite "material cultural heritage"; together they are referred to as "cultural heritage." In December 2020, with the successful inscription of "Taijiquan" and "sending the king ship" (a ceremony, with related practices, for maintaining the sustainable connection between man and the sea), 42 items from China had been included in UNESCO's global intangible cultural heritage lists, ranking first in the world [1].

The items covered by intangible cultural heritage are not necessarily the same in different countries; they differ with ethnicity, social history, the humanities and even national conditions, so the specific division standards for intangible historical and cultural heritage are not consistent around the world. However, the division of intangible historical and cultural heritage in UNESCO's convention on the safeguarding of intangible cultural heritage basically covers the situations found around the world, and the division standards considered by Chinese academia are basically based on the system established by that convention. In fact, the protection of intangible historical and cultural heritage was already valued by countries such as Brazil, Argentina and India as early as the early 1950s; these countries issued many laws, regulations and policy measures for its protection. In the 1970s, the global community began to pay general attention to this theme, and one of the main tasks of UNESCO is now to maintain and promote the diversity of the world's intangible cultural heritage. China is rich in intangible cultural heritage, so from the perspective of safeguarding cultural and ethnic diversity, there is still a huge endogenous demand for its protection in China. However, like Brazil, Argentina, India and other countries, China's various protection measures for intangible historical and cultural heritage started relatively late [2, 3], as shown in Fig. 1.
Fig. 1. Digital process
3 Application Background

Intangible cultural heritage is strongly regional, with a small audience and a narrow scope of publicity, and most of it still relies on traditional means of communication, so most of it is now turning toward "digitization." "Digitization of cultural heritage" is defined as "using digital technologies such as digital collection, digital storage, digital processing, digital display and digital communication to convert, copy and restore cultural heritage into shareable and renewable digital forms, interpret cultural heritage from a new perspective, protect cultural heritage in new ways, and use it for new needs" [4, 5]. The academic concept of "digitization of intangible cultural property" broadens this idea. The dispute between the tangible and intangible categories of intangible social, historical and cultural assets inevitably affects the concept of digitizing intangible property. Using the definition of "digitization of cultural relics" to define "digitization of intangible property" removes the need to redefine and divide the tangible and intangible characteristics and elements of the objects being recorded, and this essential difference can be explained conceptually [6]. With the development of digital science and technology, the new generation of information technology will play different roles in the preservation and dissemination of intangible cultural heritage and will significantly affect the concept and content of its digitization. Therefore, in a changing scientific and technological environment, it is necessary to explore the concept, content and extension of the digitization of intangible cultural heritage; this is a major basic issue affecting research and development in the field. The modern digitization of intangible cultural property is different from the digitization of material cultural relics. The protection path of intangible cultural heritage is shown in Fig. 2.

Fig. 2. Protection path of intangible cultural heritage

All modern digitization of intangible cultural property, such as inherited skills, should focus on realizing and publicizing its cultural content, and its greatest benefit should be a complete way of recording that includes humanistic elements and cultural values. For example, to digitize ceramic technology, we first need a material database, that is, data records of the various firing temperatures and of the materials after kiln transformation; then data records of the various processes, such as blank-manufacturing temperature, wire drawing and glaze color ratio, or 3D-printing data for the various molds; in addition, complete color data analysis and graphic analysis must be carried out so that, whether through the Internet or wearable devices, the audience can feel the temperature and breath at the moment of firing. Therefore, the analytical framework for the digitization of intangible cultural heritage must digitize both the material and the intangible levels, in the fields of the seen and the unseen and under the constraints of context [7, 8]. An example of digital art is shown in Fig. 3.

Fig. 3. Digital art
4 Design of Image Retrieval System

In order to fully describe an image's shape, texture and other properties, it must be characterized from one or more perspectives, and to obtain good results in image search, two or more important features must be considered together. Therefore, how to organize these features reasonably, so that the resulting feature vector or similarity model better meets users' requirements and yields good search results, is one of the key problems in the field of image search [9]. In particular, during relevance feedback the feature combination should be adjusted according to the differing needs of users. To address these problems, we propose adopting a genetic algorithm in the feedback process, giving an image search technique that adaptively adjusts the feature weights, parameters and similarity model through genetic feedback. In the feedback process, the conventional vector adjustment formula is abandoned and image fusion technology is used to adjust the search vector, yielding an image search technique based on genetic feedback and image integration. Similarly, the image feature database is classified by combined clustering, so as to provide more representative feature combinations and improve retrieval. Finally, based on reinforcement learning theory, an autonomous-learning intelligent search architecture is constructed using message aggregation and genetic computing, and a graphic search method combining modern intelligent optimization algorithms with image information fusion is proposed, realizing intelligent search [10]. Applications of 3D technology are shown in Figs. 4 and 5.
Fig. 4. 3D technology application.
Fig. 5. Digitization of Art Museum
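To make the feature-organization idea of this section concrete, the sketch below combines several per-feature distances into one similarity score with adjustable weights, which is the quantity that a genetic algorithm or relevance-feedback loop would tune. This is a minimal illustration rather than the authors' implementation; the feature names, the normalized Euclidean distance and the initial weight values are assumptions for the example.

import numpy as np

def weighted_distance(query_feats, candidate_feats, weights):
    """Combine per-feature distances into one score (smaller = more similar).

    query_feats / candidate_feats: dict mapping feature name -> 1-D vector.
    weights: dict mapping feature name -> non-negative weight; in the scheme
    described above these would be tuned by genetic feedback.
    """
    score = 0.0
    for name, w in weights.items():
        q, c = query_feats[name], candidate_feats[name]
        d = np.linalg.norm(q - c) / (np.linalg.norm(q) + np.linalg.norm(c) + 1e-9)
        score += w * d
    return score

# Example with two hypothetical features: a color histogram and a texture vector.
rng = np.random.default_rng(0)
query = {"color": rng.random(64), "texture": rng.random(16)}
database = [{"color": rng.random(64), "texture": rng.random(16)} for _ in range(100)]
weights = {"color": 0.7, "texture": 0.3}   # assumed initial weighting
ranked = sorted(range(len(database)),
                key=lambda i: weighted_distance(query, database[i], weights))
print("best matches:", ranked[:5])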
(1) Image model. Before image retrieval, the digitization of information is the first subject to be processed. A ternary representation model given in this paper is:

O = (D, F, R)   (1)

where D represents an original image, such as an image in JPEG format; F = {f_i} represents a set of features, with f_i the i-th feature; and R = {r_ij} represents the specific representation of feature f_i, which is itself a vector containing multiple elements, as shown in formula (2):

r_ij = (r_ij1, r_ij2, r_ij3, ..., r_ijk)   (2)

(2) Color features. Color is the most commonly used visual feature in image retrieval, mainly because color is usually closely related to the objects or scenes in the image. In addition, compared with other visual characteristics, color has low dependence on the size, position and angle of the objects themselves, and is therefore highly robust [11]. At present, there are several ways to obtain color characteristics:

(1) Color histogram. The main idea of the color histogram is to quantize all colors in the whole color space (such as RGB or HSV) by some method (such as uniform quantization, a color table, color aggregation quantization or subjective perception quantization), and then to compute, by statistical analysis, the proportion of each quantized color in the overall color distribution of the image. The color histogram has characteristics that other color representations lack, such as invariance to rotation, to large-scale illumination change and to translation. Therefore, the color histogram has been widely used in data-based image retrieval [12].

(2) Color aggregation vector. The color aggregation vector is an evolution of the image histogram. Its core idea is that when pixels of the same color in an image occupy a connected area whose total size exceeds a threshold, the pixels in that area are treated as aggregated (coherent) pixels; otherwise they are non-aggregated. The ratio between the aggregated and non-aggregated pixels of each color then forms the color aggregation vector of the image. During retrieval, the aggregation vector of the target image is matched against the aggregation vector of each searched image. The aggregated component of the vector preserves the spatial information of pixel colors to a certain extent [13].

(3) Color moment. The color moment was proposed by Stricker. Its mathematical basis is that any color distribution in an image can be described by its moments. Moreover, because the color distribution information of an image is mainly concentrated in the low-order moments, the first-order, second-order and third-order moments of each color channel can represent the color distribution of the image. See formulas (3) and (4) for the calculation:

μ_k = (1 / (N × M)) Σ_{i,j} L_kij   (3)

δ_k = (1 / (N × M)) Σ_{i,j} (L_kij − μ_k)^2   (4)

where L_kij is the value of color channel k at pixel (i, j) of an N × M image.
Compared with the traditional color histogram, one of the main characteristics of the color moment is that it does not require quantization of the features; compared with other color characteristics (such as the color aggregation vector and spatially informed color features), the color moment has only nine vector components, so the computation is small and the expression is relatively simple. In practical use, in order to compensate for the weak discriminative power of the low-order moments, the color moments are often used in combination with other characteristics, typically as a first step that significantly narrows the candidate set before the other characteristics are applied [14, 15].
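As an illustration of formulas (3) and (4), the following sketch computes a per-channel color histogram and the first two color moments for an image stored as a NumPy array. It is a minimal sketch of the standard definitions, not code from the paper; the channel layout (H × W × 3, 8-bit) and bin count are assumptions.

import numpy as np

def color_histogram(img, bins=16):
    """Per-channel histogram normalized to proportions (uniform quantization)."""
    hists = []
    for k in range(img.shape[2]):
        h, _ = np.histogram(img[..., k], bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    return np.concatenate(hists)

def color_moments(img):
    """First moment (mean, formula 3) and second moment (formula 4) per channel."""
    n = img.shape[0] * img.shape[1]          # N * M pixels
    feats = []
    for k in range(img.shape[2]):
        L = img[..., k].astype(np.float64)
        mu = L.sum() / n                     # mu_k = (1/(N*M)) sum L_kij
        delta = ((L - mu) ** 2).sum() / n    # delta_k = (1/(N*M)) sum (L_kij - mu_k)^2
        feats += [mu, delta]
    return np.array(feats)

img = np.random.default_rng(1).integers(0, 256, (64, 64, 3), dtype=np.uint8)
print(color_histogram(img).shape, color_moments(img))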
5 Conclusion

The research content of each stage of intangible cultural heritage digitization is deepening, with its own characteristics, development status and research status. Through research on the image retrieval algorithm of intangible historical and cultural heritage using artificial intelligence technology, this paper argues that digital technology should focus on the existence form and process performance of important intangible factors such as the technology, industrial characteristics and cultural background of intangible historical and cultural heritage, and not only on the informational description of its materialized carriers. This paper introduces both the basic digital technologies and the newer technical means, and analyzes basic means such as text and image together with the key applications of sound and video, which remain central to the digitization of intangible cultural heritage.

Acknowledgments. This work was financially supported by the key project of the Education Department of Hunan Province "Research on the Development Mode of Cultural Creative Industry in Villages and Towns of Southern Hunan with Intangible Cultural Heritage as the Main Body", Xiang jiao-tong [2019] No. 353-19A467 fund.
References

1. Computers - Computer Graphics; Recent findings from Ca' Foscari University Venice provide new insights into computer graphics (Multi-feature fusion for image retrieval using constrained dominant sets). Comput. Technol. J. (2020)
2. Deng, Y., Yu, Y.: Self feedback image retrieval algorithm based on annular color moments. EURASIP J. Image Video Process. (1), 1–13 (2019)
3. Technology - Information Technology; Researchers' work from Wuhan University focuses on information technology (Result diversification in image retrieval based on semantic distance). Computers, Networks & Communications (2019)
4. Mohite, N., Waghmare, L., Gonde, A., Vipparthi, S.: 3D local circular difference patterns for biomedical image retrieval. Int. J. Multimedia Inf. Retrieval 8(2), 115–125 (2019)
5. Sagae, A., Fahlman, S.E.: Image retrieval with textual label similarity features. Intell. Syst. Acc. Finance Manag. 22(1), 101–113 (2015)
6. Divya, M., Janet, J., Suguna, R.: A genetic optimized neural network for image retrieval in telemedicine. EURASIP J. Image Video Process. 2014(1), 1–9 (2014). https://doi.org/10.1186/1687-5281-2014-9
7. Ruocco, M., Ramampiaro, H.: Event-related image retrieval: exploring geographical and temporal distribution of user tags. Int. J. Multimedia Inf. Retrieval 2(4), 273–288 (2013). https://doi.org/10.1007/s13735-013-0039-3
8. Murala, S., Maheshwari, R.P., Balasubramanian, R.: Directional local extrema patterns: a new descriptor for content based image retrieval. Int. J. Multimedia Inf. Retrieval 1(3), 191–203 (2012)
9. Anami, B.S., Nandyal, S.S., Govardhan, A.: Color and edge histograms based medicinal plants' image retrieval. Int. J. Image Graph. Sig. Process. 4(8), 24 (2012)
10. Rahman, M.M., Antani, S.K., Thoma, G.R.: A query expansion framework in image retrieval domain based on local and global analysis. Inf. Process. Manag. 47(5), 676–691 (2010)
11. Chaum, E., Karnowski, T.P., Govindasamy, V.P., Abdelrahman, M., Tobin, K.W.: Automated diagnosis of retinopathy by content-based image retrieval. Retina 28(10), 1463–1477 (2008)
12. Loisant, E., Martinez, J., Ishikawa, H., Katayama, K.: Galois' lattices as a classification technique for image retrieval. Inf. Media Technol. 1(2), 994–1006 (2006)
13. Choi, Y., Rasmussen, E.M.: Searching for images: the analysis of users' queries for image retrieval in American history. J. Am. Soc. Inf. Sci. Technol. 54(6), 498–511 (2003)
14. Greisdorf, H.: Review of: Conniss, L.R., Ashford, A.J., Graham, M.E.: Information Seeking Behaviour in Image Retrieval: VISOR I Final Report. University of Northumbria at Newcastle (2000). J. Documentation 58(3), 346–348 (2002)
15. Vendrig, J., Worring, M., Smeulders, A.W.M.: Filter image browsing: interactive image retrieval by using database overviews. Multimedia Tools Appl. 15(1), 83–103 (2001)
Network Lossless Information Hiding Communication Method Based on Big Data

Fei Chen(B)

School of Big Data, Chongqing Vocational College of Transportation, Chongqing 402247, China
[email protected]
Abstract. With the rapid development of China's economy, information technology has emerged alongside that economic growth, and its influence has gradually appeared in China's communication technology. Lossless information has become a central part of communication technology: the challenge is to keep network information lossless, encrypt the information content, and achieve hidden communication without affecting the original content of the information. Throughout history, people have hidden information in various ways, but the principle is basically the same: the information is stored "beyond the reach" of vision, hearing and the other senses, so as to pass unnoticed. For multimedia information hiding (IH), this paper introduces a network lossless IH communication method based on big data (BD), so that confidential information can be transmitted without being noticed, greatly improving the security of the information security system. Based on a study of the dissemination of big data information, this paper analyzes the feasibility of transmitting information without loss from two aspects, informational and electronic/offline. The method performs a column integer wavelet transform on the image after embedding information, and then embeds data again in the same way; through such processing, the amount of embedded data can be roughly doubled. The experimental results show that the network lossless IH communication method based on BD improves communication confidentiality by 16%. The limitations of network lossless IH communication are analyzed, discussed and summarized, so as to enrich the academic research results.

Keywords: Big Data · Lossless Information · Hidden Communication · Wavelet Transform
1 Introduction

With the increasing maturity of computer network and communication technology, the network continues to bring people great convenience, and human beings have become more and more dependent on it [1, 2]. At the same time, network technology also brings a large number of information security problems [3, 4]. At present, lossless IH technology mainly measures the performance of a method by its embedding capacity and visual quality. Therefore, in this experiment, the actual performance of the proposed method is mainly governed by the data capacity and visual quality of the input image [5, 6]. In addition, because the existing literature on lossless IH basically uses gray-scale digital images as test images, and to facilitate comparison with the experimental data of existing methods, all experimental test images used in this paper are also gray-scale images [7, 8]. Hidden communication technology will become an increasingly important means of communication and will improve lossless IH methods.

Lodewyk Kr believes that in today's network age, the issue of information security touches a country's military, political and cultural affairs. He first selects the embedding region by designing an optimization strategy, and then achieves distortion-free recovery of the watermark and the host image by using an embedding mechanism and a parameter model, which effectively overcomes the reversibility instability of methods based on histogram distribution constraints [9]. Lander NJ notes that some researchers have studied watermark detection from the perspectives of statistics, cryptography and error control coding and have achieved some results, while others use wavelet transforms combined with morphology to study image authentication [10].

The innovation of this paper is to propose research on a network lossless IH communication method based on BD. Because the Internet was introduced relatively late in China, citizens' awareness of information security is comparatively low, and information security problems keep occurring despite repeated prohibitions. The IH and information detection and extraction algorithms analyzed with BD technology, and their application to digital elevation models, require study of the proposed IH algorithms; this paper therefore covers the traditional classical watermarking algorithm, the multi-scale geometric analysis watermarking algorithm, the lossless IH algorithm and an adaptive capacity detection method. The purpose of this study is to find a new approach suitable for current network lossless IH communication based on BD.
2 Information Transmission Mode Under Hidden Information

2.1 Basic Concepts of Information Hiding Technology

Information hiding is a new comprehensive frontier discipline, involving computer graphics, signal and information processing, computer networks, cryptographic analysis and other disciplines. Information hiding refers to embedding the information to be hidden into a host carrier through an embedding algorithm, achieving concealment through the transmission of the carrier. An information hiding system usually consists of two parts:

(1) Information embedding, which can be called the embedder; this is mainly the embedding algorithm.
(2) Information detection, which can be called the detector; this is mainly the extraction algorithm used to extract hidden information from the carrier.

The basic model is shown in Fig. 1.
Fig. 1. Basic model of information hiding (the information to be hidden passes through the embedding algorithm under a secret key; the extraction algorithm recovers the hidden information under the same key)
The classification of information hiding mainly covers the following aspects. According to the carrier, it can be divided into information hiding in images, in video, in voice, in text and in various other types of data. According to the information hiding algorithms proposed so far, methods can be divided, by the way they modify the carrier, into time-domain (spatial-domain) replacement techniques, transform-domain techniques, spread spectrum techniques and statistical methods. Information hiding technology usually has the following basic requirements (a spatial-domain example is sketched after this list):

(1) Imperceptibility and undetectability: the hidden information should be imperceptible, and its existence should not change or impair the sensory quality of the original media information.
(2) Confidentiality: the embedding method should be secret, and the embedded information should be difficult to detect. In some cases, the hidden information must also be encrypted to further improve confidentiality.
(3) Information capacity: the carrier should be able to hide as much information as possible, also known as the hiding capacity. In general, under the condition of ensuring imperceptibility, the more information hidden, the worse the imperceptibility; therefore, each specific hiding scheme needs to strike a compromise between imperceptibility and capacity.
(4) Low computational complexity: the algorithm should be easy to implement. In some applications, the implementation must even meet real-time requirements; if the algorithm's complexity is too high, it will affect the operating efficiency of the whole system and make real-time operation difficult.
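As a concrete illustration of the spatial-domain replacement technique listed above, the following sketch embeds a bit string into the least significant bits of a gray-scale image and extracts it again. This is a minimal, generic LSB example under assumed conditions (an 8-bit image as a NumPy array, payload shorter than the pixel count), not the scheme proposed in this paper; note that plain LSB replacement is lossy for the carrier, unlike the lossless methods discussed later.

import numpy as np

def lsb_embed(cover, bits):
    """Replace the least significant bit of the first len(bits) pixels."""
    flat = cover.flatten()                        # flatten() returns a copy
    assert len(bits) <= flat.size, "payload too large for carrier"
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Read back the n_bits least significant bits in the same scan order."""
    return stego.flatten()[:n_bits] & 1

cover = np.random.default_rng(2).integers(0, 256, (64, 64), dtype=np.uint8)
payload = np.random.default_rng(3).integers(0, 2, 128, dtype=np.uint8)
stego = lsb_embed(cover, payload)
assert np.array_equal(lsb_extract(stego, 128), payload)
# Each carrier pixel changes by at most 1 gray level, hence the imperceptibility.
print("max pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))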
2.2 IH Environment

Traditional IH methods often cause permanent distortion of the image content when embedding secret information into the host image, which has become an important obstacle to their application in medicine, remote sensing and court evidence [11, 12]. Creating BD information on the computer is conducive to its storage and dissemination, but a consequent problem is that multimedia copyright infringement occurs from time to time and infringement of digital works is increasingly rampant; digital watermarking based on IH technology can effectively address this problem. In national military affairs, IH technology is of great significance for safeguarding national military development strategy, the latest weapon designs and military operation plans. Therefore, lossless IH for digital images has become a research hotspot in the field of multimedia information security in recent years. Aiming at the challenging problems faced by lossless IH methods, this paper explores solutions from the aspects of reversibility, imperceptibility, the embedding model and robustness. At the same time, other file formats have been found that can be used for IH. The main idea of the algorithm is to layer the carrier image and extract its second and third bitmaps; the same secret information bits are then embedded in the corresponding positions of the second and third bitmaps. When extracting the hidden information, the bit value is determined according to a statistical-analysis rule. The hidden information model is expressed as follows:

dN1 = −μ · F   (1)

where μ is the learning influence factor, which can be expressed by using the operator L as

X_t = Σ_{j=0}^{q} θ_i ε_{i−j}   (2)

The hidden channel simulation algorithm has the following form:

dN1 = −μ N_d θ   (3)
2.3 The Prospect of Lossless IH

Constructing membership functions for the carrier image and the final image, and using them to control image fusion, can prevent illegal users from removing a visible watermark by gradually approximating the weights, and the fused image can overcome the noise caused by purely random image fusion and meet visual aesthetic requirements. Reversibility, as the primary prerequisite and evaluation benchmark of this technology, requires that the content of the host image be restored without distortion when the secret information is extracted in a lossless environment. Toward this goal, researchers have devoted themselves in recent years to lossless embedding models and have put forward many new ideas and methods; in particular, after the method based on histogram rotation, Ni et al. proposed a robust lossless IH method based on the spatial histogram distribution constraint (SHD). A sketch of the classic histogram-based embedding idea is given below.
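For orientation, the following sketch illustrates the classic histogram-shifting idea usually attributed to Ni et al.: pixel values between a peak gray level and an empty gray level are shifted by one to free the bin next to the peak, and payload bits are written into the pixels at the peak. This is a simplified illustration under stated assumptions (an empty bin is assumed to exist to the right of the peak; overflow and side-information handling are omitted), not the SHD method discussed above.

import numpy as np

def hs_embed(img, bits):
    """Histogram-shifting embed; returns stego image plus (peak, zero) side info.

    Assumes peak < 255, at least len(bits) pixels at the peak, and that the
    'zero' level chosen below is genuinely empty (count 0) for exact reversibility.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                          # most frequent gray level
    zero = peak + 1 + int(hist[peak + 1:].argmin())    # emptiest level right of peak
    out = img.astype(np.int16).copy()
    out[(out > peak) & (out < zero)] += 1              # shift to free bin peak+1
    flat = out.ravel()                                 # view: edits write through
    idx = np.flatnonzero(flat == peak)[:len(bits)]     # carriers: pixels at the peak
    flat[idx] += np.asarray(bits, dtype=np.int16)      # bit 1 -> peak+1, bit 0 -> peak
    return out.astype(np.uint8), peak, zero

def hs_extract(stego, peak, zero, n_bits):
    """Recover the bits and restore the original image exactly (reversible)."""
    out = stego.astype(np.int16).copy()
    flat = out.ravel()
    idx = np.flatnonzero((flat == peak) | (flat == peak + 1))[:n_bits]
    bits = (flat[idx] == peak + 1).astype(np.uint8)
    flat[idx] = peak                                   # undo the embedding
    out[(out > peak) & (out <= zero)] -= 1             # undo the shift
    return bits, out.astype(np.uint8)

img = np.random.default_rng(5).integers(90, 110, (128, 128)).astype(np.uint8)
bits = np.random.default_rng(6).integers(0, 2, 200, dtype=np.uint8)
stego, peak, zero = hs_embed(img, bits)
rec_bits, rec_img = hs_extract(stego, peak, zero, 200)
assert np.array_equal(rec_bits, bits) and np.array_equal(rec_img, img)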
3 Network Information Under BD

3.1 Current Situation of Network Information

The exchange of information has become faster and more diversified, but the protection of intellectual property rights has also attracted broader and deeper attention. For example, some medical images, court evidence images and some military satellite images do not tolerate even slight damage to the host image. Here, lossless IH technology, as the core of IH technology, will play an important role in these special applications. Therefore, the question for lossless IH technology is how to realize the safe transmission of secret information over open channels, especially since the conditions of China's existing communication equipment are still relatively poor. If the safe transmission of secret information can be realized without dedicated secure channels, then existing or civil communication channels can be used to transmit secret information and reduce the implementation cost.

3.2 Current Situation of Network Information Security Technology

Generally, the key can be generated from the important parameters or edge information used in the embedding process. A covert attack is located on the channel over which the camouflage media is transmitted. Considering the particularity of lossless IH, this paper only studies the impact of non-malicious processing on the watermarked image, including JPEG and JPEG2000 lossy compression and additive Gaussian noise; in the lossless case, it is assumed that there is no covert attack. Traditional information encryption technology cannot fully ensure the security of secret information, because after encryption the secret information becomes a pile of unreadable garbled code. Although the secret information is hidden on the surface, the garbled code itself strengthens the curiosity of a codebreaker and increases the possibility of being deciphered. Therefore, IH technology is of great military and economic significance, with very important application prospects in military intelligence and e-commerce. The specific results are shown in Table 1.
Table 1. Method selection statistics for experimental teaching

Teaching method                         Proportion
Teacher demonstrates, students watch    4.26%
Demonstration teaching                  27.13%
Design your own experiments             50.0%
Multimedia explanation                  18.60%
3.3 Network Information Hiding Control Protocol

Because we use the communication protocol itself as the carrier of hidden information, we need to modify certain positions in the packet header. Normally, data is transmitted through the SOCKET interface; the packet header is filled in automatically by the operating system and cannot be modified by the user, and the use of raw sockets is restricted by Windows XP SP2. So we use the packet-sending function provided by WinPcap. WinPcap is independent of the protocol stack and directly faces the physical layer; it allows us to build packet headers according to the protocol format, so that we can modify the locations where data is to be hidden and build the packets to be sent. Because WinPcap bypasses the protocol stack and cannot completely control the transmission of data, we design a control channel to accompany this channel, used to direct WinPcap to send packets and to keep the system synchronized. We call the control channel the upper channel, and the channel in which WinPcap sends hidden information the lower channel. The overall framework is shown in Fig. 2.
Fig. 2. Overall structure of the information hiding system (the upper channel establishes a connection with the other party and sends control instructions to the lower channel; the lower channel performs the corresponding operations according to those instructions)
For the operation of the whole system, the connection is first established over the secure upper-layer channel; the upper channel is then responsible for transmitting control instructions to ensure the synchronization of the data sent by the lower channel. The lower channel executes the operations according to the instructions to build the hidden channel. A sketch of the header-field embedding idea is given below.
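To make the header-carrier idea concrete, the following sketch crafts IP packets whose Identification field carries two bytes of hidden payload per packet, using the Python library Scapy rather than the WinPcap C API described above. The destination address, the port and the choice of the IP ID field are assumptions for illustration; a real deployment would also need the synchronization logic of the upper channel.

from scapy.all import IP, TCP, send  # Scapy builds headers directly, like WinPcap

def covert_send(dst, payload: bytes, dport=80):
    """Hide two payload bytes per packet in the 16-bit IP Identification field."""
    if len(payload) % 2:
        payload += b"\x00"                   # pad to a whole number of ID fields
    for i in range(0, len(payload), 2):
        hidden = (payload[i] << 8) | payload[i + 1]
        pkt = IP(dst=dst, id=hidden) / TCP(dport=dport, flags="S")
        send(pkt, verbose=False)             # looks like an ordinary SYN probe

def covert_decode(ids):
    """Receiver side: rebuild the byte stream from the observed IP IDs."""
    out = bytearray()
    for hidden in ids:
        out += bytes([(hidden >> 8) & 0xFF, hidden & 0xFF])
    return bytes(out)

# covert_send("192.0.2.10", b"secret")       # sending requires root privileges
assert covert_decode([0x7365, 0x6372, 0x6574]) == b"secret"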
4 Information Security Analysis Under Hidden Communication

4.1 Information Security Development Process

Information security is a very broad category, covering cryptography, computer science, information authentication and more. Encrypting information can have two outcomes: either the codebreaker deciphers the encryption algorithm and obtains the confidential information, or the codebreaker cannot decipher it and destroys the garbled code, so that even the legitimate recipient cannot obtain the correct information. Early image IH technology used spatial-domain algorithms that hide information directly in the pixel values of the original image; the IH algorithm here adopts frequency-domain decomposition based on the integer wavelet transform. In IH technology, the frequency band in which hidden information is embedded is an important factor affecting the capacity and invisibility of the embedded information, and increasing the embedding capacity reduces the visual quality. The details are shown in Fig. 3. When a country's military secrets are leaked, national security is significantly affected; when a company's new product designs suffer information leakage, the company's profits are directly affected; and the security of e-commerce depends more and more on information security technology.
Fig. 3. Analysis of the situation of students' ability training (proportions of experimental knowledge and multimedia knowledge across experimental content categories EM, EP, ER and ES)
4.2 Security Mechanism of Upper Channel

As a covert communication system, security is the first consideration. Since we use the upper channel to establish the connection between the communicating parties and to send the instructions that control the lower WinPcap channel, the security of the upper channel is critical: we must authenticate the identities of both parties and protect the instructions. Based on this consideration, we design a public key system based on RSA and use the Diffie-Hellman key exchange algorithm to exchange keys, so as to avoid man-in-the-middle attacks. The initiator of the communication is called the client, and the receiver the server. The working process of RSA is as follows: when the client connects to the server, the server sends its public key to the client. The client encrypts the key for this communication session with the server's public key and transmits it to the server. If the server can recover the key with its own private key, the client considers the server authentic; otherwise, the other party holds only the server's public key and not the private key, and the client ends the call. Once the server is trusted, this key is used as the session key for both sides of the communication. This authenticates the identities on each call and also guarantees one-time encryption with a random key, building a secure platform for both parties. The specific process is shown in Fig. 4.
Fig. 4. RSA workflow: the client encrypts the session key with the server's public key; the server decrypts it with its private key; if the decryption is correct, the call proceeds.
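The following is a minimal, illustrative Java sketch of the session-key step in Fig. 4: the client wraps a freshly generated AES session key with the server's RSA public key, and only the holder of the matching private key can recover it. Class and variable names are ours; the Diffie-Hellman exchange and the instruction encryption described above are omitted.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class SessionKeyHandshake {
    public static void main(String[] args) throws Exception {
        // Server side: RSA key pair; the public key is sent to the client.
        KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
        rsaGen.initialize(2048);
        KeyPair serverKeys = rsaGen.generateKeyPair();

        // Client side: generate a fresh AES session key and encrypt it
        // with the server's public key.
        KeyGenerator aesGen = KeyGenerator.getInstance("AES");
        aesGen.init(128);
        SecretKey sessionKey = aesGen.generateKey();
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, serverKeys.getPublic());
        byte[] wrapped = rsa.doFinal(sessionKey.getEncoded());

        // Server side: only the holder of the private key can recover the session key.
        rsa.init(Cipher.DECRYPT_MODE, serverKeys.getPrivate());
        SecretKey recovered = new SecretKeySpec(rsa.doFinal(wrapped), "AES");
        System.out.println("Server authenticated: "
                + Arrays.equals(recovered.getEncoded(), sessionKey.getEncoded()));
    }
}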
4.3 Current Situation of Lossless IH

Firstly, the differences between gray histograms are effectively overcome by exploiting the good distribution characteristics of the generalized statistical histogram. On this basis, combined with histogram neighborhood selection, a stable and scalable embedding region is constructed, which allows flexible control of capacity and enhances stability and adaptability. At the same time, the use of encryption and lossless compression effectively solves the problem of edge-information storage and improves the security of lossless IH. The idea of difference expansion has been applied to the content protection of two-dimensional vector maps; since then, many researchers have proposed new methods based on difference expansion, and histogram-based methods have also been proposed. According to whether the influence of hidden attacks during transmission is considered, these methods can be divided into two types: fragile and robust. Although IH technology plays a great role in ensuring the safe transmission and storage of confidential information, the process of IH inevitably causes irreparable damage to the carrier image, and in some special applications the carrier image cannot tolerate even these small damages. In terms of lossless image restoration, there is little research on visible watermarking, and even some of the visible watermarking methods that have been proposed are lossy. Lossy visible watermarking degrades image quality, which limits its field of application. Some researchers use the wavelet transform to realize lossless visible digital watermarking, but the operations are complex and the amount of hidden data is limited. The new algorithm for lossless IH of BMP images rests on the observation that, after an integer Haar wavelet transform, the high-frequency coefficients of most images approximately obey a generalized Laplacian distribution, with most coefficient values close to 0; the method of vacancy adjustment is therefore used to realize lossless embedding of the hidden information. This chapter first briefly introduces the principle of the integer wavelet transform; the specific results are shown in Fig. 5. Then the principle of the proposed lossless IH method is analyzed, and the specific steps of information embedding, information extraction and restoration of the original image are given. Next, the results of embedding information into several common test images are given, and the method is compared with the two methods with the largest embedding capacity. Finally, the effectiveness of the proposed lossless IH method and its embedding capacity are discussed.
Fig. 5. Analysis of common experimental teaching methods (TM): proportions (0-90%) for the IM, DM and TM methods, comparing the teaching method and the standardization method.
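As a concrete illustration of the difference-expansion idea referred to above, the following Java sketch embeds one bit into a pixel pair via the integer average/difference (Haar-style) transform and then recovers both the bit and the original pair exactly. It is a minimal sketch of Tian's classic scheme, not the algorithm of this chapter, and overflow/underflow handling is omitted.

public class DiffExpansion {
    /** Embed one bit b into the pair (x, y); returns the modified pair. */
    static int[] embed(int x, int y, int b) {
        int l = (x + y) >> 1;   // integer average (low-pass)
        int h = x - y;          // difference (high-pass)
        int h2 = 2 * h + b;     // expand the difference and append the bit
        int x2 = l + ((h2 + 1) >> 1);
        int y2 = l - (h2 >> 1);
        return new int[]{x2, y2};
    }

    /** Extract the bit and restore the original pair exactly. */
    static int[] extract(int x2, int y2) {
        int l = (x2 + y2) >> 1;
        int h2 = x2 - y2;
        int b = h2 & 1;         // embedded bit is the LSB of the expanded difference
        int h = h2 >> 1;
        int x = l + ((h + 1) >> 1);
        int y = l - (h >> 1);
        return new int[]{x, y, b};
    }

    public static void main(String[] args) {
        int[] marked = embed(206, 201, 1);
        int[] restored = extract(marked[0], marked[1]);
        // Prints restored=(206,201) bit=1: the carrier is recovered losslessly.
        System.out.printf("restored=(%d,%d) bit=%d%n", restored[0], restored[1], restored[2]);
    }
}

Scanning all pixel pairs of an image with this primitive yields exactly the capacity/visibility trade-off discussed above: each expanded difference doubles, so larger embedded payloads produce larger distortion until extraction restores the original.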
5 Conclusions

Although the BD-based IH communication method presented in this paper still has significant flaws, the experimental results demonstrate that the method has good stability, flexibility and energy-management performance. Writing lossless information requires not only extensive knowledge but also a solid foundation and literacy skills. Much of the BD-based IH communication method still needs to be studied in depth, and several steps of the BD communication analysis were not covered here due to limits of space and personal effort.
References

1. Landi, D., Fitzpatrick, K., Mcglashan, H.: Models based practices in physical education: a sociocritical reflection. J. Teach. Phys. Educ. 35(4), 400–411 (2016)
2. Mckenzie, T.L., Nader, P.R., Strikmiller, P.K., et al.: School physical education: effect of the child and adolescent trial for cardiovascular health. Prev. Med. 25(4), 423 (2016)
3. Ahsan, M., Tea, S.H., Albarbar, A.: Development of novel big data analytics framework for smart clothing. IEEE Access (99), 1 (2020)
4. Tripathi, A.K., Sharma, K., Bala, M., et al.: A parallel military dog based algorithm for clustering big data in cognitive industrial Internet of Things. IEEE Trans. Ind. Inform. (99), 1 (2020)
5. Coutinho, D.A.M., Reis, S.G.N., Goncalves, B.S.V., et al.: Manipulating the number of players and targets in team sports small-sided games during physical education classes. Revista De Psicologia Del Deporte 25(1), 169–177 (2016)
6. Helles, R., Rmen, J., Lomborg, S., et al.: Big data and explanation: reflections on the uses of big data in media and communication research. Eur. J. Commun. 35(3), 290–300 (2020)
7. Ramneek, C.S., Pack, S., et al.: FENCE: Fast, ExteNsible, and ConsolidatEd framework for intelligent big data processing. IEEE Access (99), 1 (2020)
8. Cairney, J., Hay, J., Mandigo, J., et al.: Developmental coordination disorder and reported enjoyment of physical education in children. Eur. Phys. Educ. Rev. 13(1), 81–98 (2016)
9. Lodewyk, K.R., Muir, A.: High school females' emotions, self-efficacy, and attributions during soccer and fitness testing in physical education. Phys. Educ. 74(2), 269–295 (2017)
10. Sirapaisan, S., Zhang, N., He, Q.: Communication pattern based data authentication (CPDA) designed for big data processing in a multiple public cloud environment. IEEE Access 99, 1 (2020)
11. Okuda, Y., Sato, M., Taniguchi, H.: Implementation and evaluation of communication-hiding method by system call proxy. Int. J. Netw. Comput. 9(2), 217–238 (2019)
12. Shiina, R., Tamaki, S., Hara, K., et al.: Implementation and evaluation of novel architecture using optical wireless for WLAN control plane. IEEE Access (99), 1 (2021)
Anomaly Recognition Method of Network Media Large Data Stream Based on Feature Learning

Fei Chen(B)

School of Big Data, Chongqing Vocational College of Transportation, Chongqing 402247, China
[email protected]
Abstract. With the rise and rapid development of mobile communication, intelligent terminals and feature-learning technologies, we are entering the era of the mobile Internet. Since we have no control over the progress of the various specific applications deployed on the web, feature learning needs to provide a high degree of flexibility for anomaly detection. This paper first proposes a method for outlier identification in big data stream network media based on feature theory, and carries out applied research on optimization problems based on feature theory. The integration and distribution of high-performance evolutionary algorithms also need to be improved, and great success has been achieved in the field of recognition. This kind of feature takes regions with significant gray change (such as inflection points and intersections) as points of interest and designs corresponding feature description vectors to quantify the detected regions. These local features are robust to background and pose changes and are commonly used in target detection, target classification and other fields. Therefore, the study of outlier coevolution feature learning is of great significance. The experimental results show that the outlier detection method based on feature learning in big data network streaming media improves the optimization rate of the outlier problem by 14%. The classification of outlier optimization problems based on feature learning provides a good method for the application of big data analysis and genetic algorithms, thereby improving the results of learning research.

Keywords: Feature Learning · Network Media · Big Data Stream · Outlier Recognition
1 Introduction

The extraction of outliers can form a relatively complete coverage of an action video: motion corners are extracted by spatio-temporal significance detection, and a function over the local neighborhood is defined to judge whether a point meets the spatio-temporal significance characteristics. This is a feature-extraction process that transforms the GMM mean supervector into a fixed-size, low-dimensional i-vector [1]. Taking factor analysis as a precedent, other methods from the pattern-recognition field have also been used for speaker recognition [2, 3]. Feature learning is oriented to the input data, while ranking learning is oriented to the retrieval results. In a comprehensive ranking
model, there are usually both feature learning and ranking learning [4]. As a data feature extraction method, feature learning feeds the extracted features into the ranking model for training. At present, various abnormal situations emerge one after another on the Internet. "Zombies", for example, form an attack platform that can launch attacks in various forms, destroy the entire network and disclose key personal information. Like a worm virus, once there is a vulnerability it causes huge damage from the very beginning; a large number of attacks slow down the operation of the entire system and can even paralyze the network. As another example, when the network is congested, problems such as data loss, increased delay and reduced throughput often occur, and in severe cases congestion collapse follows. However, many current methods for identifying abnormal flows on a single link are limited to the identification of a single IP flow: one existing technique identifies abnormal flows based on association rules, while another uses unsupervised source analysis to identify and classify abnormal flows.

With the advancement of science and technology and the rapid development of the Internet, the design of quantitative optimization problems in complex fields has been continuously strengthened [5, 6]. Bhattacharyya classified static classification algorithms into matrix recognition algorithms, clustering algorithms, task algorithms and directed ID algorithms according to the technology used for feature recognition; in some special cases, feature algorithms with high time complexity can be obtained [7]. Scott improved the method of calculating weight values and proposed big data analysis. When processing natural-language texts, each word can be used as a feature, giving a very large feature set; the purpose of a feature selection algorithm is to reduce the dimension of the feature set and remove noisy data from the corpus, so as to obtain a better classification effect [8]. For the big data network information model, a fixed division mode is adopted, and an over-complete feature pooling method is used to improve the temporal and spatial relationship between action features and the robustness of the model [9, 10]. But their tests went wrong, resulting in inaccurate results.

The novelty of this paper is to propose an anomaly detection method based on feature learning over network data streams. This paper studies the optimization application based on feature learning and analyzes effective countermeasures for feature learning. However, the actual time-distribution curve of abnormal points is relatively unstable, fluctuates from time to time, and has its own stage changes. Therefore, for the feature extraction of abnormal points, this paper first considers the temporal distribution to find the law and extract the features. When collecting the time-frequency features of abnormal points, the existing outlier examples and other registered words are used to collect the time features in the background corpus, to confirm that the time features collected in the corpus can serve as another feasible criterion for outlier recognition.
2 Feature Structure Under Feature Learning

2.1 Feature Retrieval Algorithm

We establish a model that can extract suitable ranking-structure features to further improve retrieval performance, while improving retrieval speed without reducing retrieval accuracy. The traditional retrieval model needs to take basic features as a prior; because the specific parameter settings of these features cannot change with the corresponding applications, these basic features show poor extensibility. The knowledge required by modern organizations involves different professional fields, and decision results affect different stakeholder groups, so more people are often required to participate in decision-making discussions in order to reach decisions through information sharing and balancing of interests. At present, the organizational environment has become more unstable, communication between the organization and the outside world has become closer, and personnel are widely distributed, so the face-to-face meeting mode is no longer realistic.

Feature learning does not need complex, manually defined features; feature representations can be obtained simply through data sampling and specific machine-learning algorithms. Feature learning includes two kinds of methods, one of which is unsupervised learning, which obtains a transformation space of features through methods such as sparse coding, dimension reduction and clustering. The higher the forwarding volume or browsing volume of a document, the higher the attention paid to its content, and the influence of existing outliers can be relatively amplified; these quantities are therefore included in influence feature extraction, since the forwarding and browsing volumes of a document reflect its influence quantitatively, while outliers may appear in its content. Properly applying this function allows an effective algorithm to be designed in the outlier optimization plan, in the following forms.

Text feature extraction is a key link in text information retrieval under massive data. Text data has high feature dimensionality, large volume, redundancy and high-dimensional sparsity. After preprocessing such as filtering and constructing the vector matrix with the TF-IDF algorithm, the dimensionality of the text vector data must be reduced to retain the main information of the text classification features, remove redundant information, and improve retrieval efficiency.

\[
\begin{cases}
I_{A1} = U_A\, j\omega C_{01}\\
I_{B1} = 0\\
I_{C1} = U_C\, j\omega C_{01}
\end{cases}
\tag{1}
\]

The calculation of the algorithm is as follows:

\[
dN_1 = -\mu^* F
\tag{2}
\]

The following formula should be used for testing:

\[
X_t = \sum_{j=0}^{q} \theta_j \varepsilon_{t-j}
\tag{3}
\]
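As an illustration of the TF-IDF preprocessing step described above, the following self-contained Java sketch builds a term-by-document TF-IDF matrix from tokenized documents; all class and method names are ours, and any dimensionality-reduction step would follow separately.

import java.util.*;

public class TfIdf {
    /** Returns, for each term, its TF-IDF weight in each of the n documents. */
    public static Map<String, double[]> vectorize(List<List<String>> docs) {
        int n = docs.size();
        // Document frequency of each term.
        Map<String, Integer> df = new HashMap<>();
        for (List<String> doc : docs)
            for (String term : new HashSet<>(doc))
                df.merge(term, 1, Integer::sum);
        // Allocate one row per term.
        Map<String, double[]> matrix = new HashMap<>();
        for (String term : df.keySet())
            matrix.put(term, new double[n]);
        // TF-IDF = (count / docLength) * log(N / df).
        for (int d = 0; d < n; d++) {
            List<String> doc = docs.get(d);
            Map<String, Integer> tf = new HashMap<>();
            for (String term : doc) tf.merge(term, 1, Integer::sum);
            for (Map.Entry<String, Integer> e : tf.entrySet()) {
                double tfv = e.getValue() / (double) doc.size();
                double idf = Math.log((double) n / df.get(e.getKey()));
                matrix.get(e.getKey())[d] = tfv * idf;
            }
        }
        return matrix;
    }
}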
2.2 Abnormal Points

In the actual data-analysis process, it is often found that some data do not conform to the general law of the data set and are partially inconsistent with the other data; that is, they are "offset" from most of the other data, and some algorithms treat them as noise. In some practical applications, however, they may provide useful information and enable us to discover real and unexpected knowledge. Such data are the outliers we discuss next.

One can make full use of category information to classify samples. Because partial least squares uses both the data information and the category information, the total-variability space constructed by partial least squares contains the latent information that the category labels confer on the GMM supervector, which the traditional factor-analysis approach cannot obtain. Moreover, partial least squares is fast and resistant to overfitting. Therefore, this chapter introduces partial least squares into the estimation of the total-variability space and the extraction of the i-vector, using the data information and category information together to estimate the total-variability space and extracting the i-vector on that basis. For big data problems, if the algorithmic complexity of a model with good retrieval performance is too high to return results in a short time (for example, a user queries a search engine but gets no result within ten minutes, which is obviously unacceptable), the user's retrieval experience is very poor.
3 Big Data Environment Feature Learning Algorithm

3.1 Outlier Identification Under Decision-Making Conditions

Outlier identification methods can be divided into three types: supervised, semi-supervised and unsupervised. Supervised methods model the normality and abnormality of data points by learning from given labeled data (normal or outlier) and identifying the potential connections between them; for example, normal data is modeled from the learned labels, and data that does not conform to the model is identified as an outlier. In practical applications, only a small portion of the data is labeled and most of it is unlabeled, so it cannot be modeled directly by supervised methods. Semi-supervised methods model the labeled data together with its adjacent unlabeled data and mark data that does not conform to the model as outliers; for unlabeled data that is difficult to process, unsupervised methods can only learn the potential links within the data autonomously. The classification method is a typical supervised method: it learns a model of the data labeled as a certain class and then uses the model to classify new data. Support vector machines, for example, identify outliers via a decision boundary: given a new data point, if the point falls outside the decision boundary it is marked as an outlier. Semi-supervised and unsupervised methods include clustering algorithms, neural-network algorithms, and so on.

Because feature learning is the basis of the ranking-learning algorithm, in order to obtain a flexible and good ranking model we must have features suitable for the data and the ranking model, so extracting these features is the primary problem of our work. In the feature
extraction algorithm, we should consider not only the ranking effect of the algorithm but also its computational complexity. The time-distribution curve in the corpus used in this paper has several rising periods, and the degree of use in the last stage is still very high, so the word meets the discrimination standard on the time characteristics of outliers. The time-distribution curve of outliers shows that the initial degree of use is very high and the last several stages are stable, so the word still reaches the standard for the time characteristics of outliers; the time-distribution curves of registered words show that all the listed registered words reach the time-distribution characteristic of ordinary words, and their curves remain stable. Through further understanding and analysis of the decision-making task, the various decision factors such as the decision structure are discussed, the decision problem is understood, and the sub-problems of the decision structure are analyzed one by one; finally, each sub-problem is solved, a proposal for each decision participant is given, and the decision solution is discussed through group discussion.

3.2 Identification Algorithm in the System

The two data spaces are the sample space where the GMM mean supervector lies and the category space where the category IDs lie. By analyzing the characteristics of each space and establishing the relationship between the two spaces, the total-variability space contains both the data information and the relevant category information. Under this condition, the product of two low-rank matrices is equivalent to a linear sum of a series of elements, which gives the result of nonnegative matrix factorization more physical significance. For example, in face recognition, one low-rank matrix often represents parts of the face (eyebrows, nose, etc.), while the other low-rank matrix represents the weights of a person's face on these parts. Feature learning introduces the feature-learning framework into action video: this kind of method learns meaningful bases directly from action data as the action feature representation, trains different action features from actual data sources, and describes real data better. The specific results are shown in Table 1.

Table 1. Comparison of several wireless data transmission methods

skill                          | Number of people | skill                          | Number of people
Operational standardization    | 30               | Shorten experiment time        | 13
Add fun                        | 12               | Fully understand the principle | 9
Expand the amount of knowledge | 29               | Improve learning efficiency    | 8
other                          | 15               |                                |
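To make the low-rank factorization mentioned above concrete, here is a minimal Java sketch of nonnegative matrix factorization with the standard Lee-Seung multiplicative updates: V ≈ WH with all entries kept nonnegative. It is an illustrative sketch under our own naming, not the identification algorithm of this paper.

import java.util.Random;

public class SimpleNmf {

    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, k = a[0].length, m = b[0].length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int p = 0; p < k; p++)
                for (int j = 0; j < m; j++)
                    c[i][j] += a[i][p] * b[p][j];
        return c;
    }

    static double[][] transpose(double[][] a) {
        double[][] t = new double[a[0].length][a.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                t[j][i] = a[i][j];
        return t;
    }

    /** Factorizes nonnegative V (n x m) into W (n x r) and H (r x m); returns {W, H}. */
    static double[][][] factorize(double[][] v, int r, int iters) {
        int n = v.length, m = v[0].length;
        double eps = 1e-9; // avoids division by zero
        Random rnd = new Random(42);
        double[][] w = new double[n][r], h = new double[r][m];
        for (double[] row : w) for (int j = 0; j < r; j++) row[j] = rnd.nextDouble();
        for (double[] row : h) for (int j = 0; j < m; j++) row[j] = rnd.nextDouble();
        for (int it = 0; it < iters; it++) {
            // H <- H .* (W^T V) ./ (W^T W H)
            double[][] wt = transpose(w);
            double[][] num = multiply(wt, v);
            double[][] den = multiply(multiply(wt, w), h);
            for (int i = 0; i < r; i++)
                for (int j = 0; j < m; j++)
                    h[i][j] *= num[i][j] / (den[i][j] + eps);
            // W <- W .* (V H^T) ./ (W H H^T)
            double[][] ht = transpose(h);
            double[][] num2 = multiply(v, ht);
            double[][] den2 = multiply(w, multiply(h, ht));
            for (int i = 0; i < n; i++)
                for (int j = 0; j < r; j++)
                    w[i][j] *= num2[i][j] / (den2[i][j] + eps);
        }
        return new double[][][]{w, h};
    }
}

Because the updates are multiplicative and the inputs are nonnegative, W and H stay nonnegative throughout, which is exactly what gives the factors their parts-based interpretation in the face-recognition example above.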
4 Outlier Analysis Under Feature Learning

4.1 Outlier Data Model Analysis

Identifying outliers in high-dimensional data. High-dimensional data refers to data with more than three dimensions. Such data has many attributes and the distance between points is difficult to define, yet the attributes that can truly discriminate the data samples account for only part of them. For this type of data, dimensionality reduction is generally performed first, and then a corresponding outlier identification method is applied. The disadvantage is that part of the information is lost after dimensionality reduction. Existing methods are inefficient at identifying such data, and the problems still to be overcome include the interpretation of high-dimensional outliers, the sparseness of high-dimensional data, and how to represent differences between high-dimensional data points.

Spatial data outlier identification. With the advent of GPS and various spatial data sensors, outlier identification has become a difficult problem in spatial data processing. The difficulty lies in the fact that spatial data has both non-spatial and spatial attributes, as well as autocorrelation and heterogeneity; spatial data is affected by neighboring data and is locally unstable. Commonly used methods include the variogram, the Z-score, and so on. For high-dimensional spatial data, some scholars have proposed corresponding methods: the spatial local anomaly metric (SLOM) has been proposed in the literature, and outliers in a local space can be fully identified using SLOM; a parameter-free spatial outlier detection algorithm has been proposed that can calculate the number of spatial neighbors and automatically find the detection threshold for outliers; and a detection algorithm based on geostatistics has been proposed that uses the theory of spatial autocorrelation, constructs the spatial neighborhood with a Delaunay triangulation, and replaces outliers with the mean value of the neighborhood nodes. As data-acquisition equipment is continuously updated, the complexity of spatial data keeps increasing, and improving the effectiveness of the algorithms has become the key to identifying spatial outliers.

Identifying outliers in time-series data. Time-series data is a series of time-related data, such as monthly water consumption, rainfall over a certain period, or network traffic during live broadcasts. Because time-series data is affected by periodicity, outlier identification becomes more difficult. A common method is to divide the time series into subsequences of equal length and then apply distance-based identification; another is to perform feature extraction on the sequence data and identify outliers by computing distances between the feature vectors. The total-variability space is sensitive to outliers, so when there are abnormal samples in the training corpus it will contain wrong category information and the performance of the speaker recognition system will decline. Cross-validation is therefore used to construct a locally penalized category identification for each labeled speaker, treating the contribution of each speaker's training samples to the total-variability space differently; finally, all of the training corpus is used.
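As a simple illustration of the Z-score identification of time-series outliers discussed above, the following Java sketch flags points whose deviation from a trailing window's mean exceeds a threshold; the window size and threshold are illustrative parameters of ours, not values from this paper.

public class WindowZScore {
    /** Flags points whose z-score within the preceding 'window' points exceeds 'threshold'. */
    public static boolean[] detect(double[] series, int window, double threshold) {
        boolean[] outlier = new boolean[series.length];
        for (int t = window; t < series.length; t++) {
            // Mean and standard deviation of the preceding 'window' points.
            double sum = 0, sumSq = 0;
            for (int i = t - window; i < t; i++) {
                sum += series[i];
                sumSq += series[i] * series[i];
            }
            double mean = sum / window;
            double var = sumSq / window - mean * mean;
            double std = Math.sqrt(Math.max(var, 1e-12)); // guard against zero variance
            outlier[t] = Math.abs((series[t] - mean) / std) > threshold;
        }
        return outlier;
    }
}

For example, detect(traffic, 60, 3.0) would flag traffic samples more than three standard deviations away from the previous hour's mean; periodic series would first be deseasonalized, as the text notes.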
The direction selectivity of the block filter is designed to extract the local motion direction, and motion information at different scales is extracted through scale changes. Combined with the fast integral-image algorithm, the computational efficiency of feature extraction and
representation can be effectively improved. To further increase the expressive ability of the action features, when counting the action features of spatio-temporal regions we test the positional relationship between space and time and adopt over-complete pooling. The specific results are shown in Fig. 1.
Fig. 1. Mental model work (error vs. number of packets, 0-12).
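The fast integral-image (summed-area table) technique mentioned above can be sketched in a few lines of Java: after one pass over the image, the sum over any axis-aligned box is obtained in constant time, which is what makes block-filter responses cheap to evaluate at multiple scales. The class name and layout below are our own illustrative choices.

public class IntegralImage {
    private final long[][] sat; // sat[i][j] = sum of img[0..i-1][0..j-1]

    public IntegralImage(int[][] img) {
        int h = img.length, w = img[0].length;
        sat = new long[h + 1][w + 1];
        // One pass: each prefix sum reuses the three neighbors already computed.
        for (int i = 0; i < h; i++)
            for (int j = 0; j < w; j++)
                sat[i + 1][j + 1] = img[i][j] + sat[i][j + 1] + sat[i + 1][j] - sat[i][j];
    }

    /** Sum over rows r0..r1 and columns c0..c1, inclusive, in O(1). */
    public long boxSum(int r0, int c0, int r1, int c1) {
        return sat[r1 + 1][c1 + 1] - sat[r0][c1 + 1] - sat[r1 + 1][c0] + sat[r0][c0];
    }
}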
4.2 Abnormal Point Optimization

At the feature level, a new high-level feature is usually obtained by a neural network; at the data level, we can likewise take the data as input and build a new neural network. In this way we can model the data level and the feature level at the same time, which both exploits the internal attributes of the feature dimensions and models the relationships between data items, thereby expanding the network structure. The implicit representation concepts of the same data are extracted according to the manifestations of its different modalities. How to use these multimedia features effectively has attracted more and more attention: although each modality is by itself sufficient to learn a given training task, different modalities can provide complementary information to each other and improve the performance of the trained model.
People's behavior habits, personal preferences and cognition of the real world are all reflected in these data. How to mine the information hidden in these data and realize fast and effective information retrieval has become a key problem. In outlier-detection-based approaches, the action feature points are sparse and the action information is not rich; with dense extraction, on the other hand, the amount of motion-feature computation is large, the efficiency is low, and the features contain a large amount of interference such as background. In this chapter, a motion recognition algorithm based on over-complete pooled HA3D features is proposed. The specific results are shown in Fig. 2.
Fig. 2. Moving object index: completion time per unit (5-20) for the mobile database vs. traditional technology.
5 Conclusions

Although the feature-learning-based network anomaly detection method for big data streams presented in this paper still has significant flaws, it addresses a pressing need: with the rapid development and wide application of digital media such as the Internet, the amount of data increases rapidly, and to solve big data problems it becomes very important to design feature-learning algorithms that improve learning speed in large-scale multimedia data processing. Accordingly, this paper proposes a network-wide abnormal flow identification algorithm based on the distribution of traffic characteristics. The algorithm first performs coarse-grained anomaly identification on the data flows in the entire network, then analyzes and classifies the abnormal traffic structure across the network, and classifies each abnormality according to the extracted abnormal nodes, so as to determine
the characteristic value of the abnormal flow; according to this determined characteristic value, the abnormal flow is obtained from the abnormal node. In this paper, the data flows in the entire network are identified hierarchically, first coarse-grained and then fine-grained, which improves the accuracy of identification without adding a large number of measurements and calculations; abnormal nodes across the network are accurately located, communication characteristics such as the IP addresses and port numbers in abnormal nodes are obtained, and these abnormal flows can be classified to determine the cause of the network anomalies. With the popularity of the mobile Internet, various social media platforms have embedded mobile Internet technology, exploiting the mobility, fragmentation and always-online characteristics of mobile instant messaging; this makes sharing, co-creation and modification of user-generated content between individuals and communities more frequent and easy. Learning features with higher resolution requires not only a wealth of knowledge but also a strong scientific foundation and literacy. There is still a lot of content to be studied further in feature-learning-based network anomaly detection for big data streams, and many steps of anomaly identification and analysis remain to be investigated.
References

1. Li, S., Huang, J.X., Tohti, T.: Fake plate vehicle auditing based on composite constraints in Internet of Things environment. IOP Conf. Ser. Mater. Sci. Eng. 322(5), 204–205 (2018)
2. Kumar, N., Rodrigues, J.J.P.C., Chilamkurti, N.: Bayesian coalition game as-a-service for content distribution in internet of vehicles. IEEE Internet Things J. 1(6), 544–555 (2017)
3. Wang, X., Han, S., Yang, L., et al.: Parallel internet of vehicles: ACP-based system architecture and behavioral modeling. IEEE Internet Things J. 7(5), 3735–3746 (2020)
4. Lee, D.G.: A multi-level behavior network-based dangerous situation recognition method in cloud computing environments. J. Supercomput. 73(7), 3291–3306 (2017). https://doi.org/10.1007/s11227-017-1982-1
5. Jing, M., Jie, Y., Shou-yi, L., Lu, W.: Application of fuzzy analytic hierarchy process in the risk assessment of dangerous small-sized reservoirs. Int. J. Mach. Learn. Cybern. 9(1), 113–123 (2015). https://doi.org/10.1007/s13042-015-0363-4
6. Ruddy, J., Meere, R., O'Donnell, T.: Low frequency AC transmission for offshore wind power: a review. Renew. Sustain. Energy Rev. 4–15 (2016)
7. Bhattacharyya, B., Raj, S.: Swarm intelligence based algorithms for reactive power planning with flexible AC transmission system devices. Int. J. Electr. Power Energy Syst. 78, 158–164 (2016)
8. Scott, J.K., Laird, C.D., Liu, J., et al.: Global solution strategies for the network-constrained unit commitment problem with AC transmission constraints. IEEE Trans. Power Syst. 1 (2018)
9. Liang, H., Liu, Y., Wan, L., et al.: Penetrating power characteristics of half-wavelength AC transmission in point-to-grid system. J. Modern Pow. Syst. Clean Energy, 277–299 (2019)
10. Chen, H., Feng, S., Pei, X., et al.: Dangerous driving behavior recognition and prevention using an autoregressive time-series model. Tsinghua Sci. Technol. 22(006), 682–690 (2017)
Design and Implementation of Automatic Quick Report Function for WeChat on Earthquake Information

Yina Yin1(B), Liang Huang1, and A C Ramachandra2

1 Information Center, Liaoning Earthquake Agency, Shenyang, Liaoning, China
[email protected]
2 Nitte Meenakshi Institute of Technology, Bengaluru, India
Abstract. This paper designs the Liaoning earthquake quick report push system based on the WeChat public platform. After the unit's WeChat public platform account is registered and verified, the system builds an earthquake-information WeChat public platform using the developer center, crawls the earthquake quick report data through the WeChat public platform interface, calls the Access_token for verification using the GET method, and sends the JSON data packet to the public platform interface address via POST so that it is pushed to the WeChat public platform. By automatically pushing earthquake quick report information to the public at the first moment, the public can learn the three elements of an earthquake (time, location and magnitude) as early as possible, providing fast and accurate information and improving the efficiency of rapid risk avoidance and earthquake emergency rescue.

Keywords: WeChat public platform · earthquake speed report · intelligent terminal application
1 Introduction

China is one of the countries with the most severe earthquake disasters in the world; most provinces in China have experienced destructive earthquakes of magnitude 5 or higher in their history [1]. Each earthquake has caused human casualties, hurt people's spirits, and caused incalculable property damage [2]. The main objective of earthquake work in China is prevention first, combining defense and relief [3]. With the continuous development and growth of earthquake information services in recent years, the dissemination of rapid earthquake information has gradually shifted from traditional television, radio and newspapers to the Internet as the dissemination platform, for example through major websites, apps, microblogs, and WeChat. The Internet has expanded the range of dissemination and improved the efficiency of earthquake information reporting. The earthquake industry has its own special
characteristics, so it also puts forward higher requirements for timely and accurate earthquake information flash reports. Nowadays, the major Internet platforms are increasingly recognized by the community, and microblogs and WeChat have been widely used by the public [4]. The propaganda department should make full use of the characteristics of the Internet for earthquake information quick reporting; combined with the existing SMS service, it can provide richer and more convenient earthquake information services for the public [5]. In this way, all sectors of society as well as the public can get more timely and accurate earthquake quick report information through various channels, together with science propaganda and earthquake dynamics, which are important for the public to evacuate as early as possible and for emergency rescue forces to respond and deploy early [6]. As of the second quarter of 2022, the monthly active users of WeChat had grown to 1.299 billion [7]. WeChat has facilitated people's communication and shortened the communication distance; as a public medium, it has become one of the important media of daily life, and various industries use it as a very important means of publicity. Important information is pushed to users through public accounts and subscription accounts, which better meets users' needs and is more convenient and effective. By developing the WeChat public platform, WeChat can be turned into an effective channel for popularizing earthquake knowledge, pushing earthquake information in a timely manner, finding the nearest shelter accurately, and giving feedback to the local earthquake monitoring department immediately when the public finds an abnormal situation. After an earthquake, the local disaster situation can be fed back to the relevant emergency relief departments in time, providing the corresponding information for subsequent earthquake emergency relief [8].
2 Demand Analysis and Current Situation

Because of the complex and diverse geological formations in Liaoning, only two earthquakes have been successfully predicted so far, namely the magnitude 7.3 Haicheng earthquake in 1975 and the magnitude 5.4 Haicheng-Xiuyan earthquake in 1999 [9]. Nowadays, developed Internet communication technology can quickly and effectively push earthquake information to the public, which can play an important role in earthquake prevention and mitigation and post-earthquake relief in Liaoning. Based on the original SMS mechanism, the Liaoning earthquake quick report push system quickly grabs earthquake information and releases and pushes it to the WeChat public account, realizing rapid, automatic release of earthquake information in the province by the WeChat public account of the Liaoning Earthquake Bureau. The Liaoning earthquake quick report push system adopts a lightweight, loosely coupled architecture, using the custom interface of the WeChat public platform as the I/O channel; the earthquake information server based on the WeChat platform processes earthquake information and user information and sends graphic earthquake information to active, subscribed users [10]. For example, when an earthquake occurs in a certain place, the WeChat release module of earthquake information is triggered to capture the earthquake quick report
information and push the earthquake information to the followers. The Liaoning earthquake quick report push system publishes the collected earthquake information on the Internet through the WeChat platform in an automated way; the whole process is handled automatically, which ensures fast, stable and accurate data delivery [11].
3 Liaoning Earthquake Quick Report Push System Design

3.1 Overall Architecture

First, register the unit's WeChat public platform account and pass authentication so that the developer center can be used. The earthquake quick-report push requires several advanced interface functions, such as getting the user list, getting the user's geographic location, and template messages. As shown in Fig. 1, after registration the application ID and application key of the WeChat public account can be checked in the management page of the developer center and the interface access credentials obtained. When the user enters the authorization page and agrees to the authorization, the code is obtained and exchanged for the web-authorization Access_token, and the user's basic information is obtained through web authorization from the Access_token and openId. First the URL address is built, then the callback method address defined by the system is invoked; WeChat automatically carries the current user's code, the authorization address is called again with this code, and the corresponding openId is found. Access_token verification is required whenever the interface is called. In order not to expose the AppId and AppSecret, an Access_token is requested from the WeChat server using the GET method, and the result is returned as JSON. During development, the encryption method is used in plaintext mode. After an earthquake occurs, when polling finds new earthquake quick-report content, it is captured by the earthquake quick-report information processing module.
Fig. 1. System Architecture Design
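The Access_token request described above can be sketched as a plain GET call. The endpoint below is WeChat's documented credential interface, while the class and method names are our own; production code would cache the token until it expires (the response carries expires_in).

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class AccessTokenFetcher {
    /** Fetches the interface credential from the WeChat server; returns the raw JSON. */
    public static String fetchTokenJson(String appId, String appSecret) throws Exception {
        String url = "https://api.weixin.qq.com/cgi-bin/token?grant_type=client_credential"
                + "&appid=" + appId + "&secret=" + appSecret;
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        StringBuilder json = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) json.append(line);
        }
        // Response is JSON such as {"access_token":"...","expires_in":7200}
        return json.toString();
    }
}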
3.2 Algorithm Design

The key to this system is to use PHP to build the interface that connects to WeChat and Java to write the trigger code that captures the data, completing the data capture and the push to the WeChat public platform.
3.2.1 Development Environment

The server is a Lenovo Qitian M4500 with 4 GB of memory, a 1 TB hard disk and a dual-port NIC. The system runs a RedHat Linux 64-bit operating system, configured with internal network and public network IP addresses. The system firewall is configured to open ports 80, 3306 and 22. The software environment is Apache/2.2.15 (Unix), the language environment is PHP 5.3.3, the database is MySQL 5.1.73, and the programming languages include PHP, C, SQL, JavaScript, HTML, XML and CSS.

3.2.2 Logic Structure

The logical structure of the system is shown in Fig. 2. First, users follow the public account through the WeChat client; the WeChat server receives the request and provides the user ID to the public account. Internally, the public account consists of three parts, the monitoring system, the message service and the management system, which realize earthquake message management, WeChat menu management and interface log management. The external connection to the database of the earthquake information center is shown in Fig. 3. The monitoring system polls the earthquake-information database for new provincial earthquake data; if there is no new data it continues to poll, and if there is new data it grabs it, encapsulates it in a template, and pushes it to the public through the WeChat public platform.
Fig. 2. Logic structure diagram
Fig. 3. Data flow chart
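A minimal Java sketch of this polling loop is given below. The table and column names (quake_info, occur_time, longitude, latitude, magnitude) and the 10-second interval are illustrative assumptions of ours, and the actual push step would call the template-message code shown in Sect. 3.3.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QuakePoller {
    /** Watches the earthquake-information table for new rows and hands them to the push module. */
    public static void poll(Connection db, long lastSeenId) throws Exception {
        String sql = "SELECT id, occur_time, longitude, latitude, magnitude "
                   + "FROM quake_info WHERE id > ? ORDER BY id";
        while (true) {
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                ps.setLong(1, lastSeenId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        lastSeenId = rs.getLong("id");
                        // Encapsulate the three elements (time, location, magnitude) in the
                        // template and push them (see Algorithm 2).
                        System.out.printf("push: %s (%.2f, %.2f) M%.1f%n",
                                rs.getString("occur_time"), rs.getDouble("longitude"),
                                rs.getDouble("latitude"), rs.getDouble("magnitude"));
                    }
                }
            }
            Thread.sleep(10_000); // no new data: continue polling
        }
    }
}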
3.3 Algorithm Implementation

The developer needs to send a POST request to the WeChat server carrying a JSON packet assembled according to the template; the JSON packet is sent to the public platform template interface address via POST.

3.3.1 WeChat Interface Algorithm

When the system needs to connect to the WeChat interface to send a message, it first queries the sending user's OpenID via queryOpenID and then obtains the interface credential via getAccessToken. It then creates the message data via creatTemplate, builds the POST address from the interface credential, and finally sends the data, request address and request method through the httpRequest method. The corresponding pseudocode is as follows:
ALGORITHM 1: WeChat Interface Algorithm
// Look up the recipient's OpenID through the business service
String openId = bservice.queryOpenID(userarrayinfo.get(0));
// Obtain the interface access credential (Access_token)
AccessToken accessToken = WechatUtil.getAccessToken(Constants.APPID, Constants.APP_SECRET);
// Assemble the JSON payload from the message template
String json = WechatUtil.creatTemplate("hello world!", openId,
        Constants.jiadan_template, use_nm, billtype, string + " hours",
        Constants.URL_REST, "hello greener!");
// Build the template-message POST address with the access token and send the request
String action = "https://api.weixin.qq.com/cgi-bin/message/template/send?access_token="
        + accessToken.getToken();
WechatUtil.httpRequest(action, "POST", json);
3.3.2 Automatic Push Algorithm

After an earthquake occurs, the system learns of it by capturing the earthquake quick-report data, and then immediately and automatically pushes the quick-report information to the public on the WeChat public platform. The pseudocode is as follows:
ALGORITHM 2: Automatic Push Algorithm
public void pushTemplate(String openId) {
    // 1. Configure the public-account credentials
    WxMpInMemoryConfigStorage wxStorage = new WxMpInMemoryConfigStorage();
    wxStorage.setAppId(wxMpProperties.getAppId());
    wxStorage.setSecret(wxMpProperties.getSecret());
    WxMpService wxMpService = new WxMpServiceImpl();
    wxMpService.setWxMpConfigStorage(wxStorage);
    // 2. Build and push the template message
    WxMpTemplateMessage templateMessage = WxMpTemplateMessage.builder()
            .toUser(openId)              // push to this user
            .templateId(pushTemplateId)  // template id
            .build();
    String text = "earthquake";
    templateMessage.addData(new WxMpTemplateData("text", text, "#FF0000"));
    try {
        wxMpService.getTemplateMsgService().sendTemplateMsg(templateMessage);
    } catch (Exception e) {
        log.error("send failed: " + e.getMessage());
        e.printStackTrace();
    }
}
4 Platform Operation Effect

By polling the earthquake information, the three elements of an earthquake (time, latitude and longitude, and magnitude) are quickly pushed to the WeChat public account (see Fig. 4). In addition, the public account can also answer queries about historical earthquake information: users can access the historical-earthquake query interface through the page option and query the worldwide earthquake distribution over the past year. The historical earthquake information is obtained through the web interface by parsing the official web page. The WeChat public account also publishes earthquake science articles and videos in the science propaganda section; how to carry out earthquake science propaganda for the public has become an important link for the earthquake industry in facing the public in the future. If there are any anomalies or post-earthquake disaster feedback, users can also get in touch through the interactive platform; the page where the client receives the message is shown in Fig. 5. In the future, new functions will be developed, such as pushing earthquake early-warning information, and the response speed will be improved.
Fig. 4. WeChat Public Client
Fig. 5. Earthquake information push interface
5 Conclusion

Establishing a WeChat public platform, at lower cost and without extra UI design, greatly improves development efficiency. Through the WeChat interface, earthquake information can be captured and pushed in real time. The WeChat public platform does not require redeveloping a client, and other functions can be developed on it, such as historical earthquake queries and popular-science propaganda, making it an important platform for the public to obtain information and communicate about earthquakes. It is also convenient for communicating with the public about earthquake precursor anomalies, providing timely feedback on the real-time situation of a disaster, and quickly carrying out emergency rescue. The Liaoning earthquake quick report push system fully supports intelligent terminals running iOS, Android and other systems, which expands the user group. One megabyte of data can carry a very large amount of information, which also significantly reduces the public's cost of access. In view of all these advantages, the system has now become one of the important platforms for the release of earthquake information in Liaoning Province. When an earthquake occurs, it quickly
pushes earthquake information so that users can quickly learn the three elements of the earthquake, providing accurate and fast earthquake information to the public. There are still many shortcomings in our work, and we will continue to improve it. In future work, we will improve the display function, optimize the response speed, and provide earthquake information faster. In addition, we will add an earthquake early-warning section to push warning information to the public so that it can be received as soon as possible. After receiving earthquake warning information, one generally has a few seconds to tens of seconds to escape from danger: if the warning time is enough to evacuate, evacuate to an open area as soon as possible; if there is not enough time, take shelter nearby and then evacuate quickly.
References

1. Park, K., Joly, A., Valduriez, P.: Characteristics of spatial and temporal distribution of earthquakes in Liaoning and prediction of future earthquake trends. J. Disaster Prevent. Mitig. 30(02), 62–65 (2014)
2. Sungkwang, E., Xiongnan, J., Lee, K.: The design and implementation of Liaoning earthquake quick report software based on android platform. J. Disaster Prevent. Mitig. 38(03), 76–80 (2022)
3. Gaede, V., Gunther, O.: Design and implementation of an automatic release system for seismic information on WeChat. North China Earthquake Science
4. Zeiler, M., He, T.: Design and application of an automatic microblogging system for seismic information. https://doi.org/10.13512/j.hndz.2012.04.004
5. Hinneburg, A., Sun, Z., Shao, D.: Design of an automatic push system for seismic quick reports by WeChat. Seismic Geomag. Obs. Res. 37(02), 165–170 (2016)
6. Mazzeo, G.M., Masciari, E., Zaniolo, C.: Design and implementation of a WeChat-based seismic industry APP service system. South China Earthq. 35(02), 37–42 (2015)
7. Karypis, G., Han, E.H., Kumar, V.: Design and implementation of mobile library APP service system based on WeChat. Mod. Intell. 33(06), 41–44 (2013)
8. Ankerst, M., Breunig, M.M., Kriegel, H.P., et al.: Analysis of comprehensive services of seismic information dissemination platform. Seismic Defense Technol. 10(02), 361–366 (2015)
9. Sheikholeslami, G., Chatterjee, S., Zhang, A.: Design and application of automated output system for seismic station network in Shanghai. Softw. Guide 19(08), 165–168 (2020)
10. Ma, B., Li, Y., Wang, X., Zhang, J., He, L.: Construction of a thematic information output system for earthquakes. Comput. Program. Skills Maintenance 11, 85–87 (2020). https://doi.org/10.16184/j.cnki.comprg.2020.11.031
11. Fisher, D., Zhang, C., Xu, X.: Design and implementation of a multi-channel release system for comprehensive earthquake information. Earthq. Defense Technol. 15(03), 563–570 (2020)
Application of BIM + GIS Technology in Smart City 3D Design System

Congyue Qi1,2(B), Hongwei Zhou1,2, Lijun Yuan1,2, Ping Li1,2, and Yongfeng Qi1,2

1 The Third Construction Engineering Co., Ltd. of China Construction Third Engineering Bureau, Guangzhou 510000, Guangdong, China
[email protected]
2 The Third Construction Development (Guangdong) Co., Ltd. of China Construction Third Engineering Bureau, Guangzhou 510000, Guangdong, China
Abstract. The smart city is a new concept and model of urban development in recent years and another major breakthrough of scientific and technological progress, and buildings are an important part of urban construction. This paper systematically introduces the principles of the software platforms and three-dimensional data formats used by traditional three-dimensional modeling methods and by a modern BIM system. Because BIM technology contains extremely rich modern building information, it can be applied to express smart-city building information at the ultra-micro level; combined with GIS three-dimensional real-time spatial position analysis software and three-dimensional building visualization technology, it provides powerful support for buildings at the macro level and for their sustainable development.

Keywords: BIM + GIS Technology · Smart City · 3D Design · System Application
1 Introduction

In recent years, building information modeling (BIM) has become an important core technology recognized all over the world for the development of the next generation of the intelligent building industry [1, 2]. BIM building-information design and processing technology has clear advantages in practical application, but the technology itself may also have technical limitations. For example, three-dimensional BIM building analysis and modeling technology often has shortcomings in the design of three-dimensional building model information and in the modeling or computational analysis that enriches three-dimensional spatial content. The main spatial-analysis function of GIS application software, by contrast, is the comprehensive analysis of the various complex geospatial data formed by urban building structures and their surrounding natural environment, which can effectively remedy many of the disadvantages and defects that may appear when traditional BIM and similar software technologies are used in China [3, 4].
In the 1970s, the green smart city began to be born and developed in the developed countries of the world, and the social concept of green smart-city construction gradually emerged. By now, the technology of smart-city applications has become more and more mature. At this stage, however, smart-city engineering started and was applied relatively late in China. In architectural design and construction, it has long been assumed that only traditional architectural technology could be used; this new technology has not been applied directly to urban buildings, and no third-party research or institution in the relevant industries has conducted research, demonstration and system development for such applications.

The smart building city system platform can be roughly defined as follows: taking the intelligent management and information-system application of the city's various large buildings as the core business platform, it also comprises the city's public engineering information facility system, the information intelligent engineering and application technology demonstration platform, the building-equipment emergency management information system, and the public safety technology application information platform. It integrates various information structures, systems, services, management and capabilities, together with their application, optimization, integration, combination and service, providing the whole society with a new information-system platform for modern building environmental control and management that gives equal weight to safety, efficiency, convenience, energy conservation, environmental protection, health and harmony [7, 8].

In the field of architecture, BIM mainly covers the span from architectural design and manufacturing to construction. Research on building BIM modeling and its integration with spatial GIS technology mainly focuses on how to model and analyze whole urban buildings and obtain three-dimensional spatial information that is as detailed and complete as possible [9, 10]. The BIM application platform and the traditional GIS platform in fact interactively import and export 3D model information. To finally realize a seamless connection that embeds BIM design applications into networked and simulated-reality GIS design, many well-known scholars at home and abroad have begun in-depth research at multiple levels and from different spatial angles. The IFC (Industry Foundation Classes) standard is a data-production format standard based on BIM: as a building data model standard, almost all building-software suppliers first produce the building data products serving their own customers in accordance with the same IFC format, so that all building products produced on the same building data model circulate as data that meets the IFC production standard format [11, 12].

Based on the BIM model and GIS application technology, the author discusses and develops the relevant functional system in the integrated development of a macro- and micro-environmental management platform suited to BIM technology and GIS applications [13].
Through the three-dimensional building information acquisition and management application system, functions such as building scene simulation, visual-field analysis, urban fire-safety and escape emergency treatment plans, and pipeline-burst event analysis are realized. In addition, BIM
application includes the deepening design of civil construction, of steel structure, of mechanical and electrical installation, and of curtain walls, as well as integrated and comprehensive BIM application management. All of this fully reflects the good practical application, prospects and potential technical advantages of BIM modeling combined with GIS in the field of 3D integrated design service systems for future smart-city construction, and can further bring practical convenience to building energy conservation, building economic analysis and building information management services [14].
2 Proposed Method

2.1 BIM + GIS Technology

On the development of BIM in the construction industry: in recent years, with the continuous improvement of national requirements for the quality of engineering-information construction design, the state has issued a special guidance document on promoting the application of building information model technology in construction engineering practice. Considering the current situation of most domestic cities at this stage, it specifically points out that "the informatization of comprehensive smart city construction in China started late, but developed rapidly". Driven by this industry background and demand, the rapidly developing computer-aided building information model platform (BIM) has been widely recognized by the international industry, because of its comprehensive advantages of easy design, R & D and upgrading, stronger compatibility and high engineering-information integration, as a new force in reforming the engineering information system design, construction process and subsequent construction management mode of construction projects. BIM, in full building information modeling, has as its core feature the establishment and simulation of three-dimensional models of real building information with the help of three-dimensional computer technology, using digital information technology to provide a complete building-engineering information database consistent with the built three-dimensional model and to carry out the relevant analysis [15, 16]. The main reason BIM is widely adopted is that it can provide accurate and complete information about a building.

About geographic information systems (GIS): the core strength of GIS is displaying large-scale three-dimensional (or higher-dimensional) models larger than a single building. It can extract and deeply analyze many kinds of spatial information, and can integrate economic elements, ground features, terrain (morphology), traffic elements and building elements as needed. It therefore plays an important role in many fields, such as urban planning at all levels (architectural design and construction management), disaster prediction (geological and meteorological disasters), highway and railway construction (traffic planning), national power-grid construction (energy design) and epidemic control (epidemic prevention and control). GIS technology is a highly comprehensive discipline: it integrates remote-sensing aerial survey, computer technology, surveying and other disciplines, provides database-management functions and computer-assisted secondary-development capability, and produces map models from vector and other data that can show refined geographic 3D scenes and support mapping, thereby improving model fidelity and allowing researchers to extract the data for analysis and processing. In short, it integrates geography-related images, graphics, visual imagery and database analysis. From a problem-solving point of view, from data collection through analysis to the final decision, we can locate qualified objects, determine where they are, and describe the distribution pattern of objects in a given place.
At present, it integrates remote sensing, aerial survey, computer technology, surveying and other disciplines, and has database management functions and computer-assisted secondary development capability. It produces map models from vector and other data, which can display refined 3D geographic scenes and realize mapping processing, improving the fidelity of the model, and it allows researchers to extract the data for analysis and processing. In short, it integrates geography-related images, graphics, visualization and database analysis. From a problem-solving point of view, from data collection through analysis to the final decision, it can answer questions such as where the qualified objects are located and how objects are distributed in a given place.
2.2 Smart City
A smart city includes the intelligent security and prevention facility system, the communication technology and equipment control system, and the information multimedia system in the building [17]. Specifically, the smart city system includes a security system covering the content related to the security functions of the smart building system. The defense system is designed to maintain the safety of public assets, and many modern building technologies in fields related to building safety and defense are widely used in the construction and application of the system [18].
3 Experiments
3.1 Project Summary
This paper takes the East Fourth Finger Corridor project of the third-phase expansion of Guangzhou Baiyun Airport as an example to establish a three-dimensional design system. The total construction area of the project is about 83,000 m², with three above-ground floors and one underground floor, as shown in Fig. 1. The project is a key project of Guangdong Airport Group to implement the Development Planning Outline of the Guangdong-Hong Kong-Macao Greater Bay Area and the "1+1+9" work deployment proposed by the Guangdong Provincial Party Committee and Government, and to help build the "One Core, One Belt, One Area" regional development pattern in Guangdong.
Fig. 1. Guangzhou Baiyun Airport
3.2 System Requirements Analysis
Requirements analysis is the first stage of the software life cycle, and a very important one. If development and maintenance methods are not analyzed early, user requirements are ignored, and communication with users is lacking, the quality of the software product will not meet the standard. Only by communicating clearly with users, analyzing the requirements in detail, and raising and solving problems in the requirements stage can a complete requirements description be obtained. Requirements analysis studies the needs and ideas of users and realizes them as functions; the process is to list and analyze the problems that need to be solved and that may occur. It is necessary to accurately understand the substantive problems of the demand, analyze the data input and result output of the system, and formulate a detailed development plan.
3.3 System Architecture
The functional requirements of the system construction are analyzed, and different three-dimensional GIS platforms are compared and selected. Secondary development is carried out on the SuperMap GIS platform combined with database technology, as shown in Fig. 2.
Fig. 2. System architecture (user layer, service layer and component layer, covering SuperMap and GIS-related components, BIM basic components, scene operation and roaming, three-dimensional analysis, solar trajectory, visual field analysis, point model addition, weather simulation, tube burst analysis, fire simulation and escape planning, and equipment information management)
3.4 System Data Preprocessing
Data preprocessing accurately and effectively identifies abnormal data and analyzes it, so as to provide reliable data support for the smart city 3D design system. This paper adopts the Pauta criterion (3σ criterion) to identify and eliminate abnormal data. Under a normal distribution, P(|x − μ| > 3σ) ≈ 0.003; such a value is a small-probability event that does not belong to the category of random error, so it should be eliminated. The steps of the Pauta criterion are as follows. Let the measurement sequence be x_i (i = 1, 2, …, n), with average value μ as shown in the formula:

μ = (1/n) Σ_{i=1}^{n} x_i (1)

where x_i is the i-th measured value, μ is the arithmetic mean, and n is the number of data points. The residual Δx_i is shown in the formula:

Δx_i = x_i − μ (2)

The standard deviation is obtained according to the Bessel formula:

σ = √( Σ_{i=1}^{n} Δx_i² / (n − 1) ) (3)
Therefore, when a measured value x_d has a residual greater than three times the standard deviation (|Δx_d| > 3σ), that value should be removed as an outlier.
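As a concrete illustration of Eqs. (1)–(3), the following minimal Python sketch applies the 3σ criterion to a short reading sequence; the readings themselves are invented for demonstration only.

```python
import numpy as np

def pauta_filter(x):
    """Remove outliers by the Pauta (3-sigma) criterion of Eqs. (1)-(3)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()                                        # Eq. (1): arithmetic mean
    resid = x - mu                                       # Eq. (2): residuals
    sigma = np.sqrt((resid ** 2).sum() / (len(x) - 1))   # Eq. (3): Bessel formula
    return x[np.abs(resid) <= 3 * sigma]                 # drop |residual| > 3*sigma

# Illustrative readings (assumed): the 99.0 spike among ~20.0 values is removed.
readings = [20.1, 20.3, 19.8, 20.0, 20.2, 19.9, 20.1, 20.0, 20.2, 19.7, 20.4, 99.0]
print(pauta_filter(readings))
```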
4 Discussion
4.1 Data Sources
Relevant experiments were conducted to verify the application of BIM + GIS technology in the smart city 3D design system. The data are shown in Table 1 below.

Table 1. Experimental data

| Section number | U0 | S6 | Comprehensive resistance u | Optimized comprehensive resistance U1 |
| 1 | 51.91307 | 5.702423 | 53.789362 | 53.989363 |
| 2 | 27.32304 | 0.840979 | 73.517915 | 74.0179184 |
| 3 | 32.23807 | 16.70152 | 84.463469 | 84.863466 |
| 4 | 24.59001 | 3.858178 | 79.268175 | 80.068173 |
| 5 | 27.32309 | 1.411467 | 74.088409 | 74.088408 |
| 6 | 27.32308 | 2.109823 | 74.786760 | 74.886763 |
| 7 | 32.23805 | 0.196721 | 67.95864 | 68.058667 |
| 8 | 27.32304 | 5.731928 | 78.408868 | 79.408864 |
| 9 | 51.91309 | 0.000004 | 48.08692 | 48.486945 |
| 10 | 37.15305 | 3.695879 | 66.542867 | 66.742814 |
| 11 | 32.23802 | 1.241798 | 69.003737 | 69.203732 |
| 12 | 27.32301 | 3.051615 | 75.728555 | 75.928551 |
As can be seen from the table above, experiments were carried out on the application of BIM + GIS technology in the smart city 3D design system: the parametric model was modified and the optimized comprehensive road resistance was calculated (see Fig. 3). According to the solution results, the original comprehensive road resistance was replaced through spatial analysis, and it is concluded that the smart city 3D design system based on BIM + GIS technology is of great help to the smart city. This paper studies existing urban three-dimensional design systems, analyzes their performance, summarizes their problems and shortcomings, and reviews the development status of urban three-dimensional design technology at home and abroad. On this basis, a GIS-based smart city three-dimensional design management system is proposed. However, the work has some shortcomings: (1) the positioning technology used is not very accurate and requires better equipment; (2) the system design of the experimental part is not complete, and a more comprehensive plan is required. In the future, deeper technologies such as virtual reality and embedded technology should be added to the three-dimensional design of smart cities.
Fig. 3. Smart city 3D design system experiment (U0, S6 and comprehensive resistance u plotted against section numbers 1–12)
5 Conclusions
The main research content of this paper is the integrated application of BIM and GIS in the smart city. Focusing on BIM technology and 3D GIS technology, a 3D smart city design system based on BIM + GIS technology is developed, which realizes the visual display and information management of BIM models on a 3D GIS platform.
References 1. Paganelli, F., Turchi, S., Giuli, D.: A web of things framework for RESTful applications and its experimentation in a smart city. IEEE Syst. J. 10(4), 1412–1423 (2017) 2. Sharma, P.K., Moon, S.Y., Park, J.H.: Block-VN: a distributed blockchain based vehicular network architecture in smart city. J. Inf. Process. Syst. 13(1), 184–195 (2017) 3. Daniel, C., Mario, C., Giovanni, P., et al.: A Fuzzy-based approach for sensing, coding and transmission configuration of visual sensors in smart city applications. Sensors 17(1), 1–19 (2017) 4. Gasco-Hernandez, M.: Building a smart city: lessons from Barcelona. Commun. ACM 61(4), 50–57 (2018) 5. Bates, O., Friday, A.J.: Beyond data in the smart city: learning from a case study of repurposing existing campus IoT. IEEE Pervasive Comput. 16(2), 54–60 (2017) 6. Habibzadeh, H., Qin, Z., Soyata, T., et al.: Large scale distributed dedicated- and non-dedicated smart city sensing systems. IEEE Sens. J. PP(23), 1 (2017) 7. Billy, L., Wijerathne, N., Ng, B., et al.: Sensor fusion for public space utilization monitoring in a smart city. IEEE Internet Things J. PP(99), 1 (2017) 8. Anthopoulos, L.: Smart utopia VS smart reality: Learning by experience from 10 smart city cases. Cities 63, 128–148 (2017)
9. Massana, J., Pous, C., Burgas, L., et al.: Identifying services for short-term load forecasting using data driven models in a smart city platform. Sustain. Cities Soc. 28, 108–117 (2017) 10. Cia, M.D., Mason, F., Peron, D., et al.: Using smart city data in 5G self-organizing networks. IEEE Internet Things J. PP(99), 1 (2017) 11. Picon, A.: Urban infrastructure, imagination and politics: from the networked metropolis to the smart city. Int. J. Urban Reg. Res. 42(2), 263–275 (2018) 12. Ta-Shma, P., Akbar, A., Gerson-Golan, G., et al.: An ingestion and analytics architecture for IoT applied to smart city use cases. IEEE Internet Things J. PP(99), 1 (2017) 13. Liu, X.: Three-dimensional visualized urban landscape planning and design based on virtual reality technology. IEEE Access PP(99), 1 (2020) 14. Dakic, I., Leclercq, L., Menendez, M.: On the optimization of the bus network design: an analytical approach based on the three-dimensional macroscopic fundamental diagram. Transp. Res. Part B Methodol. 149(6), 393–417 (2021) 15. Zhou, W., Tian, Y.: Effects of urban three-dimensional morphology on thermal environment: a review. Acta Ecol. Sin. 40(2), 416–427 (2020) 16. Lu, Y., Gou, Z., Ye, Y., et al.: Three-dimensional visibility graph analysis and its application. Environ. Plan. B 46(5), 948–962 (2019) 17. Guo, F.F., Yang, N., Zhang, Y.Q., et al.: GIS-based analysis of geomorphological factors for landslide hazards. J. Geomech. 14(1), 87–96 (2022) 18. Wei, M., Liu, T., Sun, B.: Optimal routing design of feeder transit with stop selection using aggregated cell phone data and open source GIS tool. IEEE Trans. Intell. Transp. Syst. 22(4), 2452–2463 (2021)
Myopic Retailer’s Cooperative Advertising Strategies in Supply Chain Based on Differential Games Chonghao Zhang(B) Shanghai Maritime University, Shanghai, China [email protected]
Abstract. In this paper, the cooperative advertising problem between a manufacturer and a retailer in a supply chain is studied based on differential game theory, and a stochastic differential game model is established. The paper describes the dynamic advertising strategies of manufacturers and retailers and obtains the feedback Stackelberg equilibrium solution according to the dynamic programming principle. The analysis rests on the fact that the majority of retailers in the supply chain consider only their immediate profits, maximizing short-term gains. The results show that the type of advertising support from the manufacturer to the retailer depends on the marginal profit of both parties. In the balanced advertising strategy of the cooperative advertising plan, the manufacturer mainly invests in long-term advertising to build brand goodwill, while, with the manufacturer's support, the retailer mainly runs short-term advertisements to stimulate sales. Cooperative advertising is a coordination mechanism in the supply chain, which can improve the retailer's willingness to follow the manufacturer's strategy. Keywords: cooperative advertising strategies · supply chain · differential games · myopic retailer
1 Introduction
Small and medium-sized enterprises are the main force of the market. With the rise of raw material prices, the increase of labor costs and other uncontrollable market factors, small and medium-sized suppliers are often constrained by production funds [1, 2]. A shortage of funds affects the daily operation of enterprises and may even prevent them from meeting the order requirements of downstream enterprises, leading to inefficient operation of the entire supply chain. Therefore, considering supply chain coordination under production fund constraints makes the cooperation between suppliers and downstream enterprises more effective and improves the efficiency of the overall operation of the supply chain. Under a shortage of supplier funds, the prepayment model is better than the prepaid advertising cost model, and the latter is better than the non-financing model. This paper analyses the impact of the supplier's initial funds and the bargaining
power of member enterprises on contract coordination, describes the critical conditions of supply chain coordination under different financing models, and points out that the greater the supplier's initial funds or the smaller its bargaining power, the more likely supply chain coordination is to be realized. Secondly, a combination policy of prepayment discount and contract is designed; the income-sharing policy under prepayment discount and the repurchase policy under prepayment coordinate the supply chain differently. At present, the cooperative advertising mechanism is adopted by both parties of the supply chain to jointly increase the exposure of goods and improve sales. While paying attention to the production capital constraints of suppliers, the cooperative advertising mechanism can effectively help the node enterprises in the supply chain choose a reasonable and effective contract mechanism and payment means according to the amount of funds, and reduce the risk of low overall benefits and low efficiency of the supply chain caused by decision-making errors. In cooperative advertising between upstream and downstream enterprises in the supply chain, the manufacturer compensates the myopic retailer for local advertising. Cooperative advertising is adopted by most industries [3, 4]. It plays an extremely important role in the marketing strategy of many companies and is an important part of the financial budget of many manufacturers. Cooperative advertising is not only a cost-sharing mechanism; more importantly, through cooperative advertising, the interests of manufacturers and myopic retailers can be closely linked and the advantages of both sides integrated. Facing competition in the external market, a supply chain of manufacturers and myopic retailers becomes more competitive, which promotes its sustainable development. In modern enterprises, the manufacturer's products are not exchanged directly with consumers but are mainly entrusted to myopic retailers for sale. Whether the manufacturer can effectively improve the brand image and the market share of goods is crucial to the success or failure of its business. Advertising is one of the important means to increase the demand for goods and raise brand awareness in practice, because it lets customers fully understand the quality, performance and after-sales service of products, thereby increasing potential customers' desire to buy and, in turn, demand. In practice, however, there are many examples of damage to brand image and even enterprise closure caused by blind, high-intensity advertising. How to determine the appropriate advertising investment intensity, and how advertising affects demand, are therefore important issues that manufacturers and myopic retailers must face, and it is of great practical value to study their advertising strategies.
2 Theoretical Overview
2.1 Advertising Strategy
Advertising strategy refers to the various means and methods used to realize and implement an advertising plan; it is the concrete operation of advertising. Its essence is that managers decide when, where, at what level and in what form to advertise, in order to achieve the best
advertising effect. In this paper, advertising strategy refers to the level of advertising investment of the manufacturer and the myopic retailer, and the proportion of the retailer's advertising investment that the manufacturer shares. The level of advertising investment reflects the intensity of advertising publicity and indirectly affects the goodwill of an enterprise: the higher the advertising investment, the greater the publicity intensity and goodwill, but also the higher the cost. An enterprise therefore needs to formulate appropriate advertising strategies to maximize profits, balancing advertising income against advertising cost. Advertising is a bridge between customers, channel suppliers and products. A reasonable advertising strategy can increase customers' loyalty to the channel and effectively reduce the cost of each channel member. With the improvement of people's material life, product brands, beyond product prices, receive more and more attention from customers. Advertising strategy has thus become an important research hotspot in the field of supply chain management in recent years [5, 6].
2.2 Supply Chain Advertising
In the supply chain system, manufacturers and retailers need to cooperate and divide labor to improve their own interests, thereby improving the performance of the whole supply chain and achieving win-win results. Advertising especially reflects the necessity of cooperation among supply chain members. For consumers, whether it is the manufacturer's national advertisement or the retailer's local advertisement, advertising forms cognition and memory of the product brand, builds brand goodwill, increases product value and promotes purchase behavior. The manufacturer and the retailer therefore need to cooperate and invest in advertising reasonably to jointly develop customers. Moreover, advertising strategy influences consumers' purchasing behavior by forming brand goodwill: advertising stimulates consumers, who form cognition, memory and feelings for the brand, which become brand goodwill; brand goodwill increases product value and further promotes purchasing behavior. Cooperative advertising is widely used in the computer industry. For example, IBM shared half of the advertising costs of its retailers, while Apple bore 75% of media advertising costs. Since 1991, Intel has run the world's largest cooperative advertising program, "Intel Inside", through cooperation with computer marketers. The cooperation fund designated for Intel microprocessor promotion reached about 800 million US dollars in 1999 and grew to 1.5 billion dollars in just two years [5, 9].
2.3 Differential Game
Differential game theory is an important branch of dynamic optimal control theory. Isaacs' research on the completely antagonistic zero-sum game in 1965 laid its foundation [10, 11]. Differential games have very wide and important
applications in military affairs, public security, industrial control, environmental protection, economic management and other research fields. In short, a differential game is a game in which differential equations describe the phenomena or laws at play between the players. At any time, the decision made by each player is called the control variable, and changes in the control variables affect the state variables, which reflect the environment of the players. For example, in an advertising sales model, the control variable is advertising investment and the state variable is market share. The differential equation describing environmental change is called the state equation, such as the differential equation of advertising sales in the advertising goodwill model. In the advertising sales model, the maximized profit function of the decision maker serves as the performance index; similarly, in an inventory model, cost minimization is the performance index. At each moment, each player selects its own control variables, optimizing its own objective function through this choice. In addition, in some differential games the control variables of one player affect not only its own state variables but also those of other players; for example, in the advertising sales model, a player's advertising investment affects not only its own market share but also that of its competitors. Under different control strategies, players choose their control variables according to their own performance indices. A control strategy is a given rule for how a player decides its control variable at a given time.
3 Model Description
Consider a supply chain system composed of one manufacturer and one myopic retailer. Manufacturers and myopic retailers can run two types of advertising: long-term advertising and short-term advertising. Short-term advertising has only a short-lived impact, stimulating and influencing current sales through promotion and booth setting. Long-term advertising makes consumers develop a preference for the brand, improves the brand image and increases goodwill among consumers, affecting future sales of products. The manufacturer controls its short-term advertising rate P(t) and long-term advertising rate B(t), while the myopic retailer controls its short-term advertising rate p(t) and long-term advertising rate b(t). The manufacturer determines the proportions D_b(t) and D_p(t) of the myopic retailer's long-term and short-term advertising costs that it pays. Let D_i(t), i ∈ {p, b}, take values from 0 to 1: if D_i(t) = 0, the manufacturer does not support the corresponding advertising of the myopic retailer; if D_i(t) = 1, the manufacturer pays the full cost of that type of advertising. It is assumed that the marginal profits of the manufacturer and the myopic retailer are fixed and denoted by π_m and π_r respectively. Considering the convexity of advertising costs, the costs of the two types of advertising are:

C_P(P) = (u_m/2) P²(t) (1)

C_B(B) = (v_m/2) B²(t) (2)
c_p(p) = (u_r/2) p²(t) (3)
In the above formulas, (u_m, u_r) are the short-term advertising cost coefficients and (v_m, v_r) the long-term advertising cost coefficients, all positive; C(·) and c(·) denote the advertising costs of the manufacturer and the myopic retailer respectively. The net short-term advertising cost of the myopic retailer is:

c_p(p) = (u_r/2) [1 − D_p(t)] p²(t) (4)
As the manufacturer and the myopic retailer in the supply chain spend on advertising, current and future sales are affected; advertising can therefore be regarded as an investment that increases the value of advertising capital. This capital, called goodwill, is the cumulative variable G(t) generated by changing consumers' preference for the brand, attracting new customers and thus increasing market demand for products under the influence of advertising; it represents the goodwill of the brand. The Nerlove-Arrow model is extended according to the specific problems studied in this paper. The change of goodwill over time satisfies the following state equation:

dG(t)/dt = λ_m B(t) + λ_r b(t) − δG(t), G(0) = G_0 ≥ 0 (5)
In the formula, λ_m, λ_r and δ are non-negative constant coefficients; λ_m and λ_r are the long-term advertising influence parameters of the manufacturer and the myopic retailer respectively, and the delay effect of advertising is considered. It is assumed that goodwill is affected only by long-term advertising; short-term promotion has no impact on brand goodwill. δ is a constant attenuation rate: the smaller δ is, the longer the impact of long-term advertising lasts. The decline of goodwill is caused by consumers turning to other brands, or to products and brands newly introduced into the market, under the temptation of competitors' advertising. Short-term advertisements from the manufacturer and the myopic retailer stimulate demand in retail stores, while the impact of long-term advertising on demand is realized through the goodwill accumulation process expressed by the state equation. The instantaneous sales rate q(t) at any time is:

q(t) = [α_m P(t) + α_r p(t)] √G(t) (6)

where α_m and α_r are non-negative constant coefficients indicating the impact of the manufacturer's and the myopic retailer's short-term advertising on customer demand. It is assumed that the manufacturer and the myopic retailer have the same positive discount rate ρ. The goal of both parties is to seek the optimal advertising strategy that maximizes their profits over the infinite horizon, which approximates the long-term business relationship between the manufacturer and the myopic retailer in the supply chain.
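To make the dynamics of Eqs. (5) and (6) concrete, the following sketch simulates the goodwill state equation by simple Euler integration under constant advertising rates; all parameter values are illustrative assumptions, not values from the model.

```python
import numpy as np

# Euler simulation of the goodwill dynamics of Eq. (5) and sales rate of Eq. (6).
lam_m, lam_r, delta = 0.6, 0.4, 0.15     # long-term advertising influence; decay rate (assumed)
alpha_m, alpha_r = 0.5, 0.3              # short-term demand sensitivities (assumed)
B, b = 2.0, 1.0                          # constant long-term advertising rates (assumed)
P, p = 1.5, 1.0                          # constant short-term advertising rates (assumed)

dt, horizon = 0.01, 50.0
G = 1.0                                  # initial goodwill G(0)
for _ in range(int(horizon / dt)):
    G += (lam_m * B + lam_r * b - delta * G) * dt   # Eq. (5)

q = (alpha_m * P + alpha_r * p) * np.sqrt(G)        # Eq. (6): instantaneous sales rate
print(f"G(T) = {G:.3f} (steady state {(lam_m * B + lam_r * b) / delta:.3f}), q(T) = {q:.3f}")
```

Goodwill converges toward (λ_m B + λ_r b)/δ, which is why a smaller decay rate δ makes long-term advertising more durable.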
The objective function of the manufacturer is:

J_m = ∫_0^∞ exp(−ρt) { π_m q(t) − (u_m/2) P²(t) − (v_m/2) B²(t) − (u_r/2) D_p(t) p²(t) − (v_r/2) D_b(t) b²(t) } dt (7)

The objective function of the myopic retailer is:

J_r = ∫_0^∞ exp(−ρt) { π_r q(t) − (u_r/2) [1 − D_p(t)] p²(t) − (v_r/2) [1 − D_b(t)] b²(t) } dt (8)
The advertising strategies of manufacturers and myopic retailers are determined by feedback strategies, meaning that each decision is a function of the current state variables. The advertising decisions are expressed as P(G, t), p(G, t), B(G, t) and b(G, t). Because all the parameters in the above model are time-independent constants and, in any period of the infinite horizon, the participants face the same game, the strategies can be restricted to stationary ones, namely P(G), p(G), B(G) and b(G), whose equilibrium is the stationary feedback Stackelberg equilibrium. Under this condition, we study the advertising strategies of manufacturers and myopic retailers under cooperation.
4 Static Feedback Stackelberg Equilibrium
4.1 Equilibrium Equation
It is assumed that the manufacturer is the leader in the Stackelberg game, determining and announcing its advertising strategies B and P and the support rates D_b and D_p for the two types of advertising of the myopic retailer. Under this condition, the myopic retailer determines its optimal strategies. This is a typical Stackelberg differential game model whose equilibrium is the feedback Stackelberg equilibrium. From the HJB equations, the model parameters are calculated as shown in Table 1 below:

Table 1. Summary of Equilibrium Advertising Rates

| Scenario | M's Long Term Advertising B | M's Short Term Advertising P | R's Short Term Advertising p |
| I: No support | λ_M φ_M / (v_m(δ+ρ)) | α_M π_M √G / u_M | α_R π_R √G / u_R |
| II: Full support | λ_M ω_M / (v_m(δ+ρ)) | α_M π_M √G / u_M | α_R(2π_M + π_R)√G / (2u_R) |
| III: Support of short term advertising p only | λ_M ω_M / (v_m(δ+ρ)) | α_M π_M √G / u_M | α_R(2π_M + π_R)√G / (2u_R) |
Table 2. Summary of Scenarios and Advertising Support Rates

| Scenario | Long Term Advertising Support Rate D_b | Short Term Advertising Support Rate D_p |
| I: No support | 0 | 0 |
| II: Full support | (2ω_M − ω_R)/(2ω_M + ω_R) | (2π_M − π_R)/(2π_M + π_R) |
| III: Support of short term advertising p only | 0 | (2π_M − π_R)/(2π_M + π_R) |

Theorem (see Table 2): if the manufacturer supports both the long-term and the short-term advertising of the myopic retailer, it pays the shares D_b and D_p of the respective advertising costs. Under the condition that brand goodwill evolves according to Eq. (5), the
equilibrium advertising rates of the manufacturer and the myopic retailer that maximize their profits over the infinite horizon are, respectively:

If α_m π_m √G ≥ 0, P(G) = α_m π_m √G / u_m; otherwise P(G) = 0 (9)

If α_r (2π_m + π_r) √G > 0, p(G) = α_r (2π_m + π_r) √G / (2u_r); otherwise p(G) = 0 (10)

If λ_m ω_m > 0, B = λ_m ω_m / [v_m (δ + ρ)]; otherwise B = 0 (11)

If λ_r (2ω_m + ω_r) > 0, b = λ_r (2ω_m + ω_r) / [2v_r (δ + ρ)]; otherwise b = 0 (12)

where:

ω_m = [4α_m² π_m² u_r + α_r² (2π_m + π_r)² u_m] / (8 u_m u_r) (13)

ω_r = [4α_m² π_m π_r u_r + α_r² π_r (2π_m + π_r) u_m] / (4 u_m u_r) (14)

The equilibrium support rates are:

If 2π_m > π_r > 0, D_p = (2π_m − π_r)/(2π_m + π_r) ∈ (0, 1); if 0 < 2π_m ≤ π_r, D_p = 0; if π_r = 0, D_p = 1 (15)

If 2ω_m > ω_r > 0, D_b = (2ω_m − ω_r)/(2ω_m + ω_r) ∈ (0, 1); if 0 < 2ω_m ≤ ω_r, D_b = 0; if ω_r = 0, D_b = 1 (16)
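A minimal numerical sketch of the theorem follows: it evaluates the closed-form equilibrium quantities (9)–(16), using the coefficient expressions (13)–(14) as reconstructed above, for an assumed parameter set (all values are illustrative only).

```python
import math

# Illustrative parameters (assumed, not from the paper)
pi_m, pi_r = 4.0, 2.0                  # marginal profits
u_m, u_r = 1.0, 1.0                    # short-term advertising cost coefficients
v_m, v_r = 1.0, 1.0                    # long-term advertising cost coefficients
alpha_m, alpha_r = 0.5, 0.3            # short-term demand sensitivities
lam_m, lam_r = 0.6, 0.4                # long-term advertising influence
delta, rho = 0.15, 0.1                 # goodwill decay and discount rates

# Value-function coefficients, Eqs. (13)-(14)
w_m = (4 * alpha_m**2 * pi_m**2 * u_r
       + alpha_r**2 * (2 * pi_m + pi_r)**2 * u_m) / (8 * u_m * u_r)
w_r = (4 * alpha_m**2 * pi_m * pi_r * u_r
       + alpha_r**2 * pi_r * (2 * pi_m + pi_r) * u_m) / (4 * u_m * u_r)

G = 5.0                                # current goodwill level (assumed)
P = max(alpha_m * pi_m * math.sqrt(G) / u_m, 0.0)                     # Eq. (9)
p = max(alpha_r * (2 * pi_m + pi_r) * math.sqrt(G) / (2 * u_r), 0.0)  # Eq. (10)
B = max(lam_m * w_m / (v_m * (delta + rho)), 0.0)                     # Eq. (11)
b = max(lam_r * (2 * w_m + w_r) / (2 * v_r * (delta + rho)), 0.0)     # Eq. (12)

# Support rates, Eqs. (15)-(16): the max(..., 0) clamps cover the middle cases
D_p = 1.0 if pi_r == 0 else max((2 * pi_m - pi_r) / (2 * pi_m + pi_r), 0.0)
D_b = 1.0 if w_r == 0 else max((2 * w_m - w_r) / (2 * w_m + w_r), 0.0)
print(f"P={P:.3f} p={p:.3f} B={B:.3f} b={b:.3f} Dp={D_p:.3f} Db={D_b:.3f}")
```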
4.2 Equation Analysis
Under the condition that the manufacturer supports short-term advertising, the short-term advertising investment of the manufacturer and the myopic retailer depends on the cost parameter u, the demand parameter α and the marginal profit π. The significance of Eqs. (9) and (10) is that, if the marginal profits are approximately equal and α_m/u_m < 3α_r/(2u_r), we have P(G) < p(G). This shows that if the ratio of the manufacturer's short-term advertising impact parameter to its cost is lower than that of the myopic retailer, the manufacturer will do less short-term advertising than the myopic retailer. From Eq. (10) it can be seen that the myopic retailer's equilibrium short-term advertising strategy is not only related to its own parameters α_r, π_r and u_r, but is also affected by the manufacturer's marginal profit π_m: the condition for the myopic retailer to run short-term advertisements is α_r(2π_m + π_r)√G > 0. This shows that even if the myopic retailer's marginal profit π_r is zero, due to the manufacturer's support the myopic retailer will still run short-term advertisements as long as π_r > −2π_m. Meanwhile, according to Eq. (15), if the myopic retailer's marginal profit π_r is zero or close to zero, the manufacturer's equilibrium support strategy is D_p = 1; the manufacturer then pays all of the retailer's short-term advertising costs. This indicates that under the cooperative advertising plan, thanks to the manufacturer's support, myopic retailers have enough incentive to run short-term advertising to stimulate sales. The purpose of cooperative advertising is therefore to provide incentives for myopic retailers' advertising efforts, so that the myopic retailer cooperates with the manufacturer's advertising strategy and the profits of both parties are maximized.
The notation is summarized as follows.
Decision variables:
p_j: retail price in Model j, j = M, S, B.
Parameters:
θ: the valuation for the product, following a uniform distribution on [0, 1];
α: the satisfactory rate for the product;
c: unit production cost;
S: unit salvage value of the returned product obtained by the online myopic retailer;
T: unit handling cost of the returned product incurred by the online myopic retailer;
t: unit return-freight fee incurred by consumers;
t_i: unit return-freight compensation obtained by consumers;
K: the degree of consumer's risk aversion;
c_i^j: unit RI premium in Model j, j = M, S, B.
Other notations:
u_j: expected utility in Model j by the mean-variance method, j = M, S, B;
q_j: demand in Model j, j = M, S, B;
CS_j: consumer surplus in Model j, j = M, S, B.
5 Practical Suggestions
The conclusions of this paper provide a multi-dimensional basis for manufacturers, myopic retailers and even market speculators to formulate strategies. Manufacturers need to consider the match between advertising strategy and distribution structure. When they can freely choose the supply chain structure, they should choose a decentralized structure in the high-price market and place matched advertisements in both markets. Regardless of the number of advertising markets and the supply chain structure, the cooperative advertising model should be implemented, and the amount of advertising and the proportion of cooperative advertising should be adjusted according to market reality and consumer psychology. Although the myopic retailer's strategy space is very limited, as long as the myopic retailer follows the manufacturer's strategic choice, it can still achieve higher profits. If myopic retailers force manufacturers to give up cooperative advertising for their own interests, this causes losses to manufacturers, consumers and society as a whole. Finally, grey-market speculators follow the market passively and can achieve good returns without appearing at the front of the stage. In the discussion of the advertising cost-sharing proportion in the decentralized supply chain, this paper treats the proportion as a decision variable of the manufacturer, so that it can be added to the game sequence. In business practice, compared with strategies that must be adjusted in real time according to market changes, such as pricing and advertising, the cost-sharing ratio of cooperative advertising within the channel changes relatively infrequently, because frequent changes in cooperative matters complicate the operational decisions of both parties and raise their operating costs. Therefore, in reality, the decision on cooperative advertising is usually fixed in advance in the form of an agreement or contract and remains unchanged for a long period. Following the market reality, future work can externalize the cost-sharing ratio in cooperative advertising into a model parameter and then discuss its impact on advertising decisions and market participants' profits. In the retail industry, myopic retailers may have the same or even greater power or status than manufacturers. When both channel members are in reciprocal positions, they are likely to move towards cooperation. This part therefore focuses on the cooperative relationship between manufacturers and myopic retailers, and obtains the optimal advertising strategy choice and the optimal system profit under this game structure.
6 Conclusions
Based on a differential game model, this paper studies the optimal cooperative advertising strategy between manufacturers and myopic retailers and gives corresponding practical suggestions. The results show that the type of advertising support from manufacturers to myopic retailers depends on the marginal profit of both parties. At the same time, because the manufacturer's advertising support encourages the myopic retailer to do more short-term advertising to stimulate sales, while the manufacturer mainly does long-term advertising to build brand goodwill so as to
increase the profits of both parties, cooperative advertising is an incentive mechanism that can improve the myopic retailer's willingness to follow the manufacturer's strategy. The impact of advertising on sales, the evolution of brands and the business relationship between the two supply chain members are all dynamic phenomena. In this paper, a Stackelberg differential game is used to study cooperative advertising, which extends previous research on cooperative advertising from static to dynamic and breaks through the previous limitation of treating the cooperative advertising support plan as a unilateral decision of the manufacturer.
References 1. Alirezaee, A., Sadjadi, S.J.: A game theoretic approach to pricing and cooperative advertising in a multi-myopic retailer supply chain. J. Ind. Syst. Eng. 12(4), 154–171 (2020) 2. De Giovanni, P., Karray, S., Martín-Herrán, G.: Vendor management inventory with consignment contracts and the benefits of cooperative advertising. Eur. J. Oper. Res. 272(2), 465–480 (2019) 3. Luzon, Y., Pinchover, R., Khmelnitsky, E.: Dynamic budget allocation for social media advertising campaigns: optimization and learning. Eur. J. Oper. Res. 299(1), 223–234 (2022) 4. Kim, E.A., Shoenberger, H., Kwon, E.P., et al.: A narrative approach for overcoming the message credibility problem in green advertising. J. Bus. Res. 147(2), 449–461 (2022) 5. Abedian, M., Amindoust, A., Maddahi, R., et al.: A game theory approach to selecting marketing-mix strategies. J. Adv. Manag. Res. 19(1), 139–158 (2022) 6. Xiang, Z., Xu, M.: Dynamic cooperation strategies of the closed-loop supply chain involving the internet service platform. J. Clean. Prod. 20(1), 1180–1193 (2019) 7. Buratto, A., Cesaretto, R., De Giovanni, P.: Consignment contracts with cooperative programs and price discount mechanisms in a dynamic supply chain. Int. J. Prod. Econ. 218(3), 72–82 (2019) 8. Pan, J.L., Chiu, C.Y., Wu, K.S., et al.: Optimal pricing, advertising, production, inventory and investing policies in a multi-stage sustainable supply chain. Energies 14(22), 7544–7546 (2021) 9. Ghorai, P., Eskandarian, A., Kim, Y.K., et al.: State estimation and motion prediction of vehicles and vulnerable road users for cooperative autonomous driving: a survey. IEEE Trans. Intell. Transp. Syst. 23(10), 16983–17002 (2022) 10. Subtirelu, N.C., Lindemann, S., Acheson, K., et al.: Sharing communicative responsibility: training US students in cooperative strategies for communicating across linguistic difference. Multilingua 41(6), 689–716 (2022) 11. Gautam, P., Maheshwari, S., Hasan, A., et al.: Optimal inventory strategies for an imperfect production system with advertisement and price reliant demand under rework option for defectives. RAIRO Oper. Res. 56(1), 183–197 (2022)
Hardware System of Thermal Imaging Distribution Line Temperature Monitor Based on Digital Technology Tie Zhou(B) , Ji Liu, Weihao Gu, Zhimin Lu, and Linchuan Guo Binhai Power Supply Branch of State Grid Jiangsu Electric Power Co., Ltd., Nanjing, Jiangsu, China [email protected]
Abstract. Distribution network equipment has strict requirements for temperature control. To ensure normal operation, the surface temperature must be monitored in real time, yet the traditional manual temperature measurement method is inefficient, costly and ineffective. Owing to the particularity of the operating environment of distribution network equipment, real-time and accurate monitoring of equipment temperature is one of the technical difficulties in the power industry. This paper therefore designs and develops an online temperature monitoring system for distribution network equipment based on digital-technology thermal imaging, taking the distribution transformer as an example. The system can effectively reduce the work intensity of manual temperature measurement: the upper computer obtains temperature data from Alibaba Cloud via infrared transmission, an autoregressive integrated moving average (ARIMA) model is established to predict the trend of contact temperature, the mobile phone app synchronously displays the contact temperature monitoring data, and the temperature field distribution images and data of the distribution network equipment are obtained. Keywords: Digital technology · Thermal imaging · temperature monitoring
1 Introduction
In the safety architecture of a nuclear power plant, status monitoring is a basic function of the operation support system: it judges the operation status of the system through feature extraction and analysis of process and status parameters, so as to detect system anomalies in advance and prevent the development of accidents. The objects of system status monitoring are mainly physical parameters, either continuous quantities (such as system temperature and pressure) or discrete switch states (such as valve opening or closing) during operation, which constitute the on-site operation information collected by the system sensors [1]. The online monitoring instrument for power equipment using infrared thermal imaging technology is generally an infrared thermal imager, which can not only perform non-contact temperature measurement but also display the two-dimensional distribution and change of object surface temperature in real time.
At present, the fully digital I&C system has gradually become the preferred solution for the I&C systems of nuclear power plants. Compared with the analog I&C system, the main difference is software, and this difference brings new system failure modes and system safety problems [2]. Existing status monitoring focuses only on hardware, with the external physical state of the system as the monitoring object; there is no software security monitoring mechanism analogous to hardware monitoring, and the system status cannot be perceived in real time at the overall level, with software, hardware and their interactions as the object, leaving deficiencies in the security architecture [3]. In this paper, the hardware system of a thermal imaging distribution line temperature monitor based on digital technology is studied and designed. The system is a high-tech achievement integrating infrared thermal imaging, wireless network communication, electronics and image information processing, and carries out real-time data transmission with the help of China Mobile's GPRS wireless communication network.
2 System Design Principle
The hardware system of the thermal imaging distribution line temperature monitor based on digital technology is designed according to the client/server (C/S) mode and consists of the monitoring terminal (image acquisition and transmission terminal) and the monitoring center terminal (monitoring center computer server). Both functional parts are supported by corresponding hardware and software. The online monitoring system consists of several data acquisition terminals installed on transmission line towers and a monitoring master station. A data acquisition terminal mainly comprises two parts: a data acquisition subsystem, responsible for collecting infrared thermal images and visible images of insulators and nearby transmission lines, and a data transmission subsystem, which transmits the image data to the monitoring subsystem. The monitoring master station is composed of a computer and the monitoring system software; it receives, analyzes and processes data from the data terminals and sends control commands or alarm signals according to the analysis results [4]. The overall structure of the online monitoring system is shown in Fig. 1. The system is mainly composed of the monitoring terminal (image acquisition and transmission terminal) and the monitoring center terminal (monitoring center computer server). The monitoring center terminal is responsible for the management and control of the monitoring terminal and is at the upper management level, so it is called the upper computer [5]. The monitoring terminal, under the control of the monitoring center, is responsible for image data acquisition, compression and transmission; it is at the lower management level, so it is called the lower computer. The system structure is shown in Fig. 2.
Fig. 1. Overall Structure of Monitoring System
Fig. 2. System Structure
program realizes command transmission and data transmission between the monitoring terminal and the monitoring center terminal, and realizes infrared thermal phase diagram display, parameter setting, alarm prompt and other functions. It is specially pointed out here that since the monitoring center meets the network requirements of the server, the system places the server and the monitoring center on a computer to save hardware and network resources. The host computer mainly realizes the reception and decoding of compressed image data, and the database storage and processing of received image data [7].
3 System Overall Design
The infrared thermal imager is an important component of the entire monitoring system and is mainly responsible for the temperature monitoring of distribution network equipment. The infrared imaging online monitoring system provides accurate, wide-range, real-time, edge-to-edge temperature measurement and real-time color thermal images of the process; it is especially suitable for temperature monitoring, image scanning, analysis and control of the working process of distribution network equipment in continuous and discrete industries. In the application layer of the system, data transmission is based on frames.
Data frames include a start code, sub-station number, control word, data length, data field, check code and end code. See Table 1 for the specific format.

Table 1. Data frame format

| Start code | Sub-station number | Control word | Data length | Data field | Check code | End code |
| 1 byte | 6 bytes | 1 byte | 2 bytes | variable | 1 byte | 1 byte |

Start code: 1 byte, defined as 53H.
Sub-station number: 6 bytes (string format); the first two bytes can be the manufacturer code, and the last four bytes the sub-station code.
Control word: 1 byte, used to distinguish data types.
Data length: 2 bytes, high byte first and low byte last; if zero, there is no data field.
Data field: actual data, no longer than 300 bytes (no longer than 130 bytes in short message communication mode).
Check code: the standard CRC8 check covers the sub-station number, control word, data length and all bytes in the data field; to improve efficiency, a table lookup is used.
End code: 1 byte, defined as 4EH.
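As a hedged illustration of the frame layout in Table 1 and the field definitions above, the following Python sketch packs one frame with a table-driven CRC8. The text does not specify the CRC8 polynomial, so the common polynomial 0x07 is assumed, and the sub-station number and payload are invented examples.

```python
# Table-driven CRC8 (polynomial 0x07 is an assumption; the text only says "standard CRC8").
CRC8_POLY = 0x07

CRC8_TABLE = []
for byte in range(256):
    crc = byte
    for _ in range(8):
        crc = ((crc << 1) ^ CRC8_POLY) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    CRC8_TABLE.append(crc)

def crc8(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc = CRC8_TABLE[crc ^ byte]    # table lookup, as the text suggests
    return crc

def pack_frame(substation: bytes, control: int, payload: bytes) -> bytes:
    """Pack one application-layer frame per Table 1: start code, sub-station
    number, control word, big-endian data length, data field, CRC8, end code."""
    assert len(substation) == 6 and len(payload) <= 300
    body = substation + bytes([control]) + len(payload).to_bytes(2, "big") + payload
    # CRC covers sub-station number, control word, data length and data field.
    return bytes([0x53]) + body + bytes([crc8(body)]) + bytes([0x4E])

# Hypothetical sub-station "AB0001" sending a 2-byte reading with control word 0x01.
print(pack_frame(b"AB0001", 0x01, b"\x12\x34").hex())
```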
3.1 Overall Hardware Structure of the System
The hardware structure of the system is divided into the monitoring terminal and the monitoring center. The monitoring terminal is the image acquisition and transmission terminal; its hardware consists of five parts: the main control module, the image acquisition and compression module, the GPRS wireless transmission module, the power module and the storage module. The main control module is a high-performance ARM that controls and manages the monitoring terminal. The image acquisition module collects the infrared thermal image from the infrared thermal imager through an image acquisition card and performs digital conversion and JPEG compression coding. The GPRS wireless transmission module realizes the wireless network transmission function with a dedicated GPRS module, interfaced to the image acquisition module through a universal asynchronous serial interface (UART) [8]. The storage module is mainly used for temporary storage of infrared thermal images when network transmission is blocked; when the network is restored, the stored images are sent to the monitoring center by the transmission module. The power module is responsible for the conversion and storage of solar energy into electric energy and supplies each module with a matched voltage. The monitoring terminal mainly realizes the collection and compression of infrared thermal images at the transmission line monitoring site and the GPRS wireless transmission of the compressed image data, all supported by corresponding software [9]. GPRS (General Packet Radio Service) adds hardware equipment and upgrades the original network software
on the basis of the existing GSM network to form a new network. The GPRS network has the following advantages: (1) wide coverage; (2) high data transmission rate; (3) large system transmission capacity; (4) flexible communication cost control; (5) good real-time response and processing capability. The monitoring center end is the computer server with infrared thermal image display equipment in the monitoring center. The system hardware block diagram is shown in Fig. 3:
Fig. 3. Overall hardware block diagram of the system
3.2 Overall Software Structure of the System
The software structure of the system is likewise divided into monitoring terminal software and monitoring center software. The monitoring terminal software corresponds to each part of the monitoring terminal hardware and is an important guarantee for its normal operation [10]. It is mainly composed of five parts: the microcontroller main program, the image acquisition subprogram, the image transmission subprogram, the power management subprogram and the storage management subprogram. The main program of the microcontroller controls and manages the main control module and the monitoring terminal. The image acquisition subprogram acquires infrared thermal images from the infrared thermal imager and carries out digital conversion and JPEG compression coding. The image transmission subprogram realizes the wireless
network transmission function. The storage management subprogram is mainly used for temporary storage of infrared thermal images when network transmission is blocked; when the network is restored, the stored images are sent to the monitoring center by the transmission module [11]. The power management subprogram is responsible for the conversion and storage of solar energy into electric energy and for supplying each module with a matched voltage. GPRS technology has been widely used in the field of power monitoring. It provides an efficient, low-cost wireless packet data service, particularly suitable for intermittent, sudden, frequent transmission of small amounts of data, as well as occasional transmission of large amounts of data. At present, there are two main directions in GPRS application research: one studies the performance of the TCP/IP protocol in the GPRS environment; the other studies data communication using the GPRS network in monitoring systems. The research on the corresponding system management software is normative, industry-wide system research and specific application design. The monitoring center software is responsible for receiving, analyzing and processing data from the monitoring terminal, and for sending control commands or alarm signals according to the analysis results. It is the control center of the whole system, with functions of command sending, infrared thermal image reception and system management, and is mainly composed of the following modules:
(1) Communication module: connects the monitoring terminal and sends and receives data or control commands.
(2) Format reorganization module: combines the data entered by the user or the local data of the server into a data stream transmitted from the system monitoring center to the monitoring extension [12].
(3) User interaction module: contains various menus, tables, buttons, etc., for convenient operation.
(4) User login module: engineering and technical personnel of the power department log in to the equipment; the user name and password are sent via the communication module to the monitoring center server for verification.
(5) Input module: adds data to the monitoring center server or performs related operations.
(6) Infrared thermal image display module: displays the operation results and infrared thermal images on the screen, including images collected by the infrared thermal imager, historical values, and data displayed by time.
(7) Infrared thermal image storage module: stores the monitored infrared thermal images of transmission lines for easy query, both real-time and historical, together with operation records, current status and other information.
(8) Query module: queries the read-only data of the monitoring center, or the monitoring extension writes the query conditions and uploads them to the monitoring center server to query the qualifying data.
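To illustrate the C/S interaction described above, the sketch below shows a minimal monitoring-center server that accepts one terminal connection and parses one frame in the Table 1 layout. The listening port is an assumption, and retries, authentication and CRC verification are omitted for brevity.

```python
import socket

def read_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the terminal, or raise if the link drops."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("terminal closed the link")
        buf += chunk
    return buf

srv = socket.create_server(("0.0.0.0", 9000))   # assumed listening port
conn, addr = srv.accept()
header = read_exact(conn, 10)                   # start(1) + substation(6) + control(1) + length(2)
assert header[0] == 0x53                        # start code from Table 1
length = int.from_bytes(header[8:10], "big")    # big-endian data length
payload = read_exact(conn, length)
crc, end = read_exact(conn, 2)                  # check code and end code (CRC not verified here)
assert end == 0x4E                              # end code from Table 1
print(f"substation={header[1:7].decode()} control={header[7]:#x} data={payload.hex()}")
```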
3.3 Application Layer Design
For a contact in the distribution line, the temperature does not jump, and the temperature at the current time is strongly affected by the temperature at the previous time; the contact temperature is therefore a time series with strong autocorrelation, so the autoregressive integrated moving average model (ARIMA) is selected as the prediction model for the contact temperature in the application layer. This model analyzes the internal correlation law of a time series: it examines the autocorrelation of the contact temperature series, finds its mathematical law, and predicts future temperature values through the mathematical model [13]. The ARIMA(p, d, q) model is a combination of the AR(p) autoregressive model, the MA(q) moving average model and the I(d) difference model, where p, d and q are the orders of the three sub-models. The AR(p) autoregressive model describes the relationship between the current value and historical values; the historical data of the variable itself are used to predict itself. The mathematical expression is:

y_t = Σ_{i=1}^{p} γ_i y_{t−i} + ε_t + μ (1)
where y_t is the real value at time t and y_{t−i} the real value at lag i; γ_i is the autocorrelation coefficient; ε_t is the residual at time t; and μ is a constant term. The parameters to be solved in the AR model are γ_i and ε_t. The AR model must be established on a stationary time series, and stationarity is judged by the ADF test: calculate the t-statistic and the significance probability p of the contact temperature series; when t is clearly below the critical values at the three confidence levels (1%, 5%, 10%) and p is close to 0, the series is stationary; otherwise it is differenced until it passes the ADF test, and the order of differencing is the value of d in the ARIMA model [14]. The MA(q) model represents the accumulation of the residual terms ε_{t−i} of the AR(p) model over the previous q moments; MA(q) can eliminate random fluctuations in prediction. The mathematical expression is:

y_t = Σ_{i=1}^{q} θ_i ε_{t−i} + ε_t + c (2)
where y_t is the true value at time t; θ_i is the moving average coefficient; ε_t is the residual at time t; and c is a constant term. The parameters to be solved in the MA model are θ_i and ε_t. Combining formulas (1) and (2), the mathematical expression of the ARIMA(p, d, q) model is:

∇^d y_t = Σ_{i=0}^{d} (−1)^{d−i} C_d^i y_{t+i} (3)

y_t = Σ_{i=1}^{p} γ_i y_{t−i} + Σ_{i=1}^{q} θ_i ε_{t−i} + ε_t + C (4)
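A minimal sketch of fitting Eqs. (1)–(4) with statsmodels follows. The synthetic temperature series and the order (2, 1, 1) are assumptions for illustration, since the paper does not give its fitted orders.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
# Synthetic drifting contact-temperature series (assumed), around 40 degC.
temps = 40 + np.cumsum(rng.normal(0.05, 0.2, size=200))

# ADF test: difference until the t-statistic clears the critical values;
# the number of differencings needed fixes d, as described above.
t_stat, p_value, *_ = adfuller(temps)
print(f"ADF t={t_stat:.2f}, p={p_value:.3f}")   # non-stationary here, so d = 1

model = ARIMA(temps, order=(2, 1, 1))           # p, q chosen from PACF/ACF in practice
fitted = model.fit()
print(fitted.forecast(steps=5))                 # predicted temperatures, next 5 steps
```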
Equation (3) represents the d-order differencing applied when the contact temperature series fails the ADF test. The values of p and q are determined by the partial autocorrelation function (PACF) and the autocorrelation function (ACF); according to the characteristics of the two functions, there may be several candidate (p, q) pairs, each of which instantiates formula (4), and the internal γ_i and θ_i are solved by the least squares method.
3.4 Design of the Power Supply Circuit for Internal Devices of the Monitoring Terminal
The power supply is the basis for normal and stable operation of the system. Its design must consider the following factors: input voltage and current; output voltage, current and power; electromagnetic compatibility and electromagnetic interference; output ripple; etc. In this system there are three voltage levels: 3.3 V, 4.2 V and 12 V. The voltage of the LPC2368 and its peripheral chips is 3.3 V, that of the GPRS module MC55 is 4.2 V, and that of the thermal imager is 12 V. See Table 2 for the supply voltage and current consumption of each part.

Table 2. Power consumption comparison of main components
Working voltage
Current consumption (maximum)
LPC2368 And its peripheral chip
3.3 V
LPC2368 Idle mode 5 mA, normal mode 35 mA, and the instantaneous operating current of the peripheral chip can reach 50 mA
The GPRS module, MC55
3.3 V–4.8 V
Standby mode is 15 mA The GPRS data transfer mode is at 300 mA
Video acquisition module, W718LS
3.3 V
300 mA
thermal infrared imager
12 V
500 mA when the head does not turn, and 1000 mA when the head is turned
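As a quick plausibility check on Table 2, the worst-case current on each supply rail can be totalled when sizing the regulators. The following back-of-the-envelope sketch is illustrative only, not the authors' design calculation.

```python
# Worst-case current draw per rail, taken from Table 2.
rails_mA = {
    "3.3 V": 35 + 50 + 300,  # LPC2368 normal mode + peripheral peak + W718LS
    "4.2 V": 300,            # MC55 in GPRS data transfer mode
    "12 V": 1000,            # thermal imager while the pan head is turning
}
for rail, current_mA in rails_mA.items():
    print(f"{rail} rail: {current_mA} mA worst case")
```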
4 Application Benefit Evaluation of Online Monitoring System

The monitoring terminal of this system uses many electronic devices, so the software design is relatively complex: on the one hand, it must handle the control of and data communication with the many devices on the hardware circuit; on the other hand, it must run the time-delay detection program in real time to ensure the safety of the monitoring terminal. The urban power grid plays a very important role in China's economic development, and distribution-network transformers and their associated switchgear are numerous and widely distributed. Therefore, scientific and reasonable
online monitoring technology can track the normal operation of distribution network equipment in real time and improve the safety and reliability of its operation. Research shows that transformers and switchgear should operate within a standard temperature range; if a problem reaches the critical risk point while the equipment is running, accidents such as outage, fire or even explosion can occur. Transformers include oil-immersed and dry-type units, usually installed above or below ground, so monitoring equipment temperature is one of the technical difficulties in the power industry. The online temperature monitoring system for distribution network equipment based on digital thermal imaging developed by the author has been tested, commissioned and applied, with infrared imagers installed on the distribution network equipment. Except for a few jobs that require manual confirmation, the rest is operated and processed automatically by the system; this effectively reduces the intensity of manual labor, saves labor costs, and at the same time protects the safe production of electric power enterprises. In addition, the author uses Hikvision's EZVIZ cloud desktop client and mobile APP for data browsing, so that staff can check the operating temperature of on-site distribution network equipment at any time, ensuring safe operation of the equipment [15, 16].
5 Conclusions

In summary, with the maturing of automatic control and digital thermal imaging technology, online temperature monitoring of distribution network equipment has become highly significant. The digital thermal imaging used in this system monitors the temperature of distribution network equipment through an infrared thermal imager and generates real-time temperature images and data. Field application tests show that the infrared imager is effective for online monitoring of distribution network equipment and meets the requirements in both accuracy and real-time performance; it improves work efficiency while reducing the labor intensity of manual operation, and is worth popularizing and applying.

Acknowledgements. This work was supported by IPIS2012.
References 1. Ryms, M., Tesch, K., Lewandowski, W.M.: The use of thermal imaging camera to estimate velocity profiles based on temperature distribution in a free convection boundary layer. Int. J. Heat Mass Transf. 14(2), 115–129 (2021) 2. Zheng, F.: Design of fuzzy PID control system of indoor intelligent temperature based on PLC technology. In: Liu, S., Ma, X. (eds.) ADHIP 2021. LNICST, vol 416, pp. 721–732. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-94551-0_55 3. Rui, W.U., Wang, Y.M., Wang, Y.H.: Research on temperature acquisition system of printing machine oven based on wireless sensor network. J. Phys. Conf. Ser. 2093(1), 012027–012036 (2021)
4. Shin, S., Ko, B., So, H.: Noncontact thermal mapping method based on local temperature data using deep neural network regression. Int. J. Heat Mass Transf. 12(19), 3142–3150 (2022) 5. Wilk, E., Swaczyna, P., Witczak, E., Lao, M.: Studying the impact of multilayer fabric on the characteristic of temperature distribution under the influence of fire. Text. Res. J. 91(19–20), 2252–2262 (2021) 6. Duan, Y., Yang, Z., Ge, W., Sun, Y.: Explosive thermal analysis monitoring system based on virtual instrument. J. Beijing Inst. Technol. 30(zk), 218–224 (2021) 7. Kong, L.J.: Design and realization of distributed wireless temperature detection system based on zigbee technology. Aust. J. Electr. Electron. Eng. 17(11), 1–9 (2020) 8. Kim, D., Jeong, D., Kim, J., Kim, H., Ahn, S.: Design and implementation of a wireless charging-based cardiac monitoring system focused on temperature reduction and robust power transfer efficiency. Energies 13(4), 1008–1015 (2020) 9. Urbano, O., Perles, A., Pedraza, C., Rubio-Arraez, S., Mercado, R.: Cost-effective implementation of a temperature traceability system based on smart rfid tags and iot services. Sensors 20(4), 1163–1171 (2020) 10. Meng, X.H., Cao, S., Zhang, Y.H., Duan, L.H.: Design of temperature measurement and control system of chemical instrument based on Internet of Things 12(2), 294–303 (2021) 11. Wang, Y.: Design and development of temperature measurement and control system based on mcu under the background of Internet of Things. J. Phys. Conf. Ser. 2037(1), 012060–012070 (2021) 12. Pang, J., Yu, Z., Chen, X.: Research on thermal imaging fault detection system based on weibull distributed electrical system. J. Phys. Conf. Ser. 1941(1), 012037–012043 (2021) 13. Jensen, J.K., Kaern, M.R., Pedersen, P.H., Markussen, W.B.: A new method for estimating the refrigerant distribution in plate evaporator based on infrared thermal imaging. Int. J. Refrig. 9(126), 126–135 (2021) 14. Rung, S., Hcker, N., Hellmann, R.: Thermal imaging of high power ultrashort pulse laser ablation of alumina towards temperature optimized micro machining strategies. IOP Conf. Ser. Mater. Sci. Eng. 1135(1), 012027–012039 (2021) 15. Ravichandran, M., Su, G., Wang, C., Seong, J.H., Kossolapov, A., Phillips, B., et al.: Decrypting the boiling crisis through data-driven exploration of high-resolution infrared thermometry measurements. Appl. Phys. Lett. 118(25), 253903–253912 (2021) 16. Keeney, C., Hung, C.S., Harrison, T.M.: Comparison of body temperature using digital, infrared, and tympanic thermometry in healthy ferrets (mustela putorius furo). J. Exotic Pet Med. 36(2), 16–21 (2021)
Applications of Key Automation Technologies in Machine Manufacturing Industry Qifeng Xu(B) School of Mechanics, North China University of Water Resources and Electric Power, Zhengzhou, Henan, China [email protected]
Abstract. The machinery manufacturing industry is a pillar of China's industrial production and occupies an important position in China's economic development. Mechanical automation technology is an important technical means of improving the competitiveness of the machinery manufacturing industry and promoting its sustainable development. Automation technology in the field of mechanical manufacturing mainly includes intelligent technology, integrated technology and flexible automation technology; these technologies are widely used in manufacturing information systems, product processing and inspection, supply and transportation, and assembly. This paper proposes that the application of automation technology can be comprehensively optimized through strategies such as strengthening the understanding of automation technology, speeding up the research and development of automated manufacturing processes, and improving the industry system, so as to raise the automation level of China's machinery manufacturing industry and enhance its competitiveness.

Keywords: Automation Technologies · Machine Manufacturing · Intelligent Technology · Integrated Technology · Flexible Technology
1 Introduction

The development trend of automation technology is intelligence, integration and flexibility [1]. Automation technology combines computer information technology, electronic communication technology, automatic control technology and other disciplines; through innovation it forms a new growth point for the industrial economy, on which new management strategies for industrial development continue to emerge. Mechanical automation technology integrates automation technology with mechanical production and manufacturing. It is of key significance to the development quality and efficiency of the machinery manufacturing industry and is one of the key technologies to emerge in recent years. Mechanical automation technology makes mechanical production more automatic, effectively reduces the difficulty and workload of manual production, and reduces the possibility of manual error. In addition, mechanical automation technology
can realize fine processing of raw materials, greatly improving the quality of mechanical production and processing and effectively increasing economic benefits. Because of these technical advantages, mechanical automation technology has been recognized by most mechanical production units, which continue to strengthen its research and application. At present, China's social transformation and upgrading are rapid, the working environment of the machinery manufacturing industry is changing, and competitive pressure is growing. Actively studying and improving mechanical automation technology has therefore become the key route to rapid development of the machinery manufacturing industry, and it is conducive to steadily improving the competitiveness of related enterprises. The scope of automation technology has been extended to online monitoring and control of the production process and product quality: dynamically monitoring the operation of mechanical equipment, coordinating the manufacturing process, and preventing and diagnosing equipment failures [2].
2 Features and Advantages of Automation Key Technologies

2.1 Features

Mechanical manufacturing automation technology has an overall control function [3]. Once equipment fails, it can stop production and cut power according to preset settings to ensure that internal system components are not damaged. Maintenance personnel can determine the cause of a circuit fault from the equipment's automatic diagnosis function, quickly complete fault handling, and keep the production process normal. With the further development of electronic technology, operations can be completed with touch-screen jog controls, reducing the number of switches, buttons and handles, improving convenience of operation, keeping all operating procedures simple and clear, and speeding up workers' start of production. When problems occur in the operation of traditional mechanical equipment, it must be shut down for repair; parts that need continuous processing become scrap because of the failure shutdown, which increases the probability of production safety accidents. The production safety of mechanical automation has been improved [4]: all failures can be analyzed and located by the system itself, and in case of failure the machine stops automatically, protection measures are activated inside the equipment, and circuit power is cut automatically to avoid the impact of current overload on system safety. Mechanical automation integrates the whole production process, with relatively fixed energy consumption and a fixed production flow; with reduced human involvement, workpiece processing quality is more consistent and the connection between processes is closer. The key technologies of automation have effectively reduced the scrap rate and the number of consumables used in production.

2.2 Advantages

Traditionally, some parts with special requirements had to be completed manually [5]. However, if there is a problem in the worker's operation, the accuracy of the parts will
be affected. Although some special parts can be processed manually, current practice must ensure quality while also paying attention to efficiency, so this production mode will be replaced. Compared with manual operation, automation technology performs better: it can automate the manufacturing of parts and components to meet interchangeability requirements. For example, numerical control machining can be programmed on a computer in advance, and then, under the monitoring of the computer program, parts manufacturing proceeds largely automatically. Parts manufacturing is a common link in mechanical manufacturing, and automation technology can effectively improve its efficiency and quality so that the accuracy and consistency of parts meet requirements. Computer systems, product databases, CAD and other development tools of automation technology help engineers first design the size and shape of machine parts and then optimize the production process through finite element analysis and dynamic motion analysis to achieve the best effect. In addition, industrial automation technology has management advantages that can improve the processing quality and efficiency of mechanical manufacturing; with well-chosen parameters, practical operation significantly improves the final manufacturing result [6]. Applying automation technology to production management and control makes it possible to monitor the manufacturing process in real time: the system completes manufacturing settings in advance and outputs and supplies materials and products ahead of time, improving the efficiency of supply and transportation of materials and finished products and reducing losses of materials and semi-finished products in transit.
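The protective behaviour described in Sect. 2.1 — stop production, cut circuit power, then report a self-diagnosed fault cause — can be pictured as a simple supervision loop. The sketch below is purely illustrative; the status codes and control hooks are assumptions, not an actual machine interface.

```python
import time

FAULT_CAUSES = {1: "overcurrent", 2: "overtemperature", 3: "position error"}

def read_status_word():
    """Placeholder for polling the machine controller; 0 means healthy."""
    return 0

def supervise(poll_interval_s=0.1):
    while True:
        code = read_status_word()
        if code != 0:
            stop_production()  # halt feed and spindle motion immediately
            cut_power()        # open the main contactor to protect internal components
            print("fault located:", FAULT_CAUSES.get(code, "unknown"))
            return
        time.sleep(poll_interval_s)

def stop_production():
    pass  # hypothetical hook into the machine's motion controller

def cut_power():
    pass  # hypothetical hook driving the power contactor
```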
3 New Types of Key Automation Technologies

3.1 Intelligent Technologies

The production capacity and requirements of China's machinery manufacturing industry have continuously improved, and its production level has grown rapidly. The continuous integration and research of computer information technology in China has promoted the development and application of intelligent technology in actual production. A human–machine integrated system can not only complete production tasks according to preset parameters but also save input parameters for one-key switching, avoiding the tedious operation of entering parameters before each production run. In addition, intelligent mechanical design and manufacturing automation systems pay more attention to collaborative production between people and machinery, automatically optimizing the production process according to workpiece quality requirements and design drawings to ensure design quality. The machinery manufacturing industry is gradually adopting CAD and other computer graphics technologies for design and drafting, replacing traditional drawing methods, effectively shortening the design cycle while ensuring design precision [7]. With the development of multimedia technology,
mechanical intelligence is trending toward graphical interfaces, which help designers communicate with customers more intuitively and greatly improve work efficiency. The powerful graphics, audio and image processing functions of multimedia also facilitate intelligent fault detection and real-time monitoring, and occupy an important position in enterprise production and management. Figure 1 shows the basic structure of intelligent technology.
Fig. 1. Basic structure of intelligent technology
3.2 Integrated Technologies

Integrated technology is one of the automation technologies widely used in China's machinery manufacturing industry [7]. At present, integrated technology is involved in many fields, and some industries must rely on it to ensure product quality. Its main role is to ensure that the functions of equipment are not affected, and it has taken a leading position in China's machinery manufacturing industry. The core of integration technology is integration; the key is to establish a unified standard data model and a data sharing system. Because integration technology incorporates Internet technology, database technology and other technologies, it can better exert its integrative role, so that mechanical manufacturing equipment, instruments and functions can be managed uniformly. At present, the development mode of China's mechanical manufacturing enterprises is gradually tending toward an integrated form: they seek to optimize the whole work content and process by adjusting and controlling technology, and to improve the quality and level of mechanical manufacturing while strengthening the competitiveness of the industry. In the practical application of integration, technicians can collect relevant market information and data and, based on the production characteristics of the machinery manufacturing industry, integrate and analyze that information; the management structure model of the relevant manufacturing process is then formulated to produce more scientific and reasonable strategic policies and production
management models, flexibly adjusting production efficiency according to the market situation and reflecting the flexible characteristics of automation technology. Mechanical automation technology is comprehensive, so it can effectively absorb other advanced technologies in China and bring overall technical advantages into play. Actively integrating the application of mechanical automation technology is particularly important, as it shows the technology's strengths. Figure 2 shows the management structure of workshop integrated manufacturing.
Fig. 2. Basic structure of integrated technology
3.3 Flexible Technologies

Flexible automation technology is highly operable and plays an important role in mechanical manufacturing. Using it in mechanical product production helps keep the production process smooth. Integrating flexible automation technology into the manufacturing process can maximize the production benefits of mechanical products and bring good returns to enterprises while raising the level of production automation. Unlike other automation technologies, the key to flexible automation is drawing design: if the drawing design does not meet the technical application requirements, the technology's advantages cannot be realized. Therefore, when using flexible automation technology, manufacturing enterprises must strictly follow design specifications and closely combine technology with
actual production needs, ensuring that the manufacturing process of mechanical products is optimized and meets the enterprise's development goals. Based on automation technology, information technology and manufacturing technology, flexible manufacturing relies on highly flexible manufacturing equipment [9]. It adapts scheduling management to changes in processing tasks or the production environment and automatically accommodates changes in the categories and batch sizes of processed parts, realizing a multi-variety, small-batch production mode. A flexible production line is an efficient and highly automated manufacturing system that can ensure maximum utilization of the processing system; it is already widely used in automobile, electronics and furniture manufacturing. The multi-module flexible automatic production line is mainly composed of four parts: the processing and detection module, the material supply module, the drive module and the PLC control module. Its framework is shown in Fig. 3.
Fig. 3. Basic structure of flexible technology
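To illustrate the adaptability claimed for flexible automation — switching part categories and batch sizes without re-tooling the whole line — the toy sketch below models the four modules of Fig. 3. All module interfaces are assumptions for illustration, not a real control system.

```python
from dataclasses import dataclass

@dataclass
class Job:
    part_type: str
    batch_size: int

class FlexibleLine:
    """PLC control module dispatching the other three modules of Fig. 3."""
    def __init__(self):
        self.current_type = None

    def run(self, jobs):
        for job in jobs:
            if job.part_type != self.current_type:
                print(f"changeover to {job.part_type}")  # reload program/fixtures
                self.current_type = job.part_type
            for _ in range(job.batch_size):
                self.feed()                 # material supply module
                self.drive()                # drive module positions the workpiece
                self.process_and_inspect()  # processing and detection module

    def feed(self): pass                 # hypothetical material-supply hook
    def drive(self): pass                # hypothetical axis-motion hook
    def process_and_inspect(self): pass  # hypothetical machining + inline inspection

FlexibleLine().run([Job("flange", 5), Job("shaft", 2)])
```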
4 Applications of Automation Key Technologies in Machine Manufacturing

4.1 Applications of Intelligent Technologies

By combining mechanical manufacturing technology, artificial intelligence technology and automation technology, an intelligent mechanical manufacturing system can be built. The intelligent system can imitate human thinking, carry out human–computer interaction in mechanical manufacturing, reasonably analyze, judge and logically reason about problems in the production process, and perform self-monitoring and troubleshooting. Because mechanical automation has become more intelligent, cooperation between technicians and automated machinery is smoother, which is more
conducive to the development of mechanical manufacturing automation [10]. With the continuous development of society, production requirements on the machinery manufacturing industry keep rising: the industry must not only achieve rapid productivity growth but also reach a certain level of intelligence, moving away from purely mechanized processing as far as possible. In recent years, research on computer information technology in China has deepened, and intelligent information technology has been further applied and developed, providing the technical basis and conditions for the intelligent application of mechanical automation technology. By exploiting the intelligent advantages of mechanical automation technology, scanning and computer drafting of mechanical design drawings can be realized, which benefits production instruction and efficiency. Design technology is the best example of the intelligent application of mechanical automation: with the advantages of computer technology, it can achieve precise control of design parameters and further improve the visual processing of complex programs. On this basis, mechanical automation technology can greatly improve the precision of mechanical manufacturing through intelligent application, save product development and design costs, and improve the overall working level [11].

4.2 Applications of Integrated Technologies

In production and manufacturing, an enterprise realizes integrated manufacturing by introducing advanced computer and information communication technology, thereby optimizing the whole manufacturing process [12]. In some high-tech enterprises the use value of integrated manufacturing technology is quite high, and with the rapid development of modern science and technology it has been widely applied, for example in mechanical automatic management and automatic detection systems. The application process of mechanical automation management is as follows: first, parts are manufactured in a standardized way according to the product's manufacturing process; then the automation system is produced and assembled; finally, efficient operation of the production process improves product processing service quality and manufacturing efficiency, thereby improving the quality of production and manufacturing services. Establishing an automatic detection system can optimize the working efficiency of mechanical manufacturing and provide strong technical support for realizing automated production. The integrated application is mainly composed of three subsystems: engineering technology information, management information, and manufacturing automation.
The engineering technology information subsystem uses computer technology to analyze data generated in the manufacturing process, assists in completing manufacturing work, and ensures its quality. The management information subsystem focuses on operation and management work such as financial management and production management, carrying out these tasks on the basis of meeting manufacturing needs. The manufacturing automation subsystem uses computer
numerical control technology, industrial robots and other technical measures, supported by computer technology, to raise mechanical manufacturing quality.

4.3 Applications of Flexible Technologies

Flexible automation technology is an integration of multiple technologies, a new trend in the development of manufacturing enterprises, and one of the most strategic measures in the development of the manufacturing industry. In future comprehensive applications, this technology will run through the production and operation activities of manufacturing enterprises [13]. Wide application of flexible automation technology in mechanical manufacturing enterprises is bound to bring a significant increase in their benefits. With the continuous development and extensive application of automation technology in manufacturing, it will improve manufacturing processes and form highly automated production lines or even unmanned factories, reducing labor and management costs, improving product quality and production efficiency, and enhancing enterprise competitiveness. At present, stand-alone technology continues to develop toward high precision, high speed and high flexibility, while flexible automation technology is developing toward practical, effective open network systems and integrated intelligent performance; in the future it will be even more widely used in the machinery manufacturing industry. The flexible application of mechanical automation technology is aimed more at improving the level of enterprise management, which reflects its flexible use in mechanical manufacturing. With the deep application of mechanical automation technology in mechanical design, mechanical manufacturing should not only meet people's quality requirements for machinery but also meet the market adaptability and responsiveness requirements of the industry [14].
5 Optimization Strategies for Applications of Key Automation Technologies in Machine Manufacturing

China's machinery manufacturing enterprises must strengthen their comprehensive understanding of automation technology, and the rationality of its application must be considered when it is used in mechanical manufacturing. Automation technology is closely related to science and technology [15]: different links and processes use different technologies, so before adopting a technology, enterprises should analyze the fit between technology and process to determine which technology each process needs. At the same time, enterprises must recognize that automation is the development trend and that its application should be gradual. Enterprises must apply automation technology in light of the current state of domestic automation R&D, understand the laws of market change, and consolidate the foundation of
automation, learning from the successful experience of developed machinery-manufacturing countries. For parts with strong specialization and particularity, technicians can be sent to study at foreign professional bases, which not only lets Chinese technicians learn advanced R&D concepts but also strengthens manufacturing exchanges at home and abroad. China can also strengthen the introduction of international talent, attracting it with good welfare and treatment. Research on automation processes abroad started earlier than in China and has formed a relatively mature R&D system; foreign mechanical manufacturing talents, having worked in this environment for a long time, have strong automation concepts and abilities. Introducing such talent can effectively improve the currently unreasonable talent structure of China's manufacturing industry and raise the automation level of the domestic machinery manufacturing industry. China should strengthen automated management, keep up with market development, and innovate the existing system; notably, the industry system should be closely combined with the needs of automation. Under the constraints of such a system, enterprises will have to change their traditional development thinking and introduce automation technology into production and manufacturing, promoting the automation level of China's machinery manufacturing industry.
6 Conclusions

The level of the manufacturing industry is related to a country's position in the world. Only by mastering core technologies and improving the core competitiveness of products can the economic benefits of enterprises be improved and the comprehensive national strength of the country be enhanced. It is therefore critical to improve the automation level of mechanical manufacturing. Mechanical manufacturing automation technology will continue to develop toward intelligence, integration and flexibility. Only by continuously improving the automation level of mechanical manufacturing in China and applying advanced technology in mechanical manufacturing can enterprises improve their economic benefits and achieve high-quality development.
References 1. O’Connor, A.M., Tsafnat, G., Thomas, J., et al.: A question of trust: can we build an evidence base to gain trust in systematic review automation technologies? Syst. Rev. 8(1), 1–8 (2019) 2. Ivanov, S.: Ultimate transformation: how will automation technologies disrupt the travel, tourism and hospitality industries? Zeitschrift für Tourismuswissenschaft 11(1), 25–43 (2019) 3. Le Goff, K., Rey, A., Haggard, P., et al.: Agency modulates interactions with automation technologies. Ergonomics 61(9), 1282–1297 (2018) 4. Matta, A.: Automation technologies for sustainable production [TC spotlight]. IEEE Robot. Autom. Mag. 26(1), 98–102 (2019) 5. Saputra, Y.M., Rosyid, N.R., Hoang, D.T., et al.: Advanced sensing and automation technologies. Enabling Technol. Soc. Distancing Fundam. Concepts Solut. 3, 15–21 (2022)
6. Coombs, C., Hislop, D., Taneva, S.K., et al.: The strategic impacts of intelligent automation for knowledge and service work: an interdisciplinary review. J. Strateg. Inf. Syst. 29(4), 10–16 (2020) 7. Ntinda, N.S.: Examining the readiness of the Namibia college of open learning in adopting automation technologies for improved service delivery. University of Namibia (2022) 8. Rue, B.T.D., Eastwood, C.R., Edwards, J.P., et al.: New Zealand dairy farmers preference investments in automation technology over decision-support technology. Anim. Prod. Sci. 60(1), 133–137 (2019) 9. Macrorie, R., Marvin, S., While, A.: Robotics and automation in the city: a research agenda. Urban Geogr. 42(2), 197–217 (2021) 10. Brecher, C., Müller, A., Dassen, Y., Storms, S.: Automation technology as a key component of the industry 4.0 production development path. Int. J. Adv. Manuf. Technol. 117(7–8), 2287–2295 (2021). https://doi.org/10.1007/s00170-021-07246-5 11. Andronie, M., L˘az˘aroiu, G., S, tef˘anescu, R., et al.: Sustainable, smart, and sensing technologies for cyber-physical manufacturing systems: a systematic literature review. Sustainability 13(10), 5495 (2021) 12. Bortolini, M., Galizia, F.G., Mora, C.: Reconfigurable manufacturing systems: literature review and research trend. J. Manuf. Syst. 49(1), 93–106 (2018) 13. Lu, Y., Xu, X., Wang, L.: Smart manufacturing process and system automation–a critical review of the standards and envisioned scenarios. J. Manuf. Syst. 56(1), 312–325 (2020) 14. Tuffnell, C., Kral, P., Siekelova, A., et al.: Cyber-physical smart manufacturing systems: Sustainable industrial networks, cognitive automation, and data-centric business models. Econ. Manag. Financ. Mark. 14(2), 58–63 (2019) 15. Mourtzis, D.: Simulation in the design and operation of manufacturing systems: state of the art and new trends. Int. J. Prod. Res. 58(7), 1927–1949 (2020)
The Application of VR Technology in Han Embroidery Digital Museum Lun Yang(B) Wuhan Textile University, Wuhan, Hubei, China [email protected]
Abstract. With the development of the digital age, virtual reality technology has matured and is applied in many fields such as education, military affairs and cultural heritage protection. From the perspective of cultural confidence, this paper discusses the significance and application of virtual reality technology in the construction of digital museums, and takes the construction of the digital museum of Han embroidery, an intangible cultural heritage of Hubei Province, as an example to summarize the concrete steps of such construction.

Keywords: VR Technology · Han Embroidery · Digital Museum · Research · Interaction Design · Construction
1 The Concept of VR

Virtual reality (VR) originally referred to the non-realistic images displayed by computers. Jaron Lanier first used the term in 1984. In the late 1980s virtual reality technology saw its first boom, but by the late 1990s, owing to the discomfort of the devices and poor immersion, its popularity gradually cooled. Facebook's acquisition of Oculus in 2014 ushered in a second wave, and 2015 has been called the first year of VR. Today [1], VR technology is used in military, medical, education, cultural heritage protection and many other fields. Research shows that VR systems fall into four main types: desktop, immersive, augmented reality and distributed. Desktop virtual reality systems are easy to promote, unconstrained by equipment and cheap to produce, but weakly immersive. Immersive systems offer strong immersion and a good user experience but demand much of the equipment, cost more to develop and are harder to promote [2]. Augmented reality systems can superimpose computer images on the real world, but their scope of application has certain limitations. Distributed virtual reality systems build a virtual space by computer so that users in many places can interact in real time, but their development cost is high.
2 The Significance of the Application of VR Technology in Digital Museums

2.1 Providing New Methods for the Preservation of Cultural Relics

The General Office of the CPC Central Committee and the General Office of the State Council issued Several Opinions on Strengthening the Reform of the Protection and Utilization of Cultural Relics, which lists as a main task "making full use of the Internet, big data, cloud computing [3], artificial intelligence and other information technologies to promote the integration and innovation of cultural relics display and utilization methods, and to promote the 'Internet plus Chinese Civilization' action plan." As an emerging product of the digital age, VR technology is advanced and innovative. It uses three-dimensional modeling to build one-to-one models of cultural relics, which breaks the limits of time to the greatest extent and can preserve the original appearance of relics for the long term. Integrating VR technology into the construction of digital museums can not only achieve long-term storage of the original appearance of cultural relics and provide data references for transmitting information about them, but also narrow the distance between users and relics, providing a new method for their protection.

2.2 Creating New Opportunities for the Inheritance of Intangible Cultural Heritage

For intangible cultural heritage such as handicrafts, protection should focus on exploring the skills and cultural connotations involved. Industrial and digital technologies have had a huge impact on handicraft heritage; for example, for the Han embroidery of the Jingchu region, the rise of digital embroidery machines has brought great challenges to the development and inheritance of hand-made Han embroidery. Applying VR technology in digital museums can enhance users' sense of experience [4], deepen their understanding of intangible cultural heritage techniques, and create new opportunities for intangible cultural heritage to adapt to the needs of the new era.

2.3 Providing New Forms for Museums

Traditional museums are constrained to some extent by physical space, and users cannot view exhibits closely other than through glass and display stands. Restoration through virtual reality technology can greatly enrich the exhibition experience, break the geographical restrictions of traditional museums to some extent, and narrow the distance between users and exhibits [5]. Visitors can enlarge exhibits or view them through 360 degrees, experiencing their artistic features and aesthetic value at close range, which also provides a new form of museum exhibition.
3 Digital Museum

3.1 Advantages of the Digital Museum

The digitization of museums is an inevitable trend of the digital age. Using digital technology to broaden the forms of museum exhibition is conducive to expanding the channels of cultural heritage transmission and to the inheritance of intangible cultural heritage [6]. Digital museums break through the regional restrictions of traditional museums and display cultural heritage through virtual space, so that visitors from different regions can tour the venues without leaving home. Beyond breaking geographical restrictions, the development cost of a digital museum is much lower than that of a physical exhibition hall, and building the virtual exhibition space through computer three-dimensional modeling also saves land resources. Where a physical museum exists, the digital museum can also serve as a communication medium that attracts visitors to the physical museum. A digital museum can also give users a more intuitive and clear display: through the collection of image data, models can restore the real appearance of exhibits to the maximum extent, and 360° object-surround technology lets users view exhibits from every angle without blind spots, which the traditional museum experience cannot achieve [7].

3.2 The Shortcomings of Digital Museums

The digital museum also has shortcomings. Its audience has certain age limits, and it places requirements on operating the equipment: users who are too young or too old and do not know how to use electronic equipment may be unable to experience it. Some immersive interactions require headsets and controllers, which can also reduce the communication effect of digital museums to some extent [8]. The platform needs dedicated personnel for regular maintenance to ensure timely updates of the displayed content. Teenagers usually pay more attention to interactive experience, so for young people digital museums need to consider the fun of interaction design [9].
4 Construction of Digital Museum Based on VR Technology The construction of a digital museum requires several steps such as preliminary data collection, image collection, three-dimensional space construction, interaction design, promotion and release. The preliminary data collection and arrangement can clarify the division of museum exhibition space, such as classification by year [11], artistic features, exhibit types, etc., which is conducive to data induction and storage. It can also provide data support for later related research. Image acquisition can provide data reference for 3D modeling and make the construction of 3D model more accurate. The construction and interaction design of 3D model and 3D space are important links in the construction
of the digital museum, and they have the most direct impact on how the exhibition space is presented. After the 3D space is built, its scale needs to be adjusted so that users get the best sightseeing effect during the experience. Smooth operation of the interaction design is an important factor in the experience [12], and appropriately increasing the sense of experience, immersion and fun of exhibition interaction is likewise an important topic for the development of digital museums. The promotion and release stage should aim at ease of operation and ease of dissemination, with convenient browsing steps that fit user psychology [13]. Taking the construction of the Han embroidery digital museum as an example, this paper analyzes preliminary preparation, image collection, museum scene construction, interaction design and final presentation.

4.1 Preliminary Preparation

In the early stage, information on Han embroidery was collected and summarized. Analyzing the characteristics of each era and the meanings of Han embroidery patterns is also a way of organizing the customs and cultural changes of different periods [14]. The analysis can start from several aspects, such as the development process, the characteristics of the times, and representative implications. The development process of Han embroidery is summarized in Table 1.

Table 1. Development and evolution of Han embroidery

| Stage | Period | Research basis |
| --- | --- | --- |
| Origin | Eastern Zhou period | A large number of silk weaving relics, mainly with phoenix and bird patterns, were unearthed from the No. 1 Chu tomb at Mashan, Hubei Province |
| Beginning | Pre-Qin period | In the pre-Qin period the silk weaving industry of the State of Chu was at its highest level; Qu Yuan's Chu Ci depicts the silk weaving of the Chu palace |
| Great prosperity | Late Qing Dynasty and early Republican period | Embroidery was mainly divided into three categories: daily necessities such as embroidered clothes and pillows; decoration such as wall hangings, central-hall pieces and screens; and folk ritual objects such as sacred robes, cassocks and colorful flags |
| Hit hard | Anti-Japanese War period | The Japanese invasion destroyed Embroidery Street; under the influence of the war, demand shifted toward daily necessities, practitioners decreased sharply, and Han embroidery was almost lost |
| Revival | The 1980s | The variety of Han embroidery products increased, expanding from small folk embroideries and costumes to household articles: from curtains, cloaks, pillowcases and quilt covers to central-hall pieces, strip screens, folding screens, fan pieces and so on |
According to the themes and forms of Han embroidery in different periods, its characteristics can be summarized as in Table 2.

Table 2. The characteristics of Han embroidery in different periods

| Dynasty | Pattern type | Theme | Characteristics of the times |
| --- | --- | --- | --- |
| Eastern Zhou Dynasty | Animal patterns | Myth as the main theme | People had a rich imagination of natural gods and a strong worship of nature; patterns were romantic and strange, themes were mainly mythological, emphasis was placed on depicting shape and movement in pursuit of vitality, and dragons, auspicious phoenixes and other rare birds and exotic beasts were the most used motifs |
| Wei and Jin dynasties | Animal patterns | Religious and decorative themes | Political unrest in the Wei, Jin and Northern and Southern dynasties brought frequent migration and complex ethnic relations; the collision of regional cultures instead promoted the development of art, and Han embroidery, in addition to inheriting traditional patterns, absorbed Buddhist elements and decorative art, with paired decorative patterns used more and more |
| Tang Dynasty | Common animal patterns, flowers and plants, birds | Reality-oriented themes | Han embroidery no longer paid much attention to mythological themes but looked to reality, bringing common animals and plants into the picture and expressing them in combined plant-and-animal forms |
4.2 Image Acquisition

In the image acquisition stage, Han embroidery costumes and pieces are collected. Adjusting the light is very important during shooting [15]: the embroidery must be affected by the environment as little as possible so that its real appearance is restored to the greatest extent. To reflect the craftsmanship of the inheritors and the aesthetic value of Han embroidery, image collection requires high precision, which demands high-pixel equipment; moreover, the exhibits cannot be taken out of their display cases, and reflections from the glass must be avoided, which also places demands on the photographer's collection technique [16].

4.3 Museum Scene Construction

The museum scene is built with 3ds Max, constructing the envisioned virtual scene through polygon modeling. The working steps of model building are shown in Fig. 1. Among them, material rendering and scene panorama rendering are the decisive factors for final picture quality; the V-Ray renderer is generally used to pursue realism of the scene [17]. In panorama rendering, the camera should be placed at the center of the display space, 1.2 m above the ground. The output ratio of a standard panorama should be 2:1, and the camera's field of view should be set to cover 360°. The placement of the camera is a crucial factor affecting the rendering quality of the panorama.
Fig. 1. Steps of venue construction
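Because the panorama is rendered at a 2:1 ratio covering a full 360° × 180° field of view, a view direction maps to pixel coordinates by a simple equirectangular projection — useful, for example, when positioning the hotspots discussed in the next subsection. The following is a minimal sketch, illustrative rather than the project's actual tooling.

```python
def yaw_pitch_to_pixel(yaw_deg, pitch_deg, pano_width):
    """Map a view direction to pixels in a 2:1 equirectangular panorama.

    yaw_deg:   -180..180 degrees, 0 at the panorama centre
    pitch_deg: -90 (straight down) .. 90 (straight up)
    """
    pano_height = pano_width / 2                 # the standard 2:1 output ratio
    x = (yaw_deg + 180.0) / 360.0 * pano_width
    y = (90.0 - pitch_deg) / 180.0 * pano_height
    return x, y

# Example: a hotspot 30 degrees right and 10 degrees up in an 8192 x 4096 render.
print(yaw_pitch_to_pixel(30.0, 10.0, 8192))
```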
4.4 Interaction Design

The interaction design is realized mainly with Pano2VR and KRPano. The panorama is imported into Pano2VR, and polygon hotspots are configured to enable the panoramic tour: clicking to view information about the Han embroidery, switching between tour scenes, and other interactions [18]. The interactive features for map-button roaming transmission and background-music playback hotspots are achieved mainly by writing JavaScript scripts, for example the script that toggles sound playback when its hotspot is clicked.
The addition of interactive hotspots strengthens the display function of the digital museum to a certain extent and conveys the cultural connotation of Han embroidery to the experiencer as fully as possible. The logical steps of the interaction design are summarized in Fig. 2.

4.5 Final Presentation

The final presentation and promotion of the digital museum of Han embroidery fully consider the speed and convenience with which visitors can enter and browse. It is presented as a 720° VR panoramic tour, which can reach the experiencing audience more comprehensively; by scanning a QR code, visitors can quickly and easily enter the exhibition hall and roam.
5 Conclusion

The application of virtual reality technology in digital museums can greatly promote innovation in museum exhibition forms and cultural heritage protection, and reflects thinking about cultural inheritance under new technical conditions. How to better overcome the technical limitations of equipment and how to improve the interactive experience of digital museums are urgent topics for study. Keeping up with the needs of the times and using new technologies and methods to explore new approaches to inheriting and protecting excellent national culture helps maintain cultural confidence, improves citizens' sense of cultural identity, and creates new opportunities for cultural heritage to adapt to the times.
Fig. 2. Interaction design logical steps
References 1. Atabaki, A., Marciniak, K., Dicke, P.W., et al.: Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting. Vision Res. (2015)
2. Mohammed, E., Fatma, F., Nouran, K.M., et al.: Managing microclimate challenges for museum buildings in Egypt. Ain Shams Eng. J. 13(1) (2022) 3. Ignat, K., Björn, B., Magnus, H., et al.: Navigating uncharted waters: Designing business models for virtual and augmented reality companies in the medical industry. J. Eng. Technol. Manage. 59 (2021) 4. Eleanor, E.C., Cathy, U., tom Dieck, M.C., et al.: Developing augmented reality business models for SMEs in tourism. Inform. Manage. 58(8) (2021) 5. Blaga, A., Militaru, C., Mezei, A.-D., et al.: Augmented reality integration into MES for connected workers. Robot. Comput.-Integrat. Manufac. 68 (2021) 6. Mathilde, D., Nathalie, L.B., Johanne, B., et al.: The visual impact of augmented reality during an assembly task. Displays 66 (2021) 7. Santamaría-Bonfil, G., Ibáñez, M.B., Pérez-Ramírez, M., et al.: Learning analytics for student modeling in virtual reality training systems: Lineworkers case. Comput. Educ. 151 (2020) 8. Alves, J.B., Marques, B., Dias, P., Santos, B.S.: Using augmented reality for industrial quality assurance: a shop floor user study. Int. J. Adv. Manufac. Technol. 115(1–2), 105–116 (2021). https://doi.org/10.1007/s00170-021-07049-8 9. Cavazza, M., Lugrin, J.-L., Hartley, S., et al.: Intelligent virtual environments for virtual reality art. Comput. Graph. 29(6) (2005) 10. Theoktisto, V., Fairén, M.: Enhancing collaboration in virtual reality applications. Comput. Graph. 29(5) (2005) 11. Dachselt, R., Hübner, A.: Three-dimensional menus: A survey and taxonomy. Comput. Graph. 31(1) (2006) 12. eSeraglia, B., eGamberini, L., ePriftis, K., et al.: An exploratory fNIRS study with immersive virtual reality: a new method for technical implementation. Front. Hum. Neurosci. 5 (2011) 13. The State Office of the Office of the People’s Republic of China issued Several Opinions on Strengthening the Reform of the Protection and Utilization of Cultural Relics. People’s Daily (2018) 14. Zou, S.: Of virtual reality technology and culture, thinking of digital museum. J. Int. Public Relations 2022(6), 121–123 (2022). https://doi.org/10.16645/j.carolcarrollnkicn11-5281/c. 2022.06.025 15. Qin, Y.: Application Research of Virtual Reality Technology in Cultural heritage protection and museum education. Mod. Market. (next issue) 01, 140–141 (2017) 16. Ye, W., Liu, S., Song, F.: History and current state of virtual reality technology and its application in language education. J. Technol. Chin. Lang. Teach. 8(2) (2017) 17. Zhao, Z.B.: Protection, inheritance and development of AR digital intangible cultural heritage in Dalian. Drama House 4 (2018) 18. Li, X.: On the Combination and application of Network Information Platform and We Media in Museum career. China Natl. Exhib. 18 (2016)
Analysis of International Trade Relations Forecasting Model Based on Digital Trade Haiyan Ruan(B) College of Business, Ningbo City College of Vocational Technology, Ningbo 315100, Zhejiang, China [email protected]
Abstract. International trade (IT) is an important driving force for the development of the world economy, in a context where global integration is accelerating and the scale of foreign trade keeps expanding — and China, as one of the major developing countries, is especially affected. It is therefore particularly necessary to study how to better deal with trade frictions and conflicts. This paper first introduces the impact of digital trade on countries and regions, enterprises and industries, and analyzes its causes and future trends. It then uses the gray correlation model prediction method to build an IT relations development index to quantitatively evaluate the role of various factors in global economic integration at the present stage. The test results show that the prediction error for the trade value of the five countries in the experiment is small, so the model can accurately predict the degree of trade dependence between countries. Keywords: Digital Trade · International Trade · Trade Relations · Forecasting Model
1 Introduction

The development of IT embodies a country's economic strength and comprehensive national power. As trade exchanges among countries become increasingly frequent, China's foreign trade faces unprecedented challenges [1, 2]. New problems and new trends have emerged in the process of global integration, technological progress and the international division of labor. At the same time, China's industrial restructuring has gradually accelerated in recent years, and major measures such as the "Belt and Road" strategic plan have promoted the development of IT, providing more opportunities and a value orientation with huge space and unlimited potential [3, 4].

Domestic research on IT relations mainly focuses on trade modes, import and export structure, and so on. The first strand discusses, on the basis of empirical analysis, trade scale, commodity structure and the relevant countries and regions. The second predicts the development trend of China's foreign trade using the gray correlation model combined with data from China's Ministry of Commerce [5, 6]. The third builds a complete, practical and highly operable indicator system
by drawing on relevant expert experience, the literature, and existing econometric research. Using the gray correlation analysis method, some scholars have established a bilateral cooperation mechanism model between China and the United States based on the green barrier model; empirical tests show that the model can effectively capture the close trade relationship between the two countries [7, 8]. Other scholars argue that the "Belt and Road" policy has promoted the development of China's foreign trade, together with the deepening of China's opening-up and the significant improvement of its international status. Therefore, this paper studies the forecasting model of IT relations based on digital trade. With the continuous development of global trade, IT relations have become increasingly important in world trade, and digital technology has facilitated their study. This paper studies data regression prediction based on gray correlation analysis. It first introduces the relevant concepts and theoretical basis, then explains data standardization, statistical models, and other commonly used methods, verifying their feasibility and advantages through examples. Finally, it uses the gray correlation system as the identification index system to build the IT relations system and conducts empirical tests on the system from a qualitative perspective, showing that it has practical value in applications.
2 Discussion on the IT Relations Forecasting Model Based on Digital Trade
2.1 Current Situation of IT Development
In the context of global economic integration, IT has developed rapidly. With the deepening of globalization and major events such as China's accession to the World Trade Organization (WTO), the scale of China's foreign trade has grown rapidly, the structure of import and export products has been gradually optimized and upgraded, and China's status in the international division of labor has improved significantly. This has brought opportunities as well as many threats. The deepening openness of trade markets around the world has also raised a series of new problems [9, 10]. For example, tariff barriers have been gradually eliminated: in recent years, developed economies such as the European Union and the United States have applied zero tariff rates to some imported products, while China has applied duty-free measures to certain export products to reduce costs, increase profits, and gain greater international competitiveness. As a result, a large number of foreign commodities have entered the domestic market. The rise of trade protectionism has also intensified China's foreign trade frictions, leading to more and more serious IT disputes and controversies. In the process of IT development, the choice of trade partners is an important factor [11, 12]. To obtain higher profits, enterprises adopt stricter, preferential, and even monopolistic measures when participating in the international division of labor and competition, ultimately pursuing the optimal allocation of resources.
Fig. 1. IT ratio (pie chart of the primary, secondary, and tertiary industries, with segments of 20%, 42%, and 38%)
As Fig. 1 shows, China's dependence on foreign trade is high, indeed too high. This is mainly because most export products belong to low value-added, labor-intensive industries or are resource goods, while most imported goods come from foreign importers, local distributors, and other non-domestic enterprises; at the same time, the domestic market is not fully open in the industries where demand for such goods is large, which keeps China's international competitiveness low.
2.2 External Environment for China's IT Development
In recent years, with the deepening of global economic integration, IT has developed ever faster and plays an important role in China's foreign trade. However, because China's foreign trade enterprises have limited strength and low product technology content, their exports are restricted or subjected to anti-dumping investigations, and such external factors prevent normal trade activities and have a large impact. In addition, after China's accession to the WTO, it also faces challenges and pressure from fierce competition in other countries' markets. It is therefore necessary to adopt active and effective response strategies to promote the development of China's IT. In the development of IT, the state formulates trade laws and regulations to protect the domestic market from external interference and to promote the healthy, stable, and sustainable development of China's imports and exports. The enhancement of China's economic strength and comprehensive national strength, together with the shift from traditional tariff barriers toward non-tariff barriers after China's accession to the WTO, also creates favorable conditions for further improving the quality of export products. We should therefore strengthen supervision and improve management measures to safeguard the legitimate interests of domestic enterprises and the overall rights and interests of the country, so that they are not squeezed out of the international market. With the continuous improvement of China's national economic level and the increase in residents' incomes, the consumption structure has also changed, shifting from reliance on labor- and resource-intensive production toward capital, technology,
and other high-tech transformation and upgrading. At the same time, as the share of low value-added products in export commodities has decreased, exports have shifted toward high value-added products, such as mechanical and electrical products, which are competitive and have broad market prospects, and the high-quality traditional industries that fit this trend have developed to a new level.
2.3 Prediction Method of the Grey Relational Model
The grey relational degree measures the closeness between two different indicators; here, the degree of a country's or region's dependence on foreign trade and its economic development. Mathematical operations are established according to the interrelationships between the sequences, and expert experience is combined to evaluate the system. This paper mainly uses a sequential linear regression model to forecast IT relations. When selecting data, three variables can be chosen as the research object: the time series, the spatial distance coefficient, and random noise; factors present in the selected samples also need to be considered. Then, through gray correlation analysis, a discriminant function is established to determine the size of the mutual influence between indicators and to calculate their threshold ranges. A relatively stable sequence can be obtained when calculating the change trend of the various factors. This method predicts the state and state value of each participant in the process of IT in order to analyze how relations between countries develop and change at different stages and levels, and then reasonably plans China's future integration process. The process is shown in Fig. 2.
Fig. 2. Gray correlation degree model prediction method (flow diagram: the interrelationships between sequences, regional economic development, and dependence on foreign trade feed a mathematical operation combined with expert experience; the selectable time series and the spatial distance coefficient feed the sequential linear regression model and the discriminant function, yielding the forecast of international trade relations)
By quantifying the influencing factors, we can identify characteristics such as the closeness and regularity of the relationship between each factor and economic growth. Grey system theory holds that when a system develops to a certain extent, regular patterns emerge. This principle can therefore be used to analyze the size and trend of the correlations between the import and export volumes and quantities of commodities among the countries of the world, while mathematical methods are used to establish the information needed to solve complex international problems:

y_t = C + A_1 y_{t-1} + ... + A_p y_{t-p} + B_0 x_t + B_1 x_{t-1} + ... + B_q x_{t-q} + μ_t  (1)
In applying the grey relational model, the differing import and export commodity structures, technical levels, trade policies, and other relevant information of the countries can be taken into account. A data model can also be established to predict the future IT development trend. The lag-2 form is shown in Eq. (2):

y_t = C + A_1 y_{t-1} + A_2 y_{t-2} + μ_t  (2)
This method establishes a mathematical econometric model for the research problem, selects appropriate parameters according to the known conditions, constructs one or more variables to estimate the degree of correlation between the objects under study and their influencing factors, and then uses grey system theory to determine whether these indicators have an internal logical relationship in time and space, taking them as the actual reference value of the prediction object.

d ln gt_t / d ln st_t = a · (d ln gt_{t-1} / d ln st_{t-1}) + b · (d ln gt_{t-2} / d ln st_{t-2}) + c  (3)

where y = (d ln gt, d ln st), C is the constant column vector, A_1 and A_2 are the 2 × 2 coefficient matrices to be estimated, and μ is assumed to be a two-dimensional perturbation term.
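To make the method above concrete, the sketch below computes a grey relational degree between two series and fits the lag-2 regression of Eq. (2) by least squares. It is a minimal illustration rather than the paper's implementation: the resolution coefficient ρ = 0.5, the initial-value normalization, and the trade figures are all assumptions.

```python
import numpy as np

def grey_relational_degree(reference, comparison, rho=0.5):
    """Grey relational degree between a reference and a comparison series.

    Both series are normalized by their initial value (a common grey-system
    convention); rho is the resolution coefficient, usually taken as 0.5.
    """
    ref = np.asarray(reference, dtype=float) / reference[0]
    cmp_ = np.asarray(comparison, dtype=float) / comparison[0]
    delta = np.abs(ref - cmp_)                       # absolute difference sequence
    xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return xi.mean()                                 # average relational coefficient

def fit_lag2(y):
    """Least-squares fit of the lag-2 form of Eq. (2): y_t = C + A1*y_{t-1} + A2*y_{t-2}."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - 2), y[1:-1], y[:-2]])
    coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    return coef                                      # [C, A1, A2]

# Hypothetical annual trade values (illustrative numbers only)
exports = np.array([1210, 1335, 1420, 1510, 1630, 1718], dtype=float)
gdp = np.array([9030, 9600, 10210, 10880, 11520, 12110], dtype=float)

print("grey relational degree:", round(grey_relational_degree(gdp, exports), 3))
C, A1, A2 = fit_lag2(exports)
print("one-step forecast:", C + A1 * exports[-1] + A2 * exports[-2])
```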
3 Experimental Process of the IT Relations Forecasting Model Based on Digital Trade
3.1 Establishment of the IT Relations Forecasting Model Based on Digital Trade
Based on grey correlation analysis, the model predicts relationships among commodity trade volume and price, changes in import and export structure, international market share, and other IT variables (the model is shown in Fig. 3). After the digital data model is established, the possible relationships between its indicators and parameters must be determined. First, the correlation degrees between the factors are calculated according to the selected index system.
Fig. 3. A prediction model of IT relations based on digital trade (flow diagram: commodity trade volume, commodity trade price, and the import and export structure of commodity trade enter the grey relational analysis, which feeds the digital data model)
Then the correlation degrees corresponding to the transaction amount of each commodity and service are ranked according to their weights. On this basis, a grey statistical analysis system is built, and the grey prediction results are processed and evaluated using the linear weighting method. For export quantities, the actual situation of the importing country must be combined with that of our country, and attention must also be paid to the similarities and comparative advantages of the different countries. Next, the corresponding index system, technical standards, product structure, and other contents are determined according to the relevant policies formulated by each country, and a mathematical equation model is established to calculate the specific values. The grey correlation method is then used to analyze the import and export data, and the final result provides a basis for enterprises' foreign trade decisions.
3.2 Testing Process of the IT Relations Forecasting Model Based on Digital Trade
When testing the model, different methods should be selected according to the actual data. First, determine whether the selected scheme is in line with the development trend
of IT. Then the data are converted into corresponding index values and compared with the average price on the international market, and the forecast results are judged against the expected goals, so that the future trade direction and strategy can be adjusted to better cope with changes. Finally, the model is tested for application among countries or regions through calculation, and the final conclusion can be used in future inspection work.
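As a concrete illustration of this testing step, the sketch below converts a forecast value into an index number against an international-market average price and checks it against a tolerance. The benchmark value, the tolerance, and the sample figures are all hypothetical.

```python
import numpy as np

def to_index(values, benchmark):
    """Convert raw trade prices into index values relative to a benchmark average (=100)."""
    return np.asarray(values, dtype=float) / benchmark * 100.0

def within_expectation(forecast, actual, tolerance=0.10):
    """Judge whether a forecast meets the expected goal within a relative tolerance."""
    return abs(forecast - actual) / actual <= tolerance

# Hypothetical figures: forecast vs. actual unit prices and a market average
benchmark_avg = 152.0
forecast, actual = 158.3, 149.7
print("forecast index:", round(float(to_index([forecast], benchmark_avg)[0]), 1))
print("meets expected goal:", within_expectation(forecast, actual))
```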
4 Experimental Analysis of the IT Relations Forecasting Model Based on Digital Trade
Before predicting IT, the relationships between the variables in the model must be analyzed to confirm that the model is correct, feasible, effective, consistent with the actual situation, and able to meet the expected requirements. Complete and accurate digital trade data can be obtained through the steps above. The degree of influence of each factor can be determined from the conditions of the data set, the relevant countries and regions, and the availability of data, and the most representative mechanism reflecting the mutual constraints among the factors can then be selected on the basis of the actual test results. Table 1 shows the test data of the prediction model of IT relations.

Table 1. Analysis of IT relations forecast data

Test item | Complete trade volume | Forecast trade value | Actual trade value | Error value
Country A | 2445097 | 1543255 | 1267743 | 275512
Country B | 2563784 | 1562345 | 1375321 | 187024
Country C | 2315673 | 1675342 | 1456373 | 218969
Country D | 1356326 | 1334563 | 1264214 | 70349
Country E | 2462346 | 1764245 | 1676345 | 87900
This paper is based on grey correlation theory and an empirical analysis of IT. A correlation test is carried out on the data used in the model: first, key parameters such as the correlation vector, the stationarity index, and the variance contribution rate are calculated. Second, MATLAB is used to build the system diagrams and verify whether the prediction method is suitable, so that corresponding conclusions and reasonable suggestions can be put forward. After effective samples are selected, they serve as the research object and main reference basis of this paper; the other variables involved in the model are determined by the testing methods and then checked and analyzed with a separate model. As Fig. 4 shows, the prediction error for the trade values of the five countries in the experiment is small, so the model can accurately predict the degree of trade dependence between countries.
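The error column of Table 1 can be checked directly: each error value is the forecast trade value minus the actual trade value. The few lines below reproduce the column.

```python
# Error value in Table 1 is forecast trade value minus actual trade value.
rows = {
    "Country A": (1543255, 1267743),
    "Country B": (1562345, 1375321),
    "Country C": (1675342, 1456373),
    "Country D": (1334563, 1264214),
    "Country E": (1764245, 1676345),
}
for country, (forecast, actual) in rows.items():
    error = forecast - actual
    print(f"{country}: error={error}, relative={error / actual:.1%}")
# Countries D and E show the smallest errors (70349 and 87900), matching Table 1.
```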
Fig. 4. Test and analysis of the IT relations prediction model based on digital trade (bar chart of complete trade volume, forecast trade value, actual trade value, and error value for Countries A–E)
5 Conclusion
Digitization is at the core of IT: information exchange and data processing have become important factors affecting trading volume and quality, and technical barriers to trade are one of the outstanding problems. This paper reviews the theories of domestic and foreign scholars on network security and the green economy, establishes a model to explain its mechanism and application, and then uses the grey correlation method to build an evaluation index system for a green IT system based on the international certification system. Under the analytic hierarchy process, the smaller the differences among the weight coefficients, the test standard values, and the actual data, the better the system performs.
References 1. Sakas, D.P., Giannakopoulos, N.T., Kanellos, N., Migkos, S.P.: Innovative cryptocurrency trade websites’ marketing strategy refinement, via digital behavior. IEEE Access 10, 63163– 63176 (2022). https://doi.org/10.1109/ACCESS.2022.3182396
2. Dharmaraj, C., Vasudevan, V., Chandrachoodan, N.: Analysis of power-accuracy trade-off in digital signal processing applications using low-power approximate adders. IET Comput. Digit. Tech. 15(2), 97–111 (2021)
3. Van Bockel, B., Leroux, P., Prinzie, J.: Tradeoffs in time-to-digital converter architectures for harsh radiation environments. IEEE Trans. Instrum. Meas. 70, 1–10 (2021)
4. Brynjolfsson, E., Hui, X., Liu, M.: Does machine translation affect IT? Evidence from a large digital platform. Manag. Sci. 65(12), 5449–5460 (2019)
5. Rukanova, B., Henningsson, S., Henriksen, H.Z., Tan, Y.-H.: Digital trade infrastructures: a framework for analysis. Complex Syst. Inform. Model. Q. 14, 1–21 (2018)
6. Iliadis, A., Pedersen, I.: The fabric of digital life: uncovering sociotechnical tradeoffs in embodied computing through metadata. J. Inf. Commun. Ethics Soc. 16(3), 311–327 (2018)
7. Shelke, P.M., Prasad, R.S.: Tradeoffs between forensics and anti-forensics of digital images. Int. J. Rough Sets Data Anal. 4(2), 92–105 (2017)
8. Nasir, A., Jan, N., Yang, M.-S., Khan, S.U.: Complex T-spherical fuzzy relations with their applications in economic relationships and ITs. IEEE Access 9, 66115–66131 (2021)
9. Hu, Y.-P., Chang, I.-C., Hsu, W.-Y.: Mediating effects of business process for IT industry on the relationship between information capital and company performance. Int. J. Inf. Manag. 37(5), 473–483 (2017)
10. Alkawaz, A.N., Abdellatif, A., Kanesan, J., Khairuddin, A.S.M., Gheni, H.M.: Day-ahead electricity price forecasting based on hybrid regression model. IEEE Access 10, 108021–108033 (2022)
11. Antwi, E., Gyamfi, E.N., Kyei, K.A., Gill, R.S., Adam, A.M.: Modeling and forecasting commodity futures prices: decomposition approach. IEEE Access 10, 27484–27503 (2022)
12. Das, U.K., et al.: Optimized support vector regression-based model for solar power generation forecasting on the basis of online weather reports. IEEE Access 10, 15594–15604 (2022)
Empirical Analysis of Online Broadcast Based on TAM Model Mengwei Liu1 , Feng Gong1(B) , Jiahui Zhou1 , and V. Sridhar2 1 School of Economics and Management, Zhixing College of Hubei University, Wuhan, Hubei,
China [email protected] 2 Nitte Meenakshi Institute of Technology, Bengaluru, India
Abstract. With the development of network communication technology, the number of Internet users in China keeps growing. Webcasting rose in 2016, developed rapidly, and has become a research hotspot at home and abroad. This paper first analyzes the background and characteristics of webcasting, establishes a new model based on TAM theory, puts forward research hypotheses, and verifies the hypothesized model through analysis of actual survey data, before offering relevant suggestions. Based on TAM theory, the paper establishes a model of the factors influencing webcast users' online purchase intention and draws the following conclusions: information quality and reviews are positively associated with users' attitude and purchase intention; promotion has no significant impact on users' attitude or purchase intention; and user attitude is positively associated with purchase intention. These conclusions have theoretical significance and practical value. Keywords: Online Broadcast · Purchase Intention · TAM
1 Introduction
Internet technology has been developing vigorously. Online shopping has been accepted by the public and has achieved essentially nationwide coverage. The traditional e-commerce industry is approaching saturation, and traditional online retailers have begun to seek new breakthroughs. With the spread of social networking, marketing that integrates social networking and e-commerce has become the breakthrough direction for e-commerce enterprises [1]. E-commerce platforms have begun to try diversified social marketing combinations to attract new users and enhance user stickiness, and webcasting began to rise [2]. It entertains users through a new video mode. In 2016, the live broadcasting industry expanded rapidly and entered a period of explosive growth, and the e-commerce industry entered the "e-commerce + live broadcasting" mode. Before 2017, most webcast marketing focused on games and FMCG enterprises. By the second half of 2017, after the rectification of the live broadcasting industry, development stabilized, a competitive pattern formed among platforms, and user growth slowed.
Webcasting effectively meets users' fragmented entertainment and social needs on mobile terminals [3]. As a new form of entertainment shopping, it meets consumers' leisure needs and lets them shop while watching [4]. Webcasting brings users a realistic shopping experience through high participation and strong interaction [5]. Live broadcasting combines traditional marketing methods with modern sales models in the field of e-commerce and can display products or services in a three-dimensional way [6]. Combinations of "e-commerce + live broadcasting" emerge one after another; the prosperity of webcasting is the general trend of the times.
2 Literature Review
2.1 Online Broadcast
Since 2016, webcasting has gradually been accepted by the public and has profoundly affected people's daily lifestyles. Live broadcast content takes many forms, meets different individual needs, and has become an important way for people to relax. At the same time, webcasting has had a profound impact on enterprises, capital, and technology. The development of online live broadcasting in China can be divided into three stages: in the first, online games such as League of Legends marked its emergence; in the second, online games were broadcast in large numbers through live platforms and became the main content of online live broadcasting; in the third, users began spending large amounts of time on mobile applications, ushering in the era of national webcasting. Many scholars have studied webcasting and offered their views. Zhao Mengyuan describes webcasting as a highly interactive form of entertainment: anchors record through mobile phones, computers, and other tools, broadcasting their games, singing, eating, and other activities, while users reward them by sending bullet comments and purchasing virtual props, to which the anchors respond with corresponding actions. Chen Luying describes webcasting as an online content service in which the host provides programs through a live platform, using the broadcast to explain and display content while users communicate by sending bullet screens and giving likes; it is a form of online content service in which users interact with the host in real time. Tan Chang argues that the main characteristics of webcasting are wide dissemination, strong interactivity, and effectiveness; compared with traditional marketing, webcast marketing is more personalized, more vivid, faster, and more connective. Enterprises treat the live platform as a medium and sell products intensively to improve their brand or increase sales. Compared with traditional marketing, webcast marketing has innovated and overturned traditional shopping methods and reflects the time-space characteristics of online video. With the improvement of living standards and production technology, mobile Internet technology has also developed, and live broadcasting has become a new media form. The convenience of mobile terminals, the continuous progress of the
express delivery industry, and the increasingly widespread 5G network have steadily lowered the threshold of online broadcasting, so people can watch videos and buy goods without leaving home. People increasingly accept this way of purchasing online, which has also driven the attention economy, the Internet-celebrity economy, and the marketing models brought by this new medium. Webcasting is still developing; although not yet mature, it has already shown its marketing power and influence as a media form. Scholars argue that webcasting, as a rapidly developing medium, extends media time and space and frees it from temporal and spatial constraints. E-commerce live broadcasting is a subdivision of online live broadcasting. Since 2018, research on it has gradually grown richer, but there is as yet no authoritative definition. Tan Yuli holds that e-commerce live broadcasting combines commodity sales with the traffic and equipment of live platforms, explaining products through the broadcast to attract users to buy. Liang Zhixuan holds that in e-commerce live broadcasting, enterprises, merchants, or hired anchors use mobile phones, computers, and other tools to display the products a store sells, explain how to use them and what they are good for, answer users' questions, post purchase links, promote purchases, and interact in real time. Wang Tong regards e-commerce live broadcasting as a business model that connects products and users by recording and displaying products, explaining their purpose and features, stimulating user interaction, answering questions, enlivening the atmosphere of the live room, and stimulating users' desire to buy. Drawing on the above research, online e-commerce live broadcasting can be viewed from two perspectives. From the consumer side, it is a new shopping model in which users watch broadcasts of enterprise goods or online celebrities through short-video platforms (Douyu, Kwai, Tiktok, etc.), e-commerce platforms (Taobao, JD, etc.), live platforms, and other Internet channels. From the enterprise side, it is a new business model that combines products with webcasting and strengthens interaction between users and producers to improve the purchasing experience.
2.2 TAM Model
The literature review shows that the technology acceptance model (TAM) is a mature theory that has been used and validated many times by the academic community. This paper builds its theoretical framework on TAM and puts forward scientific, reasonable, and practical research hypotheses and a theoretical model. In 1989, Davis proposed the prototype of the technology acceptance model while studying users' acceptance of information systems; the theory was recognized by the academic community and, through continuous development and refinement, took its current form. TAM is mainly used to study users' acceptance of technology, explain their behavioral intention to accept it, and explore the decisive factors in users' acceptance of a new technology.
TAM points out that perceived usefulness and perceived ease of use strongly shape the reception of a new technology and are important determinants of audience acceptance. Davis defined perceived ease of use as how easy a system is for an individual to use, that is, the ease users experience in contact, learning, work, and use; perceived usefulness is the extent to which using a system improves a person's performance, that is, the extent to which users feel that a new technology usefully improves them in various respects. After empirical research and testing, some scholars added perceived entertainment to the model. Through empirical analysis, Moon and Kim (2001) found that perceived usefulness and perceived ease of use have an important impact on users' acceptance and use of information technologies, but that in the Internet context these two factors alone cannot fully explain users' attitudes and behaviors; introducing entertainment had a positive impact on the acceptance of a system, and perceived entertainment was identified as a key factor in users' acceptance of information systems. With the original variables retained, users' perceived entertainment strongly affects their acceptance of a new technology, so perceived entertainment can be included when discussing Internet use and acceptance. Wu Bing and Wang Yufang, studying media dependence and related theories, reached similar conclusions: perceived usefulness and perceived ease of use positively affect use intention, and users' perceived value also positively affects use intention. Gu Man, Huang Shaohua, and other scholars, studying online interpersonal communication, argue that perceived interaction plays an important role in the structure of cyberspace: in the early stage of network development it appeared only on a small scale and did not penetrate daily life deeply, but with people's acceptance and social promotion, online and offline life became ever more closely connected, relying on real-time, two-way communication to produce a sense of presence. Liu Rong et al. (2017) studied the role of interaction in brand realization and found that users' experience when interacting with an anchor online positively affects corporate brand identity. During a live broadcast, consumers are to some extent influenced by the broadcast environment, consciously participating in brand interaction and in the dissemination and creation of brand information, and showing behaviors of liking or even loving the brand, such as actively rewarding the anchor, watching the brand's broadcasts carefully, actively forwarding brand information, and repurchasing. Hu M et al. proposed that users' identification with the anchor and even the business depends greatly on whether users interact well with the anchor: when users receive good interaction and responses, their satisfaction with the anchor increases, and the authenticity of the social experience users gain in the broadcast affects their continued use and their satisfaction with anchors, products, and enterprises.
100
M. Liu et al.
3 Theoretical Model and Hypothesis
"E-commerce + live broadcast" is a new business format. Online merchants use it to sell goods because it combines authenticity, visibility, interest, and interactivity, and it can improve the purchase experience (Ma & Mei 2018). Many scholars have paid attention to webcasting; perceived value theory (Ma & Mei 2018), social presence theory (Tong 2017), and immersion theory have been used to explain why consumers purchase through webcasts.
Fig. 1. Conceptual model (information quality, promotion, and reviews each have paths to attitude (H1, H3, H5) and to purchase intention (H2, H4, H6); attitude has a path to purchase intention (H7))
As seen in Fig. 1, a conceptual model is established. Information quality means the accuracy, consistency, and integrity of content, which profoundly affects consumers' decisions. Compared with traditional ways of exhibiting information, webcasting has an advantage in releasing product information because the exhibition of information is more convenient and complete. In a webcast, users have more diversified ways to obtain information, including information published by the anchor, interaction between the anchor and users (such as real-time answers to users' questions), and interaction among users, such as repurchasing users (who have purchase and use experience) sharing their feelings about the product [7]. In addition, whereas traditional online shoppers get product information from carefully staged pictures or videos, products in a webcast are displayed live. The information a webcast provides is thus more comprehensive, direct, and vivid, which helps users make purchase decisions. It is therefore hypothesized that:
H1: Information quality has a positive impact on users' attitude.
H2: Information quality has a positive impact on users' purchase intention.
Whether in traditional marketing or webcasting, consumers generally decide whether to buy by weighing the price or product value [8]. Price is both a direct embodiment of product value and one of the factors affecting purchase decisions. Surveys show that one reason consumers are more willing to place orders through live channels is that product prices there are more favorable.
In reality, manufacturers usually offer their best price in the live channel, lower than in offline channels and online stores, because many of them sell through live channels to build brand influence and are willing to sacrifice some profit. Flexible and diverse promotional activities can also make the broadcast more interesting and liven up the atmosphere of the live room [9], which likewise affects consumers' attitude and purchase intention. This paper therefore hypothesizes:
H3: Promotional activities have a positive impact on users' attitudes.
H4: Promotional activities have a positive impact on users' purchase intention.
Existing research on online consumption points out that, because information is limited when shopping online, consumers are easily influenced by other consumers' behaviors and reviews, showing a significant tendency to follow the crowd (Park et al. 2018). In the live broadcast mode, the consumer's decision-making environment has changed qualitatively, yet both social psychology and marketing argue that individuals can affect other people's behavior, and many empirical studies support this conclusion (Park et al. 2018). It has long been found that users' evaluations of and emotional orientation toward products are positively correlated with product sales and enterprise performance (Pavlou 2002). Since the rise of live broadcasting, the bullet screen has attracted wide attention; some scholars point out that, owing to cognitive effects, a moderate amount of active pop-up content can give users a good experience when watching video ads and improve their willingness to accept the ads. Consumer reviews are users' evaluations of products or services based on their subjective feelings [10]. Although live broadcasts present more intuitive and comprehensive information, they still cannot give potential consumers use experience before buying; in the live room, repurchasers already have such experience, and their evaluations, which often draw on it, can make up for this deficiency. Moreover, many consumers seek the support of others when making purchase decisions, and other consumers' positive comments on a product, whether about use experience or based on on-site observation in the live studio, help improve consumers' attitude and purchase intention. This paper hypothesizes:
H5: Consumer reviews have a positive impact on users' attitude.
H6: Consumer reviews have a positive impact on users' online purchase intention.
Attitude is a person's feeling of liking or disliking something. The theory of planned behavior (Ajzen 1988, 1991) explains how attitude affects behavior: attitude is an important antecedent variable of behavioral intention and behavior. Furthermore, many empirical studies show that attitude affects consumers' purchase decisions and subsequent behavior. This paper hypothesizes:
H7: Users' attitude positively affects users' online purchase intention.
4 Empirical Analysis
4.1 Methods
Structural equation modeling (SEM) has many advantages: it allows for measurement error, estimates the measurement structure and the relationships among factors simultaneously, permits more flexible measurement models, and assesses goodness of fit. SEM was therefore adopted. The data come from a questionnaire distributed through Internet channels: 380 questionnaires were issued and 335 valid samples obtained. The gender split is close to even (54% women, 46% men). The sample is concentrated in the 19–30 age group, about 75% of the total, consistent with the actual age distribution of live audiences: live broadcasting is popular among young people, who accept it readily, so the audience is mainly young. Education levels are fairly evenly split among high school, college, and undergraduate. By occupation, the sample is concentrated in company employees and private professionals, together accounting for 68%.
4.2 Reliability and Validity Analysis
4.2.1 Reliability Analysis
This study uses Cronbach's α to assess the reliability of the sample data, calculated with SPSS 20.0. As Table 1 shows, the α values of all variables exceed 0.7, indicating good questionnaire reliability.
4.2.2 Validity Analysis
Validity can be divided into content validity, convergent validity, and discriminant validity, each measured by different methods. Content validity is generally evaluated by combining logical and statistical analysis; this study uses mature questionnaires developed in prior research that have been empirically tested, so content validity is credible. Convergent validity can be measured by the average variance extracted (AVE): values above 0.5 generally indicate effective convergence of the latent variables. As shown in Table 1, the AVE values of all variables exceed 0.5, indicating effective convergent validity among the factors.
Table 1. Reliability test

Construct | Items | Standardized factor loadings | α | CR | AVE
Information quality | 3 | 0.775, 0.757, 0.825 | 0.841 | 0.840 | 0.708
Promotion | 3 | 0.754, 0.779, 0.771 | 0.811 | 0.814 | 0.698
Reviews | 4 | 0.671, 0.723, 0.708, 0.703 | 0.794 | 0.803 | 0.656
Attitude | 4 | 0.731, 0.764, 0.630, 0.822 | 0.786 | 0.774 | 0.616
Purchase intention | 3 | 0.811, 0.796, 0.851 | 0.861 | 0.841 | 0.721
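The CR and AVE columns follow standard formulas over the standardized loadings; the sketch below applies them to the loadings reported in Table 1. Values computed this way need not match the table exactly if the paper's software used different model estimates, and Cronbach's α cannot be recomputed here because it requires the item-level responses.

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability: CR = (sum λ)^2 / ((sum λ)^2 + sum(1 - λ^2))."""
    lam = np.asarray(loadings, dtype=float)
    s = lam.sum() ** 2
    return s / (s + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Standardized loadings as reported in Table 1
constructs = {
    "Information quality": [0.775, 0.757, 0.825],
    "Promotion":           [0.754, 0.779, 0.771],
    "Reviews":             [0.671, 0.723, 0.708, 0.703],
    "Attitude":            [0.731, 0.764, 0.630, 0.822],
    "Purchase intention":  [0.811, 0.796, 0.851],
}
for name, lam in constructs.items():
    print(f"{name}: CR={composite_reliability(lam):.3f}, "
          f"AVE={average_variance_extracted(lam):.3f}")
```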
Discriminant validity is mostly used with multi-indicator measurements and is generally assessed by comparing the correlations between factors with the square root of AVE: if the square root of AVE exceeds the correlations, discriminant validity is good; otherwise it is poor. According to Table 2, the square roots of AVE in this study are above the correlation coefficients between factors, so the measurement has good discriminant validity.

Table 2. Validity test

 | 1 | 2 | 3 | 4 | 5
1: Information quality | 0.697 | | | |
2: Promotion | 0.690 | 0.761 | | |
3: Reviews | 0.679 | 0.734 | 0.709 | |
4: Attitude | 0.571 | 0.548 | 0.534 | 0.809 |
5: Purchase intention | 0.631 | 0.653 | 0.681 | 0.691 | 0.697
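The comparison described above (the Fornell-Larcker criterion) can be mechanized from the numbers in Table 2. The sketch below takes the reported correlations and diagonal values and prints, for each construct, its diagonal entry next to its largest off-diagonal correlation; the construct abbreviations are assumptions for brevity.

```python
import numpy as np

# Lower-triangular correlations from Table 2 (order: IQ, PRO, REV, ATT, PI)
corr = np.array([
    [1.000, 0.000, 0.000, 0.000, 0.000],
    [0.690, 1.000, 0.000, 0.000, 0.000],
    [0.679, 0.734, 1.000, 0.000, 0.000],
    [0.571, 0.548, 0.534, 1.000, 0.000],
    [0.631, 0.653, 0.681, 0.691, 1.000],
])
corr = np.tril(corr) + np.tril(corr, -1).T          # symmetrize the matrix
diagonal = np.array([0.697, 0.761, 0.709, 0.809, 0.697])  # Table 2 diagonal values

for i, name in enumerate(["IQ", "PRO", "REV", "ATT", "PI"]):
    others = np.delete(corr[i], i)                  # off-diagonal correlations
    print(f"{name}: diagonal={diagonal[i]:.3f}, max off-diagonal={others.max():.3f}")
```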
5 Hypothesis Test
Table 3. Results of regression model 1

Model | B (unstandardized) | SE | Standard coefficient | t | Sig
Constant | 0.610 | 0.275 | – | 2.218 | 0.028*
Information quality | 0.193 | 0.095 | 0.182 | 2.031 | 0.044*
Promotion | 0.121 | 0.114 | 0.120 | 1.067 | 0.287
Reviews | 0.453 | 0.155 | 0.387 | 2.924 | 0.004**
Dependent variable: Attitude
As seen in Table 3, the regression analysis shows that information quality and reviews significantly affect attitude; therefore, hypotheses H1 and H5 are supported. However, the impact of promotion on users' attitude is not significant, so H3 is not supported. Why is the effect of promotion on consumer attitudes not significant? On the one hand, some consumers choose to purchase through live channels rather than other channels because many suppliers give live channels the largest promotions, that is, the most favorable prices. For a relatively mature product that consumers know well, buy regularly, and whose prices across channels (online and offline) they are familiar with, a low price is very attractive and will significantly and positively affect their attitude toward the product. On the other hand, promotion often means low prices, and for unfamiliar products consumers often take price as a quality cue because they lack other information for judging product quality; in this case, promotion may negatively affect consumer attitudes. Respondents may have considered purchases of different types of products together, biasing their judgment of their own attitude. In addition, recent incidents of fake goods sold in live broadcast rooms, including by some very famous head anchors, may also have affected consumer attitudes.
Table 4. Results of regression model 2

Model | B (unstandardized) | SE | Standard coefficient | t | Sig
Constant | −0.135 | 0.268 | – | −0.473 | 0.636
Information quality | 0.218 | 0.099 | 0.179 | 2.198 | 0.029*
Promotion | 0.038 | 0.118 | 0.033 | 0.324 | 0.746
Reviews | 0.728 | 0.161 | 0.543 | 4.523 | 0.000**
Dependent variable: Purchase intention
According to Table 4, the regression analysis shows that information quality and reviews significantly affect purchase intention, so hypotheses H2 and H6 are supported. However, the impact of promotion on purchase intention is not significant, so H4 is not supported. The reason may be that competition is increasingly fierce in most industries and live broadcast channels cannot always guarantee the lowest prices; it may also be that, with the upgrading of consumption, the influence of price on consumers' purchase decisions has decreased.

Table 5. Results of regression model 3

Model | B (unstandardized) | SE | Standard coefficient | t | Sig
Constant | 1.938 | 0.282 | – | 6.882 | 0.000**
Attitude | 0.526 | 0.073 | 0.460 | 7.189 | 0.000**
Dependent variable: Purchase intention
According to Table 5, the regression coefficient is 0.526 (t = 7.189, P = 0.000 < 0.01), which means that user attitude positively affects purchase intention. Therefore, H7 is supported.
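Regression model 3 is an ordinary least-squares regression of purchase intention on attitude. The sketch below reproduces its shape on simulated data: the survey responses are not public, so the scores here are synthetic, and statsmodels is only one possible tool for the fit.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 335                                   # sample size reported in Sect. 4.1
attitude = rng.normal(3.5, 0.8, n)        # hypothetical Likert-style scores
# Simulate purchase intention with a positive dependence on attitude (H7)
purchase = 1.9 + 0.53 * attitude + rng.normal(0, 0.7, n)

X = sm.add_constant(attitude)             # adds the intercept column
model = sm.OLS(purchase, X).fit()
print(model.params)                       # [constant, attitude coefficient]
print(model.tvalues, model.pvalues)       # t statistics and significance
```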
6 Conclusion
Based on TAM theory, this paper establishes a model of the factors influencing webcast users' online purchase intention and, through empirical research, draws the following conclusions: information quality and reviews are positively associated with users' attitude and purchase intention; promotion has no significant impact on users' attitude or purchase intention; and user attitude is positively associated with purchase intention. These conclusions have practical significance for businesses. First, companies should try to enhance the information quality of live broadcasts so that users can truly
feel that the information provided during the live broadcast is helpful to them. Enterprises can further refine information, regularly investigate user needs, and screen live broadcast information according to the key points of those needs. In addition, merchants should not exaggerate products in the live broadcast; they should release real product information and provide good after-sales service. Second, companies should encourage customers to publish comments, which brings more value to customers: user comments help users obtain more information and make better shopping decisions, and they also increase the interest of the live broadcast, mobilize users' enthusiasm, and foster a positive attitude.
References
1. Forbes, L.P., Vespoli, E.M.: Does social media influence consumer buying behavior? An investigation of recommendations and purchases. J. Bus. Econ. Res. 11(2), 107–112 (2013)
2. El Afi, F., Ouiddad, S.: The rise of video-game live streaming: motivations and forms of viewer engagement. In: Stephanidis, C., Antona, M., Ntoa, S. (eds.) HCII 2021. CCIS, vol. 1421, pp. 156–163. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-78645-8_20
3. van Bonn, S.M., Grajek, J.S., Schneider, A., et al.: Interactive live-stream surgery contributes to surgical education in the context of contact restrictions. Eur. Arch. Otorhinolaryngol. 8, 155–162 (2021)
4. Chen, N.-T.N., Dong, F., Ball-Rokeach, S.J., Parks, M., Huang, J.: Building a new media platform for local storytelling and civic engagement in ethnically diverse neighborhoods. New Media Soc. 14(6), 931–950 (2012)
5. Suzan, B., Alena, S.: Interactive or reactive? Marketing with Twitter. J. Consum. Mark. 28(7), 491–499 (2013)
6. Chen, C.-H., Che, L.-Y.: User experience study of live streaming news with a second screen on a mobile device. In: Ahram, T., Taiar, R., Colson, S., Choplin, A. (eds.) IHIET 2019. AISC, vol. 1018, pp. 465–470. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-25629-6_72
7. Sun, W., Gao, W., Geng, R.: The impact of the interactivity of internet celebrity anchors on consumers' purchase intention. Front. Psychol. 10, 1–9 (2021)
8. Cuong, D.T.: Influence of brand trust, perceived value on brand preference and purchase intention. J. Asian Finan. Econ. Bus. 7(10), 939–947 (2020)
9. Gao, Y., Zhao, L.: Synergistic effect between online broadcast media and interactive media on purchase intention. Front. Psychol. 12, 1–9 (2021)
10. Lascu, D.N., Zinkhan, G.: Consumer conformity: review and application for marketing theory and practice. J. Market. Theory Pract. 7(3), 1–12 (1999)
A High Resolution Wavelet Chaos Algorithm for Optimization of Image Separation Processing in Graphic Design Jingying Wei1(B) and Yong Tan2 1 College of Art and Design, Guangdong University of Science and Technology, Dongguan,
Guangdong, China [email protected] 2 Graduate School, University of Perpetual Help System DALTA, Alabang–Zapote Road, 1740 Las Piñas, Metro Manila, Philippines
Abstract. Image separation is the key to image processing, but during separation the process is easily disturbed by abnormal signals, which reduces separation accuracy and causes the loss of key image content and blurred separation edges. This paper therefore proposes a high-resolution wavelet chaos algorithm that performs high-frequency resolution of the image separation information, enhances the image signal, and shortens the pixel distance. Grayscale value analysis is then performed on pixel gradients and neighborhoods. Finally, the standard wavelet algorithm searches the separated edge signals, and the final separation results are output. The results show that the high-resolution wavelet chaos algorithm can separate images accurately: the abnormal-signal interference rate is less than 10% and the separation accuracy is greater than 90%, outperforming the standard wavelet algorithm. The high-resolution wavelet chaos algorithm therefore meets the requirements of image separation and is suitable for image processing. Keywords: Image Separation · High Resolution · Wavelet Chaos · Graphic design
1 Introduction
Some scholars regard image separation as the reverse process of image fusion: pixels and lines must be adjusted and extracted [1], and problems such as blurred edges and incomplete images easily arise. At present, image separation in graphic design often suffers from low accuracy [2], long separation times, and heavy overlapping of multi-layer images [3]. Some scholars have therefore proposed applying intelligent algorithms to image separation, identifying the abnormal signals in the image and strengthening the processing of the separation edges [4]. However, the processing effect on true-color images such as 128K and 1080P is still not ideal [5], and robustness remains low. To this end, a high-resolution wavelet chaos algorithm has been proposed [6], which recognizes the geometric and complex graphics in the image through image enhancement and iteratively analyzes the edge data signal to
achieve effective separation of the image [7]. Therefore, based on the high-resolution wavelet chaos algorithm, this paper separates the images in graphic design and verifies the method’s effectiveness.
2 High-Frequency Resolution
High-frequency resolution strengthens the geometric features in the image and detects image features, which allows image features to be separated effectively [8]. It mainly calculates the pixel characteristics of the separation position based on the relationship between pixels and points [9–11]. Then local and global features are identified, and the feature values are mapped to a separation list [12].
Definition 1: Let the image data after high-frequency resolution be data_i, the separation position x_i, the feature point l_i, the integration result r_i, the separation data set set_i, the mapping angle θ_0, and the number of separations c_i. Then data_i is calculated as shown in Eq. (1):

data_i = Σ_{set_i} (l_i · x_i) · cos θ_0 · r_i · c_i  (1)
Definition 2: Let the constraint function of high-frequency resolution be Me(x, l, r | k), the high-frequency resolution constraint k, the constraint processed by the wavelet chaos algorithm K, and the interference value ζ. The calculation is shown in Eq. (2):

Me(x, l, r | k) = Σ_{set} [k · (r_i · x_i · l_i) / K] + ζ  (2)
Definition 3: Let the image enhancement degree be Ce_i, the pixel enhancement x_i, the enhancement data set set_i, and the number of reinforcements c_i. Then Ce_i is calculated as shown in Eq. (3):

Ce_i = Σ_{c_i} (x_i · c_i | set_i)  (3)
Definition 4: Let the enhancement adjustment function be f(x, c | set_i), with A the constraint and ξ the error coefficient. Then f(x, c | set_i) is calculated as shown in Eq. (4):

f(x, c | set_i) = Σ_{set} A · (r_i · x_i) | ξ  (4)
In the image separation process, the separated edge information should be standardized to reduce the blurring rate after separation [13]. According to high-frequency resolution theory, the post-separation blurring problem can be solved by identifying the differing pixels and lines and calculating the corresponding gray values and the stable signal after image separation [14].
Definition 5: Let the separation function of the high-resolution wavelet chaos algorithm be F(data_i), the gray value P, and the feature separation function F(data_i | P), calculated as shown in Eq. (5):

F(data_i | P) = k · Σ_{set}^{n} data_i · (sin θ_i · r_i · K) / P  (5)
where sin θ_i is the mapping angle of image separation and P denotes the grayscale values at different angles.
Definition 6: Let the image synthesis enhancement function be F(Ce_i | P), calculated as shown in Eq. (6):

F(Ce_i | P) = α · A · Σ_{set} (Ce_i · sin θ_i · r_i)  (6)
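Read literally, Definitions 1 and 2 reduce to weighted sums over the separation set. The transcription below is one possible reading of the notation, which is ambiguous in places, with made-up inputs for illustration.

```python
import numpy as np

def high_freq_resolution(l, x, r, c, theta0):
    """Eq. (1): data_i = sum over the separation set of (l_i * x_i) * cos(theta0) * r_i * c_i."""
    l, x, r, c = map(np.asarray, (l, x, r, c))
    return float(np.sum(l * x * np.cos(theta0) * r * c))

def constraint_function(x, l, r, k, K, zeta):
    """Eq. (2): Me = sum(k * (r_i * x_i * l_i)) / K + zeta."""
    x, l, r = map(np.asarray, (x, l, r))
    return float(np.sum(k * r * x * l) / K + zeta)

# Illustrative values only; the paper publishes no numeric inputs.
l = [0.8, 0.9, 0.7]; x = [120, 118, 131]; r = [1.0, 0.9, 1.1]; c = [1, 1, 2]
print(high_freq_resolution(l, x, r, c, theta0=0.35))
print(constraint_function(x, l, r, k=0.6, K=3.0, zeta=0.01))
```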
3 Implementation Steps for Image Separation
Image separation can reduce the blurring of image edges, so the gray values must be sampled and analyzed, including pixel points and line-segment distance signals, and the amplitude of signal changes must be calculated. In addition, the image signal set is enhanced according to high-frequency resolution theory, local features are calculated, and images of different dimensions are overlaid and analyzed to reduce external interference [15].
Step 1: Collect the graphic design image, determine the image separation position, and resolve the image at high frequency to determine the threshold and weight of the separation position [16].
Step 2: According to the separation position [17], calculate the pixel and line-segment distances, identify abnormal signals, and supplement them.
Step 3: Compare the pixels and line segments at the separation location to verify the accuracy and stability of the separation results, and record the information of each feature value.
Step 4: Determine whether the separated image is complete; if so, terminate the analysis, otherwise continue the separation [18].
The implementation steps of image separation are shown in Fig. 1, with a code sketch of the pipeline given after the figure.
Fig. 1. Implementation steps of image separation
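As a rough illustration of Steps 1 through 3, the sketch below uses a plain wavelet decomposition to boost high-frequency detail coefficients and flag candidate separation edges. It is a generic stand-in, not the authors' algorithm: the PyWavelets library, the Haar wavelet, the gain factor, and the thresholding rule are all assumptions.

```python
import numpy as np
import pywt  # PyWavelets, assumed here as a stand-in for the paper's wavelet step

def separate_edges(image, wavelet="haar", gain=2.0):
    """Boost high-frequency (detail) sub-bands to emphasize separation edges,
    reconstruct the enhanced image, and flag strongly deviating pixels."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    # Strengthen the edge-carrying detail coefficients (Step 1)
    enhanced = pywt.idwt2((cA, (gain * cH, gain * cV, gain * cD)), wavelet)
    # Flag pixels whose enhanced response deviates strongly: candidate edges (Steps 2-3)
    residual = np.abs(enhanced[:image.shape[0], :image.shape[1]] - image)
    return residual > residual.mean() + 2 * residual.std()

# Hypothetical grayscale test image: a bright square on a dark background
img = np.zeros((64, 64)); img[16:48, 16:48] = 200.0
mask = separate_edges(img)
print("edge pixels found:", int(mask.sum()))
```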
4 Practical Examples of Image Separation in Graphic Design
To verify the effect of the high-resolution wavelet chaos algorithm on image separation, a static 1080P image was used as the research object to analyze the separation effect. The image has no encryption, anti-counterfeiting photocopying, or hidden official seal, and it is in PNG format. The specific image indicators are shown in Table 1.

Table 1. Parameters for separating images

Parameter | Image core | Image edges
Number of segments (pcs) | 13,223 | 422,232
Pixels (pcs) | 322,120 | 645,125
Image signal (Hz) | 322.22~432.31 | 322.22~432.31
Dimension | 3 layers | 3 layers
Error rate (%) | 0.02 | 0.03
Ambiguity (%) | 1 | 1
Resolution (%) | 42.48 | 39.88
Interference factor rate (%) | 43.54 | 46.42
Error correction rate (%) | 46.70 | 41.95
Dynamic rate (%) | 45.40 | 45.74
Comprehensive rate (%) | 44.57 | 48.53
Interaction rate (%) | 42.04 | 44.92
According to Table 1, the number of line segments and pixels differs between the image core and the image edges, but there is no significant difference in signal band, dimension, error rate, or ambiguity. The edge data therefore requires more rounds of analysis, while the core area serves as the reference for classification and verification. The distribution of the Table 1 data is shown in Fig. 2.
Fig. 2. Data processing ratio at different locations
As Fig. 2 shows, data processing in the core area is relatively concentrated while processing in the edge area is relatively scattered, which meets the requirements of image analysis, so the subsequent data can be compared and analyzed. The stability and accuracy of image separation were examined first. Image separation is divided into two stages, data preprocessing in the early stage and data verification in the later stage, and the results are shown in Table 2.

Table 2. Comparative results of stability and accuracy (unit: %)

Algorithm | Stage | Parameter | Accuracy | Stability | Degree of deviation
High-resolution wavelet chaos algorithm | Early stage | Line segment | 96.32 ± 1.11 | 98.12 ± 2.01 | 2.91 ± 0.31
 | | Pixels | 98.26 ± 2.32 | 94.04 ± 3.03 | 2.15 ± 0.22
 | | Separation range | 93.16 ± 1.12 | 92.23 ± 1.22 | 1.26 ± 0.12
 | Late stage | Line segment | 92.12 ± 1.33 | 92.32 ± 2.03 | 1.25 ± 0.32
 | | Pixels | 93.32 ± 1.03 | 92.32 ± 2.01 | 1.41 ± 0.26
 | | Separation range | 93.26 ± 1.12 | 92.23 ± 1.22 | 1.16 ± 0.32
Standard wavelet algorithm | Early stage | Line segment | 83.32 ± 2.11 | 88.12 ± 2.01 | 3.31 ± 1.31
 | | Pixels | 88.26 ± 2.32 | 86.04 ± 2.03 | 3.12 ± 1.22
 | | Separation range | 83.26 ± 2.12 | 82.23 ± 2.22 | 3.12 ± 1.32
 | Late stage | Line segment | 85.12 ± 2.33 | 82.12 ± 2.13 | 3.23 ± 1.32
 | | Pixels | 83.32 ± 2.13 | 82.32 ± 2.11 | 3.21 ± 1.36
 | | Separation range | 86.22 ± 1.33 | 87.52 ± 2.32 | 3.54 ± 1.32
It can be seen from Table 2 that the stability and accuracy of the high-resolution wavelet chaos algorithm are greater than 90% and its deviation is below 3, with relatively small variation across pixels, line segments and separation ranges, so its overall result is better. The line segments and pixels of the standard wavelet algorithm differ from these by roughly 5 to 6 percentage points, and its accuracy and stability are only slightly above 80%. In order to further verify the effectiveness of the algorithm, the accuracy and stability above were analyzed continuously, and the results are shown in Fig. 3.

Fig. 3. Comparison of stability and accuracy of different algorithms

It can be seen from Fig. 3 that, in the different stages of image separation, the separation results of the high-resolution wavelet chaos algorithm are more concentrated, whereas the accuracy and stability of the standard wavelet algorithm are more scattered, consistent with the results in Table 2. The reason is that high-frequency resolution strengthens the image pixels and line segments, so the separation location is more precise, and wavelet chaos analysis of the pixels and line segments at the separated location improves the integrity of the separated image.

Time for image separation. The separation time is a further indicator of the separation effect, covering the determination time of the separation position, of abnormal signals, etc.; the specific results are shown in Table 3. According to Table 3, the image separation time of the high-resolution wavelet chaos algorithm is relatively short, with a variation amplitude of 3-4 s. The processing time for layers 1-3 is between 16 and 17 s, that for separation classes I-III between 13 and 15 s, and the overall processing time is relatively stable. Compared with this, the calculation time of the standard wavelet algorithm is relatively long, at 21.45-31.52 s, because it spends more time finding abnormal signals, separation positions and other information, and the verification of pixels and segments at the separated location also takes long. The high-resolution wavelet chaos algorithm determines the separation position faster through image enhancement and verifies it by comparison, so the separation time is shortened. The processing time of the overall data in Table 3 is shown in Fig. 4.

Table 3. Processing time for image separation (Unit: seconds)
| Method | Parameter | Layer 1 | Layer 2 | Layer 3 | Class I | Class II | Class III |
|---|---|---|---|---|---|---|---|
| High-resolution wavelet chaos algorithm | Abnormal signal identification | 16.25 ± 1.21 | 16.32 ± 1.33 | 16.21 ± 3.22 | 13.35 ± 2.41 | 13.22 ± 2.12 | 13.35 ± 2.21 |
| | Edge location | 16.65 ± 1.52 | 16.41 ± 1.65 | 16.43 ± 3.13 | 14.15 ± 2.72 | 14.45 ± 2.35 | 14.75 ± 2.33 |
| | Integrity verification | 16.42 ± 1.32 | 16.12 ± 1.42 | 16.87 ± 3.25 | 14.42 ± 2.42 | 14.02 ± 2.21 | 14.32 ± 2.53 |
| | Magnitude of temporal variation | 0.45~0.52 | | | | | |
| Standard wavelet algorithm | Abnormal signal identification | 23.25 ± 3.21 | 27.33 ± 2.43 | 27.22 ± 1.12 | 33.35 ± 3.41 | 33.12 ± 2.22 | 33.32 ± 3.52 |
| | Edge location | 27.65 ± 3.52 | 27.21 ± 2.25 | 27.33 ± 1.23 | 33.32 ± 3.12 | 33.35 ± 3.22 | 33.15 ± 3.22 |
| | Integrity verification | 29.42 ± 2.32 | 28.22 ± 1.32 | 28.87 ± 1.35 | 33.22 ± 3.22 | 33.42 ± 2.23 | 33.22 ± 4.51 |
| | Magnitude of temporal variation | 0.43~0.22 | | | | | |
Fig. 4. Comprehensive comparison of different methods
Through the analysis of Fig. 4, it can be seen that the comprehensive analysis degree of the high-resolution wavelet chaos algorithm is larger and its overall variation is relatively stable, while the standard wavelet algorithm varies over a wider range and is inferior to the former. The results in Fig. 4 therefore further validate those of Table 3.
5 Conclusions

Aiming at the difficulty of determining the separation location and separation point during image separation in graphic design, this paper proposes a high-resolution wavelet chaos algorithm, which improves the analysis accuracy of pixels and line segments by collecting signals in the core and edge areas of the image and integrating the abnormal signals. The results show that the stability and accuracy of the high-resolution wavelet chaos algorithm are greater than 90% with a small deviation value, whereas the standard wavelet algorithm deviates significantly. In addition, the high-resolution wavelet chaos algorithm is not affected by the layer or security level in image separation and its overall separation time is short, while the calculation time of the standard wavelet algorithm is relatively long. Therefore, the high-resolution wavelet chaos algorithm meets the image separation requirements and outperforms the standard wavelet algorithm. However, the enhancement step within the high-resolution processing has not been analyzed, and future work will focus on optimizing it.
References
1. Couceiro, P., Alonso-Chamarro, J.: Fluorescence imaging characterization of the separation process in a monolithic microfluidic free-flow electrophoresis device fabricated using low-temperature co-fired ceramics. Micromachines 13(7), 102–114 (2022)
2. Heckert, M., Enghardt, S., Bauch, J.: Novel multi-energy x-ray imaging methods: experimental results of new image processing techniques to improve material separation in computed tomography and direct radiography. PLoS ONE 15(5), 95–107 (2020)
3. Isozaki, Y., Yamamoto, S., Sakata, S., et al.: High-reliability zircon separation for hunting the oldest material on earth: an automatic zircon separator with image-processing/micro-tweezers-manipulating system and double-step dating. Geosci. Front. 9(4), 1073–1083 (2018)
4. Kursun, I., Terzi, M., Ozdemir, O.: Evaluation of Digital Image Processing (DIP) in analysis of magnetic separation fractions from Na-feldspar ore. Arab. J. Geosci. 11(16), 65–78 (2018)
5. Hulser, T., Koster, F., Jaurigue, L., Lodge, K.: Role of delay-times in delay-based photonic reservoir computing. Opt. Mater. Express 12(5), 1214–1231 (2022)
6. Lee, W.H., Park, C.Y., Diaz, D., et al.: Predicting bilgewater emulsion stability by oil separation using image processing and machine learning. Water Res. 22(3), 59–68 (2022)
7. Lu, G.H., Tsai, W.T., Jahne, B.: Decomposing infrared images of wind waves for quantitative separation into characteristic flow processes. IEEE Trans. Geosci. Remote Sens. 57(10), 8304–8316 (2019)
8. Moreira, I.B., Monteiro, R.D.M., da Silva, R.N.O.: Separation of coriander seeds by red, green and blue image processing. Ciencia Rural 52(9), 65–77 (2022)
9. Muyskens, A.L., Goumiri, I.R., Priest, B.W., et al.: Star-galaxy image separation with computationally efficient gaussian process classification. Astron. J. 163(4), 98–112 (2022)
10. Schach, E., Buchmann, M., Tolosana-Delgado, R., et al.: Multidimensional characterization of separation processes - part 1: introducing kernel methods and entropy in the context of mineral processing using SEM-based image analysis. Miner. Eng. 137(3), 78–86 (2019)
11. Shoba, S., Rajavel, R.: Image processing techniques for segments grouping in monaural speech separation. Circuits Syst. Sig. Process 37(8), 3651–3670 (2017). https://doi.org/10.1007/s00034-017-0728-x
12. Tao, Y.D., Li, H.X., Zhu, L.M.: Hysteresis modeling with frequency-separation-based Gaussian process and its application to sinusoidal scanning for fast imaging of atomic force microscope. Sens. Actuat. A-Phys. 311(2), 45 (2020)
13. Zhu, W., Chen, L., Wang, B., Wang, Z.: Online detection in the separation process of tobacco leaf stems as biomass byproducts based on low energy x-ray imaging. Waste Biomass Valoriz. 9(8), 1451–1458 (2017). https://doi.org/10.1007/s12649-017-9890-4
14. Wu, G., et al.: Mass spectrometry-based charge heterogeneity characterization of therapeutic mAbs with imaged capillary isoelectric focusing and ion-exchange chromatography as separation techniques. Anal. Chem. 14(3), 90 (2022)
15. Zhang, K., et al.: Deep feature-domain matching for cardiac-related component separation from a chest electrical impedance tomography image series: proof-of-concept study. Physiol. Meas. 43(12), 15–17 (2022)
16. Lyu, Y., Cui, Z.P., Li, S., Pollefeys, M., Shi, B.X.: Physics-guided reflection separation from a pair of unpolarized and polarized images. IEEE Trans. Pattern Anal. Mach. Intell. 45(2), 2151–2165 (2022)
17. Paez-Perez, M., et al.: Directly imaging emergence of phase separation in peroxidized lipid membranes. Commun. Chem. 6(1), 46–49 (2023)
18. Zhu, Y.D., et al.: A new two-stream network based on feature separation and complementation for ultrasound image segmentation. Biomed. Signal Process. Control 8(11), 72 (2023)
Design and Implementation of Smart Tourism Scenic Spot Monitoring System Based on STM32 Kewei Lei1(B) and Lei Tian2 1 School of Business Administration, Xi’an Eurasia University, Xi’an 710065, Shaanxi, China
[email protected] 2 School of Electronic Engineering, Xi’an University of Posts and Telecommunications,
Xi’an 710121, Shaanxi, China
Abstract. In order to design a smart scenic spot monitoring system with more complete functions, higher integration and a wider application range, this paper builds and implements an intelligent scenic spot monitoring system based on an STM32 micro board. The system consists of three parts: information collection, information transmission and display, and information storage and processing. Through the cooperative operation of these three parts, the system realizes three functions: collection, display and transmission of temperature, humidity and light intensity information; automatic light source adjustment; and real-time monitoring with snapshot alarm. Keywords: STM32 · Smart tourism scenic spot · Light intensity adjustment · Sensors · Monitor
1 Introduction

With the trend toward intelligent and digital development, the management of tourist attractions needs to find a suitable construction plan for smart scenic spots in combination with actual needs, carry out intelligent construction of the scenic spots, and provide a more comfortable intelligent service experience for tourists. In recent years, the rapid development of computer network and communication technology has moved human society towards an intelligent era. Scenic spots have applied these new technologies one after another, hoping that "intelligent" information technology will safeguard them, making the protection, management, development and utilization of the scenic spots more scientific and rational, and the work more refined and intelligent. Smart tourism creates opportunities for the development of tourist attractions. Smart scenic spot technology will also rely on the IoT together with 5G technology to "monitor" the scenic spots, especially cultural and natural heritage sites. Maintaining a relatively sustainable and stable environment in the scenic spot is vital to protecting the safety of tourism resources. For example, one can monitor the environment, temperature and humidity, light intensity and other information of the scenic spot, accurately regulate
and continuously monitor the temperature and humidity in the heritage scenic spot both on site and from the backstage, to maintain its safety and relative stability, and then issue a "prescription" that lets the smart scenic spot system adjust to a more appropriate temperature. This not only keeps the spot suitable for tourists to visit, but also provides monitoring, early warning and protection for the environment of heritage scenic spots [1, 2].
2 System Overall Scheme Design

Based on the punctual atom STM32F407 Explorer development board, the system uses in hardware a DHT11 module, a 2-megapixel OV2640 module, a photosensitive sensor, a 4.3-inch TFTLCD screen, an SD card, an ATK-ESP8266 serial-to-WiFi module, an infrared detection module, a light source adjustment circuit and a buzzer. In terms of software, an Android network debugging assistant and the Keil uVision5 development environment are used to complete the software design. The STM32F407 development board is based on the Cortex-M4 architecture; its 168 MHz main frequency is one of its advantages, meaning higher running speed. It has a hardware FPU unit and DSP instructions, and lower power consumption. The peripherals are also enhanced: compared with the STM32F1, the STM32F4 has a faster A/D conversion speed, and the communication speed of SPI and USART has been greatly improved. The STM32F407 [3] adopted in the design has the following characteristics: (1) rich interfaces, convenient for debugging through the serial port or ST-LINK; (2) a flexible design, in which resources are easy to allocate, which benefits subsequent development; (3) plentiful resources: the board's CPU STM32F407ZGT6 has a built-in 1 MB Flash, supplemented by a 16 MB external Flash and a 1 MB external SRAM. The resources of the development board used in this design include 192 KB SRAM, 1024 KB Flash, the TIM4 and TIM9 timers, the FSMC interface, serial ports 1 and 3, the SDIO interface, 1 DMA controller, 1 USB port, a 12-bit ADC, a camera interface and several general IO ports.

Under normal conditions, the DHT11 collects temperature and humidity, and the light intensity collected by the photosensitive sensor is refreshed on the LCD screen in time and sent to the server via serial port 3 and WiFi. When the detected light intensity falls below a certain threshold, the light source adjusting circuit is called to automatically raise the light intensity. When the number of tourists in the scenic spot is too large, threatening the environment of the heritage site, the infrared module sends the detected signal to the STM32 board; the system responds by sounding the active buzzer, the OV2640 captures and stores an image while displaying the latest snapshot on the LCD screen, and finally the alarm signal is sent to the server via serial port 3 and WiFi.
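The cooperation of the three parts amounts to a simple sense-decide-act loop. The sketch below is only a host-side model of that logic written for illustration; the threshold value and message formats are assumptions, not values taken from the firmware.

```python
# Host-side model of the board's monitoring loop (illustrative assumptions).
LIGHT_MIN = 30   # assumed threshold below which the light source is brightened

def step(temp, humi, light, motion):
    """Return the actions taken for one sensor sample."""
    actions = [f"LCD+WiFi: T={temp}C H={humi}% L={light}"]
    if light < LIGHT_MIN:
        actions.append("raise light source brightness via PWM")
    if motion:   # infrared detection module reports people/animals
        actions += ["buzzer alarm", "OV2640 snapshot to SD card", "WiFi: warning!"]
    return actions

print(step(temp=24, humi=60, light=12, motion=True))
```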
3 System Hardware Design

3.1 Introduction and Design of Information Acquisition Module

3.1.1 DHT11 Temperature and Humidity Sensor
The DHT11 temperature and humidity sensor is wired up as a DHT11 module [4]; pin 2 of the chip (the data line) is connected to pin PG9 of the STM32F407 board, through which the temperature and humidity are measured. The circuit design of the DHT11 module is shown in Fig. 1.
Fig. 1. Circuit design of DHT11 module
The DHT11 is a composite temperature and humidity sensor with calibrated digital signal output [5]. It uses a serial, single-wire mechanism: a two-way connection is established over a single line of the serial interface, and one communication takes only a short time. Each communication outputs 40 bits, high bits first: the first 16 bits carry the humidity data, the next 16 bits the temperature data, and the last 8 bits are the checksum.

3.1.2 Light Intensity Module
The light intensity module is built around a photosensitive sensor, which uses the characteristics of photosensitive elements to convert optical signals into electrical signals, with a response wavelength near the visible band. In general, a photosensitive sensor is used when detecting light intensity [6]. The stronger the light intensity, the lower the corresponding voltage; conversely, the weaker
the light intensity, the higher the corresponding voltage. The sensor is connected to the PF7 pin of the development board using that pin's analog input function, and the on-board ADC3 channel converts the collected analog signals into digital signals.

3.1.3 Infrared Detection Module
The system uses the HC-SR501 to realize infrared detection [7]. It works normally at 4.5-20 V with a detection range of 7 m. Its two working modes are: (1) non-repeatable trigger mode: after the sensing output goes high, it automatically returns from high to low level after a delay; (2) repeatable trigger mode: while the sensing output is high [8], it remains high as long as animals or people stay within the monitoring range, and only falls to low level, with a delay, after they leave. In the system this module is connected to the PA3 pin, and the IO port is set as an input to read the module's output level.

3.1.4 Camera Module
This part mainly introduces the OV2640 camera and the DCMI interface of the development board. The OV2640 provides a single-chip camera function; working in image mode, its maximum frame rate is 15 frames/s, and any image processing step can be configured through SCCB interface programming. The functional modules of the OV2640 include: (1) the photosensitive array, whose maximum output image size is 1600 × 1200 (2 megapixels); (2) analog signal processing, which handles all analog functions, including analog amplification and gain control [9]; (3) the 10-bit A/D converter, which first amplifies the analog signal and then converts it into a digital signal, with a maximum operating frequency of 20 MHz, fully synchronized with the pixel clock; (4) the DSP, which interpolates the raw signal into an RGB signal and then controls image quality, such as edge sharpening, noise reduction and saturation control; (5) an 8-bit microprocessor with 512-byte SRAM and 4 KB ROM.

The DCMI is a synchronous parallel interface used when connecting the OV2640. It accepts high-speed data streams from CMOS cameras and supports multiple data formats. The DCMI interface includes four signals: the data input, which in this design receives the 8-bit data of the OV2640; the row synchronization input, which receives the OV2640's row synchronization signal; the field synchronization, which
is used to receive the VSYNC signal; and the pixel clock input, used when receiving the OV2640's clock signal.

3.2 Design of Information Transmission and Display Module

3.2.1 ESP8266 Module
The ESP8266 module is a serial-port wireless WiFi module with an on-board ATK-ESP-01 chip. To communicate with the MCU, it must be connected through a serial port. The ESP8266 contains a TCP/IP protocol stack; after downloading the relevant firmware, it can work in three modes to complete data transmission. This part uses the pin resources PB10, PB11, PC0 and PF6 of the STM32F407 development board, of which PB10 and PB11 are multiplexed for serial port 3 [10, 11].

3.2.2 TFTLCD Screen
The system uses the LCD screen to display the state of the tourism scenic spot in real time. Under normal conditions, the temperature, humidity and light intensity information is refreshed; at the same time the camera module runs normally and the captured image is displayed on the screen. The TFTLCD used in this design is 4.3 inches; the 16-bit parallel mode and external connection refresh images faster, and the RGB565 format is used to store the color data. The TFTLCD acts as SRAM on the FSMC interface of the development board. The FSMC is a static memory controller to which synchronous/asynchronous memories or memory cards can be connected. The FSMC of the STM32F4 divides external devices into two types that share the address and data buses; different devices are distinguished through different CS chip selects.

3.3 Design of Information Storage and Processing Module

3.3.1 SD Module
In order to save more captured images, the system uses an SD card driven through SPI/SDIO, which is very convenient as external memory in a single-chip microcomputer system. The SDIO interface has the advantage of being compatible with several kinds of memory card. The SDIO controller of the STM32F4 consists of the SDIO adapter module and the APB2 bus interface connected to the CPU. Generally the D0 port is used for data transmission; after initialization, the host can change the width of the data bus. If an SD card is connected to the bus, data transmission is configured through SDIO_D0 or SDIO_D[3:0]. For initialization, SDIO_CMD is set to open-drain mode; for command transmission, SDIO_CMD is set to push-pull mode.
3.3.2 The Light Source
To let the system work well under a variety of light conditions, it must control the brightness of the light in real time. The light source is connected to the development board through an output pin, and the output is used to compare different light intensities. The system compares the collected light intensity signal with locally stored reference levels: when the light is found to be weak, the brightness of the light source is increased; when the light is found to be bright, the brightness is reduced, to guarantee the shooting quality.

3.3.3 Alarm Module
The alarm module mainly lets the STM32 drive the buzzer to give an audible alarm. The system sends the detected abnormal signal to the main control platform, which sends an instruction to the buzzer via WiFi; the buzzer then turns on, indicating to nearby users that an abnormal event has occurred. The buzzer mainly works when the light alarm cannot prompt the user normally, in which case the audible alarm plays a large role.
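The comparison logic of Sect. 3.3.2 can be written out as a tiny function. This is a hedged sketch of the idea only; the reference level, range and mapping to a PWM compare value are assumptions (the actual mapping lives in pwm.c, Sect. 4.2).

```python
# Sketch of the two-sided brightness adjustment in Sect. 3.3.2 (assumed values).
PWM_PERIOD = 500      # assumed timer auto-reload value
REFERENCE = 60        # assumed stored reference light level (0..100)

def pwm_compare(light):
    """Map a measured light intensity to the light source's PWM compare value.

    The larger the light intensity, the smaller the compare value and the
    weaker the light source, matching the inverse relation in Sect. 4.2.
    """
    light = max(0, min(100, light))
    return PWM_PERIOD * (100 - light) // 100

for lux in (10, REFERENCE, 95):     # dark, nominal, bright
    print(lux, "->", pwm_compare(lux))
```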
4 System Software Design

4.1 Collection, Display and Transmission

4.1.1 Data of the Temperature, Humidity and Light Intensity
According to the working principle of the photosensitive sensor, lsens.c and adc3.c are written to complete the acquisition of light intensity. Because an ADC channel is used to obtain the analog value, the configuration function Adc3_Init() is first written to initialize and start the AD converter; Get_Adc3() is then written to read the value of channel 7 of ADC3; finally the photosensitive sensor is initialized, PF7 is set as an analog input, and Lsens_Get_Val() is written to obtain the light intensity value. Calling DHT11_Read_Data() and Lsens_Get_Val() then completes the collection of the detailed data.

4.1.2 Display of the Data
The system uses the TFTLCD screen to show the temperature, humidity and light intensity data, which requires writing the screen driver and the Chinese character display functions: mainly lcd.c for screen driving and text.c for Chinese character display.
First, lcd.c: the required pins are initialized to drive the FSMC interface, the LCD is initialized with the initialization sequence from the screen's data manual, and finally the coordinates are set and GRAM is written to display characters or numbers. LCD_SetCursor() sets the cursor position, LCD_Scan_Dir() sets the LCD automatic scan direction, LCD_WriteReg() writes a register address, LCD_WriteRAM_Prepare() prepares writing to GRAM, and LCD_BGR2RGB() converts the written RGB format to the GBR format the chip requires when reading data back. text.c is written mainly to display text more conveniently: Get_HzMat() looks up the font matrix, Show_Font() displays a Chinese character of the specified size, and Show_Str() displays a string. At the same time, to update the temperature, humidity and light intensity information in real time without interfering with the main program, timer 4 is started in this design: TIM4_Int_Init() initializes timer 4 and enables its NVIC interrupt, and the interrupt service routine TIM4_IRQHandler() obtains the real-time temperature, humidity and light intensity information and displays it on the LCD screen.

4.1.3 Transmission of the Detected Data
In the transmission step the system uses esp8266.c, and the detected data are communicated through the WiFi module. First, the following instruction is issued: "AT+CWMODE=Y" sets the working mode of the WiFi module. There are three working modes: Y=1 selects STA mode, in which the module connects to a wireless network as a WiFi station; Y=2 selects AP mode, in which the module acts as a WiFi hotspot and allows other WiFi devices to connect to it; Y=3 selects STA+AP mode, in which the module has both functions at once and can serve as a hotspot for other devices or join other WiFi hotspots. This design uses the STA mode, in which the module joins other WiFi hotspots, so its three sub-modes are introduced in detail; the sub-modes of the other two modes are similar. In each mode the WiFi link works in one of three ways: TCP server, TCP client and UDP. The configuration in TCP client mode is shown in Table 1. AT+CIPSTART establishes a TCP/UDP/SSL connection, AT+CIPSERVER sets up a TCP server, AT+CIFSR queries the IP address of the WiFi module, and AT+CIUPDATE upgrades the module's firmware. After mastering the AT commands, the WiFi helper functions are written: esp8266_start_trans() connects to the hotspot and establishes a connection with the upper-computer server software, esp8266_send_data() sends data to the upper computer, and after the data are sent, esp8266_quit_trans() exits transparent transmission and closes the connection with the server. Because the serial port used is routed to the WiFi module [10], sending data means the MCU writes to serial port 3, which receives the data and passes them on to the upper-computer server through the WiFi
Table 1. WiFi module configuration

| Command | Effect |
|---|---|
| AT+CWMODE=1 | Set the WiFi module to STA mode |
| AT+RST | Restart the WiFi module so the setting takes effect |
| AT+CWJAP="xx","xxxxxxx" | Join WiFi hotspot xx with password xxxxxxx |
| AT+CIPMUX=0 | Open a single connection |
| AT+CIPSTART="TCP","192.168.1.XXX",8000 | Establish a TCP connection to 192.168.1.xxx:8000 |
| AT+CIPMODE=1 | Enable transparent transmission mode |
| AT+CIPSEND | Start transmission |
module. Therefore, a function for serial port 3 is configured here: usart3_init() initializes the corresponding pins and sets the NVIC interrupt, which should be given a high preemption priority so that the transmission is not disturbed. For the correctness of received data, timer 7 is invoked when the serial port is initialized, and its interrupt service routine clears the buffer before reception. After serial port 3 is initialized, its interrupt service function USART3_IRQHandler() is used to receive data, and u3_printf() to send data; the latter is used when writing the data-sending function of esp8266.c, or directly once the ESP8266 module has established a connection with the server.

4.2 Automatic Adjustment of Light Intensity
To realize this function, pwm.c controls the intensity of the light source. Timer 9 PWM is initialized: the structure variables required by timer 9 are defined, the TIM9 clock and the port-A clock are enabled, the PA2 pin is multiplexed to timer 9 and initialized, and then timer 9 itself is initialized. Since the PWM mode of channel 1 of timer 9 is used here, initializing and configuring this channel is also very important; finally, timer 9 is enabled again to complete the setting of its PWM wave. Then the comparison value of the PWM wave is set according to the acquired light intensity value, modifying the duty ratio so as to adjust the light source: the larger the light intensity value, the smaller the set comparison value and the weaker the light source [12, 13].

4.3 Monitoring and Storage

4.3.1 Infrared Detection and Alarm
To accomplish the detection and alarm function, the system uses DT.c. After correct connection and initialization, the output level of the infrared detection module is
detected through the PA3 pin, and a detected rising edge triggers an interrupt. The service function of interrupt line 3, EXTI3_IRQHandler(), calls the alarm functions. In addition, the buzzer needs to be configured, which initializes the PF8 pin.

4.3.2 Capture and Storage
To complete this function, the driver code of the SCCB interface is written first, mainly the configuration function SCCB_Init(), the start and stop signal functions SCCB_Start() and SCCB_Stop(), the NACK signal function SCCB_No_Ack(), the byte write/read functions SCCB_WR_Byte() and SCCB_RD_Byte(), and the register write/read functions SCCB_WR_Reg() and SCCB_RD_Reg(). The camera is then brought up through OV2640.c; besides the register configuration, the camera reset and other signals are also required [1, 14].

In the last step, the system completes four tasks in dcmi.c. The first is to set up the clock and configure the pin functions. Next, the DCMI_CR register is configured (HSPOL/PCKPOL/VSPOL, data width and other important parameters), and the DCMI interrupt service function is written for data processing while the frame interrupt is enabled. The third is to configure DMA. DMA is a method of transferring data directly between two devices A and B without CPU control; it establishes these connections through the bus matrix, covering peripheral-to-memory, memory-to-peripheral and memory-to-memory transfers. Through DMA, the contents of the source address DCMI_DR are carried to the LCD's RAM, to display the image content on the screen, and to JPEG_buf0 and JPEG_buf1, which stage the image information for storage on the SD card. The DMA is configured for continuous transmission, and stream 1 of DMA2 on the development board is selected to realize the transfer. Finally, DCMI capture is enabled and the output image size of the OV2640 is set: DCMI_Start() starts DMA2 stream 1 and enables DCMI capture to begin transmission; DCMI_Stop() closes DCMI and stops DMA2 stream 1 to end transmission. The fourth step is to write the SD card driver, sdio_sdcard.c.

After the above steps, photo.c is finally written to realize photographing. The jpeg_data_process() function processes the data, moving them from the internal buffer to the external buffer so the image data can be stored on the SD card. Since three pins (PC8, PC9 and PC11) are shared by the SD card module and the camera module, they must be time-division multiplexed: sw_ov2640_mode() and sw_sdcard_mode() switch between the two modes. camera_new_pathname() creates a file name, ov2640_jpg_photo() performs the JPEG capture, and take_photo() is the external entry that realizes the final photo storage function.
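On the other end of the link described in Sect. 4.1.3, the upper computer is simply a TCP server listening on the port from Table 1. The following is a minimal, hedged sketch of such a server; the port matches Table 1, but the line-oriented message format is an assumption (the paper used a network debugging assistant instead).

```python
# Minimal upper-computer stand-in: accepts the ESP8266's TCP connection and
# prints sensor reports and "warning!" alarms. Message format is assumed.
import socket

HOST, PORT = "0.0.0.0", 8000   # cf. AT+CIPSTART="TCP","192.168.1.xxx",8000

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, addr = srv.accept()            # the ESP8266 in TCP-client mode
    print("connected:", addr)
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break                    # module closed the link
            msg = data.decode(errors="replace").strip()
            if "warning!" in msg:
                print("ALARM:", msg)     # infrared snapshot alarm
            else:
                print("sensor:", msg)    # temperature/humidity/light report
```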
5 System Test Results

5.1 Actual Measurement of the Data
This test mainly checks that the data are detected, shown on the screen and sent to the upper computer. The test results are shown in Fig. 2:
Fig. 2. Display of the detected data
From Fig. 2 it can be seen that data acquisition, display on the LCD screen and transmission through the ESP8266 WiFi module are all correct.

5.2 Actual Measurement of the Automatic Light Source Adjustment
In this part the light source module is observed while the light intensity is changed. To make the test result more obvious, a red LED was selected for testing. When the light intensity is within the range 20-30, the light source can be seen emitting light, though not particularly brightly, as shown in Fig. 3. The automatic light source adjustment test is thus completed, and the result is normal.
Fig. 3. Test picture of automatic adjustment light source
5.3 Infrared Monitoring Alarm and Storage Measurement This part of the function realization needs to detect animals or people to capture photos. This part carries out two groups of tests. Each group of tests needs to test the following three indicators: (1) Whether the system can return to the normal state after capturing the captured image information (i.e. the temperature, humidity and light intensity can be displayed, and the next capturing can be carried out); (2) Keep the latest captured image on the screen and verify whether the captured image is stored on the SD card; (3) Whether the buzzer gives an audible alarm and whether the upper computer receives the alarm signal “warning!”.
6 Conclusion

In this paper the system realizes the monitoring of a smart tourism scenic spot. It uses the punctual atom STM32F407 development board and various sensor modules to complete data collection, display and transmission; it accomplishes automatic adjustment of the light intensity; and it passes the tests of the monitoring alarm and snapshot storage functions, thus realizing a basic smart scenic spot monitoring design. Compared with other intelligent scenic spot monitoring systems, this system has the following advantages: (1) it is simple to use, and data reception is simple and easy to understand; (2) the hardware can be tailored: if the STM32F407 development board used in the test is applied in practice, one only needs to redraw the board and port the program according to the hardware design; (3) the application range was considered before the design, so the usable range is wide.
Acknowledgements. This work was supported by the Natural Science Basic Research Program of Shaanxi (Program No. 2021JM-460); the project of Xi'an Science and Technology Bureau (Number: GXYD15.22); the Scientific Research Program Funded by Shaanxi Provincial Education Department (No. 21JC033); the Shaanxi Arts and Sciences Planning Project "Research on Digital Economy Enabling Shaanxi Culture and Tourism Integrated Development" (Project No.: 2022HZ1720); and the Innovation and Entrepreneurship Training Program for College Students (202211664099X, 202211664012, 202211664021).
References
1. Jeoung, J., Jung, S., Hong, T., Choi, J.K.: Blockchain-based IoT system for personalized indoor temperature control. Autom. Constr. 140, 104339 (2022)
2. Torres, M., Aparicio, C.: Intelligent tourism. Post covid trend in Mexico (3) (2021)
3. Jamil, M.S., Jamil, M.A., Mazhar, A., et al.: Smart environment monitoring system by employing wireless sensor networks on vehicles for pollution free smart cities. Procedia Eng. 107, 480–484 (2015)
4. Sanchez-Pacheco, F.J., Sotorrio-Ruiz, P.J., et al.: PLC-based PV plants smart monitoring system: field measurements and uncertainty estimation. IEEE Trans. Instrum. Meas. 63, 2215–2222 (2014)
5. Goh, Y.M., Tian, J., Chian, E.: Management of safe distancing on construction sites during COVID-19: a smart real-time monitoring system. Comput. Ind. Eng. 163, 107847 (2022)
6. Hagem, R.: Smart monitoring system for substation parameters and automatic under frequency load shedding (2) (2021)
7. Gu, S., Neubarth, S.K., Zheng, Y., et al.: Smart item monitoring system. US11030575B2 (2021)
8. Solano, F., Krause, S., Woellgens, C.: An internet-of-things enabled smart system for wastewater monitoring. IEEE Access 10, 4666–4685 (2022)
9. Myint, C.C., Aung, Y.: Microcontroller based room temperature and humidity measurement system (2019)
10. Hulea, M., Mois, G., Folea, S., et al.: Wi-sensors: a low power Wi-Fi solution for temperature and humidity measurement. In: IECON 2013 - 39th Annual Conference of the IEEE Industrial Electronics Society. IEEE (2013)
11. Vena, A., Samat, N., Sorli, B., Podlecki, J.: Ultralow power and compact backscatter wireless device and its SDR-based reading system for environment monitoring in UHF band. IEEE Sensors Letters 5(5), 1–4 (2021)
12. Huang, Z., Wang, X., Sun, F., et al.: Super response and selectivity to H2S at room temperature based on CuO nanomaterials prepared by seed-induced hydrothermal growth. Mater. Des. 201, 109507 (2021)
13. Mondal, S.: Tailoring 2D Materials by Structural Engineering for Flexible Pressure-Temperature and Humidity Sensing Devices (2021)
14. Ahmad, Y.A., Gunawan, T.S., Mansor, H., et al.: On the evaluation of DHT22 temperature sensor for IoT application. In: 2021 8th International Conference on Computer and Communication Engineering (ICCCE) (2021)
Herd Effect Analysis of Stock Market Based on Big Data Intelligent Algorithm E. Zhang1(B) , Xu Zhang1 , and Piyush Kumar Pareek2 1 Zhixing College of Hubei University, Wuhan, Hubei, China
[email protected] 2 Nitte Meenakshi Institute of Technology, Bengaluru, India
Abstract. The rapid development of big data has made it widely used in all aspects of production and life. Traditional stock market analysis is mainly based on technical indicators and experience summary, with strong randomness. The application of big data in the stock market makes the stock market analysis and prediction more accurate. This paper takes the herd effect analysis in the stock market as an example, uses big data intelligent algorithm to collect and sort out the big data of the stock market, and then uses the big data of the stock market for empirical analysis, and then comes to the conclusion that there is a herd effect in China’s stock market. At the same time, by comparing the effects of China’s major policies before and after, we draw a conclusion that policies have a certain impact on the herding effect of the stock market. In view of this conclusion, the paper puts forward corresponding suggestions on the problems existing in China’s stock market. Keywords: Intelligent Algorithm · Herding Behavior · CCK Model · SSE 180
1 Introduction

1.1 Research Background and Significance
With the popularization of big data technology, the analysis of the massive data generated by stock market transactions has attracted more and more scholars' attention, and the concept of "financial technology" has emerged. Stock market analysis based on big data technology includes six basic aspects: visual analysis, data mining algorithms, predictive analysis capability, language engines, data quality and data management, and data storage and data warehousing. In particular, support vector machines, neural networks, decision trees and other methods have been widely used in stock forecasting. Against this technical background, the herd effect in China's stock market can be analyzed more effectively. As a market that developed in a transitional economy, China's stock market has grown for more than 30 years, and its various systems have been constantly improved and developed. A series of measures were introduced, including the GEM, the Science and Technology Innovation Board, the implementation of the registration system and the
establishment of the Beijing Stock Exchange. However, compared with the stock markets of western countries, which already have a century of history, China's stock market started late and has developed over a short time; many aspects remain immature and the system structure imperfect. For example, policy intervention is complicated, information is seriously asymmetric, the investor structure is dominated by retail investors, trading habits are backward, and short-term speculation is common. In this context China's stock market is more prone to abnormal fluctuations that breed herding behavior. Research on herding behavior in the investment market is therefore extremely important for improving China's regulatory mechanism. Studying investors' characteristics and their role in the stability of the securities market helps in understanding its various aspects, and studying the fluctuation of stock prices helps us better understand the operating law of the stock market and predict price fluctuations more accurately [1, 2].

1.2 Journals Reviewed
As for research on the herding effect, Maug and Naik (1996) argued that, under a principal-agent relationship, fund managers' investment failures were attributed not to their own wrong decisions but to market fluctuations; managers therefore prefer to follow the trend and hedge risk by making investment decisions similar to those of other managers, which triggers the herding effect in the stock market [3]. Graham (1999) found that analysts herd significantly on one another, and that analysts of low ability or good reputation are more likely to follow the crowd and imitate the majority's decisions to avoid reputation damage [4]. Tan (2008) concluded that the herding of securities investment funds is likely to cause market imbalance or volatility, producing wide swings in stock prices [5]. Mouna Jlassi et al. (2015) [6] and Ben Saïda (2017) [7] demonstrated herding behavior in the operation of the US stock market, Nha Duc Bui (2018) analyzed the herd effect in the Vietnamese stock market [5], and Dazhi Zheng (2017) analyzed it in Asian stock markets [8]. These scholars analyzed the herd effect in the stock market from theory to evidence, from different angles and with different data. On this theoretical basis, this paper makes full use of big data to screen more than 200 candidate stocks around the SSE 180 index of the Shanghai Stock Exchange over January 1, 2021 to March 1, 2022 and, combined with the CSAD method under the CCK model, studies the overall herding effect of China's stock market.
2 The Influence of Herding Behavior on the Stock Market

Herding behavior has a great impact on the stock market, especially for China's still-immature capital market, where shortcomings remain in investor mentality, information disclosure and the supervision of listed companies. Serious herding behavior has a negative impact on China's stock market, but moderate herding behavior can also play a positive role.
2.1 Positive Impact
Herd behavior acts like a catalyst, accelerating the volatility of stock prices and market cycles. When it is moderate and rational, this catalyst can also have a positive impact on the stability of the stock market. When investors with professional information-gathering and accurate research abilities find undervalued stocks, they buy them in time, while overvalued stocks are cleared in time. Herding generated by such investors moves stock prices towards their true values rather than away from them, which helps stabilize market fluctuations [9].

2.2 Negative Impact
Under irrational herd behavior, investors tend to ignore private information and follow other people's trades, which drives the stock price far from its true value. When investors receive unfavorable information, the market moves towards the opposite expectation and the mood gradually spreads, triggering herding on the sell side, making prices fall rapidly and aggravating the cyclical volatility of the market. Because herding investors generally abandon their own judgment, the market cannot transmit information well; at the same time, the large fluctuations harm the stable development of the market and prevent market resources from being allocated efficiently. Serious herding can even inflate a stock market bubble, lead to financial crisis, and adversely affect the national economy and financial development [10].

2.3 Model Selection and Establishment
There are two methods to measure herding behavior in the stock market: the first is model measurement, through statistical analysis of market data; the second is the questionnaire survey. We choose model measurement. The CCK model regards all investors in the market as a whole and judges market-wide herding from public stock data such as returns, measuring it with the CSAD indicator (the cross-sectional absolute deviation between each stock's return and the overall portfolio return at the same time). The core idea of this model is that when investors ignore their own information in judgment and decision-making, that is, when herd behavior occurs, the linear relationship between the two quantities breaks down. To prove herding in the stock market, we first derive the relationship between the cross-sectional absolute deviation of returns and the market return. In the CAPM, the expected return of a stock equals the risk-free return plus the systematic risk premium:

$$E_t(R_i) = r_f + \beta_i\, E_t(R_m - r_f) \tag{1}$$
In the formula, $r_f$ is the risk-free rate of return, $E_t(R_i)$ is the expected return of stock $i$ at time $t$, $R_m$ is the market yield, $\beta_i$ is the systematic risk coefficient of asset $i$, and
$E_t(\cdot)$ denotes the expectation at time $t$. The systematic risk coefficient of the market as a whole, $\beta_m$, can then be expressed as:

$$\beta_m = \frac{1}{N}\sum_{i=1}^{N}\beta_i \tag{2}$$
Averaging the CAPM formula over the whole market gives:

$$\frac{1}{N}\sum_{i=1}^{N}E_t(R_i) = r_f + \beta_m\, E_t(R_m - r_f) \tag{3}$$
Then the absolute deviation between the expected return of an individual stock in period $t$ and the expected market return is:

$$AVD_{i,t} = |\beta_i - \beta_m|\,\big[E_t(R_m) - r_f\big] \tag{4}$$
Therefore, combining the formulas above, the expected cross-sectional absolute deviation $ECSAD_t$ in period $t$ is:

$$ECSAD_t = \frac{1}{N}\sum_{i=1}^{N} AVD_{i,t} = \frac{1}{N}\sum_{i=1}^{N}|\beta_i - \beta_m|\, E_t(R_m - r_f) \tag{5}$$
Taking the first and second derivatives of this expression with respect to $E_t(R_m)$ gives:

$$\frac{\partial\, ECSAD_t}{\partial E_t(R_m)} = \frac{1}{N}\sum_{i=1}^{N}|\beta_i - \beta_m| > 0, \qquad \frac{\partial^2 ECSAD_t}{\partial E_t(R_m)^2} = 0 \tag{6}$$
Since the first derivative is greater than zero and the second derivative equals zero, $ECSAD_t$ is a monotonically increasing function of $E_t(R_m)$. To facilitate calculation, $ECSAD_t$ and $E_t(R_m)$ are replaced by $CSAD_t$ and $R_m$; hence the cross-sectional absolute deviation ($CSAD$) is positively and linearly related to the market yield ($R_m$). The model's assumption is that under rational conditions there is no herd behavior; when herd behavior occurs, the relationship between the two becomes nonlinear. The following nonlinear regression model is therefore introduced, where $\alpha$ is the average dispersion of the sample excluding the extreme returns, $R_{m,t}$ is the weighted average of individual stock returns in the overall sample (the market return), and $\varepsilon_t$ is the random error term:

$$CSAD_t = \alpha + \beta_1\,|R_{m,t}| + \beta_2\, R_{m,t}^2 + \varepsilon_t \tag{7}$$
Because $CSAD_t = \frac{1}{N}\sum_{i=1}^{N}|R_{i,t} - R_{m,t}|$, when investors in the market ignore their own information and imitate others' decisions, that is, when herd behavior occurs, individual stock returns cluster around the overall market return and CSAD shrinks. When the degree of herd behavior is small, the relation between CSAD and $R_m$ is nonlinear and increasing; when herding is significant, the relationship between
the two is nonlinear and decreasing. As long as $\beta_2$ is not zero, herd behavior can be considered present in the market: when $\beta_2$ is significantly positive, the degree of herding in the market is small; when it is significantly negative, herding is more intense.

2.4 Empirical Study on Herding Behavior Based on Big Data Algorithm
In this paper, the data of the Shanghai Stock Exchange 180 index are filtered through big data, and the CCK model is selected. ST stocks, stocks with operational problems, stocks with abnormal price volatility and newly listed stocks were eliminated, and 180 stocks that are most representative of the market, with good liquidity, were screened out; their circulating market value accounts for about 50% of the circulating market value of the Shanghai Stock Exchange and their trading volume for about 47%. First, the correlation between CSAD (the absolute deviation between the daily return of a single stock and the SSE 180) and the SSE 180 index is calculated through big data. Using a Python program with the Wind database and the Choice financial terminal database, we selected the daily yield of the SSE 180 index from January 1, 2021 to March 1, 2022 (279 trading days in total) as the daily market yield $R_{m,t}$, and the daily yields of the index's constituent stocks as $R_{i,t}$. The absolute market yield $|R_{m,t}|$ and the squared market yield $R_{m,t}^2$ are then computed and used to calculate $CSAD_t$. Sorting the data gives:

Table 1. Descriptive statistics

| Statistical indicator | $CSAD_t$ | $R_{m,t}$ | $|R_{m,t}|$ | $R_{m,t}^2$ |
|---|---|---|---|---|
| Average value | 0.01831 | −0.000283 | 0.008332 | 0.00115 |
| Maximum | 0.03168 | 0.034068 | 0.034068 | 0.00116 |
| Minimum value | 0.00980 | −0.0032 | 0 | 0 |
| Standard deviation | 0.00435 | 1.0721 | 0.6766 | 0.000185 |
| Kurtosis | −0.23715 | 0.6212 | 1.8320 | 9.9012 |
| Skewness | 0.50011 | −0.1965 | 1.3572 | 2.9530 |
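The paper notes that the series were prepared with a Python program over Wind/Choice exports. The snippet below is a minimal sketch of how $CSAD_t$ and the Table 1 statistics can be computed; the `returns` matrix is a random stand-in for the real constituent data, and equal weighting of the constituents is an assumption.

```python
import numpy as np
import pandas as pd

# Stand-in for the Wind/Choice export: 279 trading days x 180 constituents.
rng = np.random.default_rng(0)
returns = pd.DataFrame(rng.normal(0.0, 1.0, size=(279, 180)))  # daily returns, %

r_m = returns.mean(axis=1)                            # market yield R_{m,t}
csad = returns.sub(r_m, axis=0).abs().mean(axis=1)    # CSAD_t

table1 = pd.DataFrame(
    {"CSAD_t": csad, "R_mt": r_m, "|R_mt|": r_m.abs(), "R_mt^2": r_m**2}
)
print(table1.agg(["mean", "max", "min", "std", "kurt", "skew"]))  # cf. Table 1
```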
When using the CCK model to study herd behavior in the stock market, we use $CSAD_t$, the overall market yield $R_{m,t}$, the absolute market return $|R_{m,t}|$ and the squared value $R_{m,t}^2$. By plotting $CSAD_t$ against $R_{m,t}$ and observing whether the relation is linear (Fig. 1), we can preliminarily identify whether there has been herd behavior in China's stock market recently. As shown in Fig. 1, there is evidently no linear, monotonically increasing relation between $R_{m,t}$ and $CSAD_t$, that is, no linear correlation; $R_{m,t}$ is concentrated near the zero axis, and the SSE 180 index represents China's stock
Fig. 1. Scatter chart of cross section absolute deviation and market yield of SSE 180 index
market. Therefore, it can be preliminarily judged that herd behavior existed in China between January 1, 2021 and March 1, 2022. Next we analyze whether this herd behavior is significant. Before the analysis, unit-root testing is required to ensure the stationarity of the time-series data, so the stationarity of the $CSAD_t$, $R_{m,t}$ and $R_{m,t}^2$ series is checked with the ADF unit root test.

Table 2. Unit root test of the time series

| Item | 1% critical value | 5% critical value | 10% critical value | Test value | P value |
|---|---|---|---|---|---|
| $CSAD_t$ | −3.345537 | −2.87255 | −2.57264 | −2.89248 | 0.33556 |
| $R_{m,t}$ | −3.45444 | −2.87215 | −2.57242 | −9.28099 | 0.00000 |
| $R_{m,t}^2$ | −3.45418 | −2.87203 | −2.57236 | −10.17382 | 0.00000 |
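A table of this kind can be produced with statsmodels' ADF implementation. A sketch, continuing with the `csad` and `r_m` series from the previous snippet:

```python
from statsmodels.tsa.stattools import adfuller

# ADF unit-root test for each series in the CCK regression (cf. Table 2).
for name, series in [("CSAD_t", csad), ("R_mt", r_m), ("R_mt^2", r_m**2)]:
    stat, pvalue, _, _, crit, _ = adfuller(series)
    print(f"{name}: test={stat:.5f} p={pvalue:.5f} critical={crit}")
```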
It can be seen from Table 2 that the unit root test values of $R_{m,t}$ and $R_{m,t}^2$ are smaller than the critical values at the 1%, 5% and 10% significance levels, and the test value of $CSAD_t$ is smaller than the critical values at the 5% and 10% levels, so the series are stationary and can be used for regression analysis. The regression yields (see Table 3):

$$CSAD_t = 1.7301 + 0.0533\,|R_{m,t}| + 0.0495\, R_{m,t}^2 + \varepsilon_t \tag{8}$$

According to this model, the $\beta_2$ regression coefficient is 0.0495, which is nonzero and positive, with P = 0.02. Therefore, over the period January 1, 2021 to March 1, 2022, China's stock market exhibited moderate herding behavior.
Table 3. Regression analysis results of herding behavior in the market

| Variable | Coefficient | Standard deviation | T value | P value |
|---|---|---|---|---|
| $|R_{m,t}|$ | 0.0533 | 0.108 | 0.495 | 0.621 |
| $R_{m,t}^2$ | 0.0495 | 0.039 | 1.263 | 0.020 |
| Constant | 1.7301 | 0.056 | 31.046 | 0.000 |
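The regression of Eq. (7) behind Table 3 is an ordinary least squares fit of $CSAD_t$ on $|R_{m,t}|$ and $R_{m,t}^2$. A sketch with statsmodels, again reusing `csad` and `r_m` from the earlier snippet:

```python
import statsmodels.api as sm

# CCK regression: CSAD_t = a + b1*|R_mt| + b2*R_mt^2 + e  (Eq. 7, cf. Table 3).
X = sm.add_constant(pd.DataFrame({"absR_mt": r_m.abs(), "R_mt^2": r_m**2}))
fit = sm.OLS(csad, X).fit()
print(fit.params, fit.pvalues, sep="\n")  # herding if b2 is significantly negative
```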
3 An Empirical Study of Herding Behavior in the Pre- and Post-Policy Stages

3.1 Data Selection
Above, the scatter chart and regression analysis showed mild herding behavior in China's stock market over the past year. From the causes discussed earlier, this herding may arise because China's stock market is still immature and policy has a great impact on it, so we further explore and verify whether policy leads to herding. Since this is still herding based on individual stocks, we continue to use the CCK model to verify and measure it, but with a different sample period, expanding the regression model above; any difference found may then be attributed to the impact of policy on herd behavior. The specific model is as follows:

$$CSAD_t^{AF} = \alpha + \beta_1^{AF}\,\big|R_{m,t}^{AF}\big| + \beta_2^{AF}\,\big(R_{m,t}^{AF}\big)^2 + \varepsilon_t^{AF} \tag{9}$$
Here $CSAD_t^{AF}$ denotes the cross-sectional absolute deviation in the stage after the policy is issued, and $R_{m,t}^{AF}$ the daily market return after the introduction of the policy. As before, if $\beta_2^{AF}$ is significantly positive, herd behavior after the policy can be considered relatively moderate; if $\beta_2^{AF}$ is significantly negative, the degree of herd behavior after the policy is relatively serious. At the same time, the before/after comparison reflects, to some extent, whether herd behavior is affected by policy; $\varepsilon_t$ is the random error term. For data selection we look back at the history of China's stock market, where the introduction of major policies has often triggered a round of cyclical rises. The first was the complete deregulation of stock prices on the Shanghai Stock Exchange in 1992, which led to strong performance of the Shanghai and Shenzhen markets, the Shanghai index rising more than 15-fold. In 1999, the "Request for Instructions on Several Policies to Promote the Development of the Securities Market" was approved and the "5.19" A-share rally broke out. In 2008 the four-trillion-yuan stimulus was introduced and A-shares staged a strong market. In 2014, the "Several Opinions on Further Promoting the Healthy Development of the Capital Market" (the new "National Nine Articles") was launched. We use the SSE 180 index data for the year before and the year after the introduction of this policy on May 9, 2014. Before the introduction of the
policy, there were 243 trading days from May 9, 2013 to May 9, 2014; after the introduction of the policy, there were 245 trading days from May 9, 2014 to May 9, 2015.

3.2 After the Introduction of the Policy

In the same way, we use the CCK model to observe the cross-sectional absolute deviation CSAD^AF_t and the market yield R^AF_m,t in the year after the promulgation of the "Several Opinions on Further Promoting the Healthy Development of the Capital Market" on May 9, 2014.
Fig. 2. Scatter chart of CSAD^AF_t − R^AF_m,t of the Shanghai 180 index and the market yield after the introduction of the policy
It can be seen from Fig. 2 that the market daily rate of return is concentrated around 0.5–1.5, and there is obviously no linear relationship between CSAD^AF_t and the market daily return R^AF_m,t. From this point, we can preliminarily judge that there may be herd behavior in China's stock market within one year after the policy was issued. In order to better quantify the comparison before and after the introduction of the policy, regression analysis is carried out.

Table 4. Unit root test of correlation coefficient of time series (after the introduction of the policy)

Project | 1% critical value | 5% critical value | 10% critical value | Inspection value | P value
CSAD^AF_t | −3.458366 | −2.873866 | −2.573339 | −2.892483 | 0.694754
R^AF_m,t | −3.457894 | −2.872147 | −2.573229 | −6.384382 | 0.00000
R²_m,t | −3.457551 | −2.872031 | −10.173823 | −10.173823 | 0.00000
It can be seen from Table 4 that the unit root test values of the R^AF_m,t and R²_m,t time series are less than the critical values at the 1%, 5% and 10% significance levels, and the CSAD^AF_t time series is less than the critical values at the 5% and 10% significance levels. Using the post-policy regression model above, regression analysis is carried out; the regression results are shown in Table 5.

Table 5. Regression analysis results of herding behavior in the market (after the introduction of the policy)

Variable | Correlation Coefficient | Standard Deviation | T value | P value
|R^AF_m,t| | 0.4612 | 0.080 | 5.747 | 0.000
(R^AF_m,t)² | −0.0333 | 0.015 | −2.181 | 0.030
Constant | 1.1334 | 0.071 | 15.926 | 0.000

That is, CSAD^AF_t = 1.1334 + 0.4612|R^AF_m,t| − 0.0333(R^AF_m,t)² + ε_t. The regression coefficient β₂^AF is −0.0333, and its P value is 0.030. Therefore, there is significant herding behavior in China within one year after the introduction of the policy. Taking the Shanghai 180 index as an example, the CCK model was selected for empirical research. The sample period of the Shanghai 180 runs from January 1, 2021 to March 1, 2022, a total of 279 trading days; there is no linear relationship between the cross-sectional absolute deviation CSAD and the market return R_m, and the regression coefficient β₂ is 0.0495, which indicates that there is relatively mild herding behavior in China's stock market recently. At the same time, it is found that before the introduction of the major policy the regression coefficient β₂ was 0.0181, while after the policy was released the regression coefficient β₂^AF was −0.0333, indicating that the degree of herd behavior became more obvious after the policy. Therefore, policy may be one of the factors deepening herd behavior. However, it should be noted that price rises and falls, trading volume and other factors that may lead to herd behavior are not taken into account: for example, the bull market of 2015 was ushered in after the release of the policy, and the stock market rose considerably, while before the release of the policy the amplitude was small, there were no large fluctuations, and trading volume also changed to a certain extent.
4 Empirical Summary

From the above research results, we can conclude that there is herd behavior in China's stock market and that policy aggravates the degree of herd behavior. Herding behavior in China's stock market is mainly caused by the following factors:

4.1 Frequent and Excessive Policy Intervention

Policies have a great impact on China's capital market. Many people believe that China's stock market is a "policy market", and there are many famous "policy bottoms" in its history, although appropriate policy intervention can help ensure the stable development of the stock market. As China's economy and finance are in
the period of reform, many rules and regulations are still being improved. At the same time, shareholders attach great importance to the authority of the government, and investors are highly dependent on policies, which leads to more pronounced herd behavior in decision-making.

4.2 Incomplete Regulatory Policies and Asymmetric Information

The development of China's stock market is still immature, information is seriously asymmetric, and there are still many problems in stock market information disclosure. The cost for individual investors to obtain information is high, and the information obtained is often inaccurate and untimely, mixed with much false and distracting information. Individual investors cannot understand the real information behind the market and can only guess at it through the trend of the K-line, trading volume and similar indicators, resulting in herding behavior.

4.3 Immature Investor Structure and Trading Mode in China

Although the proportion of institutional investors in China is expanding, individual investors still account for the main part. Individual investors lack a correct investment concept and rational consciousness, as well as professional analysis and judgment ability. Speculative behavior is serious, and "chasing up and killing down" and "following the trend of speculation" often occur. In specific operations, people are vulnerable to emotional interference, and the probability of herd behavior is correspondingly greater.
5 Conclusion

Intelligent algorithms have important research significance and practical value in the application of securities big data. This paper makes full use of securities big data and applies the CCK model for intelligent calculation to analyze the herd phenomenon in China's stock market. In future research, it is necessary to further study various intelligent algorithms and improve existing algorithms according to the needs of different models, forming new algorithms better suited to actual needs, providing more intellectual support for securities big data analysis, and making the development of China's securities market healthier.
References

1. Liu, C., Liao, Y.: An empirical analysis of herd effect in China's stock market based on CCK model. Heilongjiang Journal of Bayi Agricultural Reclamation University 31(06), 105–110 (2019)
2. Ishwarappa, K., Anuradha, J.: Stock market prediction based on big data using deep reinforcement long short-term memory model. International Journal of e-Collaboration 18(2), 1–19 (2022)
3. Demirer, R., Gupta, R., Lv, Z., Wong, W.-K.: Equity return dispersion and stock market volatility: evidence from multivariate linear and nonlinear causality tests. Sustainability 11(2), 351 (2019)
4. Litimi, H., Ben Saïda, A., Bouraoui, O.: Herding and excessive risk in the American stock market: a sectoral analysis. Res. Int. Bus. Financ. 38, 6–21 (2016)
5. Ben Saïda, A.: Herding behavior on idiosyncratic volatility in U.S. industries. Financ. Res. Lett. (2017)
6. Kim, W., Chay, J.B., Lee, Y.: Investor herding behavior and stock volatility 13(2), 59–81 (2020)
7. Jlassi, M., Naoui, K.: Herding behaviour and market dynamic volatility: evidence from the US stock markets. Am. J. Financ. Account. 4(1), 70–91 (2015)
8. Bui, N.D., Nguyen, L.T.B., Nguyen, N.T.T., Titman, G.F.: Herding in frontier stock markets: evidence from the Vietnamese stock market. Account. Financ. 58, 59–81 (2018)
9. Zheng, D., Li, H., Chiang, T.C.: Herding within industries: evidence from Asian stock markets. Int. Rev. Econ. Financ. 51, 487–509 (2017)
10. Yuan, J.: Empirical analysis of herding behavior in China's a-share market. Financ. Theor. Pract. (02), 82–87 (2020)
11. Yang, Y.: Empirical study on herding behavior in Shanghai a-share market. Market Weekly 34(04), 136–139 (2021)
Intelligent Recommendation System for Early Childhood Learning Platform Based on Big Data and Machine Learning Algorithm

Yabo Yang(B)

Xi'an Fanyi University, Xi'an 710105, Shaanxi, China
[email protected]
Abstract. Due to the low cognitive ability of children, an intelligent recommendation system should assist children in learning with the help of their parents. The system therefore needs to evaluate the ability of its users (child learners and their parents), match learning content of suitable difficulty to the user's ability, help the user learn efficiently, gradually cultivate the user's learning interest and habits, and serve as a useful supplement to the user's regular learning. This paper uses the JAVA EE language to develop an intelligent recommendation system for a children's learning platform based on the Hadoop big data framework and machine learning algorithms. By analyzing children's cognitive ability and using a collaborative recommendation algorithm to recommend learning resources suitable for their age, the system aims to improve learning efficiency and cultivate children's learning interest from an early age, while giving the resource library in the learning platform greater practical value.

Keywords: Big Data · Machine Learning Algorithm · Learning Platform for Children · Intelligent Recommendation System
1 Introduction

The purpose of a recommendation system is to help users discover content they are interested in; notably, an intelligent recommendation system must further mine data from the user's historical behavior. Both content producers and content consumers face great challenges: how to push information to users, and how users can find interesting content, are difficult problems. The intelligent recommendation system designed in this paper addresses the problem of learning resource recommendation in children's learning. There have been many research achievements on the design of intelligent recommendation systems for early childhood learning platforms based on big data and machine learning algorithms. For example, one scholar, building on previous research experience and achievements, adopted a simplified method to measure readability and designed an intelligent recommendation system for school-aged children. For school-aged children,
the difficulty of language learning lies first in pronunciation, then in the length, frequency of use and number of meanings of vocabulary, followed by sentence length. The intelligent recommendation system can first determine a child's learning ability and then match them with appropriate learning content [1, 2]. Another scholar uses a recommendation algorithm to build user interest and item feature models and then makes recommendations by comparing similarity; the weight setting is the core of the algorithm. Applied to learning recommendation, the algorithm first uses the user's historical learning behavior data to build a learning database, extracts one or more learning keywords, then calculates the similarity of learning resources and selects appropriate learning content to recommend to users [3, 4]. Although the design and research of intelligent recommendation systems for children's learning platforms has achieved good results, children's limited learning ability and low cognitive level mean that the system's recommendation function must be extremely accurate to meet their learning needs. This paper first introduces big data related technologies and four machine learning algorithms, then develops the system architecture based on the B/S model and designs the functional structure for administrator users and learner users. The results of testing and a satisfaction survey confirm that the system designed in this paper meets both operational and user requirements.
2 System Design Techniques and Algorithms

2.1 Big Data Related Technologies

(1) Hadoop distributed computing framework

As a top-level Apache project, Hadoop integrates parallel computing and HDFS, and includes three core modules: HDFS, MapReduce and YARN. A Hadoop cluster adopts a master-slave architecture, with one NameNode and multiple DataNodes. The NameNode manages the FSImage image file and EditLog files of HDFS, maintains the file system tree, and temporarily records the data node information for the blocks of each file; when the system starts, this information is reconstructed from the data nodes. If the NameNode goes down, the entire file system fails. Each DataNode mainly manages the files stored on its node, and all computation is also done on these nodes [5, 6].

(2) Spark distributed computing framework

Unlike the MapReduce programming model, Spark is based on in-memory computing and is a further upgrade of Hadoop MapReduce. Its performance can be more than 10 times higher than the MapReduce model, improving data processing efficiency. Spark consists of a series of powerful components that can be seamlessly connected to each other through RDDs (Resilient Distributed Datasets) [7].
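As a small illustration of the RDD model just described, the following is a minimal PySpark sketch; the dataset and numbers are illustrative only, not from the paper's system.

```python
# Minimal PySpark sketch of RDD-based in-memory computation, assuming a local
# Spark installation; dataset and numbers are illustrative only.
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-demo")

# An RDD is partitioned across workers and can be cached in memory between
# stages, which is where Spark's speedup over disk-based MapReduce comes from.
views = sc.parallelize([("story-1", 3), ("song-2", 5), ("story-1", 2)])
totals = views.reduceByKey(lambda a, b: a + b).cache()

print(totals.collect())   # e.g. [('story-1', 5), ('song-2', 5)]
sc.stop()
```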
2.2 Machine Learning Algorithms

Machine learning algorithms fall into three categories: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning, also known as learning with a teacher, first trains on labeled data to obtain specific results and then classifies unknown data according to the trained rules [8]. Unsupervised learning differs in that data need not be labeled in advance: according to the grouping characteristics of the data itself, it can find internal regularities and useful hidden information, even in complex or damaged data [9]. After consulting various sources and collating the material, a brief description of several machine learning algorithms follows: decision tree, Naive Bayes, the K-NN algorithm, and clustering.

(1) Decision tree

The purpose of decision tree learning is to present the rules hidden in a set of complex, irregular data samples in the form of a tree for data classification. Three main situations arise in the process of building a decision tree: the training set contains one or more samples that all belong to the same class; the training set contains no samples; or it contains samples belonging to multiple classes. The decision tree is empty at first, and as the data are compared and judged, new nodes are added until finally all the data can be classified [10].

(2) Naive Bayes

Bayesian theory calculates the probability of a hypothesis from the prior probability of the hypothesis, the observed data, and the probability of those data under the hypothesis. Bayesian learning is a parametric classification method based on probability and statistics theory that can statistically ensure the smallest classification error, but it has two requirements: the number of classes must be known, and the relevant probability distributions must be solvable [11]. It can be represented by the following formula (1):

P(a | B) = P(B | a)P(a) / P(B)  (1)
Among them, P(a|B) is the posterior probability, P(a) is the prior probability, P(B|a) is the likelihood function, and P(B) is the evidence factor.

(3) K-NN algorithm

The K-NN (K-nearest neighbor) algorithm works as follows: given many kinds of classified data in a space, when a new data point is input, its category is determined by calculating the distance between this point and the nearest k points around it and assigning it to the category that predominates among them. This can also be understood
as deciding which category the data belongs to by the nearest k data points around it; of course, certain rules must be followed [12].

(4) Clustering

The classification results obtained by clustering can clearly distinguish the samples, but the final confirmation of each category needs to be labeled again according to certain criteria:

S = S1 ∪ S2 ∪ · · · ∪ Sn  (2)

Sxi ∩ Sxj = ∅ (∀i ≠ j; i, j = 1, 2, . . . , n)  (3)

Among them, S is the sample space, and xi and xj are cluster samples.
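The following is a minimal scikit-learn sketch of the algorithms described above (Naive Bayes, K-NN and clustering); the toy features standing in for learner ability scores are illustrative assumptions, not data from the paper.

```python
# Minimal scikit-learn sketch of Naive Bayes, K-NN and clustering; features
# and labels are illustrative stand-ins for learner ability data.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X = np.array([[0.2, 0.3], [0.8, 0.9], [0.25, 0.4], [0.75, 0.85]])  # ability features
y = np.array([0, 1, 0, 1])                                          # difficulty tier labels

nb = GaussianNB().fit(X, y)                          # posterior via Bayes' rule, formula (1)
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)  # vote among the k nearest samples
km = KMeans(n_clusters=2, n_init=10).fit(X)          # disjoint partition S1 ∪ S2, formulas (2)-(3)

new_user = np.array([[0.3, 0.35]])
print(nb.predict(new_user), knn.predict(new_user), km.predict(new_user))
```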
3 System Design

3.1 System Architecture Design

The development language in B/S mode is mainly JAVA, currently the language most used by developers. JAVA is an object-based compiled language with very powerful object, description and expression capabilities. JAVA EE is the main direction of the JAVA development platform: it is free, open source and cross-platform, and it offers many domain application libraries with high scalability and security. Development based on JAVA EE enjoys extensive technical support, and the basic development environment and tools are free, so JAVA EE was chosen as the development language and tooling for the system in this paper. Figure 1 shows the system architecture based on the B/S mode.
Fig. 1. System architecture based on B/S mode (browser layer on top; function modules for user login, cognitive test, resource search, resource tags, learning records and learning resources; Hadoop framework at the base)
3.2 Systematic and Diversified Learning Content

The construction of the system's diversified learning content is shown in Fig. 2.

(1) Children's learning resource library

The children's learning resource library mainly comprises the learning content in the interactive learning system and the question bank designed around that content. The question bank recommends learning materials to different child users according to the recommendation algorithm, facilitating random access to and use of materials in the system.

(2) Identification diagram

In independent exploratory learning, children mainly use recognition maps to present virtual learning content integrated with real learning scenes, so the design of recognition maps is an indispensable part of the system. Their design should first consider intuitive visual expression, such as theme, style and content, and pay attention to the identifiability of each recognition map.
The purpose is to enable children to quickly find the recognition map needed for a learning task among multiple recognition maps.

(3) Auxiliary equipment

Auxiliary equipment mainly includes the camera and equipment supports. The camera is mainly used to support augmented reality technology during interactive learning. Equipment supports help children use the intelligent hardware devices: for example, children need to interact with virtual content when using the system and cannot complete the operation one-handed, so a support for the smart device is needed to help them complete learning tasks.
Fig. 2. Learning content of the system (Early Childhood Learning Platform linking the identification diagram, situational learning, auxiliary equipment, the children's resource library, the cloud database and the child's learning equipment)
3.3 System Use Case Design

(1) Registered user login use case

The registered user login use case covers the core functions available to registered, formal users of the system. The learning resource content browsing
function module includes the recommendation function based on the difficulty factor. The My Learning module and search are the main functions of the learner user: through the My Learning module, the learner obtains the resource information available for study, and the system displays the learning content from the resource library.

(2) Administrator use case

The administrator use case describes the relevant functions of the learning platform administrator. The administrator functions of this platform are relatively complex; the main functions related to recommendation include the review of registered users, the review of content, the labeling of learning resources, and the matching management of learner ability and resource difficulty (a sketch of this matching logic follows Section 3.4 below). The learning resource labeling function supplements and improves the main features, keywords, knowledge points, related fields, related assets and other information of learning resources uploaded by users, providing the necessary inputs for calculating the difficulty coefficient and for collaborative filtering recommendation.

3.4 Interface-Aware Design

(1) Design of children's color perception in the visual interface. Investigation and analysis of child learners show that children develop a very intuitive and obvious preference for color as they grow up; they are easily attracted by bright colors such as red, orange, yellow and other warm tones. Accordingly, bright colors should be integrated into the screen design of the interactive learning system to increase children's attention through the visual sensory channel.

(2) Design of interesting graphic elements in the two-dimensional interface. In two-dimensional interface design, simple graphic elements and layout can deliver knowledge effectively to children. Easy-to-understand graphic elements increase children's comprehension of learning content and stimulate their desire to use the system; for example, silhouetted, cartoonized and anthropomorphized graphic elements make information more readily accepted and loved by children and strengthen their impression of the product.

(3) Design of the visualization model of 3D virtual objects. With the support of augmented reality technology, the presentation of 3D virtual object models and animations attracts children's attention. Animated 3D virtual objects not only attract attention through the visual channel but also promote children's understanding of virtual objects and improve their cognitive ability through vivid models. Integrating the learning content of 3D
virtual objects with the real learning environment and generating interesting interaction can reduce children's cognitive load and increase their learning interest. However, using too many animations and interactive effects will backfire, making children unable to concentrate and reducing the transmission of key knowledge. In the animation design of 3D virtual objects, therefore, dynamic and static elements should be combined in an orderly way, and primary and secondary knowledge points should be clearly distinguished.
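As referenced in Sect. 3.3, the following is a minimal sketch of how user-based collaborative filtering might be combined with the ability-difficulty matching described there; the similarity measure, the rating matrix and the difficulty tolerance are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal user-based collaborative filtering sketch with a difficulty filter,
# assuming a small user-item rating matrix; all names and numbers illustrative.
import numpy as np

ratings = np.array([          # rows: users, cols: learning resources (0 = unrated)
    [5, 4, 0, 1],
    [4, 5, 3, 0],
    [1, 0, 5, 4],
])
difficulty = np.array([0.2, 0.3, 0.7, 0.8])   # per-resource difficulty coefficient

def recommend(user: int, ability: float, tol: float = 0.3):
    # Cosine similarity between the target user and every other user
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user])
    sims = ratings @ ratings[user] / np.where(norms == 0, 1, norms)
    sims[user] = 0
    # Predicted scores are similarity-weighted ratings from the other users
    scores = sims @ ratings / (np.abs(sims).sum() + 1e-9)
    # Keep only unrated resources whose difficulty is near the child's ability
    mask = (ratings[user] == 0) & (np.abs(difficulty - ability) <= tol)
    return [i for i in np.argsort(-scores) if mask[i]]

print(recommend(user=0, ability=0.6))   # e.g. [2] -> resource index 2
```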
4 System Implementation

4.1 Implementation of the System Registration Function

In the process of user registration, there are two important tasks. First, complete information on the child and parents must be provided at registration, including age, gender, weight, family composition and region, as well as related information such as the parents' academic qualifications and the importance they attach to learning. In practice, parents generally register, and after registration the child learners log in and use the system with their parents' assistance. Second, after successful registration and before officially entering the learning stage, children must complete an initial cognitive ability diagnosis: the platform extracts appropriate test questions from the cognitive diagnostic question bank based on the basic information given at registration. The cognitive test consists of three parts, namely tests of language ability, number recognition ability and graphic perception ability, which examine the child's basic cognitive abilities.

4.2 System Performance Test

As a platform that needs to operate for a long time, the intelligent recommendation system of the children's learning platform needs, in addition to the necessary business functions, basic functions for daily operation. These must meet the system's operating requirements to ensure normal use. The tested performance aspects include: stability, i.e., the platform must provide uninterrupted service throughout the day, with no unexpected crash or other service interruption exceeding 3 min, and stability must be above 93%; responsiveness, i.e., page and user action response times should be fast, with a correct result returned within 2 to 3 s so that users feel no obvious sluggishness; maintainability, i.e., the system should offer logging, search and backup functions so that in the event of a major failure problems can be quickly found and service restored, the system can be easily expanded as the user base grows, and maintainability must be above 90%; and security and confidentiality, i.e., the system must safely manage users' personal information and learning data to prevent privacy exposure. At the same time, it must be able to manage and control the security
of assets and prevent the dissemination of harmful speech, ideas and behaviors through audio, video, animation, text, etc., and security performance must be above 95%. The performance test results are shown in Table 1 and Fig. 3; the data show that the design requirements are met.
Fig. 3. Test result
Table 1. Performance Test Results

Item | Test value | Meets requirement
Stability | 96% | Yes
Responsiveness | 0.87–1.64 s | Yes
Maintainability | 92% | Yes
Security and confidentiality | 99% | Yes
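As a simple illustration of how responsiveness figures like those above might be gathered, the following is a minimal timing sketch; the endpoint URL and sample size are illustrative assumptions, not part of the paper's test setup.

```python
# Minimal sketch of a response-time check against the 2-3 s requirement;
# the endpoint and sample size are illustrative assumptions.
import time
import urllib.request

def measure(url: str, n: int = 20) -> list:
    timings = []
    for _ in range(n):
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=5).read()   # one request round trip
        timings.append(time.perf_counter() - start)
    return timings

t = measure("http://localhost:8080/learning/resources")  # hypothetical endpoint
print(f"min={min(t):.2f}s max={max(t):.2f}s")  # compare against the 2-3 s target
```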
4.3 System Satisfaction Test

The application of the system should meet not only the learning needs of children but also the management needs of administrators. After a period of use, this paper conducts
a satisfaction survey of system administrators, children and their parents. The questionnaire is published on the system server, and users can log in to fill it in online. The satisfaction options are: very satisfied, relatively satisfied, average, relatively dissatisfied and dissatisfied. Administrator satisfaction mainly covers the use of functions such as learner review, learning resource review and resource management. The satisfaction of children and their parents mainly covers functions such as cognitive level testing, learning resource acquisition and learning content matching; children fill in the questionnaire accompanied by their parents. The statistics of the survey results are shown in Fig. 4.
Fig. 4. User satisfaction results (proportion, in %, of administrators versus toddlers and parents at each satisfaction level)
As shown in Fig. 4, among the users who filled in the questionnaire, 81.83% of administrators and 93.07% of children and parents are satisfied with the system, and only a minority are dissatisfied. This shows that the functions of the system can basically meet users' needs.
5 Conclusion

This paper uses the Hadoop distributed framework of big data technology to design an intelligent recommendation system for a children's learning platform based on machine
learning algorithms and the B/S mode. It can help children gradually establish good learning habits by assessing learning outcomes without barriers. In the system performance tests of this paper, the results show that the stability, maintainability, responsiveness, and security and confidentiality of the system all meet the design standards, and user satisfaction with the system is very high, which shows that the system designed in this paper is effective.

Acknowledgements. 2018 Shaanxi Province Education Science "Thirteenth Five-Year" Planning Project "Action Research on Teacher Guidance Strategies in Outdoor Sports Games in Kindergartens in Xi'an", project number: SGH18H463. The phased results of the Shaanxi Preschool Education Research Project "Research on the Quality Assurance System of Preschool Integrated Education" in 2020, project number: YBKT2004.
References

1. Ramzan, B., Bajwa, I.S., Jamil, N., et al.: An intelligent data analysis for hotel recommendation systems using machine learning. Sci. Program. 2019(4), 1–20 (2019)
2. Faisal, M., Al-Riyami, S.A., Yasmeen, S.: Intelligent recommender system using machine learning to reinforce probation students in OMAN technical education. Int. J. Control Autom. 13(2), 349–357 (2020)
3. Ali, T., Afzal, M., Yu, H.W., et al.: The intelligent medical platform: a novel dialogue-based platform for health-care services. Computer 53(2), 35–45 (2020)
4. Partheeban, N., Radhika, R., Ali, A.M., et al.: An intelligent content recommendation system for e-learning using social network analysis. J. Comput. Theor. Nanosci. 15(8), 2589–2596 (2018)
5. Faqihi, B., Daoudi, N., Hilal, I., et al.: Pedagogical resources indexation based on ontology in intelligent recommendation system for contents production in d-learning environment. J. Comput. Sci. 16(7), 936–949 (2020)
6. Visuwasam, L., Paulraj, D., Gayathri, G., et al.: Intelligent personal digital assistants and smart destination platform (SDP) for globetrotter. J. Comput. Theor. Nanosci. 17(5), 2254–2260 (2020)
7. Rosewelt, L.A., Renjit, J.A.: A content recommendation system for effective e-learning using embedded feature selection and fuzzy DT based CNN. J. Intell. Fuzzy Syst. 39(8), 1–14 (2020)
8. Badami, M., Tafazzoli, F., Nasraoui, O.: A case study for intelligent event recommendation. Int. J. Data Sci. Analytics 5(4), 249–268 (2018). https://doi.org/10.1007/s41060-018-0120-3
9. Ifada, N., Nayak, R.: A new weighted-learning approach for exploiting data sparsity in tag-based item recommendation systems. Int. J. Intell. Eng. Syst. 14(1), 387–399 (2021). https://doi.org/10.22266/ijies2021.0228.36
10. Abid, M., Umar, S., Shahzad, S.M.: A recommendation system for cloud services selection based on intelligent agents. Indian J. Sci. Technol. 11(9), 1–6 (2018)
11. Senousy, Y., Shehab, A., Hanna, W.K., et al.: A smart social insurance big data analytics framework based on machine learning algorithms. Cybern. Inf. Technol. 20(1), 95–111 (2020)
12. Tchapga, C.T., Mih, T.A., Kouanou, A.T., et al.: Biomedical image classification in a big data architecture using machine learning algorithms. J. Healthcare Eng. 2021(2), 1–11 (2021)
Big Data Technology Driven 5G Network Optimization Analysis

Xiujie Zhang(B), Xiaolin Zhang, and Zhongwei Jin

School of Information Engineering, Heilongjiang Polytechnic, Harbin 150001, Heilongjiang, China
[email protected]
Abstract. 5G is the core of a new generation of information and communication infrastructure, offering higher speed, greater capacity and lower latency than 4G. The digital transformation of production and social infrastructure based on 5G networks is moving technologies and applications such as big data, cloud computing and the Internet of Things from the conceptual to the practical and from the abstract to the concrete. There is growing recognition of the importance of big data to the development of today's society. Big data technology and 5G communication technology, as current emerging technologies, have a strong capacity to promote economic development [1, 2]. Their integration can effectively advance the related technologies, and their effective combination can provide strong support for the development of artificial intelligence, laying a solid foundation for China's high-quality development. This paper explains the application of big data technology in improving the 5G communication architecture and the problems in 5G network development, combines the advantages of 5G communication networks and big data technology, discusses the application practice of big data technology, and analyzes the significance and value of its application [3].

Keywords: Analysis · Big Data Technology · Network · Optimization · 5G
1 Introduction

According to statistics, mobile data traffic worldwide grew more than sevenfold between 2016 and 2021. Both the growth rate and its pattern show the unique characteristics of the new era: the traditional technology system can no longer meet people's actual needs, and the emergence of 5G will further promote the rapid development of mobile data. At present, China has increased its investment in 5G network construction, and several provinces and municipalities have started to run 5G base stations, but there is still huge room for the application and research of 5G technology [4, 5]. The application of big data analysis to mobile communication networks in the 5G era is the inevitable direction of exploration and development; combined
with an analysis of people's actual needs, research on applying big data analysis to mobile communication network optimization should gradually realize technological innovation. Combining the characteristics of mobile communication networks in the 5G context with the current state of big data analysis helps maintain the stable, safe operation of mobile communication networks and improve the level of mobile communication technology in China. Since the launch of 5G network technology, the development of the communication industry has accelerated, and it has played an important role in supporting the development of various industries. In the process of 5G network development and application, it is necessary to rely on big data technology to continuously optimize the 5G network and better meet the transmission needs of communication technology as the times develop. Moreover, the integration of 5G networks and big data technology can not only build an intelligent development pattern but also facilitate the coordinated development of the two technologies [6, 7].
2 Overview of Big Data Technology and 5G

Mobile big data includes user-generated data and operator-generated data: user-generated data includes self-published data and rich media data, while operator-generated data includes log data and basic network data. There are many points in the operator's network where data can be collected. At the terminal, data such as drive tests/minimization of drive tests, test reports, transmission group sizes, usage habits and terminal types can be collected. At the base station, data such as user location information, user call records, link status information and received signal strength (RSSI) can be obtained, while measurements, signaling and call statistics can be collected through the operation and maintenance system in the background. From the Internet, data such as news, information, maps, videos, chats and applications can be collected. This means that not only can service data such as service type, upstream and downstream traffic and visited sites be obtained in the operator's network, but the status of the entire channel can also be mastered. As shown in Fig. 1, the 5G network should be a user-centric, context-aware and fast-responding network, and the 5G wireless network can realize the convergence of communication, caching and computing capabilities [8, 9]. It is therefore necessary to optimize network operation and management using big data technology, and to adapt the network architecture to big data transmission, in order to realize the operational and network intelligence of 5G [10]. In the era of big data, data mining technology can effectively improve data transmission, enhance data interaction while maintaining good system performance, and ensure higher-frequency, larger-capacity information reception, so as to bring out the application value of data and lay a solid foundation for information management and interoperability. The 5G network is the main channel for transmission from data terminals to data centers in big data applications; combined with Internet and IoT technology, it can effectively support edge storage and computing, while the core network ensures that data transmission provides more effective information for data centers and cloud computing centers. It is worth mentioning that 5G networks not only establish data transmission patterns, but also
Fig. 1. Application of big data analytics in 5G network optimization
trigger the exchange of information between terminals and the various servers and routers of the core network, thus ensuring that more data content can be carried. For example, with 5G technology, a 1 KB HTTP request from a user can generate a roughly 930-fold increase in internal data flow, establishing linked management of data across different back offices, databases and gateways and improving data transmission rates. The 5G network is the main channel from the data terminal to the data center, as shown in Fig. 2: data collected from the Internet, IoT terminals or mobile users are pre-processed and stored by base stations and wireless access networks with edge caching and computing capabilities, and finally transmitted through the core network to data centers and cloud computing centers for analysis.
Fig. 2. 5G networks are the primary gateway from data terminals to data centers
3 Common Approaches to Big Data Technologies in 5G Communication Networks

3.1 Big Data Collection and Analysis Techniques

5G communication networks have their own operational needs, among them precise customer targeting. The data collection and analysis techniques of big data technology can be used to collect and process antenna data and interference data in 5G communication networks, so that interference data can be eliminated and transmission efficiency improved. Specific data analysis requires specialized techniques: data analysts should combine GPS technology to ray-trace the data in three dimensions, analyze the antenna and network data once obtained, and support scientific decision-making. This allows precise location of customers, which enables a more scientific layout of the 5G communication network and thus improves service levels (see Fig. 3).
Fig. 3. Big Data Collection and Analysis Chart
3.2 Big Data Mining Technology

When big data technology is applied to process data in 5G network communication, the key technique is big data mining. Big data mining can promote the improvement of the 5G communication network by deeply mining the data in the network and discovering its potential value. In the application process, technical staff should build the computing model, apply it to obtain feedback on the network's condition, and through deep processing promote the updating and optimization of the 5G communication network (see Fig. 4).
Fig. 4. Big Data Mining Technology Map
3.3 Big Data Storage Technology

Big data storage technology is also widely used in 5G communication networks. Usually, after data analysis is completed, the data need to be classified and stored, a process that also involves building a heterogeneous database. Technicians need to analyze base station information, interference information, traffic data and so on, and implement their classification and storage. The ability to sense cloud data in 5G communication networks is enhanced by effectively connecting different clouds into the network. It is also necessary to use big data technology to analyze the various data in the communication network and the placement of important access clouds, to optimize the 5G communication network environment.

3.4 Mobile Cloud Computing Technology

Mobile cloud computing is another representative big data technology. When applying it, attention must be paid to the connection point between mobile cloud computing and the 5G communication network, i.e. mobile devices. With the continuous development of information technology, people have ever higher requirements for mobile devices, including smartphones, computers and smart homes, all of which need continuously improving levels of intelligent control. As IoT technology gradually matures, technicians need to pay attention to efficient connection modes between mobile devices and the network. On this basis, mobile cloud computing technology should be applied reasonably in the 5G communication network, integrating mobile cloud computing products into it and constructing a basic cloud service platform that meets users' operational requirements. When applying mobile cloud computing technology, technicians should focus on improving the infrastructure, the platform, and computing and storage. Through the application
of mobile cloud computing technology, users can achieve remote control and the interconnection of everything. In addition, mobile cloud computing supports distributed file processing and storage, so that complex information can be stored in the cloud, providing convenient conditions for people to query and use information.

3.5 Wireless Surveillance Technology

In 5G communication networks, the data center is an important operation center storing a large amount of information, so its stability directly affects the efficiency and quality of 5G network operation. Staff must therefore pay close attention to the data center and strengthen its monitoring, especially of its various parameters, to ensure stable and reliable operation. Specifically, wireless sensing technology can be applied in data center monitoring systems, a form of wireless monitoring that is more secure and interactive. However, the 5G communication network places high demands on wireless monitoring, so in practical applications wireless sensing technology must be improved in line with the specific operational needs of the 5G network; ensuring the stable operation of the 5G communication network in turn provides a good environment for the efficient application of wireless monitoring technology.
4 Big Data Supports the Direction of 5G Network Optimization

4.1 Large-Scale Antennas and Distributed Antennas Supported by Big Data

5G will use massive antenna arrays, with up to 128 or even 256 antennas. High-order MIMO provides a dedicated transmit beam for each channel to achieve spatial multiplexing, but interference between the beams reduces MIMO efficiency, requiring the collection of interference data between dense beams and complex optimization based on the system's computational power. In addition, a terminal receives more power near the center of a cell and a poorer signal at the cell edge. This problem can be addressed with distributed antennas, but distributed antennas also interfere with each other. If the channel data and interference data of all antennas can be collected, joint signal processing of all wireless access points based on big data analysis can guide each antenna and micro base station to cancel interference, and capacity can be increased by about two orders of magnitude compared with LTE systems. Furthermore, if MIMO data and network data are collected and analyzed for decision making with big data technology, positioning accuracy can be improved: with the development of 3D simulation and 3D ray tracing, combining indoor antennas with WLAN technology can precisely locate a user outdoors or indoors, even down to the specific floor.
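As an illustration of interference cancellation through joint processing with full channel knowledge, the following is a minimal zero-forcing precoding sketch; zero forcing is one standard textbook technique, not necessarily the optimization method the paper has in mind, and the array dimensions are illustrative.

```python
# Minimal zero-forcing precoding sketch: with full channel knowledge H,
# precoding with pinv(H) cancels inter-beam interference. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))  # 4 users, 8 antennas
W = np.linalg.pinv(H)                                       # zero-forcing precoder

s = np.array([1+0j, -1+0j, 1j, -1j])     # symbols intended for the 4 users
received = H @ (W @ s)                   # effective channel is ~identity
print(np.round(received, 6))             # each user sees only its own symbol
```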
4.2 Big Data Support for the 5G Cloud Network

As shown in Fig. 5, the 5G network is a cloud-based network that includes an access cloud, a forwarding cloud and a control cloud. The access cloud refers to a cluster of micro base stations forming a virtual cell in ultra-dense micro-cell coverage scenarios, realizing cooperative resource management and interference coordination among micro base stations. The forwarding cloud means that each service stream shares high-speed storage and forwarding with various service enabling units such as firewalls and video transcoding. The control cloud includes modules for network resource management, network capability opening and control logic. In addition, in 5G scenarios there can be various clouds such as mobile cloud computing, mobile edge computing (MEC), micro clouds and femto clouds, which can be deployed at different locations in the wireless network; their configurations need to be optimized with the help of network and user big data analysis.
Fig. 5. Access to the Cloud
4.3 Big Data to Support 5G Radio Access Network Resource Management

The radio access network in the 2G and 3G era was a multi-layered network, and under this structure the tidal effect often left base stations unevenly loaded. The 4G system therefore flattened the network design by decomposing base stations into baseband processing units (BBUs) and remote radio units (RRUs), so that the BBUs of multiple base stations could be pooled into baseband pools for intensive resource utilization. The 5G network further decomposes the BBU function into a centralized unit (CU) and distributed units (DUs). A CU can manage multiple DUs to achieve interference management and service aggregation, while a DU handles multi-antenna processing and fronthaul compression, flexibly responding to changes in transmission and service requirements, optimizing real-time performance and reducing hardware costs. This design also brings processing closer to users and facilitates centralized management. However, how many DUs one CU should manage requires an optimized design based on a large amount of big
data on users' spatio-temporal behavior, especially with respect to achieving different resource deployments during busy and idle periods from an energy-efficiency perspective.

4.4 Optimize 5G Source Routing with the Help of Network Big Data

The Sliced Packet Network (SPN) is based on sliced Ethernet and segment routing (SR) technology for midhaul and backhaul. A traditional IP network works in a connectionless manner: each IP packet of the same communication, with the same source and destination addresses, is processed independently without considering its association with the packets before and after it, and each packet is routed independently at every node along the way, possibly even taking a different route. This trades delay and efficiency for flexibility and survivability, a reasonable bargain when network reliability in the early Internet was low. Nowadays network performance has been greatly improved. If the data-plane devices are configured according to the characteristics of the first packet of each communication (i.e., configuring the flow table), then subsequent packets of that communication are abstracted into the same flow, and subsequent IP packets of the same communication need not be routed again. Since an ordered instruction set has been installed at the source node identifying the nodes or links along the path, those nodes do not need to sense the service state; they only maintain topology information and simply forward according to the configured flow table. This is equivalent to connection-oriented packet communication and significantly improves network efficiency. Segment routing, also known as source routing, therefore eliminates the need for signaling protocols such as LDP/RSVP-TE and is well suited to control by SDN. The design of the source routing instruction set needs to be optimized with the help of network big data.
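To make the flow-table idea above concrete, the following is a minimal sketch of first-packet route computation followed by cached flow-table forwarding; the two-tuple key, the fixed path and the function names are illustrative assumptions, not a real SDN controller API.

```python
# Minimal sketch of flow-table forwarding: the first packet of a flow triggers
# route computation; subsequent packets hit the cached entry. Illustrative only.
from typing import Dict, Tuple

FlowKey = Tuple[str, str]          # (source address, destination address)
flow_table: Dict[FlowKey, list] = {}

def compute_route(src: str, dst: str) -> list:
    # Stand-in for the controller's path computation (e.g., shortest path);
    # a fixed path is returned here purely for illustration.
    return [src, "node-A", "node-B", dst]

def forward(src: str, dst: str) -> list:
    key = (src, dst)
    if key not in flow_table:              # first packet: route once, install entry
        flow_table[key] = compute_route(src, dst)
    return flow_table[key]                 # later packets: no re-routing needed

print(forward("10.0.0.1", "10.0.0.9"))    # computes and installs the route
print(forward("10.0.0.1", "10.0.0.9"))    # served from the flow table
```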
5 Conclusion

In short, big data technology is widely used across many industries, and extensive practice has shown that enterprises and institutions that use big data reasonably achieve obvious improvements in data processing efficiency. Against the background of the big data era, if 5G networks are to effectively optimize their own services, great attention must be paid to the reasonable use of big data technology, actively using big data collection and analysis to optimize the traditional communication system, solving the problems arising in network construction and effectively maintaining the stable, harmonious development of China's 5G network business. Big data will play an enhancing and optimizing role in the development of 5G networks. The application of big data in 5G networks has very broad prospects but also faces many challenges: issues of computational complexity, timeliness, energy efficiency and security in data mining need to be solved, and many innovative topics remain for 5G network standardization and implementation.
References

1. Divakaran, J., Malipatil, S., Zaid, T., et al.: Technical study on 5G using soft computing methods. Sci. Program. 2022, 1–7 (2022)
2. Khattak, S.B.A., Jia, M., Marey, M., Nasralla, M.M., Guo, Q., Gu, X.: A novel single anchor localization method for wireless sensors in 5G satellite-terrestrial network. Alex. Eng. J. 61(7), 5595–5606 (2022)
3. Chimeh, J.: 5G mobile communications: a mandatory wireless infrastructure for big data. In: Third International Conference on Advances in Computing, Electronics and Electrical Technology - CEET 2015 (2015)
4. Dong, H., et al.: 5G virtual reality in the design and dissemination of contemporary urban image system under the background of big data. Wireless Commun. Mob. Comput. 2022, 1–14 (2022). https://doi.org/10.1155/2022/8430186
5. Khatib, E.J., Barco, R.: Optimization of 5G networks for smart logistics. Energies 14(6), 1758 (2021)
6. Ramírez-Arroyo, A., Zapata-Cano, P.H., Palomares-Caballero, Á., Carmona-Murillo, J., Luna-Valero, F., Valenzuela-Valdés, J.F.: Multilayer network optimization for 5G & 6G. IEEE Access 8, 204295–204308 (2020)
7. Dangi, R., Lalwani, P., Choudhary, G., et al.: Study and investigation on 5G technology: a systematic review. Sensors 22(1), 26 (2022)
8. Boiko, J., Pyatin, I., Eromenko, O.: Analysis of signal synchronization conditions in 5G mobile information technologies. In: 2022 IEEE 16th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (TCSET). IEEE (2022)
9. Chergui, H., Verikoukis, C.: Big data for 5G intelligent network slicing management. IEEE Network 34(4), 56–61 (2020)
10. Arya, G., Bagwari, A., Chauhan, D.S.: Performance analysis of deep learning-based routing protocol for an efficient data transmission in 5G WSN communication. IEEE Access 10, 9340–9356 (2022)
An Enterprise Financial Statement Identification Method Based on Support Vector Machine

Chunkai Ding(B)

School of Finance and Economics, Guangdong University of Science and Technology, Dongguan, Guangdong, China
[email protected]
Abstract. Intelligent identification is the main method of financial statement identification, but in the process of identifying enterprise financial statements, large report volumes easily reduce identification accuracy and can even cause loss of report data. This paper proposes a support vector machine method to perform report recognition and comprehensively shorten the recognition time. Unusual declarations and financial audits are then identified. Finally, the support vector machine method summarizes the recognition results and outputs the final report identification. MATLAB simulation shows that the support vector machine method can accurately identify financial statements, with an error rate below 10% and a recognition accuracy above 90%, outperforming the bee colony algorithm. The support vector machine method can therefore meet the requirements of financial statement identification and is suitable for financial statement management.

Keywords: Finance · Enterprise · Report Identification · Vector Machine
1 Introduction

Some scholars believe that report identification is an intelligent process of financial management [1] that should summarize accounts and surpluses [2], and that it is prone to problems such as data loss and incomplete data [3]. Currently, the identification of enterprise financial statements [4] often suffers from low accuracy, long identification times and high error rates [5]. Some scholars therefore propose applying the support vector machine method to financial statements to identify the surplus data in the statements [6]. However, harsh environments, external interference and other conditions subject the reporting equipment to severe interference. To this end, some scholars propose a support vector machine method [7] that optimizes the financial statements, identifies the key surplus data [8], flags abnormal declarations, and identifies those abnormal declarations to achieve effective scheduling. Based on the support vector machine method, this paper summarizes and identifies financial statements and verifies the effectiveness of the method.
2 Mathematical Description of Financial Statements Enterprises have the problem of long report identification time and many uncertain factors, and the identification of characteristics of financial statements has a specific guiding role [9]. Enterprises mainly rely on audit data and management data to summarize and identify. The vector machine theory is used to identify the data points in the report randomly and audit, and the standard verification is carried out through the financial points. Among them, the rate of change of the characteristic value represents the degree of recognition of the enterprise. Businesses need to define the following. Financial statements is si , statements is xi , earnings data is ri , identification sets is seti , li and degree of recognition is θ, The number of recognitions is ci . Then, the report recognition process is shown in Eq. (1). (li · xi · ci ) · tanθ (1) seti = dai
The constraint function is $f(x, da, l \mid \theta)$, $A$ is the recognition constraint, $\varsigma$ is the financial constraint [10], and $B$ is the recognition rate. The calculation of $f(x, da, l \mid \theta)$ is shown in Eq. (2):

$$f(x, da, l \mid \theta) = \sum_{da_i} \frac{A \cdot (r_i \cdot x_i \wedge \theta_i)}{B} \cdot \varsigma \tag{2}$$
3 Support Vector Machine Method for the Identification of Financial Statements

In identifying enterprise financial statements [11], the financial audit and financial surplus must be quantified to reduce the complexity of the identification. According to the audit specification [12], the key points [13] in the financial audit data should be identified, and the vector machine should calculate the status of the audit data [14]. At the same time, the error rate of report transmission should be identified. Therefore, the feature values must be extracted randomly. The violation judgment function is $F(da_i \mid k)$, with $k = 1$ in the event of an outlier. The specific calculation is shown in Eq. (3):

$$F(da_i \mid k) = A \cdot \sum_{da}^{n} \frac{da_i \cdot \tan\theta \cdot l_i \cdot B}{k} \tag{3}$$
If the output of $F(da_i \mid k)$ is positive, the enterprise is running stably; otherwise an exception occurs [15]. The time test function is $O(da_i \mid t)$; within time $t$, the test result is shown in Eq. (4):

$$O(da_i \mid t) = \sum_{t=1}^{n} \frac{da_i \cup v}{k} \tag{4}$$

where $v$ is the change in the degree of recognition.
The financial surplus reconciliation function is $C(da_i \mid lo)$, where $lo$ is the change in financial surplus; it is calculated as shown in Eq. (5) [16]:

$$C(da_i \mid lo) = lo \cup \sum_{t=1}^{n} da_i \tag{5}$$
The financial audit function is $D(da_i \mid r)$, where $r$ is the rate of change; it is calculated as shown in Eq. (6):

$$D(da_i \mid r) = r \Rightarrow \sum_{t=1}^{n} da_i \cdot \tan\theta \tag{6}$$
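To make the sign-based decision in Eq. (3) concrete, the sketch below trains a support vector machine on labeled statement feature vectors and flags a statement as an exception when the signed decision value is negative. This is a minimal illustration with scikit-learn under assumed data, not the paper's implementation; the feature layout, kernel choice, and sample values are hypothetical.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training data: one row per statement
# (earnings, loss, debt ratio); labels: 1 = normal, -1 = anomalous.
rng = np.random.default_rng(0)
X_normal = rng.normal(loc=[140.0, 12.0, 22.0], scale=5.0, size=(50, 3))
X_anomal = rng.normal(loc=[25.0, 20.0, 45.0], scale=5.0, size=(50, 3))
X = np.vstack([X_normal, X_anomal])
y = np.array([1] * 50 + [-1] * 50)

# RBF-kernel SVM with feature scaling.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

# Sign rule analogous to Eq. (3): positive -> stable, negative -> exception.
new_statement = np.array([[145.2, 12.6, 22.4]])
score = clf.decision_function(new_statement)[0]
print("stable" if score > 0 else "exception", f"(score={score:.3f})")
```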
4 Steps for Identifying Business Financial Statements

Enterprise financial statement identification must detect abnormal values and perform sample identification, covering the audit data report, the financial audit, abnormal report location, and the financial statements. In addition, according to the financial review and the status of the report, the identification requirement algorithm and vector machine theory are used to identify the financial statements comprehensively. At the same time, the financial statements are identified as a whole to eliminate the impact of interference on the identification. The specific steps are as follows (a code sketch follows the list).

Step 1: Collect the financial statements, determine the rules for judging outliers, conduct financial audits and comprehensive judgments of the financial surplus, and determine thresholds [17].
Step 2: Calculate points based on the audit data and financial audit, and judge abnormal financial audits to finally determine the financial surplus and financial audit.
Step 3: Compare different methods on the accuracy of the financial surplus and the error rate, and output the final result [18].
Step 4: If all enterprises have been traversed, terminate the identification; otherwise, repeat Steps 1–3.
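As a hedged reading of Steps 1–4, the loop below scores each enterprise's statement vector with the classifier fitted in the previous sketch and collects the flagged anomalies; the enterprise names, feature layout, and zero threshold are assumptions for illustration only.

```python
def identify_statements(clf, enterprises, threshold=0.0):
    """Steps 1-4 as a loop: score each statement vector and flag it
    when the SVM decision value falls to or below the threshold."""
    flagged = []
    for name, features in enterprises.items():        # Step 1: collected data
        score = clf.decision_function([features])[0]  # Step 2: calculate points
        if score <= threshold:                        # Step 3: judge and compare
            flagged.append((name, round(score, 3)))
    return flagged                                    # Step 4: all traversed

# Hypothetical usage with the pipeline `clf` fitted above.
enterprises = {
    "traditional_co": [145.21, 12.60, 22.42],
    "innovative_co":  [24.62, 19.54, 19.36],
}
print(identify_statements(clf, enterprises))
```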
5 Practical Examples of Enterprises

To verify the recognition effect of the support vector machine method on financial statements, the identification effect of the audit data is judged by taking small and medium-sized enterprises as the research object. The financial statements, financial audits, and the environment are all within the normal range; the specific results are shown in Table 1.
Table 1. Status of SMEs

| Identification type | Traditional industries | Innovative industries |
| Financial audits | Comply with national standards | Comply with national standards |
| Financial surplus | Profit | Profit |
| Identify requirements | Compliance with standards | Compliance with standards |
| Financial norms | Specification | Specification |
| Recognition rate (%) | 90 | 90 |
| Earnings data | 145.21 | 24.62 |
| Loss data | 12.6 | 19.54 |
| Debt ratio | 22.42 | 19.36 |
| Other data | 52.45 | 62.95 |
| Importance | Ordinary | Ordinary |
According to the identification types in Table 1, there are no significant differences between traditional and innovative industries in financial audit, financial surplus, identification requirements, financial norms, recognition rate, etc., indicating that the relevant enterprise data can be used as identification objects. The importance of traditional industries is higher and that of innovative industries is lower, indicating that traditional industries dominate enterprise identification. The results of the financial statements in Table 1 are shown in Fig. 1.

Fig. 1. Analysis of the financial statements

As Fig. 1 shows, the overall state of the enterprises is good and the variation across identification types is small, so later identification can be carried out.

The error rate and accuracy of identifying the enterprise's financial statements. Error rate and accuracy are important indicators for identifying enterprise financial statements; whether the support vector machine method can identify finances effectively requires verifying these indicators. The specific results are shown in Table 2.

Table 2. Comparison of error rate and accuracy of different methods (Unit: %)
| Algorithm | Business type | Identification type | Accuracy | Error rate | Recognition rate |
| Support vector machine method | Traditional industries | Profitability scale | 93.32 ± 2.11 | 1.12 ± 0.21 | 93.12 |
| | | Loss scale | 93.23 ± 1.12 | 2.34 ± 0.43 | 93.83 |
| | | Comprehensive scale | 95.16 ± 1.31 | 1.12 ± 0.33 | 93.35 |
| | Innovative industries | Profitability scale | 93.22 ± 1.13 | 2.12 ± 0.33 | 95.06 |
| | | Loss scale | 92.23 ± 1.23 | 2.22 ± 0.31 | 94.32 |
| | | Comprehensive scale | 95.31 ± 1.12 | 2.34 ± 0.23 | 94.26 |
| Bee colony algorithm | Traditional industries | Profitability scale | 82.12 ± 2.01 | 4.32 ± 1.01 | 93.21 |
| | | Loss scale | 83.26 ± 2.02 | 4.14 ± 1.03 | 83.12 |
| | | Comprehensive scale | 83.26 ± 2.01 | 4.24 ± 1.03 | 83.15 |
| | Innovative industries | Profitability scale | 82.12 ± 2.33 | 4.22 ± 1.03 | 84.22 |
| | | Loss scale | 81.33 ± 2.13 | 4.12 ± 1.21 | 84.21 |
| | | Comprehensive scale | 81.16 ± 2.12 | 4.32 ± 1.23 | 84.33 |
| Error | | | 12.11 | 2.322 | 4.32 |
| Interference factor | | | 15.21 | 16.32 | 19.85 |
| Comprehensive | | | 24.36 | 16.85 | 42.65 |
Table 2 shows that in both traditional and innovative industries the error rate of the support vector machine method is below 4%, its accuracy is above 90%, and its recognition rate exceeds 93%. The bee colony algorithm has a higher error rate, lower accuracy, and a lower recognition rate for most scales. This indicates that the support vector machine method
achieves better accuracy and a lower error rate under the same interference. To verify the effectiveness of the methodology presented in this paper, continuous identification of financial statements is required; the results are shown in Fig. 2.
Fig. 2. Comparison of error rate and accuracy of different algorithms
Fig. 2 shows that under continuous reporting, the support vector machine method effectively improves the accuracy of report recognition and reduces the number of errors, whereas the bee colony algorithm's accuracy and error rate fluctuate greatly; the overall results are consistent with those in Table 2. The reason is that the support vector machine can identify the characteristics of financial statements and profit scales, simplify the identification process, and improve recognition efficiency, so its overall operation is better.

The identification time of financial statements. Identification time is the leading indicator of the identification effect for enterprise financial statements, including the report location time and the identification time of abnormal profit scales; the specific results are shown in Table 3. According to Table 3, under the support vector machine method the report summary times for traditional, innovative, and new industries, as well as the times for partial, overall, and stage reports, are shorter and better than those of the bee colony algorithm. In addition, for the profit scale, loss statement, and report summary, although the time variation of both methods is relatively stable, the support vector machine method is significantly faster than the bee colony algorithm. The reason is that the support vector machine method comprehensively processes data such as the profit scale and financial losses and formulates a responsive report summary.
For the report summary results, different report schemes are tested to verify their validity. The bee colony algorithm also carries out the overall report summary requirements but lacks the vector machine processing step, whereas the support vector machine method can apply different scheme corrections to the report situation based on the identification results, as shown in Fig. 3.

Table 3. Identification time of enterprise financial statements (Unit: seconds)
| Method | Identification type | Profitability identification ||| Loss scale |||
| | | Traditional industries | Innovative industries | New industries | Partial reports | Overall report | Stage reports |
| Support vector machine method | Profitability scale | 51.21 ± 1.11 | 49.52 ± 5.52 | 42.22 ± 1.12 | 41.55 ± 1.41 | 51.12 ± 1.52 | 25.55 ± 1.52 |
| | Statement of losses | 56.15 ± 1.22 | 46.41 ± 1.25 | 46.15 ± 1.25 | 41.15 ± 2.52 | 51.25 ± 1.15 | 25.75 ± 1.12 |
| | Report summary | 46.52 ± 1.22 | 46.25 ± 1.52 | 46.27 ± 1.55 | 42.45 ± 1.42 | 51.01 ± 1.15 | 22.52 ± 1.21 |
| | Report auditing | 46.51 ± 1.22 | 47.25 ± 1.42 | 49.15 ± 1.21 | 42.62 ± 1.42 | 52.22 ± 2.21 | 51.15 ± 1.55 |
| Bee colony algorithm | Profitability scale | 67.15 ± 5.51 | 67.55 ± 4.45 | 57.22 ± 2.12 | 65.15 ± 5.51 | 45.12 ± 5.22 | 65.62 ± 4.22 |
| | Statement of losses | 67.55 ± 5.12 | 67.12 ± 4.22 | 57.55 ± 2.55 | 65.25 ± 5.52 | 45.55 ± 5.55 | 65.55 ± 4.52 |
| | Report summary | 69.42 ± 5.52 | 68.52 ± 4.15 | 54.87 ± 2.55 | 65.15 ± 5.15 | 45.12 ± 5.15 | 65.75 ± 4.55 |
| | Report auditing | 69.52 ± 5.21 | 68.51 ± 4.12 | 58.57 ± 2.25 | 65.27 ± 5.72 | 45.82 ± 2.55 | 68.12 ± 4.61 |
| Comprehensive comparison results | Profitability scale | 40.98 | 43.00 | 47.19 | 40.25 | 44.12 | 40.98 |
| | Statement of losses | 41.92 | 46.77 | 45.93 | 46.69 | 39.32 | 41.92 |
| | Report summary | 46.20 | 44.93 | 51.85 | 43.53 | 50.01 | 46.20 |
| | Report auditing | 49.93 | 42.65 | 44.67 | 45.63 | 44.22 | 49.93 |
Fig. 3. Processing time for different methods
Fig. 3 shows that the support vector machine method takes less time and is better than the bee colony algorithm, which further verifies the results of Table 3.
6 Conclusions

Given problems such as inaccurate summarization of financial statements and the inability of swarm algorithms to mine data effectively, this paper proposes a support vector machine method that comprehensively identifies the profit scale and financial surplus of enterprises and uses the report data to identify abnormal reports. The support vector machine method adopts different financial scale identifications and combines the collected financial scale data to determine the financial scale situation. The results show that the recognition error rate of the support vector machine method is less than 5% and its accuracy is greater than 90%, significantly better than the bee colony algorithm. Moreover, with the support vector machine method the calculation time for financial statements is shorter, as is the time for summarizing and verifying loss statements, profit scales, and statements. At the same time, the times for stage, partial, and comprehensive reports are better than those of the bee colony algorithm. Therefore, the support vector machine method meets the requirements of enterprise financial statement identification and is suitable for financial statement management.

Acknowledgements. This work is supported by the Industry-University Collaborative Education Project by the Ministry of Education of China in 2022: "The Construction of Accounting Education Practice Base in the Age of Digital Intelligence" (NO.220606342143936).
References

1. Coufalova, L., Mikula, S., Zidek, L.: Misreporting in financial statements in a centrally planned economy: the case of Czechoslovak state-owned enterprises in late socialism. Account. Hist. 1(3), 27–30 (2020)
2. Lee, J.Z., Chen, H.C., Tsai, T.Y.: The relationship between shared audit opinions and the audit quality of group enterprises' financial statements - based on the audit adjustment. NTU Manage. Rev. 30(2), 37–70 (2020)
3. Moshchenko, O.V., Smetanko, A.V., Glushko, E.V.: Multimethodology for analysing financial statements of enterprises on the example of Russian fuel and energy companies' current assets. Int. Trans. J. Eng. Manage. Appl. Sci. Technol. 10(18), 89 (2019)
4. Musanovic, E.B., Halilbegovic, S.: Financial statement manipulation in failing small and medium-sized enterprises in Bosnia and Herzegovina. J. Eastern Eur. Central Asian Res. 8(4), 556–569 (2021)
5. Nguyen, D.T., Hoang, D.H., Nguyen, N.T.: Preparation of financial statements of enterprises according to IFRS: an empirical study from Vietnam. J. Asian Finan. Econ. Bus. 9(2), 193–207 (2022)
6. Setiorini, K.R., Rahmawati, P., Hartoko, S.: The pentagon fraud theory perspective: understanding of motivation of executives to manipulate with the financial statements of a state-owned enterprise. Econ. Ann.-XXI 194(11–12), 104–110 (2021)
7. Page, J., Littenberg, T.B.: Bayesian time delay interferometry. Phys. Rev. D 104(22), 102–114 (2021)
8. Pan, D.B., Zhang, G., Jiang, S.: Delay-independent traffic flux control for a discrete-time lattice hydrodynamic model with time-delay. Phys. A: Stat. Mech. Appl. 563(38), 301–308 (2021)
9. Shahbazzadeh, M., Sadati, S.J.: Delay-dependent stabilization of time-delay systems with nonlinear perturbations. Circuits Syst. Sign. Process 1–16 (2021). https://doi.org/10.1007/s00034-021-01810-w
10. Sharifi, M.: Robust finite-time consensus subject to unknown communication time delays based on delay-dependent criteria. Trans. Inst. Meas. Control. 44(9), 1205–1216 (2022)
11. Al-Okaily, M., Alkhwaldi, A.F., Abdulmuhsin, A.A., Alqudah, H., Al-Okaily, A.: Cloud-based accounting information systems usage and its impact on Jordanian SMEs' performance: the post-COVID-19 perspective. J. Finan. Report. Account. 12(3), 98 (2020)
12. Jin, T.L., Zhang, B., Yang, Z.: Cloud statistics of accounting informatization based on statistics mining. Comput. Intell. Neurosci. 14(3), 78 (2022)
13. Musyaffi, A.M., Septiawan, B., Arief, S., Usman, O., Sasmi, A.A., Zairin, G.M.: What drives students to feel the impact of online learning in using a cloud accounting integrated system? TEM J.-Technol. Educ. Manage. Inform. 11(4), 1577–1588 (2020)
14. Saad, M., et al.: Assessing the intention to adopt cloud accounting during COVID-19. Electronics 11(24), 98 (2020)
15. Tie, X.H.: Informatization of accounting system of electric power enterprises based on sensor monitoring and cloud computing. Int. Trans. Electric. Energy Syst. 17(3), 89 (2022)
16. Wang, J.L., Yang, X.Q., Li, Z.: Cloud data integrity verification algorithm based on data mining and accounting informatization. Sci. Program. 13(3), 78 (2022)
17. Zhang, M., Ye, T.T., Jia, L.: Implications of the "momentum" theory of digitalization in accounting: evidence from Ash Cloud. China J. Account. Res. 15(3), 76 (2022)
18. Jin, T.L., Zhang, B.: Intermediate data fault-tolerant method of cloud computing accounting service platform supporting cost-benefit analysis. J. Cloud Comput.-Adv. Syst. Appl. 12(1), 87 (2023)
19. Li, Y.H., Shao, J., Zhan, G., Zhou, R.: The impact of enterprise digital transformation on financial performance - evidence from Mainland China manufacturing firms. Manag. Decis. Econ. 9(3), 87 (2022)
An AIS Trusted Requirements Model for Cloud Accounting Based on Complex Network Chunkai Ding(B) School of Finance and Economics, Guangdong University of Science and Technology, Dongguan, Guangdong, China [email protected]
Abstract. The trusted demand model is the primary method of cloud accounting audit. In a complex network environment, demand analysis is easily inaccurate, which reduces the accuracy of cloud accounting audits and can even cause mistakes. Addressing the problems of inaccurate accounting audit analysis and the inability of traditional audit methods to carry out trusted demand models effectively, this paper proposes an AIS trusted demand model that comprehensively analyzes cloud accounting, comprehensively identifies accounting audit data, and shortens the analysis time. A trusted requirements analysis is then conducted for exception audits and violation audits. Finally, online audit data are leveraged and the trusted requirements results are output. MATLAB simulations show that the AIS trusted requirements model can perform accounting audits accurately, with an audit error rate below 10% and an audit accuracy above 90%, which is better than traditional auditing methods. Therefore, the AIS trusted requirements model meets the requirements of cloud accounting and is suitable for accounting auditing.

Keywords: Complex Networks · Audit · Trusted Requirements · Cloud Accounting
1 Introduction

Some scholars believe that cloud accounting is an intelligent process of accounting auditing that must audit accounting data and audit data [1], and it is prone to problems such as inaccurate analysis of violation audits and incomplete audits. In cloud accounting audits there are often problems such as low accuracy [2], long audit times, and high disturbance rates. Therefore, some scholars propose applying the trusted demand model to accounting audit analysis to identify illegal audits [3]. However, unfavorable factors such as complex networks and interference give cloud accounting audits serious interference problems [4]. To this end, some scholars put forward the AIS trusted demand model, which optimizes the audit plan, identifies the critical points in the accounting audit, detects abnormal audits, and performs trusted requirements analysis on them to achieve effective accounting audits [5]. Based on complex networks, this paper analyzes the trusted requirements of accounting audits and verifies the effectiveness of the AIS trusted demand model.
2 Mathematical Description of Accounting Audit

Accounting audits face long audit times and a complex environment, and identifying the characteristics of the audit plan carries certain uncertainties [6–8]. The accounting audit relies on accounting data and violation audit data to build a trusted demand model [9]. Demand theory is used to identify random and regular characteristics of the data points in accounting, and standard verification is carried out through critical points. The rate of change of the characteristic values represents the network complexity of the accounting audit [10]. The following definitions are required: the accounting audit is $s_i$, the audit data are $x_i$, the point is $l_i$, the trusted requirements data are $r_i$, the audit set is $set_i$ [11], the network complexity is $c_i$, and the number of analyses is $\theta$. Then, the trusted requirements model process is shown in Eq. (1):

$$set_i = \sum_{da_i} \overrightarrow{l_i x_i} \wedge (c_i \cdot \sin\theta) \tag{1}$$
The constraint function is $f(x \cdot \theta)$, $a$ is the constraint of the configuration variation, $b$ is the constraint of the trusted requirements model, and $\varsigma$ is the perturbation rate. The calculation of $f(x \cdot \theta)$ is shown in Eq. (2):

$$f(x \cdot \theta) = \sum_{x_i} \frac{a \Rightarrow r_i \wedge x_i \cap \sin\theta_i}{b} \cdot \varsigma \tag{2}$$
3 AIS Trusted Requirements Analysis

In the process of AIS trusted requirements analysis, the auditing and handling of violations should be quantified to reduce the complexity of auditing. According to the scheduling principle, the data of audit points and violation audits must be identified [12, 13], and the violation audit of the accounting data must be calculated. At the same time, the audit rate of non-compliance should be analyzed. Therefore, the feature values must be randomly extracted. The violation audit judgment function is $F(s_i \mid k)$: when $P = 1$, an outlier occurs; when $P < 1$, $F(s_i \mid k)$ is smooth. The specific calculation is shown in Eq. (3) [14, 15]:

$$F(s_i \mid k) = \beta \cdot \alpha \cdot \sum_{s}^{n} (s_i \cdot \tan\theta) \cap k \tag{3}$$
If the output of $F(s_i \mid k)$ is positive, the accounting audit is running stably; otherwise an exception occurs [16, 17]. The time test function is $O(s_i \mid t)$; within time $t$, the test result is shown in Eq. (4):

$$O(s_i \mid t) = \alpha \cdot \sum_{t=1}^{n} s_i \to v \tag{4}$$
where v is the change in network complexity.
The audit function is $C(s_i \mid lo)$, where $lo$ is the change in operation [18]; its calculation is shown in Eq. (5):

$$C(s_i \mid lo) = lo \cdot \alpha \cup \sum_{t=1}^{n} s_i \tag{5}$$
The violation audit function is $D(s_i \mid r)$, where $r$ is the rate of change; it is calculated as shown in Eq. (6):

$$D(s_i \mid r) = r_i \cup \sum_{t=1}^{n} s_i \oplus \cos\theta \tag{6}$$
4 Steps for Cloud Accounting Auditing

A cloud accounting audit should identify abnormal values and conduct sampling audits, covering accounting data operation, abnormal audit data, and the accounting audit itself. In addition, according to the operating situation and the violation audit, the planning algorithm and demand theory are used to analyze the accounting audit comprehensively. At the same time, an overall audit is carried out to eliminate the impact of interference on the trusted demand model. The specific procedure is shown in Fig. 1.

Fig. 1. Calculation process
Step 1: Collect accounting audits, determine the rules for judging outliers, and conduct comprehensive judgments on violations and operations to determine thresholds.
Step 2: Calculate points based on the accounting data and violation audits, judge abnormal audits, and finally determine the operation and the audits.
Step 3: Compare the different methods on the accuracy of the operation and the violation audit rate, and output the final result.
Step 4: If all accounting audit contents have been traversed, terminate the audit; otherwise, repeat Steps 1–3.
A code sketch of this loop follows.
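The sketch below is a minimal, hedged reading of Steps 1–4: a threshold is fixed from historical audit values (Step 1), each record is scored (Step 2), records beyond the threshold are flagged as violations (Step 3), and the loop terminates once all records are traversed (Step 4). The deviation score and the percentile rule are assumptions, not the paper's specification.

```python
import numpy as np

def audit_records(records, history, pct=95):
    """Flag audit records whose deviation from the historical mean
    exceeds a percentile threshold (a hypothetical scoring rule)."""
    mu = history.mean()
    deviations = np.abs(history - mu)
    threshold = np.percentile(deviations, pct)   # Step 1: determine threshold
    violations = []
    for rec_id, value in records:                # Step 4: traverse all records
        score = abs(value - mu)                  # Step 2: calculate points
        if score > threshold:                    # Step 3: judge violation
            violations.append((rec_id, round(score, 2)))
    return violations

# Hypothetical usage: historical audit values and new records.
history = np.random.default_rng(1).normal(100.0, 10.0, size=500)
records = [("inv-001", 103.2), ("inv-002", 245.9), ("inv-003", 98.7)]
print(audit_records(records, history))
```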
5 Practical Examples of Cloud Accounting Auditing

To verify the effect of the AIS trusted demand model on cloud accounting audits, the audit effect of accounting data is judged by taking the accounting audit as the research object; the specific results are shown in Table 1.

Table 1. Data related to accounting audits
| Parameter | Individual companies | State-owned enterprises |
| Accounting data (M) | 215~315 | 95~115 |
| Audit | Audit → validation | Audit → validation |
| Comprehensive data | 115~225 | 195~233 |
| Network complexity | Level 3 | Level 3 |
| Perturbation rate | 0.11 | 0.65 |
| Data direction | + | + |
| Amount of data | 452.32 | 123.455 |
| Complexity of data | 85.23 | 72.62 |
| Data processability | 89.32 | 72.32 |
| Degree of data set | 45.32 | 32.55 |
| Change trend of data | ↑ | ↓ |
According to the accounting audit parameters in Table 1, there are no significant differences between individual companies and state-owned enterprises in accounting data, audit data, network complexity, disturbance rate, etc., which shows that the relevant data in the accounting audit can be used as trusted demand model objects. The results of the accounting audit in Table 1 are shown in Fig. 2. As Fig. 2 shows, the overall accounting audit is good and the variation of the different parameters is small, so later analysis can be carried out.
Fig. 2. Accounting audit
The violation audit rate and accuracy of cloud accounting audits. The violation audit rate and accuracy are essential indicators of a cloud accounting audit, and whether the AIS trusted demand model can audit effectively requires verifying these indicators. The specific results are shown in Table 2.

Table 2. Comparison of violation audit rate and accuracy for different methods (Unit: %)
| Algorithm | Scale | Parameter | Accuracy | Violation audit rate | Perturbation rate |
| AIS trusted requirements model | Individual companies | Accounting data | 99.99 ± 1.78 | 99.79 ± 1.93 | 1.29 |
| | | Audits of violations | 94.79 ± 1.79 | 99.94 ± 1.29 | 1.89 |
| | | Comprehensive audit | 95.91 ± 1.97 | 99.79 ± 1.19 | 1.55 |
| | State-owned enterprises | Accounting data | 89.99 ± 1.79 | 94.79 ± 1.39 | 2.01 |
| | | Audits of violations | 89.99 ± 1.99 | 99.99 ± 1.27 | 4.92 |
| | | Comprehensive audit | 85.37 ± 1.79 | 99.94 ± 1.69 | 1.91 |
| | Other enterprises | Accounting data | 83.99 ± 2.79 | 84.79 ± 1.39 | 3.11 |
| | | Audits of violations | 83.99 ± 2.99 | 89.91 ± 1.27 | 2.19 |
| | | Comprehensive audit | 81.37 ± 1.79 | 89.94 ± 1.99 | 1.91 |
| Traditional auditing methods | Individual companies | Accounting data | 87.79 ± 3.77 | 48.99 ± 3.07 | 2.97 |
| | | Audits of violations | 89.71 ± 2.99 | 41.74 ± 2.19 | 2.79 |
| | | Comprehensive audit | 85.21 ± 3.07 | 45.94 ± 1.29 | 2.75 |
| | State-owned enterprises | Accounting data | 89.29 ± 2.99 | 49.99 ± 1.39 | 3.93 |
| | | Audits of violations | 89.92 ± 6.79 | 49.79 ± 1.17 | 2.97 |
| | | Comprehensive audit | 85.31 ± 6.79 | 48.29 ± 2.99 | 4.99 |
| | Other enterprises | Accounting data | 29.19 ± 2.99 | 39.99 ± 2.29 | 3.19 |
| | | Audits of violations | 59.92 ± 6.79 | 49.39 ± 1.37 | 6.17 |
| | | Comprehensive audit | 75.31 ± 6.79 | 48.99 ± 1.19 | 3.29 |
Table 2 shows that for individual companies and state-owned enterprises the violation audit rate of the AIS trusted requirements model is above 94%, its accuracy is greater than 85%, and its disturbance rate is less than 5%. Traditional audit methods have a much lower violation audit rate and lower accuracy, although their disturbance rate is comparable to that of the AIS trusted demand model. This shows that the AIS trusted requirements model achieves better accuracy and a better violation audit rate under the same interference. To verify the effectiveness of the proposed method, the cloud accounting of accounting audits must be analyzed continuously; the results are shown in Fig. 3. Fig. 3 shows that in continuous auditing the AIS trusted requirements model effectively improves the accuracy of the trusted requirements model with fewer adjustments. In contrast, the accuracy and violation audit rate of traditional audit methods vary greatly. The overall results are consistent with the findings in Table 2. The reason is that the AIS trusted demand model can analyze accounting and violation audit characteristics, simplify the audit process, and improve audit efficiency, so its overall operation is better.
Fig. 3. Comparison of violation audit rate and accuracy of different algorithms
The time analysis of accounting audits. Time is the leading indicator of the cloud accounting audit effect, covering accounting data, the consolidated number of audits, audit data, and complexity; the specific results are shown in Table 3.

Table 3. Cloud accounting review time (Unit: seconds)
| Method | Parameter | Individual company audits ||| State-owned enterprise audits |||
| | | Accounting data | Audit data | Key points | External audit | Internal audit | Comprehensive audit |
| AIS trusted requirements model | Accounting data | 58.95 ± 3.08 | 62.11 ± 4.62 | 44.21 ± 1.54 | 34.74 ± 4.62 | 39.35 ± 3.35 | 35.55 ± 3.35 |
| | Consolidated number of audits | 40 ± 4.62 | 46.32 ± 4.62 | 60 ± 4.62 | 60 ± 3.08 | 37.33 ± 3.57 | 37.33 ± 3.53 |
| | Audit data | 53.68 ± 4.62 | 27.37 ± 4.62 | 60 ± 1.54 | 28.42 ± 1.54 | 37.53 ± 3.35 | 37.57 ± 3.37 |
| | Complexity | 57.89 ± 3.08 | 56.84 ± 1.54 | 55.79 ± 4.62 | 68.42 ± 4.62 | 37.53 ± 3.35 | 39.33 ± 3.53 |
| Traditional audit methods | Accounting data | 33.68 ± 4.62 | 44.21 ± 4.62 | 44.21 ± 1.54 | 64.21 ± 1.54 | 77.33 ± 3.33 | 77.55 ± 5.35 |
| | Consolidated number of audits | 55.79 ± 3.08 | 65.26 ± 1.54 | 51.58 ± 3.08 | 51.58 ± 4.62 | 77.35 ± 3.55 | 77.33 ± 5.33 |
| | Audit data | 27.37 ± 1.54 | 65.26 ± 1.54 | 37.89 ± 1.54 | 67.37 ± 4.62 | 78.35 ± 3.33 | 73.87 ± 5.37 |
| | Complexity | 46.32 ± 3.08 | 58.95 ± 1.54 | 34.74 ± 3.08 | 68.42 ± 3.08 | 78.33 ± 3.35 | 78.37 ± 5.57 |
According to Table 3, under the AIS trusted requirements model the audit times for accounting data, audit data, key points, external audit, internal audit, and comprehensive audit are shorter, which is better than traditional audit methods. In addition, in terms of accounting data, the consolidated number of audits, audit data, and complexity, although the time
variation of the two methods is relatively stable, the AIS trusted demand model is significantly faster than traditional audit methods, because it comprehensively processes accounting data, operation data, and other data and formulates a responsive audit plan. Based on the violation audit results, the trusted demand model effect of the different methods is tested to verify the effectiveness of the proposed method. Traditional audit methods also process the overall accounting audit data but lack trusted requirements analysis steps, whereas the AIS trusted requirements model can, through demand adjustment, apply different trusted requirements model content to the audit situation, as shown in Fig. 4.
Fig. 4. Processing time for different methods
As can be seen from Fig. 4, the AIS trusted requirements model has a shorter time and is better than traditional audit methods, which further verifies the results of Table 3.
6 Conclusions

This paper addresses the problems of inaccurate accounting audit analysis and the inability of traditional audit methods to carry out trusted demand models effectively. It proposes an AIS trusted demand model, comprehensively analyzes the accounting audit data and operation, and uses the trusted requirements model to identify abnormal audit violations. Through the analysis of the AIS trusted demand model, different audit function methods can be adopted and, combined with the collected accounting audit data, the overall results of the accounting audit can be determined. The results show that the audit error rate of the AIS trusted requirements model is less than 5% and its accuracy is greater than 90%, significantly better than traditional audit methods. Moreover, with the AIS trusted demand model the processing time for accounting audit data is shorter,
and the verification time for external audit accounting data, consolidated audit numbers, and complexity is shorter. At the same time, the audit times for external, comprehensive, and internal audits are better than those of traditional audit methods. Therefore, the AIS trusted demand model meets the requirements of accounting audit data and is suitable for accounting audit data management.

Acknowledgements. This work is supported by the Industry-University Collaborative Education Project by the Ministry of Education of China in 2022: "The Construction of Accounting Education Practice Base in the Age of Digital Intelligence" (NO.220606342143936).
References

1. Al-Okaily, M., Alkhwaldi, A.F., Abdulmuhsin, A.A.: Cloud-based accounting information systems usage and its impact on Jordanian SMEs' performance: the post-COVID-19 perspective. J. Finan. Report. Account. 3(2), 19–21 (2021)
2. Altin, M., Yilmaz, R.: Adoption of cloud-based accounting practices in Turkey: an empirical study. Int. J. Public Adm. 45(11), 819–833 (2022)
3. Buchetti, B., Parbonetti, A., Pugliese, A.: Covid-19, corporate survival and public policy: the role of accounting information and regulation in the wake of a systemic crisis. J. Account. Public Policy 41(1), 1–12 (2022)
4. Dunham, L.M., Grandstaff, J.L.: The value relevance of earnings, book values, and other accounting information and the role of economic conditions in value relevance: a literature review. Account. Perspect. 21(2), 237–272 (2022)
5. Walakumbura, S.: An empirical study on cloud accounting awareness and adoption among accounting practitioners in Sri Lanka. Int. J. Sci. Res. Publ. 11(7), 342–347 (2021)
6. Jin, T.L., Zhang, B., Yang, Z.: Cloud statistics of accounting informatization based on statistics mining. Comput. Intell. Neurosci. 9(4), 118–131 (2022)
7. Tawfik, O.L., Tahat, A., Jasim, A.L., et al.: Intellectual impact of cyber governance in the correct application of cloud accounting in Jordanian commercial banks - from the point of view of Jordanian auditors. Allied Bus. Acad. 32(5), 102–114 (2021)
8. Giustolisi, O., Ciliberti, F.G., Berardi, L., et al.: A novel approach to analyze the isolation valve system based on the complex network theory. Water Resour. Res. 58(4), 304–318 (2022)
9. Gallego-Molina, N.J., Ortiz, A., Martínez-Murcia, F.J., et al.: Complex network modeling of EEG band coupling in dyslexia: an exploratory analysis of auditory processing and diagnosis. Knowl.-Based Syst. 240(3), 684–699 (2022)
10. Bazhenov, A.Y., Nikitina, M., Alodjants, A.P.: High temperature superradiant phase transition in quantum structures with a complex network interface. Opt. Lett. 47(12), 3119–3122 (2022)
11. Al-Okaily, M., Alkhwaldi, A.F., Abdulmuhsin, A.A., Alqudah, H., Al-Okaily, A.: Cloud-based accounting information systems usage and its impact on Jordanian SMEs' performance: the post-COVID-19 perspective. J. Finan. Report. Account. 2(3), 90 (2022)
12. Jin, T.L., Zhang, B., Yang, Z.: Cloud statistics of accounting informatization based on statistics mining. Comput. Intell. Neurosci. 23(6), 82 (2022)
13. Musyaffi, A.M., Septiawan, B., Arief, S., Usman, O., Sasmi, A.A., Zairin, G.M.Z.: What drives students to feel the impact of online learning in using a cloud accounting integrated system? TEM J. 1577–1588 (2022). https://doi.org/10.18421/TEM114-19
14. Saad, M., et al.: Assessing the intention to adopt cloud accounting during COVID-19. Electronics 11(24), 78 (2022)
15. Tie, X.: Informatization of accounting system of electric power enterprises based on sensor monitoring and cloud computing. Int. Trans. Electric. Energy Syst. 2022, 1–7 (2022). https://doi.org/10.1155/2022/3506989
16. Wang, J.L., Yang, X.Q., Li, Z.: Cloud data integrity verification algorithm based on data mining and accounting informatization. Sci. Program. 13(5), 65 (2022)
17. Zhang, M., Ye, T.T., Jia, L.: Implications of the "momentum" theory of digitalization in accounting: evidence from Ash Cloud. China J. Account. Res. 7(3), 45 (2022)
18. Jin, T., Zhang, B.: Intermediate data fault-tolerant method of cloud computing accounting service platform supporting cost-benefit analysis. J. Cloud Comput. 12(1) (2023). https://doi.org/10.1186/s13677-022-00385-4
A Novel Method of Enterprise Financial Early Warning Based on Wavelet Chaos Algorithm

Lu Zhou(B)

School of Finance and Economics, Guangdong University of Science and Technology, Dongguan, Guangdong, China [email protected]

Abstract. Financial early warning is the key to financial management, but the early warning process is easily disturbed by policies and industry conditions, which reduces warning accuracy and causes deviations in key early warning data and errors in the warning results. Based on this, this paper proposes a wavelet chaos algorithm that applies wavelet-chaos processing to financial data, enhances key financial data, and shortens the mapping distance between financial data. Wavelet analysis is then performed on financial data gradients and neighborhoods. Finally, the chaos method mines the early warning data and outputs the final warning results. The results show that the wavelet chaos algorithm can carry out financial early warning accurately and reduce the interference of policies and industry conditions, with an early warning accuracy greater than 92%, which is better than the chaos algorithm. Therefore, the wavelet chaos algorithm meets the requirements of enterprise financial early warning and is suitable for financial management.

Keywords: Corporate Finance · Early Warning · Intelligence · Wavelet Chaos Algorithm
1 Introduction

Some scholars believe that enterprise financial early warning is a comprehensive analysis process of financial management [1] that must analyze financial data and audit data together, and that it is prone to problems such as false warnings and incomplete financial analysis [2]. At present, financial early warning in enterprise financial management often suffers from low accuracy [3], long warning times, and redundancy of financial data across departments. Therefore, some scholars propose applying intelligent algorithms to enterprise financial early warning [4], identifying disturbing factors such as policies and industry conditions, and strengthening the mining of early warning results. However, the mining effect for listed companies and large enterprises is still not ideal, and accuracy remains low [5]. To this end, some scholars propose a wavelet chaos algorithm, which strengthens key values to identify outliers and complex values among them and iteratively analyzes key financial data to achieve effective early warning in enterprise financial management [6]. Based on the wavelet chaos algorithm, this paper analyzes the key values in financial management and verifies the effectiveness of the early warning.
2 Financial Management Analysis

Financial management analysis warns on the geometric features in key values and detects their change characteristics; it resists interference from policies and industry conditions well, warns effectively, and is a commonly used enterprise financial early warning method [7]. Financial management analysis mainly calculates the peak value of financial data based on the financial data points and the relationships between data. The wavelet chaos algorithm uses chaos theory to audit and identify the key points in the financial data and completes early warning identification through key-value intersection calculations [8]. The projection of the feature value represents the change direction of the key-value warning. The wavelet chaos algorithm requires two definitions, as follows [9]. Any enterprise financial datum is $data_i$, the early warning data are $x_i$, the data relationship is $l_i$, the integral result is $r_i$, the early warning set is $set_i$, the collective angle is $\theta_0$, and the number of corporate financial warnings is $c_i$. Then, the calculation of $data_i$ is shown in Eq. (1):

$$data_i = \sum_{r_i} l_i \cdot x_i \Longleftrightarrow \cos\theta \cdot c_i \in set_i \tag{1}$$
The constraint function of the wavelet chaos algorithm is $\beta$, $f(x, l, r \mid \theta)$ is the constraint condition, $\alpha$ is the constraint condition for early warning mining, and $\varsigma$ is the interference coefficient of policy and industry conditions. The calculation of $f(x, l, r \mid \theta)$ is shown in Eq. (2):

$$f(x, l, r \mid \theta) = \sum_{\theta} \alpha \cdot \frac{(r_i \cdot x_i \wedge l_i)}{\beta} \mid \varsigma \tag{2}$$
The key value is $Ce_i$, the change of early warning is $x_i$, the set of accounting data is $set_i$, the number of warnings is $c_i$, and $A$ is a wavelet analysis class of financial data. The calculation of $Ce_i$ is shown in Eq. (3):

$$Ce_i = \sum_{c_i} x_i \cdot c_i \cup set_i \wedge A \tag{3}$$
The early warning adjustment function is $f(x, c)$, where $A$ is the constraint and $\xi$ is the early warning error coefficient. The calculation of $f(x, c)$ is shown in Eq. (4):

$$f(x, c) = A \cdot \sum_{set} (r_i \cdot x_i) + \xi \tag{4}$$
In the process of enterprise financial early warning, financial information should be quantified to reduce the error rate of the warning. According to statistical principles, it is necessary to identify discrepant financial data points and audit data, calculate the wavelets of the financial data, identify the early warning financial data, and calculate the early warning stability after the enterprise's financial warning. Therefore, random eigenvalues of the key values must be extracted after mining [10].
The early warning function is $F(x_i \mid A)$: when $P < 1$ an early warning value occurs, and when $P = 1$ the state is normal. The characteristic early warning function $F(x_i \mid A)$ is calculated as shown in Eq. (5):

$$F(x_i \mid A) = \alpha \cdot \sum_{i=1}^{n} x_i \cdot \sin\theta_i \cdot R_i \cdot \beta \cdot P_i^{\theta} \tag{5}$$
where $\theta_i$ is the degree of financial early warning of the enterprise and $P_i^{\theta}$ denotes the early warning results of different degrees. If the output $F(x_i \mid A)$ is positive, the company's financial warning is reasonable; otherwise the value should be excluded. If $F(x_i \mid A)$ is less than $\alpha$, the characteristic value does not meet the early warning requirements and the early warning parameters should be adjusted; if $F(x_i \mid A)$ is greater than $\alpha$, the characteristic value meets the early warning requirements. The comprehensive judgment function of early warning is $F(Ce_i \mid A)$, calculated as shown in Eq. (6):

$$F(Ce_i \mid A) = \alpha \cdot A \cdot \sum_{set} Ce_i \cdot \sin\theta_i \cdot r_i \tag{6}$$
3 Wavelet Chaos Algorithm for Enterprise Financial Warning

In the process of wavelet-chaos early warning analysis, the key values and their operations should be quantified to reduce the complexity of the warning. It is necessary to identify the financial data points and data relationships [10] and to calculate the wavelet representation of the key financial data. At the same time, the warning rate should be analyzed. Therefore, the feature values must be randomly extracted.
4 Implementation Steps of Corporate Financial Alerts

To reduce the occurrence of critical values, enterprise financial early warning needs a sampling analysis of key-value warnings, covering financial data points, financial data relationships, and outlier ranges. In addition, the key financial data are mined with the wavelet chaos algorithm, and statistical theory is used to audit and calculate the different financial data points and relationships. At the same time, interference analysis of key values of different complexity [9] is carried out to eliminate the influence of key-value interference on the calculation results and to reduce the difficulty of the calculation. The steps are as follows (a code sketch follows the list).

Step 1: Collect financial data, determine the early warning judgment rules, perform wavelet chaos analysis on the collected data, and then determine the calculation threshold and weights.
Step 2: Calculate points according to the financial data and the financial data relationships among the key values, compare the results with the points, finally determine the characteristic values of financial data points and financial data relationships, and mine the causes of abnormal points and relationships.
Step 3: Compare abnormal financial data points and financial data relationships, verify the accuracy, predictability, and stability of the results, and store the results in the financial log.
Step 4: Accumulate the calculations of key-value financial data points and relationships; if the preset key-value capacity is exceeded or the maximum number of iterations is reached, terminate the analysis; otherwise, continue the calculation.
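To make the wavelet step concrete, the sketch below decomposes a financial time series with a discrete wavelet transform and flags points whose finest-scale detail coefficients deviate strongly, which is one plausible reading of Steps 1–2; the wavelet family (db4), decomposition level, and k-sigma rule are illustrative assumptions rather than the paper's specification.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_early_warning(series, wavelet="db4", level=3, k=3.0):
    """Decompose a financial series and flag the approximate positions
    whose finest-scale detail coefficients exceed k standard deviations."""
    coeffs = pywt.wavedec(series, wavelet, level=level)  # Step 1: wavelet analysis
    detail = coeffs[-1]                                  # finest detail band
    sigma = detail.std() or 1.0
    flags = np.abs(detail) > k * sigma                   # Step 2: judge key values
    scale = len(series) / len(detail)                    # map back to sample index
    return [int(i * scale) for i, f in enumerate(flags) if f]

# Hypothetical usage: a smooth revenue series with one injected shock.
t = np.linspace(0, 4 * np.pi, 256)
series = 100 + 5 * np.sin(t)
series[180] += 40  # abnormal jump
print(wavelet_early_warning(series))  # indices near the injected anomaly
```

In this sketch, the flagged indices would then be compared against the Step 1 threshold and written to the financial log in Step 3.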
5 Practical Examples of Corporate Financial Warnings

To verify the effect of the early warning calculation method on enterprise financial early warning, the warning effect of financial data is judged based on small and medium-sized enterprises; the specific parameters are shown in Table 1.

Table 1. Parameters of key values of enterprise financial early warning
| Parameter | Key values | Early warning |
| Audit data | 2,22 | 3,22 |
| Accounting data | 22,130 | 35,225 |
| Early warning conditions | 23.22~22.22 | 23.22~22.22 |
| Complexity | Class III | Class III |
| Departmental data | 2~4 | 2~1 |
| Feature center | 0.32 | 0.32 |
| Alert list | 0.12 | 0.12 |
According to the parameters in Table 1, there is no significant difference between the feature center and the early warning list in terms of audit data, accounting data, early warning conditions, complexity, departmental data, feature center, and alert list, indicating no obvious difference between the key-value warning and the key-value core. This meets the statistical analysis requirements, so enterprise financial early warning analysis can proceed. The early warning results for the data in Table 1 are shown in Fig. 1.
Fig. 1. Proportion of data mining without warning
Fig. 1 shows that the data mining in the core area is relatively concentrated while the data mining in the early warning area is relatively scattered, which meets the requirements of key-value analysis, so later data comparison and analysis can be carried out.

The stability and accuracy of corporate financial warnings. Enterprise financial early warning should maintain a certain degree of stability, otherwise the calculation results will be affected; the stability of the early warning results should be tested, and the specific results are shown in Table 2.

Table 2. Comparative results of stability and accuracy (Unit: %)
| Algorithm | Manner | Parameter | Accuracy | Stability | Average magnitude of change |
| Wavelet chaos algorithm | Accounting | Financial data relationships | 96.62 ± 6.01 | 98.22 ± 2.02 | 2.92 ± 0.62 |
| | | Financial data points | 98.26 ± 2.22 | 94.04 ± 6.06 | 2.25 ± 0.22 |
| | | Comprehensive data points | 22.22~62.62 | 22.22~62.62 | \ |
| | Audit | Financial data relationships | 95.22 ± 0.06 | 92.22 ± 0.06 | 2.25 ± 0.62 |
| | | Financial data points | 96.62 ± 0.06 | 92.62 ± 0.02 | 2.42 ± 0.26 |
| | | Comprehensive data points | 22.22~62.62 | 22.22~62.62 | \ |
| Wavelet chaos method | Accounting | Financial data relationships | 76.62 ± 6.22 | 78.22 ± 20.02 | 5.62 ± 2.62 |
| | | Financial data points | 78.26 ± 6.62 | 76.04 ± 9.06 | 5.22 ± 2.22 |
| | | Comprehensive data points | 22.22~62.62 | 22.22~62.62 | \ |
| | Audit | Financial data relationships | 75.22 ± 2.66 | 72.22 ± 0.06 | 5.26 ± 2.62 |
| | | Financial data points | 76.62 ± 2.26 | 72.62 ± 0.02 | 5.22 ± 2.66 |
| | | Comprehensive data points | 22.22~62.62 | 22.22~62.62 | \ |

Note: df = 8.25, P < 0.02.
Table 2 shows that the stability and accuracy of the wavelet chaos algorithm are greater than 93%, its mode change is less than 6, and the change wavelet of the different methods is less than 9, with no significant difference. At the same time, the variation between financial data points and financial data relationships is relatively small, so the overall result of the early warning calculation method is good. For the wavelet chaos method, the financial data relationships and financial data points differ considerably, between 7 and 9, and the difference in the magnitude of change also exceeds 3. The variation in accuracy and stability of the different methods is shown in Fig. 2. Fig. 2 shows that in continuous sampling at different stages, the accuracy and stability of the wavelet chaos algorithm are more concentrated, while those of the chaos method are scattered, which is consistent with the results in Table 2. The reason is that the wavelet chaos algorithm maps and analyzes key-value financial data, financial data relationships, and other data, simplifying attributes such as the layer, the financial data relationship length, and the financial data point location. After discovering breakpoints, overlapping financial data, exceeded thresholds, and other policy and industry conditions, early warning analysis is used to determine the locations of the key-value warnings.
Fig. 2. Comparison of stability and accuracy of different algorithms
The early warning time for enterprise finances. The early warning time is an important indicator of the enterprise financial early warning effect, including accounting warnings, audit warnings, and industry warning determination; the specific results are shown in Table 3.

Table 3. Warning time of enterprise finance (Unit: seconds)
| Method | Parameter | Early warning ||| Early warning forecasting |||
| | | Accounting | Audit | Industry | Accounting | Audit | Industry |
| Wavelet chaos algorithm | Industry early warning | 6.75 ± 2.72 | 6.67 ± 2.66 | 6.72 ± 2.77 | 22.65 ± 2.42 | 22.77 ± 2.27 | 22.65 ± 2.77 |
| | Accounting alerts | 6.65 ± 2.57 | 6.42 ± 2.65 | 6.46 ± 2.26 | 22.25 ± 2.77 | 22.45 ± 2.65 | 22.75 ± 2.67 |
| | Audit alerts | 6.47 ± 2.67 | 6.27 ± 2.47 | 6.87 ± 2.75 | 22.47 ± 2.47 | 22.07 ± 2.76 | 22.67 ± 2.52 |
| | Magnitude of change | 0.45~0.57 |
| Chaos algorithm | Industry early warning | 7.75 ± 6.72 | 7.66 ± 7.46 | 7.77 ± 2.27 | 26.25 ± 6.42 | 26.27 ± 7.27 | 26.67 ± 6.77 |
| | Accounting alerts | 7.65 ± 6.57 | 7.72 ± 7.75 | 7.66 ± 2.76 | 26.27 ± 6.27 | 26.65 ± 6.67 | 26.25 ± 6.67 |
| | Audit alerts | 9.47 ± 7.67 | 8.77 ± 2.67 | 8.87 ± 2.65 | 26.67 ± 6.77 | 26.27 ± 7.66 | 26.77 ± 4.52 |
| | Magnitude of change | 2.45~2.57 |
According to Table 3, with the wavelet chaos algorithm the enterprise financial early warning time is relatively stable, with a change range of 0.45–0.57. The warning times for accounting, auditing, and industry are between 6 and 7 s, the warning times at prediction levels I–II are between 22 and 27 s, and the overall warning time is satisfactory. Compared with the wavelet chaos algorithm, the calculation time of the chaos method is relatively long, with a time variation range of 2.45–2.57. The reason is that the wavelet chaos algorithm, starting from a small number of policy and industry conditions, iteratively analyzes the positions of signals, the amount of financial data, and the relationships among financial data, accurately finds the locations of abnormal points, and determines the key-value warnings. If the number of excavation points for early warning is small, the amount of later calculation decreases exponentially, thereby shortening the calculation time. The warning time for the overall data in Table 3 is shown in Fig. 3.

Fig. 3. Comprehensive comparison of different methods
The analysis of Fig. 3 shows that the comprehensive analysis degree of the wavelet chaos algorithm is greater and its overall change is relatively stable, while the change range of the chaos method is larger and inferior to the former. Therefore, the results in Fig. 3 further validate those of Table 3.
6 Conclusions

For enterprise financial warning, the chaos method cannot warn accurately. This paper therefore proposes a wavelet chaos algorithm that sets data for accounting and auditing and uses integration to analyze the early warning feature values. The wavelet chaos algorithm analyzes the relationships between financial data points and financial data to reduce the mapping of policies and industries in the warning, and the feature values are used as nodes for early warning analysis to realize key-value warnings. The results show that the stability and accuracy of the wavelet chaos algorithm are greater than 90%, and there is no significant difference in the changes of auditing, accounting, and industry warnings, whereas the differences for the chaos algorithm are greater. With the wavelet chaos algorithm the enterprise financial early warning time is relatively short and is not affected by accounting, auditing, industry, or the early warning prediction level, while the warning time of the chaos method is relatively long, with a time variation range of 2.45–2.57. Therefore, the wavelet chaos algorithm meets the requirements of enterprise financial early warning and is better than the chaos method.
References

1. Siranova, M., Zelenak, K.: Every crisis does matter: comparing the databases of financial crisis events. Rev. Int. Econ. 3(2), 105–113 (2022)
2. Ahmadi, C., Karampourian, A., Samarghandi, M.R.: Explain the challenges of evacuation in floods based on the views of citizens and executive managers. Heliyon 8(9), 32–44 (2022)
3. Blair, J., Brozena, J., Matthews, M.: Financial technologies (FinTech) for mental health: the potential of objective financial data to better understand the relationships between financial behavior and mental health. Front. Psych. 13(3), 98–111 (2022)
4. Gladysz, B., Kuchta, D.: Sustainable metrics in project financial risk management. Sustainability 14(21), 109–117 (2022)
5. Kuang, J., Chang, T.C., Chu, C.W.: Research on financial early warning based on combination forecasting model. Sustainability 14(19), 89–104 (2022)
6. Liu, C., Yang, S., Hao, T., Song, R.: Service risk of energy industry international trade supply chain based on artificial intelligence algorithm. Energy Rep. 8(7), 13211–13219 (2022)
7. Liu, W.A., Fan, H., Xia, M., Pang, C.Y.: Predicting and interpreting financial distress using a weighted boosted tree-based tree. Eng. Appl. Artif. Intell. 116(3), 78–90 (2022)
8. Ragkos, A., Skordos, D., Koutouzidou, G.: Socioeconomic appraisal of an early prevention system against toxic conditions in mussel aquaculture. Animals 12(20), 78–93 (2022)
9. Tran, K.L., Le, H.A., Nguyen, T.H., Nguyen, D.T.: Explainable machine learning for financial distress prediction: evidence from Vietnam. Data 7(11), 32–45 (2022)
10. Vacheron, M.N., Dugravier, R., Tessier, V., Deneux-Tharaux, C.: Perinatal maternal suicide: how to prevent? Encephale-Revue De Psychiatrie Clinique Biologique Et Therapeutique 48(5), 590–592 (2022)
Application of Panoramic Characterization Function Based on Artificial Intelligence Configuration Operation State Xiaofeng Zhou1(B), Zhigang Lu2, Ruifeng Zhao2, Zhanqiang Xu2, and Hong Zhang1 1 Yantai Haiyi Software Co. Ltd., Yantai, Shandong, China
[email protected] 2 Electric Power Dispatching and Control Center, Guangdong Power Grid Co., Ltd.,
Guangzhou, Guangdong, China
Abstract. Panoramic portrayal is the main method of adjusting the configuration operating state, and in the adjustment process it is easy to portray the state inaccurately, which reduces the accuracy of the panoramic portrayal function and can even cause adjustment errors. Based on this, this paper proposes an artificial intelligence algorithm that comprehensively analyzes the panoramic characterization, comprehensively identifies the configuration state data, and shortens the characterization time. The fault signals and status are then analyzed panoramically. Finally, the artificial intelligence algorithm monitors the configuration status and outputs the panoramic characterization results. MATLAB simulations show that the artificial intelligence algorithm can analyze the configuration state accurately, with an adjustment error rate below 10% and an adjustment accuracy above 90%, which is better than the traditional characterization algorithm. Therefore, the artificial intelligence algorithm meets the requirements of panoramic portrayal of the configuration state and is suitable for configuration state management.

Keywords: Distribution Network · Scheduling · Panoramic Portrayal · Dynamic Planning
1 Introduction

Some scholars believe that adjusting the operating state of the distribution network is an intelligent process of distribution state management [1] that should monitor the state of transmission and distribution and of electrical components [2], and that it is prone to problems such as inaccurate state analysis and incomplete depiction. At present, the adjustment of the configuration operating state [3] often suffers from low accuracy, long adjustment times, and high disturbance rates [4]. Therefore, some scholars propose applying artificial intelligence algorithms to the analysis of distribution and transformation states and identifying the state signals [5]. However, unfavorable factors such as the power grid environment and interference subject the distribution and transformation state to strong disturbance. To this end, some scholars propose artificial intelligence
algorithms to optimize the adjustment scheme [6], identify the key signals and fault signals in the configuration state, and portray the fault signals in a panoramic manner, so as to realize the effective scheduling of the configuration state. Based on the artificial intelligence algorithm, this paper depicts the configuration state in a panoramic manner and verifies the effectiveness of the artificial intelligence algorithm.
2 Mathematical Description of the Configuration State
The configuration state suffers from long monitoring times and many interference sources, and feature identification of the adjustment scheme plays an auxiliary role [7–9]. The panoramic depiction is carried out mainly on the basis of the transmission and distribution state and the electrical energy state data. Dynamic theory is used to identify random and regular characteristics of electrical data points, and standard verification is carried out through scheduling points. Among them, the rate of change of the eigenvalues represents the degree of adjustment of the configuration state. The configuration state is defined as follows.
Definition 1: The configuration state is $s_i$, the operating state is $x_i$, the signal is $l_i$, the panoramic depiction is $r_i$, the adjustment set is $set_i$, the degree of regulation is $\theta$, and the number of analyses is $c_i$. The panoramic portrayal process is shown in Eq. (1):
$set_i = \frac{x_i \cdot (c_i \cdot \tan\theta)}{da_i}$ (1)
Among them, the degree of panoramic portrayal is level 4, and the panoramic portrayal time is t.
Definition 2: The constraint function is $f(x \mid \hat{\theta})$, a is the constraint of the configuration, b is the constraint condition for panoramic characterization, and $\varsigma$ is the disturbance rate. The calculation of $f(x \mid \hat{\theta})$ is shown in Eq. (2):
$f(x \mid \hat{\theta}) = \sum_{x_i} \frac{a \cdot (r_i \cdot x_i \cdot \sin\theta_i)}{b} + \varsigma$ (2)
3 Panoramic Depiction of the Configuration State by the Artificial Intelligence Algorithm
In the process of adjusting the configuration operating state, the state and operation should be quantified to reduce the complexity of the adjustment. According to the scheduling principle, the monitoring-point data and the status [10] should be identified, and the transmission and distribution state should be calculated dynamically. At the same time, the state adjustment rate of configuration state transmission should be analyzed. Therefore, the feature values must be randomly extracted.
Definition 3: The abnormal judgment function is $F(da_i \mid k)$. When the function value is distorted, P = 1; when the function value is stable, P < 1. The specific calculation is shown in Eq. (3):
$F(da_i \mid k) = \frac{\alpha \cdot \sum_{s}^{n} \left(\frac{s_i \cdot \tan\theta}{k}\right)}{\beta \cdot l_i}$ (3)
If the output of $F(da_i \mid k)$ is positive, the configuration state is stable; otherwise an exception occurs.
Definition 4: The time test function is $O(s_i \mid t)$; over time t, the test result is shown in Eq. (4):
$O(s_i \mid t) = \alpha \cdot \sum_{t=1}^{n} \frac{s_i}{v}$ (4)
Among them, v is the change in the degree of regulation.
Definition 5: The operating condition adjustment function is $D(s_i \mid lo)$, where lo is the change in operating condition; its calculation is shown in Eq. (5):
$D(s_i \mid lo) = lo \cdot \alpha \cdot \sum_{t=1}^{n} s_i$ (5)
Definition 6: The state function is $E(s_i \mid r)$, where r is the rate of change; the specific calculation is shown in Eq. (6):
$E(s_i \mid r) = r_i \cdot \sum_{t=1}^{n} \tilde{s}_i \cdot \tan\theta$ (6)
The adjustment of the configuration operation state should identify abnormal values and carry out sampling adjustment, covering the operation of the transmission and distribution state, the abnormal operation state, and the configuration state. In addition, according to the operation situation and state, the planning algorithm and dynamic theory are used to analyze the configuration state comprehensively. At the same time, the configuration state is adjusted as a whole to eliminate the influence of interference on the panoramic depiction. The specific method is as follows (a minimal code sketch of this loop is given after the steps).
Step 1: The configuration status is collected, the judgment rules for outliers are determined, and a comprehensive judgment of state adjustment and operation is carried out to determine the threshold.
Step 2: Calculate the points according to the transmission and distribution state, judge the abnormal state, and finally determine the operation and status.
Step 3: Compare the different methods in terms of operation accuracy and status adjustment rate, and output the final result.
Step 4: If all configuration states have been traversed, terminate the adjustment; otherwise repeat Steps 1–3.
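To make the Step 1–4 loop concrete, a minimal Python sketch is given below. It is illustrative only: the source provides no code, and the constants ALPHA, BETA, K, and THETA, the threshold, and the sample format are assumptions standing in for the quantities of Eqs. (3)–(4).

```python
import math

# Hypothetical stand-ins for the paper's alpha, beta, k and theta;
# the source gives no concrete values, so these are assumptions.
ALPHA, BETA, K, THETA = 1.0, 1.0, 2.0, math.pi / 6

def abnormal_judgment(states, signal):
    """Eq. (3): a positive value is read as 'stable', otherwise abnormal."""
    numerator = ALPHA * sum((s * math.tan(THETA)) / K for s in states)
    return numerator / (BETA * signal)

def time_test(states, v):
    """Eq. (4): accumulated state over the regulation-degree change v."""
    return ALPHA * sum(s / v for s in states)

def panoramic_adjustment(samples, threshold=0.0):
    """Steps 1-4: judge every sampled configuration state against the
    threshold and collect the ones flagged abnormal for panoramic portrayal."""
    abnormal = []
    for states, signal in samples:                          # Step 1: collected samples
        if abnormal_judgment(states, signal) <= threshold:  # Step 2: judge abnormality
            abnormal.append((states, signal))               # Step 3: record for comparison
    return abnormal                                         # Step 4: all states traversed

# Example: two monitoring samples, each a (state list, signal) pair
samples = [([0.8, 0.9, 1.1], 1.0), ([-0.5, -0.7, -0.2], 1.0)]
print(panoramic_adjustment(samples))  # only the second sample is flagged
```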
4 Practical Examples of the Configuration State
4.1 The Configuration State Parameters
In order to verify the adjustment effect of the artificial intelligence algorithm on the analysis of the distribution and transformation state, the adjustment effect is judged by taking the 35 kV distribution state as the research object. Among them, the configuration state parameters are all within the normal range, and the specific results are shown in Table 1.

Table 1. Status of the 35 kV configuration

Parameter | Power distribution | Microgrid
Power flow | 215–315 | 95–115
Matching status | Change → match | Change → match
Equipment characterization | Panorama | Panorama
Degree of portrayal | Level 3 | Level 3
Perturbation rate | 18.11 | 19.65
According to the configuration state parameters in Table 1, there is no significant difference between power distribution and transformation in terms of state, distribution state, equipment characterization, panoramic characterization degree, disturbance rate, etc., indicating that the relevant data in the configuration state can be used as panoramic depiction objects. The configuration status results in Table 1 are shown in Fig. 1.

Fig. 1. Configuration status

It can be seen from Fig. 1 that the overall state of the configuration is good and the change amplitude of the different parameters is small, which permits the subsequent analysis.

4.2 The State Adjustment Rate and Accuracy of the Configuration Operation State Adjustment
The state adjustment rate and accuracy are important indicators for the adjustment of the configuration operation state, and whether the artificial intelligence algorithm can effectively adjust the distribution network must be verified against these indicators. The specific results are shown in Table 2.

Table 2. Comparison of state adjustment rate and accuracy of different methods (unit: %)

Algorithm | Network | Parameter | Accuracy | Status adjustment rate | Perturbation rate
Artificial intelligence algorithm | Power distribution | Power flow | 93.33 ± 3.77 | 33.73 ± 7.37 | 3.73
Artificial intelligence algorithm | Power distribution | State conditioning | 94.73 ± 3.73 | 33.34 ± 7.43 | 3.83
Artificial intelligence algorithm | Power distribution | Fault judgment | 95.36 ± 3.37 | 33.73 ± 7.33 | 3.35
Artificial intelligence algorithm | Power substation | Power flow | 83.33 ± 7.73 | 34.73 ± 7.33 | 5.06
Artificial intelligence algorithm | Power substation | State conditioning | 83.33 ± 7.33 | 33.33 ± 7.37 | 4.33
Artificial intelligence algorithm | Power substation | Fault judgment | 85.37 ± 7.73 | 33.34 ± 7.33 | 4.36
Traditional characterization algorithm | Power distribution | Power flow | 87.73 ± 3.77 | 48.33 ± 3.07 | 3.37
Traditional characterization algorithm | Power distribution | State conditioning | 83.76 ± 3.33 | 46.74 ± 3.03 | 3.73
Traditional characterization algorithm | Power distribution | Fault judgment | 85.36 ± 3.07 | 45.34 ± 3.03 | 3.75
Traditional characterization algorithm | Power substation | Power flow | 83.73 ± 4.33 | 49.33 ± 7.03 | 4.33
Traditional characterization algorithm | Power substation | State conditioning | 83.33 ± 4.73 | 49.73 ± 7.07 | 4.37
Traditional characterization algorithm | Power substation | Fault judgment | 85.76 ± 4.73 | 48.33 ± 7.33 | 4.33
It can be seen from Table 2 that the state adjustment rate of the artificial intelligence algorithm in the distribution and transformation stages is less than 50%, the accuracy is greater than 90%, and the disturbance rate is less than 7%. Relatively speaking, the state adjustment rate of the traditional characterization method is higher and its accuracy is lower, while its disturbance rate is consistent with that of the artificial intelligence algorithm. This shows that the AI algorithm achieves better accuracy and state adjustment rate under the same interference. In order to verify the effectiveness of the proposed method, the panoramic characterization of the configuration state is analyzed continuously, and the results are shown in Fig. 2.
Fig. 2. Comparison of state regulation rate and accuracy of different algorithms
It can be seen from Fig. 2 that, under continuous monitoring, the artificial intelligence algorithm effectively improves the accuracy of panoramic characterization and requires fewer adjustments, while the accuracy and state adjustment rate of the traditional characterization method fluctuate greatly; the overall results are consistent with those in Table 2. The reason is that the artificial intelligence algorithm can analyze the characteristics of the configuration state, simplify the adjustment process, and improve the adjustment efficiency, so the overall operation status is better.
4.3 Panoramic Characterization Time Analysis of the Configuration State
Panoramic characterization time is the main index of the adjustment effect of the configuration running state, covering power flow, portrayal degree, operating state, and panorama. The specific results are shown in Table 3.
Table 3. Adjustment time of the configuration operation state (unit: seconds)

Method | Parameter | Power | Station area | Distribution points | Voltage | Resistance | Current
(columns 3–5: distribution regulation; columns 6–8: substation regulation)
Artificial intelligence algorithm | Power flow | 33.53 ± 3.33 | 39.35 ± 3.35 | 35.55 ± 3.35 | 33.53 ± 3.33 | 39.35 ± 3.35 | 35.55 ± 3.35
Artificial intelligence algorithm | Degree of portrayal | 37.37 ± 3.55 | 37.33 ± 3.57 | 37.33 ± 3.53 | 37.37 ± 3.55 | 37.33 ± 3.57 | 37.33 ± 3.53
Artificial intelligence algorithm | Running status | 37.35 ± 3.55 | 37.53 ± 3.35 | 37.57 ± 3.37 | 37.35 ± 3.55 | 37.53 ± 3.35 | 37.57 ± 3.37
Artificial intelligence algorithm | Panorama | 37.33 ± 3.55 | 37.53 ± 3.35 | 39.33 ± 3.53 | 37.33 ± 3.55 | 37.53 ± 3.35 | 39.33 ± 3.53
Traditional portrayal method | Power flow | 77.37 ± 7.33 | 77.33 ± 3.33 | 77.55 ± 5.35 | 77.37 ± 7.33 | 77.33 ± 3.33 | 77.55 ± 5.35
Traditional portrayal method | Degree of portrayal | 77.37 ± 3.35 | 77.35 ± 3.55 | 77.33 ± 5.33 | 77.37 ± 3.35 | 77.35 ± 3.55 | 77.33 ± 5.33
Traditional portrayal method | Running status | 79.35 ± 7.35 | 78.35 ± 3.33 | 73.87 ± 5.37 | 79.35 ± 7.35 | 78.35 ± 3.33 | 73.87 ± 5.37
Traditional portrayal method | Panorama | 79.35 ± 7.535 | 78.33 ± 3.35 | 78.37 ± 5.57 | 79.35 ± 7.535 | 78.33 ± 3.35 | 78.37 ± 5.57
According to Table 3, the artificial intelligence algorithm yields shorter adjustment times for power, station area, distribution points, voltage, resistance, and current, outperforming the traditional characterization method. In addition, in terms of power flow, portrayal degree, operating state, and panorama, although the time variation of the two methods is relatively stable, the artificial intelligence algorithm is significantly faster than the traditional characterization method; the reason is that the artificial intelligence algorithm comprehensively processes data such as power flow and operation and formulates a responsive adjustment plan. According to the state adjustment results, the panoramic characterization effect of the different methods is tested to verify the effectiveness of the proposed method. The traditional characterization method also characterizes the overall configuration operation state but lacks the panorama analysis step, while the artificial intelligence algorithm can, through dynamic adjustment, adopt different panoramic characterization content according to the monitoring situation, as shown in Fig. 3. It can be seen from Fig. 3 that the artificial intelligence algorithm takes less time than the traditional characterization method, which further verifies the results in Table 3.
Fig. 3. Processing time for different methods
5 Conclusions
In view of problems such as inaccurate analysis of the configuration state and the inability of traditional portrayal methods to depict the panorama effectively, this paper proposes an artificial intelligence algorithm that comprehensively analyzes the operation state and operation of the configuration and identifies abnormal states through panoramic portrayal. Through the analysis of the artificial intelligence algorithm, different functional methods can be adopted and, combined with the collected configuration operation status data, the overall characterization results of the configuration state can be determined. The results show that the characterization adjustment rate of the artificial intelligence algorithm is less than 5% and the accuracy is greater than 90%, which is significantly better than the traditional characterization method. Moreover, with the artificial intelligence algorithm, the panoramic characterization time of the configuration operation state is shorter, and the verification times for voltage, power flow, characterization degree, and panorama are shorter. At the same time, the characterization times for voltage, current, and resistance are better than those of the traditional characterization method. Therefore, the artificial intelligence algorithm can meet the requirements of panoramic characterization of the configuration operation state and is suitable for configuration operation state management.
References
1. Bai, Z.Y., Pang, Y., Xiang, Z., Wong, C.K., Wang, L., Lam, C.S., et al.: A capacitive-coupling winding tap injection DSTATCOM integrated with distribution transformer for balance and unbalance operations. IEEE Trans. Industr. Electron. 70(2), 1081–1093 (2023)
2. Cai, D.F.: Physics-informed distribution transformers via molecular dynamics and deep neural networks. J. Comput. Phys. 4(7), 16–34 (2022) 3. Elahi, O., Behkam, R., Gharehpetian, G.B., Mohammadi, F.: Diagnosing disk-space variation in distribution power transformer windings using group method of data handling artificial neural networks. Energies 15(23), 102–117 (2022) 4. Fereydouni, M., Rokrok, E., Taheri, S.: Design of a hybrid transformer based on an indirect matrix converter for distribution networks. Electric Power Comp. Syst. 7(10), 92–94 (2022) 5. Hong, L.C., Chen, Z.H., Wang, Y.F., Shahidehpour, M., Wu, M.H.: A novel SVM-based decision framework considering feature distribution for Power Transformer Fault Diagnosis. Energy Rep. 8(13), 9392–9401 (2022) 6. Liu, C.L., Zhao, T., Sun, Y., Wang, X.L., Cao, S.: Dynamic behaviour of a suspended bubble and its influence on the distribution of electric fields in insulating oil of an on-load tap-changer within power transformers. Int. J. Electr. Power Energy Syst. 14(7), 98 (2023) 7. Megahed, T.F., Kotb, M.F.: Improved design of LED lamp circuit to enhance distribution transformer capability based on a comparative study of various standards. Energy Rep. 8(11), 445–465 (2022) 8. Niu, B., Wu, X.T., Yu, J.Y., Wu, H., Liu, W.F.: Research on large-capacity impulse test technology for distribution transformer based on energy storage intelligent power. Energy Rep. 8(3), 275–285 (2022) 9. Vacheron, M.N., Dugravier, R., Tessier, V., Deneux-tharaux, C.: Perinatal maternal suicide: how to prevent? Encephale-Revue De Psychiatrie Clinique Biologique Et Therapeutique 48(5), 590–592 (2022) 10. Zhao, Z., Wu, Y.H., Diao, F., Lin, N., Du, X.Y., Zhao, Y., et al.: Design and demonstration of a 100 kW high-frequency matrix core transformer for more electric aircraft power distribution. IEEE Trans. Transp. Electrific. 8(4), 4279–4290 (2022)
A Decision-Making Method Based on Dynamic Programming Algorithm for Distribution Network Scheduling Hongrong Zhai1(B) , Ruifeng Zhao2 , Haobin Li2 , Longteng Wu2 , and Xunwang Chen1 1 Yantai Haiyi Software Co. Ltd., Yantai, Shandong, China
[email protected] 2 Electric Power Dispatching and Control Center, Guangdong Power Grid Co., Ltd.,
Guangzhou, Guangdong, China
Abstract. Auxiliary decision-making is the main content of distribution network scheduling, and in the process of auxiliary decision-making for distribution network scheduling, the large volume of monitoring data easily reduces the accuracy of auxiliary decisions and can even lead to the loss of monitoring data. This paper proposes a dynamic programming algorithm that makes planning adjustments for auxiliary decision-making, comprehensively identifies power grid data, and shortens the auxiliary decision-making time. The anomalous signals and power flows are then analyzed dynamically. Finally, the dynamic programming algorithm monitors the distribution network and outputs the final auxiliary decision. MATLAB simulations show that the dynamic programming algorithm can accurately analyze the distribution network status, with an abnormal decision rate of less than 10% and a decision accuracy rate greater than 90%, outperforming the static programming algorithm. Therefore, the dynamic programming algorithm can meet the auxiliary decision-making requirements of the distribution network and is suitable for power grid management.
Keywords: Distribution Network · Scheduling · Assisted Decision-making · Dynamic Planning
1 Introduction
Some scholars believe that the auxiliary decision-making of distribution network scheduling is an intelligent process of power grid management, which should monitor the transmission network and electrical components and is prone to problems such as data loss and incomplete data [1]. At present, auxiliary decision-making for distribution network scheduling often suffers from low accuracy, long decision-making times, and high interference rates [2]. Therefore, some scholars propose applying the dynamic programming method to distribution network scheduling and identifying electrical signals. However, harsh environments [3], external interference, and other conditions leave electrical equipment with significant interference problems. To this end, some scholars propose a dynamic programming algorithm to optimize the decision-making scheme
[4], identify key signals and abnormal signals in the power grid, and make dynamic decisions on abnormal signals so as to achieve effective dispatch of the power grid [5]. Based on the dynamic programming algorithm, this paper assists the decision-making of the distribution network and verifies the effectiveness of the dynamic programming method.
2 Mathematical Description of the Distribution Network
The distribution network suffers from long monitoring times and many interference sources, and feature identification of decision-making schemes plays an auxiliary role [6–8]. The distribution network mainly relies on the status data of the transmission and distribution network and electricity to make auxiliary decisions. Dynamic theory is used to identify random and regular characteristics of electrical data points, and standard verification is carried out through scheduling points. Among them, the rate of change of the eigenvalues represents the degree of assistance of the distribution network. The distribution network is defined as follows.
Definition 1: The distribution network state is $s_i$, the electrical location is $x_i$, the signal is $l_i$, the auxiliary decision is $r_i$, the decision set is $set_i$, the degree of assistance is $\theta$, and the number of analyses is $c_i$. The auxiliary decision-making process is shown in Eq. (1):
$set_i = \sum_{s_i} \frac{\overrightarrow{(x_i \cdot s_i)}}{l_i} \to \tan\theta$ (1)
Among them, the degree of auxiliary decision-making is level 4, and the auxiliary decision-making time is t.
Definition 2: The constraint function is $f(x \mid t)$, A is the constraint of the distribution network, B is the constraint of the environment, and $\varsigma$ is the disturbance rate. The calculation process of $f(x \mid t)$ is shown in Eq. (2):
$f(x \mid t) = \sum_{s_i} \frac{A \cdot (r_i \cdot x_i \wedge \theta_i)}{B} \cdot \varsigma$ (2)
3 Auxiliary Decision-Making of the Dynamic Programming Algorithm on the Distribution Network
In the process of auxiliary decision-making in distribution network scheduling, the power flow and electrical state should be quantified to reduce the complexity of decision-making. According to the dispatching principle, the data of monitoring points and power flows should be identified, and the status of the transmission and distribution network should be calculated dynamically [9]. At the same time, the regulation rate of grid transmission should be analyzed. Therefore, the feature values must be randomly extracted.
Definition 3: The abnormal judgment function is $D(da_i \mid k)$; when P = 1, a distortion value occurs. The specific calculation is shown in Eq. (3):
$D(da_i \mid k) = A \cdot \sum^{n} \frac{x_i \cdot \tan\theta}{da_k} \cdot l_i \cdot B$ (3)
If the output of $D(s_i \mid k)$ is positive, the distribution network is running stably [10]; otherwise an exception occurs.
Definition 4: The time test function is $E(x_i)$; over time t, the test result is shown in Eq. (4):
$E(x_i) = A \cdot \sum_{t=1}^{n} \frac{x_i}{v}$ (4)
where v is the change in the degree of assistance.
Definition 5: The electrical state adjustment function is $F(s_i \mid lo)$, where lo is the change of electrical state; it is calculated as shown in Eq. (5):
$F(s_i \mid lo) = C \cdot \sum_{t=1}^{n} x_i$ (5)
Definition 6: The power flow function is $D(x_i \mid s_i)$, where r is the rate of change, calculated as shown in Eq. (6):
$D(x_i \mid s_i) = C \cdot r \cdot \sum_{t=1}^{n} x_i \wedge \sin\theta$ (6)
Distribution network dispatching auxiliary decision-making should identify outliers and make sampling decisions, covering the electrical power flow of the transmission and distribution network, abnormal electrical locations, and the distribution network status. In addition, according to the electrical power flow and state, the planning algorithm and dynamic theory are used to analyze the distribution network comprehensively. At the same time, the overall decision-making of the distribution network [9] is carried out to eliminate the impact of interference on auxiliary decision-making, as follows (a minimal code sketch of the stage-wise decision follows the steps).
Step 1: The distribution network status is collected, the judgment rules for outliers are determined, and a comprehensive judgment of power flow and electrical status is carried out to determine the threshold.
Step 2: Calculate the integral according to the transmission and distribution network and power flow, judge the abnormal power flow, and finally determine the electrical state and power flow.
Step 3: Compare the different methods in terms of electrical state accuracy and adjustment rate, and output the final result.
Step 4: If all distribution networks have been traversed, terminate the decision; otherwise, repeat Steps 1–3.
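The paper never writes out the dynamic programme itself, so the following sketch only illustrates how a stage-wise dispatch decision could be memoized in the spirit of Steps 1–4. The stage_costs table, the action names, and the best_from helper are hypothetical; in practice the costs would come from the power-flow judgments of Eqs. (3)–(6).

```python
from functools import lru_cache

# Hypothetical per-stage costs of each dispatch action; in a real system
# these would be derived from the power-flow judgment functions above.
stage_costs = [
    {"local": 2.0, "overall": 3.5, "emergency": 5.0},
    {"local": 1.5, "overall": 2.0, "emergency": 4.0},
    {"local": 2.5, "overall": 1.0, "emergency": 3.0},
]

@lru_cache(maxsize=None)
def best_from(stage):
    """Cheapest dispatch plan from `stage` to the end: the classic
    dynamic-programming recursion with memoized subproblems."""
    if stage == len(stage_costs):
        return 0.0, ()
    options = []
    for action, cost in stage_costs[stage].items():
        tail_cost, tail_plan = best_from(stage + 1)
        options.append((cost + tail_cost, (action,) + tail_plan))
    return min(options)

total_cost, plan = best_from(0)
print(plan, total_cost)  # ('local', 'local', 'overall') with cost 4.5
```

Memoizing the tail decisions is what distinguishes this from the static scheme the paper compares against: each subproblem is solved once, however many times it recurs.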
4 Practical Examples of Distribution Networks
4.1 The Configuration Parameters
In order to verify the decision-making effect of the dynamic programming algorithm on distribution network scheduling, the decision-making effect on the transmission and distribution network is judged by taking the 35 kV distribution network as the research object. Among them, the distribution network status, power flow, and environment are all within the normal range, and the specific results are shown in Table 1.
Table 1. Status of the 35 kV distribution network

Parameter | Mainnet | Microgrid
Power flow | 215–315 | 95–115
Grid status | Good | Good
Planning | Dynamic | Dynamic
Complexity | Level 4 | Level 4
Interference rate | 8.21 | 9.65
Weight | 0.80–0.82 | 0.41–0.51
According to the grid parameters in Table 1, there are no significant differences between the main network and the microgrid in terms of power flow, grid status, planning, complexity, interference rate, etc., indicating that the relevant data in the distribution network can be used as analysis objects. Among them, the weight of the main network is higher and that of the microgrid is lower, indicating that the main network dominates the distribution network analysis. The distribution network status results in Table 1 are shown in Fig. 1.
Fig. 1. Distribution network status
It can be seen from Fig. 1 that the overall state of the distribution network is good, and the change range of different parameters is small, which can be analyzed later.
4.2 The Adjustment Rate and Accuracy of Distribution Network Scheduling Auxiliary Decision-Making
The adjustment rate and accuracy are essential indicators for auxiliary decision-making in distribution network scheduling, and whether the dynamic programming method can make effective decisions on the distribution network must be verified against these indicators. The specific results are shown in Table 2.

Table 2. Comparison of state adjustment rate and accuracy of different methods (unit: %)

Algorithm | Network | Parameter | Accuracy | Adjustment rate | Interference rate
Dynamic programming algorithm | Mainnet | Power flow | 93.32 ± 2.11 | 3.12 ± 1.21 | 3.12
Dynamic programming algorithm | Mainnet | Scheduling decisions | 94.13 ± 2.12 | 3.34 ± 1.43 | 3.83
Dynamic programming algorithm | Mainnet | Failure analysis | 95.26 ± 3.31 | 3.12 ± 1.33 | 3.35
Dynamic programming algorithm | Microgrid | Power flow | 83.22 ± 1.13 | 4.12 ± 1.33 | 5.06
Dynamic programming algorithm | Microgrid | Scheduling decisions | 82.23 ± 1.23 | 3.22 ± 1.31 | 4.32
Dynamic programming algorithm | Microgrid | Failure analysis | 85.31 ± 1.12 | 3.34 ± 1.23 | 4.26
Static programming method | Mainnet | Power flow | 81.12 ± 2.11 | 8.32 ± 3.01 | 3.21
Static programming method | Mainnet | Scheduling decisions | 83.16 ± 2.22 | 6.14 ± 3.03 | 3.12
Static programming method | Mainnet | Failure analysis | 85.26 ± 2.01 | 5.24 ± 3.03 | 3.15
Static programming method | Microgrid | Power flow | 83.12 ± 4.33 | 9.22 ± 1.03 | 4.22
Static programming method | Microgrid | Scheduling decisions | 82.33 ± 4.13 | 9.12 ± 1.01 | 4.21
Static programming method | Microgrid | Failure analysis | 85.16 ± 4.12 | 8.32 ± 1.23 | 4.33
Table 2 shows that the adjustment rate of the dynamic programming algorithm in the main network and microgrid stages is less than 4%, the accuracy is greater than 90%, and the interference rate is less than 6%. Static programming has a higher adjustment rate and lower accuracy, but its interference rate is consistent with that of dynamic programming. This shows that the dynamic programming algorithm achieves better accuracy and adjustment rate under the same interference. In order to verify the effectiveness of the proposed method, continuous analysis of the auxiliary decision-making of the distribution network is required, and the results are shown in Fig. 2.
Fig. 2. Comparison of adjustment rate and accuracy of different algorithms
It can be seen from Fig. 2 that, under continuous monitoring, the dynamic programming algorithm effectively improves the accuracy of auxiliary decision-making and requires fewer adjustments. In contrast, the accuracy and adjustment rate of the static programming method fluctuate significantly; the overall results are consistent with those in Table 2. The reason is that the dynamic programming method can analyze the distribution network status and power flow characteristics, simplify the decision-making process, and improve decision-making efficiency, so the overall operation status is better.
4.3 The Analysis of the Auxiliary Decision-Making Time of the Distribution Network
Decision time is the main indicator of the auxiliary decision-making effect of distribution network scheduling, including the electrical location time and the abnormal power flow identification time. The specific results are shown in Table 3.

Table 3. Auxiliary decision-making time for distribution network scheduling (unit: seconds)
Method | Parameter | Contingency: Mainnet | Contingency: Microgrid | Contingency: Transition network | Scheduling: Local | Scheduling: Overall | Scheduling: Emergency
Dynamic programming | Power flow | 31.21 ± 1.11 | 49.32 ± 3.32 | 42.22 ± 1.12 | 41.35 ± 1.41 | 31.12 ± 1.32 | 23.35 ± 1.32
Dynamic programming | Voltage | 36.15 ± 1.22 | 46.41 ± 1.25 | 46.13 ± 1.23 | 41.15 ± 2.32 | 31.25 ± 1.15 | 23.75 ± 1.12
Dynamic programming | Decision-making | 46.32 ± 1.22 | 46.23 ± 1.32 | 46.27 ± 1.35 | 42.43 ± 1.42 | 31.01 ± 1.13 | 22.32 ± 1.21
Dynamic programming | Verify | 46.31 ± 1.22 | 47.23 ± 1.42 | 49.13 ± 1.21 | 42.62 ± 1.42 | 32.22 ± 2.21 | 31.13 ± 1.53
Static programming | Power flow | 57.15 ± 5.31 | 67.33 ± 4.43 | 57.22 ± 2.12 | 63.15 ± 3.31 | 43.12 ± 3.22 | 63.62 ± 4.22
Static programming | Voltage | 57.35 ± 3.12 | 67.12 ± 4.22 | 57.33 ± 2.33 | 63.23 ± 3.52 | 43.35 ± 3.33 | 63.35 ± 4.32
Static programming | Decision-making | 59.42 ± 5.32 | 68.32 ± 4.13 | 54.87 ± 2.35 | 63.13 ± 3.15 | 43.12 ± 3.13 | 63.73 ± 4.53
Static programming | Verify | 59.32 ± 5.212 | 68.31 ± 4.12 | 58.37 ± 2.25 | 63.27 ± 3.72 | 43.82 ± 2.33 | 68.12 ± 4.61
According to Table 3, with the dynamic planning algorithm, the decision-making times for the emergency-decision main network, microgrid, and transition network, as well as the local, overall, and emergency dispatch times, are all shorter than with the static planning method. In addition, in terms of power flow distribution, voltage regulation, decision-making, and decision verification, although the time variation of both methods is relatively stable, the dynamic programming algorithm is significantly faster than the static programming method; the reason is that the dynamic programming method comprehensively processes data such as power flow and electrical status and makes responsive decisions. According to the decision-making results, different scheduling schemes are tested to verify their effectiveness. The static programming method also carries out overall planning but lacks dynamic processing steps, while the dynamic programming method can adapt its scheme to the monitoring situation through dynamic adjustment, as shown in Fig. 3.
Fig. 3. Processing time for different methods
It can be seen from Fig. 3 that the dynamic programming method takes less time and outperforms the static programming method, further verifying the results in Table 3.
5 Conclusions
Decision-making for distribution networks is often inaccurate, and static planning methods cannot mine the data effectively. In this paper, a dynamic programming algorithm is proposed: the power flow and electrical state of the distribution network are analyzed comprehensively, and abnormal power flow is identified using dynamic analysis. Through dynamic programming analysis, different dispatching decisions can be taken and, combined with the collected power flow data, the overall state of the distribution network can be determined. The results show that the adjustment rate of the dynamic programming algorithm is less than 5% and the accuracy is greater than 90%, significantly better than the static programming method. Moreover, with the dynamic programming algorithm, the auxiliary decision-making time of distribution network scheduling is shorter, and the times for voltage, power flow, decision selection, and decision verification are shorter. At the same time, the times for emergency, local, and overall scheduling are better than those of static planning methods. Therefore, the dynamic programming algorithm can meet the requirements of auxiliary decision-making for distribution network dispatching and is suitable for distribution network dispatching management.
References
1. Chang, N.A., Song, G.B., Hou, J.J., Chang, Z.X.: Fault identification method based on unified inverse-time characteristic equation for distribution network. Int. J. Electr. Power Energy Syst. 145(2), 87–102 (2023) 2. Dashtaki, A.A., Hakimi, S.M., Hasankhani, A., Derakhshani, G., Abdi, B.: Optimal management algorithm of microgrid connected to the distribution network considering renewable energy system uncertainties. Int. J. Electr. Power Energy Syst. 145(2), 319–330 (2023) 3. Nasri, A., Abdollahi, A., Rashidinejad, M.: Probabilistic–proactive distribution network scheduling against a hurricane as a high impact–low probability event considering chaos theory. IET Gener. Transm. Distrib. 15(2), 194–213 (2021) 4. Waswa, L., Chihota, M.J., Bekker, B.: A probabilistic conductor size selection framework for active distribution networks. Energies 14(19), 1–19 (2021) 5. Jaramillo, A., Saldarriaga, J.: Fractal analysis of the optimal hydraulic gradient surface in water distribution networks. J. Water Resour. Plan. Manag. 149(4), 65–80 (2022) 6. Escobar, J.W., Duque, J., Vélez, A.L., et al.: A multi-objective mathematical model for the design of a closed cycle green distribution network of mass consumption products. Int. J. Serv. Oper. Manag. 41(2), 114–141 (2022) 7. Fambri, G., Diaz-Londono, C., Mazza, A., et al.: Techno-economic analysis of power-to-gas plants in a gas and electricity distribution network system with high renewable energy penetration. Appl. Energy 312(15), 1–17 (2022) 8. Bouhouras, A.S., Kothona, D., Gkaidatzis, P.A., et al.: Distribution network energy loss reduction under EV charging schedule. Int. J. Energy Res. 46(6), 8256–8270 (2022)
9. Bhatt, P.K.: Harmonics mitigated multi-objective energy optimization in PV integrated rural distribution network using modified TLBO algorithm. Renew. Energy Focus 40(5), 13–22 (2022) 10. Samuel, A.M.R., Arulraj, M.: Performance analysis of flexible indoor and outdoor user distribution in urban multi-tier heterogeneous network. Int. J. Mob. Commun. 21(1), 119–133 (2023)
Application of Computer Vision Technology Based on Neural Network in Path Planning Jinghao Wen1(B) , Jiashun Chen2 , Jiatong Jiang3 , Zekai Bi4 , and Jintao Wei4 1 School of Computer, Central China Normal University, Wuhan 430000, Hubei, China
[email protected]
2 School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an 710000, Shaanxi,
China 3 School of Computer, Sichuan University, Chengdu 610000, Sichuan, China 4 School of Cyber Science and Engineering, Zhengzhou University, Zhengzhou 450000, Henan,
China
Abstract. Path planning (PP) is a hot topic in the field of robotics, and neural networks (NNs) and computer vision technologies are widely used in robot PP. In this paper, applications of robot PP are classified, compared, and analyzed; autonomous dynamic obstacle avoidance in an environment with dynamic obstacles is realized; and the particle swarm algorithm is used to find the global optimal path. A fuzzy optimization algorithm for the PP of vision-based robots is proposed. The algorithm is based on the rolling optimization of predictive control and expresses the system's optimization goals and constraints through membership functions, thereby realizing multi-objective optimization. Simulation experiments show that the proposed algorithm is effective. The experimental results show that, compared with traditional NN-based robot PP, the proposed approach finds the shortest path during dynamic obstacle avoidance and greatly reduces the avoidance time. Therefore, it is very valuable to study PP with NNs and computer vision technology.
Keywords: Path Planning · NN · Computer Vision Technology · Obstacle Avoidance · Target Optimization
1 Introduction
With the rapid development of robotics, robots are used more and more widely across industries, and many heavy and dangerous tasks can be left to them [1]. To better meet work requirements, their navigation, obstacle avoidance, PP, and other functions must be continuously optimized [2]. Because the movement environment of robots is complex and changeable, PP has always been an important topic in the motion of mobile robots [3, 4]. As the core content of robotics, PP has great practical significance and research value. Therefore, effectively solving the PP problem is key to improving the level of robotics.
J. Wen, J. Chen, J. Jiang, Z. Bi, J. Wei—These authors contributed equally to this work.
PP has become a hot research topic, and studying PP in different fields is particularly important. Among them, the three-dimensional PP of unmanned combat aerial vehicles (UCAVs) is a complex optimization problem that focuses on optimizing flight routes under different types of constraints in complex combat environments. On this basis, Wang GG proposed a new predator-prey pigeon-inspired optimization (PPPIO) for solving the three-dimensional PP problem of UCAVs in a dynamic environment, in which map, compass, landmark, and other operators are used to find the optimal solution. Comparative simulation results show that the PPPIO algorithm is more effective in solving the UCAV three-dimensional PP problem [5]. On the other hand, Bo Z combined the bat algorithm (BA) and differential evolution (DE) for the first time to optimize the three-dimensional PP of UCAVs, and also used a B-spline curve to better solve problems in the UCAV system. The experimental results show that, compared with the basic BA model, the improved bat algorithm (IBA) is a better technique for UCAV three-dimensional PP problems [6]. Chen Y B used relaxation variables to transform the constrained optimization problem into an unconstrained optimal solution, and then used the idea of the optimal solution of a function to transform it into an optimal control problem; the feasibility of six-degree-of-freedom quadrotor flight trajectory tracking was proved through this research [7]. However, existing PP algorithms are not yet fully effective and need improvement. The innovation of this paper is to introduce NNs into the PP of mobile robots and combine them with sensor information to build a grid map of an unknown environment.
2 NN-Based PP Method
2.1 PP Method
Global PP. Grid method: this is a method of describing PP in map form. The grid evenly divides the robot's surrounding workspace into block areas. Grids take two forms: the "free grid", used where there are no obstacles, and the "obstacle grid", its opposite. The grid method uses a map composed of these two kinds of grid to describe the robot's working environment [8] (a small occupancy-grid sketch follows at the end of this subsection).
Topology method: when the surrounding space is large and the obstacles are small, this method reflects the environment well. The robot's working environment is a graph composed of nodes and the lines connecting them: a node represents a key point in the environment, while a connection between nodes represents a movement of the robot.
Local PP. Fuzzy logic algorithm: this method is an abstract simulation of the human driving feel. When precisely computed environmental information is unavailable, the environmental information is first fuzzified, and the planned route is then obtained from the corresponding fuzzy rule table, achieving planning from the start point to the destination [9].
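As a concrete illustration of the grid method, the sketch below builds a small occupancy grid of free and obstacle cells and enumerates the free moves a planner could expand. The grid size, obstacle positions, and 4-connected neighbourhood are illustrative assumptions, not values from the paper.

```python
FREE, OBSTACLE = 0, 1

def make_grid(rows, cols, obstacles):
    """Build a rows x cols grid map; `obstacles` is a set of (r, c) cells."""
    return [[OBSTACLE if (r, c) in obstacles else FREE for c in range(cols)]
            for r in range(rows)]

def neighbours(grid, r, c):
    """4-connected free neighbours: the moves a planner may expand."""
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == FREE:
            yield nr, nc

grid = make_grid(5, 5, {(1, 1), (1, 2), (3, 3)})
print(list(neighbours(grid, 2, 2)))  # (1, 2) is excluded: it is an obstacle cell
```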
2.2 Computer Vision
Computer vision technology is an emerging discipline of artificial intelligence [10, 11]. It can transform real-world data into digital information, and computer vision technology deeply processes that digital information, using algorithms to simulate how the human eye processes and recognizes information, which greatly improves production efficiency [12]. Computer vision has long been used in medicine, the military, transportation, etc., but because the construction environment differs from the traditional industrial production environment (one-off engineering "products" and complex, changeable conditions), its application in PP is very difficult [13]. In short, computer vision technology is a way of perceiving the real world: it lets computers use data to perceive the surrounding environment and its changes (for example, path obstacle avoidance).
2.3 Model Construction of the Mobile Robot
As a simulation of the energy storm mobile robot, this paper defines the mobile robot model shown in Fig. 1: the middle block represents the robot's main body, and the left and right blocks represent its two driving wheels. The mobile robot uses the force obtained by the driving wheels to control its running speed and direction. Since this paper studies the motion and control of mobile robots in unknown environments, several sensors are fitted to achieve real-time perception of the environment. Five sonar sensors are placed at the front of the defined robot model, evenly distributed from −π/2 to +π/2; obstacles outside this range do not affect the robot's movement and are not considered. Each sensor is described by a line segment whose length represents the perceived distance (defined as 50 in the simulation test), and the small circle in the middle marks the robot's center of gravity. The robot's motion is limited to 2D, obstacles in the space are described by linearly connected polygons, and the mobile robot is assumed to move in a predetermined workspace. To make the optimization process easy to observe, the robot model is shown for a certain number of evolutionary generations (one with a black solid line, the rest with light gray dashed lines). Once a path evolves that meets the performance index, only the planned or searched path needs to be shown, without the robot model, because the goal of this paper is not to observe the robot's motion control but to find a smooth path from the start point to the target point.

Fig. 1. Mobile robot model

2.4 NN Motion Control
A multi-layer feedforward NN with a single hidden layer is used [14]. The overall structure therefore has three layers: input, hidden, and output. The input layer uses nine neurons, namely the obstacle distances detected by the four sensors (i.e., the sensor return values), the distances between the four sensor terminals and the target point, and the angle between the target point and the current direction of movement. The hidden layer has 20 neurons, and the output layer contains 2 neurons whose outputs drive the rotation of the left and right wheels. The operating cycle of the NN motion controller is shown in Fig. 2. On this basis, the NN learns from and analyzes the sensors so that the robot can effectively avoid obstacles in each operating cycle [15]. Since the target distance and angle information are also part of the input, the robot keeps moving toward the target until it reaches the target position. The unipolar S-shaped function and its derivative are continuous and easy to handle, so all neuron excitation functions are unipolar S-shaped functions, defined as follows:
$f(x) = \frac{1}{1 + e^{-x}}$ (1)
Among them, for each neuron a, the net input is obtained as:
$N_a = \sum_{b=1}^{N} w_{ba} x_b + \theta_a$ (2)
In the above formula, $N_a$ represents the net input of neuron a, $x_b$ is the b-th input associated with neuron a, $w_{ba}$ is the corresponding input connection weight, and $\theta_a$ represents the threshold value of neuron a.
Fig. 2. NN motion control diagram (sensor data and robot pretreatment feed the input of the robot NN controller, which outputs the motion command)
Assume that the position coordinates of the mobile robot at time $t_a$ (the current time) in the workspace are $(x_a, y_a)$; the NN motion controller then determines the next action. The robot's position in the workspace at the next moment is:
$(x_{a+1}, y_{a+1}) = (x_a + v_{a+1} \cdot (-\sin\theta_{a+1}),\ y_a + v_{a+1} \cdot \cos\theta_{a+1})$ (3)
At the same time, the position information of the mobile robot at various points in time is also stored in a vector array, and then displayed by a red rectangular block, so that the trajectory of the robot can be obtained [16, 17].
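Equations (1)–(3) amount to a small forward pass plus a kinematic update, put together in the sketch below. The 9-20-2 layer sizes follow the text, but the random weights, the example sensor vector, and the speed value are placeholders: real weights would come from training, which is not reproduced here.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Eq. (1): the unipolar S-shaped activation."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, thresholds):
    """Eq. (2): N_a = sum_b w_ba * x_b + theta_a, followed by the activation."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + th)
            for row, th in zip(weights, thresholds)]

# 9 inputs -> 20 hidden -> 2 outputs, as described above; weights are
# placeholders for whatever training would produce.
W1 = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(20)]
T1 = [0.0] * 20
W2 = [[random.uniform(-1, 1) for _ in range(20)] for _ in range(2)]
T2 = [0.0] * 2

def controller(sensor_vector):
    """Map the nine sensor/target inputs to the two wheel driving forces."""
    return layer(layer(sensor_vector, W1, T1), W2, T2)

def step(x, y, v_next, theta_next):
    """Eq. (3): position update for the next control cycle."""
    return (x + v_next * (-math.sin(theta_next)),
            y + v_next * math.cos(theta_next))

left, right = controller([0.5] * 9)          # placeholder sensor reading
print(left, right, step(0.0, 0.0, 0.4, 0.1))
```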
3 Experimental Data Set Construction
First, 1000 grid maps of size 25 × 25 are randomly generated and initialized as blank, obstacle-free areas. In each map, a number of obstacles are randomly generated whose sizes do not exceed the map size. In the obstacle-free areas, two points far apart are randomly selected as the start point and target point. A particle swarm algorithm with heuristic and known cost information is then used to solve a new path, producing more than 1,000 paths on the existing grid maps. In addition, the classic particle swarm algorithm can treat each point on a path as a new starting point and thus produce multiple individual particle swarm paths [18].
In this way, in the initial 1000 paths generated, each node can be regarded as a starting point, and the corresponding input vectors and output vectors can be obtained through the corresponding obstacles and the relevant information of the target, thereby obtaining more training data. Through this step, 54,357 different path results can be obtained, and these data can provide better learning and guidance for future fuzzy NNs.
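The data-augmentation idea above, treating every node of a solved path as a fresh start, can be sketched as follows. The map generator and the feature encoding are simplified assumptions; the particle swarm planner that actually solves the paths is not reproduced here.

```python
import random

random.seed(1)

def random_map(size=25, n_obstacles=8):
    """One 25 x 25 grid with a few random rectangular obstacles (Section 3)."""
    grid = [[0] * size for _ in range(size)]
    for _ in range(n_obstacles):
        r, c = random.randrange(size - 3), random.randrange(size - 3)
        h, w = random.randint(1, 3), random.randint(1, 3)
        for rr in range(r, r + h):
            for cc in range(c, c + w):
                grid[rr][cc] = 1
    return grid

def training_pairs(path, goal):
    """Pair each path node (treated as a fresh start) with the next waypoint,
    so one solved path yields many input/output training examples."""
    return [((path[i], goal), path[i + 1]) for i in range(len(path) - 1)]

grid = random_map()
demo_path = [(0, 0), (1, 1), (2, 2), (3, 3)]        # placeholder solved path
print(len(training_pairs(demo_path, goal=(3, 3))))   # -> 3 pairs from one path
```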
4 PP Results and Discussion
4.1 Comparison of Paths in Different Environments
In order to verify the effectiveness of the algorithm, this paper designs a 20 × 20 map environment with narrow obstacles for the simulation tests. The original environment is expressed in a circular coordinate system, and the shaded regions are polygonal obstacles. On this basis, the particle swarm algorithm and the genetic algorithm are used as experimental controls. The comparison results are shown in Fig. 3 (the red line represents the robot avoidance path under this paper's algorithm, the blue line the avoidance path under the genetic algorithm, and the black line the avoidance path under the particle swarm algorithm).
Fig. 3. Comparison of paths of different algorithms
Experiments prove that all three methods can obtain a route from the start point to the end point that avoids obstacles. The routes generated by the particle swarm algorithm and the genetic algorithm are the shortest but contain sharp corners, and some path points do not account for the robot's own size, increasing friction between the robot and obstacles. The algorithm of this paper (PSO-NN) achieves obstacle avoidance and, by considering the distance between the robot's body and the obstacles, makes the trajectory more stable, avoiding excessive energy loss from steering and meeting the path requirements of robots in actual engineering. The algorithm proposed in this paper aims to search efficiently for a reasonable path in a complex map environment. Although the genetic algorithm and particle swarm algorithm mentioned above only consider the shortest path length, they also address efficient path search on large raster maps. Table 1 presents the experimental results of this paper's algorithm compared with the other two algorithms.

Table 1. Path search results for different algorithms

Name | Map size | Running time | SA proportional
Algorithm of this article | 800 * 800 | 4.2 | 3.69
Genetic algorithm | Multiple | 8.3 | 35.67
Particle swarm algorithm | Multiple | 14.35 | 286.45
Based on the above, the comparison of the data in the table shows that this paper's algorithm is much smaller than the other two algorithms in terms of SA ratio and time consumption. Therefore, compared with existing algorithms for similar problems, the current algorithm significantly improves time efficiency and computational efficiency.
4.2 Effectiveness of the Algorithm
The purpose of the simulation test is to investigate whether the proposed algorithm can avoid dynamically known obstacles. The experimental design is as follows: the robot's moving speed is set to V = 0.4 m/s, and two dynamic obstacles move at Vx = 0.2 m/s and Vy = 0.2 m/s from the minimum boundary of the X-axis and Y-axis toward the maximum boundary, respectively, stopping at the boundary, while the other obstacles remain still; 40 experiments were carried out under the same operating conditions. Figure 4 shows how the fitness value of the algorithm changes with the number of iterations. The average path length over 30 runs is 36.2325, similar to the data of multiple experiments.

Fig. 4. Dynamic PP effect (path length vs. number of iterations)

The experimental results show that, comparing the planning results of the combined NN and particle swarm algorithm, PSO-NN can better use NNs to detect surrounding dynamic obstacles such as rectangles and circles, making the planned trajectory smoother, which proves the correctness and stability of PSO-NN in a dynamic environment.
5 Conclusion
Based on the practical application of robots and the current research background and status, various existing algorithms at home and abroad have been studied in depth, and their advantages and disadvantages compared for different application scenarios. By combining NNs with the particle swarm algorithm, smooth collision-free paths can be planned quickly in both static and dynamic environments. However, future research must consider not only obstacles but also various feature points and key points. With the continuous development of computer vision and artificial intelligence technology, image information understanding, text scene recognition, and differential information comparison can be applied to intelligent understanding and recognition. However, due to the limitations of the research conditions, some shortcomings remain to be studied and improved, mainly in the following two aspects: (1) There are still some limitations in dealing with modeling problems. At present, the algorithm in this paper can
only deal with the path planning problem under the two-dimensional plane, but it has not reached the requirements of robot path planning in the three-dimensional environment. (2) Lack of a processing mechanism to deal with emergencies, and further research is needed to carry out real-time and efficient path planning for the robot in the unknown working environment space.
References
1. Shorakaei, H., Vahdani, M., Imani, B., et al.: Optimal cooperative path planning of unmanned aerial vehicles by a parallel genetic algorithm. Robotica 34(4), 823–836 (2016) 2. Zhou, Y., Wang, R.: An improved flower pollination algorithm for optimal unmanned undersea vehicle path planning problem. Int. J. Pattern Recognit. Artif. Intell. 30(4), 1659010.1–1659010.27 (2016) 3. Subramani, D.N.: Energy optimal path planning using stochastic dynamically orthogonal level set equations. Deep-Sea Res. Part II 56(3), 68–86 (2017) 4. Kovacs, B., Szayer, G., Tajti, F., et al.: A novel potential field method for path planning of mobile robots by adapting animal motion attributes. Robot. Auton. Syst. 82(C), 24–34 (2016) 5. Wang, G.G., Chu, H.C.E., Mirjalili, S.: Three-dimensional path planning for UCAV using an improved bat algorithm. Aerosp. Sci. Technol. 49(FEB), 231–238 (2016) 6. Bo, Z., Duan, H.: Three-dimensional path planning for uninhabited combat aerial vehicle based on predator-prey pigeon-inspired optimization in dynamic environment. IEEE/ACM Trans. Comput. Biol. Bioinf. 14(1), 97–107 (2017) 7. Chen, Y.B., Luo, G.C., Mei, Y.S., et al.: UAV path planning using artificial potential field method updated by optimal control theory. Int. J. Syst. Sci. 47(6), 1407–1420 (2016) 8. Song, B., Wang, Z., Li, S.: A new genetic algorithm approach to smooth path planning for mobile robots. Assem. Autom. 36(2), 138–145 (2016) 9. Conesa-Munoz, J., Pajares, G., Ribeiro, A.: Mix-opt: a new route operator for optimal coverage path planning for a fleet in an agricultural environment. Expert Syst. Appl. 54(Jul), 364–378 (2016) 10. Greene, R.L., Lu, M.L., Barim, M.S., et al.: Estimating trunk angle kinematics during lifting using a computationally efficient computer vision method. Hum. Factors: J. Hum. Factors Ergon. Soc. 64(3), 482–498 (2022) 11. Bjerge, K., Mann, H., Hye, T.T.: Real-time insect tracking and monitoring with computer vision and deep learning. Remote Sens. Ecol. Conserv. 8(3), 315–327 (2022) 12. Li, H., Spencer, B.F., Bae, H., et al.: Deep super resolution crack network (SrcNet) for improving computer vision–based automated crack detectability in in situ bridges. Struct. Health Monit. 20(4), 1428–1442 (2021) 13. Noreen, K., Umar, M.: Computer vision syndrome (CVS) and its associated risk factors among undergraduate medical students in midst of COVID-19. Pak. J. Ophthalmol. 37(1), 102–108 (2021) 14. Wang, L., Xu, R., Yu, F.: Genetic nelder-mead neural network algorithm for fault parameter inversion using GPS data. Geodesy Geodyn. 13(4), 386–398 (2022) 15. Aslam, M., Munir, E.U., Rafique, M.M., et al.: Adaptive energy-efficient clustering path planning routing protocols for heterogeneous wireless sensor networks. Sustain. Comput. Inform. Syst. 12(DEC), 57–71 (2016) 16. Wu, K., Esfahani, M.A., Yuan, S., et al.: TDPP-Net: achieving three-dimensional path planning via a deep NN architecture. Neurocomputing 357, 151–162 (2019)
17. Li, K., Yuan, C., Wang, J., et al.: Four-direction search scheme of path planning for mobile agents. Robotica 38(3), 531–540 (2020) 18. Sun, L.: A real-time collision-free path planning of a rust removal robot using an improved neural network. J. Shanghai Jiaotong Univ. (Sci.) 22(005), 633–640 (2017)
A Comprehensive Evaluation Method of Computer Algorithm and Network Flow Techniques Zhiwei Huang(B) Computer and Information Engineering College, Wuhan Railway Vocational College of Technology, Wuhan, Hubei, China [email protected]
Abstract. Network flow problems typically involve the best path and the maximum flow, which are now indispensable components of network flow theory. A maximum-flow subroutine in network flow theory can be used to solve many practical problems, such as transportation engineering, network communications, and the relevance between computer algorithms and techniques. This paper mainly studies the relevance between computer algorithms and techniques to prevent bad nodes in the network flow. In the process of selecting and optimizing nodes, the network optimization algorithm based on the comprehensive evaluation method takes roughly half the time of the Ford-Fulkerson algorithm. This shows that the network optimization algorithm offers clear advantages for the comprehensive evaluation method: it reduces the number of augmenting-chain selections in the analysis and improves the stability of the comparison process, thereby decreasing the time required and accelerating the calculation.
Keywords: Distribution Network · Scheduling · Assisted Decision-making · Dynamic Planning
1 Introduction
Along with the progress and development of human society, the comprehensive evaluation system has gradually matured into a scientific and deliberate evaluation method [1]. Because of its wide practical applications, it is used continuously in matters closely related to human life and has received extensive attention from researchers [2, 3]. Network flow theory is a core theory of computer networks and communication. Its main parameters are the best path and the maximum flow, which are also among the main objectives of network optimization [4–6]. In real life, the maximum flow in network flow optimization theory can be used to solve many practical problems, such as transportation engineering, network communications, and the relevance between computer algorithms and techniques. New flow-related terms, such as traffic flow and information flow, keep emerging [7]. Thus maximum flow can not only provide effective ways to solve real
218
Z. Huang
life problems, but also applies strict mathematical formula to get the optimized mathematical model to enable the complex comprehensive evaluation, which also reflects the significance and application of the network flow optimization theory.
2 Comprehensive Evaluation Method

In a comprehensive evaluation system, the evaluation result is produced by applying the evaluation model to the calculated indexes of the objectively existing object being evaluated, and the result can take the form of a specific class, a rank order, or a numerical value [8]. The evaluation subject can also use the result and the calculated indexes to verify whether the model is reasonable. The key elements of the evaluation system include module goals, evaluation criteria, the evaluation system, the evaluation mode, the evaluation result, the evaluation subject, and the evaluation objects. Their relationship is shown in Fig. 1.
Fig. 1. Key elements of the evaluation system (module goals, evaluation criterion, evaluation objects, evaluation system, evaluation subject, evaluation mode, evaluation result)
The process of comprehensive evaluation is a process of understanding objective things, in which the evaluation subject generalizes the objective existence. After years of development, this process mainly consists of choosing evaluation criteria, fixing the module goals, establishing the evaluation system, determining index weights, choosing the evaluation mode, calculating the evaluation result, and completing the evaluation report.

Step 1. Fix the module goals. This is the starting point and basis of the whole comprehensive evaluation process. Its quality directly affects the goals of the evaluation process and the choice of evaluation model and evaluation indexes.

Step 2. Choose the evaluation criteria. The criterion of the comprehensive evaluation system is mainly a standard of utility. According to the preference of the subject, several evaluation criteria can be set.
Step 3. Establish the evaluation system. The evaluation indexes are the specific, operable statistical indicators under each evaluation system; together they form a structure from which those suitable for quantitative analysis are chosen.

Step 4. Determine the weights of the comprehensive evaluation. The index weights of the evaluation system are determined by differences in the amount of information among the indexes, their influence on the target, and the evaluation model used.

Step 5. Choose the evaluation mode. The evaluation model of the comprehensive evaluation system is selected according to the purpose of the evaluation and an assessment of the candidate models.

Step 6. Calculate the evaluation result. The results of the comprehensive evaluation system are mainly obtained by synthesizing the evaluation indexes with the evaluation model.

Step 7. Test the results. When testing whether the results of the comprehensive evaluation are reasonable, the results should be fed back into the model selection, the index weights, the structure among the indexes, the index values, and so on, until reasonable and scientific evaluation results appear.

Step 8. Complete the evaluation report. This step mainly analyzes, releases, and presents the results of the comprehensive evaluation system.
Fig. 2. The process of the comprehensive evaluation system (determine the evaluation target; select criteria; establish the indicator system; determine evaluation index weights; fuzzy hierarchy evaluation; calculate the evaluation results; inspect the results; complete the evaluation report)
The above eight steps of the comprehensive evaluation system are not strictly separated but mutually influence one another. The comprehensive evaluation can be treated as a complete system, because the evaluation itself is determined by the evaluation standard; the dynamic, mutually influencing process of the steps is shown in Fig. 2.
3 Network Flow Algorithm

Definition 1: Density and attenuation coefficients of data itemsets. For each itemset I contained in a transaction that arrives at time $t_c$, its density value $D(I, t)$ is a variable over time t. It is defined as

$$D(I, t) = \begin{cases} \delta(I, 0), & t = 0 \\ D(I, t-1) \cdot \lambda + \delta(I, t), & \text{otherwise} \end{cases} \quad (1)$$
Here $\delta(I, t) = 1$ if the itemset I is contained in the data item $a(t)$ received at time t, and $\delta(I, t) = 0$ otherwise; $\lambda$ is a constant ($0 < \lambda < 1$) called the attenuation coefficient. From Definition 1 we can derive three lemmas.

Lemma 1. For any data itemset I, its density value satisfies $D(I, t) < \frac{1}{1-\lambda}$.

Proof: Even if itemset I appears at every moment, its density value at time t is

$$1 + \lambda + \lambda^2 + \cdots + \lambda^{t-1} = \frac{1-\lambda^t}{1-\lambda} < \frac{1}{1-\lambda}.$$

Proof completed.

Suppose $t > t_s$, where $t_s$ is the last time before t at which the data itemset I was received. Obviously,
$$D(I, t) = D(I, t_s) \cdot \lambda^{t - t_s}. \quad (2)$$
The density of data items changes constantly. However, it is not necessary to update the density values of all data records at each time step. Instead, the density of a data item needs to be updated only when new data containing it is received from the data stream; for each data item, the time at which the latest matching record was received must be stored, so that the density can be updated when a new identical data item arrives.

Lemma 2. Assume the data stream receives a transaction containing the data itemset I at time $t_a$, and that the last time I appeared in the data stream before that is $t_c$. Then, consistent with Definition 1, the density of I can be updated by

$$D(I, t_a) = D(I, t_c) \cdot \lambda^{t_a - t_c} + 1. \quad (3)$$

Therefore, it is not necessary to modify the densities of all nodes at every time step; the formula above is applied only when a new identical itemset arrives, and for this the value of $t_c$ must be recorded.
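To make this lazy-update scheme concrete, the sketch below (a minimal Python illustration of Eqs. (2) and (3); the dictionary store and variable names are our own assumptions, not part of the paper) records each itemset's last density and last update time, and decays the density only on arrival or on access.

```python
# Minimal sketch of the lazy density update of Eq. (3), assuming a
# dictionary that stores (last_density, last_update_time) per itemset.

LAMBDA = 0.9  # attenuation coefficient, 0 < lambda < 1

store = {}  # itemset (frozenset) -> (density, last_time)

def receive(itemset, t):
    """Update the density of `itemset` when it arrives at time t (Eq. (3))."""
    if itemset in store:
        d, t_c = store[itemset]
        d = d * LAMBDA ** (t - t_c) + 1.0
    else:
        d = 1.0  # delta(I, t) contribution on first sight
    store[itemset] = (d, t)

def density(itemset, t):
    """Current density at time t, decayed since the last update (Eq. (2))."""
    if itemset not in store:
        return 0.0
    d, t_s = store[itemset]
    return d * LAMBDA ** (t - t_s)

receive(frozenset({"a", "b"}), t=0)
receive(frozenset({"a", "b"}), t=3)
print(density(frozenset({"a", "b"}), t=5))  # bounded by 1/(1 - LAMBDA)
```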
Definition 2: Frequent Data Itemsets and Closed Frequent Data Itemsets.

Let S be the support threshold, $S \in (0, 1)$, given by the user. If the data itemset I satisfies $D(I, t) \ge \frac{S}{1-\lambda}$ at time t, then I is called a frequent data itemset. If for a frequent data itemset I, any $I' \supset I$ must have $D(I', t) < D(I, t)$, then I is called a closed frequent data itemset. Let $\varepsilon$ be the error threshold given by the user. We need to mine all closed frequent data itemsets I in the stream data that satisfy $D(I, t) \ge \frac{S}{1-\lambda} - \varepsilon$.

As time goes on, the number of candidate frequent itemsets in the stream data increases. Assuming $\varepsilon$ is the error threshold given by the user, we call the itemsets whose density is less than $D_l$ sparse itemsets. To save storage space, some sparse itemsets should be deleted. However, if the density of an itemset is less than $D_l$, is that necessarily because it appears few times? Not necessarily. A density below $D_l$ may have two causes: either the itemset has indeed occurred rarely, or it has occurred often but, with the passage of time, its density coefficient has decayed and its density has become small. We call the former an isolated sparse data item and the latter a metamorphosed sparse data item. In stream data, older data should have less impact on the current mining results, because mining closed itemsets is always more interested in newer data. Therefore, metamorphosed sparse data items are the sparse data items that should be retained. Many sparse data items may arise during processing, so the isolated sparse data items should be deleted to reduce the system's workload, while the metamorphosed sparse data items should be retained to preserve the historical information of the data stream. The question is then how to distinguish these two kinds of sparse itemsets. To this end, we define a density threshold function Dmin(tg, tc) as the criterion for judging whether an itemset can be deleted.

Network flow theory is now an indispensable component of network optimization. The maximum-flow subroutine of network flow theory can be used to solve many practical problems, such as transportation engineering, network communications, and the relevance between computer algorithms and techniques. In the network flow algorithm, source $s_1$ randomly generates data $d_1(k)$, $k = 1, 2, \ldots, \alpha$, and each $d_1(k)$ must be transmitted through path $P_1$ to sink t at the corresponding time; source $s_2$ randomly generates data $d_2(j)$, which must be transmitted from $s_2$ to t through path $P_2$ at the corresponding time. However, $P_1$ and $P_2$ may repeatedly use the same links, in which case the waiting time is not necessarily zero. If $d_1$ arrives at the node first, then its waiting time must be 0. In the network flow model, with w denoting data that precede $d_1$, the waiting time $t_{w1}(1)$ is obtained from

$$t_{w1}(1) = \begin{cases} 0, & \text{if } t_{a1}(1) \ge t_1(w) \\ t_1(w) - t_{a1}(1), & \text{otherwise} \end{cases} \quad (4)$$

Similarly, if $d_2(1)$ is the first data at the node, then $tt_{w1}(1) = 0$; otherwise, with $w = 1, 2, \ldots, \alpha$ denoting the data before $d_2$ in the network flow algorithm, $tt_{w1}(1)$ is as follows:

$$tt_{w1}(1) = \begin{cases} 0, & \text{if } tt_{a1}(1) \ge tt_1(w) \\ tt_1(w) - tt_{a1}(1), & \text{otherwise} \end{cases} \quad (5)$$

where $t_1(k)$ and $t_2(j)$ denote the intervals of $d_1$ and $d_2$, respectively.
Let $t_s(k)$ denote the departure time of $d_1(k)$, $t_u(k)$ the time at which $d_1(k)$ passes node u, and $t_a(k)$ the time at which $d_1(k)$ arrives at t. Based on the above analysis, waiting time in network flow circulation can be divided into two types: one arises within the same branch of the network flow, and the other between different branches. From $t_{w0}(k)$, $t_{w2}(k)$, $tt_{w0}(j)$ and $tt_{wq}(j)$, in which $k = 2, 3, \ldots, \alpha$, $j = 2, 3, \ldots, \beta$, and $q = 2, 3$, one obtains the competition for network flow priority caused by the shared links of the same branch. The result is given by

$$tt_{wq}(j) = \begin{cases} 0, & \text{if } tt_{aq}(j) \ge tt_s(j-1) \\ tt_s(j-1) - tt_{aq}(j), & \text{otherwise} \end{cases} \quad (6)$$

From $tt_{wq}(j)$, with $k = 2, 3, \ldots, \alpha$, $j = 2, 3, \ldots, \beta$, and $q = 2, 3$, comes the right of priority arising from the competition of different data sources on a common line. The paths optimized by the network flow algorithm can be divided into two types. In the network flow model, if $d_1(k-1)$ is the data immediately preceding $d_1(k)$, then $t_{w1}(k)$, $k = 2, 3, \ldots, \alpha$, follows from

$$t_{w1}(k) = \begin{cases} 0, & \text{if } t_{a1}(k) \ge t_s(k-1) \\ t_1(k-1) - t_{a1}(k), & \text{otherwise} \end{cases} \quad (7)$$

In the network flow model, if $d_2(v)$ is the data before the network node, then $t_{w2}(k)$, $k = 2, 3, \ldots, \alpha$, follows from

$$t_{w2}(k) = \begin{cases} 0, & \text{if } t_{a1}(k) \ge tt_s(v) \\ tt_1(v) - t_{a1}(k), & \text{otherwise} \end{cases} \quad (8)$$

In the same way, if $d_2(j-1)$ is the data before the network node, then $tt_{w1}(j)$, $j = 2, 3, \ldots, \beta$, follows from

$$tt_{w1}(j) = \begin{cases} 0, & \text{if } tt_{a1}(j) \ge tt_s(j-1) \\ tt_1(j-1) - tt_{a1}(j), & \text{otherwise} \end{cases} \quad (9)$$

In the network flow model, if $d_2(j)$ is the front data, then for $d_1(vv)$, $vv = 2, 3, \ldots, \alpha$, the quantity $tt_{w1}(j)$, $j = 2, 3, \ldots, \beta$, follows from

$$tt_{w1}(j) = \begin{cases} 0, & \text{if } tt_{a1}(j) \ge tt_s(v) \\ tt_1(vv) - tt_{a1}(j), & \text{otherwise} \end{cases} \quad (10)$$

Considering the time constraints, it is also required that $t_a(k)$ and $tt_a(j)$ not exceed the allowed time when the corresponding network flow passes the node. That is,

$$t_a(k) \le A(k) + T_{th1}(k), \quad k = 2, 3, \ldots, \alpha \quad (11)$$

$$tt_a(j) \le B(j) + T_{th2}(j), \quad j = 2, 3, \ldots, \beta \quad (12)$$

$T_{th1}(k)$ and $T_{th2}(j)$ are related to the sizes of the data $d_1(k)$ and $d_2(j)$, respectively. If $T_{th1}(k)$ and $T_{th2}(j)$ depend linearly on $d_1(k)$ and $d_2(j)$, they are given by

$$T_{th1}(k) = a \times d_1(k) + b \quad (13)$$
$$T_{th2}(j) = c \times d_2(j) + d \quad (14)$$
The above a, b, c, and d are parameters set during the calculation of the network flow model. If the data sizes follow a discrete distribution, then taking $d_1(k)$, $k = 1, 2, \ldots, \alpha$ as an example, the size of $d_1(k)$ is chosen from $dd_i$, $i = 1, 2, \ldots, nu + 1$, with the interval probability of each $dd_i$ being $[Pr_0, Pr_1]$ or $[Pr_{i-1}, Pr_i]$, $i = 2, 3, \ldots, nu + 1$. Then $d_1(k)$, $k = 1, 2, \ldots, \alpha$ can be obtained by the following algorithm.

Step 1. Initialize the parameter k.
Step 2. Distribute the values according to the interval probabilities.
Step 3. Increment the parameter k.
Step 4. Calculate the adjacent $d_1(k)$.

Based on the same procedure, the result for $d_2(j)$ can be obtained. According to the characteristics of the network flow model, considering the transmission between two nodes, the network flow data generated at the entrance can flow out from the pointed-to nodes, and the flow capacity of the network cannot be negative. Let $(d_1(k), T_{th1}(k), P_1) - X$ represent the unit network flow that is successfully transferred as $d_1(k)$ through $P_1$, $k = 1, 2, \ldots$, within $T_{th1}(k)$, and verify that each datum satisfies the basic conditions of network flow.
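A minimal sketch of this four-step sampling procedure follows (in Python; the size values $dd_i$ and cumulative probability breakpoints $Pr_i$ below are illustrative assumptions, not the paper's data).

```python
import random

# Hypothetical size values dd_i and cumulative probability breakpoints Pr_i;
# size dd[i-1] is drawn when the random number falls in (Pr[i-1], Pr[i]].
dd = [10, 20, 40, 80]            # candidate data sizes (illustrative)
Pr = [0.0, 0.4, 0.7, 0.9, 1.0]   # cumulative interval probabilities

def sample_d1(alpha):
    """Steps 1-4: initialize k, distribute values, increment k, collect d1(k)."""
    d1 = []
    for _ in range(alpha):       # Step 1 / Step 3: k runs from 1 to alpha
        u = random.random()      # Step 2: draw a uniform value in [0, 1)
        for i in range(1, len(Pr)):
            if u <= Pr[i]:
                d1.append(dd[i - 1])   # Step 4: record the sampled size
                break
    return d1

print(sample_d1(5))  # e.g. [20, 10, 40, 10, 80]
```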
4 Results of Network Flow Algorithm

In order to prevent bad node data from appearing in the network flow, monotonic use of relevance is assumed. According to the irreversibility hypothesis of the attack, once the initiator obtains the effective information, it will not obtain it again [10]. Thus, in the network flow algorithm with the comprehensive evaluation method, the third line of the algorithm specifically checks whether a newly added node has already appeared in the previous section, which avoids using the same relevance twice in one path. In order to verify the effectiveness of the comprehensive evaluation method for the network flow algorithm, it is compared with the Ford-Fulkerson algorithm in MATLAB simulations. The results of the network flow algorithm under the comprehensive evaluation method and of the Ford-Fulkerson algorithm are compared as the number of nodes in the simulated network grows from 200 to 2000. Under the different simulation algorithms, each node has 100 adjacent nodes, and the connection probability between any two points in the network flow is set to 0.0005. Each configuration was run 15 times. The experimental results are listed in Table 1 and Table 2 below.
Table 1. Results of Ford-Fulkerson algorithm

Times            | 200  | 400  | 600  | 800  | 1000 | 1200 | 1400 | 1600
1                | 0.49 | 0.52 | 2.45 | 1.72 | 3.15 | 7.24 | 9.73 | 20.63
2                | 0.49 | 0.53 | 2.41 | 1.67 | 3.13 | 7.15 | 9.76 | 20.60
3                | 0.43 | 0.55 | 2.43 | 1.71 | 3.14 | 7.20 | 9.73 | 20.54
4                | 0.46 | 0.53 | 2.42 | 1.72 | 3.20 | 7.13 | 9.77 | 20.65
5                | 0.45 | 0.53 | 2.41 | 1.66 | 3.21 | 7.30 | 9.62 | 20.63
6                | 0.46 | 0.55 | 2.36 | 1.73 | 3.14 | 7.26 | 9.80 | 20.82
7                | 0.46 | 0.58 | 2.41 | 1.73 | 3.24 | 7.22 | 9.82 | 20.63
8                | 0.46 | 0.56 | 2.41 | 1.76 | 3.24 | 7.36 | 9.71 | 20.85
9                | 0.45 | 0.55 | 2.40 | 1.75 | 3.22 | 7.29 | 9.69 | 20.59
10               | 0.47 | 0.54 | 2.44 | 1.71 | 3.22 | 7.22 | 9.66 | 20.71
Average time (s) | 0.46 | 0.54 | 2.41 | 1.73 | 3.18 | 7.42 | 9.73 | 20.67
Maximum flow     | 3190 | 2753 | 2141 | 2973 | 2578 | 2605 | 3807 | 3543
Table 2. Results of network flow algorithm

Times            | 200  | 400  | 600  | 800  | 1000 | 1200 | 1400 | 1600
1                | 0.11 | 0.41 | 0.76 | 1.23 | 1.56 | 3.01 | 5.02 | 8.30
2                | 0.11 | 0.42 | 0.77 | 1.26 | 1.54 | 3.02 | 5.01 | 8.31
3                | 0.11 | 0.41 | 0.77 | 1.24 | 1.53 | 3.03 | 5.02 | 8.34
4                | 0.11 | 0.40 | 0.77 | 1.22 | 1.54 | 3.02 | 4.99 | 8.28
5                | 0.11 | 0.41 | 0.75 | 1.24 | 1.54 | 3.01 | 5.04 | 8.31
6                | 0.11 | 0.41 | 0.78 | 1.25 | 1.55 | 2.99 | 4.99 | 8.35
7                | 0.11 | 0.39 | 0.77 | 1.25 | 1.55 | 3.01 | 4.99 | 8.31
8                | 0.11 | 0.39 | 0.77 | 1.23 | 1.55 | 3.01 | 4.96 | 8.44
9                | 0.11 | 0.42 | 0.76 | 1.24 | 1.54 | 2.97 | 4.98 | 8.30
10               | 0.11 | 0.42 | 0.77 | 1.23 | 1.54 | 3.00 | 4.96 | 8.37
Average time (s) | 0.11 | 0.41 | 0.77 | 1.24 | 1.54 | 3.01 | 5.00 | 8.33
Maximum flow     | 3190 | 2753 | 2141 | 2973 | 2578 | 2605 | 3807 | 3543
According to the data in Table 1 and Table 2, the comparison of the two algorithms under the comprehensive evaluation method is shown in Fig. 3. From Table 1, Table 2, and Fig. 3, it can be clearly seen that when the number of nodes is between 1200 and 1800, the network flow algorithm performs better than Ford-Fulkerson. When the number of nodes is 1600, the network flow algorithm saves more than half of the time of the Ford-Fulkerson algorithm; that is, the running time of the Ford-Fulkerson algorithm is more than twice that of the network optimization algorithm.
Fig. 3. Comparison of the two algorithms (running time in seconds versus number of nodes; curves: Ford-Fulkerson and the network flow algorithm)
This indicates that the network flow algorithm improves stability during the sub-process, reduces calculation time, and accelerates the computation.
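For reference, a baseline of the kind compared against here can be sketched as follows (a minimal Ford-Fulkerson max-flow with BFS augmenting-chain search, i.e. the Edmonds-Karp variant, in Python rather than MATLAB; the adjacency-matrix representation and the example capacities are our own illustrative choices).

```python
# Minimal Ford-Fulkerson max-flow sketch on a capacity matrix,
# used here only as a comparison baseline.
from collections import deque

def max_flow(cap, s, t):
    n = len(cap)
    flow = 0
    while True:
        # BFS for an augmenting chain in the residual network.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting chain left
            return flow
        # Find the bottleneck along the chain and update residual capacities.
        bottleneck, v = float("inf"), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

cap = [[0, 3, 2, 0],
       [0, 0, 1, 3],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 5
```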
5 Conclusions

The comprehensive evaluation method is an effective evaluation criterion compared with single-index evaluation, which evaluates a single piece of information or a fixed indicator and therefore gives only a one-sided understanding of the object. To understand relatively objective things accurately, the evaluation should be conducted from more than one aspect or from several dimensions. Network flow theory is a core theory of computer networks and communication. The main network flow parameters are the best path and the maximum flow, which are also among the main objectives of network optimization. In real life, the maximum flow in network flow optimization theory can be used to solve many practical problems, such as transportation engineering, network communications, and the relevance between computer algorithms and techniques.
References

1. Chang, N.A., Song, G.B., Hou, J.J., Chang, Z.X.: Fault identification method based on unified inverse-time characteristic equation for distribution network. Int. J. Electr. Power Energy Syst. 145(2), 87–102 (2023)
2. Dashtaki, A.A., Hakimi, S.M., Hasankhani, A., Derakhshani, G., Abdi, B.: Optimal management algorithm of microgrid connected to the distribution network considering renewable energy system uncertainties. Int. J. Electr. Power Energy Syst. 145(2), 319–330 (2023)
3. Nasri, A., Abdollahi, A., Rashidinejad, M.: Probabilistic–proactive distribution network scheduling against a hurricane as a high impact–low probability event considering chaos theory. IET Gener. Transm. Distrib. 15(2), 194–213 (2021)
4. Waswa, L., Chihota, M.J., Bekker, B.: A probabilistic conductor size selection framework for active distribution networks. Energies 14(19), 1–19 (2021)
5. Jaramillo, A., Saldarriaga, J.: Fractal analysis of the optimal hydraulic gradient surface in water distribution networks. J. Water Resour. Plan. Manag. 149(4), 65–80 (2022)
6. Escobar, J.W., Duque, J., Vélez, A.L., et al.: A multi-objective mathematical model for the design of a closed cycle green distribution network of mass consumption products. Int. J. Serv. Oper. Manag. 41(2), 114–141 (2022)
7. Fambri, G., Diaz-Londono, C., Mazza, A., et al.: Techno-economic analysis of power-to-gas plants in a gas and electricity distribution network system with high renewable energy penetration. Appl. Energy 312(15), 1–17 (2022)
8. Bouhouras, A.S., Kothona, D., Gkaidatzis, P.A., et al.: Distribution network energy loss reduction under EV charging schedule. Int. J. Energy Res. 46(6), 8256–8270 (2022)
9. Bhatt, P.K.: Harmonics mitigated multi-objective energy optimization in PV integrated rural distribution network using modified TLBO algorithm. Renew. Energy Focus 40(5), 13–22 (2022)
10. Samuel, A.M.R., Arulraj, M.: Performance analysis of flexible indoor and outdoor user distribution in urban multi-tier heterogeneous network. Int. J. Mob. Commun. 21(1), 119–133 (2023)
Consumer Evaluation System in Webcast Based on Data Analysis

Xia Yan1(B) and Anita Binti Rosli2

1 College of Journalism and Communication, Advertising, ShangHai Jian Qiao University, Shanghai, China
[email protected]
2 Department of Social Science and Management, Faculty of Humanities, Management and Science, Universiti Putra Malaysia Bintulu Campus, Bintulu, Sarawak, Malaysia
Abstract. With the birth and rapid development of the Internet, online live-stream shopping has developed rapidly, changing the traditional form of physical consumption to a certain extent and growing with strong momentum. The Internet not only meets people's main needs of purchasing goods and obtaining corresponding services, but is also a convenient channel for information exchange. Unlike the traditional offline consumption mode, the satisfaction of consumer groups in webcasting is seriously divided, and post-purchase word-of-mouth evaluation affects the purchase decisions of online consumers. There is information inequality between consumers and merchants in online transactions, but a consumer evaluation system can make transactions transparent, so that potential consumers can make informed shopping judgments and reduce unnecessary consumption expenses. The consumer evaluation system is therefore becoming more and more critical. Based on the method of data analysis, this paper studies the consumer evaluation system in webcasting from the perspective of consumers, taking consumers' online shopping experience as a reference factor. The aim is to establish consumer evaluation and consumer trust in the webcast platform, build a better consumer feedback mechanism in the live shopping environment, improve the feedback and evaluation mechanism of the webcast industry, provide more effective evaluation guidance for merchants and anchors, enrich research on online consumer evaluation behavior, improve consumers' trust in the live consumption environment, and promote the healthy and vigorous development of the webcast industry.

Keywords: Data analysis · Consumer evaluation · Live streaming
1 Introduction

Since the new century, webcasting has taken off with the development of the Internet. While consumers enjoy fast shopping, they also have to bear the various consumption risks brought about by information asymmetry with merchant groups [1]. Owing to the many potential risks of online transactions, transaction disputes in
live webcasts occur from time to time, and consumers generally lack confidence in live shopping. Compared with traditional offline business, consumer satisfaction with live shopping is usually low. To cope with the risks caused by asymmetric consumption information, consumers exchange relevant information, express their own shopping views and experiences, and comment on live-broadcast shops, services, and the shopping process [2]. At present there are many top influencers in live webcasting, and market competition in the live-broadcast industry is extremely fierce. Consumer evaluation not only conveys commodity information to potential consumers and helps them make later shopping decisions, but also provides feedback to live-broadcast merchants, prompting them to improve their business methods and strengthening the operation of the network platform. Consumer evaluation is therefore very important; in real life, however, it is influenced by consumers, webcast stores, and online business platforms, and the evaluation information may not reflect the real situation [3]. How to build a healthy and reasonable consumer evaluation system that reflects real evaluation information, so that businesses can gain the trust and recognition of consumers, has become a key issue in webcasting. Because the products sold by webcasting are presented to consumers through virtual promotional means, and the authenticity of the product information in each live-broadcast room needs to be examined, third-party evaluations from consumers who actually purchased the live-broadcast products are particularly important [4]. Such consumer evaluation also greatly affects the purchase decisions of potential consumers. According to statistics from the China Internet Network Information Center, more than half of the consumers in live-broadcast rooms said they would read relevant product reviews before buying the products introduced in the live broadcast, through cross-platform and cross-room comparison or the live-room bullet comments. Purchase comments on live-broadcast products give consumers a voice, strengthen the interaction between consumers and live-broadcast operators, and help consumers choose products, increasing their stickiness. Consumer evaluation has undoubtedly become an important factor affecting consumption in webcasting. The interactive nature of the network makes it easier for consumers to search for all kinds of information about live-broadcast products and to share their experience of purchasing and using products with others [5]. Commodity information in live-broadcast rooms and online stores has become the main source of the purchase information consumers need. The information posted by other consumers on the Internet constitutes online shopping word of mouth: consumers freely upload comments and discussions about products and services using Internet tools, and such comments affect the consumption decisions of potential consumers. With the rise of webcasting and the accumulation of shopping experience, a small number of live-broadcast consumers with rich shopping experience have emerged; they are able to consume rationally according to their previous consumption experience and other consumers' evaluations of goods, without being swayed by the sales talk of the live-commerce anchors [6].
To sum up, based on the method of data analysis, this paper studies the consumer evaluation system in webcasting from the consumer's point of view, taking the consumer's online shopping experience as a reference factor. The aim is to establish consumer evaluation and consumer trust in the webcast platform, build a better consumer feedback mechanism in the live shopping environment, improve the feedback and evaluation mechanism of the webcast industry, provide more effective evaluation guidance for merchants and anchors, enrich research on online consumer evaluation behavior, improve consumers' trust in the live consumption environment, and promote the healthy and vigorous development of the webcast industry.
2 Webcast Data Statistics Based on Data Analysis

CNNIC has released the 50th Statistical Report on Internet Development in China (Fig. 1). By June 2022, the number of netizens in China had reached 1.051 billion, up 1.4 percentage points from December 2021; mobile Internet users reached 1.047 billion, basically the same as in December 2021. The number of online video users was 995 million, accounting for 94.6% of all netizens; among them, short-video users numbered 962 million, or 91.5% of all netizens. The number of webcast users in China reached 716 million, accounting for 68.1% of all netizens. As the data show, growth in new Internet users in China has gradually leveled off, and a significant increase in new short-video users is unlikely. However, of the 716 million live-streaming users, 469 million are e-commerce live-streaming users, accounting for 44.6% of all netizens, and there is still much room for growth.
Fig. 1. Population size and netizen size in China (Source: Survey on Internet Development in China, National Bureau of Statistics)
Since its launch in 2020, the video-account feature has iterated rapidly. In 2022, functions such as commodity sharing, interactive small tasks, and live advertising were launched one after another, further improving live short video on the Internet. The number of service providers has also grown rapidly, with professional services offered in many fields such as live broadcast, short video, and supply chain, which further strengthens operators' means of monetization and gives small and medium-sized enterprises better live-broadcast marketing methods [7]. The Report on the Development Trend of E-commerce in China in 2022: The Important Role of E-commerce in High-quality Economic Development, recently released by the China Council for the Promotion of International Trade, shows that iteration of the new e-commerce model represented by live e-commerce has accelerated. As of June this year, the number of live e-commerce users in China was 469 million, an increase of 204 million over March 2020, accounting for 44.6% of all netizens (Fig. 2).
Fig. 2. Scale and usage rate of video (including short-video) and live-streaming users in China (unit: 10,000 people)
3 The Status Quo of Webcasting

From May 2021 to April 2022, the Tik Tok platform hosted more than 9 million live broadcasts every month and sold more than 10 billion items, with total transaction volume increasing 2.2 times year-on-year; as of March 2022, the cumulative number of viewers of Taobao Live had exceeded 50 billion. The report holds that network digital technology has become a new driving force for consumption upgrading. On the one hand, network digital technology has spawned new consumption patterns, such as the "cloud shopping" and "webcast shopping" that young people like; on the other hand, it promotes the optimization and upgrading of the consumption structure, better meeting the customized and personalized needs of consumers.

Against the backdrop of coordinated epidemic prevention and control and regional socioeconomic development, the national economy has continued to recover and its resilience has gradually become apparent [8]. Webcasting has played an important role in this. On the one hand, webcast selling has become an important force in stimulating consumption while preventing the epidemic and ensuring supply. From January to September this year, online retail sales of physical goods reached 8,237.4 billion yuan, up 6.1%, much higher than the growth rate of total retail sales of consumer goods in the same period. Chinese consumers rank first in the world in many respects, including the online retail market, the online shopping population, express delivery business, and the scale of mobile payment. On the other hand, the continuous innovation of webcasting scenarios has boosted the upgrading of the service industry. Webcasting breaks the time and space constraints on both merchants and consumers and releases the potential of the consumer market [9]. By June 2022, the number of online office users in China had reached 461 million, accounting for 43.8% of all netizens.
4 The Main Link of Consumer Evaluation in Webcasting

4.1 Consumer Behavior Links

Consumer behavior theory is the theoretical basis of consumer activities, and all consumer behaviors and activities can be understood through it [10]. At the same time, consumer behavior is a comprehensive discipline, involving physiology, sociology, economics, marketing, psychology, human behavior, and other disciplines [11]. Some scholars have put forward a general model of consumer behavior (Fig. 3), in which the consumer's "black box" is the series of psychological activities produced when consumers receive a series of external stimuli (such as the emotional pull of anchor shopping guides and the guidance of spoken information); decisions are made after these psychological activities [12].
Fig. 3. General patterns of consumer behavior
In daily life, everyone must constantly consume various materials to meet human physiological and psychological needs, so consumption is one of the most common and universal behaviors. Owing to differences in purchasing motives and consumption patterns, different consumers behave differently, yet their behavior still shows some common regularity. Individual differences such as consumers' evaluations, attitudes, beliefs, and perceptions are part of their behavior. The relationship between consumption evaluation and consumers' intention, emotion, and experience attitude, shown in Fig. 4, is consistent with the internal adaptive coordination of intention, especially when thinking is in conflict.
Fig. 4. Relationship between consumer evaluation and consumers’ intention, emotion and experience attitude
4.2 Webcast Link

In the operation channel of webcast information, the long-criticized pain point of hidden information-security risks has gradually become consumers' main concern. Since the launch of the National Anti-fraud APP, anti-fraud publicity has been carried out vigorously throughout the country, but deviations still exist in individual consumer domains, which directly affects the public impression and recognition of webcast transactions. The fundamental reason is that, compared with the very large e-commerce live platforms, enterprises, private businesses, and self-employed operators with imperfect responsibility systems use many incentives to pull consumers into the private domain in order to capture traffic, and during interaction with consumers provide targeted services of knowledge, emotion, and the like to enrich their consumer information databases. As a result, criminals resort to electronic network channels, use real-time information and even private data that reveal consumers' intentions, establish a trust relationship with the audience, and then commit fraud. Because there is no one-to-one correspondence with a real responsible person, this has gradually become widespread in society (Fig. 5).

Fig. 5. Live streaming mode in private domain networks
5 Construction of Consumer Evaluation System in Webcasting

5.1 Consumer Trust and Consumer Evaluation

Consumer trust here means the consumer's trust in the whole online shopping environment before buying. Because online shopping is virtual and the goods sold cannot be touched directly, the consumer's trust in the online shopping environment is particularly important: without trust in online live shopping before consumption, there will be no purchase behavior. When potential consumers gain sufficient trust in a webcast and its products through third-party consumer evaluation content, they make purchase decisions according to their own online shopping experience, influenced by their trust, their shopping experience, and other consumers' post-purchase evaluations. Therefore, the construction of consumer evaluation in webcasting should fully reflect the ways in which consumers come to trust. Consumers' trust and loyalty are important intangible assets of webcast merchants. Consumer trust affects consumer loyalty, and thus the live shopping rate and repurchase rate. Whether it is cognitive trust or the ideal emotional trust built through external information, when trust is high, consumers' evaluations of the webcast and its goods are positive; conversely, evaluations are negative.

5.2 Consumer Online Shopping Experience and Consumer Evaluation

Consumers' judgments of webcast products are influenced by their understanding of and familiarity with the webcast environment and the anchor, and by the number of times they have shopped online. Shopping experience reflects consumers' feelings about previous shopping, which may be positive or negative, and so are the resulting attitudes. Consumers' trust and evaluations are therefore closely related to their online shopping decisions. If a consumer's previous shopping process was pleasant, he will continue to buy and will share the experience with relatives and friends, producing a certain diffusion effect and prompting positive evaluations of the webcast and its products; otherwise, a bad review results.

5.3 Consumer Trust and Consumer Purchase Decision

As mentioned above, webcast consumption has become a new way of life in society and attracts more and more attention. Whether webcast consumers' purchase decisions succeed is also one of the elements of industry competition among webcast operators. In the virtual world of the network, trust is essential for consumers to make purchases, and consumer trust has a positive impact on purchase decisions: only when the buyer has developed trust will purchase behavior occur. By providing and optimizing the path of factors that affect the
trust of online consumers, merchants can gain consumers' trust and obtain positive purchase decisions. The online live shopping environment is unlike the traditional offline environment of physical stores, where consumers and shopping guides transact face to face; there, trust is easier to establish, but consumers are often swayed by the language, behavior, and attitude of shopping guides. In the online live shopping environment, consumers can choose products rationally and close the deal. When consumers fully trust the seller, the transaction is promoted. A purchase decision built on such trust leads consumers to give positive evaluations after experiencing and using the goods, paving the way for the trust-building of potential consumers and forming a positive online live shopping atmosphere.
6 Summary and Recommendations

As global economic development steps onto the digital network track, the Internet not only meets consumers' needs for online shopping and services; the online evaluation information that accompanies goods and services has also become an important part of consumers' online shopping, and the importance of online evaluation grows by the day. With the birth and rapid development of the Internet, online live shopping has developed rapidly, changing the traditional form of physical consumption to a certain extent and growing with strong momentum. The Internet not only meets people's main needs of purchasing goods and obtaining corresponding services, but is also a convenient channel for information exchange. Unlike the traditional offline consumption mode, the satisfaction of consumer groups in webcasting is seriously divided, and post-purchase word-of-mouth evaluation affects the purchase decisions of online consumers. There is information inequality between consumers and merchants in the process of online transactions, but the consumer evaluation system can make transactions transparent, so that potential consumers can make informed shopping judgments and reduce unnecessary consumption expenses. The consumer evaluation system is therefore becoming more and more critical. Based on the method of data analysis, this paper has studied the consumer evaluation system in webcasting from the perspective of consumers, taking consumers' online shopping experience as a reference factor, in order to establish consumer evaluation and consumer trust in the webcast platform, build a better consumer feedback mechanism in the live shopping environment, improve the feedback and evaluation mechanism of the webcast industry, provide more effective evaluation guidance for merchants and anchors, enrich research on online consumer evaluation behavior, improve consumers' trust in the live consumption environment, and promote the healthy and vigorous development of the webcast industry.

According to the discussion in this paper, the process mechanism of consumers' shopping decisions can be shaped through consumers' online shopping evaluations. In constructing the consumer evaluation system for online live broadcast, the live-broadcast room can focus on eliciting objective evaluation content from consumer groups, for example by guiding consumers to write an objective description of product characteristics in the evaluation box, and
taking subjective emotion as an auxiliary tool, for example by asking consumers whether they are satisfied with the product in the live-broadcast room, because such evaluation language better influences other potential consumers' purchase decisions. In addition, according to the analysis of consumers' online live shopping experience, online live stores should pay more attention to old customers in the repurchase zone and to consumers with extensive live-shopping experience on other platforms, because such consumers are more likely to form purchase intentions and may also make recommendations to other potential customers around them. Regarding the construction of consumer trust in webcasting, operators should pay attention to establishing consumers' cognitive trust. Online platforms are virtual, and customers want to see more comprehensive information; more attention should therefore be paid to the content of consumer evaluations, and shoppers should be urged to write evaluation information. The quantity of consumer evaluations is not the main factor affecting potential consumers' purchase decisions, so the webcast platform should focus on the quality of consumer evaluations rather than their quantity.
References

1. Guo, Y.: Study on the impact of online instant comments on online webcasting on consumers' purchase intention. Times Econ. Trade 20(01), 108–113 (2023)
2. Shen, W.: Study on the influence of barrage on consumers' purchase intention in live webcasting. Yunnan University of Finance and Economics (2022)
3. Huang, L.: Protection of consumers' rights and interests in webcasting with goods. Hebei Enterp. 06, 155–157 (2022)
4. Huo, M.: Study on the factors influencing the consumer intention of e-commerce live streaming. Guangzhou University (2022)
5. Fan, J.: Research on the impact of customer experience on customer stickiness in the context of e-commerce live broadcast. Shanxi University of Finance and Economics (2022)
6. Shi, J.: Research on the influencing factors of consumers' purchase behavior of live broadcast goods from the perspective of information ecology. Jilin University of Finance and Economics (2022)
7. Zhang, T.: Optimization strategy of consumer satisfaction based on product characteristics of direct-broadcast e-commerce of agricultural products. Anhui Agricultural University (2022)
8. Zhou, Z.: On the protection of consumer rights and interests under the mode of live broadcast with goods. Hebei University of Economics and Business (2022)
9. Xu, J., Zhang, R., Guo, J., Gong, J.: Barrage and discipline: gender gaze in webcast-analysis based on betta big data 44(04), 146–151, 153–167 (2022)
10. Liu, Y.: Research on the influence of products and situations on consumers' purchase intention in live broadcast environment. Ningxia University (2022)
11. Cui, Z., Choi, J.: The impact of e-commerce live broadcast characteristics on consumer purchases. Zhejiang University (2022)
12. Di, W.: Research on consumers' purchase decision under the mode of live broadcast with goods in online celebrity. Harbin University of Science and Technology (2022)
Optimal Allocation Method of Microgrid Dispatching Based on Multi-objective Function

Yang Xuan1, Xiaojie Zhou1(B), Lisong Bi1, Zhanzequn Yuan1, Miao Wang1, and B. Rai Karunakara2

1 National Nuclear Power Planning and Design Institute Co., Ltd., Haidian District, Beijing 100089, China
[email protected]
2 Nitte Meenakshi Institute of Technology, Bengaluru, India
Abstract. Since the birth of microgrid technology, related research and applications have received wide attention worldwide. This paper studies the optimal dispatching of a microgrid. First, the operating characteristics of common distributed power sources in the microgrid are analyzed and modeled; then a multi-parameter, multi-objective microgrid scheduling optimization model is proposed, and an improved genetic algorithm supporting multiple time scales is used to solve the scheduling planning problem. Finally, the scientific soundness and practicality of the proposed method are verified with an actual example.

Keywords: microgrid · distributed power generation · scheduling optimization · genetic algorithm
1 Introduction

In recent years, with the wide application of distributed generation in the power grid, its intermittency and volatility have affected the security and stability of the grid [1]. The microgrid concept can effectively alleviate the problems caused by connecting distributed generation to the grid, so as to give full play to the advantages of distributed generation. A microgrid is composed of distributed power sources such as wind and solar energy, energy storage equipment, energy conversion devices, and loads. It is a small power system capable of self-monitoring and self-protection and of forming independent autonomy [2, 3]. It can operate grid-connected or in isolation. With the development of microgrid technology, optimizing the economic operation of microgrids has become a widespread concern, so how to improve the economic benefits of microgrid operation is an urgent issue requiring in-depth study [4, 5].
2 Microgrid Structure

A microgrid system is a complete system including micro-power sources (distributed power supplies), energy storage, loads, and a protection system. The energy conversion device in the system
performs energy conversion, and the monitoring and protection devices continuously monitor and protect the system during operation. It is a small power system that can protect, control, manage, and operate itself [6]. It can operate in parallel with the large grid or in islanded mode, isolated from the large grid. Microgrid technology can effectively alleviate the problems caused by connecting distributed generation to the grid, so as to give full play to the advantages of distributed generation [7]. A typical microgrid is shown in Fig. 1 below.
Fig. 1. Typical microgrid structure (photovoltaic, hydropower, diesel generator, energy storage, electric vehicles and loads connected through the microgrid substation to the power grid)
3 Distributed Power Generation Output Model Considering Uncertainty

(1) PV output model. The principle of photovoltaic power generation is to convert solar energy into electricity through the photovoltaic effect of photovoltaic cells [8]:

$$P_{pv} = f_{pv} P_{rated} \frac{A}{A_S} \left[ 1 + \alpha_p (T - T_{STC}) \right] \quad (1)$$

where $f_{pv}$ is the power derating coefficient of the photovoltaic system, representing the ratio of its actual output power to the output power under rated conditions; $P_{rated}$ is the rated power of the photovoltaic system; $A$ is the actual solar irradiation intensity; $A_S$ is the irradiation intensity under standard test conditions; $\alpha_p$ is the power temperature coefficient; $T$ is the current surface temperature of the system; and $T_{STC}$ is the system temperature under standard test conditions.
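A minimal numerical sketch of Eq. (1) follows (in Python; all parameter values are illustrative assumptions, not data from the paper).

```python
# Minimal sketch of the PV output model of Eq. (1).
# All parameter values below are illustrative assumptions.

def pv_output(f_pv, p_rated, a, a_s, alpha_p, t, t_stc=25.0):
    """PV output: derating * rated power * irradiance ratio * temperature term."""
    return f_pv * p_rated * (a / a_s) * (1.0 + alpha_p * (t - t_stc))

# 100 kW array at 80% derating, 800 W/m^2 vs. 1000 W/m^2 STC, cell at 40 C.
print(pv_output(f_pv=0.8, p_rated=100.0, a=800.0, a_s=1000.0,
                alpha_p=-0.004, t=40.0))  # ~60.2 kW
```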
(2) Battery charging and discharging model. The remaining power of the battery is reflected by the state of charge (SOC) [9], defined as follows:

$$SOC(t) = \frac{C_{net}}{C_{bat}} = 1 - \frac{I_d \, t}{C_{bat}} \quad (2)$$

$$I_{bat}(t) = \frac{P_{bat}(t)}{N_{bat} V_{bat}(t)}, \qquad SOC(t+1) = SOC(t)\left[1 - \sigma(t)\right] + \frac{I_{bat}(t) \, \Delta t \, \eta(t)}{C_{bat}} \quad (3)$$

where $C_{net}$ is the remaining power of the battery; $C_{bat}$ is the total (installed) capacity of the battery; $I_d$ is the discharge current of the battery; $I_{bat}(t)$ is the charging/discharging current at time t (positive when charging, negative when discharging); $P_{bat}(t)$ is the battery charge/discharge power (positive when charging, negative when discharging); $V_{bat}(t)$ is the battery terminal voltage; $\sigma(t)$ is the self-discharge rate; $\Delta t$ is the time interval between the two moments; and $\eta(t)$ is the charge/discharge efficiency.

(3) Output model of micro gas turbine. The power output model of a CCHP system based on a micro gas turbine is as follows [10]:

$$\begin{cases} Q_{MT} = P_{MT}(1 - \eta_{MT} - \eta_1)/\eta_{MT} \\ Q_{ho} = Q_{MT} \, \eta_{rec} K_{ho} \\ Q_{co} = Q_{MT} \, \eta_{rec} K_{co} \\ \eta_{rec} = \dfrac{T_1 - T_2}{T_1 - T_0} \\ V_{MT} = P_{MT} \, \Delta t / (\eta_{MT} L) \end{cases} \quad (4)$$

where $P_{MT}$ is the output power of the gas turbine; $Q_{MT}$ is the residual heat of the gas turbine exhaust; $\eta_{MT}$ is the efficiency of the gas turbine; $\eta_1$ is its heat loss coefficient; $Q_{ho}$ and $Q_{co}$ are the heating and cooling capacities provided by the waste heat of the gas turbine flue gas, respectively; $K_{ho}$ and $K_{co}$ are the heating and cooling coefficients of the bromine (absorption) chiller, respectively; $\eta_{rec}$ is the waste-heat recovery efficiency of the flue gas; $T$ denotes temperature; $V_{MT}$ is the natural gas consumed by the gas turbine during the operating time $\Delta t$; and $L$ is the low calorific value of natural gas.
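Both component models translate directly into code. The following sketch (Python; every numeric parameter is an illustrative assumption, not the authors' data) applies the SOC update of Eq. (3) and evaluates the micro-turbine quantities of Eq. (4).

```python
# Minimal sketches of Eq. (3) (SOC update) and Eq. (4) (micro gas turbine).
# All numeric parameters are illustrative assumptions.

def soc_step(soc, p_bat, v_bat, n_bat=1, sigma=0.001, eta=0.95,
             c_bat=200.0, dt=1.0):
    """One SOC step: I_bat = P_bat / (N_bat * V_bat), then Eq. (3)."""
    i_bat = p_bat / (n_bat * v_bat)              # positive = charging
    return soc * (1.0 - sigma) + i_bat * dt * eta / c_bat

def micro_turbine(p_mt, eta_mt=0.3, eta_1=0.05, t1=300.0, t2=120.0, t0=25.0,
                  k_ho=1.2, k_co=0.9, dt=1.0, lhv=9.7):
    """Eq. (4): exhaust heat, heating/cooling supply, gas consumption."""
    q_mt = p_mt * (1.0 - eta_mt - eta_1) / eta_mt    # exhaust residual heat
    eta_rec = (t1 - t2) / (t1 - t0)                  # waste-heat recovery
    q_ho = q_mt * eta_rec * k_ho                     # heating capacity
    q_co = q_mt * eta_rec * k_co                     # cooling capacity
    v_mt = p_mt * dt / (eta_mt * lhv)                # natural gas consumed
    return q_ho, q_co, v_mt

print(soc_step(soc=0.5, p_bat=10.0, v_bat=48.0))
print(micro_turbine(p_mt=30.0))
```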
4 Microgrid Optimal Dispatching Model

In this paper, optimization objectives of minimum operating cost, maximum environmental benefit, minimum depreciation cost of the micro-power sources, and minimum battery charge/discharge life loss are set to study the scheduling optimization of the microgrid.

(1) Minimal operation cost of the microgrid. The operation cost of the microgrid is the fuel cost and operation-and-maintenance cost of all micro-power sources in the microgrid:

$$\min C_{op} = \min \sum_{t=1}^{T} \left[ \sum_{i=1}^{N} \big( C_i(P_i(t)) + O_i(P_i(t)) \big) + C_{grid}(P_{grid}(t)) \right] \quad (5)$$
where $C_{op}$ is the operation cost of the microgrid; $C_i(P_i(t))$ is the fuel cost function of micro-power source i; $O_i(P_i(t))$ is the operation-and-maintenance cost function of micro-power source i; $C_{grid}(P_{grid}(t))$ is the cost of electricity traded with the main network in time period t; and T is a dispatching cycle.

(2) Maximal environmental benefit of the microgrid. Some micro-power sources in the microgrid emit CO2, SO2, NOx, and other polluting gases, and the treatment costs of these gases should be included in the objective function:

$$\min C_{EN} = \min \sum_{t=1}^{T} \sum_{i=1}^{N} \alpha_i P_i(t) \quad (6)$$
T N Ci,ins (Pi (t)) minCENP = min (7) t=1 i=1 365 × 24Li where, CENP is the depreciation cost of microgrid; Ci,ins(Pi(t)) is the installation cost of micro-power i; Li is the life of micro power supply i. (4) Minimum battery charge and discharge life loss. The charging and discharging times of the storage battery are limited. This paper assumes that the loss caused by each charging and discharging is a certain value. Cost Battery = CPT × Times
(8)
where, Costbattery refers to the loss caused by battery charging and discharging; CPT is the consumption of each charge and discharge; Times is the number of battery charges and discharges required in the planning; Convert multiple objective functions into single objective functions in the form of linear weighting: minCp = min(ω1 COP + ω2 CEN + ω3 CDEP + ω4 CBAT )
(9)
where, ω1 ≥ 0, ω2 ≥ 0, ω3 ≥ 0, ω4 ≥ 0 are the weights of each objective function, and meet the requirements of ω1 + ω2 + ω3 + ω4 = 1.
5 Improved Genetic Algorithm In this paper, an improved genetic algorithm is used to solve the multi-parameter and multi-objective optimization problem under the microgrid energy dispatching model. The improved algorithm supports multiple time-scale scheduling planning solutions, and can dynamically generate the optimal solution according to the conditions caused by the
240
Y. Xuan et al.
changes of natural environment and human factors. In this algorithm, multiple electrical equipment models and electrical characteristics constraints of related equipment are established, so that the algorithm can more realistically simulate and solve the required scheduling plan. The main process is as follows: (1) The algorithm generates several random initial solution groups according to the provided power load data, generation power data and relevant algorithm configuration information and constraint configuration information. The solution of the battery part of the initial solution must meet the SOC constraint. (1) For all solutions, check whether they comply with the electrical equipment constraints in this problem. (3) Calculate the fitness of the initial solution group (planning cost in this algorithm), and select several solutions with the lowest fitness. (4) According to these selected optimal solution groups, the solutions in all non-optimal solution groups are subject to random crossover mutation among individuals. (5) Repeat the fitness of the solution set again. If the fitness meets the termination requirements, stop the iterative process and output the optimal solution. Otherwise, repeat step 2. In the solution process, four cost calculations, including microgrid operation cost, microgrid environmental penalty cost, micro-power depreciation cost and battery charge and discharge loss cost, are mainly included. Each cost has a corresponding weight coefficient. Users can adjust the coefficient according to their actual conditions to achieve their own multi-objective maximization of income.
6 Example Analysis The microgrid model system in the example is composed of solar photovoltaic power generation, energy storage battery, gas turbine and controllable load, as shown in Fig. 2. The data of a certain day is selected as an example for analysis. The daily power load curve in the microgrid is shown in Fig. 3. Time-sharing electricity and electricity purchase price data, basic parameters of each micro-power supply in the micro-grid system, operation and maintenance cost parameters of micro-power supply and operation parameters of livestock battery have been given.
Optimal Allocation Method of Microgrid Dispatching
Distribution network
Energy storage
Photovoltaic
Diesel generator
Fig. 2. Microgrid structure diagram
Power˄kW˅
Load
Fig. 3. Daily power load of microgrid
241
242
Y. Xuan et al.
Considering that the output of photovoltaic power generation varies with the seasons, weather, sunshine intensity and other conditions, based on historical data, it is assumed that the daily power generation curve of photovoltaic power generation is as shown in Fig. 4.
Power˄kW˅
PV
Fig. 4. 24-h photovoltaic power generation curve
During the simulation of the example, the target weights (w1 ≥ 0, w2 ≥ 0, w3 ≥ 0, w4 ≥ 0) of the four goals of the minimum operating cost, the maximum environmental benefit, the minimum depreciation cost of the micro-power supply and the minimum battery discharge life loss of the micro-grid can be configured to realize the multiobjective operation optimization and control of the micro-grid. The initial capacity of energy storage is set at 50%, and the optimization solution is carried out according to the above model and algorithm. The optimization results considering different weights are shown in the Figs. 5 and 6. At this time, when the scheduling algorithm meets the minimum operation cost target of the microgrid, it will charge and discharge the energy storage frequently without considering the depreciation cost of the energy storage equipment, and it will choose the gas generator to generate electricity when the electricity sales of the grid are higher than that of the gas power generation, which will bring some pollution gas emissions, as can be seen from the results, The algorithm can select the lowest cost power dispatching mode according to the change of system load and energy, so as to achieve the lowest cost operation goal of microgrid.
Optimal Allocation Method of Microgrid Dispatching
243
24-hour forecast scheduling planning Energy storage
kW
Diesel generator Power grid
Power
PV
Time
Fig. 5. Optimized operation results with the lowest operation cost
Fig. 6. Operation results of equal-weight optimization scheduling (24-h forecast scheduling plan: energy storage, diesel generator, power grid, PV; power in kW)
At this time, due to the introduction of the minimum target weight of battery discharge life loss, the scheduling of the algorithm will not charge and discharge the energy storage system frequently. At the same time, the goal of minimum depreciation cost of micro-power supply and the goal of maximum environmental benefit are considered. When calculating the cost of power generation, the cost of the impact of gas generator power generation on the environment needs to be added. On the premise of meeting
the operation goal, the power generation of the gas turbine should be reduced as much as possible, and the remaining demand should be met by purchasing electricity directly from the grid.
7 Conclusion In this paper, by modeling distributed power sources such as photovoltaic power generation system, energy storage system and micro gas turbine, an optimal operation configuration model of microgrid is established. Genetic algorithm is used to solve the optimal operation of the established model, and then the accuracy of the algorithm is verified by simulation analysis of an example.
Application of Improved SDAE Network Algorithm in Enterprise Financial Risk Prediction Liyun Ding1(B) and P Rashmi2 1 Chengdu Neusoft University, Dujiangyan, Sichuan, China
[email protected] 2 Inventuriz Labs Pvt Ltd, Bengaluru, India
Abstract. Enterprise financial risk refers to the loss to the company caused by various factors that are difficult to predict or control in the production and operation process, which leads to a decline in, or even the loss of, investors’ expected income. Based on the SDAE network algorithm (NA), this paper studies its advantages in enterprise financial analysis and prediction. The article first introduces the development and application status of the SDAE system at home and abroad. The model is then optimized and designed from the two parts of the MSAQ system and process management, and the improved method is combined with other methods. Finally, the prediction model is simulated and tested. The test results show that the absolute error of the improved SDAE NA is small and the difference between the actual risk value and the predicted value is small, which shows that the algorithm can accurately predict the financial risks of enterprises. Keywords: Improved SDAE Network Algorithm · Enterprise Finance · Risk Prediction · Prediction Application
1 Introduction
Enterprise financial risk refers to the possibility that unpredictable and uncontrollable factors arising from the company’s operation, management and financing activities cause economic losses to the company, inflict certain losses on investors, and put the enterprise at a disadvantage in market competition [1, 2]. With the acceleration of global economic integration and the rapid development of science and technology, the continuous improvement and popularization of network technology have made major companies begin to pay more attention to their internal financial problems. China is also committed to developing a systematic and efficient risk early-warning system and control methods suitable for its national conditions, to reduce enterprises’ costs, cut unnecessary capital consumption and save money [3, 4]. At present, domestic and foreign scholars have carried out a great deal of theoretical and practical exploration of data mining technology, which has gradually developed from traditional methods into new fields such as machine learning and knowledge-based management, where remarkable achievements have been made. Domestic literatures
mainly focus on cluster analysis. Some scholars put forward a research idea for a clustering algorithm that applies a distance model to make predictions for a certain region or industry, and verified its feasibility through experiments [5, 6]. Other scholars have conducted in-depth discussions of data mining technology and established a complete and effective early-warning mechanism from within the enterprise; by establishing a reasonable evaluation index system to reflect the problems in the company’s operation and management, and by combining the Chinese market environment with foreign experience, they proposed a method suitable for China’s national conditions and capable of promoting financial risk prediction. Still other scholars hold that the risk prediction model should be established based on the SFA method for analysis and calculation. The model can effectively convert financial index data into quantifiable numerical values, and the relationship and rules of change between the structural parameters and weights of the DAE network can be analyzed through SPSIS software, providing guidance for practical application [7, 8]. Therefore, based on the improved SDAE NA, this paper carries out applied research on enterprise financial risk prediction. This paper takes the SDAE NA as its research object, introduces it systematically, and briefly analyzes the problems and risk points that may occur in actual application. It considers ways of reducing the company’s losses such as cutting costs and improving efficiency, optimizes the process, and improves the algorithm’s ability to handle cases where the risk value is large and hard to determine in a complex and changeable environment, bringing it closer to the data and indicators of actual application.
2 Discussion on Improved SDAE NA in Enterprise Financial Risk Prediction
2.1 Enterprise Financial Risk
The financial risk of an enterprise refers to the deviation of the company’s actual income from the expected target due to various unpredictable or uncontrollable factors, resulting in economic losses, opportunity costs and potential losses. Under market-economy conditions, enterprises face a large number of uncertain factors in a fiercely competitive environment that affect their development process and results, which may cause serious harm to production and operation and bring large fluctuation risks to economic benefits [9, 10]. On the other hand, investment decisions can be adjusted accordingly to make them more reasonable, so as to reduce the probability of a financial crisis and the degree of loss. When the capital supply and demand market is in decline, investors need a large amount of capital to meet their investment needs and may face financing difficulties to some extent. At this time, if there is not enough idle working capital to draw on, the enterprise will fall into financial difficulties and will bear higher risks in the process of external financing. Enterprise financial risk therefore also refers to the risk that, because of its liabilities, the enterprise cannot repay its debts on schedule, thus bringing economic losses to investors and even leading to bankruptcy. When capital is invested in production and operation, problems of profit distribution arise if the investment is not recovered [11, 12]. If the rate of return is lower than the borrowing rate, the next round of operation cannot be carried out, resulting in losses and even financial crisis; these are the result of improper financing decisions. Enterprises should also follow the principle of cost and benefit when choosing how to raise capital. From the concept of enterprise financial risk, we can see that so-called enterprise financial activities refer to the process of capital movement, and various uncertain factors are generated in this process [13, 14]. Due to changes in the external economic environment and internal operating conditions, a shortage of raised funds, an inability to repay principal and interest at maturity, or even an inability to recover the principal of a debt may occur. A large part of the financial management problems of an enterprise arise in such situations. In the process of project implementation, due to many factors such as decision-making, the market and internal conditions, the financial situation faced by the enterprise can change greatly, leading to deviation between its income and expectations.
2.2 Comprehensive Evaluation of Enterprise Financial Risk
In financial risk prediction, the business ability and profitability of the enterprise can be analyzed through the comprehensive evaluation method. The comprehensive evaluation method refers to a comprehensive and systematic investigation of the overall economic benefits, investment returns and other aspects of a project or of multiple subprojects. It can effectively avoid uncertain decision-making problems arising under the influence of subjective factors. For non-quantitative evaluation indicators, qualitative analysis must be introduced to ensure the objective reliability of the results. After applying the DAE network to actual enterprises, attention should be paid to both qualitative and quantitative aspects [15, 16]. The SEC algorithm can be used to predict the future development and change trend of financial data only after the financial data have been standardized. However, the SPSE model cannot be used to avoid some new uncertainties; at this point, existing theories and technical means combined with relevant experience can be used to take measures to reduce the probability of risk occurrence or the degree of loss. Figure 1 shows the enterprise financial risk prediction model. This method is based on the financial statements of the target enterprise and carries out a comprehensive evaluation on them to obtain the degree of risk. In practical application, it is found that the DAE NA can process different types of data at different levels and calculate the differences between sub-samples. However, because each company has its own unique characteristics and operating features, and other factors affect these results, the applicability of this model for predicting risk is also limited [17, 18]. In the comprehensive evaluation of enterprise financial risk, it is necessary to analyze the various situations that may occur and take corresponding measures to avoid or reduce losses.
Fig. 1. Enterprise financial risk prediction model (sample set → modeling sample and sample to be tested → prediction model → forecast results)
This paper mainly studies the improvement scheme based on the SDAE NA, and optimizes the different types of data samples and model parameters to obtain the target value and optimal results that are most in line with the actual situation. Select new technologies and methods that are suitable for the current development situation of
the company and can meet its risk prediction needs through comparison, and propose corresponding preventive measures or solutions for possible problems in the system. The comprehensive evaluation of financial risk is mainly to analyze various problems that the enterprise may face by using different evaluation criteria, so as to find corresponding countermeasures to reduce losses, improve profits, judge whether there are potential factors and the impact of such risks, and set the quantitative scale to a specific numerical range to measure. 2.3 Improved SDAE NA Aiming at the deficiency of SDAE NA in enterprise financial risk prediction, an improved SDAE NA is proposed. This method compares the optimized model with the traditional method, and then evaluates its application effect. Through experiments and comparison, it is found that the improved scheme based on this paper can better solve the above problems. However, in the actual operation process, there will be some unexpected
situations, or some parameters with large changes, uncertainties and other problems. At the same time, there may be large errors in data processing and it is difficult to accurately estimate the value.

f(x) = \sum_{i=1}^{n-1} \left[ 100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2 \right]   (1)
To solve the problem of overly large parameter values in the SDAE NA, an improved SDAE network optimization model can be adopted. First, a mathematical analysis framework is established on the data set.

X_{mut}(d) = N(0.5, 0.3) \times \left[ (x_{r1} - x_{r2}) + (x_{r3} - x_{r4}) \right]   (2)
The system is simulated to determine whether the object under study meets the expected objectives and requirements. Secondly, the appropriate function type is selected according to the actual work situation. Finally, the parameter value setting method in the improved SDAE NA is used to calculate the relationship between the optimal solution and the prediction result, so as to obtain the final scheme and optimize the design based on it. In the SDAE NA, the calculation process is mainly based on the combination of the RBF algorithm and the improved method. Figure 2 shows the schematic structure of the improved SDAE NA.
Fig. 2. Schematic structure of the improved SDAE NA (hidden representations Z1, Z2, Z3)
The model can process the original data. It can transform complex, multidimensional problems into simple, easy-to-implement ones, and improve the accuracy of the analysis of the relationship between data structure and attributes. It can use existing network performance indicators to evaluate whether the whole system has good stability, and it reduces the computational complexity to a certain extent. Aiming at the parameter sensitivity of the SDAE NA, an improved SDAE network optimization method is proposed. This method analyzes the use of data by setting different levels of users in the program to obtain the best results.

X_{new}(d) = \begin{cases} x_{new}(d) + x_{mut}(d), & r_2 \le PAR \\ x_{new}(d), & r_2 > PAR \end{cases}   (3)
Set group X control points and group d independent points to implement control strategies for PAR users, and D provides the system with input service function module,
output interface module and other relevant service information; R provides a support platform for the collection and output of different types of customer information in the system. In view of the inapplicability of the DAE NA, this paper proposes an improved SDAE network optimization method, that is, the combination of NPVC and RBF to achieve the optimal path length when certain conditions are met.
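Taken together, Eqs. (1)-(3) read like a harmony-search-style parameter update: Eq. (1) is the generalized Rosenbrock function, a standard test objective; Eq. (2) builds a mutation step from a Gaussian-scaled difference of four randomly chosen solutions; and Eq. (3) applies that step, per dimension, only when a random draw r2 does not exceed PAR. A minimal sketch under that reading, where every name and constant beyond the equations themselves is an assumption:

```python
import numpy as np

def f(x):
    """Eq. (1): generalized Rosenbrock test function."""
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

def mutate(pop, d, rng):
    """Eq. (2): Gaussian-scaled difference of four random members, dimension d."""
    r1, r2, r3, r4 = rng.choice(len(pop), 4, replace=False)
    scale = rng.normal(0.5, 0.3)
    return scale * ((pop[r1, d] - pop[r2, d]) + (pop[r3, d] - pop[r4, d]))

def update(x_new, pop, PAR, rng):
    """Eq. (3): apply the mutation per dimension with probability PAR."""
    x = x_new.copy()
    for d in range(x.size):
        if rng.random() <= PAR:            # the r2 <= PAR branch of Eq. (3)
            x[d] += mutate(pop, d, rng)
    return x

rng = np.random.default_rng(1)
pop = rng.uniform(-2, 2, (20, 5))          # 20 candidate solutions, 5 parameters
best = min(pop, key=f)
cand = update(best, pop, PAR=0.3, rng=rng)
if f(cand) < f(best):                      # keep the candidate only if it improves
    best = cand
```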
3 Experimental Process of the Improved SDAE NA in Enterprise Financial Risk Prediction
3.1 Enterprise Financial Risk Prediction Model Based on Improved SDAE NA
The enterprise financial risk prediction model based on the SDAE NA mainly analyzes the data to find the best path to reduce losses and improve efficiency. First of all, the data must be preprocessed. The following methods are used on the original collection: first, remove uncertainty factors such as invalid sampling and noise; second, delete high-frequency sequence information; third, convert the signal into the relationship matrix between the effective value and the original signal, and extract it to form a new useful feature vector as the prediction model parameter (as shown in Fig. 3). According to the DAE network example diagram described above, the DAE routing protocol based on the SDAE algorithm is very successful in practical application and has high efficiency, applicability and practical value. Since this paper concerns improving the SDAE network optimization system model, there are also some limitations when applying this method to other enterprises for financial risk prediction. The first step is to sample, analyze and process the newly accessed network data. By comparing the costs and benefits of different schemes, solutions can be found for the problems of best path selection and best time-interval arrangement. When making financial risk predictions, enterprises should adopt different methods for different data types. This paper introduces several main parameters used in the DAE NA and the improved SDAE model; they are determined by comparing each level with its weight value.
Fig. 3. Enterprise financial risk prediction model based on the improved SDAE NA (flow: raw data analysis → construction of effective predictors to determine independent and dependent variables → normalization → training/test samples → multi-population chaotic particle optimization of support vector regression parameters → SDAE training → condition check → prediction model → forecast output)
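The paper never spells out the SDAE training step itself. Assuming SDAE denotes the usual stacked denoising autoencoder, one layer of the stack can be trained roughly as below; this NumPy sketch (sizes, rates and the tied-weight choice are all assumptions) corrupts the input, encodes it, and learns to reconstruct the clean input:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dae_layer(X, hidden, noise=0.2, lr=0.1, epochs=200, seed=0):
    """Train one denoising-autoencoder layer with tied weights:
    corrupt the input, encode it, reconstruct the CLEAN input."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, hidden))
    b, c = np.zeros(hidden), np.zeros(d)
    for _ in range(epochs):
        X_noisy = X * (rng.random(X.shape) > noise)   # masking corruption
        H = sigmoid(X_noisy @ W + b)                  # encode
        R = sigmoid(H @ W.T + c)                      # decode (tied weights)
        G_R = (R - X) * R * (1 - R)                   # output-layer gradient
        G_H = (G_R @ W) * H * (1 - H)                 # backprop to hidden layer
        W -= lr * (X_noisy.T @ G_H + G_R.T @ H) / n
        b -= lr * G_H.mean(axis=0)
        c -= lr * G_R.mean(axis=0)
    return W, b

# Stacking: each layer's hidden code becomes the next layer's input,
# yielding progressively more abstract features for the risk model.
X = np.random.default_rng(1).random((500, 16))        # e.g. 16 normalized indicators
W1, b1 = train_dae_layer(X, hidden=8)
H1 = sigmoid(X @ W1 + b1)
W2, b2 = train_dae_layer(H1, hidden=4)
```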
3.2 Predictive Test of Enterprise Financial Risk Based on Improved SDAE NA The predictive test of enterprise financial risk based on the improved SDAE algorithm is mainly aimed at the difference between the actual workload and the estimated value to
a certain extent in the process of budget implementation, and corresponding measures are taken to reduce the error according to the different situations. The control variables corresponding to the target value under different parameters can be obtained by analyzing and comparing various methods, relevant principles and calculation formulas proposed in the existing literature, such as the standard deviation method and the least squares method. In this way, the impact of various indicators on enterprise financial risk can be assessed in the actual application of the improved SDAE NA during budget implementation.
4 Experimental Analysis of Improved SDAE NA in Enterprise Financial Risk Prediction
The predictive test of enterprise financial risk based on the improved SDAE NA can be analyzed from two aspects. One is to use this method when there is a large difference between the objective function value and the actual value of an indicator. The other is to select the most appropriate value and weight coefficient from the results calculated under different parameter conditions, convert it into a standard deviation, and distribute it to each scheme so as to achieve the overall optimal control effect. At the same time, optimization analysis can also be carried out by comparing combinations of the modules of the improved SDAE NA. Table 1 shows the test data of the enterprise financial risk prediction model.

Table 1. Enterprise financial risk forecast model test data

Order number   Predicted value   Actual value   Absolute error value
1              3.413             2.453          0.343
2              4.356             3.424          0.313
3              3.647             3.648          0.435
4              4.265             3.416          0.563
5              5.320             4.735          0.741
Through the above analysis, we can conclude that the improved SDAE NA is feasible for enterprise financial risk prediction. First, from an overall perspective, the model evaluates the company’s operating conditions. Then, comparing it with the traditional method, we find that the SDAE-based method can effectively reduce losses or expense increases caused by environmental changes, and the improved SDAE coefficient method can send timely and accurate warnings and signals to internal management to reduce the possibility of a financial crisis; risk can be controlled through reasonable use of the cost-benefit ratio, capital structure adjustment and other means to improve the level of enterprise income. At the same time, the data in Fig. 4 show that the absolute error of the improved SDAE NA in predicting risk is small, and the difference between the actual risk value and the predicted value is small. This shows that the algorithm can accurately predict the financial risks of enterprises.
Fig. 4. Predicted value, actual value and absolute error of the improved SDAE NA in enterprise financial risk prediction (data as in Table 1)
5 Conclusion
With the development of economic globalization, competition among enterprises around the world is becoming increasingly fierce, and financial risk management has become an increasingly important issue for modern enterprises. This paper analyzes and discusses the SDAE NA. Through the analysis and collation of literature and data, it concludes that there are still defects in links such as the SFA-based linear model, the improved SDAE routing optimization and data mining, which are greatly affected by uncertainty factors. Combined with the actual case application, the method is of great significance for reducing the risk coefficient, improving financial efficiency and improving the profitability of enterprises.
References 1. Liu, C., Kumar, N.A.: Risk prediction and evaluation of transnational transmission of financial crisis based on complex network. Clust. Comput. 22(Supplement), 4307–4313 (2019) 2. Shahbazi, Z., Byun, Y.-C.: Machine learning-based analysis of cryptocurrency market financial risk management. IEEE Access 10, 37848–37856 (2022) 3. Huynh, T.L.D., Shahbaz, M., Nasir, M.A., Ullah, S.: Financial modelling, risk management of energy instruments and the role of cryptocurrencies. Ann. Oper. Res. 313(1), 47–75 (2022) 4. Jana, R.K., Tiwari, A.K., Hammoudeh, S., Albulescu, C.T.: Financial modeling, risk management of energy and environmental instruments and derivatives: past, present, and future. Ann. Oper. Res. 313(1), 1–7 (2022)
5. Pekár, J., Pˇcolár, M.: Empirical distribution of daily stock returns of selected developing and emerging markets with application to financial risk management. CEJOR 30(2), 699–731 (2021). https://doi.org/10.1007/s10100-021-00771-4 6. Sun, Q., Hong, W., Zhao, B.: Artificial intelligence technology in internet financial edge computing and analysis of security risk. Int. J. Ad Hoc Ubiquitous Comput. 39(4), 201–210 (2022) 7. Soto-Beltrán, L.L., Robayo-Pinzón, O.J., Rojas-Berrio, S.P.: Effects of perceived risk on intention to use biometrics in financial products: evidence from a developing country. Int. J. Bus. Inf. Syst. 39(2), 170–192 (2022) 8. Kwateng, K.O., Amanor, C., Tetteh, F.K.: Enterprise risk management and information technology security in the financial sector. Inf. Comput. Secur. 30(3), 422–451 (2022) 9. Aren, S., Hamamci, H.N.: The impact of financial defence mechanisms and phantasy on risky investment intention. Kybernetes 51(1), 141–164 (2022) 10. Ballew, H.B., Nicoletti, A., Stuber, S.B.: The effect of the paycheck protection program and financial reporting standards on bank risk-taking. Manag. Sci. 68(3), 2363–2371 (2022) 11. Goussebaïle, A.: Risk allocation and financial intermediation. Math. Soc. Sci. 120, 78–84 (2022) 12. Pandey, K.K., Shukla, D.: Stratified linear systematic sampling based clustering approach for detection of financial risk group by mining of big data. Int. J. Syst. Assur. Eng. Manag. 13(3), 1239–1253 (2022) 13. Albrecher, H., Azcue, P., Muler, N.: Optimal ratcheting of dividends in a Brownian risk model. SIAM J. Financ. Math. 13(3), 657–701 (2022) 14. Dolinsky, Y., Moshe, S.: Short communication: utility indifference pricing with high risk aversion and small linear price impact. SIAM J. Financ. Math. 13(1), 12 (2022) 15. Jaimungal, S., Pesenti, S.M., Wang, Y.S., Tatsat, H.: Robust risk-aware reinforcement learning. SIAM J. Financ. Math. 13(1), 213–226 (2022) 16. Nutz, M., Zhang, Y.: Reward design in risk-taking contests. SIAM J. Financ. Math. 13(1), 129–146 (2022) 17. Cerqueti, R., D’Ecclesia, R.L., Levantesi, S.: Preface: recent developments in financial modelling and risk management. Ann. Oper. Res. 299(1), 1–5 (2021) 18. Elsayed, A.H., Helmi, M.H.: Volatility transmission and spillover dynamics across financial markets: the role of geopolitical risk. Ann. Oper. Res. 305(1), 1–22 (2021)
The Design of Supply Chain Logistics Management Platform Based on Ant Colony Optimization Algorithm Bin Wang(B) Shandong Vocational College of Light Industry, Zibo 255300, Shandong, China [email protected]
Abstract. With the development of logistics engineering and the enlargement of its research scope, the relationship between logistics and the SC is getting closer and closer. SC management is a systematic project that integrates logistics, information flow, capital flow and various technologies, and the capital market and strong national policy support will inevitably make logistics play an increasingly important role in the SC. The purpose of this paper is to design a supply chain (SC) logistics management platform, specifically a logistics management platform for the automobile logistics service SC. In the experiment, automobile logistics company M, a logistics company affiliated to a large automobile manufacturer, is taken as the research object, and the data of the management platform are processed. It is found that under the evaluation results of three standardized indicators, namely s1, s2 and s3, the asset-liability ratio results are 0.1154, 0.5584 and 0.985, respectively; the current ratio results are 0.2458, 0.0842 and 0.751, respectively; and the net interest rate on assets is 0.8914, 0.8542 and 1.325, respectively. This shows that larger standardized values of the net asset interest rate and the current ratio are better, while for the standardized asset-liability ratio, the smaller the value, the better. Keywords: Swarm Optimization Algorithm · Logistics Service SC · Logistics Management · Platform Design
1 Introduction
In recent years, logistics, as a core industry within the service industry, has played an increasingly critical role in economic development. Especially with the growing integrity and complexity of outsourcing, the supply chain has received extensive attention from business and academic circles. Among its integral characteristics, the coordination mechanism plays an extremely important role in optimization, and fairness concerns in the chain also generally exist in real life. Therefore, it is of significance to study SC optimization and its application in consideration of the coordination mechanism and fairness concerns [1]. With the improvement of technology, optimization theory and methods affect the fields of transportation, social production and industrial design. Santis R D proposed
a new metaheuristic routing algorithm for minimizing the travel distance of pickers. It combines an ant colony metaheuristic with the Floyd-Warshall algorithm. To evaluate the algorithm, extensive analyses were performed, and its efficiency in solving the picker routing problem is analyzed as a function of parameter settings [2]. Selvakumar AE sees cloud computing as delivering data virtualization rather than direct interaction with servers: customers can organize and launch the resources they want and pay only for the resources they need. Providing a component for product asset management then becomes the main goal, and to improve resource allocation a new ant colony method is proposed [3]. The interesting phenomena of ant colonies have aroused great research interest and enthusiasm, and the ant colony algorithm was proposed by simulating ants finding the shortest path while foraging. The ant colony optimization (ACO) algorithm has a wide range of applications in the fields of path planning, data mining and combinatorial optimization. This paper studies the concept and background of the SC, the concept of logistics information management, and the ACO algorithm. For the design of the logistics management platform for the automobile logistics service SC, the experiment takes M automobile logistics company as the research object; company M is a logistics company affiliated to a large automobile manufacturer, providing professional automobile logistics services for the manufacturer. The experiment processes the data of the management platform.
2 Research on the Design of SC Logistics Management Platform Based on ACO Algorithm
2.1 The Concept of SC
The SC is a whole-process activity concept that covers the production activities of an enterprise. It is built around one core enterprise and runs from the procurement of raw materials, through the production stage of intermediate products, to the delivery of product output, and finally to the seller selling to customers: a process chain from raw materials (suppliers provide raw materials) to output (product manufacturing) and final sales (sales channel marketing) [4]. The SC has a wide range of institutional models: in the entire process of product creation and sales, the links with different responsibilities are called node enterprises. Each node enterprise serves the core enterprise, covering the whole process of the SC and forming a "horizontal integration" cooperation model. With changes in the market, the uniform-input model is no longer in line with the consumer-led market environment faced by most companies [5]. The management idea of the SC is based on managers effectively grasping the internal laws and external connections of production. In the entire circulation process of products from raw materials to final consumer goods, the raw material suppliers, manufacturers, distributors, etc. are effectively organized, and the logistics, work flow and organizational flow involved at each node are subject to reasonable management control, maximizing effectiveness while reducing costs [6].
2.2 Logistics Service SC
(1) The concept of logistics service SC. The logistics service SC is a new research field that originates from the service processes of the physical supply chain. Based on the specific needs of the customer, the logistics integrator selects appropriate suppliers, which are responsible for completing the specific logistics activities. The main purpose is to increase the number of suppliers [7]. In the existing literature, scholars define it in various forms. It is generally believed that the logistics service SC can be divided into links connected together, from the starting point of the SC, the functional logistics enterprise, through the integrated logistics service provider, to the final recipient of the logistics service, the customer. Compared with this narrow definition, the broad definition is wider in scope: it refers to the framework structure composed of the companies or departments that provide and maintain facilities, equipment, information services and the like, and that cooperate to meet the needs of integrated logistics services [8].
(2) Research background. The logistics service SC has also received extensive attention from domestic and foreign scholars in recent years. Foreign scholars have conducted in-depth research on it, discussing how to classify logistics service providers, logistics operators, agents and integrators [9]. They view the service SC as a cross-functional core process, covering all the processes and management activities of planning, moving and repairing materials for the after-sales support of the company's products. Under uncertain logistics demand, the main members of the chain include integrators of single logistics services, functional logistics suppliers and subcontractors of logistics capabilities. Regarding the conflicts of interest generated by the various subjects in the logistics service SC, it has been proposed that, in order to establish an effective cooperation and coordination mechanism, information, interests and incentives should be comprehensively considered.
(3) Basic structure. By integrating social logistics resources, the integrator coordinates different types of logistics suppliers, responds quickly to customers' logistics needs, and creates service value for customers. Therefore, the essence of the logistics service SC is a service SC based on the cooperation of the service capabilities of various logistics enterprises. Relevant scholars at home and abroad have put forward their own opinions. Its basic network structure is shown in Fig. 1:
Fig. 1. Basic network structure (logistics supplier → logistics services integrator → logistics demand side)
2.3 The Concept of Logistics Information Management
Logistics information management refers to the process of using basic functions to collect, report, exchange and provide services for logistics information, and of effectively using basic elements such as resources to achieve the overall goal of logistics management. Logistics information management is an ever-expanding and changing matter [10]. The earliest application of information processing systems in enterprises was not in the logistics department but was mainly limited to front-end business departments such as business management and procurement management. With the intensification of market competition, sales and procurement channels have diversified, and various costs, including circulation costs, must be continuously reduced. Therefore, while the physical process of logistics itself has been systematized, logistics information processing systems have developed by a matching leap. With the rapid replacement of electronic computers and the technological innovation of data communication systems, the speed and accuracy of logistics information processing have improved. Using advanced computing power and fast communication systems, logistics-related information can be quickly exchanged and processed over long distances. At the same time, such logistics information processing promotes and enhances the efficiency and accuracy of business operations, financial accounting and internal manufacturing operations.
2.4 ACO Algorithm
Observing the behavior of ants during foraging over a long period, researchers found that ants carry a substance called pheromone. Ants leave pheromone as a clue along their routes and tend to choose routes with a high concentration of pheromone; the more ants that travel a road, the more pheromone accumulates on it [11]. In turn, ants following such a path leave more pheromone, so the behavior is positively reinforced. ACO, which finds optimal paths for complex problems in this way, is a cognitive optimization model constructed by simulating the biological characteristics of the ant colony, with high search efficiency and low complexity. ACO is currently used in various fields, and many researchers have also applied this technology to wireless sensor networks.
3 Investigation and Research on the Design of SC Logistics Management Platform Based on ACO Algorithm
3.1 Research Content and Objects
The design of the logistics management platform for the SC of automobile logistics services takes M Automobile Logistics Company as the research object. M Company is a logistics company affiliated to a large automobile manufacturer, providing professional automobile logistics services for the manufacturer. At present, due to the increase in business volume and the limited resources of the company itself, Company M needs to integrate social logistics resources, build an automotive logistics service SC, and seek companies with high-quality logistics service capabilities as partners.
3.2 ACO Algorithm
ACO algorithm process: set up n cities and m ants. Use tabu_k to record all the cities that ant k (k = 1, 2, ..., m) has traveled so far, let d_{ij} represent the distance between c_i and c_j, and let \tau_{ij}(t) represent the pheromone concentration on the path from c_i to c_j at time t. In the process of path selection, ant k selects a path based on probability, so the probability p_{ij}^k(t) of the ant choosing city j at time t is:

p_{ij}^{k}(t) = \begin{cases} \dfrac{[\tau_{ij}(t)]^{\alpha} [\eta_{ij}(t)]^{\beta}}{\sum_{s \in allowed_k} [\tau_{is}(t)]^{\alpha} [\eta_{is}(t)]^{\beta}}, & j \in allowed_k \\ 0, & \text{others} \end{cases}   (1)

\eta_{ij}(t) = \frac{1}{d_{ij}}   (2)
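Equations (1) and (2) translate directly into code. The sketch below builds one ant's tour using them; the distance matrix, alpha, beta and the pheromone initialization are illustrative values, not taken from the paper:

```python
import numpy as np

def transition_probs(i, tabu, tau, dist, alpha=1.0, beta=2.0):
    """Eq. (1): probability of moving from city i to each allowed city j,
    with heuristic eta_ij = 1/d_ij from Eq. (2)."""
    n = len(dist)
    eta = 1.0 / dist[i]                      # Eq. (2); inf diagonal -> eta 0
    weights = np.zeros(n)
    for j in range(n):
        if j not in tabu:                    # j in allowed_k
            weights[j] = tau[i, j]**alpha * eta[j]**beta
    return weights / weights.sum()

rng = np.random.default_rng(0)
n = 5
dist = rng.uniform(1, 10, (n, n))
np.fill_diagonal(dist, np.inf)               # no self-loops
tau = np.ones((n, n))                        # initial pheromone concentration
tabu = [0]                                   # ant k starts at city 0
while len(tabu) < n:
    p = transition_probs(tabu[-1], set(tabu), tau, dist)
    tabu.append(int(rng.choice(n, p=p)))     # sample next city by Eq. (1)
print("tour:", tabu)
```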
4 Analysis and Research of the SC Logistics Management Platform Based on ACO Algorithm
4.1 Data Processing of SC Logistics Management Platform
The raw data are processed for three indicators: the asset-liability ratio, the current ratio and the net asset interest rate. The standardized indicator evaluation values are shown in Table 1 and Figs. 2, 3 and 4.

Table 1. Indicator evaluation value after standardization

Secondary indicators          s1       s2       s3
Asset-liability ratio         0.1154   0.5584   0.985
Current ratio                 0.2458   0.0842   0.751
Net interest rate on assets   0.8914   0.8542   1.325
Fig. 2. Data comparison of logistics management platform S1
From the data in Table 1, Figs. 2, 3 and 4, it can be seen that the asset-liability ratio is 0.1154, 0.5584 and 0.985 respectively under the three standardized indicators of s1, s2 and s3. The results of current ratio under s1, s2 and s3 are 0.2458, 0.0842 and 0.751 respectively. The net interest rate of assets under the evaluation of s1, s2 and s3 are 0.8914, 0.8542 and 1.325 respectively. It can be seen that the larger the standardized value of net asset interest rate and current ratio, the better, while the smaller the standardized value of asset-liability ratio, the better.
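This direction-dependent reading matches the usual min-max treatment of benefit versus cost indicators; a small sketch of that convention (the normalization form and the sample values are assumptions, not quoted from the paper):

```python
import numpy as np

def standardize(x, benefit=True):
    """Min-max standardization. Benefit indicators keep their direction
    (larger is better); cost indicators are inverted (smaller raw is better)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.min()) / (x.max() - x.min())   # assumes a non-constant column
    return z if benefit else 1.0 - z

# Illustrative raw values only:
print(standardize([0.45, 0.62, 0.38], benefit=False))  # asset-liability ratio (cost type)
print(standardize([1.8, 1.2, 2.1], benefit=True))      # current ratio (benefit type)
```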
Fig. 3. Data comparison of logistics management platform S2
Fig. 4. Data comparison of logistics management platform S3
5 Conclusions
This paper studies the design of a SC logistics management platform based on the ant colony algorithm. The study finds that the ant colony algorithm is of great significance to the design of such a platform, and the experimental data show that introducing the ant colony algorithm into this design is an inevitable trend. ACO is widely used in practical problems and has become one of the most valuable optimization algorithms under development. However, while some achievements have been made, there are also problems in this study. First, because of time constraints the experimental period was short, so there may be some errors in the experimental results. Moreover, due to the limitation of experimental conditions, many aspects of the experimental process still need to be improved. Finally, further research on the ACO algorithm is necessary: the discussion of the algorithm in this paper remains at a fairly shallow level, while the algorithm in fact contains considerable mathematical depth that requires continued, careful study. With the development of time and technology, research on the application of the ant colony optimization algorithm in the design of SC logistics management platforms will become deeper and deeper. To gain a better understanding of its impact on such designs, subsequent research will further improve the analysis in light of these problems.
References 1. Jérémy, D., Olivier, G., et al.: A hybrid memetic-ACO algorithm for the home health care problem with time window, synchronization and working time balancing. Swarm Evol. Comput. 46(1), 171–183 (2019) 2. Santis, R.D., Montanari, R., Vignali, G., et al.: An adapted ACO algorithm for the minimization of the travel distance of pickers in manual warehouses. Eur. J. Oper. Res. 267(1), 120–137 (2018) 3. Selvakumar, A., Gunasekaran, G.: A novel approach of load balancing and task scheduling using ACO algorithm. Int. J. Softw. Innov. 7(2), 9–20 (2019) 4. Ahmad, M., Othman, R.R., Ali, M., et al.: A tuned version of ACO algorithm (TACO) for uniform strength T-way test suite generator: an execution’s time comparison. J. Phys: Conf. Ser. 1962(1), 012037 (2021) 5. Varghese, B., Raimond, K., Lovesum, J.: A novel approach for automatic remodularization of software systems using extended ACO algorithm. Inf. Softw. Technol. 114(10), 107–120 (2019) 6. Mercy, M.G., Kumari, A.K., Bhujangarao, A., et al.: ACO algorithm GPS clustering approach. J. Phys: Conf. Ser. 2040(1), 012011 (2021) 7. Jla, B., Jlb, C.: Applying multi-objective ACO algorithm for solving the unequal area facility layout problems - ScienceDirect. Appl. Soft Comput. 74(1), 167–189 (2019) 8. Sengathir, J.: A hybrid ant colony and artificial bee colony optimization algorithm-based cluster head selection for IoT. Proc. Comput. Sci. 143(1), 360–366 (2018)
9. Kumar, P.M., Devi, U.G., Manogaran, G., et al.: ACO algorithm with Internet of Vehicles for intelligent traffic control system. Comput. Netw. 144(24), 154–162 (2018) 10. Dahan, F.: An effective multi-agent ACO algorithm for QoS-aware cloud service composition. IEEE Access 9(1), 17196–17207 (2021) 11. Dahan, F., Hindi, K.E., Ghoneim, A., et al.: An enhanced ACO based algorithm to solve QoS-aware web service composition. IEEE Access 99(99), 1 (2021)
Management and Application of Power Grid Infrastructure Project Based on Immune Fuzzy Algorithm Yimin Tang1(B) , Zheng Zhang1 , Jian Hu2 , and Sen He2 1 State Grid Shanghai Electric Power Company Engineering Construction Consulting Branch,
Pudong, Shanghai 200120, China [email protected] 2 Beijing JingHangTianLi Science and Technology Company Limited, Haidian, Beijing 100038, China
Abstract. In recent years, with the development of social economy, the State Grid Corporation of China has increasingly invested in infrastructure. State Grid’s infrastructure projects are complicated to operate, involving a large number of people, and project management is difficult. In the process of project execution, the effect of project management will directly affect the construction cost of the project and the profitability of the project. With the urgent requirements for management methods with large capital investment and intensive personnel, power supply companies must also find more effective management methods to meet the needs of today’s information age. Therefore, the construction of public utility infrastructure must achieve efficient management through information system applications. This paper studies the management and application of immune fuzzy algorithm in power grid infrastructure project. It understands related theories on the basis of literature, and then designs the power grid infrastructure project management system based on immune fuzzy algorithm, and tests the designed system. The test result shows that the response time of the system designed in this paper meets the system requirements. Keywords: Fuzzy Algorithm · Power Grid Infrastructure · Project Management · Immune Algorithm
1 Introduction
Power infrastructure projects are complex, scarce and difficult to manage [1, 2]. Efficiency and cost control in the construction of power grid infrastructure directly affect the amortization of costs and the profits realized during project implementation. The management of power infrastructure projects urgently needs improvement, which also poses new challenges for power grid companies from project start to the completion of fund transfer [3, 4]. For the implementation of project life-cycle management, the enterprise resource planning (ERP) system meets the needs of this era. The system uses information technology and other technologies as a
platform and uses advanced management experience to optimize and reorganize the enterprise's initial project management processes, so that all project operations can run in real time and be monitored [5, 6]. The company's project management application provides a comprehensive information exchange platform for corporate fund management, material procurement, project management, cost management and other businesses, as well as for the business management of enterprises and business units; it integrates complete data exchange and strengthens internal control. Information management can improve the company's management efficiency, reduce costs, and greatly improve the overall competitiveness of power companies [7, 8]. Regarding research on power grid infrastructure projects, some R&D personnel have offered opinions and suggestions on the ERP implementation process, mainly concerning key issues in implementation and their specific solutions. However, because people's understanding of the ERP system at that time was not very clear, there were many areas needing improvement in the introduction of ERP projects into power supply construction. At present, through further research and improvement, power supply companies are gradually opening up their own power infrastructure and ERP business routes [9]. Some researchers analyzed the possible risks in the ERP management process in public utility facilities and put forward risk management suggestions to address them; they analyzed the main risk sources of ERP implementation, elaborated an early-warning method for ERP implementation risks, and finally gave the basic principles of ERP implementation and risk management countermeasures [10]. Some scholars have specifically analyzed the characteristics and basic management processes of power grid infrastructure companies in terms of infrastructure design management, construction project process management, production process management, service quality management, quality and safety management, and corporate information management, taking ERP as the management focus; they analyzed and summarized the actual implementation of power grid infrastructure project solutions and showed the impact of system implementation [11]. To sum up, there is much research on the management of power grid infrastructure projects, but little on the management and application of immune fuzzy algorithms in such projects. This paper studies the immune fuzzy algorithm in the management and application of power grid infrastructure projects. It analyzes the problems in power grid infrastructure project management and the immune fuzzy algorithm on the basis of the literature, then designs a power grid infrastructure project management system based on the immune fuzzy algorithm, tests the designed system, and draws relevant conclusions from the test results.
2 Research on the Management of Power Grid Infrastructure Projects
2.1 Problems in the Management of Power Grid Infrastructure Projects
(1) The organizational structure is unreasonable. The management of distribution infrastructure projects is mainly carried out by the enterprise's grass-roots offices (mainly prefecture-level and regional offices), with whole-process project management as the main task, so network infrastructure management is often loose and extensive [12]. This is one of the causes of the problems in distribution network infrastructure management and is not conducive to improving quality control.
(2) Employees have not received systematic training. Most of the staff who directly manage the construction process of the power grid infrastructure are concentrated in the base power plant, and the power supply stations compete with each other, so there is a big difference between the system technology training given by the power supply stations and by the factory for power grid infrastructure management. Because of resource constraints, the factory is unable to organize targeted system technical training, and because of the large gap in the quality of service stations, the higher-level personnel department is often unable to provide truly qualified professional training for each unit. The inconsistency between the quality of the employees and the quality management requirements of distribution network facilities projects often causes engineering quality management problems.
(3) Insufficient application of high-tech equipment. The poor application of high-tech equipment affects the quality control of distribution network infrastructure projects. For example, shock wave testing equipment can better detect poor insulation of cable heads, but in practice, due to the shortage of personnel, it is rarely used for quality control before trial operation.
(4) The engineering materials are inappropriate. Recently, public utility companies have carried out major changes in material management, conducting large-scale integrated bidding at the grid and local enterprise levels. Due to the wide variety of bidding materials, it takes time for staff to adapt to the new operating methods, and the quality of some materials has deteriorated. In addition, the authority of municipal institutions over bidding materials has been significantly reduced; while management has been centralized, problems such as deterioration of material quality have appeared. Because of the long distance between management levels, project quality control is seriously affected and problems are often not solved in time.
(5) The design depth of the project is insufficient. During the preliminary design of a power distribution infrastructure project, insufficient design depth on the part of the design unit means that some important issues are not fully considered in the implementation process, and the construction time limit then often affects the quality of the project.
2.2 Immune Fuzzy Algorithm
The word "immunity" comes from Latin. Mankind has long known that people who recover from certain infectious diseases rarely contract them again, and this observation underlies work in microbiology and virology on preventing and treating infectious diseases. In other words, lasting resistance to an infectious disease is the natural result of the human body's response to the pathogen after the first infection. In medicine, the defense response is usually related to the human body's exposure to foreign
antigens. The self-defense system can produce a variety of antibodies, and a proper mechanism allows it to use only the required amount of antibody to perform this regulatory function. According to immune network theory, it can be assumed that the cells of one clone become active and start to multiply under the stimulation of antigens, and that other clones that recognize this type of cell also multiply more vigorously. In this way, as the process continues, lymphocyte activity is controlled. The macro description of the biological immune process is shown in Fig. 1.
Fig. 1. Macroscopic description of the biological immune process (first infection → recovery → natural/acquired immunity and immune memory; secondary infection → immunity, no further infection)
The main defense task of the immune system is to resist the attack of bacteria, viruses and other pathogens. The defense system achieves this through a variety of mechanisms of recombination and creation, producing antibodies against invaders' antigens. To achieve effective protection, the defense system must distinguish its own molecules and cells from foreign antigens. The defense system also differs from other low-level biological defense systems: in addition to recognition, it has learning and memory capabilities. Due to these characteristics, the immune system's second response to the same antigen is faster and stronger than the first. An artificial immune system algorithm is an algorithm constructed from elements analogous to those of the human defense system.
2.3 Algorithm
Affinity reflects the binding power of antibody and antigen, and the level of affinity is related to the distance between antibody and antigen. Common distance types are:
(1) Hamming distance:

D = \sum_{i=1}^{n} \delta_i, \quad \delta_i = \begin{cases} 1, & ab_i \neq ag_i \\ 0, & \text{otherwise} \end{cases}   (1)

where ab and ag denote the antibody and antigen strings.

(2) Euclidean distance:

D = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}   (2)

where x and y represent the antibody and antigen, respectively.
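Both distances translate directly into code; mapping distance to affinity (for example via 1/(1 + D)) is a common convention and an assumption here, since the paper does not give that mapping:

```python
import numpy as np

def hamming(ab, ag):
    """Eq. (1): count of positions where antibody and antigen differ."""
    return sum(int(a != g) for a, g in zip(ab, ag))

def euclid(x, y):
    """Eq. (2): Euclidean distance between antibody x and antigen y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(np.sum((x - y) ** 2)))

def affinity(x, y):
    return 1.0 / (1.0 + euclid(x, y))    # assumed: smaller distance, higher affinity

print(hamming("10110", "10011"))          # -> 2
print(affinity([1.0, 2.0], [1.5, 2.5]))
```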
3 Based on the Immune Fuzzy Algorithm in the Power Grid Infrastructure Project Management System
3.1 Project Management Module
(1) Establish a classification and coding plan, input project documents and classification lines from the plan source, and manage the design and central management plan.
(2) Introduce the WBS and FBS concept templates and their related goals, summarize the overall budget for management and cost management, and understand their modes and appropriate management.
(3) Further clarify planning standards and the conditions for central administrative approval.
(4) Rely on the project management process to improve the accuracy of design completion and economic management.
3.2 Hardware Management Module
(1) Standardize key data management and establish unified and standardized core data management templates and management models.
(2) Unify and standardize materials and service provision management methods, manage centralized materials, organize procurement resources, standardize service supply activities, and manage transaction and supply costs.
(3) Integrate procurement and distribution management, strengthen the ability to integrate procurement requirements, and vigorously support framework procurement.
(4) Standardize the inventory management mode, realize managed allocation of spare parts, and avoid the direct distribution of materials between base units.
3.3 Financial Management Module
(1) Fully integrate financial management theory with project management, the hardware management system, performance management and other technologies to reduce redundant secondary data entry, thereby improving the effectiveness, accuracy and integrity of the financial management system.
(2) Reasonably design the download module and diversify and integrate multiple types of financial analysis.
(3) Improve the accuracy of cash forecasts by adjusting the monthly payment schedule and weekly payment application process.
3.4 Project Risk Assessment

Project risk assessment refers to using appropriate methods to assess the probability of project risks and their impact once the risks have been identified. With the deepening of project risk management research, risk assessment methods continue to emerge, and choosing a scientific assessment method is crucial to formulating reasonable, targeted risk management strategies. Through risk assessment, companies can weigh the probability of a single risk factor against the consequences of the possible risk, and weigh the total risk probability against the consequences of the joint action of all risk factors. In both respects, whether the assessment results are faithful and scientific depends on the applicability of the risk assessment model; if the wrong model is selected, the company may not obtain correct results. It is therefore very important to choose an appropriate project risk assessment model. In this article, the immune fuzzy algorithm is used for the risk assessment of the project.

3.5 Risk Assessment Procedure
(1) Antibody initialization. A Gaussian distribution is used to sample randomly around the center of the target area; the M sampled particles are used as antibodies, and each antibody is given an affinity of 1/M.
(2) Sample selection. Following the principle that low-affinity particles are less likely to be selected and high-affinity particles more likely, M samples are drawn at random from the M input samples.
(3) Antibody cloning. Calculate the affinity of each antibody. Cloning follows the principle of suppressing low-affinity antibodies and promoting high-affinity ones so that the population converges quickly toward the global optimal solution; the number of clones is computed accordingly.
(4) Antibody mutation. Mutate all antibodies according to the computed variance.
(5) Antibody maturation. Calculate the affinity of all mutated antibodies, sort them by affinity, and reselect the top M antibodies to form a new antibody memory set.
(6) Termination check: if the termination condition is met, stop; otherwise return to step (2).
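The procedure can be summarised in code. Below is a minimal sketch of the clonal-selection loop in steps (1)-(6), assuming a generic affinity function and Gaussian initialisation around the target-area centre; all names and parameter values are illustrative, not from the paper:

import numpy as np

def immune_risk_assessment(affinity, center, sigma, M=50, iterations=100):
    """Clonal-selection sketch of steps (1)-(6): initialise antibodies around
    the target-area centre, resample by affinity, clone, mutate, and keep the
    best M antibodies as the new memory set."""
    rng = np.random.default_rng(0)
    antibodies = rng.normal(center, sigma, size=(M, len(center)))   # step (1)
    for _ in range(iterations):                                     # step (6)
        fitness = np.array([affinity(ab) for ab in antibodies])
        probs = fitness / fitness.sum()                             # step (2):
        picks = antibodies[rng.choice(M, size=M, p=probs)]          # heavier, likelier
        clones = []
        for ab in picks:
            f = affinity(ab)
            n = max(1, int(round(5 * f / fitness.max())))           # step (3)
            for _ in range(n):                                      # step (4):
                clones.append(ab + rng.normal(0, sigma / (1 + f), ab.shape))
        pool = np.vstack([antibodies, np.array(clones)])
        pool_fit = np.array([affinity(ab) for ab in pool])          # step (5)
        antibodies = pool[np.argsort(-pool_fit)[:M]]
    return antibodies[0]

# Illustrative affinity: higher for antibodies nearer the origin
best = immune_risk_assessment(lambda x: 1.0 / (1.0 + float(np.sum(x ** 2))),
                              center=np.zeros(2), sigma=1.0)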
4 System Test

This article tests the response time of the system: system functions are operated by different numbers of users and the response times are recorded. The experimental results are shown in Table 1:
Table 1. The experimental results (system response time by number of users)

Users | Project management | Material management | Salary management
100   | 12                 | 13                  | 14
150   | 12                 | 14                  | 15
200   | 13                 | 14                  | 16
250   | 13                 | 15                  | 17
Fig. 2. System test results (response time vs. number of users for the project management, material management and salary management modules)
It can be seen from Fig. 2 that as the number of users increases, the response time of the system also increases, but the increase is relatively gentle. It can be concluded that the response time of the system meets the corresponding requirements.
5 Conclusions

This article focuses on the immune fuzzy algorithm in the management of power grid infrastructure projects. It analyzes the related theories on the basis of the literature, designs a power grid infrastructure project management system based on the immune fuzzy algorithm, and tests the designed system. The test results show that the response time of the system meets the corresponding requirements.
Optimization of Power Grid Infrastructure Project Management Based on Improved PSO Algorithm

Yimin Tang1(B), Zheng Zhang1, Sen He2, and Jian Hu2

1 State Grid Shanghai Electric Power Company Engineering Construction Consulting Branch, Pudong, Shanghai 200120, China
[email protected]
2 Beijing JingHangTianLi Science and Technology Company Limited, Haidian, Beijing 100038, China
Abstract. The number of power grid construction projects is increasing, and many problems in the construction process urgently need to be solved. Because power companies lack risk analysis and prevention awareness in the management of infrastructure projects, irreversible economic losses are prone to occur. To this end, this article studies an improved PSO algorithm and conducts an in-depth study of the optimization of power grid infrastructure project management. This paper mainly uses experimental tests to compare the improved PSO algorithm with the gray correlation particle swarm algorithm on power grid infrastructure data. The experimental results show that the worst value of the improved PSO algorithm is 0.02, which is 1.00 lower than that of the gray correlation particle swarm algorithm, reflecting the advantage of the improved PSO algorithm in calculation accuracy.

Keywords: Improved PSO Algorithm · Power Grid Infrastructure · Project Management · Optimized Design
1 Introduction

The State Grid system reform has carried out a comprehensive reform in the construction field, and the management of power grid infrastructure projects has gradually become a very important part of the development process of power enterprises. Refined management of infrastructure projects, discovering problems during the construction process and proposing improvement plans are the goals of power grid construction. In this paper, an optimization model based on the improved PSO algorithm is established for these problems. There are many theoretical results of the research on optimization of power grid infrastructure project management based on the improved PSO algorithm. For example, Wang Xianghong proposed a daily distribution model of micro-grid suitable for photovoltaic and wind power generation in response to the large transmission loss of microgrid. And for this multi-objective optimization problem, a nearest neighbor stochastic
optimal PSO algorithm is proposed [1]. Sun Leichao takes the microgrid in a specific storage area as the research target, formulates the optimal operation strategy of the microgrid, and constructs the active performance model and power generation cost model of commonly used distributed power sources [2]. Zhong Zhen said that the optimal joint distribution of the air-heat system is an extremely complex NP problem, which is not easy to solve [3]. Therefore, an improved particle swarm optimization algorithm is proposed and applied to the joint optimization planning of the air-heating system. Therefore, this paper also optimizes the management of power grid infrastructure projects based on the improved PSO algorithm. This article first describes and analyzes the project management operation mode of power grid enterprises. Secondly, the particle swarm optimization algorithm is discussed and described in detail. Then it analyzes the construction of the index system of the influencing factors of the quality management of the power system distribution network infrastructure project, and builds the system according to the principles. Finally, an experimental test is carried out to compare and analyze the two algorithms mentioned in the article, and the results are obtained.
2 Optimization of Power Grid Infrastructure Project Management Based on Improved PSO Algorithm

2.1 Project Management Operation Mode of Power Grid Enterprises

The power distribution company sets up an engineering project department to comprehensively control and implement the company's infrastructure projects. In project management, the person in charge of the engineering department of each project organizes the project work in an all-round way, taking the lead in tasks including construction coordination, target control, schedule management, comprehensive evaluation, and information file management, and regularly organizing internal or external coordination meetings to improve working conditions and to evaluate and supervise employees in various positions so as to ensure the smooth progress of work. Their job responsibilities are therefore very heavy: when several projects run at the same time, the person in charge must manage internal work well while also taking on higher-level tasks, so the daily workload is very large [4, 5]. In the engineering department, the full-time budget estimator is the technical manager responsible for project funds. This position comprehensively manages the inflow and outflow of funds for each project, reviews and approves project budgets and payments, and pays each project expenditure. The technical department should therefore carefully select full-time budget estimators. However, there are limitations in managing project budgets: all the work performed by the full-time budget clerk is based on written documents such as project accounts, project data packages, project budgets, and technical drawings. In practice, many funds are operated according to specifications or experience values, so it is difficult for budget staff to react to the actual situation, and their work cannot be absolutely precise and efficient [6, 7].
For construction units, safety and quality management are the most important of the many management tasks. Safety and quality management is directly related to the personal safety of employees, and the efficient operation of machines and systems is a strong guarantee for the company's decisions; their importance is obvious. Safety and quality management positions are established in the engineering department. This position is mainly responsible for the daily and seasonal inspections of each construction site and for filling in the various inspection reports. Especially when several projects start at the same time, the frequency of inspections is very high and the workload increases correspondingly; if the distance between individual sites is large, the time and effort required for travel are also considerable [8, 9]. The subject of the contract is the construction unit of the project. These contracts are mainly design contracts, construction contracts and electrical installation contracts. In addition, the construction unit must sign an entrusted agency contract with the construction supervision unit, which can manage the construction of the project on its behalf, thus forming a three-party project management model: the construction unit, the construction supervision unit and the contractor. In the contract, the construction unit is the main body and has absolute decision-making power and control over the construction of the project. The supervision unit may monitor and manage quality, resources, information, and project progress on behalf of the construction unit; in other words, the supervision unit is an agent and has the right to deal with violations that occur on the construction site and to issue rectification and stop-work orders. During the construction of the entire project, the construction team is mainly responsible for administrative work, such as communication with relevant units before construction and coordination with the government. As a result, the ability of construction units to manage construction sites is also greatly reduced [10, 11]. In addition, other full-time jobs in the engineering department, such as full-time materials and full-time planning staff, have specialized work content, lack mutual communication, solidify their thinking and work habits, and deal with problems rigidly with low efficiency [12].

2.2 Particle Swarm Optimization Algorithm

With the gradual liberalization of the electricity market, the integration of distributed power sources has increased the scale of the system and the number of constraints in the system model, which makes the planning model more complicated. Therefore, when solving the peak-shaving scheduling optimization problem, it is necessary not only to establish an appropriate model but also to find an appropriate solution method. The particle swarm algorithm is a global optimization algorithm with a simple structure, convenient operation, few parameters, a simple program flow and an inherent parallel search mechanism; it places essentially no requirements on the objective function, which makes it well suited to the class of complex, high-dimensional mixed integer programming problems found in power grid peak-shaving and dispatching. In the real world, many optimization problems in science and engineering are multi-objective programming problems. However, trade-offs are the most
common problem in multi-objective optimization. Therefore, finding solutions that minimize the conflict between different goals is the key to studying multi-objective problems. Traditionally, a multi-objective optimization problem is solved by transforming it into a set of single-objective optimization problems, for example by classic methods such as the component weighting method, the constructor method, and the component optimization method. However, these methods make the assignment of the weight of each goal strongly subjective. In the process of optimizing the objective function, the progress of each goal becomes uncontrollable, because the optimization target is only the weighted sum of the goals; at the same time, because the decision variables of the goals conflict, the structure of the resulting objective function is complicated. In cybernetics, a system whose information is completely transparent is called a white system, and one whose information is completely unknown is called a black system; systems in between are called grey systems. For a high-dimensional nonlinear system, because of the large number of variables and the complex relationships between them, the concept of grey correlation can be used to select only some parameters that reflect the characteristics of the system while ignoring the correlation between certain parameters. By using these parameter sequences to characterize the state of the complex system, the system is simplified. Grey theory judges the closeness of sequences by the similarity of their shapes: the higher the similarity, the higher the grey correlation degree between the sequences. The degree of relevance is:

B = \frac{1}{m} \sum_{l=1}^{m} r_l   (1)

where m is the number of target vectors and r_l is the correlation coefficient of the l-th target vector. The higher the degree of association, the larger B is, and the closer the represented system state is to the ideal state of the system. In order to balance the global and local search capabilities of the PSO algorithm, this paper uses a nonlinear dynamic inertia weighting coefficient to improve the basic PSO algorithm:

W = \begin{cases} W_{min} + \dfrac{(W_{max} - W_{min}) \cdot (g - g_{min})}{g_{avg} - g_{min}}, & g \le g_{avg} \\ W_{max}, & g > g_{avg} \end{cases}   (2)

In the formula, W_{min} and W_{max} are the minimum and maximum values of W, g is the current objective function value, g_{avg} is the average objective function value, and g_{min} is the minimum objective value. Based on the analysis and improvement of the parameters of the standard particle swarm algorithm, the basic working principle of the improved PSO algorithm in this paper is as follows. First, the population is initialized, including the position and velocity of the particles; the velocity is initialized to zero and each particle's personal best is set to its current position. All non-dominated solutions obtained during each iteration are saved in the external archive set S, which is updated in each iteration until its size reaches the upper limit; a nearest-neighbour-based cleanup is then used to remove crowded non-dominated solutions. The details are shown in Fig. 1:
Fig. 1. Improved particle swarm algorithm process (initial population → fitness evaluation → individual optimal solution → global optimal solution and external archive → particle velocity update → optimality and termination checks → end)
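For reference, a minimal sketch of the inertia-weight rule of Eq. (2) and one velocity/position update of the loop in Fig. 1; the weight bounds, c1 = c2 = 2.0 and all variable names are illustrative, not from the paper, and the Pareto-archive maintenance is omitted:

import numpy as np

def inertia_weight(g, g_min, g_avg, w_min=0.4, w_max=0.9):
    # Eq. (2): better-than-average particles (g <= g_avg) get a weight that
    # grows from w_min at the best objective value to w_max at the average;
    # worse-than-average particles keep the fully exploratory weight w_max.
    if g_avg == g_min:          # degenerate swarm: fall back to w_max
        return w_max
    if g <= g_avg:
        return w_min + (w_max - w_min) * (g - g_min) / (g_avg - g_min)
    return w_max

def pso_step(x, v, pbest, gbest, fvals, c1=2.0, c2=2.0):
    # One velocity/position update of the loop sketched in Fig. 1
    rng = np.random.default_rng()
    g_min, g_avg = fvals.min(), fvals.mean()
    for i in range(len(x)):
        w = inertia_weight(fvals[i], g_min, g_avg)
        v[i] = (w * v[i]
                + c1 * rng.random() * (pbest[i] - x[i])
                + c2 * rng.random() * (gbest - x[i]))
        x[i] += v[i]
    return x, v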
2.3 Construction of the Index System of Influencing Factors for the Quality Management of Power System Distribution Network Infrastructure Projects

Since the power system has its own operating rules and characteristics, the safety of the power network is the issue of greatest concern to the power network operator. With the continuous advancement of energy system reform, the market demand-driven concept has gradually become popular. After more than 30 years of rapid development under reform and opening up, the network structure has been significantly improved and the security of the power grid has been guaranteed, and grid operators have begun to invest in weak distribution networks. With this wave of investment in distribution network infrastructure projects, network companies must, on the one hand, formulate a step-by-step management system to regulate the quality management activities of subordinate units and monitor the quality of distribution network infrastructure projects from a macro perspective. On the other hand, quality monitoring units should also monitor the key nodes of distribution network infrastructure projects and carry out quality control at the micro level, so as to meet the requirements of the project department and generate returns for the demand side on time.

(1) Principles for the construction of the index system of the influencing factors of the quality management of distribution network infrastructure projects.

Objectivity. The quality management impact index system of the distribution network infrastructure project should follow the objective laws of the construction of the distribution network infrastructure project system.
Clarity. The indicators in the index system of the influencing factors of the quality management of power system distribution network infrastructure engineering must be clearly stated and clearly distinguished from other indicators; any ambiguity or misuse of indicators should be avoided.

Versatility. The indicators can be widely used in various projects of power system distribution network infrastructure engineering and have universal significance.

Systematicness. All stages of the entire project can be covered, so that systematic, rather than one-sided and partial, project quality management assessments can be carried out.

Practicality. The indicator system concisely and effectively reflects the quality management of the network infrastructure project, can point out major issues, and provides a basis for quality management decision-making.
3 Experimental Analysis

3.1 Metrics

In order to evaluate the performance of the algorithm quantitatively on multi-objective optimization problems, multiple evaluation indicators are needed. For single-objective optimization problems, one can simply compare the fitness values of the optimal solutions obtained by different algorithms; because the solution is unique, evaluating performance is straightforward. However, because a multi-objective optimization problem has a set of non-dominated solutions instead of a single optimal solution, the effect of each indicator on the set of non-dominated solutions must be fully considered when selecting evaluation measures. The convergence indicator evaluates the distance between the non-dominated solutions obtained by the algorithm and the true Pareto front. The diversity indicator evaluates the uniformity of the obtained non-dominated solutions. The coverage rate judges whether the algorithm has found all the non-dominated solutions, to ensure that the algorithm can deal effectively with multi-objective optimization problems.

3.2 Comparison Algorithm

The gray correlation particle swarm optimization algorithm is compared with the improved PSO algorithm. The gray correlation particle swarm optimization algorithm maintains population diversity through crowding-distance operators and elitist strategies, and is often used as a benchmark algorithm. In order to avoid inaccuracies caused by random factors, each function is tested 20 times, and the optimal value, worst value, mean, median difference and standard deviation of the two algorithms are then compared and analyzed.
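As one concrete example of the convergence indicator described in Sect. 3.1, here is a sketch of the widely used generational-distance measure; the fronts below are hypothetical, not the paper's data:

import numpy as np

def generational_distance(front, reference):
    """Average distance from each obtained non-dominated solution to its
    nearest point on the reference Pareto front; smaller means better
    convergence."""
    front, reference = np.asarray(front), np.asarray(reference)
    dists = np.min(np.linalg.norm(front[:, None, :] - reference[None, :, :],
                                  axis=2), axis=1)
    return dists.mean()

# Hypothetical obtained front and reference Pareto front
obtained = [[0.1, 0.9], [0.5, 0.5], [0.9, 0.2]]
true_pf = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(generational_distance(obtained, true_pf))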
3.3 Experimental Setup

Matlab programming is used, and the improved particle swarm optimization algorithm and the gray correlation particle swarm optimization algorithm are tested on power-grid-related functions. During the experiment, the population size of the gray correlation particle swarm optimization algorithm is set to 100, the crossover probability is set to 0.8, tournament selection is used, and the mutation probability is set to 1/L, where L is the number of decision variables. The initial number of particles of the improved PSO algorithm is set to 100, and the algorithm loop ends when the maximum number of iterations is reached.
4 Experimental Results and Analysis

4.1 Test Results of the Two Algorithms

Through statistical comparison of the test results, the optimal value, worst value, mean, median difference and standard deviation of the improved particle swarm optimization algorithm and the gray correlation particle swarm optimization algorithm under the power grid infrastructure test function are obtained, as shown in Table 1:

Table 1. Test results of the two algorithms

Indicator          | PSO GID | Improved PSO
The optimal value  | 8.31    | 8.05
Worst value        | 1.02    | 0.02
Mean               | 9.37    | 9.21
Median difference  | 9.03    | 9.02
Standard deviation | 4.29    | 3.67
Fig. 2. Comparison results of the two algorithms (optimal value, worst value, mean, median difference and standard deviation for PSO GID and the improved PSO)
As shown in Fig. 2, when the improved particle swarm algorithm and the gray correlation particle swarm optimization algorithm process the test function, both algorithms converge to the true Pareto front. However, the comparison shows that the improved particle swarm optimization algorithm has higher convergence performance, giving it a clear advantage over the gray correlation particle swarm optimization algorithm.
5 Conclusion

In the research on power grid project management optimization, this paper first expounds the related theories and conducts in-depth research on particle swarm optimization. It then builds a power grid infrastructure management system based on the improved PSO algorithm. This method can solve the problem that traditional methods struggle to reach the global optimal solution. The experimental data show that, in practice, the improved PSO method can be used to achieve comprehensive budget control and cost control of power grid infrastructure projects.
References 1. Rogdakis, K., Karakostas, N., Kymakis, E.: Up-scalable emerging energy conversion technologies enabled by 2D materials: from miniature power harvesters towards grid-connected energy systems. Energy Environ. Sci. 14(6), 3352–3392 (2021) 2. Phommixay, S., Doumbia, M.L., Lupien St-Pierre, D.: Review on the cost optimization of microgrids via particle swarm optimization. Int. J. Energy Environ. Eng. 11(1), 73–89 (2019). https://doi.org/10.1007/s40095-019-00332-1
3. Salkuti, S.R.: Multi-objective based economic environmental dispatch with stochastic solarwind-thermal power system. Int. J. Elec. Comput. Eng. 10(5), 4543 (2020) 4. Hannan, M.A., et al.: Battery energy-storage system: a review of technologies, optimization objectives, constraints, approaches, and outstanding issues. J. Energy Storage 42, 103023 (2021) 5. Abyad, R.: The role of project management in public health. World Fam. Med. 19, 87–96 (2021) 6. Zongwei, Z.: Research on project management of power infrastructure projects. Eng. Technol. Full Text Edition 3, 00187 (2017) 7. Jian, L.: Quality management optimization strategy for power system infrastructure and distribution network projects. Dig. Design (Part 1) 000(012), 318–319 (2019) 8. Anderson, J.: US Northeast power grid operators begin preparations for massive offshore wind additions. Platts Energy Trader, 17–18 (12 Aug 2019) 9. Maharaja, U.K., Surange, V., Murugan, P., et al.: Execution of transmission line projects with innovative methods for augmentation of EHV network in the mega city of Mumbai - challenges & solutions. J. Int. Assoc. Electr. Gener. Trans. Distrib. 31(2), 41–47 (2018) 10. Rajakovic, N.L., Shiljkut, V.M.: Long-term forecasting of annual peak load considering effects of demand-side programs. J. Mod. Power Syst. Clean Energy 6(1), 145–157 (2017). https://doi.org/10.1007/s40565-017-0328-6 11. Braeckman, J.P., Skinner, J.: Financial risks and FutureDAMS. Int. Water Power Dam Const. 71(8), 32–33 (2019) 12. Ustun, T.S., Hussain, S., Kirchhoff, H., et al.: Data standardization for smart infrastructure in first-access electricity systems. Proc. IEEE 107(9), 1790–1802 (2019)
Data Analysis of Lightning Location and Warning System Based on Cluster Analysis

Yanxia Zhang1, Jialu Li2, and Ruyu Yan3(B)

1 Baiyin Meteorological Bureau, Baiyin, Gansu, China
2 Queen's University Belfast, Belfast, UK
3 Chinese University of Hong Kong, Shenzhen, Hong Kong
[email protected]
Abstract. Through the analysis of lightning location data, the characteristics of lightning are studied, and clustering algorithms for data mining are compared and improved so that they can be adapted to the analysis of lightning location data and can complete the task of mining such data, bringing real-time warning and forecasting information to the lightning protection and meteorological departments. The aim of this paper is to study the data analysis of lightning location and warning systems based on clustering algorithms. A comprehensive analysis of lightning location data is carried out using cluster analysis methods to confirm its aggregated distribution characteristics in the latitude and longitude plane. Secondly, density-based clustering was selected as the analysis method for this study based on the characteristics and applicability principles of clustering algorithms, and the ADBSCAN algorithm was implemented. Finally, a clustering-based proximity forecasting method is selected and effective solutions to specific problems are proposed, with satisfactory results.

Keywords: Cluster Analysis · Lightning Localisation · Early Warning System · Data Analysis
1 Introduction

Lightning is a destructive and highly random natural phenomenon. It stores enormous energy that, when released, has an unpredictable impact on human life [1, 2]. On the one hand, it poses an unpredictable threat to the safety of human life; in addition, public services such as communications and aviation are among the areas most severely affected by lightning. With the development of technology, the degree of modernisation and the increasing sophistication and cost of technological products in all fields, humans are benefiting from and gaining confidence in high-tech living [3, 4]. The study of lightning activity can therefore provide early warning and prediction services for lightning occurrences, thus greatly reducing the impact of lightning on humans. Based on the continuous development and maturity of lightning tracking technology in China, a large amount of real-time and accurate lightning tracking data has been accumulated, recording in
detail the time of lightning occurrence, lightning intensity, positioning methods and other information, creating an environment for comprehensive analysis [5]. Lightning localisation is an important means of studying the lightning discharge process and storm activity. Total-lightning pinpointing based on low-frequency signals has improved in many ways, but most methods rely on post-processing of waveforms and localisation is slow. Cristina Villalonga-Gómez presents an artificial intelligence technique for lightning localisation based on low-frequency electric field sensing arrays (LFEDA). A new method based on deep-learning coded feature matching is proposed, providing a means to locate total lightning quickly and accurately. Compared with other LFEDA localisation methods, the new method improves the matching efficiency by more than 50% and significantly increases the localisation speed [6]. Lightning prediction is important for reducing potential damage to electrical installations, buildings and humans. However, existing lightning warning systems (LWS) operate with boundary methods and have low prediction accuracy. To improve lightning prediction accuracy, Jinho Choi developed an intelligent LWS based on electromagnetic fields and artificial neural networks: an electric field mill sensor and a pair of circular antennas were designed to detect, in real time, the electric and magnetic fields caused by lightning. The electric field, temperature and humidity variability obtained in the first 2 min of a strike were used to train a neural network with a back-propagation algorithm. After 6 months of lightning observation and prediction, the prediction accuracy of the proposed LWS was found to be 93.9% [7]. Thunderstorms and lightning are among the most violent natural phenomena and cause much economic damage. Determining the specific geographical area of a strike is essential for the emergency services, which can increase their effectiveness by intensively covering the affected area. To achieve this on an urban scale, the design, prototyping and validation of a network of distributed Internet of Things (IoT) devices has been presented. The IoT devices have lightning detection capabilities and are synchronised with the other devices in the sensor network, and the network can locate events thanks to a trilateration algorithm implemented in a big-data environment. A low-cost lightning detection system has been designed based on the AS3935 sensor; its effective detection range is limited, but accuracy and performance improve when it is added to the structured IoT network [8]. This paper uses a clustering approach to analyse lightning hazard data, providing solid scientific guidance for early warning and lightning protection. With the expansion of data mining techniques into many domains, data mining has been applied to all areas of life, but little research has been done on mining lightning location data and lightning hazard text data. Data mining related to lightning is challenging, but the results of the analysis will create great value for lightning prevention and mitigation.
2 Research on Lightning Location and Early Warning System Based on Cluster Analysis

2.1 Positioning Principle

When lightning occurs, it releases very strong currents and generates strong electromagnetic waves. The changes in the electromagnetic field caused by lightning can be
accurately detected from a central station to obtain information such as lightning polarity and current magnitude. Multiple lightning detection stations use these changes to simultaneously measure the low-frequency/VLF signal generated by lightning, filter out intra-cloud flash signals and locate cloud-to-ground lightning. The fixed coordinates of the detection stations are known, so the strike point can be resolved by geometry (e.g. from the direction and distance of each station relative to the strike point). There are three main localisation methods: direction finding, time-difference localisation, and combined direction/time-difference localisation [9, 10].

2.2 Lightning Localisation and Warning Systems

The entire Web interface of the lightning location and warning system is divided into three parts: the functional area, which is used to select query requirements and display paths; the data view area, which displays system query results and lightning-strike-related information; and the map interface, where the map only displays the main functional area [11, 12]. In the functional area, there is an option to display an animation of the lightning strike process on the map layer, and the most common query method is a keyword query. Functions performed in the data view area include displaying the number of lightning strikes, lightning strike tracking information for the location workstation, and the results of common queries. The query results visually show the lightning strike information during the specified query period. The name of the located detection station can be displayed directly, and all query results can be exported in bulk to an Excel spreadsheet for easy recall. To the left of the map preview area is a map zoom button, or the map can be zoomed with the mouse wheel.

2.3 Density Clustering Algorithm DBSCAN

The DBSCAN algorithm is a common density clustering algorithm for two-, three- and high-dimensional spaces. This paper focuses on the distribution characteristics of lightning location data in the latitude and longitude plane, so the data can be treated as two-dimensional. The core idea of the DBSCAN algorithm is that every point in a cluster must satisfy a density condition: the number of points in the neighbourhood of each point in the cluster must be greater than a given value. The DBSCAN algorithm has two very important parameters: the Eps neighbourhood radius and the MinPts minimum density. While DBSCAN has significant advantages, it also has some disadvantages. DBSCAN does not require a preset number of clusters, but the Eps neighbourhood radius and the MinPts minimum density must be provided manually as input parameters, and determining suitable Eps and MinPts values is usually difficult. For data sets with different distribution characteristics and different numbers of records, the best clustering effect of DBSCAN corresponds to different input parameters. The algorithm thus relies on more than one parameter, and the input parameters must be determined according to the actual circumstances of the application.
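For reference, a minimal DBSCAN run over latitude/longitude points using scikit-learn; the data and the Eps/MinPts values are illustrative, and treating degrees as planar coordinates is a simplification acceptable only over small regions:

import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative lightning-location records as (latitude, longitude) pairs
points = np.array([[25.14, 107.57], [25.15, 107.55], [25.13, 107.58],
                   [23.17, 105.96], [23.18, 105.95], [28.90, 110.10]])

# Eps neighbourhood radius (in degrees here) and MinPts minimum density
labels = DBSCAN(eps=0.05, min_samples=2).fit_predict(points)
print(labels)  # e.g. [0 0 0 1 1 -1]; -1 marks isolated (noise) points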
3 Lightning Location Data Characterisation

3.1 Improving DBSCAN

The DBSCAN algorithm can automatically identify isolated points and find clusters of arbitrary shape, but it requires manual determination of the Eps neighbourhood parameter. This paper requires a clustering analysis of the lightning localisation results, and since the degree of aggregation varies between actual localisation results, the neighbourhood parameter is difficult to determine uniquely and the clustering is difficult to optimise. This paper therefore proposes an improved ADBSCAN algorithm that determines the neighbourhood parameter from the statistical properties of the localisation result dataset, achieving adaptive parameter selection. In order to implement adaptive Eps selection, the distance distribution matrix of the dataset D must first be calculated:

DIST_{n \times n} = \{ dist(i, j) \mid 1 \le i \le n,\ 1 \le j \le n \}   (1)

Each element of DIST_{n×n} represents the distance between objects i and j in dataset D, and each row of DIST_{n×n} is arranged in ascending order. The peak of the distribution curve is found by fitting an inverse Gaussian statistical model. The probability density of the inverse Gaussian distribution is:

p(x) = \sqrt{\frac{\lambda}{2\pi x^3}}\, e^{-\lambda (x-\mu)^2 / (2\mu^2 x)}   (2)

where μ and λ can be obtained by maximum likelihood estimation. Setting the first-order derivative of the probability density to zero gives the extreme value. Thus, when MinPts = k, the neighbourhood parameter Eps_k is obtained from the inverse Gaussian fit, as shown in Eq. (3):

Eps_k = \frac{\mu_k \left( \sqrt{9\mu_k^2 + 4\lambda_k^2} - 3\mu_k \right)}{2\lambda_k}   (3)
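A sketch of the adaptive parameter selection of Eqs. (1)-(3), assuming the closed-form maximum-likelihood estimates of the inverse Gaussian (sample mean for μ, reciprocal-difference estimator for λ); the data are synthetic:

import numpy as np

def adaptive_eps(data, k):
    # Eq. (1): pairwise distance matrix with each row sorted ascending
    diff = data[:, None, :] - data[None, :, :]
    dist = np.sort(np.linalg.norm(diff, axis=2), axis=1)
    dk = dist[:, k]                  # k-th nearest-neighbour distance per point
    # Closed-form maximum-likelihood estimates for the inverse Gaussian, Eq. (2)
    mu = dk.mean()
    lam = 1.0 / np.mean(1.0 / dk - 1.0 / mu)
    # Eq. (3): the mode (peak) of the fitted density, used as Eps_k
    return mu * (np.sqrt(9 * mu**2 + 4 * lam**2) - 3 * mu) / (2 * lam)

rng = np.random.default_rng(1)
pts = rng.normal(0.0, 1.0, size=(200, 2))   # illustrative 2-D location data
print(adaptive_eps(pts, k=4))               # MinPts = 4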
3.2 Example of a Lightning Strike Incident

The geographic coordinates of each detection station in the lightning location and warning system and the detected arrival times are shown in Fig. 1 and Table 1.
Table 1. Example data

Station No | Latitude | Longitude | Arrival time/s
1          | 25.14    | 107.57    | 0.264
2          | 25.86    | 105.74    | 0.277
3          | 24.67    | 106.66    | 0.285
4          | 23.17    | 105.96    | 0.299
5          | 24.96    | 107.03    | 0.313
Fig. 1. Coordinates of each detector and the time of lightning strike detected (latitude and longitude values by station number)
The lightning localisation for Table 1 was performed with a localisation algorithm based on the modified DBSCAN method. Three or four of the five stations were randomly selected for each localisation, and the resulting 10 localisation results were clustered by the DBSCAN and ADBSCAN algorithms respectively. Figure 2 compares the other methods with the improved DBSCAN-based lightning localisation algorithm.
Fig. 2. Comparison of positioning results based on ADBSCAN (positioning error in metres for the TDOA, PSO, MFOA, DBSCAN and ADBSCAN methods)
After introducing the DBSCAN algorithm into the localisation calculation, the localisation accuracy is significantly higher than that of the traditional time-difference method, with a localisation error of 216.46 m. However, the DBSCAN algorithm requires manual determination of the neighbourhood parameters. The improved ADBSCAN algorithm estimates the neighbourhood parameters from the statistical characteristics of the positioning result dataset, achieving adaptive parameter determination and a better clustering effect; this reduces the positioning error to 153.86 m, further verifying the effectiveness of the algorithm.
4 Clustering-Based Proximity Forecasting

4.1 Overall Process

After considering the characteristics of lightning location data and various forecasting methods, this paper selects a clustering-based proximity forecasting method. This method is essentially a temporal extrapolation of clustered clouds and mainly includes data processing, cloud tracking and identification, motion feature analysis and lightning strike area prediction. The overall process is shown in Fig. 3.
Fig. 3. Overall implementation flow chart (select data samples → data preprocessing → generate thunderstorm clouds → compute cloud centroids → fit motion trajectory → calculate speed and direction → shift the current lightning location data by one time interval along the estimated speed and direction to obtain the prediction area)
4.2 Cloud Mass Cores

As lightning location data are clustered in the latitude and longitude plane, and as cloud clusters are long-lasting with relatively stable core points, the coordinates of the centre of mass of a clustered cloud can be used to represent the position of the entire cloud cluster. The centre of mass of each clustered cloud is calculated as:

B = \frac{1}{n} \sum_{i=1}^{n} b_i, \qquad L = \frac{1}{n} \sum_{i=1}^{n} l_i   (4)

where n is the number of data points in the clustered cloud mass and b_i and l_i are the latitude and longitude of each data point. In order to verify the effectiveness of the above identification method, this paper selects three consecutive periods of data from the lightning location and warning system, each lasting 15 min, numbered T1, T2 and T3. First, the first 15 min of data are divided by time into three data segments, each lasting 5 min. The three data segments are then grouped into three clouds C1, C2 and C3. The centre of mass of each cloud was then calculated, and finally each clustered cloud group was
identified and the motion path was adjusted according to the centre of mass coordinates in order to calculate the mean velocity and direction of motion for C1, C2 and C3. The results of the calculations are shown in Table 2.
Fig. 4. Prediction results of average speed and direction of cluster clouds (direction in degrees and speed in km/min for C1-C3)
Table 2. Calculation results

Cloud cluster | Direction     | Direction angle (°) | Speed (km/min)
C1            | North by East | 27.42               | 0.35
C2            | North by East | 25.94               | 0.41
C3            | North by East | 31.74               | 0.38
As can be seen from Fig. 4, the clustered cloud masses move in essentially the same direction and have similar velocity values. The fitting results also show that, over a short period of time, the centre-of-mass points of the clustered cloud clusters are distributed on both sides of, and close to, the fitted straight line. This justifies using a straight line to fit cluster cloud trajectories over short periods.
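A minimal sketch of the extrapolation built on Eq. (4): compute each 5-min cluster's centroid, take the mean centroid displacement per slice as the straight-line velocity, and shift the latest cluster one interval ahead as the predicted strike area (all data here are illustrative):

import numpy as np

def predict_area(clusters, dt=5.0):
    """clusters: list of (n_i, 2) arrays of (lat, lon) points, one per
    consecutive time slice; returns the latest cluster shifted one slice
    ahead along the mean centroid velocity."""
    centroids = np.array([c.mean(axis=0) for c in clusters])  # Eq. (4)
    # Straight-line trajectory: mean centroid displacement per slice
    velocity = np.diff(centroids, axis=0).mean(axis=0) / dt   # degrees per minute
    return clusters[-1] + velocity * dt

# Illustrative clusters C1-C3 drifting north-by-east
c1 = np.array([[24.90, 107.00], [24.92, 107.02]])
c2 = c1 + [0.02, 0.01]
c3 = c2 + [0.02, 0.01]
print(predict_area([c1, c2, c3]))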
5 Conclusions

With the rise and advancement of artificial intelligence and big data technologies, the relevant algorithms are widely used in other research areas with good results, and the combination of machine learning and weather forecasting is becoming the order of the day. Machine learning algorithms can combine real-time monitoring data with large amounts of historical weather data through supervised, semi-supervised or unsupervised learning to learn from the weather evolution process and achieve good forecasting results. In this paper, we analyse the shortcomings in two aspects, lightning localisation and lightning warning, apply clustering algorithms to both, and propose a lightning localisation and warning system based on cluster analysis. In future research, we will invest more in the intersection of artificial intelligence, big data technology and weather forecasting, using neural-network-related algorithms to tackle problems such as lightning prediction and lightning hazard risk assessment. We have made improvements and innovations according to the characteristics of the meteorological field to build prediction and evaluation models suitable for it, and hope to make more progress.

Acknowledgements. This work was supported by IPIS2012.
References 1. Diehl, A., Pelorosso, L., Ruiz, J., Pajarola, R., Eduard Gröller, M., Bruckner, S.: Hornero: thunderstorms characterization using visual analytics. Comput. Graph. Forum 40(3), 299–310 (2021) 2. Orf, L.: Modeling the world's most violent thunderstorms. Comput. Sci. Eng. 23(3), 14–24 (2021) 3. Das, S., Datta, S., Shukla, A.K.: Detection of thunderstorm using Indian navigation satellite NavIC. IEEE Trans. Geosci. Remote. Sens. 58(5), 3677–3684 (2020) 4. Grazioli, J., et al.: An adaptive thunderstorm measurement concept using C-band and X-band radar data. IEEE Geosci. Remote. Sens. Lett. 16(11), 1673–1677 (2019) 5. Klimov, P.A., et al.: UV transient atmospheric events observed far from thunderstorms by the Vernov satellite. IEEE Geosci. Remote. Sens. Lett. 15(8), 1139–1143 (2018) 6. Villalonga-Gómez, C., Cantallops, M.M.: Profiling distance learners in TEL environments: a hierarchical cluster analysis. Behav. Inf. Technol. 41(7), 1439–1452 (2022) 7. Choi, J., et al.: A large-scale bitcoin abuse measurement and clustering analysis utilizing public reports. IEICE Trans. Inf. Syst. 105-D(7), 1296–1307 (2022) 8. Tanaka, T., Cruz, A.F., Ono, N., Kanaya, S.: Clustering analysis of soil microbial community at global scale. Int. J. Bioinform. Res. Appl. 18(3), 219–233 (2022) 9. Balaji, K.M., Subbulakshmi, T.: Malware analysis using classification and clustering algorithms. Int. J. e-Collab. 18(1), 1–26 (2022) 10. Abdalzaher, M.S., Sami Soliman, M., El-Hady, S.M., Benslimane, A., Elwekeil, M.: A deep learning model for earthquake parameters observation in IoT system-based earthquake early warning. IEEE Internet Things J. 9(11), 8412–8424 (2022) 11. Petropoulos, A., Siakoulis, V., Stavroulakis, E.: Towards an early warning system for sovereign defaults leveraging on machine learning methodologies. Intell. Syst. Account. Finance Manag. 29(2), 118–129 (2022)
12. Cerný, J., Potancok, M., Castro-Hernandez, E.: Toward a typology of weak-signal early alert systems: functional early warning systems in the post-COVID age. Online Inf. Rev. 46(5), 904–919 (2022)
Application of Embedded Computer Digital Simulation Technology in Landscape Environment Design

Ning Xian and Shutan Wang(B)

College of Design and Art, Shenyang Jianzhu University, Shenyang, Liaoning, China
[email protected]
Abstract. With the development of the social economy and the acceleration of urbanization, people pay more attention to the harmony and beauty of landscape environment (LE) design, and are committed to building an environment in which people live in harmony with nature, so that people, life, environment and nature form a harmonious organic whole. As an independent design discipline, environmental design plays an important role in raising the level and quality of decoration design. In recent years, people have added more landscape elements to environmental design to create a green, ecological and environmentally friendly space. The landscape element is an innovative design mode integrating natural and human landscapes, and its application in environmental design helps improve and enrich people's quality of life and the LE. Extracting reusable elements from the traditional environment and from natural elements, finding the commonalities between these elements and landscape design, and highlighting the natural culture needed in LE design are crucial to forming a design approach with the characteristics of the times. At the same time, designers should dare to break through the shackles of traditional concepts to design new landscape phenomena. The practical value of LE works lies not only in beautifying the living environment we depend on, but also in extracting the beautiful elements of nature and humanistic art, bringing more spiritual experience to modern people. Therefore, in future landscape design, environmental designers should fundamentally raise the importance attached to natural beauty and harmonious beauty, and apply and innovate the LE rationally without damaging the ecological environment. This paper briefly analyzes and explores the application of LE design.

Keywords: LE design application · Embedded computer · Digital simulation technology · Computer technology
1 Introduction

In different times, the application process of integrating LE into design is often accompanied by different design applications. Therefore, the new LE is a topic that is often discussed by people in the environmental design industry in their creation, such as the
matching of old city wall tiles and steel frame glass [1]. It is precisely because of the rich design background of the LE that the original ancient city wall tiles and wall reliefs complement each other and can fully reflect the charm of the traditional ancient city [2, 3]. LE design cannot exist independently of art. Only by adding new materials and new technology can the beauty of technology be reflected. But when using new materials, we should be flexible and innovative. The best way is to arouse people's specific feelings through changes in color, shape and texture, and bring strong appeal [4]. Moreover, new materials can surpass traditional materials and reach new heights in color, transparency and texture. Throughout the development of the LE in recent years, a lot of transparent plastic and glass has been applied, and these materials have made the whole LE transparent [5]. At present, the application of LE engineering in landscape design can no longer be satisfied with the original environment. Yufu C argues that designers should break the routine, find new ways, constantly create LE materials and surfaces that can reproduce the natural landscape, and continuously research new, non-toxic green design applications that meet the sustainable development of the environment, so as to create an ecological LE [6]. In LE design, the environmental designer needs to fully consider the regional environment. Yin Y emphasizes respecting the requirements that the natural environment places on environmental design, adapting measures to local conditions, and avoiding large-scale excavation that damages the overall beauty of the LE; this has become the consensus of environmental design [7]. The application of the ecological landscape in LE design is mainly carried out through the overall layout of the urban environment and the ecological planning layout. The urban environment layout mainly concerns the orientation, location and surroundings of the activity area, and Min J plans the activity area accordingly; ecological planning and layout within the activity area includes the planning and design of squares, roads and courtyards [8]. However, the above research on the application of LE design is not in-depth and needs to be improved. As discussed in this article, with the continuous progress of science and technology, the concept and form of LE design are changing imperceptibly, and a variety of new LEs are springing up like mushrooms. In future environmental design, on the one hand, we should skillfully combine the traditional LE with new landscape design; on the other hand, we should dare to break through the shackles of traditional concepts to design more new landscapes.
2 Application Method of Computer LE Design

2.1 Embedded Computer

At present, embedded computer systems in LE design are developing towards application-specific, platform-based and intelligent forms. The set of mission functions originally completed by multiple computers is now completed by a single integrated, embedded computer [9]. The mission tasks originally completed by separate computers are handled by the corresponding hardware modules and software on the integrated computer, and communication between the modules is carried out over the integrated computer's high-speed bus. In
addition, many computer modules also include FPGA, DSP, human-computer interface, process control layer, hardware driver, etc. Specific features are shown in Fig. 1.
Fig. 1. Basic features of embedded computer (Windows XP operating system, human-machine interface, process control layer, hardware driver, FPGA and DSP program modules)
2.2 Computer Technology Mode

The development of computer technology has gone through a series of gradually maturing stages. Since the beginning of the 20th century, computer technology has developed from the relatively simple computing functions of early computers, to multimedia network functions, and now to the development and application of computer LE design. The continuous improvement of these functions has greatly promoted the improvement of the entire computer development system. At this stage, computer network systems are widely used in various fields of production and life as well as in computer development itself; the ongoing construction of these application systems, the refinement of the development process, and the improvement of the various technical fields and systems have all promoted the continuous improvement of the entire computer information system [10]. With the advent of the era of application in environmental design, the development space of the entire computer information system is broader, and
the application mode of computer technology in environmental design is also continuously improving. Therefore, in view of these issues in the development of computer technology, new computer development modes and advances in the technical level are of great significance, and deserve close attention in the research process. The flow chart of computer technology research is shown in Fig. 2:
Fig. 2. Computer technology process diagram (new computer data → information technology processing → network data processing → multimedia network processing → intelligent supercomputer)
2.3 Digital Simulation Technology Management

Digitalization is the integration of simulation technology with modern computer environment design and application; the two are aspects of the same thing, and emphasizing or exaggerating either one reduces the overall significance of digitalization. Digitalization is thus the product of technology selection and progress. The development of digital simulation technology and its application in computer environment design still has considerable room to grow. The development of digital technology and computer digitalization in China is still in its infancy, which leaves much scope for improving the security of digital simulation technology and for providing reasonable digitalization solutions [11]. To ensure the calculation effect, the continuation power flow method is generally used. The calculation process is as follows:
Assuming that the digital void ratio is to be calculated, the data value and digital value as well as the initial digital volume must first be measured:

X = \{U_n - (1 + k)U_0\} \times 100 \tag{1}

In the above formula, X is the digital void ratio, U_n is the simulation numerical volume, U_0 is the initial numerical volume, and k is the spatial coefficient of the simulation data element, taken here as k = 0.6.

X = \sum_{k=0.6} f_{ij}(V_a) \tag{2}

The comprehensive evaluation result of digital technology design application management ability is \omega; the maximum value is recorded as f_{ij}; the evaluation grey level corresponding to O_a is then the final evaluation grade result:

O_a = \sum_{i=1}^{k} (\omega_i F_{ij}) \tag{3}
This equation makes it possible to solve, efficiently and quickly, the environmental application design of digital simulation technology and the safety and stability problems in environmental design [12].
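To make the evaluation above concrete, the following Python sketch implements Eqs. (1) and (3); the function names and the sample values of the measurements, weights ω and grades F are illustrative assumptions, not data from this study.

def digital_void_ratio(u_n, u_0, k=0.6):
    # Eq. (1): X = {U_n - (1 + k) U_0} * 100
    return (u_n - (1 + k) * u_0) * 100

def grey_evaluation(weights, grades):
    # Eq. (3): O_a = sum_i omega_i * F_ij for one grey level j
    return sum(w * f for w, f in zip(weights, grades))

# Hypothetical measured volumes and evaluation values.
x = digital_void_ratio(u_n=2.5, u_0=1.2)
omega = [0.3, 0.5, 0.2]   # assumed evaluation weights
f_j = [0.8, 0.6, 0.9]     # assumed grade values for grey level j
o_a = grey_evaluation(omega, f_j)
print(x, o_a)

The grey-evaluation step is simply a weighted sum, so the grey level with the largest O_a is taken as the final evaluation grade.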
3 LE Design Application Experiment

LE design cannot exist independently of art. Only by adding new materials and new technology can the beauty of technology be reflected. When using new materials, however, we should be flexible and innovative. The best way is to arouse people's specific feelings through changes in color, shape and texture, bringing strong appeal; the LE can then surpass traditional materials and reach a new height in color, transparency and texture. Throughout the development of LE in recent years, many transparent plastic and glass design applications have been added, making the whole landscape transparent. The foundation of LE design is material and technology design, which has promoted the development of LE.

3.1 Problems in the Application of LE Design

Investigation shows that the LE of some cities in China lacks overall planning in the design application process, and the whole urban LE lacks unity and coordination. Overlooking such a city, the landscape at different locations is very messy. Without a scientific and reasonable spatial layout, the LE and the natural landscape cannot be well integrated, giving people a sense of incompatibility. LE design without overall planning therefore largely destroys the perception of the landscape and lowers urban residents' evaluation of the overall LE.
3.2 Lack of Application of LE Design

The LE in a city determines the livability index of the city to a certain extent: the better the environmental design, the higher the ecological value, and the more suitable the city is for people to live in. However, in many cities the LE design lacks ecological value, and the green LE covers a very small area. Most of the LEs are non-natural landscapes such as sculptures, which lose their original aesthetic appeal after long exposure to sun and wind. Many cities therefore have to redesign and re-plan after a period of time, which increases the cost of urban landscape construction while failing to achieve good LE effects. In addition, the lack of ecological value in urban landscape design is not conducive to the sustainable development of the city, and runs contrary to China's initiatives in LE protection.

3.3 LE Design Application Scheme

In urban LE design, the use of the natural LE is very desirable. Making full use of the natural LE to complete the urban LE design can not only improve urban air quality and the urban greening rate, but also create a good landscape viewing effect, distinguishing the natural landscape from the non-natural landscape through the design and application scheme. Across the four seasons of the year, plants present different viewing effects, so a table is used to describe the temperature and viewing environment of spring, summer, autumn and winter: flowering plants bloom in spring, give shade in summer, and offer different views in autumn and winter, which is difficult for a non-natural landscape to achieve. The LE in spring, summer, autumn and winter is described in Table 1:

Table 1. LE design scheme in spring, summer, autumn and winter

spring    summer    autumn    winter
16%       37%       18%       33%
12%       24%       36%       25%
13%       30%       28%       36%
Table 1 above describes the seasonal rate of the LE in spring, summer, autumn and winter, representing the LE scheme points in percentage form. Points are everywhere in the LE and can play a concentrating and controlling role. Relative to the space, distant or small objects can be regarded as data points in the LE space. When the visual characteristics of points are used to locate the space visually, the size, shape, color and texture of the spatial points should contrast sharply with the surrounding LE; for example, the data points obtained in Fig. 3 stand out in the LE design of an entrance, easily attract people's attention, form the focus of human vision, and represent the style characteristics of the whole site landscape. As shown in Fig. 3:
[Figure: bar chart of the total value of landscape data points (Data value 1, Data value 2, Data value 3; vertical axis 0–60) for the data point, spatial point, shape point, single data value and color point categories]
Fig. 3. Landscape environment data points
4 Application Results and Discussion of LE Design

4.1 Application and Development of LE Design

LE refers to the land and the complex of space and objects on the land. It is the imprint of complex natural processes and human activities on the earth, and neither the West nor the East can fully explain its concept. Geographers regard the LE as a scientific term and a kind of scene, such as the urban landscape, grassland landscape or forest landscape; ecologists define the landscape as an ecosystem; artists regard the landscape as a beautiful Yipu Garden. It can be seen that the LE is closely related to human beings and is the basic living place of human beings.

4.2 LE Results

The landscape can provide powerful help for the environment to achieve these goals. The extinction or migration of species in a region, changes of air temperature, changes of ground temperature and other LE problems are, in the process of research and analysis, the precursors and epitome of larger-scale and more harmful environmental problems; the LE is therefore used for comparison. The results are shown in Fig. 4: it can be seen that, as the LE temperature gradually increases, the geological surface heat also increases, rising by 3.3° from 2019 to 2022, while the geological temperature rises by 5° over the same period. The LE is therefore determined as the starting point for formulating environmental protection.
[Figure: line chart comparing the hazard of air temperature change and geological change from 2019 to 2022; vertical axis (overall coordinates) 0–45, horizontal axis in single phases by year]
Fig. 4. Hazard comparison between temperature change and geological change
4.3 Strategies for LE

The connotation of the LE is changeable: it can refer to the natural LE in a narrow sense, radiate to the whole of nature in a broad sense, or reflect human activities and natural changes. As one of the research objects of environmental history, the landscape can promote the integrated study of time and space, broaden the field of historical research, and provide real and accessible historical data for the study of LE history; landscape research is therefore indispensable to environmental history. At the same time, LE research has great practical significance given today's increasingly serious environmental problems. It reflects not only the changes of the natural landscape itself but also the impact of human activities on the landscape and the environment, which helps scholars find problems that need to be solved. When carrying out LE design, it is necessary to investigate the region effectively and then understand and grasp its environmental characteristics. For example, the characteristics and shape of a building can reflect its relationship with the surrounding environment, and the characteristics of the building are the best proof: this specific expression is essentially an effective description of the characteristics of the surrounding environment, highlighting the distinctive characteristics of the environment. How to adjust the relationship between the natural environment and the building effectively, however, is an important development task. This requires the LE designer to have a deep understanding and grasp of the local natural environment, to deepen the understanding of its environmental characteristics step by step, and to start from the real local environmental characteristics.
5 Conclusion

The symbiotic LE plays an important role between ecological nature and design application, and can alleviate the contradiction between environmental protection and economic development. An in-depth study of the application of the LE in cities, villages and parks reflects one point: attention must be paid to the protection of the ecological environment, and development must not come at the expense of the environment. From this we know that the construction of the LE is not about destroying the environment in order to develop, but about coordinated development on the basis of protecting the natural LE, so as to achieve the goal of symbiosis. Foreign countries have long attached importance to protecting the environment and have studied the LE more deeply than China, with practice as a theoretical basis. Scholars of LE design should learn more from practical experience to make the theoretical basis more reliable.
References

1. Luo, Q., Wen, Z.: Application of garden design style in Tang dynasty to the design of modern city public gardens: a case study of Tang Paradise. J. Landscape Res. 10(02), 14–17 (2018)
2. Wang, W.H., Wang, Y., Sun, L.Q., et al.: Research and application status of ecological floating bed in eutrophic landscape water restoration. Sci. Total Environ. 704(20), 135434.1–135434.17 (2020)
3. Zhang, G., Kou, X.: Research and application of new media urban landscape design method based on 5G virtual reality. J. Intell. Fuzzy Syst. 1(1), 1–9 (2021)
4. Stremke, S., Schoebel, S.: Research through design for energy transition: two case studies in Germany and The Netherlands. Smart Sustainable Built Environ. 8(1), 16–33 (2019)
5. Aghnoum, M., Feghhi, J., Makhdoum, M.F., et al.: Assessing the environmental impacts of forest management plan based on matrix and landscape degradation model. J. Agric. Sci. Technol. 16(2), 841–850 (2018)
6. Yufu, C.: Application and value analysis optimization of multimedia virtual reality technology in urban gardens landscape design. Boletin Tecnico/Tech. Bull. 55(15), 219–226 (2017)
7. Yin, Y.: Imitation and reproduction – the application study of sculptures in landscape design by the method of topological psychology. IPPTA: Quarterly J. Indian Pulp Paper Tech. Assoc. 30(8), 580–587 (2018)
8. Min, J.: Application of computer simulation in urban garden landscape design and value analysis based multimedia. Revista de la Facultad de Ingenieria 32(1), 574–581 (2017)
9. Mao, M., Huang, C.: Application of Leizu culture to interior design. J. Landscape Res. 03, 36–39 (2017)
10. Diao, J., Xu, C., Jia, A., et al.: Virtual reality and simulation technology application in 3D urban LE design. Boletin Tecnico/Tech. Bull. 55(4), 72–79 (2017)
11. Gul, S., Gupta, S., Shah, T.A., et al.: Evolving landscape of scholarly journals in open access environment. Libr. Rev. 68(6/7), 550–567 (2019)
12. Zhou, J.: VR-based urban landscape artistic design. J. Landscape Res. 12(01), 117–119 (2020)
Short-Term Traffic Flow Prediction Model Based on BP Neural Network Algorithm Dandan Zhang1(B) and Qian Lu2 1 Zhengzhou Technical College, Zhengzhou 450121, Henan, China
[email protected] 2 City University of Zhengzhou, Zhengzhou 450000, Henan, China
Abstract. Nowadays, traffic data collection in major cities around the world is constantly being updated, which promotes continuous improvement of short-term traffic prediction capabilities. The purpose of this paper is to study a short-term traffic flow (TF) prediction model based on the BP neural network (NN) algorithm. This paper presents a new forecasting model and an algorithm for forecasting short-term TF; short-term traffic forecasting plays a very important role in traffic management applications. The paper proposes a prediction algorithm based on the BP NN and uses the optimal density rule to improve it. Based on the demand structure of the TF forecasting system, the overall program flow is outlined. The experimental data obtained are analyzed and processed with the BP network for TF forecasting, and the approaches are compared through experiments. It is concluded that the method is effective in improving the accuracy of TF forecasts. Experimental research shows that the fit between the test output curve and the expected output curve is enhanced: where the actual output fluctuates strongly, after GA optimizes the weights and thresholds of the BP NN, the convergence rate of the predicted output curve and the prediction accuracy increase by 20%. Keywords: BP Neural Network · Prediction Model · Short-Term Traffic Flow · Neural Network Algorithm
1 Introduction

Because the change of urban traffic is a non-linear problem, it is difficult to predict with simple models. Short-term calculation of TF involves uncertainty about short-term potential changes in traffic, which creates serious problems for the prediction of urban TF [1, 2]. The construction of urban roads has increasingly promoted the rapid growth of our national economy; however, it also brings problems such as deterioration of the transportation environment. On the whole, traffic forecasting falls into two main types, short-term and medium-to-long-term, of which short-term forecasting is the most real-time and responsive and has therefore been the most widely used. Short-term traffic forecasting is a method to predict traffic accurately in real time. Short-term traffic flow
forecasting uses a variety of detection methods to predict traffic flow in real time, and then applies forecasting models and algorithms to analyze the flow, so as to obtain accurate, real-time and reliable traffic forecasts and to provide travelers with real-time, accurate traffic information and guidance on traffic flow and dynamic routes, so that travelers can avoid rush hours and save travel time.

Since the 1960s, quite mature mathematical, computer and other related theories and models have appeared at home and abroad and been widely applied to short-term traffic flow forecasting, and many new forecasting models and methods have been developed. On the whole, they can be divided into statistical models, dynamic traffic allocation models, nonlinear forecasting theory models, autoregressive moving average models and various combination forecasting models. Among them, the neural network model has been well applied in short-term traffic forecasting, especially the BP neural network model, which is widely used at present but still suffers from problems such as easily falling into local minima and poor real-time performance. The Genetic Algorithm (GA) is an intelligent optimization algorithm with good global optimization performance. It adopts a probability-based optimization strategy that realizes automatic optimization over the sample space and adaptive adjustment of sampling without presetting parameters. On this basis, a new GA-based BP neural network method is proposed.

In research on short-term TF prediction models based on the BP NN algorithm, many scholars have achieved good results. For example, Guo Y proposed a new TF prediction model based on the combination of big data and deep learning; tests show that the predicted TF results of this method are relatively close to the real results [3]. Qiao Y proposed a TF prediction model based on the Hermite network [4]. It can be seen that research on the short-term TF prediction model based on the BP NN algorithm has important research significance.

This paper combines domestic and foreign short-term TF forecasting methods so that support vectors can be fully utilized in predicting the object, using custom prediction algorithms. The TF data are analyzed from a time perspective: rules are mined from a large amount of time-series data, an improved clustering analysis algorithm is used to process the TF data, and the BP NN is then combined to predict the TF of a specific detector in a specific time period of the day, which greatly improves the accuracy of the prediction.
2 Research on Short-Term TF Prediction Model Based on BP NN Algorithm

2.1 TF Prediction Method Combined with BP NN

2.1.1 BP NN TF Prediction Method

From a time point of view, the daily TF has peak periods in the morning and evening, i.e. the morning peak and the evening peak; from a spatial perspective, the TFs between parts of the road network also influence each other. Forecasting can therefore proceed by two methods:
(1) On the same road section, use the TF in the first few periods of the day to predict the TF in the next period;
(2) In the same time period, use the TF of adjacent road sections to predict the TF of the section to be predicted.

When using a BP NN to predict, the input data of the former come from the same detector, while the input data of the latter come from different detectors [5, 6]. Which method to use in actual research depends on a concrete analysis of the concrete problem. The detectors on a highway are generally far apart, so the correlation between spatially analyzed data is limited, and the TF data can be analyzed and predicted from the time perspective alone.

2.1.2 Evaluation Criteria for Short-Term TF Forecasting

In a TF forecasting system, appropriate indicators should be selected to evaluate the quality of the chosen or improved forecasting method; measuring performance indicators shows whether the selected prediction method is appropriate. The standard used to evaluate TF results in this study is the absolute percentage error, i.e. the absolute value of the difference between the predicted value and the true value as a percentage, which reflects the degree to which the predicted value deviates from the observed value.

2.2 Short-Term TF Prediction Model

2.2.1 Principles for Establishing Short-Term TF Prediction Models

(1) Accuracy. Avoiding traffic jams is one of the main reasons for predicting TF. Only by improving the accuracy of the prediction model to meet the requirements of the evaluation standards can the prediction of TF be brought into full play.
(2) Real-time performance. The prediction model should offer a real-time query function. In order to use the prediction results to manage TF in a timely and effective manner, the prediction must be available before the real data appear.
(3) Portability. Given the different designs of urban road sections, after one section of road has been modeled, the predictive model should be applicable to other road sections.

2.2.2 Selection of Short-Term TF Prediction Model

There is correlation between data, and there are internal connections between nearby data. Since the current TF change is affected by the flow in the preceding periods, short-term TF prediction based on current history, data analysis and internal
correlation can also achieve TF prediction [7, 8]. In addition, short-term TF data have non-linear characteristics and uncertainties, and traditional linear prediction models fall far short of current large-scale network prediction. Fluctuations in TF can nonetheless give rise to regular patterns. One current trend in short-term traffic forecasting is to introduce comprehensive forecasting models: multi-model combined forecasting improves the overall performance and computational performance of the algorithm and solves the problem better, since it can cover multiple kinds of variation in the data and raise predictions to a higher level. This article adopts an advanced BP NN model to increase the convergence rate of BP training, and evaluates the overall performance of the model by testing on TF.

2.3 Characteristics and Influencing Factors of TF

TF is affected by many factors, such as weather and statutory holidays, rather than by any single factor; if one factor changes, the flow on a line changes, so the trend of TF is strongly uncertain [9, 10]. The TF changes continuously over time, and the features of the TF to be predicted in this case are shown in Fig. 1:
[Figure: the four characteristics of TF — periodicity, similarity, continuity, uncertainty]
Fig. 1. TF characteristic diagram
It can be seen from Fig. 1:

(1) Periodicity. Under normal circumstances, people keep the same schedule of work and rest on workdays. Although companies' working hours differ, they are broadly the same, so during the working day the TF changes in a regular pattern. On weekends, because everyone has different plans — some people are on holiday, and some want to go out after the routine of the workweek — the change of traffic over time does not look the same as on workdays.

(2) Similarity. Under normal circumstances, the schedule from Monday to Friday is fixed, so the pattern over the five working days is basically the same. The peak hours are visible from
7 until 10 in the morning and around 5 in the afternoon, when the TF is greater than at other times.

(3) Continuity. Over five consecutive working days the TF is generated as a time series, so on a graph it forms a continuous rather than intermittent curve.

(4) Uncertainty. When a traffic accident occurs in an area, or during a heavy rainstorm, the TF in these periods changes sharply; such factors directly affect the changes in the TF.

2.4 Network Construction

The network structure design of the NN refers to determining the number of network layers and hidden layers (HL), as well as the number of nodes (i.e. neurons) in each layer. The following is a specific introduction in combination with TF forecasting.

(1) Selection of the number of network layers. The choice of HL must balance network regularity and training time. For a simple mapping relationship, when an ordinary network meets the requirements, a single HL can be selected to increase speed. For complex mapping relationships, choosing multiple hidden layers can improve the accuracy of network prediction, but it also increases the complexity of the network and the training time [11, 12]. Under normal circumstances, attention should instead be paid to increasing the number of neurons in the HL. In summary, the simulation in this paper chooses a single HL, i.e. a three-layer network structure.

(2) Selection of input and output nodes. According to the fluctuations of TF, the daily data are divided into four time periods (0:00–6:00, 7:00–12:00, 13:00–18:00, 19:00–0:00) for analysis. The data of these four time periods are used as the 4 inputs of the network, and the corresponding data are selected as output.

(3) Selection of HL nodes. The number of HL nodes also has a direct impact on the prediction: if the number is too small, the network cannot learn correctly, while as training progresses an oversized network is more prone to overfitting. The number of HL nodes can be estimated with the following formulas:

N < \sqrt{n + m} + a \tag{1}

N = [\sqrt{2n + m},\; 2n + m] \tag{2}
In the formulas, N is the number of neurons in the HL, n is the number of neurons in the input layer, m is the number of neurons in the output layer, and a is a constant.
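As an illustration, the Python sketch below evaluates these two bounds for the network used later in the paper (n = 2 inputs, m = 1 output); the value of the constant a is an assumption, since the text does not fix it.

import math

n, m = 2, 1   # input and output neurons of the 2-5-1 network
a = 4         # assumed constant; the paper leaves a unspecified

upper = math.sqrt(n + m) + a                  # Eq. (1) bound
interval = (math.sqrt(2 * n + m), 2 * n + m)  # Eq. (2) interval

print(f"N < {upper:.2f}; candidate interval {interval}")

With these values, the hidden-layer size N = 5 chosen in Sect. 4.1 satisfies both the Eq. (1) bound and the interval of Eq. (2).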
3 Experimental Research on Short-Term TF Prediction Model Based on BP NN Algorithm

3.1 BP NN Modeling of Chaotic Time Series

The BP NN is a typical three-layer NN system with an input layer, an HL and an output layer. The number of neurons in the input layer and the number of HL nodes both have a great impact on the prediction performance of the BP NN. The forecasting of chaotic time series is essentially the inverse problem of a dynamic system. The BP neural network is a very good nonlinear forecasting model and can well establish the nonlinear forecasting model of a chaotic sequence. The traditional three-layer BP neural network includes an input layer, a hidden layer and an output layer. Its core is to propagate errors backward and correct the network weights accordingly, so as to reach or approach the desired mapping relationship between input and output.

3.2 TF Data Collection

First, TF data were collected for a certain intersection in this city for 7 days, with 30 min as one collection cycle; a total of 100 time points of valid data were recorded over the 7 days. The data of the first 3 days are used to train the network and correct the model output, and the data of day 4 are used to validate the model.
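A minimal sketch of this training/validation split in Python follows; the array of flow readings is a hypothetical stand-in for the detector data, and the split index simply approximates the first three of the four days used.

import numpy as np

# Hypothetical detector readings: one flow value per 30-min cycle.
flows = np.random.default_rng(0).uniform(10, 260, size=100)

train, test = flows[:75], flows[75:]  # first 3 days train, day 4 tests

# Normalize with training statistics only, a common preprocessing
# step before feeding a BP network.
lo, hi = train.min(), train.max()
train_n = (train - lo) / (hi - lo)
test_n = (test - lo) / (hi - lo)
print(train_n.shape, test_n.shape)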
4 Experimental Research and Analysis of Short-Term TF Prediction Model Based on BP NN Algorithm

4.1 Establish the Prediction Model of the BP NN

First, the network parameters of the BP NN are determined, and the numbers of nodes in the input layer, HL and output layer are set to 2–5–1. The data of the first three days are taken as training data to train the BP NN, the data of the fourth day are used as test data, and the results of the Matlab simulation are shown in Table 1. It can be concluded from Fig. 2 that the BP NN has a strong fitting ability, but there is still a large error between the test output of the network and the expected output; in particular, when the flow is small, it cannot accurately reflect the change trend of the flow.
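For concreteness, the following sketch builds a 2–5–1 network with scikit-learn; MLPRegressor is an illustrative substitute for the paper's Matlab implementation, and the training pairs are hypothetical.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, size=(70, 2))  # 2 input neurons
y_train = X_train.mean(axis=1)             # stand-in target values

# 2-5-1 structure: one hidden layer of 5 logistic neurons, 1 output.
net = MLPRegressor(hidden_layer_sizes=(5,), activation="logistic",
                   solver="lbfgs", max_iter=2000, random_state=1)
net.fit(X_train, y_train)

X_test = rng.uniform(0, 1, size=(10, 2))
print(net.predict(X_test)[:3])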
Table 1. BP NN Matlab simulation results

Point in time    Expected output    Predictive output
0                49.6               43.1
20               45.2               14.3
40               173.2              167.8
60               223.7              201.5
80               248.9              189.4
100              36.3               35.2
[Figure: expected vs. predicted output curves; traffic volume 0–350 against points in time 0–100]

Fig. 2. BP NN prediction output results
4.2 Establish a GA-Optimized BP NN Prediction Model

The GA-optimized BP NN prediction algorithm is divided into three modules. First, the BP NN structure is designed and the GA code length determined accordingly; then the GA is used to optimize the weights and thresholds of the BP NN; finally, the optimized weights and thresholds are passed to the BP NN, which predicts the network output. In this simulation the structure of the BP NN is 2–5–1, so the whole network contains 15 weights and 6 thresholds, and the individual code length of the genetic algorithm is therefore 21. Three days of traffic data are used as training data to train the network output, and the fourth day's traffic data are used to test the network. The Matlab simulation results are shown in Table 2.
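The Python sketch below shows the idea of encoding the 21 weights and thresholds as one GA individual and evolving them against a prediction-error fitness; the operators and parameters are simplified illustrations, not the paper's exact configuration.

import numpy as np

rng = np.random.default_rng(2)
CODE_LEN = 21  # 15 weights + 6 thresholds of the 2-5-1 network

def fitness(ind, X, y):
    # Decode the individual into the 2-5-1 network and return the
    # negative sum of absolute prediction errors (higher is better).
    w1 = ind[:10].reshape(2, 5)    # input -> hidden weights
    b1 = ind[10:15]                # hidden thresholds
    w2 = ind[15:20].reshape(5, 1)  # hidden -> output weights
    b2 = ind[20:]                  # output threshold
    hidden = 1 / (1 + np.exp(-(X @ w1 + b1)))  # logistic activation
    pred = (hidden @ w2 + b2).ravel()
    return -np.abs(pred - y).sum()

X = rng.uniform(0, 1, size=(70, 2))  # hypothetical training data
y = X.mean(axis=1)

# Simple GA loop: rank selection, uniform crossover, small mutation.
pop = rng.normal(0, 1, size=(30, CODE_LEN))
for _ in range(50):
    scores = np.array([fitness(ind, X, y) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]   # keep the best 10
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(CODE_LEN) < 0.5     # uniform crossover
        child = np.where(mask, a, b)
        child = child + rng.normal(0, 0.1, CODE_LEN) * (rng.random(CODE_LEN) < 0.1)
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind, X, y) for ind in pop])]
print("best fitness:", fitness(best, X, y))

The best individual is then decoded into the BP NN's initial weights and thresholds, after which ordinary BP training proceeds as in Sect. 4.1.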
Table 2. The Matlab simulation results

Sample point    Expected output    Predictive output
0               39.9               62.4
20              15.2               36.0
40              148.3              147.6
60              198.8              203.1
80              203.7              210.5
100             67.2               68.5
[Figure: expected vs. predicted output curves of the GA-optimized network over sample points 0–100, traffic volume 0–300]
Fig. 3. GA optimized BP NN prediction results
In Fig. 3, after the GA optimizes the weights and thresholds of the BP NN and the optimized parameters are used to train the BP NN, the fit between the test output and the expected output is improved. Where the actual output changes sharply, GA optimization of the BP NN weights increases the convergence rate of the predicted output curve and the prediction accuracy by 20%. When the input data change slowly, however, the forecasting scheme still needs to be improved.
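As a check, the absolute percentage error criterion of Sect. 2.1.2 can be computed directly on the tabulated values; the sketch below does this (the values are taken from Tables 1 and 2, everything else is illustrative). Note that the two tables report different sample points, so the two error figures are not directly comparable.

import numpy as np

bp_true = np.array([49.6, 45.2, 173.2, 223.7, 248.9, 36.3])  # Table 1
bp_pred = np.array([43.1, 14.3, 167.8, 201.5, 189.4, 35.2])
ga_true = np.array([39.9, 15.2, 148.3, 198.8, 203.7, 67.2])  # Table 2
ga_pred = np.array([62.4, 36.0, 147.6, 203.1, 210.5, 68.5])

def mean_ape(true, pred):
    # Mean absolute percentage error over the sampled points.
    return np.mean(np.abs(pred - true) / true) * 100

print(f"BP NN:           {mean_ape(bp_true, bp_pred):.1f}%")
print(f"GA-optimized BP: {mean_ape(ga_true, ga_pred):.1f}%")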
5 Conclusions

Reasonable speed control of expressways is an important guarantee of their efficiency, safety, economy and comfort; whether for safe operation or economic efficiency, speed limits are indispensable. The current speed
limit standards and design schemes in use in China are still unreasonable. Compared with newly built expressways, speed limits on reconstructed and expanded roads are more complicated, and the traffic accidents and speed limits of the old road must be considered comprehensively; the results also have reference value for the speed-limit planning of new highways. How to make the speed-limit plan better match the characteristics and actual conditions of highway reconstruction and expansion, so that safety, efficiency, economy and comfort can be optimized to the greatest extent, has become an important issue for the safe operation of reconstructed and expanded highways in the future.

Based on the study of TF prediction methods, this paper further explores the basic knowledge and the NN learning process, and focuses on the application of BP network algorithms, which lays the foundation for realizing NN TF prediction. To measure actual TF information, this article uses the Matlab platform to analyze a specific BP network specification to achieve TF prediction, describes the decision-making process for each network parameter in detail, fully considers the accuracy error and the training time needed for prediction, and determines the optimal prediction result for each set of network parameters.
References

1. Tedjopurnomo, D.A., Bao, Z., Zheng, B., et al.: A survey on modern deep neural network for traffic prediction: trends, methods and challenges. IEEE Trans. Knowl. Data Eng. PP(99), 1–1 (2020)
2. Pamula, T.: Impact of data loss for prediction of traffic flow on an urban road using neural networks. IEEE Trans. Intell. Transp. Syst. 20(3), 1000–1009 (2019)
3. Keynia, F., Heydari, A.: A new short-term energy price forecasting method based on wavelet neural network. International Journal of Mathematics in Operational Research 14(1), 1 (2019)
4. Doan, E.: Robust-LSTM: a novel approach to short-traffic flow prediction based on signal decomposition. Soft Comput. 26(11), 5227–5239 (2022)
5. Karthika, R., Parameswaran, L.: A novel convolutional neural network based architecture for object detection and recognition with an application to traffic sign recognition from road scenes. Pattern Recognit. Image Anal. 32(2), 351–362 (2022)
6. Chafi, Z.S., Afrakhte, H.: Short-term load forecasting using neural network and particle swarm optimization (PSO) algorithm. Math. Probl. Eng. 2021(2), 1–10 (2021)
7. Nakamura, T., Fukami, K., Hasegawa, K., et al.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Phys. Fluids 33(2), 025116 (2021)
8. Rahimipour, S., Moeinfar, R., Hashemi, M.: Traffic prediction using a self-adjusted evolutionary NN. J. Modern Transp. 27(4), 306–316 (2019)
9. Zhengwu, Y., Chuan, et al.: Short-term TF forecasting based on feature selection with mutual information. In: AIP Conference Proceedings, vol. 1839(1), pp. 1–9 (2017)
10. Ma, X., Pang, Q., Xie, Q.: Short-term prediction model of photovoltaic power generation based on rough set-BP NN. J. Phys. Conf. Ser. 1871(1), 012010 (2021)
11. Pamula, T.: Impact of data loss for prediction of traffic flow on an urban road using neural networks. IEEE Trans. Intell. Transp. Syst. 20(3), 1000–1009 (2019)
12. Zhou, X.Y., Li, H.M., Zheng, W.H., et al.: Short-term TF prediction based on the IMM-BP-UKF model. J. Highway Transp. Res. Dev. (English Edition) 13(2), 56–64 (2019)
Multi-modal Interactive Experience Design of Automotive Central Control System Ming Lv1 , Zhuo Chen1 , Cen Guo1(B) , and K. L. Hemalatha2 1 Shenyang Jianzhu University, Shenyang, Liaoning, China
[email protected] 2 Sri Krishna Institute of Technology, Bengaluru, India
Abstract. The design strategy of multi-modal technology in the automotive central control system promotes the evolution of automotive interactive systems toward smart cars. Method: Starting from new principles of interaction design and product design methods, combined with the current application of multi-modal technology, the physiological and psychological needs of users are analyzed, and the current situation and problems in the application of multi-modal technology are summarized. Conclusion: Multi-modal technology stimulates multi-sensory collaboration, realizes the user's synesthetic perception, and mobilizes the user's senses to give information and psychological hints, bringing new feelings. When multi-modal technology is applied to the central control system of an automobile, it can enhance the expressiveness of the central control system, meet the various psychological needs of users, and bring users a rich physical and mental experience. Keywords: Automotive central control system · Multi-modal design · Interactive design
1 Introduction

With the rapid development of the Internet of Things, the interactive mode of the automobile central control system has undergone a huge change. The central control system is the link through which users control the automobile, understand it, feel it, and communicate and interact with it. The user experience of the central control system has therefore become one of the main factors in brand development. The cognitive function of the central control system helps spread brand information, and an effective interaction method can carry the user's emotional expression, so that the car and the user form a good interactive relationship and the user has a pleasant experience.
2 Concept

2.1 The Concept of Multi-modal Technology

Multi-modal interaction technology refers to human-computer interaction technology in which the machine acquires human language, behavior, gestures and other modal
information, integrates it to make judgments, and gives feedback. It fuses human sensory information and ultimately achieves symbiosis and harmony at the human-machine interface [1]. Multi-modal human-computer interaction serves the enhancement of human intelligence. To provide a better human-computer interaction experience, full use must be made of humans' multiple perception channels — auditory, visual, tactile, olfactory, consciousness, etc. Different input forms combining voice, image, gesture, touch, posture, expression, eye movement, brain waves and so on provide multiple choices of human-computer interaction channels, improving the naturalness and efficiency of interaction. Multi-modality is not a mere collection of modalities, but organic synergy and integration between single modalities [2]. In multi-modal man-machine dialogue, interactive learning ability is becoming more and more important.

2.2 The Concept of Automotive Central Control System

At this stage, domestic and foreign automotive central control systems mainly show three design features. The first divides the air-conditioning system and the central control screen into zones: various functions are mostly concentrated on the central control screen, users rely on touch-screen operations, and on this basis some buttons are reserved for functional operations (Fig. 1). The second is a large-screen central control system in which all physical-button function information is concentrated on the central control screen, and a series of functional tasks are operated on the screen [3]. The third type adds a separate touch system at the gear position on the basis of the
[Figure: pie chart of the age distribution of motor vehicle drivers (groups: 18–25, 26–50, 51–60, over 60; shares: 9%, 10%, 23%, 58%)]
Fig. 1. Age Distribution of Motor Vehicle Drivers
traditional central control screen, including a rotary button or a control panel. This enables the driver to operate the required functions conveniently while driving, reduces the range of motion, and provides a guarantee for safe driving.
3 Current Status

3.1 Current Design Opportunities for Automotive Central Control Systems

As described in Sect. 2.2, domestic and foreign automotive central control systems currently show three main design features: a zoned layout in which most functions are concentrated on the central control screen with some reserved physical buttons; a large-screen system in which all button function information is concentrated on the central control screen; and a traditional central control screen supplemented by a separate touch control system at the gear position, including a rotary button or a control panel, which enables the driver to operate the required functions conveniently during driving, reduces the range of motion, and provides a guarantee for safe driving [4].

3.2 Development Status of Multi-modal Technology

At present, the physical keys of the central control system and the information on the central control interface are logically mapped to each other and can operate each other. The main functions are in-car entertainment, navigation settings, air-conditioning control, seat control, Bluetooth settings, the voice assistant, and settings such as the parking assistance system. A survey of the central control systems of different vehicle brands shows that the functions interoperable between the physical buttons and the central control information interface include in-car entertainment, air-conditioning control, seat control and parking assistance, while the functions operable only through the central control screen are navigation settings, Bluetooth settings and the voice assistant. The central control screen therefore plays a very important role in the functional design of the current car central control system: many functions have been moved to the information interface to simplify and beautify the physical form, which also makes the interface information more complex and increases the difficulty of operation. The common functions are listed below (see Table 1).
Table 1. Vehicle central control system information

Serial number    Interactive operation           Usage
1                Voice assistant                 Perform task operations through the voice assistant
2                Song/radio channel switching    Select a song or channel
3                Bluetooth                       Connect mobile devices
4                Answer/make phone calls         Answer or make a call
5                On/off air conditioning         Turn the air conditioning and temperature on or off
6                Navigation                      Enter a destination
7                Parking distance control        When reversing, observe the vehicle behind through the display screen
3.2.1 Physical Interaction

Human behavior recognition improves the accuracy of recognizing human actions by mining and using information from different modalities, each with its own characteristics, and defines the function triggered by each gesture. BMW's iDrive system (Fig. 2) is recognized as a system with particularly clear operating procedures.
Fig. 2. BMW.com
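To illustrate the idea of defining the function triggered by each gesture, the following is a minimal Python sketch of a gesture-to-function dispatch table; the gesture names and car functions are hypothetical examples, not BMW's actual iDrive mappings.

def volume_up():    print("volume +1")
def volume_down():  print("volume -1")
def accept_call():  print("call accepted")
def reject_call():  print("call rejected")

GESTURE_ACTIONS = {
    "clockwise_circle": volume_up,
    "counterclockwise_circle": volume_down,
    "point_forward": accept_call,
    "swipe_right": reject_call,
}

def on_gesture(name):
    # Dispatch a recognized gesture to its configured function;
    # unknown gestures are ignored rather than raising an error.
    action = GESTURE_ACTIONS.get(name)
    if action:
        action()

on_gesture("clockwise_circle")  # -> volume +1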
3.2.2 Voice Interaction

Voice interaction means communicating with the machine by voice so that the machine understands what is said — in effect the machine's auditory system. In the automotive field, pressing the voice button on the steering wheel or simply calling out can realize functions such as making calls, controlling the volume, adjusting the air conditioner, querying a route, opening and closing the windows, and playing music; it is a convenient configuration. In 2015, the i Cloudrive 2.0 (Fig. 3) smart in-vehicle system jointly created by iFLYTEK and Chery Automobile was launched. The product subverted people's perception of
interactive scenarios with an efficient voice experience, and deeply opened up car-machine functions and infotainment services through voice interaction, becoming a benchmark product of human-vehicle interaction sought after by the industry [5].
Fig. 3. Auto time.com
With the development of technology, the third-generation car machine is developing towards informatization and intelligence, adopting better and more compatible systems such as Android. Through the rapid integration of new technologies such as wake-word-free operation, voice enhancement, sound source localization, voiceprint recognition, natural voice interaction, active interaction and intelligent voice prompts, the safety, interest and interactivity of the driving space have been comprehensively enhanced, forming an intelligent cockpit with an immersive interactive experience.

3.2.3 Visual Interaction

Visual interaction expresses content through the screen and enhances immersion. The HUD (head-up display) (Fig. 4) was first applied in airplanes, using the principle of optical reflection to project flight information onto glass level with the pilot's eyes. At present, vehicle-mounted HUDs fall into three categories: direct-view HUDs, reflector HUDs and windshield HUDs. The projection device of a reflector HUD sits under the dashboard and reflects into the driver's eyes via a reflector on the dashboard, with a reflectivity between 60% and 80%. Most automobiles currently use the windshield HUD: its projection device is a translucent reflector (combiner) located in front of the driver's seat, which projects information onto the filmed or coated front windshield, so that the driver can follow the information displayed on the HUD at any time while viewing the road conditions ahead.
Fig. 4. Sina.com.
4 Design Method

4.1 Human-Oriented Design Principles

Maslow orders needs from low to high levels: physiological needs, safety needs, social needs, esteem and self-actualization. The functions of existing car central control systems on the market are overly concentrated, and some functions lie idle — to a certain extent a waste of resources for society, and a failure to satisfy customers' esteem and self-actualization needs. According to the statistical results for the center console, female drivers in particular know little about its buttons. This means that more than one third of the functions provided by the car's central control not only have no effect but even create experience barriers for the driver. Therefore, to enhance the core competitiveness of the automotive industry, the perspective should shift to enhancing user experience (Fig. 5): letting users understand the car means reaping happiness in the process of interacting with it. The user's internal state, the system's characteristics in use, and the context of interaction all contribute to and influence the perceived experience [6]. According to the survey results, users' coordinates for the use of various functions in the central control system yield the primary and secondary relationships among the functions drivers commonly use (Fig. 6).

4.2 Design Principles in Automotive Interaction Design

Compared with other periods, automobile interaction design under the background of the Internet of Things has both commonality and particularity [7], so the following principles should be observed in design: safety first, unified standards, information classification, information graphics, natural interaction, intelligent interaction, and timely feedback. Cars are different from mobile phones: for users, differences in frequency of use affect familiarity with the car [8], and for society, the social occupancy of cars affects social resources and the ecological environment. In order to reach the goal of peak carbon by 2030, and in the context of the sharing economy, the choice of car interaction methods should be systematic and standardized, and the operation interface should be classified according to the characteristics and needs of users, so that users
Fig. 5. Classification of Elements
[Figure: bar chart of color preference for the center console — dark, light color, black and personalized colors; vertical axis 0–35]
Fig. 6. Color Preference of Center Console
can use it under any conditions, whether in a shared car or a family car, without the burden of memorization.
5 Design Strategy of Multi-modal Central Control System

5.1 Application of Multi-modal Technology in Improving the Cognition of the Automobile Central Control System

These steps can be directly associated with the pre-design, design and post-design phases introduced by Nielsen (1992); at the same time, UX as a practice includes three main steps: envision UX, design UX and evaluate UX [9]. Survey results show that male drivers know only about 70% of the functions of the center console, while female drivers have a lower level of understanding. This means that more than one third of the functions provided by the car's central control system not only have no effect but even create experience barriers for the driver, so there are great deficiencies in the utilization of the central control system. These problems require the user's operating behavior to be fully considered in the early stage of design. The automotive central control system should not be a pile of functions; the user is not an expert, and the designer needs to start from the perspective of an ordinary user, for whom the central control system is a means of solving problems. Therefore, when users first come into contact with the car central control system, they must be given sufficient feedback and information prompts so that they intuitively know how to operate it and what the consequences of each operation are. With the cooperation of multi-modal technology, the distance between the user and the product can be shortened.

The first channel is the visual interactive system. Designers can use color, material and modeling text to convey interactive information, allowing users to feel more intuitively the content of the central control system: basic functions, additional functions and personalized functions. The presentation of visual information should conform to the user's reading habits, and care should be taken to avoid visual fatigue, which includes the frequent or even excessive appearance of some information — a situation that exhausts the user psychologically, reduces the user experience and occupies cognitive memory. In the interactive information presentation of the central control system, the principle of moderation should therefore be followed: combined with the selectivity of visual attention, important content should be emphasized, and primary-secondary relationships should give the user a visual foothold, making it easy to grasp quickly the functions and functional partitions of the car's central control system.

If people's perception systems are ranked, auditory perception comes second. The auditory interaction system plays a more auxiliary role in design expression, supplementing the visual performance; integrating auditory perception into the design further strengthens the information conveyed and gives users a deeper understanding of the functions of the central control system. The arrival of the intelligent-car era has brought earth-shaking changes to the central control system: from physical buttons, to the addition of part of the entertainment system, to the central control screen, a full-screen model with few physical buttons has finally formed.
In view of the current trend, the position of the central control screen is particularly important in the design structure of the central
control system. According to the survey summarized above, most elderly users tend to operate with physical keys. For some essential functions it is not necessary to cancel the physical keys; task operations can be performed through a combination of keys and touch, and the driver is safer and more reliable when operating blind while driving (Fig. 7). The current mainstream car brands therefore retain physical keys in the design of the central control, and through the layout of the air conditioner, screen, physical keys and gear lever the design is made more consistent with drivers' operating habits.
Fig. 7. Typical Structure of Central Control System
The haptic interaction system also plays an active role. Adding the haptic system lets users truly feel the interaction with the car's central control system and establishes an emotional connection between the design and the user from the user's perspective. Tactile perception includes temperature, texture and material feel. When choosing a tactile interaction method, attention must be paid to interactivity and consistency, reducing the user's memory burden and interacting with the user in the most natural way.
5.2 Application of Multi-modal Technology in Information Collection and Arrangement of the Automobile Central Control System

Since qualitative research focuses on explaining why certain behaviors or phenomena occur, human factors such as user perception or satisfaction are of primary interest in qualitative research [10]. Intelligence, digitalization and personalization are the development trends of future automobiles, with a great deal of science and technology integrated into the car. The car has become more personal and personalized, breaking through its original transportation function and gradually developing into a movable space. If the user treats the car as a scene for work and rest, the car must have a certain storage capacity, and how to expand the storage function of automobiles is a problem that cannot be ignored in the development of intelligent cars. The division and allocation of car space must be reasonable, portable auxiliary functions should be developed, and the various items should be classified and recorded so that the information can be matched with the car's interior scenes and the utilization of the movable space maximized. Inaccurate correspondence between an item and its location information lowers the user's evaluation of this movable space. Two parts of the design are required: information entry and information search. The unified integration of the central control system is conducive to collecting and sorting fragmented, scattered information and presenting it visually, forming an integrated system with the Internet of Things at home that helps users grasp the information of each scene intuitively and clearly. In information entry, multi-modal technology makes it convenient for users to integrate scattered items efficiently and improve the coordination of items with the spatial layout. In information search, multi-modal technology and multi-sensory interaction can truly reproduce the scenes in the car, the positions of specific objects and the space they occupy, and calculate the utilization rate.

5.3 Application of Multi-modal Technology in Mechanism Linkage in the Central Control System

The vehicle-mounted central control system includes three subsystems: the driving system, the entertainment system and the navigation system. The linkage information of each system has too many levels, which hides functions and confuses users' operations; the interactive path is cumbersome and demands excessive learning costs. Users cannot smoothly select and switch the functions of each system, and car design is becoming more diversified. Many users have only a slight understanding of the car's central control system, let alone of the linkage relationships between the systems. Multi-modal technology can provide effective reminders, and multi-sensory cooperation can realize the linkage between systems so that user needs do not conflict with each other. Before departure, data must be collected for the various driving scenes, observation methods used to record emergencies occurring while the user operates the driving system, and the proportion of time each system is used observed.
The functions of the different systems are mobilized according to the user's situation to improve the user experience and increase the utilization of the central control
system. The linkage of the various mechanisms enables the central control system to capture accurately the user's psychological and physiological states, as well as to forecast unexpected situations in the driving scene. Multi-modal technology linkage anticipates users' needs, provides them with diverse and effective choices, and gives them correct feedback, so as to maximize the utilization of the various in-car systems and shorten the distance between the car and the user. When vision is occupied, gesture interaction and voice interaction can supplement it, meeting the user's driving and entertainment needs while ensuring the safety of the driving process. Multi-modal technology is used to judge users' needs and determine whether they are willing to spend time on an in-depth experience, so as to provide corresponding intelligent services. In the design, a personalized service system should be established according to the user's preferences, with interaction modes suited to the user's characteristics, and the design and research of each subsystem of the automobile central control system should be carried out to realize information exchange and seamless connection between systems. Relying on multi-modal technology to build a smart central control system that achieves efficient collaboration between systems is conducive both to increasing the utilization rate of the central control system and to collecting user feedback.
6 Conclusion From the user’s point of view, this article researches the automotive central control system based on the principle of interaction design and the principle of Maslow’s demand. As a mobile space, cars are especially in today’s accelerating process of intelligence. Users have higher and higher requirements for cars, and car central control systems play an increasingly important role as an intermediary between cars and users. Therefore, the introduction of multi-modal technology into the automotive central control system is conducive to improving users’ awareness of automotive systems. From the user’s point of view, multi-modal technology and the car’s central control system are more conducive to the integration of information, allowing users to be more closely connected with the car. The mutual linkage between the multi-modes makes the car’s central control system work more effective and smarter.
References

1. Harvey, C., Stanton, N.A.: Usability Evaluation for In-Vehicle Systems. Taylor & Francis Ltd. (2016). https://www.ebook.de/de/product/21182820/catherine_harvey_neville_a_stanton_usability_evaluation_for_in_vehicle_systems.html
2. Bouncken, R.B., et al.: Coopetition in new product development alliances: advantages and tensions for incremental and radical innovation (2018)
3. Burkacky, O., Knochenhauer, C., et al.: Rethinking car software and electronics architecture. McKinsey & Co. (2018)
4. The role and potentials of field user interaction data in the automotive UX development lifecycle: an industry perspective
5. Roto, V., Rantavuo, H., Väänänen, K.: Evaluating user experience of early product concepts. In: Proceedings of the International Conference on Designing Pleasurable Products and Interfaces (DPPI 2009) (2009)
6. Hassenzahl, M., Tractinsky, N.: User experience - a research agenda. Behav. Inf. Technol. 25(2), 91–97 (2006)
7. Fastrez, P., Haué, J.-B.: Designing and evaluating driver support systems with the user in mind (2008)
8. Cartagena, A., et al.: Privacy violating opensource intelligence threat evaluation framework: a security assessment framework for critical infrastructure owners. In: 2020 10th Annual Computing and Communication Workshop and Conference (CCWC) (2020)
9. Bi, H., et al.: A deep learning-based framework for intersectional traffic simulation and editing. IEEE Trans. Visual. Comput. Graph. 26(7), 2335–2348 (2020)
10. Yozevitch, R., et al.: Save our roads from GNSS jamming: a crowdsource framework for threat evaluation. Sensors 21(14) (2021)
E-commerce Across Borders Logistics Platform System Based on Blockchain Techniques Qiuping Zhang1(B) , Yuxuan Fan1 , and B. P. Upendra Roy2 1 School of Economics, Harbin University of Commerce, Harbin, Heilongjiang, China
[email protected] 2 Channabasaveshwara Institute of Technology, Gubbi, Karnataka, India
Abstract. As its techniques and concepts continue to mature, blockchain is gradually infiltrating all aspects of real life from the virtual world, and E-commerce across borders is a good opportunity to connect the virtual and the real at this stage. The technical advantages of blockchain, such as immutability, uniqueness, "smart contracts", and decentralized self-organization, can address the challenges currently faced by the development of the cross-border logistics industry. Using the relevant technical characteristics and model concepts of blockchain to build an e-commerce circulation system can interconnect the information value chain and the commodity supply chain of e-commerce. This article integrates the management concepts and technical characteristics of blockchain into the cross-border e-commerce logistics platform, making it a bridge connecting the information flow and value flow of the cross-border e-commerce circulation system, and predicts its possible effects.

Keywords: Blockchain · E-commerce across borders · Logistics Platform
1 Introduction

Since 2020, the global COVID-19 epidemic has dealt a heavy blow to the real economy, but it has driven the development of the e-commerce and logistics industries to a certain extent [1]. Residents' consumption, brand marketing, and sales channels have shifted from offline to online, and as a result, brand owners' service needs in e-commerce channel development and upgrades have also increased accordingly [2]. The penetration rate of the e-commerce industry continues to rise, and new marketing methods such as live-broadcast marketing and social e-commerce have expanded its user coverage [3]. In recent years, the scale of China's e-commerce industry has gradually expanded, and the market and industry environment have undergone corresponding changes. The inherent flaws and operational problems of the e-commerce industry have become more prominent, pushing the development of China's e-commerce industry toward a deadlock. Based on this, in order to enhance the competitiveness of China's cross-border e-commerce industry in market competition, industrial upgrading is necessary [4]. Recently, with in-depth exploration of modes combining blockchain and cross-border e-commerce, the value of using blockchain technology
in the field of cross-border e-commerce has been further highlighted. The technical combination of blockchain and cross-border e-commerce has become the mainstream direction of industry transformation.
2 The Concept and Characteristics of Blockchain Techniques

The concept of blockchain originated from Bitcoin [5]. Bitcoin is a virtual encrypted digital currency in P2P form, and its basic structure is a chain composed of data blocks. The blockchain is a chain composed of one block after another; the basic function of these blocks is to use a distributed consensus algorithm for information exchange and replacement, as well as for data verification and encrypted storage. Generally speaking, blockchain has four major characteristics: immutability, uniqueness, "smart contracts", and decentralized self-organization.

First, the blockchain cannot be tampered with. This follows from its unique "block + chain" ledger: in chronological order, blocks containing transaction information are continuously appended to the end of the chain [6]. To modify the data in one block, all the blocks after it must be regenerated, which makes modifying the blockchain very expensive. In general, the transaction data in the blockchain ledger can be regarded as impossible to "modify"; it can only be "corrected" through approved new transactions [7], and the "correction" process leaves traces. This is why the blockchain cannot be tampered with: destroying data is not in the interests of users, and this practical design enhances the data reliability of the blockchain.

Second, the blockchain is unique. On the one hand, once information is on the chain, it cannot be tampered with and its historical data cannot be changed; this is the uniqueness of the blockchain in time. On the other hand, the blockchain maintains a single main chain during its operation [8]. Once other chains appear, there is a fork, which creates copies of the blockchain in two different spatial dimensions; this is the uniqueness of the blockchain in space.

Third, the "smart contract" of the blockchain. A "smart contract" is a special protocol for verifying and executing contracts. The blockchain uses a programmable system to construct contracts at each node and to specify the obligations to be performed and the conditions under which they take effect, which enables traceable, irreversible, and secure transactions without third-party supervision.

Fourth, the blockchain is a decentralized self-organization [9]. Decentralization means that the storage, verification, maintenance, and transmission of blockchain data do not rely on third-party management agencies or hardware facilities; purely mathematical methods replace the central organization, build trust between nodes, and form a highly stable data structure distributed in a decentralized form. Self-organization means that each node in the blockchain is independent, and a problem with any one node will not affect the operation of the entire system.

Since the emergence of blockchain technology in 2009, it has undergone three important technological changes. The blockchain 1.0 era is characterized by the unit-chain data block structure, mainly applied in the financial field, such as transfers, remittances, and digital payments. Its core is a decentralized, open, and transparent general transaction ledger; the database is jointly maintained by the nodes, and no single node controls the general ledger.
The blockchain 2.0 era is the decentralization of the market. The key concept of its development is "contracts", which has a wide range of application scenarios, covering almost all financial fields and tangible or intangible assets. The core of the blockchain 3.0 era is to create a collaborative society and better integrate cross-regional, cross-domain, and cross-enterprise cooperation. Its evolution logic runs from the DApp (decentralized application) to the DAC (decentralized autonomous company) to the DAO (decentralized autonomous organization). In particular, a model consensus of "blockchain + general certificate" (token) has formed, so that blockchain can empower the real economy, improve industrial operation mechanisms, and reshape the industrial structure, thereby improving production relations and promoting the development of productivity.
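The "block + chain" ledger and the cost of tampering described above can be illustrated with a short sketch. This is a toy model assuming SHA-256 hashing and simple dictionary blocks; it is not the structure of Bitcoin or of any production blockchain.

import hashlib, json, time

def block_hash(block):
    # Hash the block's contents (excluding its own hash field) deterministically.
    payload = json.dumps({k: v for k, v in block.items() if k != "hash"},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, transactions):
    # Each block stores the hash of its predecessor, forming the "block + chain" ledger.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "time": time.time(),
             "transactions": transactions, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain):
    # Editing an earlier block changes its hash and breaks every link after it,
    # which is why modification requires regenerating all subsequent blocks.
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b):
            return False
        if i > 0 and b["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, [{"from": "supplier", "to": "carrier", "goods": 10}])
append_block(chain, [{"from": "carrier", "to": "buyer", "goods": 10}])
print(verify(chain))                       # True
chain[0]["transactions"][0]["goods"] = 99  # tamper with history
print(verify(chain))                       # False: the tampering is detected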
3 Construction of E-commerce Across Borders Logistics Platform System Based on Blockchain Techniques

At present, cross-border logistics methods are diversified, including international express delivery, air transportation, and maritime transportation [10]. They all have multi-subject characteristics, which coincide with the multi-subject character of blockchain's decentralized self-organization and promote the application of blockchain's technical advantages of non-tamperability, uniqueness, "smart contracts", and decentralized self-organization in the field of logistics [4]. Blockchain technology enables a transformation of the operation service model of the cross-border e-commerce logistics platform system from a traditional one-to-many structure to a many-to-many structure. Under the traditional logistics business model, cross-border e-commerce logistics platforms are generally operated by a single manufacturer serving multiple distributors, which in turn serve multiple consumers. This operation service model has many levels; upstream and downstream enterprises interact mainly through information transmission, and such information can only be transmitted in one direction, so it is inefficient. With the empowerment of blockchain technology, however, idle social logistics resources can be centralized and uniformly distributed through information platform channels. Each participant in the blockchain system can obtain the same information at the same time and allocate resources reasonably, scientifically, and efficiently according to the principle of market supply and demand, maximizing the efficiency of resource utilization. In addition, the information transmission of a blockchain-empowered cross-border e-commerce logistics platform is multi-dimensional and decentralized. The ecosystem is re-established through a mesh structure, creating a flexible network system in which different industries can achieve crossover and integration and nurture more innovative models.

Blockchain technology enables multi-center collaborative operation of the cross-border e-commerce logistics platform system and creates an integrated supply chain model. The decentralized characteristics of blockchain technology have changed the modern logistics trading mechanism and system. The trading system of the modern cross-border e-commerce logistics platform is mainly composed of suppliers, logistics enterprises, and consumers; transactions between them strictly follow top-down linear trading rules, and there are barriers to the
information interaction between levels. With the empowerment of blockchain technology, however, the information acquisition channels and content of suppliers, logistics enterprises, and consumers are exactly the same. The trading mechanism is based on the consensus mechanism; different levels and enterprises are completely equal, and fair trading is easier to achieve. Therefore, blockchain technology breaks the transaction structure of the modern cross-border e-commerce logistics platform system and creates a fairer trading structure. In addition, the blockchain system changes the hierarchical system of modern logistics. With platform resource accumulation, an integrated supply chain model can be built to systematically and intensively manage all participants in the system, including upstream and downstream enterprises, core enterprises, logistics enterprises, and consumers, comprehensively improving the production efficiency and the information collection and response capacity of the intelligent logistics industry chain.

In view of the cumbersome cross-border e-commerce logistics process, Internet of Things technology can be used to capture warehousing, transportation, distribution, and other process information in real time and store it in blockchain logistics data sub-libraries, with links established to the databases of customs, commodity inspection departments, tax departments, and insurance companies for real-time monitoring of the transaction process. This simplifies the domestic transportation, transit customs clearance, and international transportation processes of traditional cross-border e-commerce logistics, especially those involving additional links such as packaging, inspection and quarantine, and insurance. In view of the high overall operational risk of cross-border e-commerce logistics, point-to-point transmission technology can be used to store logistics circulation information on multiple nodes, so that the information of the transaction subject runs through the whole cross-border e-commerce logistics process, forming a blockchain with dual uniqueness in time and space; "smart contract" technology can then automatically complete logistics activities that meet the transaction rules and improve the operational efficiency of cross-border e-commerce logistics platforms. In view of the high cost rate of cross-border e-commerce logistics, blockchain technology can be embedded in the logistics after-sales service platform, and intelligent algorithms and consensus mechanisms can be used to improve cooperation among individual organizations, raise the efficiency of cooperation, realize rapid response for after-sales services such as returns and exchanges, and reduce the risks in the return and exchange process.

To sum up, the application of blockchain technology in cross-border logistics platforms will greatly improve the collaborative efficiency of the processes at logistics nodes and optimize cross-border supply, packaging, warehousing, distribution, delivery, commodity inspection, and after-sales service, so as to reduce storage costs, improve transportation efficiency, ensure distribution safety, improve after-sales service, and promote the development of China's cross-border e-commerce logistics industry. At present, China's cross-border e-commerce logistics platform system industry has achieved only primary development.
In its operation, it relies heavily on data and algorithms running through the various operational links of the cross-border e-commerce logistics platform system, such as transportation, storage, packaging, loading and unloading, and distribution. However, across the industrial chains of different cross-border e-commerce
logistics platforms and different logistics enterprises, the interconnection of logistics data is greatly hindered by poor information interoperability and barriers, which leads to resource shortages in the logistics peak season and resource waste in the off-season. Therefore, the current development model of the cross-border e-commerce logistics platform system is only the primary form of modern logistics; it is intelligent only in some logistics operations. It neither takes consumers as the core nor flexibly allocates resources according to market needs, and it does not truly realize the automation and intelligence of logistics. With the empowerment of blockchain technology, however, many pain points in the development of the cross-border e-commerce logistics platform system can be effectively solved. For example, blockchain technology is non-tamperable and traceable: each operation process has detailed data records, and its data are true and reliable. When a dispute arises, responsibility can be accurately assigned simply by tracing the information, effectively protecting the interests of all parties rather than favoring any one subject.

From the perspective of economic benefits, the cross-border e-commerce logistics platform system built on blockchain is more economical. Each transaction in the blockchain system is stored at the specified nodes, including the price, the transaction subjects, and other main information, and is electronically signed by both parties. At the same time, the data in the blockchain system cannot be tampered with, and the distributed ledger ensures that even if the blockchain system leaks information or single or multiple nodes are paralyzed, the stored data and transactions in the system are unaffected, which is more secure. In addition, transactions in the blockchain system do not require manual participation and operation, which can reduce the operating costs of logistics enterprises, reduce the risk of human error, and improve the efficiency of industrial operation.

Blockchain technology empowerment will also affect the operation process of the modern logistics industry, break the original rules and mechanisms, and create a new mechanism more in line with enterprise development. For example, in the supply chain finance operations of cross-border e-commerce logistics platforms, supply chain finance is a financing model dominated by core enterprises. Therefore, only first-tier suppliers or first-tier sellers closely associated with core enterprises have easy access to financing services, while other enterprises at the end of the supply chain have difficulty obtaining financing, resulting in a two-tier differentiation among enterprises in the same supply chain, which is obviously not conducive to the industrial development of the cross-border e-commerce logistics platform system. However, the application of blockchain technology can solve the problem of blocked multi-level transmission of credit. Using circulating digital vouchers, the debt claims generated by a transaction become unique digital vouchers with timestamps to ensure authenticity, so that enterprise credit can be transformed into financial instruments.
At the same time, digital vouchers can be split at multiple levels until they flow to the end of the industrial chain, ensuring that all participants can receive financing services.
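A minimal sketch of such a splittable, timestamped digital voucher follows. The class and field names are assumptions made for illustration; a real platform would anchor each voucher's record on chain rather than hold it in memory.

from dataclasses import dataclass, field
import time, uuid

@dataclass
class Voucher:
    # A timestamped claim on a core enterprise, uniquely identified so its
    # authenticity can be checked along the supply chain.
    issuer: str
    holder: str
    amount: float
    voucher_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    issued_at: float = field(default_factory=time.time)

    def split(self, portions):
        # Split into child vouchers for downstream suppliers; the portions
        # must sum to the face amount, so no new credit is created.
        assert abs(sum(p for _, p in portions) - self.amount) < 1e-9
        return [Voucher(self.issuer, holder, p) for holder, p in portions]

v = Voucher(issuer="core_enterprise", holder="tier1_supplier", amount=100.0)
for c in v.split([("tier2_supplier", 60.0), ("tier3_supplier", 40.0)]):
    print(c.holder, c.amount, c.issuer)  # the core enterprise's credit flows downstream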
4 Innovation of the Cross-Border E-commerce Logistics Platform System Based on Blockchain Technology

Blockchain technology constructs a distributed, jointly maintained, and immutable digital ledger through shared, reliable, and secure information recording and transmission. On the one hand, it can expand the subjects of the cross-border e-commerce logistics platform system and innovate the platform system model. On the other hand, based on the consensus mechanism, a blockchain cross-border e-commerce logistics platform system is more credible, which makes it easy to assign responsibility when disputes arise and to avoid disputes. Therefore, the sharing concept, technological innovation potential, and safety performance of blockchain technology make it the best choice for the platform development of cross-border e-commerce logistics. From the blockchain perspective, the cross-border e-commerce logistics platform system can be advanced in two respects. First, at the level of the underlying operational logic, deepen the integration of blockchain technology with the core technology of the cross-border e-commerce logistics platform system and improve its application effect. Second, deeply integrate blockchain technology from the perspective of business innovation, improve the logic of modern logistics operations, and let enterprises connect and supplement each other in the form of an industrial chain through platform intermediaries, so as to realize a platform model of cross-border e-commerce logistics.

Driven by blockchain technology, the underlying technology of the cross-border e-commerce logistics platform system can not only solve existing development problems but also create more application results. For example, the integration of blockchain technology and big data can effectively solve the unstructured-data problems in basic big data applications and innovate functions such as structured data governance, full-cycle data management, network anonymity and security, and asset datafication. A blockchain service platform with blockchain technology at its core reshapes the business and organizational structure of the cross-border e-commerce logistics platform system in modular form. First, blockchain technology comprehensively improves the operational quality of the platform system. On the operational side, real-time quality monitoring of the production side is realized: blockchain technology enables big data technology to collect and analyze data in different dimensions and process the results in real time. At the same time, blockchain can unify different business data standards and achieve unified management of intelligent devices. On the decision-making side, blockchain technology helps optimize algorithms, establish the effectiveness of simulation verification schemes, and realize intelligent matching and on-demand deployment on the supply-demand side. Second, multiple business modules can form industrial modules, both same-industry modules and cross-industry modules. Data between same-industry modules can be used directly, while cross-industry modules cannot use each other's data directly because of differing information standards.
In this regard, blockchain technology can process the data uniformly and unify the data standards according to the consensus mechanism within the system, realizing the sharing and transmission of data information. Empowered by blockchain technology, the industrial module of
the cross-border e-commerce logistics platform system has the advantages of coordination, whole-process transparency, and whole-link credit information. Third, industrial chain modules can be formed between different industrial modules to enhance the ecological coordination ability of cross-border e-commerce logistics platform systems, such as cross-industry or transnational collaboration, which plays a role in the symbiosis, connection, and reconstruction of the industrial chain of the cross-border e-commerce logistics platform system.
5 Effect Prediction of E-commerce Across Borders Logistics Platform System Construction Based on Blockchain Techniques Figure 1 shows the effect prediction of cross-border logistics platform system construction from the perspective of blockchain.
Fig. 1. The effect diagram of cross-border logistics platform system construction from the perspective of blockchain (a logistics cooperative dispatching center and a blockchain cross-border e-commerce logistics platform connect supply, packaging, warehousing, distribution, and after-sales with customs, commodity inspection, the tax department, and insurance agents across domestic transportation, customs clearance, and international transportation)
6 Conclusions

The cross-border e-commerce logistics platform system is the carrier of many links, including cross-border supply, packaging, warehousing, distribution, delivery, commodity inspection, and after-sales service. It bears on the coordinated development of the cross-border e-commerce industry and the logistics and transportation industry, and it involves multi-party cooperation between domestic and foreign cross-border e-commerce enterprises, logistics and transportation enterprises, and customs, commodity inspection departments, tax departments, insurance companies, and other departments and institutions.
However, the lack of international general standards for blockchain has resulted in poor block connection and poor interoperability. Every automatic transaction using blockchain technology must comply with a pre-agreed "smart contract", but there is currently no unified international technical specification, which easily leads to inconsistent rules and code. Therefore, formulating blockchain standards in line with international norms is the basic guarantee for promoting the construction of the cross-border e-commerce logistics platform system in the blockchain context. Blockchain technology can gradually eliminate the technical constraints of cross-border e-commerce logistics, continuously accelerate technological innovation according to the application needs of logistics scenarios, and ensure the integration of blockchain with big data, cloud computing, the Internet of Things, and other technologies. Judging from actual development, however, many technical problems remain to be solved. For example, blockchain's data storage architecture, consensus mechanisms, smart contracts, privacy security, and other technical links still need further improvement to keep pace with the continuous development of cross-border e-commerce logistics. At the same time, unified industry standards should be established to enhance the interoperability of blockchain technology and its systems, accelerate the interconnection of different intelligent modules, fully explore the network-value advantages of platform development, integrate a variety of service resources, and innovate more new functions and models of smart logistics. In this regard, it is recommended that the government take the lead in organizing cooperation among universities, research institutions, and logistics enterprises; formulate research and innovation directions according to the actual development needs of cross-border e-commerce logistics; give full play to the organizational advantages of industry-university-research collaboration; and improve the conversion rate of scientific and technological applications. In addition, although blockchain technology uses asymmetric encryption to improve the integrity and confidentiality of information transmission, and its dual spatiotemporal uniqueness also provides a degree of guarantee for information stability, there is still a hidden danger of hackers exploiting the system. Based on this, the establishment of blockchain technology consulting services and cross-border e-commerce logistics consulting services is encouraged. With the help of such services, interpreting coding rules, estimating potential risks, and eliminating hidden dangers will be conducive to the construction of China's cross-border e-commerce logistics platform system and will promote the comprehensive application of blockchain and Internet of Things technology in the cross-border e-commerce field.
References

1. Cheng, L.: Analyzing the impact of social networks and social behavior on electronic business during COVID-19 pandemic. Inf. Process. Manage. 58(5) (2021)
2. Smaïl, B., Naouel, M., Nachiappan, S.: Impact of ambidexterity of blockchain techniques and social factors on new product development: a supply chain and Industry 4.0 perspective. Technol. Forecast. Soc. Change 169 (2021)
3. Marion, G., Horst, T.: The influence of blockchain-based food traceability on retailer choice: the mediating role of trust. Food Control 129 (2021)
4. Goldman, S.P.K., et al.: Strategic orientations and digital marketing tactics in e-commerce across borders: comparing developed and emerging markets. Int. Small Bus. J.: Researching Entrepreneurship 39(4), 350–371 (2021)
5. Jackson, D.: How blockchain techniques can benefit the Internet of Things. Urgent Communications (2021)
6. Jovan, K., et al.: Trading application based on blockchain techniques. Tehnički glasnik 15(2), 282–286 (2021)
7. Ji, J., Jin, C.: Managing the product-counterfeiting problem with a blockchain-supported e-commerce platform. Sustainability 13(11), 6016 (2021)
8. Cangir, O.F., Onur, C., Adnan, O.: A taxonomy for blockchain-based distributed storage technologies. Inf. Process. Manage. 58(5) (2021)
9. Cameron, G., Samuel, F.-W., Brice, A.J.: Online consumer resilience during a pandemic: an exploratory study of e-commerce behavior before, during and after a COVID-19 lockdown. J. Retail. Consum. Serv. 61 (2021)
10. Panagiotis, A., Vilma, T., Anindya, G.: Demand effects of the internet-of-things sales channel: evidence from automating the purchase process. Inf. Syst. Res. 32(1) (2020)
Application of Chaos Neural Network Algorithm in Computer Monitoring System of Power Supply Network Ruibang Gong2(B) , Gong Cheng2 , and Yuchen Wang1 1 State Grid Xin Jiang Company Limited Electric Power Research Institute,
Urumqi 830011, Xinjiang, China 2 State Grid Xin Jiang Company Limited, Urumqi 830011, Xinjiang, China
[email protected]
Abstract. The power system is an industry closely related to the national economy; it bears on the country's economic development and people's living standards. As the power system has developed, the power supply network has been highly valued by more and more countries and enterprises, so research on computer monitoring systems for the power grid is very valuable. This paper mainly uses experimental and comparative methods to study the application of the chaotic neural network algorithm in the computer monitoring system of the power supply network. Experimental results show that when the number of users reaches 400, the maximum response time is 29 s, which meets basic needs within a certain range.

Keywords: Chaotic Algorithm · Neural Network · Power Supply Network · Monitoring System
1 Introduction

The stability of the power system determines the quality of the power supply, and electric energy must be supplied in a high-quality, stable, safe, and reliable way. In the entire power grid, the transmission network plays a vital role, and computer monitoring technology in the power system is an indispensable project with great potential. Many studies have examined the application of chaotic neural network algorithms in computer monitoring systems for power supply networks. For example, Wang Hongxi used big data analysis methods to mine the chaotic characteristics of regional hourly load data, fully considering historical load data, weather factors, and other traffic factors, and established a forecasting model based on chaos and neural networks [1]. Yu Sheng combined the properties of short-term loads and used fuzzy neural networks to predict them, describing the structural properties of fuzzy neural networks and the BP learning algorithm [2]. In order to protect video copyright, Liang Jiadong proposed a video watermarking algorithm based on a Chebyshev
chaotic neural network [3]. This paper therefore analyzes the chaotic neural network algorithm and its application in the computer monitoring system of the power supply network. The paper first studies the chaotic neural network and describes its basic theory; it then analyzes the technologies related to system development and describes the overall design of the monitoring system; finally, through the realization of the monitoring system and experimental testing, it draws the relevant conclusions.
2 Application of the Chaotic Neural Network Algorithm in the Computer Monitoring System of the Power Supply Network

2.1 Chaotic Neural Network

It has been discovered that seemingly random neuroelectric activity is not random "noise" but has inherent regularity. This apparently random state of motion with inherent regularity is the phenomenon of chaos, and electrical activity from single nerve nuclei to the entire brain exhibits it [4, 5]. Chaos affects the physiological activities of the nervous system in at least the following ways. A neural network is composed of a large number of neurons; it is a kind of nonlinear mapping with strong adaptability. The basic idea of chaos theory is to use the complex connection relationships between neurons and study their internal laws, so as to realize interaction and cooperation between the unit nodes of the system. A chaotic neural network is formed by a large number of neurons gathered together; it has strong adaptive ability and therefore great advantages in learning and imitation. Chaotically firing neurons adapt strongly to the external environment: neurons with chaotic discharge characteristics can receive and process external information more easily. Chaos is related to the transmission of neural information, to the state of consciousness, and to thinking activities such as learning, memory, and cognition [6, 7].

Chaos is a universal phenomenon in nature and human society, a manifestation of the inherent random process of nonlinear deterministic systems. Since a neural network is a highly nonlinear dynamic system, and chaos has the above-mentioned properties, neural networks are closely related to chaos. Chaos is a random process that continuously changes in time. It has strong regularity and can usually be generated in a few simple steps, but because of its unique characteristics the chaotic system itself poses many problems. Therefore, to achieve the best performance of the computer monitoring system, the neural network algorithm must be continuously optimized. A complex mathematical model of the power system is established with chaos theory, and the chaotic dynamics equations represent solutions under uncertain conditions, so a simple, effective, accurate, and easy-to-implement approximate solution set is needed. The chaos algorithm analyzes the internal structure and operating laws of the power system and the changes in the external environment, and then realizes the predictive control strategy [8, 9].

Aihara chaotic neurons have external inputs and feedback from internal neurons, and the refractory element is assumed to decay exponentially over time. The
dynamic model of a neural network composed of N chaotic neurons is described as:

A_p(s + 1) = g_p( Σ_{k=1}^{M} f_k Σ_{w=0}^{s} l^w a_k(s − w) + Σ_{k=1}^{N} Q_{pk} Σ_{w=0}^{s} l^w P_k(a_k(s − w)) )   (1)
The axon mutation transfer function of the k-th chaotic neuron is f_k, and the number of external inputs is M. This paper proposes a chaotic neural network model with a chaotic oscillator as a single neuron [10]. For discrete time, the equation of motion of the coupled chaotic oscillators is described by g(a) and h(a) as follows:

A_p(s + 1) = g(A_p(s)) + C_p(s)[b_p(s + 1) − A_p(s + 1)]   (2)
The coupling coefficient of the p-th neuron at discrete time s in the formula is C_p(s). There is also a chaotic neural network obtained by improving the discrete Hopfield neural network: if the storage capacity of the model does not reach the number of samples to be stored, the stable attractors of the neural network system deform.

Chaos optimization is a direct search algorithm that seeks the optimal solution through randomness, ergodicity, and regularity. In essence it is an intelligent optimization method with a degree of randomness, which overcomes the dependence of traditional differential optimization methods on gradient information. Due to the randomness of the chaos variables, the method can quickly approach the optimal solution, reducing the subsequent search time. However, because of the random chaotic motion, the search often jumps suddenly to a distant point when approaching the optimal solution, wasting search time. It can be said that randomness is an important factor that allows the algorithm to escape local optima, but it also wastes considerable time. Figure 1 shows the basic process of chaos optimization.
Fig. 1. Basic flow chart of chaos optimization (input the controlled object; read in data and initialize; construct the optimization variables as chaotic variables; perform a global search to obtain a better solution; construct the better solution as a chaotic variable; obtain the optimal solution by local optimization in its neighborhood)
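The flow in Fig. 1 can be sketched with the logistic map as the chaos generator. This is a minimal illustration under assumed settings (logistic map with coefficient 4, a one-dimensional objective, fixed iteration counts), not the implementation used in this paper.

import random

def chaos_optimize(f, lo, hi, n_global=2000, n_local=500, radius=0.01):
    # Stage 1: the logistic map x <- 4x(1-x) traverses (0, 1) ergodically;
    # each chaotic value is mapped onto [lo, hi] and the best point is kept.
    x = random.uniform(0.1, 0.9)
    best_x, best_val = None, float("inf")
    for _ in range(n_global):
        x = 4.0 * x * (1.0 - x)
        cand = lo + (hi - lo) * x
        val = f(cand)
        if val < best_val:
            best_x, best_val = cand, val
    # Stage 2: small chaotic perturbations around the incumbent solution.
    for _ in range(n_local):
        x = 4.0 * x * (1.0 - x)
        cand = best_x + radius * (hi - lo) * (2.0 * x - 1.0)
        if lo <= cand <= hi and f(cand) < best_val:
            best_x, best_val = cand, f(cand)
    return best_x, best_val

# Example: minimize a one-dimensional test function on [-2, 2].
print(chaos_optimize(lambda v: (v - 1.2) ** 2 + 0.5, -2.0, 2.0))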
The trajectory ergodicity of chaos optimization, that is, its ability under given conditions to pass through every state without repetition, is its most basic starting point. Generally, according to the characteristics of chaotic motion, the method is divided into two steps. First, the whole solution space is explored along the ergodic trajectory generated by a deterministic iteration; provided the search path is long enough, the best state found in this stage approaches the optimal value of the problem and can serve as the starting point of the next stage. Second, based on the result of the previous stage, small additional perturbations are applied in the local area until the termination condition of the algorithm is satisfied. The chaos optimization method fully exploits the randomness and ergodicity of chaos; it has a simple structure and a rich search space, and the choice of initial value has no effect on the search results. However, because all the points traversed must be visited within a certain time, it is time-consuming, especially in complex engineering calculations. Therefore, hybrid methods are usually used to solve the target problem: combining the genetic algorithm with simulated annealing compensates for the slow convergence of simulated annealing and the population degradation of the genetic algorithm, and combining chaos with particle swarm optimization brings the ergodic characteristics of chaos into the particle swarm search.

2.2 System Development Related Technologies

Microsoft is largely platform-agnostic, which is well reflected in .NET. The basic theory of this system design is the MVC design pattern, which separates data processing, presentation, and interaction; the three interactive processing mechanisms are cleanly separated, and the data of different code modules are separated to achieve good communication processing and data exchange. In the ARCGIS system structure diagram, the bottom layer is the data storage system (DBMS, XML, files), the middle layer is the application logic package, and the top layer is divided into the GIS desktop, the GIS engine, GIS applications, and other parts. Microsoft developed the SQL Server database, which is widely used and broadly functional. In terms of versatility, it is a database management system that provides reliable guarantees for data input, output, and storage; it can also be regarded as a set of relational databases with close business relationships and great complexity. From a distribution perspective, large-scale distributed storage and processing can be realized; once deployed to a server, it can be viewed and used, and it is not restricted by the operating system. UML is essentially a complete set of diagrams and elements. This modeling language uses a large number of graphic symbols and is an industry standard. With the help of UML modeling tools, software projects can be ideally planned and controlled. Some
more complex and larger systems are especially suited to management with this tool, achieving a clear structure and making it easy to introduce existing projects to newcomers. Geographic information system technology can solve spatial planning and distribution problems. The information management system here is a geographic information system based on geographic information data, used to express the spatial relationships and topological structure of geographic information. Since most GIS applications today are Internet-based, map caching technology can reduce bandwidth consumption and improve the processing efficiency of the system; major GIS tools such as Google Maps use map caching to improve user access speed. Therefore, data caching technology is an indispensable technical guarantee for GIS applications.

2.3 The Overall Design of the Monitoring System

Integrity of system functions: since the power plant is designed on the principle of unattended (few-person on-duty) operation, the monitoring system should follow the same principle, and the entire system should have a complete functional design. The power plant ensures stable and reliable data communication with the upper control center through automatic generation control and automatic voltage control, so that the upper control center can remotely monitor, control, and adjust the control units. System scalability and openness: each node computer in the system meets the openness requirements for function expansion and hardware upgrades; in addition, the system software is open, conforms to international standard data formats, and can communicate with other systems. System security: the system realizes network security and control process security and is equipped with hardware and software safeguards; during operations by service personnel, the system checks the operator's authority and reconfirms that each command is correct. Real-time performance: real-time performance is reflected in the system's response time, which must satisfy the time requirements of data acquisition, data processing, control and setting functions, and system communication; the response time is no more than 1 s. On-site operation and control functions: when the system is offline or separated from the plant-level control, the on-site control unit retains monitoring, control, and setting functions for the important equipment on site, and maintenance personnel can operate the equipment as needed. Generator start and stop operations include generator start/stop and power adjustment to ensure the normal operation of the generator set.
3 Realization of the Monitoring System

3.1 The Production of the Monitoring Screen

The process of creating a monitoring screen is similar to creating a web page: first design the screen layout, and then link it to the relevant variables needed to achieve
the monitoring and control objectives. "EpSynall" provides a variety of drawing tools and interface models, and all the components and symbols of the electrical system can be found in the library it provides. Once the static screen is ready, each indicator can be linked to its corresponding variable. In this way, all the states and values that need to be monitored, as well as the switches and operating steps that need to be operated, can be displayed and operated on the monitoring screen, so that the operating status of the entire system is monitored in real time and its safe and economical operation is effectively controlled.

3.2 System Security Settings

In order to ensure safe production, the monitoring system sets different security zones, priorities, and passwords for operators, and each pixel on the system screen has its own access rights. If an operator's priority is lower than an object's access priority, or the operator is not in the object's safe access area, the object is inaccessible. These functions are configured through the "User Management" option and the "User Login" menu in the EpSynall system configuration.

3.3 Software Design of the Video Surveillance System

The system is implemented on the Visual C++ 6.0 development platform under the Windows 2000 operating system and adopts a multi-threaded client/server architecture. The test was conducted in a system environment with a host configuration of P4 2.0 GHz and 256 MB RAM, operating systems of Windows 2000 Server and Windows 2000 Professional, and a 10/100 MB LAN. The software modules of the video server system are shown in Fig. 2.

Fig. 2. Video server-side system software modules (system control module, auxiliary management module, alarm control module, network communication module, video storage module, video display module, video capture module)

The host of each site acts as a video server: it performs real-time video and audio compression, collection, and encoding through the capture card, transmits the compressed streams over the network according to the client's requirements, and performs transmission control as the client requests. It receives and executes the various control request commands sent by the client, and it automatically receives and processes alarms while sending alarm information to the monitoring host.

This paper establishes the overall workflow of the computer monitoring system for the power supply network and, according to the control principle of the system, manages each power supply detection station in detail. First, a polling-response method connects each sub-station with the master station, and the data of each sub-station are archived. Once a command is sent to a sub-station and no response is received within one second, another detection is attempted within three seconds; if there is still no response, an alarm is triggered immediately. When a sub-station is polled, the master station decodes the message, converts it into standard UDP data, and transmits it over the computer network. Finally, the corresponding sub-station is found according to its address, and a decoding test ensures that the information is transferred correctly. The data transmitted through the computer monitoring device undergo standard conversion between the terminal and the sub-station. The system finds the target site according to the corresponding address and path and then performs the final decoding. The data source can judge whether there is a gas anomaly according to the dynamic changes of the data packets; if there is an anomaly, the alarm must be started immediately or the situation reported to the ground monitoring center through the dynamic images of the MGCS. The ground monitor then analyzes the current situation according to the actual conditions on site and the data above, so as to respond and issue commands correctly.
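A minimal sketch of the poll, one-second timeout, three-second retry, and alarm logic described above follows. The timeout constants mirror the rule in the text, while the UDP transport details, addresses, and message format are illustrative assumptions.

import socket

SUBSTATIONS = [("192.0.2.10", 5000), ("192.0.2.11", 5000)]  # assumed addresses

def poll_once(sock, addr, timeout):
    # Send one poll to a sub-station and wait up to `timeout` seconds for a reply.
    sock.settimeout(timeout)
    sock.sendto(b"POLL", addr)
    try:
        data, _ = sock.recvfrom(4096)
        return data
    except socket.timeout:
        return None

def poll_all():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for addr in SUBSTATIONS:
        data = poll_once(sock, addr, timeout=1.0)      # no reply within one second...
        if data is None:
            data = poll_once(sock, addr, timeout=3.0)  # ...retry within three seconds
        if data is None:
            print("ALARM: sub-station", addr, "is not responding")
        else:
            print("archived", len(data), "bytes from", addr)
    sock.close()

if __name__ == "__main__":
    poll_all()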
4 Analysis of Experimental Test Results

4.1 Average Transaction Response Time for Different Numbers of Users
Table 1. Response time

Number of users  Average response time (s)  Maximum response time (s)
50               5                          13
100              8                          17
200              13                         23
400              18                         29
During the performance test of the system, concurrent transactions by fifty users were simulated; the maximum and average response times were 13 s and 5 s, respectively. The details are shown in Table 1.
Application of Chaos Neural Network Algorithm
337
As Table 1 shows, with 100 users the average response time is 8 s and the maximum is 17 s; with 200 users, 13 s and 23 s; and with 400 users, 18 s and 29 s.
Fig. 3. Average transaction response time for different numbers of users (average and maximum response times in seconds for 50, 100, 200, and 400 concurrent users; values as in Table 1)
As shown in Fig. 3, with four hundred concurrent users the maximum and average response times are 29 s and 18 s, respectively. From this result, the system has a very high transaction success rate, and the longest response time did not exceed 30 s, which meets the expected value. To sum up, the system performs well and is worthy of further promotion and application.
5 Conclusion

As the power system has continuously improved, people have put forward higher-level and deeper requirements for power quality, and the power supply network serves as an intermediate link and an important foundation for the operation of the power grid. In the power system, the transmission and distribution of electrical energy is a very important process, so rational control and management of the power grid is particularly critical. This system takes the chaotic neural network algorithm as the monitoring core and exerts control when certain conditions are met; the algorithm has good fault tolerance and robustness. Chaos optimization is an optimization method
between the deterministic and the stochastic. It has rich temporal and spatial dynamics, and its dynamic evolution causes the migration of attractors. At the same time, due to the ergodic property of chaos, the search does not fall into local minima, which distinguishes it from the probabilistic acceptance of inferior solutions in simulated annealing and from tabu-list checking. Therefore, chaos optimization is a new optimization method that has attracted wide attention. The main purpose of this paper is to study the application of chaotic neural networks in power systems. Experiments have demonstrated its superiority and verified that the algorithm performs well.
References

1. Takahashi, D., Iizuka, T., Mai-Khanh, N.N., et al.: Fault detection of VLSI power supply network based on current estimation from surface magnetic field. IEEE Trans. Instrum. Meas. 1–12 (2018)
2. Prakash, S., Mishra, S.: VSC control of grid connected PV for maintaining power supply during open phase condition in distribution network. IEEE Trans. Ind. Appl. 1 (2019)
3. Mehrpooya, M., Mohammadi, M., Ahmadi, E.: Techno-economic-environmental study of hybrid power supply system: a case study in Iran. Sustain. Energy Technol. Assess. 25, 1–10 (2018)
4. Kaur, R., Krishnasamy, V., Kandasamy, N.K.: Optimal sizing of wind–PV-based DC microgrid for telecom power supply in remote areas. IET Renew. Power Gener. 12(7), 859–866 (2018)
5. Gomes, D., Barbi, I., Lazzarin, T.B.: High voltage power supply using T-type parallel resonant DC-DC converter. IEEE Trans. Ind. Appl. 1 (2018)
6. Cany, C., Mansilla, C., Mathonniere, G., et al.: Nuclear power supply: going against the misconceptions. Evidence of nuclear flexibility from the French experience. Energy 151, 289–296 (2018)
7. Elsisi, M., Mahmoud, K., Lehtonen, M., et al.: An improved neural network algorithm to efficiently track various trajectories of robot manipulator arms. IEEE Access 1 (2021)
8. Das, A., Halder, A., Mazumder, R., et al.: Bangladesh power supply scenarios on renewables and electricity import. Energy 155, 651–667 (2018)
9. Sheikholeslami, A.: Power supply noise [circuit intuitions]. IEEE Solid-State Circuits Mag. 12(3), 15–17 (2020)
10. Golilarz, N.A., Mirmozaffari, M., Gashteroodkhani, T.A., et al.: Optimized wavelet-based satellite image de-noising with multi-population differential evolution-assisted Harris Hawks optimization algorithm. IEEE Access 1 (2020)
Design of Bus Isolated Step-Down Power Supply System Based on Ant Colony Algorithm Jiangbo Gu1(B) , Ruibang Gong2 , Guanghui An3 , and Yuchen Wang1 1 State Grid Xin Jiang Company Limited, Urumqi 830011, China
[email protected]
2 State Grid Xin Jiang Company Limited Electric Power Research Institute,
Urumqi 830011, China 3 State Grid Xinjiang Electric Power Company Limited Changji Power Supply Company,
Changji 830011, China
Abstract. With the development of communication technology, various communication modes have been widely used in power systems. As a new wireless transmission mode, bus power supply technology has attracted more and more attention from users and enterprises, so it is of great significance to study a bus step-down power supply system based on the ant colony algorithm. This paper first introduces the working mechanism and characteristics of bus isolated step-down power supplies and expounds their classification; it then studies the application of the ant colony algorithm, designs the bus isolated step-down power supply system based on this algorithm, and carries out a simulation experiment to test its performance. The tests show that the safe capacity of each node increases with the number of nodes, and the voltage of each end node is not less than 40 V. At the same time, the power consumption of each node of the system is small, with a maximum of no more than 2 W. This shows that the system can effectively isolate and reduce the bus voltage.

Keywords: Ant Colony Algorithm · Bus Isolation · Isolation and Step-down · Power Supply System
1 Introduction

With the continuous development of communication technology, people have ever higher requirements for the security and confidentiality of transmitted data and information. At present, there are many kinds of bus isolated step-down power supply schemes. In modern wireless communication systems, it is particularly important to use distributed computing resources to solve the security problems of traditional networks [1, 2]. To solve these problems effectively and keep the system operating stably and efficiently, a bus isolated step-down power supply device must be adopted. Many scholars have carried out relevant research on isolated step-down power supplies. In the fields of communication technology, computer networks, and new materials, research on
wireless channel isolation systems has achieved some results. However, as the requirements on RF signal transmission rate and the types of interference increase, there are two main communication modes in China: one is the direct use of indirect wired networks, such as BT and SAN; the other is the directly and indirectly connected bus structure [3, 4]. This new microwave circuit structure has become a new theoretical problem. Foreign research often uses linear transmission systems, which integrate the signal source and the control equipment into one chip to form a full-duplex signaling mode (FDS) or a half-bridge digital waveform channel, realizing two-way data communication. In this way, all information is transmitted through one cable, effectively avoiding channel interference [5, 6]. The above research has laid the foundation for this paper.

With the rapid development of communication technology and computer networks, a variety of intelligent appliances have sprung up, and the cable devices in these household appliances are essential, so research on bus isolated step-down power supply systems is of great significance. This paper introduces a controller (PLC) design scheme based on a new generation of simulated biological evolution methods, such as the ant colony algorithm and the genetic algorithm, and gives the corresponding program to reduce the loop coupling coefficient and improve communication efficiency. At the same time, it analyzes the problem that traditional methods carry a heavy serial load and have difficulty meeting high-stability requirements.
2 Discussion on the Bus Isolated Step-Down Power Supply System Based on the Ant Colony Algorithm

2.1 Bus Type Isolated Step-Down Power Supply

2.1.1 Working Mechanism

The working principle of the bus isolated step-down power supply system is to install a controller on the power supply line to communicate between the main control chip and the sub-modules. When the bus fails, the controller plays a very effective protective role; when the host fails, it automatically disconnects the power supply. Each independent processor has its own functions and characteristics: it can run according to the user's needs, or complete part or all of the working process and display its status separately, and it can also communicate with other chips through the serial interface. In a communication system, because cables are numerous and widely distributed, the transmission distance is far greater than 10 km. When the bus isolated step-down power supply technology operates, all channels must be blocked with a shielding network and buffers and then covered by one or more nodes. In this way, the influence of unmeasurable external interference (such as noise and light) and the signal attenuation caused by the loop can be prevented. Finally, after a series of processing steps, all the other parts of the communication system can be connected as a whole [7, 8].
2.1.2 Working Characteristics
The bus isolated step-down power supply system mainly has the following characteristics. (1) Each node must consider its own security, so the normal operation of the whole network has to be guaranteed. After a modular design is adopted, the functions of the various components in the circuit can be fully connected, giving the system strong integrity and stable operation. (2) To prevent unnecessary loss and waste of electric energy caused by bus or substation failure, a protection device needs to be installed to ensure that the working state of other equipment is not affected. At the same time, the system cost should be reduced within the given requirements, so as to lower the economic cost and better improve reliability, safety and stability. (3) Unnecessary coupling between communication devices and nodes is reduced, cable consumption is saved, and the operating efficiency of the system is improved. Traditional ring networks suffer from disordered propagation and strong interference. After optimization by the ant colony algorithm, the ring network structure becomes more compact and simplified, which reduces noise pollution and communication loss, makes the bus isolated step-down power supply system more practical, removes unnecessary transmission links, saves cost, and effectively improves efficiency, providing a good environment for the bus control system [9, 10].

2.1.3 Type
In a practical linear design there are many uncertain factors in the system, which cause various effects in the process of signal transmission, such as noise, power fluctuation and load mutation. These disturbances change the output voltage, so measures must be taken to ensure that the signal is transmitted accurately. At present, the common step-down power supply methods fall into the following three types.

A distributed network topology. In a distributed network structure, a master station is set on each node, generally composed of multiple branches and subnets. The ring ant colony algorithm with this structure is used to deal with the multi-path optimization problem. When all branches in the whole link adopt the same connection mode, the global maximum cut set can be eliminated, and the optimal solution can be found without computing a complex grid: there is only one master station or route on each parent node, and the subnet is composed of multiple lines.

A centralized power distribution system. A bus type centralized power distribution system supplies power to each area, and each load has an independent power supply to ensure the stability of the whole communication network. When a fault occurs between nodes, it does not affect other equipment. Therefore, the system adopts this mode.

A local topology replacing the overall network architecture. In a local topology, each node is connected through one or more buses to achieve its own control. When designing the overall network architecture, the global topology is generally not considered. This layout mode can only be adopted when the whole system needs to be expanded or there
are special requirements; even so, for some important areas such as routing protocols and data communication, a local layout and centralized distribution must also be adopted.

2.2 Ant Colony Algorithm

2.2.1 Concept
The ant colony algorithm is a heuristic search method. It simulates the topology, behavioral characteristics and objective function of information by randomly selecting among candidate solutions in the process of ant foraging. In nature, each individual has its own parallel searching ability. Because organisms are diverse, mutation may occur in uncertain environments, and each path has a different range of adaptation, the ant colony algorithm can process this information according to its own needs to achieve global optimization. At present, most ant colony algorithms are designed for single or multiple individuals. When obtaining the optimal solution under different objective functions, certain constraints must be satisfied.

2.2.2 Features
Adaptability. Ant colony optimization selects an initial group and starts the search; if it finds a shorter path, it continues, because pheromone accumulation and the ants' preference for the shortest route act as a heuristic factor. When a local extreme value appears in the search process (i.e., the search falls into a local minimum), or the best solution of the whole group has converged, the iteration stops; otherwise the search continues until all feasible optimal schemes are found.

Heuristic learning. There are certain regularities between pheromone concentration and positive and negative solutions in the population. In the search process, these regularities take the individual ant as the center and reach the optimal value by integrating existing knowledge.

Strong real-time adaptability. The ant colony algorithm is a technology developed from intelligent computing, random search for optimal solutions, and global optimization theory. The parallel ant colony algorithm has strong real-time adaptability, so it can converge quickly in the solution process.

The ant algorithm is a bionic algorithm, also known as the ant colony algorithm. Ants are social animals; an individual ant is weak, but ants working in groups have great power. Behaviors such as nesting, migration and foraging are all accomplished by the collective power of ants. The ant colony algorithm was formed on the basis of long-term observation of ant colony behavior. This paper illustrates its working principle with an example, whose principle is shown in Fig. 1.
Design of Bus Isolated Step-Down Power Supply System
343
Fig. 1. Basic principle of the ant algorithm (nest at A, food at H, obstacle near B; candidate paths A–B–C–G–H and A–B–D–E–F–G–H)
As can be seen from Fig. 1, there are two paths from the ant nest to the food: ABCGH and ABDEFGH. When ants walk, they leave a pheromone on the road, and when they reach a fork again, they tend to follow the route with more pheromone. Assuming that forty ants leave the nest, we can draw the following conclusions. In the first minute, 40 ants walk from A towards B. In the next minute, 40 ants reach B; since neither branch carries any pheromone yet, 20 ants choose BC and 20 choose BD. In the third minute, 20 ants arrive at C, while 20 arrive at D and continue towards E. In the fourth minute, 20 ants arrive at G and continue towards H, while 20 ants arrive at E and continue towards F. In the fifth minute, the first 20 ants find food at H and return towards G; the 20 ants that reached F continue towards G. In the sixth minute, 20 ants appear at G. Because 20 ants have passed along both CG and FG, the pheromone concentrations of CG and FG are the same, so ten ants choose GC and the other ten choose GF; the other 20 ants that reached G continue towards H. In the seventh minute, 10 ants arrive at C and move towards B; the other 10 ants arrive at F and continue towards E; the remaining 20 ants find food at H and then return to G. In the eighth minute, 10 ants arrive at B and move towards A; the other 10 ants arrive at E and continue towards D; 20 ants are at G, and since roughly equal numbers of ants have passed along CG and FG, their pheromone concentrations are similar, so ten ants choose GC and the remaining ten choose GF. In the ninth minute, the first ten ants carry the food they found to A and then return towards B; ten ants arriving at D continue towards B, 10 ants arriving at C continue towards B, and 10 ants arriving at F continue towards E. This cycle lasts until the tenth minute, when ten ants arrive at B; by then forty ants have passed in the direction of BC and thirty in the direction of BD, so these ten ants move
in the direction of BC. With the passage of time, the pheromone in the BC direction becomes richer and richer, and eventually the entire colony moves along BC, which means the ant colony has adapted to the situation.

2.2.3 Application
The ant colony algorithm is a probabilistic method for solving complex combinatorial optimization problems. The problems it can solve can be represented by a graph G(n, e), where n represents the nodes of the graph and e the edges connecting the nodes. Because a fuzzy logic system is composed of a series of if-then rules, the output of the system is mapped by these rules. The ant colony algorithm is used to adjust the parameters and screen the rules of the fuzzy logic system, so as to improve its performance. The performance of the fuzzy logic system can be measured by the system fitness, which is defined as the root mean square error between the expected output and the actual output:

$$\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(f_s(x_i)-y_i\bigr)^2} \tag{1}$$
where n is the number of training or test data pairs, and f_s(x_i) and y_i are the actual output and the expected output of the system, respectively. The specific implementation of the ant colony algorithm is as follows. First, each variable of the algorithm is defined. Then, the search path matrix is established. The search path matrix must satisfy e(i, j) = e(j, i), that is, the path length from node i to node j equals the path length from node j to node i, and the distance from a node to itself is zero; the search path matrix is therefore a symmetric matrix with a zero main diagonal. Finally, all ants are randomly assigned to different nodes, and a taboo table is set for each ant. The taboo table of the k-th ant is denoted Tabu_k; it records the nodes visited by that ant and prevents the ant from visiting the same node repeatedly. Ants carry a certain amount of pheromone. During node visits, ants release pheromone on the routes they travel, and the pheromone evaporates over time. Shorter routes are traversed more often, so more pheromone accumulates on them, which induces more ants to choose those routes. The probability that ant k at node i selects node j as the next node to visit is:

$$p_{ij}^{k}(t)=\begin{cases}\dfrac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}(t)]^{\beta}}{\sum_{s\notin \mathrm{Tabu}_k}[\tau_{is}(t)]^{\alpha}\,[\eta_{is}(t)]^{\beta}}, & j\notin \mathrm{Tabu}_k\\[2ex] 0, & j\in \mathrm{Tabu}_k\end{cases} \tag{2}$$

where η_ij represents the visibility of path (i, j), defined as the reciprocal of the length of path (i, j).

2.3 Design Requirements for Bus Type Isolated Step-Down Power Supply System
In the design process, the main requirements are: (1) to ensure the safe operation of the system, the whole bus communication network must be planned reasonably; (2) bus length and equipment quantity should be minimized, and when one node fails, the power supply of the other nodes must not be affected. For a DSP processor that adopts new technologies
such as modular structure, distributed wiring and programmable controller, it is designed to improve the data processing speed of CPU, so there must be a chip with high reliability in hardware.
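To make the transition rule of Eq. (2) in Sect. 2.2.3 concrete, a minimal Python sketch follows. It is illustrative only: the function names, the values of α and β, and the simple evaporation scheme are assumptions for the sketch, not details of the paper's implementation.

```python
import random

def choose_next(i, tau, dist, tabu, alpha=1.0, beta=2.0):
    """Roulette-wheel selection of the next node per Eq. (2): weight each
    candidate j by pheromone tau[i][j]^alpha times visibility (1/dist[i][j])^beta."""
    candidates = [j for j in range(len(dist)) if j not in tabu]
    weights = [(tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta) for j in candidates]
    r, acc = random.uniform(0.0, sum(weights)), 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return j
    return candidates[-1]

def update_pheromone(tau, tour, tour_length, q=1.0, rho=0.5):
    """Evaporate all pheromone by factor (1 - rho), then deposit q / tour_length
    on each traversed edge, keeping the matrix symmetric, e(i, j) = e(j, i)."""
    n = len(tau)
    for a in range(n):
        for b in range(n):
            tau[a][b] *= (1.0 - rho)
    for a, b in zip(tour, tour[1:]):
        tau[a][b] += q / tour_length
        tau[b][a] = tau[a][b]
```

Each ant appends every visited node to its taboo table, so `tabu` grows until the tour is complete; shorter tours deposit more pheromone per edge, reproducing the reinforcement effect illustrated in Fig. 1.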
3 Experiment of Bus Isolated Step-Down Power Supply System Based on Ant Colony Algorithm

3.1 Design Framework of Bus Isolated Step-Down Power Supply System
The design framework of the bus type step-down power supply system based on the ant colony algorithm is shown in Fig. 2. As the figure shows, the communication module is the most important part of the wireless network; it is responsible for receiving and forwarding the information sent by the user. In actual operation, attention should be paid to transmission speed, data integrity and real-time performance. Therefore, when the signal is transferred from the transmitting station to the routing host, serial mode is used to complete the transmission. After receiving the required parameters detected by multiple sensor nodes, commands can be transmitted to the terminal through the channel and the results accepted, thereby achieving the control goal.
Fig. 2. Bus isolation and step-down power supply system framework (system power supply, bus, micro control unit with communication connection, signal interface circuit, logic gate circuit, kernel circuit, bus low-voltage differential signal unit, and other communication units)
3.2 Power Supply System Performance Test Process
Because of the influence of the system structure, physical parameters and other factors during actual operation, there are many unknowns and uncertainties. Therefore, this paper simulates the bus step-down power supply system. First, the u-plc software is used to write a program that controls the internal serial communication data of the single chip microcomputer, realizing the transmission and reception of information. Second, serial communication is used to complete the one-way transmission of the signal from the address line to the input point. Finally, transitions between working states under different parameters are applied at different positions and compared with the initial values, so as to verify correctness.
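As an illustration of this correctness check, the following Python sketch (using the pyserial library) writes known patterns over a serial link and compares the echoed bytes with the initial values. The port name, baud rate and test patterns are assumptions for the sketch; the actual test in this paper was written with the u-plc software for the single chip microcomputer.

```python
import serial  # pyserial

def loopback_test(port="/dev/ttyUSB0", baud=9600, patterns=(0x55, 0xAA, 0x0F)):
    """Send each test pattern and compare what the device echoes with the
    initial value; mismatches indicate a transmission fault."""
    with serial.Serial(port, baud, timeout=1) as link:
        for value in patterns:
            link.write(bytes([value]))
            echoed = link.read(1)
            ok = bool(echoed) and echoed[0] == value
            print(f"sent {value:#04x}, got {echoed.hex() if echoed else 'nothing'}:",
                  "OK" if ok else "MISMATCH")
```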
4 Experimental Analysis of Bus Isolated Step-Down Power Supply System Based on Ant Colony Algorithm

4.1 Performance Test of Bus Isolated Step-Down Power Supply System Based on Ant Colony Algorithm
Table 1 shows the performance test data of the power supply system.

Table 1. Power supply system performance test

| Node | Safety capacity | Input terminal power supply voltage (V) | Maximum power dissipation (W) |
|------|-----------------|-----------------------------------------|-------------------------------|
| 1    | 1               | 40                                      | 2                             |
| 2    | 2               | 45                                      | 2                             |
| 3    | 3               | 48                                      | 1                             |
| 4    | 4               | 48                                      | 2                             |
| 5    | 5               | 48                                      | 1                             |
Fig. 3. Performance comparison of the power supply system (safety capacity, maximum power dissipation (W) and input terminal power supply voltage (V) plotted for nodes 1–5)
This paper mainly studies the bus isolated step-down power supply system based on the ant colony algorithm. The design was simulated, and the simulation results were compared with the actual situation. From Fig. 3 it can be seen that the safety capacity of each node increases with the node number, and the voltage at every end node is no less than 40 V. At the same time, the power consumption of each node is small; the maximum power dissipation does not exceed 2 W. This shows that the system can effectively isolate the bus and step down its voltage.
5 Conclusion

With the rapid development of communication technology, people place ever higher requirements on information transmission performance. In practical engineering, the line load increases because of the complexity and large capacity of the power supply system. To solve this problem, it is necessary to design a bus isolation and step-down device that can carry the load. The device can be used for wireless communication, control and scheduling between lines or equipment, and it can be connected with other devices to reduce cost.
References 1. Joodi, M.A., Abbas, N.H., Al-Mulahumadi, R.: Optimum design of power system stabilizer based on improved ant colony optimization algorithm. J. Eng. 24(1), 123–145 (2018) 2. Mehrpooya, M., Mohammadi, M., Ahmadi, E.: Techno-economic-environmental study of hybrid power supply system: a case study in Iran. Sustain. Energy Technol. Assess. 25(FEB.), 1–10 (2018)
3. Nie, A.: Design of English interactive teaching system based on association rules algorithm. Secur. Commun. Netw. 2021(s1), 1–10 (2021) 4. Thummala, P., Yelaverthi, D.B., Zane, R.A., et al.: A 10-MHz GaNFET-based-isolated high step-down DC–DC converter: design and magnetics investigation. IEEE Trans. Ind. Appl. 55(4), 3889–3900 (2019) 5. Kanso, B., Kansou, A., Yassine, A.: Open capacitated ARC routing problem by hybridized ant colony algorithm. RAIRO – Oper. Res. 55(2), 639–652 (2021) 6. Mokhtari, Y., Rekioua, D.: High performance of maximum power point tracking using ant colony algorithm in wind turbine. Renew. Energy 126(OCT.), 1055–1063 (2018) 7. Sankar, B.R., Umamaheswarrao, P.: Multi objective optimization of CFRP composite drilling using ant colony algorithm. Mater. Today Proc. 5(2), 4855–4860 (2018) 8. Gheraibia, Y., Djafri, K., et al.: Ant colony algorithm for automotive safety integrity level allocation. Appl. Intell. Int. J. Artif. Intell. Neural Netw. Complex Probl.-Solving Technol. 48(3), 555–569 (2018) 9. Schrittwieser, L., Leibl, M., Kolar, J.W.: 99% efficient isolated three-phase matrix-type DAB buck-Boost PFC rectifier. IEEE Trans. Power Electron. 35(99), 138–157 (2019) 10. Sharma, A.K., Sambariya, D.K.: Application of FOPID design for LFC using flower pollination algorithm for three-area power system. Univers. J. Control Autom. 8(1), 1–8 (2020)
Translation Accuracy Correction Algorithm for English Translation Software Qiongqiong Yang(B) School of Foreign Languages, Dalian University of Science and Technology, Dalian, Liaoning, China [email protected]
Abstract. With the development of technology, the level of informatization in many countries continues to increase, and the demand for translation as the main means of communication between different languages has also grown significantly. The purpose of this paper is to study the design of a translation accuracy correction algorithm for English translation software. First, the dependency tree is briefly introduced, the dependency-tree-to-string bilingual model used in this paper is formulated, and the possibility of applying this structure to the Chinese-English example-based machine translation system of this work is discussed. Based on the built-in Chinese-English dependency-tree-to-string example library, the traditional methods for similar-example retrieval and translation generation are improved, yielding a better similar-case retrieval unit and translation generator. The methods for each unit of the system are organically combined to form a complete example-based translation system. The performance of the system was verified in a large-scale comparative experiment on a training corpus: in the 8th round of training, the BLEU value of our model reached 23.6. Keywords: English Translation · Software Translation · Accuracy Correction · Algorithm Design
1 Introduction

With the deepening of global economic integration, economic and cultural exchanges between countries have become more extensive and diversified, and free, barrier-free communication between different languages has become an urgent need [1]. Especially for economically underdeveloped countries and regions, a high degree of informatization is needed to drive economic development, and informatization is inseparable from barrier-free information exchange [2]. Therefore, effectively solving automatic translation for resource-scarce languages is of great practical significance for efficient information acquisition and communication [3]. Machine translation is the process of using a computer to translate one language into another [4]. Birkenbeuel J measured the ability of Google Translate (GT) to accurately transcribe and translate commonly used healthcare phrases (from English to Spanish).
Eighteen English-speaking subjects (9 males and 9 females), mean age 36 years, volunteered to participate in a sentence-reading study. When the original spoken English words were retained, the single-sentence word-generation accuracy was 89.4% [5]. Zhang H developed new English-Kannada KB translation data from Probase and ConceptNet and compared their design approach with several baselines. Experimental results show that this method improves translation accuracy compared with the baseline method [6]. Machine translation has become important, providing lasting impetus for human development [7]. This paper improves the basic particle swarm optimization algorithm to find the optimal solution of the Chinese-English sentence alignment function. The basic particle swarm optimization algorithm is prone to early convergence and stagnation; by letting particles exchange and share information with their neighbors, early convergence is avoided and the accuracy of sentence alignment is improved. The improved algorithm finds alignments efficiently and effectively by analyzing and fitting the optimal solution of the alignment function.
2 Research on the Design of Translation Accuracy Correction Algorithm for English Translation Software

2.1 Improved Particle Swarm Optimization Algorithm
Given that basic particle swarm optimization is sensitive to local extrema and prone to premature convergence or stagnation, this paper uses an improved particle swarm optimization to solve the practical problem of English word-order alignment [7]. To address the efficiency problem, the improved algorithm builds on the basic particle swarm optimization method for sentence alignment and divides the particle swarm into several groups [8]. Peer subgroups exchange and share information with their neighbors to find the best solution for the fitness of the proposed alignment [9]. Initialization fixes the swarm size, i.e., the number of particles in the swarm, and the initial state and initial velocity of each particle. The generated particles are clustered with the k-means algorithm to obtain m groups. After the particles in the English sentence set have been divided into m subgroups, when solving Chinese-English sentence alignment this paper selects appropriate particles, through certain rules, to construct a sending list and a replacement list. For information exchange, the sending list is sent to one of the other subgroups, and the subgroup that receives it uses its own replacement list to perform the replacement [10]. At the level of an individual particle, each particle follows the leader of its subgroup, the particle with the best behavior in the subgroup; at the neighbor level, a particle keeps synchronized motion and shares information with the best-behaving particle among its neighbors; at the level of the entire swarm, each particle follows the motion of the particle with the best behavior in the whole swarm [11, 12]. A minimal sketch of this update is given below.
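The following Python sketch illustrates the per-particle update within one subgroup. The class and function names, the inertia weight and acceleration coefficients, and the fitness interface are illustrative assumptions, not the paper's implementation; in the full method, the k-means subgroups would additionally exchange their sending and replacement lists every few iterations.

```python
import random

class Particle:
    def __init__(self, dim):
        self.x = [random.random() for _ in range(dim)]   # position (alignment encoding)
        self.v = [0.0] * dim                             # velocity
        self.best_x, self.best_f = list(self.x), float("-inf")

def step(subgroup, fitness, w=0.7, c1=1.5, c2=1.5):
    """One update of a subgroup: refresh personal bests, pick the subgroup
    leader, then move every particle toward its own best and the leader's."""
    for p in subgroup:
        f = fitness(p.x)
        if f > p.best_f:
            p.best_f, p.best_x = f, list(p.x)
    leader = max(subgroup, key=lambda p: p.best_f)
    for p in subgroup:
        for d in range(len(p.x)):
            r1, r2 = random.random(), random.random()
            p.v[d] = (w * p.v[d]
                      + c1 * r1 * (p.best_x[d] - p.x[d])
                      + c2 * r2 * (leader.best_x[d] - p.x[d]))
            p.x[d] += p.v[d]
```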
2.2 Calculation of Semantic Similarity Based on "HowNet"
HowNet is a common-sense knowledge base that uses Chinese and English words to describe concepts and the relationships between concepts [13]. HowNet is a valuable knowledge dictionary that provides an invaluable tool for natural language and machine translation research. HowNet has two central notions: "concept" and "sememe". A "concept" is a description of the semantics of a word; each word can be represented by multiple concepts. A concept is described in a knowledge description language, and the terms used in that language are called "sememes" [14]. A sememe is the smallest unit used to describe a concept. On the one hand, the sememe is the most basic element for describing a concept; on the other hand, there are complex relationships among sememes. HowNet describes eight such relationships, including homology, antonymy, product and event relations. Sememes therefore form a complex network rather than a simple tree.

2.3 Dependency Tree to String Model
The characteristics of the head word in a dependency tree determine the structural properties of the tree (or subtree) rooted at that word. This ensures that the root of a subtree is sufficiently representative, making it suitable for retrieving parallel examples oriented to the root or the tree context, whereas conventional EBMT treats examples in the form of a grammar. In addition, the dependency tree formalism is simple, easy to extend, and easy to apply to an EBMT system. In this work we keep the bilingual examples organized and build an EBMT system using Chinese-English data. Since Chinese and English belong to different language families, their syntactic structures differ greatly, and if both languages were forced into one fixed form there would be no guarantee of a good fit. This project aims to build a Chinese-English EBMT system and introduces Chinese lexical analysis and reference models, so it adopts a Chinese dependency tree to English string format and maintains the alignment between the two sides of each bilingual example. Based on similarity at the source projection level, the system retrieves bilingual examples whose source dependency trees match the input. A triple <D, S, A> is used to represent the dependency-tree-to-string model, where D is the source-language dependency tree, S is the target-language string, and A records the relative positions (alignment) between nodes of the source tree D and words of the target string S.

2.4 Translation Evaluation Model
The problem with evaluation is that it is difficult to be objective and obtain true results, because too many factors and variables complicate things. For the evaluation of translation quality, the aim is to establish an evaluation method that quantifies translation quality as a number. Many quantitative methods have been proposed that assign scores to a
certain translation [15]. The idea behind the automatic evaluation of machine translation is similar: to quantify machine translation output. The purpose of manual evaluation of machine translation is likewise to measure quality with specific numbers [16]. Machine translation systems can be regarded as systems with subsystems such as algorithms and corpora. The original text is the input of the system, and the translation produced is the output [17, 18]. To evaluate the translation quality of a machine translation system, it is sufficient to look at the final translation result; to understand the quality of a translation, a sound evaluation method is essential.
Fig. 1. Machine translation model (system thinking applied to machine translation evaluation, connecting the system, model, information, grammar, diversity and evaluation components)
Applying systematic thinking to the evaluation of machine translation systems is a very important step. A system has its components, the connections that bring the components together, and the combination of components and connections, which produces something larger than either alone. In Fig. 1, the components correspond to the information index, because information is the material part of the translation. The connections correspond to the diversity index; complexity is the result of components and connections being joined together, and diversity is an indispensable part of the translation. Before turning to implementation, the <D, S, A> triple of Sect. 2.3 can be made concrete with the sketch below.
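The following Python sketch shows one possible representation of the <D, S, A> triple: the source dependency tree, the target string, and the node-to-word alignment. The field names and the toy example are assumptions for illustration, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DepNode:
    word: str
    head: int                         # index of the parent node, -1 for the root
    children: List[int] = field(default_factory=list)

@dataclass
class TreeToStringExample:
    source_tree: List[DepNode]        # D: Chinese dependency tree
    target_string: List[str]          # S: English word sequence
    alignment: Dict[int, List[int]]   # A: source node index -> target positions

example = TreeToStringExample(
    source_tree=[DepNode("喜欢", -1, [1, 2]), DepNode("我", 0), DepNode("苹果", 0)],
    target_string=["I", "like", "apples"],
    alignment={0: [1], 1: [0], 2: [2]},
)
```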
3 Investigation and Research on the Design of Translation Accuracy Correction Algorithm for English Translation Software

3.1 Implementation of Machine Translation System
The Sato & Nagao method is used to describe the dependency trees, and a dependency tree is usually described in a straightforward manner. Examples are retrieved from the library by the corresponding descriptive method, and the input sentence is matched against them to discover similar cases. There are three ways to match candidate examples: replacement, filtering and addition. For inputs whose dependency structure has no counterpart in the library, the corresponding meaning expressions change accordingly. As the log-linear model of this project, the characteristic functions used are:
This function is used to measure the quality of a translation and improve translation fluency. In this work, the likelihood of a translated segment in the target language is obtained from a language model of the target language.

3.2 Experimental Method
This experiment uses the official CWMT2015 Chinese-English news evaluation data as the experimental corpus, without adding extra resources. The experiment compares the LSTM multi-granularity fusion neural machine translation model with a traditional statistical machine translation model and a recurrent neural network (RNN) machine translation baseline, and derives the relative advantages and characteristics of this model from the experimental data. (1) Traditional machine translation model: PBMT is constructed by building internal modules such as the preprocessing model, the translation model and the language model. (2) RNN-based network architecture: an encoder-decoder is used, where both encoder and decoder are built from RNNs, with a single input layer, a single hidden layer and a single output layer. To simplify the design, the vector dimension and the number of nodes are both set to 512, and Dropout is used to prevent overfitting during training.

3.3 Log-Linear Model
The log-linear model is a discriminative model based on the idea of multiple attributes rather than the source-channel idea. For a given input sentence f_1^J = f_1, ..., f_j, ..., f_J and a generated translation e_1^I = e_1, ..., e_i, ..., e_I, the maximum-entropy-based translation model is shown in formula (1):

$$p\bigl(e_1^I \mid f_1^J\bigr)=\frac{\exp\Bigl(\sum_{m=1}^{M}\lambda_m h_m\bigl(e_1^I, f_1^J\bigr)\Bigr)}{\sum_{e_1'^{I'}}\exp\Bigl(\sum_{m=1}^{M}\lambda_m h_m\bigl(e_1'^{I'}, f_1^J\bigr)\Bigr)} \tag{1}$$
where h_m(e_1^I, f_1^J) (m = 1, 2, ..., M) are the feature functions and λ_m is the weight corresponding to the m-th feature function. Since the denominator in Eq. (1) is constant during translation generation, Eq. (1) can be simplified to Eq. (2):

$$\hat{e}_1^I=\mathop{\arg\max}_{e_1^I}\;\sum_{m=1}^{M}\lambda_m h_m\bigl(e_1^I, f_1^J\bigr) \tag{2}$$
Log-linear models scale well and allow new features to be defined for different operational needs, letting more linguistic knowledge guide the system. In practical applications, feature functions and their corresponding weights can be defined according to the needs of the system. The scores of different candidate translations are calculated, and the best translation is selected according to these scores. A minimal scoring sketch follows.
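The following Python sketch corresponds to Eqs. (1) and (2): each candidate translation is scored by a weighted sum of feature values, and the best-scoring candidate is selected. The feature values and weights are placeholders supplied by the caller.

```python
import math

def score(features, weights):
    """Weighted feature sum: sum over m of lambda_m * h_m(e, f), as in Eq. (2)."""
    return sum(w * h for w, h in zip(weights, features))

def best_translation(candidates, weights):
    """candidates: list of (translation, [h_1 .. h_M]) pairs; returns the argmax."""
    return max(candidates, key=lambda c: score(c[1], weights))[0]

def posterior(candidates, weights):
    """Normalized probabilities of Eq. (1) over the candidate list."""
    scores = [score(h, weights) for _, h in candidates]
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores]
```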
4 Analysis and Research on the Design of Translation Accuracy Correction Algorithm for English Translation Software

4.1 Comparison Between the Improved Method and the Basic Particle Swarm-Based Method
In the process of Chinese-English sentence alignment, this paper improves the basic algorithm, divides the initialized particle swarm into multiple subgroups that exchange difference information, and shares neighborhood information to find solutions, which improves accuracy. The algorithm is set to run for 100 iterations; every time the exchange interval of the improved algorithm (10 iterations) is reached, the subgroups exchange information and the positions and velocities of the particles are updated. The improved method is compared with the basic sentence-alignment search method, and the results are shown in Fig. 2.
Fig. 2. Comparison of the improved method and the basic particle swarm-based method (alignment accuracy over 0–100 iterations for the basic particle swarm optimization and the improved algorithm)
The experimental results show that, when solving Chinese-English sentence alignment, the basic particle swarm optimization algorithm converges after about 60 iterations with an alignment accuracy of about 0.82, while the improved particle swarm optimization algorithm converges only after 80 iterations, with a sentence-alignment accuracy of 0.91. The improved method thus needs more iterations to converge, but its alignment accuracy is clearly higher.
4.2 Analysis of Results
After training, the BLEU values of the experiment are shown in Table 1. As the number of training rounds increases, the model score keeps improving over the first 8 rounds, reaching a maximum of 23.6, which is higher than the BLEU value of the RNN system (19.5) and the BLEU value of the phrase-based statistical machine translation system (20.2). From the ninth round of training, because of problems such as overfitting, the translation quality of the model gradually decreases, the semantic perplexity gradually increases, and the BLEU values of the translations decrease correspondingly in all experiments.

Table 1. Experimental results

| Epoch    | BLEU (improved model, this paper) | BLEU (RNN system) | BLEU (PBMT statistical translation model) |
|----------|-----------------------------------|-------------------|-------------------------------------------|
| Epoch 1  | 5.6                               | 5.2               | 5.8                                       |
| Epoch 2  | 6.2                               | 6.1               | 6.4                                       |
| Epoch 3  | 10.5                              | 7.4               | 7.5                                       |
| Epoch 4  | 13.6                              | 8.5               | 8.9                                       |
| Epoch 5  | 18.4                              | 10.9              | 11.3                                      |
| Epoch 6  | 20.3                              | 13.5              | 15.2                                      |
| Epoch 7  | 21.8                              | 16.8              | 17.6                                      |
| Epoch 8  | 23.6                              | 19.5              | 20.2                                      |
| Epoch 9  | 20.6                              | 17.5              | 18.2                                      |
| Epoch 10 | 20.2                              | 16.4              | 17.1                                      |
The BLEU values under the different experiments are shown in Fig. 3, which illustrates the translation effect of the model described in this paper. In the early iterations the trained model performs similarly to the other models, but as training deepens, the model learns more semantic relations and corresponding vector information, which helps it select and predict translations.
Fig. 3. Experimental BLEU values of the improved model, the RNN system and the PBMT statistical translation model over Epochs 1–10
5 Conclusions

With the acceleration of global economic integration and the increasing frequency of international exchanges, overcoming language barriers has become a common problem for the international community. This paper proposes a reliable and practical method for constructing a Chinese-English tree-to-string example library. On the one hand, a general model of Chinese lexical analysis is integrated into the processing of Chinese input, combining preprocessing tasks such as word segmentation; its reliability has been verified in experiments on large training corpora. On the other hand, on the basis of traditional methods and the actual state of the system, we propose a method to extend dependency-tree-to-string examples, which effectively increases the utility of the example library.
References 1. Ren, Q., Su, Y., Wu, N.: Research on Mongolian-Chinese machine translation based on the end-to-end neural network. Int. J. Wavelets Multiresolut. Inf. Process. 18(01), 46–59 (2020) 2. Miura, A., Neubig, G., Sudoh, K., et al.: Syntactic matching methods in pivot translation. J. Nat. Lang. Process. 25(5), 599–629 (2018) 3. Lee, S., Islam, N.U.: Robust image translation and completion based on dual auto-encoder with bidirectional latent space regression. IEEE Access 1 (2019)
4. Schreurs, R., Dubois, L., Becking, A.G., et al.: Implant-oriented navigation in orbital reconstruction. Part 1: technique and accuracy study. Int. J. Oral Maxillofac. Surg. 47(3), 395–402 (2018) 5. Birkenbeuel, J., Joyce, H., Sahyouni, R., et al.: Google translate in healthcare: preliminary evaluation of transcription, translation and speech synthesis accuracy. BMJ Innov. 7(2), bmjinnov-2019-000347 (2021) 6. Zhang, H.: Neural network-based tree translation for knowledge base construction. IEEE Access 1 (2021) 7. Ebdrup, B.H., Axelsen, M.C., Bak, N., et al.: Accuracy of diagnostic classification algorithms using cognitive-, electrophysiological-, and neuroanatomical data in antipsychotic-nave schizophrenia patients. Psychol. Med. 49(16), 1–10 (2018) 8. Rao, D.D.: Machine translation. Resonance 3(7), 61–70 (1998). https://doi.org/10.1007/BF0 2837314 9. Andrianjaka, R.M., Hajarisena, R., Mihaela, I., et al.: Automatic generation of Web service for the Praxeme software aspect from the ReLEL requirements model. Procedia Comput. Sci. 184(1), 791–796 (2021) 10. Scienti, O., Bamber, J.C., Darambara, D.: Inclusion of a charge sharing correction algorithm into an x-ray Photon counting spectral detector simulation framework. IEEE Trans. Radiat. Plasma Med. Sci. 1 (2020) 11. Khazaal, A., Cabot, F., Anterrieu, E., et al.: A new direct sun correction algorithm for the soil moisture and ocean salinity space mission. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 1 (2020) 12. Shah, A.L., Mesbah, W., Al-Awami, A.T.: An algorithm for accurate detection and correction of technical and non-technical losses using smart metering. IEEE Trans. Instrum. Meas. 1 (2020) 13. Zaghloul, M.: Remark on algorithm 680: evaluation of the complex error function: cause and remedy for the loss of accuracy near the real axis. ACM Trans. Math. Softw. 45(2), 24.1–24.3 (2019) 14. Li, Z., Wu, K., Guo, M., et al.: Analysis and comparison of the vibration correction accuracy of different optimization objectives for the classical absolute gravimeter. IEEE Trans. Instrum. Meas. 1 (2021) 15. Yang, J.: Accuracy of corneal astigmatism correction with two Barrett Toric calculation methods. Int. J. Ophthalmol. 12(10), 1561–1566 (2019) 16. Huang, Y., Zhang, B., Wu, J.: Adaptive multi-point calibration non-uniformity correction algorithm. Infrared Technol. 42(7), 637–643 (2020) 17. Romano, C., Minto, J.M., Shipton, Z.K., et al.: Automated high accuracy, rapid beam hardening correction in X-Ray computed tomography of multi-mineral, heterogeneous core samples. Comput. Geosci. 131(Oct.), 144–157 (2019) 18. Komathi, C., Umamaheswari, M.G.: Design of gray wolf optimizer algorithm-based fractional order PI controller for power factor correction in SMPS applications. IEEE Trans. Power Electron. 35(2), 2100–2118 (2020)
Development of Ground Target Image Recognition System Based on Deep Learning Technology Rui Jiang(B) , Yuan Jiang, and Qiaoling Chen School of Information Engineering, Baise University, Baise 533000, Guangxi, China [email protected]
Abstract. Deep learning is a new and efficient learning method. It uses computers as information processing tools to apply knowledge, experience and non-cognitive abilities effectively and to obtain the required information at different levels. Given the complexity of ground information and the efficiency of deep learning, this paper proposes the development of a ground target recognition system based on deep learning technology, with the aim of mastering deep learning methods and improving the accuracy of the recognition system. This article mainly uses the experimental method and case analysis to analyze and test ground target image recognition technology. The experimental results show that the accuracy of the SAR algorithm of the ground target recognition system on mirror-transformed images reaches 92%, which is suitable for the development of the system. Keywords: Deep Learning · Ground Target · Image Recognition · System Design
1 Introduction

At present, in image processing research at home and abroad, deep learning occupies an increasingly important position and has become an indispensable part of scientific research. Deep classification is a series of complex processes that can effectively distinguish, filter and interpret cluttered information. Therefore, the recognition of target images can start from deep learning. There are many theoretical achievements in the development of target image recognition systems. For example, Song D developed a convolutional neural network for SAR images for target recognition [1]. To solve the problems of sea echo and various kinds of interference in traditional radar image target detection methods, Kennedy H L proposed a deep learning method for detecting target objects in radar data [2]. Kalra M noted that radar detection is an effective means of earth observation [3]. On the basis of previous studies, the research on the ground target recognition system proposed in this paper has practical significance. This article first studies the relevant theoretical knowledge of deep learning and convolutional neural networks. Secondly, it studies the target recognition algorithm based
on deep learning. Then it analyzes the key technologies of SAR image target recognition. Finally, the overall framework of the system is designed, experimental tests are carried out, and the resulting data are obtained.
2 Development of Ground Target Image Recognition System Based on Deep Learning Technology

2.1 Deep Learning and Convolutional Neural Networks
(1) Deep learning
Deep learning is a knowledge system that approximates experience ever more closely. The deep learning considered here is a multi-layer model structure with multi-dimensional coding and space complexity characteristics; based on statistical theory, it classifies sample data at different scales through specific methods. It has two levels: the cognitive layer, as the input element of the entire system, mainly includes a structured encoder and a self-organizing storage unit; above it, the foundation and application objects are expanded. When demand changes, methods or algorithms of deep learning strategies are proposed to meet the ever-growing and changing needs. New rules are determined and models are built by analyzing and designing the input data, the knowledge base and the knowledge framework, constructing units from the characteristic information contained in the training samples. First, the number of layers and the weight intervals must be designed according to the convolutional neural network before training starts; second, the input vector and output method must be determined and the relevant parameter-set model established; finally, the quality and dimensionality of the original images are optimized during deep learning [4, 5]. To perform more complex learning tasks, more complex models with more parameters are usually required. The deep neural network is a typical deep learning model. Deep learning gradually combines low-level features into more abstract, high-level features. Each layer of a deep neural network is a single-layer nonlinear network; stacking several such single-layer networks forms a structure with a powerful nonlinear mapping capability, which supports the approximation of complex functions learned by deep models [6, 7]. According to their structure, deep learning models can be divided into three categories. Generative deep structures mainly describe the high-order correlation characteristics of the data or the joint probability distribution of the observed data and the corresponding categories. Discriminative deep structures provide discriminative ability for classification. Hybrid structures result when a generative component is used for discriminative tasks; the process of learning the generative structure can be combined with other discriminative algorithms to optimize attribute values [8, 9].
(2) Convolutional neural network
The general process by which a convolutional neural network handles input information is as follows: 1) several different convolution kernels filter the image and add a bias to extract
local features, each convolution kernel generating one feature map; 2) a nonlinear activation function processes the outputs obtained so far; 3) the pooling layer performs a pooling operation on the result of the activation function. Plain convolution has two disadvantages. The first is that each convolution operation shrinks the image, and we do not want the convolutional layer to reduce the image size while identifying features. The second is that pixels at the edge of the image are used only once in the convolution calculation, while pixels in the middle are used multiple times, which means much of the information at the image edge is under-used. To make up for these shortcomings, padding can be applied before convolution [10]. In the convolution process, all receptive fields share the parameters of the convolution kernel: in a convolutional neural network, the kernel slides over the image, computes the different receptive fields, and finally generates a new feature map. The number of parameters is related only to the size of the convolution kernel, which is weight sharing. After the image is processed by one convolution kernel, one basic feature of the image is obtained. No matter how complex an image is, it is composed of basic features, and the high-level features of the image can be obtained by combining them.

2.2 Target Recognition Algorithm Based on Deep Learning
Target recognition refers to the process of identifying a particular class of targets among other kinds of targets. Target recognition must not only classify the target in the image but also mark the position of the target in the image. In an image classification problem, an image belongs to only one category, while in a target recognition problem there can be multiple targets in one image: all objects must be classified, and their positions in the picture must be accurately marked. The detection process of traditional target detection algorithms is generally divided into three stages: candidate region selection, feature extraction and candidate verification. Its shortcomings are obvious: the process of generating candidate regions wastes resources; hand-crafting features is time-consuming and tedious and requires designer expertise; and the generalization performance of hand-crafted features is poor. To overcome the shortcomings of traditional detection algorithms, this paper considers the region-based convolutional neural network (R-CNN). (1) R-CNN is one of the earliest methods to apply deep learning to target recognition. R-CNN generates a limited set of candidate regions and runs the recognition network only on those regions. The specific process is shown in Fig. 1.
Fig. 1. The process of R-CNN to identify the target: input image → extract regions → compute neural network features → classify regions
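Before moving on to the detection algorithms, the weight sharing and padding described in Sect. 2.1 can be illustrated with a short sketch; the function names and the zero-padding choice are assumptions for illustration.

```python
import numpy as np

def conv2d(image, kernel, bias=0.0):
    """Valid 2D convolution: the same kernel weights are reused at every
    receptive field (weight sharing), producing one feature map."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel) + bias
    return out

def pad(image, p):
    """Zero-padding before convolution keeps the output size unchanged and
    lets edge pixels take part in as many windows as interior pixels."""
    return np.pad(image, p, mode="constant")
```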
This target detection algorithm took a big step forward on the basis of its predecessors: for the first time, candidate region selection, feature extraction and classification were combined in one network to deepen the training process, which improves both the accuracy and the speed of recognition. (2) YOLO algorithm. The biggest feature of YOLO is its fast recognition speed, which can basically meet real-time requirements even in environments with modest hardware. Although the recognition speed of the R-CNN model improved, those improvements brought no qualitative change, and the model still could not meet real-time requirements. YOLO abandons the region-proposal strategy and significantly improves detection speed. YOLO is a regression-based network: it treats target detection as a regression problem, and the final output vector directly gives the target's location, size, confidence and category. YOLO divides the input image into Q × Q grid cells, and each cell predicts whether the center coordinates of an object fall within it. If the center of an object falls within a cell, that cell is responsible for recognizing the object. The confidence indicates the probability that the current cell contains a target. The bounding box confidence formula is:
$$\mathrm{Conf}(o)=\Pr(o)\times \mathrm{IOU}^{\mathrm{truth}}_{\mathrm{pred}} \tag{1}$$

When an object falls into the grid cell, Pr(o) takes the value 1, and 0 otherwise. IOU_pred^truth is the intersection-over-union of the predicted bounding box and the ground-truth bounding box, and o refers to the object. In the process of image recognition, the same target may be detected multiple times by the recognition algorithm, so redundant detected boundaries must be judged and discarded. The loss function used in YOLO is a sum of squared errors. The coordinate prediction error is calculated as:

$$\mathrm{coordError}=\sum_{i=0}^{Q^2}\sum_{k=0}^{Y} q_{ik}^{\mathrm{obj}}\Bigl[(a_i-\hat{a}_i)^2+(b_i-\hat{b}_i)^2\Bigr] \tag{2}$$

where coordError is the coordinate prediction error, Y is the number of predicted boxes, q_ik^obj indicates whether the k-th box of cell i is responsible for an object, and a and b (with hats for predictions) denote the outputs of the network. Finally, the graph feature V of the target is obtained as:

$$V=\tanh\Bigl(W u+\sum_{p,q\in N_{ij}} W_{p-i,q-i}+W_h h_t\Bigr) \tag{3}$$
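As an illustration of the confidence term in Eq. (1), the following sketch computes the intersection-over-union of two axis-aligned boxes and the resulting confidence. The (x1, y1, x2, y2) box format is an assumption for the sketch.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to zero when boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def confidence(p_object, pred_box, truth_box):
    """Bounding-box confidence per Eq. (1): Pr(object) * IOU."""
    return p_object * iou(pred_box, truth_box)
```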
2.3 Key Technologies of SAR Image Target Recognition
(1) Existing problems
1) Translation sensitivity: SAR targets usually do not appear at a fixed position, so the same SAR target can appear in different states, which makes recognition difficult. There are two ways to address this: one is to extract translation-invariant features from the original image, such as the Hu moment invariants; the other is to start from the classifier and make the learned model insensitive to target translation.
2) Intensity sensitivity: changes in the relative distance between the target and the radar, the randomness of coherent speckle noise, and occlusion of the target all cause large intensity differences for the same target. Two practical problems stem from intensity sensitivity, namely sensitivity to coherent speckle noise and target occlusion.
3) Position sensitivity: for SAR images, the radar moves relative to the target, and the target is a three-dimensional object. Because of the different observation angles, the same target produces SAR images at different positions; the main influencing factors are changes in azimuth and pitch angle.
4) Feature extraction and classifier design: both are key steps in SAR target detection. Under some imaging conditions, very effective features may be missed, thereby reducing recognition performance.
The preprocessing process of this article is shown in Fig. 2: first the original SAR image is input, then it is cropped to a suitable size, the enhanced Lee filter is applied, a power transformation is performed, and finally the energy is normalized to obtain the preprocessed image.

Fig. 2. SAR image preprocessing process

2.4 System Framework Design
(1) System requirements
In terms of data set management, this system needs to manage remote sensing big data effectively to provide data support for the deep learning algorithm modules. For the models that exist on the server, the system needs to present all models as a visual list so that users can query, retrieve, download and delete them. For a recognition task in training, the system should support querying the progress of the training task and display the training error curve and training log in real time for the user's reference. For a recognition model being trained, the system needs to support pausing training so that the user can control the training progress in real time. The task creation module is the core functional module of the multi-resolution remote sensing target recognition system; it creates the training tasks related to deep neural networks and the test tasks for images at two resolutions. The user management module manages the users of the multi-resolution remote sensing target recognition system.
(2) The overall structure of the system
The ground target recognition system based on deep learning adopts the B/S architecture, which allows a large number of users to access it through a browser. For the management of recognition models, the system needs to manage both the models generated by online training and the models uploaded offline. This system includes three parts: original image acquisition, image acceleration, and IP core control with HDMI display. The detailed work of each part is as follows: 1) the original image is acquired through the FPC40 camera attached to the development board's peripheral interface; 2) after acquisition, the original image is stored in DDR3 through the DDR controller; 3) the image classification part implements the image processing on the FPGA. This includes passing the scaled image to the convolutional neural network for recognition and obtaining the final classification result. The convolutional neural network is trained in advance; it includes multiple convolutional layers and fully connected layers and performs feature extraction and classification of the target image. The amount of calculation involved
in this step is the main body of the entire algorithm and consumes the most time; 4) the IP core control and display part realizes, under the ARM processor, the control of the IP cores of each layer, the pipeline control of the image transmission process, and the display of the image recognition results. This system adopts the Zynq SoC 7020 platform, and its core is to realize image target positioning and recognition through software-hardware co-design. The convolutional layers are accelerated by the FPGA (Programmable Logic, PL). As the hardware acceleration component of the entire system, the IP cores can be solidified according to different acceleration tasks, and the system can select different IP cores for parallel computing according to the specific algorithm schedule. The remaining operations are implemented by the ARM side, which serves as the control end of the entire system, including the CPU and memory.
(3) The entire process of the system
After the development board collects an image through the FPC40 camera, the original image is stored in the external DDR3 memory. The original image in DDR3 is then read through the VDMA channel and transferred to the IP core for accelerated processing, completing feature extraction, classification and the other functions on the target image. The image processed by the IP core is sent back to DDR through the VDMA channel. At this point the PS part performs image analysis and matching, and finally the result is displayed through HDMI.
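Returning to the preprocessing pipeline of Fig. 2 in Sect. 2.3, the following sketch illustrates the four steps in order. It is a simplified stand-in: a plain box filter replaces the enhanced Lee filter, and the crop size, power exponent and window size are assumptions.

```python
import numpy as np

def preprocess(sar_image, size=128, power=0.5, win=3):
    """Crop, speckle-suppress, power-transform and energy-normalize a SAR image
    (input assumed at least size x size)."""
    # 1) Crop a centered patch of the target size.
    h, w = sar_image.shape
    top, left = (h - size) // 2, (w - size) // 2
    img = sar_image[top:top + size, left:left + size].astype(float)

    # 2) Speckle suppression (simple mean filter as a placeholder for the
    #    enhanced Lee filter).
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    img = np.mean(
        [padded[dy:dy + size, dx:dx + size] for dy in range(win) for dx in range(win)],
        axis=0,
    )

    # 3) Power transform compresses the large dynamic range of SAR amplitudes.
    img = np.power(img, power)

    # 4) Energy normalization to unit L2 norm.
    return img / np.linalg.norm(img)
```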
3 Ground Target Image Recognition System Experiment

3.1 System Integrated Test Environment
(1) Test hardware environment
GPU: GTX1080 Ti dual graphics cards
Processor: Intel Xeon Processor E5-2630v4 clocked at 2.20 GHz
(2) Test software environment
Operating system: Ubuntu16.04
Deep learning framework: Tensorflow1.2.1, Keras2.1.6
Python environment: Python2.7
Database: SQLite3
Web framework: Flask
Stress testing tool: Ap
3.2 Test Method
System functional testing adopts black-box testing. Black-box testing means that the tester, without knowing the internal workings of the multi-resolution remote sensing target detection system, examines the operation of the system from the user's point of view, combining the user's plan and requirements, in order to verify the relationship between inputs and outputs. A full functional test is performed on the target detection system to measure whether its functions meet the technical indicators. Testers design test cases according to user needs and plans, provide a set of specific inputs to the system, receive the feedback of the target recognition system, and evaluate whether the feedback meets the expected output. If the output meets expectations, the system meets the requirements; if not, the problem is reported to the developers. System performance testing uses special tools to simulate various normal and abnormal conditions and measure system performance under them. The testing includes load testing and stress testing: load testing measures the changes of system performance counters under different loads, while stress testing finds functional bottlenecks and critical points in the system to obtain indicators of the service level the system can provide, so as to better analyze the system's actual operation.

3.3 Experimental Data Set
In the experiments, we used the VOC2007 and VOC2012 data sets to train the network model and test its performance. VOC is one of the most trusted image databases in the world for object recognition and image classification. It is widely used to benchmark and test various image recognition, classification and target recognition algorithms. Many classic algorithms have been tested on the VOC data set, and the results obtained on it are used as the standard for evaluating model performance.

3.4 Performance Recognition
The detection performance of the SAR target detection algorithm is verified through several experimental designs. The experiments use the public MSTAR data set. Since the MSTAR data set contains images of several sizes with the target located at the image center, all experimental samples are uniformly cropped to pixels of the same size to remove problematic background and reduce the computational workload. In this paper, a total of 6 targets were used for data analysis.
4 Target Image Recognition System Test Results
4.1 Sensitivity of SAR Image Recognition
The targets are labeled 1 to 6; they represent objects with different properties on the ground. SAR target recognition is tested under rotation, displacement, added noise, and mirroring, yielding Table 1.
Table 1. SAR target recognition test (recognition accuracy, %)

Transform | 1 | 2 | 3 | 4 | 5 | 6 | Total
Spin | 51.6 | 1.32 | 23.5 | 1.98 | 61.2 | 40.1 | 21.2
Displacement | 73.2 | 30.19 | 68.9 | 82.6 | 40.1 | 24.5 | 41.8
Noise | 5.1 | 96.56 | 13.5 | 65.4 | 22.6 | 28.6 | 27.4
Mirroring | 86 | 99 | 97 | 96 | 93 | 84 | 92
As shown in Table 1 and Fig. 3, the performance of the SAR target recognition algorithm is mediocre. The model's recognition accuracy on the target rotation, target displacement, and target noise test sets is 21.2%, 41.8%, and 27.4%, respectively, which shows that the model is sensitive to displacement, rotation, and noise of the target. By contrast, the recognition accuracy of the SAR recognition algorithm on the test set after mirror transformation of the target reaches 92%, indicating that the recognition algorithm is not sensitive to mirror transformation.
[Figure: bar chart of the Table 1 accuracies (%) for targets 1-6 and the total, grouped by the items Spin, Displacement, Noise, and Mirroring]
Fig. 3. The Sensitivity of the SAR Recognition Algorithm to Rotation, Displacement and Noise
The time required for correct recognition by the SAR recognition algorithm is shown in Fig. 4. It can be seen from Fig. 4 that the time required for SAR recognition is relatively stable, basically maintained at about 13 s. In 120 recognition trials, only three cases with large fluctuations occurred, which shows that the SAR recognition algorithm can play an important role in target recognition.
[Figure: recognition time (s) for each of the 120 trials; y-axis 0-25 s]
Fig. 4. Time required for SAR identification
5 Conclusion
Deep learning is a method based on artificial neural networks that can effectively process complex sample data and is implemented with a variety of training algorithms; it can be used to improve the accuracy of target recognition. When the pre-training sample set derived from the original data differs little from the images to be recognized and has a very high resolution, more accurate and useful information can be obtained, while misclassification and error rates are reduced, ensuring the quality of subsequent feature extraction and of the final output. This not only meets the system requirements but also greatly enhances performance. Applying deep learning to the target image recognition system therefore helps improve the system's recognition accuracy. According to the experimental research in this article, the recognition algorithm is insensitive to mirror transformation of the image and achieves high accuracy in that case. Although the system designed in this paper still suffers from insufficient accuracy under rotation, displacement, and noise, its design concept is sound and can provide a reference for recognition systems. Acknowledgements. This work was supported by the Baise University Research Startup Fund (DC2000002784).
A Mobile Application-Based Tower Network Digital Twin Management Kang Jiao, Haiyuan Xu(B) , Letao Ling, and Zhiwen Fang Shenzhen Power Supply Co., Ltd., Shenzhen 518000, Guangdong, China [email protected]
Abstract. In recent years, intelligent technologies such as the Internet of Things, cloud computing, and big data have developed rapidly alongside computers, and all aspects of people's lives have gradually become intelligent. The Internet of Things, big data, and other technologies are gradually merging with the real economy, and even with national defense, military industry, and energy. Smart helmets, smart homes, and intelligent power transmission towers are all representative manifestations. It can be seen that intelligence is an indispensable key technology for entering the future. As an important theoretical and technical support, digital twins have received growing attention and recognition. The main purpose of this paper is to study mobile application-based digital twin management of tower networking: it fully analyzes the system requirements and design methods of the intelligent transmission tower and then analyzes digital twin technology. This paper designs lightweight experiments on the grid geometric model and fault-location experiments on power grid transmission towers. Experiments prove that in the mobile application terminal performance test conducted in this article, the response time under a 200-user big-data stress test is 0.42 s; the CPU occupancy of the background system is small, the memory occupancy is below 40%, and resource occupancy overall is low, controlled below 40%. The approach has reference value in practical application. Keywords: Digital Twin · Mobile Application · Smart Grid · Image Processing
1 Introduction
As the best technical way to realize intelligent management, digital twins have attracted particular attention from relevant academic circles and enterprises at home and abroad. The management and control system, as the main object of intelligent upgrading and technical transformation, realizes intelligent management through deep integration of the digital twin with the management system. Most importantly, the introduction of digital twins can make traditional management systems more open and able to easily integrate new-generation information technologies such as big data and artificial intelligence, which has practical significance for the intelligentization of transmission towers. Many scholars have conducted research on digital twins and drawn effective conclusions. For example, Lee J proposed that digital twin technology should also include expert knowledge on
its original basis to achieve accurate simulation [1]. Ferguson S proposed a digital twin system that keeps the digital model and the physical equipment synchronized between virtual and real across the entire line [2]. Jafari MA, building on this work, proposed a decoupling algorithm for coupled optimization problems in the production process, serving as an engine to drive the digital twin [3]. The research these scholars have done on digital twins provides much of the theoretical basis for this article. This paper analyzes methods for digital twin management of tower networking and for intelligent monitoring of transmission towers, such as remote image acquisition; it proposes a method for image grayscale processing that greatly reduces the amount of computation, and it analyzes the requirements of digital twin technology. Finally, a performance experiment on the mobile application terminal and a UAV inspection function test were carried out, and the results were compared to prove the practical applicability of the algorithm; the experiment was then summarized to analyze its deficiencies.
2 Management Method and Demand Analysis of Digital Twin Tower Network Based on Mobile Application
2.1 Optimized Preprocessing Method of Pole and Tower Image
(1) Image grayscale. The collected original image of the transmission line tower is a three-channel color image, and processing it requires far more computation than processing a grayscale image. To process images faster and more efficiently, the original color tower image is therefore first converted to grayscale. Although the color information is lost, the overall distribution of chromaticity and brightness levels remains intact, so the image does not lose much detail, and the grayscale step greatly reduces the amount of computation [4].
(2) Bilateral filtering. Owing to factors in the imaging process of poles and towers and their surroundings, the original grayscale tower images contain many noise points; the background also contains many unnecessary details and other lines and towers. These factors increase the difficulty of detecting the contour of the target tower, so the original tower image must be blur-filtered to reduce noise and unnecessary detail and thereby ease subsequent processing. The noise in the image is mainly Gaussian noise [5]. Gaussian filtering can filter out Gaussian noise well, but it preserves sharp edges poorly. Compared with Gaussian filtering, bilateral filtering not only filters out most of the noise and blurs unnecessary details and the other lines and towers in the background, reducing their influence on contour detection, but also better preserves the edge contour of the target tower, lowering the difficulty of subsequent processing [6]. Therefore, this paper chooses the bilateral filter as the blur-filtering algorithm for the original transmission tower image.
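The following is a minimal sketch of this preprocessing, using OpenCV. The file name and the bilateral-filter parameters (d, sigmaColor, sigmaSpace) are illustrative choices, not values taken from the paper.

```python
# Grayscale conversion followed by bilateral filtering for a tower image.
import cv2

img = cv2.imread("tower.jpg")                  # three-channel color image (illustrative path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # grayscale: chroma lost, brightness levels kept

# Bilateral filter: suppresses Gaussian-like noise and background clutter
# while preserving the tower's edge contours, unlike a plain Gaussian blur.
filtered = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
blurred = cv2.GaussianBlur(gray, (9, 9), 0)    # Gaussian blur, for comparison

cv2.imwrite("tower_pre.png", filtered)
```

Larger sigma values blur more background detail but begin to soften the target contour, so in practice they would be tuned against the contour-detection stage that follows.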
2.2 Tower Networking Design Method of Poles and Towers
The electronic transmission line monitoring system is powerful mainly because its underlying technologies are the smart grid, IoT communication, and so on. Besides displaying information, it can also analyze and measure it. Through these technologies, the system can detect and display the operating condition of equipment, facilitate manual analysis by operators, assist managers in decision-making and crisis handling, and ensure that potential accidents are detected and eliminated early [7, 8]. (1) Using a modular structure, the tower networking system can perform online diagnosis. Its online detection function protects the regular operation of the transmission line and also allows faults in the system to be detected remotely. (2) The multi-level distributed operation of the system ensures complete transmission of on-site real-time monitoring data and early-warning/alarm information to the power company's transmission line monitoring center. (3) Using ZigBee, the transmission system can guarantee the security of the underlying data, improving the economy of the system while maintaining its healthy operation; in addition, because the chip's control system adopts a low-power design, the equipment can run for a long time. (4) With a remote transmission channel, the system's scope of application is extended: even in field environments it can transmit data stably with high reliability.
2.3 Demand Analysis of Digital Twins
(1) Application field expansion requirements. Initially, digital twins were mainly oriented to the needs of the military and aerospace fields. In recent years they have found reference and application requirements in electric power, automobiles, medical care, ships, and other fields, with broad market prospects [9].
(2) Intelligent service demand. As the scope expands, digital twins must meet the application requirements of different industries, different levels of users (such as terminal operators, professional and technical personnel, decision makers, and product end users), and different business needs [10].
(3) Dynamic multi-dimensional, multi-space-time-scale model requirements. This model is the engine of digital twin applications. Current digital modeling of physical entities focuses mainly on the geometric and physical dimensions; there is no multi-dimensional dynamic model that simultaneously reflects the geometry, physics, behavior, rules, and constraints of a physical entity. There is a lack of multi-spatial-scale models describing the features and behaviors of physical entities at different spatial scales, and likewise a lack of multi-time-scale models describing process evolution and the real-time dynamic changes of physical entities over time [11]. In addition, from a system perspective, models of different dimensions, different spatial scales, and different time scales are not integrated and fused. These gaps mean existing virtual-entity models cannot faithfully describe and portray real, objective physical entities, so related results (such as simulation, prediction, evaluation, and optimization results) are insufficiently accurate. How to construct a dynamic, multi-dimensional, multi-space-time-scale model is therefore a scientific challenge facing the development and practical application of digital twin technology [12].
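To make the requirement concrete, the sketch below shows one possible data structure for a multi-dimensional, multi-space-time-scale twin model. All class and field names are assumptions introduced for illustration; the paper defines no such API.

```python
# Illustrative-only structure for a multi-dimensional, multi-scale twin model.
from dataclasses import dataclass, field

@dataclass
class DimensionModel:
    name: str              # "geometry", "physics", "behavior" or "rule"
    spatial_scale: str     # e.g. "component", "tower", "grid section"
    time_scale: str        # e.g. "static", "real-time", "seasonal"
    state: dict = field(default_factory=dict)

@dataclass
class TowerTwin:
    tower_id: str
    dimensions: list = field(default_factory=list)

    def fuse(self) -> dict:
        """Integrate per-dimension states into one twin snapshot,
        the integration step the text identifies as missing."""
        return {d.name: d.state for d in self.dimensions}

twin = TowerTwin("T-001", [
    DimensionModel("geometry", "tower", "static", {"height_m": 54.0}),
    DimensionModel("physics", "component", "real-time", {"tilt_deg": 0.3}),
])
print(twin.fuse())
```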
3 Experimental Research on the Management of Tower Network Digital Twins Based on Mobile Applications
3.1 Load Balancing Scheduling Experiment
Load balancing is a scheduling strategy for platforms with large tower-network service fleets. The goal is to distribute user requests fairly among the servers, so that no single server is overcrowded while other servers run below their processing capacity. Load balancing algorithms fall into two types, static and dynamic.
(1) Static. Servers are scheduled through predefined static rules, so the scheduling result is deterministic. The advantage is that the actual load pressure on each service node need not be considered at runtime; tasks are simply allocated according to the established plan, with low overhead and easy implementation. The disadvantage is that the resource demand of each task is different and hard to predict, the processing power of each server node differs, and current load conditions also differ, so allocation based on a static plan achieves balance only under certain circumstances. Typical static methods include IP hashing, round-robin polling, and random selection.
(2) Dynamic. The scheduling strategy is not fixed and must take the current state of each node server into account while the system is running. Through specific strategy analysis, new service requests are preferentially allocated to the server node that is optimal under the strategy, which helps improve the performance of the whole service cluster; better scheduling results are obtained at the expense of some general system resources. Dynamic algorithms commonly used in server clusters include least connections, weighted connections, and fastest-response-first.
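A minimal sketch contrasting the two scheduling families is given below. Server names and the load bookkeeping are illustrative; a production scheduler would also track request completion and node health.

```python
# Static round-robin vs. dynamic least-connections scheduling.
import itertools

servers = ["node-a", "node-b", "node-c"]

rr = itertools.cycle(servers)          # static: fixed rule, ignores real load
def round_robin(_request):
    return next(rr)

active = {s: 0 for s in servers}       # dynamic: current connections per node
def least_connections(_request):
    node = min(active, key=active.get) # pick the least-loaded node right now
    active[node] += 1                  # bookkeeping: load rises on assignment
    return node

for i in range(5):
    print(i, round_robin(i), least_connections(i))
```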
3.2 Lightweight Experiment of Geometric Model
The geometric models created by information-model modeling software usually have many vertices, many faces, and many hidden objects. They demand high computing performance and memory capacity, which makes them difficult to use in application-system development and gives a poor user experience. In practical applications, geometric models are usually lightweighted to reduce their transmission time over the network, shorten loading time, and improve the visual effect. The experiment proceeds in the following steps. (1) Build a geometric model in the modeling software, then export it to a file carrying component attribute information. (2) Import the file into 3ds Max and analyze the imported model to determine whether there are redundant dotted lines; if there are, delete them, and if not, check whether the model needs face reduction. (3) If the model has redundant surfaces, use face-reduction tools; complex models can be shelled. (4) Manually revise the simplified texture according to the texture positions before face reduction, to keep the texture consistent before and after simplification. (5) Finally, export the model from the software, setting the export parameters, geometric parameters, scale-factor parameters, file format, and so on.
3.3 Mobile Application Edge Monitoring and Positioning Accuracy Experiment
To improve the reach of transmission patrol task management, transmission line managers send patrol tasks to inspectors for on-site execution. The software's automatic or manual positioning management is used to spot-check whether inspectors are actually on site, and edge quality is then judged with the criteria of
the Canny operator; the edge-quality (signal-to-noise) criterion is shown in (1):

SNR = \frac{\left| \int_{-w}^{+w} G(-x)\,h(x)\,dx \right|}{\sigma \sqrt{\int_{-w}^{+w} h^{2}(x)\,dx}}    (1)

In formula (1), G(x) represents the edge function, h(x) represents the impulse response of a filter of width w, σ denotes the noise level, and SNR is the signal-to-noise ratio; the larger the SNR, the higher the quality of the extracted edge. On this basis, the positioning accuracy of the mobile application terminal is evaluated with the localization criterion shown in (2):

L = \frac{\left| \int_{-w}^{+w} G'(-x)\,h'(x)\,dx \right|}{\sigma \sqrt{\int_{-w}^{+w} h'^{2}(x)\,dx}}    (2)

where G'(x) and h'(x) are the derivatives of G(x) and h(x), respectively. The larger the value of L, the more accurate the positioning.
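As a worked example, criterion (1) can be evaluated numerically. The sketch below assumes a unit step edge for G and a first-derivative-of-Gaussian filter for h; neither choice comes from the paper.

```python
# Numerical evaluation of the Canny SNR criterion in Eq. (1).
import numpy as np

w, s, sigma_noise = 3.0, 1.0, 0.1      # window, filter width, noise level (assumed)
x = np.linspace(-w, w, 2001)
dx = x[1] - x[0]
G = np.where(x >= 0.0, 1.0, 0.0)       # unit step edge (assumed edge function)
h = -x * np.exp(-x**2 / (2 * s**2))    # derivative-of-Gaussian filter (assumed)

num = abs(np.sum(G[::-1] * h) * dx)    # |integral of G(-x) h(x)|; G[::-1] is G(-x)
den = sigma_noise * np.sqrt(np.sum(h**2) * dx)
print("SNR =", num / den)              # larger SNR -> better edge quality
```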
4 Experimental Analysis of Tower Network Digital Twin Management Based on Mobile Applications
4.1 Mobile Application Power Terminal Performance Test
The test adopts multi-user concurrency (simulating big-data pressure) to stress-test the system's data paths, with the number of concurrent users reaching 200. The test results are shown in Table 1.

Table 1. Terminal application data stress test table

Indicator type | Target value | Actual value
Response time (s) | 0.5 | 0.42
Processor usage | 0.45 | 0.32
Memory usage | 0.45 | 0.13
Success rate | 0.70 | 0.83
[Figure: bar chart of the target vs. actual values for each indicator in Table 1]
Fig. 1. Terminal application data stress test chart
It can be concluded from Fig. 1 that, with a large number of concurrent users, the software's data indicators are good: all performance indicators meet the power supply company's daily transmission-line management needs, satisfy the design requirements, and allow work to be completed quickly. The average response speed is high; under the 200-user big-data stress test the response time is 0.42 s. The CPU occupancy of the background system is small, the memory occupancy is below 40%, and resource occupancy overall is low, all controlled below 40%. Therefore, the load performance of the system meets the power supply company's daily concurrency requirements, the transmission line inspection management software performs stably under long-term stress testing, and the data update speed, that is, the upload and download speed, also satisfies the requirements of actual transmission line inspection tasks.
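A concurrency test of this kind can be sketched as follows. The endpoint URL is a placeholder, and the figures such a script prints are not the paper's measured values.

```python
# 200 concurrent simulated users hitting one endpoint, timing each request.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.local/api/lines"      # placeholder endpoint

def one_request(_):
    t0 = time.perf_counter()
    try:
        urllib.request.urlopen(URL, timeout=5).read()
        ok = True
    except Exception:
        ok = False
    return ok, time.perf_counter() - t0

with ThreadPoolExecutor(max_workers=200) as pool:   # 200 concurrent users
    results = list(pool.map(one_request, range(200)))

times = [t for ok, t in results if ok]
print("success rate:", len(times) / len(results))
print("mean response (s):", sum(times) / max(len(times), 1))
```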
4.2 Mobile Application Drone Inspection Function Test
By testing the transmission line images collected by the drone, we study, as the number of pictures gradually increases, the accuracy of the collected information, the image clarity at the mobile application terminal, and the positioning accuracy of the drone over tower poles when controlled from the mobile application terminal. The results are shown in Table 2.

Table 2. Transmission image analysis table
Number of images | Image recognition accuracy rate (%) | Image clarity (%) | Positioning accuracy (%)
20 | 86.5 | 89.6 | 92.3
40 | 85.8 | 89.1 | 91.8
60 | 84.2 | 89.3 | 91.2
80 | 83.3 | 88.6 | 89.9
100 | 82.1 | 88.2 | 88.3
[Figure: line chart of the Table 2 percentages (image recognition accuracy, image clarity, positioning accuracy) versus number of images]
Fig. 2. Transmission image analysis chart
It can be seen from Fig. 2 that the accuracy of the system's image-processing algorithm for power line image recognition and the clarity of the image transmitted back to the mobile terminal are both close to 90%. The algorithm studied in this paper targets the inspection of transmission lines and belongs to practical engineering application; the images are collected by a UAV controlled through the mobile application data-collection terminal. Besides continuously optimizing the algorithm to raise the recognition rate, the original image quality can be improved and manual checks added to further increase the recognition rate.
5 Conclusions
This work optimizes overall fault location in the experiment, compares and analyzes the adaptability of the method, and improves and synthesizes the dominant physical frequency extraction method, comparing accuracy through experiments. Using the global dominant natural frequency value, positioning becomes more accurate, and the principle of the method was finally verified through simulation experiments, improving the effect of fault positioning. However, the research in this article still has shortcomings. For example, although the proposed improved amplitude algorithm does perform better, when there are more branch lines the number of fault-wave reflections increases and the energy loss grows. The installation conditions need further verification, and we hope to improve these aspects in subsequent experiments.
References
1. Lee, J.: Integration of digital twin and deep learning in cyber-physical systems: towards smart manufacturing, 38(8), 901–910 (2020)
2. Ferguson, S., Bennett, E., Ivashchenko, A.: Digital twin tackles design challenges. World Pumps 2017(4), 26–28 (2017)
3. Jafari, M.A., Zaidan, E., Ghofrani, A., et al.: Improving building energy footprint and asset performance using digital twin technology. IFAC-PapersOnLine 53(3), 386–391 (2020)
4. Liu, J., Liu, J., Zhuang, C., et al.: Construction method of shop-floor digital twin based on MBSE. J. Manuf. Syst. 60(2), 93–118 (2021)
5. Zhang, H., Liu, Q., Chen, X., et al.: A digital twin-based approach for designing and multiobjective optimization of hollow glass production line. IEEE Access 2017(5), 26901–26911 (2017)
6. Madeo, S., Bober, M.: Fast, compact and discriminative: evaluation of binary descriptors for mobile applications. IEEE Trans. Multimedia 19(2), 221–235 (2017)
7. Bhandari, U., Neben, T., Chang, K., et al.: Effects of interface design factors on affective responses and quality evaluations in mobile applications. Comput. Hum. Behav. 72(JUL.), 525–534 (2017)
8. Rahbari-Asr, N., Ojha, U., Zhang, Z., et al.: Incremental welfare consensus algorithm for cooperative distributed generation/demand response in smart grid. IEEE Trans. Smart Grid 5(6), 2836–2845 (2017)
9. Collier, S.E.: The emerging enernet: convergence of the smart grid with the internet of things. IEEE Ind. Appl. Mag. 23(2), 12–16 (2017)
10. Mengelkamp, E., Notheisen, B., Beer, C., Dauer, D., Weinhardt, C.: A blockchain-based smart grid: towards sustainable local energy markets. Comput. Sci. Res. Dev. 33(1–2), 207–214 (2017). https://doi.org/10.1007/s00450-017-0360-9
11. Wei, W., Wang, D., Jia, H.: Hierarchical and distributed demand response control strategy for thermostatically controlled appliances in smart grid. J. Mod. Power Syst. Clean Energy 5(1), 30–42 (2017)
12. Nan, L.D., Rui, H., Qiang, L., et al.: Research on fuzzy enhancement algorithms for infrared image recognition quality of power internet of things equipment based on membership function. J. Vis. Commun. Image Represent. 62(JUL.), 359–367 (2019)
Optimization of Intelligent Logistics System Based on Big Data Collection Techniques Qiuping Zhang1(B) , Lei Shi1 , and Shujie Sun2 1 School of Economics, Harbin University of Commerce, Harbin, China
[email protected] 2 Graduate School of Global Environment Studies, Sophia University, Tokyo, Japan
Abstract. Today is a new era in which big data and cloud computing technologies support the high-quality development of the intelligent logistics industry, raising its quality, effectiveness, and innovation ability. How to optimize the Intelligent Logistics System by mining data has attracted considerable attention from all circles of society. This paper analyzes the methods and feasibility of data mining at all stages of the Intelligent Logistics System and then evaluates each component of the system using systematic analysis. Finally, it proposes strategies to optimize the Intelligent Logistics System around "one center and three main lines", which would give impetus to logistics industrial upgrading and propel high-quality economic growth based on big data collection technology. Keywords: Data Collection Technology · Intelligent Logistics System · Supply Chains
1 Introduction
With the emergence of Internet technologies like cloud computing and IoT, many industries have achieved intelligent development through technology and mode innovation, and the logistics industry, as a fundamental industry of the national economy, has actively explored this path. Based on big data collection technology, information sharing and collaborative operation can be realized across the whole logistics industry chain, which would enhance the operating efficiency and quality of ILS. As IoT and cloud computing technology gradually mature, big data collection technology applied in the logistics industry will realize logistics information sharing and collaborative operation of each link. It optimizes the internal management of intelligent logistics enterprises and improves their external competitiveness. Moreover, this technology will prompt the construction of an intelligent logistics data platform and realize network-based logistics operation.
2 Value of Big Data Technologies for Developing Intelligent Logistics System [13]
Intelligent logistics refers to the use of big data in logistics as a foundation, together with radio-frequency devices and other technologies, so that the logistics system has the capabilities of intelligent
perception, active analysis, automatic decision-making, and long-term iterative improvement [9]. It helps to achieve the informationization, automation, and intelligence of logistics, to promote value co-creation in the logistics industry chain, and to achieve sustainable development of smart logistics. A more complete smart logistics system is emerging as big data and cloud computing technology continue to develop and mature [3]. Intelligent logistics based on data collection technology has achieved substantial advances in market forecasting, supply chain management, storage optimization, and logistics information transfer [6]. First, the Internet of Things (IoT), created on the basis of data gathering technology, can automate the identification and supervision of commodities, allowing numerous parties to check the source of items and their state in real time. Second, technologies like satellite navigation, radio-frequency identification, and sensing offer technical assistance for data collection and make it possible to monitor distribution visually throughout the logistics process [14]. Finally, big data collection technology works with automated control in intelligent logistics and distribution to realize the coordinated allocation of business flow, logistics, information flow, and capital flow. Data warehouses for data management first appeared as data gathering technology first intersected with the logistics sector at the start of the twenty-first century. Then, as big data, the Internet of Things, and cloud computing technologies advanced quickly, logistics companies started to investigate smart logistics models (Table 1).

Table 1. The development trend and characteristics of ILS

Evolution stages | Technical characteristics | Logistics characteristics
Informationization stage | Marked by sensors, GPS, bar codes, ERP enterprise management systems, and other technologies | Initial application of information technology in the logistics industry, mainly for logistics planning, management, design, and control; some logistics links realized automated, informationized, and digital operation
Internet of Things stage | IoT technology enables logistics entities to be connected; sensors, communication networks, and big data solve the integrated application and control of logistics information | The IoT transformed the management mode of the logistics industry, enabling logistics enterprises to track, locate, monitor, and manage each logistics link, and making logistics business and services traceable, visualized, and automatically distributed
Intelligent logistics | Builds on the IoT network combined with automatic control, artificial intelligence, decision management, and other technical means | Intelligent logistics can provide logistics enterprises with daily business and strategic decisions, such as optimal distribution-path decisions, automatic sorting machine control decisions, and automated guided vehicle operation control decisions
Smart logistics | Big data and cloud computing changed the way logistics information is obtained and stored, making logistics information transparent | With the help of portal websites, mobile apps, mobile terminals, and other carriers, new technology constantly collects and analyzes logistics information for customers, providing customized and personalized logistics services
Firstly, big data collection technology can process and analyze data gathered from the market environment, including the preferences and experiences of logistics users, to assist logistics organizations in raising the caliber of their services in response to customer wants [1]. At the same time, logistics organizations can boost their overall competitiveness by gathering crucial data about their target markets, including logistics costs, fundamental pricing, and market resources [8]. Second, big data gathering technologies may increase the openness of an organization's internal operations, encourage the sharing of information resources within the organization, boost management effectiveness, and optimize the internal management model [7]. Thirdly, big data collection technology makes it possible to combine massive data from acquisition, storage, and analysis into an information resource service system with reference value, maximize the value of information resources through Internet interaction, and ultimately build a smart logistics data platform that will hasten the transformation and advancement of the logistics sector. The gravity model, formula (1), can also be used here to assess the realistic state of the development of smart logistics. The gravity model is a mathematical model used to analyze, predict, compute, and explain spatial interaction capabilities. It assumes that the one-way trade flow between two economies is proportional to the size of the two economies and inversely proportional to the distance between them:

T_{ij} = A \frac{Y_i Y_j}{D_{ij}}    (1)
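Formula (1) translates directly into code; the numbers below are illustrative, not calibrated to any data in the paper.

```python
# Gravity model of Eq. (1): flow grows with economic size, shrinks with distance.
def gravity_flow(A: float, Y_i: float, Y_j: float, D_ij: float) -> float:
    """T_ij = A * Y_i * Y_j / D_ij."""
    return A * Y_i * Y_j / D_ij

print(gravity_flow(A=1.0, Y_i=500.0, Y_j=320.0, D_ij=40.0))  # -> 4000.0
```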
Finally, big data collection technologies will make it easier for the smart logistics system to continuously innovate and keep pace with the times.
Research on the innovation development of the smart logistics system can use the knowledge production function model, formula (2), and synthesize from it a model of the innovation effect of the smart logistics system, formula (3). This makes it easier to understand the important components influencing the innovative development of the smart logistics system and to rank the significance of the many influencing factors, opening up possibilities for the long-term growth of the system.

I_{it} = A L_{it}^{\alpha} K_{it}^{\beta}    (2)

The traditional knowledge production function studies knowledge inputs and outputs: I_{it} represents the innovation output of region i in period t, L_{it} the R&D personnel input of region i in period t, and K_{it} the R&D capital input of region i in period t; α and β are the output elasticities of R&D personnel and R&D capital, respectively, and A is a conditional parameter.

I_{it} = A X_{1it}^{\alpha_1} X_{2it}^{\alpha_2} X_{3it}^{\alpha_3} X_{4it}^{\alpha_4} \cdots X_{nit}^{\alpha_n}    (3)
Innovation in ILS is also influenced by other relevant factors that can contribute to the effective use of innovation resources. Including the corresponding explanatory variables according to the specific situation will, to a certain extent, make the model more objective and in line with the real situation. In the equation above, α1 to αn denote the elasticity coefficients of innovation output with respect to each of the above innovation inputs, and indicate the degree of influence of each explanatory variable on the explained variable.
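As a worked example, the elasticities in formula (2) can be estimated by ordinary least squares after taking logarithms, since ln I = ln A + α ln L + β ln K is linear in the coefficients. The sample below is synthetic and noise-free, so the estimates recover the chosen parameters exactly.

```python
# Log-linear OLS estimation of the knowledge production function (2).
import numpy as np

L = np.array([120, 340, 95, 410, 260], dtype=float)   # R&D personnel input
K = np.array([18, 52, 11, 75, 40], dtype=float)       # R&D capital input
I = 0.8 * L**0.6 * K**0.3                             # synthetic innovation output

X = np.column_stack([np.ones_like(L), np.log(L), np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)  # ln I = ln A + a ln L + b ln K
lnA, alpha, beta = coef
print(np.exp(lnA), alpha, beta)                       # -> 0.8, 0.6, 0.3
```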
3 Optimization of ILS Based on Big Data Collection Technology
ILS covers each individual in the logistics industry chain, and each ecosystem represents an aggregation of the individuals in ILS [2]. Additionally, the construction of an intelligent logistics data platform will play a decisive role in the operation of the whole logistics ecosystem: it promotes the effective flow of human, material, and information resources among the various systems and lets the whole logistics mode operate at low cost and high efficiency. Therefore, based on big data collection technology, the optimization of ILS will expand upon "one center and three main lines" (Fig. 1).
Fig. 1. Optimization and optimized path of ILS
3.1 One Center: Construction of Intelligent Logistics Data Platform
According to the process of data transmission, the intelligent logistics data platform can be divided into a collect & clean layer, a data storage layer, a data analysis layer, a data service layer, and a platform management layer. The collect & clean layer removes duplicate, missing, false, or irrelevant data and applies standardized processing so that data from different sources is output in the same format [5] (a minimal sketch of this layer is given after Fig. 2); it then imports the data into a large distributed database for further pretreatment. The data analysis layer, based on big data computing engines like MapReduce, Spark, and Flink, adopts open-source statistical and machine learning algorithms to build an open, flexible, and extensible data analysis environment that meets the analysis requirements of general scenarios. The data storage layer uses technologies like HDFS, HBase, Hive, and Greenplum to build a unified data lake, which makes it easy to store and manage data uniformly and provides the basis for cross-library data association analysis. The data service layer uses a microservices architecture to provide data sharing services for various users, including query, visualization, prediction, and unified data access services. The platform management layer uses Yarn as the resource management scheduler to provide unified management of the computing frameworks in the cluster; ZooKeeper manages data in the distributed environment, the Oozie workflow scheduling system manages tasks and workflows, and Cloudera Manager monitors the running status of the cluster. The construction of the intelligent logistics data platform is conducive to information interaction at each node of the logistics industry chain. It firmly connects the upstream, midstream, and downstream of the logistics industry, which will accelerate its
integration. Furthermore, this platform will have positive effects on the three subsystems of ILS, so as to achieve the goals of intelligent decision-making, logistics prediction and social environment analysis. (Fig. 2).
Fig. 2. “One Center” Optimization diagram of ILS based on big data collection technology
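The collect & clean layer described at the start of Sect. 3.1 can be sketched with pandas as below. The column names and cleaning rules are assumptions for illustration; a real pipeline would then load the unified frames into the distributed data lake.

```python
# Remove duplicate, missing and false records and normalize the format.
import pandas as pd

raw = pd.DataFrame({
    "order_id": ["A1", "A1", "A2", "A3", None],
    "weight_kg": ["12.5", "12.5", "7", "-1", "3.2"],   # mixed strings, one invalid
    "source": ["wms", "wms", "app", "app", "sensor"],
})

clean = raw.drop_duplicates().dropna(subset=["order_id"]).copy()
clean["weight_kg"] = pd.to_numeric(clean["weight_kg"], errors="coerce")
clean = clean[clean["weight_kg"] > 0]       # drop false (impossible) values
print(clean.to_string(index=False))         # one standardized output format
```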
3.2 The First Line: Integrating the Supply System to Achieve Optimal Resource Allocation
Based on data collection technology, establishing this standardized information platform has good reference value for integrating logistics information and helps accelerate the integration of the manufacturing, logistics, and sales supply chain. By integrating all resources on the supply side of the logistics industry chain, all nodes can truly share resources, information, and data [10]. The supply side can quickly grasp market demand and take differentiated countermeasures according to the different needs of consumers, so as to optimize the allocation of resources. Suppliers use big data collection technology to build a network-wide knowledge map of logistics enterprises, which helps monitor logistics enterprises in terms of financial situation, management system, negative behavior, and development ability [4]. When selecting logistics enterprises, suppliers share information through this platform and make suitable decisions according to the quality of logistics services and their own needs. Meanwhile, real-time sales-forecast information can be acquired through the platform, and inventory can be adjusted in time according to market demand, helping coordinate the control of sales and inventory, reducing operating costs, and improving the overall operating efficiency of the supply chain [11]. By integrating, logistics enterprises can not only reduce the inventory rate but also, through further analysis, improve the efficiency of commodity processing and delivery accuracy.
3.3 The Second Line: Integrating the Demand System to Explore Potential Demand
Big data collection technology supports accurate prediction, so human and material resources on the supply side can be better allocated and supply and demand can develop in coordination. With the development of retail logistics, the distance between suppliers and consumers is shortened and the structure tends to flatten; data-driven warehouse-distribution systems begin to appear. Meanwhile, consumption upgrading brings different requirements for individuation, and consumers have higher requirements for logistics. Therefore, based on big data collection technology, logistics enterprises analyze order information to build consumer portraits. The analysis results help enterprises predict consumer behavior in a region, which can optimize storage space and transport paths; only in this way can the operational efficiency of enterprises be improved and scientific logistics achieved [12]. Besides, according to the data, enterprises can track the demand for vehicles, loaders, and distribution staff in different regions at different times to realize optimal resource allocation. Inside the warehouse, the algorithm takes into account factors such as consumer orders and product positions on shelves; outside the warehouse, it considers dynamic factors to optimize order-picking and distribution routes.
3.4 The Third Line: Integrating the Regulatory System to Embed the Government Coordination Mechanism
As the administrative organ with public credibility, government should improve review mechanisms and establish a standard system under the principles of openness, fairness, and impartiality. Based on big data collection technology, government can input the operating information and transaction data of logistics enterprises into the platform for government information disclosure. Meanwhile, machine learning algorithms are applied to automatic identification, data collection, and standardization, so government staff only need to audit and label the machine-computed results. Data mining algorithms automatically analyze the relevant data, effectively preventing fraud in the logistics industry, keeping rights-protection channels open, and ensuring the efficient operation of ILS. In addition, government can leverage this technology to strengthen monitoring and early warning of the intelligent logistics market, improving the predictability, pertinence, and effectiveness of market regulation and public information services. Through a big data collection mechanism built on close interaction between government and the logistics industry, an efficient ILS will be formed.
4 Optimized Path of ILS Based on Big Data Collection Technology
The reconstruction of ILS can be realized through "one center and three main lines" based on big data and cloud computing. The logistics e-commerce platform connects to the smart logistics model primarily from the demand side, from the perspective of
information flow and service flow. It penetrates all facets of the supply chain, fully utilizing the Internet of Things, beginning with raw materials and ending with consumers, bringing together the commercial flow, logistics, capital flow, and information flow involved in the entire supply chain to the logistics e-commerce platform (Fig. 3).
Fig. 3. The logistics e-commerce platform
The e-logistics platform establishes a connection with the wisdom model on the supply side, gathers real-time static data from logistics activities like warehousing, human resources, and inventory, and enters it into the big data system. It also implements full logistics process monitoring using Internet of Things technology and creates dynamic data for real-time transmission to the big data system (Fig. 4).

Fig. 4. The e-logistics platform

In order to facilitate government oversight, the logistics e-government platform focuses primarily on the government's oversight of the smart logistics model for micro-regulation and macro-control, summarizing into the platform logistics-related policy information, inter-regional logistics regulations, government agencies' planning, regulation, and design instructions, and other information (Fig. 5).
Fig. 5. The logistics e-government platform
The entire intelligent logistics system realizes information flow from three paths with the aid of the logistics e-commerce platform, the logistics e-logistics platform, and the logistics e-government platform, quickly integrating the supply and demand information of intelligent logistics and providing timely and effective feedback to the demand side, enabling quick and efficient logistics operation, judicious use of resources, significant cost reduction, and comprehensive and integrated supply chain management.
5 Conclusion A new round of scientific revolution and technological innovation have become the main driving force for the optimization of ILS. Promoting the integrated development of big data collection techniques and intelligent logistics will become a new proposition in the new era. Today, with the rapid development of big data technology, China needs to analyze the problems of the existing intelligent logistics system and discuss the strategies to optimize ILS systematically. So this paper proposes an information-based and intelligent logistics development model from “one center and three main lines” which will realize the coordinated development of quality and efficiency. Acknowledgement. This paper is supported by the fund project of Harbin University of Commerce (2017BS016) and Sophia University.
References
1. Ageron, B., Bentahar, O., Gunasekaran, A.: Digital supply chain: challenges and future directions. Supply Chain Forum: Int. J. 21(3), 133–138 (2020)
2. Bonadio, B., Huo, Z., Levchenko, A.A., Pandalai-Nayar, N.: Global supply chains in the pandemic. J. Int. Econ. 133 (2021)
3. Chien, C.-F., Dauzère-Pérès, S., Huh, W.T., Jang, Y.J., Morrison, J.R.: Artificial intelligence in manufacturing and logistics systems: algorithms, applications, and case studies. Int. J. Prod. Res. 58(9), 2730–2731 (2020)
4. Ding, Y., Jin, M., Li, S., Feng, D.: Smart logistics based on the internet of things technology: an overview. Int. J. Log. Res. Appl. 24(4), 323–345 (2021)
5. Guo, Z., Sun, S., Wang, Y., Ni, J., Qian, X.: Impact of new energy vehicle development on China's crude oil imports: an empirical analysis. World Electric Vehicle J. 14(2), Article 2 (2023)
6. Helo, P., Hao, Y.: Blockchains in operations and supply chains: a model and reference implementation. Comput. Ind. Eng. 136, 242–251 (2019)
7. Hobbs, J.E.: Food supply chains during the COVID-19 pandemic. Can. J. Agricult. Econ./Revue Canadienne d'agroeconomie 68(2), 171–176 (2020)
8. Jagtap, S., Bader, F., Garcia-Garcia, G., Trollman, H., Fadiji, T., Salonitis, K.: Food Logistics 4.0: opportunities and challenges. Logistics 5(1), Article 1 (2021)
9. Koberg, E., Longoni, A.: A systematic review of sustainable supply chain management in global supply chains. J. Clean. Prod. 207, 1084–1098 (2019)
10. Lobe, B., Morgan, D., Hoffman, K.A.: Qualitative data collection in an era of social distancing. Int. J. Qualit. Methods 19 (2020)
11. Pan, S., Zhong, R.Y., Qu, T.: Smart product-service systems in interoperable logistics: design and implementation prospects. Adv. Eng. Inform. 42 (2019)
12. Wieland, A., Durach, C.F.: Two perspectives on supply chain resilience. J. Bus. Logist. 42(3), 315–322 (2021)
13. Woschank, M., Rauch, E., Zsifkovits, H.: A review of further directions for artificial intelligence, machine learning, and deep learning in smart logistics. Sustainability 12(9), Article 9 (2020)
14. Yang, J., et al.: Brief introduction of medical database and data mining technology in big data era. J. Evid. Based Med. 13(1), 57–69 (2020)
Anomaly Detection Algorithm of Cerebral Infarction CT Image Based on Data Mining Yun Zhang(B) Department of Neurology, The First Affiliated Hospital of Hebei North University, Zhangjiakou 075000, Hebei, China [email protected]
Abstract. Anomaly detection is one of the important applications in the fields of image processing and pattern recognition. Using image processing technology and machine learning algorithms to analyze images and check for variations reduces the workload of manual data processing and eliminates subjective differences between operators. Because of its high sensitivity, high detection rate, low false-alarm rate, fast processing speed, and high precision, it is increasingly used in settings such as medical image processing and is expected to be widely adopted. This paper studies an anomaly detection algorithm for cerebral infarction CT images based on data mining. Based on an analysis of the characteristics of CT images, image segmentation technology, and image anomaly detection algorithms, the algorithm is tested using 50 cerebral infarction CT images provided by a hospital. The test results show that the proposed algorithm achieves high accuracy in anomaly detection on cerebral infarction CT images, which demonstrates good generalization ability. Keywords: Data Mining · CT Images of Cerebral Infarction · Image Segmentation · Image Abnormality Detection
1 Introduction
With the continuous development and improvement of related technologies in the field of computer imaging, the development of medical imaging, especially in China, has accelerated, with significant updates and technological improvements in hardware and software. The relative maturity of these factors pushes medical diagnosis and treatment to a new stage and greatly promotes the development of clinical medicine. Cerebral infarction is caused by cerebral atherosclerosis: narrowing of the cerebral artery lumen, damage to the vessel lining, and local thrombosis triggered by various factors aggravate arterial stenosis and ischemia, causing hypoxia and leading to neurological dysfunction [1, 2]. Computed tomography is the first choice for cerebrovascular disease because it is simple, fast, effective, noninvasive, and diagnostically accurate, providing clinicians with reliable diagnostic data [3, 4]. Reported accuracy of brain computed tomography for the size
and location of cerebral infarction ranges from 66.5% to 89.2%, and the accuracy for early cerebral hemorrhage is 100% [5, 6]. If the infarction occurred within 24 h, if the infarct focus is smaller than 8 mm, or if the lesion lies in the brain stem or cerebellum, the rate at which a CT scan identifies early cerebral infarction is very low, limited as it is by human visual perception [7, 8]. A brain CT scan may then fail to yield a correct diagnosis and should be rechecked promptly if necessary. Automatic interpretation of stroke CT images helps doctors find the location and scope of the initial stroke in time and provide further treatment as soon as possible; it is also very important for reducing the impact of strokes and even saving patients' lives. Drawing on a large number of relevant references, this article combines the characteristics of CT images, image segmentation technology, and image anomaly detection algorithms. To verify the performance of the proposed random-forest-based anomaly detection algorithm for cerebral infarction CT images, 50 cerebral infarction CT images provided by a hospital are used to test the algorithm. The test results show that the forest detection algorithm proposed in this paper achieves a certain speed improvement over the traditional random forest.
2 Anomaly Detection Algorithm of Cerebral Infarction CT Image Based on Data Mining
2.1 The Characteristics of CT Images
A CT image is basically based on the gray level of the image, that is, the degree of absorption of the X-ray beam. A medical CT image is essentially a black-and-white image in which darker (gray) areas correspond to low-density regions of the examined area, while whiter areas correspond to high-density regions: a white shadow in the lungs indicates a dense region, just as dense structures such as bone appear white [9, 10]. In short, CT looks somewhat like an X-ray image, but it is not one; the biggest difference between CT and plain radiographic images is sensitivity to density. The density of human organs and tissues does not vary greatly, so high sensitivity to density is needed for measurement and display. Compared with X-ray images, this is the most important advantage of CT images. In summary, medical CT images can better depict the soft-tissue organs of the human body, such as the brain, lungs, heart, liver, spleen, and spinal cord, and can accurately display lesions on the collected anatomical CT images of, for example, pelvic organs [11, 12]. Figure 1 shows the application of CT images in human tissues.
[Figure: diagram linking CT imaging to human tissues: brain, heart, lung, liver, spleen, spinal cord]
Fig. 1. Application of CT image in human tissue
An X-ray image can indicate the density of the area under examination to a certain extent, distinguishing high-density from low-density regions, but it cannot quantify density. Compared with X-ray images, medical CT images can describe the internal structure of major organs and tissues, that is, the different densities of different internal regions, and can quantify specific density differences at higher resolution. In practice, the absorption coefficient itself may not be used; for convenience, the CT value is used instead to express density, with the unit HU (Hounsfield units).
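The standard conversion behind the CT value maps the linear attenuation coefficient to Hounsfield units, with water at 0 HU and air near -1000 HU; a minimal sketch:

```python
# CT value (HU) from the linear attenuation coefficient mu.
def ct_value(mu: float, mu_water: float = 0.19) -> float:
    """HU = 1000 * (mu - mu_water) / mu_water; mu in 1/cm, with
    mu_water roughly 0.19/cm at typical tube energies (approximate)."""
    return 1000.0 * (mu - mu_water) / mu_water

print(ct_value(0.19))   # water -> 0 HU
print(ct_value(0.0))    # air  -> -1000 HU
```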
2.2 Image Segmentation Technology
(1) Threshold method. The threshold method is the most basic image segmentation method and the most common in real applications. By comparing the pixel values of the image with specific threshold values, the image can be divided into several regions. The key problem of the threshold method is the selection of the threshold; by threshold type, it divides into fixed-threshold and adaptive-threshold methods. Its main drawback is high sensitivity to image noise, because it uses only the gray-level signal and cannot exploit spatial relationships within the image.
(2) Edge detection method. The edge detection method uses the gray-level difference between target and background to trace the boundary of the target region directly, achieving segmentation. Boundaries between different regions are marked by gray-level discontinuities: detecting the gray-level changes around each pixel and modeling the transition pattern near the boundary detects the edges of the image. However, if the image is blurred or the detected edges are broken or not closed, the segmentation quality of edge detection drops markedly.
(3) Region growing method. The basic idea of region growing is to select seed pixels in the image and merge neighboring pixels with characteristics similar to the seeds according to an established growth rule, obtaining regions with the same attributes and thereby segmenting the image. The keys to region growing are seed selection and the formulation of the growth rule.
(4) Clustering method. The clustering method divides image pixels into regions with the same attributes according to some similarity measure, so that within-region similarity exceeds between-region similarity. This is essentially a classification method that yields a segmentation of the image. Clustering segmentation uses only the gray information of pixels while ignoring their spatial relationships.
2.3 Image Anomaly Detection Algorithm
The random forest algorithm performs much better, but there is still room for improvement. In particular, for the abnormal cerebral infarction CT scans studied in this paper, the target parameters can be optimized to improve the detection performance of the whole system. Analysis of the construction process of the random forest algorithm shows that the factors affecting its classification performance mainly include the depth of the decision trees and the node selection principle. This paper
2.2 Image Anomaly Detection Algorithm

The performance of the random forest algorithm is already good, but there is still room for improvement. In particular, for the abnormal cerebral infarction CT images studied in this paper, the target parameters can be tuned to improve the detection performance of the whole system. Analysis of the construction process of the random forest algorithm shows that the factors affecting its classification performance are mainly the depth of the decision trees and the principle of node selection. This paper optimizes the random forest algorithm for abnormal cerebral infarction CT images in these two respects.

(1) Depth optimization of the decision tree. Generally speaking, a pruning step takes part in the construction of a decision tree and helps to improve its generalization ability. In a random forest, however, the training depth of each decision tree is usually limited and pruning is not included. The depth of the decision tree is nevertheless one of the key factors affecting its performance: if the number of nodes is too small, the classification accuracy of the decision tree cannot be guaranteed, while too many nodes reduce the overall generalization ability of the algorithm.

(2) Optimization of the node selection principle. The selection principle of decision tree nodes has been studied by many scholars; the best-known results are the ID3, C4.5 and CART algorithms, all of which have proven powerful on a wide range of problems. Although the three algorithms give similar results on this system, there are still differences on some data. The reason is that these selection principles have different emphases and each has its own limitations, so even a split that is optimal under two selection principles is not guaranteed to be the best choice under another.
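Both optimization handles just discussed, tree depth and the node-splitting principle, are exposed directly by common random forest implementations. A minimal sketch using scikit-learn, where 'gini' corresponds to CART-style splitting and 'entropy' to the information-gain family used by ID3/C4.5; the grid values are illustrative, and X_train and y_train are assumed to be precomputed CT image features and labels:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Search over tree depth and node-splitting criterion; grid values are illustrative.
param_grid = {
    "max_depth": [4, 8, 12, None],     # None lets trees grow without a depth limit
    "criterion": ["gini", "entropy"],  # CART-style impurity vs information gain
}
search = GridSearchCV(
    RandomForestClassifier(n_estimators=100, random_state=0),
    param_grid,
    cv=5,
)
search.fit(X_train, y_train)  # X_train, y_train: assumed precomputed features and labels
print(search.best_params_, search.best_score_)
```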
3 Experiment

3.1 Experimental Data

The experimental data used in this study come from a hospital in this province. The images were acquired on a dual-source 64-row CT scanner: the tube voltage was 120 kVp, the distance from the radiation source to the detector was 1040 mm, the CT image size was 512 × 512 pixels, and the pixel spacing was 0.35 mm.

3.2 Preprocessing of CT Brain Images

The preprocessing of brain images mainly involves segmentation of the brain region and image position correction. Segmentation of the brain area is based on segmentation of the skull: the closed contour of the skull is then used to obtain the internal brain area. The advantage of extracting the brain region in the preprocessing stage is that subsequent processing algorithms no longer need to consider interference from the skull and the background. Image correction aligns the midline of the brain area with the midline of the image: it first detects the ideal midline of the brain area and then translates and rotates the image according to the deviation of the brain midline from the image midline. The main reason for performing image correction in the preprocessing stage is to provide symmetric images of the brain region for subsequent steps, such as extracting brain parenchyma and detecting cerebral infarct regions.
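A minimal sketch of the skull-based brain-region extraction just described (detailed in (1) below), assuming a slice already converted to HU; the bone threshold is illustrative and relies only on bone being far denser than brain parenchyma:

```python
import numpy as np
from scipy import ndimage

def brain_mask(hu_slice: np.ndarray, bone_hu: float = 300.0) -> np.ndarray:
    """Segment the skull with a fixed HU threshold, then take its filled interior as brain."""
    skull = hu_slice > bone_hu                 # fixed-threshold skull segmentation
    filled = ndimage.binary_fill_holes(skull)  # fill the region enclosed by the skull ring
    return filled & ~skull                     # interior of the skull, excluding bone itself
```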
(1) Segmentation of brain regions. Computed tomography images of the head mainly contain brain tissue regions, the skull, foreign bodies and background. The CT value of the skull is relatively constant in the image and much higher than the CT values of the brain parenchyma and the background. This anatomical characteristic allows the skull to be segmented with a fixed threshold. Because the shape of the skull is fixed and closed, the closed area inside it is taken to be the brain region.

(2) Correction of brain image position. During imaging the patient is scanned in a fixed position, but the brain image still needs position correction, because different computed tomography images show different brain rotation angles depending on the patient's posture. Correcting the position of the head according to the ideal midline of the brain roughly fixes the anatomical orientation of the entire brain. The brain image correction algorithm works mainly by aligning the ideal brain centerline and includes two processing steps: detection of the ideal centerline of the brain image, and the corresponding spatial transformation. Detection of the ideal centerline mainly consists of detecting regions of interest (ROI) with high gray values in the 2D brain images and fitting an ideal plane to the detected regions in 3D space.

3.3 Evaluation Criteria

(1) Rapidity. For the cerebral infarction CT image anomaly detection algorithm studied in this paper, speed is a very important indicator: only when the speed meets a certain standard can the algorithm satisfy the needs of real-time clinical diagnosis. Speed also reflects the detection efficiency of the algorithm. Therefore, we first test the speed of the cerebral infarction CT image anomaly detection algorithm and of its comparison algorithm, using the scan time as the indicator of performance.

(2) Accuracy. Accuracy is the most important measure of the algorithm in this paper, because the algorithm will ultimately be used for clinical diagnosis and can only be deployed once its accuracy reaches a certain level. This paper therefore focuses on testing the accuracy of the algorithm and uses it as the benchmark for verifying performance. Accuracy is evaluated through the error rate, the most commonly used indicator for classification tasks.
For a specific data set D containing m test samples, the classification error rate of the learner f is defined as:

E(f; D) = \frac{1}{m} \sum_{i=1}^{m} \mathbb{I}(f(x_i) \neq y_i)    (1)

where y_i is the true label of sample x_i and \mathbb{I}(\cdot) is the indicator function. Evaluating the performance of the learner f thus amounts to comparing its prediction f(x) with the true label y, so the error rate is the proportion of misclassified samples among all samples. For a data distribution D with probability density function p(\cdot), the error rate has a more general form:

E(f; D) = \int_{x \sim D} \mathbb{I}(f(x) \neq y)\, p(x)\, dx    (2)
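Formula (1) is straightforward to compute; a minimal sketch:

```python
import numpy as np

def error_rate(y_pred: np.ndarray, y_true: np.ndarray) -> float:
    """Empirical error rate E(f; D) of formula (1): fraction of misclassified samples."""
    return float(np.mean(y_pred != y_true))
```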
4 Discussion

4.1 Comparison of Detection Time

In this paper, detection time is used as the measure of the algorithm's speed. To avoid extreme errors, the average detection time over 50 sets of cerebral infarction CT images is taken as the execution time of a single run, and the experiment is repeated 3 times in total. The results are shown in Table 1:

Table 1. Comparison between the algorithm in this paper and the traditional random forest algorithm

Number of experiments    Algorithm of this paper (s)    Traditional random forest (s)
1                        5                              7.48
2                        5.33                           7.73
3                        4.94                           7.31
Average                  5.09                           7.51
Fig. 2. Comparison of detection time
It can be seen from Fig. 2 that the random forest anomaly detection algorithm proposed in this paper has an average detection time of 5.09 s for the 50 CT images of cerebral infarction, saving 2.42 s compared with the traditional detection algorithm. The random forest detection algorithm proposed in this paper is therefore a clear improvement over the traditional random forest.
(Fig. 3 plots, for each of the five experiments, the false rate of (a) the algorithm of this paper and (b) the traditional random forest.)
Fig. 3. Error rate of abnormal detection of cerebral infarction CT image
4.2 Error Rate Comparison

The error rate here refers to the ratio of the incorrectly detected cerebral infarction area to the true cerebral infarction area. Whether a detection result is correct is checked manually by experts; a result covering less than 90% of the true cerebral infarction area is declared an error. To avoid extreme errors, 50 images were selected from each data set, the experiment was performed 5 times, and the maximum of the 5 results was taken for analysis. The test
result is shown in Fig. 3. Figure 3 (a) is the algorithm in this paper, and Fig. 3 (b) is the traditional algorithm. It can be seen from Fig. 3 that the error rate of traditional random forest in detecting abnormal CT images of cerebral infarction is as high as 56%, which is much higher than the 13.1% error rate of the algorithm in this paper. From the overall analysis, the algorithm in this paper has a high accuracy in detecting abnormalities in CT images of cerebral infarction, which shows that the algorithm in this paper has excellent generalization ability.
5 Conclusions

Early rehabilitation nursing for patients with cerebral infarction can improve their limb function and reduce disability. At present, although traditional nursing methods have certain clinical effects, in practice they are often carried out according to the clinical experience of individual nurses, with a certain degree of randomness and blindness and without attention to individual differences between patients. In addition, various nursing measures are not standardized, so the prognosis of patients often falls short of expectations. The algorithm proposed in this paper performs well in the anomaly detection of cerebral infarction CT images. Image anomaly detection is a complex problem, and with the development of imaging methods and image processing technologies it is increasingly widely applied. Developing image anomaly detection algorithms for specific application backgrounds is of great significance for reducing the workload of manually reading images and improving the accuracy and objectivity of detection.
References

1. Zhang, L., Zhao, C.: A spectral-spatial method based on low-rank and sparse matrix decomposition for hyperspectral anomaly detection. Infrared Phys. Technol. 38(14), 4047–4068 (2018)
2. Bozcan, I., Sejersen, J., Pham, H., et al.: GridNet: image-agnostic conditional anomaly detection for indoor surveillance. IEEE Robot. Autom. Lett. PP(99), 1–1 (2021)
3. Du, Y.: An anomaly detection method using deep convolution neural network for vision image of robot. Multimedia Tools Appl. 79(13–14), 9629–9642 (2020). https://doi.org/10.1007/s11042-020-08684-1
4. Song, S., Qin, H., Yang, Y., et al.: Hyperspectral anomaly detection via graphical connected point estimation and multiple support vector machines. IEEE Access PP(99), 1–1 (2020)
5. Dib, J., Sirlantzis, K., Howells, G.: A review on negative road anomaly detection methods. IEEE Access PP(99), 1–1 (2020)
6. Wang, S., Wang, X., Zhong, Y., et al.: Hyperspectral anomaly detection via locally enhanced low-rank prior. IEEE Trans. Geosci. Remote Sens. PP(99), 1–15 (2020)
7. Bahrami, M., Pourahmadi, M., Vafaei, A., et al.: A comparative study between single and multi-frame anomaly detection and localization in recorded video streams. J. Vis. Commun. Image Represent. 79(1), 103232 (2021)
8. Zhang, W., Cheng, J., Zhang, Y., et al.: Analysis of CT and MRI combined examination for the diagnosis of acute cerebral infarction. J. College Phys. Surgeons Pakistan 29(9), 898–899 (2019)
9. Chen, C., Dong, Y., Kang, J., et al.: Asymptomatic novel coronavirus pneumonia presenting as acute cerebral infarction: case report and review of the literature. Chin. J. Emerg. Med. 29(00), E017–E017 (2020)
10. Gui, X., Wang, L., Wu, C., et al.: Prognosis of subtypes of acute large artery atherosclerotic cerebral infarction by evaluation of established collateral circulation. J. Stroke Cerebrovasc. Dis. 29(11), 105232 (2020)
11. Hsu, P.K., Hsieh, Y.J.: Failure to detect large cerebral infarction by continuous near-infrared spectroscopy cerebral oximetry monitoring during aortic dissection. Changhua J. Med. 17(1), 33–36 (2019)
12. Quinn, C.T.: Silent cerebral infarction: supply and demand. Blood 132(16), 1632–1634 (2018)
Design of Decoration Construction Technology Management System Based on Intelligent Optimization Algorithm Xiaohu Li(B) Zhejiang HongSha Construction Co., LTD., Haining 314400, Zhejiang, China [email protected]
Abstract. An intelligent optimization algorithm reasonably configures each parameter during system operation according to the actual situation in order to achieve an optimal goal. The management of decoration construction technology (DCT) is a complex, huge and tedious project, so applying an intelligent optimization algorithm to the design of a DCT management system can meet the needs of the system. To this end, this article studies the building DCT management system through experiments and examples and integrates the related algorithms. Experimental results show that the data input response time of the system is 213 ms. The performance test results essentially satisfy the system requirements, and database access performance can be further improved by adding data table indexes and stored procedures. Keywords: Intelligent Optimization Algorithm · Decoration Construction Technology · Management System · Decoration Design
1 Introduction With the continuous improvement of the social and economic level, people have higher and higher requirements for the quality of life, with higher-level needs in terms of comfort, convenience and safety. DCT based on intelligent optimization algorithms is a new, advanced and efficient decoration construction method. It automates the management of decoration construction projects through a computer system, which can effectively enhance the competitiveness of enterprises in the market. There has been much research on intelligent optimization algorithms and on the theory of decoration construction. For example, Wang Lejun proposed a trajectory planning control method based on an intelligent optimization algorithm [1]. Starting from the problem of optimizing loading positions in current warehousing logistics systems and combining it with actual education needs, Liu Yiwei developed an intelligent warehousing logistics training platform based on the Siemens S7-1200 [2]. Pan Zhangqian noted that architectural decoration serves, among other purposes, to protect the main structure of the building, and that the development of architecture and of architectural decoration technology both answer the needs of people's lives [3]. Therefore, this paper proposes the design of the
DCT management system, combined with an intelligent optimization algorithm, which can meet the requirements of science and rationality. This article first studies the intelligent optimization algorithm and analyzes the swarm intelligence algorithm and the particle swarm algorithm. Secondly, the main technical methods of building decoration are described. Afterwards, the overall design of the management system is analyzed and implemented, and the relevant performance data are obtained through experimental tests.
2 Design of Decoration Construction Technology Management System Based on Intelligent Optimization Algorithm

2.1 Intelligent Optimization Algorithm

Optimization is a technique supported by mathematical theory that steers certain trends in a direction favorable to decision-making. According to the different problems actually solved, many branches of mathematics have been derived from it. Optimization problems are very important in production and life and are common in production planning, economic planning, decoration construction design, system control, artificial intelligence, pattern recognition and other fields. With the rapid development of business and technology, the optimization problems to be solved are becoming more and more complex. As problems grow more complex, boundary conditions multiply and mathematical modeling becomes harder, traditional methods struggle, and intelligent methods that adapt to various complex conditions have become a very useful research direction [4, 5]. An intelligent optimization algorithm is an artificial-intelligence-based method that models and simulates complex systems and performs the corresponding functions within the simulation. It is aimed mainly at traditional DCT: in the management process, many uncertain factors make decoration construction inefficient and project quality low, so the traditional manual management method needs to be improved [6]. In a swarm intelligence algorithm, the set containing all individuals is called the population, denoted S(v), where v is the number of iterations of the population and st is the population size. Each individual represents a candidate solution of the optimization problem, and the population search is guided by computing the fitness value of each individual; it involves the three main processes of population cooperation, adaptation and competition [7, 8]. The algorithm model is as follows:

S(v) = {S(st), T(∂), X(ε), Y(·), v}    (1)
Here S(st) denotes the population set of size st, and T, X and Y respectively represent the population cooperation, self-adaptation and mutual competition operations; ∂, ε and the corresponding argument of Y carry the information used by each operation. To optimize a problem or a function, we specify its feasible solution space and imagine that a particle swarm, composed of a certain number of particles with variable speeds and positions, moves within that space. These particles constantly
adjust their speed of motion by evaluating their own best historical position and the best historical position of the entire particle swarm, so as to search the whole feasible solution space intelligently, similar to the foraging behavior of a flock of birds; finally, all particles may move to the position of the optimal solution [9, 10]. The specific evolution of a particle is:

Q_{il}^{s+1} = o p_{il}^{s} + d_1 u_{il}^{s} (y_{il}^{s} - τ_{il}^{s}) + d_2 u_{il}^{s} (y_{gl}^{s} - τ_{il}^{s})    (2)

where i denotes the i-th particle, l the dimension of the particle, and s the iteration number; τ_{il}^{s} is the current position, y_{il}^{s} the particle's own best historical position, and y_{gl}^{s} the best historical position of the whole swarm. In image processing, an operation performed directly on pixel points is called a spatial method:

h(a, b) = F_I(g(a, b))    (3)
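A minimal particle swarm sketch following the update rule of formula (2), with o read as the inertia weight, d_1 and d_2 as acceleration coefficients and u as uniform random numbers; the objective function, bounds and parameter values below are illustrative:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=100, o=0.7, d1=1.5, d2=1.5, bounds=(-5.0, 5.0)):
    """Minimize f over a box using the velocity update of formula (2)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions (tau in the text)
    v = np.zeros((n_particles, dim))              # velocities (Q in the text)
    pbest = x.copy()                              # each particle's best position
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()      # swarm's best position
    for _ in range(iters):
        u1 = rng.random((n_particles, dim))
        u2 = rng.random((n_particles, dim))
        v = o * v + d1 * u1 * (pbest - x) + d2 * u2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Example: minimize the sphere function in three dimensions.
best_x, best_val = pso(lambda z: float((z ** 2).sum()), dim=3)
```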
Compared with other evolutionary algorithms, PSO does not need to encode the variables; it is more convenient to use and does not require much manual tuning.

2.2 The Main Technical Means of Building Decoration

Architectural decoration refers to the various treatments applied to the internal and external surfaces and spaces of a building, using decoration materials or ornaments, to protect the main structure of the building, improve its usability, and beautify it. Decoration is a combination of craftsmanship and art: it includes not only the decoration of the building itself but also the design of space, light and ventilation. Its main technical means are installation, pasting, spraying, hanging and so on. Installation attaches various functional devices, parts, finished products, semi-finished products and other decorative units to the building structure. Pasting bonds various surface materials and decorative elements to the ceiling, floor and wall surfaces of the building with adhesives. Spraying uses paint coatings: after the base-layer surface is treated, protective and decorative spraying are carried out. Hanging includes setting up, hanging and displaying furniture, decorations, handicrafts, flowers, calligraphy and painting at predetermined locations in the room [11, 12]. Decoration covers not only the decoration of the inside and outside of the building, but also the strengthening and maintenance of its structure, as well as structural changes, demolition and transformations that alter the original function of the building. Decoration construction can add an external enclosure structure to the building, add internal water treatment and lighting systems, and add permanent decoration. Demolition is the removal of structures and components that interfere with the new use of the building, including floors, stairs, walls, equipment, furniture, doors, windows and siding. Renovation includes the renewal and adjustment of the original surface materials, doors and windows, fixtures and functional layout to improve the decoration function and meet the goals of energy saving and emission reduction. The DCT management plan serves the enterprise's engineering projects: during the decoration construction process (DCP), scientific and reasonable methods and measures are adopted according to the actual situation of each project to ensure its quality, realizing the management of the entire project quality, schedule, cost
and other aspects through the preparation of detailed schedules. When designing, the project management objectives should be determined according to the actual situation of the project, and practical work plans and technical measures should be worked out. The responsible persons must have a clear understanding of the problems that may occur, or that cannot be predicted, during the DCP. During the DCP it is necessary to ensure the completeness of the decoration construction drawings, to analyze the site in detail, and to do the calculations well. Under this approach, the optimal plan is determined after analyzing and comparing the actual situation, and the optimization algorithm is combined with the control strategy. The traditional manual operation mode is improved by combining it with intelligent technology to solve practical problems, while manual calculation methods are still used for tasks such as adjusting system parameters. In actual work, because conditions on the decoration construction site are complex and cumbersome and the demand for data processing speed is high, some problems in the system are difficult to predict and cannot be estimated accurately. To avoid unnecessary losses or increased costs from these problems, intelligent optimization algorithms are an efficient and feasible approach: they can effectively overcome the shortcomings of traditional manual operation such as low efficiency and excessive workload.

2.3 Design of Decoration Construction Management System

In the traditional DCP, the low technical level of the decoration construction personnel and lax requirements on DCT and materials often lead to frequent project quality problems. A system designed on the basis of an intelligent optimization algorithm can effectively improve the management efficiency of architectural decoration enterprises. Since an intelligent optimization algorithm can, to a certain extent, solve the problems of the traditional DCP, this paper proposes an improved automatic management system for decoration construction based on BP neural network technology. First, the model parameters are determined after modeling and analyzing the system. Then the ASM module is used to design the model data tables, which are input into the MATLAB software. Finally, the CRC neural network is used to realize the entire intelligent optimization algorithm, which can effectively solve the problems of the traditional DCP in actual engineering applications. The main technical requirements and basic parameters of each process are determined according to the engineering situation, and operating procedures are implemented for the various types of workers, equipment and so on in accordance with the process flow. In the DCP, automatic control is realized through the human-computer interaction interface, so as to achieve automatic adjustment and improve efficiency and quality. Since the intelligent optimization algorithm is an efficient and cost-effective method, it needs to be designed according to the system requirements during the DCP. The DCT management plan mainly includes three aspects: the development, design and maintenance of the system. DCT management planning is an optimization method based on intelligent optimization algorithms; it integrates and analyzes all aspects involved in the system and, combining the relevant data, establishes the corresponding system. In the
preparatory stage of the decoration construction process, we must have a certain understanding of the decoration construction personnel and management personnel, and formulate, based on the actual situation, a scientific, reasonable, implementable and project-compliant program as the basis of DCT management planning. At the same time, the various departments must be linked to each other to ensure that the whole management effort proceeds smoothly and achieves the expected goals. Overall design of the system: the main requirement of the system is to digitize and computerize the information collected in the home improvement supervision process, so that owners can participate extensively in the home improvement process and the industrialization and standardization of the home improvement industry is promoted. After analysis, the system mainly consists of smart meters, network cameras, Web servers, and mobile clients (monitoring clients and owner clients), and uses mobile Internet technology to complete system-wide data communication. The functional requirements of the server include personnel information, building information, indicators, feedback evaluation, and user authority management. Personnel information management includes owner information, supervision information and decoration construction personnel information management. Building information management includes building and house information management.
Fig. 1. Server Frame Structure Design (user interface with query and management interfaces; user, house and instrument management; data collection and distribution; interface design covering query, insert, update and delete)
Measurement data management includes intelligent management of instrument information, management of measurement standards, and collection and distribution of quality control data. Feedback score management includes owner feedback and decoration service scores. User authorization management includes user, role and authorization management. The composition of the server is shown in Fig. 1. Based on the C/S architecture, the system is divided into two parts, client and server, with the server interacting with the database. Communication between the client and the server first requires the support of the network environment. Under the C/S architecture, the system software is likewise divided into client software and server software, and the server software realizes the interaction with the database.
3 System Test 3.1 Test Environment The physical structure of the system is shown in Fig. 2:
Fig. 2. System physical structure
(1) Web server. This design uses a Lenovo notebook computer with an 8 MB second-level cache, a 3 GHz clock frequency, 8 GB of memory and a 2 TB hard disk, with WiFi and Bluetooth modules; it is suitable for software server development and testing and provides the MySQL environment and the JMeter performance testing tool.

(2) Mobile devices. A mini Android tablet was selected as the mobile terminal. Its main hardware parameters include: processor, operating system, screen size (10.1 inches), display core (8 display engines), memory (2 GB), screen resolution (1024 × 781), network (Wi-Fi, 5G), and capacity (32 GB).

3.2 Function Test

This section reflects the completion of each functional module through testing of each interface of the website, and also serves as instructions for users of the smart home monitoring software. The modules to be tested mainly include: user login management, employee information management, building information management, quality control data management, and employee authority management. Each functional module is then tested and the test results are displayed. To ensure good stability and reliability of the mobile terminal software, multiple running tests are needed so that errors in the running software can be corrected in time. This part tests each functional module through the mobile terminal user interface, including the response time of the user interface and whether each function works normally, and also provides guidance for users of the software. The most important tests include: whether a data connection with the smart instrument can be established quickly, whether multiple instruments can work online at the same time, whether the quality control data can be uploaded to the server in time, and the quality of the images monitored online in real time.

3.3 Performance Test

System performance testing checks whether the system performance meets the production requirements by simulating combinations of business pressure and production/operation scenarios, and evaluates the processing capacity of the system at peak load or beyond the maximum load. The load and response time data obtained during the test can be used to validate the capacity model, find system bottlenecks and identify problems in the software so as to optimize the system. Performance tests are then performed on the system's commonly used MySQL database and HTTP query interface.
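The JMeter-style stress test described in Sect. 3.3 can be approximated with a short script. A minimal sketch (the endpoint URL, thread count and loop count are illustrative), assuming the requests library:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8080/api/query"  # illustrative endpoint
THREADS, LOOPS = 80, 10                  # illustrative load settings

def one_request(_):
    t0 = time.perf_counter()
    requests.get(URL, timeout=10)
    return (time.perf_counter() - t0) * 1000.0  # response time in ms

with ThreadPoolExecutor(max_workers=THREADS) as pool:
    times = list(pool.map(one_request, range(THREADS * LOOPS)))

print(f"avg {sum(times) / len(times):.1f} ms, max {max(times):.1f} ms")
```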
4 Analysis of Test Results 4.1 Analysis of Performance Results Use the JMeter test tool to simulate multi-user concurrency, and perform data insertion, data query, QA data interface and data query interface tests on the QA data table. The specific test results are shown in Table 1: Table 1. Test response time table
Test                    Threads    All threads start time    Cycles    Response time (ms)
Data insertion          50         10                        100       213
Data query              100        10                        1000      20
Upload interface        80         10                        10        745
Data query interface    80         10                        20        1120
Fig. 3. Response Time Analysis of Various Performance Tests
As shown in Fig. 3, the average response time of data insertion is 213 ms, the average response time of data query is 20 ms, the average response time of the system processing (upload) interface is 745 ms, and the response time of the simulated request is 1120 ms. These results basically meet the needs of the system.
The database is the core of data storage in the whole system. To ensure safe, reliable and stable operation, it is necessary to determine the optimal number of concurrent users the system can carry and the load suitable for long-term concurrency. See Table 2 for details:

Table 2. System data description (response times in ms)

           Data insertion    Data query    Data upload
Average    216               19            765
Median     65                1             734
90%        575               46            1290
95%        816               105           1435
99%        1362              331           1640
Min        18                0             25
Max        3445              926           1855
Fig. 4. Test Response Time
As can be seen from Fig. 4, in the stress test of the quality inspection data upload interface, the system processes about 80.81 requests per second with an average response time of 765 ms. The average response time in the performance test of the quality inspection data query interface is 216 ms, which basically meets the system requirements, and the average response time of the data query stress test is 19 ms.
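The abstract notes that database access performance can be improved by adding data table indexes and stored procedures. A minimal sketch of adding an index from Python, assuming the mysql-connector-python package; all connection details, table and column names are hypothetical:

```python
import mysql.connector

# All connection details and identifiers below are hypothetical.
conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="qa_data"
)
cur = conn.cursor()
# An index on the column used by the hot query path shortens lookups.
cur.execute("CREATE INDEX idx_qc_time ON quality_control (created_at)")
conn.commit()
cur.close()
conn.close()
```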
5 Conclusion

In the traditional DCP, factors such as the complexity and tedium of the work and the low efficiency of the decoration construction personnel slow project progress and raise costs. Intelligent optimization algorithms can make full use of computer technology, network communication technology and artificial intelligence control principles to improve decoration construction quality. In the DCP, calculation is a very important link whose main function is to analyze the actual situation of the project; because of the particularity and complexity of buildings, a reasonable method must be determined according to the specific environment. This paper proposes using an intelligent optimization algorithm to design the DCT management system. The experimental tests show that the management system can realize data interaction, display and platform management.
References 1. Lejun, W., Yawu, W., Xuzhi, L., et al.: Pendubot trajectory planning and control method design based on intelligent optimization algorithm. Control Dec. 5, 1085–1090 (2020) 2. Liu, Y., Yang, J., Ding, W., Zhao, Z.: The design of intelligent warehousing training system and application research of cargo location optimization algorithm. Logistics Technol. 43(301(09)), 171–175 (2020) 3. Zhangqian, P.: Analysis on decoration construction technology management of building decoration. Arch. Eng. Technol. Des. 000(010), 771 (2017) 4. Zheng, J.: Analysis of the impact of the Internet era on the decoration construction and management of traditional residential interior decoration. Arch. Eng. Technol. Des. 000(006), 2095 (2017) 5. Xuemei, D.: Problems in the management of building decoration engineering and countermeasures for improvement. Arch. Eng. Technol. Des. 000(024), 3200 (2017) 6. Xiaomei, W.: Discussion on decoration construction technology management measures of building decoration engineering. Arch. Eng. Technol. Des. 000(024), 502 (2017) 7. Qi, C., Long, M.: The effective application of intelligence in the decoration construction management of building decoration. Arch. Eng. Technol. Des. 000(021), 3388 (2017) 8. Zengfu, F.: Optimization measures in the management of building decoration construction technology. Arch. Eng. Technol. Des. 000(012), 4513 (2017) 9. Wei, H., Yichao, S.: Multi-objective optimization design during decoration construction period based on BIM-genetic algorithm. J. Civ. Eng. Manag. 036(004), 89–95 (2019) 10. Wei, D.: Optimization measures in the management of building decoration construction technology. Arch. Eng. Technol. Des. 000(034), 1040, 1044 (2017) 11. Wei, Y., Mao, J.: An improved intelligent optimization algorithm. Electron. Measurement Technol. 41(307(23)), 29–34 (2018) 12. Xiangtian, Z.: The application of genetic algorithm in the decoration construction and optimization of university network center computer room. China Equip. Eng. 411(24), 74–76 (2018)
Construction of Power Supply Security System for Winter Olympic Venues in the Era of Intelligent Technology Jianwu Ding, Mingchun Li, Shuo Huang, Yu Zhao, Yiming Ding, and Shiqi Han(B) State Grid Beijing Electric Power Company, Beijing 100031, China [email protected]
Abstract. The 2022 Beijing Winter Olympics will open grandly with the goal of building a "green, open, intelligent, shared, technological and innovative" Olympic event. With power data at its core, a venue power supply security system based on intelligent technology is built through full access, full coverage and full integration of the Olympic-related data inside and outside the system. In applying the system technology, the power supply security issues of the venues are comprehensively sorted out and the various technical support and service measures are put in place, providing high-quality, efficient and reliable power supply security services for the Beijing Winter Olympics and a strong technical guarantee for the Winter Olympics command platform. Keywords: Intelligent Technology · Power Supply Security · Power Supply Guarantee · System Construction
1 Introduction

The smooth running of any large-scale event places high requirements on power safety protection. In July 2015, Beijing, jointly with Zhangjiakou, won the bid to host the Winter Olympics, and the eye-catching ice and snow event became a reality: 14 years after the 2008 Games, China once again joins hands with the Olympics. For a world-class event such as the Winter Olympics, the sports venues hosting the competition need a guaranteed, safe and reliable power supply. The safety of the power system is critical; otherwise huge economic losses may result [1, 2]. The current application of information technology in management has greatly improved the intelligence of information services, and since the advent of the information age the wiring of information networks has extended into every aspect of the power system [3, 4]. The construction of the Winter Olympics power supply safety system follows national and corporate network security requirements, actively builds the "Technology Winter Olympics", and, focusing on the security risks of the Winter Olympics, is designed around security zone boundaries, secure computing environments, data security and safe operation guarantees [5], so as to ensure power supply safety.
2 Construction of Venue Power Supply Security System

If the command platform of the Winter Olympics were attacked, social order and public interests would be affected: business data leakage or service unavailability may cause general damage to social order and public interests (see Table 1).

Table 1. Security risk analysis of the Winter Olympics command platform

Risk layer | Risk description | Consequence and harm | Risk level
Network level risk | Data transmission between application services and intranet applications is stolen, damaged, or tampered with | The system cannot provide normal service; sensitive information such as user data and business data is disclosed | medium
Application level risk | The application system itself has security vulnerabilities and risks of malicious attack; the interfaces between the application system and other applications are risky and deserve particular attention | The system is easily cracked and logged into to obtain data, resulting in leakage of user data, business data and other sensitive information | high
Data level risk | Data is intercepted and decoded during network transmission | User data, business data and other sensitive information are leaked | high
Data level risk | Data is illegally copied on the terminal | User data, business data and other sensitive information are leaked | medium
Terminal level risk | Login by a false system user; stealing advanced access to the system | The system cannot run properly; user data, business data and other sensitive information are leaked | medium
Host level risk | Login by a false system user; stealing advanced system access permission | The system cannot run properly; important data is lost | medium
Host level risk | Host is infected with viruses or other malicious code and cannot work properly | The system cannot run properly; user data, business data and other sensitive information are leaked | medium
Physical level risk | The equipment room suffers physical damage such as earthquake, fire or flood; the power supply is abnormal | The system is unavailable and data is lost | low
2.1 Overall Architecture The security protection framework system of the command platform of the Winter Olympics complies with the relevant requirements of national and corporate network security. In view of the security risks faced by the command platform of the Winter Olympics, it focuses on the design of business application security functions, business application interface interaction security, and other aspects (see Fig. 1 and Fig. 2).
Fig. 1. Winter Olympics command platform protection system frame
Fig. 2. Deployment structure of the Winter Olympic command platform
2.2 Safe Area Boundary

The types of boundaries include the user boundary of the management information area, the horizontal domain boundary of the management information area, the APN private network boundary of the management information area, the third-party private line boundary of the management information area, the boundary between the management information area and the Internet area, and the Internet boundary. Border access control is achieved by deploying security devices such as firewalls; firewalls, intrusion detection devices, attack source tracing systems, and APT security threat perception systems are deployed on the terminal side of the command center; port-based network access control is enabled, with ports opened according to the actual needs of each business. Desktop terminal security management systems, IP/MAC address binding and other technical means are used for access authentication and control, blocking the access of unauthorized devices; at the same time, access control, security audit, and intrusion prevention are set up in accordance with the company's unified border security requirements and other safety-related requirements (see Fig. 3).
Fig. 3. Safe Area Boundary
2.3 Secure Computing Environment

The business system includes the data center, the unified authority platform, the Winter Olympics Organizing Committee MOC, urban traffic operation information, Winter Olympics venue video cameras, the integrated energy system, the TCOS anti-external-force system, the State Grid smart vehicle networking platform, and the smart individual soldier system. Authentication is performed before an interface data connection is established; the authentication method uses a shared password, a username/password, or a digital certificate. The following requirements are mandatory: the length of a password is not less than 8 characters and should combine three or more of uppercase letters, lowercase letters, numbers, and special characters; the user name and password must not be identical; and the password should be changed at least once every 3 months. The application scope of an open interface should be limited to the minimum set of business functions required by the interconnected systems, and access of the accessing system to objects should be restricted according to the interface access control policy to achieve access control. Encryption technology is used to ensure the integrity of sensitive information such as authentication information and important business data during transmission; sensitive information is encrypted by the sender and decrypted by the receiver to achieve encrypted transmission. Log auditing of interface interaction data should be done well: audit records should include event date, time, event type, user identity, event description, and event result, and the user identity should include the user name and IP address and carry a unique identifier.

2.4 Data Security

Data security is divided into storage security and transmission security [6]. Both storage security and transmission security include confidentiality and integrity [7]. Confidentiality is protected by encryption, using symmetric encryption, asymmetric encryption, or hybrid (symmetric plus asymmetric) encryption; the key to confidentiality lies in the protection of the encryption keys. Integrity can be protected by means of digest functions, message authentication codes, digital signatures, digital certificates, etc. [8, 9]. Data is divided into password data, business application configuration data, important data and general data [10]. Password data includes internal user password data, which requires high storage confidentiality and integrity; a unified authority platform is used to ensure the confidentiality of data transmission. Business application configuration data includes important configuration data and program data: important configuration data has high storage confidentiality, transmission confidentiality and availability requirements, while program data has low requirements in all three respects. Important data includes operating data, which requires high storage confidentiality, medium transmission confidentiality, high transmission integrity, and high availability.
Ordinary data includes business data, which has low storage confidentiality requirements, low transmission confidentiality requirements, low transmission integrity requirements, and low availability requirements (see Table 2).
Table 2. Basic security indicator requirements

Category: Identity authentication and transmission security
- Passwords of the information system must not be stored in plain text and must be transmitted with SSL encryption
- The information system has the functions of weak login password verification and regular password modification
- The information system should be able to limit the number of consecutive login failures of a user account and lock the account for a short period of time
- The information system should be able to monitor and raise alarms on abnormal account logins, remote logins, frequent login failures, etc.
- The information system should support real-name account information management, including management of ID card numbers

Category: Access control
- Enforce access control on restricted resources according to the system access control policies

Category: Data security
- For data security protection requirements, security design should be carried out at the three levels of data collection, transmission and use to ensure data confidentiality, integrity and availability
Construction of Power Supply Security System
415
Fig. 4. System safety operation guarantee principle
3 Comprehensive Evaluation Model of System Information Security Level Protection Security control evaluation includes security technology evaluation and security management evaluation. Security technology evaluation includes physical security, border security, data security, host security, application security, network security and terminal security, etc. Security management evaluation includes security management system, security management organization, personnel security management, system construction management and system operation and maintenance management. 3.1 Quantification of Protection Evaluation System How to express the security level of information system closer to the real situation is the goal that many researchers pursue. After assessing the security level of information systems, people prefer to have a quantitative index in addition to qualitative summary, because people can understand quantity more easily than words. In the above comprehensive evaluation system, each evaluation unit has one or more evaluation items. Before the evaluation conclusion of the whole evaluation unit is given, the results of each evaluation item must be obtained. To this end, assessment units can be quantified by scoring them (optionally between 0 and 100) according to how well each assessment item conforms to the assessment criteria. According to the contribution of different kinds to the information security level, different weights are assigned and the comprehensive evaluation index of the system is calculated. At present, the method of weight determination mainly adopts the empirical judgment method of expert consultation. At present, weight determination has basically changed from individual experience decision to expert collective decision. In data processing, arithmetic mean value is generally used to represent the concentrated opinions of the judges. The calculation formula is as follows: m (aij )/n, (j = 1, 2, · · ·, m) (1) aj = i=1
416
J. Ding et al.
where: n is the number of judges; m is the total number of evaluation indicators; aj is the average weight of the j index; aij is the score given by the i th judge to the j th index weight. As the result of normalization is more in line with people’s understanding and usage habits, the above formula is normalized: m aj = aj / (aj ) (2) j=1
The final result represents the collective opinion of the judges. Generally speaking, the weights determined in this way can correctly reflect the importance of each indicator and ensure the accuracy of the evaluation results. 3.2 Grey Correlation Analysis Grey correlation refers to the uncertain relation between things, or the uncertain relation between system factors and principal behavior factors. According to the four principles of normalization, even symmetry, integrity and proximity, grey correlation analysis determines the correlation coefficient and correlation degree between the reference series and several comparison series, and finds out the important factors affecting the target value, so as to master the main characteristics of things. Let the reference number be listed as {x0 (t)} and the comparison number as {xi (t)}, then when t = k, the correlation coefficient ξ0i (k) between {x0 (t)} and {xi (t)} can be calculated by the following formula(k = 1, 2, · · ·, N): ξ0i (k) =
minmin|x0 (k) − xi (k)| + ρmax max |x0 (k) − xi (k)| i
i
k
k
|x0 (k) − xi (k)| + ρmax max |x0 (k) − xi (k)| i
(3)
k
where ρ is the resolution coefficient, and the smaller ρ is, the larger the resolution is. Generally, the value interval of ρ is [0, 1], and more generally, ρ = 0.5. The correlation degree between {x0 (t)} and {xi (t)} is: r0i =
1 N ξ0i (k) k=1 N
(4)
Considering that the correlation degree of different sub-items has different importance to the whole system, that is, the corresponding weight {a(i)} is assigned according to the importance of different sub-items, where M is the number of sub-items, then the comprehensive evaluation index is: A=
1 M r0i a(i) i=1 M
(5)
When the information system has been evaluated for many times, the correlation order analysis can be carried out to find out the factors that have great influence on the security level of the information system.
Construction of Power Supply Security System
417
4 New Power Supply Guarantee System Case 4.1 Break Through the Tradition The network architecture has changed from the Internet to the Internet of Things, the data information has changed from multi-source discrete to integrated collection, the system interface has changed from two-dimensional plane to three-dimensional threedimensional, and the security command mode has changed from traditional manual to digital. Based on cutting-edge technologies such as digital twin, artificial intelligence, blockchain, etc., integrate power grid information, Winter Olympics events, venue energy and other resources, develop equipment such as extreme cold tolerance, clean power generation, and intelligent epidemic prevention; develop Winter Olympics power security command platform; application Digital assurance command system. 4.2 Improve the Security System Faced with the problems of complex support environment, scattered support information, and difficulty in depth support, relying on the Winter Olympics command platform, organically integrating equipment information, building the first digital support command system of the State Grid, achieving four-level penetrating command vertically and horizontally Information exchange between the Winter Olympics Organizing Committee. It fully covers fault analysis, active response, real-time communication, unified deployment and accident handling processes, providing intelligent and active support services for Winter Olympics users, and realizing the digital transformation of the entire chain of support and command. Independent research and development of various types of equipment for extreme cold tolerance, intelligent epidemic prevention, clean power generation, etc., in response to the high-altitude extreme cold weather, the special situation of epidemic prevention and control, and the concept of green Olympics, it is the first time in China to realize the multi-coordinated configuration of various types of support equipment. The digital power supply guarantee command provides strong support. Break through 29 system data barriers, apply digital twins, knowledge maps and other technologies to realize the classification and integration of data such as power grids, venues, and events. Build the first panoramic intelligent monitoring system for power operation guarantee in Olympic history, build a panoramic model of power grid for power transmission, distribution, and realize penetrating perception of grid users’ full voltage level information, extending the security monitoring power to the “last centimeter” of users. 4.3 Comprehensive Protection of Data Security In terms of data security, the platform can desensitize sensitive fields, and has a data backup mechanism to ensure data storage and transmission through effective means such as access control, identity authentication, data encryption, disaster recovery backup/recovery, security auditing, and network protection. Security, no data security risks. In terms of host security, regularly scan the server for vulnerabilities, repair new vulnerabilities in a timely manner, and ensure that there are no vulnerabilities and no
418
J. Ding et al.
host security risks. In terms of application security, all important nodes of the system are deployed redundantly, using software load and hardware load coordination to ensure high system availability and no application security risks. In terms of network security, the system construction is carried out in accordance with the third-level requirements of information security grade protection, and the network and security equipment meet the requirements of Class Protection 2.0, and there is no network security risk. In terms of user security, each user has an independent account and password complexity requirements, user access rights are minimized, information leakage is reduced, and there is no user security risk. In terms of equipment safety, the hardware equipment adopts a modular design with a high degree of integration, and the communication interface adopts connectors with high stability such as aviation plugs and M12 plugs, which have strong communication stability. Device security risk. In terms of policy/technology development, the R&D process conducts compliance review on the use of technology patents, and applies for core patented technologies, without policy/technology development risks.
5 Concluding Remarks

The power supply for the Winter Olympics is a political task that cannot fail, and successfully completing the power supply guarantee task is the primary responsibility and ultimate goal of the company's participation in the Winter Olympics. The power supply guarantee work is very important: to ensure the safety of the power supply and achieve high-quality venue power supply security management in the era of intelligent technology, the construction of a venue power supply security system is essential. With power data at its core, through full access, full coverage and full integration of the Olympic-related data inside and outside the system, it provides a strong technical guarantee for the Winter Olympics command platform.
Multi Objective Optimization Management Model of Dynamic Logistics Network Based on Memetic Algorithm Pingbo Qu(B) Shandong Transport Vocational College, Weifang, Shandong, China [email protected]
Abstract. With the development of science and technology and the continuous improvement of professional standards in logistics, customers have ever higher requirements for service quality and timeliness. More and more enterprises regard the multi-objective optimization management model of the dynamic logistics network as an important means of improving market competitiveness and core competence. The memetic algorithm flexibly combines an evolutionary algorithm with local search algorithms; it is a flexible framework in which appropriate search strategies can be selected for different problem models to form different memetic algorithms. In this paper, the memetic algorithm and multi-objective optimization management methods are used to establish the multi-objective optimization management model of the dynamic logistics network and to carry out relevant experimental verification, with the aim of improving China's overall logistics level through this model. Keywords: Memetic Algorithm · Dynamic Logistics Network · Multi-objective Optimization Management · Model Establishment
1 Introduction

With the development of the market economy, logistics, as the "third profit source" of enterprises, has an increasingly significant impact on economic activities [1, 2]. The core link of logistics system optimization is the dynamic logistics network, which is not only an indispensable part of e-commerce activities but also an important link directly related to consumers [3, 4]. Enterprises can reduce transportation costs, improve customer satisfaction and obtain higher economic benefits by optimizing the scheduling of distribution vehicles. Seeking ways to reduce transportation costs in logistics distribution while improving customer satisfaction has therefore become an urgent problem in this field, and it is of great practical significance to study the dynamic logistics network problem [5, 6]. At present, most research on dynamic logistics assumes that customer information (location and demand), the service time required by customers, on-site service time and travel time are known before path planning. All this
information is independent of time, that is, it does not change over time [7, 8]. Under this assumption, route timing is relatively fixed; this static problem is called the static vehicle routing problem (VRP). However, many uncertainties exist in the real world: uncertain customer demand, uncertain timing of demand, the subjective preferences of route planners, traffic congestion and major failures of transport vehicles. As transportation times and routes change frequently, large amounts of uncertain traffic information may appear, and it becomes necessary to dynamically and flexibly change the planned routes of vehicles already in operation and to adjust them in a timely and appropriate manner [9, 10]. The theory and methods of static VRP are no longer applicable to these problems; solving the dynamic vehicle routing problem (DVRP) therefore requires a new set of theories and methods. Dynamic routing is closer to actual production and operation than static VRP techniques. Coupled with the rapid development in recent years of network communication systems and real-time information acquisition, processing and analysis technologies, vehicle real-time information can now be obtained and processed dynamically, realizing comprehensive real-time control and management of an entire fleet, that is, dynamic vehicle routing [11, 12]. Abroad, research on dynamic logistics began in the twentieth century, focusing on the dynamic dial-a-ride problem. The dynamic logistics problem was first defined simply and clearly as "arranging vehicle routes to meet customer demands that arrive in real time", and dynamic VRP techniques were compared with static ones. Later, many experts and scholars elaborated on dynamic logistics in their academic works. Demand modelling methods in dynamic stochastic vehicle transportation have been studied, and the resulting models have been successfully applied in many fields such as postal express delivery, product distribution and production scheduling. A parallel tabu search algorithm has been used to solve real-time dynamic logistics problems with new customer requests. A mathematical model of DVRP with time-varying travel times has been established and solved with a hill-climbing algorithm. The vehicle routing problem with stochastic demand has been studied using neuro-dynamic programming, yielding a real-time solution to the single-vehicle dynamic routing problem that meets all customer requirements. Vehicle scheduling with stochastic customers and demands, and VRP with stochastic travel times, have also been studied, and VRP under fuzzy demand information has been introduced. For the dynamic fuzzy VRP that considers all available information, a mathematical model has been constructed and a real-time heuristic algorithm designed to solve it. Using GPS, GSM and GIS technology together with shortest-path and greedy algorithms, VRP optimization under single-depot, non-full-load conditions has been successfully solved.
For the DVRP with vehicle breakdown and vehicle recall, a two-stage solution strategy has been designed: "make an overall optimal distribution plan, then perform real-time local optimal scheduling". For time-varying traffic information and customer orders, a time-dependent dynamic path planning model has been constructed and solved by a genetic algorithm.
With the development of the country, more and more super high-rise buildings will appear in people’s lives. Therefore, the seismic safety of super high-rise buildings is particularly important. The rational use of damping and isolation technology to improve the safety, durability and applicability of super high-rise buildings is the direction of future design. As a new design method in the field of structural earthquake resistance, damping and isolation design breaks through the limitations of traditional seismic design, and has been gradually applied to practical engineering in recent years. Taking high-rise building as an example, combined with Memetic algorithm, this paper discusses the possibility of its application in practical engineering.
2 Proposed Method

2.1 Logistics Distribution

Logistics distribution is an important link in the logistics system. It refers to the selection, processing, packaging and combined-distribution operations carried out according to customer needs within an economically reasonable area, and the delivery of the goods to the user's location. Figure 1 shows the basic process of logistics and distribution.
(Flow-chart elements: factory, stored goods, order start, storage, order processing, distribution, loading, end of order, customer)
Fig. 1. Basic process of logistics and distribution
Distribution is a small-scale, comprehensive activity that combines "matching" and "delivering" to meet customer needs. The modern logistics system is not simply the provision of goods from suppliers to customers; it also comprises warehousing, transportation, distribution, processing and other activities. In this process, how to integrate transportation with distribution, organize a professional division of labor and allocate resources reasonably has become an important part of logistics distribution. As the flow chart shows, a reasonable distribution scheme includes the loading of goods onto vehicles, the selection of routes, and so on.

2.2 Memetic Algorithms

The memetic algorithm (MA) is a hybrid search algorithm that combines traditional evolutionary algorithm (EA) search techniques with local search (LS) techniques [13, 14]; it is usually called genetic local search. With the continuous development of evolutionary computation, MA has been continuously enriched and developed. Its implementation can be divided into the following three parts:

(1) Memetic algorithm framework. The framework model of the memetic algorithm is:
1. Select the global search strategy;
2. Select the local search strategy;
3. Initialize the population;
4. Apply an appropriate algorithm for global search over the population;
5. Apply an appropriate algorithm for local search on individuals;
6. Perform the population update operation;
7. Judge the termination condition.

(2) Basic flow of the memetic algorithm
The memetic algorithm is based on evolutionary algorithm theory: local search is applied to the individuals produced by the evolutionary algorithm so as to approach the optimal solution. First, the population model is initialized to generate random solutions; the optimal solution then emerges after several iterations. In each iteration, a new population is generated through the global search strategy, and the individuals of the population are then searched locally, so as to improve the updated solutions.

(3) Basic flow chart of the memetic algorithm (see Fig. 2)
(Flow: initial population → fitness calculation → global search by evolutionary algorithm → local search → new population and fitness calculation → termination check → output of the optimal solution)
Fig. 2. The basic flow chart of Memetic algorithm
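To make the framework and the flow in Fig. 2 concrete, the following minimal sketch implements the memetic loop for a small tour-planning problem (a simplification of the routing problems discussed here): an evolutionary global search (crossover plus swap mutation) combined with a 2-opt local search, the same local operator used later in this paper. The problem form, parameter values and Python realization are our own illustration, not the program used in the experiments.

import random

def tour_length(tour, dist):
    # total length of a closed tour over a distance matrix
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    # local search: reverse a segment whenever doing so shortens the tour
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

def memetic(dist, pop_size=20, generations=50):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]  # initialize population
    for _ in range(generations):
        a, b = random.sample(pop, 2)                  # global search: crossover ...
        cut = random.randrange(1, n)
        child = a[:cut] + [g for g in b if g not in a[:cut]]
        i, j = random.sample(range(n), 2)             # ... plus swap mutation
        child[i], child[j] = child[j], child[i]
        child = two_opt(child, dist)                  # local search on the new individual
        worst = max(pop, key=lambda t: tour_length(t, dist))
        if tour_length(child, dist) < tour_length(worst, dist):
            pop[pop.index(worst)] = child             # population update
    return min(pop, key=lambda t: tour_length(t, dist))

The global operators widen the search while 2-opt refines each new individual, which is exactly the division of labor between steps (1) and (2) described above.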
2.3 Multi-objective Optimization Method

Multi-objective programming is both an important part of the scientific methodology of modern management systems and a basic mathematical tool for modern system decision-making and analysis. A multi-objective programming problem is essentially a multi-objective system decision problem expressed in the specific form of a mathematical programming problem. With the first large-scale international seminar on multi-objective decision-making held in the United States and the publication of a series of monographs and papers, research on multi-objective decision-making began to be carried out
rapidly and vigorously, and a number of valuable theoretical results have since been accumulated. The basic solution methods for multi-objective programming models include the weighting method, the constraint method, and so on. The weighting method assigns a weight to each objective function and converts the multiple objectives into a single linear objective function; it is often used to find the set of non-inferior solutions. The constraint method retains one of the most important objectives (or any one objective) among the q objectives and treats the rest as constraints:

min z(x) = f1(x)  s.t.  fi(x) ≤ εi,  i = 2, 3, …, q  (1)

x ∈ D = {x ∈ E(n) | gj(x) ≥ 0,  j = 1, 2, …, n}  (2)
By letting the bounds εi take all possible values in the objective constraints, the constraint method can approximate the set of all non-inferior solutions of the multi-objective decision-making problem.
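As a simple illustration of formulas (1) and (2), the sketch below applies the constraint method to a toy bi-objective problem over a one-dimensional feasible set; the objective functions and ε values are our own example, not taken from the paper. For each bound ε, f2 is treated as a constraint and f1 is minimized over the remaining candidates, tracing out non-inferior solutions.

def constraint_method(candidates, f1, f2, epsilons):
    # for each bound eps: minimize f1(x) subject to f2(x) <= eps
    solutions = []
    for eps in epsilons:
        feasible = [x for x in candidates if f2(x) <= eps]
        if feasible:
            solutions.append(min(feasible, key=f1))
    return solutions

# toy problem: x in [0, 5], f1(x) = x**2, f2(x) = (x - 4)**2
xs = [i / 100 for i in range(501)]
front = constraint_method(xs, lambda x: x ** 2, lambda x: (x - 4) ** 2,
                          epsilons=[16, 9, 4, 1])
print(front)   # solutions move toward x = 4 as the bound on f2 tightens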
3 Construction of Multi-objective Optimization Model of Dynamic Logistics Network Based on Memetic Algorithm

3.1 Problem Description of Multi-objective Optimization Model of Dynamic Logistics Network

In general multi-objective optimization research on dynamic logistics networks, it is usually assumed that the relevant states or information, such as customer demand and traffic conditions, are known and fixed; this is the static VRP. In practice, customer demand and road conditions change constantly during distribution, and the optimal choice in the multi-objective optimization of the dynamic logistics network changes with them; this is the dynamic VRP. The transportation network has a great impact on distribution cost. In the static VRP, distribution and transportation cost is related only to distribution distance, which is too far removed from reality. In the dynamic VRP, especially for goods distribution within cities, weather changes, special festivals, traffic congestion and other factors directly affect traffic volume and vehicle speed, changing the multi-objective optimization cost of the dynamic logistics network; the optimal logistics path therefore changes over time.

3.2 Model Assumptions

In order to better frame the research purpose, content and model of this paper, the following assumptions are made in constructing the model:

(1) The running speed of cold chain distribution vehicles changes with time;
(2) Vehicle speed is related only to routine traffic congestion, excluding special holidays, traffic control and other unconventional factors;
(3) The transportation time between customer points is related only to the distance between stations and the driving speed, ignoring the time consumed by intermediate receiving and unloading of freight.
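Assumption (1), speed varying with time, is commonly modelled with a piecewise-constant speed profile over the day; the sketch below (our illustration, with made-up peak and off-peak speeds) computes the travel time for a given departure time by stepping through the profile.

# hour-of-day intervals mapped to speeds in km/h; the values are illustrative
SPEED_PROFILE = [(0, 7, 50), (7, 9, 20), (9, 17, 40), (17, 19, 20), (19, 24, 50)]

def speed_at(hour):
    for start, end, v in SPEED_PROFILE:
        if start <= hour % 24 < end:
            return v
    return 50  # fallback; not reached with the profile above

def travel_time(depart_hour, distance_km, step=0.01):
    # advance in small time steps (hours), accumulating distance at the current speed
    t, covered = depart_hour, 0.0
    while covered < distance_km:
        covered += speed_at(t) * step
        t += step
    return t - depart_hour

print(travel_time(8.0, 30.0))    # a morning-peak departure takes noticeably longer
print(travel_time(12.0, 30.0))

With such a profile, the cost of an arc depends on when it is traversed, which is what distinguishes the dynamic model from the static VRP.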
4 Discussion

4.1 Analysis of System Experimental Results

In this paper, the memetic algorithm for the multi-objective optimization model of the dynamic logistics network is simulated in MATLAB R2014b. To fully verify the real performance of the memetic algorithm, it is compared with QGA and GA. All algorithms are implemented in MATLAB; each program is run independently and randomly at least 20 times, with a maximum of 100 iterations per run. Since there is no standard test-case library for the dynamic vehicle scheduling problem, a set of data was randomly generated, consisting of one parking lot (depot) and 17 customers with actual demand. The maximum allowable vehicle load is about 30 t, and the parking lot coordinates are [18.70, 15.29]. At time t, three new customers are randomly generated and the demand of two original customers changes. The specific data are shown in Table 1 and Fig. 3.
Table 1. Parking lot and static customer information

Serial number   Abscissa   Ordinate   Demand
1               18.71      15.27      0.0
2               16.46      8.43       3.0
3               20.06      10.13      2.5
4               19.37      13.38      5.5
5               25.25      14.22      3.0
6               22.01      10.06      1.5
7               25.44      17.09      4.0
8               15.76      15.13      2.5
9               16.61      12.36      3.0
10              14.04      18.13      2.0
11              17.52      17.39      2.5
12              23.51      13.43      3.5
13              19.48      18.11      3.0
14              22.13      12.53      5.0
15              11.24      11.06      4.5
16              14.16      9.78       2.0
17              24.01      19.80      3.5
18              12.24      14.54      4.0
Fig. 3. Parking lot coordinates and customer information statistics
It can be seen from the chart that the memetic algorithm is superior to QGA and GA in terms of the best, worst and average values. When solving dynamic logistics problems, the effect of the adaptive quantum rotation gate update mechanism in the memetic algorithm is more obvious. Adding a mutation operation to the memetic algorithm maintains population diversity and thus broadens the search for solutions; adding the 2-opt and swap methods to the local search allows high-quality solution regions to be searched more carefully. The multi-objective optimization model of the dynamic logistics network based on the memetic algorithm shows that
the improved global search and effective local search can obtain high-quality solutions for the multi-objective optimization model of the dynamic logistics network.
5 Conclusions

The location of logistics facilities is a key link in logistics system planning; an enterprise's logistics network should be arranged reasonably so as to improve logistics efficiency and reduce logistics costs. In actual logistics services, customer needs often change with time and space, so to serve customers better, logistics managers must break through existing service limitations and cooperate across multiple agencies. This paper studies the multi-objective optimization model of the dynamic logistics network based on the memetic algorithm. It first introduces the research background and significance of dynamic logistics problems in today's society, then summarizes the state of research, focusing on the optimization methods used in this paper, namely the memetic algorithm and the multi-objective optimization method. Combining the two, a dynamic logistics network model is constructed to reduce the number of vehicles and the cost. Subject to the time windows and the vehicle load capacity, customer-node reset and the 2-opt method are adopted for local optimization in order to search high-quality solution regions carefully. Finally, simulation experiments verify the effectiveness of the multi-objective optimization model of the dynamic logistics network based on the memetic algorithm.
References 1. Kimms, A., Maiwald, M.: An exact network flow formulation for cell-based evacuation in urban areas. Nav. Res. Logist. 64(7), 547–555 (2017) 2. Ewedairo, K., Chhetri, P., Jie, F.: Estimating transportation network impedance to last-mile delivery. Int. J. Logist. Manag. 29(1), 00 (2018) 3. Asgari, N., Rajabi, M., Jamshidi, M., Khatami, M., Farahani, R.Z.: A memetic algorithm for a multi-objective obnoxious waste location-routing problem: a case study. Ann. Oper. Res. 250(2), 279–308 (2016). https://doi.org/10.1007/s10479-016-2248-7 4. Dulebenets, M.A.: A novel memetic algorithm with a deterministic parameter control for efficient berth scheduling at marine container terminals. Maritime Bus. Rev. 2(4), 302–330 (2017) 5. Patil, S., Kulkarni, S.: Mining social media data for understanding students’ learning experiences using memetic algorithm. Mat. Today Proc. 5(1), 693–699 (2018) 6. Vijayaraju, P., Sripathy, B., Arivudainambi, D., et al.: Hybrid memetic algorithm with twodimensional discrete Haar wavelet transform for optimal sensor placement. IEEE Sens. J. 17(7), 2267–2278 (2017) 7. Pereira, J., Ritt, M., Vasquez, O.C.: A memetic algorithm for the cost-oriented robotic assembly line balancing problem. Comput. Oper. Res. 99, 249–261 (2018) 8. Pandiaraj, K., Sivakumar, P., Jeyaprakash, K., et al.: Thermal analysis for power efficient 3D IC routing using memetic algorithm based on hill climbing algorithm. J. Green Eng. 11(1), 998–1010 (2021)
9. Chauhan, N.R., Tripathi, S.P.: Optimal admission control policy based on memetic algorithm in distributed real time database system. Wirel. Pers. Commun. 117(2), 1123–1141 (2020). https://doi.org/10.1007/s11277-020-07914-x 10. Nadizadeh, A., SabzevariZadeh, A.: A bi-level model and memetic algorithm for arc interdiction location-routing problem. Comput. Appl. Math. 40(3), 1–44 (2021). https://doi.org/ 10.1007/s40314-021-01453-2 11. Behnamian, J., Afsar, A.: Optimization of total lateness and energy costs for heterogeneous parallel machines scheduling using memetic algorithm. J. Manage. Stud. 18(58), 29–57 (2020) 12. Ozkan, O.: Wireless sensor deployment on 3-D surface of moon to maximize coverage by using a hybrid memetic algorithm. Uluda˘g Univ. J. Faculty Eng. 25(1), 303–324 (2020) 13. Tian, Y., et al. Evolutionary large-scale multi-objective optimization: a survey. ACM Comput. Surv. (CSUR) 54(8), 1–34 (2021) 14. Abdollahzadeh, B., Gharehchopogh, F.S.: A multi-objective optimization algorithm for feature selection problems. Eng. Comput. 38(3), 1845–1863 (2022)
Design of Intelligent Logistics Monitoring System Based on Data Mining Technology Qiuping Zhang1 , Meng Wang1(B) , and Pushpendra Shafi2 1 International Trade, Harbin University of Commerce, Harbin, Heilongjiang, China
[email protected] 2 Prince Sattam Bin Abdul Aziz University, Al-Kharj, Saudi Arabia
Abstract. In the data age, Internet technology has made a qualitative leap and the field of intelligent logistics has risen rapidly. Great efforts have been made to monitor the smart logistics environment and strengthen the management of the smart logistics industry: data mining technology has been introduced, its various methods used to process massive data, the potentially valuable information behind the data discovered, new business models created, and existing logistics service capacity optimized and restructured. For a relatively new industry like smart logistics, the design of an overall logistics monitoring system is inseparable from the support of the Internet of Things; effective data mining can find problems in the logistics process and support the study of logistics monitoring. In recent years, data mining technology has been highly sought after, and logistics monitoring systems have achieved remarkable results, which suggests a bright future for the smart logistics industry. Keywords: Data Mining Technology · Smart Logistics · Monitoring System Design
1 Introduction

The continuous innovation of Internet technology has provided an opportunity for the rise of the logistics industry, and smart logistics has emerged in this environment. At present, demand in the smart logistics industry is concentrated in four aspects: logistics data, the logistics cloud, logistics technology and logistics models. From 2016 to 2019, China's smart logistics industry developed markedly, reflected in the growth rate of its market scale; in 2020 the market size exceeded 571 billion yuan, and the scale of the smart logistics market has continued to expand. In reality, however, it is common for enterprises to neglect data, pay far too little attention to data mining, and fail to realize its importance. In the field of intelligent logistics, as network information technology matures, the importance of data mining for enterprises is becoming increasingly prominent: enterprises need data mining technology to find new business models in massive data and thereby realize effective monitoring and governance. This paper uses the data mining techniques practicable at this stage to monitor and control smart logistics.
2 Data Mining Technology

Nowadays, the stock of data is growing explosively, and data mining came into being; it is widely used in the field of business information processing.

2.1 Concept of Data Mining Technology

How should data mining technology be understood? Simply put, it mines the underlying information behind the useful data in a database. Now that artificial intelligence is everywhere, data mining has become a hot topic. The data mining process integrates the professional content of multiple disciplines, such as statistics and machine learning, as shown in Fig. 1 [1]. Through data mining technology, enterprise data are summarized and analyzed and potentially valuable information is extracted, which helps enterprise managers change business models, adjust corporate strategy to the actual situation, and make correct decisions in line with market conditions [2].
Fig. 1. Data mining integrates the knowledge and technology of other disciplines
2.2 Analysis Method of Data Mining Technology

Society is always progressing, all walks of life are rising rapidly, and the amount of enterprise data keeps growing. In this context, improving the efficiency of data processing requires the various analysis methods of data mining technology. Data mining methods commonly used in data analysis include feature analysis, correlation analysis, deviation analysis and neural network analysis; they mine data from different angles [3].
First, feature analysis. The feature analysis method classifies data according to their characteristics, so as to facilitate efficient subsequent processing. The specific approach is to classify the data virtually by computer, and then deeply mine the classified data according to the characteristics of the required data, so as to obtain valuable information. Cluster analysis most commonly uses the K-means algorithm, which is constructed as follows.

Input: the number of clusters k and a data set containing n objects X = {x1, x2, …, xn}.
Output: k clusters {S1, S2, …, Sk} minimizing the objective function (the squared error criterion).

i) Select the number of clusters k;
ii) Randomly select k objects from the data set as the initial clustering centers c1, c2, …, ck;
iii) Assign each object xi (i = 1, 2, …, n) to the nearest clustering center cj (1 ≤ j ≤ k) according to the Euclidean distance ‖xi − cj‖ = (Σt=1..m (xit − cjt)²)^(1/2), where m is the number of data attributes.
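A minimal pure-Python rendering of steps i)–iii), iterated until the centers stabilize, is given below; it is our sketch of the textbook procedure (real systems would use an optimized library implementation), and the data are assumed to be numeric tuples.

import math
import random

def kmeans(points, k, iters=100):
    centers = random.sample(points, k)                  # step ii): initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                                # step iii): assign to nearest center
            j = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[j].append(p)
        new_centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[j]
                       for j, cl in enumerate(clusters)]
        if new_centers == centers:                      # centers stable: stop
            break
        centers = new_centers
    return centers, clusters

centers, clusters = kmeans([(1, 1), (1.2, 0.8), (8, 8), (7.9, 8.2)], k=2)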
It is assumed that X and Y are itemsets and that the rule X → Y exists over a transaction set T. δ(X) denotes the probability that X occurs. Using the joint occurrence of X and Y, the support of the rule is defined as

S(X → Y) = δ(X ∪ Y) / |T|  (1)
If X holds, the probability that Y also holds is expressed as the confidence C(X → Y):

C(X → Y) = δ(X ∪ Y) / δ(X)  (2)
Finally, formulas (1) and (2) are used to identify strong association rules and thereby determine the optimal scheme of the civil engineering optimization design [16, 17]. The optimization process of civil engineering is complex. First, the association rules of frequent items in civil engineering are judged: formulas (1) and (2) are used to calculate the support and confidence of each frequent civil engineering item [18]. After the calculation, the minimum support and minimum confidence thresholds are determined, and finally the best civil engineering optimization scheme is selected [19].
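A direct reading of formulas (1) and (2) over a small transaction set is sketched below; the transactions, items and thresholds are our own illustrative values.

def support(itemset, transactions):
    # S(X) = |{t in T : X is a subset of t}| / |T|
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(x, y, transactions):
    # C(X -> Y) = S(X union Y) / S(X), per formula (2)
    return support(x | y, transactions) / support(x, transactions)

T = [{"cement", "rebar"}, {"cement", "sand"},
     {"cement", "rebar", "sand"}, {"sand"}]
x, y = {"cement"}, {"rebar"}
min_support, min_confidence = 0.4, 0.6
if support(x | y, T) >= min_support and confidence(x, y, T) >= min_confidence:
    print("strong rule:", x, "->", y)

A rule is kept as a strong association rule only when both thresholds are met, which is the judgement step described above.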
4 Investigation and Research Analysis of Application of ARM Algorithm in Civil Engineering Optimization Design

The Spark platform and the Apriori algorithm were used to test the effect. The tests were run on a cluster with 4 nodes: 1 Master node and 3 Worker nodes. The CPU is an Intel Core i5-4709HQ with a processing rate of 752.3 MB/s. The Master has 128 GB of storage and each Worker 256 GB. The Spark version is 1.8.2, the operating system is CentOS 6.0, the Java environment is JDK 1.7.0_141, and Scala 2.11.0 is used. The software combination is IDEA + Spark, and the programs are implemented in Scala.
The test uses software-generated data sets with a total capacity of 100 GB, recording real-time transactions of an online mall. Data sets with capacities of 3, 6, 12, 24 and 48 million records were used, and each test was repeated several times to ensure the accuracy of the results. The running times are shown in Table 1.

Table 1. Time taken by the Apriori algorithm and the traditional model

Records (×10,000)   300    600    1200    2400    4800
Apriori algorithm   33 s   42 s   49 s    58 s    70 s
Traditional model   47 s   75 s   112 s   150 s   217 s
Table 1 records the time required by the Apriori algorithm and by the traditional model to process each data volume in this application of the ARM algorithm to civil engineering optimization design. For the same amount of data, the Apriori algorithm takes less time and processes the data more efficiently, while the traditional model takes more time.
Fig. 3. Time taken by the Apriori algorithm and the traditional model
As can be seen from the trend in Fig. 3, the processing time under both algorithms increases with the number of records; however, the time grows more slowly under the association rule algorithm. At 48 million records, the processing time under the traditional algorithm peaks at 217 s, while the corresponding time under the association rule algorithm is only 70 s, more than three times shorter. Processing the same data volumes therefore shows that the association rule algorithm is more efficient, which supports the choice of the association rule algorithm in this paper. The test shows that the ARM algorithm offers clear advantages in civil engineering optimization design.
5 Conclusions

A more scientific approach to civil engineering structural design can absorb the strengths of existing practice and improve on them. At present, civil engineering structural design is becoming more reasonable and optimized through the efforts of many scholars and researchers. However, a very important problem remains: engineering structural design often neglects the integrity of the project and easily overlooks many details, while the design of section loads and load-bearing sections lacks innovation. The civil engineering optimization design scheme presented in this paper effectively addresses many of these defects and improves the level of civil engineering optimization; the application of the ARM algorithm raises the level of civil engineering optimization design. However, this paper does not provide much evidence on how the association rule mining algorithm is applied to civil engineering in practice, which will be examined in further research. In addition, the data volumes used in the experiment are limited, so more experimental control groups should be set up for comparison to make the results more convincing.
References 1. Heraguemi, K.: Whale optimization algorithm for solving ARM issue. IJCDS J. 10(1), 332– 342 (2021) 2. Arsyad, Z.: Text mining menggunakan generate association rule with weight (garw) algorithm untuk analisis teks web crawler. Int. (Inf. Syst. J.) 2(2), 153–171 (2020) 3. Tanantong, T., Ramjan, S.: An arm approach to discover demand and supply patterns based on thai social media data. Int. J. Knowl. Syst. Sc. 12(2), 1–16 (2021) 4. Senthilkumar, A.: An efficient FP-Growth based ARM algorithm using Hadoop MapReduce. Indian J. Sci. Technol. 13(34), 3561–3571 (2020) 5. Hossain, T.M., Watada, J., Jian, Z., et al.: Missing well log data handling in complex lithology prediction: an nis apriori algorithm approach. Int. J. Innov. Comput. Inf. Control: IJICIC 16(3), 1077–1091 (2020) 6. Velmurugan, T., He, M.B.: Mining implicit and explicit rules for customer data using natural language processing and apriori algorithm. Int. J. Adv. Sci. Technol. 29(9), 3155–3167 (2020)
7. Soni, A., Saxena, A., Bajaj, P.: A methodological approach for mining the user requirements using Apriori algorithm. J. Cases Inf. Technol. 22(4), 1–30 (2020) 8. Saxena, A., Rajpoot, V.: A comparative analysis of ARM algorithms. IOP Conf. Ser. Mater. Sci. Eng. 1099(1), 012032 (2021) 9. Derouiche, A., Layeb, A., Habbas, Z.: Metaheuristics guided by the Apriori principle for ARM: case Study - CRO Metaheuristic. Int. J. Organiz. Collect. Intell. 10(3), 14–37 (2020) 10. Lee, E.H.: Application of self-adaptive vision-correction algorithm for water-distribution problem. KSCE J. Civ. Eng. 25(3), 1106–1115 (2021) 11. Kaveh, A., Zaerreza, A.: Shuffled shepherd optimization method: a new Meta-heuristic algorithm. Eng. Comput. 37(7), 2357–2389 (2020) 12. Koshevyi, O., Levkivskyi, D., Kosheva, V., et al.: Computer modeling and optimization of energy efficiency potentials in civil engineering. Strength Mater. Theory of Struct. 106, 274– 281 (2021) 13. Abdulhadi, M., Xun’an, Z., Fan, B., Moman, M.: Substructure design optimization and nonlinear responses control analysis of the mega-sub controlled structural system (MSCSS) under earthquake action. Earthq. Eng. Eng. Vib. 20(3), 687–704 (2021). https://doi.org/10.1007/s11 803-021-2047-2 14. Ashraf, N., Khafif, M.E., Hanna, N.F.: Cost optimization of high-speed railway prestressed box girder bridge. Int. J. Civil Eng. Technol. 11(4), 91–105 (2020) 15. Adebayo, O.S., Aziz, N.A.: Improved malware detection model with Apriori association rule and particle swarm Optimization. Secur. Commun. Netw. 2019(6), 1–13 (2019) 16. Malu, S., Bairwa, K.N., Kumar, R.: Design engineering optimization of process parameters affecting the surface roughness in sand casting of Al6063 alloy using design of experiments. Design Eng. (Toronto) 7, 8128–8137 (2021) 17. Xiao, W., Liu, Q., Zhang, L., et al.: A novel chaotic bat algorithm based on catfish effect for engineering optimization problems. Eng. Comput. 36(5), 1744–1763 (2019) 18. Dey, B., Raj, S., Mahapatra, S.: A novel ameliorated Harris hawk optimizer for solving complex engineering optimization problems. Int. J. Intell. Syst. 36(12), 7641–7681 (2021) 19. Lee, S.J.: An auto-tuning method of the population size in differential evolution for engineering optimization problems. Kor. J. Comput. Design Eng. 25(3), 287–297 (2020)
The Application of Digital Construction Based on BIM Technology in Housing Complex Yundi Peng1 and Lili Peng2(B) 1 Chongqing Qibang Construction Engineering Co., Ltd., Chongqing, China 2 Chongqing College of Architecture and Technology, Chongqing, China
[email protected]
Abstract. The construction industry is facing huge changes, as traditional construction methods are gradually replaced by new technologies and methods. In this context, BIM technology is the key to this change. Based on digital construction with BIM technology, this paper studies its application in housing construction complexes and optimizes their design and construction schemes, thereby saving cost and construction time and creating greater social and economic benefits, with a view to providing a reference and basis for the implementation and promotion of BIM technology. Keywords: BIM Technology · Digitization · Construction · Housing Complex
1 Introduction

At present, China's national economy is in a stage of rapid development, and the application of new technologies has become a principal means of market competition among enterprises in every industry, the construction industry included. In the "Twelfth Five-Year Plan" for the development of the construction industry, BIM technology was promoted and applied as an important new technology. BIM technology has broad application prospects for all parties involved in construction projects and has been incorporated into the strategic vision of various large construction companies [1]. BIM technology is a revolution in the construction industry: it makes informatized, process-oriented and refined management of construction projects possible, and it will be continuously improved to make buildings more valuable.
2 Overview of BIM

BIM in architectural design realizes the informatization of architectural design. BIM technology has multiple functions: it can improve the level of architectural design and integrate design content from multiple angles. Its purpose is to improve the accuracy of residential design, using BIM to build models and avoid errors in architectural design.
BIM, in full Building Information Modeling, refers to a "visualized" digital building model that provides personnel in all links of a project, from design and construction to operation and property maintenance, with a scientific collaboration platform for "simulation and analysis", helping them design, construct and operate projects using 3D digital models (see Fig. 1). Its ultimate purpose is to enable the entire engineering project to establish effective resource plans in all stages of design, construction and use, realizing full life-cycle management [1]. The introduction of BIM technology can provide the various basic data required in the construction of engineering projects, assisting decision-making in construction project management.
Fig. 1. Project life-cycle information based on BIM
3 The Use of Building Information Modeling (BIM) in Architectural Design

3.1 The Integration of Design and BIM Technology in the Early Stages of a Project

The early design process is complex, and BIM technology supports it throughout. The designer gathers building-related parameters and assesses the construction site; BIM is used to create a virtual model, which is compared with actual needs to identify design flaws and make adjustments. This leads to a more suitable design plan and ensures standardization in construction. BIM also helps maintain building safety and stability and reduces later design changes [2].
One of the key benefits of using BIM in the early stages of design is the ability to visualize the building in 3D. This allows architects, engineers and contractors to understand the design more fully, identify potential problems and make necessary changes before construction begins. BIM also makes virtual walk-throughs of the building possible, helping to identify potential issues and improve the overall design. Another advantage of BIM is improved collaboration between all stakeholders in the project: by sharing information on a centralized digital platform, all parties can access up-to-date information and make informed decisions, improving communication, reducing errors and leading to a smoother and more efficient construction process. BIM can also be used to create detailed cost estimates, schedules and construction plans, allowing project managers to understand the cost and schedule implications of different design options and select the best design for the project. Applying BIM in the early stages of a design project thus greatly improves the overall process and results in a more efficient and effective construction phase, providing improved visualization, collaboration, and cost and schedule planning for all stakeholders.

3.2 Enhancing Building Design with BIM Technology: Analysis of Parametric Design Improvement

The design of a building involves various factors, such as the size of residential spaces and the water supply and drainage systems; these factors affect the building's construction and safe use. BIM technology helps refine design parameters, checking and improving them as needed, and provides information and intelligent processing options. Architects can correct incorrect parameters and verify accuracy. BIM technology offers an architectural design database that can be updated and used during construction, ensuring that the BIM parameters meet the building's design requirements [3, 4]. The checking formula for precast concrete is

nj · Vmax ≤ VkE  (1)

The shear bearing capacity of the joint under the design condition is then as follows. When the prefabricated column is under compression,

VkE = 0.8N + 1.65·AiE·√(fi·fy)  (2)

and when the prefabricated member is in tension,

VkE = 1.65·AiE·√(fi·fy)·[1 − (N / (AiE·fy))²]  (3)

In the above formulas, Vmax is the shear force to be resisted, VkE is the shear bearing capacity of the joint, N is the axial force on the member, AiE is the cross-sectional area of the precast member, and fi and fy are the strength values of the concrete and the reinforcement, respectively.
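As a numerical illustration of the design check, the function below evaluates formulas (1)–(3); the inputs must be in consistent units, and the example numbers are hypothetical rather than taken from any real member.

import math

def shear_capacity(N, A_iE, f_i, f_y, in_tension=False):
    # V_kE per formula (2) (compression) or formula (3) (tension)
    base = 1.65 * A_iE * math.sqrt(f_i * f_y)
    if in_tension:
        return base * (1.0 - (N / (A_iE * f_y)) ** 2)
    return 0.8 * N + base

def joint_ok(n_j, V_max, **member):
    # design condition of formula (1): n_j * V_max <= V_kE
    return n_j * V_max <= shear_capacity(**member)

# hypothetical member: forces in kN, area in m2, strengths in kN/m2
print(joint_ok(1.0, 250.0, N=400.0, A_iE=0.05, f_i=14.3e3, f_y=360e3))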
3.3 Strategies for Pre-Embedded Part Layout Design in Prefabricated Buildings

In designing prefabricated buildings, it is important to consider the placement of embedded parts, and BIM technology ensures precise control of the shape, size and location of these parts. For instance, the layout of prefabricated columns can be customized to fit specific needs, the height of steel plates and the connections between columns and walls must be set properly, and wall panel connectors should be adjusted based on global parameters. Hook components can be called up and offset so that reinforcement is placed properly and accurately during construction [3].

3.4 BIM: Empowering Dynamic Modeling for Buildings

The BIM model plays a crucial role in architectural design: it is used to monitor the building's design and catch any problems that arise. During construction, actual progress is compared with the BIM model to identify deviations; when deviations are found, the cause is determined and the construction process adjusted. The BIM model also helps calculate the workload and material consumption of the building's construction, and any differences between the simulation and actual construction can be detected and addressed (see Fig. 2). In BIM, objects and systems in the building are represented as intelligent 3D elements, each with its own properties and data. This enables real-time updates and simulations and allows the analysis of different design options and construction scenarios. By using BIM, professionals can identify potential issues and make informed decisions before construction begins, reducing the risk of costly rework and delays. BIM can also be used to manage facilities and maintain the building over time, making it a valuable tool throughout the entire building lifecycle.
Fig. 2. Workflow management system structure
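The deviation check described in Sect. 3.4 can be sketched as a comparison of planned (model) and reported (site) quantities per element; the element IDs, quantities and tolerance below are hypothetical.

PLANNED = {"wall-A1": 35.2, "slab-B2": 48.0, "column-C3": 12.5}  # m3 from the BIM model
ACTUAL = {"wall-A1": 35.2, "slab-B2": 41.5, "column-C3": 12.5}   # m3 reported on site

def deviations(planned, actual, tol=0.05):
    # flag elements whose built quantity deviates more than tol from the model
    return {eid: (planned[eid], actual.get(eid, 0.0))
            for eid in planned
            if abs(actual.get(eid, 0.0) - planned[eid]) > tol * planned[eid]}

print(deviations(PLANNED, ACTUAL))   # -> {'slab-B2': (48.0, 41.5)}

Flagged elements are then traced back to their cause and the construction process adjusted, as described above.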
3.5 BIM Participates in the Spatial Planning and Design of Buildings

Space planning is crucial, and proper planning enhances the building's value. BIM helps integrate the various factors involved in space planning, such as the building's location, floor area and floor structure, and displays them in a single model. This allows designers to understand the relationships between the different factors better and ensures a rational space plan. BIM technology also provides a visual representation of the design and allows easy adjustment (see Table 1). It ensures the feasibility of the space plan and helps avoid conflicts during construction [5].

Table 1. Component statistics from BIM-based planning

Component type                                                     Volume
Ordinary masonry wall                                              3560.4
100 thick prefabricated wall                                       721.6
200 thick prefabricated wall                                       1658.7
Beam                                                               2013.1
Column shear wall                                                  3659.8
Prefabricated floor slab                                           2118.7
Reinforced cement slab                                             1217.3
Total volume of prefabricated components                           4564.6
Total volume of non-prefabricated components                       10577.7
Total volume of all components                                     15142.3
Single prefabricated assembly rate                                 30.20%
Total volume of all concrete components                            9094.9
Total volume of precast concrete                                   2119.7
Proportion of prefabricated components in industrial production    70.20%
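The assembly-rate figure in Table 1 follows directly from the volume rows; a quick check using the table's own totals (assuming the volumes share one unit) is shown below.

total_prefab = 4564.6   # total volume of prefabricated components (Table 1)
total_all = 15142.3     # total volume of all components (Table 1)
rate = total_prefab / total_all
print(f"single prefabricated assembly rate: {rate:.2%}")   # ~30.1%; Table 1 lists 30.20%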
4 Application of BIM Digital Construction in the Design Stage of Housing Complex

4.1 Project Planning Stage

BIM technology can meet the owner's need to maximize market revenue in the project planning stage. The owner can not only intuitively understand the style and
characteristics of the project plan through the BIM model, but can also communicate conveniently with the designer and conduct site analysis, sunshine analysis, pedestrian-flow analysis and a series of building-performance evaluations, thereby informing the selection of design strategies. BIM can help the owner understand the general situation of the whole life cycle, minimizing the risks caused by blind investment [5].

4.2 Architectural Concept Design

The rapid acquisition of analysis data through the building mass model provides a basis for scheme optimization and structure type selection. In the early design stage, the BIM model is connected with subsequent software through appropriate file formats for scheme analysis and optimization: indoor lighting and wind environment analysis, acoustic environment analysis, smoke diffusion simulation, personnel evacuation simulation, energy consumption statistics and so on. The window-to-wall ratio, shading form and orientation of the volume model can be adjusted, and the energy consumption of different volume models calculated through related software, so as to select the optimal solution and save energy costs in later operation [6], as shown in Fig. 3.
Fig. 3. BIM digital technology architectural concept design
4.3 Visual Design

Visual design is used for preliminary plan deliberation and staged rendering displays. BIM gives designers 3D visual design tools, enabling them to complete architectural design with three-dimensional thinking, and it also lets owners see at any time what their investment will deliver [4]. Visualization is used not only for renderings and report generation, but also for project discussion and decision-making throughout design, construction and operation, as shown in Fig. 4. Three-dimensional collaborative design is completely different from traditional two-dimensional design [3]: it treats the building as a living whole, and the design processes for architecture, structure, plumbing and electricity are no longer isolated from each other but share a common information platform.
Fig. 4. BIM digital technology visualization design
4.4 Performance Analysis

In the design process, the BIM virtual building model created by the architect contains a large amount of design information, such as geometric information, material properties and component properties. Analysis results that originally required professionals to spend a lot of time inputting large amounts of data can now be obtained automatically by simply importing the model into performance analysis software, which greatly improves work efficiency and the quality of the design and supports more professional services [4].
5 The BIM Digital Construction Application in the Construction Stage of Housing Complex

5.1 3D Collision Check

Applying BIM technology to check leakage, collision and defects in 3D pipelines realizes functions that ordinary 2D design cannot. Designers can make reasonable use of the layout space and easily find collisions in the design in a virtual environment [4], so that pipelines are arranged accurately, greatly improving comprehensive pipeline design ability and work efficiency. During the construction phase, construction personnel can use the collision-optimized 3D pipeline scheme for construction disclosure, greatly reducing rework, winning time and saving costs for the owner [5].

5.2 Construction Collaboration, 4D Construction Simulation

Through the application of BIM technology, a comprehensive model of civil engineering and equipment management is constructed to coordinate the positions and elevations of the various professional pipelines and to ensure that construction is completed accurately in one pass [6]. Integrating the BIM model with construction schedule data realizes the four-dimensional application of BIM in space and time: the site can be laid out scientifically, the advantages of different construction schemes analyzed to obtain the best one, and management unified, as shown in Fig. 5. With the 4D construction model of BIM, the buildability of difficult parts of the construction process can be simulated in advance, informational 3D model disclosure conducted, construction process flows shown, and guidance measures such as site layout simulated and analyzed, optimizing construction process management [7].
Fig. 5. BIM technology digital 4D construction simulation
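In its simplest form, the 3D collision check of Sect. 5.1 tests whether the axis-aligned bounding boxes of two components overlap; the component names and coordinates below are hypothetical.

from itertools import combinations

def boxes_clash(a, b):
    # a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)); clash = overlap on all three axes
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

components = {
    "duct-17": ((0.0, 0.0, 2.8), (4.0, 0.4, 3.2)),
    "pipe-42": ((3.8, 0.2, 3.0), (3.9, 5.0, 3.1)),
    "beam-B07": ((0.0, 0.0, 3.3), (6.0, 0.3, 3.6)),
}
clashes = [(m, n) for m, n in combinations(components, 2)
           if boxes_clash(components[m], components[n])]
print(clashes)   # -> [('duct-17', 'pipe-42')]

Real BIM clash detection works on the full geometry and adds clearance rules, but the pairwise principle is the same.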
5.3 Accurate Material Statistics

The 4D relational database based on BIM technology records detailed information on all buildings, components and equipment, so the basic data of the project can be obtained quickly and accurately when splitting physical quantities, enabling fast and accurate material statistics and providing procurement plans and quotas at any time. It provides timely and accurate data support for formulating material requisition plans and an audit basis for on-site management issues such as "flight orders". In addition, the owner can keep track of cost control during the investment process and make timely adjustment decisions [8].
6 Application of BIM Digital Construction in Other Management Aspects of Housing Complex

6.1 3D Animation Rendering and Walkthrough

The owner needs rendered three-dimensional animations for sales, publicity and display of the building. Using the existing information model, if the design intent changes, the model is modified and the renderings and animations are quickly regenerated. As a by-product of BIM technology, this allows customers to obtain higher returns with less investment [9].

6.2 Construction Site Collaborative Management

In a traditional project, all parties search for relevant information in the site drawings and then communicate with each other. Compared with this, BIM integrates information and provides a three-dimensional communication environment, so efficiency is greatly improved. It acts as a communication platform for all parties on the construction site, so that they can coordinate the project plan, demonstrate its feasibility, eliminate hidden dangers in time, reduce the resulting changes, and shorten the construction time [10]. At the same time, the cost of design coordination can be reduced and production efficiency improved.
6.3 Property Management and Post-Operation

Through BIM, relevant operational information can be attached to the original building model, and functions such as visualized maintenance of mechanical and electrical equipment, traditional facility management, and building space management can be realized through professional management software. For a building and its equipment, the virtual model is used to adjust the systems, equipment and parameters used in the building to achieve the best operating effect [11]. It can also provide the owner and project team with effective historical information for future renovation, refurbishment and expansion.
7 Conclusion

At present, all parties involved in construction projects demand timely and accurate screening and transfer of basic engineering data in the process of project control. With the continuous deepening of BIM digital construction applications, this will soon be realized and refined, and the digital construction of BIM technology will push housing complexes into a revolutionary era.
References 1. Meterelliyöz, M.Ü., Özener, O.Ö.: BIM-enabled learning for building systems and technology. J. Inf. Technol. Constr. 27, 1–19 (2022) 2. Demagistris, P.E., et al.: Digital enablers of construction project governance. CoRR abs/2205.05930 (2022) 3. Michalski, A., Glodzinski, E., Bde, K.: Lean construction management techniques and BIM technology - systematic literature review. CENTERIS/ProjMAN/HCist, pp. 1036–1043 (2021) 4. Mangia, M., Lazoi, M., Mangialardi, G.: Digital management of large building stocks: BIM and GIS Integration-based systems. In: Canciglieri Junior, O., Noël, F., Rivest, L., Bouras, A. (eds.) Product Lifecycle Management. Green and Blue Technologies to Support Smart and Sustainable Organizations. PLM 2021. IFIP Advances in Information and Communication Technology, vol. 639. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-94335-6_10 5. Sanchís-Pedregosa, C., Vizcarra-Aparicio, J.-M., Leal-Rodríguez, A.L.: BIM: a technology acceptance model in Peru. J. Inf. Technol. Constr. 25, 99–108 (2020) 6. Paavola, S., Miettinen, R.: Dynamics of design collaboration: BIM models as intermediary digital objects. Comput. Supp. Cooper. Work (CSCW) 27(3–6), 1113–1135 (2018). https:// doi.org/10.1007/s10606-018-9306-4 7. Hui, W.: Application analysis of BIM technology in the construction phase ofconstruction projects. J. Lianyungang Vocat. Techn. Coll. 31(4), 7–9 (2018) 8. Merschbrock, C.: Bjørn Erik Munkvold: Effective digital collaboration in the construction industry - a case study of BIM deployment in a hospital construction project. Comput. Ind. 73, 1–7 (2015) 9. Akanmu, A.A., Anumba, C.J., Ogunseiju, O.O.: Towards next generation cyber-physical systems and digital twins for construction. J. Inf. Technol. Constr. 26, 505–525 (2021) 10. Roxin, A., Abdou, W., Derigent, W.: Interoperable digital building twins through communicating materials and semantic BIM. SN Comput. Sci. 3(1), 1–25 (2021). https://doi.org/10. 1007/s42979-021-00860-w
11. Schweigkofler, A., et al.: Development of a digital platform based on the integration of augmented reality and BIM for the management of information in construction processes. In: Chiabert, P., Bouras, A., Noël, F., Ríos, J. (eds.) Product Lifecycle Management to Support Industry 4.0. PLM 2018. IFIP Advances in Information and Communication Technology, vol. 540. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01614-2_5
The Research on Computer On-line Teaching Simulation System Based on OBE Concept Yi Liu1 and Xiaobo Liu2(B) 1 School of Marxism, Jiangsu Police Institute, Nanjing, Jiangsu, China 2 School of Information Engineering, Nanjing Xiaozhuang University, Nanjing, Jiangsu, China
[email protected]
Abstract. OBE is a mature educational concept centered on comprehensively improving students' ability and quality, oriented toward improving teaching outcomes, and aligned with sustainable development; it has a pivotal guiding significance for school education practice. With the comprehensive development of information technology, the demand for computer network technology talents is increasing day by day, yet the current teaching mode of computer network technology struggles to adapt to modern education: innovation research is not deep enough, and there is still a big gap with the objective requirements of education. Therefore, introducing the OBE concept is highly significant for innovating the teaching and learning mode of computer network technology. This paper first briefly introduces the OBE education concept and its status and function, then analyzes the situation and shortcomings of computer network technology teaching, and finally combines the design of an on-line teaching simulation system with the OBE concept, providing a powerful innovation for computer on-line teaching. Keywords: OBE concept · Computer · On-line teaching · Teaching mode · Simulation system
1 Introduction

With the development of modern scientific information technology and the gradual improvement of local area networks, the Internet and other infrastructure, information technologies have gradually penetrated all aspects of social production and life, and their influence in people's daily life keeps increasing [1]. As an important part of information technology, computer network technology is a new discipline that combines modern computer technology with advanced network communication methods. It has a pivotal social status and is therefore favored by more and more people. However, due to the influence of external objective factors in the early stage, the education system of computer network technology is not perfect, the development of subject teaching lags behind, and innovation ability is lacking [2], so that the supply and demand of employment in the field of computer network technology is unbalanced and the gap is large. It cannot meet the needs
of employers such as large and medium-sized enterprises. The Outcomes-Based Education (OBE) model, an educational concept that emerged in the 1980s and 1990s, differs from the traditional education model, which relies on previously input content for guidance and drive. The main goal of implementing OBE teaching is the achievement of students and the expected goals they reach [3], together with the personal ability and quality they possess when they graduate, thus providing a new idea for the innovation of the teaching mode.
2 The Basic Meaning of OBE Concept and Its Importance

2.1 OBE Education Concept Meaning

The OBE educational concept developed vigorously from the 1980s into the 1990s. It originated in the basic education reforms of the United Kingdom and the United States, was developed in depth in the United States into a new educational concept, and has gradually come into wide use around the world. The OBE teaching philosophy has always adhered to the idea that the result of students' learning is far more important than the learning process. From this point of view, the OBE concept is an educational concept that integrates learning results throughout all the processes that determine the entire teaching activity [4]. However the connotation is defined, the OBE education philosophy always insists that educators must clearly grasp the ability level students should have at a certain stage, and then use a reasonable teaching process to achieve the teaching goals on this premise. The inner driving force of the entire education system is students' results, which is the biggest difference between the OBE education concept and the traditional one. The traditional educational concept focuses on objective memorization rather than subjective understanding, which makes it difficult for students to solve open problems and leaves them lacking problem-solving ability. The OBE education model focuses on students' creative thinking and their ability to comprehensively analyze and actively solve problems. It can clearly grasp students' basic abilities and has its own unique theoretical advantages. The modeling of the OBE computer simulation is as follows:

$X \cdot Y = \frac{1}{4}\left[(X + Y)^2 - (X - Y)^2\right]$ (1)

$f^2(x) = -(x + y)^2$ (2)

$f^2(y) = +(x - y)^2$ (3)

The logic is shown in Fig. 1.
Fig. 1. OBE logical structure
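Identity (1) is the standard quarter-square factoring of a product and can be checked numerically. A minimal Python sketch (the sample values are arbitrary):

```python
# Numerical check of Eq. (1): X*Y = ((X+Y)**2 - (X-Y)**2) / 4
def quarter_square_product(x: float, y: float) -> float:
    """Compute x*y via the quarter-square identity of Eq. (1)."""
    return ((x + y) ** 2 - (x - y) ** 2) / 4

for x, y in [(2.0, 3.0), (-1.5, 4.0), (0.0, 7.0)]:
    assert abs(quarter_square_product(x, y) - x * y) < 1e-12
print("Eq. (1) holds for the sampled values")
```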
2.2 The Importance of OBE Concept

At present, the original teaching mode of most colleges has certain limitations. For example, most adopt a unified syllabus that considers neither the needs of individuals in different majors nor links such as process-ability assessment. To remedy the shortcomings of the original teaching mode, the OBE concept has gradually been introduced into college subject teaching and has become a new trend in teaching modes. Among these subjects, computer network technology, as a core part of the technology curriculum, is of epoch-making significance. Therefore, it can be said that the OBE concept has fundamentally changed the direction of development of the teaching mode and provided a novel theoretical idea for its innovation [3]. The essence of the OBE concept is a student-oriented educational concept. In practice it is student-centered, focusing on the improvement and expansion of students' abilities, and all teaching links and activities are organized closely around this center to achieve the expected teaching effect. The construction of computer network technology teaching is likewise planned on the premise of pre-defined outcomes. To a certain extent, this reflects that the core of the education model is that educators can judge the network technology capabilities students will have after completing their studies and use them as a teaching basis to construct a new teaching mode of computer network technology that ensures the teaching goals are realized [2], as shown in Fig. 2.
3 Construction of Teaching Mode of Design Specialty Based on OBE Concept

The OBE-model curriculum teaching method focuses on cultivating students' professional practical ability, promotes the advantages of practical courses, and rapidly transforms theoretical knowledge into ability. OBE-model course teaching is more flexible and is an organic combination of teaching activities. The course consists of interlinked theoretical knowledge, experimental operation and practical projects. The teaching mode based on the OBE concept is divided into three steps: first, teachers' classroom teaching and demonstration; second, students' collection and use of information and digestion of knowledge; third, teachers' after-class answering of questions and evaluation. When classroom teaching cannot meet the teaching needs,
Fig. 2. OBE teaching philosophy
teachers can use on-line teaching to comment and answer questions for students one by one. In on-line teaching, teachers can use high-quality on-line resources as extracurricular materials to guide students and can answer questions on-line. To meet the developmental characteristics of curriculum education, the curriculum reform has abandoned single-track theoretical classroom teaching and replaced it with combined on-line and off-line teaching methods. The on-line component is devoted to deepening the exploration of theoretical knowledge and the practice of network technology. In off-line teaching, teachers should supplement different knowledge for the different professional courses of their majors and, if necessary, combine on-line teaching with case demonstration. In the process of undertaking and completing production technology projects, teachers lead students to complete comprehensive professional technical training and cultivate market awareness and team spirit. The teaching mode based on the combination of production and research should address the following aspects: 1) Update the course learning environment. Combining the school education environment, where knowledge is imparted indirectly in the classroom, with the production-site environment, where practical experience and ability are obtained directly, takes advantage of the strengths of schools, industries and scientific research institutions in talent cultivation and can improve students' interest in and enthusiasm for learning. Members should be assigned to groups according to the curriculum requirements and the level of students' professional knowledge; the allocation of group members should follow the principle of complementary advantages, so that students of different majors learn from each other.
2) Reasonably arrange the content and structure of the research project. The core of research-based teaching is to scientifically arrange the layout and content structure of research topics. Teachers should divide the teaching process into a number of core topics that need in-depth discussion, sort out relevant excellent cases, and focus the teaching content, interpreting each key point and its corresponding cases in depth so that students can accurately grasp all the core points and knowledge theories of the course content. 3) Properly organize classroom discussions and academic exchanges. During design work, teachers and students need to communicate fully and in time to resolve difficulties and to share research resources and experience. The teacher introduces flipped-classroom teaching according to the curriculum plan, lets students play the role of lecturers, assigns the work of cooperative teams in the classroom, and has each group present its design.
4 Computer On-Line Teaching Mode Based on OBE Concept

4.1 Clarify the Teaching Objectives Combined with the Teaching Objectives of Computer Network Technology Professional Learning

In teaching that integrates computer technology and specialized network technology, teachers should adhere to the OBE educational concept as the guiding ideology and firmly fix the orientation of the learning process: students should master a programming language, have certain programming ability, master network technology, and be able to build websites and carry out their daily maintenance later on [5]. Taking these teaching goals as the starting point and foothold, teachers of related disciplines should carry out innovative design of teaching modules, optimize the teaching content of courses, separate the computer professional technical content into basic courses, introduce new teaching modes, and gradually realize the final educational goal.

4.2 Setting Teaching Activities Based on OBE-Related Teaching Concepts

Teachers should teach according to the professional curriculum, carry out comprehensive teaching activities, and make appropriate arrangements for every link from initial pre-class preparation through class delivery to post-class review. They should ensure that students understand the content and key points of classroom learning in advance and use this to guide classroom activities, changing the bad habit of merely copying notes. Targeted questions are posed in the classroom, and testing students' learning level is taken as the breakthrough point of teaching [5]. According to the requirements of the syllabus, teachers should explain important knowledge points in depth, especially difficult problems, so that students can better grasp relevant knowledge and improve their ability. After class, teachers should reflect carefully, change the traditional teaching mode, and turn the teacher-delivered summary into an in-class group discussion combined with the homework assigned earlier, so that students complete it independently and the educator finally checks and summarizes [6], as shown in Fig. 3. The final answer deepens students' impression of what they learned.
Fig. 3. OBE-related teaching concepts
4.3 Regularly Organize Teaching Evaluations with the OBE Education Concept

Combined with computer network technology courses, and starting from the ability and quality of professional students, teachers should enrich teaching evaluation methods and give full play to the key role of different assessment methods. In professional learning, examination is an effective assessment method. It is necessary to pay attention to assessing students' daily performance, increase its proportion in the overall assessment, and keep students engaged in ordinary study. Quantifying performance meticulously, giving reasonable scores, verifying whether students really master the relevant knowledge points, and reinforcing the absorption of computer network technology in a subtle way can better improve students' comprehensive ability and quality [6]. When conducting professional examinations, the focus should be on the flexible application of the basic knowledge points from daily teaching, avoiding the rote memorization of examination-oriented education, increasing the proportion of subjective question types, and effectively and comprehensively examining students' ability and quality.
5 On-Line Teaching Design Simulation System Based on OBE Concept

The OBE concept is an outcome-oriented teaching model, so the teaching objectives of the course must first be clearly defined. Our school syllabus states: through the study of this course, students will master the basic concepts and methods of testing technology, gain a deeper understanding of the principles and applications of sensors, and master the methods for evaluating the performance of common testing systems and the basic methods of signal analysis; they will be familiar with the detection methods for the engineering parameters commonly used in this major; and the course will cultivate students' ability to ask, analyze and solve problems [7]. In short, after studying this course, students should be able to detect common
engineering parameters after graduation, whether they pursue postgraduate studies or enter the workplace. The on-line teaching design is carried out on the basis of this goal.

5.1 Implementation and Analysis of Evaluation System

The OBE education mode requires that the focus of teaching evaluation be the process of knowledge acquisition; evaluating the construction of knowledge matters more than evaluating results. The OBE-based MOOC teaching quality evaluation system for "Principles of Computer Networks" is constructed according to the degree of achievement of the teaching objectives, starting from the course learning objectives rather than from the teaching content and teaching methods. This OBE evaluation model evaluates not only students' mastery of knowledge but also their learning status and learning ability. From Table 1, the composition of the course's score evaluation can be obtained: total score = course discussion performance × 5% + in-class test × 10% + homework results × 5% + experiment × 20% + final score × 60%; a distribution table of the total score is made at the same time.

5.2 Teaching Content Determination

According to the teaching objectives, the teaching content is set as shown in Fig. 4. The learning content of each chapter serves the teaching objectives, making full use of the available class hours to maximize the teaching effect.

5.3 The Design of Teaching Methods

This on-line teaching is centered on "modular recording and broadcasting teaching", and teaching modes such as the flipped classroom, the live classroom, and on-line and off-line discussion complement it [8]. The advantage of recorded teaching is that it does not require teaching and learning to be synchronized, which fully avoids the problem of network congestion preventing students from entering the live classroom on time. Students can arrange their study time freely and study key and difficult points repeatedly and cyclically, as shown in Fig. 5, where rumination ratio = video viewing time / actual video length. If students watch videos at double speed, the rumination ratio decreases [8]; this indirectly reflects how seriously students study. "Modular recording and broadcasting teaching" packages each knowledge point as a micro-lecture whose length is strictly controlled at about 30 min, to counter students' lack of concentration during on-line learning. After sufficient digestion, absorption and rest, students move on to the next knowledge point. In this way, students learn more efficiently and the learning effect is ensured [9]. However, the shortcomings of recorded teaching are also very clear: teachers cannot directly observe students' learning, so they cannot adjust the learning content and pace in time. At this point, other teaching methods are needed.
Table 1. Evaluation criteria of course learning objectives

| Course objectives | Graduation requirement index | Course discussion performance (5%) | Quiz in class (10%) | Operation (5%) | Experiment (20%) | Final assessment (60%) | Assessment score | Graduation required score | Goal achievement |
| Master the concept and architecture of computer network | 3, 4, 5, 7 | | | | | | | | |
| …… | | | | | | | | | |
| Total score | | | | | | | | | |

The five weighted columns together constitute the "Assessment link and weight" group.
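The aggregation behind Sect. 5.1 and Table 1 is a plain weighted sum over the five assessment links. A minimal Python sketch of this computation (the sample component scores are illustrative, not data from the paper):

```python
# Weighted total score per Sect. 5.1:
# discussion 5% + in-class quiz 10% + homework 5% + experiment 20% + final 60%
WEIGHTS = {
    "course_discussion": 0.05,
    "in_class_test": 0.10,
    "homework": 0.05,
    "experiment": 0.20,
    "final_exam": 0.60,
}

def total_score(scores: dict) -> float:
    """Aggregate component scores (0-100 each) into the course total."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Illustrative student record (hypothetical values)
print(total_score({
    "course_discussion": 90, "in_class_test": 85,
    "homework": 88, "experiment": 92, "final_exam": 80,
}))  # -> 83.8
```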
Fig. 4. The relationship between teaching objectives and teaching content
Fig. 5. Design of teaching methods
The flipped classroom deepens the mastery of knowledge points. In the flipped classroom, teachers assign homework through the network platform, students upload it when finished, and the teacher corrects and comments on it [9]. For live teaching, Tencent Classroom or Tencent Meeting can be used in shared-screen mode to discuss issues with students on-line and even write on a virtual blackboard. Although teacher and students are in different places, the effect of live teaching is still achieved. Because the students have
already learned the recorded courses in advance, they answer questions very efficiently during the live broadcast, and the classroom is very active. Of course, we have also set up an off-line WeChat discussion group [10]. In the group, students can ask questions about the course at any time; often, before the teacher answers, the students have already started a heated discussion. Such off-line discussion is indispensable: it is the collision of thinking between students and teachers, which truly extends the depth of the classroom and consolidates and strengthens the teaching effect [8]. Although the above teaching modes are all conducted on-line, they are all designed with the OBE concept. Most students report that the quality of on-line teaching is no worse than face-to-face teaching; some students have even come to prefer it.

5.4 Design of Work Form

Based on the teaching goal that students should be able to detect common engineering parameters, the design of the major assignment is carried out. From the first class, the author announced the assignment requirements: students must design a set of experimental devices to measure a physical quantity, preferably one related to their major. Thus, from the very beginning of the course, students learn with questions and goals in mind. Moreover, this assignment fully tests students' ability to apply textbook knowledge in practice [10].
6 Conclusion

This paper discusses the introduction of OBE teaching theory into the teaching process and the design of an on-line teaching simulation system. In view of the various problems faced by on-line teaching during the epidemic, a computer on-line teaching simulation system is designed according to the OBE education concept, and the design of the teaching content, teaching mode and classroom assignments based on the teaching objectives is introduced in detail. The teaching process design is centered on the teaching goal and serves its realization.

Acknowledgement. Teaching Project of Nanjing Xiaozhuang University: "Research and practice of a mixed teaching mode based on the Blackboard platform, taking Higher Mathematics as an example". This paper is a stage result of the Qing Miao Project of Jiangsu Police Institute (JSPI2018QM).
References
1. Wahl, F., Seidl, T.: On the three-dimensional teaching mode of literary theory course under OBE concept. ICIMTECH 317, 1–317 (2021)
2. Daun, M., et al.: Teaching conceptual modeling in on-line courses: coping with the need for individual feedback to modeling exercises. CSEE&T, pp. 134–143 (2017)
3. Obermeier, S., Beer, A.: On the construction of the first-class major of normal major of Chinese language and literature under the concept of OBE. ICIMTECH 316(1–316), 4 (2021)
4. Kremer, A., Oberländer, A., Renz, J., Meinel, C.: Finding learning and teaching content inside HPI School Cloud (Schul-Cloud). EDUCON, pp. 1324–1330 (2019)
5. Roat, S., Obermann, M., Kuehne, A.: Ideological and political teaching reform: an introduction to artificial intelligence based on the OBE concept. ICEIT 2022, pp. 6–9 (2022) 6. Kawash, J., Horacsek, J., Wong, N.: What we learned from the abrupt switch to on-line teaching due to the COVID-19 pandemic in a post-secondary computer science program. CSEDU (2), pp. 148–155 (2021) 7. Withdrawn: on the three-dimensional teaching mode of literary theory course under OBE concept. ICIMTECH, pp. 1–5 (2021) 8. Al Hashlamoun, N., Daouk, L.: Information technology teachers’ perceptions of the benefits and efficacy of using on-line communities of practice when teaching computer skills classes. Educ. Inf. Technol. 25(6), 5753–5770 (2020) 9. Wünsche, B.C., et al.: Using an assessment tool to create sandboxes for computer graphics teaching in an on-line environment. CSERC, pp. 21–30 (2021) 10. Güdemann, M.: On-line teaching of verification of C programs in applied computer science. FMTea, pp. 18–34 (2021)
Data Interaction of Scheduling Duty Log Based on B/S Structure and Speech Recognition Changlun Hu(B) , Xiaoyan Qi, Hengjie Liu, Lei Zhao, and Longfei Liang State Grid Shandong Electric Power Company Laiwu Power Supply Company, Jinan 271100, Shandong, China [email protected]
Abstract. Daily work often requires tedious business scheduling and statistical tasks. A scheduling duty log system developed on the B/S structure can realize the overall import, partial modification and historical query of duty information, freeing statisticians from complicated work and improving the office efficiency of staff and dispatchers. The core technology behind these functions is the interactive acquisition of data. Accordingly, this paper introduces voice recognition technology into the design of the dispatch log system. With its support, on-duty personnel only need to issue voice commands to modify, query and otherwise manage log information, which greatly facilitates their log management. Keywords: B/S Structure · Dispatch Log System · Data Interaction · Speech Recognition Technology
1 Introduction

Running an on-duty log system in various industries can improve the efficiency and accuracy of information transmission in production departments, realize the circulation and sharing of each department's production information over the network, support business analysis at every user level, and allow the events recorded in the duty log to be transferred and shared. Daily work often involves complicated tasks. An electronic duty recording system built on the B/S architecture can realize the overall import, local modification and historical query of duty information in real time, helping statisticians and other employees get rid of tedious work and improving their efficiency. The main function of the duty log is the interactive acquisition of data. Many researchers have investigated the data interaction of scheduling duty logs based on the B/S structure and speech recognition. For example, foreign governments pay close attention to dispatching and on-duty management, and the dispatching systems they design basically adopt the B/S structure, achieving standardized, visible and process-oriented dispatching duty management with considerable sophistication and intelligence [1]. Most of these
systems in China still use the C/S structure. Under C/S, when a large number of clients connect directly to the database, the limited database connection resources are difficult to manage, and fat-client applications are difficult to deploy and maintain. Although some systems are functionally B/S, as the running time of existing systems increases they do not consider protecting system performance from the impact of data accumulation, nor do they address data transfer within the limits permitted by business requirements. Many systems are also used without considering how to add new functionality to the duty log system without major adjustments to its main functions [2, 3]. Log has great advantages in storing log information: it lets users define the storage destination of log information and can write log information to a remote database system. However, using Log is neither intuitive nor convenient [4]. Many research units and companies in China are also actively developing products for log storage and management, and these have been successfully applied in industry [5]. Although good research results have been achieved in scheduling duty log data interaction based on the B/S structure, log data management efficiency is still not high, and the performance of the scheduling duty log system needs to be optimized. This paper analyzes the design requirements of the scheduling duty log system, which should include functions such as log editing, log handover, log circulation, and duty information query. It then presents the indexing technology and lossless data compression technology used in the system design, and designs the system architecture and log database based on the B/S structure. For the realization of system log data interaction, this paper uses speech recognition technology and analyzes the speech recognition accuracy of log data interaction.
2 System Design Requirements and Technical Analysis

2.1 Demand Analysis of Scheduling Duty Log System

Scheduling duty log management is the core of the entire system and is responsible for the entry, handover, circulation and query of all duty log events. The system must also meet the following functional requirements:

(1) Log Editing

Events that occur during the shift are recorded, and any submitted entry may carry attachments, whose size should preferably be estimated. Entries submitted to the system may not be modified, but new information may be appended: when an event needs additional description, it can be inserted directly after the event's original record. After a record is edited, the system automatically generates the time stamp and the recorder. Related log events can be aggregated, and a commit event can be generated automatically after submission. It is advisable to provide a preview function before a log entry is committed. Log entries need to be sorted by level, so that they appear in different formats during circulation and handover and form an effective reminder to users [6, 7]. Events typically contain the following properties: date, description, status, created by, list of attachments, etc. The submitter of a log event is
allowed to be modified, because the person who records an event is not necessarily the current logged-in user.

(2) Log Handover

Events that must be handled by other shifts are handed over before the log is closed. After handover is selected, log entries submitted by each department are moved into the corresponding department's log for the next shift. For example, if events recorded by the "command and coordination" module of the four-shift day shift remain unresolved and must be handled by the next shift, or contain important information needing attention, then after the handover function is selected, the new log generated by this module on the night shift will contain that information [8].

(3) Log Circulation

When editing a log, if the user marks a completed entry as a flow event of a specific category, the entry immediately becomes a flow event and is circulated according to the default parameters. Users can also decide dynamically, according to the actual situation, which module business units an event is delivered to [9]. Once a user circulates an event, the way the on-duty personnel handle it is reflected in the logs of the other personnel involved. In some special circumstances, a circulation event is not initiated through the software system (for example, a telephone notification is used); such events are considered internal, non-circulation events and need no special processing. A circulation event received by a user can be circulated again, because resolving it may require the support of other lower-level departments [10].

(4) On-duty Information Query

The user can query all or part of the duty information; the complete duty information is the summary of each department's header information. The system supports querying historical, current and predefined duty information, with compound query conditions such as department name, seat name, running day and shift. The results of querying the duty information at a given moment must be consistent with the personnel actually on duty.

2.2 Key Technologies of System Design

(1) Indexing Technology

The log database records detailed historical information about system status and operations, and the system writes a large amount of data to the database in a short period. This requires an efficient indexing mechanism that helps the database locate disk space quickly and accurately. The amount of log information in the dispatch management system is huge, and the log database index needs to be stored on the main disk; with the index on disk, searching the database and maintaining the index in real time become easier. System log information is sealed in time, and log data records the specific state of the system at a specific point in time. The time attribute of each record is also what maintenance personnel focus on when querying. Therefore, the indexing mechanism created for the log database should center on the time attribute while also improving search efficiency [11].
The system adopts the B/S architecture and ASP.NET to realize the editing, viewing and retrieval of duty records. The basic functions of the system are completed by interactive data conversion among Excel tables, the SQL Server database and GridView controls. When entering duty information, staff input the data into an Excel table in a fixed format and then load it into the background database, which is realized by combining the GridView control with the database. When the system works in the opposite direction, it binds the data to the GridView control and then generates an Excel table from it. The data interaction principle is shown in Fig. 1.
Fig. 1. Schematic diagram of data interaction
As shown in Fig. 1, when a user needs to view history records and produce an Excel table, the system performs a background query for the time the user selected, binds the query results to the GridView control, and finally generates the Excel table from the GridView. An electronic duty log designed this way is stable and easy to operate. In daily programming and development, data interaction is very important: reasonable interaction and call methods can greatly improve the structure of the program.
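The paper's implementation binds GridView controls to SQL Server via ASP.NET; as a language-neutral sketch of the same two-way Excel-to-database interaction, a pandas version might look as follows (the file, table and column names are hypothetical):

```python
# Sketch of the two data-interaction directions in Fig. 1, using pandas/SQLite
# in place of the paper's ASP.NET GridView + SQL Server stack.
import sqlite3
import pandas as pd

conn = sqlite3.connect("duty_log.db")  # hypothetical database file

# Import: duty information entered in a fixed-form Excel sheet -> database
df = pd.read_excel("duty_roster.xlsx")           # hypothetical input file
df.to_sql("duty_info", conn, if_exists="append", index=False)

# Export: query history for a user-selected date -> Excel report
hist = pd.read_sql_query(
    "SELECT * FROM duty_info WHERE run_day = ?", conn, params=("2023-01-15",)
)
hist.to_excel("duty_report.xlsx", index=False)
```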
(2) Data Lossless Compression Technology

The log database in the dispatching duty management system features high real-time performance, a large volume of raw data, and long storage times. These characteristics follow from the high complexity of the system and from customer requirements, but they may change over time. If the historical data are not processed effectively, they not only waste a lot of disk space but also degrade database performance and reduce the efficiency of query operations. Therefore, designing a data compression scheme is very important in the optimization phase of the log database. The purpose of data compression is to store the same amount of information in less space by changing the storage size or the representation of the data. Since the log information of the scheduling management system must record detailed and comprehensive information about the system status, the compression method used in the system designed in this paper is lossless compression, ensuring that historical information can be completely restored without data loss [12].

2.3 Speech Recognition

Since the data volume of the voice signal is relatively large, the voice signal must first be preprocessed. After preprocessing, feature parameters are extracted from the raw data, such as the time-domain and frequency-domain features of the speech signal. After feature extraction, the original speech signal becomes a feature vector. The next step is to use these feature vectors for model training and recognition. Commonly used training and recognition methods include dynamic time warping, hidden Markov models, and neural network models. Voice feature extraction:

$\mathrm{STFT}\{x(t)\}(\tau, \omega) = \int_{-\infty}^{\infty} x(t)\, w(t - \tau)\, e^{-j\omega t}\, dt$ (1)
Among them, w(t) is the window function, x(t) is the input speech signal, τ is the time coordinate, ω is the frequency coordinate, and STFT is the short-time Fourier transform of the signal.

$\mathrm{loss}(p, q) = \frac{1}{n} \sum_{i=1}^{n} \left( \max\{0,\, 1 - p_i + q_i\} \right)^2$ (2)
where n is the feature dimension of the speech signal, p is the recognition output, and q is the training label.
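A minimal sketch of computing the short-time Fourier features of Eq. (1) and the margin loss of Eq. (2); the sampling rate, window length and toy signal are assumptions for illustration, not values from the paper:

```python
# Sketch: STFT feature extraction (Eq. 1) and the squared-margin loss (Eq. 2)
import numpy as np
from scipy.signal import stft

fs = 16000                                   # assumed sampling rate (Hz)
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)              # toy 1-second speech stand-in

# Eq. (1): windowed Fourier transform over sliding frames
f, tau, Z = stft(x, fs=fs, window="hann", nperseg=400)  # 25 ms frames
features = np.abs(Z)                         # magnitude spectrogram

def margin_loss(p: np.ndarray, q: np.ndarray) -> float:
    """Eq. (2): mean squared hinge-style margin between output p and label q."""
    return float(np.mean(np.maximum(0.0, 1.0 - p + q) ** 2))

print(features.shape, margin_loss(np.array([0.9, 0.2]), np.array([1.0, 0.0])))
```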
3 System Design

3.1 System Architecture Design Based on B/S Structure

The scheduling duty log management system adopts a three-tier B/S structure, as shown in Fig. 2. The presentation layer mainly uses the JSP engine to manage the scheduling duty log. The business logic layer mainly uses Spring transaction processing and OR mapping (iBatis), configuring its functions through the mapping file applicationContext.xml. The persistence layer mainly uses the configuration files SqlMapConfig.xml and SqlMap.xml in the iBatis framework to operate the database data.
Fig. 2. Scheduling duty log management system based on B/S structure (presentation layer: JSP engine; business logic layer: Spring framework with OR Mapping (iBatis), Transaction, applicationContext.xml; persistence layer: iBatis framework with SqlMapConfig.xml and SqlMap.xml; database)
Advantages of the three-tier B/S architecture: a three-tier ASP.NET architecture can be distributed across different physical servers with only a few code modifications, so it scales better and performs well. Moreover, what each tier does is invisible to the others, so changing or updating one tier does not necessarily require recompiling or changing the whole system. This is a very powerful property. For example, if the data access code is separated from the business logic tier, then after the database service tier changes, only the data access code needs to change; since the business logic tier is unchanged, it needs no modification or recompilation.

Presentation layer: the presentation layer displays the user interface and uses business-tier classes and objects to "drive" it. In ASP.NET it includes aspx pages, user controls, server controls, and some security-related classes and objects. In short, it is what a user sees when using the system.

Business layer: the business layer accesses, modifies and deletes data through the data layer and then returns data structures to the presentation layer. In ASP.NET, this layer includes
acquiring data from the SQL Server or Access database, updating and deleting data, storing the acquired data in a DataReader or DataSet, and returning it to the presentation layer. The returned data may be as simple as a single integer, such as the number of rows in a table, but this is still done in the data layer. The handling of specific problems here can also be described as operations on the data layer, that is, the logic of the data business.

Data access layer: the data layer is the database or data source, typically a SQL Server or Access database, but not limited to these; it can also be Oracle, MySQL, or even XML. Transactions at this level operate directly on the database, including adding, deleting, modifying, updating and searching. In software architecture design, layering is the most common and critical technique. The hierarchical architecture proposed by Microsoft usually has three layers: the data access layer, the business logic layer and the presentation layer.

3.2 Log Database Design
Table 1. Duty log table

| Field | Field length | Is it empty? |
| Duty log number | 20 | No |
| Department No. | 30 | No |
| Business module number | 50 | No |
| Duty shift | 5 | No |
| Duty status | 10 | No |
| Log creation time | 15 | No |
As shown in Table 1, when building the scheduling duty log table according to the design requirements of the log database, the field lengths of the duty log number, department number, business module number, duty shift, duty status, and log creation time must be set to 20, 30, 50, 5, 10 and 15 respectively, and none of them is allowed to be empty.
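A minimal sketch of the schema in Table 1, together with an index on the time attribute as motivated in Sect. 2.2 (SQLite syntax stands in for the system's SQL Server here):

```python
# Sketch: create the duty log table of Table 1 plus the time-attribute index
# recommended in Sect. 2.2 (SQLite stands in for SQL Server).
import sqlite3

conn = sqlite3.connect("duty_log.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS duty_log (
    duty_log_number        VARCHAR(20) NOT NULL,
    department_number      VARCHAR(30) NOT NULL,
    business_module_number VARCHAR(50) NOT NULL,
    duty_shift             VARCHAR(5)  NOT NULL,
    duty_status            VARCHAR(10) NOT NULL,
    log_creation_time      VARCHAR(15) NOT NULL
);
-- Index on the time attribute to speed the historical queries of Sect. 2.1
CREATE INDEX IF NOT EXISTS idx_duty_log_time ON duty_log (log_creation_time);
""")
conn.commit()
```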
4 Implementation of Log Data Interaction in the Dispatch Log System

4.1 Realization of Data Exchange on Duty Information

(1) Import of Duty Information

The import of shift scheduling information mainly loads the information in the Excel table into the background database; the path of the Excel table is obtained through the input upload control and processed in the background.

(2) Dynamic Display of Duty Information
The dynamic display of duty information refers to the interaction between the background and the control database. First, the calendar control is used to select the query date; then the background searches the database and stores the log data in a data collection; finally, a data connection is used to bind the output and display the page.

(3) On-duty Information Query

The duty log module is the core module of the whole system. It provides log recording and query functions and is the main workbench for front-line duty personnel. A user's own module owns its duty log and is authorized to access that module's log. In this module, front-line duty personnel fill in the duty log and perform the corresponding operation-guarantee transaction processing. When a user submits a new event or appends to one, the submitter may be modified; it defaults to the system user's real name.

4.2 Log Data Interaction Based on Voice Signal

For dispatching duty log data, the system can realize entry, deletion, handover, circulation, modification, query and other functions. The extraction and recognition of voice signals targets on-duty personnel using the system: voice commands alone suffice to add, modify, delete and otherwise operate on log data.
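One straightforward way to wire recognized command text to these log operations is a dispatch table. A hedged sketch (the handler functions are illustrative stubs, not the system's actual interfaces):

```python
# Sketch: route a recognized voice command to the matching log operation.
# The six commands mirror those evaluated in Fig. 3; handlers are stubs.
def input_log(arg):      print("entering log:", arg)
def delete_log(arg):     print("deleting log:", arg)
def handover_log(arg):   print("handing over log:", arg)
def circulate_log(arg):  print("circulating log:", arg)
def revise_log(arg):     print("revising log:", arg)
def inquire_log(arg):    print("querying log:", arg)

HANDLERS = {
    "input": input_log, "delete": delete_log, "handover": handover_log,
    "circulation": circulate_log, "revise": revise_log, "inquire": inquire_log,
}

def dispatch(recognized_text: str, arg: str) -> None:
    """Invoke the operation named by the recognizer's output text."""
    handler = HANDLERS.get(recognized_text.strip().lower())
    if handler is None:
        print("unrecognized command, ask the operator to retry:", recognized_text)
    else:
        handler(arg)

dispatch("Inquire", "log #2023-0115")
```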
Fig. 3. Voice command recognition accuracy (recognition accuracy in % for the voice commands input, delete, handover, circulation, revise and inquire, for on-duty operators A, B and C)
Figure 3 shows the accuracy with which the dispatching duty log management system recognizes various operation instructions after receiving voice commands from three on-duty operators. The data show that when duty staff issue voice commands to delete or query log information, recognition accuracy is above 90%, while accuracy for commands such as handover and modification is comparatively lower.
5 Conclusion

As the construction of the dispatching duty management system becomes more complete, the duty log is increasingly informatized, and the duty log data to be maintained grows richer. For this reason, it is necessary to develop a dispatching duty log system that realizes log data interaction, so as to reduce the management costs of prefecture- and county-level dispatching duty work. This paper adopts the B/S structure and voice recognition technology to design a dispatching duty log system that can edit, modify and query log data and that realizes human-computer interaction through voice recognition, facilitating the management of duty personnel's log information.
References
1. Raeissi, M.M., Farinelli, A.: Cooperative queuing policies for effective scheduling of operator intervention. Auton. Robot. 44(3–4), 617–626 (2019). https://doi.org/10.1007/s10514-019-09877-w
2. Apt, E., Regev, T., Shapira, J., et al.: Residents' perspective on duty hours at an Israeli tertiary hospital. Israel J. Health Policy Res. 11(1), 1–7 (2022)
3. Yavuz, H.C.: The effects of log data on students' performance. Eğitimde ve Psikolojide Ölçme ve Değerlendirme Dergisi 10(4), 378–390 (2019)
4. Pellegrin, F., Yücel, Z., Monden, A., Leelaprute, P.: Task estimation for software company employees based on computer interaction logs. Empir. Softw. Eng. 26(5), 1–48 (2021). https://doi.org/10.1007/s10664-021-10006-4
5. Ravanelli, M., Brakel, P., Omologo, M., et al.: Light gated recurrent units for speech recognition. IEEE Trans. Emerg. Top. Comput. Intell. 2(2), 92–102 (2018)
6. Pakoci, E., Popović, B., Pekar, D.J.: Improvements in Serbian speech recognition using sequence-trained deep neural networks. SPIIRAS Proceed. 3(58), 53–76 (2018)
7. Markovic, B.R., Galic, J., Mijic, M.: Application of Teager energy operator on linear and mel scales for whispered speech recognition. Arch. Acoust. 43(1), 3–9 (2018)
8. Darabkh, K.A., Haddad, L., Sweidan, S.Z., et al.: An efficient speech recognition system for arm-disabled students based on isolated words. Comput. Appl. Eng. Educ. 26(2), 285–301 (2018)
9. Laszlo, T., Ildiko, H., Gabor, G., et al.: A speech recognition-based solution for the automatic detection of mild cognitive impairment from spontaneous speech. Curr. Alzheimer Res. 14(2), 130–138 (2018)
10. Sahadat, M.N., Alreja, A., Ghovanloo, M.: Simultaneous multimodal PC access for people with disabilities by integrating head tracking, speech recognition, and tongue motion. IEEE Trans. Biomed. Circuits Syst. PP(1), 1–10 (2018)
11. Allen, A.A., Shane, H.C., Schlosser, R.W.: The Echo as a speaker-independent speech recognition device to support children with autism: an exploratory study. Adv. Neurodevelop. Disorders 2(1), 69–74 (2018)
12. Arafa, M., Elbarougy, R., Ewees, A.A., et al.: A dataset for speech recognition to support Arabic phoneme pronunciation. Int. J. Image Graph. Sig. Process. 10(4), 31–38 (2018)
Food Safety Big Data Classification Technology Based on BP Neural Network Dongfeng Jiang(B) Shandong Institute of Commerce and Technology, Jinan, Shandong, China [email protected]
Abstract. Food safety is directly related to people's life safety and is the material basis of people's lives. People have low tolerance for, and pay high attention to, food safety incidents directly related to their own safety. This paper discusses the structure of the BP neural network and big data food supervision technology, analyzes the limitations of the BP neural network and the problems existing in the application of food safety data, summarizes network data classification methods, and tests the recommended BP neural network algorithm model. The results show that algorithm B alleviates the poor score-prediction accuracy of the traditional algorithm under a sparse score matrix: compared with algorithm A, algorithm B achieves higher score-prediction accuracy. Keywords: Food Safety · Big Data Classification Technology · BP Neural Network · Network Data Classification Method
1 Introduction

As a public information source, big data itself contains rich information. If technical means can be used to collect food-safety-related data from big data, process them and tap their internal value, this will help solve food safety problems and promote social stability and development. With the continuous progress of science and technology, many experts have studied big data classification technology. For example, Devi, S.G. and Sabrigiriraj, M. introduced an OFS algorithm based on a meta-heuristic, using the MapReduce paradigm; a new hybrid multi-objective firefly and simulated annealing algorithm (HMOFSA) is proposed and evaluated with a kernel support vector machine (KSVM) [1]. Zhu, M. and Chen, Q. proposed structural-information local-distribution deep nonlinear embedding (SILDNE) and explained in detail how the model combines the structural and attribute characteristics of nodes; an SVM classifier classifies the known labels, and SILDNE integrates the network structure, labels and node attributes into a deep neural network [2]. Zhao, J., Wang, M. and Zhang, S. proposed an unbalanced binary classification method for large-scale data sets based on undersampling and ensembling; the proposed classifier fusion scheme uses the fuzzy integral to fuse the base classifiers, so as to model the relationship between them [3]. Although the research results of big data classification
technology are quite abundant, research on food safety big data classification technology based on the BP neural network is still insufficient. To study this topic, this paper examines the BP neural network and food safety big data classification technology and traces the evolution of the probability generation model. The results show that the BP neural network is conducive to establishing food safety big data classification technology.
2 Method

2.1 BP Neural Network

(1) BP neural network structure

In a BP neural network there are no connections between the neurons within a layer, but adjacent layers are fully connected. The basic principle of the artificial neural network is to train on samples through a nonlinear computation process, continuously adjusting the connection weights and thresholds between layers over time [4]. Finally, without knowing the relationship between input and output variables, the model's implicit relationship law is obtained, achieving the purpose of data prediction. There are many kinds of artificial neural networks [5]. Among them, the BP neural network is widely used to solve problems whose process characteristics are difficult to describe with physical equations, because of its strong ability to model nonlinear systems and its simple iterative operation structure. To simulate the human nervous system, neurons are connected with different strengths in a multi-layer network. Therefore, when building a BP neural network model, the simple structure with only three layers is usually preferred, meaning the network has only one hidden layer [6].

(2) Limitations of BP neural network

If the initial weights and thresholds are large, the derivative of the activation function falls in the saturation region and its gradient is close to 0; if the initial weights are very small, the error gradient changes very little, making training slow and even hindering the final convergence of the algorithm. In addition, because the weights and thresholds of a BP neural network differ on every run, its performance is not consistent. Therefore, in practical research and application, the BP neural network is usually combined with other intelligent algorithms, such as swarm intelligence algorithms, to optimize its structure reasonably. Gradually adjusting the network's extremum along the direction of local improvement attempts to reach the global solution that minimizes the error function, but in fact it obtains a local minimum [7]. It is also difficult to select the number of hidden layers and nodes, which is usually determined by experience, resulting in poor network design: a network that is too large is inefficient and may even overfit, reducing performance, while a network that is too small converges slowly. The network's memory ability and
learning ability are not stable enough. When a trained BP network must store a new pattern, the existing connection weights are disrupted and the original memory disappears; therefore, the original learning patterns must be retrained together with the new pattern.

2.2 Food Safety Big Data Classification Technology

(1) Network data classification method

Classification can use the attributes of the node itself. When node attributes are known, local classifiers can be constructed to predict node labels; in this case the network node classification method resembles classification in traditional data mining, and its research focus is how to mine node features effectively from local attributes. One difference between network data and traditional data is the presence of connection relationships, so relatively few methods predict using only the local attributes of nodes [8]. Moreover, in most fields it is difficult to obtain node attributes completely; in that case, node labels are predicted mainly from the connection relationships. Classification can also use the connection relationships of nodes. As shown in Fig. 1:
Fig. 1. BP neural network
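Complementing Fig. 1, a minimal NumPy sketch of such a three-layer network trained by gradient descent on the squared error, as described in the surrounding text (layer sizes, learning rate and the XOR training data are illustrative assumptions):

```python
# Sketch: three-layer BP network (Fig. 1) trained by gradient descent on the
# squared error; it learns XOR, which a single-layer perceptron cannot.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR labels

W1, b1 = rng.normal(0, 0.5, (2, 4)), np.zeros(4)          # input -> hidden
W2, b2 = rng.normal(0, 0.5, (4, 1)), np.zeros(1)          # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - Y                            # gradient of squared error
    d_out = err * out * (1 - out)            # backpropagate through sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)   # gradient descent step
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)
print(out.round(2).ravel())                  # should approach [0, 1, 1, 0]
```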
As shown in Fig. 1, a BP neural network can handle arbitrarily complex patterns and map multi-dimensional functions well, thus overcoming the XOR problem that a single-layer perceptron cannot handle. Structurally, a BP network is divided into an input layer, a hidden layer and an output layer; the BP algorithm essentially takes the squared network error as the objective function and uses gradient descent to find its minimum.

The biggest difference between network data mining and traditional data mining is that the assumption of independent and identically distributed data no longer holds. In network data, various connection relationships exist between nodes, and they have a very important impact on node classification. In addition, in sparsely labeled networks, because the neighbors of unknown nodes are often themselves unknown, using direct neighbor nodes alone often cannot complete the classification task [9]. On the other hand, this method generally does not deeply mine the essential characteristics of the network; in particular it ignores the impact of the network structure on node categories, so it cannot exploit the overall structural characteristics, which affects classification performance.

(2) Big data food supervision technology

The complexity of food safety is a challenge for every country, involving many problems such as the wide variety of foods and the difficulty of unifying data standards. Enterprises should be urged to improve diversified data acquisition and receiving equipment and upgrade their food information systems, ensuring that even as products multiply, no food information is missed; this supports both the self-inspection of food production and operation enterprises and mutual feedback and supervision across the circulation chain. The good operation and application of big data rely mainly on a mature network operation system: fast software updates, wide coverage of hardware facilities and advanced technical means promote flexible data use, coordinated and unified management, clear attribution of responsibility and diverse application modes. At present, to break through the constraints of the old system and promote its replacement by strengthening technical support, improvement is needed in three respects. The first is data acquisition technology. The second is the data information management system. Information gathering is the first step: the raw data obtained from monitoring equipment and spot checks can be stored as high-quality, accurate data only after formatting and processing, which ensures the integrity, standardization and easy retrieval of the data. At present, the storage software of the Shanghai food supervision platform has not been updated since before big data functions were introduced, the use of big data algorithms has not become routine (data must be exported from the traditional database before big data analysis can be performed), and the database lacks hierarchical management. The third is data sharing and openness. Data sharing and openness must rest on infrastructure that supports the data, with technical settings that guarantee the accessibility and reusability of the data. At present, the Shanghai food safety supervision network has neglected planning, construction and development, remaining at the stage of opening websites, publishing static information and building internal LANs. There is no dedicated, unified back-end application system shared among the food safety supervision departments, which hinders data transmission between them.
Network integration guarantees for the seamless integration of cross-departmental business are also still lacking. Facing the decision-making needs of food safety and nutrition analysis and detection, it is necessary to summarize and mine the laws hidden in LIMS
data, predict future trends from known data, and provide a scientific basis for the future development of food safety and nutrition. As a serious social topic, food safety bears directly on people's health and the stability of social order, so the relevant research data must be true and effective. Analyzability refers to the amount of operable space that information offers in form or content; with existing technology, data of high analyzability can be examined from multiple angles and dimensions.

(3) Problems in the application of food safety data. To mine the resources and information behind big data, data management and data application are key. The traditional food safety management model mostly relies on scattered information, which has clear limitations. Big data management collects a large amount of food safety information, unifies data originally scattered across fields, departments, and locations, and carries out comprehensive analysis, processing, and application, breaking through the limitations of traditional food supervision. With the new big data architecture, food safety management must first settle the scope and degree of collection: what to collect, how to collect it, and how to summarize it. Data collection is the premise, but data cannot simply be gathered wholesale. It touches all aspects of social and economic production and is decentralized, held by enterprises and consumers and generated in the course of government supervision and service, with varied structures and attributes. The current reality is that the quality of food safety big data collection is poor, unified measures are weak, and collection means are few. Strengthening the comprehensive collection, scientific classification, and effective sorting of these data, and ensuring the information is real-time and valid, is the key to effective follow-up food safety management. Because each department builds its food safety management information platform independently, the system platforms are naturally isolated, information collection by each department and unit is difficult, and the phenomenon of information islands is prominent [10]. Take a very realistic food safety supervision case: a local food and drug administration finds a safety problem with a certain food and punishes and disposes of it. If this information cannot be interconnected in time by information technology, then even if the food and drug supervision departments for food production and wholesale are notified by letter, copy, and other means, the value of the data cannot be brought into full play and information barriers arise. Regulators in other regions cannot obtain the information and dispose of the food accordingly, and the same problem food may still be treated as safe elsewhere. This lack of effective information communication and application causes the potential value of much food safety data to be lost [11].

2.3 Evolution of the Probability Generation Model

For documents, a document w = (w_1, w_2, ..., w_n) has n words, each word being w_j [12]. In the unigram model, each word in the document is drawn independently from a multinomial
distribution, so the probability of generating document w is Eq. (1):

$$p(w \mid \beta) = \prod_{n=1}^{N} p(w_n \mid \beta) \qquad (1)$$
where β is the parameter of the multinomial distribution. When the parametric form of the distribution of θ is known (e.g., a Gaussian with θ = {μ, σ²}), estimating θ means finding the parameters θ that maximize the likelihood in Eq. (2):

$$L(\theta \mid X) = p(X \mid \theta) = \prod_{x \in X} p\{X = x \mid \theta\} = \prod_{x \in X} p(x \mid \theta) \qquad (2)$$
where p(x | θ) is the conditional probability of generating x under parameter θ.

To reflect the principle of "survival of the fittest", random sampling ensures that individuals of high fitness are more likely to be selected than individuals of low fitness, though selection of any particular high-fitness individual is not guaranteed. If the fitness of individual i is F_i, the probability that it is chosen by the selection operation is p_i, computed as Eq. (3):

$$p_i = \frac{F_i}{\sum_{j=1}^{N} F_j} \qquad (3)$$
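Equation (3) is the standard fitness-proportional ("roulette-wheel") selection rule used in genetic algorithms; the sketch below, with hypothetical fitness values, shows one straightforward implementation.

```python
import numpy as np

def roulette_select(fitness, rng, k=1):
    """Fitness-proportional (roulette-wheel) selection per Eq. (3):
    individual i is chosen with probability p_i = F_i / sum_j F_j."""
    fitness = np.asarray(fitness, dtype=float)
    p = fitness / fitness.sum()
    return rng.choice(len(fitness), size=k, p=p)

rng = np.random.default_rng(0)
fitness = [2.0, 1.0, 4.0, 0.5, 2.5]  # hypothetical population fitness values
print(roulette_select(fitness, rng, k=3))  # indices of selected individuals
```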
3 Experiment

3.1 Object Extraction

The main functional requirements of the system are as follows: design the database fields and corresponding table structures according to the characteristics of microblog data; deploy the microblog web crawler in a distributed system and store the obtained data in MySQL; extract the Chinese text from MySQL into HDFS in real time using Kafka and Flume; and then perform preprocessing such as word segmentation, deduplication, stop-word removal, and word-vector generation. The clustering and sentiment-polarity analysis algorithms deployed on Spark are then applied to the processed data to realize topic discovery and sentiment-polarity judgment. Finally, a visual front end displays the analysis results, enabling users to follow public-opinion trends and find hot topics and their sentiment polarity. A sketch of the preprocessing stage is given below.
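The preprocessing stage can be illustrated with a minimal sketch; this is an assumption-laden illustration, not the authors' code: the jieba segmenter and gensim Word2Vec stand in for the unnamed segmentation and word-vector tools, and the posts and stop-word list are hypothetical.

```python
import numpy as np
import jieba
from gensim.models import Word2Vec

# Hypothetical raw microblog texts pulled from MySQL/HDFS
raw_posts = ["某品牌奶粉被检出违规添加剂", "今天天气不错", "某品牌奶粉被检出违规添加剂"]
stopwords = {"今天", "不错"}  # hypothetical stop-word list

# Deduplicate, segment, and remove stop words
unique_posts = list(dict.fromkeys(raw_posts))
tokenized = [[w for w in jieba.lcut(p) if w not in stopwords] for p in unique_posts]

# Train word vectors over the tokenized corpus
w2v = Word2Vec(sentences=tokenized, vector_size=100, window=5, min_count=1)

# A document vector can then be taken as the mean of its word vectors
doc_vecs = [np.mean([w2v.wv[w] for w in doc], axis=0) for doc in tokenized if doc]
```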
3.2 Experimental Analysis

First, the three kinds of food safety data collected are taken as positive samples, and the non-food-safety data collected on the same websites as negative samples; they are divided proportionally into training and test sets. The preprocessed data are then segmented with a word-segmentation tool, and the document vector of each corpus is computed. Finally, three SVM classifiers are trained in turn to classify and screen food safety news reports, food inspection notices, and food safety criminal judgment documents.

The underlying implementation of Sqoop depends on MapReduce. Sqoop executes multiple export tasks in parallel and uses buffers to stage data for the database. When a task is executed, the operations cannot all be completed in one transaction, so partially committed results are already visible during the export process.
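Returning to the classifier-training step, the following is a minimal sketch of one of the three SVM screening classifiers, assuming documents are already segmented into space-joined tokens; the texts, labels, and the TF-IDF/LinearSVC choice are illustrative assumptions, since the paper does not specify its exact toolchain.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical pre-segmented documents (tokens joined by spaces) and labels:
# 1 = food-safety text, 0 = non-food-safety text
docs = ["奶粉 检出 违规 添加剂", "球队 昨晚 赢得 比赛", "餐厅 卫生 抽检 不合格"]
labels = [1, 0, 1]

# TF-IDF document vectors feeding a linear SVM
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)

print(clf.predict(["超市 下架 问题 食品"]))  # screen a new document
```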
4 Discussion

4.1 Experimental Analysis

When testing the score-prediction accuracy of a recommendation algorithm, a known test set must be used. The higher the proportion assigned to the training set, the lower the sparsity of the user scoring matrix; with more historical scoring items available for reference, the accuracy of the algorithm is higher. In this paper, the known data sets are divided in two different proportions, as shown in Table 1.

Table 1. Division proportions of the data sets for the two algorithms.

Algorithm      Number of test set records    Sparsity of scoring matrix
A algorithm    1000                          0.84
B algorithm    1500                          0.72
It can be seen from Table 1 that algorithm A has 1000 test set records with a scoring-matrix sparsity of 0.84, while algorithm B has 1500 test set records with a sparsity of 0.72. The results are presented in Fig. 2. They show that algorithm A mainly improves the scalability of the traditional algorithm, reducing the time complexity and storage overhead of the traditional nearest-neighbor collaborative filtering algorithm, while algorithm B uses the score-prediction error to measure the similarity of scoring behavior between users. Although algorithm B does not reduce the complexity of the traditional algorithm, it alleviates the poor score-prediction accuracy of the traditional algorithm under a sparse scoring matrix, and compared with algorithm A it achieves higher score-prediction accuracy.

Fig. 2. A algorithm and B algorithm data set division and comparison

4.2 Construction and Evaluation of Causality Map Based on Food Safety Corpus

From the perspective of semantics, the correct identification of causality stems from the use of causal trigger words. The selected 99 groups of causal event pairs were evaluated by manual comparison; the evaluation results are shown in Table 2.
Table 2. Food safety corpus causality map evaluation results.

Method              Accuracy (%)    Recall (%)
Paper method        43.5            51.5
Manual marking 1    55.3            53.2
Manual marking 2    52.5            58.3
It can be seen from Table 2 that the accuracy of this method is 43.5% and its recall is 51.5%; manual marking 1 achieves 55.3% accuracy and 53.2% recall; manual marking 2 achieves 52.5% accuracy and 58.3% recall. The results are presented in Fig. 3.

Fig. 3. Comparison of evaluation results of different corpora

The results show that the accuracy of the causality map constructed by this method on the food safety corpus is 43.5%. Although the method improves somewhat on manual marking in recall, it falls slightly short in accuracy, indicating that it still needs improvement over the baseline in modeling the evolution of causal events. However, untrained annotators reached only 55.3% and 52.5% accuracy through manual labeling, so the evaluation results of this method are basically consistent with those of manual marking.
5 Conclusion

Food safety is a major issue bearing on the national economy and the people's livelihood. Socialism with Chinese characteristics has entered a new era, and the new journey of building a moderately prosperous society and a modern socialist country in an all-round way has begun; improving China's food safety supervision capability faces new opportunities and challenges. This paper evaluates the construction of a causality map based on a food safety corpus. The results show that the accuracy of the causality map constructed by this method on the food safety corpus is 43.5%, which is basically close to that of manual labeling.
References

1. Devi, S.G., Sabrigiriraj, M.: A hybrid multi-objective firefly and simulated annealing based algorithm for big data classification. Concurr. Pract. Exper. 31(14), e4985.1–e4985.12 (2019)
2. Zhu, M., Chen, Q.: Big data image classification based on distributed deep representation learning model. IEEE Access 99, 1 (2020)
3. Zhai, J., Wang, M., Zhang, S.: Binary imbalanced big data classification based on fuzzy data reduction and classifier fusion. Soft Comput. 26(6), 2781–2792 (2021). https://doi.org/10.1007/s00500-021-06654-9
4. Cheng, F., Li, H., Brooks, B.W., et al.: Retrospective risk assessment of chemical mixtures in the big data era: an alternative classification strategy to integrate chemical and toxicological data. Environ. Sci. Technol. 54(10), 5925–5927 (2020)
5. Mohammed, M.S., Rachapudy, P.S., Kasa, M.: Big data classification with optimization driven MapReduce framework. Int. J. Knowl.-Based Intell. Eng. Syst. 25(2), 173–183 (2021)
6. Lakhwani, K.: Big data classification techniques: a systematic literature review. J. Natl. Remed. 21(2), 972–5547 (2020)
7. Srivani, B., Sandhya, N., Rani, B.P.: Literature review and analysis on big data stream classification techniques. Int. J. Knowl.-Based Intell. Eng. Syst. 24(3), 205–215 (2020)
8. Xing, W., Bei, Y.: Medical health big data classification based on KNN classification algorithm. IEEE Access 99, 1 (2019)
9. Qiu, Y., Du, G., Chai, S.: A novel algorithm for distributed data stream using big data classification model. Int. J. Inform. Technol. Web Eng. 15(4), 1–17 (2020)
10. Selvi, R.S., Valarmathi, M.L.: Optimal feature selection for big data classification: firefly with lion-assisted model. Big Data 8(2), 125–146 (2020)
11. Le, D.N., Acharya, B., Jena, A.K., et al.: NoSQL database classification: new era of databases for big data. Int. J. Knowl.-Based Organ. 9(1), 50–65 (2019)
12. Alhaisoni, M.M., Ramadan, R.A., Khedr, A.Y.: SCF: smart big data classification framework. Indian J. Sci. Technol. 12(37), 1–8 (2019)
Rethinking and Reconstructing the Life Education System of Universities Based on Big Data Analysis

Fan Xiaohu2 and Xu He1(B)

1 School of Marxism, Fuyang Normal University, Fuyang 236037, Anhui, China
2 School of Education, Fuyang Normal University, Fuyang 236037, Anhui, China
[email protected]
Abstract. Using Baidu Index as a tool, we surveyed and analyzed big data on the topic of "life education" to grasp the real dynamics of public concern, explore the characteristics of attention to life education, and understand its geographical distribution, population distribution, and the popularity and change trends of related terms. On this basis we reflect on the current focus of life education, enhance understanding of its significance, and explore and reconstruct effective channels and realization paths for life education in colleges and universities. This requires not only updating the concept of life education, guiding students to establish a correct life consciousness, deepening the construction of the curriculum system, and giving full play to the educational function of life curricula, but also creating practical education carriers, continuously stimulating the inherent potential of life experience, and constructing a joint education mechanism, so as to truly form a strong force for life education and effectively improve its effect.

Keywords: Life education system in colleges and universities · Baidu index · Big data
1 Introduction

Life education is the essential meaning of education and has received more and more attention from experts and scholars in recent years. Life education in mainland China started relatively late, attracting the attention of education scholars at the end of the 20th century and gradually developing in primary and secondary schools, colleges, and universities at the beginning of this century. In 2010, the Outline of the National Medium- and Long-Term Education Reform and Development Plan (2010–2020) proposed "attaching importance to safety education, life education, national defense education and sustainable development education", which for the first time included "life education" in the national education development strategy, and life education in institutions of higher learning entered a new stage of rapid development. At the beginning of
2020, the COVID-19 epidemic suddenly swept the world, and life education once again became a focus of social attention, prompting people to rethink life safety and the value and meaning of existence.

The healthy existence of the life ontology is the base point and the beginning of life education: without a healthy and strong body, life education cannot be carried out. However, life education is not limited to the ontological concern for natural life; it should extend to the level of social life, because the essence of man is not an abstraction inherent in each single individual; in its reality, it is the ensemble of social relations [1]. As the essential existence of human life, social life further enriches the connotation of life education and expands its space.

In the new era, life education needs policy support to indicate the direction, universities to play an active role in talent training, and the high attention and active participation of the public; it needs experts and scholars to carry out academic research at different levels, and it also needs quantitative analysis supported by big data. However, judging from current research results, research on life education in mainland China is still mainly confined to introducing the experience of advanced countries and regions and to theoretical discussion, and has not yet formed a systematic, scientific practice model and curriculum practice system for life education [2]. Therefore, in order to grasp the basic status quo of life education, understand the degree of public concern, clarify its development direction, and reconstruct the life education system in colleges and universities, nationwide survey data are urgently needed. Today, with the popularization of the Internet, more and more people are used to obtaining information online. As the world's largest Chinese search engine, Baidu launched the Baidu Index big data analysis platform, which makes it possible to obtain information related to life education extensively and deeply and provides comprehensive, powerful data support for this research.
2 Data Sources and Research Ideas

2.1 Data Sources

With the rapid development of Internet technology, the Internet penetration rate keeps improving, and with the wide application of search engines the Internet plays an ever more important role in social life. Search engines are an important network information platform for the public. When people search for information according to their needs and interests, the search data are recorded by the search tools, which not only provides resources for big data computing but also offers new research ideas and network survey tools for scientific research. According to CNNIC survey data on Chinese Internet users' search engine usage, as of December 2022 the number of Chinese search engine users reached 770 million, accounting for 77.8% of all Internet users. In terms of the market share of mobile and PC search engines, Baidu is far ahead (see Table 1).
Table 1. Market share (%) of search engine brands in China (Jan.–Jun. 2022).

Brand     Overall proportion    China PC terminal    China tablet terminal    China mobile phone terminal
Baidu     75.54                 51.68                89.69                    90.85
Bing      11.47                 25.87                6.46                     2.17
Sogou     4.83                  10.09                0.93                     1.47
Google    3.56                  7.33                 1.74                     1.13
Haosou    2.2                   4.42                 0.23                     0.79
As can be seen from the table above, whether in overall share or in the market shares of PC, tablet, and mobile terminals, Baidu occupies the leading position in China's search engine field. In particular, Baidu's market share on tablet and mobile terminals is close to or over 90%, an absolutely dominant position.

To meet Internet users' need for quick and convenient access to information, Baidu launched the Baidu Index, a data-sharing platform based on Baidu's massive user behavior data. Taking keywords as statistical objects, it scientifically analyzes and calculates the weighted sum of the search frequency of each keyword in Baidu web search. The index is updated daily and reflects netizens' active search demand and their attention to network information [3]. According to the search source, the index is divided into a PC search index (beginning June 2006) and a mobile search index (beginning January 2011). Baidu Index now offers functional modules such as "trend research", "demand graph", and "population portrait", where one can find search trends for selected keywords, gain insight into changes in Internet users' needs, monitor trends in media opinion, and profile user characteristics; each online query returns a demand graph, demographics, and a geographic distribution [4]. This not only makes up for the limitations of offline statistics and makes large-scale investigation possible, but also directly and objectively reflects netizens' interests and demands, meeting the data-collection requirements of research in the big data era.

In view of Baidu's leading position among Chinese search engines, this paper takes Baidu Index as the data source and "life education" as the keyword, and counts the frequency with which the term was searched on PC and mobile terminals from January 2013 to December 2022, so as to obtain information relevant to life education in colleges and universities. From the perspectives of time distribution, geographical distribution, group characteristics, and related search terms, the paper analyzes the changing trend of the public's online attention to life education, summarizes the current situation and characteristics of life education in mainland Chinese colleges and universities, and reflects on and reconstructs the college life education system.
2.2 Research Ideas

Firstly, the paper studies the temporal and spatial distribution and changing trend of attention to life education. The search index of "life education" from January 2013 to December 2022 was queried through Baidu Index, and the online search data were summarized to form a trend chart of online attention to life education, from which the temporal and spatial characteristics of the search index are analyzed. Then, through the "crowd portrait" function of Baidu Index, the paper explores the distribution and basic attributes of those paying attention to life education online, identifying the key links and important points for life education in colleges and universities.

Secondly, the search behavior around hot words related to life education is studied. Hot words related to "life education" from January 2013 to December 2022 were retrieved through Baidu Index to learn which network hot words accompany attention to life education, to find and track related social hot spots and topics, and to better understand the public's interest in life education, so that policy adjustment and program design can improve the quality and effectiveness of life education in colleges and universities.

Finally, on the basis of a scientific summary of the survey statistics and analysis, the paper identifies the current practical problems of college life education in mainland China and puts forward corresponding countermeasures and suggestions.
3 Data Analysis and Discussion

In recent years, although life education in mainland China has made great progress and people's understanding of the existence and meaning of life has improved, the environment in which young people grow up remains complicated. The outbreak of the epidemic has exacerbated the changes and uncertainties of future life, so wavering values, drifting life goals, and spiritual confusion easily appear. Many teenagers show anxiety, boredom, depression, jealousy, and hatred; some suffer autism, self-harm, drug abuse, or mutual violence, and, more seriously, commit suicide or harm others' lives. This is not only a problem of students themselves and family education, but also of school education and the whole society, and strengthening life education to address it is urgent. As Michael Landman pointed out, "Nature does only half of man; the other half is left to man himself." [5] Fortunately, the massive online search data on "life education" collected by Baidu Index provides reference material for understanding and studying current public attention to life education.

3.1 Trend Change Analysis

Entering the keyword "life education" into Baidu Index, the daily mean of the search index between January 2013 and December 2022 can be read from the graph, showing the change trend of the annual search index of "life education" (see Fig. 1). A sketch of this computation is given below.
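As a rough illustration of this step (not part of the original study), the sketch below computes the annual daily-mean search index from a hypothetical CSV export of the Baidu Index series; the file name and column names are assumptions.

```python
import pandas as pd

# Hypothetical CSV export of the daily Baidu search index for "life education";
# the file name and columns ("date", "index") are assumptions.
df = pd.read_csv("life_education_index.csv", parse_dates=["date"])

# Daily-mean search index per calendar year (the quantity plotted in Fig. 1)
annual_mean = df.set_index("date")["index"].resample("YE").mean()

print(annual_mean)               # e.g., the 2020 jump would show up here
print(annual_mean.pct_change())  # year-over-year relative change
```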
Fig. 1. The annual search index trend of “Life Education” from 2013 to 2022.
As can be seen from Fig. 1, from January 2013 to December 2019 the daily mean of the "life education" search index fluctuated but was relatively stable overall. Except for 2017, when the daily mean fell below 250, the other years fluctuated between 250 and 265, with no obvious differences. By 2020, however, the daily mean rose directly to 497, roughly doubling, which shows that people were paying far more attention to life education. In 2021–2022 the daily mean decreased slightly from 2020 but still showed breakthrough growth compared with the years before 2019.

The change in the search index reflects the butterfly effect of public emergencies: in February 2020, the sudden novel coronavirus pneumonia swept the world, bringing an unprecedented public health crisis to mankind [6]. In the three years of the continuing epidemic, people not only faced the fact that millions around the world lost their lives, but also actively responded to national epidemic prevention and control policies and implemented daily requirements such as nucleic acid testing and "two-code" monitoring. This taught people to cherish and revere life and prompted them to think about the meaning and value of life, laying a solid foundation and premise for the development of life education.

3.2 Analysis of Geographical Distribution

Using the time-period and region customization functions of Baidu search, the geographical distribution of the "life education" search-user portrait is compiled and analyzed to understand the distribution and ranking of the users' provinces, regions, and cities (see Fig. 2).
Fig. 2. Geographical distribution of “Life Education” search users from 2013 to 2022.
In terms of regional popularity, Guangdong pays the most attention to "life education", followed by Jiangsu, Zhejiang, Beijing, Sichuan, Shandong, Henan, Shanghai, Fujian, and Hebei. The top 10 cities are Beijing, Shanghai, Guangzhou, Shenzhen, Chengdu, Hangzhou, Chongqing, Suzhou, Zhengzhou, and Nanjing. Across China's seven geographical regions, East China pays the highest attention, followed by South China, North China, Southwest China, and Central China, while Northeast and Northwest China pay less attention. Generally speaking, searches are concentrated in the eastern coastal areas and the central-southern inland areas, decreasing from southeast to northwest. This is related to the level of local economic development: economically developed areas tend to pay more attention to ideological and cultural education, while less developed areas show a weakening trend in this field. It is also closely related to population density: a large labor force flows from the central and western regions into the eastern coastal areas, so the population naturally increases from west to east, producing obvious differences in geographical distribution. The same pattern is visible in the ranking of provinces and cities: the more developed they are, the higher their attention to "life education", while less developed ones rank low or not at all, fully reflecting the foundational and decisive role of economic development.

3.3 Population Distribution Analysis

Using the "crowd attribute" function of Baidu Index to analyze the attributes of the "life education" search population yields the age distribution, gender ratio, and other information of search users (see Figs. 3 and 4).

From the perspective of age distribution, people under 20 and those aged 20–29 account for high proportions, 32.05% and 31.54% respectively, and are the main groups attending to "life education" online. They are followed by the 30–39 group (22.51%) and the 40–49 group (11.26%); people over 50 are the fewest, at only 2.64%. As the group receiving or about to receive higher education, and as relatively active digital natives, teenagers pay great attention to information about "life education" in their long-term contact with the Internet, reflecting their interest in and demand for life cognition and growth and indicating that it is not only feasible but also
Fig. 3. Age distribution of online attention of “Life Education”.
Fig. 4. Gender distribution of “life education” network attention.
necessary and urgent to carry out life education in colleges and universities. In terms of gender distribution, women account for 70.3% and men for 29.7%: women are nearly 2.5 times as numerous and pay far more attention to life education than men. The reason may be that women are more sensitive and delicate in their experience of life and of life emotions. Especially since the COVID-19 epidemic swept the world, women
are more likely to think about their own life status and life value when facing daily increases in death counts and the deaths of relatives around them, and therefore pay more attention to the life growth and life promotion that life education brings.

3.4 Analysis of the Popularity and Change of Related Words

Based on the "demand graph" function of Baidu Index, the degree of correlation between keywords and related terms, and the search needs of the related terms themselves, the search popularity and search change rate of terms related to "life education" can be seen (see Table 2).

Table 2. Search heat of "life education" related terms in Baidu Index and month-on-month growth of search terms.

Sort    Related hot words                   Search terms with month-on-month growth
1       Life                                Be grateful for life
2       Self-consciousness                  Be kind to life
3       Cherish life                        Poetry about life
4       Reverence for life                  Parent-child companionship
5       Mental disorders                    Life is like a song
6       Campus safety education             Reverence for life
7       Death education                     Death education
8       Ideological and moral education     Cherish life
9       Crisis intervention                 Self-consciousness
10      Life safety education               Mental disorders
The search popularity of related terms reflects users' searches surrounding the central term and is computed comprehensively from the search indexes of the surrounding terms; the month-on-month growth of search terms reflects the rate of change of those surrounding terms. With the demand-graph function for the keyword "life education" in Baidu Index, "life", "self-consciousness", and "cherish life" rank as the top three related hot words, followed by "reverence for life", "mental disorders", "campus safety education", "death education", and "ideological and moral education", and finally "crisis intervention" and "life safety education". It can be seen that the public not only recognizes life and knows how to cherish and respect it, but is also aware of a series of problems facing the existence of life; strengthening physical and mental health education is urgent, which provides strong data support for colleges and universities to carry out life education in line with public demand. It is worth noting that in the ranking of search terms by month-on-month growth, "be grateful for life" and "be kind to life" top the list, reflecting to a certain extent the public's attitude toward life, their expectations for life growth, and the importance they attach to life education.
4 Research Conclusions and Suggestions

The search behavior presented by Baidu Index directly reflects the public's concern and cognitive demand for life education, and to some extent the overall characteristics and changing trend of life education in mainland China. In terms of trend, public attention to life education developed steadily from 2013 to 2019; after 2020, under the influence of the novel coronavirus outbreak, attention increased significantly and life education drew wide concern. In terms of regional distribution, attention in the eastern region is clearly higher than in the central region, the central region is on the whole higher than the western region, and central cities are higher than other cities; the influencing factors mainly lie in each region's economic development and education level. In terms of the population concerned, teenagers under 29 are the main group, reflecting the urgency of strengthening life education in schools. Judging from the popularity and changing trend of related words, the public pays most attention to the physiological and psychological health problems connected with life education, indicating the areas where life education needs strengthening. Colleges and universities are the cradle of talent training and an important base for life education; they must deepen their understanding of its significance and explore effective channels and paths.

4.1 Renew the Concept of Life Education and Guide Students to Establish Correct Life Consciousness

Colleges and universities need to renew the concept of life education, locate its value and meaning scientifically, and guide students to establish a correct life consciousness. The first is objective cognition of the individual value of life. Everyone has only one life, and the existence of life is the premise of all other activities. Students should learn to cherish life, treat it well, preserve it, and never give it up lightly; they should deal calmly with setbacks and death that may occur in the course of life, and realize physical and mental health through exercise and psychological adjustment. The second is correct understanding of the social value of life. From the moment of birth, everyone's life is a continuation of the parents' lives and a carrier of social relations; knowing how to respect, revere, and develop life, and treating differences between one's own life and others' objectively, allows individual life to extend through its relations and promotes the harmonious development of relational life. The third is deep understanding of the spiritual value of life. Life education should guide college students to inquire about life, create life, and surpass life, to integrate individual life into the development of the times, to shoulder life's responsibilities actively, to create life value constantly, and to expand the connotation of life.
4.2 Deepen the Construction of Curriculum System and Give Full Play to the Educational Function of Life Curricula

The life education curriculum is the basic carrier guiding students' life growth, as well as the main channel and front for carrying out life education systematically and centrally. Colleges and universities should strengthen the curriculum system, form an integrated force of curriculum functions, and ensure the orderly development and effective implementation of life education. First, strengthen the construction of the life education curriculum system: treat life education as a basic concept throughout the whole process of talent training, incorporate life education courses into the overall design of talent training programs, and gradually establish a curriculum system combining elective and compulsory courses, connecting theory and practice, and complementing in-class and extracurricular activities. Second, make effective use of curriculum resources: explore the life education elements of professional courses such as biology, ethics, and psychology, as well as public courses such as traditional culture, ideological and political theory, and mental health education; strengthen the penetration of life education; give play to the role of different subjects in constructing the meaning of life; and form students' comprehensive cognition of its connotation. Third, strengthen online life education courses: actively introduce or develop online courses adapted to the new learning characteristics of college students in the new era, meet their need to learn conveniently anytime and anywhere, stimulate interest and self-directed learning, resolve students' confusion about life in a timely manner, and promote their harmonious, healthy physical and mental development.

4.3 Create a Practical Education Carrier to Continuously Stimulate the Inner Potential of Life Experience

Life education activities in colleges and universities are the "hidden curriculum" and the "second classroom" of life education for college students. The purpose is to let students perceive the beauty of life, cherish its happiness, enhance its meaning, create its value, and enrich its connotation, understanding that the fullness of life lies in serving others and contributing to society rather than in greed for pleasure and blind demand; students should be guided to stimulate their life potential and raise their realm of life by integrating into society and caring for others. On the one hand, life education can be integrated into campus cultural activities. Regularly inviting experts from inside and outside the school to hold themed lectures and forums lets students follow the theoretical frontier of life education and build a complete system of life concepts; fire drills, safety publicity, and quality-development training teach survival skills and self-protection; classic reading, film appreciation, and extreme challenges give students profound experiences, making them realize that life is finite and short and enhancing their sense of responsibility and mission.
On the other hand, the space for social practice of life education should be actively expanded. By organizing students to visit and experience
institutions such as hospitals, funeral homes, drug rehabilitation centers, and prisons, college students can feel the possible realities of life and learn to face, cherish, and protect it; by visiting halls of fame, museums, revolutionary memorial halls, martyrs' cemeteries, and other educational bases, they can understand the value and significance of life and appreciate its states; by going into communities and rural areas to carry out public welfare activities, and by participating in unpaid blood donation, voluntary donation, caring for the elderly, and caring for left-behind children, they can enhance their sense of responsibility and experience the meaning and achievement of life.

4.4 Construct the Joint Education Mechanism and Truly Form the Powerful Force of Life Education

Life education is an all-round, multi-channel systematic project requiring the close cooperation and participation of schools, families, and society to build a multi-level, integrated life education system, form a strong joint force, and promote the comprehensive, healthy development of college students' body and mind. The first is to strengthen the function of family life education. The family is the origin of individual life and growth, and of life relationships and life consciousness; family members' views on life directly affect the early formation of college students' life concepts. In family life, parents should take the initiative to attend to their children's life experience, nourish their growth, respect their life course, communicate with them on an equal footing, and establish a democratic, harmonious relationship, laying a good foundation for life education. The second is to create an atmosphere of social life education. Society is the platform and soil for the development of individual life, and society's attention to life education directly determines its success or failure. All sectors should actively attend to the life education of college students, abandon utilitarian, instrumentalized education models and pure employment orientation, form an overall atmosphere that values, respects, and cares for life, and create a social environment conducive to students' growth, letting them feel the dignity of life and consciously enhance its value and potential. The third is to create a cooperative platform for life education. Relying on the rapid development of network information technology, life education for college students can use the network as a carrier to create a cooperation platform joining school, family, and society. Through complementary advantages, shared information, and shared resources, a trinity of cooperative education space can be created, realizing instant transmission and exchange of information, jointly committing to the timely discovery and effective solution of college students' life problems, and effectively guiding their life growth.

Acknowledgements.
This work is supported by the key project of teaching research of Anhui quality project in 2019 “Innovation of scientific research classification evaluation system and practical exploration of science and education integration in applied undergraduate universities” (2019jyxm0270) and the commissioned project of teaching reform research of Anhui quality
project in 2021 “The ‘Anhui action’ of higher education classification development under the perspective of integrated development of Yangtze River Delta” (2021zdjgxm013).
References

1. Marx, K., Engels, F.: Selected Works of Marx and Engels. People's Publishing House (2012), p. 60
2. Mingjing, G.: Overview of the research status of major risk life education in China in recent 20 years: based on the statistical analysis of relevant literature of China National Knowledge Infrastructure (CNKI). Journal of Fuqing Branch of Fujian Normal University 36(4), 78–91 (2016)
3. Yongbin, W.: Who is paying attention to the core socialist values: big data analysis based on Baidu Index. Studies in Marxism 36(2), 124–128 (2018)
4. Xuehong, M.: Analysis of life education concern based on Baidu Index. Journal of Heilongjiang Ecological Engineering Vocational College 35(2), 75–80 (2022)
5. Landman, M.: Philosophical Anthropology. Trans. Zhang Tianle. Shanghai Translation Publishing House (1990), p. 239
6. Qiang, L.: Reflections on restarting life-oriented education under the major epidemic crisis. Journal of Jiyuan Vocational and Technical College 19(3), 88–92 (2020)
Unsupervised Data Anomaly Detection Based on Graph Neural Network

Ning Wang1(B), Zheng Wang2, Yongwen Gong2, Zhenlin Huang1, Xing Wen1, and Haitao Zeng1

1 China Southern Grid Extra High Voltage Power Transmission Company, Guangzhou,
Guangdong, China
[email protected]
2 Beijing Smartchip Semiconductor Technology Company Limited, Beijing, China
Abstract. In view of the time-consuming and laborious problem of manually monitoring equipment abnormalities in converter station scenarios, the anomaly detection algorithm proposed in this paper uses deep learning to realize automatic equipment abnormality monitoring. Although supervised learning currently outperforms unsupervised learning overall, it faces problems such as the difficulty of labeling massive data and the degraded performance of pre-trained models in real scenarios. This paper therefore proposes an anomaly detection system based on a graph neural network, which attends not only to visual features but also introduces attribute features to extract deep features of the monitoring data. The algorithm first extracts attribute features from the data and constructs a graph neural network over these attributes. During iteration, the graph neural network mines data information in two dimensions, visual and attribute, and mitigates the performance degradation caused by the difference between source-domain and target-domain data. A model pre-trained on the source domain can thus be applied to the target scenario while maintaining its original anomaly detection capability.

Keywords: Deep Learning · Graph Neural Network · Attribute Features
1 Introduction

With the intelligent production of electric power and the digital transformation of network monitoring, smart substations, and smart meters in power transmission systems, the scale and variety of power monitoring data have grown rapidly, and huge power grids generate enormous amounts of data. From power generation to consumption there are many data collection sources, and data collected in the form of images in particular concentrates specific data information. A converter station holds a large amount of data for monitoring the operating status of equipment, and judging abnormalities manually would be time-consuming and labor-intensive.

In recent years, with the development of science and technology, computer hardware has matured, which has brought down the price of graphics
processors (GPUs) that accelerate deep network training. The popularity of hardware and the improvement of software architectures have accelerated research on and application of anomaly detection based on deep learning, which can automatically learn image features from massive image data and has been widely studied at home and abroad.

Once its structure is fixed, a neural network model improves its accuracy through training and parameter adjustment. Training methods fall roughly into three types: supervised learning, unsupervised learning, and semi-supervised learning. Supervised learning lets the network learn from samples under supervision, meaning every sample carries a label: for example, a prediction model is trained when the label values are continuous, with the model's output approximating the label for each input, or a classification model is trained when the labels are discrete. The core requirement is that samples be labeled, so that the difference between output and actual value can adjust the parameters at each learning step. Unsupervised learning takes another approach: unlabeled samples are input and the model learns their features by itself. The goal of training is to classify samples, but with no labels the work is actually clustering; the model does not know by what standard to group the samples and instead discovers a grouping criterion during training, which may run counter to the designer's original intent. Therefore, for the same classification task, a supervised model usually performs much better than an unsupervised one.

In the past few years, fully supervised deep learning algorithms have made great progress. However, because of domain differences between datasets, model performance drops significantly when migrating to new systems. To maintain performance, data from the target domain must be used to continue training the model pre-trained on the source-domain dataset, but labeling large-scale target-domain data is very time-consuming and expensive. The core problem of semi-supervised learning is how to balance model performance against the number of labeled samples, so that with only a few labeled samples a semi-supervised model can make full use of large amounts of unlabeled data and approach the performance of a fully supervised model. In semi-supervised classification, training samples usually consist of a labeled set and an unlabeled set, the former far smaller than the latter. Realizing semi-supervised learning usually requires assumptions about the distribution of labeled and unlabeled data; the main assumptions are the smoothness assumption, the cluster assumption, and the manifold assumption.
The Smoothness Assumption states that if two samples are close in the data distribution, their labels should also be similar, so the distance between labels can be evaluated by the distance between samples. In semi-supervised classification under this assumption, if an unlabeled sample lies close to a labeled sample in the data distribution, its prediction should resemble that sample's gold-standard label; conversely, if it lies far away, its prediction should differ.

The Cluster Assumption states that the data distribution is uneven and samples of the same category tend to gather into clusters; the decision boundary between clusters should lie in regions where data are sparse, which is why this is also called the Low-density Separation Assumption. In semi-supervised classification under this assumption, the small number of labeled samples makes the aggregation effect in sample space weak and a suitable decision boundary hard to determine, so a better boundary can be obtained by observing the distribution of large numbers of unlabeled samples.

The Manifold Assumption states that high-dimensional data contain redundancy and are often embedded into low-dimensional manifolds for dimensionality reduction; samples in a local neighborhood of a low-dimensional manifold are likely to share similar attributes, so their labels are similar.

Semi-supervised learning trains with a small number of labeled samples and a large number of unlabeled ones. In the anomaly detection task studied here, only a few samples are labeled, and detection is in fact a binary classification task, so semi-supervised learning is appropriate. Its idea is to label the unlabeled data using the existing labeled data, during or even before training; these labels are inaccurate, so a confidence rule is generally introduced, but in essence semi-supervised learning is transformed into supervised learning. Among such methods, Unsupervised Domain Adaptation (UDA) has begun to attract attention; its purpose is to transfer knowledge from labeled data to an unlabeled domain. However, there are many kinds of data on site, and the shooting conditions of video images differ greatly; a model trained on existing datasets suffers performance degradation when applied directly to on-site scenes, and in such cases training may not improve the classifier and may even cause negative optimization. Further research on unsupervised domain adaptation is therefore needed to solve this degradation problem.

This paper proposes an unsupervised data feature extraction method based on attribute-assisted clustering for deep feature extraction from monitoring data. Specifically, an unsupervised domain adaptation method is used to transfer the knowledge
in the labeled data to the unlabeled domain. For the unlabeled data, an attribute-assisted clustering method based on the graph neural network is used to enhance the data features, and the system finally judges whether the data are abnormal. A rough sketch of the attribute-assisted idea follows.
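As a rough illustration of the attribute-assisted idea rather than the paper's exact model, the sketch below builds a k-nearest-neighbor graph in the attribute space and performs one round of neighbor averaging to enhance the visual features; all shapes and data are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Hypothetical features for 100 monitoring images:
visual = rng.normal(size=(100, 128))   # e.g., CNN embeddings
attrs = rng.normal(size=(100, 16))     # e.g., device type / location / time encodings

# Build a kNN graph over the attribute space
nn = NearestNeighbors(n_neighbors=6).fit(attrs)
_, idx = nn.kneighbors(attrs)          # idx[i] lists i and its 5 nearest neighbors

# One round of graph aggregation: average each node's visual feature with
# those of its attribute neighbors (a simplified GNN message pass)
enhanced = visual[idx].mean(axis=1)
```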
2 Related Work

Anomaly detection means finding patterns in data that do not conform to normal or expected behavior. It can be regarded as a special classification task, so it can be addressed both by traditional machine learning and by deep learning. The anomalous data analyzed in this paper are all images; anomaly detection on image data falls into two categories, traditional machine learning algorithms and deep-learning-based algorithms. Traditional machine learning needs feature engineering to refine and clean data manually [1, 2].

In 2016, Vaikundam et al. extracted SIFT features from images, filtered descriptors with a clustering algorithm, and then located abnormal regions by image matching. In 2017, Fu Peiguo et al. proposed a local-distance anomaly detection algorithm based on density-biased sampling, ranking the local outlier coefficient of each data point; the higher the score, the more likely an anomaly. In 2018, Sarikan et al. used CCTV cameras to collect consecutive frames for real-time, accurate detection of abnormal vehicle movement direction to reduce accident risk; after extracting location features they applied a KNN classifier on a public highway with good detection results. In 2021, Lochner and Bassett proposed the Astronomaly framework, combining new personalized active learning methods for different types of astronomical data; considering the complexity, computing cost, and relatively low performance of deep learning in this setting, the framework incorporates two traditional machine learning anomaly detection algorithms, iForest and LOF.

Deep learning anomaly detection models are mostly used on image data, because image datasets are high-dimensional and deep learning better extracts the required features [5–8]. In 2015, Jinwon et al. proposed anomaly detection using the reconstruction probability of a variational autoencoder, which experiments showed to be superior to autoencoder- and principal-component-based methods. In 2017, Thomas et al. proposed AnoGAN, a deep convolutional generative network that learns normal anatomical variability; studies on retinal optical coherence tomography images showed it can correctly identify abnormal images, such as those containing retinal fluid or highly reflective lesions. In 2019, Thomas et al. proposed f-AnoGAN, which identifies abnormal images and image fragments; in experiments on optical coherence tomography data, f-AnoGAN outperformed alternative methods with higher detection accuracy. In 2020, Kaplan et al. used the KDDCUP99 dataset as an anomaly detection task and proposed two different BiGAN training methods, demonstrating improved BiGAN performance on anomaly detection. Chow et al. used a convolutional autoencoder to detect abnormal defects in concrete structures; the proposed technique is robust and applicable to
a wide range of defects. Heger et al. used a convolutional autoencoder to detect anomalies in formed sheet metal, checking whether sheet-metal images from a real production line contained cracks and wrinkles; this avoids the need for a large number of defect samples to obtain reliable detection accuracy. Kozamernik et al. proposed a variational autoencoder model, combined with further image processing, to more reliably detect defects in the KTL coating used for surface protection of metal parts in the automotive industry. In 2021, Liu et al. proposed DPAE, a semi-supervised anomaly detection method based on a dual-prototype autoencoder following the encoder-decoder-encoder paradigm, and constructed the aluminum profile surface defect (APSD) dataset; extensive experiments on four datasets showed that DPAE is effective and superior to state-of-the-art methods.

In research on cross-domain detection, cluster-based pseudo-label algorithms are currently among the most effective unsupervised domain adaptation methods. Pseudo-labelling is a common technique in unsupervised learning. Its core idea is to use a model trained with supervision on labeled data to predict labels for unlabeled data, i.e., "pseudo-labels"; after screening, the model is updated with the newly labeled data, and the process repeats until the model converges. For cross-domain anomaly detection, applying pseudo-label techniques is an open-set problem: the number of anomalies in the target domain is unknown, so the number of labels cannot be fixed, and the image shooting conditions of the target and source domains also differ. Target-domain monitoring images are therefore usually labeled by clustering and similar methods. The general detection framework is shown in Fig. 1: first, image features of the target domain are extracted with the model trained on the source domain; second, a clustering method such as K-Means assigns pseudo-labels to the target-domain image samples; third, the model is updated using the pseudo-labeled target-domain samples as training data; finally, the pseudo-label clustering and model training steps are repeated until the model converges. At present, supervised anomaly detection based on deep learning is usually trained as classification, treating each fault type as a class, while testing is a ranking process based on feature distance. In anomaly detection tasks, metric loss and classification loss are therefore usually computed together, which yields better recognition accuracy. Clustering thus plays a key role in unsupervised anomaly detection based on pseudo-label clustering. Two commonly used cluster analysis algorithms are the K-Means clustering algorithm [9] and density-based clustering [10].

1. K-Means Clustering Algorithm

The K-Means algorithm is relatively simple: the data is divided into K clusters according to the K categories we specify, regardless of whether K matches the true data distribution.
Fig. 1. General flow of unsupervised anomaly detection training based on the pseudo-label clustering algorithm
First, K sample points are chosen as initial centers through random initialization. Second, for each sample point y, its distance to the K centers is computed, and the nearest center is taken as the sample's label. Third, a new center is computed for each of the K clusters. The second and third steps are repeated until the centers no longer change. The advantage of K-Means is speed: all we do is compute distances between points and cluster centers, giving linear complexity O(n). However, K-Means has two drawbacks. First, the number of clusters must be set in advance, which is not trivial; ideally a clustering algorithm would discover this itself, since the purpose of clustering is to uncover hidden information in the data. Second, K-Means starts from randomly selected cluster centers, so different runs of the algorithm may produce different clusterings, leading to unrepeatable and inconsistent results, whereas other clustering methods are more consistent. K-Medians is a related algorithm that recomputes each cluster center as the median vector of the cluster rather than the mean. It is less sensitive to outliers (because of the median) but much slower on large datasets, since the vectors must be sorted at every iteration.

2. Density-Based Spatial Clustering

The idea of Density-Based Spatial Clustering of Applications with Noise (DBSCAN) differs considerably from that of K-Means. Its design groups data points that are densely packed together into the same category; being density-based, it matches human intuition more closely and can handle data distributions for which K-Means fails to produce good clusterings. DBSCAN has two parameters to set: the neighborhood radius and the neighborhood density threshold MinPts. Empirically, MinPts is often set to 4, and the radius can then be estimated from the k-distance computed with the MinPts value. This work determines the two parameters in the same way. On this basis, the values of the radius and MinPts can
also be adjusted, and appropriate parameter values can be selected through experimental comparison. In much work on cross-domain anomaly detection, a convolutional neural network trained on the source domain is used directly to extract visual features of target-domain images, and pseudo-labels are then derived with the pre-trained model, which can already give good recognition results. In this process the visual features of the input samples are extracted by the source-domain model, so when the visual differences between target-domain and source-domain samples are large, the source-domain model cannot produce high-confidence predictions. In earlier unsupervised cross-domain recognition work based on pseudo-label clustering, researchers often ran different clusterings over the feature information of a convolutional neural network to obtain better cross-domain recognition accuracy; it follows that cross-domain recognition that also exploits attribute information should reach even higher accuracy. Inspired by this, this paper designs a cross-domain anomaly detection method that uses both attribute and visual feature information, combines unsupervised convolutional neural network learning with unsupervised graph convolutional network learning, and uses pseudo-label techniques and multiple clustering methods to improve the recognition ability of the convolutional neural network in the target domain, so that the accuracy of deep learning based cross-domain anomaly detection can reach a higher level.
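To make the pseudo-label clustering loop of Fig. 1 concrete, the following minimal Python sketch (our illustration, not the authors' code) assigns K-Means cluster ids to target-domain features as pseudo-labels, keeps only the samples closest to their cluster centers as a confident subset, and retrains a classifier on them; the logistic-regression stand-in for the detection network and the distance-based confidence rule are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def cluster_pseudo_labels(features, n_clusters=2, keep_ratio=0.8):
    """Cluster target-domain features and keep the most confident samples."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    labels = km.labels_
    # Confidence proxy: distance to the assigned cluster centre (smaller = better).
    dists = np.linalg.norm(features - km.cluster_centers_[labels], axis=1)
    mask = dists <= np.quantile(dists, keep_ratio)
    return labels[mask], mask

def pseudo_label_training(features, n_rounds=5):
    """Alternate pseudo-labelling and model updates until (near) convergence."""
    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        labels, mask = cluster_pseudo_labels(features)
        clf.fit(features[mask], labels)  # update the model on pseudo-labels
        # In the full method the features would be re-extracted here with the
        # updated network; with fixed features the loop converges quickly.
    return clf

# Example: 200 random 64-d stand-ins for target-domain feature vectors.
clf = pseudo_label_training(np.random.rand(200, 64))
```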
3 Technology

This paper presents an unsupervised data anomaly detection method based on attribute-assisted clustering, which addresses the significant drop in model performance that occurs when a model is migrated to the target application scenario because of domain differences between the training dataset and that scenario. The method consists of four main parts. The first is identification of the attribute set, which uses OCR character recognition to read the text content in the target image data and regular expressions to match attribute information within that text. The second is the construction of the data attribute graph: based on a graph neural network, the input monitoring images serve as nodes and the similarity of data attributes as edges. The third is attribute-assisted feature refinement, which improves pseudo-label quality and thereby transfers knowledge from the labeled data to the unlabeled domain. The last is anomaly detection, which judges whether the data is abnormal.

3.1 Identification of Attribute Set

This part identifies the attributes of the target image data and determines its attribute information. Here, the target image data includes information such as shooting time, monitored equipment, shooting position, and the parameters displayed on the equipment. For example, consider a target image monitoring a
smart electricity meter: from this image we can read that the shooting time is 12:13 on November 18, 2022, the monitored equipment is a smart electricity meter, the shooting location is No. 3 in Area A, and the consumption shown on the meter is 500023 kWh. Attribute recognition thus yields the attribute information in the image, which can include time, place, equipment, and equipment parameters. The target image data is power monitoring data of the transmission system, covering network monitoring, smart substations, smart meters, and so on; it is generally collected as images, mainly to monitor the operating status of equipment, and can be used to identify whether equipment is abnormal. When processing the data, the text content of the target image is recognized with OCR text recognition technology, and a regular expression is then matched against that text to obtain the attribute information. Since the target image data contains information such as time, device name, and operating status, it is unstructured, and the separators have variable length, so structuring the data by identifying separators between fields is difficult. Matching with regular expressions effectively avoids errors caused by separator identification and minimizes the impact of separators.

3.2 Construction of Data Attribute Graph

A graph is a data structure composed of nodes and their relationships (edges). It is not only a powerful representation but also a common non-Euclidean data structure in many fields of natural science. In recent years, machine learning research on graph-structured data has attracted growing attention: in research fields [16] such as social networks [11, 12], physical science [13], protein structure and interaction networks [14], and knowledge graphs [15], graph structures and algorithms on graphs are needed for node classification, link prediction, and cluster analysis. Graph Neural Networks (GNNs) are deep learning based methods that operate on the graph domain and have been widely applied to graph analysis problems. Convolutional Neural Networks (CNNs) [17] have revolutionized deep learning and artificial intelligence in recent years; their strength lies in extracting highly representative features from Euclidean data. Graph Convolutional Networks (GCNs) are derived from convolutional neural networks. The power of CNNs essentially stems from three characteristics: local connections, shared weights, and multi-layer architecture [18]. Conventional Euclidean data such as text, images, and speech have translation invariance, so CNN models can be applied directly. However, different parts of a graph may have different structures, translation invariance is lost, and convolution and pooling cannot be defined as usual, so CNN models cannot be applied directly. A major task of graph convolution network research is therefore how to redefine these two operations [19]. Graph convolution networks can be divided into spatial methods and spectral methods, and many solutions have been developed.
A graph convolution network based on the spectral method first transforms the graph data from the spatial domain to the spectral domain with the Fourier transform, then defines and computes convolution in spectral space, and finally
converts the data back to the spatial domain with the inverse Fourier transform. Spectral methods include the Spectral Network [20], ChebyNet [21], the Graph Convolution Network (GCN), and the Adaptive Graph Convolution Neural Network (AGCN) [22]. Spectral methods require an eigendecomposition of the Laplacian matrix, which consumes a great deal of space and time on large graphs. A graph convolution network based on the spatial method defines convolution directly, by defining neighbors, computing over them, and sharing parameters; to some extent this is also a special spectral method. Spatial methods include diffusion-convolutional neural networks (DCNNs) [23], dual graph convolutional networks (DGCN) [24], mixture model networks (MoNet) [25], and the graph sampling-and-aggregation network (GraphSAGE). The Graph Sample and Aggregate (GraphSAGE) framework used in this method is briefly described here. GraphSAGE is a graph convolution framework based on spatial methods: it generates embeddings by sampling and aggregating features from the local neighborhood of nodes during forward propagation, and trains the graph convolution layers through backpropagation. The basic sampling-and-aggregation procedure is as follows:

1. Randomly Sample k-order Neighbors

For each node, GraphSAGE considers only its local neighborhood rather than the entire network structure. To reduce computational complexity, a fixed number of neighbor nodes is randomly sampled for each node at each hop.

2. Aggregate Neighbor Nodes with an Aggregation Function to Obtain New Node Representations

Aggregation proceeds toward the target node from far to near, layer by layer: first the node representations of first-order neighbors are updated from those of second-order neighbors, then the representation of the target node is updated from the updated first-order neighbors. Concretely, in each convolution layer a node's representation is aggregated and updated from the previous layer's representations of its neighbor nodes. GraphSAGE provides three types of aggregation functions: the mean aggregator, the LSTM aggregator, and the pooling aggregator; custom aggregation functions can also be defined.

3. Perform Downstream Tasks and Update the Network

The updated node representations from the second step are fed to a fully connected layer to predict node labels or complete the corresponding task. Either a supervised or an unsupervised loss function can be set: the supervised loss follows the downstream task, while the graph-based unsupervised loss makes the representations of adjacent nodes more similar and those of non-adjacent nodes more distinct. The convolution layers are then optimized through backpropagation.
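As an illustration of the sampling-and-aggregation step, here is a minimal NumPy sketch of one GraphSAGE layer with a mean aggregator; the weight matrices, feature sizes, and toy graph are assumed for the example, and this is a sketch rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(adj_list, node, k):
    """Randomly sample a fixed number k of neighbours (with replacement if few)."""
    nbrs = adj_list[node]
    return rng.choice(nbrs, size=k, replace=len(nbrs) < k)

def graphsage_mean_layer(h, adj_list, W_self, W_nbr, k=5):
    """One GraphSAGE layer with a mean aggregator: each node's new
    representation combines its own features with the mean of k sampled
    neighbour features, followed by a ReLU."""
    out = np.empty((h.shape[0], W_self.shape[1]))
    for v in range(h.shape[0]):
        nbr_mean = h[sample_neighbors(adj_list, v, k)].mean(axis=0)
        out[v] = np.maximum(h[v] @ W_self + nbr_mean @ W_nbr, 0.0)
    # L2-normalise the representations, as in the original GraphSAGE.
    return out / (np.linalg.norm(out, axis=1, keepdims=True) + 1e-8)

# Toy example: 4 nodes with 8-d features on a small ring graph.
h = rng.random((4, 8))
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
W_s, W_n = rng.random((8, 16)), rng.random((8, 16))
print(graphsage_mean_layer(h, adj, W_s, W_n).shape)  # (4, 16)
```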
The training process shows that the graph sampling-and-aggregation network is a sample-then-aggregate procedure. Under unsupervised conditions it does not require the number of aggregation classes to be specified, which is more reasonable for new scene datasets with unknown distributions. Moreover, GraphSAGE learns a model that expresses nodes through their neighbors and does not constrain the structure of the graph, so new nodes can be added at any time; it supports distributed training and reduces memory requirements. Because of these favorable characteristics, the graph convolution algorithm in this paper uses the graph sampling-and-aggregation network.

In this paper, the attribute graph is a data structure expressing the attribute-based relationships between target images. Through the graph neural network, each input target image is a node and the similarity of attribute information defines the edges; clustering then explores the similarity of target images in attribute space to achieve node-level aggregation. If the similarity between two nodes is greater than the threshold, they are connected. For example, with a threshold of 3, suppose two target images A and B have had their attributes identified: A has 3 attributes, B has 5, and they share 2, so their similarity is 2. Since the similarity is less than the threshold, the two image nodes are not connected. The adjacency based on attribute similarity is

$$A_{ij} = \begin{cases} 1 & \text{sim}(a_i, a_j) > \tau \\ 0 & \text{otherwise} \end{cases} \quad (1)$$
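A minimal sketch of Eq. (1) and the edge-building rule follows; the attribute strings are hypothetical stand-ins for the time/place/device attributes extracted in Sect. 3.1:

```python
def attribute_similarity(attrs_i, attrs_j):
    """Similarity = number of attribute values two images share (Eq. 1)."""
    return len(set(attrs_i) & set(attrs_j))

def build_attribute_graph(attr_sets, tau):
    """Connect two image nodes iff their attribute similarity exceeds tau."""
    n = len(attr_sets)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if attribute_similarity(attr_sets[i], attr_sets[j]) > tau]

# The worked example from the text: A has 3 attributes, B has 5, 2 shared.
A = {"meter", "area_a", "2022-11-18"}
B = {"meter", "area_a", "2022-11-19", "no3", "500023kwh"}
print(attribute_similarity(A, B))            # 2
print(build_attribute_graph([A, B], tau=3))  # [] -> 2 <= 3, so no edge
```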
3.3 Attribute-Assisted Feature Refinement

The previous part established the basic attribute graph, forming the initial graph neural network. We divide the power dataset into several equal sub-datasets and feed them into the graph neural network for training. The target image data is input to the detection model to obtain intermediate features; these intermediate features are then combined with the deep features and passed on to obtain predicted target-domain data with pseudo-labels. In the graph neural network, nodes with similar attributes gather while nodes with dissimilar attributes move apart, so attribute classes form in the feature space; the new features account for both the spatial feature relationships and the attribute relationships. The detection model proposed in this paper has two stages: a feature extraction stage and a classification stage. After the target image data enters the model, intermediate features are obtained in the feature extraction stage; once combined with the deep features, they are fed into the classification stage to produce target image data with pseudo-labels. A pseudo-label here indicates whether the device is abnormal. Combining the intermediate features with the deep features in the classification stage improves pseudo-label quality, which benefits model training.
3.4 Anomaly Detection

Next, the predicted target-domain data with pseudo-labels is used as sample data to train the detection model. In this paper, the detection model uses a Multi-Layer Perceptron (MLP) as the classification network; the network output indicates whether the monitored device is abnormal, and training iterates to convergence. Trained in this way, the detection model adapts better to the new target environment and maintains good detection performance.
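As a concrete sketch of this detection stage (our illustration with random stand-in data; the feature dimensions and network sizes are assumptions), an MLP can be trained on the refined features with their pseudo-labels:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical refined features: intermediate features concatenated with
# deep features, as described in Sect. 3.3 (shapes are illustrative).
intermediate = np.random.rand(500, 64)
deep = np.random.rand(500, 128)
features = np.concatenate([intermediate, deep], axis=1)
pseudo_labels = np.random.randint(0, 2, size=500)  # 1 = abnormal, 0 = normal

# Multi-layer perceptron classifier, iterated to convergence.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
mlp.fit(features, pseudo_labels)

# Inference: flag whether newly monitored data is abnormal.
new_sample = np.random.rand(1, 192)
print(bool(mlp.predict(new_sample)[0]))
```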
4 Conclusion

In this paper, an anomaly detection algorithm based on a graph neural network is proposed. To address the time-consuming and labor-intensive manual monitoring of equipment in the converter station scenario, automatic equipment anomaly monitoring is realized with machine learning. Although supervised learning currently outperforms unsupervised learning overall, it faces problems such as the difficulty of labeling massive data and the performance degradation of pre-trained models in real scenes. This paper therefore proposes an anomaly detection system based on a graph neural network that considers not only visual features but also attribute features when extracting deep features from monitoring data. The algorithm first extracts the attribute features of the data and constructs a graph neural network from these attributes; during iteration, the network mines data information in both the visual and the attribute dimensions, alleviating the performance degradation caused by the data difference between the source and target domains. The model pre-trained on the source domain can thus be applied to the target scene while maintaining its original anomaly detection capability.
References

1. Vaikundam, S., Hung, T.Y., Liang, T.C.: Anomaly region detection and localization in metal surface inspection. In: 2016 IEEE International Conference on Image Processing (ICIP), pp. 759–763 (2016)
2. Peiguo, F., Xiaohui, H.: Local distance anomaly detection method based on density biased sampling. J. Softw. 28(10), 2625–2639 (2017)
3. Sarikan, S.S., Ozbayoglu, A.M.: Anomaly detection in vehicle traffic with image processing and machine learning. Procedia Comput. Sci. 140, 64–69 (2018)
4. Lochner, M., Bassett, B.A.: Astronomaly: personalised active anomaly detection in astronomical data. Astron. Comput. 36, 100481 (2021)
5. Kaplan, M.O., Alptekin, S.E.: An improved BiGAN based approach for anomaly detection. Procedia Comput. Sci. 176, 185–194 (2020)
6. Chow, J.K., Su, Z., Wu, J., Tan, P.S., Mao, X., Wang, Y.H.: Anomaly detection of defects on concrete structures with the convolutional autoencoder. Adv. Eng. Inform. 45, 101105 (2020)
7. Heger, J., Desai, G., Abdine, M.Z.E.: Anomaly detection in formed sheet metals using convolutional autoencoders. Procedia CIRP 93, 1281–1285 (2020)
8. Kozamernik, N., Bračun, D.: Visual inspection system for anomaly detection on KTL coatings using variational autoencoders. Procedia CIRP 93, 1558–1563 (2020)
9. MacQueen, J.: Some methods for classification and analysis of multivariate observations. In: Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Oakland, CA, USA, vol. 1, pp. 281–297 (1967)
10. Ester, M., Kriegel, H.P., Sander, J., et al.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proceedings of KDD-96, pp. 226–231 (1996)
11. Hamilton, W.L., Ying, R., Leskovec, J.: Inductive representation learning on large graphs. Adv. Neural Inf. Process. Syst. 30, 1024–1034 (2017)
12. Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. ICLR (Poster) (2016)
13. Battaglia, P.W., Pascanu, R., Lai, M., et al.: Interaction networks for learning about objects, relations and physics. In: Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS'16) 29, 4509–4517 (2016)
14. Fout, A.M.: Protein interface prediction using graph convolutional networks. Colorado State University (2017)
15. Hamaguchi, T., Oiwa, H., Shimbo, M., et al.: Knowledge transfer for out-of-knowledge-base entities: a graph neural network approach. In: 26th International Joint Conference on Artificial Intelligence, pp. 1802–1808 (2017)
16. Dai, H., Khalil, E.B., Zhang, Y., et al.: Learning combinatorial optimization algorithms over graphs. Adv. Neural Inf. Process. Syst. 30, 6351–6361 (2017)
17. LeCun, Y., Bottou, L., Bengio, Y., et al.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
18. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
19. Bingbing, X., Keting, C., Junjie, H., et al.: Overview of graph convolution neural networks. J. Comput. Sci. 5 (2020)
20. Bruna, J., Zaremba, W., Szlam, A., et al.: Spectral networks and locally connected networks on graphs. In: International Conference on Learning Representations (ICLR) (2014)
21. Defferrard, M., Bresson, X., Vandergheynst, P.: Convolutional neural networks on graphs with fast localized spectral filtering. In: Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS'16) 29, 3844–3852 (2016)
22. Li, R., Wang, S., Zhu, F., et al.: Adaptive graph convolutional neural networks. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32 (2018)
23. Atwood, J., Towsley, D.: Diffusion-convolutional neural networks. Adv. Neural Inf. Process. Syst. 29, 1993–2001 (2016)
24. Zhuang, C., Ma, Q.: Dual graph convolutional networks for graph-based semi-supervised classification. In: Proceedings of the World Wide Web Conference 2018, pp. 499–508 (2018)
25. Monti, F., Boscaini, D., Masci, J., et al.: Geometric deep learning on graphs and manifolds using mixture model CNNs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5115–5124 (2017)
Efficient Data Transfer Based on Unsupervised Data Grading

Liuqi Zhao1(B), Zheng Wang2, Yongwen Gong2, Zhenlin Huang1, Xing Wen1, Haitao Zeng1, and Ning Wang1

1 China Southern Grid Extra High Voltage Power Transmission Company, Guangzhou, Guangdong, China
[email protected]
2 Beijing Smartchip Semiconductor Technology Company Limited, Beijing, China
Abstract. To address the low efficiency of data transmission between nodes in distributed scenarios, a data grading model is designed based on a graph neural network, and a level-adaptive rate selection system is designed that makes full use of the hierarchical management of power data, improving the data transmission efficiency and reliability of each computing node in the transmission system. First, the data volume and maximum transmission speed of each node in the network are obtained, and the data grading model is trained on a small amount of labeled data. Because of the domain difference between the massive data and the labeled data, the performance of the pre-trained model degrades in the target scenario. To solve this problem, this paper constructs a graph neural network to extract the deep features of the data and introduces attribute features on top of the original deep features, obtaining a multi-dimensional feature representation of the data and reducing the impact of dataset differences on model performance. After the importance level of the data is computed, an adaptive transmission rate selection algorithm based on these importance levels prioritizes the transmission of important data and keeps the transmission rate of same-level data consistent. This improves the data transmission efficiency and reliability of each computing node in the transmission system.

Keywords: Deep Learning · Graph Neural Network
1 Introduction

With the growing demand for big data, cloud computing, and the Internet of Things, machine learning, and deep learning in particular, can perform various kinds of data analysis to assist traditional manual decision-making. A converter station contains a large number of sensors that generate massive data recording the operating status of the equipment at all times. In the intelligent transformation of the power industry, the transmission system has undergone digital transformation in network monitoring, intelligent substations, intelligent meters, and so on. The scale and variety of power data have grown rapidly, and the huge power grid produces a large amount of data. These structured and unstructured big data contain
useful information for the safe operation of the converter station, which needs to be analyzed and processed. At present, power grid automation systems, power equipment detection, relay protection, and other devices play an important role in the daily operation of the power system. By comparison, the detection system for the operating status of power equipment is still imperfect. Many difficulties stem from high voltage and strong electromagnetic fields, so the acquisition of many key parameters of power equipment is constrained by the measured object, environment, temperature, weather, and measurement methods; system safety, weak signals, the transmission process, and other complex environmental and equipment factors must also be considered. In addition, the operating parameters and fault information of some power equipment are difficult to convert directly into electrical signals from sensor feedback, and some can only be obtained by manual inspection. Video monitoring systems have now been added to power grid construction nationwide and play an important role in detecting the operating status of power equipment.

The traditional centralized monitoring system stores a large amount of system data, including voltages, currents, browsing records, equipment alarms, and early-warning information at all levels. Its storage and processing of data fall into three main categories. The first is real-time data, mainly switching values and analog values, which the system stores short-term for at least one month under rolling data management. The second is important data, including logs, browsing records, reports, cases, alarms, early warnings, and processing records; the system stores such data permanently and applies life-cycle management to it. The third is data collected by junction equipment, including high-frequency data collected from the external power grid; the system stores such data for a short period, generally no less than 7 days, again under rolling data management. As the number and types of devices covered by the monitoring system grow, the volume of monitoring data becomes larger and larger, and the traditional centralized storage mode can no longer meet the storage requirements of monitoring systems in the big data era. Its main disadvantage is that monitoring data cannot all be stored forever: after a certain period, data is automatically overwritten, which hinders the analysis of fault and alarm records. Another obvious disadvantage is very slow query speed, because the data collected by the traditional monitoring system is stored on the server; the phenomenon is even more obvious when multiple terminals access the system's real-time analog values at the same time, and slow queries hinder on-site personnel. Given these problems of the traditional centralized monitoring system, the concept of big data storage has received growing attention, and big data storage technology has developed rapidly. The big data platform has three storage systems: two store business data and real-time data, and one is big data storage.
These data are closely related to the status of the electrical equipment; when real-time analysis and processing is needed, the system retrieves the stored data for decision-making.
Big data storage adopts distributed storage technology, so data security is more effectively guaranteed: because data is stored in a distributed fashion, a storage device failure affects only part of the data on the related devices, not the data stored elsewhere [1]. The centralized monitoring system stores a large amount of monitoring data and transmits it to various systems. To keep monitoring data safe, and to prevent it from being stolen during transmission and storage or lost or leaked by internal staff, the centralized monitoring system takes the following measures: (1) The operating system and database must be genuine software; the security facilities they provide are used to strengthen checks on logged-in users, control the access rights of on-site users, and change passwords regularly, preventing data theft and leakage caused by password cracking. (2) The maintenance data of the centralized monitoring system can be actively encrypted by multiple methods to prevent theft and disclosure. (3) Monitoring and maintenance data is stored in a distributed way, with different terminals and data stored differently; the important central system is given a separate disk array for redundant, fault-tolerant storage, ensuring the integrity of system information and effectively preventing the loss of monitoring and maintenance data. (4) Research and development of data encryption technology is strengthened. At present, big data storage is generally realized with virtual mass storage technology, and data is transmitted as streams over optical fiber, wireless links, and so on; the SSL (Secure Sockets Layer) protocol can therefore be used to encrypt uploads and downloads of data streams, so that data is uploaded only after encryption and used only after decryption, ensuring the safe storage and use of big data [2].

In the big data context, challenges arise not only for safe storage: data computation can no longer be completed by a single machine. Distributed deep learning technology is needed to partition data and distribute it to different computing nodes, use the parallel optimization capability of each machine to perform local computation, and finally aggregate the local models into a globally optimal model through distributed communication. Distributed machine learning systems use the ideas of distributed systems to accommodate larger machine learning models and larger amounts of data, improving machine learning based artificial intelligence services. Edge-oriented distributed machine learning frameworks are developed from the parameter server framework [3]. Federated learning, for example, also has parameter servers and worker nodes, and its training process follows the general parameter-server procedure: the parameter server sends the latest parameters to the worker nodes, each worker node trains the received model parameters on its local data, and the parameter server then collects the worker nodes' models and aggregates them.
However, distributed machine learning frameworks designed for data centers, thanks to the stable operating environment a data center provides, need not be specially designed for the network conditions of the worker nodes, nor do
they need to consider large differences in worker-node performance. In the edge environment, however, most devices acting as worker nodes have relatively poor network conditions [5] and lower computing performance [6, 7]. Federated learning therefore modifies the overall framework to suit the characteristics of edge devices. To cope with network fluctuation on edge devices, federated learning selects only some nodes for training in each round; to cope with their weak computing performance, the time for the selected nodes to complete local training can be balanced by changing the number of local training iterations. Federated learning is the first distributed machine learning framework customized for edge environments. It identifies three main challenges for distributed machine learning at the edge: first, data on edge devices is not independent and identically distributed (non-IID); second, the number of training samples differs across edge devices; third, the network communication resources of the participating edge devices are very scarce. For these three challenges, federated learning proposes the following adjustments. For non-IID training data, it increases the local computing load of edge devices: raising the number of local epochs over the training samples and reducing the batch size lessens the harmful impact of non-IID training data on the model's convergence speed [4, 8]. For the imbalance in sample counts, it takes the number of training samples on each device as a weight in the model aggregation stage and aggregates models through a weighted-average scheme, as sketched below. For the scarce communication resources of edge devices, it relieves network pressure by limiting the edge devices trained in each round. Although distributed learning effectively reduces storage and computing pressure compared with centralized learning, it is more challenging for data transmission, mainly because distributed deep learning faces heavy communication requirements and limited bandwidth, which greatly limit its performance and scalability. After massive data has been collected, guaranteeing safe, lossless, low-delay transmission is a technical problem. Because the data rate supported by the communication channel between nodes is limited, storing data simply by collection time easily causes uneven data distribution among nodes at all levels; since high-level data is processed with priority, training across local models becomes unsynchronized and the overall computing efficiency of the distributed network drops. The key to distributed deep learning is therefore to make full use of the computing resources of each node to enhance the synchronization of model training, reducing waiting time and training high-precision models more efficiently.
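The weighted-average aggregation mentioned above can be sketched as follows (a minimal illustration of the FedAvg-style scheme, assuming each local model is represented as a list of NumPy weight arrays):

```python
import numpy as np

def federated_weighted_average(local_models, sample_counts):
    """Aggregate local model parameters, weighting each edge device by its
    number of training samples (the weighted-average scheme described above)."""
    total = sum(sample_counts)
    n_layers = len(local_models[0])
    return [sum(n / total * m[k] for m, n in zip(local_models, sample_counts))
            for k in range(n_layers)]

# Toy example: three edge nodes, each model is a list of two weight arrays.
models = [[np.full((2, 2), v), np.full(3, v)] for v in (1.0, 2.0, 3.0)]
counts = [100, 300, 600]  # unbalanced sample counts per node
global_model = federated_weighted_average(models, counts)
print(global_model[1])    # -> weighted mean 2.5 in every entry
```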
To solve the above problems, this paper proposes an efficient data transmission system based on data grading, using an unsupervised clustering-and-grading algorithm built on deep learning technology. First, a graph neural network is constructed to extract the deep features of the data. Then, an unsupervised algorithm grades the data according to these deep features. The node communication
network is constructed, and the storage space of each computing node is adaptively divided according to the grading results. Finally, based on dynamic rate selection, transmission of high-level data is prioritized as the task target, and the transmission rate of each round is dynamically selected for data of different levels. Together, these algorithms and systems make full use of the computing resources of each node, enhance the synchronization of model training, reduce waiting time, and train high-precision models more efficiently.
2 Related Work

For data classification there are many relevant studies at home and abroad [9–12]. These studies mainly determine the value level of data from conditions such as access frequency and importance, and use storage devices of different classes according to that level, so as to improve system performance and save storage costs. For example, the hierarchical storage scheme proposed by Yang Dongju et al. [9] evaluates the hardware performance of each node in the cluster, divides nodes into three performance levels, grades the data by access frequency, and adjusts the storage location of data blocks of different levels, improving the cluster's data access efficiency; Yang Dongju et al. also decide the storage location of a replica in the cluster according to its access frequency, to reduce the number of replica scheduling operations [10]; Guo Gang et al. propose a hierarchical storage model based on data importance level [11], which computes the importance of data from factors such as its volume, time, and access pattern and then evaluates its potential value; the importance and potential value together determine the importance level of the data. This research on hierarchical storage determines the importance level of data from attribute characteristics such as access frequency and access time and stores the data at the corresponding tier, relieving storage pressure and improving system efficiency. Because data in the converter station is collected in many forms and different data carry different attribute characteristics, manually determining the importance level of data introduces large subjective differences. This paper therefore proposes a machine learning based data grading method: deep features of the data are extracted by the model, and the data is then distributed sensibly to edge data centers according to those features, reducing data transmission time during system workflow execution and improving system efficiency. To further reduce transmission time and improve efficiency, a new transmission rate selection scheme is also proposed to speed up data transmission.

Hierarchical data classification has long been a research hotspot in distributed architecture systems. Existing work falls mainly into two categories: clustering algorithms and evolutionary algorithms. On the clustering side, Zhao et al. propose a two-stage data placement strategy [16]: in the first stage, data is clustered according to its correlation and size and the initial data is placed in the corresponding data center; in the second stage, the data is redistributed according to changes in the data. Wang et al. first
propose a hierarchical data placement strategy based on data size and inter-data dependency [17], then analyze the association between data and tasks and adopt a multi-level task replication strategy to reduce data transmission and improve workflow execution efficiency; Li Xuejun et al., considering the security of hybrid clouds, put forward a matrix clustering strategy based on the degree of data-dependency destruction [13], which groups highly dependent data together and places each group in the same data center according to data dependency, reducing cross-center data transmission; Zhang Peng et al. propose a cloud-and-client service workflow architecture [8] in which datasets are clustered and partitioned according to the dependencies between datasets and tasks, frequently used datasets are placed on the same node, and dataset locations are adjusted dynamically during workflow execution, effectively reducing data movement and improving workflow efficiency. These studies place datasets in data centers according to their similarity or dependency, so that objects in the same data center are highly correlated, but they do not comprehensively consider data privacy, data center capacity limits, load balancing, or bandwidth differences between data centers. Moreover, clustering algorithms are sensitive: different cluster centers or clustering orders can make the final scheme differ significantly.

Evolutionary approaches mainly comprise genetic algorithms, particle swarm optimization algorithms, and particle swarm optimization algorithms hybridized with genetic algorithms. Cheng Huimin et al. propose a multi-objective optimized data placement strategy based on a genetic algorithm [4], taking data transmission time and load balance between data centers as the criteria of the placement scheme and seeking a doubly optimal placement; experiments show the method effectively solves the multi-objective data placement problem. Wang et al., combining mobile edge computing with the data center, propose an algorithm based on genetic and simulated annealing algorithms [15] for computation offloading and data caching, reducing task execution and data transmission delay; Li et al. treat multiple workflows as a whole and use particle swarm optimization to place datasets flexibly [14]. Other work combines the advantages of genetic and particle swarm optimization algorithms to propose data placement strategies suited to the hybrid cloud environment, improving workflow efficiency and addressing problems such as the declining late-stage optimization ability and premature convergence of evolutionary algorithms. In summary, current research on data grading and classification has produced many notable results, but it also has limitations: it neglects the combined consideration of bandwidth differences between data centers, capacity constraints, dataset size, and other factors.
Meanwhile, in current research, cluster-based placement by data correlation helps reduce data transmission time during workflow execution, but the result is not optimal, whereas data placement based on
traditional evolutionary algorithms can search for the optimal solution, but there are problems such as premature convergence and low search efficiency.
3 Technology

This paper designs an efficient data transmission system based on deep learning technology and an unsupervised clustering-and-grading algorithm. As shown in Fig. 1, the system mainly comprises three modules: data acquisition, data classification, and dynamic rate selection.
Fig. 1. Overview of the system
First, each node collects data separately and stores it locally; the local data is used to train the data classification model. A graph neural network is constructed from the attribute characteristics of the data, and the graph sampling-and-aggregation network learns the representation of each node from combinations of its neighboring nodes, obtaining deep features of the input data. Then, an unsupervised algorithm grades the data according to these deep features, avoiding the extra cost of labeling massive data. Second, according to the grading results, the storage space of each computing node is adaptively divided for efficient use of edge-side storage; each node shares its data storage information with the central node to jointly build the node communication network. Finally, based on dynamic rate selection, transmission of high-level data is prioritized as the task target, and the transmission rate of each round is selected dynamically for data of different levels. In this way the computing resources of each node can be fully used, the synchronization of model training can be enhanced, waiting time can be reduced, and high-precision models can be trained more efficiently.

3.1 Data Acquisition

The data acquisition module deploys distributed nodes in the converter station to monitor its massive power grid equipment at all times. Each
computing node acquires a large amount of data continuously. Transmitting all data to the central node for processing would significantly degrade system performance, so we must select important or core data more specifically and selectively, reducing the scale of data processing and speeding it up.

3.2 Data Classification

To implement the Data Security Law of the People's Republic of China, promulgated and implemented in 2021, the national cyberspace authority proposed that the country establish a data classification and hierarchical protection system. According to the impact and importance of data for national security, public interests, or the legitimate rights and interests of individuals and organizations, data is divided into general data, important data, and core data, with different protection measures for each level. The state focuses on protecting personal information and important data, and strictly protects core data. Departments and industries likewise manage the data of their own fields according to the national classification and grading requirements. Data classification and grading is a foundational task in data security: only with a clear understanding of the business ownership and importance of data can different strategies be adopted to protect and manage it. Classification can follow the three methods in the Basic Principles and Methods of Information Classification and Coding: line classification, area classification, and hybrid classification; an organization selects the appropriate method in light of its own data assets and security protection needs and forms a corresponding data classification table, providing a data resource basis for grading. Given the specific business characteristics here, the hybrid method is adopted. Following the business categories of operation management and internal management, and considering the nature and importance of the data, management needs, user needs, and other factors, the data is divided into primary categories: business data, engineering data, and trade data related to operation management; and personnel management data, financial management data, asset management data, legal management data, safety management data, quality data, official document management data, audit management data, comprehensive planning data, information technology data, and other data related to internal management; these are subdivided further as required. Multi-dimensional and line classification can be combined to classify data along three dimensions: project, research, and service. For each dimension, line classification divides data into three levels: large, medium, and small categories. Business departments can subdivide the data into subcategories by business need, expanding them according to the nature, function, technical means, and other aspects of the business data. On the principle that too few classification levels hinder grading while too many hinder management, data categories are generally subdivided into three levels.
According to Article 21 of the Data Security Law, on the basis of data classification we can focus on the affected objects along the
four dimensions of state, public interest, organization, and individual, combine the three data security impact factors (confidentiality, integrity, and availability) with the three degrees of impact (serious damage, general damage, minor damage), and follow the grading requirements in the Regulations on the Security Protection of Critical Information Infrastructure and the Data Security Law (data related to critical information infrastructure is managed in at least three levels; data related to national security, the lifeblood of the national economy, important aspects of people's livelihood, and major public interests is national core data subject to a stricter management system). By analyzing the different degrees of impact on the affected objects, the classified data is mapped to five levels: non-sensitive, internally sensitive, common business secret, core business secret, and core secret. This yields the data level classification directory and provides a scientific basis for data grading. In addition, data can be controlled hierarchically by role or application field. Permissions are managed by granting a permission set to each role, specifying which objects users with that role can access and which operations they can perform on them; different levels of privilege to the same dataset can also easily be granted to multiple groups. A clear mapping chain is therefore needed: permission → role → user group → user, as sketched below. Applying role-based fine-grained user classification to the power industry, data can be classified by roles and fields such as the production control area (control and non-control zones) and the management information area, data access models of different levels can be designed for different types of data, and the correctness of the data classification function can be verified.
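A minimal sketch of this mapping chain follows; the role names, groups, and level assignments are hypothetical, and only the five level names come from the text:

```python
# permission -> role -> user-group -> user, checked from the user side.
LEVELS = ["non-sensitive", "internally-sensitive", "common-business-secret",
          "core-business-secret", "core-secret"]

ROLE_MAX_LEVEL = {                       # permission set granted to each role
    "management-info-area": 1,
    "production-noncontrol-area": 2,
    "production-control-area": 4,
}
GROUP_ROLE = {"monitoring-ops": "production-control-area"}  # group -> role
USER_GROUP = {"alice": "monitoring-ops"}                    # user -> group

def can_access(user, data_level):
    """A user may access data whose level index does not exceed the
    maximum level of the role inherited through their user group."""
    role = GROUP_ROLE[USER_GROUP[user]]
    return LEVELS.index(data_level) <= ROLE_MAX_LEVEL[role]

print(can_access("alice", "core-secret"))  # True for the control-area role
```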
In this paper, machine learning is used to perform the grading. Traditional supervised learning can achieve good results, but against the background of massive data, annotation consumes considerable manpower and material resources; moreover, unlike ordinary labeling tasks, data grading is highly subjective, so accurate labels are hard to obtain. This paper therefore grades the data with an unsupervised clustering algorithm. Unsupervised learning requires no supervision signal (the true value of the predicted quantity), but the absence of ground-truth feedback degrades model performance. To compensate, a graph neural network is used to extract deep features of the data for the subsequent unsupervised classifier. First, after a node obtains device status data, character recognition is used to extract common attributes such as time, device name, and running status. Second, a graph is constructed from the collected data and its attribute information: the input data items are the nodes, and attribute similarity defines the edges. Clustering over this graph explores the similarity of the data in attribute space, achieves node-level aggregation, and yields a deep feature representation. Then an unsupervised algorithm takes the deep features as input and outputs predicted data-level labels. Finally, the storage space of each computing node is partitioned by level, so that every level of data has a corresponding storage area, and the partition ratio of each node is adjusted dynamically according to the amount of data at each level. The sketch below illustrates this pipeline.
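As a rough illustration of the pipeline (not the authors' implementation), the following sketch approximates the graph network's node-level aggregation with a single round of neighborhood feature averaging over a k-nearest-neighbor similarity graph, grades the aggregated features with k-means, and derives per-level storage ratios from the resulting data volumes. The feature matrix, neighbor count, and five-level split are all assumptions, and a real system would still have to map the unordered clusters onto the ordered sensitivity levels.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in for extracted attribute vectors (time, device, running status, ...)
X = rng.normal(size=(500, 8))
Xs = StandardScaler().fit_transform(X)

# Attribute-similarity graph: nodes are data records, edges connect the
# k most similar records in attribute space.
A = kneighbors_graph(Xs, n_neighbors=10, mode="connectivity",
                     include_self=True)

# One round of neighborhood averaging as a stand-in for GNN message passing:
# each record's "deep feature" mixes its own attributes with its neighbors'.
deg = np.asarray(A.sum(axis=1)).reshape(-1, 1)  # node degrees
H = (A @ Xs) / deg                              # neighborhood mean features

# Unsupervised grading: cluster the aggregated features into five levels.
levels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(H)

# Adaptive storage partitioning: each level's share of a node's storage
# follows its share of the data volume (with a small floor per level).
counts = np.bincount(levels, minlength=5)
ratios = np.maximum(counts / counts.sum(), 0.05)
ratios /= ratios.sum()
print("storage split per level:", np.round(ratios, 3))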
3.3 Dynamic Rate Selection
The dynamic rate selection module dynamically selects the current round's transmission rate for each level of data, with priority transmission of high-level data as the task target. The rate selection steps are as follows (a sketch of the steps follows the list):
1. Detect the network environment of the communication network and obtain the maximum bandwidth of the data transmission channel.
2. From the amount of data to be transmitted and the bandwidth, determine the transmission time required in the ideal state.
3. Through the communication network, obtain the status of the nodes communicating with the current node.
4. Compare the transmission times of same-level data between nodes. Let the current node be N0 and a neighbor node be N1; N0 holds three levels of data with required transmission times t01, t02, and t03 (and correspondingly t11, t12, t13 for N1). If t11 − Δt ≤ t01 ≤ t11 + Δt holds, where Δt is the system's acceptable delay, the current data transmission is judged normal; otherwise it is judged abnormal.
5. When a transmission is abnormal, the system stops sending new data to the overflowing node, adaptively increases the transmission rate of data at the overflowing level, and slows the transmission rate of data at the other levels.
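A minimal sketch of steps 2–5 follows, under assumptions the paper does not state: a fixed Δt, multiplicative rate-adjustment factors of 1.5 and 0.75, and a final rescale so the total rate respects the bandwidth measured in step 1.

from dataclasses import dataclass
from typing import Dict

DELTA_T = 2.0  # acceptable system delay Δt, in seconds (illustrative value)

@dataclass
class Node:
    pending: Dict[int, float]   # level -> data volume left to send (MB)
    rate: Dict[int, float]      # level -> current send rate (MB/s)
    accept_new: bool = True     # whether new data may be routed here

def times(node: Node) -> Dict[int, float]:
    # Step 2: ideal per-level transmission time under the current rates.
    return {lvl: vol / node.rate[lvl] for lvl, vol in node.pending.items()}

def rebalance(n0: Node, n1: Node, max_bw: float) -> None:
    """Steps 3-5: compare same-level times of n0 against neighbor n1.
    A level whose time drifts outside [t1 - Δt, t1 + Δt] is treated as
    overflowing: stop routing new data to n0, raise that level's rate,
    slow the others, then rescale to stay within max_bw."""
    t0, t1 = times(n0), times(n1)
    overflow = {lvl for lvl in t0
                if lvl in t1 and abs(t0[lvl] - t1[lvl]) > DELTA_T}
    if not overflow:
        return                          # step 4: transmission is normal
    n0.accept_new = False               # step 5: no new data to this node
    for lvl in n0.rate:
        n0.rate[lvl] *= 1.5 if lvl in overflow else 0.75
    total = sum(n0.rate.values())
    if total > max_bw:                  # respect the measured channel limit
        for lvl in n0.rate:
            n0.rate[lvl] *= max_bw / total

# Example: level 1 (highest sensitivity) lags far behind the neighbor.
n0 = Node({1: 400.0, 2: 200.0, 3: 100.0}, {1: 10.0, 2: 10.0, 3: 10.0})
n1 = Node({1: 100.0, 2: 200.0, 3: 100.0}, {1: 10.0, 2: 10.0, 3: 10.0})
rebalance(n0, n1, max_bw=30.0)
print(n0.rate, n0.accept_new)  # level 1 boosted, others slowed, no new data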
4 Conclusion
This paper proposes an efficient data transmission system based on data grading, built on deep learning and an unsupervised clustering algorithm. First, a graph neural network is constructed to extract deep features of the data. Then an unsupervised algorithm grades the data according to these deep features, and the storage space of each computing node is adaptively partitioned according to the grading results. Finally, dynamic rate selection gives priority to the transmission of high-level data and selects the current round's transmission rate for each level of data. The resulting algorithms and system can make full use of the computing resources of each node, improve the synchronization of model training, reduce waiting time, and train high-precision models more efficiently.
References
1. Fengqi, W.: Discussion on data storage security countermeasures in the age of big data. Scientist 9, 32–33 (2015)
2. Dongming, F., Bingjian, Z., Mengtao, H.: Research on emergency management of intelligent monitoring electric service failure based on big data. Railw. Commun. Signal Eng. Technol. (4), 32–42 (2020)
3. Chuanxin, Z., Yi, S., Degang, W., et al.: A review of federated learning research. J. Netw. Inf. Secur., 1–16 (2021)
4. Huimin, C., Xuejun, L., Yang, W., et al.: Scientific workflow data layout strategy based on multi-objective optimization in cloud environment. Comput. Appl. Softw. 34(3), 1–6 (2017)
5. Xinyuan, Q., Zecong, Y., Yilong, C.: A review of the research on communication cost of federated learning. Comput. Appl., 1–11 (2021)
6. McMahan, H.B.: Advances and open problems in federated learning. Found. Trends Mach. Learn. 14(1), 1–10. Now Publishers, Inc. (2021)
7. Lim, W.Y.B., Luong, N.C., Hoang, D.T., et al.: Federated learning in mobile edge networks: a comprehensive survey. IEEE Commun. Surv. Tutor. 22(3), 2031–2063 (2020)
8. Peng, Z., Guilin, W., Xuehui, X.: Data layout method suitable for workflow in cloud computing environment. Comput. Res. Dev. 50(3), 636–647 (2013)
9. Dongju, Y., Qing, L., Chongbin, D.: Hierarchical storage scheduling mechanism in HDFS heterogeneous clusters. Small Microcomput. Syst. 38(1), 29–34 (2017)
10. Dongju, Y., Qing, Y.: Hierarchical storage scheduling policy for replicas in storage. Comput. Sci. 44(4), 85–89 (2017)
11. Guo Gang, Y., Liang, J.L., Changtian, Y., Lutong, Y.: Data migration model under memory cloud hierarchical storage architecture. Comput. Appl. 35(12), 3392–3397 (2015)
12. Ran, D., Qiulan, H., Yaodong, C., Gang, C.: Design and research on diversity mechanism of block-based hierarchical storage system. Comput. Eng. 42(12), 50–59 (2016)
13. Li Xuejun, W., Yang, L.X., Huimin, C., Erzhou, Z., Yun, Y.: Workflow data layout method for data center in hybrid cloud. J. Softw. 27(7), 1861–1875 (2016)
14. Li, X., Zhang, L., Wu, Y., et al.: A novel workflow-level data placement strategy for data-sharing scientific cloud workflows. IEEE Trans. Serv. Comput. 12(3), 370–383 (2019)
15. Wang, H., Li, R., Fan, L., et al.: Joint computation offloading and data caching with delay optimization in mobile-edge computing systems. In: 2017 9th International Conference on Wireless Communications and Signal Processing (WCSP). IEEE (2017)
16. Zhao, Q., Xiong, C., Zhao, X., et al.: A data placement strategy for data-intensive scientific workflows in cloud. In: IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, pp. 928–934 (2015)
17. Wang, D., Zhang, J., Fang, D., Luo, J.: Data placement and task scheduling optimization for data-intensive scientific workflows. In: International Conference on Advanced Cloud and Big Data, pp. 77–84 (2014)
Author Index
A
An, Guanghui 339
B
Bi, Lisong 236
Bi, Zekai 207
C
Chen, Fei 10, 20
Chen, Jiashun 207
Chen, Qiaoling 358
Chen, Xunwang 198
Chen, Zhuo 309
Cheng, Gong 330
D
Deng, Shaobiao 469
Ding, Chunkai 160, 170
Ding, Jianwu 409
Ding, Liyun 245
Ding, Yiming 409
F
Fan, Yuxuan 321
Fang, Zhiwen 369
G
Geng, Xiaoqing 469
Gong, Feng 96
Gong, Ruibang 330, 339
Gong, Yongwen 552, 565
Gu, Jiangbo 339
Gu, Weihao 56
Guo, Cen 309
Guo, Linchuan 56
H
Han, Shiqi 409
He, Sen 264, 272
He, Xu 540
Hemalatha, K. L. 309
Hu, Changlun 520
Hu, Jian 264, 272
Huang, Liang 29
Huang, Shuo 409
Huang, Zhenlin 552, 565
Huang, Zhiwei 217
J
Jiang, Dongfeng 530
Jiang, Jiatong 207
Jiang, Rui 358
Jiang, Yuan 358
Jiao, Kang 369
Jin, Zhongwei 151
K
Karunakara, B. Rai 236
L
Lang, Yuhang 479
Lei, Kewei 116
Li, Cheng 1
Li, Haobin 198
Li, Jialu 281
Li, Jun 459
Li, Ming 489
Li, Mingchun 409
Li, Ping 37
Li, Xiaohu 399
Liang, Longfei 520
Ling, Letao 369
Liu, Hengjie 520
Liu, Ji 56
Liu, Mengwei 96
Liu, Xiaobo 509
Liu, Yi 509
Lu, Lan 1
Lu, Qian 300
Lu, Zhigang 189
Lu, Zhimin 56
Lv, Ming 309
P
Pareek, Piyush Kumar 129
Peng, Lili 498
Peng, Yundi 498
Q
Qi, Congyue 37, 489
Qi, Xiaoyan 520
Qi, Yongfeng 37
Qu, Pingbo 420
R
Ramachandra, A C 29
Rashmi, P 245
Rosli, Anita Binti 227
Roy, B. P. Upendra 321
Ruan, Haiyan 87
S
Shafi, Pushpendra 430
Shi, Lei 378
Sridhar, V. 96
Sun, Shixue 440
Sun, Shujie 378
T
Tan, Yong 107
Tang, Yimin 264, 272
Tian, Lei 116
W
Wang, Bin 255
Wang, Feiyan 440
Wang, Juan 450
Wang, Meng 430
Wang, Miao 236
Wang, Ning 552, 565
Wang, Shutan 291
Wang, Yuchen 330, 339
Wang, Zheng 552, 565
Wang, Zihao 459
Wei, Jingying 107
Wei, Jintao 207
Wen, Jinghao 207
Wen, Xing 552, 565
Wu, Longteng 198
X
Xian, Ning 291
Xiaohu, Fan 540
Xu, Haiyuan 369
Xu, Qifeng 66
Xu, Zhanqiang 189
Xuan, Yang 236
Y
Yan, Ruyu 281
Yan, Xia 227
Yang, Lun 76
Yang, Qiongqiong 349
Yang, Yabo 140
Yin, Yina 29
Yuan, Lijun 37, 489
Yuan, Zhanzequn 236
Z
Zeng, Haitao 552, 565
Zhai, Hongrong 198
Zhang, Chonghao 46
Zhang, Dandan 300
Zhang, E. 129
Zhang, Hong 189
Zhang, Peng 489
Zhang, Qiuping 321, 378, 430
Zhang, Xiaolin 151
Zhang, Xiujie 151
Zhang, Xu 129
Zhang, Yanxia 281
Zhang, Yun 388
Zhang, Zheng 264, 272
Zhang, Zhihua 489
Zhao, Lei 520
Zhao, Liuqi 565
Zhao, Ruifeng 189, 198
Zhao, Yu 409
Zhou, Cheng 1
Zhou, Hongwei 37, 489
Zhou, Jiahui 96
Zhou, Lu 180
Zhou, Tie 56
Zhou, Xiaofeng 189
Zhou, Xiaojie 236